1. Rittenberg BSP, Holland CW, Barnhart GE, Gaudreau SM, Neyedli HF. Trust with increasing and decreasing reliability. Human Factors 2024:187208241228636. DOI: 10.1177/00187208241228636. PMID: 38445652.
Abstract
OBJECTIVE The primary purpose was to determine how trust changes over time when automation reliability increases or decreases. A secondary purpose was to determine how task-specific self-confidence is associated with trust and reliability level.
BACKGROUND Both overtrust and undertrust can be detrimental to system performance; therefore, the temporal dynamics of trust under changing reliability need to be explored.
METHOD Two experiments used a dominant-color identification task in which automation provided a recommendation to users, with the reliability of the recommendation changing over 300 trials. In Experiment 1, two groups of participants interacted with the system: one group started with a 50% reliable system that increased to 100%, while the other used a system that decreased from 100% to 50%. Experiment 2 included a group whose automation reliability increased from 70% to 100%.
RESULTS Trust was initially high in the decreasing-reliability group and then declined as reliability decreased; however, trust also declined in the group whose reliability increased from 50%. Furthermore, when user self-confidence increased, automation reliability had a greater influence on trust. In Experiment 2, the group whose reliability increased from 70% showed increased trust in the system.
CONCLUSION Trust does not always track the reliability of automated systems; in particular, it is difficult for trust to recover once the user has interacted with a low-reliability system.
APPLICATIONS This study provides initial evidence on the dynamics of trust in automation that improves over time, suggesting that users should only start interacting with automation once it is sufficiently reliable.
2. Ling S, Zhang Y, Du N. More Is Not Always Better: Impacts of AI-Generated Confidence and Explanations in Human-Automation Interaction. Human Factors 2024:187208241234810. DOI: 10.1177/00187208241234810. PMID: 38437598.
Abstract
OBJECTIVE The study aimed to enhance transparency in autonomous systems by automatically generating and visualizing confidence and explanations, and to assess their impacts on performance, trust, preference, and eye-tracking behaviors in human-automation interaction.
BACKGROUND System transparency is vital to maintaining appropriate levels of trust and mission success. Previous studies presented mixed results regarding the impact of displaying likelihood information and explanations, and often relied on hand-created information, limiting scalability and failing to address real-world dynamics.
METHOD We conducted a dual-task experiment involving 42 university students who operated a simulated surveillance testbed with assistance from intelligent detectors. The study used a 2 (confidence visualization: yes vs. no) × 3 (visual explanations: none, bounding boxes, bounding boxes and keypoints) mixed design. Task performance, human trust, preference for the intelligent detectors, and eye-tracking behaviors were evaluated.
RESULTS Visual explanations using bounding boxes and keypoints improved detection task performance when confidence was not displayed. Visual explanations also enhanced trust in and preference for the intelligent detector, regardless of the explanation type. Confidence visualization did not influence trust in, or preference for, the intelligent detector. Moreover, both types of visual information slowed saccade velocities.
CONCLUSION The study demonstrated that visual explanations can improve performance, trust, and preference in human-automation interaction without confidence visualization, partly by changing search strategies. However, excessive information might cause adverse effects.
APPLICATION These findings provide guidance for the design of transparent automation, emphasizing the importance of context-appropriate and user-centered explanations to foster effective human-machine collaboration.
Affiliation(s)
- Na Du, University of Pittsburgh, USA
3. Knocton S, Hunter A, Connors W, Dithurbide L, Neyedli HF. The Effect of Informing Participants of the Response Bias of an Automated Target Recognition System on Trust and Reliance Behavior. Human Factors 2023;65:189-199. DOI: 10.1177/00187208211021711. PMID: 34078167. PMCID: PMC9969489.
Abstract
OBJECTIVE To determine how changing, and informing a user of, the false alarm (FA) rate of an automated target recognition (ATR) system affects the user's trust in and reliance on the system and their performance during an underwater mine detection task.
BACKGROUND ATR systems are designed to operate with high sensitivity and a liberal decision criterion to reduce the risk of the system missing a target. A high number of FAs, however, may lead to a decrease in operator trust and reliance.
METHODS Participants viewed sonar images and were asked to identify mines in the images. They performed the task without ATR and with ATR at a lower and a higher FA rate. The participants were split into two groups: one informed and one uninformed of the changed FA rate. Trust and/or confidence in detecting mines was measured after each block.
RESULTS When participants were not informed of the FA rate, the FA rate had a significant effect on their response bias. Participants had greater trust in the system and a more consistent response bias when informed of the FA rate. Sensitivity and confidence were not influenced by disclosure of the FA rate but were significantly worse in the high FA rate condition than when performing without the ATR.
CONCLUSION AND APPLICATION Informing a user of the FA rate of automation may positively influence their level of trust in and reliance on the aid.
Affiliation(s)
- Aren Hunter, Defence Research and Development Canada, Dartmouth, Nova Scotia, Canada
- Warren Connors, Defence Research and Development Canada, Dartmouth, Nova Scotia, Canada
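The sensitivity and response bias measures reported in this study come from signal detection theory. As a minimal illustrative sketch (not the authors' analysis code), sensitivity d' and the decision criterion c can be computed from hit and false alarm counts as follows; the counts shown are hypothetical.

from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    # Sensitivity (d') and response bias (criterion c) from signal detection theory.
    # A log-linear correction keeps rates of 0 or 1 from producing infinite z-scores.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa              # higher = better discrimination of mines
    criterion = -0.5 * (z_hit + z_fa)   # negative = liberal (mine-prone) responding
    return d_prime, criterion

# Hypothetical counts for one block of the mine detection task
print(sdt_measures(hits=18, misses=2, false_alarms=8, correct_rejections=12))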
4. Caldwell S, Sweetser P, O’Donnell N, Knight MJ, Aitchison M, Gedeon T, Johnson D, Brereton M, Gallagher M, Conroy D. An Agile New Research Framework for Hybrid Human-AI Teaming: Trust, Transparency, and Transferability. ACM Transactions on Interactive Intelligent Systems 2022. DOI: 10.1145/3514257.
Abstract
We propose a new research framework by which the nascent discipline of human-AI teaming can be explored within experimental environments in preparation for transferal to real-world contexts. We examine the existing literature and unanswered research questions through the lens of an Agile approach to construct our proposed framework. Our framework aims to provide a structure for understanding the macro features of this research landscape, supporting holistic research into the acceptability of human-AI teaming to human team members and the affordances of AI team members. The framework has the potential to enhance decision-making and performance of hybrid human-AI teams. Further, our framework proposes the application of Agile methodology for research management and knowledge discovery. We propose a transferability pathway in which hybrid teaming is initially tested in a safe environment, such as a real-time strategy video game, and the lessons learned are then transferred to real-world situations.
Affiliation(s)
- Tom Gedeon, Australian National University, Australia
5. Rosenthal S, Vichivanives P, Carter E. The Impact of Route Descriptions on Human Expectations for Robot Navigation. ACM Transactions on Human-Robot Interaction 2022. DOI: 10.1145/3526104.
Abstract
As robots are deployed to work in our environments, we must build appropriate expectations of their behavior so that we can trust them to perform their jobs autonomously while we attend to other tasks. Many types of explanations for robot behavior have been proposed, but they have not been fully analyzed for their impact on aligning expectations of robot paths for navigation. In this work, we evaluate several types of robot navigation explanations to understand their impact on the ability of humans to anticipate a robot's paths. We performed an experiment in which we gave participants an explanation of a robot path and then measured (i) their ability to predict that path, (ii) how they allocated attention between the robot navigating the path and their own dot-tracking task, and (iii) their subjective ratings of the robot's predictability and trustworthiness. Our results show that explanations significantly affect people's ability to predict robot paths, and that explanations that are concise and do not require readers to perform mental transformations are most effective at reducing attention to the robot.
6. Yang XJ, Schemanske C, Searle C. Toward Quantifying Trust Dynamics: How People Adjust Their Trust After Moment-to-Moment Interaction With Automation. Human Factors 2021:187208211034716. DOI: 10.1177/00187208211034716. PMID: 34459266. PMCID: PMC10374998.
Abstract
OBJECTIVE We examine how human operators adjust their trust in automation as a result of their moment-to-moment interaction with automation.
BACKGROUND Most existing studies measured trust by administering questionnaires at the end of an experiment. Only a limited number of studies viewed trust as a dynamic variable that can strengthen or decay over time.
METHOD Seventy-five participants took part in an aided memory recognition task. Participants first viewed a series of images and later performed 40 recognition trials in which they identified a target image presented alongside a distractor. In each trial, participants performed the initial recognition by themselves, received a recommendation from an automated decision aid, and then performed the final recognition. After each trial, participants reported their trust on a visual analog scale.
RESULTS Outcome bias and the contrast effect significantly influence how human operators adjust their trust. An automation failure leads to a larger trust decrement if the final outcome is undesirable, and a marginally larger trust decrement if the human operator succeeds at the task on their own. An automation success engenders a greater trust increment if the human operator fails the task. Additionally, automation failures have a larger effect on trust adjustment than automation successes.
CONCLUSION Human operators adjust their trust in automation as a result of their moment-to-moment interaction with it. Their trust adjustments are significantly influenced by decision-making heuristics and biases.
APPLICATION Understanding the trust adjustment process enables accurate prediction of operators' moment-to-moment trust in automation and informs the design of trust-aware adaptive automation.
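To make the reported asymmetry concrete (failures lowering trust more than successes raise it), the following toy update rule illustrates the idea; the 0-1 trust scale and gain values are assumptions for illustration, not parameters estimated in the study.

def update_trust(trust, automation_correct, gain_success=0.05, gain_failure=0.15):
    # Illustrative asymmetric update on a 0-1 trust scale: failures are
    # assumed to carry more weight than successes, as reported above.
    if automation_correct:
        trust += gain_success * (1.0 - trust)   # move toward full trust
    else:
        trust -= gain_failure * trust           # move toward no trust
    return min(max(trust, 0.0), 1.0)

trust = 0.5
for correct in [True, True, False, True, False, False, True]:  # hypothetical trial outcomes
    trust = update_trust(trust, correct)
print(round(trust, 3))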
7. Guo Y, Shi C, Yang XJ. Reverse Psychology in Trust-Aware Human-Robot Interaction. IEEE Robotics and Automation Letters 2021. DOI: 10.1109/lra.2021.3067626.
8. Luo R, Weng Y, Wang Y, Jayakumar P, Brudnak MJ, Paul V, Desaraju VR, Stein JL, Ersal T, Yang XJ. A workload adaptive haptic shared control scheme for semi-autonomous driving. Accident Analysis and Prevention 2021;152:105968. DOI: 10.1016/j.aap.2020.105968. PMID: 33578217.
Abstract
Haptic shared control is used to manage the allocation of control authority between a human and an autonomous agent in semi-autonomous driving. Existing haptic shared control schemes, however, do not fully take the human agent into consideration. To fill this research gap, this study presents a haptic shared control scheme that adapts in real time to a human operator's workload, eyes-on-road behavior, and input torque. We conducted human-in-the-loop experiments with 24 participants. In the experiment, a human operator and an autonomy module for navigation shared control of a simulated notional High Mobility Multipurpose Wheeled Vehicle (HMMWV) at a fixed speed, while the human operator also performed a target detection task. The autonomy was either adaptive or non-adaptive to the above-mentioned human factors. Results indicate that the adaptive haptic shared control scheme led to significantly lower workload, higher trust in the autonomy, better driving task performance, and smaller control effort.
Affiliation(s)
- Ruikun Luo, Robotics Institute, University of Michigan, Ann Arbor, MI, United States
- Yifan Weng, Mechanical Engineering, University of Michigan, Ann Arbor, MI, United States
- Yifan Wang, Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI, United States
- Mark J Brudnak, U.S. Army Ground Vehicles System Center, Warren, MI, United States
- Victor Paul, U.S. Army Ground Vehicles System Center, Warren, MI, United States
- Jeffrey L Stein, Mechanical Engineering, University of Michigan, Ann Arbor, MI, United States
- Tulga Ersal, Mechanical Engineering, University of Michigan, Ann Arbor, MI, United States
- X Jessie Yang, Industrial and Operations Engineering, University of Michigan, Ann Arbor, MI, United States
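As a rough sketch of how control authority might be shared in a scheme like the one described above, the snippet below blends the driver's and the autonomy's steering torques with a weight that adapts to workload and eyes-on-road state. The weighting function and parameter values are assumptions for illustration, not the scheme described in the paper.

def blended_torque(human_torque, autonomy_torque, workload, eyes_on_road):
    # Illustrative authority allocation: the autonomy is assumed to take more
    # authority when workload is high or the driver is looking off the road.
    authority = 0.3 + 0.4 * workload           # workload normalized to [0, 1]
    if not eyes_on_road:
        authority = min(authority + 0.3, 1.0)
    return authority * autonomy_torque + (1.0 - authority) * human_torque

# Hypothetical moment in the simulated HMMWV driving task
print(blended_torque(human_torque=1.2, autonomy_torque=-0.4, workload=0.8, eyes_on_road=False))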
9.
Abstract
PURPOSE OF REVIEW The goal of automation is to decrease the anesthesiologist's workload and the possibility of human error. Automated systems, however, introduce problems of their own, including loss of situation awareness, leaving the physician out of the loop, and the need to train physicians to monitor autonomous systems. This review discusses the growing role of automated systems in healthcare and describes two types of automation failures.
RECENT FINDINGS An automation surprise occurs when an automated system takes an action that is unexpected by the user. Mode confusion occurs when the operator does not understand what an automated system is programmed to do, and it may prevent the clinician from fully understanding what the device is doing during a critical event. Both types of automation failure can decrease a clinician's trust in the system. They may also prevent a clinician from regaining control of a failed system (e.g., a ventilator that is no longer working) during a critical event.
SUMMARY Clinicians should receive generalized training on how to manage automation and should be required to demonstrate competency before using medical equipment that employs automation, including electronic health records, infusion pumps, and ventilators.
10. Guo Y, Yang XJ. Modeling and Predicting Trust Dynamics in Human–Robot Teaming: A Bayesian Inference Approach. International Journal of Social Robotics 2020. DOI: 10.1007/s12369-020-00703-3.
Abstract
Trust in automation, or more recently trust in autonomy, has received extensive research attention in the past three decades. The majority of prior literature adopted a “snapshot” view of trust and typically evaluated trust through questionnaires administered at the end of an experiment. This “snapshot” view, however, does not acknowledge that trust is a dynamic variable that can strengthen or decay over time. To fill this research gap, the present study models trust dynamics as a human interacts with a robotic agent over time. The underlying premise of the study is that by interacting with a robotic agent and observing its performance over time, a rational human agent will update their trust in the robotic agent accordingly. Based on this premise, we develop a personalized trust prediction model and learn its parameters using Bayesian inference. Our proposed model adheres to three properties of trust dynamics that characterize how human agents actually develop trust, and thus guarantees high model explicability and generalizability. We tested the proposed method using an existing dataset involving 39 human participants interacting with four drones in a simulated surveillance mission. The proposed method obtained a root mean square error of 0.072, significantly outperforming existing prediction methods. Moreover, we identified three distinct types of trust dynamics: the Bayesian decision maker, the oscillator, and the disbeliever. This prediction model can be used for the design of individualized and adaptive technologies.
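One common way to formalize the premise that trust is updated from observed robot performance is a Beta-Bernoulli model, in which trust is the expected success probability and each observed success or failure updates the Beta parameters. The sketch below is a generic illustration of that idea, not the personalized model or parameters reported in the paper.

class BetaTrustModel:
    # Illustrative Beta-Bernoulli trust model: trust = expected success rate.
    def __init__(self, alpha=1.0, beta=1.0):
        self.alpha = alpha  # prior pseudo-count of successes
        self.beta = beta    # prior pseudo-count of failures

    def update(self, success):
        if success:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    @property
    def trust(self):
        return self.alpha / (self.alpha + self.beta)

model = BetaTrustModel()
for outcome in [True, True, False, True]:  # hypothetical robot task outcomes
    model.update(outcome)
print(round(model.trust, 3))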