1
Alarcon GM, Capiola A, Lee MA, Willis S, Hamdan IA, Jessup SA, Harris KN. Development and Validation of the System Trustworthiness Scale. Hum Factors 2024; 66:1893-1913. PMID: 37458319. DOI: 10.1177/00187208231189000.
Abstract
OBJECTIVE We created and validated a scale to measure perceptions of system trustworthiness. BACKGROUND Several scales exist in the literature that attempt to assess trustworthiness of system referents. However, existing measures suffer from limitations in their development and validation. The current study sought to develop a scale based on theory and methodological rigor. METHOD We conducted exploratory and confirmatory factor analyses on data from two online studies to develop the System Trustworthiness Scale (STS). Additional analyses explored the manipulation of the factors and assessed convergent and divergent validity. RESULTS The exploratory factor analyses resulted in a three-factor solution that represented the theoretical constructs of trustworthiness: performance, purpose, and process. Confirmatory factor analyses confirmed the three-factor solution. In addition, correlation and regression analyses demonstrated the scale's divergent and predictive validity. CONCLUSION The STS is a psychometrically valid and predictive scale for assessing trustworthiness perceptions of system referents. APPLICATIONS The STS assesses trustworthiness perceptions of systems. Importantly, the scale differentiates performance, purpose, and process constructs and is adaptable to a variety of system referents.
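The scale-development workflow summarized above (exploratory factor analysis followed by confirmation of a three-factor structure) can be illustrated with a minimal Python sketch. The data, item names, and the three-factor extraction below are hypothetical stand-ins for illustration and are not taken from the article.

```python
# Minimal sketch (not from the article): exploratory factor analysis of
# hypothetical trustworthiness items, extracting three factors intended to
# correspond to performance, process, and purpose.
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

# Hypothetical data: rows = respondents, columns = scale items (Likert ratings).
rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(1, 8, size=(300, 12)),
                     columns=[f"item_{i+1}" for i in range(12)])

fa = FactorAnalysis(n_components=3, rotation="varimax")
fa.fit(items)

loadings = pd.DataFrame(fa.components_.T,
                        index=items.columns,
                        columns=["factor_1", "factor_2", "factor_3"])
print(loadings.round(2))  # inspect which items load on which factor
```

In practice, an oblique rotation and a separate confirmatory factor model fit on independent data would be needed to mirror the EFA/CFA procedure reported above; this sketch only shows the exploratory step.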
Affiliation(s)
- Gene M Alarcon
- Air Force Research Laboratory, Wright Patterson AFB, OH, USA
- August Capiola
- Air Force Research Laboratory, Wright Patterson AFB, OH, USA
- Michael A Lee
- General Dynamics Information Technology Inc, Falls Church, VA, USA
- Sasha Willis
- General Dynamics Information Technology Inc, Falls Church, VA, USA
- Izz Aldin Hamdan
- General Dynamics Information Technology Inc, Falls Church, VA, USA
- Sarah A Jessup
- Air Force Research Laboratory, Wright Patterson AFB, OH, USA
2
Capiola A, Hamdan IA, Lyons JB, Lewis M, Alarcon GM, Sycara K. The Effect of Asset Degradation on Trust in Swarms: A Reexamination of System-Wide Trust in Human-Swarm Interaction. Hum Factors 2024; 66:1475-1489. PMID: 36511147. DOI: 10.1177/00187208221145261.
Abstract
OBJECTIVE The effects of asset degradation on trust in human-swarm interaction were investigated through the lens of system-wide trust theory. BACKGROUND Researchers have begun investigating contextual features that shape human interactions with robotic swarms-systems comprising assets that coordinate behavior based on their nearest neighbors. Recent work has begun investigating how human trust toward swarms is affected by asset degradation through the lens of system-wide trust theory, but these studies have been marked by several limitations. METHOD In an online study, the current work manipulated asset degradation and measured trust-relevant criteria in a within-subjects design and addressed the limitations of past work. RESULTS Controlling for swarm performance (i.e., target acquisition), asset degradation and trust (i.e., reliance intentions) in swarms were negatively related. In addition, as degradation increased, perceptions of swarm cohesion, obstacle avoidance, target acquisition, and terrain exploration efficiency decreased, the latter two of which (coupled with the reliance intentions criterion) support the tenets of system-wide trust theory as well as replicate and extend past work on the effects of asset degradation on trust in swarms. CONCLUSION Human-swarm interaction is a context in which system-wide trust is relevant, and future work ought to investigate how to calibrate human trust toward swarm systems. APPLICATIONS Based on these findings, design professionals should prioritize ways to depict swarm performance and system health such that humans do not abandon trust in systems that are still functional yet not over-trust those systems which are indeed performing poorly.
Affiliation(s)
- August Capiola
- Air Force Research Laboratory, Wright-Patterson AFB, Ohio, USA
- Joseph B Lyons
- Air Force Research Laboratory, Wright-Patterson AFB, Ohio, USA
- Gene M Alarcon
- Air Force Research Laboratory, Wright-Patterson AFB, Ohio, USA
3
Griffiths N, Bowden V, Wee S, Loft S. Return-to-Manual Performance can be Predicted Before Automation Fails. Hum Factors 2024; 66:1333-1349. PMID: 36538745. DOI: 10.1177/00187208221147105.
Abstract
OBJECTIVE This study aimed to examine operator state variables (workload, fatigue, and trust in automation) that may predict return-to-manual (RTM) performance when automation fails in simulated air traffic control. BACKGROUND Prior research has largely focused on triggering adaptive automation based on reactive indicators of performance degradation or operator strain. A more direct and effective approach may be to proactively engage/disengage automation based on predicted operator RTM performance (conflict detection accuracy and response time), which requires analyses of within-person effects. METHOD Participants accepted and handed off aircraft from their sector and were assisted by imperfect conflict detection/resolution automation. To avoid aircraft conflicts, participants were required to intervene when automation failed to detect a conflict. Participants periodically rated their workload, fatigue, and trust in automation. RESULTS For participants with the same or higher average trust than the sample average, an increase in their trust (relative to their own average) slowed their subsequent RTM response time. For participants with lower average fatigue than the sample average, an increase in their fatigue (relative to their own average) improved their subsequent RTM response time. There was no effect of workload on RTM performance. CONCLUSIONS RTM performance degraded as trust in automation increased relative to participants' own average, but only for individuals with average or high levels of trust. APPLICATIONS Study outcomes indicate a potential for future adaptive automation systems to detect vulnerable operator states in order to predict subsequent RTM performance decrements.
Affiliation(s)
- Vanessa Bowden
- The University of Western Australia, Crawley, WA, Australia
- Serena Wee
- The University of Western Australia, Crawley, WA, Australia
- Shayne Loft
- The University of Western Australia, Crawley, WA, Australia
4
Schelble BG, Lopez J, Textor C, Zhang R, McNeese NJ, Pak R, Freeman G. Towards Ethical AI: Empirically Investigating Dimensions of AI Ethics, Trust Repair, and Performance in Human-AI Teaming. Hum Factors 2024; 66:1037-1055. PMID: 35938319. DOI: 10.1177/00187208221116952.
Abstract
OBJECTIVE Determining the efficacy of two trust repair strategies (apology and denial) for trust violations of an ethical nature by an autonomous teammate. BACKGROUND While ethics in human-AI interaction is extensively studied, little research has investigated how decisions with ethical implications impact trust and performance within human-AI teams and their subsequent repair. METHOD Forty teams of two participants and one autonomous teammate completed three team missions within a synthetic task environment. The autonomous teammate made an ethical or unethical action during each mission, followed by an apology or denial. Measures of individual team trust, autonomous teammate trust, human teammate trust, perceived autonomous teammate ethicality, and team performance were taken. RESULTS Teams with unethical autonomous teammates had significantly lower trust in the team and trust in the autonomous teammate. Unethical autonomous teammates were also perceived as substantially more unethical. Neither trust repair strategy effectively restored trust after an ethical violation, and autonomous teammate ethicality was not related to the team score, but unethical autonomous teammates did have shorter times. CONCLUSION Ethical violations significantly harm trust in the overall team and autonomous teammate but do not negatively impact team score. However, current trust repair strategies like apologies and denials appear ineffective in restoring trust after this type of violation. APPLICATION This research highlights the need to develop trust repair strategies specific to human-AI teams and trust violations of an ethical nature.
Affiliation(s)
- Beau G Schelble
- Human-Centered Computing, Clemson University, Clemson, SC, USA
- Jeremy Lopez
- Department of Psychology, Clemson University, Clemson, SC, USA
- Claire Textor
- Department of Psychology, Clemson University, Clemson, SC, USA
- Rui Zhang
- Human-Centered Computing, Clemson University, Clemson, SC, USA
- Richard Pak
- Department of Psychology, Clemson University, Clemson, SC, USA
- Guo Freeman
- Human-Centered Computing, Clemson University, Clemson, SC, USA
5
Rittenberg BSP, Holland CW, Barnhart GE, Gaudreau SM, Neyedli HF. Trust with increasing and decreasing reliability. Hum Factors 2024:187208241228636. PMID: 38445652. DOI: 10.1177/00187208241228636.
Abstract
OBJECTIVE The primary purpose was to determine how trust changes over time when automation reliability increases or decreases. A secondary purpose was to determine how task-specific self-confidence is associated with trust and reliability level. BACKGROUND Both overtrust and undertrust can be detrimental to system performance; therefore, the temporal dynamics of trust with changing reliability level need to be explored. METHOD Two experiments used a dominant-color identification task, where automation provided a recommendation to users, with the reliability of the recommendation changing over 300 trials. In Experiment 1, two groups of participants interacted with the system: one group started with a 50% reliable system which increased to 100%, while the other used a system that decreased from 100% to 50%. Experiment 2 included a group where automation reliability increased from 70% to 100%. RESULTS Trust was initially high in the decreasing group and then declined as reliability level decreased; however, trust also declined in the 50% increasing reliability group. Furthermore, when user self-confidence increased, automation reliability had a greater influence on trust. In Experiment 2, the 70% increasing reliability group showed increased trust in the system. CONCLUSION Trust does not always track the reliability of automated systems; in particular, it is difficult for trust to recover once the user has interacted with a low-reliability system. APPLICATIONS This study provides initial evidence on the dynamics of trust for automation that improves over time, suggesting that users should only start interacting with automation once it is sufficiently reliable.
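As a rough illustration of the reliability manipulation described above (a recommendation whose reliability rises or falls across 300 trials), the following sketch generates trial-wise reliability schedules and samples aid correctness from them. The linear schedules and seed are assumptions for illustration, not the study's actual design.

```python
# Minimal sketch (not from the article): trial-wise reliability schedules for a
# 300-trial aid whose reliability increases or decreases linearly.
import numpy as np

N_TRIALS = 300
rng = np.random.default_rng(1)

increasing = np.linspace(0.50, 1.00, N_TRIALS)   # 50% -> 100%
decreasing = np.linspace(1.00, 0.50, N_TRIALS)   # 100% -> 50%

# On each trial the aid's recommendation is correct with the scheduled probability.
aid_correct = rng.random(N_TRIALS) < increasing
print(f"Block-averaged reliability: "
      f"first 50 trials = {aid_correct[:50].mean():.2f}, "
      f"last 50 trials = {aid_correct[-50:].mean():.2f}")
```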
6
Rieger T, Manzey D. Understanding the Impact of Time Pressure and Automation Support in a Visual Search Task. Hum Factors 2024; 66:770-786. PMID: 35770911. DOI: 10.1177/00187208221111236.
Abstract
OBJECTIVE To understand the impact of time pressure and automated decision support systems (DSS) in a simulated medical visual search task. BACKGROUND Time pressure usually impairs manual performance in visual search tasks, but DSS support might neutralize this negative effect. Moreover, understanding the impact of time pressure and DSS support seems relevant for many real-world applications of visual search. METHOD We used a visual search paradigm where participants had to search for target letters in a simulated medical image. Participants performed the task either manually or with support of a highly reliable DSS. Time pressure was varied within-subjects by either a trialwise time-pressure manipulation (Experiment 1) or a blockwise manipulation (Experiment 2). Performance was assessed based on signal detection measures. To further analyze visual search behavior, a mouse-over approach was used. RESULTS In both experiments, results showed impaired sensitivity under high compared to low time pressure in the manual condition, but no negative effect of time pressure when working with a highly reliable DSS. Moreover, participants searched less under time pressure and when receiving DSS support, indicating participants followed the automation without thoroughly checking recommendations. However, the human-DSS team's sensitivity was always worse than that of the DSS alone, independent of the strength of time pressure. CONCLUSION Negative effects of time pressure can be ameliorated when receiving support by a DSS, but joint overall performance remains below DSS-alone performance. APPLICATION Highly reliable DSS seem capable of ameliorating the negative impact of time pressure in complex detection tasks.
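The sensitivity analysis mentioned above rests on standard signal detection measures. The sketch below (not from the article) shows one common way to compute d' and the criterion c from hit and false-alarm counts; the counts and the log-linear correction are illustrative assumptions.

```python
# Minimal sketch (not from the article): signal detection measures
# (sensitivity d' and criterion c) from hit and false-alarm counts.
from scipy.stats import norm

def dprime_criterion(hits, misses, false_alarms, correct_rejections):
    # Log-linear correction avoids infinite z-scores when rates are 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
    criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))
    return d_prime, criterion

d, c = dprime_criterion(hits=42, misses=8, false_alarms=5, correct_rejections=45)
print(f"d' = {d:.2f}, c = {c:.2f}")
```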
7
Elder H, Canfield C, Shank DB, Rieger T, Hines C. Knowing When to Pass: The Effect of AI Reliability in Risky Decision Contexts. Hum Factors 2024; 66:348-362. PMID: 35603703. DOI: 10.1177/00187208221100691.
Abstract
OBJECTIVE This study manipulates the presence and reliability of AI recommendations for risky decisions to measure the effect on task performance, behavioral consequences of trust, and deviation from a probability matching collaborative decision-making model. BACKGROUND Although AI decision support improves performance, people tend to underutilize AI recommendations, particularly when outcomes are uncertain. As AI reliability increases, task performance improves, largely due to higher rates of compliance (following action recommendations) and reliance (following no-action recommendations). METHODS In a between-subjects design, participants were assigned to a high-reliability AI, low-reliability AI, or a control condition. Participants decided whether to bet that their team would win in a series of basketball games, with compensation tied to performance. We evaluated task performance (in accuracy and signal detection terms) and the behavioral consequences of trust (via compliance and reliance). RESULTS AI recommendations improved task performance, had limited impact on risk-taking behavior, and were under-valued by participants. Accuracy, sensitivity (d'), and reliance increased in the high-reliability AI condition, but there was no effect on response bias (c) or compliance. Participant behavior was only consistent with a probability matching model for compliance in the low-reliability condition. CONCLUSION In a pay-off structure that incentivized risk-taking, the primary value of the AI recommendations was in determining when to perform no action (i.e., pass on bets). APPLICATION In risky contexts, designers need to consider whether action or no-action recommendations will be more influential to design appropriate interventions.
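To make the probability matching benchmark mentioned above concrete, the sketch below simulates a user who complies with the aid at a rate equal to the aid's reliability. The reliability levels, trial counts, and chance-level accuracy for unaided decisions are illustrative assumptions, not parameters from the study.

```python
# Minimal sketch (not from the article): a probability matching model of
# compliance, where the simulated user follows the aid's recommendation with a
# probability equal to the aid's reliability.
import numpy as np

def simulate_compliance(aid_reliability, n_trials=200, seed=0):
    rng = np.random.default_rng(seed)
    aid_correct = rng.random(n_trials) < aid_reliability
    # Probability matching: comply on each trial with p = aid_reliability.
    complied = rng.random(n_trials) < aid_reliability
    # When complying, accuracy follows the aid; otherwise assume chance (0.5).
    user_correct = np.where(complied, aid_correct, rng.random(n_trials) < 0.5)
    return complied.mean(), user_correct.mean()

for reliability in (0.6, 0.9):
    compliance, accuracy = simulate_compliance(reliability)
    print(f"reliability={reliability:.0%}: compliance={compliance:.2f}, "
          f"accuracy={accuracy:.2f}")
```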
Affiliation(s)
- Hannah Elder
- Technische Universität Berlin, Berlin, Germany, and University of Missouri-Columbia, Columbia, Missouri, USA
- Casey Canfield
- Missouri University of Science & Technology, Rolla, Missouri, USA
- Daniel B Shank
- Missouri University of Science & Technology, Rolla, Missouri, USA
- Casey Hines
- Missouri University of Science & Technology, Rolla, Missouri, USA
8
Manchon JB, Bueno M, Navarro J. Calibration of Trust in Automated Driving: A Matter of Initial Level of Trust and Automated Driving Style? Hum Factors 2023; 65:1613-1629. PMID: 34861787. DOI: 10.1177/00187208211052804.
Abstract
OBJECTIVE Automated driving is becoming a reality, and such technology raises new concerns about human-machine interaction on the road. This paper aims to investigate factors influencing trust calibration and evolution over time. BACKGROUND Numerous studies showed trust was a determinant in automation use and misuse, particularly in the automated driving context. METHOD Sixty-one drivers participated in an experiment aiming to better understand the influence of initial level of trust (Trustful vs. Distrustful) on drivers' behaviors and trust calibration during two sessions of simulated automated driving. The automated driving style was manipulated as positive (smooth) or negative (abrupt) to investigate human-machine early interactions. Trust was assessed over time through questionnaires. Drivers' visual behaviors and take-over performances during an unplanned take-over request were also investigated. RESULTS Results showed an increase of trust over time for both Trustful and Distrustful drivers, regardless of the automated driving style. Trust was also found to fluctuate over time depending on the specific events handled by the automated vehicle. Take-over performances were not influenced by the initial level of trust or the automated driving style. CONCLUSION Trust in automated driving increases rapidly as drivers experience such a system. Initial level of trust seems to be crucial in further trust calibration and modulates the effect of automation performance. Long-term trust evolution suggests that experience modifies drivers' mental models of automated driving systems. APPLICATION In the automated driving context, trust calibration is a decisive question for guiding the proper utilization of such systems and for road safety.
Affiliation(s)
- J B Manchon
- VEDECOM Institute, Versailles, France, and University Lyon 2, Bron, France
- Jordan Navarro
- University Lyon 2, Bron, France, and Institut Universitaire de France, Paris
9
Candrian C, Scherer A. How Terminology Affects Users' Responses to System Failures. Hum Factors 2023:187208231202572. PMID: 37734726. DOI: 10.1177/00187208231202572.
Abstract
OBJECTIVE The objective of our research is to advance the understanding of behavioral responses to a system's error. By examining trust as a dynamic variable and drawing from attribution theory, we explain the underlying mechanism and suggest how terminology can be used to mitigate the so-called algorithm aversion. In this way, we show that the use of different terms may shape consumers' perceptions and provide guidance on how these differences can be mitigated. BACKGROUND Previous research has interchangeably used various terms to refer to a system and results regarding trust in systems have been ambiguous. METHODS Across three studies, we examine the effect of different system terminology on consumer behavior following a system failure. RESULTS Our results show that terminology crucially affects user behavior. Describing a system as "AI" (i.e., self-learning and perceived as more complex) instead of as "algorithmic" (i.e., a less complex rule-based system) leads to more favorable behavioral responses by users when a system error occurs. CONCLUSION We suggest that in cases when a system's characteristics do not allow for it to be called "AI," users should be provided with an explanation of why the system's error occurred, and task complexity should be pointed out. We highlight the importance of terminology, as this can unintentionally impact the robustness and replicability of research findings. APPLICATION This research offers insights for industries utilizing AI and algorithmic systems, highlighting how strategic terminology use can shape user trust and response to errors, thereby enhancing system acceptance.
Affiliation(s)
- Anne Scherer
- URPP Social Networks, Faculty of Business, Economics and Informatics, University of Zurich, Switzerland
10
Herbers E, Miller M, Neurauter L, Walters J, Glaser D. Exploratory Development of Algorithms for Determining Driver Attention Status. Hum Factors 2023:187208231198932. PMID: 37732402. DOI: 10.1177/00187208231198932.
Abstract
OBJECTIVE Several driver distraction algorithms were developed using vehicle kinematics and driver gaze data obtained from a camera-based driver monitoring system (DMS). BACKGROUND Distracted driving characteristics can be difficult to accurately detect due to wide variation in driver behavior across driving environments. The growing availability of information about drivers and their involvement in the driving task increases the opportunity for accurately recognizing attention state. METHOD A baseline for driver distraction levels was developed using a video feed of 24 separate drivers in varying naturalistic driving conditions. This initial assessment was used to develop four buffer-based algorithms that aimed to determine a driver's real-time attentiveness via a variety of metrics and combinations thereof. RESULTS Of those tested, the optimal algorithm included ungrouped glance locations and speed. Notably, as an algorithm's performance in detecting very distracted drivers improved, its accuracy for correctly identifying attentive drivers decreased. CONCLUSION At a minimum, drivers' gaze position and vehicle speed should be included when designing driver distraction algorithms to delineate between glance patterns observed at high and low speeds. Distraction algorithms should be designed with an understanding of their limitations, including instances in which they may fail to detect distracted drivers, or falsely notify attentive drivers. APPLICATION This research adds to the body of knowledge related to driver distraction and contributes to available methods to potentially address and reduce occurrences. Machine learning algorithms can build on the data elements discussed to increase distraction detection accuracy using robust artificial intelligence.
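A buffer-based glance-and-speed rule of the general kind described above can be sketched as follows; the window length, glance labels, and speed-dependent thresholds are hypothetical choices for illustration, not the algorithms evaluated in the study.

```python
# Minimal sketch (not from the article): a buffer-based attention classifier that
# keeps a rolling window of recent glance samples and flags distraction when the
# proportion of off-road glances exceeds a speed-dependent threshold.
from collections import deque

class DistractionBuffer:
    def __init__(self, window_size=30):
        self.glances = deque(maxlen=window_size)  # recent glance locations

    def update(self, glance_location: str, speed_kph: float) -> bool:
        self.glances.append(glance_location)
        off_road = sum(g != "forward" for g in self.glances) / len(self.glances)
        # Stricter threshold at higher speeds, where off-road glances cost more.
        threshold = 0.3 if speed_kph > 60 else 0.5
        return off_road > threshold  # True = driver flagged as distracted

buffer = DistractionBuffer()
for glance in ["forward", "phone", "phone", "mirror", "phone", "phone"]:
    flagged = buffer.update(glance, speed_kph=80)
print("distracted:", flagged)
```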
Affiliation(s)
- Eileen Herbers
- Virginia Tech Transportation Institute, Blacksburg, VA, USA
- Virginia Tech, Biomedical Engineering and Mechanics, Blacksburg, VA, USA
- Marty Miller
- Virginia Tech Transportation Institute, Blacksburg, VA, USA
- Luke Neurauter
- Virginia Tech Transportation Institute, Blacksburg, VA, USA
- Jacob Walters
- Virginia Tech Transportation Institute, Blacksburg, VA, USA
11
Rieger T, Kugler L, Manzey D, Roesler E. The (Im)perfect Automation Schema: Who Is Trusted More, Automated or Human Decision Support? Hum Factors 2023:187208231197347. PMID: 37632728. DOI: 10.1177/00187208231197347.
Abstract
OBJECTIVE This study's purpose was to better understand the dynamics of trust attitude and behavior in human-agent interaction. BACKGROUND Whereas past research provided evidence for a perfect automation schema, more recent research has provided contradictory evidence. METHOD To disentangle these conflicting findings, we conducted an online experiment using a simulated medical X-ray task. We manipulated the framing of support agents (i.e., artificial intelligence (AI) versus expert versus novice) between subjects and failure experience (i.e., perfect support, imperfect support, back-to-perfect support) within subjects. Trust attitude and behavior as well as perceived reliability served as dependent variables. RESULTS Trust attitude and perceived reliability were higher for the human expert than for the AI, and higher for the AI than for the human novice. Moreover, the results showed the typical pattern of trust formation, dissolution, and restoration for trust attitude and behavior as well as perceived reliability. Forgiveness after failure experience did not differ between agents. CONCLUSION The results strongly imply the existence of an imperfect automation schema. This illustrates the need to consider agent expertise for human-agent interaction. APPLICATION When replacing human experts with AI as support agents, the challenge of lower trust attitude towards the novel agent might arise.
12
Hunter JG, Ulwelling E, Konishi M, Michelini N, Modali A, Mendoza A, Snyder J, Mehrotra S, Zheng Z, Kumar AR, Akash K, Misu T, Jain N, Reid T. The future of mobility-as-a-service: trust transfer across automated mobilities, from road to sidewalk. Front Psychol 2023; 14:1129583. PMID: 37251058. PMCID: PMC10219791. DOI: 10.3389/fpsyg.2023.1129583.
Abstract
While trust in different types of automated vehicles has been a major focus for researchers and vehicle manufacturers, few studies have explored how people trust automated vehicles that are not cars, nor how their trust may transfer across different mobilities enabled with automation. To address this objective, a dual mobility study was designed to measure how trust in an automated vehicle with a familiar form factor (a car) compares to, and influences, trust in a novel automated vehicle, termed sidewalk mobility. A mixed-method approach involving both surveys and a semi-structured interview was used to characterize trust in these automated mobilities. Results found that the type of mobility had little to no effect on the different dimensions of trust that were studied, suggesting that trust can grow and evolve across different mobilities when the user is unfamiliar with a novel automated driving-enabled (AD-enabled) mobility. These results have important implications for the design of novel mobilities.
Affiliation(s)
- Jacob G. Hunter
- School of Mechanical Engineering, Purdue University, West Lafayette, IN, United States
- Elise Ulwelling
- Industrial and Systems Engineering, San Jose State University, San Jose, CA, United States
- Matthew Konishi
- School of Mechanical Engineering, Purdue University, West Lafayette, IN, United States
- Noah Michelini
- School of Mechanical Engineering, Purdue University, West Lafayette, IN, United States
- Akhil Modali
- Industrial and Systems Engineering, San Jose State University, San Jose, CA, United States
- Anne Mendoza
- Industrial and Systems Engineering, San Jose State University, San Jose, CA, United States
- Jessie Snyder
- Industrial and Systems Engineering, San Jose State University, San Jose, CA, United States
- Zhaobo Zheng
- Honda Research Institute USA Inc., San Jose, CA, United States
- Anil R. Kumar
- Industrial and Systems Engineering, San Jose State University, San Jose, CA, United States
- Kumar Akash
- Honda Research Institute USA Inc., San Jose, CA, United States
- Teruhisa Misu
- Honda Research Institute USA Inc., San Jose, CA, United States
- Neera Jain
- School of Mechanical Engineering, Purdue University, West Lafayette, IN, United States
- Tahira Reid
- School of Mechanical Engineering, Purdue University, West Lafayette, IN, United States
13
Lopez J, Watkins H, Pak R. Enhancing component-specific trust with consumer automated systems through humanness design. Ergonomics 2023; 66:291-302. PMID: 35583421. DOI: 10.1080/00140139.2022.2079728.
Abstract
Consumer automation is a suitable venue for studying the efficacy of untested humanness design methods for promoting specific trust in multi-component systems. Subjective (trust, self-confidence) and behavioural (use, manual override) measures were recorded as 82 participants interacted with a four-component automation-bearing system in a simulated smart home task for two experimental blocks. During the first block all components were perfectly reliable (100%). During the second block, one component became unreliable (60%). Participants interacted with a system containing either a single or four simulated voice assistants. In the single-assistant condition, the unreliable component resulted in trust changes for every component. In the four-assistant condition, trust decreased for only the unreliable component. Across agent-number conditions, use decreased between blocks for only the unreliable component. Self-confidence and overrides exhibited ceiling and floor effects, respectively. Our findings provide the first evidence of effectively using humanness design to enhance component-specific trust in consumer systems. Practitioner summary: Participants interacted with simulated smart-home multi-component systems that contained one or four voiced assistants. In the single-voice condition, one component's decreasing reliability coincided with trust changes for all components. In the four-voice condition, trust decreased for only the decreasingly reliable component. The number of voices did not influence use strategies. Abbreviations: ACC: adaptive cruise control; CST: component-specific trust; SWT: system-wide trust; UAV: unmanned aerial vehicle; CPRS: complacency potential rating scale; MANOVA: multivariate analysis of variance.
Affiliation(s)
- Jeremy Lopez
- Department of Psychology, Clemson University, Clemson, SC, USA
- Heather Watkins
- Department of Psychology, Clemson University, Clemson, SC, USA
- Richard Pak
- Department of Psychology, Clemson University, Clemson, SC, USA
14
Huang J, Choo S, Pugh ZH, Nam CS. Evaluating Effective Connectivity of Trust in Human-Automation Interaction: A Dynamic Causal Modeling (DCM) Study. Hum Factors 2022; 64:1051-1069. PMID: 33657902. DOI: 10.1177/0018720820987443.
Abstract
OBJECTIVE Using dynamic causal modeling (DCM), we examined how credibility and reliability affected the way brain regions exert causal influence over each other (effective connectivity; EC) in the context of trust in automation. BACKGROUND Multiple brain regions of the central executive network (CEN) and default mode network (DMN) have been implicated in trust judgment. However, the neural correlates of trust judgment are still relatively unexplored in terms of the directed information flow between brain regions. METHOD Sixteen participants observed the performance of four computer algorithms, which differed in credibility and reliability, of the system monitoring subtask of the Air Force Multi-Attribute Task Battery (AF-MATB). Using six brain regions of the CEN and DMN commonly identified to be activated in human trust, a total of 30 (forward, backward, and lateral) connection models were developed. Bayesian model averaging (BMA) was used to quantify the connectivity strength among the brain regions. RESULTS Relative to the high trust condition, low trust showed unique presence of specific connections, greater connectivity strengths from the prefrontal cortex, and greater network complexity. The high trust condition showed no backward connections. CONCLUSION Results indicated that trust and distrust can be two distinctive neural processes in human-automation interaction, with distrust being a more complex network than trust, possibly due to the increased cognitive load. APPLICATION The causal architecture of distributed brain regions inferred using DCM can help not only in the design of a balanced human-automation interface but also in the proper use of automation in real-life situations.
Affiliation(s)
- Jiali Huang
- North Carolina State University, Raleigh, USA
- Chang S Nam
- North Carolina State University, Raleigh, USA
15
Xu T, Dragomir A, Liu X, Yin H, Wan F, Bezerianos A, Wang H. An EEG study of human trust in autonomous vehicles based on graphic theoretical analysis. Front Neuroinform 2022; 16:907942. PMID: 36051853. PMCID: PMC9426721. DOI: 10.3389/fninf.2022.907942.
Abstract
With the development of autonomous vehicle technology, human-centered transport research will likely shift to the interaction between humans and vehicles. This study focuses on how human trust in autonomous vehicles (AVs) varies as the technology becomes increasingly intelligent, using electroencephalogram data to analyze human trust in AVs during simulated driving conditions. Two driving conditions, semi-autonomous and autonomous, which correspond to the two highest levels of automated driving, are used for the simulation, accompanied by various driving and car conditions. Graph theoretical analysis (GTA) is the primary method for data analysis. In the semi-autonomous driving mode, local efficiency and the clustering coefficient are lower in car-normal conditions than in car-malfunction conditions with the car approaching. This finding suggests that the human brain has a strong information processing ability when facing predictable potential hazards. However, at a traffic light with a car malfunctioning under the semi-autonomous driving mode, the characteristic path length is higher for the car malfunction, indicating a weaker information processing ability when facing unpredictable potential hazards. Furthermore, in the fully autonomous driving condition, participants cannot intervene and need only low-level brain function to take emergency actions, as reflected in lower local efficiency and small-worldness for the car malfunction. Our results shed light on the design of human-machine interaction and human factors engineering for high-level autonomous vehicles.
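The graph metrics named above (local efficiency, clustering coefficient, characteristic path length) can be computed with standard tooling. The sketch below uses a random matrix as a stand-in for an EEG connectivity estimate, and the threshold value is an arbitrary assumption.

```python
# Minimal sketch (not from the article): graph metrics for a connectivity graph
# built by thresholding a random matrix standing in for an EEG functional
# connectivity estimate.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_channels = 16
connectivity = rng.random((n_channels, n_channels))
connectivity = (connectivity + connectivity.T) / 2      # symmetrize
adjacency = (connectivity > 0.7).astype(int)            # threshold to a binary graph
np.fill_diagonal(adjacency, 0)

G = nx.from_numpy_array(adjacency)

print("local efficiency:", round(nx.local_efficiency(G), 3))
print("clustering coefficient:", round(nx.average_clustering(G), 3))
if nx.is_connected(G):
    print("characteristic path length:",
          round(nx.average_shortest_path_length(G), 3))
```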
Affiliation(s)
- Tao Xu
- The Faculty of Intelligent Manufacturing, Wuyi University, Jiangmen, China
- Andrei Dragomir
- The N1 Institute, National University of Singapore, Singapore, Singapore
- Xucheng Liu
- Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Macau, Macao SAR, China
- Haojun Yin
- The Faculty of Intelligent Manufacturing, Wuyi University, Jiangmen, China
- Feng Wan
- Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Macau, Macao SAR, China
- Anastasios Bezerianos
- Hellenic Institute of Transport (HIT), Centre for Research and Technology Hellas (CERTH), Thessaloniki, Greece
- Hongtao Wang
- The Faculty of Intelligent Manufacturing, Wuyi University, Jiangmen, China
16
Abstract
OBJECTIVE The study addresses the impact of time pressure on human interactions with automated decision support systems (DSSs) and related performance consequences. BACKGROUND When humans interact with DSSs, this often results in worse performance than could be expected from the automation alone. Previous research has suggested that time pressure might make a difference by leading humans to rely more on a DSS. METHOD In two laboratory experiments, participants performed a luggage screening task either manually, supported by a highly reliable DSS, or by a low reliable DSS. Time provided for inspecting the X-rays was 4.5 s versus 9 s varied within-subjects as the time pressure manipulation. Participants in the automation conditions were either shown the automation's advice prior (Experiment 1) or following (Experiment 2) their own inspection, before they made their final decision. RESULTS In Experiment 1, time pressure compromised performance independent of whether the task was performed manually or with automation support. In Experiment 2, the negative impact of time pressure was only found in the manual but not in the two automation conditions. However, neither experiment revealed any positive impact of time pressure on overall performance, and the joint performance of human and automation was mostly worse than the performance of the automation alone. CONCLUSION Time pressure compromises the quality of decision-making. Providing a DSS can reduce this effect, but only if the automation's advice follows the assessment of the human. APPLICATION The study provides suggestions for the effective implementation of DSSs in addition to supporting concerns that highly reliable DSSs are not used optimally by human operators.
17
Muslim H, Itoh M. Long-Term Evaluation of Drivers' Behavioral Adaptation to an Adaptive Collision Avoidance System. Hum Factors 2021; 63:1295-1315. PMID: 32484749. PMCID: PMC8521345. DOI: 10.1177/0018720820926092.
Abstract
OBJECTIVE Taking a human factors approach in which the human is involved as a part of the system design and evaluation process, this paper aims to improve the driving performance and safety impact of driver support systems in the long view of human-automation interaction. BACKGROUND Adaptive automation, in which the system implements the level of automation based on the situation, user capacity, and risk, has proven effective in dynamic environments with wide variations of human workload over time. However, research has indicated that drivers may not efficiently deal with dynamically changing system configurations. Little effort has been made to support drivers' understanding of and behavioral adaptation to adaptive automation. METHOD Using a within-subjects design, 42 participants completed a four-stage driving simulation experiment during which they had to gradually interact with an adaptive collision avoidance system while exposed to hazardous lane-change scenarios over 1 month. RESULTS Compared to unsupported driving (stage i), collisions were significantly reduced when participants first drove with the system (stage ii), and improvements in drivers' trust in and understanding of the system, as well as in driving behavior, were achieved with more driver-system interaction and driver training during stages iii and iv. CONCLUSION While designing systems that take into account human skills and abilities can go some way to improving their effectiveness, this alone is not sufficient. To maximize safety and system usability, it is also essential to ensure appropriate users' understanding and acceptance of the system. APPLICATION These findings have important implications for the development of active safety systems and automated driving.
18
Kraus J, Scholz D, Baumann M. What's Driving Me? Exploration and Validation of a Hierarchical Personality Model for Trust in Automated Driving. Hum Factors 2021; 63:1076-1105. PMID: 32633564. DOI: 10.1177/0018720820922653.
Abstract
OBJECTIVE This paper presents a comprehensive investigation of personality traits related to trust in automated vehicles. A hierarchical personality model based on Mowen's (2000) 3M model is explored in a first and replicated in a second study. BACKGROUND Trust in automation is established in a complex psychological process involving user-, system- and situation-related variables. In this process, personality traits have been viewed as an important source of variance. METHOD Dispositional variables on three levels were included in an exploratory, hierarchical personality model (full model) of dynamic learned trust in automation, which was refined on the basis of structural equation modeling carried out in Study 1 (final model). Study 2 replicated the final model in an independent sample. RESULTS In both studies, the personality model showed a good fit and explained a large proportion of variance in trust in automation. The combined evidence supports the role of extraversion, neuroticism, and self-esteem at the elemental level; affinity for technology and dispositional interpersonal trust at the situational level; and propensity to trust in automation and a priori acceptability of automated driving at the surface level in the prediction of trust in automation. CONCLUSION Findings confirm that personality plays a substantial role in trust formation and provide evidence of the involvement of user dispositions not previously investigated in relation to trust in automation: self-esteem, dispositional interpersonal trust, and affinity for technology. APPLICATION Implications for personalization of information campaigns, driver training, and user interfaces for trust calibration in automated driving are discussed.
19
Abstract
OBJECTIVE We examined a method of machine learning (ML) to evaluate its potential to develop more trustworthy control of unmanned vehicle area search behaviors. BACKGROUND ML typically lacks interaction with the user. Novel interactive machine learning (IML) techniques incorporate user feedback, enabling observation of emerging ML behaviors, and human collaboration during ML of a task. This may enable trust and recognition of these algorithms. METHOD Participants judged and selected behaviors in a low and a high interaction condition (IML) over the course of behavior evolution using ML. User trust in the outputs, as well as preference, and ability to discriminate and recognize the behaviors were measured. RESULTS Compared to noninteractive techniques, IML behaviors were more trusted and preferred, as well as recognizable, separate from non-IML behaviors, and approached similar performance as pure ML models. CONCLUSION IML shows promise for creating behaviors by involving the user; this is the first extension of this technique for vehicle behavior model development targeting user satisfaction and is unique in its multifaceted evaluation of how users perceived, trusted, and implemented these learned controllers. APPLICATION There are many contexts where the brittleness of ML cannot be trusted, but the advantage of ML over traditional programmed behaviors may be large, as in some military operations where they could be scaled. IML in this early form appears to generate satisfactory behaviors without sacrificing performance, use, or trust in the behavior, but more work is necessary.
Affiliation(s)
- John Reeder
- Naval Information Warfare Center, San Diego, CA, USA
20
Kinosada Y, Kobayashi T, Shinohara K. Trusting Other Vehicles' Automatic Emergency Braking Decreases Self-Protective Driving. Hum Factors 2021; 63:880-895. PMID: 32101470. PMCID: PMC8274173. DOI: 10.1177/0018720820907755.
Abstract
OBJECTIVE We focused on drivers in close proximity to vehicles with advanced driver assistance systems (ADAS). We examined whether the belief that an approaching vehicle is equipped with automatic emergency braking (AEB) influences behavior of those drivers. BACKGROUND In addition to benefits of ADAS, previous studies have demonstrated negative behavioral adaptation, that is, behavioral changes after introduction of ADAS, by its users. However, little is known about whether negative behavioral adaptation can occur for nonusers in close proximity to vehicles with ADAS. METHOD Experienced (Experiment 1) and novice (Experiment 2) drivers drove a simulator vehicle without ADAS and tried to pass through intersections. We manipulated participants' belief about whether an approaching vehicle had AEB and time-to-arrival of the approaching vehicle. Participants kept constant speed or pressed the brake pedal before entering each intersection. In Experiment 2, participants rated their trust in AEB by a questionnaire after driving. RESULTS In both experiments, belief about the approaching vehicle's AEB did not influence braking probability; however, belief delayed initiation of braking. The effect of belief on braking latency was only observed when trust in AEB was higher in Experiment 2. CONCLUSION Negative behavioral adaptation can occur for nonusers in close proximity to users of AEB, and trust in AEB plays an important role. APPLICATION When evaluating the effect of ADAS, the possible behavioral change of surrounding nonusers as well as users should be taken into account. To establish consumers' trust accurately, advertisements (e.g., TV commercials) must carefully consider their messages.
21
Lebiere C, Blaha LM, Fallon CK, Jefferson B. Adaptive Cognitive Mechanisms to Maintain Calibrated Trust and Reliance in Automation. Front Robot AI 2021; 8:652776. PMID: 34109222. PMCID: PMC8181412. DOI: 10.3389/frobt.2021.652776.
Abstract
Trust calibration for a human–machine team is the process by which a human adjusts their expectations of the automation’s reliability and trustworthiness; adaptive support for trust calibration is needed to engender appropriate reliance on automation. Herein, we leverage an instance-based learning ACT-R cognitive model of decisions to obtain and rely on an automated assistant for visual search in a UAV interface. This cognitive model matches well with the human predictive power statistics measuring reliance decisions; we obtain from the model an internal estimate of automation reliability that mirrors human subjective ratings. The model is able to predict the effect of various potential disruptions, such as environmental changes or particular classes of adversarial intrusions on human trust in automation. Finally, we consider the use of model predictions to improve automation transparency that account for human cognitive biases in order to optimize the bidirectional interaction between human and machine through supporting trust calibration. The implications of our findings for the design of reliable and trustworthy automation are discussed.
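A heavily simplified, non-ACT-R sketch of the instance-based idea described above: store the outcome of each interaction with the automation and derive a recency-weighted reliability estimate that drives the reliance decision. The decay and threshold parameters are illustrative assumptions, not estimates from the paper.

```python
# Minimal sketch (not from the article): a toy instance-based reliance rule.
# Each past trial is stored as an outcome (1 = automation was right, 0 = wrong);
# the model relies when a recency-weighted reliability estimate exceeds a threshold.
import numpy as np

class RelianceModel:
    def __init__(self, decay=0.5, threshold=0.6):
        self.decay = decay          # recency weighting (larger = faster forgetting)
        self.threshold = threshold  # rely when estimated reliability exceeds this
        self.outcomes = []          # 1 if automation was correct on that trial

    def estimated_reliability(self):
        if not self.outcomes:
            return 0.5  # uninformed prior
        ages = np.arange(len(self.outcomes), 0, -1)    # 1 = most recent instance
        weights = ages.astype(float) ** (-self.decay)  # power-law recency weighting
        return float(np.average(self.outcomes, weights=weights))

    def decide(self):
        return self.estimated_reliability() > self.threshold  # True = rely

model = RelianceModel()
for automation_correct in [1, 1, 1, 0, 0, 1, 0, 0]:
    rely = model.decide()
    model.outcomes.append(automation_correct)
print("estimated reliability:", round(model.estimated_reliability(), 2),
      "| rely next trial:", model.decide())
```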
Affiliation(s)
- Christian Lebiere
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, United States
- Leslie M Blaha
- 711th Human Performance Wing, Air Force Research Laboratory, Pittsburgh, PA, United States
- Corey K Fallon
- Pacific Northwest National Laboratory, Richland, WA, United States
- Brett Jefferson
- Pacific Northwest National Laboratory, Richland, WA, United States
22
Abstract
OBJECTIVE Test the automation transparency design principle using a full-scope nuclear power plant simulator. BACKGROUND Automation transparency is a long-held human factors design principle espousing that the responsibilities, capabilities, goals, activities, and/or effects of automation should be directly observable in the human-system interface. The anticipated benefits of transparency include more effective reliance, more appropriate trust, better understanding, and greater user satisfaction. Transparency has enjoyed a recent upsurge in use in the context of human interaction with agent-oriented automation. METHOD Three full-scope nuclear power plant simulator studies were conducted with licensed operating crews. In the first two experiments, transparency was implemented for interlocks, controllers, limitations, protections, and automatic programs that operate at the local component level of the plant. In the third experiment, procedure automation assumed control of plant operations and was represented in dedicated agent displays. RESULTS Results from Experiments 1 and 2 appear to validate the human performance benefits of automation transparency for automation at the component level. However, Experiment 3 failed to replicate these findings for automation that assumed control for executing procedural actions. CONCLUSION Automation transparency appears to yield expected benefits for component-level automation, but caution is warranted in generalizing the design principle to agent-oriented automation. APPLICATION The automation transparency design principle may offer a powerful means of compensating for the detrimental impacts of hidden automation influence at the component level of complex systems. However, system developers should exercise caution in assuming that the principle extends to agent-oriented automation.
23
Miller L, Kraus J, Babel F, Baumann M. More Than a Feeling-Interrelation of Trust Layers in Human-Robot Interaction and the Role of User Dispositions and State Anxiety. Front Psychol 2021; 12:592711. PMID: 33912098. PMCID: PMC8074795. DOI: 10.3389/fpsyg.2021.592711.
Abstract
With service robots becoming more ubiquitous in social life, interaction design needs to adapt to novice users and the associated uncertainty in the first encounter with this technology in new emerging environments. Trust in robots is an essential psychological prerequisite to achieve safe and convenient cooperation between users and robots. This research focuses on psychological processes in which user dispositions and states affect trust in robots, which in turn is expected to impact the behavior and reactions in the interaction with robotic systems. In a laboratory experiment, the influence of propensity to trust in automation and negative attitudes toward robots on state anxiety, trust, and comfort distance toward a robot were explored. Participants were approached by a humanoid domestic robot two times and indicated their comfort distance and trust. The results favor the differentiation and interdependence of dispositional, initial, and dynamic learned trust layers. A mediation from the propensity to trust to initial learned trust by state anxiety provides an insight into the psychological processes through which personality traits might affect interindividual outcomes in human-robot interaction (HRI). The findings underline the meaningfulness of user characteristics as predictors for the initial approach to robots and the importance of considering users’ individual learning history regarding technology and robots in particular.
Affiliation(s)
- Linda Miller
- Department Human Factors, Institute of Psychology and Education, Ulm University, Ulm, Germany
- Johannes Kraus
- Department Human Factors, Institute of Psychology and Education, Ulm University, Ulm, Germany
- Franziska Babel
- Department Human Factors, Institute of Psychology and Education, Ulm University, Ulm, Germany
- Martin Baumann
- Department Human Factors, Institute of Psychology and Education, Ulm University, Ulm, Germany
24
Mühl K, Strauch C, Grabmaier C, Reithinger S, Huckauf A, Baumann M. Get Ready for Being Chauffeured: Passenger's Preferences and Trust While Being Driven by Human and Automation. Hum Factors 2020; 62:1322-1338. PMID: 31498656. DOI: 10.1177/0018720819872893.
Abstract
OBJECTIVE We investigated passengers' trust and preferences using subjective, qualitative, and psychophysiological measures while being driven either by a human or by automation in a field study and a driving simulator experiment. BACKGROUND The passenger's perspective has largely been neglected in autonomous driving research, although the change of roles from an active driver to a passive passenger is incontrovertible. Investigations of passengers' appraisals of self-driving vehicles are often conflated with active manual driving experiences instead of comparisons with being driven by humans. METHOD We conducted an exploratory field study using an autonomous research vehicle (N = 11) and a follow-up experimental driving simulation (N = 24). Participants were driven on the same course by a human and an autonomous agent while sitting in a passenger seat. Skin conductance, trust, and qualitative characteristics of the perceived driving situation were assessed. In addition, the effect of driving style (defensive vs. sporty) was evaluated in the simulator. RESULTS Both investigations revealed a close relation between subjective trust ratings and skin conductance, with increased trust and by trend reduced arousal for human compared with automation in control. Even though driving behavior was equivalent in the simulator when being driven by human and automation, passengers most preferred and trusted the human-defensive driver. CONCLUSION Individual preferences for driving style and human or autonomous vehicle control influence trust and subjective driving characterizations. APPLICATION The findings are applicable in human-automation research, as a reminder not to neglect subjective attributions and psychophysiological reactions resulting from ascribed control duties in relation to specific execution characteristics.
25
Du N, Huang KY, Yang XJ. Not All Information Is Equal: Effects of Disclosing Different Types of Likelihood Information on Trust, Compliance and Reliance, and Task Performance in Human-Automation Teaming. Hum Factors 2020; 62:987-1001. PMID: 31348863. DOI: 10.1177/0018720819862916.
Abstract
OBJECTIVE The study examines the effects of disclosing different types of likelihood information on human operators' trust in automation, their compliance and reliance behaviors, and human-automation team performance. BACKGROUND To facilitate appropriate trust in and dependence on automation, explicitly conveying the likelihood of automation success has been proposed as one solution. Empirical studies have investigated the potential benefits of disclosing likelihood information in the form of automation reliability, (un)certainty, and confidence, yet results from these studies are rather mixed. METHOD We conducted a human-in-the-loop experiment with 60 participants using a simulated surveillance task. Each participant performed a compensatory tracking task and a threat detection task with the help of an imperfect automated threat detector. Three types of likelihood information were presented: overall likelihood information, predictive values, and hit and correct rejection rates. Participants' trust in automation, compliance and reliance behaviors, and task performance were measured. RESULTS Human operators informed of the predictive values or the overall likelihood value, rather than the hit and correct rejection rates, relied on the decision aid more appropriately and obtained higher task scores. CONCLUSION Not all likelihood information is equal in aiding human-automation team performance. Directly presenting the hit and correct rejection rates of an automated decision aid should be avoided. APPLICATION The findings can be applied to the design of automated decision aids.
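The distinction among the three information formats can be made concrete with a small worked example. The following Python sketch uses a hypothetical confusion matrix for an automated threat detector (the counts are illustrative assumptions, not data from this study) to show how overall accuracy, predictive values, and hit/correct-rejection rates are computed from the same underlying performance and why they convey different things to an operator.

```python
# Illustrative sketch (hypothetical counts, not data from the study): deriving
# the three kinds of likelihood information from one confusion matrix.

hits = 40                 # detector says "threat", threat present
misses = 10               # detector says "clear", threat present
false_alarms = 30         # detector says "threat", no threat
correct_rejections = 120  # detector says "clear", no threat
total = hits + misses + false_alarms + correct_rejections

# Overall likelihood of automation success (one global number).
overall_accuracy = (hits + correct_rejections) / total

# Predictive values: how often an alert / an all-clear turns out to be correct.
ppv = hits / (hits + false_alarms)                         # positive predictive value
npv = correct_rejections / (correct_rejections + misses)   # negative predictive value

# Hit and correct rejection rates: conditioned on the true state instead.
hit_rate = hits / (hits + misses)
correct_rejection_rate = correct_rejections / (correct_rejections + false_alarms)

print(f"overall accuracy : {overall_accuracy:.2f}")
print(f"predictive values: PPV={ppv:.2f}, NPV={npv:.2f}")
print(f"hit / CR rates   : hit={hit_rate:.2f}, CR={correct_rejection_rate:.2f}")
```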
Collapse
Affiliation(s)
- Na Du
- University of Michigan, Ann Arbor, USA
| | | | | |
Collapse
|
26
|
Kraus J, Scholz D, Stiegemeier D, Baumann M. The More You Know: Trust Dynamics and Calibration in Highly Automated Driving and the Effects of Take-Overs, System Malfunction, and System Transparency. Hum Factors 2020; 62:718-736. [PMID: 31233695 DOI: 10.1177/0018720819853686] [Citation(s) in RCA: 33] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
OBJECTIVE This paper presents a theoretical model and two simulator studies on the psychological processes underlying early trust calibration in automated vehicles. BACKGROUND The positive outcomes of automation can only reach their full potential if a calibrated level of trust is achieved. In this process, information about system capabilities and limitations plays a crucial role. METHOD In two simulator experiments, trust was measured repeatedly during an automated drive. In Study 1, all participants in a two-group experiment experienced a system-initiated take-over, and the occurrence of a system malfunction was manipulated. In Study 2, using a 2 × 2 between-subjects design, system transparency was manipulated as an additional factor. RESULTS Trust increased progressively over the first interactions. In Study 1, take-overs led to a temporary decrease in trust, as did malfunctions in both studies. Interestingly, trust was reestablished over the course of further interaction after take-overs and malfunctions. In Study 2, the high-transparency condition showed no temporary decline in trust after a malfunction. CONCLUSION Trust is calibrated on the basis of information provided prior to and during the initial drive with an automated vehicle. The experience of take-overs and malfunctions led to a temporary decline in trust that recovered over the course of error-free interaction. This temporary decrease can be prevented by providing transparent information prior to system interaction. APPLICATION Transparency, including about potential limitations of the system, plays an important role in this process and should be considered in the design of tutorials and human-machine interaction (HMI) concepts for automated vehicles.
Collapse
|
27
|
Abstract
OBJECTIVE The present study aims to evaluate driver intervention behaviors during a partially automated parking task. BACKGROUND Cars with partially automated parking features are becoming widely available. Although recent research explores the use of automation features in partially automated cars, none has focused on partially automated parking. Recent incidents and research have demonstrated that drivers sometimes use partially automated features in unexpected, inefficient, and harmful ways. METHOD Participants completed a series of partially automated parking trials with a Tesla Model X, and their behavioral interventions were recorded. Participants also completed a risk-taking behavior test and a post-experiment questionnaire that included questions about trust in the system, likelihood of using the Autopark feature, and preference for either the partially automated parking feature or self-parking. RESULTS Initial intervention rates were over 50% but declined steeply in later trials. Responses to open-ended questions revealed that once participants understood what the system was doing, they were much more likely to trust it. Trust in the partially automated parking feature was predicted by a model including risk-taking behaviors, self-confidence, the self-reported number of errors committed by the Tesla, and the proportion of trials in which the driver intervened. CONCLUSION Using partially automated parking with little knowledge of its workings can lead to a high degree of initial distrust. Repeated exposure to partially automated features can greatly increase drivers' use of them. APPLICATION Short tutorials and brief explanations of the workings of partially automated features may greatly improve trust when drivers are first introduced to partially automated systems.
Collapse
Affiliation(s)
| | | | - Anthony J Ries
- United States Air Force Academy, Colorado Springs, CO, USA
| | | | - Chad C Tossell
- United States Air Force Academy, Colorado Springs, CO, USA
| |
Collapse
|
28
|
Kraus J, Scholz D, Messner EM, Messner M, Baumann M. Scared to Trust? - Predicting Trust in Highly Automated Driving by Depressiveness, Negative Self-Evaluations and State Anxiety. Front Psychol 2020; 10:2917. [PMID: 32038353 PMCID: PMC6989472 DOI: 10.3389/fpsyg.2019.02917] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2019] [Accepted: 12/10/2019] [Indexed: 11/13/2022] Open
Abstract
The advantages of automated driving can only come fully into play if these systems are used appropriately, meaning that they are neither used in situations they were not designed for (misuse) nor used in too restricted a manner (disuse). Trust in automation has been found to be an essential psychological basis for appropriate interaction with automated systems. Well-balanced system use requires a level of trust calibrated to the actual ability of an automated system. Given these far-reaching implications of trust for safe and efficient system use, the psychological processes by which trust is dynamically calibrated prior to and during the use of automated technology need to be understood. To date, only a limited body of research has investigated the role of personality and emotional states in the formation of trust in automated systems. In this research, the role of the personality variables depressiveness, self-efficacy, self-esteem, and locus of control in the experience of anxiety before the first encounter with a highly automated driving system was investigated, as was the relationship of these personality variables and anxiety to the subsequent formation of trust in automation. In a driving simulator study, personality variables and anxiety were measured before the interaction with an automated system. Trust in the system was measured after participants had driven with the system for some time. Trust in the system was significantly predicted by state anxiety and by the personality characteristics self-esteem and self-efficacy. The relationships of self-esteem and self-efficacy with trust were mediated by state anxiety, as supported by significant specific indirect effects. While the direct relationship between depressiveness and trust in automation was not significant, an indirect effect through the experience of anxiety was supported. Locus of control showed no significant association with trust in automation. These findings underscore the importance of considering individual differences in negative self-evaluations and anxiety when users are introduced to a new automated system, in order to account for individual differences in trust in automation. Implications for future research as well as for the design of automated technology in general and automated driving systems in particular are discussed.
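Specific indirect effects of this kind are typically estimated with regression-based mediation and a bootstrap confidence interval. The sketch below illustrates that general approach on simulated data; the variable names, effect sizes, and sample size are assumptions for illustration and do not reproduce the study's analysis.

```python
# Minimal sketch of a predictor -> state anxiety -> trust mediation on
# simulated data. The indirect effect a*b is tested with a percentile bootstrap.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 300
self_esteem = rng.normal(size=n)
anxiety = -0.5 * self_esteem + rng.normal(size=n)                 # path a (assumed)
trust = -0.4 * anxiety + 0.1 * self_esteem + rng.normal(size=n)   # paths b and c' (assumed)

def indirect_effect(x, m, y):
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]                         # x -> m
    b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params[1]   # m -> y, controlling for x
    return a * b

idx = np.arange(n)
boot = [indirect_effect(self_esteem[s], anxiety[s], trust[s])
        for s in (rng.choice(idx, size=n, replace=True) for _ in range(2000))]
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"indirect effect via anxiety: {indirect_effect(self_esteem, anxiety, trust):.3f} "
      f"(95% bootstrap CI [{lo:.3f}, {hi:.3f}])")
```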
Collapse
Affiliation(s)
- Johannes Kraus
- Department of Human Factors, Institute of Psychology and Education, Ulm University, Ulm, Germany
| | - David Scholz
- Department of Human Factors, Institute of Psychology and Education, Ulm University, Ulm, Germany
| | - Eva-Maria Messner
- Department of Clinical Psychology and Psychotherapy, Institute of Psychology and Education, Ulm University, Ulm, Germany
| | - Matthias Messner
- Department of Clinical and Health Psychology, Institute of Psychology and Education, Ulm University, Ulm, Germany
| | - Martin Baumann
- Department of Human Factors, Institute of Psychology and Education, Ulm University, Ulm, Germany
| |
Collapse
|
29
|
Jayaraman SK, Creech C, Tilbury DM, Yang XJ, Pradhan AK, Tsui KM, Robert LP. Pedestrian Trust in Automated Vehicles: Role of Traffic Signal and AV Driving Behavior. Front Robot AI 2019; 6:117. [PMID: 33501132 PMCID: PMC7805667 DOI: 10.3389/frobt.2019.00117] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2019] [Accepted: 10/25/2019] [Indexed: 11/29/2022] Open
Abstract
Pedestrians' acceptance of automated vehicles (AVs) depends on their trust in the AVs. We developed a model of pedestrians' trust in AVs based on AV driving behavior and traffic signal presence. To empirically verify this model, we conducted a human–subject study with 30 participants in a virtual reality environment. The study manipulated two factors: AV driving behavior (defensive, normal, and aggressive) and the crosswalk type (signalized and unsignalized crossing). Results indicate that pedestrians' trust in AVs was influenced by AV driving behavior as well as the presence of a signal light. In addition, the impact of the AV's driving behavior on trust in the AV depended on the presence of a signal light. There were also strong correlations between trust in AVs and certain observable trusting behaviors such as pedestrian gaze at certain areas/objects, pedestrian distance to collision, and pedestrian jaywalking time. We also present implications for design and future research.
Collapse
Affiliation(s)
- Suresh Kumaar Jayaraman
- Department of Mechanical Engineering, College of Engineering, University of Michigan, Ann Arbor, MI, United States
| | - Chandler Creech
- Department of Electrical Engineering and Computer Science, College of Engineering, University of Michigan, Ann Arbor, MI, United States
| | - Dawn M Tilbury
- Department of Mechanical Engineering, College of Engineering, University of Michigan, Ann Arbor, MI, United States
| | - X Jessie Yang
- Department of Industrial and Operations Engineering, College of Engineering, University of Michigan, Ann Arbor, MI, United States
| | - Anuj K Pradhan
- Department of Mechanical and Industrial Engineering, University of Massachusetts, Amherst, MA, United States
| | - Katherine M Tsui
- Robotics User Experience and Industrial Design, Toyota Research Institute, Cambridge, MA, United States
| | - Lionel P Robert
- School of Information, University of Michigan, Ann Arbor, MI, United States
| |
Collapse
|
30
|
Sheridan TB. Extending Three Existing Models to Analysis of Trust in Automation: Signal Detection, Statistical Parameter Estimation, and Model-Based Control. Hum Factors 2019; 61:1162-1170. [PMID: 30811950 DOI: 10.1177/0018720819829951] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
OBJECTIVE The objective is to propose three quantitative models of trust in automation. BACKGROUND The current trust-in-automation literature includes various definitions and frameworks, which are reviewed. METHOD This research shows how three existing models, namely those for signal detection, statistical parameter estimation calibration, and internal model-based control, can be revised and reinterpreted to apply to trust in automation in ways useful for human-system interaction design. RESULTS The resulting reinterpretation is presented quantitatively and graphically, and measures for trust and trust calibration are discussed, along with examples of application. CONCLUSION The resulting models can be applied to provide quantitative trust measures in future experiments or system designs. APPLICATIONS Simple examples are provided to explain how model application works for the three trust contexts corresponding to signal detection, parameter estimation calibration, and model-based open-loop control.
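As one concrete touchpoint for the first of the three models, the snippet below computes the standard signal-detection quantities (sensitivity d' and criterion c) from hypothetical hit and false-alarm rates. It shows only the generic signal-detection calculation, not Sheridan's specific trust reinterpretation, and the rates are made-up values.

```python
# Generic signal-detection quantities, with hypothetical rates.
from scipy.stats import norm

hit_rate = 0.85          # P(operator relies | automation is correct), assumed
false_alarm_rate = 0.20  # P(operator relies | automation is wrong), assumed

d_prime = norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)              # sensitivity
criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(false_alarm_rate))   # response bias

print(f"d' = {d_prime:.2f}, criterion c = {criterion:.2f}")
# A strongly positive criterion would indicate a conservative (under-relying)
# operator; a strongly negative one a liberal (over-relying) operator.
```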
Collapse
|
31
|
Gallimore D, Lyons JB, Vo T, Mahoney S, Wynne KT. Trusting Robocop: Gender-Based Effects on Trust of an Autonomous Robot. Front Psychol 2019; 10:482. [PMID: 30930811 PMCID: PMC6423898 DOI: 10.3389/fpsyg.2019.00482] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2018] [Accepted: 02/18/2019] [Indexed: 11/19/2022] Open
Abstract
Little is known regarding public opinion of autonomous robots. Trust in these robots is a pertinent topic, as this construct relates to one's willingness to be vulnerable to such systems. The current research examined gender-based effects on trust in the context of an autonomous security robot. Participants (N = 200; 63% male), recruited via Amazon's Mechanical Turk, viewed a video depicting an autonomous guard robot interacting with humans. The robot was equipped with a non-lethal device to deter non-authorized visitors, and the video depicted the robot using this non-lethal device on one of the three humans in the video. However, the scenario was designed to create uncertainty regarding who was at fault: the robot or the human. Following the video, participants rated their trust in the robot, the perceived trustworthiness of the robot, and their desire to utilize similar autonomous robots in several contexts ranging from military to commercial to home use. The results demonstrated that females reported higher trust in and perceived trustworthiness of the robot relative to males. Implications for the role of individual differences in trust of robots are discussed.
Collapse
Affiliation(s)
- Darci Gallimore
- Environmental Health Effects Laboratory, Naval Medical Research Unit Dayton, Wright-Patterson Air Force Base, Dayton, OH, United States
| | - Joseph B. Lyons
- 711 Human Performance Wing, Air Force Research Laboratory, Wright-Patterson Air Force Base, Dayton, OH, United States
| | - Thy Vo
- Ball Aerospace & Technologies, Fairborn, OH, United States
| | - Sean Mahoney
- 711 Human Performance Wing, Air Force Research Laboratory, Wright-Patterson Air Force Base, Dayton, OH, United States
| | - Kevin T. Wynne
- Department of Management and International Business, University of Baltimore, Baltimore, MD, United States
| |
Collapse
|
32
|
Abstract
OBJECTIVE A driving simulator study was conducted to evaluate the longitudinal effects of the introduction and withdrawal of a lane keeping system on driving performance and cognitive workload. BACKGROUND Autonomous vehicle systems are being introduced into the vehicle fleet, yet little research has examined the carryover effects of long-term exposure. METHODS Forty-eight participants (30 treatment, 18 control) completed eight drives across three separate days in a driving simulator. The treatment group experienced the introduction and subsequent withdrawal of a lane keeping system. Changes in driving performance (standard deviation of lateral position [SDLP] and mean time to collision [TTC]) and cognitive workload (response time and miss rate on a detection response task) were modeled using mixed-effects linear and negative binomial regression. RESULTS Drivers exposed to the lane keeping system showed an increase in SDLP after the system was withdrawn relative to their baseline. Drivers with lane keeping had decreased mean TTC during and after system withdrawal compared with manual drivers. Cognitive workload increased when the lane keeping system was withdrawn relative to when the system was engaged. CONCLUSION Behavioral adaptations in driving performance and cognitive workload were present during automation and persisted after the automation was withdrawn. APPLICATION The findings emphasize the importance of considering the effects of skill atrophy and misplaced trust arising from semi-autonomous vehicle systems. Designers and policymakers can use these findings to inform system alerts and training.
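A mixed-effects analysis of the kind described for SDLP might be set up as in the sketch below, which fits a random-intercept model on simulated data. The column names, factor levels, and effect sizes are assumptions for illustration, not the study's data; the negative binomial models for the detection response task counts would be fitted separately.

```python
# Random-intercept mixed model of SDLP by study phase and group, on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_participants = 48
participants = np.repeat(np.arange(n_participants), 3)
phase = np.tile(["baseline", "engaged", "withdrawal"], n_participants)
group = np.repeat(np.array(["treatment"] * 30 + ["control"] * 18), 3)
subject_offset = np.repeat(rng.normal(0, 0.05, n_participants), 3)  # per-participant intercept

sdlp = (0.30
        + 0.06 * ((phase == "withdrawal") & (group == "treatment"))  # assumed carryover effect
        + subject_offset
        + rng.normal(0, 0.03, n_participants * 3))

df = pd.DataFrame({"participant": participants, "phase": phase,
                   "group": group, "sdlp": sdlp})

model = smf.mixedlm("sdlp ~ phase * group", df, groups=df["participant"]).fit()
print(model.summary())
```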
Collapse
|
33
|
Abstract
OBJECTIVE The authors investigate whether nonhuman agents, such as computers or robots, produce a social conformity effect in human operators and examine to what extent potential conformist behavior varies as a function of the human-likeness of the group members and the type of task to be performed. BACKGROUND People conform due to normative and/or informational motivations in human-human interactions, and conformist behavior is modulated by factors related to the individual as well as factors associated with the group, context, and culture. Studies have yet to examine whether nonhuman agents also induce social conformity. METHOD Participants were assigned to a computer, robot, or human group and completed both a social and an analytical task with the respective group. RESULTS Conformity measures (percentage of times participants answered in line with agents on critical trials) subjected to a 3 × 2 mixed ANOVA showed significantly higher conformity rates for the analytical versus the social task as well as a modulation of conformity depending on the perceived agent-task fit. CONCLUSION Findings indicate that nonhuman agents were able to exert a social conformity effect, which was modulated further by the perceived match between agent and task type. Participants conformed to comparable degrees with agents during the analytical task but conformed significantly more strongly on the social task as the group's human-likeness increased. APPLICATION Results suggest that users may react differently to the influence of nonhuman agent groups, with the potential for variability in conformity depending on the domain of the task.
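A 3 × 2 mixed ANOVA of this kind can be illustrated with the following sketch, which generates simulated conformity rates in long format and analyzes them with pingouin's mixed_anova. The group sizes, effect sizes, and column names are assumptions, not the study's data.

```python
# Simulated 3 (agent group, between) x 2 (task type, within) mixed ANOVA.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(1)
rows = []
for agent in ["computer", "robot", "human"]:
    for subj in range(20):
        sid = f"{agent}_{subj}"
        for task in ["analytical", "social"]:
            base = 0.35 if task == "analytical" else 0.15
            if task == "social":
                # assumed: conformity on the social task grows with human-likeness
                base += {"computer": 0.0, "robot": 0.05, "human": 0.12}[agent]
            rows.append({"subject": sid, "agent": agent, "task": task,
                         "conformity": float(np.clip(base + rng.normal(0, 0.08), 0, 1))})

df = pd.DataFrame(rows)
print(pg.mixed_anova(data=df, dv="conformity", within="task",
                     subject="subject", between="agent"))
```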
Collapse
Affiliation(s)
| | - Eva Wiese
- George Mason University, Fairfax, Virginia
| |
Collapse
|
34
|
Balfe N, Sharples S, Wilson JR. Understanding Is Key: An Analysis of Factors Pertaining to Trust in a Real-World Automation System. Hum Factors 2018; 60:477-495. [PMID: 29613815 PMCID: PMC5958411 DOI: 10.1177/0018720818761256] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 05/01/2017] [Accepted: 01/07/2018] [Indexed: 06/08/2023]
Abstract
OBJECTIVE This paper aims to explore the role of factors pertaining to trust in real-world automation systems through the application of observational methods in a case study from the railway sector. BACKGROUND Trust in automation is widely acknowledged as an important mediator of automation use, but the majority of research on automation trust is based on laboratory work. In contrast, this work explored trust in a real-world setting. METHOD Experienced rail operators in four signaling centers were observed for 90 min, and their activities were coded into five mutually exclusive categories. Their observed activities were analyzed in relation to their reported trust levels, collected via a questionnaire. RESULTS The results showed clear differences in activity, even when circumstances on the workstations were very similar, and significant differences in some trust dimensions were found between groups exhibiting different levels of intervention and of time not involved with signaling. CONCLUSION Although the empirical, lab-based studies in the literature have consistently found that reliability and competence of the automation are the most important aspects of trust development, understanding of the automation emerged as the strongest dimension in this study. The implication is that the development and maintenance of trust in real-world, safety-critical automation systems may be distinct from that in artificial laboratory automation. APPLICATION The findings have important implications for emerging automation concepts in diverse industries, including highly automated vehicles and the Internet of Things.
Collapse
|
35
|
Abstract
OBJECTIVE It was investigated whether providing an explanation for a takeover request in automated driving influences trust in automation and acceptance. BACKGROUND Takeover requests will be recurring events in conditionally automated driving that could undermine trust and acceptance and, therefore, the successful introduction of automated vehicles. METHOD Forty participants were equally assigned to either an experimental group that received an explanation of the reason for each takeover request or a control group that received no explanations. In a simulator drive, both groups experienced three takeover scenarios that varied in the obviousness of their causation. Participants rated their acceptance before and after the drive and rated their trust before and after each takeover situation. RESULTS All participants rated acceptance at the same high level before and after the drive, independent of condition. The control group's trust ratings remained unchanged by takeover requests in all situations, but the experimental group showed decreased trust after experiencing a takeover caused by roadwork. Participants provided with explanations felt more strongly that they had understood the system and the reasons for the takeovers. CONCLUSION A takeover request did not lower trust or acceptance. Providing an explanation for a takeover request had no impact on trust or acceptance but increased the perceived understanding of the system. APPLICATION The results provide insights into users' perception of automated vehicles and takeover situations, and a foundation for future interface design for automated vehicles.
Collapse
|
36
|
Abstract
OBJECTIVE The objective of this study was to investigate the effects of prior familiarization with takeover requests (TORs) during conditionally automated driving on drivers' initial takeover performance and automation trust. BACKGROUND System-initiated TORs are one of the biggest concerns for conditionally automated driving and have been studied extensively. Most, but not all, of these studies have included training sessions to familiarize participants with TORs. This makes them hard to compare and might obscure first-failure-like effects on takeover performance and the formation of automation trust. METHOD A driving simulator study compared drivers' takeover performance in two takeover situations across four prior-familiarization groups (no familiarization, description, experience, description and experience) and measured automation trust before and after experiencing the system. RESULTS As hypothesized, prior familiarization with TORs had a more positive effect on takeover performance in the first than in a subsequent takeover situation. In all groups, automation trust increased after participants experienced the system. Participants given no prior familiarization with TORs reported the highest automation trust both before and after experiencing the system. CONCLUSION The current results extend earlier findings, suggesting that prior familiarization with TORs during conditionally automated driving is most relevant for takeover performance in the first takeover situation and that it lowers drivers' automation trust. APPLICATION Potential applications of this research include different approaches to familiarizing users with automated driving systems, better integration of earlier findings, and more refined experimental designs.
Collapse
|
37
|
Bellem H, Klüver M, Schrauf M, Schöner HP, Hecht H, Krems JF. Can We Study Autonomous Driving Comfort in Moving-Base Driving Simulators? A Validation Study. Hum Factors 2017; 59:442-456. [PMID: 28005453 DOI: 10.1177/0018720816682647] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
OBJECTIVE To lay the basis for studying autonomous driving comfort using driving simulators, we assessed the behavioral validity of two moving-base simulator configurations by contrasting them with a test-track setting. BACKGROUND With increasing levels of automation, driving comfort becomes increasingly important. Simulators provide a safe environment in which to study perceived comfort in autonomous driving. To date, however, no studies had been conducted on comfort in autonomous driving to determine the extent to which results from simulator studies can be transferred to on-road driving conditions. METHOD Participants (N = 72) experienced six differently parameterized lane-change and deceleration maneuvers and subsequently rated the comfort of each scenario. One group of participants experienced the maneuvers in a test-track setting, whereas two other groups experienced them in one of two moving-base simulator configurations. RESULTS We demonstrated relative and absolute validity for one of the two simulator configurations. Subsequent analyses revealed that the validity of the simulator depends strongly on the parameterization of the motion system. CONCLUSION Moving-base simulation can be a useful research tool for studying driving comfort in autonomous vehicles. However, our results point to a preference for subunity scaling factors for both lateral and longitudinal motion cues, which might be explained by an underestimation of speed in virtual environments. APPLICATION In line with previous studies, we recommend lateral- and longitudinal-motion scaling factors of approximately 50% to 60% in order to obtain valid results for both active and passive driving tasks.
Collapse
Affiliation(s)
| | | | | | | | - Heiko Hecht
- Johannes Gutenberg-University, Mainz, Germany
| | | |
Collapse
|
38
|
Abstract
OBJECTIVE We investigated the effects of exogenous oxytocin on trust, compliance, and team decision making with agents varying in anthropomorphism (computer, avatar, human) and reliability (100%, 50%). BACKGROUND Recent work has explored psychological similarities between how people trust humanlike automation and how they trust other humans. Exogenous administration of oxytocin, a neuropeptide associated with trust among humans, offers a unique opportunity to probe the anthropomorphism continuum of automation to infer when agents are trusted like another human or merely like a machine. METHOD Eighty-four healthy male participants collaborated with automated agents varying in anthropomorphism that provided recommendations in a pattern recognition task. RESULTS Under placebo, participants exhibited less trust and compliance with automated aids as the anthropomorphism of those aids increased. Under oxytocin, participants interacted with aids at the extremes of the anthropomorphism continuum similarly to participants under placebo but increased their trust, compliance, and performance with the avatar, an agent at the midpoint of the anthropomorphism continuum. CONCLUSION This study provides the first evidence that administration of exogenous oxytocin affects trust, compliance, and team decision making with automated agents. These effects support the premise that oxytocin increases affinity for social stimuli in automated aids. APPLICATION Designing automation to mimic basic human characteristics is sufficient to elicit behavioral trust outcomes that are driven by neurological processes typically observed in human-human interactions. Designers of automated systems should consider the task, the individual, and the level of anthropomorphism to achieve the desired outcome.
Collapse
Affiliation(s)
| | - Samuel S Monfort
- George Mason University, Fairfax, Virginia
| | - Kimberly Goodyear
- Brown University, Providence, Rhode Island
- George Mason University, Fairfax, Virginia
| | - Li Lu
- George Mason University, Fairfax, Virginia
| | - Martin O'Hara
- Virginia Hospital Center, Fairfax Hospital, Arlington, Virginia
- George Mason University, Fairfax, Virginia
| | - Mary R Lee
- National Institute on Alcohol Abuse and Alcoholism, Bethesda, Maryland
- George Mason University, Fairfax, Virginia
| | | | | |
Collapse
|
39
|
Chiou EK, Lee JD. Cooperation in Human-Agent Systems to Support Resilience: A Microworld Experiment. Hum Factors 2016; 58:846-63. [PMID: 27178676 DOI: 10.1177/0018720816649094] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/02/2015] [Accepted: 04/16/2016] [Indexed: 05/27/2023]
Abstract
OBJECTIVE This study uses a dyadic approach to understand human-agent cooperation and system resilience. BACKGROUND Increasingly capable technology fundamentally changes human-machine relationships. Rather than reliance on or compliance with more or less reliable automation, we investigate interaction strategies with more or less cooperative agents. METHOD A joint-task microworld scenario was developed to explore the effects of agent cooperation on participant cooperation and system resilience. To assess the effects of agent cooperation on participant cooperation, 36 people coordinated with a more or less cooperative agent by requesting resources and responding to requests for resources in a dynamic task environment. Another 36 people were recruited to assess these effects following a perturbation in their own hospital. RESULTS Experiment 1 shows that people reciprocated the cooperative behaviors of the agents: a low-cooperation agent led to less effective interactions and less resource sharing, whereas a high-cooperation agent led to more effective interactions and greater resource sharing. Experiment 2 shows that an initial fast-tempo perturbation undermined proactive cooperation: people tended not to request resources. However, the initial fast tempo had little effect on reactive cooperation: people tended to accept resource requests according to the agent's cooperation level. CONCLUSION This study complements the supervisory control perspective on human-automation interaction by considering interdependence and cooperation rather than the more common focus on reliability and reliance. APPLICATION The cooperativeness of automated agents can influence the cooperativeness of human agents. Design and evaluation for resilience in teams involving increasingly autonomous agents should consider the cooperative behaviors of these agents.
Collapse
Affiliation(s)
| | - John D Lee
- University of Wisconsin-Madison, Madison
| |
Collapse
|
40
|
Drnec K, Marathe AR, Lukos JR, Metcalfe JS. From Trust in Automation to Decision Neuroscience: Applying Cognitive Neuroscience Methods to Understand and Improve Interaction Decisions Involved in Human Automation Interaction. Front Hum Neurosci 2016; 10:290. [PMID: 27445741 PMCID: PMC4927573 DOI: 10.3389/fnhum.2016.00290] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2015] [Accepted: 05/30/2016] [Indexed: 11/17/2022] Open
Abstract
Human-automation interaction (HAI) systems have thus far failed to live up to expectations, mainly because human users do not always interact with the automation appropriately. Trust in automation (TiA) has been considered a central influence on the way a human user interacts with an automation: if TiA is too high there will be overuse; if TiA is too low there will be disuse. However, even though extensive research into TiA has identified specific HAI behaviors, or trust outcomes, a unique mapping between trust states and trust outcomes has yet to be clearly identified. Interaction behaviors have been intensely studied in the domain of HAI and TiA, and this has led to a reframing of the problems of HAI in terms of reliance and compliance. We find the behaviorally defined terms reliance and compliance useful for application to real-world situations. However, once an inappropriate interaction behavior has occurred, it is too late to mitigate it. We therefore take a step back and look at the interaction decision that precedes the behavior. The decision neuroscience community has shown that decisions are fairly stereotyped processes accompanied by measurable psychophysiological correlates. Two literatures were therefore reviewed. The TiA literature was reviewed extensively in order to understand the relationship between TiA and trust outcomes, as well as to identify gaps in current knowledge. We note that an interaction decision precedes an interaction behavior and believe that knowledge of the psychophysiological correlates of decisions can be leveraged to improve joint system performance. Because understanding the interaction decision will be critical to the eventual mitigation of inappropriate interaction behavior, we also reviewed the decision-making literature and provide a synopsis of the state-of-the-art understanding of the decision process from a decision neuroscience perspective. We forward hypotheses based on this understanding that could shape a research path toward the ability to mitigate inappropriate interaction behavior in the real world.
Collapse
Affiliation(s)
- Kim Drnec
- Human Research and Engineering Directorate, U.S. Army Research Laboratory, Aberdeen, MD, USA
| | - Amar R Marathe
- Human Research and Engineering Directorate, U.S. Army Research Laboratory, Aberdeen, MD, USA
| | - Jamie R Lukos
- Advanced Concepts and Applied Research Branch, Space and Naval Warfare Systems Center Pacific, San Diego, CA, USA
| | - Jason S Metcalfe
- Human Research and Engineering Directorate, U.S. Army Research Laboratory, Aberdeen, MD, USA
| |
Collapse
|
41
|
Hergeth S, Lorenz L, Vilimek R, Krems JF. Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Hum Factors 2016; 58:509-519. [PMID: 26843570 DOI: 10.1177/0018720815625744] [Citation(s) in RCA: 58] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/29/2015] [Accepted: 12/08/2015] [Indexed: 06/05/2023]
Abstract
OBJECTIVE The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated against self-reported automation trust in a driving simulator study. BACKGROUND Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. METHOD The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. RESULTS Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and that an increase in trust over the experimental session was associated with a decrease in monitoring frequency. CONCLUSION We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. APPLICATION Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
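The monitoring-frequency measure can be illustrated with a minimal sketch: count glances toward the automation per unit time and correlate that rate with self-reported trust. The data below are simulated, and the assumed negative relationship mirrors only the direction of the reported finding, not its actual effect size.

```python
# Simulated monitoring frequency vs. self-reported trust (illustrative only).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(7)
n = 35
trust = rng.uniform(1, 7, n)  # e.g., ratings on a 7-point trust scale (assumed)
# assumed negative relation: higher trust -> fewer glances at the automation
glances_per_min = np.clip(8 - 0.8 * trust + rng.normal(0, 1.0, n), 0, None)

r, p = pearsonr(trust, glances_per_min)
print(f"r = {r:.2f}, p = {p:.3f}  (expected: negative correlation)")
```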
Collapse
Affiliation(s)
- Sebastian Hergeth
- BMW Group, Munich, Germany
- Technische Universität Chemnitz, Chemnitz, Germany
| | - Lutz Lorenz
- BMW Group, Munich, Germany
- Technische Universität Chemnitz, Chemnitz, Germany
| | | | - Josef F Krems
- BMW Group, Munich, Germany
- Technische Universität Chemnitz, Chemnitz, Germany
| |
Collapse
|
42
|
Chancey ET, Bliss JP, Proaps AB, Madhavan P. The Role of Trust as a Mediator Between System Characteristics and Response Behaviors. Hum Factors 2015; 57:947-958. [PMID: 25917611 DOI: 10.1177/0018720815582261] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/20/2014] [Accepted: 03/24/2015] [Indexed: 06/04/2023]
Abstract
OBJECTIVE The purpose of the current work was to clarify how subjective trust determines response behavior when interacting with a signaling system. BACKGROUND In multiple theoretical frameworks, trust is acknowledged as a prime mediator between system error characteristics and automation dependence. Some researchers have operationally defined trust as the behavior exhibited; others have suggested that although trust may guide operator responses, it does not completely determine the behavior. METHOD Forty-four participants interacted with a primary flight simulation task and a secondary signaling system task. The signaling system varied in reliability (90%, 60%) and error bias (false alarm prone, miss prone). Trust was measured halfway through the experimental session to satisfy the criterion of temporal precedence in determining the effect of trust on behavior. RESULTS Analyses indicated that trust partially mediated the relationship between reliability and agreement rate. Trust did not mediate the relationship between reliability and reaction time, nor did it mediate the relationships between error bias and reaction time or agreement rate. Analyses of variance generally supported specific behavioral and trust hypotheses, indicating that the paradigm employed produced effects on response behaviors and subjective estimates of trust similar to those observed in other studies. CONCLUSION These results indicate that strong assumptions of trust acting as the prime mediator between system error characteristics and response behaviors should be viewed with caution. APPLICATION Practitioners should consider assessing factors other than trust to determine potential operator response behaviors, as these factors may be more predictive.
Collapse
|
43
|
Hoff KA, Bashir M. Trust in automation: integrating empirical evidence on factors that influence trust. Hum Factors 2015; 57:407-434. [PMID: 25875432 DOI: 10.1177/0018720814547570] [Citation(s) in RCA: 330] [Impact Index Per Article: 36.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/20/2013] [Accepted: 07/15/2014] [Indexed: 06/04/2023]
Abstract
OBJECTIVE We systematically review recent empirical research on factors that influence trust in automation to present a three-layered trust model that synthesizes existing knowledge. BACKGROUND Much of the existing research on factors that guide human-automation interaction is centered on trust, a variable that often determines the willingness of human operators to rely on automation. Studies have utilized a variety of automated systems in diverse experimental paradigms to identify factors that impact operators' trust. METHOD We performed a systematic review of empirical research on trust in automation from January 2002 to June 2013. Papers were deemed eligible only if they reported the results of a human-subjects experiment in which humans interacted with an automated system in order to achieve a goal, and only if a relationship between trust (or a trust-related behavior) and another variable was measured. Altogether, 101 papers containing 127 eligible studies were included in the review. RESULTS Our analysis revealed three layers of variability in human-automation trust (dispositional trust, situational trust, and learned trust), which we organize into a model. We propose design recommendations for creating trustworthy automation and identify environmental conditions that can affect the strength of the relationship between trust and reliance. Future research directions are also discussed for each layer of trust. CONCLUSION Our three-layered trust model provides a new lens for conceptualizing the variability of trust in automation. Its structure can be applied to help guide future research and develop training interventions and design procedures that encourage appropriate trust.
Collapse
|
44
|
Brown M, Houghton R, Sharples S, Morley J. The attribution of success when using navigation aids. Ergonomics 2014; 58:426-433. [PMID: 25384842 PMCID: PMC4404730 DOI: 10.1080/00140139.2014.977827] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/29/2013] [Revised: 08/18/2014] [Accepted: 10/09/2014] [Indexed: 06/04/2023]
Abstract
Attitudes toward geographic information technology are a seldom-explored research area that can be explained with reference to established theories of attribution. This article reports on a study of how the attribution of success and failure in pedestrian navigation varies with level of automation, degree of success, and locus of control. A total of 113 participants took part in a survey exploring reflections on personal experiences and vignettes describing fictional navigation experiences. A complex relationship was discovered in which success tends to be attributed to skill and failure to the navigation aid when participants describe their own experiences. A reversed pattern of results was found when discussing the navigation of others. It was also found that navigation success and failure are associated with personal skill to a greater extent when using paper maps, as compared with web-based routing engines or satellite navigation systems. PRACTITIONER SUMMARY This article explores the influences on the attribution of success and failure when using navigation aids. A survey was performed exploring interpretations of navigation experiences. Level of success, self or other as navigator, and type of navigation aid used are all found to influence the attribution of outcomes to internal or external factors.
Collapse
Affiliation(s)
- Michael Brown
- Horizon Digital Economy Research, University of Nottingham, C01, Nottingham Geospatial Building, Nottingham NG7 2TU, UK
- Human Factors Research Group, University of Nottingham, Nottingham, UK
| | - Robert Houghton
- Horizon Digital Economy Research, University of Nottingham, C01, Nottingham Geospatial Building, Nottingham NG7 2TU, UK
- Human Factors Research Group, University of Nottingham, Nottingham, UK
| | - Sarah Sharples
- Horizon Digital Economy Research, University of Nottingham, C01, Nottingham Geospatial Building, Nottingham NG7 2TU, UK
- Human Factors Research Group, University of Nottingham, Nottingham, UK
| | - Jeremy Morley
- Nottingham Geospatial Institute, University of Nottingham, Nottingham, UK
| |
Collapse
|