1. Szulc J, Fletcher K. Numerical versus graphical aids for decision-making in a multi-cue signal identification task. Applied Ergonomics 2024; 118:104260. PMID: 38417229. DOI: 10.1016/j.apergo.2024.104260.
Abstract
Decision aids are commonly used in tactical decision-making environments to help humans integrate base-rate and multi-cue information. However, it is important that users appropriately trust and rely on aids. Decision aids can be presented in many ways, but the literature lacks clarity over the conditions surrounding their effectiveness. This research aims to determine whether a numerical or graphical aid more effectively supports human performance, and explores the relationships between aid presentation, trust, and workload. Participants (N = 30) completed a signal-identification task that required integration of readings from a set of three dynamic gauges. Participants experienced three conditions: unaided, using a numerical aid, and using a graphical aid. The aids combined gauge and base-rate information in a statistically-optimal fashion. Participants also indicated how much they trusted the system and how hard they worked during the task. Analyses explored the impact of aid condition on sensitivity, response bias, response time, trust, and workload. Both the numerical and graphical aids produced significant increases in sensitivity and trust, and significant decreases in workload in comparison to the unaided condition. The difference in response time between the graphical and unaided conditions approached significance, with participants responding faster using the graphical aid without decrements in sensitivity. Significant interactions between aid and signal type indicated that both aided conditions promoted faster responding to non-hostile signals, with larger mean differences in the graphical aid condition. Practically, graphical aids in which suggestions are more salient to users may promote faster responding in tactical environments, with negligible cost of accuracy.
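As an illustrative aside on the measures this abstract reports (sensitivity, response bias), the conventional signal detection theory formulas are d' = z(H) - z(F) and c = -0.5[z(H) + z(F)]. The sketch below is not the study's code; the trial counts and the log-linear correction are assumptions.

```python
# Illustrative sketch only: standard signal-detection-theory (SDT) measures of the
# kind reported in the abstract. The paper's exact computation is not reproduced here.
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Return (d_prime, criterion_c) from raw trial counts."""
    # Log-linear correction keeps hit/false-alarm rates away from 0 and 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa               # sensitivity
    criterion = -0.5 * (z_hit + z_fa)    # response bias; positive = conservative
    return d_prime, criterion

# Example: 40 signal trials (32 hits) and 40 noise trials (6 false alarms)
print(sdt_measures(32, 8, 6, 34))
```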
2. Li Y, Wu B, Huang Y, Luan S. Developing trustworthy artificial intelligence: insights from research on interpersonal, human-automation, and human-AI trust. Front Psychol 2024; 15:1382693. PMID: 38694439. PMCID: PMC11061529. DOI: 10.3389/fpsyg.2024.1382693.
Abstract
The rapid advancement of artificial intelligence (AI) has impacted society in many aspects. Alongside this progress, concerns such as privacy violation, discriminatory bias, and safety risks have also surfaced, highlighting the need for the development of ethical, responsible, and socially beneficial AI. In response, the concept of trustworthy AI has gained prominence, and several guidelines for developing trustworthy AI have been proposed. Against this background, we demonstrate the significance of psychological research in identifying factors that contribute to the formation of trust in AI. Specifically, we review research findings on interpersonal, human-automation, and human-AI trust from the perspective of a three-dimensional framework (i.e., the trustor, the trustee, and their interactive context). The framework synthesizes common factors related to trust formation and maintenance across different trust types. These factors point to the foundational requirements for building trustworthy AI and provide pivotal guidance for its development, which also involves communication, education, and training for users. We conclude by discussing how insights from trust research can help enhance AI's trustworthiness and foster its adoption and application.
Affiliation(s)
- Yugang Li
- CAS Key Laboratory for Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of the Chinese Academy of Sciences, Beijing, China
- Baizhou Wu
- CAS Key Laboratory for Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of the Chinese Academy of Sciences, Beijing, China
- Yuqi Huang
- CAS Key Laboratory for Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of the Chinese Academy of Sciences, Beijing, China
- Shenghua Luan
- CAS Key Laboratory for Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of the Chinese Academy of Sciences, Beijing, China
3. Cockram L, Bartlett ML, McCarley JS. Simple manipulations of anthropomorphism fail to induce perceptions of humanness or improve trust in an automated agent. Applied Ergonomics 2023; 111:104027. PMID: 37100010. DOI: 10.1016/j.apergo.2023.104027.
Abstract
Although automation is employed as an aid to human performance, operators often interact with automated decision aids inefficiently. The current study investigated whether anthropomorphic automation would engender higher trust and use, subsequently improving human-automation team performance. Participants performed a multi-element probabilistic signal detection task in which they diagnosed a hypothetical nuclear reactor as in a state of safety or danger. The task was completed unassisted and assisted by a 93%-reliable agent varying in anthropomorphism. Results gave no evidence that participants' perceptions of anthropomorphism differed between conditions. Further, anthropomorphic automation failed to bolster trust and automation-aided performance. Findings suggest that the benefits of anthropomorphism may be limited in some contexts.
Affiliation(s)
- Lewis Cockram
- Discipline of Psychology, Flinders University, GPO Box 2100, Adelaide, South Australia, 5001, Australia
- Megan L Bartlett
- Discipline of Psychology, Flinders University, GPO Box 2100, Adelaide, South Australia, 5001, Australia
- Jason S McCarley
- School of Psychological Science, Oregon State University, 1500 SW Jefferson Way, Corvallis, OR, 97331, United States
4. Foroughi CK, Devlin S, Pak R, Brown NL, Sibley C, Coyne JT. Near-Perfect Automation: Investigating Performance, Trust, and Visual Attention Allocation. Human Factors 2023; 65:546-561. PMID: 34348511. DOI: 10.1177/00187208211032889.
Abstract
OBJECTIVE: Assess performance, trust, and visual attention during the monitoring of a near-perfect automated system.
BACKGROUND: Research rarely attempts to assess performance, trust, and visual attention in near-perfect automated systems even though they will be relied on in high-stakes environments.
METHODS: Seventy-three participants completed a 40-min supervisory control task where they monitored three search feeds. All search feeds were 100% reliable with the exception of two automation failures: one miss and one false alarm. Eye-tracking and subjective trust data were collected.
RESULTS: Thirty-four percent of participants correctly identified the automation miss, and 67% correctly identified the automation false alarm. Subjective trust increased when participants did not detect the automation failures and decreased when they did. Participants who detected the false alarm had a more complex scan pattern in the 2 min centered around the automation failure compared with those who did not. Additionally, those who detected the failures had longer dwell times in and transitioned to the center sensor feed significantly more often.
CONCLUSION: Not only does this work highlight the limitations of the human when monitoring near-perfect automated systems, it begins to quantify the subjective experience and attentional cost of the human. It further emphasizes the need to (1) reevaluate the role of the operator in future high-stakes environments and (2) understand the human on an individual level and actively design for the given individual when working with near-perfect automated systems.
APPLICATION: Multiple operator-level measures should be collected in real time in order to monitor an operator's state and leverage real-time, individualized assistance.
Affiliation(s)
- Shannon Devlin
- U.S. Naval Research Laboratory, Washington, DC, USA
- University of Virginia, Charlottesville, USA
- Ciara Sibley
- U.S. Naval Research Laboratory, Washington, DC, USA
5. Pipitone A, Geraci A, D’Amico A, Seidita V, Chella A. Robot's Inner Speech Effects on Human Trust and Anthropomorphism. Int J Soc Robot 2023:1-13. PMID: 37359434. PMCID: PMC10162655. DOI: 10.1007/s12369-023-01002-3.
Abstract
Inner Speech is an essential but also elusive human psychological process that refers to an everyday covert internal conversation with oneself. We argued that programming a robot with an overt self-talk system that simulates human inner speech could enhance both human trust and users' perception of the robot's anthropomorphism, animacy, likeability, intelligence and safety. For this reason, we planned a pre-test/post-test control group design. Participants were divided into two groups, one experimental group and one control group. Participants in the experimental group interacted with the robot Pepper equipped with an overt inner speech system, whereas participants in the control group interacted with the robot producing only outer speech. Before and after the interaction, both groups of participants were asked to complete questionnaires about inner speech and trust. Results showed differences between participants' pre-test and post-test responses, suggesting that the robot's inner speech influenced the experimental group's perceptions of the robot's animacy and intelligence. Implications of these results are discussed.
Affiliation(s)
- Arianna Pipitone
- Department of Humanities, University of Palermo, Viale delle Scienze, 90128 Palermo, Italy
- ICAR-CNR, National Research Council, Via Ugo La Malfa, 90100 Palermo, Italy
- Alessandro Geraci
- Department of Psychology, Educational Science and Human Movement, University of Palermo, Viale delle Scienze, 90128 Palermo, Italy
- Antonella D’Amico
- Department of Psychology, Educational Science and Human Movement, University of Palermo, Viale delle Scienze, 90128 Palermo, Italy
- Valeria Seidita
- Department of Engineering, University of Palermo, Viale delle Scienze, 90128 Palermo, Italy
- Antonio Chella
- Department of Engineering, University of Palermo, Viale delle Scienze, 90128 Palermo, Italy
- ICAR-CNR, National Research Council, Via Ugo La Malfa, 90100 Palermo, Italy
6. Knocton S, Hunter A, Connors W, Dithurbide L, Neyedli HF. The Effect of Informing Participants of the Response Bias of an Automated Target Recognition System on Trust and Reliance Behavior. Human Factors 2023; 65:189-199. PMID: 34078167. PMCID: PMC9969489. DOI: 10.1177/00187208211021711.
Abstract
OBJECTIVE: To determine how changing and informing a user of the false alarm (FA) rate of an automated target recognition (ATR) system affects the user's trust in and reliance on the system and their performance during an underwater mine detection task.
BACKGROUND: ATR systems are designed to operate using a high sensitivity and a liberal decision criterion to reduce the risk of the ATR system missing a target. A high number of FAs in general may lead to a decrease in operator trust and reliance.
METHODS: Participants viewed sonar images and were asked to identify mines in the images. They performed the task without ATR and with ATR at a lower and higher FA rate. The participants were split into two groups, one informed and one uninformed of the changed FA rate. Trust and/or confidence in detecting mines was measured after each block.
RESULTS: When not informed of the FA rate, the FA rate had a significant effect on the participants' response bias. Participants had greater trust in the system and a more consistent response bias when informed of the FA rate. Sensitivity and confidence were not influenced by disclosure of the FA rate but were significantly worse for the high FA rate condition compared with performance without the ATR.
CONCLUSION AND APPLICATION: Informing a user of the FA rate of automation may positively influence the level of trust in and reliance on the aid.
Affiliation(s)
- Aren Hunter
- Defence Research and Development Canada, Dartmouth, Nova Scotia, Canada
- Warren Connors
- Defence Research and Development Canada, Dartmouth, Nova Scotia, Canada
7. Lopez J, Watkins H, Pak R. Enhancing component-specific trust with consumer automated systems through humanness design. Ergonomics 2023; 66:291-302. PMID: 35583421. DOI: 10.1080/00140139.2022.2079728.
Abstract
Consumer automation is a suitable venue for studying the efficacy of untested humanness design methods for promoting specific trust in multi-component systems. Subjective (trust, self-confidence) and behavioural (use, manual override) measures were recorded as 82 participants interacted with a four-component automation-bearing system in a simulated smart home task for two experimental blocks. During the first block all components were perfectly reliable (100%). During the second block, one component became unreliable (60%). Participants interacted with a system containing either a single or four simulated voice assistants. In the single-assistant condition, the unreliable component resulted in trust changes for every component. In the four-assistant condition, trust decreased for only the unreliable component. Across agent-number conditions, use decreased between blocks for only the unreliable component. Self-confidence and overrides exhibited ceiling and floor effects, respectively. Our findings provide the first evidence of effectively using humanness design to enhance component-specific trust in consumer systems.
Practitioner summary: Participants interacted with simulated smart-home multi-component systems that contained one or four voiced assistants. In the single-voice condition, one component's decreasing reliability coincided with trust changes for all components. In the four-voice condition, trust decreased for only the decreasingly reliable component. The number of voices did not influence use strategies.
Abbreviations: ACC: adaptive cruise control; CST: component-specific trust; SWT: system-wide trust; UAV: unmanned aerial vehicle; CPRS: complacency potential rating scale; MANOVA: multivariate analysis of variance.
Affiliation(s)
- Jeremy Lopez
- Department of Psychology, Clemson University, Clemson, SC, USA
- Heather Watkins
- Department of Psychology, Clemson University, Clemson, SC, USA
- Richard Pak
- Department of Psychology, Clemson University, Clemson, SC, USA
8. Nordhoff S, Stapel J, He X, Gentner A, Happee R. Do driver's characteristics, system performance, perceived safety, and trust influence how drivers use partial automation? A structural equation modelling analysis. Front Psychol 2023; 14:1125031. PMID: 37139004. PMCID: PMC10150639. DOI: 10.3389/fpsyg.2023.1125031.
Abstract
The present study surveyed actual extensive users of SAE Level 2 partially automated cars to investigate how driver’s characteristics (i.e., socio-demographics, driving experience, personality), system performance, perceived safety, and trust in partial automation influence use of partial automation. 81% of respondents stated that they use their automated car with speed (ACC) and steering assist (LKA) at least 1–2 times a week, and 84% and 92% activate LKA and ACC at least occasionally. Respondents positively rated the performance of Adaptive Cruise Control (ACC) and Lane Keeping Assistance (LKA). ACC was rated higher than LKA, and detection of lead vehicles and lane markings was rated higher than smooth control for ACC and LKA, respectively. Respondents reported primarily disengaging (i.e., turning off) partial automation due to a lack of trust in the system and when driving is fun. They rarely disengaged the system when they noticed they were becoming bored or sleepy. Structural equation modelling revealed that trust had a positive effect on driver’s propensity for secondary task engagement during partially automated driving, while the effect of perceived safety was not significant. Regarding driver’s characteristics, we did not find a significant effect of age on perceived safety and trust in partial automation. Neuroticism negatively correlated with perceived safety and trust, while extraversion did not impact perceived safety and trust. The remaining three personality dimensions ‘openness’, ‘conscientiousness’, and ‘agreeableness’ did not form valid and reliable scales in the confirmatory factor analysis, and could thus not be subjected to the structural equation modelling analysis. Future research should re-assess the suitability of the short 10-item scale as a measure of the Big Five personality traits, and investigate the impact on perceived safety, trust, and use of automation.
Affiliation(s)
- Sina Nordhoff
- Department Transport and Planning, Delft University of Technology, Delft, Netherlands
- Jork Stapel
- Department Cognitive Robotics, Delft University of Technology, Delft, Netherlands
- Xiaolin He
- Department Cognitive Robotics, Delft University of Technology, Delft, Netherlands
- Riender Happee
- Department Cognitive Robotics, Delft University of Technology, Delft, Netherlands
9. “I Believe AI Can Learn from the Error. Or Can It Not?”: The Effects of Implicit Theories on Trust Repair of the Intelligent Agent. Int J Soc Robot 2022. DOI: 10.1007/s12369-022-00951-5.
10. Saßmannshausen T, Burggräf P, Hassenzahl M, Wagner J. Human trust in otherware - a systematic literature review bringing all antecedents together. Ergonomics 2022:1-23. PMID: 36062352. DOI: 10.1080/00140139.2022.2120634.
Abstract
Technological systems are becoming increasingly smarter, which causes a shift in the way they are seen: from tools used to execute specific tasks to social counterparts with whom to cooperate. To ensure that these interactions are successful, trust has proven to be the most important driver. We conducted an extensive and structured review with the goal to reveal all previously researched antecedents influencing the human trust in technology-based counterparts. In doing so, we synthesised 179 papers and uncovered 479 trust antecedents. We assigned these antecedents to four main groups. Three of them have been explored before: environment, trustee, and trustor. Within this paper, we argue for a fourth group, the interaction. This quadripartition allows the inclusion of antecedents that were not considered previously. Moreover, we critically question the practice of uncovering more and more trust antecedents, which already led to an opaque plethora and thus becomes increasingly complex for practitioners.
Practitioner summary: Future designers of intelligent and interactive technology will have to consider trust to a greater extent. We emphasise that there are far more trust antecedents - and interdependencies - to consider than the ethically motivated discussions about "Trustworthy AI" suggest. For this purpose, we derived a trust map as a sound basis.
Affiliation(s)
- Till Saßmannshausen
- Chair of International Production Engineering and Management, University of Siegen, Siegen, Germany
- Peter Burggräf
- Chair of International Production Engineering and Management, University of Siegen, Siegen, Germany
- Marc Hassenzahl
- Chair of Ubiquitous Design, University of Siegen, Siegen, Germany
- Johannes Wagner
- Chair of International Production Engineering and Management, University of Siegen, Siegen, Germany
11. Zhou Y, Cui X, Qu W, Ge Y. The effect of automation trust tendency, system reliability and feedback on users' phishing detection. Applied Ergonomics 2022; 102:103754. PMID: 35339760. DOI: 10.1016/j.apergo.2022.103754.
Abstract
As a new intrusion method in the security field, phishing poses an enormous threat to network security and personal privacy. Thus, improving the level of network security and preventing phishing are matters of great concern to both the state and researchers. A 2 (automation trust tendency) × 2 (system reliability level) × 2 (feedback) between-subjects design was adopted to study the impact of individual characteristics and system features on phishing detection. Three hundred ninety-eight participants completed a phishing email task to identify whether 40 emails were legitimate or fraudulent. The results showed that systems with feedback and high reliability improve users' performance in email identification. Users with a high tendency towards automation trust have a higher risk of phishing. However, feedback from the system helps calibrate a high automation trust tendency. These research results can promote an understanding of phishing prevention mechanisms and provide support for the design of email defence systems.
Affiliation(s)
- Ying Zhou
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Xinyue Cui
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Weina Qu
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Yan Ge
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
12. Zerilli J, Bhatt U, Weller A. How transparency modulates trust in artificial intelligence. Patterns (New York, N.Y.) 2022; 3:100455. PMID: 35465233. PMCID: PMC9023880. DOI: 10.1016/j.patter.2022.100455.
Abstract
The study of human-machine systems is central to a variety of behavioral and engineering disciplines, including management science, human factors, robotics, and human-computer interaction. Recent advances in artificial intelligence (AI) and machine learning have brought the study of human-AI teams into sharper focus. An important set of questions for those designing human-AI interfaces concerns trust, transparency, and error tolerance. Here, we review the emerging literature on this important topic, identify open questions, and discuss some of the pitfalls of human-AI team research. We present opposition (extreme algorithm aversion or distrust) and loafing (extreme automation complacency or bias) as lying at opposite ends of a spectrum, with algorithmic vigilance representing an ideal mid-point. We suggest that, while transparency may be crucial for facilitating appropriate levels of trust in AI and thus for counteracting aversive behaviors and promoting vigilance, transparency should not be conceived solely in terms of the explainability of an algorithm. Dynamic task allocation, as well as the communication of confidence and performance metrics—among other strategies—may ultimately prove more useful to users than explanations from algorithms and significantly more effective in promoting vigilance. We further suggest that, while both aversive and appreciative attitudes are detrimental to optimal human-AI team performance, strategies to curb aversion are likely to be more important in the longer term than those attempting to mitigate appreciation. Our wider aim is to channel disparate efforts in human-AI team research into a common framework and to draw attention to the ecological validity of results in this field.
Recent advances in artificial intelligence (AI) and machine learning have brought the study of human-AI (HAI) teams into sharper focus. An important set of questions for those designing HAI interfaces concerns trust—specifically, human trust in the AI systems with which they form teams. We review the literature on how perceiving an AI making mistakes violates trust and how such violations might be repaired. In doing so, we discuss the role played by various forms of algorithmic transparency in the process of trust repair, including explanations of algorithms, uncertainty estimates, and performance metrics.
Affiliation(s)
- John Zerilli
- Institute for Ethics in AI and Faculty of Law, University of Oxford, St Cross Building, St Cross Road, Oxford OX1 3U, UK
- Umang Bhatt
- Leverhulme Centre for the Future of Intelligence and Department of Engineering, University of Cambridge, Trumpington Street, Cambridge CB2 1PZ, UK; The Alan Turing Institute, British Library, 96 Euston Road, London NW1 2DB, UK
- Adrian Weller
- Leverhulme Centre for the Future of Intelligence and Department of Engineering, University of Cambridge, Trumpington Street, Cambridge CB2 1PZ, UK; The Alan Turing Institute, British Library, 96 Euston Road, London NW1 2DB, UK
13. Pereira AE, Kelly MO, Lu X, Risko EF. On our susceptibility to external memory store manipulation: examining the influence of perceived reliability and expected access to an external store. Memory 2021; 30:1-17. PMID: 34756153. DOI: 10.1080/09658211.2021.1990347.
Abstract
Offloading memory to external stores (e.g., a saved file) allows us to evade the limitations of our internal memory. One cost of this strategy is that the external memory store used may be accessible to others and, thus, may be manipulated. Here we examine how reducing the perceived reliability of an external memory store and manipulating one's expectation for future access to such a store can influence participants' susceptibility to its manipulation (i.e., endorsing manipulated information as authentic). Across three pre-registered experiments, participants were able to store to-be-remembered information in an external store. On a critical trial, we surreptitiously manipulated the information in that store. Results demonstrated that an explicit notification of a previous manipulation of that store and the warning that the store will be inaccessible in the future can decrease susceptibility to manipulation of that store. Results are discussed in the context of the metacognitive monitoring and control of memory reports in situations that involve the distribution of memory demands across both internal and external spaces.
Affiliation(s)
- April E Pereira
- Department of Psychology, University of Waterloo, Waterloo, Ontario, Canada
- Megan O Kelly
- Department of Psychology, University of Waterloo, Waterloo, Ontario, Canada
- Xinyi Lu
- Department of Psychology, University of Waterloo, Waterloo, Ontario, Canada
- Evan F Risko
- Department of Psychology, University of Waterloo, Waterloo, Ontario, Canada
14. Stuck RE, Tomlinson BJ, Walker BN. The importance of incorporating risk into human-automation trust. Theoretical Issues in Ergonomics Science 2021. DOI: 10.1080/1463922x.2021.1975170.
Affiliation(s)
- Rachel E. Stuck
- School of Psychology, Georgia Institute of Technology, Atlanta, GA, USA
- Brianna J. Tomlinson
- School of Interactive Computing, Georgia Institute of Technology, Atlanta, GA, USA
- Bruce N. Walker
- School of Psychology, Georgia Institute of Technology, Atlanta, GA, USA
- School of Interactive Computing, Georgia Institute of Technology, Atlanta, GA, USA
15. Kraus J, Scholz D, Baumann M. What's Driving Me? Exploration and Validation of a Hierarchical Personality Model for Trust in Automated Driving. Human Factors 2021; 63:1076-1105. PMID: 32633564. DOI: 10.1177/0018720820922653.
Abstract
OBJECTIVE: This paper presents a comprehensive investigation of personality traits related to trust in automated vehicles. A hierarchical personality model based on Mowen's (2000) 3M model is explored in a first and replicated in a second study.
BACKGROUND: Trust in automation is established in a complex psychological process involving user-, system- and situation-related variables. In this process, personality traits have been viewed as an important source of variance.
METHOD: Dispositional variables on three levels were included in an exploratory, hierarchical personality model (full model) of dynamic learned trust in automation, which was refined on the basis of structural equation modeling carried out in Study 1 (final model). Study 2 replicated the final model in an independent sample.
RESULTS: In both studies, the personality model showed a good fit and explained a large proportion of variance in trust in automation. The combined evidence supports the role of extraversion, neuroticism, and self-esteem at the elemental level; affinity for technology and dispositional interpersonal trust at the situational level; and propensity to trust in automation and a priori acceptability of automated driving at the surface level in the prediction of trust in automation.
CONCLUSION: Findings confirm that personality plays a substantial role in trust formation and provide evidence of the involvement of user dispositions not previously investigated in relation to trust in automation: self-esteem, dispositional interpersonal trust, and affinity for technology.
APPLICATION: Implications for personalization of information campaigns, driver training, and user interfaces for trust calibration in automated driving are discussed.
16. Miller L, Kraus J, Babel F, Baumann M. More Than a Feeling-Interrelation of Trust Layers in Human-Robot Interaction and the Role of User Dispositions and State Anxiety. Front Psychol 2021; 12:592711. PMID: 33912098. PMCID: PMC8074795. DOI: 10.3389/fpsyg.2021.592711.
Abstract
With service robots becoming more ubiquitous in social life, interaction design needs to adapt to novice users and the associated uncertainty in the first encounter with this technology in new emerging environments. Trust in robots is an essential psychological prerequisite to achieve safe and convenient cooperation between users and robots. This research focuses on psychological processes in which user dispositions and states affect trust in robots, which in turn is expected to impact the behavior and reactions in the interaction with robotic systems. In a laboratory experiment, the influence of propensity to trust in automation and negative attitudes toward robots on state anxiety, trust, and comfort distance toward a robot were explored. Participants were approached by a humanoid domestic robot two times and indicated their comfort distance and trust. The results favor the differentiation and interdependence of dispositional, initial, and dynamic learned trust layers. A mediation from the propensity to trust to initial learned trust by state anxiety provides an insight into the psychological processes through which personality traits might affect interindividual outcomes in human-robot interaction (HRI). The findings underline the meaningfulness of user characteristics as predictors for the initial approach to robots and the importance of considering users’ individual learning history regarding technology and robots in particular.
Affiliation(s)
- Linda Miller
- Department Human Factors, Institute of Psychology and Education, Ulm University, Ulm, Germany
- Johannes Kraus
- Department Human Factors, Institute of Psychology and Education, Ulm University, Ulm, Germany
- Franziska Babel
- Department Human Factors, Institute of Psychology and Education, Ulm University, Ulm, Germany
- Martin Baumann
- Department Human Factors, Institute of Psychology and Education, Ulm University, Ulm, Germany
17. Secure Autonomous Cloud Brained Humanoid Robot Assisting Rescuers in Hazardous Environments. Electronics 2021. DOI: 10.3390/electronics10020124.
Abstract
On 31 January 2020, the World Health Organization (WHO) declared a global emergency after the discovery of a new pandemic disease that caused severe lung problems. The spread of the disease at an international level drew the attention of many researchers who attempted to find solutions to ameliorate the problem. The implementation of robotics has been one of the proposed solutions, as automated humanoid robots can be used in many situations and limit the exposure of humans to the disease. Many humanoid robot implementations are found in the literature; however, most of them have some distinct drawbacks, such as a high cost and complexity. Our research proposes a novel, secure and efficient programmable system using a humanoid robot that is able to autonomously move and detect survivors in emergency scenarios, with the potential to communicate verbally with victims. The proposed humanoid robot is powered by the cloud and benefits from the powerful storage, computation, and communication resources of a typical modern data center. In order to evaluate the proposed system, we conducted multiple experiments in synthetic hazardous environments.
18. The effects of personality and locus of control on trust in humans versus artificial intelligence. Heliyon 2020; 6:e04572. PMID: 32923706. PMCID: PMC7475230. DOI: 10.1016/j.heliyon.2020.e04572.
Abstract
Introduction: We are increasingly exposed to applications that embed some sort of artificial intelligence (AI) algorithm, and there is a general belief that people trust any AI-based product or service without question. This study investigated the effect of personality characteristics (Big Five Inventory (BFI) traits and locus of control (LOC)) on trust behaviour, and the extent to which people trust advice from an AI-based algorithm more than from humans, in a decision-making card game.
Method: One hundred and seventy-one adult volunteers decided whether the final covered card, in a five-card sequence over ten trials, had a higher/lower number than the second-to-last card. They received either no suggestion (control), recommendations attributed to previous participants (humans), or recommendations attributed to an AI-based algorithm (AI). Trust behaviour was measured as response time and concordance (the number of participants' responses that matched the suggestion), and trust beliefs were measured as self-reported trust ratings.
Results: LOC influenced trust concordance and trust ratings, which were correlated. In particular, LOC negatively predicted trust concordance beyond the BFI dimensions. As LOC levels increased, people were less likely to follow suggestions from either humans or the AI. Neuroticism negatively predicted trust ratings. Openness predicted reaction time, but only for suggestions from previous participants. However, people followed the AI suggestions more than those from humans, and self-reported that they believed such recommendations more.
Conclusions: The results indicate that LOC accounts for significant variance in trust concordance and trust ratings, predicting beyond the BFI traits, and affects how people select whom they trust, whether humans or AI. These findings also support the phenomenon of AI-based algorithm appreciation.
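For readers who want to see the shape of the analysis described above, here is a minimal, hypothetical sketch of a hierarchical regression testing whether LOC predicts trust concordance beyond the BFI traits; the column names and data file are assumptions, not the authors' materials.

```python
# Hypothetical sketch of a hierarchical regression: does locus of control (LOC)
# predict trust concordance over and above the Big Five (BFI) traits?
# Column names and the CSV file are assumptions, not the study's materials.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("trust_card_game.csv")  # one row per participant (assumed)

bfi_terms = "openness + conscientiousness + extraversion + agreeableness + neuroticism"
step1 = smf.ols(f"concordance ~ {bfi_terms}", data=df).fit()
step2 = smf.ols(f"concordance ~ {bfi_terms} + loc", data=df).fit()

print(f"R-squared change for LOC: {step2.rsquared - step1.rsquared:.3f}")
print(step2.compare_f_test(step1))   # F-test for the increment over the BFI-only model
print(step2.params["loc"])           # a negative sign would mirror the reported effect
```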
19. Would You Fix This Code for Me? Effects of Repair Source and Commenting on Trust in Code Repair. Systems 2020. DOI: 10.3390/systems8010008.
Abstract
Automation and autonomous systems are quickly becoming a more engrained aspect of modern society. The need for effective, secure computer code in a timely manner has led to the creation of automated code repair techniques to resolve issues quickly. However, the research to date has largely ignored the human factors aspects of automated code repair. The current study explored trust perceptions, reuse intentions, and trust intentions in code repair with human generated patches versus automated code repair patches. In addition, comments in the headers were manipulated to determine the effect of the presence or absence of comments in the header of the code. Participants were 51 programmers with at least 3 years’ experience and knowledge of the C programming language. Results indicated only repair source (human vs. automated code repair) had a significant influence on trust perceptions and trust intentions. Specifically, participants consistently reported higher levels of perceived trustworthiness, intentions to reuse, and trust intentions for human referents compared to automated code repair. No significant effects were found for comments in the headers.
20. Kraus J, Scholz D, Messner EM, Messner M, Baumann M. Scared to Trust? - Predicting Trust in Highly Automated Driving by Depressiveness, Negative Self-Evaluations and State Anxiety. Front Psychol 2020; 10:2917. PMID: 32038353. PMCID: PMC6989472. DOI: 10.3389/fpsyg.2019.02917.
Abstract
The advantages of automated driving can only come fully into play if these systems are used in an appropriate way, which means that they are neither used in situations they are not designed for (misuse) nor used in a too restricted manner (disuse). Trust in automation has been found to be an essential psychological basis for appropriate interaction with automated systems. Well-balanced system use requires a calibrated level of trust in correspondence with the actual ability of an automated system. Given these far-reaching implications of trust for safe and efficient system use, the psychological processes by which trust is dynamically calibrated prior to and during the use of automated technology need to be understood. To date, only a restricted body of research has investigated the role of personality and emotional states in the formation of trust in automated systems. In this research, the role of the personality variables depressiveness, self-efficacy, self-esteem, and locus of control in the experience of anxiety before the first experience with a highly automated driving system was investigated. Additionally, the relationship of the investigated personality variables and anxiety to the subsequent formation of trust in automation was investigated. In a driving simulator study, personality variables and anxiety were measured before the interaction with an automated system. Trust in the system was measured after participants drove with the system for a while. Trust in the system was significantly predicted by state anxiety and the personality characteristics self-esteem and self-efficacy. The relationships of self-esteem and self-efficacy with trust were mediated by state anxiety, as supported by significant specific indirect effects. While the direct relationship of depressiveness with trust in automation was not significant, an indirect effect through the experience of anxiety was supported. Locus of control did not show a significant association with trust in automation. The reported findings support the importance of considering individual differences in negative self-evaluations and anxiety when users are introduced to a new automated system, as these contribute to individual differences in trust in automation. Implications for future research as well as for the design of automated technology in general and automated driving systems in particular are discussed.
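The mediation result described here (self-esteem and self-efficacy relating to trust through state anxiety) is the kind of indirect effect often tested with a percentile bootstrap. The following sketch is purely illustrative, with assumed variable names and data file, and is not the authors' analysis (which may have used SEM with latent variables).

```python
# Purely illustrative bootstrap of an indirect (mediated) effect of the kind reported
# above (e.g., self_esteem -> state_anxiety -> trust). Variable names and the data
# file are assumptions; the authors' actual analysis may differ.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def indirect_effect(data):
    # a-path: predictor -> mediator; b-path: mediator -> outcome controlling for predictor
    a = smf.ols("state_anxiety ~ self_esteem", data=data).fit().params["self_esteem"]
    b = smf.ols("trust ~ state_anxiety + self_esteem", data=data).fit().params["state_anxiety"]
    return a * b  # product-of-coefficients estimate

df = pd.read_csv("simulator_study.csv")  # hypothetical per-participant data
boot = [indirect_effect(df.sample(frac=1, replace=True, random_state=i)) for i in range(5000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(df):.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```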
Affiliation(s)
- Johannes Kraus
- Department of Human Factors, Institute of Psychology and Education, Ulm University, Ulm, Germany
- David Scholz
- Department of Human Factors, Institute of Psychology and Education, Ulm University, Ulm, Germany
- Eva-Maria Messner
- Department of Clinical Psychology and Psychotherapy, Institute of Psychology and Education, Ulm University, Ulm, Germany
- Matthias Messner
- Department of Clinical and Health Psychology, Institute of Psychology and Education, Ulm University, Ulm, Germany
- Martin Baumann
- Department of Human Factors, Institute of Psychology and Education, Ulm University, Ulm, Germany
21. Li Y, Li W, Yang Y, Wang Q. Feedback and Direction Sources Influence Navigation Decision Making on Experienced Routes. Front Psychol 2019; 10:2104. PMID: 31572278. PMCID: PMC6753235. DOI: 10.3389/fpsyg.2019.02104.
Abstract
When navigating in a new environment, people typically resort to external guidance such as a Global Positioning System (GPS) or other people. However, in the real world, even after navigators have learned a route, they may still prefer to travel with external guidance. We explored how the availability of feedback and the source of external guidance affect navigation decision-making on experienced routes. In three experiments, participants navigated a simulated route three times and then verbally confirmed that they had learned it. They then traveled the same route again, accompanied by no guidance, correct guidance, or incorrect guidance, the latter two provided by a GPS (Experiment 1), a stranger (Experiment 2), or a friend (Experiment 3). Half of the participants received immediate feedback on their navigation decisions, while the other half did not know whether they had selected the correct directions. Generally, without feedback, participants relied on external guidance regardless of the direction source. Results also showed that participants trusted the GPS the most, but performed best with a friend as the direction source. With feedback, participants did not differ in performance between the correct and incorrect guidance conditions, indicating that feedback plays a critical role in evaluating the reliability of external guidance. Our findings suggest that incorrect guidance without any feedback might disturb navigation decision-making, and that this effect is further moderated by the perceived credibility of direction sources. We discuss these results within the context of navigation decision-making theory and consider implications for wayfinding as a social activity.
Affiliation(s)
- Yu Li
- Department of Psychology, Sun Yat-sen University, Guangzhou, China
- Weijia Li
- Department of Psychology, Sun Yat-sen University, Guangzhou, China
- Yingying Yang
- Department of Psychology, Montclair State University, Montclair, NJ, United States
- Qi Wang
- Department of Psychology, Sun Yat-sen University, Guangzhou, China
22. Liu P, Du Y, Xu Z. Machines versus humans: People's biased responses to traffic accidents involving self-driving vehicles. Accident Analysis and Prevention 2019; 125:232-240. PMID: 30798148. DOI: 10.1016/j.aap.2019.02.012.
Abstract
Although self-driving vehicles (SDVs) bring with them the promise of improved traffic safety, they cannot eliminate all crashes. Little is known about whether people respond to crashes involving SDVs and human drivers differently, and why. Across five vignette-based experiments in two studies (total N = 1267), for the first time, we observed that participants had a tendency to perceive traffic crashes involving SDVs to be more severe than those involving conventionally human-driven vehicles (HDVs), regardless of their severity (injury or fatality) or cause (SDVs/HDVs or others). Furthermore, we found that this biased response could be a result of people's reliance on the affect heuristic. More specifically, higher prior negative affect tagged with an SDV (vs. an HDV) intensifies people's negative affect evoked by crashes involving the SDV (vs. those involving the HDV), which subsequently results in higher perceived severity and lower acceptability of the crash. Our results imply that people's over-reaction to crashes involving SDVs may be a psychological barrier to their adoption, and that we may need to forestall a less stringent introduction policy that allows SDVs on public roads, as it may lead to more crashes that could deter people from adopting SDVs. We discuss other theoretical and practical implications of our results and suggest potential approaches to de-biasing people's responses to crashes involving SDVs.
Affiliation(s)
- Peng Liu
- College of Management and Economics, Tianjin University, Tianjin 300072, PR China
- Yong Du
- College of Management and Economics, Tianjin University, Tianjin 300072, PR China
- Zhigang Xu
- School of Information Engineering, Chang'an University, Xi'an, Shaanxi 710064, PR China
23. Somon B, Campagne A, Delorme A, Berberian B. Human or not human? Performance monitoring ERPs during human agent and machine supervision. Neuroimage 2019; 186:266-277. DOI: 10.1016/j.neuroimage.2018.11.013.
24. Kohn SC, Quinn D, Pak R, de Visser EJ, Shaw TH. Trust Repair Strategies with Self-Driving Vehicles: An Exploratory Study. Proceedings of the Human Factors and Ergonomics Society Annual Meeting 2018. DOI: 10.1177/1541931218621254.
Abstract
Trust is important for any relationship, especially so with self-driving vehicles: passengers must trust these vehicles with their life. Given the criticality of maintaining passenger’s trust, yet the dearth of self-driving trust repair research relative to the growth of the self-driving industry, we conducted two studies to better understand how people view errors committed by self-driving cars, as well as what types of trust repair efforts may be viable for use by self-driving cars. Experiment 1 manipulated error type and driver types to determine whether driver type (human versus self-driving) affected how participants assessed errors. Results indicate that errors committed by both driver types are not assessed differently. Given the similarity, experiment 2 focused on self-driving cars, using a wide variety of trust repair efforts to confirm human-human research and determine which repairs were most effective at mitigating the effect of violations on trust. We confirmed the pattern of trust repairs in human-human research, and found that some apologies were more effective at repairing trust than some denials. These findings help focus future research, while providing broad guidance as to potential methods for approaching trust repair with self-driving cars.
25. Balfe N, Sharples S, Wilson JR. Understanding Is Key: An Analysis of Factors Pertaining to Trust in a Real-World Automation System. Human Factors 2018; 60:477-495. PMID: 29613815. PMCID: PMC5958411. DOI: 10.1177/0018720818761256.
Abstract
OBJECTIVE: This paper aims to explore the role of factors pertaining to trust in real-world automation systems through the application of observational methods in a case study from the railway sector.
BACKGROUND: Trust in automation is widely acknowledged as an important mediator of automation use, but the majority of the research on automation trust is based on laboratory work. In contrast, this work explored trust in a real-world setting.
METHOD: Experienced rail operators in four signaling centers were observed for 90 min, and their activities were coded into five mutually exclusive categories. Their observed activities were analyzed in relation to their reported trust levels, collected via a questionnaire.
RESULTS: The results showed clear differences in activity, even when circumstances on the workstations were very similar, and significant differences in some trust dimensions were found between groups exhibiting different levels of intervention and time not involved with signaling.
CONCLUSION: Although the empirical, lab-based studies in the literature have consistently found that reliability and competence of the automation are the most important aspects of trust development, understanding of the automation emerged as the strongest dimension in this study. The implications are that development and maintenance of trust in real-world, safety-critical automation systems may be distinct from artificial laboratory automation.
APPLICATION: The findings have important implications for emerging automation concepts in diverse industries including highly automated vehicles and Internet of things.
26. Somon B, Campagne A, Delorme A, Berberian B. Performance Monitoring Applied to System Supervision. Front Hum Neurosci 2017; 11:360. PMID: 28744209. PMCID: PMC5504305. DOI: 10.3389/fnhum.2017.00360.
Abstract
Nowadays, automation is present in every aspect of our daily life and has some benefits. Nonetheless, empirical data suggest that traditional automation has many negative performance and safety consequences as it changed task performers into task supervisors. In this context, we propose to use recent insights into the anatomical and neurophysiological substrates of action monitoring in humans, to help further characterize performance monitoring during system supervision. Error monitoring is critical for humans to learn from the consequences of their actions. A wide variety of studies have shown that the error monitoring system is involved not only in our own errors, but also in the errors of others. We hypothesize that the neurobiological correlates of the self-performance monitoring activity can be applied to system supervision. At a larger scale, a better understanding of system supervision may allow its negative effects to be anticipated or even countered. This review is divided into three main parts. First, we assess the neurophysiological correlates of self-performance monitoring and their characteristics during error execution. Then, we extend these results to include performance monitoring and error observation of others or of systems. Finally, we provide further directions in the study of system supervision and assess the limits preventing us from studying a well-known phenomenon: the Out-Of-the-Loop (OOL) performance problem.
Affiliation(s)
- Bertille Somon
- ONERA, Information Processing and Systems Department, Salon Air, France; Univ. Grenoble Alpes, CNRS, LPNC UMR 5105, Grenoble, France
- Arnaud Delorme
- Centre de Recherche Cerveau & Cognition, Pavillon Baudot, Hopital Purpan, BP-25202, Toulouse, France; Swartz Center for Computational Neurosciences, University of California, San Diego, La Jolla, CA, United States
- Bruno Berberian
- ONERA, Information Processing and Systems Department, Salon Air, France
27. de Visser EJ, Monfort SS, Goodyear K, Lu L, O'Hara M, Lee MR, Parasuraman R, Krueger F. A Little Anthropomorphism Goes a Long Way. Human Factors 2017; 59:116-133. PMID: 28146673. PMCID: PMC5477060. DOI: 10.1177/0018720816687205.
Abstract
OBJECTIVE: We investigated the effects of exogenous oxytocin on trust, compliance, and team decision making with agents varying in anthropomorphism (computer, avatar, human) and reliability (100%, 50%).
BACKGROUND: Authors of recent work have explored psychological similarities in how people trust humanlike automation compared with how they trust other humans. Exogenous administration of oxytocin, a neuropeptide associated with trust among humans, offers a unique opportunity to probe the anthropomorphism continuum of automation to infer when agents are trusted like another human or merely a machine.
METHOD: Eighty-four healthy male participants collaborated with automated agents varying in anthropomorphism that provided recommendations in a pattern recognition task.
RESULTS: Under placebo, participants exhibited less trust and compliance with automated aids as the anthropomorphism of those aids increased. Under oxytocin, participants interacted with aids on the extremes of the anthropomorphism continuum similarly to placebos but increased their trust, compliance, and performance with the avatar, an agent on the midpoint of the anthropomorphism continuum.
CONCLUSION: This study provides the first evidence that administration of exogenous oxytocin affected trust, compliance, and team decision making with automated agents. These effects provide support for the premise that oxytocin increases affinity for social stimuli in automated aids.
APPLICATION: Designing automation to mimic basic human characteristics is sufficient to elicit behavioral trust outcomes that are driven by neurological processes typically observed in human-human interactions. Designers of automated systems should consider the task, the individual, and the level of anthropomorphism to achieve the desired outcome.
Collapse
Affiliation(s)
| | - Samuel S Monfort
- George Mason University, Fairfax, Virginia
| | - Kimberly Goodyear
- Brown University, Providence, Rhode Island
- George Mason University, Fairfax, Virginia
| | - Li Lu
- George Mason University, Fairfax, Virginia
| | - Martin O'Hara
- Virginia Hospital Center, Fairfax Hospital, Arlington, Virginia
- George Mason University, Fairfax, Virginia
| | - Mary R Lee
- National Institute on Alcohol Abuse and Alcoholism, Bethesda, Maryland
- George Mason University, Fairfax, Virginia
| | | | | |
Collapse
|
28
|
Madhavan P, Wiegmann DA. A New Look at the Dynamics of Human-Automation Trust: Is Trust in Humans Comparable to Trust in Machines? PROCEEDINGS OF THE HUMAN FACTORS AND ERGONOMICS SOCIETY ANNUAL MEETING 2004. [DOI: 10.1177/154193120404800365] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
The trust placed in automated diagnostic aids by the human operator is one of the most critical psychological factors that influence operator reliance on decision support systems. Studies examining the nature of human interaction with automation have revealed that users have a propensity to apply norms of human-human interpersonal interaction to their interactions with ‘intelligent machines’. Nevertheless, there are subtle differences in the manner in which humans perceive and react to automated aids compared with human teammates. The present review compares the process of trust development in human-automation teams with that in human-human partnerships, specifically in the context of dyads comprising a primary decision maker and either a human ‘advisor’ or an intelligent automated decision support system. A conceptual framework that synthesizes and contrasts the process of trust development in humans versus automation is proposed. Potential implications of this research include the improved design of decision support systems, achieved by incorporating features into automated aids that elicit operator responses mirroring those observed in human-human interpersonal interaction.
Collapse
Affiliation(s)
- Poornima Madhavan
- Aviation Human Factors Division, Institute of Aviation University of Illinois at Urbana-Champaign
| | - Douglas A. Wiegmann
- Aviation Human Factors Division, Institute of Aviation University of Illinois at Urbana-Champaign
| |
Collapse
|
29
|
Abstract
Automation does not mean humans are replaced; quite the opposite. Increasingly, humans are asked to interact with automation in complex and typically large-scale systems, including aircraft and air traffic control, nuclear power, manufacturing plants, military systems, homes, and hospitals. This is not an easy or error-free task for either the system designer or the human operator/automation supervisor, especially as computer technology becomes ever more sophisticated. This review outlines recent research and challenges in the area, including taxonomies and qualitative models of human-automation interaction; descriptions of automation-related accidents and studies of adaptive automation; and social, political, and ethical issues.
Collapse
|
30
|
Goodyear K, Parasuraman R, Chernyak S, de Visser E, Madhavan P, Deshpande G, Krueger F. An fMRI and effective connectivity study investigating miss errors during advice utilization from human and machine agents. Soc Neurosci 2016; 12:570-581. [DOI: 10.1080/17470919.2016.1205131] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 01/31/2023]
Affiliation(s)
- Kimberly Goodyear
- Molecular Neuroscience Department, George Mason University, Fairfax, VA, USA
| | - Raja Parasuraman
- Department of Psychology, George Mason University, Fairfax, VA, USA
| | - Sergey Chernyak
- Molecular Neuroscience Department, George Mason University, Fairfax, VA, USA
| | - Ewart de Visser
- Department of Psychology, George Mason University, Fairfax, VA, USA
- Human Factors and UX Research, Perceptronics Solutions, Inc., Falls Church, VA, USA
| | - Poornima Madhavan
- Board on Human-Systems Integration, National Academies of Sciences, Engineering and Medicine, Washington, DC, USA
| | - Gopikrishna Deshpande
- Auburn University MRI Research Center, Department of Electrical and Computer Engineering, Auburn University, Auburn, AL, USA
- Department of Psychology, Auburn University, Auburn, AL, USA
- Alabama Advanced Imaging Consortium, Auburn University and University of Alabama, Birmingham, AL, USA
| | - Frank Krueger
- Department of Psychology, George Mason University, Fairfax, VA, USA
| |
Collapse
|
31
|
Clare AS, Cummings ML, Repenning NP. Influencing Trust for Human-Automation Collaborative Scheduling of Multiple Unmanned Vehicles. HUMAN FACTORS 2015; 57:1208-1218. [PMID: 26060238 DOI: 10.1177/0018720815587803] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/02/2014] [Accepted: 04/03/2015] [Indexed: 06/04/2023]
Abstract
OBJECTIVE We examined the impact of priming on operator trust and system performance when supervising a decentralized network of heterogeneous unmanned vehicles (UVs). BACKGROUND Advances in autonomy have enabled a future vision of single-operator control of multiple heterogeneous UVs. Real-time scheduling for multiple UVs in uncertain environments requires the computational ability of optimization algorithms combined with the judgment and adaptability of human supervisors. Because of system and environmental uncertainty, appropriate operator trust will be instrumental in maintaining high system performance and preventing cognitive overload. METHOD Three groups of operators experienced different levels of trust priming prior to conducting simulated missions in an existing, multiple-UV simulation environment. RESULTS Participants who frequently play computer and video games were found to have a higher propensity to overtrust automation. By priming gamers to lower their initial trust to a more appropriate level, system performance improved by 10% compared with gamers who were primed to have higher trust in the automation. CONCLUSION Priming was successful at adjusting the operator's initial and dynamic trust in the automated scheduling algorithm, which had a substantial impact on system performance. APPLICATION These results have important implications for personnel selection and training for future multi-UV systems under human supervision. Although gamers may bring valuable skills, they may also be prone to automation bias. Priming during training and regular priming throughout missions may be one method for overcoming this propensity to overtrust automation.
Collapse
|
32
|
|
33
|
Xu J, Le K, Deitermann A, Montague E. How different types of users develop trust in technology: a qualitative analysis of the antecedents of active and passive user trust in a shared technology. APPLIED ERGONOMICS 2014; 45:1495-1503. [PMID: 24882059 PMCID: PMC4237160 DOI: 10.1016/j.apergo.2014.04.012] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/01/2013] [Revised: 03/21/2014] [Accepted: 04/14/2014] [Indexed: 06/03/2023]
Abstract
The aim of this study was to investigate the antecedents of trust in technology for active users and passive users working with a shared technology. According to prominence-interpretation theory, to assess the trustworthiness of a technology, a person must first perceive and evaluate elements of the system that includes the technology. An experimental study was conducted with 54 participants who worked in two-person teams in a multi-task environment with a shared technology. Trust in technology was measured using a trust-in-technology questionnaire, and antecedents of trust were elicited using an open-ended question. A list of antecedents of trust in technology was derived using qualitative analysis techniques. The following categories of antecedents emerged: technology factors, user factors, and task factors. Similarities and differences between active and passive users' responses, in terms of trust in technology, are discussed.
Collapse
Affiliation(s)
- Jie Xu
- University of Wisconsin - Madison, Madison, WI, USA
| | - Kim Le
- University of Wisconsin - Madison, Madison, WI, USA
| | | | - Enid Montague
- Division of General Internal Medicine, Feinberg School of Medicine, Northwestern University, Rubloff Building 10th Floor, 750 N Lake Shore, Chicago, IL 60611, USA.
| |
Collapse
|
34
|
Towards the Development of an Inter-cultural Scale to Measure Trust in Automation. CROSS-CULTURAL DESIGN 2014. [DOI: 10.1007/978-3-319-07308-8_4] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/01/2023]
|
35
|
Beller J, Heesen M, Vollrath M. Improving the driver-automation interaction: an approach using automation uncertainty. HUMAN FACTORS 2013; 55:1130-1141. [PMID: 24745204 DOI: 10.1177/0018720813482327] [Citation(s) in RCA: 35] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
OBJECTIVE The aim of this study was to evaluate whether communicating automation uncertainty improves the driver-automation interaction. BACKGROUND A mistaken belief that a system is infallible may provoke automation misuse and can lead to severe consequences in case of automation failure. The presentation of automation uncertainty may prevent this false system understanding and, as shown by previous studies, may have numerous benefits. Few studies, however, have clearly shown the potential of communicating uncertainty information in driving. The current study fills this gap. METHOD We conducted a driving simulator experiment, varying the presented uncertainty information between participants (no uncertainty information vs. uncertainty information) and the automation reliability (high vs. low) within participants. Participants interacted with a highly automated driving system while engaging in secondary tasks and were required to cooperate with the automation to drive safely. RESULTS Quantile regressions and multilevel modeling showed that the presentation of uncertainty information increases the time to collision in the case of automation failure. Furthermore, the data indicated improved situation awareness and better knowledge of the system's fallibility in the experimental group. Consequently, the automation with the uncertainty symbol received higher trust ratings and increased acceptance. CONCLUSION The presentation of automation uncertainty through a symbol improves overall driver-automation cooperation. APPLICATION Most automated systems in driving could benefit from displaying reliability information. This display might improve the acceptance of fallible systems and further enhance driver-automation cooperation.
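The statistical approach described above can be made concrete with a small sketch. The code below is an illustration only: the data are simulated and the variable names (uncertainty_shown, reliability_low, ttc) are assumptions rather than the authors' materials. It runs quantile regressions of time-to-collision on the display and reliability conditions with statsmodels, looking at both the median and the risky lower tail of the distribution.

```python
# Illustrative sketch only: simulated data and assumed variable names, not the
# authors' materials. Quantile regression of time-to-collision (TTC) on the
# uncertainty-display and reliability conditions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "uncertainty_shown": rng.integers(0, 2, n),   # 0 = no info, 1 = uncertainty symbol
    "reliability_low":   rng.integers(0, 2, n),   # 0 = high, 1 = low reliability
})
# Hypothetical generative assumption: showing uncertainty lengthens TTC at failure.
df["ttc"] = (2.0 + 0.8 * df["uncertainty_shown"]
             - 0.3 * df["reliability_low"]
             + rng.gumbel(0.0, 0.5, n))

# Median (0.5) and lower-tail (0.1) quantiles; the lower tail captures the
# riskiest responses, which are often of most safety interest.
for q in (0.1, 0.5):
    fit = smf.quantreg("ttc ~ uncertainty_shown + reliability_low", df).fit(q=q)
    print(f"quantile {q}:")
    print(fit.params, "\n")
```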
Collapse
|
36
|
Verberne FMF, Ham J, Midden CJH. Trust in smart systems: sharing driving goals and giving information to increase trustworthiness and acceptability of smart systems in cars. HUMAN FACTORS 2012; 54:799-810. [PMID: 23156624 DOI: 10.1177/0018720812443825] [Citation(s) in RCA: 51] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
Abstract
OBJECTIVE We examine whether trust in smart systems is generated analogously to trust in humans and whether the automation level of smart systems affects trustworthiness and acceptability of those systems. BACKGROUND Trust is an important factor when considering acceptability of automation technology. As shared goals lead to social trust, and intelligent machines tend to be treated like humans, the authors expected that shared driving goals would also lead to increased trustworthiness and acceptability of adaptive cruise control (ACC) systems. METHOD In an experiment, participants (N = 57) were presented with descriptions of three ACCs with different automation levels that were described as systems that either shared their driving goals or did not. Trustworthiness and acceptability of all the ACCs were measured. RESULTS ACCs sharing the driving goals of the user were more trustworthy and acceptable than were ACCs not sharing the driving goals of the user. Furthermore, ACCs that took over driving tasks while providing information were more trustworthy and acceptable than were ACCs that took over driving tasks without providing information. Trustworthiness mediated the effects of both driving goals and automation level on acceptability of ACCs. CONCLUSION As when trusting other humans, trusting smart systems depends on those systems sharing the user's goals. Furthermore, based on their description, smart systems that take over tasks are judged more trustworthy and acceptable when they also provide information. APPLICATION For optimal acceptability of smart systems, goals of the user should be shared by the smart systems, and smart systems should provide information to their user.
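As a rough illustration of the mediation logic reported above (trustworthiness carrying the effect of goal sharing on acceptability), the sketch below runs a simple indirect-effect test with a percentile bootstrap. The data are simulated and the variable names are assumptions; this is not the authors' analysis.

```python
# Hedged sketch (simulated data, assumed variable names), not the authors' analysis:
# a simple indirect-effect (mediation) check with a percentile bootstrap, testing
# whether trustworthiness carries the effect of goal sharing on acceptability.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 57
shares_goals = rng.integers(0, 2, n).astype(float)               # 0 = no, 1 = yes
trustworthiness = 0.7 * shares_goals + rng.normal(scale=0.5, size=n)
acceptability = 0.8 * trustworthiness + rng.normal(scale=0.5, size=n)

def slope_of_last_predictor(y, X):
    # OLS with an intercept; returns the coefficient of the last predictor column.
    return sm.OLS(y, sm.add_constant(X)).fit().params[-1]

a = slope_of_last_predictor(trustworthiness, shares_goals)                       # X -> M
b = slope_of_last_predictor(acceptability,
                            np.column_stack([shares_goals, trustworthiness]))   # M -> Y | X
print("point estimate of indirect effect (a*b):", round(a * b, 3))

# Percentile bootstrap of the indirect effect.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    a_i = slope_of_last_predictor(trustworthiness[idx], shares_goals[idx])
    b_i = slope_of_last_predictor(acceptability[idx],
                                  np.column_stack([shares_goals[idx], trustworthiness[idx]]))
    boot.append(a_i * b_i)
print("95% bootstrap CI:", np.percentile(boot, [2.5, 97.5]).round(3))
```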
Collapse
Affiliation(s)
- Frank M F Verberne
- Department of Human-Technology Interaction, Eindhoven University of Technology, IPO 1.27, P.O. Box 513, 5600 MB Eindhoven, Netherlands.
| | | | | |
Collapse
|
37
|
Abstract
OBJECTIVE The current study examined human-human reliance in a computer-based scenario in which participants interacted with a human aid and an automated tool simultaneously. BACKGROUND Reliance on others is complex, and few studies have examined human-human reliance in the context of automation. Past research found that humans are biased in their perceived utility of automated tools such that they view them as more accurate than humans. Prior reviews have postulated differences in human-human versus human-machine reliance, yet few studies have examined such reliance when individuals are presented with divergent information from different sources. METHOD Participants (N = 40) engaged in the Convoy Leader experiment. They selected a convoy route based on explicit guidance from a human aid and information from an automated map. Subjective and behavioral human-human reliance indices were assessed. Perceptions of risk were manipulated by creating three scenarios (low, moderate, and high) that varied in the amount of vulnerability (i.e., potential for attack) associated with the convoy routes. RESULTS Results indicated that participants reduced their behavioral reliance on the human aid when faced with higher risk decisions (suggesting increased reliance on the automation); however, there were no reported differences in intentions to rely on the human aid relative to the automation. CONCLUSION The current study demonstrated that when individuals are provided information from both a human aid and automation, their reliance on the human aid decreased during high-risk decisions. APPLICATION This study adds to a growing understanding of the biases and preferences that exist during complex human-human and human-machine interactions.
Collapse
Affiliation(s)
- Joseph B Lyons
- Air Force Office of Scientific Research, 875 N. Randolph Street, Suite 325, Arlington, VA 22203, USA.
| | | |
Collapse
|
38
|
Balfe N, Wilson JR, Sharples S, Clarke T. Development of design principles for automated systems in transport control. ERGONOMICS 2012; 55:37-54. [PMID: 22176483 DOI: 10.1080/00140139.2011.636456] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/31/2023]
Abstract
UNLABELLED This article reports the results of a qualitative study investigating attitudes towards, and opinions of, an advanced automation system currently used in UK rail signalling. In-depth interviews were held with 10 users; key issues associated with automation were identified, and the automation's impact on the signalling task was investigated. The interview data highlighted the importance of the signallers' understanding of the automation and their (in)ability to predict its outputs. The interviews also covered the methods used by signallers to interact with and control the automation, and the perceived effects on their workload. The results indicate that despite a generally low level of understanding and ability to predict the actions of the automation system, signallers have developed largely successful coping mechanisms that enable them to use the technology effectively. These findings, along with parallel work identifying desirable attributes of automation from the literature in the area, were used to develop 12 principles of automation that can be used to help design new systems that better facilitate cooperative working. PRACTITIONER SUMMARY The work reported in this article was completed with the active involvement of operational rail staff who regularly use automated systems in rail signalling. The outcomes are currently being used to inform decisions on the extent and type of automation and user interfaces in future generations of rail control systems.
Collapse
Affiliation(s)
- Nora Balfe
- Ergonomics Team , Network Rail, London, UK.
| | | | | | | |
Collapse
|
39
|
Lyons JB, Stokes CK, Eschleman KJ, Alarcon GM, Barelka AJ. Trustworthiness and IT suspicion: an evaluation of the nomological network. HUMAN FACTORS 2011; 53:219-229. [PMID: 21830509 DOI: 10.1177/0018720811406726] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/31/2023]
Abstract
OBJECTIVE The authors evaluated the validity of trust in automation and information technology (IT) suspicion by examining their factor structure and relationship with decision confidence. BACKGROUND Research on trust has burgeoned, yet the dimensionality of trust remains elusive. Some researchers suggest that trust is a unidimensional construct, whereas others believe it is multidimensional. Additionally, novel constructs, such as IT suspicion, have yet to be distinguished from trust in automation. Research is needed to examine the overlap between these constructs and to determine the dimensionality of trust in automation. METHOD Participants (N = 72) engaged in a computer-based convoy scenario involving an automated decision aid. The aid fused real-time sensor data and provided route recommendations to participants, who selected a route based on (a) a map with historical enemy information, (b) sensor inputs, and (c) automation suggestions. Measures of trust in automation and IT suspicion were administered after individuals interacted with the automation. RESULTS Results indicated three orthogonal factors: trust, distrust, and IT suspicion. Each variable was explored as a predictor of decision confidence. Distrust and trust evidenced unique influences on decision confidence, albeit at different times. Higher distrust related to less confidence, whereas trust related to greater confidence. CONCLUSION The current study found that trust in automation was best characterized by two orthogonal dimensions (trust and distrust). Both trust and distrust were found to be independent of IT suspicion, and both uniquely predicted decision confidence. APPLICATION Researchers may consider using separate measures for trust and distrust in future studies.
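A minimal sketch of the kind of factor-analytic step described above is given below, assuming simulated questionnaire data and made-up item groupings; it is not the authors' analysis. It extracts three rotated factors from the items and then uses the per-respondent factor scores to predict decision confidence.

```python
# Minimal sketch, not the authors' analysis: item structure and data are simulated
# assumptions. Extracts three rotated factors (e.g., trust, distrust, suspicion)
# and regresses decision confidence on the factor scores.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n_respondents, n_items = 72, 12

# Hypothetical latent structure: three uncorrelated factors, four items each.
latent = rng.normal(size=(n_respondents, 3))
loadings = np.zeros((3, n_items))
loadings[0, 0:4] = 0.8    # "trust" items
loadings[1, 4:8] = 0.8    # "distrust" items
loadings[2, 8:12] = 0.8   # "suspicion" items
items = latent @ loadings + rng.normal(scale=0.4, size=(n_respondents, n_items))

fa = FactorAnalysis(n_components=3, rotation="varimax")
scores = fa.fit_transform(items)          # per-respondent factor scores

# Decision confidence predicted from the three factor scores.
confidence = 0.5 * latent[:, 0] - 0.4 * latent[:, 1] + rng.normal(scale=0.3, size=n_respondents)
reg = LinearRegression().fit(scores, confidence)
print("loading matrix shape:", fa.components_.shape)
print("confidence coefficients per factor:", reg.coef_.round(2))
```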
Collapse
Affiliation(s)
- Joseph B Lyons
- Air Force Research Laboratory, Wright-Patterson Air Force Base, Ohio 45433-7604, USA.
| | | | | | | | | |
Collapse
|
40
|
Abstract
Why is it important to understand human behavior in automated environments? The performance of a human-automation system is a product of the quality of the support provided by the automation and the manner in which that support is used by the human. Therefore, reaching an optimal level of system performance does not depend exclusively on improving technological components but also on understanding how humans interact with automated agents. A conceptual model of human-automation interaction is presented in this paper. The model includes most of the well-established variable relationships to date, as well as a brief description of the literature that supports each relationship.
Collapse
|
41
|
Merritt SM, Ilgen DR. Not all trust is created equal: dispositional and history-based trust in human-automation interactions. HUMAN FACTORS 2008; 50:194-210. [PMID: 18516832 DOI: 10.1518/001872008x288574] [Citation(s) in RCA: 72] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/26/2023]
Abstract
OBJECTIVE We provide an empirical demonstration of the importance of attending to human user individual differences in examinations of trust and automation use. BACKGROUND Past research has generally supported the notions that machine reliability predicts trust in automation and that trust, in turn, predicts automation use. However, links between user personality and perceptions of the machine, on the one hand, and trust in automation, on the other, have not been empirically established. METHOD In our X-ray screening task, 255 students rated trust and made automation use decisions while visually searching for weapons in X-ray images of luggage. RESULTS We demonstrate that individual differences affect perceptions of machine characteristics when actual machine characteristics are constant, that perceptions account for 52% of trust variance above the effects of actual characteristics, and that perceptions mediate the effects of actual characteristics on trust. Importantly, we also demonstrate that when administered at different times, the same six trust items reflect two types of trust (dispositional trust and history-based trust) and that these two trust constructs are differentially related to other variables. Interactions were found among user characteristics, machine characteristics, and automation use. CONCLUSION Our results suggest that increased specificity in the conceptualization and measurement of trust is required, that future researchers should assess user perceptions of machine characteristics in addition to actual machine characteristics, and that incorporating user extraversion and propensity to trust machines can increase prediction of automation use decisions. APPLICATION Potential applications include the design of flexible automation training programs tailored to individuals who differ in systematic ways.
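The incremental-variance claim above (perceptions explaining trust variance beyond actual machine characteristics) can be illustrated with a hierarchical regression sketch. The data below are simulated and the variable names are assumptions; the point is only to show the Delta R-squared step, not to reproduce the reported 52% figure.

```python
# Hedged illustration (simulated data, assumed variable names): hierarchical
# regression showing how perceived machine reliability can add explained
# variance in trust beyond actual reliability.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 255
actual = rng.integers(0, 2, n).astype(float)                  # actual reliability condition
perceived = 0.6 * actual + rng.normal(scale=0.5, size=n)      # perception partly tracks reality
trust = 0.1 * actual + 0.9 * perceived + rng.normal(scale=0.5, size=n)
df = pd.DataFrame({"actual": actual, "perceived": perceived, "trust": trust})

step1 = smf.ols("trust ~ actual", df).fit()
step2 = smf.ols("trust ~ actual + perceived", df).fit()
print("R^2, actual characteristics only:", round(step1.rsquared, 3))
print("R^2, plus perceptions:           ", round(step2.rsquared, 3))
print("Delta R^2 attributable to perceptions:", round(step2.rsquared - step1.rsquared, 3))
# A full mediation test would also model the actual -> perceived path and
# bootstrap the indirect effect; this sketch shows only the incremental-variance step.
```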
Collapse
|
42
|
Madhavan P, Wiegmann DA. Effects of information source, pedigree, and reliability on operator interaction with decision support systems. HUMAN FACTORS 2007; 49:773-85. [PMID: 17915596 DOI: 10.1518/001872007x230154] [Citation(s) in RCA: 25] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/17/2023]
Abstract
OBJECTIVE Two experiments are described that examined operators' perceptions of decision aids. BACKGROUND Research has suggested certain biases against automation that influence human interaction with automation. We differentiated preconceived biases from post hoc biases and examined their effects on advice acceptance. METHOD In Study 1 we examined operators' trust in and perceived reliability of humans versus automation of varying pedigree (expert vs. novice), based on written descriptions of these advisers prior to operators' interacting with these advisers. In Study 2 we examined participants' post hoc trust in, perceived reliability of, and dependence on these advisers after their objective experience of advisers' reliability (90% vs. 70%) in a luggage-screening task. RESULTS In Study 1 measures of perceived reliability indicated that automation was perceived as more reliable than humans across pedigrees. Measures of trust indicated that automated "novices" were trusted more than human "novices"; human "experts" were trusted more than automated "experts." In Study 2, perceived reliability varied as a function of pedigree, whereas subjective trust was always higher for automation than for humans. Advice acceptance from novice automation was always higher than from novice humans. However, when advisers were 70% reliable, errors generated by expert automation led to a drop in compliance/reliance on expert automation relative to expert humans. CONCLUSION Preconceived expectations of automation influence the use of these aids in actual tasks. APPLICATION The results provide a reference point for deriving indices of "optimal" user interaction with decision aids and for developing frameworks of trust in decision support systems.
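A simple way to score the advice-acceptance measure described above is an agreement rate broken down by adviser type, pedigree, and reliability condition. The sketch below uses made-up trial data and assumed column names, not the authors' data.

```python
# Simple sketch with simulated trial data (assumed column names): computing
# advice-acceptance rates by adviser type, pedigree, and reliability condition.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 600
df = pd.DataFrame({
    "adviser":     rng.choice(["human", "automation"], n),
    "pedigree":    rng.choice(["novice", "expert"], n),
    "reliability": rng.choice([0.7, 0.9], n),
})
# Hypothetical behavior: acceptance is higher for automation and for 90% reliability.
p_accept = 0.5 + 0.1 * (df["adviser"] == "automation") + 0.2 * (df["reliability"] == 0.9)
df["accepted_advice"] = rng.random(n) < p_accept

acceptance = (df.groupby(["adviser", "pedigree", "reliability"])["accepted_advice"]
                .mean()
                .rename("acceptance_rate"))
print(acceptance)
```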
Collapse
Affiliation(s)
- Poornima Madhavan
- University of Illinois at Urbana-Champaign, Champaign, Illinois, USA.
| | | |
Collapse
|
43
|
Madhavan P, Wiegmann DA. Similarities and differences between human–human and human–automation trust: an integrative review. THEORETICAL ISSUES IN ERGONOMICS SCIENCE 2007. [DOI: 10.1080/14639220500337708] [Citation(s) in RCA: 123] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
|
44
|
Bitan Y, Meyer J. Self-initiated and respondent actions in a simulated control task. ERGONOMICS 2007; 50:763-88. [PMID: 17454093 DOI: 10.1080/00140130701217149] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/15/2023]
Abstract
Operators often need to combine self-initiated and respondent actions. Two experiments dealt with the relative importance of these two types of actions as a function of the predictability of the system and the available information. Participants monitored three stations that required interventions at different frequencies. They were aided by warning cues indicating the need for interventions. The frequency of inspections of the stations, the response to the warning system, and overall performance were assessed for warning systems with different diagnostic properties. Participants adapted their responses to the relative frequency of required interventions, and their reliance on and compliance with the warning system depended on the warning characteristics. The results support the notion that events, such as warning signals, play a complex role in the operator's ongoing activity and are integrated into the set of information from external and internal sources that guides the operator's actions.
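The reliance/compliance distinction mentioned above is often scored directly from trial logs: compliance is acting when the warning sounds, reliance is withholding action when it stays silent. The sketch below illustrates this on simulated trials with assumed field names; it is not the authors' analysis.

```python
# Illustrative only (simulated trials, assumed field names): separating
# compliance (acting when the warning sounds) from reliance (not intervening
# when it stays silent), a common way to score behavior with warning systems.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
n = 400
df = pd.DataFrame({"warning_on": rng.random(n) < 0.3})
# Hypothetical operator: mostly follows the warning, with occasional deviations.
df["intervened"] = np.where(df["warning_on"],
                            rng.random(n) < 0.85,   # acts on most warnings
                            rng.random(n) < 0.10)   # rarely acts without one

compliance = df.loc[df["warning_on"], "intervened"].mean()
reliance = (~df.loc[~df["warning_on"], "intervened"]).mean()
print(f"compliance (respond | warning):     {compliance:.2f}")
print(f"reliance   (withhold | no warning): {reliance:.2f}")
```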
Collapse
Affiliation(s)
- Yuval Bitan
- Department of Industrial Engineering and Management, Ben Gurion University of the Negev, Beer Sheva, Israel
| | | |
Collapse
|