1. Ma Z, Zhang Y. Fostering Drivers' Trust in Automated Driving Styles: The Role of Driver Perception of Automated Driving Maneuvers. Human Factors 2024; 66:1961-1976. PMID: 37490722. DOI: 10.1177/00187208231189661.
Abstract
OBJECTIVE This study investigated the impact of the driving styles of drivers and automated vehicles (AVs) on drivers' perception of automated driving maneuvers and quantified the relationships among drivers' perception of AV maneuvers, driver trust, and acceptance of AVs. BACKGROUND Previous studies on automated driving styles focused on the impact of an AV's global driving style on drivers' attitudes and driving performance. However, research on drivers' perception of automated driving maneuvers at the specific driving-style level is still lacking. METHOD Sixteen aggressive drivers and sixteen defensive drivers were recruited to experience twelve driving scenarios in either an aggressive AV or a defensive AV on a driving simulator. Their perception of AV maneuvers, trust, and acceptance was measured via questionnaires, and driving performance was collected via the driving simulator. RESULTS Results revealed that drivers' trust in and acceptance of AVs would decrease significantly if they perceived AVs to have a higher speed, a larger or smaller deceleration than expected, or a shorter stopping distance. Moreover, defensive drivers perceived significantly greater inappropriateness of these maneuvers from aggressive AVs than from defensive AVs, whereas aggressive drivers did not differ significantly in their perceived inappropriateness of these maneuvers across AV driving styles. CONCLUSION The driving styles of automated vehicles and drivers influenced drivers' perception of automated driving maneuvers, which in turn influenced their trust in and acceptance of AVs. APPLICATION This study suggests that the design of AVs should consider drivers' perceptions of automated driving maneuvers to avoid undermining drivers' trust and acceptance.
Affiliation(s)
- Zheng Ma, Department of Industrial and Manufacturing Engineering, Pennsylvania State University, University Park, PA, USA
- Yiqi Zhang, Department of Industrial and Manufacturing Engineering, Pennsylvania State University, University Park, PA, USA
2. Knauer J, Baumeister H, Schmitt A, Terhorst Y. Acceptance of smart sensing, its determinants, and the efficacy of an acceptance-facilitating intervention in people with diabetes: results from a randomized controlled trial. Front Digit Health 2024; 6:1352762. PMID: 38863954; PMCID: PMC11165071. DOI: 10.3389/fdgth.2024.1352762.
Abstract
Background Mental health problems are prevalent among people with diabetes, yet often under-diagnosed. Smart sensing, utilizing passively collected digital markers through digital devices, is an innovative diagnostic approach that can support mental health screening and intervention. However, the acceptance of this technology remains unclear. Grounded in the Unified Theory of Acceptance and Use of Technology (UTAUT), this study aimed to investigate (1) the acceptance of smart sensing in a diabetes sample, (2) the determinants of acceptance, and (3) the effectiveness of an acceptance-facilitating intervention (AFI). Methods A total of N = 132 participants with diabetes were randomized to an intervention group (IG) or a control group (CG). The IG received a video-based AFI on smart sensing and the CG received an educational video on mindfulness. Acceptance and its potential determinants were assessed through an online questionnaire as a single post-measurement. Self-reported behavioral intention, interest in using a smart sensing application, and installation of a smart sensing application were assessed as outcomes. The data were analyzed using latent structural equation modeling and t-tests. Results The acceptance of smart sensing at baseline was average (M = 12.64, SD = 4.24), with 27.8% showing low, 40.3% moderate, and 31.9% high acceptance. Performance expectancy (γ = 0.64, p < 0.001), social influence (γ = 0.23, p = 0.032) and trust (γ = 0.27, p = 0.040) were identified as potential determinants of acceptance, explaining 84% of its variance. The SEM model fit was acceptable (RMSEA = 0.073, SRMR = 0.059). The intervention did not significantly impact acceptance (γ = 0.25, 95% CI: -0.16 to 0.65, p = 0.233), interest (OR = 0.76, 95% CI: 0.38-1.52, p = 0.445) or app installation rates (OR = 1.13, 95% CI: 0.47-2.73, p = 0.777). Discussion The high variance in acceptance supports the need for acceptance-facilitating procedures. The analyzed model supported performance expectancy, social influence, and trust as potential determinants of smart sensing acceptance, with perceived benefit (performance expectancy) as the most influential factor. The AFI had no significant effect. Future research should further explore factors contributing to smart sensing acceptance and address implementation barriers.
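Behavioral outcomes like the interest and installation rates above are typically compared between randomized groups with logistic regression, reported as odds ratios with confidence intervals. Below is a minimal, hypothetical sketch in Python with statsmodels on simulated stand-in data; the column names and values are illustrative assumptions, not the study's materials.

```python
# Minimal sketch (not the authors' analysis): odds ratio with a 95% CI for a
# binary outcome such as app installation, comparing IG vs. CG.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": rng.integers(0, 2, size=132),      # 0 = control, 1 = intervention (hypothetical)
    "installed": rng.integers(0, 2, size=132),  # 1 = installed the smart sensing app
})

X = sm.add_constant(df["group"])                # intercept + group dummy
fit = sm.Logit(df["installed"], X).fit(disp=0)  # maximum-likelihood logistic regression

odds_ratio = np.exp(fit.params["group"])        # exponentiated coefficient = OR
ci_low, ci_high = np.exp(fit.conf_int().loc["group"])
print(f"OR = {odds_ratio:.2f}, 95% CI: {ci_low:.2f}-{ci_high:.2f}, "
      f"p = {fit.pvalues['group']:.3f}")
```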
Affiliation(s)
- Johannes Knauer, Department of Clinical Psychology and Psychotherapy, Institute of Psychology and Education, Ulm University, Ulm, Germany
- Harald Baumeister, Department of Clinical Psychology and Psychotherapy, Institute of Psychology and Education, Ulm University, Ulm, Germany
- Andreas Schmitt, Research Institute Diabetes Academy Mergentheim (FIDAM), Bad Mergentheim, Germany
- Yannik Terhorst, Department of Psychological Methods and Assessment, Ludwig Maximilian University of Munich, Munich, Germany
3. Humer C, Nicholls R, Heberle H, Heckmann M, Pühringer M, Wolf T, Lübbesmeyer M, Heinrich J, Hillenbrand J, Volpin G, Streit M. CIME4R: Exploring iterative, AI-guided chemical reaction optimization campaigns in their parameter space. J Cheminform 2024; 16:51. PMID: 38730469. DOI: 10.1186/s13321-024-00840-1.
Abstract
Chemical reaction optimization (RO) is an iterative process that results in large, high-dimensional datasets. Current tools allow for only limited analysis and understanding of parameter spaces, making it hard for scientists to review or follow changes throughout the process. With the recent emergence of artificial intelligence (AI) models to aid RO, another level of complexity has been added. Helping users assess the quality of a model's predictions and understand its decisions is critical to supporting human-AI collaboration and trust calibration. To address this, we propose CIME4R, an open-source interactive web application for analyzing RO data and AI predictions. CIME4R supports users in (i) comprehending a reaction parameter space, (ii) investigating how an RO process developed over iterations, (iii) identifying critical factors of a reaction, and (iv) understanding model predictions. This facilitates making informed decisions during the RO process and helps users review a completed RO process, especially in AI-guided RO. CIME4R aids decision-making through the interaction between humans and AI by combining the strengths of expert experience and high computational precision. We developed and tested CIME4R with domain experts and verified its usefulness in three case studies. Using CIME4R, the experts were able to produce valuable insights from past RO campaigns and to make informed decisions on which experiments to perform next. We believe that CIME4R is the beginning of an open-source community project with the potential to improve the workflow of scientists working in the reaction optimization domain. SCIENTIFIC CONTRIBUTION: To the best of our knowledge, CIME4R is the first open-source interactive web application tailored to the particular analysis requirements of reaction optimization (RO) campaigns. Due to the growing use of AI in RO, we developed CIME4R with a special focus on facilitating human-AI collaboration and understanding of AI models. We developed and evaluated CIME4R in collaboration with domain experts to verify its practical usefulness.
Affiliation(s)
- Rachel Nicholls, Division Crop Science, Bayer AG, Monheim am Rhein, 40789, Germany
- Henry Heberle, Division Crop Science, Bayer AG, Monheim am Rhein, 40789, Germany
- Thomas Wolf, Division Crop Science, Bayer AG, Frankfurt, 65926, Germany
- Julian Heinrich, Division Crop Science, Bayer AG, Monheim am Rhein, 40789, Germany
- Giulio Volpin, Division Crop Science, Bayer AG, Frankfurt, 65926, Germany
- Marc Streit, Johannes Kepler University Linz, Linz, 4040, Austria; datavisyn GmbH, Linz, 4040, Austria
4. Deng M, Chen J, Wu Y, Ma S, Li H, Yang Z, Shen Y. Using voice recognition to measure trust during interactions with automated vehicles. Applied Ergonomics 2024; 116:104184. PMID: 38048717. DOI: 10.1016/j.apergo.2023.104184.
Abstract
Trust in automated vehicle systems (AVs) can impact the experience and safety of drivers and passengers. This work investigates whether drivers' speech can be used to measure their trust in AVs. Seventy-five participants were randomly assigned to a high-trust group (an AV with 100% correctness, no crashes, and four system messages with visual-auditory takeover requests, TORs) or a low-trust group (an AV with 60% correctness, a 40% crash rate, and two system messages with visual-only TORs). Voice interaction tasks were used to collect speech during the driving process. The results revealed that our settings successfully induced trust and distrust states. The speech features extracted from the two trust groups were used to train a back-propagation neural network, which was then evaluated for its ability to accurately predict the trust classification. The highest classification accuracy was 90.80%. This study proposes a method for accurately measuring trust in automated vehicles using voice recognition.
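To make the kind of pipeline this abstract describes concrete, here is a minimal sketch, assuming simulated speech features and labels (none of this is the authors' code or data): a scikit-learn MLPClassifier, i.e., a feed-forward network trained by backpropagation.

```python
# Minimal sketch (simulated stand-ins, not the authors' data or pipeline):
# classifying high- vs. low-trust drivers from extracted speech features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.normal(size=(150, 20))    # hypothetical per-utterance acoustic features
y = rng.integers(0, 2, size=150)  # 0 = low-trust group, 1 = high-trust group

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

clf = make_pipeline(
    StandardScaler(),  # scale features before gradient-based training
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=42),
)
clf.fit(X_tr, y_tr)
print(f"classification accuracy: {clf.score(X_te, y_te):.2%}")
```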
Affiliation(s)
- Miaomiao Deng, Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, China
- Jiaqi Chen, Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, China
- Yue Wu, Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, China
- Shu Ma, Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, China
- Hongting Li, Institute of Applied Psychology, College of Education, Zhejiang University of Technology, Hangzhou, China
- Zhen Yang, Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, China
- Yi Shen, Department of Mathematics, Zhejiang Sci-Tech University, Hangzhou, China
5. Yi B, Cao H, Song X, Wang J, Zhao S, Guo W, Cao D. How Can the Trust-Change Direction be Measured and Identified During Takeover Transitions in Conditionally Automated Driving? Using Physiological Responses and Takeover-Related Factors. Human Factors 2024; 66:1276-1301. PMID: 36625335. DOI: 10.1177/00187208221143855.
Abstract
OBJECTIVE This paper proposes an objective method to measure and identify trust-change directions during takeover transitions (TTs) in conditionally automated vehicles (AVs). BACKGROUND Takeover requests (TORs) will be recurring events in conditionally automated driving that could undermine trust and then lead to inappropriate reliance on conditionally automated vehicles, such as misuse and disuse. METHOD Thirty-four drivers engaged in a non-driving-related task were involved in a sequence of takeover events in a driving simulator. The relationships and effects between drivers' physiological responses, takeover-related factors, and trust-change directions during TTs were explored through a combination of an unsupervised learning algorithm and statistical analyses. Furthermore, several typical machine learning methods were applied to establish recognition models of trust-change directions during TTs based on takeover-related factors and physiological parameters. RESULTS Combining the change values of the subjective trust rating and of monitoring behavior before and after takeover can reliably measure trust-change directions during TTs. The statistical analyses showed that physiological parameters (i.e., skin conductance and heart rate) during TTs are negatively linked with trust-change directions. Drivers were more likely to gain trust during TTs when the TOR lead time was longer, takeovers were more frequent, and the scenario involved a stationary vehicle. More importantly, the F1-score of the random forest (RF) model was nearly 77.3%. CONCLUSION The features investigated and the RF model developed can identify trust-change directions during TTs accurately. APPLICATION These findings can provide additional support for developing trust-monitoring systems to mitigate both drivers' overtrust and undertrust in conditionally automated vehicles.
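As a concrete illustration of the recognition step, here is a minimal sketch (invented feature set and simulated data, not the study's) of a random-forest classifier of trust-change direction, evaluated with the F1-score as in the abstract.

```python
# Minimal sketch (hypothetical features, not the authors' data): recognizing
# trust-change direction during takeover transitions with a random forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
# hypothetical columns: skin conductance change, heart rate change,
# TOR lead time, takeover frequency
X = rng.normal(size=(200, 4))
y = rng.integers(0, 2, size=200)  # 1 = trust increased, 0 = trust decreased

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=7)
rf = RandomForestClassifier(n_estimators=200, random_state=7).fit(X_tr, y_tr)
print(f"F1-score: {f1_score(y_te, rf.predict(X_te)):.3f}")
```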
Affiliation(s)
- Song Zhao, University of Waterloo, Waterloo, ON, Canada
- Dongpu Cao, University of Waterloo, Waterloo, ON, Canada
6. Ling S, Zhang Y, Du N. More Is Not Always Better: Impacts of AI-Generated Confidence and Explanations in Human-Automation Interaction. Human Factors 2024:187208241234810. PMID: 38437598. DOI: 10.1177/00187208241234810.
Abstract
OBJECTIVE The study aimed to enhance transparency in autonomous systems by automatically generating and visualizing confidence and explanations, and by assessing their impacts on performance, trust, preference, and eye-tracking behaviors in human-automation interaction. BACKGROUND System transparency is vital to maintaining appropriate levels of trust and mission success. Previous studies presented mixed results regarding the impact of displaying likelihood information and explanations, and often relied on hand-created information, limiting scalability and failing to address real-world dynamics. METHOD We conducted a dual-task experiment involving 42 university students who operated a simulated surveillance testbed with assistance from intelligent detectors. The study used a 2 (confidence visualization: yes vs. no) × 3 (visual explanations: none, bounding boxes, bounding boxes and keypoints) mixed design. Task performance, human trust, preference for intelligent detectors, and eye-tracking behaviors were evaluated. RESULTS Visual explanations using bounding boxes and keypoints improved detection task performance when confidence was not displayed. Meanwhile, visual explanations enhanced trust in and preference for the intelligent detector, regardless of the explanation type. Confidence visualization did not influence human trust in or preference for the intelligent detector. Moreover, both types of visual information slowed saccade velocities. CONCLUSION The study demonstrated that, without confidence visualization, visual explanations could improve performance, trust, and preference in human-automation interaction, partially by changing search strategies; however, excessive information might cause adverse effects. APPLICATION These findings provide guidance for the design of transparent automation, emphasizing the importance of context-appropriate and user-centered explanations to foster effective human-machine collaboration.
Affiliation(s)
- Na Du, University of Pittsburgh, USA
7. Yamani Y, Glassman J, Alruwaili A, Yahoodik SE, Davis E, Lugo S, Xie K, Ishak S. Post Take-Over Performance Varies in Drivers of Automated and Connected Vehicle Technology in Near-Miss Scenarios. Human Factors 2023:187208231219184. PMID: 38052019. DOI: 10.1177/00187208231219184.
Abstract
OBJECTIVE This study examined the impact of monitoring instructions when using an automated driving system (ADS) and of road obstructions on post take-over performance in near-miss scenarios. BACKGROUND Past research indicates partial ADS reduces the driver's situation awareness and degrades post take-over performance. Connected vehicle technology may alert drivers to impending hazards in time to safely avoid near-miss events. METHOD Forty-eight licensed drivers using an ADS were randomly assigned to either the active driving or the passive driving condition. Participants navigated eight scenarios with or without a visual obstruction in a distributed driving simulator. The experimenter drove the other simulated vehicle to manually cause near-miss events. Participants' mean longitudinal velocity, standard deviation of longitudinal velocity, and mean longitudinal acceleration were measured. RESULTS Participants in the passive ADS group showed greater, and more variable, deceleration rates than those in the active ADS group. Despite a reliable audiovisual warning, participants failed to slow down in the red-light running scenario when the conflict vehicle was occluded. Participants' trust in the automated driving system did not vary between the beginning and end of the experiment. CONCLUSION Drivers interacting with an ADS in a passive manner may continue to show increased and more variable deceleration rates in near-miss scenarios even with reliable connected vehicle technology. Future research may focus on the interactive effects of automated and connected driving technologies on drivers' ability to anticipate and safely navigate near-miss scenarios. APPLICATION Designers of automated and connected vehicle technologies may consider different timings and types of cues to inform drivers of imminent hazards in high-risk scenarios for near-miss events.
Affiliation(s)
- Kun Xie, Old Dominion University, USA
8. Swain R, Kaye SA, Rakotonirainy A. Is my AV crashing? An online photo-based experiment assessing whether shared intended pathway can help AV drivers anticipate silent failures. Ergonomics 2023; 66:1984-1998. PMID: 36756954. DOI: 10.1080/00140139.2023.2176551.
Abstract
The shared responsibility between conditional AVs and their drivers demands shared understanding. Thus, a shared intended pathway (SIP), a graphical display of the AV's planned manoeuvres in a head-up display that helps drivers anticipate silent failures, is proposed. An online, randomised photo experiment was conducted with 394 drivers in Australia. The photos presented traffic scenarios in which the SIP forecast either safe or unsafe manoeuvres (silent failures). Participants were required to respond by selecting whether driver intervention was necessary or not. Additionally, the experiment tested the effects of presenting object-recognition bounding boxes, which indicated whether a road user had been recognised. The SIP led to correct intervention choices 87% of the time, and to calibration of self-reported trust, perceived ease of use and usefulness. The bounding boxes showed no significant effects. Results suggest SIPs can assist in monitoring conditional automation. Future research in simulator studies is recommended. Practitioner summary: Conditional AV drivers are expected to take over control during failures. However, drivers are not informed about the AV's planned manoeuvres. A visual display that presents the shared intended pathway is proposed to help drivers mitigate silent failures. This online photo experiment found the display helped anticipate failures with 87% accuracy.
Affiliation(s)
- Ritwik Swain, Queensland University of Technology (QUT), Centre for Accident Research and Road Safety (CARRS-Q), Brisbane, Australia; University of the Sunshine Coast (USC), Road Safety Research Collaboration (RSRC), Sippy Downs, Australia
- Sherrie-Anne Kaye, Queensland University of Technology (QUT), Centre for Accident Research and Road Safety (CARRS-Q), Brisbane, Australia
- Andry Rakotonirainy, Queensland University of Technology (QUT), Centre for Accident Research and Road Safety (CARRS-Q), Brisbane, Australia
9. Manchon JB, Bueno M, Navarro J. Calibration of Trust in Automated Driving: A Matter of Initial Level of Trust and Automated Driving Style? Human Factors 2023; 65:1613-1629. PMID: 34861787. DOI: 10.1177/00187208211052804.
Abstract
OBJECTIVE Automated driving is becoming a reality, and such technology raises new concerns about human-machine interaction on the road. This paper aims to investigate factors influencing trust calibration and its evolution over time. BACKGROUND Numerous studies have shown trust to be a determinant of automation use and misuse, particularly in the automated driving context. METHOD Sixty-one drivers participated in an experiment aiming to better understand the influence of the initial level of trust (Trustful vs. Distrustful) on drivers' behaviors and trust calibration during two sessions of simulated automated driving. The automated driving style was manipulated as positive (smooth) or negative (abrupt) to investigate early human-machine interactions. Trust was assessed over time through questionnaires. Drivers' visual behaviors and take-over performance during an unplanned take-over request were also investigated. RESULTS Results showed an increase in trust over time for both Trustful and Distrustful drivers, regardless of the automated driving style. Trust was also found to fluctuate over time depending on the specific events handled by the automated vehicle. Take-over performance was influenced neither by the initial level of trust nor by the automated driving style. CONCLUSION Trust in automated driving increases rapidly as drivers experience such a system. The initial level of trust seems to be crucial in further trust calibration and modulates the effect of automation performance. Long-term trust evolution suggests that experience modifies drivers' mental models of automated driving systems. APPLICATION In the automated driving context, trust calibration is a decisive question for guiding the proper use of such systems and for road safety.
Affiliation(s)
- J B Manchon, VEDECOM Institute, Versailles, France, and University Lyon 2, Bron, France
- Jordan Navarro, University Lyon 2, Bron, France, and Institut Universitaire de France, Paris, France
10. Qu J, Zhou R, Zhang Y, Ma Q. Understanding trust calibration in automated driving: the effect of time, personality, and system warning design. Ergonomics 2023; 66:2165-2181. PMID: 36920361. DOI: 10.1080/00140139.2023.2191907.
Abstract
In a future of human-automation co-driving, the dynamics of trust should be considered. This paper explored how trust changes over time and how multiple factors (time, trust propensity, neuroticism, and takeover warning design) calibrate trust together. We conducted two driving simulator experiments measuring drivers' trust before, during, and after the experiment under takeover scenarios. The results showed that trust in automation increased during short-term interactions and dropped after four months, though it remained higher than pre-experiment trust. Initial trust and trust propensity had a stable impact on trust. Drivers trusted the system more with the two-stage warning design (monitoring request plus takeover request, MR + TOR) than with the one-stage design (TOR only). Neuroticism had a significant effect with the countdown warning compared with the content warning. Practitioner summary: The results provide new data and knowledge for trust calibration in the takeover scenario. The findings can help design more reasonable automated driving systems for long-term human-automation interactions.
Affiliation(s)
- Jianhong Qu, School of Economics and Management, Beihang University, Beijing, P. R. China
- Ronggang Zhou, School of Economics and Management, Beihang University, Beijing, P. R. China
- Yaping Zhang, School of Economics and Management, Beihang University, Beijing, P. R. China
- Qianli Ma, School of Economics and Management, Beihang University, Beijing, P. R. China
11. De Freitas J, Agarwal S, Schmitt B, Haslam N. Psychological factors underlying attitudes toward AI tools. Nat Hum Behav 2023; 7:1845-1854. PMID: 37985913. DOI: 10.1038/s41562-023-01734-2.
Abstract
What are the psychological factors driving attitudes toward artificial intelligence (AI) tools, and how can resistance to AI systems be overcome when they are beneficial? Here we first organize the main sources of resistance into five main categories: opacity, emotionlessness, rigidity, autonomy and group membership. We relate each of these barriers to fundamental aspects of cognition, then cover empirical studies providing correlational or causal evidence for how the barrier influences attitudes toward AI tools. Second, we separate each of the five barriers into AI-related and user-related factors, which is of practical relevance in developing interventions towards the adoption of beneficial AI tools. Third, we highlight potential risks arising from these well-intentioned interventions. Fourth, we explain how the current Perspective applies to various stakeholders, including how to approach interventions that carry known risks, and point to outstanding questions for future work.
Affiliation(s)
- Stuti Agarwal, Marketing Unit, Harvard Business School, Boston, MA, USA
- Bernd Schmitt, Marketing Division, Columbia Business School, New York, NY, USA
- Nick Haslam, School of Psychological Sciences, University of Melbourne, Parkville, Victoria, Australia
12. Terhorst Y, Weilbacher N, Suda C, Simon L, Messner EM, Sander LB, Baumeister H. Acceptance of smart sensing: a barrier to implementation - results from a randomized controlled trial. Front Digit Health 2023; 5:1075266. PMID: 37519894; PMCID: PMC10373890. DOI: 10.3389/fdgth.2023.1075266.
Abstract
Background Accurate and timely diagnostics are essential for effective mental healthcare. Given a resource- and time-limited mental healthcare system, novel digital and scalable diagnostic approaches such as smart sensing, which utilizes digital markers collected via sensors from digital devices, are being explored. While the predictive accuracy of smart sensing is promising, its acceptance remains unclear. Based on the unified theory of acceptance and use of technology (UTAUT), the present study investigated (1) the effectiveness of an acceptance-facilitating intervention (AFI), (2) the determinants of acceptance, and (3) the acceptance of adults toward smart sensing. Methods The participants (N = 202) were randomly assigned to a control group (CG) or an intervention group (IG). The IG received a video AFI on smart sensing, and the CG a video on mindfulness. A reliable online questionnaire was used to assess acceptance, performance expectancy, effort expectancy, facilitating conditions, social influence, and trust. Self-reported interest in using a smart sensing app and the installation of such an app were assessed as behavioral outcomes. The intervention effects on acceptance were investigated using t-tests for observed data and latent structural equation modeling (SEM) with full-information maximum likelihood to handle missing data. The behavioral outcomes were analyzed with logistic regression. The determinants of acceptance were analyzed with SEM. The root mean square error of approximation (RMSEA) and standardized root mean square residual (SRMR) were used to evaluate model fit. Results The intervention did not affect acceptance (p = 0.357), interest (OR = 0.75, 95% CI: 0.42-1.32, p = 0.314), or the installation rate (OR = 0.29, 95% CI: 0.01-2.35, p = 0.294). Performance expectancy (γ = 0.45, p < 0.001), trust (γ = 0.24, p = 0.002), and social influence (γ = 0.32, p = 0.008) were identified as the core determinants of acceptance, explaining 68% of its variance. The SEM model fit was excellent (RMSEA = 0.06, SRMR = 0.05). Overall acceptance was M = 10.9 (SD = 3.73), with 35.41% of the participants showing low, 47.92% moderate, and 10.41% high acceptance. Discussion The present AFI was not effective. The low to moderate acceptance of smart sensing poses a major barrier to its implementation. Performance expectancy, social influence, and trust should be targeted as the core factors of acceptance. Further studies are needed to identify effective ways to foster the acceptance of smart sensing and to develop successful implementation strategies. Clinical Trial Registration: 10.17605/OSF.IO/GJTPH.
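The determinant analysis reported above (a structural model evaluated with RMSEA) can be sketched in Python with the semopy package. The following is a simplified, hypothetical illustration on simulated data, using observed rather than latent variables and invented variable names; it is not the authors' analysis.

```python
# Minimal sketch (not the authors' analysis): a UTAUT-style structural model in
# which performance expectancy, social influence, and trust predict acceptance.
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(1)
n = 202
df = pd.DataFrame({
    "performance_expectancy": rng.normal(size=n),
    "social_influence": rng.normal(size=n),
    "trust": rng.normal(size=n),
})
# Simulated outcome with invented effect sizes, for illustration only
df["acceptance"] = (0.45 * df["performance_expectancy"]
                    + 0.32 * df["social_influence"]
                    + 0.24 * df["trust"]
                    + rng.normal(scale=0.5, size=n))

model = semopy.Model("acceptance ~ performance_expectancy + social_influence + trust")
model.fit(df)
print(model.inspect())             # path coefficient estimates
print(semopy.calc_stats(model).T)  # fit indices, including RMSEA
```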
Affiliation(s)
- Yannik Terhorst, Department of Clinical Psychology and Psychotherapy, Institute of Psychology and Education, Ulm University, Ulm, Germany
- Nadine Weilbacher, Department of Clinical Psychology and Psychotherapy, Institute of Psychology and Education, Ulm University, Ulm, Germany
- Carolin Suda, Department of Rehabilitation Psychology and Psychotherapy, Institute of Psychology, Albert-Ludwigs University Freiburg, Freiburg, Germany
- Laura Simon, Department of Clinical Psychology and Psychotherapy, Institute of Psychology and Education, Ulm University, Ulm, Germany
- Eva-Maria Messner, Department of Clinical Psychology and Psychotherapy, Institute of Psychology and Education, Ulm University, Ulm, Germany
- Lasse Bosse Sander, Medical Psychology and Medical Sociology, Faculty of Medicine, Albert-Ludwigs University Freiburg, Freiburg, Germany
- Harald Baumeister, Department of Clinical Psychology and Psychotherapy, Institute of Psychology and Education, Ulm University, Ulm, Germany
13. Momen A, de Visser EJ, Fraune MR, Madison A, Rueben M, Cooley K, Tossell CC. Group trust dynamics during a risky driving experience in a Tesla Model X. Front Psychol 2023; 14:1129369. PMID: 37408965; PMCID: PMC10319128. DOI: 10.3389/fpsyg.2023.1129369.
Abstract
The growing concern about the risk and safety of autonomous vehicles (AVs) has made it vital to understand driver trust and behavior when operating AVs. While research has uncovered human factors and design issues based on individual driver performance, there remains a lack of insight into how trust in automation evolves in groups of people who face risk and uncertainty while traveling in AVs. To this end, we conducted a naturalistic experiment with groups of participants who were encouraged to engage in conversation while riding a Tesla Model X on campus roads. Our methodology was uniquely suited to uncover these issues through naturalistic interaction by groups in the face of a risky driving context. Conversations were analyzed, revealing several themes pertaining to trust in automation: (1) collective risk perception, (2) experimenting with automation, (3) group sense-making, (4) human-automation interaction issues, and (5) benefits of automation. Our findings highlight the untested and experimental nature of AVs and confirm serious concerns about the safety and readiness of this technology for on-road use. The process of determining appropriate trust and reliance in AVs will therefore be essential for drivers and passengers to ensure the safe use of this experimental and continuously changing technology. Revealing insights into social group-vehicle interaction, our results speak to the potential dangers and ethical challenges with AVs as well as provide theoretical insights on group trust processes with advanced technology.
Affiliation(s)
- Ali Momen, United States Air Force Academy, Colorado Springs, CO, United States
- Marlena R. Fraune, Department of Psychology, New Mexico State University, Las Cruces, NM, United States
- Anna Madison, United States Air Force Academy, Colorado Springs, CO, United States; United States Army Research Laboratory, Aberdeen Proving Ground, MD, United States
- Matthew Rueben, Department of Psychology, New Mexico State University, Las Cruces, NM, United States
- Katrina Cooley, United States Air Force Academy, Colorado Springs, CO, United States
- Chad C. Tossell, United States Air Force Academy, Colorado Springs, CO, United States
14. Taylor S, Wang M, Jeon M. Reliable and transparent in-vehicle agents lead to higher behavioral trust in conditionally automated driving systems. Front Psychol 2023; 14:1121622. PMID: 37275735; PMCID: PMC10232983. DOI: 10.3389/fpsyg.2023.1121622.
Abstract
Trust is critical for human-automation collaboration, especially in safety-critical tasks such as driving. Providing explainable information on how the automation system reaches decisions and predictions can improve system transparency, which is believed to further facilitate driver trust and user evaluation of automated vehicles. However, what the optimal level of transparency is and how the system should communicate it to calibrate drivers' trust and improve their driving performance remain uncertain. Such uncertainty becomes even more unpredictable given that system reliability remains dynamic due to current technological limitations. To address this issue in conditionally automated vehicles, 30 participants were recruited for a driving simulator study and assigned to either a low or a high system reliability condition. They experienced two driving scenarios accompanied by two types of in-vehicle agents delivering information with different transparency types: "what"-then-wait (on-demand) and "what + why" (proactive). The on-demand agent provided some information about the upcoming event and delivered more information if prompted by the driver, whereas the proactive agent provided all information at once. Results indicated that the on-demand agent was perceived as more habitable, or naturalistic, by drivers and as having a faster system response speed than the proactive agent. Drivers under the high-reliability condition complied with the takeover request (TOR) more often (if the agent was on-demand) and had shorter takeover times (in both agent conditions) than those under the low-reliability condition. These findings suggest how the automation system can deliver information to improve system transparency while adapting to system reliability and user evaluation, further contributing to driver trust calibration and performance correction in future automated vehicles.
Affiliation(s)
- Skye Taylor, Mind Music Machine Lab, Grado Department of Industrial and Systems Engineering, Virginia Tech, Blacksburg, VA, United States; Link Lab, Department of Systems and Information Engineering, University of Virginia, Charlottesville, VA, United States
- Manhua Wang, Mind Music Machine Lab, Grado Department of Industrial and Systems Engineering, Virginia Tech, Blacksburg, VA, United States
- Myounghoon Jeon, Mind Music Machine Lab, Grado Department of Industrial and Systems Engineering, Virginia Tech, Blacksburg, VA, United States
15. Hungund AP, Kumar Pradhan A. Impact of non-driving related tasks while operating automated driving systems (ADS): A systematic review. Accident Analysis and Prevention 2023; 188:107076. PMID: 37150132. DOI: 10.1016/j.aap.2023.107076.
Abstract
Automated Driving Systems (ADS; SAE, 2021) promise improved safety and comfort for drivers. Current technological advances have resulted in increased automation capabilities. However, with this increase, there is a shift in how drivers interact with their vehicles. Drivers can now temporarily hand over control of the driving task to the ADS under certain conditions. With the ADS in temporary control of the vehicle, drivers may choose to engage in non-driving related tasks (NDRT). The current capabilities of ADS do not allow drivers to hand over control of the driving task indefinitely; drivers must remain aware and be ready to take back control if necessary. There is a need to better understand drivers' performance and behaviors when driving with ADS, especially when engaged in NDRTs. This literature review therefore aims to assess the state of knowledge on automated vehicle systems and driver distraction. The review was conducted per PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. Studies found a significant increase in takeover times when drivers engaged in NDRTs with automation active. Studies also report a change in drivers' visual attention, with more focus given to NDRTs than to the forward roadway. The concerning effects of increased reaction times and decreased visual attention can be mitigated by interventions, and studies have had success in redirecting drivers' attention and reorienting them to the task of driving. The review therefore includes a discussion of ADS and NDRT engagement and its impact on driving behaviors such as take-over times, visual attention, trust, and workload. Implications for driver safety and performance are discussed in light of this synthesis.
Affiliation(s)
- Apoorva Pramod Hungund, Mechanical and Industrial Engineering, University of Massachusetts, Amherst 01002, USA
- Anuj Kumar Pradhan, Mechanical and Industrial Engineering, University of Massachusetts, Amherst 01002, USA
16. On the Role of Beliefs and Trust for the Intention to Use Service Robots: An Integrated Trustworthiness Beliefs Model for Robot Acceptance. Int J Soc Robot 2023. DOI: 10.1007/s12369-022-00952-4.
Abstract
With the increasing abilities of robots, the prediction of user decisions needs to go beyond the usability perspective, for example, by integrating distinctive beliefs and trust. In an online study (N = 400), first, the relationship between general trust in service robots and trust in a specific robot was investigated, supporting the role of general trust as a starting point for trust formation. On this basis, it was explored, both for general acceptance of service robots and acceptance of a specific robot, whether technology acceptance models can be meaningfully complemented by specific beliefs from the theory of planned behavior (TPB) and the trust literature to enhance understanding of robot adoption. First, models integrating all belief groups were fitted, providing essential variance predictions at both levels (general and specific) and a mediation of beliefs via trust on the intention to use. The omission of the performance expectancy and reliability beliefs was compensated for by more distinctive beliefs. In the final model (TB-RAM), effort expectancy and competence predicted trust at the general level. For a specific robot, competence and social influence predicted trust. Moreover, the effect of social influence on trust was moderated by the robot's application area (public > private), supporting situation-specific belief relevance in robot adoption. Taken together, in line with the TPB, these findings support a mediation cascade from beliefs via trust to the intention to use. Furthermore, incorporating distinctive instead of broad beliefs is promising for increasing the explanatory and practical value of acceptance modeling.
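A mediation cascade of this kind (belief -> trust -> intention to use) can be tested with a bootstrap mediation analysis. Below is a minimal, hypothetical sketch in Python using pingouin on simulated data; the variable names and effect sizes are invented for illustration and are not the study's.

```python
# Minimal sketch (hypothetical variables, simulated data): a bootstrap test of
# the belief -> trust -> intention-to-use mediation cascade with pingouin.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(3)
n = 400
belief = rng.normal(size=n)                              # e.g., a competence belief
trust = 0.5 * belief + rng.normal(scale=0.8, size=n)     # belief -> trust
intention = 0.6 * trust + rng.normal(scale=0.8, size=n)  # trust -> intention to use
df = pd.DataFrame({"belief": belief, "trust": trust, "intention": intention})

# Reports the direct, indirect (mediated), and total effects with bootstrap CIs
print(pg.mediation_analysis(data=df, x="belief", m="trust", y="intention",
                            n_boot=1000, seed=3))
```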
17. Li Y, Xuan Z, Li X. A Study on the Entire Take-Over Process-Based Emergency Obstacle Avoidance Behavior. Int J Environ Res Public Health 2023; 20:3069. PMID: 36833756; PMCID: PMC9961172. DOI: 10.3390/ijerph20043069.
Abstract
Nowadays, conditionally automated vehicles still need drivers to take over in scenarios such as emergency hazard events or driving environments beyond the system's control. This study aimed to explore how drivers' take-over behavior changes under the influence of traffic density and take-over budget time across the entire take-over process in emergency obstacle avoidance scenarios. In a driving simulator, a 2 × 2 factorial design was adopted, including two traffic densities (high and low) and two take-over budget times (3 s and 5 s). A total of 40 drivers were recruited, and each driver completed four simulation experiments. The driver's take-over process was divided into three phases: the reaction phase, the control phase, and the recovery phase. Time parameters, dynamics parameters, and operation parameters were collected for each take-over phase in different obstacle avoidance scenarios. This study analyzed how take-over time, lateral behavior, and longitudinal behavior varied with traffic density and take-over budget time. The results showed that in the reaction phase, the driver's reaction time became shorter as scenario urgency increased. In the control phase, the steering wheel reversal rate, lateral deviation rate, braking rate, average speed, and take-over time differed significantly across urgency levels. In the recovery phase, the average speed, accelerating rate, and take-over time differed significantly across urgency levels. For the entire take-over process, the total take-over time increased with urgency. Lateral take-over behavior tended to be aggressive first and then became defensive, whereas longitudinal take-over behavior became defensive as urgency increased. The findings provide theoretical and methodological support for improving take-over behavior assistance in emergency take-over scenarios and can help optimize the human-machine interaction system.
Affiliation(s)
- Yi Li, Logistics Research Center, Shanghai Maritime University, Shanghai 201306, China
- Zhaoze Xuan, Logistics Research Center, Shanghai Maritime University, Shanghai 201306, China
- Xianyu Li, Tongji Architectural Design (Group) Co., Ltd., Shanghai 200092, China; Shanghai Research Center for Smart Mobility and Road Safety, Shanghai 200092, China
18. Payre W, Perelló-March J, Birrell S. Under pressure: Effect of a ransomware and a screen failure on trust and driving performance in an automated car simulation. Front Psychol 2023; 14:1078723. PMID: 36935947; PMCID: PMC10014733. DOI: 10.3389/fpsyg.2023.1078723.
Abstract
One major challenge for automated cars is to be not only safe but also secure. Indeed, connected vehicles are vulnerable to cyberattacks, which may jeopardize individuals' trust in these vehicles and their safety. In a driving simulator experiment, 38 participants were exposed to two screen failures, silent (i.e., no turn signals on the in-vehicle screen and instrument cluster) and explicit (i.e., a ransomware attack), both while performing a non-driving related task (NDRT) in a conditionally automated vehicle. Results showed that objective trust decreased after experiencing the failures. Drivers took over control of the vehicle and stopped their NDRT more often after the explicit failure than after the silent failure. Lateral control of the vehicle was compromised when taking over control after both failures compared with automated driving performance. However, longitudinal control proved smoother in terms of speed homogeneity compared with automated driving performance. These findings suggest that connectivity failures negatively affect trust in automation and manual driving performance after taking over control. This research raises the question of how important connectivity is to trust in automation. Finally, we argue that engagement in an NDRT while riding in automated mode is an indicator of trust in the system and could be used as a surrogate measure of trust.
19. Why Does the Automation Say One Thing but Does Something Else? Effect of the Feedback Consistency and the Timing of Error on Trust in Automated Driving. Information 2022. DOI: 10.3390/info13100480.
Abstract
Driving automation deeply modifies the role of the human operator behind the steering wheel. Trust is required for drivers to engage with such automation, and this trust also seems to be a determinant of drivers' behaviors during automated drives. On the one hand, first experiences with automation, positive or not, are essential for drivers to calibrate their level of trust. On the other hand, automation that provides feedback about its own capability to handle a specific driving situation may also help drivers calibrate their trust. The reported experiment examined how the combination of these two effects impacts the driver trust calibration process. Drivers were randomly assigned to four groups. Each group experienced either an early (i.e., directly after the beginning of the drive) or a late (i.e., directly before the end of it) critical situation that was poorly handled by the automation, combined with either consistent continuous feedback (i.e., feedback that always correctly informed them about the situation) or inconsistent feedback (i.e., feedback that sometimes indicated dangers when there were none) during an automated drive in a driving simulator. Results showed that the early, poorly handled critical situation had an enduring negative effect on drivers' trust development compared with drivers who did not experience it. While correctly understood, inconsistent feedback did not affect trust during properly managed situations. These results suggest that the performance of the automation has the most severe influence on trust, and the automation's feedback does not necessarily have the ability to influence drivers' trust calibration during automated driving.
20. Leichtmann B, Humer C, Hinterreiter A, Streit M, Mara M. Effects of Explainable Artificial Intelligence on trust and human behavior in a high-risk decision task. Computers in Human Behavior 2022. DOI: 10.1016/j.chb.2022.107539.
21. The Effects of Transparency and Reliability of In-Vehicle Intelligent Agents on Driver Perception, Takeover Performance, Workload and Situation Awareness in Conditionally Automated Vehicles. Multimodal Technologies and Interaction 2022. DOI: 10.3390/mti6090082.
Abstract
In the context of automated vehicles, transparency of in-vehicle intelligent agents (IVIAs) is an important contributor to driver perception, situation awareness (SA), and driving performance. However, the effects of agent transparency on driver performance when the agent is unreliable have not been fully examined. This paper examined how the transparency and reliability of IVIAs affect drivers' perception of the agent, takeover performance, workload, and SA. A 2 × 2 mixed factorial design was used, with transparency (Push: proactive vs. Pull: on-demand) as a within-subjects variable and reliability (high vs. low) as a between-subjects variable. In a driving simulator, 27 young drivers drove with two types of in-vehicle agents during conditionally automated driving. Results suggest that transparency influenced participants' perception of the agent and perceived workload. The high-reliability agent was associated with higher situation awareness and less effort than the low-reliability agent. There was an interaction effect between transparency and reliability on takeover performance. These findings could have important implications for the continued design and development of IVIAs for automated vehicle systems.
22. Xu T, Dragomir A, Liu X, Yin H, Wan F, Bezerianos A, Wang H. An EEG study of human trust in autonomous vehicles based on graphic theoretical analysis. Front Neuroinform 2022; 16:907942. PMID: 36051853; PMCID: PMC9426721. DOI: 10.3389/fninf.2022.907942.
Abstract
With the development of autonomous vehicle technology, human-centered transport research will likely shift to the interaction between humans and vehicles. This study focuses on how human trust in autonomous vehicles (AVs) varies as the technology becomes increasingly intelligent. It uses electroencephalogram (EEG) data to analyze human trust in AVs during simulated driving. Two driving conditions, semi-autonomous and autonomous, corresponding to the two highest levels of automated driving, are used for the simulation, accompanied by various driving and car conditions. Graph theoretical analysis (GTA) is the primary method of data analysis. In the semi-autonomous driving mode, local efficiency and the clustering coefficient are lower in car-normal conditions than in car-malfunction conditions with the car approaching. This finding suggests that the human brain has strong information-processing ability when facing predictable potential hazards. However, at a traffic light with the car malfunctioning in the semi-autonomous driving mode, the characteristic path length is higher, manifesting weak information-processing ability when facing unpredictable potential hazards. Furthermore, in fully autonomous driving conditions, participants cannot intervene and need only low-level brain function for emergency actions, as reflected in lower local efficiency and small-worldness under car malfunction. Our results shed light on human-machine interaction design and human factors engineering for highly automated vehicles.
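For readers unfamiliar with these graph metrics, here is a minimal sketch, assuming a random stand-in connectivity matrix instead of real EEG-derived connectivity, of how the measures named in the abstract can be computed with networkx; nothing here reproduces the authors' pipeline.

```python
# Minimal sketch (random stand-in connectivity, not the authors' EEG pipeline):
# computing the graph-theoretical measures named in the abstract with networkx.
import networkx as nx
import numpy as np

rng = np.random.default_rng(5)
n_channels = 32
conn = rng.random((n_channels, n_channels))
conn = (conn + conn.T) / 2          # symmetric "connectivity" matrix
np.fill_diagonal(conn, 0)

# Binarize by threshold: keep only the strongest connections as edges
G = nx.from_numpy_array((conn > 0.7).astype(int))

print("clustering coefficient:", nx.average_clustering(G))
print("local efficiency:", nx.local_efficiency(G))
print("global efficiency:", nx.global_efficiency(G))
if nx.is_connected(G):
    print("characteristic path length:", nx.average_shortest_path_length(G))
    print("small-worldness (sigma):", nx.sigma(G, niter=5, nrand=5, seed=5))
```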
Affiliation(s)
- Tao Xu, Faculty of Intelligent Manufacturing, Wuyi University, Jiangmen, China
- Andrei Dragomir, The N1 Institute, National University of Singapore, Singapore
- Xucheng Liu, Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Macau SAR, China
- Haojun Yin, Faculty of Intelligent Manufacturing, Wuyi University, Jiangmen, China
- Feng Wan, Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Macau SAR, China
- Anastasios Bezerianos, Hellenic Institute of Transport (HIT), Centre for Research and Technology Hellas (CERTH), Thessaloniki, Greece
- Hongtao Wang, Faculty of Intelligent Manufacturing, Wuyi University, Jiangmen, China
23. Babel F, Kraus J, Baumann M. Findings From A Qualitative Field Study with An Autonomous Robot in Public: Exploration of User Reactions and Conflicts. Int J Soc Robot 2022. DOI: 10.1007/s12369-022-00894-x.
Abstract
Soon, service robots will be employed in public spaces with frequent human-robot interaction (HRI). To achieve safe, trustworthy, and acceptable HRI, service robots need to be equipped with interaction strategies suitable for the robot, user, and context. To gain realistic insights into the initial user reactions and challenges that arise when a mechanoid, autonomous service robot is deployed in public, a field study with three data sources was conducted. In a first step, lay users' intuitive reactions to a cleaning robot at a train station were observed (N = 344). Second, passersby's preferences for HRI interaction strategies were explored in interviews (n = 54). As a third step, trust in and acceptance of the robot were assessed with questionnaires (n = 32). Identified challenges were social robot navigation in crowded places, also applicable to vulnerable passersby; inclusive communication modalities; informing staff and the public about the service robot application; and the need for conflict resolution strategies to avoid an inefficient robot (e.g., testing behavior, blocked path). This study provides insights into naive HRI in public, illustrates challenges, provides recommendations supported by the literature, and highlights aspects for future research to inspire a research agenda in the field of public HRI.
24. Kox ES, Siegling LB, Kerstholt JH. Trust Development in Military and Civilian Human–Agent Teams: The Effect of Social-Cognitive Recovery Strategies. Int J Soc Robot 2022; 14:1323-1338. PMID: 35432627; PMCID: PMC8994847. DOI: 10.1007/s12369-022-00871-4.
Abstract
Autonomous agents (AA) will increasingly be deployed as teammates instead of tools. In many operational situations, flawless performance from AA cannot be guaranteed. This may lead to a breach in the human’s trust, which can compromise collaboration. This highlights the importance of thinking about how to deal with error and trust violations when designing AA. The aim of this study was to explore the influence of uncertainty communication and apology on the development of trust in a Human–Agent Team (HAT) when there is a trust violation. Two experimental studies following the same method were performed with (I) a civilian group and (II) a military group of participants. The online task environment resembled a house search in which the participant was accompanied and advised by an AA as their artificial team member. Halfway during the task, an incorrect advice evoked a trust violation. Uncertainty communication was manipulated within-subjects, apology between-subjects. Our results showed that (a) communicating uncertainty led to higher levels of trust in both studies, (b) an incorrect advice by the agent led to a less severe decline in trust when that advice included a measure of uncertainty, and (c) after a trust violation, trust recovered significantly more when the agent offered an apology. The two latter effects were only found in the civilian study. We conclude that tailored agent communication is a key factor in minimizing trust reduction in face of agent failure to maintain effective long-term relationships in HATs. The difference in findings between participant groups emphasizes the importance of considering the (organizational) culture when designing artificial team members.
Collapse
|
25
|
Hollander C, Hartwich F, Krems JF. Looking at HMI Concepts for Highly Automated Vehicles: Permanent vs. Context-Adaptive Information Presentation. OPEN PSYCHOLOGY 2022. [DOI: 10.1515/psych-2022-0124] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
To facilitate the usage and expected benefits of higher-level automated vehicles, passengers’ distrust and safety concerns should be reduced by increasing system transparency (ST), i.e., by providing driving-related information. We therefore examined the effects of ST on passengers’ gaze behavior during driving, trust in automated driving, and evaluation of different human-machine interface (HMI) concepts. In a driving simulator, 50 participants experienced three identical highly automated drives under three HMI conditions: no HMI (only a conventional speedometer), a context-adaptive HMI (all system information available only in more complex situations), or a permanent HMI (all system information permanently available). Compared to driving without an HMI, the introduction of the two HMIs resulted in significantly higher usage of the center stack display (i.e., gazes towards the HMIs), which was accompanied by significantly higher trust ratings. The considerable differences in information availability between the context-adaptive and the permanent HMI were not reflected in similarly considerable differences in the passengers’ gaze behavior or the accompanying trust ratings. Additionally, user experience evaluations expressed a preference for the context-adaptive HMI. Hence, the permanent HMI did not seem to create benefits over the context-adaptive HMI, supporting the usage of more economical, context-adaptive HMIs in higher-level automated vehicles.
Collapse
Affiliation(s)
- Cornelia Hollander
- Chemnitz University of Technology, Cognitive and Engineering Psychology, Wilhelm-Raabe Str. 43, Chemnitz, Germany
| | - Franziska Hartwich
- Chemnitz University of Technology, Cognitive and Engineering Psychology, Wilhelm-Raabe Str. 43, Chemnitz, Germany
| | - Josef F. Krems
- Chemnitz University of Technology, Cognitive and Engineering Psychology, Wilhelm-Raabe Str. 43, Chemnitz, Germany
| |
Collapse
|
26
|
Liu P, Jiang Z, Li T, Wang G, Wang R, Xu Z. User experience and usability when the automated driving system fails: Findings from a field experiment. ACCIDENT; ANALYSIS AND PREVENTION 2021; 161:106383. [PMID: 34469855 DOI: 10.1016/j.aap.2021.106383] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/26/2021] [Revised: 07/27/2021] [Accepted: 08/21/2021] [Indexed: 06/13/2023]
Abstract
We are entering an era of automated vehicles (AVs), which have the potential to improve road safety considerably. A compelling user experience is crucial to AV adoption in the future commercial market. The automated driving system (ADS) that replaces human drivers should be perceived as very useful before drivers are willing to give up control and entrust their lives to the ADS. However, compared with the growing number of studies on public acceptance of AVs, there has been limited research focusing on user experience and usability. We examined AV and ADS user experience and usability, the influence of ADS failures on them, and their influences on re-riding willingness. We conducted a field study using a real AV and a large-scale test track. We invited participants (N = 261) to travel in the AV as passengers in a low-speed environment. Participants were randomly assigned to the normal condition or the fault condition (in which participants were exposed to an ADS failure). We measured participants' positive experience (feeling relaxed, safe, and comfortable) and negative experience (feeling tense and at risk) while riding in the AV, and the perceived usability of the ADS based on the System Usability Scale. In both conditions, participants reported moderate positive experience and perceived usability but a relatively high level of willingness to ride in our AV again. The ADS failure reduced positive experience and perceived usability, and it increased negative experience. Positive experience and perceived usability, but not negative experience, influenced re-riding willingness. Compared with male participants, female participants reported less positive experience and lower perceived usability. We discuss the implications of our results as well as the limitations of this research.
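For readers unfamiliar with the System Usability Scale used above, the scoring rule is simple enough to state in a few lines. The helper below is a generic sketch of the standard SUS computation (ten 1–5 Likert items with alternating positive and negative wording), not code from the study.

```python
def sus_score(responses):
    """Compute the 0-100 SUS score from ten 1-5 Likert responses."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses, each between 1 and 5")
    # Odd-numbered (positively worded) items contribute (response - 1);
    # even-numbered (negatively worded) items contribute (5 - response).
    adjusted = [(r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses)]
    return sum(adjusted) * 2.5  # rescale the 0-40 sum to 0-100

print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 2]))  # -> 80.0
```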
Collapse
Affiliation(s)
- Peng Liu
- Center for Psychological Sciences, Zhejiang University, Hangzhou, Zhejiang 310058, PR China
| | - Zijun Jiang
- School of Information Engineering, Chang'an University, Xi'an, Shaanxi 710064, PR China
| | - Tingting Li
- College of Management and Economics, Tianjin University, Tianjin 300072, PR China
| | - Guanqun Wang
- School of Information Engineering, Chang'an University, Xi'an, Shaanxi 710064, PR China
| | - Runmin Wang
- School of Information Engineering, Chang'an University, Xi'an, Shaanxi 710064, PR China
| | - Zhigang Xu
- School of Information Engineering, Chang'an University, Xi'an, Shaanxi 710064, PR China; The Joint Laboratory for Internet of Vehicles, Ministry of Education-China Mobile Communications Corporation, Xi'an, Shaanxi 710064, PR China.
| |
Collapse
|
27
|
Kraus J, Scholz D, Baumann M. What's Driving Me? Exploration and Validation of a Hierarchical Personality Model for Trust in Automated Driving. HUMAN FACTORS 2021; 63:1076-1105. [PMID: 32633564 DOI: 10.1177/0018720820922653] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
OBJECTIVE This paper presents a comprehensive investigation of personality traits related to trust in automated vehicles. A hierarchical personality model based on Mowen's (2000) 3M model is explored in a first study and replicated in a second. BACKGROUND Trust in automation is established in a complex psychological process involving user-, system- and situation-related variables. In this process, personality traits have been viewed as an important source of variance. METHOD Dispositional variables on three levels were included in an exploratory, hierarchical personality model (full model) of dynamic learned trust in automation, which was refined on the basis of structural equation modeling carried out in Study 1 (final model). Study 2 replicated the final model in an independent sample. RESULTS In both studies, the personality model showed a good fit and explained a large proportion of variance in trust in automation. The combined evidence supports the role of extraversion, neuroticism, and self-esteem at the elemental level; affinity for technology and dispositional interpersonal trust at the situational level; and propensity to trust in automation and a priori acceptability of automated driving at the surface level in the prediction of trust in automation. CONCLUSION Findings confirm that personality plays a substantial role in trust formation and provide evidence of the involvement of user dispositions not previously investigated in relation to trust in automation: self-esteem, dispositional interpersonal trust, and affinity for technology. APPLICATION Implications for the personalization of information campaigns, driver training, and user interfaces for trust calibration in automated driving are discussed.
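Hierarchical models of this kind are typically estimated with structural equation modeling. The following is a minimal, hypothetical sketch of fitting a simplified three-level path model with the semopy package; the variable names, simulated data, and path structure are assumptions for illustration and deliberately omit most of the constructs in the authors' full model.

```python
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(0)
n = 300
# Simulated traits on three levels (elemental -> situational -> surface).
extraversion = rng.normal(size=n)
neuroticism = rng.normal(size=n)
self_esteem = rng.normal(size=n)
affinity_tech = 0.3 * extraversion + rng.normal(size=n)
propensity_trust = 0.5 * affinity_tech + 0.2 * self_esteem + rng.normal(size=n)
trust = 0.6 * propensity_trust + rng.normal(size=n)
df = pd.DataFrame({"extraversion": extraversion, "neuroticism": neuroticism,
                   "self_esteem": self_esteem, "affinity_tech": affinity_tech,
                   "propensity_trust": propensity_trust, "trust": trust})

# lavaan-style path model: lower levels predict higher levels.
desc = """affinity_tech ~ extraversion + neuroticism + self_esteem
propensity_trust ~ affinity_tech + self_esteem
trust ~ propensity_trust"""

model = semopy.Model(desc)
model.fit(df)
print(model.inspect())           # path coefficient estimates
print(semopy.calc_stats(model))  # fit indices (chi-square, CFI, RMSEA, ...)
```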
Collapse
|
28
|
Lanzer M, Stoll T, Colley M, Baumann M. Intelligent Mobility in the City: The Influence of System and Context Factors on Drivers’ Takeover Willingness and Trust in Automated Vehicles. FRONTIERS IN HUMAN DYNAMICS 2021. [DOI: 10.3389/fhumd.2021.676667] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
Abstract
Automated driving in urban environments has the potential not only to improve traffic flow and heighten driver comfort but also to increase traffic safety, particularly for vulnerable road users such as pedestrians. For these benefits to take effect, drivers need to trust and use automated vehicles. This decision is influenced by both system and context factors. However, it is not yet clear how these factors interact with each other, especially for automated driving in city scenarios with crossing pedestrians. Therefore, we conducted an online experiment in which participants (N = 68) experienced short automated rides through an urban environment from the driver’s perspective. In each of the presented videos, a pedestrian crossed the street in front of the automated vehicle while system and context factors were varied: 1) the crossing pedestrian’s intention was either visualized correctly (as crossing) or incorrectly (visualization missing) by the automated vehicle (system factor), 2) the pedestrian was either distracted by using a smartphone while crossing or not (context factor), and 3) the scenario was either more or less complex depending on the number of other vehicles and pedestrians present (context factor). In situations with a system malfunction where the crossing pedestrian’s intention was not visualized, participants perceived the situation as more critical, had less trust in the automated system, and had a higher willingness to take over control, regardless of any context factors. However, when the system worked correctly, the crossing pedestrian’s smartphone usage came into play, especially in the less complex scenario. Participants perceived situations with a distracted pedestrian as more critical, trusted the system less, indicated a higher willingness to take over control, and were more uncertain about their decision. As this study demonstrates the influence of distracted pedestrians, more research is needed on context factors and their inclusion in the design of interfaces to keep drivers informed during automated driving in urban environments.
Collapse
|
29
|
Faas SM, Baumann M. Pedestrian assessment: Is displaying automated driving mode in self-driving vehicles as relevant as emitting an engine sound in electric vehicles? APPLIED ERGONOMICS 2021; 94:103425. [PMID: 33865206 DOI: 10.1016/j.apergo.2021.103425] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/04/2020] [Revised: 03/16/2021] [Accepted: 03/27/2021] [Indexed: 06/12/2023]
Abstract
Pedestrians rely on vehicle dynamics, engine sound, and driver cues when assessing approaching vehicles. The lack of engine sound is a recognized, and already addressed, pedestrian safety issue for (hybrid) electric vehicles ((H)EVs). Analogously, the lack of driver cues may constitute a pedestrian safety issue for self-driving vehicles (SDVs). The purpose of this study was to systematically compare the relevance of substituting driver cues with an external human-machine interface in SDVs (no eHMI vs. eHMI) with the relevance of substituting engine sound with artificial sound in (H)EVs (no engine sound vs. engine sound). In a within-subject design, twenty-nine participants acting as pedestrians encountered a simulated SDV in a parking lot. The results revealed that both informational cues have equally large effects on subjective measures such as perceived safety. In semi-structured interviews, participants stated that equipping SDVs with an eHMI is as crucial as equipping (H)EVs with an artificial sound generator. We conclude that an eHMI for SDVs seems to be as relevant as an artificial sound for (H)EVs.
Collapse
Affiliation(s)
- Stefanie M Faas
- Mercedes-Benz AG, Leibnizstr. 2, 71032, Böblingen, Germany; Ulm University, Dept. Human Factors, Albert-Einstein-Allee 41, 89081, Ulm, Germany.
| | - Martin Baumann
- Ulm University, Dept. Human Factors, Albert-Einstein-Allee 41, 89081, Ulm, Germany
| |
Collapse
|
30
|
Hartwich F, Hollander C, Johannmeyer D, Krems JF. Improving Passenger Experience and Trust in Automated Vehicles Through User-Adaptive HMIs: “The More the Better” Does Not Apply to Everyone. FRONTIERS IN HUMAN DYNAMICS 2021. [DOI: 10.3389/fhumd.2021.669030] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
Abstract
Automated vehicles promise transformational benefits for future mobility systems, but only if they are used regularly. However, due to the associated loss of control and the fundamental change of the in-vehicle user experience (shifting from an active driver to a passive passenger experience), many people have reservations toward driving automation that call sufficient usage and market penetration into question. These reservations vary based on individual characteristics such as initial attitudes. User-adaptive in-vehicle Human-Machine Interfaces (HMIs) meeting varying user requirements may represent an important component of higher-level automated vehicles, providing a pleasant and trustworthy passenger experience despite these barriers. In a driving simulator study, we evaluated the effects of two HMI versions (with permanent vs. context-adaptive information availability) on the passenger experience (perceived safety, understanding of driving behavior, driving comfort, driving enjoyment) and trust in automated vehicles of 50 first-time users with varying initial trust (lower vs. higher trust group). Additionally, we compared the user experience of both HMIs. Presenting driving-related information via HMI during driving improved all assessed aspects of passenger experience and trust. The higher trust group experienced automated driving as safest, most understandable, and most comfortable with the context-adaptive HMI, while the lower trust group tended to experience the highest safety, understanding, and comfort with the permanent HMI. Both HMIs received positive user experience ratings. The context-adaptive HMI received generally more positive ratings, even though this preference was more pronounced for the higher trust group. The results demonstrate the potential of increasing the system transparency of higher-level automated vehicles through HMIs to enhance users’ passenger experience and trust. They also consolidate previous findings on varying user requirements based on individual characteristics. User group-specific HMI effects on passenger experience support the relevance of user-adaptive HMI concepts that address the varying needs of different users by customizing HMI features such as information availability. Consequently, providing full information permanently cannot be recommended as a universal standard for HMIs in automated vehicles. These insights represent next steps toward a pleasant and trustworthy passenger experience in higher-level automated vehicles for everyone, and support their market acceptance and thus the realization of their expected benefits for future mobility and society.
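The two HMI strategies contrasted above reduce to a small piece of display logic. The toy sketch below (not from the paper; the thresholds, field names, and trust-based personalization rule are invented assumptions) illustrates the design difference between permanent and context-adaptive information availability.

```python
from dataclasses import dataclass

@dataclass
class DrivingContext:
    complexity: float       # 0 (empty highway) .. 1 (dense urban scene)
    maneuver_pending: bool  # e.g., a lane change or stop is announced

def information_items(mode: str, ctx: DrivingContext, low_trust_user: bool):
    """Return which HMI elements to show under a given display strategy."""
    full = ["speed", "detected_objects", "planned_maneuver", "system_status"]
    if mode == "permanent" or low_trust_user:
        # Permanent HMI: all information, always (the study suggests
        # lower-trust users tend to prefer this).
        return full
    # Context-adaptive HMI: full details only in more complex situations.
    if ctx.complexity > 0.5 or ctx.maneuver_pending:
        return full
    return ["speed"]  # conventional speedometer only

print(information_items("context_adaptive",
                        DrivingContext(complexity=0.2, maneuver_pending=False),
                        low_trust_user=False))  # -> ['speed']
```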
Collapse
|
31
|
Miller L, Kraus J, Babel F, Baumann M. More Than a Feeling-Interrelation of Trust Layers in Human-Robot Interaction and the Role of User Dispositions and State Anxiety. Front Psychol 2021; 12:592711. [PMID: 33912098 PMCID: PMC8074795 DOI: 10.3389/fpsyg.2021.592711] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2020] [Accepted: 01/27/2021] [Indexed: 12/02/2022] Open
Abstract
With service robots becoming more ubiquitous in social life, interaction design needs to adapt to novice users and the associated uncertainty in the first encounter with this technology in new emerging environments. Trust in robots is an essential psychological prerequisite to achieve safe and convenient cooperation between users and robots. This research focuses on psychological processes in which user dispositions and states affect trust in robots, which in turn is expected to impact the behavior and reactions in the interaction with robotic systems. In a laboratory experiment, the influence of propensity to trust in automation and negative attitudes toward robots on state anxiety, trust, and comfort distance toward a robot were explored. Participants were approached by a humanoid domestic robot two times and indicated their comfort distance and trust. The results favor the differentiation and interdependence of dispositional, initial, and dynamic learned trust layers. A mediation from the propensity to trust to initial learned trust by state anxiety provides an insight into the psychological processes through which personality traits might affect interindividual outcomes in human-robot interaction (HRI). The findings underline the meaningfulness of user characteristics as predictors for the initial approach to robots and the importance of considering users’ individual learning history regarding technology and robots in particular.
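The mediation reported above (propensity to trust → state anxiety → initial learned trust) is commonly tested via the indirect effect a·b with a percentile bootstrap. The sketch below illustrates that procedure on simulated data; the effect sizes, sample size, and variable names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 120
propensity = rng.normal(size=n)
anxiety = -0.4 * propensity + rng.normal(size=n)               # path a
trust = 0.5 * propensity - 0.5 * anxiety + rng.normal(size=n)  # paths c', b

def indirect_effect(x, m, y):
    """a*b from two OLS fits: m ~ x, and y ~ x + m."""
    a = np.polyfit(x, m, 1)[0]                    # slope of m on x
    X = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]   # slope of y on m, given x
    return a * b

boot = []
for _ in range(5000):
    idx = rng.integers(0, n, n)  # resample participants with replacement
    boot.append(indirect_effect(propensity[idx], anxiety[idx], trust[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")  # excludes 0 -> mediation
```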
Collapse
Affiliation(s)
- Linda Miller
- Department Human Factors, Institute of Psychology and Education, Ulm University, Ulm, Germany
| | - Johannes Kraus
- Department Human Factors, Institute of Psychology and Education, Ulm University, Ulm, Germany
| | - Franziska Babel
- Department Human Factors, Institute of Psychology and Education, Ulm University, Ulm, Germany
| | - Martin Baumann
- Department Human Factors, Institute of Psychology and Education, Ulm University, Ulm, Germany
| |
Collapse
|
32
|
Supporting User Onboarding in Automated Vehicles through Multimodal Augmented Reality Tutorials. MULTIMODAL TECHNOLOGIES AND INTERACTION 2021. [DOI: 10.3390/mti5050022] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
Misconceptions about vehicle automation functionalities lead to either non-use or dangerous misuse of assistant systems, harming the user experience by reducing potential comfort or compromising safety. Thus, users must understand how and when to use an assistant system. In a preliminary online survey, we examined the use, trust, and perceived understanding of modern vehicle assistant systems. Despite remaining incomprehensibility (36–64%), experienced misunderstandings (up to 9%), and the need for training (around 30%), users reported high trust in the systems. In the following study with first-time users, we examine the effect of different User Onboarding approaches for an automated parking assistant system in a Tesla and compare the traditional text-based manual with a multimodal augmented reality (AR) smartphone application in terms of user acceptance, UX, trust, understanding, and task performance. While the User Onboarding experience for both approaches shows high pragmatic quality, the hedonic quality was perceived as significantly higher in AR. For the automated parking process, reported hedonic and pragmatic user experience, trust, automation understanding, and acceptance do not differ, yet the observed task performance was higher in the AR condition. Overall, AR might help motivate proper User Onboarding and better communicate how to operate the system to inexperienced users.
Collapse
|
33
|
Babel F, Kraus JM, Baumann M. Development and Testing of Psychological Conflict Resolution Strategies for Assertive Robots to Resolve Human-Robot Goal Conflict. Front Robot AI 2021; 7:591448. [PMID: 33718437 PMCID: PMC7945950 DOI: 10.3389/frobt.2020.591448] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2020] [Accepted: 12/14/2020] [Indexed: 11/13/2022] Open
Abstract
As service robots become increasingly autonomous and follow their own task-related goals, human-robot conflicts seem inevitable, especially in shared spaces. Goal conflicts can arise from simple trajectory planning to complex task prioritization. For successful human-robot goal-conflict resolution, humans and robots need to negotiate their goals and priorities. For this, the robot might be equipped with conflict resolution strategies that are assertive and effective but still accepted by the user. In this paper, conflict resolution strategies for service robots (a public cleaning robot and a home assistant robot) are developed by transferring psychological concepts (e.g., negotiation, cooperation) to HRI. Altogether, fifteen strategies were grouped by the expected affective outcome (positive, neutral, negative). In two online experiments, the acceptability of and compliance with these conflict resolution strategies were tested with humanoid and mechanoid robots in two application contexts (public: n1 = 61; private: n2 = 93). To obtain a comparative value, the strategies were also applied by a human. As additional outcomes, trust, fear, arousal, and valence, as well as the perceived politeness of the agent, were assessed. The positive/neutral strategies were found to be more acceptable and effective than the negative strategies. Some negative strategies (i.e., threat, command) even led to reactance and fear. Some strategies were positively evaluated and effective only for certain agents (human or robot) or acceptable only in one of the two application contexts (i.e., approach, empathy). Influences on strategy acceptance and compliance were found in the public context: acceptance was predicted by politeness and trust, and compliance was predicted by interpersonal power. Taken together, psychological conflict resolution strategies can be applied in HRI to enhance robot task effectiveness; if applied robot-specifically and context-sensitively, they are accepted by the user. The contribution of this paper is twofold: conflict resolution strategies based on Human Factors and Social Psychology are introduced and empirically evaluated in two online studies in two application contexts. Influencing factors and requirements for the acceptance and effectiveness of robot assertiveness are discussed.
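The prediction findings in the last paragraph (acceptance predicted by politeness and trust, compliance by interpersonal power) map onto two standard models: OLS for a continuous acceptance rating and logistic regression for a binary compliance decision. The sketch below shows that analysis shape on simulated data; all variable names and coefficients are hypothetical, not the study's estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 150
politeness = rng.normal(size=n)
trust = rng.normal(size=n)
power = rng.normal(size=n)
# Acceptance as a continuous rating, compliance as a yes/no decision.
acceptance = 0.5 * politeness + 0.4 * trust + rng.normal(size=n)
compliance = rng.binomial(1, 1 / (1 + np.exp(-0.8 * power)))
df = pd.DataFrame({"politeness": politeness, "trust": trust, "power": power,
                   "acceptance": acceptance, "compliance": compliance})

# OLS for the continuous outcome, logit for the binary one.
print(sm.OLS.from_formula("acceptance ~ politeness + trust", df).fit().params)
print(sm.Logit.from_formula("compliance ~ power", df).fit(disp=0).params)
```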
Collapse
Affiliation(s)
- Franziska Babel
- Department of Human Factors, Institute of Psychology and Education, Ulm University, Ulm, Germany
| | - Johannes M Kraus
- Department of Human Factors, Institute of Psychology and Education, Ulm University, Ulm, Germany
| | - Martin Baumann
- Department of Human Factors, Institute of Psychology and Education, Ulm University, Ulm, Germany
| |
Collapse
|
34
|
Manchon JB, Bueno M, Navarro J. From manual to automated driving: how does trust evolve? THEORETICAL ISSUES IN ERGONOMICS SCIENCE 2020. [DOI: 10.1080/1463922x.2020.1830450] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Affiliation(s)
- J. B. Manchon
- VEDECOM Institute, Versailles, France
- Laboratoire d’Etude des Mécanismes Cognitifs (EA 3082), University Lyon 2, Bron, France
| | | | - Jordan Navarro
- Laboratoire d’Etude des Mécanismes Cognitifs (EA 3082), University Lyon 2, Bron, France
| |
Collapse
|
35
|
Abstract
The advancement of SAE Level 3 automated driving systems requires best practices to guide the development process. In the past, the Code of Practice for the Design and Evaluation of ADAS served this role for SAE Level 1 and 2 systems. The challenges of Level 3 automation make it necessary to create a new Code of Practice for Automated Driving (CoP-AD) as part of the publicly funded European project L3Pilot. It provides developers with a comprehensive guideline on how to design and test automated driving functions, with a focus on highway driving and parking. It covers a variety of areas such as Functional Safety, Cybersecurity, Ethics, and finally Human–Vehicle Integration. This paper focuses on the latter, the Human Factors aspects addressed in the CoP-AD. The process of gathering the topics for this category, which included thorough literature reviews and workshops, is outlined in the body of the paper. A summary is given of the draft content of the CoP-AD Human–Vehicle Integration topics. This includes general Human Factors guidelines as well as Mode Awareness, Trust, and Misuse. Driver Monitoring is highlighted as well, together with the topic of Controllability and the execution of Customer Clinics. Furthermore, the Training and Variability of Users is included. Finally, the application of the CoP-AD in the development process for Human–Vehicle Integration is illustrated.
Collapse
|
36
|
Kraus J, Scholz D, Messner EM, Messner M, Baumann M. Scared to Trust? - Predicting Trust in Highly Automated Driving by Depressiveness, Negative Self-Evaluations and State Anxiety. Front Psychol 2020; 10:2917. [PMID: 32038353 PMCID: PMC6989472 DOI: 10.3389/fpsyg.2019.02917] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2019] [Accepted: 12/10/2019] [Indexed: 11/13/2022] Open
Abstract
The advantages of automated driving can only come fully into play if these systems are used in an appropriate way, meaning that they are neither used in situations they are not designed for (misuse) nor used in a too restricted manner (disuse). Trust in automation has been found to be an essential psychological basis for appropriate interaction with automated systems. Well-balanced system use requires a level of trust calibrated to the actual ability of an automated system. Given these far-reaching implications of trust for safe and efficient system use, the psychological processes in which trust is dynamically calibrated prior to and during the use of automated technology need to be understood. To date, only a restricted body of research has investigated the role of personality and emotional states in the formation of trust in automated systems. In this research, the role of the personality variables depressiveness, self-efficacy, self-esteem, and locus of control in the experience of anxiety before the first experience with a highly automated driving system was investigated, along with the relationship of these personality variables and anxiety to the subsequent formation of trust in automation. In a driving simulator study, personality variables and anxiety were measured before the interaction with an automated system. Trust in the system was measured after participants had driven with the system for a while. Trust in the system was significantly predicted by state anxiety and the personality characteristics self-esteem and self-efficacy. The relationships of self-esteem and self-efficacy were mediated by state anxiety, as supported by significant specific indirect effects. While for depressiveness the direct relationship with trust in automation was not found to be significant, an indirect effect through the experience of anxiety was supported. Locus of control did not show a significant association with trust in automation. The reported findings underline the importance of considering individual differences in negative self-evaluations and anxiety when users are introduced to a new automated system, in order to account for individual differences in trust in automation. Implications for future research as well as for the design of automated technology in general and automated driving systems in particular are discussed.
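The "specific indirect effects" mentioned above can also be estimated with an off-the-shelf mediation routine. As a sketch, statsmodels' Mediation class can quantify how much of a self-esteem–trust relationship runs through state anxiety; the formulas, simulated data, and effect sizes below are illustrative assumptions, not the study's model.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.mediation import Mediation

rng = np.random.default_rng(3)
n = 200
self_esteem = rng.normal(size=n)
anxiety = -0.5 * self_esteem + rng.normal(size=n)              # mediator
trust = 0.3 * self_esteem - 0.4 * anxiety + rng.normal(size=n)  # outcome
df = pd.DataFrame({"self_esteem": self_esteem,
                   "anxiety": anxiety, "trust": trust})

# Unfitted models: one for the outcome, one for the mediator.
outcome_model = sm.OLS.from_formula("trust ~ self_esteem + anxiety", df)
mediator_model = sm.OLS.from_formula("anxiety ~ self_esteem", df)
med = Mediation(outcome_model, mediator_model, "self_esteem", "anxiety")
res = med.fit(n_rep=1000)
print(res.summary())  # ACME = indirect effect via anxiety, ADE = direct effect
```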
Collapse
Affiliation(s)
- Johannes Kraus
- Department of Human Factors, Institute of Psychology and Education, Ulm University, Ulm, Germany
| | - David Scholz
- Department of Human Factors, Institute of Psychology and Education, Ulm University, Ulm, Germany
| | - Eva-Maria Messner
- Department of Clinical Psychology and Psychotherapy, Institute of Psychology and Education, Ulm University, Ulm, Germany
| | - Matthias Messner
- Department of Clinical and Health Psychology, Institute of Psychology and Education, Ulm University, Ulm, Germany
| | - Martin Baumann
- Department of Human Factors, Institute of Psychology and Education, Ulm University, Ulm, Germany
| |
Collapse
|