1.
Wei R, McDonald AD, Mehta RK, Garcia A. Active Inference Models of AV Takeovers: Relating Model Parameters to Trust, Situation Awareness, and Fatigue. Hum Factors 2025;67:616-634. [PMID: 39486160] [DOI: 10.1177/00187208241295932]
Abstract
Objective: Our objectives were to assess the efficacy of active inference models for capturing driver takeovers from automated vehicles and to evaluate the links between model parameters and self-reported cognitive fatigue, trust, and situation awareness.
Background: Control transitions between human drivers and automation pose a substantial safety and performance risk. Models of driver behavior that predict these transitions from data are a critical tool for designing safer, human-centered systems, but current models do not sufficiently account for human factors. Active inference theory is a promising approach for integrating human factors because of its grounding in cognition and its translation to a quantitative modeling framework.
Method: We used data from a driving simulation to develop an active inference model of takeover performance. After validating the model's predictions, we used Bayesian regression with a spike-and-slab prior to assess substantial correlations between model parameters and self-reported trust, situation awareness, fatigue, and demographic factors.
Results: The model accurately captured driving takeover times. The regression results showed that increases in cognitive fatigue were associated with increased uncertainty about the need to take over, attributable to mapping observations to environmental states. Higher situation awareness was correlated with a more precise understanding of the environment and of state transitions. Higher trust was associated with increased variance in the environmental conditions associated with environmental states.
Conclusion: The results align with prior theory on trust and active inference and provide a critical connection between complex driver states and interpretable model parameters.
Application: The active inference framework can be used in the testing and validation of automated vehicle technology to calibrate design parameters and ensure safety.
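The variable-selection step in the Method rests on a spike-and-slab prior. As a rough, hypothetical sketch (not the authors' model; the standard deviations and the 0.5 prior inclusion probability below are invented for illustration), the mixture logic can be shown in a few lines of Python: a coefficient estimate is scored against a narrow "spike" component (effectively zero effect) and a diffuse "slab" component (substantial effect).

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of a Gaussian with mean mu and standard deviation sigma."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def inclusion_probability(beta_hat, spike_sd=0.01, slab_sd=1.0, prior_inclusion=0.5):
    """Posterior probability that a coefficient estimate belongs to the
    diffuse 'slab' (a substantial effect) rather than the near-zero 'spike'.
    All parameter values are illustrative, not taken from the study."""
    slab = prior_inclusion * normal_pdf(beta_hat, 0.0, slab_sd)
    spike = (1.0 - prior_inclusion) * normal_pdf(beta_hat, 0.0, spike_sd)
    return slab / (slab + spike)

near_zero = inclusion_probability(0.001)  # tiny estimate: attributed to the spike
large = inclusion_probability(0.8)        # sizable estimate: attributed to the slab
```

With these toy settings, `near_zero` is close to 0 and `large` is close to 1, which is how such a prior flags which model parameters substantially correlate with trust, situation awareness, or fatigue.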
2.
Hansen A, Kiely K, Attuquayefio T, Hosking D, Regan M, Eramudugolla R, Ross LA, Anstey KJ. Assessment of the application of technology acceptance measures to older drivers' acceptance of advanced driver-assistance systems. Appl Ergon 2025;125:104474. [PMID: 39893764] [DOI: 10.1016/j.apergo.2025.104474]
Abstract
Older adults' road safety is a growing concern given the ageing population and the increasing number of licensed older drivers. Advanced Driver-Assistance Systems (ADAS) are designed to improve safety; however, little is known about the relationship between ADAS use and older adults' trust in and acceptance of these systems. The purpose of this study was to assess an instrument measuring older drivers' acceptance of and trust in ADAS. A survey of 1008 older Australian drivers (M = 72.1 years, SD = 6.94) found an overwhelmingly positive attitude towards ADAS; however, trust in the systems was low, and drivers had concerns about privacy, safety, and system failure. The Partial Automation Acceptance Scale was validated, producing a four-factor model measuring attitudes towards ADAS, attitudes towards technology, trust, and perceptions of risk. Multiple regression showed that three of the four factors predicted use of ADAS, providing preliminary evidence of the validity and reliability of the scale.
Affiliation(s)
- Abigail Hansen
- School of Psychology, University of New South Wales, Kensington, NSW, 2052, Australia; Neuroscience Research Australia, 139 Barker St, Randwick, 2031, Australia; University of New South Wales, Ageing Futures Institute, Kensington, NSW, 2052, Australia.
- Kim Kiely
- School of Psychology, University of New South Wales, Kensington, NSW, 2052, Australia; University of New South Wales, Ageing Futures Institute, Kensington, NSW, 2052, Australia; School of Health and Society and School of Mathematics and Applied Statistics, University of Wollongong, Wollongong, Australia
- Tuki Attuquayefio
- School of Psychology, University of New South Wales, Kensington, NSW, 2052, Australia; School of Psychology, Western Sydney University, Australia
- Diane Hosking
- National Seniors Australia, Level 18 215 Adelaide St, Brisbane, QLD, 4000, Australia
- Michael Regan
- Research Centre for Integrated Transport Innovation, University of New South Wales, Kensington, NSW, 2052, Australia
- Ranmalee Eramudugolla
- School of Psychology, University of New South Wales, Kensington, NSW, 2052, Australia; Neuroscience Research Australia, 139 Barker St, Randwick, 2031, Australia
- Lesley A Ross
- Institute for Engaged Aging and Department of Psychology, Clemson University, 418 Brackett Hall, Clemson, SC, 29634, USA
- Kaarin J Anstey
- School of Psychology, University of New South Wales, Kensington, NSW, 2052, Australia; Neuroscience Research Australia, 139 Barker St, Randwick, 2031, Australia; University of New South Wales, Ageing Futures Institute, Kensington, NSW, 2052, Australia
3.
Yuan M, Yu R. Exploring the influential factors of initial trust in autonomous cars. Ergonomics 2024:1-19. [PMID: 39671328] [DOI: 10.1080/00140139.2024.2439915]
Abstract
Initial trust is one of the critical factors influencing the acceptance of and reliance on autonomous cars (ACs). This study identified the determinants of initial trust in ACs and explored the relationships between them using structural equation modelling. A questionnaire survey obtained demographic information, personality traits, design features, task scenarios, and human perception factors from 101 participants without prior interaction with ACs. The results showed that perceived safety (0.716), capability (0.222), and external locus of control (0.101) were the main positive factors fostering initial trust in ACs, while task risk (-0.349) was the main negative factor. Multigroup analysis demonstrated that respondents' previous experience with driver-assistance systems encouraged the development of initial trust. The results can provide guidelines for the design and promotion of ACs to develop individuals' initial trust in them.
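The standardized path coefficients reported in the abstract can be read as weights on z-scored predictors of initial trust. The sketch below is only an illustrative linear reading of such a structural model, not the authors' fitted equation, and the scenario values are invented:

```python
# Standardized path coefficients as reported in the abstract (direct effects
# on initial trust). Combining them additively is an illustrative reading of
# a structural equation model, not the authors' fitted model.
PATHS = {
    "perceived_safety": 0.716,
    "capability": 0.222,
    "external_locus_of_control": 0.101,
    "task_risk": -0.349,
}

def predicted_initial_trust(z_scores):
    """Weighted sum of standardized predictors (hypothetical linear reading)."""
    return sum(PATHS[name] * z for name, z in z_scores.items())

# Hypothetical driver: one SD above the mean on safety and capability,
# average on locus of control and task risk.
score = predicted_initial_trust({
    "perceived_safety": 1.0,
    "capability": 1.0,
    "external_locus_of_control": 0.0,
    "task_risk": 0.0,
})  # 0.716 + 0.222 = 0.938 standardized units above mean trust
```

The sign structure makes the abstract's claim concrete: raising perceived task risk by one SD alone would lower the predicted score by 0.349 units.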
Affiliation(s)
- Minhui Yuan
- Department of Industrial Engineering, Tsinghua University, Beijing, China
- Ruifeng Yu
- Department of Industrial Engineering, Tsinghua University, Beijing, China
4.
Martins H, Romeiro J, Casaleiro T, Vieira M, Caldeira S. Insights on spirituality and bereavement: A systematic review of qualitative studies. J Clin Nurs 2024;33:1593-1603. [PMID: 38345102] [DOI: 10.1111/jocn.17052]
Abstract
Aim: To synthesize the experiences related to spirituality of those on a bereavement journey, as reported in primary qualitative studies.
Design: A systematic review of qualitative studies.
Data sources: The search was carried out in March 2019 and updated in January 2023 in online databases including CINAHL, MEDLINE, PsycINFO, MedicLatina, LILACS, SciELO, and Academic Search Complete. The search strategy did not restrict publication dates. The quality of the studies was assessed, and a thematic synthesis was performed.
Methods: The review was conducted according to Saini and Shlonsky's methodology.
Reporting method: PRISMA checklist.
Results: The review included 33 articles. Most of the studies were phenomenological and focused on parents' and families' experiences of bereavement. Seven significant categories emerged, corresponding to unmet spiritual needs during the grieving process. Two major categories were identified regarding the role of spirituality in bereavement: spirituality as a process and spirituality as an outcome.
Conclusion: In clinical practice, attention to spirituality and the provision of spiritual care are critical to guaranteeing a holistic approach for those experiencing bereavement.
Implications: The findings could foster awareness among healthcare professionals that the spiritual dimension should be included in clinical practice to provide holistic care and enhance healing in bereavement.
No patient or public contribution: This is a systematic review.
Affiliation(s)
- Helga Martins
- Post Doctoral Program in Integral Human Development, CADOS, Universidade Católica Portuguesa, Lisbon, Portugal
- Polytechnic Institute of Beja, Beja, Portugal
- Centre for Interdisciplinary Research in Health, Faculty of Health Sciences and Nursing, Universidade Católica Portuguesa, Lisbon, Portugal
- Joana Romeiro
- Post Doctoral Program in Integral Human Development, CADOS, Universidade Católica Portuguesa, Lisbon, Portugal
- Centre for Interdisciplinary Research in Health, Faculty of Health Sciences and Nursing, Universidade Católica Portuguesa, Lisbon, Portugal
- Tiago Casaleiro
- Centre for Interdisciplinary Research in Health, Faculty of Health Sciences and Nursing, Universidade Católica Portuguesa, Lisbon, Portugal
- Margarida Vieira
- Centre for Interdisciplinary Research in Health, Faculty of Health Sciences and Nursing, Universidade Católica Portuguesa, Porto, Portugal
- Sílvia Caldeira
- Centre for Interdisciplinary Research in Health, Faculty of Health Sciences and Nursing, Universidade Católica Portuguesa, Lisbon, Portugal
5.
Li Y, Wu B, Huang Y, Luan S. Developing trustworthy artificial intelligence: insights from research on interpersonal, human-automation, and human-AI trust. Front Psychol 2024;15:1382693. [PMID: 38694439] [PMCID: PMC11061529] [DOI: 10.3389/fpsyg.2024.1382693]
Abstract
The rapid advancement of artificial intelligence (AI) has impacted society in many ways. Alongside this progress, concerns such as privacy violations, discriminatory bias, and safety risks have also surfaced, highlighting the need for the development of ethical, responsible, and socially beneficial AI. In response, the concept of trustworthy AI has gained prominence, and several guidelines for developing trustworthy AI have been proposed. Against this background, we demonstrate the significance of psychological research in identifying factors that contribute to the formation of trust in AI. Specifically, we review research findings on interpersonal, human-automation, and human-AI trust from the perspective of a three-dimensional framework (i.e., the trustor, the trustee, and their interactive context). The framework synthesizes common factors related to trust formation and maintenance across different trust types. These factors point to the foundational requirements for building trustworthy AI and provide pivotal guidance for its development, which also involves communication, education, and training for users. We conclude by discussing how insights from trust research can help enhance AI's trustworthiness and foster its adoption and application.
Affiliation(s)
- Yugang Li
- CAS Key Laboratory for Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of the Chinese Academy of Sciences, Beijing, China
- Baizhou Wu
- CAS Key Laboratory for Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of the Chinese Academy of Sciences, Beijing, China
- Yuqi Huang
- CAS Key Laboratory for Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of the Chinese Academy of Sciences, Beijing, China
- Shenghua Luan
- CAS Key Laboratory for Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of the Chinese Academy of Sciences, Beijing, China
6.
Manchon JB, Bueno M, Navarro J. Calibration of Trust in Automated Driving: A Matter of Initial Level of Trust and Automated Driving Style? Hum Factors 2023;65:1613-1629. [PMID: 34861787] [DOI: 10.1177/00187208211052804]
Abstract
Objective: Automated driving is becoming a reality, and such technology raises new concerns about human-machine interaction on the road. This paper investigates the factors influencing trust calibration and its evolution over time.
Background: Numerous studies have shown that trust is a determinant of automation use and misuse, particularly in the automated driving context.
Method: Sixty-one drivers participated in an experiment designed to better understand the influence of the initial level of trust (Trustful vs. Distrustful) on drivers' behaviors and trust calibration during two sessions of simulated automated driving. The automated driving style was manipulated as positive (smooth) or negative (abrupt) to investigate early human-machine interactions. Trust was assessed over time through questionnaires. Drivers' visual behaviors and take-over performance during an unplanned take-over request were also investigated.
Results: Trust increased over time for both Trustful and Distrustful drivers, regardless of the automated driving style. Trust also fluctuated over time depending on the specific events handled by the automated vehicle. Take-over performance was influenced neither by the initial level of trust nor by the automated driving style.
Conclusion: Trust in automated driving increases rapidly when drivers experience such a system. The initial level of trust seems to be crucial in further trust calibration and modulates the effect of automation performance. Long-term trust evolution suggests that experience modifies drivers' mental models of automated driving systems.
Application: In the automated driving context, trust calibration is a decisive question for guiding the proper use of such systems and for road safety.
Affiliation(s)
- J B Manchon
- VEDECOM Institute, Versailles, France, and University Lyon 2, Bron, France
- Jordan Navarro
- University Lyon 2, Bron, France, and Institut Universitaire de France, Paris
7.
Qu J, Zhou R, Zhang Y, Ma Q. Understanding trust calibration in automated driving: the effect of time, personality, and system warning design. Ergonomics 2023;66:2165-2181. [PMID: 36920361] [DOI: 10.1080/00140139.2023.2191907]
Abstract
In a future of human-automation co-driving, the dynamics of trust must be considered. This paper explores how trust changes over time and how multiple factors (time, trust propensity, neuroticism, and takeover warning design) jointly calibrate trust. We conducted two driving simulator experiments measuring drivers' trust before, during, and after the experiment under takeover scenarios. The results showed that trust in automation increased during short-term interactions and dropped after four months, though it remained higher than pre-experiment trust. Initial trust and trust propensity had a stable impact on trust. Drivers trusted the system more with the two-stage (MR + TOR) warning design than with the one-stage (TOR) design. Neuroticism had a significant effect on the countdown warning compared with the content warning.
Practitioner summary: The results provide new data and knowledge for trust calibration in takeover scenarios. The findings can help in designing more reasonable automated driving systems for long-term human-automation interactions.
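The reported trajectory, trust rising during short-term interaction and then partially dropping after a four-month gap while staying above baseline, can be mimicked by a toy exponential-approach model. This is purely a sketch of the qualitative pattern; all parameters below are invented for illustration and are not fitted to the study's data:

```python
def update_trust(trust, target, rate):
    """One step of exponential approach toward a target trust level."""
    return trust + rate * (target - trust)

# Hypothetical trajectory: trust rises toward the experienced system's
# apparent reliability during interaction, then decays part-way back
# toward the initial level during a four-month gap without use.
trust = 0.4               # pre-experiment (baseline) trust, invented value
for _ in range(10):       # short-term interaction episodes
    trust = update_trust(trust, target=0.9, rate=0.3)
post_interaction = trust  # elevated after repeated positive experience

for _ in range(4):        # months without use: decay toward baseline
    trust = update_trust(trust, target=0.4, rate=0.2)
post_gap = trust          # drops, but does not return to baseline
```

The two invariants the abstract describes fall out of the model: `post_gap < post_interaction` (trust drops over the gap) while `post_gap > 0.4` (it stays above pre-experiment trust).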
Affiliation(s)
- Jianhong Qu
- School of Economics and Management, Beihang University, Beijing, P. R. China
- Ronggang Zhou
- School of Economics and Management, Beihang University, Beijing, P. R. China
- Yaping Zhang
- School of Economics and Management, Beihang University, Beijing, P. R. China
- Qianli Ma
- School of Economics and Management, Beihang University, Beijing, P. R. China
8.
Hidalgo-Muñoz AR, Jallais C, Evennou M, Fort A. Driving anxiety and anxiolytics while driving: Their impacts on behaviour and cognition behind the wheel. Heliyon 2023;9:e16008. [PMID: 37305507] [PMCID: PMC10256919] [DOI: 10.1016/j.heliyon.2023.e16008]
Abstract
Introduction: The interaction between road safety and drivers' mental health is an important issue in transportation and safety research. The present review deals specifically with the link between anxiety and driving from two complementary points of view.
Method: A systematic review of primary studies, following the PRISMA statement, was carried out in four databases: Scopus, Web of Science, Transport Research International Documentation, and PubMed. A total of 29 papers were retained. On the one hand, we present a systematic review of research articles exploring the cognitive and behavioural effects of driving anxiety, regardless of its onset, when the people concerned have to drive. The second goal of the review is to compile the available literature on the influence of legal anti-anxiety drugs on actual driving.
Results: Eighteen papers were retained for the first question; their main findings show that exaggeratedly cautious driving, negative feelings, and avoidance are associated with driving anxiety. Most conclusions were drawn from self-reported questionnaires, and little is known about effects in situ. Concerning the second question, benzodiazepines are the most studied legal drugs. They affect different attentional processes and can slow reaction times, depending on the population and treatment features.
Conclusions: The two standpoints included in this work allow us to propose lines of research into aspects that have not been explored in depth concerning people who either feel apprehensive about driving or who drive under the effects of anxiolytics.
Practical applications: The study of driving anxiety may be crucial for estimating its consequences for traffic safety and for designing effective awareness campaigns. Standard evaluations of driving anxiety and exhaustive research into the extent of anxiolytics use are also important considerations for traffic policy.
Affiliation(s)
- Antonio R. Hidalgo-Muñoz
- Department of Basic Psychology, Psychobiology and Methodology of Behavioural Science, University of Salamanca, Salamanca, Spain
- Instituto de Neurociencias de Castilla y León, University of Salamanca, Salamanca, Spain
- Christophe Jallais
- University Gustave Eiffel, University Lyon, TS2-LESCOT, F-69675 Lyon, France
- Myriam Evennou
- University Gustave Eiffel, University Lyon, TS2-LESCOT, F-69675 Lyon, France
- Alexandra Fort
- University Gustave Eiffel, University Lyon, TS2-LESCOT, F-69675 Lyon, France
9.
On the Role of Beliefs and Trust for the Intention to Use Service Robots: An Integrated Trustworthiness Beliefs Model for Robot Acceptance. Int J Soc Robot 2023. [DOI: 10.1007/s12369-022-00952-4]
Abstract
With the increasing abilities of robots, the prediction of user decisions needs to go beyond the usability perspective, for example by integrating distinctive beliefs and trust. In an online study (N = 400), the relationship between general trust in service robots and trust in a specific robot was first investigated, supporting the role of general trust as a starting point for trust formation. On this basis, we explored, both for general acceptance of service robots and acceptance of a specific robot, whether technology acceptance models can be meaningfully complemented by specific beliefs from the theory of planned behavior (TPB) and the trust literature to enhance understanding of robot adoption. First, models integrating all belief groups were fitted, providing essential variance predictions at both levels (general and specific) and a mediation of beliefs via trust to the intention to use. The omission of the performance expectancy and reliability beliefs was compensated for by more distinctive beliefs. In the final model (TB-RAM), effort expectancy and competence predicted trust at the general level. For a specific robot, competence and social influence predicted trust. Moreover, the effect of social influence on trust was moderated by the robot's application area (public > private), supporting situation-specific belief relevance in robot adoption. Taken together, in line with the TPB, these findings support a mediation cascade from beliefs via trust to the intention to use. Furthermore, incorporating distinctive instead of broad beliefs is promising for increasing the explanatory and practical value of acceptance modeling.
10.
Kraus J, Babel F, Hock P, Hauber K, Baumann M. The trustworthy and acceptable HRI checklist (TA-HRI): questions and design recommendations to support a trustworthy and acceptable design of human-robot interaction. Gruppe. Interaktion. Organisation. Zeitschrift für Angewandte Organisationspsychologie (GIO) 2022. [DOI: 10.1007/s11612-022-00643-8]
Abstract
This contribution to the journal Gruppe. Interaktion. Organisation. (GIO) presents a checklist of questions and design recommendations for designing acceptable and trustworthy human-robot interaction (HRI). In order to extend the application scope of robots toward more complex contexts in the public domain and in private households, robots have to fulfill requirements regarding social interaction between humans and robots in addition to safety and efficiency. In particular, this results in recommendations for the design of the appearance, behavior, and interaction strategies of robots that can contribute to acceptance and appropriate trust. The presented checklist was derived from existing guidelines in associated fields of application, the current state of research on HRI, and the results of the BMBF-funded project RobotKoop. The trustworthy and acceptable HRI checklist (TA-HRI) contains 60 design topics with questions and design recommendations for the development and design of acceptable and trustworthy robots. The TA-HRI checklist provides a basis for discussing the design of service robots for use in public and private environments and will be continuously refined based on feedback from the community.
11.
Babel F, Kraus J, Baumann M. Findings From A Qualitative Field Study with An Autonomous Robot in Public: Exploration of User Reactions and Conflicts. Int J Soc Robot 2022. [DOI: 10.1007/s12369-022-00894-x]
Abstract
Soon, service robots will be employed in public spaces with frequent human-robot interaction (HRI). To achieve safe, trustworthy, and acceptable HRI, service robots need to be equipped with interaction strategies suitable for the robot, the user, and the context. To gain realistic insights into the initial user reactions and challenges that arise when a mechanoid, autonomous service robot is deployed in public, a field study with three data sources was conducted. In a first step, lay users' intuitive reactions to a cleaning robot at a train station were observed (N = 344). Second, passersby's preferences for HRI interaction strategies were explored in interviews (n = 54). As a third step, trust in and acceptance of the robot were assessed with questionnaires (n = 32). Identified challenges were social robot navigation in crowded places, also applicable to vulnerable passersby; inclusive communication modalities; informing staff and the public about the service robot application; and the need for conflict resolution strategies to avoid an inefficient robot (e.g., testing behavior, blocked path). This study provides insights into naive HRI in public, illustrates challenges, provides recommendations supported by the literature, and highlights aspects for future research to inspire a research agenda in the field of public HRI.
12.
Hollander C, Hartwich F, Krems JF. Looking at HMI Concepts for Highly Automated Vehicles: Permanent vs. Context-Adaptive Information Presentation. Open Psychology 2022. [DOI: 10.1515/psych-2022-0124]
Abstract
To facilitate the usage and expected benefits of higher-level automated vehicles, passengers' distrust and safety concerns should be reduced by increasing system transparency (ST), i.e., by providing driving-related information. We therefore examined the effects of ST on passengers' gaze behavior during driving, trust in automated driving, and evaluation of different human-machine interface (HMI) concepts. In a driving simulator, 50 participants experienced three identical highly automated drives under three HMI conditions: no HMI (only a conventional speedometer), a context-adaptive HMI (all system information available only in more complex situations), or a permanent HMI (all system information permanently available). Compared to driving without an HMI, the introduction of the two HMIs resulted in significantly higher usage of the center stack display (i.e., gazes toward the HMIs), accompanied by significantly higher trust ratings. The considerable differences in information availability between the context-adaptive and permanent HMIs were not reflected in similarly considerable differences in passengers' gaze behavior or the accompanying trust ratings. Additionally, user experience evaluations expressed preferences for the context-adaptive HMI. Hence, the permanent HMI did not seem to create benefits over the context-adaptive HMI, supporting the use of more economical, context-adaptive HMIs in higher-level automated vehicles.
Affiliation(s)
- Cornelia Hollander
- Chemnitz University of Technology, Cognitive and Engineering Psychology, Wilhelm-Raabe Str. 43, Chemnitz, Germany
- Franziska Hartwich
- Chemnitz University of Technology, Cognitive and Engineering Psychology, Wilhelm-Raabe Str. 43, Chemnitz, Germany
- Josef F. Krems
- Chemnitz University of Technology, Cognitive and Engineering Psychology, Wilhelm-Raabe Str. 43, Chemnitz, Germany
13.
Kraus J, Scholz D, Baumann M. What's Driving Me? Exploration and Validation of a Hierarchical Personality Model for Trust in Automated Driving. Hum Factors 2021;63:1076-1105. [PMID: 32633564] [DOI: 10.1177/0018720820922653]
Abstract
Objective: This paper presents a comprehensive investigation of personality traits related to trust in automated vehicles. A hierarchical personality model based on Mowen's (2000) 3M model is explored in a first study and replicated in a second.
Background: Trust in automation is established in a complex psychological process involving user-, system-, and situation-related variables. In this process, personality traits have been viewed as an important source of variance.
Method: Dispositional variables on three levels were included in an exploratory, hierarchical personality model (full model) of dynamic learned trust in automation, which was refined on the basis of structural equation modeling carried out in Study 1 (final model). Study 2 replicated the final model in an independent sample.
Results: In both studies, the personality model showed a good fit and explained a large proportion of variance in trust in automation. The combined evidence supports the role of extraversion, neuroticism, and self-esteem at the elemental level; affinity for technology and dispositional interpersonal trust at the situational level; and propensity to trust in automation and a priori acceptability of automated driving at the surface level in the prediction of trust in automation.
Conclusion: The findings confirm that personality plays a substantial role in trust formation and provide evidence of the involvement of user dispositions not previously investigated in relation to trust in automation: self-esteem, dispositional interpersonal trust, and affinity for technology.
Application: Implications for the personalization of information campaigns, driver training, and user interfaces for trust calibration in automated driving are discussed.
14.
Miller L, Kraus J, Babel F, Baumann M. More Than a Feeling: Interrelation of Trust Layers in Human-Robot Interaction and the Role of User Dispositions and State Anxiety. Front Psychol 2021;12:592711. [PMID: 33912098] [PMCID: PMC8074795] [DOI: 10.3389/fpsyg.2021.592711]
Abstract
With service robots becoming more ubiquitous in social life, interaction design needs to adapt to novice users and the associated uncertainty of first encounters with this technology in newly emerging environments. Trust in robots is an essential psychological prerequisite for safe and convenient cooperation between users and robots. This research focuses on the psychological processes by which user dispositions and states affect trust in robots, which in turn is expected to shape behavior and reactions in interactions with robotic systems. In a laboratory experiment, the influence of propensity to trust in automation and negative attitudes toward robots on state anxiety, trust, and comfort distance toward a robot was explored. Participants were approached by a humanoid domestic robot twice and indicated their comfort distance and trust. The results favor the differentiation and interdependence of dispositional, initial, and dynamic learned trust layers. A mediation from propensity to trust to initial learned trust via state anxiety provides insight into the psychological processes through which personality traits might affect interindividual outcomes in human-robot interaction (HRI). The findings underline the value of user characteristics as predictors of the initial approach to robots and the importance of considering users' individual learning histories regarding technology, and robots in particular.
Affiliation(s)
- Linda Miller, Department Human Factors, Institute of Psychology and Education, Ulm University, Ulm, Germany
- Johannes Kraus, Department Human Factors, Institute of Psychology and Education, Ulm University, Ulm, Germany
- Franziska Babel, Department Human Factors, Institute of Psychology and Education, Ulm University, Ulm, Germany
- Martin Baumann, Department Human Factors, Institute of Psychology and Education, Ulm University, Ulm, Germany
15
Babel F, Kraus JM, Baumann M. Development and Testing of Psychological Conflict Resolution Strategies for Assertive Robots to Resolve Human-Robot Goal Conflict. Front Robot AI 2021; 7:591448. [PMID: 33718437] [PMCID: PMC7945950] [DOI: 10.3389/frobt.2020.591448]
Abstract
As service robots become increasingly autonomous and pursue their own task-related goals, human-robot conflicts seem inevitable, especially in shared spaces. Goal conflicts can arise from anything between simple trajectory planning and complex task prioritization. For successful human-robot goal-conflict resolution, humans and robots need to negotiate their goals and priorities. To this end, the robot might be equipped with conflict resolution strategies that are assertive and effective yet still accepted by the user. In this paper, conflict resolution strategies for service robots (a public cleaning robot and a home assistant robot) are developed by transferring psychological concepts (e.g., negotiation, cooperation) to HRI. Altogether, fifteen strategies were grouped by their expected affective outcome (positive, neutral, negative). In two online experiments, the acceptability of and compliance with these conflict resolution strategies were tested with humanoid and mechanical robots in two application contexts (public: n1 = 61; private: n2 = 93). To obtain a comparative baseline, the strategies were also applied by a human. Trust, fear, arousal, and valence, as well as perceived politeness of the agent, were assessed as additional outcomes. The positive and neutral strategies were found to be more acceptable and effective than the negative ones; some negative strategies (i.e., threat, command) even led to reactance and fear. Some strategies were positively evaluated and effective only for certain agents (human or robot) or acceptable only in one of the two application contexts (i.e., approach, empathy). In the public context, acceptance was predicted by politeness and trust, and compliance by interpersonal power. Taken together, psychological conflict resolution strategies can enhance robot task effectiveness in HRI and, if applied robot-specifically and context-sensitively, are accepted by the user.
The contribution of this paper is twofold: conflict resolution strategies based on human factors and social psychology are introduced and empirically evaluated in two online studies across two application contexts, and influencing factors and requirements for the acceptance and effectiveness of robot assertiveness are discussed.
Affiliation(s)
- Franziska Babel, Department of Human Factors, Institute of Psychology and Education, Ulm University, Ulm, Germany
- Johannes M Kraus, Department of Human Factors, Institute of Psychology and Education, Ulm University, Ulm, Germany
- Martin Baumann, Department of Human Factors, Institute of Psychology and Education, Ulm University, Ulm, Germany
16
Abstract
Automated vehicles (AVs) have the potential to benefit society. Providing explanations is one approach to facilitating trust in AVs by decreasing uncertainty about automated decision-making. However, it is not clear whether explanations are equally beneficial for drivers across age groups in terms of trust and anxiety. To examine this, we conducted a mixed-design experiment with 40 participants divided into three age groups (younger, middle-aged, and older). Participants were presented with (1) no explanation, (2) an explanation given before the AV took action, (3) an explanation given after the AV took action, or (4) an explanation along with a request for permission to take action. The results highlight both commonalities and differences between age groups and have important implications for designing AV explanations and promoting trust.