1
Holbrook C, Holman D, Clingo J, Wagner AR. Overtrust in AI Recommendations About Whether or Not to Kill: Evidence from Two Human-Robot Interaction Studies. Sci Rep 2024; 14:19751. [PMID: 39231986 PMCID: PMC11375177 DOI: 10.1038/s41598-024-69771-z]
Abstract
This research explores prospective determinants of trust in the recommendations of artificial agents regarding decisions to kill, using a novel visual challenge paradigm simulating threat-identification (enemy combatants vs. civilians) under uncertainty. In Experiment 1, we compared trust in the advice of a physically embodied versus screen-mediated anthropomorphic robot, observing no effects of embodiment; in Experiment 2, we manipulated the relative anthropomorphism of virtual robots, observing modestly greater trust in the most anthropomorphic agent relative to the least. Across studies, when any version of the agent randomly disagreed, participants reversed their threat-identifications and decisions to kill in the majority of cases, substantially degrading their initial performance. Participants' subjective confidence in their decisions tracked whether the agent (dis)agreed, while both decision-reversals and confidence were moderated by appraisals of the agent's intelligence. The overall findings indicate a strong propensity to overtrust unreliable AI in life-or-death decisions made under uncertainty.
Affiliation(s)
- Colin Holbrook
- Department of Cognitive and Information Sciences, University of California, Merced, 5200 N. Lake Rd., Merced, CA, 95343, USA
- Daniel Holman
- Department of Cognitive and Information Sciences, University of California, Merced, 5200 N. Lake Rd., Merced, CA, 95343, USA
- Joshua Clingo
- Department of Cognitive and Information Sciences, University of California, Merced, 5200 N. Lake Rd., Merced, CA, 95343, USA
- Alan R Wagner
- Department of Aerospace Engineering, The Pennsylvania State University, State College, PA, 16802, USA
2
Plomin J, Schweidler P, Oehme A. Virtual reality check: a comparison of virtual reality, screen-based, and real world settings as research methods for HRI. Front Robot AI 2023; 10:1156715. [PMID: 37441227 PMCID: PMC10333925 DOI: 10.3389/frobt.2023.1156715]
Abstract
To reduce costs and effort, experiments in human-robot interaction can be carried out in Virtual Reality (VR) or in screen-based (SB) formats. However, it is not well examined whether robots are perceived and experienced in the same way in VR and SB as they are in the physical world. This study addresses this topic in a between-subjects experiment measuring trust in and engagement with a mobile service robot in a museum scenario. Measurements were taken in three different settings (the real world, VR, or a game-like SB format) and compared with an ANOVA. The results indicate that neither trust nor engagement differs depending on the experimental setting. The results imply that both VR and SB are eligible ways to explore interaction with a mobile service robot, provided some peculiarities of each medium are taken into account.
Affiliation(s)
- Jana Plomin
- Fraunhofer Fokus, Digital Public Services, Berlin, Germany
3
Zonca J, Folsø A, Sciutti A. Social Influence Under Uncertainty in Interaction with Peers, Robots and Computers. Int J Soc Robot 2023. [DOI: 10.1007/s12369-022-00959-x]
Abstract
Taking advice from others requires confidence in their competence. This is important for interaction with peers, but also for collaboration with social robots and artificial agents. Nonetheless, we do not always have access to information about others' competence or performance. In these uncertain environments, do our prior beliefs about the nature and competence of our interacting partners modulate our willingness to rely on their judgments? In a joint perceptual decision-making task, participants made perceptual judgments and observed the simulated estimates of either a human participant, a social humanoid robot or a computer. They could then modify their estimates based on this feedback. Results show that participants' beliefs about the nature of their partner biased their compliance with its judgments: participants were more influenced by the social robot than by the human and computer partners. This difference emerged strongly at the very beginning of the task and decreased with repeated exposure to empirical feedback on the partner's responses, revealing the role of prior beliefs in social influence under uncertainty. Furthermore, the results of our functional task suggest an important difference between human-human and human-robot interaction in the absence of overt socially relevant signals from the partner: the former is modulated by social normative mechanisms, whereas the latter is guided by purely informational mechanisms linked to the perceived competence of the partner.
4
Croijmans I, van Erp L, Bakker A, Cramer L, Heezen S, Van Mourik D, Weaver S, Hortensius R. No Evidence for an Effect of the Smell of Hexanal on Trust in Human-Robot Interaction. Int J Soc Robot 2022; 15:1-10. [PMID: 36128582 PMCID: PMC9477175 DOI: 10.1007/s12369-022-00918-6]
Abstract
The level of interpersonal trust among people is partially determined through the sense of smell. Hexanal, a molecule whose smell resembles freshly cut grass, can increase trust between people. Here, we ask whether smell can be leveraged to facilitate human-robot interaction, and test whether hexanal also increases the level of trust during collaboration with a social robot. In a preregistered double-blind, placebo-controlled study, we tested whether trial-by-trial and general trust during perceptual decision making in collaboration with a social robot is affected by hexanal, across two samples (n = 46 and n = 44). It was hypothesized that unmasked hexanal, and hexanal masked by eugenol, a molecule with a smell resembling clove, would increase the level of trust in human-robot interaction compared to eugenol alone or a control condition consisting of only the neutral-smelling solvent propylene glycol. Contrasting previous findings on human interaction, no significant effect of unmasked or eugenol-masked hexanal on trust in robots was observed. These findings indicate that the conscious or nonconscious impact of smell on trust might not generalise to interactions with social robots. One explanation could be the category- and context-dependency of smell, leading to a mismatch between the natural smell of hexanal, a smell also occurring in human sweat, and the mechanical, physical or mental representation of the robot.
Affiliation(s)
- Ilja Croijmans
- Centre for Language Studies, Radboud University, Erasmusplein 1, 6525 HT Nijmegen, The Netherlands
- Laura van Erp
- Centre for Language Studies, Radboud University, Erasmusplein 1, 6525 HT Nijmegen, The Netherlands
- Annelie Bakker
- Centre for Language Studies, Radboud University, Erasmusplein 1, 6525 HT Nijmegen, The Netherlands
- Lara Cramer
- Centre for Language Studies, Radboud University, Erasmusplein 1, 6525 HT Nijmegen, The Netherlands
- Sophie Heezen
- Centre for Language Studies, Radboud University, Erasmusplein 1, 6525 HT Nijmegen, The Netherlands
- Dana Van Mourik
- Centre for Language Studies, Radboud University, Erasmusplein 1, 6525 HT Nijmegen, The Netherlands
- Sterre Weaver
- Centre for Language Studies, Radboud University, Erasmusplein 1, 6525 HT Nijmegen, The Netherlands
- Ruud Hortensius
- Centre for Language Studies, Radboud University, Erasmusplein 1, 6525 HT Nijmegen, The Netherlands
5
Babel F, Kraus J, Baumann M. Findings From A Qualitative Field Study with An Autonomous Robot in Public: Exploration of User Reactions and Conflicts. Int J Soc Robot 2022. [DOI: 10.1007/s12369-022-00894-x]
Abstract
Soon, service robots will be employed in public spaces with frequent human-robot interaction (HRI). To achieve safe, trustworthy and acceptable HRI, service robots need to be equipped with interaction strategies suitable for the robot, user, and context. To gain realistic insights into the initial user reactions and challenges that arise when a mechanoid, autonomous service robot is deployed in public, a field study with three data sources was conducted. In a first step, lay users' intuitive reactions to a cleaning robot at a train station were observed (N = 344). Second, passersby's preferences for HRI interaction strategies were explored in interviews (n = 54). As a third step, trust in and acceptance of the robot were assessed with questionnaires (n = 32). Identified challenges were social robot navigation in crowded places that also accommodates vulnerable passersby, inclusive communication modalities, informing staff and the public about the service robot application, and the need for conflict resolution strategies to avoid an inefficient robot (e.g., testing behavior, blocked path). This study provides insights into naive HRI in public, illustrates challenges, provides recommendations supported by the literature, and highlights aspects for future research to inspire a research agenda in the field of public HRI.
6
Insights into the relationship between usability and willingness to use a robot in the future workplaces: Studying the mediating role of trust and the moderating roles of age and STARA. PLoS One 2022; 17:e0268942. [PMID: 35657928 PMCID: PMC9165858 DOI: 10.1371/journal.pone.0268942]
Abstract
Background and aim
Human-robot collaboration is a key component of the fourth industrial revolution concept. Workers' willingness to collaborate with industrial robots is a basic requirement for efficient and effective interaction. This study analyzed the roles of human-robot trust and technology affinity as mediators of the relationship between robot usability and workers' willingness. As other critical variables, the moderating roles of age and STARA were also examined.
Materials and methods
This study included 400 workers from a car company who interacted with industrial robots in their daily work activities. After the questionnaires' validity and reliability were examined, willingness to use robots and robot usability were specified as the main variables, with human-robot trust and worker technology affinity modeled as mediators in AMOS software. The bootstrapping method was used to evaluate indirect relationships. A set of goodness-of-fit indices was used to assess how well the proposed model fit the data.
Results
Based on the model fit indices, an overall satisfactory fit was obtained for the direct and indirect relationships between robot usability and workers' willingness to use robots (with human-robot trust as mediator). Workers' age and fear of Smart Technology, Artificial Intelligence, Robotics, and Algorithms (STARA) were identified as moderators of the relationship between usability and willingness.
Conclusion
Attention to robot usability and to the role of workers' trust in robots appears to be required to ensure workers' willingness to use robots and the success of human-robot collaboration in future workplaces. As workers age and their fear of robots grows, usability can play a larger role in increasing their willingness to put robots to work.
7
Zonca J, Folsø A, Sciutti A. The role of reciprocity in human-robot social influence. iScience 2021; 24:103424. [PMID: 34877490 PMCID: PMC8633024 DOI: 10.1016/j.isci.2021.103424]
Abstract
Humans are constantly influenced by others' behavior and opinions. Importantly, social influence among humans is shaped by reciprocity: we are more likely to follow the advice of someone who has taken our opinions into consideration. In the current work, we investigate whether reciprocal social influence can emerge while interacting with a social humanoid robot. In a joint task, a human participant and a humanoid robot made perceptual estimates and could then overtly modify them after observing the partner's judgment. Results show that endowing the robot with the ability to express and modulate its own level of susceptibility to the human's judgments represented a double-edged sword. On the one hand, participants lost confidence in the robot's competence when the robot was following their advice; on the other hand, participants were unwilling to disclose their lack of confidence to the susceptible robot, suggesting the emergence of reciprocal mechanisms of social influence supporting human-robot collaboration. Highlights: if a social robot is susceptible to our advice, we lose confidence in it; however, the robot's susceptibility does not deteriorate social influence; these effects do not appear during interaction with a computer; susceptible robots can promote reciprocity but also hinder social learning.
Affiliation(s)
- Joshua Zonca
- Cognitive Architecture for Collaborative Technologies (CONTACT) Unit, Italian Institute of Technology, Via Enrico Melen, 83, 16152 Genoa, GE, Italy
- Corresponding author
- Anna Folsø
- Department of Informatics, Bioengineering, Robotics and Systems Engineering, University of Genoa, 16145 Genoa, Italy
- Alessandra Sciutti
- Cognitive Architecture for Collaborative Technologies (CONTACT) Unit, Italian Institute of Technology, Via Enrico Melen, 83, 16152 Genoa, GE, Italy
8
Emotion-Driven Analysis and Control of Human-Robot Interactions in Collaborative Applications. Sensors 2021; 21:4626. [PMID: 34300366 PMCID: PMC8309492 DOI: 10.3390/s21144626]
Abstract
The utilization of robotic systems has increased over the last decade, driven by advances in the computational capabilities, communication systems, and information systems of manufacturing, as reflected in the concept of Industry 4.0. Furthermore, robotic systems are continuously required to address new challenges in the industrial and manufacturing domain, such as keeping humans in the loop. Briefly, the humans-in-the-loop concept focuses on closing the gap between humans and machines by introducing a safe and trustworthy environment in which human workers can work side by side with robots and machines. It aims at increasing the engagement of the human as the automation level increases, rather than replacing the human, which can be nearly impossible in some applications. Consequently, collaborative robots (cobots) have been created to allow physical interaction with the human worker. However, these cobots still lack the ability to recognize the human emotional state. In this regard, this paper presents an approach for adapting cobot parameters to the emotional state of the human worker. The approach utilizes electroencephalography (EEG) technology for digitizing and understanding the human emotional state. Afterwards, the parameters of the cobot are instantly adjusted to keep the human emotional state in a desirable range, which increases confidence and trust between the human and the cobot. In addition, the paper includes a review of technologies and methods for emotional sensing and recognition. Finally, the approach is tested on an ABB YuMi cobot with a commercially available EEG headset.
9
Small Talk with a Robot? The Impact of Dialog Content, Talk Initiative, and Gaze Behavior of a Social Robot on Trust, Acceptance, and Proximity. Int J Soc Robot 2021. [DOI: 10.1007/s12369-020-00730-0]
10
Oksanen A, Savela N, Latikka R, Koivula A. Trust Toward Robots and Artificial Intelligence: An Experimental Approach to Human-Technology Interactions Online. Front Psychol 2020; 11:568256. [PMID: 33343447 PMCID: PMC7744307 DOI: 10.3389/fpsyg.2020.568256]
Abstract
Robotization and artificial intelligence (AI) are expected to change societies profoundly. Trust is an important factor of human-technology interactions, as robots and AI increasingly contribute to tasks previously handled by humans. Currently, there is a need for studies investigating trust toward AI and robots, especially in first-encounter meetings. This article reports findings from a study investigating trust toward robots and AI in an online trust game experiment. The trust game manipulated the hypothetical opponents, which were described as either AI or robots; these were compared with control group opponents described using only a human name or a nickname. Participants (N = 1077) lived in the United States. Describing opponents as robots or AI did not impact participants' trust toward them. The robot called jdrx894 was the most trusted opponent, and opponents named "jdrx894" were trusted more than opponents called "Michael." Further analysis showed that having a degree in technology or engineering, exposure to robots online, and robot use self-efficacy predicted higher trust toward robots and AI. Of the Big Five personality characteristics, openness to experience predicted higher trust and conscientiousness predicted lower trust. The results suggest that trust in robots and AI is contextual and also depends on individual differences and knowledge of technology.
Affiliation(s)
- Atte Oksanen
- Faculty of Social Sciences, Tampere University, Tampere, Finland
- Nina Savela
- Faculty of Social Sciences, Tampere University, Tampere, Finland
- Rita Latikka
- Faculty of Social Sciences, Tampere University, Tampere, Finland
- Aki Koivula
- Faculty of Social Sciences, University of Turku, Turku, Finland
11
Gutzwiller RS, Chiou EK, Craig SD, Lewis CM, Lematta GJ, Hsiung CP. Positive bias in the ‘Trust in Automated Systems Survey’? An examination of the Jian et al. (2000) scale. Proc Hum Factors Ergon Soc Annu Meet 2019. [DOI: 10.1177/1071181319631201]
Abstract
Measuring trust in technology is a mainstay of Human Factors research. While trust may not perfectly predict reliance on technology or compliance with alarm signals, it is routinely used as a design consideration and assessment goalpost. Several methods of measuring trust have been employed in the past decades, but one self-report measure stands out due to its popular use: the Trust in Automated Systems Survey (Jian, Bisantz, & Drury, 2000). We conducted a study to assess whether the survey could create biased responses, and found evidence that the original scale is in fact skewed toward positive ratings. A review of the literature revealed the survey has been used in unaltered form across at least 100 different reports and remains frequently administered; the potential impact of this bias may therefore be widespread. Future directions, considerations, and caveats for our assessment, and for using this scale, are discussed.
12
Althoff M, Giusti A, Liu SB, Pereira A. Effortless creation of safe robots from modules through self-programming and self-verification. Sci Robot 2019; 4:eaaw1924. [PMID: 33137767 DOI: 10.1126/scirobotics.aaw1924]
Abstract
Industrial robots cannot be reconfigured to optimally fulfill a given task and often have to be caged to guarantee human safety. Consequently, production processes are meticulously planned so that they last for long periods to make automation affordable. However, the ongoing trend toward mass customization and small-scale manufacturing requires purchasing new robots on a regular basis to cope with frequently changing production. Modular robots are a natural answer: Robots composed of standardized modules can be easily reassembled for new tasks, can be quickly repaired by exchanging broken modules, and are cost-effective by mass-producing standard modules usable for a large variety of robot types. Despite these advantages, modular robots have not yet left research laboratories because an expert must reprogram each new robot after assembly, rendering reassembly impractical. This work presents our set of interconnectable modules (IMPROV), which programs and verifies the safety of assembled robots themselves. Experiments show that IMPROV robots retained the same control performance as nonmodular robots, despite their reconfigurability. With respect to human-robot coexistence, our user study shows a reduction of robot idle time by 36% without compromising on safety using our self-verification concept compared with current safety standards. We believe that by using self-programming and self-verification, modular robots can transform current automation practices.
Affiliation(s)
- M Althoff
- Cyber-Physical Systems Group, Technical University of Munich, 85748 Garching, Germany
- A Giusti
- Cyber-Physical Systems Group, Technical University of Munich, 85748 Garching, Germany
- Team of Automation and Mechatronics, Fraunhofer Italia Research, Bolzano 39100, Italy
- S B Liu
- Cyber-Physical Systems Group, Technical University of Munich, 85748 Garching, Germany
- A Pereira
- Cyber-Physical Systems Group, Technical University of Munich, 85748 Garching, Germany
- Institute of Robotics and Mechatronics, German Aerospace Center (DLR), 82234 Wessling, Germany
13
Aroyo AM, Rea F, Sandini G, Sciutti A. Trust and Social Engineering in Human Robot Interaction: Will a Robot Make You Disclose Sensitive Information, Conform to Its Recommendations or Gamble? IEEE Robot Autom Lett 2018. [DOI: 10.1109/lra.2018.2856272]
14
Calitz AP, Poisat P, Cullen M. The future African workplace: The use of collaborative robots in manufacturing. South African Journal of Human Resource Management 2017. [DOI: 10.4102/sajhrm.v15i0.901]
Abstract
Orientation: Industry 4.0 promotes technological innovations and human-robot collaboration (HRC). Human-robot interaction (HRI) and HRC on the manufacturing assembly line have been implemented in numerous advanced production environments worldwide. Collaborative robots (Cobots) are increasingly being used as collaborators with humans in factory production and assembly environments.
Research purpose: To investigate the current use and future implementation of Cobots worldwide and their specific impact on the African workforce.
Motivation for the study: To explore the gap between the international implementation of Cobots and the potential implementation and impact on the African manufacturing and assembly environment, and specifically on the African workforce.
Research design, approach and method: The study features a qualitative research design. An open-ended survey was conducted amongst leading manufacturing companies in South Africa to determine the status and future implementation of Cobot practices. Thematic analysis and content analysis were conducted using ATLAS.ti.
Main findings: African businesses were aware of international business trends regarding Cobot implementation and of the possible impact of Cobots on the African workforce. Factors specifically highlighted in this study are fear of retrenchment, human-Cobot trust and the African culture.
Practical implications and value-add: This study provides valuable background on the international status of Cobot implementation and the possible impact on the African workforce. It highlights the importance of building employee trust, providing relevant training and addressing the fear of retrenchment amongst employees.
15
Abstract
Increasingly autonomous machines may lead to issues in human-automation systems that go beyond the typical concerns of reliance and compliance. This study used an interaction-oriented approach that considers interdependence in coordinating and cooperating on a joint task. A shared-resource microworld environment was developed to assess how changes in environmental demands and agent behavior affect cooperation and system performance. Seventy-two participants were recruited to perform a scheduling task that required coordination with a cooperative and a relatively uncooperative automated agent. Cooperative automation enhanced performance because it provided more resources to the person and because the person provided more resources to the automation. Considering interdependence theory and the associated structure, signal, strategy, and sequence of human-automation interaction can guide design for appropriate trust and cooperation.
16
Smith MA, Allaham MM, Wiese E. Trust in Automated Agents is Modulated by the Combined Influence of Agent and Task Type. Proc Hum Factors Ergon Soc Annu Meet 2016. [DOI: 10.1177/1541931213601046]
Abstract
Trust in automation is an important topic in the field of human factors and has a substantial impact on both attitudes towards and performance with automated systems. One variable that has been shown to influence trust is the degree of human-likeness displayed by the automated system, with the main finding being that increased human-like appearance leads to increased ratings of trust. In the current study, we investigate whether humanness unanimously leads to higher trust or whether the degree to which an agent is trusted depends on context variables (i.e., task type). For that purpose, we created a task with a social component (i.e., judging emotional states from the eye region) and an analytical component (i.e., a mathematical task) and measured how strongly participants complied with human, avatar or computer agents when performing the social versus the analytical version with them. We hypothesized that human-like agents are trusted more on social tasks, while machine-like agents are trusted more on analytical tasks. In line with our hypothesis, the results show that human agents are in general not trusted more than automated agents, but that the degree to which an agent is trusted depends on its anticipated expertise for a given task. The findings suggest that when designing automated systems intended to interact with humans, the degree of humanness of the agent needs to match the degree to which the task requires social skills.
17
18
Schaefer KE, Sanders TL, Yordon RE, Billings DR, Hancock P. Classification of Robot Form: Factors Predicting Perceived Trustworthiness. Proc Hum Factors Ergon Soc Annu Meet 2012. [DOI: 10.1177/1071181312561308]
Abstract
Many factors influence the perceived usability of robots, including attributes of the human user, the environment, and the robot itself. Traditionally, the primary focus of research has been on performance-based characteristics of the robot for the purposes of classification, design, and understanding human-robot trust. In this work, we examine human perceptions of the aesthetic dimensions of a variety of robot domains to gain insight into the impact of physical form on the perceived trustworthiness that arises prior to human-robot interaction. Results show that physical form does matter when predicting the initial trustworthiness of a robot, primarily through the perceived intelligence and classification of the robot.
Affiliation(s)
- P.A. Hancock
- Institute for Simulation & Training, University of Central Florida
- Department of Psychology, University of Central Florida