1
Yi B, Cao H, Song X, Wang J, Zhao S, Guo W, Cao D. How Can the Trust-Change Direction be Measured and Identified During Takeover Transitions in Conditionally Automated Driving? Using Physiological Responses and Takeover-Related Factors. Human Factors 2024;66:1276-1301. PMID: 36625335. DOI: 10.1177/00187208221143855.
Abstract
OBJECTIVE This paper proposes an objective method to measure and identify trust-change directions during takeover transitions (TTs) in conditionally automated vehicles (AVs). BACKGROUND Takeover requests (TORs) will be recurring events in conditionally automated driving that could undermine trust and lead to inappropriate reliance on conditionally AVs, such as misuse and disuse. METHOD Thirty-four drivers engaged in a non-driving-related task completed a sequence of takeover events in a driving simulator. The relationships among drivers' physiological responses, takeover-related factors, and trust-change directions during TTs were explored by combining an unsupervised learning algorithm with statistical analyses. Furthermore, several typical machine learning methods were applied to build recognition models of trust-change directions during TTs based on takeover-related factors and physiological parameters. RESULT Combining the changes in subjective trust ratings and monitoring behavior before and after takeover can reliably measure trust-change directions during TTs. The statistical analyses showed that physiological parameters (i.e., skin conductance and heart rate) during TTs are negatively linked with trust-change directions. Drivers were more likely to gain trust during TTs when the TOR lead time was longer, takeovers were more frequent, and the scenario involved a stationary vehicle. More importantly, the F1-score of the random forest (RF) model is approximately 77.3%. CONCLUSION The features investigated and the RF model developed can accurately identify trust-change directions during TTs. APPLICATION These findings can support the development of trust monitoring systems to mitigate both drivers' overtrust and undertrust in conditionally AVs.
Affiliation(s)
- Song Zhao
- University of Waterloo, Waterloo, ON, Canada
- Dongpu Cao
- University of Waterloo, Waterloo, ON, Canada
2
Cucciniello I, Sangiovanni S, Maggi G, Rossi S. Mind Perception in HRI: Exploring Users' Attribution of Mental and Emotional States to Robots with Different Behavioural Styles. Int J Soc Robot 2023;15:867-877. PMID: 37251279. PMCID: PMC10040176. DOI: 10.1007/s12369-023-00989-z.
Abstract
Theory of Mind is crucial for understanding and predicting others' behaviour, underpinning the ability to engage in complex social interactions. Many studies have evaluated a robot's ability to attribute thoughts, beliefs, and emotions to humans during social interactions, but few have investigated humans' attribution of such capabilities to robots. This study contributes to this direction by evaluating how the cognitive and emotional capabilities that humans attribute to a robot may be influenced by the robot's behavioural characteristics during the interaction. To this end, we used the Dimensions of Mind Perception questionnaire to measure participants' perceptions of three robot behavioural styles, namely Friendly, Neutral, and Authoritarian, which we designed and validated in our previous work. The results confirmed our hypotheses: people judged the robot's mental capabilities differently depending on the interaction style. In particular, the Friendly robot was considered more capable of experiencing positive emotions such as Pleasure, Desire, Consciousness, and Joy, whereas the Authoritarian robot was considered more capable of experiencing negative emotions such as Fear, Pain, and Rage. The interaction styles also differently affected participants' perceptions on the dimensions of Agency, Communication, and Thought.
Affiliation(s)
- Ilenia Cucciniello
- Department of Electrical Engineering and Information Technologies, University of Naples Federico II, Via Claudio 80, 80125 Naples, Italy
- Sara Sangiovanni
- Department of Electrical Engineering and Information Technologies, University of Naples Federico II, Via Claudio 80, 80125 Naples, Italy
- Gianpaolo Maggi
- Department of Psychology, University of Campania Luigi Vanvitelli, Viale Ellittico, 31, 81100 Caserta, Italy
- Silvia Rossi
- Department of Electrical Engineering and Information Technologies, University of Naples Federico II, Via Claudio 80, 80125 Naples, Italy
3
The Effects of Robots’ Altruistic Behaviours and Reciprocity on Human-robot Trust. Int J Soc Robot 2022. DOI: 10.1007/s12369-022-00899-6.
4
Step Aside! VR-Based Evaluation of Adaptive Robot Conflict Resolution Strategies for Domestic Service Robots. Int J Soc Robot 2022. DOI: 10.1007/s12369-021-00858-7.
Abstract
As domestic service robots become more prevalent and act autonomously, conflicts of interest between humans and robots become more likely, and the robot must be able to negotiate with humans effectively and appropriately to fulfill its tasks. One promising approach is the imitation of human conflict resolution behaviour and the use of persuasive requests. The presented study complements previous work by investigating combinations of assertive and polite request elements (appeal, showing benefit, command), which have been found to be effective in HRI. Each conflict resolution strategy contained two types of requests, the order of which was varied to either mimic or contradict human conflict resolution behaviour. The strategies were also adapted to the users' compliance behaviour: if the participant complied after the first request, no second request was issued. In a virtual reality experiment (N = 57) with two trials, six different strategies were evaluated regarding user compliance, robot acceptance, trust, and fear, and compared to a control condition featuring no request elements. The experiment featured a human-robot goal conflict scenario concerning household tasks at home. The results show that in trial 1, strategies reflecting human politeness and conflict resolution norms were rated as more acceptable, more polite, and more trustworthy than strategies entailing a command. No differences were found for trial 2. Overall, compliance rates were comparable to those of human-human requests and did not differ between strategies. The contribution is twofold: presenting an experimental paradigm to investigate a human-robot conflict scenario and providing a first step towards developing acceptable robot conflict resolution strategies based on human behaviour.
5
Geraci A, D'Amico A, Pipitone A, Seidita V, Chella A. Automation Inner Speech as an Anthropomorphic Feature Affecting Human Trust: Current Issues and Future Directions. Front Robot AI 2021;8:620026. PMID: 33969001. PMCID: PMC8102901. DOI: 10.3389/frobt.2021.620026.
Abstract
This paper aims to discuss the possible role of inner speech in influencing trust in human–automation interaction. Inner speech is an everyday covert inner monolog or dialog with oneself, which is essential for human psychological life and functioning as it is linked to self-regulation and self-awareness. Recently, in the field of machine consciousness, computational models using different forms of robot speech have been developed that make it possible to implement inner speech in robots. As is discussed, robot inner speech could be a new feature affecting human trust by increasing robot transparency and anthropomorphism.
Affiliation(s)
- Alessandro Geraci
- Robotics Lab, Department of Engineering, University of Palermo, Palermo, Italy; Department of Psychology, Educational Science and Human Movement, University of Palermo, Palermo, Italy
- Antonella D'Amico
- Department of Psychology, Educational Science and Human Movement, University of Palermo, Palermo, Italy
- Arianna Pipitone
- Robotics Lab, Department of Engineering, University of Palermo, Palermo, Italy
- Valeria Seidita
- Robotics Lab, Department of Engineering, University of Palermo, Palermo, Italy
- Antonio Chella
- Robotics Lab, Department of Engineering, University of Palermo, Palermo, Italy
6
Babel F, Kraus JM, Baumann M. Development and Testing of Psychological Conflict Resolution Strategies for Assertive Robots to Resolve Human-Robot Goal Conflict. Front Robot AI 2021;7:591448. PMID: 33718437. PMCID: PMC7945950. DOI: 10.3389/frobt.2020.591448.
Abstract
As service robots become increasingly autonomous and follow their own task-related goals, human-robot conflicts seem inevitable, especially in shared spaces. Goal conflicts can arise from simple trajectory planning to complex task prioritization. For successful human-robot goal-conflict resolution, humans and robots need to negotiate their goals and priorities. To this end, the robot might be equipped with conflict resolution strategies that are assertive and effective yet still accepted by the user. In this paper, conflict resolution strategies for service robots (public cleaning robot, home assistant robot) are developed by transferring psychological concepts (e.g., negotiation, cooperation) to HRI. Altogether, fifteen strategies were grouped by the expected affective outcome (positive, neutral, negative). In two online experiments, the acceptability of and compliance with these conflict resolution strategies were tested with humanoid and mechanical robots in two application contexts (public: n1 = 61; private: n2 = 93). To obtain a comparative value, the strategies were also applied by a human. As additional outcomes, trust, fear, arousal, valence, and the perceived politeness of the agent were assessed. The positive and neutral strategies were found to be more acceptable and effective than the negative strategies; some negative strategies (i.e., threat, command) even led to reactance and fear. Some strategies were only positively evaluated and effective for certain agents (human or robot) or only acceptable in one of the two application contexts (i.e., approach, empathy). In the public context, acceptance was predicted by politeness and trust, and compliance was predicted by interpersonal power. Taken together, psychological conflict resolution strategies can be applied in HRI to enhance robot task effectiveness; if applied in a robot-specific and context-sensitive manner, they are accepted by the user.
The contribution of this paper is twofold: conflict resolution strategies based on Human Factors and Social Psychology are introduced and empirically evaluated in two online studies for two application contexts, and influencing factors and requirements for the acceptance and effectiveness of robot assertiveness are discussed.
Affiliation(s)
- Franziska Babel
- Department of Human Factors, Institute of Psychology and Education, Ulm University, Ulm, Germany
- Johannes M Kraus
- Department of Human Factors, Institute of Psychology and Education, Ulm University, Ulm, Germany
- Martin Baumann
- Department of Human Factors, Institute of Psychology and Education, Ulm University, Ulm, Germany
7
Du N, Zhou F, Pulver EM, Tilbury DM, Robert LP, Pradhan AK, Yang XJ. Predicting driver takeover performance in conditionally automated driving. Accident Analysis and Prevention 2020;148:105748. PMID: 33099127. DOI: 10.1016/j.aap.2020.105748.
Abstract
In conditionally automated driving, drivers have difficulty taking over control when requested. To address this challenge, we aimed to predict drivers' takeover performance before a takeover request (TOR) is issued by analyzing drivers' physiological data and external environment data. We used data sets from two human-in-the-loop experiments, wherein drivers engaged in non-driving-related tasks (NDRTs) were requested to take over control from automated driving in various situations. Drivers' physiological data included heart rate indices, galvanic skin response indices, and eye-tracking metrics. Driving environment data included scenario type, traffic density, and TOR lead time. Drivers' takeover performance was categorized as good or bad according to their driving behaviors during the transition period and was treated as the ground truth. Among six machine learning methods, the random forest classifier performed best and was able to predict drivers' takeover performance while they were engaged in NDRTs with different levels of cognitive load. We recommend 3 s as the optimal time window for predicting takeover performance with the random forest classifier, which achieved an accuracy of 84.3% and an F1-score of 64.0%. Our findings have implications for the development of driver state detection algorithms and the design of adaptive in-vehicle alert systems in conditionally automated driving.
Affiliation(s)
- Na Du
- Industrial and Operations Engineering, University of Michigan, United States
- Feng Zhou
- Industrial and Manufacturing Systems Engineering, University of Michigan-Dearborn, United States
- Dawn M Tilbury
- Mechanical Engineering, University of Michigan, United States
- Lionel P Robert
- School of Information, University of Michigan, United States
- Anuj K Pradhan
- Industrial and Mechanical Engineering, University of Massachusetts Amherst, United States
- X Jessie Yang
- Industrial and Operations Engineering, University of Michigan, United States.
8
Abstract
Purpose of Review To assess the state-of-the-art in research on trust in robots and to examine if recent methodological advances can aid in the development of trustworthy robots. Recent Findings While traditional work in trustworthy robotics has focused on studying the antecedents and consequences of trust in robots, recent work has gravitated towards the development of strategies for robots to actively gain, calibrate, and maintain the human user’s trust. Among these works, there is emphasis on endowing robotic agents with reasoning capabilities (e.g., via probabilistic modeling). Summary The state-of-the-art in trust research provides roboticists with a large trove of tools to develop trustworthy robots. However, challenges remain when it comes to trust in real-world human-robot interaction (HRI) settings: there exist outstanding issues in trust measurement, guarantees on robot behavior (e.g., with respect to user privacy), and handling rich multidimensional data. We examine how recent advances in psychometrics, trustworthy systems, robot-ethics, and deep learning can provide resolution to each of these issues. In conclusion, we are of the opinion that these methodological advances could pave the way for the creation of truly autonomous, trustworthy social robots.
Affiliation(s)
- Bing Cai Kok
- Dept. of Computer Science, School of Computing, National University of Singapore, 13 Computing Drive, Singapore, 119077 Singapore
- Harold Soh
- Dept. of Computer Science, School of Computing, National University of Singapore, 13 Computing Drive, Singapore, 119077 Singapore
9
Powell H, Michael J. Feeling committed to a robot: why, what, when and how? Philos Trans R Soc Lond B Biol Sci 2020;374:20180039. PMID: 30853005. DOI: 10.1098/rstb.2018.0039.
Abstract
The paper spells out the rationale for developing means of manipulating and of measuring people's sense of commitment to robot interaction partners. A sense of commitment may lead people to be patient when a robot is not working smoothly, to remain vigilant when a robot is working so smoothly that a task becomes boring and to increase their willingness to invest effort in teaching a robot. We identify a range of contexts in which a sense of commitment to robot interaction partners may be particularly important. This article is part of the theme issue 'From social brains to social robots: applying neurocognitive insights to human-robot interaction'.
Affiliation(s)
- Henry Powell
- Institute of Neuroscience and Psychology, Glasgow University, Glasgow, UK
- John Michael
- Central European University, Cognitive Science, University of Warwick, Coventry, UK
10
Kory-Westlund JM, Breazeal C. Exploring the Effects of a Social Robot's Speech Entrainment and Backstory on Young Children's Emotion, Rapport, Relationship, and Learning. Front Robot AI 2019;6:54. PMID: 33501069. PMCID: PMC7806080. DOI: 10.3389/frobt.2019.00054.
Abstract
In positive human-human relationships, people frequently mirror or mimic each other's behavior. This mimicry, also called entrainment, is associated with rapport and smoother social interaction. Because rapport in learning scenarios has been shown to lead to improved learning outcomes, we examined whether enabling a social robotic learning companion to perform rapport-building behaviors could improve children's learning and engagement during a storytelling activity. We enabled the social robot to perform two specific rapport- and relationship-building behaviors: speech entrainment and self-disclosure (shared personal information in the form of a backstory about the robot's poor speech and hearing abilities). We recruited 86 children aged 3–8 years to interact with the robot in a 2 × 2 between-subjects experimental study testing the effects of robot entrainment (Entrainment vs. No Entrainment) and backstory about abilities (Backstory vs. No Backstory). The robot engaged the children one-on-one in conversation, told a story embedded with key vocabulary words, and asked children to retell the story. We measured children's recall of the key words and their emotions during the interaction, examined their story retellings, and asked children questions about their relationship with the robot. We found that the robot's entrainment led children to show more positive emotions and fewer negative emotions. Children who heard the robot's backstory were more likely to accept the robot's poor hearing abilities. Entrainment paired with backstory led children to use more of the key words and match more of the robot's phrases in their story retells. Furthermore, these children were more likely to consider the robot more human-like and were more likely to comply with one of the robot's requests.
These results suggest that the robot's speech entrainment and backstory increased children's engagement and enjoyment in the interaction, improved their perception of the relationship, and contributed to children's success at retelling the story.
Affiliation(s)
- Cynthia Breazeal
- MIT Media Lab, Massachusetts Institute of Technology, Cambridge, MA, United States
11
Nabian M, Yin Y, Wormwood J, Quigley KS, Barrett LF, Ostadabbas S. An Open-Source Feature Extraction Tool for the Analysis of Peripheral Physiological Data. IEEE Journal of Translational Engineering in Health and Medicine 2018;6:2800711. PMID: 30443441. PMCID: PMC6231905. DOI: 10.1109/jtehm.2018.2878000.
Abstract
Electrocardiogram, electrodermal activity, electromyogram, continuous blood pressure, and impedance cardiography are among the most commonly used peripheral physiological signals (biosignals) in psychological studies and healthcare applications, including health tracking, sleep quality assessment, disease early-detection/diagnosis, and understanding human emotional and affective phenomena. This paper presents the development of a biosignal-specific processing toolbox (Bio-SP tool) for preprocessing and feature extraction of these physiological signals according to the state-of-the-art studies reported in the scientific literature and feedback received from the field experts. Our open-source Bio-SP tool is intended to assist researchers in affective computing, digital and mobile health, and telemedicine to extract relevant physiological patterns (i.e., features) from these biosignals semi-automatically and reliably. In this paper, we describe the successful algorithms used for signal-specific quality checking, artifact/noise filtering, and segmentation along with introducing features shown to be highly relevant to category discrimination in several healthcare applications (e.g., discriminating patterns associated with disease versus non-disease). Further, the Bio-SP tool is a publicly-available software written in MATLAB with a user-friendly graphical user interface (GUI), enabling future crowd-sourced modification to these tools. The GUI is compatible with MathWorks Classification Learner app for inference model development, such as model training, cross-validation scheme farming, and classification result computation.
Affiliation(s)
- Mohsen Nabian
- Augmented Cognition Lab, Electrical and Computer Engineering Department, Northeastern University, Boston, MA 02115, USA
- Harvard Medical School, Boston, MA 02115, USA
- Yu Yin
- Augmented Cognition Lab, Electrical and Computer Engineering Department, Northeastern University, Boston, MA 02115, USA
- Lisa F. Barrett
- Department of Psychology, Northeastern University, Boston, MA 02115, USA
- Sarah Ostadabbas
- Augmented Cognition Lab, Electrical and Computer Engineering Department, Northeastern University, Boston, MA 02115, USA
12
13
Nikolaidis S, Hsu D, Srinivasa S. Human-robot mutual adaptation in collaborative tasks: Models and experiments. Int J Rob Res 2017;36:618-634. PMID: 32855581. PMCID: PMC7449140. DOI: 10.1177/0278364917690593.
Abstract
Adaptation is critical for effective team collaboration. This paper introduces a computational formalism for mutual adaptation between a robot and a human in collaborative tasks. We propose the Bounded-Memory Adaptation Model, which is a probabilistic finite-state controller that captures human adaptive behaviors under a bounded-memory assumption. We integrate the Bounded-Memory Adaptation Model into a probabilistic decision process, enabling the robot to guide adaptable participants towards a better way of completing the task. Human subject experiments suggest that the proposed formalism improves the effectiveness of human-robot teams in collaborative tasks, when compared with one-way adaptations of the robot to the human, while maintaining the human's trust in the robot.
Affiliation(s)
- David Hsu
- Department of Computer Science, National University of Singapore, Singapore
14
Nikolaidis S, Hsu D, Zhu YX, Srinivasa S. Human-Robot Mutual Adaptation in Shared Autonomy. Proceedings of the ... ACM SIGCHI ACM Conference on Human-Robot Interaction 2017;2017:294-302. PMID: 31198909. DOI: 10.1145/2909824.3020252.
Abstract
Shared autonomy integrates user input with robot autonomy in order to control a robot and help the user to complete a task. Our work aims to improve the performance of such a human-robot team: the robot tries to guide the human towards an effective strategy, sometimes against the human's own preference, while still retaining their trust. We achieve this through a principled human-robot mutual adaptation formalism. We integrate a bounded-memory adaptation model of the human into a partially observable stochastic decision model, which enables the robot to adapt to an adaptable human. When the human is adaptable, the robot guides the human towards a good strategy, possibly unknown to the human in advance. When the human is stubborn and not adaptable, the robot complies with the human's preference in order to retain their trust. In the shared autonomy setting, unlike many other common human-robot collaboration settings, only the robot actions can change the physical state of the world, and the human and robot goals are not fully observable. We address these challenges and show in a human subject experiment that the proposed mutual adaptation formalism improves human-robot team performance, while retaining a high level of user trust in the robot, compared to the common approach of having the robot strictly following participants' preference.
15
Abstract
The sense of commitment is a fundamental building block of human social life. By generating and/or stabilizing expectations about contributions that individual agents will make to the goals of other agents or to shared goals, a sense of commitment can facilitate the planning and coordination of actions involving multiple agents. Moreover, it can also increase individual agents' motivation to contribute to other agents' goals or to shared goals, as well as their willingness to rely on other agents' contributions. In this paper, we provide a starting point for designing robots that exhibit and/or elicit a sense of commitment. We identify several challenges that such a project would likely confront, and consider possibilities for meeting these challenges.
Affiliation(s)
- John Michael
- Department of Cognitive Science, Central European University, Oktober 6 Utca 7, Budapest, 1051 Hungary
- Alessandro Salice
- Center for Subjectivity Research, Copenhagen University, Njalsgade 140-142, Building 25, 5th floor, 2300 Copenhagen, Denmark
16
Trust as indicator of robot functional and social acceptance. An experimental study on user conformation to iCub answers. Computers in Human Behavior 2016. DOI: 10.1016/j.chb.2016.03.057.
17
Nikolaidis S, Kuznetsov A, Hsu D, Srinivasa S. Formalizing Human-Robot Mutual Adaptation: A Bounded Memory Model. Proceedings of the ... ACM SIGCHI ACM Conference on Human-Robot Interaction 2016;2016:75-82. PMID: 30637416. DOI: 10.1109/hri.2016.7451736.
Abstract
Mutual adaptation is critical for effective team collaboration. This paper presents a formalism for human-robot mutual adaptation in collaborative tasks. We propose the bounded-memory adaptation model (BAM), which captures human adaptive behaviors based on a bounded memory assumption. We integrate BAM into a partially observable stochastic model, which enables robot adaptation to the human. When the human is adaptive, the robot will guide the human towards a new, optimal collaborative strategy unknown to the human in advance. When the human is not willing to change their strategy, the robot adapts to the human in order to retain human trust. Human subject experiments indicate that the proposed formalism can significantly improve the effectiveness of human-robot teams, while human subject ratings on the robot performance and trust are comparable to those achieved by cross training, a state-of-the-art human-robot team training practice.
Affiliation(s)
- David Hsu
- Department of Computer Science, National University of Singapore
18
Towards Safe and Trustworthy Social Robots: Ethical Challenges and Practical Issues. Social Robotics 2015. DOI: 10.1007/978-3-319-25554-5_58.