1
Karwowski J, Szynkiewicz W, Niewiadomska-Szynkiewicz E. Bridging Requirements, Planning, and Evaluation: A Review of Social Robot Navigation. Sensors (Basel, Switzerland) 2024; 24:2794. [PMID: 38732900] [PMCID: PMC11086376] [DOI: 10.3390/s24092794] [Received: 03/25/2024] [Revised: 04/21/2024] [Accepted: 04/24/2024]
Abstract
Navigation lies at the core of social robotics, enabling robots to move and interact seamlessly in human environments. The primary focus of human-aware robot navigation is minimizing the discomfort of surrounding humans. Our review explores user studies examining the factors that cause human discomfort, in order to ground the requirements of social robot navigation and to form a taxonomy of elementary necessities that comprehensive algorithms should implement. This survey also discusses human-aware navigation from an algorithmic perspective, reviewing the perception and motion planning methods integral to social navigation. Additionally, the review investigates the types of studies and tools facilitating the evaluation of social robot navigation approaches, namely datasets, simulators, and benchmarks. Our survey also identifies the main challenges of human-aware navigation, highlighting essential directions for future work. This work stands out from other review papers, as it not only investigates the variety of methods for implementing human awareness in robot control systems but also classifies the approaches according to the grounded requirements reflected in their objectives.
Affiliation(s)
- Ewa Niewiadomska-Szynkiewicz
- Institute of Control and Computation Engineering, Warsaw University of Technology, 00-665 Warsaw, Poland; (J.K.); (W.S.)
2
On the Role of Beliefs and Trust for the Intention to Use Service Robots: An Integrated Trustworthiness Beliefs Model for Robot Acceptance. Int J Soc Robot 2023. [DOI: 10.1007/s12369-022-00952-4] [Indexed: 02/27/2023]
Abstract
With the increasing abilities of robots, the prediction of user decisions needs to go beyond the usability perspective, for example, by integrating distinctive beliefs and trust. In an online study (N = 400), first, the relationship between general trust in service robots and trust in a specific robot was investigated, supporting the role of general trust as a starting point for trust formation. On this basis, it was explored, both for general acceptance of service robots and for acceptance of a specific robot, whether technology acceptance models can be meaningfully complemented by specific beliefs from the theory of planned behavior (TPB) and the trust literature to enhance understanding of robot adoption. Initially, models integrating all belief groups were fitted, providing substantial variance predictions at both levels (general and specific) and a mediation of beliefs via trust to the intention to use. The omission of the performance expectancy and reliability beliefs was compensated for by more distinctive beliefs. In the final model (TB-RAM), effort expectancy and competence predicted trust at the general level. For a specific robot, competence and social influence predicted trust. Moreover, the effect of social influence on trust was moderated by the robot's application area (public > private), supporting situation-specific belief relevance in robot adoption. Taken together, in line with the TPB, these findings support a mediation cascade from beliefs via trust to the intention to use. Furthermore, incorporating distinctive instead of broad beliefs is promising for increasing the explanatory and practical value of acceptance modeling.
3
Babel F, Kraus J, Baumann M. Findings From A Qualitative Field Study with An Autonomous Robot in Public: Exploration of User Reactions and Conflicts. Int J Soc Robot 2022. [DOI: 10.1007/s12369-022-00894-x] [Indexed: 11/29/2022]
Abstract
Soon, service robots will be employed in public spaces with frequent human-robot interaction (HRI). To achieve safe, trustworthy and acceptable HRI, service robots need to be equipped with interaction strategies suitable for the robot, user, and context. To gain realistic insights into the initial user reactions and challenges that arise when a mechanoid, autonomous service robot is deployed in public, a field study with three data sources was conducted. In a first step, lay users' intuitive reactions to a cleaning robot at a train station were observed (N = 344). Second, passersby's preferences for HRI interaction strategies were explored in interviews (n = 54). As a third step, trust in and acceptance of the robot were assessed with questionnaires (n = 32). Identified challenges included social robot navigation in crowded places that also accommodates vulnerable passersby, inclusive communication modalities, informing staff and the public about the service robot deployment, and the need for conflict resolution strategies to avoid an inefficient robot (e.g., testing behaviour, blocked paths). This study provides insights into naive HRI in public, illustrates challenges, provides recommendations supported by the literature, and highlights aspects for future research to inspire a research agenda in the field of public HRI.
4
Herzog O, Forchhammer N, Kong P, Maruhn P, Cornet H, Frenkler F. The Influence of Robot Designs on Human Compliance and Emotion: A Virtual Reality Study in the Context of Future Public Transport. ACM Transactions on Human-Robot Interaction 2022. [DOI: 10.1145/3507472] [Indexed: 12/15/2022]
Abstract
As robots enter everyday environments, they start performing tasks originally performed by humans. One field of application is the public transport sector. The deployment of autonomous transport systems comes with a lack of human contact persons for help, guidance, and crowd management, which creates challenges in redirecting and managing passengers. Current solutions on platforms can be replaced or enriched with service robots whose tasks include crowd management as well as social interaction. This study investigates how the human-likeness of a robot influences the compliance and emotions of public transport users. A Virtual Reality experiment was conducted (N = 33) to evaluate two different robot designs in a bus stop boarding scenario. The two robot designs differ in terms of humanoid appearance. In different experimental trials, participants had to perform a given task that was nullified by instructions from one of the two robots. Additionally, the dissonance of the situation was altered so that the environment either justified the robot's interference or not. Compliant behavior, pleasure and arousal ratings, as well as task processing times, were recorded. The experiment included an individual interview and a post-study questionnaire. The results suggest that future deployment of service robots has the potential to redirect passengers. In dissonant situations, clear reasoning must be given to make the robot effective. However, the robot's visual appearance has a more substantial impact on arousal and subjective preferences than on evoked behavior. The study implies that the presence of a service robot can influence people's choices and hints at the importance of giving a reason. However, objectively, the level of the robot's humanoid appearance did not make a difference.
Affiliation(s)
- Olivia Herzog
- Technical University Munich, Chair of Ergonomics, Germany, and TUMCREATE Ltd., Design for Autonomous Mobility, Singapore
- Penny Kong
- TUMCREATE Ltd., Design for Autonomous Mobility, Singapore
- Philipp Maruhn
- Technical University Munich, Chair of Ergonomics, Germany
- Fritz Frenkler
- Technical University Munich, Chair of Industrial Design, München, Germany
5
Boos A, Herzog O, Reinhardt J, Bengler K, Zimmermann M. A Compliance–Reactance Framework for Evaluating Human-Robot Interaction. Front Robot AI 2022; 9:733504. [PMID: 35685618] [PMCID: PMC9171073] [DOI: 10.3389/frobt.2022.733504] [Received: 06/30/2021] [Accepted: 04/26/2022]
Abstract
When do we follow requests and recommendations, and which ones do we choose not to comply with? This publication combines definitions of compliance and reactance as behaviours and as affective processes in one model for application to human-robot interaction. The framework comprises three steps: human perception, comprehension, and selection of an action following a cue given by a robot. The paper outlines the application of the model in different study settings, such as controlled experiments that allow for the assessment of cognition as well as observational field studies that lack this possibility. Guidance for defining and measuring compliance and reactance is outlined, and strategies for improving robot behaviour are derived for each step in the process model. Design recommendations for each step are condensed into three principles on information economy, adequacy, and transparency. In summary, we suggest that, in order to maximise the probability of compliance with a cue and to avoid reactance, interaction designers should aim for a high probability of perception and comprehension and should prevent negative affect. Finally, an example application is presented that uses existing data from a laboratory experiment in combination with data collected in an online survey to outline how the model can be applied to evaluate a new technology or interaction strategy using the concepts of compliance and reactance as behaviours and affective constructs.
Affiliation(s)
- Annika Boos
- TUM School of Engineering and Design, Institute of Ergonomics, Technical University of Munich, Garching, Germany
- *Correspondence: Annika Boos
- Olivia Herzog
- TUM School of Engineering and Design, Institute of Ergonomics, Technical University of Munich, Garching, Germany
- Jakob Reinhardt
- TUM School of Engineering and Design, Institute of Ergonomics, Technical University of Munich, Garching, Germany
- Klaus Bengler
- TUM School of Engineering and Design, Institute of Ergonomics, Technical University of Munich, Garching, Germany
6
Step Aside! VR-Based Evaluation of Adaptive Robot Conflict Resolution Strategies for Domestic Service Robots. Int J Soc Robot 2022. [DOI: 10.1007/s12369-021-00858-7] [Indexed: 11/26/2022]
Abstract
As domestic service robots become more prevalent and act autonomously, conflicts of interest between humans and robots become more likely. The robot should therefore be able to negotiate with humans effectively and appropriately to fulfil its tasks. One promising approach could be the imitation of human conflict resolution behaviour and the use of persuasive requests. The presented study complements previous work by investigating combinations of assertive and polite request elements (appeal, showing benefit, command), which have been found to be effective in HRI. The conflict resolution strategies each contained two types of requests, the order of which was varied to either mimic or contradict human conflict resolution behaviour. The strategies were also adapted to the users' compliance behaviour: if the participant complied after the first request, no second request was issued. In a virtual reality experiment (N = 57) with two trials, six different strategies were evaluated regarding user compliance, robot acceptance, trust, and fear, and compared to a control condition featuring no request elements. The experiment featured a human-robot goal conflict scenario concerning household tasks at home. The results show that in trial 1, strategies reflecting human politeness and conflict resolution norms were more accepted and rated as more polite and more trustworthy than strategies entailing a command. No differences were found for trial 2. Overall, compliance rates were comparable to those of human-human requests and did not differ between strategies. The contribution is twofold: presenting an experimental paradigm to investigate a human-robot conflict scenario and providing a first step towards developing acceptable robot conflict resolution strategies based on human behaviour.