1
Hani Daniel Zakaria M, Lengagne S, Corrales Ramón JA, Mezouar Y. General Framework for the Optimization of the Human-Robot Collaboration Decision-Making Process Through the Ability to Change Performance Metrics. Front Robot AI 2021; 8:736644. [PMID: 34760932] [PMCID: PMC8573032] [DOI: 10.3389/frobt.2021.736644] [Received: 07/05/2021] [Accepted: 09/28/2021]
Abstract
This paper proposes a new decision-making framework in the context of Human-Robot Collaboration (HRC). State-of-the-art techniques treat HRC as an optimization problem in which the utility function, also called the reward function, is defined to accomplish the task regardless of how well the interaction is performed. When performance metrics are considered, they cannot be easily changed within the same framework. In contrast, our decision-making framework can easily handle a change of performance metrics from one scenario to another. Our method treats HRC as a constrained optimization problem in which the utility function is split into two main parts: a constraint that defines how to accomplish the task, and a reward that evaluates the performance of the collaboration, which is the only part modified when the performance metrics change. This gives control over the way the interaction unfolds and guarantees that the robot adapts its actions to the human's in real time. In this paper, the decision-making process is based on Nash equilibrium and the perfect-information extensive form from game theory. It can handle collaborative interactions under different performance metrics, such as minimizing the time to complete the task or accounting for the probability of human error. Simulations and a real experimental study on an assembly task (a game based on a construction kit) illustrate the effectiveness of the proposed framework.
Affiliation(s)
- Sébastien Lengagne
- CNRS, Clermont Auvergne INP, Institut Pascal, Université Clermont Auvergne, Clermont-Ferrand, France
- Juan Antonio Corrales Ramón
- Centro Singular de Investigación en Tecnoloxías Intelixentes (CiTIUS), Universidade de Santiago de Compostela, Santiago de Compostela, Spain
- Youcef Mezouar
- CNRS, Clermont Auvergne INP, Institut Pascal, Université Clermont Auvergne, Clermont-Ferrand, France
2
Bieńkiewicz MMN, Smykovskyi AP, Olugbade T, Janaqi S, Camurri A, Bianchi-Berthouze N, Björkman M, Bardy BG. Bridging the gap between emotion and joint action. Neurosci Biobehav Rev 2021; 131:806-833. [PMID: 34418437] [DOI: 10.1016/j.neubiorev.2021.08.014] [Received: 04/16/2021] [Revised: 08/08/2021] [Accepted: 08/13/2021]
Abstract
Our daily life is filled with myriad joint action moments, be it children playing, adults working together (e.g., team sports), or strangers navigating through a crowd. Joint action brings individuals (and the embodiment of their emotions) together in space and in time. Yet little is known about how individual emotions propagate through embodied presence in a group, and how joint action changes individual emotion. In fact, the multi-agent component is largely missing from neuroscience-based approaches to emotion and, conversely, joint action research has not yet found a way to include emotion as a key parameter in models of socio-motor interaction. In this review, we first identify this gap and then marshal evidence from various branches of science showing a strong entanglement between emotion and acting together. We propose an integrative approach to bridge the gap, highlight five research avenues for doing so in behavioral neuroscience and digital sciences, and address some of the key challenges the area poses for modern societies.
Affiliation(s)
- Marta M N Bieńkiewicz
- EuroMov Digital Health in Motion, Univ. Montpellier IMT Mines Ales, Montpellier, France.
- Andrii P Smykovskyi
- EuroMov Digital Health in Motion, Univ. Montpellier IMT Mines Ales, Montpellier, France
- Stefan Janaqi
- EuroMov Digital Health in Motion, Univ. Montpellier IMT Mines Ales, Montpellier, France
- Benoît G Bardy
- EuroMov Digital Health in Motion, Univ. Montpellier IMT Mines Ales, Montpellier, France.
3
Del Duchetto F, Baxter P, Hanheide M. Are You Still With Me? Continuous Engagement Assessment From a Robot's Point of View. Front Robot AI 2020; 7:116. [PMID: 33501282] [PMCID: PMC7805701] [DOI: 10.3389/frobt.2020.00116] [Received: 03/08/2020] [Accepted: 07/24/2020]
Abstract
Continuously measuring the engagement of users with a robot in a Human-Robot Interaction (HRI) setting paves the way toward in-situ reinforcement learning, improves metrics of interaction quality, and can guide interaction design and behavior optimization. However, engagement is often considered highly multi-faceted and difficult to capture in a workable, generic computational model that can serve as an overall measure of engagement. Building on the intuitive way humans can successfully assess a situation's degree of engagement when they see it, we propose a novel regression model (utilizing CNN and LSTM networks) that enables robots to compute a single scalar engagement value during interactions with humans from standard video streams, obtained from the point of view of the interacting robot. The model is trained on a long-term dataset from an autonomous tour-guide robot deployed in a public museum, with continuous numeric engagement annotations from three independent coders. We show that this model not only predicts engagement very well in our own application domain but also transfers successfully to an entirely different dataset (with different tasks, environment, camera, robot, and people). The trained model and the software are available to the HRI community, at https://github.com/LCAS/engagement_detector, as a tool to measure engagement in a variety of settings.
Affiliation(s)
- Francesco Del Duchetto
- Lincoln Centre for Autonomous Systems, School of Computer Science, University of Lincoln, Lincoln, United Kingdom
4
The Challenges of Designing a Robot for a Satisfaction Survey: Surveying Humans Using a Social Robot. Int J Soc Robot 2019. [DOI: 10.1007/s12369-019-00604-0]
5
Vanman EJ, Kappas A. “Danger, Will Robinson!” The challenges of social robots for intergroup relations. Social and Personality Psychology Compass 2019. [DOI: 10.1111/spc3.12489]
6
Deer W, Pounds PEI. Lightweight Whiskers for Contact, Pre-Contact, and Fluid Velocity Sensing. IEEE Robot Autom Lett 2019. [DOI: 10.1109/lra.2019.2899215]
7
Cochet H, Guidetti M. Contribution of Developmental Psychology to the Study of Social Interactions: Some Factors in Play, Joint Attention and Joint Action and Implications for Robotics. Front Psychol 2018; 9:1992. [PMID: 30405484] [PMCID: PMC6202940] [DOI: 10.3389/fpsyg.2018.01992] [Received: 10/13/2017] [Accepted: 09/28/2018]
Abstract
Children exchange information through multiple modalities, including verbal communication, gestures, and social gaze, and they gradually learn to plan their behavior and coordinate successfully with their partners. The development of joint attention and joint action, especially in the context of social play, provides rich opportunities for describing the characteristics of interactions that can lead to shared outcomes. In the present work, we argue that human-robot interaction (HRI) can benefit from these developmental studies, by influencing the human's perception and interpretation of the robot's behavior. We thus endeavor to describe some components that could be implemented in the robot to strengthen the feeling of dealing with a social agent, and therefore improve the success of collaborative tasks. Focusing in particular on motor precision, coordination, and anticipatory planning, we discuss the question of complexity in HRI. In the context of joint activities, we highlight the necessity of (1) considering multiple speech acts involving multimodal communication (both verbal and non-verbal signals), and (2) analyzing separately the forms and functions of communication. Finally, we examine some challenges related to robot competencies, such as the issue of language and symbol grounding, which might be tackled by bringing together the expertise of researchers in developmental psychology and robotics.
Affiliation(s)
- Hélène Cochet
- CLLE, Université de Toulouse, CNRS, UT2J, Toulouse, France
8
Sandini G, Mohan V, Sciutti A, Morasso P. Social Cognition for Human-Robot Symbiosis-Challenges and Building Blocks. Front Neurorobot 2018; 12:34. [PMID: 30050425] [PMCID: PMC6051162] [DOI: 10.3389/fnbot.2018.00034] [Received: 02/22/2018] [Accepted: 06/11/2018]
Abstract
The next generation of robot companions or robot working partners will need to satisfy social requirements somewhat similar to the famous laws of robotics envisaged by Isaac Asimov long ago (Asimov, 1942). The necessary technology has almost reached the required level, including sensors and actuators, but the cognitive organization is still in its infancy and is only partially supported by the current understanding of brain cognitive processes. The brain of symbiotic robots will certainly not be a “positronic” replica of the human brain: probably, the greater part of it will be a set of interacting computational processes running in the cloud. In this article, we review the challenges that must be met in designing a set of interacting computational processes as building blocks of a cognitive architecture that may give symbiotic capabilities to the collaborative robots of the coming decades: (1) an animated body schema; (2) imitation machinery; (3) motor-intention machinery; (4) a set of physical interaction mechanisms; and (5) a shared memory system for incremental symbiotic development. We would like to stress that our approach is entirely non-hierarchical: the five building blocks of the shared cognitive architecture are fully bidirectionally connected. For example, imitation and intentional processes require the “services” of the animated body schema, which, in turn, can run its simulations if appropriately prompted by imitation and/or intention, with or without physical interaction. Successful experiences can leave a trace in the shared memory system, and fragments of memory may compete to participate in novel cooperative actions, and so on.
At the heart of the system is lifelong training and learning but, unlike conventional learning paradigms in neural networks, where learning is somewhat passively imposed by an external agent, in symbiotic robots there is an element of free choice about what is worth learning, driven by the interaction between the robot and the human partner. The proposed set of building blocks is certainly a rough approximation of what symbiotic robots need, but we believe it is a useful starting point for building a computational framework.
Affiliation(s)
- Giulio Sandini
- Research Unit of Robotics, Brain, and Cognitive Sciences (RBCS), Istituto Italiano di Tecnologia, Genoa, Italy
- Vishwanathan Mohan
- School of Computer Science and Electronic Engineering, University of Essex, Colchester, United Kingdom
- Alessandra Sciutti
- Research Unit of Robotics, Brain, and Cognitive Sciences (RBCS), Istituto Italiano di Tecnologia, Genoa, Italy
- Pietro Morasso
- Research Unit of Robotics, Brain, and Cognitive Sciences (RBCS), Istituto Italiano di Tecnologia, Genoa, Italy