1. Mehak S, Ramos IF, Sagar K, Ramasubramanian A, Kelleher JD, Guilfoyle M, Gianini G, Damiani E, Leva MC. A roadmap for improving data quality through standards for collaborative intelligence in human-robot applications. Front Robot AI 2024; 11:1434351. PMID: 39726729; PMCID: PMC11669550; DOI: 10.3389/frobt.2024.1434351.
Abstract
Collaborative intelligence (CI) involves human-machine interaction and is deemed safety-critical because reliable interaction is crucial to preventing severe injuries and environmental damage. As these applications become increasingly data-driven, the reliability of CI applications depends on the quality of data, which shapes the system's ability to interpret and respond in diverse and often unpredictable environments. In this regard, it is important to adhere to data quality standards and guidelines, thus facilitating the advancement of these collaborative systems in industry. This study presents the challenges of data quality in CI applications within industrial environments, with two use cases that focus on the collection of data in Human-Robot Interaction (HRI). The first use case involves a framework for quantifying human and robot performance in the context of naturalistic robot learning, wherein humans teach robots using intuitive programming methods. The second use case presents real-time user state monitoring for adaptive multi-modal teleoperation, which allows dynamic adaptation of the system's interface, interaction modality, and automation level based on user needs. The article proposes a hybrid standardization derived from established data quality-related ISO standards and addresses the unique challenges associated with multi-modal HRI data acquisition. The use cases presented in this study were carried out as part of an EU-funded project, Collaborative Intelligence for Safety-Critical Systems (CISC).
Affiliation(s)
- Shakra Mehak
- Pilz Ireland Industrial Automation, Cork, Ireland
- School of Food Science and Environmental Health, Technological University Dublin, Dublin, Ireland
- Inês F. Ramos
- Secure Service-oriented Architectures Research Lab, Department of Computer Science, Università degli Studi di Milano, Milan, Italy
- Keerthi Sagar
- Robotics and Automation Group, Irish Manufacturing Research Centre, Mullingar, Ireland
- Aswin Ramasubramanian
- Robotics and Automation Group, Irish Manufacturing Research Centre, Mullingar, Ireland
- John D. Kelleher
- School of Computer Science and Statistics, Trinity College, Dublin, Ireland
- Gabriele Gianini
- Department of Informatics, Systems and Communication (DISCo), Università degli Studi di Milano-Bicocca, Milano, Italy
- Ernesto Damiani
- Secure Service-oriented Architectures Research Lab, Department of Computer Science, Università degli Studi di Milano, Milan, Italy
- Maria Chiara Leva
- School of Food Science and Environmental Health, Technological University Dublin, Dublin, Ireland
2. Qiu S, Fan T, Jiang J, Wang Z, Wang Y, Xu J, Sun T, Jiang N. A novel two-level interactive action recognition model based on inertial data fusion. Inf Sci (N Y) 2023. DOI: 10.1016/j.ins.2023.03.058.
3. Rudaz D, Tatarian K, Stower R, Licoppe C. From Inanimate Object to Agent: Impact of Pre-Beginnings on the Emergence of Greetings with a Robot. ACM Transactions on Human-Robot Interaction 2023. DOI: 10.1145/3575806.
Abstract
The very first moments of co-presence, during which a robot appears to a participant for the first time, are often “off-the-record” in the data collected from human-robot experiments (video recordings, motion tracking, methodology sections, etc.). Yet, this “pre-beginning” phase, well documented in the case of human-human interactions, is not an interactional vacuum: it is where interactional work from participants can take place so that the production of a first speaking turn (like greeting the robot) becomes relevant and expected. We base our analysis on an experiment which replicated the interaction opening delays sometimes observed in laboratory or "in-the-wild" human-robot interaction studies, where robots can require time before springing to life after they are in co-presence with a human. Using an ethnomethodological and multimodal conversation analytic methodology (EMCA), we identify which properties of the robot's behavior were oriented to by participants as creating the adequate conditions to produce a first greeting. Our findings highlight the importance of the state in which the robot originally appears to participants: as an immobile object or, instead, as an entity already involved in preexisting activity. Participants' orientations to the very first behaviors manifested by the robot during this “pre-beginning” phase produced a priori unpredictable sequential trajectories, which configured the timing and the manner in which the robot emerged as a social agent. We suggest that these first instants of co-presence are not peripheral issues with respect to human-robot experiments but should be thought about and designed as an integral part of those.
Affiliation(s)
- Damien Rudaz
- Dept. of Economics and Social Sciences, Telecom Paris and Institut Polytechnique de Paris, France
- Karen Tatarian
- Institute for Intelligent Systems and Robotics, Sorbonne University, France
- Rebecca Stower
- Division of Robotics, Perception, and Learning, KTH Royal Institute of Technology, Sweden
- Christian Licoppe
- Dept. of Economics and Social Sciences, Telecom Paris and Institut Polytechnique de Paris, France
4. de Sousa RM, Barrios-Aranibar D, Diaz-Amado J, Patiño-Escarcina RE, Trindade RMP. A New Approach for Including Social Conventions into Social Robots Navigation by Using Polygonal Triangulation and Group Asymmetric Gaussian Functions. Sensors 2022; 22:4602. PMID: 35746384; PMCID: PMC9230447; DOI: 10.3390/s22124602.
Abstract
Many authors have been working on approaches that allow a more realistic and comfortable relationship between humans and social robots sharing the same space. This paper proposes a new navigation strategy for social environments that recognizes and respects the social conventions of people and groups. To achieve this, we apply Delaunay triangulation to connect people as vertices of a triangle network. We then define a complete asymmetric Gaussian function (for individuals and groups) to determine zones the robot must avoid crossing. Furthermore, a feature generalization scheme called the socialization feature is proposed to incorporate perception information that can be used to change the variance of the Gaussian function. Simulation results demonstrate that, compared with a standard A* algorithm, the proposed approach modifies the path according to the robot's perception.
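The two ingredients named in this abstract, Delaunay triangulation over people's positions and an asymmetric Gaussian "personal space" cost, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the sigma values and function names are assumptions.

```python
import numpy as np
from scipy.spatial import Delaunay

def asymmetric_gaussian(dx, dy, theta, sig_front=1.2, sig_side=0.6, sig_rear=0.4):
    """Asymmetric Gaussian cost around a person at the origin facing `theta`.

    The variance along the facing direction (sig_front) is larger than to the
    sides or rear, so a planner keeps extra clearance in front of the person.
    """
    ca, sa = np.cos(theta), np.sin(theta)
    lx = ca * dx + sa * dy   # longitudinal offset (positive = in front)
    ly = -sa * dx + ca * dy  # lateral offset
    sig_l = sig_front if lx >= 0 else sig_rear
    return np.exp(-0.5 * ((lx / sig_l) ** 2 + (ly / sig_side) ** 2))

def group_edges(positions):
    """Connect people via Delaunay triangulation; returns unique index pairs."""
    tri = Delaunay(positions)
    edges = set()
    for simplex in tri.simplices:
        for i in range(3):
            a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
            edges.add((a, b))
    return sorted(edges)
```

The asymmetry (larger variance in front of a person than behind) is what encodes the social convention of not cutting across someone's facing direction; the triangulation edges indicate which pairs of people may form a group whose connecting zone should also be avoided.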
Affiliation(s)
- Raphaell Maciel de Sousa (corresponding author)
- Instituto Federal da Paraíba (IFPB), Campus Cajazeiras, Cajazeiras 58900-000, PB, Brazil
- Dennis Barrios-Aranibar
- Electrical and Electronic Engineering Department, Universidad Católica San Pablo, Arequipa 04001, Peru
- Jose Diaz-Amado
- Electrical and Electronic Engineering Department, Universidad Católica San Pablo, Arequipa 04001, Peru
- Instituto Federal da Bahia (IFBA), Campus Vitória da Conquista, Vitória da Conquista 45078-300, BA, Brazil
- Raquel E. Patiño-Escarcina
- Electrical and Electronic Engineering Department, Universidad Católica San Pablo, Arequipa 04001, Peru
- Roque Mendes Prado Trindade
- Department of Exact and Technological Sciences, State University of Southwest Bahia (UESB), Vitória da Conquista 45083-900, BA, Brazil
5. Andriella A, Torras C, Abdelnour C, Alenyà G. Introducing CARESSER: A framework for in situ learning robot social assistance from expert knowledge and demonstrations. User Modeling and User-Adapted Interaction 2022; 33:441-496. PMID: 35311217; PMCID: PMC8916953; DOI: 10.1007/s11257-021-09316-5.
Abstract
Socially assistive robots have the potential to augment and enhance therapists' effectiveness in repetitive tasks such as cognitive therapies. However, their contribution has generally been limited because domain experts have not been fully involved in the entire design pipeline or in the automatisation of the robots' behaviour. In this article, we present aCtive leARning agEnt aSsiStive bEhaviouR (CARESSER), a novel framework that actively learns robotic assistive behaviour by leveraging the therapist's expertise (knowledge-driven approach) and their demonstrations (data-driven approach). By exploiting that hybrid approach, the presented method enables fast in situ learning, in a fully autonomous fashion, of personalised patient-specific policies. To evaluate our framework, we conducted two user studies in a daily care centre in which older adults affected by mild dementia and mild cognitive impairment (N = 22) were asked to solve cognitive exercises with the support of a therapist and later of a robot endowed with CARESSER. Results showed that: (i) the robot managed to keep the patients' performance stable during the sessions, even more so than the therapist; (ii) the assistance offered by the robot during the sessions eventually matched the therapist's preferences. We conclude that CARESSER, with its stakeholder-centric design, can pave the way to new AI approaches that learn by leveraging human-human interactions along with human expertise, which has the benefits of speeding up the learning process, eliminating the need to design complex reward functions, and avoiding undesired states.
Affiliation(s)
- Antonio Andriella
- CSIC-UPC, Institut de Robòtica i Informàtica Industrial, C/Llorens i Artigas 4-6, 08028 Barcelona, Spain
- Carme Torras
- CSIC-UPC, Institut de Robòtica i Informàtica Industrial, C/Llorens i Artigas 4-6, 08028 Barcelona, Spain
- Carla Abdelnour
- Research Center and Memory Clinic, Fundació ACE, Institut Català de Neurociències Aplicades, Universitat Internacional de Catalunya, Barcelona, Spain
- Guillem Alenyà
- CSIC-UPC, Institut de Robòtica i Informàtica Industrial, C/Llorens i Artigas 4-6, 08028 Barcelona, Spain
6. Doering M, Brščić D, Kanda T. Data-Driven Imitation Learning for a Shopkeeper Robot with Periodically Changing Product Information. ACM Transactions on Human-Robot Interaction 2021. DOI: 10.1145/3451883.
Abstract
Data-driven imitation learning enables service robots to learn social interaction behaviors, but these systems cannot adapt after training to changes in the environment, such as changing products in a store. To solve this, a novel learning system that uses neural attention and approximate string matching to copy information from a product information database to its output is proposed. A camera shop interaction dataset was simulated for training/testing. The proposed system was found to outperform a baseline and a previous state of the art in an offline, human-judged evaluation.
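The idea of using approximate string matching to copy up-to-date entries from a product database into responses can be illustrated with Python's standard difflib. This is a hedged sketch of the general technique only, not the paper's neural attention and copy mechanism, and the function and parameter names are hypothetical.

```python
import difflib

def copy_from_database(query_phrase, product_names, cutoff=0.6):
    """Match a possibly misheard or outdated product mention against the
    current product database, so generated responses always refer to an
    up-to-date entry even after the catalogue changes.

    Returns the best-matching product name, or None if nothing is close.
    """
    # Case-fold for matching, but return the database's original spelling.
    lowered = {p.lower(): p for p in product_names}
    hits = difflib.get_close_matches(query_phrase.lower(), list(lowered),
                                     n=1, cutoff=cutoff)
    return lowered[hits[0]] if hits else None
```

Swapping the database contents changes the robot's answers without retraining, which is the property the abstract highlights.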
7. Winkle K, Senft E, Lemaignan S. LEADOR: A Method for End-To-End Participatory Design of Autonomous Social Robots. Front Robot AI 2021; 8:704119. PMID: 34926589; PMCID: PMC8678512; DOI: 10.3389/frobt.2021.704119.
Abstract
Participatory design (PD) has been used to good success in human-robot interaction (HRI) but typically remains limited to the early phases of development, with subsequent robot behaviours then being hardcoded by engineers or utilised in Wizard-of-Oz (WoZ) systems that rarely achieve autonomy. In this article, we present LEADOR (Led-by-Experts Automation and Design Of Robots), an end-to-end PD methodology for domain expert co-design, automation, and evaluation of social robot behaviour. This method starts with typical PD, working with the domain expert(s) to co-design the interaction specifications and state and action space of the robot. It then replaces the traditional offline programming or WoZ phase by an in situ and online teaching phase where the domain expert can live-program or teach the robot how to behave whilst being embedded in the interaction context. We point out that this live teaching phase can be best achieved by adding a learning component to a WoZ setup, which captures implicit knowledge of experts, as they intuitively respond to the dynamics of the situation. The robot then progressively learns an appropriate, expert-approved policy, ultimately leading to full autonomy, even in sensitive and/or ill-defined environments. However, LEADOR is agnostic to the exact technical approach used to facilitate this learning process. The extensive inclusion of the domain expert(s) in robot design represents established responsible innovation practice, lending credibility to the system both during the teaching phase and when operating autonomously. The combination of this expert inclusion with the focus on in situ development also means that LEADOR supports a mutual shaping approach to social robotics. 
We draw on two previously published, foundational works from which this (generalisable) methodology has been derived to demonstrate the feasibility and worth of this approach, provide concrete examples in its application, and identify limitations and opportunities when applying this framework in new environments.
Affiliation(s)
- Katie Winkle
- Division of Robotics, Perception and Learning, KTH Royal Institute of Technology, Stockholm, Sweden
- Emmanuel Senft
- Department of Computer Sciences, University of Wisconsin-Madison, Madison, WI, United States
- Séverin Lemaignan
- Bristol Robotics Laboratory, University of the West of England, Bristol, United Kingdom
8. Maniadakis M, Hourdakis E, Sigalas M, Piperakis S, Koskinopoulou M, Trahanias P. Time-Aware Multi-Agent Symbiosis. Front Robot AI 2021; 7:503452. PMID: 33501296; PMCID: PMC7805830; DOI: 10.3389/frobt.2020.503452.
Abstract
Contemporary research in human-machine symbiosis has mainly concentrated on enhancing relevant sensory, perceptual, and motor capacities, assuming short-term and nearly momentary interaction sessions. Still, human-machine confluence encompasses an inherent temporal dimension that is typically overlooked. The present work shifts the focus to the temporal and long-lasting aspects of symbiotic human-robot interaction (sHRI). We explore the integration of three time-aware modules, each one focusing on a different part of the sHRI timeline. Specifically, the Episodic Memory considers past experiences, the Generative Time Models estimate the progress of ongoing activities, and the Daisy Planner devises plans for the timely accomplishment of goals. The integrated system is employed to coordinate the activities of a multi-agent team. Accordingly, the proposed system (i) predicts human preferences based on past experience, (ii) estimates performance profiles and task completion times by monitoring human activity, and (iii) dynamically adapts multi-agent activity plans to changes in expectation and Human-Robot Interaction (HRI) performance. The system is deployed and extensively assessed in real-world and simulated environments. The obtained results suggest that building upon the unfolding and the temporal properties of team tasks can significantly enhance the fluency of sHRI.
Affiliation(s)
- Michail Maniadakis
- Institute of Computer Science, Foundation for Research and Technology - Hellas (FORTH), Heraklion, Greece
- Emmanouil Hourdakis
- Institute of Computer Science, Foundation for Research and Technology - Hellas (FORTH), Heraklion, Greece
- Markos Sigalas
- Institute of Computer Science, Foundation for Research and Technology - Hellas (FORTH), Heraklion, Greece
- Stylianos Piperakis
- Institute of Computer Science, Foundation for Research and Technology - Hellas (FORTH), Heraklion, Greece
- Maria Koskinopoulou
- Institute of Computer Science, Foundation for Research and Technology - Hellas (FORTH), Heraklion, Greece
- Panos Trahanias
- Institute of Computer Science, Foundation for Research and Technology - Hellas (FORTH), Heraklion, Greece
9. Okafuji Y, Baba J, Nakanishi J, Kuramoto I, Ogawa K, Yoshikawa Y, Ishiguro H. Can a humanoid robot continue to draw attention in an office environment? Adv Robot 2020. DOI: 10.1080/01691864.2020.1769724.
Affiliation(s)
- Yuki Okafuji
- Department of Information Science and Engineering, Ritsumeikan University, Kusatsu, Shiga, Japan
- Jun Baba
- AI Lab, CyberAgent, Inc., Shibuya, Tokyo, Japan
- Junya Nakanishi
- Graduate School of Engineering Science, Osaka University, Toyonaka, Osaka, Japan
- Itaru Kuramoto
- Faculty of Informatics, The University of Fukuchiyama, Fukuchiyama, Kyoto, Japan
- Kohei Ogawa
- Graduate School of Engineering Science, Osaka University, Toyonaka, Osaka, Japan
- Yuichiro Yoshikawa
- Graduate School of Engineering Science, Osaka University, Toyonaka, Osaka, Japan
- Hiroshi Ishiguro
- Graduate School of Engineering Science, Osaka University, Toyonaka, Osaka, Japan
10. Jiang C, Ni Z, Guo Y, He H. Pedestrian Flow Optimization to Reduce the Risk of Crowd Disasters Through Human–Robot Interaction. IEEE Transactions on Emerging Topics in Computational Intelligence 2020. DOI: 10.1109/tetci.2019.2930249.
11. Doering M, Kanda T, Ishiguro H. Neural-network-based Memory for a Social Robot. ACM Transactions on Human-Robot Interaction 2019. DOI: 10.1145/3338810.
Abstract
Many recent studies have shown that behaviors and interaction logic for social robots can be learned automatically from natural examples of human-human interaction by machine learning algorithms, with minimal input from human designers [1--4]. In this work, we exceed the capabilities of the previous approaches by giving the robot memory. In earlier work, the robot's actions were decided based only on a narrow temporal window of the current interaction context. However, human behaviors often depend on more temporally distant events in the interaction history. Thus, we raise the question of whether (and how) an automated behavior learning system can learn a memory representation of interaction history within a simulated camera shop scenario. An analysis of the types of memory-setting and memory-dependent actions that occur in the camera shop scenario is presented. Then, to create more examples of such actions for evaluating a shopkeeper robot behavior learning system, an interaction dataset is simulated. A Gated Recurrent Unit (GRU) neural network architecture is applied in the behavior learning system, which learns a memory representation for performing memory-dependent actions. In an offline evaluation, the GRU system significantly outperformed a without-memory baseline system at generating appropriate memory-dependent actions. Finally, an analysis of the GRU architecture's memory representation is presented.
12. Li Y, Hsieh WF, Sato-Shimokawara E, Yamaguchi T. Expression and Identification of Confidence Based on Individual Verbal and Non-Verbal Features in Human-Robot Interaction. Journal of Advanced Computational Intelligence and Intelligent Informatics 2019. DOI: 10.20965/jaciii.2019.p1089.
Abstract
In daily life, we inevitably encounter situations in which we feel confident or unconfident, and under these conditions we show different expressions and responses; this is all the more true when a human communicates with a robot. Robots need to behave in varied styles that convey an appropriate degree of confidence: in previous work, when a robot made mistakes during an interaction, different styles of expressing certainty influenced humans' trust and acceptance. Conversely, when a human feels uncertain about a robot's utterance, how the robot recognizes that uncertainty is crucial, yet research on this question remains scarce and tends to ignore individual characteristics. In the current study, we designed an experiment to capture human verbal and non-verbal features under certain and uncertain conditions. From this certain/uncertain answer experiment, we extracted head-movement and voice factors as features and investigated whether they can be classified correctly. We found that different people exhibit distinct features when expressing different degrees of certainty, although some participants showed similar patterns, consistent with their relatively close psychological feature values. We aim to explore individuals' certainty expression patterns because doing so not only facilitates detection of humans' confidence states but is also expected to help robots respond adaptively, thus enriching Human-Robot Interaction.
13. Senft E, Lemaignan S, Baxter PE, Bartlett M, Belpaeme T. Teaching robots social autonomy from in situ human guidance. Sci Robot 2019; 4(35):eaat1186. PMID: 33137729; DOI: 10.1126/scirobotics.aat1186.
Abstract
Striking the right balance between robot autonomy and human control is a core challenge in social robotics, in both technical and ethical terms. On the one hand, extended robot autonomy offers the potential for increased human productivity and for the off-loading of physical and cognitive tasks. On the other hand, making the most of human technical and social expertise, as well as maintaining accountability, is highly desirable. This is particularly relevant in domains such as medical therapy and education, where social robots hold substantial promise, but where there is a high cost to poorly performing autonomous systems, compounded by ethical concerns. We present a field study in which we evaluate SPARC (supervised progressively autonomous robot competencies), an innovative approach addressing this challenge whereby a robot progressively learns appropriate autonomous behavior from in situ human demonstrations and guidance. Using online machine learning techniques, we demonstrate that the robot could effectively acquire legible and congruent social policies in a high-dimensional child-tutoring situation needing only a limited number of demonstrations while preserving human supervision whenever desirable. By exploiting human expertise, our technique enables rapid learning of autonomous social and domain-specific policies in complex and nondeterministic environments. Last, we underline the generic properties of SPARC and discuss how this paradigm is relevant to a broad range of difficult human-robot interaction scenarios.
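The supervised, progressively autonomous loop that SPARC describes can be caricatured in a few lines: the robot suggests an action, the supervisor accepts or corrects it, and every executed decision becomes a training example, so suggestions improve while the human stays in control. This is a deliberately simplified sketch (a 1-nearest-neighbour policy; the class and method names are invented for illustration), not the published system:

```python
class SupervisedPolicy:
    """Supervisor-in-the-loop learning sketch, loosely inspired by SPARC.

    The robot proposes an action for the current state; the supervisor either
    lets it pass or overrides it. Whatever is executed is recorded, so the
    proposals become more reliable as demonstrations accumulate.
    """
    def __init__(self):
        self.examples = []  # list of (state_vector, action) pairs

    def suggest(self, state):
        if not self.examples:
            return None  # no knowledge yet: defer fully to the supervisor
        # 1-nearest-neighbour lookup over past supervised decisions.
        sq_dist = lambda s: sum((a - b) ** 2 for a, b in zip(s, state))
        return min(self.examples, key=lambda ex: sq_dist(ex[0]))[1]

    def record(self, state, executed_action):
        """Store the (possibly corrected) decision as a new demonstration."""
        self.examples.append((state, executed_action))
```

The key property, preserved from the paper's description, is that only supervisor-approved actions ever reach the learner, so the policy stays legible and accountable while autonomy grows.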
Affiliation(s)
- Emmanuel Senft
- Centre for Robotics and Neural Systems, University of Plymouth, Plymouth, UK
- Séverin Lemaignan
- Bristol Robotics Laboratory, University of the West of England, Bristol, UK
- Madeleine Bartlett
- Centre for Robotics and Neural Systems, University of Plymouth, Plymouth, UK
- Tony Belpaeme
- Centre for Robotics and Neural Systems, University of Plymouth, Plymouth, UK; IDLab-imec, Ghent University, Ghent, Belgium
14. Two Demonstrators Are Better Than One—A Social Robot That Learns to Imitate People With Different Interaction Styles. IEEE Trans Cogn Dev Syst 2019. DOI: 10.1109/tcds.2017.2787062.
15. Doering M, Liu P, Glas DF, Kanda T, Kulić D, Ishiguro H. Curiosity Did Not Kill the Robot. ACM Transactions on Human-Robot Interaction 2019. DOI: 10.1145/3326462.
Abstract
Learning from human interaction data is a promising approach for developing robot interaction logic, but behaviors learned only from offline data simply represent the most frequent interaction patterns in the training data, without any adaptation for individual differences. We developed a robot that incorporates both data-driven and interactive learning. Our robot first learns high-level dialog and spatial behavior patterns from offline examples of human--human interaction. Then, during live interactions, it chooses among appropriate actions according to its curiosity about the customer's expected behavior, continually updating its predictive model to learn and adapt to each individual. In a user study, we found that participants thought the curious robot was significantly more humanlike with respect to repetitiveness and diversity of behavior, more interesting, and better overall in comparison to a non-curious robot.
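The curiosity mechanism described, choosing among appropriate actions according to uncertainty about the customer's expected behavior, can be sketched as maximum-entropy action selection: prefer the action whose predicted response distribution the model is least sure about. A hypothetical illustration (the paper's actual curiosity measure and interfaces may differ):

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a discrete probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def choose_curious_action(candidate_actions, predict_response_dist):
    """Among the currently appropriate actions, pick the one whose predicted
    customer-response distribution has the highest entropy, i.e. the action
    the model would learn the most from trying."""
    return max(candidate_actions, key=lambda a: entropy(predict_response_dist(a)))
```

After each interaction the predictive model is updated with the observed response, so previously "curious" actions become predictable and the robot's behavior diversifies per individual rather than repeating the most frequent training pattern.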
Affiliation(s)
- Dana Kulić
- University of Waterloo, Waterloo, Ontario, Canada
16. Liu P, Glas DF, Kanda T, Ishiguro H. Learning proactive behavior for interactive social robots. Auton Robots 2017. DOI: 10.1007/s10514-017-9671-8.