1. Mohan M, Nunez CM, Kuchenbecker KJ. Closing the loop in minimally supervised human-robot interaction: formative and summative feedback. Sci Rep 2024; 14:10564. PMID: 38719859; PMCID: PMC11079071; DOI: 10.1038/s41598-024-60905-x.
Abstract
Human instructors fluidly communicate with hand gestures, head and body movements, and facial expressions, but robots rarely leverage these complementary cues. A minimally supervised social robot with such skills could help people exercise and learn new activities. Thus, we investigated how nonverbal feedback from a humanoid robot affects human behavior. Inspired by the education literature, we evaluated formative feedback (real-time corrections) and summative feedback (post-task scores) for three distinct tasks: positioning in the room, mimicking the robot's arm pose, and contacting the robot's hands. Twenty-eight adults completed seventy-five 30-s-long trials with no explicit instructions or experimenter help. Motion-capture data analysis shows that both formative and summative feedback from the robot significantly aided user performance. Additionally, formative feedback improved task understanding. These results show the power of nonverbal cues based on human movement and the utility of viewing feedback through formative and summative lenses.
Affiliations
- Mayumi Mohan: Haptic Intelligence Department, Max Planck Institute for Intelligent Systems, 70569 Stuttgart, Germany
- Cara M Nunez: Haptic Intelligence Department, Max Planck Institute for Intelligent Systems, 70569 Stuttgart, Germany; Sibley School of Mechanical and Aerospace Engineering, Cornell University, Ithaca, NY 14853, USA
- Katherine J Kuchenbecker: Haptic Intelligence Department, Max Planck Institute for Intelligent Systems, 70569 Stuttgart, Germany
2. Liu R, Liu Y, Wang H, Lu F. PnP-GA+: Plug-and-Play Domain Adaptation for Gaze Estimation Using Model Variants. IEEE Trans Pattern Anal Mach Intell 2024; 46:3707-3721. PMID: 38163314; DOI: 10.1109/tpami.2023.3348528.
Abstract
Appearance-based gaze estimation has garnered increasing attention in recent years. However, deep learning-based gaze estimation models still suffer from suboptimal performance when deployed in new domains, e.g., unseen environments or individuals. In our previous work, we addressed this challenge for the first time by introducing a plug-and-play method (PnP-GA) to adapt the gaze estimation model to new domains. The core concept of PnP-GA is to leverage the diversity brought by a group of model variants to enhance adaptability to diverse environments. In this article, we propose PnP-GA+, which extends our approach by exploring the impact of assembling model variants from three additional perspectives: color space, data augmentation, and model structure. Moreover, we propose an intra-group attention module that dynamically optimizes pseudo-labeling during adaptation. Experimental results demonstrate that directly plugging several existing gaze estimation networks into the PnP-GA+ framework outperforms state-of-the-art domain adaptation approaches on four standard gaze domain adaptation tasks on public datasets. Our method consistently enhances cross-domain performance, and its versatility comes from the various ways of assembling the model group.
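The group-based pseudo-labeling idea is easy to sketch in code. Below is a minimal, hypothetical PyTorch illustration (not the authors' implementation) of one adaptation step: frozen model variants pseudo-label a batch of unlabeled target-domain images, a small weighting network stands in for the paper's intra-group attention, and the adapted model regresses toward the aggregated pseudo-label. All names, shapes, and the L1 objective are assumptions.

```python
# Hypothetical sketch of group-ensemble pseudo-label adaptation.
import torch
import torch.nn as nn

def adapt_step(model, group, weight_net, images, optimizer):
    """One adaptation step on a batch of unlabeled target-domain images."""
    with torch.no_grad():
        # Each of the G frozen group members predicts a 2D gaze angle (pitch, yaw).
        preds = torch.stack([m(images) for m in group])          # (G, B, 2)
        # Per-sample weights over the group, normalized with a softmax
        # (a stand-in for the paper's intra-group attention module).
        scores = weight_net(preds.permute(1, 0, 2).flatten(1))   # (B, G)
        weights = torch.softmax(scores, dim=1)                   # (B, G)
        # The weighted aggregate serves as the pseudo-label for this batch.
        pseudo = (weights.T.unsqueeze(-1) * preds).sum(dim=0)    # (B, 2)
    loss = nn.functional.l1_loss(model(images), pseudo)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example wiring for G = 3 variants: weight_net = nn.Linear(3 * 2, 3).
# The attention module's own training objective is omitted for brevity.
```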
3. Goldman EJ, Poulin-Dubois D. Children's anthropomorphism of inanimate agents. Wiley Interdiscip Rev Cogn Sci 2024:e1676. PMID: 38659105; DOI: 10.1002/wcs.1676.
Abstract
This review article examines the extant literature on animism and anthropomorphism in infants and young children. A substantial body of work indicates that both infants and young children have a broad concept of what constitutes a sentient agent and react to inanimate objects as they do to people in the same context. The literature has also revealed a developmental pattern in which anthropomorphism decreases with age, but social robots appear to be an exception to this pattern. Additionally, the review shows that children attribute psychological properties to social robots to a lesser extent than to people but still anthropomorphize them. Importantly, some research suggests that anthropomorphism of social robots depends on their morphology and human-like behaviors, and that the extent to which children anthropomorphize robots depends on their exposure to them and the presence of human-like features. Based on the existing literature, we conclude that in infancy, a large range of inanimate objects (e.g., boxes, geometric figures) that display animate motion patterns trigger the same behaviors observed in child-adult interactions, suggesting some implicit form of anthropomorphism. The review concludes that additional research is needed to understand what infants and children judge as social agents and how the perception of inanimate agents changes over the lifespan. As exposure to robots and virtual assistants increases, future research must focus on better understanding the full impact that regular interactions with such partners will have on children's anthropomorphizing. This article is categorized under: Psychology > Learning; Cognitive Biology > Cognitive Development; Computer Science and Robotics > Robotics.
4. Ishikawa K, Oyama T, Tanaka Y, Okubo M. Perceiving social gaze produces the reversed congruency effect. Q J Exp Psychol (Hove) 2024:17470218241232981. PMID: 38320865; DOI: 10.1177/17470218241232981.
Abstract
Numerous studies have shown that the gaze of others triggers special attentional processes, such as the eye contact effect or joint attention. This study investigated the attentional processes triggered by various types of gaze stimuli (human, cat, fish, koala, and robot gaze). A total of 300 university students participated in five experiments. They performed a spatial Stroop task in which the five types of gaze stimuli were presented as targets, judging the direction of the target (left or right) irrespective of its location (left or right). The results showed that the social gaze targets (human and cat gaze) produced a reversed congruency effect, whereas the non-social gaze targets (fish and robot) did not (Experiments 2, 2B, 3, and 4). These results suggest that attention to the gaze of socially communicable beings (humans and cats) is responsible for the reversed congruency effect. Our findings support the notion that theory of mind or social interaction plays an important role in producing specific attentional processes in response to gaze stimuli.
Affiliations
- Kenta Ishikawa: Department of Psychology, Senshu University, Kawasaki, Japan
- Takato Oyama: Graduate School of Humanities, Senshu University, Kawasaki, Japan
- Yoshihiko Tanaka: Graduate School of Humanities, Senshu University, Kawasaki, Japan
- Matia Okubo: Department of Psychology, Senshu University, Kawasaki, Japan
5. Fiorini L, D'Onofrio G, Sorrentino A, Cornacchia Loizzo FG, Russo S, Ciccone F, Giuliani F, Sancarlo D, Cavallo F. The Role of Coherent Robot Behavior and Embodiment in Emotion Perception and Recognition During Human-Robot Interaction: Experimental Study. JMIR Hum Factors 2024; 11:e45494. PMID: 38277201; PMCID: PMC10858416; DOI: 10.2196/45494.
Abstract
Background: Social robots are becoming increasingly important as companions in our daily lives. Consequently, humans expect to interact with them using the same mental models applied to human-human interactions, including the use of cospeech gestures. Research efforts have been devoted to understanding users' needs and to developing robot behavioral models that can perceive the user's state and properly plan a reaction. Despite these efforts, some challenges regarding the effect of robot embodiment and behavior on the perception of emotions remain open.
Objective: This study has two aims. First, it assesses the role of the robot's cospeech gestures and embodiment in the user's perceived emotions in terms of valence (stimulus pleasantness), arousal (intensity of evoked emotion), and dominance (degree of control exerted by the stimulus). Second, it evaluates the robot's accuracy in identifying positive, negative, and neutral emotions displayed by interacting humans using 3 supervised machine learning algorithms: support vector machine, random forest, and K-nearest neighbor.
Methods: A Pepper robot was used to elicit the 3 emotions in humans using a set of 60 images retrieved from a standardized database. Two experimental conditions for emotion elicitation were performed with the Pepper robot: with a static behavior or with coherent (COH) cospeech behavior. Furthermore, to evaluate the role of robot embodiment, a third elicitation was performed by asking the participant to interact with a PC, where a graphical interface showed the same images. Each participant underwent only 1 of the 3 experimental conditions.
Results: A total of 60 participants were recruited for this study, 20 for each experimental condition, for a total of 3600 interactions. The results showed significant differences (P<.05) in valence, arousal, and dominance when participants were stimulated by the Pepper robot behaving coherently rather than by the PC, underscoring the importance of the robot's nonverbal communication and embodiment. A higher valence score was obtained for elicitation by the robot (COH and static behavior) than by the PC. For emotion recognition, the K-nearest neighbor classifiers achieved the best accuracy. In particular, the COH modality achieved the highest accuracy (0.97) compared with the static behavior and PC elicitations (0.88 and 0.94, respectively).
Conclusions: The results suggest that the use of multimodal communication channels, such as cospeech and visual channels, as in the COH modality, may improve the recognition accuracy of the user's emotional state and can reinforce the perceived emotion. Future studies should investigate the effects of age, culture, and cognitive profile on emotion perception and recognition, going beyond the limitations of this work.
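As a concrete illustration of the classification setup described above, the following is a hedged sketch (not the study's code) of a K-nearest-neighbor pipeline for three emotion classes; the feature matrix and labels here are simulated placeholders, not the study's data.

```python
# Illustrative 3-class emotion classifier with KNN, the best-performing
# model family reported above; data and feature dimensions are made up.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 16))       # 300 interactions, 16 features each
y = rng.integers(0, 3, size=300)     # 0=negative, 1=neutral, 2=positive

# Scaling matters for distance-based classifiers such as KNN.
clf = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
print(f"mean accuracy: {scores.mean():.2f}")
```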
Affiliations
- Laura Fiorini: Department of Industrial Engineering, University of Florence, Florence, Italy; The BioRobotics Institute, Scuola Superiore Sant'Anna, Pontedera (Pisa), Italy
- Grazia D'Onofrio: Clinical Psychology Service, Health Department, Fondazione IRCCS Casa Sollievo della Sofferenza, San Giovanni Rotondo (Foggia), Italy
- Sergio Russo: Innovation & Research Unit, Fondazione IRCCS Casa Sollievo della Sofferenza, San Giovanni Rotondo (Foggia), Italy
- Filomena Ciccone: Clinical Psychology Service, Health Department, Fondazione IRCCS Casa Sollievo della Sofferenza, San Giovanni Rotondo (Foggia), Italy
- Francesco Giuliani: Innovation & Research Unit, Fondazione IRCCS Casa Sollievo della Sofferenza, San Giovanni Rotondo (Foggia), Italy
- Daniele Sancarlo: Complex Unit of Geriatrics, Department of Medical Sciences, Fondazione IRCCS Casa Sollievo della Sofferenza, San Giovanni Rotondo (Foggia), Italy
- Filippo Cavallo: Department of Industrial Engineering, University of Florence, Florence, Italy; The BioRobotics Institute, Scuola Superiore Sant'Anna, Pontedera (Pisa), Italy
6. Parenti L, Belkaid M, Wykowska A. Differences in Social Expectations About Robot Signals and Human Signals. Cogn Sci 2023; 47:e13393. PMID: 38133602; DOI: 10.1111/cogs.13393.
Abstract
In our daily lives, we are continually involved in decision-making situations, many of which take place in the context of social interaction. Despite the ubiquity of such situations, there remains a gap in our understanding of how decision-making unfolds in social contexts, and how communicative signals, such as social cues and feedback, impact the choices we make. Moreover, there is a new social context to which humans are increasingly exposed: social interaction not only with other humans but also with artificial agents, such as robots or avatars. Given these technological developments, it is of great interest to ask whether, and in what way, social signals exhibited by non-human agents influence decision-making. The present study examined whether a robot's non-verbal communicative behavior affects human decision-making. To this end, we implemented a two-alternative choice task in which participants guessed which of two presented cups covered a ball, an adaptation of the "shell game." A robot avatar acted as a game partner producing social cues and feedback. We manipulated the robot's cues (pointing toward one of the cups) before the participant's decision and the robot's feedback ("thumbs up" or no feedback) after the decision. We found that participants were slower (compared with other conditions) when cues were mostly invalid and the robot reacted positively to wins. We argue that this was due to the incongruence of the signals (cue vs. feedback) and thus a violation of expectations. In sum, our findings show that incongruence in pre- and post-decision social signals from a robot significantly influences task performance, highlighting the importance of understanding expectations toward social robots for effective human-robot interactions.
Affiliations
- Lorenzo Parenti: Social Cognition in Human-Robot Interaction, Istituto Italiano di Tecnologia (IIT); Department of Psychology, University of Turin
- Marwen Belkaid: Social Cognition in Human-Robot Interaction, Istituto Italiano di Tecnologia (IIT); ETIS UMR 8051, CY Cergy Paris Université, ENSEA, CNRS
- Agnieszka Wykowska: Social Cognition in Human-Robot Interaction, Istituto Italiano di Tecnologia (IIT)
7. Robb DA, Lopes J, Ahmad MI, McKenna PE, Liu X, Lohan K, Hastie H. Seeing eye to eye: trustworthy embodiment for task-based conversational agents. Front Robot AI 2023; 10:1234767. PMID: 37711593; PMCID: PMC10499495; DOI: 10.3389/frobt.2023.1234767.
Abstract
Smart speakers and conversational agents have been accepted into our homes for a number of tasks, such as playing music, interfacing with the internet of things, and, more recently, general chit-chat. However, they have been less readily accepted in our workplaces. This may be due to the data privacy and security concerns that surround commercially available smart speakers, but it may also be that a smart speaker is simply too abstract and does not portray the social cues associated with a trustworthy work colleague. Here, we present an in-depth mixed-methods study in which we investigate this question of embodiment in a serious, task-based work scenario of a first-responder team. We explore the concepts of trust, engagement, cognitive load, and human performance using a humanoid head-style robot, a commercially available smart speaker, and a specially developed dialogue manager. Studying the effect of embodiment on trust, a highly subjective and multi-faceted phenomenon, is clearly challenging, and our results indicate that the robot, with its anthropomorphic facial features, expressions, and eye gaze, was potentially trusted more than the smart speaker. In addition, we found that embodying a conversational agent helped increase task engagement and performance compared with the smart speaker. This study indicates that embodiment could be useful for transitioning conversational agents into the workplace, and further in situ, "in the wild" experiments with domain workers could be conducted to confirm this.
Affiliations
- David A. Robb: Department of Computer Science, Heriot-Watt University, Edinburgh, United Kingdom
- José Lopes: Department of Computer Science, Heriot-Watt University, Edinburgh, United Kingdom; Semasio, Porto, Portugal
- Muneeb I. Ahmad: Department of Computer Science, Swansea University, Swansea, United Kingdom
- Peter E. McKenna: Department of Psychology, Heriot-Watt University, Edinburgh, United Kingdom
- Xingkun Liu: Department of Computer Science, Heriot-Watt University, Edinburgh, United Kingdom
- Katrin Lohan: Eastern Switzerland University of Applied Sciences, Buchs SG, Switzerland
- Helen Hastie: Department of Computer Science, Heriot-Watt University, Edinburgh, United Kingdom
8. Schmitz I, Einhäuser W. Effects of interpreting a dynamic geometric cue as gaze on attention allocation. J Vis 2023; 23:8. PMID: 37548959; PMCID: PMC10414131; DOI: 10.1167/jov.23.8.8.
Abstract
Gaze is a powerful cue for directing attention. We investigate how interpreting an abstract figure as gaze modulates its efficacy as an attentional cue. In each trial, two vertical lines on a central disk moved to one side (left or right). Independent of this "feature-cued" side, a target (black disk) subsequently appeared on one side. After 300 trials (phase 1), participants watched a video of a human avatar walking away. For one group, the avatar wore a helmet that visually matched the central disk and looked at black disks to either side; the other group's video was unrelated to the cueing task. After another 300 trials (phase 2), the videos were swapped between groups, and 300 further trials (phase 3) followed. In all phases, participants responded more quickly to targets appearing on the feature-cued side. There was a significant interaction between group and phase for reaction times: in phase 3, the group who had just watched the avatar with the helmet showed a reduced advantage for the feature-cued side. Hence, interpreting the disk as a turning head seen from behind counteracts the cueing by the motion of the disk. This suggests that the mere perceptual interpretation of an abstract stimulus as gaze yields social cueing effects.
Affiliations
- Inka Schmitz: Physics of Cognition Group, Institute of Physics, Chemnitz University of Technology, Chemnitz, Germany
- Wolfgang Einhäuser: Physics of Cognition Group, Institute of Physics, Chemnitz University of Technology, Chemnitz, Germany
9. Onnasch L, Schweidler P, Schmidt H. The potential of robot eyes as predictive cues in HRI-an eye-tracking study. Front Robot AI 2023; 10:1178433. PMID: 37575370; PMCID: PMC10416260; DOI: 10.3389/frobt.2023.1178433.
Abstract
Robots currently provide only a limited amount of information about their future movements to human collaborators. In human interaction, communication through gaze can be helpful by intuitively directing attention to specific targets. Whether and how this mechanism could benefit interaction with robots, and what a design of predictive robot eyes should look like, is not well understood. In a between-subjects design, four different types of eyes were therefore compared with regard to their attention-directing potential: a pair of arrows, human eyes, and two anthropomorphic robot eye designs. For this purpose, 39 subjects performed a novel, screen-based gaze cueing task in the laboratory. Participants' attention was measured using manual responses and eye-tracking, and information on the perception of the tested cues was gathered through additional subjective measures. All eye models were overall easy to read and able to direct participants' attention. The anthropomorphic robot eyes were most efficient at shifting participants' attention, as revealed by faster manual and saccadic reaction times. In addition, a robot equipped with anthropomorphic eyes was perceived as more competent. Abstract anthropomorphic robot eyes therefore seem to trigger a reflexive reallocation of attention, which points to a social and automatic processing of such artificial stimuli.
10. Paul SK, Nicolescu M, Nicolescu M. Enhancing Human-Robot Collaboration through a Multi-Module Interaction Framework with Sensor Fusion: Object Recognition, Verbal Communication, User of Interest Detection, Gesture and Gaze Recognition. Sensors (Basel) 2023; 23:5798. PMID: 37447647; DOI: 10.3390/s23135798.
Abstract
With the increasing presence of robots in our daily lives, it is crucial to design interaction interfaces that are natural, easy to use, and meaningful for robotic tasks. This is important not only to enhance the user experience but also to increase task reliability by providing supplementary information. Motivated by this, we propose a multi-modal framework consisting of multiple independent modules. These modules take advantage of multiple sensors (e.g., image, sound, depth) and can be used separately or in combination for effective human-robot collaborative interaction. We identified and implemented four key components of an effective human-robot collaborative setting: determining object location and pose, extracting intricate information from verbal instructions, resolving the user(s) of interest (UOI), and gesture recognition and gaze estimation to facilitate natural and intuitive interactions. The system uses a feature-detector-descriptor approach for object recognition, a homography-based technique for planar pose estimation, and a deep multi-task learning model to extract intricate task parameters from verbal communication. The UOI is detected by estimating the facing state and active speakers. The framework also includes gesture detection and gaze estimation modules, which are combined with the verbal instruction component to form structured commands for robotic entities. Experiments were conducted to assess the performance of these interaction interfaces, and the results demonstrated the effectiveness of the approach.
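The object-recognition and planar-pose recipe described above is a classic feature-matching pipeline that can be sketched briefly. The snippet below is an illustrative OpenCV version; ORB features, the file names, and the identity camera matrix are assumptions, since the abstract does not name the exact detector or calibration.

```python
# Illustrative feature-detector-descriptor + homography pipeline (not the
# authors' code): match a planar object template into a scene, then estimate
# candidate planar poses from the homography.
import cv2
import numpy as np

template = cv2.imread("object_template.png", cv2.IMREAD_GRAYSCALE)  # placeholder
scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)               # placeholder

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(template, None)
kp2, des2 = orb.detectAndCompute(scene, None)

# Brute-force Hamming matching with cross-check for reliable correspondences.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:50]

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# RANSAC rejects outlier matches; the homography maps template -> scene.
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Given camera intrinsics K, decomposing H yields candidate planar poses.
K = np.eye(3)  # placeholder; use the calibrated camera matrix in practice
retval, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
```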
Affiliations
- Shuvo Kumar Paul: Department of Computer Science and Engineering, University of Nevada, Reno, 1664 N Virginia St, Reno, NV 89557, USA
- Mircea Nicolescu: Department of Computer Science and Engineering, University of Nevada, Reno, 1664 N Virginia St, Reno, NV 89557, USA
- Monica Nicolescu: Department of Computer Science and Engineering, University of Nevada, Reno, 1664 N Virginia St, Reno, NV 89557, USA
11. Wang Y, Zhang M, Wu J, Zhang H, Yang H, Guo S, Lin Z, Lu C. Effects of the Interactive Features of Virtual Partner on Individual Exercise Level and Exercise Perception. Behav Sci (Basel) 2023; 13:434. PMID: 37232671; DOI: 10.3390/bs13050434.
Abstract
Background: We designed an exercise system in which the user is accompanied by a virtual partner (VP) and tested bodyweight squat performance with different interactive VP features to explore the combined impact of these features on the individual's exercise level (EL) and exercise perception.
Methods: The experiment used three interactive VP features as independent variables: body movement (BM), eye gaze (EG), and sports performance (SP). The observational indicators were the exerciser's EL, subjective exercise enjoyment, attitude toward the team formed with the VP, and degree of local muscle fatigue. We designed a 2 (with or without VP's BM) × 2 (with or without VP's EG) × 2 (with or without VP's SP) within-participants factorial experiment; a total of 40 college students were invited to complete 320 experimental sessions.
Results: (1) Regarding EL, the main effects of BM and SP were significant (p < 0.001), and the pairwise interaction effects of the three independent variables on EL were all significant (p < 0.05). (2) Regarding exercise perception, the main effects of BM (p < 0.001) and EG (p < 0.001) on subjective exercise enjoyment were significant. The main effect of BM on attitude toward the sports team formed with the VP was significant (p < 0.001), as was the interaction effect of BM and SP (p < 0.001). (3) Regarding local muscle fatigue, neither the main effects of BM, EG, and SP nor their interaction effects were significant (p > 0.05).
Conclusion: BM and EG from the VP elevated EL and exercise perception during squat exercises, while a VP with SP inhibited EL and harmed exercise perception. These conclusions can inform the interaction design of VP-accompanied exercise systems.
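For readers who want to see how main and interaction effects in such a 2 × 2 × 2 within-participants design are typically tested, here is a brief, hypothetical sketch using statsmodels' repeated-measures ANOVA on simulated data (not the study's data; column names are made up).

```python
# Simulated 2x2x2 within-participants design analyzed with AnovaRM.
import itertools
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
rows = []
for subject in range(40):  # 40 participants, one observation per cell
    for bm, eg, sp in itertools.product([0, 1], repeat=3):
        el = 5 + 0.8 * bm - 0.5 * sp + rng.normal()  # simulated exercise level
        rows.append({"subject": subject, "BM": bm, "EG": eg, "SP": sp, "EL": el})
df = pd.DataFrame(rows)

# Repeated-measures ANOVA with three within-subject factors.
result = AnovaRM(df, depvar="EL", subject="subject",
                 within=["BM", "EG", "SP"]).fit()
print(result.summary())  # F and p values for main effects and interactions
```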
Affiliations
- Yinghao Wang: Industrial Design and Research Institute, Zhejiang University of Technology, Hangzhou 310023, China
- Mengsi Zhang: School of Design and Architecture, Zhejiang University of Technology, Hangzhou 310023, China
- Jianfeng Wu: Industrial Design and Research Institute, Zhejiang University of Technology, Hangzhou 310023, China
- Haonan Zhang: School of Design and Architecture, Zhejiang University of Technology, Hangzhou 310023, China
- Hongchun Yang: Industrial Design and Research Institute, Zhejiang University of Technology, Hangzhou 310023, China
- Songyang Guo: School of Design and Architecture, Zhejiang University of Technology, Hangzhou 310023, China
- Zishuo Lin: School of Design and Architecture, Zhejiang University of Technology, Hangzhou 310023, China
- Chunfu Lu: Industrial Design and Research Institute, Zhejiang University of Technology, Hangzhou 310023, China
12. Su H, Qi W, Chen J, Yang C, Sandoval J, Laribi MA. Recent advancements in multimodal human-robot interaction. Front Neurorobot 2023; 17:1084000. PMID: 37250671; PMCID: PMC10210148; DOI: 10.3389/fnbot.2023.1084000.
Abstract
Robotics has advanced significantly over the years, and human-robot interaction (HRI) now plays an important role in delivering the best user experience, cutting down on laborious tasks, and raising public acceptance of robots. New HRI approaches are necessary to promote the evolution of robots, with a more natural and flexible manner of interaction clearly the most crucial. As a newly emerging approach to HRI, multimodal HRI is a method for individuals to communicate with a robot using various modalities, including voice, image, text, eye movement, and touch, as well as bio-signals such as EEG and ECG. It is a broad field closely related to cognitive science, ergonomics, multimedia technology, and virtual reality, with numerous applications springing up each year. However, little research has been done to summarize the current developments and future trends of multimodal HRI. To this end, this paper systematically reviews the state of the art of multimodal HRI and its applications by summarizing the latest research articles relevant to this field. Research developments in terms of both input and output signals are also covered.
Affiliations
- Hang Su: Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Wen Qi: School of Future Technology, South China University of Technology, Guangzhou, China
- Jiahao Chen: State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Chenguang Yang: Bristol Robotics Laboratory, University of the West of England, Bristol, United Kingdom
- Juan Sandoval: Department of GMSC, Pprime Institute, CNRS, ENSMA, University of Poitiers, Poitiers, France
- Med Amine Laribi: Department of GMSC, Pprime Institute, CNRS, ENSMA, University of Poitiers, Poitiers, France
13. Fu D, Abawi F, Carneiro H, Kerzel M, Chen Z, Strahl E, Liu X, Wermter S. A Trained Humanoid Robot can Perform Human-Like Crossmodal Social Attention and Conflict Resolution. Int J Soc Robot 2023; 15:1-16. PMID: 37359433; PMCID: PMC10067521; DOI: 10.1007/s12369-023-00993-3.
Abstract
To enhance human-robot social interaction, it is essential for robots to process multiple social cues in a complex real-world environment. However, incongruency of input information across modalities is inevitable and can be challenging for robots to process. To tackle this challenge, our study adopted the neurorobotic paradigm of crossmodal conflict resolution to make a robot express human-like social attention. A behavioural experiment was conducted with 37 participants for the human study. We designed a round-table meeting scenario with three animated avatars to improve ecological validity. Each avatar wore a medical mask to obscure the facial cues of the nose, mouth, and jaw. The central avatar shifted its eye gaze while the peripheral avatars generated sound; gaze direction and sound locations were either spatially congruent or incongruent. We observed that the central avatar's dynamic gaze could trigger crossmodal social attention responses. In particular, human performance was better under the congruent audio-visual condition than under the incongruent condition. For the robot study, our saliency prediction model was trained to detect social cues, predict audio-visual saliency, and attend selectively. After mounting the trained model on the iCub, the robot was exposed to laboratory conditions similar to those of the human experiment. While human performance was overall superior, our trained model demonstrated that it could replicate attention responses similar to those of humans.
Affiliations
- Di Fu: CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China; Department of Informatics, University of Hamburg, Hamburg, Germany
- Fares Abawi: Department of Informatics, University of Hamburg, Hamburg, Germany
- Hugo Carneiro: Department of Informatics, University of Hamburg, Hamburg, Germany
- Matthias Kerzel: Department of Informatics, University of Hamburg, Hamburg, Germany
- Ziwei Chen: CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Erik Strahl: Department of Informatics, University of Hamburg, Hamburg, Germany
- Xun Liu: CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Stefan Wermter: Department of Informatics, University of Hamburg, Hamburg, Germany
14. Maalouly E, Yamazaki R, Nishio S, Nørskov M, Kamaga K, Komai S, Chiba K, Atsumi K, Akao KI. Assessing the effect of dialogue on altruism toward future generations: A preliminary study. Front Comput Sci 2023. DOI: 10.3389/fcomp.2023.1129340.
Abstract
Introduction: Despite the abundance of evidence on climate change and its consequences for future generations, people in general are still reluctant to change their actions and behaviors toward the environment in ways that would particularly benefit posterity. In this study, we took a preliminary step in a new research direction to explore humans' altruistic behavior toward future generations of people and whether it can be affected by dialogue.
Methods: We used an android robot called Telenoid as a representative of future generations by explaining that the robot is controlled by an artificial intelligence (AI) living in a simulation of our world in the future. To measure people's altruistic behavior toward it, we asked the participants to play a round of the Dictator Game with the Telenoid, have an interactive conversation with it, and then play another round.
Results: On average, participants gave more money to the Telenoid in the second round (after the interactive conversation). The average amount given increased from 20% in the first round to about 30% in the second.
Discussion: The results indicate that the conversation with the robot might have been responsible for the change in altruistic behavior toward the Telenoid. Contrary to our expectations, the personality of the participants did not appear to influence their change of behavior. We discuss the influence of other possible factors, such as empathy and the appearance of the robot. The preliminary nature of this study should deter us from making definitive conclusions, but the results are promising and establish the ground for future experiments.
15. Mavrogiannis C, Baldini F, Wang A, Zhao D, Trautman P, Steinfeld A, Oh J. Core Challenges of Social Robot Navigation: A Survey. ACM Trans Hum-Robot Interact 2023. DOI: 10.1145/3583741.
Abstract
Robot navigation in crowded public spaces is a complex task that requires addressing a variety of engineering and human-factors challenges. These challenges have motivated a great amount of research, resulting in important developments for the fields of robotics and human-robot interaction over the past three decades. Despite this significant progress and the massive recent interest, we observe a number of remaining challenges that prohibit the seamless deployment of autonomous robots in crowded environments. In this survey article, we organize existing challenges into a set of categories related to broader open problems in robot planning, behavior design, and evaluation methodologies. Within these categories, we review past work and offer directions for future research. Our work builds upon and extends earlier survey efforts by (a) taking a critical perspective and diagnosing fundamental limitations of adopted practices in the field and (b) offering constructive feedback and ideas that could inspire research in the field over the coming decade.
Affiliations
- Francesca Baldini: Honda Research Institute and California Institute of Technology, USA
- Allan Wang: The Robotics Institute, Carnegie Mellon University, USA
- Dapeng Zhao: The Robotics Institute, Carnegie Mellon University, USA
- Jean Oh: The Robotics Institute, Carnegie Mellon University, USA
16. Zonca J, Folsø A, Sciutti A. Social Influence Under Uncertainty in Interaction with Peers, Robots and Computers. Int J Soc Robot 2023. DOI: 10.1007/s12369-022-00959-x.
Abstract
Taking advice from others requires confidence in their competence. This is important for interaction with peers, but also for collaboration with social robots and artificial agents. Nonetheless, we do not always have access to information about others' competence or performance. In such uncertain environments, do our prior beliefs about the nature and competence of our interacting partners modulate our willingness to rely on their judgments? In a joint perceptual decision-making task, participants made perceptual judgments and observed the simulated estimates of either a human participant, a social humanoid robot, or a computer; they could then modify their estimates based on this feedback. Results show that participants' beliefs about the nature of their partner biased their compliance with its judgments: participants were more influenced by the social robot than by the human and computer partners. This difference emerged strongly at the very beginning of the task and decreased with repeated exposure to empirical feedback on the partner's responses, disclosing the role of prior beliefs in social influence under uncertainty. Furthermore, the results of our functional task suggest an important difference between human-human and human-robot interaction in the absence of overt socially relevant signals from the partner: the former is modulated by social normative mechanisms, whereas the latter is guided by purely informational mechanisms linked to the perceived competence of the partner.
17. Irfan B, Céspedes N, Casas J, Senft E, Gutiérrez LF, Rincon-Roncancio M, Cifuentes CA, Belpaeme T, Múnera M. Personalised socially assistive robot for cardiac rehabilitation: Critical reflections on long-term interactions in the real world. User Model User-Adapt Interact 2023; 33:497-544. PMID: 35874292; PMCID: PMC9294801; DOI: 10.1007/s11257-022-09323-0.
Abstract
Lack of motivation and low adherence rates are critical concerns of long-term rehabilitation programmes, such as cardiac rehabilitation. Socially assistive robots are known to be effective in improving motivation in therapy. However, over longer durations, generic and repetitive behaviours by the robot often result in a decrease in motivation and engagement, which can be overcome by personalising the interaction, such as recognising users, addressing them with their name, and providing feedback on their progress and adherence. We carried out a real-world clinical study, lasting 2.5 years with 43 patients to evaluate the effects of using a robot and personalisation in cardiac rehabilitation. Due to dropouts and other factors, 26 patients completed the programme. The results derived from these patients suggest that robots facilitate motivation and adherence, enable prompt detection of critical conditions by clinicians, and improve the cardiovascular functioning of the patients. Personalisation is further beneficial when providing high-intensity training, eliciting and maintaining engagement (as measured through gaze and social interactions) and motivation throughout the programme. However, relying on full autonomy for personalisation in a real-world environment resulted in sensor and user recognition failures, which caused negative user perceptions and lowered the perceived utility of the robot. Nonetheless, personalisation was positively perceived, suggesting that potential drawbacks need to be weighed against various benefits of the personalised interaction.
Affiliations
- Bahar Irfan: Centre for Robotics and Neural Systems, University of Plymouth, Plymouth, UK; present address: Evinoks Service Equipment Industry and Commerce Inc., Bursa, Turkey
- Nathalia Céspedes: Department of Biomedical Engineering, Colombian School of Engineering Julio Garavito, Bogotá, Colombia; present address: Department of Computer Science and Electronic Engineering, Queen Mary University of London, London, UK
- Jonathan Casas: Department of Biomedical Engineering, Colombian School of Engineering Julio Garavito, Bogotá, Colombia; present address: Mechanical and Aerospace Engineering Department, Syracuse University, Syracuse, NY, USA
- Emmanuel Senft: Centre for Robotics and Neural Systems, University of Plymouth, Plymouth, UK; present address: Department of Computer Sciences, University of Wisconsin-Madison, Madison, WI, USA
- Carlos A. Cifuentes: School of Engineering, Science and Technology, Universidad del Rosario, Bogotá, Colombia
- Tony Belpaeme: Centre for Robotics and Neural Systems, University of Plymouth, Plymouth, UK; IDLab-imec, Ghent University, Ghent, Belgium
- Marcela Múnera: Department of Biomedical Engineering, Colombian School of Engineering Julio Garavito, Bogotá, Colombia
18. Pasupuleti D, Sasidharan S, Manikutty G, Das AM, Pankajakshan P, Strauss S. Co-designing the Embodiment of a Minimalist Social Robot to Encourage Hand Hygiene Practises Among Children in India. Int J Soc Robot 2023; 15:345-367. PMID: 36778903; PMCID: PMC9900201; DOI: 10.1007/s12369-023-00969-3.
Abstract
We conducted an empirical study to co-design a social robot with children to bring about long-term behavioural changes. As a case study, we focused our efforts to create a social robot to promote handwashing in community settings while adhering to minimalistic design principles. Since cultural views influence design preferences and technology acceptance, we selected forty children from different socio-economic backgrounds across India as informants for our design study. We asked the children to design paper mock-ups using pre-cut geometrical shapes to understand their mental models of such a robot. The children also shared their feedback on the eight resulting different conceptual designs of minimalistic caricatured social robots. Our findings show that children had varied expectations of the robot's emotional intelligence, interactions, and social roles even though it was being designed for a specific context of use. The children unequivocally liked and trusted anthropomorphized caricatured designs of everyday objects for the robot's morphology. Based on these findings, we present our recommendations for the physical and interaction features of a minimalist social robot assimilating the children's inputs and social robot design principles grounded in prior research. Future studies will examine the children's interactions with a built prototype.
Affiliations
- Devasena Pasupuleti: AMMACHI labs, Amrita Vishwa Vidyapeetham, Amritapuri, Kollam, Kerala 690525, India
- Sreejith Sasidharan: AMMACHI labs, Amrita Vishwa Vidyapeetham, Amritapuri, Kollam, Kerala 690525, India
- Gayathri Manikutty: AMMACHI labs, Amrita Vishwa Vidyapeetham, Amritapuri, Kollam, Kerala 690525, India
- Anand M. Das: AMMACHI labs, Amrita Vishwa Vidyapeetham, Amritapuri, Kollam, Kerala 690525, India
- Praveen Pankajakshan: AMMACHI labs, Amrita Vishwa Vidyapeetham, Amritapuri, Kollam, Kerala 690525, India; Cropin AI Lab, Bengaluru, India
- Sidney Strauss: AMMACHI labs, Amrita Vishwa Vidyapeetham, Amritapuri, Kollam, Kerala 690525, India
19. Koller M, Weiss A, Hirschmanner M, Vincze M. Robotic gaze and human views: A systematic exploration of robotic gaze aversion and its effects on human behaviors and attitudes. Front Robot AI 2023; 10:1062714. PMID: 37102131; PMCID: PMC10123290; DOI: 10.3389/frobt.2023.1062714.
Abstract
Similar to human-human interaction (HHI), gaze is an important modality in conversational human-robot interaction (HRI) settings. Previously, human-inspired gaze parameters have been used to implement gaze behavior for humanoid robots in conversational settings and improve user experience (UX). Other robotic gaze implementations disregard social aspects of gaze behavior and pursue a technical goal (e.g., face tracking). However, it is unclear how deviating from human-inspired gaze parameters affects the UX. In this study, we use eye-tracking, interaction duration, and self-reported attitudinal measures to study the impact of non-human inspired gaze timings on the UX of the participants in a conversational setting. We show the results for systematically varying the gaze aversion ratio (GAR) of a humanoid robot over a broad parameter range from almost always gazing at the human conversation partner to almost always averting the gaze. The main results reveal that on a behavioral level, a low GAR leads to shorter interaction durations and that human participants change their GAR to mimic the robot. However, they do not copy the robotic gaze behavior strictly. Additionally, in the lowest gaze aversion setting, participants do not gaze back as much as expected, which indicates a user aversion to the robot gaze behavior. However, participants do not report different attitudes toward the robot for different GARs during the interaction. In summary, the urge of humans in conversational settings with a humanoid robot to adapt to the perceived GAR is stronger than the urge of intimacy regulation through gaze aversion, and a high mutual gaze is not always a sign of high comfort, as suggested earlier. This result can be used as a justification to deviate from human-inspired gaze parameters when necessary for specific robot behavior implementations.
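The gaze aversion ratio at the heart of this study is straightforward to operationalize. The sketch below is a small, hypothetical Python illustration, assuming GAR is defined as averted time divided by total interaction time computed from time-stamped gaze annotations; the abstract describes GAR only conceptually, so this operationalization and all names are assumptions.

```python
# Hypothetical computation of a gaze aversion ratio (GAR) from annotated
# gaze intervals: fraction of the interaction spent with gaze averted.
from dataclasses import dataclass

@dataclass
class GazeInterval:
    start: float    # seconds
    end: float      # seconds
    averted: bool   # True if gaze was averted from the partner

def gaze_aversion_ratio(intervals: list[GazeInterval]) -> float:
    """Return averted time divided by total annotated time."""
    total = sum(iv.end - iv.start for iv in intervals)
    averted = sum(iv.end - iv.start for iv in intervals if iv.averted)
    return averted / total if total > 0 else 0.0

# Example: 6 s looking at the partner, then 4 s averted -> GAR = 0.4
print(gaze_aversion_ratio([GazeInterval(0, 6, False), GazeInterval(6, 10, True)]))
```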
Affiliations
- Michael Koller: Automation and Control Institute, TU Wien, Vienna, Austria
- Astrid Weiss: Human Computer Interaction Group, TU Wien, Vienna, Austria
- Markus Vincze: Automation and Control Institute, TU Wien, Vienna, Austria
20. Sun YC, Effati M, Naguib HE, Nejat G. SoftSAR: The New Softer Side of Socially Assistive Robots-Soft Robotics with Social Human-Robot Interaction Skills. Sensors (Basel) 2022; 23:432. PMID: 36617030; PMCID: PMC9824785; DOI: 10.3390/s23010432.
Abstract
When we think of "soft" in terms of socially assistive robots (SARs), it is mainly in reference to the soft outer shells of these robots, ranging from robotic teddy bears to furry robot pets. However, soft robotics is a promising field that has not yet been leveraged in SAR design. Soft robotics is the incorporation of smart materials to achieve biomimetic motions, active deformations, and responsive sensing. By utilizing these distinctive characteristics, a new type of SAR can be developed that has the potential to be safer to interact with, more flexible, and able to use novel interaction modes (colors/shapes) to engage in heightened human-robot interaction. In this perspective article, we coin this new collaborative research area SoftSAR. We provide extensive discussions on just how soft robotics can be utilized to positively impact SARs, from their actuation mechanisms to their sensory designs, and how valuable it will be in informing future SAR design and applications. Through detailed discussions of the fundamental mechanisms of soft robotic technologies, we outline a number of key SAR research areas that can benefit from unique soft robotic mechanisms, resulting in the creation of the new field of SoftSAR.
Affiliations
- Yu-Chen Sun: Autonomous Systems and Biomechatronics Laboratory (ASBLab), Department of Mechanical & Industrial Engineering, University of Toronto, Toronto, ON M5S 3G8, Canada; Toronto Smart Materials and Structures (TSMART), Department of Mechanical & Industrial Engineering, University of Toronto, Toronto, ON M5S 3G8, Canada
- Meysam Effati: Autonomous Systems and Biomechatronics Laboratory (ASBLab), Department of Mechanical & Industrial Engineering, University of Toronto, Toronto, ON M5S 3G8, Canada
- Hani E. Naguib: Toronto Smart Materials and Structures (TSMART), Department of Mechanical & Industrial Engineering, University of Toronto, Toronto, ON M5S 3G8, Canada; Institute of Biomedical Engineering, University of Toronto, Toronto, ON M5S 3G8, Canada; Toronto Institute of Advanced Manufacturing (TIAM), University of Toronto, Toronto, ON M5S 3G8, Canada; Toronto Rehabilitation Institute, Toronto, ON M5G 2A2, Canada
- Goldie Nejat: Autonomous Systems and Biomechatronics Laboratory (ASBLab), Department of Mechanical & Industrial Engineering, University of Toronto, Toronto, ON M5S 3G8, Canada; Toronto Institute of Advanced Manufacturing (TIAM), University of Toronto, Toronto, ON M5S 3G8, Canada; Toronto Rehabilitation Institute, Toronto, ON M5G 2A2, Canada; Rotman Research Institute, Baycrest Health Sciences, North York, ON M6A 2E1, Canada
21. Robotic Gaze Responsiveness in Multiparty Teamwork. Int J Soc Robot 2022. DOI: 10.1007/s12369-022-00955-1.
22. Servais A, Hurter C, Barbeau EJ. Gaze direction as a facial cue of memory retrieval state. Front Psychol 2022; 13:1063228. PMID: 36619020; PMCID: PMC9813397; DOI: 10.3389/fpsyg.2022.1063228.
Abstract
Gaze direction is a powerful social cue that indicates the direction of attention and can be used to decode others' mental states. When an individual looks at an external object, inferring where their attention is focused from their gaze direction is easy. But when people are immersed in memories, their attention is oriented towards their inner world. Is there any specific gaze direction in this situation, and if so, which one? While trying to remember, a common behavior is gaze aversion, which has mostly been reported as an upward-directed gaze. Our primary aim was to evaluate whether gaze direction plays a role in inferring the orientation of attention (external vs. internal), in particular, whether an upward direction is considered an indicator of attention towards the internal world. Our secondary objective was to explore whether different gaze directions are consistently attributed to different types of internal mental states and, more specifically, memory states (autobiographical or semantic memory retrieval, or working memory). Gaze aversion is assumed to play a role in perceptual decoupling, which is thought to support internal attention; we therefore also tested whether internal attention was associated with high gaze eccentricity, because the mismatch between head and eye direction reduces visual acuity. We conducted two large-sample (160-163 participants) online experiments in which participants were asked to choose which mental state, among different internal and external attentional states, they would attribute to faces with gazes oriented in different directions. Participants significantly associated internal attention with an upward-averted gaze across experiments, while external attention was mostly associated with a gaze remaining on the horizontal axis. This shows that gaze direction is robustly used by observers to infer others' mental states. Unexpectedly, internal attentional states were not more strongly associated with gaze at high (30°) than at low (10°) eccentricity, and we found that autobiographical memory retrieval, but not the other memory states, was highly associated with a 10° downward gaze. This reveals the possible existence of different types of gaze aversion for different types of memories and opens new perspectives.
Affiliations
- Anaïs Servais: Centre de Recherche Cerveau et Cognition (CerCo), CNRS-UPS, UMR5549, Toulouse, France; Ecole Nationale d'Aviation Civile (ENAC), Toulouse, France
- Emmanuel J. Barbeau: Centre de Recherche Cerveau et Cognition (CerCo), CNRS-UPS, UMR5549, Toulouse, France
23. Hostettler D, Mayer S, Hildebrand C. Human-Like Movements of Industrial Robots Positively Impact Observer Perception. Int J Soc Robot 2022; 15:1-19. PMID: 36570426; PMCID: PMC9763088; DOI: 10.1007/s12369-022-00954-2.
Abstract
The number of industrial robots and collaborative robots on manufacturing shopfloors has been increasing rapidly over the past decades. However, research on the perception of industrial robots and attributions toward them is scarce, as related work has predominantly explored the effect of robot appearance, movement patterns, or the human-likeness of humanoid robots. The current research examines attributions and perceptions of industrial robots (specifically, articulated collaborative robots) and how the type of movement of such robots impacts human perception and preference. We developed and empirically tested a novel model of robot movement behavior and demonstrate how altering the movement behavior of a robotic arm leads to differing attributions of the robot's human-likeness. These findings have important implications for emerging research on the impact of robot movement on worker perception, preferences, and behavior in industrial settings.
Affiliations
- Damian Hostettler: Institute of Computer Science, University of St. Gallen, Rosenbergstrasse 30, 9000 St. Gallen, Switzerland
- Simon Mayer: Institute of Computer Science, University of St. Gallen, Rosenbergstrasse 30, 9000 St. Gallen, Switzerland
- Christian Hildebrand: Institute of Behavioral Science and Technology, University of St. Gallen, Torstrasse 25, 9000 St. Gallen, Switzerland
24
|
Li M, Guo F, Wang X, Chen J, Ham J. Effects of robot gaze and voice human-likeness on users’ subjective perception, visual attention, and cerebral activity in voice conversations. COMPUTERS IN HUMAN BEHAVIOR 2022. [DOI: 10.1016/j.chb.2022.107645] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022]
|
25
|
Pauketat JV, Anthis JR. Predicting the moral consideration of artificial intelligences. COMPUTERS IN HUMAN BEHAVIOR 2022. [DOI: 10.1016/j.chb.2022.107372] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/14/2023]
|
26
|
Han Z, Yanco HA. Communicating Missing Causal Information to Explain a Robot’s Past Behavior. ACM TRANSACTIONS ON HUMAN-ROBOT INTERACTION 2022. [DOI: 10.1145/3568024] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Robots need to explain their behavior to gain trust. Existing research has focused on explaining a robot's current behavior, yet it remains an open and challenging problem how to explain past actions in an environment that may have changed since the robot acted, leaving critical causal information missing because objects have been moved.
We conducted an experiment (N=665) investigating how a robot could help participants infer the missing causal information by physically replaying its past behavior, giving verbal explanations, and projecting visual information onto the environment. Participants watched videos of the robot replaying its completion of an integrated mobile kitting task. During the replay the objects were already gone, so participants needed to infer where an object had been picked, where a ground obstacle had been, and where the object had been placed.
Based on the results, we recommend combining physical replay with speech and projection indicators (Replay-Project-Say) to help infer all the missing causal information (picking, navigation, and placement) from the robot's past actions. This condition had the best outcome in both task-based metrics (effectiveness, efficiency, and confidence) and team-based metrics (workload and trust). If one's focus is efficiency, we recommend projection markers for navigation inferences and verbal markers for placement inferences.
Collapse
Affiliation(s)
- Zhao Han
- University of Massachusetts Lowell, USA
| | | |
Collapse
|
27
|
Morillo-Mendez L, Schrooten MGS, Loutfi A, Mozos OM. Age-Related Differences in the Perception of Robotic Referential Gaze in Human-Robot Interaction. Int J Soc Robot 2022:1-13. [PMID: 36185773 PMCID: PMC9510350 DOI: 10.1007/s12369-022-00926-6] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 09/08/2022] [Indexed: 11/12/2022]
Abstract
There is increased interest in using social robots to assist older adults during their daily life activities. As social robots are designed to interact with older users, it becomes relevant to study these interactions through the lens of social cognition. Gaze following, the social ability to infer where other people are looking, deteriorates with older age. Therefore, referential gaze from robots might not be an effective social cue for indicating spatial locations to older users. In this study, we explored the performance of older adults, middle-aged adults, and younger controls in a task assisted by the referential gaze of a Pepper robot. We examined age-related differences in task performance and in self-reported social perception of the robot. Our main findings show that referential gaze from a robot benefited task performance, although the magnitude of this facilitation was lower for older participants. Moreover, perceived anthropomorphism of the robot varied less as a result of its referential gaze in older adults. This research supports the idea that social robots, even if limited in their gazing capabilities, can be effectively perceived as social entities. It also suggests that robotic social cues, usually validated with young participants, may be less effective for older adults. Supplementary Information The online version contains supplementary material available at 10.1007/s12369-022-00926-6.
Collapse
Affiliation(s)
- Lucas Morillo-Mendez
- Centre for Applied Autonomous Sensor Systems, Örebro University, Fakultetsgatan 1, Örebro, 702 81 Sweden
| | | | - Amy Loutfi
- Centre for Applied Autonomous Sensor Systems, Örebro University, Fakultetsgatan 1, Örebro, 702 81 Sweden
| | - Oscar Martinez Mozos
- Centre for Applied Autonomous Sensor Systems, Örebro University, Fakultetsgatan 1, Örebro, 702 81 Sweden
| |
Collapse
|
28
|
Brown L, Hamilton J, Han Z, Phan A, Phung T, Hansen E, Tran N, Williams T. Best of Both Worlds? Combining Different Forms of Mixed Reality Deictic Gestures. ACM TRANSACTIONS ON HUMAN-ROBOT INTERACTION 2022. [DOI: 10.1145/3563387] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/14/2022]
Abstract
Mixed reality provides a powerful medium for transparent and effective human-robot communication, especially for robots with significant physical limitations (e.g., those without arms). To enhance the nonverbal capabilities of armless robots, this paper presents two studies that explore two categories of mixed-reality deictic gestures: a virtual arrow positioned over a target referent (a non-ego-sensitive allocentric gesture) and a virtual arm positioned over the gesturing robot (an ego-sensitive allocentric gesture). In Study 1, we explore the trade-offs between these two types of gestures with respect to both objective performance and subjective social perceptions. Our results show fundamentally different task-oriented versus social benefits, with non-ego-sensitive allocentric gestures enabling faster reaction times and higher accuracy, but ego-sensitive gestures enabling higher perceived social presence, anthropomorphism, and likability. In Study 2, we refine our design recommendations by showing that these different gestures should not be viewed as mutually exclusive alternatives: used together, they allow robots to achieve both task-oriented and social benefits.
Collapse
Affiliation(s)
| | | | | | | | | | | | | | - Tom Williams
- All authors are affiliated with the MIRRORLab, Department of Computer Science, Colorado School of Mines, USA
| |
Collapse
|
29
|
Jirak D, Aoki M, Yanagi T, Takamatsu A, Bouet S, Yamamura T, Sandini G, Rea F. Is It Me or the Robot? A Critical Evaluation of Human Affective State Recognition in a Cognitive Task. Front Neurorobot 2022; 16:882483. [PMID: 35978569 PMCID: PMC9377278 DOI: 10.3389/fnbot.2022.882483] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2022] [Accepted: 04/28/2022] [Indexed: 11/13/2022] Open
Abstract
A key goal in human-robot interaction (HRI) is to design scenarios between humanoid robots and humans such that the interaction is perceived as collaborative and natural, yet safe and comfortable for the human. Human skills like verbal and non-verbal communication are essential elements, as humans tend to attribute social behaviors to robots. However, aspects like the uncanny valley and differing levels of technical affinity can impede the success of HRI scenarios, with consequences for the establishment of long-term interaction qualities like trust and rapport. In the present study, we investigate the impact of a humanoid robot on human emotional responses during the performance of a cognitively demanding task. We set up three conditions for the robot with increasing levels of social cue expression in a between-group study design. For the analysis of emotions, we consider eye gaze behavior, arousal-valence for affective states, and the detection of action units. Our analysis reveals that participants show a strong tendency toward positive emotions in the presence of a robot with clear social skills, whereas in the other conditions emotions occur mainly at task onset. Our study also shows how different expression levels influence the analysis of the robot's role in HRI. Finally, we critically discuss the current trend of automated emotion or affective state recognition in HRI and demonstrate issues that have direct consequences for the interpretation of, and therefore claims about, human emotions in HRI studies.
Collapse
Affiliation(s)
- Doreen Jirak
- Robotics, Brain and Cognitive Science Group (RBCS), Istituto Italiano di Tecnologia, Genova, Italy
- *Correspondence: Doreen Jirak
| | - Motonobu Aoki
- Robotics, Brain and Cognitive Science Group (RBCS), Istituto Italiano di Tecnologia, Genova, Italy
- Department of Computer Science, Bioengineering, Robotics and Systems Engineering, University of Genoa, Genova, Italy
- Mobility and AI Laboratory, Research Division, Nissan Motor Co., Ltd., Atsugi, Japan
| | - Takura Yanagi
- Mobility and AI Laboratory, Research Division, Nissan Motor Co., Ltd., Atsugi, Japan
| | - Atsushi Takamatsu
- Mobility and AI Laboratory, Research Division, Nissan Motor Co., Ltd., Atsugi, Japan
| | - Stephane Bouet
- Mobility and AI Laboratory, Research Division, Nissan Motor Co., Ltd., Atsugi, Japan
| | - Tomohiro Yamamura
- Mobility and AI Laboratory, Research Division, Nissan Motor Co., Ltd., Atsugi, Japan
| | - Giulio Sandini
- Robotics, Brain and Cognitive Science Group (RBCS), Istituto Italiano di Tecnologia, Genova, Italy
| | - Francesco Rea
- Robotics, Brain and Cognitive Science Group (RBCS), Istituto Italiano di Tecnologia, Genova, Italy
| |
Collapse
|
30
|
Okafuji Y, Ozaki Y, Baba J, Nakanishi J, Ogawa K, Yoshikawa Y, Ishiguro H. Behavioral Assessment of a Humanoid Robot When Attracting Pedestrians in a Mall. Int J Soc Robot 2022; 14:1731-1747. [PMID: 35915857 PMCID: PMC9331028 DOI: 10.1007/s12369-022-00907-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 06/22/2022] [Indexed: 11/26/2022]
Abstract
Research is currently being conducted on the use of robots as human labor support technology. In particular, the service industry needs to allocate more manpower, and it will be important for robots to support people. This study focuses on using a humanoid robot as a social service robot to convey information in a shopping mall, and analyzes the types of robot behaviors suited to that role. Two things must occur for the information to be conveyed: pedestrians must stop in front of the robot, and the robot must keep them engaged. For this purpose, three types of autonomous robot behaviors were analyzed and compared in the experiment: greeting, in-trouble, and dancing behaviors. After interactions were attempted with more than 5,000 pedestrians, this study revealed that the in-trouble behavior makes pedestrians stop more often and stay longer. In addition, to evaluate the effectiveness of the robot in a real environment, a comparison between the three robot behaviors and human advertisers revealed that (1) the greeting and dancing behaviors perform comparably to the humans, and (2) the in-trouble behavior outperforms all human advertisers at providing information. These findings demonstrate that robot performance is comparable to that of humans in providing information in a limited environment; service robots may therefore be expected to perform well as a labor support technology in the real world.
Collapse
Affiliation(s)
- Yuki Okafuji
- AI Lab, CyberAgent, Inc., Shibuya, Tokyo 150-6121, Japan
- Graduate School of Engineering Science, Osaka University, Toyonaka, Osaka 560-0043, Japan
- School of Information Science and Engineering, Ritsumeikan University, Kusatsu, Shiga 525-8577, Japan
| | - Yasunori Ozaki
- AI Lab, CyberAgent, Inc., Shibuya, Tokyo 150-6121, Japan
- Graduate School of Engineering Science, Osaka University, Toyonaka, Osaka 560-0043, Japan
- School of Information Science and Engineering, Ritsumeikan University, Kusatsu, Shiga 525-8577, Japan
| | - Jun Baba
- AI Lab, CyberAgent, Inc., Shibuya, Tokyo 150-6121, Japan
- Graduate School of Engineering Science, Osaka University, Toyonaka, Osaka 560-0043, Japan
| | - Junya Nakanishi
- Graduate School of Engineering Science, Osaka University, Toyonaka, Osaka 560-0043, Japan
| | - Kohei Ogawa
- Graduate School of Engineering, Nagoya University, Nagoya, Aichi 464-8603, Japan
| | - Yuichiro Yoshikawa
- Graduate School of Engineering Science, Osaka University, Toyonaka, Osaka 560-0043, Japan
| | - Hiroshi Ishiguro
- Graduate School of Engineering Science, Osaka University, Toyonaka, Osaka 560-0043, Japan
| |
Collapse
|
31
|
Ahmad MI, Refik R. “No Chit Chat!” A Warning From a Physical Versus Virtual Robot Invigilator: Which Matters Most? Front Robot AI 2022; 9:908013. [PMID: 35937616 PMCID: PMC9355029 DOI: 10.3389/frobt.2022.908013] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2022] [Accepted: 06/14/2022] [Indexed: 11/25/2022] Open
Abstract
Past work has not considered social robots as proctors or monitors to prevent cheating or maintain discipline in the context of exam invigilation with adults. Nor has prior work investigated the invigilation role of a robot presented in two different embodiments (physical vs. virtual). We demonstrate a system that enables a robot (physical or virtual) to act as an invigilator, and we deploy an exam setup in which two participants complete a programming task. We conducted two studies (an online video-based survey and an in-person evaluation) to understand participants' perceptions of the invigilator robot in the two embodiments. Additionally, we investigated whether participants showed cheating behaviours in one condition more than the other. Participants' ratings did not differ significantly between embodiments, but participants were more talkative in the virtual robot condition than in the physical robot condition. These findings are promising and call for further research into the invigilation role of social robots in more subtle and complex exam-like settings.
Collapse
|
32
|
Kubota T, Ogawa K, Yoshikawa Y, Ishiguro H. Alignment of the attitude of teleoperators with that of a semi-autonomous android. Sci Rep 2022; 12:10473. [PMID: 35760935 PMCID: PMC9237015 DOI: 10.1038/s41598-022-13829-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2021] [Accepted: 05/27/2022] [Indexed: 11/21/2022] Open
Abstract
Studies on social robots that can communicate with humans are increasingly important. In particular, semi-autonomous robots have shown potential for practical applications in which robot autonomy and human teleoperation are jointly used to accomplish difficult tasks. However, it is unknown how the attitude represented in the autonomous behavior of such robots affects their teleoperators. Previous studies reported that when humans play a particular role, their attitudes align with that role. The teleoperators of semi-autonomous robots also play the role given to the robots and may assimilate the attitudes the robots autonomously express. We hypothesized that the attitude of teleoperators aligns with that of the robot through teleoperation. To verify this, we conducted an experiment with a condition under which a participant operated a part of the body of an android robot that autonomously expressed a preferential attitude toward a painting, and a condition under which they did not. Experimental results demonstrated that the preferential attitude of participants who teleoperated the android aligned with that of the robot statistically significantly more than that of those who did not, supporting our hypothesis. This is a novel finding on attitude change in teleoperators of semi-autonomous robots and can inform the implementation of effective human-robot collaboration systems.
Collapse
Affiliation(s)
- Tomonori Kubota
- Department of Systems Innovation, Osaka University, Toyonaka, Osaka, 560-8531, Japan
- JSPS Research Fellow, 8 Ichiban-cho, Chiyoda-ku, Tokyo, 102-8472, Japan
- Department of Information and Communication Engineering, Nagoya University, Nagoya, Aichi, 464-8603, Japan
| | - Kohei Ogawa
- Department of Information and Communication Engineering, Nagoya University, Nagoya, Aichi, 464-8603, Japan
| | - Yuichiro Yoshikawa
- Department of Systems Innovation, Osaka University, Toyonaka, Osaka, 560-8531, Japan
| | - Hiroshi Ishiguro
- Department of Systems Innovation, Osaka University, Toyonaka, Osaka, 560-8531, Japan
| |
Collapse
|
33
|
Matarese M, Rea F, Sciutti A. Perception is Only Real When Shared: A Mathematical Model for Collaborative Shared Perception in Human-Robot Interaction. Front Robot AI 2022; 9:733954. [PMID: 35783020 PMCID: PMC9240641 DOI: 10.3389/frobt.2022.733954] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2021] [Accepted: 05/30/2022] [Indexed: 11/15/2022] Open
Abstract
Partners have to build a shared understanding of their environment in everyday collaborative tasks by aligning their perceptions and establishing a common ground. This is one of the aims of shared perception: revealing characteristics of the individual perception to others with whom we share the same environment. In this regard, social cognitive processes, such as joint attention and perspective-taking, form a shared perception. From a Human-Robot Interaction (HRI) perspective, robots would benefit from the ability to establish shared perception with humans and a common understanding of the environment with their partners. In this work, we wanted to assess whether a robot, by considering the differences in perception between itself and its partner, could be more effective in its helping role, and to what extent this improves task completion and the interaction experience. For this purpose, we designed a mathematical model for collaborative shared perception that aims to maximise the collaborators' knowledge of the environment when there are asymmetries in perception. Moreover, we instantiated and tested our model in a real HRI scenario. The experiment consisted of a cooperative game in which participants had to build towers of Lego bricks, while the robot took the role of a suggester. In particular, we conducted experiments using two different robot behaviours. In one condition, based on shared perception, the robot gave suggestions by considering the partner's point of view and using its inference about their common ground to select the most informative hint. In the other condition, the robot simply indicated the brick that would have yielded a higher score from its individual perspective. The adoption of shared perception in the selection of suggestions led to better performance in all instances of the game where the visual information was not a priori common to both agents. However, the subjective evaluation of the robot's behaviour did not change between conditions.
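To make the idea concrete, here is a minimal sketch of how an informativeness-based suggestion policy of this kind could look, assuming a simplified setting in which bricks have known task values and each agent's visibility is a set. The function, variable names, and scoring rule are hypothetical illustrations, not the authors' actual model.

```python
# Illustrative sketch only: the paper's actual model is not reproduced here.
# Each candidate hint is scored by how much it would add to the partner's
# knowledge of the environment, given asymmetric visibility.

def select_hint(bricks, robot_sees, human_sees, value):
    """Pick the brick whose disclosure is most informative to the human.

    bricks:      iterable of brick identifiers
    robot_sees:  set of bricks visible to the robot
    human_sees:  set of bricks visible to the human
    value:       dict mapping brick -> task value (e.g., game score)
    """
    best_brick, best_gain = None, float("-inf")
    for b in bricks:
        if b not in robot_sees:
            continue  # the robot cannot suggest what it does not perceive
        # A hint is informative in proportion to the brick's task value,
        # weighted up when the human currently lacks that information.
        novelty = 0.0 if b in human_sees else 1.0
        gain = value[b] * (0.5 + 0.5 * novelty)  # shared info keeps some value
        if gain > best_gain:
            best_brick, best_gain = b, gain
    return best_brick

# Example: "red" is high-value but already visible to the human, while
# "blue" is slightly lower-value yet unknown to them.
hint = select_hint(
    bricks=["red", "blue", "green"],
    robot_sees={"red", "blue"},
    human_sees={"red", "green"},
    value={"red": 10, "blue": 8, "green": 2},
)
print(hint)  # -> "blue": lower score per se, but more informative to the partner
```

This mirrors the contrast tested in the experiment: the egocentric baseline would simply return the highest-value visible brick ("red"), while the shared-perception policy prefers the hint the partner cannot obtain on their own.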
Collapse
Affiliation(s)
- Marco Matarese
- DIBRIS Department, University of Genoa, Genoa, Italy
- RBCS Unit, Italian Institute of Technology, Genoa, Italy
- *Correspondence: Marco Matarese
| | - Francesco Rea
- RBCS Unit, Italian Institute of Technology, Genoa, Italy
| | | |
Collapse
|
34
|
Tidoni E, Holle H, Scandola M, Schindler I, Hill L, Cross ES. Human but not robotic gaze facilitates action prediction. iScience 2022; 25:104462. [PMID: 35707718 PMCID: PMC9189121 DOI: 10.1016/j.isci.2022.104462] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2022] [Revised: 05/05/2022] [Accepted: 05/17/2022] [Indexed: 01/08/2023] Open
Abstract
Do people ascribe intentions to humanoid robots as they would to humans or to non-human-like animated objects? In six experiments, we compared people's ability to extract non-mentalistic (i.e., where an agent is looking) and mentalistic (i.e., what an agent is looking at; what an agent is going to do) information from gaze and directional cues performed by humans, human-like robots, and a non-human-like object. People were faster to infer the mental content of human agents than of robotic agents. Furthermore, although the absence of differences in control conditions rules out the use of non-mentalizing strategies, the human-like appearance of non-human agents may engage mentalizing processes to solve the task. Overall, the results suggest that human-like robotic actions may be processed differently from humans' and objects' behavior. These findings inform our understanding of how an object's physical features trigger mentalizing abilities, and of their relevance for human-robot interaction. Highlights: People differently ascribe mental content to human-like and non-human-like agents. A human-like shape may automatically engage mentalizing processes. Human actions are interpreted faster than non-human actions.
Collapse
|
35
|
Norton A, Admoni H, Crandall J, Fitzgerald T, Gautam A, Goodrich M, Saretsky A, Scheutz M, Simmons R, Steinfeld A, Yanco H. Metrics for Robot Proficiency Self-Assessment and Communication of Proficiency in Human-Robot Teams. ACM TRANSACTIONS ON HUMAN-ROBOT INTERACTION 2022. [DOI: 10.1145/3522579] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Abstract
As development of robots with the ability to self-assess their proficiency for accomplishing tasks continues to grow, metrics are needed to evaluate the characteristics and performance of these robot systems and their interactions with humans. This proficiency-based human-robot interaction (HRI) use case can occur before, during, or after the performance of a task. This paper presents a set of metrics for this use case, driven by a four-stage cyclical interaction flow: 1) robot self-assessment of proficiency (RSA), 2) robot communication of proficiency to the human (RCP), 3) human understanding of proficiency (HUP), and 4) robot perception of the human's intentions, values, and assessments (RPH). This effort leverages work from related fields including explainability, transparency, and introspection, repurposing their metrics in the context of proficiency self-assessment. Considerations of temporal level (a priori, in situ, and post hoc) on the metrics are reviewed, as are the connections between metrics within or across stages in the proficiency-based interaction flow. This paper provides a common framework and language for metrics to enhance the development and measurement of HRI in the field of proficiency self-assessment.
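As a reading aid, the four-stage cycle can be sketched as a tiny state machine; the enum members below follow the abbreviations in the abstract, while the class structure itself is our own illustration, not an API from the paper.

```python
# Minimal sketch of the cyclical interaction flow RSA -> RCP -> HUP -> RPH,
# plus the three temporal levels at which metrics may be taken.
from enum import Enum

class Stage(Enum):
    RSA = "robot self-assessment of proficiency"
    RCP = "robot communication of proficiency to the human"
    HUP = "human understanding of proficiency"
    RPH = "robot perception of the human"

class TemporalLevel(Enum):
    A_PRIORI = "before the task"
    IN_SITU = "during the task"
    POST_HOC = "after the task"

_CYCLE = [Stage.RSA, Stage.RCP, Stage.HUP, Stage.RPH]

def next_stage(stage: Stage) -> Stage:
    """Advance one step around the cyclical interaction flow."""
    return _CYCLE[(_CYCLE.index(stage) + 1) % len(_CYCLE)]

# Walk one full cycle starting from self-assessment:
s = Stage.RSA
for _ in range(4):
    print(s.name, "->", next_stage(s).name)
    s = next_stage(s)
```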
Collapse
|
36
|
Abstract
The anthropomorphization of human-robot interactions is a fundamental aspect of the design of social robotics applications. This article describes how an interaction model based on multimodal signs (visual, auditory, tactile, proxemic, and others) can improve communication between humans and robots. We examined and appropriately filtered all the robot sensory data needed to realize our interaction model. We also paid particular attention to communication on the backchannel, making it both bidirectional and evident through auditory and visual signals. Our model, based on a task-level architecture, was integrated into an application called W@ICAR, which proved efficient and intuitive even for people unaccustomed to interacting with a robot. It has been validated from both a functional and a user experience point of view, showing positive results: both the pragmatic and the hedonic estimators showed that many users particularly appreciated the application. The model component has been implemented through Python scripts in the Robot Operating System (ROS) environment.
Collapse
|
37
|
Would You Trust Driverless Service? Formation of Pedestrian's Trust and Attitude Using Non-Verbal Social Cues. SENSORS 2022; 22:s22072809. [PMID: 35408424 PMCID: PMC9002600 DOI: 10.3390/s22072809] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/07/2022] [Revised: 03/25/2022] [Accepted: 04/01/2022] [Indexed: 02/01/2023]
Abstract
Despite the widespread application of autonomous vehicles (AVs) to various services, relatively little research has been carried out on pedestrian-AV interaction and trust within the context of services provided by AVs. This study explores communication design strategies that promote a pedestrian's trust and positive attitude toward driverless services within the context of pedestrian-AV interaction using non-verbal social cues. An empirical study was conducted in an experimental VR environment to measure participants' intimacy, trust, and brand attitude toward an AV. Their social interaction experiences were further explored through semi-structured interviews. The study found an interaction effect of social cues and revealed that brand attitude was formed by the direct effects of intimacy and trust as well as by the indirect effect of intimacy mediated through trust. Furthermore, 'Conceptual Definition of Space' was identified as generating differences in the interplay among intimacy, trust, and brand attitude according to social cues. Quantitative and qualitative results were synthesized to discuss implications for the service context, and practical implications were addressed by suggesting specific design strategies for utilizing the sociality of AVs.
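The direct and indirect effects described here correspond to the standard single-mediator decomposition; the variable roles below follow the abstract (intimacy as predictor, trust as mediator, brand attitude as outcome), though the paper's exact model specification may differ.

```latex
% Standard single-mediator decomposition; variable roles follow the abstract,
% not necessarily the paper's exact model.
\begin{align*}
\mathrm{Trust}    &= i_1 + a\,\mathrm{Intimacy} + e_1 \\
\mathrm{Attitude} &= i_2 + c'\,\mathrm{Intimacy} + b\,\mathrm{Trust} + e_2 \\
c &= c' + ab \qquad \text{(total effect = direct + indirect via trust)}
\end{align*}
```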
Collapse
|
38
|
Onnasch L, Hildebrandt CL. Impact of Anthropomorphic Robot Design on Trust and Attention in Industrial Human-Robot Interaction. ACM TRANSACTIONS ON HUMAN-ROBOT INTERACTION 2022. [DOI: 10.1145/3472224] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
The application of anthropomorphic features to robots is generally considered beneficial for human-robot interaction (HRI). Although previous research has mainly focused on social robots, the phenomenon is gaining increasing attention in industrial human-robot interaction as well. In this study, the impact of the anthropomorphic design of a collaborative industrial robot on the dynamics of trust and visual attention allocation was examined. Participants interacted with a robot that was either anthropomorphically or non-anthropomorphically designed. Unexpectedly, attribute-based trust measures revealed no beneficial effect of anthropomorphism, and even a negative impact on the perceived reliability of the robot. Trust behavior was not significantly affected by anthropomorphic robot design during faultless interactions, but decreased relatively more steeply after participants experienced a failure of the robot. With regard to attention allocation, the study clearly reveals a distracting effect of anthropomorphic robot design. The results emphasize that anthropomorphism might not be an appropriate feature in industrial HRI, as it not only failed to show positive effects on trust but also distracted participants from relevant task areas, which might be a significant drawback with regard to occupational safety in HRI.
Collapse
Affiliation(s)
- Linda Onnasch
- Engineering Psychology, Humboldt-Universität zu Berlin, Berlin, Germany
| | | |
Collapse
|
39
|
On multi-human multi-robot remote interaction: a study of transparency, inter-human communication, and information loss in remote interaction. SWARM INTELLIGENCE 2022. [DOI: 10.1007/s11721-021-00209-2] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
40
|
Klüber K, Onnasch L. Appearance is not everything - Preferred feature combinations for care robots. COMPUTERS IN HUMAN BEHAVIOR 2022. [DOI: 10.1016/j.chb.2021.107128] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
|
41
|
Obo T, Takizawa K. Analysis of Timing and Effect of Visual Cue on Turn-Taking in Human-Robot Interaction. JOURNAL OF ROBOTICS AND MECHATRONICS 2022. [DOI: 10.20965/jrm.2022.p0055] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
This paper presents a communication robot system with a simple LED display for conveying the timing of turn-taking in human-robot interaction. Human-like conversation with non-verbal information, such as gestures, facial expressions, tone of voice, and eye contact, enables more natural communication. If robots could use such verbal and non-verbal communication skills, they could establish social relations with humans. The timing and time intervals of turn-taking in human communication are important non-verbal cues for efficiently conveying messages and sharing opinions. In this study, we present experimental results and discuss the effect of response timing on turn-taking in communication between a person and a robot.
Collapse
|
42
|
Ognibene D, Foulsham T, Marchegiani L, Farinella GM. Editorial: Active Vision and Perception in Human-Robot Collaboration. Front Neurorobot 2022; 16:848065. [PMID: 35211002 PMCID: PMC8860825 DOI: 10.3389/fnbot.2022.848065] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2022] [Accepted: 01/12/2022] [Indexed: 11/13/2022] Open
Affiliation(s)
- Dimitri Ognibene
- Department of Psychology, Università degli Studi di Milano-Bicocca, Milan, Italy
- School of Computer Science and Electronic Engineering, University of Essex, Colchester, United Kingdom
| | - Tom Foulsham
- Department of Psychology, University of Essex, Colchester, United Kingdom
| | | | | |
Collapse
|
43
|
Kathleen B, Víctor FC, Amandine M, Aurélie C, Elisabeth P, Michèle G, Rachid A, Hélène C. Addressing joint action challenges in HRI: Insights from psychology and philosophy. Acta Psychol (Amst) 2022; 222:103476. [PMID: 34974283 DOI: 10.1016/j.actpsy.2021.103476] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2021] [Revised: 11/19/2021] [Accepted: 12/15/2021] [Indexed: 11/24/2022] Open
Abstract
The vast expansion of research in human-robot interaction (HRI) over the last decades has been accompanied by the design of increasingly skilled robots for engaging in joint actions with humans. However, these advances have encountered significant challenges in ensuring fluent interactions and sustaining human motivation through the different steps of joint action. After exploring the current literature on joint action in HRI, leading to a more precise definition of these challenges, the present article proposes some perspectives borrowed from psychology and philosophy that show the key role of communication in human interactions. From mutual recognition between individuals to the expression of commitment and social expectations, we argue that communicative cues can facilitate coordination, prediction, and motivation in the context of joint action. The description of several notions thus suggests that certain communicative capacities can be implemented in the context of joint action for HRI, leading to an integrated perspective on robotic communication.
Collapse
Affiliation(s)
- Belhassein Kathleen
- CLLE, UMR5263, Toulouse University, CNRS, UT2J, France; LAAS-CNRS, UPR8001, Toulouse University, CNRS, France
| | | | | | | | | | | | - Alami Rachid
- LAAS-CNRS, UPR8001, Toulouse University, CNRS, France
| | - Cochet Hélène
- CLLE, UMR5263, Toulouse University, CNRS, UT2J, France
| |
Collapse
|
44
|
Marge M, Espy-Wilson C, Ward NG, Alwan A, Artzi Y, Bansal M, Blankenship G, Chai J, Daumé H, Dey D, Harper M, Howard T, Kennington C, Kruijff-Korbayová I, Manocha D, Matuszek C, Mead R, Mooney R, Moore RK, Ostendorf M, Pon-Barry H, Rudnicky AI, Scheutz M, Amant RS, Sun T, Tellex S, Traum D, Yu Z. Spoken language interaction with robots: Recommendations for future research. COMPUT SPEECH LANG 2022. [DOI: 10.1016/j.csl.2021.101255] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/28/2023]
|
45
|
Chen J, Ding Y, Xin B, Yang Q, Fang H. A Unifying Framework for Human-Agent Collaborative Systems-Part I: Element and Relation Analysis. IEEE TRANSACTIONS ON CYBERNETICS 2022; 52:138-151. [PMID: 32191906 DOI: 10.1109/tcyb.2020.2977602] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Human-agent collaboration (HAC) is a promising research topic whose wide-ranging applications and future scenarios have attracted considerable attention. In a broad sense, the HAC system (HACS) can be broken down into six elements: "Man," "Agents," "Goal," "Network," "Environment," and "Tasks." By merging these elements and building a relation graph, this article proposes a systematic analysis framework for HACS and attempts a comprehensive analysis of these elements and their relationships. We coin the abbreviation "MAGNET" to name the framework, stringing together the initials of the six terms above. The framework provides novel insights for analyzing various HAC patterns and integrates different types of HACSs in a unifying way. The presentation of the HACS framework is divided into two parts. This article, Part I, presents the systematic analysis framework; Part II proposes a normalized two-stage top-level design procedure for designing an HACS from the perspective of MAGNET.
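As a rough illustration of the element-and-relation-graph idea, the sketch below encodes the six MAGNET elements as graph nodes. The element names come from the abstract; the specific edges are placeholder examples, not the relations analyzed in the article.

```python
# Toy rendering of the six MAGNET elements as nodes in a relation graph.
# The edges below are illustrative placeholders only.
ELEMENTS = ["Man", "Agents", "Goal", "Network", "Environment", "Tasks"]

# adjacency as a dict of sets: an edge means "these elements interact"
relations: dict[str, set[str]] = {e: set() for e in ELEMENTS}

def relate(a: str, b: str) -> None:
    """Record a (symmetric) relation between two elements."""
    relations[a].add(b)
    relations[b].add(a)

relate("Man", "Agents")         # e.g., human operators supervising agents
relate("Agents", "Tasks")       # agents execute tasks
relate("Tasks", "Goal")         # tasks serve the shared goal
relate("Network", "Agents")     # agents coordinate over the network
relate("Environment", "Tasks")  # tasks unfold in an environment

for element in ELEMENTS:
    print(f"{element}: {sorted(relations[element])}")
```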
Collapse
|
46
|
Spatola N, Huguet P. Cognitive Impact of Anthropomorphized Robot Gaze. ACM TRANSACTIONS ON HUMAN-ROBOT INTERACTION 2021. [DOI: 10.1145/3459994] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
Attentional control is not a fixed function and can be strongly affected by the presence of other human beings or humanoid robots. In two studies, this phenomenon was investigated with an exclusive focus on robot gaze as a potential determinant of attentional control, along with the role of participants' anthropomorphic inferences about the robot. In Study 1, we expected and found higher interference in trials including a direct robot gaze compared to an averted gaze on a task measuring attentional control (the Eriksen flanker task). Participants' anthropomorphic inferences about the social robot mediated this interference. In Study 2, we found that averted gazes congruent with the correct answer (same task as Study 1) facilitated performance. Again, this effect was mediated by anthropomorphic inferences. These two studies show the impact of anthropomorphized robot gaze on human cognitive processing, especially attentional control, and open new avenues of research in social robotics.
Collapse
Affiliation(s)
- Nicolas Spatola
- Istituto Italiano di Tecnologia, Social Cognition in Human-Robot Interaction, 16152 Genova, Italy
| | - Pascal Huguet
- Université Clermont Auvergne et CNRS, LAPSCO, UMR 6024 63000 Clermont-Ferrand, France
| |
Collapse
|
47
|
Improving HRI with Force Sensing. MACHINES 2021. [DOI: 10.3390/machines10010015] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
HRI is an important field of research for a future society in which robots and humans live together. While most human-robot interaction (HRI) studies focus on appearance and dialogue, touch communication has received little attention despite its important role in human-human communication. This paper investigates how and where humans touch an inorganic, non-zoomorphic robot arm. Based on these results, we install touch sensors on the robot arm and conduct experiments to collect data on users' impressions of the robot when touching it. Our results suggest two main things. First, touch gestures can be collected with two sensors, and the collected data can be analyzed using machine learning to classify the gestures. Second, communication between humans and robots using touch can improve the user's impression of the robot.
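A pipeline of the kind described here (features extracted from two touch sensors feeding an off-the-shelf classifier) might look roughly like the following sketch. The feature set, gesture labels, and synthetic data are our own assumptions rather than the authors' setup.

```python
# Minimal sketch: summarize two pressure time series into features, then
# train a generic classifier. All data below is synthetic and hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def features(sensor_a: np.ndarray, sensor_b: np.ndarray) -> np.ndarray:
    """Fixed-length feature vector from two touch-sensor recordings."""
    return np.array([
        sensor_a.mean(), sensor_a.max(), sensor_a.std(),
        sensor_b.mean(), sensor_b.max(), sensor_b.std(),
    ])

rng = np.random.default_rng(0)

def synth(kind: str):
    """Generate a fake 100-sample recording per sensor for one gesture."""
    if kind == "pat":    # short, strong bursts
        base = rng.normal(0.8, 0.3, size=(2, 100)).clip(0)
    else:                # "stroke": long, gentle contact
        base = rng.normal(0.3, 0.05, size=(2, 100)).clip(0)
    return base[0], base[1]

X, y = [], []
for label in ["pat", "stroke"]:
    for _ in range(50):
        a, b = synth(label)
        X.append(features(a, b))
        y.append(label)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
a, b = synth("pat")
print(clf.predict([features(a, b)]))  # expected: ['pat']
```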
Collapse
|
48
|
Zonca J, Folsø A, Sciutti A. The role of reciprocity in human-robot social influence. iScience 2021; 24:103424. [PMID: 34877490 PMCID: PMC8633024 DOI: 10.1016/j.isci.2021.103424] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2021] [Revised: 10/11/2021] [Accepted: 11/05/2021] [Indexed: 11/19/2022] Open
Abstract
Humans are constantly influenced by others' behavior and opinions. Importantly, social influence among humans is shaped by reciprocity: we are more inclined to follow the advice of someone who has taken our opinions into consideration. In the current work, we investigate whether reciprocal social influence can emerge while interacting with a social humanoid robot. In a joint task, a human participant and a humanoid robot made perceptual estimates and could then overtly modify them after observing the partner's judgment. Results show that endowing the robot with the ability to express and modulate its own level of susceptibility to the human's judgments was a double-edged sword. On the one hand, participants lost confidence in the robot's competence when the robot followed their advice; on the other hand, participants were unwilling to disclose their lack of confidence to the susceptible robot, suggesting the emergence of reciprocal mechanisms of social influence supporting human-robot collaboration. Highlights: If a social robot is susceptible to our advice, we lose confidence in it. However, the robot's susceptibility does not deteriorate social influence. These effects do not appear during interaction with a computer. Susceptible robots can promote reciprocity but also hinder social learning.
Collapse
Affiliation(s)
- Joshua Zonca
- Cognitive Architecture for Collaborative Technologies (CONTACT) Unit, Italian Institute of Technology, Via Enrico Melen, 83, 16152 Genoa, GE, Italy
- Corresponding author
| | - Anna Folsø
- Department of Informatics, Bioengineering, Robotics and Systems Engineering, University of Genoa, 16145 Genoa, Italy
| | - Alessandra Sciutti
- Cognitive Architecture for Collaborative Technologies (CONTACT) Unit, Italian Institute of Technology, Via Enrico Melen, 83, 16152 Genoa, GE, Italy
| |
Collapse
|
49
|
Eldardeer O, Gonzalez-Billandon J, Grasse L, Tata M, Rea F. A Biological Inspired Cognitive Framework for Memory-Based Multi-Sensory Joint Attention in Human-Robot Interactive Tasks. Front Neurorobot 2021; 15:648595. [PMID: 34887738 PMCID: PMC8650613 DOI: 10.3389/fnbot.2021.648595] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2020] [Accepted: 09/10/2021] [Indexed: 11/26/2022] Open
Abstract
One of the fundamental prerequisites for effective collaboration between interactive partners is the mutual sharing of the attentional focus on the same perceptual events, referred to as joint attention. In the psychological, cognitive, and social sciences, its defining elements have been widely pinpointed. The field of human-robot interaction has also extensively exploited joint attention, which has been identified as a fundamental prerequisite for proficient human-robot collaboration. However, joint attention between robots and human partners is often encoded in prefixed robot behaviours that do not fully address the dynamics of interactive scenarios. We provide autonomous attentional behaviour for robots based on multi-sensory perception that robustly relocates the focus of attention onto the same targets the human partner attends to. Further, we investigated how such joint attention between a human and a robot partner improved with a new biologically inspired, memory-based attention component. We assessed the model with the humanoid robot iCub performing a joint task with a human partner in a real-world unstructured scenario. The model showed robust performance in capturing the stimulation, making a localisation decision in the right time frame, and then executing the right action. We then compared the attention performance of the robot against human performance when stimulated from the same source across different modalities (audio-visual and audio only). The comparison showed that the model behaves with temporal dynamics compatible with those of humans, providing an effective solution for memory-based joint attention in real-world unstructured environments. Further, we analyzed the localisation performance (reaction time and accuracy); the results showed that the robot performed better in the audio-visual condition than in the audio-only condition. The robot's performance in the audio-visual condition was broadly comparable with that of the human participants, whereas it was less efficient in audio-only localisation. After a detailed analysis of the internal components of the architecture, we conclude that the differences in performance are due to ego-noise, which significantly affects audio-only localisation.
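Although the paper's architecture is far richer, the advantage of audio-visual over audio-only localisation can be illustrated with a simple precision-weighted fusion of two noisy angle estimates, with ego-noise modeled as extra variance on the audio channel. All numbers below are hypothetical.

```python
# Not the architecture from the paper: a compact illustration of why fusing
# audio with vision beats audio alone when audio variance is inflated by
# ego-noise (motors, fans).
import numpy as np

def fuse(angles, variances):
    """Precision-weighted fusion of independent Gaussian angle estimates."""
    w = 1.0 / np.asarray(variances)
    mean = float(np.sum(w * np.asarray(angles)) / np.sum(w))
    return mean, float(1.0 / np.sum(w))

true_angle = 20.0   # degrees, hypothetical source direction
audio_var = 64.0    # inflated by ego-noise
vision_var = 4.0

rng = np.random.default_rng(1)
audio = true_angle + rng.normal(0, np.sqrt(audio_var))
vision = true_angle + rng.normal(0, np.sqrt(vision_var))

est_av, var_av = fuse([audio, vision], [audio_var, vision_var])
print(f"audio only  : {audio:5.1f} deg (var {audio_var})")
print(f"audio+vision: {est_av:5.1f} deg (var {var_av:.1f})")
# The fused variance (~3.8 here) is always below the best single modality.
```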
Collapse
Affiliation(s)
- Omar Eldardeer
- Dipartimento di Informatica, Bioingegneria, Robotica e Ingegneria dei Sistemi, Università di Genova, Genova, Italy
- Robotics, Brain, and Cognitive Science Department, Istituto Italiano di Tecnologia, Genova, Italy
| | - Jonas Gonzalez-Billandon
- Dipartimento di Informatica, Bioingegneria, Robotica e Ingegneria dei Sistemi, Università di Genova, Genova, Italy
- COgNiTive Architecture for Collaborative Technologies, Istituto Italiano di Tecnologia, Genova, Italy
| | - Lukas Grasse
- Neuroscience/CCBN Department, The University of Lethbridge, Lethbridge, AB, Canada
| | - Matthew Tata
- Neuroscience/CCBN Department, The University of Lethbridge, Lethbridge, AB, Canada
| | - Francesco Rea
- Robotics, Brain, and Cognitive Science Department, Istituto Italiano di Tecnologia, Genova, Italy
| |
Collapse
|
50
|
Tatarian K, Stower R, Rudaz D, Chamoux M, Kappas A, Chetouani M. How does Modality Matter? Investigating the Synthesis and Effects of Multi-modal Robot Behavior on Social Intelligence. Int J Soc Robot 2021. [DOI: 10.1007/s12369-021-00839-w] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|