1
Shiomi M, Sumioka H. Differential effects of robot's touch on perceived emotions and the feeling of Kawaii in adults and seniors. Sci Rep 2025; 15:7590. [PMID: 40038425 PMCID: PMC11880510 DOI: 10.1038/s41598-025-92172-9] [Received: 09/17/2024] [Accepted: 02/25/2025] [Indexed: 03/06/2025]
Abstract
Using social robots is a promising approach for supporting senior citizens in super-aging societies. Effective emotional expression and cuteness are essential design factors for achieving socially acceptable robots. Past studies have reported that robot-initiated touch of interaction partners enhances these two factors in adults, but its effects in seniors remain unknown. Therefore, in this study, we investigated the effects of robot-initiated touch behaviors on perceived emotions (valence and arousal) and the feeling of kawaii, a common Japanese adjective meaning cute, lovely, or adorable. In experiments with Japanese participants (adults: 21-49 years; seniors: 65-79 years) using a baby-type robot, our results showed that the robot's touch significantly increased perceived valence regardless of the expressed emotion and the participants' age. Our results also showed that the robot's touch was effective in adults for arousal and the feeling of kawaii, but not in seniors. We discuss these differential effects between adults and seniors by focusing on emotional processing in the latter. The findings have implications for designing social robots capable of physical interaction with seniors.
Affiliation(s)
- Masahiro Shiomi
- Department of Interaction Science Laboratories, Deep Interaction Laboratory Group, Advanced Telecommunications Research Institute International, Kyoto, 619-0288, Japan.
- Hidenobu Sumioka
- Presence Media Research Group, Hiroshi Ishiguro Laboratories, Deep Interaction Laboratory Group, Advanced Telecommunications Research Institute International, Kyoto, 619-0288, Japan
2
Gao W, Jiang T, Zhai W, Zha F. An Emotion Recognition Method for Humanoid Robot Body Movements Based on a PSO-BP-RMSProp Neural Network. Sensors (Basel) 2024; 24:7227. [PMID: 39599003 PMCID: PMC11598485 DOI: 10.3390/s24227227] [Received: 10/06/2024] [Revised: 11/02/2024] [Accepted: 11/08/2024] [Indexed: 11/29/2024]
Abstract
This paper explores a computational model that connects a robot's emotional body movements with human emotion, and proposes an emotion recognition method for humanoid robot body movements. Little prior research has directly addressed the recognition of robot bodily expression from this perspective. The robot's body movements are designed by imitating human emotional body movements; subjective questionnaires and statistical methods are used to analyze the characteristics of users' perceptions and to select appropriate designs. An emotional body movement recognition model using a BP neural network (the EBMR-BP model) is proposed, in which the selected robot body movements and corresponding emotions serve as inputs and outputs, and its topological architecture, encoding rules, and training process are described in detail. The PSO method and the RMSProp algorithm are then introduced to optimize the EBMR-BP method, yielding the PSO-BP-RMSProp model. Experiments and comparisons on emotion recognition of a robot's body movements verify the feasibility and effectiveness of the EBMR-BP model, with a recognition rate of 66.67%, and of the PSO-BP-RMSProp model, with a recognition rate of 88.89%. This indicates that the proposed method can be used for emotion recognition of a robot's body movements and that the optimization improves recognition. These contributions are beneficial for emotional interaction design in human-robot interaction (HRI).
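The two-stage scheme named in the title can be sketched as follows: a particle swarm first searches weight space for a good starting point, and RMSProp gradient descent then fine-tunes those weights by backpropagation. This is a minimal illustration only, not the paper's implementation: the synthetic data, the one-hidden-layer tanh topology, and all hyperparameters below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 8-dim "body movement" feature vectors; the class
# label is the index of the largest of the first three features, so the
# mapping is learnable.
X = rng.normal(size=(90, 8))
y = X[:, :3].argmax(axis=1)          # 3 emotion classes
Y = np.eye(3)[y]                     # one-hot targets

H = 12                               # hidden units (an assumption)
DIM = 8 * H + H + H * 3 + 3          # flat parameter count

def unpack(p):
    i = 0
    W1 = p[i:i + 8 * H].reshape(8, H); i += 8 * H
    b1 = p[i:i + H]; i += H
    W2 = p[i:i + H * 3].reshape(H, 3); i += H * 3
    return W1, b1, W2, p[i:]

def forward(p, X):
    W1, b1, W2, b2 = unpack(p)
    h = np.tanh(X @ W1 + b1)
    z = h @ W2 + b2
    z = z - z.max(axis=1, keepdims=True)   # stable softmax
    e = np.exp(z)
    return h, e / e.sum(axis=1, keepdims=True)

def loss(p):
    return -np.mean(np.sum(Y * np.log(forward(p, X)[1] + 1e-9), axis=1))

# Stage 1: PSO searches weight space for a good starting point.
pos = rng.normal(scale=0.5, size=(20, DIM))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([loss(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()
for _ in range(40):
    r1, r2 = rng.random((2, 20, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    f = np.array([loss(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

# Stage 2: RMSProp fine-tunes the PSO-found weights via backpropagation.
p, cache = gbest.copy(), np.zeros(DIM)
lr, decay, eps = 0.01, 0.9, 1e-8
for _ in range(300):
    h, out = forward(p, X)
    dz = (out - Y) / len(X)                      # softmax cross-entropy grad
    W1, b1, W2, b2 = unpack(p)
    dh = dz @ W2.T * (1 - h ** 2)                # tanh derivative
    g = np.concatenate([(X.T @ dh).ravel(), dh.sum(0),
                        (h.T @ dz).ravel(), dz.sum(0)])
    cache = decay * cache + (1 - decay) * g ** 2
    p -= lr * g / (np.sqrt(cache) + eps)

acc = (forward(p, X)[1].argmax(axis=1) == y).mean()
print(f"training loss: {loss(p):.3f}, training accuracy: {acc:.2f}")
```

The intuition behind the combination is that swarm search is gradient-free and explores broadly, reducing the chance that backpropagation starts in a poor basin, while RMSProp converges quickly once a reasonable region is found.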
Affiliation(s)
- Wa Gao
- Co-Innovation Center of Efficient Processing and Utilization of Forest Resources, Nanjing Forestry University, Nanjing 210037, China
- College of Furnishings and Industrial Design, Nanjing Forestry University, Nanjing 210037, China
- Tanfeng Jiang
- College of Furnishings and Industrial Design, Nanjing Forestry University, Nanjing 210037, China
- Wanli Zhai
- College of Furnishings and Industrial Design, Nanjing Forestry University, Nanjing 210037, China
- Fusheng Zha
- The State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin 150001, China
3
Gao W, Shen S, Ji Y, Tian Y. Human Perception of the Emotional Expressions of Humanoid Robot Body Movements: Evidence from Survey and Eye-Tracking Measurements. Biomimetics (Basel) 2024; 9:684. [PMID: 39590256 PMCID: PMC11591740 DOI: 10.3390/biomimetics9110684] [Received: 08/05/2024] [Revised: 10/30/2024] [Accepted: 11/04/2024] [Indexed: 11/28/2024]
Abstract
The emotional expression of body movement, an aspect of emotional communication between humans, has received insufficient attention in the field of human-robot interaction (HRI). This paper explores human perceptions of the emotional expressions of humanoid robot body movements in order to study the emotional design of robots' bodily expressions and the characteristics of human perception of these movements. Six categories of emotional behaviors (happiness, anger, sadness, surprise, fear, and disgust) were designed by imitating human emotional body movements and implemented on a Yanshee robot. A total of 135 participants were recruited for questionnaires and eye-tracking measurements. Statistical methods, including K-means clustering, repeated-measures analysis of variance (ANOVA), Friedman's ANOVA, and Spearman's correlation test, were used to analyze the data. From the statistical results on the emotional categories, intensities, and arousal levels perceived by humans, a guide for grading the designed robot's bodily expressions of emotion is created. Combining this guide with objective analyses, such as the fixations and trajectories of eye movements, we describe characteristics of human perception, including the perceived differences between happiness and negative emotions and the trends of eye movements for different emotional categories. This study not only provides subjective and objective evidence that humans can perceive robots' bodily expressions of emotion through vision alone, but also offers helpful guidance for designing appropriate emotional bodily expressions in HRI.
Affiliation(s)
- Wa Gao
- Co-Innovation Center of Efficient Processing and Utilization of Forest Resources, Nanjing Forestry University, Nanjing 210038, China
- College of Furnishings and Industrial Design, Nanjing Forestry University, Nanjing 210038, China
- Shiyi Shen
- College of Furnishings and Industrial Design, Nanjing Forestry University, Nanjing 210038, China
- Yang Ji
- College of Furnishings and Industrial Design, Nanjing Forestry University, Nanjing 210038, China
- Yuan Tian
- College of Furnishings and Industrial Design, Nanjing Forestry University, Nanjing 210038, China
4
Sirapangi MD, Gopikrishnan S. Predictive health behavior modeling using multimodal feature correlations via Medical Internet-of-Things devices. Heliyon 2024; 10:e34429. [PMID: 39145001 PMCID: PMC11320131 DOI: 10.1016/j.heliyon.2024.e34429] [Received: 08/05/2023] [Revised: 06/13/2024] [Accepted: 07/09/2024] [Indexed: 08/16/2024]
Abstract
IoT (Internet of Things) devices now make it possible to monitor many aspects of human behavior, including sleeping patterns, activity patterns, heart rate variability (HRV), location-based movement patterns, and blood oxygen levels. A correlative study of these patterns can reveal links between behavioral patterns and human health conditions. A wide variety of models has been proposed for this task, but most vary in the parameters they use, which limits their analytical accuracy; moreover, most are highly complex and offer little parameter flexibility, and thus cannot scale to real-time use cases. To overcome these issues, this paper proposes a behavior modeling method that predicts future health conditions from multimodal feature correlations captured by medical IoT devices, analyzed with deep transfer learning. The proposed model first collects large-scale sensor data about the subjects and correlates them with existing medical conditions. This correlation is performed by extracting multidomain feature sets that support spectral analysis, entropy evaluation, scaling estimation, and window-based analysis. These multidomain feature sets are selected by a Firefly Optimizer (FFO) and used to train a Recurrent Neural Network (RNN) model that predicts different diseases. These predictions in turn train a recommendation engine that uses Apriori and Fuzzy C-Means (FCM) to suggest corrective behavioral measures for a healthier lifestyle under real-time conditions. With these operations, the proposed model improves behavior prediction accuracy by 16.4%, prediction precision by 8.3%, prediction AUC (area under the curve) by 9.5%, and the accuracy of corrective behavior recommendations by 3.9% compared with existing methods under similar evaluation conditions.
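The FFO feature-selection stage named in the abstract can be illustrated with a simplified firefly algorithm: fireflies are positions in [0, 1]^d, a feature is "selected" when its coordinate exceeds 0.5, and dimmer fireflies move toward brighter ones. This is a hedged sketch, not the authors' method: the synthetic data, the nearest-centroid fitness function, the 0.5 binarization threshold, and all hyperparameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: 12 features, of which only the first 3 carry class signal.
n, d = 200, 12
y = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, d))
X[:, :3] += y[:, None] * 2.0  # informative features

def fitness(mask):
    """Score a boolean feature subset with a simple nearest-centroid classifier."""
    if mask.sum() == 0:
        return 0.0
    Xs = X[:, mask]
    c0, c1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
    pred = (np.linalg.norm(Xs - c1, axis=1) < np.linalg.norm(Xs - c0, axis=1)).astype(int)
    acc = (pred == y).mean()
    return acc - 0.01 * mask.sum()  # small penalty favors compact subsets

# Firefly algorithm over continuous positions; brightness = subset fitness.
n_fireflies, iters = 15, 30
beta0, gamma, alpha = 1.0, 1.0, 0.1
pos = rng.random((n_fireflies, d))
light = np.array([fitness(p > 0.5) for p in pos])

for _ in range(iters):
    for i in range(n_fireflies):
        for j in range(n_fireflies):
            if light[j] > light[i]:  # move dimmer firefly i toward brighter j
                r2 = np.sum((pos[i] - pos[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)  # attractiveness decays with distance
                pos[i] += beta * (pos[j] - pos[i]) + alpha * (rng.random(d) - 0.5)
                pos[i] = np.clip(pos[i], 0, 1)
                light[i] = fitness(pos[i] > 0.5)

best = pos[light.argmax()] > 0.5
print("selected features:", np.flatnonzero(best))
```

In a full pipeline along the lines the abstract describes, the selected subset would then feed the downstream predictive model rather than a nearest-centroid scorer.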
Affiliation(s)
- Moshe Dayan Sirapangi
- School of Computer Science and Engineering, VIT-AP University, Amaravathi, Andhra Pradesh, India
- S. Gopikrishnan
- School of Computer Science and Engineering, VIT-AP University, Amaravathi, Andhra Pradesh, India
5
Sandini G, Sciutti A, Morasso P. Artificial cognition vs. artificial intelligence for next-generation autonomous robotic agents. Front Comput Neurosci 2024; 18:1349408. [PMID: 38585280 PMCID: PMC10995397 DOI: 10.3389/fncom.2024.1349408] [Received: 12/04/2023] [Accepted: 02/20/2024] [Indexed: 04/09/2024]
Abstract
The trend in industrial/service robotics is to develop robots that can cooperate with people, interacting with them in an autonomous, safe, and purposive way. These are the fundamental elements characterizing the fourth and fifth industrial revolutions (4IR, 5IR): the crucial innovation is the adoption of intelligent technologies that allow the development of cyber-physical systems similar, if not superior, to humans. The common wisdom is that such intelligence might be provided by AI (Artificial Intelligence), a claim supported more by media coverage and commercial interests than by solid scientific evidence. AI is currently conceived quite broadly, encompassing LLMs and much else, without any unifying principle beyond its success in various areas. The current view of AI robotics mostly follows a purely disembodied approach consistent with the old-fashioned Cartesian mind-body dualism, reflected in the software-hardware distinction inherent to the von Neumann computing architecture. The working hypothesis of this position paper is that the road to the next generation of autonomous robotic agents with cognitive capabilities requires a fully brain-inspired, embodied cognitive approach that avoids the trap of mind-body dualism and aims at the full integration of Bodyware and Cogniware. We name this approach Artificial Cognition (ACo) and ground it in Cognitive Neuroscience. It is specifically focused on proactive knowledge acquisition based on bidirectional human-robot interaction; the practical advantage is enhanced generalization and explainability. Moreover, we believe that a brain-inspired network of interactions is necessary for allowing humans to cooperate with artificial cognitive agents, building a growing level of personal trust and reciprocal accountability: this is clearly missing, although actively sought, in current AI.
The ACo approach is a work in progress that can take advantage of a number of research threads, some of which antedate the early attempts to define AI concepts and methods. In the rest of the paper we consider some of the building blocks that need to be revisited in a unitary framework: the principles of developmental robotics, the methods of action representation with prospection capabilities, and the crucial role of social interaction.
Affiliation(s)
- Pietro Morasso
- Italian Institute of Technology, Cognitive Architecture for Collaborative Technologies (CONTACT) and Robotics, Brain and Cognitive Sciences (RBCS) Research Units, Genoa, Italy
6
de Wit J, Vogt P, Krahmer E. The Design and Observed Effects of Robot-Performed Manual Gestures: A Systematic Review. ACM Trans Hum-Robot Interact 2022. [DOI: 10.1145/3549530] [Indexed: 10/17/2022]
Abstract
Communication using manual (hand) gestures is considered a defining property of social robots and of their physical embodiment and presence; therefore, we see a need for a comprehensive overview of the state of the art in social robots that use gestures. This systematic literature review aims to address this need by (1) describing the gesture production process of a social robot, including the design and planning steps, and (2) surveying the effects of robot-performed gestures on human-robot interactions across a multitude of domains. We identify patterns and themes in the existing body of literature, resulting in nine outstanding questions for research on robot-performed gestures, regarding: developments in sensor technology and AI, structuring the gesture design and evaluation process, the relationship between physical appearance and gestures, the effects of planning on the overall interaction, standardizing measurements of gesture ‘quality’, individual differences, gesture mirroring, whether human-likeness is desirable, and the universal accessibility of robots. We also reflect on current methodological practices in studies of robot-performed gestures and suggest improvements regarding replicability, external validity, the measurement instruments used, and connections with other disciplines. These outstanding questions and methodological suggestions can guide future work in this field.
Affiliation(s)
| | - Paul Vogt
- Hanze University of Applied Sciences, the Netherlands