1. Matthews G, Cumings R, De Los Santos EP, Feng IY, Mouloua SA. A new era for stress research: supporting user performance and experience in the digital age. ERGONOMICS 2025; 68:913-946. [PMID: 39520089] [DOI: 10.1080/00140139.2024.2425953]
Abstract
Stress is both a driver of objective performance impairments and a source of negative user experience of technology. This review addresses future directions for research on stress and ergonomics in the digital age. The review is structured around three levels of analysis. At the individual user level, stress is elicited by novel technologies and tasks including interaction with AI and robots, working in Virtual Reality, and operating autonomous vehicles. At the organisational level, novel, potentially stressful challenges include maintaining cybersecurity, surveillance and monitoring of employees supported by technology, and addressing bias and discrimination in the workplace. At the sociocultural level, technology, values and norms are evolving symbiotically, raising novel demands illustrated with respect to interactions with social media and new ethical challenges. We also briefly review the promise of neuroergonomics and emotional design to support stress mitigation. We conclude with seven high-level principles that may guide future work.
Affiliation(s)
- Gerald Matthews
- Department of Psychology, George Mason University, Fairfax, VA, USA
- Ryon Cumings
- Department of Psychology, George Mason University, Fairfax, VA, USA
- Irene Y Feng
- Department of Psychology, George Mason University, Fairfax, VA, USA
- Salim A Mouloua
- Department of Psychology, George Mason University, Fairfax, VA, USA
2. Immel D, Hilpert B, Schwarz P, Hein A, Gebhard P, Barton S, Hurlemann R. Patients' and Health Care Professionals' Expectations of Virtual Therapeutic Agents in Outpatient Aftercare: Qualitative Survey Study. JMIR Form Res 2025; 9:e59527. [PMID: 40138692] [PMCID: PMC11982758] [DOI: 10.2196/59527]
Abstract
BACKGROUND Depression is a serious mental health condition that can have a profound impact on the individual experiencing the disorder and those providing care. While psychotherapy and medication can be effective, there are gaps in current approaches, particularly in outpatient care. This phase is often associated with a high risk of relapse and readmission, and patients often report a lack of support. Socially interactive agents represent an innovative approach to the provision of assistance. Often powered by artificial intelligence, these virtual agents can interact socially and elicit humanlike emotions. In health care, they are used as virtual therapeutic assistants to fill gaps in outpatient aftercare. OBJECTIVE We aimed to explore the expectations of patients with depression and health care professionals by conducting a qualitative survey. Our analysis focused on research questions related to the appearance and role of the assistant, the assistant-patient interaction (time of interaction, skills and abilities of the assistant, and modes of interaction) and the therapist-assistant interaction. METHODS A 2-part qualitative study was conducted to explore the perspectives of the 2 groups (patients and care providers). In the first step, care providers (n=30) were recruited during a regional offline meeting. After a short presentation, they were given a link and were asked to complete a semistructured web-based questionnaire. Next, patients (n=20) were recruited from a clinic and were interviewed in a semistructured face-to-face interview. RESULTS The survey findings suggested that the assistant should be a multimodal communicator (voice, facial expressions, and gestures) and counteract negative self-evaluation. Most participants preferred a female assistant or wanted the option to choose the gender. In total, 24 (80%) health care professionals wanted a selectable option, while patients exhibited a marked preference for a female or diverse assistant. Regarding patient-assistant interaction, the assistant was seen as a proactive recipient of information, and the patient as a passive one. Gaps in aftercare could be filled by the unlimited availability of the assistant. However, patients should retain their autonomy to avoid dependency. The monitoring of health status was viewed positively by both groups. A biofeedback function was desired to detect early warning signs of disease. When appropriate to the situation, a sense of humor in the assistant was desirable. The desired skills of the assistant can be summarized as providing structure and emotional support, especially warmth and competence to build trust. Consistency was important for the caregiver to appear authentic. Regarding the assistant-care provider interaction, 3 key areas were identified: objective patient status measurement, emergency suicide prevention, and an information tool and decision support system for health care professionals. CONCLUSIONS Overall, the survey conducted provides innovative guidelines for the development of virtual therapeutic assistants to fill the gaps in patient aftercare.
Affiliation(s)
- Diana Immel
- Department of Psychiatry, School of Medicine and Health Sciences, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Bernhard Hilpert
- Affective Computing Group, Deutsches Forschungszentrum für Künstliche Intelligenz GmbH, German Research Center for Artificial Intelligence, Kaiserslautern, Germany
- Leiden Institute of Advanced Computer Science, Leiden University, Snellius Gebouw, Leiden, The Netherlands
- Department of Computer Science, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Patricia Schwarz
- Assistance Systems and Medical Device Technology, Department for Health Services Research, School of Medicine & Health Sciences, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Andreas Hein
- Assistance Systems and Medical Device Technology, Department for Health Services Research, School of Medicine & Health Sciences, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Patrick Gebhard
- Affective Computing Group, Deutsches Forschungszentrum für Künstliche Intelligenz GmbH, German Research Center for Artificial Intelligence, Kaiserslautern, Germany
- Simon Barton
- Department of Psychiatry, School of Medicine and Health Sciences, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- René Hurlemann
- Department of Psychiatry, School of Medicine and Health Sciences, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
3. Ackermann H, Henke A, Chevalère J, Yun HS, Hafner VV, Pinkwart N, Lazarides R. Physical embodiment and anthropomorphism of AI tutors and their role in student enjoyment and performance. NPJ SCIENCE OF LEARNING 2025; 10:1. [PMID: 39779711] [PMCID: PMC11711547] [DOI: 10.1038/s41539-024-00293-z]
Abstract
Rising interest in artificial intelligence in education reinforces the demand for evidence-based implementation. This study investigates how tutor agents' physical embodiment and anthropomorphism (student-reported sociability, animacy, agency, and disturbance) relate to affective (on-task enjoyment) and cognitive (task performance) learning within an intelligent tutoring system (ITS). Data from 56 students (M = 17.75 years, SD = 2.63 years; 30.4% female), working with an emotionally-adaptive version of the ITS "Betty's Brain", were analyzed. The ITS' agents were either depicted as on-screen robots (condition A) or as both on-screen avatars and physical robots (condition B). Physical presence of the tutor agent was not significantly related to task performance or anthropomorphism, but to higher initial on-task enjoyment. Student-reported disturbance was negatively related to initial on-task enjoyment, and student-reported sociability was negatively related to task performance. While physical robots may increase initial on-task enjoyment, students' perception of certain characteristics may hinder learning, providing implications for designing social robots for education.
Affiliation(s)
- Helene Ackermann
- Department of Educational Sciences, University of Potsdam, Karl-Liebknecht-Straße 24/25, 14476, Potsdam, Germany.
- Science of Intelligence, Research Cluster of Excellence, Marchstraße 23, 10587, Berlin, Germany.
- Anja Henke
- Department of Educational Sciences, University of Potsdam, Karl-Liebknecht-Straße 24/25, 14476, Potsdam, Germany
- Science of Intelligence, Research Cluster of Excellence, Marchstraße 23, 10587, Berlin, Germany
- Johann Chevalère
- Laboratoire de Psychologie Sociale et Cognitive (LAPSCO) UMR 6024, CNRS & Université Clermont-Auvergne, 17 Rue Paul Collomp, 63000, Clermont-Ferrand, France
- Hae Seon Yun
- Science of Intelligence, Research Cluster of Excellence, Marchstraße 23, 10587, Berlin, Germany
- Humboldt-University Berlin, Unter den Linden 6, 10099, Berlin, Germany
- Verena V Hafner
- Science of Intelligence, Research Cluster of Excellence, Marchstraße 23, 10587, Berlin, Germany
- Humboldt-University Berlin, Unter den Linden 6, 10099, Berlin, Germany
- Niels Pinkwart
- Science of Intelligence, Research Cluster of Excellence, Marchstraße 23, 10587, Berlin, Germany
- Humboldt-University Berlin, Unter den Linden 6, 10099, Berlin, Germany
- Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI), Alt-Moabit 91c, 10559, Berlin, Germany
- Rebecca Lazarides
- Department of Educational Sciences, University of Potsdam, Karl-Liebknecht-Straße 24/25, 14476, Potsdam, Germany
- Science of Intelligence, Research Cluster of Excellence, Marchstraße 23, 10587, Berlin, Germany
4. Morgante E, Susinna C, Culicetto L, Quartarone A, Lo Buono V. Is it possible for people to develop a sense of empathy toward humanoid robots and establish meaningful relationships with them? Front Psychol 2024; 15:1391832. [PMID: 39188868] [PMCID: PMC11346246] [DOI: 10.3389/fpsyg.2024.1391832]
Abstract
Introduction: Empathy can be described as the ability to adopt another person's perspective and comprehend, feel, share, and respond to their emotional experiences. Empathy plays an important role in these relationships and is constructed in human-robot interaction (HRI). This systematic review focuses on studies investigating human empathy toward robots. We intend to define empathy as the cognitive capacity of humans to perceive robots as equipped with emotional and psychological states. Methods: We conducted a systematic search of peer-reviewed articles using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. We searched Scopus, PubMed, Web of Science, and Embase databases. All articles were reviewed based on the titles, abstracts, and full texts by two investigators (EM and CS) who independently performed data collection. The researchers read the full-text articles deemed suitable for the study, and in cases of disagreement regarding the inclusion and exclusion criteria, the final decision was made by a third researcher (VLB). Results: The electronic search identified 484 articles. After reading the full texts of the selected publications and applying the predefined inclusion criteria, we selected 11 articles that met our inclusion criteria. Robots that could identify and respond appropriately to the emotional states of humans seemed to evoke empathy. In addition, empathy tended to grow more when the robots exhibited anthropomorphic traits. Discussion: Humanoid robots can be programmed to understand and react to human emotions and simulate empathetic responses; however, they are not endowed with the same innate capacity for empathy as humans.
Affiliation(s)
- Carla Susinna
- IRCCS Centro Neurolesi Bonino Pulejo, Messina, Italy
5. Irfan B, Kuoppamäki S, Skantze G. Recommendations for designing conversational companion robots with older adults through foundation models. Front Robot AI 2024; 11:1363713. [PMID: 38860032] [PMCID: PMC11163135] [DOI: 10.3389/frobt.2024.1363713]
Abstract
Companion robots aim to mitigate loneliness and social isolation among older adults by providing social and emotional support in their everyday lives. However, older adults' expectations of conversational companionship might substantially differ from what current technologies can achieve, as well as from other age groups like young adults. Thus, it is crucial to involve older adults in the development of conversational companion robots to ensure that these devices align with their unique expectations and experiences. The recent advancement in foundation models, such as large language models, has taken a significant stride toward fulfilling those expectations, in contrast to the prior literature that relied on humans controlling robots (i.e., Wizard of Oz) or limited rule-based architectures that are not feasible to apply in the daily lives of older adults. Consequently, we conducted a participatory design (co-design) study with 28 older adults, demonstrating a companion robot using a large language model (LLM), and design scenarios that represent situations from everyday life. The thematic analysis of the discussions around these scenarios shows that older adults expect a conversational companion robot to engage in conversation actively in isolation and passively in social settings, remember previous conversations and personalize, protect privacy and provide control over learned data, give information and daily reminders, foster social skills and connections, and express empathy and emotions. Based on these findings, this article provides actionable recommendations for designing conversational companion robots for older adults with foundation models, such as LLMs and vision-language models, which can also be applied to conversational robots in other domains.
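Editor's illustration (not part of the original abstract): a minimal sketch of how recommendations of the kind listed above (memory of prior conversations, personalization, daily reminders, privacy) might be folded into an LLM prompt for a companion robot. The `call_llm` function, the profile fields, and the prompt wording are all hypothetical placeholders, not the authors' system.

```python
# Hypothetical sketch: assembling a companion-robot prompt from user profile,
# consented memories, and reminders. `call_llm` stands in for any chat backend.
from datetime import date

def call_llm(prompt: str) -> str:
    """Placeholder for an arbitrary chat-completion backend."""
    raise NotImplementedError

def companion_prompt(user_profile: dict, memory: list[str],
                     reminders: list[str], utterance: str) -> str:
    return "\n".join([
        "You are a conversational companion robot for an older adult.",
        f"Today is {date.today()}.",
        f"User profile: {user_profile}",
        "Relevant memories from earlier conversations (use only with the user's consent):",
        *[f"- {m}" for m in memory],
        f"Pending daily reminders: {reminders}",
        "Be empathic, concise, and respect the user's privacy and autonomy.",
        f"User says: {utterance}",
    ])

prompt = companion_prompt(
    user_profile={"name": "Anna", "interests": ["gardening"]},
    memory=["Anna's grandson visited last Sunday."],
    reminders=["blood pressure medication at 9 am"],
    utterance="I feel a bit lonely today.",
)
# reply = call_llm(prompt)
```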
Affiliation(s)
- Bahar Irfan
- Division of Speech, Music and Hearing, KTH Royal Institute of Technology, Stockholm, Sweden
- Sanna Kuoppamäki
- Division of Health Informatics and Logistics, KTH Royal Institute of Technology, Stockholm, Sweden
- Gabriel Skantze
- Division of Speech, Music and Hearing, KTH Royal Institute of Technology, Stockholm, Sweden
6. Gupta K, Zhang Y, Gunasekaran TS, Krishna N, Pai YS, Billinghurst M. CAEVR: Biosignals-Driven Context-Aware Empathy in Virtual Reality. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2024; 30:2671-2681. [PMID: 38437090] [DOI: 10.1109/tvcg.2024.3372130]
Abstract
There is little research on how Virtual Reality (VR) applications can identify and respond meaningfully to users' emotional changes. In this paper, we investigate the impact of Context-Aware Empathic VR (CAEVR) on the emotional and cognitive aspects of user experience in VR. We developed a real-time emotion prediction model using electroencephalography (EEG), electrodermal activity (EDA), and heart rate variability (HRV) and used this in personalized and generalized models for emotion recognition. We then explored the application of this model in a context-aware empathic (CAE) virtual agent and an emotion-adaptive (EA) VR environment. We found a significant increase in positive emotions, cognitive load, and empathy toward the CAE agent, suggesting the potential of CAEVR environments to refine user-agent interactions. We identify lessons learned from this study and directions for future work.
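Editor's illustration (not the authors' pipeline): a rough sketch of the personalized-versus-generalized distinction for biosignal-based emotion recognition described above, trained on simulated EEG/EDA/HRV feature vectors. Feature names, labels, and the classifier choice are assumptions made for illustration only.

```python
# Minimal sketch: "generalized" (pooled) vs. "personalized" (per-user) emotion
# classifiers from hypothetical biosignal features [eeg_alpha, eda_tonic, hrv_rmssd].
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_samples = 10, 200
X = rng.normal(size=(n_users * n_samples, 3))          # simulated biosignal features
user = np.repeat(np.arange(n_users), n_samples)        # which user produced each sample
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=len(X)) > 0).astype(int)  # 1 = positive affect

# Generalized model: pooled over all users
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
general = RandomForestClassifier(random_state=0).fit(Xtr, ytr)
print("generalized accuracy:", general.score(Xte, yte))

# Personalized model: trained and evaluated on a single user's data
mask = user == 0
Xp_tr, Xp_te, yp_tr, yp_te = train_test_split(X[mask], y[mask], test_size=0.3, random_state=0)
personal = RandomForestClassifier(random_state=0).fit(Xp_tr, yp_tr)
print("personalized accuracy:", personal.score(Xp_te, yp_te))
```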
7. Matsumura T, Esaki K, Yang S, Yoshimura C, Mizuno H. Active Inference With Empathy Mechanism for Socially Behaved Artificial Agents in Diverse Situations. ARTIFICIAL LIFE 2024; 30:277-297. [PMID: 38018026] [DOI: 10.1162/artl_a_00416]
Abstract
This article proposes a method for an artificial agent to behave in a social manner. Although defining proper social behavior is difficult because it differs from situation to situation, the agent following the proposed method adaptively behaves appropriately in each situation by empathizing with the surrounding others. The proposed method is achieved by incorporating empathy into active inference. We evaluated the proposed method on the control of autonomous mobile robots in diverse situations. The evaluation results show that an agent controlled by the proposed method behaved in a more socially adaptive manner than an agent controlled by standard active inference across these diverse situations. In the case of two agents, the agent controlled with the proposed method behaved in a social way that reduced the other agent's travel distance by 13.7% and increased the margin between the agents by 25.8%, even though it increased its own travel distance by 8.2%. Also, the agent controlled with the proposed method behaved more socially when it was surrounded by altruistic others but less socially when it was surrounded by selfish others.
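Editor's illustration (a toy analogy, not the paper's active inference model): the core idea of weighing one's own expected cost against an empathy-weighted estimate of another agent's cost can be sketched in a few lines. All actions and cost values below are invented for illustration.

```python
# Toy sketch: choose the action minimizing own cost + empathy_weight * other's cost.
actions = ["wait", "pass_left", "pass_right"]
own_cost   = {"wait": 3.0, "pass_left": 1.0, "pass_right": 1.2}   # e.g., own travel distance
other_cost = {"wait": 0.5, "pass_left": 2.5, "pass_right": 0.8}   # e.g., other's travel distance

def choose(empathy_weight: float) -> str:
    """Select the action with the lowest empathy-weighted combined cost."""
    return min(actions, key=lambda a: own_cost[a] + empathy_weight * other_cost[a])

print(choose(empathy_weight=0.0))  # selfish agent  -> 'pass_left'
print(choose(empathy_weight=1.0))  # empathic agent -> 'pass_right'
```

A higher empathy weight trades a small increase in the agent's own cost for a larger reduction in the other agent's cost, which mirrors the qualitative pattern reported in the abstract.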
8. Singh V, Sarkar S, Gaur V, Grover S, Singh OP. Clinical Practice Guidelines on using artificial intelligence and gadgets for mental health and well-being. Indian J Psychiatry 2024; 66:S414-S419. [PMID: 38445270] [PMCID: PMC10911327] [DOI: 10.4103/indianjpsychiatry.indianjpsychiatry_926_23]
Affiliation(s)
- Vipul Singh
- Department of Psychiatry, Government Medical College, Kannauj, Uttar Pradesh, India
- Sharmila Sarkar
- Department of Psychiatry, Calcutta National Medical College, Kolkata, West Bengal, India
- Vikas Gaur
- Department of Psychiatry, Jaipur National University Institute for Medical Sciences and Research Centre, Jaipur, Rajasthan, India
- Sandeep Grover
- Department of Psychiatry, Post Graduate Institute of Medical Education and Research, Chandigarh, India
- Om Prakash Singh
- Department of Psychiatry, Midnapore Medical College, Midnapore, West Bengal, India
9. Orden-Mejía M, Carvache-Franco M, Huertas A, Carvache-Franco O, Carvache-Franco W. Modeling users' satisfaction and visit intention using AI-based chatbots. PLoS One 2023; 18:e0286427. [PMID: 37682931] [PMCID: PMC10490898] [DOI: 10.1371/journal.pone.0286427]
Abstract
AI-based chatbots are an emerging technology disrupting the tourism industry. Although chatbots have received increasing attention, there is little evidence of their impact on tourists' decisions to visit a destination. This study evaluates the key attributes of chatbots and their effects on user satisfaction and visit intention. We use structural equation modeling with covariance procedures to test the proposed model and its hypotheses. The results showed that informativeness, empathy, and interactivity are critical attributes for satisfaction, which drive tourists' intention to visit a destination.
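Editor's illustration (not the authors' analysis): the hypothesized structure, chatbot attributes driving satisfaction and satisfaction driving visit intention, can be expressed as a path model. The sketch below uses the semopy package and simulated data with hypothetical column names; the authors used covariance-based SEM software.

```python
# Sketch of the hypothesized paths: attributes -> satisfaction -> visit intention.
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(1)
n = 300
informativeness = rng.normal(size=n)
empathy = rng.normal(size=n)
interactivity = rng.normal(size=n)
satisfaction = 0.4 * informativeness + 0.3 * empathy + 0.2 * interactivity + rng.normal(scale=0.5, size=n)
visit_intention = 0.6 * satisfaction + rng.normal(scale=0.5, size=n)
data = pd.DataFrame(dict(informativeness=informativeness, empathy=empathy,
                         interactivity=interactivity, satisfaction=satisfaction,
                         visit_intention=visit_intention))

model_desc = """
satisfaction ~ informativeness + empathy + interactivity
visit_intention ~ satisfaction
"""
model = semopy.Model(model_desc)
model.fit(data)
print(model.inspect())  # estimated path coefficients and p-values
```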
Affiliation(s)
- Miguel Orden-Mejía
- Facultat de Turisme i Geografia, Universitat Rovira I Virgili, Vila-seca, Spain
- Assumpció Huertas
- Department of Communication, Universitat Rovira I Virgili, Tarragona, Spain
- Orly Carvache-Franco
- Facultad de Economía y Empresa, Universidad Católica de Santiago de Guayaquil, Guayaquil, Ecuador
- Wilmer Carvache-Franco
- Facultad de Ciencias Sociales y Humanísticas, Escuela Superior Politécnica del Litoral, ESPOL, Guayaquil, Ecuador
10. Zeladita-Huaman JA, Huyhua-Gutierrez SC, Castillo-Parra H, Zegarra-Chapoñan R, Tejada-Muñoz S, Díaz-Manchay RJ. Technological variables predictors of academic stress in nursing students in times of COVID-19. Rev Lat Am Enfermagem 2023; 31:e3851. [PMID: 37194890] [PMCID: PMC10202226] [DOI: 10.1590/1518-8345.6386.3851]
Abstract
OBJECTIVE to analyze which technological variables, derived from the use of electronic devices, predict academic stress and its dimensions in Nursing students. METHOD analytical cross-sectional study carried out with a total of 796 students from six universities in Peru. The SISCO scale was used and four logistic regression models were estimated for the analysis, with selection of variables in stages. RESULTS among the participants, 87.6% had a high level of academic stress; time using the electronic device, screen brightness, age and sex were associated with academic stress and its three dimensions; the position of using the electronic device was associated with the total scale and the stressors and reactions dimensions. Finally, the distance between the face and the electronic device was associated with the total scale and size of reactions. CONCLUSION technological variables and sociodemographic characteristics predict academic stress in nursing students. It is suggested to optimize the time of use of computers, regulate the brightness of the screen, avoid sitting in inappropriate positions and pay attention to the distance, in order to reduce academic stress during distance learning.
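Editor's illustration (not the authors' models): a minimal sketch of one logistic regression of the kind described, predicting high academic stress from technological and sociodemographic predictors. Variable names, coefficients, and data are simulated and purely illustrative.

```python
# Sketch: logistic regression for high academic stress from technological variables.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 796
df = pd.DataFrame({
    "hours_on_device": rng.uniform(1, 12, n),      # daily hours using the device
    "screen_brightness": rng.uniform(0, 1, n),     # normalized brightness
    "poor_posture": rng.integers(0, 2, n),         # 1 = inappropriate sitting position
    "face_to_screen_cm": rng.uniform(20, 70, n),   # viewing distance
    "age": rng.integers(18, 30, n),
    "female": rng.integers(0, 2, n),
})
logits = 0.3 * df["hours_on_device"] + 1.0 * df["screen_brightness"] - 0.02 * df["face_to_screen_cm"]
df["high_stress"] = (logits + rng.normal(scale=1.0, size=n) > 1.5).astype(int)

model = LogisticRegression(max_iter=1000).fit(df.drop(columns="high_stress"), df["high_stress"])
for name, coef in zip(df.columns[:-1], model.coef_[0]):
    print(f"{name}: {coef:+.2f}")   # positive coefficients raise the odds of high stress
```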
Affiliation(s)
- Henry Castillo-Parra
- Universidad de San Buenaventura, Facultad de Psicología, Medellín, Antioquia, Colombia
- Roberto Zegarra-Chapoñan
- Universidad María Auxiliadora, Escuela Profesional de Enfermería, Lima, Lima, Perú
- Ministerio de Salud, Escuela Nacional de Salud Pública, Lima, Lima, Perú
- Sonia Tejada-Muñoz
- Universidad Nacional Toribio Rodríguez de Mendoza de Amazonas, Facultad de Ciencia de la Salud, Amazonas, Amazonas, Perú
- Rosa Jeuna Díaz-Manchay
- Universidad Católica Santo Toribio de Mogrovejo, Facultad de Medicina, Lambayeque, Lambayeque, Perú
11. Understanding AI-based customer service resistance: A perspective of defective AI features and tri-dimensional distrusting beliefs. Inf Process Manag 2023. [DOI: 10.1016/j.ipm.2022.103257]
12. Tojib D, Abdi E, Tian L, Rigby L, Meads J, Prasad T. What’s Best for Customers: Empathetic Versus Solution-Oriented Service Robots. Int J Soc Robot 2023. [DOI: 10.1007/s12369-023-00970-w]
Abstract
A promising application of social robots highlighted by the ongoing labor shortage is to deploy them as service robots at organizational frontlines. As the face of the firms, service robots are expected to provide cognitive and affective supports in response to customer inquiries. However, one question remains unanswered: Would having a robot with a high level of affective support be helpful when such a robot cannot provide a satisfactory level of cognitive support to users? In this study, we aim to address this question by showing that empathetic service robots can be beneficial, although the extent of such benefits depends on the quality of services they provide. Our in-person human–robot interaction study (n = 55) shows that when a service robot can only provide a partial solution, it is preferable for it to express more empathetic behaviors, as users will perceive it to be more useful and will have a better customer experience. However, when a service robot is able to provide a full solution, the level of empathy displayed by it does not result in significant differences on perceived usefulness and customer experience. These findings are further validated in an online experimental study performed in another country (n = 395).
13. Tsumura T, Yamada S. Influence of agent's self-disclosure on human empathy. PLoS One 2023; 18:e0283955. [PMID: 37163467] [PMCID: PMC10171667] [DOI: 10.1371/journal.pone.0283955]
Abstract
As AI technologies progress, social acceptance of AI agents, including intelligent virtual agents and robots, is becoming even more important for wider application of AI in human society. One way to improve the relationship between humans and anthropomorphic agents is to have humans empathize with the agents. By empathizing, humans act positively and kindly toward agents, which makes it easier for them to accept the agents. In this study, we focus on self-disclosure from agents to humans in order to increase empathy felt by humans toward anthropomorphic agents. We experimentally investigate the possibility that self-disclosure from an agent facilitates human empathy. We formulate hypotheses and experimentally analyze and discuss the conditions in which humans have more empathy toward agents. Experiments were conducted with a three-way mixed design, and the factors were the agents' appearance (human, robot), self-disclosure (high-relevance self-disclosure, low-relevance self-disclosure, no self-disclosure), and empathy before/after a video stimulus. An analysis of variance (ANOVA) was performed using data from 918 participants. We found that the appearance factor did not have a main effect, and self-disclosure that was highly relevant to the scenario used facilitated more human empathy with a statistically significant difference. We also found that no self-disclosure suppressed empathy. These results support our hypotheses. This study reveals that self-disclosure represents an important characteristic of anthropomorphic agents which helps humans to accept them.
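Editor's illustration (a simplified stand-in, not the authors' three-way analysis): the design combines between-subjects factors with a within-subjects pre/post measurement. The sketch below runs a two-factor mixed ANOVA (one between factor, one within factor) with pingouin on simulated data; group labels, effect sizes, and sample sizes are invented.

```python
# Simplified sketch of a mixed design: self-disclosure (between) x time (within).
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(3)
n_per_group = 50
rows = []
for disclosure in ["high_relevance", "low_relevance", "none"]:
    for pid in range(n_per_group):
        base = rng.normal(4.0, 1.0)                                   # participant baseline empathy
        boost = {"high_relevance": 1.0, "low_relevance": 0.3, "none": -0.2}[disclosure]
        rows.append(dict(pid=f"{disclosure}_{pid}", disclosure=disclosure, time="pre",
                         empathy=base + rng.normal(scale=0.3)))
        rows.append(dict(pid=f"{disclosure}_{pid}", disclosure=disclosure, time="post",
                         empathy=base + boost + rng.normal(scale=0.3)))
df = pd.DataFrame(rows)

aov = pg.mixed_anova(data=df, dv="empathy", within="time", subject="pid", between="disclosure")
print(aov[["Source", "F", "p-unc"]])
```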
Affiliation(s)
- Takahiro Tsumura
- Department of Informatics, The Graduate University for Advanced Studies, SOKENDAI, Tokyo, Japan
- National Institute of Informatics, Tokyo, Japan
- Seiji Yamada
- Department of Informatics, The Graduate University for Advanced Studies, SOKENDAI, Tokyo, Japan
- National Institute of Informatics, Tokyo, Japan
14. Chevalère J, Kirtay M, Hafner VV, Lazarides R. Who to Observe and Imitate in Humans and Robots: The Importance of Motivational Factors. Int J Soc Robot 2022. [DOI: 10.1007/s12369-022-00923-9]
Abstract
Imitation is a vital skill that humans leverage in various situations. Humans achieve imitation by observing others with apparent ease. Yet, in reality, it is computationally expensive to model this ability on artificial agents (e.g., social robots) so that they can acquire new skills by imitating an expert agent. Although learning through imitation has been extensively addressed in the robotic literature, most studies focus on answering the following questions: what to imitate and how to imitate. In this conceptual paper, we focus on one of the overlooked questions of imitation through observation: who to imitate. We present possible answers to the who-to-imitate question by exploring motivational factors documented in psychological research and their possible implementation in robotics. To this end, we focus on two critical instances of the who-to-imitate question that guide agents to prioritize one demonstrator over another: outcome expectancies, viewed as the anticipated learning gains, and efficacy expectations, viewed as the anticipated costs of performing actions, respectively.
15. del Valle-Canencia M, Moreno Martínez C, Rodríguez-Jiménez RM, Corrales-Paredes A. The emotions effect on a virtual characters design–A student perspective analysis. FRONTIERS IN COMPUTER SCIENCE 2022. [DOI: 10.3389/fcomp.2022.892597]
Abstract
Interaction between people and virtual characters through digital and electronic devices is a reality. In this context, the design of virtual characters must incorporate emotional expression at a nonverbal level to achieve effective communication with the user. This exploratory study investigates the design features of an avatar functioning as a virtual assistant in educational contexts. Taking a multidisciplinary approach, the user research was carried out with a semi-open questionnaire on the perceived emotional characteristics, likeability, attractiveness, and applicability of a set of six 2D and 3D characters. The results extracted from a sample of 69 university students provide relevant information on design features and open new lines for future research. Aspects such as Ekman's basic emotion discrimination and the design of facial expression are analyzed. The incorporation of other body parts, their spatial orientation and contextual elements, seems to contribute to effective emotional communication. The results also highlight how the design of a virtual character should take into consideration the complexity involved in facial gestures and changes in relation to the vertical axis and planes of movement. Finally, this article discusses the complexity involved in expressing a given emotion in a virtual character.
16. Scorici G, Schultz MD, Seele P. Anthropomorphization and beyond: conceptualizing humanwashing of AI-enabled machines. AI & SOCIETY 2022. [DOI: 10.1007/s00146-022-01492-1]
Abstract
The complex relationships between humans and AI-empowered machines have created and inspired new products and services as well as controversial debates, fiction and entertainment, and last but not least, a thriving and vital field of research. The (theoretical) convergence between the two categories of entities has created stimulating concepts and theories in the past, such as the uncanny valley, machinization of humans through datafication, or humanization of machines, known as anthropomorphization. In this article, we identify a new gap in the relational interaction between humans and AI triggered by commercial interests, making use of AI through advertisement, marketing, and corporate communications. Our aim is to broaden the field of AI and society by adding the business-society nexus. Thus, we build on existing research streams of machinewashing and the analogous phenomenon of greenwashing to theorize about the humanwashing of AI-enabled machines as a specific anthropomorphization notion. In this way, the article offers a contribution to the anthropomorphization literature conceptualizing humanwashing as a deceptive use of AI-enabled machines (AIEMs) aimed at intentionally or unintentionally misleading organizational stakeholders and the broader public about the true capabilities that AIEMs possess.
17. Koulouri T, Macredie RD, Olakitan D. Chatbots to Support Young Adults’ Mental Health: an Exploratory Study of Acceptability. ACM T INTERACT INTEL 2022. [DOI: 10.1145/3485874]
Abstract
Despite the prevalence of mental health conditions, stigma, lack of awareness and limited resources impede access to care, creating a need to improve mental health support. The recent surge in scientific and commercial interest in conversational agents and their potential to improve diagnosis and treatment seems a potentially fruitful area in this respect, particularly for young adults who widely use such systems in other contexts. Yet, there is little research that considers the acceptability of conversational agents in mental health. This study, therefore, presents three research activities that explore whether conversational agents and, in particular, chatbots can be an acceptable solution in mental healthcare for young adults. First, a survey of young adults (in a university setting) provides an understanding of the landscape of mental health in this age group and of their views around mental health technology, including chatbots. Second, a literature review synthesises current evidence relating to the acceptability of mental health conversational agents and points to future research priorities. Third, interviews with counsellors who work with young adults, supported by a chatbot prototype and user-centred design techniques, reveal the perceived benefits and potential roles of mental health chatbots from the perspective of mental health professionals, while suggesting preconditions for the acceptability of the technology. Taken together, these research activities: provide evidence that chatbots are an acceptable solution to offering mental health support for young adults; identify specific challenges relating to both the technology and environment; and argue for the application of user-centred approaches during development of mental health chatbots and more systematic and rigorous evaluations of the resulting solutions.
18. Spitale M, Okamoto S, Gupta M, Xi H, Matarić MJ. Socially Assistive Robots as Storytellers That Elicit Empathy. ACM TRANSACTIONS ON HUMAN-ROBOT INTERACTION 2022. [DOI: 10.1145/3538409]
Abstract
Empathy is the ability to share someone else’s feelings or experiences; it influences how people interact and relate. Socially assistive robots (SAR) are a promising means of conveying and eliciting empathy toward facilitating human-robot interaction. This work examines factors that influence the amount of empathy elicited by a SAR storyteller and users’ perceptions of that robot. We conducted an empirical mixed-design study (N=46) using an autonomous SAR storyteller that told three stories, each with a different human or robot target of empathy. The robot storyteller used the first-person narrative voice (1PNV) with half of the participants and the third-person narrative voice (3PNV) with the other half. We found that the SAR storyteller elicited significantly more empathy when the story target of empathy matched the SAR narrator, i.e., was also a robot. Additionally, the 1PNV robot elicited significantly more empathy and was perceived as more human-like, easy to interact with, and trustworthy than the 3PNV robot. Finally, participants who empathized more with the robot displayed facial expressions consistent with the emotional story content. These insights inform the design of SAR storytellers capable of eliciting empathy toward creating compelling and effective human-robot interactions.
Affiliation(s)
- Micol Spitale
- Politecnico di Milano, Italy and University of Southern California, United States
- Mahima Gupta
- University of Southern California, United States
- Hao Xi
- University of Southern California, United States
19. Daher K, Saad D, Mugellini E, Lalanne D, Abou Khaled O. Empathic and Empathetic Systematic Review to Standardize the Development of Reliable and Sustainable Empathic Systems. SENSORS (BASEL, SWITZERLAND) 2022; 22:3046. [PMID: 35459031] [PMCID: PMC9031842] [DOI: 10.3390/s22083046]
Abstract
Empathy plays a crucial role in human life, and the evolution of technology is affecting the way humans interact with machines. The area of affective computing is attracting considerable interest within the human-computer interaction community. However, the area of empathic interactions has not been explored in depth. This systematic review explores the latest advances in empathic interactions and behaviour. We provide key insights into the exploration, design, implementation, and evaluation of empathic interactions. Data were collected from the CHI conference between 2011 and 2021 to provide an overview of all studies covering empathic and empathetic interactions. Two authors screened and extracted data from a total of 59 articles relevant to this review. The features extracted cover interaction modalities, context understanding, usage fields, goals, and evaluation. The results reported here can be used as a foundation for the future research and development of empathic systems and interfaces and as a starting point for the gaps found.
Affiliation(s)
- Karl Daher
- HumanTech Institute, HEIA-FR, University of Applied Sciences Western Switzerland HES-SO, 2800 Delémont, Switzerland
- Dahlia Saad
- HumanTech Institute, HEIA-FR, University of Applied Sciences Western Switzerland HES-SO, 2800 Delémont, Switzerland
- Elena Mugellini
- HumanTech Institute, HEIA-FR, University of Applied Sciences Western Switzerland HES-SO, 2800 Delémont, Switzerland
- Denis Lalanne
- Human-IST, University of Fribourg, 1700 Fribourg, Switzerland
- Omar Abou Khaled
- HumanTech Institute, HEIA-FR, University of Applied Sciences Western Switzerland HES-SO, 2800 Delémont, Switzerland
20. Churamani N, Barros P, Gunes H, Wermter S. Affect-Driven Learning of Robot Behaviour for Collaborative Human-Robot Interactions. Front Robot AI 2022; 9:717193. [PMID: 35265672] [PMCID: PMC8898942] [DOI: 10.3389/frobt.2022.717193]
Abstract
Collaborative interactions require social robots to share the users’ perspective on the interactions and adapt to the dynamics of their affective behaviour. Yet, current approaches for affective behaviour generation in robots focus on instantaneous perception to generate a one-to-one mapping between observed human expressions and static robot actions. In this paper, we propose a novel framework for affect-driven behaviour generation in social robots. The framework consists of (i) a hybrid neural model for evaluating facial expressions and speech of the users, forming intrinsic affective representations in the robot, (ii) an Affective Core, that employs self-organising neural models to embed behavioural traits like patience and emotional actuation that modulate the robot’s affective appraisal, and (iii) a Reinforcement Learning model that uses the robot’s appraisal to learn interaction behaviour. We investigate the effect of modelling different affective core dispositions on the affective appraisal and use this affective appraisal as the motivation to generate robot behaviours. For evaluation, we conduct a user study (n = 31) where the NICO robot acts as a proposer in the Ultimatum Game. The effect of the robot’s affective core on its negotiation strategy is witnessed by participants, who rank a patient robot with high emotional actuation higher on persistence, while an impatient robot with low emotional actuation is rated higher on its generosity and altruistic behaviour.
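Editor's illustration (a toy analogy, not the paper's architecture): the idea of a reinforcement learner whose reward is shaped by an affective appraisal of the user's reaction, modulated by a disposition such as patience, can be sketched compactly. Actions, valence values, and the appraisal rule are invented for illustration.

```python
# Toy sketch: bandit-style learner with an affect-modulated reward signal.
import random

actions = ["generous_offer", "fair_offer", "low_offer"]
q = {a: 0.0 for a in actions}       # estimated value of each action
alpha, epsilon = 0.1, 0.2           # learning rate, exploration rate

def user_expression(action: str) -> float:
    """Hypothetical valence of the user's reaction, in roughly [-1, 1]."""
    return {"generous_offer": 0.8, "fair_offer": 0.3, "low_offer": -0.6}[action] + random.uniform(-0.2, 0.2)

def affective_appraisal(valence: float, patience: float) -> float:
    """A patient disposition discounts negative user reactions less harshly."""
    return valence if valence >= 0 else valence * (1.0 - patience)

for _ in range(500):
    a = random.choice(actions) if random.random() < epsilon else max(q, key=q.get)
    reward = affective_appraisal(user_expression(a), patience=0.5)
    q[a] += alpha * (reward - q[a])

print(q)  # the learner drifts toward actions that elicit positive user affect
```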
Affiliation(s)
- Nikhil Churamani
- Department of Computer Science and Technology, University of Cambridge, Cambridge, United Kingdom
- Pablo Barros
- Cognitive Architecture for Collaborative Technologies (CONTACT) Unit, Istituto Italiano di Tecnologia, Genova, Italy
- Hatice Gunes
- Department of Computer Science and Technology, University of Cambridge, Cambridge, United Kingdom
- Stefan Wermter
- Knowledge Technology, Department of Informatics, University of Hamburg, Hamburg, Germany
21. Busse TS, Kernebeck S, Nef L, Rebacz P, Kickbusch I, Ehlers JP. Views on Using Social Robots in Professional Caregiving: Content Analysis of a Scenario Method Workshop. J Med Internet Res 2021; 23:e20046. [PMID: 34757318] [PMCID: PMC8663608] [DOI: 10.2196/20046]
Abstract
BACKGROUND Interest in digital technologies in the health care sector is growing and can be a way to reduce the burden on professional caregivers while helping people to become more independent. Social robots are regarded as a special form of technology that can be usefully applied in professional caregiving with the potential to focus on interpersonal contact. While implementation is progressing slowly, a debate on the concepts and applications of social robots in future care is necessary. OBJECTIVE In addition to existing studies with a focus on societal attitudes toward social robots, there is a need to understand the views of professional caregivers and patients. This study used desired future scenarios to collate the perspectives of experts and analyze the significance for developing the place of social robots in care. METHODS In February 2020, an expert workshop was held with 88 participants (health professionals and educators; [PhD] students of medicine, health care, professional care, and technology; patient advocates; software developers; government representatives; and research fellows) from Austria, Germany, and Switzerland. Using the scenario methodology, the possibilities of analog professional care (Analog Care), fully robotic professional care (Robotic Care), teams of robots and professional caregivers (Deep Care), and professional caregivers supported by robots (Smart Care) were discussed. The scenarios were used as a stimulus for the development of ideas about future professional caregiving. The discussion was evaluated using qualitative content analysis. RESULTS The majority of the experts were in favor of care in which people are supported by technology (Deep Care) and developed similar scenarios with a focus on dignity-centeredness. The discussions then focused on the steps necessary for its implementation, highlighting a strong need for the development of eHealth competence in society, a change in the training of professional caregivers, and cross-sectoral concepts. The experts also saw user acceptance as crucial to the use of robotics. This involves the acceptance of both professional caregivers and care recipients. CONCLUSIONS The literature review and subsequent workshop revealed how decision-making about the value of social robots depends on personal characteristics related to experience and values. There is therefore a strong need to recognize individual perspectives of care before social robots become an integrated part of care in the future.
Affiliation(s)
- Theresa Sophie Busse
- Department of Didactics and Educational Research in Healthcare, Faculty of Health, Witten/Herdecke University, Witten, Germany
- Sven Kernebeck
- Department of Didactics and Educational Research in Healthcare, Faculty of Health, Witten/Herdecke University, Witten, Germany
- Patrick Rebacz
- Visionom, Witten, Germany; Department and Institute for Anatomy and Clinical Morphology, Faculty of Health, Witten/Herdecke University, Witten, Germany
- Jan Peter Ehlers
- Department of Didactics and Educational Research in Healthcare, Faculty of Health, Witten/Herdecke University, Witten, Germany
22. In Search of Embodied Conversational and Explainable Agents for Health Behaviour Change and Adherence. MULTIMODAL TECHNOLOGIES AND INTERACTION 2021. [DOI: 10.3390/mti5090056]
Abstract
Conversational agents offer promise to provide an alternative to costly and scarce access to human health providers. Particularly in the context of adherence to treatment advice and health behavior change, they can provide an ongoing coaching role to motivate and keep the health consumer on track. Due to the recognized importance of face-to-face communication and establishment of a therapist-patient working alliance as the biggest single predictor of adherence, our review focuses on embodied conversational agents (ECAs) and their use in health and well-being interventions. The article also introduces ECAs that provide explanations of their recommendations, known as explainable agents (XAs), as a way to build trust and enhance the working alliance towards improved behavior change. Of particular promise is work in which XAs are able to engage in conversation to learn about their user and personalize their recommendations based on their knowledge of the user and then tailor their explanations to the beliefs and goals of the user to increase relevancy and motivation and address possible barriers to increase intention to perform the healthy behavior.
23. Lee EE, Torous J, De Choudhury M, Depp CA, Graham SA, Kim HC, Paulus MP, Krystal JH, Jeste DV. Artificial Intelligence for Mental Health Care: Clinical Applications, Barriers, Facilitators, and Artificial Wisdom. BIOLOGICAL PSYCHIATRY. COGNITIVE NEUROSCIENCE AND NEUROIMAGING 2021; 6:856-864. [PMID: 33571718] [PMCID: PMC8349367] [DOI: 10.1016/j.bpsc.2021.02.001]
Abstract
Artificial intelligence (AI) is increasingly employed in health care fields such as oncology, radiology, and dermatology. However, the use of AI in mental health care and neurobiological research has been modest. Given the high morbidity and mortality in people with psychiatric disorders, coupled with a worsening shortage of mental health care providers, there is an urgent need for AI to help identify high-risk individuals and provide interventions to prevent and treat mental illnesses. While published research on AI in neuropsychiatry is rather limited, there is a growing number of successful examples of AI's use with electronic health records, brain imaging, sensor-based monitoring systems, and social media platforms to predict, classify, or subgroup mental illnesses as well as problems such as suicidality. This article is the product of a study group held at the American College of Neuropsychopharmacology conference in 2019. It provides an overview of AI approaches in mental health care, seeking to help with clinical diagnosis, prognosis, and treatment, as well as clinical and technological challenges, focusing on multiple illustrative publications. Although AI could help redefine mental illnesses more objectively, identify them at a prodromal stage, personalize treatments, and empower patients in their own care, it must address issues of bias, privacy, transparency, and other ethical concerns. These aspirations reflect human wisdom, which is more strongly associated than intelligence with individual and societal well-being. Thus, the future AI or artificial wisdom could provide technology that enables more compassionate and ethically sound care to diverse groups of people.
Affiliation(s)
- Ellen E Lee
- Department of Psychiatry, University of California San Diego, San Diego, California; Sam and Rose Stein Institute for Research on Aging, University of California San Diego, San Diego, California; VA San Diego Healthcare System, San Diego, California
- John Torous
- Department of Psychiatry, Beth Israel Deaconess Medical Center and Harvard University, Boston, Massachusetts
- Munmun De Choudhury
- School of Interactive Computing, Georgia Institute of Technology, Atlanta, Georgia
- Colin A Depp
- Department of Psychiatry, University of California San Diego, San Diego, California; Sam and Rose Stein Institute for Research on Aging, University of California San Diego, San Diego, California; VA San Diego Healthcare System, San Diego, California
- Sarah A Graham
- Department of Psychiatry, University of California San Diego, San Diego, California; Sam and Rose Stein Institute for Research on Aging, University of California San Diego, San Diego, California
- Ho-Cheol Kim
- AI and Cognitive Software, IBM Research-Almaden, San Jose, California
- John H Krystal
- Department of Psychiatry, Yale University, New Haven, Connecticut
- Dilip V Jeste
- Department of Psychiatry, University of California San Diego, San Diego, California; Department of Neurosciences, University of California San Diego, San Diego, California; Sam and Rose Stein Institute for Research on Aging, University of California San Diego, San Diego, California.
24. Ozeki T, Mouri T, Sugiura H, Yano Y, Miyosawa K. Impression Survey and Grounded Theory Analysis of the Development of Medication Support Robots for Patients with Schizophrenia. JOURNAL OF ROBOTICS AND MECHATRONICS 2021. [DOI: 10.20965/jrm.2021.p0747]
Abstract
Medication is a key treatment for patients with schizophrenia. However, medication adherence tends to decline easily in patients with schizophrenia during long-term treatment, and there is a chronic shortage of specialists who provide medication support, such as visiting nurses. In addition, these patients often do not use smartphones or PCs in their daily lives; being unfamiliar with cyberspace, they need a direct approach in the physical world. This study aims to improve the home treatment environment using robot technology that can reach, in the physical world, patients with schizophrenia who need medication support. In this study, a collaboration between psychiatric nursing specialists and medical engineers investigated the interaction between communication robots and patients. The results showed that the robot was accepted by patients with schizophrenia as a talking partner. The amount of talking by the robot seemed to affect patients' impressions of it. Utterance process analysis showed that the smoothness of the conversation affected the relationship between robots and patients.
25. Biancardi B, Dermouche S, Pelachaud C. Adaptation Mechanisms in Human–Agent Interaction: Effects on User’s Impressions and Engagement. FRONTIERS IN COMPUTER SCIENCE 2021. [DOI: 10.3389/fcomp.2021.696682]
Abstract
Adaptation is a key mechanism in human–human interaction. In our work, we aim at endowing embodied conversational agents with the ability to adapt their behavior when interacting with a human interlocutor. With the goal of better understanding what the main challenges concerning adaptive agents are, we investigated the effects on the user's experience of three adaptation models for a virtual agent. The adaptation mechanisms performed by the agent take into account the user's reaction and learn how to adapt on the fly during the interaction. The agent's adaptation is realized at several levels (i.e., at the behavioral, conversational, and signal levels) and focuses on improving the user's experience along different dimensions (i.e., the user's impressions and engagement). In our first two studies, we aim to learn the agent's multimodal behaviors and conversational strategies to dynamically optimize the user's engagement and impressions of the agent, by taking them as input during the learning process. In our third study, our model takes both the user's and the agent's past behavior as input and predicts the agent's next behavior. Our adaptation models have been evaluated through experimental studies sharing the same interacting scenario, with the agent playing the role of a virtual museum guide. These studies showed the impact of the adaptation mechanisms on the user's experience of the interaction and their perception of the agent. An adaptive agent tended to be perceived more positively than a nonadaptive one. Finally, the effects of people's a priori expectations about virtual agents found in our studies highlight the importance of taking into account the user's expectancies in human–agent interaction.
26. Savery R, Weinberg G. Robots and emotion: a survey of trends, classifications, and forms of interaction. Adv Robot 2021. [DOI: 10.1080/01691864.2021.1957014]
Affiliation(s)
- Richard Savery
- Georgia Tech Center for Music Technology, Georgia Institute of Technology, Atlanta, GA, USA
- Gil Weinberg
- Georgia Tech Center for Music Technology, Georgia Institute of Technology, Atlanta, GA, USA
27. Remote Virtual Simulation for Incident Commanders—Cognitive Aspects. APPLIED SCIENCES-BASEL 2021. [DOI: 10.3390/app11146434]
Abstract
Due to the COVID-19 restrictions, on-site Incident Commander (IC) practical training and examinations in Sweden were canceled as of March 2020. The graduation of one IC class was, however, conducted through Remote Virtual Simulation (RVS), the first such examination to our current knowledge. This paper presents the necessary enablers for setting up RVS and its influence on cognitive aspects of assessing practical competences. Data were gathered through observations, questionnaires, and interviews from students and instructors, using action-case research methodology. The results show the potential of RVS for supporting higher cognitive processes, such as recognition, comprehension, problem solving, and decision making, and show that it allowed students to demonstrate whether they had achieved the required learning objectives. Other reported benefits were the value of not gathering people (imposed by the pandemic), experiencing new, challenging incident scenarios, increased motivation for applying RVS-based training both for students and instructors, and reduced traveling (corresponding to 15,400 km for a class). While further research is needed for defining how to integrate RVS in practical training and assessment for IC education and for increased generalizability, this research pinpoints current benefits and limitations, in relation to the cognitive aspects and in comparison to previous examination formats.
Collapse
|
28
|
Carolus A, Wienrich C, Törke A, Friedel T, Schwietering C, Sperzel M. ‘Alexa, I feel for you!’ Observers’ Empathetic Reactions towards a Conversational Agent. FRONTIERS IN COMPUTER SCIENCE 2021. [DOI: 10.3389/fcomp.2021.682982] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
Conversational agents and smart speakers have grown in popularity, offering a variety of functions that are available through intuitive speech operation. In contrast to the standard dyad of a single user and a device, voice-controlled operations can be observed by further attendees, resulting in new, more social usage scenarios. Referring to the concept of ‘media equation’ and to research on the idea of ‘computers as social actors,’ which describes the potential of technology to trigger emotional reactions in users, this paper examines the capacity of smart speakers to elicit empathy in observers of interactions. In a 2 × 2 online experiment, 140 participants watched a video of a man talking to an Amazon Echo either rudely or neutrally (factor 1), addressing it as ‘Alexa’ or ‘Computer’ (factor 2). Controlling for participants’ trait empathy, the rude treatment resulted in significantly higher ratings of empathy with the device compared to the neutral treatment. The form of address had no significant effect. Results were independent of the participants’ gender and usage experience, indicating a rather universal effect that confirms the basic idea of the media equation. Implications for users, developers and researchers are discussed in the light of (future) omnipresent voice-based technology interaction scenarios.
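The 2 × 2 design with a trait-empathy covariate lends itself to a factorial regression; the following is a minimal sketch of that kind of analysis, not the authors' code. The file name and column names (treatment, address, trait_empathy, empathy_with_device) are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# One row per participant: observed empathy rating, the two manipulated factors,
# and trait empathy as a covariate.
df = pd.read_csv("observer_ratings.csv")   # hypothetical data file

model = smf.ols(
    "empathy_with_device ~ C(treatment) * C(address) + trait_empathy",
    data=df,
).fit()
print(model.summary())   # inspect main effects, interaction, and covariate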
Collapse
|
29
|
Tian L, Oviatt S. A Taxonomy of Social Errors in Human-Robot Interaction. ACM TRANSACTIONS ON HUMAN-ROBOT INTERACTION 2021. [DOI: 10.1145/3439720] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Abstract
Robotic applications have entered various aspects of our lives, such as health care and educational services. In such Human-Robot Interaction (HRI), trust and mutual adaptation are established and maintained through a positive social relationship between a user and a robot. This social relationship relies on the perceived competence of a robot on the social-emotional dimension. However, because of technical limitations and user heterogeneity, current HRI is far from error-free, especially when a system leaves controlled lab environments and is applied to in-the-wild conditions. Errors in HRI may either degrade a user’s perception of a robot’s capability in achieving a task (defined as performance errors in this work) or degrade a user’s perception of a robot’s socio-affective competence (defined as social errors in this work). The impact of these errors and effective strategies to handle such an impact remain open questions. We focus on social errors in HRI in this work. In particular, we identify the major attributes of perceived socio-affective competence by reviewing human social interaction studies and HRI error studies. This motivates us to propose a taxonomy of social errors in HRI. We then discuss the impact of social errors situated in three representative HRI scenarios. This article provides foundations for a systematic analysis of the social-emotional dimension of HRI. The proposed taxonomy of social errors encourages the development of user-centered HRI systems, designed to offer positive and adaptive interaction experiences and improved interaction outcomes.
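As a rough illustration of how the performance/social error distinction described above could be encoded when logging HRI errors, here is a minimal sketch under stated assumptions: only the two top-level classes come from the abstract; the example error descriptions and scenario labels are illustrative, not the paper's taxonomy.
from dataclasses import dataclass
from enum import Enum

class ErrorType(Enum):
    PERFORMANCE = "performance"   # degrades perceived task capability
    SOCIAL = "social"             # degrades perceived socio-affective competence

@dataclass
class HRIError:
    error_type: ErrorType
    description: str
    scenario: str                 # e.g., "healthcare", "education"

log = [
    HRIError(ErrorType.SOCIAL, "interrupted the user mid-sentence", "education"),
    HRIError(ErrorType.PERFORMANCE, "failed to grasp the requested object", "healthcare"),
]
social_errors = [e for e in log if e.error_type is ErrorType.SOCIAL]
print(len(social_errors), "social error(s) logged")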
Collapse
Affiliation(s)
- Leimin Tian
- Faculty of Engineering, Monash University, Australia
| | - Sharon Oviatt
- Faculty of Engineering, Monash University, Australia
| |
Collapse
|
30
|
Saunderson S, Nejat G. Robots Asking for Favors: The Effects of Directness and Familiarity on Persuasive HRI. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2021.3060369] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
|
31
|
|
32
|
Abstract
Virtual and physical embodiments of interactive artificial agents utilize similar core technologies for perception, planning, and interaction and engage with people in similar ways. Thus, designers have typically considered these embodiments to be broadly interchangeable, and the choice of embodiment primarily depends on the practical demands of an application. This paper makes the case that virtual and physical embodiments elicit fundamentally different "frames of mind" in the users of the technology and follow different metaphors for interaction, resulting in diverging expectations, forms of engagement, and eventually interaction outcomes. It illustrates these differences through the lens of five key mechanisms: "situativity, interactivity, agency, proxemics, and believability". It also outlines the design implications of the two frames of mind, arguing for different domains of interaction serving as appropriate context for virtual and physical embodiments.
Collapse
Affiliation(s)
- Bilge Mutlu
- Department of Computer Sciences, University of Wisconsin–Madison, Madison, WI 53706, USA
| |
Collapse
|
33
|
Castellano G, De Carolis B, D’Errico F, Macchiarulo N, Rossano V. PeppeRecycle: Improving Children’s Attitude Toward Recycling by Playing with a Social Robot. Int J Soc Robot 2021. [DOI: 10.1007/s12369-021-00754-0] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
In this paper, we investigate the use of a social robot as an engaging interface for a serious game intended to make children more aware of and well disposed towards waste recycling. The game has been designed as a competition between the robot Pepper and a child. During the game, the robot simultaneously challenges and teaches the child how to recycle waste materials. To endow the robot with the capability to play as a game opponent in a real-world context, it is equipped with an image recognition module based on a Convolutional Neural Network that detects and classifies the waste material as a child would do, i.e. by simply looking at it. A formal experiment involving 51 primary school students was carried out to evaluate the effectiveness of the game in terms of different factors such as the interaction with the robot, the users’ cognitive and affective dimensions towards ecological sustainability, and the propensity to recycle. The obtained results are encouraging and draw promising scenarios for educational robotics in changing children’s attitudes toward recycling. Indeed, Pepper turns out to be positively evaluated by children as a trustworthy and believable companion, which allows children to concentrate on the “memorization” task during the game. Moreover, the use of real objects as waste items during the game turns out to be a successful approach not only for perceived learning effectiveness but also for the children’s engagement.
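A CNN-based waste classifier of the kind mentioned above is commonly built by transfer learning; the following is a minimal sketch under that assumption, not the PeppeRecycle implementation. The class count, image folder, and training settings are hypothetical.
import tensorflow as tf

NUM_CLASSES = 4   # e.g., paper, plastic, glass, organic (hypothetical categories)

# Pretrained backbone with a small classification head on top.
base = tf.keras.applications.MobileNetV2(include_top=False, weights="imagenet",
                                          input_shape=(224, 224, 3))
base.trainable = False
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Labeled images organized in one subfolder per waste category (hypothetical path).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "waste_images/", image_size=(224, 224), batch_size=32)
model.fit(train_ds, epochs=5)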
Collapse
|
34
|
Adapting a Virtual Advisor’s Verbal Conversation Based on Predicted User Preferences: A Study of Neutral, Empathic and Tailored Dialogue. MULTIMODAL TECHNOLOGIES AND INTERACTION 2020. [DOI: 10.3390/mti4030055] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
Virtual agents that improve the lives of humans need to be more than user-aware and adaptive to the user’s current state and behavior. Additionally, they need to apply expertise gained from experience, driving their adaptive behavior through a deep understanding of the user’s features (such as gender, culture, personality, and psychological state). Our work has extended the FAtiMA (Fearnot AffecTive Mind Architecture) cognitive agent architecture with an Adaptive Engine. We use machine learning to acquire the agent’s expertise by capturing a collection of user profiles into a user model and developing the agent’s expertise from that model. In this paper, we describe a study to evaluate the Adaptive Engine, which compares the benefit (i.e., reduced stress, increased rapport) of tailoring dialogue to the specific user (Adaptive group) with dialogues that are either empathic (Empathic group) or neutral (Neutral group). Results showed a significant reduction in stress in the empathic and neutral groups, but not in the adaptive group. Analyses of rule accuracy, participants’ dialogue preferences, and individual differences reveal that the three groups had different needs for empathic dialogue and highlight the importance and challenges of getting the tailoring right.
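The general idea of learning dialogue tailoring from a user model can be sketched as a simple classifier that predicts a preferred dialogue style from user features. This is an assumed illustration, not the FAtiMA Adaptive Engine; the feature names, toy data, and style labels are hypothetical.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

# Toy user model: encoded features plus the dialogue style each user responded to best.
profiles = pd.DataFrame({
    "gender":      [0, 1, 1, 0, 1, 0],
    "neuroticism": [3, 7, 6, 2, 8, 4],
    "stress":      [2, 8, 7, 3, 9, 5],
    "preferred":   ["neutral", "empathic", "empathic", "neutral", "tailored", "neutral"],
})

clf = DecisionTreeClassifier(max_depth=3).fit(
    profiles[["gender", "neuroticism", "stress"]], profiles["preferred"])

new_user = pd.DataFrame({"gender": [1], "neuroticism": [6], "stress": [7]})
print(clf.predict(new_user))   # dialogue style the agent would select for this user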
Collapse
|
35
|
Pepito JA, Ito H, Betriana F, Tanioka T, Locsin RC. Intelligent humanoid robots expressing artificial humanlike empathy in nursing situations. Nurs Philos 2020; 21:e12318. [PMID: 33462939 DOI: 10.1111/nup.12318] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2020] [Revised: 06/28/2020] [Accepted: 06/29/2020] [Indexed: 12/22/2022]
Abstract
Intelligent humanoid robots (IHRs) are increasingly likely to be integrated into nursing practice. However, a proper integration of IHRs requires a detailed description and explanation of their essential capabilities, particularly regarding their competencies in replicating and portraying emotive functions such as empathy. Existing humanoid robots can exhibit rudimentary forms of empathy; as these machines slowly become commonplace in healthcare settings, they will be expected to express empathy as a natural function, rather than merely to portray artificial empathy as a replication of human empathy. This article has a twofold purpose: first, to consider the impact of artificial empathy in nursing and, second, to describe the influence of Affective Developmental Robotics (ADR) on the empathic behaviour expected of artificial humanoid robots. ADR has been shown to be one means by which humanoid nurse robots can achieve more relatable expressions of artificial empathy, and it will be a vital model for the intelligent humanoid robots currently being developed as nurse robots for the healthcare industry. A discussion of IHRs demonstrating artificial empathy is critical to nursing practice today, particularly in healthcare settings dense with technology.
Collapse
Affiliation(s)
- Joseph Andrew Pepito
- College of Allied Medical Sciences, Cebu Doctors' University, Cebu City, Philippines
| | - Hirokazu Ito
- Department of Nursing, Tokushima University, Tokushima, Japan
| | - Feni Betriana
- Department of Health Sciences, Tokushima University, Graduate School, Tokushima, Japan
| | - Tetsuya Tanioka
- Department of Nursing Outcomes Management, Tokushima University Graduate School of Biomedical Sciences, Tokushima, Japan
| | - Rozzano C Locsin
- Tokushima University Graduate School of Biomedical Sciences, Tokushima, Japan.,Florida Atlantic University, Boca Raton, FL, USA
| |
Collapse
|
36
|
Yalçın ÖN, DiPaola S. Modeling empathy: building a link between affective and cognitive processes. Artif Intell Rev 2019. [DOI: 10.1007/s10462-019-09753-0] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/30/2022]
|
37
|
Tessier MH, Gingras C, Robitaille N, Jackson PL. Toward dynamic pain expressions in avatars: Perceived realism and pain level of different action unit orders. COMPUTERS IN HUMAN BEHAVIOR 2019. [DOI: 10.1016/j.chb.2019.02.001] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
|
38
|
Provoost S, Ruwaard J, van Breda W, Riper H, Bosse T. Validating Automated Sentiment Analysis of Online Cognitive Behavioral Therapy Patient Texts: An Exploratory Study. Front Psychol 2019; 10:1065. [PMID: 31156504 PMCID: PMC6530336 DOI: 10.3389/fpsyg.2019.01065] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2018] [Accepted: 04/24/2019] [Indexed: 11/21/2022] Open
Abstract
Introduction: Sentiment analysis may be a useful technique to derive a user’s emotional state from free text input, allowing for more empathic automated feedback in online cognitive behavioral therapy (iCBT) interventions for psychological disorders such as depression. As guided iCBT is considered more effective than unguided iCBT, such automated feedback may help close the gap between the two. The accuracy of automated sentiment analysis is domain dependent, and it is unclear how well the technology applies to iCBT. This paper presents an empirical study in which automated sentiment analysis by an algorithm for the Dutch language is validated against human judgment.
Methods: A total of 493 iCBT user texts were evaluated on overall sentiment and the presence of five specific emotions by an algorithm, and by 52 psychology students who each evaluated 75 randomly selected texts, providing about eight human evaluations per text. Inter-rater agreement (IRR) between the algorithm and humans, and among humans, was analyzed by calculating the intra-class correlation under a numerical interpretation of the data, and Cohen’s kappa and Krippendorff’s alpha under a categorical interpretation.
Results: All analyses indicated moderate agreement between the algorithm and average human judgment with respect to evaluating overall sentiment, and low agreement for the specific emotions. Somewhat surprisingly, the same was the case for the IRR among human judges, which means that the algorithm performed about as well as a randomly selected human judge. Thus, considering average human judgment as a benchmark for the applicability of automated sentiment analysis, the technique can be considered for practical application.
Discussion/Conclusion: The low human-human agreement on the presence of emotions may be due to the nature of the texts; it may simply be difficult for humans to agree on the presence of the selected emotions, or perhaps trained therapists would have reached more consensus. Future research may focus on validating the algorithm against a more solid benchmark, on applying the algorithm in an application in which empathic feedback is provided, for example, by an embodied conversational agent, or on improving the algorithm for the iCBT domain with a bottom-up machine learning approach.
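The agreement statistics named above can be computed with standard tooling; the following is a minimal sketch, not the authors' analysis pipeline. It assumes the third-party krippendorff package and scikit-learn; the rating arrays are hypothetical, with np.nan marking texts a rater did not score.
import numpy as np
from sklearn.metrics import cohen_kappa_score
import krippendorff

# Categorical sentiment labels per text from the algorithm and one human rater.
algorithm = np.array([1, 0, 2, 1, 1, 0, 2, 2])
human_1   = np.array([1, 0, 2, 2, 1, 0, 1, 2])
print("Cohen's kappa:", cohen_kappa_score(algorithm, human_1))

# One row per rater, one column per text; missing ratings are np.nan.
ratings = np.array([
    [1, 0, 2, 1, 1, 0, 2, 2],
    [1, 0, 2, 2, 1, 0, 1, 2],
    [1, 1, 2, 1, np.nan, 0, 2, 2],
])
print("Krippendorff's alpha:",
      krippendorff.alpha(reliability_data=ratings, level_of_measurement="nominal"))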
Collapse
Affiliation(s)
- Simon Provoost
- Department of Clinical, Neuro- and Developmental Psychology, Section Clinical Psychology, Amsterdam Public Health Research Institute, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
| | - Jeroen Ruwaard
- Department of Psychiatry, Amsterdam Public Health Research Institute, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, Netherlands.,GGZ inGeest Specialized Mental Health Care, Amsterdam, Netherlands
| | | | - Heleen Riper
- Department of Clinical, Neuro- and Developmental Psychology, Section Clinical Psychology, Amsterdam Public Health Research Institute, Vrije Universiteit Amsterdam, Amsterdam, Netherlands.,Department of Psychiatry, Amsterdam Public Health Research Institute, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, Netherlands.,GGZ inGeest Specialized Mental Health Care, Amsterdam, Netherlands.,Centre for Telepsychiatry, Institute of Clinical Research, University of Southern Denmark, Odense, Denmark
| | - Tibor Bosse
- Behavioural Science Institute, Radboud University Nijmegen, Nijmegen, Netherlands
| |
Collapse
|
39
|
Alves-Oliveira P, Sequeira P, Melo FS, Castellano G, Paiva A. Empathic Robot for Group Learning. ACM TRANSACTIONS ON HUMAN-ROBOT INTERACTION 2019. [DOI: 10.1145/3300188] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Abstract
This work explores a group learning scenario with an autonomous empathic robot. We address two research questions: (1) Can an autonomous robot designed with empathic competencies foster collaborative learning in a group context? (2) Can an empathic robot sustain positive educational outcomes in long-term collaborative learning interactions with groups of students? To answer these questions, we developed an autonomous robot with empathic competencies that is able to interact with a group of students in a learning activity about sustainable development. Two studies were conducted. The first study compares learning outcomes in children across three conditions: learning with an empathic robot, learning with a robot without empathic capabilities, and learning without a robot. The results show that the autonomous robot with empathy fosters meaningful discussions about sustainability, which is a learning outcome in sustainability education. The second study features groups of students who interacted with the robot in a school classroom for 2 months. The long-term educational interaction did not seem to provide significant learning gains, although there was a change in game actions toward greater sustainability during gameplay. This result reflects the need for more long-term research in the field of educational robots for group learning.
Collapse
Affiliation(s)
- Patrícia Alves-Oliveira
- Instituto Universitário de Lisboa (ISCTE-IUL), CIS-IUL, and Group on Artificial Intelligence for People and Society, GAIPS, from INESC-ID, Lisbon, Portugal
| | - Pedro Sequeira
- Northeastern University, College of Computer and Information Science, Boston, MA, USA
| | - Francisco S. Melo
- Instituto Superior Técnico, Universidade de Lisboa, and Group on Artificial Intelligence for People and Society, GAIPS, from INESC-ID, Porto Salvo, Portugal
| | - Ginevra Castellano
- Department of Information Technology, Uppsala University, Uppsala, Sweden
| | - Ana Paiva
- Instituto Superior Técnico, Universidade de Lisboa, and Group on Artificial Intelligence for People and Society, GAIPS, from INESC-ID, Porto Salvo, Portugal
| |
Collapse
|
40
|
Heyes C. Empathy is not in our genes. Neurosci Biobehav Rev 2018; 95:499-507. [PMID: 30399356 DOI: 10.1016/j.neubiorev.2018.11.001] [Citation(s) in RCA: 70] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2018] [Revised: 11/02/2018] [Accepted: 11/02/2018] [Indexed: 01/10/2023]
Abstract
In academic and public life empathy is seen as a fundamental force of morality - a psychological phenomenon, rooted in biology, with profound effects in law, policy, and international relations. But the roots of empathy are not as firm as we like to think. The matching mechanism that distinguishes empathy from compassion, envy, schadenfreude, and sadism is a product of learning. Here I present a dual system model that distinguishes Empathy1, an automatic process that catches the feelings of others, from Empathy2, controlled processes that interpret those feelings. Research with animals, infants, adults and robots suggests that the mechanism of Empathy1, emotional contagion, is constructed in the course of development through social interaction. Learned Matching implies that empathy is both agile and fragile. It can be enhanced and redirected by novel experience, and broken by social change.
Collapse
Affiliation(s)
- Cecilia Heyes
- All Souls College & Department of Experimental Psychology, University of Oxford, Oxford, OX1 4AL, United Kingdom.
| |
Collapse
|
41
|
Wachsmuth I. Robots Like Me: Challenges and Ethical Issues in Aged Care. Front Psychol 2018; 9:432. [PMID: 29666601 PMCID: PMC5892289 DOI: 10.3389/fpsyg.2018.00432] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2017] [Accepted: 03/15/2018] [Indexed: 11/25/2022] Open
Affiliation(s)
- Ipke Wachsmuth
- Center of Excellence Cognitive Interaction Technology, Bielefeld University, Bielefeld, Germany
| |
Collapse
|