1
Olatunji SA, Nguyen V, Cakmak M, Edsinger A, Kemp CC, Rogers WA, Mahajan HP. Immersive participatory design of assistive robots to support older adults. Ergonomics 2024; 67:717-731. [PMID: 38351886] [DOI: 10.1080/00140139.2024.2312529]
Abstract
Assistive robots have the potential to support independence, enhance safety, and lower healthcare costs for older adults, as well as alleviate the demands of their care partners. However, ensuring that these robots will effectively and reliably address end-user needs in the long term requires user-specific design factors to be considered during the robot development process. To identify these design factors, we embedded Stretch, a mobile manipulator created by Hello Robot Inc., in the home of an older adult with motor impairments and his care partner for four weeks to support them with everyday activities. An occupational therapist and a robotics engineer lived with them during this period, employing an immersive participatory design approach to co-design and customise the robot with them. We highlight the benefits of this immersive participatory design experience and provide insights into robot design that can be applied broadly to other assistive technologies.
Affiliation(s)
- Samuel A Olatunji, College of Applied Health Sciences, University of Illinois Urbana-Champaign, Urbana, IL, USA
- Vy Nguyen, Research and Development, Hello Robot Inc., Martinez, CA, USA
- Maya Cakmak, School of Computer Science and Engineering, University of Washington, Seattle, WA, USA
- Aaron Edsinger, Research and Development, Hello Robot Inc., Martinez, CA, USA
- Charles C Kemp, Research and Development, Hello Robot Inc., Martinez, CA, USA
- Wendy A Rogers, College of Applied Health Sciences, University of Illinois Urbana-Champaign, Urbana, IL, USA
- Harshal P Mahajan, College of Applied Health Sciences, University of Illinois Urbana-Champaign, Urbana, IL, USA
2
Kappas A, Gratch J. These Aren't The Droids You Are Looking for: Promises and Challenges for the Intersection of Affective Science and Robotics/AI. Affect Sci 2023; 4:580-585. [PMID: 37744970] [PMCID: PMC10514249] [DOI: 10.1007/s42761-023-00211-3]
Abstract
AI research focused on interactions with humans, particularly in the form of robots or virtual agents, has expanded in the last two decades to include concepts related to affective processes. Affective computing is an emerging field that deals with issues such as how diagnosing the affective states of users can improve such interactions, also with a view to demonstrating affective behavior towards the user. This type of research is often based on two beliefs: (1) artificial emotional intelligence will improve human-computer interaction (or, more specifically, human-robot interaction), and (2) we understand the role of affective behavior in human interaction well enough to tell artificial systems what to do. Within affective science, however, the focus of research is often to test a particular assumption, such as "smiles affect liking." Such a focus does not provide the information necessary to synthesize affective behavior in long, dynamic, real-time interactions. In consequence, theories do not play a large role in the development of artificial affective systems by engineers; instead, self-learning systems develop their behavior from large corpora of recorded interactions. The status quo is characterized by measurement issues, theoretical lacunae regarding the prevalence and functions of affective behavior in interaction, and underpowered studies that cannot provide a solid empirical foundation for further theoretical development. This contribution highlights some of these challenges and points towards next steps to create a rapprochement between engineers and affective scientists, with a view to improving both theory and applications.
Affiliation(s)
- Arvid Kappas, Constructor University, Campus Ring 1, 28759 Bremen, Germany
- Jonathan Gratch, Institute for Creative Technologies, University of Southern California, Los Angeles, CA, USA
3
Surendran V, Wagner AR. That was not what I was aiming at! Differentiating human intent and outcome in a physically dynamic throwing task. Auton Robots 2023; 47:249-265. [PMID: 36530466] [DOI: 10.1007/s10514-022-10074-5]
Abstract
Recognising intent in collaborative human-robot tasks can improve team performance and human perception of robots. Intent can differ from the observed outcome in the presence of mistakes, which are likely in physically dynamic tasks. We created a dataset of 1227 throws of a ball at a target from 10 participants and observed that 47% of throws were mistakes, with 16% completely missing the target. Our approach leverages facial images capturing the person's reaction to the outcome of a throw, first to predict whether the throw was a mistake and then to determine its actual intent. The outcome-prediction approach we propose performs 38% better on front-on videos than the two-stream architecture previously used for this task. In addition, we propose a 1D-CNN model that, used in conjunction with priors learned from the frequency of mistakes, provides an end-to-end pipeline for outcome and intent recognition in this throwing task.
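Combining a per-throw classifier with mistake-frequency priors, as the abstract describes, can be sketched as a simple Bayesian update. Everything below (the function name and the likelihood values) is an illustrative assumption, not the paper's model; only the 47% mistake rate comes from the abstract:

```python
# Illustrative Bayesian fusion of a classifier observation with a prior
# mistake rate. Function name and likelihood values are hypothetical.

def posterior_mistake(p_obs_given_mistake: float,
                      p_obs_given_success: float,
                      prior_mistake: float) -> float:
    """P(mistake | facial observation) via Bayes' rule."""
    num = p_obs_given_mistake * prior_mistake
    den = num + p_obs_given_success * (1.0 - prior_mistake)
    return num / den

# The abstract reports that 47% of throws were mistakes; use that as a prior.
prior = 0.47
p = posterior_mistake(p_obs_given_mistake=0.8,   # hypothetical likelihoods
                      p_obs_given_success=0.2,
                      prior_mistake=prior)
print(round(p, 3))  # → 0.78
```

With such a high base rate of mistakes, even a moderately informative facial cue pushes the posterior well above chance, which is the intuition behind pairing the 1D-CNN with learned priors.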
4
Zhao M, Simmons R, Admoni H. The Role of Adaptation in Collective Human-AI Teaming. Top Cogn Sci 2022. [PMID: 36374986] [DOI: 10.1111/tops.12633]
Abstract
This paper explores a framework for defining artificial intelligence (AI) that adapts to individuals within a group, and discusses the technical challenges for collaborative AI systems that must work with different human partners. Collaborative AI is not one-size-fits-all, and thus AI systems must tune their output based on each human partner's needs and abilities. For example, when communicating with a partner, an AI should consider how prepared their partner is to receive and correctly interpret the information they are receiving. Forgoing such individual considerations may adversely impact the partner's mental state and proficiency. On the other hand, successfully adapting to each person's (or team member's) behavior and abilities can yield performance benefits for the human-AI team. Under this framework, an AI teammate adapts to human partners by first learning components of the human's decision-making process and then updating its own behaviors to positively influence the ongoing collaboration. This paper explains the role of this AI adaptation formalism in dyadic human-AI interactions and examines its application through a case study in a simulated navigation domain.
5
Özçevik Y. Human robot interaction as a service for combatting COVID-19: an experimental case study. J Ambient Intell Humaniz Comput 2022; 14:1-10. [PMID: 35340699] [PMCID: PMC8934408] [DOI: 10.1007/s12652-022-03815-y]
Abstract
The COVID-19 pandemic has changed daily routines across many spheres, including society, economics, and health. The most powerful weapon against the disease is social distancing, so authorities strongly recommend reducing human-to-human interaction (HHI) in order to stop the spread. However, since it is not known when the pandemic will end permanently, people's daily routines must continue somehow, and new approaches should be adopted in social environments for COVID-19 prevention. Human-robot interaction (HRI) can serve as a vital mechanism for providing risk-free routines in society. For this purpose, we offer human robot interaction as a service (HRIaaS) for eateries such as restaurants and cafes, where customers must interact with the staff. The proposed service uses personal smartphones to decrease the number of HHIs in such environments, where strangers are involved. Moreover, an experimental case study evaluates a real-world scenario using the proposed service against a contemporary routine with HHIs. The evaluation results show an average reduction of 41% per customer in the number of HHIs between customers and serving staff.
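As a rough illustration of how a per-customer figure like the reported 41% average can be derived, the reduction is simply the relative drop in interaction counts. The counts below are invented for illustration; only the 41% result mirrors the reported average:

```python
# Illustrative computation of a per-customer HHI reduction; the interaction
# counts are invented, not the paper's data.

def hhi_reduction(baseline_interactions: int, service_interactions: int) -> float:
    """Fractional drop in human-to-human interactions per customer."""
    return (baseline_interactions - service_interactions) / baseline_interactions

# e.g. a visit needing 100 staff interactions drops to 59 with app-based ordering
print(f"{hhi_reduction(100, 59):.0%}")  # → 41%
```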
Affiliation(s)
- Yusuf Özçevik, Department of Software Engineering, Manisa Celal Bayar University, Manisa, Turkey
6
Abstract
In order to navigate safely and effectively in close proximity to humans, robots must be capable of predicting the future motions of humans. This study first consolidates human studies in motion, intention, and preference into a discretized human model that can readily be used in robotic decision-making algorithms. The Cooperative Markov Decision Process (Co-MDP), a novel framework that improves upon multiagent MDPs, is then proposed for enabling socially aware robot obstacle avoidance. Utilizing the consolidated and discretized human model, Co-MDP allows the system to (1) approximate rational human behavior and intention, (2) generate socially aware robotic obstacle-avoidance behavior, and (3) remain robust to the uncertainty of human intention and motion variance. Simulations of a human-robot co-populated environment verify Co-MDP as a feasible obstacle-avoidance algorithm. In addition, the anthropomorphic behavior of Co-MDP was assessed and confirmed with a human-in-the-loop experiment. Results reveal that participants could not reliably distinguish agents controlled by human operators from those controlled by Co-MDP, and the reported confidences of their choices indicate that participants' predictions were backed by behavioral evidence rather than random guesses. Thus, the main contributions of this paper are: consolidating past studies of rational human behavior and intention into a simple, discretized model; the development of Co-MDP, a robotic decision framework that can utilize this human model and maximize the joint utility between the human and robot; and an experimental design for evaluating human acceptance of obstacle-avoidance algorithms.
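The joint-utility idea behind Co-MDP can be sketched as value iteration over a reward that mixes task progress with a social cost. The toy corridor below is entirely invented (states, rewards, and the crowding penalty are assumptions for illustration, not the authors' model):

```python
# Toy value iteration with a joint human-robot reward, in the spirit of the
# Co-MDP idea above. States, rewards, and the social penalty are invented.

GAMMA = 0.9
STATES = [0, 1, 2]                   # positions along a short corridor
ACTIONS = ["wait", "advance"]

def step(s, a):
    """Deterministic toy transition: advancing moves toward the goal (s=2)."""
    return min(s + 1, 2) if a == "advance" else s

def joint_reward(s, a):
    """Robot progress minus a penalty for crowding the human near s=0."""
    progress = 1.0 if (a == "advance" and s < 2) else 0.0
    social_penalty = 0.5 if (a == "advance" and s == 0) else 0.0
    return progress - social_penalty

V = {s: 0.0 for s in STATES}
for _ in range(50):                  # synchronous sweeps to convergence
    V = {s: max(joint_reward(s, a) + GAMMA * V[step(s, a)] for a in ACTIONS)
         for s in STATES}

policy = {s: max(ACTIONS, key=lambda a: joint_reward(s, a) + GAMMA * V[step(s, a)])
          for s in STATES}
print(policy)
```

Even in this tiny example the social penalty shapes the values without changing the optimal route, which mirrors the trade-off a socially aware planner must balance.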
Affiliation(s)
- Trevor Smith, Department of Mechanical and Aerospace Engineering, West Virginia University, Morgantown, WV, USA
- Yuhao Chen, Department of Industrial and Systems Engineering, University of Florida, Gainesville, FL, USA
- Nathan Hewitt, Department of Electrical and Computer Engineering, University of North Carolina at Charlotte, Charlotte, NC, USA
- Boyi Hu, Department of Industrial and Systems Engineering, University of Florida, Gainesville, FL, USA
- Yu Gu, Department of Mechanical and Aerospace Engineering, West Virginia University, Morgantown, WV, USA
7
Nishimura Y, Nakamura Y, Ishiguro H. Human interaction behavior modeling using Generative Adversarial Networks. Neural Netw 2020; 132:521-531. [PMID: 33039789] [DOI: 10.1016/j.neunet.2020.09.019]
Abstract
Recently, considerable research has focused on personal assistant robots, and robots capable of rich human-like communication are expected. Among humans, non-verbal elements contribute to effective and dynamic communication. However, people use a wide range of diverse gestures, and a robot capable of expressing various human gestures has not been realized. In this study, we address human behavior modeling during interaction using a deep generative model. In the proposed method, to consider interaction motion, three factors, i.e., interaction intensity, time evolution, and time resolution, are embedded in the network structure. Subjective evaluation results suggest that the proposed method can generate high-quality human motions.
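The adversarial objective underlying such a generative model can be illustrated on 1-D toy data standing in for motion features. The sketch below only evaluates the two losses for a fixed toy generator and discriminator; all shapes, parameters, and data are invented, and the paper's three embedded factors (interaction intensity, time evolution, time resolution) are not modeled:

```python
# Toy evaluation of the adversarial (GAN) objective on 1-D data. Everything
# here is illustrative; no training step is performed.
import math
import random

random.seed(0)

def discriminator(x, w=1.5, b=0.0):
    """Logistic critic: probability that sample x is real."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

def generator(z, a=0.5, c=2.0):
    """Affine generator mapping noise z to a fake 'motion feature'."""
    return a * z + c

real = [random.gauss(0.0, 1.0) for _ in range(256)]
fake = [generator(random.gauss(0.0, 1.0)) for _ in range(256)]

# Discriminator loss: binary cross-entropy, real labelled 1, fake labelled 0.
d_loss = (-sum(math.log(discriminator(x)) for x in real) / len(real)
          - sum(math.log(1.0 - discriminator(x)) for x in fake) / len(fake))

# Generator loss: low when the fakes are classified as real, i.e. fool the critic.
g_loss = -sum(math.log(discriminator(x)) for x in fake) / len(fake)

print(f"d_loss={d_loss:.3f}  g_loss={g_loss:.3f}")
```

In an actual GAN these two losses are minimized alternately by gradient descent on the two networks; here they are only computed once to show the two-player structure.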
8
Zoller EI, Faludi B, Gerig N, Jost GF, Cattin PC, Rauter G. Force quantification and simulation of pedicle screw tract palpation using direct visuo-haptic volume rendering. Int J Comput Assist Radiol Surg 2020; 15:1797-1805. [PMID: 32959159] [PMCID: PMC7603448] [DOI: 10.1007/s11548-020-02258-0]
Abstract
Purpose: We present a feasibility study for the visuo-haptic simulation of pedicle screw tract palpation in virtual reality, using an approach that requires no manual processing or segmentation of the volumetric medical data set.
Methods: In a first experiment, we quantified the forces and torques present during the palpation of a pedicle screw tract in a real boar vertebra. We equipped a ball-tipped pedicle probe with a 6-axis force/torque sensor and a motion-capture marker cluster, simultaneously recording the pose of the probe relative to the vertebra and measuring the generated forces and torques during palpation. This allowed us to replay the recorded palpation movements in our simulator and to fine-tune the haptic rendering to approximate the measured forces and torques. In a second experiment, we asked two neurosurgeons to palpate a virtual version of the same vertebra in our simulator while we logged the forces and torques sent to the haptic device.
Results: In the experiments with the real vertebra, the maximum measured force along the longitudinal axis of the probe was 7.78 N and the maximum measured bending torque was 0.13 Nm. In an offline simulation of the probe motion recorded during palpation of a real pedicle screw tract, our approach generated forces and torques that were similar in magnitude and progression to the measured ones. When surgeons tested our simulator, the distributions of the computed forces and torques were similar to the measured ones; however, higher forces and torques occurred more frequently.
Conclusions: We demonstrated the suitability of direct visual and haptic volume rendering for simulating a specific surgical procedure. Our approach of fine-tuning the simulation by measuring the forces and torques that are prevalent while palpating a real vertebra produced promising results.
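A common way to render axial resistance in a haptic simulator is a penalty (spring) model saturated at a force cap. The sketch below is an assumption for illustration, not the authors' direct volume-rendering method; only the 7.78 N cap comes from their measurements, and the stiffness value and names are hypothetical:

```python
# Penalty-based axial force sketch, saturated at the 7.78 N maximum measured
# on the real vertebra. Stiffness value and names are assumptions.

def axial_force(penetration_m: float,
                stiffness_n_per_m: float = 800.0,
                max_force_n: float = 7.78) -> float:
    """Spring-like resistance along the probe axis, capped at max_force_n."""
    if penetration_m <= 0.0:
        return 0.0                      # probe not in contact
    return min(stiffness_n_per_m * penetration_m, max_force_n)

print(axial_force(0.005))  # 5 mm penetration → 4.0 N
print(axial_force(0.020))  # deep penetration saturates at the 7.78 N cap
```

Capping the rendered force at the measured maximum keeps a simple linear model from exceeding forces ever observed on the real specimen.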
Affiliation(s)
- Esther I Zoller, BIROMED-Lab, Department of Biomedical Engineering, University of Basel, Basel, Switzerland
- Balázs Faludi, CIAN, Department of Biomedical Engineering, University of Basel, Basel, Switzerland
- Nicolas Gerig, BIROMED-Lab, Department of Biomedical Engineering, University of Basel, Basel, Switzerland
- Gregory F Jost, Spinale Chirurgie, Spitalzentrum Biel, Biel, Switzerland
- Philippe C Cattin, CIAN, Department of Biomedical Engineering, University of Basel, Basel, Switzerland
- Georg Rauter, BIROMED-Lab, Department of Biomedical Engineering, University of Basel, Basel, Switzerland
9
Chita-Tegmark M, Scheutz M. Assistive Robots for the Social Management of Health: A Framework for Robot Design and Human-Robot Interaction Research. Int J Soc Robot 2020; 13:197-217. [PMID: 32421077] [PMCID: PMC7223628] [DOI: 10.1007/s12369-020-00634-z]
Abstract
There is a close connection between health and the quality of one's social life. Strong social bonds are essential for health and wellbeing, but often health conditions can detrimentally affect a person's ability to interact with others. This can become a vicious cycle resulting in further decline in health. For this reason, the social management of health is an important aspect of healthcare. We propose that socially assistive robots (SARs) could help people with health conditions maintain positive social lives by supporting them in social interactions. This paper makes three contributions, as detailed below. We develop a framework of social mediation functions that robots could perform, motivated by the special social needs that people with health conditions have. In this framework we identify five types of functions that SARs could perform: (a) changing how the person is perceived, (b) enhancing the social behavior of the person, (c) modifying the social behavior of others, (d) providing structure for interactions, and (e) changing how the person feels. We thematically organize and review the existing literature on robots supporting human-human interactions, in both clinical and non-clinical settings, and explain how the findings and design ideas from these studies can be applied to the functions identified in the framework. Finally, we point out and discuss challenges in designing SARs for supporting social interactions, and highlight opportunities for future robot design and HRI research on the mediator role of robots.