1
San Martin A, Kildal J, Lazkano E. An analysis of the role of different levels of exchange of explicit information in human-robot cooperation. Front Robot AI 2025;12:1511619. PMID: 39995755; PMCID: PMC11848069; DOI: 10.3389/frobt.2025.1511619.
Abstract
For smooth human-robot cooperation, it is crucial that robots understand social cues from humans and respond accordingly. Contextual information gives the human partner real-time insight into how the robot interprets social cues and what action decisions it makes as a result. We propose and implement a novel design for a human-robot cooperation framework that uses augmented reality and user gaze to enable bidirectional communication. Through this framework, the robot can recognize the objects in the scene that the human is looking at and infer the human's intentions within the context of the cooperative task. We propose three designs representing increasing levels of explicit information exchange, each enabling the robot to offer contextual information about which user actions it has identified and how it intends to respond, in line with the goal of the cooperation. We report a user study (n = 24) in which we analyzed performance and user experience across the three levels of information exchange. Results indicate that users preferred an intermediate level, in which they knew how the robot was interpreting their intentions but the robot remained autonomous, taking unsupervised action in response to the user's gaze and requiring less explicit input from the human.
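The paper does not include an implementation, but the gaze-to-object step the abstract describes can be illustrated with a minimal sketch. Everything below is hypothetical: the function names, the angular threshold, and the dwell-time heuristic are assumptions for illustration, not the authors' method.

```python
import numpy as np

def fixated_object(gaze_origin, gaze_dir, objects, max_angle_deg=5.0):
    """Return the name of the scene object closest to the gaze ray, or None.

    gaze_origin: 3D eye position; gaze_dir: unit gaze direction;
    objects: dict mapping object name -> 3D centroid (np.array).
    """
    best, best_angle = None, np.radians(max_angle_deg)
    for name, centroid in objects.items():
        to_obj = centroid - gaze_origin
        to_obj /= np.linalg.norm(to_obj)
        # Angle between the gaze ray and the direction to the object.
        angle = np.arccos(np.clip(np.dot(gaze_dir, to_obj), -1.0, 1.0))
        if angle < best_angle:
            best, best_angle = name, angle
    return best

def infer_intention(fixation_history, dwell_frames=30):
    """Naive dwell-time heuristic: a sustained fixation on one object is
    taken as the user's intention to have the robot act on that object."""
    recent = fixation_history[-dwell_frames:]
    if len(recent) == dwell_frames and len(set(recent)) == 1 and recent[0]:
        return recent[0]
    return None
```

A real system would add gaze-signal smoothing and hysteresis between fixations, but the core decision is this nearest-ray test followed by a dwell check.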
Affiliation(s)
- Ane San Martin, Department of Autonomous and Intelligent Systems, Tekniker, Eibar, Spain
- Johan Kildal, Department of Autonomous and Intelligent Systems, Tekniker, Eibar, Spain
- Elena Lazkano, Faculty of Informatics, University of the Basque Country (UPV/EHU), Bilbao, Spain
2
Diller F, Scheuermann G, Wiebel A. Visual Cue Based Corrective Feedback for Motor Skill Training in Mixed Reality: A Survey. IEEE Trans Vis Comput Graph 2024;30:3121-3134. PMID: 37015488; DOI: 10.1109/TVCG.2022.3227999.
Abstract
When learning a motor skill, it is helpful to get corrective feedback from an instructor, as it supports the learner in executing the movement correctly. With modern technology, this feedback can be provided via mixed reality, in most cases through visual cues that help the user understand the correction. We analyzed recent research approaches that utilize visual cues for feedback in mixed reality. The scope of this article is visual feedback for motor skill learning, which includes physical therapy, exercise, and rehabilitation. While some of the surveyed literature discusses the therapeutic effects of the training, this article focuses on visualization techniques. We categorized the literature from a visualization standpoint, covering the visual cues, the technology, and the characteristics of the feedback. This provided insights into how visual feedback in mixed reality is applied in the literature and how different aspects of the feedback are related. The insights obtained can help to better adjust future feedback systems to the target group and their needs. This article also provides a deeper understanding of the characteristics of visual cues in general and promotes future, more detailed research on this topic.
3
Minh Tran TT, Brown S, Weidlich O, Billinghurst M, Parker C. Wearable Augmented Reality: Research Trends and Future Directions from Three Major Venues. IEEE Trans Vis Comput Graph 2023;29:4782-4793. PMID: 37782599; DOI: 10.1109/TVCG.2023.3320231.
Abstract
Wearable Augmented Reality (AR) has attracted considerable attention in recent years, as evidenced by the growing number of research publications and industry investments. With swift advancements and a multitude of interdisciplinary research areas within wearable AR, a comprehensive review is crucial for consolidating the current state of the field. In this paper, we present a review of 389 research papers on wearable AR, published between 2018 and 2022 in three major venues: ISMAR, TVCG, and CHI. Drawing inspiration from previous works by Zhou et al. and Kim et al., which summarized AR research at ISMAR over the preceding two decades (1998-2017), we categorize the papers into different topics and identify prevailing trends. One notable finding is that wearable AR research is increasingly geared towards enabling broader consumer adoption. From our analysis, we highlight key observations on potential future research areas essential for capitalizing on this trend and achieving widespread adoption. These include addressing challenges in Display, Tracking, Interaction, and Applications, and exploring emerging frontiers in Ethics, Accessibility, Avatar and Embodiment, and Intelligent Virtual Agents.
4
Romano S, Laviola E, Gattullo M, Fiorentino M, Uva AE. More Arrows in the Quiver: Investigating the Use of Auxiliary Models to Localize In-View Components with Augmented Reality. IEEE Trans Vis Comput Graph 2023;29:4483-4493. PMID: 37782614; DOI: 10.1109/TVCG.2023.3320229.
Abstract
The creation and management of content are among the main open issues for the spread of Augmented Reality. In Augmented Reality interfaces for procedural tasks, a key authoring strategy is chunking instructions and using optimized visual cues, i.e., cues tailored to the specific information to convey. Nevertheless, research works rarely present the rationale behind their choices. This work aims to provide design guidelines for the localization of in-view, non-occluded components, a recurrent type of information in technical documentation. Previous studies revealed that the visual cues best suited to convey this information are auxiliary models, i.e., abstract shapes that highlight the region of space where the component is located. Among them, 3D arrows are widely used, but they may produce ambiguous information. Furthermore, the literature is unclear on how to design auxiliary model shapes and whether they are affected by the shapes of the components. To fill this gap, we conducted two user studies. In the first study, we collected the preferences of 45 users regarding the shape, color, and animation of auxiliary models for localizing various component shapes. Based on the results, we defined guidelines for designing optimized auxiliary models according to component shape. In the second user study, we validated these guidelines by evaluating the performance (localization time and recognition accuracy) and user experience of 24 users. The results confirm that designing auxiliary models following our guidelines leads to higher recognition accuracy and a better user experience than using 3D arrows.
5
Alatawi H, Albalawi N, Shahata G, Aljohani K, Alhakamy A, Tuceryan M. Augmented Reality-Assisted Deep Reinforcement Learning-Based Model towards Industrial Training and Maintenance for NanoDrop Spectrophotometer. Sensors (Basel) 2023;23:6024. PMID: 37447876; PMCID: PMC10347177; DOI: 10.3390/s23136024.
Abstract
The use of augmented reality (AR) technology is growing in the maintenance industry because it can improve efficiency and reduce costs by providing real-time guidance and instruction to workers during repair and maintenance tasks. AR can also assist with equipment training and visualization, allowing users to explore the equipment's internal structure and size. The adoption of AR in maintenance is expected to increase as hardware options expand and development costs decrease. To implement AR for job aids in mobile applications, 3D spatial information and equipment details must be captured and calibrated using image-based or object-based tracking, which is essential for registering 3D models with physical components. The present paper proposes an AR-assisted, deep reinforcement learning (RL)-based system for NanoDrop spectrophotometer training and maintenance that supports rapid repair procedures in an Industry 4.0 (I4.0) setting. The system uses a camera to detect the target asset via feature matching, tracking techniques, and 3D modeling. Once detection is complete, AR technologies generate clear and easily understandable instructions on the maintenance operator's device. According to the research findings, the model's target technique achieved a mean reward of 1.000 with a standard deviation of 0.000, meaning that every episode in the given task obtained exactly the same reward and there was no variability in the outcomes.
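For context on the reported statistics (this is not the authors' code, just the arithmetic behind the claim): the mean and standard deviation are computed over per-episode rewards, so a standard deviation of 0.000 can only occur when every episode returns the identical reward.

```python
import numpy as np

# Hypothetical per-episode rewards; the paper reports every episode at 1.0.
episode_rewards = np.array([1.0] * 100)

mean_reward = episode_rewards.mean()  # 1.000
std_reward = episode_rewards.std()    # 0.000 -- zero spread means identical outcomes

print(f"mean = {mean_reward:.3f}, std = {std_reward:.3f}")
```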
Affiliation(s)
- Hibah Alatawi, Department of Computer Science, Faculty of Computers and Information Technology, University of Tabuk, Tabuk 47512, Saudi Arabia
- Nouf Albalawi, Department of Computer Science, Faculty of Computers and Information Technology, University of Tabuk, Tabuk 47512, Saudi Arabia
- Ghadah Shahata, Department of Computer Science, Faculty of Computers and Information Technology, University of Tabuk, Tabuk 47512, Saudi Arabia
- Khulud Aljohani, Department of Computer Science, Faculty of Computers and Information Technology, University of Tabuk, Tabuk 47512, Saudi Arabia
- A’aeshah Alhakamy, Department of Computer Science, Faculty of Computers and Information Technology, University of Tabuk, Tabuk 47512, Saudi Arabia; Artificial Intelligence and Sensing Technologies (AIST) Research Center, University of Tabuk, Tabuk 47512, Saudi Arabia
- Mihran Tuceryan, Department of Computer Science, School of Science, Indiana University-Purdue University, Indianapolis, IN 46202, USA
6
Burova A, Mäkelä J, Heinonen H, Palma PB, Hakulinen J, Opas V, Siltanen S, Raisamo R, Turunen M. Asynchronous industrial collaboration: How virtual reality and virtual tools aid the process of maintenance method development and documentation creation. Comput Ind 2022. DOI: 10.1016/j.compind.2022.103663.
7
Abstract
In this work, we propose a Mixed Reality (MR) application to support laboratory lectures in STEM distance education. It was designed following a methodology that can be extended to diverse STEM laboratory lectures, which we formulated considering the main issues identified in the literature as limiting the use of MR in education. Accordingly, the main design features of the resulting MR application are the involvement of students and teachers, the use of non-distracting graphics, the integration of traditional didactic material, and easy scalability to new learning activities. We present how we applied the design methodology and used the framework in the case study of an engineering course, supporting students in understanding drawings of complex machines without being physically present in the laboratory. Finally, we evaluated the usability and cognitive load of the implemented MR application through two user studies involving 48 and 36 students, respectively. The results reveal that the usability of our application is "excellent" (mean SUS score of 84.7) and is not influenced by familiarity with Mixed Reality or distance education tools. Furthermore, the cognitive load is medium (mean NASA TLX score below 29) for all four learning tasks that students can accomplish through the MR application.
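The SUS figure quoted above comes from the standard System Usability Scale scoring formula (ten 1-5 Likert items). The sketch below shows that standard scoring; it is not code from the paper, and the example responses are invented.

```python
def sus_score(responses):
    """System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items are positively worded and contribute (response - 1);
    even-numbered items are negatively worded and contribute (5 - response).
    The sum of contributions is scaled by 2.5 onto a 0-100 range.
    """
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# Example: one fairly positive respondent scores 87.5; the study's
# reported mean of 84.7 is what the authors describe as "excellent".
print(sus_score([5, 2, 4, 1, 5, 2, 4, 1, 5, 2]))
```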
8
Abstract
Augmented Reality (AR) is recognized worldwide as one of the leading technologies of the 21st century and one of the pillars of the new industrial revolution envisaged by the Industry 4.0 international program. Several papers describe, in detail, specific applications of Augmented Reality developed to test its potential in a variety of fields. However, there is a lack of sources detailing the current limits of this technology when it is introduced into a real working environment, where operators would carry out everyday tasks using an AR-based approach. The authors carried out a literature analysis to identify the strengths and weaknesses of AR, and implemented a set of case studies to find the limits of current AR technologies in industrial applications outside the protected laboratory environment. The outcome of this paper is that, even though Augmented Reality is a well-consolidated computer graphics technique in research applications, several improvements, in both software and hardware, are needed before it can be introduced into industrial operations. The originality of this paper lies in the identification of guidelines for improving the potential of Augmented Reality in factories and industries.