1. Esaki H, Sekiyama K. Immersive Robot Teleoperation Based on User Gestures in Mixed Reality Space. Sensors (Basel). 2024;24:5073. PMID: 39124119; PMCID: PMC11314798; DOI: 10.3390/s24155073.
Abstract
Recently, research has been conducted on mixed reality (MR), which provides immersive visualization and interaction experiences, and on mapping human motions directly onto a robot in an MR space to achieve a high level of immersion. However, even though the robot is mapped into the MR space, its surrounding environment is often not mapped sufficiently; this makes it difficult to comfortably perform tasks that require precise manipulation of objects that are hard to see from the human perspective. Therefore, we propose a system that allows users to operate a robot in real space by mapping the task environment around the robot into the MR space and performing operations within the MR space.
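A minimal sketch of the environment-mapping idea this abstract describes, assuming a calibrated 4x4 robot-base-to-MR-world transform is available; the function names and example values are illustrative, not the authors' implementation:

```python
# Sketch: re-express task-environment object poses, captured in the robot's
# base frame, in an MR world frame via a calibrated homogeneous transform.
import numpy as np

def pose_to_matrix(position, rotation):
    """Build a 4x4 homogeneous transform from a 3-vector and a 3x3 rotation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = position
    return T

def map_object_into_mr(T_mr_robot, obj_position, obj_rotation):
    """Map one object's pose from the robot base frame into the MR frame."""
    T_robot_obj = pose_to_matrix(obj_position, obj_rotation)
    T_mr_obj = T_mr_robot @ T_robot_obj
    return T_mr_obj[:3, 3], T_mr_obj[:3, :3]  # position, rotation in MR frame

# Example: robot base sits 1 m along x in the MR frame; an object lies 0.5 m
# in front of the base. Its MR-frame position comes out at [1.5, 0, 0.2].
T_mr_robot = pose_to_matrix(np.array([1.0, 0.0, 0.2]), np.eye(3))
pos, rot = map_object_into_mr(T_mr_robot, np.array([0.5, 0.0, 0.0]), np.eye(3))
```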
2. Borhani Z, Sharma P, Ortega FR. Survey of Annotations in Extended Reality Systems. IEEE Transactions on Visualization and Computer Graphics. 2024;30:5074-5096. PMID: 37352090; DOI: 10.1109/tvcg.2023.3288869.
Abstract
Annotation in 3D user interfaces such as Augmented Reality (AR) and Virtual Reality (VR) is a challenging and promising area; however, no surveys currently review these contributions. To provide a survey of annotations for Extended Reality (XR) environments, we conducted a structured literature review of papers that used annotation in their AR/VR systems between 2001 and 2021. Our literature review process consists of several filtering steps, which resulted in 103 XR publications with a focus on annotation. We classified these papers based on display technologies, input devices, annotation types, target objects under annotation, collaboration types, modalities, and collaborative technologies. A survey of annotation in XR is an invaluable resource for researchers and newcomers. Finally, we provide a database of the collected information for each reviewed paper, including applications, display technologies and their annotators, input devices, modalities, annotation types, interaction techniques, collaboration types, and tasks. This database provides rapid access to the collected data and gives users the ability to search or filter the required information. This survey provides a starting point for anyone interested in researching annotation in XR environments.
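As a hint of the kind of search and filtering such a database enables, a short sketch follows; it assumes the database were exported as a CSV with one row per paper and columns named after the classification axes above (the file and column names are assumptions, not the authors' schema):

```python
# Sketch: filter the survey database by classification fields, assuming a
# hypothetical CSV export with columns named after the taxonomy axes.
import pandas as pd

papers = pd.read_csv("xr_annotation_survey.csv")  # assumed export

# e.g., all head-mounted-display papers with synchronous collaboration
subset = papers[
    (papers["display_technology"] == "HMD")
    & (papers["collaboration_type"] == "synchronous")
]
print(subset[["title", "annotation_type", "input_device"]])
```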
3. Minh Tran TT, Brown S, Weidlich O, Billinghurst M, Parker C. Wearable Augmented Reality: Research Trends and Future Directions from Three Major Venues. IEEE Transactions on Visualization and Computer Graphics. 2023;29:4782-4793. PMID: 37782599; DOI: 10.1109/tvcg.2023.3320231.
Abstract
Wearable Augmented Reality (AR) has attracted considerable attention in recent years, as evidenced by the growing number of research publications and industry investments. With swift advancements and a multitude of interdisciplinary research areas within wearable AR, a comprehensive review is crucial for consolidating the current state of the field. In this paper, we present a review of 389 research papers on wearable AR, published between 2018 and 2022 in three major venues: ISMAR, TVCG, and CHI. Drawing inspiration from previous works by Zhou et al. and Kim et al., which summarized AR research at ISMAR over the past two decades (1998-2017), we categorize the papers into different topics and identify prevailing trends. One notable finding is that wearable AR research is increasingly geared toward enabling broader consumer adoption. From our analysis, we highlight key observations about potential future research areas essential for capitalizing on this trend and achieving widespread adoption. These include addressing challenges in Display, Tracking, Interaction, and Applications, and exploring emerging frontiers in Ethics, Accessibility, Avatar and Embodiment, and Intelligent Virtual Agents.
4. Marques B, Silva S, Dias P, Santos BS, Basole RC, Ferrise F. How to Evaluate If Collaborative Augmented Reality Speaks to Its Users. IEEE Computer Graphics and Applications. 2023;43:107-113. PMID: 37708002; DOI: 10.1109/mcg.2023.3298168.
Abstract
Augmented reality (AR) is increasingly being considered as a way to support co-located and remote collaboration scenarios. Thus far, the core goal has been advancing the supporting technologies and assessing how they perform to inform design and development, thus supporting their maturation. Nevertheless, while understanding the performance and impact of the supporting technology is indisputable groundwork, we argue that the field needs to adopt a framework that moves from answering questions about the proposed methods and technologies to a more holistic view that also encompasses collaboration itself. However, moving toward this goal challenges how evaluations are designed, adding complexity and raising several questions about what needs to be considered. In this article, we briefly examine the different dimensions entailed in collaborative AR and argue in favor of a distinctive evaluation framework that goes beyond current practice and focuses on the elements that allow judging how collaboration unfolds while informing the role of the supporting technology.
5. Wang P, Wang Y, Billinghurst M, Yang H, Xu P, Li Y. BeHere: a VR/SAR remote collaboration system based on virtual replicas sharing gesture and avatar in a procedural task. Virtual Reality. 2023;27:1409-1430. PMID: 36686612; PMCID: PMC9838545; DOI: 10.1007/s10055-023-00748-5.
Abstract
In this paper, we focus on the significance of remote collaboration using virtual replicas, avatars, and gestures in a procedural task in industry; thus, we present a Virtual Reality (VR)/Spatial Augmented Reality (SAR) remote collaboration system, BeHere, based on 3D virtual replicas and shared gestures and avatars. BeHere enables a remote expert in VR to guide a local worker in real time to complete a procedural task in the real world. At the remote VR site, we construct a 3D virtual environment using virtual replicas, which the user can manipulate through intuitive gesture interaction while seeing their partner's 3D virtual avatar. At the local site, we use SAR to project instructions onto the real world based on the shared virtual replicas and gestures. We conducted a formal user study to evaluate the prototype system in terms of performance, social presence, workload, ranking, and user preference. We found that the combination of visual cues from gestures, avatars, and virtual replicas plays a positive role in improving user experience, especially for remote VR users. More significantly, our study provides useful information and important design implications for further research on the use of gesture-, gaze-, and avatar-based cues as well as virtual replicas in VR/AR remote collaboration on procedural tasks in industry. Supplementary Information: The online version contains supplementary material available at 10.1007/s10055-023-00748-5.
Affiliations:
- Peng Wang, School of Advanced Manufacturing Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Yue Wang, School of Advanced Manufacturing Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Mark Billinghurst, Auckland Bioengineering Institute, The University of Auckland, Auckland, New Zealand
- Huizhen Yang, Chongqing Innovation Center of Beijing Institute of Technology, Beijing, China
- Peng Xu, Jiangsu JARI Technology Group Co., Ltd, The 716th Research Institute of CSIC, Lianyungang, Jiangsu, China
- Yanhong Li, Jiuquan Iron and Steel (Group) Co., Ltd, Jiayuguan 735100, China
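To make the sharing mechanism of this entry concrete, below is a hedged sketch of a per-frame state message that a VR expert site could serialize for the SAR worker site, carrying the expert's hand joints and the pose of the manipulated virtual replica; the field names and transport are assumptions, not the BeHere protocol:

```python
# Sketch: one frame of shared state (expert hand + virtual-replica pose)
# serialized as JSON for the projector-based (SAR) worker site.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class SharedState:
    timestamp: float
    hand_joints: list   # flattened [x, y, z] triples for tracked hand joints
    replica_id: str     # identifier of the virtual replica being manipulated
    replica_pose: list  # [x, y, z, qx, qy, qz, qw] in the shared task frame

state = SharedState(
    timestamp=time.time(),
    hand_joints=[0.10, 1.20, 0.30],             # a single joint, for brevity
    replica_id="pump-housing",                  # hypothetical replica name
    replica_pose=[0.0, 0.9, 0.4, 0.0, 0.0, 0.0, 1.0],
)
payload = json.dumps(asdict(state))             # ship to the local SAR site
```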
6. Are the Instructions Clear? Evaluating the Visual Characteristics of Augmented Reality Content for Remote Guidance. Multimodal Technologies and Interaction. 2022. DOI: 10.3390/mti6100092.
Abstract
Augmented Reality (AR) solutions are emerging in multiple application scenarios as Industry 4.0 takes shape. In particular, for remote collaboration, flexible mechanisms such as authoring tools can be used to generate instructions and assist human operators as they face increasing complexity in their daily tasks. Beyond the traditional challenge of ensuring that instructions can be created intuitively, without requiring an understanding of complicated AR concepts, another relevant issue is that the quality of these instructions is often not analyzed before the tools are evaluated; that is, the characteristics of the visual content are not adequately assessed beforehand. Hence, it is essential to understand the cognitive workload associated with AR instructions in order to assess whether they can be easily understood and accepted before being deployed in real-world scenarios. To address this, we focused on AR during sessions of remote guidance. Based on a participatory process with domain experts from the industry sector, a prototype for creating AR-based instructions was developed, and a two-part user study was conducted: (1) first, a set of step-by-step instructions was produced, and their visual characteristics were evaluated by 129 participants along a set of relevant dimensions; (2) afterward, these instructions were used by nine participants to understand whether they could assist on-site collaborators during real-life remote maintenance tasks. The results suggest that the AR instructions offer low visual complexity and considerable visual impact, clarity, and directed focus, thus improving situational understanding and promoting task resolution.
7. Maio R, Marques B, Alves J, Santos BS, Dias P, Lau N. An Augmented Reality Serious Game for Learning Intelligent Wheelchair Control: Comparing Configuration and Tracking Methods. Sensors (Basel). 2022;22:7788. PMID: 36298139; PMCID: PMC9610184; DOI: 10.3390/s22207788.
Abstract
This work proposes an augmented reality serious game (ARSG) for supporting individuals with motor disabilities while controlling robotic wheelchairs. A racing track was used as the game narrative; it included restriction areas, static and dynamic virtual objects, obstacles, and signs. To experience the game, a prior configuration of the environment, made through a smartphone or a computer, was required. Furthermore, a visualization tool was developed to exhibit user performance while using the ARSG. Two user studies were conducted, with 10 and 20 participants respectively, to compare (1) how different devices enable configuring the ARSG, and (2) different tracking capabilities, i.e., methods used to place virtual content in the real-world environment while the user interacts with the game and controls the wheelchair in physical space: C1, motion tracking using cloud anchors; C2, offline motion tracking. Results suggest that configuring the environment with the computer is more efficient and accurate, whereas the smartphone was considered more engaging. In addition, condition C1 stood out as more accurate and robust, while condition C2 appeared to be easier to use.
Affiliations:
- Rafael Maio, IEETA, DETI, Campus Universitário de Santiago, University of Aveiro, 3810-193 Aveiro, Portugal
- Bernardo Marques, IEETA, DETI, Campus Universitário de Santiago, University of Aveiro, 3810-193 Aveiro, Portugal; DigiMedia, DeCA, Campus Universitário de Santiago, University of Aveiro, 3810-193 Aveiro, Portugal
- João Alves, IEETA, DETI, Campus Universitário de Santiago, University of Aveiro, 3810-193 Aveiro, Portugal
- Beatriz Sousa Santos, IEETA, DETI, Campus Universitário de Santiago, University of Aveiro, 3810-193 Aveiro, Portugal
- Paulo Dias, IEETA, DETI, Campus Universitário de Santiago, University of Aveiro, 3810-193 Aveiro, Portugal
- Nuno Lau, IEETA, DETI, Campus Universitário de Santiago, University of Aveiro, 3810-193 Aveiro, Portugal
8. Current Challenges and Future Research Directions in Augmented Reality for Education. Multimodal Technologies and Interaction. 2022. DOI: 10.3390/mti6090075.
Abstract
The progression and adoption of innovative learning methodologies signify that a part of society is open to new technologies and ideas and is thus advancing. The latest innovation in teaching is the use of Augmented Reality (AR). Applications using this technology have been deployed successfully in STEM (Science, Technology, Engineering, and Mathematics) education for delivering the practical and creative parts of teaching. Since AR technology already has a large volume of published studies about education reporting advantages, limitations, effectiveness, and challenges, classifying these projects allows for a review of their success in different educational settings and reveals current challenges and future research areas. Due to COVID-19, the landscape of technology-enhanced learning has shifted toward blended learning, personalized learning spaces, and user-centered approaches with safety measures. The main contributions of this paper include a review of the current literature, an investigation of the challenges, the identification of future research areas, and, finally, the development of two case studies that highlight the first steps needed to address these research areas. The result of this research ultimately details the research gaps that must be closed to facilitate real-time touchless hand interaction, kinesthetic learning, and machine learning agents within a remote learning pedagogy.
9. Exploring Teaching and Learning Experience during COVID-19 Pandemic in Engineering Education. Sustainability. 2022. DOI: 10.3390/su14127501.
Abstract
The education system is continuously modernizing to accommodate the needs of the industrial revolution, and various teaching modes have been introduced for an increasingly diverse range of students, particularly in engineering education. The COVID-19 pandemic disrupted normal education worldwide and shut down campus activity for an extended period, forcing universities to adopt alternative approaches to continue the academic year. Engineering education faced significant challenges in finding realistic substitutes for lab-based hands-on activities and for group- or team-based learning experiences. It is therefore very important to understand these challenges and the ways to address them. This paper evaluates the teaching and learning experiences observed in engineering education in Australia and abroad during the COVID-19 period compared with the pre-COVID period. The key motivations of this study are to identify the key challenges arising from COVID-19, develop Teaching & Learning (T&L) approaches to address them, evaluate the effectiveness of the applied changes, identify shortcomings, and find ways to improve. Student feedback on selected engineering units was collected from Deakin and Murdoch Universities in Australia to evaluate the performance of the applied changes; this feedback is considered an authentic source of information for comparing and identifying the key challenges and the effectiveness of students' learning in pre-COVID and COVID conditions. The study then explored the wider literature to gather experiences from other universities across the globe and, by analyzing all findings, including academic experiences, developed constructive recommendations for improvement. It was found that the current form of online teaching has room for further improvement, as one segment of students finds it challenging while others favor only some of the approaches. Online infrastructure, staff skills for designing innovative new units, and motivating students were also found to be challenging areas. Therefore, a new teaching and learning framework is required to overcome these challenges for future learning.
10. Virtual Reality for Safe Testing and Development in Collaborative Robotics: Challenges and Perspectives. Electronics. 2022. DOI: 10.3390/electronics11111726.
Abstract
Collaborative robots (cobots) could help humans in tasks that are mundane, dangerous, or where direct human contact carries risk. Yet collaboration between humans and robots is severely limited by concerns about the safety and comfort of human operators. In this paper, we outline the use of extended reality (XR) as a way to test and develop collaboration with robots. We focus on virtual reality (VR) for simulating collaboration scenarios and on the use of cobot digital twins. This is specifically useful in situations that are difficult or even impossible to test safely in real life, such as dangerous scenarios. We describe using XR simulations as a means to evaluate collaboration with robots without putting humans in harm's way. We show how an XR setting enables combining human behavioral data, subjective self-reports, and biosignals signifying human comfort, stress, and cognitive load during collaboration. Several works demonstrate that XR can be used to train human operators and provide them with augmented reality (AR) interfaces to enhance their performance with robots. We also provide a first attempt at what could become the basis for a human–robot collaboration testing framework, specifically for designing and testing factors affecting human–robot collaboration. The use of XR has the potential to change the way we design and test cobots, and train cobot operators, in a range of applications: from industry, through healthcare, to space operations.
11. Nikolaidis A. What is Significant in Modern Augmented Reality: A Systematic Analysis of Existing Reviews. J Imaging. 2022;8:145. PMID: 35621909; PMCID: PMC9144923; DOI: 10.3390/jimaging8050145.
Abstract
Augmented reality (AR) is a field of technology that has evolved drastically during the last decades, owing to its vast range of applications in everyday life. The aim of this paper is to provide researchers with an overview of what has been surveyed since 2010 in terms of AR application areas as well as technical aspects, and to discuss the extent to which both have been covered. We also examine whether useful evidence can be extracted about aspects that have not been covered adequately, and whether it is possible to define common taxonomy criteria for performing AR reviews in the future. To this end, a search with inclusion and exclusion criteria was performed in the Scopus database, producing a representative set of 47 reviews covering the years from 2010 onwards. A proper taxonomy of the results is introduced, and the findings reveal, among others, the lack of AR application reviews covering all suggested criteria.
Affiliations:
- Athanasios Nikolaidis, Department of Informatics, Computer and Telecommunications Engineering, International Hellenic University, 62124 Serres, Greece
12. Mixed-Reality-Enhanced Human–Robot Interaction with an Imitation-Based Mapping Approach for Intuitive Teleoperation of a Robotic Arm-Hand System. Applied Sciences (Basel). 2022. DOI: 10.3390/app12094740.
Abstract
This paper presents an integrated motion-mapping and visualization scheme based on a Mixed Reality (MR) subspace approach for the intuitive and immersive telemanipulation of robotic arm-hand systems. The effectiveness of different control-feedback methods for the teleoperation system is validated and compared. The robotic arm-hand system consists of a 6 Degrees-of-Freedom (DOF) industrial manipulator and a low-cost 2-finger gripper, which can be manipulated in a natural manner by novice users physically distant from the working site. By incorporating MR technology, the user is fully immersed in a virtual operating space augmented by real-time 3D visual feedback from the robot working site. Imitation-based, velocity-centric motion mapping is implemented via the MR subspace to accurately track operator hand movements for robot motion control, enabling spatial velocity-based control of the robot Tool Center Point (TCP). The user control space and robot working space are overlaid through the MR subspace, and the local user and a digital twin of the remote robot share the same environment in the MR subspace. The MR-based motion and visualization mapping scheme for telerobotics is compared to conventional 2D Baseline and MR tele-control paradigms over two tabletop object-manipulation experiments. A user study with 24 participants was conducted to demonstrate the effectiveness and performance enhancements enabled by the proposed system. The MR-subspace-integrated 3D motion and visualization mapping scheme reduced aggregate task completion time by 48% compared with the 2D Baseline module and by 29% compared with the MR SpaceMouse module, while perceived workload decreased by 32% and 22%, respectively.
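A rough sketch of the velocity-centric mapping idea: the operator's hand displacement over one control tick becomes a clamped Cartesian TCP velocity command. The gain, control rate, and speed limit below are assumed values for illustration, not the paper's parameters:

```python
# Sketch: hand displacement per tick -> scaled, speed-limited TCP velocity.
import numpy as np

SCALE = 1.5    # hand-to-robot velocity gain (assumed)
V_MAX = 0.25   # TCP speed limit in m/s (assumed)
DT = 0.02      # control period in s, i.e. 50 Hz (assumed)

def tcp_velocity_command(hand_pos_prev, hand_pos_curr):
    """Map one tick of hand motion to a clamped Cartesian TCP velocity."""
    v = SCALE * (hand_pos_curr - hand_pos_prev) / DT
    speed = np.linalg.norm(v)
    if speed > V_MAX:
        v *= V_MAX / speed  # keep direction, cap magnitude
    return v

# Example: a 4 mm hand move in one 20 ms tick -> 0.25 m/s (clamped from 0.3).
v = tcp_velocity_command(np.zeros(3), np.array([0.004, 0.0, 0.0]))
```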
13. Comparing Desktop vs. Mobile Interaction for the Creation of Pervasive Augmented Reality Experiences. J Imaging. 2022;8:79. PMID: 35324634; PMCID: PMC8949857; DOI: 10.3390/jimaging8030079.
Abstract
This paper presents an evaluation and comparison of interaction methods for the configuration and visualization of pervasive Augmented Reality (AR) experiences using two different platforms: desktop and mobile. AR experiences consist of the enhancement of real-world environments by superimposing additional layers of information, with real-time interaction and accurate 3D registration of virtual and real objects. Pervasive AR extends this concept through experiences that are continuous in space, aware of and responsive to the user's context and pose. Currently, the time and technical expertise required to create such applications are the main reasons preventing their widespread use. As such, authoring tools that facilitate the development and configuration of pervasive AR experiences have become progressively more relevant. Their operation often involves navigating the real-world scene and using the AR equipment itself to add the augmented information within the environment. The proposed experimental tool makes use of 3D scans of physical environments to provide a reconstructed digital replica of such spaces for the desktop-based method, and to enable positional tracking for the mobile-based one. While the desktop platform represents a non-immersive setting, the mobile one provides continuous AR in the physical environment. Both versions can be used to place virtual content and ultimately configure an AR experience. The authoring capabilities of the two platforms were compared by conducting a user study focused on evaluating their usability. Although the AR interface was generally considered more intuitive, the desktop platform shows promise in several aspects, such as remote configuration, lower required effort, and overall better scalability.
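As an illustration of how such an authoring tool might persist placed content so that the desktop and mobile clients share one experience description, here is a minimal sketch; the JSON schema and all names are assumptions for illustration only:

```python
# Sketch: save one authored AR experience as JSON, anchoring each virtual
# asset by a pose expressed in the coordinate frame of the 3D scan.
import json

experience = {
    "scan_id": "lab-room-01",                  # which reconstructed scan
    "annotations": [
        {
            "asset": "arrow.glb",              # virtual object to display
            "position": [1.2, 0.0, -0.4],      # metres, scan frame
            "rotation": [0.0, 0.0, 0.0, 1.0],  # quaternion (x, y, z, w)
        }
    ],
}

with open("experience.json", "w") as f:
    json.dump(experience, f, indent=2)
```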