1. Caby B, Bataille G, Danglade F, Chardonnet JR. Environment Spatial Restitution for Remote Physical AR Collaboration. IEEE Transactions on Visualization and Computer Graphics 2025; 31:3067-3076. PMID: 40063454. DOI: 10.1109/tvcg.2025.3549533.
Abstract
The emergence of spatial immersive technologies allows new ways to collaborate remotely. However, these technologies still need to be studied and enhanced to improve their effectiveness and usability for collaborators. Remote Physical Collaborative Extended Reality (RPC-XR) involves solving augmented physical tasks with the help of remote collaborators. This paper presents our RPC-AR system and a user study evaluating it during a network hardware assembly task. Our system offers verbal and non-verbal interpersonal communication: users embody avatars and interact with their remote collaborators through hand, head and eye tracking, and voice. The system also captures an environment spatially in real time and renders it in a shared virtual space. We designed it to be lightweight, avoiding both instrumentation of the collaborative environment and preliminary setup steps. It performs capture, transmission and remote rendering of real environments in less than 250 ms. We ran a cascading user study to compare our system with a commercial 2D video collaboration application, measuring mutual awareness, task load, usability and task performance. We also present an adapted Uncanny Valley questionnaire to compare how the remote environment is perceived in each system. We found that our application resulted in better empathy between collaborators, but also a higher cognitive load and a lower, though still acceptable, level of usability for the remote user. We did not observe any significant difference in performance. These results are encouraging, and participants' observations provide insights to further improve the performance and usability of RPC-AR.
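The sub-250 ms figure is easiest to appreciate as a per-stage latency budget. The sketch below is only an illustrative decomposition into capture, encode, transmit and render stages with placeholder callables; it is not the authors' implementation, and the stage split is an assumption.

import time

BUDGET_MS = 250.0  # end-to-end budget reported in the abstract

def timed(name, fn, *args, timings=None):
    # Run one pipeline stage and record its duration in milliseconds.
    start = time.perf_counter()
    result = fn(*args)
    timings[name] = (time.perf_counter() - start) * 1000.0
    return result

def run_frame(capture, encode, transmit, render):
    # One frame of a hypothetical capture -> encode -> transmit -> render loop.
    timings = {}
    raw = timed("capture", capture, timings=timings)         # e.g. RGB-D scan of the local space
    packet = timed("encode", encode, raw, timings=timings)   # compress geometry for the network
    data = timed("transmit", transmit, packet, timings=timings)
    timed("render", render, data, timings=timings)           # draw in the shared virtual space
    total = sum(timings.values())
    if total > BUDGET_MS:
        print(f"over budget: {total:.1f} ms > {BUDGET_MS} ms, breakdown = {timings}")
    return timings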
2. Gilbert D, Bose A, Kuhlen TW, Weissker T. PASCAL - A Collaboration Technique Between Non-Collocated Avatars in Large Collaborative Virtual Environments. IEEE Transactions on Visualization and Computer Graphics 2025; 31:3525-3535. PMID: 40053636. DOI: 10.1109/tvcg.2025.3549175.
Abstract
Collaborative work in large virtual environments often requires transitions from loosely-coupled collaboration at different locations to tightly-coupled collaboration at a common meeting point. Inspired by prior work on the continuum between these extremes, we present two novel interaction techniques designed to share spatial context while collaborating over large virtual distances. The first method replicates the familiar setup of a video conference by providing users with a virtual tablet to share video feeds with their peers. The second method, called PASCAL (Parallel Avatars in a Shared Collaborative Aura Link), enables users to share their immediate spatial surroundings with others by creating synchronized copies of them at the remote locations of their collaborators. We evaluated both techniques in a within-subject user study, in which 24 participants were tasked with solving a puzzle in groups of two. Our results indicate that the additional contextual information provided by PASCAL had significantly positive effects on task completion time, ease of communication, mutual understanding, and co-presence. As a result, our insights contribute to the repertoire of successful interaction techniques to mediate between loosely- and tightly-coupled work in collaborative virtual environments.
3. Zhao L, Isenberg T, Xie F, Liang HN, Yu L. SpatialTouch: Exploring Spatial Data Visualizations in Cross-Reality. IEEE Transactions on Visualization and Computer Graphics 2025; 31:897-907. PMID: 39255119. DOI: 10.1109/tvcg.2024.3456368.
Abstract
We propose and study a novel cross-reality environment that seamlessly integrates a monoscopic 2D surface (an interactive screen with touch and pen input) with a stereoscopic 3D space (an augmented reality HMD) to jointly host spatial data visualizations. This approach combines the best of two conventional methods of displaying and manipulating spatial 3D data, enabling users to fluidly explore diverse visual forms using tailored interaction techniques. Providing effective 3D data exploration techniques is pivotal for conveying the intricate spatial structures of such data, which often span multiple spatial or semantic scales, appear across various application domains, and require diverse visual representations for effective visualization. To understand user reactions to our new environment, we began with an elicitation user study, in which we captured their responses and interactions. We observed that users adapted their interaction approaches based on the perceived visual representations, with natural transitions in spatial awareness and actions while navigating across the physical surface. Our findings then informed the development of a design space for spatial data exploration in cross-reality. We thus developed cross-reality environments tailored to three distinct domains: 3D molecular structure data, 3D point cloud data, and 3D anatomical data. In particular, we designed interaction techniques that account for the inherent features of interactions in both spaces, facilitating various forms of interaction, including mid-air gestures, touch interactions, pen interactions, and combinations thereof, to enhance the users' sense of presence and engagement. We assessed the usability of our environment with biologists, focusing on its use for domain research. In addition, we evaluated our interaction transition designs with virtual and mixed-reality experts to gather further insights. As a result, we provide design suggestions for the cross-reality environment, emphasizing interaction with diverse visual representations and seamless interaction transitions between 2D and 3D spaces.
4. Pooryousef V, Cordeil M, Besancon L, Bassed R, Dwyer T. Collaborative Forensic Autopsy Documentation and Supervised Report Generation Using a Hybrid Mixed-Reality Environment and Generative AI. IEEE Transactions on Visualization and Computer Graphics 2024; 30:7452-7462. PMID: 39250385. DOI: 10.1109/tvcg.2024.3456212.
Abstract
Forensic investigation is a complex procedure involving experts working together to establish cause of death and report findings to legal authorities. While new technologies are being developed to provide better post-mortem imaging capabilities, including mixed-reality (MR) tools to support 3D visualisation of such data, these tools do not integrate seamlessly into the existing collaborative workflow and report authoring process, requiring extra steps, e.g. to extract imagery from the MR tool and combine it with physical autopsy findings for inclusion in the report. Therefore, in this work we design and evaluate a new forensic autopsy report generation workflow and present a novel documentation system using hybrid mixed-reality approaches to integrate visualisation, voice and hand interaction, as well as collaboration and procedure recording. Our preliminary findings indicate that this approach has the potential to improve data management, aid reviewability and thus achieve more robust standards. Further, it potentially streamlines report generation and minimises dependency on external tools and assistance, reducing autopsy time and related costs. This system also offers significant potential for education. A free copy of this paper and all supplemental materials are available at https://osf.io/ygfzx.
5. Borhani Z, Sharma P, Ortega FR. Survey of Annotations in Extended Reality Systems. IEEE Transactions on Visualization and Computer Graphics 2024; 30:5074-5096. PMID: 37352090. DOI: 10.1109/tvcg.2023.3288869.
Abstract
Annotation in 3D user interfaces such as Augmented Reality (AR) and Virtual Reality (VR) is a challenging and promising area; however, there are currently no surveys reviewing these contributions. In order to provide a survey of annotations for Extended Reality (XR) environments, we conducted a structured literature review of papers that used annotation in their AR/VR systems in the period between 2001 and 2021. Our literature review process consists of several filtering steps, which resulted in 103 XR publications with a focus on annotation. We classified these papers based on the display technologies, input devices, annotation types, target object under annotation, collaboration type, modalities, and collaborative technologies. A survey of annotation in XR is an invaluable resource for researchers and newcomers. Finally, we provide a database of the collected information for each reviewed paper. This information includes the applications, display technologies and their annotators, input devices, modalities, annotation types, interaction techniques, collaboration types, and tasks for each paper. The database provides rapid access to the collected data and gives users the ability to search or filter the required information. This survey provides a starting point for anyone interested in researching annotation in XR environments.
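To make the searchable database concrete, the sketch below shows one way such records could be filtered; the field names mirror the dimensions listed in the abstract but are assumptions, not the authors' actual schema.

from dataclasses import dataclass, field

@dataclass
class XRAnnotationPaper:
    # One reviewed publication, described along the survey's classification dimensions.
    title: str
    year: int
    display_technology: str                    # e.g. "HMD", "handheld", "projector"
    input_devices: list = field(default_factory=list)
    annotation_types: list = field(default_factory=list)
    collaboration_type: str = "none"           # e.g. "co-located", "remote", "none"

def filter_papers(papers, **criteria):
    # Return the papers whose fields match every given criterion.
    def matches(paper):
        for key, wanted in criteria.items():
            value = getattr(paper, key)
            if isinstance(value, list):
                if wanted not in value:
                    return False
            elif value != wanted:
                return False
        return True
    return [p for p in papers if matches(p)]

# Example: all remote-collaboration papers that use a head-mounted display.
# remote_hmd = filter_papers(database, collaboration_type="remote", display_technology="HMD")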
6. Wang M, Li YJ, Shi J, Steinicke F. SceneFusion: Room-Scale Environmental Fusion for Efficient Traveling Between Separate Virtual Environments. IEEE Transactions on Visualization and Computer Graphics 2024; 30:4615-4630. PMID: 37126613. DOI: 10.1109/tvcg.2023.3271709.
Abstract
Traveling between scenes has become a major requirement for navigation in numerous virtual reality (VR) social platforms and game applications, allowing users to efficiently explore multiple virtual environments (VEs). To facilitate scene transition, prevalent techniques such as instant teleportation and virtual portals have been extensively adopted. However, these techniques exhibit limitations when there is a need for frequent travel between separate VEs, particularly within indoor environments, resulting in low efficiency. In this article, we first analyze the design rationale for a novel navigation method supporting efficient travel between virtual indoor scenes. Based on the analysis, we introduce the SceneFusion technique that fuses separate virtual rooms into an integrated environment. SceneFusion enables users to perceive rich visual information from both rooms simultaneously, achieving high visual continuity and spatial awareness. While existing teleportation techniques passively transport users, SceneFusion allows users to actively access the fused environment using short-range locomotion techniques. User experiments confirmed that SceneFusion outperforms instant teleportation and virtual portal techniques in terms of efficiency, workload, and preference for both single-user exploration and multi-user collaboration tasks in separate VEs. Thus, SceneFusion presents an effective solution for seamless traveling between virtual indoor scenes.
7. Cortes CAT, Thurow S, Ong A, Sharples JJ, Bednarz T, Stevens G, Favero DD. Analysis of Wildfire Visualization Systems for Research and Training: Are They Up for the Challenge of the Current State of Wildfires? IEEE Transactions on Visualization and Computer Graphics 2024; 30:4285-4303. PMID: 37030767. DOI: 10.1109/tvcg.2023.3258440.
Abstract
Wildfires affect many regions across the world. The accelerated progression of global warming has amplified their frequency and scale, deepening their impact on human life, the economy, and the environment. Rising temperatures have been driving wildfires to behave unpredictably compared to those previously observed, challenging researchers and fire management agencies to understand the factors behind this behavioral change. Furthermore, this change has rendered fire personnel training outdated, as it no longer adequately prepares personnel to respond to these new fires. Immersive visualization can play a key role in tackling the growing issue of wildfires. Therefore, this survey reviews various studies that use immersive and non-immersive data visualization techniques to depict wildfire behavior and to train first responders and planners, and identifies the most useful characteristics of these systems. While these studies support knowledge creation for certain situations, there is still scope to comprehensively improve immersive systems to address the unforeseen dynamics of wildfires.
8. Perz M, Luijten G, Kleesiek J, Schmalstieg D, Egger J, Gsaxner C. MultiAR: A Multi-User Augmented Reality Platform for Biomedical Education. Annual International Conference of the IEEE Engineering in Medicine and Biology Society 2024; 2024:1-7. PMID: 40040092. DOI: 10.1109/embc53108.2024.10782948.
Abstract
This paper addresses the growing integration of Augmented Reality (AR) in biomedical sciences, emphasizing collaborative learning experiences. We present MultiAR, a versatile, domain-specific platform enabling multi-user interactions in AR for biomedical education. Unlike platform-specific solutions, MultiAR supports various AR devices, including handheld and head-mounted options. The framework extends across domains, augmenting biomedical education applications with collaborative capabilities. We define essential requirements for a multi-user AR framework in education, detail MultiAR's design and implementation, and comprehensively evaluate it using anatomy education examples. Quantitative and qualitative analyses, covering system performance, accuracy metrics, and a user study with 20 participants, highlight the urgent need for a tailored collaborative AR platform in biomedical education. Results underscore enthusiasm for collaborative AR technology, endorsing MultiAR as an accessible, versatile solution for developers and end-users in biomedical education.
9. Jackson B, Lor L, Heggeseth BC. Workspace Guardian: Investigating Awareness of Personal Workspace Between Co-Located Augmented Reality Users. IEEE Transactions on Visualization and Computer Graphics 2024; 30:2724-2733. PMID: 38437099. DOI: 10.1109/tvcg.2024.3372073.
Abstract
As augmented reality (AR) systems proliferate and the technology gets smaller and less intrusive, we imagine a future where many AR users will interact in the same physical locations (e.g., in shared workplaces and public spaces). While previous research has explored AR collaboration in these spaces, our focus is on co-located but independent work. In this paper, we explore co-located AR user behavior and investigate techniques for promoting awareness of personal workspace boundaries. Specifically, we compare three techniques: showing all virtual content, visualizing bounding box outlines of content, and a self-defined workspace boundary. The findings suggest that a self-defined boundary led to significantly more personal workspace encroachments.
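As a purely illustrative sketch of the kind of test a boundary-awareness technique relies on (an assumed simplification, not the authors' system), an encroachment can be flagged whenever one user's tracked hand enters another user's declared workspace volume:

from dataclasses import dataclass

@dataclass
class WorkspaceBox:
    # A user's self-defined workspace, modelled as an axis-aligned box in shared world coordinates.
    owner: str
    min_corner: tuple   # (x, y, z)
    max_corner: tuple   # (x, y, z)

    def contains(self, point):
        return all(lo <= p <= hi for p, lo, hi in zip(point, self.min_corner, self.max_corner))

def encroachments(hand_positions, workspaces):
    # Yield (intruder, owner) pairs whenever a hand enters someone else's workspace.
    for user, hand in hand_positions.items():
        for box in workspaces:
            if box.owner != user and box.contains(hand):
                yield user, box.owner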
10. Friedl-Knirsch J, Stach C, Pointecker F, Anthes C, Roth D. A Study on Collaborative Visual Data Analysis in Augmented Reality with Asymmetric Display Types. IEEE Transactions on Visualization and Computer Graphics 2024; 30:2633-2643. PMID: 38437119. DOI: 10.1109/tvcg.2024.3372103.
Abstract
Collaboration is a key aspect of immersive visual data analysis. Due to its inherent benefit of seeing co-located collaborators, augmented reality is often useful in such collaborative scenarios. However, different types of technology are available for augmenting the real environment, and while specific devices are constantly evolving, each device type provides different premises for collaborative visual data analysis. In our work we combine handheld, optical see-through and video see-through displays to explore and understand the impact of these different device types on collaborative immersive analytics. We conducted a mixed-methods collaborative user study in which groups of three performed a shared data analysis task in augmented reality, with each user working on a different device, to explore differences in collaborative behaviour, user experience and usage patterns. Both quantitative and qualitative data revealed differences in user experience and usage patterns. Regarding collaboration, the display types influenced how well participants could take part in the collaborative data analysis; nevertheless, there was no measurable effect on verbal communication.
11. Minh Tran TT, Brown S, Weidlich O, Billinghurst M, Parker C. Wearable Augmented Reality: Research Trends and Future Directions from Three Major Venues. IEEE Transactions on Visualization and Computer Graphics 2023; 29:4782-4793. PMID: 37782599. DOI: 10.1109/tvcg.2023.3320231.
Abstract
Wearable Augmented Reality (AR) has attracted considerable attention in recent years, as evidenced by the growing number of research publications and industry investments. With swift advancements and a multitude of interdisciplinary research areas within wearable AR, a comprehensive review is crucial for integrating the current state of the field. In this paper, we present a review of 389 research papers on wearable AR, published between 2018 and 2022 in three major venues: ISMAR, TVCG, and CHI. Drawing inspiration from previous works by Zhou et al. and Kim et al., which summarized AR research at ISMAR over the past two decades (1998-2017), we categorize the papers into different topics and identify prevailing trends. One notable finding is that wearable AR research is increasingly geared towards enabling broader consumer adoption. From our analysis, we highlight key observations related to potential future research areas essential for capitalizing on this trend and achieving widespread adoption. These include addressing challenges in Display, Tracking, Interaction, and Applications, and exploring emerging frontiers in Ethics, Accessibility, Avatar and Embodiment, and Intelligent Virtual Agents.
12. Marques B, Silva S, Dias P, Santos BS, Basole RC, Ferrise F. How to Evaluate If Collaborative Augmented Reality Speaks to Its Users. IEEE Computer Graphics and Applications 2023; 43:107-113. PMID: 37708002. DOI: 10.1109/mcg.2023.3298168.
Abstract
Augmented reality (AR) is increasingly considered to support scenarios of co-located and remote collaboration. Thus far, the core goal has been advancing the supporting technologies and assessing how they perform to inform design and development, thus providing support toward their maturity. Nevertheless, while understanding the performance and impact of supporting technology is indisputable groundwork, we argue that the field needs to adopt a framework that moves from answering questions about the proposed methods and technologies to a more holistic view, also encompassing collaboration. However, moving toward this goal challenges how evaluations are designed, adding complexity and raising several questions about what needs to be considered. In this article, we briefly examine the different dimensions entailed in collaborative AR and argue in favor of a distinctive evaluation framework that goes beyond current practice and sets its eyes on the elements that allow judging how collaboration unfolds while informing the role of the supporting technology.
13. Fidalgo CG, Yan Y, Cho H, Sousa M, Lindlbauer D, Jorge J. A Survey on Remote Assistance and Training in Mixed Reality Environments. IEEE Transactions on Visualization and Computer Graphics 2023; 29:2291-2303. PMID: 37027742. DOI: 10.1109/tvcg.2023.3247081.
Abstract
The recent pandemic, war, and oil crises have caused many to reconsider their need to travel for education, training, and meetings. Providing assistance and training remotely has thus gained importance for many applications, from industrial maintenance to surgical telemonitoring. Current solutions such as video conferencing platforms lack essential communication cues such as spatial referencing, which negatively impacts both completion time and task performance. Mixed Reality (MR) offers opportunities to improve remote assistance and training, as it opens the way to increased spatial clarity and a large interaction space. We contribute a survey of remote assistance and training in MR environments through a systematic literature review to provide a deeper understanding of current approaches, benefits, and challenges. We analyze 62 articles and contextualize our findings along a taxonomy based on degree of collaboration, perspective sharing, MR space symmetry, time, input and output modality, visual display, and application domain. We identify the main gaps and opportunities in this research area, such as exploring collaboration scenarios beyond one-expert-to-one-trainee, enabling users to move across the reality-virtuality spectrum during a task, or exploring advanced interaction techniques that resort to hand or eye tracking. Our survey informs and helps researchers in different domains, including maintenance, medicine, engineering, or education, build and evaluate novel MR approaches to remote training and assistance. All supplemental materials are available at https://augmented-perception.org/publications/2023-training-survey.html.
14. Tian H, Lee GA, Bai H, Billinghurst M. Using Virtual Replicas to Improve Mixed Reality Remote Collaboration. IEEE Transactions on Visualization and Computer Graphics 2023; 29:2785-2795. PMID: 37027731. DOI: 10.1109/tvcg.2023.3247113.
Abstract
In this paper, we explore how virtual replicas can enhance Mixed Reality (MR) remote collaboration with a 3D reconstruction of the task space. People in different locations may need to work together remotely on complicated tasks. For example, a local user could follow a remote expert's instructions to complete a physical task. However, it could be challenging for the local user to fully understand the remote expert's intentions without effective spatial referencing and action demonstration. In this research, we investigate how virtual replicas can work as a spatial communication cue to improve MR remote collaboration. This approach segments the foreground manipulable objects in the local environment and creates corresponding virtual replicas of physical task objects. The remote user can then manipulate these virtual replicas to explain the task and guide their partner. This enables the local user to rapidly and accurately understand the remote expert's intentions and instructions. Our user study with an object assembly task found that using virtual replica manipulation was more efficient than using 3D annotation drawing in an MR remote collaboration scenario. We report and discuss the findings and limitations of our system and study, and present directions for future research.
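A minimal sketch of the state sharing this implies is given below; the class and message format are assumptions for illustration, not the authors' implementation. The remote expert's manipulation of a replica is serialized as a pose update and applied to the corresponding copy on the local user's device.

import json

class VirtualReplica:
    # A virtual copy of a physical task object, identified consistently on both sites.
    def __init__(self, object_id, position=(0.0, 0.0, 0.0), rotation=(0.0, 0.0, 0.0, 1.0)):
        self.object_id = object_id
        self.position = list(position)
        self.rotation = list(rotation)   # quaternion (x, y, z, w)

    def pose_message(self):
        # Serialize the current pose for transmission to the collaborating site.
        return json.dumps({"id": self.object_id, "pos": self.position, "rot": self.rotation})

    def apply_message(self, payload):
        # Apply a pose update received from the remote expert's manipulation.
        message = json.loads(payload)
        if message["id"] == self.object_id:
            self.position = message["pos"]
            self.rotation = message["rot"]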
15. Ye S, Chen Z, Chu X, Li K, Luo J, Li Y, Geng G, Wu Y. PuzzleFixer: A Visual Reassembly System for Immersive Fragments Restoration. IEEE Transactions on Visualization and Computer Graphics 2023; 29:429-439. PMID: 36179001. DOI: 10.1109/tvcg.2022.3209388.
Abstract
We present PuzzleFixer, an immersive interactive system for experts to rectify defective reassembled 3D objects. Reassembling the fragments of a broken object to restore its original state is the prerequisite of many analytical tasks such as cultural relics analysis and forensics reasoning. While existing computer-aided methods can automatically reassemble fragments, they often derive incorrect objects due to the complex and ambiguous fragment shapes. Thus, experts usually need to refine the object manually. Prior advances in immersive technologies provide benefits for realistic perception and direct interactions to visualize and interact with 3D fragments. However, few studies have investigated the reassembled object refinement. The specific challenges include: 1) the fragment combination set is too large to determine the correct matches, and 2) the geometry of the fragments is too complex to align them properly. To tackle the first challenge, PuzzleFixer leverages dimensionality reduction and clustering techniques, allowing users to review possible match categories, select the matches with reasonable shapes, and drill down to shapes to correct the corresponding faces. For the second challenge, PuzzleFixer embeds the object with node-link networks to augment the perception of match relations. Specifically, it instantly visualizes matches with graph edges and provides force feedback to facilitate the efficiency of alignment interactions. To demonstrate the effectiveness of PuzzleFixer, we conducted an expert evaluation based on two cases on real-world artifacts and collected feedback through post-study interviews. The results suggest that our system is suitable and efficient for experts to refine incorrect reassembled objects.
16. Syed TA, Siddiqui MS, Abdullah HB, Jan S, Namoun A, Alzahrani A, Nadeem A, Alkhodre AB. In-Depth Review of Augmented Reality: Tracking Technologies, Development Tools, AR Displays, Collaborative AR, and Security Concerns. Sensors (Basel) 2022; 23:146. PMID: 36616745. PMCID: PMC9824627. DOI: 10.3390/s23010146.
Abstract
Augmented reality (AR) has gained enormous popularity and acceptance in the past few years. AR combines several enabling technologies that work together as integrated components to make it a workable and adaptable solution for many domains. These components include tracking, which maintains a point of reference so that virtual objects can be placed correctly in a real scene. Similarly, display technologies combine the virtual and real world in front of the user's eye. Authoring tools provide platforms to develop AR applications by giving access to low-level libraries, which in turn interact with tracking sensors, cameras, and other hardware. In addition, advances in distributed computing and collaborative augmented reality, in which multiple participants collaborate in an AR setting, also require stable solutions. The authors explore many solutions in these areas and present a comprehensive review intended to support research and business transformation. However, during this study we identified a lack of security solutions in various areas of collaborative AR (CAR), specifically in distributed trust management. This study therefore also proposes a trusted CAR architecture, illustrated with a tourism use case, that can serve as a model for researchers interested in building secure AR-based remote communication sessions.
Affiliation(s)
- Toqeer Ali Syed, Muhammad Shoaib Siddiqui, Abdallah Namoun, Ali Alzahrani, Adnan Nadeem, Ahmad B. Alkhodre: Faculty of Computer and Information Systems, Islamic University of Madinah, Medina 42351, Saudi Arabia
- Hurria Binte Abdullah: School of Social Sciences and Humanities, National University of Science and Technology (NUST), Islamabad 44000, Pakistan
- Salman Jan: Malaysian Institute of Information Technology, Universiti Kuala Lumpur, Kuala Lumpur 50250, Malaysia; Department of Computer Science, Bacha Khan University Charsadda, Charsadda 24420, Pakistan
17. Petkova R, Poulkov V, Manolova A, Tonchev K. Challenges in Implementing Low-Latency Holographic-Type Communication Systems. Sensors (Basel) 2022; 22:9617. PMID: 36559984. PMCID: PMC9784801. DOI: 10.3390/s22249617.
Abstract
Holographic-type communication (HTC) permits new levels of engagement between remote users. It is anticipated to provide a highly immersive experience while enhancing the sense of spatial co-presence. Alongside these newly revealed advantages, however, stringent system requirements are imposed, such as multi-sensory and multi-dimensional data capture and reproduction, ultra-lightweight processing, ultra-low-latency transmission, realistic avatar embodiment conveying gestures and facial expressions, support for an arbitrary number of participants, etc. In this paper, we review the current limitations to HTC system implementation and systemize the main challenges into a few major groups. Furthermore, we propose a conceptual framework for the realization of an HTC system that will guarantee the desired low-latency transmission, lightweight processing, and ease of scalability, all accompanied by a higher level of realism in human body appearance and dynamics.
18. Marques B, Silva S, Alves J, Araujo T, Dias P, Santos BS. A Conceptual Model and Taxonomy for Collaborative Augmented Reality. IEEE Transactions on Visualization and Computer Graphics 2022; 28:5113-5133. PMID: 34347599. DOI: 10.1109/tvcg.2021.3101545.
Abstract
To support the nuances of collaborative work, many researchers have been exploring the field of Augmented Reality (AR), aiming to assist in co-located or remote scenarios. Solutions using AR take advantage of the seamless integration of virtual objects and real-world objects, thus providing collaborators with a shared understanding or common-ground environment. However, most research efforts so far have been devoted to experimenting with technology and maturing the methods that support its design and development. Therefore, it is now time to understand where the field stands and how well it can address collaborative work with AR, in order to better characterize and evaluate the collaboration process. In this article, we analyse the different dimensions that should be taken into account when assessing the contributions of AR to the collaborative work effort. We then bring these dimensions forward into a conceptual framework and propose an extended human-centered taxonomy for categorizing the main features of Collaborative AR. Our goal is to foster harmonization of perspectives for the field, which may help create a common ground for systematization and discussion. We hope to influence and improve how research in this field is reported by providing a structured list of its defining characteristics. Finally, some examples of the use of the taxonomy are presented to show how it can serve to gather information for characterizing AR-supported collaborative work, and to illustrate its potential as the grounds for further studies.
19. Valades-Cruz CA, Leconte L, Fouche G, Blanc T, Van Hille N, Fournier K, Laurent T, Gallean B, Deslandes F, Hajj B, Faure E, Argelaguet F, Trubuil A, Isenberg T, Masson JB, Salamero J, Kervrann C. Challenges of intracellular visualization using virtual and augmented reality. Frontiers in Bioinformatics 2022; 2:997082. PMID: 36304296. PMCID: PMC9580941. DOI: 10.3389/fbinf.2022.997082.
Abstract
Microscopy image observation is commonly performed on 2D screens, which limits human capacities to grasp volumetric, complex, and discrete biological dynamics. With the massive production of multidimensional images (3D + time, multi-channel) and derived images (e.g., restored images, segmentation maps, and object tracks), scientists need appropriate visualization and navigation methods to better apprehend the amount of information in their content. New modes of visualization have emerged, including virtual reality (VR)/augmented reality (AR) approaches, which should allow more accurate analysis and exploration of large time series of volumetric images, such as those produced by the latest 3D + time fluorescence microscopy. They include integrated algorithms that allow researchers to interactively explore complex spatiotemporal objects at the scale of single cells or multicellular systems, almost in real time. In practice, however, immersion of the user within 3D + time microscopy data represents both a paradigm shift in human-image interaction and an acculturation challenge for the community concerned. To promote a broader adoption of these approaches by biologists, further dialogue is needed between the bioimaging community and VR/AR developers.
Affiliation(s)
- Cesar Augusto Valades-Cruz, Ludovic Leconte, Jean Salamero, Charles Kervrann: SERPICO Project Team, Inria Centre Rennes-Bretagne Atlantique, Rennes, France; SERPICO/STED Team, UMR144 CNRS Institut Curie, PSL Research University, Sorbonne Universites, Paris, France
- Gwendal Fouche, Kevin Fournier: SERPICO Project Team, Inria Centre Rennes-Bretagne Atlantique, Rennes, France; SERPICO/STED Team, UMR144 CNRS Institut Curie, PSL Research University, Sorbonne Universites, Paris, France; Inria, CNRS, IRISA, University Rennes, Rennes, France
- Thomas Blanc, Bassam Hajj: Laboratoire Physico-Chimie, Institut Curie, PSL Research University, Sorbonne Universites, CNRS UMR168, Paris, France
- Tao Laurent, Emmanuel Faure: LIRMM, Université Montpellier, CNRS, Montpellier, France
- Alain Trubuil: MaIAGE, INRAE, Université Paris-Saclay, Jouy-en-Josas, France
- Jean-Baptiste Masson: Decision and Bayesian Computation, Neuroscience and Computational Biology Departments, CNRS UMR 3571, Institut Pasteur, Université Paris Cité, Paris, France
20. Ayyanchira A, Mahfoud E, Wang W, Lu A. Toward cross-platform immersive visualization for indoor navigation and collaboration with augmented reality. J Vis (Tokyo) 2022. DOI: 10.1007/s12650-022-00852-9.
21. Nikolaidis A. What is Significant in Modern Augmented Reality: A Systematic Analysis of Existing Reviews. J Imaging 2022; 8:145. PMID: 35621909. PMCID: PMC9144923. DOI: 10.3390/jimaging8050145.
Abstract
Augmented reality (AR) is a field of technology that has evolved drastically during the last decades, due to its vast range of applications in everyday life. The aim of this paper is to provide researchers with an overview of what has been surveyed since 2010, in terms of both AR application areas and technical aspects; to discuss the extent to which these application areas and technical aspects have been covered; to examine whether useful evidence can be extracted about aspects that have not been covered adequately; and to determine whether common taxonomy criteria can be defined for performing AR reviews in the future. To this end, a search with inclusion and exclusion criteria was performed in the Scopus database, producing a representative set of 47 reviews covering the years from 2010 onwards. A taxonomy of the results is introduced, and the findings reveal, among other things, a lack of AR application reviews covering all suggested criteria.
Affiliation(s)
- Athanasios Nikolaidis: Department of Informatics, Computer and Telecommunications Engineering, International Hellenic University, 62124 Serres, Greece
22. Mixed-Reality-Enhanced Human–Robot Interaction with an Imitation-Based Mapping Approach for Intuitive Teleoperation of a Robotic Arm-Hand System. Applied Sciences (Basel) 2022. DOI: 10.3390/app12094740.
Abstract
This paper presents an integrated mapping of motion and visualization scheme based on a Mixed Reality (MR) subspace approach for the intuitive and immersive telemanipulation of robotic arm-hand systems. The effectiveness of different control-feedback methods for the teleoperation system is validated and compared. The robotic arm-hand system consists of a 6 Degrees-of-Freedom (DOF) industrial manipulator and a low-cost 2-finger gripper, which can be manipulated in a natural manner by novice users physically distant from the working site. By incorporating MR technology, the user is fully immersed in a virtual operating space augmented by real-time 3D visual feedback from the robot working site. Imitation-based velocity-centric motion mapping is implemented via the MR subspace to accurately track operator hand movements for robot motion control, and it enables spatial velocity-based control of the robot Tool Center Point (TCP). The user control space and robot working space are overlaid through the MR subspace, and the local user and a digital twin of the remote robot share the same environment in the MR subspace. The MR-based motion and visualization mapping scheme for telerobotics is compared to conventional 2D Baseline and MR tele-control paradigms over two tabletop object manipulation experiments. A user survey of 24 participants was conducted to demonstrate the effectiveness and performance enhancements enabled by the proposed system. The MR-subspace-integrated 3D mapping of motion and visualization scheme reduced the aggregate task completion time by 48% compared to the 2D Baseline module and by 29% compared to the MR SpaceMouse module. The perceived workload decreased by 32% and 22% compared to the 2D Baseline and MR SpaceMouse approaches, respectively.
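As a rough illustration of velocity-centric motion mapping (an assumed formulation with made-up parameter values, not the controller described in the paper), the operator's hand displacement over one frame can be scaled into a commanded TCP velocity and clamped to a safe limit:

def tcp_velocity_command(hand_pos, prev_hand_pos, dt, scale=1.0, v_max=0.25):
    # Map hand motion (metres) over dt seconds to a clamped robot TCP velocity (m/s).
    velocity = [scale * (c - p) / dt for c, p in zip(hand_pos, prev_hand_pos)]
    speed = sum(v * v for v in velocity) ** 0.5
    if speed > v_max:                     # clamp so the commanded speed never exceeds the limit
        velocity = [v * v_max / speed for v in velocity]
    return velocity

# Example: a 2 cm hand movement along x within a 20 ms frame maps to 1 m/s,
# which the clamp reduces to the 0.25 m/s limit.
# cmd = tcp_velocity_command((0.02, 0.0, 0.0), (0.0, 0.0, 0.0), dt=0.02)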
23. Kurazume R, Hiramatsu T, Kamei M, Inoue D, Kawamura A, Miyauchi S, An Q. Development of AR training systems for Humanitude dementia care. Adv Robot 2022. DOI: 10.1080/01691864.2021.2017342.
Affiliation(s)
- Ryo Kurazume, Akihiro Kawamura, Shoko Miyauchi, Qi An: Faculty of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan
- Tomoki Hiramatsu, Masaya Kamei, Daiji Inoue: Graduate School of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan
24. Cross-Device Augmented Reality Annotations Method for Asynchronous Collaboration in Unprepared Environments. Information 2021. DOI: 10.3390/info12120519.
Abstract
Augmented Reality (AR) annotations are a powerful way of communicating when collaborators cannot be present at the same time in a given environment. However, this situation presents several challenges, for example: how to record AR annotations for later consumption, how to align the virtual and real worlds in unprepared environments, or how to offer the annotations to users with different AR devices. In this paper we present a cross-device AR annotation method that allows users to create and display annotations asynchronously in environments without the need for prior preparation (AR markers, point cloud capture, etc.). This is achieved through an easy user-assisted calibration process and a data model that allows any type of annotation to be stored on any device. An experimental study carried out with 40 participants verified our two hypotheses: AR annotations can be visualized in indoor environments without prior preparation regardless of the device used, and the overall usability of the system is satisfactory.
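The device-agnostic data model can be pictured with a minimal sketch; the fields below are assumptions rather than the published schema. Each annotation stores its content together with a pose expressed relative to the origin established by the user-assisted calibration, so any AR device that knows that origin can re-display it later.

from dataclasses import dataclass, asdict, field
import json
import time

@dataclass
class Annotation:
    # One stored AR annotation, positioned relative to the shared calibration origin.
    annotation_id: str
    author: str
    kind: str                # e.g. "text", "drawing", "audio"
    payload: str             # the content itself or a reference to a media file
    position: tuple          # (x, y, z) relative to the calibration origin
    rotation: tuple          # quaternion relative to the same origin
    created_at: float = field(default_factory=time.time)

def save_annotations(annotations, path):
    # Persist annotations so a different device can load and display them later.
    with open(path, "w") as f:
        json.dump([asdict(a) for a in annotations], f)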
25. Role-Aware Information Spread in Online Social Networks. Entropy 2021; 23:1542. PMID: 34828240. PMCID: PMC8618065. DOI: 10.3390/e23111542.
Abstract
Understanding the complex process of information spread in online social networks (OSNs) enables the efficient maximization/minimization of the spread of useful/harmful information. Users assume various roles based on their behaviors while engaging with information in these OSNs. Recent reviews on information spread in OSNs have focused on algorithms and challenges for modeling the local node-to-node cascading paths of viral information. However, they neglected to analyze non-viral information with low reach size that can also spread globally beyond OSN edges (links) via non-neighbors through, for example, pushed information via content recommendation algorithms. Previous reviews have also not fully considered user roles in the spread of information. To address these gaps, we: (i) provide a comprehensive survey of the latest studies on role-aware information spread in OSNs, also addressing the different temporal spreading patterns of viral and non-viral information; (ii) survey modeling approaches that consider structural, non-structural, and hybrid features, and provide a taxonomy of these approaches; (iii) review software platforms for the analysis and visualization of role-aware information spread in OSNs; and (iv) describe how information spread models enable useful applications in OSNs such as detecting influential users. We conclude by highlighting future research directions for studying information spread in OSNs, accounting for dynamic user roles.
26.
Abstract
Augmented reality (AR) allows the real and digital worlds to converge and overlap in a new way of observation and understanding. The architectural field can benefit significantly from AR applications, given the field's systemic complexity in terms of knowledge and process management. Global interest and many research challenges are focused on this field, thanks to the conjunction of technological and algorithmic developments on the one hand, and the massive digitization of built data on the other. A significant quantity of research in the AEC and educational fields describes this state of the art. Moreover, it is a very fragmented domain, in which specific advances or case studies are often described without considering the complexity of the whole development process. This article illustrates the entire development of an AR pipeline in architecture, from the conceptual phase to its application, highlighting the specific aspects of each step. This narrative aims to provide a general overview for non-experts, deepening the topic and stimulating a democratization process. The aware and extended use of AR in multiple areas of application can open a new way forward for environmental understanding, bridging the gap between real and virtual space in an innovative perception of architecture.
27. Yoon H, Kim SK, Lee Y, Choi J. Google Glass-Supported Cooperative Training for Health Professionals: A Case Study Based on Using Remote Desktop Virtual Support. J Multidiscip Healthc 2021; 14:1451-1462. PMID: 34168458. PMCID: PMC8216757. DOI: 10.2147/jmdh.s311766.
Abstract
Purpose: Observation of medical trainees' care performance by experts can be extremely helpful for ensuring safety and providing quality care. The advanced technology of smart glasses enables health professionals to video stream their operations to remote supporters for collaboration and cooperation. This study monitored the clinical situation by using smart glasses for remote cooperative training via video streaming and clinical decision-making through simulation, based on a scenario of emergency nursing care for patients with arrhythmia. Participants and Methods: The clinical operations of bedside trainees, who wore Google Glass Enterprise Edition 2 (Glass EE2), were live streamed via their glasses and viewed at a remote site by remote supporters on a desktop computer. Data were obtained from 31 nursing students using eight essay questions regarding their experience as desktop-side remote supporters. Results: Most of the participants reported feeling uneasy about identifying clinical situations (84%), patients' condition (72%), and trainees' performance (69%). The current system demonstrated sufficient performance with a satisfactory level of image quality and auditory communication, while network and connectivity are areas that require further improvement. The reported barriers to identifying situations on the remote desktop were predominantly a narrow field of view and motion blur in videos captured by the Glass EE2s, and use of the customized mirror mode. Conclusion: The current commercial Glass EE2 can facilitate enriched communication between remotely located supporters and trainees by sharing live video and audio during clinical operations. Further improvement of hardware and software user interfaces will ensure better applicability of smart glasses and video streaming functions to clinical practice settings.
Affiliation(s)
- Hyoseok Yoon: Division of Computer Engineering, Hanshin University, Osan, Korea
- Sun Kyung Kim: Department of Nursing, and Department of Biomedicine, Health & Life Convergence Sciences, BK21 Four, Biomedical and Healthcare Research Institute, Mokpo National University, Jeonnam, Korea
- Youngho Lee, Jongmyung Choi: Department of Computer Engineering, Mokpo National University, Jeonnam, Korea
28. Creating Collaborative Augmented Reality Experiences for Industry 4.0 Training and Assistance Applications: Performance Evaluation in the Shipyard of the Future. Applied Sciences (Basel) 2020. DOI: 10.3390/app10249073.
Abstract
Industrial Augmented Reality (IAR) is one of the key technologies pointed out by the Industry 4.0 paradigm as a tool for improving industrial processes and for maximizing worker efficiency. Training and assistance are two of the most popular IAR-enabled applications, since they may significantly facilitate, support, and optimize production and assembly tasks in industrial environments. This article presents an IAR collaborative application developed jointly by Navantia, one of the biggest European shipbuilders, and the University of A Coruña (Spain). The analysis, design, and implementation of such an IAR application are described thoroughly so as to enable future developers to create similar IAR applications. The IAR application is based on the Microsoft HoloLens smart glasses and is able to assist and guide shipyard operators during their training and in assembly tasks. The proposed IAR application embeds a novel collaborative protocol that allows operators to visualize and interact in a synchronized way with the same virtual content. Thus, all operators who share an IAR experience see each virtual object positioned at the same physical spot and in the same state. The collaborative application is first evaluated and optimized in terms of packet communication delay and anchor transmission latency, and then its validation in a shipyard workshop by Navantia's operators is presented. The performance results show fast response times for regular packets (less than 5 ms), low interference rates in the 5 GHz band, and an anchor transmission latency of up to 30 s. The validation tests provide useful insights and feedback from the industrial operators, as well as clear guidelines that will help future developers face the challenges that will arise when creating the next generation of IAR applications.
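To picture the kind of synchronization such a collaborative protocol provides (a hypothetical sketch, not the protocol described in the article), each shared virtual object can be broadcast as an update relative to a common spatial anchor, so every operator's device renders it at the same physical spot and in the same state:

import json

shared_state = {}   # object_id -> latest known anchor, pose and assembly step

def make_update(object_id, anchor_id, pose, step):
    # Build a broadcast message describing one shared virtual object.
    return json.dumps({"type": "object_update", "id": object_id,
                       "anchor": anchor_id, "pose": pose, "step": step})

def apply_update(message):
    # Apply a received update; newer assembly steps overwrite older ones.
    update = json.loads(message)
    current = shared_state.get(update["id"])
    if current is None or update["step"] >= current["step"]:
        shared_state[update["id"]] = {"anchor": update["anchor"],
                                      "pose": update["pose"],
                                      "step": update["step"]}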