1
Gilbert D, Bose A, Kuhlen TW, Weissker T. PASCAL - A Collaboration Technique Between Non-Collocated Avatars in Large Collaborative Virtual Environments. IEEE Transactions on Visualization and Computer Graphics 2025; 31:3525-3535. [PMID: 40053636] [DOI: 10.1109/tvcg.2025.3549175]
Abstract
Collaborative work in large virtual environments often requires transitions from loosely coupled collaboration at different locations to tightly coupled collaboration at a common meeting point. Inspired by prior work on the continuum between these extremes, we present two novel interaction techniques designed to share spatial context while collaborating over large virtual distances. The first method replicates the familiar setup of a video conference by providing users with a virtual tablet for sharing video feeds with their peers. The second method, called PASCAL (Parallel Avatars in a Shared Collaborative Aura Link), enables users to share their immediate spatial surroundings with others by creating synchronized copies of them at the remote locations of their collaborators. We evaluated both techniques in a within-subject user study in which 24 participants were tasked with solving a puzzle in groups of two. Our results indicate that the additional contextual information provided by PASCAL had significant positive effects on task completion time, ease of communication, mutual understanding, and co-presence. Our insights thereby contribute to the repertoire of successful interaction techniques for mediating between loosely and tightly coupled work in collaborative virtual environments.
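To make the aura idea concrete, here is a minimal Python sketch of the kind of replication PASCAL describes: objects within a radius of the sharing user are selected and re-anchored at the remote collaborator's position. All types, names, and values are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass
import math

@dataclass
class SceneObject:
    name: str
    x: float
    y: float
    z: float

def select_aura(objects, centre, radius):
    """Collect the scene objects inside a user's spherical 'aura'."""
    return [o for o in objects if math.dist((o.x, o.y, o.z), centre) <= radius]

def replicate_at(objects, src, dst):
    """Copy the selected objects so that each keeps its offset from the
    sharing user but is re-anchored at the remote collaborator's position."""
    (sx, sy, sz), (dx, dy, dz) = src, dst
    return [SceneObject(o.name, o.x - sx + dx, o.y - sy + dy, o.z - sz + dz)
            for o in objects]

# Share a 2 m aura around user A (at the origin) with user B elsewhere in the VE.
scene = [SceneObject("puzzle_piece", 1.0, 0.0, 1.5), SceneObject("table", 9.0, 0.0, 9.0)]
shared = replicate_at(select_aura(scene, (0.0, 0.0, 0.0), 2.0),
                      (0.0, 0.0, 0.0), (50.0, 0.0, 20.0))
```

A synchronization layer would then keep each copied object's state mirrored between the two anchors; that part is omitted here.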
2
Wang M, Li YJ, Shi J, Steinicke F. SceneFusion: Room-Scale Environmental Fusion for Efficient Traveling Between Separate Virtual Environments. IEEE Transactions on Visualization and Computer Graphics 2024; 30:4615-4630. [PMID: 37126613] [DOI: 10.1109/tvcg.2023.3271709]
Abstract
Traveling between scenes has become a major requirement for navigation in numerous virtual reality (VR) social platforms and game applications, allowing users to efficiently explore multiple virtual environments (VEs). To facilitate scene transitions, techniques such as instant teleportation and virtual portals have been widely adopted. However, these techniques become inefficient when frequent travel between separate VEs is required, particularly within indoor environments. In this article, we first analyze the design rationale for a novel navigation method supporting efficient travel between virtual indoor scenes. Based on this analysis, we introduce SceneFusion, a technique that fuses separate virtual rooms into an integrated environment. SceneFusion enables users to perceive rich visual information from both rooms simultaneously, achieving high visual continuity and spatial awareness. Whereas existing teleportation techniques passively transport users, SceneFusion lets users actively access the fused environment using short-range locomotion techniques. User experiments confirmed that SceneFusion outperforms instant teleportation and virtual portals in terms of efficiency, workload, and preference for both single-user exploration and multi-user collaboration tasks in separate VEs. SceneFusion thus presents an effective solution for seamless travel between virtual indoor scenes.
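As a rough illustration of room fusion, the sketch below computes a translation that places one room's floor plan flush against another's, so both become reachable by short-range locomotion. The axis-aligned room model and the alignment rule are simplifying assumptions, not the published method.

```python
from dataclasses import dataclass

@dataclass
class Room:
    min_x: float
    min_z: float
    max_x: float
    max_z: float

def fusion_offset(room_a: Room, room_b: Room, gap: float = 0.0):
    """Translation (dx, dz) that places room B flush against room A's +x wall,
    with the rooms' centres aligned along z."""
    dx = room_a.max_x + gap - room_b.min_x
    dz = (room_a.min_z + room_a.max_z) / 2 - (room_b.min_z + room_b.max_z) / 2
    return dx, dz

# Room B originally lives far away in its own VE; fuse it next to room A.
a = Room(0.0, 0.0, 5.0, 4.0)
b = Room(100.0, 100.0, 104.0, 103.0)
dx, dz = fusion_offset(a, b)
# Applying (dx, dz) to every object in room B yields the fused environment,
# which can then be covered with short-range locomotion such as walking.
```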
3
Maloca PM, Zarranz-Ventura J, Valmaggia P, Faludi B, Zelechowski M, Tufail A, Zentai NZ, Scholl HPN, Cattin PC. Validation of collaborative cyberspace virtual reality oculometry enhanced with near real-time spatial audio. Sci Rep 2023; 13:10076. [PMID: 37344554] [DOI: 10.1038/s41598-023-37267-x]
Abstract
Currently, most medical image data, such as optical coherence tomography (OCT) images, are displayed in two dimensions on a computer screen. Advances in information technology mean that these data are increasingly stored in electronic form, yet they are usually processed only locally, on site. To overcome this hurdle, a cyberspace virtual reality (csVR) application was validated in which interactive OCT data were presented simultaneously at geographically distant sites (Lucerne, London, and Barcelona), where three graders independently measured ocular diameters in csVR OCT. A total of 109 objects were each measured three times, yielding 327 csVR measurements. The mean absolute difference among the three measurements of an object was small, at 5.3 µm (standard deviation 4.2 µm; coefficient of variation 0.3% with respect to the mean object size). Despite 5 h of online work, csVR was well tolerated and safe. Digital high-resolution OCT data can thus be processed remotely and collaboratively in csVR: measurements and actions, enhanced with spatial audio communication, can be made consistently in near real time even when users are geographically far apart. The proposed visuo-auditory framework has the potential to further advance digital medicine toward csVR-based precision and collaborative medicine.
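The agreement statistics reported above can be illustrated with a short worked example. The aggregation details below (pairwise absolute differences among the three graders' values, coefficient of variation relative to the mean object size) are a plausible reading of the abstract, not the paper's exact procedure, and the measurement values are hypothetical.

```python
from itertools import combinations
from statistics import mean, stdev

def mean_abs_difference(measurements):
    """Mean absolute pairwise difference among repeated measurements (µm)."""
    return mean(abs(a - b) for a, b in combinations(measurements, 2))

# Hypothetical triple measurement of one object by the three graders:
triple = [1520.0, 1526.0, 1523.0]  # µm

mad = mean_abs_difference(triple)        # 4.0 µm for this example
cv = stdev(triple) / mean(triple) * 100  # coefficient of variation in %
print(f"MAD = {mad:.1f} µm, CV = {cv:.2f} %")
```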
Affiliation(s)
- Peter M Maloca: Institute of Molecular and Clinical Ophthalmology Basel (IOB), 4031 Basel, Switzerland; Department of Ophthalmology, University Hospital Basel, 4031 Basel, Switzerland; Moorfields Eye Hospital NHS Foundation Trust, London EC1V 2PD, UK
- Philippe Valmaggia: Institute of Molecular and Clinical Ophthalmology Basel (IOB), 4031 Basel, Switzerland; Department of Ophthalmology, University Hospital Basel, 4031 Basel, Switzerland; Moorfields Eye Hospital NHS Foundation Trust, London EC1V 2PD, UK
- Balázs Faludi: Centre for Medical Image Analysis & Navigation, University of Basel, 4123 Allschwil-Basel, Switzerland
- Marek Zelechowski: Centre for Medical Image Analysis & Navigation, University of Basel, 4123 Allschwil-Basel, Switzerland
- Adnan Tufail: Moorfields Eye Hospital NHS Foundation Trust, London EC1V 2PD, UK
- Norbert Z Zentai: Centre for Medical Image Analysis & Navigation, University of Basel, 4123 Allschwil-Basel, Switzerland
- Hendrik P N Scholl: Institute of Molecular and Clinical Ophthalmology Basel (IOB), 4031 Basel, Switzerland; Department of Ophthalmology, University Hospital Basel, 4031 Basel, Switzerland
- Philippe C Cattin: Centre for Medical Image Analysis & Navigation, University of Basel, 4123 Allschwil-Basel, Switzerland
4
Abstract
The development of services and applications involving drones is driving the growth of the unmanned-aerial-vehicle industry, and the supply of low-cost compact drones has greatly contributed to the popularization of drone flying. However, flying first-person-view (FPV) drones requires considerable experience, because the remote pilot must control the drone while watching a video stream transmitted from its onboard camera. In this paper, we propose a remote training system for FPV drone flying in mixed reality, with which beginners who are inexperienced in FPV flight control can practice under the guidance of remote experts.
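One building block such a system plausibly needs is projecting a remote expert's 3D guidance cue into the trainee's FPV video frame. The sketch below uses a plain pinhole camera model; all names and parameters are illustrative assumptions rather than details from the paper.

```python
def project_waypoint(p_cam, fx, fy, cx, cy):
    """Map a guidance waypoint given in camera coordinates (x right, y down,
    z forward) to pixel coordinates, or return None if it lies behind the camera."""
    x, y, z = p_cam
    if z <= 0.0:
        return None
    return (fx * x / z + cx, fy * y / z + cy)

# A waypoint 5 m ahead and slightly right of the drone, with a 640x480 feed:
uv = project_waypoint((0.8, -0.2, 5.0), fx=500.0, fy=500.0, cx=320.0, cy=240.0)
# The mixed-reality overlay would draw the expert's guidance marker at 'uv'
# on top of the transmitted FPV video.
```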
5
Lee B, Hu X, Cordeil M, Prouzeau A, Jenny B, Dwyer T. Shared Surfaces and Spaces: Collaborative Data Visualisation in a Co-located Immersive Environment. IEEE Transactions on Visualization and Computer Graphics 2021; 27:1171-1181. [PMID: 33048740] [DOI: 10.1109/tvcg.2020.3030450]
Abstract
Immersive technologies offer new opportunities to support collaborative visual data analysis by providing each collaborator with a personal, high-resolution view of a flexible shared visualisation space through a head-mounted display. However, most prior studies of collaborative immersive analytics have focused on how groups interact with surface interfaces such as tabletops and wall displays. This paper reports on a study in which teams of three co-located participants were given flexible visualisation authoring tools, allowing a great deal of control over how they structured their shared workspace. They did so using a prototype system we call FIESTA: the Free-roaming Immersive Environment to Support Team-based Analysis. Unlike traditional visualisation tools, FIESTA allows users to freely position authoring interfaces and visualisation artefacts anywhere in the virtual environment, either on virtual surfaces or suspended within the interaction space. Our participants solved visual analytics tasks on a multivariate data set, both individually and collaboratively, creating a large number of 2D and 3D visualisations. Their behaviour suggests that surface usage is coupled with the visualisation type: participants often used walls to organise 2D visualisations but positioned 3D visualisations in the space around them. Outside of tightly-coupled collaboration, participants followed social protocols and did not interact with visualisations that did not belong to them, even when those were outside their owner's personal workspace.
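The observed wall-versus-space placement behaviour can be mimicked with a simple snap rule: attach a visualisation to a nearby virtual surface if one is close enough, otherwise leave it suspended. The sketch below is an assumption-laden illustration, not FIESTA's actual placement logic.

```python
from dataclasses import dataclass

@dataclass
class Wall:
    x: float  # vertical surface simplified to the plane x = const

def place_visualisation(pos, walls, snap_dist=0.3):
    """Snap to the nearest wall if within snap_dist metres, else float in space."""
    px, py, pz = pos
    nearest = min(walls, key=lambda w: abs(w.x - px), default=None)
    if nearest is not None and abs(nearest.x - px) <= snap_dist:
        return (nearest.x, py, pz), "on_surface"  # flush with the wall
    return pos, "in_space"                        # suspended mid-air

walls = [Wall(0.0), Wall(6.0)]
print(place_visualisation((5.8, 1.5, 2.0), walls))  # snaps to the wall at x = 6
print(place_visualisation((3.0, 1.5, 2.0), walls))  # stays suspended in space
```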
6
Weissker T, Bimberg P, Froehlich B. Getting There Together: Group Navigation in Distributed Virtual Environments. IEEE Transactions on Visualization and Computer Graphics 2020; 26:1860-1870. [PMID: 32070981] [DOI: 10.1109/tvcg.2020.2973474]
Abstract
We analyzed the design space of group navigation tasks in distributed virtual environments and present a framework of techniques for forming groups, distributing responsibilities, navigating together, and eventually splitting up again. To improve joint navigation, our work focused on an extension of the Multi-Ray Jumping technique that allows the spatial formation of two distributed users to be adjusted as part of the target specification process. The results of a quantitative user study showed that these adjustments lead to significant improvements in joint two-user travel, evidenced by more efficient travel sequences and lower task loads imposed on the navigator and the passenger. In a qualitative expert review covering all four stages of group navigation, we confirmed the effective and efficient use of our technique in a more realistic use-case scenario and concluded that remote collaboration benefits from fluent transitions between individual and group navigation.
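The core idea of formation-aware group jumping can be sketched as follows: instead of teleporting both users to the same point, the navigator specifies an arrival formation, and the technique computes separate, non-overlapping target poses. The side-by-side rule and parameters below are assumptions for illustration, not the authors' implementation.

```python
import math

def group_jump_targets(target, heading_deg, spacing):
    """Place navigator and passenger side by side at the jump target,
    offset perpendicular to the chosen arrival heading (x-z ground plane)."""
    h = math.radians(heading_deg)
    # Unit vector pointing to the right of the arrival heading.
    rx, rz = math.cos(h), -math.sin(h)
    tx, ty, tz = target
    navigator = (tx - rx * spacing / 2, ty, tz - rz * spacing / 2)
    passenger = (tx + rx * spacing / 2, ty, tz + rz * spacing / 2)
    return navigator, passenger

# Both users arrive 1 m apart, facing heading 90 degrees, instead of overlapping:
nav, pas = group_jump_targets((10.0, 0.0, 4.0), 90.0, 1.0)
```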