1
Wu J, Zhang W, Chen H, Lin W, Shi X, Wang L. PwP: Permutating with Probability for Efficient Group Selection in VR. IEEE Transactions on Visualization and Computer Graphics 2025; 31:2384-2394. [PMID: 40067701] [DOI: 10.1109/tvcg.2025.3549560]
Abstract
Group selection in virtual reality is an important means of multi-object selection: it allows users to quickly group multiple objects and can significantly improve the efficiency of operating on many objects of different types. In this paper, we propose a group selection method based on multiple rounds of probability permutation. In each round, interactive selection, object-grouping probability computation, and position rearrangement make the object layout of the next round easier to batch-select, substantially improving the efficiency of group selection. We conducted ablation experiments to determine the algorithm's coefficients and validate its effectiveness. In addition, an empirical user study showed that our method significantly improves the efficiency of group selection tasks in an immersive virtual reality environment. The reduced number of operations also indirectly lowers the user's task load and improves usability.
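The round structure the abstract describes lends itself to a compact sketch. Below is a minimal, illustrative Python version of one PwP-style round, assuming a softmax over feature-space distances as the grouping-probability model and a simple slot permutation for the rearrangement step; the function names, scoring model, and layout scheme are our assumptions, not the authors' exact algorithm.

```python
import numpy as np

def pwp_round(features, group_protos, slot_positions):
    """One illustrative PwP-style round: estimate each object's grouping
    probability, then permute objects across layout slots so objects that
    likely share a group become neighbours, making the next round's batch
    selection cheaper."""
    # Grouping probability: softmax over negative feature-space distances.
    dist = np.linalg.norm(features[:, None, :] - group_protos[None, :, :], axis=-1)
    w = np.exp(-dist)
    probs = w / w.sum(axis=1, keepdims=True)
    group = probs.argmax(axis=1)          # most likely group per object
    conf = probs.max(axis=1)              # confidence of that guess
    # Sort objects by predicted group (most confident first within a group)
    # and assign them to consecutive slots, clustering probable groups.
    order = np.lexsort((-conf, group))
    new_positions = np.empty_like(slot_positions)
    new_positions[order] = slot_positions
    return new_positions, probs

# Toy run: six 1-D feature objects, two group prototypes, slots on a line.
feats = np.array([[0.1], [0.9], [0.2], [0.8], [0.15], [0.85]])
protos = np.array([[0.0], [1.0]])
slots = np.linspace(0.0, 5.0, 6)[:, None]
positions, probs = pwp_round(feats, protos, slots)
```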
2
Wang X, Shen L, Chen L, Fan M, Lee LH. TeamPortal: Exploring Virtual Reality Collaboration Through Shared and Manipulating Parallel Views. IEEE Transactions on Visualization and Computer Graphics 2025; 31:3314-3324. [PMID: 40067698] [DOI: 10.1109/tvcg.2025.3549569]
Abstract
Virtual Reality (VR) offers a unique collaborative experience, with parallel views playing a pivotal role in Collaborative Virtual Environments by supporting the transfer and delivery of items. Sharing and manipulating a partner's view gives users a broader perspective that helps them identify targets and partner actions. We therefore propose TeamPortal, and we conducted two user studies with 72 participants (36 pairs) to investigate the potential benefits of interactive, shared perspectives in VR collaboration. Our first study compared ShaView and TeamPortal against a baseline in a collaborative task comprising a series of search and manipulation subtasks. The results show that TeamPortal significantly reduced movement and increased collaborative efficiency and social presence in complex tasks. Following these results, the second study evaluated three variants: TeamPortal+, SnapTeamPortal+, and DropTeamPortal+. Both SnapTeamPortal+ and DropTeamPortal+ improved task efficiency and participants' willingness to adopt these techniques, though SnapTeamPortal+ reduced co-presence. Based on the findings, we propose three design implications to inform the development of future VR collaboration systems.
3
Dai S, Li Y, Ens B, Besançon L, Dwyer T. Precise Embodied Data Selection with Haptic Feedback while Retaining Room-Scale Visualisation Context. IEEE Transactions on Visualization and Computer Graphics 2025; 31:602-612. [PMID: 39250401] [DOI: 10.1109/tvcg.2024.3456399]
Abstract
Room-scale immersive data visualisations give viewers a full-scale overview of a large dataset, but to interact precisely with individual data points they typically have to navigate to change their point of view. In traditional screen-based visualisations, focus-and-context techniques allow users to keep a full dataset in view while making detailed selections. Such techniques have been studied extensively on the desktop to allow precise selection within large datasets, but they have not been explored in immersive 3D modalities. In this paper, we develop a novel immersive focus-and-context technique based on a "magic portal" metaphor adapted specifically for data visualisation scenarios. An extendable-hand interaction technique is used to place a portal close to the region of interest; the other end of the portal then opens comfortably within the user's physical reach so that they can reach through to precisely select individual data points. In a controlled study with 12 participants, we found strong evidence that portals reduce selection overshoots and overall hand trajectory length, reducing arm and shoulder fatigue compared to ranged interaction without the portal. The portal also enables us to use a robot arm to provide haptic feedback for data within the limited volume of the portal region. In a second study with another 12 participants, we found that haptics provided a positive experience (qualitative feedback) but did not significantly reduce fatigue. We demonstrate applications for portal-based selection through two use-case scenarios.
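The portal mapping itself reduces to a rigid change of coordinate frames, sketched below under our own assumptions (rotation-matrix poses, a single rigid transform, hypothetical names); the authors' portal placement logic and robot-arm haptics are not reproduced here.

```python
import numpy as np

def through_portal(p_world, near_pose, far_pose):
    """Map a world-space hand position through the near portal (opened
    within the user's reach) to the far portal at the region of interest.
    Each pose is an (R, t) pair: 3x3 rotation matrix and translation."""
    R_near, t_near = near_pose
    R_far, t_far = far_pose
    p_local = R_near.T @ (p_world - t_near)   # world -> near-portal frame
    return R_far @ p_local + t_far            # near-portal -> far world

# Example: the far portal sits 3 m ahead, rotated 90 degrees about Y, so a
# small reach through the near portal selects points in the remote region.
identity = np.eye(3)
rot_y_90 = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0], [-1.0, 0.0, 0.0]])
p = through_portal(np.array([0.1, 1.2, 0.4]),
                   (identity, np.zeros(3)),
                   (rot_y_90, np.array([0.0, 0.0, 3.0])))
```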
4
Kim W, Xiong S. TouchView: Mid-Air Touch on Zoomable 2D View for Distant Freehand Selection on a Virtual Reality User Interface. Sensors (Basel, Switzerland) 2024; 24:7202. [PMID: 39598980] [PMCID: PMC11598294] [DOI: 10.3390/s24227202]
Abstract
Selection is a fundamental interaction element in virtual reality (VR) and 3D user interfaces (UIs). Raycasting, one of the most common object selection techniques, is known to have difficulties in selecting small or distant objects. Meanwhile, recent advancements in computer vision technology have enabled seamless vision-based hand tracking in consumer VR headsets, enhancing accessibility to freehand mid-air interaction and highlighting the need for further research in this area. This study proposes a new technique called TouchView, which utilizes a virtual panel with a modern adaptation of the Through-the-Lens metaphor to improve freehand selection for VR UIs. TouchView enables faster and less demanding target selection by allowing direct touch interaction with the magnified object proxies reflected on the panel view. A repeated-measures ANOVA on the results of a follow-up experiment on multitarget selection with 23 participants showed that TouchView outperformed the current market-dominating freehand raycasting technique, Hybrid Ray, in terms of task performance, perceived workload, and preference. User behavior was also analyzed to understand the underlying reasons for these improvements. The proposed technique can be used in VR UI applications to enhance the selection of distant objects, especially for cases with frequent view shifts.
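As a rough sketch of the Through-the-Lens idea, the panel can be treated as the image plane of a secondary camera whose field of view shrinks as zoom increases, so a touch on the panel unprojects to a pick ray into the scene. The parameterization below (normalized touch coordinates, symmetric FOV, zoom as an FOV divisor) is our assumption, not the paper's implementation.

```python
import numpy as np

def panel_touch_to_ray(touch_uv, cam_pos, forward, up, fov_deg=60.0, zoom=4.0):
    """Unproject a touch at normalized panel coordinates (u, v in [0, 1])
    through the Through-the-Lens camera rendering the panel. Zooming
    narrows the effective FOV, which is what magnifies distant targets."""
    half = np.tan(np.radians(fov_deg / zoom) / 2.0)   # half-extent of image plane
    right = np.cross(forward, up)
    x = (touch_uv[0] - 0.5) * 2.0 * half
    y = (touch_uv[1] - 0.5) * 2.0 * half
    direction = forward + x * right + y * up
    return cam_pos, direction / np.linalg.norm(direction)

# A touch slightly right of panel centre yields a ray for scene picking.
origin, ray = panel_touch_to_ray(np.array([0.6, 0.5]),
                                 np.array([0.0, 1.6, 0.0]),
                                 np.array([0.0, 0.0, 1.0]),
                                 np.array([0.0, 1.0, 0.0]))
```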
Affiliation(s)
- Woojoo Kim: Division of Liberal Studies, Kangwon National University, Chuncheon 24341, Republic of Korea
- Shuping Xiong: Department of Industrial and Systems Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon 34141, Republic of Korea
5
Wang M, Li YJ, Shi J, Steinicke F. SceneFusion: Room-Scale Environmental Fusion for Efficient Traveling Between Separate Virtual Environments. IEEE Transactions on Visualization and Computer Graphics 2024; 30:4615-4630. [PMID: 37126613] [DOI: 10.1109/tvcg.2023.3271709]
Abstract
Traveling between scenes has become a major requirement for navigation in numerous virtual reality (VR) social platforms and game applications, allowing users to efficiently explore multiple virtual environments (VEs). To facilitate scene transition, prevalent techniques such as instant teleportation and virtual portals have been extensively adopted. However, these techniques exhibit limitations when there is a need for frequent travel between separate VEs, particularly within indoor environments, resulting in low efficiency. In this article, we first analyze the design rationale for a novel navigation method supporting efficient travel between virtual indoor scenes. Based on the analysis, we introduce the SceneFusion technique that fuses separate virtual rooms into an integrated environment. SceneFusion enables users to perceive rich visual information from both rooms simultaneously, achieving high visual continuity and spatial awareness. While existing teleportation techniques passively transport users, SceneFusion allows users to actively access the fused environment using short-range locomotion techniques. User experiments confirmed that SceneFusion outperforms instant teleportation and virtual portal techniques in terms of efficiency, workload, and preference for both single-user exploration and multi-user collaboration tasks in separate VEs. Thus, SceneFusion presents an effective solution for seamless traveling between virtual indoor scenes.
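At its core, fusing two rooms means re-expressing one room's contents in the other's coordinate frame so that short-range locomotion can cross between them. The translation-only sketch below is a deliberate simplification under our own assumptions; the paper's fusion also reasons about walls and room layout, which we do not reproduce.

```python
import numpy as np

def fuse_rooms(room_a, room_b, offset):
    """Place every object of room B into room A's coordinate frame,
    shifted by `offset` (e.g., just beyond a shared opening), producing
    one integrated environment the user can simply walk across."""
    fused = {name: pos.copy() for name, pos in room_a.items()}
    for name, pos in room_b.items():
        fused[f"B/{name}"] = pos + offset   # prefix avoids name clashes
    return fused

# Room B is fused 5 m along +X, directly beside room A.
room_a = {"desk": np.array([1.0, 0.0, 2.0])}
room_b = {"sofa": np.array([0.5, 0.0, 1.0])}
env = fuse_rooms(room_a, room_b, np.array([5.0, 0.0, 0.0]))
```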
6
Wu H, Sun X, Tu H, Zhang X. ClockRay: A Wrist-Rotation Based Technique for Occluded-Target Selection in Virtual Reality. IEEE Transactions on Visualization and Computer Graphics 2024; 30:3767-3778. [PMID: 37022075] [DOI: 10.1109/tvcg.2023.3239951]
Abstract
Target selection is one of the essential operations enabled by interaction techniques in virtual reality (VR) environments. However, effectively positioning or selecting occluded objects remains under-investigated in VR, especially in the context of high-density or high-dimensional data visualization. In this paper, we propose ClockRay, an occluded-object selection technique that leverages intrinsic human wrist-rotation skill by integrating it with emerging ray selection techniques in VR environments. We describe the design space of the ClockRay technique and then evaluate its performance in a series of user studies. Drawing on the experimental results, we discuss the benefits of ClockRay compared to two popular ray selection techniques, RayCursor and RayCasting. Our findings can inform the design of VR-based interactive visualization systems for high-density data.
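A minimal sketch of the clock metaphor, assuming a depth-ordered list of ray hits and a linear mapping from wrist roll to list index (the sweep range and names are our assumptions, not the paper's parameterization):

```python
def clockray_pick(hits, wrist_roll_deg, sweep_deg=180.0):
    """Disambiguate targets stacked along one ray with wrist roll: rolling
    the wrist sweeps a clock-hand cursor across the depth-ordered hits,
    so occluded objects become selectable without moving the ray."""
    if not hits:
        return None
    t = (wrist_roll_deg % sweep_deg) / sweep_deg       # 0..1 along the list
    return hits[min(int(t * len(hits)), len(hits) - 1)]

# The ray pierces three stacked targets; ~100 degrees of roll lands on the
# middle one, which a plain raycast could never reach.
picked = clockray_pick(["near", "mid", "far"], 100.0)
```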
7
Krüger M, Gerrits T, Römer T, Kuhlen T, Weissker T. IntenSelect+: Enhancing Score-Based Selection in Virtual Reality. IEEE Transactions on Visualization and Computer Graphics 2024; 30:2829-2838. [PMID: 38437105] [DOI: 10.1109/tvcg.2024.3372077]
Abstract
Object selection in virtual environments is one of the most common and recurring interaction tasks, so the technique used can critically influence a system's overall efficiency and usability. IntenSelect is a scoring-based selection-by-volume technique that has been shown to offer improved selection performance over conventional raycasting in virtual reality. The original method, however, is best suited to small, point-like spherical objects, is challenging to parameterize, and has inherent limitations in terms of flexibility. We present IntenSelect+, an enhanced version of IntenSelect designed to overcome these shortcomings. In an empirical within-subjects user study with 42 participants, we compared IntenSelect+ to IntenSelect and conventional raycasting on various complex object configurations motivated by prior work. In addition to replicating the previously shown benefits of IntenSelect over raycasting, our results demonstrate significant advantages of IntenSelect+ over IntenSelect in selection performance, task load, and user experience. We therefore conclude that IntenSelect+ is a promising enhancement that enables faster, more precise, and more comfortable object selection in immersive virtual environments.
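The scoring idea can be sketched as follows: each frame, objects inside a cone around the ray earn a contribution that decays with angular distance, and per-object scores accumulate over time so the current selection stays stable. The constants, linear falloff, and names below are illustrative assumptions, not the parameterization of IntenSelect or IntenSelect+.

```python
import numpy as np

def update_scores(prev_scores, ray_origin, ray_dir, objects,
                  cone_deg=15.0, stickiness=0.8, snappiness=0.2):
    """One frame of scoring-based selection-by-volume: score objects by
    angular proximity to the (unit-length) ray, accumulate with decay,
    and select the argmax across frames for flicker-resistant behaviour."""
    scores = {}
    for name, pos in objects.items():
        to_obj = pos - ray_origin
        cos_a = np.dot(to_obj, ray_dir) / (np.linalg.norm(to_obj) + 1e-9)
        angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
        contrib = max(0.0, 1.0 - angle / cone_deg)   # 1 on-axis, 0 at edge
        scores[name] = (prev_scores.get(name, 0.0) * stickiness
                        + contrib * snappiness)
    return scores

objs = {"a": np.array([0.1, 0.0, 2.0]), "b": np.array([1.0, 0.0, 2.0])}
s = update_scores({}, np.zeros(3), np.array([0.0, 0.0, 1.0]), objs)
best = max(s, key=s.get)   # "a": closest to the ray axis
```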
8
Tian Y, Zheng Y, Zhao S, Ma X, Wang Y. Balancing Accuracy and Speed in Gaze-Touch Grid Menu Selection in AR via Mapping Sub-Menus to a Hand-Held Device. Sensors (Basel, Switzerland) 2023; 23:9587. [PMID: 38067960] [PMCID: PMC10708592] [DOI: 10.3390/s23239587]
Abstract
Eye gaze can be a potentially fast and ergonomic method for target selection in augmented reality (AR). However, the eye-tracking accuracy of current consumer-level AR systems is limited. State-of-the-art AR target selection techniques based on eye gaze and touch (gaze-touch), which follow the "eye gaze pre-selects, touch refines and confirms" mechanism, can significantly enhance selection accuracy, but their selection speed is usually compromised. To balance accuracy and speed in gaze-touch grid menu selection in AR, we propose the Hand-Held Sub-Menu (HHSM) technique. HHSM divides a grid menu into several sub-menus and maps the sub-menu pointed to by eye gaze onto the touchscreen of a hand-held device. To select a target item, the user first selects the sub-menu containing it via eye gaze and then confirms the selection on the touchscreen with a single touch action. We derived the design space of the HHSM technique and investigated it through a series of empirical studies. In a study involving 24 participants recruited from a local university, we found that HHSM effectively balances accuracy and speed in gaze-touch grid menu selection in AR: the error rate was approximately 2%, and the completion time per selection was around 0.93 s when participants used two thumbs to interact with the touchscreen, and approximately 1.1 s when they used only one finger.
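The two-stage mapping is easy to state in code. The sketch below assumes an 8 × 8 menu split into 2 × 4 sub-menus and normalized touchscreen coordinates; all layout numbers and names are illustrative, not the study's actual configuration.

```python
def hhsm_select(gaze_rc, touch_xy, sub=(2, 4), screen=(1.0, 1.0)):
    """Two-stage grid selection: the sub-menu containing the gazed cell is
    mapped onto the hand-held touchscreen, and one touch picks the item.
    gaze_rc is the (row, col) of the gazed grid cell."""
    sub_r, sub_c = sub
    # 1) Gaze pre-selects the sub-menu block that contains the gazed cell.
    block = (gaze_rc[0] // sub_r, gaze_rc[1] // sub_c)
    # 2) Touch position on the hand-held screen picks the cell inside it.
    r = min(int(touch_xy[1] / screen[1] * sub_r), sub_r - 1)
    c = min(int(touch_xy[0] / screen[0] * sub_c), sub_c - 1)
    return (block[0] * sub_r + r, block[1] * sub_c + c)

# Gaze rests on cell (5, 6) of the 8x8 menu; a touch near the screen's
# right edge confirms an item inside that sub-menu -> cell (4, 7).
item = hhsm_select((5, 6), (0.9, 0.3))
```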
Affiliation(s)
- Yang Tian: Guangxi Key Laboratory of Multimedia Communications and Network Technology, School of Computer, Electronics and Information, Guangxi University, Nanning 530004, China
- Yulin Zheng: Guangxi Key Laboratory of Multimedia Communications and Network Technology, School of Computer, Electronics and Information, Guangxi University, Nanning 530004, China
- Shengdong Zhao: Department of Computer Science, National University of Singapore, Singapore 119077, Singapore
- Xiaojuan Ma: Department of Computer Science and Engineering, Hong Kong University of Science and Technology, Hong Kong 999077, China
- Yunhai Wang: School of Computer Science and Technology, Shandong University, Qingdao 266237, China
9
Wilson G, McGill M, Medeiros D, Brewster S. A Lack of Restraint: Comparing Virtual Reality Interaction Techniques for Constrained Transport Seating. IEEE Transactions on Visualization and Computer Graphics 2023; 29:2390-2400. [PMID: 37028078] [DOI: 10.1109/tvcg.2023.3247084]
Abstract
Standalone Virtual Reality (VR) headsets can be used when travelling in cars, trains and planes. However, the constrained spaces around transport seating can leave users with little physical space in which to interact using their hands or controllers, and can increase the risk of invading other passengers' personal space or hitting nearby objects and surfaces. This hinders transport VR users from using most commercial VR applications, which are designed for unobstructed 1-2 m 360° home spaces. In this paper, we investigated whether three at-a-distance interaction techniques from the literature could be adapted to support common commercial VR movement inputs and so equalise the interaction capabilities of at-home and on-transport users: Linear Gain, Gaze-Supported Remote Hand, and AlphaCursor. First, we analysed commercial VR experiences to identify the most common movement inputs so that we could create gamified tasks based on them. We then investigated how well each technique could support these inputs from a constrained 50 × 50 cm space (representative of an economy plane seat) through a user study (N = 16), where participants played all three games with each technique. We measured task performance, unsafe movements (play boundary violations, total arm movement) and subjective experience, and compared results to a control 'at-home' condition (with unconstrained movement) to determine how similar performance and experience were. Results showed that Linear Gain was the best technique, with similar performance and user experience to the 'at-home' condition, albeit at the expense of a high number of boundary violations and large arm movements. In contrast, AlphaCursor kept users within bounds and minimised arm movement, but suffered from poorer performance and experience. Based on the results, we provide eight guidelines for the use of, and research into, at-a-distance techniques and constrained spaces.
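Of the three techniques, Linear Gain is the simplest to sketch: the virtual hand's offset from a calibration origin is the real offset scaled by a constant, letting a seat-constrained 50 × 50 cm workspace cover an at-home play volume. The gain value below is an illustrative assumption, not the study's calibrated setting.

```python
import numpy as np

def linear_gain(real_hand, origin, gain=3.0):
    """Amplify real hand displacement about a calibration origin so small
    movements in a constrained seat reach targets across a larger space."""
    real_hand, origin = np.asarray(real_hand), np.asarray(origin)
    return origin + gain * (real_hand - origin)

# A 10 cm real reach becomes a 30 cm virtual reach with gain 3.
virtual = linear_gain([0.10, 1.2, 0.0], [0.0, 1.2, 0.0])
```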
10
Wang Z, Zhao Y, Lu F. Gaze-Vergence-Controlled See-Through Vision in Augmented Reality. IEEE Transactions on Visualization and Computer Graphics 2022; 28:3843-3853. [PMID: 36049007] [DOI: 10.1109/tvcg.2022.3203110]
Abstract
Augmented Reality (AR) see-through vision is an interesting research topic since it enables users to see through walls and view occluded objects. Most existing research focuses on the visual effects of see-through vision, while interaction methods are less studied. However, we argue that common interaction modalities, e.g., mid-air click and speech, may not be the optimal way to control see-through vision: when we want to see through something, the act is physically related to our gaze depth/vergence and thus should be naturally controlled by the eyes. Following this idea, this paper proposes a novel gaze-vergence-controlled (GVC) see-through vision technique in AR. Since gaze depth is needed, we build a gaze tracking module with two infrared cameras and a corresponding algorithm, and assemble it onto the Microsoft HoloLens 2 to achieve gaze depth estimation. We then propose two different GVC modes for see-through vision to fit different scenarios. Extensive experimental results demonstrate that our gaze depth estimation is efficient and accurate. Compared with conventional interaction modalities, our GVC techniques are superior in terms of efficiency and are preferred by users. Finally, we present four example applications of gaze-vergence-controlled see-through vision.
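Estimating gaze depth from vergence amounts to triangulating the two eyes' gaze rays. The sketch below uses the standard closest-point midpoint between two skew rays; the paper's actual estimator and calibration pipeline may differ.

```python
import numpy as np

def vergence_depth(o_left, d_left, o_right, d_right):
    """Estimate the 3-D fixation point as the midpoint of the closest
    points between the two gaze rays (unit directions), returning the
    depth from the cyclopean eye along with the point itself."""
    w0 = o_left - o_right
    a, b, c = d_left @ d_left, d_left @ d_right, d_right @ d_right
    d, e = d_left @ w0, d_right @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-9:                 # (near-)parallel gaze: far away
        return np.inf, None
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    fixation = 0.5 * ((o_left + s * d_left) + (o_right + t * d_right))
    cyclopean = 0.5 * (o_left + o_right)
    return np.linalg.norm(fixation - cyclopean), fixation

# Eyes 64 mm apart, both converged on a point 1 m ahead.
o_l, o_r = np.array([-0.032, 0.0, 0.0]), np.array([0.032, 0.0, 0.0])
target = np.array([0.0, 0.0, 1.0])
d_l = (target - o_l) / np.linalg.norm(target - o_l)
d_r = (target - o_r) / np.linalg.norm(target - o_r)
depth, point = vergence_depth(o_l, d_l, o_r, d_r)    # depth ~= 1.0 m
```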
11
Sidenmark L, Parent M, Wu CH, Chan J, Glueck M, Wigdor D, Grossman T, Giordano M. Weighted Pointer: Error-aware Gaze-based Interaction through Fallback Modalities. IEEE Transactions on Visualization and Computer Graphics 2022; 28:3585-3595. [PMID: 36048981] [DOI: 10.1109/tvcg.2022.3203096]
Abstract
Gaze-based interaction is a fast and ergonomic form of hands-free interaction that is often used for pointing at targets in augmented and virtual reality. Such interaction, however, can become cumbersome whenever user, tracking, or environmental factors cause eye-tracking errors. Recent research has suggested that fallback modalities could be leveraged to ensure stable interaction irrespective of the current level of eye-tracking error. This work presents Weighted Pointer interaction, a collection of error-aware pointing techniques that determine whether pointing should be performed by gaze, a fallback modality, or a combination of the two, depending on the level of eye-tracking error present. These techniques enable users to point accurately at targets whether eye tracking is accurate or inaccurate. A virtual reality target selection study demonstrated that Weighted Pointer techniques outperformed, and were preferred over, techniques that required manual modality switching.
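The underlying idea admits a compact sketch: weight the gaze ray against the fallback ray by the current error estimate. The thresholds and linear blend below are our assumptions; the paper defines a family of such techniques rather than this single rule.

```python
import numpy as np

def weighted_pointer(gaze_dir, fallback_dir, error_deg, lo=0.5, hi=3.0):
    """Blend gaze with a fallback modality (e.g., head or controller ray)
    based on estimated eye-tracking error: pure gaze below `lo` degrees
    of error, pure fallback above `hi`, and a linear mix in between."""
    w = float(np.clip((hi - error_deg) / (hi - lo), 0.0, 1.0))  # gaze weight
    blended = w * np.asarray(gaze_dir) + (1.0 - w) * np.asarray(fallback_dir)
    return blended / np.linalg.norm(blended)

# With ~1.75 degrees of estimated error, gaze and fallback mix 50/50.
d = weighted_pointer([0.0, 0.0, 1.0], [0.1, 0.0, 0.995], 1.75)
```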