1. Borhani Z, Sharma P, Ortega FR. Survey of Annotations in Extended Reality Systems. IEEE Transactions on Visualization and Computer Graphics 2024;30:5074-5096. PMID: 37352090. DOI: 10.1109/tvcg.2023.3288869.
Abstract
Annotation in 3D user interfaces such as Augmented Reality (AR) and Virtual Reality (VR) is a challenging and promising area; however, no survey has yet reviewed these contributions. To provide a survey of annotation in Extended Reality (XR) environments, we conducted a structured literature review of papers that used annotation in their AR/VR systems between 2001 and 2021. Our review process consists of several filtering steps that resulted in 103 XR publications with a focus on annotation. We classified these papers by display technology, input device, annotation type, target object under annotation, collaboration type, modality, and collaborative technology. A survey of annotation in XR is an invaluable resource for researchers and newcomers. Finally, we provide a database of the collected information for each reviewed paper, including applications, display technologies and their annotators, input devices, modalities, annotation types, interaction techniques, collaboration types, and tasks. This database provides rapid access to the collected data and lets users search and filter for the information they need. This survey offers a starting point for anyone interested in researching annotation in XR environments.
2. Wang CH, Hsiao CY, Tai AT, Wang MJJ. Usability evaluation of augmented reality visualizations on an optical see-through head-mounted display for assisting machine operations. Applied Ergonomics 2023;113:104112. PMID: 37591157. DOI: 10.1016/j.apergo.2023.104112.
Abstract
This study explores the effect of different visual information overlays and guiding arrows on a machine operation task performed with an optical see-through head-mounted display (OST-HMD). Thirty-four participants took part in the experiment. The independent variables were visual information mode (text, animation, and mixed text and animation) and the use of guiding arrows (with and without). Gender differences were also examined. Task performance was measured by task completion time and error counts, along with subjective measures (system usability scale, NASA task load index, and an immersion scale). A mixed analysis of variance design was used to evaluate main and interaction effects. The results showed that males performed better with the mixed text and animation mode, whereas females performed better with the text mode. The mixed text and animation mode also produced the best system usability scale and NASA task load index scores. Using guiding arrows reduced task completion time and had positive effects on the system usability scale, NASA task load index, and immersion scale.
Affiliation(s)
- Chao-Hung Wang: Department of Business Administration, Fu Jen Catholic University, No.510, Zhongzheng Rd., Xinzhuang Dist., New Taipei City, 242062, Taiwan, ROC
- Chih-Yu Hsiao: Department of Industrial Engineering and Engineering Management, National Tsing Hua University, No.101, Sec.2, Kuangfu Road, Hsinchu, 30013, Taiwan, ROC
- An-Ting Tai: Department of Industrial Engineering and Engineering Management, National Tsing Hua University, No.101, Sec.2, Kuangfu Road, Hsinchu, 30013, Taiwan, ROC
- Mao-Jiun J Wang: Department of Industrial Engineering and Enterprise Information, Tunghai University, No.1727, Sec.4, Taiwan Boulevard, Xitun District, Taichung, 40704, Taiwan, ROC
3. Volmer B, Liu JS, Matthews B, Bornkessel-Schlesewsky I, Feiner S, Thomas BH. Multi-Level Precues for Guiding Tasks Within and Between Workspaces in Spatial Augmented Reality. IEEE Transactions on Visualization and Computer Graphics 2023;29:4449-4459. PMID: 37874709. DOI: 10.1109/tvcg.2023.3320246.
Abstract
We explore Spatial Augmented Reality (SAR) precues (predictive cues) for procedural tasks within and between workspaces, and for visualizing multiple upcoming steps in advance. We designed precues that vary in cue type, color transparency, and number of levels (number of precues shown at once). The precues were evaluated in a procedural task requiring the user to press buttons in three surrounding workspaces. Participants performed fastest in conditions where tasks were linked by line cues with different levels of color transparency. Precue performance was also affected by whether the next task was in the same workspace or a different one.
4. Romano S, Laviola E, Gattullo M, Fiorentino M, Uva AE. More Arrows in the Quiver: Investigating the Use of Auxiliary Models to Localize In-View Components with Augmented Reality. IEEE Transactions on Visualization and Computer Graphics 2023;29:4483-4493. PMID: 37782614. DOI: 10.1109/tvcg.2023.3320229.
Abstract
The creation and management of content are among the main open issues for the spread of Augmented Reality. In Augmented Reality interfaces for procedural tasks, a key authoring strategy is chunking instructions and using optimized visual cues, i.e., cues tailored to the specific information to convey. Nevertheless, research works rarely present the rationale behind their choices. This work aims to provide design guidelines for the localization of in-view, non-occluded components, a recurrent type of information in technical documentation. Previous studies revealed that the visual cues best suited to convey this information are auxiliary models, i.e., abstract shapes that highlight the region of space where the component is located. Among them, 3D arrows are widely used, but they may produce ambiguous information. Furthermore, it is unclear from the literature how to design auxiliary model shapes and whether they are affected by component shapes. To fill this gap, we conducted two user studies. In the first, we collected the preferences of 45 users regarding the shape, color, and animation of auxiliary models for localizing various component shapes. Based on the results, we defined guidelines for designing optimized auxiliary models according to component shape. In the second study, we validated these guidelines by evaluating the performance (localization time and recognition accuracy) and user experience of 24 users. The results confirmed that auxiliary models designed following our guidelines lead to higher recognition accuracy and better user experience than 3D arrows.
5. In-situ or side-by-side? A user study on augmented reality maintenance instructions in blind areas. Computers in Industry 2023. DOI: 10.1016/j.compind.2022.103795.
6. Liu JS, Tversky B, Feiner S. Precueing Object Placement and Orientation for Manual Tasks in Augmented Reality. IEEE Transactions on Visualization and Computer Graphics 2022;28:3799-3809. PMID: 36049002. DOI: 10.1109/tvcg.2022.3203111.
Abstract
When a user is performing a manual task, AR or VR can provide information about the current subtask (cueing) and upcoming subtasks (precueing) that makes them easier and faster to complete. Previous research on cueing and precueing in AR and VR has focused on path-following tasks requiring simple actions at each of a series of locations, such as pushing a button or simply visiting the location. We consider a more complex task, whose subtasks involve moving to and picking up an item, moving that item to a designated place while rotating it to a specific angle, and depositing it. We conducted two user studies examining how people accomplish this task while wearing an AR headset, guided by different visualizations that cue and precue movement and rotation. Participants performed best when given movement information for two successive subtasks and rotation information for a single subtask. In addition, participants performed best when the rotation visualization was split between the manipulated object and its destination.
7. Liu JS, Elvezio C, Tversky B, Feiner S. Using Multi-Level Precueing to Improve Performance in Path-Following Tasks in Virtual Reality. IEEE Transactions on Visualization and Computer Graphics 2021;27:4311-4320. PMID: 34449370. DOI: 10.1109/tvcg.2021.3106476.
Abstract
Work on VR and AR task interaction and visualization paradigms has typically focused on providing information about the current step (a cue) immediately before or during its performance. Some research has also shown benefits of simultaneously providing information about the next step (a precue). We explore whether efficiency can be improved by precueing information about multiple upcoming steps before the current step is completed. To this end, we conducted a remote VR user study comparing task completion time and subjective metrics across different levels and styles of precueing in a path-following task. Our visualizations vary the precueing level (number of steps precued in advance) and style (whether the path to a target is shown as a line, and whether the target's location is marked with graphics at the target). Participants performed best when given two to three precues with visualizations that used lines to show the path to targets; performance degraded with four precues. In contrast, with visualizations that showed only target locations without lines, participants performed best with a single precue, and performance degraded when a second precue was given. Overall, participants performed better with line-based visualizations than without.
8. Runji JM, Lin CY. Switchable Glass Enabled Contextualization for a Cyber-Physical Safe and Interactive Spatial Augmented Reality PCBA Manufacturing Inspection System. Sensors 2020;20:4286. PMID: 32752016. PMCID: PMC7435772. DOI: 10.3390/s20154286.
Abstract
Augmented reality (AR) has been demonstrated to improve efficiency by up to three times over traditional methods. Visual AR is widely deployed on handheld and head-mounted devices. Although spatial augmented reality (SAR) addresses several shortcomings of wearable AR, its potential is yet to be fully explored. It enhances cooperation among users with its wide field of view and supports hands-free mobile operation, yet it has remained a challenge to provide references without relying on restrictive static empty surfaces on the object itself or on nearby objects for projection. Toward this end, we propose a novel approach that contextualizes projected references in real time and on demand, onto and through a surface, across a wireless network. To demonstrate the effectiveness of the approach, we apply it to the safe inspection of printed circuit board assemblies (PCBAs), wirelessly networked to a remote automatic optical inspection (AOI) system. A defect detected and localized by the AOI system is wirelessly relayed to the proposed remote inspection system, which promptly guides the inspector by augmenting a rectangular bracket and a reference image. The rectangular bracket, seen through the switchable glass, aids defect localization on the PCBA, whereas the image is projected onto the opaque cells of the switchable glass to provide a reference to the user. The developed system is evaluated in a user study for robustness, precision, and performance. Results indicate that the contextualization achieved across varying occlusion levels not only positively affects inspection performance but is also preferred by users over the state of the art. Furthermore, the system supports a variety of complex visualization needs, including varied sizes, contrast, and online or offline tracking, with simple, robust integration requiring no additional calibration for registration.
Affiliation(s)
- Joel Murithi Runji: Department of Mechanical Engineering, National Taiwan University of Science and Technology, Taipei 106, Taiwan
- Chyi-Yeu Lin (Correspondence): Department of Mechanical Engineering, National Taiwan University of Science and Technology, Taipei 106, Taiwan; Center for Cyber-Physical System Innovation, National Taiwan University of Science and Technology, Taipei 106, Taiwan; Taiwan Building Technology Center, National Taiwan University of Science and Technology, Taipei 106, Taiwan