1. Zhu Q, Lu T, Guo S, Ma X, Yang Y. CompositingVis: Exploring Interactions for Creating Composite Visualizations in Immersive Environments. IEEE Transactions on Visualization and Computer Graphics 2025;31:591-601. [PMID: 39250414] [DOI: 10.1109/tvcg.2024.3456210]
Abstract
Composite visualization is a widely embraced design that combines multiple visual representations into an integrated view. However, the traditional approach to creating composite visualizations for immersive environments typically occurs asynchronously, outside of the immersive space, and is carried out by experienced experts. In this work, we aim to empower users to create composite visualizations within immersive environments through embodied interactions. This can provide a flexible and fluid experience with immersive visualization and has the potential to facilitate understanding of the relationships between visualization views. We begin by developing a design space of embodied interactions for creating various types of composite visualizations, with consideration of data relationships. Drawing inspiration from people's natural experience of manipulating physical objects, we design interactions based on the combination of 3D manipulations in immersive environments. Building upon the design space, we present a series of case studies showcasing the interactions used to create different kinds of composite visualizations in virtual reality. Subsequently, we conduct a user study to evaluate the usability of the derived interaction techniques and the user experience of creating composite visualizations through embodied interactions. We find that empowering users to create composite visualizations through embodied interactions enables them to flexibly leverage different visualization views for understanding and communicating the relationships between views, which underscores the potential of several future application scenarios.
2. Zhao L, Isenberg T, Xie F, Liang HN, Yu L. SpatialTouch: Exploring Spatial Data Visualizations in Cross-Reality. IEEE Transactions on Visualization and Computer Graphics 2025;31:897-907. [PMID: 39255119] [DOI: 10.1109/tvcg.2024.3456368]
Abstract
We propose and study a novel cross-reality environment that seamlessly integrates a monoscopic 2D surface (an interactive screen with touch and pen input) with a stereoscopic 3D space (an augmented reality HMD) to jointly host spatial data visualizations. This approach combines the best of two conventional methods of displaying and manipulating spatial 3D data, enabling users to fluidly explore diverse visual forms using tailored interaction techniques. Effective 3D data exploration techniques are pivotal for conveying intricate spatial structures, which often span multiple spatial or semantic scales and call for diverse visual representations across application domains. To understand user reactions to our new environment, we began with an elicitation user study, in which we captured their responses and interactions. We observed that users adapted their interaction approaches based on perceived visual representations, with natural transitions in spatial awareness and actions while navigating across the physical surface. Our findings then informed the development of a design space for spatial data exploration in cross-reality. We thus developed cross-reality environments tailored to three distinct domains: 3D molecular structure data, 3D point cloud data, and 3D anatomical data. In particular, we designed interaction techniques that account for the inherent features of interactions in both spaces, facilitating various forms of interaction, including mid-air gestures, touch interactions, pen interactions, and combinations thereof, to enhance the users' sense of presence and engagement. We assessed the usability of our environment with biologists, focusing on its use for domain research. In addition, we evaluated our interaction transition designs with virtual and mixed-reality experts to gather further insights. As a result, we provide design suggestions for the cross-reality environment, emphasizing interaction with diverse visual representations and seamless interaction transitions between 2D and 3D spaces.
3. Kazemipour N, Hooshiar A, Kersten-Oertel M. A usability analysis of augmented reality and haptics for surgical planning. Int J Comput Assist Radiol Surg 2024;19:2069-2078. [PMID: 38942947] [DOI: 10.1007/s11548-024-03207-x]
Abstract
PURPOSE: Proper visualization and interaction with complex anatomical data can improve understanding, allowing for more intuitive surgical planning. The goal of our work was to study which platforms are the most intuitive yet practical for interacting with 3D medical data in the context of surgical planning.
METHODS: We compared planning using a monitor and mouse, a monitor with a haptic device, and an augmented reality (AR) head-mounted display which uses gesture-based interaction. To determine the most intuitive system, two user studies, one with novices and one with experts, were conducted. The studies involved planning of three scenarios: (1) heart valve repair, (2) hip tumor resection, and (3) pedicle screw placement. Task completion time, the NASA Task Load Index, and system-specific questionnaires were used for the evaluation.
RESULTS: Both novices and experts preferred the AR system for pedicle screw placement. Novices preferred the haptic system for hip tumor planning, while experts preferred the mouse and keyboard. In the case of heart valve planning, novices preferred the AR system but there was no clear preference among experts. Both groups reported that AR provides the best spatial depth perception.
CONCLUSION: The results of the user studies suggest that different surgical cases may benefit from different interaction and visualization methods. For example, for planning surgeries with implants and instrumentation, mixed reality could provide better 3D spatial perception, whereas using landmarks to delineate specific targets may be more effective with a traditional 2D interface.
Affiliations
- Negar Kazemipour: Gina Cody School of Engineering and Computer Science, Concordia University, 1455 De Maisonneuve Blvd. W., Montreal, QC, H3G 1M8, Canada
- Amir Hooshiar: Department of Surgery, McGill University Health Center, 1001 Decarie Boulevard, Montreal, QC, H4A 3J1, Canada
- Marta Kersten-Oertel: Gina Cody School of Engineering and Computer Science, Concordia University, 1455 De Maisonneuve Blvd. W., Montreal, QC, H3G 1M8, Canada
4. In S, Lin T, North C, Pfister H, Yang Y. This is the Table I Want! Interactive Data Transformation on Desktop and in Virtual Reality. IEEE Transactions on Visualization and Computer Graphics 2024;30:5635-5650. [PMID: 37506003] [DOI: 10.1109/tvcg.2023.3299602]
Abstract
Data transformation is an essential step in data science. While experts primarily use programming to transform their data, there is an increasing need to support non-programmers with user interface-based tools. With the rapid development in interaction techniques and computing environments, we report our empirical findings about the effects of interaction techniques and environments on performing data transformation tasks. Specifically, we studied the potential benefits of direct interaction and virtual reality (VR) for data transformation. We compared gesture interaction versus a standard WIMP user interface, each on the desktop and in VR. With the tested data and tasks, we found time performance was similar between desktop and VR. Meanwhile, VR demonstrates preliminary evidence to better support provenance and sense-making throughout the data transformation process. Our exploration of performing data transformation in VR also provides initial affirmation for enabling an iterative and fully immersive data science workflow.
5. Rau T, Sedlmair M, Köhn A. chARpack: The Chemistry Augmented Reality Package. J Chem Inf Model 2024;64:4700-4708. [PMID: 38814047] [DOI: 10.1021/acs.jcim.4c00462]
Abstract
Off-loading visualization and interaction into virtual reality (VR) using head-mounted displays (HMDs) has gained considerable popularity in the simulation sciences, particularly in chemical modeling. Because it offers a softer form of immersion, augmented reality (AR) HMD technology has even more potential to be integrated into the everyday workflow of computational chemists. In this work, we present our environment for exploring the prospects of AR in chemistry and the molecular sciences in general: the chemistry in Augmented Reality package (chARpack). Besides providing an extensible framework, our software focuses on a seamless transition between a 3D stereoscopic view with true 3D interactions and the traditional desktop PC setup, providing users with the best setup for every task in their workflow. Using feedback from domain experts, we discuss our design requirements for this kind of hybrid working environment (AR + PC) regarding input, features, degree of immersion, and collaboration.
Affiliations
- Tobias Rau: Institute for Theoretical Chemistry, University of Stuttgart, Stuttgart 70569, Germany; Institute for Visualization and Interactive Systems, University of Stuttgart, Stuttgart 70569, Germany
- Michael Sedlmair: Institute for Visualization and Interactive Systems, University of Stuttgart, Stuttgart 70569, Germany
- Andreas Köhn: Institute for Theoretical Chemistry, University of Stuttgart, Stuttgart 70569, Germany
6. Tung YH, Chang CY. How three-dimensional sketching environments affect spatial thinking: A functional magnetic resonance imaging study of virtual reality. PLoS One 2024;19:e0294451. [PMID: 38466671] [PMCID: PMC10927127] [DOI: 10.1371/journal.pone.0294451]
Abstract
Designers rely on sketching to visualize and refine their initial ideas, and virtual reality (VR) tools now facilitate sketching in immersive 3D environments. However, little research has been conducted on the differences in the visual and spatial processes involved in 3D versus 2D sketching and their effects on cognition. This study investigated potential differences in spatial and visual functions related to the use of 3D versus 2D sketching media by analyzing functional magnetic resonance imaging (fMRI) data. We recruited 20 healthy, right-handed students from the Department of Horticulture and Landscape Architecture with at least three years of experience in freehand landscape drawing. We tested each participant individually on 3D sketching, using an Oculus Quest VR headset controller, and on 2D sketching, using a 12.9-inch iPad Pro with an Apple Pencil. When comparing 2D and 3D sketches, our fMRI results revealed significant differences in the activation of several brain regions, including the right middle temporal gyrus, both sides of the parietal lobe, and the left middle occipital gyrus. We also compared different sketching conditions, such as lines, geometric objects (a cube), and naturalistic objects (a perspective view of a tree), and found significant differences in the activation of brain areas that support visual recognition, composition, and spatial perception. These findings suggest that 3D sketching environments such as VR may activate more visual-spatial functions during sketching than 2D environments, highlighting the potential of immersive sketching environments for design-related processes and spatial thinking.
Affiliations
- Yu-Hsin Tung: Department of Horticulture and Landscape Architecture, National Taiwan University, Taipei, Taiwan
- Chun-Yen Chang: Department of Horticulture and Landscape Architecture, National Taiwan University, Taipei, Taiwan
7. Minh Tran TT, Brown S, Weidlich O, Billinghurst M, Parker C. Wearable Augmented Reality: Research Trends and Future Directions from Three Major Venues. IEEE Transactions on Visualization and Computer Graphics 2023;29:4782-4793. [PMID: 37782599] [DOI: 10.1109/tvcg.2023.3320231]
Abstract
Wearable Augmented Reality (AR) has attracted considerable attention in recent years, as evidenced by the growing number of research publications and industry investments. With swift advancements and a multitude of interdisciplinary research areas within wearable AR, a comprehensive review is crucial for integrating the current state of the field. In this paper, we present a review of 389 research papers on wearable AR, published between 2018 and 2022 in three major venues: ISMAR, TVCG, and CHI. Drawing inspiration from previous works by Zhou et al. and Kim et al., which summarized AR research at ISMAR over the past two decades (1998-2017), we categorize the papers into different topics and identify prevailing trends. One notable finding is that wearable AR research is increasingly geared towards enabling broader consumer adoption. From our analysis, we highlight key observations related to potential future research areas essential for capitalizing on this trend and achieving widespread adoption. These include addressing challenges in Display, Tracking, Interaction, and Applications, and exploring emerging frontiers in Ethics, Accessibility, Avatar and Embodiment, and Intelligent Virtual Agents.
8. Fouché G, Argelaguet F, Faure E, Kervrann C. Immersive and interactive visualization of 3D spatio-temporal data using a space time hypercube: Application to cell division and morphogenesis analysis. Frontiers in Bioinformatics 2023;3:998991. [PMID: 36969798] [PMCID: PMC10031126] [DOI: 10.3389/fbinf.2023.998991]
Abstract
The analysis of multidimensional time-varying datasets faces challenges, notably regarding the representation of the data and the visualization of temporal variations. We propose an extension of the well-known Space-Time Cube (STC) visualization technique to visualize time-varying 3D spatial data, taking advantage of the interaction capabilities of Virtual Reality (VR). First, we propose the Space-Time Hypercube (STH) as an abstraction for 3D temporal data, extended from the STC concept. Second, through the example of an embryo development imaging dataset, we detail the construction and visualization of an STC based on a user-driven projection of the spatial and temporal information. This projection yields a 3D STC visualization, which can also encode additional numerical and categorical data. Additionally, we propose a set of tools that allow the user to filter and manipulate the 3D STC, which benefits from the visualization, exploration, and interaction possibilities offered by VR. Finally, we evaluated the proposed visualization method in the context of 3D temporal cell imaging data analysis through a user study (n = 5) reporting the feedback of five biologists. These domain experts also accompanied the application design as consultants, providing insights on how the STC visualization could be used for the exploration of complex 3D temporal morphogenesis data.
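To make the projection step concrete, the sketch below shows one way a Space-Time Hypercube could be represented and flattened into a 3D Space-Time Cube. This is a minimal illustration under our own assumptions, not the authors' implementation; all type and function names are hypothetical.

```typescript
// A 4D sample (x, y, z, t) from a time-varying 3D dataset.
interface STHPoint {
  x: number; y: number; z: number; // spatial position
  t: number;                       // time step
  value: number;                   // e.g., a per-cell measurement
}

// A user-driven projection keeps two spatial axes plus time,
// collapsing the remaining spatial axis onto a chosen plane.
type Projection = (p: STHPoint) => { u: number; v: number; time: number };

const projectXY: Projection = (p) => ({ u: p.x, v: p.y, time: p.t });

// Build the 3D Space-Time Cube: each 4D sample becomes a point whose
// third coordinate encodes time, ready for rendering as a volume in VR.
function toSpaceTimeCube(data: STHPoint[], project: Projection) {
  return data.map((p) => {
    const { u, v, time } = project(p);
    return { position: [u, v, time] as const, value: p.value };
  });
}
```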
Affiliations
- Gwendal Fouché: Inria de l'Université de Rennes, IRISA, CNRS, Rennes, France
- Emmanuel Faure: LIRMM, Université Montpellier, CNRS, Montpellier, France
- Charles Kervrann: Inria de l'Université de Rennes, Rennes, France; UMR144 CNRS Institut Curie, PSL Research University, Sorbonne Universités, Paris, France
9. Tong W, Chen Z, Xia M, Lo LYH, Yuan L, Bach B, Qu H. Exploring Interactions with Printed Data Visualizations in Augmented Reality. IEEE Transactions on Visualization and Computer Graphics 2023;29:418-428. [PMID: 36166542] [DOI: 10.1109/tvcg.2022.3209386]
Abstract
This paper presents a design space of interaction techniques to engage with visualizations that are printed on paper and augmented through Augmented Reality. Paper sheets are widely used to deploy visualizations and provide a rich set of tangible affordances for interactions, such as touch, folding, tilting, or stacking. At the same time, augmented reality can dynamically update visualization content to provide commands such as pan, zoom, filter, or detail on demand. This paper is the first to provide a structured approach to mapping possible actions with the paper to interaction commands. This design space and the findings of a controlled user study have implications for future designs of augmented reality systems involving paper sheets and visualizations. Through workshops (N=20) and ideation, we identified 81 interactions that we classify in three dimensions: 1) commands that can be supported by an interaction, 2) the specific parameters provided by an (inter)action with paper, and 3) the number of paper sheets involved in an interaction. We tested user preference and viability of 11 of these interactions with a prototype implementation in a controlled study (N=12, HoloLens 2) and found that most of the interactions are intuitive and engaging to use. We summarized interactions (e.g., tilt to pan) that have strong affordance to complement "point" for data exploration, physical limitations and properties of paper as a medium, cases requiring redundancy and shortcuts, and other implications for design.
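As a toy illustration of such a mapping from physical paper actions to visualization commands, consider the sketch below. The action-to-command pairs are loose examples in the spirit of the described design space; only "tilt to pan" is named in the abstract, so the rest are our own assumptions, not the paper's taxonomy.

```typescript
// Hypothetical paper actions and visualization commands.
type PaperAction = "point" | "tilt" | "fold" | "stack" | "touch";
type VisCommand = "select" | "pan" | "filter" | "detailsOnDemand" | "zoom";

const paperMapping: Record<PaperAction, VisCommand> = {
  point: "select",          // pointing at a printed mark selects it
  tilt: "pan",              // tilting the sheet pans the AR overlay
  fold: "filter",           // folding hides part of the data (assumed)
  stack: "detailsOnDemand", // stacking sheets overlays detail (assumed)
  touch: "zoom",            // touching zooms the augmented view (assumed)
};

// Dispatch a recognized physical action to its visualization command.
function dispatch(action: PaperAction): VisCommand {
  return paperMapping[action];
}

console.log(dispatch("tilt")); // "pan"
```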
10. Shi H, Vardhan M, Randles A. The Role of Immersion for Improving Extended Reality Analysis of Personalized Flow Simulations. Cardiovasc Eng Technol 2022;14:194-203. [PMID: 36385239] [DOI: 10.1007/s13239-022-00646-y]
Abstract
PURPOSE: Computational models of flow in patient-derived arterial geometries have become a key paradigm of biomedical research. These fluid models are often challenging to visualize due to high spatial heterogeneity and visual complexity. Virtual immersive environments can offer advantageous visualization of spatially heterogeneous and complex systems. However, as different VR devices offer varying levels of immersion, there remains a crucial lack of understanding regarding what level of immersion is best suited for interactions with patient-specific flow models.
METHODS: We conducted a quantitative user evaluation with multiple VR devices testing an important use of hemodynamic simulations: analysis of surface parameters within complex patient-specific geometries. This task was compared for the semi-immersive zSpace 3D monitor and the fully immersive HTC Vive system.
RESULTS: The semi-immersive device was more accurate than the fully immersive device. The two devices showed similar results for task duration and performance (accuracy/duration). The accuracy of the semi-immersive device was also higher for arterial geometries of greater complexity and branching.
CONCLUSION: This assessment demonstrates that the level of immersion plays a significant role in the accuracy of assessing arterial flow models. We found that the semi-immersive VR device was a generally optimal choice for arterial visualization.
11. Effect of display platforms on spatial knowledge acquisition and engagement: an evaluation with 3D geometry visualizations. J Vis (Tokyo) 2022. [DOI: 10.1007/s12650-022-00889-w]
12. Wagner J, Stuerzlinger W, Nedel L. The Effect of Exploration Mode and Frame of Reference in Immersive Analytics. IEEE Transactions on Visualization and Computer Graphics 2022;28:3252-3264. [PMID: 33606632] [DOI: 10.1109/tvcg.2021.3060666]
Abstract
The design space for user interfaces for Immersive Analytics applications is vast. Designers can combine navigation and manipulation to enable data exploration with ego- or exocentric views, have the user operate at different scales, or use different forms of navigation with varying levels of physical movement. This freedom results in a multitude of different viable approaches. Yet, there is no clear understanding of the advantages and disadvantages of each choice. Our goal is to investigate the affordances of several major design choices, to enable both application designers and users to make better decisions. In this article, we assess two main factors, exploration mode and frame of reference, consequently also varying visualization scale and physical movement demand. To isolate each factor, we implemented nine different conditions in a Space-Time Cube visualization use case and asked 36 participants to perform multiple tasks. We analyzed the results in terms of performance and qualitative measures and correlated them with participants' spatial abilities. While egocentric room-scale exploration significantly reduced mental workload, exocentric exploration improved performance in some tasks. Combining navigation and manipulation made tasks easier by reducing workload, temporal demand, and physical effort.
13.

Abstract
Recent research in the area of immersive analytics has demonstrated the utility of augmented reality for data analysis. However, there is a lack of research on how to facilitate engaging, embodied, and interactive AR graph visualization. In this paper, we explored the design space for combining the capabilities of AR with node-link diagrams to create immersive data visualizations. We first systematically described the design rationale and the design process of the mobile-based AR graph, including the layout, interactions, and aesthetics. Then, we validated the AR concept by conducting a user study with 36 participants to examine users' behaviors with an AR graph and a 2D graph. The results of our study showed the feasibility of using an AR graph to present data relations and also revealed interaction challenges regarding effectiveness and usability on mobile devices. Third, we iterated on the AR graph by implementing embodied interactions with hand gestures and addressing the connection between physical objects and the digital graph. This study is the first step in our research, aiming to guide the design of immersive AR data visualization applications in the future.
14. Sereno M, Wang X, Besançon L, McGuffin MJ, Isenberg T. Collaborative Work in Augmented Reality: A Survey. IEEE Transactions on Visualization and Computer Graphics 2022;28:2530-2549. [PMID: 33085619] [DOI: 10.1109/tvcg.2020.3032761]
Abstract
In Augmented Reality (AR), users perceive virtual content anchored in the real world. It is used in medicine, education, games, navigation, maintenance, product design, and visualization, in both single-user and multi-user scenarios. Multi-user AR has received limited attention from researchers, even though AR has been in development for more than two decades. We present the state of existing work at the intersection of AR and Computer-Supported Collaborative Work (AR-CSCW), combining a systematic survey approach with an exploratory, opportunistic literature search. We categorize 65 papers along the dimensions of space, time, role symmetry (whether the roles of users are symmetric), technology symmetry (whether the hardware platforms of users are symmetric), and output and input modalities. We derive design considerations for collaborative AR environments and identify under-explored research topics, including heterogeneous hardware considerations and 3D data exploration. This survey is useful for newcomers to the field, readers interested in an overview of CSCW in AR applications, and domain experts seeking up-to-date information.
15. Fombona-Pascual A, Fombona J, Vicente R. Augmented Reality, a Review of a Way to Represent and Manipulate 3D Chemical Structures. J Chem Inf Model 2022;62:1863-1872. [PMID: 35373563] [PMCID: PMC9044447] [DOI: 10.1021/acs.jcim.1c01255]
Abstract
Augmented reality (AR) is a mixed technology that superimposes three-dimensional (3D) digital data onto an image of reality. This technology enables users to represent and manipulate 3D chemical structures. In spite of its potential, the use of these tools in chemistry is still scarce. The aim of this work is to identify the real situation of AR developments and its potential for 3D visualization of molecules. A descriptive analysis of a selection of 143 research publications (extracted from Web of Science between 2018 and 2020) highlights some significant AR examples that had been implemented in chemistry, in both education and research environments. Although the traditional 2D screen visualization is still preferred when teaching chemistry, the application of AR in early education has shown potential to facilitate the understanding and visualization of chemical structures. The increasing connectivity of the AR technology to web platforms and scientific networks should translate into new opportunities for teaching and learning strategies.
Affiliations
- Alba Fombona-Pascual: Organic and Inorganic Chemistry Department, University of Oviedo, Av. Julian Clavería, Oviedo 33006, Spain
- Javier Fombona: Education Sciences Department, University of Oviedo, C. Aniceto Sela, Oviedo 33005, Spain
- Rubén Vicente: Organic and Inorganic Chemistry Department, University of Oviedo, Av. Julian Clavería, Oviedo 33006, Spain
16. Interactive Geological Data Visualization in an Immersive Environment. ISPRS International Journal of Geo-Information 2022. [DOI: 10.3390/ijgi11030176]
Abstract
Underground flow paths (UFPs) often play an important role in how geologists illustrate geological data and reveal stratigraphic structures, which can help domain experts explore petroleum information. In this paper, we present a new immersive visualization tool that helps domain experts better illustrate stratigraphic data. We use a visualization method based on a bit-array-based 3D texture to represent stratigraphic data. Our visualization tool has three major advantages: it allows for flexible interaction on the immersive device, it enables domain experts to obtain their desired UFP structure by executing quadratic-surface queries, and it supports different stratigraphic display modes as well as flexible switching and integration of geological information. Feedback from domain experts has shown that, compared to existing UFP visualization tools in the field, our tool contributes more to the scientific exploration of stratigraphic data. Thus, experts in geology can gain a more comprehensive understanding and a more effective illustration of the structure and distribution of UFPs.
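The bit-array representation mentioned above can be sketched compactly: each voxel of a binary stratigraphic volume costs one bit, and the packed array can back a 3D texture mask. This is a generic illustration under our own assumptions, not the paper's code; the class name and the example query are hypothetical.

```typescript
// Pack a binary volume into a bit array suitable for a 3D texture mask.
class BitVolume {
  private bits: Uint8Array;
  constructor(readonly nx: number, readonly ny: number, readonly nz: number) {
    this.bits = new Uint8Array(Math.ceil((nx * ny * nz) / 8));
  }
  private index(x: number, y: number, z: number): number {
    return x + this.nx * (y + this.ny * z); // linearize (x, y, z)
  }
  set(x: number, y: number, z: number, inside: boolean): void {
    const i = this.index(x, y, z);
    if (inside) this.bits[i >> 3] |= 1 << (i & 7);
    else this.bits[i >> 3] &= ~(1 << (i & 7));
  }
  get(x: number, y: number, z: number): boolean {
    const i = this.index(x, y, z);
    return (this.bits[i >> 3] & (1 << (i & 7))) !== 0;
  }
}

// Example: mark voxels satisfying an assumed quadratic-surface query
// f(x, y, z) = x^2 + y^2 - z <= 0 as part of a candidate flow path.
const vol = new BitVolume(64, 64, 64);
for (let z = 0; z < 64; z++)
  for (let y = 0; y < 64; y++)
    for (let x = 0; x < 64; x++)
      vol.set(x, y, z, x * x + y * y - z <= 0);
```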
17. Abriata LA. How Technologies Assisted Science Learning at Home During the COVID-19 Pandemic. DNA Cell Biol 2022;41:19-24. [PMID: 34515524] [PMCID: PMC8787708] [DOI: 10.1089/dna.2021.0497]
Abstract
Like most other aspects of life, education was strongly affected by the lockdowns imposed to slow the spread of the COVID-19 pandemic. Teachers at all levels of education suddenly faced the challenge of adapting their courses to online versions. This posed various problems, from the pedagogical and psychological components of having to teach and learn online to the technical problems of internet connectivity and especially of rethinking hands-on activities. The latter point was especially important for subjects that involve very practical learning, for which teachers had to find alternative activities that students could carry out at home. In the natural sciences, impaired access to instrumentation and reagents was a major limitation, but the community turned out to be very resourceful. Here I demonstrate this resourcefulness for the case of undergraduate chemistry and biology courses, focusing on how do-it-yourself open technologies, smartphone-based instruments and simulations, at-home chemistry with household reagents, online video material, and introductory programming and bioinformatics helped to overcome these difficult times and will likely even shape the future of science education.
Affiliations
- Luciano A. Abriata: Laboratory for Biomolecular Modeling, École Polytechnique Fédérale de Lausanne and Swiss Institute of Bioinformatics, Lausanne, Switzerland; Protein Production and Structure Core Facility, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
18. Liu W, Wang C, Chen S, Bian X, Lai B, Shen X, Cheng M, Lai SH, Weng D, Li J. Y-Net: Learning Domain Robust Feature Representation for ground camera image and large-scale image-based point cloud registration. Inf Sci (N Y) 2021. [DOI: 10.1016/j.ins.2021.10.022]
19. Höhler C, Rasamoel ND, Rohrbach N, Hansen JP, Jahn K, Hermsdörfer J, Krewer C. The impact of visuospatial perception on distance judgment and depth perception in an Augmented Reality environment in patients after stroke: an exploratory study. J Neuroeng Rehabil 2021;18:127. [PMID: 34419086] [PMCID: PMC8379833] [DOI: 10.1186/s12984-021-00920-5]
Abstract
BACKGROUND: Augmented Reality (AR)-based interventions are applied in neurorehabilitation with increasing frequency. Depth perception is required for the intended interaction within AR environments. Until now, however, it has been unclear whether patients after stroke with impaired visuospatial perception (VSP) are able to perceive depth in the AR environment.
METHODS: Different aspects of VSP (stereovision and spatial localization/visuoconstruction) were assessed in 20 patients after stroke (mean age: 64 ± 14 years) and 20 healthy subjects (HS, mean age: 28 ± 8 years) using clinical tests. The group of HS was recruited to assess the validity of the developed AR tasks in testing stereovision. To measure perception of holographic objects, three distance judgment tasks and one three-dimensionality task were designed. The effect of impaired stereovision on performance in each AR task was analyzed. AR task performance was modeled by aspects of VSP using separate regression analyses for HS and for patients.
RESULTS: In HS, stereovision had a significant effect on performance in all AR distance judgment tasks (p = 0.021, p = 0.002, p = 0.046) and in the three-dimensionality task (p = 0.003). Individual quality of stereovision significantly predicted the accuracy in each distance judgment task and was highly related to the ability to perceive holograms as three-dimensional (p = 0.001). In stroke survivors, impaired stereovision had a specific deterioration effect on only one distance judgment task (p = 0.042), whereas the three-dimensionality task was unaffected (p = 0.317). Regression analyses confirmed the lack of an effect of patients' quality of stereovision on AR task performance, while spatial localization/visuoconstruction significantly predicted the accuracy of distance estimation of geometric objects in two AR tasks.
CONCLUSION: Impairments in VSP reduce the ability to estimate distance and to perceive three-dimensionality in an AR environment. While stereovision is key for task performance in HS, spatial localization/visuoconstruction is predominant in patients. Since impairments in VSP are present after stroke, these findings might be crucial when AR is applied for neurorehabilitative treatment. In order to maximize the therapy outcome, the design of AR games should be adapted to patients' impaired VSP.
Trial registration: The trial was not registered, as it was an observational study.
Affiliations
- Chiara Höhler: Technical University of Munich, Georg-Brauchle Ring 60/62, 80992, Munich, Germany; Schoen Clinic Bad Aibling, Kolbermoorer Strasse 72, 83043, Bad Aibling, Germany
- Nils David Rasamoel: Technical University of Denmark, Anker Engelunds Vej 1, 2800, Kgs. Lyngby, Denmark
- Nina Rohrbach: Technical University of Munich, Georg-Brauchle Ring 60/62, 80992, Munich, Germany
- John Paulin Hansen: Technical University of Denmark, Anker Engelunds Vej 1, 2800, Kgs. Lyngby, Denmark
- Klaus Jahn: Schoen Clinic Bad Aibling, Kolbermoorer Strasse 72, 83043, Bad Aibling, Germany; Ludwig-Maximilians University of Munich, University Hospital Grosshadern, Marchioninistrasse 15, 81377, Munich, Germany
- Joachim Hermsdörfer: Technical University of Munich, Georg-Brauchle Ring 60/62, 80992, Munich, Germany
- Carmen Krewer: Technical University of Munich, Georg-Brauchle Ring 60/62, 80992, Munich, Germany; Schoen Clinic Bad Aibling, Kolbermoorer Strasse 72, 83043, Bad Aibling, Germany
20. Butcher PWS, John NW, Ritsos PD. VRIA: A Web-Based Framework for Creating Immersive Analytics Experiences. IEEE Transactions on Visualization and Computer Graphics 2021;27:3213-3225. [PMID: 31944959] [DOI: 10.1109/tvcg.2020.2965109]
Abstract
We present VRIA, a Web-based framework for creating Immersive Analytics (IA) experiences in Virtual Reality. VRIA is built upon WebVR, A-Frame, React, and D3.js, and offers a visualization creation workflow which enables users of different levels of expertise to rapidly develop Immersive Analytics experiences for the Web. The use of these open-standards Web-based technologies allows us to implement VR experiences in a browser and offers strong synergies with popular visualization libraries, through the HTML Document Object Model (DOM). This makes VRIA ubiquitous and platform-independent. Moreover, through WebVR's progressive enhancement, the experiences VRIA creates are accessible on a plethora of devices. We elaborate on our motivation for focusing on open-standards Web technologies, present the VRIA creation workflow, and detail the underlying mechanics of our framework. We also report on techniques and optimizations necessary for implementing Immersive Analytics experiences on the Web, discuss the scalability implications of our framework, and present a series of use-case applications to demonstrate its various features. Finally, we discuss current limitations of our framework, the lessons learned from its development, and outline further extensions.
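For flavor, the sketch below shows the general pattern such a framework builds on: generating A-Frame entities from data in the browser DOM, in the spirit of D3-style data binding. It is not VRIA's actual API; the bar-chart encoding and all names here are our own assumptions.

```typescript
// Generate a simple immersive bar chart as A-Frame DOM elements.
interface Row { category: string; value: number }

function buildBarChartScene(data: Row[]): HTMLElement {
  const scene = document.createElement("a-scene"); // A-Frame root element
  data.forEach((d, i) => {
    const bar = document.createElement("a-box"); // one 3D bar per datum
    bar.setAttribute("position", `${i * 0.6} ${d.value / 2} -3`);
    bar.setAttribute("height", String(d.value)); // height encodes value
    bar.setAttribute("width", "0.5");
    bar.setAttribute("depth", "0.5");
    scene.appendChild(bar);
  });
  return scene;
}

// Usage: attach the generated scene to the page DOM.
document.body.appendChild(
  buildBarChartScene([
    { category: "A", value: 1.2 },
    { category: "B", value: 0.8 },
    { category: "C", value: 1.9 },
  ]),
);
```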
21. Lareyre F, Chaudhuri A, Adam C, Carrier M, Mialhe C, Raffort J. Applications of Head-Mounted Displays and Smart Glasses in Vascular Surgery. Ann Vasc Surg 2021;75:497-512. [PMID: 33823254] [DOI: 10.1016/j.avsg.2021.02.033]
Abstract
OBJECTIVES: Advances in virtual, augmented, and mixed reality have led to the development of wearable technologies including head-mounted displays (HMD) and smart glasses. While there is growing interest in their potential applications in health, only a few studies have so far addressed their use in vascular surgery. The aim of this review was to summarize the fundamental notions associated with these technologies and to discuss potential applications and current limits for their use in vascular surgery.
METHODS: A comprehensive literature review was performed to introduce the fundamental concepts and provide an overview of applications of HMD and smart glasses in surgery.
RESULTS: HMD and smart glasses demonstrated potential for the education of surgeons, including anatomical teaching, surgical training, teaching, and telementoring. Applications for pre-surgical planning have been developed in general and cardiac surgery and could be transposed for use in vascular surgery. The use of wearable technologies in the operating room has also been investigated in both general and cardiovascular surgery and has demonstrated potential for image-guided surgery and data collection.
CONCLUSION: The studies performed so far represent a proof of concept of the value of HMD and smart glasses in vascular surgery, both for the education of surgeons and for surgical practice. Although these technologies have shown encouraging results for applications in vascular surgery, technical improvements and further clinical research in large series are required before they can be used in daily clinical practice.
Affiliations
- Fabien Lareyre: Department of Vascular Surgery, Hospital of Antibes-Juan-les-Pins, France; Université Côte d'Azur, CHU, Inserm U1065, C3M, Nice, France
- Arindam Chaudhuri: Bedfordshire-Milton Keynes Vascular Centre, Bedfordshire Hospitals NHS Foundation Trust, Bedford, UK
- Cédric Adam: Laboratory of Applied Mathematics and Computer Science (MICS), CentraleSupélec, Université Paris-Saclay, France
- Marion Carrier: Laboratory of Applied Mathematics and Computer Science (MICS), CentraleSupélec, Université Paris-Saclay, France
- Claude Mialhe: Cardiovascular Surgery Unit, Cardio Thoracic Centre of Monaco, Monaco
- Juliette Raffort: Université Côte d'Azur, CHU, Inserm U1065, C3M, Nice, France; Clinical Chemistry Laboratory, University Hospital of Nice, France
22. Fonnet A, Prié Y. Survey of Immersive Analytics. IEEE Transactions on Visualization and Computer Graphics 2021;27:2101-2122. [PMID: 31352344] [DOI: 10.1109/tvcg.2019.2929033]
Abstract
Immersive analytics (IA) is a new term referring to the use of immersive technologies for data analysis. Yet such applications are not new, and numerous contributions have been made over the last three decades. However, no survey reviewing all these contributions has been available. Here we propose a survey of IA from the early nineties until the present day, describing how rendering technologies, data, sensory mapping, and interaction means have been used to build IA systems, as well as how these systems have been evaluated. The conclusions that emerge from our analysis are that multi-sensory aspects of IA are under-exploited; that the 3DUI and VR communities' knowledge of immersive interaction is not sufficiently utilised; and that the IA community should focus on converging towards best practices and aim for real-life IA systems.
23. StARboard & TrACTOr: Actuated Tangibles in an Educational TAR Application. Multimodal Technologies and Interaction 2021. [DOI: 10.3390/mti5020006]
Abstract
We explore the potential of direct haptic interaction in a novel approach to Tangible Augmented Reality in an educational context. Employing our prototyping platform ACTO, we developed a tabletop Augmented Reality application, StARboard, for sailing students. In this personal-viewpoint environment, virtual objects such as sailing ships are physically represented by actuated micro robots. These align with the virtual objects, allowing direct physical interaction with the scene: when a user tries to pick up a virtual ship, its physical robot counterpart is grabbed instead. We also developed a tracking solution, TrACTOr, which employs a depth sensor to allow tracking independent of the table surface. In this paper we present the concept and development of StARboard and TrACTOr. We report results of our user study with 18 participants using our prototype. They show that direct haptic interaction in tabletop AR scores on par with traditional mouse interaction on a desktop setup in usability (mean SUS = 86.7 vs. 82.9) and performance (mean RTLX = 15.0 vs. 14.8), while outperforming the mouse in learning-related factors like presence (mean 6.0 vs. 3.1) and absorption (mean 5.4 vs. 4.2). It was also rated the most fun (13× vs. 0×) and most suitable for learning (9× vs. 4×).
24. Ens B, Goodwin S, Prouzeau A, Anderson F, Wang FY, Gratzl S, Lucarelli Z, Moyle B, Smiley J, Dwyer T. Uplift: A Tangible and Immersive Tabletop System for Casual Collaborative Visual Analytics. IEEE Transactions on Visualization and Computer Graphics 2021;27:1193-1203. [PMID: 33074810] [DOI: 10.1109/tvcg.2020.3030334]
Abstract
Collaborative visual analytics leverages social interaction to support data exploration and sensemaking. These processes are typically imagined as formalised, extended activities, between groups of dedicated experts, requiring expertise with sophisticated data analysis tools. However, there are many professional domains that benefit from support for short 'bursts' of data exploration between a subset of stakeholders with a diverse breadth of knowledge. Such 'casual collaborative' scenarios will require engaging features to draw users' attention, with intuitive, 'walk-up and use' interfaces. This paper presents Uplift, a novel prototype system to support 'casual collaborative visual analytics' for a campus microgrid, co-designed with local stakeholders. An elicitation workshop with key members of the building management team revealed relevant knowledge is distributed among multiple experts in their team, each using bespoke analysis tools. Uplift combines an engaging 3D model on a central tabletop display with intuitive tangible interaction, as well as augmented-reality, mid-air data visualisation, in order to support casual collaborative visual analytics for this complex domain. Evaluations with expert stakeholders from the building management and energy domains were conducted during and following our prototype development and indicate that Uplift is successful as an engaging backdrop for casual collaboration. Experts see high potential in such a system to bring together diverse knowledge holders and reveal complex interactions between structural, operational, and financial aspects of their domain. Such systems have further potential in other domains that require collaborative discussion or demonstration of models, forecasts, or cost-benefit analyses to high-level stakeholders.
25. Reipschläger P, Flemisch T, Dachselt R. Personal Augmented Reality for Information Visualization on Large Interactive Displays. IEEE Transactions on Visualization and Computer Graphics 2021;27:1182-1192. [PMID: 33052863] [DOI: 10.1109/tvcg.2020.3030460]
Abstract
In this work we propose the combination of large interactive displays with personal head-mounted Augmented Reality (AR) for information visualization to facilitate data exploration and analysis. Even though large displays provide more display space, they are challenging with regard to perception, effective multi-user support, and managing data density and complexity. To address these issues and illustrate our proposed setup, we contribute an extensive design space comprising, first, the spatial alignment of display, visualizations, and objects in AR space. Next, we discuss which parts of a visualization can be augmented. Finally, we analyze how AR can be used to display personal views in order to show additional information and to minimize the mutual disturbance of data analysts. Based on this conceptual foundation, we present a number of exemplary techniques for extending visualizations with AR and discuss their relation to our design space. We further describe how these techniques address typical visualization problems that we identified during our literature research. To examine our concepts, we introduce a generic AR visualization framework as well as a prototype implementing several example techniques. To demonstrate their potential, we further present a use-case walkthrough in which we analyze a movie data set. From these experiences, we conclude that the contributed techniques can be useful in exploring and understanding multivariate data. We are convinced that the extension of large displays with AR for information visualization has great potential for data analysis and sense-making.
26. Yang Y, Cordeil M, Beyer J, Dwyer T, Marriott K, Pfister H. Embodied Navigation in Immersive Abstract Data Visualization: Is Overview+Detail or Zooming Better for 3D Scatterplots? IEEE Transactions on Visualization and Computer Graphics 2021;27:1214-1224. [PMID: 33048730] [DOI: 10.1109/tvcg.2020.3030427]
Abstract
Abstract data has no natural scale, so interactive data visualizations must provide techniques that allow the user to choose their viewpoint and scale. Such techniques are well established in desktop visualization tools. The two most common techniques are zoom+pan and overview+detail. However, how best to enable the analyst to navigate and view abstract data at different levels of scale in immersive environments has not previously been studied. We report the findings of the first systematic study of immersive navigation techniques for 3D scatterplots. We tested four conditions that represent our best attempt to adapt standard 2D navigation techniques to data visualization in an immersive environment while still providing standard immersive navigation through physical movement and teleportation. We compared room-sized visualization versus a zooming interface, each with and without an overview. We find significant differences in participants' response times and accuracy for a number of standard visual analysis tasks. Both zoom and overview provide benefits over standard locomotion support alone (i.e., physical movement and pointer teleportation). However, which variation is superior depends on the task. We obtain a more nuanced understanding of the results by analyzing them in terms of a time-cost model for the different components of navigation: way-finding, travel, number of travel steps, and context switching.
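A hypothetical sketch of the kind of additive time-cost model the abstract mentions is given below. The linear form, parameter names, and numbers are our own assumptions for illustration, not the model from the paper.

```typescript
// Decompose navigation time into the four components named above.
interface NavigationCosts {
  wayFinding: number;      // seconds spent locating the target region
  travelPerStep: number;   // seconds per travel action (walk / teleport)
  travelSteps: number;     // how many travel actions the technique needs
  contextSwitch: number;   // seconds to re-orient after each view change
  contextSwitches: number; // how many view changes occur
}

function totalNavigationTime(c: NavigationCosts): number {
  return (
    c.wayFinding +
    c.travelPerStep * c.travelSteps +
    c.contextSwitch * c.contextSwitches
  );
}

// Example: an overview+detail condition might trade extra context
// switches for fewer travel steps than room-scale physical movement.
const overviewDetail = totalNavigationTime({
  wayFinding: 2, travelPerStep: 1.5, travelSteps: 2,
  contextSwitch: 1, contextSwitches: 3,
}); // 2 + 3 + 3 = 8 seconds under these made-up parameters
```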
27. Wang Z, Ritchie J, Zhou J, Chevalier F, Bach B. Data Comics for Reporting Controlled User Studies in Human-Computer Interaction. IEEE Transactions on Visualization and Computer Graphics 2021;27:967-977. [PMID: 33048732] [DOI: 10.1109/tvcg.2020.3030433]
Abstract
Inspired by data comics, this paper introduces a novel format for reporting controlled studies in the domain of human-computer interaction (HCI). While many studies in HCI follow similar steps in explaining hypotheses, laying out a study design, and reporting results, many of these decisions are buried in blocks of dense scientific text. We propose leveraging data comics as study reports to provide an open and glanceable view of studies by tightly integrating text and images, illustrating design decisions and key insights visually, resulting in visual narratives that can be compelling to non-scientists and researchers alike. Use cases of data comics study reports range from illustrations for non-scientific audiences to graphical abstracts, study summaries, technical talks, textbooks, teaching, blogs, supplementary submission material, and inclusion in scientific articles. This paper provides examples of data comics study reports alongside a graphical repertoire of examples, embedded in a framework of guidelines for creating comics reports which was iterated upon and evaluated through a series of collaborative design sessions.
28. Luo T, Zhang M, Pan Z, Li Z, Cai N, Miao J, Chen Y, Xu M. Dream-Experiment: A MR User Interface with Natural Multi-channel Interaction for Virtual Experiments. IEEE Transactions on Visualization and Computer Graphics 2020;26:3524-3534. [PMID: 32941147] [DOI: 10.1109/tvcg.2020.3023602]
Abstract
This paper studies a set of MR technologies for middle-school experimental teaching environments and develops a multi-channel MR user interface called Dream-Experiment. The goal of Dream-Experiment is to improve on the traditional MR user interface so that users can have a natural 3D interactive experience similar to real experiments, but without danger or pollution. For visual presentation, we design multi-camera collaborative registration to realize a robust 6-DoF MR interactive space, and define a complete rendering pipeline with improved handling of occlusion between virtual and real objects, including translucent devices. For virtual-real interaction, we provide six interaction modes supporting visual interaction, tangible interaction, virtual-real gestures with touching, voice, thermal sensation, and olfactory sensation. In user testing, we found that Dream-Experiment offers better interactive efficiency and user experience than traditional MR environments.
29. Yu D, Zhou Q, Newn J, Dingler T, Velloso E, Goncalves J. Fully-Occluded Target Selection in Virtual Reality. IEEE Transactions on Visualization and Computer Graphics 2020;26:3402-3413. [PMID: 32986552] [DOI: 10.1109/tvcg.2020.3023606]
Abstract
The presence of fully-occluded targets is common within virtual environments, ranging from a virtual object located behind a wall to a datapoint of interest hidden in a complex visualization. However, efficient input techniques for locating and selecting these targets are mostly underexplored in virtual reality (VR) systems. In this paper, we developed an initial set of seven techniques for fully-occluded target selection in VR. We then evaluated their performance in a user study and derived a set of design implications for simple and more complex tasks from our results. Based on these insights, we refined the most promising techniques and conducted a second, more comprehensive user study. Our results show how factors such as occlusion layers, target depths, object densities, and the estimation of target locations can affect technique performance. Our findings from both studies, and the recommendations distilled from them, can inform the design of future VR systems that offer selection of fully-occluded targets.
30. Zorzal ER, Paulo SF, Rodrigues P, Mendes JJ, Lopes DS. An immersive educational tool for dental implant placement: A study on user acceptance. Int J Med Inform 2020;146:104342. [PMID: 33310434] [DOI: 10.1016/j.ijmedinf.2020.104342]
Abstract
BACKGROUND: Tools for training and education of dental students can improve their ability to perform technical procedures such as dental implant placement. A shortage of training can negatively affect dental implantologists' performance during intraoperative procedures, resulting in a lack of surgical precision and, consequently, inadequate implant placement, which may lead to unsuccessful implant-supported restorations or other complications.
OBJECTIVE: We designed and developed IMMPLANT, a virtual reality educational tool to assist implant placement learning, which allows users to freely manipulate 3D dental models (e.g., a simulated patient's mandible and implant) with their dominant hand while operating a touchscreen device to assist 3D manipulation.
METHODS: The proposed virtual reality tool combines an immersive head-mounted display, a small hand-tracking device, and a smartphone, all connected to a laptop. The operator's dominant hand is tracked to quickly and coarsely manipulate either the 3D dental model or the virtual implant, while the non-dominant hand holds a smartphone converted into a controller for button activation and greater input precision in 3D implant positioning and inclination. We evaluated IMMPLANT's usability and acceptance during training sessions with 16 dental professionals.
RESULTS: The user acceptance study revealed that IMMPLANT constitutes a versatile, portable, and complementary tool to assist implant placement learning, as it promotes immersive visualization and spatial manipulation of 3D dental anatomy.
CONCLUSIONS: IMMPLANT is a promising virtual reality tool to assist student learning and 3D dental visualization for implant placement education. IMMPLANT may also be easily incorporated into training programs for dental students.
Collapse
Affiliation(s)
- Ezequiel Roberto Zorzal
- ICT/UNIFESP, Instituto de Ciência e Tecnologia, Universidade Federal de São Paulo, Brazil; INESC-ID Lisboa, Instituto Superior Técnico, Universidade de Lisboa, Portugal.
- Pedro Rodrigues
- Clinical Research Unit (CRU), Centro de Investigação Interdisciplinar Egas Moniz (CiiEM), Instituto Universitário Egas Moniz, Almada, Portugal
- José João Mendes
- Clinical Research Unit (CRU), Centro de Investigação Interdisciplinar Egas Moniz (CiiEM), Instituto Universitário Egas Moniz, Almada, Portugal
- Daniel Simões Lopes
- INESC-ID Lisboa, Instituto Superior Técnico, Universidade de Lisboa, Portugal.
31
Kim JH, Ari H, Madasu C, Hwang J. Evaluation of the biomechanical stress in the neck and shoulders during augmented reality interactions. APPLIED ERGONOMICS 2020; 88:103175. [PMID: 32678782 DOI: 10.1016/j.apergo.2020.103175] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/20/2019] [Revised: 05/20/2020] [Accepted: 05/26/2020] [Indexed: 06/11/2023]
Abstract
This study aimed to characterize biomechanical stress in the neck and shoulders, self-reported discomfort, and usability across different target distances and sizes during augmented reality (AR) interactions. In a repeated-measures laboratory study, 20 participants (10 males) performed three standardized AR tasks: a 3-dimensional (3-D) cube task, an omni-directional pointing task, and a web-browsing task. The 3-D cube and omni-directional pointing tasks used three target distances (0.3, 0.6, and 0.9 m from each participant, denoted near, middle, and far), and the web-browsing task used three window sizes: small (30% smaller than default), medium (default: 1.0 × 1.1 m), and large (30% larger than default). Joint angles, joint moments, muscle activity, self-reported discomfort and comfort in the neck and shoulders, and subjective usability ratings were measured. The results showed that shoulder flexion and abduction angles, shoulder flexion moment, and middle deltoid muscle activity significantly increased as the target distance increased during the 3-D cube task (p's < 0.001). Self-reported neck and shoulder discomfort significantly increased after completing each task (p's < 0.001). Participants preferred the near to middle distances (0.3-0.6 m) and the medium to large window sizes because the tasks were easier to complete (p's < 0.005). Task performance (speed) was highest at the near distance and the large window size during the 3-D cube and web-browsing tasks (p's < 0.001). The results indicate that AR interactions with far targets (close to the maximum reach envelope) may increase the risk of musculoskeletal discomfort in the shoulder region. Given the increased usability and task performance, near to middle distances (less than 0.6 m) and medium to large window sizes (greater than 1.0 × 1.1 m) are recommended for AR interactions.
Affiliation(s)
- Jeong Ho Kim
- School of Biological and Population Health Sciences, College of Public Health and Human Sciences, Oregon State University, Corvallis, OR, USA
- Hemateja Ari
- Department of Industrial and Systems Engineering, College of Engineering and Engineering Technology, Northern Illinois University, DeKalb, IL, USA
- Charan Madasu
- Department of Industrial and Systems Engineering, College of Engineering and Engineering Technology, Northern Illinois University, DeKalb, IL, USA
- Jaejin Hwang
- Department of Industrial and Systems Engineering, College of Engineering and Engineering Technology, Northern Illinois University, DeKalb, IL, USA.
32
Chen Z, Su Y, Wang Y, Wang Q, Qu H, Wu Y. MARVisT: Authoring Glyph-Based Visualization in Mobile Augmented Reality. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2020; 26:2645-2658. [PMID: 30640614 DOI: 10.1109/tvcg.2019.2892415] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Recent advances in mobile augmented reality (AR) techniques have shed new light on personal visualization, thanks to their advantages of fitting visualization into personal routines, situating visualization in a real-world context, and arousing users' interest. However, enabling non-experts to create data visualizations in mobile AR environments is challenging given the lack of tools that allow in-situ design while supporting the binding of data to AR content. Most existing AR authoring tools require working on personal computers or manually creating each virtual object and modifying its visual attributes. We systematically study this issue by identifying the specific requirements of AR glyph-based visualization authoring tools and distilling four design considerations. Following these design considerations, we design and implement MARVisT, a mobile authoring tool that leverages information from reality to assist non-experts in addressing the relationships between data and virtual glyphs, between real objects and virtual glyphs, and between real objects and data. With MARVisT, users without visualization expertise can bind data to real-world objects to create expressive AR glyph-based visualizations rapidly and effortlessly, reshaping the representation of the real world with data. We use several examples to demonstrate the expressiveness of MARVisT. A user study with non-experts is also conducted to evaluate the authoring experience with MARVisT.
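To make the data-to-glyph binding concrete, here is a minimal sketch in Python of mapping data records onto visual attributes of AR glyphs; the record fields, the linear scale, and the glyph attributes are illustrative assumptions, not MARVisT's actual internals.

```python
# Illustrative sketch of data-to-glyph binding; the fields, scale, and
# glyph attributes are assumptions, not MARVisT's actual internals.
def linear_scale(value, domain, out_range):
    """Map a data value from its data domain to a visual range."""
    lo, hi = domain
    out_lo, out_hi = out_range
    t = (value - lo) / (hi - lo)
    return out_lo + t * (out_hi - out_lo)

records = [
    {"city": "A", "rainfall_mm": 32.0},
    {"city": "B", "rainfall_mm": 81.0},
]

# Each record becomes one glyph; rainfall drives the glyph's height, so a
# taller virtual bar can stand next to the real object it annotates.
glyphs = [
    {
        "label": rec["city"],
        "height_m": linear_scale(rec["rainfall_mm"], (0.0, 100.0), (0.05, 0.30)),
    }
    for rec in records
]
print(glyphs)
```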
33
Blanco-Novoa Ó, Fraga-Lamas P, Vilar-Montesinos MA, Fernández-Caramés TM. Creating the Internet of Augmented Things: An Open-Source Framework to Make IoT Devices and Augmented and Mixed Reality Systems Talk to Each Other. SENSORS 2020; 20:s20113328. [PMID: 32545277 PMCID: PMC7309179 DOI: 10.3390/s20113328] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/24/2020] [Revised: 06/06/2020] [Accepted: 06/10/2020] [Indexed: 11/17/2022]
Abstract
Augmented Reality (AR) and Mixed Reality (MR) devices have evolved significantly in recent years, providing immersive AR/MR experiences that allow users to interact with virtual elements placed in the real world. However, to make AR/MR devices reach their full potential, it is necessary to go further and let them collaborate with the physical elements around them, including objects that belong to the Internet of Things (IoT). Unfortunately, AR/MR and IoT devices usually make use of heterogeneous technologies that complicate their intercommunication. Moreover, implementing the intercommunication mechanisms requires specialized developers who have experience with the necessary technologies. To tackle these problems, this article proposes a framework that makes it easy to integrate AR/MR and IoT devices, allowing them to communicate dynamically and in real time. The presented AR/MR-IoT framework makes use of standard and open-source protocols and tools such as MQTT, HTTPS, and Node-RED. After detailing the inner workings of the framework, its potential is illustrated through a practical use case: a smart power socket that can be monitored and controlled through Microsoft HoloLens AR/MR glasses. The performance of this use case is evaluated, demonstrating that the proposed framework, under normal operating conditions, responds to interaction and data-update requests in less than 100 ms.
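Because the framework's glue is standard MQTT, a publish/subscribe exchange of the kind the smart-socket use case implies can be sketched in a few lines with Python's paho-mqtt client; the broker address, topic names, and payloads below are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch of the AR-IoT message flow; broker, topics, and payloads
# are illustrative assumptions, not the paper's actual configuration.
import paho.mqtt.client as mqtt

BROKER_HOST = "broker.example.org"   # hypothetical MQTT broker
STATE_TOPIC = "socket/1/state"       # hypothetical state topic
COMMAND_TOPIC = "socket/1/command"   # hypothetical command topic

def on_connect(client, userdata, flags, rc):
    # Mirror the socket's state in the AR client by subscribing to it.
    client.subscribe(STATE_TOPIC)
    # Toggle the (hypothetical) smart socket, as the HoloLens app might.
    client.publish(COMMAND_TOPIC, "ON")

def on_message(client, userdata, msg):
    # An AR/MR front end would update its hologram from this payload.
    print(f"{msg.topic}: {msg.payload.decode()}")

client = mqtt.Client()  # paho-mqtt 1.x callback signatures
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER_HOST, 1883, 60)
client.loop_forever()
```

A round trip over a nearby broker of this kind is consistent with the sub-100 ms responses the authors report under normal operating conditions.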
Affiliation(s)
- Óscar Blanco-Novoa
- Department of Computer Engineering, Faculty of Computer Science, Universidade da Coruña, 15071 A Coruña, Spain;
- Centro de Investigación CITIC, Universidade da Coruña, 15071 A Coruña, Spain
- Paula Fraga-Lamas
- Department of Computer Engineering, Faculty of Computer Science, Universidade da Coruña, 15071 A Coruña, Spain;
- Centro de Investigación CITIC, Universidade da Coruña, 15071 A Coruña, Spain
- Correspondence: (P.F.-L.); (T.M.F.-C.)
- Tiago M. Fernández-Caramés
- Department of Computer Engineering, Faculty of Computer Science, Universidade da Coruña, 15071 A Coruña, Spain;
- Centro de Investigación CITIC, Universidade da Coruña, 15071 A Coruña, Spain
- Correspondence: (P.F.-L.); (T.M.F.-C.)
34
Wainman B, Pukas G, Wolak L, Mohanraj S, Lamb J, Norman GR. The Critical Role of Stereopsis in Virtual and Mixed Reality Learning Environments. ANATOMICAL SCIENCES EDUCATION 2020; 13:401-412. [PMID: 31665563 DOI: 10.1002/ase.1928] [Citation(s) in RCA: 49] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/16/2019] [Revised: 10/24/2019] [Accepted: 10/25/2019] [Indexed: 05/26/2023]
Abstract
Anatomy education has been revolutionized through digital media, resulting in major advances in realism, portability, scalability, and user satisfaction. However, while such approaches may well be more portable, realistic, or satisfying than traditional photographic presentations, it is less clear that they have any superiority in terms of student learning. In this study, it was hypothesized that virtual and mixed reality presentations of pelvic anatomy would have an advantage over two-dimensional (2D) presentations, would perform approximately equally to physical models, and that this advantage over 2D presentations would be reduced when stereopsis was diminished by covering the non-dominant eye. Groups of 20 undergraduate students learned pelvic anatomy under seven conditions: a physical model with and without stereo vision, mixed reality with and without stereo vision, virtual reality with and without stereo vision, and key views on a computer monitor. All were tested with a cadaveric pelvis and a 15-item, short-answer recognition test. Compared to the key views, the physical model yielded a 70% increase in accuracy of structure identification, virtual reality a 25% increase, and mixed reality a non-significant 2.5% change. Blocking stereopsis reduced performance on the physical model by 15% and in virtual reality by 60%, but by only 2.5% with the mixed reality technology. The data show that the virtual and mixed reality technologies tested are inferior to physical models and that true stereopsis is critical in learning anatomy.
Affiliation(s)
- Bruce Wainman
- Department of Pathology and Molecular Medicine, Faculty of Health Sciences, McMaster University, Hamilton, Ontario, Canada
- Giancarlo Pukas
- Faculty of Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Liliana Wolak
- Department of Anatomy and Cell Biology, Western University, London, Ontario, Canada
- Sylvia Mohanraj
- Department of Pathology and Molecular Medicine, Faculty of Health Sciences, McMaster University, Hamilton, Ontario, Canada
- Jason Lamb
- Department of Health Research Methods, Faculty of Health Sciences, McMaster University, Hamilton, Ontario, Canada
- Geoffrey R Norman
- Department of Health Research Methods, Faculty of Health Sciences, McMaster University, Hamilton, Ontario, Canada
35
Abriata LA. Building blocks for commodity augmented reality-based molecular visualization and modeling in web browsers. PeerJ Comput Sci 2020; 6:e260. [PMID: 33816912 PMCID: PMC7924717 DOI: 10.7717/peerj-cs.260] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2019] [Accepted: 01/22/2020] [Indexed: 06/12/2023]
Abstract
For years, immersive interfaces using virtual and augmented reality (AR) for molecular visualization and modeling have promised a revolution in the way we teach, learn, communicate, and work in chemistry, structural biology, and related areas. However, most tools available today for immersive modeling require specialized hardware and software, and are costly and cumbersome to set up. These limitations prevent wide use of immersive technologies in education and research centers in a standardized form, which in turn prevents large-scale testing of the actual effects of such technologies on learning and thinking processes. Here, I discuss building blocks for creating marker-based AR applications that run as web pages on regular computers, and explore how they can be exploited to develop web content for handling virtual molecular systems in commodity AR with no more than a webcam- and internet-enabled computer. Examples range from displaying molecules, electron microscopy maps, and molecular orbitals with minimal amounts of HTML code to incorporating molecular mechanics, real-time estimation of experimental observables, and other interactive resources using JavaScript. These web apps provide virtual alternatives to physical, plastic molecular modeling kits, in which the computer augments the experience with information about spatial interactions, reactivity, energetics, etc. The ideas and prototypes introduced here should serve as starting points for building active content that everybody can use online at minimal cost, providing novel interactive pedagogic material in such an open way that it could enable mass testing of the effect of immersive technologies on chemistry education.
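As a taste of how little markup a commodity marker-based AR scene requires, the following sketch generates a minimal web page with the widely used A-Frame and AR.js libraries; the script URLs and tags reflect those libraries' common usage, not necessarily the paper's exact building blocks, and the box is a stand-in for a molecular model.

```python
# Writes a minimal marker-based AR page using A-Frame + AR.js; these tags
# and URLs reflect those libraries' common usage, and the <a-box> is a
# placeholder where a molecular model would be loaded.
PAGE = """<!DOCTYPE html>
<html>
  <head>
    <script src="https://aframe.io/releases/1.4.0/aframe.min.js"></script>
    <script src="https://raw.githack.com/AR-js-org/AR.js/master/aframe/build/aframe-ar.js"></script>
  </head>
  <body style="margin: 0">
    <a-scene embedded arjs>
      <a-marker preset="hiro">
        <a-box position="0 0.5 0" color="teal"></a-box>
      </a-marker>
      <a-entity camera></a-entity>
    </a-scene>
  </body>
</html>
"""

with open("ar_demo.html", "w", encoding="utf-8") as f:
    f.write(PAGE)
print("Serve ar_demo.html over HTTPS and point the webcam at a Hiro marker.")
```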
Affiliation(s)
- Luciano A. Abriata
- École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Swiss Institute of Bioinformatics, Lausanne, Switzerland
36
Coolen B, Beek PJ, Geerse DJ, Roerdink M. Avoiding 3D Obstacles in Mixed Reality: Does It Differ from Negotiating Real Obstacles? SENSORS (BASEL, SWITZERLAND) 2020; 20:E1095. [PMID: 32079351 PMCID: PMC7071133 DOI: 10.3390/s20041095] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/18/2019] [Revised: 01/29/2020] [Accepted: 02/14/2020] [Indexed: 12/22/2022]
Abstract
Mixed-reality technologies are evolving rapidly, allowing for gradually more realistic interaction with digital content while moving freely in real-world environments. In this study, we examined the suitability of the Microsoft HoloLens mixed-reality headset for creating locomotor interactions in real-world environments enriched with 3D holographic obstacles. In Experiment 1, we compared the obstacle-avoidance maneuvers of 12 participants stepping over either real or holographic obstacles of different heights and depths. Participants' avoidance maneuvers were recorded with three spatially and temporally integrated Kinect v2 sensors. Similar to real obstacles, holographic obstacles elicited obstacle-avoidance maneuvers that scaled with obstacle dimensions. However, with holographic obstacles, some participants showed dissimilar trail or lead foot obstacle-avoidance maneuvers compared to real obstacles: they either consistently failed to raise their trail foot or crossed the obstacle with extreme lead-foot margins. In Experiment 2, we examined the efficacy of mixed-reality video feedback in altering such dissimilar avoidance maneuvers. Participants quickly adjusted their trail-foot crossing height and gradually lowered extreme lead-foot crossing heights in the course of mixed-reality video feedback trials, and these improvements were largely retained in subsequent trials without feedback. Participant-specific differences in real and holographic obstacle avoidance notwithstanding, the present results suggest that 3D holographic obstacles supplemented with mixed-reality video feedback may be used for studying and perhaps also training 3D obstacle avoidance.
Affiliation(s)
- Bert Coolen
- Department of Human Movement Sciences, Faculty of Behavioural and Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam Movement Sciences, Van der Boechorststraat 7, 1081 BT Amsterdam, The Netherlands; (P.J.B.); (D.J.G.); (M.R.)
37
Kraus M, Weiler N, Oelke D, Kehrer J, Keim DA, Fuchs J. The Impact of Immersion on Cluster Identification Tasks. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2020; 26:525-535. [PMID: 31536002 DOI: 10.1109/tvcg.2019.2934395] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Recent developments in technology encourage the use of head-mounted displays (HMDs) as a medium to explore visualizations in virtual reality (VR). VR environments (VREs) enable new, more immersive visualization design spaces compared to traditional computer screens. Previous studies in different domains, such as medicine, psychology, and geology, report a positive effect of immersion, e.g., on learning performance or phobia-treatment effectiveness. The work presented in this paper assesses the applicability of those findings to a common task from the information visualization (InfoVis) domain. We conducted a quantitative user study to investigate the impact of immersion on cluster identification tasks in scatterplot visualizations. The main experiment was carried out with 18 participants in a within-subjects setting using four different visualizations: (1) a 2D scatterplot matrix on a screen, (2) a 3D scatterplot on a screen, (3) a 3D scatterplot miniature in a VRE, and (4) a fully immersive 3D scatterplot in a VRE. The four visualization design spaces vary in their level of immersion, as shown in a supplementary study. The results of our main study indicate that task performance differs between the investigated visualization design spaces in terms of accuracy, efficiency, memorability, sense of orientation, and user preference. In particular, the 2D visualization on the screen performed worse than the 3D visualizations with regard to the measured variables. The study shows that an increased level of immersion can be a substantial benefit in the context of 3D data and cluster detection.
38
Whitlock M, Wu K, Szafir DA. Designing for Mobile and Immersive Visual Analytics in the Field. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2020; 26:503-513. [PMID: 31425088 DOI: 10.1109/tvcg.2019.2934282] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Data collection and analysis in the field is critical for operations in domains such as environmental science and public safety. However, field workers currently face data- and platform-oriented obstacles to efficient data collection and analysis in the field, such as limited connectivity, screen space, and attentional resources. In this paper, we explore how visual analytics tools might transform field practices by more deeply integrating data into these operations. We use a design probe coupling mobile, cloud, and immersive analytics components to guide interviews with ten experts from five domains, exploring how visual analytics could support data collection and analysis needs in the field. The results identify shortcomings of current approaches, as well as target scenarios and design considerations for future field analysis systems. We embody these findings in FieldView, an extensible, open-source prototype designed to support critical use cases for situated field analysis. Our findings suggest the potential of integrating mobile and immersive technologies to enhance data's utility for field operations, and point to new directions for visual analytics tools to transform fieldwork.
39
Filho JAW, Freitas CMDS, Nedel L. Comfortable Immersive Analytics With the VirtualDesk Metaphor. IEEE COMPUTER GRAPHICS AND APPLICATIONS 2019; 39:41-53. [PMID: 30762533 DOI: 10.1109/mcg.2019.2898856] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
The VirtualDesk metaphor offers an opportunity for more comfortable and efficient immersive data exploration, using tangible interaction with the analyst's physical work desk and embodied manipulation of mid-air data representations. In this paper, we present an extended discussion of its underlying concepts and review and compare two previous case studies in which promising results were obtained in terms of user comfort, engagement, and usability. We also discuss the findings of a novel study conducted with geovisualization experts, pointing out directions for improvement and future research.
40
Sicat R, Li J, Choi J, Cordeil M, Jeong WK, Bach B, Pfister H. DXR: A Toolkit for Building Immersive Data Visualizations. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2019; 25:715-725. [PMID: 30136991 DOI: 10.1109/tvcg.2018.2865152] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
This paper presents DXR, a toolkit for building immersive data visualizations based on the Unity development platform. Over the past years, immersive data visualizations in augmented and virtual reality (AR, VR) have emerged as a promising medium for data sense-making beyond the desktop. However, creating immersive visualizations remains challenging and often requires complex low-level programming and tedious manual encoding of data attributes to geometric and visual properties. Such hurdles can hinder the iterative idea-to-prototype process, especially for developers without experience in 3D graphics, AR, and VR programming. With DXR, developers can efficiently specify visualization designs using a concise declarative visualization grammar inspired by Vega-Lite. DXR further provides a GUI for easy and quick edits and previews of visualization designs in situ, i.e., while immersed in the virtual world. DXR also provides reusable templates and customizable graphical marks, enabling unique and engaging visualizations. We demonstrate the flexibility of DXR through several examples spanning a wide range of applications.
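To convey the flavor of such a grammar, the sketch below expresses a 3D scatterplot as a Vega-Lite-style specification held in a Python dictionary; the field names follow Vega-Lite conventions and are illustrative only, not DXR's exact schema.

```python
import json

# Vega-Lite-style specification for an immersive 3D scatterplot; field
# names follow Vega-Lite conventions, not DXR's exact schema.
spec = {
    "data": {"url": "cars.json"},  # hypothetical data file
    "mark": "sphere",              # a 3D graphical mark
    "encoding": {
        "x": {"field": "horsepower", "type": "quantitative"},
        "y": {"field": "mpg", "type": "quantitative"},
        "z": {"field": "weight", "type": "quantitative"},
        "size": {"field": "cylinders", "type": "ordinal"},
    },
}

# A toolkit in the spirit of DXR parses such a JSON spec and instantiates
# one graphical mark per data point in the 3D scene.
print(json.dumps(spec, indent=2))
```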
41
Proniewska K, Dołęga-Dołęgowski D, Dudek D. A holographic doctors’ assistant on the example of a wireless heart rate monitor. BIO-ALGORITHMS AND MED-SYSTEMS 2018. [DOI: 10.1515/bams-2018-0007] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Microsoft's HoloLens is a high-end holographic head-mounted display that stands apart from other currently available solutions. We present a new concept for a holographic assistant for doctors, using a wireless patient data monitor as an example. A dedicated application would give doctors hands-free access to patient records, let them review new and old examination results, and even allow them to work on real-time data, whether in the examination room, at a patient's bedside, or in an entirely different location. Currently, analysis of patient data is performed mostly by the doctor; however, major progress in computer hardware performance and artificial intelligence (AI) algorithms has enabled new methods for analyzing and classifying patient examination results. In the same way that doctors learn and practice how to treat patients during their studies, algorithms can learn to spot abnormalities, so current hardware and advanced AI algorithms can be joined in one solution that provides an initial assessment of a patient's health and, if necessary, treatment guidance.