1. Liao S, Chaudhri S, Karwa MK, Popescu V. SeamlessVR: Bridging the Immersive to Non-Immersive Visualization Divide. IEEE Transactions on Visualization and Computer Graphics 2025; 31:2806-2816. [PMID: 40063473 DOI: 10.1109/tvcg.2025.3549564]
Abstract
The paper describes SeamlessVR, a method for switching effectively from immersive visualization, in a virtual reality (VR) headset, to non-immersive visualization, on screen. SeamlessVR implements a continuous morph of the 3D visualization to a 2D visualization that matches what the user will see on screen after removing the headset. This visualization continuity reduces the cognitive effort of connecting the immersive to the non-immersive visualization, helping the user continue on screen a visualization task started in the headset. We compared SeamlessVR to the conventional approach of directly removing the headset in an IRB-approved user study with N = 30 participants. SeamlessVR had a significant advantage over the conventional approach in terms of time and accuracy for target tracking in complex abstract and realistic scenes, in terms of participants' perception of the switch from immersive to non-immersive visualization, and in terms of usability. SeamlessVR did not pose cybersickness concerns.
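To make the idea of the continuous morph concrete, here is a minimal Python/NumPy sketch under our own assumptions: each point of the visualization is blended linearly between its immersive 3D position and its flattened on-screen projection as a parameter t runs from 0 to 1. The function names and projection pipeline are illustrative; the paper's actual morph is not specified in this abstract.

```python
import numpy as np

def project_to_screen(points, view, proj):
    """Project 3D points through view/projection matrices and drop depth,
    yielding positions on the screen plane (normalized device coordinates)."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    clip = homo @ view.T @ proj.T
    ndc = clip[:, :3] / clip[:, 3:4]               # perspective divide
    return np.column_stack([ndc[:, 0], ndc[:, 1], np.zeros(len(points))])

def seamless_morph(points, view, proj, t):
    """Blend between the immersive 3D layout (t = 0) and the flat on-screen
    layout (t = 1). A full implementation would express both endpoints in a
    common space, e.g., by unprojecting the screen plane into world space."""
    flat = project_to_screen(points, view, proj)
    return (1.0 - t) * points + t * flat
```

Animating t over the duration of the headset-to-screen transition is what would produce the visualization continuity described above.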
2. Pavanatto L, Lu F, North C, Bowman DA. Multiple Monitors or Single Canvas? Evaluating Window Management and Layout Strategies on Virtual Displays. IEEE Transactions on Visualization and Computer Graphics 2025; 31:1713-1730. [PMID: 38386585 DOI: 10.1109/tvcg.2024.3368930]
Abstract
Virtual displays enabled through head-worn augmented reality have unique characteristics that can yield extensive amounts of screen space. Existing research has shown that increasing the space on a computer screen can enhance usability. Since virtual displays offer the unique ability to present content without rigid physical space constraints, they provide various new design possibilities. Therefore, we must understand the trade-offs of layout choices when structuring that space. We propose a single Canvas approach that eliminates boundaries from traditional multi-monitor approaches and instead places windows in one large, unified space. Our user study compared this approach against a multi-monitor setup, and we considered both purely virtual systems and hybrid systems that included a physical monitor. We looked into usability factors such as performance, accuracy, and overall window management. Results show that Canvas displays can cause users to compact window layouts more than multiple monitors with snapping behavior, even though such optimizations may not lead to longer window management times. We did not find conclusive evidence of either setup providing a better user experience. Multi-Monitor displays offer quick window management with snapping and a structured layout through subdivisions. However, Canvas displays allow for more control in placement and size, lowering the amount of space used and, thus, head rotation. Multi-Monitor benefits were more prominent in the hybrid configuration, while the Canvas display was more beneficial in the purely virtual configuration.
3. Yao L, Bucchieri F, McArthur V, Bezerianos A, Isenberg P. User Experience of Visualizations in Motion: A Case Study and Design Considerations. IEEE Transactions on Visualization and Computer Graphics 2025; 31:174-184. [PMID: 39269804 DOI: 10.1109/tvcg.2024.3456319]
Abstract
We present a systematic review, an empirical study, and a first set of considerations for designing visualizations in motion, derived from a concrete scenario in which these visualizations were used to support a primary task. In practice, when viewers are confronted with embedded visualizations, they often have to focus on a primary task and can only quickly glance at a visualization showing rich, often dynamically updated, information. As such, the visualizations must be designed so as not to distract from the primary task, while at the same time being readable and useful for aiding the primary task. For example, in games, players who are engaged in a battle have to look at their enemies but also read the remaining health of their own game character from the health bar over their character's head. Many trade-offs are possible in the design of embedded visualizations in such dynamic scenarios, which we explore in depth in this paper with a focus on user experience. We use video games as an example of an application context with a rich existing set of visualizations in motion. We begin our work with a systematic review of in-game visualizations in motion. Next, we conduct an empirical user study to investigate how different embedded visualization-in-motion designs impact user experience. We conclude with a set of considerations and trade-offs for designing visualizations in motion more broadly, as derived from what we learned about video games. All supplemental materials of this paper are available at osf.io/3v8wm/.
4. Zhu Q, Lu T, Guo S, Ma X, Yang Y. CompositingVis: Exploring Interactions for Creating Composite Visualizations in Immersive Environments. IEEE Transactions on Visualization and Computer Graphics 2025; 31:591-601. [PMID: 39250414 DOI: 10.1109/tvcg.2024.3456210]
Abstract
Composite visualization represents a widely embraced design that combines multiple visual representations to create an integrated view. However, composite visualizations for immersive environments are traditionally created asynchronously, outside of the immersive space, and by experienced experts. In this work, we aim to empower users to create composite visualizations within immersive environments through embodied interactions. This provides a flexible and fluid immersive visualization experience and has the potential to facilitate understanding of the relationships between visualization views. We begin by developing a design space of embodied interactions for creating various types of composite visualizations with consideration of data relationships. Drawing inspiration from people's natural experience of manipulating physical objects, we design interactions based on combinations of 3D manipulations in immersive environments. Building upon the design space, we present a series of case studies showcasing the interactions used to create different kinds of composite visualizations in virtual reality. Subsequently, we conduct a user study to evaluate the usability of the derived interaction techniques and the user experience of creating composite visualizations through embodied interactions. We find that empowering users to create composite visualizations through embodied interactions enables them to flexibly leverage different visualization views for understanding and communicating the relationships between those views, underscoring the potential of several future application scenarios.
5. Leon GM, Bezerianos A, Gladin O, Isenberg P. Talk to the Wall: The Role of Speech Interaction in Collaborative Visual Analytics. IEEE Transactions on Visualization and Computer Graphics 2025; 31:941-951. [PMID: 39250400 DOI: 10.1109/tvcg.2024.3456335]
Abstract
We present the results of an exploratory study on how pairs interact with speech commands and touch gestures on a wall-sized display during a collaborative sensemaking task. Previous work has shown that speech commands, alone or in combination with other input modalities, can support visual data exploration by individuals. However, it is still unknown whether and how speech commands can be used in collaboration, and for what tasks. To answer these questions, we developed a functioning prototype that we used as a technology probe. We conducted an in-depth exploratory study with 10 participant pairs to analyze their interaction choices, the interplay between the input modalities, and their collaboration. We found that while touch was the most-used modality, participants preferred speech commands for global operations and for distant interaction, and that speech interaction contributed to awareness of the partner's actions. Furthermore, the likelihood of using speech commands during collaboration was related to the personality trait of agreeableness. Regarding collaboration styles, participants interacted with speech equally often whether they were in loosely or closely coupled collaboration. While partners stood closer to each other during close collaboration, they did not distance themselves to use speech commands. From our findings, we derive and contribute a set of design considerations for collaborative and multimodal interactive data analysis systems. All supplemental materials are available at https://osf.io/8gpv2.
6. Zhao L, Isenberg T, Xie F, Liang HN, Yu L. SpatialTouch: Exploring Spatial Data Visualizations in Cross-Reality. IEEE Transactions on Visualization and Computer Graphics 2025; 31:897-907. [PMID: 39255119 DOI: 10.1109/tvcg.2024.3456368]
Abstract
We propose and study a novel cross-reality environment that seamlessly integrates a monoscopic 2D surface (an interactive screen with touch and pen input) with a stereoscopic 3D space (an augmented reality HMD) to jointly host spatial data visualizations. This approach combines the best of two conventional methods of displaying and manipulating spatial 3D data, enabling users to fluidly explore diverse visual forms using tailored interaction techniques. Providing such effective 3D data exploration techniques is pivotal for conveying the data's intricate spatial structures, which often span multiple spatial or semantic scales across application domains and require diverse visual representations for effective visualization. To understand user reactions to our new environment, we began with an elicitation user study, in which we captured their responses and interactions. We observed that users adapted their interaction approaches based on the perceived visual representations, with natural transitions in spatial awareness and actions while navigating across the physical surface. Our findings then informed the development of a design space for spatial data exploration in cross-reality. We thus developed cross-reality environments tailored to three distinct domains: 3D molecular structure data, 3D point cloud data, and 3D anatomical data. In particular, we designed interaction techniques that account for the inherent features of interaction in both spaces, supporting mid-air gestures, touch interactions, pen interactions, and combinations thereof to enhance users' sense of presence and engagement. We assessed the usability of our environment with biologists, focusing on its use for domain research. In addition, we evaluated our interaction transition designs with virtual- and mixed-reality experts to gather further insights. As a result, we provide design suggestions for the cross-reality environment, emphasizing interaction with diverse visual representations and seamless interaction transitions between 2D and 3D spaces.
7. Pooryousef V, Cordeil M, Besancon L, Bassed R, Dwyer T. Collaborative Forensic Autopsy Documentation and Supervised Report Generation Using a Hybrid Mixed-Reality Environment and Generative AI. IEEE Transactions on Visualization and Computer Graphics 2024; 30:7452-7462. [PMID: 39250385 DOI: 10.1109/tvcg.2024.3456212]
Abstract
Forensic investigation is a complex procedure in which experts work together to establish cause of death and report findings to legal authorities. While new technologies are being developed to provide better post-mortem imaging capabilities, including mixed-reality (MR) tools that support 3D visualisation of such data, these tools do not integrate seamlessly into the existing collaborative workflow and report authoring process, requiring extra steps, e.g., to extract imagery from the MR tool and combine it with physical autopsy findings for inclusion in the report. Therefore, in this work we design and evaluate a new forensic autopsy report generation workflow and present a novel documentation system that uses hybrid mixed-reality approaches to integrate visualisation, voice and hand interaction, collaboration, and procedure recording. Our preliminary findings indicate that this approach has the potential to improve data management and aid reviewability, and thus achieve more robust standards. Further, it potentially streamlines report generation and minimises dependency on external tools and assistance, reducing autopsy time and related costs. The system also offers significant potential for education. A free copy of this paper and all supplemental materials are available at https://osf.io/ygfzx.
8. Ferrarotti A, Baldoni S, Carli M, Battisti F. Stress Assessment for Augmented Reality Applications Based on Head Movement Features. IEEE Transactions on Visualization and Computer Graphics 2024; 30:6970-6983. [PMID: 38578850 DOI: 10.1109/tvcg.2024.3385637]
Abstract
Augmented reality is one of the enabling technologies of the near future. Its use in working and learning scenarios may lead to better quality of work and training by helping operators during the most crucial stages of a process. Automatic detection of stress during augmented reality experiences can therefore be a valuable support for preventing consequences on people's health and fostering the spread of this technology. In this work, we present the design of a non-invasive stress assessment approach. The proposed system is based on the analysis of the head movements of people wearing a Head Mounted Display while performing stress-inducing tasks. First, we designed a subjective experiment consisting of two stress-related tests for data acquisition. Then, a statistical analysis of head movements was performed to determine which features are representative of the presence of stress. Finally, a stress classifier based on a combination of Support Vector Machines was designed and trained. The proposed approach achieved promising performance, thus paving the way for further studies in this research direction.
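The abstract outlines a feature-extraction and SVM-combination pipeline without giving details, so the following Python sketch is only one plausible reading: simple statistics over head-rotation series feed a soft-voting ensemble of SVMs with different kernels. The feature choices, kernels, and voting scheme are our assumptions, not the paper's design.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import VotingClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def head_features(yaw, pitch, roll, fs=60.0):
    """Statistical features over head-rotation time series (degrees),
    sampled at fs Hz by the HMD's tracking."""
    feats = []
    for series in (yaw, pitch, roll):
        vel = np.diff(series) * fs                 # angular velocity, deg/s
        feats += [series.std(), np.abs(vel).mean(), np.abs(vel).max()]
    return np.array(feats)

# One way to "combine" SVMs: a soft-voting ensemble over two kernels.
clf = make_pipeline(
    StandardScaler(),
    VotingClassifier(
        estimators=[("rbf", SVC(kernel="rbf", probability=True)),
                    ("lin", SVC(kernel="linear", probability=True))],
        voting="soft"),
)
# clf.fit(X_train, y_train)   # rows of head_features(); y: 0 = calm, 1 = stressed
```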
9. Lim CH, Cha MC, Lee SC. Physical loads on upper extremity muscles while interacting with virtual objects in an augmented reality context. Applied Ergonomics 2024; 120:104340. [PMID: 38964218 DOI: 10.1016/j.apergo.2024.104340]
Abstract
Augmented reality (AR) environments are emerging as prominent user interfaces and are attracting significant attention. However, the associated physical strain on users presents a considerable challenge. Against this background, this study explores the impact of movement distance (MD) and target-to-user distance (TTU) on physical load during drag-and-drop (DND) tasks in an AR environment. To address this objective, a user experiment was conducted using a 5 × 5 within-subject design with MD (16, 32, 48, 64, and 80 cm) and TTU (40, 80, 120, 160, and 200 cm) as the variables. Physical load was assessed using normalized electromyography (NEMG, %MVC) of the upper extremity muscles and the physical demand item of the NASA Task Load Index (NASA-TLX). The results revealed significant variations in physical load based on MD and TTU. Specifically, both NEMG and subjective physical workload values increased with increasing MD. Moreover, NEMG increased with decreasing TTU, whereas subjective physical workload scores increased with increasing TTU. Significant interaction effects of MD and TTU on NEMG were also observed. These findings suggest that considering MD and TTU when developing content for interacting with AR objects could alleviate user load.
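For readers unfamiliar with the NEMG (%MVC) measure: EMG amplitude is conventionally normalized by the amplitude recorded during a maximal voluntary contraction. A minimal sketch of that normalization follows; the paper's filtering and window lengths are not given in the abstract and are omitted here.

```python
import numpy as np

def rms(signal):
    """Root-mean-square amplitude of an EMG window."""
    return np.sqrt(np.mean(np.square(signal)))

def percent_mvc(task_window, mvc_window):
    """Normalized EMG: task amplitude as a percentage of the maximal
    voluntary contraction (MVC) reference, i.e., %MVC = 100 * task / MVC."""
    return 100.0 * rms(task_window) / rms(mvc_window)
```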
Affiliation(s)
- Chae Heon Lim: Department of Human Computer Interaction, Hanyang University ERICA, Ansan, Republic of Korea
- Min Chul Cha: Division of Media and Communication, Hankuk University of Foreign Studies, Seoul, Republic of Korea
- Seul Chan Lee: Department of Human Computer Interaction, Hanyang University ERICA, Ansan, Republic of Korea
10. Lin P, Li C, Chen S, Huangfu J, Yuan W. Intelligent Gesture Recognition Based on Screen Reflectance Multi-Band Spectral Features. Sensors (Basel, Switzerland) 2024; 24:5519. [PMID: 39275430 PMCID: PMC11398176 DOI: 10.3390/s24175519]
Abstract
Gesture-based human-computer interaction (HCI) with screens is a pivotal interaction method amid the ongoing trend of digitalization. In this work, a gesture recognition method is proposed that combines multi-band spectral features with spatial characteristics of screen-reflected light. Based on this method, a red-green-blue (RGB) three-channel spectral gesture recognition system has been developed, composed of a display screen integrated with narrowband spectral receivers as the hardware setup. During system operation, light emitted from the screen is reflected by gestures and received by the narrowband spectral receivers. These receivers, placed at various locations, capture multiple narrowband spectra and convert them into light-intensity series. The availability of multi-narrowband spectral data integrates multidimensional features from the frequency and spatial domains, enhancing classification capability. Based on the RGB three-channel spectral features, this work formulates an RGB multi-channel convolutional neural network long short-term memory (CNN-LSTM) gesture recognition model. It achieves accuracies of 99.93% in darkness and 99.89% in illuminated conditions, indicating that the system can operate stably across different lighting conditions and support accurate interaction. The intelligent gesture recognition method can be widely applied for interaction with various screens, such as computers and mobile phones, facilitating more convenient and precise HCI.
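To illustrate what a multi-channel CNN-LSTM over RGB light-intensity series might look like, here is a small PyTorch sketch; the layer sizes, 128-sample window length, and six gesture classes are our illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """1D convolutions over the three spectral channels, then an LSTM over
    time, then a linear classifier head over gesture classes."""
    def __init__(self, n_channels=3, n_classes=6, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, 3, time) intensity series
        z = self.conv(x)                  # (batch, 64, time/2)
        z = z.transpose(1, 2)             # (batch, time/2, 64) for the LSTM
        _, (h, _) = self.lstm(z)
        return self.head(h[-1])           # logits over gesture classes

logits = CNNLSTM()(torch.randn(8, 3, 128))  # 8 windows of 128 samples each
```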
Affiliation(s)
- Peiying Lin: School of Electrical and Information Engineering, Jiangsu University of Science and Technology, Zhangjiagang 215600, China
- Chenrui Li: Laboratory of Applied Research on Electromagnetics, Zhejiang University, Hangzhou 310027, China
- Sijie Chen: Laboratory of Applied Research on Electromagnetics, Zhejiang University, Hangzhou 310027, China
- Jiangtao Huangfu: Laboratory of Applied Research on Electromagnetics, Zhejiang University, Hangzhou 310027, China
- Wei Yuan: School of Electrical and Information Engineering, Jiangsu University of Science and Technology, Zhangjiagang 215600, China
11. Hong J, Hnatyshyn R, Santos EAD, Maciejewski R, Isenberg T. A Survey of Designs for Combined 2D+3D Visual Representations. IEEE Transactions on Visualization and Computer Graphics 2024; 30:2888-2902. [PMID: 38648152 DOI: 10.1109/tvcg.2024.3388516]
Abstract
We examine visual representations of data that make use of combinations of both 2D and 3D data mappings. Combining 2D and 3D representations is a common technique that allows viewers to understand multiple facets of the data with which they are interacting. While 3D representations focus on the spatial character of the data or on a dedicated 3D data mapping, 2D representations often show abstract data properties and take advantage of the unique benefits of mapping to a plane. Many systems have used unique combinations of both types of data mappings effectively. Yet there has been no systematic review of methods for linking 2D and 3D representations. We systematically survey the relationships between 2D and 3D visual representations in major visualization publications (IEEE VIS, IEEE TVCG, and EuroVis) from 2012 to 2022. We closely examined 105 articles in which 2D and 3D representations are connected visually, interactively, or through animation. We characterize these approaches by their visual environment, the relationships between their visual representations, and their possible layouts. Through our analysis, we introduce a design space and provide design guidelines for effectively linking 2D and 3D visual representations.
12. Seraji MR, Piray P, Zahednejad V, Stuerzlinger W. Analyzing User Behaviour Patterns in a Cross-Virtuality Immersive Analytics System. IEEE Transactions on Visualization and Computer Graphics 2024; 30:2613-2623. [PMID: 38470602 DOI: 10.1109/tvcg.2024.3372129]
Abstract
Recent work in immersive analytics suggests benefits for systems that support work across both 2D and 3D data visualizations, i.e., cross-virtuality analytics systems. Here, we introduce HybridAxes, an immersive visual analytics system that enables users to conduct their analysis either in 2D on desktop monitors or in 3D within an immersive AR environment, while enabling them to seamlessly switch and transfer their graphs between modes. Our user study results show that the cross-virtuality sub-systems in HybridAxes complement each other well in helping users on their data-understanding journey. We show that users preferred the AR component for exploring the data, while they used the desktop for more detail-intensive tasks. Despite encountering some minor challenges in switching between the two virtuality modes, users consistently rated the whole system as highly engaging, user-friendly, and helpful in streamlining their analytics processes. Finally, we present suggestions for designers of cross-virtuality visual analytics systems and identify avenues for future work.
13. Friedl-Knirsch J, Stach C, Pointecker F, Anthes C, Roth D. A Study on Collaborative Visual Data Analysis in Augmented Reality with Asymmetric Display Types. IEEE Transactions on Visualization and Computer Graphics 2024; 30:2633-2643. [PMID: 38437119 DOI: 10.1109/tvcg.2024.3372103]
Abstract
Collaboration is a key aspect of immersive visual data analysis. Because it lets users see co-located collaborators, augmented reality is often useful in such collaborative scenarios. However, several different types of technology are available for augmenting the real environment, and while specific devices are constantly being refined, each device type provides different premises for collaborative visual data analysis. In our work we combine handheld, optical see-through, and video see-through displays to explore and understand the impact of these different device types in collaborative immersive analytics. We conducted a mixed-methods collaborative user study in which groups of three performed a shared data analysis task in augmented reality, each user working on a different device, to explore differences in collaborative behaviour, user experience, and usage patterns. Both quantitative and qualitative data revealed differences in user experience and usage patterns. The display types also influenced how well participants could participate in the collaborative data analysis; nevertheless, there was no measurable effect on verbal communication.
14. Lee B, Sedlmair M, Schmalstieg D. Design Patterns for Situated Visualization in Augmented Reality. IEEE Transactions on Visualization and Computer Graphics 2024; 30:1324-1335. [PMID: 37883275 DOI: 10.1109/tvcg.2023.3327398]
Abstract
Situated visualization has become an increasingly popular research area in the visualization community, fueled by advancements in augmented reality (AR) technology and immersive analytics. Visualizing data in spatial proximity to their physical referents affords new design opportunities and considerations not present in traditional visualization, which researchers are now beginning to explore. However, the AR research community has an extensive history of designing graphics that are displayed in highly physical contexts. In this work, we leverage the richness of AR research and apply it to situated visualization. We derive design patterns which summarize common approaches of visualizing data in situ. The design patterns are based on a survey of 293 papers published in the AR and visualization communities, as well as our own expertise. We discuss design dimensions that help to describe both our patterns and previous work in the literature. This discussion is accompanied by several guidelines which explain how to apply the patterns given the constraints imposed by the real world. We conclude by discussing future research directions that will help establish a complete understanding of the design of situated visualization, including the role of interactivity, tasks, and workflows.
15. Minh Tran TT, Brown S, Weidlich O, Billinghurst M, Parker C. Wearable Augmented Reality: Research Trends and Future Directions from Three Major Venues. IEEE Transactions on Visualization and Computer Graphics 2023; 29:4782-4793. [PMID: 37782599 DOI: 10.1109/tvcg.2023.3320231]
Abstract
Wearable Augmented Reality (AR) has attracted considerable attention in recent years, as evidenced by the growing number of research publications and industry investments. With swift advancements and a multitude of interdisciplinary research areas within wearable AR, a comprehensive review is crucial for synthesizing the current state of the field. In this paper, we present a review of 389 research papers on wearable AR, published between 2018 and 2022 in three major venues: ISMAR, TVCG, and CHI. Drawing inspiration from previous works by Zhou et al. and Kim et al., which summarized AR research at ISMAR over the past two decades (1998-2017), we categorize the papers into different topics and identify prevailing trends. One notable finding is that wearable AR research is increasingly geared towards enabling broader consumer adoption. From our analysis, we highlight key observations related to potential future research areas essential for capitalizing on this trend and achieving widespread adoption. These include addressing challenges in Display, Tracking, Interaction, and Applications, and exploring emerging frontiers in Ethics, Accessibility, Avatar and Embodiment, and Intelligent Virtual Agents.
16. Yao L, Bezerianos A, Vuillemot R, Isenberg P. Visualization in Motion: A Research Agenda and Two Evaluations. IEEE Transactions on Visualization and Computer Graphics 2022; 28:3546-3562. [PMID: 35727779 DOI: 10.1109/tvcg.2022.3184993]
Abstract
We contribute a research agenda for visualization in motion and two experiments to understand how well viewers can read data from moving visualizations. We define visualizations in motion as visual data representations that are used in contexts that exhibit relative motion between a viewer and an entire visualization. Sports analytics, video games, wearable devices, and data physicalizations are example contexts that involve different types of relative motion between a viewer and a visualization. To analyze the opportunities and challenges for designing visualizations in motion, we show example scenarios and outline a first research agenda. Motivated primarily by the prevalence of and opportunities for visualizations in sports and video games, we began to investigate a small aspect of our research agenda: the impact of two important characteristics of motion, speed and trajectory, on a stationary viewer's ability to read data from moving donut and bar charts. We found that increasing speed and trajectory complexity negatively affected the accuracy of reading values from the charts, and that bar charts were more negatively impacted. In practice, however, this impact was small: both charts were still read fairly accurately.
17.
Abstract
Recent research in the area of immersive analytics has demonstrated the utility of augmented reality (AR) for data analysis. However, there is a lack of research on how to facilitate engaging, embodied, and interactive AR graph visualization. In this paper, we explored the design space for combining the capabilities of AR with node-link diagrams to create immersive data visualizations. We first systematically described the design rationale and the design process of a mobile-based AR graph, including the layout, interactions, and aesthetics. Then, we validated the AR concept by conducting a user study with 36 participants to examine users' behaviors with an AR graph and a 2D graph. The results of our study showed the feasibility of using an AR graph to present data relations, and also revealed interaction challenges in terms of effectiveness and usability on mobile devices. Finally, we iterated on the AR graph by implementing embodied interactions with hand gestures and by addressing the connection between physical objects and the digital graph. This study is the first step in our research, aiming to guide the design of future immersive AR data visualization applications.
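Since the abstract mentions designing the graph layout for AR, a generic baseline worth noting is a force-directed (spring) layout computed directly in 3D. The NumPy toy below is our own illustration of that baseline, not the layout algorithm used in the study.

```python
import numpy as np

def force_layout_3d(edges, n_nodes, iters=200, k=0.2, step=0.02, seed=0):
    """Toy 3D force-directed layout: every node pair repels, edges attract.
    Returns an (n_nodes, 3) array of positions for an AR node-link view."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-0.5, 0.5, size=(n_nodes, 3))
    for _ in range(iters):
        delta = pos[:, None, :] - pos[None, :, :]        # pairwise vectors
        dist = np.linalg.norm(delta, axis=-1) + 1e-9     # pairwise distances
        disp = ((k * k / dist**3)[:, :, None] * delta).sum(axis=1)  # repulsion
        for i, j in edges:                               # spring attraction
            d = pos[i] - pos[j]
            f = (np.linalg.norm(d) / k) * d
            disp[i] -= f
            disp[j] += f
        pos += step * disp                               # damped update
    return pos

# Example: a small 4-node graph laid out in 3D space.
layout = force_layout_3d([(0, 1), (1, 2), (2, 0), (2, 3)], n_nodes=4)
```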
18. Ard T, Bienkowski MS, Liew SL, Sepehrband F, Yan L, Toga AW. Integrating Data Directly into Publications with Augmented Reality and Web-Based Technologies – Schol-AR. Sci Data 2022. [PMCID: PMC9197835 DOI: 10.1038/s41597-022-01426-y]
Abstract
Scientific research has become highly intertwined with digital information; however, scientific publication remains based on the static text and figures of principal articles. This discrepancy constrains complex scientific data to static 2D figures, hindering our ability to effectively exchange the complex and extensive information that underlies modern research. Here, we demonstrate how the viewing of digital data can be directly integrated into the existing publication system through both web-based and augmented reality (AR) technologies. We additionally provide a framework that makes these capabilities available to the scientific community. Ultimately, augmenting articles with data can modernize scientific communication by bridging the gap between the digital basis of present-day research and the natural limitations of printable articles.