1. Cortes CAT, Thurow S, Ong A, Sharples JJ, Bednarz T, Stevens G, Favero DD. Analysis of Wildfire Visualization Systems for Research and Training: Are They Up for the Challenge of the Current State of Wildfires? IEEE Transactions on Visualization and Computer Graphics 2024; 30:4285-4303. [PMID: 37030767] [DOI: 10.1109/tvcg.2023.3258440]
Abstract
Wildfires affect many regions across the world. The accelerated progression of global warming has amplified their frequency and scale, deepening their impact on human life, the economy, and the environment. Rising temperatures have driven wildfires to behave unpredictably compared with those previously observed, challenging researchers and fire management agencies to understand the factors behind this behavioral change. Furthermore, this change has rendered fire personnel training outdated, stripping it of its ability to adequately prepare personnel to respond to these new fires. Immersive visualization can play a key role in tackling the growing issue of wildfires. This survey therefore reviews studies that use immersive and non-immersive data visualization techniques to depict wildfire behavior and to train first responders and planners, and identifies the most useful characteristics of these systems. While these studies support knowledge creation for certain situations, there is still scope to comprehensively improve immersive systems so that they address the unforeseen dynamics of wildfires.
2. Lin T, Aouididi A, Chen Z, Beyer J, Pfister H, Wang JH. VIRD: Immersive Match Video Analysis for High-Performance Badminton Coaching. IEEE Transactions on Visualization and Computer Graphics 2024; 30:458-468. [PMID: 37878442] [DOI: 10.1109/tvcg.2023.3327161]
Abstract
Badminton is a fast-paced sport that requires a strategic combination of spatial, temporal, and technical tactics. To gain a competitive edge at high-level competitions, badminton professionals frequently analyze match videos to gain insights and develop game strategies. However, the current process for analyzing matches is time-consuming and relies heavily on manual note-taking, due to the lack of automatic data collection and appropriate visualization tools. As a result, there is a gap in effectively analyzing matches and communicating insights among badminton coaches and players. This work proposes an end-to-end immersive match analysis pipeline designed in close collaboration with badminton professionals, including Olympic and national coaches and players. We present VIRD, a VR Bird (i.e., shuttle) immersive analysis tool that supports interactive badminton game analysis in an immersive environment based on 3D reconstructed game views of the match video. We propose a top-down analytic workflow that allows users to seamlessly move from a high-level match overview to a detailed game view of individual rallies and shots, using situated 3D visualizations and video. We collect 3D spatial and dynamic shot data and player poses with computer vision models and visualize them in VR. Through immersive visualizations, coaches can interactively analyze situated spatial data (player positions, poses, and shot trajectories) with flexible viewpoints while navigating between shots and rallies effectively with embodied interaction. We evaluated the usefulness of VIRD with Olympic and national-level coaches and players in real matches. Results show that immersive analytics supports effective badminton match analysis with reduced context-switching costs and enhances spatial understanding with a high sense of presence.
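As a reading aid for the top-down workflow described above, here is a minimal sketch of a match → rally → shot hierarchy in Python. It is our own illustration; the names and fields are hypothetical and not VIRD's actual data model.

```python
from dataclasses import dataclass, field

# Hypothetical data model for top-down match analysis: a match contains
# rallies, a rally contains shots, and each shot carries the 3D data
# (e.g., shuttle trajectory) that a situated visualization would render.

@dataclass
class Shot:
    player: str
    shot_type: str                                 # e.g., "smash", "clear", "drop"
    trajectory: list[tuple[float, float, float]]   # sampled 3D shuttle positions

@dataclass
class Rally:
    winner: str
    shots: list[Shot] = field(default_factory=list)

@dataclass
class Match:
    players: tuple[str, str]
    rallies: list[Rally] = field(default_factory=list)

    def drill_down(self, rally_idx: int, shot_idx: int) -> Shot:
        """Move from the match overview to a single shot, mirroring the
        overview -> rally -> shot navigation described in the paper."""
        return self.rallies[rally_idx].shots[shot_idx]
```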
3. Paes D, Irizarry J, Billinghurst M, Pujoni D. Investigating the relationship between three-dimensional perception and presence in virtual reality-reconstructed architecture. Applied Ergonomics 2023; 109:103953. [PMID: 36642060] [DOI: 10.1016/j.apergo.2022.103953]
Abstract
Identifying and characterizing the factors that affect presence in virtual environments has been acknowledged as a critical step toward improving Virtual Reality (VR) applications in the built environment domain. To help identify those factors, this study tested whether three-dimensional perception affects presence in virtual environments. A controlled within-group experiment using perception and presence questionnaires was conducted, followed by data analysis, to test the hypothesized unidirectional association between three-dimensional perception and presence in two different virtual environments (non-immersive and immersive). Results indicate no association in either of the systems studied, contrary to the assumption of many scholars in the field but in line with recent studies on the topic. Consequently, VR applications in architectural design may not need advanced stereoscopic visualization techniques to deliver highly immersive experiences; high immersion may instead be achieved by addressing factors other than depth realism. As the findings suggest that the level of presence users experience does not depend on the display mode of a 3D model (immersive or non-immersive), professionals involved in the review of 3D models (e.g., designers, contractors, clients) may still experience high levels of presence through non-stereoscopic VR systems, provided that other presence-promoting factors are present.
Affiliation(s)
- Daniel Paes: School of Built Environment, Massey University, Auckland, New Zealand.
- Javier Irizarry: School of Building Construction, Georgia Institute of Technology, Atlanta, GA, United States.
- Mark Billinghurst: Empathic Computing Laboratory, Auckland Bioengineering Institute, University of Auckland, Auckland, New Zealand; Empathic Computing Laboratory, STEM, University of South Australia, Mawson Lakes, SA, Australia.
- Diego Pujoni: Institute of Biological Sciences, Federal University of Minas Gerais, Belo Horizonte, MG, Brazil.
4. Alharbi R, Strnad O, Luidolt LR, Waldner M, Kouril D, Bohak C, Klein T, Groller E, Viola I. Nanotilus: Generator of Immersive Guided-Tours in Crowded 3D Environments. IEEE Transactions on Visualization and Computer Graphics 2023; 29:1860-1875. [PMID: 34882555] [DOI: 10.1109/tvcg.2021.3133592]
Abstract
Immersive virtual reality environments are gaining popularity for studying and exploring crowded three-dimensional structures. At very high structural densities, the natural depiction of the scene produces impenetrable clutter and requires visibility and occlusion management strategies for exploration and orientation. Strategies developed to address crowdedness in desktop applications, however, inhibit the feeling of immersion: they result in non-immersive, desktop-style outside-in viewing in virtual reality. This article proposes Nanotilus, a new visibility and guidance approach for very dense environments that generates an endoscopic inside-out experience instead of outside-in viewing, preserving the immersive aspect of virtual reality. The approach consists of two novel, tightly coupled mechanisms that control scene sparsification simultaneously with camera path planning. The sparsification strategy is localized around the camera and is realized as a multi-scale, multi-shell, variety-preserving technique. When Nanotilus dives into the structures to capture internal details residing on multiple scales, it guides the camera using depth-based path planning. In addition to sparsification and path planning, we complete the tour generation with an animation controller, textual annotation, and text-to-visualization conversion. We demonstrate the generated guided tours on mesoscopic biological models: SARS-CoV-2 and HIV. We evaluate the Nanotilus experience against a baseline outside-in sparsification and navigation technique in a formal user study with 29 participants. While users can maintain a better overview using the outside-in sparsification, the study confirms our hypothesis that Nanotilus leads to stronger engagement and immersion.
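To make the sparsification idea concrete, here is a deliberately simplified sketch of localized, multi-shell, variety-preserving filtering. This is our own reading of the abstract, not the authors' algorithm; the shell width, keep fractions, and the direction of the falloff are all assumptions.

```python
import random
from collections import defaultdict

def sparsify(instances, camera_pos, shell_width=10.0, keep_near=0.05, keep_far=1.0):
    """Illustrative multi-shell sparsification (not the paper's method):
    bin instances into concentric shells around the camera and keep a
    fraction that grows with distance, always retaining at least one
    instance of each type per shell so the scene's variety is preserved."""
    dist = lambda p: sum((a - b) ** 2 for a, b in zip(p, camera_pos)) ** 0.5
    shells = defaultdict(lambda: defaultdict(list))  # shell index -> type -> instances
    for type_name, position in instances:
        shells[int(dist(position) // shell_width)][type_name].append((type_name, position))
    kept, max_shell = [], (max(shells) if shells else 0)
    for s, by_type in shells.items():
        frac = keep_near + (keep_far - keep_near) * (s / max_shell if max_shell else 1.0)
        for members in by_type.values():
            n = max(1, round(frac * len(members)))  # variety: never drop a type entirely
            kept.extend(random.sample(members, n))
    return kept
```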
5. Wagner J, Stuerzlinger W, Nedel L. The Effect of Exploration Mode and Frame of Reference in Immersive Analytics. IEEE Transactions on Visualization and Computer Graphics 2022; 28:3252-3264. [PMID: 33606632] [DOI: 10.1109/tvcg.2021.3060666]
Abstract
The design space for user interfaces for Immersive Analytics applications is vast. Designers can combine navigation and manipulation to enable data exploration with ego- or exocentric views, have the user operate at different scales, or use different forms of navigation with varying levels of physical movement. This freedom results in a multitude of viable approaches, yet there is no clear understanding of the advantages and disadvantages of each choice. Our goal is to investigate the affordances of several major design choices, to enable both application designers and users to make better decisions. In this article, we assess two main factors, exploration mode and frame of reference, which in turn also vary visualization scale and physical movement demand. To isolate each factor, we implemented nine different conditions in a Space-Time Cube visualization use case and asked 36 participants to perform multiple tasks. We analyzed the results in terms of performance and qualitative measures and correlated them with participants' spatial abilities. While egocentric room-scale exploration significantly reduced mental workload, exocentric exploration improved performance in some tasks. Combining navigation and manipulation made tasks easier by reducing workload, temporal demand, and physical effort.
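For readers unfamiliar with the Space-Time Cube used as the study's use case, a minimal construction in Python (independent of the authors' implementation): spatial coordinates stay on two axes, and time becomes the third.

```python
import numpy as np

def space_time_cube(trajectory):
    """Turn a sequence of (x, y, t) samples into 3D points for a
    Space-Time Cube: x and y stay spatial, t becomes the height axis.
    Time is normalized so the cube has a fixed height regardless of
    the trajectory's duration."""
    xyt = np.asarray(trajectory, dtype=float)            # shape (n, 3)
    t = xyt[:, 2]
    height = (t - t.min()) / (t.max() - t.min() or 1.0)  # guard against zero duration
    return np.column_stack([xyt[:, 0], xyt[:, 1], height])

# e.g., a person moving along a diagonal over 60 seconds:
points = space_time_cube([(0, 0, 0), (5, 5, 30), (10, 10, 60)])
```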
6. Chu X, Xie X, Ye S, Lu H, Xiao H, Yuan Z, Zhu-Tian C, Zhang H, Wu Y. TIVEE: Visual Exploration and Explanation of Badminton Tactics in Immersive Visualizations. IEEE Transactions on Visualization and Computer Graphics 2022; 28:118-128. [PMID: 34596547] [DOI: 10.1109/tvcg.2021.3114861]
Abstract
Tactic analysis is a major issue in badminton, as the effective use of tactics is the key to winning. A tactic in badminton is defined as a sequence of consecutive strokes. Most existing methods use statistical models to find sequential patterns of strokes and apply 2D visualizations such as glyphs and statistical charts to explore and analyze the discovered patterns. However, in badminton, spatial information such as the shuttle trajectory, which is inherently 3D, is the core of a tactic. The lack of sufficient spatial awareness in 2D visualizations has largely limited the tactic analysis of badminton. In this work, we collaborate with domain experts to study the tactic analysis of badminton in a 3D environment and propose an immersive visual analytics system, TIVEE, to assist users in exploring and explaining badminton tactics at multiple levels. Users can first explore various tactics from the third-person perspective using an unfolded visual presentation of stroke sequences. By selecting a tactic of interest, users can turn to the first-person perspective to perceive its detailed kinematic characteristics and explain its effects on the game result. The effectiveness and usefulness of TIVEE are demonstrated by case studies and an expert interview.
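Because a tactic is defined as a sequence of consecutive strokes, the pattern-finding step can be pictured as n-gram counting over stroke sequences. A toy sketch follows; it is our illustration, and TIVEE's actual statistical modeling is more sophisticated.

```python
from collections import Counter

def frequent_tactics(rallies, length=3, min_count=2):
    """Count every window of `length` consecutive strokes across all
    rallies and return those occurring at least `min_count` times --
    a toy stand-in for statistical stroke-sequence mining."""
    counts = Counter()
    for strokes in rallies:                    # each rally: list of stroke labels
        for i in range(len(strokes) - length + 1):
            counts[tuple(strokes[i:i + length])] += 1
    return {tactic: c for tactic, c in counts.items() if c >= min_count}

rallies = [["serve", "clear", "drop", "net", "lift", "smash"],
           ["serve", "clear", "drop", "smash"]]
print(frequent_tactics(rallies))   # {('serve', 'clear', 'drop'): 2}
```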
7. Role-Aware Information Spread in Online Social Networks. Entropy 2021; 23:e23111542. [PMID: 34828240] [PMCID: PMC8618065] [DOI: 10.3390/e23111542]
Abstract
Understanding the complex process of information spread in online social networks (OSNs) enables the efficient maximization/minimization of the spread of useful/harmful information. Users assume various roles based on their behaviors while engaging with information in these OSNs. Recent reviews on information spread in OSNs have focused on algorithms and challenges for modeling the local node-to-node cascading paths of viral information. However, they neglected to analyze non-viral information with low reach size that can also spread globally beyond OSN edges (links) via non-neighbors, for example through information pushed by content recommendation algorithms. Previous reviews have also not fully considered user roles in the spread of information. To address these gaps, we: (i) provide a comprehensive survey of the latest studies on role-aware information spread in OSNs, also addressing the different temporal spreading patterns of viral and non-viral information; (ii) survey modeling approaches that consider structural, non-structural, and hybrid features, and provide a taxonomy of these approaches; (iii) review software platforms for the analysis and visualization of role-aware information spread in OSNs; and (iv) describe how information spread models enable useful applications in OSNs, such as detecting influential users. We conclude by highlighting future research directions for studying information spread in OSNs, accounting for dynamic user roles.
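For context, one canonical node-to-node cascade model covered by such surveys is the independent cascade. A minimal simulation is sketched below; it is illustrative only, as the survey spans many model families, including non-structural ones.

```python
import random

def independent_cascade(graph, seeds, p=0.1, rng=random.Random(42)):
    """Simulate one independent-cascade run: each newly activated node
    gets a single chance to activate each inactive neighbor with
    probability p. Returns the set of all activated nodes."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

# Tiny follower graph: edges point from a user to those who see their posts.
graph = {"a": ["b", "c"], "b": ["d"], "c": ["d", "e"], "d": [], "e": []}
print(independent_cascade(graph, seeds={"a"}, p=0.5))
```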
8. Thrun MC, Pape F, Ultsch A. Conventional displays of structures in data compared with interactive projection-based clustering (IPBC). International Journal of Data Science and Analytics 2021. [DOI: 10.1007/s41060-021-00264-2]
Abstract
Clustering is an important task in knowledge discovery, with the goal of identifying structures of similar data points in a dataset. Here, the focus lies on methods that use a human-in-the-loop, i.e., incorporate user decisions into the clustering process through 2D and 3D displays of the structures in the data. Some of these interactive approaches fall into the category of visual analytics and emphasize the power of such displays to identify structures interactively in various types of datasets or to verify the results of clustering algorithms. This work presents a new method called interactive projection-based clustering (IPBC). IPBC is an open-source, parameter-free method that uses a human-in-the-loop for an interactive 2.5D display and identification of structures in data, based on the user's choice of a dimensionality-reduction method. The IPBC approach is systematically compared with accessible visual analytics methods for the display and identification of cluster structures using twelve clustering benchmark datasets and one additional natural dataset. Qualitative comparison of 2D, 2.5D, and 3D displays of structures and empirical evaluation of the identified cluster structures show that IPBC outperforms comparable methods. Additionally, IPBC assists in identifying structures previously unknown to domain experts in an application.
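The projection stage of such a pipeline can be sketched as follows. The interactive 2.5D display and the human-in-the-loop cluster identification that distinguish IPBC are not reproduced here, and PCA is used merely as a stand-in for the user's choice of dimensionality-reduction method.

```python
import numpy as np
from sklearn.decomposition import PCA

def project_for_clustering(X, method=PCA):
    """Project data to 2D as the first stage of projection-based
    clustering. In a human-in-the-loop tool, the user would then
    delineate cluster boundaries on this display themselves rather
    than have an algorithm assign labels automatically."""
    return method(n_components=2).fit_transform(np.asarray(X, dtype=float))

# Any reducer with the same constructor signature could be swapped in.
X = np.random.default_rng(0).normal(size=(100, 8))
embedding = project_for_clustering(X)        # shape (100, 2)
```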
9. Yang Y, Cordeil M, Beyer J, Dwyer T, Marriott K, Pfister H. Embodied Navigation in Immersive Abstract Data Visualization: Is Overview+Detail or Zooming Better for 3D Scatterplots? IEEE Transactions on Visualization and Computer Graphics 2021; 27:1214-1224. [PMID: 33048730] [DOI: 10.1109/tvcg.2020.3030427]
Abstract
Abstract data has no natural scale, so interactive data visualizations must provide techniques that allow the user to choose their viewpoint and scale. Such techniques are well established in desktop visualization tools; the two most common are zoom+pan and overview+detail. However, how best to enable the analyst to navigate and view abstract data at different levels of scale in immersive environments has not previously been studied. We report the findings of the first systematic study of immersive navigation techniques for 3D scatterplots. We tested four conditions that represent our best attempt to adapt standard 2D navigation techniques to data visualization in an immersive environment while still providing standard immersive navigation through physical movement and teleportation. We compared room-sized visualization versus a zooming interface, each with and without an overview. We find significant differences in participants' response times and accuracy for a number of standard visual analysis tasks. Both zoom and overview provide benefits over standard locomotion support alone (i.e., physical movement and pointer teleportation); however, which variation is superior depends on the task. We obtain a more nuanced understanding of the results by analyzing them in terms of a time-cost model for the different components of navigation: way-finding, travel, number of travel steps, and context switching.
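One way to write down the time-cost model named at the end of the abstract is as an additive decomposition over its components. This is our formalization; the paper's exact model may differ, and the coefficient values below are hypothetical.

```python
def navigation_time(t_wayfind, t_travel_per_step, n_steps, t_context_switch, n_switches):
    """Additive time-cost model over the components the paper names:
    way-finding, travel (scaled by the number of travel steps), and
    context switching. Coefficients for a given technique would be
    estimated from study data."""
    return t_wayfind + t_travel_per_step * n_steps + t_context_switch * n_switches

# e.g., an overview+detail interface might pay more context switches but
# fewer travel steps than a zooming interface (all values hypothetical seconds):
overview = navigation_time(2.0, 1.5, 3, 0.8, 4)
zooming  = navigation_time(2.0, 1.2, 6, 0.8, 1)
```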
10. Lee B, Hu X, Cordeil M, Prouzeau A, Jenny B, Dwyer T. Shared Surfaces and Spaces: Collaborative Data Visualisation in a Co-located Immersive Environment. IEEE Transactions on Visualization and Computer Graphics 2021; 27:1171-1181. [PMID: 33048740] [DOI: 10.1109/tvcg.2020.3030450]
Abstract
Immersive technologies offer new opportunities to support collaborative visual data analysis by providing each collaborator a personal, high-resolution view of a flexible shared visualisation space through a head-mounted display. However, most prior studies of collaborative immersive analytics have focused on how groups interact with surface interfaces such as tabletops and wall displays. This paper reports on a study in which teams of three co-located participants are given flexible visualisation authoring tools, allowing a great deal of control in how they structure their shared workspace. They do so using a prototype system we call FIESTA: the Free-roaming Immersive Environment to Support Team-based Analysis. Unlike traditional visualisation tools, FIESTA allows users to freely position authoring interfaces and visualisation artefacts anywhere in the virtual environment, either on virtual surfaces or suspended within the interaction space. Our participants solved visual analytics tasks on a multivariate data set, doing so individually and collaboratively by creating a large number of 2D and 3D visualisations. Their behaviours suggest that the usage of surfaces is coupled with the type of visualisation used: participants often used walls to organise 2D visualisations but positioned 3D visualisations in the space around them. Outside of tightly-coupled collaboration, participants followed social protocols and did not interact with visualisations that did not belong to them, even when those visualisations were outside their owner's personal workspace.
11. Kraus M, Pollok T, Miller M, Kilian T, Moritz T, Schweitzer D, Beyerer J, Keim D, Qu C, Jentner W. Toward Mass Video Data Analysis: Interactive and Immersive 4D Scene Reconstruction. Sensors 2020; 20:E5426. [PMID: 32971822] [PMCID: PMC7570841] [DOI: 10.3390/s20185426]
Abstract
Technical progress in recent decades has made photo and video recording devices omnipresent. This change has a significant impact on, among other things, police work. It is no longer unusual for a myriad of digital data to accumulate after a criminal act, all of which must be reviewed by criminal investigators to collect evidence or solve the crime. This paper presents the VICTORIA Interactive 4D Scene Reconstruction and Analysis Framework ("ISRA-4D" 1.0), an approach for the visual consolidation of heterogeneous video and image data in a 3D reconstruction of the corresponding environment. First, by reconstructing the environment in which the materials were created, a shared spatial context for all available materials is established. Second, all footage is spatially and temporally registered within this 3D reconstruction. Third, a visualization of the resulting 4D reconstruction (3D scene + time) is provided, which can be analyzed interactively. Additional information on video and image content is also extracted and displayed and can be analyzed with supporting visualizations. The presented approach facilitates the process of filtering, annotating, analyzing, and getting an overview of large amounts of multimedia material. The framework is evaluated using four case studies that demonstrate its broad applicability. Furthermore, the framework allows users to immerse themselves in the analysis by entering the scenario in virtual reality. This feature is qualitatively evaluated through interviews with criminal investigators and outlines potential benefits such as improved spatial understanding and the initiation of new fields of application.
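The second step, spatio-temporal registration, implies that every clip can be placed on a shared scene timeline. An illustrative data structure follows; it is our own sketch, not the framework's API.

```python
from dataclasses import dataclass

@dataclass
class RegisteredFootage:
    """A video clip placed in a shared 4D reconstruction: a camera pose in
    scene coordinates plus an absolute start time, so any frame index
    maps to a point on the common incident timeline."""
    name: str
    pose: tuple[float, ...]   # camera position/orientation in scene coordinates
    start_time: float         # seconds since a common incident timeline origin
    fps: float
    n_frames: int

    def frame_time(self, frame_idx: int) -> float:
        return self.start_time + frame_idx / self.fps

    @property
    def end_time(self) -> float:
        return self.frame_time(self.n_frames - 1)

def footage_at(clips: list, t: float) -> list:
    """All clips recording at scene time t -- the kind of temporal query
    an analyst poses when scrubbing through the 4D (3D + time) scene."""
    return [c for c in clips if c.start_time <= t <= c.end_time]
```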
Affiliation(s)
- Matthias Kraus, Matthias Miller, Timon Kilian, Daniel Schweitzer, Daniel Keim, Wolfgang Jentner: Department of Computer and Information Science, Universität Konstanz, Universitätsstr. 10, 78465 Konstanz, Germany.
- Thomas Pollok, Tobias Moritz, Chengchao Qu: Fraunhofer IOSB, Fraunhoferstr. 1, 76131 Karlsruhe, Germany.
- Jürgen Beyerer: Fraunhofer IOSB, Fraunhoferstr. 1, 76131 Karlsruhe, Germany; Vision and Fusion Lab (IES), Karlsruhe Institute of Technology (KIT), c/o Technologiefabrik, Haid-und-Neu-Str. 7, 76131 Karlsruhe, Germany.