1. Afzaal H, Alim U. Evaluating Force-Based Haptics for Immersive Tangible Interactions with Surface Visualizations. IEEE Transactions on Visualization and Computer Graphics 2025; 31:886-896. [PMID: 39255113] [DOI: 10.1109/tvcg.2024.3456316]
Abstract
Haptic feedback provides a sensory stimulus crucial for interacting with and analyzing three-dimensional spatio-temporal phenomena on surface visualizations. Given its enhanced spatial perception and scene maneuverability, virtual reality (VR) catalyzes haptic interactions on surface visualizations. Various interaction modes, encompassing both mid-air and on-surface interactions, with or without assisting force stimuli, have been explored using haptic force-feedback devices. In this paper, we evaluate on-surface and assisted on-surface haptic interaction modes against a no-haptic interaction mode. A force-based haptic stylus is used for all three modalities; the on-surface mode uses collision-based forces, whereas the assisted on-surface mode adds a snapping force. We conducted a within-subjects user study involving fundamental interaction tasks performed on surface visualizations. Keeping a consistent visual design across all three modes, the study incorporates tasks that require locating the highest, lowest, and randomly chosen points on surfaces, and tasks that focus on brushing curves on surfaces with varying complexity and occlusion levels. Our findings show that participants took almost the same time to brush curves in all the interaction modes. They could draw smoother curves with the on-surface interaction modes than with the no-haptic mode, and the assisted on-surface mode provided better accuracy than the on-surface mode. The on-surface mode was slower in point localization, but accuracy depended on the visual cues and occlusions associated with the tasks. Finally, we discuss participant feedback on using haptic force feedback as a tangible input modality and share takeaways to aid the design of haptics-based tangible interactions for surface visualizations.
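
The assisted on-surface mode pairs collision-based contact forces with an extra snapping force that pulls the stylus toward the surface. The abstract does not give the force model, so the following is only a minimal sketch of one plausible spring-style formulation; the function names, gains, and height-field surface representation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def collision_force(tip, surface_height, k_contact=800.0):
    """Penalty force pushing the stylus tip out of the surface (assumed model)."""
    x, y, z = tip
    penetration = surface_height(x, y) - z          # > 0 when the tip is below the surface
    if penetration <= 0.0:
        return np.zeros(3)
    # Simplified vertical penalty rather than a push along the true surface normal.
    return np.array([0.0, 0.0, k_contact * penetration])

def snapping_force(tip, surface_height, k_snap=150.0):
    """Extra spring force pulling the tip onto the surface (assisted mode)."""
    x, y, z = tip
    target = np.array([x, y, surface_height(x, y)])  # on-surface point, approximated vertically
    return k_snap * (target - tip)

# Example: a simple analytic height field standing in for a surface visualization.
surface = lambda x, y: 0.2 * np.sin(3 * x) * np.cos(2 * y)
tip = np.array([0.4, 0.1, 0.35])
total = collision_force(tip, surface) + snapping_force(tip, surface)
print(total)   # force vector that would be sent to the haptic stylus each frame
```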

2. Lammert A, Rendle G, Immohr F, Neidhardt A, Brandenburg K, Raake A, Froehlich B. Immersive Study Analyzer: Collaborative Immersive Analysis of Recorded Social VR Studies. IEEE Transactions on Visualization and Computer Graphics 2024; 30:7214-7224. [PMID: 39255175] [DOI: 10.1109/tvcg.2024.3456146]
Abstract
Virtual Reality (VR) has become an important tool for conducting behavioral studies in realistic, reproducible environments. In this paper, we present ISA, an Immersive Study Analyzer system designed for the comprehensive analysis of social VR studies. For in-depth analysis of participant behavior, ISA records all user actions, speech, and the contextual environment of social VR studies. A key feature is the ability to review and analyze such immersive recordings collaboratively in VR, through support of behavioral coding and user-defined analysis queries for efficient identification of complex behavior. Respatialization of the recorded audio streams enables analysts to follow study participants' conversations in a natural and intuitive way. To support phases of close and loosely coupled collaboration, ISA allows joint and individual temporal navigation, and provides tools to facilitate collaboration among users at different temporal positions. An expert review confirms that ISA effectively supports collaborative immersive analysis, providing a novel and effective tool for nuanced understanding of user behavior in social VR studies.
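
ISA supports user-defined analysis queries over the recorded user actions and speech. The paper's query interface is not reproduced here; the sketch below only illustrates, under invented record layouts and thresholds, what such a query over a recorded event stream could look like, e.g., finding spans where two participants stand close together while at least one of them speaks.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    t: float          # timestamp in seconds
    positions: dict   # participant id -> (x, y, z) head position
    speaking: set     # ids of participants currently speaking

def dist(a, b):
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

def face_to_face_talk(frames, a, b, max_dist=1.5):
    """Return time spans where participants a and b stand close and at least one speaks."""
    spans, start = [], None
    for f in frames:
        hit = dist(f.positions[a], f.positions[b]) < max_dist and ({a, b} & f.speaking)
        if hit and start is None:
            start = f.t
        elif not hit and start is not None:
            spans.append((start, f.t))
            start = None
    if start is not None:
        spans.append((start, frames[-1].t))
    return spans

frames = [Frame(0.0, {"p1": (0, 0, 0), "p2": (1, 0, 0)}, {"p1"}),
          Frame(1.0, {"p1": (0, 0, 0), "p2": (3, 0, 0)}, set())]
print(face_to_face_talk(frames, "p1", "p2"))   # [(0.0, 1.0)]
```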

3. Borhani Z, Sharma P, Ortega FR. Survey of Annotations in Extended Reality Systems. IEEE Transactions on Visualization and Computer Graphics 2024; 30:5074-5096. [PMID: 37352090] [DOI: 10.1109/tvcg.2023.3288869]
Abstract
Annotation in 3D user interfaces such as Augmented Reality (AR) and Virtual Reality (VR) is a challenging and promising area; however, there are currently no surveys reviewing these contributions. To provide a survey of annotations for Extended Reality (XR) environments, we conducted a structured literature review of papers that used annotation in their AR/VR systems between 2001 and 2021. Our literature review process consisted of several filtering steps, which resulted in 103 XR publications with a focus on annotation. We classified these papers based on display technologies, input devices, annotation types, target objects under annotation, collaboration types, modalities, and collaborative technologies. A survey of annotation in XR is an invaluable resource for researchers and newcomers. Finally, we provide a database of the collected information for each reviewed paper, including applications, display technologies and their annotators, input devices, modalities, annotation types, interaction techniques, collaboration types, and tasks. This database provides rapid access to the collected data and gives users the ability to search or filter the required information. This survey provides a starting point for anyone interested in researching annotation in XR environments.

4. In S, Lin T, North C, Pfister H, Yang Y. This is the Table I Want! Interactive Data Transformation on Desktop and in Virtual Reality. IEEE Transactions on Visualization and Computer Graphics 2024; 30:5635-5650. [PMID: 37506003] [DOI: 10.1109/tvcg.2023.3299602]
Abstract
Data transformation is an essential step in data science. While experts primarily use programming to transform their data, there is an increasing need to support non-programmers with user-interface-based tools. Given the rapid development of interaction techniques and computing environments, we report empirical findings on the effects of interaction techniques and environments on data transformation tasks. Specifically, we studied the potential benefits of direct interaction and virtual reality (VR) for data transformation. We compared gesture interaction with a standard WIMP user interface, each on the desktop and in VR. With the tested data and tasks, we found that time performance was similar between desktop and VR. Meanwhile, VR shows preliminary evidence of better supporting provenance and sense-making throughout the data transformation process. Our exploration of performing data transformation in VR also provides initial affirmation for enabling an iterative and fully immersive data science workflow.

5. Cortes CAT, Thurow S, Ong A, Sharples JJ, Bednarz T, Stevens G, Favero DD. Analysis of Wildfire Visualization Systems for Research and Training: Are They Up for the Challenge of the Current State of Wildfires? IEEE Transactions on Visualization and Computer Graphics 2024; 30:4285-4303. [PMID: 37030767] [DOI: 10.1109/tvcg.2023.3258440]
Abstract
Wildfires affect many regions across the world. The accelerated progression of global warming has amplified their frequency and scale, deepening their impact on human life, the economy, and the environment. Rising temperatures have been driving wildfires to behave unpredictably compared to those previously observed, challenging researchers and fire management agencies to understand the factors behind this behavioral change. Furthermore, this change has rendered fire personnel training outdated, leaving it unable to adequately prepare personnel to respond to these new fires. Immersive visualization can play a key role in tackling the growing issue of wildfires. This survey therefore reviews studies that use immersive and non-immersive data visualization techniques to depict wildfire behavior and to train first responders and planners, and identifies the most useful characteristics of these systems. While these studies support knowledge creation for certain situations, there is still scope to comprehensively improve immersive systems to address the unforeseen dynamics of wildfires.

6. Hong J, Hnatyshyn R, Santos EAD, Maciejewski R, Isenberg T. A Survey of Designs for Combined 2D+3D Visual Representations. IEEE Transactions on Visualization and Computer Graphics 2024; 30:2888-2902. [PMID: 38648152] [DOI: 10.1109/tvcg.2024.3388516]
Abstract
We examine visual representations of data that combine both 2D and 3D data mappings. Combining 2D and 3D representations is a common technique that allows viewers to understand multiple facets of the data with which they are interacting. While 3D representations focus on the spatial character of the data or a dedicated 3D data mapping, 2D representations often show abstract data properties and take advantage of the unique benefits of mapping to a plane. Many systems have used unique combinations of both types of data mappings effectively, yet there are no systematic reviews of methods for linking 2D and 3D representations. We systematically survey the relationships between 2D and 3D visual representations in major visualization publications (IEEE VIS, IEEE TVCG, and EuroVis) from 2012 to 2022. We closely examined 105 articles in which 2D and 3D representations are connected visually, interactively, or through animation. These approaches are designed based on their visual environment, the relationships between their visual representations, and their possible layouts. Through our analysis, we introduce a design space and provide design guidelines for effectively linking 2D and 3D visual representations.

7. Seraji MR, Piray P, Zahednejad V, Stuerzlinger W. Analyzing User Behaviour Patterns in a Cross-Virtuality Immersive Analytics System. IEEE Transactions on Visualization and Computer Graphics 2024; 30:2613-2623. [PMID: 38470602] [DOI: 10.1109/tvcg.2024.3372129]
Abstract
Recent work in immersive analytics suggests benefits for systems that support work across both 2D and 3D data visualizations, i.e., cross-virtuality analytics systems. Here, we introduce HybridAxes, an immersive visual analytics system that enables users to conduct their analysis either in 2D on desktop monitors or in 3D within an immersive AR environment, while letting them seamlessly switch and transfer their graphs between the two modes. Our user study results show that the cross-virtuality sub-systems in HybridAxes complement each other well in supporting users in their data-understanding journey. We show that users preferred the AR component for exploring the data, while they used the desktop for more detail-intensive tasks. Despite encountering some minor challenges in switching between the two virtuality modes, users consistently rated the whole system as highly engaging, user-friendly, and helpful in streamlining their analytics processes. Finally, we present suggestions for designers of cross-virtuality visual analytics systems and identify avenues for future work.

8. Lee B, Sedlmair M, Schmalstieg D. Design Patterns for Situated Visualization in Augmented Reality. IEEE Transactions on Visualization and Computer Graphics 2024; 30:1324-1335. [PMID: 37883275] [DOI: 10.1109/tvcg.2023.3327398]
Abstract
Situated visualization has become an increasingly popular research area in the visualization community, fueled by advancements in augmented reality (AR) technology and immersive analytics. Visualizing data in spatial proximity to their physical referents affords new design opportunities and considerations not present in traditional visualization, which researchers are now beginning to explore. However, the AR research community has an extensive history of designing graphics that are displayed in highly physical contexts. In this work, we leverage the richness of AR research and apply it to situated visualization. We derive design patterns that summarize common approaches to visualizing data in situ. The design patterns are based on a survey of 293 papers published in the AR and visualization communities, as well as our own expertise. We discuss design dimensions that help to describe both our patterns and previous work in the literature. This discussion is accompanied by several guidelines which explain how to apply the patterns given the constraints imposed by the real world. We conclude by discussing future research directions that will help establish a complete understanding of the design of situated visualization, including the role of interactivity, tasks, and workflows.

9. Walchshofer C, Dhanoa V, Streit M, Meyer M. Transitioning to a Commercial Dashboarding System: Socio-Technical Observations and Opportunities. IEEE Transactions on Visualization and Computer Graphics 2024; 30:381-391. [PMID: 37878440] [DOI: 10.1109/tvcg.2023.3326525]
Abstract
Many long-established, traditional manufacturing businesses are becoming more digital and data-driven to improve their production. These companies are embracing visual analytics in these transitions through their adoption of commercial dashboarding systems. Although a number of studies have looked at the technical challenges of adopting these systems, very few have focused on the socio-technical issues that arise. In this paper, we report on the results of an interview study with 17 participants working in a range of roles at a long-established, traditional manufacturing company as they adopted Microsoft Power BI. The results highlight a number of socio-technical challenges the employees faced, including difficulties in training, using and creating dashboards, and transitioning to a modern digital company. Based on these results, we propose a number of opportunities for both companies and visualization researchers to improve these difficult transitions, as well as opportunities for rethinking how we design dashboarding systems for real-world use.

10. Filipov V, Arleo A, Miksch S. Are We There Yet? A Roadmap of Network Visualization from Surveys to Task Taxonomies. Computer Graphics Forum 2023; 42:e14794. [PMID: 38505648] [PMCID: PMC10947241] [DOI: 10.1111/cgf.14794]
Abstract
Networks are abstract and ubiquitous data structures, defined as a set of data points and the relationships between them. Network visualization provides meaningful representations of these data, supporting researchers in understanding the connections, gathering insights, and detecting and identifying unexpected patterns. Research in this field is focusing on increasingly challenging problems, such as visualizing dynamic, complex, multivariate, and geospatial networked data. This ever-growing and widely varied body of research has led to several surveys, each covering one or more disciplines of network visualization. Despite this effort, the variety and complexity of the research represent an obstacle when surveying the domain and building a comprehensive overview of the literature. Furthermore, the terminology used across these surveys lacks clarity and uniformity, which requires further effort when mapping and categorizing the plethora of visualization techniques and approaches. In this paper, we aim to provide researchers and practitioners alike with a "roadmap" detailing the current research trends in the field of network visualization. We design our contribution as a meta-survey in which we discuss, summarize, and categorize recent surveys and task taxonomies published in the context of network visualization. We identify more and less saturated disciplines of research and consolidate the terminology used in the surveyed literature. We also survey the available task taxonomies, provide a comprehensive analysis of their varying support for each network visualization discipline, and establish and discuss a classification of the individual tasks. With this combined analysis of surveys and task taxonomies, we provide an overarching structure of the field, from which we extrapolate the current state of research and promising directions for future work.

11. Jamaludin NAB, Mohamed FB, Sunar MSB, Selamat AB, Krejcar O. Answering Why? An Overview of Immersive Data Visualization Applications Using Multi-Level Typology of Visualization Task. 2022 IEEE International Conference on Computing (ICOCO) 2022. [DOI: 10.1109/icoco56118.2022.10031696]
Affiliation(s)
- Farhan Bin Mohamed: Universiti Teknologi Malaysia (UTM), School of Computing, Faculty of Engineering & Media and Game Innovation Center of Excellence (MaGICX), Johor Bahru, Malaysia
- Mohd Shahrizal Bin Sunar: Universiti Teknologi Malaysia (UTM), School of Computing, Faculty of Engineering, Johor Bahru, Malaysia
- Ali Bin Selamat: Malaysia-Japan International Institute of Technology (MJIIT), Universiti Teknologi Malaysia (UTM), Kuala Lumpur, Malaysia
- Ondrej Krejcar: University of Hradec Kralove, Center for Basic and Applied Research, Faculty of Informatics and Management, Hradec Kralove, Czech Republic

12. Joos L, Jaeger-Honz S, Schreiber F, Keim DA, Klein K. Visual Comparison of Networks in VR. IEEE Transactions on Visualization and Computer Graphics 2022; 28:3651-3661. [PMID: 36048995] [DOI: 10.1109/tvcg.2022.3203001]
Abstract
Networks are an important means for the representation and analysis of data in a variety of research and application areas. While there are many efficient methods to create layouts for networks to support their visual analysis, approaches for the comparison of networks are still underexplored. Especially when it comes to the comparison of weighted networks, which is an important task in several areas, such as biology and biomedicine, there is a lack of efficient visualization approaches. With the availability of affordable high-quality virtual reality (VR) devices, such as head-mounted displays (HMDs), the research field of immersive analytics emerged and showed great potential for using the new technology for visual data exploration. However, the use of immersive technology for the comparison of networks is still underexplored. With this work, we explore how weighted networks can be visually compared in an immersive VR environment and investigate how visual representations can benefit from the extended 3D design space. For this purpose, we develop different encodings for 3D node-link diagrams supporting the visualization of two networks within a single representation and evaluate them in a pilot user study. We incorporate the results into a more extensive user study comparing node-link representations with matrix representations encoding two networks simultaneously. The data and tasks designed for our experiments are similar to those occurring in real-world scenarios. Our evaluation shows significantly better results for the node-link representations, which is contrary to comparable 2D experiments and indicates a high potential for using VR for the visual comparison of networks.
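
The studied encodings show two weighted networks within a single 3D node-link representation. The paper's specific encodings are not restated here; the snippet below is a generic sketch of one common option, attaching both weights and their difference to each edge of the union graph so that a renderer could map them to, e.g., tube radius and color (the data layout is an assumption for illustration).

```python
def merge_weighted_networks(net_a, net_b):
    """Combine two weighted edge maps {(u, v): weight} into one comparison encoding.

    Each edge in the union carries both weights plus their mean and signed
    difference, which a 3D node-link renderer could map to tube radius and color.
    """
    merged = {}
    for e in set(net_a) | set(net_b):
        wa, wb = net_a.get(e, 0.0), net_b.get(e, 0.0)
        merged[e] = {"w_a": wa, "w_b": wb,
                     "mean": 0.5 * (wa + wb), "diff": wb - wa}
    return merged

net_a = {("n1", "n2"): 1.0, ("n2", "n3"): 0.4}
net_b = {("n1", "n2"): 0.2, ("n3", "n4"): 0.9}
for edge, enc in merge_weighted_networks(net_a, net_b).items():
    print(edge, enc)
```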

13. Collaborative and individual learning of geography in immersive virtual reality: An effectiveness study. PLoS One 2022; 17:e0276267. [PMID: 36256672] [PMCID: PMC9578614] [DOI: 10.1371/journal.pone.0276267]
Abstract
Many university-taught courses have moved online since the outbreak of the global coronavirus disease (COVID-19) pandemic. Distance learning has become widely used as a result of the broadly applied lockdowns; however, many students lack personal contact in the learning process, and classical web-based distance learning does not provide means for natural interpersonal interaction. The technology of immersive virtual reality (iVR) may mitigate this problem. Current research has been aimed mainly at specific instances of collaborative immersive virtual environment (CIVE) applications for learning, and fields utilizing iVR for knowledge construction and skills training with the use of spatial visualizations show promising results. The objective of this study was to assess the effectiveness of collaborative and individual use of iVR for learning geography, specifically training in hypsography. Furthermore, the study aimed to determine whether collaborative learning would be more effective and to investigate the key elements in which collaborative and individual learning were expected to differ: motivation and use of cognitive resources. The CIVE application developed at Masaryk University was used to train 80 participants in inferring conclusions from cartographic visualizations. The collaborative and individual experimental groups underwent a research procedure consisting of a pretest, training in iVR, a posttest, and questionnaires. A statistical comparison between the geography pretest and posttest for individual learning showed a significant increase in score (p = 0.024, ES = 0.128) and speed (p = 0.027, ES = 0.123), while for collaborative learning there was a significant increase in score (p < 0.001, ES = 0.333) but not in speed (p = 1.000, ES = 0.000). Thus, iVR as a medium proved to be an effective tool for learning geography. However, comparing collaborative and individual learning showed no significant difference in learning gain (p = 0.303, ES = 0.115), speed gain (p = 0.098, ES = 0.185), or performance motivation (p = 0.368, ES = 0.101). Nevertheless, the collaborative learning group made significantly higher use of cognitive resources (p = 0.046, ES = 0.223) than the individual learning group. The results are discussed in relation to cognitive load theories, and future research directions for iVR learning are proposed.
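
The abstract reports pretest/posttest comparisons via p-values and effect sizes. The exact tests used in the study are not restated here; the sketch below only shows, on made-up scores, how such a paired comparison and a matching effect size are commonly computed.

```python
import numpy as np
from scipy import stats

# Hypothetical pretest/posttest scores for one group (the real study used 80 participants).
pre  = np.array([4, 5, 6, 5, 7, 4, 6, 5, 5, 6], dtype=float)
post = np.array([6, 6, 7, 6, 8, 5, 7, 6, 6, 7], dtype=float)

diff = post - pre
t, p = stats.ttest_rel(post, pre)       # paired t-test on the score gain
dz = diff.mean() / diff.std(ddof=1)     # Cohen's d_z effect size for paired data

print(f"t = {t:.2f}, p = {p:.4f}, d_z = {dz:.2f}")
```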

14. Valades-Cruz CA, Leconte L, Fouche G, Blanc T, Van Hille N, Fournier K, Laurent T, Gallean B, Deslandes F, Hajj B, Faure E, Argelaguet F, Trubuil A, Isenberg T, Masson JB, Salamero J, Kervrann C. Challenges of intracellular visualization using virtual and augmented reality. Frontiers in Bioinformatics 2022; 2:997082. [PMID: 36304296] [PMCID: PMC9580941] [DOI: 10.3389/fbinf.2022.997082]
Abstract
Microscopy image observation is commonly performed on 2D screens, which limits human capacity to grasp volumetric, complex, and discrete biological dynamics. With the massive production of multidimensional images (3D + time, multi-channel) and derived images (e.g., restored images, segmentation maps, and object tracks), scientists need appropriate visualization and navigation methods to better apprehend the amount of information in their content. New modes of visualization have emerged, including virtual reality (VR) and augmented reality (AR) approaches, which should allow more accurate analysis and exploration of large time series of volumetric images, such as those produced by the latest 3D + time fluorescence microscopy. They include integrated algorithms that allow researchers to interactively explore complex spatiotemporal objects at the scale of single cells or multicellular systems, almost in real time. In practice, however, immersion of the user within 3D + time microscopy data represents both a paradigm shift in human-image interaction and an acculturation challenge for the community concerned. To promote a broader adoption of these approaches by biologists, further dialogue is needed between the bioimaging community and VR and AR developers.
Affiliation(s)
- Cesar Augusto Valades-Cruz: SERPICO Project Team, Inria Centre Rennes-Bretagne Atlantique, Rennes, France; SERPICO/STED Team, UMR144 CNRS Institut Curie, PSL Research University, Sorbonne Universites, Paris, France
- Ludovic Leconte: SERPICO Project Team, Inria Centre Rennes-Bretagne Atlantique, Rennes, France; SERPICO/STED Team, UMR144 CNRS Institut Curie, PSL Research University, Sorbonne Universites, Paris, France
- Gwendal Fouche: SERPICO Project Team, Inria Centre Rennes-Bretagne Atlantique, Rennes, France; SERPICO/STED Team, UMR144 CNRS Institut Curie, PSL Research University, Sorbonne Universites, Paris, France; Inria, CNRS, IRISA, University Rennes, Rennes, France
- Thomas Blanc: Laboratoire Physico-Chimie, Institut Curie, PSL Research University, Sorbonne Universites, CNRS UMR168, Paris, France
- Kevin Fournier: SERPICO Project Team, Inria Centre Rennes-Bretagne Atlantique, Rennes, France; SERPICO/STED Team, UMR144 CNRS Institut Curie, PSL Research University, Sorbonne Universites, Paris, France; Inria, CNRS, IRISA, University Rennes, Rennes, France
- Tao Laurent: LIRMM, Université Montpellier, CNRS, Montpellier, France
- Bassam Hajj: Laboratoire Physico-Chimie, Institut Curie, PSL Research University, Sorbonne Universites, CNRS UMR168, Paris, France
- Emmanuel Faure: LIRMM, Université Montpellier, CNRS, Montpellier, France
- Alain Trubuil: MaIAGE, INRAE, Université Paris-Saclay, Jouy-en-Josas, France
- Jean-Baptiste Masson: Decision and Bayesian Computation, Neuroscience and Computational Biology Departments, CNRS UMR 3571, Institut Pasteur, Université Paris Cité, Paris, France
- Jean Salamero: SERPICO Project Team, Inria Centre Rennes-Bretagne Atlantique, Rennes, France; SERPICO/STED Team, UMR144 CNRS Institut Curie, PSL Research University, Sorbonne Universites, Paris, France
- Charles Kervrann: SERPICO Project Team, Inria Centre Rennes-Bretagne Atlantique, Rennes, France; SERPICO/STED Team, UMR144 CNRS Institut Curie, PSL Research University, Sorbonne Universites, Paris, France

15. Touching data with PropellerHand. J Vis (Tokyo) 2022. [DOI: 10.1007/s12650-022-00859-2]
Abstract
Immersive analytics often takes place in virtual environments which promise the users immersion. To fulfill this promise, sensory feedback, such as haptics, is an important component, which is however not well supported yet. Existing haptic devices are often expensive, stationary, or occupy the user’s hand, preventing them from grasping objects or using a controller. We propose PropellerHand, an ungrounded hand-mounted haptic device with two rotatable propellers, that allows exerting forces on the hand without obstructing hand use. PropellerHand is able to simulate feedback such as weight and torque by generating thrust up to 11 N in 2-DOF and a torque of 1.87 Nm in 2-DOF. Its design builds on our experience from quantitative and qualitative experiments with different form factors and parts. We evaluated our prototype through a qualitative user study in various VR scenarios that required participants to manipulate virtual objects in different ways, while changing between torques and directional forces. Results show that PropellerHand improves users’ immersion in virtual reality. Additionally, we conducted a second user study in the field of immersive visualization to investigate the potential benefits of PropellerHand there.
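
PropellerHand produces force and torque feedback from two rotatable propellers (up to 11 N of thrust and 1.87 Nm of torque in 2-DOF each, per the abstract). The device's actual control code is not given in the paper; the snippet below is only a toy sketch of how a net force and torque on the hand could be summed from two thrust vectors and their mounting offsets, with all geometry and values chosen for illustration.

```python
import numpy as np

MAX_THRUST = 11.0   # N, the maximum stated in the abstract

def net_wrench(thrusts, directions, offsets):
    """Sum propeller thrusts into a net force and torque about the hand's center.

    thrusts    : scalar thrust magnitudes (clamped to MAX_THRUST)
    directions : unit vectors giving each propeller's thrust direction
    offsets    : lever arms from the hand's center to each propeller, in meters
    """
    force, torque = np.zeros(3), np.zeros(3)
    for t, d, r in zip(thrusts, directions, offsets):
        f = np.clip(t, 0.0, MAX_THRUST) * np.asarray(d, dtype=float)
        force += f
        torque += np.cross(np.asarray(r, dtype=float), f)   # torque = r x F
    return force, torque

# Two propellers mounted 6 cm to either side of the hand, thrusting in opposite
# directions to produce a torque about the vertical axis with little net force.
f, tau = net_wrench([5.0, 5.0],
                    [(0, 1, 0), (0, -1, 0)],
                    [(0.06, 0, 0), (-0.06, 0, 0)])
print(f, tau)
```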

16. The Hitchhiker’s Guide to Fused Twins: A Review of Access to Digital Twins In Situ in Smart Cities. Remote Sensing 2022. [DOI: 10.3390/rs14133095]
Abstract
Smart Cities already surround us, and yet they are still incomprehensibly far from directly impacting everyday life. While current Smart Cities are often inaccessible, the experience of everyday citizens may be enhanced with a combination of two emerging technologies: Digital Twins (DTs) and Situated Analytics. DTs represent their Physical Twin (PT) in the real world via models, simulations, (remotely) sensed data, context awareness, and interactions. However, interaction requires appropriate interfaces to address the complexity of the city. Ultimately, leveraging the potential of Smart Cities requires going beyond assembling the DT to be comprehensive and accessible. Situated Analytics allows city information to be anchored in its spatial context. We advance the concept of embedding the DT into the PT through Situated Analytics to form Fused Twins (FTs). This fusion allows access to data in the location where it is generated, in an embodied context that can make the data more understandable. Prototypes of FTs are rapidly emerging from different domains, but Smart Cities represent the context with the most potential for FTs in the future. This paper reviews DTs, Situated Analytics, and Smart Cities as the foundations of FTs. Regarding DTs, we define five components (Physical, Data, Analytical, Virtual, and Connection Environments) that we relate to several cognates (i.e., similar but different terms) from the existing literature. Regarding Situated Analytics, we review the effects of user embodiment on cognition and cognitive load. Finally, we classify existing partial examples of FTs from the literature, address their construction from Augmented Reality, Geographic Information Systems, Building/City Information Models, and DTs, and provide an overview of future directions.
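
The review names five Digital Twin components (Physical, Data, Analytical, Virtual, and Connection Environments) whose embedding in the Physical Twin yields a Fused Twin. Purely as an illustration of that taxonomy, the sketch below records it as a small data structure; all field contents are invented examples, not drawn from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    """The five Digital Twin components named in the review (contents illustrative)."""
    physical_environment: str                                    # the Physical Twin being mirrored
    data_environment: list = field(default_factory=list)         # sensed / remote data streams
    analytical_environment: list = field(default_factory=list)   # models and simulations
    virtual_environment: str = ""                                 # virtual representation shown to users
    connection_environment: str = ""                              # links tying the components together

@dataclass
class FusedTwin:
    """A Digital Twin accessed in situ via Situated Analytics."""
    twin: DigitalTwin
    situated_interface: str   # e.g., an AR view anchored at the Physical Twin

ft = FusedTwin(
    DigitalTwin("city district", ["air quality sensors"], ["traffic simulation"],
                "3D city model", "IoT message broker"),
    situated_interface="AR overlay at street level")
print(ft.twin.analytical_environment)
```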

17. Sereno M, Wang X, Besancon L, McGuffin MJ, Isenberg T. Collaborative Work in Augmented Reality: A Survey. IEEE Transactions on Visualization and Computer Graphics 2022; 28:2530-2549. [PMID: 33085619] [DOI: 10.1109/tvcg.2020.3032761]
Abstract
In Augmented Reality (AR), users perceive virtual content anchored in the real world. AR is used in medicine, education, games, navigation, maintenance, product design, and visualization, in both single-user and multi-user scenarios. Multi-user AR has received limited attention from researchers, even though AR has been in development for more than two decades. We present the state of existing work at the intersection of AR and Computer-Supported Collaborative Work (AR-CSCW), combining a systematic survey approach with an exploratory, opportunistic literature search. We categorize 65 papers along the dimensions of space, time, role symmetry (whether the roles of users are symmetric), technology symmetry (whether the hardware platforms of users are symmetric), and output and input modalities. We derive design considerations for collaborative AR environments and identify under-explored research topics, including heterogeneous hardware considerations and the research area of 3D data exploration. This survey is useful for newcomers to the field, readers interested in an overview of CSCW in AR applications, and domain experts seeking up-to-date information.

18. Bressa N, Korsgaard H, Tabard A, Houben S, Vermeulen J. What's the Situation with Situated Visualization? A Survey and Perspectives on Situatedness. IEEE Transactions on Visualization and Computer Graphics 2022; 28:107-117. [PMID: 34587065] [DOI: 10.1109/tvcg.2021.3114835]
Abstract
Situated visualization is an emerging concept within visualization in which data is visualized in situ, where it is relevant to people. The concept has gained interest from multiple research communities, including visualization, human-computer interaction (HCI), and augmented reality. This has led to a range of explorations and applications of the concept; however, this early work has focused on the operational aspect of situatedness, leading to inconsistent adoption of the concept and terminology. First, we contribute a literature survey in which we analyze 44 papers that explicitly use the term "situated visualization" to provide an overview of the research area: how it defines situated visualization, common application areas and technology used, and the types of data and visualization involved. Our survey shows that research on situated visualization has focused on technology-centric approaches that foreground a spatial understanding of situatedness. Second, we contribute five perspectives on situatedness (space, time, place, activity, and community) that together expand on the prevalent notion of situatedness in the corpus. We draw on six case studies and prior theoretical developments in HCI. Each perspective develops a generative way of looking at and working with situatedness in design and research. We outline future directions, including considering technology, material, and aesthetics; leveraging the perspectives for design; and methods for stronger engagement with target audiences. We conclude with opportunities to consolidate situated visualization research.

19. Zhu Y, Dai F, Maitra R. Fully Three-dimensional Radial Visualization. J Comput Graph Stat 2022. [DOI: 10.1080/10618600.2021.2020129]
Affiliation(s)
- Yifan Zhu: Department of Statistics, Iowa State University, Ames, Iowa
- Fan Dai: Department of Mathematical Sciences, Michigan Technological University, Houghton, Michigan
- Ranjan Maitra: Department of Statistics, Iowa State University, Ames, Iowa

20. Role-Aware Information Spread in Online Social Networks. Entropy 2021; 23:e23111542. [PMID: 34828240] [PMCID: PMC8618065] [DOI: 10.3390/e23111542]
Abstract
Understanding the complex process of information spread in online social networks (OSNs) enables the efficient maximization or minimization of the spread of useful or harmful information. Users assume various roles based on their behaviors while engaging with information in these OSNs. Recent reviews on information spread in OSNs have focused on algorithms and challenges for modeling the local node-to-node cascading paths of viral information. However, they have neglected to analyze non-viral information with low reach size, which can also spread globally beyond OSN edges (links) via non-neighbors, for example through information pushed by content recommendation algorithms. Previous reviews have also not fully considered user roles in the spread of information. To address these gaps, we: (i) provide a comprehensive survey of the latest studies on role-aware information spread in OSNs, also addressing the different temporal spreading patterns of viral and non-viral information; (ii) survey modeling approaches that consider structural, non-structural, and hybrid features, and provide a taxonomy of these approaches; (iii) review software platforms for the analysis and visualization of role-aware information spread in OSNs; and (iv) describe how information spread models enable useful applications in OSNs, such as detecting influential users. We conclude by highlighting future research directions for studying information spread in OSNs that account for dynamic user roles.
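
The surveyed modeling approaches build on node-to-node cascade models of information spread. As a point of reference, the sketch below implements the classic independent cascade model, a standard baseline in this literature, on an invented toy graph; it is not one of the role-aware models the survey covers.

```python
import random

def independent_cascade(graph, seeds, p=0.1, seed=7):
    """Simulate one run of the independent cascade model on a directed graph.

    graph : dict mapping node -> list of out-neighbors
    seeds : initially activated nodes
    p     : uniform per-edge activation probability (illustrative)
    """
    rng = random.Random(seed)
    active, frontier = set(seeds), list(seeds)
    while frontier:
        next_frontier = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    next_frontier.append(v)
        frontier = next_frontier
    return active

toy = {"a": ["b", "c"], "b": ["d"], "c": ["d", "e"], "d": [], "e": []}
print(independent_cascade(toy, seeds=["a"], p=0.5))
```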

21. Kraus M, Klein K, Fuchs J, Keim D, Schreiber F, Sedlmair M, Rhyne TM. The Value of Immersive Visualization. IEEE Computer Graphics and Applications 2021; 41:125-132. [PMID: 34264822] [DOI: 10.1109/mcg.2021.3075258]
Abstract
In recent years, research on immersive environments has experienced a new wave of interest, and immersive analytics has been established as a new research field. Every year, a vast number of techniques, applications, and user studies are published that focus on employing immersive environments for visualizing and analyzing data. Nevertheless, immersive analytics is still a relatively unexplored field that needs more basic research in many aspects and is still viewed with skepticism. Rightly so, because in our opinion, many researchers do not fully exploit the possibilities offered by immersive environments and, on the contrary, sometimes even overestimate the power of immersive visualizations. Although a growing body of papers has demonstrated individual advantages of immersive analytics for specific tasks and problems, the general benefit of using immersive environments for effective analytic tasks remains controversial. In this article, we reflect on when and how immersion may be appropriate for analysis and present four guiding scenarios. We report on our experiences, discuss the landscape of assessment strategies, and point out the directions where we believe immersive visualizations have the greatest potential.

22. Wagner J, Stuerzlinger W, Nedel L. Comparing and Combining Virtual Hand and Virtual Ray Pointer Interactions for Data Manipulation in Immersive Analytics. IEEE Transactions on Visualization and Computer Graphics 2021; 27:2513-2523. [PMID: 33750698] [DOI: 10.1109/tvcg.2021.3067759]
Abstract
In this work, we evaluate two standard interaction techniques for Immersive Analytics environments: virtual hands, with actions such as grabbing and stretching, and virtual ray pointers, with actions assigned to controller buttons. We also consider a third option: seamlessly integrating both modes and allowing the user to alternate between them without explicit mode switches. Easy-to-use interaction with data visualizations in Virtual Reality enables analysts to intuitively query or filter the data, in addition to the benefit of multiple perspectives and stereoscopic 3D display. While many VR-based Immersive Analytics systems employ one of the studied interaction modes, the effect of this choice is unknown. Considering that each has different advantages, we compared the three conditions through a controlled user study in the spatio-temporal data domain. We did not find significant differences between hands and ray-casting in task performance, workload, or interactivity patterns. Yet, 60% of the participants preferred the mixed mode and benefited from it by choosing the best alternative for each low-level task. This mode significantly reduced completion times by 23% for the most demanding task, at the cost of a 5% decrease in overall success rates.
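
Both techniques ultimately resolve which data item the user is pointing at or grabbing. The study's implementation is not shown here; the snippet below is only a simplified sketch of the selection tests such techniques typically reduce to: ray-sphere intersection for the virtual ray pointer and a proximity check for the virtual hand (all geometry is illustrative).

```python
import numpy as np

def ray_selects(origin, direction, center, radius):
    """Ray pointer: does a ray from the controller hit a spherical data glyph?"""
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    oc = np.asarray(center, dtype=float) - np.asarray(origin, dtype=float)
    along = float(np.dot(oc, d))                  # closest approach along the ray
    if along < 0.0:
        return False                              # glyph lies behind the controller
    closest_sq = float(np.dot(oc, oc)) - along ** 2
    return closest_sq <= radius ** 2              # squared distance from ray to glyph center

def hand_selects(hand_pos, center, grab_radius=0.05):
    """Virtual hand: is the hand close enough to grab the glyph?"""
    return np.linalg.norm(np.asarray(hand_pos, dtype=float) -
                          np.asarray(center, dtype=float)) <= grab_radius

glyph = (0.0, 1.5, 2.0)
print(ray_selects(origin=(0, 1.5, 0), direction=(0, 0, 1), center=glyph, radius=0.03))
print(hand_selects(hand_pos=(0.02, 1.5, 2.0), center=glyph))
```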

23. Exploring Multiple and Coordinated Views for Multilayered Geospatial Data in Virtual Reality. Information 2020. [DOI: 10.3390/info11090425]
Abstract
Virtual reality (VR) headsets offer a large and immersive workspace for displaying visualizations with stereoscopic vision, compared to traditional environments with monitors or printouts. The controllers for these devices further allow direct three-dimensional interaction with the virtual environment. In this paper, we make use of these advantages to implement a novel multiple and coordinated view (MCV) system in the form of a vertical stack showing tilted layers of geospatial data. In a formal study based on a use case from urbanism that requires cross-referencing four layers of geospatial urban data, we compared it against two more conventional systems similarly implemented in VR: a simpler grid of layers, and a single map that allows switching between layers (blitting). Performance and oculometric analyses showed a slight advantage of the two spatial-multiplexing methods (the grid and the stack) over the temporal multiplexing of blitting. Subgrouping the participants based on their preferences, characteristics, and behavior enabled a more nuanced analysis, allowing us to establish links between, e.g., saccadic information, experience with video games, and the preferred system. In conclusion, we found that none of the three systems is optimal and that a choice of different MCV systems should be provided to optimally engage users.