1. In S, Lin T, North C, Pfister H, Yang Y. This is the Table I Want! Interactive Data Transformation on Desktop and in Virtual Reality. IEEE Transactions on Visualization and Computer Graphics 2024; 30:5635-5650. [PMID: 37506003] [DOI: 10.1109/tvcg.2023.3299602]
Abstract
Data transformation is an essential step in data science. While experts primarily use programming to transform their data, there is an increasing need to support non-programmers with user interface-based tools. With the rapid development in interaction techniques and computing environments, we report our empirical findings about the effects of interaction techniques and environments on performing data transformation tasks. Specifically, we studied the potential benefits of direct interaction and virtual reality (VR) for data transformation. We compared gesture interaction versus a standard WIMP user interface, each on the desktop and in VR. With the tested data and tasks, we found time performance was similar between desktop and VR. Meanwhile, VR demonstrates preliminary evidence to better support provenance and sense-making throughout the data transformation process. Our exploration of performing data transformation in VR also provides initial affirmation for enabling an iterative and fully immersive data science workflow.
2. Kyung G, Park S. Curved Versus Flat Monitors: Interactive Effects of Display Curvature Radius and Display Size on Visual Search Performance and Visual Fatigue. Human Factors 2021; 63:1182-1195. [PMID: 32374635] [DOI: 10.1177/0018720820922717]
Abstract
OBJECTIVE The aim of this study is to examine the interactive effects of display curvature radius and display size on visual search accuracy, visual search speed, and visual fatigue. BACKGROUND Although the advantages of curved displays have been reported, little is known about the interactive effects of display curvature radius and size. METHOD Twenty-seven individuals performed visual search tasks at a viewing distance of 50 cm using eight configurations involving four display curvature radii (400R, 600R, 1200R, and flat) and two display sizes (33″ and 50″). To simulate curved screens, five flat display panels were horizontally arranged with their centers concentrically repositioned following each display curvature radius. RESULTS For accuracy, speed, and fatigue, 33″-600R and 50″-600R provided the best or comparable-to-best results, whereas 50″-flat provided the worst results. For accuracy and fatigue, 33″-flat was the second worst. The changes in the horizontal field of view and viewing angle due to display curvature, as well as the association between effective display curvature radii and the empirical horopter (the locus of perceived equidistance), can explain these results. CONCLUSION The interactive effects of display curvature radius and size were evident for visual search performance and fatigue. Beneficial effects of curved displays were maintained across 33″ and 50″, whereas increasing flat display size from 33″ to 50″ was detrimental. APPLICATION For visual search tasks at a viewing distance of 50 cm, 33″-600R and 50″-600R displays are recommended, as opposed to 33″ and 50″ flat displays. Wide flat displays must be carefully considered for visual display terminal tasks.
3. Liu R, Wang H, Zhang C, Chen X, Wang L, Ji G, Zhao B, Mao Z, Yang D. Narrative Scientific Data Visualization in an Immersive Environment. Bioinformatics 2021; 37:2033-2041. [PMID: 33538809] [DOI: 10.1093/bioinformatics/btab052]
Abstract
MOTIVATION Narrative visualization for scientific data exploration can help users better understand domain knowledge, because narrative visualizations often present a sequence of facts and observations linked together by a unifying theme or argument. Narrative visualization in immersive environments can give users an intuitive, interactive way to explore scientific data, because such environments provide a brand-new strategy for interactive scientific data visualization and exploration. However, developing narrative scientific visualization in immersive environments is challenging. In this paper, we propose an immersive narrative visualization tool that lets ordinary users with little programming knowledge of scientific visualization create and customize scientific data explorations. Users can conveniently define points of interest (POIs) with the controller of an immersive device. RESULTS Automatic exploration animations with narrative annotations can be generated by gradual transitions between consecutive POI pairs. In addition, interactive slicing can also be controlled with the device controller. Evaluations, including a user study and a case study, were designed and conducted to show the usability and effectiveness of the proposed tool. AVAILABILITY Related information can be accessed at: https://dabigtou.github.io/richenliu/.
Affiliation(s)
- Richen Liu
- School of Computer and Electronic Information/School of Artificial Intelligence, Nanjing Normal University, Nanjing, 210023, China
- Hailong Wang
- School of Computer and Electronic Information/School of Artificial Intelligence, Nanjing Normal University, Nanjing, 210023, China
- Chuyu Zhang
- School of Computer and Electronic Information/School of Artificial Intelligence, Nanjing Normal University, Nanjing, 210023, China
- Xiaojian Chen
- School of Computer and Electronic Information/School of Artificial Intelligence, Nanjing Normal University, Nanjing, 210023, China
- Lijun Wang
- School of Computer and Electronic Information/School of Artificial Intelligence, Nanjing Normal University, Nanjing, 210023, China
- Genlin Ji
- School of Computer and Electronic Information/School of Artificial Intelligence, Nanjing Normal University, Nanjing, 210023, China
- Bin Zhao
- School of Computer and Electronic Information/School of Artificial Intelligence, Nanjing Normal University, Nanjing, 210023, China
- Zhiwei Mao
- School of Computer and Electronic Information/School of Artificial Intelligence, Nanjing Normal University, Nanjing, 210023, China
- Dan Yang
- School of Computer and Electronic Information/School of Artificial Intelligence, Nanjing Normal University, Nanjing, 210023, China
4. Fonnet A, Prie Y. Survey of Immersive Analytics. IEEE Transactions on Visualization and Computer Graphics 2021; 27:2101-2122. [PMID: 31352344] [DOI: 10.1109/tvcg.2019.2929033]
Abstract
Immersive analytics (IA) is a new term referring to the use of immersive technologies for data analysis. Yet such applications are not new, and numerous contributions have been made in the last three decades. However, no survey reviewing all these contributions is available. Here we propose a survey of IA from the early nineties until the present day, describing how rendering technologies, data, sensory mapping, and interaction means have been used to build IA systems, as well as how these systems have been evaluated. The conclusions that emerge from our analysis are that the multi-sensory aspects of IA are under-exploited; that the 3DUI and VR community's knowledge of immersive interaction is not sufficiently utilised; and that the IA community should focus on converging towards best practices, as well as aim for real-life IA systems.
5. Chen J, Zhang G, Chiou W, Laidlaw DH, Auchus AP. Measuring the Effects of Scalar and Spherical Colormaps on Ensembles of DMRI Tubes. IEEE Transactions on Visualization and Computer Graphics 2020; 26:2818-2833. [PMID: 30763242] [DOI: 10.1109/tvcg.2019.2898438]
Abstract
We report empirical study results on the color encoding of ensemble scalar and orientation data for visualizing diffusion magnetic resonance imaging (DMRI) tubes. The experiment tested six scalar colormaps for average fractional anisotropy (FA) tasks (grayscale, blackbody, diverging, isoluminant-rainbow, extended-blackbody, and coolwarm) and four three-dimensional (3D) spherical colormaps for tract-tracing tasks (uniform gray, absolute, eigenmaps, and Boy's surface embedding). We found that extended-blackbody, coolwarm, and blackbody remain the best three approaches for identifying ensemble averages in 3D. The isoluminant-rainbow colormap yielded the same ensemble-mean accuracy as the other colormaps; however, more than 50 percent of answers consistently overestimated the ensemble average, independent of the mean values. The number of hues, not luminance, influences ensemble estimates of mean values. For ensemble orientation-tracing tasks, we found that both Boy's surface embedding (greatest spatial resolution and contrast) and the absolute colormap (lowest spatial resolution and contrast) led to more accurate answers than the eigenmaps scheme (medium resolution and contrast), an uncanny-valley-like pattern in visualization design accuracy. The absolute colormap, broadly used in brain science, is a good default spherical colormap. We conclude from our study that human visual processing of a chunk of colors differs from that of single colors.
6. Schultz T, Vilanova A. Diffusion MRI visualization. NMR in Biomedicine 2019; 32:e3902. [PMID: 29485226] [DOI: 10.1002/nbm.3902]
Abstract
Modern diffusion magnetic resonance imaging (dMRI) acquires intricate volume datasets, and biological meaning can only be found in the relationships between their different measurements. Suitable strategies for visualizing these complicated data have been key to interpretation by physicians and neuroscientists, to drawing conclusions on brain connectivity, and to quality control. This article provides an overview of visualization solutions that have been proposed to date, ranging from basic grayscale and color encodings to glyph representations and renderings of fiber tractography. A particular focus is on ongoing and possible future developments in dMRI visualization, including comparative, uncertainty, interactive, and dense visualizations.
Affiliation(s)
- Thomas Schultz
- Bonn-Aachen International Center for Information Technology, Bonn, Germany
- Department of Computer Science, University of Bonn, Bonn, Germany
- Anna Vilanova
- Department of Electrical Engineering Mathematics and Computer Science (EEMCS), TU Delft, Delft, the Netherlands
7. Christensen AJ, Srinivasan V, Hart JC, Marshall-Colon A. Use of computational modeling combined with advanced visualization to develop strategies for the design of crop ideotypes to address food security. Nutr Rev 2018; 76:332-347. [PMID: 29562368] [PMCID: PMC5892862] [DOI: 10.1093/nutrit/nux076]
Abstract
Sustainable crop production is a contributing factor to current and future food security. Innovative technologies are needed to design strategies that will achieve higher crop yields on less land and with fewer resources. Computational modeling coupled with advanced scientific visualization enables researchers to explore and interact with complex agriculture, nutrition, and climate data to predict how crops will respond to untested environments. These virtual observations and predictions can direct the development of crop ideotypes designed to meet future yield and nutritional demands. This review surveys modeling strategies for the development of crop ideotypes and scientific visualization technologies that have led to discoveries in "big data" analysis. Combined modeling and visualization approaches have been used to realistically simulate crops and to guide selection that immediately enhances crop quantity and quality under challenging environmental conditions. This survey of current and developing technologies indicates that integrative modeling and advanced scientific visualization may help overcome challenges in agriculture and nutrition data as large-scale and multidimensional data become available in these fields.
Affiliation(s)
- A J Christensen
- National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA
- Venkatraman Srinivasan
- Pacific Northwest National Laboratory, Richland, Washington, USA; formerly with the Institute for Genomic Biology, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA
- John C Hart
- Department of Computer Science, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA
- Amy Marshall-Colon
- Department of Plant Biology, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA
|
8
|
Bach B, Sicat R, Beyer J, Cordeil M, Pfister H. The Hologram in My Hand: How Effective is Interactive Exploration of 3D Visualizations in Immersive Tangible Augmented Reality? IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2018; 24:457-467. [PMID: 28866590 DOI: 10.1109/tvcg.2017.2745941] [Citation(s) in RCA: 39] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
We report on a controlled user study comparing three visualization environments for common 3D exploration. Our environments differ in how they exploit natural human perception and interaction capabilities. We compare an augmented-reality head-mounted display (Microsoft HoloLens), a handheld tablet, and a desktop setup. The head-mounted HoloLens display projects stereoscopic images of virtual content into a user's real world and allows for in-situ interaction at the spatial position of the 3D hologram. The tablet supports interaction with 3D content through touch, spatial positioning, and tangible markers; however, the 3D content is still presented on a 2D surface. Our hypothesis is that visualization environments that better match human perceptual and interaction capabilities to the task at hand improve understanding of 3D visualizations. To better understand the space of display and interaction modalities in visualization environments, we first propose a classification based on three dimensions: perception, interaction, and the spatial and cognitive proximity of the two. Each technique in our study is located at a different position along these three dimensions. We asked 15 participants to perform four tasks, each with different levels of difficulty for both spatial perception and degrees of freedom for interaction. Our results show that each of the tested environments is more effective for certain tasks, but that generally the desktop environment is still fastest and most precise in almost all cases.
9. Zhang G, Kochunov P, Hong E, Kelly S, Whelan C, Jahanshad N, Thompson P, Chen J. ENIGMA-Viewer: interactive visualization strategies for conveying effect sizes in meta-analysis. BMC Bioinformatics 2017; 18:253. [PMID: 28617224] [PMCID: PMC5471941] [DOI: 10.1186/s12859-017-1634-8]
Abstract
BACKGROUND Global-scale brain research collaborations such as the ENIGMA (Enhancing Neuro Imaging Genetics through Meta Analysis) consortium are beginning to collect data in large quantities and to conduct meta-analyses using uniform protocols. It is strategically important that the results can be communicated among brain scientists effectively. Traditional graphs and charts fail to convey the complex shapes of brain structures, which are essential to understanding the statistics resulting from the analyses. These problems can be addressed with interactive visualization strategies that link those statistics with brain structures, providing a better interface for understanding brain research results. RESULTS We present ENIGMA-Viewer, an interactive web-based visualization tool for brain scientists to compare statistics such as effect sizes from meta-analysis results on standardized ROIs (regions of interest) across multiple studies. The tool incorporates visualization design principles such as focus+context and visual data fusion to enable users to better understand the statistics on brain structures. To demonstrate the usability of the tool, three examples using recent research data are discussed via case studies. CONCLUSIONS ENIGMA-Viewer supports the presentation and communication of brain research results through effective visualization designs. By linking visualizations of both statistics and structures, users can gain insights into the presented data that are otherwise difficult to obtain. ENIGMA-Viewer is an open-source tool; the source code and sample data are publicly accessible through the NITRC website ( http://www.nitrc.org/projects/enigmaviewer_20 ). The tool can also be directly accessed online ( http://enigma-viewer.org ).
Affiliation(s)
- Guohao Zhang
- Department of Computer Science and Electrical Engineering, University of Maryland, Baltimore County, 1000 Hilltop Circle, Baltimore, 21250 MD USA
- Peter Kochunov
- Maryland Psychiatric Research Center, University of Maryland, Baltimore, 55 Wade Ave, Baltimore, 21228 MD USA
- Elliot Hong
- Maryland Psychiatric Research Center, University of Maryland, Baltimore, 55 Wade Ave, Baltimore, 21228 MD USA
- Sinead Kelly
- Department of Psychiatry, Beth Israel Deaconess Medical Center, Harvard Medical School, 330 Brookline Ave, Boston, 02215 MA USA
- Psychiatry Neuroimaging Laboratory, Brigham and Women’s Hospital, Harvard Medical School, 75 Francis St, Boston, 02115 MA USA
- Christopher Whelan
- Imaging Genetics Center, Mark and Mary Stevens Neuroimaging and Informatics Institute, University of Southern California, 1975 Zonal Ave, Los Angeles, 90033 LA USA
- Department of Molecular and Cellular Therapeutics, Royal College of Surgeons in Ireland, Dublin, 123 St Stephen’s Green, Dublin 2, Ireland
- Neda Jahanshad
- Keck School of Medicine, University of Southern California, 1975 Zonal Ave, Los Angeles, 90033 LA USA
- Paul Thompson
- Keck School of Medicine, University of Southern California, 1975 Zonal Ave, Los Angeles, 90033 LA USA
- Jian Chen
- Department of Computer Science and Electrical Engineering, University of Maryland, Baltimore County, 1000 Hilltop Circle, Baltimore, 21250 MD USA
10. Bryant GW, Griffin W, Terrill JE. Validation of SplitVectors Encoding for Quantitative Visualization of Large-Magnitude-Range Vector Fields. IEEE Transactions on Visualization and Computer Graphics 2017; 23:1691-1705. [PMID: 28113469] [PMCID: PMC5592787] [DOI: 10.1109/tvcg.2016.2539949]
Abstract
We designed and evaluated SplitVectors, a new vector field display approach to help scientists perform new discrimination tasks on large-magnitude-range scientific data shown in three-dimensional (3D) visualization environments. SplitVectors uses scientific notation to display vector magnitude, thus improving legibility. We present an empirical study comparing the SplitVectors approach with three other approaches commonly used in scientific visualizations: direct linear representation, logarithmic mapping, and text display. Twenty participants performed three domain analysis tasks: reading numerical values (a discrimination task), finding the ratio between values (a discrimination task), and finding the larger of two vectors (a pattern detection task). Participants used both mono and stereo conditions. Our results suggest the following: (1) SplitVectors improves accuracy by about 10 times compared to linear mapping and by about four times compared to logarithmic mapping in discrimination tasks; (2) SplitVectors shows no significant differences from the textual display approach, but reduces clutter in the scene; (3) SplitVectors and textual display are less sensitive to data scale than the linear and logarithmic approaches; (4) logarithmic mapping can be problematic: participants' confidence was as high as when reading directly from the textual display, but their accuracy was poor; and (5) stereoscopy improved performance, especially in the more challenging discrimination tasks.
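The scientific-notation encoding behind SplitVectors can be sketched in a few lines: split each magnitude into a mantissa and an integer exponent, then let a glyph encode the two parts separately. This is a minimal illustration of the general idea under stated assumptions, not the authors' implementation; the function name and the glyph mapping mentioned in the comments are hypothetical.

```python
import math

def split_magnitude(magnitude: float) -> tuple[float, int]:
    """Split a positive magnitude into (mantissa, exponent) so that
    magnitude == mantissa * 10**exponent with 1 <= mantissa < 10,
    mirroring the scientific-notation idea behind SplitVectors."""
    if magnitude <= 0:
        raise ValueError("magnitude must be positive")
    exponent = math.floor(math.log10(magnitude))
    mantissa = magnitude / 10 ** exponent
    return mantissa, exponent

# A large-magnitude-range field becomes two small, legible numbers per
# vector; a glyph could map them to, say, inner and outer arrow lengths.
for value in (0.0042, 3.5, 98000.0):
    mantissa, exponent = split_magnitude(value)
    print(f"{value:g} -> {mantissa:.3g} x 10^{exponent}")
```

Because the mantissa always lies in [1, 10), glyph sizes stay comparable across many orders of magnitude, which is what makes the discrimination tasks in the study tractable.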
11. Stevens AH, Butkiewicz T, Ware C. Hairy Slices: Evaluating the Perceptual Effectiveness of Cutting Plane Glyphs for 3D Vector Fields. IEEE Transactions on Visualization and Computer Graphics 2017; 23:990-999. [PMID: 27875212] [DOI: 10.1109/tvcg.2016.2598448]
Abstract
Three-dimensional vector fields are common datasets throughout the sciences. Visualizing these fields is inherently difficult due to issues such as visual clutter and self-occlusion. Cutting planes are often used to overcome these issues by presenting more manageable slices of data. The existing literature provides many techniques for visualizing the flow through these cutting planes; however, there is a lack of empirical studies focused on the underlying perceptual cues that make popular techniques successful. This paper presents a quantitative human factors study that evaluates static monoscopic depth and orientation cues in the context of cutting plane glyph designs for exploring and analyzing 3D flow fields. The goal of the study was to ascertain the relative effectiveness of various techniques for portraying the direction of flow through a cutting plane at a given point, and to identify the visual cues and combinations of cues involved, and how they contribute to accurate performance. It was found that increasing the dimensionality of line-based glyphs into tubular structures enhances their ability to convey orientation through shading, and that increasing their diameter intensifies this effect. These tube-based glyphs were also less sensitive to visual clutter issues at higher densities. Adding shadows to lines was also found to increase perception of flow direction. Implications of the experimental results are discussed and extrapolated into a number of guidelines for designing more perceptually effective glyphs for 3D vector field visualizations.
12. Laha B, Bowman DA, Socha JJ. Effects of VR system fidelity on analyzing isosurface visualization of volume datasets. IEEE Transactions on Visualization and Computer Graphics 2014; 20:513-522. [PMID: 24650978] [DOI: 10.1109/tvcg.2014.20]
Abstract
Volume visualization is an important technique for analyzing datasets from a variety of different scientific domains. Volume data analysis is inherently difficult because volumes are three-dimensional, dense, and unfamiliar, requiring scientists to precisely control the viewpoint and to make precise spatial judgments. Researchers have proposed that more immersive (higher fidelity) VR systems might improve task performance with volume datasets, and significant results tied to different components of display fidelity have been reported. However, more information is needed to generalize these results to different task types, domains, and rendering styles. We visualized isosurfaces extracted from synchrotron microscopic computed tomography (SR-μCT) scans of beetles, in a CAVE-like display. We ran a controlled experiment evaluating the effects of three components of system fidelity (field of regard, stereoscopy, and head tracking) on a variety of abstract task categories that are applicable to various scientific domains, and also compared our results with those from our prior experiment using 3D texture-based rendering. We report many significant findings. For example, for search and spatial judgment tasks with isosurface visualization, a stereoscopic display provides better performance, but for tasks with 3D texture-based rendering, displays with higher field of regard were more effective, independent of the levels of the other display components. We also found that systems with high field of regard and head tracking improve performance in spatial judgment tasks. Our results extend existing knowledge and produce new guidelines for designing VR systems to improve the effectiveness of volume data analysis.