1. Athawale TM, Wang Z, Pugmire D, Moreland K, Gong Q, Klasky S, Johnson CR, Rosen P. Uncertainty Visualization of Critical Points of 2D Scalar Fields for Parametric and Nonparametric Probabilistic Models. IEEE Transactions on Visualization and Computer Graphics 2025; 31:108-118. PMID: 39255107. DOI: 10.1109/tvcg.2024.3456393.
Abstract
This paper presents a novel end-to-end framework for closed-form computation and visualization of critical point uncertainty in 2D uncertain scalar fields. Critical points are fundamental topological descriptors used in the visualization and analysis of scalar fields. The uncertainty inherent in data (e.g., observational and experimental data, approximations in simulations, and compression), however, creates uncertainty regarding critical point positions. Uncertainty in critical point positions, therefore, cannot be ignored, given their impact on downstream data analysis tasks. In this work, we study uncertainty in critical points as a function of uncertainty in data modeled with probability distributions. Although Monte Carlo (MC) sampling techniques have been used in prior studies to quantify critical point uncertainty, they are often expensive and are infrequently used in production-quality visualization software. We, therefore, propose a new end-to-end framework to address these challenges that comprises a threefold contribution. First, we derive the critical point uncertainty in closed form, which is more accurate and efficient than conventional MC sampling methods. Specifically, we provide closed-form and semianalytical (a mix of closed-form and MC) solutions for parametric (e.g., uniform, Epanechnikov) and nonparametric (e.g., histogram) models with finite support. Second, we accelerate critical point probability computations using a platform-portable parallel implementation with the VTK-m library. Finally, we integrate our implementation with the ParaView software system and demonstrate near-real-time results for real datasets.
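The conventional MC baseline that such closed-form solutions replace is easy to sketch. The following minimal Python example estimates the probability that the center vertex of a patch is a local minimum; the 3x3 data patch, the independent uniform noise model, and the 8-neighborhood extremum test are all illustrative assumptions, not the paper's implementation, whose contribution is to replace exactly this sampling loop with closed-form or semianalytical expressions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_minimum_probability(mean, half_width, n_samples=2000):
    """Monte Carlo estimate of the probability that the center vertex of a
    3x3 patch is a local minimum, under independent uniform noise
    (mean +/- half_width) at each vertex."""
    lo, hi = mean - half_width, mean + half_width
    count = 0
    for _ in range(n_samples):
        patch = rng.uniform(lo, hi)            # one realization of the field
        center = patch[1, 1]
        neighbors = np.delete(patch.ravel(), 4)  # the 8 surrounding vertices
        count += center < neighbors.min()
    return count / n_samples

# Hypothetical 3x3 patch of mean values with +/-0.2 uniform uncertainty.
mean = np.array([[0.9, 0.8, 0.9],
                 [0.7, 0.5, 0.8],
                 [0.9, 0.6, 0.9]])
print(mc_minimum_probability(mean, 0.2))
```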
2. Pont M, Vidal J, Tierny J. Principal Geodesic Analysis of Merge Trees (and Persistence Diagrams). IEEE Transactions on Visualization and Computer Graphics 2023; 29:1573-1589. PMID: 36251893. DOI: 10.1109/tvcg.2022.3215001.
Abstract
This article presents a computational framework for the Principal Geodesic Analysis of merge trees (MT-PGA), a novel adaptation of the celebrated Principal Component Analysis (PCA) framework (K. Pearson, 1901) to the Wasserstein metric space of merge trees (Pont et al., 2022). We formulate MT-PGA computation as a constrained optimization problem that adjusts a basis of orthogonal geodesic axes while minimizing a fitting energy. We introduce an efficient, iterative algorithm which exploits shared-memory parallelism, as well as an analytic expression of the fitting energy gradient, to ensure fast iterations. Our approach also trivially extends to extremum persistence diagrams. Extensive experiments on public ensembles demonstrate the efficiency of our approach, with MT-PGA computations on the order of minutes for the largest examples. We show the utility of our contributions by extending two typical PCA applications to merge trees. First, we apply MT-PGA to data reduction and reliably compress merge trees by concisely representing them by their first coordinates in the MT-PGA basis. Second, we present a dimensionality reduction framework that exploits the first two directions of the MT-PGA basis to generate two-dimensional layouts of the ensemble. We augment these layouts with persistence correlation views, enabling global and local visual inspection of the feature variability in the ensemble. In both applications, quantitative experiments assess the relevance of our framework. Finally, we provide a C++ implementation that can be used to reproduce our results.
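As a point of reference, the Euclidean special case that MT-PGA generalizes is classical PCA. A minimal numpy sketch (random data standing in for flattened ensemble members; not the paper's merge-tree machinery) shows the data-reduction use that the paper transfers to merge trees: represent each member by its first two coordinates in the fitted basis.

```python
import numpy as np

# Classical PCA (Pearson 1901), the Euclidean special case that MT-PGA
# generalizes to the Wasserstein metric space of merge trees. Each row is
# one (hypothetical) ensemble member, flattened to a feature vector.
X = np.random.default_rng(1).normal(size=(20, 50))
Xc = X - X.mean(axis=0)                 # center the ensemble

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
coords = Xc @ Vt[:2].T                  # first two principal coordinates

# Data reduction: represent each member by 2 numbers instead of 50, then
# reconstruct, the analogue of compressing merge trees by their first
# MT-PGA coordinates.
X_hat = coords @ Vt[:2] + X.mean(axis=0)
print("reconstruction error:", np.linalg.norm(X - X_hat))
```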
3. Mohammadzadeh Gonabadi A, Cesar GM, Buster TW, Burnfield JM. Effect of gap-filling technique and gap location on linear and nonlinear calculations of motion during locomotor activities. Gait & Posture 2022; 94:85-92. PMID: 35255383. DOI: 10.1016/j.gaitpost.2022.02.025.
Abstract
BACKGROUND Marker occlusion during camera-based movement analysis is common. Different interpolation techniques are available for estimating the location of missing marker trajectories. RESEARCH QUESTION What is the effect of gap location and interpolation technique on linear and nonlinear measures for a given kinematic time series? METHODS Kinematic data were recorded during motor-assisted elliptical training and treadmill walking. Gaps were experimentally introduced into each cycle of initially complete time series at four locations (Gap 1: local minimum and maximum peaks; Gap 2: maximum peaks; Gap 3: maximum peaks at negative slope; Gap 4: random locations), and five gap-filling techniques (Cubic, Makima, Autoregressive, Nearest Neighbor, and No Interpolation) were evaluated using linear (maxima and minima joint angles) and nonlinear [maximum Lyapunov exponent (LyE)] measures. RESULTS Gap-filling technique and gap location influenced the values calculated for linear and nonlinear measures of joint motion. When referenced to the gold standard (the original data series without gaps), across all joints studied, the average % errors of maxima and minima joint angles and of LyE were lower when applying the Cubic, Makima, Autoregressive, and Nearest Neighbor techniques compared to No Interpolation (p < 0.0001). The % error of maxima joint angles was lower for Gaps 1, 3, and 4 compared to Gap 2 (p = 0.0003), while the % error of minima joint angles was lower for Gaps 2 and 3 compared to Gaps 1 and 4 (p < 0.0001). An interaction between gap-filling technique and gap location was identified for LyE % error, in which Gap 4 % error was significantly greater under No Interpolation compared to the other gap-filling techniques (p < 0.0001). SIGNIFICANCE These findings can guide the selection of appropriate techniques for managing missing kinematic data points in camera-based motion analysis time series. Gap-filling techniques significantly reduced error in calculating select linear and nonlinear measures of variability, with Cubic most consistently yielding the greatest reduction in error.
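A minimal sketch of the experimental idea, with a hypothetical joint-angle signal rather than the study's recordings: introduce a gap around a maximum peak (cf. Gap 2), fill it with two of the studied techniques, and report % error against the gold-standard complete series.

```python
import numpy as np
from scipy.interpolate import CubicSpline, interp1d

rng = np.random.default_rng(0)
t = np.linspace(0, 2, 201)                       # two movement cycles (s)
angle = 30 * np.sin(2 * np.pi * t) + rng.normal(0, 0.3, t.size)

# Simulate a marker occlusion around a maximum peak (cf. Gap 2).
gap = (t > 0.2) & (t < 0.3)
t_obs, y_obs = t[~gap], angle[~gap]

cubic = CubicSpline(t_obs, y_obs)(t[gap])
nearest = interp1d(t_obs, y_obs, kind="nearest")(t[gap])

for name, est in [("cubic", cubic), ("nearest", nearest)]:
    err = 100 * np.abs(est - angle[gap]).max() / np.ptp(angle)
    print(f"{name}: max % error in gap = {err:.2f}")
```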
Affiliation(s)
- Arash Mohammadzadeh Gonabadi, Movement and Neurosciences Center, Institute for Rehabilitation Science and Engineering, Madonna Rehabilitation Hospitals, 5401 South Street, Lincoln, NE 68506, United States.
- Guilherme M Cesar, Movement and Neurosciences Center, Institute for Rehabilitation Science and Engineering, Madonna Rehabilitation Hospitals, 5401 South Street, Lincoln, NE 68506, United States.
- Thad W Buster, Movement and Neurosciences Center, Institute for Rehabilitation Science and Engineering, Madonna Rehabilitation Hospitals, 5401 South Street, Lincoln, NE 68506, United States.
- Judith M Burnfield, Movement and Neurosciences Center, Institute for Rehabilitation Science and Engineering, Madonna Rehabilitation Hospitals, 5401 South Street, Lincoln, NE 68506, United States.
4. Athawale TM, Maljovec D, Yan L, Johnson CR, Pascucci V, Wang B. Uncertainty Visualization of 2D Morse Complex Ensembles Using Statistical Summary Maps. IEEE Transactions on Visualization and Computer Graphics 2022; 28:1955-1966. PMID: 32897861. PMCID: PMC8935531. DOI: 10.1109/tvcg.2020.3022359.
Abstract
Morse complexes are gradient-based topological descriptors with close connections to Morse theory. They are widely applicable in scientific visualization as they serve as important abstractions for gaining insights into the topology of scalar fields. Data uncertainty inherent to scalar fields due to randomness in their acquisition and processing, however, limits our understanding of Morse complexes as structural abstractions. We, therefore, explore uncertainty visualization of an ensemble of 2D Morse complexes that arises from scalar fields coupled with data uncertainty. We propose several statistical summary maps as new entities for quantifying structural variations and visualizing positional uncertainties of Morse complexes in ensembles. Specifically, we introduce three types of statistical summary maps - the probabilistic map, the significance map, and the survival map - to characterize the uncertain behaviors of gradient flows. We demonstrate the utility of our proposed approach using wind, flow, and ocean eddy simulation datasets.
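A crude, heavily simplified stand-in for the idea behind a probabilistic summary map (not the paper's construction): label each pixel of every ensemble member by the minimum reached by discrete steepest descent, then measure per-pixel agreement with the modal destination across the ensemble. Field, noise level, and grid size below are hypothetical.

```python
import numpy as np
from scipy import stats

def descent_label(field):
    """Label each pixel of a 2D field by the grid index of the local minimum
    reached by discrete steepest descent, a crude stand-in for assigning
    pixels to Morse-complex descending manifolds."""
    h, w = field.shape
    labels = np.empty((h, w), dtype=np.int64)
    for i in range(h):
        for j in range(w):
            ci, cj = i, j
            while True:
                # examine the 4-neighborhood for a strictly lower value
                best, bi, bj = field[ci, cj], ci, cj
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = ci + di, cj + dj
                    if 0 <= ni < h and 0 <= nj < w and field[ni, nj] < best:
                        best, bi, bj = field[ni, nj], ni, nj
                if (bi, bj) == (ci, cj):
                    break                     # reached a local minimum
                ci, cj = bi, bj
            labels[i, j] = ci * w + cj
    return labels

rng = np.random.default_rng(2)
base = np.fromfunction(lambda i, j: np.sin(i / 4) * np.cos(j / 4), (32, 32))
ensemble = [base + rng.normal(0, 0.05, base.shape) for _ in range(30)]
labels = np.stack([descent_label(f) for f in ensemble])

# Probabilistic-map flavor: per pixel, the fraction of members that agree
# with the modal (most frequent) descent destination.
mode = stats.mode(labels, axis=0, keepdims=False)
agreement = mode.count / labels.shape[0]
print("mean structural agreement:", agreement.mean())
```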
5. Pont M, Vidal J, Delon J, Tierny J. Wasserstein Distances, Geodesics and Barycenters of Merge Trees. IEEE Transactions on Visualization and Computer Graphics 2022; 28:291-301. PMID: 34596544. DOI: 10.1109/tvcg.2021.3114839.
Abstract
This paper presents a unified computational framework for the estimation of distances, geodesics and barycenters of merge trees. We extend recent work on the edit distance [104] and introduce a new metric, called the Wasserstein distance between merge trees, which is purposely designed to enable efficient computations of geodesics and barycenters. Specifically, our new distance is strictly equivalent to the L2-Wasserstein distance between extremum persistence diagrams, but it is restricted to a smaller solution space, namely, the space of rooted partial isomorphisms between branch decomposition trees. This enables a simple extension of existing optimization frameworks [110] for geodesics and barycenters from persistence diagrams to merge trees. We introduce a task-based algorithm which can be generically applied to distance, geodesic, barycenter or cluster computation. The task-based nature of our approach enables further accelerations with shared-memory parallelism. Extensive experiments on public ensembles and SciVis contest benchmarks demonstrate the efficiency of our approach, with barycenter computations on the order of minutes for the largest examples, as well as its qualitative ability to generate representative barycenter merge trees that visually summarize the features of interest found in the ensemble. We show the utility of our contributions with dedicated visualization applications: feature tracking, temporal reduction and ensemble clustering. We provide a lightweight C++ implementation that can be used to reproduce our results.
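For orientation, the diagram-side metric that the merge-tree distance is shown equivalent to can be sketched with the standard assignment formulation, in which unmatched points may map to their diagonal projections. This toy version (scipy assignment solver, no task-based parallelism) is not the paper's algorithm:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def wasserstein2_pd(D1, D2):
    """2-Wasserstein distance between two persistence diagrams, each an
    (n, 2) array of (birth, death) pairs, using the standard diagonal
    augmentation so diagrams of different sizes can be matched."""
    n1, n2 = len(D1), len(D2)
    proj1 = D1.mean(axis=1, keepdims=True).repeat(2, axis=1)  # diag. proj.
    proj2 = D2.mean(axis=1, keepdims=True).repeat(2, axis=1)
    A = np.vstack([D1, proj2])          # D1 points + diagonal slots for D2
    B = np.vstack([D2, proj1])
    C = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    C[n1:, n2:] = 0.0                   # diagonal-to-diagonal costs nothing
    r, c = linear_sum_assignment(C)
    return np.sqrt(C[r, c].sum())

D1 = np.array([[0.0, 1.0], [0.2, 0.5]])
D2 = np.array([[0.1, 0.9]])
print(wasserstein2_pd(D1, D2))
```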
6. Deep Learning Based Air-Writing Recognition with the Choice of Proper Interpolation Technique. Sensors 2021; 21(24):8407. PMID: 34960499. PMCID: PMC8705512. DOI: 10.3390/s21248407.
Abstract
The act of writing letters or words in free space with body movements is known as air-writing. Air-writing recognition is a special case of gesture recognition in which gestures correspond to characters and digits written in the air. Air-writing, unlike general gestures, does not require the memorization of predefined special gesture patterns; rather, it is sensitive to the subject and language of interest. Traditional air-writing requires an extra device containing sensor(s); the wide adoption of smart-bands eliminates this requirement, so air-writing recognition systems are becoming more flexible. However, variability of signal duration is a key problem in developing an air-writing recognition model. Inconsistent signal duration is inherent to the writing and data-recording process. To make the signals consistent in length, researchers have attempted various strategies, including padding and truncating, but these procedures result in significant data loss. Interpolation is a statistical technique that can be employed for time-series signals to ensure minimal data loss. In this paper, we extensively investigate different interpolation techniques on seven publicly available air-writing datasets and develop a method to recognize air-written characters using a 2D-CNN model. Under both user-dependent and user-independent evaluation protocols, our method outperformed all state-of-the-art methods by a clear margin on all datasets.
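The interpolation step amounts to resampling each variable-length signal onto a fixed-length grid instead of padding or truncating. A minimal sketch with hypothetical lengths and cubic-spline interpolation (one of several techniques such a study can compare):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def resample(signal, target_len=256):
    """Resample a variable-length 1D sensor signal to a fixed length with
    cubic-spline interpolation, instead of padding or truncating."""
    src = np.linspace(0.0, 1.0, len(signal))
    dst = np.linspace(0.0, 1.0, target_len)
    return CubicSpline(src, signal)(dst)

# Two air-writing recordings of different durations become equal-length
# inputs for a 2D-CNN (lengths here are hypothetical).
short = np.random.default_rng(0).normal(size=87)
long_ = np.random.default_rng(1).normal(size=412)
print(resample(short).shape, resample(long_).shape)   # (256,) (256,)
```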
7. Athawale TM, Ma B, Sakhaee E, Johnson CR, Entezari A. Direct Volume Rendering with Nonparametric Models of Uncertainty. IEEE Transactions on Visualization and Computer Graphics 2021; 27:1797-1807. PMID: 33052857. DOI: 10.1109/tvcg.2020.3030394.
Abstract
We present a nonparametric statistical framework for the quantification, analysis, and propagation of data uncertainty in direct volume rendering (DVR). The state-of-the-art statistical DVR framework allows for preserving the transfer function (TF) of the ground truth function when visualizing uncertain data; however, it is restricted to parametric models of uncertainty. In this paper, we address this limitation by extending the framework to nonparametric distributions. We exploit the quantile interpolation technique to derive, in closed form, probability distributions representing uncertainty in viewing-ray sample intensities, which allows for accurate and efficient computation. We evaluate our proposed nonparametric statistical models through qualitative and quantitative comparisons with the mean-field and parametric statistical models, such as uniform and Gaussian, as well as Gaussian mixtures. In addition, we present an extension of the state-of-the-art parametric rendering framework to 2D TFs for improved DVR classification. We show the applicability of our uncertainty quantification framework to ensemble, downsampled, and bivariate versions of scalar field datasets.
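The quantile interpolation idea can be sketched in a few lines: blend two nonparametric distributions by linearly interpolating their quantile functions. This is a generic illustration of the technique, not the paper's closed-form derivation for viewing-ray samples; the sample sets below are hypothetical.

```python
import numpy as np

def quantile_interpolate(samples_a, samples_b, alpha,
                         qs=np.linspace(0, 1, 201)):
    """Interpolate between two nonparametric distributions (given as sample
    sets) by blending their quantile functions."""
    qa = np.quantile(samples_a, qs)
    qb = np.quantile(samples_b, qs)
    return (1 - alpha) * qa + alpha * qb   # quantiles of the blended RV

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 5000)             # distribution at sample point A
b = rng.exponential(1.0, 5000)             # distribution at sample point B
mid = quantile_interpolate(a, b, 0.5)
print("median of midpoint distribution:", mid[100])
```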
8. Kim D, Kye H, Lee J, Shin YG. Confidence-Controlled Local Isosurfacing. IEEE Transactions on Visualization and Computer Graphics 2021; 27:29-42. PMID: 32790630. DOI: 10.1109/tvcg.2020.3016327.
Abstract
This article presents a novel framework that can generate a high-fidelity isosurface model of X-ray computed tomography (CT) data. CT surfaces with subvoxel precision and smoothness can be modeled simply via isosurfacing, where a single CT value represents an isosurface. However, this inevitably distorts the geometry when the CT data contain artifacts. An alternative is to treat this challenge as a segmentation problem. In general, however, segmentation techniques are not robust against noisy data and require heavy computation to handle the artifacts that occur in three-dimensional CT data. Furthermore, the surfaces generated from segmentation results may contain jagged, overly smooth, or distorted geometry. We present a novel local isosurfacing framework that addresses these issues simultaneously. The proposed framework exploits two primary techniques: 1) a Canny edge approach for obtaining surface candidate boundary points and evaluating their confidence, and 2) a screened Poisson optimization for fitting a surface to the boundary points, into which the confidence term is incorporated. This combination facilitates local isosurfacing that can produce high-fidelity surface models. We also implement an intuitive user interface to alleviate the burden of selecting appropriate confidence-computing parameters. Our experimental results demonstrate the effectiveness of the proposed framework.
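As a loose one-dimensional analog of the confidence-weighted fitting step (the paper fits 3D surfaces via screened Poisson optimization), one can down-weight low-confidence boundary points in a smoothing-spline fit; the signal, artifact region, and weights below are hypothetical.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 100)
truth = np.sin(2 * np.pi * x)
noise = rng.normal(0, 0.05, x.size)
noise[40:50] += 0.5                      # an artifact corrupts some points
y = truth + noise

# Confidence per boundary point: here we simply down-weight the artifact
# region (in the paper, confidence comes from a Canny-edge criterion).
w = np.ones_like(x)
w[40:50] = 0.05

fit = UnivariateSpline(x, y, w=w, s=0.5)  # weighted smoothing fit
print("RMS error vs truth:", np.sqrt(np.mean((fit(x) - truth) ** 2)))
```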
9. He W, Guo H, Shen HW, Peterka T. eFESTA: Ensemble Feature Exploration with Surface Density Estimates. IEEE Transactions on Visualization and Computer Graphics 2020; 26:1716-1731. PMID: 30418881. DOI: 10.1109/tvcg.2018.2879866.
Abstract
We propose the surface density estimate (SDE) to model the spatial distribution of surface features (isosurfaces, ridge surfaces, and streamsurfaces) in 3D ensemble simulation data. The inputs of SDE computation are surface features represented as polygon meshes; no field datasets (e.g., scalar fields or vector fields) are required. The SDE is defined as the kernel density estimate of the infinite set of points on the input surfaces and is approximated by accumulating the surface densities of triangular patches. We also propose an algorithm to guide the selection of a proper kernel bandwidth for SDE computation. An ensemble Feature Exploration method based on Surface densiTy EstimAtes (eFESTA) is then proposed to extract and visualize the major trends of ensemble surface features. For an ensemble of surface features, each surface is first transformed into a density field based on its contribution to the SDE, and the resulting density fields are organized into a hierarchical representation based on the pairwise distances between them. The hierarchical representation is then used to guide visual exploration of the density fields as well as the underlying surface features. We demonstrate the application of our method using isosurfaces in ensemble scalar fields, Lagrangian coherent structures in uncertain unsteady flows, and streamsurfaces in ensemble fluid flows.
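A toy version of the SDE idea: sample points uniformly on each input triangle and accumulate Gaussian kernels at evaluation points. The meshes, grid, and bandwidth below are hypothetical, and the paper's per-patch accumulation and bandwidth-selection algorithm are more careful than this sketch.

```python
import numpy as np

def surface_density(tri_meshes, grid, bandwidth=0.05, pts_per_tri=20):
    """Approximate surface density at the given grid points: sample points
    uniformly on each triangle, then accumulate Gaussian kernels."""
    rng = np.random.default_rng(0)
    density = np.zeros(len(grid))
    for tris in tri_meshes:                     # one mesh per ensemble member
        for a, b, c in tris:                    # triangle vertices, shape (3,)
            u, v = rng.random((2, pts_per_tri))
            m = u + v > 1
            u[m], v[m] = 1 - u[m], 1 - v[m]     # fold samples into the triangle
            pts = a + u[:, None] * (b - a) + v[:, None] * (c - a)
            d2 = ((grid[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
            density += np.exp(-d2 / (2 * bandwidth**2)).sum(axis=1)
    return density / density.sum()

# Two hypothetical one-triangle "surfaces" and a small evaluation grid.
tri = [np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], float)]
meshes = [tri, [t + 0.1 for t in tri]]
grid = np.random.default_rng(1).random((200, 3))
print(surface_density(meshes, grid).max())
```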
10. Event-based exploration and comparison on time-varying ensembles. J Vis (Tokyo) 2019. DOI: 10.1007/s12650-019-00608-y.
11. Wang J, Hazarika S, Li C, Shen HW. Visualization and Visual Analysis of Ensemble Data: A Survey. IEEE Transactions on Visualization and Computer Graphics 2019; 25:2853-2872. PMID: 29994615. DOI: 10.1109/tvcg.2018.2853721.
Abstract
Over the last decade, ensemble visualization has witnessed significant development due to the wide availability of ensemble data and the increasing visualization needs of a variety of disciplines. From the data analysis point of view, many ensemble visualization works focus on the same facets of ensemble data and use similar data aggregation or uncertainty modeling methods. However, the lack of reflection on these essential commonalities, and of a systematic overview of the literature, prevents visualization researchers from effectively identifying new or unsolved problems and planning further developments. In this paper, we take a holistic perspective and provide a survey of ensemble visualization. Specifically, we study ensemble visualization works from the recent decade and categorize them from two perspectives: (1) the visualization techniques they propose; and (2) the analytic tasks they involve. For the first perspective, we focus on elaborating how conventional visualization techniques (e.g., surface and volume visualization techniques) have been adapted to ensemble data; for the second perspective, we emphasize how analytic tasks (e.g., comparison, clustering) are performed differently for ensemble data. From this study of the ensemble visualization literature, we have also identified several research trends, as well as some future research opportunities.
12. Vidal J, Budin J, Tierny J. Progressive Wasserstein Barycenters of Persistence Diagrams. IEEE Transactions on Visualization and Computer Graphics 2019. PMID: 31403427. DOI: 10.1109/tvcg.2019.2934256.
Abstract
This paper presents an efficient algorithm for the progressive approximation of Wasserstein barycenters of persistence diagrams, with applications to the visual analysis of ensemble data. Given a set of scalar fields, our approach enables the computation of a persistence diagram which is representative of the set, and which visually conveys the number, data ranges and saliences of the main features of interest found in the set. Such representative diagrams are obtained by explicitly computing the discrete Wasserstein barycenter of the set of persistence diagrams, a notoriously computationally intensive task. In particular, we revisit efficient algorithms for Wasserstein distance approximation [12,51] to extend previous work on barycenter estimation [94]. We present a new fast algorithm which progressively approximates the barycenter by iteratively increasing the computation accuracy as well as the number of persistent features in the output diagram. This progressivity drastically improves convergence in practice and allows the design of an interruptible algorithm, capable of respecting computation time constraints. This enables the approximation of Wasserstein barycenters within interactive times. We present an application to ensemble clustering, where we revisit the k-means algorithm to exploit our barycenters and compute, within execution time constraints, meaningful clusters of ensemble data along with their barycenter diagrams. Extensive experiments on synthetic and real-life datasets report that our algorithm converges to barycenters that are qualitatively meaningful with regard to the applications and quantitatively comparable to previous techniques, while offering an order-of-magnitude speedup when run until convergence (without time constraint). Our algorithm can be trivially parallelized to provide additional speedups in practice on standard workstations. We provide a lightweight C++ implementation of our approach that can be used to reproduce our results.
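The classical alternating scheme that the paper makes progressive and interruptible can be sketched on equal-size diagrams (no diagonal augmentation, no accuracy schedule; a toy, not the paper's algorithm): alternately match each diagram to the current barycenter estimate and average the matched points.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def barycenter(diagrams, n_iter=20):
    """Toy barycenter of persistence diagrams of equal size: alternate an
    optimal matching step with arithmetic averaging of matched points."""
    bary = diagrams[0].copy()
    for _ in range(n_iter):
        matched = []
        for D in diagrams:
            C = ((bary[:, None, :] - D[None, :, :]) ** 2).sum(-1)
            r, c = linear_sum_assignment(C)      # match bary points to D
            matched.append(D[c[np.argsort(r)]])
        bary = np.mean(matched, axis=0)          # average matched points
    return bary

rng = np.random.default_rng(0)
diags = [np.sort(rng.random((5, 2)), axis=1) + i * 0.01 for i in range(4)]
print(barycenter(diags))
```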
13. Ma B, Entezari A. Volumetric Feature-Based Classification and Visibility Analysis for Transfer Function Design. IEEE Transactions on Visualization and Computer Graphics 2018; 24:3253-3267. PMID: 29989987. DOI: 10.1109/tvcg.2017.2776935.
Abstract
Transfer function (TF) design is a central topic in direct volume rendering. The TF fundamentally translates data values into optical properties to reveal relevant features present in the volumetric data. We propose a semi-automatic TF design scheme that consists of two steps. First, we present a clustering process within the 1D/2D TF domain based on the proximities of the respective volumetric features in the spatial domain. The presented approach provides an interactive tool that aids users in exploring clusters and identifying features of interest (FOI). Second, our method automatically generates a TF by iteratively refining the optical properties of the selected features using a novel feature visibility measure. The proposed visibility measure leverages the similarities of features to enhance their visibility in DVR images. Compared to the conventional visibility measure, the proposed feature visibility is able to efficiently sense opacity changes and precisely evaluate the impact of selected features on the resulting visualizations. Our experiments validate the effectiveness of the proposed approach by demonstrating the advantages of integrating feature similarity into the visibility computations. We examine a number of datasets to establish the utility of our approach for semi-automatic TF design.
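The conventional visibility measure that the paper builds on can be sketched for a single viewing ray under front-to-back compositing: each sample contributes its opacity times the transmittance accumulated in front of it. The opacities and feature mask below are hypothetical.

```python
import numpy as np

def feature_visibility(alphas, feature_mask):
    """Visibility of a feature along one viewing ray: each sample
    contributes alpha_i * transmittance_i, and the feature's visibility is
    the sum of the contributions of its samples."""
    # transmittance in front of sample i: product of (1 - alpha) before it
    T = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    contrib = alphas * T
    return contrib[feature_mask].sum()

alphas = np.array([0.1, 0.4, 0.2, 0.6])      # opacities along a ray
mask = np.array([False, True, True, False])  # samples belonging to the feature
print(feature_visibility(alphas, mask))
```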
14. Athawale TM, Johnson KA, Butson CR, Johnson CR. A statistical framework for quantification and visualisation of positional uncertainty in deep brain stimulation electrodes. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization 2018; 7:438-449. PMID: 31186994. DOI: 10.1080/21681163.2018.1523750.
Abstract
Deep brain stimulation (DBS) is an established therapy for treating patients with movement disorders such as Parkinson's disease. Patient-specific computational modelling and visualisation have been shown to play a key role in surgical and therapeutic decisions for DBS. The computational models use brain imaging, such as magnetic resonance (MR) and computed tomography (CT), to determine the DBS electrode positions within the patient's head. The finite resolution of brain imaging, however, introduces uncertainty in electrode positions. The DBS stimulation settings for optimal patient response are sensitive to the relative positioning of DBS electrodes to a specific neural substrate (white/grey matter). In our contribution, we study positional uncertainty in DBS electrodes for imaging with finite resolution. In a three-step approach, we first derive a closed-form mathematical model characterising the geometry of the DBS electrodes. Second, we devise a statistical framework for quantifying the uncertainty in the positional attributes of the DBS electrodes, namely the direction of the longitudinal axis and the contact-centre positions at subvoxel levels. The statistical framework leverages the analytical model derived in step one and a Bayesian probabilistic model for uncertainty quantification. Finally, the uncertainty in contact-centre positions is interactively visualised through volume rendering and isosurfacing techniques. We demonstrate the efficacy of our contribution through experiments on synthetic and real datasets. We show that the spatial variations in true electrode positions are significant for finite-resolution imaging, and that interactive visualisation can be instrumental in exploring probabilistic positional variations in the DBS lead.
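A simplified Monte Carlo sketch of the positional-uncertainty question (uniform-within-voxel noise as a stand-in for the paper's Bayesian model; voxel size and contact positions are hypothetical): sample subvoxel contact-centre positions and look at the induced variability of the lead's longitudinal axis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Imaging quantizes each observed contact centre to a voxel, so the true
# position is modeled here as uniform within that voxel.
voxel_mm = np.array([0.5, 0.5, 1.5])                 # CT voxel dimensions
observed = np.array([[10.0, 22.0, 31.5],             # contact-centre voxels
                     [10.0, 22.0, 33.5]]) * voxel_mm

samples = observed[None] + rng.uniform(-0.5, 0.5, (5000, 2, 3)) * voxel_mm

# Uncertainty in the direction of the lead's longitudinal axis.
axes = samples[:, 1] - samples[:, 0]
axes /= np.linalg.norm(axes, axis=1, keepdims=True)
mean_axis = axes.mean(axis=0)
mean_axis /= np.linalg.norm(mean_axis)
ang = np.degrees(np.arccos(np.clip(axes @ mean_axis, -1, 1)))
print("angular deviation of lead axis, std (deg):", ang.std().round(2))
```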
Affiliation(s)
- Tushar M Athawale, Scientific Computing & Imaging (SCI) Institute, University of Utah, Salt Lake City, USA.
- Kara A Johnson, Scientific Computing & Imaging (SCI) Institute, University of Utah, Salt Lake City, USA.
- Chris R Johnson, Scientific Computing & Imaging (SCI) Institute, University of Utah, Salt Lake City, USA.
15. Favelier G, Faraj N, Summa B, Tierny J. Persistence Atlas for Critical Point Variability in Ensembles. IEEE Transactions on Visualization and Computer Graphics 2018; 25:1152-1162. PMID: 30207954. DOI: 10.1109/tvcg.2018.2864432.
Abstract
This paper presents a new approach for the visualization and analysis of the spatial variability of features of interest represented by critical points in ensemble data. Our framework, called the Persistence Atlas, enables the visualization of the dominant spatial patterns of critical points, along with statistics regarding their occurrence in the ensemble. The persistence atlas represents each dominant pattern in the geometrical domain in the form of a confidence map for the appearance of critical points. As a by-product, our method also provides two-dimensional layouts of the entire ensemble, highlighting the main trends at a global level. Our approach is based on the new notion of the Persistence Map, a measure of the geometrical density of critical points which leverages the robustness to noise of topological persistence to better emphasize salient features. We show how to leverage spectral embedding to represent the ensemble members as points in a low-dimensional Euclidean space, where distances between points measure the dissimilarities between critical point layouts and where statistical tasks, such as clustering, can be easily carried out. Further, we show how the notion of mandatory critical point can be leveraged to evaluate, for each cluster, confidence regions for the appearance of critical points. Most of the steps of this framework can be trivially parallelized and we show how to implement them efficiently. Extensive experiments demonstrate the relevance of our approach. The accuracy of the confidence regions provided by the persistence atlas is quantitatively evaluated and compared to a baseline strategy using an off-the-shelf clustering approach. We illustrate the importance of the persistence atlas on a variety of real-life datasets, where clear trends in feature layouts are identified and analyzed. We provide a lightweight VTK-based C++ implementation of our approach that can be used for reproduction purposes.
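The spectral-embedding step can be sketched generically: from pairwise dissimilarities between members (e.g., between persistence maps), build Gaussian affinities, form the normalized graph Laplacian, and embed with its low eigenvectors. The data and bandwidth below are hypothetical.

```python
import numpy as np

def spectral_embedding(dist, sigma=1.0, dim=2):
    """Embed members as points in a low-dimensional Euclidean space from a
    matrix of pairwise dissimilarities: Gaussian affinities, normalized
    graph Laplacian, and its low (non-trivial) eigenvectors."""
    W = np.exp(-dist**2 / (2 * sigma**2))
    d = W.sum(axis=1)
    L = np.eye(len(W)) - W / np.sqrt(np.outer(d, d))   # normalized Laplacian
    vals, vecs = np.linalg.eigh(L)                      # ascending eigenvalues
    return vecs[:, 1:dim + 1]                           # skip trivial eigenvector

rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 0.2, (10, 5)), rng.normal(3, 0.2, (10, 5))])
dist = np.linalg.norm(pts[:, None] - pts[None], axis=-1)
emb = spectral_embedding(dist)
print(emb.shape)      # (20, 2): two clusters separate along the first axis
```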
16. Athawale T, Johnson CR. Probabilistic Asymptotic Decider for Topological Ambiguity Resolution in Level-Set Extraction for Uncertain 2D Data. IEEE Transactions on Visualization and Computer Graphics 2018. PMID: 30130200. PMCID: PMC6382610. DOI: 10.1109/tvcg.2018.2864505.
Abstract
We present a framework for the analysis of uncertainty in isocontour extraction. The marching squares (MS) algorithm for isocontour reconstruction generates a linear topology that is consistent with hyperbolic curves of a piecewise bilinear interpolation. The saddle points of the bilinear interpolant cause topological ambiguity in isocontour extraction. The midpoint decider and the asymptotic decider are well-known mathematical techniques for resolving topological ambiguities. The latter technique investigates the data values at the cell saddle points for ambiguity resolution. The uncertainty in data, however, leads to uncertainty in underlying bilinear interpolation functions for the MS algorithm, and hence, their saddle points. In our work, we study the behavior of the asymptotic decider when data at grid vertices is uncertain. First, we derive closed-form distributions characterizing variations in the saddle point values for uncertain bilinear interpolants. The derivation assumes uniform and nonparametric noise models, and it exploits the concept of ratio distribution for analytic formulations. Next, the probabilistic asymptotic decider is devised for ambiguity resolution in uncertain data using distributions of the saddle point values derived in the first step. Finally, the confidence in probabilistic topological decisions is visualized using a colormapping technique. We demonstrate the higher accuracy and stability of the probabilistic asymptotic decider in uncertain data with regard to existing decision frameworks, such as deciders in the mean field and the probabilistic midpoint decider, through the isocontour visualization of synthetic and real datasets.
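The quantity at the heart of the asymptotic decider is the saddle value of the bilinear interpolant. A Monte Carlo sketch (uniform noise, hypothetical corner values) estimates the probability that the paper derives in closed form, namely that the saddle value exceeds the isovalue:

```python
import numpy as np

rng = np.random.default_rng(0)

def p_saddle_above(mean, half_width, c, n=20000):
    """Monte Carlo probability that the bilinear saddle value exceeds the
    isovalue c, for corner data f00, f10, f01, f11 with independent uniform
    noise. Shown here only as the reference a closed form replaces."""
    f00, f10, f01, f11 = (rng.uniform(m - half_width, m + half_width, n)
                          for m in mean)
    # saddle value of the bilinear interpolant on the unit cell
    saddle = (f00 * f11 - f10 * f01) / (f00 + f11 - f10 - f01)
    return np.mean(saddle > c)

# Hypothetical ambiguous cell: opposite corners above/below the isovalue.
print(p_saddle_above([0.2, 0.8, 0.9, 0.3], 0.05, c=0.5))
```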
Affiliation(s)
- Tushar Athawale, Scientific Computing & Imaging (SCI) Institute at the University of Utah.
- Chris R. Johnson, Scientific Computing & Imaging (SCI) Institute at the University of Utah.
17. Ma B, Entezari A. An Interactive Framework for Visualization of Weather Forecast Ensembles. IEEE Transactions on Visualization and Computer Graphics 2018; 25:1091-1101. PMID: 30130213. DOI: 10.1109/tvcg.2018.2864815.
Abstract
Numerical Weather Prediction (NWP) ensembles are commonly used to assess the uncertainty and confidence in weather forecasts. Spaghetti plots are conventional tools for meteorologists to directly examine the uncertainty exhibited by ensembles, simultaneously visualizing isocontours of all ensemble members. To avoid visual clutter in practice, one needs to select a small number of informative isovalues for visual analysis. Moreover, due to the complex topology and variation of ensemble isocontours, it is often challenging to interpret the spaghetti plot for even a single isovalue in large ensembles. In this paper, we propose an interactive framework for uncertainty visualization of weather forecast ensembles that significantly improves and expands the utility of spaghetti plots in ensemble analysis. Complementary to state-of-the-art methods, our approach provides a complete framework for visual exploration of ensemble isocontours, including isovalue selection, interactive isocontour variability exploration, and interactive sub-region selection and re-analysis. Our framework is built upon the high-density clustering paradigm, in which the mode structure of the density function is represented as a hierarchy of nested subsets of the data. We generalize high-density clustering to isocontours and propose a bandwidth selection method for estimating the density function of ensemble isocontours. We present novel visualizations based on the high-density clustering results, called the mode plot and the simplified spaghetti plot. The proposed mode plot visually encodes the structure provided by the high-density clustering result and summarizes the distribution of ensemble isocontours. It also enables the selection of subsets of interesting isocontours, which are interactively highlighted in a linked spaghetti plot to provide spatial context. To provide an interpretable overview of the positional variability of isocontours, our system allows for the selection of informative isovalues from the simplified spaghetti plot. Due to the spatial variability of ensemble isocontours, the system also allows for interactive selection of, and focus on, sub-regions for local uncertainty and clustering re-analysis. We examine a number of ensemble datasets to establish the utility of our approach and discuss its advantages over state-of-the-art visual analysis tools for ensemble data.
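The high-density clustering paradigm is easy to sketch in one dimension: the clusters at level lambda are the connected components of the density's superlevel set, and raising lambda walks up the mode hierarchy that the mode plot encodes. A toy example with hypothetical data:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-1, 0.5, 300), rng.normal(1, 0.5, 300)])
grid = np.linspace(-4, 4, 801)
density = gaussian_kde(data)(grid)

for lam in (0.05, 0.2):
    above = density > lam
    # count maximal runs of True: the connected components at this level
    n_clusters = np.diff(above.astype(int), prepend=0).clip(min=0).sum()
    print(f"lambda={lam}: {n_clusters} high-density cluster(s)")
# the low level merges both modes into one cluster; the higher level splits them
```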
18. Hazarika S, Biswas A, Shen HW. Uncertainty Visualization Using Copula-Based Analysis in Mixed Distribution Models. IEEE Transactions on Visualization and Computer Graphics 2018; 24:934-943. PMID: 28866523. DOI: 10.1109/tvcg.2017.2744099.
Abstract
Distributions are often used to model uncertainty in many scientific datasets. To preserve the correlation among the spatially sampled grid locations in the dataset, various standard multivariate distribution models have been proposed in the visualization literature. These models treat each grid location as a univariate random variable which models the uncertainty at that location. Standard multivariate distributions (both parametric and nonparametric) assume that all the univariate marginals are of the same type/family of distribution, but in reality, different grid locations show different statistical behavior which may not be best modeled by the same type of distribution. In this paper, we propose a new multivariate uncertainty modeling strategy to address the needs of uncertainty modeling in scientific datasets. Our proposed method is based on a statistically sound multivariate technique called the copula, which makes it possible to separate the estimation of the univariate marginals from the modeling of dependency, unlike standard multivariate distributions. The modeling flexibility offered by our proposed method makes it possible to design distribution fields which can have different types of distribution (e.g., Gaussian, histogram, KDE) at different grid locations, while maintaining the correlation structure at the same time. Depending on the results of various standard statistical tests, we can choose an optimal distribution representation at each location, resulting in more cost-efficient modeling without significantly sacrificing analysis quality. To demonstrate the efficacy of our proposed modeling strategy, we extract and visualize uncertain features like isocontours and vortices in various real-world datasets. We also study various modeling criteria to help users in the task of univariate model selection.
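The copula construction can be sketched with a Gaussian copula and two deliberately different marginals, mirroring the point that dependency modeling and marginal estimation separate cleanly. The correlation and marginals below are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Gaussian copula: sample correlated normals, push them through the normal
# CDF to get correlated uniforms, then apply a *different* inverse CDF per
# margin, e.g., a Gaussian at one grid location and an empirical
# (histogram-like) distribution at another.
rho = 0.8
cov = [[1.0, rho], [rho, 1.0]]
z = rng.multivariate_normal([0, 0], cov, size=10000)
u = stats.norm.cdf(z)                             # correlated uniforms

x1 = stats.norm.ppf(u[:, 0], loc=5, scale=2)      # Gaussian marginal
hist_samples = rng.exponential(1.0, 5000)         # "observed" data at loc 2
x2 = np.quantile(hist_samples, u[:, 1])           # empirical inverse CDF

print("target rho:", rho,
      "achieved:", np.corrcoef(x1, x2)[0, 1].round(2))
```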
19. Sakhaee E, Entezari A. A Statistical Direct Volume Rendering Framework for Visualization of Uncertain Data. IEEE Transactions on Visualization and Computer Graphics 2017; 23:2509-2520. PMID: 27959812. DOI: 10.1109/tvcg.2016.2637333.
Abstract
With uncertainty present in almost all modalities of data acquisition, reduction, transformation, and representation, there is a growing demand for mathematical analysis of uncertainty propagation in data processing pipelines. In this paper, we present a statistical framework for the quantification of uncertainty and its propagation in the main stages of the visualization pipeline. We propose a novel generalization of Irwin-Hall distributions from the statistical viewpoint of splines and box-splines, which enables the interpolation of random variables. Moreover, we introduce a probabilistic transfer function classification model that allows for incorporating probability density functions into the volume rendering integral. Our statistical framework allows for incorporating distributions from various sources of uncertainty, which makes it suitable for a wide range of visualization applications. We demonstrate the effectiveness of our approach in visualizing ensemble data, visualizing large datasets at reduced scale, extracting isosurfaces, and visualizing noisy data.
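The simplest instance of interpolating random variables is the convolution of two scaled uniforms, the trapezoidal case from which the Irwin-Hall generalization grows. A sketch (an elementary derivation, not the paper's box-spline machinery, cross-checked against Monte Carlo):

```python
import numpy as np

def interp_uniform_pdf(w, alpha):
    """Exact pdf of W = (1-alpha) X + alpha Y for independent X, Y ~ U(0,1):
    the convolution of two scaled uniform densities, a trapezoid whose
    support is [0, 1]."""
    a, b = 1 - alpha, alpha
    overlap = np.minimum(w, a) - np.maximum(0.0, w - b)
    return np.clip(overlap, 0.0, None) / (a * b)

# Cross-check the closed form against Monte Carlo.
rng = np.random.default_rng(0)
alpha = 0.3
w_mc = (1 - alpha) * rng.random(200000) + alpha * rng.random(200000)
hist, edges = np.histogram(w_mc, bins=50, density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
print("max |MC - closed form|:",
      np.abs(hist - interp_uniform_pdf(mid, alpha)).max())
```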
20. Interpolation in Time Series: An Introductive Overview of Existing Methods, Their Performance Criteria and Uncertainty Assessment. Water 2017; 9(10):796. DOI: 10.3390/w9100796.
21. Athawale T, Sakhaee E, Entezari A. Isosurface Visualization of Data with Nonparametric Models for Uncertainty. IEEE Transactions on Visualization and Computer Graphics 2016; 22:777-786. PMID: 26529727. DOI: 10.1109/tvcg.2015.2467958.
Abstract
The problem of isosurface extraction in uncertain data is an important research problem that may be approached in two ways. One can extract statistics (e.g., the mean) from uncertain data points and visualize the extracted field. Alternatively, data uncertainty, characterized by probability distributions, can be propagated through the isosurface extraction process. We analyze the impact of data uncertainty on topology and geometry extraction algorithms. A novel approach based on edge-crossing probabilities is proposed to predict the underlying isosurface topology for uncertain data. We derive a probabilistic version of the midpoint decider that resolves ambiguities arising in the identification of topological configurations. Moreover, the probability density function characterizing positional uncertainty in isosurfaces is derived analytically for a broad class of nonparametric distributions. This analytic characterization can be used for efficient closed-form computation of the expected value and variation in geometry. Our experiments show the computational advantages of our analytic approach over Monte Carlo sampling for characterizing positional uncertainty. Experiments on ensemble datasets and uncertain scalar fields also show the advantage of modeling the underlying error densities in a nonparametric, rather than parametric, statistical framework.
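The edge-crossing probability for independent uncertain endpoints has an elementary form: the isovalue crosses the edge when exactly one endpoint falls below it. A nonparametric sketch with hypothetical sample sets:

```python
import numpy as np

def edge_crossing_probability(samples_a, samples_b, c):
    """Probability that the isovalue c crosses a cell edge whose endpoint
    values are uncertain and independent, each given nonparametrically as a
    sample set: P = P(A < c) P(B > c) + P(A > c) P(B < c)."""
    pa = np.mean(samples_a < c)
    pb = np.mean(samples_b < c)
    return pa * (1 - pb) + (1 - pa) * pb

rng = np.random.default_rng(0)
a = rng.normal(0.4, 0.1, 10000)    # hypothetical ensemble values at vertex A
b = rng.normal(0.6, 0.1, 10000)    # ... and at vertex B
print(edge_crossing_probability(a, b, c=0.5))
```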