1. Athawale TM, Wang Z, Pugmire D, Moreland K, Gong Q, Klasky S, Johnson CR, Rosen P. Uncertainty Visualization of Critical Points of 2D Scalar Fields for Parametric and Nonparametric Probabilistic Models. IEEE Transactions on Visualization and Computer Graphics 2025; 31:108-118. PMID: 39255107. DOI: 10.1109/tvcg.2024.3456393.
Abstract
This paper presents a novel end-to-end framework for closed-form computation and visualization of critical point uncertainty in 2D uncertain scalar fields. Critical points are fundamental topological descriptors used in the visualization and analysis of scalar fields. The uncertainty inherent in data (e.g., observational and experimental data, approximations in simulations, and compression), however, creates uncertainty regarding critical point positions. Uncertainty in critical point positions, therefore, cannot be ignored, given its impact on downstream data analysis tasks. In this work, we study uncertainty in critical points as a function of uncertainty in data modeled with probability distributions. Although Monte Carlo (MC) sampling techniques have been used in prior studies to quantify critical point uncertainty, they are often expensive and are infrequently used in production-quality visualization software. We therefore propose a new end-to-end framework that addresses these challenges through three contributions. First, we derive the critical point uncertainty in closed form, which is more accurate and efficient than conventional MC sampling. Specifically, we provide closed-form and semianalytical (a mix of closed-form and MC) solutions for parametric (e.g., uniform, Epanechnikov) and nonparametric (e.g., histogram) models with finite support. Second, we accelerate critical point probability computations with a platform-portable parallel implementation built on the VTK-m library. Finally, we integrate our implementation with the ParaView software system and demonstrate near-real-time results on real datasets.
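The gap between the two routes is easy to see in miniature. Below is a minimal Python sketch (an illustration, not the paper's VTK-m implementation): the probability that a grid vertex with independent uniform noise is a local minimum, estimated by Monte Carlo and by quadrature over the closed-form integrand P = ∫ f0(x) Πi P(Xi > x) dx. All distribution parameters are toy values.

```python
import numpy as np
from scipy import stats

center = stats.uniform(loc=0.0, scale=1.0)            # X0 ~ U(0, 1)
neighbors = [stats.uniform(loc=0.3, scale=1.0) for _ in range(4)]

# Monte Carlo estimate of P(X0 is a local minimum).
rng = np.random.default_rng(0)
x0 = center.rvs(size=100_000, random_state=rng)
xn = np.stack([d.rvs(size=100_000, random_state=rng) for d in neighbors])
p_mc = np.mean(x0 < xn.min(axis=0))

# Closed-form integrand f0(x) * prod_i P(Xi > x), integrated over the finite
# support of X0 (a simple Riemann sum stands in for exact quadrature).
x = np.linspace(0.0, 1.0, 10_001)
integrand = center.pdf(x) * np.prod([d.sf(x) for d in neighbors], axis=0)
p_cf = float(np.sum(integrand) * (x[1] - x[0]))
print(p_mc, p_cf)
```

The quadrature answer is deterministic and needs one pass over the support, while the MC estimate carries sampling noise that shrinks only as the square root of the sample count.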
2. Pont M, Vidal J, Tierny J. Principal Geodesic Analysis of Merge Trees (and Persistence Diagrams). IEEE Transactions on Visualization and Computer Graphics 2023; 29:1573-1589. PMID: 36251893. DOI: 10.1109/tvcg.2022.3215001.
Abstract
This article presents a computational framework for the Principal Geodesic Analysis of merge trees (MT-PGA), a novel adaptation of the celebrated Principal Component Analysis (PCA) framework (K. Pearson, 1901) to the Wasserstein metric space of merge trees (Pont et al., 2022). We formulate MT-PGA computation as a constrained optimization problem that adjusts a basis of orthogonal geodesic axes while minimizing a fitting energy. We introduce an efficient, iterative algorithm that exploits shared-memory parallelism, as well as an analytic expression of the fitting energy gradient, to ensure fast iterations. Our approach also trivially extends to extremum persistence diagrams. Extensive experiments on public ensembles demonstrate the efficiency of our approach, with MT-PGA computations on the order of minutes for the largest examples. We show the utility of our contributions by extending two typical PCA applications to merge trees. First, we apply MT-PGA to data reduction and reliably compress merge trees by concisely representing them with their first coordinates in the MT-PGA basis. Second, we present a dimensionality reduction framework that exploits the first two directions of the MT-PGA basis to generate two-dimensional layouts of the ensemble. We augment these layouts with persistence correlation views, enabling global and local visual inspection of feature variability in the ensemble. In both applications, quantitative experiments assess the relevance of our framework. Finally, we provide a C++ implementation that can be used to reproduce our results.
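For intuition only: plain PCA on fixed-size vectorizations of topological descriptors is the flat-space analogue of fitting MT-PGA's orthogonal basis by minimizing a reconstruction energy. The sketch below is that Euclidean stand-in, not the paper's Wasserstein-space optimization, and the vectorized ensemble is random placeholder data.

```python
import numpy as np

def pca_basis(vectors, n_axes=2):
    X = vectors - vectors.mean(axis=0)      # center the ensemble
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    axes = Vt[:n_axes]                      # orthogonal principal axes
    coords = X @ axes.T                     # low-dimensional coordinates
    return axes, coords

rng = np.random.default_rng(0)
ensemble = rng.normal(size=(40, 16))        # 40 vectorized descriptors
axes, coords = pca_basis(ensemble)          # 2D layout of the ensemble
```

The first coordinates in such a basis are exactly what the data-reduction application stores in place of the full objects; MT-PGA does the same, but with geodesic axes in the merge-tree metric space.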
3. Sun M, Cai L, Cui W, Wu Y, Shi Y, Cao N. Erato: Cooperative Data Story Editing via Fact Interpolation. IEEE Transactions on Visualization and Computer Graphics 2023; 29:983-993. PMID: 36155449. DOI: 10.1109/tvcg.2022.3209428.
Abstract
As an effective form of narrative visualization, visual data stories are widely used in data-driven storytelling to communicate complex insights and support data understanding. Although important, they are difficult to create, as a variety of interdisciplinary skills, such as data analysis and design, are required. In this work, we introduce Erato, a human-machine cooperative data story editing system that allows users to generate insightful and fluent data stories together with the computer. Specifically, Erato requires only a small number of user-provided keyframes that briefly describe the topic and structure of a data story. Our system then leverages a novel interpolation algorithm to help users insert intermediate frames between the keyframes to smooth the transitions. We evaluated the effectiveness and usefulness of Erato through a Turing test, a controlled user study, a performance validation, and interviews with three expert users. The results show that the proposed interpolation technique generates coherent story content and helps users create data stories more efficiently.
4. Pont M, Vidal J, Delon J, Tierny J. Wasserstein Distances, Geodesics and Barycenters of Merge Trees. IEEE Transactions on Visualization and Computer Graphics 2022; 28:291-301. PMID: 34596544. DOI: 10.1109/tvcg.2021.3114839.
Abstract
This paper presents a unified computational framework for the estimation of distances, geodesics and barycenters of merge trees. We extend recent work on the edit distance [104] and introduce a new metric, called the Wasserstein distance between merge trees, which is purposely designed to enable efficient computation of geodesics and barycenters. Specifically, our new distance is strictly equivalent to the L2-Wasserstein distance between extremum persistence diagrams, but it is restricted to a smaller solution space, namely, the space of rooted partial isomorphisms between branch decomposition trees. This enables a simple extension of existing optimization frameworks [110] for geodesics and barycenters from persistence diagrams to merge trees. We introduce a task-based algorithm that can be generically applied to distance, geodesic, barycenter or cluster computation. The task-based nature of our approach enables further acceleration with shared-memory parallelism. Extensive experiments on public ensembles and SciVis contest benchmarks demonstrate the efficiency of our approach, with barycenter computations on the order of minutes for the largest examples, as well as its qualitative ability to generate representative barycenter merge trees that visually summarize the features of interest found in the ensemble. We show the utility of our contributions with dedicated visualization applications: feature tracking, temporal reduction and ensemble clustering. We provide a lightweight C++ implementation that can be used to reproduce our results.
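The persistence-diagram side of this equivalence is compact enough to sketch. The following is a minimal implementation of the L2-Wasserstein distance between two diagrams via optimal assignment with the standard diagonal augmentation; the merge-tree metric itself, restricted to branch decomposition trees, is not reproduced here.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def wasserstein2(diag_a, diag_b):
    """L2-Wasserstein distance between persistence diagrams.

    diag_a, diag_b: (n, 2) and (m, 2) arrays of (birth, death) pairs.
    Any point may instead be matched to its diagonal projection."""
    a = np.atleast_2d(np.asarray(diag_a, float))
    b = np.atleast_2d(np.asarray(diag_b, float))
    n, m = len(a), len(b)
    # Cost of destroying a point: squared distance to its diagonal
    # projection, (death - birth)^2 / 2.
    diag_a_cost = (a[:, 1] - a[:, 0]) ** 2 / 2.0
    diag_b_cost = (b[:, 1] - b[:, 0]) ** 2 / 2.0
    cost = np.zeros((n + m, n + m))
    cost[:n, :m] = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    cost[:n, m:] = diag_a_cost[:, None]   # a-point matched to the diagonal
    cost[n:, :m] = diag_b_cost[None, :]   # b-point matched to the diagonal
    # Diagonal-to-diagonal pairings stay at zero cost.
    row, col = linear_sum_assignment(cost)
    return float(np.sqrt(cost[row, col].sum()))

print(wasserstein2([(0.0, 1.0), (0.2, 0.9)], [(0.1, 1.1)]))
```

Restricting the matchings, as the paper's merge-tree metric does, shrinks the solution space of this assignment problem while keeping the same cost structure.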
5. BiLSTM-I: A Deep Learning-Based Long Interval Gap-Filling Method for Meteorological Observation Data. International Journal of Environmental Research and Public Health 2021; 18:10321. PMID: 34639622. PMCID: PMC8507855. DOI: 10.3390/ijerph181910321.
Abstract
Complete and high-resolution temperature observation data are important input parameters for agrometeorological disaster monitoring and ecosystem modelling. Because of the limitations of field meteorological observation conditions, observations are commonly missing, and an appropriate data imputation method is necessary in meteorological data applications. In this paper, we focus on filling long gaps in meteorological observation data at field sites. A deep learning-based model, BiLSTM-I, is proposed to impute missing half-hourly temperature observations with high accuracy by considering temperature observations obtained manually at a low frequency. BiLSTM-I adopts an encoder-decoder structure, which is conducive to fully learning the potential distribution pattern of the data. In addition, the BiLSTM-I error function incorporates the difference between the final estimates and the true observations, so it evaluates the imputation results directly: the model convergence error and the imputation accuracy are directly related, ensuring that the imputation error is minimized when the model converges. Experiments show that BiLSTM-I is superior to other methods: for test sets with gaps of 30 or 60 days, the root mean square errors (RMSEs) remain stable, indicating the model's excellent generalization across different gap lengths. Although the model is applied only to temperature imputation in this study, it also has the potential to be applied to other meteorological dataset-filling scenarios.
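A minimal PyTorch sketch of the encoder-decoder idea follows. The layer sizes, the value-plus-mask input encoding, and the masked loss are illustrative assumptions rather than the paper's exact architecture; the masked loss mirrors the paper's point that the error function should act directly on the final estimates.

```python
import torch
import torch.nn as nn

class BiLSTMImputer(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        # Per time step the input is the observed value (zeroed where
        # missing) concatenated with the missingness mask.
        self.encoder = nn.LSTM(input_size=2, hidden_size=hidden,
                               batch_first=True, bidirectional=True)
        self.decoder = nn.LSTM(input_size=2 * hidden, hidden_size=hidden,
                               batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, values, mask):
        # values, mask: (batch, time, 1); mask is 1 where observed.
        x = torch.cat([values * mask, mask], dim=-1)
        enc, _ = self.encoder(x)      # (batch, time, 2 * hidden)
        dec, _ = self.decoder(enc)    # (batch, time, hidden)
        return self.head(dec)         # reconstructed/imputed series

model = BiLSTMImputer()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
values = torch.randn(8, 48, 1)                 # toy half-hourly series
mask = (torch.rand(8, 48, 1) > 0.3).float()    # 30% artificially missing
pred = model(values, mask)
# The error acts directly on the final estimates at observed positions.
loss = ((pred - values) ** 2 * mask).sum() / mask.sum()
opt.zero_grad(); loss.backward(); opt.step()
```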
6. Zheng B, Sadlo F. Uncertainty in Continuous Scatterplots, Continuous Parallel Coordinates, and Fibers. IEEE Transactions on Visualization and Computer Graphics 2021; 27:1819-1828. PMID: 33048747. DOI: 10.1109/tvcg.2020.3030466.
Abstract
In this paper, we introduce uncertainty to continuous scatterplots and continuous parallel coordinates. We derive the respective models, validate them against sampling-based brute-force schemes, and present acceleration strategies for their computation. We also show that our approach lends itself to introducing uncertainty into the definition of fibers in bivariate data. Finally, we demonstrate the properties and utility of our approach on specifically designed synthetic cases and simulated data.
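The sampling-based brute-force baseline that such models are validated against can be stated in a few lines: densely sample the domain, perturb the two field values with the noise model, and accumulate a 2D histogram. The fields and noise below are toy stand-ins, not the paper's models.

```python
import numpy as np

rng = np.random.default_rng(0)
xy = rng.uniform(-1.0, 1.0, size=(200_000, 2))    # dense domain sampling
# Two scalar fields with additive Gaussian data uncertainty (toy models).
f1 = np.sin(np.pi * xy[:, 0]) + 0.1 * rng.normal(size=len(xy))
f2 = xy[:, 0] ** 2 + xy[:, 1] ** 2 + 0.1 * rng.normal(size=len(xy))
# The uncertain continuous scatterplot is the density of (f1, f2) pairs
# induced by the domain measure and the noise model.
hist, f1_edges, f2_edges = np.histogram2d(f1, f2, bins=128, density=True)
```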
7. Herzig N, He L, Maiolino P, Abad SA, Nanayakkara T. Conditioned haptic perception for 3D localization of nodules in soft tissue palpation with a variable stiffness probe. PLoS One 2020; 15:e0237379. PMID: 32780753. PMCID: PMC7419002. DOI: 10.1371/journal.pone.0237379.
Abstract
This paper provides a solution for fast haptic information gain during soft tissue palpation using a Variable Lever Mechanism (VLM) probe. More specifically, we investigate how varying the stiffness of the probe conditions the likelihood functions of the kinesthetic force and tactile sensor measurements during a palpation task for two sweeping directions. Using knowledge obtained from past probing trials or Finite Element (FE) simulations, we implement this likelihood conditioning in an autonomous palpation control strategy. Based on a recursive Bayesian inference framework, this new control strategy adapts the sweeping direction and the stiffness of the probe to detect abnormal stiff inclusions in soft tissue. This control strategy for compliant palpation probes achieves sub-millimeter accuracy for 3D localization of nodules in a soft tissue phantom, as well as 100% reliability in detecting their presence.
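A minimal sketch of the recursive Bayesian core of such a strategy: a grid prior over nodule location is updated with the likelihood of each probe reading. The Gaussian likelihood and the toy stiffness model below are assumptions, not the paper's conditioned sensor model.

```python
import numpy as np

grid = np.linspace(0.0, 100.0, 201)               # candidate nodule positions (mm)
posterior = np.full_like(grid, 1.0 / grid.size)   # flat prior

def update(posterior, reading, probe_pos, noise_sd=0.3):
    # Toy forward model: the stiffness reading peaks when the probe is
    # directly above the nodule (assumed Gaussian profile, width 5 mm).
    expected = np.exp(-0.5 * ((grid - probe_pos) / 5.0) ** 2)
    likelihood = np.exp(-0.5 * ((reading - expected) / noise_sd) ** 2)
    posterior = posterior * likelihood            # Bayes rule, then normalize
    return posterior / posterior.sum()

for probe_pos, reading in [(30.0, 0.2), (50.0, 0.9), (70.0, 0.3)]:
    posterior = update(posterior, reading, probe_pos)
print("MAP nodule position (mm):", grid[posterior.argmax()])
```

In the paper's setting, the probe additionally chooses its stiffness and sweeping direction to sharpen these likelihoods before each update.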
Affiliations
- Nicolas Herzig: Department of Automatic Control and Systems Engineering, University of Sheffield, Sheffield, United Kingdom
- Liang He: Dyson School of Design Engineering, Imperial College London, London, United Kingdom
- Perla Maiolino: Oxford Robotics Institute, University of Oxford, Oxford, United Kingdom
- Sara-Adela Abad: Department of Mechanical Engineering, University College London, London, United Kingdom; Institute for Applied Sustainability Research, Quito, Ecuador
8. Resource Allocation to Massive Internet of Things in LoRaWANs. Sensors 2020; 20:2645. PMID: 32384656. PMCID: PMC7361687. DOI: 10.3390/s20092645.
Abstract
A long-range wide area network (LoRaWAN) adapts the ALOHA concept for channel access, resulting in packet collisions caused by intra- and inter-spreading-factor (SF) interference and thus a high packet loss ratio. In LoRaWAN, each end device (ED) increments its SF after every two consecutive failed retransmissions, forcing EDs toward high SFs. When numerous EDs switch to the highest SF, the network loses its advantage of orthogonality, and the collision probability of ED packets increases drastically. In this study, we propose two SF allocation schemes that enhance the packet success ratio by lowering the impact of interference. The first scheme, the channel-adaptive SF recovery algorithm, increments or decrements the SF based on the retransmissions of ED packets, which indicate the channel status in the network. The second scheme allocates SFs to EDs based on ED sensitivity during the initial deployment. These schemes are validated through extensive simulations that consider channel interference in both the confirmed and unconfirmed modes of LoRaWAN. The simulation results show that SFs are adapted to each ED and that the proposed schemes enhance the packet delivery ratio compared with typical SF allocation schemes.
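The first scheme's rule can be sketched in a few lines; the thresholds below are illustrative assumptions, not the paper's tuned values.

```python
SF_MIN, SF_MAX = 7, 12

def adapt_sf(sf, consecutive_failures, consecutive_successes,
             fail_thresh=2, success_thresh=4):
    """Channel-adaptive SF recovery: return the next spreading factor."""
    if consecutive_failures >= fail_thresh and sf < SF_MAX:
        return sf + 1   # poor channel: trade data rate for link robustness
    if consecutive_successes >= success_thresh and sf > SF_MIN:
        return sf - 1   # good channel: reclaim airtime and energy
    return sf
```

Decrementing on sustained success is what keeps the population of EDs from piling up at the highest SF and losing orthogonality.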
9. Vidal J, Budin J, Tierny J. Progressive Wasserstein Barycenters of Persistence Diagrams. IEEE Transactions on Visualization and Computer Graphics 2019. PMID: 31403427. DOI: 10.1109/tvcg.2019.2934256.
Abstract
This paper presents an efficient algorithm for the progressive approximation of Wasserstein barycenters of persistence diagrams, with applications to the visual analysis of ensemble data. Given a set of scalar fields, our approach enables the computation of a persistence diagram that is representative of the set and that visually conveys the number, data ranges and saliences of its main features of interest. Such representative diagrams are obtained by explicitly computing the discrete Wasserstein barycenter of the set of persistence diagrams, a notoriously computationally intensive task. In particular, we revisit efficient algorithms for Wasserstein distance approximation [12,51] to extend previous work on barycenter estimation [94]. We present a new fast algorithm that progressively approximates the barycenter by iteratively increasing both the computation accuracy and the number of persistent features in the output diagram. This progressivity drastically improves convergence in practice and allows us to design an interruptible algorithm capable of respecting computation time constraints, enabling the approximation of Wasserstein barycenters within interactive times. We present an application to ensemble clustering, where we revisit the k-means algorithm to exploit our barycenters and compute, within execution time constraints, meaningful clusters of ensemble data along with their barycenter diagrams. Extensive experiments on synthetic and real-life data sets show that our algorithm converges to barycenters that are qualitatively meaningful with regard to the applications and quantitatively comparable to previous techniques, while offering an order-of-magnitude speedup when run until convergence (without time constraints). Our algorithm can be trivially parallelized for additional speedups on standard workstations. We provide a lightweight C++ implementation of our approach that can be used to reproduce our results.
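For intuition, here is a sketch of the classical alternating scheme that progressive barycenter methods refine: match every diagram to the current candidate with an optimal assignment, then move each candidate point to the mean of its matches. It is simplified to equal-size diagrams and omits the paper's diagonal handling, progressivity, and interruption logic.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def barycenter(diagrams, iters=20):
    diagrams = [np.asarray(d, float) for d in diagrams]
    bary = diagrams[0].copy()              # initialize from one member
    for _ in range(iters):
        matched = np.zeros_like(bary)
        for d in diagrams:
            # Optimal matching between the candidate and this diagram.
            cost = ((bary[:, None, :] - d[None, :, :]) ** 2).sum(-1)
            row, col = linear_sum_assignment(cost)
            matched[row] += d[col]
        bary = matched / len(diagrams)     # Fréchet-mean style update
    return bary

rng = np.random.default_rng(0)
diagrams = [rng.uniform(size=(5, 2)) for _ in range(4)]
print(barycenter(diagrams))
```

The progressive algorithm makes each of these sweeps cheaper early on (coarse assignments, few features) and refines both as the time budget allows.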
10. Kocev B, Hahn HK, Linsen L, Wells WM, Kikinis R. Uncertainty-aware asynchronous scattered motion interpolation using Gaussian process regression. Comput Med Imaging Graph 2019; 72:1-12. PMID: 30654093. PMCID: PMC6433137. DOI: 10.1016/j.compmedimag.2018.12.001.
Abstract
We address the problem of interpolating randomly, non-uniformly, spatiotemporally scattered uncertain motion measurements, which arises in the context of soft tissue motion estimation. Soft tissue motion estimation is of great interest in image-guided soft-tissue intervention and surgery navigation, because it enables the registration of pre-interventional/pre-operative navigation information on deformable soft-tissue organs. To formally define the measurements as spatiotemporally scattered motion signal samples, we propose a novel motion field representation. To interpolate the motion measurements in an uncertainty-aware, optimal, unbiased fashion, we devise a novel Gaussian process (GP) regression model with a non-constant-mean prior and an anisotropic covariance function, and we show through an extensive evaluation that it outperforms the state-of-the-art GP models previously deployed for similar tasks. GP regression also quantifies the uncertainty in the interpolation result, allowing the amount of uncertainty in the registered navigation information to be conveyed to the surgeon or intervention specialist whose decisions it informs.
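A minimal GP-regression sketch of the interpolation step follows, with per-sample noise to reflect uncertain measurements. It assumes a zero-mean prior and an isotropic RBF kernel, whereas the paper's model uses a non-constant-mean prior and an anisotropic covariance.

```python
import numpy as np

def rbf(a, b, ell=0.25, sigma=1.0):
    # Isotropic squared-exponential kernel between point sets a and b.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return sigma ** 2 * np.exp(-0.5 * d2 / ell ** 2)

def gp_posterior(X, y, Xq, noise_sd):
    # noise_sd: per-sample measurement noise (array of length len(X)).
    K = rbf(X, X) + np.diag(noise_sd ** 2)
    Kq = rbf(X, Xq)
    mean = Kq.T @ np.linalg.solve(K, y)
    cov = rbf(Xq, Xq) - Kq.T @ np.linalg.solve(K, Kq)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(25, 2))               # scattered sample sites
y = np.sin(2 * np.pi * X[:, 0]) + 0.05 * rng.normal(size=25)
Xq = np.column_stack([np.linspace(0, 1, 50), np.full(50, 0.5)])
mean, sd = gp_posterior(X, y, Xq, noise_sd=np.full(25, 0.05))
```

The returned standard deviation is exactly the per-point uncertainty that can be propagated to the registered navigation information.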
Affiliations
- Bojan Kocev: Department of Mathematics and Computer Science, University of Bremen, Bremen, Germany; Fraunhofer Institute for Medical Image Computing MEVIS, Bremen, Germany; Department of Computer Science and Electrical Engineering, Jacobs University Bremen, Bremen, Germany
- Horst Karl Hahn: Fraunhofer Institute for Medical Image Computing MEVIS, Bremen, Germany; Department of Computer Science and Electrical Engineering, Jacobs University Bremen, Bremen, Germany
- Lars Linsen: Institute of Computer Science, Westfälische Wilhelms-Universität Münster, Germany
- William M Wells: Department of Radiology, Harvard Medical School and Brigham and Women's Hospital, Boston, MA 02115, USA
- Ron Kikinis: Department of Mathematics and Computer Science, University of Bremen, Bremen, Germany; Fraunhofer Institute for Medical Image Computing MEVIS, Bremen, Germany; Department of Radiology, Harvard Medical School and Brigham and Women's Hospital, Boston, MA 02115, USA
11. Favelier G, Faraj N, Summa B, Tierny J. Persistence Atlas for Critical Point Variability in Ensembles. IEEE Transactions on Visualization and Computer Graphics 2018; 25:1152-1162. PMID: 30207954. DOI: 10.1109/tvcg.2018.2864432.
Abstract
This paper presents a new approach for the visualization and analysis of the spatial variability of features of interest represented by critical points in ensemble data. Our framework, called the Persistence Atlas, enables the visualization of the dominant spatial patterns of critical points, along with statistics regarding their occurrence in the ensemble. The persistence atlas represents each dominant pattern in the geometrical domain as a confidence map for the appearance of critical points. As a by-product, our method also provides two-dimensional layouts of the entire ensemble, highlighting the main trends at a global level. Our approach is based on the new notion of a persistence map, a measure of the geometrical density of critical points that leverages the robustness to noise of topological persistence to better emphasize salient features. We show how to use spectral embedding to represent the ensemble members as points in a low-dimensional Euclidean space, where distances between points measure the dissimilarities between critical point layouts and where statistical tasks, such as clustering, can be easily carried out. Further, we show how the notion of mandatory critical points can be leveraged to evaluate, for each cluster, confidence regions for the appearance of critical points. Most of the steps of this framework can be trivially parallelized, and we show how to implement them efficiently. Extensive experiments demonstrate the relevance of our approach. The accuracy of the confidence regions provided by the persistence atlas is quantitatively evaluated and compared to a baseline strategy using an off-the-shelf clustering approach. We illustrate the importance of the persistence atlas on a variety of real-life datasets, where clear trends in feature layouts are identified and analyzed. We provide a lightweight VTK-based C++ implementation of our approach that can be used for reproduction purposes.
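The spectral-embedding-plus-clustering step can be sketched directly: turn pairwise dissimilarities between members' persistence maps into low-dimensional coordinates, then cluster. The kernel bandwidth, cluster count, and the random dissimilarity matrix below are placeholders, not the paper's persistence-map distances.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def spectral_embed(D, dim=2, bandwidth=1.0):
    # D: (n, n) symmetric dissimilarity matrix between ensemble members.
    W = np.exp(-(D / bandwidth) ** 2)                 # affinity matrix
    d = W.sum(axis=1)
    L = np.eye(len(D)) - W / np.sqrt(np.outer(d, d))  # normalized Laplacian
    vals, vecs = np.linalg.eigh(L)
    return vecs[:, 1:dim + 1]         # skip the trivial eigenvector

rng = np.random.default_rng(0)
D = np.abs(rng.normal(size=(30, 30)))
D = (D + D.T) / 2.0
np.fill_diagonal(D, 0.0)
coords = spectral_embed(D)            # 2D layout of the ensemble
centroids, labels = kmeans2(coords, 3, minit='++', seed=0)
```

Each resulting cluster then gets its own confidence map, aggregated from the persistence maps of its members.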
12. Hazarika S, Biswas A, Shen HW. Uncertainty Visualization Using Copula-Based Analysis in Mixed Distribution Models. IEEE Transactions on Visualization and Computer Graphics 2018; 24:934-943. PMID: 28866523. DOI: 10.1109/tvcg.2017.2744099.
Abstract
Distributions are often used to model uncertainty in scientific datasets. To preserve the correlation among the spatially sampled grid locations in a dataset, various standard multivariate distribution models have been proposed in the visualization literature. These models treat each grid location as a univariate random variable that models the uncertainty at that location. Standard multivariate distributions (both parametric and nonparametric) assume that all the univariate marginals belong to the same type/family of distribution, but in reality, different grid locations exhibit different statistical behavior that may not be modeled best by the same type of distribution. In this paper, we propose a new multivariate uncertainty modeling strategy that addresses these needs for scientific datasets. Our method is based on a statistically sound multivariate technique called a copula, which, unlike standard multivariate distributions, makes it possible to separate the estimation of the univariate marginals from the modeling of the dependency structure. This flexibility makes it possible to design distribution fields with different types of distribution (Gaussian, histogram, KDE, etc.) at different grid locations while maintaining the correlation structure. Depending on the results of standard statistical tests, we can choose an optimal distribution representation at each location, resulting in more cost-efficient modeling without significantly sacrificing analysis quality. To demonstrate the efficacy of our strategy, we extract and visualize uncertain features such as isocontours and vortices in various real-world datasets. We also study various modeling criteria to help users with the task of univariate model selection.
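A minimal Gaussian-copula sketch of this separation: dependence is modeled once in a latent normal space, while each location keeps its own marginal, here one parametric (Gaussian) and one nonparametric (empirical quantile function). The correlation and marginals are toy choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
corr = np.array([[1.0, 0.7],
                 [0.7, 1.0]])                      # dependence structure
z = rng.multivariate_normal([0.0, 0.0], corr, size=10_000)
u = stats.norm.cdf(z)                              # correlated uniforms

# Location 1: parametric Gaussian marginal.
x1 = stats.norm.ppf(u[:, 0], loc=5.0, scale=2.0)
# Location 2: nonparametric marginal via the empirical quantile function.
observed = rng.gamma(2.0, 1.5, size=500)           # stand-in for real samples
x2 = np.quantile(observed, u[:, 1])
# x1 and x2 follow their own marginals yet share the copula's correlation.
```

Swapping a marginal (say, histogram for KDE) at one location leaves both the other marginals and the dependence model untouched, which is exactly the flexibility the method exploits.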
13. Sakhaee E, Entezari A. A Statistical Direct Volume Rendering Framework for Visualization of Uncertain Data. IEEE Transactions on Visualization and Computer Graphics 2017; 23:2509-2520. PMID: 27959812. DOI: 10.1109/tvcg.2016.2637333.
Abstract
With uncertainty present in almost all modalities of data acquisition, reduction, transformation, and representation, there is a growing demand for mathematical analysis of uncertainty propagation in data processing pipelines. In this paper, we present a statistical framework for the quantification of uncertainty and its propagation through the main stages of the visualization pipeline. We propose a novel generalization of Irwin-Hall distributions, from the statistical viewpoint of splines and box-splines, that enables interpolation of random variables. Moreover, we introduce a probabilistic transfer function classification model that incorporates probability density functions into the volume rendering integral. Our statistical framework can incorporate distributions from various sources of uncertainty, which makes it suitable for a wide range of visualization applications. We demonstrate the effectiveness of our approach in visualizing ensemble data, visualizing large datasets at reduced scale, extracting isosurfaces, and visualizing noisy data.
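Both ingredients can be sketched numerically: the Irwin-Hall density (the sum of n standard uniforms, which is also a shifted cardinal B-spline) via repeated convolution, and a probabilistic transfer-function response as the expectation E[TF(X)] under a voxel's density. The ramp transfer function below is a toy example.

```python
import numpy as np

def irwin_hall_pdf(n, grid_points=4096):
    # Density of the sum of n independent U(0,1) variables, via repeated
    # numerical convolution of the box density on a regular grid.
    dx = n / grid_points
    x = np.arange(grid_points) * dx
    box = (x <= 1.0).astype(float)
    pdf = box.copy()
    for _ in range(n - 1):
        pdf = np.convolve(pdf, box) * dx
    return np.arange(len(pdf)) * dx, pdf

def expected_opacity(tf, x, pdf):
    # E[TF(X)]: probabilistic transfer-function response for a voxel whose
    # value is uncertain with density pdf (simple Riemann sum).
    return float(np.sum(tf(x) * pdf) * (x[1] - x[0]))

x, pdf = irwin_hall_pdf(3)   # n = 3: a shifted quadratic B-spline density
print(expected_opacity(lambda v: np.clip(v - 1.0, 0.0, 1.0), x, pdf))
```

Feeding E[TF(X)] (rather than TF(E[X])) into the rendering integral is what lets the uncertainty survive classification instead of being averaged away beforehand.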
14. Interpolation in Time Series: An Introductive Overview of Existing Methods, Their Performance Criteria and Uncertainty Assessment. Water 2017. DOI: 10.3390/w9100796.
15. Biswas A, Lin G, Liu X, Shen HW. Visualization of Time-Varying Weather Ensembles across Multiple Resolutions. IEEE Transactions on Visualization and Computer Graphics 2017; 23:841-850. PMID: 27875198. DOI: 10.1109/tvcg.2016.2598869.
Abstract
Uncertainty quantification in climate ensembles is an important topic for domain scientists, especially for decision making in real-world scenarios. With powerful computers, simulations now produce time-varying and multi-resolution ensemble datasets. Understanding the model's sensitivity to its input parameters is of extreme importance, so that more computation can be allocated to the parameters with higher influence on the output. Also, when ensemble data are produced at different resolutions, understanding the accuracy of each resolution helps reduce the total time required to produce a solution of the desired quality, with improved storage and computation costs. In this work, we tackle these non-trivial problems on Weather Research and Forecasting (WRF) model output. We employ a moment-independent sensitivity measure to quantify and analyze parameter sensitivity across spatial regions and the time domain. A comparison of clustering structures across three resolutions enables users to investigate the sensitivity variation over spatial regions for the five input parameters. The temporal trend in the sensitivity values is explored via an MDS view linked with a line chart for interactive brushing. The spatial and temporal views are connected to provide a complete spatio-temporal sensitivity analysis system. To analyze accuracy across resolutions, we formulate a Bayesian approach to identify which regions are better predicted at which resolutions compared to the observed precipitation. This information is aggregated over the time domain and encoded in an output image through a custom color map that guides domain experts toward an adaptive grid implementation given a cost model. Users can select and further analyze the spatial and temporal error patterns for multi-resolution accuracy analysis via brushing and linking on the produced image. We collaborated with a domain expert whose feedback shows the effectiveness of our proposed exploration workflow.
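A moment-independent (delta-type) sensitivity measure can be sketched with histograms: half the expected L1 distance between the unconditional output density and the density conditioned on one input parameter, averaged over bins of that parameter. The bin counts and the toy input-output model below are illustrative assumptions, not the paper's WRF setup.

```python
import numpy as np

def delta_sensitivity(x, y, n_cond_bins=10, n_y_bins=50):
    edges = np.quantile(x, np.linspace(0.0, 1.0, n_cond_bins + 1))
    y_edges = np.linspace(y.min(), y.max(), n_y_bins + 1)
    f_y, _ = np.histogram(y, bins=y_edges, density=True)  # unconditional
    widths = np.diff(y_edges)
    delta = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (x >= lo) & (x <= hi)
        if sel.sum() < 2:
            continue
        f_cond, _ = np.histogram(y[sel], bins=y_edges, density=True)
        # P(X in bin) * half the L1 distance between the two densities.
        delta += sel.mean() * 0.5 * np.sum(np.abs(f_y - f_cond) * widths)
    return delta

rng = np.random.default_rng(0)
x = rng.uniform(size=5_000)                     # one input parameter
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=5_000)  # toy output
print(delta_sensitivity(x, y))
```

Because it compares whole densities rather than variances, the measure captures influence on the full output distribution, which is why it suits non-Gaussian ensemble outputs.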
16. Athawale T, Sakhaee E, Entezari A. Isosurface Visualization of Data with Nonparametric Models for Uncertainty. IEEE Transactions on Visualization and Computer Graphics 2016; 22:777-786. PMID: 26529727. DOI: 10.1109/tvcg.2015.2467958.
Abstract
The problem of isosurface extraction in uncertain data is an important research problem and may be approached in two ways. One can extract statistics (e.g., the mean) from uncertain data points and visualize the extracted field. Alternatively, data uncertainty, characterized by probability distributions, can be propagated through the isosurface extraction process. We analyze the impact of data uncertainty on topology and geometry extraction algorithms. A novel edge-crossing-probability-based approach is proposed to predict the underlying isosurface topology for uncertain data. We derive a probabilistic version of the midpoint decider that resolves ambiguities arising in the identification of topological configurations. Moreover, the probability density function characterizing positional uncertainty in isosurfaces is derived analytically for a broad class of nonparametric distributions. This analytic characterization can be used for efficient closed-form computation of the expected value and variation in geometry. Our experiments show the computational advantages of our analytic approach over Monte Carlo sampling for characterizing positional uncertainty. Experiments on ensemble datasets and uncertain scalar fields also show the advantage of modeling the underlying error densities in a nonparametric rather than a parametric statistical framework.
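The edge-crossing probability at the heart of this approach is simple to state for independent endpoint distributions: it is the chance that the isovalue falls between the two uncertain endpoint values of a grid edge. A minimal sketch with empirical (ensemble) marginals, matching the nonparametric setting:

```python
import numpy as np

def edge_crossing_probability(samples_a, samples_b, c):
    # Empirical CDFs of the two uncertain endpoint values.
    p_a_below = np.mean(samples_a < c)     # P(A < c)
    p_b_below = np.mean(samples_b < c)     # P(B < c)
    # The edge crosses the isovalue iff the endpoints straddle c
    # (endpoint independence is assumed here).
    return p_a_below * (1 - p_b_below) + (1 - p_a_below) * p_b_below

rng = np.random.default_rng(0)
a = rng.normal(0.4, 0.1, size=1_000)       # ensemble values at vertex A
b = rng.normal(0.7, 0.2, size=1_000)       # ensemble values at vertex B
print(edge_crossing_probability(a, b, c=0.5))
```

Aggregating these per-edge probabilities over a cell is what drives the probabilistic topology prediction and the midpoint decider.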
17. Athawale T, Entezari A. Uncertainty quantification in linear interpolation for isosurface extraction. IEEE Transactions on Visualization and Computer Graphics 2013; 19:2723-2732. PMID: 24051839. DOI: 10.1109/tvcg.2013.208.
Abstract
We present a study of linear interpolation when applied to uncertain data. Linear interpolation is a key step for isosurface extraction algorithms, and the uncertainties in the data lead to non-linear variations in the geometry of the extracted isosurface. We present an approach for deriving the probability density function of a random variable modeling the positional uncertainty in the isosurface extraction. When the uncertainty is quantified by a uniform distribution, our approach provides a closed-form characterization of the mentioned random variable. This allows us to derive, in closed form, the expected value as well as the variance of the level-crossing position. While the former quantity is used for constructing a stable isosurface for uncertain data, the latter is used for visualizing the positional uncertainties in the expected isosurface level crossings on the underlying grid.
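The random variable in question is the crossing position t = (c - A) / (B - A) along a grid edge with uncertain endpoint values A and B. The sketch below estimates its mean and variance by Monte Carlo under uniform endpoint noise, i.e., the baseline that the closed-form characterization replaces; the toy parameters keep the crossing inside the edge.

```python
import numpy as np

rng = np.random.default_rng(0)
c = 0.5                                         # isovalue
A = rng.uniform(0.1, 0.4, size=100_000)         # uncertain endpoint value
B = rng.uniform(0.6, 0.9, size=100_000)         # uncertain endpoint value
t = (c - A) / (B - A)                           # crossing position per sample
print("E[t] =", t.mean(), " Var[t] =", t.var())
```

In the paper's closed-form treatment, E[t] yields a stable isosurface placement, while Var[t] is mapped onto the grid to visualize positional uncertainty; note that t is a non-linear function of A and B, which is why uniform data noise does not produce uniform positional noise.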