1. Athawale TM, Wang Z, Pugmire D, Moreland K, Gong Q, Klasky S, Johnson CR, Rosen P. Uncertainty Visualization of Critical Points of 2D Scalar Fields for Parametric and Nonparametric Probabilistic Models. IEEE Transactions on Visualization and Computer Graphics 2025; 31:108-118. [PMID: 39255107] [DOI: 10.1109/tvcg.2024.3456393]
Abstract
This paper presents a novel end-to-end framework for closed-form computation and visualization of critical point uncertainty in 2D uncertain scalar fields. Critical points are fundamental topological descriptors used in the visualization and analysis of scalar fields. The uncertainty inherent in data (e.g., observational and experimental data, approximations in simulations, and compression), however, creates uncertainty regarding critical point positions. Uncertainty in critical point positions, therefore, cannot be ignored, given their impact on downstream data analysis tasks. In this work, we study uncertainty in critical points as a function of uncertainty in data modeled with probability distributions. Although Monte Carlo (MC) sampling techniques have been used in prior studies to quantify critical point uncertainty, they are often expensive and are infrequently used in production-quality visualization software. We therefore propose a new end-to-end framework to address these challenges, comprising a threefold contribution. First, we derive the critical point uncertainty in closed form, which is more accurate and efficient than conventional MC sampling methods. Specifically, we provide closed-form and semianalytical (a mix of closed-form and MC methods) solutions for parametric (e.g., uniform, Epanechnikov) and nonparametric (e.g., histogram) models with finite support. Second, we accelerate critical point probability computations using a parallel implementation with the platform-portable VTK-m library. Finally, we integrate our implementation with the ParaView software system, demonstrating near-real-time results for real datasets.
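To make the contribution concrete, here is a minimal Python sketch of the building block behind the closed-form/semianalytic route, assuming independent uniform noise at a vertex and its four neighbors. The quadrature-based function stands in for the paper's analytic solutions, with Monte Carlo shown as the baseline it replaces; all names and value ranges are illustrative, not taken from the paper's VTK-m code.

```python
import numpy as np

def local_min_probability(lo, hi, n_grid=2048):
    """Probability that vertex 0 is a local minimum among its neighbors,
    assuming independent uniform noise X_i ~ U(lo[i], hi[i]).
    Evaluates P = E_{x ~ X_0}[ prod_i P(X_i > x) ] by 1D quadrature."""
    x = np.linspace(lo[0], hi[0], n_grid)          # support of the center vertex
    pdf0 = 1.0 / (hi[0] - lo[0])                   # uniform density of X_0
    prob_greater = np.ones_like(x)
    for l, h in zip(lo[1:], hi[1:]):               # the neighboring vertices
        prob_greater *= np.clip((h - x) / (h - l), 0.0, 1.0)  # P(X_i > x)
    return np.trapz(pdf0 * prob_greater, x)

def local_min_probability_mc(lo, hi, n=200_000, rng=np.random.default_rng(0)):
    """Monte Carlo reference estimate of the same probability."""
    samples = rng.uniform(lo, hi, size=(n, len(lo)))
    return np.mean(np.all(samples[:, :1] < samples[:, 1:], axis=1))

# center vertex and its four neighbors in a 2D grid, each with uniform noise
lo = np.array([0.0, 0.2, 0.1, 0.3, 0.2])
hi = np.array([1.0, 1.2, 1.1, 1.3, 1.2])
print(local_min_probability(lo, hi), local_min_probability_mc(lo, hi))
```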
2. Nguyen N, Bohak C, Engel D, Mindek P, Strnad O, Wonka P, Li S, Ropinski T, Viola I. Finding Nano-Ötzi: Cryo-Electron Tomography Visualization Guided by Learned Segmentation. IEEE Transactions on Visualization and Computer Graphics 2023; 29:4198-4214. [PMID: 35749328] [DOI: 10.1109/tvcg.2022.3186146]
Abstract
Cryo-electron tomography (cryo-ET) is a new 3D imaging technique with unprecedented potential for resolving submicron structural details. Existing volume visualization methods, however, are not able to reveal details of interest due to the low signal-to-noise ratio. In order to design more powerful transfer functions, we propose leveraging soft segmentation as an explicit component of visualization for noisy volumes. Our technical realization is based on semi-supervised learning, where we combine the advantages of two segmentation algorithms. First, a weak segmentation algorithm provides good results for propagating sparse user-provided labels to other voxels in the same volume and is used to generate dense pseudo-labels. Second, a powerful deep-learning-based segmentation algorithm learns from these pseudo-labels to generalize the segmentation to other unseen volumes, a task at which the weak segmentation algorithm fails completely. The proposed volume visualization uses the deep-learning-based segmentation as a component of segmentation-aware transfer function design. Appropriate ramp parameters can be suggested automatically through frequency distribution analysis. Furthermore, our visualization uses gradient-free ambient occlusion shading to further suppress the visual presence of noise and to give structural detail the desired prominence. The cryo-ET data studied in our technical experiments are based on the highest-quality tilt series of intact SARS-CoV-2 virions. Our technique demonstrates high impact for the target sciences, enabling visual analysis of very noisy volumes that cannot be visualized with existing techniques.
3. Liu L, Vuillemot R. A Generic Interactive Membership Function for Categorization of Quantities. IEEE Computer Graphics and Applications 2023; 43:39-48. [PMID: 37535492] [DOI: 10.1109/mcg.2023.3301449]
Abstract
A membership function categorizes quantities along with a confidence degree. This article investigates a generic user interaction based on this function for categorizing various types of quantities without modification, empowering users to articulate uncertainty in categorization and significantly enhance their visual data analysis. We present the technique design and an online prototype, supplemented with insights from three case studies that highlight the technique's efficacy across different types of quantities. Furthermore, we conduct a formal user study to scrutinize the process and reasoning users employ while utilizing our technique. The findings indicate that our technique can help users create customized categories. Both our code and the interactive prototype are made available as open-source resources, intended for application across varied domains as a generic tool.
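The paper's interactive function is not reproduced here; as a rough sketch of the underlying idea, a trapezoidal membership function maps a quantity to a graded confidence of category membership. The category and its bounds below are invented for illustration.

```python
import numpy as np

def trapezoid_membership(x, a, b, c, d):
    """Degree in [0, 1] to which quantity x belongs to a category whose
    core is [b, c] and whose fuzzy boundaries ramp over [a, b] and [c, d]."""
    x = np.asarray(x, dtype=float)
    rising = np.clip((x - a) / (b - a), 0.0, 1.0)
    falling = np.clip((d - x) / (d - c), 0.0, 1.0)
    return np.minimum(rising, falling)

# categorize body temperatures (deg C) as "fever" with graded confidence
temps = np.array([36.5, 37.2, 37.8, 38.5, 40.0])
print(trapezoid_membership(temps, 37.0, 38.0, 41.0, 42.0))  # 0.0 .. 1.0
```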
4. Dhanoa V, Walchshofer C, Hinterreiter A, Gröller E, Streit M. Fuzzy Spreadsheet: Understanding and Exploring Uncertainties in Tabular Calculations. IEEE Transactions on Visualization and Computer Graphics 2023; 29:1463-1477. [PMID: 34633930] [DOI: 10.1109/tvcg.2021.3119212]
Abstract
Spreadsheet-based tools provide a simple yet effective way of calculating values, which makes them the number-one choice for building and formalizing simple models for budget planning and many other applications. A cell in a spreadsheet holds one specific value and gives a discrete, overprecise view of the underlying model. Therefore, spreadsheets are of limited use when investigating the inherent uncertainties of such models and answering what-if questions. Existing extensions typically require a complex modeling process that cannot easily be embedded in a tabular layout. In Fuzzy Spreadsheet, a cell can hold and display a distribution of values. This integrated uncertainty handling immediately conveys sensitivity and robustness information. The fuzzification of the cells enables calculations not only with precise values but also with distributions and probabilities. We conservatively added and carefully crafted visuals to maintain the look and feel of a traditional spreadsheet while facilitating what-if analyses. Given a user-specified reference cell, Fuzzy Spreadsheet automatically extracts and visualizes contextually relevant information, such as impact, uncertainty, and degree of neighborhood, for the selected and related cells. To evaluate its usability and the perceived mental effort required, we conducted a user study. The results show that our approach outperforms traditional spreadsheets in terms of answer correctness, response time, and perceived mental effort in almost all tasks tested.
5. Athawale TM, Johnson CR, Sane S, Pugmire D. Fiber Uncertainty Visualization for Bivariate Data With Parametric and Nonparametric Noise Models. IEEE Transactions on Visualization and Computer Graphics 2023; 29:613-623. [PMID: 36155460] [DOI: 10.1109/tvcg.2022.3209424]
Abstract
Visualization and analysis of multivariate data and their uncertainty are top research challenges in data visualization. Constructing fiber surfaces is a popular technique for multivariate data visualization that generalizes the idea of level-set visualization for univariate data to multivariate data. In this paper, we present a statistical framework to quantify positional probabilities of fibers extracted from uncertain bivariate fields. Specifically, we extend the state-of-the-art Gaussian models of uncertainty for bivariate data to other parametric distributions (e.g., uniform and Epanechnikov) and more general nonparametric probability distributions (e.g., histograms and kernel density estimation) and derive corresponding spatial probabilities of fibers. In our proposed framework, we leverage Green's theorem for closed-form computation of fiber probabilities when bivariate data are assumed to have independent parametric and nonparametric noise. Additionally, we present a nonparametric approach combined with numerical integration to study the positional probability of fibers when bivariate data are assumed to have correlated noise. For uncertainty analysis, we visualize the derived probability volumes for fibers via volume rendering and by extracting level sets based on probability thresholds. We demonstrate the utility of our proposed techniques through experiments on synthetic and simulation datasets.
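As a hedged sketch of the quantity being derived, the following Monte Carlo estimator computes the probability that an uncertain bivariate sample falls inside the polygonal trait that defines a fiber; this is the sampling baseline that the paper's Green's-theorem closed form replaces. The uniform noise model and the trait polygon are illustrative.

```python
import numpy as np
from matplotlib.path import Path

def fiber_probability_mc(mu, halfwidth, trait_polygon, n=100_000,
                         rng=np.random.default_rng(1)):
    """Probability that an uncertain bivariate data point lies inside the
    trait polygon, assuming independent uniform noise per component:
    component k ~ U(mu[k] - halfwidth[k], mu[k] + halfwidth[k])."""
    samples = mu + rng.uniform(-1.0, 1.0, size=(n, 2)) * halfwidth
    return Path(trait_polygon).contains_points(samples).mean()

trait = [(0.4, 0.4), (0.8, 0.4), (0.8, 0.8), (0.4, 0.8)]  # trait in range space
print(fiber_probability_mc(np.array([0.5, 0.5]), np.array([0.2, 0.2]), trait))
```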
6. Information Visualisation for Antibiotic Detection Biochip Design and Testing. Processes (Basel) 2022. [DOI: 10.3390/pr10122680]
Abstract
Biochips are engineered substrates with different spots that change colour according to biochemical reactions. These spots can be read together to detect different analytes (such as different types of antibiotic, pathogens, or biological agents). While some chips are designed so that each spot on its own can detect a particular analyte, chip designs that use a combination of spots to detect different analytes can be more efficient and detect a larger number of analytes with a smaller number of spots. These types of chips can, however, be more difficult to design, as an efficient and effective combination of biosensors needs to be selected for the chip. These biosensors must be able to differentiate between a range of different analytes so that their values can be combined in a way that indicates the confidence that a particular analyte is present or not. The study described in this paper examines the potential for information visualisation to support the process of designing and reading biochips. We developed and evaluated applications that allow biologists to analyse the results of experiments aimed at detecting candidate biosensors (to be used as biochip spots) and to examine how biosensors can combine to identify different analytes. Our results demonstrate the potential of information visualisation and machine learning techniques to improve the design of biochips.
7. Rapp T, Peters C, Dachsbacher C. Image-based Visualization of Large Volumetric Data Using Moments. IEEE Transactions on Visualization and Computer Graphics 2022; 28:2314-2325. [PMID: 35442887] [DOI: 10.1109/tvcg.2022.3165346]
Abstract
We present a novel image-based representation to interactively visualize large and arbitrarily structured volumetric data. This image-based representation is created from a fixed view and models the scalar densities along each viewing ray. Then, any transfer function can be applied and changed interactively to visualize the data. In detail, we transform the density in each pixel to the Fourier basis and store Fourier coefficients of a bounded signal, i.e., bounded trigonometric moments. To keep this image-based representation compact, we adaptively determine the number of moments in each pixel and present a novel coding and quantization strategy. Additionally, we perform spatial and temporal interpolation of our image representation and discuss the visualization of the introduced uncertainties. Moreover, we use our representation to add single scattering illumination. Lastly, we achieve accurate results even with changes in the view configuration. We evaluate our approach on two large volume datasets and a time-dependent SPH dataset.
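A rough sketch of the core idea under simplifying assumptions: compute trigonometric moments (Fourier coefficients) of a density sampled along one ray and reconstruct it from the truncated expansion. The paper's bounded-signal treatment, adaptive moment counts, and coding/quantization are omitted, and the full-period parameterization below is a simplification.

```python
import numpy as np

def trigonometric_moments(density, m):
    """First m complex moments c_k = mean_j(density[j] * exp(-i*k*phi_j))
    of a ray density, with depth mapped to the phase phi in [0, 2*pi)."""
    n = len(density)
    phi = 2.0 * np.pi * (np.arange(n) + 0.5) / n
    k = np.arange(m)[:, None]
    return (density * np.exp(-1j * k * phi)).mean(axis=1)

def reconstruct(moments, n):
    """Truncated Fourier reconstruction of the ray density from its moments:
    f(phi) ~ c_0 + 2 * sum_k Re(c_k * exp(i*k*phi)) for a real signal."""
    phi = 2.0 * np.pi * (np.arange(n) + 0.5) / n
    k = np.arange(1, len(moments))[:, None]
    return moments[0].real + 2.0 * np.real(
        moments[1:, None] * np.exp(1j * k * phi)).sum(axis=0)

ray = np.exp(-((np.linspace(0, 1, 256) - 0.3) ** 2) / 0.01)  # density along a ray
approx = reconstruct(trigonometric_moments(ray, 16), 256)
print(np.abs(ray - approx).max())   # small: 16 moments suffice for this signal
```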
8. Edmunds CER, Harris AJL, Osman M. Applying Insights on Categorisation, Communication, and Dynamic Decision-Making: A Case Study of a ‘Simple’ Maritime Military Decision. Review of General Psychology 2022. [DOI: 10.1177/10892680221077242]
Abstract
A complete understanding of decision-making in military domains requires gathering insights from several fields of study. To make the task tractable, we consider here a specific example of short-term tactical decisions under uncertainty made by the military at sea. Through this lens, we sketch out the relevant literature on three psychological tasks, each underpinned by decision-making processes: categorisation, communication and choice. From the literature, we note two general cognitive tendencies that emerge across all three stages: the effect of cognitive load and individual differences. Drawing on these tendencies, we recommend strategies, tools and future research that could improve performance in military domains – and, by extension, in other high-stakes contexts. In so doing, we show the extent to which domain-general properties of higher-order cognition are sufficient to explain behaviours in domain-specific contexts.
Affiliation(s)
- Adam J. L. Harris: Department of Experimental Psychology, University College London, London, UK
- Magda Osman: Centre for Science and Policy, University of Cambridge, Cambridge, UK
9. Vohra N, Liu H, Nelson AH, Bailey K, El-Shenawee M. Hyperspectral terahertz imaging and optical clearance for cancer classification in breast tumor surgical specimen. J Med Imaging (Bellingham) 2022; 9:014002. [PMID: 35036473] [PMCID: PMC8752447] [DOI: 10.1117/1.jmi.9.1.014002]
Abstract
Purpose: We investigate the enhancement of terahertz (THz) images of freshly excised breast tumors upon treatment with an optical clearance agent. Hyperspectral imaging and spectral classification are used to quantitatively demonstrate the image enhancement. A 60% glycerol solution is applied to excised breast tumor specimens for various time durations to investigate its effectiveness for image enhancement. Approach: THz reflection spectroscopy is utilized to obtain the absorption coefficient and the index of refraction of untreated and glycerol-treated tissues at each frequency up to 3 THz. Two classifiers, spectral angular mapping (SAM) based on several kernels and Euclidean minimum distance (EMD), are implemented to evaluate the effectiveness of the treatment. The raw testing data are obtained from five breast cancer specimens: two untreated specimens and three specimens treated with glycerol solution for 20, 40, or 60 min. All tumors used in the testing data have healthy tissues adjacent to cancerous ones, consistent with the challenge faced in lumpectomy surgeries. Results: The glycerol-treated tissues showed a decrease in absorption coefficients compared with untreated tissues, especially as the period of treatment increased. Although the sensitivity metric of the classifier presented higher values in the untreated tissues compared with the treated ones, the specificity and accuracy metrics demonstrated higher values for the treated tissues compared with the untreated ones. Conclusions: The biocompatible glycerol solution is a potential optical clearance agent for THz imaging that keeps histopathology imaging intact. The SAM technique provided a good classification of cancerous tissues despite the small amount of cancer in the training data (only 7%). The SAM exponential kernel and EMD presented classification accuracies of ∼80% to 85%, compared with the linear and polynomial kernels that provided accuracies ranging from 70% to 80%. Overall, glycerol treatment provides a potential improvement in cancer classification in freshly excised breast tumors.
Affiliation(s)
- Nagma Vohra: University of Arkansas, Department of Electrical Engineering, Fayetteville, Arkansas, United States
- Haoyan Liu: University of Arkansas, Department of Computer Science and Engineering, Fayetteville, Arkansas, United States
- Alexander H. Nelson: University of Arkansas, Department of Computer Science and Engineering, Fayetteville, Arkansas, United States
- Keith Bailey: Charles River Laboratory, Mattawan, Michigan, United States
- Magda El-Shenawee: University of Arkansas, Department of Electrical Engineering, Fayetteville, Arkansas, United States
10. Athawale TM, Ma B, Sakhaee E, Johnson CR, Entezari A. Direct Volume Rendering with Nonparametric Models of Uncertainty. IEEE Transactions on Visualization and Computer Graphics 2021; 27:1797-1807. [PMID: 33052857] [DOI: 10.1109/tvcg.2020.3030394]
Abstract
We present a nonparametric statistical framework for the quantification, analysis, and propagation of data uncertainty in direct volume rendering (DVR). The state-of-the-art statistical DVR framework allows for preserving the transfer function (TF) of the ground truth function when visualizing uncertain data; however, the existing framework is restricted to parametric models of uncertainty. In this paper, we address this limitation by extending the statistical DVR framework to nonparametric distributions. We exploit the quantile interpolation technique to derive probability distributions representing uncertainty in viewing-ray sample intensities in closed form, which allows for accurate and efficient computation. We evaluate our proposed nonparametric statistical models through qualitative and quantitative comparisons with the mean-field and parametric statistical models, such as uniform and Gaussian, as well as Gaussian mixtures. In addition, we extend the state-of-the-art parametric rendering framework to 2D TFs for improved DVR classification. We show the applicability of our uncertainty quantification framework to ensemble, downsampled, and bivariate versions of scalar field datasets.
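A small sketch of quantile interpolation, the technique the framework exploits: interpolate matching quantiles of the two per-voxel sample sets rather than averaging their histograms, giving samples of the distribution at any fraction t along the ray segment. Sample sets and the quantile count are illustrative.

```python
import numpy as np

def quantile_interpolate(samples_a, samples_b, t, n_q=64):
    """Nonparametric interpolation between two per-voxel sample sets by
    linearly interpolating matching quantiles (inverse CDF values)."""
    q = (np.arange(n_q) + 0.5) / n_q
    qa = np.quantile(samples_a, q)
    qb = np.quantile(samples_b, q)
    return (1.0 - t) * qa + t * qb   # samples of the interpolated distribution

rng = np.random.default_rng(2)
a = rng.normal(0.0, 1.0, 500)        # ensemble values at voxel A
b = rng.normal(4.0, 0.5, 500)        # ensemble values at voxel B
mid = quantile_interpolate(a, b, 0.5)
print(mid.mean(), mid.std())         # ~2.0 and ~0.75, as quantile interpolation predicts
```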
11. Bruder V, Müller C, Frey S, Ertl T. On Evaluating Runtime Performance of Interactive Visualizations. IEEE Transactions on Visualization and Computer Graphics 2020; 26:2848-2862. [PMID: 30763241] [DOI: 10.1109/tvcg.2019.2898435]
Abstract
As our field matures, the evaluation of visualization techniques has extended from reporting runtime performance to studying user behavior. Consequently, many methodologies and best practices for user studies have evolved. While maintaining interactivity continues to be crucial for the exploration of large data sets, no similar methodological foundation for evaluating runtime performance has been developed. Our analysis of 50 recent visualization papers on new or improved techniques for rendering volumes or particles indicates that only a very limited set of parameters (different data sets, camera paths, viewport sizes, and GPUs) is investigated, which makes comparison with other techniques or generalization to other parameter ranges questionable at best. To derive a deeper understanding of qualitative runtime behavior and quantitative parameter dependencies, we developed a framework for the most exhaustive performance evaluation of volume and particle visualization techniques that we are aware of, including millions of measurements on ten different GPUs. This paper reports our insights from a statistical analysis of these data, discussing independent and linear parameter behavior as well as non-obvious effects. We give recommendations for best practices when evaluating the runtime performance of scientific visualization applications, which can serve as a starting point for more elaborate models of performance quantification.
12. Visual Analytics for the Representation, Exploration, and Analysis of High-Dimensional, Multi-faceted Medical Data. Advances in Experimental Medicine and Biology 2019; 1138:137-162. [PMID: 31313263] [DOI: 10.1007/978-3-030-14227-8_10]
Abstract
Medicine is among those research fields with a significant impact on humans and their health. For decades, medicine has maintained a tight coupling with the visualization domain, proving the importance of developing visualization techniques designed exclusively for this research discipline. However, medical data is steadily increasing in complexity with the appearance of heterogeneous, multi-modal, multi-parametric, cohort or population, as well as uncertain data. To deal with this kind of complex data, the field of Visual Analytics has emerged. In this chapter, we discuss the many dimensions and facets of medical data. Based on this classification, we provide a general overview of state-of-the-art visualization systems and solutions dealing with high-dimensional, multi-faceted data. Our particular focus is on multi-modal, multi-parametric data, on data from cohort or population studies, and on uncertain data, especially with respect to Visual Analytics applications for the representation, exploration, and analysis of high-dimensional, multi-faceted medical data.
13.
Abstract
Due to the reconstruction process in all image capturing methods, image data is inherently affected by uncertainty: the underlying image reconstruction model is not capable of mapping all physical properties in their entirety. In order to be aware of these effects, image uncertainty needs to be quantified and propagated along the entire image processing pipeline. In classical image processing methodologies, pre-processing algorithms do not consider this information. This paper therefore presents an uncertainty-aware image pre-processing paradigm that is aware of the input image's uncertainty and propagates it through the entire pipeline. To accomplish this, we utilize rules for the transformation and propagation of uncertainty to incorporate this additional information into a variety of operations. As a result, we are able to adapt prominent image pre-processing algorithms so that they consider the input image's uncertainty. Furthermore, we allow the composition of arbitrary image pre-processing pipelines and visually encode the accumulated uncertainty throughout the pipeline. The effectiveness of the approach is demonstrated by creating image pre-processing pipelines for a variety of real-world datasets.
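As a hedged example of one such propagation rule (a linear smoothing filter under independent per-pixel noise, not necessarily the paper's exact formulation), the mean image passes through the filter weights while the variance passes through their squares:

```python
import numpy as np
from scipy.ndimage import convolve

def blur_with_uncertainty(mean_img, var_img, kernel):
    """Propagate per-pixel uncertainty through a linear filter: for
    independent noise, y = sum_k w_k * x_k gives E[y] = sum_k w_k E[x_k]
    and Var[y] = sum_k w_k^2 Var[x_k]."""
    out_mean = convolve(mean_img, kernel, mode="nearest")
    out_var = convolve(var_img, kernel ** 2, mode="nearest")
    return out_mean, out_var

box = np.full((3, 3), 1.0 / 9.0)        # 3x3 box blur
rng = np.random.default_rng(9)
m, v = blur_with_uncertainty(rng.random((8, 8)), np.full((8, 8), 0.01), box)
print(v[4, 4])                           # 0.01/9: smoothing reduces variance
```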
14. Hullman J, Qiao X, Correll M, Kale A, Kay M. In Pursuit of Error: A Survey of Uncertainty Visualization Evaluation. IEEE Transactions on Visualization and Computer Graphics 2018; 25:903-913. [PMID: 30207956] [DOI: 10.1109/tvcg.2018.2864889]
Abstract
Understanding and accounting for uncertainty is critical to effectively reasoning about visualized data. However, evaluating the impact of an uncertainty visualization is complex due to the difficulties that people have interpreting uncertainty and the challenge of defining correct behavior with uncertainty information. Currently, evaluators of uncertainty visualization must rely on general-purpose visualization evaluation frameworks, which can be ill-equipped to provide guidance with the unique difficulties of assessing judgments under uncertainty. To help evaluators navigate these complexities, we present a taxonomy for characterizing decisions made in designing an evaluation of an uncertainty visualization. Our taxonomy differentiates six levels of decisions that comprise an uncertainty visualization evaluation: the behavioral targets of the study, expected effects from an uncertainty visualization, evaluation goals, measures, elicitation techniques, and analysis approaches. Applying our taxonomy to 86 user studies of uncertainty visualizations, we find that existing evaluation practice, particularly in visualization research, focuses on Performance and Satisfaction-based measures that assume more predictable and statistically-driven judgment behavior than is suggested by research on human judgment and decision making. We reflect on common themes in evaluation practice concerning the interpretation and semantics of uncertainty, the use of confidence reporting, and a bias toward evaluating performance as accuracy rather than decision quality. We conclude with a concrete set of recommendations for evaluators designed to reduce the mismatch between the conceptualization of uncertainty in visualization versus other fields.
15. Athawale T, Johnson CR. Probabilistic Asymptotic Decider for Topological Ambiguity Resolution in Level-Set Extraction for Uncertain 2D Data. IEEE Transactions on Visualization and Computer Graphics 2018; 25. [PMID: 30130200] [PMCID: PMC6382610] [DOI: 10.1109/tvcg.2018.2864505]
Abstract
We present a framework for the analysis of uncertainty in isocontour extraction. The marching squares (MS) algorithm for isocontour reconstruction generates a linear topology that is consistent with hyperbolic curves of a piecewise bilinear interpolation. The saddle points of the bilinear interpolant cause topological ambiguity in isocontour extraction. The midpoint decider and the asymptotic decider are well-known mathematical techniques for resolving topological ambiguities; the latter investigates the data values at the cell saddle points for ambiguity resolution. Uncertainty in the data, however, leads to uncertainty in the underlying bilinear interpolation functions for the MS algorithm and, hence, in their saddle points. In our work, we study the behavior of the asymptotic decider when data at grid vertices are uncertain. First, we derive closed-form distributions characterizing variations in the saddle point values for uncertain bilinear interpolants. The derivation assumes uniform and nonparametric noise models and exploits the concept of ratio distributions for analytic formulations. Next, the probabilistic asymptotic decider is devised for ambiguity resolution in uncertain data using the distributions of the saddle point values derived in the first step. Finally, the confidence in probabilistic topological decisions is visualized using a colormapping technique. Through the isocontour visualization of synthetic and real datasets, we demonstrate the higher accuracy and stability of the probabilistic asymptotic decider on uncertain data relative to existing decision frameworks, such as deciders in the mean field and the probabilistic midpoint decider.
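A minimal Monte Carlo sketch of the quantity the paper characterizes in closed form: the probability that the bilinear interpolant's saddle value exceeds the isovalue, which is exactly what the asymptotic decider consults. The uniform noise model and corner ranges are illustrative.

```python
import numpy as np

def prob_saddle_above(iso, lo, hi, n=200_000, rng=np.random.default_rng(3)):
    """Probability that the saddle value of the bilinear interpolant over a
    cell exceeds the isovalue, for independent uniform noise at the corners
    f00, f10, f01, f11 ~ U(lo[k], hi[k]).  The asymptotic decider compares
    s = (f00*f11 - f10*f01) / (f00 + f11 - f10 - f01) against the isovalue."""
    f00, f10, f01, f11 = rng.uniform(lo, hi, size=(n, 4)).T
    denom = f00 + f11 - f10 - f01
    valid = np.abs(denom) > 1e-12        # the saddle exists only if denom != 0
    s = (f00[valid] * f11[valid] - f10[valid] * f01[valid]) / denom[valid]
    return np.mean(s > iso)

# ambiguous cell: two diagonal corners above the isovalue 0.5, two below
lo = np.array([0.55, 0.30, 0.35, 0.60])
hi = np.array([0.75, 0.50, 0.45, 0.80])
print(prob_saddle_above(0.5, lo, hi))
```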
Affiliation(s)
- Tushar Athawale: Scientific Computing & Imaging (SCI) Institute at the University of Utah
- Chris R. Johnson: Scientific Computing & Imaging (SCI) Institute at the University of Utah
16. Kale A, Nguyen F, Kay M, Hullman J. Hypothetical Outcome Plots Help Untrained Observers Judge Trends in Ambiguous Data. IEEE Transactions on Visualization and Computer Graphics 2018; 25:892-902. [PMID: 30136961] [DOI: 10.1109/tvcg.2018.2864909]
Abstract
Animated representations of outcomes drawn from distributions (hypothetical outcome plots, or HOPs) are used in the media and other public venues to communicate uncertainty. HOPs greatly improve multivariate probability estimation over conventional static uncertainty visualizations and leverage the ability of the visual system to quickly, accurately, and automatically process the summary statistical properties of ensembles. However, it is unclear how well HOPs support applied tasks resembling real-world judgments posed in uncertainty communication. We identify and motivate an appropriate task to investigate realistic judgments of uncertainty in the public domain through a qualitative analysis of uncertainty visualizations in the news. We contribute two crowdsourced experiments comparing the effectiveness of HOPs, error bars, and line ensembles for supporting perceptual decision-making from visualized uncertainty. Participants infer which of two possible underlying trends is more likely to have produced a sample of time series data by referencing uncertainty visualizations that depict the two trends with variability due to sampling error. By modeling each participant's accuracy as a function of the level of evidence presented over many repeated judgments, we find that observers are able to correctly infer the underlying trend in samples conveying a lower level of evidence when using HOPs rather than static aggregate uncertainty visualizations as a decision aid. Modeling approaches like ours contribute theoretically grounded and richly descriptive accounts of user perceptions to visualization evaluation.
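A toy HOPs sketch, assuming Gaussian uncertainty on two quantities: each animation frame shows one random draw per quantity, so variability is conveyed over time rather than by a static interval. The file name and all parameters are invented.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

rng = np.random.default_rng(4)
mean_a, mean_b, sd = 3.0, 3.6, 1.0            # two uncertain quantities

fig, ax = plt.subplots()
bars = ax.bar(["A", "B"], [mean_a, mean_b])
ax.set_ylim(0, 8)

def draw_outcome(frame):
    # one hypothetical outcome per frame instead of a static error bar
    for bar, mu in zip(bars, (mean_a, mean_b)):
        bar.set_height(rng.normal(mu, sd))
    return bars

anim = FuncAnimation(fig, draw_outcome, frames=30, interval=400)
anim.save("hops.gif", writer="pillow")        # requires the pillow package
```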
17. Sakhaee E, Entezari A. A Statistical Direct Volume Rendering Framework for Visualization of Uncertain Data. IEEE Transactions on Visualization and Computer Graphics 2017; 23:2509-2520. [PMID: 27959812] [DOI: 10.1109/tvcg.2016.2637333]
Abstract
With uncertainty present in almost all modalities of data acquisition, reduction, transformation, and representation, there is a growing demand for mathematical analysis of uncertainty propagation in data processing pipelines. In this paper, we present a statistical framework for the quantification of uncertainty and its propagation in the main stages of the visualization pipeline. We propose a novel generalization of Irwin-Hall distributions from the statistical viewpoint of splines and box-splines, which enables the interpolation of random variables. Moreover, we introduce a probabilistic transfer function classification model that allows for incorporating probability density functions into the volume rendering integral. Our statistical framework allows for incorporating distributions from various sources of uncertainty, which makes it suitable for a wide range of visualization applications. We demonstrate the effectiveness of our approach in the visualization of ensemble data, the visualization of large datasets at reduced scale, isosurface extraction, and the visualization of noisy data.
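A small numerical sketch of the Irwin-Hall idea as it arises here: linearly interpolating i.i.d. uniform random variables gives a density obtained by convolving scaled boxes (the classic Irwin-Hall density for equal weights). This assumes barycentric weights summing to one; the grid resolution is arbitrary.

```python
import numpy as np

def interpolated_uniform_pdf(weights, n_grid=2001):
    """PDF of sum_k w_k * X_k with X_k ~ U(0, 1) i.i.d. and sum_k w_k = 1,
    i.e. the density of a linear interpolant of uniform random variables,
    computed by convolving the box densities of the scaled terms."""
    x = np.linspace(0.0, 1.0, n_grid)
    dx = x[1] - x[0]
    pdf = ((x >= 0) & (x <= weights[0])).astype(float) / weights[0]
    for w in weights[1:]:
        box = ((x >= 0) & (x <= w)).astype(float) / w   # density of w * U(0, 1)
        pdf = np.convolve(pdf, box)[:n_grid] * dx       # full conv, truncated
    return x, pdf

# midpoint of a linear interpolation between two uncertain samples
x, pdf = interpolated_uniform_pdf([0.5, 0.5])
print(np.trapz(pdf, x))   # ~1.0: a valid density (triangle on [0, 1])
```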
18. The status of augmented reality in laparoscopic surgery as of 2016. Med Image Anal 2017; 37:66-90. [DOI: 10.1016/j.media.2017.01.007]
19. Chen YT. A novel approach to segmentation and measurement of medical image using level set methods. Magn Reson Imaging 2017; 39:175-193. [PMID: 28219649] [DOI: 10.1016/j.mri.2017.02.008]
Abstract
The study proposes a novel approach for segmentation and visualization, plus value-added surface area and volume measurements, for brain medical image analysis. The proposed method comprises edge detection and Bayesian-based level set segmentation, surface and volume rendering, and surface area and volume measurements for 3D objects of interest (i.e., brain tumor, brain tissue, or the whole brain). Two extensions based on edge detection and the Bayesian level set are first used to segment 3D objects. Ray casting and a modified marching cubes algorithm are then adopted to facilitate volume and surface visualization of the medical-image dataset. To provide physicians with more useful information for diagnosis, the surface area and volume of an examined 3D object are calculated using techniques of linear algebra and surface integration. Experimental results are reported in terms of 3D object extraction, surface and volume rendering, and surface area and volume measurements for medical image analysis.
Affiliation(s)
- Yao-Tien Chen: Department of Applied Mobile Technology, Yuanpei University of Medical Technology, No. 306, Yuanpei St., HsinChu City 30015, Taiwan
20. Jönsson D, Ynnerman A. Correlated Photon Mapping for Interactive Global Illumination of Time-Varying Volumetric Data. IEEE Transactions on Visualization and Computer Graphics 2017; 23:901-910. [PMID: 27514045] [DOI: 10.1109/tvcg.2016.2598430]
Abstract
We present a method for interactive global illumination of both static and time-varying volumetric data based on reducing the overhead associated with re-computation of photon maps. Our method identifies photon traces that are invariant to changes of visual parameters, such as the transfer function (TF), or to data changes between time-steps in a 4D volume. This lets us operate on the variant subset of the entire photon distribution, so the amount of computation required in the two stages of the photon mapping process, namely tracing and gathering, is reduced to the subset affected by a data or visual parameter change. We rely on two different types of information from the original data to identify the regions that have changed. A low-resolution uniform grid containing the minimum and maximum data values of the original data is derived for each time step. Similarly, for two consecutive time-steps, a low-resolution grid containing the difference between the overlapping data is used. We show that this compact metadata can be combined with the transfer function to identify the regions that have changed. Each photon traverses the low-resolution grid to identify whether it can be directly transferred to the next photon distribution state or needs to be recomputed. An efficient representation of the photon distribution is presented, leading to an order-of-magnitude performance improvement in the raycasting step. The utility of the method is demonstrated in several examples that show visual fidelity as well as performance. The examples show that visual quality can be retained when the fraction of retraced photons is as low as 40%-50%.
21. Athawale T, Sakhaee E, Entezari A. Isosurface Visualization of Data with Nonparametric Models for Uncertainty. IEEE Transactions on Visualization and Computer Graphics 2016; 22:777-786. [PMID: 26529727] [DOI: 10.1109/tvcg.2015.2467958]
Abstract
The problem of isosurface extraction in uncertain data is an important research problem and may be approached in two ways. One can extract statistics (e.g., the mean) from uncertain data points and visualize the extracted field. Alternatively, data uncertainty, characterized by probability distributions, can be propagated through the isosurface extraction process. We analyze the impact of data uncertainty on topology and geometry extraction algorithms. A novel edge-crossing-probability-based approach is proposed to predict the underlying isosurface topology for uncertain data. We derive a probabilistic version of the midpoint decider that resolves ambiguities arising in identifying topological configurations. Moreover, the probability density function characterizing positional uncertainty in isosurfaces is derived analytically for a broad class of nonparametric distributions. This analytic characterization can be used for efficient closed-form computation of the expected value and variation in geometry. Our experiments show the computational advantages of our analytic approach over Monte Carlo sampling for characterizing positional uncertainty. We also show, through experiments on ensemble datasets and uncertain scalar fields, the advantage of modeling underlying error densities in a nonparametric statistical framework as opposed to a parametric one.
22. Ferstl F, Bürger K, Westermann R. Streamline Variability Plots for Characterizing the Uncertainty in Vector Field Ensembles. IEEE Transactions on Visualization and Computer Graphics 2016; 22:767-776. [PMID: 26390476] [DOI: 10.1109/tvcg.2015.2467204]
Abstract
We present a new method for visualizing, from an ensemble of flow fields, the statistical properties of streamlines passing through a selected location. We use principal component analysis to transform the set of streamlines into a low-dimensional Euclidean space. In this space, the streamlines are clustered into major trends, and each cluster is in turn approximated by a multivariate Gaussian distribution. This yields a probabilistic mixture model for the streamline distribution, from which confidence regions can be derived in which the streamlines are most likely to reside. This is achieved by transforming the Gaussian random distributions from the low-dimensional Euclidean space into a streamline distribution that follows the statistical model, and by visualizing confidence regions in this distribution via iso-contours. We further make use of the principal component representation to introduce a new concept of streamline median, based on existing median concepts in multidimensional Euclidean spaces. We demonstrate the potential of our method in a number of real-world examples, and we compare our results to alternative clustering approaches for particle trajectories as well as curve boxplots.
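A condensed sketch of the pipeline under simplifying assumptions (synthetic 2D streamlines, k-means standing in for the paper's clustering step): flatten each streamline into a vector, embed with PCA, cluster into trends, and fit a Gaussian per trend as the basis for confidence regions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)

# toy ensemble: 60 streamlines through one seed point, 50 samples each,
# drawn from two distinct flow trends plus noise
t = np.linspace(0, 1, 50)
trend1 = np.stack([t, 0.5 * t ** 2], axis=1)
trend2 = np.stack([t, -0.4 * t], axis=1)
lines = np.array([(trend1 if i % 2 else trend2) + rng.normal(0, 0.03, (50, 2))
                  for i in range(60)])

X = lines.reshape(len(lines), -1)          # each streamline becomes one vector
Z = PCA(n_components=3).fit_transform(X)   # low-dimensional Euclidean space
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Z)

# per-trend multivariate Gaussian in PCA space, the basis for confidence regions
for c in range(2):
    Zc = Z[labels == c]
    mu, cov = Zc.mean(axis=0), np.cov(Zc.T)    # Gaussian model of this trend
    print(f"trend {c}: {len(Zc)} members, mean {np.round(mu, 2)}")
```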
23. Chen H, Zhang S, Chen W, Mei H, Zhang J, Mercer A, Liang R, Qu H. Uncertainty-Aware Multidimensional Ensemble Data Visualization and Exploration. IEEE Transactions on Visualization and Computer Graphics 2015; 21:1072-1086. [PMID: 26357288] [DOI: 10.1109/tvcg.2015.2410278]
Abstract
This paper presents an efficient visualization and exploration approach for modeling and characterizing the relationships and uncertainties in the context of a multidimensional ensemble dataset. Its core is a novel dissimilarity-preserving projection technique that characterizes not only the relationships among the mean values of the ensemble data objects but also the relationships among the distributions of ensemble members. This uncertainty-aware projection scheme leads to an improved understanding of the intrinsic structure in an ensemble dataset. The analysis of the ensemble dataset is further augmented by a suite of visual encoding and exploration tools. Experimental results on both artificial and real-world datasets demonstrate the effectiveness of our approach.
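The paper's projection technique is not reproduced here; the sketch below conveys the general idea of a dissimilarity-preserving, uncertainty-aware projection by feeding pairwise distances between member distributions (the Wasserstein distance is a stand-in metric, not the paper's choice) into classical MDS.

```python
import numpy as np
from scipy.stats import wasserstein_distance
from sklearn.manifold import MDS

rng = np.random.default_rng(6)
# 20 data objects, each carrying an ensemble of 100 member values
objects = [rng.normal(mu, sd, 100)
           for mu, sd in zip(rng.uniform(0, 5, 20), rng.uniform(0.2, 2.0, 20))]

n = len(objects)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        # dissimilarity between member distributions, not just between means
        D[i, j] = D[j, i] = wasserstein_distance(objects[i], objects[j])

# dissimilarity-preserving 2D layout of the ensemble objects
xy = MDS(n_components=2, dissimilarity="precomputed",
         random_state=0).fit_transform(D)
print(xy[:3])
```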
24. Schroeder D, Korsakov F, Knipe CMP, Thorson L, Ellingson AM, Nuckley D, Carlis J, Keefe DF. Trend-Centric Motion Visualization: Designing and Applying a New Strategy for Analyzing Scientific Motion Collections. IEEE Transactions on Visualization and Computer Graphics 2014; 20:2644-2653. [PMID: 26356978] [PMCID: PMC5307926] [DOI: 10.1109/tvcg.2014.2346451]
Abstract
In biomechanics studies, researchers collect, via experiments or simulations, datasets with hundreds or thousands of trials, each describing the same type of motion (e.g., a neck flexion-extension exercise) but under different conditions (e.g., different patients, different disease states, pre- and post-treatment). Analyzing similarities and differences across all of the trials in these collections is a major challenge. Visualizing a single trial at a time does not work, and the typical alternative of juxtaposing multiple trials in a single visual display leads to complex, difficult-to-interpret visualizations. We address this problem via a new strategy that organizes the analysis around motion trends rather than trials. This new strategy matches the cognitive approach that scientists would like to take when analyzing motion collections. We introduce several technical innovations making trend-centric motion visualization possible. First, an algorithm detects a motion collection's trends via time-dependent clustering. Second, a 2D graphical technique visualizes how trials leave and join trends. Third, a 3D graphical technique, using a median 3D motion plus a visual variance indicator, visualizes the biomechanics of the set of trials within each trend. These innovations are combined to create an interactive exploratory visualization tool, which we designed through an iterative process in collaboration with both domain scientists and a traditionally-trained graphic designer. We report on insights generated during this design process and demonstrate the tool's effectiveness via a validation study with synthetic data and feedback from expert musculoskeletal biomechanics researchers who used the tool to analyze the effects of disc degeneration on human spinal kinematics.
25. Demir I, Dick C, Westermann R. Multi-Charts for Comparative 3D Ensemble Visualization. IEEE Transactions on Visualization and Computer Graphics 2014; 20:2694-2703. [PMID: 26356983] [DOI: 10.1109/tvcg.2014.2346448]
Abstract
A comparative visualization of multiple volume data sets is challenging due to inherent occlusion effects, yet it is important to effectively reveal uncertainties, correlations, and reliable trends in 3D ensemble fields. In this paper, we present bidirectional linking of multi-charts and volume visualization as a means to visually analyze 3D scalar ensemble fields at the data level. Multi-charts are an extension of conventional bar and line charts: they linearize the 3D data points along a space-filling curve and draw them as multiple charts in the same plot area. The bar charts encode statistical information on ensemble members, such as histograms and probability densities, and line charts are overlaid to allow comparing members against the ensemble. Alternative linearizations based on histogram similarities or ensemble variation allow clustering of spatial locations depending on data distribution. Multi-charts organize the data at multiple scales to quickly provide overviews, and they enable users to interactively select regions exhibiting interesting behavior. They are further put into a spatial context by allowing the user to brush or query value intervals and specific distributions, and to simultaneously visualize the corresponding spatial points via volume rendering. By providing a picking mechanism in 3D and instantly highlighting the corresponding data points in the chart, the user can go back and forth between the abstract and the 3D view to focus the analysis.
26. Lindholm S, Jönsson D, Hansen C, Ynnerman A. Boundary Aware Reconstruction of Scalar Fields. IEEE Transactions on Visualization and Computer Graphics 2014; 20:2447-2455. [PMID: 26356958] [DOI: 10.1109/tvcg.2014.2346351]
Abstract
In visualization, data reconstruction and its classification together play a crucial role. In this paper, we propose a novel approach that improves the classification of different materials and their boundaries by combining information from the classifiers at the reconstruction stage. Our approach estimates the targeted materials' local support before performing multiple material-specific reconstructions that prevent much of the misclassification traditionally associated with transitional regions and transfer function (TF) design. With respect to previously published methods, our approach offers a number of improvements and advantages. For one, it does not rely on TFs acting on derivative expressions; it is therefore less sensitive to noisy data, and the classification of a single material does not depend on specialized TF widgets or on specifying regions in a multidimensional TF. Additionally, improved classification is attained without increasing TF dimensionality, which promotes scalability to multivariate data. These aspects are also key to maintaining low interaction complexity. The results are simple-to-achieve visualizations that better comply with the user's understanding of discrete features within the studied object.
27. Athawale T, Entezari A. Uncertainty quantification in linear interpolation for isosurface extraction. IEEE Transactions on Visualization and Computer Graphics 2013; 19:2723-2732. [PMID: 24051839] [DOI: 10.1109/tvcg.2013.208]
Abstract
We present a study of linear interpolation when applied to uncertain data. Linear interpolation is a key step for isosurface extraction algorithms, and the uncertainties in the data lead to non-linear variations in the geometry of the extracted isosurface. We present an approach for deriving the probability density function of a random variable modeling the positional uncertainty in the isosurface extraction. When the uncertainty is quantified by a uniform distribution, our approach provides a closed-form characterization of the mentioned random variable. This allows us to derive, in closed form, the expected value as well as the variance of the level-crossing position. While the former quantity is used for constructing a stable isosurface for uncertain data, the latter is used for visualizing the positional uncertainties in the expected isosurface level crossings on the underlying grid.
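A Monte Carlo reference for the random variable the paper characterizes in closed form: the level-crossing position t = (c - A) / (B - A) along a cell edge with uniformly distributed endpoint values, conditioned on the edge actually being crossed. Endpoint ranges are illustrative.

```python
import numpy as np

def crossing_statistics(a_lo, a_hi, b_lo, b_hi, iso,
                        n=500_000, rng=np.random.default_rng(7)):
    """Mean and variance of the level-crossing position on an edge whose
    endpoint values are uncertain, A ~ U(a_lo, a_hi), B ~ U(b_lo, b_hi),
    given that A and B lie on opposite sides of the isovalue."""
    A = rng.uniform(a_lo, a_hi, n)
    B = rng.uniform(b_lo, b_hi, n)
    crossed = (A - iso) * (B - iso) < 0
    t = (iso - A[crossed]) / (B[crossed] - A[crossed])
    return t.mean(), t.var()

# endpoint values straddle the isovalue 0.5 with overlapping uncertainty
print(crossing_statistics(0.1, 0.6, 0.4, 0.9, 0.5))
```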
28. Pfaffelmoser T, Mihai M, Westermann R. Visualizing the variability of gradients in uncertain 2D scalar fields. IEEE Transactions on Visualization and Computer Graphics 2013; 19:1948-1961. [PMID: 24029913] [DOI: 10.1109/tvcg.2013.92]
Abstract
In uncertain scalar fields, where data values vary with a certain probability, the strength of this variability indicates the confidence in the data. It does not, however, allow inferring the effect of uncertainty on differential quantities such as the gradient, which depend on the variability of the rate of change of the data. Analyzing the variability of gradients is nonetheless more complicated, since, unlike scalars, gradients vary in both strength and direction. This requires first the mathematical derivation of their respective value ranges, and then the development of effective analysis techniques for these ranges. This paper takes a first step in this direction: based on the stochastic modeling of uncertainty via multivariate random variables, we start by deriving uncertainty parameters, such as the mean and the covariance matrix, for gradients in uncertain discrete scalar fields. We do not make any assumption about the distribution of the random variables. Then, for the first time to the best of our knowledge, we develop a mathematical framework for computing confidence intervals for both the gradient orientation and the strength of the derivative in any prescribed direction, for instance, the mean gradient direction. While this framework generalizes to 3D uncertain scalar fields, we concentrate on the visualization of the resulting intervals in 2D fields. We propose a novel color diffusion scheme to visualize both the absolute variability of the derivative strength and its magnitude relative to the mean values. A special family of circular glyphs is introduced to convey the uncertainty in gradient orientation. For a number of synthetic and real-world data sets, we demonstrate the use of our approach for analyzing the stability of certain features in uncertain 2D scalar fields, with respect to both local derivatives and feature orientation.
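Because a finite-difference gradient is linear in the data, its mean and covariance follow exactly from the data's mean and covariance without any distributional assumption, which is the paper's starting point. A minimal stencil-level sketch (the stencil layout and noise values are illustrative):

```python
import numpy as np

def gradient_mean_cov(mu, cov):
    """Mean and covariance of a central-difference gradient.  With values
    ordered [left, right, bottom, top] and g = G f for a linear operator G,
    E[g] = G mu and Cov[g] = G Cov[f] G^T, for any noise distribution."""
    G = np.array([[-0.5, 0.5,  0.0, 0.0],    # d/dx via central differences
                  [ 0.0, 0.0, -0.5, 0.5]])   # d/dy
    return G @ mu, G @ cov @ G.T

mu = np.array([1.0, 2.0, 0.5, 2.5])          # mean values on the stencil
cov = np.diag([0.1, 0.1, 0.4, 0.4])          # independent, anisotropic noise
g_mean, g_cov = gradient_mean_cov(mu, cov)
print(g_mean, np.linalg.eigvalsh(g_cov))     # orientation/strength variability
```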
29. Fout N, Ma KL. Fuzzy Volume Rendering. IEEE Transactions on Visualization and Computer Graphics 2012; 18:2335-2344. [PMID: 26357141] [DOI: 10.1109/tvcg.2012.227]
Abstract
In order to assess the reliability of volume rendering, it is necessary to consider the uncertainty associated with the volume data and how it is propagated through the volume rendering algorithm, as well as the contribution to uncertainty from the rendering algorithm itself. In this work, we show how to apply concepts from the field of reliable computing in order to build a framework for management of uncertainty in volume rendering, with the result being a self-validating computational model to compute a posteriori uncertainty bounds. We begin by adopting a coherent, unifying possibility-based representation of uncertainty that is able to capture the various forms of uncertainty that appear in visualization, including variability, imprecision, and fuzziness. Next, we extend the concept of the fuzzy transform in order to derive rules for accumulation and propagation of uncertainty. This representation and propagation of uncertainty together constitute an automated framework for management of uncertainty in visualization, which we then apply to volume rendering. The result, which we call fuzzy volume rendering, is an uncertainty-aware rendering algorithm able to produce more complete depictions of the volume data, thereby allowing more reliable conclusions and informed decisions. Finally, we compare approaches for self-validated computation in volume rendering, demonstrating that our chosen method has the ability to handle complex uncertainty while maintaining efficiency.
30. Kersten-Oertel M, Jannin P, Collins DL. DVV: a taxonomy for mixed reality visualization in image guided surgery. IEEE Transactions on Visualization and Computer Graphics 2012; 18:332-352. [PMID: 21383411] [DOI: 10.1109/tvcg.2011.50]
Abstract
Mixed reality visualizations are increasingly studied for use in image guided surgery (IGS) systems, yet few mixed reality systems have been introduced for daily use into the operating room (OR). This may be the result of several factors: the systems are developed from a technical perspective, are rarely evaluated in the field, and/or lack consideration of the end user and the constraints of the OR. We introduce the Data, Visualization processing, View (DVV) taxonomy which defines each of the major components required to implement a mixed reality IGS system. We propose that these components be considered and used as validation criteria for introducing a mixed reality IGS system into the OR. A taxonomy of IGS visualization systems is a step toward developing a common language that will help developers and end users discuss and understand the constituents of a mixed reality visualization system, facilitating a greater presence of future systems in the OR. We evaluate the DVV taxonomy based on its goodness of fit and completeness. We demonstrate the utility of the DVV taxonomy by classifying 17 state-of-the-art research papers in the domain of mixed reality visualization IGS systems. Our classification shows that few IGS visualization systems' components have been validated and even fewer are evaluated.
Affiliation(s)
- Marta Kersten-Oertel: McConnell Brain Imaging Center at the Montreal Neurological Institute (MNI), 3801 University St, Montréal, QC H3A 2B4, Canada
31. From Quantification to Visualization: A Taxonomy of Uncertainty Visualization Approaches. IFIP Advances in Information and Communication Technology 2012; 377:226-249. [PMID: 25663949] [DOI: 10.1007/978-3-642-32677-6_15]
Abstract
Quantifying uncertainty is an increasingly important topic across many domains. The uncertainties present in data come with many diverse representations, having originated from a wide variety of disciplines. Communicating these uncertainties is a task often left to visualization, without a clear connection between the quantification and the visualization. In this paper, we first identify frequently occurring types of uncertainty. Second, we connect those uncertainty representations to ones commonly used in visualization. We then look at various approaches to visualizing this uncertainty by partitioning the work based on the dimensionality of the data and the dimensionality of the uncertainty. We also discuss noteworthy exceptions to our taxonomy, along with future research directions for the uncertainty visualization community.
32. Schneider D, Fuhrmann J, Reich W, Scheuermann G. A Variance Based FTLE-Like Method for Unsteady Uncertain Vector Fields. Mathematics and Visualization 2012. [DOI: 10.1007/978-3-642-23175-9_17]
33. Zhang Q, Eagleson R, Peters TM. Volume visualization: a technical overview with a focus on medical applications. J Digit Imaging 2011; 24:640-664. [PMID: 20714917] [DOI: 10.1007/s10278-010-9321-6]
Abstract
With the increasing availability of high-resolution isotropic three- or four-dimensional medical datasets from sources such as magnetic resonance imaging, computed tomography, and ultrasound, volumetric image visualization techniques have increased in importance. Over the past two decades, a number of new algorithms and improvements have been developed for practical clinical image display. More recently, further efficiencies have been attained by designing and implementing volume-rendering algorithms on graphics processing units (GPUs). In this paper, we review volumetric image visualization pipelines, algorithms, and medical applications. We also illustrate our algorithm implementation and evaluation results, and address the advantages and drawbacks of each algorithm in terms of image quality and efficiency. Within the outlined literature review, we have integrated our research results relating to new visualization, classification, enhancement, and dynamic rendering of multimodal data. Finally, we discuss issues related to modern GPU working pipelines and their applications in the volume visualization domain.
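For readers new to the pipelines reviewed here, a minimal front-to-back compositing sketch of one viewing ray, the core loop of a direct volume renderer; the toy transfer function and step-size correction are illustrative, not drawn from any surveyed system.

```python
import numpy as np

def raycast_composite(densities, transfer_function, step=1.0):
    """Front-to-back alpha compositing along one ray:
    C += T * alpha_i * c_i,  T *= (1 - alpha_i)."""
    color = np.zeros(3)
    transmittance = 1.0
    for d in densities:                        # samples ordered front to back
        c, alpha = transfer_function(d)        # TF maps density -> color, opacity
        alpha = 1.0 - (1.0 - alpha) ** step    # opacity correction for step size
        color += transmittance * alpha * np.asarray(c)
        transmittance *= 1.0 - alpha
        if transmittance < 1e-3:               # early ray termination
            break
    return color, 1.0 - transmittance

tf = lambda d: ((d, 0.2, 1.0 - d), 0.05 * d)   # toy transfer function
ray = np.clip(np.sin(np.linspace(0, np.pi, 128)), 0, 1)
print(raycast_composite(ray, tf))
```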
Affiliation(s)
- Qi Zhang
- Imaging Research Laboratories, Robarts Research Institute, University of Western Ontario, London, ON, Canada.
|
34
|
Kniss J. Supervised Manifold Distance Segmentation. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2011; 17:1637-1649. [PMID: 20855917 DOI: 10.1109/tvcg.2010.120] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/29/2023]
Abstract
We present a simple and robust method for image and volume data segmentation based on manifold distance metrics. This is done by treating the image as a function that maps the 2D (image) or 3D (volume) domain to a 2D or 3D manifold in a higher-dimensional feature space. We explore a range of possible feature spaces, including value, gradient, and probabilistic measures, and examine the consequences of including these measures in the feature space. The time and space computational complexity of our segmentation algorithm is O(N), which allows interactive, user-centric segmentation even for large data sets. We show that this method, given an appropriate choice of feature vector, produces results both qualitatively and quantitatively similar to Level Sets, Random Walkers, and others. We validate the robustness of this segmentation scheme with comparisons to standard ground-truth models and sensitivity analysis of the algorithm.
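A loose sketch of the feature-space framing, assuming a (value, gradient-magnitude) feature space and plain Euclidean rather than geodesic manifold distance, so it illustrates the idea only, not the paper's actual metric.

```python
import numpy as np

def feature_space(img):
    """Map a 2D image to a (value, |gradient|) feature space, one
    hypothetical instance of the feature vectors the paper explores."""
    gy, gx = np.gradient(img.astype(float))
    return np.stack([img, np.hypot(gx, gy)], axis=-1)

def segment_by_feature_distance(img, seed, threshold):
    """O(N) toy segmentation: label pixels whose feature-space distance
    to the seed pixel's feature vector falls below a threshold."""
    F = feature_space(img)
    d = np.linalg.norm(F - F[seed], axis=-1)
    return d < threshold

# Synthetic image: bright square on a dark background.
img = np.zeros((64, 64)); img[20:40, 20:40] = 1.0
mask = segment_by_feature_distance(img, seed=(30, 30), threshold=0.25)
print(mask.sum(), "pixels labelled")
```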
|
35
|
Pöthkow K, Hege HC. Positional uncertainty of isocontours: condition analysis and probabilistic measures. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2011; 17:1393-1406. [PMID: 21041883 DOI: 10.1109/tvcg.2010.247] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/30/2023]
Abstract
Uncertainty is ubiquitous in science, engineering and medicine. Drawing conclusions from uncertain data is the normal case, not an exception. While the field of statistical graphics is well established, only a few 2D and 3D visualization and feature extraction methods have been devised that consider uncertainty. We present mathematical formulations for uncertain equivalents of isocontours based on standard probability theory and statistics and employ them in interactive visualization methods. As input data, we consider discretized uncertain scalar fields and model these as random fields. To create a continuous representation suitable for visualization we introduce interpolated probability density functions. Furthermore, we introduce numerical condition as a general means in feature-based visualization. The condition number, which potentially diverges in the isocontour problem, describes how errors in the input data are amplified in feature computation. We show how the average numerical condition of isocontours aids the selection of thresholds that correspond to robust isocontours. Additionally, we introduce the isocontour density and the level crossing probability field; these two measures for the spatial distribution of uncertain isocontours are directly based on the probabilistic model of the input data. Finally, we adapt interactive visualization methods to evaluate and display these measures and apply them to 2D and 3D data sets.
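To make the probabilistic measures concrete, here is a minimal sketch of the level-crossing probability for a single grid edge, assuming independent Gaussian marginals at the two endpoints; the paper's random-field model also covers correlation and interpolated densities.

```python
from math import erf, sqrt

def norm_cdf(x, mu, sigma):
    """CDF of a normal distribution N(mu, sigma^2)."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def level_crossing_probability(mu1, s1, mu2, s2, c):
    """Probability that an isocontour at level c crosses the edge between
    two grid points with independent Gaussian values N(mu1, s1^2) and
    N(mu2, s2^2): the two values must lie on opposite sides of c."""
    p1_below = norm_cdf(c, mu1, s1)   # P(f1 < c)
    p2_below = norm_cdf(c, mu2, s2)   # P(f2 < c)
    return p1_below * (1 - p2_below) + (1 - p1_below) * p2_below

# Edge straddling the level c=0.5 with moderate uncertainty at both ends.
print(level_crossing_probability(0.4, 0.1, 0.6, 0.1, c=0.5))
```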
Affiliation(s)
- Kai Pöthkow
- Konrad-Zuse-Zentrum für Informationstechnik Berlin (ZIB), Berlin, Germany.
|
36
|
Fisher B, Green TM, Arias-Hernández R. Visual Analytics as a Translational Cognitive Science. TOPICS IN COGNITIVE SCIENCE 2011; 3:609-625.
Abstract
Visual analytics is a new interdisciplinary field of study that calls for a more structured scientific approach to understanding the effects of interaction with complex graphical displays on human cognitive processes. Its primary goal is to support the design and evaluation of graphical information systems that better support cognitive processes in areas as diverse as scientific research and emergency management. The methodologies that make up this new field are as yet ill-defined. This paper proposes a pathway for the development of visual analytics as a translational cognitive science that bridges fundamental research on human/computer cognitive systems with the design and evaluation of information systems in situ. Achieving this goal will require the development of enhanced field methods for the conceptual decomposition of human/computer cognitive systems that map onto laboratory studies, and improved methods for conducting laboratory investigations that might better map onto real-world cognitive processes in technology-rich environments.
Affiliation(s)
- Brian Fisher
- School of Interactive Arts and Technology, Simon Fraser University
|
37
|
Prassni JS, Ropinski T, Hinrichs K. Uncertainty-aware guided volume segmentation. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2010; 16:1358-1365. [PMID: 20975176 DOI: 10.1109/tvcg.2010.208] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/29/2023]
Abstract
Although direct volume rendering is established as a powerful tool for the visualization of volumetric data, efficient and reliable feature detection is still an open topic. Usually, a tradeoff between fast but imprecise classification schemes and accurate but time-consuming segmentation techniques has to be made. Furthermore, the issue of uncertainty introduced with the feature detection process is completely neglected by the majority of existing approaches. In this paper we propose a guided probabilistic volume segmentation approach that focuses on the minimization of uncertainty. In an iterative process, our system continuously assesses the uncertainty of a random-walker-based segmentation in order to detect regions with high ambiguity, to which the user's attention is directed to support the correction of potential misclassifications. This reduces the risk of critical segmentation errors and ensures that information about the segmentation's reliability is conveyed to the user in a dependable way. In order to improve the efficiency of the segmentation process, our technique not only takes into account the volume data to be segmented, but also enables the user to incorporate classification information. An interactive workflow has been achieved by implementing the presented system on the GPU using the OpenCL API. Our results obtained for several medical data sets of different modalities, including brain MRI and abdominal CT, demonstrate the reliability and efficiency of our approach.
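A minimal sketch of the core uncertainty measure, assuming scikit-image's random walker and per-pixel entropy as the ambiguity score; the paper's exact uncertainty definition and its GPU/OpenCL implementation are not reproduced here. In the guided workflow, the highest-entropy regions would be presented to the user for seed refinement before the segmentation is re-run.

```python
import numpy as np
from skimage.segmentation import random_walker

# Toy 2D "volume": two noisy intensity regions.
rng = np.random.default_rng(0)
img = np.zeros((64, 64)); img[:, 32:] = 1.0
img += rng.normal(scale=0.35, size=img.shape)

# Sparse user seeds: label 1 on the left, label 2 on the right (0 = unlabeled).
seeds = np.zeros_like(img, dtype=int)
seeds[32, 4] = 1; seeds[32, 60] = 2

# Full per-pixel label probabilities, as the guided workflow requires.
prob = random_walker(img, seeds, beta=50, return_full_prob=True)

# Per-pixel ambiguity as the entropy of the label distribution;
# high-entropy pixels mark the regions to direct the user's attention to.
eps = 1e-12
entropy = -(prob * np.log(prob + eps)).sum(axis=0)
print("most ambiguous pixel:", np.unravel_index(entropy.argmax(), entropy.shape))
```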
|
38
|
Saad A, Hamarneh G, Möller T. Exploration and visualization of segmentation uncertainty using shape and appearance prior information. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2010; 16:1366-1375. [PMID: 20975177 DOI: 10.1109/tvcg.2010.152] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/30/2023]
Abstract
We develop an interactive analysis and visualization tool for probabilistic segmentation in medical imaging. The originality of our approach is that the data exploration is guided by shape and appearance knowledge learned from expert-segmented images of a training population. We introduce a set of multidimensional transfer function widgets to analyze the multivariate probabilistic field data. These widgets furnish the user with contextual information about conformance or deviation from the population statistics. We demonstrate the user's ability to identify suspicious regions (e.g. tumors) and to correct the misclassification results. We evaluate our system and demonstrate its usefulness in the context of static anatomical and time-varying functional imaging datasets.
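One plausible reading of the conformance measure, sketched as a Mahalanobis distance to population statistics; the feature vector, mean, and covariance below are invented for illustration, and the paper's transfer function widgets are not reproduced.

```python
import numpy as np

def conformance(feature, pop_mean, pop_cov):
    """Mahalanobis distance of a voxel's feature vector to statistics learned
    from expert-segmented training images; large values flag suspicious
    regions such as tumors. A sketch of the idea, not the paper's widgets."""
    diff = np.asarray(feature, float) - np.asarray(pop_mean, float)
    return float(np.sqrt(diff @ np.linalg.inv(pop_cov) @ diff))

# Hypothetical population statistics for a 2D (intensity, label-probability) feature.
mu = np.array([0.7, 0.9])
cov = np.array([[0.02, 0.0], [0.0, 0.01]])
print(conformance([0.20, 0.40], mu, cov))  # deviant voxel: large distance
print(conformance([0.68, 0.88], mu, cov))  # conforming voxel: small distance
```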
Affiliation(s)
- Ahmed Saad
- School of Computer Science, Simon Fraser University, Burnaby, BC, Canada.
|
39
|
Persson A. Will medical visualisation tools meet medical user requirements in the future? RADIATION PROTECTION DOSIMETRY 2010; 139:12-19. [PMID: 20159921 DOI: 10.1093/rpd/ncq018] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/28/2023]
Abstract
This paper describes state-of-the-art medical visualisation and discusses the need for a research agenda focused on developing the next generation of medical acquisition and visualisation tools. It emphasises that these tools must be based on studies of medical user requirements and workflows as well as on new technical developments.
Affiliation(s)
- Anders Persson
- Center for Medical Image Science and Visualization, Linköping University, Linköping, Sweden.
|
40
|
Taylor CA, Steinman DA. Image-Based Modeling of Blood Flow and Vessel Wall Dynamics: Applications, Methods and Future Directions. Ann Biomed Eng 2010; 38:1188-203. [DOI: 10.1007/s10439-010-9901-0] [Citation(s) in RCA: 165] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2009] [Accepted: 01/02/2010] [Indexed: 10/19/2022]
|
41
|
Brecheisen R, Platel B, Vilanova A, ter Haar Romeny B. Parameter sensitivity visualization for DTI fiber tracking. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2009; 15:1441-1448. [PMID: 19834219 DOI: 10.1109/tvcg.2009.170] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/28/2023]
Abstract
Fiber tracking of Diffusion Tensor Imaging (DTI) data offers a unique insight into the three-dimensional organisation of white matter structures in the living brain. However, fiber tracking algorithms require a number of user-defined input parameters that strongly affect the output results. Usually the fiber tracking parameters are set once and are then re-used for several patient datasets; however, the stability of the chosen parameters is not evaluated, and a small change in the parameter values can give very different results. The user remains completely unaware of such effects. Furthermore, it is difficult to reproduce output results between different users. We propose a visualization tool that allows the user to visually explore how small variations in parameter values affect the output of fiber tracking. With this knowledge the user can not only assess the stability of commonly used parameter values but also compare output results between different patients more reliably. Existing tools do not provide such information. A small user evaluation of our tool demonstrates the potential of the technique.
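A toy stand-in for such a parameter sweep, assuming a synthetic anisotropy field and a fixed tracking direction; real DTI tractography integrates the tensor field, but the sketch shows how strongly one stopping threshold can change the output.

```python
import numpy as np

def track_length(fa, stop_threshold, start=(5.0, 5.0), step=0.5, max_steps=500):
    """Toy tracker: advect along a fixed direction until the synthetic
    anisotropy value drops below the stopping threshold. Stands in for a
    DTI fiber tracker only to illustrate parameter sensitivity."""
    pos = np.array(start, float); n = 0
    while n < max_steps:
        i, j = np.round(pos).astype(int)
        if not (0 <= i < fa.shape[0] and 0 <= j < fa.shape[1]):
            break
        if fa[i, j] < stop_threshold:
            break
        pos += step * np.array([1.0, 0.5])   # fixed toy direction field
        n += 1
    return n

# Synthetic anisotropy that decays smoothly away from a "tract" line.
y, x = np.indices((64, 64))
fa = np.exp(-((y - 0.5 * x - 2.5) ** 2) / 200.0)

# Sweep the stopping threshold and report how much the output varies.
for t in (0.10, 0.15, 0.20, 0.25, 0.30):
    print(f"threshold {t:.2f}: {track_length(fa, t)} steps")
```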
|
42
|
Sanyal J, Zhang S, Bhattacharya G, Amburn P, Moorhead RJ. A user study to compare four uncertainty visualization methods for 1D and 2D datasets. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2009; 15:1209-1218. [PMID: 19834191 DOI: 10.1109/tvcg.2009.114] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/28/2023]
Abstract
Many techniques have been proposed to show uncertainty in data visualizations. However, very little is known about their effectiveness in conveying meaningful information. In this paper, we present a user study that evaluates the perception of uncertainty amongst four of the most commonly used techniques for visualizing uncertainty in one-dimensional and two-dimensional data. The techniques evaluated are traditional errorbars, scaled size of glyphs, color-mapping on glyphs, and color-mapping of uncertainty on the data surface. The study uses generated data that was designed to represent systematic and random uncertainty components. Twenty-seven users performed two types of search tasks and two types of counting tasks on 1D and 2D datasets. The search tasks involved finding data points that were least or most uncertain. The counting tasks involved counting data features or uncertainty features. A 4x4 full-factorial ANOVA indicated a significant interaction between the techniques used and the type of tasks assigned for both datasets, indicating that differences in performance between the four techniques depended on the type of task performed. Several one-way ANOVAs were computed to explore the simple main effects, with Bonferroni's correction used to control the family-wise error rate against alpha inflation. Although we did not find a consistent order among the four techniques for all the tasks, there are several findings from the study that we think are useful for uncertainty visualization design. We found a significant difference in user performance between searching for locations of high uncertainty and searching for locations of low uncertainty. Errorbars consistently underperformed throughout the experiment, whereas scaling the size of glyphs and color-mapping of the surface performed reasonably well. The efficiency of most of these techniques was highly dependent on the tasks performed. We believe that these findings can be used in future uncertainty visualization design. In addition, the framework developed in this user study presents a structured approach to evaluate uncertainty visualization techniques and provides a basis for future research in uncertainty visualization.
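A minimal sketch of the statistical machinery described above, using invented completion times: a one-way omnibus ANOVA followed by pairwise t-tests with a Bonferroni adjustment. This is not the study's actual data or its full 4x4 factorial analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical completion times (seconds) for four uncertainty-visualization
# techniques on one task; the numbers are invented for illustration only.
errorbars   = rng.normal(14.0, 3.0, 27)
glyph_size  = rng.normal(11.0, 3.0, 27)
glyph_color = rng.normal(11.5, 3.0, 27)
surface     = rng.normal(10.5, 3.0, 27)

# Omnibus one-way ANOVA across the four techniques.
F, p = stats.f_oneway(errorbars, glyph_size, glyph_color, surface)
print(f"one-way ANOVA: F={F:.2f}, p={p:.4f}")

# Pairwise follow-up t-tests with Bonferroni correction: with 6 pairwise
# comparisons, each raw p-value is multiplied by 6 (capped at 1) so the
# family-wise error rate stays at the nominal alpha.
groups = {"errorbars": errorbars, "size": glyph_size,
          "color": glyph_color, "surface": surface}
names = list(groups)
n_comparisons = len(names) * (len(names) - 1) // 2
for a in range(len(names)):
    for b in range(a + 1, len(names)):
        t, p_raw = stats.ttest_ind(groups[names[a]], groups[names[b]])
        p_adj = min(1.0, p_raw * n_comparisons)
        print(f"{names[a]} vs {names[b]}: adjusted p={p_adj:.4f}")
```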
Affiliation(s)
- Jibonananda Sanyal
- Geosystems Research Institute, High Performance Computing Collaboratory, Mississippi State University, MS, USA.
|
43
|
Kayser K, Schultz H, Goldmann T, Görtler J, Kayser G, Vollmer E. Theory of sampling and its application in tissue based diagnosis. Diagn Pathol 2009; 4:6. [PMID: 19220904 PMCID: PMC2649041 DOI: 10.1186/1746-1596-4-6] [Citation(s) in RCA: 40] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2009] [Accepted: 02/16/2009] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND A general theory of sampling and its application in tissue based diagnosis is presented. Sampling is defined as the extraction of information from certain limited spaces and its transformation into a statement or measure that is valid for the entire (reference) space. The procedure should be reproducible in time and space, i.e. give the same results when applied under similar circumstances. Sampling includes two different aspects, the procedure of sample selection and the efficiency of its performance. The practical performance of sample selection focuses on the search for the localization of specific compartments within the basic space and on the search for the presence of specific compartments. METHODS When a sampling procedure is applied in diagnostic processes, two different procedures can be distinguished: I) the evaluation of the diagnostic significance of a certain object, which is the probability that the object can be grouped into a certain diagnosis, and II) the probability of detecting these basic units. Sampling can be performed without or with external knowledge, such as the size of the searched objects, neighbourhood conditions, the spatial distribution of objects, etc. If the sample size is much larger than the object size, the application of a translation-invariant transformation results in Krige's formula, which is widely used in the search for ores. Usually, sampling is performed in a series of area (space) selections of identical size. The size can be defined in relation to the reference space or according to interspatial relationships. The first method is called random sampling, the second stratified sampling. RESULTS Random sampling does not require knowledge about the reference space and is used to estimate the number and size of objects. Estimated features include area (volume) fraction and numerical, boundary and surface densities. Stratified sampling requires knowledge of the objects (and their features) and evaluates spatial features in relation to the detected objects (for example, the grey value distribution around an object). It also serves for the definition of the parameters of the probability function in so-called active segmentation. CONCLUSION The method is useful in the standardization of images derived from immunohistochemically stained slides, and is implemented in the EAMUS system http://www.diagnomX.de. It can also be applied to the search for "objects possessing an amplification function", i.e. a rare event with a "steering function". A formula to calculate the efficiency and potential error rate of the described sampling procedures is given.
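As a concrete illustration of the area-fraction estimator mentioned in the RESULTS, here is a minimal sketch comparing random point sampling with a simple systematic grid scheme; the synthetic image and probe counts are invented, and the paper's stratified sampling, which is defined relative to detected objects, is more general than the grid variant shown.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic binary "tissue" image with a known object area fraction.
img = np.zeros((512, 512), bool)
img[100:300, 150:350] = True            # true fraction = (200 * 200) / 512^2
true_fraction = img.mean()

# Random sampling: probe the image at uniformly random positions and use
# the hit rate as an estimator of the area fraction.
n = 2000
ij = rng.integers(0, 512, size=(n, 2))
random_estimate = img[ij[:, 0], ij[:, 1]].mean()

# Systematic grid sampling: one probe per grid cell, which typically
# reduces estimator variance for the same number of probes.
side = int(np.sqrt(n))
coords = np.arange(side) * (512 // side) + (512 // side) // 2
gy, gx = np.meshgrid(coords, coords, indexing="ij")
grid_estimate = img[gy, gx].mean()

print(f"true {true_fraction:.4f}  random {random_estimate:.4f}  grid {grid_estimate:.4f}")
```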
Affiliation(s)
- Klaus Kayser
- UICC-TPCC, Institute of Pathology, Charité, Berlin, Germany.
|
44
|
Kirby RM, Silva CT. The need for verifiable visualization. IEEE COMPUTER GRAPHICS AND APPLICATIONS 2008; 28:78-83. [PMID: 18753037 DOI: 10.1109/mcg.2008.103] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/26/2023]
Abstract
Visualization is often employed as part of the simulation science pipeline: it's the window through which scientists examine their data for deriving new science, and the lens used to view modeling and discretization interactions within their simulations. We advocate that, as a component of the simulation science pipeline, visualization must be explicitly considered as part of the validation and verification (V&V) process. In this article, the authors define V&V in the context of computational science, discuss the role of V&V in the scientific process, and present arguments for the need for verifiable visualization.
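In the spirit of verifiable visualization, a minimal sketch of one verification check: extract an isocontour of an analytic field at two grid resolutions and confirm the error shrinks at roughly the rate expected of linear interpolation. The field, level, and resolutions are arbitrary choices for illustration, not the article's methodology.

```python
import numpy as np
from skimage.measure import find_contours

def isocontour_error(n):
    """Extract the unit circle as the f=1 isocontour of f(x,y) = x^2 + y^2
    sampled on an n x n grid, and return the maximum radial error."""
    xs = np.linspace(-1.5, 1.5, n)
    f = xs[None, :] ** 2 + xs[:, None] ** 2
    h = xs[1] - xs[0]
    err = 0.0
    for contour in find_contours(f, 1.0):
        pts = contour * h - 1.5          # grid indices -> physical coordinates
        err = max(err, np.abs(np.hypot(pts[:, 0], pts[:, 1]) - 1.0).max())
    return err

# Verification: halving the grid spacing should shrink the error at the
# second-order rate of linear interpolation (roughly a factor of four).
e1, e2 = isocontour_error(65), isocontour_error(129)
print(f"error {e1:.5f} -> {e2:.5f}, ratio {e1 / e2:.2f}")
```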
|