1
Jadhav S, Torkaman M, Tannenbaum A, Nadeem S, Kaufman AE. Volume Exploration Using Multidimensional Bhattacharyya Flow. IEEE Transactions on Visualization and Computer Graphics 2023; 29:1651-1663. [PMID: 34780328] [PMCID: PMC9594946] [DOI: 10.1109/tvcg.2021.3127918]
Abstract
We present a novel approach for volume exploration that is versatile yet effective in isolating semantic structures in both noisy and clean data. Specifically, we describe a hierarchical active contours approach based on Bhattacharyya gradient flow which is easier to control, robust to noise, and can incorporate various types of statistical information to drive an edge-agnostic exploration process. To facilitate a time-bound user-driven volume exploration process that is applicable to a wide variety of data sources, we present an efficient multi-GPU implementation that (1) is approximately 400 times faster than a single-threaded CPU implementation, (2) allows hierarchical exploration of 2D and 3D images, (3) supports customization through multidimensional attribute spaces, and (4) is applicable to a variety of data sources and semantic structures. The exploration system follows a 2-step process. It first applies active contours to isolate semantically meaningful subsets of the volume. It then applies transfer functions to the isolated regions locally to produce clear and clutter-free visualizations. We show the effectiveness of our approach in isolating and visualizing structures of interest without needing any specialized segmentation methods on a variety of data sources, including 3D optical microscopy, multi-channel optical volumes, abdominal and chest CT, micro-CT, MRI, simulation, and synthetic data. We also gathered feedback from a medical trainee regarding the usefulness of our approach and discuss potential applications in clinical workflows.
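The statistic driving such a flow can be illustrated with a minimal numpy sketch (not the paper's hierarchical, multidimensional GPU formulation): the Bhattacharyya coefficient measures the overlap between the intensity distributions inside and outside a candidate contour, and the flow evolves the contour to minimize that overlap. The toy 1D "volume" and the bin settings below are illustrative assumptions.

```python
import numpy as np

def bhattacharyya_coefficient(p, q):
    """Overlap between two discrete distributions; 1 = identical, 0 = disjoint."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(np.sqrt(p * q)))

# Toy 1D "volume": a bright object embedded in a darker background.
volume = np.concatenate([np.full(50, 0.2), np.full(50, 0.8)])
inside = volume[50:]    # candidate region isolated by the contour
outside = volume[:50]

bins = np.linspace(0.0, 1.0, 11)
p_in, _ = np.histogram(inside, bins=bins)
p_out, _ = np.histogram(outside, bins=bins)

# A Bhattacharyya flow evolves the contour to *minimize* this overlap,
# i.e. to make the inside/outside statistics maximally distinct.
bc = bhattacharyya_coefficient(p_in + 1e-12, p_out + 1e-12)
```

With the contour correctly placed at the material boundary, the two histograms are nearly disjoint and the coefficient approaches zero.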
2
Zeng Q, Zhao Y, Wang Y, Zhang J, Cao Y, Tu C, Viola I, Wang Y. Data-Driven Colormap Adjustment for Exploring Spatial Variations in Scalar Fields. IEEE Transactions on Visualization and Computer Graphics 2022; 28:4902-4917. [PMID: 34469302] [DOI: 10.1109/tvcg.2021.3109014]
Abstract
Colormapping is an effective and popular visualization technique for analyzing patterns in scalar fields. Scientists usually adjust a default colormap to show hidden patterns by shifting the colors in a trial-and-error process. To improve efficiency, efforts have been made to automate the colormap adjustment process based on data properties (e.g., statistical data value or histogram distribution). However, as these data properties have no direct correlation with spatial variations, previous methods may be insufficient to reveal the dynamic range of spatial variations hidden in the data. To address these issues, we conduct a pilot analysis with domain experts and summarize three requirements for the colormap adjustment process. Based on the requirements, we formulate colormap adjustment as an objective function, composed of a boundary term and a fidelity term, which is flexible enough to support interactive functionalities. We compare our approach with alternative methods under a quantitative measure and a qualitative user study (25 participants), based on a set of data with broad distribution diversity. We further evaluate our approach via three case studies with six domain experts. Our method is not necessarily better than alternative methods at revealing patterns; rather, it offers an additional color-adjustment option for exploring data with a dynamic range of spatial variations.
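The boundary-plus-fidelity objective can be sketched in numpy under loose assumptions (the paper's actual terms, weights, and optimizer are not reproduced): the boundary term rewards placing colormap control points where a precomputed boundary-strength profile peaks, the fidelity term penalizes drifting from the default colormap, and a crude random search stands in for the real solver.

```python
import numpy as np

def objective(ctrl, ctrl0, boundary_strength, grid,
              w_boundary=1.0, w_fidelity=0.1):
    """E = boundary term (reward placing colormap control points on strong
    data boundaries) + fidelity term (stay close to the default colormap)."""
    b = np.interp(ctrl, grid, boundary_strength)   # strength at each control point
    boundary_term = -w_boundary * b.sum()          # stronger boundary -> lower energy
    fidelity_term = w_fidelity * np.sum((ctrl - ctrl0) ** 2)
    return boundary_term + fidelity_term

grid = np.linspace(0.0, 1.0, 101)
# synthetic boundary-strength profile with peaks at 0.3 and 0.7
boundary_strength = (np.exp(-((grid - 0.3) / 0.05) ** 2)
                     + np.exp(-((grid - 0.7) / 0.05) ** 2))

ctrl0 = np.array([0.25, 0.5, 0.75])                # default colormap key positions
best = ctrl0.copy()
rng = np.random.default_rng(42)
for _ in range(300):                               # crude random-search refinement
    cand = np.clip(best + rng.normal(0.0, 0.02, size=3), 0.0, 1.0)
    if (objective(cand, ctrl0, boundary_strength, grid)
            < objective(best, ctrl0, boundary_strength, grid)):
        best = cand
```

The refined control points drift toward the boundary peaks while the fidelity weight keeps them recognizably close to the default layout.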
3
He X, Yang S, Tao Y, Dai H, Lin H. Graph convolutional network-based semi-supervised feature classification of volumes. J Vis (Tokyo) 2021. [DOI: 10.1007/s12650-021-00787-7]
4

5
Berger M, Li J, Levine JA. A Generative Model for Volume Rendering. IEEE Transactions on Visualization and Computer Graphics 2019; 25:1636-1650. [PMID: 29993811] [DOI: 10.1109/tvcg.2018.2816059]
Abstract
We present a technique to synthesize and analyze volume-rendered images using generative models. We use the Generative Adversarial Network (GAN) framework to compute a model from a large collection of volume renderings, conditioned on (1) viewpoint and (2) transfer functions for opacity and color. Our approach facilitates tasks for volume analysis that are challenging to achieve using existing rendering techniques such as ray casting or texture-based methods. We show how to guide the user in transfer function editing by quantifying expected change in the output image. Additionally, the generative model transforms transfer functions into a view-invariant latent space specifically designed to synthesize volume-rendered images. We use this space directly for rendering, enabling the user to explore the space of volume-rendered images. As our model is independent of the choice of volume rendering process, we show how to analyze volume-rendered images produced by direct and global illumination lighting, for a variety of volume datasets.
6
Ma B, Entezari A. Volumetric Feature-Based Classification and Visibility Analysis for Transfer Function Design. IEEE Transactions on Visualization and Computer Graphics 2018; 24:3253-3267. [PMID: 29989987] [DOI: 10.1109/tvcg.2017.2776935]
Abstract
Transfer function (TF) design is a central topic in direct volume rendering. The TF fundamentally translates data values into optical properties to reveal relevant features present in the volumetric data. We propose a semi-automatic TF design scheme which consists of two steps: First, we present a clustering process within 1D/2D TF domain based on the proximities of the respective volumetric features in the spatial domain. The presented approach provides an interactive tool that aids users in exploring clusters and identifying features of interest (FOI). Second, our method automatically generates a TF by iteratively refining the optical properties for the selected features using a novel feature visibility measurement. The proposed visibility measurement leverages the similarities of features to enhance their visibilities in DVR images. Compared to the conventional visibility measurement, the proposed feature visibility is able to efficiently sense opacity changes and precisely evaluate the impact of selected features on resulting visualizations. Our experiments validate the effectiveness of the proposed approach by demonstrating the advantages of integrating feature similarity into the visibility computations. We examine a number of datasets to establish the utility of our approach for semi-automatic TF design.
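The notion of "visibility" used in such measurements can be made concrete with the standard formulation: a sample's visibility is its opacity times the transmittance accumulated in front of it along a viewing ray. The paper's feature-similarity weighting is not reproduced; the single-ray data and the step-function opacity transfer functions below are invented for illustration.

```python
import numpy as np

def feature_visibility(ray_values, opacity_tf, feature_mask):
    """Front-to-back compositing along one ray: each sample contributes
    alpha * transmittance accumulated in front of it; we sum the
    contributions of the samples belonging to the feature."""
    transmittance, vis = 1.0, 0.0
    for v, is_feature in zip(ray_values, feature_mask):
        a = opacity_tf(v)
        if is_feature:
            vis += a * transmittance
        transmittance *= 1.0 - a
    return vis

ray = np.array([0.2, 0.8, 0.8, 0.2])      # samples front to back
mask = ray > 0.5                          # the "feature" is the bright material

tf_faint_bg = lambda v: 0.6 if v > 0.5 else 0.1   # nearly transparent background
tf_dense_bg = lambda v: 0.6 if v > 0.5 else 0.9   # opaque background occludes

v_open = feature_visibility(ray, tf_faint_bg, mask)
v_occluded = feature_visibility(ray, tf_dense_bg, mask)
```

Raising the background opacity in front of the feature sharply drops its visibility, which is exactly the kind of opacity-change sensitivity the measurement is meant to capture.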
7
Shih M, Rozhon C, Ma KL. A Declarative Grammar of Flexible Volume Visualization Pipelines. IEEE Transactions on Visualization and Computer Graphics 2018; 25:1050-1059. [PMID: 30130223] [DOI: 10.1109/tvcg.2018.2864841]
Abstract
This paper presents a declarative grammar for conveniently and effectively specifying advanced volume visualizations. Existing methods for creating volume visualizations either lack the flexibility to specify sophisticated visualizations or are difficult to use for those unfamiliar with volume rendering implementation and parameterization. Our design provides the ability to quickly create expressive visualizations without knowledge of the volume rendering implementation. It attempts to capture aspects of those difficult but powerful methods while remaining flexible and easy to use. As a proof of concept, our current implementation of the grammar allows users to combine multiple data variables in various ways and define transfer functions for diverse input data. The grammar also has the ability to describe advanced shading effects and create animations. We demonstrate the power and flexibility of our approach using multiple practical volume visualizations.
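The flavor of a declarative volume-visualization grammar can be sketched with a small, entirely hypothetical JSON-style specification (the paper's actual grammar, operator names, and semantics differ): a declaration names the data and a chain of operators, and an interpreter validates it and compiles it into an ordered list of render stages.

```python
# Hypothetical mini-grammar: a visualization is declared as data plus a
# chain of operators; the interpreter validates the spec and "compiles"
# it into an ordered pipeline description.
SPEC = {
    "data": {"url": "engine.raw", "variables": ["density"]},
    "pipeline": [
        {"op": "transfer_function", "variable": "density",
         "points": [[0.0, 0.0], [0.5, 0.2], [1.0, 0.9]]},
        {"op": "shading", "model": "phong"},
        {"op": "animate", "parameter": "camera.azimuth", "from": 0, "to": 360},
    ],
}

KNOWN_OPS = {"transfer_function", "shading", "animate"}

def compile_spec(spec):
    """Turn a declarative spec into an ordered list of render stages."""
    if "data" not in spec or "pipeline" not in spec:
        raise ValueError("spec needs 'data' and 'pipeline'")
    stages = []
    for step in spec["pipeline"]:
        if step["op"] not in KNOWN_OPS:
            raise ValueError(f"unknown operator: {step['op']}")
        stages.append(step["op"])
    return stages

stages = compile_spec(SPEC)
```

Validation at compile time is what lets a user specify sophisticated pipelines without touching the renderer's implementation or parameterization.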
8

9
Lan S, Wang L, Song Y, Wang YP, Yao L, Sun K, Xia B, Xu Z. Improving Separability of Structures with Similar Attributes in 2D Transfer Function Design. IEEE Transactions on Visualization and Computer Graphics 2017; 23:1546-1560. [PMID: 26955038] [DOI: 10.1109/tvcg.2016.2537341]
Abstract
The 2D transfer function based on scalar value and gradient magnitude (SG-TF) is popularly used in volume rendering. However, it is plagued by the boundary-overlapping problem: different structures with similar attributes occupy the same region in SG-TF space, and their boundaries are usually connected. The SG-TF thus often fails to separate these structures (or their boundaries) and has limited ability to classify different objects in real-world 3D images. To overcome this difficulty, we propose a novel method for boundary separation that integrates spatial connectivity computation of the boundaries and set operations on boundary voxels into the SG-TF. Specifically, spatial positions of boundaries and their regions in the SG-TF space are computed, from which boundaries can be well separated and volume rendered in different colors. In the method, the boundaries are divided into three classes, and a different boundary-separation technique is applied to each. The complex task of separating various boundaries in 3D images is then simplified by breaking it into several small separation problems. The method shows good object classification ability in real-world 3D images while avoiding the complexity of high-dimensional transfer functions. Its effectiveness is validated by extensive experimental results visualizing the boundaries of different structures in complex real-world 3D images.
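The SG-TF domain itself is simply the joint histogram of scalar value and gradient magnitude. A numpy sketch on a synthetic volume (sizes and noise level are arbitrary): material interiors land at low gradient magnitude, while boundary voxels form the high-gradient arcs where the overlapping problem described above arises.

```python
import numpy as np

# Toy volume: a bright cube inside a dark background, plus mild noise,
# so there are two materials and a boundary shell between them.
rng = np.random.default_rng(0)
vol = np.zeros((32, 32, 32))
vol[8:24, 8:24, 8:24] = 1.0
vol += rng.normal(0.0, 0.02, vol.shape)

# per-voxel gradient magnitude
gx, gy, gz = np.gradient(vol)
gmag = np.sqrt(gx**2 + gy**2 + gz**2)

# The SG-TF domain: joint histogram of (scalar value, gradient magnitude).
# Interiors cluster at low gmag; the boundary shell spreads to high gmag.
hist, s_edges, g_edges = np.histogram2d(vol.ravel(), gmag.ravel(), bins=64)
```

Widgets placed in this 2D domain then assign color and opacity to the voxels falling in the selected histogram region.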
10
Sundén E, Kottravel S, Ropinski T. Multimodal volume illumination. Computers & Graphics 2015; 50:47-60. [DOI: 10.1016/j.cag.2015.05.004]
11
Volume visualization based on the intensity and SUSAN transfer function spaces. Biomed Signal Process Control 2015. [DOI: 10.1016/j.bspc.2014.12.002]
12
Jung Y, Kim J, Fulham M, Feng DD. Opacity-driven volume clipping for slice of interest (SOI) visualisation of multi-modality PET-CT volumes. Annu Int Conf IEEE Eng Med Biol Soc 2015; 2014:6714-7. [PMID: 25571537] [DOI: 10.1109/embc.2014.6945169]
Abstract
Multi-modality positron emission tomography and computed tomography (PET-CT) imaging depicts biological and physiological functions (from PET) within a higher resolution anatomical reference frame (from CT). The need to efficiently assimilate the information from these co-aligned volumes simultaneously has resulted in 3D visualisation methods that depict, e.g., a slice of interest (SOI) from PET combined with direct volume rendering (DVR) of CT. However, because DVR renders the whole volume, regions of interest (ROIs) such as tumours that are embedded within the volume may be occluded from view. Volume clipping is typically used to remove occluding structures by 'cutting away' parts of the volume; this involves tedious trial-and-error tweaking of the clipping parameters until a satisfactory visualisation is achieved, which restricts its application. Hence, we propose a new automated opacity-driven volume clipping method for PET-CT using DVR-SOI visualisation. Our method dynamically calculates the volume clipping depth by considering the opacity information of the CT voxels in front of the PET SOI, thereby ensuring that only the relevant anatomical information from the CT is visualised while not impairing the visibility of the PET SOI. We demonstrate the improvements of our method over conventional 2D and traditional DVR-SOI visualisations.
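The clipping-depth computation can be sketched per ray: accumulate CT opacity front to back and clip everything in front of the first sample where the accumulated opacity would meaningfully occlude the PET SOI behind it. The 0.9 threshold and the uniform sample opacities below are invented for illustration, not the paper's parameters.

```python
def clipping_depth(ct_opacities, threshold=0.9):
    """Walk front-to-back along one ray of CT sample opacities and return
    the first depth index at which the accumulated opacity exceeds
    `threshold`; the volume is clipped up to that depth so the PET SOI
    behind it stays visible."""
    accumulated = 0.0
    transmittance = 1.0
    for depth, a in enumerate(ct_opacities):
        accumulated += a * transmittance
        transmittance *= 1.0 - a
        if accumulated > threshold:
            return depth
    return len(ct_opacities)   # never opaque enough: no clipping needed

ray = [0.5, 0.5, 0.5, 0.5, 0.5]   # CT opacities in front of the PET SOI
d = clipping_depth(ray, threshold=0.9)
```

Because the depth is derived from the opacity transfer function itself, the clip plane adapts automatically whenever the CT transfer function changes.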
13
Hsu WH, Zhang Y, Ma KL. A multi-criteria approach to camera motion design for volume data animation. IEEE Transactions on Visualization and Computer Graphics 2013; 19:2792-2801. [PMID: 24051846] [DOI: 10.1109/tvcg.2013.123]
Abstract
We present an integrated camera motion design and path generation system for building volume data animations. Creating animations is an essential task in presenting complex scientific visualizations. Existing visualization systems use an established animation function based on keyframes selected by the user. This approach is limited in providing the optimal in-between views of the data. Alternatively, computer graphics and virtual reality camera motion planning is frequently focused on collision free movement in a virtual walkthrough. For semi-transparent, fuzzy, or blobby volume data the collision free objective becomes insufficient. Here, we provide a set of essential criteria focused on computing camera paths to establish effective animations of volume data. Our dynamic multi-criteria solver coupled with a force-directed routing algorithm enables rapid generation of camera paths. Once users review the resulting animation and evaluate the camera motion, they are able to determine how each criterion impacts path generation. In this paper, we demonstrate how incorporating this animation approach with an interactive volume visualization system reduces the effort in creating context-aware and coherent animations. This frees the user to focus on visualization tasks with the objective of gaining additional insight from the volume data.
14
Automatic transfer function design for medical visualization using visibility distributions and projective color mapping. Comput Med Imaging Graph 2013; 37:450-8. [PMID: 24070670] [DOI: 10.1016/j.compmedimag.2013.08.008]
Abstract
Transfer functions play a key role in volume rendering of medical data, but transfer function manipulation is unintuitive and can be time-consuming; achieving an optimal visualization of patient anatomy or pathology is difficult. To overcome this problem, we present a system for automatic transfer function design based on visibility distribution and projective color mapping. Instead of assigning opacity directly based on voxel intensity and gradient magnitude, the opacity transfer function is automatically derived by matching the observed visibility distribution to a target visibility distribution. An automatic color assignment scheme based on projective mapping is proposed to assign colors that allow for the visual discrimination of different structures, while also reflecting the degree of similarity between them. When our method was tested on several medical volumetric datasets, the key structures within the volume were clearly visualized with minimal user intervention.
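A minimal sketch of the visibility-matching idea, assuming a single ray, two intensity bins, and a damped multiplicative fixed-point update (the paper's actual matching procedure and the projective color mapping are not reproduced): observed per-bin visibility is computed by compositing, and each bin's opacity is nudged until the observed distribution approaches the target.

```python
import numpy as np

def visibility_per_bin(ray_bins, alpha):
    """Composite one ray front-to-back and accumulate each intensity bin's
    visibility (alpha * transmittance at every sample of that bin)."""
    vis = np.zeros_like(alpha)
    T = 1.0
    for b in ray_bins:
        vis[b] += alpha[b] * T
        T *= 1.0 - alpha[b]
    return vis

ray_bins = [0, 0, 1, 1, 1]        # bin 0 (occluding tissue) lies in front of bin 1
alpha = np.array([0.5, 0.2])      # initial per-bin opacity
target = np.array([0.2, 0.6])     # desired visibility: let bin 1 dominate

for _ in range(200):               # damped fixed-point update toward the target
    vis = visibility_per_bin(ray_bins, alpha)
    alpha = np.clip(alpha * np.sqrt((target + 1e-9) / (vis + 1e-9)), 0.0, 0.95)

final_vis = visibility_per_bin(ray_bins, alpha)
```

The update lowers the opacity of the occluding front material and raises the embedded structure's opacity until the observed visibility distribution matches the target.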
15
Ahmed N, Zheng Z, Mueller K. Human Computation in Visualization: Using Purpose Driven Games for Robust Evaluation of Visualization Algorithms. IEEE Transactions on Visualization and Computer Graphics 2012; 18:2104-2113. [PMID: 26357117] [DOI: 10.1109/tvcg.2012.234]
Abstract
Due to the inherent characteristics of the visualization process, most of the problems in this field have strong ties with human cognition and perception. This makes the human brain and sensory system the only truly appropriate evaluation platform for evaluating and fine-tuning a new visualization method or paradigm. However, getting humans to volunteer for these purposes has always been a significant obstacle, and thus this phase of the development process has traditionally formed a bottleneck, slowing down progress in visualization research. We propose to take advantage of the newly emerging field of Human Computation (HC) to overcome these challenges. HC promotes the idea that rather than considering humans as users of the computational system, they can be made part of a hybrid computational loop consisting of traditional computation resources and the human brain and sensory system. This approach is particularly successful in cases where part of the computational problem is considered intractable using known computer algorithms but is trivial to common sense human knowledge. In this paper, we focus on HC from the perspective of solving visualization problems and also outline a framework by which humans can be easily seduced to volunteer their HC resources. We introduce a purpose-driven game titled "Disguise" which serves as a prototypical example for how the evaluation of visualization algorithms can be mapped into a fun and addicting activity, allowing this task to be accomplished in an extensive yet cost effective way. Finally, we sketch out a framework that transcends from the pure evaluation of existing visualization methods to the design of a new one.
Affiliation(s)
- N Ahmed
- Comput. Sci. Dept., Stony Brook Univ., Stony Brook, NY, USA.
16
Guo H, Xiao H, Yuan X. Scalable Multivariate Volume Visualization and Analysis Based on Dimension Projection and Parallel Coordinates. IEEE Transactions on Visualization and Computer Graphics 2012; 18:1397-1410. [PMID: 22411886] [DOI: 10.1109/tvcg.2012.80]
Abstract
In this paper, we present an effective and scalable system for multivariate volume data visualization and analysis with a novel transfer function interface design that tightly couples parallel coordinates plots (PCP) and MDS-based dimension projection plots. In our system, the PCP visualizes the data distribution of each variate (dimension) and the MDS plots project features. They are integrated seamlessly to provide flexible feature classification without context switching between different data presentations during the user interaction. The proposed interface enables users to identify relevant correlation clusters and assign optical properties with lassos, magic wand, and other tools. Furthermore, direct sketching on the volume rendered images has been implemented to probe and edit features. With our system, users can interactively analyze multivariate volumetric data sets by navigating and exploring feature spaces in unified PCP and MDS plots. To further support large-scale multivariate volume data visualization and analysis, Scalable Pivot MDS (SPMDS), parallel adaptive continuous PCP rendering, as well as parallel rendering techniques are developed and integrated into our visualization system. Our experiments show that the system is effective in multivariate volume data visualization and its performance is highly scalable for data sets with different sizes and number of variates.
17
Jung Y, Kim J, Feng DD. Dual-modal visibility metrics for interactive PET-CT visualization. Annu Int Conf IEEE Eng Med Biol Soc 2012; 2012:2696-2699. [PMID: 23366481] [DOI: 10.1109/embc.2012.6346520]
Abstract
Dual-modal positron emission tomography and computed tomography (PET-CT) imaging enables the visualization of functional structures (PET) within human bodies in the spatial context of their anatomical (CT) counterparts, and is providing unprecedented capabilities in understanding diseases. However, the need to access and assimilate the two volumes simultaneously has raised new visualization challenges. In typical dual-modal visualization, the transfer functions for the two volumes are designed in isolation with the resulting volumes being fused. Unfortunately, such transfer function design fails to exploit the correlation that exists between the two volumes. In this study, we propose a dual-modal visualization method where we employ 'visibility' metrics to provide interactive visual feedback regarding the occlusion caused by the first volume on the second volume and vice versa. We further introduce a region of interest (ROI) function that allows visibility analyses to be restricted to subsection of the volume. We demonstrate the new visualization enabled by our proposed dual-modal visibility metrics using clinical whole-body PET-CT studies of various diseases.
Affiliation(s)
- Younhyun Jung
- School of Information Technologies, University of Sydney, Australia.
18
Kaufman AE. Modified Dendrogram of Attribute Space for Multidimensional Transfer Function Design. IEEE Transactions on Visualization and Computer Graphics 2012; 18:121-131. [PMID: 21282856] [DOI: 10.1109/tvcg.2011.23]
Abstract
We introduce a modified dendrogram (MD) (with subtrees to represent clusters) and display it in 2D for multidimensional transfer function design. Such a transfer function for direct volume rendering employs a multidimensional space, termed attribute space. The MD reveals the hierarchical structure information of the attribute space. The user can design a transfer function in an intuitive and informative manner using the MD user interface in 2D instead of the multidimensional space, where it is hard to ascertain the relationships within the space. In addition, we provide the capability to interactively modify the granularity of the MD. The coarse-grained MD primarily shows the global information of the attribute space while the fine-grained MD reveals the finer details, and the separation ability of the attribute space is completely preserved in the finest granularity. With this so-called multigrained method, the user can efficiently create a transfer function using the coarse-grained MD, and then fine-tune it with the fine-grained MDs. Our method is independent of the type of the attributes and supports arbitrary-dimension attribute spaces.
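The hierarchy a dendrogram displays comes from agglomerative clustering of attribute-space samples. A naive average-linkage sketch (the paper's MD construction, subtree display, and granularity interface are beyond this illustration; the four 2D points are toy attribute vectors):

```python
import numpy as np

def agglomerate(points):
    """Naive average-linkage agglomerative clustering; returns the merge
    sequence (the dendrogram's internal nodes, closest pairs first)."""
    clusters = {i: [i] for i in range(len(points))}
    merges = []
    next_id = len(points)
    while len(clusters) > 1:
        best = None
        for a in clusters:
            for b in clusters:
                if a < b:
                    d = np.mean([np.linalg.norm(points[i] - points[j])
                                 for i in clusters[a] for j in clusters[b]])
                    if best is None or d < best[0]:
                        best = (d, a, b)
        d, a, b = best
        clusters[next_id] = clusters.pop(a) + clusters.pop(b)
        merges.append((a, b, d))
        next_id += 1
    return merges

# Two well-separated attribute-space clusters (e.g. [value, gradient] pairs)
pts = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.1, 1.0]])
merges = agglomerate(pts)
```

Cutting the resulting merge tree at a chosen distance yields the coarse- or fine-grained cluster view that the user then maps to optical properties.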
19
Guo H, Mao N, Yuan X. WYSIWYG (What You See is What You Get) volume visualization. IEEE Transactions on Visualization and Computer Graphics 2011; 17:2106-2114. [PMID: 22034329] [DOI: 10.1109/tvcg.2011.261]
Abstract
In this paper, we propose a volume visualization system that accepts direct manipulation through a sketch-based What You See Is What You Get (WYSIWYG) approach. Similar to the operations in painting applications for 2D images, in our system a full set of tools has been developed to enable direct volume rendering manipulation of color, transparency, contrast, brightness, and other optical properties by brushing a few strokes on top of the rendered volume image. To smartly identify the targeted features of the volume, our system matches the sparse sketching input with the clustered features both in image space and volume space. To achieve interactivity, specialized algorithms to accelerate both input identification and feature matching have been developed and implemented in our system. Without resorting to tuning transfer function parameters, our proposed system accepts sparse stroke inputs and provides users with intuitive, flexible and effective interaction during volume data exploration and visualization.
Affiliation(s)
- Hanqi Guo
- Key Laboratory of Machine Perception (Ministry of Education), and School of EECS, Peking University.
20
Zheng Z, Ahmed N, Mueller K. iView: a feature clustering framework for suggesting informative views in volume visualization. IEEE Transactions on Visualization and Computer Graphics 2011; 17:1959-1968. [PMID: 22034313] [DOI: 10.1109/tvcg.2011.218]
Abstract
The unguided visual exploration of volumetric data can be both a challenging and a time-consuming undertaking. Identifying a set of favorable vantage points at which to start exploratory expeditions can greatly reduce this effort and can also ensure that no important structures are being missed. Recent research efforts have focused on entropy-based viewpoint selection criteria that depend on scalar values describing the structures of interest. In contrast, we propose a viewpoint suggestion pipeline that is based on feature-clustering in high-dimensional space. We use gradient/normal variation as a metric to identify interesting local events and then cluster these via k-means to detect important salient composite features. Next, we compute the maximum possible exposure of these composite features for different viewpoints and calculate a 2D entropy map parameterized in longitude and latitude to point out promising view orientations. Superimposed onto an interactive track-ball interface, users can then directly use this entropy map to quickly navigate to potentially interesting viewpoints where visibility-based transfer functions can be employed to generate volume renderings that minimize occlusions. To give full exploration freedom to the user, the entropy map is updated on the fly whenever a view has been selected, pointing to new and promising but so far unseen view directions. Alternatively, our system can also use a set-cover optimization algorithm to provide a minimal set of views needed to observe all features. The views so generated can then be saved into a list for further inspection or into a gallery for a summary presentation.
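Per candidate viewpoint, the entropy criterion reduces to the Shannon entropy of the visible-feature distribution. A toy numpy sketch with three invented viewpoints and three composite features (feature clustering, exposure computation, and the longitude/latitude parameterization are assumed to have happened upstream):

```python
import numpy as np

def view_entropy(feature_exposure):
    """Shannon entropy (bits) of the visible-feature distribution at one
    viewpoint; higher entropy = more features seen in balanced proportion."""
    p = np.asarray(feature_exposure, dtype=float)
    p = p[p > 0]
    p = p / p.sum()
    return float(-(p * np.log2(p)).sum())

# Rows: candidate viewpoints (a flattened longitude/latitude grid).
# Columns: projected exposure of each of 3 composite features at that view.
exposures = np.array([
    [10.0, 0.0, 0.0],   # sees only feature A
    [5.0, 5.0, 0.0],    # balanced A+B
    [4.0, 3.0, 3.0],    # sees all three features
])
entropy_map = np.array([view_entropy(e) for e in exposures])
best_view = int(np.argmax(entropy_map))
```

The viewpoint that exposes all features in balanced proportion scores highest, which is exactly the vantage point the map would suggest first.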
Affiliation(s)
- Ziyi Zheng
- Visual Analytics and Imaging (VAI) Laboratory, Center for Visual Computing, Computer Science Department, Stony Brook University, NY, USA.
21
Wang Y, Chen W, Zhang J, Dong T, Shan G, Chi X. Efficient Volume Exploration Using the Gaussian Mixture Model. IEEE Transactions on Visualization and Computer Graphics 2011; 17:1560-1573. [PMID: 21670489] [DOI: 10.1109/tvcg.2011.97]
Abstract
The multidimensional transfer function is a flexible and effective tool for exploring volume data. However, designing an appropriate transfer function is a trial-and-error process and remains a challenge. In this paper, we propose a novel volume exploration scheme that explores volumetric structures in the feature space by modeling the space using the Gaussian mixture model (GMM). Our new approach has three distinctive advantages. First, an initial feature separation can be automatically achieved through GMM estimation. Second, the calculated Gaussians can be directly mapped to a set of elliptical transfer functions (ETFs), facilitating a fast pre-integrated volume rendering process. Third, an inexperienced user can flexibly manipulate the ETFs with the assistance of a suite of simple widgets, and discover potential features with several interactions. We further extend the GMM-based exploration scheme to time-varying data sets using an incremental GMM estimation algorithm. The algorithm estimates the GMM for one time step by using itself and the GMM generated from its previous steps. Sequentially applying the incremental algorithm to all time steps in a selected time interval yields a preliminary classification for each time step. In addition, the computed ETFs can be freely adjusted. The adjustments are then automatically propagated to other time steps. In this way, coherent user-guided exploration of a given time interval is achieved. Our GPU implementation demonstrates interactive performance and good scalability. The effectiveness of our approach is verified on several data sets.
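The mapping from a fitted Gaussian to an elliptical transfer function can be sketched directly: opacity follows the Gaussian's Mahalanobis-distance footprint in feature space. The GMM estimation itself, the pre-integrated rendering, and the manipulation widgets are omitted; the mean, covariance, cutoff, and opacity values below are illustrative assumptions.

```python
import numpy as np

def elliptical_tf(x, mean, cov, max_opacity=0.8):
    """Opacity from one Gaussian's footprint in feature space: maximal at
    the mean, falling off with Mahalanobis distance, zero outside 3 sigma."""
    d = x - mean
    m2 = float(d @ np.linalg.inv(cov) @ d)   # squared Mahalanobis distance
    return max_opacity * np.exp(-0.5 * m2) if m2 < 9.0 else 0.0

mean = np.array([0.5, 0.3])                  # e.g. (value, gradient) cluster centre
cov = np.array([[0.01, 0.0], [0.0, 0.04]])   # axis-aligned elliptical footprint

at_mean = elliptical_tf(mean, mean, cov)
off_axis = elliptical_tf(np.array([0.6, 0.3]), mean, cov)
far = elliptical_tf(np.array([0.9, 0.9]), mean, cov)
```

Because each ETF is parameterized only by a mean and covariance, widget interactions (translating, scaling, rotating the ellipse) map directly onto those two parameters.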
22
Xiang D, Tian J, Yang F, Yang Q, Zhang X, Li Q, Liu X. Skeleton Cuts--An Efficient Segmentation Method for Volume Rendering. IEEE Transactions on Visualization and Computer Graphics 2011; 17:1295-1306. [PMID: 21041885] [DOI: 10.1109/tvcg.2010.239]
Abstract
Volume rendering has long been used as a key technique for volume data visualization, which works by using a transfer function to map color and opacity to each voxel. Many volume rendering approaches proposed so far for voxel classification have been limited to a single global transfer function, which is in general unable to properly visualize structures of interest. In this paper, we propose a localized volume data visualization approach which regards volume visualization as a combination of two mutually related processes: the segmentation of structures of interest and their visualization using a locally designed transfer function for each individual structure. A new interactive segmentation algorithm based on skeletons is presented to properly categorize structures of interest. In addition, a localized transfer function is subsequently presented to assign optical parameters based on properties of interest such as intensity, thickness and distance. As can be seen from the experimental results, the proposed techniques make it possible to appropriately visualize structures of interest in highly complex volume medical data sets.
23
Fujiwara T, Iwamaru M, Tange M, Someya S, Okamoto K. A fractal-based 2D expansion method for multi-scale volume data visualization. J Vis (Tokyo) 2011. [DOI: 10.1007/s12650-011-0084-z]
|
24
|
Špelič D, Žalik B. Lossless Compression of Threshold-Segmented Medical Images. J Med Syst 2011; 36:2349-57. [DOI: 10.1007/s10916-011-9702-5] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2011] [Accepted: 04/05/2011] [Indexed: 11/24/2022]
|
25
|
Prassni JS, Ropinski T, Hinrichs K. Uncertainty-aware guided volume segmentation. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2010; 16:1358-1365. [PMID: 20975176 DOI: 10.1109/tvcg.2010.208] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/29/2023]
Abstract
Although direct volume rendering is established as a powerful tool for the visualization of volumetric data, efficient and reliable feature detection is still an open topic. Usually, a tradeoff has to be made between fast but imprecise classification schemes and accurate but time-consuming segmentation techniques. Furthermore, the uncertainty introduced by the feature detection process is completely neglected by the majority of existing approaches. In this paper we propose a guided probabilistic volume segmentation approach that focuses on the minimization of uncertainty. In an iterative process, our system continuously assesses the uncertainty of a random walker-based segmentation in order to detect regions of high ambiguity, to which the user's attention is directed to support the correction of potential misclassifications. This reduces the risk of critical segmentation errors and ensures that information about the segmentation's reliability is conveyed to the user in a dependable way. To improve the efficiency of the segmentation process, our technique not only takes into account the volume data to be segmented but also enables the user to incorporate classification information. An interactive workflow has been achieved by implementing the presented system on the GPU using the OpenCL API. Our results, obtained for several medical data sets of different modalities including brain MRI and abdominal CT, demonstrate the reliability and efficiency of our approach.
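The uncertainty-guidance loop above hinges on scoring how ambiguous each voxel's soft segmentation is and steering the user toward the worst spot. A common way to score this, sketched below on a flat list of per-voxel label probabilities, is Shannon entropy; the paper does not specify its exact measure, so `voxel_uncertainty` and `most_ambiguous` are illustrative names and one plausible choice.

```python
import math

def voxel_uncertainty(probs):
    """Shannon entropy of a voxel's label probabilities from a soft
    (e.g. random walker-style) segmentation: 0 bits means certain,
    log2(k) bits means maximally ambiguous among k labels."""
    return -sum(p * math.log2(p) for p in probs if p > 0.0)

def most_ambiguous(prob_field):
    """Index of the voxel with the highest segmentation uncertainty,
    i.e. the region where directing the user's attention pays off most."""
    return max(range(len(prob_field)),
               key=lambda i: voxel_uncertainty(prob_field[i]))
```

In an interactive loop, the system would resegment after each user correction and re-query `most_ambiguous` until all uncertainties fall below a tolerance.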
|
26
|
Lindholm S, Ljung P, Lundström C, Persson A, Ynnerman A. Spatial conditioning of transfer functions using local material distributions. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2010; 16:1301-1310. [PMID: 20975170 DOI: 10.1109/tvcg.2010.195] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/30/2023]
Abstract
In many applications of Direct Volume Rendering (DVR), the importance of a certain material or feature is highly dependent on its relative spatial location. For instance, in medical diagnostic procedures, the patient's symptoms often lead to the specification of features, tissues, and organs of particular interest. One such example is pockets of gas which, if found inside the body at abnormal locations, are a crucial part of a diagnostic visualization. This paper presents an approach that enhances DVR transfer function design with spatial localization based on user-specified material dependencies. Semantic expressions are used to define conditions based on relations between different materials, such as "only render iodine uptake when close to the liver". The underlying methods rely on estimates of material distributions, acquired by weighing local neighborhoods of the data against approximations of material likelihood functions. This information is encoded and used to influence rendering according to the user's specifications. The result is improved focus on important features, achieved by allowing the user to suppress spatially less important data. In line with requirements from actual clinical DVR practice, the methods do not require explicit material segmentation, which would be impossible or prohibitively time-consuming in most real cases. The scheme scales well to higher dimensions, accommodating multidimensional transfer functions and multivariate data. Dual-Energy Computed Tomography, an important new modality in radiology, is used to demonstrate this scalability. In several examples we show significantly improved focus on clinically important aspects in the rendered images.
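The "only render X when close to Y" conditioning above can be sketched as gating a voxel's opacity on how much of the required material appears in its neighborhood. This is a crude 1-D stand-in for the paper's material-likelihood weighting: `local_material_fraction`, `conditioned_opacity`, and the 0.2 threshold are assumptions for illustration, and the real method uses likelihood functions rather than hard labels.

```python
def local_material_fraction(labels, i, radius, material):
    """Fraction of voxel i's neighborhood labelled as `material`, computed
    over a 1-D window of per-voxel material labels."""
    lo, hi = max(0, i - radius), min(len(labels), i + radius + 1)
    window = labels[lo:hi]
    return sum(1 for lab in window if lab == material) / len(window)

def conditioned_opacity(base_opacity, labels, i, radius,
                        near_material, threshold=0.2):
    """Spatially conditioned transfer function: keep the voxel's opacity
    only when enough of the required material lies nearby, e.g. 'only
    render contrast uptake when close to liver'."""
    if local_material_fraction(labels, i, radius, near_material) >= threshold:
        return base_opacity
    return 0.0
```

The design point is that the gate needs only a soft local estimate of material presence, not an explicit segmentation, which matches the clinical constraint the abstract emphasizes.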
|
27
|
Saad A, Hamarneh G, Möller T. Exploration and visualization of segmentation uncertainty using shape and appearance prior information. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2010; 16:1366-1375. [PMID: 20975177 DOI: 10.1109/tvcg.2010.152] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/30/2023]
Abstract
We develop an interactive analysis and visualization tool for probabilistic segmentation in medical imaging. The originality of our approach is that the data exploration is guided by shape and appearance knowledge learned from expert-segmented images of a training population. We introduce a set of multidimensional transfer function widgets to analyze the multivariate probabilistic field data. These widgets furnish the user with contextual information about conformance or deviation from the population statistics. We demonstrate the user's ability to identify suspicious regions (e.g. tumors) and to correct the misclassification results. We evaluate our system and demonstrate its usefulness in the context of static anatomical and time-varying functional imaging datasets.
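The "conformance or deviation from the population statistics" that the widgets above convey can be sketched as a z-score against the expert-segmented training population. This is a hedged simplification: `conformance`, `suspicious_regions`, and the 3-sigma cutoff are invented for illustration and the paper's shape and appearance priors are richer than a single scalar statistic.

```python
def conformance(value, pop_mean, pop_std):
    """Deviation of a voxel's probabilistic feature from the training
    population, in population standard deviations (a z-score)."""
    return abs(value - pop_mean) / pop_std

def suspicious_regions(values, pop_mean, pop_std, k=3.0):
    """Indices of voxels deviating more than k sigma from the expert
    population statistics, flagged for user review (e.g. possible tumor)."""
    return [i for i, v in enumerate(values)
            if conformance(v, pop_mean, pop_std) > k]
```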
Affiliation(s)
- Ahmed Saad
- School of Computer Science, Simon Fraser University, Burnaby, BC, Canada.
|
28
|
2D Histogram based volume visualization: combining intensity and size of anatomical structures. Int J Comput Assist Radiol Surg 2010; 5:655-66. [PMID: 20512631 DOI: 10.1007/s11548-010-0480-1] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2009] [Accepted: 04/27/2010] [Indexed: 10/19/2022]
Abstract
PURPOSE: Surgical planning requires 3D volume visualizations based on transfer functions (TFs) that assign optical properties to volumetric image data. Two-dimensional TFs and 2D histograms may be employed to improve overall performance.
METHODS: Anatomical structures were used for 2D TF definition in an algorithm that computes a new structure-size image from the original data set. The original image and the structure-size data set were used to generate a structure-size enhanced (SSE) histogram. Alternatively, the gradient magnitude could be used as the second property for 2D TF definition. Both types of 2D TFs were generated and compared using subjective evaluation of anatomic feature conspicuity.
RESULTS: Experiments with several medical image data sets produced SSE histograms that were subjectively judged to be more intuitive and to discriminate different anatomical structures better than gradient-magnitude-based 2D histograms.
CONCLUSIONS: In clinical applications where the size of anatomical structures is more meaningful than gradient magnitude, the 2D TF can be effective for highlighting anatomical structures in 3D visualizations.
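The SSE histogram above is a joint histogram over intensity and structure size instead of the usual intensity and gradient magnitude. A minimal binning sketch, assuming per-voxel intensities and precomputed structure sizes are available as flat lists (`sse_histogram` and its fixed-range binning are illustrative, not the paper's algorithm):

```python
def sse_histogram(intensities, sizes, bins, i_range, s_range):
    """Joint (intensity, structure-size) histogram: each voxel adds one
    count to the bin of its intensity and of the size of the structure it
    belongs to, replacing gradient magnitude as the second TF axis."""
    (i_lo, i_hi), (s_lo, s_hi) = i_range, s_range
    hist = [[0] * bins for _ in range(bins)]
    for v, s in zip(intensities, sizes):
        # Clamp into [0, bins-1] so range-boundary values stay in the grid.
        bi = min(bins - 1, max(0, int((v - i_lo) / (i_hi - i_lo) * bins)))
        bs = min(bins - 1, max(0, int((s - s_lo) / (s_hi - s_lo) * bins)))
        hist[bi][bs] += 1
    return hist
```

A 2D TF editor would then let the user paint opacity regions directly on this histogram, so structures of similar intensity but different size land in separable clusters.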
|
29
|
Parallel Mean Shift for Interactive Volume Segmentation. MACHINE LEARNING IN MEDICAL IMAGING 2010. [DOI: 10.1007/978-3-642-15948-0_9] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
|