1.
Kumar A, Zhang X, Xin HL, Yan H, Huang X, Xu W, Mueller K. RadVolViz: An Information Display-Inspired Transfer Function Editor for Multivariate Volume Visualization. IEEE Transactions on Visualization and Computer Graphics 2024; 30:4464-4479. [PMID: 37030815] [DOI: 10.1109/tvcg.2023.3263856]
Abstract
In volume visualization, transfer functions are widely used for mapping voxel properties to color and opacity. Typically, volume density data are scalars, which require only simple 1D transfer functions to achieve this mapping. If the volume densities are vectors of three channels, one can straightforwardly map each channel to red, green, or blue, which requires only a trivial extension of the 1D transfer function editor. We devise a new method that applies to volume data with more than three channels. These types of data often arise in scientific scanning applications, where the data are separated into spectral bands or chemical elements. Our method expands on prior work in which a multivariate information display, RadViz, was fused with a radial color map in order to visualize multi-band 2D images. In this work, we extend this joint interface to blended volume rendering. The information display allows users to recognize the presence and value distribution of the multivariate voxels, and the joint volume rendering display visualizes their spatial distribution. We design a set of operators and lenses that allow users to interactively control the mapping of the multivariate voxels to opacity and color. This enables users to isolate or emphasize volumetric structures with desired multivariate properties. Furthermore, it turns out that our method also enables more insightful displays even for RGB data. We demonstrate our method with three datasets obtained from spectral electron microscopy, high-energy X-ray scanning, and atmospheric science.
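As background for readers unfamiliar with RadViz, the layout this interface builds on can be sketched in a few lines. The sketch below is an illustrative reconstruction, not the authors' implementation: each of the N channels is assigned an anchor on the unit circle, and a voxel is placed at the channel-weighted average of the anchors.

```python
import math

def radviz_project(values):
    """Place an N-channel sample inside the unit disk (RadViz layout).

    Channel i gets an anchor at angle 2*pi*i/N on the unit circle; the
    sample sits at the channel-weighted average of the anchors, so a
    channel-dominant voxel is pulled toward that channel's anchor.
    """
    n = len(values)
    total = sum(values)
    if total == 0:
        return (0.0, 0.0)  # an all-zero sample maps to the center
    x = sum(v * math.cos(2 * math.pi * i / n) for i, v in enumerate(values)) / total
    y = sum(v * math.sin(2 * math.pi * i / n) for i, v in enumerate(values)) / total
    return (x, y)
```

A voxel with equal weight in all channels lands at the center of the disk, which is why such a display conveys both the presence and the balance of channels at a glance.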
2.
Zhao H, Zhang ZW, Yang HW, Wei GH. Research on spatial carving method of glutenite reservoir based on opacity voxel imaging. Sci Rep 2024; 14:12667. [PMID: 38831094] [PMCID: PMC11637114] [DOI: 10.1038/s41598-024-63643-2]
Abstract
The glutenite reservoir in an exploration area in eastern China is well-developed and holds significant exploration potential as an important oil and gas alternative layer. However, due to the influence of sedimentary characteristics, the glutenite reservoir exhibits strong lateral heterogeneity, significant vertical thickness variations, and low accuracy in reservoir space characterization, which affects the reasonable and effective deployment of development wells. Seismic data contain the three-dimensional spatial characteristics of geological bodies, but designing a suitable transfer function to extract the nonlinear relationship between seismic data and reservoirs is crucial. At present, transfer functions are concentrated in low-dimensional or high-dimensional fixed mathematical models, which cannot accurately describe the nonlinear relationship between seismic data and complex reservoirs, resulting in low spatial description accuracy for complex reservoirs. In this regard, this paper first utilizes a fusion method based on a probability kernel to fuse seismic attributes such as wave impedance, effective bandwidth, and composite envelope difference. This provides a more intuitive reflection of the distribution characteristics of glutenite reservoirs. Moreover, a hybrid nonlinear transfer function is established to transform the fused attribute cube into an opacity attribute cube. Finally, an illumination model and ray casting are used to perform voxel imaging of the glutenite reservoirs and brighten the detailed characteristics of reservoir space, forming a set of methods for 'brightening reservoirs and darkening non-reservoirs' that improves the spatial carving accuracy of glutenite reservoirs.
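The paper's hybrid nonlinear transfer function is not specified in the abstract. As a hedged illustration of the "brightening reservoirs, darkening non-reservoirs" idea, a sigmoid opacity mapping over a fused attribute might look as follows; the function name, center, and sharpness are placeholders, not values from the paper:

```python
import math

def opacity_transfer(a, center=0.6, sharpness=12.0):
    """Hypothetical sigmoid opacity mapping for a fused attribute in [0, 1].

    Attribute values above `center` (likely reservoir) become nearly
    opaque; values below fade toward transparent, suppressing non-reservoir
    voxels in the rendered volume.
    """
    return 1.0 / (1.0 + math.exp(-sharpness * (a - center)))
```

The mapping is monotone, so sweeping `center` interactively shifts the brightened/darkened boundary without reshaping the curve.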
Affiliation(s)
- Hu Zhao: Natural Gas Geology Key Laboratory of Sichuan Province, Southwest Petroleum University, Chengdu 610500, China; School of Geoscience and Technology, Southwest Petroleum University, Chengdu 610500, China
- Zhong-Wei Zhang: School of Geoscience and Technology, Southwest Petroleum University, Chengdu 610500, China
- Hong-Wei Yang: Geophysical Exploration Institute, Shengli Oilfield Company, SINOPEC, Dongying 257000, China
- Guo-Hua Wei: Geophysical Exploration Institute, Shengli Oilfield Company, SINOPEC, Dongying 257000, China
3.
Jadhav S, Torkaman M, Tannenbaum A, Nadeem S, Kaufman AE. Volume Exploration Using Multidimensional Bhattacharyya Flow. IEEE Transactions on Visualization and Computer Graphics 2023; 29:1651-1663. [PMID: 34780328] [PMCID: PMC9594946] [DOI: 10.1109/tvcg.2021.3127918]
Abstract
We present a novel approach for volume exploration that is versatile yet effective in isolating semantic structures in both noisy and clean data. Specifically, we describe a hierarchical active contours approach based on Bhattacharyya gradient flow that is easier to control, robust to noise, and can incorporate various types of statistical information to drive an edge-agnostic exploration process. To facilitate a time-bound, user-driven volume exploration process that is applicable to a wide variety of data sources, we present an efficient multi-GPU implementation that (1) is approximately 400 times faster than a single-threaded CPU implementation, (2) allows hierarchical exploration of 2D and 3D images, (3) supports customization through multidimensional attribute spaces, and (4) is applicable to a variety of data sources and semantic structures. The exploration system follows a two-step process. It first applies active contours to isolate semantically meaningful subsets of the volume. It then applies transfer functions to the isolated regions locally to produce clear and clutter-free visualizations. We show the effectiveness of our approach in isolating and visualizing structures of interest without needing any specialized segmentation methods on a variety of data sources, including 3D optical microscopy, multi-channel optical volumes, abdominal and chest CT, micro-CT, MRI, simulation, and synthetic data. We also gathered feedback from a medical trainee regarding the usefulness of our approach and discuss potential applications in clinical workflows.
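The statistical driver of such a flow is the Bhattacharyya measure between the value distributions inside and outside the evolving contour. The full gradient flow is beyond a short sketch, but the measure itself is simple for discrete histograms (illustrative code, not the authors' implementation):

```python
import math

def bhattacharyya_coefficient(p, q):
    """Overlap between two discrete probability distributions:
    1.0 means identical, 0.0 means disjoint support."""
    return sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))

def bhattacharyya_distance(p, q):
    """Distance derived from the coefficient; 0 for identical
    distributions, infinite for disjoint ones."""
    bc = bhattacharyya_coefficient(p, q)
    return -math.log(bc) if bc > 0 else float("inf")
```

A flow that maximizes this distance between inside/outside histograms pushes the contour toward a statistically distinct region, with no reliance on edges.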
4.
Jadhav S, Nadeem S, Kaufman A. FeatureLego: Volume Exploration Using Exhaustive Clustering of Super-Voxels. IEEE Transactions on Visualization and Computer Graphics 2019; 25:2725-2737. [PMID: 30028709] [PMCID: PMC6703906] [DOI: 10.1109/tvcg.2018.2856744]
Abstract
We present a volume exploration framework, FeatureLego, that uses a novel voxel clustering approach for efficient selection of semantic features. We partition the input volume into a set of compact super-voxels that represent the finest selection granularity. We then perform an exhaustive clustering of these super-voxels using a graph-based clustering method. Unlike the prevalent brute-force parameter sampling approaches, we propose an efficient algorithm to perform this exhaustive clustering. By computing an exhaustive set of clusters, we aim to capture as many boundaries as possible and ensure that the user has sufficient options for efficiently selecting semantically relevant features. Furthermore, we merge all the computed clusters into a single tree of meta-clusters that can be used for hierarchical exploration. We implement an intuitive user interface to interactively explore volumes using our clustering approach. Finally, we show the effectiveness of our framework on multiple real-world datasets of different modalities.
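A single graph-based merge of super-voxels can be sketched with a union-find structure. This is illustrative only: the paper's contribution is enumerating an exhaustive set of clusterings efficiently, whereas the toy version below merges at one fixed dissimilarity threshold.

```python
def cluster_supervoxels(n, edges, threshold):
    """Union-find clustering of a super-voxel adjacency graph.

    `edges` is a list of (a, b, weight) tuples, where weight is a
    feature dissimilarity; super-voxels joined by an edge below
    `threshold` end up in the same cluster. Returns a root label
    per super-voxel.
    """
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    for a, b, w in edges:
        if w < threshold:
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[ra] = rb
    return [find(i) for i in range(n)]
```

Sweeping `threshold` over all edge weights yields the nested family of merges that a hierarchical tree of meta-clusters summarizes.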
5.
Berger M, Li J, Levine JA. A Generative Model for Volume Rendering. IEEE Transactions on Visualization and Computer Graphics 2019; 25:1636-1650. [PMID: 29993811] [DOI: 10.1109/tvcg.2018.2816059]
Abstract
We present a technique to synthesize and analyze volume-rendered images using generative models. We use the Generative Adversarial Network (GAN) framework to compute a model from a large collection of volume renderings, conditioned on (1) viewpoint and (2) transfer functions for opacity and color. Our approach facilitates tasks for volume analysis that are challenging to achieve using existing rendering techniques such as ray casting or texture-based methods. We show how to guide the user in transfer function editing by quantifying expected change in the output image. Additionally, the generative model transforms transfer functions into a view-invariant latent space specifically designed to synthesize volume-rendered images. We use this space directly for rendering, enabling the user to explore the space of volume-rendered images. As our model is independent of the choice of volume rendering process, we show how to analyze volume-rendered images produced by direct and global illumination lighting, for a variety of volume datasets.
6.
Cheng HC, Cardone A, Jain S, Krokos E, Narayan K, Subramaniam S, Varshney A. Deep-Learning-Assisted Volume Visualization. IEEE Transactions on Visualization and Computer Graphics 2019; 25:1378-1391. [PMID: 29994182] [PMCID: PMC8369530] [DOI: 10.1109/tvcg.2018.2796085]
Abstract
Designing volume visualizations showing various structures of interest is critical to the exploratory analysis of volumetric data. The last few years have witnessed dramatic advances in the use of convolutional neural networks for identification of objects in large image collections. Whereas such machine learning methods have shown superior performance in a number of applications, their direct use in volume visualization has not yet been explored. In this paper, we present a deep-learning-assisted volume visualization to depict complex structures, which are otherwise challenging for conventional approaches. A significant challenge in designing volume visualizations based on the high-dimensional deep features lies in efficiently handling the immense amount of information that deep-learning methods provide. In this paper, we present a new technique that uses spectral methods to facilitate user interactions with high-dimensional features. We also present a new deep-learning-assisted technique for hierarchically exploring a volumetric dataset. We have validated our approach on two electron microscopy volumes and one magnetic resonance imaging dataset.
7.
Zhou B, Chiang YJ, Wang C. Efficient Local Statistical Analysis via Point-Wise Histograms in Tetrahedral Meshes and Curvilinear Grids. IEEE Transactions on Visualization and Computer Graphics 2019; 25:1392-1406. [PMID: 29994603] [DOI: 10.1109/tvcg.2018.2796555]
Abstract
Local histograms (i.e., point-wise histograms computed from local regions around mesh vertices) have been used in many data analysis and visualization applications. Previous methods for computing local histograms mainly work only for regular or rectilinear grids. In this paper, we develop theory and novel algorithms for computing local histograms in tetrahedral meshes and curvilinear grids. Our algorithms are theoretically sound and efficient, and they work effectively and quickly in practice. Our main focus is on scalar fields, but the algorithms also extend to vector fields with small, easy modifications. Our methods can benefit information-theoretic and other distribution-driven analysis. The experiments demonstrate the efficacy of our new techniques, including a utility case study on tetrahedral vector field visualization.
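The point-wise idea is simple to state for any mesh: gather values from a vertex's neighborhood and bin them. The naive per-vertex sketch below conveys the concept only; the paper's contribution is computing such histograms correctly and efficiently over tetrahedral and curvilinear cells, which this toy version ignores.

```python
def local_histogram(values, neighbors, v, bins=4, vmin=0.0, vmax=1.0):
    """Point-wise histogram at mesh vertex v.

    `values[u]` is the scalar at vertex u; `neighbors[v]` lists the
    vertices adjacent to v. The histogram covers v plus its 1-ring,
    binned uniformly over [vmin, vmax].
    """
    hist = [0] * bins
    for u in [v] + list(neighbors[v]):
        b = min(int((values[u] - vmin) / (vmax - vmin) * bins), bins - 1)
        hist[b] += 1
    return hist
```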
8.
Jung Y, Kim J, Bi L, Kumar A, Feng DD, Fulham M. A direct volume rendering visualization approach for serial PET-CT scans that preserves anatomical consistency. Int J Comput Assist Radiol Surg 2019; 14:733-744. [PMID: 30661169] [DOI: 10.1007/s11548-019-01916-2]
Abstract
PURPOSE: Our aim was to develop an interactive 3D direct volume rendering (DVR) visualization solution to interpret and analyze complex, serial multi-modality imaging datasets from positron emission tomography-computed tomography (PET-CT).
METHODS: Our approach uses: (i) serial transfer function (TF) optimization to automatically depict particular regions of interest (ROIs) over serial datasets with consistent anatomical structures; (ii) integration of a serial segmentation algorithm to interactively identify and track ROIs on PET; and (iii) a parallel graphics processing unit (GPU) implementation for interactive visualization.
RESULTS: Our DVR visualization identifies changes in ROIs across serial scans in an automated fashion, and the parallel GPU computation enables interactive visualization.
CONCLUSIONS: Our approach provides rapid 3D visualization of relevant ROIs over multiple scans, and we suggest that it can be used as an adjunct to conventional 2D viewing software from scanner vendors.
Affiliation(s)
- Younhyun Jung: Biomedical & Multimedia Information Technology Research Group, School of Computer Science, The University of Sydney, Sydney, Australia
- Jinman Kim: Biomedical & Multimedia Information Technology Research Group, School of Computer Science, The University of Sydney, Sydney, Australia
- Lei Bi: Biomedical & Multimedia Information Technology Research Group, School of Computer Science, The University of Sydney, Sydney, Australia
- Ashnil Kumar: Biomedical & Multimedia Information Technology Research Group, School of Computer Science, The University of Sydney, Sydney, Australia
- David Dagan Feng: Biomedical & Multimedia Information Technology Research Group, School of Computer Science, The University of Sydney, Sydney, Australia; Med-X Research Institute, Shanghai Jiao Tong University, Shanghai, China
- Michael Fulham: Sydney Medical School, The University of Sydney, Sydney, Australia; Department of Molecular Imaging, Royal Prince Alfred Hospital, Sydney, Australia
9.
Ma B, Entezari A. Volumetric Feature-Based Classification and Visibility Analysis for Transfer Function Design. IEEE Transactions on Visualization and Computer Graphics 2018; 24:3253-3267. [PMID: 29989987] [DOI: 10.1109/tvcg.2017.2776935]
Abstract
Transfer function (TF) design is a central topic in direct volume rendering (DVR). The TF fundamentally translates data values into optical properties to reveal relevant features present in the volumetric data. We propose a semi-automatic TF design scheme that consists of two steps: First, we present a clustering process within the 1D/2D TF domain based on the proximities of the respective volumetric features in the spatial domain. This approach provides an interactive tool that aids users in exploring clusters and identifying features of interest (FOI). Second, our method automatically generates a TF by iteratively refining the optical properties for the selected features using a novel feature visibility measurement. The proposed visibility measurement leverages the similarities of features to enhance their visibility in DVR images. Compared to the conventional visibility measurement, the proposed feature visibility is able to efficiently sense opacity changes and precisely evaluate the impact of selected features on the resulting visualizations. Our experiments validate the effectiveness of the proposed approach by demonstrating the advantages of integrating feature similarity into the visibility computations. We examine a number of datasets to establish the utility of our approach for semi-automatic TF design.
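The conventional visibility notion this work extends comes from front-to-back alpha compositing: a sample's visibility is its opacity times the transmittance accumulated in front of it along the viewing ray. A minimal sketch (not the authors' feature-aware measure):

```python
def voxel_visibility(alphas):
    """Per-sample visibility along one ray under front-to-back
    alpha compositing: v_i = alpha_i * prod_{j<i} (1 - alpha_j)."""
    vis = []
    transmittance = 1.0  # fraction of light still unblocked
    for a in alphas:
        vis.append(a * transmittance)
        transmittance *= (1.0 - a)
    return vis
```

Summing these per-sample contributions over all rays gives the visibility histogram that TF optimization schemes try to shape.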
10.
Khan NM, Ksantini R, Guan L. A Novel Image-Centric Approach Toward Direct Volume Rendering. ACM Transactions on Intelligent Systems and Technology 2018. [DOI: 10.1145/3152875]
Abstract
Transfer function (TF) generation is a fundamental problem in direct volume rendering (DVR). A TF maps voxels to color and opacity values to reveal inner structures. Existing TF tools are complex and unintuitive for the users who are more likely to be medical professionals than computer scientists. In this article, we propose a novel image-centric method for TF generation where instead of complex tools, the user directly manipulates volume data to generate DVR. The user’s work is further simplified by presenting only the most informative volume slices for selection. Based on the selected parts, the voxels are classified using our novel sparse nonparametric support vector machine classifier, which combines both local and near-global distributional information of the training data. The voxel classes are mapped to aesthetically pleasing and distinguishable color and opacity values using harmonic colors. Experimental results on several benchmark datasets and a detailed user survey show the effectiveness of the proposed method.
11.
Upsampling for Improved Multidimensional Attribute Space Clustering of Multifield Data. Information 2018. [DOI: 10.3390/info9070156]
13.
Lan S, Wang L, Song Y, Wang YP, Yao L, Sun K, Xia B, Xu Z. Improving Separability of Structures with Similar Attributes in 2D Transfer Function Design. IEEE Transactions on Visualization and Computer Graphics 2017; 23:1546-1560. [PMID: 26955038] [DOI: 10.1109/tvcg.2016.2537341]
Abstract
The 2D transfer function based on scalar value and gradient magnitude (SG-TF) is popular in volume rendering. However, it is plagued by the boundary-overlapping problem: different structures with similar attributes occupy the same region in SG-TF space, and their boundaries are usually connected. The SG-TF thus often fails to separate these structures (or their boundaries) and has limited ability to classify different objects in real-world 3D images. To overcome this difficulty, we propose a novel method for boundary separation that integrates spatial connectivity computation of the boundaries and set operations on boundary voxels into the SG-TF. Specifically, the spatial positions of boundaries and their regions in SG-TF space are computed, from which boundaries can be well separated and volume rendered in different colors. In the method, the boundaries are divided into three classes, and different boundary-separation techniques are applied to each. The complex task of separating various boundaries in 3D images is then simplified by breaking it into several small separation problems. The method shows good object classification ability in real-world 3D images while avoiding the complexity of high-dimensional transfer functions. Its effectiveness is validated by many experimental results visualizing boundaries of different structures in complex real-world 3D images.
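The SG-TF domain is simply a 2D histogram over (scalar value, gradient magnitude); the boundary-overlapping problem shows up as different structures sharing bins. A toy construction of that domain (bin count and value ranges are arbitrary choices, not from the paper):

```python
def sg_histogram(scalars, grads, bins=4, s_max=1.0, g_max=1.0):
    """Build the 2D (scalar, gradient-magnitude) histogram that forms
    the domain of an SG transfer function.

    `scalars` and `grads` are per-voxel values in [0, s_max] and
    [0, g_max]; hist[j][i] counts voxels in scalar bin i, gradient bin j.
    """
    hist = [[0] * bins for _ in range(bins)]
    for s, g in zip(scalars, grads):
        i = min(int(s / s_max * bins), bins - 1)
        j = min(int(g / g_max * bins), bins - 1)
        hist[j][i] += 1
    return hist
```

Two materials with similar scalar and gradient statistics fall into the same cells of this table, which is exactly why the paper adds spatial-connectivity information to tell their boundaries apart.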
14.
Wu K, Knoll A, Isaac BJ, Carr H, Pascucci V. Direct Multifield Volume Ray Casting of Fiber Surfaces. IEEE Transactions on Visualization and Computer Graphics 2017; 23:941-949. [PMID: 27875207] [DOI: 10.1109/tvcg.2016.2599040]
Abstract
Multifield data are common in visualization. However, reducing these data to comprehensible geometry is a challenging problem. Fiber surfaces, an analogy of isosurfaces for bivariate volume data, are a promising new mechanism for understanding multifield volumes. In this work, we explore direct ray casting of fiber surfaces from volume data without any explicit geometry extraction. We sample directly along rays in domain space and perform geometric tests in range space, where fibers are defined, using a signed distance field derived from the control polygons. Our method requires little preprocessing and enables real-time exploration of data, dynamic modification and pixel-exact rendering of fiber surfaces, and support for higher-order interpolation in domain space. We demonstrate this approach on several bivariate datasets, including an analysis of multifield combustion data.
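The core range-space question is whether a sample's bivariate value falls inside the fiber surface's control polygon. The paper uses a signed distance field for pixel-exact root finding; the bare membership test can be sketched with an even-odd ray-casting rule (illustrative, not the authors' code):

```python
def inside_polygon(pt, poly):
    """Even-odd test: is the bivariate sample `pt` = (f1, f2) inside the
    control polygon `poly` (list of (x, y) vertices) in range space?

    Casts a ray toward +x and counts edge crossings; an odd count
    means the point is inside.
    """
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal through pt
            x_int = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_int:
                inside = not inside
    return inside
```

Along a domain-space ray, a sign change of this membership between consecutive samples brackets a fiber-surface crossing to refine.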
15.
Mefraz Khan N, Kyan M, Guan L. Intuitive volume exploration through spherical self-organizing map and color harmonization. Neurocomputing 2015. [DOI: 10.1016/j.neucom.2013.09.064]
16.
Nakao M, Takemoto S, Sugiura T, Sawada K, Kawakami R, Nemoto T, Matsuda T. Interactive visual exploration of overlapping similar structures for three-dimensional microscope images. BMC Bioinformatics 2014; 15:415. [PMID: 25523409] [PMCID: PMC4279998] [DOI: 10.1186/s12859-014-0415-x]
Abstract
Background: Recent advances in microscopy enable the acquisition of large numbers of tomographic images from living tissues. Three-dimensional microscope images are often displayed with volume rendering by adjusting the transfer functions. However, because the emissions from fluorescent materials and the optical properties based on point spread functions affect the imaging results, the intensity value can differ locally, even in the same structure. Further, images obtained from brain tissues contain a variety of neural structures such as dendrites and axons with complex crossings and overlapping linear structures. In these cases, the transfer functions previously used fail to optimize image generation, making it difficult to explore the connectivity of these tissues.
Results: This paper proposes an interactive visual exploration method by which the transfer functions are modified locally and interactively based on multidimensional features in the images. A direct editing interface is also provided to specify both the target region and structures with characteristic features, where all manual operations can be performed on the rendered image. This method is demonstrated using two-photon microscope images acquired from living mice, and is shown to be an effective method for interactive visual exploration of overlapping similar structures.
Conclusions: An interactive visualization method was introduced for local improvement of volume rendering in two-photon microscope images containing regions in which linear nerve structures crisscross in a complex manner. The proposed method is characterized by a localized multidimensional transfer function and an interface whose parameters can be determined by the user to suit their particular visualization requirements.
Affiliation(s)
- Megumi Nakao: Graduate School of Informatics, Kyoto University, Yoshida Honmachi, Sakyo, Kyoto, Japan
- Shintaro Takemoto: Graduate School of Informatics, Kyoto University, Yoshida Honmachi, Sakyo, Kyoto, Japan
- Tadao Sugiura: Graduate School of Information Science, Nara Institute of Science and Technology, 8916-5 Takayama, Ikoma, Nara, Japan
- Kazuaki Sawada: Graduate School of Information Science and Technology, Hokkaido University, Sapporo, Hokkaido, Japan
- Ryosuke Kawakami: Graduate School of Information Science and Technology, Hokkaido University, Sapporo, Hokkaido, Japan; Research Institute for Electronic Science, Hokkaido University, Sapporo, Japan
- Tomomi Nemoto: Graduate School of Information Science and Technology, Hokkaido University, Sapporo, Hokkaido, Japan; Research Institute for Electronic Science, Hokkaido University, Sapporo, Japan
- Tetsuya Matsuda: Graduate School of Informatics, Kyoto University, Yoshida Honmachi, Sakyo, Kyoto, Japan
17.
Qin H, Ye B, He R. The voxel visibility model: an efficient framework for transfer function design. Comput Med Imaging Graph 2014; 40:138-146. [PMID: 25510474] [DOI: 10.1016/j.compmedimag.2014.11.014]
Abstract
Volume visualization is very important in medical imaging and surgical planning. However, determining an ideal transfer function is still a challenging task because of the lack of measurable metrics for the quality of volume visualization. In this paper, we present the voxel visibility model as a quality metric: instead of designing transfer functions directly, the user designs the desired visibility for voxels. Transfer functions are then obtained by minimizing the distance between the desired visibility distribution and the actual visibility distribution. The voxel visibility model is a mapping function from the feature attributes of voxels to their visibility. To consider between-class information and within-class information simultaneously, the voxel visibility model is described as a Gaussian mixture model. To highlight important features, a matching result can be obtained by changing the parameters of the voxel visibility model through a simple and effective interface. We also propose an algorithm for transfer function optimization. The effectiveness of the method is demonstrated through experimental results on several volumetric data sets.
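A desired-visibility mapping of this Gaussian-mixture flavor can be sketched in one function. The component weights, means, and widths below are placeholders standing in for values a user would set through the interface, not parameters from the paper:

```python
import math

def desired_visibility(a, components):
    """GMM-style mapping from a voxel feature value `a` to a desired
    visibility in [0, sum of weights].

    `components` is a list of (weight, mean, sigma) tuples; each Gaussian
    bump raises the desired visibility of voxels near its mean feature
    value, so separate bumps emphasize separate feature classes.
    """
    return sum(w * math.exp(-((a - m) ** 2) / (2.0 * s * s))
               for w, m, s in components)
```

An optimizer would then adjust the opacity transfer function until the actual (compositing-derived) visibility distribution matches this target.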
Affiliation(s)
- Hongxing Qin: Chongqing Key Laboratory of Computational Intelligence, Chongqing 400065, China; College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Bin Ye: Chongqing Key Laboratory of Computational Intelligence, Chongqing 400065, China; College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Rui He: Chongqing Key Laboratory of Computational Intelligence, Chongqing 400065, China; College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
18.
Waldner M, Le Muzic M, Bernhard M, Purgathofer W, Viola I. Attractive Flicker: Guiding Attention in Dynamic Narrative Visualizations. IEEE Transactions on Visualization and Computer Graphics 2014; 20:2456-2465. [PMID: 26356959] [DOI: 10.1109/tvcg.2014.2346352]
Abstract
Focus+context techniques provide visual guidance in visualizations by giving strong visual prominence to elements of interest while the context is suppressed. However, finding a visual feature to enhance for the focus to pop out from its context in a large dynamic scene, while leading to minimal visual deformation and subjective disturbance, is challenging. This paper proposes Attractive Flicker, a novel technique for visual guidance in dynamic narrative visualizations. We first show that flicker is a strong visual attractor in the entire visual field, without distorting, suppressing, or adding any scene elements. The novel aspect of our Attractive Flicker technique is that it consists of two signal stages: The first "orientation stage" is a short but intensive flicker stimulus to attract the attention to elements of interest. Subsequently, the intensive flicker is reduced to a minimally disturbing luminance oscillation ("engagement stage") as visual support to keep track of the focus elements. To find a good trade-off between attraction effectiveness and subjective annoyance caused by flicker, we conducted two perceptual studies to find suitable signal parameters. We showcase Attractive Flicker with the parameters obtained from these perceptual studies in a study of molecular interactions. With Attractive Flicker, users were able to easily follow the narrative of the visualization on a large display, while the flickering of focus elements was not disturbing when observing the context.
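The two-stage signal can be sketched as a luminance offset whose amplitude drops after the orientation stage ends. The frequency, stage duration, and amplitudes below are illustrative placeholders, not the values from the authors' perceptual studies:

```python
import math

def flicker_luminance(t, t_orient=2.0, f=8.0, amp_orient=0.5, amp_engage=0.1):
    """Luminance offset added to a focus element at time t (seconds).

    Orientation stage (t < t_orient): a strong oscillation to capture
    attention. Engagement stage (t >= t_orient): the same frequency at a
    much smaller amplitude, enough to keep the element trackable without
    being disturbing.
    """
    amp = amp_orient if t < t_orient else amp_engage
    return amp * math.sin(2.0 * math.pi * f * t)
```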
19.
Eichelbaum S, Dannhauer M, Hlawitschka M, Brooks D, Knösche TR, Scheuermann G. Visualizing simulated electrical fields from electroencephalography and transcranial electric brain stimulation: a comparative evaluation. Neuroimage 2014; 101:513-530. [PMID: 24821532] [PMCID: PMC4172355] [DOI: 10.1016/j.neuroimage.2014.04.085]
Abstract
Electrical activity of neuronal populations is a crucial aspect of brain activity. This activity is not measured directly but recorded as electrical potential changes using head surface electrodes (electroencephalogram, EEG). Head surface electrodes can also be deployed to inject electrical currents in order to modulate brain activity (transcranial electric stimulation techniques) for therapeutic and neuroscientific purposes. In electroencephalography and noninvasive electric brain stimulation, electrical fields mediate between electrical signal sources and regions of interest (ROI). These fields can be very complicated in structure, and they are influenced in a complex way by the conductivity profile of the human head. Visualization techniques play a central role in grasping the nature of those fields because they convey complex data effectively and enable quick qualitative and quantitative assessments. Visualization can be used to examine volume conduction effects of particular head model parameterizations (e.g., skull thickness and layering), brain anomalies (e.g., holes in the skull, tumors), the location and extent of active brain areas (e.g., high concentrations of current density), and the fields around current-injecting electrodes. Here, we evaluate a number of widely used visualization techniques based on either the potential distribution or the current flow. In particular, we focus on the extractability of quantitative and qualitative information from the obtained images, their effective integration of anatomical context information, and their interaction. We present illustrative examples from clinically and neuroscientifically relevant cases and discuss the pros and cons of the various visualization techniques.
Affiliation(s)
- Sebastian Eichelbaum, Image and Signal Processing Group, Leipzig University, Augustusplatz 10-11, 04109 Leipzig, Germany.
- Moritz Dannhauer, Scientific Computing and Imaging Institute and Center for Integrative Biomedical Computing, University of Utah, 72 S. Central Campus Drive, Salt Lake City, UT 84112, USA.
- Mario Hlawitschka, Scientific Visualization, Leipzig University, Augustusplatz 10-11, 04109 Leipzig, Germany.
- Dana Brooks, Center for Integrative Biomedical Computing, University of Utah, 72 S. Central Campus Drive, Salt Lake City, UT 84112, USA; Department of Electrical and Computer Engineering, Northeastern University, Boston, MA, USA.
- Thomas R. Knösche, Human Cognitive and Brain Sciences, Max Planck Institute, Stephanstraße 1a, 04103 Leipzig, Germany.
- Gerik Scheuermann, Image and Signal Processing Group, Leipzig University, Augustusplatz 10-11, 04109 Leipzig, Germany.
20
Nakao M, Kurebayashi K, Sugiura T, Sato T, Sawada K, Kawakami R, Nemoto T, Minato K, Matsuda T. Visualizing in vivo brain neural structures using volume rendered feature spaces. Comput Biol Med 2014; 53:85-93. PMID: 25129020; DOI: 10.1016/j.compbiomed.2014.07.007.
Abstract
BACKGROUND Dendrites of cortical neurons are widely spread across several layers of the cortex. Recently developed two-photon microscopy systems are capable of visualizing the morphology of neurons within deeper layers of the brain and generate large amounts of volumetric imaging data from living tissue. METHOD For visual exploration of the three-dimensional (3D) structure of dendrites and the connectivity among neurons in the brain, we propose visualization software and an interface for 3D images based on a new transfer function design using volume rendered feature spaces. This software enables the visualization of multidimensional descriptors of shape and texture extracted from imaging data to characterize tissue. It also allows the efficient analysis and visualization of large data sets. RESULTS Applying the developed visualization software and algorithms to two-photon microscopy images of a living mouse brain, we identified a set of feature values that distinguish characteristic structures such as somata, dendrites and apical dendrites. We also compared the visualization interface to a conventional 1D/2D transfer function system. CONCLUSIONS We have developed a visualization tool and interface that can represent 3D feature values as textures and shapes. This visualization system allows the analysis and characterization of the higher-dimensional feature values of living tissues at the micron level and will contribute to new discoveries in basic biology and clinical medicine.
Affiliation(s)
- Megumi Nakao, Graduate School of Informatics, Kyoto University, Yoshida Honmachi, Sakyo, Kyoto, Japan.
- Kosuke Kurebayashi, Graduate School of Information Science, Nara Institute of Science and Technology, 8916-5 Takayama, Ikoma, Nara, Japan.
- Tadao Sugiura, Graduate School of Information Science, Nara Institute of Science and Technology, 8916-5 Takayama, Ikoma, Nara, Japan.
- Tetsuo Sato, Graduate School of Information Science, Nara Institute of Science and Technology, 8916-5 Takayama, Ikoma, Nara, Japan.
- Kazuaki Sawada, Graduate School of Information Science and Technology, Hokkaido University, Sapporo, Hokkaido, Japan.
- Ryosuke Kawakami, Graduate School of Information Science and Technology, Hokkaido University, Sapporo, Hokkaido, Japan; Research Institute for Electronic Science, Hokkaido University, Sapporo, Japan.
- Tomomi Nemoto, Graduate School of Information Science and Technology, Hokkaido University, Sapporo, Hokkaido, Japan; Research Institute for Electronic Science, Hokkaido University, Sapporo, Japan.
- Kotaro Minato, Graduate School of Information Science, Nara Institute of Science and Technology, 8916-5 Takayama, Ikoma, Nara, Japan.
- Tetsuya Matsuda, Graduate School of Informatics, Kyoto University, Yoshida Honmachi, Sakyo, Kyoto, Japan.
21
Automatic transfer function design for medical visualization using visibility distributions and projective color mapping. Comput Med Imaging Graph 2013; 37:450-8. PMID: 24070670; DOI: 10.1016/j.compmedimag.2013.08.008.
Abstract
Transfer functions play a key role in volume rendering of medical data, but transfer function manipulation is unintuitive and can be time-consuming; achieving an optimal visualization of patient anatomy or pathology is difficult. To overcome this problem, we present a system for automatic transfer function design based on visibility distributions and projective color mapping. Instead of assigning opacity directly based on voxel intensity and gradient magnitude, the opacity transfer function is automatically derived by matching the observed visibility distribution to a target visibility distribution. An automatic color assignment scheme based on projective mapping is proposed to assign colors that allow for the visual discrimination of different structures, while also reflecting the degree of similarity between them. When our method was tested on several medical volumetric datasets, the key structures within the volume were clearly visualized with minimal user intervention.
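The per-sample "visibility" this abstract matches against a target distribution can be illustrated with a minimal front-to-back compositing sketch. This is not the authors' implementation; the function name and the toy opacities are ours.

```python
import numpy as np

def visibility_along_ray(opacities):
    """Per-sample visibility under front-to-back compositing:
    vis[i] = alpha[i] times the transmittance of all samples in front of it."""
    alphas = np.asarray(opacities, dtype=float)
    # Transmittance reaching sample i: product of (1 - alpha_j) for j < i.
    transmittance = np.concatenate(([1.0], np.cumprod(1.0 - alphas)[:-1]))
    return alphas * transmittance

# A fully opaque front sample takes all the visibility; samples behind get none.
vis = visibility_along_ray([1.0, 0.5, 0.9])
```

A visibility distribution in the paper's sense would then histogram these weights over voxel values, so that opacity can be adjusted until the observed histogram matches a target one.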
22
Maciejewski R, Jang Y, Woo I, Jänicke H, Gaither KP, Ebert DS. Abstracting Attribute Space for Transfer Function Exploration and Design. IEEE Transactions on Visualization and Computer Graphics 2013; 19:94-107. PMID: 22508900; DOI: 10.1109/tvcg.2012.105.
Abstract
Currently, user-centered transfer function design begins with the user interacting with a one- or two-dimensional histogram of the volumetric attribute space. The attribute space is visualized as a function of the number of voxels, allowing the user to explore the data in terms of the attribute size/magnitude. However, such visualizations provide the user with no information on the relationship between various attribute spaces (e.g., density, temperature, pressure, x, y, z) within the multivariate data. In this work, we propose a modification to the attribute space visualization in which the user is no longer presented with the magnitude of the attribute; instead, the user is presented with an information metric detailing the relationship between attributes of the multivariate volumetric data. In this way, the user can guide their exploration based on the relationship between the attribute magnitude and user-selected attribute information, as opposed to being constrained to visualizing only the magnitude of the attribute. We refer to this modification of the traditional histogram widget as an abstract attribute space representation. Our system utilizes common one- and two-dimensional histogram widgets where the bins of the abstract attribute space now correspond to an attribute relationship in terms of the mean, standard deviation, entropy, or skewness. In this manner, we exploit the relationships and correlations present in the underlying data with respect to the dimension(s) under examination. These relationships are often key to insight and allow us to guide attribute discovery, as opposed to automatic extraction schemes which try to calculate and extract distinct attributes a priori. In this way, our system aids in the knowledge discovery of the interaction of properties within volumetric data.
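The core idea — bins that report a statistic of a second attribute rather than a voxel count — can be sketched in a few lines. The function name, bin counts, and the subset of statistics below are our own illustrative choices, not code from the paper.

```python
import numpy as np

def abstract_histogram(primary, secondary, bins=8, stat="mean"):
    """Bin voxels by the primary attribute; each bin reports a statistic of
    a second attribute (mean/std/entropy) instead of the usual voxel count."""
    primary = np.ravel(primary).astype(float)
    secondary = np.ravel(secondary).astype(float)
    edges = np.linspace(primary.min(), primary.max(), bins + 1)
    idx = np.clip(np.digitize(primary, edges) - 1, 0, bins - 1)
    out = np.full(bins, np.nan)          # NaN marks empty bins
    for b in range(bins):
        vals = secondary[idx == b]
        if vals.size == 0:
            continue
        if stat == "mean":
            out[b] = vals.mean()
        elif stat == "std":
            out[b] = vals.std()
        elif stat == "entropy":
            p, _ = np.histogram(vals, bins=16)
            p = p / p.sum()
            p = p[p > 0]
            out[b] = -(p * np.log2(p)).sum()
    return edges, out
```

For example, binning by density and plotting the per-bin mean temperature exposes density-temperature structure a plain voxel-count histogram hides.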
23
Ip CY, Varshney A, JaJa J. Hierarchical Exploration of Volumes Using Multilevel Segmentation of the Intensity-Gradient Histograms. IEEE Transactions on Visualization and Computer Graphics 2012; 18:2355-2363. PMID: 26357143; DOI: 10.1109/tvcg.2012.231.
Abstract
Visual exploration of volumetric datasets to discover the embedded features and spatial structures is a challenging and tedious task. In this paper we present a semi-automatic approach to this problem that works by visually segmenting the intensity-gradient 2D histogram of a volumetric dataset into an exploration hierarchy. Our approach mimics user exploration behavior by analyzing the histogram with the normalized-cut multilevel segmentation technique. Unlike previous work in this area, our technique segments the histogram into a reasonable set of intuitive components that are mutually exclusive and collectively exhaustive. We use information-theoretic measures of the volumetric data segments to guide the exploration. This provides a data-driven coarse-to-fine hierarchy for a user to interactively navigate the volume in a meaningful manner.
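The intensity-gradient 2D histogram that this approach segments is straightforward to construct; a minimal sketch follows (the normalized-cut segmentation itself is omitted, and the names here are ours).

```python
import numpy as np

def intensity_gradient_histogram(volume, bins=64):
    """2D histogram over (intensity, gradient magnitude) for a 3D volume.
    Log scaling makes the sparse arcs of the histogram visible."""
    vol = np.asarray(volume, dtype=float)
    gx, gy, gz = np.gradient(vol)                 # central differences
    gmag = np.sqrt(gx**2 + gy**2 + gz**2)
    hist, i_edges, g_edges = np.histogram2d(vol.ravel(), gmag.ravel(),
                                            bins=bins)
    return np.log1p(hist), i_edges, g_edges
```

Material interiors cluster near zero gradient magnitude while boundaries form arcs between material intensities, which is what makes this space a natural target for hierarchical segmentation.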
Affiliation(s)
- Cheuk Yiu Ip, Institute for Advanced Computer Studies, University of Maryland, College Park, USA.
24
Woo I, Maciejewski R, Gaither KP, Ebert DS. Feature-driven data exploration for volumetric rendering. IEEE Transactions on Visualization and Computer Graphics 2012; 18:1731-1743. PMID: 22291153; DOI: 10.1109/tvcg.2012.24.
Abstract
We have developed an intuitive method to semiautomatically explore volumetric data in a focus-region-guided or value-driven way using a user-defined ray through the 3D volume and contour lines in the region of interest. After selecting a point of interest from a 2D perspective, which defines a ray through the 3D volume, our method provides analytical tools to assist in narrowing the region of interest to a desired set of features. Feature layers are identified in a 1D scalar value profile along the ray and are used to define default rendering parameters, such as color and opacity mappings, and to locate the center of the region of interest. Contour lines are generated based on the feature layer level sets within interactively selected slices of the focus region. Finally, we utilize feature-preserving filters and demonstrate the applicability of our scheme to noisy data.
Affiliation(s)
- Insoo Woo, Purdue Visual Analytics Center, Purdue University, PO Box 519, 465 Northwestern Ave., West Lafayette, IN 47907, USA.
25
Guo H, Xiao H, Yuan X. Scalable Multivariate Volume Visualization and Analysis Based on Dimension Projection and Parallel Coordinates. IEEE Transactions on Visualization and Computer Graphics 2012; 18:1397-1410. PMID: 22411886; DOI: 10.1109/tvcg.2012.80.
Abstract
In this paper, we present an effective and scalable system for multivariate volume data visualization and analysis with a novel transfer function interface design that tightly couples parallel coordinates plots (PCP) and MDS-based dimension projection plots. In our system, the PCP visualizes the data distribution of each variate (dimension) and the MDS plots project features. They are integrated seamlessly to provide flexible feature classification without context switching between different data presentations during the user interaction. The proposed interface enables users to identify relevant correlation clusters and assign optical properties with lassos, a magic wand, and other tools. Furthermore, direct sketching on the volume rendered images has been implemented to probe and edit features. With our system, users can interactively analyze multivariate volumetric data sets by navigating and exploring feature spaces in unified PCP and MDS plots. To further support large-scale multivariate volume data visualization and analysis, Scalable Pivot MDS (SPMDS), parallel adaptive continuous PCP rendering, and other parallel rendering techniques are developed and integrated into our visualization system. Our experiments show that the system is effective in multivariate volume data visualization and that its performance is highly scalable for data sets with different sizes and numbers of variates.
26
Kaufman AE. Modified Dendrogram of Attribute Space for Multidimensional Transfer Function Design. IEEE Transactions on Visualization and Computer Graphics 2012; 18:121-131. PMID: 21282856; DOI: 10.1109/tvcg.2011.23.
Abstract
We introduce a modified dendrogram (MD) (with subtrees to represent clusters) and display it in 2D for multidimensional transfer function design. Such a transfer function for direct volume rendering employs a multidimensional space, termed attribute space. The MD reveals the hierarchical structure information of the attribute space. The user can design a transfer function in an intuitive and informative manner using the MD user interface in 2D instead of in the multidimensional space, where the relationships within the space are hard to ascertain. In addition, we provide the capability to interactively modify the granularity of the MD. The coarse-grained MD primarily shows the global information of the attribute space while the fine-grained MD reveals finer details, and the separation ability of the attribute space is completely preserved at the finest granularity. With this so-called multigrained method, the user can efficiently create a transfer function using the coarse-grained MD, and then fine-tune it with the fine-grained MDs. Our method is independent of the attribute types and supports attribute spaces of arbitrary dimension.
27
Zheng Z, Ahmed N, Mueller K. iView: a feature clustering framework for suggesting informative views in volume visualization. IEEE Transactions on Visualization and Computer Graphics 2011; 17:1959-1968. PMID: 22034313; DOI: 10.1109/tvcg.2011.218.
Abstract
The unguided visual exploration of volumetric data can be both a challenging and a time-consuming undertaking. Identifying a set of favorable vantage points at which to start exploratory expeditions can greatly reduce this effort and can also ensure that no important structures are being missed. Recent research efforts have focused on entropy-based viewpoint selection criteria that depend on scalar values describing the structures of interest. In contrast, we propose a viewpoint suggestion pipeline that is based on feature clustering in high-dimensional space. We use gradient/normal variation as a metric to identify interesting local events and then cluster these via k-means to detect important salient composite features. Next, we compute the maximum possible exposure of these composite features for different viewpoints and calculate a 2D entropy map parameterized in longitude and latitude to point out promising view orientations. Superimposed onto an interactive track-ball interface, users can then directly use this entropy map to quickly navigate to potentially interesting viewpoints where visibility-based transfer functions can be employed to generate volume renderings that minimize occlusions. To give full exploration freedom to the user, the entropy map is updated on the fly whenever a view has been selected, pointing to new and promising but so far unseen view directions. Alternatively, our system can also use a set-cover optimization algorithm to provide a minimal set of views needed to observe all features. The views so generated could then be saved into a list for further inspection or into a gallery for a summary presentation.
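The entropy score assigned to each candidate viewpoint can be sketched as the Shannon entropy of the visible-feature distribution. The function name and the toy exposure numbers below are ours; the paper's exposure computation (projected feature area per view) is not reproduced.

```python
import numpy as np

def viewpoint_entropy(exposure):
    """Shannon entropy (bits) of the visible-feature distribution for one
    viewpoint; exposure[k] is e.g. the projected area of salient feature k.
    A view that shows many features evenly scores higher."""
    p = np.asarray(exposure, dtype=float)
    p = p / p.sum()
    p = p[p > 0]                     # 0*log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())

# Evaluate a (here tiny) set of candidate views and pick the most informative.
views = {"front": [4, 4, 4, 4], "side": [16, 0, 0, 0]}
best = max(views, key=lambda v: viewpoint_entropy(views[v]))
```

Evaluating this score over a longitude/latitude grid of camera positions yields the 2D entropy map the abstract describes.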
Affiliation(s)
- Ziyi Zheng, Visual Analytics and Imaging (VAI) Laboratory, Center for Visual Computing, Computer Science Department, Stony Brook University, NY, USA.
28
Wang Y, Chen W, Zhang J, Dong T, Shan G, Chi X. Efficient Volume Exploration Using the Gaussian Mixture Model. IEEE Transactions on Visualization and Computer Graphics 2011; 17:1560-1573. PMID: 21670489; DOI: 10.1109/tvcg.2011.97.
Abstract
The multidimensional transfer function is a flexible and effective tool for exploring volume data. However, designing an appropriate transfer function is a trial-and-error process and remains a challenge. In this paper, we propose a novel volume exploration scheme that explores volumetric structures in the feature space by modeling the space using the Gaussian mixture model (GMM). Our new approach has three distinctive advantages. First, an initial feature separation can be automatically achieved through GMM estimation. Second, the calculated Gaussians can be directly mapped to a set of elliptical transfer functions (ETFs), facilitating a fast pre-integrated volume rendering process. Third, an inexperienced user can flexibly manipulate the ETFs with the assistance of a suite of simple widgets, and discover potential features with a few interactions. We further extend the GMM-based exploration scheme to time-varying data sets using an incremental GMM estimation algorithm. The algorithm estimates the GMM for one time step from the data of that step and the GMM generated for the preceding steps. Sequentially applying the incremental algorithm to all time steps in a selected time interval yields a preliminary classification for each time step. In addition, the computed ETFs can be freely adjusted. The adjustments are then automatically propagated to other time steps. In this way, coherent user-guided exploration of a given time interval is achieved. Our GPU implementation demonstrates interactive performance and good scalability. The effectiveness of our approach is verified on several data sets.
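The mapping from a fitted Gaussian component to an "elliptical" opacity function can be illustrated with Mahalanobis distance. This sketch assumes the GMM has already been fitted (the fitting step is not shown), uses a Gaussian falloff of our own choosing rather than the paper's exact ETF definition, and the component parameters below are illustrative stand-ins.

```python
import numpy as np

def elliptical_tf(features, mean, cov, max_opacity=0.8):
    """Opacity from one Gaussian component of a fitted GMM: full opacity at
    the component mean, falling off with squared Mahalanobis distance, so the
    iso-opacity contours are ellipses in feature space."""
    d = np.asarray(features, dtype=float) - mean
    m2 = np.einsum('ni,ij,nj->n', d, np.linalg.inv(cov), d)
    return max_opacity * np.exp(-0.5 * m2)

mean = np.array([0.5, 0.2])      # assumed component mean in 2D feature space
cov = np.diag([0.01, 0.01])      # assumed component covariance
```

Each mixture component contributes one such ellipse; widgets could then scale `max_opacity` or `cov` per component to emphasize or suppress a feature.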
29
Lindholm S, Ljung P, Lundström C, Persson A, Ynnerman A. Spatial conditioning of transfer functions using local material distributions. IEEE Transactions on Visualization and Computer Graphics 2010; 16:1301-1310. PMID: 20975170; DOI: 10.1109/tvcg.2010.195.
Abstract
In many applications of Direct Volume Rendering (DVR), the importance of a certain material or feature is highly dependent on its relative spatial location. For instance, in the medical diagnostic procedure, the patient's symptoms often lead to specification of features, tissues and organs of particular interest. One such example is pockets of gas which, if found inside the body at abnormal locations, are a crucial part of a diagnostic visualization. This paper presents an approach that enhances DVR transfer function design with spatial localization based on user-specified material dependencies. Semantic expressions are used to define conditions based on relations between different materials, such as "only render iodine uptake when close to the liver". The underlying methods rely on estimations of material distributions which are acquired by weighing local neighborhoods of the data against approximations of material likelihood functions. This information is encoded and used to influence rendering according to the user's specifications. The result is improved focus on important features by allowing the user to suppress spatially less important data. In line with requirements from actual clinical DVR practice, the methods do not require explicit material segmentation, which would be impossible or prohibitively time-consuming to achieve in most real cases. The scheme scales well to higher dimensions, accommodating multi-dimensional transfer functions and multivariate data. Dual-Energy Computed Tomography, an important new modality in radiology, is used to demonstrate this scalability. In several examples we show significantly improved focus on clinically important aspects in the rendered images.
30
Kim HS, Schulze JP, Cone AC, Sosinsky GE, Martone ME. Dimensionality Reduction on Multi-Dimensional Transfer Functions for Multi-Channel Volume Data Sets. Information Visualization 2010; 9:167-180. PMID: 21841914; PMCID: PMC3153355; DOI: 10.1057/ivs.2010.6.
Abstract
The design of transfer functions for volume rendering is a non-trivial task. This is particularly true for multi-channel data sets, where multiple data values exist for each voxel, which requires multi-dimensional transfer functions. In this paper, we propose a new method for multi-dimensional transfer function design. Our new method provides a framework to combine multiple computational approaches and pushes the boundary of gradient-based multi-dimensional transfer functions to multiple channels, while keeping the dimensionality of transfer functions at a manageable level, i.e., a maximum of three dimensions, which can be displayed visually in a straightforward way. Our approach utilizes channel intensity, gradient, curvature and texture properties of each voxel. Recently developed nonlinear dimensionality reduction algorithms are applied to reduce the high-dimensional feature data; in this paper, we use Isomap and Locally Linear Embedding as well as a traditional algorithm, Principal Component Analysis. Our results show that these dimensionality reduction algorithms significantly improve the transfer function design process without compromising visualization accuracy. We demonstrate the effectiveness of our new dimensionality reduction algorithms with two volumetric confocal microscopy data sets.
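The linear baseline in this comparison, PCA, reduces the per-voxel feature vectors to at most three displayable dimensions and can be sketched with a plain SVD. The function name is ours, and this is only the PCA leg of the comparison; Isomap and LLE are not reproduced here.

```python
import numpy as np

def pca_reduce(X, n_components=3):
    """Project rows of X (per-voxel feature vectors: intensity, gradient,
    curvature, texture measures, ...) onto their top principal axes."""
    Xc = X - X.mean(axis=0)
    # Rows of Vt are the principal directions, ordered by singular value.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T
```

The reduced coordinates can then index an ordinary 2D or 3D transfer function widget, which is the point of capping the output at three dimensions.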
Affiliation(s)
- Han Suk Kim, Department of Computer Science and Engineering, University of California San Diego, 9500 Gilman Drive, La Jolla, CA, USA.
31
2D Histogram based volume visualization: combining intensity and size of anatomical structures. Int J Comput Assist Radiol Surg 2010; 5:655-66. PMID: 20512631; DOI: 10.1007/s11548-010-0480-1.
Abstract
PURPOSE Surgical planning requires 3D volume visualizations based on transfer functions (TF) that assign optical properties to volumetric image data. Two-dimensional TFs and 2D histograms may be employed to improve overall performance. METHODS Anatomical structures were used for 2D TF definition in an algorithm that computes a new structure-size image from the original data set. The original image and structure-size data sets were used to generate a structure-size enhanced (SSE) histogram. Alternatively, the gradient magnitude could be used as the second property for 2D TF definition. Both types of 2D TFs were generated and compared using subjective evaluation of anatomic feature conspicuity. RESULTS Experiments with several medical image data sets provided SSE histograms that were judged subjectively to be more intuitive and to discriminate different anatomical structures better than gradient magnitude-based 2D histograms. CONCLUSIONS In clinical applications, where the size of anatomical structures is more meaningful than gradient magnitude, the 2D TF can be effective for highlighting anatomical structures in 3D visualizations.
32
Zhao X, Kaufman A. Multi-dimensional Reduction and Transfer Function Design using Parallel Coordinates. Volume Graphics. International Symposium on Volume Graphics 2010:69-76. PMID: 26278929; DOI: 10.2312/vg/vg10/069-076.
Abstract
Multi-dimensional transfer functions are widely used to provide appropriate data classification for direct volume rendering. Nevertheless, the design of a multi-dimensional transfer function is a complicated task. In this paper, we propose to use parallel coordinates, a powerful tool to visualize high-dimensional geometry and analyze multivariate data, for multi-dimensional transfer function design. This approach has two major advantages: (1) it combines information from the spatial domain (voxel positions) and the parameter space; (2) it selects appropriate high-dimensional parameters to obtain sophisticated data classification. Although parallel coordinates offers a simple interface for the user to design the high-dimensional transfer function, some extra work, such as sorting the coordinates, is inevitable. Therefore, we use a local linear embedding technique for dimension reduction to reduce the burdensome calculations in the high-dimensional parameter space and to represent the transfer function concisely. With the aid of parallel coordinates, we propose some novel high-dimensional transfer function widgets for better visualization results. We demonstrate the capability of our parallel coordinates based transfer function (PCbTF) design method for direct volume rendering using CT and MRI datasets.
Affiliation(s)
- X. Zhao, Stony Brook University, USA.