1. Nguyen N, Bohak C, Engel D, Mindek P, Strnad O, Wonka P, Li S, Ropinski T, Viola I. Finding Nano-Ötzi: Cryo-Electron Tomography Visualization Guided by Learned Segmentation. IEEE Transactions on Visualization and Computer Graphics 2023;29:4198-4214. [PMID: 35749328] [DOI: 10.1109/tvcg.2022.3186146]
Abstract
Cryo-electron tomography (cryo-ET) is a new 3D imaging technique with unprecedented potential for resolving submicron structural details. Existing volume visualization methods, however, cannot reveal details of interest due to the low signal-to-noise ratio. In order to design more powerful transfer functions, we propose leveraging soft segmentation as an explicit component of visualization for noisy volumes. Our technical realization is based on semi-supervised learning, where we combine the advantages of two segmentation algorithms. First, a weak segmentation algorithm propagates sparse user-provided labels to other voxels in the same volume and is used to generate dense pseudo-labels. Second, a powerful deep-learning-based segmentation algorithm learns from these pseudo-labels to generalize the segmentation to other unseen volumes, a task at which the weak segmentation algorithm fails completely. The proposed volume visualization uses the deep-learning-based segmentation as a component of segmentation-aware transfer function design. Appropriate ramp parameters can be suggested automatically through frequency distribution analysis. Furthermore, our visualization uses gradient-free ambient occlusion shading to further suppress the visual presence of noise and to give structural detail the desired prominence. The cryo-ET data studied in our technical experiments are based on the highest-quality tilt series of intact SARS-CoV-2 virions. Our technique demonstrates high impact in the target sciences, enabling visual data analysis of very noisy volumes that existing techniques cannot visualize.
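To make the automatic ramp suggestion concrete: a minimal sketch of deriving opacity-ramp endpoints from the frequency distribution of per-voxel segmentation confidence. The function names, array shapes, and percentile choices are hypothetical, not the paper's actual analysis.

```python
import numpy as np

def suggest_ramp(class_prob, lo_pct=60.0, hi_pct=95.0):
    """Suggest the start/end of an opacity ramp over segmentation confidence.

    class_prob: flat array of per-voxel probabilities for one class in [0, 1].
    """
    return np.percentile(class_prob, lo_pct), np.percentile(class_prob, hi_pct)

def ramp_opacity(p, ramp_start, ramp_end, max_opacity=1.0):
    """Piecewise-linear ramp: 0 below ramp_start, max_opacity above ramp_end."""
    t = np.clip((p - ramp_start) / max(ramp_end - ramp_start, 1e-8), 0.0, 1.0)
    return max_opacity * t

# Toy usage: confidences concentrated near 0 (background) plus a small foreground mode.
probs = np.concatenate([np.random.beta(1, 8, 9000), np.random.beta(8, 2, 1000)])
start, end = suggest_ramp(probs)
alpha = ramp_opacity(probs, start, end)
```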
2. Weiss S, Westermann R. Differentiable Direct Volume Rendering. IEEE Transactions on Visualization and Computer Graphics 2022;28:562-572. [PMID: 34587023] [DOI: 10.1109/tvcg.2021.3114769]
Abstract
We present a differentiable volume rendering solution that provides differentiability of all continuous parameters of the volume rendering process. This differentiable renderer is used to steer the parameters towards an optimum of a problem-specific objective function. We have tailored the approach to volume rendering by enforcing a constant memory footprint via analytic inversion of the blending functions. This makes it independent of the number of sampling steps through the volume and facilitates the consideration of small-scale changes. The approach forms the basis for automatic optimizations regarding external parameters of the rendering process and the volumetric density field itself. We demonstrate its use for automatic viewpoint selection using differentiable entropy as the objective, and for optimizing a transfer function from rendered images of a given volume. Optimization of per-voxel densities is addressed in two different ways: first, we mimic inverse tomography and optimize a 3D density field from images using an absorption model. This simplification enables comparisons with algebraic reconstruction techniques and state-of-the-art differentiable path tracers. Second, we introduce a novel approach for tomographic reconstruction from images using an emission-absorption model with post-shading via an arbitrary transfer function.
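The constant-memory idea can be illustrated in a few lines: because front-to-back blending is invertible, a backward pass can recompute intermediate state from the final state rather than storing it per sample. Below is a simplified grayscale emission-absorption sketch with hand-derived gradients; all names are hypothetical and alpha is assumed to lie strictly below 1.

```python
import numpy as np

def composite_forward(alpha, color):
    """Front-to-back emission-absorption compositing along one ray."""
    C, T = 0.0, 1.0
    for a, c in zip(alpha, color):
        C += T * a * c
        T *= 1.0 - a
    return C, T

def composite_backward(alpha, color, C_final, T_final, dL_dC):
    """Gradients of the final color w.r.t. per-sample alpha and color.
    Walks the ray back-to-front and analytically inverts the blend at each
    step, so memory use is constant in the number of samples."""
    dalpha = np.zeros_like(alpha)
    dcolor = np.zeros_like(color)
    C_acc, T_acc = C_final, T_final
    for i in reversed(range(len(alpha))):
        T_prev = T_acc / (1.0 - alpha[i])           # invert T_i = T_{i-1} * (1 - a_i)
        suffix = C_final - C_acc                    # light contributed behind sample i
        dcolor[i] = dL_dC * T_prev * alpha[i]
        dalpha[i] = dL_dC * (T_prev * color[i] - suffix / (1.0 - alpha[i]))
        C_acc -= T_prev * alpha[i] * color[i]       # invert C_i = C_{i-1} + T_{i-1}*a_i*c_i
        T_acc = T_prev
    return dalpha, dcolor

# Finite-difference check on a tiny ray.
rng = np.random.default_rng(0)
a = rng.uniform(0.05, 0.6, 8); c = rng.uniform(0.0, 1.0, 8)
C, T = composite_forward(a, c)
da, dc = composite_backward(a, c, C, T, dL_dC=1.0)
eps = 1e-6; a2 = a.copy(); a2[3] += eps
C2, _ = composite_forward(a2, c)
assert abs((C2 - C) / eps - da[3]) < 1e-4
```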
3. Biswas A, Dutta S, Lawrence E, Patchett J, Calhoun JC, Ahrens J. Probabilistic Data-Driven Sampling via Multi-Criteria Importance Analysis. IEEE Transactions on Visualization and Computer Graphics 2021;27:4439-4454. [PMID: 32746272] [DOI: 10.1109/tvcg.2020.3006426]
Abstract
Although supercomputers are becoming increasingly powerful, their components have thus far not scaled proportionately. Compute power is growing enormously and is enabling finely resolved simulations that produce never-before-seen features. However, I/O capabilities lag by orders of magnitude, which means only a fraction of the simulation data can be stored for post hoc analysis. Prespecified plans for saving features and quantities of interest do not work for features that have not been seen before. Data-driven intelligent sampling schemes are needed to detect and save important parts of the simulation while it is running. Here, we propose a novel sampling scheme that reduces the size of the data by orders of magnitude while still preserving important regions. The approach selects points with unusual data values and high gradients. We demonstrate that it outperforms traditional sampling schemes on a number of tasks.
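A minimal sketch of the two importance criteria described above, rarity of a voxel's data value and gradient magnitude, combined into a stochastic acceptance test. The bin count, equal weighting, and names are assumptions, not the paper's exact formulation.

```python
import numpy as np

def importance_sample(volume, keep_fraction=0.01, rarity_bins=64, seed=0):
    """Keep voxels stochastically, favoring rare data values and high gradients."""
    rng = np.random.default_rng(seed)
    # Criterion 1: rarity, i.e. inverse frequency of each voxel's value.
    hist, edges = np.histogram(volume, bins=rarity_bins)
    idx = np.clip(np.digitize(volume, edges[1:-1]), 0, rarity_bins - 1)
    rarity = 1.0 / (hist[idx] + 1.0)
    # Criterion 2: gradient magnitude.
    g = np.gradient(volume.astype(np.float64))
    grad_mag = np.sqrt(sum(gi ** 2 for gi in g))
    importance = rarity / rarity.sum() + grad_mag / (grad_mag.sum() + 1e-12)
    # Acceptance probability scaled so the expected sample count meets the budget.
    p = importance / importance.sum() * keep_fraction * volume.size
    keep = rng.random(volume.shape) < np.clip(p, 0.0, 1.0)
    return np.argwhere(keep), volume[keep]

vol = np.random.rand(32, 32, 32) ** 3   # skewed toy volume
coords, values = importance_sample(vol)
```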
4. Berger M, Li J, Levine JA. A Generative Model for Volume Rendering. IEEE Transactions on Visualization and Computer Graphics 2019;25:1636-1650. [PMID: 29993811] [DOI: 10.1109/tvcg.2018.2816059]
Abstract
We present a technique to synthesize and analyze volume-rendered images using generative models. We use the Generative Adversarial Network (GAN) framework to compute a model from a large collection of volume renderings, conditioned on (1) viewpoint and (2) transfer functions for opacity and color. Our approach facilitates tasks for volume analysis that are challenging to achieve using existing rendering techniques such as ray casting or texture-based methods. We show how to guide the user in transfer function editing by quantifying expected change in the output image. Additionally, the generative model transforms transfer functions into a view-invariant latent space specifically designed to synthesize volume-rendered images. We use this space directly for rendering, enabling the user to explore the space of volume-rendered images. As our model is independent of the choice of volume rendering process, we show how to analyze volume-rendered images produced by direct and global illumination lighting, for a variety of volume datasets.
5. Cheng HC, Cardone A, Jain S, Krokos E, Narayan K, Subramaniam S, Varshney A. Deep-Learning-Assisted Volume Visualization. IEEE Transactions on Visualization and Computer Graphics 2019;25:1378-1391. [PMID: 29994182] [PMCID: PMC8369530] [DOI: 10.1109/tvcg.2018.2796085]
Abstract
Designing volume visualizations showing various structures of interest is critical to the exploratory analysis of volumetric data. The last few years have witnessed dramatic advances in the use of convolutional neural networks for identification of objects in large image collections. Whereas such machine learning methods have shown superior performance in a number of applications, their direct use in volume visualization has not yet been explored. In this paper, we present a deep-learning-assisted volume visualization to depict complex structures, which are otherwise challenging for conventional approaches. A significant challenge in designing volume visualizations based on the high-dimensional deep features lies in efficiently handling the immense amount of information that deep-learning methods provide. In this paper, we present a new technique that uses spectral methods to facilitate user interactions with high-dimensional features. We also present a new deep-learning-assisted technique for hierarchically exploring a volumetric dataset. We have validated our approach on two electron microscopy volumes and one magnetic resonance imaging dataset.
6. Jung Y, Kim J, Bi L, Kumar A, Feng DD, Fulham M. A direct volume rendering visualization approach for serial PET-CT scans that preserves anatomical consistency. Int J Comput Assist Radiol Surg 2019;14:733-744. [PMID: 30661169] [DOI: 10.1007/s11548-019-01916-2]
Abstract
PURPOSE: Our aim was to develop an interactive 3D direct volume rendering (DVR) visualization solution to interpret and analyze complex, serial multi-modality imaging datasets from positron emission tomography-computed tomography (PET-CT).
METHODS: Our approach uses: (i) a serial transfer function (TF) optimization to automatically depict particular regions of interest (ROIs) over serial datasets with consistent anatomical structures; (ii) integration of a serial segmentation algorithm to interactively identify and track ROIs on PET; and (iii) a parallel graphics processing unit (GPU) implementation for interactive visualization.
RESULTS: Our DVR visualization identifies changes in ROIs across serial scans in an automated fashion, and the parallel GPU computation enables interactive visualization.
CONCLUSIONS: Our approach provides a rapid 3D visualization of relevant ROIs over multiple scans, and we suggest that it can be used as an adjunct to the conventional 2D viewing software from scanner vendors.
Affiliation(s)
- Younhyun Jung: Biomedical & Multimedia Information Technology Research Group, School of Computer Science, The University of Sydney, Sydney, Australia
- Jinman Kim: Biomedical & Multimedia Information Technology Research Group, School of Computer Science, The University of Sydney, Sydney, Australia
- Lei Bi: Biomedical & Multimedia Information Technology Research Group, School of Computer Science, The University of Sydney, Sydney, Australia
- Ashnil Kumar: Biomedical & Multimedia Information Technology Research Group, School of Computer Science, The University of Sydney, Sydney, Australia
- David Dagan Feng: Biomedical & Multimedia Information Technology Research Group, School of Computer Science, The University of Sydney, Sydney, Australia; Med-X Research Institute, Shanghai Jiao Tong University, Shanghai, China
- Michael Fulham: Sydney Medical School, The University of Sydney, Sydney, Australia; Department of Molecular Imaging, Royal Prince Alfred Hospital, Sydney, Australia
7. Ma B, Entezari A. Volumetric Feature-Based Classification and Visibility Analysis for Transfer Function Design. IEEE Transactions on Visualization and Computer Graphics 2018;24:3253-3267. [PMID: 29989987] [DOI: 10.1109/tvcg.2017.2776935]
Abstract
Transfer function (TF) design is a central topic in direct volume rendering. The TF fundamentally translates data values into optical properties to reveal relevant features present in the volumetric data. We propose a semi-automatic TF design scheme which consists of two steps: First, we present a clustering process within 1D/2D TF domain based on the proximities of the respective volumetric features in the spatial domain. The presented approach provides an interactive tool that aids users in exploring clusters and identifying features of interest (FOI). Second, our method automatically generates a TF by iteratively refining the optical properties for the selected features using a novel feature visibility measurement. The proposed visibility measurement leverages the similarities of features to enhance their visibilities in DVR images. Compared to the conventional visibility measurement, the proposed feature visibility is able to efficiently sense opacity changes and precisely evaluate the impact of selected features on resulting visualizations. Our experiments validate the effectiveness of the proposed approach by demonstrating the advantages of integrating feature similarity into the visibility computations. We examine a number of datasets to establish the utility of our approach for semi-automatic TF design.
8. Traore M, Hurter C, Telea A. Interactive obstruction-free lensing for volumetric data visualization. IEEE Transactions on Visualization and Computer Graphics 2018;25:1029-1039. [PMID: 30235132] [DOI: 10.1109/tvcg.2018.2864690]
Abstract
Occlusion is an issue in volumetric visualization as it prevents direct visualization of the region of interest. While many techniques, such as transfer functions, volume segmentation, and view distortion, have been developed to address this, there is still room for improvement in supporting the understanding of an object's vicinity. In particular, most existing Focus+Context techniques fail to resolve partial occlusion in datasets where the target and the occluder are very similar density-wise. For these reasons, we investigate a new technique that maintains the general structure of the volumetric dataset under investigation while addressing occlusion. With our technique, the user interactively defines an area of interest where an occluded region or object is partially visible. Our lens then pushes occluding objects toward its border, revealing the hidden volumetric data. Next, the lens is given an extended field of view (a fish-eye deformation) to better show the vicinity of the selected region. Finally, the user can freely explore the surroundings of the area under investigation within the lens. To provide real-time exploration, we implemented our lens in a GPU-accelerated ray-casting framework that handles ray deformations, local lighting, and local viewpoint manipulation. We illustrate our technique with five application scenarios in baggage inspection, 3D fluid flow visualization, chest radiology, air traffic planning, and DTI fiber exploration.
9. Ament M, Zirr T, Dachsbacher C. Extinction-Optimized Volume Illumination. IEEE Transactions on Visualization and Computer Graphics 2017;23:1767-1781. [PMID: 27214903] [DOI: 10.1109/tvcg.2016.2569080]
Abstract
We present a novel method to optimize the attenuation of light for the single scattering model in direct volume rendering. A common problem of single scattering is the high dynamic range between lit and shadowed regions due to the exponential attenuation of light along a ray. Moreover, light is often attenuated too strongly between a sample point and the camera, hampering the visibility of important features. Our algorithm employs an importance function to selectively illuminate important structures and make them visible from the camera. With the importance function, more light can be transmitted to the features of interest, while contextual structures cast shadows that provide visual cues for depth perception. At the same time, more scattered light is transmitted from the sample point to the camera to improve the primary visibility of important features. We formulate a minimization problem that automatically determines the extinction along a view or shadow ray to obtain a good balance between sufficient transmittance and attenuation. In contrast to previous approaches, we do not require a computationally expensive solution of a global optimization, but instead provide a closed-form solution for each sampled extinction value along a view or shadow ray, and thus achieve interactive performance.
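The following is not the paper's closed-form solution, only a crude stand-in that illustrates the trade-off being optimized: extinction in front of an important sample is scaled down just enough that a target transmittance reaches it, while extinction elsewhere (and hence shadowing) is left untouched. All names and constants are hypothetical.

```python
import numpy as np

def scale_extinction(sigma, importance, dt, target_T=0.3):
    """Heuristically rescale per-sample extinction along one shadow ray so that
    the transmittance reaching the most important sample is at least target_T."""
    i_star = int(np.argmax(importance))
    tau = np.sum(sigma[:i_star]) * dt        # optical depth in front of the feature
    if tau <= 0.0:
        return sigma.copy()
    tau_max = -np.log(target_T)              # optical depth yielding target transmittance
    s = min(1.0, tau_max / tau)              # uniform down-scaling factor
    out = sigma.copy()
    out[:i_star] *= s
    return out

sigma = np.full(128, 0.05)                   # homogeneous toy medium
imp = np.zeros(128); imp[90] = 1.0           # important feature deep inside
sigma_opt = scale_extinction(sigma, imp, dt=1.0)
```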
10. Lan S, Wang L, Song Y, Wang YP, Yao L, Sun K, Xia B, Xu Z. Improving Separability of Structures with Similar Attributes in 2D Transfer Function Design. IEEE Transactions on Visualization and Computer Graphics 2017;23:1546-1560. [PMID: 26955038] [DOI: 10.1109/tvcg.2016.2537341]
Abstract
The 2D transfer function based on scalar value and gradient magnitude (SG-TF) is widely used in volume rendering. However, it is plagued by the boundary-overlapping problem: different structures with similar attributes occupy the same region in SG-TF space, and their boundaries are usually connected. The SG-TF thus often fails to separate these structures (or their boundaries) and has limited ability to classify different objects in real-world 3D images. To overcome this difficulty, we propose a novel method for boundary separation that integrates spatial connectivity computation of the boundaries and set operations on boundary voxels into the SG-TF. Specifically, the spatial positions of boundaries and their regions in SG-TF space are computed, from which boundaries can be well separated and volume rendered in different colors. In the method, boundaries are divided into three classes and different boundary-separation techniques are applied to each. The complex task of separating various boundaries in 3D images is then simplified by breaking it into several small separation problems. The method shows good object classification ability in real-world 3D images while avoiding the complexity of high-dimensional transfer functions. Its effectiveness is demonstrated by experimental results visualizing the boundaries of different structures in complex real-world 3D images.
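A minimal sketch of the spatial-connectivity idea: select the voxels of one overlapping SG-TF region, then split them into spatially connected components that can be colored independently. The selection thresholds and helper names are hypothetical, and the paper's three-class treatment and set operations are omitted.

```python
import numpy as np
from scipy import ndimage

def separate_boundaries(volume, value_range, grad_range):
    """Select voxels in one (value, gradient-magnitude) region of the SG-TF
    domain, then label spatially connected boundary components."""
    g = np.gradient(volume.astype(np.float64))
    grad_mag = np.sqrt(sum(gi ** 2 for gi in g))
    mask = ((volume >= value_range[0]) & (volume <= value_range[1]) &
            (grad_mag >= grad_range[0]) & (grad_mag <= grad_range[1]))
    labels, n = ndimage.label(mask)   # default 6-connectivity in 3D
    return labels, n

# Two separated cubes; smoothing creates boundary shells with intermediate values.
vol = np.zeros((64, 64, 64))
vol[10:20, 10:20, 10:20] = 1.0
vol[40:50, 40:50, 40:50] = 1.0
vol = ndimage.gaussian_filter(vol, sigma=2.0)
labels, n = separate_boundaries(vol, (0.2, 0.8), (0.0, np.inf))  # expect n == 2
```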
11. The status of augmented reality in laparoscopic surgery as of 2016. Med Image Anal 2017;37:66-90. [DOI: 10.1016/j.media.2017.01.007]
12. Le Muzic M, Mindek P, Sorger J, Autin L, Goodsell D, Viola I. Visibility Equalizer: Cutaway Visualization of Mesoscopic Biological Models. Computer Graphics Forum 2016;35:161-170. [PMID: 28344374] [PMCID: PMC5364803] [DOI: 10.1111/cgf.12892]
Abstract
In scientific illustration and visualization, cutaway views are often employed as an effective technique for occlusion management in densely packed scenes. We propose a novel method for authoring cutaway illustrations of mesoscopic biological models. In contrast to existing cutaway algorithms, we take advantage of the specific nature of biological models, which consist of thousands of instances drawn from a comparably small number of different types. Our method is a two-stage process. In the first step, clipping objects are placed in the scene, creating a cutaway visualization of the model. During this process, a hierarchical list of stacked bars informs the user about the instance-visibility distribution of each individual molecular type in the scene. In the second step, the visibility of each molecular type is fine-tuned through these bars, which at this point act as interactive visibility equalizers. An evaluation of our technique with domain experts confirmed that our equalizer-based approach to visibility specification is valuable and effective for both scientific and educational purposes.
Affiliation(s)
- J Sorger: TU Wien, Austria; VRVis Research Center, Vienna, Austria
- L Autin: The Scripps Research Institute, La Jolla, California, USA
- D Goodsell: The Scripps Research Institute, La Jolla, California, USA
13. Jung Y, Kim J, Kumar A, Feng DD, Fulham M. Efficient visibility-driven medical image visualisation via adaptive binned visibility histogram. Comput Med Imaging Graph 2016;51:40-49. [PMID: 27139998] [DOI: 10.1016/j.compmedimag.2016.04.003]
Abstract
'Visibility' is a fundamental optical property that represents the proportion of the voxels in a volume observable to users during interactive volume rendering. Manipulating visibility improves the volume rendering process, for instance by ensuring the visibility of regions of interest (ROIs) or by guiding the identification of an optimal rendering viewpoint. The construction of visibility histograms (VHs), which represent the distribution of visibility over all voxels in the rendered volume, enables users to explore the volume with real-time feedback about occlusion patterns among spatially related structures during volume rendering manipulations. Volume-rendered medical images have been a primary beneficiary of the VH, given the need to ensure that specific ROIs are visible relative to surrounding structures, e.g., the visualisation of tumours that may otherwise be occluded by neighbouring structures. VH construction and its subsequent manipulations, however, are computationally expensive due to the histogram binning of the visibilities. This limits the real-time application of the VH to medical images, which have large intensity ranges and volume dimensions and hence require a large number of histogram bins. In this study, we introduce an efficient adaptive binned visibility histogram (AB-VH) in which a smaller number of histogram bins is used to represent the visibility distribution of the full VH. We adaptively bin medical images by using a cluster analysis algorithm that groups voxels according to their intensity similarity into a smaller subset of bins while preserving the distribution of the intensity range of the original images. We increase efficiency by exploiting the parallel computation and multiple render targets (MRT) extension of modern graphics processing units (GPUs), which enables efficient computation of the histogram. We show the application of our method to single-modality computed tomography (CT) and magnetic resonance (MR) imaging and to multi-modality positron emission tomography-CT (PET-CT). In our experiments, the AB-VH markedly improved the computational efficiency of VH construction and thus of the subsequent VH-driven volume manipulations. This efficiency was achieved without major visual or numerical differences between the AB-VH and its full-bin counterpart. We applied several variants of the K-means clustering algorithm with varying K (the number of clusters) and found that higher values of K resulted in better performance at a lower computational gain. The AB-VH also outperformed the conventional method of down-sampling the histogram bins (equal binning) for volume rendering visualisation.
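A minimal sketch of the adaptive binning step, using a plain 1D k-means over voxel intensities; the paper's clustering variant, GPU path, and bin counts are not reproduced, and the names are hypothetical.

```python
import numpy as np

def adaptive_bins(intensities, k=16, iters=20, seed=0):
    """Cluster voxel intensities with 1D k-means to obtain a small set of
    adaptive histogram bins that follow the data distribution."""
    rng = np.random.default_rng(seed)
    centers = np.sort(rng.choice(intensities, size=k, replace=False)).astype(np.float64)
    for _ in range(iters):
        assign = np.argmin(np.abs(intensities[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            members = intensities[assign == j]
            if members.size:
                centers[j] = members.mean()
        centers = np.sort(centers)
    return centers, assign

# Toy intensities with three tissue-like modes.
vals = np.random.default_rng(1).normal([100, 400, 900], [20, 50, 80], size=(5000, 3)).ravel()
centers, assign = adaptive_bins(vals, k=8)
```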
Affiliation(s)
- Younhyun Jung: The Institute of Biomedical Engineering and Technology, University of Sydney, Australia; BMIT Research Group, School of Information Technologies, University of Sydney, Australia
- Jinman Kim: The Institute of Biomedical Engineering and Technology, University of Sydney, Australia; BMIT Research Group, School of Information Technologies, University of Sydney, Australia
- Ashnil Kumar: The Institute of Biomedical Engineering and Technology, University of Sydney, Australia; BMIT Research Group, School of Information Technologies, University of Sydney, Australia
- David Dagan Feng: The Institute of Biomedical Engineering and Technology, University of Sydney, Australia; BMIT Research Group, School of Information Technologies, University of Sydney, Australia; Med-X Research Institute, Shanghai Jiao Tong University, China
- Michael Fulham: Sydney Medical School, University of Sydney, Australia; Department of Molecular Imaging, Royal Prince Alfred Hospital, Australia
14. Jönsson D, Falk M, Ynnerman A. Intuitive Exploration of Volumetric Data Using Dynamic Galleries. IEEE Transactions on Visualization and Computer Graphics 2016;22:896-905. [PMID: 26390481] [DOI: 10.1109/tvcg.2015.2467294]
Abstract
In this work we present a volume exploration method designed to be used by novice users and visitors to science centers and museums. The volumetric digitization of artifacts in museums is of rapidly increasing interest, as it enables enhanced user experiences through interactive data visualization. This is, however, a challenging task, since the vast majority of visitors are not familiar with the concepts commonly used in data exploration, such as mapping visual properties from values in the data domain using transfer functions. Interacting in the data domain is an effective way to filter away undesired information, but it is difficult to predict where the values lie in the spatial domain. In this work we make extensive use of dynamic previews instantly generated as the user explores the data domain. The previews allow the user to predict what effect changes in the data domain will have on the rendered image, without being aware that visual parameters are set in the data domain. Each preview represents a subrange of the data domain, where overview and details are given on demand through zooming and panning. The method has been designed with touch interfaces as the target platform for interaction. We provide a qualitative evaluation performed with visitors to a science center to show the utility of the approach.
15. Su YJ, Chuang YY. Disambiguating Stereoscopic Transparency Using a Thaumatrope Approach. IEEE Transactions on Visualization and Computer Graphics 2015;21:959-969. [PMID: 26357258] [DOI: 10.1109/tvcg.2015.2410273]
Abstract
Volume rendering is a popular visualization technique for scientific computing and medical imaging. By assigning proper transparency, it allows us to see more information inside the volume. However, because volume rendering projects complex 3D structures into the 2D domain, the resulting visualization often suffers from ambiguity, and spatial relationships can be difficult to recognize correctly, especially when the setting is highly transparent. Stereoscopic displays do not remedy the problem, even though they add a dimension that would seem helpful for resolving the ambiguity. This paper proposes a thaumatrope method to enhance 3D understanding with stereoscopic transparency for volume rendering. Our method first generates an additional cue with less spatial ambiguity by using a high-opacity setting. To avoid cluttering the actual content, we select only its prominent feature for display. By alternating the actual content and the selected feature quickly, the viewer perceives a whole volume while spatial understanding is enhanced. A user study compared the proposed method with the original stereoscopic volume rendering and with a static combination of the actual content and the selected feature on a 3D display. Results show that the proposed thaumatrope approach provides better spatial understanding than the compared approaches.
16. Song Y, Yang J, Zhou L, Zhu Y. Electric-field-based Transfer Functions for Volume Visualization. J Med Biol Eng 2015. [DOI: 10.1007/s40846-015-0027-6]
17. Volume visualization based on the intensity and SUSAN transfer function spaces. Biomed Signal Process Control 2015. [DOI: 10.1016/j.bspc.2014.12.002]
18. Peng Y, Chen L, Ou-Yang FX, Chen W, Yong JH. JF-cut: a parallel graph cut approach for large-scale image and video. IEEE Trans Image Process 2015;24:655-666. [PMID: 25494510] [DOI: 10.1109/tip.2014.2378060]
Abstract
Graph cut has proven to be an effective scheme for solving a wide variety of segmentation problems in the vision and graphics communities. The main limitation of conventional graph-cut implementations is that they can hardly handle large images or videos because of their high computational complexity. Even though some parallelization solutions exist, they commonly suffer from low parallelism (on CPU) or low convergence speed (on GPU). In this paper, we present a novel graph-cut algorithm that leverages a parallelized jump-flooding technique and a heuristic push-relabel scheme to enhance the graph-cut process, namely back-and-forth relabel, convergence detection, and block-wise push-relabel. The entire process is parallelizable on the GPU and outperforms existing GPU-based implementations in terms of global convergence, information propagation, and performance. We design an intuitive user interface for specifying regions of interest in cases of occlusion when handling video sequences. Experiments on a variety of data sets, including images (up to 15K × 10K), videos (up to 2.5K × 1.5K × 50), and volumetric data, achieve high-quality results and a maximum 40-fold (139-fold) speedup over conventional GPU-based (CPU-based) approaches.
19. Intuitive transfer function design for photographic volumes. J Vis (Tokyo) 2014. [DOI: 10.1007/s12650-014-0267-5]
20. Qin H, Ye B, He R. The voxel visibility model: an efficient framework for transfer function design. Comput Med Imaging Graph 2014;40:138-146. [PMID: 25510474] [DOI: 10.1016/j.compmedimag.2014.11.014]
Abstract
Volume visualization is very important in medical imaging and surgical planning. However, determining an ideal transfer function is still a challenging task because of the lack of measurable metrics for the quality of a volume visualization. In this paper, we present the voxel visibility model as a quality metric: instead of designing transfer functions directly, we design a desired visibility for the voxels. Transfer functions are then obtained by minimizing the distance between the desired visibility distribution and the actual visibility distribution. The voxel visibility model is a mapping function from the feature attributes of voxels to the visibility of voxels. To consider between-class information and within-class information simultaneously, the voxel visibility model is described as a Gaussian mixture model. To highlight important features, the matched result can be obtained by changing the parameters of the voxel visibility model through a simple and effective interface. We also propose an algorithm for transfer function optimization. The effectiveness of the method is demonstrated through experimental results on several volumetric data sets.
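A small sketch of how a desired visibility distribution might be expressed as a Gaussian mixture over a voxel feature attribute (here intensity). The component weights, means, and widths are hypothetical interface parameters, and the optimization that matches actual visibility to this target is omitted.

```python
import numpy as np

def desired_visibility(x, weights, means, sigmas):
    """Desired per-bin visibility as a Gaussian mixture over intensity.
    Raising a component's weight asks for that feature to be more visible."""
    x = np.asarray(x, dtype=np.float64)[:, None]
    comps = np.asarray(weights) * np.exp(-0.5 * ((x - np.asarray(means)) / np.asarray(sigmas)) ** 2)
    v = comps.sum(axis=1)
    return v / v.max()

bins = np.linspace(0.0, 1.0, 256)
# Two features: emphasize the high-intensity one (e.g., bone) over soft tissue.
target_vis = desired_visibility(bins, weights=[0.3, 1.0], means=[0.35, 0.8], sigmas=[0.05, 0.06])
```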
Affiliation(s)
- Hongxing Qin: Chongqing Key Laboratory of Computational Intelligence, Chongqing 400065, China; College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Bin Ye: Chongqing Key Laboratory of Computational Intelligence, Chongqing 400065, China; College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Rui He: Chongqing Key Laboratory of Computational Intelligence, Chongqing 400065, China; College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
21. Bramon R, Ruiz M, Bardera A, Boada I, Feixas M, Sbert M. Information Theory-Based Automatic Multimodal Transfer Function Design. IEEE J Biomed Health Inform 2013;17:870-880. [DOI: 10.1109/jbhi.2013.2263227]
22. Zheng L, Wu Y, Ma KL. Perceptually-based depth-ordering enhancement for direct volume rendering. IEEE Transactions on Visualization and Computer Graphics 2013;19:446-459. [PMID: 22732679] [DOI: 10.1109/tvcg.2012.144]
Abstract
Visualizations of complex volume data usually render selected parts of the volume semitransparently to show inner structures or provide context, which presents a challenge for volume rendering methods: producing images with unambiguous depth-ordering perception. Existing methods use visual cues such as halos and shadows to enhance depth perception; along with other limitations, they introduce redundant information and require additional overhead. This paper presents a new approach to enhancing the depth-ordering perception of volume-rendered images without using additional visual cues. We set up an energy function based on quantitative perception models to measure the quality of the images in terms of the effectiveness of depth-ordering and transparency perception as well as the faithfulness of the information revealed. Guided by this function, we use a conjugate gradient method to iteratively and judiciously enhance the results. Our method can complement existing systems for enhancing volume rendering results. The experimental results demonstrate the usefulness and effectiveness of our approach.
Affiliation(s)
- Lin Zheng: Department of Computer Science, University of California, Davis, CA 95616-8562, USA
23. Ip CY, Varshney A, JaJa J. Hierarchical Exploration of Volumes Using Multilevel Segmentation of the Intensity-Gradient Histograms. IEEE Transactions on Visualization and Computer Graphics 2012;18:2355-2363. [PMID: 26357143] [DOI: 10.1109/tvcg.2012.231]
Abstract
Visual exploration of volumetric datasets to discover the embedded features and spatial structures is a challenging and tedious task. In this paper we present a semi-automatic approach to this problem that works by visually segmenting the intensity-gradient 2D histogram of a volumetric dataset into an exploration hierarchy. Our approach mimics user exploration behavior by analyzing the histogram with the normalized-cut multilevel segmentation technique. Unlike previous work in this area, our technique segments the histogram into a reasonable set of intuitive components that are mutually exclusive and collectively exhaustive. We use information-theoretic measures of the volumetric data segments to guide the exploration. This provides a data-driven coarse-to-fine hierarchy for a user to interactively navigate the volume in a meaningful manner.
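A minimal sketch of the 2D intensity-gradient histogram that forms the input to the multilevel segmentation; the normalized-cut step itself is omitted, and the log scaling is an assumption for display purposes only.

```python
import numpy as np

def intensity_gradient_histogram(volume, bins=256):
    """Build the 2D (intensity, gradient-magnitude) histogram that the paper
    then partitions with normalized-cut multilevel segmentation."""
    g = np.gradient(volume.astype(np.float64))
    grad_mag = np.sqrt(sum(gi ** 2 for gi in g))
    H, xedges, yedges = np.histogram2d(volume.ravel(), grad_mag.ravel(), bins=bins)
    return np.log1p(H), xedges, yedges   # log scale: boundary arcs span magnitudes

vol = np.random.rand(64, 64, 64)
H, xe, ye = intensity_gradient_histogram(vol)
```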
Affiliation(s)
- Cheuk Yiu Ip: Institute for Advanced Computer Studies, University of Maryland, College Park, USA
24. Okuyan E, Güdükbay U, İşler V. Dynamic view-dependent visualization of unstructured tetrahedral volumetric meshes. J Vis (Tokyo) 2012. [DOI: 10.1007/s12650-011-0122-x]
25. Jung Y, Kim J, Feng DD. Dual-modal visibility metrics for interactive PET-CT visualization. Annu Int Conf IEEE Eng Med Biol Soc 2012;2012:2696-2699. [PMID: 23366481] [DOI: 10.1109/embc.2012.6346520]
Abstract
Dual-modal positron emission tomography and computed tomography (PET-CT) imaging enables the visualization of functional structures (PET) within human bodies in the spatial context of their anatomical (CT) counterparts, providing unprecedented capabilities for understanding diseases. However, the need to access and assimilate the two volumes simultaneously has raised new visualization challenges. In typical dual-modal visualization, the transfer functions for the two volumes are designed in isolation and the resulting volumes are fused. Unfortunately, such transfer function design fails to exploit the correlation that exists between the two volumes. In this study, we propose a dual-modal visualization method in which we employ 'visibility' metrics to provide interactive visual feedback on the occlusion caused by the first volume on the second volume and vice versa. We further introduce a region of interest (ROI) function that allows visibility analyses to be restricted to a subsection of the volume. We demonstrate the new visualization enabled by our proposed dual-modal visibility metrics using clinical whole-body PET-CT studies of various diseases.
Affiliation(s)
- Younhyun Jung: School of Information Technologies, University of Sydney, Australia
26. Guo H, Mao N, Yuan X. WYSIWYG (What You See Is What You Get) Volume Visualization. IEEE Transactions on Visualization and Computer Graphics 2011;17:2106-2114. [PMID: 22034329] [DOI: 10.1109/tvcg.2011.261]
Abstract
In this paper, we propose a volume visualization system that accepts direct manipulation through a sketch-based What You See Is What You Get (WYSIWYG) approach. Similar to the operations in painting applications for 2D images, our system provides a full set of tools for direct volume rendering manipulation of color, transparency, contrast, brightness, and other optical properties by brushing a few strokes on top of the rendered volume image. To identify the targeted features of the volume, our system matches the sparse sketching input with clustered features in both image space and volume space. To achieve interactivity, special algorithms that accelerate input identification and feature matching have been developed and implemented in our system. Without resorting to tuning transfer function parameters, our system accepts sparse stroke inputs and provides users with intuitive, flexible, and effective interaction during volume data exploration and visualization.
Affiliation(s)
- Hanqi Guo: Key Laboratory of Machine Perception (Ministry of Education), and School of EECS, Peking University
27. Ruiz M, Bardera A, Boada I, Viola I, Feixas M, Sbert M. Automatic transfer functions based on informational divergence. IEEE Transactions on Visualization and Computer Graphics 2011;17:1932-1941. [PMID: 22034310] [DOI: 10.1109/tvcg.2011.173]
Abstract
In this paper we present a framework for defining transfer functions from a target distribution provided by the user. A target distribution can reflect data importance, a highly relevant data value interval, or a spatial segmentation. Our approach is based on a communication channel between a set of viewpoints and a set of bins of a volume data set, and it supports 1D as well as 2D transfer functions, including gradient information. The transfer functions are obtained by minimizing the informational divergence, or Kullback-Leibler distance, between the visibility distribution captured by the viewpoints and the target distribution selected by the user. Using the derivative of the informational divergence allows for a fast optimization process. Different target distributions for 1D and 2D transfer functions are analyzed together with importance-driven and view-based techniques.
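A toy illustration of fitting per-bin opacities so that the rendered visibility distribution approaches a target. It uses a multiplicative update rather than the paper's derivative of the informational divergence, rays are reduced to sequences of bin indices, and all names and constants are hypothetical.

```python
import numpy as np

def visibility_per_bin(ray_bins, alpha, n_bins):
    """Visibility received by each data-value bin along one ray (front-to-back)."""
    vis = np.zeros(n_bins)
    T = 1.0
    for b in ray_bins:
        vis[b] += T * alpha[b]
        T *= 1.0 - alpha[b]
    return vis

def fit_tf_to_target(rays, target, n_bins=64, steps=50, lr=0.5):
    """Multiplicative per-bin opacity update that lowers the divergence between
    the rendered visibility distribution and the target distribution."""
    alpha = np.full(n_bins, 0.05)
    for _ in range(steps):
        vis = sum(visibility_per_bin(r, alpha, n_bins) for r in rays)
        p = vis / (vis.sum() + 1e-12)
        ratio = (target + 1e-6) / (p + 1e-6)          # raise under-seen bins, lower over-seen
        alpha = np.clip(alpha * ratio ** lr, 1e-4, 0.95)
    return alpha

rng = np.random.default_rng(0)
rays = [rng.integers(0, 64, size=128) for _ in range(50)]     # toy rays as bin indices
alpha = fit_tf_to_target(rays, target=np.full(64, 1.0 / 64))  # uniform target visibility
```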
28. Zheng Z, Ahmed N, Mueller K. iView: a feature clustering framework for suggesting informative views in volume visualization. IEEE Transactions on Visualization and Computer Graphics 2011;17:1959-1968. [PMID: 22034313] [DOI: 10.1109/tvcg.2011.218]
Abstract
The unguided visual exploration of volumetric data can be both a challenging and a time-consuming undertaking. Identifying a set of favorable vantage points at which to start exploratory expeditions can greatly reduce this effort and can also ensure that no important structures are missed. Recent research efforts have focused on entropy-based viewpoint selection criteria that depend on scalar values describing the structures of interest. In contrast, we propose a viewpoint-suggestion pipeline that is based on feature clustering in high-dimensional space. We use gradient/normal variation as a metric to identify interesting local events and then cluster these via k-means to detect important salient composite features. Next, we compute the maximum possible exposure of these composite features for different viewpoints and calculate a 2D entropy map, parameterized in longitude and latitude, to point out promising view orientations. Superimposed onto an interactive track-ball interface, this entropy map lets users quickly navigate to potentially interesting viewpoints, where visibility-based transfer functions can be employed to generate volume renderings that minimize occlusion. To give full exploration freedom to the user, the entropy map is updated on the fly whenever a view has been selected, pointing to new and promising but so-far-unseen view directions. Alternatively, our system can use a set-cover optimization algorithm to provide a minimal set of views needed to observe all features. The views so generated can then be saved into a list for further inspection or into a gallery for a summary presentation.
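A minimal sketch of the entropy-map step, assuming per-view feature exposures have already been computed; the clustering and exposure computation are omitted, and the array shapes are hypothetical.

```python
import numpy as np

def view_entropy_map(feature_exposure):
    """feature_exposure[i, j, k]: visible exposure of composite feature k from
    the viewpoint at longitude i, latitude j. The entropy of each view's
    exposure distribution scores how evenly that view reveals the features."""
    p = feature_exposure / (feature_exposure.sum(axis=-1, keepdims=True) + 1e-12)
    return -(p * np.log2(p + 1e-12)).sum(axis=-1)   # 2D map; peaks suggest good views

exposure = np.random.default_rng(2).random((36, 18, 5))   # 10-degree grid, 5 features
H = view_entropy_map(exposure)
best_lon, best_lat = np.unravel_index(np.argmax(H), H.shape)
```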
Affiliation(s)
- Ziyi Zheng: Visual Analytics and Imaging (VAI) Laboratory, Center for Visual Computing, Computer Science Department, Stony Brook University, NY, USA
29
|
Yunhai Wang, Wei Chen, Jian Zhang, Tingxing Dong, Guihua Shan, Xuebin Chi. Efficient Volume Exploration Using the Gaussian Mixture Model. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2011; 17:1560-1573. [PMID: 21670489 DOI: 10.1109/tvcg.2011.97] [Citation(s) in RCA: 13] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/30/2023]
Abstract
The multidimensional transfer function is a flexible and effective tool for exploring volume data. However, designing an appropriate transfer function is a trial-and-error process and remains a challenge. In this paper, we propose a novel volume exploration scheme that explores volumetric structures in the feature space by modeling the space with a Gaussian mixture model (GMM). Our approach has three distinctive advantages. First, an initial feature separation can be achieved automatically through GMM estimation. Second, the calculated Gaussians can be directly mapped to a set of elliptical transfer functions (ETFs), facilitating a fast pre-integrated volume rendering process. Third, an inexperienced user can flexibly manipulate the ETFs with the assistance of a suite of simple widgets and discover potential features within a few interactions. We further extend the GMM-based exploration scheme to time-varying data sets using an incremental GMM estimation algorithm, which estimates the GMM for one time step from the data of that step and the GMM of the preceding steps. Sequentially applying the incremental algorithm to all time steps in a selected time interval yields a preliminary classification for each time step. In addition, the computed ETFs can be freely adjusted, and the adjustments are automatically propagated to the other time steps. In this way, coherent user-guided exploration of a given time interval is achieved. Our GPU implementation demonstrates interactive performance and good scalability. The effectiveness of our approach is verified on several data sets.
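A small sketch of mapping fitted GMM components to elliptical transfer-function lobes, using scikit-learn's GaussianMixture. The Mahalanobis-falloff opacity and the constants are assumptions, not the paper's exact ETF definition.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_gmm(features, n_components=2, seed=0):
    """Fit a GMM in a 2D (intensity, gradient-magnitude) feature space."""
    return GaussianMixture(n_components=n_components, covariance_type="full",
                           random_state=seed).fit(features)

def etf_opacity(gmm, X, max_opacity=0.8):
    """Opacity of samples X from the nearest elliptical lobe:
    exp(-0.5 * Mahalanobis^2) falloff around each Gaussian center."""
    X = np.atleast_2d(X)
    falloff = np.stack([
        np.exp(-0.5 * np.einsum("ni,ij,nj->n", X - mu, np.linalg.inv(cov), X - mu))
        for mu, cov in zip(gmm.means_, gmm.covariances_)
    ])
    return max_opacity * falloff.max(axis=0)   # nearest lobe wins

rng = np.random.default_rng(0)
feats = np.vstack([rng.normal([0.3, 0.1], 0.03, (500, 2)),   # two toy feature clusters
                   rng.normal([0.7, 0.5], 0.05, (500, 2))])
gmm = fit_gmm(feats)
alpha = etf_opacity(gmm, feats[:5])
```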
30. Fujiwara T, Iwamaru M, Tange M, Someya S, Okamoto K. A fractal-based 2D expansion method for multi-scale volume data visualization. J Vis (Tokyo) 2011. [DOI: 10.1007/s12650-011-0084-z]