1. Zhou L, Fan M, Hansen C, Johnson CR, Weiskopf D. A Review of Three-Dimensional Medical Image Visualization. Health Data Science 2022; 2022:9840519. [PMID: 38487486; PMCID: PMC10880180; DOI: 10.34133/2022/9840519]
Abstract
Importance: Medical images are essential to modern medicine and an important research subject in visualization. However, medical experts are often not aware of the many advanced three-dimensional (3D) medical image visualization techniques that could increase their capabilities in data analysis and assist the decision-making process for specific medical problems. Our paper provides a review of 3D visualization techniques for medical images, intending to bridge the gap between medical experts and visualization researchers.
Highlights: Fundamental visualization techniques are revisited for various medical imaging modalities, from computed tomography to diffusion tensor imaging, featuring techniques that enhance spatial perception, which is critical for medical practice. The state of the art of medical visualization is reviewed based on a procedure-oriented classification of medical problems for studies of individuals and populations. This paper summarizes free software tools for different modalities of medical images, designed for various purposes including visualization, analysis, and segmentation, and provides the respective Internet links.
Conclusions: Visualization techniques are a useful tool for medical experts to tackle specific medical problems in their daily work. Our review provides a quick reference to such techniques given the medical problem and the modalities of the associated medical images. We summarize fundamental techniques and readily available visualization tools to help medical experts better understand and utilize medical imaging data. This paper could contribute to the joint effort of the medical and visualization communities to advance precision medicine.
Affiliation(s)
- Liang Zhou: National Institute of Health Data Science, Peking University, Beijing, China
- Mengjie Fan: National Institute of Health Data Science, Peking University, Beijing, China
- Charles Hansen: Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, USA
- Chris R. Johnson: Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, USA
- Daniel Weiskopf: Visualization Research Center (VISUS), University of Stuttgart, Stuttgart, Germany
2. Engel D, Ropinski T. Deep Volumetric Ambient Occlusion. IEEE Transactions on Visualization and Computer Graphics 2021; 27:1268-1278. [PMID: 33048686; DOI: 10.1109/tvcg.2020.3030344]
Abstract
We present a novel deep-learning-based technique for volumetric ambient occlusion in the context of direct volume rendering. Our proposed Deep Volumetric Ambient Occlusion (DVAO) approach can predict per-voxel ambient occlusion in volumetric data sets, while considering global information provided through the transfer function. The proposed neural network only needs to be executed upon change of this global information and thus supports real-time volume interaction. Accordingly, we demonstrate DVAO's ability to predict volumetric ambient occlusion, such that it can be applied interactively within direct volume rendering. To achieve the best possible results, we propose and analyze a variety of transfer function representations and injection strategies for deep neural networks. Based on the obtained results, we also give recommendations applicable in similar volume learning scenarios. Lastly, we show that DVAO generalizes to a variety of modalities, despite being trained on computed tomography data only.
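To make the baseline concrete, here is a minimal sketch (Python with NumPy/SciPy, not the authors' network) of the classical per-voxel ambient occlusion that a model like DVAO learns to approximate: opacity comes from applying the transfer function, and occlusion is estimated as the average opacity in a local neighborhood. The function names and the box-filter approximation of spherical sampling are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def volumetric_ao(volume, transfer_function, radius_voxels=8):
    """volume: 3D array of normalized scalars in [0, 1].
    transfer_function: maps scalars to opacity in [0, 1]."""
    opacity = transfer_function(volume)
    # Box-filter approximation of spherical occlusion sampling.
    occlusion = uniform_filter(opacity, size=2 * radius_voxels + 1)
    return 1.0 - occlusion  # 1 = fully lit, 0 = fully occluded

# Example: a simple ramp transfer function on a random CT-like volume.
ct = np.random.rand(64, 64, 64)
ao = volumetric_ao(ct, lambda s: np.clip((s - 0.3) * 2.0, 0.0, 1.0))
```

Because such a filter must be recomputed on every transfer function edit, DVAO's appeal lies in replacing it with a single network inference conditioned on the transfer function.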
3. Zhou L, Weiskopf D, Johnson CR. Perceptually guided contrast enhancement based on viewing distance. Journal of Visual Languages and Computing 2019; 55:100911. [PMID: 31827316; PMCID: PMC6904545; DOI: 10.1016/j.cola.2019.100911]
Abstract
We propose an image-space contrast enhancement method for color-encoded visualization. The contrast of an image is enhanced through a perceptually guided approach that interfaces with the user through a single, intuitive parameter: the virtual viewing distance. To this end, we analyze a multiscale contrast model of the input image and test the visibility of bandpass images of all scales at a virtual viewing distance. By adapting the weights of the bandpass images with a threshold model of spatial vision, this image-based method enhances contrast to compensate for the contrast loss caused by viewing the image at that distance. Relevant features in the color image can be further emphasized by the user through overcompensation. The weights can be assigned with a simple band-based approach or with an efficient pixel-based approach that reduces ringing artifacts. The method is efficient and can be integrated into any visualization tool, as it is a generic image-based post-processing technique. Using highly diverse datasets, we show the usefulness of perception compensation across a wide range of typical visualizations.
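A band-based variant of this idea can be sketched as follows. This is illustrative only: the bandpass layers are differences of Gaussians, and the visibility gain is a stand-in for the paper's threshold model of spatial vision; all names and constants are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def compensate(image, viewing_distance, n_bands=5):
    """image: 2D luminance array; viewing_distance: virtual distance (e.g., meters)."""
    blurred = [gaussian_filter(image, sigma=2.0 ** i) for i in range(n_bands + 1)]
    result = blurred[-1]  # lowest-frequency residual
    for i in range(n_bands):
        band = blurred[i] - blurred[i + 1]  # bandpass layer i (finest first)
        # Hypothetical visibility weight: fine bands lose visibility faster
        # with distance, so they receive larger compensation gains.
        gain = 1.0 + 0.5 * viewing_distance / (2.0 ** i + 1.0)
        result = result + gain * band
    return result

enhanced = compensate(np.random.rand(256, 256), viewing_distance=2.0)
```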
Affiliation(s)
- Liang Zhou: SCI Institute, University of Utah, United States
- Daniel Weiskopf: Visualization Research Center, University of Stuttgart, Germany
4. Efficient Algorithms for Real-Time GPU Volumetric Cloud Rendering with Enhanced Geometry. Symmetry (Basel) 2018. [DOI: 10.3390/sym10040125]
5.
Abstract
Direct volume rendering has become an essential tool to explore and analyse 3D medical images. Despite several advances in the field, it remains a challenge to produce an image that highlights the anatomy of interest, avoids occlusion of important structures, and provides an intuitive perception of shape and depth while retaining sufficient contextual information. Although the computer graphics community has proposed several solutions to address specific visualization problems, the medical imaging community still lacks a general volume rendering implementation that can address a wide variety of visualization use cases while avoiding complexity. In this paper, we propose a new open source framework called the Programmable Ray Integration Shading Model, or PRISM, that implements a complete GPU ray-casting solution in which critical parts of the ray integration algorithm can be replaced to produce new volume rendering effects. A graphical user interface allows clinical users to easily experiment with pre-existing rendering-effect building blocks drawn from an open database. For programmers, the interface enables real-time editing of the code inside the blocks. We show that, in its default mode, the PRISM framework produces images very similar to those produced by a widely adopted direct volume rendering implementation in VTK, at comparable frame rates. More importantly, we demonstrate the flexibility of the framework by showing how several volume rendering techniques can be implemented in PRISM with no more than a few lines of code. Finally, we demonstrate the simplicity of our system in a usability study with 5 medical imaging experts who had little or no experience with volume rendering. The PRISM framework has the potential to greatly accelerate the development of volume rendering for medical applications by promoting sharing and enabling faster development iterations and easier collaboration between engineers and clinical personnel.
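The core idea, replaceable building blocks inside a ray-integration loop, can be illustrated with a minimal CPU sketch. This is not the actual framework (which operates on GPU shader code); all function names below are hypothetical.

```python
import numpy as np

def shade_default(scalar, color_tf, opacity_tf):
    """Default emission-absorption block; swap this function for new effects."""
    return color_tf(scalar), opacity_tf(scalar)

def cast_ray(samples, shade, color_tf, opacity_tf, step=1.0):
    """Front-to-back compositing over scalar samples along one ray."""
    color, alpha = np.zeros(3), 0.0
    for s in samples:
        c, a = shade(s, color_tf, opacity_tf)
        a = 1.0 - (1.0 - a) ** step       # opacity correction for step size
        color += (1.0 - alpha) * a * np.asarray(c)
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                  # early ray termination
            break
    return color, alpha

ray_samples = np.random.rand(128)
rgb, a = cast_ray(ray_samples, shade_default,
                  color_tf=lambda s: (s, s * 0.5, 1.0 - s),
                  opacity_tf=lambda s: 0.05 * s)
```

Passing a different `shade` function, e.g., one that adds a gradient-based lighting term, changes the rendering effect without touching the compositing loop, which mirrors how PRISM isolates its replaceable shader blocks.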
6. Magnus JG, Bruckner S. Interactive Dynamic Volume Illumination with Refraction and Caustics. IEEE Transactions on Visualization and Computer Graphics 2018; 24:984-993. [PMID: 28866548; DOI: 10.1109/tvcg.2017.2744438]
Abstract
In recent years, significant progress has been made in developing high-quality interactive methods for realistic volume illumination. However, refraction, despite being an important aspect of light propagation in participating media, has so far received little attention. In this paper, we present a novel approach for refractive volume illumination, including caustics, that achieves interactive frame rates. By interleaving light and viewing ray propagation, our technique avoids memory-intensive storage of illumination information and does not require any precomputation. It is fully dynamic: all parameters, such as light position and transfer function, can be modified interactively without a performance penalty.
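As a rough illustration of the refraction part only (not the paper's interleaved light/view propagation), a ray marcher can bend the ray direction using the gradient of a refractive-index field, a discrete form of the eikonal equation of geometric optics. The index field below is a toy assumption.

```python
import numpy as np

def march_refractive(pos, direction, index_field, grad, step=0.5, n_steps=200):
    """index_field(p) -> refractive index n; grad(p) -> its spatial gradient."""
    path = [pos]
    for _ in range(n_steps):
        n = index_field(pos)
        # Eikonal update: d/ds (n * dir) = grad(n); renormalize the direction.
        direction = direction + step * grad(pos) / max(n, 1e-6)
        direction = direction / np.linalg.norm(direction)
        pos = pos + step * direction
        path.append(pos)
    return np.array(path)

# Toy medium: index increases with height, so the ray bends toward higher n.
path = march_refractive(np.zeros(3), np.array([1.0, 0.0, 0.0]),
                        index_field=lambda p: 1.0 + 0.1 * p[2],
                        grad=lambda p: np.array([0.0, 0.0, 0.1]))
```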
7. Ament M, Zirr T, Dachsbacher C. Extinction-Optimized Volume Illumination. IEEE Transactions on Visualization and Computer Graphics 2017; 23:1767-1781. [PMID: 27214903; DOI: 10.1109/tvcg.2016.2569080]
Abstract
We present a novel method to optimize the attenuation of light for the single scattering model in direct volume rendering. A common problem of single scattering is the high dynamic range between lit and shadowed regions due to the exponential attenuation of light along a ray. Moreover, light is often attenuated too strongly between a sample point and the camera, hampering the visibility of important features. Our algorithm employs an importance function to selectively illuminate important structures and make them visible from the camera. With the importance function, more light can be transmitted to the features of interest, while contextual structures cast shadows that provide visual cues for depth perception. At the same time, more scattered light is transmitted from the sample point to the camera to improve the primary visibility of important features. We formulate a minimization problem that automatically determines the extinction along a view or shadow ray to obtain a good balance between sufficient transmittance and attenuation. In contrast to previous approaches, we do not require a computationally expensive solution of a global optimization but instead provide a closed-form solution for each sampled extinction value along a view or shadow ray, and thus achieve interactive performance.
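A much-simplified sketch of the underlying idea, scaling extinction down by a per-sample importance before accumulating optical depth, is shown below. The paper instead solves a per-sample minimization in closed form; the `strength` parameter and function names are assumptions.

```python
import numpy as np

def transmittance(extinction, importance, strength=0.8, step=1.0):
    """extinction, importance: per-sample arrays along one shadow ray."""
    # Reduce extinction in proportion to importance (0 = context, 1 = feature),
    # so more light reaches important structures while context still shadows.
    sigma = extinction * (1.0 - strength * importance)
    optical_depth = np.cumsum(sigma) * step
    return np.exp(-optical_depth)  # Beer-Lambert attenuation

T = transmittance(np.full(64, 0.1), np.linspace(0.0, 1.0, 64))
```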
8. Zhou J, Wang X, Cui H, Gong P, Miao X, Miao Y, Xiao C, Chen F, Feng D. Topology-aware illumination design for volume rendering. BMC Bioinformatics 2016; 17:309. [PMID: 27538893; PMCID: PMC4991004; DOI: 10.1186/s12859-016-1177-4]
Abstract
Background: Direct volume rendering is a flexible and effective approach to inspect large volumetric data such as medical and biological images. In conventional volume rendering, it is often time-consuming to set up a meaningful illumination environment. Moreover, conventional illumination approaches usually assign the same values of the illumination-model variables to different structures manually and thus neglect the important illumination variations due to structural differences.
Results: We introduce a novel illumination design paradigm for volume rendering that uses topology to automate illumination parameter definitions meaningfully. The topological features are extracted from the contour tree of an input volumetric data set. The automation of illumination design is achieved based on four aspects: attenuation, distance, saliency, and contrast perception. To better distinguish structures and maximize the perceived illuminance differences between structures, a two-phase topology-aware illuminance perception contrast model is proposed based on the psychophysical concept of the Just-Noticeable Difference.
Conclusions: The proposed approach allows meaningful and efficient automatic generation of illumination in volume rendering. Our results show that the approach is more effective in depicting depth and shape, as well as in providing higher perceptual differences between structures.
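As a small illustration of the Just-Noticeable-Difference component only (the contour-tree extraction and the full two-phase model are omitted), structures can be assigned luminance levels that each differ from the previous one by at least a Weber-fraction step. All constants here are assumptions.

```python
import numpy as np

def jnd_levels(n_structures, base_luminance=0.2, weber_fraction=0.1):
    """One luminance per structure; each level is one JND above the previous,
    following Weber's law (delta_L / L >= k)."""
    levels = [base_luminance]
    for _ in range(n_structures - 1):
        levels.append(levels[-1] * (1.0 + weber_fraction))
    return np.array(levels)

# Five structures ordered, e.g., by contour-tree depth.
print(jnd_levels(5))
```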
Affiliation(s)
- Jianlong Zhou: Xi'an Jiaotong University City College, 8715 Shangji Road, Xi'an, Shaanxi 710018, People's Republic of China; DATA61, CSIRO, 13 Garden Street, Eveleigh, NSW 2015, Australia
- Xiuying Wang: The University of Sydney, 1 Cleveland Street, Darlington, NSW 2008, Australia
- Hui Cui: The University of Sydney, 1 Cleveland Street, Darlington, NSW 2008, Australia
- Peng Gong: The University of Sydney, 1 Cleveland Street, Darlington, NSW 2008, Australia
- Xianglin Miao: Xi'an Jiaotong University City College, 8715 Shangji Road, Xi'an, Shaanxi 710018, People's Republic of China
- Yalin Miao: Xi'an University of Technology, 5 Jinhua Nan Road, Xi'an, Shaanxi 710048, People's Republic of China
- Chun Xiao: Xiangtan University, Xiangtan, Hunan 411105, People's Republic of China
- Fang Chen: DATA61, CSIRO, 13 Garden Street, Eveleigh, NSW 2015, Australia
- Dagan Feng: The University of Sydney, 1 Cleveland Street, Darlington, NSW 2008, Australia
9. Ament M, Weiskopf D, Dachsbacher C. Ambient Volume Illumination. Computing in Science & Engineering 2016. [DOI: 10.1109/mcse.2016.23]
10. Ament M, Dachsbacher C. Anisotropic Ambient Volume Shading. IEEE Transactions on Visualization and Computer Graphics 2016; 22:1015-1024. [PMID: 26529745; DOI: 10.1109/tvcg.2015.2467963]
Abstract
We present a novel method to compute anisotropic shading for direct volume rendering to improve the perception of the orientation and shape of surface-like structures. We determine the scale-aware anisotropy of a shading point by analyzing its ambient region. We sample adjacent points with similar scalar values to perform a principal component analysis by computing the eigenvectors and eigenvalues of the covariance matrix. In particular, we estimate the tangent directions, which serve as the tangent frame for anisotropic bidirectional reflectance distribution functions. Moreover, we exploit the ratio of the eigenvalues to measure the magnitude of the anisotropy at each shading point. Altogether, this allows us to model a data-driven, smooth transition from isotropic to strongly anisotropic volume shading. In this way, the shape of volumetric features can be enhanced significantly by aligning specular highlights along the principal direction of anisotropy. Our algorithm is independent of the transfer function, which allows us to compute all shading parameters once and store them with the data set. We integrated our method in a GPU-based volume renderer, which offers interactive control of the transfer function, light source positions, and viewpoint. Our results demonstrate the benefit of anisotropic shading for visualization to achieve data-driven local illumination for improved perception compared to isotropic shading.
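The neighborhood PCA at the heart of the method can be sketched as follows. This is illustrative CPU code; the paper computes the analysis scale-aware per shading point and stores the results with the data set. The anisotropy measure 1 - lambda_2 / lambda_1 is one common choice consistent with the eigenvalue ratio the abstract describes.

```python
import numpy as np

def local_anisotropy(points):
    """points: (N, 3) positions of neighbors with similar scalar value."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)               # ascending order
    tangent_u, tangent_v = eigvecs[:, 2], eigvecs[:, 1]  # principal directions
    # Ratio of the two largest eigenvalues: 0 = isotropic shading,
    # toward 1 = strongly anisotropic (highlights align with tangent_u).
    anisotropy = 1.0 - eigvals[1] / max(eigvals[2], 1e-12)
    return tangent_u, tangent_v, anisotropy

# Neighborhood stretched along x: strong anisotropy, tangent_u near the x-axis.
pts = np.random.randn(500, 3) * np.array([4.0, 1.0, 0.2])
u, v, a = local_anisotropy(pts)
```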
11. Sundén E, Kottravel S, Ropinski T. Multimodal volume illumination. Computers & Graphics 2015; 50:47-60. [DOI: 10.1016/j.cag.2015.05.004]
12. Ament M, Sadlo F, Dachsbacher C, Weiskopf D. Low-Pass Filtered Volumetric Shadows. IEEE Transactions on Visualization and Computer Graphics 2014; 20:2437-2446. [PMID: 26356957; DOI: 10.1109/tvcg.2014.2346333]
Abstract
We present a novel and efficient method to compute volumetric soft shadows for interactive direct volume visualization to improve the perception of spatial depth. By directly controlling the softness of volumetric shadows, disturbing visual patterns due to hard shadows can be avoided, and users can adapt the illumination to their personal and application-specific requirements. We compute the shadowing of a point in the data set by employing spatial filtering of the optical depth over a finite area patch oriented toward each light source. Conceptually, the area patch spans a volumetric region that is sampled with shadow rays; afterward, the resulting optical depth values are convolved with a low-pass filter on the patch. In the numerical computation, however, to avoid expensive shadow ray marching, we show how to align and set up summed-area tables for both directional and point light sources. Once computed, the summed-area tables enable efficient evaluation of soft shadows for each point in constant time without shadow ray marching, and the softness of the shadows can be controlled interactively. We integrated our method in a GPU-based volume renderer with ray casting from the camera, which offers interactive control of the transfer function, light source positions, and viewpoint for both static and time-dependent data sets. Our results demonstrate the benefit of soft shadows for visualization to achieve user-controlled illumination with many-point lighting setups for improved perception combined with high rendering speed.
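The key data structure, a summed-area table that turns box-filtered optical-depth lookups into constant-time queries, can be sketched in 2D as follows. This is illustrative; the paper aligns such tables with each directional or point light source.

```python
import numpy as np

def summed_area_table(values):
    """values: 2D array of optical-depth samples on the light-facing patch."""
    return values.cumsum(axis=0).cumsum(axis=1)

def box_filtered(sat, r0, c0, r1, c1):
    """Mean over the inclusive rectangle [r0..r1] x [c0..c1] in O(1),
    via the standard four-corner inclusion-exclusion on the table."""
    total = sat[r1, c1]
    if r0 > 0: total -= sat[r0 - 1, c1]
    if c0 > 0: total -= sat[r1, c0 - 1]
    if r0 > 0 and c0 > 0: total += sat[r0 - 1, c0 - 1]
    return total / ((r1 - r0 + 1) * (c1 - c0 + 1))

depth = np.random.rand(32, 32) * 0.3
sat = summed_area_table(depth)
soft_tau = box_filtered(sat, 8, 8, 15, 15)   # wider box = softer shadow
shadow = np.exp(-soft_tau)                   # low-pass filtered transmittance
```

Enlarging the query rectangle widens the low-pass filter, which is exactly the interactive softness control the abstract describes, with no shadow rays marched per query.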