1. Zou J, Qin J. Real-time volume rendering for three-dimensional fetal ultrasound using volumetric photon mapping. Vis Comput Ind Biomed Art 2024; 7:25. [PMID: 39453538] [PMCID: PMC11511803] [DOI: 10.1186/s42492-024-00177-4]
Abstract
Three-dimensional (3D) fetal ultrasound has been widely used in prenatal examinations. Realistic, real-time volume rendering of ultrasound data can enhance the effectiveness of diagnoses and facilitate communication between obstetricians and expectant mothers. However, this remains a challenging task because (1) ultrasound images contain a large amount of speckle noise and (2) they usually have low contrast, making it difficult to distinguish different tissues and organs; traditional local-illumination-based methods therefore do not achieve satisfactory results, and the real-time requirement makes the task even more challenging. This study presents a novel real-time volume-rendering method equipped with a global illumination model for 3D fetal ultrasound visualization. The method renders direct and indirect illumination separately by calculating single-scattering and multiple-scattering radiances, respectively. The indirect illumination effect is simulated using volumetric photon mapping, and a novel screen-space density estimation is proposed to calculate each photon's brightness, avoiding complicated storage structures and accelerating computation. A high-dynamic-range approach is also proposed to address the fact that the dynamic range of fetal skin exceeds that of the display device. Experiments show that, compared with conventional methods, our technique generates realistic rendering results with far more depth information.
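The distinctive step here is estimating photon density in screen space instead of in a spatial data structure such as a kd-tree. The abstract does not give the estimator, so the following is only a minimal sketch under assumed choices (a Gaussian splatting kernel, and a simple Reinhard operator standing in for the paper's HDR approach); all function names are illustrative.

```python
import numpy as np

def screen_space_density_estimate(photon_xy, photon_power, width, height, sigma=2.0):
    """Accumulate photon contributions per pixel with a Gaussian kernel.

    photon_xy:    (N, 2) projected screen coordinates of photons.
    photon_power: (N,) radiant power carried by each photon.
    Returns a (height, width) brightness estimate -- a stand-in for the
    paper's screen-space density estimation (the exact kernel is an assumption).
    """
    img = np.zeros((height, width))
    r = int(3 * sigma)  # kernel support radius in pixels
    for (x, y), power in zip(photon_xy, photon_power):
        xi, yi = int(round(x)), int(round(y))
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                px, py = xi + dx, yi + dy
                if 0 <= px < width and 0 <= py < height:
                    w = np.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma))
                    img[py, px] += power * w
    return img / (2 * np.pi * sigma * sigma)  # normalize the Gaussian kernel

def tone_map(hdr):
    """Map the high-dynamic-range estimate into display range (placeholder
    Reinhard operator; the paper's HDR method is not given in the abstract)."""
    return hdr / (1.0 + hdr)
```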
Affiliation(s)
- Jing Zou: Centre for Smart Health, School of Nursing, The Hong Kong Polytechnic University, Hong Kong, China
- Jing Qin: Centre for Smart Health, School of Nursing, The Hong Kong Polytechnic University, Hong Kong, China
2. Iglesias-Guitian JA, Mane P, Moon B. Real-Time Denoising of Volumetric Path Tracing for Direct Volume Rendering. IEEE Transactions on Visualization and Computer Graphics 2022; 28:2734-2747. [PMID: 33180727] [DOI: 10.1109/tvcg.2020.3037680]
Abstract
Direct volume rendering (DVR) using volumetric path tracing (VPT) is a scientific visualization technique that simulates light transport through an object's matter using physically based lighting models. Monte Carlo (MC) path tracing is often used with surface models, yet its application to volumetric models is difficult due to the complexity of integrating MC light paths in volumetric media with no, or only smooth, material boundaries. Moreover, auxiliary geometry buffers (G-buffers) produced for volumes are typically very noisy, failing to guide image denoisers that rely on this information to preserve image details. This makes existing real-time denoisers, which take noise-free G-buffers as their input, less effective for denoising VPT images. We propose the necessary modifications to an image-based denoiser previously used for rendering surface models and demonstrate effective denoising of VPT images. In particular, our denoiser exploits temporal coherence between frames without relying on noise-free G-buffers, which has been a common assumption of existing denoisers for surface models. Our technique preserves high-frequency details through a weighted recursive least squares approach that handles the heterogeneous noise of volumetric models. We show for various real data sets that our method improves the visual fidelity and temporal stability of VPT during classic DVR operations such as camera movements, modifications of the light sources, and edits to the volume transfer function.
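The key departure from G-buffer-guided denoisers is a temporal recursive least squares update per pixel. As a minimal sketch, the filter below fits the simplest possible model, a constant per-pixel radiance with a forgetting factor; the paper's weighted RLS over richer feature models is not specified in the abstract, so this scalar formulation is an assumption.

```python
import numpy as np

def rls_temporal_filter(frames, forgetting=0.9):
    """Per-pixel recursive least squares fit of a constant radiance model.

    frames: (T, H, W) sequence of noisy VPT frames.
    forgetting < 1 discounts old samples so the filter can track edits to
    the camera, lights, or transfer function (temporal coherence).
    Returns the (T, H, W) sequence of denoised estimates.
    """
    theta = frames[0].astype(np.float64)   # current per-pixel estimate
    P = np.ones_like(theta)                # per-pixel estimate covariance
    out = [theta.copy()]
    for y in frames[1:]:
        P = P / forgetting                 # inflate covariance (forget old data)
        K = P / (1.0 + P)                  # per-pixel RLS gain
        theta = theta + K * (y - theta)    # innovation update
        P = (1.0 - K) * P                  # covariance update
        out.append(theta.copy())
    return np.stack(out)
```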
3. Fukumoto W, Kitera N, Mitani H, Sueoka T, Kondo S, Kawashita I, Nakamura Y, Nagao M, Awai K. Global illumination rendering versus volume rendering for the forensic evaluation of stab wounds using computed tomography. Sci Rep 2022; 12:2452. [PMID: 35165357] [PMCID: PMC8844357] [DOI: 10.1038/s41598-022-06541-9]
Abstract
We compared three-dimensional (3D) CT images of stabbing victims reconstructed with volume rendering (VR) or global illumination rendering (GIR), a new technique now available for the reconstruction of 3D CT images. GIR simulates the complete interactions of photons with the scanned object, thereby providing photorealistic images. The diagnostic value of the images was also compared with that of macroscopic photographs. We used postmortem 3D CT images of 14 stabbing victims who had undergone autopsy and CT studies. The 3D CT images were subjected to GIR or VR, and the 3D effect and the smoothness of the skin surface were graded on a 5-point scale. We also compared the 3D CT images of 37 stab wounds with macroscopic photographs. The maximum diameter of the wounds was measured on VR and GIR images and compared with the diameter recorded at autopsy. The overall image-quality scores and the ability to assess the stab wounds were significantly better for GIR than for VR images (median scores: VR = 3 vs GIR = 4, p < 0.01). The mean differences between the wound diameters measured on VR and GIR images and those recorded at autopsy were both 0.2 cm. For the assessment of stab wounds, 3D CT images subjected to GIR were superior to VR images, and the diagnostic value of GIR images was comparable to that of macroscopic photographs.
4. Bruder V, Müller C, Frey S, Ertl T. On Evaluating Runtime Performance of Interactive Visualizations. IEEE Transactions on Visualization and Computer Graphics 2020; 26:2848-2862. [PMID: 30763241] [DOI: 10.1109/tvcg.2019.2898435]
Abstract
As our field matures, the evaluation of visualization techniques has extended from reporting runtime performance to studying user behavior. Consequently, many methodologies and best practices for user studies have evolved. While maintaining interactivity continues to be crucial for the exploration of large data sets, no similar methodological foundation for evaluating runtime performance has been developed. Our analysis of 50 recent visualization papers on new or improved techniques for rendering volumes or particles indicates that only a very limited set of parameters, such as different data sets, camera paths, viewport sizes, and GPUs, is investigated, which makes comparison with other techniques or generalization to other parameter ranges questionable at best. To derive a deeper understanding of qualitative runtime behavior and quantitative parameter dependencies, we developed a framework for the most exhaustive performance evaluation of volume and particle visualization techniques that we are aware of, including millions of measurements on ten different GPUs. This paper reports our insights from a statistical analysis of these data, discussing independent and linear parameter behavior as well as non-obvious effects. We give recommendations for best practices when evaluating the runtime performance of scientific visualization applications, which can serve as a starting point for more elaborate models of performance quantification.
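The methodological point, measuring the full cross product of a parameter space rather than a few hand-picked configurations, can be illustrated with a small harness. The parameter names and the dummy render callback below are placeholders, not part of the authors' framework.

```python
import itertools
import statistics
import time

def benchmark(render, parameter_space, repetitions=5):
    """Measure runtime over the full cross product of a parameter space.

    render: callable taking keyword arguments (dataset, viewport, ...) and
            producing one frame; a placeholder for a real GPU renderer.
    parameter_space: dict mapping parameter name -> list of values.
    Returns (configuration, mean seconds, stdev seconds) rows.
    """
    names = list(parameter_space)
    results = []
    for values in itertools.product(*(parameter_space[n] for n in names)):
        config = dict(zip(names, values))
        times = []
        for _ in range(repetitions):
            start = time.perf_counter()
            render(**config)
            times.append(time.perf_counter() - start)
        results.append((config, statistics.mean(times), statistics.stdev(times)))
    return results

# Example sweep over the kinds of parameters the paper found under-reported.
space = {
    "dataset": ["head", "flow"],
    "viewport": [(512, 512), (1920, 1080)],
    "step_size": [0.5, 1.0, 2.0],
}
rows = benchmark(lambda **cfg: sum(range(10000)), space)  # dummy workload
```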
5. Ament M, Zirr T, Dachsbacher C. Extinction-Optimized Volume Illumination. IEEE Transactions on Visualization and Computer Graphics 2017; 23:1767-1781. [PMID: 27214903] [DOI: 10.1109/tvcg.2016.2569080]
Abstract
We present a novel method to optimize the attenuation of light for the single-scattering model in direct volume rendering. A common problem of single scattering is the high dynamic range between lit and shadowed regions due to the exponential attenuation of light along a ray. Moreover, light is often attenuated too strongly between a sample point and the camera, hampering the visibility of important features. Our algorithm employs an importance function to selectively illuminate important structures and make them visible from the camera. With the importance function, more light can be transmitted to the features of interest, while contextual structures cast shadows that provide visual cues for depth perception. At the same time, more scattered light is transmitted from the sample point to the camera to improve the primary visibility of important features. We formulate a minimization problem that automatically determines the extinction along a view or shadow ray to obtain a good balance between sufficient transmittance and attenuation. In contrast to previous approaches, we do not require a computationally expensive solution of a global optimization; instead, we provide a closed-form solution for each sampled extinction value along a view or shadow ray and thus achieve interactive performance.
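The abstract names a closed-form per-ray solution but does not state it, so the sketch below only illustrates the mechanism: extinction along a shadow ray is uniformly scaled down so that a guaranteed fraction of light reaches important features, while unimportant context keeps the physically correct attenuation. This scaling rule is an assumption, not the paper's optimized solution.

```python
import numpy as np

def optimized_transmittance(extinction, importance, dt, floor_at_full=0.3):
    """Transmittance along a shadow ray with importance-driven extinction.

    extinction: (N,) sampled extinction coefficients along the ray.
    importance: scalar in [0, 1] for the feature the ray illuminates.
    dt:         ray-marching step size.
    Guarantees at least `floor_at_full * importance` transmittance by
    uniformly scaling the optical depth (one possible per-ray closed form).
    """
    tau = np.sum(extinction) * dt          # optical depth of the ray
    floor = floor_at_full * importance     # guaranteed transmittance
    if floor <= 0.0 or tau <= 0.0:
        return np.exp(-tau)                # plain Beer-Lambert attenuation
    tau_max = -np.log(floor)               # depth that yields the floor
    scale = min(1.0, tau_max / tau)        # only reduce, never add, extinction
    return np.exp(-scale * tau)

# A fully important feature (importance = 1) receives at least 30% of the
# light; an unimportant one (importance = 0) keeps full exponential attenuation.
```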
6. Weber GH, Carpendale S, Ebert D, Fisher B, Hagen H, Shneiderman B, Ynnerman A. Apply or Die: On the Role and Assessment of Application Papers in Visualization. IEEE Computer Graphics and Applications 2017; 37:96-104. [PMID: 28459676] [DOI: 10.1109/mcg.2017.51]
Abstract
Application-oriented papers provide an important way to invigorate and cross-pollinate the visualization field, but the exact criteria for judging an application paper's merit remain an open question. This article builds on a panel at the 2016 IEEE Visualization Conference entitled "Application Papers: What Are They, and How Should They Be Evaluated?" that sought to gain a better understanding of prevalent views in the visualization community. It surveys current trends that favor application papers, reviews the benefits and contributions of this paper type, and discusses their assessment in the review process. It concludes with recommendations to ensure that the visualization community is more inclusive of application papers.
7. Jonsson D, Ynnerman A. Correlated Photon Mapping for Interactive Global Illumination of Time-Varying Volumetric Data. IEEE Transactions on Visualization and Computer Graphics 2017; 23:901-910. [PMID: 27514045] [DOI: 10.1109/tvcg.2016.2598430]
Abstract
We present a method for interactive global illumination of both static and time-varying volumetric data based on reducing the overhead associated with recomputing photon maps. Our method identifies photon traces that are invariant to changes of visual parameters, such as the transfer function (TF), or to data changes between time steps in a 4D volume. This lets us operate on only the variant subset of the entire photon distribution. The amount of computation required in the two stages of the photon-mapping process, namely tracing and gathering, can thus be reduced to the subset that is affected by a data or visual parameter change. We rely on two different types of information from the original data to identify the regions that have changed. A low-resolution uniform grid containing the minimum and maximum data values of the original data is derived for each time step. Similarly, for two consecutive time steps, a low-resolution grid containing the difference between the overlapping data is used. We show that this compact metadata can be combined with the transfer function to identify the regions that have changed. Each photon traverses the low-resolution grid to determine whether it can be transferred directly to the next photon-distribution state or needs to be recomputed. An efficient representation of the photon distribution is presented, leading to an order-of-magnitude performance improvement in the raycasting step. The utility of the method is demonstrated in several examples that show visual fidelity as well as performance; they show that visual quality can be retained even when the fraction of retraced photons is as low as 40%-50%.
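The invariance test can be sketched directly from this description: a photon must be retraced only if the transfer function changed for some value inside the [min, max] range of a low-resolution grid cell it traversed (or, between time steps, if the cell's data changed). The grid and transfer-function representations below are illustrative assumptions.

```python
import numpy as np

def tf_changed_in_range(tf_old, tf_new, vmin, vmax, n_bins=256):
    """True if two RGBA transfer functions differ anywhere in [vmin, vmax].

    Transfer functions are (n_bins, 4) lookup tables over the value range [0, 1].
    """
    lo = int(np.floor(vmin * (n_bins - 1)))
    hi = int(np.ceil(vmax * (n_bins - 1))) + 1
    return not np.array_equal(tf_old[lo:hi], tf_new[lo:hi])

def photons_to_retrace(photon_cells, cell_minmax, tf_old, tf_new):
    """Select the variant subset of photons after a transfer function edit.

    photon_cells: per photon, the indices of low-res grid cells it traversed.
    cell_minmax:  (C, 2) min/max data value per low-resolution grid cell.
    Returns indices of photons whose traced path is no longer valid; all
    other photons transfer directly to the next photon-distribution state.
    """
    dirty = [tf_changed_in_range(tf_old, tf_new, *cell_minmax[c])
             for c in range(len(cell_minmax))]
    return [i for i, cells in enumerate(photon_cells)
            if any(dirty[c] for c in cells)]
```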
8. Ament M, Dachsbacher C. Anisotropic Ambient Volume Shading. IEEE Transactions on Visualization and Computer Graphics 2016; 22:1015-1024. [PMID: 26529745] [DOI: 10.1109/tvcg.2015.2467963]
Abstract
We present a novel method to compute anisotropic shading for direct volume rendering to improve the perception of the orientation and shape of surface-like structures. We determine the scale-aware anisotropy of a shading point by analyzing its ambient region. We sample adjacent points with similar scalar values to perform a principal component analysis by computing the eigenvectors and eigenvalues of the covariance matrix. In particular, we estimate the tangent directions, which serve as the tangent frame for anisotropic bidirectional reflectance distribution functions. Moreover, we exploit the ratio of the eigenvalues to measure the magnitude of the anisotropy at each shading point. Altogether, this allows us to model a data-driven, smooth transition from isotropic to strongly anisotropic volume shading. In this way, the shape of volumetric features can be enhanced significantly by aligning specular highlights along the principal direction of anisotropy. Our algorithm is independent of the transfer function, which allows us to compute all shading parameters once and store them with the data set. We integrated our method in a GPU-based volume renderer, which offers interactive control of the transfer function, light source positions, and viewpoint. Our results demonstrate the benefit of anisotropic shading for visualization to achieve data-driven local illumination for improved perception compared to isotropic shading.
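The tangent-frame estimation follows almost directly from the abstract: sample nearby points with similar scalar values, form their covariance matrix, and read the tangent directions and anisotropy magnitude off its eigendecomposition. The similarity threshold and the random sampling scheme below are assumptions; this is a sketch, not the paper's exact sampling.

```python
import numpy as np

def ambient_anisotropy(volume, p, radius=4, value_tol=0.05, n_samples=256,
                       seed=0):
    """Estimate tangent directions and anisotropy magnitude at voxel p.

    volume: 3D scalar field with values in [0, 1]; p: integer voxel coords.
    Returns (tangent_u, tangent_v, anisotropy), with anisotropy in [0, 1]
    derived from the ratio of the two leading eigenvalues, or None if too
    few similar samples were found (fall back to isotropic shading).
    """
    rng = np.random.default_rng(seed)
    p = np.asarray(p)
    center_val = volume[tuple(p)]
    offsets = rng.uniform(-radius, radius, size=(n_samples, 3))
    pts = []
    for o in offsets:
        q = np.clip((p + o).astype(int), 0, np.array(volume.shape) - 1)
        if abs(volume[tuple(q)] - center_val) < value_tol:  # similar scalar value
            pts.append(q)
    if len(pts) < 4:
        return None
    pts = np.asarray(pts, dtype=np.float64)
    cov = np.cov(pts.T)                    # 3x3 covariance of the ambient region
    evals, evecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    tangent_u = evecs[:, 2]                # principal direction of anisotropy
    tangent_v = evecs[:, 1]
    anisotropy = 1.0 - evals[1] / max(evals[2], 1e-12)  # eigenvalue ratio
    return tangent_u, tangent_v, anisotropy
```

The returned frame can feed an anisotropic BRDF so specular highlights align with `tangent_u`, while `anisotropy` blends smoothly between isotropic and anisotropic shading.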
9. Ament M, Sadlo F, Dachsbacher C, Weiskopf D. Low-Pass Filtered Volumetric Shadows. IEEE Transactions on Visualization and Computer Graphics 2014; 20:2437-2446. [PMID: 26356957] [DOI: 10.1109/tvcg.2014.2346333]
Abstract
We present a novel and efficient method to compute volumetric soft shadows for interactive direct volume visualization to improve the perception of spatial depth. By direct control of the softness of volumetric shadows, disturbing visual patterns due to hard shadows can be avoided and users can adapt the illumination to their personal and application-specific requirements. We compute the shadowing of a point in the data set by employing spatial filtering of the optical depth over a finite area patch pointing toward each light source. Conceptually, the area patch spans a volumetric region that is sampled with shadow rays; afterward, the resulting optical depth values are convolved with a low-pass filter on the patch. In the numerical computation, however, to avoid expensive shadow ray marching, we show how to align and set up summed area tables for both directional and point light sources. Once computed, the summed area tables enable efficient evaluation of soft shadows for each point in constant time without shadow ray marching and the softness of the shadows can be controlled interactively. We integrated our method in a GPU-based volume renderer with ray casting from the camera, which offers interactive control of the transfer function, light source positions, and viewpoint, for both static and time-dependent data sets. Our results demonstrate the benefit of soft shadows for visualization to achieve user-controlled illumination with many-point lighting setups for improved perception combined with high rendering speed.
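The constant-time filtering step can be shown in isolation: once optical depth has been sampled onto a 2D patch facing the light source, a summed-area table yields the box-filtered mean over any rectangle in O(1). The patch construction and the shadow-ray sampling that would fill the table are omitted here; the random patch is only a stand-in.

```python
import numpy as np

def summed_area_table(a):
    """2D prefix sums with a zero border row/column for simple lookups."""
    s = np.zeros((a.shape[0] + 1, a.shape[1] + 1))
    s[1:, 1:] = a.cumsum(0).cumsum(1)
    return s

def box_filtered_optical_depth(sat, x0, y0, x1, y1):
    """Mean optical depth over the patch rectangle [x0,x1) x [y0,y1) in O(1)."""
    total = sat[y1, x1] - sat[y0, x1] - sat[y1, x0] + sat[y0, x0]
    return total / ((x1 - x0) * (y1 - y0))

# Soft shadow factor: filter optical depth over the patch, then attenuate.
# Larger rectangles give softer shadows, controlled interactively.
depth_patch = np.random.rand(64, 64)       # stand-in for sampled optical depths
sat = summed_area_table(depth_patch)
tau = box_filtered_optical_depth(sat, 16, 16, 48, 48)
soft_shadow = np.exp(-tau)                 # filtered transmittance toward light
```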
10. Ament M, Sadlo F, Weiskopf D. Ambient Volume Scattering. IEEE Transactions on Visualization and Computer Graphics 2013; 19:2936-2945. [PMID: 24051861] [DOI: 10.1109/tvcg.2013.129]
Abstract
We present ambient scattering as a preintegration method for scattering on mesoscopic scales in direct volume rendering. Far-range scattering effects usually provide negligible contributions to a given location due to the exponential attenuation with increasing distance. This motivates our approach to preintegrating multiple scattering within a finite spherical region around any given sample point. To this end, we solve the full light transport with a Monte-Carlo simulation within a set of spherical regions, where each region may have different material parameters regarding anisotropy and extinction. This precomputation is independent of the data set and the transfer function, and results in a small preintegration table. During rendering, the look-up table is accessed for each ray sample point with respect to the viewing direction, phase function, and material properties in the spherical neighborhood of the sample. Our rendering technique is efficient and versatile because it readily fits in existing ray marching algorithms and can be combined with local illumination and volumetric ambient occlusion. It provides interactive volumetric scattering and soft shadows, with interactive control of the transfer function, anisotropy parameter of the phase function, lighting conditions, and viewpoint. A GPU implementation demonstrates the benefits of ambient scattering for the visualization of different types of data sets, with respect to spatial perception, high-quality illumination, translucency, and rendering speed.
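The preintegration idea can be sketched as a small table indexed by the material parameters of the spherical region: a Monte Carlo estimate fills each entry once, and ray marching then reduces to a table lookup. The isotropic random-walk estimator below is a simplified stand-in for the full light transport simulation (no anisotropic phase function) and its parameters are illustrative.

```python
import numpy as np

def mc_sphere_scattering(extinction, albedo, radius, n_walks=2000, seed=1):
    """Monte Carlo estimate of the fraction of light scattered back out of a
    homogeneous sphere (isotropic phase function; simplified stand-in for
    the paper's full simulation with anisotropy)."""
    rng = np.random.default_rng(seed)
    escaped_energy = 0.0
    for _ in range(n_walks):
        pos = np.zeros(3)
        weight = 1.0
        for _ in range(64):                           # cap the walk length
            t = -np.log(rng.random()) / extinction    # free-flight distance
            d = rng.normal(size=3)
            d /= np.linalg.norm(d)                    # isotropic direction
            pos = pos + t * d
            if np.linalg.norm(pos) >= radius:         # left the ambient sphere
                escaped_energy += weight
                break
            weight *= albedo                          # absorption per scatter
    return escaped_energy / n_walks

# Precompute once -- independent of the data set and transfer function ...
extinctions = np.linspace(0.5, 8.0, 6)
table = np.array([mc_sphere_scattering(s, albedo=0.9, radius=1.0)
                  for s in extinctions])

# ... then each ray sample point just interpolates the table at render time.
def ambient_scattering(extinction_at_sample):
    return np.interp(extinction_at_sample, extinctions, table)
```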