1. Zou J, Qin J. Real-time volume rendering for three-dimensional fetal ultrasound using volumetric photon mapping. Vis Comput Ind Biomed Art 2024; 7:25. PMID: 39453538; PMCID: PMC11511803; DOI: 10.1186/s42492-024-00177-4.
Abstract
Three-dimensional (3D) fetal ultrasound is widely used in prenatal examinations. Realistic, real-time volume rendering of ultrasound data can enhance the effectiveness of diagnosis and aid communication between obstetricians and pregnant mothers. However, this remains a challenging task because (1) ultrasound images contain a large amount of speckle noise and (2) they usually have low contrast, making it difficult to distinguish different tissues and organs; as a result, traditional local-illumination-based methods do not achieve satisfactory results. The real-time requirement makes the task even more challenging. This study presents a novel real-time volume-rendering method equipped with a global illumination model for 3D fetal ultrasound visualization. The method renders direct and indirect illumination separately by calculating single-scattering and multiple-scattering radiance, respectively. The indirect illumination effect is simulated using volumetric photon mapping. To avoid complicated storage structures and to accelerate computation, we propose calculating each photon's brightness with a novel screen-space density estimation. We further propose a high-dynamic-range approach to address the fact that the dynamic range of fetal skin exceeds that of the display device. Experiments show that, compared with conventional methods, our technique generates realistic rendering results with far more depth information.
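The abstract names screen-space density estimation but not its details; as a rough illustration, here is a minimal numpy sketch of the general idea: project photons to the image plane, splat their power into a screen buffer, and estimate per-pixel indirect radiance from the local photon density. All function names, the box kernel, and the parameters are our assumptions, not the authors' implementation.

```python
# Sketch of screen-space photon density estimation (assumed variant):
# photons are splatted to the image plane and each pixel's indirect
# radiance is estimated from photon power in a small neighborhood.
import numpy as np

def splat_photons(photon_xy, photon_power, width, height):
    """Accumulate photon power into a screen-space buffer.
    photon_xy: (N, 2) normalized screen positions in [0, 1)."""
    buf = np.zeros((height, width))
    xs = np.clip((photon_xy[:, 0] * width).astype(int), 0, width - 1)
    ys = np.clip((photon_xy[:, 1] * height).astype(int), 0, height - 1)
    np.add.at(buf, (ys, xs), photon_power)   # handles repeated pixels
    return buf

def density_estimate(buf, radius=2):
    """Box-kernel density estimate: average photon power over a
    (2r+1)^2 pixel neighborhood, a cheap stand-in for a smoothing kernel."""
    k = 2 * radius + 1
    pad = np.pad(buf, radius, mode="edge")
    out = np.zeros_like(buf)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + buf.shape[0], dx:dx + buf.shape[1]]
    return out / (k * k)

# Usage: 10k photons with random screen positions and unit power.
rng = np.random.default_rng(0)
radiance = density_estimate(splat_photons(rng.random((10_000, 2)),
                                          np.ones(10_000), 256, 256))
```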
Affiliation(s)
- Jing Zou
- Centre for Smart Health, School of Nursing, The Hong Kong Polytechnic University, Hong Kong, China.
- Jing Qin
- Centre for Smart Health, School of Nursing, The Hong Kong Polytechnic University, Hong Kong, China.
2. Iglesias-Guitian JA, Mane P, Moon B. Real-Time Denoising of Volumetric Path Tracing for Direct Volume Rendering. IEEE Trans Vis Comput Graph 2022; 28:2734-2747. PMID: 33180727; DOI: 10.1109/tvcg.2020.3037680.
Abstract
Direct volume rendering (DVR) using volumetric path tracing (VPT) is a scientific visualization technique that simulates the interaction of light with matter using physically based lighting models. Monte Carlo (MC) path tracing is often used with surface models, yet its application to volumetric models is difficult due to the complexity of integrating MC light paths in volumetric media with no, or only smooth, material boundaries. Moreover, the auxiliary geometry buffers (G-buffers) produced for volumes are typically very noisy, failing to guide image denoisers that rely on this information to preserve image details. This makes existing real-time denoisers, which take noise-free G-buffers as their input, less effective when denoising VPT images. We propose the necessary modifications to an image-based denoiser previously used for rendering surface models, and demonstrate effective denoising of VPT images. In particular, our denoiser exploits temporal coherence between frames without relying on noise-free G-buffers, which has been a common assumption of existing denoisers for surface models. Our technique preserves high-frequency details through a weighted recursive least squares scheme that handles the heterogeneous noise of volumetric models. We show for various real data sets that our method improves the visual fidelity and temporal stability of VPT during classic DVR operations such as camera movements, modifications of the light sources, and edits to the volume transfer function.
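To make the recursive least squares (RLS) idea concrete, below is a minimal per-pixel sketch with a forgetting factor. It fits only a constant per pixel; the paper's denoiser is a weighted RLS with richer features and heterogeneous-noise handling, so treat this as a simplified stand-in.

```python
# Per-pixel RLS temporal filter with forgetting factor lam (< 1 forgets
# old samples so the filter can track camera motion and TF edits).
import numpy as np

def rls_filter(frames, lam=0.9):
    """frames: (T, H, W) noisy VPT frames. Returns the filtered last frame."""
    theta = frames[0].astype(float)          # per-pixel estimate
    P = np.ones_like(theta)                  # per-pixel inverse "information"
    for y in frames[1:]:
        K = P / (lam + P)                    # RLS gain
        theta += K * (y - theta)             # innovation update
        P = (1.0 - K) * P / lam              # covariance update w/ forgetting
    return theta

# Usage on synthetic data: 32 noisy renderings of a smooth gradient.
rng = np.random.default_rng(1)
clean = np.linspace(0, 1, 64 * 64).reshape(64, 64)
noisy = clean[None] + rng.normal(0, 0.3, (32, 64, 64))
denoised = rls_filter(noisy)
```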
3. Rapp T, Peters C, Dachsbacher C. Image-based Visualization of Large Volumetric Data Using Moments. IEEE Trans Vis Comput Graph 2022; 28:2314-2325. PMID: 35442887; DOI: 10.1109/tvcg.2022.3165346.
Abstract
We present a novel image-based representation to interactively visualize large and arbitrarily structured volumetric data. This image-based representation is created from a fixed view and models the scalar densities along each viewing ray. Then, any transfer function can be applied and changed interactively to visualize the data. In detail, we transform the density in each pixel to the Fourier basis and store Fourier coefficients of a bounded signal, i.e. bounded trigonometric moments. To keep this image-based representation compact, we adaptively determine the number of moments in each pixel and present a novel coding and quantization strategy. Additionally, we perform spatial and temporal interpolation of our image representation and discuss the visualization of introduced uncertainties. Moreover, we use our representation to add single scattering illumination. Lastly, we achieve accurate results even with changes in the view configuration. We evaluate our approach on two large volume datasets and a time-dependent SPH dataset.
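A small sketch of the core representation may help: per ray, store a few trigonometric moments of the density signal, reconstruct the signal later, and only then apply any transfer function. The paper additionally bounds, adaptively selects, codes, and quantizes the moments; none of that is shown here, and the function names are ours.

```python
# Trigonometric-moment sketch: compress a per-ray density profile into a
# handful of Fourier coefficients, reconstruct, then apply a TF.
import numpy as np

def compute_moments(density, n_moments):
    """Moments c_k = mean(density * exp(-i k t)) with t in [0, pi]."""
    t = np.linspace(0.0, np.pi, density.size)
    k = np.arange(n_moments)[:, None]
    return (density[None, :] * np.exp(-1j * k * t)).mean(axis=1)

def reconstruct(moments, n_samples):
    """Truncated trigonometric reconstruction of the density profile."""
    t = np.linspace(0.0, np.pi, n_samples)
    k = np.arange(len(moments))[:, None]
    basis = np.exp(1j * k * t)
    w = np.ones(len(moments)); w[1:] = 2.0   # double k>0 terms (real signal)
    return np.real((w[:, None] * moments[:, None] * basis).sum(axis=0))

density = np.exp(-((np.linspace(0, 1, 256) - 0.4) ** 2) / 0.01)  # one "ray"
approx = reconstruct(compute_moments(density, 16), 256)
tf = lambda d: np.clip(d, 0, 1) ** 2         # any TF, applied after the fact
colors = tf(approx)
```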
4. Age estimation based on the acetabulum using global illumination rendering with computed tomography. Int J Legal Med 2021; 135:1923-1934. PMID: 33713164; DOI: 10.1007/s00414-021-02539-6.
Abstract
INTRODUCTION: The acetabulum has been reported as a reliable age estimation marker. However, analyzing its morphological changes can be challenging with computed tomography (CT) imaging. Newly introduced global illumination rendering (GIR) applied to CT can improve the visualization of fine details and thus the method's performance. This study aimed to analyze age estimation from morphological features of the acetabulum using GIR applied to CT.
METHODS: We collected 200 postmortem CT scans. The acetabular joint was first segmented, and three-dimensional (3D) reconstructions were then rendered using GIR. The resulting images were analyzed by two operators according to the three morphological criteria of the Rougé-Maillart method. Reproducibility was assessed by intraclass correlation (ICC), and age estimation by multiple linear regression.
RESULTS: The sample comprised 155 males and 45 females with a mean age of 50 ± 18.3 years. We observed high inter-observer and intra-observer reproducibility for the three variables (ICC of 75.6-90.8% and 89.3-95.8%, respectively) and for the total score (ICC of 93.5% and 95%, respectively). The three variables, as well as the total score, were significantly correlated with age group. The total score showed a prediction rate above 85% for ages under 40 and over 70 years. We identified three models, two of which were validated, with adjusted R² of 85.6% and 84.8% and standard errors of 0.688 and 0.706, respectively, with good correlation of all variables and no inter-correlation. The first validated model included the three morphological criteria scores; the second was based on the total score.
CONCLUSION: GIR applied to CT provides photorealistic images that can be useful in forensic imaging for age estimation based on morphological methods.
5. Engel D, Ropinski T. Deep Volumetric Ambient Occlusion. IEEE Trans Vis Comput Graph 2021; 27:1268-1278. PMID: 33048686; DOI: 10.1109/tvcg.2020.3030344.
Abstract
We present a novel deep learning based technique for volumetric ambient occlusion in the context of direct volume rendering. Our proposed Deep Volumetric Ambient Occlusion (DVAO) approach can predict per-voxel ambient occlusion in volumetric data sets, while considering global information provided through the transfer function. The proposed neural network only needs to be executed upon change of this global information, and thus supports real-time volume interaction. Accordingly, we demonstrate DVAO's ability to predict volumetric ambient occlusion, such that it can be applied interactively within direct volume rendering. To achieve the best possible results, we propose and analyze a variety of transfer function representations and injection strategies for deep neural networks. Based on the obtained results we also give recommendations applicable in similar volume learning scenarios. Lastly, we show that DVAO generalizes to a variety of modalities, despite being trained on computed tomography data only.
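DVAO itself is a neural network; for context, here is a minimal sketch of the classical per-voxel volumetric ambient occlusion it learns to approximate: average the opacity accumulated along short rays distributed over the sphere. Parameters and sampling choices are assumptions for illustration.

```python
# Classical volumetric ambient occlusion at one voxel: cast short rays in
# random directions and average the fraction of light each one blocks.
import numpy as np

def ambient_occlusion(opacity, voxel, n_dirs=32, n_steps=8, rng=None):
    """opacity: 3D array of TF-mapped opacities in [0,1]; voxel: (z, y, x)."""
    rng = rng or np.random.default_rng(0)
    occ = 0.0
    for _ in range(n_dirs):
        d = rng.normal(size=3)
        d /= np.linalg.norm(d)               # uniform random direction
        transmittance = 1.0
        for s in range(1, n_steps + 1):
            p = np.round(np.asarray(voxel) + s * d).astype(int)
            if np.any(p < 0) or np.any(p >= opacity.shape):
                break                        # ray left the volume
            transmittance *= 1.0 - opacity[tuple(p)]
        occ += 1.0 - transmittance           # fraction of light blocked
    return occ / n_dirs

vol = np.zeros((32, 32, 32)); vol[12:20, 12:20, 12:20] = 0.4
ao = ambient_occlusion(vol, (16, 16, 16))    # heavily occluded interior voxel
```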
7. Igouchkine O, Zhang Y, Ma KL. Multi-Material Volume Rendering with a Physically-Based Surface Reflection Model. IEEE Trans Vis Comput Graph 2018; 24:3147-3159. PMID: 29990043; DOI: 10.1109/tvcg.2017.2784830.
Abstract
Rendering techniques that increase realism in volume visualization help enhance perception of the 3D features in the volume data. While techniques focusing on high-quality global illumination have been extensively studied, few works handle the interaction of light with materials in the volume. Existing techniques for light-material interaction are limited in their ability to handle high-frequency real-world material data, and the current treatment of volume data poorly supports the correct integration of surface materials. In this paper, we introduce an alternative definition for the transfer function which supports surface-like behavior at the boundaries between volume components and volume-like behavior within. We show that this definition enables multi-material rendering with high-quality, real-world material data. We also show that this approach offers an efficient alternative to pre-integrated rendering through isosurface techniques. We introduce arbitrary spatially-varying materials to achieve better multi-material support for scanned volume data. Finally, we show that it is possible to map an arbitrary set of parameters directly to a material representation for the more intuitive creation of novel materials.
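The hybrid transfer function behaves like a surface at material boundaries and like a medium inside them. A common proxy for "boundary-ness" is the normalized gradient magnitude; the sketch below uses it to blend surface and volumetric shading. This is only a plausible stand-in for the paper's richer model, and every name and parameter here is assumed.

```python
# Blend surface-like and volume-like shading by a gradient-based weight.
import numpy as np

def shade(sample_rgb, normal, light_dir, grad_mag, grad_max):
    """Lambertian surface shading at boundaries, unshaded emission inside."""
    w_surface = np.clip(grad_mag / grad_max, 0.0, 1.0)
    n_dot_l = max(np.dot(normal, light_dir), 0.0)
    surface = sample_rgb * n_dot_l           # surface-like at boundaries
    volume = sample_rgb                      # volume-like in the interior
    return w_surface * surface + (1.0 - w_surface) * volume

rgb = np.array([0.8, 0.6, 0.5])
n = np.array([0.0, 0.0, 1.0]); L = np.array([0.577, 0.577, 0.577])
color = shade(rgb, n, L, grad_mag=0.7, grad_max=1.0)
```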
8. Magnus JG, Bruckner S. Interactive Dynamic Volume Illumination with Refraction and Caustics. IEEE Trans Vis Comput Graph 2018; 24:984-993. PMID: 28866548; DOI: 10.1109/tvcg.2017.2744438.
Abstract
In recent years, significant progress has been made in developing high-quality interactive methods for realistic volume illumination. However, refraction, despite being an important aspect of light propagation in participating media, has so far received little attention. In this paper, we present a novel approach for refractive volume illumination, including caustics, that achieves interactive frame rates. By interleaving light and viewing ray propagation, our technique avoids memory-intensive storage of illumination information and requires no precomputation. It is fully dynamic: all parameters, such as the light position and the transfer function, can be modified interactively without a performance penalty.
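The abstract does not spell out how rays bend; a standard way to march a refractive ray through a volume with spatially varying refractive index n(x) is to discretize the ray equation of geometric optics, d/ds(n dr/ds) = grad(n). The sketch below does exactly that and is only an illustration of this building block, not the paper's interleaved light/view propagation or caustics.

```python
# First-order integrator of the ray equation of geometric optics.
import numpy as np

def trace_refractive(pos, direction, index_fn, grad_fn, step=0.05, n_steps=200):
    v = index_fn(pos) * np.asarray(direction, float)  # v = n * dr/ds
    path = [np.asarray(pos, float)]
    for _ in range(n_steps):
        v = v + step * grad_fn(path[-1])              # bend toward higher n
        path.append(path[-1] + step * v / index_fn(path[-1]))
    return np.array(path)

# A spherical "lens": refractive index is higher near the origin.
index = lambda p: 1.0 + 0.3 * np.exp(-np.dot(p, p))
def grad_index(p, eps=1e-4):
    e = np.eye(3) * eps                               # central differences
    return np.array([(index(p + e[i]) - index(p - e[i])) / (2 * eps)
                     for i in range(3)])

path = trace_refractive([-2.0, 0.3, 0.0], [1.0, 0.0, 0.0], index, grad_index)
```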
9. Ament M, Zirr T, Dachsbacher C. Extinction-Optimized Volume Illumination. IEEE Trans Vis Comput Graph 2017; 23:1767-1781. PMID: 27214903; DOI: 10.1109/tvcg.2016.2569080.
Abstract
We present a novel method to optimize the attenuation of light for the single-scattering model in direct volume rendering. A common problem of single scattering is the high dynamic range between lit and shadowed regions due to the exponential attenuation of light along a ray. Moreover, light is often attenuated too strongly between a sample point and the camera, hampering the visibility of important features. Our algorithm employs an importance function to selectively illuminate important structures and make them visible from the camera. With the importance function, more light can be transmitted to the features of interest, while contextual structures cast shadows that provide visual cues for depth perception. At the same time, more scattered light is transmitted from the sample point to the camera to improve the primary visibility of important features. We formulate a minimization problem that automatically determines the extinction along a view or shadow ray to obtain a good balance between sufficient transmittance and attenuation. In contrast to previous approaches, we do not require a computationally expensive global optimization, but instead provide a closed-form solution for each sampled extinction value along a view or shadow ray, and thus achieve interactive performance.
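The abstract states that a closed-form, per-sample extinction optimization exists but not its formula; the sketch below is only a plausible stand-in to convey the trade-off: along a shadow ray, extinction is damped in proportion to the importance of the point being lit, trading shadow strength for transmittance. The damping model and its parameter are our assumptions.

```python
# Importance-modulated extinction along a shadow ray (assumed model).
import numpy as np

def transmittance(sigma, importance, ds=0.1, alpha=0.8):
    """sigma: extinction samples along a shadow ray; importance in [0,1] is
    the importance of the shaded point. alpha controls how strongly
    important points are exempted from attenuation."""
    sigma_opt = sigma * (1.0 - alpha * importance)   # damped extinction
    return np.exp(-np.sum(sigma_opt) * ds)           # Beer-Lambert law

sigma = np.full(50, 0.6)                             # dense occluder
print(transmittance(sigma, importance=0.0))          # context: deep shadow
print(transmittance(sigma, importance=1.0))          # feature: mostly lit
```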
10. Jonsson D, Ynnerman A. Correlated Photon Mapping for Interactive Global Illumination of Time-Varying Volumetric Data. IEEE Trans Vis Comput Graph 2017; 23:901-910. PMID: 27514045; DOI: 10.1109/tvcg.2016.2598430.
Abstract
We present a method for interactive global illumination of both static and time-varying volumetric data, based on reducing the overhead associated with recomputing photon maps. Our method identifies photon traces that are invariant to changes of visual parameters, such as the transfer function (TF), or to data changes between time-steps in a 4D volume. This lets us operate only on the variant subset of the entire photon distribution. The amount of computation required in the two stages of the photon mapping process, tracing and gathering, can thus be reduced to the subset affected by a data or visual parameter change. We rely on two different types of information derived from the original data to identify the regions that have changed. For each time-step, a low-resolution uniform grid containing the minimum and maximum data values of the original data is derived. Similarly, for two consecutive time-steps, a low-resolution grid containing the difference between the overlapping data is used. We show that this compact metadata can be combined with the transfer function to identify the regions that have changed. Each photon traverses the low-resolution grid to determine whether it can be directly transferred to the next photon distribution state or needs to be recomputed. An efficient representation of the photon distribution is presented, leading to an order-of-magnitude performance improvement in the raycasting step. The utility of the method is demonstrated in several examples that show visual fidelity as well as performance. The examples show that visual quality can be retained even when the fraction of retraced photons is as low as 40%-50%.
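The core validity test lends itself to a short sketch (an assumed simplification of the paper's scheme): a photon can be reused if, in every low-resolution cell it traverses, the transfer function is unchanged over that cell's [min, max] scalar range.

```python
# Photon revalidation against a min/max grid after a TF edit.
import numpy as np

def tf_changed_in_range(tf_old, tf_new, vmin, vmax, n_bins=256):
    lo = int(vmin * (n_bins - 1)); hi = int(vmax * (n_bins - 1)) + 1
    return not np.allclose(tf_old[lo:hi], tf_new[lo:hi])

def photon_valid(cells, minmax_grid, tf_old, tf_new):
    """cells: low-res grid cells the photon trace passes through.
    minmax_grid: dict cell -> (min, max) of the data in that cell."""
    for c in cells:
        vmin, vmax = minmax_grid[c]
        if tf_changed_in_range(tf_old, tf_new, vmin, vmax):
            return False                      # must be retraced
    return True                               # safe to transfer as-is

# Usage: the TF edit only touches high scalar values the photon never sees.
tf_a = np.linspace(0, 1, 256); tf_b = tf_a.copy(); tf_b[200:] = 1.0
grid = {(0, 0, 0): (0.1, 0.3), (0, 0, 1): (0.2, 0.5)}
print(photon_valid([(0, 0, 0), (0, 0, 1)], grid, tf_a, tf_b))  # True
```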
11. Lind AJ, Bruckner S. Comparing Cross-Sections and 3D Renderings for Surface Matching Tasks Using Physical Ground Truths. IEEE Trans Vis Comput Graph 2017; 23:781-790. PMID: 27875192; DOI: 10.1109/tvcg.2016.2598602.
Abstract
Within the visualization community there are well-known techniques for visualizing 3D spatial data and general assumptions about how perception affects their performance in practice. However, there is a lack of empirical research backing up the presumed performance differences among the basic techniques for general tasks. One such assumption is that 3D renderings are better for obtaining an overview, whereas cross-sectional visualizations, such as the commonly used multi-planar reformation (MPR), are better for supporting detailed analysis tasks. In the present study we investigated this common assumption by examining the difference in performance between MPR and 3D rendering for correctly identifying a known surface. We also examined whether prior experience working with image data affects performance, and whether there was any difference between interactive and static versions of the visualizations. Answering these questions is important because the results can form part of a scientific, empirical basis for deciding when to use which of the two techniques. An advantage of the present study over previous work is that several factors were taken into account when comparing the two techniques. The problem was examined in an experiment with 45 participants, using physical objects as the known surface (ground truth). Our findings showed that: (1) the 3D renderings largely outperformed the cross sections; (2) interactive visualizations were partially more effective than static ones; and (3) the high-experience group did not generally outperform the low-experience group.
12. Szafir DA, Sarikaya A, Gleicher M. Lightness Constancy in Surface Visualization. IEEE Trans Vis Comput Graph 2016; 22:2107-2121. PMID: 26584495; PMCID: PMC4982670; DOI: 10.1109/tvcg.2015.2500240.
Abstract
Color is a common channel for displaying data in surface visualization, but is affected by the shadows and shading used to convey surface depth and shape. Understanding encoded data in the context of surface structure is critical for effective analysis in a variety of domains, such as in molecular biology. In the physical world, lightness constancy allows people to accurately perceive shadowed colors; however, its effectiveness in complex synthetic environments such as surface visualizations is not well understood. We report a series of crowdsourced and laboratory studies that confirm the existence of lightness constancy effects for molecular surface visualizations using ambient occlusion. We provide empirical evidence of how common visualization design decisions can impact viewers' abilities to accurately identify encoded surface colors. These findings suggest that lightness constancy aids in understanding color encodings in surface visualization and reveal a correlation between visualization techniques that improve color interpretation in shadow and those that enhance perceptions of surface depth. These results collectively suggest that understanding constancy in practice can inform effective visualization design.
13. Englund R, Kottravel S, Ropinski T. A crowdsourcing system for integrated and reproducible evaluation in scientific visualization. In: 2016 IEEE Pacific Visualization Symposium (PacificVis); 2016. DOI: 10.1109/pacificvis.2016.7465249.
14. Ament M, Weiskopf D, Dachsbacher C. Ambient Volume Illumination. Comput Sci Eng 2016. DOI: 10.1109/mcse.2016.23.
15. Ament M, Dachsbacher C. Anisotropic Ambient Volume Shading. IEEE Trans Vis Comput Graph 2016; 22:1015-1024. PMID: 26529745; DOI: 10.1109/tvcg.2015.2467963.
Abstract
We present a novel method to compute anisotropic shading for direct volume rendering to improve the perception of the orientation and shape of surface-like structures. We determine the scale-aware anisotropy of a shading point by analyzing its ambient region. We sample adjacent points with similar scalar values to perform a principal component analysis by computing the eigenvectors and eigenvalues of the covariance matrix. In particular, we estimate the tangent directions, which serve as the tangent frame for anisotropic bidirectional reflectance distribution functions. Moreover, we exploit the ratio of the eigenvalues to measure the magnitude of the anisotropy at each shading point. Altogether, this allows us to model a data-driven, smooth transition from isotropic to strongly anisotropic volume shading. In this way, the shape of volumetric features can be enhanced significantly by aligning specular highlights along the principal direction of anisotropy. Our algorithm is independent of the transfer function, which allows us to compute all shading parameters once and store them with the data set. We integrated our method in a GPU-based volume renderer, which offers interactive control of the transfer function, light source positions, and viewpoint. Our results demonstrate the benefit of anisotropic shading for visualization to achieve data-driven local illumination for improved perception compared to isotropic shading.
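The ambient-region analysis maps directly to a few lines of numpy: gather nearby points with similar scalar value, run a principal component analysis on their positions, and derive a tangent frame plus an anisotropy magnitude from the eigenvalue ratio. The specific anisotropy measure below is our assumption; the paper defines its own.

```python
# PCA of an ambient neighborhood -> tangent frame + anisotropy magnitude.
import numpy as np

def anisotropy_frame(points):
    """points: (N, 3) positions of similar-valued neighbors around the
    shading point. Returns (tangent, bitangent, anisotropy in [0, 1])."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)     # covariance matrix
    evals, evecs = np.linalg.eigh(cov)            # ascending eigenvalues
    tangent = evecs[:, 2]                         # principal direction
    bitangent = evecs[:, 1]
    anisotropy = 1.0 - evals[1] / (evals[2] + 1e-12)  # 0 means isotropic
    return tangent, bitangent, anisotropy

# Usage: an elongated, surface-like neighborhood -> strong anisotropy in x.
rng = np.random.default_rng(2)
pts = rng.normal(size=(500, 3)) * np.array([2.0, 0.5, 0.1])
t, b, a = anisotropy_frame(pts)
```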
16. Sundén E, Kottravel S, Ropinski T. Multimodal volume illumination. Comput Graph 2015; 50:47-60. DOI: 10.1016/j.cag.2015.05.004.
17. Ament M, Sadlo F, Dachsbacher C, Weiskopf D. Low-Pass Filtered Volumetric Shadows. IEEE Trans Vis Comput Graph 2014; 20:2437-2446. PMID: 26356957; DOI: 10.1109/tvcg.2014.2346333.
Abstract
We present a novel and efficient method to compute volumetric soft shadows for interactive direct volume visualization to improve the perception of spatial depth. By direct control of the softness of volumetric shadows, disturbing visual patterns due to hard shadows can be avoided and users can adapt the illumination to their personal and application-specific requirements. We compute the shadowing of a point in the data set by employing spatial filtering of the optical depth over a finite area patch pointing toward each light source. Conceptually, the area patch spans a volumetric region that is sampled with shadow rays; afterward, the resulting optical depth values are convolved with a low-pass filter on the patch. In the numerical computation, however, to avoid expensive shadow ray marching, we show how to align and set up summed area tables for both directional and point light sources. Once computed, the summed area tables enable efficient evaluation of soft shadows for each point in constant time without shadow ray marching and the softness of the shadows can be controlled interactively. We integrated our method in a GPU-based volume renderer with ray casting from the camera, which offers interactive control of the transfer function, light source positions, and viewpoint, for both static and time-dependent data sets. Our results demonstrate the benefit of soft shadows for visualization to achieve user-controlled illumination with many-point lighting setups for improved perception combined with high rendering speed.
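The constant-time filtering rests on summed area tables (SATs), which fit in a short sketch: build a 2D SAT of optical depth on the light-facing patch, then any box-filtered average is four lookups regardless of filter size. The setup and alignment of the patch from the paper are omitted here.

```python
# Summed area table: O(1) box-filtered optical depth -> soft shadow.
import numpy as np

def summed_area_table(a):
    return a.cumsum(axis=0).cumsum(axis=1)

def box_average(sat, y0, x0, y1, x1):
    """Mean of a[y0:y1, x0:x1] in O(1) via inclusion-exclusion."""
    total = sat[y1 - 1, x1 - 1]
    if y0 > 0: total -= sat[y0 - 1, x1 - 1]
    if x0 > 0: total -= sat[y1 - 1, x0 - 1]
    if y0 > 0 and x0 > 0: total += sat[y0 - 1, x0 - 1]
    return total / ((y1 - y0) * (x1 - x0))

rng = np.random.default_rng(3)
optical_depth = rng.random((128, 128))       # depth samples on the patch
sat = summed_area_table(optical_depth)
tau = box_average(sat, 30, 30, 60, 60)       # softness = filter footprint
shadow = np.exp(-tau)                        # filtered transmittance
```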
18
|
From Individual to Population: Challenges in Medical Visualization. MATHEMATICS AND VISUALIZATION 2014. [DOI: 10.1007/978-1-4471-6497-5_23] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
19. Ament M, Sadlo F, Weiskopf D. Ambient volume scattering. IEEE Trans Vis Comput Graph 2013; 19:2936-2945. PMID: 24051861; DOI: 10.1109/tvcg.2013.129.
Abstract
We present ambient scattering as a preintegration method for scattering on mesoscopic scales in direct volume rendering. Far-range scattering effects usually provide negligible contributions to a given location due to the exponential attenuation with increasing distance. This motivates our approach to preintegrating multiple scattering within a finite spherical region around any given sample point. To this end, we solve the full light transport with a Monte-Carlo simulation within a set of spherical regions, where each region may have different material parameters regarding anisotropy and extinction. This precomputation is independent of the data set and the transfer function, and results in a small preintegration table. During rendering, the look-up table is accessed for each ray sample point with respect to the viewing direction, phase function, and material properties in the spherical neighborhood of the sample. Our rendering technique is efficient and versatile because it readily fits in existing ray marching algorithms and can be combined with local illumination and volumetric ambient occlusion. It provides interactive volumetric scattering and soft shadows, with interactive control of the transfer function, anisotropy parameter of the phase function, lighting conditions, and viewpoint. A GPU implementation demonstrates the benefits of ambient scattering for the visualization of different types of data sets, with respect to spatial perception, high-quality illumination, translucency, and rendering speed.
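A strongly simplified sketch of the preintegration idea (isotropic phase function only, and none of the paper's parameterization): Monte Carlo random walks inside a finite sphere estimate how much scattered energy escapes, tabulated over extinction so that rendering becomes a table lookup per sample.

```python
# Monte Carlo preintegration of multiple scattering in a spherical region.
import numpy as np

def escape_fraction(sigma_t, albedo=0.9, radius=1.0, n_walks=2000, rng=None):
    """Fraction of energy leaving a homogeneous sphere after multiple
    isotropic scattering events (assumed simplification of the paper)."""
    rng = rng or np.random.default_rng(4)
    escaped = 0.0
    for _ in range(n_walks):
        pos, weight = np.zeros(3), 1.0
        while True:
            d = rng.normal(size=3); d /= np.linalg.norm(d)  # isotropic
            pos = pos + d * rng.exponential(1.0 / sigma_t)  # free flight
            if np.linalg.norm(pos) > radius:                # left the region
                escaped += weight
                break
            weight *= albedo                                # absorption loss
            if weight < 1e-3:
                break                                       # negligible
    return escaped / n_walks

# Precomputed lookup table over a few extinction values.
table = {s: escape_fraction(s) for s in (0.5, 1.0, 2.0, 4.0)}
```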
20. Fabry T, Vanherpe L, Feral B, Braesch C. Developing an Interactive Intervention Planner - A Systems Engineering Perspective. Int J Adv Robot Syst 2013. DOI: 10.5772/56846.
Abstract
Intervention planning is crucial for maintenance operations in particle accelerator environments with ionizing radiation, during which the radiation dose received by maintenance workers should be reduced to a minimum. In this context, we discuss the development of a new software tool and the entailed methodology, including the visualization aspects. The software tool integrates interactive exploration of a scene depicting an accelerator facility augmented with residual radiation level simulations, with the visualization of intervention data such as the followed trajectory and maintenance tasks. Its conception allows for future inclusion of measurements performed by mobile robotic devices. In this work, we explore the systems engineering life cycle of the development process of an interactive intervention planner, which includes the needs analysis, specification explicitation, conceptual mathematical modelling, iterative implementation, design and prototype testing and usability testing.
21. Jonsson D, Kronander J, Ropinski T, Ynnerman A. Historygrams: Enabling Interactive Global Illumination in Direct Volume Rendering using Photon Mapping. IEEE Trans Vis Comput Graph 2012; 18:2364-2371. PMID: 26357144; DOI: 10.1109/tvcg.2012.232.
Abstract
In this paper, we enable interactive volumetric global illumination by extending photon mapping techniques to handle interactive transfer function (TF) and material editing in the context of volume rendering. We propose novel algorithms and data structures for finding and evaluating the parts of a scene affected by these parameter changes, and thus support efficient updates of the photon map. In direct volume rendering (DVR), the ability to explore volume data through parameter changes, such as editable TFs, is of key importance. Advanced global illumination techniques are in most cases computationally too expensive, as they prevent the desired interactivity. Our technique decreases the amount of computation caused by parameter changes by introducing Historygrams, which allow us to efficiently reuse previously computed photon-media interactions. Along the viewing rays, we utilize properties of the light transport equations to subdivide a view-ray into segments and update them independently when they become invalid. Unlike the segments of a view-ray, photon scattering events within the volumetric medium need to be updated sequentially. Using our Historygram approach, we can identify the first invalid photon interaction caused by a property change, and thus reuse all preceding valid interactions. Combining these two novel concepts supports interactive editing of parameters when using volumetric photon mapping in the context of DVR. As a consequence, we can handle arbitrarily shaped and positioned light sources, arbitrary phase functions, bidirectional reflectance distribution functions, and multiple scattering, which has previously not been possible in interactive DVR.
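The reuse rule can be sketched compactly (an assumed simplification of the Historygram data structure): if each photon records the scalar value sampled at every sequential interaction, then after a TF edit only the suffix starting at the first interaction whose TF lookup changed needs recomputing.

```python
# Find the first photon interaction invalidated by a TF edit.
import numpy as np

def first_invalid(history_scalars, tf_old, tf_new, n_bins=256):
    """Return the index of the first invalid interaction, or None if the
    whole photon path can be reused unchanged."""
    for i, s in enumerate(history_scalars):
        b = int(s * (n_bins - 1))
        if not np.isclose(tf_old[b], tf_new[b]):
            return i                 # retrace from interaction i onward
    return None                      # reuse every stored interaction

# Usage: the edit touches mid-range values, hitting the 2nd interaction.
tf_a = np.linspace(0, 1, 256); tf_b = tf_a.copy(); tf_b[64:128] *= 0.5
photon = [0.10, 0.40, 0.80]          # scalar values at its 3 interactions
print(first_invalid(photon, tf_a, tf_b))  # 1
```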
22. Exposure render: an interactive photo-realistic volume rendering framework. PLoS One 2012; 7:e38586. PMID: 22768292; PMCID: PMC3388083; DOI: 10.1371/journal.pone.0038586.
Abstract
The field of volume visualization has undergone rapid development during the past years, both due to advances in suitable computing hardware and due to the increasing availability of large volume datasets. Recent work has focused on increasing the visual realism in direct volume rendering (DVR) by integrating a number of visually plausible but often effect-specific rendering techniques, for instance modeling of light occlusion and depth of field. Besides yielding more attractive renderings, the more realistic lighting in particular has a positive effect on perceptual tasks. Although these new rendering techniques yield impressive results, they exhibit limitations in terms of their flexibility and performance. Monte Carlo ray tracing (MCRT), coupled with physically based light transport, is the de facto standard for synthesizing highly realistic images in the graphics domain, although usually not from volumetric data. Due to the stochastic sampling of MCRT algorithms, numerous effects can be achieved in a relatively straightforward fashion. For this reason, we have developed a practical framework that applies MCRT techniques to DVR. With this work, we demonstrate that a host of realistic effects, including physically based lighting, can be simulated in a generic and flexible fashion, leading to interactive DVR with improved realism. In the hope that this improved approach to DVR will see more use in practice, we have made our framework available under a permissive open-source license.
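A standard building block for MCRT in heterogeneous volumes is Woodcock (delta) tracking, which samples free-flight distances without integrating extinction along the ray; the sketch below shows that ingredient in isolation and makes no claim about the framework's exact implementation.

```python
# Woodcock (delta) tracking: sample a real-collision position in a
# heterogeneous medium bounded by the unit cube, or None if the ray escapes.
import numpy as np

def woodcock_track(pos, direction, sigma_fn, sigma_max, rng):
    while True:
        # Tentative flight distance under the majorant extinction sigma_max.
        pos = pos + direction * rng.exponential(1.0 / sigma_max)
        if np.any(pos < 0) or np.any(pos > 1):
            return None                       # escaped the volume
        if rng.random() < sigma_fn(pos) / sigma_max:
            return pos                        # real (non-null) collision

# Usage: density ramp along x, majorant equal to its maximum.
sigma = lambda p: 4.0 * p[0]
rng = np.random.default_rng(5)
hit = woodcock_track(np.array([0.0, 0.5, 0.5]), np.array([1.0, 0.0, 0.0]),
                     sigma, sigma_max=4.0, rng=rng)
```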
23. Sundén E, Ynnerman A, Ropinski T. Image plane sweep volume illumination. IEEE Trans Vis Comput Graph 2011; 17:2125-2134. PMID: 22034331; DOI: 10.1109/tvcg.2011.211.
Abstract
In recent years, many volumetric illumination models have been proposed, which have the potential to simulate advanced lighting effects and thus support improved image comprehension. Although volume ray-casting is widely accepted as the volume rendering technique which achieves the highest image quality, so far no volumetric illumination algorithm has been designed to be directly incorporated into the ray-casting process. In this paper we propose image plane sweep volume illumination (IPSVI), which allows the integration of advanced illumination effects into a GPU-based volume ray-caster by exploiting the plane sweep paradigm. Thus, we are able to reduce the problem complexity and achieve interactive frame rates, while supporting scattering as well as shadowing. Since all illumination computations are performed directly within a single rendering pass, IPSVI does not require any preprocessing nor does it need to store intermediate results within an illumination volume. It therefore has a significantly lower memory footprint than other techniques. This makes IPSVI directly applicable to large data sets. Furthermore, the integration into a GPU-based ray-caster allows for high image quality as well as improved rendering performance by exploiting early ray termination. This paper discusses the theory behind IPSVI, describes its implementation, demonstrates its visual results and provides performance measurements.
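IPSVI's image-plane sweep is more involved than can be shown briefly; as a simplified relative, here is the classic slice-sweep idea such methods build on: propagate a 2D light buffer through the volume slice by slice, attenuating as you go, so each voxel knows how much light reaches it without storing a precomputed illumination volume.

```python
# Slice-sweep light propagation: one 2D buffer, no illumination volume.
import numpy as np

def sweep_lighting(opacity):
    """opacity: (Z, Y, X) slices ordered front-to-back along the light
    direction. Returns per-voxel incident light in [0, 1]."""
    light = np.empty_like(opacity)
    buf = np.ones(opacity.shape[1:])          # unoccluded at the first slice
    for z in range(opacity.shape[0]):
        light[z] = buf                        # light arriving at this slice
        buf = buf * (1.0 - opacity[z])        # attenuate for the next slice
    return light

# Usage: an opaque block casts a volumetric shadow behind itself.
vol = np.zeros((16, 16, 16)); vol[4:8, 6:10, 6:10] = 0.5
incident = sweep_lighting(vol)
```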
Affiliation(s)
- Erik Sundén
- Scientific Visualization Group, Linköping University, Sweden.