1
Yang F, Wei X, Chen B, Li C, Li D, Zhang S, Lu W, Zhang L. Cardiac biophysical detailed synergetic modality rendering and visible correlation. Front Physiol 2023; 14:1086154. [PMID: 37089421] [PMCID: PMC10119415] [DOI: 10.3389/fphys.2023.1086154]
Abstract
The heart is a vital organ in the human body. Research on and treatment of the heart have made remarkable progress, and its functional mechanisms have been simulated and rendered through the construction of relevant models. Current methods for rendering cardiac functional mechanisms consider only one type of modality, so they cannot show how different modalities, such as the physical and the physiological, work together. To realistically represent the three-dimensional synergetic biological modality of the heart, this paper proposes a WebGL-based cardiac synergetic modality rendering framework that visualizes cardiac physical volume data and presents synergetic correspondence rendering of the cardiac electrophysiological modality. By constructing a biologically detailed interactive histogram, users can render local details of the heart, revealing cardiac biological detail more clearly. We also present a cardiac physical-physiological correlation visualization to explore cardiac biological association characteristics. Experimental results show that the proposed framework provides favorable cardiac biologically detailed synergetic modality rendering in terms of both effectiveness and efficiency. Compared with existing methods, the framework can facilitate the study of the internal mechanisms of the heart, help deduce the process of initiation, development, and transformation from a healthy heart to a diseased one, and thereby improve the diagnosis and treatment of cardiac disorders.
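As background for readers, the rendering core of such a framework is conventional GPU ray casting. Below is a minimal sketch of front-to-back emission-absorption compositing along one ray, written in Python for clarity rather than the paper's WebGL shaders; the toy transfer function `tf` and the sampled scalars are illustrative assumptions, not the paper's histogram-driven TF.

```python
import numpy as np

def composite_ray(scalars, tf, step=1.0):
    """Front-to-back emission-absorption compositing along one ray.

    scalars: interpolated volume samples along the ray.
    tf: callable mapping a scalar to (rgb, opacity).
    """
    color = np.zeros(3)
    alpha = 0.0
    for s in scalars:
        rgb, a = tf(s)
        a = 1.0 - (1.0 - a) ** step          # opacity correction for step size
        color += (1.0 - alpha) * a * np.asarray(rgb)
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                     # early ray termination
            break
    return color, alpha

# toy transfer function: scalar drives red and opacity
tf = lambda s: ((s, 0.2, 1.0 - s), 0.05 * s)
rgb, a = composite_ray(np.random.rand(256), tf)
```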
Affiliation(s)
- Fei Yang
- School of Mechanical, Electrical and Information Engineering, Shandong University, Weihai, China
- School of Computer Science and Technology, Shandong University, Qingdao, China
- Xiaoxi Wei
- School of Mechanical, Electrical and Information Engineering, Shandong University, Weihai, China
- Bo Chen
- School of Mechanical, Electrical and Information Engineering, Shandong University, Weihai, China
- Chenxi Li
- Pizhou Power Supply Branch of State Grid Jiangsu Electric Power Co., Ltd., Pizhou, China
- Dong Li
- School of Mechanical, Electrical and Information Engineering, Shandong University, Weihai, China
- Shugang Zhang
- College of Computer Science and Technology, Ocean University of China, Qingdao, China
- Weigang Lu
- Department of Educational Technology, Ocean University of China, Qingdao, China
- Correspondence: Weigang Lu
- Lei Zhang
- Department of Radiology, University of Pittsburgh, Pittsburgh, PA, United States
2
Rojo IB, Gross M, Günther T. Fourier Opacity Optimization for Scalable Exploration. IEEE Transactions on Visualization and Computer Graphics 2020; 26:3204-3216. [PMID: 31095484] [DOI: 10.1109/tvcg.2019.2915222]
Abstract
Over the past decades, scientific visualization has become a fundamental part of modern scientific data analysis. Across all data-intensive research fields, from structural biology to cosmology, data sizes are increasing rapidly, and dealing with this growth is one of the top research challenges of this century. For visual exploratory data analysis, interactivity, view-dependent visibility optimization, and frame coherence are indispensable. In this work, we extend the recent decoupled opacity optimization framework to enable navigation through large geometric data without occlusion of important features. By expressing the accumulation of importance and optical depth in a Fourier basis, the computation, evaluation, and rendering of optimized transparent geometry become order-independent and operate within a fixed memory bound. We study the quality of our Fourier approximation in terms of accuracy, memory requirements, and efficiency, for both the opacity computation and the order-independent compositing. We apply the method to point, line, and surface data sets from various research fields, including meteorology, health science, astrophysics, and organic chemistry.
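The key idea is concrete enough to sketch: per-fragment optical depth is projected onto a truncated Fourier basis, so fragments can arrive in any order and only the coefficients (a fixed memory bound) are stored. The sketch below is a loose reading of the abstract, not the paper's shaders; the coefficient count `K`, the fragment format, and the [0,1] depth range are assumptions.

```python
import numpy as np

K = 8  # number of Fourier coefficient pairs (the fixed memory bound)

def accumulate(fragments):
    """Project per-fragment extinction onto a truncated Fourier basis.

    fragments: iterable of (depth in [0,1], extinction weight). Order
    does not matter -- this is what makes compositing order-independent.
    """
    a = np.zeros(K + 1)   # cosine coefficients, a[0] is the DC term
    b = np.zeros(K + 1)   # sine coefficients (b[0] unused)
    for d, w in fragments:
        a[0] += 2.0 * w
        for k in range(1, K + 1):
            a[k] += 2.0 * w * np.cos(2 * np.pi * k * d)
            b[k] += 2.0 * w * np.sin(2 * np.pi * k * d)
    return a, b

def optical_depth(a, b, d):
    """Reconstruct accumulated optical depth at depth d analytically."""
    D = a[0] * d / 2.0
    for k in range(1, K + 1):
        D += (a[k] * np.sin(2 * np.pi * k * d)
              + b[k] * (1.0 - np.cos(2 * np.pi * k * d))) / (2 * np.pi * k)
    return D

# transmittance toward the camera at depth 0.5 for three unsorted fragments
a, b = accumulate([(0.7, 0.4), (0.2, 0.1), (0.5, 0.3)])
T = np.exp(-optical_depth(a, b, 0.5))
```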
3
Ma B, Entezari A. Volumetric Feature-Based Classification and Visibility Analysis for Transfer Function Design. IEEE Transactions on Visualization and Computer Graphics 2018; 24:3253-3267. [PMID: 29989987] [DOI: 10.1109/tvcg.2017.2776935]
Abstract
Transfer function (TF) design is a central topic in direct volume rendering. The TF translates data values into optical properties to reveal relevant features present in the volumetric data. We propose a semi-automatic TF design scheme that consists of two steps: First, we present a clustering process within the 1D/2D TF domain based on the proximities of the respective volumetric features in the spatial domain, providing an interactive tool that aids users in exploring clusters and identifying features of interest (FOI). Second, our method automatically generates a TF by iteratively refining the optical properties of the selected features using a novel feature visibility measurement. The proposed measurement leverages the similarities of features to enhance their visibility in DVR images. Compared with the conventional visibility measurement, the proposed feature visibility efficiently senses opacity changes and precisely evaluates the impact of selected features on the resulting visualizations. Our experiments validate the effectiveness of the approach by demonstrating the advantages of integrating feature similarity into the visibility computations, and we examine a number of datasets to establish its utility for semi-automatic TF design.
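The conventional visibility notion the abstract contrasts against is easy to state: a sample's visibility is its opacity weighted by the transmittance of everything in front of it, summed per feature. A minimal sketch of that baseline (not the paper's similarity-weighted variant) follows; the per-sample `labels` and `opacities` arrays are assumptions.

```python
import numpy as np

def feature_visibility(labels, opacities, n_features):
    """Per-feature visibility along one ray: each sample contributes its
    opacity weighted by the transmittance accumulated in front of it.

    labels: feature id per ray sample; opacities: alpha per sample.
    """
    vis = np.zeros(n_features)
    transmittance = 1.0
    for lbl, a in zip(labels, opacities):
        vis[lbl] += transmittance * a
        transmittance *= (1.0 - a)
    return vis

vis = feature_visibility(np.array([0, 1, 1, 0]),
                         np.array([0.3, 0.5, 0.2, 0.4]), n_features=2)
```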
4
Ament M, Zirr T, Dachsbacher C. Extinction-Optimized Volume Illumination. IEEE Transactions on Visualization and Computer Graphics 2017; 23:1767-1781. [PMID: 27214903] [DOI: 10.1109/tvcg.2016.2569080]
Abstract
We present a novel method to optimize the attenuation of light for the single-scattering model in direct volume rendering. A common problem of single scattering is the high dynamic range between lit and shadowed regions due to the exponential attenuation of light along a ray. Moreover, light is often attenuated too strongly between a sample point and the camera, hampering the visibility of important features. Our algorithm employs an importance function to selectively illuminate important structures and make them visible from the camera. With the importance function, more light can be transmitted to the features of interest, while contextual structures cast shadows that provide visual cues for depth perception. At the same time, more scattered light is transmitted from the sample point to the camera to improve the primary visibility of important features. We formulate a minimization problem that automatically determines the extinction along a view or shadow ray to obtain a good balance between sufficient transmittance and attenuation. In contrast to previous approaches, we do not require a computationally expensive global optimization, but instead provide a closed-form solution for each sampled extinction value along a view or shadow ray, and thus achieve interactive performance.
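The abstract does not reproduce the closed-form derivation, so the following is only a heavily simplified stand-in for the general idea: extinction along a shadow ray is down-weighted according to the importance of the feature being shaded, so important structures receive more light while contextual structures still attenuate. The trade-off parameter `lam` and the uniform extinction samples are assumptions.

```python
import numpy as np

def shadow_transmittance(extinctions, importance, lam=0.7, step=1.0):
    """Transmittance along a shadow ray toward a feature of the given
    importance, with extinction down-weighted so important features
    receive more light while context still casts shadows.
    lam trades shadow strength against feature visibility."""
    tau = step * np.sum(extinctions) * (1.0 - lam * importance)
    return np.exp(-tau)

# light reaching a highly important feature vs. an unimportant one
T_hi = shadow_transmittance(np.full(64, 0.05), importance=0.9)
T_lo = shadow_transmittance(np.full(64, 0.05), importance=0.1)
```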
5
Luo M, Duan C, Qiu J, Li W, Zhu D, Cai W. Diagnostic Value of Multidetector CT and Its Multiplanar Reformation, Volume Rendering and Virtual Bronchoscopy Postprocessing Techniques for Primary Trachea and Main Bronchus Tumors. PLoS One 2015; 10:e0137329. [PMID: 26332466] [PMCID: PMC4558050] [DOI: 10.1371/journal.pone.0137329]
Abstract
Purpose: To evaluate the diagnostic value of multidetector CT (MDCT) and its multiplanar reformation (MPR), volume rendering (VR) and virtual bronchoscopy (VB) postprocessing techniques for primary trachea and main bronchus tumors.
Methods: Detection results for 31 primary trachea and main bronchus tumors obtained with MDCT and its MPR, VR and VB postprocessing techniques were analyzed retrospectively with regard to tumor location, tumor morphology, extramural invasion, longitudinal involvement, morphology and extent of luminal stenosis, distance between main bronchus tumors and the trachea carina, and internal features of tumors. The detection results were compared with those of surgery and pathology.
Results: Detection results with MDCT and its MPR, VR and VB were consistent with those of surgery and pathology, including tumor locations (trachea, n = 19; right main bronchus, n = 6; left main bronchus, n = 6), tumor morphologies (endoluminal nodes with narrow bases, n = 2; endoluminal nodes with wide bases, n = 13; both intraluminal and extraluminal masses, n = 16), extramural invasion (broke through only the serous membrane, n = 1; 4.0-56.0 mm, n = 14; no clear border with right atelectasis, n = 1), longitudinal involvement (3.0 mm, n = 1; 5.0-68.0 mm, n = 29; whole right main bronchus wall and trachea carina, n = 1), morphologies of luminal stenoses (irregular, n = 26; circular, n = 3; eccentric, n = 1; conical, n = 1) and their extents (mild, n = 5; moderate, n = 7; severe, n = 19), distances between main bronchus tumors and the trachea carina (16.0 mm, n = 1; invaded trachea carina, n = 1; >20.0 mm, n = 10), and internal features of tumors (fairly homogeneous density with rather obvious enhancement, n = 26; homogeneous density with obvious enhancement, n = 1; homogeneous density without obvious enhancement, n = 1; inhomogeneous density with obvious enhancement, n = 1; punctate calcification with obvious enhancement, n = 1; low density without obvious enhancement, n = 1).
Conclusion: MDCT and its MPR, VR and VB images have respective advantages and disadvantages, and in combination they complement each other in accurately detecting the location, nature (benign, malignant or low-grade malignant), and quantitative characteristics (extramural invasion, longitudinal involvement, extent of luminal stenosis, distance between main bronchus tumors and the trachea carina) of primary trachea and main bronchus tumors, providing crucial information for surgical treatment. They are highly useful diagnostic methods for primary trachea and main bronchus tumors.
Affiliation(s)
- Mingyue Luo
- Department of Radiology, The Sixth Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- Chaijie Duan
- Research Center of Biomedical Engineering, Graduate School at Shenzhen, Tsinghua University, Shenzhen, Guangdong, China
- Jianping Qiu
- Department of Radiology, The Sixth Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- Wenru Li
- Department of Radiology, The Sixth Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- Dongyun Zhu
- Department of Radiology, The Sixth Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- Wenli Cai
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts, United States of America
6
Su YJ, Chuang YY. Disambiguating Stereoscopic Transparency Using a Thaumatrope Approach. IEEE Transactions on Visualization and Computer Graphics 2015; 21:959-969. [PMID: 26357258] [DOI: 10.1109/tvcg.2015.2410273]
Abstract
Volume rendering is a popular visualization technique in scientific computing and medical imaging. By assigning proper transparency, it allows us to see more information inside the volume. However, because volume rendering projects complex 3D structures into the 2D domain, the resulting visualization often suffers from ambiguity, and spatial relationships can be difficult to recognize correctly, especially when the scene is highly transparent. Stereoscopic displays do not solve the problem on their own, even though they add a dimension that seems helpful for resolving the ambiguity. This paper proposes a thaumatrope method to enhance 3D understanding with stereoscopic transparency for volume rendering. Our method first generates an additional cue with less spatial ambiguity by using a high-opacity setting. To avoid cluttering the actual content, we select only its prominent feature for display. By alternating the actual content and the selected feature quickly, the viewer perceives a single volume whose spatial interpretation is enhanced. A user study compared the proposed method with the original stereoscopic volume rendering and with a static combination of the actual content and the selected feature on a 3D display. Results show that the proposed thaumatrope approach provides better spatial understanding than the compared approaches.
7
Volume visualization based on the intensity and SUSAN transfer function spaces. Biomed Signal Process Control 2015. [DOI: 10.1016/j.bspc.2014.12.002]
8
Qin H, Ye B, He R. The voxel visibility model: an efficient framework for transfer function design. Comput Med Imaging Graph 2014; 40:138-46. [PMID: 25510474] [DOI: 10.1016/j.compmedimag.2014.11.014]
Abstract
Volume visualization is an important task in medical imaging and surgical planning. However, determining an ideal transfer function remains challenging because of the lack of measurable quality metrics for volume visualization. In this paper, we present the voxel visibility model as a quality metric: instead of designing transfer functions directly, we design a desired visibility for voxels. Transfer functions are then obtained by minimizing the distance between the desired visibility distribution and the actual visibility distribution. The voxel visibility model is a mapping function from the feature attributes of voxels to their visibility. To consider between-class and within-class information simultaneously, the model is described as a Gaussian mixture model. To highlight important features, a matching result can be obtained by changing the model parameters through a simple and effective interface. We also propose an algorithm for transfer function optimization. The effectiveness of the method is demonstrated through experimental results on several volumetric data sets.
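The described pipeline can be sketched from the abstract alone: a Gaussian-mixture desired visibility over a voxel attribute, a measured actual visibility, and an optimizer closing the gap. The sketch below substitutes a naive fixed-point update for the paper's optimization algorithm; the binned attribute domain, the ray representation, and the update rate are assumptions.

```python
import numpy as np

def desired_visibility(v, means, sigmas, weights):
    """Desired per-bin visibility as a Gaussian mixture over a voxel
    attribute v; editing the mixture parameters is the user interface."""
    v = np.asarray(v)[:, None]
    return (weights * np.exp(-0.5 * ((v - means) / sigmas) ** 2)).sum(axis=1)

def actual_visibility(alpha, rays):
    """Alpha-weighted transmittance each attribute bin actually receives,
    averaged over rays (each ray is a sequence of bin indices)."""
    vis = np.zeros_like(alpha)
    for ray in rays:
        T = 1.0
        for b in ray:
            vis[b] += T * alpha[b]
            T *= 1.0 - alpha[b]
    return vis / len(rays)

bins = np.linspace(0.0, 1.0, 64)
target = desired_visibility(bins, np.array([0.3, 0.8]),
                            np.array([0.05, 0.10]), np.array([1.0, 0.6]))
alpha = np.full(64, 0.1)
rays = [np.random.randint(0, 64, 128) for _ in range(32)]
for _ in range(50):            # fixed-point stand-in for the optimizer
    alpha = np.clip(alpha + 0.1 * (target - actual_visibility(alpha, rays)),
                    0.0, 1.0)
```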
Affiliation(s)
- Hongxing Qin
- Chongqing Key Laboratory of Computational Intelligence, Chongqing 400065, China; College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China.
- Bin Ye
- Chongqing Key Laboratory of Computational Intelligence, Chongqing 400065, China; College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Rui He
- Chongqing Key Laboratory of Computational Intelligence, Chongqing 400065, China; College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
9
Wang L, Kaufman AE. Importance-Driven Accessory Lights Design for Enhancing Local Shapes. IEEE Transactions on Visualization and Computer Graphics 2014; 20:781-794. [PMID: 26357298] [DOI: 10.1109/tvcg.2013.257]
Abstract
We introduce a semi-automatic lighting design method that deploys per-voxel accessory lights (fill and detail lights) to enhance local shapes and to increase the perceptibility and visual saliency of an object. Our approach allows the user to manually design arbitrary lights in a scene to create a desired emotional feeling. The user-designed lights are used as key lights, and our approach automatically configures per-voxel accessory lights that preserve that feeling. Per-voxel fill lights brighten the shadows and thus increase perceptibility and visual saliency; per-voxel detail lights enhance the visual cues for local shape perception. Moreover, the revealed local shapes are controlled by the user through an importance distribution, as are the perceptibility and visual saliency. Our perceptual measurement guarantees that the revealed local shapes are independent of the key lights. In addition, our method provides two control parameters, which adjust the fill and detail lights, giving the user additional flexibility in designing the expected lighting effect. The major contributions of this paper are the idea of using an importance distribution to control local shapes, the per-voxel accessory lights, and the perceptual measurement.
10
Carnecky R, Fuchs R, Mehl S, Jang Y, Peikert R. Smart transparency for illustrative visualization of complex flow surfaces. IEEE Transactions on Visualization and Computer Graphics 2013; 19:838-851. [PMID: 22802119] [DOI: 10.1109/tvcg.2012.159]
Abstract
The perception of transparency and the underlying neural mechanisms have been subject to extensive research in the cognitive sciences. However, we have yet to develop visualization techniques that optimally convey the inner structure of complex transparent shapes. In this paper, we apply findings from perception research to develop a novel illustrative rendering method that enhances surface transparency nonlocally. Rendering transparent geometry is computationally expensive because many optimizations, such as visibility culling, are not applicable, and fragments have to be sorted by depth for correct blending. To overcome these difficulties efficiently, we propose the illustration buffer. This novel data structure combines the ideas of the A-buffer and the G-buffer to store a list of all surface layers for each pixel. A set of local and nonlocal operators is then used to process these depth lists and generate the final image. Our technique is interactive on current graphics hardware and is limited only by the available graphics memory. Based on this framework, we present an efficient algorithm for nonlocal transparency enhancement that creates expressive renderings of transparent surfaces. A controlled, quantitative, double-blind user study shows that the presented approach significantly improves the understanding of complex transparent surfaces.
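In CPU terms, the illustration buffer is a per-pixel list of depth-sorted fragments that later operators can inspect. A minimal sketch follows; the fragment tuple layout is an assumption, and the nonlocal enhancement itself (which needs neighboring pixels' lists) is only indicated in a comment.

```python
from collections import defaultdict

def illustration_buffer(fragments):
    """Per-pixel lists of all surface layers, sorted front to back --
    an A-buffer-like structure built here on the CPU for clarity."""
    buf = defaultdict(list)
    for x, y, depth, rgba in fragments:
        buf[(x, y)].append((depth, rgba))
    for layers in buf.values():
        layers.sort(key=lambda f: f[0])
    return buf

def composite_pixel(layers):
    """Blend one pixel's sorted layers; a nonlocal operator would first
    rewrite each layer's alpha using the lists of neighboring pixels."""
    out, alpha = [0.0, 0.0, 0.0], 0.0
    for _, (r, g, b, a) in layers:
        for i, c in enumerate((r, g, b)):
            out[i] += (1.0 - alpha) * a * c
        alpha += (1.0 - alpha) * a
    return (*out, alpha)

buf = illustration_buffer([(0, 0, 0.7, (1, 0, 0, 0.5)),
                           (0, 0, 0.2, (0, 0, 1, 0.4))])
pixel = composite_pixel(buf[(0, 0)])
```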
Affiliation(s)
- Robert Carnecky
- Computer Science Department, ETH Zurich, 8092 Zurich, Switzerland.
11
Zheng L, Wu Y, Ma KL. Perceptually-based depth-ordering enhancement for direct volume rendering. IEEE Transactions on Visualization and Computer Graphics 2013; 19:446-459. [PMID: 22732679] [DOI: 10.1109/tvcg.2012.144]
Abstract
Visualizing complex volume data usually involves rendering selected parts of the volume semi-transparently to show inner structures or provide context. This makes it challenging for volume rendering methods to produce images with unambiguous depth-ordering perception. Existing methods use visual cues such as halos and shadows to enhance depth perception, but among other limitations they introduce redundant information and require additional overhead. This paper presents a new approach to enhancing the depth-ordering perception of volume-rendered images without additional visual cues. We set up an energy function based on quantitative perception models that measures image quality in terms of the effectiveness of depth-ordering and transparency perception as well as the faithfulness of the information revealed. Guided by this function, we use a conjugate gradient method to iteratively and judiciously enhance the results. Our method can complement existing systems for enhancing volume rendering results, and the experimental results demonstrate its usefulness and effectiveness.
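A toy version of the optimization loop can illustrate the shape of the approach: define an energy over per-layer opacities that penalizes contradicted depth orderings plus a fidelity term, then minimize it with a gradient method. The energy below is an invented stand-in, not the paper's perception-model-based function; the layer decomposition and weights are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def energy(alpha, pairs, alpha0, w_fid=0.1):
    """Toy energy: for each (i, j) pair, layer i should be perceived in
    front of layer j, so penalize cases where j's visibility dominates
    i's; a fidelity term keeps alpha near the input opacities."""
    alpha = np.clip(alpha, 0.0, 1.0)
    T = np.concatenate(([1.0], np.cumprod(1.0 - alpha)[:-1]))
    vis = T * alpha                         # visibility of each layer
    order = sum(max(0.0, vis[j] - vis[i]) for i, j in pairs)
    return order + w_fid * np.sum((alpha - alpha0) ** 2)

alpha0 = np.array([0.2, 0.5, 0.3])          # front, middle, back layers
res = minimize(energy, alpha0, args=([(0, 1), (1, 2)], alpha0),
               method='CG')                 # gradient method, as in the paper
alpha_enhanced = np.clip(res.x, 0.0, 1.0)
```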
Affiliation(s)
- Lin Zheng
- Department of Computer Science, University of California, Davis, CA 95616-8562, USA.
12
Wang L, Kaufman AE. Lighting System for Visual Perception Enhancement in Volume Rendering. IEEE Transactions on Visualization and Computer Graphics 2013; 19:67-80. [PMID: 22431550] [DOI: 10.1109/tvcg.2012.91]
Abstract
We introduce a lighting system that enhances the visual cues in a rendered image for the perception of 3D volumetric objects. We divide lighting effects into global and local effects and deploy three types of directional lights: a key light and accessory lights (fill and detail lights). The key light provides both kinds of effects and carries the visual cues for the perception of local and global shapes and of depth: cues for local shapes are conveyed by gradients, those for global shapes by shadows, and those for depth by shadows and translucent objects. Fill lights produce global effects to increase perceptibility; detail lights generate local effects to improve the cues for local shapes. Our method quantifies perception and uses an exhaustive search to set the lights, configuring the accessory lights so as to preserve the global impression conveyed by the key light, and it ensures the feeling of smooth light movement in animations. With simplification, it achieves interactive frame rates and produces results visually indistinguishable from those of the non-simplified algorithm. The major contributions of this paper are the lighting system, the perception measurement, and the lighting design algorithm with its indistinguishable simplification.
13
Ahmed N, Zheng Z, Mueller K. Human Computation in Visualization: Using Purpose Driven Games for Robust Evaluation of Visualization Algorithms. IEEE Transactions on Visualization and Computer Graphics 2012; 18:2104-2113. [PMID: 26357117] [DOI: 10.1109/tvcg.2012.234]
Abstract
Due to the inherent characteristics of the visualization process, most problems in this field have strong ties to human cognition and perception. This makes the human brain and sensory system the only truly appropriate platform for evaluating and fine-tuning a new visualization method or paradigm. However, getting humans to volunteer for these purposes has always been a significant obstacle, and this phase of the development process has traditionally formed a bottleneck that slows progress in visualization research. We propose to take advantage of the newly emerging field of Human Computation (HC) to overcome these challenges. HC promotes the idea that rather than considering humans as users of the computational system, they can be made part of a hybrid computational loop consisting of traditional computation resources and the human brain and sensory system. This approach is particularly successful where part of the computational problem is intractable for known computer algorithms but trivial for common-sense human knowledge. In this paper, we focus on HC from the perspective of solving visualization problems and outline a framework by which humans can easily be enticed to volunteer their HC resources. We introduce a purpose-driven game titled "Disguise", which serves as a prototypical example of how the evaluation of visualization algorithms can be mapped into a fun and addictive activity, allowing this task to be accomplished in an extensive yet cost-effective way. Finally, we sketch a framework that moves beyond the pure evaluation of existing visualization methods to the design of new ones.
Affiliation(s)
- N Ahmed
- Computer Science Department, Stony Brook University, Stony Brook, NY, USA.
14
Woo I, Maciejewski R, Gaither KP, Ebert DS. Feature-driven data exploration for volumetric rendering. IEEE Transactions on Visualization and Computer Graphics 2012; 18:1731-1743. [PMID: 22291153] [DOI: 10.1109/tvcg.2012.24]
Abstract
We have developed an intuitive method to semi-automatically explore volumetric data in a focus-region-guided or value-driven way, using a user-defined ray through the 3D volume and contour lines in the region of interest. After the user selects a point of interest from a 2D perspective, which defines a ray through the 3D volume, our method provides analytical tools to assist in narrowing the region of interest to a desired set of features. Feature layers are identified in a 1D scalar value profile along the ray and are used to define default rendering parameters, such as color and opacity mappings, and to locate the center of the region of interest. Contour lines are generated from the feature-layer level sets within interactively selected slices of the focus region. Finally, we utilize feature-preserving filters and demonstrate the applicability of our scheme to noisy data.
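The first analytical step is easy to picture in code: sample a scalar profile along the picked ray and cut it into feature layers at large value jumps. The sketch below uses nearest-neighbor sampling and a fixed jump threshold as simplifying assumptions; the paper's layer detection and default-parameter assignment are richer.

```python
import numpy as np

def ray_profile(volume, start, end, n=256):
    """Scalar value profile along a user-picked ray; nearest-neighbor
    sampling keeps the sketch short where the paper interpolates."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    pts = np.round(start + t * (end - start)).astype(int)
    pts = np.clip(pts, 0, np.array(volume.shape) - 1)
    return volume[pts[:, 0], pts[:, 1], pts[:, 2]]

def feature_layers(profile, min_jump=0.15):
    """Cut the 1D profile into feature layers at large value jumps; each
    layer's mean can seed default color/opacity for the focus region."""
    cuts = np.where(np.abs(np.diff(profile)) > min_jump)[0] + 1
    return [seg for seg in np.split(profile, cuts) if seg.size]

vol = np.random.rand(64, 64, 64)
layers = feature_layers(ray_profile(vol, np.zeros(3), np.full(3, 63.0)))
```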
Affiliation(s)
- Insoo Woo
- Purdue Visual Analytics Center, Purdue University, PO Box 519, 465 Northwestern Ave., West Lafayette, IN 47907, USA.
15
Chen W, Chen W, Bao H. An efficient direct volume rendering approach for dichromats. IEEE Transactions on Visualization and Computer Graphics 2011; 17:2144-2152. [PMID: 22034333] [DOI: 10.1109/tvcg.2011.164]
Abstract
Color vision deficiency (CVD) affects a high percentage of the population worldwide. When viewing a volume visualization result, persons with CVD may be unable to discriminate the classification information expressed in the image if the color transfer function or the color blending used in direct volume rendering is not appropriate. Conventional methods address this problem with advanced image recoloring techniques that enhance the rendering results frame by frame; unfortunately, problematic perceptual results may still be generated. This paper proposes an alternative solution that complements the image-recoloring scheme by reconfiguring the components of the direct volume rendering (DVR) pipeline. Our approach optimizes the mapped colors of a transfer function to approximate the CVD-friendly effect that would be generated by applying image recoloring to results rendered with the initial transfer function. The optimization has low computational complexity and needs to be performed only once for a given transfer function. To achieve detail-preserving and perceptually natural semi-transparent effects, we introduce a new color composition mode that works in the color space of dichromats. Experimental results and a pilot study demonstrate that our approach yields dichromat-friendly and consistent volume visualization in real time.
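A rough sense of the one-off TF color optimization can be given in a few lines: simulate how a dichromat would see each TF color, then perturb the colors until the simulated versions stay mutually distinguishable. Both pieces below are stand-ins: the simulation collapses the red-green axis crudely (a real pipeline would use an LMS-based simulation such as Brettel et al.), and the random search replaces the paper's actual optimizer.

```python
import numpy as np

def simulate_dichromat(rgb):
    """Crude red-green deficiency stand-in: collapse R and G to their
    mean. The exact LMS simulation matrices are not reproduced here."""
    rgb = np.asarray(rgb, dtype=float)
    m = rgb[..., :2].mean(axis=-1, keepdims=True)
    return np.concatenate([m, m, rgb[..., 2:3]], axis=-1)

def optimize_tf_colors(colors, iters=300, lr=0.05, seed=0):
    """Perturbation search that nudges transfer-function colors so their
    simulated versions stay mutually distinguishable -- the spirit of
    the one-off TF optimization, not its actual algorithm."""
    rng = np.random.default_rng(seed)
    colors = np.asarray(colors, dtype=float)

    def min_dist(c):
        sim = simulate_dichromat(c)
        d = np.linalg.norm(sim[:, None] - sim[None, :], axis=-1)
        return d[np.triu_indices(len(c), 1)].min()

    best = min_dist(colors)
    for _ in range(iters):
        trial = np.clip(colors + lr * rng.standard_normal(colors.shape), 0, 1)
        s = min_dist(trial)
        if s > best:
            colors, best = trial, s
    return colors

tf_colors = optimize_tf_colors(np.random.rand(5, 3))
```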
16
Lindemann F, Ropinski T. About the influence of illumination models on image comprehension in direct volume rendering. IEEE Transactions on Visualization and Computer Graphics 2011; 17:1922-1931. [PMID: 22034309] [DOI: 10.1109/tvcg.2011.161]
Abstract
In this paper, we present a user study investigating the influence of seven state-of-the-art volumetric illumination models on the spatial perception of volume-rendered images. Within the study, we compared gradient-based shading with half-angle slicing, directional occlusion shading, multidirectional occlusion shading, shadow volume propagation, spherical harmonic lighting, and dynamic ambient occlusion. To evaluate these models, users had to solve three tasks relying on correct depth and size perception; our motivation was to find relations between the illumination model used, user accuracy, and elapsed time. In an additional task, users subjectively judged the output of the tested models. After first reviewing the models and their features, we introduce the individual tasks and discuss their results. We discovered statistically significant differences in performance across the techniques. Based on these findings, we analyzed the models and extracted the features that are possibly relevant for improved spatial comprehension in a relational task. We believe that a combination of these distinctive features could pave the way for a novel illumination model optimized according to our findings.
Affiliation(s)
- Florian Lindemann
- Visualization and Computer Graphics Research Group, University of Münster.
17
2D Histogram based volume visualization: combining intensity and size of anatomical structures. Int J Comput Assist Radiol Surg 2010; 5:655-66. [PMID: 20512631] [DOI: 10.1007/s11548-010-0480-1]
Abstract
Purpose: Surgical planning requires 3D volume visualizations based on transfer functions (TF) that assign optical properties to volumetric image data. Two-dimensional TFs and 2D histograms may be employed to improve overall performance.
Methods: Anatomical structures were used for 2D TF definition in an algorithm that computes a new structure-size image from the original data set. The original image and structure-size data sets were used to generate a structure-size enhanced (SSE) histogram. Alternatively, the gradient magnitude could be used as the second property for 2D TF definition. Both types of 2D TFs were generated and compared using subjective evaluation of anatomic feature conspicuity.
Results: Experiments with several medical image data sets provided SSE histograms that were judged subjectively to be more intuitive and to discriminate different anatomical structures better than gradient magnitude-based 2D histograms.
Conclusions: In clinical applications, where the size of anatomical structures is more meaningful than gradient magnitude, the 2D TF can be effective for highlighting anatomical structures in 3D visualizations.
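The SSE idea translates directly into a joint histogram: bin voxels by (intensity, structure size) and let the user select boxes in that domain as a 2D TF. A minimal sketch under the assumption that a per-voxel `size` field has already been computed (the paper's structure-size algorithm is not reproduced here):

```python
import numpy as np

def sse_histogram(intensity, size, bins=64):
    """Structure-size enhanced (SSE) 2D histogram: the joint distribution
    of voxel intensity and the size of the structure each voxel belongs
    to. `size` stands in for the paper's structure-size computation."""
    H, i_edges, s_edges = np.histogram2d(intensity.ravel(), size.ravel(),
                                         bins=bins)
    return np.log1p(H), i_edges, s_edges    # log scale for display

def tf_from_box(intensity, size, i_range, s_range, opacity=0.4):
    """A 2D TF as a box selection in the SSE domain: voxels whose
    (intensity, size) falls inside the box become semi-transparent."""
    inside = ((intensity >= i_range[0]) & (intensity <= i_range[1]) &
              (size >= s_range[0]) & (size <= s_range[1]))
    return np.where(inside, opacity, 0.0)

intensity = np.random.rand(32, 32, 32)
size = np.random.rand(32, 32, 32)            # per-voxel structure size
alpha = tf_from_box(intensity, size, (0.4, 0.6), (0.2, 0.5))
```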