1
Liu W, Zhang L, Huang N, Xu Z. Wide dynamic range signal detection for underwater optical wireless communication using a PMT detector. Optics Express 2023; 31:25267-25279. PMID: 37475336. DOI: 10.1364/oe.494311.
Abstract
In the underwater optical wireless communication (UOWC) scenario, a photomultiplier tube (PMT), with its higher sensitivity, lower noise, and larger receiver area, is employed as the photon detector to further extend the transmission distance. Due to the complex underwater environment, the high directionality of the light beam, and the vibration of the transceiver, the incident optical power usually spans a very wide dynamic range, and the PMT may operate in any of three regimes: pulse, transition, and waveform. Since an analytical characterization of the output electric signals across these regimes is difficult to obtain, this paper resorts to experimental measurements of the upsampled discrete samples within a training symbol duration. Among different statistical distribution fitting options, the generalized extreme value (GEV) distribution is found to show excellent performance in fitting the probability density function (PDF) of either multiple samples or the superimposition of all samples within a symbol duration. Joint sample distribution (JSD) based and superimposed sample distribution (SSD) based symbol detection methods are then proposed by adopting the GEV distribution and a log-likelihood ratio (LLR) testing criterion. The proposed methods are experimentally evaluated under different received optical powers, data rates, and sampling rates. They are shown to outperform the Poisson- and Gaussian-based maximum likelihood detection methods, which are employed for the pulse and waveform regimes, respectively. Furthermore, the effectiveness of the proposed methods in alleviating strong ambient radiation is experimentally verified.
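The GEV-plus-LLR detection described in this abstract can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the GEV parameters (location, scale, shape) would in practice be fitted to training samples for the "on" and "off" symbols, and the zero-threshold LLR test assumes equiprobable symbols.

```python
import math

def gev_pdf(x, mu, sigma, xi):
    """PDF of the generalized extreme value (GEV) distribution.
    mu: location, sigma > 0: scale, xi: shape (xi = 0 is the Gumbel case)."""
    z = (x - mu) / sigma
    if abs(xi) < 1e-12:
        t = math.exp(-z)
    else:
        arg = 1.0 + xi * z
        if arg <= 0.0:
            return 0.0  # x lies outside the distribution's support
        t = arg ** (-1.0 / xi)
    return (t ** (xi + 1.0)) * math.exp(-t) / sigma

def llr_detect(samples, params_on, params_off):
    """Decide OOK symbol 1 ('on') vs 0 ('off') by summing per-sample
    log-likelihood ratios over one symbol duration (equiprobable symbols)."""
    llr = 0.0
    for x in samples:
        p1 = max(gev_pdf(x, *params_on), 1e-300)   # floor avoids log(0)
        p0 = max(gev_pdf(x, *params_off), 1e-300)
        llr += math.log(p1 / p0)
    return 1 if llr > 0.0 else 0
```

With hypothetical fitted parameters, e.g. `params_on = (5.0, 1.0, 0.1)` and `params_off = (0.0, 1.0, 0.1)`, samples clustered near the "on" location decide 1 and samples near the "off" location decide 0.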
2
Igouchkine O, Zhang Y, Ma KL. Multi-Material Volume Rendering with a Physically-Based Surface Reflection Model. IEEE Transactions on Visualization and Computer Graphics 2018; 24:3147-3159. PMID: 29990043. DOI: 10.1109/tvcg.2017.2784830.
Abstract
Rendering techniques that increase realism in volume visualization help enhance perception of the 3D features in the volume data. While techniques focusing on high-quality global illumination have been extensively studied, few works handle the interaction of light with materials in the volume. Existing techniques for light-material interaction are limited in their ability to handle high-frequency real-world material data, and the current treatment of volume data poorly supports the correct integration of surface materials. In this paper, we introduce an alternative definition for the transfer function which supports surface-like behavior at the boundaries between volume components and volume-like behavior within. We show that this definition enables multi-material rendering with high-quality, real-world material data. We also show that this approach offers an efficient alternative to pre-integrated rendering through isosurface techniques. We introduce arbitrary spatially-varying materials to achieve better multi-material support for scanned volume data. Finally, we show that it is possible to map an arbitrary set of parameters directly to a material representation for the more intuitive creation of novel materials.
3
Usher W, Klacansky P, Federer F, Bremer PT, Knoll A, Yarch J, Angelucci A, Pascucci V. A Virtual Reality Visualization Tool for Neuron Tracing. IEEE Transactions on Visualization and Computer Graphics 2018; 24:994-1003. PMID: 28866520. PMCID: PMC5722662. DOI: 10.1109/tvcg.2017.2744079.
Abstract
Tracing neurons in large-scale microscopy data is crucial to establishing a wiring diagram of the brain, which is needed to understand how neural circuits in the brain process information and generate behavior. Automatic techniques often fail for large and complex datasets, and connectomics researchers may spend weeks or months manually tracing neurons using 2D image stacks. We present a design study of a new virtual reality (VR) system, developed in collaboration with trained neuroanatomists, to trace neurons in microscope scans of the visual cortex of primates. We hypothesize that using consumer-grade VR technology to interact with neurons directly in 3D will help neuroscientists better resolve complex cases and enable them to trace neurons faster and with less physical and mental strain. We discuss both the design process and technical challenges in developing an interactive system to navigate and manipulate terabyte-sized image volumes in VR. Using a number of different datasets, we demonstrate that, compared to widely used commercial software, consumer-grade VR presents a promising alternative for scientists.
4
Li H, Fang S, Contreras JA, West JD, Risacher SL, Wang Y, Sporns O, Saykin AJ, Goñi J, Shen L. Brain explorer for connectomic analysis. Brain Informatics 2017; 4:253-269. PMID: 28836134. PMCID: PMC5709282. DOI: 10.1007/s40708-017-0071-9.
Abstract
Visualization plays a vital role in the analysis of multimodal neuroimaging data. A major challenge in neuroimaging visualization is how to integrate structural, functional, and connectivity data to form a comprehensive visual context for data exploration, quality control, and hypothesis discovery. We develop a new integrated visualization solution for brain imaging data by combining scientific and information visualization techniques within the context of the same anatomical structure. In this paper, new surface texture techniques are developed to map non-spatial attributes onto both 3D brain surfaces and a planar volume map which is generated by the proposed volume rendering technique, spherical volume rendering. Two types of non-spatial information are represented: (1) time series data from resting-state functional MRI measuring brain activation; (2) network properties derived from structural connectivity data for different groups of subjects, which may help guide the detection of differentiation features. Through visual exploration, this integrated solution can help identify brain regions with highly correlated functional activations as well as their activation patterns. Visual detection of differentiation features can also potentially discover image-based phenotypic biomarkers for brain diseases.
Affiliation(s)
- Huang Li: Department of Radiology and Imaging Sciences, Indiana University School of Medicine, Indianapolis, IN, USA; Department of Computer and Information Science, Indiana University-Purdue University Indianapolis, Indianapolis, IN, USA
- Shiaofen Fang: Department of Radiology and Imaging Sciences, Indiana University School of Medicine, Indianapolis, IN, USA
- Joey A Contreras: Department of Radiology and Imaging Sciences, Indiana University School of Medicine, Indianapolis, IN, USA
- John D West: Department of Radiology and Imaging Sciences, Indiana University School of Medicine, Indianapolis, IN, USA
- Shannon L Risacher: Department of Radiology and Imaging Sciences, Indiana University School of Medicine, Indianapolis, IN, USA
- Yang Wang: Department of Radiology Imaging Research, Medical College of Wisconsin, Milwaukee, WI, USA
- Olaf Sporns: Department of Psychological and Brain Sciences, Indiana University Bloomington, Bloomington, IN, USA
- Andrew J Saykin: Department of Radiology and Imaging Sciences, Indiana University School of Medicine, Indianapolis, IN, USA
- Joaquín Goñi: School of Industrial Engineering, Purdue University, West Lafayette, IN, USA; Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA; Purdue Institute for Integrative Neuroscience, Purdue University, West Lafayette, IN, USA
- Li Shen: Department of Radiology and Imaging Sciences, Indiana University School of Medicine, Indianapolis, IN, USA
5
Wu K, Knoll A, Isaac BJ, Carr H, Pascucci V. Direct Multifield Volume Ray Casting of Fiber Surfaces. IEEE Transactions on Visualization and Computer Graphics 2017; 23:941-949. PMID: 27875207. DOI: 10.1109/tvcg.2016.2599040.
Abstract
Multifield data are common in visualization. However, reducing these data to comprehensible geometry is a challenging problem. Fiber surfaces, an analogy of isosurfaces for bivariate volume data, are a promising new mechanism for understanding multifield volumes. In this work, we explore direct ray casting of fiber surfaces from volume data without any explicit geometry extraction. We sample directly along rays in domain space and perform geometric tests in range space, where fibers are defined, using a signed distance field derived from the control polygons. Our method requires little preprocessing and enables real-time exploration of data, dynamic modification and pixel-exact rendering of fiber surfaces, and support for higher-order interpolation in domain space. We demonstrate this approach on several bivariate datasets, including an analysis of multifield combustion data.
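The geometric test at the heart of this approach can be illustrated with a small sketch (not the paper's code): each ray sample maps its two field values to a point in range space, and a signed distance to a control polygon tells whether the sample lies inside the fiber surface. The polygon and sample points below are hypothetical.

```python
import math

def seg_dist(p, a, b):
    """Euclidean distance from point p to line segment ab (all 2D tuples)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    L2 = dx * dx + dy * dy
    if L2 == 0.0:
        return math.hypot(px - ax, py - ay)  # degenerate segment
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / L2))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def inside(p, poly):
    """Even-odd ray-crossing point-in-polygon test."""
    x, y = p
    c = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]; x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            c = not c
    return c

def signed_dist(p, poly):
    """Negative inside the control polygon, positive outside."""
    d = min(seg_dist(p, poly[i], poly[(i + 1) % len(poly)])
            for i in range(len(poly)))
    return -d if inside(p, poly) else d
```

During ray marching, one would evaluate `signed_dist` at the range-space point `(f(x), g(x))` of each sample; a sign change between consecutive samples brackets a fiber-surface crossing that can then be refined.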
6
Yun J, Kim YK, Chun EJ, Shin YG, Lee J, Kim B. Stenosis map for volume visualization of constricted tubular structures: Application to coronary artery stenosis. Computer Methods and Programs in Biomedicine 2016; 124:76-90. PMID: 26608866. DOI: 10.1016/j.cmpb.2015.10.019.
Abstract
Although direct volume rendering (DVR) has become a commodity, effective rendering of interesting features is still a challenge. In medicine, one of the most active DVR application fields, radiologists have used DVR for the diagnosis of lesions or diseases that should be visualized distinguishably from other surrounding anatomical structures. One of the most frequent and important radiologic tasks is the detection of lesions, usually constrictions, in complex tubular structures. In this paper, we propose a 3D spatial field for the effective visualization of constricted tubular structures, called a stenosis map, which stores the degree of constriction at each voxel. Constrictions within tubular structures are quantified using newly proposed measures (i.e., a line similarity measure and a constriction measure) based on localized structure analysis, and classified with a proposed transfer function mapping the degree of constriction to color and opacity. We show the results of applying our method to the visualization of coronary artery stenoses. We present performance evaluations using twenty-eight clinical datasets, demonstrating the high accuracy and efficacy of our proposed method. The ability of our method to saliently visualize constrictions within tubular structures and to interactively adjust their visual appearance proves to be a substantial aid in radiologic practice.
Affiliation(s)
- Jihye Yun: School of Computer Science and Engineering, Seoul National University, Gwanak-ro, Gwanak-gu, Seoul 151-742, South Korea
- Yeo Koon Kim: Department of Radiology, Seoul National University Bundang Hospital, 82 Gumi-ro, 173 Beon-gil, Bundang-gu, Seongnam-si, Gyeonggi-do 463-707, South Korea
- Eun Ju Chun: Department of Radiology, Seoul National University Bundang Hospital, 82 Gumi-ro, 173 Beon-gil, Bundang-gu, Seongnam-si, Gyeonggi-do 463-707, South Korea
- Yeong-Gil Shin: School of Computer Science and Engineering, Seoul National University, Gwanak-ro, Gwanak-gu, Seoul 151-742, South Korea
- Jeongjin Lee: School of Computer Science and Engineering, Soongsil University, 369 Sangdo-Ro, Dongjak-Gu, Seoul 156-743, South Korea
- Bohyoung Kim: Department of Radiology, Seoul National University Bundang Hospital, 82 Gumi-ro, 173 Beon-gil, Bundang-gu, Seongnam-si, Gyeonggi-do 463-707, South Korea
7
Sundén E, Kottravel S, Ropinski T. Multimodal volume illumination. Computers & Graphics 2015; 50:47-60. DOI: 10.1016/j.cag.2015.05.004.
8
Zhao K, Sakamoto N, Koyamada K. Adaptive fused visualization for large-scale blood flow dataset with particle-based rendering. Journal of Visualization 2014. DOI: 10.1007/s12650-014-0260-z.
9
Eichelbaum S, Dannhauer M, Hlawitschka M, Brooks D, Knösche TR, Scheuermann G. Visualizing simulated electrical fields from electroencephalography and transcranial electric brain stimulation: a comparative evaluation. NeuroImage 2014; 101:513-530. PMID: 24821532. PMCID: PMC4172355. DOI: 10.1016/j.neuroimage.2014.04.085.
Abstract
Electrical activity of neuronal populations is a crucial aspect of brain activity. This activity is not measured directly but recorded as electrical potential changes using head-surface electrodes (electroencephalogram, EEG). Head-surface electrodes can also be deployed to inject electrical currents in order to modulate brain activity (transcranial electric stimulation techniques) for therapeutic and neuroscientific purposes. In electroencephalography and noninvasive electric brain stimulation, electrical fields mediate between electrical signal sources and regions of interest (ROI). These fields can be very complicated in structure and are influenced in a complex way by the conductivity profile of the human head. Visualization techniques play a central role in grasping the nature of these fields, because they allow for an effective conveyance of complex data and enable quick qualitative and quantitative assessments. Volume conduction effects of particular head-model parameterizations (e.g., skull thickness and layering), of brain anomalies (e.g., holes in the skull, tumors), of the location and extent of active brain areas (e.g., high concentrations of current densities), and of the regions around current-injecting electrodes can all be investigated using visualization. Here, we evaluate a number of widely used visualization techniques based on either the potential distribution or the current flow. In particular, we focus on the extractability of quantitative and qualitative information from the obtained images, their effective integration of anatomical context information, and their interactivity. We present illustrative examples from clinically and neuroscientifically relevant cases and discuss the pros and cons of the various visualization techniques.
Affiliation(s)
- Sebastian Eichelbaum: Image and Signal Processing Group, Leipzig University, Augustusplatz 10-11, 04109 Leipzig, Germany
- Moritz Dannhauer: Scientific Computing and Imaging Institute, University of Utah, 72 S. Central Campus Drive, 84112 Salt Lake City, UT, USA; Center for Integrative Biomedical Computing, University of Utah, 72 S. Central Campus Drive, 84112 Salt Lake City, UT, USA
- Mario Hlawitschka: Scientific Visualization, Leipzig University, Augustusplatz 10-11, 04109 Leipzig, Germany
- Dana Brooks: Center for Integrative Biomedical Computing, University of Utah, 72 S. Central Campus Drive, 84112 Salt Lake City, UT, USA; Department of Electrical and Computer Engineering, Northeastern University, Boston, MA, USA
- Thomas R Knösche: Human Cognitive and Brain Sciences, Max Planck Institute, Stephanstraße 1a, 04103 Leipzig, Germany
- Gerik Scheuermann: Image and Signal Processing Group, Leipzig University, Augustusplatz 10-11, 04109 Leipzig, Germany
10
Ament M, Sadlo F, Weiskopf D. Ambient volume scattering. IEEE Transactions on Visualization and Computer Graphics 2013; 19:2936-2945. PMID: 24051861. DOI: 10.1109/tvcg.2013.129.
Abstract
We present ambient scattering as a preintegration method for scattering on mesoscopic scales in direct volume rendering. Far-range scattering effects usually provide negligible contributions to a given location due to the exponential attenuation with increasing distance. This motivates our approach of preintegrating multiple scattering within a finite spherical region around any given sample point. To this end, we solve the full light transport with a Monte Carlo simulation within a set of spherical regions, where each region may have different material parameters regarding anisotropy and extinction. This precomputation is independent of the data set and the transfer function, and results in a small preintegration table. During rendering, the look-up table is accessed for each ray sample point with respect to the viewing direction, phase function, and material properties in the spherical neighborhood of the sample. Our rendering technique is efficient and versatile because it readily fits in existing ray marching algorithms and can be combined with local illumination and volumetric ambient occlusion. It provides interactive volumetric scattering and soft shadows, with interactive control of the transfer function, anisotropy parameter of the phase function, lighting conditions, and viewpoint. A GPU implementation demonstrates the benefits of ambient scattering for the visualization of different types of data sets, with respect to spatial perception, high-quality illumination, translucency, and rendering speed.
11
Liu B, Clapworthy GJ, Dong F, Prakash EC. Octree rasterization: accelerating high-quality out-of-core GPU volume rendering. IEEE Transactions on Visualization and Computer Graphics 2013; 19:1732-1745. PMID: 22778151. DOI: 10.1109/tvcg.2012.151.
Abstract
We present a novel approach for GPU-based high-quality volume rendering of large out-of-core volume data. By focusing on the locations and costs of ray traversal, we are able to significantly reduce the rendering time over traditional algorithms. We store a volume in an octree (of bricks); in addition, every brick is further split into regular macrocells. Our solutions move the branch-intensive accelerating structure traversal out of the GPU raycasting loop and introduce an efficient empty-space culling method by rasterizing the proxy geometry of a view-dependent cut of the octree nodes. This rasterization pass can capture all of the bricks that the ray penetrates in a per-pixel list. Since the per-pixel list is captured in a front-to-back order, our raycasting pass needs only to cast rays inside the tighter ray segments. As a result, we achieve two levels of empty space skipping: the brick level and the macrocell level. During evaluation and testing, this technique achieved 2 to 4 times faster rendering speed than a current state-of-the-art algorithm across a variety of data sets.
Affiliation(s)
- Baoquan Liu: Department of Computer Science and Technology, Faculty of Creative Arts, Technologies and Science, University of Bedfordshire, D108 Park Square, Luton, Bedfordshire LU1 3JU, United Kingdom
12
Lin L, Chen S, Shao Y, Gu Z. Plane-based sampling for ray casting algorithm in sequential medical images. Computational and Mathematical Methods in Medicine 2013; 2013:874517. PMID: 23424608. PMCID: PMC3566489. DOI: 10.1155/2013/874517.
Abstract
This paper proposes a plane-based sampling method to improve the traditional ray casting algorithm (RCA) for the fast reconstruction of a three-dimensional biomedical model from sequential images. In the novel method, the optical properties of all sampling points depend on the intersection points where a ray travels through an equidistant parallel plane cluster of the volume dataset. The results show that the method improves the rendering speed by more than three times compared with the conventional algorithm, while image quality is well preserved.
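The core of such a plane-based scheme is to place samples where the ray crosses the equidistant parallel plane cluster rather than at fixed ray increments. A minimal sketch under assumed geometry and naming (not the paper's implementation):

```python
import math

def plane_samples(origin, direction, normal, spacing, t_min, t_max):
    """Ray parameters t in [t_min, t_max] at which the ray
    origin + t * direction crosses the plane family
    dot(normal, x) = k * spacing for integer k."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    no = dot(normal, origin)
    nd = dot(normal, direction)
    if nd == 0.0:
        return []  # ray parallel to the plane cluster: no crossings
    s0 = no + t_min * nd
    s1 = no + t_max * nd
    k_lo = math.ceil(min(s0, s1) / spacing)   # first plane index crossed
    k_hi = math.floor(max(s0, s1) / spacing)  # last plane index crossed
    return sorted((k * spacing - no) / nd for k in range(k_lo, k_hi + 1))
```

For an axis-aligned ray along x with planes spaced 0.5 apart, this yields the crossing parameters 0.0, 0.5, 1.0, ... within the ray interval; optical properties would then be sampled at exactly these intersection points.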
Affiliation(s)
- Lili Lin: School of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023, China
- Shengyong Chen: School of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023, China
- Yan Shao: Department of Plastic and Reconstructive Surgery, Sir Run Run Shaw Hospital, Medical College, Zhejiang University, Hangzhou 310016, China
- Zichun Gu: Department of Plastic and Reconstructive Surgery, Sir Run Run Shaw Hospital, Medical College, Zhejiang University, Hangzhou 310016, China
13
Kim G, Lee J, Lee H, Seo J, Koo YM, Shin YG, Kim B. Automatic extraction of inferior alveolar nerve canal using feature-enhancing panoramic volume rendering. IEEE Transactions on Biomedical Engineering 2011; 58:253-264. PMID: 21257360. DOI: 10.1109/tbme.2010.2089053.
Abstract
Dental implant surgery, the surgical insertion of a dental implant into the jawbone as an artificial root, has become one of the most successful applications of computed tomography (CT) in dental implantology. For successful implant surgery, it is essential to identify vital anatomic structures such as the inferior alveolar nerve (IAN), which must be avoided during the surgical procedure. Because of its ambiguous structure, the IAN is difficult to extract from dental CT images; as a result, most previous studies identify the IAN canal instead. This paper presents a novel method of automatically extracting the IAN canal. The mental and mandibular foramens, which are regarded as the ends of the IAN canal in the mandible, are detected automatically using 3D panoramic volume rendering (VR) and texture analysis techniques. In the 3D panoramic VR, novel color shading and compositing methods are proposed to emphasize the foramens and isolate them from other fine structures. Subsequently, the path of the IAN canal is computed using a line-tracking algorithm. Finally, the IAN canal is extracted by expanding the region of the path using a fast marching method with a new speed function that exploits anatomical information about the canal radius. In experimental results using ten clinical datasets, the proposed method identified the IAN canal accurately, demonstrating that this approach can assist dentists substantially during dental implant surgery.
Affiliation(s)
- Gyehyun Kim: School of Computer Science and Engineering, Seoul National University, Seoul 151-742, Korea
14
Petkov K, Papadopoulos C, Zhang M, Kaufman AE, Gu X. Conformal Visualization for Partially-Immersive Platforms. Proceedings of the IEEE Virtual Reality Conference 2011:143-150. PMID: 26279083. DOI: 10.1109/vr.2011.5759453.
Abstract
Current immersive VR systems such as the CAVE provide an effective platform for the immersive exploration of large 3D data. A major limitation is that, in most cases, at least one display surface is missing due to space, access, or cost constraints. This partially-immersive visualization results in a substantial loss of visual information that may be acceptable for some applications; however, it becomes a major obstacle for critical tasks such as the analysis of medical data. We propose a conformal deformation rendering pipeline for the visualization of datasets on partially-immersive platforms. The angle-preserving conformal mapping approach is used to map the 360° 3D view volume to arbitrary display configurations. It has the desirable property of preserving shapes under distortion, which is important for identifying features, especially in medical data. The conformal mapping is used for rasterization, real-time ray tracing, and volume rendering of the datasets. Since the technique is applied during rendering, we can construct stereoscopic images from the data, which is usually not possible for image-based distortion approaches. We demonstrate the stereo conformal mapping rendering pipeline in the partially-immersive 5-wall Immersive Cabin (IC) for virtual colonoscopy and architectural review.
15
Lee B, Yun J, Seo J, Shim B, Shin YG, Kim B. Fast high-quality volume ray-casting with virtual samplings. IEEE Transactions on Visualization and Computer Graphics 2010; 16:1525-1532. PMID: 20975194. DOI: 10.1109/tvcg.2010.155.
Abstract
Volume ray-casting with a higher-order reconstruction filter and/or a higher sampling rate has been adopted in direct volume rendering frameworks to provide a smooth reconstruction of the volume scalar and/or to reduce artifacts when the combined frequency of the volume and transfer function is high. While it enables high-quality volume rendering, it cannot support interactive rendering due to its high computational cost. In this paper, we propose a fast high-quality volume ray-casting algorithm which effectively increases the sampling rate. While a ray traverses the volume, intensity values are uniformly reconstructed using a high-order convolution filter. Additional samplings, referred to as virtual samplings, are carried out within a ray segment from a cubic spline curve interpolating those uniformly reconstructed intensities. These virtual samplings are performed by evaluating the polynomial function of the cubic spline curve via simple arithmetic operations. The min-max blocks are refined accordingly for accurate empty-space skipping in the proposed method. Experimental results demonstrate that the proposed algorithm, which also exploits the fast cubic texture filtering supported by programmable GPUs, offers renderings as good as a conventional ray-casting algorithm using high-order reconstruction filtering at the same sampling rate, while delivering a 2.5x to 3.3x rendering speed-up.
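The virtual-sampling idea, fitting a cubic through uniformly reconstructed intensities and then evaluating the polynomial cheaply in between, can be sketched with a Catmull-Rom spline. This is an assumption for illustration; the abstract does not name the exact interpolating cubic.

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate the Catmull-Rom cubic through four uniform samples
    at parameter t in [0, 1]; interpolates p1 at t=0 and p2 at t=1."""
    return 0.5 * ((2.0 * p1)
                  + (-p0 + p2) * t
                  + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t * t
                  + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t * t * t)

def virtual_samples(intensities, n_virtual):
    """Insert n_virtual extra ("virtual") samples between each pair of
    uniformly reconstructed intensities by evaluating the spline
    polynomial with simple arithmetic (endpoints are clamped)."""
    out = []
    n = len(intensities)
    for i in range(n - 1):
        p0 = intensities[max(i - 1, 0)]
        p1 = intensities[i]
        p2 = intensities[i + 1]
        p3 = intensities[min(i + 2, n - 1)]
        for j in range(n_virtual + 1):
            out.append(catmull_rom(p0, p1, p2, p3, j / (n_virtual + 1)))
    out.append(intensities[-1])
    return out
```

Each original reconstruction survives unchanged (the spline interpolates its control values), and the virtual samples between them cost only a polynomial evaluation rather than another full convolution-filter reconstruction.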
16
Ament M, Weiskopf D, Carr H. Direct interval volume visualization. IEEE Transactions on Visualization and Computer Graphics 2010; 16:1505-1514. PMID: 20975192. DOI: 10.1109/tvcg.2010.145.
Abstract
We extend direct volume rendering with a unified model for generalized isosurfaces, also called interval volumes, allowing a wider spectrum of visual classification. We generalize the concept of scale-invariant opacity—typical for isosurface rendering—to semi-transparent interval volumes. Scale-invariant rendering is independent of physical space dimensions and therefore directly facilitates the analysis of data characteristics. Our model represents sharp isosurfaces as limits of interval volumes and combines them with features of direct volume rendering. Our objective is accurate rendering, guaranteeing that all isosurfaces and interval volumes are visualized in a crack-free way with correct spatial ordering. We achieve simultaneous direct and interval volume rendering by extending preintegration and explicit peak finding with data-driven splitting of ray integration and hybrid computation in physical and data domains. Our algorithm is suitable for efficient parallel processing for interactive applications as demonstrated by our CUDA implementation.
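Scale-invariant opacity is commonly realized through the standard step-length opacity correction, which keeps the accumulated opacity independent of the sampling step. A brief illustration of that textbook formula (not code from this paper):

```python
def corrected_opacity(alpha_ref, step, step_ref):
    """Opacity assigned to a ray segment of length `step`, given the
    opacity alpha_ref defined for the reference step length step_ref.
    Makes accumulated opacity independent of the sampling step size."""
    return 1.0 - (1.0 - alpha_ref) ** (step / step_ref)

def composite(alphas):
    """Front-to-back accumulation of per-segment opacities."""
    transparency = 1.0
    for a in alphas:
        transparency *= 1.0 - a
    return 1.0 - transparency
```

The invariance check: an interval with opacity 0.75 over a unit step, sampled instead with two half-steps of corrected opacity each, composites back to exactly 0.75.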
Affiliation(s)
- Marco Ament: VISUS, Universität Stuttgart, Stuttgart, Germany