1. Zhao H, Zhang ZW, Yang HW, Wei GH. Research on spatial carving method of glutenite reservoir based on opacity voxel imaging. Sci Rep 2024; 14:12667. [PMID: 38831094] [PMCID: PMC11637114] [DOI: 10.1038/s41598-024-63643-2]
Abstract
The glutenite reservoir in an exploration area in eastern China is well developed and, as an important oil and gas alternative layer, holds significant exploration potential. However, owing to its sedimentary characteristics, the reservoir exhibits strong lateral heterogeneity and large vertical thickness variations, and the accuracy of its spatial characterization is low, which hampers the reasonable and effective deployment of development wells. Seismic data contain the three-dimensional spatial characteristics of geological bodies, so designing a suitable transfer function to extract the nonlinear relationship between seismic data and reservoirs is crucial. Current transfer functions are confined to low-dimensional or high-dimensional fixed mathematical models, which cannot accurately describe the nonlinear relationship between seismic data and complex reservoirs, resulting in low spatial description accuracy. To address this, this paper first applies a probability-kernel fusion method to fuse seismic attributes such as wave impedance, effective bandwidth, and composite envelope difference, which provides a more intuitive reflection of the distribution characteristics of glutenite reservoirs. A hybrid nonlinear transfer function is then established to transform the fused attribute cube into an opacity attribute cube. Finally, an illumination model and ray casting are used to perform voxel imaging of the glutenite reservoirs, brightening the detailed characteristics of reservoir space and forming a method of 'brightening reservoirs and darkening non-reservoirs' that improves the accuracy of spatial carving of glutenite reservoirs.
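The 'brighten reservoirs, darken non-reservoirs' idea the abstract describes can be sketched as an opacity transfer function followed by front-to-back ray compositing. The attribute range treated as "reservoir" below is an assumption for illustration, not the paper's calibrated values:

```python
import numpy as np

def opacity_tf(attr, lo=0.6, hi=1.0):
    """Hypothetical opacity transfer function: fused-attribute values inside
    [lo, hi] (assumed to indicate reservoir) are made nearly opaque, the rest
    nearly transparent -- the 'brighten/darken' step."""
    return np.where((attr >= lo) & (attr <= hi), 0.9, 0.05)

def composite_ray(samples, intensities):
    """Front-to-back compositing of one ray through the opacity attribute cube."""
    alphas = opacity_tf(np.asarray(samples, float))
    acc_i, acc_a = 0.0, 0.0
    for a, i in zip(alphas, np.asarray(intensities, float)):
        acc_i += (1.0 - acc_a) * a * i   # contribution weighted by remaining transparency
        acc_a += (1.0 - acc_a) * a
        if acc_a > 0.99:                 # early ray termination
            break
    return acc_i, acc_a
```

A ray that crosses a sample in the assumed reservoir range saturates quickly, while a ray through only non-reservoir samples stays nearly transparent.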
Affiliation(s)
- Hu Zhao
- Natural Gas Geology Key Laboratory of Sichuan Province, Southwest Petroleum University, Chengdu, 610500, China.
- School of Geoscience and Technology, Southwest Petroleum University, Chengdu, 610500, China.
- Zhong-Wei Zhang
- School of Geoscience and Technology, Southwest Petroleum University, Chengdu, 610500, China.
- Hong-Wei Yang
- Geophysical Exploration Institute, Shengli Oilfield Company, SINOPEC, Dongying, 257000, China.
- Guo-Hua Wei
- Geophysical Exploration Institute, Shengli Oilfield Company, SINOPEC, Dongying, 257000, China.
2. Zhang H, Zhu L, Zhang Q, Wang Y, Song A. Online view enhancement for exploration inside medical volumetric data using virtual reality. Comput Biol Med 2023; 163:107217. [PMID: 37450968] [DOI: 10.1016/j.compbiomed.2023.107217]
Abstract
BACKGROUND AND OBJECTIVE Medical image visualization is an essential tool for conveying anatomical information. Ray-casting-based volume rendering is commonly used for generating visualizations of raw medical images. However, exposing a target area inside the skin often requires manual tuning of transfer functions or segmentation of original images, as preset parameters in volume rendering may not work well for arbitrary scanned data. This process is tedious and unnatural. To address this issue, we propose a volume visualization system that enhances the view inside the skin, enabling flexible exploration of medical volumetric data using virtual reality. METHODS In our proposed system, we design a virtual reality interface that allows users to walk inside the data. We introduce a view-dependent occlusion weakening method based on geodesic distance transform to support this interaction. By combining these methods, we develop a virtual reality system with intuitive interactions, facilitating online view enhancement for medical data exploration and annotation inside the volume. RESULTS Our rendering results demonstrate that the proposed occlusion weakening method effectively weakens obstacles while preserving the target area. Furthermore, comparative analysis with other alternative solutions highlights the advantages of our method in virtual reality. We conducted user studies to evaluate our system, including area annotation and line drawing tasks. The results showed that our method with enhanced views achieved 47.73% and 35.29% higher accuracy compared to the group with traditional volume rendering. Additionally, subjective feedback from medical experts further supported the effectiveness of the designed interactions in virtual reality. CONCLUSIONS We successfully address the occlusion problems in the exploration of medical volumetric data within a virtual reality environment. 
Our system allows for flexible integration of scanned medical volumes without requiring extensive manual preprocessing. The results of our user studies demonstrate the feasibility and effectiveness of walk-in interaction for medical data exploration.
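The view-dependent occlusion weakening described above can be caricatured as scaling down occluder opacity by a distance weight. The paper uses a geodesic distance transform; this sketch substitutes a plain precomputed distance, so treat it only as an illustration of the weighting step:

```python
import numpy as np

def weaken_occluders(alpha, dist_to_target, radius=5.0):
    """Hypothetical occlusion weakening: opacity of samples in front of the
    target area is attenuated, most strongly where the ray passes close to the
    region of interest (weight 0), untouched far away (weight 1)."""
    w = np.clip(np.asarray(dist_to_target, float) / radius, 0.0, 1.0)
    return np.asarray(alpha, float) * w
```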
Affiliation(s)
- Hongkun Zhang
- State Key Laboratory of Digital Medical Engineering, Jiangsu Key Lab of Remote Measurement and Control, School of Instrument Science and Engineering, Southeast University, Nanjing, Jiangsu, PR China.
- Lifeng Zhu
- State Key Laboratory of Digital Medical Engineering, Jiangsu Key Lab of Remote Measurement and Control, School of Instrument Science and Engineering, Southeast University, Nanjing, Jiangsu, PR China.
- Yunhai Wang
- Department of Computer Science, Shandong University, Shandong, PR China.
- Aiguo Song
- State Key Laboratory of Digital Medical Engineering, Jiangsu Key Lab of Remote Measurement and Control, School of Instrument Science and Engineering, Southeast University, Nanjing, Jiangsu, PR China.
3. Zeng Q, Zhao Y, Wang Y, Zhang J, Cao Y, Tu C, Viola I, Wang Y. Data-Driven Colormap Adjustment for Exploring Spatial Variations in Scalar Fields. IEEE Transactions on Visualization and Computer Graphics 2022; 28:4902-4917. [PMID: 34469302] [DOI: 10.1109/tvcg.2021.3109014]
Abstract
Colormapping is an effective and popular visualization technique for analyzing patterns in scalar fields. Scientists usually adjust a default colormap to reveal hidden patterns by shifting the colors in a trial-and-error process. To improve efficiency, efforts have been made to automate colormap adjustment based on data properties (e.g., statistical data values or histogram distributions). However, because such data properties have no direct correlation with spatial variations, previous methods may be insufficient to reveal the dynamic range of spatial variations hidden in the data. To address this issue, we conduct a pilot analysis with domain experts and summarize three requirements for the colormap adjustment process. Based on these requirements, we formulate colormap adjustment as an objective function, composed of a boundary term and a fidelity term, which is flexible enough to support interactive functionality. We compare our approach with alternative methods under a quantitative measure and a qualitative user study (25 participants), based on a set of data with broad distribution diversity. We further evaluate our approach via three case studies with six domain experts. Our method is not necessarily better than alternative methods at revealing patterns, but rather offers an additional color adjustment option for exploring data with a dynamic range of spatial variations.
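A minimal sketch of a two-term objective of the kind described: a boundary term pulling color control points toward data boundaries, plus a fidelity term penalising drift from the default colormap. The weight `lam` and the control-point parameterisation are assumptions, not the paper's formulation:

```python
import numpy as np

def colormap_energy(ctrl_pts, orig_pts, boundaries, lam=0.5):
    """Toy objective for colormap adjustment: lower is better.
    boundary term  -- distance from each control point to its nearest data boundary
    fidelity term  -- squared drift of control points from the default colormap."""
    ctrl = np.asarray(ctrl_pts, float)
    bnd = np.asarray(boundaries, float)
    boundary_term = sum(np.min(np.abs(bnd - p)) for p in ctrl)
    fidelity_term = np.sum((ctrl - np.asarray(orig_pts, float)) ** 2)
    return boundary_term + lam * fidelity_term
```

Minimising such an energy (e.g. by gradient descent over the control points) trades off boundary alignment against staying close to the default map.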
4. He X, Yang S, Tao Y, Dai H, Lin H. Graph convolutional network-based semi-supervised feature classification of volumes. J Vis (Tokyo) 2021. [DOI: 10.1007/s12650-021-00787-7]
5. Cheng HC, Cardone A, Jain S, Krokos E, Narayan K, Subramaniam S, Varshney A. Deep-Learning-Assisted Volume Visualization. IEEE Transactions on Visualization and Computer Graphics 2019; 25:1378-1391. [PMID: 29994182] [PMCID: PMC8369530] [DOI: 10.1109/tvcg.2018.2796085]
Abstract
Designing volume visualizations showing various structures of interest is critical to the exploratory analysis of volumetric data. The last few years have witnessed dramatic advances in the use of convolutional neural networks for identification of objects in large image collections. Whereas such machine learning methods have shown superior performance in a number of applications, their direct use in volume visualization has not yet been explored. In this paper, we present a deep-learning-assisted volume visualization to depict complex structures, which are otherwise challenging for conventional approaches. A significant challenge in designing volume visualizations based on the high-dimensional deep features lies in efficiently handling the immense amount of information that deep-learning methods provide. In this paper, we present a new technique that uses spectral methods to facilitate user interactions with high-dimensional features. We also present a new deep-learning-assisted technique for hierarchically exploring a volumetric dataset. We have validated our approach on two electron microscopy volumes and one magnetic resonance imaging dataset.
6. Lan S, Wang L, Song Y, Wang YP, Yao L, Sun K, Xia B, Xu Z. Improving Separability of Structures with Similar Attributes in 2D Transfer Function Design. IEEE Transactions on Visualization and Computer Graphics 2017; 23:1546-1560. [PMID: 26955038] [DOI: 10.1109/tvcg.2016.2537341]
Abstract
The 2D transfer function based on scalar value and gradient magnitude (SG-TF) is widely used in volume rendering. However, it suffers from the boundary-overlapping problem: different structures with similar attributes occupy the same region in SG-TF space, and their boundaries are usually connected. The SG-TF thus often fails to separate these structures (or their boundaries) and has limited ability to classify different objects in real-world 3D images. To overcome this difficulty, we propose a novel method for boundary separation that integrates spatial-connectivity computation of the boundaries and set operations on boundary voxels into the SG-TF. Specifically, the spatial positions of boundaries and their regions in SG-TF space are computed, from which boundaries can be well separated and volume rendered in different colors. The boundaries are divided into three classes, and a different boundary-separation technique is applied to each, so the complex task of separating various boundaries in 3D images is simplified by breaking it into several small separation problems. The method shows good object-classification ability in real-world 3D images while avoiding the complexity of high-dimensional transfer functions. Its effectiveness is demonstrated by experimental results visualizing the boundaries of different structures in complex real-world 3D images.
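The SG-TF domain itself is straightforward to construct: a 2D histogram over scalar value and gradient magnitude, which is where the overlapping boundary regions the abstract describes show up. A minimal sketch:

```python
import numpy as np

def sg_histogram(vol, bins=16):
    """Build the 2D scalar/gradient-magnitude (SG) space that an SG-TF is
    defined on: one axis is the voxel value, the other its gradient magnitude.
    Boundaries appear as arcs of high gradient magnitude between materials."""
    gx, gy, gz = np.gradient(vol.astype(float))
    gmag = np.sqrt(gx**2 + gy**2 + gz**2)
    hist, s_edges, g_edges = np.histogram2d(vol.ravel(), gmag.ravel(), bins=bins)
    return hist, s_edges, g_edges
```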
7.
Abstract
Dual-modality positron emission tomography and computed tomography (PET-CT) depicts pathophysiological function with PET in an anatomical context provided by CT. Three-dimensional volume rendering approaches enable visualization of a two-dimensional slice of interest (SOI) from PET combined with direct volume rendering (DVR) from CT. However, because DVR depicts the whole volume, it may occlude a region of interest, such as a tumor in the SOI. Volume clipping can eliminate this occlusion by cutting away parts of the volume, but it requires intensive user involvement in deciding the appropriate clipping depth. Currently available transfer functions can make the regions of interest visible, but this often requires complex parameter tuning and coupled preprocessing of the data to define the regions. Hence, we propose a new visualization algorithm in which an SOI from PET is augmented by volumetric contextual information from a DVR of the counterpart CT, so that the obtrusiveness of the CT in the SOI is minimized. Our approach automatically calculates an augmentation depth parameter from the occlusion information carried by the CT voxels in front of the PET SOI. The depth parameter is then used to generate an opacity weight function that controls the amount of contextual information visible in the DVR. We outline the improvements of our visualization approach over other slice-based approaches and our previous work. We present a preliminary clinical evaluation of our visualization in a series of PET-CT studies of patients with non-small cell lung cancer.
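An opacity weight function driven by an augmentation depth can be sketched as a depth ramp: CT samples nearer the viewer than the augmentation depth fade out, samples beyond it keep full DVR opacity. The linear falloff here is an assumption standing in for whatever profile the paper derives:

```python
import numpy as np

def opacity_weight(depth, aug_depth, falloff=4.0):
    """Hypothetical opacity weight for CT context in front of a PET SOI:
    1.0 at/behind the augmentation depth, ramping linearly to 0.0 over
    `falloff` units toward the viewer."""
    d = np.asarray(depth, float)
    return np.clip((d - aug_depth) / falloff + 1.0, 0.0, 1.0)
```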
8. Kawamura T, Idomura Y, Miyamura H, Takemiya H. Algebraic design of multi-dimensional transfer function using transfer function synthesizer. J Vis (Tokyo) 2016. [DOI: 10.1007/s12650-016-0387-1]
9. Yun J, Kim YK, Chun EJ, Shin YG, Lee J, Kim B. Stenosis map for volume visualization of constricted tubular structures: Application to coronary artery stenosis. Computer Methods and Programs in Biomedicine 2016; 124:76-90. [PMID: 26608866] [DOI: 10.1016/j.cmpb.2015.10.019]
Abstract
Although direct volume rendering (DVR) has become a commodity, effective rendering of interesting features remains a challenge. In medicine, one of the most active DVR application fields, radiologists use DVR to diagnose lesions or diseases that must be visualized distinguishably from surrounding anatomical structures. One of the most frequent and important radiologic tasks is the detection of lesions, usually constrictions, in complex tubular structures. In this paper, we propose a 3D spatial field for the effective visualization of constricted tubular structures, called a stenosis map, which stores the degree of constriction at each voxel. Constrictions within tubular structures are quantified using newly proposed measures (a line similarity measure and a constriction measure) based on localized structure analysis, and classified with a proposed transfer function mapping the degree of constriction to color and opacity. We show the results of applying our method to the visualization of coronary artery stenoses, and present performance evaluations on twenty-eight clinical datasets demonstrating high accuracy and efficacy. The ability of our method to saliently visualize constrictions within tubular structures and to interactively adjust their visual appearance proves to be a substantial aid in radiologic practice.
Affiliation(s)
- Jihye Yun
- School of Computer Science and Engineering, Seoul National University, Gwanak-ro, Gwanak-gu, Seoul 151-742, South Korea.
- Yeo Koon Kim
- Department of Radiology, Seoul National University Bundang Hospital, 82 Gumi-ro, 173 Beon-gil, Bundang-gu, Seongnam-si, Gyeonggi-do 463-707, South Korea.
- Eun Ju Chun
- Department of Radiology, Seoul National University Bundang Hospital, 82 Gumi-ro, 173 Beon-gil, Bundang-gu, Seongnam-si, Gyeonggi-do 463-707, South Korea.
- Yeong-Gil Shin
- School of Computer Science and Engineering, Seoul National University, Gwanak-ro, Gwanak-gu, Seoul 151-742, South Korea.
- Jeongjin Lee
- School of Computer Science and Engineering, Soongsil University, 369 Sangdo-Ro, Dongjak-Gu, Seoul 156-743, South Korea.
- Bohyoung Kim
- Department of Radiology, Seoul National University Bundang Hospital, 82 Gumi-ro, 173 Beon-gil, Bundang-gu, Seongnam-si, Gyeonggi-do 463-707, South Korea.
10. Song Y, Yang J, Zhou L, Zhu Y. Electric-field-based Transfer Functions for Volume Visualization. J Med Biol Eng 2015. [DOI: 10.1007/s40846-015-0027-6]
11. Alper Selver M. Exploring Brushlet Based 3D Textures in Transfer Function Specification for Direct Volume Rendering of Abdominal Organs. IEEE Transactions on Visualization and Computer Graphics 2015; 21:174-187. [PMID: 26357028] [DOI: 10.1109/tvcg.2014.2359462]
Abstract
Intuitive and discriminative domains for transfer function (TF) specification in direct volume rendering are an important research area for producing informative and useful 3D images. One of the emerging branches of this research is texture-based transfer functions. Although several studies in two-, three-, and four-dimensional image processing show the importance of using texture information, they generally focus on segmentation. However, TFs can also be built effectively from appropriate texture information. To accomplish this, methods should be developed to capture the wide variety of shapes, orientations, and textures of biological tissues and organs. In this study, the volumetric data (i.e., the domain of a TF) is enhanced using brushlet expansion, which represents both low- and high-frequency textured structures at different quadrants in the transform domain. Three methods (expert-based manual, atlas-based, and machine-learning-based automatic) are proposed for selecting the quadrants. Nonlinear manipulation of the complex brushlet coefficients is also applied before the tiling of selected quadrants and reconstruction of the volume. Applications to abdominal data sets acquired with CT, MR, and PET show that the proposed volume enhancement effectively improves the quality of 3D renderings produced with well-known TF specification techniques.
12. Jung Y, Kim J, Fulham M, Feng DD. Opacity-driven volume clipping for slice of interest (SOI) visualisation of multi-modality PET-CT volumes. Annu Int Conf IEEE Eng Med Biol Soc 2015; 2014:6714-7. [PMID: 25571537] [DOI: 10.1109/embc.2014.6945169]
Abstract
Multi-modality positron emission tomography and computed tomography (PET-CT) imaging depicts biological and physiological functions (from PET) within a higher-resolution anatomical reference frame (from CT). The need to efficiently assimilate the information from these co-aligned volumes simultaneously has resulted in 3D visualisation methods that depict, e.g., a slice of interest (SOI) from PET combined with direct volume rendering (DVR) of CT. However, because DVR renders the whole volume, regions of interest (ROIs) such as tumours embedded within the volume may be occluded from view. Volume clipping is typically used to remove occluding structures by 'cutting away' parts of the volume; this involves tedious trial-and-error tweaking of clipping attempts until a satisfactory visualisation is achieved, which restricts its application. Hence, we propose a new automated opacity-driven volume clipping method for PET-CT DVR-SOI visualisation. Our method dynamically calculates the volume clipping depth by considering the opacity information of the CT voxels in front of the PET SOI, thereby ensuring that only the relevant anatomical information from the CT is visualised without impairing the visibility of the PET SOI. We outline the improvements of our method over conventional 2D and traditional DVR-SOI visualisations.
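The automatic clipping-depth computation described above amounts to walking the CT samples in front of the PET SOI front-to-back and clipping everything up to the depth where they would start to occlude the SOI. This sketch assumes a simple accumulated-opacity threshold rather than the paper's exact criterion:

```python
def clipping_depth(alphas, threshold=0.9):
    """Opacity-driven clipping sketch: given the per-sample opacities of the CT
    voxels in front of the SOI (front-to-back), return the first sample index
    at which the accumulated opacity reaches `threshold`; everything before it
    would be clipped away."""
    acc = 0.0
    for i, a in enumerate(alphas):
        acc += (1.0 - acc) * a
        if acc >= threshold:
            return i
    return len(alphas)  # nothing occludes enough: no clipping needed
```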
13. Intuitive transfer function design for photographic volumes. J Vis (Tokyo) 2014. [DOI: 10.1007/s12650-014-0267-5]
14. Nakao M, Takemoto S, Sugiura T, Sawada K, Kawakami R, Nemoto T, Matsuda T. Interactive visual exploration of overlapping similar structures for three-dimensional microscope images. BMC Bioinformatics 2014; 15:415. [PMID: 25523409] [PMCID: PMC4279998] [DOI: 10.1186/s12859-014-0415-x]
Abstract
Background Recent advances in microscopy enable the acquisition of large numbers of tomographic images from living tissues. Three-dimensional microscope images are often displayed with volume rendering by adjusting the transfer functions. However, because the emissions from fluorescent materials and the optical properties based on point spread functions affect the imaging results, the intensity value can differ locally, even within the same structure. Further, images obtained from brain tissues contain a variety of neural structures, such as dendrites and axons, with complex crossings and overlapping linear structures. In these cases, previously used transfer functions fail to optimize image generation, making it difficult to explore the connectivity of these tissues. Results This paper proposes an interactive visual exploration method in which the transfer functions are modified locally and interactively based on multidimensional features in the images. A direct editing interface is also provided to specify both the target region and structures with characteristic features, where all manual operations can be performed on the rendered image. The method is demonstrated on two-photon microscope images acquired from living mice and is shown to be effective for interactive visual exploration of overlapping similar structures. Conclusions An interactive visualization method was introduced for locally improving volume-rendered visualizations of two-photon microscope images containing regions in which linear nerve structures crisscross in a complex manner. The proposed method is characterized by a localized multidimensional transfer function and an interface whose parameters can be determined by the user to suit their particular visualization requirements.
Affiliation(s)
- Megumi Nakao
- Graduate School of Informatics, Kyoto University, Yoshida Honmachi, Sakyo, Kyoto, Japan.
- Shintaro Takemoto
- Graduate School of Informatics, Kyoto University, Yoshida Honmachi, Sakyo, Kyoto, Japan.
- Tadao Sugiura
- Graduate School of Information Science, Nara Institute of Science and Technology, 8916-5, Takayama, Ikoma, Nara, Japan.
- Kazuaki Sawada
- Graduate School of Information Science and Technology, Hokkaido University, Sapporo, Hokkaido, Japan.
- Ryosuke Kawakami
- Graduate School of Information Science and Technology, Hokkaido University, Sapporo, Hokkaido, Japan; Research Institute for Electronic Science, Hokkaido University, Sapporo, Japan.
- Tomomi Nemoto
- Graduate School of Information Science and Technology, Hokkaido University, Sapporo, Hokkaido, Japan; Research Institute for Electronic Science, Hokkaido University, Sapporo, Japan.
- Tetsuya Matsuda
- Graduate School of Informatics, Kyoto University, Yoshida Honmachi, Sakyo, Kyoto, Japan.
15. Qin H, Ye B, He R. The voxel visibility model: an efficient framework for transfer function design. Comput Med Imaging Graph 2014; 40:138-46. [PMID: 25510474] [DOI: 10.1016/j.compmedimag.2014.11.014]
Abstract
Volume visualization is very important in medical imaging and surgical planning. However, determining an ideal transfer function is still a challenging task because of the lack of measurable quality metrics for volume visualization. In this paper, we present the voxel visibility model as a quality metric, designing the desired visibility for voxels instead of designing transfer functions directly. Transfer functions are obtained by minimizing the distance between the desired visibility distribution and the actual visibility distribution. The voxel visibility model is a mapping function from the feature attributes of voxels to their visibility. To consider between-class and within-class information simultaneously, the model is described as a Gaussian mixture model. To highlight important features, a matched result can be obtained by changing the model's parameters through a simple and effective interface. We also propose an algorithm for transfer function optimization. The effectiveness of the method is demonstrated through experimental results on several volumetric data sets.
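The per-sample visibility that such a model prescribes a desired distribution for is a standard quantity: a sample's own opacity times the transparency of everything in front of it along the ray. A sketch:

```python
def visibilities(alphas):
    """Visibility of each sample along one ray: opacity times the accumulated
    transparency of all samples in front of it. These values (plus the light
    that exits the volume) always sum to 1."""
    vis, trans = [], 1.0
    for a in alphas:
        vis.append(trans * a)
        trans *= (1.0 - a)
    return vis
```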
Affiliation(s)
- Hongxing Qin
- Chongqing Key Laboratory of Computational Intelligence, Chongqing 400065, China; College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China.
- Bin Ye
- Chongqing Key Laboratory of Computational Intelligence, Chongqing 400065, China; College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China.
- Rui He
- Chongqing Key Laboratory of Computational Intelligence, Chongqing 400065, China; College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China.
16. Nakao M, Kurebayashi K, Sugiura T, Sato T, Sawada K, Kawakami R, Nemoto T, Minato K, Matsuda T. Visualizing in vivo brain neural structures using volume rendered feature spaces. Comput Biol Med 2014; 53:85-93. [PMID: 25129020] [DOI: 10.1016/j.compbiomed.2014.07.007]
Abstract
BACKGROUND Dendrites of cortical neurons are widely spread across several layers of the cortex. Recently developed two-photon microscopy systems can visualize the morphology of neurons within deeper layers of the brain and generate large amounts of volumetric imaging data from living tissue. METHOD For visual exploration of the three-dimensional (3D) structure of dendrites and the connectivity among neurons in the brain, we propose visualization software and an interface for 3D images based on a new transfer function design using volume rendered feature spaces. The software enables the visualization of multidimensional descriptors of shape and texture extracted from imaging data to characterize tissue, and allows the efficient analysis and visualization of large data sets. RESULTS We demonstrate the software on two-photon microscopy images of a living mouse brain. Applying the developed visualization software and algorithms, we identified a set of feature values that distinguish characteristic structures such as somata, dendrites, and apical dendrites in the mouse brain. The visualization interface was also compared with a conventional 1D/2D transfer function system. CONCLUSIONS We have developed a visualization tool and interface that can represent 3D feature values as textures and shapes. This system allows the analysis and characterization of higher-dimensional feature values of living tissues at the micron level and will contribute to new discoveries in basic biology and clinical medicine.
Affiliation(s)
- Megumi Nakao
- Graduate School of Informatics, Kyoto University, Yoshida Honmachi, Sakyo, Kyoto, Japan.
- Kosuke Kurebayashi
- Graduate School of Information Science, Nara Institute of Science and Technology, 8916-5, Takayama, Ikoma, Nara, Japan.
- Tadao Sugiura
- Graduate School of Information Science, Nara Institute of Science and Technology, 8916-5, Takayama, Ikoma, Nara, Japan.
- Tetsuo Sato
- Graduate School of Information Science, Nara Institute of Science and Technology, 8916-5, Takayama, Ikoma, Nara, Japan.
- Kazuaki Sawada
- Graduate School of Information Science and Technology, Hokkaido University, Sapporo, Hokkaido, Japan.
- Ryosuke Kawakami
- Graduate School of Information Science and Technology, Hokkaido University, Sapporo, Hokkaido, Japan; Research Institute for Electronic Science, Hokkaido University, Sapporo, Japan.
- Tomomi Nemoto
- Graduate School of Information Science and Technology, Hokkaido University, Sapporo, Hokkaido, Japan; Research Institute for Electronic Science, Hokkaido University, Sapporo, Japan.
- Kotaro Minato
- Graduate School of Information Science, Nara Institute of Science and Technology, 8916-5, Takayama, Ikoma, Nara, Japan.
- Tetsuya Matsuda
- Graduate School of Informatics, Kyoto University, Yoshida Honmachi, Sakyo, Kyoto, Japan.
17. Yu L, Lu A, Chen W. Visualization and analysis of 3D time-varying simulations with time lines. Journal of Visual Languages and Computing 2013. [DOI: 10.1016/j.jvlc.2013.07.004]
18. Kerwin T, Stredney D, Wiet G, Shen HW. Virtual mastoidectomy performance evaluation through multi-volume analysis. Int J Comput Assist Radiol Surg 2012; 8:51-61. [PMID: 22528058] [DOI: 10.1007/s11548-012-0687-4]
Abstract
PURPOSE Development of a visualization system that provides surgical instructors with a method to compare the results of many virtual surgeries (n > 100). METHODS A masked distance field models the overlap between expert and resident results. Multiple volume displays are used side-by-side with a 2D point display. RESULTS Performance characteristics were examined by comparing the results of specific residents with those of experts and the entire class. CONCLUSIONS The software provides a promising approach for comparing performance between large groups of residents learning mastoidectomy techniques.
19
Jung Y, Kim J, Feng DD. Dual-modal visibility metrics for interactive PET-CT visualization. Annual International Conference of the IEEE Engineering in Medicine and Biology Society 2012; 2012:2696-2699. [PMID: 23366481 DOI: 10.1109/embc.2012.6346520]
Abstract
Dual-modal positron emission tomography and computed tomography (PET-CT) imaging enables the visualization of functional structures (PET) within human bodies in the spatial context of their anatomical (CT) counterparts, and is providing unprecedented capabilities in understanding diseases. However, the need to access and assimilate the two volumes simultaneously has raised new visualization challenges. In typical dual-modal visualization, the transfer functions for the two volumes are designed in isolation and the resulting volumes are then fused. Unfortunately, such a design fails to exploit the correlation that exists between the two volumes. In this study, we propose a dual-modal visualization method in which we employ 'visibility' metrics to provide interactive visual feedback regarding the occlusion caused by the first volume on the second volume and vice versa. We further introduce a region of interest (ROI) function that allows visibility analyses to be restricted to a subsection of the volume. We demonstrate the new visualization enabled by our proposed dual-modal visibility metrics using clinical whole-body PET-CT studies of various diseases.
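The occlusion bookkeeping this abstract describes can be sketched numerically: along a viewing ray, a sample's visibility is its opacity weighted by the transmittance of everything in front of it. The NumPy sketch below is a minimal illustration, not the paper's exact formulation; the shared sampling positions and the per-modality attribution of fused opacity are assumptions.

```python
import numpy as np

def dual_modal_visibility(alpha_a, alpha_b):
    """Total visibility each modality receives along one ray when the
    two volumes (e.g. PET and CT) are sampled at the same positions."""
    # fused opacity at each sample: either modality can absorb the ray
    combined = 1.0 - (1.0 - alpha_a) * (1.0 - alpha_b)
    # transmittance in front of sample i: product of (1 - opacity) before it
    trans = np.concatenate(([1.0], np.cumprod(1.0 - combined[:-1])))
    # each modality's contribution, attenuated by what lies in front
    vis_a = alpha_a * trans
    vis_b = alpha_b * trans
    return vis_a.sum(), vis_b.sum()
```

Lowering one volume's transfer-function opacity raises the other's visibility total, which is the kind of interactive feedback signal the authors propose; an ROI variant would simply mask the samples before summing.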
Affiliation(s)
- Younhyun Jung
- School of Information Technologies, University of Sydney, Australia.
20
Kaufman AE. Modified Dendrogram of Attribute Space for Multidimensional Transfer Function Design. IEEE Transactions on Visualization and Computer Graphics 2012; 18:121-131. [PMID: 21282856 DOI: 10.1109/tvcg.2011.23]
Abstract
We introduce a modified dendrogram (MD), with subtrees representing clusters, and display it in 2D for multidimensional transfer function design. Such a transfer function for direct volume rendering operates on a multidimensional space, termed the attribute space. The MD reveals the hierarchical structure of the attribute space. Using the MD user interface in 2D, the user can design a transfer function in an intuitive and informative manner, rather than working directly in the multidimensional space, where the relationships within the space are hard to ascertain. In addition, we provide the capability to interactively modify the granularity of the MD. The coarse-grained MD primarily shows the global structure of the attribute space while the fine-grained MD reveals finer details, and the separation ability of the attribute space is completely preserved at the finest granularity. With this so-called multi-grained method, the user can efficiently create a transfer function using the coarse-grained MD and then fine-tune it with the fine-grained MDs. Our method is independent of the attribute types and supports attribute spaces of arbitrary dimension.
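The coarse-to-fine interaction rests on standard agglomerative clustering: one hierarchy is built over the attribute space and cut at different depths to obtain the coarse- and fine-grained views. A minimal SciPy sketch, with a two-attribute toy dataset invented for illustration (not the paper's data or code):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# toy attribute space: (intensity, gradient magnitude) per voxel sample,
# drawn as two well-separated blobs
attrs = np.vstack([rng.normal(0.2, 0.02, size=(50, 2)),
                   rng.normal(0.8, 0.02, size=(50, 2))])

# one hierarchy over the attribute space...
Z = linkage(attrs, method="ward")
# ...cut at two granularities, analogous to coarse- and fine-grained MDs
coarse = fcluster(Z, t=2, criterion="maxclust")
fine = fcluster(Z, t=8, criterion="maxclust")
```

The same linkage matrix backs both views, so refining the granularity never discards the separation already visible at the coarse level.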
21
Zhang Q, Eagleson R, Peters TM. Volume visualization: a technical overview with a focus on medical applications. J Digit Imaging 2011; 24:640-64. [PMID: 20714917 DOI: 10.1007/s10278-010-9321-6]
Abstract
With the increasing availability of high-resolution isotropic three- or four-dimensional medical datasets from sources such as magnetic resonance imaging, computed tomography, and ultrasound, volumetric image visualization techniques have grown in importance. Over the past two decades, a number of new algorithms and improvements have been developed for practical clinical image display. More recently, further efficiencies have been attained by designing and implementing volume-rendering algorithms on graphics processing units (GPUs). In this paper, we review volumetric image visualization pipelines, algorithms, and medical applications. We also illustrate our algorithm implementation and evaluation results, and address the advantages and drawbacks of each algorithm in terms of image quality and efficiency. Within the outlined literature review, we have integrated our research results relating to new visualization, classification, enhancement, and multimodal data dynamic rendering. Finally, we discuss issues related to modern GPU working pipelines and their applications in the volume visualization domain.
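At the core of the GPU ray-casting pipelines such surveys cover is front-to-back compositing with early ray termination; one ray can be sketched in a few lines. The lookup-table transfer functions and the 0.99 cutoff below are illustrative choices, not taken from this paper.

```python
import numpy as np

def composite_ray(samples, tf_color, tf_alpha, early_term=0.99):
    """Front-to-back compositing of one ray through a volume.
    tf_color / tf_alpha are transfer-function lookup tables indexed
    by the quantized sample value."""
    color = np.zeros(3)
    alpha = 0.0
    for s in samples:
        a = tf_alpha[s]
        # accumulate color and opacity attenuated by what is in front
        color = color + (1.0 - alpha) * a * tf_color[s]
        alpha = alpha + (1.0 - alpha) * a
        if alpha >= early_term:  # early ray termination
            break
    return color, alpha
```

On a GPU the same loop runs per pixel in a fragment or compute shader; early termination is one of the standard acceleration techniques such overviews discuss.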
Affiliation(s)
- Qi Zhang
- Imaging Research Laboratories, Robarts Research Institute, University of Western Ontario, London, ON, Canada.
22
Xiang D, Tian J, Yang F, Yang Q, Zhang X, Li Q, Liu X. Skeleton Cuts: An Efficient Segmentation Method for Volume Rendering. IEEE Transactions on Visualization and Computer Graphics 2011; 17:1295-1306. [PMID: 21041885 DOI: 10.1109/tvcg.2010.239]
Abstract
Volume rendering has long been a key technique for volume data visualization; it works by using a transfer function to map color and opacity to each voxel. Many volume rendering approaches proposed so far for voxel classification have been limited to a single global transfer function, which is generally unable to properly visualize the structures of interest. In this paper, we propose a localized volume data visualization approach that regards volume visualization as a combination of two mutually related processes: the segmentation of structures of interest, and their visualization using a transfer function designed locally for each individual structure. A new interactive segmentation algorithm based on skeletons is introduced to properly delineate the structures of interest. In addition, a localized transfer function is presented that assigns optical parameters using information such as intensity, thickness, and distance. As the experimental results show, the proposed techniques make it possible to appropriately visualize structures of interest in highly complex volumetric medical datasets.
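The localized-transfer-function idea amounts to routing each voxel through the transfer function of its segmentation label rather than one global mapping. A minimal sketch, assuming hypothetical labels and opacity mappings (the helper name and the example values are not the paper's API):

```python
import numpy as np

def apply_localized_tf(values, labels, tfs, default_alpha=0.0):
    """Opacity per voxel from the transfer function of the segmented
    structure it belongs to; voxels with unlisted labels (e.g. the
    background, label 0) receive default_alpha."""
    alpha = np.full(values.shape, default_alpha, dtype=float)
    for label, tf in tfs.items():
        mask = labels == label
        alpha[mask] = tf(values[mask])
    return alpha

# two hypothetical structures of interest with their own opacity mappings
tfs = {1: lambda v: v,                     # linear intensity ramp
       2: lambda v: np.full_like(v, 0.8)}  # constant emphasis
```

Color lookup works the same way; the point is that each structure's mapping can be tuned without disturbing the appearance of the others.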
23
Kim HS, Schulze JP, Cone AC, Sosinsky GE, Martone ME. Dimensionality Reduction on Multi-Dimensional Transfer Functions for Multi-Channel Volume Data Sets. Information Visualization 2010; 9:167-180. [PMID: 21841914 PMCID: PMC3153355 DOI: 10.1057/ivs.2010.6]
Abstract
The design of transfer functions for volume rendering is a non-trivial task. This is particularly true for multi-channel data sets, where multiple data values exist for each voxel, requiring multi-dimensional transfer functions. In this paper, we propose a new method for multi-dimensional transfer function design. Our method provides a framework for combining multiple computational approaches and extends gradient-based multi-dimensional transfer functions to multiple channels, while keeping the dimensionality of the transfer functions at a manageable level, i.e., a maximum of three dimensions, which can be displayed visually in a straightforward way. Our approach utilizes the channel intensity, gradient, curvature, and texture properties of each voxel. Applying recently developed nonlinear dimensionality reduction algorithms reduces the dimensionality of the data in this domain. In this paper, we use Isomap and Locally Linear Embedding as well as a traditional algorithm, Principal Component Analysis. Our results show that these dimensionality reduction algorithms significantly improve the transfer function design process without compromising visualization accuracy. We demonstrate the effectiveness of our method with two volumetric confocal microscopy data sets.
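Of the three reduction algorithms named, PCA is the easiest to sketch without extra dependencies: project high-dimensional per-voxel feature vectors onto their leading principal axes. The feature matrix below is synthetic, standing in for the intensity/gradient/curvature/texture attributes the abstract lists.

```python
import numpy as np

def pca_reduce(features, k=3):
    """Project per-voxel feature vectors to k dimensions via PCA
    (classical counterpart of the paper's Isomap/LLE reductions)."""
    centered = features - features.mean(axis=0)
    # principal axes come from the SVD of the centered feature matrix
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T

rng = np.random.default_rng(1)
feats = rng.normal(size=(200, 8))  # 200 voxels, 8 attributes each
reduced = pca_reduce(feats, k=3)   # at most 3D, displayable directly
```

The reduced coordinates can then index an ordinary 2D or 3D transfer-function widget; Isomap or LLE would replace the linear projection when the attribute manifold is curved.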
Affiliation(s)
- Han Suk Kim
- Department of Computer Science and Engineering, University of California San Diego, 9500 Gilman Drive, La Jolla, CA, USA
24
Zhao X, Kaufman A. Multi-dimensional Reduction and Transfer Function Design using Parallel Coordinates. International Symposium on Volume Graphics 2010:69-76. [PMID: 26278929 DOI: 10.2312/vg/vg10/069-076]
Abstract
Multi-dimensional transfer functions are widely used to provide appropriate data classification for direct volume rendering. Nevertheless, designing a multi-dimensional transfer function is a complicated task. In this paper, we propose to use parallel coordinates, a powerful tool for visualizing high-dimensional geometry and analyzing multivariate data, for multi-dimensional transfer function design. This approach has two major advantages: (1) it combines information from the spatial domain (voxel position) and the parameter space; (2) it allows appropriate high-dimensional parameters to be selected to obtain sophisticated data classification. Although parallel coordinates offer a simple interface for designing high-dimensional transfer functions, some extra work, such as sorting the coordinates, is inevitable. Therefore, we use a locally linear embedding technique for dimension reduction to reduce the burdensome calculations in the high-dimensional parameter space and to represent the transfer function concisely. With the aid of parallel coordinates, we propose novel high-dimensional transfer function widgets for better visualization results. We demonstrate the capability of our parallel-coordinates-based transfer function (PCbTF) design method for direct volume rendering using CT and MRI datasets.
Affiliation(s)
- X Zhao
- Stony Brook University, USA
25
Jeong WK, Beyer J, Hadwiger M, Vazquez A, Pfister H, Whitaker RT. Scalable and interactive segmentation and visualization of neural processes in EM datasets. IEEE Transactions on Visualization and Computer Graphics 2009; 15:1505-14. [PMID: 19834227 PMCID: PMC3179915 DOI: 10.1109/tvcg.2009.178]
Abstract
Recent advances in scanning technology provide high-resolution EM (electron microscopy) datasets that allow neuroscientists to reconstruct complex neural connections in a nervous system. However, due to the enormous size and complexity of the resulting data, segmentation and visualization of neural processes in EM data is usually a difficult and very time-consuming task. In this paper, we present NeuroTrace, a novel EM volume segmentation and visualization system that consists of two parts: a semi-automatic multiphase level-set segmentation with 3D tracking for the reconstruction of neural processes, and a specialized volume rendering approach for the visualization of EM volumes. It employs view-dependent on-demand filtering and evaluation of a local histogram edge metric, as well as on-the-fly interpolation and ray casting of implicit surfaces for segmented neural structures. Both methods are implemented on the GPU for interactive performance. NeuroTrace is designed to be scalable to large datasets and data-parallel hardware architectures. A comparison of NeuroTrace with a commonly used manual EM segmentation tool shows that our interactive workflow is faster and easier to use for the reconstruction of complex neural processes.
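A local histogram edge metric of the kind mentioned here can be approximated as a histogram distance between two neighboring sample windows. The chi-squared formulation below is a simplified stand-in under that assumption, not NeuroTrace's exact metric:

```python
import numpy as np

def local_histogram_edge_metric(window_a, window_b, bins=16):
    """Edge strength between two neighboring sample windows, measured
    as the chi-squared distance between their local intensity
    histograms (0 = identical distributions, 1 = disjoint)."""
    ha, _ = np.histogram(window_a, bins=bins, range=(0.0, 1.0))
    hb, _ = np.histogram(window_b, bins=bins, range=(0.0, 1.0))
    ha = ha / ha.sum()
    hb = hb / hb.sum()
    denom = ha + hb
    mask = denom > 0  # skip empty bins to avoid division by zero
    return 0.5 * np.sum((ha[mask] - hb[mask]) ** 2 / denom[mask])
```

Evaluated on demand only for the rays that reach a region, a metric like this flags membrane-like boundaries without requiring a precomputed gradient volume.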
Affiliation(s)
- Won-Ki Jeong
- School of Engineering and Applied Sciences, Harvard University
- Johanna Beyer
- VRVis Center for Virtual Reality and Visualization Research, Inc.
- Markus Hadwiger
- VRVis Center for Virtual Reality and Visualization Research, Inc.
- Amelio Vazquez
- School of Engineering and Applied Sciences, Harvard University
- Ross T. Whitaker
- Scientific Computing and Imaging Institute, University of Utah