1. Jadhav S, Torkaman M, Tannenbaum A, Nadeem S, Kaufman AE. Volume Exploration Using Multidimensional Bhattacharyya Flow. IEEE Transactions on Visualization and Computer Graphics 2023; 29:1651-1663. [PMID: 34780328] [PMCID: PMC9594946] [DOI: 10.1109/tvcg.2021.3127918]
Abstract
We present a novel approach for volume exploration that is versatile yet effective in isolating semantic structures in both noisy and clean data. Specifically, we describe a hierarchical active contours approach based on Bhattacharyya gradient flow, which is easier to control, robust to noise, and can incorporate various types of statistical information to drive an edge-agnostic exploration process. To facilitate a time-bound, user-driven volume exploration process that is applicable to a wide variety of data sources, we present an efficient multi-GPU implementation that (1) is approximately 400 times faster than a single-threaded CPU implementation, (2) allows hierarchical exploration of 2D and 3D images, (3) supports customization through multidimensional attribute spaces, and (4) is applicable to a variety of data sources and semantic structures. The exploration system follows a two-step process. It first applies active contours to isolate semantically meaningful subsets of the volume. It then applies transfer functions to the isolated regions locally to produce clear and clutter-free visualizations. We show the effectiveness of our approach in isolating and visualizing structures of interest without needing any specialized segmentation methods on a variety of data sources, including 3D optical microscopy, multi-channel optical volumes, abdominal and chest CT, micro-CT, MRI, simulation, and synthetic data. We also gathered feedback from a medical trainee on the usefulness of our approach and discuss potential applications in clinical workflows.
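As a rough illustration of the separation measure that such a Bhattacharyya flow optimizes, the following minimal Python sketch (not the authors' implementation; histogram bins, value range, and variable names are placeholders) computes the Bhattacharyya coefficient between the attribute distributions inside and outside a contour; the flow evolves the contour to drive this overlap toward zero.

```python
import numpy as np

def bhattacharyya_coefficient(values_in, values_out, bins=64, value_range=(0.0, 1.0)):
    """Overlap between the intensity distributions inside and outside a contour.

    The flow evolves the contour to *minimize* this overlap, i.e. to maximize the
    statistical separation of the two regions. This sketch uses plain 1D intensity
    histograms; multidimensional attribute spaces work analogously.
    """
    p_in, _ = np.histogram(values_in, bins=bins, range=value_range)
    p_out, _ = np.histogram(values_out, bins=bins, range=value_range)
    # Normalize to probability mass functions before taking the overlap.
    p_in = p_in / (p_in.sum() + 1e-12)
    p_out = p_out / (p_out.sum() + 1e-12)
    return float(np.sum(np.sqrt(p_in * p_out)))  # 1.0 = identical, 0.0 = disjoint

# Hypothetical usage: voxel values currently inside vs. outside the evolving contour.
rng = np.random.default_rng(0)
inside = rng.normal(0.7, 0.05, 5000).clip(0, 1)
outside = rng.normal(0.3, 0.05, 5000).clip(0, 1)
print(bhattacharyya_coefficient(inside, outside))  # small value -> well separated
```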
2. Jadhav S, Nadeem S, Kaufman A. FeatureLego: Volume Exploration Using Exhaustive Clustering of Super-Voxels. IEEE Transactions on Visualization and Computer Graphics 2019; 25:2725-2737. [PMID: 30028709] [PMCID: PMC6703906] [DOI: 10.1109/tvcg.2018.2856744]
Abstract
We present a volume exploration framework, FeatureLego, that uses a novel voxel clustering approach for efficient selection of semantic features. We partition the input volume into a set of compact super-voxels that represent the finest selection granularity. We then perform an exhaustive clustering of these super-voxels using a graph-based clustering method. Unlike the prevalent brute-force parameter sampling approaches, we propose an efficient algorithm to perform this exhaustive clustering. By computing an exhaustive set of clusters, we aim to capture as many boundaries as possible and ensure that the user has sufficient options for efficiently selecting semantically relevant features. Furthermore, we merge all the computed clusters into a single tree of meta-clusters that can be used for hierarchical exploration. We implement an intuitive user interface to interactively explore volumes using our clustering approach. Finally, we show the effectiveness of our framework on multiple real-world datasets of different modalities.
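A minimal sketch of the general two-stage idea, over-segmentation into super-voxels followed by grouping them into a hierarchy, is shown below. It stands in SLIC and average-linkage clustering for the paper's exhaustive graph-based clustering, assumes scikit-image >= 0.19 and SciPy, and uses placeholder parameter values throughout.

```python
import numpy as np
from skimage.segmentation import slic
from scipy.cluster.hierarchy import linkage, fcluster

# (1) Over-segment the volume into compact super-voxels defining the finest
#     selection granularity, then (2) group super-voxels by similarity into a
#     tree that can be cut at different levels for hierarchical exploration.
volume = np.random.rand(64, 64, 64)                       # placeholder scalar volume

labels = slic(volume, n_segments=500, compactness=0.1,
              channel_axis=None, start_label=0)            # super-voxel label map

n_sv = labels.max() + 1
mean_intensity = np.array([volume[labels == i].mean() for i in range(n_sv)])

# Hierarchical grouping of super-voxels; cutting the tree at different heights
# yields coarser or finer "meta-clusters".
tree = linkage(mean_intensity.reshape(-1, 1), method="average")
meta_clusters = fcluster(tree, t=8, criterion="maxclust")  # e.g. 8 coarse features
print(meta_clusters.shape, meta_clusters.max())
```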
3. Igouchkine O, Zhang Y, Ma KL. Multi-Material Volume Rendering with a Physically-Based Surface Reflection Model. IEEE Transactions on Visualization and Computer Graphics 2018; 24:3147-3159. [PMID: 29990043] [DOI: 10.1109/tvcg.2017.2784830]
Abstract
Rendering techniques that increase realism in volume visualization help enhance perception of the 3D features in the volume data. While techniques focusing on high-quality global illumination have been extensively studied, few works handle the interaction of light with materials in the volume. Existing techniques for light-material interaction are limited in their ability to handle high-frequency real-world material data, and the current treatment of volume data poorly supports the correct integration of surface materials. In this paper, we introduce an alternative definition for the transfer function which supports surface-like behavior at the boundaries between volume components and volume-like behavior within. We show that this definition enables multi-material rendering with high-quality, real-world material data. We also show that this approach offers an efficient alternative to pre-integrated rendering through isosurface techniques. We introduce arbitrary spatially-varying materials to achieve better multi-material support for scanned volume data. Finally, we show that it is possible to map an arbitrary set of parameters directly to a material representation for the more intuitive creation of novel materials.
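The following Python sketch illustrates one common way to realize such mixed behavior, blending a surface reflection term with ordinary emission according to the local gradient magnitude; it is a simplified stand-in, not the paper's transfer-function definition, and all constants and function arguments are assumptions.

```python
import numpy as np

def shade_sample(value, gradient, light_dir, view_dir, tf_color, tf_opacity):
    """Blend surface-like and volume-like shading for one ray sample.

    Sketch only: the boundary weight is driven by the local gradient magnitude,
    so samples at material boundaries get a Blinn-Phong style surface reflection
    while samples inside homogeneous regions keep plain emission. `tf_color`
    returns an RGB array and `tf_opacity` a scalar for a given sample value.
    """
    g = np.linalg.norm(gradient)
    w_surface = np.clip(g / (g + 0.05), 0.0, 1.0)    # 0 = pure volume, 1 = pure surface

    emission = np.asarray(tf_color(value), dtype=float)
    if g > 1e-6:
        n = gradient / g
        h = (light_dir + view_dir) / np.linalg.norm(light_dir + view_dir)
        diffuse = max(np.dot(n, light_dir), 0.0)
        specular = max(np.dot(n, h), 0.0) ** 32
        surface = emission * diffuse + specular
    else:
        surface = emission

    color = (1.0 - w_surface) * emission + w_surface * surface
    return color, tf_opacity(value)
```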
4. Liu R, Chen S, Ji G, Zhao B, Li Q, Su M. Interactive stratigraphic structure visualization for seismic data. Journal of Visual Languages and Computing 2018. [DOI: 10.1016/j.jvlc.2018.07.004]
5. Xiang D, Bagci U, Jin C, Shi F, Zhu W, Yao J, Sonka M, Chen X. CorteXpert: A model-based method for automatic renal cortex segmentation. Med Image Anal 2017; 42:257-273. [DOI: 10.1016/j.media.2017.06.010]
6. Song J, Yang C, Fan L, Wang K, Yang F, Liu S, Tian J. Lung Lesion Extraction Using a Toboggan Based Growing Automatic Segmentation Approach. IEEE Transactions on Medical Imaging 2016; 35:337-353. [PMID: 26336121] [DOI: 10.1109/tmi.2015.2474119]
Abstract
The accurate segmentation of lung lesions from computed tomography (CT) scans is important for lung cancer research and can offer valuable information for clinical diagnosis and treatment. However, it is challenging to achieve fully automatic lesion detection and segmentation with acceptable accuracy due to the heterogeneity of lung lesions. Here, we propose a novel toboggan based growing automatic segmentation approach (TBGA) with a three-step framework: automatic initial seed point selection, multi-constraint 3D lesion extraction, and final lesion refinement. The new approach does not require any human interaction or training dataset for lesion detection, yet it provides a high lesion detection sensitivity (96.35%) and segmentation accuracy comparable to manual segmentation (P > 0.05), as demonstrated by a series of assessments using the LIDC-IDRI dataset (850 lesions) and an in-house clinical dataset (121 lesions). We also compared TBGA with the commonly used level set and skeleton graph cut methods, and the results indicated a significant improvement in segmentation accuracy. Furthermore, the average time consumption for one lesion segmentation was under 8 s using our new method. In conclusion, we believe that the novel TBGA can achieve robust, efficient and accurate lung lesion segmentation in CT images automatically.
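For orientation, the sketch below shows a plain 26-connected region grower from a seed voxel, a much simplified stand-in for the multi-constraint 3D lesion extraction step; the intensity tolerance and size bound are placeholder constraints, not the paper's criteria.

```python
import numpy as np
from collections import deque

def grow_region(volume, seed, intensity_tol=120.0, max_voxels=50_000):
    """Simple 26-connected 3D region growing from a single seed voxel.

    Voxels join the region while they stay within an intensity tolerance of the
    seed value and the region stays below a size bound (both are placeholder
    constraints standing in for the multi-constraint extraction described above).
    """
    seed_val = float(volume[seed])
    mask = np.zeros(volume.shape, dtype=bool)
    mask[seed] = True
    count = 1
    queue = deque([seed])
    offsets = [(dz, dy, dx) for dz in (-1, 0, 1) for dy in (-1, 0, 1)
               for dx in (-1, 0, 1) if (dz, dy, dx) != (0, 0, 0)]
    while queue and count < max_voxels:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            p = (z + dz, y + dy, x + dx)
            if all(0 <= p[i] < volume.shape[i] for i in range(3)) and not mask[p]:
                if abs(float(volume[p]) - seed_val) <= intensity_tol:
                    mask[p] = True
                    count += 1
                    queue.append(p)
    return mask
```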
7. Li G, Chen X, Shi F, Zhu W, Tian J, Xiang D. Automatic Liver Segmentation Based on Shape Constraints and Deformable Graph Cut in CT Images. IEEE Transactions on Image Processing 2015; 24:5315-5329. [PMID: 26415173] [DOI: 10.1109/tip.2015.2481326]
Abstract
Liver segmentation remains a challenging task in medical image processing due to the complexity of the liver's anatomy, low contrast with adjacent organs, and the presence of pathologies. This work develops and validates an automated method for segmenting the liver in CT images. The proposed framework consists of three steps: 1) preprocessing; 2) initialization; and 3) segmentation. In the first step, a statistical shape model is constructed based on principal component analysis and the input image is smoothed using curvature anisotropic diffusion filtering. In the second step, the mean shape model is moved using thresholding and Euclidean distance transformation to obtain a coarse position in a test image, and the initial mesh is then locally and iteratively deformed toward the coarse boundary while being constrained to stay close to a subspace of shapes describing the anatomical variability. Finally, to accurately detect the liver surface, a deformable graph cut is proposed that effectively integrates the properties and inter-relationship of the input images and the initialized surface. The proposed method was evaluated on 50 CT scans that are publicly available in two databases, Sliver07 and 3Dircadb. The experimental results showed that the proposed method is effective and accurate for detecting the liver surface.
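A minimal sketch of the first ingredient, a PCA statistical shape model built from aligned training shapes, is given below; the array layout and the helper that clamps a candidate shape back to the learned subspace are assumptions for illustration, not the paper's code.

```python
import numpy as np

def build_shape_model(shapes, variance_kept=0.98):
    """PCA statistical shape model from aligned training shapes.

    `shapes` is an (n_shapes, 3 * n_landmarks) array of corresponding, aligned
    surface points. New shapes are expressed as mean + modes.T @ coefficients,
    which is how the deforming mesh can be constrained to plausible liver shapes.
    """
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # SVD of the centered data gives the principal modes of shape variation.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    var = s**2 / (len(shapes) - 1)
    k = int(np.searchsorted(np.cumsum(var) / var.sum(), variance_kept)) + 1
    return mean, vt[:k], var[:k]          # mean shape, modes, per-mode variance

def project_to_model(shape, mean, modes, var, n_std=3.0):
    """Clamp a candidate shape so it stays close to the learned shape subspace."""
    b = modes @ (shape - mean)
    b = np.clip(b, -n_std * np.sqrt(var), n_std * np.sqrt(var))
    return mean + modes.T @ b
```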
8. Mu W, Chen Z, Liang Y, Shen W, Yang F, Dai R, Wu N, Tian J. Staging of cervical cancer based on tumor heterogeneity characterized by texture features on ¹⁸F-FDG PET images. Phys Med Biol 2015; 60:5123-5139. [PMID: 26083460] [DOI: 10.1088/0031-9155/60/13/5123]
Abstract
The aim of this study is to assess the staging value of tumor heterogeneity characterized by texture features and other commonly used semi-quantitative indices extracted from ¹⁸F-FDG PET images of cervical cancer (CC) patients. Forty-two patients with CC at different stages were enrolled. First, we propose a new tumor segmentation method that combines intensity and gradient field information in a level set framework. Second, fifty-four 3D texture features were studied in addition to SUVs (SUVmax, SUVmean, SUVpeak) and metabolic tumor volume (MTV). Through correlation analysis and receiver-operating-characteristic (ROC) curve analysis, some independent indices showed statistically significant differences between the early stage (ES, stages I and II) and the advanced stage (AS, stages III and IV). The tumors represented by those independent indices could then be automatically classified into ES and AS, and the most discriminative feature chosen. Finally, the robustness of the optimal index with respect to sampling schemes and the quality of the PET images was validated. Using the proposed segmentation method, the Dice similarity coefficient and Hausdorff distance were 91.78 ± 1.66% and 7.94 ± 1.99 mm, respectively. According to the correlation analysis, all fifty-eight indices could be divided into 20 groups. Six independent indices were selected for their highest areas under the ROC curve (AUROC) and showed significant differences between ES and AS (P < 0.05). Through automatic classification with a support vector machine (SVM) classifier, run percentage (RP) was the most discriminative index, with the highest accuracy (88.10%) and largest AUROC (0.88). The Pearson correlation of RP under different sampling schemes was 0.9991 ± 0.0011. RP is a highly stable feature that is well correlated with tumor stage in CC, suggesting it could differentiate ES and AS with high accuracy.
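The last analysis step, testing one index for stage discrimination with AUROC and an SVM classifier, can be sketched with scikit-learn as below; the feature values and class sizes here are synthetic placeholders, not the study's data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Placeholder data: one texture index (e.g. run percentage) per patient, plus a
# binary stage label (0 = early stage, 1 = advanced stage). Real values would
# come from the GLRLM/SUV/MTV measurements described above.
rp = np.concatenate([rng.normal(0.55, 0.08, 22), rng.normal(0.75, 0.08, 20)])
stage = np.concatenate([np.zeros(22, dtype=int), np.ones(20, dtype=int)])

auroc = roc_auc_score(stage, rp)                          # univariate discrimination
svm = SVC(kernel="linear", C=1.0)
acc = cross_val_score(svm, rp.reshape(-1, 1), stage, cv=5).mean()
print(f"AUROC = {auroc:.2f}, cross-validated SVM accuracy = {acc:.2f}")
```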
Affiliation(s)
- Wei Mu: Key Laboratory of Molecular Imaging of Chinese Academy of Sciences, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; Beijing Key Laboratory of Molecular Imaging, Beijing 100190, China
9. Mu W, Chen Z, Shen W, Yang F, Liang Y, Dai R, Wu N, Tian J. A Segmentation Algorithm for Quantitative Analysis of Heterogeneous Tumors of the Cervix With ¹⁸F-FDG PET/CT. IEEE Trans Biomed Eng 2015; 62:2465-2479. [PMID: 25993699] [DOI: 10.1109/tbme.2015.2433397]
Abstract
As positron-emission tomography (PET) images have low spatial resolution and considerable noise, accurate image segmentation is one of the most challenging issues in tumor quantification. Tumors of the uterine cervix present a particular challenge because of urine activity in the adjacent bladder. Here, we propose and validate an automatic segmentation method adapted to cervical tumors. Our methodology combines the gradient field information of both the filtered PET image and the level set function into a level set framework by constructing a new evolution equation. Furthermore, we constructed a new hyperimage and used the fuzzy c-means algorithm to recognize a rough tumor region according to the tissue specificity defined by both PET (uptake) and computed tomography (attenuation); this region provides the initial zero level set and makes the segmentation process fully automatic. The proposed method was verified in simulation and clinical studies. For the simulation studies, seven different phantoms, representing tumors with homogeneous/heterogeneous, low/high uptake patterns and different volumes, were simulated at five different noise levels. Twenty-seven cervical cancer patients at different stages were enrolled for clinical evaluation of the method. Dice similarity coefficients (DSC) and Hausdorff distance (HD) were used to evaluate the accuracy of the segmentation, while a Bland-Altman analysis of the mean standardized uptake value (SUVmean) and metabolic tumor volume (MTV) was used to evaluate the accuracy of the quantification. Using this method, the DSCs and HDs of the homogeneous and heterogeneous phantoms under the clinical noise level were 93.39 ± 1.09% and 6.02 ± 1.09 mm, and 93.59 ± 1.63% and 8.92 ± 2.57 mm, respectively. The DSCs and HDs in patients were 91.80 ± 2.46% and 7.79 ± 2.18 mm. Through Bland-Altman analysis, the SUVmean and the MTV obtained with our method showed high correlation with the clinical gold standard. The results of both simulation and clinical studies demonstrate the accuracy, effectiveness, and robustness of the proposed method. Further assessment of the quantitative indices indicates the feasibility of this algorithm for accurate quantitative analysis of cervical tumors in clinical practice.
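The two evaluation metrics used above, the Dice similarity coefficient and the Hausdorff distance, can be computed on binary masks as in the following sketch (NumPy/SciPy; the voxel spacing parameter is an assumption for converting to millimeters).

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(mask_a, mask_b):
    """DSC = 2|A ∩ B| / (|A| + |B|) on boolean masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def hausdorff_distance(mask_a, mask_b, spacing=(1.0, 1.0, 1.0)):
    """Symmetric Hausdorff distance between the voxel sets of two masks (in mm)."""
    pts_a = np.argwhere(mask_a) * np.asarray(spacing)
    pts_b = np.argwhere(mask_b) * np.asarray(spacing)
    return max(directed_hausdorff(pts_a, pts_b)[0],
               directed_hausdorff(pts_b, pts_a)[0])
```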
10. Gu Y, Kumar V, Hall LO, Goldgof DB, Li CY, Korn R, Bendtsen C, Velazquez ER, Dekker A, Aerts H, Lambin P, Li X, Tian J, Gatenby RA, Gillies RJ. Automated Delineation of Lung Tumors from CT Images Using a Single Click Ensemble Segmentation Approach. Pattern Recognition 2013; 46:692-702. [PMID: 23459617] [PMCID: PMC3580869] [DOI: 10.1016/j.patcog.2012.10.005]
Abstract
A single click ensemble segmentation (SCES) approach based on an existing "Click&Grow" algorithm is presented. The SCES approach requires only one operator-selected seed point, as opposed to the multiple operator inputs that are typically needed, which facilitates processing large numbers of cases. The approach was evaluated on a set of 129 CT lung tumor images using a similarity index (SI). The average SI is above 93% using 20 different start seeds, showing stability. The average SI between two different readers was 79.53%. We then compared the SCES algorithm with the two readers, the level set algorithm, and the skeleton graph cut algorithm, obtaining average SIs of 78.29%, 77.72%, 63.77%, and 63.76%, respectively. We conclude that the newly developed automatic lung lesion segmentation algorithm is stable, accurate, and automated.
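A minimal sketch of the ensemble idea, jittering the single click into several seeds, running a base segmenter from each, and keeping the majority-vote consensus, is shown below; `segment_from_seed` is a hypothetical stand-in for the underlying "Click&Grow" step, and the jitter radius and seed count are placeholders.

```python
import numpy as np

def ensemble_segmentation(volume, click, segment_from_seed, n_seeds=20, radius=2, rng=None):
    """Single-click ensemble: vote over segmentations grown from jittered seeds.

    `segment_from_seed(volume, seed) -> bool mask` is a hypothetical base segmenter
    (for example, a region grower). Voting over masks grown from seeds scattered
    around the one user click gives the seed-stability measured by the SI
    experiments above.
    """
    rng = rng or np.random.default_rng(0)
    votes = np.zeros(volume.shape, dtype=np.int32)
    for _ in range(n_seeds):
        jitter = rng.integers(-radius, radius + 1, size=3)
        seed = tuple(np.clip(np.add(click, jitter), 0, np.array(volume.shape) - 1))
        votes += segment_from_seed(volume, seed)
    return votes >= (n_seeds // 2 + 1)        # majority-vote consensus mask
```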
Affiliation(s)
- Yuhua Gu: Department of Imaging, H. Lee Moffitt Cancer Center and Research Institute, Tampa, Florida 33612, USA
- Virendra Kumar: Department of Imaging, H. Lee Moffitt Cancer Center and Research Institute, Tampa, Florida 33612, USA
- Lawrence O Hall: Department of Computer Science and Engineering, University of South Florida, Tampa, Florida 33620, USA
- Dmitry B Goldgof: Department of Computer Science and Engineering, University of South Florida, Tampa, Florida 33620, USA
- Ching-Yen Li: Department of Computer Science and Engineering, University of South Florida, Tampa, Florida 33620, USA
- René Korn: Definiens AG, Trappentreustraße 1, 80339 München, Germany
- Claus Bendtsen: DECS, AstraZeneca, 50S27 Mereside, Alderley Park, Macclesfield, Cheshire SK10 4TG, UK
- Andre Dekker: Departments of Radiation Oncology, University Hospital Maastricht, Maastricht, Netherlands
- Hugo Aerts: Departments of Radiation Oncology, University Hospital Maastricht, Maastricht, Netherlands
- Philippe Lambin: Departments of Radiation Oncology, University Hospital Maastricht, Maastricht, Netherlands
- Xiuli Li: Medical Image Processing Group, State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Jie Tian: Medical Image Processing Group, State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Robert A Gatenby: Department of Imaging, H. Lee Moffitt Cancer Center and Research Institute, Tampa, Florida 33612, USA
- Robert J Gillies: Department of Imaging, H. Lee Moffitt Cancer Center and Research Institute, Tampa, Florida 33612, USA
11. Yang F, Li Q, Xiang D, Cao Y, Tian J. A versatile optical model for hybrid rendering of volume data. IEEE Transactions on Visualization and Computer Graphics 2012; 18:925-937. [PMID: 21690650] [DOI: 10.1109/tvcg.2011.113]
Abstract
In volume rendering, most optical models currently in use are based on the assumptions that a volumetric object is a collection of particles and that the macro behavior of particles, when they interact with light rays, can be predicted from the behavior of each individual particle. However, such models are not capable of characterizing the collective optical effect of a collection of particles, which dominates the appearance of the boundaries of dense objects. In this paper, we propose a generalized optical model that combines particle elements and surface elements to characterize both the behavior of individual particles and the collective effect of particles. The framework based on the new model provides a more powerful and flexible tool for hybrid rendering of isosurfaces and transparent clouds of particles in a single scene. It also provides a more rational basis for shading, so the problem of normal-based shading in homogeneous regions encountered in conventional volume rendering can easily be avoided. The model can be seen as an extension of the classical model. It can be implemented easily, and most of the advanced numerical estimation methods previously developed specifically for the particle-based optical model, such as preintegration, can be applied to the new model to achieve high-quality rendering results.
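One simple way to picture such a hybrid model is a ray-marching loop that composites the usual particle (emission-absorption) term and switches to a gradient-shaded surface term at isovalue crossings, as in the sketch below; this is an illustrative simplification, not the paper's formulation, and the isovalue, surface color, and boundary opacity are placeholders.

```python
import numpy as np

def composite_ray(samples, gradients, tf_color, tf_alpha, iso=0.5,
                  surf_color=np.ones(3)):
    """Front-to-back compositing with an extra surface term at isovalue crossings.

    `samples` is a sequence of scalar values along the ray (front to back) and
    `gradients` the corresponding gradient vectors. Ordinary emission-absorption
    handles the particle medium; wherever the ray crosses the chosen isovalue,
    a near-opaque surface contribution shaded with the gradient as normal is
    composited instead.
    """
    color, alpha = np.zeros(3), 0.0
    light = np.array([0.0, 0.0, 1.0])
    for i, (v, g) in enumerate(zip(samples, gradients)):
        # Particle (volume-like) contribution from the transfer function.
        c, a = np.asarray(tf_color(v), dtype=float), tf_alpha(v)
        # Surface-like contribution when the isovalue is crossed between samples.
        if i > 0 and (samples[i - 1] - iso) * (v - iso) < 0 and np.linalg.norm(g) > 1e-6:
            n = g / np.linalg.norm(g)
            c = surf_color * abs(np.dot(n, light))     # Lambertian surface shade
            a = 0.9                                    # near-opaque boundary
        color += (1.0 - alpha) * a * c
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                               # early ray termination
            break
    return color, alpha
```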
Affiliation(s)
- Fei Yang: Institute of Automation, Chinese Academy of Sciences, 95 Zhongguancun East Road, Beijing 100190, China