1. Song Z, Wu H, Chen W, Slowik A. Improving automatic segmentation of liver tumor images using a deep learning model. Heliyon 2024; 10:e28538. PMID: 38571625; PMCID: PMC10988037; DOI: 10.1016/j.heliyon.2024.e28538.
Abstract
Liver tumors are among the most aggressive malignancies in the human body. Computer-aided technology and interventional liver surgery are effective in the prediction, identification and management of liver neoplasms. An important prerequisite is to accurately grasp the morphological structure of the liver and its blood vessels. However, accurate identification and segmentation of hepatic blood vessels in CT images poses a formidable challenge: manually locating and segmenting liver vessels in CT images is time-consuming and impractical, so there is a pressing clinical need for a precise and efficient segmentation algorithm. In response to this demand, this paper proposes a liver vessel segmentation approach based on an enhanced 3D fully convolutional neural network, V-Net. The model adapts the basic network structure to the characteristics of liver vessels. First, a pyramidal convolution block is introduced between the encoder and decoder of the network to improve its localization ability. Then, multi-resolution deep supervision is introduced, resulting in more robust segmentation. Finally, feature maps of different resolutions are fused to predict the overall segmentation result. Evaluation experiments on public datasets demonstrate that the improved scheme increases the segmentation ability of existing network models for liver vessels. Compared with existing work, the experimental outcomes show that the technique presented in this manuscript attains superior performance on the Dice coefficient index, which can support the treatment of liver tumors.
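Nearly every entry in this list reports the Dice coefficient as its headline overlap metric. As a quick reference, here is a minimal NumPy sketch of the metric (the function name and toy masks are illustrative, not taken from any cited paper):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: count as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / total

# Toy 1-D masks: 2 overlapping voxels out of 3 + 3
a = [0, 1, 1, 1, 0]
b = [0, 0, 1, 1, 1]
print(round(dice_coefficient(a, b), 3))  # 2*2/(3+3) ≈ 0.667
```

A Dice of 1.0 means perfect overlap and 0.0 means none, which is why values such as 0.82-0.86 in the esophageal GTV papers below indicate strong but imperfect agreement with physician contours.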
Affiliation(s)
- Zhendong Song
- School of Mechanical and Electrical Engineering, Shenzhen Polytechnic University, Shenzhen, 518055, China
- Huiming Wu
- School of Mechanical and Electrical Engineering, Shenzhen Polytechnic University, Shenzhen, 518055, China
- Wei Chen
- School of Mechanical and Electrical Engineering, Shenzhen Polytechnic University, Shenzhen, 518055, China
- Adam Slowik
- Koszalin University of Technology, Koszalin, Poland
2. Yue Y, Li N, Xing W, Zhang G, Liu X, Zhu Z, Song S, Ta D. Condition control training-based ConVMLP-ResU-Net for semantic segmentation of esophageal cancer in 18F-FDG PET/CT images. Phys Eng Sci Med 2023; 46:1643-1658. PMID: 37910383; DOI: 10.1007/s13246-023-01327-3.
Abstract
The precise delineation of esophageal gross tumor volume (GTV) on medical images can improve the effect of radiotherapy for esophageal cancer. This work explores effective learning-based methods to tackle the challenging auto-segmentation problem of esophageal GTV. By employing the progressive hierarchical reasoning mechanism (PHRM), we devised a simple yet effective two-stage deep framework, ConVMLP-ResU-Net. Within it, the front-end ConVMLP integrates convolution (ConV) and multi-layer perceptrons (MLP) to capture localized and long-range spatial information, making ConVMLP excel at predicting the location and coarse shape of the esophageal GTV. According to the PHRM, the front-end ConVMLP should have strong generalization ability to ensure that the back-end ResU-Net performs correct and valid reasoning. Therefore, a condition control training algorithm was proposed to control the training process of ConVMLP for a robust front end. The back-end ResU-Net then benefits from the mask yielded by ConVMLP to conduct a finer expansive segmentation and output the final result. Extensive experiments were carried out on a clinical cohort of 1138 pairs of 18F-FDG positron emission tomography/computed tomography (PET/CT) images. We report a Dice similarity coefficient, Hausdorff distance, and mean surface distance of 0.82 ± 0.13, 4.31 ± 7.91 mm, and 1.42 ± 3.69 mm, respectively. The predicted contours visually agree well with the ground truths. The devised ConVMLP is adept at locating the esophageal GTV with a correct initial shape prediction and hence facilitates the finer segmentation of the back-end ResU-Net. Both the qualitative and quantitative results validate the effectiveness of the proposed method.
Affiliation(s)
- Yaoting Yue
- Center for Biomedical Engineering, School of Information Science and Technology, Fudan University, Shanghai, China
- Nan Li
- Department of Nuclear Medicine, Fudan University Shanghai Cancer Center, Shanghai, China
- Wenyu Xing
- Center for Biomedical Engineering, School of Information Science and Technology, Fudan University, Shanghai, China
- Gaobo Zhang
- Center for Biomedical Engineering, School of Information Science and Technology, Fudan University, Shanghai, China
- Xin Liu
- Academy for Engineering and Technology, Fudan University, Shanghai, China
- Zhibin Zhu
- School of Physics and Electromechanical Engineering, Hexi University, Zhangye, Gansu, China
- Shaoli Song
- Department of Nuclear Medicine, Fudan University Shanghai Cancer Center, Shanghai, China
- Dean Ta
- Center for Biomedical Engineering, School of Information Science and Technology, Fudan University, Shanghai, China
- Academy for Engineering and Technology, Fudan University, Shanghai, China
3. Yue Y, Li N, Zhang G, Zhu Z, Liu X, Song S, Ta D. Automatic segmentation of esophageal gross tumor volume in 18F-FDG PET/CT images via GloD-LoATUNet. Comput Methods Programs Biomed 2023; 229:107266. PMID: 36470035; DOI: 10.1016/j.cmpb.2022.107266.
Abstract
BACKGROUND AND OBJECTIVE: For esophageal squamous cell carcinoma, radiotherapy is one of the primary treatments. During radiotherapy planning, the intractable task is to precisely delineate the esophageal gross tumor volume (GTV) on medical images. In current clinical practice, manual delineation suffers from high intra- and inter-rater variability while also placing a heavy, repetitive burden on oncologists. There is an urgent demand for effective computer-aided automatic segmentation methods. To this end, we designed a novel deep network, dubbed GloD-LoATUNet. METHODS: GloD-LoATUNet follows the effective U-shape structure. On the contracting path, global deformable dense attention transformer (GloDAT), local attention transformer (LoAT), and convolution blocks are integrated to model long-range dependencies and localized information. On the center bridge and the expanding path, convolution blocks are adopted to upsample the extracted representations for pixel-wise semantic prediction. Between the peer-to-peer counterparts, enhanced skip connections are built to compensate for lost spatial information and dependencies. By exploiting the complementary strengths of GloDAT, LoAT, and convolution, GloD-LoATUNet has remarkable representation learning capabilities, performing well in the prediction of the small and variable esophageal GTV. RESULTS: The proposed approach was validated on a clinical positron emission tomography/computed tomography (PET/CT) cohort. For 4 different data partitions, we report the Dice similarity coefficient (DSC), Hausdorff distance (HD), and mean surface distance (MSD) as: 0.83±0.13, 4.88±9.16 mm, and 1.40±4.11 mm; 0.84±0.12, 6.89±12.04 mm, and 1.18±3.02 mm; 0.84±0.13, 3.89±7.64 mm, and 1.28±3.68 mm; 0.86±0.09, 3.71±4.79 mm, and 0.90±0.37 mm, respectively. The predicted contours show a desirable consistency with the ground truth.
CONCLUSIONS: The results confirm the accuracy and generalizability of the proposed model, demonstrating its potential for automatic segmentation of esophageal GTV in clinical practice.
Affiliation(s)
- Yaoting Yue
- Center for Biomedical Engineering, School of Information Science and Technology, Fudan University, Shanghai 200438, China; Human Phenome Institute, Fudan University, Shanghai 201203, China
- Nan Li
- Department of Nuclear Medicine, Fudan University Shanghai Cancer Center, Shanghai 201321, China
- Gaobo Zhang
- Center for Biomedical Engineering, School of Information Science and Technology, Fudan University, Shanghai 200438, China
- Zhibin Zhu
- School of Physics and Electromechanical Engineering, Hexi University, Zhangye 734000, Gansu, China
- Xin Liu
- Academy for Engineering and Technology, Fudan University, Shanghai 200433, China
- Shaoli Song
- Department of Nuclear Medicine, Fudan University Shanghai Cancer Center, Shanghai 201321, China
- Dean Ta
- Center for Biomedical Engineering, School of Information Science and Technology, Fudan University, Shanghai 200438, China; Academy for Engineering and Technology, Fudan University, Shanghai 200433, China
4. Lafata KJ, Wang Y, Konkel B, Yin FF, Bashir MR. Radiomics: a primer on high-throughput image phenotyping. Abdom Radiol (NY) 2022; 47:2986-3002. PMID: 34435228; DOI: 10.1007/s00261-021-03254-x.
Abstract
Radiomics is a high-throughput approach to image phenotyping. It uses computer algorithms to extract and analyze a large number of quantitative features from radiological images. These radiomic features collectively describe unique patterns that can serve as digital fingerprints of disease. They may also capture imaging characteristics that are difficult or impossible to characterize by the human eye. The rapid development of this field is motivated by systems biology, facilitated by data analytics, and powered by artificial intelligence. Here, as part of Abdominal Radiology's special issue on Quantitative Imaging, we provide an introduction to the field of radiomics. The technique is formally introduced as an advanced application of data analytics, with illustrating examples in abdominal radiology. Artificial intelligence is then presented as the main driving force of radiomics, and common techniques are defined and briefly compared. The complete step-by-step process of radiomic phenotyping is then broken down into five key phases. Potential pitfalls of each phase are highlighted, and recommendations are provided to reduce sources of variation, non-reproducibility, and error associated with radiomics.
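To make the feature-extraction phase of radiomic phenotyping concrete, here is a minimal sketch of a few first-order radiomic features computed over a region of interest (the feature set, helper name, and bin count are illustrative choices of ours, not the primer's definitions):

```python
import numpy as np

def first_order_features(image, mask, bins=16):
    """Compute a few illustrative first-order radiomic features
    (mean, standard deviation, histogram entropy) over the ROI voxels."""
    roi = np.asarray(image, dtype=float)[np.asarray(mask, dtype=bool)]
    hist, _ = np.histogram(roi, bins=bins)
    p = hist / hist.sum()          # discrete intensity distribution
    p = p[p > 0]                   # drop empty bins before taking logs
    return {
        "mean": float(roi.mean()),
        "std": float(roi.std()),
        "entropy": float(-(p * np.log2(p)).sum()),
    }

image = np.arange(16.0).reshape(4, 4)   # synthetic 4x4 "image"
mask = np.ones((4, 4), dtype=bool)      # ROI covers every pixel
feats = first_order_features(image, mask)
print(feats["mean"])  # 7.5
```

Real radiomics pipelines add shape, texture (e.g., GLCM, GLRLM), and filtered-image features on top of such first-order statistics, and the binning choice itself is one of the reproducibility pitfalls the primer warns about.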
Affiliation(s)
- Kyle J Lafata
- Department of Radiology, Duke University School of Medicine, Durham, NC, USA; Department of Radiation Oncology, Duke University School of Medicine, Durham, NC, USA; Department of Electrical & Computer Engineering, Duke University Pratt School of Engineering, Durham, NC, USA
- Yuqi Wang
- Department of Electrical & Computer Engineering, Duke University Pratt School of Engineering, Durham, NC, USA
- Brandon Konkel
- Department of Radiology, Duke University School of Medicine, Durham, NC, USA
- Fang-Fang Yin
- Department of Radiation Oncology, Duke University School of Medicine, Durham, NC, USA
- Mustafa R Bashir
- Department of Radiology, Duke University School of Medicine, Durham, NC, USA; Department of Medicine, Gastroenterology, Duke University School of Medicine, Durham, NC, USA
5. Diagnostic value of coronary computed tomography angiography image under automatic segmentation algorithm for restenosis after coronary stenting. Contrast Media Mol Imaging 2022; 2022:7013703. PMID: 35510177; PMCID: PMC9034947; DOI: 10.1155/2022/7013703.
Abstract
The diagnostic efficacy of coronary computed tomography angiography (CTA) images for restenosis after coronary stenting was explored, based on combining a convolutional neural network (CNN) algorithm with an automatic region-growing segmentation algorithm driven by vascular similarity features, to provide a more effective diagnostic method for patients. 130 patients with coronary artery disease were randomly selected and evenly divided into a control group (conventional coronary CTA image diagnosis) and an observation group (coronary CTA image diagnosis based on the improved automatic segmentation algorithm). With coronary angiography (CAG) as the diagnostic reference standard, the efficacy of the two kinds of coronary CTA images at follow-up after coronary heart disease (CHD) stenting was evaluated. The results showed that the accuracy of the CNN algorithm was 87.89%, and the average voxel error of the improved algorithm was markedly lower than that of the traditional algorithm (1.8921 HU/voxel vs. 7.10091 HU/voxel) (p < 0.05). The average coronary CTA image score in the observation group was higher than in the control group (2.89 ± 0.11 points vs. 2.01 ± 0.73 points) (p < 0.05). The diagnostic sensitivity (91.43%), specificity (86.76%), positive predictive value (88.89%), negative predictive value (89.66%), and accuracy (89.23%) of the observation group were all higher than those of the control group (p < 0.05). In conclusion, the region-growing algorithm combining the CNN algorithm and vascular similarity features achieved accurate segmentation, which is helpful for the diagnosis of restenosis after coronary stenting on CTA images.
6. Ramachandran R, Gobalakrishnan N, Chokkalingam A. Detection of Turner syndrome using hand X-ray using anchor based links segmentation method. Proc Inst Mech Eng H 2022; 236:9544119221075496. PMID: 35118910; DOI: 10.1177/09544119221075496.
Abstract
Turner Syndrome (TS) is a chromosomal disorder in which growth in females is impaired. TS causes a range of developmental and medical issues, including immature ovaries, short stature, and heart abnormalities. TS may be detected before birth, during infancy, or in early childhood; in girls with mild symptoms and signs, the diagnosis is sometimes deferred until adolescence or young adulthood. This study presents an algorithm to segment hand digital X-ray images in children with TS. Image segmentation is a demanding and crucial step in medical image analysis and computer vision. Despite many years of research, prevailing segmentation algorithms still suffer from common problems including under-segmentation, over-segmentation, and spurious or non-closed edges. In this paper, an Anchor Based Link (ABL) segmentation approach is proposed to detect TS based on the fourth metacarpal bone in left-hand X-ray images. TS detection is demonstrated by comparing the proposed approach with existing watershed segmentation and the Gaussian-Mixture-Model-based Hidden-Markov-Random-Field (GMM-HMRF) method. The proposed approach attains better segmentation based on the height-to-width ratio of the left fourth finger, analyzed for normal children and children with TS using the edge pixels of the segmented metacarpal bone. The method was verified on fifty (50) sample X-ray hand images of carpal bones, yielding an average Dice coefficient of 0.60 ± 0.02.
Affiliation(s)
- Ramachandran R
- Research Scholar, Anna University, Chennai, Tamil Nadu, India
- N Gobalakrishnan
- Department of Information Technology, Sri Venkateswara College of Engineering, Sriperumbudur, Chennai, Tamil Nadu, India
- Arun Chokkalingam
- Department of ECE, R.M.K College of Engineering and Technology, Chennai, Tamil Nadu, India
7. Weisman AJ, Kieler MW, Perlman S, Hutchings M, Jeraj R, Kostakoglu L, Bradshaw TJ. Comparison of 11 automated PET segmentation methods in lymphoma. Phys Med Biol 2020; 65:235019. PMID: 32906088; DOI: 10.1088/1361-6560/abb6bd.
Abstract
Segmentation of lymphoma lesions in FDG PET/CT images is critical both for assessing individual lesions and for quantifying patient disease burden. Simple thresholding methods remain common despite the large heterogeneity in lymphoma lesion location, size, and contrast. Here, we assess 11 automated PET segmentation methods for their use in two scenarios: individual lesion segmentation and patient-level disease quantification in lymphoma. Lesions on 18F-FDG PET/CT scans of 90 lymphoma patients were contoured by a nuclear medicine physician. Thresholding, active contours, clustering, adaptive region-growing, and convolutional neural network (CNN) methods were applied to all physician-identified lesions. Lesion-level segmentation was evaluated using multiple segmentation performance metrics (Dice, Hausdorff distance). Patient-level quantification of total disease burden (SUVtotal) and metabolic tumor volume (MTV) was assessed using Spearman's correlation coefficients between the segmentation output and physician contours. Lesion segmentation and patient quantification performance were compared to inter-physician agreement in a subset of 20 patients segmented by a second nuclear medicine physician. In total, 1223 lesions, with a median tumor-to-background ratio of 4.0 and volume of 1.8 cm3, were evaluated. When assessed for lesion segmentation, a 3D CNN, DeepMedic, achieved the highest performance across all evaluation metrics. DeepMedic, clustering methods, and an iterative threshold method had lesion-level segmentation performance comparable to the degree of inter-physician agreement. For patient-level SUVtotal and MTV quantification, all methods except 40% and 50% SUVmax and adaptive region-growing achieved performance similar to the agreement of the two physicians.
Multiple methods, including a 3D CNN, clustering, and an iterative threshold method, achieved both good lesion-level segmentation and patient-level quantification performance in a population of 90 lymphoma patients. These methods are thus recommended over thresholding methods such as 40% and 50% SUVmax, which were consistently found to be significantly outside the limits defined by inter-physician agreement.
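For contrast with the learned methods above, the fixed-threshold baselines this study argues against are trivially simple. A hedged sketch of a 40% SUVmax segmentation (the array values and helper name are ours, not from the paper):

```python
import numpy as np

def suvmax_threshold_mask(suv, fraction=0.40):
    """Fixed-threshold PET segmentation baseline: keep voxels at or above
    `fraction` of the maximum SUV in the (pre-identified) lesion region."""
    suv = np.asarray(suv, dtype=float)
    return suv >= fraction * suv.max()

# Toy 3x3 SUV patch; SUVmax = 10, so the 40% threshold is 4.0
suv = np.array([[1.0, 2.0, 8.0],
                [1.5, 10.0, 9.0],
                [1.0, 3.0, 4.0]])
mask = suvmax_threshold_mask(suv, fraction=0.40)
print(int(mask.sum()))  # 4 voxels reach 4.0: the values 8, 10, 9 and 4
```

Because the threshold depends only on SUVmax, the resulting volume is highly sensitive to noise in the single hottest voxel and to lesion contrast, which is exactly the failure mode the comparison documents.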
Affiliation(s)
- Amy J Weisman
- Department of Medical Physics, University of Wisconsin-Madison, Madison, WI, United States of America
8. Jin D, Guo D, Ho TY, Harrison AP, Xiao J, Tseng CK, Lu L. DeepTarget: Gross tumor and clinical target volume segmentation in esophageal cancer radiotherapy. Med Image Anal 2020; 68:101909. PMID: 33341494; DOI: 10.1016/j.media.2020.101909.
Abstract
Gross tumor volume (GTV) and clinical target volume (CTV) delineation are two critical steps in cancer radiotherapy planning. GTV defines the primary treatment area of the gross tumor, while CTV outlines the sub-clinical malignant disease. Automatic GTV and CTV segmentation are both challenging, for distinct reasons: GTV segmentation relies on the radiotherapy computed tomography (RTCT) image appearance, which suffers from poor contrast with the surrounding tissues, while CTV delineation relies on a mixture of predefined and judgement-based margins. High intra- and inter-user variability makes this a particularly difficult task. We develop tailored methods solving each task in esophageal cancer radiotherapy, together leading to a comprehensive solution for the target contouring task. Specifically, we integrate the RTCT and positron emission tomography (PET) modalities into a two-stream chained deep fusion framework, taking advantage of both modalities to facilitate more accurate GTV segmentation. For CTV segmentation, since it is highly context dependent (it must encompass the GTV and involved lymph nodes while also avoiding excessive exposure of the organs at risk), we formulate it as a deep contextual appearance-based problem using encoded spatial distances of these anatomical structures. This better emulates the margin- and appearance-based CTV delineation performed by oncologists. Adding to our contributions, for GTV segmentation we propose a simple yet effective progressive semantically-nested network (PSNN) backbone that outperforms more complicated models. Our work is the first to provide a comprehensive solution for esophageal GTV and CTV segmentation in radiotherapy planning. Extensive 4-fold cross-validation on 148 esophageal cancer patients, the largest analysis to date, was carried out for both tasks.
The results demonstrate that our GTV and CTV segmentation approaches significantly improve on previous state-of-the-art work, e.g., by an 8.7% increase in Dice score (DSC) and a 32.9 mm reduction in Hausdorff distance (HD) for GTV segmentation, and a 3.4% increase in DSC and a 29.4 mm reduction in HD for CTV segmentation.
Affiliation(s)
- Jing Xiao
- Ping An Technology, Shenzhen, Guangdong, China
- Le Lu
- PAII Inc., Bethesda, MD, USA
9. Li L, Lu W, Tan S. Variational PET/CT tumor co-segmentation integrated with PET restoration. IEEE Trans Radiat Plasma Med Sci 2020; 4:37-49. PMID: 32939423; DOI: 10.1109/trpms.2019.2911597.
Abstract
PET and CT are widely used imaging modalities in radiation oncology. PET imaging has high contrast but blurry tumor edges due to its limited spatial resolution, while CT imaging has high resolution but low contrast between tumor and soft normal tissues. Tumor segmentation from either a single PET or CT image alone is difficult. Co-segmentation methods utilizing the complementary information between PET and CT are known to improve segmentation accuracy. However, this information can be either consistent or inconsistent at the image level, and correctly localizing tumor edges in the presence of inconsistent information is a major challenge for co-segmentation methods. In this study, we proposed a novel variational method for tumor co-segmentation in PET/CT, with a fusion strategy specifically designed to handle the information inconsistency between PET and CT in an adaptive way: the method can automatically decide which modality should be trusted more when PET and CT disagree on the tumor boundary. The proposed method was constructed based on the Γ-convergence approximation of the Mumford-Shah (MS) segmentation model. A PET restoration process was integrated into the co-segmentation, which further eliminates the uncertainty introduced into tumor segmentation by the blurring of tumor edges in PET. The performance of the proposed method was validated on a test dataset of fifty non-small cell lung cancer patients. Experimental results demonstrated that the proposed method achieves high accuracy for PET/CT co-segmentation and PET restoration, and can also accurately estimate the blur kernel of the PET scanner. Even for complex images in which the tumors exhibit fluorodeoxyglucose (FDG) uptake inhomogeneity or invade adjacent soft normal tissues, the proposed method can still accurately segment the tumors.
It achieved an average Dice similarity index (DSI) of 0.85 ± 0.06, volume error (VE) of 0.09 ± 0.08, and classification error (CE) of 0.31 ± 0.13.
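For context, the classical Mumford-Shah functional that such Γ-convergence approximations target can be written as follows (standard textbook notation, not transcribed from the paper; g is the observed image on domain Ω, u a piecewise-smooth approximation, K the edge set, and H¹ the one-dimensional Hausdorff measure, with α, β, γ weighting the smoothness, fidelity, and edge-length terms):

```latex
E(u, K) = \alpha \int_{\Omega \setminus K} |\nabla u|^{2} \, dx
        + \beta \int_{\Omega} (u - g)^{2} \, dx
        + \gamma \, \mathcal{H}^{1}(K)
```

Minimizing E trades off a smooth fit to the image away from edges against the total length of the edge set, which is why Γ-convergence approximations replace the hard-to-handle set K with a smooth edge-indicator function.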
Affiliation(s)
- Laquan Li
- Key Laboratory of Image Processing and Intelligent Control of Ministry of Education of China, School of Automation, Huazhong University of Science and Technology, Wuhan 430074, China
- Wei Lu
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York 10065, USA
- Shan Tan
- Key Laboratory of Image Processing and Intelligent Control of Ministry of Education of China, School of Automation, Huazhong University of Science and Technology, Wuhan 430074, China
10. Stefano A, Comelli A, Bravatà V, Barone S, Daskalovski I, Savoca G, Sabini MG, Ippolito M, Russo G. A preliminary PET radiomics study of brain metastases using a fully automatic segmentation method. BMC Bioinformatics 2020; 21:325. PMID: 32938360; PMCID: PMC7493376; DOI: 10.1186/s12859-020-03647-7.
Abstract
Background: Positron Emission Tomography (PET) is increasingly utilized in radiomics studies for treatment evaluation purposes. Nevertheless, lesion volume identification in PET images is a critical and still challenging step in the radiomics process, due to the low spatial resolution and high noise level of PET images. Currently, the biological target volume (BTV) is manually contoured by nuclear physicians, a time-consuming and operator-dependent procedure. This study aims to obtain BTVs from cerebral metastases in patients who underwent L-[11C]methionine (11C-MET) PET using a fully automatic procedure, and to use these BTVs to extract radiomics features that stratify patients into responders and non-responders to treatment. For these purposes, 31 brain metastases (for predictive evaluation) and 25 (for follow-up evaluation after treatment) were delineated using the proposed method. Subsequently, the 11C-MET PET studies and related volumetric segmentations were used to extract 108 features to investigate the potential application of radiomics analysis in patients with brain metastases. A novel statistical system was implemented for feature reduction and selection, while discriminant analysis was used for feature classification. Results: For predictive evaluation, 3 features (asphericity, low-intensity run emphasis, and complexity) were able to discriminate between responder and non-responder patients after feature reduction and selection. The best performance in patient discrimination was obtained using the combination of the three selected features (sensitivity 81.23%, specificity 73.97%, and accuracy 78.27%) compared to the use of all features.
For follow-up evaluation, 8 features (SUVmean, SULpeak, SUVmin, SULpeak prod-surface-area, SUVmean prod-sphericity, surface mean SUV 3, SULpeak prod-sphericity, and second angular moment) were selected with optimal performance in discriminant analysis classification (sensitivity 86.28%, specificity 87.75%, and accuracy 86.57%), outperforming the use of all features. Conclusions: The proposed system is able (i) to extract 108 features for each automatically segmented lesion and (ii) to select a sub-panel of 11C-MET PET features (3 and 8 in the predictive and follow-up cases, respectively) with valuable association with patient outcome. We believe that our model can be useful to improve treatment response and prognosis evaluation, potentially allowing the personalization of cancer treatment plans.
Affiliation(s)
- Alessandro Stefano
- Institute of Molecular Bioimaging and Physiology, National Research Council (IBFM-CNR), Cefalù, Italy
- Albert Comelli
- Institute of Molecular Bioimaging and Physiology, National Research Council (IBFM-CNR), Cefalù, Italy; Ri.MED Foundation, Palermo, Italy
- Valentina Bravatà
- Institute of Molecular Bioimaging and Physiology, National Research Council (IBFM-CNR), Cefalù, Italy
- Igor Daskalovski
- Department of Physics and Astronomy, University of Catania, Catania, Italy
- Gaetano Savoca
- Institute of Molecular Bioimaging and Physiology, National Research Council (IBFM-CNR), Cefalù, Italy
- Massimo Ippolito
- Nuclear Medicine Department, Cannizzaro Hospital, Catania, Italy
- Giorgio Russo
- Institute of Molecular Bioimaging and Physiology, National Research Council (IBFM-CNR), Cefalù, Italy; Medical Physics Unit, Cannizzaro Hospital, Catania, Italy
11. Tamal M. A hybrid region growing tumour segmentation method for low contrast and high noise Nuclear Medicine (NM) images by combining a novel non-linear diffusion filter and global gradient measure (HNDF-GGM-RG). Heliyon 2019; 5:e02993. PMID: 31879709; PMCID: PMC6920261; DOI: 10.1016/j.heliyon.2019.e02993.
Abstract
Poor spatial resolution and low signal-to-noise ratio (SNR), along with the finite image sampling constraint, make lesion segmentation on Nuclear Medicine (NM) images (e.g., PET, Positron Emission Tomography) a challenging task. Since lesion size, signal-to-background ratio (SBR) and SNR vary within and between patients, the performance of conventional segmentation methods is not consistent against statistical fluctuations. To overcome these limitations, a hybrid region growing segmentation method is proposed, combining a non-linear diffusion filter and a global gradient measure (HNDF-GGM-RG). The performance of the algorithm is validated on PET images and compared with the 40% fixed-threshold (40T) method and a state-of-the-art active contour (AC) method. Segmented volume, Dice similarity coefficient (DSC) and percentage classification error (%CE) were used as the quantitative figures of merit (FOM), using the torso NEMA phantom containing six different sizes of spheres. A 2:1 SBR was created between the spheres and the background, and the phantom was scanned with a Siemens TrueV PET-CT scanner. The 40T method is SNR-dependent and overestimates the volumes (≈ 4.5 times). AC volumes match the true volumes only for the largest three spheres. In contrast, the proposed HNDF-GGM-RG volumes match closely with the true volumes irrespective of size and SNR. Average DSCs of 0.32 and 0.66 and %CEs of 700% and 160% were achieved by the 40T and AC methods, respectively. Conversely, the average DSC and %CE are 0.70 and 60% for HNDF-GGM-RG, and less dependent on SNR. Since a two-sample t-test indicates that the performance difference between AC and HNDF-GGM-RG is statistically significant for the smallest three spheres and similar for the rest, HNDF-GGM-RG can be applied where the size, SBR and SNR are subject to change, either due to alterations in radiotracer uptake because of treatment or uptake variability of different radiotracers arising from differences in their molecular pathways.
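As background for readers unfamiliar with the baseline technique, a minimal 2-D region-growing sketch follows (seed-relative intensity tolerance only; the paper's HNDF-GGM-RG method layers a non-linear diffusion filter and a global gradient stopping criterion on top of this basic idea, neither of which is reproduced here):

```python
from collections import deque
import numpy as np

def region_grow(image, seed, tol):
    """Basic 2-D region growing: starting from `seed`, accept 4-connected
    neighbours whose intensity is within `tol` of the seed intensity."""
    image = np.asarray(image, dtype=float)
    mask = np.zeros(image.shape, dtype=bool)
    seed_val = image[seed]
    queue = deque([seed])
    mask[seed] = True
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < image.shape[0] and 0 <= nc < image.shape[1]
                    and not mask[nr, nc]
                    and abs(image[nr, nc] - seed_val) <= tol):
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask

img = np.array([[9.0, 9.0, 1.0],
                [9.0, 8.0, 1.0],
                [1.0, 1.0, 1.0]])
mask = region_grow(img, seed=(0, 0), tol=2.0)
print(int(mask.sum()))  # 4 connected bright pixels (9, 9, 9, 8)
```

The fixed tolerance is the weak point: it makes the grown region sensitive to noise and to the seed's own value, which is what motivates replacing it with diffusion-filtered intensities and a gradient-based stopping rule.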
Affiliation(s)
- Mahbubunnabi Tamal
- Department of Biomedical Engineering, Imam Abdulrahman Bin Faisal University, PO Box 1982, Dammam, 31441, Saudi Arabia
12
A phantom study to assess the reproducibility, robustness and accuracy of PET image segmentation methods against statistical fluctuations. PLoS One 2019; 14:e0219127. [PMID: 31283779 PMCID: PMC6613706 DOI: 10.1371/journal.pone.0219127] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2018] [Accepted: 06/17/2019] [Indexed: 01/21/2023] Open
Abstract
Background Automatic and semi-automatic segmentation methods for PET serve as alternatives to manual delineation and eliminate observer variability. The robustness of these segmentation methods against statistical fluctuations arising from variable size, contrast and noise is vital for providing reliable clinical outcomes for diagnosis and treatment response assessment. In this study, the performance of several segmentation methods against statistical fluctuations was investigated using the torso NEMA phantom. Methods The six hot spheres (0.5-27 ml) and the background of the phantom were filled with different activities of 18F to yield 2:1 and 4:1 contrast ratios. The phantom was scanned on a TrueV PET-CT scanner for 120 minutes. The images were reconstructed using OSEM (4 iterations, 21 subsets) for different durations (15, 20, 34 and 67 minutes) to represent different noise levels and smoothed with a 4-mm Gaussian filter. Each sphere under each setting was delineated using a fixed 40% threshold (40T), fuzzy c-means (FCM), adaptive threshold and region-based variational (C-V) segmentation methods and compared with the gold-standard volume, which was estimated from the known diameter and position of each sphere. Results The smallest three spheres at the 2:1 contrast level are not evaluable for the 40T method. For the other spheres, the 40T method grossly overestimates the volumes, and the segmented volumes are highly dependent on the statistical variations. These volumes are the least reproducible (80%), with a mean Dice Similarity Coefficient (DSC) of 0.67 and 90% classification error (CE). The other three methods reduce the dependency on noise and contrast in a similar manner by providing low bias (<10%) and CE (<25%) as well as a high DSC (0.88) and reproducibility (30%) for objects >17 mm in diameter.
However, for the smallest three spheres at the 2:1 contrast level, the performance of all three methods was significantly lower, with the adaptive method being superior to FCM and C-V (mean bias 168% vs. 350%, mean DSC 0.65 vs. 0.50, mean CE 227% vs. 454% for the adaptive method versus the other two, which performed approximately alike). Conclusions The segmentation accuracy of the fixed threshold-based method depends on size, contrast and noise. The intensity thresholds determined by the adaptive threshold method are less sensitive to noise, and therefore the segmented volumes are more reproducible across different acquisition durations. A similar performance can be achieved with the FCM and C-V methods. However, for small lesions (<2 cm diameter) with low counts and contrast, the adaptive threshold-based method outperforms the FCM and C-V methods, and the performance of none of these methods is optimal for volumes <2 cm in diameter. These three methods can only reliably be used to delineate tumours for diagnostic and monitoring purposes provided that the contrast between the tumour and background is not below a 2:1 ratio and the size of the tumour does not fall below 2 cm in diameter in response to treatment. They can also be used for different radiotracers with variable uptake. However, the FCM and C-V methods have the advantage of not requiring calibrations for different scanners and settings.
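The fixed 40% threshold (40T) delineation compared in this study can be sketched in a few lines: every voxel at or above 40% of the maximum uptake in the region of interest is included in the segmented volume. This is a hypothetical illustration of the general technique, not the study's implementation:

```python
import numpy as np

def fixed_threshold_segmentation(volume, fraction=0.40):
    """Return a binary mask of voxels at or above `fraction` of the ROI maximum."""
    threshold = fraction * volume.max()
    return volume >= threshold

# Toy 1-D uptake profile: a "hot sphere" peaking at 10 over a background of 2.
uptake = np.array([2.0, 3.0, 6.0, 10.0, 7.0, 4.0, 2.0])
mask = fixed_threshold_segmentation(uptake)
print(mask.astype(int))  # [0 0 1 1 1 1 0]
```

Because the threshold is tied to the single maximum voxel, noise spikes shift the cutoff directly, which is consistent with the noise dependence the study reports for this method.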
13
Li L, Zhao X, Lu W, Tan S. Deep Learning for Variational Multimodality Tumor Segmentation in PET/CT. Neurocomputing 2019; 392:277-295. [PMID: 32773965 DOI: 10.1016/j.neucom.2018.10.099] [Citation(s) in RCA: 47] [Impact Index Per Article: 9.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
Abstract
Positron emission tomography/computed tomography (PET/CT) imaging can simultaneously acquire functional metabolic information and anatomical information of the human body. How to rationally fuse the complementary information in PET/CT for accurate tumor segmentation is challenging. In this study, a novel deep learning based variational method was proposed to automatically fuse multimodality information for tumor segmentation in PET/CT. A 3D fully convolutional network (FCN) was first designed and trained to produce a probability map from the CT image. The learnt probability map describes the probability of each CT voxel belonging to the tumor or the background, and roughly distinguishes the tumor from its surrounding soft tissues. A fuzzy variational model was then proposed to incorporate the probability map and the PET intensity image for an accurate multimodality tumor segmentation, where the probability map acted as a membership degree prior. A split Bregman algorithm was used to minimize the variational model. The proposed method was validated on a non-small cell lung cancer dataset with 84 PET/CT images. Experimental results demonstrated that: 1) only a few training samples were needed to train the designed network to produce the probability map; 2) the proposed method can be applied to small datasets, as normally seen in clinical research; 3) the proposed method successfully fused the complementary information in PET/CT, and outperformed two existing deep learning-based multimodality segmentation methods as well as other multimodality segmentation methods using traditional fusion strategies (without deep learning); 4) the proposed method performed well for tumor segmentation, even for tumors with Fluorodeoxyglucose (FDG) uptake inhomogeneity and blurred edges (two major challenges in PET single-modality segmentation) and complex surrounding soft tissues (a major challenge in CT single-modality segmentation), achieving an average dice similarity index (DSI) of 0.86 ± 0.05, sensitivity (SE) of 0.86 ± 0.07, positive predictive value (PPV) of 0.87 ± 0.10, volume error (VE) of 0.16 ± 0.12, and classification error (CE) of 0.30 ± 0.12.
Affiliation(s)
- Laquan Li
- Key Laboratory of Image Processing and Intelligent Control of Ministry of Education of China, School of Automation, Huazhong University of Science and Technology, Wuhan 430074, China; College of Science, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Xiangming Zhao
- Key Laboratory of Image Processing and Intelligent Control of Ministry of Education of China, School of Automation, Huazhong University of Science and Technology, Wuhan 430074, China
- Wei Lu
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York 10065, USA
- Shan Tan
- Key Laboratory of Image Processing and Intelligent Control of Ministry of Education of China, School of Automation, Huazhong University of Science and Technology, Wuhan 430074, China
14
Lian C, Ruan S, Denoeux T, Li H, Vera P. Joint Tumor Segmentation in PET-CT Images Using Co-Clustering and Fusion Based on Belief Functions. IEEE Trans Image Process 2019; 28:755-766. [PMID: 30296224 PMCID: PMC8191586 DOI: 10.1109/tip.2018.2872908] [Citation(s) in RCA: 37] [Impact Index Per Article: 7.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/13/2023]
Abstract
Precise delineation of the target tumor is a key factor in ensuring the effectiveness of radiation therapy. While hybrid positron emission tomography-computed tomography (PET-CT) has become a standard imaging tool in the practice of radiation oncology, many existing automatic/semi-automatic methods still perform tumor segmentation on mono-modal images. In this paper, a co-clustering algorithm is proposed to concurrently segment 3D tumors in PET-CT images, considering that the two complementary imaging modalities can combine functional and anatomical information to improve segmentation performance. The theory of belief functions is adopted in the proposed method to model, fuse, and reason with uncertain and imprecise knowledge from noisy and blurry PET-CT images. To ensure reliable segmentation for each modality, the distance metric for the quantification of clustering distortions and spatial smoothness is iteratively adapted during the clustering procedure. On the other hand, to encourage consistent segmentation between different modalities, a specific context term is proposed in the clustering objective function. Moreover, during the iterative optimization process, clustering results for the two distinct modalities are further adjusted via a belief-functions-based information fusion strategy. The proposed method has been evaluated on a data set consisting of 21 paired PET-CT images of non-small cell lung cancer patients. The quantitative and qualitative evaluations show that our proposed method performs well compared with the state-of-the-art methods.
15
Zhao X, Li L, Lu W, Tan S. Tumor co-segmentation in PET/CT using multi-modality fully convolutional neural network. Phys Med Biol 2018; 64:015011. [PMID: 30523964 PMCID: PMC7493812 DOI: 10.1088/1361-6560/aaf44b] [Citation(s) in RCA: 81] [Impact Index Per Article: 13.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
Automatic tumor segmentation from medical images is an important step for computer-aided cancer diagnosis and treatment. Recently, deep learning has been successfully applied to this task, leading to state-of-the-art performance. However, most existing deep learning segmentation methods only work for a single imaging modality. The PET/CT scanner is nowadays widely used in the clinic, and is able to provide both metabolic and anatomical information by integrating PET and CT into the same device. In this study, we proposed a novel multi-modality segmentation method based on a 3D fully convolutional neural network (FCN), which is capable of taking both PET and CT information into account simultaneously for tumor segmentation. The network started with a multi-task training module, in which two parallel sub-segmentation architectures constructed using deep convolutional neural networks (CNNs) were designed to automatically extract feature maps from PET and CT, respectively. A feature fusion module was subsequently designed based on cascaded convolutional blocks, which re-extracted features from the PET/CT feature maps using a weighted cross entropy minimization strategy. The tumor mask was obtained as the output at the end of the network using a softmax function. The effectiveness of the proposed method was validated on a clinical PET/CT dataset of 84 patients with lung cancer. The results demonstrated that the proposed network was effective, fast and robust, and achieved a significant performance gain over CNN-based methods and traditional methods using PET or CT only, two V-net based co-segmentation methods, two variational co-segmentation methods based on fuzzy set theory, and a deep learning co-segmentation method using W-net.
Affiliation(s)
- Xiangming Zhao
- Key Laboratory of Image Processing and Intelligent Control of Ministry of Education of China, School of Automation, Huazhong University of Science and Technology, Wuhan 430074, China
- Laquan Li
- Key Laboratory of Image Processing and Intelligent Control of Ministry of Education of China, School of Automation, Huazhong University of Science and Technology, Wuhan 430074, China
- Wei Lu
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York 10065, USA
- Shan Tan
- Key Laboratory of Image Processing and Intelligent Control of Ministry of Education of China, School of Automation, Huazhong University of Science and Technology, Wuhan 430074, China
16
Tong Y, Udupa JK, Odhner D, Wu C, Schuster SJ, Torigian DA. Disease quantification on PET/CT images without explicit object delineation. Med Image Anal 2018; 51:169-183. [PMID: 30453165 DOI: 10.1016/j.media.2018.11.002] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2018] [Revised: 10/17/2018] [Accepted: 11/09/2018] [Indexed: 10/27/2022]
Abstract
PURPOSE The derivation of quantitative information from images in a clinically practical way continues to face a major hurdle because of image segmentation challenges. This paper presents a novel approach, called automatic anatomy recognition-disease quantification (AAR-DQ), for disease quantification (DQ) on positron emission tomography/computed tomography (PET/CT) images. This approach explores how to decouple DQ methods from explicit dependence on object (e.g., organ) delineation through the use of only object recognition results from our recently developed automatic anatomy recognition (AAR) method to quantify disease burden. METHOD The AAR-DQ process starts off with the AAR approach for modeling anatomy and automatically recognizing objects on low-dose CT images of PET/CT acquisitions. It incorporates novel aspects of model building that relate to finding an optimal disease map for each organ. The parameters of the disease map are estimated from a set of training image data sets including normal subjects and patients with metastatic cancer. The result of recognition for an object on a patient image is the location of a fuzzy model for the object which is optimally adjusted for the image. The model is used as a fuzzy mask on the PET image for estimating a fuzzy disease map for the specific patient and subsequently for quantifying disease based on this map. This process handles blur arising in PET images from partial volume effect entirely through accurate fuzzy mapping to account for heterogeneity and gradation of disease content at the voxel level without explicitly performing correction for the partial volume effect. Disease quantification is performed from the fuzzy disease map in terms of total lesion glycolysis (TLG) and standardized uptake value (SUV) statistics. 
We also demonstrate that the method of disease quantification is applicable even when the "object" of interest is recognized manually with a simple and quick action such as interactively specifying a 3D box ROI. Depending on the degree of automaticity for object and lesion recognition on PET/CT, DQ can be performed at the object level either semi-automatically (DQ-MO) or automatically (DQ-AO), or at the lesion level either semi-automatically (DQ-ML) or automatically. RESULTS We utilized 67 data sets in total: 16 normal data sets used for model building, and 20 phantom data sets plus 31 patient data sets (with various types of metastatic cancer) used for testing the three methods DQ-AO, DQ-MO, and DQ-ML. The parameters of the disease map were estimated using the leave-one-out strategy. The organs of focus were the left and right lungs and the liver, and the disease quantities measured were TLG, SUVMean, and SUVMax. On phantom data sets, the overall errors for the three parameters were approximately 6%, 3%, and 0%, respectively, with TLG error varying from 2% for large "lesions" (37 mm diameter) to 37% for small "lesions" (10 mm diameter). On patient data sets, for non-conspicuous lesions, those overall errors were approximately 19%, 14% and 0%; for conspicuous lesions, these overall errors were approximately 9%, 7%, and 0%, respectively, with errors in estimation being generally smaller for the liver than for the lungs, although without statistical significance. CONCLUSIONS Accurate disease quantification on PET/CT images without performing explicit delineation of lesions is feasible following object recognition. Method DQ-MO generally yields more accurate results than DQ-AO, although the difference is not statistically significant.
Compared to current methods from the literature, almost all of which focus only on lesion-level DQ and not organ-level DQ, our results were comparable for large lesions and were superior for smaller lesions, with less demand on training data and computational resources. DQ-AO and even DQ-MO seem to have the potential for quantifying disease burden body-wide routinely via the AAR-DQ approach.
Affiliation(s)
- Yubing Tong
- Medical Image Processing group, Department of Radiology, 3710 Hamilton Walk, Goddard Building, 6th Floor, Philadelphia, PA 19104, United States
- Jayaram K Udupa
- Medical Image Processing group, Department of Radiology, 3710 Hamilton Walk, Goddard Building, 6th Floor, Philadelphia, PA 19104, United States.
- Dewey Odhner
- Medical Image Processing group, Department of Radiology, 3710 Hamilton Walk, Goddard Building, 6th Floor, Philadelphia, PA 19104, United States
- Caiyun Wu
- Medical Image Processing group, Department of Radiology, 3710 Hamilton Walk, Goddard Building, 6th Floor, Philadelphia, PA 19104, United States
- Stephen J Schuster
- Abramson Cancer Center, Perelman Center for Advanced Medicine, University of Pennsylvania, Philadelphia, PA 19104, United States
- Drew A Torigian
- Medical Image Processing group, Department of Radiology, 3710 Hamilton Walk, Goddard Building, 6th Floor, Philadelphia, PA 19104, United States; Abramson Cancer Center, Perelman Center for Advanced Medicine, University of Pennsylvania, Philadelphia, PA 19104, United States
17
A smart and operator independent system to delineate tumours in Positron Emission Tomography scans. Comput Biol Med 2018; 102:1-15. [PMID: 30219733 DOI: 10.1016/j.compbiomed.2018.09.002] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2018] [Revised: 08/20/2018] [Accepted: 09/06/2018] [Indexed: 12/30/2022]
Abstract
Positron Emission Tomography (PET) imaging has an enormous potential to improve radiation therapy treatment planning, offering complementary functional information with respect to other anatomical imaging approaches. The aim of this study is to develop an operator independent, reliable, and clinically feasible system for biological tumour volume delineation from PET images. Under this design hypothesis, we combine several known approaches in an original way to deploy a system with a high level of automation. The proposed system automatically identifies the optimal region of interest around the tumour and performs a slice-by-slice marching local active contour segmentation. It automatically stops when a "cancer-free" slice is identified. User intervention is limited to drawing an initial rough contour around the cancer region. By design, the algorithm performs the segmentation while minimizing any dependence on the initial input, so that the final result is highly repeatable. To assess the performance under different conditions, our system is evaluated on a dataset comprising five synthetic experiments and fifty oncological lesions located in different anatomical regions (i.e. lung, head and neck, and brain) using PET studies with 18F-fluoro-2-deoxy-d-glucose and 11C-labeled Methionine radiotracers. Results on synthetic lesions demonstrate enhanced performance when compared against the most common PET segmentation methods. In clinical cases, the proposed system produces accurate segmentations (average dice similarity coefficient: 85.36 ± 2.94%, 85.98 ± 3.40%, 88.02 ± 2.75% in the lung, head and neck, and brain regions, respectively) with high agreement with the gold standard (coefficient of determination R2 = 0.98).
We believe that the proposed system could be efficiently used in everyday clinical routine as a medical decision tool, and could provide clinicians with additional information, derived from PET, that can be of use in radiation therapy treatment planning.
18
Riyahi S, Choi W, Liu CJ, Nadeem S, Tan S, Zhong H, Chen W, Wu AJ, Mechalakos JG, Deasy JO, Lu W. Quantification of Local Metabolic Tumor Volume Changes by Registering Blended PET-CT Images for Prediction of Pathologic Tumor Response. Data Driven Treatment Response Assessment and Preterm, Perinatal, and Paediatric Image Analysis 2018. [DOI: 10.1007/978-3-030-00807-9_4] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/28/2022]
19
The first MICCAI challenge on PET tumor segmentation. Med Image Anal 2017; 44:177-195. [PMID: 29268169 DOI: 10.1016/j.media.2017.12.007] [Citation(s) in RCA: 91] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2017] [Revised: 12/07/2017] [Accepted: 12/07/2017] [Indexed: 01/15/2023]
Abstract
INTRODUCTION Automatic functional volume segmentation in PET images is a challenge that has been addressed using a large array of methods. A major limitation for the field has been the lack of a benchmark dataset that would allow direct comparison of the results in the various publications. In the present work, we describe a comparison of recent methods on a large dataset following recommendations by the American Association of Physicists in Medicine (AAPM) task group (TG) 211, which was carried out within a MICCAI (Medical Image Computing and Computer Assisted Intervention) challenge. MATERIALS AND METHODS Organization and funding were provided by France Life Imaging (FLI). A dataset of 176 images combining simulated, phantom and clinical images was assembled. A website allowed the participants to register and download training data (n = 19). Challengers then submitted encapsulated pipelines on an online platform that autonomously ran the algorithms on the testing data (n = 157) and evaluated the results. The methods were ranked according to the arithmetic mean of sensitivity and positive predictive value. RESULTS Sixteen teams registered but only four provided manuscripts and pipeline(s), for a total of 10 methods. In addition, results using two thresholds and the Fuzzy Locally Adaptive Bayesian (FLAB) method were generated. All competing methods except one performed with median accuracy above 0.8. The method with the highest score was the convolutional neural network-based segmentation, which significantly outperformed 9 of the 12 other methods, but not the improved K-Means, Gaussian Mixture Model and Fuzzy C-Means methods. CONCLUSION The most rigorous comparative study of PET segmentation algorithms to date was carried out using a dataset that is the largest used in such studies so far. The hierarchy amongst the methods in terms of accuracy did not depend strongly on the subset of datasets or the metrics (or combination of metrics).
All the methods submitted by the challengers except one demonstrated good performance with median accuracy scores above 0.8.
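The challenge's ranking score, the arithmetic mean of sensitivity and positive predictive value, can be computed from binary masks as in this illustrative sketch (the masks and values are invented for demonstration, not challenge data):

```python
import numpy as np

def challenge_score(seg, ref):
    """Arithmetic mean of sensitivity (TP/(TP+FN)) and PPV (TP/(TP+FP))."""
    seg = seg.astype(bool)
    ref = ref.astype(bool)
    tp = np.logical_and(seg, ref).sum()
    sensitivity = tp / ref.sum()  # fraction of the reference volume recovered
    ppv = tp / seg.sum()          # fraction of the segmentation that is correct
    return 0.5 * (sensitivity + ppv)

# Toy masks: 3 of 4 reference voxels recovered, 1 of 4 segmented voxels spurious.
ref = np.array([0, 1, 1, 1, 1, 0])
seg = np.array([0, 0, 1, 1, 1, 1])
print(round(challenge_score(seg, ref), 3))  # 0.75
```

Averaging the two terms balances under-segmentation (which lowers sensitivity) against over-segmentation (which lowers PPV), so neither failure mode alone can dominate the ranking.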