1
Torkaman M, Jemaa S, Fredrickson J, Fernandez Coimbra A, De Crespigny A, Carano RAD. Comparative analysis of intestinal tumor segmentation in PET CT scans using organ based and whole body deep learning. BMC Med Imaging 2025; 25:52. [PMID: 39962481 PMCID: PMC11834234 DOI: 10.1186/s12880-025-01587-3]
Abstract
BACKGROUND 18F-fluorodeoxyglucose positron emission tomography/computed tomography (FDG-PET/CT) is a valuable imaging tool widely used in the management of cancer patients. Deep learning models excel at segmenting highly metabolic tumors but face challenges in regions with complex anatomy and physiologic tracer uptake, such as the gastrointestinal tract. Despite these challenges, accurate segmentation of gastrointestinal tumors remains important. METHODS Here, we present an international multicenter comparative study of a novel organ-focused approach and a whole-body training method, designed to evaluate how the homogeneity of the training data affects the identification of gastrointestinal tumors. In the organ-focused method, the training data are limited to cases with intestinal tumors, so the network is trained on more homogeneous data with a stronger intestinal tumor signal. The whole-body approach extracts the intestinal tumors from the output of a model trained on whole-body scans. Both approaches were trained on diffuse large B-cell lymphoma (DLBCL) patients from a large multi-center clinical trial (NCT01287741). RESULTS We report an improved mean (±SD) Dice score of 0.78 (±0.21) for the organ-based approach on the hold-out set, compared to 0.63 (±0.30) for the whole-body approach, with a p-value of less than 0.0001. At the lesion level, the proposed organ-based approach also shows increased precision, recall, and F1-score. An independent trial was used to evaluate the generalizability of the proposed method to non-Hodgkin's lymphoma (NHL) patients with follicular lymphoma (FL). CONCLUSION Given the variability in structure and metabolism across tissues in the body, our quantitative findings suggest that organ-focused training enhances intestinal tumor segmentation by leveraging tissue homogeneity in the training data, in contrast to the whole-body training approach, which by its nature relies on a more heterogeneous dataset.
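The reported metrics (volume-level Dice, lesion-level precision/recall/F1) can be illustrated with a minimal sketch; the per-lesion matching rule used here (connected-component Dice above a threshold) is an assumption for illustration, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def dice_score(pred, gt):
    """Volume-level Dice between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom > 0 else 1.0

def lesion_level_metrics(pred, gt, overlap_thresh=0.5):
    """Lesion-level precision/recall/F1 from connected components.

    A predicted lesion counts as a true positive if its Dice with an unmatched
    ground-truth lesion exceeds the threshold (illustrative criterion only).
    """
    pred_lab, n_pred = ndimage.label(pred)
    gt_lab, n_gt = ndimage.label(gt)
    tp, matched_gt = 0, set()
    for i in range(1, n_pred + 1):
        comp = pred_lab == i
        best_j, best_d = None, 0.0
        for j in range(1, n_gt + 1):
            d = dice_score(comp, gt_lab == j)
            if d > best_d:
                best_j, best_d = j, d
        if best_j is not None and best_d >= overlap_thresh and best_j not in matched_gt:
            tp += 1
            matched_gt.add(best_j)
    precision = tp / n_pred if n_pred else 0.0
    recall = tp / n_gt if n_gt else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1
```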
Affiliation(s)
- Richard A D Carano
- Genentech, Inc, South San Francisco, CA, USA
- F. Hoffmann-La Roche Ltd, Basel, Switzerland
2
Stefano A. Challenges and limitations in applying radiomics to PET imaging: Possible opportunities and avenues for research. Comput Biol Med 2024; 179:108827. [PMID: 38964244 DOI: 10.1016/j.compbiomed.2024.108827]
Abstract
Radiomics, the high-throughput extraction of quantitative imaging features from medical images, holds immense potential for advancing precision medicine in oncology and beyond. While radiomics applied to positron emission tomography (PET) imaging offers unique insights into tumor biology and treatment response, it is imperative to elucidate the challenges and constraints inherent in this domain to facilitate its translation into clinical practice. This review examines the challenges and limitations of applying radiomics to PET imaging, synthesizes findings from the last five years (2019-2023), and highlights the significance of addressing these challenges to realize the full clinical potential of radiomics in oncology and molecular imaging. A comprehensive search was conducted across multiple electronic databases, including PubMed, Scopus, and Web of Science, using keywords relevant to radiomics issues in PET imaging. Only studies published in peer-reviewed journals were eligible for inclusion in this review. Although many studies have highlighted the potential of radiomics in predicting treatment response, assessing tumor heterogeneity, enabling risk stratification, and personalizing therapy selection, various challenges regarding the practical implementation of the proposed models still need to be addressed. This review illustrates the challenges and limitations of radiomics in PET imaging across various cancer types, encompassing both phantom and clinical investigations. The analyzed studies highlight the importance of reproducible segmentation methods, standardized pre-processing and post-processing methodologies, and the need for large multicenter studies registered in a centralized database to promote the continuous validation and clinical integration of radiomics into PET imaging.
Affiliation(s)
- Alessandro Stefano
- Institute of Molecular Bioimaging and Physiology, National Research Council (IBFM-CNR), Cefalù, Italy.
3
Li L, Jiang C, Yu L, Zeng X, Zheng S. Efficient model-informed co-segmentation of tumors on PET/CT driven by clustering and classification information. Comput Biol Med 2024; 180:108980. [PMID: 39137668 DOI: 10.1016/j.compbiomed.2024.108980]
Abstract
Automatic tumor segmentation from positron emission tomography (PET) and computed tomography (CT) images plays a critical role in the prevention, diagnosis, and treatment of cancer in radiation oncology. However, segmenting these tumors is challenging due to the heterogeneity of grayscale levels and fuzzy boundaries. To address these issues, this paper proposes an efficient model-informed PET/CT tumor co-segmentation method that combines fuzzy C-means clustering and Bayesian classification information. To alleviate the grayscale heterogeneity of multi-modal images, a novel grayscale similar region term is designed based on the background region information of PET and the foreground region information of CT. An edge stop function is presented to enhance the localization of fuzzy edges by incorporating the fuzzy C-means clustering strategy. To further improve segmentation accuracy, a data fidelity term is introduced based on the distribution characteristics of pixel intensities in PET images. Finally, experimental validation on the head and neck tumor (HECKTOR) and non-small cell lung cancer (NSCLC) datasets yielded values of 0.85, 5.32, and 0.17 for the three key evaluation metrics DSC, RVD, and HD5, respectively. These results indicate that image segmentation methods based on mathematical models perform well in handling grayscale heterogeneity and fuzzy boundaries in multi-modal images.
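The method above builds its region and edge terms on fuzzy C-means memberships. As a rough, generic sketch of standard fuzzy C-means on image intensities (parameters and initialization are illustrative, not the paper's formulation):

```python
import numpy as np

def fuzzy_cmeans(intensities, n_clusters=2, m=2.0, n_iter=50, eps=1e-8):
    """Standard fuzzy C-means on a flattened intensity array.

    Returns cluster centers and the membership matrix (pixels x clusters); such
    memberships could then modulate region or edge terms in a clustering-driven
    active-contour model.
    """
    x = intensities.reshape(-1).astype(float)
    rng = np.random.default_rng(0)
    u = rng.random((x.size, n_clusters))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        um = u ** m
        centers = (um * x[:, None]).sum(axis=0) / (um.sum(axis=0) + eps)
        dist = np.abs(x[:, None] - centers[None, :]) + eps
        # Membership update: inverse relative distances, normalized per pixel.
        u = 1.0 / (dist ** (2.0 / (m - 1)))
        u /= u.sum(axis=1, keepdims=True)
    return centers, u
```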
Affiliation(s)
- Laquan Li
- College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China
- Chuangbo Jiang
- School of Science, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China
- Lei Yu
- Emergency Department, The Second Affiliated Hospital of Chongqing Medical University, Chongqing, 400010, China
- Xianhua Zeng
- College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China
- Shenhai Zheng
- College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China.
4
Zhou L, Wu C, Chen Y, Zhang Z. Multitask connected U-Net: automatic lung cancer segmentation from CT images using PET knowledge guidance. Front Artif Intell 2024; 7:1423535. [PMID: 39247847 PMCID: PMC11377414 DOI: 10.3389/frai.2024.1423535]
Abstract
Lung cancer is a predominant cause of cancer-related mortality worldwide, necessitating precise tumor segmentation of medical images for accurate diagnosis and treatment. However, the intrinsic complexity and variability of tumor morphology pose substantial challenges to segmentation tasks. To address this issue, we propose a multitask connected U-Net model with a teacher-student framework to enhance the effectiveness of lung tumor segmentation. The proposed model and framework integrate PET knowledge into the segmentation process, leveraging complementary information from both CT and PET modalities to improve segmentation performance. Additionally, we implemented a tumor area detection method to further enhance segmentation. In extensive experiments on four datasets, our model achieved an average Dice coefficient of 0.56, surpassing existing methods such as Segformer (0.51), Transformer (0.50), and UctransNet (0.43). These findings validate the efficacy of the proposed method for lung tumor segmentation tasks.
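The teacher-student framework described above injects PET knowledge into the CT segmentation student. Below is a minimal sketch of a distillation-style segmentation loss that mixes hard-label supervision with soft guidance from teacher predictions; the weighting, temperature, and loss form are assumptions, not the paper's exact design.

```python
import torch
import torch.nn.functional as F

def distillation_seg_loss(student_logits, teacher_logits, target,
                          alpha=0.5, temperature=2.0):
    """Combine hard-label segmentation loss with soft teacher guidance.

    student_logits / teacher_logits: (B, C, H, W) raw network outputs;
    target: (B, H, W) integer class labels.
    alpha and temperature are illustrative hyperparameters.
    """
    hard = F.cross_entropy(student_logits, target)
    t_soft = F.softmax(teacher_logits.detach() / temperature, dim=1)
    s_logsoft = F.log_softmax(student_logits / temperature, dim=1)
    soft = F.kl_div(s_logsoft, t_soft, reduction="batchmean") * temperature ** 2
    return (1 - alpha) * hard + alpha * soft
```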
Affiliation(s)
- Lu Zhou
- Traditional Chinese Medicine (Zhong Jing) School, Henan University of Chinese Medicine, Zhengzhou, Henan, China
- Chaoyong Wu
- Shenzhen Hospital, Beijing University of Chinese Medicine, Shenzhen, Guangdong, China
- Yiheng Chen
- Traditional Chinese Medicine (Zhong Jing) School, Henan University of Chinese Medicine, Zhengzhou, Henan, China
- Zhicheng Zhang
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
5
Shiri I, Amini M, Yousefirizi F, Vafaei Sadr A, Hajianfar G, Salimi Y, Mansouri Z, Jenabi E, Maghsudi M, Mainta I, Becker M, Rahmim A, Zaidi H. Information fusion for fully automated segmentation of head and neck tumors from PET and CT images. Med Phys 2024; 51:319-333. [PMID: 37475591 DOI: 10.1002/mp.16615]
Abstract
BACKGROUND PET/CT images combining anatomic and metabolic data provide complementary information that can improve clinical task performance. PET image segmentation algorithms that exploit the available multi-modal information are still lacking. PURPOSE Our study aimed to assess the performance of PET and CT image fusion for gross tumor volume (GTV) segmentation of head and neck cancers (HNCs) utilizing conventional, deep learning (DL), and output-level voting-based fusions. METHODS The current study is based on a total of 328 histologically confirmed HNCs from six different centers. The images were automatically cropped to a 200 × 200 head and neck region box, and CT and PET images were normalized for further processing. Eighteen conventional image-level fusions were implemented. In addition, a modified U2-Net architecture was used as the DL fusion model baseline. Three different input-, layer-, and decision-level information fusions were used. Simultaneous truth and performance level estimation (STAPLE) and majority voting were employed to merge the different segmentation outputs (from PET and from image-level and network-level fusions), that is, output-level information fusion (voting-based fusions). Different networks were trained in a 2D manner with a batch size of 64. Twenty percent of the dataset, stratified by center (20% in each center), was used for final result reporting. Different standard segmentation metrics and conventional PET metrics, such as SUV, were calculated. RESULTS Among single modalities, PET had a reasonable performance with a Dice score of 0.77 ± 0.09, while CT did not perform acceptably and reached a Dice score of only 0.38 ± 0.22. Conventional fusion algorithms obtained a Dice score range of [0.76-0.81], with guided-filter-based context enhancement (GFCE) at the low end, and anisotropic diffusion and Karhunen-Loeve transform fusion (ADF), multi-resolution singular value decomposition (MSVD), and multi-level image decomposition based on latent low-rank representation (MDLatLRR) at the high end. All DL fusion models achieved Dice scores of 0.80. Output-level voting-based models outperformed all other models, achieving superior results with a Dice score of 0.84 for Majority_ImgFus, Majority_All, and Majority_Fast. A mean error of almost zero was achieved for all fusions using SUVpeak, SUVmean, and SUVmedian. CONCLUSION PET/CT information fusion adds significant value to segmentation tasks, considerably outperforming PET-only and CT-only methods. In addition, both conventional image-level and DL fusions achieve competitive results. Meanwhile, output-level voting-based fusion using majority voting of several algorithms results in statistically significant improvements in the segmentation of HNC.
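Output-level fusion by majority voting over several candidate segmentations can be sketched as follows (binary masks only; the specific ensemble members such as Majority_ImgFus are not reproduced here):

```python
import numpy as np

def majority_vote(masks):
    """Fuse a list of binary segmentation masks by per-voxel majority voting.

    A voxel is labeled foreground if more than half of the candidate masks mark
    it; ties are resolved toward background in this illustrative sketch.
    """
    stack = np.stack([m.astype(np.uint8) for m in masks], axis=0)
    votes = stack.sum(axis=0)
    return (votes > stack.shape[0] / 2.0).astype(np.uint8)
```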
Affiliation(s)
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Mehdi Amini
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Fereshteh Yousefirizi
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, British Columbia, Canada
- Alireza Vafaei Sadr
- Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany
- Department of Public Health Sciences, College of Medicine, The Pennsylvania State University, Hershey, USA
- Ghasem Hajianfar
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Yazdan Salimi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Zahra Mansouri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Elnaz Jenabi
- Research Center for Nuclear Medicine, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Mehdi Maghsudi
- Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran
- Ismini Mainta
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Minerva Becker
- Service of Radiology, Geneva University Hospital, Geneva, Switzerland
- Arman Rahmim
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, British Columbia, Canada
- Department of Radiology and Physics, University of British Columbia, Vancouver, Canada
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Geneva University Neurocenter, Geneva University, Geneva, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
6
Alshmrani GM, Ni Q, Jiang R, Muhammed N. Hyper-Dense_Lung_Seg: Multimodal-Fusion-Based Modified U-Net for Lung Tumour Segmentation Using Multimodality of CT-PET Scans. Diagnostics (Basel) 2023; 13:3481. [PMID: 37998617 PMCID: PMC10670323 DOI: 10.3390/diagnostics13223481]
Abstract
Lung cancer accounts for the majority of cancer-related deaths globally and is among the most commonly diagnosed cancers. The segmentation of lung tumours, treatment evaluation, and tumour stage classification have become significantly more accessible with the advent of PET/CT scans, which provide both functional and anatomic data in a single examination. However, integrating images from different modalities can be time-consuming for medical professionals and remains a challenging task. This challenge arises from several factors, including differences in image acquisition techniques, image resolutions, and the inherent variations in the spectral and temporal data captured by different imaging modalities. Artificial Intelligence (AI) methodologies have shown potential for automating image integration and segmentation. To address these challenges, multimodal fusion U-Net architectures (early fusion, late fusion, dense fusion, hyper-dense fusion, and hyper-dense VGG16 U-Net) are proposed for lung tumour segmentation. A Dice score of 73% shows that the hyper-dense VGG16 U-Net is superior to the other four proposed models. The proposed method can potentially aid medical professionals in detecting lung cancer at an early stage.
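Early fusion of PET and CT can be as simple as stacking the co-registered volumes as input channels of a single U-Net, while late fusion concatenates features from per-modality encoders. A minimal early-fusion input stem is sketched below; the published hyper-dense variants are considerably more elaborate.

```python
import torch
import torch.nn as nn

class EarlyFusionStem(nn.Module):
    """First convolutional block of a U-Net that consumes PET and CT jointly."""

    def __init__(self, out_channels=32):
        super().__init__()
        # Two input channels: one for CT, one for the co-registered, resampled PET.
        self.block = nn.Sequential(
            nn.Conv2d(2, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, ct, pet):
        x = torch.cat([ct, pet], dim=1)  # early fusion: channel concatenation
        return self.block(x)
```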
Affiliation(s)
- Goram Mufarah Alshmrani
- School of Computing and Communications, Lancaster University, Lancaster LA1 4YW, UK
- College of Computing and Information Technology, University of Bisha, Bisha 67714, Saudi Arabia
- Qiang Ni
- School of Computing and Communications, Lancaster University, Lancaster LA1 4YW, UK
- Richard Jiang
- School of Computing and Communications, Lancaster University, Lancaster LA1 4YW, UK
- Nada Muhammed
- Computers and Control Engineering Department, Faculty of Engineering, Tanta University, Tanta 31733, Egypt
7
Feuerecker B, Heimer MM, Geyer T, Fabritius MP, Gu S, Schachtner B, Beyer L, Ricke J, Gatidis S, Ingrisch M, Cyran CC. Artificial Intelligence in Oncological Hybrid Imaging. Nuklearmedizin 2023; 62:296-305. [PMID: 37802057 DOI: 10.1055/a-2157-6810]
Abstract
BACKGROUND Artificial intelligence (AI) applications have become increasingly relevant across a broad spectrum of settings in medical imaging. Due to the large amount of imaging data that is generated in oncological hybrid imaging, AI applications are desirable for lesion detection and characterization in primary staging, therapy monitoring, and recurrence detection. Given the rapid developments in machine learning (ML) and deep learning (DL) methods, AI will have a significant impact on the imaging workflow and will eventually improve clinical decision making and outcomes. METHODS AND RESULTS The first part of this narrative review discusses current research, with an introduction to artificial intelligence in oncological hybrid imaging and key concepts in data science. The second part reviews relevant examples with a focus on applications in oncology, as well as challenges and current limitations. CONCLUSION AI applications have the potential to leverage the diagnostic data stream with high efficiency and depth to facilitate automated lesion detection, characterization, and therapy monitoring, and to ultimately improve quality and efficiency throughout the medical imaging workflow. The goal is to generate reproducible, structured, quantitative diagnostic data for evidence-based therapy guidance in oncology. However, significant challenges remain regarding application development, benchmarking, and clinical implementation. KEY POINTS · Hybrid imaging generates a large amount of multimodality medical imaging data with high complexity and depth. · Advanced tools are required to enable fast and cost-efficient processing along the whole radiology value chain. · AI applications promise to facilitate the assessment of oncological disease in hybrid imaging with high quality and efficiency for lesion detection, characterization, and response assessment. The goal is to generate reproducible, structured, quantitative diagnostic data for evidence-based oncological therapy guidance. · Selected applications in three oncological entities (lung, prostate, and neuroendocrine tumors) demonstrate how AI algorithms may impact imaging-based tasks in hybrid imaging and potentially guide clinical decision making.
Affiliation(s)
- Benedikt Feuerecker
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany
- German Cancer Research Center (DKFZ), Partner site Munich, DKTK German Cancer Consortium, Munich, Germany
- Maurice M Heimer
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany
- Thomas Geyer
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany
- Sijing Gu
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany
- Leonie Beyer
- Department of Nuclear Medicine, University Hospital, LMU Munich, Munich, Germany
- Jens Ricke
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany
- Sergios Gatidis
- Department of Radiology, University Hospital Tübingen, Tübingen, Germany
- MPI, Max Planck Institute for Intelligent Systems, Tübingen, Germany
- Michael Ingrisch
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany
- Clemens C Cyran
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany
8
Yu X, He L, Wang Y, Dong Y, Song Y, Yuan Z, Yan Z, Wang W. A deep learning approach for automatic tumor delineation in stereotactic radiotherapy for non-small cell lung cancer using diagnostic PET-CT and planning CT. Front Oncol 2023; 13:1235461. [PMID: 37601687 PMCID: PMC10437048 DOI: 10.3389/fonc.2023.1235461]
Abstract
Introduction Accurate delineation of tumor targets is crucial for stereotactic body radiation therapy (SBRT) of non-small cell lung cancer (NSCLC). This study aims to develop a deep learning-based segmentation approach to accurately and efficiently delineate NSCLC targets using diagnostic PET-CT and SBRT planning CT (pCT). Methods The diagnostic PET was registered to the pCT using the transform matrix obtained from registering the diagnostic CT to the pCT. We propose a 3D-UNet-based method to segment NSCLC tumor targets on dual-modality PET-pCT images. The network contains squeeze-and-excitation and residual blocks in each convolutional block to perform dynamic channel-wise feature recalibration. Furthermore, up-sampling paths are added to supplement low-resolution features to the model and to compute the overall loss function. The Dice similarity coefficient (DSC), precision, recall, and average symmetric surface distance were used to assess the performance of the proposed approach on 86 pairs of diagnostic PET and pCT images. The proposed model using dual-modality images was compared with both conventional architectures and single-modality input. Results The average DSC of the proposed model with both PET and pCT images was 0.844, compared to 0.795 and 0.827 for 3D-UNet and nnU-Net, respectively. It also outperformed the same network using either pCT or PET alone, which yielded DSCs of 0.823 and 0.732, respectively. Discussion Our proposed segmentation approach therefore outperforms the current 3D-UNet network on diagnostic PET and pCT images; the integration of the two image modalities helps improve segmentation accuracy.
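The network described inserts squeeze-and-excitation (SE) and residual blocks into each convolutional stage. A compact, generic 3D SE block is sketched below; channel counts and the reduction ratio are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class SEBlock3D(nn.Module):
    """Squeeze-and-excitation: channel-wise feature recalibration for 3D maps."""

    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)      # squeeze: global context per channel
        self.fc = nn.Sequential(                 # excitation: channel gating weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c = x.shape[:2]
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1, 1)
        return x * w  # recalibrate channels by their learned importance
```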
Affiliation(s)
- Xuyao Yu
- Department of Radiation Oncology, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Tianjin’s Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin, China
- Tianjin Medical University, Tianjin, China
- Lian He
- Perception Vision Medical Technologies Co Ltd, Guangzhou, China
- Yuwen Wang
- Department of Radiotherapy, Tianjin Cancer Hospital Airport Hospital, Tianjin, China
- Yang Dong
- Department of Radiation Oncology, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Tianjin’s Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin, China
- Yongchun Song
- Department of Radiation Oncology, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Tianjin’s Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin, China
- Zhiyong Yuan
- Department of Radiation Oncology, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Tianjin’s Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin, China
- Ziye Yan
- Perception Vision Medical Technologies Co Ltd, Guangzhou, China
- Wei Wang
- Department of Radiation Oncology, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Tianjin’s Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin, China
9
Li L, Jiang C, Wang PSP, Zheng S. 3D PET/CT Tumor Co-Segmentation Based on Background Subtraction Hybrid Active Contour Model. INT J PATTERN RECOGN 2023; 37. [DOI: 10.1142/s0218001423570069]
Abstract
Accurate tumor segmentation in medical images plays an important role in clinical diagnosis and disease analysis. However, medical images usually exhibit considerable complexity, such as the low contrast of computed tomography (CT) or the low spatial resolution of positron emission tomography (PET). In actual radiotherapy planning, multimodal imaging such as PET/CT is often used: PET images provide basic metabolic information and CT images provide anatomical details. In this paper, we propose a 3D PET/CT tumor co-segmentation framework based on an active contour model. First, a new edge stop function (ESF) based on the PET and CT images is defined, which incorporates the grayscale standard deviation of the image and is more effective for blurry medical image edges. Second, we propose a background subtraction model to address the problem of uneven grayscale levels in medical images. In addition, the level set equation is solved with an additive operator splitting (AOS) scheme, which is unconditionally stable and eliminates the dependence on the time step size. Experimental results on a dataset of 50 pairs of PET/CT images of non-small cell lung cancer patients show that the proposed method performs well for tumor segmentation.
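A conventional edge stop function decays toward zero on strong gradients so the evolving contour halts at edges; the paper's ESF additionally folds in local grayscale standard deviation from both PET and CT. A generic single-image sketch (not the paper's exact ESF):

```python
import numpy as np
from scipy import ndimage

def edge_stop_function(image, sigma=1.0):
    """g = 1 / (1 + |grad(G_sigma * I)|^2): near 0 at edges, near 1 in flat regions."""
    smoothed = ndimage.gaussian_filter(image.astype(float), sigma)
    gy, gx = np.gradient(smoothed)
    grad_mag2 = gx ** 2 + gy ** 2
    return 1.0 / (1.0 + grad_mag2)
```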
Affiliation(s)
- Laquan Li
- School of Science, Chongqing University of Posts and Telecommunications, Chongqing 400065, P. R. China
- College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, P. R. China
- Chuangbo Jiang
- School of Science, Chongqing University of Posts and Telecommunications, Chongqing 400065, P. R. China
- Patrick Shen-Pei Wang
- College of Computer and Information Science, Northeastern University, Boston 02115, USA
- Shenhai Zheng
- College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, P. R. China
- College of Computer Science, Chongqing University, Chongqing 400044, P. R. China
10
Wang F, Cheng C, Cao W, Wu Z, Wang H, Wei W, Yan Z, Liu Z. MFCNet: A multi-modal fusion and calibration networks for 3D pancreas tumor segmentation on PET-CT images. Comput Biol Med 2023; 155:106657. [PMID: 36791551 DOI: 10.1016/j.compbiomed.2023.106657]
Abstract
In clinical diagnosis, positron emission tomography and computed tomography (PET-CT) images containing complementary information are fused. Tumor segmentation based on multi-modal PET-CT images is an important part of clinical diagnosis and treatment. However, existing PET-CT tumor segmentation methods mainly focus on fusing positron emission tomography (PET) and computed tomography (CT) features, which weakens modality specificity. In addition, the information interaction between images of different modalities is usually performed by simple addition or concatenation operations, which introduces irrelevant information during multi-modal semantic feature fusion, so effective features cannot be highlighted. To overcome this problem, this paper proposes novel Multi-modal Fusion and Calibration Networks (MFCNet) for tumor segmentation based on three-dimensional PET-CT images. First, a Multi-modal Fusion Down-sampling Block (MFDB) with a residual structure is developed. The proposed MFDB can fuse complementary features of multi-modal images while retaining the unique features of each modality. Second, a Multi-modal Mutual Calibration Block (MMCB) based on the inception structure is designed. The MMCB guides the network to focus on the tumor region by combining decoding features from different branches via an attention mechanism and extracting multi-scale pathological features using convolution kernels of different sizes. The proposed MFCNet is verified on both a public dataset (head and neck cancer) and an in-house dataset (pancreatic cancer). The experimental results indicate that on the public and in-house datasets, the average Dice values of the proposed multi-modal segmentation network are 74.14% and 76.20%, while the average Hausdorff distances are 6.41 and 6.84, respectively. In addition, the experimental results show that the proposed MFCNet outperforms state-of-the-art methods on the two datasets.
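The MFDB described above fuses PET and CT features while keeping modality-specific paths. A simplified residual fusion-and-downsampling block in the same spirit is sketched below; the layer sizes and wiring are assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class FusionDownBlock(nn.Module):
    """Fuse PET and CT feature maps, then downsample, with a residual shortcut."""

    def __init__(self, channels):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv3d(2 * channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm3d(channels),
            nn.ReLU(inplace=True),
        )
        self.down = nn.Conv3d(channels, 2 * channels, kernel_size=2, stride=2)

    def forward(self, feat_pet, feat_ct):
        fused = self.fuse(torch.cat([feat_pet, feat_ct], dim=1))
        fused = fused + feat_ct      # residual shortcut keeps modality-specific detail
        return self.down(fused)      # halve spatial size, double channels
```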
Affiliation(s)
- Fei Wang
- Institute of Biomedical Engineering, School of Communication and Information Engineering, Shanghai University, Shanghai, 200444, China; Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Chao Cheng
- Department of Nuclear Medicine, The First Affiliated Hospital of Naval Medical University (Changhai Hospital), Shanghai, 200433, China
- Weiwei Cao
- Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Zhongyi Wu
- Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Heng Wang
- School of Electronic and Information Engineering, Changchun University of Science and Technology, Changchun, 130022, China
- Wenting Wei
- School of Electronic and Information Engineering, Changchun University of Science and Technology, Changchun, 130022, China
- Zhuangzhi Yan
- Institute of Biomedical Engineering, School of Communication and Information Engineering, Shanghai University, Shanghai, 200444, China.
- Zhaobang Liu
- Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China.
11
Feuerecker B, Heimer MM, Geyer T, Fabritius MP, Gu S, Schachtner B, Beyer L, Ricke J, Gatidis S, Ingrisch M, Cyran CC. Artificial Intelligence in Oncological Hybrid Imaging. ROFO-FORTSCHR RONTG 2023; 195:105-114. [PMID: 36170852 DOI: 10.1055/a-1909-7013]
Abstract
BACKGROUND Artificial intelligence (AI) applications have become increasingly relevant across a broad spectrum of settings in medical imaging. Due to the large amount of imaging data that is generated in oncological hybrid imaging, AI applications are desirable for lesion detection and characterization in primary staging, therapy monitoring, and recurrence detection. Given the rapid developments in machine learning (ML) and deep learning (DL) methods, AI will have a significant impact on the imaging workflow and will eventually improve clinical decision making and outcomes. METHODS AND RESULTS The first part of this narrative review discusses current research, with an introduction to artificial intelligence in oncological hybrid imaging and key concepts in data science. The second part reviews relevant examples with a focus on applications in oncology, as well as challenges and current limitations. CONCLUSION AI applications have the potential to leverage the diagnostic data stream with high efficiency and depth to facilitate automated lesion detection, characterization, and therapy monitoring, and to ultimately improve quality and efficiency throughout the medical imaging workflow. The goal is to generate reproducible, structured, quantitative diagnostic data for evidence-based therapy guidance in oncology. However, significant challenges remain regarding application development, benchmarking, and clinical implementation. KEY POINTS · Hybrid imaging generates a large amount of multimodality medical imaging data with high complexity and depth. · Advanced tools are required to enable fast and cost-efficient processing along the whole radiology value chain. · AI applications promise to facilitate the assessment of oncological disease in hybrid imaging with high quality and efficiency for lesion detection, characterization, and response assessment. The goal is to generate reproducible, structured, quantitative diagnostic data for evidence-based oncological therapy guidance. · Selected applications in three oncological entities (lung, prostate, and neuroendocrine tumors) demonstrate how AI algorithms may impact imaging-based tasks in hybrid imaging and potentially guide clinical decision making. CITATION FORMAT · Feuerecker B, Heimer M, Geyer T et al. Artificial Intelligence in Oncological Hybrid Imaging. Fortschr Röntgenstr 2023; 195: 105-114.
Affiliation(s)
- Benedikt Feuerecker
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany
- German Cancer Research Center (DKFZ), Partner site Munich, DKTK German Cancer Consortium, Munich, Germany
- Maurice M Heimer
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany
- Thomas Geyer
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany
- Sijing Gu
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany
- Leonie Beyer
- Department of Nuclear Medicine, University Hospital, LMU Munich, Munich, Germany
- Jens Ricke
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany
- Sergios Gatidis
- Department of Radiology, University Hospital Tübingen, Tübingen, Germany
- MPI, Max Planck Institute for Intelligent Systems, Tübingen, Germany
- Michael Ingrisch
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany
- Clemens C Cyran
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany
12
Zhang X, Zhang B, Deng S, Meng Q, Chen X, Xiang D. Cross modality fusion for modality-specific lung tumor segmentation in PET-CT images. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac994e]
Abstract
Although positron emission tomography-computed tomography (PET-CT) images have been widely used, it is still challenging to accurately segment lung tumors. Respiration, patient movement, and differences between the imaging modalities lead to large discrepancies in lung tumor appearance between PET images and CT images. To overcome these difficulties, a novel network is designed to simultaneously obtain the corresponding lung tumors in PET images and CT images. The proposed network can fuse complementary information while preserving the modality-specific features of PET and CT images. Because of the complementarity between PET and CT images, the two modalities should be fused for automatic lung tumor segmentation. Therefore, cross modality decoding blocks are designed to extract modality-specific features of PET images and CT images under the constraints of the other modality. An edge consistency loss is also designed to address the problem of blurred boundaries in PET and CT images. The proposed method is tested on 126 PET-CT images of non-small cell lung cancer, and the Dice similarity coefficient scores of lung tumor segmentation reach 75.66 ± 19.42 in CT images and 79.85 ± 16.76 in PET images. Extensive comparisons with state-of-the-art lung tumor segmentation methods have also been performed to demonstrate the superiority of the proposed network.
13
Astaraki M, Smedby Ö, Wang C. Prior-aware autoencoders for lung pathology segmentation. Med Image Anal 2022; 80:102491. [DOI: 10.1016/j.media.2022.102491]
14
Ansari AS, Zamani AS, Mohammadi MS, Meenakshi, Ritonga M, Ahmed SS, Pounraj D, Kaliyaperumal K. Detection of Pancreatic Cancer in CT Scan Images Using PSO SVM and Image Processing. BIOMED RESEARCH INTERNATIONAL 2022; 2022:8544337. [PMID: 35928919 PMCID: PMC9345701 DOI: 10.1155/2022/8544337]
Abstract
Pancreatic cancer has one of the worst prognoses of any cancer worldwide, with a very low five-year survival rate. Medical image scans enable abnormalities to be identified at an earlier stage in a significant number of cancer patients, but the expensive cost of the necessary hardware and infrastructure makes it difficult to disseminate the technology, putting it out of reach of many people. This article presents the detection of pancreatic cancer in CT scan images using PSO SVM and image processing. A Gaussian filter is utilized during image preprocessing to remove noise from the images. The K-means algorithm uses a partitioning technique to separate the image into its component parts; this segmentation aids in identifying objects in the image and determining regions of interest. The PCA method is used to extract important information from the images. PSO SVM, naive Bayes, and AdaBoost are the algorithms used to perform classification, with PSO SVM achieving the best accuracy, sensitivity, and specificity.
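The described pipeline (Gaussian denoising, K-means partitioning, PCA feature extraction, SVM classification) can be sketched with scikit-learn; the PSO step that tunes the SVM hyperparameters is omitted, and the region-of-interest and feature choices below are purely illustrative.

```python
import numpy as np
from scipy import ndimage
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def extract_roi_features(ct_slice, n_clusters=3):
    """Denoise, cluster into regions, and return flattened ROI intensities."""
    denoised = ndimage.gaussian_filter(ct_slice.astype(float), sigma=1.0)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(
        denoised.reshape(-1, 1)
    ).reshape(denoised.shape)
    # Take the brightest cluster as a crude region of interest (illustrative choice).
    roi_label = np.argmax([denoised[labels == k].mean() for k in range(n_clusters)])
    roi = np.where(labels == roi_label, denoised, 0.0)
    return roi.reshape(-1)

# Given a feature matrix X (one row per scan) and labels y, a PCA + SVM classifier
# could then be fit as follows (hyperparameters here are illustrative, not PSO-tuned):
# X_red = PCA(n_components=10).fit_transform(X)
# clf = SVC(kernel="rbf", C=1.0).fit(X_red, y)
```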
Affiliation(s)
- Arshiya S. Ansari
- Department of Information Technology, College of Computer and Information Sciences, Majmaah University, Al-Majmaah 11952, Saudi Arabia
- Abu Sarwar Zamani
- Department of Computer and Self Development, Preparatory Year Deanship, Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia
- Mohammad Sajid Mohammadi
- Department of Information Technology, College of Computer, Qassim University, Buraydah, Saudi Arabia
- Meenakshi
- GD Goenka University, Sohna, Haryana, India
- Syed Sohail Ahmed
- Department of Computer Engineering, Qassim University, Buraydah, Saudi Arabia
- Devabalan Pounraj
- BVC Engineering College (Autonomous), Odalarevu, Allavaram Mandal, East Godavari District, Andhra Pradesh, India
15
Li J, Chen H, Li Y, Peng Y, Sun J, Pan P. Cross-modality synthesis aiding lung tumor segmentation on multi-modal MRI images. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103655]
16
Positron Emission Tomography Image Segmentation Based on Atanassov’s Intuitionistic Fuzzy Sets. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12104865]
Abstract
In this paper, we present an approach to fully automate tumor delineation in positron emission tomography (PET) images. PET plays a major role in in vivo oncologic imaging: PET scans are used to evaluate oncology patients by detecting photons emitted from a radiotracer that localizes in abnormal cells. PET image tumor delineation plays a vital role in both the pre- and post-treatment stages. The low spatial resolution and high noise characteristics of PET images increase the challenge of PET image segmentation. Despite the difficulties and known limitations, several image segmentation approaches have been proposed. This paper introduces a new unsupervised approach to perform tumor delineation in PET images using Atanassov’s intuitionistic fuzzy sets (A-IFSs) and restricted dissimilarity functions. Moreover, the implementation of this methodology is presented and tested against other existing methodologies. The proposed algorithm increases the accuracy of tumor delineation in PET images, and the experimental results show that the proposed method outperformed all methods tested.
17
Xue Z, Li P, Zhang L, Lu X, Zhu G, Shen P, Ali Shah SA, Bennamoun M. Multi-Modal Co-Learning for Liver Lesion Segmentation on PET-CT Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:3531-3542. [PMID: 34133275 DOI: 10.1109/tmi.2021.3089702]
Abstract
Liver lesion segmentation is an essential process to assist doctors in hepatocellular carcinoma diagnosis and treatment planning. Multi-modal positron emission tomography and computed tomography (PET-CT) scans are widely utilized for this purpose due to their complementary feature information. However, current methods ignore the interaction of information across the two modalities during feature extraction, omit the co-learning of feature maps of different resolutions, and do not ensure that shallow and deep features complement each other sufficiently. In this paper, our proposed model achieves feature interaction across multi-modal channels by sharing the down-sampling blocks between the two encoding branches to eliminate misleading features. Furthermore, we combine feature maps of different resolutions to derive spatially varying fusion maps and enhance lesion information. In addition, we introduce a similarity loss function as a consistency constraint for cases in which the predictions of the separate refactoring branches for the same regions differ substantially. We evaluate our model for liver tumor segmentation on a PET-CT dataset, compare it with baseline multi-modal techniques (multi-branch, multi-channel, and cascaded networks), and demonstrate that it achieves significantly higher accuracy than the baseline models.
18
Diao Z, Jiang H, Han XH, Yao YD, Shi T. EFNet: evidence fusion network for tumor segmentation from PET-CT volumes. Phys Med Biol 2021; 66. [PMID: 34555816 DOI: 10.1088/1361-6560/ac299a]
Abstract
Precise delineation of the target tumor from positron emission tomography-computed tomography (PET-CT) is a key step in clinical practice and radiation therapy. PET-CT co-segmentation uses the complementary information of the two modalities to reduce the uncertainty of single-modal segmentation and thereby obtain more accurate results. At present, PET-CT segmentation methods based on fully convolutional neural networks (FCNs) mainly adopt image fusion and feature fusion. Current fusion strategies do not consider the uncertainty of multi-modal segmentation, and complex feature fusion consumes more computing resources, especially when dealing with 3D volumes. In this work, we analyze PET-CT co-segmentation from the perspective of uncertainty and propose an evidence fusion network (EFNet). Through the proposed evidence loss, the network outputs a PET result and a CT result that each carry uncertainty; these serve as PET evidence and CT evidence. Evidence fusion is then used to reduce the uncertainty of the single-modal evidence, and the final segmentation result is obtained by fusing the PET evidence and CT evidence. EFNet uses a basic 3D U-Net as backbone and only simple unidirectional feature fusion. In addition, EFNet can train and predict PET evidence and CT evidence separately, without the need for parallel training of two branch networks. We conduct experiments on soft-tissue sarcoma and lymphoma datasets. Compared with 3D U-Net, our proposed method improves the Dice score by 8% and 5%, respectively. Compared with a complex feature fusion method, it improves the Dice score by 7% and 2%, respectively. Our results show that, in FCN-based PET-CT segmentation methods, outputting uncertainty evidence and performing evidence fusion allows the network to be simplified while improving the segmentation results.
Affiliation(s)
- Zhaoshuo Diao
- Software College, Northeastern University, Shenyang 110819, People's Republic of China
- Huiyan Jiang
- Software College, Northeastern University, Shenyang 110819, People's Republic of China
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang 110819, People's Republic of China
- Xian-Hua Han
- Graduate School of Sciences and Technology for Innovation, Yamaguchi University, Yamaguchi-shi 7538511, Japan
- Yu-Dong Yao
- Department of Electrical and Computer Engineering, Stevens Institute of Technology, Hoboken NJ 07030, United States of America
- Tianyu Shi
- Software College, Northeastern University, Shenyang 110819, People's Republic of China
19
Yao C, Wang M, Zhu W, Huang H, Shi F, Chen Z, Wang L, Wang T, Zhou Y, Peng Y, Zhu L, Chen H, Chen X. Joint segmentation of multi-class hyper-reflective foci in retinal optical coherence tomography images. IEEE Trans Biomed Eng 2021; 69:1349-1358. [PMID: 34570700 DOI: 10.1109/tbme.2021.3115552]
Abstract
Hyper-reflective foci (HRF) are spot-shaped or block-shaped areas with high local contrast and high reflectivity, mostly observed in retinal optical coherence tomography (OCT) images of patients with fundus diseases. Clinically, HRF mainly appear as hard exudates (HE) and microglia (MG). Accurate segmentation of HE and MG is essential for alleviating the harm caused by retinal diseases. However, it remains challenging to segment HE and MG simultaneously due to their similar pathological features, varied shapes and location distributions, blurred boundaries, and small size. To tackle these problems, in this paper, we propose a novel global information fusion and dual decoder collaboration-based network (GD-Net), which can segment HE and MG in OCT images jointly. Specifically, to suppress the interference of similar pathological features, a novel global information fusion (GIF) module is proposed, which aggregates global semantic information efficiently. To further improve segmentation performance, we design a dual decoder collaborative workspace (DDCW) that comprehensively utilizes the semantic correlation between HE and MG while alternately feeding back their mutual influence. To further optimize GD-Net, we explore a joint loss function that integrates pixel-level and image-level supervision. The dataset of this study comes from patients diagnosed with diabetic macular edema at the department of ophthalmology, University Medical Center Groningen, the Netherlands. Experimental results show that our proposed method performs better than other state-of-the-art methods, which suggests the effectiveness of the proposed method and provides research ideas for medical applications.
20
Fu X, Bi L, Kumar A, Fulham M, Kim J. Multimodal Spatial Attention Module for Targeting Multimodal PET-CT Lung Tumor Segmentation. IEEE J Biomed Health Inform 2021; 25:3507-3516. [PMID: 33591922 DOI: 10.1109/jbhi.2021.3059453]
Abstract
Multimodal positron emission tomography-computed tomography (PET-CT) is used routinely in the assessment of cancer. PET-CT combines the high sensitivity of PET for tumor detection with the anatomical information of CT. Tumor segmentation is a critical element of PET-CT analysis, but at present the performance of existing automated methods for this challenging task is low. Segmentation tends to be done manually by different imaging experts, which is labor-intensive and prone to errors and inconsistency. Previous automated segmentation methods largely focused on fusing information extracted separately from the PET and CT modalities, under the assumption that each modality contains complementary information; however, they do not fully exploit the high PET tumor sensitivity that can guide the segmentation. We introduce a deep learning-based framework for multimodal PET-CT segmentation with a multimodal spatial attention module (MSAM). The MSAM automatically learns to emphasize regions (spatial areas) related to tumors and suppress normal regions with physiologically high uptake from the PET input. The resulting spatial attention maps are subsequently employed to target a convolutional neural network (CNN) backbone for segmentation of areas with higher tumor likelihood from the CT image. Our experimental results on two clinical PET-CT datasets of non-small cell lung cancer (NSCLC) and soft tissue sarcoma (STS) validate the effectiveness of our framework for these different cancer types. We show that our MSAM, with a conventional U-Net backbone, surpasses the state-of-the-art lung tumor segmentation approach by a margin of 7.6% in Dice similarity coefficient (DSC).
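The core idea of the MSAM is to learn a spatial attention map from the PET input and use it to modulate CT features inside the segmentation backbone. A stripped-down 2D sketch follows; the published module and its training details are more involved.

```python
import torch
import torch.nn as nn

class MultimodalSpatialAttention(nn.Module):
    """Derive a spatial attention map from PET and apply it to CT feature maps.

    Assumes the PET input and the CT feature map share the same spatial size.
    """

    def __init__(self, pet_channels=1, hidden=16):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(pet_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 1, kernel_size=1),
            nn.Sigmoid(),  # per-pixel weights in [0, 1]
        )

    def forward(self, pet, ct_features):
        # The attention map emphasizes high-uptake regions; the residual term keeps
        # the original CT features from being suppressed entirely.
        a = self.attn(pet)
        return ct_features * a + ct_features
```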
21
Kanithi P, de Ruiter NJA, Amma MR, Lindeman RW, Butler APH, Butler PH, Chernoglazov AI, Mandalika VBH, Adebileje SA, Alexander SD, Anjomrouz M, Asghariomabad F, Atharifard A, Atlas J, Bamford B, Bell ST, Bheesette S, Carbonez P, Chambers C, Clark JA, Colgan F, Crighton JS, Dahal S, Damet J, Doesburg RMN, Duncan N, Ghodsian N, Gieseg SP, Goulter BP, Gurney S, Healy JL, Kirkbride T, Lansley SP, Lowe C, Marfo E, Matanaghi A, Moghiseh M, Palmer D, Panta RK, Prebble HM, Raja AY, Renaud P, Sayous Y, Schleich N, Searle E, Sheeja JS, Uddin R, Broeke LV, Vivek VS, Walker EP, Walsh MF, Wijesooriya M, Younger WR. Interactive Image Segmentation of MARS Datasets Using Bag of Features. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2021. [DOI: 10.1109/trpms.2020.3030045]
22
Li S, Jiang H, Li H, Yao YD. AW-SDRLSE: Adaptive Weighting and Scalable Distance Regularized Level Set Evolution for Lymphoma Segmentation on PET Images. IEEE J Biomed Health Inform 2021; 25:1173-1184. [PMID: 32841130 DOI: 10.1109/jbhi.2020.3017546]
Abstract
Accurate lymphoma segmentation on positron emission tomography (PET) images is of great importance for medical diagnosis, such as distinguishing benign from malignant lesions. To this end, this paper proposes an adaptive weighting and scalable distance regularized level set evolution (AW-SDRLSE) method for delineating lymphoma boundaries on 2D PET slices. AW-SDRLSE has three important characteristics: 1) a scalable distance regularization term is proposed, in which a parameter q can theoretically control the contour's convergence rate and precision; 2) a novel dynamic annular mask is proposed to calculate the mean intensities of local interior and exterior regions and thereby define the region energy term; 3) as the level set method is sensitive to parameters, we propose an adaptive weighting strategy for the length and area energy terms using local region intensity and boundary direction information. AW-SDRLSE is evaluated on 90 cases of real PET data with a mean Dice coefficient of 0.8796. Comparative results demonstrate the accuracy and robustness of AW-SDRLSE as well as its performance advantages over related level set methods. In addition, experimental results indicate that AW-SDRLSE can serve as a fine segmentation method that significantly improves the lymphoma segmentation results obtained by deep learning (DL) methods.
23
Abstract
Positron emission tomography (PET)/computed tomography (CT) are nuclear diagnostic imaging modalities that are routinely deployed for cancer staging and monitoring. They hold the advantage of detecting disease-related biochemical and physiologic abnormalities in advance of anatomical changes, and are thus widely used for staging of disease progression, identification of the treatment gross tumor volume, monitoring of disease, and prediction of outcomes and personalization of treatment regimens. Among the arsenal of functional imaging modalities, nuclear imaging has benefited from early adoption of quantitative image analysis, starting from simple standardized uptake value normalization to more advanced extraction of complex uptake patterns, thanks to the application of sophisticated image processing and machine learning algorithms. In this review, we discuss the application of image processing and machine/deep learning techniques to PET/CT imaging, with special focus on the oncological radiotherapy domain as a case study, and draw examples from our work and others to highlight current status and future potentials.
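Standardized uptake value (SUV) normalization, mentioned above as the simplest form of quantitative PET analysis, rescales voxel activity by injected dose and body weight. A minimal sketch (decay correction and unit handling are simplified):

```python
import numpy as np

def suv_body_weight(activity_bqml, injected_dose_bq, body_weight_kg):
    """SUV_bw = tissue activity (Bq/mL) / (injected dose (Bq) / body weight (g)).

    Assumes the activity map is already decay-corrected to injection time and
    that 1 g of tissue corresponds to roughly 1 mL.
    """
    dose_per_gram = injected_dose_bq / (body_weight_kg * 1000.0)
    return np.asarray(activity_bqml, dtype=float) / dose_per_gram
```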
Affiliation(s)
- Lise Wei
- Department of Radiation Oncology, Physics Division, University of Michigan, Ann Arbor, MI
- Issam El Naqa
- Department of Radiation Oncology, Physics Division, University of Michigan, Ann Arbor, MI.
24
Random walkers on morphological trees: A segmentation paradigm. Pattern Recognit Lett 2021. [DOI: 10.1016/j.patrec.2020.11.001]
25
Li L, Lu W, Tan S. Variational PET/CT Tumor Co-segmentation Integrated with PET Restoration. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2020; 4:37-49. [PMID: 32939423 DOI: 10.1109/trpms.2019.2911597]
Abstract
PET and CT are widely used imaging modalities in radiation oncology. PET imaging has high contrast but blurry tumor edges due to its limited spatial resolution, while CT imaging has high resolution but low contrast between tumor and normal soft tissues. Tumor segmentation from either a single PET or CT image alone is difficult. It is known that co-segmentation methods utilizing the complementary information between PET and CT can improve segmentation accuracy. This information can be either consistent or inconsistent at the image level. How to correctly localize tumor edges in the presence of such inconsistent information is a major challenge for co-segmentation methods. In this study, we proposed a novel variational method for tumor co-segmentation in PET/CT, with a fusion strategy specifically designed to handle the information inconsistency between PET and CT in an adaptive way - the method can automatically decide which modality should be trusted more when PET and CT disagree with each other about the tumor boundary. The proposed method was constructed based on the Γ-convergence approximation of the Mumford-Shah (MS) segmentation model. A PET restoration process was integrated into the co-segmentation process, which further eliminates the uncertainty in tumor segmentation introduced by the blurring of tumor edges in PET. The performance of the proposed method was validated on a test dataset of fifty non-small cell lung cancer patients. Experimental results demonstrated that the proposed method had high accuracy for PET/CT co-segmentation and PET restoration, and could also accurately estimate the blur kernel of the PET scanner. For complex images in which the tumors exhibit Fluorodeoxyglucose (FDG) uptake inhomogeneity or even invade adjacent normal soft tissues, the proposed method can still accurately segment the tumors. It achieved an average Dice similarity index (DSI) of 0.85 ± 0.06, volume error (VE) of 0.09 ± 0.08, and classification error (CE) of 0.31 ± 0.13.
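The quality indices reported in this abstract (DSI, VE) follow standard definitions, which can be computed from binary masks as in the generic sketch below; this is not code from the cited study, and the paper's classification-error definition may differ in detail.

```python
import numpy as np

def dice_similarity(seg, ref):
    """DSI = 2|A ∩ B| / (|A| + |B|) for boolean masks."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    denom = seg.sum() + ref.sum()
    return 2.0 * np.logical_and(seg, ref).sum() / denom if denom else 1.0

def volume_error(seg, ref):
    """VE = |V_seg - V_ref| / V_ref, using voxel counts as volumes."""
    return abs(int(seg.sum()) - int(ref.sum())) / float(ref.sum())

seg = np.zeros((32, 32, 32), dtype=bool); seg[10:20, 10:20, 10:20] = True
ref = np.zeros_like(seg);                 ref[11:21, 11:21, 10:20] = True
print(dice_similarity(seg, ref), volume_error(seg, ref))
```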
Affiliation(s)
- Laquan Li
- Key Laboratory of Image Processing and Intelligent Control of Ministry of Education of China, School of Automation, Huazhong University of Science and Technology, Wuhan 430074, China
- Wei Lu
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York 10065, USA
- Shan Tan
- Key Laboratory of Image Processing and Intelligent Control of Ministry of Education of China, School of Automation, Huazhong University of Science and Technology, Wuhan 430074, China
26
Zhao L, Lu Z, Jiang J, Zhou Y, Wu Y, Feng Q. Automatic Nasopharyngeal Carcinoma Segmentation Using Fully Convolutional Networks with Auxiliary Paths on Dual-Modality PET-CT Images. J Digit Imaging 2020; 32:462-470. [PMID: 30719587 DOI: 10.1007/s10278-018-00173-0] [Citation(s) in RCA: 31] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022] Open
Abstract
Nasopharyngeal carcinoma (NPC) is prevalent in certain areas, such as South China, Southeast Asia, and the Middle East. Radiation therapy is the most efficient means to treat this malignant tumor. Positron emission tomography-computed tomography (PET-CT) is a suitable imaging technique to assess this disease. However, the large amount of data produced by numerous patients causes traditional manual delineation of tumor contour, a basic step for radiotherapy, to become time-consuming and labor-intensive. Thus, the demand for automatic and credible segmentation methods to alleviate the workload of radiologists is increasing. This paper presents a method that uses fully convolutional networks with auxiliary paths to achieve automatic segmentation of NPC on PET-CT images. This work is the first to segment NPC using dual-modality PET-CT images. This technique is identical to what is used in clinical practice and offers considerable convenience for subsequent radiotherapy. The deep supervision introduced by auxiliary paths can explicitly guide the training of lower layers, thus enabling these layers to learn more representative features and improve the discriminative capability of the model. Results of threefold cross-validation with a mean dice score of 87.47% demonstrate the efficiency and robustness of the proposed method. The method remarkably outperforms state-of-the-art methods in NPC segmentation. We also validated by experiments that the registration process among different subjects and the auxiliary paths strategy are considerably useful techniques for learning discriminative features and improving segmentation performance.
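Deep supervision through auxiliary paths, as described above, amounts to adding down-weighted losses on intermediate outputs; the sketch below illustrates that idea in PyTorch with hypothetical tensor shapes and weights, and is not the authors' network.

```python
import torch
import torch.nn.functional as F

def deeply_supervised_loss(main_logits, aux_logits_list, target, aux_weights=(0.4, 0.2)):
    """Cross-entropy on the main output plus down-weighted auxiliary outputs.

    Each auxiliary prediction is upsampled to the label resolution so that
    lower layers receive an explicit gradient signal.
    """
    loss = F.cross_entropy(main_logits, target)
    for logits, w in zip(aux_logits_list, aux_weights):
        logits = F.interpolate(logits, size=target.shape[-2:], mode="bilinear",
                               align_corners=False)
        loss = loss + w * F.cross_entropy(logits, target)
    return loss

# Toy shapes: batch of 2, two classes, 64x64 labels, two auxiliary heads.
main = torch.randn(2, 2, 64, 64)
aux = [torch.randn(2, 2, 32, 32), torch.randn(2, 2, 16, 16)]
labels = torch.randint(0, 2, (2, 64, 64))
print(deeply_supervised_loss(main, aux, labels))
```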
Affiliation(s)
- Lijun Zhao
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China
- Zixiao Lu
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China
- Jun Jiang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China
- Yujia Zhou
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China
- Yi Wu
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China
- Qianjin Feng
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China.
27
Qin H, Han J, Li N, Huang H, Chen B. Mass-Driven Topology-Aware Curve Skeleton Extraction from Incomplete Point Clouds. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2020; 26:2805-2817. [PMID: 30869620 DOI: 10.1109/tvcg.2019.2903805] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
We introduce a mass-driven curve skeleton as a curve skeleton representation for 3D point cloud data. The mass-driven curve skeleton presents geometric properties and mass distribution of a curve skeleton simultaneously. The computation of the mass-driven curve skeleton is formulated as a minimization of Wasserstein distance, with an entropic regularization term, between mass distributions of point clouds and curve skeletons. Assuming that the mass of one sampling point should be transported to a line-like structure, a topology-aware rough curve skeleton is extracted via the optimal transport plan. A Dirichlet energy regularization term is then used to obtain a smooth curve skeleton via geometric optimization. Given that rough curve skeleton extraction does not depend on complete point clouds, our algorithm can be directly applied to curve skeleton extraction from incomplete point clouds. We demonstrate that a mass-driven curve skeleton can be directly applied to an unoriented raw point scan with significant noise, outliers and large areas of missing data. In comparison with state-of-the-art methods on curve skeleton extraction, the performance of the proposed mass-driven curve skeleton is more robust in terms of extracting a correct topology.
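The entropy-regularized Wasserstein formulation mentioned above is commonly solved with Sinkhorn iterations; the generic NumPy sketch below shows such a solver on a toy problem and illustrates only the optimal-transport step, not the full skeleton-extraction pipeline.

```python
import numpy as np

def sinkhorn(cost, a, b, eps=0.05, n_iter=200):
    """Entropy-regularised optimal transport between histograms a and b.

    cost[i, j] is the squared distance between cloud point i and skeleton
    node j; the returned plan has row sums a and column sums b.
    """
    K = np.exp(-cost / eps)                  # Gibbs kernel
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(n_iter):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]       # transport plan

# Toy example: 5 cloud points transported to 3 candidate skeleton nodes.
pts = np.random.rand(5, 3)
nodes = np.random.rand(3, 3)
C = ((pts[:, None, :] - nodes[None, :, :]) ** 2).sum(-1)
plan = sinkhorn(C, np.full(5, 1 / 5), np.full(3, 1 / 3))
print(plan.sum(axis=0), plan.sum(axis=1))    # ≈ b and a respectively
```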
28
Cheng DC, Chi JH, Yang SN, Liu SH. Organ Contouring for Lung Cancer Patients with a Seed Generation Scheme and Random Walks. SENSORS (BASEL, SWITZERLAND) 2020; 20:E4823. [PMID: 32858982 PMCID: PMC7506591 DOI: 10.3390/s20174823] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/25/2020] [Revised: 08/20/2020] [Accepted: 08/24/2020] [Indexed: 12/25/2022]
Abstract
In this study, we proposed a semi-automated and interactive scheme for organ contouring in radiotherapy planning for patients with non-small cell lung cancers. Several organs were contoured, including the lungs, airway, heart, spinal cord, body, and gross tumor volume (GTV). We proposed some schemes to automatically generate and vanish the seeds of the random walks (RW) algorithm. We considered 25 lung cancer patients, whose computed tomography (CT) images were obtained from the China Medical University Hospital (CMUH) in Taichung, Taiwan. The manual contours made by clinical oncologists were taken as the gold standard for comparison to evaluate the performance of our proposed method. The Dice coefficient between two contours of the same organ was computed to evaluate the similarity. The average Dice coefficients for the lungs, airway, heart, spinal cord, and body and GTV segmentation were 0.92, 0.84, 0.83, 0.73, 0.85 and 0.66, respectively. The computation time was between 2 to 4 min for a whole CT sequence segmentation. The results showed that our method has the potential to assist oncologists in the process of radiotherapy treatment in the CMUH, and hopefully in other hospitals as well, by saving a tremendous amount of time in contouring.
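A minimal version of the seeded random-walks idea can be written with scikit-image's random_walker; the threshold-based seed generation below is only a stand-in for the paper's seed generation and vanishing scheme, and the HU cut-offs are assumptions.

```python
import numpy as np
from skimage.segmentation import random_walker

def segment_with_auto_seeds(ct_slice, low_hu=-500.0, high_hu=200.0):
    """Label lung-like voxels (1) vs soft tissue (2) with the random walker.

    Very low HU values are marked as foreground seeds, clearly dense voxels
    as background seeds, and the walker labels everything in between.
    """
    seeds = np.zeros(ct_slice.shape, dtype=np.int32)
    seeds[ct_slice < low_hu] = 1          # air/lung seeds
    seeds[ct_slice > high_hu] = 2         # soft tissue / bone seeds
    return random_walker(ct_slice, seeds, beta=130)

# Toy CT-like slice: a dark "lung" region inside brighter tissue.
ct = np.full((128, 128), 50.0)
ct[30:90, 30:90] = -800.0
ct += np.random.normal(0, 20.0, ct.shape)
labels = segment_with_auto_seeds(ct)
```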
Affiliation(s)
- Da-Chuan Cheng
- Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung City 40402, Taiwan;
- Jen-Hong Chi
- Department of Diagnostic Radiology, Singapore General Hospital, Singapore 169608, Singapore;
- Shih-Neng Yang
- Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung City 40402, Taiwan;
- Department of Radiation Oncology, China Medical University Hospital, Taichung City 40447, Taiwan
- Shing-Hong Liu
- Department of Computer Science and Information Engineering, Chaoyang University of Technology, Taichung City 41349, Taiwan
29
Wang S, Nie D, Qu L, Shao Y, Lian J, Wang Q, Shen D. CT Male Pelvic Organ Segmentation via Hybrid Loss Network With Incomplete Annotation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:2151-2162. [PMID: 31940526 PMCID: PMC8195629 DOI: 10.1109/tmi.2020.2966389] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/13/2023]
Abstract
Sufficient data with complete annotation is essential for training deep models to perform automatic and accurate segmentation of CT male pelvic organs, especially when such data is with great challenges such as low contrast and large shape variation. However, manual annotation is expensive in terms of both finance and human effort, which usually results in insufficient completely annotated data in real applications. To this end, we propose a novel deep framework to segment male pelvic organs in CT images with incomplete annotation delineated in a very user-friendly manner. Specifically, we design a hybrid loss network derived from both voxel classification and boundary regression, to jointly improve the organ segmentation performance in an iterative way. Moreover, we introduce a label completion strategy to complete the labels of the rich unannotated voxels and then embed them into the training data to enhance the model capability. To reduce the computation complexity and improve segmentation performance, we locate the pelvic region based on salient bone structures to focus on the candidate segmentation organs. Experimental results on a large planning CT pelvic organ dataset show that our proposed method with incomplete annotation achieves comparable segmentation performance to the state-of-the-art methods with complete annotation. Moreover, our proposed method requires much less effort of manual contouring from medical professionals such that an institutional specific model can be more easily established.
30
Comelli A, Bignardi S, Stefano A, Russo G, Sabini MG, Ippolito M, Yezzi A. Development of a new fully three-dimensional methodology for tumours delineation in functional images. Comput Biol Med 2020; 120:103701. [PMID: 32217282 PMCID: PMC7237290 DOI: 10.1016/j.compbiomed.2020.103701] [Citation(s) in RCA: 25] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2019] [Revised: 03/11/2020] [Accepted: 03/11/2020] [Indexed: 01/15/2023]
Abstract
Delineation of tumours in Positron Emission Tomography (PET) plays a crucial role in accurate diagnosis and radiotherapy treatment planning. In this context, it is of utmost importance to devise efficient and operator-independent segmentation algorithms capable of reconstructing the tumour three-dimensional (3D) shape. In previous work, we proposed a system for 3D tumour delineation on PET data (expressed in terms of Standardized Uptake Value - SUV), based on a two-step approach. Step 1 identified the slice enclosing the maximum SUV and generated a rough contour surrounding it. This contour was then used to initialize step 2, where the 3D shape of the tumour was obtained by separately segmenting 2D PET slices, leveraging the slice-by-slice marching approach. Additionally, we combined active contours and machine learning components to improve performance. Despite its success, the slice marching approach poses unnecessary limitations that are naturally removed by performing the segmentation directly in 3D. In this paper, we migrate our system into 3D. In particular, the segmentation in step 2 is now performed by evolving an active surface directly in the 3D space. The key points of such an advancement are that it performs the shape reconstruction on the whole stack of slices simultaneously, naturally leveraging cross-slice information that could not be exploited before. Additionally, it does not require any specific stopping condition, as the active surface naturally reaches a stable topology once convergence is achieved. Performance of this fully 3D approach is evaluated on the same dataset discussed in our previous work, which comprises fifty PET scans of lung, head and neck, and brain tumours. The results have confirmed that a benefit is indeed achieved in practice for all investigated anatomical districts, both quantitatively, through a set of commonly used quality indicators (Dice similarity coefficient >87.66%, Hausdorff distance < 1.48 voxel and Mahalanobis distance < 0.82 voxel), and qualitatively in terms of Likert score (>3 in 54% of the tumours).
Affiliation(s)
- Albert Comelli
- Ri.MED Foundation, via Bandiera 11, 90133, Palermo, Italy
- Samuel Bignardi
- Department of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, 30332, USA
- Alessandro Stefano
- Institute of Molecular Bioimaging and Physiology, National Research Council (IBFM-CNR), Cefalù, Italy.
- Giorgio Russo
- Institute of Molecular Bioimaging and Physiology, National Research Council (IBFM-CNR), Cefalù, Italy; Medical Physics Unit, Cannizzaro Hospital, Catania, Italy
- Massimo Ippolito
- Nuclear Medicine Department, Cannizzaro Hospital, Catania, Italy
- Anthony Yezzi
- Department of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, 30332, USA
31
Sbei A, ElBedoui K, Barhoumi W, Maktouf C. Gradient-based generation of intermediate images for heterogeneous tumor segmentation within hybrid PET/MRI scans. Comput Biol Med 2020; 119:103669. [PMID: 32339115 DOI: 10.1016/j.compbiomed.2020.103669] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2019] [Revised: 02/17/2020] [Accepted: 02/17/2020] [Indexed: 10/25/2022]
Abstract
Segmentation of tumors from hybrid PET/MRI scans plays an essential role in accurate diagnosis and treatment planning. However, when treating tumors, several challenges, notably heterogeneity and the problem of leaking into surrounding tissues with similar high uptake, have to be considered. To address these issues, we propose an automated method for accurate delineation of tumors in hybrid PET/MRI scans. The method is mainly based on creating intermediate images. In fact, an automatic detection technique that determines a preliminary Interesting Uptake Region (IUR) is firstly performed. To overcome the leakage problem, a separation technique is adopted to generate the final IUR. Then, smart seeds are provided for the Graph Cut (GC) technique to obtain the tumor map. To create intermediate images that tend to reduce heterogeneity faced on the original images, the tumor map gradient is combined with the gradient image. Lastly, segmentation based on the GCsummax technique is applied to the generated images. The proposed method has been validated on PET phantoms as well as on real-world PET/MRI scans of prostate, liver and pancreatic tumors. Experimental comparison revealed the superiority of the proposed method over state-of-the-art methods. This confirms the crucial role of automatically creating intermediate images in addressing the problem of wrongly estimating arc weights for heterogeneous targets.
Affiliation(s)
- Arafet Sbei
- Université de Tunis El Manar, Institut Supérieur d'Informatique, Research Team on Intelligent Systems in Imaging and Artificial Vision (SIIVA), LR16ES06 Laboratoire de recherche en Informatique, Modélisation et Traitement de l'Information et de la Connaissance (LIMTIC), 2 Rue Bayrouni, 2080 Ariana, Tunisia
- Khaoula ElBedoui
- Université de Tunis El Manar, Institut Supérieur d'Informatique, Research Team on Intelligent Systems in Imaging and Artificial Vision (SIIVA), LR16ES06 Laboratoire de recherche en Informatique, Modélisation et Traitement de l'Information et de la Connaissance (LIMTIC), 2 Rue Bayrouni, 2080 Ariana, Tunisia; Université de Carthage, Ecole Nationale d'Ingénieurs de Carthage, 45 Rue des Entrepreneurs, 2035 Tunis-Carthage, Tunisia
- Walid Barhoumi
- Université de Tunis El Manar, Institut Supérieur d'Informatique, Research Team on Intelligent Systems in Imaging and Artificial Vision (SIIVA), LR16ES06 Laboratoire de recherche en Informatique, Modélisation et Traitement de l'Information et de la Connaissance (LIMTIC), 2 Rue Bayrouni, 2080 Ariana, Tunisia; Université de Carthage, Ecole Nationale d'Ingénieurs de Carthage, 45 Rue des Entrepreneurs, 2035 Tunis-Carthage, Tunisia.
- Chokri Maktouf
- Nuclear Medicine Department, Pasteur Institute of Tunis, Tunis, Tunisia
32
Li X, Li B, Liu F, Yin H, Zhou F. Segmentation of Pulmonary Nodules Using a GMM Fuzzy C-Means Algorithm. IEEE ACCESS 2020; 8:37541-37556. [DOI: 10.1109/access.2020.2968936] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/07/2025]
33
Ning Q, Yu X, Gao Q, Xie J, Yao C, Zhou K, Ye J. An accurate interactive segmentation and volume calculation of orbital soft tissue for orbital reconstruction after enucleation. BMC Ophthalmol 2019; 19:256. [PMID: 31842802 PMCID: PMC6916112 DOI: 10.1186/s12886-019-1260-5] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2018] [Accepted: 11/27/2019] [Indexed: 12/18/2022] Open
Abstract
Background Accurate measurement and reconstruction of orbital soft tissue is important to the diagnosis and treatment of orbital diseases. This study applied an interactive graph cut method to precise segmentation and volume calculation of orbital soft tissue in computed tomography (CT) images, and evaluated its application in orbital reconstruction. Methods The interactive graph cut method was introduced to segment extraocular muscle and intraorbital fat in CT images. Intra- and inter-observer variability of tissue volume measured by graph cut segmentation was validated. Accuracy and reliability of the method were assessed by comparison with manual delineation and commercial medical image software. The intraorbital structure of 10 patients after enucleation surgery was reconstructed based on graph cut segmentation, and soft tissue volumes were compared between two different surgical techniques. Results Both muscle and fat tissue segmentation results of the graph cut method showed good consistency with ground truth in phantom data. There were no significant differences in muscle calculations between observers or segmentation methods (p > 0.05). Graph cut results for fat tissue showed a trend consistent with the ground truth and could identify volume variations of 0.1 cm3. The mean performance time of graph cut segmentation was significantly shorter than that of manual delineation and commercial software (p < 0.001). The Jaccard similarity and Dice coefficient of the graph cut method were 0.767 ± 0.045 and 0.836 ± 0.032 for segmentation of normal human extraocular muscle. The measurements of fat tissue were significantly better with graph cut than with commercial software (p < 0.05). Orbital soft tissue volume was lower in post-enucleation orbits than in normal orbits (p < 0.05). Conclusion The graph cut method was validated to have good accuracy, reliability and efficiency in orbital soft tissue segmentation. It could discern minor volume changes of soft tissue. The interactive segmentation technique would be a valuable tool for dynamic analysis, prediction of therapeutic effect, and orbital reconstruction.
Affiliation(s)
- Qingyao Ning
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, No. 88 Jiefang Road, Hangzhou, 310009, Zhejiang Province, China
- Xiaoyao Yu
- State Key Lab of CAD & CG, Zhejiang University, No. 886 Yuhangtang Road, Hangzhou, 310058, Zhejiang Province, China
- Qi Gao
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, No. 88 Jiefang Road, Hangzhou, 310009, Zhejiang Province, China
- Jiajun Xie
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, No. 88 Jiefang Road, Hangzhou, 310009, Zhejiang Province, China
- Chunlei Yao
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, No. 88 Jiefang Road, Hangzhou, 310009, Zhejiang Province, China
- Kun Zhou
- State Key Lab of CAD & CG, Zhejiang University, No. 886 Yuhangtang Road, Hangzhou, 310058, Zhejiang Province, China.
- Juan Ye
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, No. 88 Jiefang Road, Hangzhou, 310009, Zhejiang Province, China.
34
Jeba JA, Devi SN. Efficient graph cut optimization using hybrid kernel functions for segmentation of FDG uptakes in fused PET/CT images. Appl Soft Comput 2019. [DOI: 10.1016/j.asoc.2019.105815] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
35
Huang W, Li H, Wang R, Zhang X, Wang X, Zhang J. A self‐supervised strategy for fully automatic segmentation of renal dynamic contrast‐enhanced magnetic resonance images. Med Phys 2019; 46:4417-4430. [PMID: 31306492 DOI: 10.1002/mp.13715] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2018] [Revised: 05/24/2019] [Accepted: 07/02/2019] [Indexed: 01/10/2023] Open
Affiliation(s)
- Wenjian Huang
- Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China
- Hao Li
- Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China
- Rui Wang
- Department of Radiology, Peking University First Hospital, Beijing, China
- Xiaodong Zhang
- Department of Radiology, Peking University First Hospital, Beijing, China
- Xiaoying Wang
- Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China
- Department of Radiology, Peking University First Hospital, Beijing, China
- Jue Zhang
- Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China
- College of Engineering, Peking University, Beijing, China
36
Kumar A, Fulham M, Feng D, Kim J. Co-Learning Feature Fusion Maps from PET-CT Images of Lung Cancer. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 39:204-217. [PMID: 31217099 DOI: 10.1109/tmi.2019.2923601] [Citation(s) in RCA: 83] [Impact Index Per Article: 13.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/18/2023]
Abstract
The analysis of multi-modality positron emission tomography and computed tomography (PET-CT) images for computer aided diagnosis applications (e.g., detection and segmentation) requires combining the sensitivity of PET to detect abnormal regions with anatomical localization from CT. Current methods for PET-CT image analysis either process the modalities separately or fuse information from each modality based on knowledge about the image analysis task. These methods generally do not consider the spatially varying visual characteristics that encode different information across the different modalities, which have different priorities at different locations. For example, a high abnormal PET uptake in the lungs is more meaningful for tumor detection than physiological PET uptake in the heart. Our aim is to improve fusion of the complementary information in multi-modality PET-CT with a new supervised convolutional neural network (CNN) that learns to fuse complementary information for multi-modality medical image analysis. Our CNN first encodes modality-specific features and then uses them to derive a spatially varying fusion map that quantifies the relative importance of each modality's features across different spatial locations. These fusion maps are then multiplied with the modality-specific feature maps to obtain a representation of the complementary multi-modality information at different locations, which can then be used for image analysis. We evaluated the ability of our CNN to detect and segment multiple regions (lungs, mediastinum, tumors) with different fusion requirements using a dataset of PET-CT images of lung cancer. We compared our method to baseline techniques for multi-modality image fusion (fused inputs (FS), multi-branch (MB) techniques, and multichannel (MC) techniques) and segmentation. Our findings show that our CNN had a significantly higher foreground detection accuracy (99.29%, p < 0.05) than the fusion baselines (FS: 99.00%, MB: 99.08%, MC: 98.92%) and a significantly higher Dice score (63.85%) than recent PET-CT tumor segmentation methods.
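The core idea of a spatially varying fusion map, per-location weights deciding how much to trust each modality's features, can be sketched as follows in PyTorch; the layer sizes and names are hypothetical and this is not the authors' co-learning network.

```python
import torch
import torch.nn as nn

class FusionMap2D(nn.Module):
    """Fuse PET and CT feature maps with learned per-pixel modality weights."""

    def __init__(self, channels):
        super().__init__()
        # Predict one weight map per modality from the concatenated features.
        self.weight_net = nn.Conv2d(2 * channels, 2, kernel_size=3, padding=1)

    def forward(self, feat_pet, feat_ct):
        w = torch.softmax(self.weight_net(torch.cat([feat_pet, feat_ct], dim=1)), dim=1)
        # w[:, 0:1] weights the PET features, w[:, 1:2] the CT features, per pixel.
        return w[:, 0:1] * feat_pet + w[:, 1:2] * feat_ct

# Toy usage with 16-channel feature maps.
fuse = FusionMap2D(16)
fused = fuse(torch.randn(1, 16, 64, 64), torch.randn(1, 16, 64, 64))
print(fused.shape)   # torch.Size([1, 16, 64, 64])
```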
37
Jiang H, Chen X, Shi F, Ma Y, Xiang D, Ye L, Su J, Li Z, Chen Q, Hua Y, Xu X, Zhu W, Fan Y. Improved cGAN based linear lesion segmentation in high myopia ICGA images. BIOMEDICAL OPTICS EXPRESS 2019; 10:2355-2366. [PMID: 31149376 PMCID: PMC6524580 DOI: 10.1364/boe.10.002355] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/09/2019] [Revised: 03/13/2019] [Accepted: 04/08/2019] [Indexed: 05/23/2023]
Abstract
The increasing prevalence of myopia has attracted global attention recently. Linear lesions including lacquer cracks and myopic stretch lines are the main signs in high myopia retinas, and can be revealed by indocyanine green angiography (ICGA). Automatic linear lesion segmentation in ICGA images can help doctors diagnose and analyze high myopia quantitatively. To achieve accurate segmentation of linear lesions, an improved conditional generative adversarial network (cGAN) based method is proposed. A new partial densely connected network is adopted as the generator of cGAN to encourage the reuse of features and make the network time-saving. Dice loss and weighted binary cross-entropy loss are added to solve the data imbalance problem. Experiments on our data set indicated that the proposed network achieved better performance compared to other networks.
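The Dice loss plus weighted binary cross-entropy combination mentioned above is a common recipe for foreground-background imbalance; a generic PyTorch sketch (not the paper's exact formulation or weights) is:

```python
import torch

def dice_wbce_loss(logits, target, pos_weight=10.0, bce_weight=0.5, eps=1e-6):
    """Weighted BCE (rare foreground up-weighted) plus soft Dice loss."""
    prob = torch.sigmoid(logits)
    bce = torch.nn.functional.binary_cross_entropy_with_logits(
        logits, target, pos_weight=torch.tensor(pos_weight))
    inter = (prob * target).sum()
    dice = 1.0 - (2.0 * inter + eps) / (prob.sum() + target.sum() + eps)
    return bce_weight * bce + (1.0 - bce_weight) * dice

# Toy usage on a 64x64 prediction with a sparse lesion mask.
logits = torch.randn(1, 1, 64, 64)
mask = torch.zeros(1, 1, 64, 64)
mask[..., 30:34, 30:34] = 1.0
print(dice_wbce_loss(logits, mask))
```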
Affiliation(s)
- Hongjiu Jiang
- School of Electronic and Information Engineering, Soochow University, Suzhou 215006, China
- contributed equally
- Xinjian Chen
- School of Electronic and Information Engineering, Soochow University, Suzhou 215006, China
- State Key Laboratory of Radiation Medicine and Protection, Soochow University, Suzhou 215123, China
- contributed equally
- Fei Shi
- School of Electronic and Information Engineering, Soochow University, Suzhou 215006, China
- Collaborative Innovation Center of IoT Industrialization and Intelligent Production, Minjiang University, Fuzhou 350108, China
- Yuhui Ma
- School of Electronic and Information Engineering, Soochow University, Suzhou 215006, China
- Dehui Xiang
- School of Electronic and Information Engineering, Soochow University, Suzhou 215006, China
- Lei Ye
- School of Electronic and Information Engineering, Soochow University, Suzhou 215006, China
- Jinzhu Su
- School of Electronic and Information Engineering, Soochow University, Suzhou 215006, China
- Zuoyong Li
- Collaborative Innovation Center of IoT Industrialization and Intelligent Production, Minjiang University, Fuzhou 350108, China
- Qiuying Chen
- Shanghai General Hospital, Shanghai 200080, China
- Yihong Hua
- Shanghai General Hospital, Shanghai 200080, China
- Xun Xu
- Shanghai General Hospital, Shanghai 200080, China
- Weifang Zhu
- School of Electronic and Information Engineering, Soochow University, Suzhou 215006, China
- Collaborative Innovation Center of IoT Industrialization and Intelligent Production, Minjiang University, Fuzhou 350108, China
- Ying Fan
- Shanghai General Hospital, Shanghai 200080, China
38
Tripathi P, Tyagi S, Nath M. A Comparative Analysis of Segmentation Techniques for Lung Cancer Detection. PATTERN RECOGNITION AND IMAGE ANALYSIS 2019. [DOI: 10.1134/s105466181901019x] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
39
Li L, Zhao X, Lu W, Tan S. Deep Learning for Variational Multimodality Tumor Segmentation in PET/CT. Neurocomputing 2019; 392:277-295. [PMID: 32773965 DOI: 10.1016/j.neucom.2018.10.099] [Citation(s) in RCA: 54] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
Abstract
Positron emission tomography/computed tomography (PET/CT) imaging can simultaneously acquire functional metabolic information and anatomical information of the human body. How to rationally fuse the complementary information in PET/CT for accurate tumor segmentation is challenging. In this study, a novel deep learning based variational method was proposed to automatically fuse multimodality information for tumor segmentation in PET/CT. A 3D fully convolutional network (FCN) was first designed and trained to produce a probability map from the CT image. The learnt probability map describes the probability of each CT voxel belonging to the tumor or the background, and roughly distinguishes the tumor from its surrounding soft tissues. A fuzzy variational model was then proposed to incorporate the probability map and the PET intensity image for an accurate multimodality tumor segmentation, where the probability map acted as a membership degree prior. A split Bregman algorithm was used to minimize the variational model. The proposed method was validated on a non-small cell lung cancer dataset with 84 PET/CT images. Experimental results demonstrated that: 1). Only a few training samples were needed for training the designed network to produce the probability map; 2). The proposed method can be applied to small datasets, normally seen in clinic research; 3). The proposed method successfully fused the complementary information in PET/CT, and outperformed two existing deep learning-based multimodality segmentation methods and other multimodality segmentation methods using traditional fusion strategies (without deep learning); 4). The proposed method had a good performance for tumor segmentation, even for those with Fluorodeoxyglucose (FDG) uptake inhomogeneity and blurred tumor edges (two major challenges in PET single modality segmentation) and complex surrounding soft tissues (one major challenge in CT single modality segmentation), and achieved an average dice similarity indexes (DSI) of 0.86 ± 0.05, sensitivity (SE) of 0.86 ± 0.07, positive predictive value (PPV) of 0.87 ± 0.10, volume error (VE) of 0.16 ± 0.12, and classification error (CE) of 0.30 ± 0.12.
Affiliation(s)
- Laquan Li
- Key Laboratory of Image Processing and Intelligent Control of Ministry of Education of China, School of Automation, Huazhong University of Science and Technology, Wuhan 430074, China; College of Science, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Xiangming Zhao
- Key Laboratory of Image Processing and Intelligent Control of Ministry of Education of China, School of Automation, Huazhong University of Science and Technology, Wuhan 430074, China
- Wei Lu
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York 10065, USA
- Shan Tan
- Key Laboratory of Image Processing and Intelligent Control of Ministry of Education of China, School of Automation, Huazhong University of Science and Technology, Wuhan 430074, China
40
Lian C, Ruan S, Denoeux T, Li H, Vera P. Joint Tumor Segmentation in PET-CT Images Using Co-Clustering and Fusion Based on Belief Functions. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2019; 28:755-766. [PMID: 30296224 PMCID: PMC8191586 DOI: 10.1109/tip.2018.2872908] [Citation(s) in RCA: 37] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/13/2023]
Abstract
Precise delineation of target tumor is a key factor to ensure the effectiveness of radiation therapy. While hybrid positron emission tomography-computed tomography (PET-CT) has become a standard imaging tool in the practice of radiation oncology, many existing automatic/semi-automatic methods still perform tumor segmentation on mono-modal images. In this paper, a co-clustering algorithm is proposed to concurrently segment 3D tumors in PET-CT images, considering that the two complementary imaging modalities can combine functional and anatomical information to improve segmentation performance. The theory of belief functions is adopted in the proposed method to model, fuse, and reason with uncertain and imprecise knowledge from noisy and blurry PET-CT images. To ensure reliable segmentation for each modality, the distance metric for the quantification of clustering distortions and spatial smoothness is iteratively adapted during the clustering procedure. On the other hand, to encourage consistent segmentation between different modalities, a specific context term is proposed in the clustering objective function. Moreover, during the iterative optimization process, clustering results for the two distinct modalities are further adjusted via a belief-functions-based information fusion strategy. The proposed method has been evaluated on a data set consisting of 21 paired PET-CT images for non-small cell lung cancer patients. The quantitative and qualitative evaluations show that our proposed method performs well compared with the state-of-the-art methods.
41
Zhong Z, Kim Y, Plichta K, Allen BG, Zhou L, Buatti J, Wu X. Simultaneous cosegmentation of tumors in PET-CT images using deep fully convolutional networks. Med Phys 2019; 46:619-633. [PMID: 30537103 PMCID: PMC6527327 DOI: 10.1002/mp.13331] [Citation(s) in RCA: 50] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2018] [Revised: 11/11/2018] [Accepted: 11/12/2018] [Indexed: 12/19/2022] Open
Abstract
PURPOSE To investigate the use and efficiency of 3-D deep learning fully convolutional networks (DFCN) for simultaneous tumor cosegmentation on dual-modality positron emission tomography (PET)-computed tomography (CT) images of nonsmall cell lung cancer (NSCLC). METHODS We used DFCN cosegmentation for NSCLC tumors in PET-CT images, considering both the CT and PET information. The proposed DFCN-based cosegmentation method consists of two coupled three-dimensional (3D)-UNets with an encoder-decoder architecture, which can communicate with each other in order to share complementary information between PET and CT. The weighted average sensitivity and positive predictive values denoted as Scores, Dice similarity coefficients (DSCs), and the average symmetric surface distances were used to assess the performance of the proposed approach on 60 pairs of PET/CTs. A Simultaneous Truth and Performance Level Estimation (STAPLE) consensus of 3 expert physicians' delineations was used as a reference. The proposed DFCN framework was compared to 3 graph-based cosegmentation methods. RESULTS Strong agreement was observed with the STAPLE references for the proposed DFCN cosegmentation on the PET-CT images. The average DSCs on CT and PET are 0.861 ± 0.037 and 0.828 ± 0.087, respectively, using DFCN, compared to 0.638 ± 0.165 and 0.643 ± 0.141, respectively, when using the graph-based cosegmentation method. The proposed DFCN cosegmentation using both PET and CT also outperforms the deep learning method using either PET or CT alone. CONCLUSIONS The proposed DFCN cosegmentation is able to outperform existing graph-based segmentation methods. The proposed DFCN cosegmentation shows promise for further integration with quantitative multimodality imaging tools in clinical trials.
Affiliation(s)
- Zisha Zhong
- Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA 52242, USA
- Department of Radiation Oncology, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA
- Yusung Kim
- Department of Radiation Oncology, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA
- Kristin Plichta
- Department of Radiation Oncology, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA
- Bryan G. Allen
- Department of Radiation Oncology, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA
- Leixin Zhou
- Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA 52242, USA
- Department of Radiation Oncology, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA
- John Buatti
- Department of Radiation Oncology, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA
- Xiaodong Wu
- Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA 52242, USA
- Department of Radiation Oncology, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA
42
Xu M, Qi S, Yue Y, Teng Y, Xu L, Yao Y, Qian W. Segmentation of lung parenchyma in CT images using CNN trained with the clustering algorithm generated dataset. Biomed Eng Online 2019; 18:2. [PMID: 30602393 PMCID: PMC6317251 DOI: 10.1186/s12938-018-0619-9] [Citation(s) in RCA: 39] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2018] [Accepted: 12/19/2018] [Indexed: 11/24/2022] Open
Abstract
Background Lung segmentation constitutes a critical procedure for any clinical-decision supporting system aimed at improving the early diagnosis and treatment of lung diseases. Abnormal lungs mainly include lung parenchyma, with commonalities on CT images across subjects, diseases and CT scanners, and lung lesions presenting various appearances. Segmentation of lung parenchyma can help locate and analyze the neighboring lesions, but is not well studied in the framework of machine learning. Methods We proposed to segment lung parenchyma using a convolutional neural network (CNN) model. To reduce the workload of manually preparing the dataset for training the CNN, a clustering-algorithm-based method is first proposed. Specifically, after splitting CT slices into image patches, the k-means clustering algorithm with two categories is performed twice, using the mean and minimum intensity of each image patch, respectively. A cross-shaped verification, a volume intersection, a connected component analysis and a patch expansion are then applied to generate the final dataset. Secondly, we design a CNN architecture consisting of only one convolutional layer with six kernels, followed by one maximum pooling layer and two fully connected layers. Using the generated dataset, a variety of CNN models are trained and optimized, and their performances are evaluated by eightfold cross-validation. A separate validation experiment is further conducted using a dataset of 201 subjects (4.62 billion patches) with lung cancer or chronic obstructive pulmonary disease, scanned by CT or PET/CT. The segmentation results of our method are compared with those yielded by manual segmentation and some available methods. Results A total of 121,728 patches are generated to train and validate the CNN models. After parameter optimization, our CNN model achieves an average F-score of 0.9917 and an area under the curve of up to 0.9991 for classification of lung parenchyma and non-lung-parenchyma. The obtained model can segment the lung parenchyma accurately for 201 subjects with heterogeneous lung diseases and CT scanners. The overlap ratio between the manual segmentation and that of our method reaches 0.96. Conclusions The results demonstrated that the proposed clustering-algorithm-based method can generate the training dataset for CNN models. The obtained CNN model can segment lung parenchyma with very satisfactory performance and has the potential to locate and analyze lung lesions.
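The patch classifier summarized above (one convolutional layer with six kernels, one max-pooling layer and two fully connected layers) is small enough to sketch directly; the kernel size, patch size and hidden width below are assumptions, since they are not stated here.

```python
import torch
import torch.nn as nn

class PatchClassifier(nn.Module):
    """Tiny CNN: 1 conv layer (6 kernels) -> max pool -> 2 fully connected layers."""

    def __init__(self, patch_size=32, hidden=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5, padding=2),  # 6 kernels; kernel size assumed
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        flat = 6 * (patch_size // 2) ** 2
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(flat, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, 2),        # lung parenchyma vs non-parenchyma
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Toy usage on a batch of 32x32 CT patches.
model = PatchClassifier()
logits = model(torch.randn(8, 1, 32, 32))
print(logits.shape)   # torch.Size([8, 2])
```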
Affiliation(s)
- Mingjie Xu
- Sino-Dutch Biomedical and Information Engineering School, Northeastern University, No. 195 Chuangxin Avenue, Hunnan District, Shenyang, 110169, China
- Shouliang Qi
- Sino-Dutch Biomedical and Information Engineering School, Northeastern University, No. 195 Chuangxin Avenue, Hunnan District, Shenyang, 110169, China; Key Laboratory of Medical Image Computing of Northeastern University (Ministry of Education), Shenyang, China
- Yong Yue
- Department of Radiology, Shengjing Hospital of China Medical University, No. 36 Sanhao Street, Shenyang, 110004, China
- Yueyang Teng
- Sino-Dutch Biomedical and Information Engineering School, Northeastern University, No. 195 Chuangxin Avenue, Hunnan District, Shenyang, 110169, China
- Lisheng Xu
- Sino-Dutch Biomedical and Information Engineering School, Northeastern University, No. 195 Chuangxin Avenue, Hunnan District, Shenyang, 110169, China
- Yudong Yao
- Sino-Dutch Biomedical and Information Engineering School, Northeastern University, No. 195 Chuangxin Avenue, Hunnan District, Shenyang, 110169, China; Department of Electrical and Computer Engineering, Stevens Institute of Technology, Hoboken, NJ, 07030, USA
- Wei Qian
- Sino-Dutch Biomedical and Information Engineering School, Northeastern University, No. 195 Chuangxin Avenue, Hunnan District, Shenyang, 110169, China; College of Engineering, University of Texas at El Paso, 500 W University, El Paso, TX, 79902, USA
43
Zhao X, Li L, Lu W, Tan S. Tumor co-segmentation in PET/CT using multi-modality fully convolutional neural network. Phys Med Biol 2018; 64:015011. [PMID: 30523964 PMCID: PMC7493812 DOI: 10.1088/1361-6560/aaf44b] [Citation(s) in RCA: 91] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
Automatic tumor segmentation from medical images is an important step for computer-aided cancer diagnosis and treatment. Recently, deep learning has been successfully applied to this task, leading to state-of-the-art performance. However, most of existing deep learning segmentation methods only work for a single imaging modality. PET/CT scanner is nowadays widely used in the clinic, and is able to provide both metabolic information and anatomical information through integrating PET and CT into the same utility. In this study, we proposed a novel multi-modality segmentation method based on a 3D fully convolutional neural network (FCN), which is capable of taking account of both PET and CT information simultaneously for tumor segmentation. The network started with a multi-task training module, in which two parallel sub-segmentation architectures constructed using deep convolutional neural networks (CNNs) were designed to automatically extract feature maps from PET and CT respectively. A feature fusion module was subsequently designed based on cascaded convolutional blocks, which re-extracted features from PET/CT feature maps using a weighted cross entropy minimization strategy. The tumor mask was obtained as the output at the end of the network using a softmax function. The effectiveness of the proposed method was validated on a clinic PET/CT dataset of 84 patients with lung cancer. The results demonstrated that the proposed network was effective, fast and robust and achieved significantly performance gain over CNN-based methods and traditional methods using PET or CT only, two V-net based co-segmentation methods, two variational co-segmentation methods based on fuzzy set theory and a deep learning co-segmentation method using W-net.
Affiliation(s)
- Xiangming Zhao
- Key Laboratory of Image Processing and Intelligent Control of Ministry of Education of China, School of Automation, Huazhong University of Science and Technology, Wuhan 430074, China
- Laquan Li
- Key Laboratory of Image Processing and Intelligent Control of Ministry of Education of China, School of Automation, Huazhong University of Science and Technology, Wuhan 430074, China
- Wei Lu
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York 10065, USA
- Shan Tan
- Key Laboratory of Image Processing and Intelligent Control of Ministry of Education of China, School of Automation, Huazhong University of Science and Technology, Wuhan 430074, China
44
Tong Y, Udupa JK, Odhner D, Wu C, Schuster SJ, Torigian DA. Disease quantification on PET/CT images without explicit object delineation. Med Image Anal 2018; 51:169-183. [PMID: 30453165 DOI: 10.1016/j.media.2018.11.002] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2018] [Revised: 10/17/2018] [Accepted: 11/09/2018] [Indexed: 10/27/2022]
Abstract
PURPOSE The derivation of quantitative information from images in a clinically practical way continues to face a major hurdle because of image segmentation challenges. This paper presents a novel approach, called automatic anatomy recognition-disease quantification (AAR-DQ), for disease quantification (DQ) on positron emission tomography/computed tomography (PET/CT) images. This approach explores how to decouple DQ methods from explicit dependence on object (e.g., organ) delineation through the use of only object recognition results from our recently developed automatic anatomy recognition (AAR) method to quantify disease burden. METHOD The AAR-DQ process starts off with the AAR approach for modeling anatomy and automatically recognizing objects on low-dose CT images of PET/CT acquisitions. It incorporates novel aspects of model building that relate to finding an optimal disease map for each organ. The parameters of the disease map are estimated from a set of training image data sets including normal subjects and patients with metastatic cancer. The result of recognition for an object on a patient image is the location of a fuzzy model for the object which is optimally adjusted for the image. The model is used as a fuzzy mask on the PET image for estimating a fuzzy disease map for the specific patient and subsequently for quantifying disease based on this map. This process handles blur arising in PET images from partial volume effect entirely through accurate fuzzy mapping to account for heterogeneity and gradation of disease content at the voxel level without explicitly performing correction for the partial volume effect. Disease quantification is performed from the fuzzy disease map in terms of total lesion glycolysis (TLG) and standardized uptake value (SUV) statistics. We also demonstrate that the method of disease quantification is applicable even when the "object" of interest is recognized manually with a simple and quick action such as interactively specifying a 3D box ROI. Depending on the degree of automaticity for object and lesion recognition on PET/CT, DQ can be performed at the object level either semi-automatically (DQ-MO) or automatically (DQ-AO), or at the lesion level either semi-automatically (DQ-ML) or automatically. RESULTS We utilized 67 data sets in total: 16 normal data sets used for model building, and 20 phantom data sets plus 31 patient data sets (with various types of metastatic cancer) used for testing the three methods DQ-AO, DQ-MO, and DQ-ML. The parameters of the disease map were estimated using the leave-one-out strategy. The organs of focus were left and right lungs and liver, and the disease quantities measured were TLG, SUVMean, and SUVMax. On phantom data sets, overall error for the three parameters were approximately 6%, 3%, and 0%, respectively, with TLG error varying from 2% for large "lesions" (37 mm diameter) to 37% for small "lesions" (10 mm diameter). On patient data sets, for non-conspicuous lesions, those overall errors were approximately 19%, 14% and 0%; for conspicuous lesions, these overall errors were approximately 9%, 7%, 0%, respectively, with errors in estimation being generally smaller for liver than for lungs, although without statistical significance. CONCLUSIONS Accurate disease quantification on PET/CT images without performing explicit delineation of lesions is feasible following object recognition. Method DQ-MO generally yields more accurate results than DQ-AO although the difference is statistically not significant. 
Compared to current methods from the literature, almost all of which focus only on lesion-level DQ and not organ-level DQ, our results were comparable for large lesions and were superior for smaller lesions, with less demand on training data and computational resources. DQ-AO and even DQ-MO seem to have the potential for quantifying disease burden body-wide routinely via the AAR-DQ approach.
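The disease-burden quantities used above (TLG and SUV statistics) follow standard definitions; the sketch below computes them from an SUV volume and a region mask, with a hypothetical 40%-of-maximum threshold standing in for the paper's fuzzy disease map.

```python
import numpy as np

def disease_quantities(suv, mask, voxel_volume_ml, threshold_fraction=0.4):
    """SUVmax, SUVmean, metabolic volume and TLG inside a region of interest.

    `mask` selects the organ/lesion region; voxels above a fraction of the
    regional SUVmax are counted as metabolically active (assumed threshold).
    """
    region = suv[mask.astype(bool)]
    suv_max = float(region.max())
    active = region[region >= threshold_fraction * suv_max]
    suv_mean = float(active.mean())
    mtv_ml = active.size * voxel_volume_ml          # metabolic tumor volume
    tlg = suv_mean * mtv_ml                         # total lesion glycolysis
    return {"SUVmax": suv_max, "SUVmean": suv_mean, "MTV_ml": mtv_ml, "TLG": tlg}

# Toy usage: a hot 10x10x10 region inside a 3 mm isotropic SUV volume.
suv = np.random.rand(64, 64, 64)
suv[20:30, 20:30, 20:30] += 8.0
roi = np.zeros_like(suv, dtype=bool); roi[15:35, 15:35, 15:35] = True
print(disease_quantities(suv, roi, voxel_volume_ml=0.027))
```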
Affiliation(s)
- Yubing Tong
- Medical Image Processing group, Department of Radiology, 3710 Hamilton Walk, Goddard Building, 6th Floor, Philadelphia, PA 19104, United States
- Jayaram K Udupa
- Medical Image Processing group, Department of Radiology, 3710 Hamilton Walk, Goddard Building, 6th Floor, Philadelphia, PA 19104, United States.
- Dewey Odhner
- Medical Image Processing group, Department of Radiology, 3710 Hamilton Walk, Goddard Building, 6th Floor, Philadelphia, PA 19104, United States
- Caiyun Wu
- Medical Image Processing group, Department of Radiology, 3710 Hamilton Walk, Goddard Building, 6th Floor, Philadelphia, PA 19104, United States
- Stephen J Schuster
- Abramson Cancer Center, Perelman Center for Advanced Medicine, University of Pennsylvania, Philadelphia, PA 19104, United States
- Drew A Torigian
- Medical Image Processing group, Department of Radiology, 3710 Hamilton Walk, Goddard Building, 6th Floor, Philadelphia, PA 19104, United States; Abramson Cancer Center, Perelman Center for Advanced Medicine, University of Pennsylvania, Philadelphia, PA 19104, United States
45
Wang H, Zhang N, Huo L, Zhang B. Dual-modality multi-atlas segmentation of torso organs from [ 18F]FDG-PET/CT images. Int J Comput Assist Radiol Surg 2018; 14:473-482. [PMID: 30390179 DOI: 10.1007/s11548-018-1879-3] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2018] [Accepted: 10/23/2018] [Indexed: 11/28/2022]
Abstract
PURPOSE Automated segmentation of torso organs from positron emission tomography/computed tomography (PET/CT) images is a prerequisite step for nuclear medicine image analysis. However, accurate organ segmentation from clinical PET/CT is challenging due to the poor soft tissue contrast in the low-dose CT image and the low spatial resolution of the PET image. To overcome these challenges, we developed a multi-atlas segmentation (MAS) framework for torso organ segmentation from 2-deoxy-2-[18F]fluoro-D-glucose PET/CT images. METHOD Our key idea is to use PET information to compensate for the imperfect CT contrast and use surface-based atlas fusion to overcome the low PET resolution. First, all the organs are segmented from CT using a conventional MAS method, and then the abdomen region of the PET image is automatically cropped. Focusing on the cropped PET image, a refined MAS segmentation of the abdominal organs is performed, using a surface-based atlas fusion approach to reach subvoxel accuracy. RESULTS This method was validated based on 69 PET/CT images. The Dice coefficients of the target organs were between 0.80 and 0.96, and the average surface distances were between 1.58 and 2.44 mm. Compared to the CT-based segmentation, the PET-based segmentation gained a Dice increase of 0.06 and an ASD decrease of 0.38 mm. The surface-based atlas fusion leads to significant accuracy improvement for the liver and kidneys and saved ~ 10 min computation time compared to volumetric atlas fusion. CONCLUSIONS The presented method achieves better segmentation accuracy than conventional MAS method within acceptable computation time for clinical applications.
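At its simplest, the atlas-fusion step of a MAS pipeline is a per-voxel vote over the registered atlas labels; the sketch below shows plain majority voting in NumPy, whereas the cited work uses a more refined surface-based fusion.

```python
import numpy as np

def majority_vote_fusion(atlas_labels):
    """Fuse N registered atlas segmentations by per-voxel majority vote.

    atlas_labels: array of shape (N, X, Y, Z) with integer organ labels,
    already warped into the target image space.
    """
    atlas_labels = np.asarray(atlas_labels)
    n_labels = int(atlas_labels.max()) + 1
    votes = np.stack([(atlas_labels == k).sum(axis=0) for k in range(n_labels)])
    return votes.argmax(axis=0)

# Toy usage: three atlases that mostly agree on a cubic "organ" (label 1).
atlases = np.zeros((3, 32, 32, 32), dtype=np.int32)
atlases[0, 10:20, 10:20, 10:20] = 1
atlases[1, 11:21, 10:20, 10:20] = 1
atlases[2, 10:20, 9:19, 10:20] = 1
fused = majority_vote_fusion(atlases)
```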
Affiliation(s)
- Hongkai Wang
- Department of Biomedical Engineering, Dalian University of Technology, Dalian, Liaoning, China
- Nan Zhang
- Department of Biomedical Engineering, Dalian University of Technology, Dalian, Liaoning, China
- Li Huo
- Department of Nuclear Medicine, Peking Union Medical College Hospital, Beijing, China
- Bin Zhang
- Department of Biomedical Engineering, Dalian University of Technology, Dalian, Liaoning, China.
46
A smart and operator independent system to delineate tumours in Positron Emission Tomography scans. Comput Biol Med 2018; 102:1-15. [PMID: 30219733 DOI: 10.1016/j.compbiomed.2018.09.002] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2018] [Revised: 08/20/2018] [Accepted: 09/06/2018] [Indexed: 12/30/2022]
Abstract
Positron Emission Tomography (PET) imaging has an enormous potential to improve radiation therapy treatment planning offering complementary functional information with respect to other anatomical imaging approaches. The aim of this study is to develop an operator independent, reliable, and clinically feasible system for biological tumour volume delineation from PET images. Under this design hypothesis, we combine several known approaches in an original way to deploy a system with a high level of automation. The proposed system automatically identifies the optimal region of interest around the tumour and performs a slice-by-slice marching local active contour segmentation. It automatically stops when a "cancer-free" slice is identified. User intervention is limited at drawing an initial rough contour around the cancer region. By design, the algorithm performs the segmentation minimizing any dependence from the initial input, so that the final result is extremely repeatable. To assess the performances under different conditions, our system is evaluated on a dataset comprising five synthetic experiments and fifty oncological lesions located in different anatomical regions (i.e. lung, head and neck, and brain) using PET studies with 18F-fluoro-2-deoxy-d-glucose and 11C-labeled Methionine radio-tracers. Results on synthetic lesions demonstrate enhanced performances when compared against the most common PET segmentation methods. In clinical cases, the proposed system produces accurate segmentations (average dice similarity coefficient: 85.36 ± 2.94%, 85.98 ± 3.40%, 88.02 ± 2.75% in the lung, head and neck, and brain district, respectively) with high agreement with the gold standard (determination coefficient R2 = 0.98). We believe that the proposed system could be efficiently used in the everyday clinical routine as a medical decision tool, and to provide the clinicians with additional information, derived from PET, which can be of use in radiation therapy, treatment, and planning.
|
47
|
Unsupervised change detection using fast fuzzy clustering for landslide mapping from very high-resolution images. Remote Sensing 2018. [DOI: 10.3390/rs10091381] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Change detection approaches based on image segmentation are often used for landslide mapping (LM) from very high-resolution (VHR) remote sensing images. However, these approaches usually have two limitations. One is that they are sensitive to the thresholds used for image segmentation and require too many parameters. The other is that their computational complexity depends on the image size, so they require a long execution time for VHR remote sensing images. In this paper, an unsupervised change detection method based on fast fuzzy c-means clustering (CDFFCM) is proposed for LM. The proposed CDFFCM makes two contributions. First, we employ a Gaussian pyramid-based fast fuzzy c-means (FCM) clustering algorithm to obtain candidate landslide regions that are visually more coherent owing to the use of image spatial information. Second, we use the difference in image structure information instead of the grayscale difference to obtain more accurate landslide regions. Three comparative approaches, edge-based level-set (ELSE), region-based level-set (RLSE), and change detection-based Markov random field (CDMRF), and the proposed CDFFCM are evaluated on three real landslide cases in the Lantau area of Hong Kong. The experiments show that the proposed CDFFCM is superior to the three comparative approaches in terms of higher accuracy, fewer parameters, and shorter execution time.
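Although the application here is remote sensing rather than PET, the clustering core is generic. The following minimal numpy sketch implements fuzzy c-means on pixel intensities, with a Gaussian-pyramid downsampling step standing in, very loosely, for the "fast" variant described above; parameter values are illustrative and this is not the published CDFFCM code.

```python
# Minimal fuzzy c-means (FCM) on pixel intensities, preceded by Gaussian-pyramid
# downsampling as a rough stand-in for the "fast" variant. Illustrative only.
import numpy as np
from skimage.transform import pyramid_reduce

def fcm(x, n_clusters=2, m=2.0, n_iter=50, eps=1e-9, seed=0):
    """Cluster the 1-D feature vector x; returns (cluster centers, memberships)."""
    rng = np.random.default_rng(seed)
    u = rng.random((x.size, n_clusters))
    u /= u.sum(axis=1, keepdims=True)                  # fuzzy memberships sum to 1
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ x) / (um.sum(axis=0) + eps)  # fuzzily weighted centers
        d = np.abs(x[:, None] - centers[None, :]) + eps
        u = 1.0 / d ** (2.0 / (m - 1.0))               # standard FCM membership update
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

if __name__ == "__main__":
    img = np.zeros((128, 128))
    img[40:90, 40:90] = 1.0                            # toy "changed" region
    small = pyramid_reduce(img, downscale=4)           # coarse Gaussian-pyramid level
    centers, u = fcm(small.ravel(), n_clusters=2)
    labels = u.argmax(axis=1).reshape(small.shape)
    print(np.round(centers, 3), int(labels.sum()))
```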
|
48
|
Segmentation of parotid glands from registered CT and MR images. Phys Med 2018; 52:33-41. [PMID: 30139607 DOI: 10.1016/j.ejmp.2018.06.012] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/19/2018] [Revised: 06/11/2018] [Accepted: 06/12/2018] [Indexed: 01/16/2023] Open
Abstract
PURPOSE To develop an automatic multimodal method for segmentation of parotid glands (PGs) from pre-registered computed tomography (CT) and magnetic resonance (MR) images and to compare its results with those of an existing state-of-the-art algorithm that segments PGs from CT images only. METHODS Magnetic resonance images of the head and neck were registered to the accompanying CT images using two different state-of-the-art registration procedures. The reference domains of the registered image pairs were divided into complementary PG regions and background according to the manual delineation of PGs on the CT images provided by a physician. Patches of intensity values from both image modalities, centered on randomly sampled voxels from the reference domain, served as positive or negative samples for training a convolutional neural network (CNN) classifier. The trained CNN accepted a previously unseen (registered) image pair and classified its voxels according to the resemblance of their patches to the patches used for training. The final segmentation was refined using a graph-cut algorithm, followed by dilate-erode operations. RESULTS Using the same image dataset, segmentation of PGs was performed with the proposed multimodal algorithm and with an existing monomodal algorithm, which segments PGs from CT images only. The mean Dice overlap coefficient for the proposed algorithm was 78.8%, compared with 76.5% for the monomodal algorithm. CONCLUSIONS Automatic PG segmentation on the planning CT image can be augmented with the MR image modality, leading to improved radiotherapy (RT) planning for head and neck cancer.
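The training-data preparation described above (paired patches from the two registered modalities, labelled by the reference delineation) might look like the following sketch; patch size, sample counts, and the balanced sampling strategy are assumptions, and the CNN itself and the graph-cut refinement are omitted.

```python
# Illustrative sketch of balanced paired-patch sampling for a two-channel
# (CT + registered MR) patch classifier. All sizes and counts are assumptions.
import numpy as np

def sample_patches(ct, mr, mask, n_per_class=100, half=8, seed=0):
    """Return (patches, labels): 2-channel in-plane patches centred on voxels drawn
    from inside (label 1) or outside (label 0) the reference PG mask."""
    rng = np.random.default_rng(seed)
    patches, labels = [], []
    for label in (1, 0):
        zs, ys, xs = np.nonzero(mask == label)
        keep = (ys >= half) & (ys < ct.shape[1] - half) & \
               (xs >= half) & (xs < ct.shape[2] - half)      # stay inside the slice
        idx = rng.choice(np.flatnonzero(keep), size=n_per_class, replace=False)
        for i in idx:
            z, y, x = zs[i], ys[i], xs[i]
            patch = np.stack([ct[z, y - half:y + half, x - half:x + half],
                              mr[z, y - half:y + half, x - half:x + half]])
            patches.append(patch)
            labels.append(label)
    return np.asarray(patches), np.asarray(labels)

if __name__ == "__main__":
    ct = np.random.rand(20, 64, 64)
    mr = np.random.rand(20, 64, 64)                          # assumed pre-registered to CT
    mask = np.zeros((20, 64, 64), dtype=np.uint8)
    mask[5:15, 20:40, 20:40] = 1                             # toy reference delineation
    X, y = sample_patches(ct, mr, mask)
    print(X.shape, y.mean())                                 # (200, 2, 16, 16) 0.5
```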
|
49
|
Zhong Z, Kim Y, Zhou L, Plichta K, Allen B, Buatti J, Wu X. 3D fully convolutional networks for co-segmentation of tumors on PET-CT images. Proc IEEE Int Symp Biomed Imaging 2018; 2018:228-231. [PMID: 31772717 PMCID: PMC6878113 DOI: 10.1109/isbi.2018.8363561] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/27/2022]
Abstract
Positron emission tomography and computed tomography (PET-CT) dual-modality imaging provides critical diagnostic information in modern cancer diagnosis and therapy. Accurate automated tumor delineation is essential for computer-assisted tumor reading and interpretation based on PET-CT. In this paper, we propose a novel approach for the segmentation of lung tumors that combines a powerful fully convolutional network (FCN)-based semantic segmentation framework (3D U-Net) with a graph-cut-based co-segmentation model. First, two separate deep U-Nets are trained on PET and CT, respectively, to learn high-level discriminative features and generate tumor/non-tumor masks and probability maps for the PET and CT images. Then, the two probability maps are employed simultaneously in a graph-cut-based co-segmentation model to produce the final tumor segmentation results. Comparative experiments on 32 PET-CT scans of lung cancer patients demonstrate the effectiveness of our method.
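A heavily simplified version of the final fusion step can be sketched with the PyMaxflow library: the PET and CT foreground probability maps are averaged into unary costs and a single binary graph cut is solved. Collapsing the two coupled, per-modality sub-graphs of the published co-segmentation model into one graph is an assumption made only to keep the example short.

```python
# Simplified single-graph stand-in for graph-cut co-segmentation of fused PET/CT
# probability maps, using PyMaxflow. Not the authors' coupled two-graph model.
import numpy as np
import maxflow

def fused_graph_cut(p_pet, p_ct, pairwise_weight=0.5, eps=1e-6):
    """p_pet, p_ct: foreground-probability volumes in [0, 1] on the same grid."""
    p = 0.5 * (p_pet + p_ct)                  # naive fusion of the two probability maps
    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(p.shape)
    g.add_grid_edges(nodes, pairwise_weight)  # uniform smoothness between neighbours
    # Unary terms as negative log-likelihoods; with this orientation, voxels returned
    # as True by get_grid_segments are treated as foreground (tumour) voxels.
    g.add_grid_tedges(nodes, -np.log(p + eps), -np.log(1.0 - p + eps))
    g.maxflow()
    return g.get_grid_segments(nodes)

if __name__ == "__main__":
    pet = np.full((8, 32, 32), 0.1); pet[2:6, 10:20, 10:20] = 0.9
    ct = np.full((8, 32, 32), 0.2);  ct[2:6, 12:22, 10:20] = 0.8
    seg = fused_graph_cut(pet, ct)
    print(seg.shape, int(seg.sum()))
```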
Affiliation(s)
- Zisha Zhong: Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA
- Yusung Kim: Department of Radiation Oncology, University of Iowa, Iowa City, IA
- Leixin Zhou: Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA
- Kristin Plichta: Department of Radiation Oncology, University of Iowa, Iowa City, IA
- Bryan Allen: Department of Radiation Oncology, University of Iowa, Iowa City, IA
- John Buatti: Department of Radiation Oncology, University of Iowa, Iowa City, IA
- Xiaodong Wu: Department of Electrical and Computer Engineering and Department of Radiation Oncology, University of Iowa, Iowa City, IA
|
50
|
Zhong Z, Kim Y, Zhou L, Plichta K, Allen B, Buatti J, Wu X. Improving tumor co-segmentation on PET-CT images with 3D co-matting. Proc IEEE Int Symp Biomed Imaging 2018; 2018:224-227. [PMID: 31762933 PMCID: PMC6873703 DOI: 10.1109/isbi.2018.8363560] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/10/2023]
Abstract
Positron emission tomography and computed tomography (PET-CT) plays a critical role in modern cancer therapy. In this paper, we focus on automated tumor delineation on PET-CT image pairs. Inspired by the co-segmentation model, we develop a novel 3D image co-matting technique that makes use of the inner-modality information of PET and CT for matting. The obtained co-matting results are then incorporated into the graph-cut-based PET-CT co-segmentation framework. Our comparative experiments on 32 PET-CT scan pairs of lung cancer patients demonstrate that the proposed 3D image co-matting technique can significantly improve the quality of the cost images used for co-segmentation, resulting in highly accurate tumor segmentation on both PET and CT.
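Matting-based refinement of a segmentation usually starts from a trimap (definite foreground, definite background, and an unknown band) derived from an initial mask; the sketch below builds such a trimap with simple morphological operations. The structuring-element depths are arbitrary, and this covers only the preparatory step, not the co-matting model itself.

```python
# Hedged sketch: derive a trimap from an initial binary tumour mask as the usual
# starting point for an image-matting step. Erosion/dilation depths are arbitrary.
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation

def make_trimap(mask, inner=2, outer=4):
    """Return an int8 volume: 1 = definite foreground, 0 = definite background,
    -1 = unknown band to be resolved by the matting step."""
    fg = binary_erosion(mask, iterations=inner)      # confident tumour core
    bg = ~binary_dilation(mask, iterations=outer)    # confident background
    trimap = np.full(mask.shape, -1, dtype=np.int8)  # everything else stays "unknown"
    trimap[fg] = 1
    trimap[bg] = 0
    return trimap

if __name__ == "__main__":
    mask = np.zeros((16, 64, 64), dtype=bool)
    mask[4:12, 20:44, 20:44] = True                  # toy initial segmentation
    tm = make_trimap(mask)
    print((tm == 1).sum(), (tm == 0).sum(), (tm == -1).sum())
```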
Affiliation(s)
- Zisha Zhong: Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA
- Yusung Kim: Department of Radiation Oncology, University of Iowa, Iowa City, IA
- Leixin Zhou: Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA
- Kristin Plichta: Department of Radiation Oncology, University of Iowa, Iowa City, IA
- Bryan Allen: Department of Radiation Oncology, University of Iowa, Iowa City, IA
- John Buatti: Department of Radiation Oncology, University of Iowa, Iowa City, IA
- Xiaodong Wu: Department of Electrical and Computer Engineering and Department of Radiation Oncology, University of Iowa, Iowa City, IA
|