51. Yu X, Jin F, Luo H, Lei Q, Wu Y. Gross Tumor Volume Segmentation for Stage III NSCLC Radiotherapy Using 3D ResSE-Unet. Technol Cancer Res Treat 2022;21:15330338221090847. [PMID: 35443832] [PMCID: PMC9047806] [DOI: 10.1177/15330338221090847]
Abstract
INTRODUCTION Radiotherapy is one of the most effective ways to treat lung cancer. Accurately delineating the gross target volume is a key step in the radiotherapy process. In current clinical practice, the target area is still delineated manually by radiologists, which is time-consuming and laborious. However, these problems can be better solved by deep learning-assisted automatic segmentation methods. METHODS In this paper, a 3D CNN model named 3D ResSE-Unet is proposed for gross tumor volume segmentation for stage III NSCLC radiotherapy. This model is based on 3D Unet and combines residual connections with a channel attention mechanism. Three-dimensional convolution operations and an encoding-decoding structure are used to mine the three-dimensional spatial information of tumors from computed tomography data. Inspired by ResNet and SE-Net, residual connections and channel attention are used to improve segmentation performance. A total of 214 patients with stage III NSCLC were collected; 148 cases were randomly selected as the training set, 30 cases as the validation set, and 36 cases as the testing set. The segmentation performance of the models was evaluated on the testing set. In addition, the segmentation results of 3D Unet at different depths were analyzed, and the performance of 3D ResSE-Unet was compared with 3D Unet, 3D Res-Unet, and 3D SE-Unet. RESULTS Compared with other depths, 3D Unet with four downsampling stages is more suitable for our work. Compared with 3D Unet, 3D Res-Unet, and 3D SE-Unet, 3D ResSE-Unet obtains superior results. Its Dice similarity coefficient, 95th-percentile Hausdorff distance, and average surface distance reach 0.7367, 21.39 mm, and 4.962 mm, respectively, and the average time needed by 3D ResSE-Unet to segment one patient is only about 10 s. CONCLUSION The method proposed in this study provides a new tool for GTV auto-segmentation and may be useful for lung cancer radiotherapy.
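To make the two mechanisms named in this abstract concrete, a minimal sketch of a 3D residual block with squeeze-and-excitation channel attention is shown below; the channel count, reduction ratio, and layer layout are illustrative assumptions and are not taken from the 3D ResSE-Unet paper.

```python
import torch
import torch.nn as nn

class ResSEBlock3D(nn.Module):
    """Minimal 3D residual block with squeeze-and-excitation channel attention."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.conv1 = nn.Conv3d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm3d(channels)
        self.conv2 = nn.Conv3d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm3d(channels)
        self.relu = nn.ReLU(inplace=True)
        # Squeeze-and-excitation: global pooling followed by a two-layer gate.
        self.se = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out = out * self.se(out)      # channel attention (SE-Net style)
        return self.relu(out + x)     # residual connection (ResNet style)

# Example: one block applied to a 32-channel CT feature volume.
features = torch.randn(1, 32, 16, 64, 64)
print(ResSEBlock3D(32)(features).shape)  # torch.Size([1, 32, 16, 64, 64])
```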
Affiliation(s)
- Xinhao Yu
- College of Bioengineering, Chongqing University, Chongqing, China; Department of Radiation Oncology, Chongqing University Cancer Hospital, Chongqing, China
- Fu Jin
- Department of Radiation Oncology, Chongqing University Cancer Hospital, Chongqing, China
- HuanLi Luo
- Department of Radiation Oncology, Chongqing University Cancer Hospital, Chongqing, China
- Qianqian Lei
- Department of Radiation Oncology, Chongqing University Cancer Hospital, Chongqing, China
- Yongzhong Wu
- Department of Radiation Oncology, Chongqing University Cancer Hospital, Chongqing, China
52. Moreau N, Rousseau C, Fourcade C, Santini G, Brennan A, Ferrer L, Lacombe M, Guillerminet C, Colombié M, Jézéquel P, Campone M, Normand N, Rubeaux M. Automatic Segmentation of Metastatic Breast Cancer Lesions on 18F-FDG PET/CT Longitudinal Acquisitions for Treatment Response Assessment. Cancers (Basel) 2021;14:101. [PMID: 35008265] [PMCID: PMC8750371] [DOI: 10.3390/cancers14010101]
Abstract
Metastatic breast cancer patients receive lifelong medication and are regularly monitored for disease progression. The aim of this work was to (1) propose networks to segment breast cancer metastatic lesions on longitudinal whole-body PET/CT and (2) extract imaging biomarkers from the segmentations and evaluate their potential to determine treatment response. Baseline and follow-up PET/CT images of 60 patients from the EPICUREseinmeta study were used to train two deep-learning models to segment breast cancer metastatic lesions: one for baseline images and one for follow-up images. From the automatic segmentations, four imaging biomarkers were computed and evaluated: SULpeak, Total Lesion Glycolysis (TLG), PET Bone Index (PBI) and PET Liver Index (PLI). The first network obtained a mean Dice score of 0.66 on baseline acquisitions. The second network obtained a mean Dice score of 0.58 on follow-up acquisitions. SULpeak, with a 32% decrease between baseline and follow-up, was the biomarker best able to assess patients' response (sensitivity 87%, specificity 87%), followed by TLG (43% decrease, sensitivity 73%, specificity 81%) and PBI (8% decrease, sensitivity 69%, specificity 69%). Our networks constitute promising tools for the automatic segmentation of lesions in patients with metastatic breast cancer allowing treatment response assessment with several biomarkers.
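As an illustration of how such biomarkers are derived from a segmentation, the sketch below computes metabolic tumor volume, TLG, and SUVmax from an SUV volume and a binary lesion mask; the voxel size and toy data are assumptions, and SULpeak, PBI, and PLI, which require lean-body-mass scaling and organ masks, are not reproduced here.

```python
import numpy as np

def lesion_biomarkers(suv: np.ndarray, mask: np.ndarray, voxel_volume_ml: float):
    """Compute simple PET biomarkers from an SUV volume and a binary lesion mask.

    suv  : 3D array of standardized uptake values
    mask : 3D boolean array, True inside segmented lesions
    """
    lesion_suv = suv[mask]
    mtv_ml = lesion_suv.size * voxel_volume_ml          # metabolic tumor volume (mL)
    tlg = mtv_ml * float(lesion_suv.mean())             # total lesion glycolysis
    return {"MTV_mL": mtv_ml, "TLG": tlg, "SUVmax": float(lesion_suv.max())}

# Toy example: a 4x4x4-voxel lesion with SUV 5 in a 64^3 volume of 2x2x2 mm voxels.
suv = np.ones((64, 64, 64)); suv[30:34, 30:34, 30:34] = 5.0
mask = suv > 2.5
print(lesion_biomarkers(suv, mask, voxel_volume_ml=0.008))
```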
Affiliation(s)
- Noémie Moreau
- LS2N, University of Nantes, CNRS, 44000 Nantes, France
- Keosys Medical Imaging, 13 Imp. Serge Reggiani, 44815 Saint-Herblain, France
- Caroline Rousseau
- CRCINA, University of Nantes, INSERM UMR1232, CNRS-ERL6001, 44000 Nantes, France
- ICO Cancer Center, 49000 Angers, France
- Constance Fourcade
- LS2N, University of Nantes, CNRS, 44000 Nantes, France
- Keosys Medical Imaging, 13 Imp. Serge Reggiani, 44815 Saint-Herblain, France
- Gianmarco Santini
- Keosys Medical Imaging, 13 Imp. Serge Reggiani, 44815 Saint-Herblain, France
- Aislinn Brennan
- Keosys Medical Imaging, 13 Imp. Serge Reggiani, 44815 Saint-Herblain, France
- Ludovic Ferrer
- ICO Cancer Center, 49000 Angers, France
- CRCINA, University of Angers, INSERM UMR1232, CNRS-ERL6001, 49000 Angers, France
- Marie Lacombe
- ICO Cancer Center, 49000 Angers, France
- Mathilde Colombié
- ICO Cancer Center, 49000 Angers, France
- Pascal Jézéquel
- CRCINA, University of Nantes, INSERM UMR1232, CNRS-ERL6001, 44000 Nantes, France
- ICO Cancer Center, 49000 Angers, France
- Mario Campone
- ICO Cancer Center, 49000 Angers, France
- CRCINA, University of Angers, INSERM UMR1232, CNRS-ERL6001, 49000 Angers, France
- Nicolas Normand
- LS2N, University of Nantes, CNRS, 44000 Nantes, France
- Mathieu Rubeaux
- Keosys Medical Imaging, 13 Imp. Serge Reggiani, 44815 Saint-Herblain, France
53. Oreiller V, Andrearczyk V, Jreige M, Boughdad S, Elhalawani H, Castelli J, Vallières M, Zhu S, Xie J, Peng Y, Iantsen A, Hatt M, Yuan Y, Ma J, Yang X, Rao C, Pai S, Ghimire K, Feng X, Naser MA, Fuller CD, Yousefirizi F, Rahmim A, Chen H, Wang L, Prior JO, Depeursinge A. Head and neck tumor segmentation in PET/CT: The HECKTOR challenge. Med Image Anal 2021;77:102336. [PMID: 35016077] [DOI: 10.1016/j.media.2021.102336]
Abstract
This paper presents the post-analysis of the first edition of the HEad and neCK TumOR (HECKTOR) challenge. This challenge was held as a satellite event of the 23rd International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2020, and was the first of its kind focusing on lesion segmentation in combined FDG-PET and CT image modalities. The challenge's task was the automatic segmentation of the Gross Tumor Volume (GTV) of Head and Neck (H&N) oropharyngeal primary tumors in FDG-PET/CT images. To this end, the participants were given a training set of 201 cases from four different centers, and their methods were tested on a held-out set of 53 cases from a fifth center. The methods were ranked according to the Dice Score Coefficient (DSC) averaged across all test cases. An additional inter-observer agreement study was organized to assess the difficulty of the task from a human perspective. In total, 64 teams registered for the challenge, among which 10 provided a paper detailing their approach. The best method obtained an average DSC of 0.7591, showing a large improvement over our proposed baseline method and the inter-observer agreement, associated with DSCs of 0.6610 and 0.61, respectively. The automatic methods successfully leveraged the wealth of metabolic and structural properties of combined PET and CT modalities, significantly outperforming the human inter-observer agreement level, semi-automatic thresholding based on PET images, and other single-modality-based methods. This promising performance is one step toward large-scale radiomics studies in H&N cancer, obviating the need for error-prone and time-consuming manual delineation of GTVs.
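The ranking metric can be stated compactly; the generic implementation below computes the Dice Score Coefficient for one case and averages it over a test set, and is not the organizers' evaluation code.

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice Score Coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

def mean_dsc(predictions, ground_truths):
    """Challenge-style ranking: mean DSC across all held-out test cases."""
    return float(np.mean([dice(p, g) for p, g in zip(predictions, ground_truths)]))
```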
Affiliation(s)
- Valentin Oreiller
- Institute of Information Systems, University of Applied Sciences Western Switzerland (HES-SO), Sierre, Switzerland; Department of Nuclear Medicine and Molecular Imaging, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland
- Vincent Andrearczyk
- Institute of Information Systems, University of Applied Sciences Western Switzerland (HES-SO), Sierre, Switzerland
- Mario Jreige
- Department of Nuclear Medicine and Molecular Imaging, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland
- Sarah Boughdad
- Department of Nuclear Medicine and Molecular Imaging, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland
- Hesham Elhalawani
- Department of Radiation Oncology, Brigham and Women's Hospital and Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA
- Joel Castelli
- Radiotherapy Department, Cancer Institute Eugène Marquis, Rennes, France
- Martin Vallières
- Department of Computer Science, Université de Sherbrooke, Sherbrooke, Québec, Canada
- Simeng Zhu
- Department of Radiation Oncology, Henry Ford Cancer Institute, Detroit, MI, USA
- Juanying Xie
- School of Computer Science, Shaanxi Normal University, Xi'an 710119, PR China
- Ying Peng
- School of Computer Science, Shaanxi Normal University, Xi'an 710119, PR China
- Andrei Iantsen
- LaTIM, INSERM, UMR 1101, University Brest, Brest, France
- Mathieu Hatt
- LaTIM, INSERM, UMR 1101, University Brest, Brest, France
- Yading Yuan
- Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Jun Ma
- Department of Mathematics, Nanjing University of Science and Technology, Jiangsu, China
- Xiaoping Yang
- Department of Mathematics, Nanjing University, Jiangsu, China
- Chinmay Rao
- Department of Radiation Oncology (Maastro), GROW School for Oncology, Maastricht University Medical Centre+, Maastricht, The Netherlands
- Suraj Pai
- Department of Radiation Oncology (Maastro), GROW School for Oncology, Maastricht University Medical Centre+, Maastricht, The Netherlands
- Xue Feng
- Carina Medical, Lexington, KY 40513, USA; Department of Biomedical Engineering, University of Virginia, Charlottesville, VA 22903, USA
- Mohamed A Naser
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030, USA
- Clifton D Fuller
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030, USA
- Fereshteh Yousefirizi
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC, Canada
- Arman Rahmim
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC, Canada
- Huai Chen
- Department of Automation, Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, Shanghai 200240, People's Republic of China
- Lisheng Wang
- Department of Automation, Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, Shanghai 200240, People's Republic of China
- John O Prior
- Department of Nuclear Medicine and Molecular Imaging, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland
- Adrien Depeursinge
- Institute of Information Systems, University of Applied Sciences Western Switzerland (HES-SO), Sierre, Switzerland; Department of Nuclear Medicine and Molecular Imaging, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland
54. Xue Z, Li P, Zhang L, Lu X, Zhu G, Shen P, Ali Shah SA, Bennamoun M. Multi-Modal Co-Learning for Liver Lesion Segmentation on PET-CT Images. IEEE Trans Med Imaging 2021;40:3531-3542. [PMID: 34133275] [DOI: 10.1109/tmi.2021.3089702]
Abstract
Liver lesion segmentation is an essential process to assist doctors in hepatocellular carcinoma diagnosis and treatment planning. Multi-modal positron emission tomography and computed tomography (PET-CT) scans are widely utilized for this purpose due to their complementary feature information. However, current methods ignore the interaction of information across the two modalities during feature extraction, omit the co-learning of feature maps of different resolutions, and do not ensure that shallow and deep features complement each other sufficiently. In this paper, our proposed model achieves feature interaction across multi-modal channels by sharing the down-sampling blocks between the two encoding branches to eliminate misleading features. Furthermore, we combine feature maps of different resolutions to derive spatially varying fusion maps and enhance the lesion information. In addition, we introduce a similarity loss function as a consistency constraint in case the predictions of the separate refactoring branches for the same regions differ greatly. We evaluate our model for liver tumor segmentation using a PET-CT scan dataset, compare our method with baseline techniques for multi-modal fusion (multi-branch, multi-channel, and cascaded networks), and demonstrate that our method has a significantly higher accuracy than the baseline models.
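A minimal sketch of the idea of two modality-specific encoding branches that share the same down-sampling blocks is given below; the channel counts, number of stages, and use of strided convolutions are assumptions, and the fusion maps and refactoring branches described in the paper are not reproduced.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv3d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class SharedDownsampleEncoders(nn.Module):
    """Two modality-specific encoders that share the same down-sampling blocks."""
    def __init__(self, channels=(16, 32, 64)):
        super().__init__()
        self.pet_convs = nn.ModuleList([conv_block(1 if i == 0 else channels[i - 1], c)
                                        for i, c in enumerate(channels)])
        self.ct_convs = nn.ModuleList([conv_block(1 if i == 0 else channels[i - 1], c)
                                       for i, c in enumerate(channels)])
        # Shared (weight-tied) strided convolutions used by both branches.
        self.shared_down = nn.ModuleList([nn.Conv3d(c, c, 2, stride=2) for c in channels])

    def forward(self, pet, ct):
        pet_feats, ct_feats = [], []
        for pet_conv, ct_conv, down in zip(self.pet_convs, self.ct_convs, self.shared_down):
            pet, ct = pet_conv(pet), ct_conv(ct)
            pet_feats.append(pet)
            ct_feats.append(ct)
            pet, ct = down(pet), down(ct)   # same weights applied to both modalities
        return pet_feats, ct_feats

pet = torch.randn(1, 1, 32, 64, 64); ct = torch.randn(1, 1, 32, 64, 64)
pet_feats, ct_feats = SharedDownsampleEncoders()(pet, ct)
print(pet_feats[-1].shape, ct_feats[-1].shape)
```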
55. Zukotynski KA, Gaudet VC, Uribe CF, Chiam K, Bénard F, Gerbaudo VH. Clinical Applications of Artificial Intelligence in Positron Emission Tomography of Lung Cancer. PET Clin 2021;17:77-84. [PMID: 34809872] [DOI: 10.1016/j.cpet.2021.09.001]
Abstract
The ability of a computer to perform tasks normally requiring human intelligence, or artificial intelligence (AI), is not new. However, until recently, practical applications in medical imaging were limited, especially in the clinic. With advances in theory, microelectronic circuits, and computer architecture, as well as our ability to acquire and access large amounts of data, AI is becoming increasingly ubiquitous in medical imaging. Of particular interest to our community, radiomics tries to identify imaging features of specific pathology that can represent, for example, the texture or shape of a region in the image. This is conducted based on a review of mathematical patterns and pattern combinations. The difficulty is often finding sufficient data to span the spectrum of disease heterogeneity, because many features change with pathology as well as over time and, among other issues, data acquisition is expensive. Although we are currently in the early days of the practical application of AI to medical imaging, research is ongoing to integrate imaging, molecular pathobiology, genetic make-up, and clinical manifestations to classify patients into subgroups for the purpose of precision medicine, or in other words, to predict treatment response and outcome a priori. Lung cancer is a functionally and morphologically heterogeneous disease. Positron emission tomography (PET) is an imaging technique with an important role in the precision medicine of patients with lung cancer that helps predict early response to therapy and guides the selection of appropriate treatment. Although still in its infancy, early results suggest that the use of AI in PET of lung cancer has promise for the detection, segmentation, and characterization of disease as well as for outcome prediction.
Affiliation(s)
- Katherine A Zukotynski
- Departments of Radiology and Medicine, McMaster University, 1200 Main St. W., Hamilton, ON L8N 3Z5, Canada; School of Biomedical Engineering, McMaster University, 1280 Main St. W., Hamilton, ON L8S 4K1, Canada; Edward S. Rogers Sr. Department of Electrical and Computer Engineering, University of Toronto, 10 King's College Rd., Toronto, ON M5S 3G8, Canada
- Vincent C Gaudet
- Department of Electrical and Computer Engineering, University of Waterloo, 200 University Ave. W., Waterloo, ON N2L 3G1, Canada
- Carlos F Uribe
- PET Functional Imaging, BC Cancer, 600 W. 10th Ave., Vancouver, V5Z 4E6, Canada
- Katarina Chiam
- Division of Engineering Science, University of Toronto, 40 St. George St., Toronto, ON M5S 2E4, Canada
- François Bénard
- Department of Radiology, University of British Columbia, 2775 Laurel St., 11th floor, Vancouver, BC V5Z 1M9, Canada
- Victor H Gerbaudo
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St., Boston, MA 02492, USA
56. Shao X, Niu R, Shao X, Gao J, Shi Y, Jiang Z, Wang Y. Application of dual-stream 3D convolutional neural network based on 18F-FDG PET/CT in distinguishing benign and invasive adenocarcinoma in ground-glass lung nodules. EJNMMI Phys 2021;8:74. [PMID: 34727258] [PMCID: PMC8561359] [DOI: 10.1186/s40658-021-00423-1]
Abstract
Purpose: This work aims to train, validate, and test a dual-stream three-dimensional convolutional neural network (3D-CNN) based on fluorine 18 (18F)-fluorodeoxyglucose (FDG) PET/CT to distinguish benign lesions and invasive adenocarcinoma (IAC) in ground-glass nodules (GGNs). Methods: We retrospectively analyzed patients with suspicious GGNs who underwent 18F-FDG PET/CT in our hospital from November 2011 to November 2020. Patients with benign lesions or IAC were selected for this study. The data were randomly divided into training and testing data at a ratio of 7:3. Partial image feature extraction software was used to segment PET and CT images, and the augmented training data were used for the training and validation (fivefold cross-validation) of the three CNNs (PET, CT, and PET/CT networks). Results: A total of 23 benign nodules and 92 IAC nodules from 106 patients were included in this study. In the training set, the performance of the PET network (accuracy, sensitivity, and specificity of 0.92 ± 0.02, 0.97 ± 0.03, and 0.76 ± 0.15) was better than that of the CT network (accuracy, sensitivity, and specificity of 0.84 ± 0.03, 0.90 ± 0.07, and 0.62 ± 0.16); the difference in accuracy was significant (P = 0.001). In the testing set, the performance of both networks declined, but the accuracy and sensitivity of the PET network were still higher than those of the CT network (0.76 vs. 0.67; 0.85 vs. 0.70). For the dual-stream PET/CT network, its performance in the training set was almost the same as that of the PET network (P = 0.372-1.000), while in the testing set, although its performance decreased, the accuracy and sensitivity (0.85 and 0.96) were still higher than those of both the CT and PET networks. Moreover, the accuracy of the PET/CT network was higher than that of two nuclear medicine physicians (physician 1, 3 years of experience: 0.70; physician 2, 10 years of experience: 0.73). Conclusion: The 3D-CNN based on 18F-FDG PET/CT can be used to distinguish benign lesions and IAC in GGNs, and the performance is better when both CT and PET images are used together.
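A toy version of such a dual-stream classifier, with one 3D CNN stream per modality and late fusion of pooled features, is sketched below; the layer sizes and fusion strategy are assumptions rather than the architecture used in this study.

```python
import torch
import torch.nn as nn

class DualStream3DCNN(nn.Module):
    """Toy dual-stream classifier: separate 3D CNN streams for PET and CT patches,
    fused by concatenating pooled features before a final classification layer."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        def stream():
            return nn.Sequential(
                nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool3d(2),
                nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(inplace=True),
                nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            )
        self.pet_stream, self.ct_stream = stream(), stream()
        self.classifier = nn.Linear(16 + 16, n_classes)

    def forward(self, pet, ct):
        return self.classifier(torch.cat([self.pet_stream(pet), self.ct_stream(ct)], dim=1))

# A nodule patch from each modality (batch, channel, depth, height, width).
pet_patch = torch.randn(2, 1, 32, 32, 32)
ct_patch = torch.randn(2, 1, 32, 32, 32)
print(DualStream3DCNN()(pet_patch, ct_patch).shape)  # torch.Size([2, 2])
```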
Affiliation(s)
- Xiaonan Shao
- Department of Nuclear Medicine, The Third Affiliated Hospital of Soochow University, Changzhou, 213003, China; Changzhou Key Laboratory of Molecular Imaging, Changzhou, 213003, China
- Rong Niu
- Department of Nuclear Medicine, The Third Affiliated Hospital of Soochow University, Changzhou, 213003, China; Changzhou Key Laboratory of Molecular Imaging, Changzhou, 213003, China
- Xiaoliang Shao
- Department of Nuclear Medicine, The Third Affiliated Hospital of Soochow University, Changzhou, 213003, China; Changzhou Key Laboratory of Molecular Imaging, Changzhou, 213003, China
- Jianxiong Gao
- Department of Nuclear Medicine, The Third Affiliated Hospital of Soochow University, Changzhou, 213003, China; Changzhou Key Laboratory of Molecular Imaging, Changzhou, 213003, China
- Yunmei Shi
- Department of Nuclear Medicine, The Third Affiliated Hospital of Soochow University, Changzhou, 213003, China; Changzhou Key Laboratory of Molecular Imaging, Changzhou, 213003, China
- Zhenxing Jiang
- Department of Radiology, The Third Affiliated Hospital of Soochow University, Changzhou, 213003, China
- Yuetao Wang
- Department of Nuclear Medicine, The Third Affiliated Hospital of Soochow University, Changzhou, 213003, China; Changzhou Key Laboratory of Molecular Imaging, Changzhou, 213003, China
57. Früh M, Fischer M, Schilling A, Gatidis S, Hepp T. Weakly supervised segmentation of tumor lesions in PET-CT hybrid imaging. J Med Imaging (Bellingham) 2021;8:054003. [PMID: 34660843] [PMCID: PMC8510879] [DOI: 10.1117/1.jmi.8.5.054003]
Abstract
Purpose: We introduce and evaluate deep learning methods for weakly supervised segmentation of tumor lesions in whole-body fluorodeoxyglucose-positron emission tomography (FDG-PET) based solely on binary global labels (“tumor” versus “no tumor”). Approach: We propose a three-step approach based on (i) a deep learning framework for image classification, (ii) subsequent generation of class activation maps (CAMs) using different CAM methods (CAM, GradCAM, GradCAM++, ScoreCAM), and (iii) final tumor segmentation based on the aforementioned CAMs. A VGG-based classification neural network was trained to distinguish between PET image slices with and without FDG-avid tumor lesions. Subsequently, the CAMs of this network were used to identify the tumor regions within images. This proposed framework was applied to FDG-PET/CT data of 453 oncological patients with available manually generated ground-truth segmentations. Quantitative segmentation performance was assessed for the different CAM approaches and compared with the manual ground truth segmentation and with supervised segmentation methods. In addition, further biomarkers (MTV and TLG) were extracted from the segmentation masks. Results: A weakly supervised segmentation of tumor lesions was feasible with satisfactory performance [best median Dice score 0.47, interquartile range (IQR) 0.35] compared with a fully supervised U-Net model (median Dice score 0.72, IQR 0.36) and a simple threshold based segmentation (Dice score 0.29, IQR 0.28). CAM, GradCAM++, and ScoreCAM yielded similar results. However, GradCAM led to inferior results (median Dice score: 0.12, IQR 0.21) and was likely to ignore multiple instances within a given slice. CAM, GradCAM++, and ScoreCAM yielded accurate estimates of metabolic tumor volume (MTV) and tumor lesion glycolysis. Again, worse results were observed for GradCAM. Conclusions: This work demonstrated the feasibility of weakly supervised segmentation of tumor lesions and accurate estimation of derived metrics such as MTV and tumor lesion glycolysis.
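A schematic of steps (ii) and (iii), computing a plain class activation map from a classifier that uses global average pooling and thresholding it into a coarse tumor mask, is sketched below; the backbone, layer names, and threshold are assumptions and differ from the VGG-based setup described above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyClassifier(nn.Module):
    """Stand-in slice classifier with global average pooling, so plain CAM applies."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.fc = nn.Linear(32, 2)  # classes: 0 = no tumor, 1 = tumor

    def forward(self, x):
        f = self.features(x)
        return self.fc(f.mean(dim=(2, 3))), f  # logits and feature maps

def cam_segmentation(model, pet_slice, threshold=0.5):
    """Class activation map for the 'tumor' class, thresholded into a coarse mask."""
    _logits, fmaps = model(pet_slice)
    weights = model.fc.weight[1]                          # weights of the tumor class
    cam = torch.einsum("c,bchw->bhw", weights, fmaps)     # weighted sum of feature maps
    cam = F.interpolate(cam.unsqueeze(1), size=pet_slice.shape[-2:], mode="bilinear",
                        align_corners=False).squeeze(1)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
    return cam > threshold

mask = cam_segmentation(TinyClassifier(), torch.randn(1, 1, 128, 128))
print(mask.shape, mask.dtype)  # torch.Size([1, 128, 128]) torch.bool
```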
Affiliation(s)
- Marcel Früh
- University Hospital Tübingen, Department of Diagnostic and Interventional Radiology, Tübingen, Germany; University of Tübingen, Institute for Visual Computing, Department of Computer Science, Tübingen, Germany
- Marc Fischer
- University of Stuttgart, Institute of Signal Processing and System Theory, Stuttgart, Germany
- Andreas Schilling
- University of Tübingen, Institute for Visual Computing, Department of Computer Science, Tübingen, Germany
- Sergios Gatidis
- University Hospital Tübingen, Department of Diagnostic and Interventional Radiology, Tübingen, Germany; Max Planck Institute for Intelligent Systems, Max Planck Ring 4, Tübingen, Germany
- Tobias Hepp
- University Hospital Tübingen, Department of Diagnostic and Interventional Radiology, Tübingen, Germany; Max Planck Institute for Intelligent Systems, Max Planck Ring 4, Tübingen, Germany
58. Diao Z, Jiang H, Han XH, Yao YD, Shi T. EFNet: evidence fusion network for tumor segmentation from PET-CT volumes. Phys Med Biol 2021;66. [PMID: 34555816] [DOI: 10.1088/1361-6560/ac299a]
Abstract
Precise delineation of the target tumor from positron emission tomography-computed tomography (PET-CT) is a key step in clinical practice and radiation therapy. PET-CT co-segmentation uses the complementary information of the two modalities to reduce the uncertainty of single-modal segmentation and thereby obtain more accurate segmentation results. At present, PET-CT segmentation methods based on fully convolutional neural networks (FCNs) mainly adopt image fusion or feature fusion. Current fusion strategies do not consider the uncertainty of multi-modal segmentation, and complex feature fusion consumes more computing resources, especially when dealing with 3D volumes. In this work, we analyze PET-CT co-segmentation from the perspective of uncertainty and propose the evidence fusion network (EFNet). Through the proposed evidence loss, the network outputs a PET result and a CT result containing uncertainty, which are used as PET evidence and CT evidence. We then use evidence fusion to reduce the uncertainty of the single-modal evidence. The final segmentation result is obtained from the evidence fusion of PET evidence and CT evidence. EFNet uses the basic 3D U-Net as backbone and only uses simple unidirectional feature fusion. In addition, EFNet can separately train and predict PET evidence and CT evidence, without the need for parallel training of two branch networks. We conduct experiments on the soft-tissue-sarcoma and lymphoma datasets. Compared with 3D U-Net, our proposed method improves the Dice by 8% and 5%, respectively. Compared with the complex feature fusion method, our proposed method improves the Dice by 7% and 2%, respectively. Our results show that, in FCN-based PET-CT segmentation methods, outputting uncertainty evidence and applying evidence fusion can simplify the network and improve the segmentation results.
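The abstract does not spell out the fusion rule, so the sketch below only illustrates the general idea of combining two single-modality predictions according to their per-voxel uncertainty; it is a simple uncertainty-weighted average, stated here as an assumption, and not EFNet's evidence fusion.

```python
import numpy as np

def uncertainty_weighted_fusion(p_pet, p_ct, eps=1e-6):
    """Fuse per-voxel foreground probabilities from PET and CT branches.

    Each prediction is weighted by the inverse of its binary entropy, so the
    more confident (lower-uncertainty) modality dominates at each voxel.
    """
    def entropy(p):
        p = np.clip(p, eps, 1 - eps)
        return -(p * np.log(p) + (1 - p) * np.log(1 - p))

    w_pet = 1.0 / (entropy(p_pet) + eps)
    w_ct = 1.0 / (entropy(p_ct) + eps)
    return (w_pet * p_pet + w_ct * p_ct) / (w_pet + w_ct)

# PET is confident about a lesion voxel, CT is uncertain: the fused value follows PET.
print(uncertainty_weighted_fusion(np.array([0.95]), np.array([0.55])))
```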
Affiliation(s)
- Zhaoshuo Diao
- Software College, Northeastern University, Shenyang 110819, People's Republic of China
- Huiyan Jiang
- Software College, Northeastern University, Shenyang 110819, People's Republic of China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang 110819, People's Republic of China
- Xian-Hua Han
- Graduate School of Sciences and Technology for Innovation, Yamaguchi University, Yamaguchi-shi 7538511, Japan
- Yu-Dong Yao
- Department of Electrical and Computer Engineering, Stevens Institute of Technology, Hoboken, NJ 07030, United States of America
- Tianyu Shi
- Software College, Northeastern University, Shenyang 110819, People's Republic of China
59. Gan W, Wang H, Gu H, Duan Y, Shao Y, Chen H, Feng A, Huang Y, Fu X, Ying Y, Quan H, Xu Z. Automatic segmentation of lung tumors on CT images based on a 2D & 3D hybrid convolutional neural network. Br J Radiol 2021;94:20210038. [PMID: 34347535] [PMCID: PMC9328064] [DOI: 10.1259/bjr.20210038]
Abstract
OBJECTIVE A stable and accurate automatic tumor delineation method has been developed to facilitate the intelligent design of the lung cancer radiotherapy process. The purpose of this paper is to introduce an automatic tumor segmentation network for lung cancer on CT images based on deep learning. METHODS In this paper, a hybrid convolutional neural network (CNN) combining a 2D CNN and a 3D CNN was implemented for automatic lung tumor delineation using CT images. The 3D CNN used the V-Net model for the extraction of tumor context information from CT sequence images. The 2D CNN used an encoder-decoder structure based on a dense connection scheme, which could expand the information flow and promote feature propagation. Next, 2D features and 3D features were fused through a hybrid module. Meanwhile, the hybrid CNN was compared with the individual 3D CNN and 2D CNN, and three evaluation metrics, Dice, Jaccard, and Hausdorff distance (HD), were used for quantitative evaluation. The relationship between the segmentation performance of the hybrid network and the GTV volume size was also explored. RESULTS The newly introduced hybrid CNN was trained and tested on a dataset of 260 cases and achieved a median value of 0.73, with a mean and standard deviation of 0.72 ± 0.10 for the Dice metric, and 0.58 ± 0.13 and 21.73 ± 13.30 mm for the Jaccard and HD metrics, respectively. The hybrid network significantly outperformed the individual 3D CNN and 2D CNN in the three examined evaluation metrics (p < 0.001). A larger GTV presents a higher value for the Dice metric, but its delineation at the tumor boundary is unstable. CONCLUSIONS The implemented hybrid CNN was able to achieve good lung tumor segmentation performance on CT images. ADVANCES IN KNOWLEDGE The hybrid CNN is a promising tool for segmenting lung tumors.
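For reference, generic implementations of the three evaluation metrics named above (Dice, Jaccard, and Hausdorff distance) are sketched below; treating all foreground voxels as surface points and the voxel spacing are simplifying assumptions, and this is not the authors' evaluation code.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_jaccard(pred, truth):
    """Overlap metrics between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    dice = 2 * inter / (pred.sum() + truth.sum())
    return dice, inter / union

def hausdorff_mm(pred, truth, spacing=(1.0, 1.0, 1.0)):
    """Symmetric Hausdorff distance between the masks' foreground points, in mm.
    All foreground voxels are used as surface points here for simplicity."""
    p = np.argwhere(pred) * np.asarray(spacing)
    t = np.argwhere(truth) * np.asarray(spacing)
    return max(directed_hausdorff(p, t)[0], directed_hausdorff(t, p)[0])
```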
Affiliation(s)
- Hao Wang
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Hengle Gu
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Yanhua Duan
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Yan Shao
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Hua Chen
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Aihui Feng
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Ying Huang
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Xiaolong Fu
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Yanchen Ying
- Department of Radiation Physics, Zhejiang Cancer Hospital, University of Chinese Academy of Sciences, Zhejiang, China
- Hong Quan
- School of Physics and Technology, University of Wuhan, Wuhan, China
- Zhiyong Xu
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
60. Yousefirizi F, Jha AK, Brosch-Lenz J, Saboury B, Rahmim A. Toward High-Throughput Artificial Intelligence-Based Segmentation in Oncological PET Imaging. PET Clin 2021;16:577-596. [PMID: 34537131] [DOI: 10.1016/j.cpet.2021.06.001]
Abstract
Artificial intelligence (AI) techniques for image-based segmentation have garnered much attention in recent years. Convolutional neural networks have shown impressive results and potential toward fully automated segmentation in medical imaging, and particularly PET imaging. To cope with the limited access to annotated data needed in supervised AI methods, given tedious and prone-to-error manual delineations, semi-supervised and unsupervised AI techniques have also been explored for segmentation of tumors or normal organs in single- and bimodality scans. This work reviews existing AI techniques for segmentation tasks and the evaluation criteria for translational AI-based segmentation efforts toward routine adoption in clinical workflows.
Affiliation(s)
- Fereshteh Yousefirizi
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada
- Abhinav K Jha
- Department of Biomedical Engineering, Washington University in St. Louis, St Louis, MO 63130, USA; Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, MO 63110, USA
- Julia Brosch-Lenz
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada
- Babak Saboury
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, 9000 Rockville Pike, Bethesda, MD 20892, USA; Department of Computer Science and Electrical Engineering, University of Maryland Baltimore County, Baltimore, MD, USA; Department of Radiology, Hospital of the University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA 19104, USA
- Arman Rahmim
- Department of Radiology, University of British Columbia, BC Cancer, BC Cancer Research Institute, 675 West 10th Avenue, Office 6-112, Vancouver, British Columbia V5Z 1L3, Canada; Department of Physics, University of British Columbia, BC Cancer, BC Cancer Research Institute, 675 West 10th Avenue, Office 6-112, Vancouver, British Columbia V5Z 1L3, Canada
61. Fu X, Bi L, Kumar A, Fulham M, Kim J. Multimodal Spatial Attention Module for Targeting Multimodal PET-CT Lung Tumor Segmentation. IEEE J Biomed Health Inform 2021;25:3507-3516. [PMID: 33591922] [DOI: 10.1109/jbhi.2021.3059453]
Abstract
Multimodal positron emission tomography-computed tomography (PET-CT) is used routinely in the assessment of cancer. PET-CT combines the high sensitivity for tumor detection of PET and anatomical information from CT. Tumor segmentation is a critical element of PET-CT, but at present the performance of existing automated methods for this challenging task is low. Segmentation tends to be done manually by different imaging experts, which is labor-intensive and prone to errors and inconsistency. Previous automated segmentation methods largely focused on fusing information that is extracted separately from the PET and CT modalities, with the underlying assumption that each modality contains complementary information. However, these methods do not fully exploit the high PET tumor sensitivity that can guide the segmentation. We introduce a deep learning-based framework for multimodal PET-CT segmentation with a multimodal spatial attention module (MSAM). The MSAM automatically learns to emphasize regions (spatial areas) related to tumors and suppress normal regions with physiologically high uptake from the PET input. The resulting spatial attention maps are subsequently employed to target a convolutional neural network (CNN) backbone for segmentation of areas with higher tumor likelihood from the CT image. Our experimental results on two clinical PET-CT datasets of non-small cell lung cancer (NSCLC) and soft tissue sarcoma (STS) validate the effectiveness of our framework in these different cancer types. We show that our MSAM, with a conventional U-Net backbone, surpasses the state-of-the-art lung tumor segmentation approach by a margin of 7.6% in Dice similarity coefficient (DSC).
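The core idea, a PET-derived spatial attention map that reweights CT features, can be caricatured in a few lines; the 2D layout, channel counts, and single attention stage below are assumptions and do not reproduce the full MSAM framework.

```python
import torch
import torch.nn as nn

class SpatialAttentionFromPET(nn.Module):
    """Toy multimodal spatial attention: a small PET branch predicts a per-pixel
    attention map that reweights CT feature maps before segmentation."""
    def __init__(self, ct_channels: int = 16):
        super().__init__()
        self.pet_branch = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(8, 1, 1), nn.Sigmoid(),      # attention map in [0, 1]
        )
        self.ct_features = nn.Sequential(nn.Conv2d(1, ct_channels, 3, padding=1),
                                         nn.ReLU(inplace=True))

    def forward(self, pet, ct):
        attention = self.pet_branch(pet)             # (B, 1, H, W)
        ct_feats = self.ct_features(ct)              # (B, C, H, W)
        return ct_feats * attention                  # emphasize tumor-likely regions

pet = torch.randn(1, 1, 96, 96); ct = torch.randn(1, 1, 96, 96)
print(SpatialAttentionFromPET()(pet, ct).shape)  # torch.Size([1, 16, 96, 96])
```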
62. Xie Z, Li T, Zhang X, Qi W, Asma E, Qi J. Anatomically aided PET image reconstruction using deep neural networks. Med Phys 2021;48:5244-5258. [PMID: 34129690] [PMCID: PMC8510002] [DOI: 10.1002/mp.15051]
Abstract
PURPOSE The developments of PET/CT and PET/MR scanners provide opportunities for improving PET image quality by using anatomical information. In this paper, we propose a novel co-learning three-dimensional (3D) convolutional neural network (CNN) to extract modality-specific features from PET/CT image pairs and integrate complementary features into an iterative reconstruction framework to improve PET image reconstruction. METHODS We used a pretrained deep neural network to represent PET images. The network was trained using low-count PET and CT image pairs as inputs and high-count PET images as labels. This network was then incorporated into a constrained maximum likelihood framework to regularize PET image reconstruction. Two different network structures were investigated for the integration of anatomical information from CT images. One was a multichannel CNN, which treated PET and CT volumes as separate channels of the input. The other one was multibranch CNN, which implemented separate encoders for PET and CT images to extract latent features and fed the combined latent features into a decoder. Using computer-based Monte Carlo simulations and two real patient datasets, the proposed method has been compared with existing methods, including the maximum likelihood expectation maximization (MLEM) reconstruction, a kernel-based reconstruction and a CNN-based deep penalty method with and without anatomical guidance. RESULTS Reconstructed images showed that the proposed constrained ML reconstruction approach produced higher quality images than the competing methods. The tumors in the lung region have higher contrast in the proposed constrained ML reconstruction than in the CNN-based deep penalty reconstruction. The image quality was further improved by incorporating the anatomical information. Moreover, the liver standard deviation was lower in the proposed approach than all the competing methods at a matched lesion contrast. CONCLUSIONS The supervised co-learning strategy can improve the performance of constrained maximum likelihood reconstruction. Compared with existing techniques, the proposed method produced a better lesion contrast versus background standard deviation trade-off curve, which can potentially improve lesion detection.
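For context, the classical MLEM baseline mentioned in this abstract can be written compactly; the sketch below uses a dense toy system matrix and ignores attenuation, scatter, and the proposed CNN-based regularization.

```python
import numpy as np

def mlem(system_matrix, sinogram, n_iters=50, eps=1e-12):
    """Maximum likelihood expectation maximization for emission tomography.

    system_matrix : (n_bins, n_voxels) forward projector A
    sinogram      : (n_bins,) measured counts y
    """
    x = np.ones(system_matrix.shape[1])            # uniform initial image
    sensitivity = system_matrix.sum(axis=0) + eps  # A^T 1
    for _ in range(n_iters):
        projection = system_matrix @ x + eps       # A x
        ratio = system_matrix.T @ (sinogram / projection)
        x *= ratio / sensitivity                   # multiplicative MLEM update
    return x

# Tiny 2-voxel, 3-bin toy problem with noiseless data.
A = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
y = A @ np.array([4.0, 2.0])
print(mlem(A, y, n_iters=200))  # converges toward [4, 2]
```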
Affiliation(s)
- Zhaoheng Xie
- Department of Biomedical Engineering, University of California, Davis, CA, USA
- Tiantian Li
- Department of Biomedical Engineering, University of California, Davis, CA, USA
- Xuezhu Zhang
- Department of Biomedical Engineering, University of California, Davis, CA, USA
- Wenyuan Qi
- Canon Medical Research USA, Inc., Vernon Hills, IL, USA
- Evren Asma
- Canon Medical Research USA, Inc., Vernon Hills, IL, USA
- Jinyi Qi
- Department of Biomedical Engineering, University of California, Davis, CA, USA
63. Sitek A, Ahn S, Asma E, Chandler A, Ihsani A, Prevrhal S, Rahmim A, Saboury B, Thielemans K. Artificial Intelligence in PET: An Industry Perspective. PET Clin 2021;16:483-492. [PMID: 34353746] [DOI: 10.1016/j.cpet.2021.06.006]
Abstract
Artificial intelligence (AI) has significant potential to positively impact and advance medical imaging, including positron emission tomography (PET) imaging applications. AI has the ability to enhance and optimize all aspects of the PET imaging chain, from patient scheduling, patient setup, protocoling, data acquisition, detector signal processing, reconstruction, and image processing to interpretation. AI poses industry-specific challenges that will need to be addressed and overcome to maximize its future potential in PET. This article provides an overview of these industry-specific challenges for the development, standardization, commercialization, and clinical adoption of AI, and explores the potential enhancements to PET imaging brought on by AI in the near future. In particular, the combination of on-demand image reconstruction, AI, and custom-designed data-processing workflows may open new possibilities for innovation that would positively impact the industry and ultimately patients.
Affiliation(s)
- Arkadiusz Sitek
- Sano Centre for Computational Medicine, Nawojki 11 Street, Kraków 30-072, Poland
- Sangtae Ahn
- GE Research, 1 Research Circle KWC-1310C, Niskayuna, NY 12309, USA
- Evren Asma
- Canon Medical Research, 706 N Deerpath Drive, Vernon Hills, IL 60061, USA
- Adam Chandler
- Global Scientific Collaborations Group, United Imaging Healthcare, America, 9230 Kirby Drive, Houston, TX 77054, USA
- Alvin Ihsani
- NVIDIA, 2 Technology Park Drive, Westford, MA 01886, USA
- Sven Prevrhal
- Philips Research Europe, Röntgenstr. 22, Hamburg 22335, Germany
- Arman Rahmim
- Department of Radiology, University of British Columbia, BC Cancer, BC Cancer Research Institute, 675 West 10th Avenue, Office 6-112, Vancouver, British Columbia V5Z 1L3, Canada; Department of Physics, University of British Columbia, BC Cancer, BC Cancer Research Institute, 675 West 10th Avenue, Office 6-112, Vancouver, British Columbia V5Z 1L3, Canada
- Babak Saboury
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, 9000 Rockville Pike, Bethesda, MD 20892, USA; Department of Computer Science and Electrical Engineering, University of Maryland Baltimore County, Baltimore, MD, USA; Department of Radiology, Hospital of the University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA 19104, USA
- Kris Thielemans
- Institute of Nuclear Medicine, University College London, UCL Hospital Tower 5, 235 Euston Road, London NW1 2BU, UK; Algorithms and Software Consulting Ltd, 10 Laneway, London SW15 5HX, UK
64. Liu X, Li KW, Yang R, Geng LS. Review of Deep Learning Based Automatic Segmentation for Lung Cancer Radiotherapy. Front Oncol 2021;11:717039. [PMID: 34336704] [PMCID: PMC8323481] [DOI: 10.3389/fonc.2021.717039]
Abstract
Lung cancer is the leading cause of cancer-related mortality for males and females. Radiation therapy (RT) is one of the primary treatment modalities for lung cancer. While delivering the prescribed dose to tumor targets, it is essential to spare the tissues near the targets, the so-called organs-at-risk (OARs). Optimal RT planning benefits from accurate segmentation of the gross tumor volume and surrounding OARs. Manual segmentation is a time-consuming and tedious task for radiation oncologists. Therefore, it is crucial to develop automatic image segmentation to relieve radiation oncologists of the tedious contouring work. Currently, the atlas-based automatic segmentation technique is commonly used in clinical routines. However, this technique depends heavily on the similarity between the atlas and the image being segmented. With significant advances made in computer vision, deep learning, as a part of artificial intelligence, is attracting increasing attention in medical image automatic segmentation. In this article, we reviewed deep learning based automatic segmentation techniques related to lung cancer and compared them with the atlas-based automatic segmentation technique. At present, the auto-segmentation of OARs with relatively large volumes, such as the lung and heart, outperforms that of organs with small volumes, such as the esophagus. The average Dice similarity coefficients (DSCs) of the lung, heart, and liver are over 0.9, and the best DSC of the spinal cord reaches 0.9. However, the DSC of the esophagus ranges between 0.71 and 0.87, with uneven performance. In terms of the gross tumor volume, the average DSC is below 0.8. Although deep learning based automatic segmentation techniques show significant superiority over manual segmentation in many aspects, various issues still need to be solved. We discussed the potential issues in deep learning based automatic segmentation, including low contrast, dataset size, consensus guidelines, and network design. Clinical limitations and future research directions of deep learning based automatic segmentation were discussed as well.
Affiliation(s)
- Xi Liu
- School of Physics, Beihang University, Beijing, China
- Kai-Wen Li
- School of Physics, Beihang University, Beijing, China
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Medicine and Engineering, Key Laboratory of Big Data-Based Precision Medicine, Ministry of Industry and Information Technology, Beihang University, Beijing, China
- Ruijie Yang
- Department of Radiation Oncology, Peking University Third Hospital, Beijing, China
- Li-Sheng Geng
- School of Physics, Beihang University, Beijing, China
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Medicine and Engineering, Key Laboratory of Big Data-Based Precision Medicine, Ministry of Industry and Information Technology, Beihang University, Beijing, China
- Beijing Key Laboratory of Advanced Nuclear Materials and Physics, Beihang University, Beijing, China
- School of Physics and Microelectronics, Zhengzhou University, Zhengzhou, China
65. Yuan C, Zhang M, Huang X, Xie W, Lin X, Zhao W, Li B, Qian D. Diffuse large B-cell lymphoma segmentation in PET-CT images via hybrid learning for feature fusion. Med Phys 2021;48:3665-3678. [PMID: 33735451] [DOI: 10.1002/mp.14847]
Abstract
PURPOSE Diffuse large B-cell lymphoma (DLBCL) is an aggressive type of lymphoma with high mortality and poor prognosis that has a particularly high incidence in Asia. Accurate segmentation of DLBCL lesions is crucial for clinical radiation therapy. However, manual delineation of DLBCL lesions is tedious and time-consuming. Automatic segmentation provides an alternative solution but is difficult for diffuse lesions without sufficient utilization of multimodality information. Our work is the first study focusing on positron emission tomography and computed tomography (PET-CT) feature fusion for the DLBCL segmentation problem. We aim to improve the fusion of the complementary information contained in PET-CT imaging with a hybrid learning module in a supervised convolutional neural network. METHODS First, two encoder branches extract single-modality features. Next, the hybrid learning component uses them to generate spatial fusion maps that quantify the contribution of the complementary information. These feature fusion maps are then concatenated with the modality-specific (i.e., PET and CT) feature maps to obtain a representation of the final fused feature maps at different scales. Finally, the reconstruction part of our network creates a prediction map of DLBCL lesions by integrating and up-sampling the final fused feature maps from encoder blocks at different scales. RESULTS The ability of our method to detect foreground and segment lesions was evaluated in three independent body regions (nasopharynx, chest, and abdomen) of a set of 45 PET-CT scans. Extensive ablation experiments compared our method to four baseline techniques for multimodality fusion (input-level (IL) fusion, the multichannel (MC) strategy, the multibranch (MB) strategy, and quantitative weighting (QW) fusion). The results showed that our method achieved a high detection accuracy (99.63% in the nasopharynx, 99.51% in the chest, and 99.21% in the abdomen) and was superior in segmentation performance, with a mean Dice similarity coefficient (DSC) of 73.03% and a modified Hausdorff distance (MHD) of 4.39 mm, when compared with the baselines (DSC: IL: 53.08%, MC: 63.59%, MB: 69.98%, QW: 72.19%; MHD: IL: 12.16 mm, MC: 6.46 mm, MB: 4.83 mm, QW: 4.89 mm). CONCLUSIONS A promising segmentation method has been proposed for the challenging DLBCL lesions in PET-CT images, which improves the use of complementary information through feature fusion and may guide clinical radiotherapy. Statistical analysis based on P-values indicated significant differences between our proposed method and the baselines (most metrics: P < 0.05). This is preliminary research with a small sample size, and we will continue to collect data for a larger verification study.
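The modified Hausdorff distance reported above can be implemented generically as the larger of the two mean point-to-set distances between mask foregrounds; the sketch below follows this common (Dubuisson-Jain) formulation and may differ in detail from the authors' implementation.

```python
import numpy as np
from scipy.spatial.distance import cdist

def modified_hausdorff(pred_mask, truth_mask, spacing=(1.0, 1.0, 1.0)):
    """Modified Hausdorff distance: the larger of the two mean point-to-set
    distances between the foreground voxels of two binary masks, in mm."""
    a = np.argwhere(pred_mask) * np.asarray(spacing)
    b = np.argwhere(truth_mask) * np.asarray(spacing)
    d = cdist(a, b)                       # pairwise Euclidean distances
    return max(d.min(axis=1).mean(), d.min(axis=0).mean())
```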
Affiliation(s)
- Cheng Yuan
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200040, China
- Miao Zhang
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200025, China
- Xinyun Huang
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200025, China
- Wei Xie
- Department of Hematology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200025, China
- Xiaozhu Lin
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200025, China
- Weili Zhao
- Department of Hematology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200025, China
- Biao Li
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200025, China
- Dahong Qian
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200040, China
66. Huang D, Bai H, Wang L, Hou Y, Li L, Xia Y, Yan Z, Chen W, Chang L, Li W. The Application and Development of Deep Learning in Radiotherapy: A Systematic Review. Technol Cancer Res Treat 2021;20:15330338211016386. [PMID: 34142614] [PMCID: PMC8216350] [DOI: 10.1177/15330338211016386]
Abstract
With the massive use of computers, the growth and explosion of data have greatly promoted the development of artificial intelligence (AI). The rise of deep learning (DL) algorithms, such as convolutional neural networks (CNNs), has provided radiation oncologists with many promising tools that can simplify the complex radiotherapy process in the clinical work of radiation oncology, improve the accuracy and objectivity of diagnosis, and reduce the workload, thus enabling clinicians to spend more time on advanced decision-making tasks. As the development of DL gets closer to clinical practice, radiation oncologists will need to be more familiar with its principles to properly evaluate and use this powerful tool. In this paper, we explain the development and basic concepts of AI and discuss its application in radiation oncology based on different task categories of DL algorithms. This work clarifies the possibility of further development of DL in radiation oncology.
Affiliation(s)
- Danju Huang
- Department of Radiation Oncology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
- Han Bai
- Department of Radiation Oncology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
- Li Wang
- Department of Radiation Oncology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
- Yu Hou
- Department of Radiation Oncology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
- Lan Li
- Department of Radiation Oncology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
- Yaoxiong Xia
- Department of Radiation Oncology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
- Zhirui Yan
- Department of Radiation Oncology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
- Wenrui Chen
- Department of Radiation Oncology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
- Li Chang
- Department of Radiation Oncology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
- Wenhui Li
- Department of Radiation Oncology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
67. Liu Z, Mhlanga JC, Laforest R, Derenoncourt PR, Siegel BA, Jha AK. A Bayesian approach to tissue-fraction estimation for oncological PET segmentation. Phys Med Biol 2021;66. [PMID: 34125078] [PMCID: PMC8765116] [DOI: 10.1088/1361-6560/ac01f4]
Abstract
Tumor segmentation in oncological PET is challenging, a major reason being the partial-volume effects (PVEs) that arise due to low system resolution and finite voxel size. The latter results in tissue-fraction effects (TFEs), i.e. voxels contain a mixture of tissue classes. Conventional segmentation methods are typically designed to assign each image voxel as belonging to a certain tissue class. Thus, these methods are inherently limited in modeling TFEs. To address the challenge of accounting for PVEs, and in particular, TFEs, we propose a Bayesian approach to tissue-fraction estimation for oncological PET segmentation. Specifically, this Bayesian approach estimates the posterior mean of the fractional volume that the tumor occupies within each image voxel. The proposed method, implemented using a deep-learning-based technique, was first evaluated using clinically realistic 2D simulation studies with known ground truth, in the context of segmenting the primary tumor in PET images of patients with lung cancer. The evaluation studies demonstrated that the method accurately estimated the tumor-fraction areas and significantly outperformed widely used conventional PET segmentation methods, including a U-net-based method, on the task of segmenting the tumor. In addition, the proposed method was relatively insensitive to PVEs and yielded reliable tumor segmentation for different clinical-scanner configurations. The method was then evaluated using clinical images of patients with stage IIB/III non-small cell lung cancer from ACRIN 6668/RTOG 0235 multi-center clinical trial. Here, the results showed that the proposed method significantly outperformed all other considered methods and yielded accurate tumor segmentation on patient images with Dice similarity coefficient (DSC) of 0.82 (95% CI: 0.78, 0.86). In particular, the method accurately segmented relatively small tumors, yielding a high DSC of 0.77 for the smallest segmented cross-section of 1.30 cm2. Overall, this study demonstrates the efficacy of the proposed method to accurately segment tumors in PET images.
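To illustrate the core output of this approach, per-voxel tumor-fraction values rather than hard class labels, a toy fraction-regression network is sketched below; the architecture, loss, and data are placeholders and do not represent the authors' Bayesian estimator or its deep-learning implementation.

```python
import torch
import torch.nn as nn

class TissueFractionNet(nn.Module):
    """Toy network that outputs a tumor-fraction value in [0, 1] per voxel,
    instead of a hard class label, to model tissue-fraction effects."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, 1), nn.Sigmoid(),
        )

    def forward(self, pet_slice):
        return self.net(pet_slice)

model = TissueFractionNet()
pet = torch.randn(4, 1, 64, 64)
fraction_gt = torch.rand(4, 1, 64, 64)     # ground-truth fractional tumor volumes
loss = nn.functional.mse_loss(model(pet), fraction_gt)  # regression, not classification
loss.backward()
print(float(loss))
```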
Collapse
Affiliation(s)
- Ziping Liu
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO 63130, United States of America
| | - Joyce C Mhlanga
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO 63110, United States of America
| | - Richard Laforest
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO 63110, United States of America
| | - Paul-Robert Derenoncourt
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO 63110, United States of America
| | - Barry A Siegel
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO 63110, United States of America
| | - Abhinav K Jha
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO 63130, United States of America.,Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO 63110, United States of America
| |
Collapse
|
68
|
Sadaghiani MS, Rowe SP, Sheikhbahaei S. Applications of artificial intelligence in oncologic 18F-FDG PET/CT imaging: a systematic review. ANNALS OF TRANSLATIONAL MEDICINE 2021; 9:823. [PMID: 34268436 PMCID: PMC8246218 DOI: 10.21037/atm-20-6162] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/28/2020] [Accepted: 03/25/2021] [Indexed: 12/16/2022]
Abstract
Artificial intelligence (AI) is a growing field of research that is emerging as a promising adjunct to assist physicians in detection and management of patients with cancer. 18F-FDG PET imaging helps physicians in detection and management of patients with cancer. In this study we discuss the possible applications of AI in 18F-FDG PET imaging based on the published studies. A systematic literature review was performed in PubMed in early August 2020 to find the relevant studies. A total of 65 studies were available for review against the inclusion criteria, which included studies that developed an AI model based on 18F-FDG PET data in cancer to diagnose, differentiate, delineate, stage, assess response to therapy, determine prognosis, or improve image quality. Thirty-two studies met the inclusion criteria and are discussed in this review. The majority of studies are related to lung cancer. Other studied cancers included breast cancer, cervical cancer, head and neck cancer, lymphoma, pancreatic cancer, and sarcoma. All studies were based on human patients except for one, which was performed on rats. According to the included studies, machine learning (ML) models can help in detection, differentiation from benign lesions, segmentation, staging, response assessment, and prognosis determination. Despite the potential benefits of AI in cancer imaging and management, the routine implementation of AI-based models and 18F-FDG PET-derived radiomics in clinical practice is limited at least partially due to lack of standardized, reproducible, generalizable, and precise techniques.
Collapse
Affiliation(s)
- Mohammad S Sadaghiani
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
| | - Steven P Rowe
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
| | - Sara Sheikhbahaei
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
| |
Collapse
|
69
|
Bi L, Fulham M, Li N, Liu Q, Song S, Dagan Feng D, Kim J. Recurrent feature fusion learning for multi-modality pet-ct tumor segmentation. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 203:106043. [PMID: 33744750 DOI: 10.1016/j.cmpb.2021.106043] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/02/2020] [Accepted: 03/04/2021] [Indexed: 06/12/2023]
Abstract
BACKGROUND AND OBJECTIVE [18F]-fluorodeoxyglucose (FDG) positron emission tomography-computed tomography (PET-CT) is now the preferred imaging modality for staging many cancers. PET images characterize tumoral glucose metabolism while CT depicts the complementary anatomical localization of the tumor. Automatic tumor segmentation is an important step in image analysis in computer aided diagnosis systems. Recently, fully convolutional networks (FCNs), with their ability to leverage annotated datasets and extract image feature representations, have become the state-of-the-art in tumor segmentation. There are limited FCN-based methods that support multi-modality images, and current methods have primarily focused on the fusion of multi-modality image features at various stages, i.e., early-fusion where the multi-modality image features are fused prior to the FCN, late-fusion with the resultant features fused, and hyper-fusion where multi-modality image features are fused across multiple image feature scales. Early- and late-fusion methods, however, have inherent, limited freedom to fuse complementary multi-modality image features. The hyper-fusion methods learn different image features across different image feature scales that can result in inaccurate segmentations, in particular, in situations where the tumors have heterogeneous textures. METHODS We propose a recurrent fusion network (RFN), which consists of multiple recurrent fusion phases to progressively fuse the complementary multi-modality image features with intermediary segmentation results derived at individual recurrent fusion phases: (1) the recurrent fusion phases iteratively learn the image features and then refine the subsequent segmentation results; and, (2) the intermediary segmentation results allow our method to focus on learning the multi-modality image features around these intermediary segmentation results, which minimizes the risk of inconsistent feature learning. RESULTS We evaluated our method on two pathologically proven non-small cell lung cancer PET-CT datasets. We compared our method to the commonly used fusion methods (early-fusion, late-fusion and hyper-fusion) and the state-of-the-art PET-CT tumor segmentation methods on various network backbones (ResNet, DenseNet and 3D-UNet). Our results show that the RFN provides more accurate segmentation compared to the existing methods and is generalizable to different datasets. CONCLUSIONS We show that learning through multiple recurrent fusion phases allows the iterative re-use of multi-modality image features that refines tumor segmentation results. We also identify that our RFN produces consistent segmentation results across different network architectures.
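As a rough sketch of the recurrent-fusion idea (not the published RFN code), each phase below re-fuses the PET and CT feature maps together with the previous intermediate segmentation and emits a refined mask; the module structure and channel sizes are illustrative placeholders.

```python
# Minimal sketch of recurrent fusion: every phase sees both modalities plus the
# previous intermediate segmentation and produces a progressively refined mask.
import torch
import torch.nn as nn

class FusionPhase(nn.Module):
    def __init__(self, channels: int = 16):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * channels + 1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 1),
        )

    def forward(self, pet_feat, ct_feat, prev_seg):
        x = torch.cat([pet_feat, ct_feat, prev_seg], dim=1)
        return torch.sigmoid(self.fuse(x))

phases = nn.ModuleList([FusionPhase() for _ in range(3)])
pet_feat = torch.randn(1, 16, 64, 64)
ct_feat = torch.randn(1, 16, 64, 64)
seg = torch.zeros(1, 1, 64, 64)              # initial (empty) segmentation
for phase in phases:
    seg = phase(pet_feat, ct_feat, seg)      # intermediate result fed to the next phase
```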
Collapse
Affiliation(s)
- Lei Bi
- School of Computer Science, University of Sydney, NSW, Australia; Australian Research Council Training Centre for Innovative Bioengineering, NSW, Australia.
| | - Michael Fulham
- School of Computer Science, University of Sydney, NSW, Australia; Australian Research Council Training Centre for Innovative Bioengineering, NSW, Australia; Department of Molecular Imaging, Royal Prince Alfred Hospital, NSW, Australia
| | - Nan Li
- Department of Nuclear Medicine, Fudan University Shanghai Cancer Center, Fudan University, Shanghai, China
| | - Qiufang Liu
- Department of Nuclear Medicine, Fudan University Shanghai Cancer Center, Fudan University, Shanghai, China
| | - Shaoli Song
- Department of Nuclear Medicine, Fudan University Shanghai Cancer Center, Fudan University, Shanghai, China
| | - David Dagan Feng
- School of Computer Science, University of Sydney, NSW, Australia; Australian Research Council Training Centre for Innovative Bioengineering, NSW, Australia; Med-X Research Institute, Shanghai Jiao Tong University, Shanghai, China
| | - Jinman Kim
- School of Computer Science, University of Sydney, NSW, Australia; Australian Research Council Training Centre for Innovative Bioengineering, NSW, Australia.
| |
Collapse
|
70
|
Iantsen A, Ferreira M, Lucia F, Jaouen V, Reinhold C, Bonaffini P, Alfieri J, Rovira R, Masson I, Robin P, Mervoyer A, Rousseau C, Kridelka F, Decuypere M, Lovinfosse P, Pradier O, Hustinx R, Schick U, Visvikis D, Hatt M. Convolutional neural networks for PET functional volume fully automatic segmentation: development and validation in a multi-center setting. Eur J Nucl Med Mol Imaging 2021; 48:3444-3456. [PMID: 33772335 PMCID: PMC8440243 DOI: 10.1007/s00259-021-05244-z] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2020] [Accepted: 02/07/2021] [Indexed: 11/12/2022]
Abstract
Purpose In this work, we addressed fully automatic determination of tumor functional uptake from positron emission tomography (PET) images without relying on other image modalities or additional prior constraints, in the context of multicenter images with heterogeneous characteristics. Methods In cervical cancer, an additional challenge is the location of the tumor uptake near or even stuck to the bladder. PET datasets of 232 patients from five institutions were exploited. To avoid unreliable manual delineations, the ground truth was generated with a semi-automated approach: a volume containing the tumor and excluding the bladder was first manually determined, then a well-validated, semi-automated approach relying on the Fuzzy locally Adaptive Bayesian (FLAB) algorithm was applied to generate the ground truth. Our model built on the U-Net architecture incorporates residual blocks with concurrent spatial squeeze and excitation modules, as well as learnable non-linear downsampling and upsampling blocks. Experiments relied on cross-validation (four institutions for training and validation, and the fifth for testing). Results The model achieved good Dice similarity coefficient (DSC) with little variability across institutions (0.80 ± 0.03), with higher recall (0.90 ± 0.05) than precision (0.75 ± 0.05) and improved results over the standard U-Net (DSC 0.77 ± 0.05, recall 0.87 ± 0.02, precision 0.74 ± 0.08). Both vastly outperformed a fixed threshold at 40% of SUVmax (DSC 0.33 ± 0.15, recall 0.52 ± 0.17, precision 0.30 ± 0.16). In all cases, the model could determine the tumor uptake without including the bladder. Neither shape priors nor anatomical information was required to achieve efficient training. Conclusion The proposed method could facilitate the deployment of a fully automated radiomics pipeline in such a challenging multicenter context. Supplementary Information The online version contains supplementary material available at 10.1007/s00259-021-05244-z.
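The residual blocks with concurrent spatial and channel squeeze-and-excitation mentioned above follow the general scSE pattern; the sketch below shows a generic 3D scSE module rather than the authors' exact implementation, and the channel count, reduction ratio, and the element-wise maximum used to combine the two branches are illustrative choices (addition is another common option).

```python
# Minimal sketch of a concurrent spatial-and-channel squeeze-and-excitation
# (scSE) block: one branch recalibrates channels, the other recalibrates
# spatial locations, and the stronger of the two responses is kept.
import torch
import torch.nn as nn

class SCSE(nn.Module):
    def __init__(self, channels: int, reduction: int = 2):
        super().__init__()
        self.channel_se = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv3d(channels // reduction, channels, 1), nn.Sigmoid(),
        )
        self.spatial_se = nn.Sequential(nn.Conv3d(channels, 1, 1), nn.Sigmoid())

    def forward(self, x):
        return torch.max(x * self.channel_se(x), x * self.spatial_se(x))

block = SCSE(channels=8)
out = block(torch.randn(1, 8, 16, 32, 32))   # (batch, channels, D, H, W)
```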
Collapse
Affiliation(s)
- Andrei Iantsen
- LaTIM, INSERM, UMR 1101, University Brest, Brest, France.
| | - Marta Ferreira
- GIGA-CRC in vivo Imaging, University of Liège, Liège, Belgium
| | - Francois Lucia
- LaTIM, INSERM, UMR 1101, University Brest, Brest, France
| | - Vincent Jaouen
- LaTIM, INSERM, UMR 1101, University Brest, Brest, France
| | - Caroline Reinhold
- Department of Radiology, McGill University Health Centre (MUHC), Montreal, Canada
| | - Pietro Bonaffini
- Department of Radiology, McGill University Health Centre (MUHC), Montreal, Canada
| | - Joanne Alfieri
- Department of Radiation Oncology, McGill University Health Centre (MUHC), Montreal, Canada
| | - Ramon Rovira
- Gynecology Oncology and Laparoscopy Department, Hospital de la Santa Creu i Sant Pau, Barcelona, Spain
| | - Ingrid Masson
- Department of Radiation Oncology, Institut de Cancérologie de l'Ouest (ICO), Nantes, France
| | - Philippe Robin
- Nuclear Medicine Department, University Hospital, Brest, France
| | - Augustin Mervoyer
- Department of Radiation Oncology, Institut de Cancérologie de l'Ouest (ICO), Nantes, France
| | - Caroline Rousseau
- Nuclear Medicine Department, Institut de Cancérologie de l'Ouest (ICO), Nantes, France
| | - Frédéric Kridelka
- Division of Oncological Gynecology, University Hospital of Liège, Liège, Belgium
| | - Marjolein Decuypere
- Division of Oncological Gynecology, University Hospital of Liège, Liège, Belgium
| | - Pierre Lovinfosse
- Division of Nuclear Medicine and Oncological Imaging, University Hospital of Liège, Liège, Belgium
| | | | - Roland Hustinx
- GIGA-CRC in vivo Imaging, University of Liège, Liège, Belgium
| | - Ulrike Schick
- LaTIM, INSERM, UMR 1101, University Brest, Brest, France
| | | | - Mathieu Hatt
- LaTIM, INSERM, UMR 1101, University Brest, Brest, France
| |
Collapse
|
71
|
Cui Y, Arimura H, Nakano R, Yoshitake T, Shioyama Y, Yabuuchi H. Automated approach for segmenting gross tumor volumes for lung cancer stereotactic body radiation therapy using CT-based dense V-networks. JOURNAL OF RADIATION RESEARCH 2021; 62:346-355. [PMID: 33480438 PMCID: PMC7948852 DOI: 10.1093/jrr/rraa132] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/16/2020] [Revised: 09/12/2020] [Indexed: 06/12/2023]
Abstract
The aim of this study was to develop an automated segmentation approach for small gross tumor volumes (GTVs) in 3D planning computed tomography (CT) images using dense V-networks (DVNs) that offer more advantages in segmenting smaller structures than conventional V-networks. Regions of interest (ROI) with dimensions of 50 × 50 × 6-72 pixels in the planning CT images were cropped based on the GTV centroids when applying stereotactic body radiotherapy (SBRT) to patients. Segmentation accuracy of GTV contours for 192 lung cancer patients [with the following tumor types: 118 solid, 53 part-solid types and 21 pure ground-glass opacity (pure GGO)], who underwent SBRT, were evaluated based on a 10-fold cross-validation test using Dice's similarity coefficient (DSC) and Hausdorff distance (HD). For each case, 11 segmented GTVs consisting of three single outputs, four logical AND outputs, and four logical OR outputs from combinations of two or three outputs from DVNs were obtained by three runs with different initial weights. The AND output (combination of three outputs) achieved the highest values of average 3D-DSC (0.832 ± 0.074) and HD (4.57 ± 2.44 mm). The average 3D DSCs from the AND output for solid, part-solid and pure GGO types were 0.838 ± 0.074, 0.822 ± 0.078 and 0.819 ± 0.059, respectively. This study suggests that the proposed approach could be useful in segmenting GTVs for planning lung cancer SBRT.
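The output-combination step described above (logical AND/OR of masks from independently trained runs, scored with Dice) can be written in a few lines; the masks below are random placeholders, not study data.

```python
# Minimal sketch: merge three binary masks by voxel-wise logical AND or OR and
# score each merged mask against a reference with the Dice similarity coefficient.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

rng = np.random.default_rng(0)
runs = [rng.random((6, 50, 50)) > 0.5 for _ in range(3)]   # three model outputs
reference = rng.random((6, 50, 50)) > 0.5                  # ground-truth mask

and_mask = np.logical_and.reduce(runs)   # consensus of all three runs
or_mask = np.logical_or.reduce(runs)     # union of the three runs
print(dice(and_mask, reference), dice(or_mask, reference))
```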
Collapse
Affiliation(s)
- Yunhao Cui
- Department of Health Sciences, Graduate School of Medical Sciences, Kyushu University, 3-1-1, Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
| | - Hidetaka Arimura
- Department of Health Sciences, Faculty of Medical Sciences, Kyushu University, 3-1-1, Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
| | - Risa Nakano
- Department of Health Sciences, Graduate School of Medical Sciences, Kyushu University, 3-1-1, Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
| | - Tadamasa Yoshitake
- Department of Clinical Radiology, Graduate School of Medical Sciences, Kyushu University, 3-1-1, Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
| | - Yoshiyuki Shioyama
- Saga International Heavy Ion Cancer Treatment Foundation, 3049 Harakogamachi, Tosu-shi, Saga 841-0071, Japan
| | - Hidetake Yabuuchi
- Department of Health Sciences, Faculty of Medical Sciences, Kyushu University, 3-1-1, Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
| |
Collapse
|
72
|
Abstract
Positron emission tomography (PET)/computed tomography (CT) are nuclear diagnostic imaging modalities that are routinely deployed for cancer staging and monitoring. They hold the advantage of detecting disease-related biochemical and physiologic abnormalities in advance of anatomical changes, and are thus widely used for staging of disease progression, identification of the treatment gross tumor volume, monitoring of disease, as well as prediction of outcomes and personalization of treatment regimens. Among the arsenal of different functional imaging modalities, nuclear imaging has benefited from early adoption of quantitative image analysis, starting from simple standard uptake value normalization to more advanced extraction of complex imaging uptake patterns, thanks to the application of sophisticated image processing and machine learning algorithms. In this review, we discuss the application of image processing and machine/deep learning techniques to PET/CT imaging with special focus on the oncological radiotherapy domain as a case study and draw examples from our work and others to highlight the current status and future potential.
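As a concrete example of the standardized uptake value normalization the review mentions, body-weight SUV divides the voxel activity concentration by the injected dose per unit body weight; the numbers below are illustrative and decay correction is omitted for brevity.

```python
# Worked example of body-weight SUV normalization:
# SUV = activity concentration (kBq/mL) / (injected dose (kBq) / body weight (g)).
import numpy as np

def suv_bw(activity_kbq_per_ml: np.ndarray,
           injected_dose_mbq: float,
           body_weight_kg: float) -> np.ndarray:
    injected_dose_kbq = injected_dose_mbq * 1000.0
    body_weight_g = body_weight_kg * 1000.0
    return activity_kbq_per_ml / (injected_dose_kbq / body_weight_g)

voxels = np.array([5.2, 12.8, 30.1])   # activity concentrations in kBq/mL
print(suv_bw(voxels, injected_dose_mbq=350.0, body_weight_kg=75.0))
# Roughly [1.11, 2.74, 6.45] for this illustrative dose and weight.
```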
Collapse
Affiliation(s)
- Lise Wei
- Department of Radiation Oncology, Physics Division, University of Michigan, Ann Arbor, MI
| | - Issam El Naqa
- Department of Radiation Oncology, Physics Division, University of Michigan, Ann Arbor, MI.
| |
Collapse
|
73
|
Arabi H, AkhavanAllaf A, Sanaat A, Shiri I, Zaidi H. The promise of artificial intelligence and deep learning in PET and SPECT imaging. Phys Med 2021; 83:122-137. [DOI: 10.1016/j.ejmp.2021.03.008] [Citation(s) in RCA: 84] [Impact Index Per Article: 21.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/27/2020] [Revised: 02/18/2021] [Accepted: 03/03/2021] [Indexed: 02/06/2023] Open
|
74
|
Torres-Velázquez M, Chen WJ, Li X, McMillan AB. Application and Construction of Deep Learning Networks in Medical Imaging. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2021; 5:137-159. [PMID: 34017931 PMCID: PMC8132932 DOI: 10.1109/trpms.2020.3030611] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
Abstract
Deep learning (DL) approaches are part of the machine learning (ML) subfield concerned with the development of computational models to train artificial intelligence systems. DL models are characterized by automatically extracting high-level features from the input data to learn the relationship between matching datasets. Thus, its implementation offers an advantage over common ML methods that often require the practitioner to have some domain knowledge of the input data to select the best latent representation. As a result of this advantage, DL has been successfully applied within the medical imaging field to address problems, such as disease classification and tumor segmentation for which it is difficult or impossible to determine which image features are relevant. Therefore, taking into consideration the positive impact of DL on the medical imaging field, this article reviews the key concepts associated with its evolution and implementation. The sections of this review summarize the milestones related to the development of the DL field, followed by a description of the elements of a deep neural network and an overview of its application within the medical imaging field. Subsequently, the key steps necessary to implement a supervised DL application are defined, and associated limitations are discussed.
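The key steps of a supervised DL application that the review outlines (data, model, loss, optimization loop) reduce to a pattern like the toy PyTorch loop below; the tensors, sizes, and labels are placeholders, not material from the cited article.

```python
# Minimal sketch of a supervised training loop: define the model, the loss and
# the optimizer, then iterate forward pass, backpropagation and parameter update.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(16, 1, 32, 32)      # a toy batch of single-channel images
labels = torch.randint(0, 2, (16,))      # binary labels for the batch

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)   # forward pass and loss computation
    loss.backward()                         # backpropagation
    optimizer.step()                        # parameter update
```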
Collapse
Affiliation(s)
- Maribel Torres-Velázquez
- Department of Biomedical Engineering, College of Engineering, University of Wisconsin-Madison, Madison, WI 53705 USA
| | - Wei-Jie Chen
- Department of Electrical and Computer Engineering, College of Engineering, University of Wisconsin-Madison, Madison, WI 53705 USA
| | - Xue Li
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI 53705 USA
| | - Alan B McMillan
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI 53705 USA, and also with the Department of Medical Physics, University of Wisconsin-Madison, Madison, WI 53705 USA
| |
Collapse
|
75
|
Deep learning in Nuclear Medicine—focus on CNN-based approaches for PET/CT and PET/MR: where do we stand? Clin Transl Imaging 2021. [DOI: 10.1007/s40336-021-00411-6] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
|
76
|
Naser MA, van Dijk LV, He R, Wahid KA, Fuller CD. Tumor Segmentation in Patients with Head and Neck Cancers Using Deep Learning Based-on Multi-modality PET/CT Images. HEAD AND NECK TUMOR SEGMENTATION : FIRST CHALLENGE, HECKTOR 2020, HELD IN CONJUNCTION WITH MICCAI 2020, LIMA, PERU, OCTOBER 4, 2020, PROCEEDINGS 2021; 12603:85-98. [PMID: 33724743 PMCID: PMC7929493 DOI: 10.1007/978-3-030-67194-5_10] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Abstract
Segmentation of head and neck cancer (HNC) primary tumors on medical images is an essential, yet labor-intensive, aspect of radiotherapy. PET/CT imaging offers a unique ability to capture metabolic and anatomic information, which is invaluable for tumor detection and border definition. An automatic segmentation tool that could leverage the dual streams of information from PET and CT imaging simultaneously could substantially propel HNC radiotherapy workflows forward. Herein, we leverage a multi-institutional PET/CT dataset of 201 HNC patients, as part of the MICCAI segmentation challenge, to develop novel deep learning architectures for primary tumor auto-segmentation for HNC patients. We preprocess PET/CT images by normalizing intensities and applying data augmentation to mitigate overfitting. Both 2D and 3D convolutional neural networks based on the U-net architecture, which were optimized with a model loss function based on a combination of Dice similarity coefficient (DSC) and binary cross entropy, were implemented. The median and mean DSC values comparing the predicted tumor segmentation with the ground truth achieved by the models through 5-fold cross validation are 0.79 and 0.69 for the 3D model, respectively, and 0.79 and 0.67 for the 2D model, respectively. These promising results show potential to provide an automatic, accurate, and efficient approach for primary tumor auto-segmentation to improve the clinical practice of HNC treatment.
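The loss the abstract describes, a combination of the Dice similarity coefficient and binary cross-entropy, can be sketched as follows; the equal weighting and smoothing constant are illustrative choices rather than the paper's exact settings.

```python
# Minimal sketch of a combined Dice + binary cross-entropy segmentation loss.
import torch
import torch.nn as nn

def dice_bce_loss(logits: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    prob = torch.sigmoid(logits)
    intersection = (prob * target).sum()
    dice = (2.0 * intersection + eps) / (prob.sum() + target.sum() + eps)
    bce = nn.functional.binary_cross_entropy_with_logits(logits, target)
    return (1.0 - dice) + bce               # equal weighting of the two terms

logits = torch.randn(1, 1, 64, 64, requires_grad=True)   # raw network output
target = (torch.rand(1, 1, 64, 64) > 0.7).float()         # binary ground-truth mask
dice_bce_loss(logits, target).backward()
```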
Collapse
Affiliation(s)
- Mohamed A Naser
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
| | - Lisanne V van Dijk
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
| | - Renjie He
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
| | - Kareem A Wahid
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
| | - Clifton D Fuller
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
| |
Collapse
|
77
|
Leung KH, Marashdeh W, Wray R, Ashrafinia S, Pomper MG, Rahmim A, Jha AK. A physics-guided modular deep-learning based automated framework for tumor segmentation in PET. Phys Med Biol 2020; 65:245032. [PMID: 32235059 DOI: 10.1088/1361-6560/ab8535] [Citation(s) in RCA: 37] [Impact Index Per Article: 7.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/27/2022]
Abstract
An important need exists for reliable positron emission tomography (PET) tumor-segmentation methods for tasks such as PET-based radiation-therapy planning and reliable quantification of volumetric and radiomic features. To address this need, we propose an automated physics-guided deep-learning-based three-module framework to segment PET images on a per-slice basis. The framework is designed to help address the challenges of limited spatial resolution and lack of clinical training data with known ground-truth tumor boundaries in PET. The first module generates PET images containing highly realistic tumors with known ground-truth using a new stochastic and physics-based approach, addressing lack of training data. The second module trains a modified U-net using these images, helping it learn the tumor-segmentation task. The third module fine-tunes this network using a small-sized clinical dataset with radiologist-defined delineations as surrogate ground-truth, helping the framework learn features potentially missed in simulated tumors. The framework was evaluated in the context of segmenting primary tumors in 18F-fluorodeoxyglucose (FDG)-PET images of patients with lung cancer. The framework's accuracy, generalizability to different scanners, sensitivity to partial volume effects (PVEs) and efficacy in reducing the number of training images were quantitatively evaluated using Dice similarity coefficient (DSC) and several other metrics. The framework yielded reliable performance in both simulated (DSC: 0.87 (95% confidence interval (CI): 0.86, 0.88)) and patient images (DSC: 0.73 (95% CI: 0.71, 0.76)), outperformed several widely used semi-automated approaches, accurately segmented relatively small tumors (smallest segmented cross-section was 1.83 cm2), generalized across five PET scanners (DSC: 0.74 (95% CI: 0.71, 0.76)), was relatively unaffected by PVEs, and required low training data (training with data from even 30 patients yielded DSC of 0.70 (95% CI: 0.68, 0.71)). In conclusion, the proposed automated physics-guided deep-learning-based PET-segmentation framework yielded reliable performance in delineating tumors in FDG-PET images of patients with lung cancer.
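The pretrain-then-fine-tune pattern at the heart of the framework (train on simulated tumors with known ground truth, then fine-tune on a small clinical set with radiologist delineations) can be sketched as below; the network, data, and learning rates are placeholders, not the published configuration.

```python
# Minimal sketch of two-stage training: pretrain on a large simulated set,
# then fine-tune the same network on a small clinical set.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 1, 1))
loss_fn = nn.BCEWithLogitsLoss()

def train(images, masks, lr, epochs):
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(net(images), masks).backward()
        opt.step()

# Stage 1: simulated images with known ground truth (placeholder tensors).
train(torch.randn(32, 1, 64, 64), (torch.rand(32, 1, 64, 64) > 0.8).float(), lr=1e-3, epochs=10)
# Stage 2: fine-tune on a small clinical set with surrogate ground truth,
# typically with a lower learning rate so the pretrained knowledge is retained.
train(torch.randn(4, 1, 64, 64), (torch.rand(4, 1, 64, 64) > 0.8).float(), lr=1e-4, epochs=5)
```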
Collapse
Affiliation(s)
- Kevin H Leung
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- The Russell H. Morgan Department of Radiology, Johns Hopkins University, Baltimore, MD, United States of America
| | - Wael Marashdeh
- Department of Radiology and Nuclear Medicine, Jordan University of Science and Technology, Ar Ramtha, Jordan
| | - Rick Wray
- Memorial Sloan Kettering Cancer Center, Greater New York City Area, NY, United States of America
| | - Saeed Ashrafinia
- The Russell H. Morgan Department of Radiology, Johns Hopkins University, Baltimore, MD, United States of America
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, United States of America
| | - Martin G Pomper
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- The Russell H. Morgan Department of Radiology, Johns Hopkins University, Baltimore, MD, United States of America
| | - Arman Rahmim
- The Russell H. Morgan Department of Radiology, Johns Hopkins University, Baltimore, MD, United States of America
- Departments of Radiology and Physics, University of British Columbia, Vancouver, BC, Canada
| | - Abhinav K Jha
- Department of Biomedical Engineering and Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, MO, United States of America
| |
Collapse
|
78
|
Li W, Liu H, Cheng F, Li Y, Li S, Yan J. Artificial intelligence applications for oncological positron emission tomography imaging. Eur J Radiol 2020; 134:109448. [PMID: 33307463 DOI: 10.1016/j.ejrad.2020.109448] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2020] [Revised: 10/07/2020] [Accepted: 11/26/2020] [Indexed: 12/16/2022]
Abstract
Positron emission tomography (PET), a functional and dynamic molecular imaging technique, is generally used to reveal tumors' biological behavior. Radiomics allows a high-throughput extraction of multiple features from images with artificial intelligence (AI) approaches and is developing rapidly worldwide. With the development of PET radiomics, quantitative and objective features of medical images have been explored to identify reliable biomarkers. This paper will review the current clinical exploration of PET-based classical machine learning and deep learning methods, including disease diagnosis, the prediction of histological subtype, gene mutation status, tumor metastasis, tumor relapse, therapeutic side effects, therapeutic intervention and evaluation of prognosis. The applications of AI in oncology will be mainly discussed. The image-guided biopsy or surgery assisted by PET-based AI will be introduced as well. This paper aims to present the applications and methods of AI for PET imaging, which may offer important details for further clinical studies. Relevant precautions are put forward and future research directions are suggested.
Collapse
Affiliation(s)
- Wanting Li
- Shanxi Medical University, Taiyuan 030009, PR China; Department of Nuclear Medicine, First Hospital of Shanxi Medical University, Taiyuan 030001, PR China; Collaborative Innovation Center for Molecular Imaging, Taiyuan 030001, PR China
| | - Haiyan Liu
- Department of Nuclear Medicine, First Hospital of Shanxi Medical University, Taiyuan 030001, PR China; Collaborative Innovation Center for Molecular Imaging, Taiyuan 030001, PR China; Cellular Physiology Key Laboratory of Ministry of Education, Translational Medicine Research Center, Shanxi Medical University, Taiyuan 030001, PR China
| | - Feng Cheng
- Shanxi Medical University, Taiyuan 030009, PR China
| | - Yanhua Li
- Shanxi Medical University, Taiyuan 030009, PR China
| | - Sijin Li
- Shanxi Medical University, Taiyuan 030009, PR China; Department of Nuclear Medicine, First Hospital of Shanxi Medical University, Taiyuan 030001, PR China; Collaborative Innovation Center for Molecular Imaging, Taiyuan 030001, PR China.
| | - Jiangwei Yan
- Shanxi Medical University, Taiyuan 030009, PR China.
| |
Collapse
|
79
|
Jin D, Guo D, Ho TY, Harrison AP, Xiao J, Tseng CK, Lu L. DeepTarget: Gross tumor and clinical target volume segmentation in esophageal cancer radiotherapy. Med Image Anal 2020; 68:101909. [PMID: 33341494 DOI: 10.1016/j.media.2020.101909] [Citation(s) in RCA: 40] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2020] [Revised: 09/10/2020] [Accepted: 11/13/2020] [Indexed: 12/19/2022]
Abstract
Gross tumor volume (GTV) and clinical target volume (CTV) delineation are two critical steps in the cancer radiotherapy planning. GTV defines the primary treatment area of the gross tumor, while CTV outlines the sub-clinical malignant disease. Automatic GTV and CTV segmentation are both challenging for distinct reasons: GTV segmentation relies on the radiotherapy computed tomography (RTCT) image appearance, which suffers from poor contrast with the surrounding tissues, while CTV delineation relies on a mixture of predefined and judgement-based margins. High intra- and inter-user variability makes this a particularly difficult task. We develop tailored methods solving each task in the esophageal cancer radiotherapy, together leading to a comprehensive solution for the target contouring task. Specifically, we integrate the RTCT and positron emission tomography (PET) modalities together into a two-stream chained deep fusion framework taking advantage of both modalities to facilitate more accurate GTV segmentation. For CTV segmentation, since it is highly context-dependent (it must encompass the GTV and involved lymph nodes while also avoiding excessive exposure to the organs at risk), we formulate it as a deep contextual appearance-based problem using encoded spatial distances of these anatomical structures. This better emulates the margin- and appearance-based CTV delineation performed by oncologists. Adding to our contributions, for the GTV segmentation we propose a simple yet effective progressive semantically-nested network (PSNN) backbone that outperforms more complicated models. Our work is the first to provide a comprehensive solution for the esophageal GTV and CTV segmentation in radiotherapy planning. Extensive 4-fold cross-validation on 148 esophageal cancer patients, the largest analysis to date, was carried out for both tasks. The results demonstrate that our GTV and CTV segmentation approaches significantly improve the performance over previous state-of-the-art works, e.g., by 8.7% increases in Dice score (DSC) and 32.9mm reduction in Hausdorff distance (HD) for GTV segmentation, and by 3.4% increases in DSC and 29.4mm reduction in HD for CTV segmentation.
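The "encoded spatial distances" used for CTV prediction can be illustrated with Euclidean distance transforms of the relevant masks stacked with the CT as extra input channels; the masks below are synthetic and this is not the authors' implementation.

```python
# Minimal sketch: turn binary anatomy masks into distance maps and stack them
# with the CT slice as additional input channels for a context-aware model.
import numpy as np
from scipy.ndimage import distance_transform_edt

ct = np.random.randn(64, 64).astype(np.float32)            # placeholder CT slice
gtv_mask = np.zeros((64, 64), dtype=bool)
gtv_mask[28:36, 30:38] = True                               # toy GTV region
oar_mask = np.zeros((64, 64), dtype=bool)
oar_mask[10:20, 10:50] = True                               # toy organ-at-risk region

# Distance (in voxels) from every voxel to the nearest voxel of each structure.
gtv_dist = distance_transform_edt(~gtv_mask)
oar_dist = distance_transform_edt(~oar_mask)

model_input = np.stack([ct, gtv_dist, oar_dist], axis=0)    # channels-first input
print(model_input.shape)                                    # (3, 64, 64)
```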
Collapse
Affiliation(s)
| | | | | | | | - Jing Xiao
- Ping An Technology, Shenzhen, Guangdong, China
| | | | - Le Lu
- PAII Inc., Bethesda, MD, USA
| |
Collapse
|
80
|
Hatt M, Cheze Le Rest C, Antonorsi N, Tixier F, Tankyevych O, Jaouen V, Lucia F, Bourbonne V, Schick U, Badic B, Visvikis D. Radiomics in PET/CT: Current Status and Future AI-Based Evolutions. Semin Nucl Med 2020; 51:126-133. [PMID: 33509369 DOI: 10.1053/j.semnuclmed.2020.09.002] [Citation(s) in RCA: 29] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
Abstract
This short review aims at providing the readers with an update on the current status, as well as future perspectives, in the quickly evolving field of radiomics applied to PET/CT imaging. Numerous pitfalls have been identified in study design, data acquisition, segmentation, features calculation and modeling by the radiomics community, and these are often the same issues across all image modalities and clinical applications; however, some of these are specific to PET/CT (and SPECT/CT) imaging, and the present paper therefore focuses on those. In most cases, recommendations and potential methodological solutions do exist and should therefore be followed to improve the overall quality and reproducibility of published studies. In terms of future evolutions, the techniques from the larger field of artificial intelligence (AI), including those relying on deep neural networks (also known as deep learning), have already shown impressive potential to provide solutions, especially in terms of automation, but also perhaps to fully replace the tools the radiomics community has been using until now in order to build the usual radiomics workflow. Some important challenges remain to be addressed before the full impact of AI may be realized, but overall the field has made striking advances over the last few years, and advances are expected to continue at a rapid pace.
Collapse
Affiliation(s)
- Mathieu Hatt
- LaTIM, INSERM, UMR 1101, University of Brest, CHRU Brest, France
| | - Catherine Cheze Le Rest
- LaTIM, INSERM, UMR 1101, University of Brest, CHRU Brest, France; Nuclear Medicine Department, CHU Milétrie, Poitiers, France
| | - Nils Antonorsi
- Nuclear Medicine Department, CHU Milétrie, Poitiers, France
| | - Florent Tixier
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States of America
| | | | - Vincent Jaouen
- LaTIM, INSERM, UMR 1101, University of Brest, CHRU Brest, France; IMT-Atlantique, Plouzané, France
| | - Francois Lucia
- LaTIM, INSERM, UMR 1101, University of Brest, CHRU Brest, France
| | | | - Ulrike Schick
- LaTIM, INSERM, UMR 1101, University of Brest, CHRU Brest, France
| | - Bogdan Badic
- LaTIM, INSERM, UMR 1101, University of Brest, CHRU Brest, France
| | | |
Collapse
|
81
|
Krarup MMK, Krokos G, Subesinghe M, Nair A, Fischer BM. Artificial Intelligence for the Characterization of Pulmonary Nodules, Lung Tumors and Mediastinal Nodes on PET/CT. Semin Nucl Med 2020; 51:143-156. [PMID: 33509371 DOI: 10.1053/j.semnuclmed.2020.09.001] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Abstract
Lung cancer is the leading cause of cancer related death around the world although early diagnosis remains vital to enabling access to curative treatment options. This article briefly describes the current role of imaging, in particular 2-deoxy-2-[18F]fluoro-D-glucose (FDG) PET/CT, in lung cancer and specifically the role of artificial intelligence with CT followed by a detailed review of the published studies applying artificial intelligence (ie, machine learning and deep learning), on FDG PET or combined PET/CT images with the purpose of early detection and diagnosis of pulmonary nodules, and characterization of lung tumors and mediastinal lymph nodes. A comprehensive search was performed on Pubmed, Embase, and clinical trial databases. The studies were analyzed with a modified version of the Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis (TRIPOD) and Prediction model Risk Of Bias Assessment Tool (PROBAST) statement. The search resulted in 361 studies; of these 29 were included; all retrospective; none were clinical trials. Twenty-two records evaluated standard machine learning (ML) methods on imaging features (ie, support vector machine), and 7 studies evaluated new ML methods (ie, deep learning) applied directly on PET or PET/CT images. The studies mainly reported positive results regarding the use of ML methods for diagnosing pulmonary nodules, characterizing lung tumors and mediastinal lymph nodes. However, 22 of the 29 studies were lacking a relevant comparator and/or lacking independent testing of the model. Application of ML methods with feature and image input from PET/CT for diagnosing and characterizing lung cancer is a relatively young area of research with great promise. Nevertheless, current published studies are often under-powered and lacking a clinically relevant comparator and/or independent testing.
Collapse
Affiliation(s)
| | - Georgios Krokos
- King's College London & Guy's and St. Thomas' PET Centre, St. Thomas' Hospital, London, UK
| | - Manil Subesinghe
- King's College London & Guy's and St. Thomas' PET Centre, St. Thomas' Hospital, London, UK; Department of Cancer Imaging, School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
| | - Arjun Nair
- Department of Radiology, University College London Hospitals NHS Foundation Trust, London, UK
| | - Barbara Malene Fischer
- Department of Clinical Physiology, Nuclear Medicin and PET, Rigshospitalet, Copenhagen, Denmark; King's College London & Guy's and St. Thomas' PET Centre, St. Thomas' Hospital, London, UK; King's College London & Guy's and St. Thomas' PET Centre, St. Thomas' Hospital, London, UK.
| |
Collapse
|
82
|
Zukotynski K, Gaudet V, Uribe CF, Mathotaarachchi S, Smith KC, Rosa-Neto P, Bénard F, Black SE. Machine Learning in Nuclear Medicine: Part 2-Neural Networks and Clinical Aspects. J Nucl Med 2020; 62:22-29. [PMID: 32978286 DOI: 10.2967/jnumed.119.231837] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2020] [Accepted: 08/13/2020] [Indexed: 12/12/2022] Open
Abstract
This article is the second part in our machine learning series. Part 1 provided a general overview of machine learning in nuclear medicine. Part 2 focuses on neural networks. We start with an example illustrating how neural networks work and a discussion of potential applications. Recognizing that there is a spectrum of applications, we focus on recent publications in the areas of image reconstruction, low-dose PET, disease detection, and models used for diagnosis and outcome prediction. Finally, since the way machine learning algorithms are reported in the literature is extremely variable, we conclude with a call to arms regarding the need for standardized reporting of design and outcome metrics and we propose a basic checklist our community might follow going forward.
Collapse
Affiliation(s)
- Katherine Zukotynski
- Departments of Medicine and Radiology, McMaster University, Hamilton, Ontario, Canada
| | - Vincent Gaudet
- Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, Ontario, Canada
| | - Carlos F Uribe
- PET Functional Imaging, BC Cancer, Vancouver, British Columbia, Canada
| | | | - Kenneth C Smith
- Department of Electrical and Computer Engineering, University of Toronto, Toronto, Ontario, Canada
| | - Pedro Rosa-Neto
- Translational Neuroimaging Lab, McGill University, Montreal, Quebec, Canada
| | - François Bénard
- PET Functional Imaging, BC Cancer, Vancouver, British Columbia, Canada.,Department of Radiology, University of British Columbia, Vancouver, British Columbia, Canada; and
| | - Sandra E Black
- Department of Medicine (Neurology), Sunnybrook Health Sciences Centre, University of Toronto, Toronto, Ontario, Canada
| |
Collapse
|
83
|
Arabi H, Zaidi H. Applications of artificial intelligence and deep learning in molecular imaging and radiotherapy. Eur J Hybrid Imaging 2020; 4:17. [PMID: 34191161 PMCID: PMC8218135 DOI: 10.1186/s41824-020-00086-8] [Citation(s) in RCA: 48] [Impact Index Per Article: 9.6] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2020] [Accepted: 08/10/2020] [Indexed: 12/22/2022] Open
Abstract
This brief review summarizes the major applications of artificial intelligence (AI), in particular deep learning approaches, in molecular imaging and radiation therapy research. To this end, the applications of artificial intelligence in five generic fields of molecular imaging and radiation therapy, including PET instrumentation design, PET image reconstruction, quantification and segmentation, image denoising (low-dose imaging), radiation dosimetry and computer-aided diagnosis, and outcome prediction, are discussed. This review sets out to cover briefly the fundamental concepts of AI and deep learning followed by a presentation of seminal achievements and the challenges facing their adoption in the clinical setting.
Collapse
Affiliation(s)
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
| | - Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland.
- Geneva University Neurocenter, Geneva University, CH-1205, Geneva, Switzerland.
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, 9700, Groningen, RB, Netherlands.
- Department of Nuclear Medicine, University of Southern Denmark, 500, Odense, Denmark.
| |
Collapse
|
84
|
Abstract
CLINICAL ISSUE Hybrid imaging enables the precise visualization of cellular metabolism by combining anatomical and metabolic information. Advances in artificial intelligence (AI) offer new methods for processing and evaluating this data. METHODOLOGICAL INNOVATIONS This review summarizes current developments and applications of AI methods in hybrid imaging. Applications in image processing as well as methods for disease-related evaluation are presented and discussed. MATERIALS AND METHODS This article is based on a selective literature search with the search engines PubMed and arXiv. ASSESSMENT Currently, there are only a few AI applications using hybrid imaging data and no applications are established in clinical routine yet. Although the first promising approaches are emerging, they still need to be evaluated prospectively. In the future, AI applications will support radiologists and nuclear medicine radiologists in diagnosis and therapy.
Collapse
Affiliation(s)
- Christian Strack
- AG Computational Radiology, Department of Radiology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Heidelberg University, Heidelberg, Germany
| | - Robert Seifert
- Department of Nuclear Medicine, Medical Faculty, University Hospital Essen, Essen, Germany
| | - Jens Kleesiek
- AG Computational Radiology, Department of Radiology, German Cancer Research Center (DKFZ), Heidelberg, Germany.
- German Cancer Consortium (DKTK), Heidelberg, Germany.
| |
Collapse
|
85
|
Zhao X, Huang M, Li L, Qi XS, Tan S. Multi-to-binary network (MTBNet) for automated multi-organ segmentation on multi-sequence abdominal MRI images. ACTA ACUST UNITED AC 2020; 65:165013. [DOI: 10.1088/1361-6560/ab9453] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/20/2023]
|
86
|
Xu G, Udupa JK, Tong Y, Odhner D, Cao H, Torigian DA. AAR-LN-DQ: Automatic anatomy recognition based disease quantification in thoracic lymph node zones via FDG PET/CT images without Nodal Delineation. Med Phys 2020; 47:3467-3484. [PMID: 32418221 DOI: 10.1002/mp.14240] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2020] [Revised: 04/22/2020] [Accepted: 05/08/2020] [Indexed: 01/02/2023] Open
Abstract
PURPOSE The derivation of quantitative information from medical images in a practical manner is essential for quantitative radiology (QR) to become a clinical reality, but still faces a major hurdle because of image segmentation challenges. With the goal of performing disease quantification in lymph node (LN) stations without explicit nodal delineation, this paper presents a novel approach for disease quantification (DQ) by automatic recognition of LN zones and detection of malignant lymph nodes within thoracic LN zones via positron emission tomography/computed tomography (PET/CT) images. Named AAR-LN-DQ, this approach decouples DQ methods from explicit nodal segmentation via an LN recognition strategy involving a novel globular filter and a deep neural network called SegNet. METHOD The methodology consists of four main steps: (a) Building lymph node zone models by automatic anatomy recognition (AAR) method. It incorporates novel aspects of model building that relate to finding an optimal hierarchy for organs and lymph node zones in the thorax. (b) Recognizing lymph node zones by the built lymph node models. (c) Detecting pathologic LNs in the recognized zones by using a novel globular filter (g-filter) and a multi-level support vector machine (SVM) classifier. Here, we make use of the general globular shape of LNs to first localize them and then use a multi-level SVM classifier to identify pathologic LNs from among the LNs localized by the g-filter. Alternatively, we designed a deep neural network called SegNet which is trained to directly recognize pathologic nodes within AAR localized LN zones. (d) Disease quantification based on identified pathologic LNs within localized zones. A fuzzy disease map is devised to express the degree of disease burden at each voxel within the identified LNs to simultaneously handle several uncertain phenomena such as PET partial volume effects, uncertainty in localization of LNs, and gradation of disease content at the voxel level. We focused on the task of disease quantification in patients with lymphoma based on PET/CT acquisitions and devised a method of evaluation. Model building was carried out using 42 near-normal patient datasets via contrast-enhanced CT examinations of their thorax. PET/CT datasets from an additional 63 lymphoma patients were utilized for evaluating the AAR-LN-DQ methodology. We assess the accuracy of the three main processes involved in AAR-LN-DQ via fivefold cross validation: lymph node zone recognition, abnormal lymph node localization, and disease quantification. RESULTS The recognition and scale error for LN zones were 12.28 mm ± 1.99 and 0.94 ± 0.02, respectively, on normal CT datasets. On abnormal PET/CT datasets, the sensitivity and specificity of pathologic LN recognition were 84.1% ± 0.115 and 98.5% ± 0.003, respectively, for the g-filter-SVM strategy, and 91.3% ± 0.110 and 96.1% ± 0.016, respectively, for the SegNet method. Finally, the mean absolute percent errors for disease quantification of the recognized abnormal LNs were 8% ± 0.09 and 14% ± 0.10 for the g-filter-SVM method and the best SegNet strategy, respectively. CONCLUSIONS Accurate disease quantification on PET/CT images without performing explicit delineation of lymph nodes is feasible following lymph node zone and pathologic LN localization. It is very useful to perform LN zone recognition by AAR as this step can cover most (95.8%) of the abnormal LNs and drastically reduce the regions to search for abnormal LNs. 
This also improves the specificity of deep networks such as SegNet significantly. It is possible to utilize general shape information about LNs such as their globular nature via g-filter and to arrive at high recognition rates for abnormal LNs in conjunction with a traditional classifier such as SVM. Finally, the disease map concept is effective for estimating disease burden, irrespective of how the LNs are identified, to handle various uncertainties without having to address them explicitly one by one.
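The node-localization step can be approximated with a generic blob-emphasizing filter: here a Laplacian of Gaussian stands in for the paper's g-filter, and its local maxima become lymph-node candidates that a classifier such as an SVM could then label. All data and thresholds below are synthetic and illustrative.

```python
# Minimal sketch: emphasize bright globular structures in a PET volume with a
# Laplacian-of-Gaussian filter and keep local maxima as node candidates.
import numpy as np
from scipy.ndimage import gaussian_laplace, maximum_filter

pet = np.random.rand(32, 64, 64).astype(np.float32)        # placeholder PET volume
pet[16, 30:34, 30:34] += 3.0                                # a bright globular "node"

# Negative LoG responds strongly to bright blobs roughly sigma voxels wide.
response = -gaussian_laplace(pet, sigma=2.0)
is_peak = (response == maximum_filter(response, size=5)) & \
          (response > response.mean() + 3 * response.std())
candidates = np.argwhere(is_peak)                           # (z, y, x) node candidates
print(candidates[:5])
```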
Collapse
Affiliation(s)
- Guoping Xu
- School of Electronic Information and Communications, Huazhong University of Science and technology, Wuhan, Hubei, 430074, China.,Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 602 Goddard building, 3710 Hamilton Walk, Philadelphia, PA, 19104, USA
| | - Jayaram K Udupa
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 602 Goddard building, 3710 Hamilton Walk, Philadelphia, PA, 19104, USA
| | - Yubing Tong
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 602 Goddard building, 3710 Hamilton Walk, Philadelphia, PA, 19104, USA
| | - Dewey Odhner
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 602 Goddard building, 3710 Hamilton Walk, Philadelphia, PA, 19104, USA
| | - Hanqiang Cao
- School of Electronic Information and Communications, Huazhong University of Science and technology, Wuhan, Hubei, 430074, China
| | - Drew A Torigian
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 602 Goddard building, 3710 Hamilton Walk, Philadelphia, PA, 19104, USA.,Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, PA, 19104, USA
| |
Collapse
|
87
|
Pfaehler E, Burggraaff C, Kramer G, Zijlstra J, Hoekstra OS, Jalving M, Noordzij W, Brouwers AH, Stevenson MG, de Jong J, Boellaard R. PET segmentation of bulky tumors: Strategies and workflows to improve inter-observer variability. PLoS One 2020; 15:e0230901. [PMID: 32226030 PMCID: PMC7105134 DOI: 10.1371/journal.pone.0230901] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2019] [Accepted: 03/11/2020] [Indexed: 12/26/2022] Open
Abstract
Background PET-based tumor delineation is an error-prone and labor-intensive part of image analysis. Especially for patients with advanced disease showing bulky tumor FDG load, segmentations are challenging. Reducing the amount of user-interaction in the segmentation might help to facilitate segmentation tasks, especially when labeling bulky and complex tumors. Therefore, this study reports on segmentation workflows/strategies that may reduce the inter-observer variability for large tumors with complex shapes, using different levels of user-interaction. Methods Twenty PET images of bulky tumors were delineated independently by six observers using four strategies: (I) manual, (II) interactive threshold-based, (III) interactive threshold-based segmentation with the additional presentation of the PET-gradient image and (IV) the selection of the most reasonable result out of four established semi-automatic segmentation algorithms (Select-the-best approach). The segmentations were compared using Jaccard coefficients (JC) and percentage volume differences. To obtain a reference standard, a majority vote (MV) segmentation was calculated including all segmentations of experienced observers. Performed and MV segmentations were compared regarding positive predictive value (PPV), sensitivity (SE), and percentage volume differences. Results The results show that with decreasing user-interaction the inter-observer variability decreases. JC values and percentage volume differences of Select-the-best and a workflow including gradient information were significantly better than the measurements of the other segmentation strategies (p-value < 0.01). Interactive threshold-based and manual segmentations also result in significantly lower and more variable PPV/SE values when compared with the MV segmentation. Conclusions FDG PET segmentations of bulky tumors using strategies with lower user-interaction showed less inter-observer variability. None of the methods led to good results in all cases, but use of either the gradient or the Select-the-best workflow did outperform the other strategies tested and may be a good candidate for fast and reliable labeling of bulky and heterogeneous tumors.
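Two quantities used throughout the study, the Jaccard coefficient between two delineations and a majority-vote reference built from several observers' masks, are easy to make concrete; the observer masks below are random placeholders.

```python
# Minimal sketch: Jaccard coefficient between two masks, and a majority-vote
# reference mask in which a voxel is kept if more than half the observers marked it.
import numpy as np

def jaccard(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

rng = np.random.default_rng(1)
observers = [rng.random((8, 40, 40)) > 0.6 for _ in range(6)]   # six delineations

majority_vote = np.sum(observers, axis=0) > len(observers) / 2

for i, mask in enumerate(observers):
    print(f"observer {i}: JC vs majority vote = {jaccard(mask, majority_vote):.3f}")
```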
Collapse
Affiliation(s)
- Elisabeth Pfaehler
- Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
- * E-mail:
| | - Coreline Burggraaff
- Department of Radiology and Nuclear Medicine, Cancer Center Amsterdam, Amsterdam, The Netherlands
| | - Gem Kramer
- Department of Radiology and Nuclear Medicine, Cancer Center Amsterdam, Amsterdam, The Netherlands
| | - Josée Zijlstra
- Department of Radiology and Nuclear Medicine, Cancer Center Amsterdam, Amsterdam, The Netherlands
| | - Otto S. Hoekstra
- Department of Oncology Medicine, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
| | - Mathilde Jalving
- Department of Oncology Medicine, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
| | - Walter Noordzij
- Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
| | - Adrienne H. Brouwers
- Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
| | - Marc G. Stevenson
- Department of Surgical Oncology, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
| | - Johan de Jong
- Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
| | - Ronald Boellaard
- Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
- Department of Radiology and Nuclear Medicine, Cancer Center Amsterdam, Amsterdam, The Netherlands
| |
Collapse
|
88
|
Zhang F, Wang Q, Li H. Automatic Segmentation of the Gross Target Volume in Non-Small Cell Lung Cancer Using a Modified Version of ResNet. Technol Cancer Res Treat 2020. [PMCID: PMC7432983 DOI: 10.1177/1533033820947484] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/29/2022] Open
Abstract
Radiotherapy plays an important role in the treatment of non-small cell lung cancer. Accurate segmentation of the gross target volume is very important for successful radiotherapy delivery. Deep learning techniques can obtain fast and accurate segmentation, which is independent of experts' experience and saves time compared with manual delineation. In this paper, we introduce a modified version of ResNet and apply it to segment the gross target volume in computed tomography images of patients with non-small cell lung cancer. Normalization was applied to reduce the differences among images and data augmentation techniques were employed to further enrich the data of the training set. Two different residual convolutional blocks were used to efficiently extract the deep features of the computed tomography images, and the features from all levels of the ResNet were merged into a single output. This simple design achieved a fusion of deep semantic features and shallow appearance features to generate dense pixel outputs. The test loss tended to be stable after 50 training epochs, and the segmentation took 21 ms per computed tomography image. The average evaluation metrics were: Dice similarity coefficient, 0.73; Jaccard similarity coefficient, 0.68; true positive rate, 0.71; and false positive rate, 0.0012. Those results were better than those of U-Net, which was used as a benchmark. The modified ResNet directly extracted multi-scale context features from original input images. Thus, the proposed automatic segmentation method can quickly segment the gross target volume in non-small cell lung cancer cases and be applied to improve consistency in contouring.
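To make the idea of merging features from all network levels into a single dense output concrete, the following minimal PyTorch sketch (not the published model; channel counts, depth, and 2D slices are assumptions) combines residual blocks at several resolutions and fuses their upsampled feature maps with a 1x1 convolution.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)  # identity shortcut

class MultiLevelFusionNet(nn.Module):
    # Encodes at several resolutions, upsamples every level to the input size,
    # and merges deep semantic with shallow appearance features in one output.
    def __init__(self, in_ch=1, base=16, levels=3):
        super().__init__()
        self.stems, self.blocks, chans = nn.ModuleList(), nn.ModuleList(), []
        prev, ch = in_ch, base
        for _ in range(levels):
            self.stems.append(nn.Conv2d(prev, ch, 3, stride=2, padding=1))
            self.blocks.append(ResidualBlock(ch))
            chans.append(ch)
            prev, ch = ch, ch * 2
        self.head = nn.Conv2d(sum(chans), 1, 1)  # fused 1x1 output

    def forward(self, x):
        feats, h = [], x
        for stem, block in zip(self.stems, self.blocks):
            h = block(F.relu(stem(h)))
            feats.append(F.interpolate(h, size=x.shape[2:], mode="bilinear",
                                       align_corners=False))
        return torch.sigmoid(self.head(torch.cat(feats, dim=1)))

# Toy forward pass on a single-channel CT slice.
model = MultiLevelFusionNet()
print(model(torch.randn(1, 1, 128, 128)).shape)  # torch.Size([1, 1, 128, 128])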
Collapse
Affiliation(s)
- Fuli Zhang
- Radiation Oncology Department, The Seventh Medical Center of Chinese PLA General Hospital, Beijing, China
| | - Qiusheng Wang
- School of Automation Science and Electrical Engineering, Beihang University, Beijing, China
| | - Haipeng Li
- School of Automation Science and Electrical Engineering, Beihang University, Beijing, China
| |
Collapse
|
89
|
Using deep learning techniques in medical imaging: a systematic review of applications on CT and PET. Artif Intell Rev 2019. [DOI: 10.1007/s10462-019-09788-3] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
|
90
|
Nensa F, Demircioglu A, Rischpler C. Artificial Intelligence in Nuclear Medicine. J Nucl Med 2019; 60:29S-37S. [DOI: 10.2967/jnumed.118.220590] [Citation(s) in RCA: 55] [Impact Index Per Article: 9.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2019] [Accepted: 05/16/2019] [Indexed: 02/06/2023] Open
|
91
|
Hatt M, Le Rest CC, Tixier F, Badic B, Schick U, Visvikis D. Radiomics: Data Are Also Images. J Nucl Med 2019; 60:38S-44S. [DOI: 10.2967/jnumed.118.220582] [Citation(s) in RCA: 39] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2019] [Accepted: 03/28/2019] [Indexed: 12/14/2022] Open
|
92
|
Visvikis D, Cheze Le Rest C, Jaouen V, Hatt M. Artificial intelligence, machine (deep) learning and radio(geno)mics: definitions and nuclear medicine imaging applications. Eur J Nucl Med Mol Imaging 2019; 46:2630-2637. [DOI: 10.1007/s00259-019-04373-w] [Citation(s) in RCA: 62] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2019] [Accepted: 05/23/2019] [Indexed: 12/14/2022]
|
93
|
Kumar A, Fulham M, Feng D, Kim J. Co-Learning Feature Fusion Maps from PET-CT Images of Lung Cancer. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 39:204-217. [PMID: 31217099 DOI: 10.1109/tmi.2019.2923601] [Citation(s) in RCA: 83] [Impact Index Per Article: 13.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/18/2023]
Abstract
The analysis of multi-modality positron emission tomography and computed tomography (PET-CT) images for computer aided diagnosis applications (e.g., detection and segmentation) requires combining the sensitivity of PET to detect abnormal regions with anatomical localization from CT. Current methods for PET-CT image analysis either process the modalities separately or fuse information from each modality based on knowledge about the image analysis task. These methods generally do not consider the spatially varying visual characteristics that encode different information across the different modalities, which have different priorities at different locations. For example, a high abnormal PET uptake in the lungs is more meaningful for tumor detection than physiological PET uptake in the heart. Our aim is to improve fusion of the complementary information in multi-modality PET-CT with a new supervised convolutional neural network (CNN) that learns to fuse complementary information for multi-modality medical image analysis. Our CNN first encodes modality-specific features and then uses them to derive a spatially varying fusion map that quantifies the relative importance of each modality's features across different spatial locations. These fusion maps are then multiplied with the modality-specific feature maps to obtain a representation of the complementary multi-modality information at different locations, which can then be used for image analysis. We evaluated the ability of our CNN to detect and segment multiple regions (lungs, mediastinum, tumors) with different fusion requirements using a dataset of PET-CT images of lung cancer. We compared our method to baseline techniques for multi-modality image fusion (fused inputs (FS), multi-branch (MB) techniques, and multichannel (MC) techniques) and segmentation. Our findings show that our CNN had a significantly higher foreground detection accuracy (99.29%, p < 0.05) than the fusion baselines (FS: 99.00%, MB: 99.08%, MC: 98.92%) and a significantly higher Dice score (63.85%) than recent PET-CT tumor segmentation methods.
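The spatially varying fusion described in this abstract can be sketched as follows (a hedged illustration, not the authors' implementation; all layer sizes, the two-encoder layout, and the softmax weighting are assumptions): modality-specific encoders produce feature maps, a small network predicts one spatial weight map per modality, and the weights multiply the corresponding features before they are combined.

import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    def __init__(self, in_ch=1, out_ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)

class CoLearnedFusion(nn.Module):
    def __init__(self, feat_ch=32):
        super().__init__()
        self.pet_enc = ModalityEncoder(out_ch=feat_ch)
        self.ct_enc = ModalityEncoder(out_ch=feat_ch)
        # Predicts one spatial weight map per modality from both feature stacks.
        self.fusion_map = nn.Sequential(
            nn.Conv2d(2 * feat_ch, 2, 1), nn.Softmax(dim=1)
        )
        self.head = nn.Conv2d(feat_ch, 1, 1)

    def forward(self, pet, ct):
        f_pet, f_ct = self.pet_enc(pet), self.ct_enc(ct)
        w = self.fusion_map(torch.cat([f_pet, f_ct], dim=1))  # (B, 2, H, W)
        fused = w[:, 0:1] * f_pet + w[:, 1:2] * f_ct  # spatially varying mix
        return torch.sigmoid(self.head(fused))

# Toy forward pass with co-registered PET and CT slices.
pet, ct = torch.randn(1, 1, 96, 96), torch.randn(1, 1, 96, 96)
print(CoLearnedFusion()(pet, ct).shape)  # torch.Size([1, 1, 96, 96])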
Collapse
|
94
|
Accurate Esophageal Gross Tumor Volume Segmentation in PET/CT Using Two-Stream Chained 3D Deep Network Fusion. LECTURE NOTES IN COMPUTER SCIENCE 2019. [DOI: 10.1007/978-3-030-32245-8_21] [Citation(s) in RCA: 26] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/05/2022]
|