1
Zhang S, Zhuang Y, Luo Y, Zhu F, Zhao W, Zeng H. Deep learning-based automated lesion segmentation on pediatric focal cortical dysplasia II preoperative MRI: a reliable approach. Insights Imaging 2024; 15:71. [PMID: 38472513] [DOI: 10.1186/s13244-024-01635-6]
Abstract
OBJECTIVES Focal cortical dysplasia (FCD) is one of the most common causes of refractory epilepsy in children. Deep learning has shown great power in tissue discrimination from MRI data. We built and verified a prediction model using 3D full-resolution nnU-Net for automatic lesion detection and segmentation in children with FCD II. METHODS High-resolution structural brain MRI data from 65 patients with pathologically confirmed FCD II were retrospectively studied. Experienced neuroradiologists segmented and labeled the lesions as the ground truth. A 3D full-resolution nnU-Net was then used to segment lesions automatically, generating detection maps. The algorithm was trained using fivefold cross-validation, with data partitioned into training (N = 200) and testing (N = 15) sets. To evaluate performance, detection maps were compared to the expert manual labels using the Dice-Sørensen coefficient (DSC) and sensitivity. RESULTS The 3D nnU-Net showed good performance for FCD lesion detection at the voxel level, with a sensitivity of 0.73. The best segmentation model achieved a mean DSC of 0.57 on the testing dataset. CONCLUSION This pilot study confirmed that 3D full-resolution nnU-Net can automatically segment FCD lesions with reliable outcomes, providing a novel approach to FCD lesion detection. CRITICAL RELEVANCE STATEMENT Our fully automatic models can process 3D T1-MPRAGE data and segment FCD II lesions with reliable outcomes. KEY POINTS • Simplified image processing promotes implementation of the DL model in clinical practice. • Histopathologically confirmed lesion masks enhance the clinical credibility of the AI model. • Voxel-level evaluation metrics benefit lesion detection and clinical decisions.
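The Dice-Sørensen coefficient and voxel-level sensitivity reported above are simple overlap measures between predicted and reference masks; a minimal sketch (the toy 1-D masks and function names are illustrative, not from the study's code):

```python
def dice_coefficient(pred, truth):
    """Dice-Sørensen coefficient between two binary masks (flattened lists)."""
    tp = sum(p and t for p, t in zip(pred, truth))  # overlapping foreground voxels
    denom = sum(pred) + sum(truth)
    return 2.0 * tp / denom if denom else 1.0

def voxel_sensitivity(pred, truth):
    """Fraction of ground-truth lesion voxels recovered by the prediction."""
    tp = sum(p and t for p, t in zip(pred, truth))
    fn = sum((not p) and t for p, t in zip(pred, truth))
    return tp / (tp + fn) if (tp + fn) else 1.0

# toy 1-D "masks": 1 = lesion voxel
pred  = [0, 1, 1, 1, 0, 0]
truth = [0, 1, 1, 0, 1, 0]
print(dice_coefficient(pred, truth))   # 2*2/(3+3) ≈ 0.667
print(voxel_sensitivity(pred, truth))  # 2/3 ≈ 0.667
```

In 3D segmentation work the same formulas are applied to flattened volumes; a DSC of 0.57 with sensitivity 0.73, as above, indicates the model finds most lesion voxels but also over- or under-segments their extent.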
Affiliation(s)
- Siqi Zhang: Shantou University Medical College, Shantou University, 22 Xinling Road, Jinping District, Shantou, 515041, China; Department of Radiology, Shenzhen Children's Hospital, 7019 Yitian Road, Futian District, Shenzhen, 518038, China
- Yijiang Zhuang: Department of Radiology, Shenzhen Children's Hospital, 7019 Yitian Road, Futian District, Shenzhen, 518038, China
- Yi Luo: Department of Radiology, Shenzhen Children's Hospital, 7019 Yitian Road, Futian District, Shenzhen, 518038, China
- Fengjun Zhu: Epilepsy Surgery Department, Shenzhen Children's Hospital, 7019 Yitian Road, Futian District, Shenzhen, 518038, China
- Wen Zhao: Shantou University Medical College, Shantou University, 22 Xinling Road, Jinping District, Shantou, 515041, China; Department of Radiology, Shenzhen Children's Hospital, 7019 Yitian Road, Futian District, Shenzhen, 518038, China
- Hongwu Zeng: Department of Radiology, Shenzhen Children's Hospital, 7019 Yitian Road, Futian District, Shenzhen, 518038, China
2
Xia S, Li Q, Zhu HT, Zhang XY, Shi YJ, Yang D, Wu J, Guan Z, Lu Q, Li XT, Sun YS. Fully semantic segmentation for rectal cancer based on post-nCRT MRI modality and deep learning framework. BMC Cancer 2024; 24:315. [PMID: 38454349] [PMCID: PMC10919051] [DOI: 10.1186/s12885-024-11997-1]
Abstract
PURPOSE Rectal tumor segmentation on post-neoadjuvant chemoradiotherapy (nCRT) magnetic resonance imaging (MRI) has great significance for tumor measurement, radiomics analysis, treatment planning, and operative strategy. In this study, we developed and evaluated convolutional neural networks for segmentation exclusively on post-chemoradiation T2-weighted MRI, with the aim of reducing the detection workload for radiologists and clinicians. METHODS A total of 372 consecutive patients with locally advanced rectal cancer (LARC) were retrospectively enrolled from October 2015 to December 2017. The standard-of-care neoadjuvant process included 22-fraction intensity-modulated radiation therapy and oral capecitabine. Of these, 243 patients (3061 slices) were grouped into training and validation datasets with a random 80:20 split, and 41 patients (408 slices) were used as the test dataset. A symmetric eight-layer deep network that outputs a segmentation map of the same size as the input was developed using the nnU-Net framework. The trained deep learning (DL) network was examined using fivefold cross-validation and on tumor lesions with different tumor regression grades (TRGs). RESULTS At the testing stage, the Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95), and mean surface distance (MSD) were used to quantitatively evaluate generalization performance. On the test dataset (41 patients, 408 slices), the average DSC, HD95, and MSD were 0.700 (95% CI: 0.680-0.720), 17.73 mm (95% CI: 16.08-19.39), and 3.11 mm (95% CI: 2.67-3.56), respectively. Eighty-two percent of the MSD values were less than 5 mm, and fifty-five percent were less than 2 mm (median 1.62 mm, minimum 0.07 mm). CONCLUSIONS The experimental results indicated that the constructed pipeline can achieve relatively high accuracy. Future work will focus on assessing performance with multicentre external validation.
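HD95 and MSD are surface-distance metrics between predicted and reference contours. A minimal pure-Python sketch on toy 2-D point sets (the contour coordinates and function names are illustrative, and percentile conventions for HD95 vary slightly between implementations; production pipelines typically use optimized libraries such as SimpleITK or MedPy):

```python
import math

def directed_distances(a, b):
    """For each point of contour a, distance to the nearest point of contour b."""
    return [min(math.dist(p, q) for q in b) for p in a]

def mean_surface_distance(a, b):
    """Average symmetric surface distance between two contours."""
    d = directed_distances(a, b) + directed_distances(b, a)
    return sum(d) / len(d)

def hd95(a, b):
    """95th-percentile symmetric Hausdorff distance (one common convention)."""
    d = sorted(directed_distances(a, b) + directed_distances(b, a))
    return d[min(len(d) - 1, round(0.95 * (len(d) - 1)))]

# toy 2-D contours in pixel coordinates, one pixel apart everywhere
pred  = [(0, 0), (1, 0), (2, 0)]
truth = [(0, 1), (1, 1), (2, 1)]
print(mean_surface_distance(pred, truth))  # 1.0
print(hd95(pred, truth))                   # 1.0
```

Unlike the overlap-based DSC, these distances are reported in millimeters once pixel coordinates are scaled by voxel spacing, which is why the abstract can state that most MSD values fall under 5 mm.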
Affiliation(s)
- Shaojun Xia: Institute of Medical Technology, Peking University Health Science Center, No. 38 Xueyuan Road, Haidian District, Beijing, 100191, China; Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiology, Peking University Cancer Hospital & Institute, No. 52 Fu Cheng Road, Haidian District, Beijing, 100142, China
- Qingyang Li, Hai-Tao Zhu, Xiao-Yan Zhang, Yan-Jie Shi, Ding Yang, Jiaqi Wu, Zhen Guan, Qiaoyuan Lu, Xiao-Ting Li: Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiology, Peking University Cancer Hospital & Institute, No. 52 Fu Cheng Road, Haidian District, Beijing, 100142, China
- Ying-Shi Sun: Institute of Medical Technology, Peking University Health Science Center, No. 38 Xueyuan Road, Haidian District, Beijing, 100191, China; Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiology, Peking University Cancer Hospital & Institute, No. 52 Fu Cheng Road, Haidian District, Beijing, 100142, China
3
Miao Q, Wang X, Cui J, Zheng H, Xie Y, Zhu K, Chai R, Jiang Y, Feng D, Zhang X, Shi F, Tan X, Fan G, Liang K. Artificial intelligence to predict T4 stage of pancreatic ductal adenocarcinoma using CT imaging. Comput Biol Med 2024; 171:108125. [PMID: 38340439] [DOI: 10.1016/j.compbiomed.2024.108125]
Abstract
BACKGROUND Accurate assessment of T4 stage pancreatic ductal adenocarcinoma (PDAC) has consistently posed considerable difficulty for radiologists. This study aimed to develop and validate an automated artificial intelligence (AI) pipeline for predicting T4 stage PDAC using contrast-enhanced CT imaging. METHODS Data were obtained retrospectively from consecutive patients with surgically resected and pathologically proven PDAC at two institutions between July 2017 and June 2022. First, a deep learning (DL) model was developed to segment the PDAC. Radiomics features were then extracted from the automatically segmented region of interest (ROI), which encompassed both the tumor region and a 3 mm surrounding area, to construct a predictive model for T4 stage. Model performance was assessed by the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity. RESULTS The study encompassed 509 PDAC patients with a median age of 62 years (interquartile range: 55-67), of whom 16.9% were in T4 stage. The model achieved an AUC of 0.849 (95% CI: 0.753-0.940), a sensitivity of 0.875, and a specificity of 0.728 in predicting T4 stage, performance comparable to that of two experienced abdominal radiologists (AUCs: 0.849 vs. 0.834 and 0.857). CONCLUSION The automated AI pipeline utilizing tumor and peritumoral radiomics features performed comparably to senior abdominal radiologists in predicting T4 stage PDAC.
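The sensitivity and specificity figures above come from a standard binary confusion matrix; a minimal sketch with toy predictions (illustrative values only, not the study's data):

```python
def confusion_counts(preds, labels):
    """Count TP, FP, TN, FN for binary predictions against reference labels."""
    tp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 1)
    fp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 0)
    tn = sum(1 for p, y in zip(preds, labels) if p == 0 and y == 0)
    fn = sum(1 for p, y in zip(preds, labels) if p == 0 and y == 1)
    return tp, fp, tn, fn

def sensitivity_specificity(preds, labels):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp, fp, tn, fn = confusion_counts(preds, labels)
    return tp / (tp + fn), tn / (tn + fp)

labels = [1, 1, 1, 1, 0, 0, 0, 0]   # 1 = pathologically confirmed T4
preds  = [1, 1, 1, 0, 0, 0, 1, 0]   # model's binarized output
sens, spec = sensitivity_specificity(preds, labels)
print(sens, spec)  # 0.75 0.75
```

With only 16.9% T4 cases, the cohort is imbalanced, which is why the abstract reports AUC alongside sensitivity and specificity rather than raw accuracy.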
Affiliation(s)
- Qi Miao: Department of Radiology, The First Hospital of China Medical University, Shenyang, China
- Xuechun Wang: Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Jingjing Cui: Department of Research and Development, United Imaging Intelligence (Beijing) Co., Ltd., Beijing, China
- Haoxin Zheng: Department of Computer Science, University of California, Los Angeles, USA
- Yan Xie: Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Kexin Zhu: Department of Radiology, The First Hospital of China Medical University, Shenyang, China
- Ruimei Chai: Department of Radiology, The First Hospital of China Medical University, Shenyang, China
- Yuanxi Jiang: Department of Radiology, The First Hospital of China Medical University, Shenyang, China
- Dongli Feng: Department of Radiology, The First Hospital of China Medical University, Shenyang, China
- Xin Zhang: Department of Radiology, The First Hospital of China Medical University, Shenyang, China
- Feng Shi: Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Xiaodong Tan: Department of General Surgery/Pancreatic and Thyroid Surgery, Shengjing Hospital of China Medical University, Shenyang, China
- Guoguang Fan: Department of Radiology, The First Hospital of China Medical University, Shenyang, China
- Keke Liang: Department of General Surgery/Pancreatic and Thyroid Surgery, Shengjing Hospital of China Medical University, Shenyang, China
4
Anghel C, Grasu MC, Anghel DA, Rusu-Munteanu GI, Dumitru RL, Lupescu IG. Pancreatic Adenocarcinoma: Imaging Modalities and the Role of Artificial Intelligence in Analyzing CT and MRI Images. Diagnostics (Basel) 2024; 14:438. [PMID: 38396476] [PMCID: PMC10887967] [DOI: 10.3390/diagnostics14040438]
Abstract
Pancreatic ductal adenocarcinoma (PDAC) is the predominant malignant neoplasm of the pancreas and carries a poor prognosis, with most patients diagnosed at a nonresectable stage. Image-based artificial intelligence (AI) models for tumor detection, segmentation, and classification could improve diagnosis, enabling better treatment options and increased survival. This review covers papers published in the last five years and describes current trends in AI algorithms applied to PDAC. We analyzed applications of AI in PDAC detection, lesion segmentation, and classification algorithms used in differential diagnosis, prognosis, and histopathological and genomic prediction. The results show a lack of multi-institutional collaboration and stress the need for larger datasets for AI models to be implemented in a clinically relevant manner.
Affiliation(s)
- Cristian Anghel: Faculty of Medicine, Department of Medical Imaging and Interventional Radiology, Carol Davila University of Medicine and Pharmacy Bucharest, 020021 Bucharest, Romania; Department of Radiology and Medical Imaging, Fundeni Clinical Institute, 022328 Bucharest, Romania
- Mugur Cristian Grasu: Faculty of Medicine, Department of Medical Imaging and Interventional Radiology, Carol Davila University of Medicine and Pharmacy Bucharest, 020021 Bucharest, Romania; Department of Radiology and Medical Imaging, Fundeni Clinical Institute, 022328 Bucharest, Romania
- Denisa Andreea Anghel: Department of Radiology and Medical Imaging, Fundeni Clinical Institute, 022328 Bucharest, Romania
- Gina-Ionela Rusu-Munteanu: Department of Radiology and Medical Imaging, Fundeni Clinical Institute, 022328 Bucharest, Romania
- Radu Lucian Dumitru: Faculty of Medicine, Department of Medical Imaging and Interventional Radiology, Carol Davila University of Medicine and Pharmacy Bucharest, 020021 Bucharest, Romania; Department of Radiology and Medical Imaging, Fundeni Clinical Institute, 022328 Bucharest, Romania
- Ioana Gabriela Lupescu: Faculty of Medicine, Department of Medical Imaging and Interventional Radiology, Carol Davila University of Medicine and Pharmacy Bucharest, 020021 Bucharest, Romania; Department of Radiology and Medical Imaging, Fundeni Clinical Institute, 022328 Bucharest, Romania
5
Bereska JI, Janssen BV, Nio CY, Kop MPM, Kazemier G, Busch OR, Struik F, Marquering HA, Stoker J, Besselink MG, Verpalen IM. Artificial intelligence for assessment of vascular involvement and tumor resectability on CT in patients with pancreatic cancer. Eur Radiol Exp 2024; 8:18. [PMID: 38342782] [PMCID: PMC10859357] [DOI: 10.1186/s41747-023-00419-9]
Abstract
OBJECTIVE This study aimed to develop and evaluate an automatic artificial intelligence (AI) model for quantifying vascular involvement and classifying tumor resectability stage in patients with pancreatic ductal adenocarcinoma (PDAC), primarily to support radiologists in referral centers. Resectability of PDAC is determined by the degree of vascular involvement on computed tomography (CT) scans, which is subject to considerable inter-observer variability. METHODS We developed a semisupervised machine learning segmentation model to segment the PDAC and surrounding vasculature using 613 CTs of 467 patients with pancreatic tumors and 50 control patients. After segmenting the relevant structures, the model quantifies vascular involvement by measuring the degree of vessel wall in contact with the tumor on the AI-segmented CTs. Based on these measurements, it classifies resectability stage using the Dutch Pancreatic Cancer Group (DPCG) criteria as resectable, borderline resectable, or locally advanced (LA). RESULTS We evaluated the model on a test set of 60 CTs from 60 patients (20 resectable, 20 borderline resectable, and 20 locally advanced cases) by comparing its automated analysis to expert visual assessments of vascular involvement. The model concurred with the radiologists on 227/300 (76%) vessels for vascular involvement. Its resectability classification agreed with the radiologists on 17/20 (85%) resectable, 16/20 (80%) borderline resectable, and 15/20 (75%) locally advanced cases. CONCLUSIONS This study demonstrates that an AI model may allow automatic quantification of vascular involvement and classification of resectability for PDAC.
RELEVANCE STATEMENT This AI model enables automated vascular involvement quantification and resectability classification for pancreatic cancer, aiding radiologists in treatment decisions and potentially improving patient outcomes. KEY POINTS • High inter-observer variability exists in determining vascular involvement and resectability for PDAC. • Artificial intelligence accurately quantifies vascular involvement and classifies resectability for PDAC. • Artificial intelligence can aid radiologists by automating vascular involvement and resectability assessments.
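The mapping from measured tumor-vessel contact (in degrees of the vessel circumference) to a resectability stage can be sketched as a simple threshold rule. The thresholds below are illustrative placeholders only; the actual DPCG criteria distinguish specific arteries and veins and should be consulted directly:

```python
def classify_resectability(arterial_contact_deg, venous_contact_deg):
    """Map tumor-vessel wall contact (degrees of circumference) to a stage.

    Illustrative thresholds only, NOT the published DPCG criteria, which
    differ per vessel type and vessel.
    """
    if arterial_contact_deg > 90 or venous_contact_deg > 270:
        return "locally advanced"
    if arterial_contact_deg > 0 or venous_contact_deg > 90:
        return "borderline resectable"
    return "resectable"

print(classify_resectability(0, 45))    # resectable
print(classify_resectability(60, 120))  # borderline resectable
print(classify_resectability(180, 0))   # locally advanced
```

The appeal of such a rule-based final step is transparency: the AI contributes the hard part (segmentation and contact measurement), while the staging itself remains an auditable threshold comparison.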
Affiliation(s)
- Jacqueline I Bereska: Department of Radiology and Nuclear Medicine, Amsterdam UMC, Location University of Amsterdam, Amsterdam, The Netherlands; Cancer Center Amsterdam, Amsterdam, The Netherlands; Department of Biomedical Engineering and Physics, Amsterdam UMC, Location University of Amsterdam, Amsterdam, The Netherlands
- Boris V Janssen: Cancer Center Amsterdam, Amsterdam, The Netherlands; Department of Surgery, Amsterdam UMC, Location University of Amsterdam, Amsterdam, The Netherlands; Department of Pathology, Amsterdam UMC, Location University of Amsterdam, Amsterdam, The Netherlands
- C Yung Nio: Department of Radiology and Nuclear Medicine, Amsterdam UMC, Location University of Amsterdam, Amsterdam, The Netherlands; Cancer Center Amsterdam, Amsterdam, The Netherlands
- Marnix P M Kop: Department of Radiology and Nuclear Medicine, Amsterdam UMC, Location University of Amsterdam, Amsterdam, The Netherlands; Cancer Center Amsterdam, Amsterdam, The Netherlands
- Geert Kazemier: Cancer Center Amsterdam, Amsterdam, The Netherlands; Department of Surgery, Amsterdam UMC, Location University of Amsterdam, Amsterdam, The Netherlands
- Olivier R Busch: Cancer Center Amsterdam, Amsterdam, The Netherlands; Department of Surgery, Amsterdam UMC, Location University of Amsterdam, Amsterdam, The Netherlands
- Femke Struik: Department of Radiology and Nuclear Medicine, Amsterdam UMC, Location University of Amsterdam, Amsterdam, The Netherlands; Cancer Center Amsterdam, Amsterdam, The Netherlands
- Henk A Marquering: Department of Radiology and Nuclear Medicine, Amsterdam UMC, Location University of Amsterdam, Amsterdam, The Netherlands; Cancer Center Amsterdam, Amsterdam, The Netherlands; Department of Biomedical Engineering and Physics, Amsterdam UMC, Location University of Amsterdam, Amsterdam, The Netherlands
- Jaap Stoker: Department of Radiology and Nuclear Medicine, Amsterdam UMC, Location University of Amsterdam, Amsterdam, The Netherlands; Cancer Center Amsterdam, Amsterdam, The Netherlands
- Marc G Besselink: Cancer Center Amsterdam, Amsterdam, The Netherlands; Department of Surgery, Amsterdam UMC, Location University of Amsterdam, Amsterdam, The Netherlands
- Inez M Verpalen: Department of Radiology and Nuclear Medicine, Amsterdam UMC, Location University of Amsterdam, Amsterdam, The Netherlands; Cancer Center Amsterdam, Amsterdam, The Netherlands
6
Perik T, Alves N, Hermans JJ, Huisman H. Automated Quantitative Analysis of CT Perfusion to Classify Vascular Phenotypes of Pancreatic Ductal Adenocarcinoma. Cancers (Basel) 2024; 16:577. [PMID: 38339328] [PMCID: PMC10854854] [DOI: 10.3390/cancers16030577]
Abstract
CT perfusion (CTP) analysis is difficult to implement in clinical practice. We therefore investigated a novel semi-automated CTP AI biomarker, applied it to identify vascular phenotypes of pancreatic ductal adenocarcinoma (PDAC), and evaluated their association with overall survival (OS). METHODS From January 2018 to November 2022, 107 PDAC patients who underwent CTP and a diagnostic contrast-enhanced CT (CECT) were prospectively included. We developed a semi-automated CTP AI biomarker through a process involving deformable image registration, a deep learning model for segmenting tumor and pancreas parenchyma volume, and a trilinear non-parametric CTP curve model to extract the enhancement slope and peak enhancement in the segmented tumors and pancreas. The biomarker was validated for predicting vascular phenotypes and their association with OS. Receiver operating characteristic (ROC) analysis with five-fold cross-validation was performed. OS was assessed with Kaplan-Meier curves, and differences between phenotypes were tested using the Mann-Whitney U test. RESULTS The final analysis included 92 patients, in whom 20 tumors (21%) were visually isovascular. The AI biomarker effectively discriminated tumor types: isovascular tumors showed higher enhancement slopes (2.9 vs. 2.0 Hounsfield units per second (HU/s), p < 0.001) and higher peak enhancement (70 vs. 47 HU, p < 0.001), with an AUC of 0.86. OS differed significantly between the AI biomarker's vascular phenotypes (p < 0.01). CONCLUSIONS The AI biomarker offers a promising tool for robust CTP analysis. In PDAC, it can distinguish vascular phenotypes with significant OS prognostication.
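The two curve features the biomarker extracts, enhancement slope (HU/s) and peak enhancement (HU), can be illustrated on a toy time-attenuation curve. This is a simplified finite-difference sketch, not the paper's trilinear non-parametric fit:

```python
def peak_enhancement(hu_values, baseline):
    """Peak enhancement: maximum attenuation rise over baseline (HU)."""
    return max(hu_values) - baseline

def max_slope(times, hu_values):
    """Steepest rise between consecutive samples (HU per second)."""
    pairs = zip(zip(times, hu_values), zip(times[1:], hu_values[1:]))
    return max((h2 - h1) / (t2 - t1) for (t1, h1), (t2, h2) in pairs)

# toy time-attenuation curve: acquisition time (s) vs. attenuation (HU)
times = [0, 5, 10, 15, 20, 25]
hu    = [40, 42, 55, 80, 95, 90]
print(peak_enhancement(hu, baseline=40))  # 55 HU
print(max_slope(times, hu))               # 5.0 HU/s (between 10 s and 15 s)
```

A trilinear model instead fits the curve as baseline, linear upslope, and plateau/washout segments, which makes the slope and peak estimates robust to noise in individual time points.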
Affiliation(s)
- Tom Perik: Department of Medical Imaging, Radboud University Medical Center, 6525 GA Nijmegen, The Netherlands (affiliation shared with J.J.H. and H.H.)
7
Korfiatis P, Suman G, Patnam NG, Trivedi KH, Karbhari A, Mukherjee S, Cook C, Klug JR, Patra A, Khasawneh H, Rajamohan N, Fletcher JG, Truty MJ, Majumder S, Bolan CW, Sandrasegaran K, Chari ST, Goenka AH. Automated Artificial Intelligence Model Trained on a Large Data Set Can Detect Pancreas Cancer on Diagnostic Computed Tomography Scans As Well As Visually Occult Preinvasive Cancer on Prediagnostic Computed Tomography Scans. Gastroenterology 2023; 165:1533-1546.e4. [PMID: 37657758] [PMCID: PMC10843414] [DOI: 10.1053/j.gastro.2023.08.034]
Abstract
BACKGROUND & AIMS The aims of our case-control study were (1) to develop an automated 3-dimensional (3D) convolutional neural network (CNN) for detection of pancreatic ductal adenocarcinoma (PDA) on diagnostic computed tomography scans (CTs), (2) to evaluate its generalizability on multi-institutional public data sets, (3) to assess its utility as a potential screening tool using a simulated cohort with high pretest probability, and (4) to test its ability to detect visually occult preinvasive cancer on prediagnostic CTs. METHODS A 3D-CNN classification system was trained using algorithmically generated bounding boxes and pancreatic masks on a curated data set of 696 portal phase diagnostic CTs with PDA and 1080 control images with a nonneoplastic pancreas. The model was evaluated on (1) an intramural hold-out test subset (409 CTs with PDA, 829 controls); (2) a simulated cohort with a case-control distribution that matched the risk of PDA in glycemically defined new-onset diabetes and an Enriching New-Onset Diabetes for Pancreatic Cancer score ≥3; (3) multi-institutional public data sets (194 CTs with PDA, 80 controls); and (4) a cohort of 100 prediagnostic CTs (i.e., CTs incidentally acquired 3-36 months before clinical diagnosis of PDA) without a focal mass, and 134 controls. RESULTS Of the CTs in the intramural test subset, 798 (64%) were from other hospitals. The model correctly classified 360 CTs (88%) with PDA and 783 control CTs (94%), with a mean accuracy of 0.92 (95% CI, 0.91-0.94), area under the receiver operating characteristic (AUROC) curve of 0.97 (95% CI, 0.96-0.98), sensitivity of 0.88 (95% CI, 0.85-0.91), and specificity of 0.95 (95% CI, 0.93-0.96). Activation areas on heat maps overlapped with the tumor in 350 of 360 CTs (97%). Performance was high across tumor stages (sensitivity of 0.80, 0.87, 0.95, and 1.0 on T1 through T4 stages, respectively), comparable for hypodense vs isodense tumors (sensitivity: 0.90 vs 0.82) and across age, sex, CT slice thicknesses, and vendors (all P > .05), and generalizable on both the simulated cohort (accuracy, 0.95 [95% CI, 0.94-0.95]; AUROC curve, 0.97 [95% CI, 0.94-0.99]) and public data sets (accuracy, 0.86 [95% CI, 0.82-0.90]; AUROC curve, 0.90 [95% CI, 0.86-0.95]). Despite being exclusively trained on diagnostic CTs with larger tumors, the model could detect occult PDA on prediagnostic CTs (accuracy, 0.84 [95% CI, 0.79-0.88]; AUROC curve, 0.91 [95% CI, 0.86-0.94]; sensitivity, 0.75 [95% CI, 0.67-0.84]; specificity, 0.90 [95% CI, 0.85-0.95]) at a median of 475 days (range, 93-1082 days) before clinical diagnosis. CONCLUSIONS This automated artificial intelligence model, trained on a large and diverse data set, shows high accuracy and generalizable performance for detection of PDA on diagnostic CTs as well as of visually occult PDA on prediagnostic CTs. Prospective validation with blood-based biomarkers is warranted to assess the potential for early detection of sporadic PDA in high-risk individuals.
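The AUROC values reported throughout these studies equal the probability that a randomly chosen case receives a higher model score than a randomly chosen control (the Mann-Whitney formulation, with ties counted half); a minimal sketch with toy scores (illustrative values only):

```python
def auroc(scores, labels):
    """AUROC as the fraction of case-control pairs ranked correctly
    (Mann-Whitney U statistic scaled to [0, 1]; ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]               # 1 = cancer case
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]   # model's continuous output
print(auroc(scores, labels))  # 8/9 ≈ 0.889: 8 of 9 case-control pairs ordered correctly
```

This pairwise-ranking view explains why AUROC is threshold-free: it summarizes discrimination over all possible operating points, unlike a single sensitivity/specificity pair.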
Affiliation(s)
- Garima Suman: Department of Radiology, Mayo Clinic, Rochester, Minnesota
- Cole Cook: Division of Medical Imaging Technology Services, Mayo Clinic, Rochester, Minnesota
- Jason R Klug: Division of Medical Imaging Technology Services, Mayo Clinic, Rochester, Minnesota
- Anurima Patra: Department of Radiology, Tata Medical Center, Kolkata, India
- Hala Khasawneh: Department of Radiology, Mayo Clinic, Rochester, Minnesota
- Mark J Truty: Department of Surgery, Mayo Clinic, Rochester, Minnesota
- Shounak Majumder: Department of Gastroenterology, Mayo Clinic, Rochester, Minnesota
- Suresh T Chari: Department of Gastroenterology, Mayo Clinic, Rochester, Minnesota
- Ajit H Goenka: Department of Radiology, Mayo Clinic, Rochester, Minnesota
8
Litjens G, Broekmans JPEA, Boers T, Caballo M, van den Hurk MHF, Ozdemir D, van Schaik CJ, Janse MHA, van Geenen EJM, van Laarhoven CJHM, Prokop M, de With PHN, van der Sommen F, Hermans JJ. Computed Tomography-Based Radiomics Using Tumor and Vessel Features to Assess Resectability in Cancer of the Pancreatic Head. Diagnostics (Basel) 2023; 13:3198. [PMID: 37892019] [PMCID: PMC10606005] [DOI: 10.3390/diagnostics13203198]
Abstract
The preoperative prediction of resectability of pancreatic ductal adenocarcinoma (PDAC) is challenging. This retrospective single-center study examined tumor and vessel radiomics to predict the resectability of PDAC in chemo-naïve patients. The tumor and adjacent arteries and veins were segmented in the portal-venous phase of contrast-enhanced CT scans, and radiomic features were extracted. Features were selected via stability and collinearity testing and application of the least absolute shrinkage and selection operator (LASSO). Three models, using tumor features, vessel features, and a combination of both, were trained on the training set (N = 86) to predict resectability. The results were validated with the test set (N = 15) and compared to the performance of the multidisciplinary team (MDT). The vessel-features-only model performed best, with an AUC of 0.92 and sensitivity and specificity of 97% and 73%, respectively. Test set validation showed a sensitivity and specificity of 100% and 88%, respectively. The combined model was as good as the vessel model (AUC = 0.91), whereas the tumor model showed poor performance (AUC = 0.76). The MDT's prediction reached a sensitivity and specificity of 97% and 84% on the training set and 88% and 100% on the test set, respectively. Our clinician-independent vessel-based radiomics model can aid in predicting resectability, with performance comparable to that of the MDT. With these encouraging results, improved, automated, and generalizable models can be developed that reduce workload and can be applied in non-expert hospitals.
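The LASSO step used here for feature selection solves an ℓ1-penalized least-squares problem over the N training cases and their radiomic feature vectors; in standard form:

```latex
\hat{\beta} = \arg\min_{\beta} \; \frac{1}{2N} \sum_{i=1}^{N} \left( y_i - x_i^{\top} \beta \right)^2 + \lambda \lVert \beta \rVert_1
```

The ℓ1 penalty drives many coefficients exactly to zero as λ grows, which is what makes LASSO a feature selector rather than only a regularizer; this matters for radiomics, where hundreds of correlated features are extracted from a few dozen patients.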
Collapse
Affiliation(s)
- Geke Litjens
- Department of Medical Imaging, Radboud Institute for Health Sciences, Radboud University Medical Center, 6525 GA Nijmegen, The Netherlands
- Joris P. E. A. Broekmans
- Department of Electrical Engineering, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands
- Tim Boers
- Department of Electrical Engineering, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands
- Marco Caballo
- Department of Medical Imaging, Radboud Institute for Health Sciences, Radboud University Medical Center, 6525 GA Nijmegen, The Netherlands
- Maud H. F. van den Hurk
- Department of Plastic and Reconstructive Surgery, Saint Vincent’s University Hospital, D04 T6F4 Dublin, Ireland
- Dilek Ozdemir
- Department of Medical Imaging, Radboud Institute for Health Sciences, Radboud University Medical Center, 6525 GA Nijmegen, The Netherlands
- Caroline J. van Schaik
- Department of Medical Imaging, Radboud Institute for Health Sciences, Radboud University Medical Center, 6525 GA Nijmegen, The Netherlands
- Markus H. A. Janse
- Image Sciences Institute, University Medical Center Utrecht, 3584 CX Utrecht, The Netherlands
- Erwin J. M. van Geenen
- Department of Gastroenterology and Hepatology, Radboud Institute for Molecular Life Sciences, Radboud University Medical Center, 6525 GA Nijmegen, The Netherlands
- Cees J. H. M. van Laarhoven
- Department of Surgery, Radboud Institute for Health Sciences, Radboud University Medical Center, 6525 GA Nijmegen, The Netherlands
- Mathias Prokop
- Department of Medical Imaging, Radboud Institute for Health Sciences, Radboud University Medical Center, 6525 GA Nijmegen, The Netherlands
- Peter H. N. de With
- Department of Electrical Engineering, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands
- Fons van der Sommen
- Department of Electrical Engineering, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands
- John J. Hermans
- Department of Medical Imaging, Radboud Institute for Health Sciences, Radboud University Medical Center, 6525 GA Nijmegen, The Netherlands
9
Abstract
PURPOSE OF REVIEW Early and accurate diagnosis of pancreatic cancer is crucial for improving patient outcomes, and artificial intelligence (AI) algorithms have the potential to play a vital role in computer-aided diagnosis of pancreatic cancer. In this review, we aim to provide the latest and relevant advances in AI, specifically deep learning (DL) and radiomics approaches, for pancreatic cancer diagnosis using cross-sectional imaging examinations such as computed tomography (CT) and magnetic resonance imaging (MRI). RECENT FINDINGS This review highlights the recent developments in DL techniques applied to medical imaging, including convolutional neural networks (CNNs), transformer-based models, and novel deep learning architectures that focus on multitype pancreatic lesions, multiorgan and multitumor segmentation, as well as incorporating auxiliary information. We also discuss advancements in radiomics, such as improved imaging feature extraction, optimized machine learning classifiers and integration with clinical data. Furthermore, we explore implementing AI-based clinical decision support systems for pancreatic cancer diagnosis using medical imaging in practical settings. SUMMARY Deep learning and radiomics with medical imaging have demonstrated strong potential to improve diagnostic accuracy of pancreatic cancer, facilitate personalized treatment planning, and identify prognostic and predictive biomarkers. However, challenges remain in translating research findings into clinical practice. More studies are required focusing on refining these methods, addressing significant limitations, and developing integrative approaches for data analysis to further advance the field of pancreatic cancer diagnosis.
Affiliation(s)
- Lanhong Yao
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University
- Zheyuan Zhang
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University
- Elif Keles
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University
- Cemal Yazici
- Division of Gastroenterology and Hepatology, University of Illinois Chicago, Chicago, Illinois
- Temel Tirkes
- Department of Radiology & Imaging Sciences, Medicine and Urology, Indiana University School of Medicine, Indianapolis, Indiana, USA
- Ulas Bagci
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University
10
Alves N, Bosma JS, Venkadesh KV, Jacobs C, Saghir Z, de Rooij M, Hermans J, Huisman H. Prediction Variability to Identify Reduced AI Performance in Cancer Diagnosis at MRI and CT. Radiology 2023; 308:e230275. [PMID: 37724961] [DOI: 10.1148/radiol.230275]
Abstract
Background A priori identification of patients at risk of artificial intelligence (AI) failure in diagnosing cancer would contribute to the safer clinical integration of diagnostic algorithms. Purpose To evaluate AI prediction variability as an uncertainty quantification (UQ) metric for identifying cases at risk of AI failure in diagnosing cancer at MRI and CT across different cancer types, data sets, and algorithms. Materials and Methods Multicenter data sets and publicly available AI algorithms from three previous studies that evaluated detection of pancreatic cancer on contrast-enhanced CT images, detection of prostate cancer on MRI scans, and prediction of pulmonary nodule malignancy on low-dose CT images were analyzed retrospectively. Each task's algorithm was extended to generate an uncertainty score based on ensemble prediction variability. AI accuracy percentage and partial area under the receiver operating characteristic curve (pAUC) were compared between certain and uncertain patient groups across a range of percentile thresholds (10%-90%) for the uncertainty score, using permutation tests for statistical significance. The pulmonary nodule malignancy prediction algorithm was compared with 11 clinical readers for the certain group (CG) and uncertain group (UG). Results In total, 18 022 images were used for training and 838 images were used for testing. AI diagnostic accuracy was higher for the cases in the CG across all tasks (P < .001). At an 80% threshold of certain predictions, accuracy in the CG was 21%-29% higher than in the UG and 4%-6% higher than in the overall test data sets. The lesion-level pAUC in the CG was 0.25-0.39 higher than in the UG and 0.05-0.08 higher than in the overall test data sets (P < .001).
For pulmonary nodule malignancy prediction, the accuracy of AI was on par with that of clinicians for cases in the CG (AI vs clinicians, 80% [95% CI: 76, 85] vs 78% [95% CI: 70, 87]; P = .07) but worse for cases in the UG (AI vs clinicians, 50% [95% CI: 37, 64] vs 68% [95% CI: 60, 76]; P < .001). Conclusion An AI-prediction UQ metric consistently identified reduced performance of AI in cancer diagnosis. © RSNA, 2023. Supplemental material is available for this article. See also the editorial by Babyn in this issue.
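The ensemble-variability uncertainty score described above can be sketched in a few lines of NumPy. This is our own simplified formulation, not the authors' code: simulated ensemble probabilities stand in for real model outputs, and the 80% percentile cut mirrors the threshold reported in the study.

```python
# Sketch: uncertainty = variability of ensemble predictions per case.
import numpy as np

rng = np.random.default_rng(42)
n_cases, n_members = 200, 5
# Simulated malignancy probabilities from 5 ensemble members per case.
preds = rng.uniform(0.0, 1.0, size=(n_cases, n_members))

uncertainty = preds.std(axis=1)       # prediction variability per case
cut = np.percentile(uncertainty, 80)  # keep the 80% most certain cases
certain = uncertainty <= cut          # certain group (CG)
uncertain = ~certain                  # uncertain group (UG)
print(certain.sum(), uncertain.sum())
```

Accuracy would then be compared between the CG and UG; cases in the UG are candidates for mandatory human review.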
Affiliation(s)
- Natália Alves
- Joeran S Bosma
- Kiran V Venkadesh
- Colin Jacobs
- Zaigham Saghir
- Maarten de Rooij
- John Hermans
- Henkjan Huisman
- From the Department of Medical Imaging, Radboudumc, Route 767, Room 2.30, Geert Grooteplein Zuid 10, 6525 GA, Nijmegen, the Netherlands (N.A., J.S.B., K.V.V., C.J., M.d.R., J.H., H.H.); Department of Medicine, Section of Pulmonary Medicine, Herlev-Gentofte Hospital, Herlev, Denmark (Z.S.); and Department of Clinical Medicine, University of Copenhagen, Copenhagen, Denmark (Z.S.)
11
Viriyasaranon T, Chun JW, Koh YH, Cho JH, Jung MK, Kim SH, Kim HJ, Lee WJ, Choi JH, Woo SM. Annotation-Efficient Deep Learning Model for Pancreatic Cancer Diagnosis and Classification Using CT Images: A Retrospective Diagnostic Study. Cancers (Basel) 2023; 15:3392. [PMID: 37444502] [DOI: 10.3390/cancers15133392]
Abstract
The aim of this study was to develop a novel deep learning (DL) model for detecting pancreatic cancer (PC) on computed tomography (CT) images without requiring large annotated training datasets. This retrospective diagnostic study used CT images collected between 2004 and 2019 from 4287 patients diagnosed with PC. We proposed a self-supervised learning algorithm (pseudo-lesion segmentation (PS)) for PC classification, which was trained with and without PS and validated on randomly divided training and validation sets. We further performed cross-racial external validation using open-access CT images from 361 patients. For internal validation, the accuracy and sensitivity for PC classification were 94.3% (92.8-95.4%) and 92.5% (90.0-94.4%) for the convolutional neural network (CNN) model and 95.7% (94.5-96.7%) and 99.3% (98.4-99.7%) for the transformer-based DL model (both with PS). Implementing PS on a small training dataset (a randomly sampled 10%) increased accuracy by 20.5% and sensitivity by 37.0%. For external validation, the accuracy and sensitivity were 82.5% (78.3-86.1%) and 81.7% (77.3-85.4%) for the CNN model and 87.8% (84.0-90.8%) and 86.5% (82.3-89.8%) for the transformer-based DL model (both with PS). PS self-supervised learning can increase the performance, reliability, and robustness of DL-based PC classification on unseen, and even small, datasets. The proposed DL model is potentially useful for PC diagnosis.
Affiliation(s)
- Thanaporn Viriyasaranon
- Graduate Program in System Health Science and Engineering, Division of Mechanical and Biomedical Engineering, Ewha Womans University, Seoul 03760, Republic of Korea
- Jung Won Chun
- Center for Liver and Pancreatobiliary Cancer, National Cancer Center, Goyang 10408, Republic of Korea
- Young Hwan Koh
- Center for Liver and Pancreatobiliary Cancer, National Cancer Center, Goyang 10408, Republic of Korea
- Jae Hee Cho
- Department of Internal Medicine, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul 03722, Republic of Korea
- Min Kyu Jung
- Department of Internal Medicine, Kyungpook National University Hospital, Daegu 41944, Republic of Korea
- Seong-Hun Kim
- Department of Internal Medicine, Research Institute of Clinical Medicine of Jeonbuk National University-Biomedical Research Institute of Jeonbuk National University Hospital, Jeonju 54907, Republic of Korea
- Hyo Jung Kim
- Department of Gastroenterology, Korea University Guro Hospital, Seoul 10408, Republic of Korea
- Woo Jin Lee
- Center for Liver and Pancreatobiliary Cancer, National Cancer Center, Goyang 10408, Republic of Korea
- Jang-Hwan Choi
- Graduate Program in System Health Science and Engineering, Division of Mechanical and Biomedical Engineering, Ewha Womans University, Seoul 03760, Republic of Korea
- Sang Myung Woo
- Center for Liver and Pancreatobiliary Cancer, National Cancer Center, Goyang 10408, Republic of Korea
12
Carrillo-Perez F, Ortuno FM, Börjesson A, Rojas I, Herrera LJ. Performance comparison between multi-center histopathology datasets of a weakly-supervised deep learning model for pancreatic ductal adenocarcinoma detection. Cancer Imaging 2023; 23:66. [PMID: 37365659] [PMCID: PMC10294485] [DOI: 10.1186/s40644-023-00586-3]
Abstract
BACKGROUND Pancreatic ductal adenocarcinoma (PDAC) patients have a very poor prognosis, given the difficulty of early detection and the lack of early symptoms. Digital pathology is routinely used by pathologists to diagnose the disease. However, visually inspecting the tissue is a time-consuming task, which slows down the diagnostic procedure. With the advances in artificial intelligence, specifically in deep learning models, and the growing availability of public histology data, clinical decision support systems are being created. However, the generalization capabilities of these systems are not always tested, nor is the integration of publicly available datasets for PDAC detection. METHODS In this work, we explored the performance of two weakly-supervised deep learning models using the two most widely available datasets with pancreatic ductal adenocarcinoma histology images, The Cancer Genome Atlas Project (TCGA) and the Clinical Proteomic Tumor Analysis Consortium (CPTAC). In order to have sufficient training data, the TCGA dataset was integrated with the Genotype-Tissue Expression (GTEx) project dataset, which contains healthy pancreatic samples. RESULTS We showed that the model trained on CPTAC generalizes better than the one trained on the integrated dataset, obtaining an inter-dataset accuracy of 90.62% ± 2.32 and an outer-dataset accuracy of 92.17% when evaluated on TCGA + GTEx. Furthermore, we tested the performance on another dataset formed by tissue micro-arrays, obtaining an accuracy of 98.59%. We showed that the features learned on an integrated dataset differentiate not between the classes but between the datasets, indicating that stronger normalization may be needed when creating clinical decision support systems with datasets obtained from different sources.
To mitigate this effect, we proposed to train on the three available datasets, improving the detection performance and generalization capabilities of a model trained only on TCGA + GTEx and achieving performance similar to that of the model trained only on CPTAC. CONCLUSIONS The integration of datasets in which both classes are present can mitigate the batch effect that arises when integrating datasets, improving classification performance and accurately detecting PDAC across different datasets.
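The batch effect described above (features separating datasets rather than classes) can be probed with a simple check: if a classifier can predict the dataset of origin from the learned features, those features encode the source rather than the tissue class. The synthetic features and mean shift below are illustrative, not the authors' data.

```python
# Sketch: detect a batch effect by predicting dataset-of-origin from features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic "learned features" from two datasets with a shifted mean,
# mimicking a dataset-specific signal (the batch effect).
feats_a = rng.normal(0.0, 1.0, size=(100, 32))
feats_b = rng.normal(0.7, 1.0, size=(100, 32))
X = np.vstack([feats_a, feats_b])
origin = np.array([0] * 100 + [1] * 100)  # dataset-of-origin labels

acc = cross_val_score(LogisticRegression(max_iter=1000), X, origin, cv=5).mean()
print(f"dataset-of-origin accuracy: {acc:.2f}")  # well above 0.5 => batch effect
```

An origin-prediction accuracy near chance (0.5) would suggest the normalization succeeded; a high accuracy, as here, signals the confound the authors describe.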
Affiliation(s)
- Francisco Carrillo-Perez
- Department of Computer Engineering, Automation and Robotics, University of Granada, Granada, Spain
- Francisco M Ortuno
- Department of Computer Engineering, Automation and Robotics, University of Granada, Granada, Spain
- Clinical Bioinformatics Area, Fundación Progreso y Salud (FPS), Hospital Virgen del Rocío, Sevilla, Spain
- Alejandro Börjesson
- Department of Computer Engineering, Automation and Robotics, University of Granada, Granada, Spain
- Ignacio Rojas
- Department of Computer Engineering, Automation and Robotics, University of Granada, Granada, Spain
- Luis Javier Herrera
- Department of Computer Engineering, Automation and Robotics, University of Granada, Granada, Spain
13
Ramaekers M, Viviers CGA, Janssen BV, Hellström TAE, Ewals L, van der Wulp K, Nederend J, Jacobs I, Pluyter JR, Mavroeidis D, van der Sommen F, Besselink MG, Luyer MDP. Computer-Aided Detection for Pancreatic Cancer Diagnosis: Radiological Challenges and Future Directions. J Clin Med 2023; 12:4209. [PMID: 37445243] [DOI: 10.3390/jcm12134209]
Abstract
Radiological imaging plays a crucial role in the detection and treatment of pancreatic ductal adenocarcinoma (PDAC). However, there are several challenges associated with the use of these techniques in daily clinical practice. Determination of the presence or absence of cancer using radiological imaging is difficult and requires specific expertise, especially after neoadjuvant therapy. Early detection and characterization of tumors would potentially increase the number of patients who are eligible for curative treatment. Over the last decades, artificial intelligence (AI)-based computer-aided detection (CAD) has rapidly evolved as a means for improving the radiological detection of cancer and the assessment of the extent of disease. Although the results of AI applications seem promising, widespread adoption in clinical practice has not taken place. This narrative review provides an overview of current radiological CAD systems in pancreatic cancer, highlights challenges that are pertinent to clinical practice, and discusses potential solutions for these challenges.
Affiliation(s)
- Mark Ramaekers
- Department of Surgery, Catharina Cancer Institute, Catharina Hospital Eindhoven, 5623 EJ Eindhoven, The Netherlands
- Christiaan G A Viviers
- Department of Electrical Engineering, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands
- Boris V Janssen
- Department of Surgery, Amsterdam UMC, University of Amsterdam, 1105 AZ Amsterdam, The Netherlands
- Cancer Center Amsterdam, 1081 HV Amsterdam, The Netherlands
- Terese A E Hellström
- Department of Electrical Engineering, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands
- Lotte Ewals
- Department of Radiology, Catharina Cancer Institute, Catharina Hospital Eindhoven, 5623 EJ Eindhoven, The Netherlands
- Kasper van der Wulp
- Department of Radiology, Catharina Cancer Institute, Catharina Hospital Eindhoven, 5623 EJ Eindhoven, The Netherlands
- Joost Nederend
- Department of Radiology, Catharina Cancer Institute, Catharina Hospital Eindhoven, 5623 EJ Eindhoven, The Netherlands
- Igor Jacobs
- Department of Hospital Services and Informatics, Philips Research, 5656 AE Eindhoven, The Netherlands
- Jon R Pluyter
- Department of Experience Design, Philips Design, 5656 AE Eindhoven, The Netherlands
- Dimitrios Mavroeidis
- Department of Data Science, Philips Research, 5656 AE Eindhoven, The Netherlands
- Fons van der Sommen
- Department of Electrical Engineering, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands
- Marc G Besselink
- Department of Surgery, Amsterdam UMC, University of Amsterdam, 1105 AZ Amsterdam, The Netherlands
- Cancer Center Amsterdam, 1081 HV Amsterdam, The Netherlands
- Misha D P Luyer
- Department of Surgery, Catharina Cancer Institute, Catharina Hospital Eindhoven, 5623 EJ Eindhoven, The Netherlands
14
Faur AC, Lazar DC, Ghenciu LA. Artificial intelligence as a noninvasive tool for pancreatic cancer prediction and diagnosis. World J Gastroenterol 2023; 29:1811-1823. [PMID: 37032728] [PMCID: PMC10080704] [DOI: 10.3748/wjg.v29.i12.1811]
Abstract
Pancreatic cancer (PC) has a low incidence rate but high mortality, with patients often in an advanced stage of the disease at first diagnosis. If detected, early neoplastic lesions are ideal for surgery, offering the best prognosis. Preneoplastic lesions of the pancreas include pancreatic intraepithelial neoplasia and mucinous cystic neoplasms, with intraductal papillary mucinous neoplasms being the most commonly diagnosed. Our study focused on predicting PC by identifying early signs using noninvasive techniques and artificial intelligence (AI). A systematic English literature search was conducted on the PubMed electronic database and other sources. We obtained a total of 97 studies on the subject of pancreatic neoplasms. The final number of articles included in our study was 44, of which 34 focused on the use of AI algorithms in the early diagnosis and prediction of pancreatic lesions. AI algorithms can facilitate diagnosis by analyzing massive amounts of data in a short period of time. Correlations can be made through AI algorithms by expanding image and electronic medical record databases, which can later be used as part of a screening program for the general population. AI-based screening models should combine biomarkers with medical and imaging data from different sources. This requires substantial resources, collaboration between medical practitioners, and investment in medical infrastructure.
Affiliation(s)
- Alexandra Corina Faur
- Department of Anatomy and Embryology, “Victor Babeș” University of Medicine and Pharmacy Timișoara, Timișoara 300041, Timiș, Romania
- Daniela Cornelia Lazar
- Department V of Internal Medicine I, Discipline of Internal Medicine IV, University of Medicine and Pharmacy “Victor Babes” Timișoara, Timișoara 300041, Timiș, Romania
- Laura Andreea Ghenciu
- Department III, Discipline of Pathophysiology, “Victor Babeș” University of Medicine and Pharmacy, Timișoara 300041, Timiș, Romania
15
Salahuddin Z, Chen Y, Zhong X, Woodruff HC, Rad NM, Mali SA, Lambin P. From Head and Neck Tumour and Lymph Node Segmentation to Survival Prediction on PET/CT: An End-to-End Framework Featuring Uncertainty, Fairness, and Multi-Region Multi-Modal Radiomics. Cancers (Basel) 2023; 15:1932. [PMID: 37046593] [PMCID: PMC10093277] [DOI: 10.3390/cancers15071932]
Abstract
Automatic delineation and detection of the primary tumour (GTVp) and lymph nodes (GTVn) on PET and CT in head and neck cancer, together with recurrence-free survival prediction, can be useful for diagnosis and patient risk stratification. We used data from nine different centres, with 524 and 359 cases used for training and testing, respectively. We utilised posterior sampling of the weight space in the proposed segmentation model to estimate the uncertainty for false-positive reduction. We explored the prognostic potential of radiomics features extracted from the predicted GTVp and GTVn in PET and CT for recurrence-free survival prediction and used SHAP analysis for explainability. We evaluated the bias of models with respect to age, gender, chemotherapy, HPV status, and lesion size. We achieved an aggregate Dice score of 0.774 and 0.760 on the test set for GTVp and GTVn, respectively. We observed a per-image false-positive reduction of 19.5% and 7.14% using the uncertainty threshold for GTVp and GTVn, respectively. Radiomics features extracted from GTVn in PET and from both GTVp and GTVn in CT are the most prognostic, and our model achieves a C-index of 0.672 on the test set. Our framework incorporates uncertainty estimation, fairness, and explainability, demonstrating the potential for accurate detection and risk stratification.
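The C-index reported above measures how well predicted risk scores rank survival times: the fraction of comparable patient pairs in which the higher-risk patient experienced the event earlier. A naive O(n²) sketch (our own illustration on toy data, not the study's code):

```python
# Minimal concordance index (C-index) for right-censored survival data.
import numpy as np

def c_index(times, risks, events):
    """Fraction of comparable pairs ordered correctly (higher risk ->
    shorter survival); ties in risk score count as 0.5."""
    conc = comp = 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable if the earlier time had an observed event.
            if times[i] < times[j] and events[i]:
                comp += 1
                if risks[i] > risks[j]:
                    conc += 1
                elif risks[i] == risks[j]:
                    conc += 0.5
    return conc / comp

times = np.array([5.0, 10.0, 2.0, 8.0])   # follow-up times (toy)
events = np.array([1, 1, 1, 0])           # 1 = event observed, 0 = censored
risks = np.array([0.6, 0.2, 0.9, 0.3])    # predicted risk scores (toy)
print(c_index(times, risks, events))  # → 1.0 (perfect ranking)
```

In practice a library routine (e.g. a survival-analysis package's concordance function) would be used instead of this quadratic loop.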
Affiliation(s)
- Zohaib Salahuddin
- The D-Lab, Department of Precision Medicine, GROW-School for Oncology and Reproduction, Maastricht University, 6200 MD Maastricht, The Netherlands
- Yi Chen
- The D-Lab, Department of Precision Medicine, GROW-School for Oncology and Reproduction, Maastricht University, 6200 MD Maastricht, The Netherlands
- Key Laboratory of Intelligent Medical Image Analysis and Precise Diagnosis, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Xian Zhong
- The D-Lab, Department of Precision Medicine, GROW-School for Oncology and Reproduction, Maastricht University, 6200 MD Maastricht, The Netherlands
- Department of Medical Ultrasonics, Institute of Diagnostic and Interventional Ultrasound, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou 510080, China
- Henry C Woodruff
- The D-Lab, Department of Precision Medicine, GROW-School for Oncology and Reproduction, Maastricht University, 6200 MD Maastricht, The Netherlands
- Department of Radiology and Nuclear Medicine, GROW-School for Oncology and Reproduction, Maastricht University Medical Center+, 6229 HX Maastricht, The Netherlands
- Nastaran Mohammadian Rad
- The D-Lab, Department of Precision Medicine, GROW-School for Oncology and Reproduction, Maastricht University, 6200 MD Maastricht, The Netherlands
- Shruti Atul Mali
- The D-Lab, Department of Precision Medicine, GROW-School for Oncology and Reproduction, Maastricht University, 6200 MD Maastricht, The Netherlands
- Philippe Lambin
- The D-Lab, Department of Precision Medicine, GROW-School for Oncology and Reproduction, Maastricht University, 6200 MD Maastricht, The Netherlands
- Department of Radiology and Nuclear Medicine, GROW-School for Oncology and Reproduction, Maastricht University Medical Center+, 6229 HX Maastricht, The Netherlands
16
Berbís MA, Paulano Godino F, Royuela del Val J, Alcalá Mata L, Luna A. Clinical impact of artificial intelligence-based solutions on imaging of the pancreas and liver. World J Gastroenterol 2023; 29:1427-1445. [PMID: 36998424] [PMCID: PMC10044858] [DOI: 10.3748/wjg.v29.i9.1427]
Abstract
Artificial intelligence (AI) has experienced substantial progress over the last ten years in many fields of application, including healthcare. In hepatology and pancreatology, major attention to date has been paid to its application to the assisted or even automated interpretation of radiological images, where AI can generate accurate and reproducible imaging diagnoses, reducing physicians' workload. AI can provide automatic or semi-automatic segmentation and registration of the liver and pancreatic glands and lesions. Furthermore, using radiomics, AI can add to radiological reports new quantitative information that is not visible to the human eye. AI has been applied in the detection and characterization of focal lesions and diffuse diseases of the liver and pancreas, such as neoplasms, chronic hepatic disease, or acute or chronic pancreatitis, among others. These solutions have been applied to the different imaging techniques commonly used to diagnose liver and pancreatic diseases, such as ultrasound, endoscopic ultrasonography, computerized tomography (CT), magnetic resonance imaging, and positron emission tomography/CT. However, AI is also applied in this context to many other relevant steps involved in a comprehensive clinical scenario to manage a gastroenterological patient. AI can also be applied to choose the most convenient test prescription, to improve image quality or accelerate its acquisition, and to predict patient prognosis and treatment response. In this review, we summarize the current evidence on the application of AI to hepatic and pancreatic radiology, not only with regard to the interpretation of images, but also to all the steps involved in the radiological workflow in a broader sense. Lastly, we discuss the challenges and future directions of the clinical application of AI methods.
Affiliation(s)
- M Alvaro Berbís
- Department of Radiology, HT Médica, San Juan de Dios Hospital, Córdoba 14960, Spain
- Faculty of Medicine, Autonomous University of Madrid, Madrid 28049, Spain
- Lidia Alcalá Mata
- Department of Radiology, HT Médica, Clínica las Nieves, Jaén 23007, Spain
- Antonio Luna
- Department of Radiology, HT Médica, Clínica las Nieves, Jaén 23007, Spain
17
Veiga-Canuto D, Cerdà-Alberich L, Jiménez-Pastor A, Carot Sierra JM, Gomis-Maya A, Sangüesa-Nebot C, Fernández-Patón M, Martínez de las Heras B, Taschner-Mandl S, Düster V, Pötschger U, Simon T, Neri E, Alberich-Bayarri Á, Cañete A, Hero B, Ladenstein R, Martí-Bonmatí L. Independent Validation of a Deep Learning nnU-Net Tool for Neuroblastoma Detection and Segmentation in MR Images. Cancers (Basel) 2023; 15:1622. [PMID: 36900410] [PMCID: PMC10000775] [DOI: 10.3390/cancers15051622]
Abstract
OBJECTIVES To externally validate and assess the accuracy of a previously trained, fully automatic nnU-Net CNN algorithm to identify and segment primary neuroblastoma tumors in MR images in a large cohort of children. METHODS An international multicenter, multivendor imaging repository of patients with neuroblastic tumors was used to validate the performance of a trained machine learning (ML) tool to identify and delineate primary neuroblastoma tumors. The dataset was heterogeneous and completely independent from the one used to train and tune the model, consisting of 300 children with neuroblastic tumors and 535 MR T2-weighted sequences (486 sequences at diagnosis and 49 after finalization of the first phase of chemotherapy). The automatic segmentation algorithm was based on an nnU-Net architecture developed within the PRIMAGE project. For comparison, the segmentation masks were manually edited by an expert radiologist, and the time required for manual editing was recorded. Different overlap and spatial metrics were calculated to compare both masks. RESULTS The median Dice Similarity Coefficient (DSC) was high: 0.997 (Q1-Q3, 0.944-1.000). In 18 MR sequences (6%), the network was able neither to identify nor to segment the tumor. No differences were found regarding MR magnetic field strength, type of T2 sequence, or tumor location. No significant differences in the performance of the network were found in patients whose MR examination was performed after chemotherapy. The time for visual inspection of the generated masks was 7.9 ± 7.5 s (mean ± standard deviation (SD)). The cases that required manual editing (136 masks) took 124 ± 120 s. CONCLUSIONS The automatic CNN was able to locate and segment the primary tumor on the T2-weighted images in 94% of cases, with extremely high agreement between the automatic tool and the manually edited masks.
This is the first study to validate an automatic segmentation model for neuroblastic tumor identification and segmentation on body MR images. The semi-automatic approach, with minor manual editing of the deep learning segmentation, increases the radiologist's confidence in the solution with minimal additional workload.
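The voxel-overlap metric reported in this and the following entries, the Dice-Sørensen coefficient, can be sketched in a few lines. This is an illustrative toy implementation on sets of voxel coordinates, not the code used in any of the studies:

```python
def dice_coefficient(pred: set, truth: set) -> float:
    """Dice-Sørensen coefficient between two segmentation masks,
    represented here as sets of voxel coordinates.

    DSC = 2|A ∩ B| / (|A| + |B|): 1.0 is perfect overlap, 0.0 is none.
    """
    if not pred and not truth:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * len(pred & truth) / (len(pred) + len(truth))

# Toy example: a manual mask of 6 voxels vs. an automatic mask of 4,
# sharing 4 voxels (made-up coordinates, for illustration only).
manual = {(1, 1), (1, 2), (1, 3), (2, 1), (2, 2), (2, 3)}
automatic = {(1, 1), (1, 2), (2, 1), (2, 2)}
print(dice_coefficient(automatic, manual))  # 2*4 / (4+6) = 0.8
```

Real pipelines compute the same ratio on boolean voxel arrays, but the set formulation makes the overlap arithmetic explicit.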
18
Park HJ, Shin K, You MW, Kyung SG, Kim SY, Park SH, Byun JH, Kim N, Kim HJ. Deep Learning-based Detection of Solid and Cystic Pancreatic Neoplasms at Contrast-enhanced CT. Radiology 2023; 306:140-149. [PMID: 35997607] [DOI: 10.1148/radiol.220171]
Abstract
Background Deep learning (DL) may facilitate the diagnosis of various pancreatic lesions at imaging. Purpose To develop and validate a DL-based approach for automatic identification of patients with various solid and cystic pancreatic neoplasms at abdominal CT and compare its diagnostic performance with that of radiologists. Materials and Methods In this retrospective study, a three-dimensional nnU-Net-based DL model was trained using the CT data of patients who underwent resection for pancreatic lesions between January 2014 and March 2015 and a subset of patients without pancreatic abnormality who underwent CT in 2014. Performance of the DL-based approach to identify patients with pancreatic lesions was evaluated in a temporally independent cohort (test set 1) and a temporally and spatially independent cohort (test set 2) and was compared with that of two board-certified radiologists. Performance was assessed using receiver operating characteristic analysis. Results The study included 852 patients in the training set (median age, 60 years [range, 19-85 years]; 462 men), 603 patients in test set 1 (median age, 58 years [range, 18-82 years]; 376 men), and 589 patients in test set 2 (median age, 63 years [range, 18-99 years]; 343 men). In test set 1, the DL-based approach had an area under the receiver operating characteristic curve (AUC) of 0.91 (95% CI: 0.89, 0.94) and showed slightly worse performance in test set 2 (AUC, 0.87 [95% CI: 0.84, 0.89]). The DL-based approach showed high sensitivity in identifying patients with solid lesions of any size (98%-100%) or cystic lesions measuring 1.0 cm or larger (92%-93%), which was comparable with that of the radiologists (95%-100% for solid lesions [P = .51 to P > .99]; 93%-98% for cystic lesions ≥1.0 cm [P = .38 to P > .99]). Conclusion The deep learning-based approach demonstrated high performance in identifying patients with various solid and cystic pancreatic lesions at CT.
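The AUC figures quoted above come from receiver operating characteristic analysis. As a hedged illustration (the patient scores below are invented, not the study's data), the AUC can be computed directly as the Mann-Whitney probability that a randomly chosen positive case outscores a randomly chosen negative one:

```python
def auc_roc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a random positive case scores higher than
    a random negative case, counting ties as half a win."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical lesion-probability outputs for patients with and
# without a pancreatic lesion (toy values, for illustration only).
with_lesion = [0.95, 0.90, 0.80, 0.40]
without_lesion = [0.10, 0.30, 0.35, 0.85]
print(auc_roc(with_lesion, without_lesion))  # 14 of 16 pairs won: 0.875
```

The quadratic pair loop is fine for a sketch; production code sorts the pooled scores to get the same statistic in O(n log n).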
19
Zavalsiz MT, Alhajj S, Sailunaz K, Ozyer T, Alhajj R. Pancreatic Tumor Detection by Convolutional Neural Networks. In: Proceedings of the 2022 International Arab Conference on Information Technology (ACIT); 2022. [DOI: 10.1109/acit57182.2022.9994181]
20
Wei W, Jia G, Wu Z, Wang T, Wang H, Wei K, Cheng C, Liu Z, Zuo C. A multidomain fusion model of radiomics and deep learning to discriminate between PDAC and AIP based on 18F-FDG PET/CT images. Jpn J Radiol 2022; 41:417-427. [PMID: 36409398] [PMCID: PMC9676903] [DOI: 10.1007/s11604-022-01363-1]
Abstract
PURPOSE To explore a multidomain fusion model of radiomics and deep learning features based on 18F-fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT) images to distinguish pancreatic ductal adenocarcinoma (PDAC) from autoimmune pancreatitis (AIP) and effectively improve diagnostic accuracy. MATERIALS AND METHODS This retrospective study included 48 patients with AIP (mean age, 65 ± 12.0 years; range, 37-90 years) and 64 patients with PDAC (mean age, 66 ± 11.3 years; range, 32-88 years). Three different methods of identifying PDAC and AIP on 18F-FDG PET/CT images were compared: the radiomics model (RAD_model), the deep learning model (DL_model), and the multidomain fusion model (MF_model). We also compared the classification results of PET/CT, PET, and CT images across these three models. In addition, we explored the attributes of deep learning abstract features by analyzing the correlation between radiomics and deep learning features. Five-fold cross-validation was used to calculate the receiver operating characteristic (ROC) curve, area under the ROC curve (AUC), accuracy (Acc), sensitivity (Sen), and specificity (Spe) to quantitatively evaluate the performance of the different classification models. RESULTS The experimental results showed that the multidomain fusion model had the best overall performance compared with the radiomics and deep learning models, with an AUC, accuracy, sensitivity, and specificity of 96.4% (95% CI 95.4-97.3%), 90.1% (95% CI 88.7-91.5%), 87.5% (95% CI 84.3-90.6%), and 93.0% (95% CI 90.3-95.6%), respectively. The study also showed that the multimodal features of PET/CT were superior to either PET or CT features alone, and that first-order radiomics features provided valuable complementary information for the deep learning model.
CONCLUSION The preliminary results of this paper demonstrated that our proposed multidomain fusion model fully exploits the value of radiomics and deep learning features based on 18F-FDG PET/CT images, which provided competitive accuracy for the discrimination of PDAC and AIP.
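The five-fold cross-validation used above can be sketched as a simple fold-partitioning loop. The splitting scheme here is a generic illustration (deterministic shuffle, no class stratification); the authors' actual fold assignment may differ:

```python
import random

def k_fold_indices(n_samples: int, k: int = 5, seed: int = 0) -> list:
    """Partition sample indices into k roughly equal folds after a
    deterministic shuffle; each fold serves once as the held-out set."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

# 48 AIP + 64 PDAC patients = 112 cases, as in the study above.
folds = k_fold_indices(112, k=5)
for val_fold in folds:
    train = [j for f in folds if f is not val_fold for j in f]
    # ... fit the classifier on `train`, then score AUC/Acc/Sen/Spe
    # on `val_fold`; the five scores are averaged afterwards.
print([len(f) for f in folds])  # [23, 23, 22, 22, 22]
```

With medical cohorts this small, stratifying the folds by class (AIP vs. PDAC) keeps the class balance stable across folds, which is likely what "balanced" evaluation requires in practice.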
21
Veiga-Canuto D, Cerdà-Alberich L, Sangüesa Nebot C, Martínez de las Heras B, Pötschger U, Gabelloni M, Carot Sierra JM, Taschner-Mandl S, Düster V, Cañete A, Ladenstein R, Neri E, Martí-Bonmatí L. Comparative Multicentric Evaluation of Inter-Observer Variability in Manual and Automatic Segmentation of Neuroblastic Tumors in Magnetic Resonance Images. Cancers (Basel) 2022; 14:3648. [PMID: 35954314] [PMCID: PMC9367307] [DOI: 10.3390/cancers14153648]
Abstract
Simple Summary Tumor segmentation is a key step in oncologic image processing and a time-consuming task usually performed manually by radiologists. To facilitate it, there is growing interest in applying deep learning segmentation algorithms. We therefore explored the variability between two observers performing manual segmentation and used the state-of-the-art deep learning architecture nnU-Net to develop a model to detect and segment neuroblastic tumors on MR images. We were able to show that the variability between nnU-Net and manual segmentation is similar to the inter-observer variability in manual segmentation. Furthermore, we compared the time needed to manually segment the tumors from scratch with the time required for the automatic model to segment the same cases, followed by human validation with manual adjustment where needed.
Abstract Tumor segmentation is one of the key steps in image processing. The goals of this study were to assess the inter-observer variability in manual segmentation of neuroblastic tumors and to analyze whether the state-of-the-art deep learning architecture nnU-Net can provide a robust solution to detect and segment tumors on MR images. A retrospective multicenter study of 132 patients with neuroblastic tumors was performed. The Dice Similarity Coefficient (DSC) and the Area Under the Receiver Operating Characteristic Curve (AUC ROC) were used to compare segmentation sets. Two further metrics were computed to understand the direction of the errors: a modified False Positive rate (FPRm) and the False Negative rate (FNR). Two radiologists manually segmented 46 tumors and a comparative study was performed. nnU-Net was trained and tuned with 106 cases divided into five balanced folds for cross-validation. The five resulting models were used as an ensemble to measure training (n = 106) and validation (n = 26) performance independently. The time needed by the model to automatically segment 20 cases was compared to the time required for manual segmentation. The median DSC for the manual segmentation sets was 0.969 (IQR, 0.032). The median DSC for the automatic tool was 0.965 (IQR, 0.018). The automatic segmentation model achieved better performance with regard to the FPRm. MR image segmentation variability is similar between radiologists and nnU-Net. The time saved when using the automatic model with subsequent visual validation and manual adjustment corresponds to 92.8%.
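The direction-of-error metrics mentioned above (FPRm and FNR) can be illustrated on toy voxel sets. Note that the paper's exact definition of the modified false positive rate is not reproduced here; normalizing the false positives by the ground-truth volume, as below, is one common convention for segmentation (where there is no bounded negative class) and is an assumption:

```python
def fnr(pred: set, truth: set) -> float:
    """False Negative rate: fraction of ground-truth voxels missed."""
    return len(truth - pred) / len(truth)

def fpr_modified(pred: set, truth: set) -> float:
    """Modified False Positive rate: false-positive voxels normalized
    by the ground-truth volume instead of a (boundless) negative class.
    This normalization is an assumption; the paper's definition may differ."""
    return len(pred - truth) / len(truth)

# Toy masks: the prediction misses 1 of 4 true voxels and adds 2 spurious ones.
truth = {(0, 0), (0, 1), (1, 0), (1, 1)}
pred = {(0, 0), (0, 1), (1, 0), (2, 2), (2, 3)}
print(fnr(pred, truth), fpr_modified(pred, truth))  # 0.25 0.5
```

Together the two rates indicate whether a model tends to under-segment (high FNR) or over-segment (high FPRm), which the DSC alone cannot distinguish.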
22
Schuurmans M, Alves N, Vendittelli P, Huisman H, Hermans J. Setting the Research Agenda for Clinical Artificial Intelligence in Pancreatic Adenocarcinoma Imaging. Cancers (Basel) 2022; 14:3498. [PMID: 35884559] [PMCID: PMC9316850] [DOI: 10.3390/cancers14143498]
Abstract
Simple Summary Pancreatic ductal adenocarcinoma (PDAC) is one of the deadliest cancers worldwide, associated with a 98% loss of life expectancy and a 30% increase in disability-adjusted life years. Image-based artificial intelligence (AI) can help improve outcomes for PDAC, given that current clinical guidelines are non-uniform and lack evidence-based consensus. However, research on image-based AI for PDAC is too scattered and of insufficient quality to be incorporated into clinical workflows. In this review, an international, multi-disciplinary team of the world's leading experts in pancreatic cancer breaks down the patient pathway and pinpoints the current clinical touchpoints at each stage. The available PDAC imaging AI literature addressing each pathway stage is then rigorously analyzed, and current performance and pitfalls are identified in a comprehensive overview. Finally, a future research agenda for clinically relevant, image-driven AI in PDAC is proposed.
Abstract Pancreatic ductal adenocarcinoma (PDAC), estimated to become the second leading cause of cancer deaths in western societies by 2030, was flagged as a neglected cancer by the European Commission and the United States Congress. Due to a lack of investment in research and development, combined with a complex and aggressive tumour biology, PDAC overall survival has not significantly improved over the past decades. Cross-sectional imaging and histopathology play a crucial role throughout the patient pathway. However, current clinical guidelines for diagnostic workup, patient stratification, treatment response assessment, and follow-up are non-uniform and lack evidence-based consensus. Artificial intelligence (AI) can leverage multimodal data to improve patient outcomes, but PDAC AI research is too scattered and lacking in quality to be incorporated into clinical workflows.
This review describes the patient pathway and derives touchpoints for image-based AI research in collaboration with a multi-disciplinary, multi-institutional expert panel. The literature exploring AI to address these touchpoints is thoroughly retrieved and analysed to identify existing trends and knowledge gaps. The results show an absence of multi-institutional, well-curated datasets, an essential building block for robust AI applications. Furthermore, most research is unimodal, does not use state-of-the-art AI techniques, and lacks reliable ground truth. On this basis, a future research agenda for clinically relevant, image-driven AI in PDAC is proposed.