1
Jeltsch P, Monnin K, Jreige M, Fernandes-Mendes L, Girardet R, Dromain C, Richiardi J, Vietti-Violi N. Magnetic Resonance Imaging Liver Segmentation Protocol Enables More Consistent and Robust Annotations, Paving the Way for Advanced Computer-Assisted Analysis. Diagnostics (Basel) 2024; 14:2785. PMID: 39767146; PMCID: PMC11726866; DOI: 10.3390/diagnostics14242785. Received: 10/27/2024; Revised: 12/05/2024; Accepted: 12/10/2024; Indexed: 01/16/2025. Open access.
Abstract
BACKGROUND/OBJECTIVES Recent advancements in artificial intelligence (AI) have spurred interest in developing computer-assisted analysis for imaging examinations. However, the lack of high-quality datasets remains a significant bottleneck, and labeling instructions, though critical for dataset quality, are often lacking. This study aimed to establish a liver MRI segmentation protocol and assess its impact on annotation quality and inter-reader agreement. METHODS This retrospective study included 20 patients with chronic liver disease. Manual liver segmentations were performed by a radiologist in training and a radiology technician on T2-weighted imaging (T2wi) and on T1-weighted imaging (T1wi) at the portal venous phase. Based on the inter-reader discrepancies identified after the first segmentation round, a segmentation protocol was established to guide the second round, yielding 160 segmentations in total. The Dice Similarity Coefficient (DSC) assessed inter-reader agreement pre- and post-protocol, using a Wilcoxon signed-rank test for the per-volume analysis and an aligned-rank transform (ART) repeated-measures analysis of variance (ANOVA) for the per-slice analysis. Slice selection at extreme cranial or caudal liver positions was evaluated using the McNemar test. RESULTS The per-volume DSC increased significantly after protocol implementation for both T2wi (p < 0.001) and T1wi (p = 0.03). The per-slice DSC also improved significantly for both T2wi and T1wi (p < 0.001). The protocol reduced the number of liver segmentations with a non-annotated slice on T1wi (p = 0.04), but the change was not significant on T2wi (p = 0.16). CONCLUSIONS Establishing a liver MRI segmentation protocol improves annotation robustness and reproducibility, paving the way for advanced computer-assisted analysis.
Moreover, segmentation protocols could be extended to other organs and lesions and incorporated into guidelines, thereby expanding the potential applications of AI in daily clinical practice.
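The study's agreement metric, the Dice Similarity Coefficient, can be computed for any two binary masks; a minimal sketch with toy masks (not the study's data or code):

```python
import numpy as np

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:  # both masks empty: treat as perfect agreement
        return 1.0
    return 2.0 * np.logical_and(a, b).sum() / denom

# Two readers' liver masks on one slice (toy 4x4 example)
r1 = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]])
r2 = np.array([[0, 1, 1, 0], [0, 1, 1, 1], [0, 1, 1, 0], [0, 0, 0, 0]])
```

Here |A| = 6, |B| = 7, and the overlap is 6 voxels, so the DSC is 12/13; a per-volume DSC is the same computation over the stacked 3D masks.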
Affiliation(s)
- Patrick Jeltsch, Killian Monnin, Mario Jreige, Lucia Fernandes-Mendes, Clarisse Dromain, Jonas Richiardi, Naik Vietti-Violi: Department of Radiology and Interventional Radiology, Lausanne University Hospital, Lausanne University, 1015 Lausanne, Switzerland
- Raphaël Girardet: Department of Radiology, South Metropolitan Health Service, Murdoch, WA 6150, Australia
2
Núñez L, Ferreira C, Mojtahed A, Lamb H, Cappio S, Husainy MA, Dennis A, Pansini M. Assessing the performance of AI-assisted technicians in liver segmentation, Couinaud division, and lesion detection: a pilot study. Abdom Radiol (NY) 2024; 49:4264-4272. PMID: 39123052; PMCID: PMC11522103; DOI: 10.1007/s00261-024-04507-1. Received: 04/15/2024; Revised: 07/16/2024; Accepted: 07/21/2024; Indexed: 08/12/2024.
Abstract
BACKGROUND In patients with primary and secondary liver cancer, the number and sizes of lesions, their locations within the Couinaud segments, and the volume and health status of the future liver remnant are key inputs for treatment planning. Currently this assessment is performed manually, generally by trained radiologists, whose workload is growing inexorably. Integrating artificial intelligence (AI) and non-radiologist personnel into the workflow could address the increasing workload without sacrificing accuracy. This study evaluated the accuracy of non-radiologist technicians in liver cancer imaging compared with radiologists, both assisted by AI. METHODS Non-contrast T1-weighted MRI data from 18 colorectal liver metastasis patients were analyzed using an AI-enabled decision support tool that enables technicians without radiology training to perform key liver measurements. Three experienced non-radiologist operators and three radiologists performed whole liver segmentation, Couinaud segment segmentation, and the detection and measurement of lesions, aided by AI-generated delineations. Agreement between radiologists and non-radiologists was assessed using the intraclass correlation coefficient (ICC). Two additional radiologists adjudicated any lesion detection discrepancies. RESULTS Whole liver volume showed high agreement between the non-radiologist and radiologist groups (ICC = 0.99). The Couinaud segment volumetry ICC ranged from 0.77 to 0.96. Both groups identified the same 41 lesions; the non-radiologist group additionally identified seven structures that the adjudicators confirmed as lesions. Agreement was 90% for lesion diameter categorization and 91.9% for Couinaud localization. Within-group variability was comparable for lesion measurements.
CONCLUSION With AI assistance, non-radiologist experienced operators showed good agreement with radiologists for quantifying whole liver volume, Couinaud segment volume, and the detection and measurement of lesions in patients with known liver cancer. This AI-assisted non-radiologist approach has potential to reduce the stress on radiologists without compromising accuracy.
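The agreement statistic used here, the intraclass correlation coefficient, is computed from a subjects-by-raters matrix; below is a sketch of ICC(2,1) (two-way random effects, absolute agreement, single rater) on hypothetical volumes. The specific ICC form and the data are assumptions for illustration; the abstract does not state which form the authors used.

```python
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """ICC(2,1) from a (n_subjects, k_raters) matrix via two-way ANOVA mean squares."""
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)   # per-subject means
    col_means = scores.mean(axis=0)   # per-rater means
    ssr = k * ((row_means - grand) ** 2).sum()   # between-subjects sum of squares
    ssc = n * ((col_means - grand) ** 2).sum()   # between-raters sum of squares
    sse = ((scores - grand) ** 2).sum() - ssr - ssc
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical whole-liver volumes (mL): one column per reader group
vols = np.array([[1500.0, 1512.0],
                 [1780.0, 1769.0],
                 [1320.0, 1331.0],
                 [1615.0, 1624.0]])
```

With the small inter-reader differences above relative to the between-patient spread, the ICC comes out close to 1, mirroring the near-perfect whole-liver agreement reported.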
Affiliation(s)
- Luis Núñez, Carlos Ferreira, Andrea Dennis: Perspectum Ltd., Gemini One, 5520 John Smith Drive, Oxford, OX4 2LL, UK
- Amirkasra Mojtahed: Division of Abdominal Imaging, Harvard Medical School, Massachusetts General Hospital, Boston, MA, USA
- Hildo Lamb: Department of Radiology, Leiden University Medical Centre, Leiden, The Netherlands
- Stefano Cappio: Clinica Di Radiologia EOC, Istituto Di Imaging Della Svizzera Italiana (IIMSI), Lugano, Switzerland
- Mohammad Ali Husainy: Department of Radiology, Oxford University Hospitals NHS Foundation Trust, Oxford, UK
- Michele Pansini: Department of Radiology, Oxford University Hospitals NHS Foundation Trust, Oxford, UK; Clinica Di Radiologia EOC, Istituto Di Imaging Della Svizzera Italiana (IIMSI), Lugano, Switzerland
3
Baldini G, Hosch R, Schmidt CS, Borys K, Kroll L, Koitka S, Haubold P, Pelka O, Nensa F, Haubold J. Addressing the Contrast Media Recognition Challenge: A Fully Automated Machine Learning Approach for Predicting Contrast Phases in CT Imaging. Invest Radiol 2024; 59:635-645. PMID: 38436405; DOI: 10.1097/rli.0000000000001071. Indexed: 03/05/2024.
Abstract
OBJECTIVES Accurately acquiring and assigning different contrast-enhanced phases in computed tomography (CT) is relevant for clinicians and for artificial intelligence orchestration to select the most appropriate series for analysis. However, this information is commonly extracted from the CT metadata, which is often wrong. This study aimed at developing an automatic pipeline for classifying intravenous (IV) contrast phases and additionally for identifying contrast media in the gastrointestinal tract (GIT). MATERIALS AND METHODS This retrospective study used 1200 CT scans collected at the investigating institution between January 4, 2016 and September 12, 2022, and 240 CT scans from multiple centers from The Cancer Imaging Archive for external validation. The open-source segmentation algorithm TotalSegmentator was used to identify regions of interest (pulmonary artery, aorta, stomach, portal/splenic vein, liver, portal vein/hepatic veins, inferior vena cava, duodenum, small bowel, colon, left/right kidney, urinary bladder), and machine learning classifiers were trained with 5-fold cross-validation to classify IV contrast phases (noncontrast, pulmonary arterial, arterial, venous, and urographic) and GIT contrast enhancement. The performance of the ensembles was evaluated using the receiver operating characteristic area under the curve (AUC) and 95% confidence intervals (CIs). RESULTS For the IV phase classification task, the following AUC scores were obtained for the internal test set: 99.59% [95% CI, 99.58-99.63] for the noncontrast phase, 99.50% [95% CI, 99.49-99.52] for the pulmonary-arterial phase, 99.13% [95% CI, 99.10-99.15] for the arterial phase, 99.8% [95% CI, 99.79-99.81] for the venous phase, and 99.7% [95% CI, 99.68-99.7] for the urographic phase. For the external dataset, a mean AUC of 97.33% [95% CI, 97.27-97.35] and 97.38% [95% CI, 97.34-97.41] was achieved for all contrast phases for the first and second annotators, respectively. 
Contrast media in the GIT could be identified with an AUC of 99.90% [95% CI, 99.89-99.9] in the internal dataset, whereas in the external dataset, an AUC of 99.73% [95% CI, 99.71-99.73] and 99.31% [95% CI, 99.27-99.33] was achieved with the first and second annotator, respectively. CONCLUSIONS The integration of open-source segmentation networks and classifiers effectively classified contrast phases and identified GIT contrast enhancement using anatomical landmarks.
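The pipeline's idea, deriving simple intensity features from segmented anatomical ROIs and feeding them to a classifier, can be sketched as follows. The ROI choices, Hounsfield-unit class centres, and the random-forest classifier are illustrative assumptions, not the study's TotalSegmentator-based ensemble:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical mean attenuation (HU) in three segmented ROIs per scan:
# [aorta, portal vein, renal pelvis]. Class centres are made up for the sketch.
centres = {
    "arterial":   (300.0,  90.0,  40.0),
    "venous":     (140.0, 160.0,  60.0),
    "urographic": (100.0, 110.0, 400.0),
}
X, y = [], []
for phase, c in centres.items():
    X.append(rng.normal(loc=c, scale=15.0, size=(100, 3)))  # 100 synthetic scans per phase
    y += [phase] * 100
X = np.vstack(X)

# Train a classifier on the ROI features, as the study does with its organ-wise features
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
```

A scan with a bright aorta and unenhanced collecting system (e.g. features `[310, 85, 45]`) is then labeled arterial, while a bright renal pelvis drives the urographic label; the real system works analogously but over many more organs and an ensemble of classifiers.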
Affiliation(s)
- Giulia Baldini
- From the Institute of Interventional and Diagnostic Radiology and Neuroradiology, University Hospital Essen, Essen, Germany (G.B., R.H., K.B., L.K., S.K., F.N., J.H.); Institute for Artificial Intelligence in Medicine, University Hospital Essen, Essen, Germany (G.B., R.H., C.S.S., K.B., L.K., S.K., O.P., F.N., J.H.); Institute for Transfusion Medicine, University Hospital Essen, Essen, Germany (C.S.S.); Department of Diagnostic and Interventional Radiology, Kliniken Essen-Mitte, Essen, Germany (P.H.); and Data Integration Center, Central IT Department, University Hospital Essen, Essen, Germany (O.P., F.N.)
4
Jeon SK, Joo I, Park J, Kim JM, Park SJ, Yoon SH. Fully-automated multi-organ segmentation tool applicable to both non-contrast and post-contrast abdominal CT: deep learning algorithm developed using dual-energy CT images. Sci Rep 2024; 14:4378. PMID: 38388824; PMCID: PMC10883917; DOI: 10.1038/s41598-024-55137-y. Received: 08/25/2023; Accepted: 02/20/2024; Indexed: 02/24/2024. Open access.
Abstract
A novel 3D nnU-Net-based algorithm was developed for fully-automated multi-organ segmentation in abdominal CT, applicable to both non-contrast and post-contrast images. The algorithm was trained using dual-energy CT (DECT)-obtained portal venous phase (PVP) and spatiotemporally-matched virtual non-contrast images, and tested using a single-energy (SE) CT dataset comprising PVP and true non-contrast (TNC) images. The algorithm showed robust accuracy in segmenting the liver, spleen, right kidney (RK), and left kidney (LK), with mean Dice similarity coefficients (DSCs) exceeding 0.94 for each organ, regardless of contrast enhancement. However, pancreas segmentation demonstrated slightly lower performance, with mean DSCs of around 0.8. In organ volume estimation, the algorithm demonstrated excellent agreement with ground-truth measurements for the liver, spleen, RK, and LK (intraclass correlation coefficients [ICCs] > 0.95), while the pancreas showed good agreement (ICC = 0.792 in SE-PVP, 0.840 in TNC). Accurate volume estimation within a 10% deviation from ground truth was achieved in over 90% of cases involving the liver, spleen, RK, and LK. These findings indicate the efficacy of our 3D nnU-Net-based algorithm, developed using DECT images, which provides precise segmentation of the liver, spleen, RK, and LK in both non-contrast and post-contrast CT images, enabling reliable organ volumetry, albeit with relatively reduced performance for the pancreas.
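Organ volume and the "within 10% of ground truth" criterion follow directly from a segmentation mask and the voxel spacing; a minimal sketch with a hypothetical mask and spacing (not the study's data):

```python
import numpy as np

def organ_volume_ml(mask, spacing_mm):
    """Organ volume in millilitres from a binary voxel mask and (x, y, z) spacing in mm."""
    voxel_ml = spacing_mm[0] * spacing_mm[1] * spacing_mm[2] / 1000.0  # mm^3 -> mL
    return float(mask.sum() * voxel_ml)

def within_10_percent(estimate, ground_truth):
    """Accurate-volume criterion: deviation no more than 10% of the ground truth."""
    return abs(estimate - ground_truth) <= 0.10 * ground_truth

# Toy "organ": a 20x20x20-voxel cube at 1 x 1 x 5 mm spacing -> 8000 voxels, 40 mL
mask = np.zeros((40, 40, 40), dtype=np.uint8)
mask[10:30, 10:30, 10:30] = 1
```

With this spacing each voxel is 5 mm³, so the cube's volume is 8000 × 0.005 = 40 mL; an estimate of 37 mL would count as accurate under the 10% rule, while 30 mL would not.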
Affiliation(s)
- Sun Kyung Jeon, Junghoan Park: Department of Radiology, Seoul National University Hospital, Seoul National University College of Medicine, 101 Daehak-ro, Jongno-gu, Seoul, 03080, Korea; Department of Radiology, Seoul National University College of Medicine, Seoul, Korea
- Ijin Joo: Department of Radiology, Seoul National University Hospital, Seoul National University College of Medicine, 101 Daehak-ro, Jongno-gu, Seoul, 03080, Korea; Department of Radiology, Seoul National University College of Medicine, Seoul, Korea; Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul National University Hospital, Seoul, Korea
- Soon Ho Yoon: Department of Radiology, Seoul National University Hospital, Seoul National University College of Medicine, 101 Daehak-ro, Jongno-gu, Seoul, 03080, Korea; Department of Radiology, Seoul National University College of Medicine, Seoul, Korea; MEDICALIP. Co. Ltd., Seoul, Korea
5
Korfiatis P, Suman G, Patnam NG, Trivedi KH, Karbhari A, Mukherjee S, Cook C, Klug JR, Patra A, Khasawneh H, Rajamohan N, Fletcher JG, Truty MJ, Majumder S, Bolan CW, Sandrasegaran K, Chari ST, Goenka AH. Automated Artificial Intelligence Model Trained on a Large Data Set Can Detect Pancreas Cancer on Diagnostic Computed Tomography Scans As Well As Visually Occult Preinvasive Cancer on Prediagnostic Computed Tomography Scans. Gastroenterology 2023; 165:1533-1546.e4. PMID: 37657758; PMCID: PMC10843414; DOI: 10.1053/j.gastro.2023.08.034. Received: 03/24/2023; Revised: 08/13/2023; Accepted: 08/17/2023; Indexed: 09/03/2023.
Abstract
BACKGROUND & AIMS The aims of our case-control study were (1) to develop an automated 3-dimensional (3D) convolutional neural network (CNN) for detection of pancreatic ductal adenocarcinoma (PDA) on diagnostic computed tomography scans (CTs), (2) to evaluate its generalizability on multi-institutional public data sets, (3) to assess its utility as a potential screening tool using a simulated cohort with high pretest probability, and (4) to assess its ability to detect visually occult preinvasive cancer on prediagnostic CTs. METHODS A 3D-CNN classification system was trained using algorithmically generated bounding boxes and pancreatic masks on a curated data set of 696 portal phase diagnostic CTs with PDA and 1080 control images with a nonneoplastic pancreas. The model was evaluated on (1) an intramural hold-out test subset (409 CTs with PDA, 829 controls); (2) a simulated cohort with a case-control distribution that matched the risk of PDA in glycemically defined new-onset diabetes and an Enriching New-Onset Diabetes for Pancreatic Cancer score ≥3; (3) multi-institutional public data sets (194 CTs with PDA, 80 controls); and (4) a cohort of 100 prediagnostic CTs (i.e., CTs incidentally acquired 3-36 months before clinical diagnosis of PDA) without a focal mass, and 134 controls. RESULTS Of the CTs in the intramural test subset, 798 (64%) were from other hospitals. The model correctly classified 360 CTs (88%) with PDA and 783 control CTs (94%), with a mean accuracy of 0.92 (95% CI, 0.91-0.94), area under the receiver operating characteristic (AUROC) curve of 0.97 (95% CI, 0.96-0.98), sensitivity of 0.88 (95% CI, 0.85-0.91), and specificity of 0.95 (95% CI, 0.93-0.96). Activation areas on heat maps overlapped with the tumor in 350 of 360 CTs (97%).
Performance was high across tumor stages (sensitivity of 0.80, 0.87, 0.95, and 1.0 on T1 through T4 stages, respectively), comparable for hypodense vs isodense tumors (sensitivity: 0.90 vs 0.82) and across age, sex, CT slice thicknesses, and vendors (all P > .05), and generalizable on both the simulated cohort (accuracy, 0.95 [95% CI, 0.94-0.95]; AUROC curve, 0.97 [95% CI, 0.94-0.99]) and public data sets (accuracy, 0.86 [95% CI, 0.82-0.90]; AUROC curve, 0.90 [95% CI, 0.86-0.95]). Despite being exclusively trained on diagnostic CTs with larger tumors, the model could detect occult PDA on prediagnostic CTs (accuracy, 0.84 [95% CI, 0.79-0.88]; AUROC curve, 0.91 [95% CI, 0.86-0.94]; sensitivity, 0.75 [95% CI, 0.67-0.84]; and specificity, 0.90 [95% CI, 0.85-0.95]) at a median of 475 days (range, 93-1082 days) before clinical diagnosis. CONCLUSIONS This automated artificial intelligence model trained on a large and diverse data set shows high accuracy and generalizable performance for detection of PDA on diagnostic CTs as well as for visually occult PDA on prediagnostic CTs. Prospective validation with blood-based biomarkers is warranted to assess the potential for early detection of sporadic PDA in high-risk individuals.
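As a quick arithmetic check, the headline test-subset metrics follow from the stated counts (360 of 409 PDA CTs and 783 of 829 controls correct):

```python
# Reported intramural test-subset counts
tp, fn = 360, 409 - 360   # PDA CTs: correct / missed
tn, fp = 783, 829 - 783   # control CTs: correct / false positives

sensitivity = tp / (tp + fn)                 # 360/409  ~ 0.88
specificity = tn / (tn + fp)                 # 783/829  ~ 0.94
accuracy = (tp + tn) / (tp + fn + tn + fp)   # 1143/1238 ~ 0.92
```

These raw ratios reproduce the reported point estimates to two decimals (the abstract's values are means with bootstrap-style confidence intervals, so small rounding differences are expected).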
Affiliation(s)
- Garima Suman, Hala Khasawneh, Ajit H Goenka: Department of Radiology, Mayo Clinic, Rochester, Minnesota
- Cole Cook, Jason R Klug: Division of Medical Imaging Technology Services, Mayo Clinic, Rochester, Minnesota
- Anurima Patra: Department of Radiology, Tata Medical Center, Kolkata, India
- Mark J Truty: Department of Surgery, Mayo Clinic, Rochester, Minnesota
- Shounak Majumder, Suresh T Chari: Department of Gastroenterology, Mayo Clinic, Rochester, Minnesota
6
Mukherjee S, Korfiatis P, Khasawneh H, Rajamohan N, Patra A, Suman G, Singh A, Thakkar J, Patnam NG, Trivedi KH, Karbhari A, Chari ST, Truty MJ, Halfdanarson TR, Bolan CW, Sandrasegaran K, Majumder S, Goenka AH. Bounding box-based 3D AI model for user-guided volumetric segmentation of pancreatic ductal adenocarcinoma on standard-of-care CTs. Pancreatology 2023; 23:522-529. PMID: 37296006; PMCID: PMC10676442; DOI: 10.1016/j.pan.2023.05.008. Received: 04/26/2023; Revised: 05/19/2023; Accepted: 05/20/2023; Indexed: 06/12/2023.
Abstract
OBJECTIVES To develop a bounding-box-based 3D convolutional neural network (CNN) for user-guided volumetric pancreatic ductal adenocarcinoma (PDA) segmentation. METHODS Reference segmentations were obtained on CTs (2006-2020) of treatment-naïve PDA. Images were algorithmically cropped using a tumor-centered bounding box for training a 3D nnUNet-based CNN. Three radiologists independently segmented tumors on the test subset, and these segmentations were combined with the reference segmentations using STAPLE to derive composite segmentations. Generalizability was evaluated on The Cancer Imaging Archive (TCIA) (n = 41) and Medical Segmentation Decathlon (MSD) (n = 152) datasets. RESULTS A total of 1151 patients [667 males; age: 65.3 ± 10.2 years; T1: 34, T2: 477, T3: 237, T4: 403; mean (range) tumor diameter: 4.34 (1.1-12.6) cm] were randomly divided between training/validation (n = 921) and test subsets (n = 230; 75% from other institutions). The model had a high DSC (mean ± SD) against reference segmentations (0.84 ± 0.06), comparable to its DSC against composite segmentations (0.84 ± 0.11, p = 0.52). Model-predicted and reference tumor volumes were comparable (mean ± SD) (29.1 ± 42.2 cc versus 27.1 ± 32.9 cc, p = 0.69, CCC = 0.93). Inter-reader variability was high (mean DSC 0.69 ± 0.16), especially for smaller and isodense tumors. Conversely, the model's performance was comparably high across tumor stages, volumes, and densities (p > 0.05). The model was resilient to different tumor locations, status of the pancreatic/biliary ducts, pancreatic atrophy, CT vendors, and slice thicknesses, as well as to the epicenter and dimensions of the bounding box (p > 0.05). Performance was generalizable on the MSD (DSC: 0.82 ± 0.06) and TCIA (DSC: 0.84 ± 0.08) datasets.
CONCLUSION A computationally efficient bounding box-based AI model developed on a large and diverse dataset shows high accuracy, generalizability, and robustness to clinically encountered variations for user-guided volumetric PDA segmentation including for small and isodense tumors. CLINICAL RELEVANCE AI-driven bounding box-based user-guided PDA segmentation offers a discovery tool for image-based multi-omics models for applications such as risk-stratification, treatment response assessment, and prognostication, which are urgently needed to customize treatment strategies to the unique biological profile of each patient's tumor.
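The tumor-centered cropping step described in METHODS can be sketched as below; the margin size, arrays, and exact cropping rule are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def tumor_bounding_box_crop(image: np.ndarray, tumor_mask: np.ndarray,
                            margin: int = 8) -> np.ndarray:
    """Crop a CT volume to the tumor's bounding box, padded by a voxel margin."""
    coords = np.argwhere(tumor_mask > 0)                 # (n_voxels, 3) tumor indices
    lo = np.maximum(coords.min(axis=0) - margin, 0)      # clip at volume edges
    hi = np.minimum(coords.max(axis=0) + 1 + margin, image.shape)
    return image[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]

# Toy volume with a 10 x 12 x 12-voxel "tumor"
vol = np.zeros((64, 64, 64), dtype=np.float32)
mask = np.zeros_like(vol)
mask[20:30, 24:36, 28:40] = 1
crop = tumor_bounding_box_crop(vol, mask, margin=8)
```

Restricting training and inference to such a crop is what makes the approach user-guided (the user supplies the rough box) and computationally cheap, since the network never sees the full volume.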
Affiliation(s)
- Sovanlal Mukherjee, Panagiotis Korfiatis, Hala Khasawneh, Naveen Rajamohan, Anurima Patra, Garima Suman, Aparna Singh, Jay Thakkar, Nandakumar G Patnam, Kamaxi H Trivedi, Aashna Karbhari, Ajit H Goenka: Department of Radiology, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA
- Suresh T Chari: Department of Gastroenterology, Hepatology and Nutrition, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX, 77030, USA
- Mark J Truty: Department of Surgery, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA
- Candice W Bolan: Department of Radiology, Mayo Clinic, 4500 San Pablo Rd S, Jacksonville, FL, 32224, USA
- Kumar Sandrasegaran: Department of Radiology, Mayo Clinic, 13400 E Shea Blvd, Scottsdale, AZ, 85259, USA
- Shounak Majumder: Department of Gastroenterology, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA
7
Ershadi MM, Rise ZR. Fusing clinical and image data for detecting the severity level of hospitalized symptomatic COVID-19 patients using hierarchical model. Research on Biomedical Engineering 2023; 39:209-232. PMCID: PMC9957693; DOI: 10.1007/s42600-023-00268-w. Received: 03/22/2022; Accepted: 02/08/2023; Indexed: 02/05/2024.
Abstract
Purpose Based on medical reports, it is difficult to quickly determine the severity level of hospitalized symptomatic COVID-19 patients from their features. Moreover, according to physicians' knowledge, COVID-19 patients at different levels share common features alongside level-specific ones, which makes diagnosis difficult. For this purpose, a hierarchical model is proposed in this paper based on experts' knowledge, fuzzy C-means (FCM) clustering, and an adaptive neuro-fuzzy inference system (ANFIS) classifier. Methods Experts consider a particular set of features for different groups of COVID-19 patients when devising treatment plans; accordingly, the structure of the proposed hierarchical model was designed based on experts' knowledge. In the proposed model, clustering is first applied to the patients' data to determine clusters, and a classifier is then learned for each cluster in a hierarchical model. Given the common and special features of the different patient groups, FCM was chosen as the clustering method, and ANFIS was chosen as the classifier because it outperformed the alternatives. FCM computes the membership degree of each patient's data with respect to the common and special features of the different clusters, reinforcing the ANFIS classifier. ANFIS then identifies whether a hospitalized symptomatic COVID-19 patient needs the ICU and whether the patient is in the end stage (mortality target class). Two real datasets of COVID-19 patients are analyzed using the proposed model: one contains only clinical features, while the other contains both clinical and image features, from which appropriate features are extracted using image processing and deep learning methods. Results According to the results and a statistical test, the proposed model outperforms the other classifiers evaluated.
Its accuracy based on the clinical features of the first and second datasets is 92% and 90%, respectively, for the ICU target class; features extracted from the image data raise the accuracy to 94%. Conclusion The model's advantage is even larger for the mortality target class, compared with the other classifiers in this paper and in the literature, and the model is compatible with both kinds of dataset: clinical data alone, and combined clinical and image data. Highlights
- A new hierarchical model is proposed using ANFIS classifiers and the FCM clustering method. Its structure is designed based on experts' knowledge and the real medical process; FCM reinforces the ANFIS learning phase using the features of COVID-19 patients.
- Two real datasets of COVID-19 patients are studied, one of which contains both clinical and image data; appropriate features are extracted from its image data and combined with the available meaningful clinical data. Two severity targets for hospitalized symptomatic COVID-19 patients are considered: the need for ICU admission, and whether the patient is in the end stage.
- Well-known classification methods, including case-based reasoning (CBR), decision tree, convolutional neural networks (CNN), K-nearest neighbors (KNN), learning vector quantization (LVQ), multi-layer perceptron (MLP), Naive Bayes (NB), radial basis function network (RBF), support vector machine (SVM), recurrent neural networks (RNN), fuzzy type-I inference system (FIS), and ANFIS, are designed for these datasets, and their results are analyzed over different random train/test splits.
- Because the datasets are unbalanced, several performance measures (accuracy, sensitivity, specificity, precision, F-score, and G-mean) are compared to find the best classifier; the ANFIS classifiers obtain the best results on both datasets.
- To reduce computational time, the effect of Principal Component Analysis (PCA) feature reduction on the performance of the proposed model and the other classifiers is also studied. According to the results and a statistical test, the proposed hierarchical model performs best among the classifiers considered.
Supplementary Information The online version contains supplementary material available at 10.1007/s42600-023-00268-w.
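The FCM membership degrees that reinforce the ANFIS classifiers come from the standard fuzzy C-means membership formula; a minimal sketch (fixed cluster centres assumed, no ANFIS stage):

```python
import numpy as np

def fcm_memberships(X: np.ndarray, centers: np.ndarray, m: float = 2.0) -> np.ndarray:
    """Fuzzy C-means memberships u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1)).

    X: (n_points, n_features), centers: (n_clusters, n_features), m: fuzzifier > 1.
    Returns (n_points, n_clusters); each row sums to 1.
    """
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)  # point-centre distances
    d = np.fmax(d, 1e-12)                    # guard against a point sitting on a centre
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=2)

# Toy patient feature vectors and two hypothetical cluster centres
X = np.array([[0.0, 0.0], [10.0, 10.0], [9.0, 9.0]])
centers = np.array([[0.0, 0.0], [10.0, 10.0]])
u = fcm_memberships(X, centers)
```

Unlike hard clustering, every patient gets a graded degree of belonging to every cluster, which is what lets the downstream per-cluster classifiers exploit features shared across patient groups.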
Affiliation(s)
- Mohammad Mahdi Ershadi, Zeinab Rahimi Rise: Department of Industrial Engineering and Management Systems, Amirkabir University of Technology, No. 350, Hafez Ave, Valiasr Square, Tehran, 1591634311, Iran
8
Khasawneh H, Patra A, Rajamohan N, Suman G, Klug J, Majumder S, Chari ST, Korfiatis P, Goenka AH. Volumetric Pancreas Segmentation on Computed Tomography: Accuracy and Efficiency of a Convolutional Neural Network Versus Manual Segmentation in 3D Slicer in the Context of Interreader Variability of Expert Radiologists. J Comput Assist Tomogr 2022; 46:841-847. PMID: 36055122; DOI: 10.1097/rct.0000000000001374. Indexed: 11/26/2022.
Abstract
PURPOSE This study aimed to compare the accuracy and efficiency of a convolutional neural network (CNN)-enhanced workflow for pancreas segmentation versus radiologists in the context of interreader reliability. METHODS Volumetric pancreas segmentations on a data set of 294 portal venous computed tomographies were performed by 3 radiologists (R1, R2, and R3) and by a CNN. The CNN segmentations were reviewed and, if needed, corrected ("corrected CNN" [c-CNN] segmentations) by radiologists. Ground truth was obtained from the radiologists' manual segmentations using the simultaneous truth and performance level estimation (STAPLE) algorithm. Interreader reliability and the model's accuracy were evaluated with the Dice-Sørensen coefficient (DSC) and Jaccard coefficient (JC). Equivalence was determined using two one-sided tests. CNN segmentations below the 25th percentile DSC were reviewed to evaluate segmentation errors. Time for manual segmentation and c-CNN was compared. RESULTS Pancreas volumes from the 3 sets of segmentations (manual, CNN, and c-CNN) were noninferior to STAPLE-derived volumes [76.6 cm³ (20.2 cm³), P < 0.05]. Interreader reliability was high (mean [SD] DSC between R2-R1, 0.87 [0.04]; R3-R1, 0.90 [0.05]; R2-R3, 0.87 [0.04]). CNN segmentations were highly accurate (DSC, 0.88 [0.05]; JC, 0.79 [0.07]) and required minimal-to-no corrections (c-CNN: DSC, 0.89 [0.04]; JC, 0.81 [0.06]; equivalence, P < 0.05). Undersegmentation (n = 47 [64%]) was common in the 73 CNN segmentations below the 25th percentile DSC, but there were no major errors. Total inference time (minutes) for the CNN was 1.2 (0.3). The average time (minutes) taken by radiologists for c-CNN (0.6 [0.97]) was substantially lower than for manual segmentation (3.37 [1.47]; savings of 77.9%-87%, P < 0.0001).
CONCLUSIONS Convolutional neural network-enhanced workflow provides high accuracy and efficiency for volumetric pancreas segmentation on computed tomography.
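The two overlap metrics used in this entry can be sketched compactly over voxel index sets; the mask representation and the toy masks below are illustrative assumptions, not data from the study:

```python
# Minimal sketch of the Dice-Sorenson coefficient (DSC) and Jaccard
# coefficient (JC). Masks are given as collections of voxel coordinates
# (an assumption; any hashable index works).

def dice_jaccard(mask_a, mask_b):
    a, b = set(mask_a), set(mask_b)
    inter = len(a & b)
    dsc = 2.0 * inter / (len(a) + len(b))  # 2|A∩B| / (|A| + |B|)
    jc = inter / len(a | b)                # |A∩B| / |A∪B|
    return dsc, jc

# Two 4-voxel masks sharing 3 voxels:
dsc, jc = dice_jaccard(
    [(0, 0), (0, 1), (1, 0), (1, 1)],
    [(0, 1), (1, 0), (1, 1), (2, 1)],
)
# dsc = 2*3/8 = 0.75; jc = 3/5 = 0.6
```

The two metrics are monotonically related (JC = DSC / (2 - DSC)), so studies that report both, as here, are conveying one underlying overlap measurement in two conventions.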
Affiliation(s)
- Hala Khasawneh
- From the Department of Radiology, Mayo Clinic, Rochester, MN
- Anurima Patra
- Department of Radiology, Tata Medical Center, Kolkata, India
- Garima Suman
- From the Department of Radiology, Mayo Clinic, Rochester, MN
- Jason Klug
- From the Department of Radiology, Mayo Clinic, Rochester, MN

9
Mukherjee S, Patra A, Khasawneh H, Korfiatis P, Rajamohan N, Suman G, Majumder S, Panda A, Johnson MP, Larson NB, Wright DE, Kline TL, Fletcher JG, Chari ST, Goenka AH. Radiomics-based Machine-learning Models Can Detect Pancreatic Cancer on Prediagnostic Computed Tomography Scans at a Substantial Lead Time Before Clinical Diagnosis. Gastroenterology 2022; 163:1435-1446.e3. [PMID: 35788343 DOI: 10.1053/j.gastro.2022.06.066] [Citation(s) in RCA: 84] [Impact Index Per Article: 28.0] [Received: 03/10/2022] [Revised: 06/20/2022] [Accepted: 06/22/2022] [Indexed: 02/01/2023]
Abstract
BACKGROUND & AIMS Our purpose was to detect pancreatic ductal adenocarcinoma (PDAC) at the prediagnostic stage (3-36 months before clinical diagnosis) using radiomics-based machine-learning (ML) models, and to compare performance against radiologists in a case-control study. METHODS Volumetric pancreas segmentation was performed on prediagnostic computed tomography scans (CTs) (median interval between CT and PDAC diagnosis: 398 days) of 155 patients and an age-matched cohort of 265 subjects with normal pancreas. A total of 88 first-order and gray-level radiomic features were extracted and 34 features were selected through the least absolute shrinkage and selection operator-based feature selection method. The dataset was randomly divided into training (292 CTs: 110 prediagnostic and 182 controls) and test subsets (128 CTs: 45 prediagnostic and 83 controls). Four ML classifiers, k-nearest neighbor (KNN), support vector machine (SVM), random forest (RF), and extreme gradient boosting (XGBoost), were evaluated. Specificity of the model with the highest accuracy was further validated on an independent internal dataset (n = 176) and the public National Institutes of Health dataset (n = 80). Two radiologists (R4 and R5) independently evaluated the pancreas on a 5-point diagnostic scale. RESULTS Median (range) time between prediagnostic CTs of the test subset and PDAC diagnosis was 386 (97-1092) days. SVM had the highest sensitivity (mean; 95% confidence interval) (95.5; 85.5-100.0), specificity (90.3; 84.3-91.5), F1-score (89.5; 82.3-91.7), area under the curve (AUC) (0.98; 0.94-0.98), and accuracy (92.2%; 86.7-93.7) for classification of CTs into prediagnostic versus normal. All 3 other ML models, KNN, RF, and XGBoost, had comparable AUCs (0.95, 0.95, and 0.96, respectively). The high specificity of SVM was generalizable to both the independent internal (92.6%) and the National Institutes of Health dataset (96.2%).
In contrast, interreader radiologist agreement was only fair (Cohen's kappa 0.3) and their mean AUC (0.66; 0.46-0.86) was lower than each of the 4 ML models (AUCs: 0.95-0.98) (P < .001). Radiologists also recorded false positive indirect findings of PDAC in control subjects (n = 83) (7% R4, 18% R5). CONCLUSIONS Radiomics-based ML models can detect PDAC from normal pancreas when it is beyond human interrogation capability at a substantial lead time before clinical diagnosis. Prospective validation and integration of such models with complementary fluid-based biomarkers has the potential for PDAC detection at a stage when surgical cure is a possibility.
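The AUC values that separate the ML models (0.95-0.98) from the radiologists (0.66) in this entry have a simple rank interpretation that can be sketched directly; the classifier scores below are invented for illustration, not values from the study:

```python
# Sketch of ROC AUC via its rank interpretation: the probability that
# a randomly chosen positive case receives a higher score than a
# randomly chosen negative one, with ties counted as half a win.

def roc_auc(scores_pos, scores_neg):
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical classifier scores for 4 prediagnostic CTs and 4 controls:
auc = roc_auc([0.9, 0.8, 0.7, 0.4], [0.6, 0.3, 0.2, 0.1])
# 15 of the 16 positive/negative pairs are correctly ordered -> 0.9375
```

This pairwise formulation is equivalent to the Mann-Whitney U statistic divided by the number of positive/negative pairs, which is why AUC is robust to any monotone rescaling of the classifier's scores.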
Affiliation(s)
- Anurima Patra
- Department of Radiology, Tata Medical Centre, Kolkata, India
- Hala Khasawneh
- Department of Radiology, Mayo Clinic, Rochester, Minnesota
- Garima Suman
- Department of Radiology, Mayo Clinic, Rochester, Minnesota
- Shounak Majumder
- Department of Gastroenterology, Mayo Clinic, Rochester, Minnesota
- Ananya Panda
- Department of Radiology, Mayo Clinic, Rochester, Minnesota
- Matthew P Johnson
- Department of Biomedical Statistics and Informatics, Mayo Clinic, Rochester, Minnesota
- Nicholas B Larson
- Department of Radiology, Mayo Clinic, Rochester, Minnesota; Department of Biomedical Statistics and Informatics, Mayo Clinic, Rochester, Minnesota
- Suresh T Chari
- Department of Gastroenterology, Mayo Clinic, Rochester, Minnesota; Department of Gastroenterology, Hepatology, and Nutrition, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Ajit H Goenka
- Department of Radiology, Mayo Clinic, Rochester, Minnesota.

10
Wright DE, Mukherjee S, Patra A, Khasawneh H, Korfiatis P, Suman G, Chari ST, Kudva YC, Kline TL, Goenka AH. Radiomics-based machine learning (ML) classifier for detection of type 2 diabetes on standard-of-care abdomen CTs: a proof-of-concept study. Abdom Radiol (NY) 2022; 47:3806-3816. [PMID: 36085379 DOI: 10.1007/s00261-022-03668-1] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Received: 07/06/2022] [Revised: 08/26/2022] [Accepted: 08/27/2022] [Indexed: 01/18/2023]
Abstract
PURPOSE To determine whether a pancreas radiomics-based AI model can detect the CT imaging signature of type 2 diabetes (T2D). METHODS A total of 107 radiomic features were extracted from the volumetrically segmented normal pancreas in 422 T2D patients and 456 age-matched controls. The dataset was randomly split into training (300 T2D, 300 control CTs) and test subsets (122 T2D, 156 control CTs). An XGBoost model trained on 10 features selected through a top-K-based selection method and optimized through threefold cross-validation on the training subset was evaluated on the test subset. RESULTS The model correctly classified 73 (60%) T2D patients and 96 (62%) controls, yielding F1-score, sensitivity, specificity, precision, and AUC of 0.57, 0.62, 0.61, 0.55, and 0.65, respectively. The model's performance was equivalent across gender, CT slice thicknesses, and CT vendors (p values > 0.05). There was no difference between correctly classified versus misclassified patients in the mean (range) T2D duration [4.5 (0-15.4) versus 4.8 (0-15.7) years, p = 0.8], antidiabetic treatment [insulin (22% versus 18%), oral antidiabetics (10% versus 18%), both (41% versus 39%) (p > 0.05)], and treatment duration [5.4 (0-15) versus 5 (0-13) years, p = 0.4]. CONCLUSION A pancreas radiomics-based AI model can detect the imaging signature of T2D. Further refinement and validation are needed to evaluate its potential for opportunistic T2D detection on the millions of CTs that are performed annually.
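The test-set metrics in this entry follow directly from confusion-matrix counts. A minimal sketch using the counts stated in the abstract (73 of 122 T2D patients and 96 of 156 controls correctly classified) lands close to, though not exactly on, the rounded figures the abstract lists:

```python
# Standard binary-classification metrics from confusion counts.
# tp/fn are T2D patients (positives), tn/fp are controls (negatives);
# the counts below are taken from the abstract's reported results.

def classification_metrics(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)   # recall on T2D patients
    specificity = tn / (tn + fp)   # recall on controls
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, precision, f1

# 73 of 122 patients correct -> fn = 49; 96 of 156 controls -> fp = 60.
sens, spec, prec, f1 = classification_metrics(tp=73, fn=49, tn=96, fp=60)
# sens ~ 0.60, spec ~ 0.62, prec ~ 0.55, f1 ~ 0.57
```

Note that F1 combines precision and sensitivity only; it ignores true negatives, which is why it can sit below both sensitivity and specificity when precision is low.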
Affiliation(s)
- Darryl E Wright
- Department of Radiology, Mayo Clinic, 200 First Street SW, Charlton 1, Rochester, MN, 55905, USA
- Sovanlal Mukherjee
- Department of Radiology, Mayo Clinic, 200 First Street SW, Charlton 1, Rochester, MN, 55905, USA
- Anurima Patra
- Department of Radiology, Tata Medical Center, Kolkata, 700160, India
- Hala Khasawneh
- Department of Radiology, Mayo Clinic, 200 First Street SW, Charlton 1, Rochester, MN, 55905, USA
- Panagiotis Korfiatis
- Department of Radiology, Mayo Clinic, 200 First Street SW, Charlton 1, Rochester, MN, 55905, USA
- Garima Suman
- Department of Radiology, Mayo Clinic, 200 First Street SW, Charlton 1, Rochester, MN, 55905, USA
- Suresh T Chari
- Department of Gastroenterology, Hepatology and Nutrition, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX, 77030, USA
- Department of Gastroenterology, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA
- Yogish C Kudva
- Department of Endocrinology, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA
- Timothy L Kline
- Department of Radiology, Mayo Clinic, 200 First Street SW, Charlton 1, Rochester, MN, 55905, USA
- Ajit H Goenka
- Department of Radiology, Mayo Clinic, 200 First Street SW, Charlton 1, Rochester, MN, 55905, USA.

11
Laino ME, Ammirabile A, Lofino L, Mannelli L, Fiz F, Francone M, Chiti A, Saba L, Orlandi MA, Savevski V. Artificial Intelligence Applied to Pancreatic Imaging: A Narrative Review. Healthcare (Basel) 2022; 10:1511. [PMID: 36011168 PMCID: PMC9408381 DOI: 10.3390/healthcare10081511] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 06/20/2022] [Revised: 07/31/2022] [Accepted: 08/08/2022] [Indexed: 12/19/2022]
Abstract
The diagnosis, evaluation, and treatment planning of pancreatic pathologies usually require the combined use of different imaging modalities, mainly, computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET). Artificial intelligence (AI) has the potential to transform the clinical practice of medical imaging and has been applied to various radiological techniques for different purposes, such as segmentation, lesion detection, characterization, risk stratification, or prediction of response to treatments. The aim of the present narrative review is to assess the available literature on the role of AI applied to pancreatic imaging. Up to now, the use of computer-aided diagnosis (CAD) and radiomics in pancreatic imaging has proven to be useful for both non-oncological and oncological purposes and represents a promising tool for personalized approaches to patients. Although great developments have occurred in recent years, it is important to address the obstacles that still need to be overcome before these technologies can be implemented into our clinical routine, mainly considering the heterogeneity among studies.
Affiliation(s)
- Maria Elena Laino
- Artificial Intelligence Center, IRCCS Humanitas Research Hospital, Via Manzoni 56, Rozzano, 20089 Milan, Italy
- Angela Ammirabile
- Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy
- Department of Diagnostic and Interventional Radiology, IRCCS Humanitas Research Hospital, Via Manzoni 56, Rozzano, 20089 Milan, Italy
- Ludovica Lofino
- Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy
- Department of Diagnostic and Interventional Radiology, IRCCS Humanitas Research Hospital, Via Manzoni 56, Rozzano, 20089 Milan, Italy
- Francesco Fiz
- Nuclear Medicine Unit, Department of Diagnostic Imaging, E.O. Ospedali Galliera, 56321 Genoa, Italy
- Department of Nuclear Medicine and Clinical Molecular Imaging, University Hospital, 72074 Tübingen, Germany
- Marco Francone
- Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy
- Department of Diagnostic and Interventional Radiology, IRCCS Humanitas Research Hospital, Via Manzoni 56, Rozzano, 20089 Milan, Italy
- Arturo Chiti
- Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy
- Department of Nuclear Medicine, IRCCS Humanitas Research Hospital, Via Manzoni 56, Rozzano, 20089 Milan, Italy
- Luca Saba
- Department of Radiology, University of Cagliari, 09124 Cagliari, Italy
- Victor Savevski
- Artificial Intelligence Center, IRCCS Humanitas Research Hospital, Via Manzoni 56, Rozzano, 20089 Milan, Italy

12
Tallam H, Elton DC, Lee S, Wakim P, Pickhardt PJ, Summers RM. Fully Automated Abdominal CT Biomarkers for Type 2 Diabetes Using Deep Learning. Radiology 2022; 304:85-95. [PMID: 35380492 DOI: 10.1148/radiol.211914] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Indexed: 01/02/2023]
Abstract
Background CT biomarkers both inside and outside the pancreas can potentially be used to diagnose type 2 diabetes mellitus. Previous studies on this topic have shown significant results but were limited by manual methods and small study samples. Purpose To investigate abdominal CT biomarkers for type 2 diabetes mellitus in a large clinical data set using fully automated deep learning. Materials and Methods For external validation, noncontrast abdominal CT images were retrospectively collected from consecutive patients who underwent routine colorectal cancer screening with CT colonography from 2004 to 2016. The pancreas was segmented using a deep learning method that outputs measurements of interest, including CT attenuation, volume, fat content, and pancreas fractal dimension. Additional biomarkers assessed included visceral fat, atherosclerotic plaque, liver and muscle CT attenuation, and muscle volume. Univariable and multivariable analyses were performed, separating patients into groups based on time between type 2 diabetes diagnosis and CT date and including clinical factors such as sex, age, body mass index (BMI), BMI greater than 30 kg/m2, and height. The best set of predictors for type 2 diabetes were determined using multinomial logistic regression. Results A total of 8992 patients (mean age, 57 years ± 8 [SD]; 5009 women) were evaluated in the test set, of whom 572 had type 2 diabetes mellitus. The deep learning model had a mean Dice similarity coefficient for the pancreas of 0.69 ± 0.17, similar to the interobserver Dice similarity coefficient of 0.69 ± 0.09 (P = .92). The univariable analysis showed that patients with diabetes had, on average, lower pancreatic CT attenuation (mean, 18.74 HU ± 16.54 vs 29.99 HU ± 13.41; P < .0001) and greater visceral fat volume (mean, 235.0 mL ± 108.6 vs 130.9 mL ± 96.3; P < .0001) than those without diabetes. 
Patients with diabetes also showed a progressive decrease in pancreatic attenuation with greater duration of disease. The final multivariable model showed pairwise areas under the receiver operating characteristic curve (AUCs) of 0.81 and 0.85 between patients without and patients with diabetes who were diagnosed 0-2499 days before and after undergoing CT, respectively. In the multivariable analysis, adding clinical data did not improve upon CT-based AUC performance (AUC = 0.67 for the CT-only model vs 0.68 for the CT and clinical model). The best predictors of type 2 diabetes mellitus included intrapancreatic fat percentage, pancreatic fractal dimension, plaque severity between the L1 and L4 vertebra levels, average liver CT attenuation, and BMI. Conclusion The diagnosis of type 2 diabetes mellitus was associated with abdominal CT biomarkers, especially measures of pancreatic CT attenuation and visceral fat. © RSNA, 2022 Online supplemental material is available for this article.
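The kind of biomarker-based classifier described above can be sketched as a one-feature logistic regression fit by gradient descent. The data here are entirely synthetic and only echo the direction of the reported difference (lower pancreatic attenuation in diabetes); this is not the study's multinomial model:

```python
# Toy logistic regression on a single synthetic biomarker: pancreatic
# CT attenuation in Hounsfield units (HU). Fit by full-batch gradient
# descent on the log-loss; all numbers are illustrative.
import math

def fit_logistic(xs, ys, lr=0.5, steps=2000):
    w = b = 0.0
    n = len(xs)
    for _ in range(steps):
        grad_w = grad_b = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # predicted P(y=1)
            grad_w += (p - y) * x / n
            grad_b += (p - y) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Diabetics (y=1) given lower attenuation than controls (y=0),
# centred and scaled before fitting.
hu = [15, 16, 17, 18, 19, 20, 28, 29, 30, 31, 32, 33]
ys = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
xs = [(h - 24.0) / 10.0 for h in hu]
w, b = fit_logistic(xs, ys)
preds = [1 if 1.0 / (1.0 + math.exp(-(w * x + b))) > 0.5 else 0 for x in xs]
acc = sum(p == y for p, y in zip(preds, ys)) / len(ys)
```

The fitted weight comes out negative, matching the direction in the abstract: lower attenuation pushes the predicted probability of diabetes up. The study's actual model was multinomial over diagnosis-to-CT time windows and used many biomarkers jointly.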
Affiliation(s)
- Hima Tallam, Daniel C Elton, Sungwon Lee, Paul Wakim, Perry J Pickhardt, Ronald M Summers
- From the Department of Radiology and Imaging Sciences (H.T., D.C.E., S.L., R.M.S.) and Department of Biostatistics and Clinical Epidemiology Service (P.W.), Clinical Center, National Institutes of Health, 10 Center Dr, Bldg 10, Room 1C224D, MSC 1182, Bethesda, MD 20892-1182; and Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wis (P.J.P.)

13
Chen X, Fu R, Shao Q, Chen Y, Ye Q, Li S, He X, Zhu J. Application of artificial intelligence to pancreatic adenocarcinoma. Front Oncol 2022; 12:960056. [PMID: 35936738 PMCID: PMC9353734 DOI: 10.3389/fonc.2022.960056] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Received: 06/02/2022] [Accepted: 06/24/2022] [Indexed: 02/05/2023]
Abstract
BACKGROUND AND OBJECTIVES Pancreatic cancer (PC) is one of the deadliest cancers worldwide, although substantial advances have been made in its comprehensive treatment. The development of artificial intelligence (AI) technology has allowed its clinical applications to expand remarkably in recent years. Diverse methods and algorithms are employed by AI to extrapolate new data from clinical records to aid in the treatment of PC. In this review, we summarize AI's use in several aspects of PC diagnosis and therapy, as well as its limits and potential future research avenues. METHODS We examined the most recent research on the use of AI in PC. The articles were categorized and examined according to the medical task of their algorithm. Two search engines, PubMed and Google Scholar, were used to screen the articles. RESULTS Overall, 66 papers published in or after 2001 were selected. Of the four medical tasks (risk assessment, diagnosis, treatment, and prognosis prediction), diagnosis was the most frequently researched, and retrospective single-center studies were the most prevalent. We found that the different medical tasks and algorithms included in the reviewed studies caused the performance of their models to vary greatly. Deep learning algorithms, on the other hand, produced excellent results in all of the subdivisions studied. CONCLUSIONS AI is a promising tool for helping PC patients and may contribute to improved patient outcomes. The integration of humans and AI in clinical medicine is still in its infancy and requires the in-depth cooperation of multidisciplinary personnel.
Affiliation(s)
- Xi Chen
- Department of General Surgery, Second Affiliated Hospital Zhejiang University School of Medicine, Hangzhou, China
- Ruibiao Fu
- Department of General Surgery, Second Affiliated Hospital Zhejiang University School of Medicine, Hangzhou, China
- Qian Shao
- Department of Surgical Ward 1, Ningbo Women and Children’s Hospital, Ningbo, China
- Yan Chen
- Department of General Surgery, Second Affiliated Hospital Zhejiang University School of Medicine, Hangzhou, China
- Qinghuang Ye
- Department of General Surgery, Second Affiliated Hospital Zhejiang University School of Medicine, Hangzhou, China
- Sheng Li
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, China
- Xiongxiong He
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, China
- Jinhui Zhu
- Department of General Surgery, Second Affiliated Hospital Zhejiang University School of Medicine, Hangzhou, China
- *Correspondence: Jinhui Zhu,

14
Roeth AA, Garretson I, Beltz M, Herbold T, Schulze-Hagen M, Quaisser S, Georgens A, Reith D, Slabu I, Klink CD, Neumann UP, Linke BS. 3D-Printed Replica and Porcine Explants for Pre-Clinical Optimization of Endoscopic Tumor Treatment by Magnetic Targeting. Cancers (Basel) 2021; 13:5496. [PMID: 34771659 PMCID: PMC8583102 DOI: 10.3390/cancers13215496] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 09/05/2021] [Revised: 10/19/2021] [Accepted: 10/28/2021] [Indexed: 12/19/2022]
Abstract
Simple Summary
Animal models are often needed in cancer research, but some research questions may be answered with other models, e.g., 3D replicas of patient-specific data, as these mirror the anatomy in more detail. We, therefore, developed a simple eight-step process to fabricate a 3D replica from computed tomography (CT) data using solely open access software and described the method in detail. For evaluation, we performed experiments regarding endoscopic tumor treatment with magnetic nanoparticles by magnetic hyperthermia and local drug release. For this, the magnetic nanoparticles need to be accumulated at the tumor site via a magnetic field trap. Using the developed eight-step process, we printed a replica of a locally advanced pancreatic cancer and used it to find the best position for the magnetic field trap. In addition, we described a method to hold these magnetic field traps stably in place. The results are highly important for the development of endoscopic tumor treatment with magnetic nanoparticles, as the handling and the stable positioning of the magnetic field trap at the stomach wall in close proximity to the pancreatic tumor could be defined and practiced. Finally, the detailed description of the workflow and use of open access software allows for a wide range of possible uses.
Abstract
Background: Animal models have limitations in cancer research, especially regarding anatomy-specific questions. An example is the exact endoscopic placement of magnetic field traps for the targeting of therapeutic nanoparticles. Three-dimensional-printed human replicas may be used to overcome these pitfalls. Methods: We developed a transparent method to fabricate a patient-specific replica, allowing for a broad scope of application. As an example, we then additively manufactured the relevant organs of a patient with locally advanced pancreatic ductal adenocarcinoma.
We performed experimental design investigations for a magnetic field trap and explored the best fixation methods on an explanted porcine stomach wall. Results: We describe in detail the eight-step development of a 3D replica from CT data. To guide further users in their decisions, a morphologic box was created. Endoscopies were performed on the replica and the resulting magnetic field was investigated. The best fixation method to hold the magnetic field traps stably in place was the fixation of loops at the stomach wall with endoscopic single-use clips. Conclusions: Using only open access software, the developed method may be used for a variety of cancer-related research questions. A detailed description of the workflow allows one to produce a 3D replica for research or training purposes at low costs.
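The segmentation-to-mesh core of an eight-step CT-to-replica workflow like the one above can be sketched in miniature: threshold a voxel volume, emit each exposed voxel face as two triangles, and write ASCII STL (the printable-mesh format). This is a toy stand-in under assumed inputs, not the authors' actual open-source pipeline:

```python
# Toy voxel-surface extractor: keeps voxels at or above `threshold`
# and writes every face not shared with another solid voxel as two
# STL triangles. `volume` is nested lists indexed volume[z][y][x];
# shape, threshold, and voxel size are illustrative assumptions.

def volume_to_stl(volume, threshold=1, voxel=1.0, name="replica"):
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])

    def solid(z, y, x):
        return (0 <= z < nz and 0 <= y < ny and 0 <= x < nx
                and volume[z][y][x] >= threshold)

    # (neighbour offset in z,y,x; face corner offsets as (cz, cy, cx))
    neighbours = [
        ((-1, 0, 0), [(0, 0, 0), (0, 1, 0), (0, 1, 1), (0, 0, 1)]),
        (( 1, 0, 0), [(1, 0, 0), (1, 0, 1), (1, 1, 1), (1, 1, 0)]),
        (( 0, -1, 0), [(0, 0, 0), (0, 0, 1), (1, 0, 1), (1, 0, 0)]),
        (( 0, 1, 0), [(0, 1, 0), (1, 1, 0), (1, 1, 1), (0, 1, 1)]),
        (( 0, 0, -1), [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]),
        (( 0, 0, 1), [(0, 0, 1), (0, 1, 1), (1, 1, 1), (1, 0, 1)]),
    ]
    lines = [f"solid {name}"]
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                if not solid(z, y, x):
                    continue
                for (dz, dy, dx), corners in neighbours:
                    if solid(z + dz, y + dy, x + dx):
                        continue  # internal face, not part of the surface
                    p = [((x + cx) * voxel, (y + cy) * voxel, (z + cz) * voxel)
                         for cz, cy, cx in corners]
                    for tri in ((p[0], p[1], p[2]), (p[0], p[2], p[3])):
                        lines.append(f"  facet normal {dx} {dy} {dz}")
                        lines.append("    outer loop")
                        for vx, vy, vz in tri:
                            lines.append(f"      vertex {vx} {vy} {vz}")
                        lines.append("    endloop")
                        lines.append("  endfacet")
    lines.append(f"endsolid {name}")
    return "\n".join(lines)

# A single solid voxel exposes 6 faces -> 12 triangles.
stl = volume_to_stl([[[1]]])
```

Real pipelines use marching cubes plus smoothing rather than raw voxel faces, which is why the published workflow needs several steps between segmentation and a printable file; this sketch only shows the data flow from a thresholded volume to an STL string.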
Affiliation(s)
- Anjali A. Roeth
- Department of General, Visceral and Transplant Surgery, RWTH Aachen University Hospital, 52074 Aachen, Germany; (T.H.); (C.D.K.); (U.P.N.)
- Department of Surgery, Maastricht University Medical Center, 6229 HX Maastricht, The Netherlands
- Correspondence: ; Tel.: +49-241-80-89501
- Ian Garretson
- Department of Mechanical and Aerospace Engineering, University of California Davis, Davis, CA 95616, USA; (I.G.); (M.B.); (S.Q.); (A.G.); (B.S.L.)
- Maja Beltz
- Department of Mechanical and Aerospace Engineering, University of California Davis, Davis, CA 95616, USA; (I.G.); (M.B.); (S.Q.); (A.G.); (B.S.L.)
- Department of Electrical and Mechanical Engineering, Bonn-Rhein-Sieg University of Applied Sciences, 53757 Sankt Augustin, Germany
- Till Herbold
- Department of General, Visceral and Transplant Surgery, RWTH Aachen University Hospital, 52074 Aachen, Germany; (T.H.); (C.D.K.); (U.P.N.)
- Maximilian Schulze-Hagen
- Department of Diagnostic and Interventional Radiology, RWTH Aachen University Hospital, 52074 Aachen, Germany
- Sebastian Quaisser
- Department of Mechanical and Aerospace Engineering, University of California Davis, Davis, CA 95616, USA; (I.G.); (M.B.); (S.Q.); (A.G.); (B.S.L.)
- Department of Electrical and Mechanical Engineering, Bonn-Rhein-Sieg University of Applied Sciences, 53757 Sankt Augustin, Germany
- Alex Georgens
- Department of Mechanical and Aerospace Engineering, University of California Davis, Davis, CA 95616, USA; (I.G.); (M.B.); (S.Q.); (A.G.); (B.S.L.)
- Dirk Reith
- Department of Electrical and Mechanical Engineering, Bonn-Rhein-Sieg University of Applied Sciences, 53757 Sankt Augustin, Germany
- Ioana Slabu
- Institute of Applied Medical Engineering, Helmholtz-Institute Aachen, RWTH Aachen University, 52062 Aachen, Germany
- Christian D. Klink
- Department of General, Visceral and Transplant Surgery, RWTH Aachen University Hospital, 52074 Aachen, Germany; (T.H.); (C.D.K.); (U.P.N.)
- Ulf P. Neumann
- Department of General, Visceral and Transplant Surgery, RWTH Aachen University Hospital, 52074 Aachen, Germany; (T.H.); (C.D.K.); (U.P.N.)
- Department of Surgery, Maastricht University Medical Center, 6229 HX Maastricht, The Netherlands
- Barbara S. Linke
- Department of Mechanical and Aerospace Engineering, University of California Davis, Davis, CA 95616, USA; (I.G.); (M.B.); (S.Q.); (A.G.); (B.S.L.)

15
Enriquez JS, Chu Y, Pudakalakatti S, Hsieh KL, Salmon D, Dutta P, Millward NZ, Lurie E, Millward S, McAllister F, Maitra A, Sen S, Killary A, Zhang J, Jiang X, Bhattacharya PK, Shams S. Hyperpolarized Magnetic Resonance and Artificial Intelligence: Frontiers of Imaging in Pancreatic Cancer. JMIR Med Inform 2021; 9:e26601. [PMID: 34137725 PMCID: PMC8277399 DOI: 10.2196/26601] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Received: 12/18/2020] [Revised: 02/24/2021] [Accepted: 04/03/2021] [Indexed: 12/24/2022]
Abstract
BACKGROUND There is an unmet need for noninvasive imaging markers that can help identify the aggressive subtype(s) of pancreatic ductal adenocarcinoma (PDAC) at diagnosis and at an earlier time point, and evaluate the efficacy of therapy prior to tumor reduction. In the past few years, there have been two major developments with potential for a significant impact in establishing imaging biomarkers for PDAC and pancreatic cancer premalignancy: (1) hyperpolarized metabolic (HP)-magnetic resonance (MR), which increases the sensitivity of conventional MR by over 10,000-fold, enabling real-time metabolic measurements; and (2) applications of artificial intelligence (AI). OBJECTIVE The objective of this review was to discuss these two exciting but independent developments (HP-MR and AI) in the realm of PDAC imaging and detection from the available literature to date. METHODS A systematic review following the PRISMA extension for Scoping Reviews (PRISMA-ScR) guidelines was performed. Studies addressing the utilization of HP-MR and/or AI for early detection, assessment of aggressiveness, and interrogating the early efficacy of therapy in patients with PDAC cited in recent clinical guidelines were extracted from the PubMed and Google Scholar databases. The studies were reviewed following predefined exclusion and inclusion criteria, and grouped based on the utilization of HP-MR and/or AI in PDAC diagnosis. RESULTS Part of the goal of this review was to highlight the knowledge gap of early detection in pancreatic cancer by any imaging modality, and to emphasize how AI and HP-MR can address this critical gap. We reviewed every paper published on HP-MR applications in PDAC, including six preclinical studies and one clinical trial. We also reviewed several HP-MR-related articles describing new probes with many functional applications in PDAC.
On the AI side, we reviewed all existing papers that met our inclusion criteria on AI applications for evaluating computed tomography (CT) and MR images in PDAC. With the emergence of AI and its unique capability to learn across multimodal data, along with sensitive metabolic imaging using HP-MR, this knowledge gap in PDAC can be adequately addressed. CT is an affordable, accessible, and widespread imaging modality worldwide; for this reason alone, most of the data discussed are based on CT imaging datasets. Although relatively few MR-related papers were included in this review, we believe that with the rapid adoption of MR imaging and HP-MR, more clinical data on pancreatic cancer imaging will become available in the near future. CONCLUSIONS Integration of AI, HP-MR, and multimodal imaging information in pancreatic cancer may lead to the development of real-time biomarkers of early detection, assessing aggressiveness, and interrogating early efficacy of therapy in PDAC.
Collapse
Affiliation(s)
- José S Enriquez: Department of Cancer Systems Imaging, University of Texas MD Anderson Cancer Center, Houston, TX, United States; Graduate School of Biomedical Sciences, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Yan Chu: School of Biomedical Informatics, University of Texas Health Science Center at Houston, Houston, TX, United States
- Shivanand Pudakalakatti: Department of Cancer Systems Imaging, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Kang Lin Hsieh: School of Biomedical Informatics, University of Texas Health Science Center at Houston, Houston, TX, United States
- Duncan Salmon: Department of Electrical and Computer Engineering, Rice University, Houston, TX, United States
- Prasanta Dutta: Department of Cancer Systems Imaging, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Niki Zacharias Millward: Graduate School of Biomedical Sciences, University of Texas MD Anderson Cancer Center, Houston, TX, United States; Department of Urology, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Eugene Lurie: Department of Translational Molecular Pathology, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Steven Millward: Department of Cancer Systems Imaging, University of Texas MD Anderson Cancer Center, Houston, TX, United States; Graduate School of Biomedical Sciences, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Florencia McAllister: Graduate School of Biomedical Sciences, University of Texas MD Anderson Cancer Center, Houston, TX, United States; Department of Clinical Cancer Prevention, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Anirban Maitra: Graduate School of Biomedical Sciences, University of Texas MD Anderson Cancer Center, Houston, TX, United States; Department of Pathology, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Subrata Sen: Graduate School of Biomedical Sciences, University of Texas MD Anderson Cancer Center, Houston, TX, United States; Department of Translational Molecular Pathology, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Ann Killary: Graduate School of Biomedical Sciences, University of Texas MD Anderson Cancer Center, Houston, TX, United States; Department of Translational Molecular Pathology, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Jian Zhang: Division of Computer Science and Engineering, Louisiana State University, Baton Rouge, LA, United States
- Xiaoqian Jiang: School of Biomedical Informatics, University of Texas Health Science Center at Houston, Houston, TX, United States
- Pratip K Bhattacharya: Department of Cancer Systems Imaging, University of Texas MD Anderson Cancer Center, Houston, TX, United States; Graduate School of Biomedical Sciences, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Shayan Shams: School of Biomedical Informatics, University of Texas Health Science Center at Houston, Houston, TX, United States
16
|
Panda A, Korfiatis P, Suman G, Garg SK, Polley EC, Singh DP, Chari ST, Goenka AH. Two-stage deep learning model for fully automated pancreas segmentation on computed tomography: Comparison with intra-reader and inter-reader reliability at full and reduced radiation dose on an external dataset. Med Phys 2021; 48:2468-2481. [PMID: 33595105 DOI: 10.1002/mp.14782] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2020] [Revised: 01/07/2021] [Accepted: 02/11/2021] [Indexed: 01/24/2023] Open
Abstract
PURPOSE To develop a two-stage three-dimensional (3D) convolutional neural network (CNN) for fully automated volumetric segmentation of the pancreas on computed tomography (CT) and to evaluate its performance in the context of intra-reader and inter-reader reliability at full and reduced radiation dose on a public dataset. METHODS A dataset of 1994 abdominal CT scans (portal venous phase, slice thickness ≤ 3.75 mm, multiple CT vendors) was curated by two radiologists (R1 and R2) to exclude cases with pancreatic pathology, suboptimal image quality, or image artifacts (n = 77). The remaining 1917 CTs were equally allocated between R1 and R2 for volumetric pancreas segmentation [ground truth (GT)]. This internal dataset was randomly divided into training (n = 1380), validation (n = 248), and test (n = 289) sets for the development of a two-stage 3D CNN model based on a modified U-net architecture for automated volumetric pancreas segmentation. The model's segmentation performance and the differences between model-predicted and GT pancreatic volumes were compared on the test set. Subsequently, an external dataset from The Cancer Imaging Archive (TCIA) comprising CT scans acquired at standard radiation dose and the same scans reconstructed at a simulated 25% radiation dose was curated (n = 41). R1 and R2 independently performed volumetric pancreas segmentation on the full-dose and then the reduced-dose CT images of this TCIA dataset. Intra-reader and inter-reader reliability, the model's segmentation performance, and the reliability between model-predicted pancreatic volumes at full vs reduced dose were measured. Finally, the model's performance was tested on the benchmarking National Institutes of Health (NIH)-Pancreas CT (PCT) dataset. RESULTS The 3D CNN had a mean (SD) Dice similarity coefficient (DSC) of 0.91 (0.03) and an average Hausdorff distance of 0.15 (0.09) mm on the test set.
The model's performance was equivalent between males and females (P = 0.08) and across different CT slice thicknesses (P > 0.05) based on noninferiority statistical testing. There was no difference between model-predicted and GT pancreatic volumes [mean predicted volume 99 cc (31 cc); GT volume 101 cc (33 cc), P = 0.33]. The mean pancreatic volume difference was -2.7 cc (percent difference: -2.4% of GT volume), with excellent correlation between model-predicted and GT volumes [concordance correlation coefficient (CCC) = 0.97]. In the external TCIA dataset, the model had higher reliability than R1 and R2 on full- vs reduced-dose CT scans [model mean (SD) DSC: 0.96 (0.02), CCC = 0.995 vs R1 DSC: 0.83 (0.07), CCC = 0.89, and R2 DSC: 0.87 (0.04), CCC = 0.97]. The DSC and volume concordance correlations for R1 vs R2 (inter-reader reliability) were 0.85 (0.07), CCC = 0.90 on the full-dose and 0.83 (0.07), CCC = 0.96 on the reduced-dose dataset. There was good reliability between the model and R1 at both full- and reduced-dose CT [full dose: DSC 0.81 (0.07), CCC = 0.83; reduced dose: DSC 0.81 (0.08), CCC = 0.87]. Likewise, there was good reliability between the model and R2 at both full- and reduced-dose CT [full dose: DSC 0.84 (0.05), CCC = 0.89; reduced dose: DSC 0.83 (0.06), CCC = 0.89]. There was no difference between model-predicted and GT pancreatic volume in the TCIA dataset [mean predicted volume 96 cc (33 cc); GT volume 89 cc (30 cc), P = 0.31]. The model had a mean (SD) DSC of 0.89 (0.04) (minimum-maximum: 0.79-0.96) on the NIH-PCT dataset. CONCLUSION A 3D CNN developed on the largest dataset of CTs is accurate for fully automated volumetric pancreas segmentation and is generalizable across a wide range of CT slice thicknesses, radiation doses, and patient genders. This 3D CNN offers a scalable tool to leverage biomarkers from pancreas morphometrics and radiomics for pancreatic diseases, including early pancreatic cancer detection.
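The Dice similarity coefficient reported throughout this abstract measures voxel-wise overlap between two binary segmentation masks, DSC = 2|A ∩ B| / (|A| + |B|). A minimal illustrative sketch (not the study's code) on NumPy boolean masks:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient: DSC = 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    # Convention: two empty masks are in perfect agreement.
    return 2.0 * intersection / denom if denom else 1.0

# Toy 2D "slices": model prediction vs reader ground truth
pred = np.array([[0, 1, 1],
                 [0, 1, 1],
                 [0, 0, 0]])
truth = np.array([[0, 1, 1],
                  [0, 1, 0],
                  [0, 0, 0]])
print(round(dice_coefficient(pred, truth), 3))  # 2*3/(4+3) ≈ 0.857
```

The same function applies unchanged to 3D volumes, since the reductions operate over all array elements regardless of dimensionality.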
Affiliation(s)
- Ananya Panda: Department of Radiology, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA
- Panagiotis Korfiatis: Department of Radiology, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA
- Garima Suman: Department of Radiology, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA
- Sushil K Garg: Department of Gastroenterology and Hepatology, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA
- Eric C Polley: Department of Biostatistics, Health Sciences Research, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA
- Dhruv P Singh: Department of Gastroenterology and Hepatology, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA
- Suresh T Chari: Department of Gastroenterology, Hepatology and Nutrition, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX, 77030, USA
- Ajit H Goenka: Department of Radiology, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA