1. Dai J, He C, Jin L, Chen C, Wu J, Bian Y. A deep learning detection method for pancreatic cystic neoplasm based on Mamba architecture. Journal of X-Ray Science and Technology 2025;33:461-471. [PMID: 39973786] [DOI: 10.1177/08953996251313719]
Abstract
OBJECTIVE Early diagnosis of pancreatic cystic neoplasm (PCN) is crucial for patient survival. This study proposes M-YOLO, a novel model combining the Mamba architecture and YOLO, to enhance the detection of pancreatic cystic tumors. The model addresses the technical challenge posed by the tumors' complex morphological features in medical images. METHODS This study develops an innovative deep learning network architecture, M-YOLO (Mamba YOLOv10), which combines the advantages of Mamba and YOLOv10 and aims to improve the accuracy and efficiency of pancreatic cystic neoplasm (PCN) detection. The Mamba architecture, with its superior sequence modeling capabilities, is well suited to processing the rich contextual information contained in medical images, while YOLOv10's fast object detection ensures the system's viability for clinical application. RESULTS On the dataset provided by Changhai Hospital, M-YOLO achieved a sensitivity of 0.98, a specificity of 0.92, a precision of 0.96, an F1 score of 0.97, an accuracy of 0.93, and a mean average precision (mAP) of 0.96 at a 50% intersection-over-union (IoU) threshold. CONCLUSIONS M-YOLO (Mamba YOLOv10) enhances PCN detection performance by integrating the deep feature extraction capability of Mamba with the fast localization of YOLOv10.
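The metrics reported above follow standard confusion-matrix definitions. As an illustration only (not the authors' code), the minimal sketch below shows how sensitivity, specificity, precision, F1, and accuracy are derived from true/false positive and negative counts; the counts in the example call are hypothetical.

```python
# Illustrative sketch: standard detection metrics from confusion-matrix counts.

def detection_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute sensitivity, specificity, precision, F1, and accuracy."""
    sensitivity = tp / (tp + fn)          # recall
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "precision": precision,
        "f1": f1,
        "accuracy": accuracy,
    }

# Hypothetical counts, for illustration only
print(detection_metrics(tp=98, fp=4, tn=46, fn=2))
```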
Affiliation(s)
- Junlong Dai: School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, China
- Cong He: School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, China
- Liang Jin: Department of Radiology, Huadong Hospital, Fudan University, Shanghai, China
- Chengwei Chen: Department of Radiology, First Affiliated Hospital of Naval Medical University (Second Military Medical University), Shanghai, China
- Jie Wu: School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, China
- Yun Bian: Department of Radiology, First Affiliated Hospital of Naval Medical University (Second Military Medical University), Shanghai, China
2. Qadir MI, Baril JA, Yip-Schneider MT, Schonlau D, Tran TTT, Schmidt CM, Kolbinger FR. Artificial Intelligence in Pancreatic Intraductal Papillary Mucinous Neoplasm Imaging: A Systematic Review. medRxiv (preprint) 2025:2025.01.08.25320130. [PMID: 39830259] [PMCID: PMC11741484] [DOI: 10.1101/2025.01.08.25320130]
Abstract
Background Based on the Fukuoka and Kyoto international consensus guidelines, the current clinical management of intraductal papillary mucinous neoplasm (IPMN) largely depends on imaging features. While these criteria are highly sensitive in detecting high-risk IPMN, they lack specificity, resulting in surgical overtreatment. Artificial Intelligence (AI)-based medical image analysis has the potential to augment the clinical management of IPMNs by improving diagnostic accuracy. Methods Based on a systematic review of the academic literature on AI in IPMN imaging, 1041 publications were identified, of which 25 published studies were included in the analysis. The studies were stratified based on prediction target, underlying data type and imaging modality, patient cohort size, and stage of clinical translation, and were subsequently analyzed to identify trends and gaps in the field. Results Research on AI in IPMN imaging has been increasing in recent years. The majority of studies utilized CT imaging to train computational models. Most studies presented computational models developed on single-center datasets (n = 11, 44%) and included fewer than 250 patients (n = 18, 72%). Methodologically, convolutional neural network (CNN)-based algorithms were most commonly used. Thematically, most studies reported models augmenting differential diagnosis (n = 9, 36%) or risk stratification (n = 10, 40%) rather than IPMN detection (n = 5, 20%) or IPMN segmentation (n = 2, 8%). Conclusion This systematic review provides a comprehensive overview of the research landscape of AI in IPMN imaging. Computational models have the potential to enhance the accurate and precise stratification of patients with IPMN. Multicenter collaboration and datasets comprising various modalities are necessary to fully realize this potential, alongside concerted efforts towards clinical translation.
Affiliation(s)
- Jackson A. Baril: Division of Surgical Oncology, Department of Surgery, Indiana University School of Medicine, Indianapolis, IN, USA
- Michele T. Yip-Schneider: Division of Surgical Oncology, Department of Surgery, Indiana University School of Medicine, Indianapolis, IN, USA
- Duane Schonlau: Department of Radiology, Indiana University School of Medicine, Indianapolis, IN, USA
- Thi Thanh Thoa Tran: Division of Surgical Oncology, Department of Surgery, Indiana University School of Medicine, Indianapolis, IN, USA
- C. Max Schmidt: Division of Surgical Oncology, Department of Surgery, Indiana University School of Medicine, Indianapolis, IN, USA; Department of Biochemistry and Molecular Biology, Indiana University School of Medicine, Indianapolis, IN, USA
- Fiona R. Kolbinger: Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA; Regenstrief Center for Healthcare Engineering (RCHE), Purdue University, West Lafayette, IN, USA; Department of Biostatistics and Health Data Science, Richard M. Fairbanks School of Public Health, Indiana University, Indianapolis, IN, USA
3. Udriștoiu AL, Podină N, Ungureanu BS, Constantin A, Georgescu CV, Bejinariu N, Pirici D, Burtea DE, Gruionu L, Udriștoiu S, Săftoiu A. Deep learning segmentation architectures for automatic detection of pancreatic ductal adenocarcinoma in EUS-guided fine-needle biopsy samples based on whole-slide imaging. Endosc Ultrasound 2024;13:335-344. [PMID: 39802107] [PMCID: PMC11723688] [DOI: 10.1097/eus.0000000000000094]
Abstract
Background EUS-guided fine-needle biopsy is the procedure of choice for the diagnosis of pancreatic ductal adenocarcinoma (PDAC). Nevertheless, the samples obtained are small and require expertise in pathology, and the diagnosis is difficult given the scarcity of malignant cells and the marked desmoplastic reaction of these tumors. With the help of artificial intelligence, deep learning architectures can provide a fast, accurate, and automated approach to PDAC image segmentation based on whole-slide imaging. Given the effectiveness of U-Net in semantic segmentation, numerous variants and improvements have emerged, specifically for whole-slide image segmentation. Methods In this study, 7 U-Net architecture variants were compared on 2 different datasets of EUS-guided fine-needle biopsy samples from 2 medical centers (31 and 33 whole-slide images, respectively) with different parameters and acquisition tools. The U-Net variants evaluated included some that had not previously been explored for PDAC whole-slide image segmentation. Performance was evaluated using the mean Dice coefficient and mean intersection over union (IoU). Results The highest segmentation accuracies were obtained with the Inception U-Net architecture for both datasets. PDAC tissue was segmented with an overall average Dice coefficient of 97.82% and an IoU of 0.87 for Dataset 1, and an overall average Dice coefficient of 95.70% and an IoU of 0.79 for Dataset 2. We also performed external testing of the trained segmentation models by cross-evaluating between the 2 datasets. The Inception U-Net model trained on Train Dataset 1 achieved an overall average Dice coefficient of 93.12% and an IoU of 0.74 on Test Dataset 2, and the Inception U-Net model trained on Train Dataset 2 achieved an overall average Dice coefficient of 92.09% and an IoU of 0.81 on Test Dataset 1. Conclusions The findings of this study demonstrate the feasibility of using artificial intelligence to assess PDAC segmentation in whole-slide imaging, supported by promising scores.
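The Dice coefficient and IoU used to score these models are standard overlap measures between a predicted mask and the ground truth. The following minimal NumPy sketch (illustrative, not taken from the study) shows how both are computed for binary masks.

```python
import numpy as np

def dice_and_iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7):
    """Dice coefficient and IoU for two binary masks of identical shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
    iou = (intersection + eps) / (union + eps)
    return dice, iou

# Toy example with two overlapping 2D masks
pred = np.zeros((8, 8), dtype=np.uint8); pred[2:6, 2:6] = 1
gt = np.zeros((8, 8), dtype=np.uint8); gt[3:7, 3:7] = 1
print(dice_and_iou(pred, gt))
```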
Affiliation(s)
- Nicoleta Podină: Department of Gastroenterology, Ponderas Academic Hospital, Bucharest, Romania; Faculty of Medicine, Carol Davila University of Medicine and Pharmacy, Bucharest, Romania
- Bogdan Silviu Ungureanu: Department of Gastroenterology, University of Medicine and Pharmacy of Craiova, Craiova, Romania; Research Center of Gastroenterology and Hepatology, University of Medicine and Pharmacy Craiova, Craiova, Romania
- Alina Constantin: Department of Gastroenterology, Ponderas Academic Hospital, Bucharest, Romania
- Nona Bejinariu: REGINA MARIA Regional Laboratory, Pathological Anatomy Division, Cluj-Napoca, Romania
- Daniel Pirici: Department of Histology, University of Medicine and Pharmacy of Craiova, Craiova, Romania
- Daniela Elena Burtea: Research Center of Gastroenterology and Hepatology, University of Medicine and Pharmacy Craiova, Craiova, Romania
- Lucian Gruionu: Faculty of Mechanics, University of Craiova, Craiova, Romania
- Stefan Udriștoiu: Faculty of Automation, Computers and Electronics, University of Craiova, Craiova, Romania
- Adrian Săftoiu: Department of Gastroenterology, Ponderas Academic Hospital, Bucharest, Romania; Department of Gastroenterology and Hepatology, Elias University Emergency Hospital, Carol Davila University of Medicine and Pharmacy, Bucharest, Romania
4. Ahmed TM, Lopez-Ramirez F, Fishman EK, Chu L. Artificial Intelligence Applications in Pancreatic Cancer Imaging. Advances in Clinical Radiology 2024;6:41-54. [DOI: 10.1016/j.yacr.2024.04.003]
5. Liu W, Zhang B, Liu T, Jiang J, Liu Y. Artificial Intelligence in Pancreatic Image Analysis: A Review. Sensors (Basel) 2024;24:4749. [PMID: 39066145] [PMCID: PMC11280964] [DOI: 10.3390/s24144749]
Abstract
Pancreatic cancer is a highly lethal disease with a poor prognosis. Its early diagnosis and accurate treatment mainly rely on medical imaging, so accurate medical image analysis is especially vital for pancreatic cancer patients. However, medical image analysis of pancreatic cancer is facing challenges due to ambiguous symptoms, high misdiagnosis rates, and significant financial costs. Artificial intelligence (AI) offers a promising solution by relieving medical personnel's workload, improving clinical decision-making, and reducing patient costs. This study focuses on AI applications such as segmentation, classification, object detection, and prognosis prediction across five types of medical imaging: CT, MRI, EUS, PET, and pathological images, as well as integrating these imaging modalities to boost diagnostic accuracy and treatment efficiency. In addition, this study discusses current hot topics and future directions aimed at overcoming the challenges in AI-enabled automated pancreatic cancer diagnosis algorithms.
Affiliation(s)
- Weixuan Liu: Sydney Smart Technology College, Northeastern University at Qinhuangdao, Qinhuangdao 066004, China
- Bairui Zhang: Sydney Smart Technology College, Northeastern University at Qinhuangdao, Qinhuangdao 066004, China
- Tao Liu: School of Mathematics and Statistics, Northeastern University at Qinhuangdao, Qinhuangdao 066004, China
- Juntao Jiang: College of Control Science and Engineering, Zhejiang University, Hangzhou 310058, China
- Yong Liu: College of Control Science and Engineering, Zhejiang University, Hangzhou 310058, China
6. Wang SJ, Hu Z, Li C, He X, Zhu C, Wang Y, Sattar U, Bazojoo V, He HYN, Blumenfeld JD, Prince MR. Automatically Detecting Pancreatic Cysts in Autosomal Dominant Polycystic Kidney Disease on MRI Using Deep Learning. Tomography 2024;10:1148-1158. [PMID: 39058059] [PMCID: PMC11281294] [DOI: 10.3390/tomography10070087]
Abstract
BACKGROUND Pancreatic cysts in autosomal dominant polycystic kidney disease (ADPKD) correlate with PKD2 mutations, which have a different phenotype than PKD1 mutations. However, pancreatic cysts are commonly overlooked by radiologists. Here, we automate the detection of pancreatic cysts on abdominal MRI in ADPKD. METHODS Eight nnU-Net-based segmentation models with 2D or 3D configurations and various loss functions were trained on positive-only or positive-and-negative datasets comprising axial and coronal T2-weighted MR images from 254 scans of 146 ADPKD patients, with pancreatic cysts labeled independently by two radiologists. Model performance was evaluated on test subjects unseen in training, comprising 40 internal, 40 external, and 23 test-retest reproducibility ADPKD patients. RESULTS The two radiologists agreed on 52% of cysts labeled on the training data and on 33%/25% on the internal/external test datasets. The 2D model trained with a combined Dice similarity coefficient and cross-entropy loss on the dataset containing both positive and negative cases produced optimal Dice scores of 0.7 ± 0.5 and 0.8 ± 0.4 at the voxel level on internal and external validation, respectively, and was therefore selected as the best-performing model. In the test-retest experiment, this model showed superior reproducibility in segmenting pancreatic cysts (83% agreement between scans A and B) compared with six expert observers (77% agreement). On internal/external validation, the model showed high specificity of 94%/100% but limited sensitivity of 20%/24%. CONCLUSIONS Labeling pancreatic cysts on abdominal T2-weighted images in patients with ADPKD is challenging; deep learning can help automate the detection of pancreatic cysts, and further image quality improvement is warranted.
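The best-performing configuration pairs a Dice term with cross-entropy, a common combination for segmentation training. Below is a hedged PyTorch sketch of such a combined loss for binary masks; it illustrates the general idea rather than nnU-Net's exact implementation, and the tensor shapes in the toy usage are assumptions.

```python
import torch
import torch.nn.functional as F

def dice_bce_loss(logits: torch.Tensor, target: torch.Tensor,
                  eps: float = 1e-6) -> torch.Tensor:
    """Combined soft-Dice + binary cross-entropy loss for a binary mask.

    logits: raw network output, shape (N, 1, H, W) or (N, 1, D, H, W)
    target: ground-truth mask of the same shape, values in {0, 1}
    """
    probs = torch.sigmoid(logits)
    dims = tuple(range(1, target.dim()))            # reduce over all but batch
    intersection = (probs * target).sum(dim=dims)
    denom = probs.sum(dim=dims) + target.sum(dim=dims)
    dice = (2.0 * intersection + eps) / (denom + eps)
    bce = F.binary_cross_entropy_with_logits(logits, target.float())
    return (1.0 - dice).mean() + bce

# Toy usage on random data
logits = torch.randn(2, 1, 64, 64)
target = (torch.rand(2, 1, 64, 64) > 0.5).float()
print(dice_bce_loss(logits, target))
```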
Affiliation(s)
- Sophie J. Wang: Department of Radiology, Weill Cornell Medicine, New York, NY 10065, USA
- Zhongxiu Hu: Department of Radiology, Weill Cornell Medicine, New York, NY 10065, USA
- Collin Li: Department of Radiology, Weill Cornell Medicine, New York, NY 10065, USA
- Xinzi He: Department of Radiology, Weill Cornell Medicine, New York, NY 10065, USA
- Chenglin Zhu: Department of Radiology, Weill Cornell Medicine, New York, NY 10065, USA
- Yin Wang: Department of Radiology, Weill Cornell Medicine, New York, NY 10065, USA
- Usama Sattar: Department of Radiology, Weill Cornell Medicine, New York, NY 10065, USA
- Vahid Bazojoo: Department of Radiology, Weill Cornell Medicine, New York, NY 10065, USA
- Hui Yi Ng He: Department of Radiology, Weill Cornell Medicine, New York, NY 10065, USA
- Jon D. Blumenfeld: The Rogosin Institute, New York, NY 10065, USA; Department of Medicine, Weill Cornell Medicine, New York, NY 10065, USA
- Martin R. Prince: Department of Radiology, Weill Cornell Medicine, New York, NY 10065, USA; Department of Radiology, Columbia University Vagelos College of Physicians and Surgeons, New York, NY 10032, USA
7. Mazor N, Dar G, Lederman R, Lev-Cohain N, Sosna J, Joskowicz L. MC3DU-Net: a multisequence cascaded pipeline for the detection and segmentation of pancreatic cysts in MRI. Int J Comput Assist Radiol Surg 2024;19:423-432. [PMID: 37796412] [DOI: 10.1007/s11548-023-03020-y]
Abstract
PURPOSE Radiological detection and follow-up of pancreatic cysts in multisequence MRI studies are required to assess the likelihood of their malignancy and to determine their treatment. The evaluation requires expertise and has not been automated. This paper presents MC3DU-Net, a novel multisequence cascaded pipeline for the detection and segmentation of pancreatic cysts in MRI studies consisting of coronal MRCP and axial TSE MRI sequences. METHODS MC3DU-Net leverages the information in both sequences by computing a pancreas region of interest (ROI) segmentation in the TSE MRI scan, transferring it to the MRCP scan, and then detecting and segmenting the cysts within the ROI of the MRCP scan. Both the voxel-level ROI segmentation of the pancreas and the segmentation of the cysts are performed with 3D U-Nets trained with Hard Negative Patch Mining, a new technique for correcting class imbalance and reducing false positives. RESULTS MC3DU-Net was evaluated on a dataset of 158 MRI patient studies with a training/validation/testing split of 118/17/23. Ground-truth segmentations of a total of 840 cysts were obtained manually by expert clinicians. MC3DU-Net achieves a mean recall of 0.80 ± 0.19, a mean precision of 0.75 ± 0.26, a mean Dice score of 0.80 ± 0.19, and a mean ASSD of 0.60 ± 0.53 for pancreatic cysts of diameter > 5 mm, the clinically relevant endpoint. CONCLUSION MC3DU-Net is the first fully automatic method for the detection and segmentation of pancreatic cysts in MRI. Automatic detection and segmentation of pancreatic cysts in MRI can be performed accurately and reliably. It may provide a method for precise disease evaluation and may serve as a second expert reader.
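Hard Negative Patch Mining is described only at a high level in the abstract, so the exact procedure is not specified here. The sketch below illustrates one plausible version under that assumption: after an initial training round, patches centered on confident false-positive voxels are collected so they can be added back into the next round's training set to rebalance the classes.

```python
import numpy as np

def mine_hard_negative_patches(prob_map: np.ndarray,
                               gt_mask: np.ndarray,
                               patch_size: int = 32,
                               fp_threshold: float = 0.5,
                               max_patches: int = 20):
    """Collect patch corners around confident false-positive pixels.

    prob_map: model probabilities for one 2D slice, shape (H, W)
    gt_mask:  ground-truth binary mask of the same shape
    Returns a list of (row, col) top-left corners of hard-negative patches.
    """
    fp = (prob_map >= fp_threshold) & (gt_mask == 0)      # false positives
    coords = np.argwhere(fp)
    np.random.shuffle(coords)
    half = patch_size // 2
    corners = []
    for r, c in coords[:max_patches]:
        r0 = int(np.clip(r - half, 0, prob_map.shape[0] - patch_size))
        c0 = int(np.clip(c - half, 0, prob_map.shape[1] - patch_size))
        corners.append((r0, c0))
    return corners

# Toy usage on random data: every confident prediction is a false positive here
prob = np.random.rand(128, 128)
gt = np.zeros((128, 128), dtype=np.uint8)
print(len(mine_hard_negative_patches(prob, gt)))
```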
Affiliation(s)
- Nir Mazor: School of Computer Science and Engineering, The Hebrew University of Jerusalem, Jerusalem, Israel
- Gili Dar: Department of Radiology, Hadassah Hebrew University Medical Center, Jerusalem, Israel
- Richard Lederman: Department of Radiology, Hadassah Hebrew University Medical Center, Jerusalem, Israel
- Naama Lev-Cohain: Department of Radiology, Hadassah Hebrew University Medical Center, Jerusalem, Israel
- Jacob Sosna: Department of Radiology, Hadassah Hebrew University Medical Center, Jerusalem, Israel
- Leo Joskowicz: School of Computer Science and Engineering, The Hebrew University of Jerusalem, Jerusalem, Israel
8. Tripathi S, Tabari A, Mansur A, Dabbara H, Bridge CP, Daye D. From Machine Learning to Patient Outcomes: A Comprehensive Review of AI in Pancreatic Cancer. Diagnostics (Basel) 2024;14:174. [PMID: 38248051] [PMCID: PMC10814554] [DOI: 10.3390/diagnostics14020174]
Abstract
Pancreatic cancer is a highly aggressive and difficult-to-detect cancer with a poor prognosis. Late diagnosis is common due to a lack of early symptoms, specific markers, and the challenging location of the pancreas. Imaging technologies have improved diagnosis, but there is still room for improvement in standardizing guidelines. Biopsies and histopathological analysis are challenging due to tumor heterogeneity. Artificial Intelligence (AI) revolutionizes healthcare by improving diagnosis, treatment, and patient care. AI algorithms can analyze medical images with precision, aiding in early disease detection. AI also plays a role in personalized medicine by analyzing patient data to tailor treatment plans. It streamlines administrative tasks, such as medical coding and documentation, and provides patient assistance through AI chatbots. However, challenges include data privacy, security, and ethical considerations. This review article focuses on the potential of AI in transforming pancreatic cancer care, offering improved diagnostics, personalized treatments, and operational efficiency, leading to better patient outcomes.
Affiliation(s)
- Satvik Tripathi: Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA; Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA 02129, USA; Harvard Medical School, Boston, MA 02115, USA
- Azadeh Tabari: Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA; Harvard Medical School, Boston, MA 02115, USA
- Arian Mansur: Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA; Harvard Medical School, Boston, MA 02115, USA
- Harika Dabbara: Boston University Chobanian & Avedisian School of Medicine, Boston, MA 02118, USA
- Christopher P. Bridge: Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA; Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA 02129, USA; Harvard Medical School, Boston, MA 02115, USA
- Dania Daye: Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA; Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA 02129, USA; Harvard Medical School, Boston, MA 02115, USA
9. Mervak BM, Fried JG, Wasnik AP. A Review of the Clinical Applications of Artificial Intelligence in Abdominal Imaging. Diagnostics (Basel) 2023;13:2889. [PMID: 37761253] [PMCID: PMC10529018] [DOI: 10.3390/diagnostics13182889]
Abstract
Artificial intelligence (AI) has been a topic of substantial interest for radiologists in recent years. Although many of the first clinical applications were in the neuro, cardiothoracic, and breast imaging subspecialties, the number of investigated and real-world applications in body imaging has been increasing, with more than 30 FDA-approved algorithms now available for applications in the abdomen and pelvis. In this manuscript, we explore some of the fundamentals of artificial intelligence and machine learning, review major functions that AI algorithms may perform, introduce current and potential future applications of AI in abdominal imaging, provide a basic understanding of the pathways by which AI algorithms can receive FDA approval, and explore some of the challenges of implementing AI in clinical practice.
Affiliation(s)
- Ashish P. Wasnik: Department of Radiology, University of Michigan—Michigan Medicine, 1500 E. Medical Center Dr., Ann Arbor, MI 48109, USA (affiliation shared with B.M.M. and J.G.F.)
10. Ahmed TM, Kawamoto S, Hruban RH, Fishman EK, Soyer P, Chu LC. A primer on artificial intelligence in pancreatic imaging. Diagn Interv Imaging 2023;104:435-447. [PMID: 36967355] [DOI: 10.1016/j.diii.2023.03.002]
Abstract
Artificial Intelligence (AI) is set to transform medical imaging by leveraging the vast data contained in medical images. Deep learning and radiomics are the two main AI methods currently being applied within radiology. Deep learning uses a layered set of self-correcting algorithms to develop a mathematical model that best fits the data. Radiomics converts imaging data into mineable features such as signal intensity, shape, texture, and higher-order features. Both methods have the potential to improve disease detection, characterization, and prognostication. This article reviews the current status of artificial intelligence in pancreatic imaging and critically appraises the quality of existing evidence using the radiomics quality score.
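Radiomics, as summarized above, converts imaging data into mineable features. As a toy illustration only (real pipelines such as PyRadiomics compute hundreds of standardized features, and the feature set below is an assumption chosen for brevity), the sketch extracts a few first-order intensity features from a masked region.

```python
import numpy as np
from scipy import stats

def first_order_features(image: np.ndarray, mask: np.ndarray) -> dict:
    """A few first-order intensity features over a segmented region."""
    voxels = image[mask.astype(bool)]
    counts, _ = np.histogram(voxels, bins=32)
    return {
        "mean": float(voxels.mean()),
        "std": float(voxels.std()),
        "min": float(voxels.min()),
        "max": float(voxels.max()),
        "skewness": float(stats.skew(voxels)),
        "kurtosis": float(stats.kurtosis(voxels)),
        "entropy": float(stats.entropy(counts + 1e-9)),
    }

# Toy usage: synthetic volume with a cubic region of interest
img = np.random.normal(60, 15, size=(32, 32, 32))
roi = np.zeros_like(img, dtype=bool); roi[8:20, 8:20, 8:20] = True
print(first_order_features(img, roi))
```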
Affiliation(s)
- Taha M Ahmed: The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins Hospital, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
- Satomi Kawamoto: The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins Hospital, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
- Ralph H Hruban: Sol Goldman Pancreatic Research Center, Department of Pathology, Johns Hopkins Hospital, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
- Elliot K Fishman: The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins Hospital, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
- Philippe Soyer: Université Paris Cité, Faculté de Médecine, Department of Radiology, Hôpital Cochin-APHP, 75014 Paris, France
- Linda C Chu: The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins Hospital, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
11. Duh MM, Torra-Ferrer N, Riera-Marín M, Cumelles D, Rodríguez-Comas J, García López J, Fernández Planas MT. Deep Learning to Detect Pancreatic Cystic Lesions on Abdominal Computed Tomography Scans: Development and Validation Study. JMIR AI 2023;2:e40702. [PMID: 38875547] [PMCID: PMC11041052] [DOI: 10.2196/40702]
Abstract
BACKGROUND Pancreatic cystic lesions (PCLs) are frequent and underreported incidental findings on computed tomography (CT) scans and can evolve to pancreatic cancer, the most lethal cancer, with a life expectancy of less than 5 months. OBJECTIVE The aim of this study was to develop and validate an artificial deep neural network (attention gate U-Net, also named "AGNet") for automated detection of PCLs. This kind of technology can help radiologists cope with an increasing demand for cross-sectional imaging tests and increase the number of incidentally detected PCLs, thus improving the early detection of pancreatic cancer. METHODS We adapted and evaluated an algorithm based on an attention gate U-Net architecture for automated detection of PCLs on CT. A total of 335 abdominal CTs with PCLs and control cases were manually segmented in 3D by 2 radiologists with over 10 years of experience, in consensus with a board-certified radiologist specialized in abdominal radiology. This information was used to train a neural network for segmentation, followed by a postprocessing pipeline that filtered the results of the network and applied physical constraints, such as the expected position of the pancreas, to minimize the number of false positives. RESULTS Of the 335 studies included, 297 had a PCL, including serous cystadenoma, intraductal papillary mucinous neoplasia, mucinous cystic neoplasm, and pseudocysts. The Shannon index of the chosen dataset was 0.991 with an evenness of 0.902. The mean sensitivity obtained in the detection of these lesions was 93.1% (SD 0.1%), and the specificity was 81.8% (SD 0.1%). CONCLUSIONS This study shows good performance of an automated artificial deep neural network in the detection of PCLs on both noncontrast- and contrast-enhanced abdominal CT scans.
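The Shannon index and evenness quoted for the dataset are standard diversity measures over its class distribution. The sketch below (using a hypothetical label distribution, not the study's data) shows how both values are computed.

```python
import math
from collections import Counter

def shannon_index_and_evenness(labels):
    """Shannon diversity index H and evenness (H / ln S) for class labels."""
    counts = Counter(labels)
    n = sum(counts.values())
    h = -sum((c / n) * math.log(c / n) for c in counts.values())
    evenness = h / math.log(len(counts)) if len(counts) > 1 else 0.0
    return h, evenness

# Hypothetical label distribution, for illustration only
labels = (["serous"] * 90 + ["IPMN"] * 120 +
          ["mucinous"] * 50 + ["pseudocyst"] * 37)
print(shannon_index_and_evenness(labels))
```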
Affiliation(s)
- Maria Montserrat Duh: Department of Radiology, Consorci Sanitari del Maresme (Hospital de Mataró), Mataró, Spain
- Neus Torra-Ferrer: Department of Radiology, Consorci Sanitari del Maresme (Hospital de Mataró), Mataró, Spain
- Dídac Cumelles: Scientific and Technical Department, Sycai Technologies SL, Barcelona, Spain
- Javier García López: Scientific and Technical Department, Sycai Technologies SL, Barcelona, Spain
12. Park HJ, Shin K, You MW, Kyung SG, Kim SY, Park SH, Byun JH, Kim N, Kim HJ. Deep Learning-based Detection of Solid and Cystic Pancreatic Neoplasms at Contrast-enhanced CT. Radiology 2023;306:140-149. [PMID: 35997607] [DOI: 10.1148/radiol.220171]
Abstract
Background Deep learning (DL) may facilitate the diagnosis of various pancreatic lesions at imaging. Purpose To develop and validate a DL-based approach for automatic identification of patients with various solid and cystic pancreatic neoplasms at abdominal CT and compare its diagnostic performance with that of radiologists. Materials and Methods In this retrospective study, a three-dimensional nnU-Net-based DL model was trained using the CT data of patients who underwent resection for pancreatic lesions between January 2014 and March 2015 and a subset of patients without pancreatic abnormality who underwent CT in 2014. Performance of the DL-based approach to identify patients with pancreatic lesions was evaluated in a temporally independent cohort (test set 1) and a temporally and spatially independent cohort (test set 2) and was compared with that of two board-certified radiologists. Performance was assessed using receiver operating characteristic analysis. Results The study included 852 patients in the training set (median age, 60 years [range, 19-85 years]; 462 men), 603 patients in test set 1 (median age, 58 years [range, 18-82 years]; 376 men), and 589 patients in test set 2 (median age, 63 years [range, 18-99 years]; 343 men). In test set 1, the DL-based approach had an area under the receiver operating characteristic curve (AUC) of 0.91 (95% CI: 0.89, 0.94) and showed slightly worse performance in test set 2 (AUC, 0.87 [95% CI: 0.84, 0.89]). The DL-based approach showed high sensitivity in identifying patients with solid lesions of any size (98%-100%) or cystic lesions measuring 1.0 cm or larger (92%-93%), which was comparable with the radiologists (95%-100% for solid lesions [P = .51 to P > .99]; 93%-98% for cystic lesions ≥1.0 cm [P = .38 to P > .99]). Conclusion The deep learning-based approach demonstrated high performance in identifying patients with various solid and cystic pancreatic lesions at CT.
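The abstract does not state how the voxel-wise segmentation output was reduced to a per-patient score for the ROC analysis. The sketch below assumes one simple choice, the predicted lesion volume, and uses scikit-learn to compute the AUC on hypothetical labels; both the scoring rule and the data are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def patient_level_score(pred_mask: np.ndarray) -> float:
    """Reduce a voxel-wise prediction to a single per-patient score.

    Here the score is simply the predicted lesion volume (voxel count);
    other choices (maximum probability, largest component size) are possible.
    """
    return float(pred_mask.sum())

# Hypothetical example: 5 patients with binary labels and segmentation-derived scores
y_true = np.array([1, 0, 1, 1, 0])
scores = np.array([patient_level_score(np.random.rand(16, 16, 16) > t)
                   for t in (0.6, 0.95, 0.7, 0.65, 0.97)])
print(roc_auc_score(y_true, scores))
```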
Affiliation(s)
- Hyo Jung Park, Keewon Shin, Myung-Won You, Sung-Gu Kyung, So Yeon Kim, Seong Ho Park, Jae Ho Byun, Namkug Kim, Hyoung Jung Kim
- From the Department of Radiology and Research Institute of Radiology (H.J.P., S.Y.K., S.H.P., J.H.B., H.J.K.) and Department of Bioengineering, Asan Medical Institute of Convergence Science and Technology (K.S., S.G.K., N.K.), Asan Medical Center, University of Ulsan College of Medicine, 88 Olympic-ro 43-gil, Songpa-gu, Seoul 05505, Republic of Korea; and Department of Radiology, Kyung Hee University Hospital, Seoul, Republic of Korea (M.W.Y.)
13. Huang B, Huang H, Zhang S, Zhang D, Shi Q, Liu J, Guo J. Artificial intelligence in pancreatic cancer. Theranostics 2022;12:6931-6954. [PMID: 36276650] [PMCID: PMC9576619] [DOI: 10.7150/thno.77949]
Abstract
Pancreatic cancer is among the deadliest cancers, with a five-year overall survival rate of just 11%. Pancreatic cancer patients diagnosed through early screening have a median overall survival of nearly ten years, compared with 1.5 years for those not diagnosed through early screening. Early diagnosis and early treatment of pancreatic cancer are therefore particularly critical. However, as a rare disease, pancreatic cancer carries a high general screening cost, the accuracy of existing tumor markers is insufficient, and the efficacy of treatment methods is uncertain. For early diagnosis, artificial intelligence technology can quickly identify high-risk groups from medical images, pathological examinations, biomarkers, and other data, enabling early detection of pancreatic cancer lesions. At the same time, artificial intelligence algorithms can be used to predict survival time, recurrence risk, metastasis, and therapy response, all of which affect prognosis. In addition, artificial intelligence is widely used in pancreatic cancer health records, estimating medical imaging parameters, developing computer-aided diagnosis systems, and related tasks. Advances in AI applications for pancreatic cancer will require a concerted effort among clinicians, basic scientists, statisticians, and engineers. Although it has limitations, AI will play an essential role in overcoming pancreatic cancer in the foreseeable future owing to its powerful computing capabilities.
Affiliation(s)
- Bowen Huang: Department of General Surgery, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing 100730, China; School of Medicine, Tsinghua University, Beijing 100084, China
- Haoran Huang: Department of General Surgery, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing 100730, China; School of Medicine, Tsinghua University, Beijing 100084, China
- Shuting Zhang: Department of General Surgery, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing 100730, China; School of Medicine, Tsinghua University, Beijing 100084, China
- Dingyue Zhang: Department of General Surgery, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing 100730, China; School of Medicine, Tsinghua University, Beijing 100084, China
- Qingya Shi: School of Medicine, Tsinghua University, Beijing 100084, China
- Jianzhou Liu: Department of General Surgery, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing 100730, China
- Junchao Guo: Department of General Surgery, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing 100730, China
14. Pușcașu CI, Rimbaş M, Mateescu RB, Larghi A, Cauni V. Advances in the Diagnosis of Pancreatic Cystic Lesions. Diagnostics (Basel) 2022;12:1779. [PMID: 35892490] [PMCID: PMC9394320] [DOI: 10.3390/diagnostics12081779]
Abstract
Pancreatic cystic lesions (PCLs) are a heterogeneous group of lesions ranging from benign to malignant. The prevalence of PCLs has increased in recent years, mostly due to advances in imaging techniques, increased awareness of their existence, and population aging. Reliable discrimination between neoplastic and non-neoplastic cystic lesions is paramount to ensuring adequate treatment and follow-up. Although conventional diagnostic techniques such as ultrasound (US), magnetic resonance imaging (MRI), and computed tomography (CT) can easily identify these lesions, their ability to assess the risk of malignancy is limited. Endoscopic ultrasound (EUS) is superior to cross-sectional imaging in identifying potentially malignant lesions due to its high resolution and better imaging characteristics, and it offers the advantage of cyst fluid sampling via fine-needle aspiration (FNA). More complex testing, such as cytological and histopathological analysis and biochemical and molecular testing of the aspirated fluid, can ensure an accurate diagnosis.
Affiliation(s)
- Claudia Irina Pușcașu: Gastroenterology Department, Colentina Clinical Hospital, 020125 Bucharest, Romania
- Mihai Rimbaş: Gastroenterology Department, Colentina Clinical Hospital, 020125 Bucharest, Romania; Department of Internal Medicine, Carol Davila University of Medicine, 050474 Bucharest, Romania (corresponding author; Tel.: +40-723-232-052)
- Radu Bogdan Mateescu: Gastroenterology Department, Colentina Clinical Hospital, 020125 Bucharest, Romania; Department of Internal Medicine, Carol Davila University of Medicine, 050474 Bucharest, Romania
- Alberto Larghi: Digestive Endoscopy Unit, Fondazione Policlinico Universitario Agostino Gemelli IRCCS, 00168 Rome, Italy
- Victor Cauni: Urology Department, Colentina Clinical Hospital, 020125 Bucharest, Romania
15. Preuss K, Thach N, Liang X, Baine M, Chen J, Zhang C, Du H, Yu H, Lin C, Hollingsworth MA, Zheng D. Using Quantitative Imaging for Personalized Medicine in Pancreatic Cancer: A Review of Radiomics and Deep Learning Applications. Cancers (Basel) 2022;14:1654. [PMID: 35406426] [PMCID: PMC8997008] [DOI: 10.3390/cancers14071654]
Abstract
Simple Summary With a five-year survival rate of only 3% for the majority of patients, pancreatic cancer is a global healthcare challenge. Radiomics and deep learning, two novel quantitative imaging methods that treat medical images as minable data instead of just pictures, have shown promise in advancing personalized management of pancreatic cancer through diagnosing precursor diseases, early detection, accurate diagnosis, and treatment personalization. Radiomics and deep learning methods aim to collect hidden information in medical images that is missed by conventional radiology practices through expanding the data search and comparing information across different patients. Both methods have been studied and applied in pancreatic cancer. In this review, we focus on the current progress of these two methods in pancreatic cancer and provide a comprehensive narrative review on the topic. With better regulation, enhanced workflow, and larger prospective patient datasets, radiomics and deep learning methods could show real hope in the battle against pancreatic cancer through personalized precision medicine.
Abstract As the most lethal major cancer, pancreatic cancer is a global healthcare challenge. Personalized medicine utilizing cutting-edge multi-omics data holds potential for major breakthroughs in tackling this critical problem. Radiomics and deep learning, two trendy quantitative imaging methods that take advantage of data science and modern medical imaging, have shown increasing promise in advancing the precision management of pancreatic cancer via the diagnosis of precursor diseases, early detection, accurate diagnosis, and treatment personalization and optimization. Radiomics employs manually crafted features, while deep learning applies computer-generated automatic features. These two methods aim to mine hidden information in medical images that is missed by conventional radiology and gain insights by systematically comparing the quantitative image information across different patients in order to characterize unique imaging phenotypes. Both methods have been studied and applied in various pancreatic cancer clinical applications. In this review, we begin with an introduction to the clinical problems and the technology. After providing technical overviews of the two methods, this review focuses on the current progress of clinical applications in precancerous lesion diagnosis, pancreatic cancer detection and diagnosis, prognosis prediction, treatment stratification, and radiogenomics. The limitations of current studies and methods are discussed, along with future directions. With better standardization and optimization of the workflow from image acquisition to analysis and with larger and especially prospective high-quality datasets, radiomics and deep learning methods could show real hope in the battle against pancreatic cancer through big data-based high-precision personalization.
Affiliation(s)
- Kiersten Preuss: Department of Radiation Oncology, University of Nebraska Medical Center, Omaha, NE 68198, USA; Department of Nutrition and Health Sciences, University of Nebraska Lincoln, Lincoln, NE 68588, USA
- Nate Thach: Department of Radiation Oncology, University of Nebraska Medical Center, Omaha, NE 68198, USA; Department of Computer Science, University of Nebraska Lincoln, Lincoln, NE 68588, USA
- Xiaoying Liang: Department of Radiation Oncology, Mayo Clinic, Jacksonville, FL 32224, USA
- Michael Baine: Department of Radiation Oncology, University of Nebraska Medical Center, Omaha, NE 68198, USA
- Justin Chen: Department of Radiation Oncology, University of Nebraska Medical Center, Omaha, NE 68198, USA; Naperville North High School, Naperville, IL 60563, USA
- Chi Zhang: School of Biological Sciences, University of Nebraska Lincoln, Lincoln, NE 68588, USA
- Huijing Du: Department of Mathematics, University of Nebraska Lincoln, Lincoln, NE 68588, USA
- Hongfeng Yu: Department of Computer Science, University of Nebraska Lincoln, Lincoln, NE 68588, USA
- Chi Lin: Department of Radiation Oncology, University of Nebraska Medical Center, Omaha, NE 68198, USA
- Michael A. Hollingsworth: Eppley Institute for Research in Cancer, University of Nebraska Medical Center, Omaha, NE 68198, USA
- Dandan Zheng: Department of Radiation Oncology, University of Nebraska Medical Center, Omaha, NE 68198, USA; Department of Radiation Oncology, University of Rochester, Rochester, NY 14626, USA (corresponding author; Tel.: +1-(585)-276-3255)
16. Li M, Lian F, Guo S. Multi-scale Selection and Multi-channel Fusion Model for Pancreas Segmentation Using Adversarial Deep Convolutional Nets. J Digit Imaging 2022;35:47-55. [PMID: 34921356] [PMCID: PMC8854512] [DOI: 10.1007/s10278-021-00563-x]
Abstract
Organ segmentation from existing imaging is vital to medical image analysis and disease diagnosis. However, the boundary shapes and area sizes of the target region tend to be diverse and variable, and the frequent pooling operations in traditional segmentation networks result in the loss of spatial information that is advantageous for segmentation. These issues pose challenges for accurate organ segmentation from medical imaging, particularly for organs with small volumes and variable shapes such as the pancreas. To offset this information loss, we propose a deep convolutional neural network (DCNN) named the multi-scale selection and multi-channel fusion segmentation model (MSC-DUnet) for pancreas segmentation. The proposed model collects detailed cues for accurate segmentation in three stages: (1) increasing the consistency between the distributions of the output probability maps from the segmentation network and the original samples through an adversarial mechanism that can capture spatial distributions, (2) gathering global spatial features from several receptive fields via multi-scale field selection (MSFS), and (3) integrating multi-level features located at varying network positions through the multi-channel fusion module (MCFM). Experimental results on the NIH Pancreas-CT dataset show that the proposed MSC-DUnet outperforms the baseline network, achieving an improvement of 5.1% in the Dice similarity coefficient (DSC), which indicates that MSC-DUnet has great potential for pancreas segmentation.
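The MSFS and MCFM modules are not specified in detail in the abstract. As a generic illustration of multi-scale feature gathering and fusion (an assumption, not the authors' architecture), the PyTorch sketch below runs parallel dilated convolutions over the same input and fuses their concatenated outputs with a 1x1 convolution.

```python
import torch
import torch.nn as nn

class MultiScaleFusion(nn.Module):
    """Toy multi-scale feature block: parallel dilated convolutions whose
    outputs are concatenated and fused with a 1x1 convolution."""

    def __init__(self, in_ch: int, out_ch: int, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        ])
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [self.act(branch(x)) for branch in self.branches]
        return self.act(self.fuse(torch.cat(feats, dim=1)))

# Toy usage on a 64-channel feature map
block = MultiScaleFusion(64, 32)
print(block(torch.randn(1, 64, 48, 48)).shape)   # torch.Size([1, 32, 48, 48])
```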
Affiliation(s)
- Meiyu Li: College of Electronic Science and Engineering, Jilin University, Changchun 130012, China
- Fenghui Lian: School of Aviation Operations and Services, Air Force Aviation University, Changchun 130000, China
- Shuxu Guo: College of Electronic Science and Engineering, Jilin University, Changchun 130012, China
17. Nguon LS, Seo K, Lim JH, Song TJ, Cho SH, Park JS, Park S. Deep Learning-Based Differentiation between Mucinous Cystic Neoplasm and Serous Cystic Neoplasm in the Pancreas Using Endoscopic Ultrasonography. Diagnostics (Basel) 2021;11:1052. [PMID: 34201066] [PMCID: PMC8229855] [DOI: 10.3390/diagnostics11061052]
Abstract
Mucinous cystic neoplasms (MCN) and serous cystic neoplasms (SCN) account for a large portion of solitary pancreatic cystic neoplasms (PCN). In this study, we implemented a convolutional neural network (CNN) model using ResNet50 to differentiate between MCN and SCN. The training data were collected retrospectively from 59 MCN and 49 SCN patients at two different hospitals. Data augmentation was used to enhance the size and quality of the training datasets. A fine-tuning approach was adopted, starting from a pre-trained model obtained via transfer learning and retraining selected layers. Testing of the network was conducted by varying the endoscopic ultrasonography (EUS) image sizes and positions to evaluate the network's differentiation performance. The proposed network model achieved up to 82.75% accuracy and an area under the curve (AUC) of 0.88 (95% CI: 0.817–0.930). The performance of the implemented deep learning network in decision-making using only EUS images is comparable to that of traditional manual decision-making using EUS images along with supporting clinical information. Gradient-weighted class activation mapping (Grad-CAM) confirmed that the network model learned the features from the cyst region accurately. This study demonstrates the feasibility of diagnosing MCN and SCN using a deep learning network model. Further improvement using more datasets is needed.
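The fine-tuning strategy described, a pre-trained ResNet50 with selected layers retrained, corresponds to standard transfer learning. The sketch below shows one way such a setup could look in PyTorch/torchvision; which layers are unfrozen, the two-class head, the optimizer settings, and the dummy batch are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained ResNet50 and adapt it for two classes (MCN vs SCN)
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)

# Freeze the early layers; fine-tune only the last residual stage and the head
for name, param in model.named_parameters():
    param.requires_grad = name.startswith(("layer4", "fc"))

model.fc = nn.Linear(model.fc.in_features, 2)   # new trainable classification head

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# Toy forward/backward pass on a dummy batch of RGB crops
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, 2, (4,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```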
Affiliation(s)
- Leang Sim Nguon: School of Electrical and Electronics Engineering, Chung-Ang University, Seoul 06974, Korea
- Kangwon Seo: School of Electrical and Electronics Engineering, Chung-Ang University, Seoul 06974, Korea
- Jung-Hyun Lim: Division of Gastroenterology, Department of Internal Medicine, Inha University School of Medicine, Incheon 22332, Korea
- Tae-Jun Song: Department of Gastroenterology, Asan Medical Center, University of Ulsan College of Medicine, Seoul 05505, Korea
- Sung-Hyun Cho: Department of Gastroenterology, Asan Medical Center, University of Ulsan College of Medicine, Seoul 05505, Korea
- Jin-Seok Park: Division of Gastroenterology, Department of Internal Medicine, Inha University School of Medicine, Incheon 22332, Korea (corresponding author)
- Suhyun Park: Department of Electronic and Electrical Engineering, Ewha Womans University, Seoul 03760, Korea (corresponding author)