1. Fortson BL, Abu-El-Haija M, Mahalingam N, Thompson TL, Vitale DS, Trout AT. Pancreas volumes in pediatric patients following index acute pancreatitis and acute recurrent pancreatitis. Pancreatology 2024;24:1-5. [PMID: 37945498; PMCID: PMC10872738; DOI: 10.1016/j.pan.2023.10.025]
Abstract
BACKGROUND/OBJECTIVES: Pancreas volume derived from imaging may objectively reveal volume loss relevant to identifying sequelae of acute pancreatitis (AP) and ultimately diagnosing chronic pancreatitis (CP). The purposes of this study were to: (1) quantify pancreas volume by imaging in children with either (a) a single episode of AP or (b) acute recurrent pancreatitis (ARP), and (2) compare these volumes to normative volumes.
METHODS: This retrospective study was institutional review board approved. A single observer segmented the pancreas (3D Slicer; slicer.org) on n = 30 CT and MRI exams for 23 children selected from a prospective registry of patients with either an index attack of AP or with ARP after a known index attack date. Patients with CP were excluded. Segmented pancreas volumes were compared to published normal values.
RESULTS: Mean pancreas volumes normalized to body surface area (BSA) in the index AP and ARP groups were 38.2 mL/m2 (range: 11.8-73.5 mL/m2) and 27.9 mL/m2 (range: 8.0-69.2 mL/m2), respectively. Post-AP, 43% (6/14) of patients had volumes below the 25th percentile, 1 (17%) of which was below the 5th percentile (p = 0.3027 vs. a normal distribution). Post-ARP, 44% (7/16) of patients had volumes below the 5th percentile (p < 0.001).
CONCLUSIONS: A significant fraction (44%) of children with ARP have pancreas volumes below the 5th percentile for BSA even in the absence of CP. A similar, but not statistically significant, fraction have pancreas volumes below the 25th percentile after an index attack of AP. Pancreatic parenchymal volume deserves additional investigation as an objective marker of parenchymal damage from acute pancreatitis and of progressive pancreatitis in children.
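The volume measurement described above (binary segmentation to millilitres, then normalization to body surface area) can be sketched in a few lines. The voxel-counting volume computation and the Mosteller BSA formula are standard; the function names and the example height/weight are illustrative and not taken from the study.

```python
import numpy as np

def bsa_mosteller(height_cm: float, weight_kg: float) -> float:
    """Body surface area (m^2) via the Mosteller formula."""
    return float(np.sqrt(height_cm * weight_kg / 3600.0))

def pancreas_volume_ml(mask: np.ndarray, voxel_spacing_mm: tuple) -> float:
    """Volume of a binary segmentation mask in millilitres.

    mask: 3D 0/1 array, e.g. exported from 3D Slicer.
    voxel_spacing_mm: (dz, dy, dx) spacing in mm; 1 mL = 1000 mm^3.
    """
    voxel_vol_mm3 = float(np.prod(voxel_spacing_mm))
    return mask.astype(bool).sum() * voxel_vol_mm3 / 1000.0

# Illustrative example: a 40 x 40 x 40-voxel mask at 1 mm isotropic spacing
mask = np.ones((40, 40, 40), dtype=np.uint8)   # 64,000 mm^3 = 64 mL
vol_ml = pancreas_volume_ml(mask, (1.0, 1.0, 1.0))
bsa = bsa_mosteller(150.0, 40.0)               # roughly 1.29 m^2 for a child
normalized = vol_ml / bsa                      # mL/m^2, the unit reported above
```

The normalized value would then be compared against published age- or BSA-referenced percentile tables, as in the study.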
2. Mikalsen SG, Skjøtskift T, Flote VG, Hämäläinen NP, Heydari M, Rydén-Eilertsen K. Extensive clinical testing of deep learning segmentation models for thorax and breast cancer radiotherapy planning. Acta Oncol 2023;62:1184-1193. [PMID: 37883678; DOI: 10.1080/0284186x.2023.2270152]
Abstract
BACKGROUND: The performance of deep learning segmentation (DLS) models for automatic organ extraction from CT images in the thorax and breast regions was investigated. Furthermore, the readiness and feasibility of integrating DLS into clinical practice were addressed by measuring the potential time savings and dosimetric impact.
MATERIAL AND METHODS: Thirty patients referred to radiotherapy for breast cancer were prospectively included. A total of 23 clinically relevant left- and right-sided organs were contoured manually on CT images according to ESTRO guidelines. Next, auto-segmentation was executed, and the geometric agreement between the auto-segmented and manually contoured organs was qualitatively assessed on a scale from 0 (not acceptable) to 3 (no corrections). A quantitative validation was carried out by calculating Dice coefficients (DSC) and the 95th percentile of Hausdorff distances (HD95). The dosimetric impact of optimizing the treatment plans on the uncorrected DLS contours was investigated in a dose coverage analysis using DVH values of the manually delineated contours as references.
RESULTS: The qualitative analysis showed that 93% of the DLS-generated OAR contours did not need corrections, except for the heart, where 67% of the contours needed corrections. The majority of DLS-generated CTVs needed corrections, whereas a minority were deemed not acceptable. Still, using the DLS model for CTV and heart delineation was on average 14 minutes faster. An average DSC of 0.91 and HD95 of 9.8 mm were found for the breasts. Likewise, an average DSC in the range [0.66, 0.76] and HD95 in the range [7.04, 12.05] mm were found for the lymph nodes.
CONCLUSION: The validation showed that the DLS-generated OAR contours can be used clinically. Corrections were required for most of the DLS-generated CTVs, which therefore warrant more attention before the DLS models can be implemented clinically.
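The two geometric metrics used above, DSC and HD95, are standard and can be sketched with plain numpy. Clinical toolkits (e.g. SimpleITK, MONAI) provide optimized surface-based versions; this brute-force sketch with illustrative function names is only meant to show what the numbers measure.

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hd95(points_a: np.ndarray, points_b: np.ndarray) -> float:
    """95th-percentile symmetric Hausdorff distance between two point sets
    (e.g. contour voxel coordinates in mm). Brute force; fine for small sets."""
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=-1)
    return max(np.percentile(d.min(axis=1), 95), np.percentile(d.min(axis=0), 95))

# Sanity check: identical masks give DSC = 1.0 and HD95 = 0.0
m = np.zeros((8, 8), dtype=np.uint8)
m[2:6, 2:6] = 1
pts = np.argwhere(m).astype(float)
print(dice(m, m), hd95(pts, pts))   # -> 1.0 0.0
```

A DSC of 0.91 thus means 91% volumetric overlap, while HD95 = 9.8 mm bounds the contour disagreement for 95% of the surface points.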
Affiliation(s)
- Mojgan Heydari
- Department of Medical Physics, Oslo University Hospital, Oslo, Norway
3. Berbís MA, Paulano Godino F, Royuela del Val J, Alcalá Mata L, Luna A. Clinical impact of artificial intelligence-based solutions on imaging of the pancreas and liver. World J Gastroenterol 2023;29:1427-1445. [PMID: 36998424; PMCID: PMC10044858; DOI: 10.3748/wjg.v29.i9.1427]
Abstract
Artificial intelligence (AI) has experienced substantial progress over the last ten years in many fields of application, including healthcare. In hepatology and pancreatology, major attention to date has been paid to its application to the assisted or even automated interpretation of radiological images, where AI can generate accurate and reproducible imaging diagnoses, reducing physicians' workload. AI can provide automatic or semi-automatic segmentation and registration of the liver and pancreatic glands and lesions. Furthermore, using radiomics, AI can introduce into radiological reports new quantitative information that is not visible to the human eye. AI has been applied in the detection and characterization of focal lesions and diffuse diseases of the liver and pancreas, such as neoplasms, chronic hepatic disease, and acute or chronic pancreatitis, among others. These solutions have been applied to the different imaging techniques commonly used to diagnose liver and pancreatic diseases, such as ultrasound, endoscopic ultrasonography, computed tomography (CT), magnetic resonance imaging, and positron emission tomography/CT. However, AI is also applied to many other relevant steps of the comprehensive clinical management of gastroenterological patients: it can be used to select the most appropriate imaging test, to improve image quality or accelerate acquisition, and to predict patient prognosis and treatment response. In this review, we summarize the current evidence on the application of AI to hepatic and pancreatic radiology, not only with regard to the interpretation of images but also to all the steps of the radiological workflow in a broader sense. Lastly, we discuss the challenges and future directions of the clinical application of AI methods.
Affiliation(s)
- M Alvaro Berbís
- Department of Radiology, HT Médica, San Juan de Dios Hospital, Córdoba 14960, Spain
- Faculty of Medicine, Autonomous University of Madrid, Madrid 28049, Spain
- Lidia Alcalá Mata
- Department of Radiology, HT Médica, Clínica las Nieves, Jaén 23007, Spain
- Antonio Luna
- Department of Radiology, HT Médica, Clínica las Nieves, Jaén 23007, Spain
4. Luo X, Liao W, He Y, Tang F, Wu M, Shen Y, Huang H, Song T, Li K, Zhang S, Zhang S, Wang G. Deep learning-based accurate delineation of primary gross tumor volume of nasopharyngeal carcinoma on heterogeneous magnetic resonance imaging: A large-scale and multi-center study. Radiother Oncol 2023;180:109480. [PMID: 36657723; DOI: 10.1016/j.radonc.2023.109480]
Abstract
BACKGROUND AND PURPOSE: The problem of obtaining accurate primary gross tumor volume (GTVp) segmentation for nasopharyngeal carcinoma (NPC) on heterogeneous magnetic resonance imaging (MRI) images with deep learning remains unsolved. Herein, we report a new deep-learning method that can accurately delineate the GTVp for NPC on multi-center MRI scans.
MATERIAL AND METHODS: We collected 1057 patients with MRI images from five hospitals and randomly selected 600 patients from three hospitals to constitute a mixed training cohort for model development. The remaining patients were used as internal (n = 259) and external (n = 198) testing cohorts for model evaluation. An augmentation-invariant strategy was proposed to delineate the GTVp from multi-center MRI images: it encourages networks to produce similar predictions for inputs with different augmentations, so that they learn invariant anatomical structure features. The Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95), average surface distance (ASD), and relative absolute volume difference (RAVD) were used to measure segmentation performance.
RESULTS: The model-generated predictions had a high overlap ratio with the ground truth. For the internal testing cohort, the average DSC, HD95, ASD, and RAVD were 0.88, 4.99 mm, 1.03 mm, and 0.13, respectively. For the external testing cohorts, the average DSC, HD95, ASD, and RAVD were 0.88, 3.97 mm, 0.97 mm, and 0.10, respectively. No significant differences were found in DSC, HD95, or ASD for patients with different T categories, MRI slice thicknesses, or in-plane spacings. Moreover, the proposed augmentation-invariant strategy outperformed the widely used nnU-Net, which uses conventional data augmentation approaches.
CONCLUSION: Our proposed method showed highly accurate GTVp segmentation for NPC on multi-center MRI images, suggesting that it has the potential to act as a generalized delineation solution for heterogeneous MRI images.
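The augmentation-invariant idea above (encourage similar predictions for differently augmented views of the same input) is typically realized as a consistency term added to the supervised loss. This toy numpy sketch only illustrates the loss arithmetic on dummy probability maps; the function names, weighting, and example arrays are illustrative, not the authors' actual network or augmentations.

```python
import numpy as np

def dice_loss(pred: np.ndarray, target: np.ndarray, eps: float = 1e-6) -> float:
    """Soft Dice loss between a probability map and a binary target."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def consistency_loss(pred_aug1: np.ndarray, pred_aug2: np.ndarray) -> float:
    """Mean squared difference between predictions for two augmented views."""
    return float(np.mean((pred_aug1 - pred_aug2) ** 2))

def total_loss(pred1, pred2, target, lam=0.1):
    """Supervised Dice loss on one view plus weighted cross-view consistency."""
    return dice_loss(pred1, target) + lam * consistency_loss(pred1, pred2)

# Dummy data: two slightly noisy "predictions" for the same ground truth
rng = np.random.default_rng(0)
target = (rng.random((16, 16)) > 0.5).astype(float)
pred1 = np.clip(target + 0.05 * rng.standard_normal((16, 16)), 0, 1)
pred2 = np.clip(target + 0.05 * rng.standard_normal((16, 16)), 0, 1)
loss = total_loss(pred1, pred2, target)   # small: both views match the target
```

During training, the consistency term pushes the network toward predictions that do not depend on the particular augmentation, which is the claimed source of cross-center robustness.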
Affiliation(s)
- Xiangde Luo
- University of Electronic Science and Technology of China, Chengdu 611731, China; Shanghai AI Laboratory, Shanghai 200030, China
- Wenjun Liao
- University of Electronic Science and Technology of China, Chengdu 611731, China; Department of Radiation Oncology, Sichuan Cancer Hospital & Institute, Sichuan Cancer Center, School of Medicine, University of Electronic Science and Technology of China, Chengdu 610041, China
- Yuan He
- Department of Radiation Oncology, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui 23000, China
- Fan Tang
- Department of Radiation Oncology, Nanfang Hospital, Southern Medical University, Guangzhou 510515, China
- Mengwan Wu
- Department of Radiation Oncology, Sichuan Cancer Hospital & Institute, Sichuan Cancer Center, School of Medicine, University of Electronic Science and Technology of China, Chengdu 610041, China
- Yuanyuan Shen
- Department of Radiation Oncology, Sichuan Cancer Hospital & Institute, Sichuan Cancer Center, School of Medicine, University of Electronic Science and Technology of China, Chengdu 610041, China
- Hui Huang
- Cancer Center, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu 610072, China
- Tao Song
- SenseTime Research, Shanghai 200233, China
- Kang Li
- West China Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu 610041, China
- Shichuan Zhang
- University of Electronic Science and Technology of China, Chengdu 611731, China; Department of Radiation Oncology, Sichuan Cancer Hospital & Institute, Sichuan Cancer Center, School of Medicine, University of Electronic Science and Technology of China, Chengdu 610041, China
- Shaoting Zhang
- University of Electronic Science and Technology of China, Chengdu 611731, China; Shanghai AI Laboratory, Shanghai 200030, China
- Guotai Wang
- University of Electronic Science and Technology of China, Chengdu 611731, China; Shanghai AI Laboratory, Shanghai 200030, China
5. Baum ZMC, Hu Y, Barratt DC. Meta-learning initializations for interactive medical image registration. IEEE Trans Med Imaging 2023;42:823-833. [PMID: 36322502; PMCID: PMC7614355; DOI: 10.1109/tmi.2022.3218147]
Abstract
We present a meta-learning framework for interactive medical image registration. Our proposed framework comprises three components: a learning-based medical image registration algorithm, a form of user interaction that refines registration at inference, and a meta-learning protocol that learns a rapidly adaptable network initialization. This paper describes a specific algorithm that implements the registration, interaction, and meta-learning protocol for our exemplar clinical application: registration of magnetic resonance (MR) imaging to interactively acquired, sparsely sampled transrectal ultrasound (TRUS) images. Our approach obtains a registration error (4.26 mm) comparable to the best-performing non-interactive learning-based 3D-to-3D method (3.97 mm) while requiring only a fraction of the data and running in real time during acquisition. Applying sparsely sampled data to non-interactive methods yields higher registration errors (6.26 mm), demonstrating the effectiveness of interactive MR-TRUS registration, which may be applied intraoperatively given the real-time nature of the adaptation process.
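The core idea of a "rapidly adaptable initialization" can be illustrated with a Reptile-style meta-learning loop (a simpler relative of the protocols used in such work) on a toy 1-D regression family. Everything here is an illustrative assumption: the task family, learning rates, and function names are not from the paper; the sketch only shows how an initialization is moved toward task-adapted weights so that a few inner steps suffice at inference.

```python
import numpy as np

def sgd_adapt(w, xs, ys, lr=0.1, steps=5):
    """Inner loop: a few gradient steps of least-squares fitting y = w * x."""
    for _ in range(steps):
        grad = np.mean(2.0 * (w * xs - ys) * xs)
        w = w - lr * grad
    return w

def reptile_init(tasks, w0=0.0, outer_lr=0.5, rounds=100):
    """Reptile-style meta-learning of a scalar initialization: repeatedly
    adapt to a sampled task, then move the init toward the adapted weights."""
    w = w0
    rng = np.random.default_rng(0)
    for _ in range(rounds):
        slope = tasks[rng.integers(len(tasks))]   # sample a task
        xs = rng.uniform(-1, 1, 20)
        ys = slope * xs
        w_adapted = sgd_adapt(w, xs, ys)
        w = w + outer_lr * (w_adapted - w)        # Reptile outer update
    return w

# Tasks are 1-D regressions with slopes around 2; the meta-init lands near 2,
# so a handful of inner steps adapts it quickly to any one task.
w_meta = reptile_init(tasks=[1.5, 2.0, 2.5])
```

In the interactive-registration setting, the "task" would be one patient's registration and the inner loop would run at inference on the sparsely sampled user-acquired data.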
Affiliation(s)
- Zachary M. C. Baum
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TS, U.K.; UCL Centre for Medical Image Computing, University College London, London W1W 7TS, U.K.
- Yipeng Hu
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TS, U.K.; UCL Centre for Medical Image Computing, University College London, London W1W 7TS, U.K.
- Dean C. Barratt
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TS, U.K.; UCL Centre for Medical Image Computing, University College London, London W1W 7TS, U.K.
6. Qu T, Li X, Wang X, Deng W, Mao L, He M, Li X, Wang Y, Liu Z, Zhang L, Jin Z, Xue H, Yu Y. Transformer guided progressive fusion network for 3D pancreas and pancreatic mass segmentation. Med Image Anal 2023;86:102801. [PMID: 37028237; DOI: 10.1016/j.media.2023.102801]
Abstract
Pancreatic masses are diverse in type, often making their clinical management challenging. This study aims to address the task of segmenting and detecting various types of pancreatic masses while accurately segmenting the pancreas. Although convolution operations perform well at extracting local details, they have difficulty capturing global representations. To alleviate this limitation, we propose a transformer-guided progressive fusion network (TGPFN) that uses the global representation captured by the transformer to supplement the long-range dependencies lost by convolution operations at different resolutions. TGPFN is built on a branch-integrated network structure, where the convolutional neural network and transformer branches first perform separate feature extraction in the encoder, and the local and global features are then progressively fused in the decoder. To effectively integrate the information of the two branches, we design a transformer guidance flow to ensure feature consistency and present a cross-network attention module to capture the channel dependencies. Extensive experiments with nnU-Net (3D) show that TGPFN improves mass segmentation (Dice: 73.93% vs. 69.40%) and detection accuracy (detection rate: 91.71% vs. 84.97%) on 416 private CTs, and also obtains performance improvements in mass segmentation (Dice: 43.86% vs. 42.07%) and detection (detection rate: 83.33% vs. 71.74%) on 419 public CTs.
7. Wei Z, Ren J, Korreman SS, Nijkamp J. Towards interactive deep-learning for tumour segmentation in head and neck cancer radiotherapy. Phys Imaging Radiat Oncol 2022;25:100408. [PMID: 36655215; PMCID: PMC9841279; DOI: 10.1016/j.phro.2022.12.005]
Abstract
Background and purpose: With deep learning, gross tumour volume (GTV) auto-segmentation has improved substantially, but substantial manual corrections are still needed. With interactive deep learning (iDL), manual corrections can be used to update a deep-learning tool while delineating, minimising the input needed to achieve acceptable segmentations. We present an iDL tool for GTV segmentation that takes annotated slices as input, and simulate its performance on a head and neck cancer (HNC) dataset.
Materials and methods: Multimodal image data of 204 HNC patients with clinical tumour and lymph node GTV delineations were used. A baseline convolutional neural network (CNN) was trained (n = 107 training, n = 22 validation) and tested (n = 24). Subsequently, user input was simulated on the initial test set by replacing one or more predicted slices with the ground-truth delineation, followed by re-training of the CNN. The objective was to optimise re-training parameters and simulate slice-selection scenarios while limiting annotations to a maximum of five slices. The remaining 51 patients were used as an independent test set, where the Dice similarity coefficient (DSC), mean surface distance (MSD), and 95% Hausdorff distance (HD95%) were assessed at baseline and after every update.
Results: Median segmentation accuracy at baseline was DSC = 0.65, MSD = 4.3 mm, HD95% = 17.5 mm. Updating the CNN using three slices equally sampled from the craniocaudal axis of the GTV in the first round, followed by two rounds of annotating one extra slice, gave the best results. The accuracy improved to DSC = 0.82, MSD = 1.6 mm, HD95% = 4.8 mm. Every CNN update took 30 s.
Conclusions: The presented iDL tool achieved substantial segmentation improvement with only five annotated slices.
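The best-performing first-round scheme above (slices equally sampled along the craniocaudal extent of the GTV) can be sketched as follows. The function name, the interior-sampling choice, and the example mask are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def equally_sampled_slices(mask: np.ndarray, n: int) -> list:
    """Pick n axial slice indices equally spaced across the craniocaudal
    extent of a 3D binary mask (axis 0 assumed craniocaudal)."""
    has_gtv = np.where(mask.reshape(mask.shape[0], -1).any(axis=1))[0]
    lo, hi = has_gtv[0], has_gtv[-1]
    # Sample interior positions, skipping the first/last slice where the
    # GTV cross-section is typically tiny and least informative.
    return [int(round(i)) for i in np.linspace(lo, hi, n + 2)[1:-1]]

# Illustrative volume: the GTV occupies slices 10..40 of a 50-slice scan
mask = np.zeros((50, 32, 32), dtype=np.uint8)
mask[10:41, 8:24, 8:24] = 1
chosen = equally_sampled_slices(mask, 3)   # three indices between 10 and 40
```

In the simulated workflow, the clinician's delineations on the chosen slices replace the predictions there, and the CNN is re-trained on the partially corrected volume.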
Affiliation(s)
- Zixiang Wei
- Aarhus University, Department of Clinical Medicine, Aarhus, Denmark; Danish Center for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark
- Jintao Ren
- Aarhus University, Department of Clinical Medicine, Aarhus, Denmark; Danish Center for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark
- Stine Sofia Korreman
- Aarhus University, Department of Clinical Medicine, Aarhus, Denmark; Danish Center for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark; Department of Oncology, Aarhus University Hospital, Aarhus, Denmark
- Jasper Nijkamp
- Aarhus University, Department of Clinical Medicine, Aarhus, Denmark; Danish Center for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark; Corresponding author at: Palle Juul-Jensens Boulevard 25, 8200 Aarhus, Denmark
8. Laino ME, Ammirabile A, Lofino L, Mannelli L, Fiz F, Francone M, Chiti A, Saba L, Orlandi MA, Savevski V. Artificial intelligence applied to pancreatic imaging: a narrative review. Healthcare (Basel) 2022;10:1511. [PMID: 36011168; PMCID: PMC9408381; DOI: 10.3390/healthcare10081511]
Abstract
The diagnosis, evaluation, and treatment planning of pancreatic pathologies usually require the combined use of different imaging modalities, mainly, computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET). Artificial intelligence (AI) has the potential to transform the clinical practice of medical imaging and has been applied to various radiological techniques for different purposes, such as segmentation, lesion detection, characterization, risk stratification, or prediction of response to treatments. The aim of the present narrative review is to assess the available literature on the role of AI applied to pancreatic imaging. Up to now, the use of computer-aided diagnosis (CAD) and radiomics in pancreatic imaging has proven to be useful for both non-oncological and oncological purposes and represents a promising tool for personalized approaches to patients. Although great developments have occurred in recent years, it is important to address the obstacles that still need to be overcome before these technologies can be implemented into our clinical routine, mainly considering the heterogeneity among studies.
Affiliation(s)
- Maria Elena Laino
- Artificial Intelligence Center, IRCCS Humanitas Research Hospital, Via Manzoni 56, Rozzano, 20089 Milan, Italy
- Correspondence: (M.E.L.); (A.A.)
- Angela Ammirabile
- Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy
- Department of Diagnostic and Interventional Radiology, IRCCS Humanitas Research Hospital, Via Manzoni 56, Rozzano, 20089 Milan, Italy
- Correspondence: (M.E.L.); (A.A.)
- Ludovica Lofino
- Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy
- Department of Diagnostic and Interventional Radiology, IRCCS Humanitas Research Hospital, Via Manzoni 56, Rozzano, 20089 Milan, Italy
- Francesco Fiz
- Nuclear Medicine Unit, Department of Diagnostic Imaging, E.O. Ospedali Galliera, 56321 Genoa, Italy
- Department of Nuclear Medicine and Clinical Molecular Imaging, University Hospital, 72074 Tübingen, Germany
- Marco Francone
- Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy
- Department of Diagnostic and Interventional Radiology, IRCCS Humanitas Research Hospital, Via Manzoni 56, Rozzano, 20089 Milan, Italy
- Arturo Chiti
- Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy
- Department of Nuclear Medicine, IRCCS Humanitas Research Hospital, Via Manzoni 56, Rozzano, 20089 Milan, Italy
- Luca Saba
- Department of Radiology, University of Cagliari, 09124 Cagliari, Italy
- Victor Savevski
- Artificial Intelligence Center, IRCCS Humanitas Research Hospital, Via Manzoni 56, Rozzano, 20089 Milan, Italy
9. Trimpl MJ, Primakov S, Lambin P, Stride EPJ, Vallis KA, Gooding MJ. Beyond automatic medical image segmentation - the spectrum between fully manual and fully automatic delineation. Phys Med Biol 2022;67. [PMID: 35523158; DOI: 10.1088/1361-6560/ac6d9c]
Abstract
Semi-automatic and fully automatic contouring tools have emerged as an alternative to fully manual segmentation, reducing the time spent contouring and increasing contour quality and consistency. Fully automatic segmentation in particular has seen exceptional improvements through the use of deep learning in recent years. These fully automatic methods may not require user interactions, but the resulting contours are often not suitable for clinical use without review by a clinician. Furthermore, they need large amounts of labelled data to be available for training. This review presents alternatives to manual or fully automatic segmentation methods along the spectrum of variable user interactivity and data availability. The challenge lies in determining how much user interaction is necessary and how this interaction can be used most effectively. While deep learning is already widely used for fully automatic tools, interactive methods are only just starting to be transformed by it. Interaction between clinician and machine, via artificial intelligence, can go both ways, and this review presents the avenues that are being pursued to improve medical image segmentation.
Affiliation(s)
- Michael J Trimpl
- Mirada Medical Ltd, Oxford, United Kingdom
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, United Kingdom
- Oxford Institute for Radiation Oncology, University of Oxford, Oxford, United Kingdom
- Sergey Primakov
- The D-Lab, Department of Precision Medicine, GROW-School for Oncology, Maastricht University, Maastricht, The Netherlands
- Philippe Lambin
- The D-Lab, Department of Precision Medicine, GROW-School for Oncology, Maastricht University, Maastricht, The Netherlands
- Eleanor P J Stride
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, United Kingdom
- Katherine A Vallis
- Oxford Institute for Radiation Oncology, University of Oxford, Oxford, United Kingdom
10. nnU-Net Deep Learning Method for Segmenting Parenchyma and Determining Liver Volume From Computed Tomography Images. Ann Surg Open 2022;3. [PMID: 36275876; PMCID: PMC9585534; DOI: 10.1097/as9.0000000000000155]
Abstract
Background: Recipient-donor matching in liver transplantation can require precise estimation of liver volume. Currently utilized demographic-based organ volume estimates are imprecise and nonspecific. Manual organ annotation from medical imaging is effective; however, the process is cumbersome, often taking an undesirable length of time to complete. Additionally, manual organ segmentation and volume measurement incur additional direct costs to payers for either a clinician or trained technician to complete. Deep learning-based automatic image segmentation tools are well positioned to address this clinical need.
Objectives: To build a deep learning model that can accurately estimate liver volumes and create 3D organ renderings from computed tomography (CT) medical images.
Methods: We trained a nnU-Net deep learning model to identify liver borders in images of the abdominal cavity, using 151 publicly available CT scans. For each CT scan, a board-certified radiologist annotated the liver margins (ground-truth annotations). We split the image dataset into training, validation, and test sets, trained our nnU-Net model on these data to identify liver borders in 3D voxels, and integrated these to reconstruct a total organ volume estimate.
Results: The nnU-Net model identified the border of the liver with a mean overlap accuracy of 97.5% compared with ground-truth annotations, and the calculated volume estimates achieved a mean percent error of 1.92% ± 1.54% on the test set.
Conclusions: Volume estimation of livers from CT scans is accurate using a nnU-Net deep learning architecture. Appropriately deployed, a nnU-Net algorithm is accurate and quick, making it suitable for incorporation into the pretransplant clinical decision-making workflow.
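The evaluation above integrates segmented voxels into a volume and scores it against ground truth with a mean percent error. A minimal sketch, assuming "mean percent error" means the mean absolute percent difference from the ground-truth volumes (the function names and example numbers are illustrative):

```python
import numpy as np

def volume_ml(mask: np.ndarray, spacing_mm=(1.0, 1.0, 1.0)) -> float:
    """Organ volume in mL from a binary mask (1 mL = 1000 mm^3)."""
    return mask.astype(bool).sum() * float(np.prod(spacing_mm)) / 1000.0

def mean_percent_error(pred_vols, true_vols) -> float:
    """Mean absolute percent error of predicted vs. ground-truth volumes."""
    pred = np.asarray(pred_vols, dtype=float)
    true = np.asarray(true_vols, dtype=float)
    return float(np.mean(np.abs(pred - true) / true) * 100.0)

# Illustrative example: three cases with small over-/under-estimates
err = mean_percent_error([1480.0, 1520.0, 1500.0], [1500.0, 1500.0, 1500.0])
print(f"{err:.2f}%")   # -> 0.89%
```

At the reported ~2% error level, a 1500 mL liver would typically be estimated within about 30 mL, which is the scale of precision relevant to graft matching.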
11. Althobaiti MM, Almulihi A, Ashour AA, Mansour RF, Gupta D. Design of optimal deep learning-based pancreatic tumor and nontumor classification model using computed tomography scans. J Healthc Eng 2022;2022:2872461. [PMID: 35070232; PMCID: PMC8769827; DOI: 10.1155/2022/2872461]
Althobaiti MM, Almulihi A, Ashour AA, Mansour RF, Gupta D. Design of Optimal Deep Learning-Based Pancreatic Tumor and Nontumor Classification Model Using Computed Tomography Scans. JOURNAL OF HEALTHCARE ENGINEERING 2022; 2022:2872461. [PMID: 35070232 PMCID: PMC8769827 DOI: 10.1155/2022/2872461] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/29/2021] [Revised: 12/10/2021] [Accepted: 12/17/2021] [Indexed: 12/18/2022]
Abstract
Pancreatic tumor is a lethal kind of tumor, and its prognosis is currently very poor. Automated pancreatic tumor classification using a computer-aided diagnosis (CAD) model is necessary to track, predict, and classify the existence of pancreatic tumors. Artificial intelligence (AI) can offer extensive diagnostic expertise and accurate interventional image interpretation. With this motivation, this study designs an optimal deep learning-based pancreatic tumor and nontumor classification (ODL-PTNTC) model using CT images. The goal of the ODL-PTNTC technique is to detect and classify the existence of pancreatic tumors and nontumor cases. The proposed ODL-PTNTC technique includes an adaptive window filtering (AWF) technique to remove noise. In addition, a sailfish optimizer-based Kapur's thresholding (SFO-KT) technique is employed for image segmentation. Moreover, feature extraction using Capsule Network (CapsNet) is applied to generate a set of feature vectors, and a Political Optimizer (PO) with a Cascade Forward Neural Network (CFNN) is employed for classification. To validate the enhanced performance of the ODL-PTNTC technique, a series of simulations was carried out and the results were investigated under several aspects. A comprehensive comparative analysis demonstrated the promising performance of the ODL-PTNTC technique over recent approaches.
Affiliation(s)
- Maha M. Althobaiti
- Department of Computer Science, College of Computing and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
- Ahmed Almulihi
- Department of Computer Science, College of Computing and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
- Amal Adnan Ashour
- Department of Oral & Maxillofacial Surgery and Diagnostic Sciences, Faculty of Dentistry, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
- Romany F. Mansour
- Department of Mathematics, Faculty of Science, New Valley University, El-Kharga 72511, Egypt
- Deepak Gupta
- Department of Computer Science & Engineering, Maharaja Agrasen Institute of Technology, Delhi, India
12. Chen X, Fu R, Shao Q, Chen Y, Ye Q, Li S, He X, Zhu J. Application of artificial intelligence to pancreatic adenocarcinoma. Front Oncol 2022;12:960056. [PMID: 35936738; PMCID: PMC9353734; DOI: 10.3389/fonc.2022.960056]
Abstract
BACKGROUND AND OBJECTIVES: Pancreatic cancer (PC) is one of the deadliest cancers worldwide, although substantial advances have been made in its comprehensive treatment. The development of artificial intelligence (AI) technology has allowed its clinical applications to expand remarkably in recent years. Diverse methods and algorithms are employed by AI to extrapolate new data from clinical records to aid in the treatment of PC. In this review, we summarize AI's use in several aspects of PC diagnosis and therapy, as well as its limits and potential future research avenues.
METHODS: We examined the most recent research on the use of AI in PC. The articles were categorized and examined according to the medical task of their algorithm. Two search engines, PubMed and Google Scholar, were used to screen the articles.
RESULTS: Overall, 66 papers published in 2001 or later were selected. Of the four medical tasks (risk assessment, diagnosis, treatment, and prognosis prediction), diagnosis was the most frequently researched, and retrospective single-center studies were the most prevalent. We found that the different medical tasks and algorithms included in the reviewed studies caused the performance of the models to vary greatly. Deep learning algorithms, on the other hand, produced excellent results in all of the subdivisions studied.
CONCLUSIONS: AI is a promising tool for helping PC patients and may contribute to improved patient outcomes. The integration of humans and AI in clinical medicine is still in its infancy and requires the in-depth cooperation of multidisciplinary personnel.
Affiliation(s)
- Xi Chen, Department of General Surgery, Second Affiliated Hospital Zhejiang University School of Medicine, Hangzhou, China
- Ruibiao Fu, Department of General Surgery, Second Affiliated Hospital Zhejiang University School of Medicine, Hangzhou, China
- Qian Shao, Department of Surgical Ward 1, Ningbo Women and Children’s Hospital, Ningbo, China
- Yan Chen, Department of General Surgery, Second Affiliated Hospital Zhejiang University School of Medicine, Hangzhou, China
- Qinghuang Ye, Department of General Surgery, Second Affiliated Hospital Zhejiang University School of Medicine, Hangzhou, China
- Sheng Li, College of Information Engineering, Zhejiang University of Technology, Hangzhou, China
- Xiongxiong He, College of Information Engineering, Zhejiang University of Technology, Hangzhou, China
- Jinhui Zhu (corresponding author), Department of General Surgery, Second Affiliated Hospital Zhejiang University School of Medicine, Hangzhou, China
13
Qu T, Wang X, Fang C, Mao L, Li J, Li P, Qu J, Li X, Xue H, Yu Y, Jin Z. M3Net: A multi-scale multi-view framework for multi-phase pancreas segmentation based on cross-phase non-local attention. Med Image Anal 2021; 75:102232. [PMID: 34700243 DOI: 10.1016/j.media.2021.102232]
Abstract
Complementary visual information from the arterial and venous phases of CT can help better distinguish the pancreas from its surrounding structures, yet the exploitation of cross-phase contextual information remains underexplored in computer-aided pancreas segmentation. This paper presents M3Net, a framework that integrates multi-scale multi-view information for multi-phase pancreas segmentation. The core of M3Net is built upon a dual-path network in which individual branches are set up for the two phases. Cross-phase interactive connections bridging the two branches are introduced to interleave and integrate dual-phase complementary visual information. In addition, two types of non-local attention modules are devised to enhance the high-level feature representation across phases. First, a location attention module generates cross-phase reliable feature correlations to suppress misalignment regions. Second, a depth-wise attention module captures channel dependencies and then strengthens feature representations. The experimental data consist of 224 internal CTs (106 normal and 118 abnormal) with 1 mm slice thickness, and 66 external CTs (29 normal and 37 abnormal) with 5 mm slice thickness. The method achieves new state-of-the-art performance with an average DSC of 91.19% on the internal data, and a promising result with an average DSC of 86.34% on the external data.
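The cross-phase non-local attention at the heart of this abstract is, in essence, a scaled dot-product attention in which features from one contrast phase attend to features from the other. The following NumPy sketch is illustrative only (function and variable names are hypothetical, not the authors' code) and shows the basic affinity-weighting step on flattened feature maps:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_phase_attention(feat_a, feat_v):
    """Toy non-local attention: arterial-phase features (N, C) attend to
    venous-phase features (N, C); N = number of flattened spatial positions."""
    scores = feat_a @ feat_v.T / np.sqrt(feat_a.shape[1])  # (N, N) cross-phase affinity
    weights = softmax(scores, axis=1)                      # each row sums to 1
    return weights @ feat_v                                # (N, C) venous-informed features

rng = np.random.default_rng(0)
arterial = rng.normal(size=(16, 8))  # 16 positions, 8 channels
venous = rng.normal(size=(16, 8))
fused = cross_phase_attention(arterial, venous)
print(fused.shape)  # (16, 8)
```

In the actual model this would operate on deep CNN feature maps with learned query/key/value projections; the sketch omits those to isolate the non-local weighting itself.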
Affiliation(s)
- Taiping Qu, AI Lab, Deepwise Healthcare, Beijing 100080, China
- Xiheng Wang, Department of Radiology, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Shuaifuyuan No.1, Wangfujing Street, Dongcheng District, Beijing 100730, China
- Chaowei Fang, School of Artificial Intelligence, Xidian University, Xian, China
- Li Mao, AI Lab, Deepwise Healthcare, Beijing 100080, China
- Juan Li, Department of Radiology, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Shuaifuyuan No.1, Wangfujing Street, Dongcheng District, Beijing 100730, China
- Ping Li, Department of Radiology, the Affiliated Cancer Hospital of Zhengzhou University & Henan Cancer Hospital, 127 Dongming Road, Zhengzhou 450008, China
- Jinrong Qu, Department of Radiology, the Affiliated Cancer Hospital of Zhengzhou University & Henan Cancer Hospital, 127 Dongming Road, Zhengzhou 450008, China
- Xiuli Li, AI Lab, Deepwise Healthcare, Beijing 100080, China
- Huadan Xue, Department of Radiology, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Shuaifuyuan No.1, Wangfujing Street, Dongcheng District, Beijing 100730, China
- Yizhou Yu, Department of Computer Science, The University of Hong Kong, Pokfulam, Hong Kong
- Zhengyu Jin, Department of Radiology, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Shuaifuyuan No.1, Wangfujing Street, Dongcheng District, Beijing 100730, China
14
Enriquez JS, Chu Y, Pudakalakatti S, Hsieh KL, Salmon D, Dutta P, Millward NZ, Lurie E, Millward S, McAllister F, Maitra A, Sen S, Killary A, Zhang J, Jiang X, Bhattacharya PK, Shams S. Hyperpolarized Magnetic Resonance and Artificial Intelligence: Frontiers of Imaging in Pancreatic Cancer. JMIR Med Inform 2021; 9:e26601. [PMID: 34137725 PMCID: PMC8277399 DOI: 10.2196/26601]
Abstract
BACKGROUND There is an unmet need for noninvasive imaging markers that can help identify the aggressive subtype(s) of pancreatic ductal adenocarcinoma (PDAC) at diagnosis and at an earlier time point, and evaluate the efficacy of therapy prior to tumor reduction. In the past few years, there have been two major developments with potential for a significant impact in establishing imaging biomarkers for PDAC and pancreatic cancer premalignancy: (1) hyperpolarized metabolic (HP)-magnetic resonance (MR), which increases the sensitivity of conventional MR by over 10,000-fold, enabling real-time metabolic measurements; and (2) applications of artificial intelligence (AI). OBJECTIVE The objective of this review was to discuss these two exciting but independent developments (HP-MR and AI) in the realm of PDAC imaging and detection, drawing on the literature available to date. METHODS A systematic review following the PRISMA extension for Scoping Reviews (PRISMA-ScR) guidelines was performed. Studies addressing the utilization of HP-MR and/or AI for early detection, assessment of aggressiveness, and interrogating the early efficacy of therapy in patients with PDAC cited in recent clinical guidelines were extracted from the PubMed and Google Scholar databases. The studies were reviewed following predefined exclusion and inclusion criteria, and grouped based on the utilization of HP-MR and/or AI in PDAC diagnosis. RESULTS Part of the goal of this review was to highlight the knowledge gap of early detection in pancreatic cancer by any imaging modality, and to emphasize how AI and HP-MR can address this critical gap. We reviewed every paper published on HP-MR applications in PDAC, including six preclinical studies and one clinical trial. We also reviewed several HP-MR-related articles describing new probes with many functional applications in PDAC.
On the AI side, we reviewed all existing papers that met our inclusion criteria on AI applications for evaluating computed tomography (CT) and MR images in PDAC. With the emergence of AI and its unique capability to learn across multimodal data, along with sensitive metabolic imaging using HP-MR, this knowledge gap in PDAC can be adequately addressed. CT is an affordable, accessible, and widespread imaging modality worldwide; largely for this reason, most of the data discussed are based on CT imaging datasets. Although there were relatively few MR-related papers included in this review, we believe that with the rapid adoption of MR imaging and HP-MR, more clinical data on pancreatic cancer imaging will be available in the near future. CONCLUSIONS Integration of AI, HP-MR, and multimodal imaging information in pancreatic cancer may lead to the development of real-time biomarkers for early detection, assessing aggressiveness, and interrogating the early efficacy of therapy in PDAC.
Affiliation(s)
- José S Enriquez, Department of Cancer Systems Imaging, University of Texas MD Anderson Cancer Center, Houston, TX, United States; Graduate School of Biomedical Sciences, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Yan Chu, School of Biomedical Informatics, University of Texas Health Science Center at Houston, Houston, TX, United States
- Shivanand Pudakalakatti, Department of Cancer Systems Imaging, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Kang Lin Hsieh, School of Biomedical Informatics, University of Texas Health Science Center at Houston, Houston, TX, United States
- Duncan Salmon, Department of Electrical and Computer Engineering, Rice University, Houston, TX, United States
- Prasanta Dutta, Department of Cancer Systems Imaging, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Niki Zacharias Millward, Graduate School of Biomedical Sciences, University of Texas MD Anderson Cancer Center, Houston, TX, United States; Department of Urology, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Eugene Lurie, Department of Translational Molecular Pathology, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Steven Millward, Department of Cancer Systems Imaging, University of Texas MD Anderson Cancer Center, Houston, TX, United States; Graduate School of Biomedical Sciences, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Florencia McAllister, Graduate School of Biomedical Sciences, University of Texas MD Anderson Cancer Center, Houston, TX, United States; Department of Clinical Cancer Prevention, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Anirban Maitra, Graduate School of Biomedical Sciences, University of Texas MD Anderson Cancer Center, Houston, TX, United States; Department of Pathology, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Subrata Sen, Graduate School of Biomedical Sciences, University of Texas MD Anderson Cancer Center, Houston, TX, United States; Department of Translational Molecular Pathology, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Ann Killary, Graduate School of Biomedical Sciences, University of Texas MD Anderson Cancer Center, Houston, TX, United States; Department of Translational Molecular Pathology, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Jian Zhang, Division of Computer Science and Engineering, Louisiana State University, Baton Rouge, LA, United States
- Xiaoqian Jiang, School of Biomedical Informatics, University of Texas Health Science Center at Houston, Houston, TX, United States
- Pratip K Bhattacharya, Department of Cancer Systems Imaging, University of Texas MD Anderson Cancer Center, Houston, TX, United States; Graduate School of Biomedical Sciences, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Shayan Shams, School of Biomedical Informatics, University of Texas Health Science Center at Houston, Houston, TX, United States
15
Shao Y, Zhang YX, Chen HH, Lu SS, Zhang SC, Zhang JX. Advances in the application of artificial intelligence in solid tumor imaging. Artif Intell Cancer 2021; 2:12-24. [DOI: 10.35713/aic.v2.i2.12]
Abstract
Early diagnosis and timely treatment are crucial in reducing cancer-related mortality. Artificial intelligence (AI) has greatly relieved clinical workloads and changed current medical workflows. We searched for recent studies, reports, and reviews referring to AI and solid tumors; many reviews have summarized AI applications in the diagnosis and treatment of a single tumor type. We herein systematically review the advances in AI applications across multiple solid tumors, including those of the esophagus, stomach, intestine, breast, thyroid, prostate, lung, liver, cervix, pancreas, and kidney, with a specific focus on the continual improvement of model performance in imaging practice.
Affiliation(s)
- Ying Shao, Department of Laboratory Medicine, People Hospital of Jiangying, Jiangying 214400, Jiangsu Province, China
- Yu-Xuan Zhang, Department of Laboratory Medicine, The First Affiliated Hospital of Nanjing Medical University, Nanjing 210029, Jiangsu Province, China
- Huan-Huan Chen, Department of Laboratory Medicine, The First Affiliated Hospital of Nanjing Medical University, Nanjing 210029, Jiangsu Province, China
- Shan-Shan Lu, Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing 210029, Jiangsu Province, China
- Shi-Chang Zhang, Department of Laboratory Medicine, The First Affiliated Hospital of Nanjing Medical University, Nanjing 210029, Jiangsu Province, China
- Jie-Xin Zhang, Department of Laboratory Medicine, The First Affiliated Hospital of Nanjing Medical University, Nanjing 210029, Jiangsu Province, China
16
Panda A, Korfiatis P, Suman G, Garg SK, Polley EC, Singh DP, Chari ST, Goenka AH. Two-stage deep learning model for fully automated pancreas segmentation on computed tomography: Comparison with intra-reader and inter-reader reliability at full and reduced radiation dose on an external dataset. Med Phys 2021; 48:2468-2481. [PMID: 33595105 DOI: 10.1002/mp.14782]
Abstract
PURPOSE To develop a two-stage three-dimensional (3D) convolutional neural network (CNN) for fully automated volumetric segmentation of the pancreas on computed tomography (CT) and to further evaluate its performance in the context of intra-reader and inter-reader reliability at full dose and reduced radiation dose CTs on a public dataset. METHODS A dataset of 1994 abdomen CT scans (portal venous phase, slice thickness ≤ 3.75-mm, multiple CT vendors) was curated by two radiologists (R1 and R2) to exclude cases with pancreatic pathology, suboptimal image quality, and image artifacts (n = 77). The remaining 1917 CTs were equally allocated between R1 and R2 for volumetric pancreas segmentation [ground truth (GT)]. This internal dataset was randomly divided into training (n = 1380), validation (n = 248), and test (n = 289) sets for the development of a two-stage 3D CNN model based on a modified U-net architecture for automated volumetric pancreas segmentation. The model's performance for pancreas segmentation and the differences in model-predicted pancreatic volumes vs GT volumes were compared on the test set. Subsequently, an external dataset from The Cancer Imaging Archive (TCIA) that had CT scans acquired at standard radiation dose and the same scans reconstructed at a simulated 25% radiation dose was curated (n = 41). Volumetric pancreas segmentation was done on this TCIA dataset by R1 and R2 independently on the full dose and then on the reduced radiation dose CT images. Intra-reader and inter-reader reliability, the model's segmentation performance, and reliability between model-predicted pancreatic volumes at full vs reduced dose were measured. Finally, the model's performance was tested on the benchmarking National Institutes of Health (NIH)-Pancreas CT (PCT) dataset. RESULTS The 3D CNN had mean (SD) Dice similarity coefficient (DSC): 0.91 (0.03) and average Hausdorff distance of 0.15 (0.09) mm on the test set.
The model's performance was equivalent between males and females (P = 0.08) and across different CT slice thicknesses (P > 0.05) based on noninferiority statistical testing. There was no difference between model-predicted and GT pancreatic volumes [mean predicted volume 99 cc (31 cc); GT volume 101 cc (33 cc), P = 0.33]. The mean pancreatic volume difference was -2.7 cc (percent difference: -2.4% of GT volume) with excellent correlation between model-predicted and GT volumes [concordance correlation coefficient (CCC) = 0.97]. In the external TCIA dataset, the model had higher reliability than R1 and R2 on full vs reduced dose CT scans [model mean (SD) DSC: 0.96 (0.02), CCC = 0.995 vs R1 DSC: 0.83 (0.07), CCC = 0.89, and R2 DSC: 0.87 (0.04), CCC = 0.97]. The DSC and volume concordance correlations for R1 vs R2 (inter-reader reliability) were 0.85 (0.07), CCC = 0.90 on the full dose and 0.83 (0.07), CCC = 0.96 on the reduced dose datasets. There was good reliability between the model and R1 at both full and reduced dose CT [full dose: DSC: 0.81 (0.07), CCC = 0.83 and reduced dose: DSC: 0.81 (0.08), CCC = 0.87]. Likewise, there was good reliability between the model and R2 at both full and reduced dose CT [full dose: DSC: 0.84 (0.05), CCC = 0.89 and reduced dose: DSC: 0.83 (0.06), CCC = 0.89]. There was no difference between model-predicted and GT pancreatic volumes in the TCIA dataset [mean predicted volume 96 cc (33 cc); GT pancreatic volume 89 cc (30 cc), P = 0.31]. The model had mean (SD) DSC: 0.89 (0.04) (minimum-maximum DSC: 0.79-0.96) on the NIH-PCT dataset. CONCLUSION A 3D CNN developed on the largest dataset of CTs is accurate for fully automated volumetric pancreas segmentation and is generalizable across a wide range of CT slice thicknesses, radiation doses, and patient genders. This 3D CNN offers a scalable tool to leverage biomarkers from pancreas morphometrics and radiomics for pancreatic diseases, including early pancreatic cancer detection.
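The two headline metrics in this abstract, the Dice similarity coefficient (DSC) and the concordance correlation coefficient (CCC, in Lin's formulation), are straightforward to compute. This sketch uses toy masks and illustrative volume pairs, not the study's data, and the helper names are hypothetical:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient (DSC) between two binary segmentation masks."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * intersection / (mask_a.sum() + mask_b.sum())

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient between paired measurements."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    covariance = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * covariance / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# Toy masks: a 6x6 "pancreas" and the same square shifted one voxel to the right.
gt = np.zeros((10, 10), dtype=bool); gt[2:8, 2:8] = True
pred = np.zeros((10, 10), dtype=bool); pred[2:8, 3:9] = True
print(round(dice(gt, pred), 3))  # 0.833

# Toy paired volumes (cc), illustrative only -- not the study's data.
print(round(lin_ccc([101, 95, 110, 87], [99, 93, 112, 85]), 3))  # 0.976
```

Unlike plain Pearson correlation, the CCC penalizes both scatter and systematic bias between the two volume series, which is why it suits agreement analyses such as this one.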
Affiliation(s)
- Ananya Panda, Department of Radiology, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA
- Panagiotis Korfiatis, Department of Radiology, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA
- Garima Suman, Department of Radiology, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA
- Sushil K Garg, Department of Gastroenterology and Hepatology, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA
- Eric C Polley, Department of Biostatistics, Health Sciences Research, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA
- Dhruv P Singh, Department of Gastroenterology and Hepatology, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA
- Suresh T Chari, Department of Gastroenterology, Hepatology and Nutrition, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX, 77030, USA
- Ajit H Goenka, Department of Radiology, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA
17
Barat M, Chassagnon G, Dohan A, Gaujoux S, Coriat R, Hoeffel C, Cassinotto C, Soyer P. Artificial intelligence: a critical review of current applications in pancreatic imaging. Jpn J Radiol 2021; 39:514-523. [PMID: 33550513 DOI: 10.1007/s11604-021-01098-5]
Abstract
The applications of artificial intelligence (AI), including machine learning and deep learning, in the field of pancreatic disease imaging are rapidly expanding. AI can be used for the detection of pancreatic ductal adenocarcinoma and other pancreatic tumors, but also for pancreatic lesion characterization. In this review, the basics of radiomics and the recent developments and current results of AI in the field of pancreatic tumors are presented. Limitations and future perspectives of AI are discussed.
Affiliation(s)
- Maxime Barat, Department of Radiology, Hopital Cochin, Assistance Publique-Hopitaux de Paris, 27 Rue du Faubourg Saint-Jacques, Paris, France; Université de Paris, Descartes-Paris 5, 75006, Paris, France
- Guillaume Chassagnon, Department of Radiology, Hopital Cochin, Assistance Publique-Hopitaux de Paris, 27 Rue du Faubourg Saint-Jacques, Paris, France; Université de Paris, Descartes-Paris 5, 75006, Paris, France
- Anthony Dohan, Department of Radiology, Hopital Cochin, Assistance Publique-Hopitaux de Paris, 27 Rue du Faubourg Saint-Jacques, Paris, France; Université de Paris, Descartes-Paris 5, 75006, Paris, France
- Sébastien Gaujoux, Université de Paris, Descartes-Paris 5, 75006, Paris, France; Department of Abdominal Surgery, Hopital Cochin, Assistance Publique-Hopitaux de Paris, 75014, Paris, France
- Romain Coriat, Université de Paris, Descartes-Paris 5, 75006, Paris, France; Department of Gastroenterology, Hopital Cochin, Assistance Publique-Hopitaux de Paris, 75014, Paris, France
- Christine Hoeffel, Department of Radiology, Robert Debré Hospital, 51092, Reims, France
- Christophe Cassinotto, Department of Radiology, CHU Montpellier, University of Montpellier, Saint-Éloi Hospital, 34000, Montpellier, France
- Philippe Soyer, Department of Radiology, Hopital Cochin, Assistance Publique-Hopitaux de Paris, 27 Rue du Faubourg Saint-Jacques, Paris, France; Université de Paris, Descartes-Paris 5, 75006, Paris, France
18
Si K, Xue Y, Yu X, Zhu X, Li Q, Gong W, Liang T, Duan S. Fully end-to-end deep-learning-based diagnosis of pancreatic tumors. Am J Cancer Res 2021; 11:1982-1990. [PMID: 33408793 PMCID: PMC7778580 DOI: 10.7150/thno.52508]
Abstract
Artificial intelligence can facilitate clinical decision making by considering massive amounts of medical imaging data. Various algorithms have been implemented for different clinical applications. Accurate diagnosis and treatment require reliable and interpretable data. For pancreatic tumor diagnosis, only 58.5% of images from the First Affiliated Hospital and the Second Affiliated Hospital, Zhejiang University School of Medicine, are directly usable by the diagnostic model, and manually filtering out the unusable images increases labor and time costs. Methods: This study used a training dataset of 143,945 dynamic contrast-enhanced CT images of the abdomen from 319 patients. The proposed model contained four stages: image screening, pancreas location, pancreas segmentation, and pancreatic tumor diagnosis. Results: We established a fully end-to-end deep-learning model for diagnosing pancreatic tumors and proposing treatment. The model considers original abdominal CT images without any manual preprocessing. Our artificial-intelligence-based system achieved an area under the curve of 0.871 and an F1 score of 88.5% using an independent testing dataset containing 107,036 clinical CT images from 347 patients. The average accuracy for all tumor types was 82.7%, and the independent accuracies of identifying intraductal papillary mucinous neoplasm and pancreatic ductal adenocarcinoma were 100% and 87.6%, respectively. The average test time per patient was 18.6 s, compared with at least 8 min for manual review. Furthermore, the model provided a transparent and interpretable diagnosis by producing saliency maps highlighting the regions relevant to its decision. Conclusions: The proposed model can potentially deliver efficient and accurate preoperative diagnoses that could aid the surgical management of pancreatic tumors.
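The F1 score reported in this abstract is the harmonic mean of precision and recall, computed from confusion counts. A minimal sketch with illustrative counts (not the study's data):

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall, from confusion counts."""
    precision = tp / (tp + fp)  # fraction of positive calls that are correct
    recall = tp / (tp + fn)     # fraction of true positives that are found
    return 2 * precision * recall / (precision + recall)

# Illustrative confusion counts only -- not the study's data.
print(round(f1_score(tp=885, fp=115, fn=115), 3))  # 0.885
```

Because the harmonic mean is dominated by the smaller of the two terms, a high F1 requires both few false positives and few false negatives, which is why it is a common companion to the AUC for imbalanced diagnostic tasks.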
19
The integration of artificial intelligence models to augment imaging modalities in pancreatic cancer. Journal of Pancreatology 2020. [DOI: 10.1097/jp9.0000000000000056]
20
Arabi H, Zaidi H. Applications of artificial intelligence and deep learning in molecular imaging and radiotherapy. Eur J Hybrid Imaging 2020; 4:17. [PMID: 34191161 PMCID: PMC8218135 DOI: 10.1186/s41824-020-00086-8]
Abstract
This brief review summarizes the major applications of artificial intelligence (AI), in particular deep learning approaches, in molecular imaging and radiation therapy research. To this end, the applications of artificial intelligence in five generic fields of molecular imaging and radiation therapy, including PET instrumentation design, PET image reconstruction quantification and segmentation, image denoising (low-dose imaging), radiation dosimetry and computer-aided diagnosis, and outcome prediction, are discussed. This review sets out to cover briefly the fundamental concepts of AI and deep learning, followed by a presentation of seminal achievements and the challenges facing their adoption in the clinical setting.
Affiliation(s)
- Hossein Arabi, Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
- Habib Zaidi, Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland; Geneva University Neurocenter, Geneva University, CH-1205, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, 9700, Groningen, RB, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, 500, Odense, Denmark