1. Mukherjee S, Antony A, Patnam NG, Trivedi KH, Karbhari A, Nagaraj M, Murlidhar M, Goenka AH. Pancreas segmentation using AI developed on the largest CT dataset with multi-institutional validation and implications for early cancer detection. Sci Rep 2025; 15:17096. [PMID: 40379726] [PMCID: PMC12084540] [DOI: 10.1038/s41598-025-01802-9]
Abstract
Accurate and fully automated pancreas segmentation is critical for advancing imaging biomarkers in early pancreatic cancer detection and for biomarker discovery in endocrine and exocrine pancreatic diseases. We developed and evaluated a deep learning (DL)-based convolutional neural network (CNN) for automated pancreas segmentation using the largest single-institution dataset to date (n = 3031 CTs). Ground truth segmentations were performed by radiologists, which were used to train a 3D nnU-Net model through five-fold cross-validation, generating an ensemble of top-performing models. To assess generalizability, the model was externally validated on the multi-institutional AbdomenCT-1K dataset (n = 585), for which volumetric segmentations were newly generated by expert radiologists and will be made publicly available. In the test subset (n = 452), the CNN achieved a mean Dice Similarity Coefficient (DSC) of 0.94 (SD 0.05), demonstrating high spatial concordance with radiologist-annotated volumes (Concordance Correlation Coefficient [CCC]: 0.95). On the AbdomenCT-1K dataset, the model achieved a DSC of 0.96 (SD 0.04) and a CCC of 0.98, confirming its robustness across diverse imaging conditions. The proposed DL model establishes new performance benchmarks for fully automated pancreas segmentation, offering a scalable and generalizable solution for large-scale imaging biomarker research and clinical translation.
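The two headline metrics above, the Dice similarity coefficient (spatial overlap of predicted and radiologist masks) and the concordance correlation coefficient (agreement of the derived volumes), can be computed as in the following minimal NumPy sketch; the masks, volume arrays, and values are illustrative placeholders, not data from the paper.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient (DSC) between two binary segmentation masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 2.0 * np.logical_and(pred, ref).sum() / denom if denom else 1.0

def concordance_correlation(x, y) -> float:
    """Lin's concordance correlation coefficient (CCC) for paired measurements."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return 2.0 * cov / (x.var() + y.var() + (mx - my) ** 2)

# Illustrative per-case pancreas volumes (cm^3) from CNN vs. radiologist segmentations.
cnn_volumes = [71.2, 88.5, 64.0, 95.7]
ref_volumes = [70.4, 90.1, 66.3, 94.2]
print(round(concordance_correlation(cnn_volumes, ref_volumes), 3))
```
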
Affiliation(s)
- Sovanlal Mukherjee, Department of Radiology, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA
- Ajith Antony, Department of Radiology, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA
- Nandakumar G Patnam, Department of Radiology, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA
- Kamaxi H Trivedi, Department of Radiology, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA
- Aashna Karbhari, Department of Radiology, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA
- Madhu Nagaraj, Department of Radiology, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA
- Murlidhar Murlidhar, Department of Radiology, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA
- Ajit H Goenka, Professor of Radiology, Consultant, Divisions of Abdominal and Nuclear Radiology, Co-Chair, Nuclear Radiology Research Operations, Chair, Enterprise PET/MR Research, Education and Executive Committee, Program Co-Leader, Risk Assessment, Early Detection and Interception (REDI), Mayo Clinic Comprehensive Cancer Center (MCCCC), 200 First St SW, Charlton 1, Rochester, MN, 55905, USA

2. Huang C, Shen Y, Galgano SJ, Goenka AH, Hecht EM, Kambadakone A, Wang ZJ, Chu LC. Advancements in early detection of pancreatic cancer: the role of artificial intelligence and novel imaging techniques. Abdom Radiol (NY) 2025; 50:1731-1743. [PMID: 39467913] [DOI: 10.1007/s00261-024-04644-7]
Abstract
Early detection is crucial for improving survival rates of pancreatic ductal adenocarcinoma (PDA), yet current diagnostic methods can often fail at this stage. Recently, there has been significant interest in improving risk stratification and developing imaging biomarkers, through novel imaging techniques, and most notably, artificial intelligence (AI) technology. This review provides an overview of these advancements, with a focus on deep learning methods for early detection of PDA.
Affiliation(s)
- Yiqiu Shen, New York University Langone Health, New York, USA
- Zhen Jane Wang, University of California, San Francisco, San Francisco, USA
- Linda C Chu, Johns Hopkins University School of Medicine, Baltimore, USA

3. Podină N, Gheorghe EC, Constantin A, Cazacu I, Croitoru V, Gheorghe C, Balaban DV, Jinga M, Țieranu CG, Săftoiu A. Artificial Intelligence in Pancreatic Imaging: A Systematic Review. United European Gastroenterol J 2025; 13:55-77. [PMID: 39865461] [PMCID: PMC11866320] [DOI: 10.1002/ueg2.12723]
Abstract
The rising incidence of pancreatic diseases, including acute and chronic pancreatitis and various pancreatic neoplasms, poses a significant global health challenge. Pancreatic ductal adenocarcinoma (PDAC) for example, has a high mortality rate due to late-stage diagnosis and its inaccessible location. Advances in imaging technologies, though improving diagnostic capabilities, still necessitate biopsy confirmation. Artificial intelligence, particularly machine learning and deep learning, has emerged as a revolutionary force in healthcare, enhancing diagnostic precision and personalizing treatment. This narrative review explores Artificial intelligence's role in pancreatic imaging, its technological advancements, clinical applications, and associated challenges. Following the PRISMA-DTA guidelines, a comprehensive search of databases including PubMed, Scopus, and Cochrane Library was conducted, focusing on Artificial intelligence, machine learning, deep learning, and radiomics in pancreatic imaging. Articles involving human subjects, written in English, and published up to March 31, 2024, were included. The review process involved title and abstract screening, followed by full-text review and refinement based on relevance and novelty. Recent Artificial intelligence advancements have shown promise in detecting and diagnosing pancreatic diseases. Deep learning techniques, particularly convolutional neural networks (CNNs), have been effective in detecting and segmenting pancreatic tissues as well as differentiating between benign and malignant lesions. Deep learning algorithms have also been used to predict survival time, recurrence risk, and therapy response in pancreatic cancer patients. Radiomics approaches, extracting quantitative features from imaging modalities such as CT, MRI, and endoscopic ultrasound, have enhanced the accuracy of these deep learning models. Despite the potential of Artificial intelligence in pancreatic imaging, challenges such as legal and ethical considerations, algorithm transparency, and data security remain. This review underscores the transformative potential of Artificial intelligence in enhancing the diagnosis and treatment of pancreatic diseases, ultimately aiming to improve patient outcomes and survival rates.
Affiliation(s)
- Nicoleta Podină, “Carol Davila” University of Medicine and Pharmacy, Bucharest, Romania; Department of Gastroenterology, Ponderas Academic Hospital, Bucharest, Romania
- Alina Constantin, Department of Gastroenterology, Ponderas Academic Hospital, Bucharest, Romania
- Irina Cazacu, Oncology Department, Fundeni Clinical Institute, Bucharest, Romania
- Vlad Croitoru, Oncology Department, Fundeni Clinical Institute, Bucharest, Romania
- Cristian Gheorghe, “Carol Davila” University of Medicine and Pharmacy, Bucharest, Romania; Center of Gastroenterology and Hepatology, Fundeni Clinical Institute, Bucharest, Romania
- Daniel Vasile Balaban, “Carol Davila” University of Medicine and Pharmacy, Bucharest, Romania; Department of Gastroenterology, “Carol Davila” Central Military University Emergency Hospital, Bucharest, Romania
- Mariana Jinga, “Carol Davila” University of Medicine and Pharmacy, Bucharest, Romania; Department of Gastroenterology, “Carol Davila” Central Military University Emergency Hospital, Bucharest, Romania
- Cristian George Țieranu, “Carol Davila” University of Medicine and Pharmacy, Bucharest, Romania; Department of Gastroenterology and Hepatology, Elias Emergency University Hospital, Bucharest, Romania
- Adrian Săftoiu, “Carol Davila” University of Medicine and Pharmacy, Bucharest, Romania; Department of Gastroenterology, Ponderas Academic Hospital, Bucharest, Romania; Department of Gastroenterology and Hepatology, Elias Emergency University Hospital, Bucharest, Romania

4. Antony A, Mukherjee S, Bi Y, Collisson EA, Nagaraj M, Murlidhar M, Wallace MB, Goenka AH. AI-Driven insights in pancreatic cancer imaging: from pre-diagnostic detection to prognostication. Abdom Radiol (NY) 2024. [PMID: 39738571] [DOI: 10.1007/s00261-024-04775-x]
Abstract
Pancreatic ductal adenocarcinoma (PDAC) is the third leading cause of cancer-related deaths in the United States, largely due to its poor five-year survival rate and frequent late-stage diagnosis. A significant barrier to early detection even in high-risk cohorts is that the pancreas often appears morphologically normal during the pre-diagnostic phase. Yet, the disease can progress rapidly from subclinical stages to widespread metastasis, undermining the effectiveness of screening. Recently, artificial intelligence (AI) applied to cross-sectional imaging has shown significant potential in identifying subtle, early-stage changes in pancreatic tissue that are often imperceptible to the human eye. Moreover, AI-driven imaging also aids in the discovery of prognostic and predictive biomarkers, essential for personalized treatment planning. This article uniquely integrates a critical discussion on AI's role in detecting visually occult PDAC on pre-diagnostic imaging, addresses challenges of model generalizability, and emphasizes solutions like standardized datasets and clinical workflows. By focusing on both technical advancements and practical implementation, this article provides a forward-thinking conceptual framework that bridges current gaps in AI-driven PDAC research.
Affiliation(s)
- Ajith Antony, Department of Radiology, Mayo Clinic, Rochester, MN, USA
- Yan Bi, Department of Gastroenterology and Hepatology, Mayo Clinic, Jacksonville, FL, USA
- Eric A Collisson, Department of Medical Oncology, Fred Hutchinson Cancer Center, Seattle, WA, USA
- Madhu Nagaraj, Department of Radiology, Mayo Clinic, Rochester, MN, USA
- Michael B Wallace, Department of Gastroenterology and Hepatology, Mayo Clinic, Jacksonville, FL, USA
- Ajit H Goenka, Department of Radiology, Mayo Clinic, Rochester, MN, USA

5. Udriștoiu AL, Podină N, Ungureanu BS, Constantin A, Georgescu CV, Bejinariu N, Pirici D, Burtea DE, Gruionu L, Udriștoiu S, Săftoiu A. Deep learning segmentation architectures for automatic detection of pancreatic ductal adenocarcinoma in EUS-guided fine-needle biopsy samples based on whole-slide imaging. Endosc Ultrasound 2024; 13:335-344. [PMID: 39802107] [PMCID: PMC11723688] [DOI: 10.1097/eus.0000000000000094]
Abstract
Background EUS-guided fine-needle biopsy is the procedure of choice for the diagnosis of pancreatic ductal adenocarcinoma (PDAC). Nevertheless, the samples obtained are small and require expertise in pathology, whereas the diagnosis is difficult in view of the scarcity of malignant cells and the important desmoplastic reaction of these tumors. With the help of artificial intelligence, the deep learning architectures produce a fast, accurate, and automated approach for PDAC image segmentation based on whole-slide imaging. Given the effectiveness of U-Net in semantic segmentation, numerous variants and improvements have emerged, specifically for whole-slide imaging segmentation. Methods In this study, a comparison of 7 U-Net architecture variants was performed on 2 different datasets of EUS-guided fine-needle biopsy samples from 2 medical centers (31 and 33 whole-slide images, respectively) with different parameters and acquisition tools. The U-Net architecture variants evaluated included some that had not been previously explored for PDAC whole-slide image segmentation. The evaluation of their performance involved calculating accuracy through the mean Dice coefficient and mean intersection over union (IoU). Results The highest segmentation accuracies were obtained using Inception U-Net architecture for both datasets. PDAC tissue was segmented with the overall average Dice coefficient of 97.82% and IoU of 0.87 for Dataset 1, respectively, overall average Dice coefficient of 95.70%, and IoU of 0.79 for Dataset 2. Also, we considered the external testing of the trained segmentation models by performing the cross evaluations between the 2 datasets. The Inception U-Net model trained on Train Dataset 1 performed with the overall average Dice coefficient of 93.12% and IoU of 0.74 on Test Dataset 2. The Inception U-Net model trained on Train Dataset 2 performed with the overall average Dice coefficient of 92.09% and IoU of 0.81 on Test Dataset 1. Conclusions The findings of this study demonstrated the feasibility of utilizing artificial intelligence for assessing PDAC segmentation in whole-slide imaging, supported by promising scores.
Affiliation(s)
- Nicoleta Podină, Department of Gastroenterology, Ponderas Academic Hospital, Bucharest, Romania; Faculty of Medicine, Carol Davila University of Medicine and Pharmacy, Bucharest, Romania
- Bogdan Silviu Ungureanu, Department of Gastroenterology, University of Medicine and Pharmacy of Craiova, Craiova, Romania; Research Center of Gastroenterology and Hepatology, University of Medicine and Pharmacy Craiova, Craiova, Romania
- Alina Constantin, Department of Gastroenterology, Ponderas Academic Hospital, Bucharest, Romania
- Nona Bejinariu, REGINA MARIA Regional Laboratory, Pathological Anatomy Division, Cluj-Napoca, Romania
- Daniel Pirici, Department of Histology, University of Medicine and Pharmacy of Craiova, Craiova, Romania
- Daniela Elena Burtea, Research Center of Gastroenterology and Hepatology, University of Medicine and Pharmacy Craiova, Craiova, Romania
- Lucian Gruionu, Faculty of Mechanics, University of Craiova, Craiova, Romania
- Stefan Udriștoiu, Faculty of Automation, Computers and Electronics, University of Craiova, Craiova, Romania
- Adrian Săftoiu, Department of Gastroenterology, Ponderas Academic Hospital, Bucharest, Romania; Department of Gastroenterology and Hepatology, Elias University Emergency Hospital, Carol Davila University of Medicine and Pharmacy, Bucharest, Romania

6. Cavicchioli M, Moglia A, Pierelli L, Pugliese G, Cerveri P. Main challenges on the curation of large scale datasets for pancreas segmentation using deep learning in multi-phase CT scans: Focus on cardinality, manual refinement, and annotation quality. Comput Med Imaging Graph 2024; 117:102434. [PMID: 39284244] [DOI: 10.1016/j.compmedimag.2024.102434]
Abstract
Accurate segmentation of the pancreas in computed tomography (CT) holds paramount importance in diagnostics, surgical planning, and interventions. Recent studies have proposed supervised deep-learning models for segmentation, but their efficacy relies on the quality and quantity of the training data. Most of such works employed small-scale public datasets, without proving the efficacy of generalization to external datasets. This study explored the optimization of pancreas segmentation accuracy by pinpointing the ideal dataset size, understanding resource implications, examining manual refinement impact, and assessing the influence of anatomical subregions. We present the AIMS-1300 dataset encompassing 1,300 CT scans. Its manual annotation by medical experts required 938 h. A 2.5D UNet was implemented to assess the impact of training sample size on segmentation accuracy by partitioning the original AIMS-1300 dataset into 11 smaller subsets of progressively increasing numerosity. The findings revealed that training sets exceeding 440 CTs did not lead to better segmentation performance. In contrast, nnU-Net and UNet with Attention Gate reached a plateau for 585 CTs. Tests on generalization on the publicly available AMOS-CT dataset confirmed this outcome. As the size of the partition of the AIMS-1300 training set increases, the number of error slices decreases, reaching a minimum with 730 and 440 CTs, for AIMS-1300 and AMOS-CT datasets, respectively. Segmentation metrics on the AIMS-1300 and AMOS-CT datasets improved more on the head than the body and tail of the pancreas as the dataset size increased. By carefully considering the task and the characteristics of the available data, researchers can develop deep learning models without sacrificing performance even with limited data. This could accelerate developing and deploying artificial intelligence tools for pancreas surgery and other surgical data science applications.
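The dataset-size experiment described above amounts to a learning-curve study: train the same architecture on nested subsets of increasing size and watch where the test metric plateaus. A minimal sketch of the subset construction follows; the subset sizes, case IDs, and the commented training/evaluation calls are placeholders, not the paper's exact partition.

```python
import numpy as np

def nested_training_subsets(case_ids, sizes, seed=42):
    """Yield nested training subsets of increasing size, so each larger set only adds cases."""
    order = np.random.default_rng(seed).permutation(case_ids)
    for n in sorted(sizes):
        yield order[:n].tolist()

subset_sizes = [130, 260, 440, 585, 730, 1040, 1300]       # illustrative sizes only
for train_cases in nested_training_subsets(range(1300), subset_sizes):
    # model = train_segmentation_model(train_cases)        # placeholder training call
    # dice = evaluate(model, fixed_test_cases)              # placeholder evaluation call
    print(f"training-set size: {len(train_cases)}")
```
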
Affiliation(s)
- Matteo Cavicchioli, Department of Electronics, Information, and Bioengineering, Politecnico di Milano, Piazza Leonardo da Vinci 32, Milano, 20133, Italy; Fondazione MIAS (AIMS Academy), Piazza dell'Ospedale Maggiore 3, Milano, 20162, Italy
- Andrea Moglia, Department of Electronics, Information, and Bioengineering, Politecnico di Milano, Piazza Leonardo da Vinci 32, Milano, 20133, Italy
- Ludovica Pierelli, Fondazione MIAS (AIMS Academy), Piazza dell'Ospedale Maggiore 3, Milano, 20162, Italy
- Giacomo Pugliese, Fondazione MIAS (AIMS Academy), Piazza dell'Ospedale Maggiore 3, Milano, 20162, Italy
- Pietro Cerveri, Department of Electronics, Information, and Bioengineering, Politecnico di Milano, Piazza Leonardo da Vinci 32, Milano, 20133, Italy; Department of Industrial and Information Engineering, University of Pavia, Via Adolfo Ferrata 5, Pavia, 27100, Italy

7. Huang C, Hecht EM, Soloff EV, Tiwari HA, Bhosale PR, Dasayam A, Galgano SJ, Kambadakone A, Kulkarni NM, Le O, Liau J, Luk L, Rosenthal MH, Sangster GP, Goenka AH. Imaging for Early Detection of Pancreatic Ductal Adenocarcinoma: Updates and Challenges in the Implementation of Screening and Surveillance Programs. AJR Am J Roentgenol 2024; 223:e2431151. [PMID: 38809122] [DOI: 10.2214/ajr.24.31151]
Abstract
Pancreatic ductal adenocarcinoma (PDA) is one of the most aggressive cancers. It has a poor 5-year survival rate of 12%, partly because most cases are diagnosed at advanced stages, precluding curative surgical resection. Early-stage PDA has significantly better prognoses due to increased potential for curative interventions, making early detection of PDA critically important to improved patient outcomes. We examine current and evolving early detection concepts, screening strategies, diagnostic yields among high-risk individuals, controversies, and limitations of standard-of-care imaging.
Affiliation(s)
- Chenchan Huang, Department of Radiology, NYU Langone Health, 660 First Ave, 3rd Fl, New York, NY 10016
- Erik V Soloff, Department of Radiology, University of Washington, Seattle, WA
- Hina Arif Tiwari, Department of Radiology, University of Arizona College of Medicine, Banner University Medicine, Tucson, AZ
- Priya R Bhosale, Department of Radiology, The University of Texas MD Anderson Cancer Center, Bellaire, TX
- Anil Dasayam, Department of Radiology, University of Pittsburgh Medical Center, Pittsburgh, PA
- Samuel J Galgano, Department of Radiology, University of Alabama at Birmingham, Birmingham, AL
- Naveen M Kulkarni, Department of Radiology, Medical College of Wisconsin, Milwaukee, WI
- Ott Le, Department of Radiology, The University of Texas MD Anderson Cancer Center, Bellaire, TX
- Joy Liau, Department of Radiology, University of California at San Diego, San Diego, CA
- Lyndon Luk, Department of Radiology, Columbia University Medical Center, New York, NY
- Michael H Rosenthal, Department of Radiology, Dana-Farber Cancer Institute, Brigham and Women's Hospital, Harvard Medical School, Boston, MA

8. Somasundaram E, Taylor Z, Alves VV, Qiu L, Fortson BL, Mahalingam N, Dudley JA, Li H, Brady SL, Trout AT, Dillman JR. Deep Learning Models for Abdominal CT Organ Segmentation in Children: Development and Validation in Internal and Heterogeneous Public Datasets. AJR Am J Roentgenol 2024; 223:e2430931. [PMID: 38691411] [DOI: 10.2214/ajr.24.30931]
Abstract
BACKGROUND. Deep learning abdominal organ segmentation algorithms have shown excellent results in adults; validation in children is sparse. OBJECTIVE. The purpose of this article is to develop and validate deep learning models for liver, spleen, and pancreas segmentation on pediatric CT examinations. METHODS. This retrospective study developed and validated deep learning models for liver, spleen, and pancreas segmentation using 1731 CT examinations (1504 training, 221 testing), derived from three internal institutional pediatric (age ≤ 18 years) datasets (n = 483) and three public datasets comprising pediatric and adult examinations with various pathologies (n = 1248). Three deep learning model architectures (SegResNet, DynUNet, and SwinUNETR) from the Medical Open Network for Artificial Intelligence (MONAI) framework underwent training using native training (NT), relying solely on institutional datasets, and transfer learning (TL), incorporating pretraining on public datasets. For comparison, TotalSegmentator, a publicly available segmentation model, was applied to test data without further training. Segmentation performance was evaluated using mean Dice similarity coefficient (DSC), with manual segmentations as reference. RESULTS. For internal pediatric data, the DSC for TotalSegmentator, NT models, and TL models for normal liver was 0.953, 0.964-0.965, and 0.965-0.966, respectively; for normal spleen, 0.914, 0.942-0.945, and 0.937-0.945; for normal pancreas, 0.733, 0.774-0.785, and 0.775-0.786; and for pancreas with pancreatitis, 0.703, 0.590-0.640, and 0.667-0.711. For public pediatric data, the DSC for TotalSegmentator, NT models, and TL models for liver was 0.952, 0.871-0.908, and 0.941-0.946, respectively; for spleen, 0.905, 0.771-0.827, and 0.897-0.926; and for pancreas, 0.700, 0.577-0.648, and 0.693-0.736. For public primarily adult data, the DSC for TotalSegmentator, NT models, and TL models for liver was 0.991, 0.633-0.750, and 0.926-0.952, respectively; for spleen, 0.983, 0.569-0.604, and 0.923-0.947; and for pancreas, 0.909, 0.148-0.241, and 0.699-0.775. The DynUNet TL model was selected as the best-performing NT or TL model considering DSC values across organs and test datasets and was made available as an open-source MONAI bundle (https://github.com/cchmc-dll/pediatric_abdominal_segmentation_bundle.git). CONCLUSION. TL models trained on heterogeneous public datasets and fine-tuned using institutional pediatric data outperformed internal NT models and Total-Segmentator across internal and external pediatric test data. Segmentation performance was better in liver and spleen than in pancreas. CLINICAL IMPACT. The selected model may be used for various volumetry applications in pediatric imaging.
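A condensed sketch of the transfer-learning recipe described above using MONAI's DynUNet: load weights pretrained on public data, then fine-tune on institutional pediatric CTs. The checkpoint path, channel counts, hyperparameters, and the commented data loader are placeholders; this is not the authors' released bundle.

```python
import torch
from monai.networks.nets import DynUNet
from monai.losses import DiceCELoss

# 3D DynUNet with 4 output channels (assumed here: background, liver, spleen, pancreas).
model = DynUNet(
    spatial_dims=3,
    in_channels=1,
    out_channels=4,
    kernel_size=[3, 3, 3, 3],
    strides=[1, 2, 2, 2],
    upsample_kernel_size=[2, 2, 2],
)

# Transfer learning: start from weights pretrained on heterogeneous public datasets,
# then fine-tune all layers on institutional pediatric cases at a lower learning rate.
pretrained = torch.load("dynunet_public_pretrained.pt", map_location="cpu")  # placeholder checkpoint
model.load_state_dict(pretrained, strict=False)

loss_fn = DiceCELoss(to_onehot_y=True, softmax=True)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# for images, labels in pediatric_train_loader:   # placeholder DataLoader of CT patches + labels
#     optimizer.zero_grad()
#     loss = loss_fn(model(images), labels)
#     loss.backward()
#     optimizer.step()
```
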
Affiliation(s)
- Elanchezhian Somasundaram, Department of Radiology, Cincinnati Children's Hospital Medical Center, 3333 Burnet Ave, MLC 5033, Cincinnati, OH 45229; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH
- Zachary Taylor, Department of Radiology, Cincinnati Children's Hospital Medical Center, 3333 Burnet Ave, MLC 5033, Cincinnati, OH 45229
- Vinicius V Alves, Department of Radiology, Cincinnati Children's Hospital Medical Center, 3333 Burnet Ave, MLC 5033, Cincinnati, OH 45229
- Lisa Qiu, Department of Radiology, Cincinnati Children's Hospital Medical Center, 3333 Burnet Ave, MLC 5033, Cincinnati, OH 45229
- Benjamin L Fortson, Department of Radiology, Cincinnati Children's Hospital Medical Center, 3333 Burnet Ave, MLC 5033, Cincinnati, OH 45229
- Neeraja Mahalingam, Department of Radiology, Cincinnati Children's Hospital Medical Center, 3333 Burnet Ave, MLC 5033, Cincinnati, OH 45229
- Jonathan A Dudley, Department of Radiology, Cincinnati Children's Hospital Medical Center, 3333 Burnet Ave, MLC 5033, Cincinnati, OH 45229
- Hailong Li, Department of Radiology, Cincinnati Children's Hospital Medical Center, 3333 Burnet Ave, MLC 5033, Cincinnati, OH 45229; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH
- Samuel L Brady, Department of Radiology, Cincinnati Children's Hospital Medical Center, 3333 Burnet Ave, MLC 5033, Cincinnati, OH 45229; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH
- Andrew T Trout, Department of Radiology, Cincinnati Children's Hospital Medical Center, 3333 Burnet Ave, MLC 5033, Cincinnati, OH 45229; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH
- Jonathan R Dillman, Department of Radiology, Cincinnati Children's Hospital Medical Center, 3333 Burnet Ave, MLC 5033, Cincinnati, OH 45229; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH

9. Yang E, Kim JH, Min JH, Jeong WK, Hwang JA, Lee JH, Shin J, Kim H, Lee SE, Baek SY. nnU-Net-Based Pancreas Segmentation and Volume Measurement on CT Imaging in Patients with Pancreatic Cancer. Acad Radiol 2024; 31:2784-2794. [PMID: 38350812] [DOI: 10.1016/j.acra.2024.01.004]
Abstract
RATIONALE AND OBJECTIVES To develop and validate a deep learning (DL)-based method for pancreas segmentation on CT and automatic measurement of pancreatic volume in pancreatic cancer. MATERIALS AND METHODS This retrospective study used 3D nnU-net architecture for fully automated pancreatic segmentation in patients with pancreatic cancer. The study used 851 portal venous phase CT images (499 pancreatic cancer and 352 normal pancreas). This dataset was divided into training (n = 506), internal validation (n = 126), and external test set (n = 219). For the external test set, the pancreas was manually segmented by two abdominal radiologists (R1 and R2) to obtain the ground truth. In addition, the consensus segmentation was obtained using Simultaneous Truth and Performance Level Estimation (STAPLE) algorithm. Segmentation performance was assessed using the Dice similarity coefficient (DSC). Next, the pancreatic volumes determined by automatic segmentation were compared to those determined by manual segmentation by two radiologists. RESULTS The DL-based model for pancreatic segmentation showed a mean DSC of 0.764 in the internal validation dataset and DSC of 0.807, 0.805, and 0.803 using R1, R2, and STAPLE as references in the external test dataset. The pancreas parenchymal volume measured by automatic and manual segmentations were similar (DL-based model: 65.5 ± 19.3 cm3 and STAPLE: 65.1 ± 21.4 cm3; p = 0.486). The pancreatic parenchymal volume difference between the DL-based model predictions and the manual segmentation by STAPLE was 0.5 cm3, with correlation coefficients of 0.88. CONCLUSION The DL-based model efficiently generates automatic segmentation of the pancreas and measures the pancreatic volume in patients with pancreatic cancer.
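The volume-measurement step above reduces to counting labeled voxels and multiplying by the physical voxel size. A short sketch with SimpleITK; the file name and label value are placeholders.

```python
import numpy as np
import SimpleITK as sitk

mask = sitk.ReadImage("pancreas_mask.nii.gz")        # placeholder path to a label image
voxel_volume_ml = np.prod(mask.GetSpacing()) / 1000  # spacing is in mm, so mm^3 -> cm^3 (mL)

labels = sitk.GetArrayFromImage(mask)
pancreas_voxels = int((labels == 1).sum())           # label 1 assumed to mark pancreatic parenchyma
print(f"pancreas volume: {pancreas_voxels * voxel_volume_ml:.1f} cm^3")
```
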
Affiliation(s)
- Ehwa Yang, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Jae-Hun Kim, Department of Radiology and Center for Imaging Science, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Ji Hye Min, Department of Radiology and Center for Imaging Science, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Woo Kyoung Jeong, Department of Radiology and Center for Imaging Science, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Jeong Ah Hwang, Department of Radiology and Center for Imaging Science, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Jeong Hyun Lee, Department of Radiology and Center for Imaging Science, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Jaeseung Shin, Department of Radiology and Center for Imaging Science, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Honsoul Kim, Department of Radiology and Center for Imaging Science, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Seol Eui Lee, Research Institute for Future Medicine, Samsung Medical Center, Seoul, Republic of Korea
- Sun-Young Baek, Department of Radiology and Center for Imaging Science, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea; Research Institute for Future Medicine, Samsung Medical Center, Seoul, Republic of Korea

10. Mukherjee S, Korfiatis P, Patnam NG, Trivedi KH, Karbhari A, Suman G, Fletcher JG, Goenka AH. Assessing the robustness of a machine-learning model for early detection of pancreatic adenocarcinoma (PDA): evaluating resilience to variations in image acquisition and radiomics workflow using image perturbation methods. Abdom Radiol (NY) 2024; 49:964-974. [PMID: 38175255] [DOI: 10.1007/s00261-023-04127-1]
Abstract
PURPOSE To evaluate robustness of a radiomics-based support vector machine (SVM) model for detection of visually occult PDA on pre-diagnostic CTs by simulating common variations in image acquisition and radiomics workflow using image perturbation methods. METHODS Eighteen algorithmically generated-perturbations, which simulated variations in image noise levels (σ, 2σ, 3σ, 5σ), image rotation [both CT image and the corresponding pancreas segmentation mask by 45° and 90° in axial plane], voxel resampling (isotropic and anisotropic), gray-level discretization [bin width (BW) 32 and 64)], and pancreas segmentation (sequential erosions by 3, 4, 6, and 8 pixels and dilations by 3, 4, and 6 pixels from the boundary), were introduced to the original (unperturbed) test subset (n = 128; 45 pre-diagnostic CTs, 83 control CTs with normal pancreas). Radiomic features were extracted from pancreas masks of these additional test subsets, and the model's performance was compared vis-a-vis the unperturbed test subset. RESULTS The model correctly classified 43 out of 45 pre-diagnostic CTs and 75 out of 83 control CTs in the unperturbed test subset, achieving 92.2% accuracy and 0.98 AUC. Model's performance was unaffected by a three-fold increase in noise level except for sensitivity declining to 80% at 3σ (p = 0.02). Performance remained comparable vis-a-vis the unperturbed test subset despite variations in image rotation (p = 0.99), voxel resampling (p = 0.25-0.31), change in gray-level BW to 32 (p = 0.31-0.99), and erosions/dilations up to 4 pixels from the pancreas boundary (p = 0.12-0.34). CONCLUSION The model's high performance for detection of visually occult PDA was robust within a broad range of clinically relevant variations in image acquisition and radiomics workflow.
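A small sketch of representative perturbations from the list above, applied to a CT volume and its pancreas mask with NumPy/SciPy; the sigma values, angles, and pixel counts are illustrative, and this is not the authors' exact pipeline.

```python
import numpy as np
from scipy import ndimage

def add_noise(ct_hu: np.ndarray, sigma: float, seed: int = 0) -> np.ndarray:
    """Simulate a higher image-noise level by adding zero-mean Gaussian noise (in HU)."""
    return ct_hu + np.random.default_rng(seed).normal(0.0, sigma, ct_hu.shape)

def rotate_axial(volume: np.ndarray, degrees: float, is_mask: bool) -> np.ndarray:
    """Rotate a (z, y, x) volume in the axial plane; nearest-neighbour keeps masks binary."""
    return ndimage.rotate(volume, degrees, axes=(1, 2), reshape=False,
                          order=0 if is_mask else 1)

def erode(mask: np.ndarray, pixels: int) -> np.ndarray:
    """Shrink the segmentation boundary by a fixed number of pixels."""
    return ndimage.binary_erosion(mask.astype(bool), iterations=pixels)

def dilate(mask: np.ndarray, pixels: int) -> np.ndarray:
    """Expand the segmentation boundary by a fixed number of pixels."""
    return ndimage.binary_dilation(mask.astype(bool), iterations=pixels)
```
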
Affiliation(s)
- Sovanlal Mukherjee, Divisions of Abdominal and Nuclear Imaging, Nuclear Radiology Fellowship, Nuclear Radiology Research Operations, Enterprise PET/MR Research and Development, Department of Radiology, Mayo Clinic, 200 First St SW, Charlton 1, Rochester, MN, 55905, USA
- Panagiotis Korfiatis, Divisions of Abdominal and Nuclear Imaging, Nuclear Radiology Fellowship, Nuclear Radiology Research Operations, Enterprise PET/MR Research and Development, Department of Radiology, Mayo Clinic, 200 First St SW, Charlton 1, Rochester, MN, 55905, USA
- Nandakumar G Patnam, Divisions of Abdominal and Nuclear Imaging, Nuclear Radiology Fellowship, Nuclear Radiology Research Operations, Enterprise PET/MR Research and Development, Department of Radiology, Mayo Clinic, 200 First St SW, Charlton 1, Rochester, MN, 55905, USA
- Kamaxi H Trivedi, Divisions of Abdominal and Nuclear Imaging, Nuclear Radiology Fellowship, Nuclear Radiology Research Operations, Enterprise PET/MR Research and Development, Department of Radiology, Mayo Clinic, 200 First St SW, Charlton 1, Rochester, MN, 55905, USA
- Aashna Karbhari, Divisions of Abdominal and Nuclear Imaging, Nuclear Radiology Fellowship, Nuclear Radiology Research Operations, Enterprise PET/MR Research and Development, Department of Radiology, Mayo Clinic, 200 First St SW, Charlton 1, Rochester, MN, 55905, USA
- Garima Suman, Divisions of Abdominal and Nuclear Imaging, Nuclear Radiology Fellowship, Nuclear Radiology Research Operations, Enterprise PET/MR Research and Development, Department of Radiology, Mayo Clinic, 200 First St SW, Charlton 1, Rochester, MN, 55905, USA
- Joel G Fletcher, Divisions of Abdominal and Nuclear Imaging, Nuclear Radiology Fellowship, Nuclear Radiology Research Operations, Enterprise PET/MR Research and Development, Department of Radiology, Mayo Clinic, 200 First St SW, Charlton 1, Rochester, MN, 55905, USA
- Ajit H Goenka, Divisions of Abdominal and Nuclear Imaging, Nuclear Radiology Fellowship, Nuclear Radiology Research Operations, Enterprise PET/MR Research and Development, Department of Radiology, Mayo Clinic, 200 First St SW, Charlton 1, Rochester, MN, 55905, USA

11. Kawamoto S, Zhu Z, Chu LC, Javed AA, Kinny-Köster B, Wolfgang CL, Hruban RH, Kinzler KW, Fouladi DF, Blanco A, Shayesteh S, Fishman EK. Deep neural network-based segmentation of normal and abnormal pancreas on abdominal CT: evaluation of global and local accuracies. Abdom Radiol (NY) 2024; 49:501-511. [PMID: 38102442] [DOI: 10.1007/s00261-023-04122-6]
Abstract
PURPOSE Delay in diagnosis can contribute to poor outcomes in pancreatic ductal adenocarcinoma (PDAC), and new tools for early detection are required. Recent application of artificial intelligence to cancer imaging has demonstrated great potential in detecting subtle early lesions. The aim of the study was to evaluate global and local accuracies of deep neural network (DNN) segmentation of normal and abnormal pancreas with pancreatic mass. METHODS Our previously developed and reported residual deep supervision network for segmentation of PDAC was applied to segment pancreas using CT images of potential renal donors (normal pancreas) and patients with suspected PDAC (abnormal pancreas). Accuracy of DNN pancreas segmentation was assessed using the Dice similarity coefficient (DSC), average symmetric surface distance (ASSD), and 95th percentile Hausdorff distance (HD95) as compared to manual segmentation. Furthermore, two radiologists semi-quantitatively assessed local accuracies and estimated volume of correctly segmented pancreas. RESULTS Forty-two normal and 49 abnormal CTs were assessed. Average DSC was 87.4 ± 3.1% and 85.5 ± 3.2%, ASSD 0.97 ± 0.30 and 1.34 ± 0.65, HD95 4.28 ± 2.36 and 6.31 ± 6.31 for normal and abnormal pancreas, respectively. Semi-quantitatively, ≥95% of pancreas volume was correctly segmented in 95.2% and 53.1% of normal and abnormal pancreas by both radiologists, and 97.6% and 75.5% by at least one radiologist. Most common segmentation errors were made on pancreatic and duodenal borders in both groups, and related to pancreatic tumor including duct dilatation, atrophy, tumor infiltration and collateral vessels. CONCLUSION Pancreas DNN segmentation is accurate in a majority of cases; however, minor manual editing may be necessary, particularly in the abnormal pancreas.
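The surface-distance metrics reported above (ASSD and HD95) can be computed from binary masks with a Euclidean distance transform, as in this compact sketch; the voxel spacing is a placeholder and should come from the image header.

```python
import numpy as np
from scipy import ndimage

def surface(mask: np.ndarray) -> np.ndarray:
    """Boundary voxels of a binary mask (mask minus its one-voxel erosion)."""
    mask = mask.astype(bool)
    return mask & ~ndimage.binary_erosion(mask)

def assd_and_hd95(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)):
    """Average symmetric surface distance and 95th-percentile Hausdorff distance (mm)."""
    sa, sb = surface(a), surface(b)
    dist_to_b = ndimage.distance_transform_edt(~sb, sampling=spacing)  # distance to b's surface
    dist_to_a = ndimage.distance_transform_edt(~sa, sampling=spacing)  # distance to a's surface
    d = np.concatenate([dist_to_b[sa], dist_to_a[sb]])                 # symmetric set of distances
    return d.mean(), np.percentile(d, 95)
```
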
Affiliation(s)
- Satomi Kawamoto, The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 601 N. Caroline Street, Baltimore, MD, 21287, USA
- Zhuotun Zhu, The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 601 N. Caroline Street, Baltimore, MD, 21287, USA
- Linda C Chu, The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 601 N. Caroline Street, Baltimore, MD, 21287, USA
- Ammar A Javed, Department of Surgery, School of Medicine, Johns Hopkins University, Blalock Building, 600 N. Wolfe Street, Baltimore, MD, 21287, USA
- Benedict Kinny-Köster, Department of Surgery, School of Medicine, Johns Hopkins University, Blalock Building, 600 N. Wolfe Street, Baltimore, MD, 21287, USA; Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Christopher L Wolfgang, Department of Surgery, School of Medicine, Johns Hopkins University, Blalock Building, 600 N. Wolfe Street, Baltimore, MD, 21287, USA
- Ralph H Hruban, Department of Pathology, The Sol Goldman Pancreatic Cancer Research Center, Johns Hopkins University School of Medicine, Baltimore, MD, 21287, USA; The Sidney Kimmel Comprehensive Cancer Center, Johns Hopkins University School of Medicine, Baltimore, MD, 21231, USA
- Kenneth W Kinzler, The Ludwig Center, The Sidney Kimmel Comprehensive Cancer Center, Johns Hopkins University School of Medicine, Baltimore, MD, 21231, USA
- Daniel Fadaei Fouladi, The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 601 N. Caroline Street, Baltimore, MD, 21287, USA
- Alejandra Blanco, The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 601 N. Caroline Street, Baltimore, MD, 21287, USA
- Shahab Shayesteh, The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 601 N. Caroline Street, Baltimore, MD, 21287, USA
- Elliot K Fishman, The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 601 N. Caroline Street, Baltimore, MD, 21287, USA

12. Korfiatis P, Suman G, Patnam NG, Trivedi KH, Karbhari A, Mukherjee S, Cook C, Klug JR, Patra A, Khasawneh H, Rajamohan N, Fletcher JG, Truty MJ, Majumder S, Bolan CW, Sandrasegaran K, Chari ST, Goenka AH. Automated Artificial Intelligence Model Trained on a Large Data Set Can Detect Pancreas Cancer on Diagnostic Computed Tomography Scans As Well As Visually Occult Preinvasive Cancer on Prediagnostic Computed Tomography Scans. Gastroenterology 2023; 165:1533-1546.e4. [PMID: 37657758] [PMCID: PMC10843414] [DOI: 10.1053/j.gastro.2023.08.034]
Abstract
BACKGROUND & AIMS The aims of our case-control study were (1) to develop an automated 3-dimensional (3D) Convolutional Neural Network (CNN) for detection of pancreatic ductal adenocarcinoma (PDA) on diagnostic computed tomography scans (CTs), (2) evaluate its generalizability on multi-institutional public data sets, (3) its utility as a potential screening tool using a simulated cohort with high pretest probability, and (4) its ability to detect visually occult preinvasive cancer on prediagnostic CTs. METHODS A 3D-CNN classification system was trained using algorithmically generated bounding boxes and pancreatic masks on a curated data set of 696 portal phase diagnostic CTs with PDA and 1080 control images with a nonneoplastic pancreas. The model was evaluated on (1) an intramural hold-out test subset (409 CTs with PDA, 829 controls); (2) a simulated cohort with a case-control distribution that matched the risk of PDA in glycemically defined new-onset diabetes, and Enriching New-Onset Diabetes for Pancreatic Cancer score ≥3; (3) multi-institutional public data sets (194 CTs with PDA, 80 controls), and (4) a cohort of 100 prediagnostic CTs (i.e., CTs incidentally acquired 3-36 months before clinical diagnosis of PDA) without a focal mass, and 134 controls. RESULTS Of the CTs in the intramural test subset, 798 (64%) were from other hospitals. The model correctly classified 360 CTs (88%) with PDA and 783 control CTs (94%), with a mean accuracy 0.92 (95% CI, 0.91-0.94), area under the receiver operating characteristic (AUROC) curve of 0.97 (95% CI, 0.96-0.98), sensitivity of 0.88 (95% CI, 0.85-0.91), and specificity of 0.95 (95% CI, 0.93-0.96). Activation areas on heat maps overlapped with the tumor in 350 of 360 CTs (97%). Performance was high across tumor stages (sensitivity of 0.80, 0.87, 0.95, and 1.0 on T1 through T4 stages, respectively), comparable for hypodense vs isodense tumors (sensitivity: 0.90 vs 0.82), different age, sex, CT slice thicknesses, and vendors (all P > .05), and generalizable on both the simulated cohort (accuracy, 0.95 [95% 0.94-0.95]; AUROC curve, 0.97 [95% CI, 0.94-0.99]) and public data sets (accuracy, 0.86 [95% CI, 0.82-0.90]; AUROC curve, 0.90 [95% CI, 0.86-0.95]). Despite being exclusively trained on diagnostic CTs with larger tumors, the model could detect occult PDA on prediagnostic CTs (accuracy, 0.84 [95% CI, 0.79-0.88]; AUROC curve, 0.91 [95% CI, 0.86-0.94]; sensitivity, 0.75 [95% CI, 0.67-0.84]; and specificity, 0.90 [95% CI, 0.85-0.95]) at a median 475 days (range, 93-1082 days) before clinical diagnosis. CONCLUSIONS This automated artificial intelligence model trained on a large and diverse data set shows high accuracy and generalizable performance for detection of PDA on diagnostic CTs as well as for visually occult PDA on prediagnostic CTs. Prospective validation with blood-based biomarkers is warranted to assess the potential for early detection of sporadic PDA in high-risk individuals.
Affiliation(s)
- Garima Suman, Department of Radiology, Mayo Clinic, Rochester, Minnesota
- Cole Cook, Division of Medical Imaging Technology Services, Mayo Clinic, Rochester, Minnesota
- Jason R Klug, Division of Medical Imaging Technology Services, Mayo Clinic, Rochester, Minnesota
- Anurima Patra, Department of Radiology, Tata Medical Center, Kolkata, India
- Hala Khasawneh, Department of Radiology, Mayo Clinic, Rochester, Minnesota
- Mark J Truty, Department of Surgery, Mayo Clinic, Rochester, Minnesota
- Shounak Majumder, Department of Gastroenterology, Mayo Clinic, Rochester, Minnesota
- Suresh T Chari, Department of Gastroenterology, Mayo Clinic, Rochester, Minnesota
- Ajit H Goenka, Department of Radiology, Mayo Clinic, Rochester, Minnesota

13. Karbhari A, Mosessian S, Trivedi KH, Valla F, Jacobson M, Truty MJ, Patnam NG, Simeone DM, Zan E, Brennan T, Chen H, Kuo PH, Herrmann K, Goenka AH. Gallium-68-labeled fibroblast activation protein inhibitor-46 PET in patients with resectable or borderline resectable pancreatic ductal adenocarcinoma: A phase 2, multicenter, single arm, open label non-randomized study protocol. PLoS One 2023; 18:e0294564. [PMID: 38011131] [PMCID: PMC10681241] [DOI: 10.1371/journal.pone.0294564]
Abstract
BACKGROUND Pancreatic ductal adenocarcinoma (PDAC) is a lethal disease prone to widespread metastatic dissemination and characterized by a desmoplastic stroma that contributes to poor outcomes. Fibroblast activation protein (FAP)-expressing Cancer-Associated Fibroblasts (CAFs) are crucial components of the tumor stroma, influencing carcinogenesis, fibrosis, tumor growth, metastases, and treatment resistance. Non-invasive tools to profile CAF identity and function are essential for overcoming CAF-mediated therapy resistance, developing innovative targeted therapies, and improved patient outcomes. We present the design of a multicenter phase 2 study (clinicaltrials.gov identifier NCT05262855) of [68Ga]FAPI-46 PET to image FAP-expressing CAFs in resectable or borderline resectable PDAC. METHODS We will enroll up to 60 adult treatment-naïve patients with confirmed PDAC. These patients will be eligible for curative surgical resection, either without prior treatment (Cohort 1) or after neoadjuvant therapy (NAT) (Cohort 2). A baseline PET scan will be conducted from the vertex to mid-thighs approximately 15 minutes after administering 5 mCi (±2) of [68Ga]FAPI-46 intravenously. Cohort 2 patients will undergo an additional PET after completing NAT but before surgery. Histopathology and FAP immunohistochemistry (IHC) of initial diagnostic biopsy and resected tumor samples will serve as the truth standards. Primary objective is to assess the sensitivity, specificity, and accuracy of [68Ga]FAPI-46 PET for detecting FAP-expressing CAFs. Secondary objectives will assess predictive values and safety profile validation. Exploratory objectives are comparison of diagnostic performance of [68Ga]FAPI-46 PET to standard-of-care imaging, and comparison of pre- versus post-NAT [68Ga]FAPI-46 PET in Cohort 2. CONCLUSION To facilitate the clinical translation of [68Ga]FAPI-46 in PDAC, the current study seeks to implement a coherent strategy to mitigate risks and increase the probability of meeting FDA requirements and stakeholder expectations. The findings from this study could potentially serve as a foundation for a New Drug Application to the FDA. TRIAL REGISTRATION @ClinicalTrials.gov identifier NCT05262855.
Affiliation(s)
- Aashna Karbhari, Department of Radiology, Mayo Clinic, Rochester, Minnesota, United States of America
- Sherly Mosessian, Clinical Development, Sofie Biosciences, Dulles, Virginia, United States of America
- Kamaxi H. Trivedi, Department of Radiology, Mayo Clinic, Rochester, Minnesota, United States of America
- Frank Valla, Radiopharmaceutical and Contract Manufacturing, Sofie Biosciences, Dulles, Virginia, United States of America
- Mark Jacobson, Department of Radiology, Mayo Clinic, Rochester, Minnesota, United States of America
- Mark J. Truty, Department of Surgery, Mayo Clinic, Rochester, Minnesota, United States of America
- Nandakumar G. Patnam, Department of Radiology, Mayo Clinic, Rochester, Minnesota, United States of America
- Diane M. Simeone, Departments of Surgery and Pathology, NYU Langone Health, New York, New York, United States of America
- Elcin Zan, Department of Radiology, Weill Cornell Medicine, New York, New York, United States of America
- Tracy Brennan, Discovery Life Sciences, Newtown, Pennsylvania, United States of America
- Hongli Chen, Discovery Life Sciences, Newtown, Pennsylvania, United States of America
- Phillip H. Kuo, Departments of Medical Imaging, Medicine and Biomedical Engineering, University of Arizona, Tucson, Arizona, United States of America
- Ken Herrmann, Department of Nuclear Medicine, University Hospital Essen, University of Duisburg-Essen, Essen, Germany
- Ajit H. Goenka, Department of Radiology, Mayo Clinic, Rochester, Minnesota, United States of America

14. Xue T, Zhang F, Zhang C, Chen Y, Song Y, Golby AJ, Makris N, Rathi Y, Cai W, O'Donnell LJ. Superficial white matter analysis: An efficient point-cloud-based deep learning framework with supervised contrastive learning for consistent tractography parcellation across populations and dMRI acquisitions. Med Image Anal 2023; 85:102759. [PMID: 36706638] [PMCID: PMC9975054] [DOI: 10.1016/j.media.2023.102759]
Abstract
Diffusion MRI tractography is an advanced imaging technique that enables in vivo mapping of the brain's white matter connections. White matter parcellation classifies tractography streamlines into clusters or anatomically meaningful tracts. It enables quantification and visualization of whole-brain tractography. Currently, most parcellation methods focus on the deep white matter (DWM), whereas fewer methods address the superficial white matter (SWM) due to its complexity. We propose a novel two-stage deep-learning-based framework, Superficial White Matter Analysis (SupWMA), that performs an efficient and consistent parcellation of 198 SWM clusters from whole-brain tractography. A point-cloud-based network is adapted to our SWM parcellation task, and supervised contrastive learning enables more discriminative representations between plausible streamlines and outliers for SWM. We train our model on a large-scale tractography dataset including streamline samples from labeled long- and medium-range (over 40 mm) SWM clusters and anatomically implausible streamline samples, and we perform testing on six independently acquired datasets of different ages and health conditions (including neonates and patients with space-occupying brain tumors). Compared to several state-of-the-art methods, SupWMA obtains highly consistent and accurate SWM parcellation results on all datasets, showing good generalization across the lifespan in health and disease. In addition, the computational speed of SupWMA is much faster than other methods.
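A condensed PyTorch sketch of a supervised contrastive loss of the kind used in this framework: streamline embeddings sharing a cluster label are pulled together and all other embeddings in the batch are pushed apart. The point-cloud encoder and batch construction are omitted, and this is a generic formulation, not the authors' code.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings: torch.Tensor,
                                labels: torch.Tensor,
                                temperature: float = 0.1) -> torch.Tensor:
    """Supervised contrastive loss over a batch of embeddings with integer class labels."""
    z = F.normalize(embeddings, dim=1)                     # L2-normalize to the unit hypersphere
    sim = z @ z.T / temperature                            # pairwise scaled similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask

    sim = sim.masked_fill(self_mask, float("-inf"))        # exclude self-pairs from the denominator
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    pos_per_anchor = pos_mask.sum(dim=1)
    loss = -(log_prob * pos_mask.float()).sum(dim=1) / pos_per_anchor.clamp(min=1)
    return loss[pos_per_anchor > 0].mean()                 # average over anchors with >= 1 positive
```
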
Affiliation(s)
- Tengfei Xue, Brigham and Women's Hospital, Harvard Medical School, Boston, USA; School of Computer Science, University of Sydney, Sydney, Australia
- Fan Zhang, Brigham and Women's Hospital, Harvard Medical School, Boston, USA
- Chaoyi Zhang, School of Computer Science, University of Sydney, Sydney, Australia
- Yuqian Chen, Brigham and Women's Hospital, Harvard Medical School, Boston, USA; School of Computer Science, University of Sydney, Sydney, Australia
- Yang Song, School of Computer Science and Engineering, University of New South Wales, Sydney, Australia
- Nikos Makris, Brigham and Women's Hospital, Harvard Medical School, Boston, USA; Center for Morphometric Analysis, Massachusetts General Hospital, Boston, USA
- Yogesh Rathi, Brigham and Women's Hospital, Harvard Medical School, Boston, USA
- Weidong Cai, School of Computer Science, University of Sydney, Sydney, Australia

15. Khasawneh H, Patra A, Rajamohan N, Suman G, Klug J, Majumder S, Chari ST, Korfiatis P, Goenka AH. Volumetric Pancreas Segmentation on Computed Tomography: Accuracy and Efficiency of a Convolutional Neural Network Versus Manual Segmentation in 3D Slicer in the Context of Interreader Variability of Expert Radiologists. J Comput Assist Tomogr 2022; 46:841-847. [PMID: 36055122] [DOI: 10.1097/rct.0000000000001374]
Abstract
PURPOSE This study aimed to compare accuracy and efficiency of a convolutional neural network (CNN)-enhanced workflow for pancreas segmentation versus radiologists in the context of interreader reliability. METHODS Volumetric pancreas segmentations on a data set of 294 portal venous computed tomographies were performed by 3 radiologists (R1, R2, and R3) and by a CNN. Convolutional neural network segmentations were reviewed and, if needed, corrected ("corrected CNN [c-CNN]" segmentations) by radiologists. Ground truth was obtained from radiologists' manual segmentations using simultaneous truth and performance level estimation algorithm. Interreader reliability and model's accuracy were evaluated with Dice-Sorenson coefficient (DSC) and Jaccard coefficient (JC). Equivalence was determined using a two 1-sided test. Convolutional neural network segmentations below the 25th percentile DSC were reviewed to evaluate segmentation errors. Time for manual segmentation and c-CNN was compared. RESULTS Pancreas volumes from 3 sets of segmentations (manual, CNN, and c-CNN) were noninferior to simultaneous truth and performance level estimation-derived volumes [76.6 cm 3 (20.2 cm 3 ), P < 0.05]. Interreader reliability was high (mean [SD] DSC between R2-R1, 0.87 [0.04]; R3-R1, 0.90 [0.05]; R2-R3, 0.87 [0.04]). Convolutional neural network segmentations were highly accurate (DSC, 0.88 [0.05]; JC, 0.79 [0.07]) and required minimal-to-no corrections (c-CNN: DSC, 0.89 [0.04]; JC, 0.81 [0.06]; equivalence, P < 0.05). Undersegmentation (n = 47 [64%]) was common in the 73 CNN segmentations below 25th percentile DSC, but there were no major errors. Total inference time (minutes) for CNN was 1.2 (0.3). Average time (minutes) taken by radiologists for c-CNN (0.6 [0.97]) was substantially lower compared with manual segmentation (3.37 [1.47]; savings of 77.9%-87% [ P < 0.0001]). CONCLUSIONS Convolutional neural network-enhanced workflow provides high accuracy and efficiency for volumetric pancreas segmentation on computed tomography.
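The equivalence claim above rests on two one-sided tests (TOST). A small sketch of a paired TOST on per-case volumes, assuming approximately normally distributed differences; the equivalence margin is a placeholder that would need clinical justification.

```python
import numpy as np
from scipy import stats

def paired_tost_pvalue(a, b, margin):
    """Two one-sided tests for equivalence of paired measurements within +/- margin."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    se = d.std(ddof=1) / np.sqrt(d.size)
    t_lower = (d.mean() + margin) / se          # H0: mean difference <= -margin
    t_upper = (d.mean() - margin) / se          # H0: mean difference >= +margin
    p_lower = 1.0 - stats.t.cdf(t_lower, d.size - 1)
    p_upper = stats.t.cdf(t_upper, d.size - 1)
    return max(p_lower, p_upper)                # equivalence concluded if this is below alpha

# e.g., CNN-derived vs. manual pancreas volumes (cm^3) with a +/- 5 cm^3 margin (illustrative)
```
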
Affiliation(s)
- Hala Khasawneh, Department of Radiology, Mayo Clinic, Rochester, MN
- Anurima Patra, Department of Radiology, Tata Medical Center, Kolkata, India
- Garima Suman, Department of Radiology, Mayo Clinic, Rochester, MN
- Jason Klug, Department of Radiology, Mayo Clinic, Rochester, MN

16. Mukherjee S, Patra A, Khasawneh H, Korfiatis P, Rajamohan N, Suman G, Majumder S, Panda A, Johnson MP, Larson NB, Wright DE, Kline TL, Fletcher JG, Chari ST, Goenka AH. Radiomics-based Machine-learning Models Can Detect Pancreatic Cancer on Prediagnostic Computed Tomography Scans at a Substantial Lead Time Before Clinical Diagnosis. Gastroenterology 2022; 163:1435-1446.e3. [PMID: 35788343] [DOI: 10.1053/j.gastro.2022.06.066]
Abstract
BACKGROUND & AIMS Our purpose was to detect pancreatic ductal adenocarcinoma (PDAC) at the prediagnostic stage (3-36 months before clinical diagnosis) using radiomics-based machine-learning (ML) models, and to compare performance against radiologists in a case-control study. METHODS Volumetric pancreas segmentation was performed on prediagnostic computed tomography scans (CTs) (median interval between CT and PDAC diagnosis: 398 days) of 155 patients and an age-matched cohort of 265 subjects with normal pancreas. A total of 88 first-order and gray-level radiomic features were extracted, and 34 features were selected through a least absolute shrinkage and selection operator (LASSO)-based feature selection method. The dataset was randomly divided into training (292 CTs: 110 prediagnostic and 182 controls) and test subsets (128 CTs: 45 prediagnostic and 83 controls). Four ML classifiers, k-nearest neighbor (KNN), support vector machine (SVM), random forest (RF), and extreme gradient boosting (XGBoost), were evaluated. The specificity of the model with the highest accuracy was further validated on an independent internal dataset (n = 176) and the public National Institutes of Health dataset (n = 80). Two radiologists (R4 and R5) independently evaluated the pancreas on a 5-point diagnostic scale. RESULTS Median (range) time between prediagnostic CTs of the test subset and PDAC diagnosis was 386 (97-1092) days. SVM had the highest sensitivity (mean; 95% confidence interval) (95.5; 85.5-100.0), specificity (90.3; 84.3-91.5), F1-score (89.5; 82.3-91.7), area under the curve (AUC) (0.98; 0.94-0.98), and accuracy (92.2%; 86.7-93.7) for classification of CTs into prediagnostic versus normal. All 3 other ML models, KNN, RF, and XGBoost, had comparable AUCs (0.95, 0.95, and 0.96, respectively). The high specificity of SVM was generalizable to both the independent internal (92.6%) and the National Institutes of Health dataset (96.2%). In contrast, interreader radiologist agreement was only fair (Cohen's kappa 0.3) and their mean AUC (0.66; 0.46-0.86) was lower than each of the 4 ML models (AUCs: 0.95-0.98) (P < .001). Radiologists also recorded false-positive indirect findings of PDAC in control subjects (n = 83) (7% R4, 18% R5). CONCLUSIONS Radiomics-based ML models can detect PDAC from normal pancreas when it is beyond human interrogation capability at a substantial lead time before clinical diagnosis. Prospective validation and integration of such models with complementary fluid-based biomarkers have the potential for PDAC detection at a stage when surgical cure is a possibility.
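As a hedged sketch of the modeling approach summarized above — LASSO-based feature selection followed by an SVM classifier evaluated with AUC — the example below uses scikit-learn on synthetic placeholder data; the feature matrix, labels, hyperparameters, and the choice to retain 34 features are illustrative assumptions, not the study's data or code.

```python
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for 88 first-order and gray-level radiomic features per CT
rng = np.random.default_rng(42)
X = rng.normal(size=(420, 88))
y = (X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=420) > 0).astype(int)  # 1 = prediagnostic, 0 = control (synthetic)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

# LASSO-based selection of (up to) 34 features, then an RBF-kernel SVM scored by AUC
selector = SelectFromModel(LassoCV(cv=5, random_state=0), max_features=34, threshold=-np.inf)
model = make_pipeline(StandardScaler(), selector, SVC(kernel="rbf", probability=True, random_state=0))
model.fit(X_train, y_train)

auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Test AUC: {auc:.2f}")
```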
Collapse
Affiliation(s)
| | - Anurima Patra
- Department of Radiology, Tata Medical Centre, Kolkata, India
| | - Hala Khasawneh
- Department of Radiology, Mayo Clinic, Rochester, Minnesota
| | | | | | - Garima Suman
- Department of Radiology, Mayo Clinic, Rochester, Minnesota
| | - Shounak Majumder
- Department of Gastroenterology, Mayo Clinic, Rochester, Minnesota
| | - Ananya Panda
- Department of Radiology, Mayo Clinic, Rochester, Minnesota
| | - Matthew P Johnson
- Department of Biomedical Statistics and Informatics, Mayo Clinic, Rochester, Minnesota
| | - Nicholas B Larson
- Department of Radiology, Mayo Clinic, Rochester, Minnesota; Department of Biomedical Statistics and Informatics, Mayo Clinic, Rochester, Minnesota
| | | | | | | | - Suresh T Chari
- Department of Gastroenterology, Mayo Clinic, Rochester, Minnesota; Department of Gastroenterology, Hepatology, and Nutrition, The University of Texas MD Anderson Cancer Center, Houston, Texas
| | - Ajit H Goenka
- Department of Radiology, Mayo Clinic, Rochester, Minnesota.
| |
Collapse
|
17
|
Wright DE, Mukherjee S, Patra A, Khasawneh H, Korfiatis P, Suman G, Chari ST, Kudva YC, Kline TL, Goenka AH. Radiomics-based machine learning (ML) classifier for detection of type 2 diabetes on standard-of-care abdomen CTs: a proof-of-concept study. Abdom Radiol (NY) 2022; 47:3806-3816. [PMID: 36085379 DOI: 10.1007/s00261-022-03668-1] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2022] [Revised: 08/26/2022] [Accepted: 08/27/2022] [Indexed: 01/18/2023]
Abstract
PURPOSE To determine whether a pancreas radiomics-based AI model can detect the CT imaging signature of type 2 diabetes (T2D). METHODS A total of 107 radiomic features were extracted from the volumetrically segmented normal pancreas in 422 T2D patients and 456 age-matched controls. The dataset was randomly split into training (300 T2D, 300 control CTs) and test subsets (122 T2D, 156 control CTs). An XGBoost model, trained on 10 features selected through a top-K-based selection method and optimized through threefold cross-validation on the training subset, was evaluated on the test subset. RESULTS The model correctly classified 73 (60%) T2D patients and 96 (62%) controls, yielding an F1-score, sensitivity, specificity, precision, and AUC of 0.57, 0.62, 0.61, 0.55, and 0.65, respectively. The model's performance was equivalent across gender, CT slice thicknesses, and CT vendors (p values > 0.05). There was no difference between correctly classified versus misclassified patients in the mean (range) T2D duration [4.5 (0-15.4) versus 4.8 (0-15.7) years, p = 0.8], antidiabetic treatment [insulin (22% versus 18%), oral antidiabetics (10% versus 18%), both (41% versus 39%) (p > 0.05)], and treatment duration [5.4 (0-15) versus 5 (0-13) years, p = 0.4]. CONCLUSION A pancreas radiomics-based AI model can detect the imaging signature of T2D. Further refinement and validation are needed to evaluate its potential for opportunistic T2D detection on the millions of CTs that are performed annually.
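The sketch below illustrates, under stated assumptions, the kind of workflow described above: top-K feature selection (here SelectKBest with an ANOVA F-test, which may differ from the study's exact criterion) feeding an XGBoost classifier tuned with threefold cross-validation; all data, labels, and hyperparameter ranges are synthetic placeholders.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from xgboost import XGBClassifier

# Synthetic stand-in for 107 pancreas radiomic features (not the study data)
rng = np.random.default_rng(1)
X_train = rng.normal(size=(600, 107))
y_train = (X_train[:, :3].sum(axis=1) + rng.normal(size=600) > 0).astype(int)  # 1 = T2D, 0 = control (synthetic)

pipe = Pipeline([
    ("select", SelectKBest(score_func=f_classif, k=10)),          # keep the top 10 features
    ("xgb", XGBClassifier(eval_metric="logloss", random_state=0)),
])

# Threefold cross-validation over a small hyperparameter grid
grid = GridSearchCV(
    pipe,
    param_grid={"xgb__max_depth": [3, 5], "xgb__n_estimators": [100, 300]},
    scoring="roc_auc",
    cv=3,
)
grid.fit(X_train, y_train)
print("Best cross-validated AUC:", round(grid.best_score_, 2))
```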
Collapse
Affiliation(s)
- Darryl E Wright
- Department of Radiology, Mayo Clinic, 200 First Street SW, Charlton 1, Rochester, MN, 55905, USA
| | - Sovanlal Mukherjee
- Department of Radiology, Mayo Clinic, 200 First Street SW, Charlton 1, Rochester, MN, 55905, USA
| | - Anurima Patra
- Department of Radiology, Tata Medical Center, Kolkata, 700160, India
| | - Hala Khasawneh
- Department of Radiology, Mayo Clinic, 200 First Street SW, Charlton 1, Rochester, MN, 55905, USA
| | - Panagiotis Korfiatis
- Department of Radiology, Mayo Clinic, 200 First Street SW, Charlton 1, Rochester, MN, 55905, USA
| | - Garima Suman
- Department of Radiology, Mayo Clinic, 200 First Street SW, Charlton 1, Rochester, MN, 55905, USA
| | - Suresh T Chari
- Department of Gastroenterology, Hepatology and Nutrition, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX, 77030, USA
- Department of Gastroenterology, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA
| | - Yogish C Kudva
- Department of Endocrinology, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA
| | - Timothy L Kline
- Department of Radiology, Mayo Clinic, 200 First Street SW, Charlton 1, Rochester, MN, 55905, USA
| | - Ajit H Goenka
- Department of Radiology, Mayo Clinic, 200 First Street SW, Charlton 1, Rochester, MN, 55905, USA.
| |
Collapse
|
18
|
Lafata KJ, Wang Y, Konkel B, Yin FF, Bashir MR. Radiomics: a primer on high-throughput image phenotyping. Abdom Radiol (NY) 2022; 47:2986-3002. [PMID: 34435228 DOI: 10.1007/s00261-021-03254-x] [Citation(s) in RCA: 45] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2021] [Revised: 08/15/2021] [Accepted: 08/16/2021] [Indexed: 01/18/2023]
Abstract
Radiomics is a high-throughput approach to image phenotyping. It uses computer algorithms to extract and analyze a large number of quantitative features from radiological images. These radiomic features collectively describe unique patterns that can serve as digital fingerprints of disease. They may also capture imaging characteristics that are difficult or impossible for the human eye to characterize. The rapid development of this field is motivated by systems biology, facilitated by data analytics, and powered by artificial intelligence. Here, as part of Abdominal Radiology's special issue on Quantitative Imaging, we provide an introduction to the field of radiomics. The technique is formally introduced as an advanced application of data analytics, with illustrative examples in abdominal radiology. Artificial intelligence is then presented as the main driving force of radiomics, and common techniques are defined and briefly compared. The complete step-by-step process of radiomic phenotyping is then broken down into five key phases. Potential pitfalls of each phase are highlighted, and recommendations are provided to reduce sources of variation, non-reproducibility, and error associated with radiomics.
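As a minimal illustration of the feature-extraction phase of the radiomics workflow outlined above, the sketch below computes a handful of first-order features over a region of interest with NumPy/SciPy; it is purely illustrative — standardized toolkits such as PyRadiomics implement the full feature set with proper image preprocessing and gray-level discretization.

```python
import numpy as np
from scipy import stats

def first_order_features(image: np.ndarray, mask: np.ndarray) -> dict:
    """A few first-order radiomic features over the voxels of `image` inside the binary `mask`."""
    roi = image[mask.astype(bool)]
    counts, _ = np.histogram(roi, bins=64)
    p = counts / counts.sum()
    p = p[p > 0]
    return {
        "mean": float(roi.mean()),
        "std": float(roi.std()),
        "skewness": float(stats.skew(roi)),
        "kurtosis": float(stats.kurtosis(roi)),
        "entropy": float(-(p * np.log2(p)).sum()),
        "p10": float(np.percentile(roi, 10)),
        "p90": float(np.percentile(roi, 90)),
    }

# Toy example: a synthetic CT volume (Hounsfield units) with a spherical region of interest
rng = np.random.default_rng(0)
volume = rng.normal(loc=40.0, scale=15.0, size=(64, 64, 64))
zz, yy, xx = np.ogrid[:64, :64, :64]
mask = (zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2
print(first_order_features(volume, mask))
```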
Collapse
Affiliation(s)
- Kyle J Lafata
- Department of Radiology, Duke University School of Medicine, Durham, NC, USA. .,Department of Radiation Oncology, Duke University School of Medicine, Durham, NC, USA. .,Department of Electrical & Computer Engineering, Duke University Pratt School of Engineering, Durham, NC, USA.
| | - Yuqi Wang
- Department of Electrical & Computer Engineering, Duke University Pratt School of Engineering, Durham, NC, USA
| | - Brandon Konkel
- Department of Radiology, Duke University School of Medicine, Durham, NC, USA
| | - Fang-Fang Yin
- Department of Radiation Oncology, Duke University School of Medicine, Durham, NC, USA
| | - Mustafa R Bashir
- Department of Radiology, Duke University School of Medicine, Durham, NC, USA.,Department of Medicine, Gastroenterology, Duke University School of Medicine, Durham, NC, USA
| |
Collapse
|
19
|
Laino ME, Ammirabile A, Lofino L, Mannelli L, Fiz F, Francone M, Chiti A, Saba L, Orlandi MA, Savevski V. Artificial Intelligence Applied to Pancreatic Imaging: A Narrative Review. Healthcare (Basel) 2022; 10:1511. [PMID: 36011168 PMCID: PMC9408381 DOI: 10.3390/healthcare10081511] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2022] [Revised: 07/31/2022] [Accepted: 08/08/2022] [Indexed: 12/19/2022] Open
Abstract
The diagnosis, evaluation, and treatment planning of pancreatic pathologies usually require the combined use of different imaging modalities, mainly, computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET). Artificial intelligence (AI) has the potential to transform the clinical practice of medical imaging and has been applied to various radiological techniques for different purposes, such as segmentation, lesion detection, characterization, risk stratification, or prediction of response to treatments. The aim of the present narrative review is to assess the available literature on the role of AI applied to pancreatic imaging. Up to now, the use of computer-aided diagnosis (CAD) and radiomics in pancreatic imaging has proven to be useful for both non-oncological and oncological purposes and represents a promising tool for personalized approaches to patients. Although great developments have occurred in recent years, it is important to address the obstacles that still need to be overcome before these technologies can be implemented into our clinical routine, mainly considering the heterogeneity among studies.
Collapse
Affiliation(s)
- Maria Elena Laino
- Artificial Intelligence Center, IRCCS Humanitas Research Hospital, Via Manzoni 56, Rozzano, 20089 Milan, Italy
| | - Angela Ammirabile
- Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy
- Department of Diagnostic and Interventional Radiology, IRCCS Humanitas Research Hospital, Via Manzoni 56, Rozzano, 20089 Milan, Italy
| | - Ludovica Lofino
- Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy
- Department of Diagnostic and Interventional Radiology, IRCCS Humanitas Research Hospital, Via Manzoni 56, Rozzano, 20089 Milan, Italy
| | | | - Francesco Fiz
- Nuclear Medicine Unit, Department of Diagnostic Imaging, E.O. Ospedali Galliera, 56321 Genoa, Italy
- Department of Nuclear Medicine and Clinical Molecular Imaging, University Hospital, 72074 Tübingen, Germany
| | - Marco Francone
- Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy
- Department of Diagnostic and Interventional Radiology, IRCCS Humanitas Research Hospital, Via Manzoni 56, Rozzano, 20089 Milan, Italy
| | - Arturo Chiti
- Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy
- Department of Nuclear Medicine, IRCCS Humanitas Research Hospital, Via Manzoni 56, Rozzano, 20089 Milan, Italy
| | - Luca Saba
- Department of Radiology, University of Cagliari, 09124 Cagliari, Italy
| | | | - Victor Savevski
- Artificial Intelligence Center, IRCCS Humanitas Research Hospital, Via Manzoni 56, Rozzano, 20089 Milan, Italy
| |
Collapse
|
20
|
Tallam H, Elton DC, Lee S, Wakim P, Pickhardt PJ, Summers RM. Fully Automated Abdominal CT Biomarkers for Type 2 Diabetes Using Deep Learning. Radiology 2022; 304:85-95. [PMID: 35380492 DOI: 10.1148/radiol.211914] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/02/2023]
Abstract
Background CT biomarkers both inside and outside the pancreas can potentially be used to diagnose type 2 diabetes mellitus. Previous studies on this topic have shown significant results but were limited by manual methods and small study samples. Purpose To investigate abdominal CT biomarkers for type 2 diabetes mellitus in a large clinical data set using fully automated deep learning. Materials and Methods For external validation, noncontrast abdominal CT images were retrospectively collected from consecutive patients who underwent routine colorectal cancer screening with CT colonography from 2004 to 2016. The pancreas was segmented using a deep learning method that outputs measurements of interest, including CT attenuation, volume, fat content, and pancreas fractal dimension. Additional biomarkers assessed included visceral fat, atherosclerotic plaque, liver and muscle CT attenuation, and muscle volume. Univariable and multivariable analyses were performed, separating patients into groups based on time between type 2 diabetes diagnosis and CT date and including clinical factors such as sex, age, body mass index (BMI), BMI greater than 30 kg/m2, and height. The best set of predictors for type 2 diabetes were determined using multinomial logistic regression. Results A total of 8992 patients (mean age, 57 years ± 8 [SD]; 5009 women) were evaluated in the test set, of whom 572 had type 2 diabetes mellitus. The deep learning model had a mean Dice similarity coefficient for the pancreas of 0.69 ± 0.17, similar to the interobserver Dice similarity coefficient of 0.69 ± 0.09 (P = .92). The univariable analysis showed that patients with diabetes had, on average, lower pancreatic CT attenuation (mean, 18.74 HU ± 16.54 vs 29.99 HU ± 13.41; P < .0001) and greater visceral fat volume (mean, 235.0 mL ± 108.6 vs 130.9 mL ± 96.3; P < .0001) than those without diabetes. Patients with diabetes also showed a progressive decrease in pancreatic attenuation with greater duration of disease. The final multivariable model showed pairwise areas under the receiver operating characteristic curve (AUCs) of 0.81 and 0.85 between patients without and patients with diabetes who were diagnosed 0-2499 days before and after undergoing CT, respectively. In the multivariable analysis, adding clinical data did not improve upon CT-based AUC performance (AUC = 0.67 for the CT-only model vs 0.68 for the CT and clinical model). The best predictors of type 2 diabetes mellitus included intrapancreatic fat percentage, pancreatic fractal dimension, plaque severity between the L1 and L4 vertebra levels, average liver CT attenuation, and BMI. Conclusion The diagnosis of type 2 diabetes mellitus was associated with abdominal CT biomarkers, especially measures of pancreatic CT attenuation and visceral fat. © RSNA, 2022 Online supplemental material is available for this article.
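A hedged sketch of the final modeling step described above — a multinomial logistic regression over a small set of CT biomarkers — is shown below with scikit-learn; the feature columns, three-group labels, and macro one-vs-rest AUC summary are synthetic stand-ins under stated assumptions, not the study's variables, cutoffs, or results.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-ins for CT biomarkers such as intrapancreatic fat percentage,
# pancreatic fractal dimension, plaque severity, liver attenuation, and BMI
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
lin = 1.5 * X[:, 0] - 1.0 * X[:, 3] + rng.normal(size=2000)
y = np.digitize(lin, bins=[-0.5, 0.5])  # 0 = no diabetes, 1/2 = diabetes groups (synthetic)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)

# Multinomial logistic regression over the (standardized) biomarkers
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)

# Macro one-vs-rest AUC as a simple multi-class summary (the study reports pairwise AUCs)
probs = clf.predict_proba(X_test)
print("Macro OvR AUC:", round(roc_auc_score(y_test, probs, multi_class="ovr"), 2))
```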
Collapse
Affiliation(s)
- Hima Tallam
- From the Department of Radiology and Imaging Sciences (H.T., D.C.E., S.L., R.M.S.) and Department of Biostatistics and Clinical Epidemiology Service (P.W.), Clinical Center, National Institutes of Health, 10 Center Dr, Bldg 10, Room 1C224D, MSC 1182, Bethesda, MD 20892-1182; and Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wis (P.J.P.)
| | - Daniel C Elton
- From the Department of Radiology and Imaging Sciences (H.T., D.C.E., S.L., R.M.S.) and Department of Biostatistics and Clinical Epidemiology Service (P.W.), Clinical Center, National Institutes of Health, 10 Center Dr, Bldg 10, Room 1C224D, MSC 1182, Bethesda, MD 20892-1182; and Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wis (P.J.P.)
| | - Sungwon Lee
- From the Department of Radiology and Imaging Sciences (H.T., D.C.E., S.L., R.M.S.) and Department of Biostatistics and Clinical Epidemiology Service (P.W.), Clinical Center, National Institutes of Health, 10 Center Dr, Bldg 10, Room 1C224D, MSC 1182, Bethesda, MD 20892-1182; and Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wis (P.J.P.)
| | - Paul Wakim
- From the Department of Radiology and Imaging Sciences (H.T., D.C.E., S.L., R.M.S.) and Department of Biostatistics and Clinical Epidemiology Service (P.W.), Clinical Center, National Institutes of Health, 10 Center Dr, Bldg 10, Room 1C224D, MSC 1182, Bethesda, MD 20892-1182; and Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wis (P.J.P.)
| | - Perry J Pickhardt
- From the Department of Radiology and Imaging Sciences (H.T., D.C.E., S.L., R.M.S.) and Department of Biostatistics and Clinical Epidemiology Service (P.W.), Clinical Center, National Institutes of Health, 10 Center Dr, Bldg 10, Room 1C224D, MSC 1182, Bethesda, MD 20892-1182; and Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wis (P.J.P.)
| | - Ronald M Summers
- From the Department of Radiology and Imaging Sciences (H.T., D.C.E., S.L., R.M.S.) and Department of Biostatistics and Clinical Epidemiology Service (P.W.), Clinical Center, National Institutes of Health, 10 Center Dr, Bldg 10, Room 1C224D, MSC 1182, Bethesda, MD 20892-1182; and Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wis (P.J.P.)
| |
Collapse
|
21
|
Artificial intelligence in gastrointestinal and hepatic imaging: past, present and future scopes. Clin Imaging 2022; 87:43-53. [DOI: 10.1016/j.clinimag.2022.04.007] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2021] [Revised: 03/09/2022] [Accepted: 04/11/2022] [Indexed: 11/19/2022]
|
22
|
Roger R, Hilmes MA, Williams JM, Moore DJ, Powers AC, Craddock RC, Virostko J. Deep learning-based pancreas volume assessment in individuals with type 1 diabetes. BMC Med Imaging 2022; 22:5. [PMID: 34986790 PMCID: PMC8734282 DOI: 10.1186/s12880-021-00729-7] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2021] [Accepted: 12/10/2021] [Indexed: 01/11/2023] Open
Abstract
Pancreas volume is reduced in individuals with diabetes and in autoantibody-positive individuals at high risk for developing type 1 diabetes (T1D). Efforts are underway to assess pancreas volume in large clinical studies and databases, but manual pancreas annotation is time-consuming and subjective, preventing extension to that scale. This study develops deep learning for automated pancreas volume measurement in individuals with diabetes. A convolutional neural network was trained using manual pancreas annotation on 160 abdominal magnetic resonance imaging (MRI) scans from individuals with T1D, controls, or a combination thereof. Models trained using each cohort were then tested on scans of 25 individuals with T1D. Deep learning and manual segmentations of the pancreas displayed high overlap (Dice coefficient = 0.81) and excellent correlation of pancreas volume measurements (R² = 0.94). Correlation was highest when the training data included individuals both with and without T1D. The pancreas of individuals with T1D can thus be automatically segmented to measure pancreas volume, and this algorithm can be applied to large imaging datasets to quantify the spectrum of human pancreas volume.
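To illustrate the volume measurement and agreement analysis described above, a minimal sketch follows: pancreas volume computed from a binary segmentation mask given the voxel spacing, and the coefficient of determination (R²) between automated and manual volumes; all values are synthetic assumptions, not the study's data.

```python
import numpy as np

def pancreas_volume_ml(mask: np.ndarray, voxel_spacing_mm: tuple[float, float, float]) -> float:
    """Volume of a binary pancreas mask in millilitres, given voxel spacing in millimetres."""
    voxel_volume_mm3 = float(np.prod(voxel_spacing_mm))
    return mask.astype(bool).sum() * voxel_volume_mm3 / 1000.0  # mm^3 -> mL

# Toy mask on a 1.0 x 1.0 x 3.0 mm grid
rng = np.random.default_rng(0)
mask = rng.random((256, 256, 40)) > 0.99
print(f"Volume: {pancreas_volume_ml(mask, (1.0, 1.0, 3.0)):.1f} mL")

# Agreement between automated and manual volumes across a small cohort (synthetic numbers)
manual = rng.uniform(30, 90, size=25)              # manual volumes in mL
automated = manual + rng.normal(0, 3, size=25)     # automated volumes with small random error
ss_res = np.sum((manual - automated) ** 2)
ss_tot = np.sum((manual - manual.mean()) ** 2)
print(f"R^2 = {1.0 - ss_res / ss_tot:.3f}")
```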
Collapse
Affiliation(s)
- Raphael Roger
- Department of Diagnostic Medicine, Dell Medical School, University of Texas at Austin, 1701 Trinity St., Stop C0200, Austin, TX, 78712, USA
| | - Melissa A Hilmes
- Department of Radiology and Radiological Sciences, Vanderbilt University Medical Center, Nashville, TN, USA.,Department of Pediatrics, Vanderbilt University Medical Center, Nashville, TN, USA
| | - Jonathan M Williams
- Division of Diabetes, Endocrinology, and Metabolism, Department of Medicine, Vanderbilt University Medical Center, Nashville, TN, USA
| | - Daniel J Moore
- Department of Pediatrics, Vanderbilt University Medical Center, Nashville, TN, USA.,Department of Pathology, Immunology, and Microbiology, Vanderbilt University, Nashville, TN, USA
| | - Alvin C Powers
- Division of Diabetes, Endocrinology, and Metabolism, Department of Medicine, Vanderbilt University Medical Center, Nashville, TN, USA.,Department of Molecular Physiology and Biophysics, Vanderbilt University, Nashville, TN, USA.,VA Tennessee Valley Healthcare System, Nashville, TN, USA
| | - R Cameron Craddock
- Department of Diagnostic Medicine, Dell Medical School, University of Texas at Austin, 1701 Trinity St., Stop C0200, Austin, TX, 78712, USA
| | - John Virostko
- Department of Diagnostic Medicine, Dell Medical School, University of Texas at Austin, 1701 Trinity St., Stop C0200, Austin, TX, 78712, USA. .,Livestrong Cancer Institutes, University of Texas at Austin, Austin, TX, USA. .,Department of Oncology, University of Texas at Austin, Austin, TX, USA. .,Oden Institute for Computational Engineering and Sciences, University of Texas at Austin, Austin, TX, USA.
| |
Collapse
|