1
Zhuang Y, Mathai TS, Mukherjee P, Khoury B, Kim B, Hou B, Rabbee N, Suri A, Summers RM. MRISegmenter: A Fully Accurate and Robust Automated Multiorgan and Structure Segmentation Tool for T1-weighted Abdominal MRI. Radiology 2025; 315:e241979. [PMID: 40261178] [DOI: 10.1148/radiol.241979]
Abstract
Background: There is a pressing demand for an automated segmentation tool for abdominal MRI that can provide accurate and robust segmentation of more than 60 abdominal organs and structures.
Purpose: To develop and evaluate the accuracy and robustness of an automated multiorgan and structure segmentation tool for T1-weighted abdominal MRI.
Materials and Methods: In this retrospective study, a T1-weighted abdominal MRI dataset from a randomly selected sample of patients at the National Institutes of Health Clinical Center was included; for each patient, it comprised axial precontrast T1-weighted and contrast-enhanced T1-weighted arterial, portal venous, and delayed phase series. Each MRI series contained voxel-level annotations of 62 abdominal organs and structures. This internal dataset was randomly split into training and internal test sets, and a three-dimensional nnU-Net segmentation model, called MRISegmenter, was trained on the training set. Evaluation was conducted on the internal test set and two external test sets (Abdominal Multi-Organ Segmentation Challenge 2022 [AMOS22] and Duke Liver). Predicted segmentations were compared against radiologist-verified reference standard annotations using means ± SDs of the Dice similarity coefficient (Dice score) and normalized surface distance (NSD). The segmentation tool and dataset are publicly available at https://github.com/rsummers11/MRISegmenter.
Results: A total of 195 patients (training set, 135 patients [mean age, 54.7 years ± 16.3 {SD}; 72 male patients, 63 female patients]; internal test set, 60 patients [mean age, 51.1 years ± 14.4; 26 male patients, 34 female patients]) with 780 MRI scans containing 62 annotations each were included. On the internal test set, MRISegmenter achieved a mean Dice score of 0.861 ± 0.118 and a mean NSD of 0.924 ± 0.073. On the external AMOS22 (60 MRI scans) and Duke Liver (95 patients; 172 MRI scans) test sets, MRISegmenter attained mean Dice scores of 0.829 ± 0.133 and 0.933 ± 0.015 and mean NSDs of 0.908 ± 0.067 and 0.929 ± 0.021, respectively.
Conclusion: MRISegmenter provided accurate and robust segmentation of 62 organs and structures at T1-weighted abdominal MRI. © RSNA, 2025. Supplemental material is available for this article. See also the editorial by Murphy in this issue.
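The per-structure metrics reported above are straightforward to reproduce from binary masks. Below is a minimal Python sketch of the Dice score and a surface-distance-based NSD; it assumes NumPy/SciPy, millimeter voxel spacing, and an illustrative 2 mm tolerance, and the function names are placeholders rather than code from the MRISegmenter repository.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice_score(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 2.0 * np.logical_and(pred, ref).sum() / denom if denom else 1.0

def surface(mask: np.ndarray) -> np.ndarray:
    """Boundary voxels of a binary mask (mask minus its erosion)."""
    mask = mask.astype(bool)
    return mask & ~binary_erosion(mask)

def nsd(pred: np.ndarray, ref: np.ndarray,
        tol_mm: float = 2.0, spacing=(1.0, 1.0, 1.0)) -> float:
    """Normalized surface distance: fraction of boundary voxels of each mask
    lying within tol_mm of the other mask's boundary."""
    s_pred, s_ref = surface(pred), surface(ref)
    # Distance (in mm) from every voxel to the nearest boundary voxel.
    d_to_ref = distance_transform_edt(~s_ref, sampling=spacing)
    d_to_pred = distance_transform_edt(~s_pred, sampling=spacing)
    within = (d_to_ref[s_pred] <= tol_mm).sum() + (d_to_pred[s_ref] <= tol_mm).sum()
    total = s_pred.sum() + s_ref.sum()
    return within / total if total else 1.0
```

Per-organ values computed this way would then be averaged across the 62 structures and across cases to obtain mean ± SD figures of the kind reported in the abstract.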
Affiliation(s)
- Yan Zhuang
- Department of Diagnostic, Molecular, and Interventional Radiology, Windreich Department of Artificial Intelligence and Human Health, Icahn School of Medicine at Mount Sinai, New York, NY
- Tejas Sudharshan Mathai
- Department of Radiology and Imaging Sciences, Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, National Institutes of Health Clinical Center, 10 Center Dr, Bldg 10, Rm 1C224D, Bethesda, MD 20892-1182
- Pritam Mukherjee
- Department of Radiology and Imaging Sciences, Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, National Institutes of Health Clinical Center, 10 Center Dr, Bldg 10, Rm 1C224D, Bethesda, MD 20892-1182
- Brandon Khoury
- Department of Radiology, Walter Reed National Military Medical Center, Bethesda, Md
- Boah Kim
- Department of Radiology and Imaging Sciences, Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, National Institutes of Health Clinical Center, 10 Center Dr, Bldg 10, Rm 1C224D, Bethesda, MD 20892-1182
- Now with Department of MetaBioHealth, Sungkyunkwan University, Seoul, Korea
- Benjamin Hou
- Department of Radiology and Imaging Sciences, Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, National Institutes of Health Clinical Center, 10 Center Dr, Bldg 10, Rm 1C224D, Bethesda, MD 20892-1182
- Nusrat Rabbee
- Department of Biostatistics and Clinical Epidemiology Services, National Institutes of Health Clinical Center, Bethesda, Md
- Abhinav Suri
- Department of Radiology and Imaging Sciences, Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, National Institutes of Health Clinical Center, 10 Center Dr, Bldg 10, Rm 1C224D, Bethesda, MD 20892-1182
- Ronald M Summers
- Department of Radiology and Imaging Sciences, Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, National Institutes of Health Clinical Center, 10 Center Dr, Bldg 10, Rm 1C224D, Bethesda, MD 20892-1182
2
Pomohaci MD, Grasu MC, Băicoianu-Nițescu AŞ, Enache RM, Lupescu IG. Systematic Review: AI Applications in Liver Imaging with a Focus on Segmentation and Detection. Life (Basel) 2025; 15:258. [PMID: 40003667] [PMCID: PMC11856300] [DOI: 10.3390/life15020258]
Abstract
The liver is a frequent focus in radiology due to its diverse pathology, and artificial intelligence (AI) could improve its diagnosis and management. This systematic review aimed to assess and categorize research studies on AI applications in liver radiology from 2018 to 2024, classifying them according to area of interest (AOI), AI task, and imaging modality used. Reviews and studies not involving the liver or radiology were excluded. Following the PRISMA guidelines, we identified 6680 articles from the PubMed/Medline, Scopus, and Web of Science databases; 1232 were found to be eligible. A further analysis was performed on a subgroup of 329 studies focused on detection and/or segmentation tasks. Liver lesions were the main AOI and CT was the most popular modality, while classification was the predominant AI task. Most detection and/or segmentation studies (48.02%) used only public datasets, and 27.65% used only one public dataset. Code sharing was practiced by 10.94% of these articles. This review highlights the predominance of classification tasks, especially those applied to liver lesion imaging, most often using CT. Detection and/or segmentation tasks relied mostly on public datasets, while external testing and code sharing were lacking. Future research should explore multi-task models and improve dataset availability to enhance AI's clinical impact in liver imaging.
Affiliation(s)
- Mihai Dan Pomohaci
- Department 8: Radiology, Discipline of Radiology, Medical Imaging and Interventional Radiology I, University of Medicine and Pharmacy “Carol Davila”, 050474 Bucharest, Romania; (M.D.P.); (A.-Ș.B.-N.)
- Department of Radiology and Medical Imaging, Fundeni Clinical Institute, 022328 Bucharest, Romania;
- Mugur Cristian Grasu
- Department 8: Radiology, Discipline of Radiology, Medical Imaging and Interventional Radiology I, University of Medicine and Pharmacy “Carol Davila”, 050474 Bucharest, Romania; (M.D.P.); (A.-Ș.B.-N.)
- Department of Radiology and Medical Imaging, Fundeni Clinical Institute, 022328 Bucharest, Romania;
- Alexandru-Ştefan Băicoianu-Nițescu
- Department 8: Radiology, Discipline of Radiology, Medical Imaging and Interventional Radiology I, University of Medicine and Pharmacy “Carol Davila”, 050474 Bucharest, Romania; (M.D.P.); (A.-Ș.B.-N.)
- Department of Radiology and Medical Imaging, Fundeni Clinical Institute, 022328 Bucharest, Romania;
- Robert Mihai Enache
- Department of Radiology and Medical Imaging, Fundeni Clinical Institute, 022328 Bucharest, Romania;
- Ioana Gabriela Lupescu
- Department 8: Radiology, Discipline of Radiology, Medical Imaging and Interventional Radiology I, University of Medicine and Pharmacy “Carol Davila”, 050474 Bucharest, Romania; (M.D.P.); (A.-Ș.B.-N.)
- Department of Radiology and Medical Imaging, Fundeni Clinical Institute, 022328 Bucharest, Romania;
3
Jeltsch P, Monnin K, Jreige M, Fernandes-Mendes L, Girardet R, Dromain C, Richiardi J, Vietti-Violi N. Magnetic Resonance Imaging Liver Segmentation Protocol Enables More Consistent and Robust Annotations, Paving the Way for Advanced Computer-Assisted Analysis. Diagnostics (Basel) 2024; 14:2785. [PMID: 39767146] [PMCID: PMC11726866] [DOI: 10.3390/diagnostics14242785]
Abstract
BACKGROUND/OBJECTIVES: Recent advancements in artificial intelligence (AI) have spurred interest in developing computer-assisted analysis for imaging examinations. However, the lack of high-quality datasets remains a significant bottleneck. Labeling instructions are critical for improving dataset quality but are often lacking. This study aimed to establish a liver MRI segmentation protocol and assess its impact on annotation quality and inter-reader agreement.
METHODS: This retrospective study included 20 patients with chronic liver disease. Manual liver segmentations were performed by a radiologist in training and a radiology technician on T2-weighted imaging (T2WI) and on T1-weighted imaging (T1WI) at the portal venous phase. Based on the inter-reader discrepancies identified after the first segmentation round, a segmentation protocol was established and used to guide the second round of segmentation, resulting in a total of 160 segmentations. The Dice Similarity Coefficient (DSC) assessed inter-reader agreement pre- and post-protocol, with a Wilcoxon signed-rank test for the per-volume analysis and an Aligned-Rank Transform (ART) for repeated-measures analysis of variance (ANOVA) for the per-slice analysis. Slice selection at extreme cranial or caudal liver positions was evaluated using the McNemar test.
RESULTS: The per-volume DSC increased significantly after protocol implementation for both T2WI (p < 0.001) and T1WI (p = 0.03). Per-slice DSC also improved significantly for both T2WI and T1WI (p < 0.001). The protocol reduced the number of liver segmentations with a non-annotated slice on T1WI (p = 0.04), but the change was not significant on T2WI (p = 0.16).
CONCLUSIONS: Establishing a liver MRI segmentation protocol improves annotation robustness and reproducibility, paving the way for advanced computer-assisted analysis. Moreover, segmentation protocols could be extended to other organs and lesions and incorporated into guidelines, thereby expanding the potential applications of AI in daily clinical practice.
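The per-volume comparison described here is a standard paired nonparametric test on the same patients measured twice. A minimal sketch with SciPy follows; the DSC values are synthetic stand-ins, not data from the study.

```python
import numpy as np
from scipy.stats import wilcoxon

def dsc(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two readers' binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Synthetic per-volume inter-reader DSC values for 20 patients; in practice
# each value would come from dsc(reader1_mask, reader2_mask) on one volume.
rng = np.random.default_rng(0)
dsc_pre = np.clip(rng.normal(0.93, 0.02, size=20), 0.0, 1.0)          # round 1, no protocol
dsc_post = np.clip(dsc_pre + rng.normal(0.01, 0.005, size=20), 0.0, 1.0)  # round 2, with protocol

stat, p = wilcoxon(dsc_post, dsc_pre)  # paired, two-sided signed-rank test
print(f"Wilcoxon signed-rank statistic = {stat:.1f}, p = {p:.4g}")
```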
Affiliation(s)
- Patrick Jeltsch
- Department of Radiology and Interventional Radiology, Lausanne University Hospital, Lausanne University, 1015 Lausanne, Switzerland; (P.J.); (K.M.); (M.J.); (L.F.-M.); (C.D.); (J.R.)
- Killian Monnin
- Department of Radiology and Interventional Radiology, Lausanne University Hospital, Lausanne University, 1015 Lausanne, Switzerland; (P.J.); (K.M.); (M.J.); (L.F.-M.); (C.D.); (J.R.)
- Mario Jreige
- Department of Radiology and Interventional Radiology, Lausanne University Hospital, Lausanne University, 1015 Lausanne, Switzerland; (P.J.); (K.M.); (M.J.); (L.F.-M.); (C.D.); (J.R.)
- Lucia Fernandes-Mendes
- Department of Radiology and Interventional Radiology, Lausanne University Hospital, Lausanne University, 1015 Lausanne, Switzerland; (P.J.); (K.M.); (M.J.); (L.F.-M.); (C.D.); (J.R.)
- Raphaël Girardet
- Department of Radiology, South Metropolitan Health Service, Murdoch, WA 6150, Australia;
- Clarisse Dromain
- Department of Radiology and Interventional Radiology, Lausanne University Hospital, Lausanne University, 1015 Lausanne, Switzerland; (P.J.); (K.M.); (M.J.); (L.F.-M.); (C.D.); (J.R.)
- Jonas Richiardi
- Department of Radiology and Interventional Radiology, Lausanne University Hospital, Lausanne University, 1015 Lausanne, Switzerland; (P.J.); (K.M.); (M.J.); (L.F.-M.); (C.D.); (J.R.)
- Naik Vietti-Violi
- Department of Radiology and Interventional Radiology, Lausanne University Hospital, Lausanne University, 1015 Lausanne, Switzerland; (P.J.); (K.M.); (M.J.); (L.F.-M.); (C.D.); (J.R.)
4
Kim B, Mathai TS, Helm K, Pinto PA, Summers RM. Classification of Multi-Parametric Body MRI Series Using Deep Learning. IEEE J Biomed Health Inform 2024; 28:6791-6802. [PMID: 39178097] [PMCID: PMC11574742] [DOI: 10.1109/jbhi.2024.3448373]
Abstract
Multi-parametric magnetic resonance imaging (mpMRI) exams comprise various series types acquired with different imaging protocols. The DICOM headers of these series often contain incorrect information because of the sheer diversity of protocols and occasional technologist errors. To address this, we present a deep learning-based classification model that distinguishes 8 different body mpMRI series types so that radiologists can read the exams efficiently. Using mpMRI data from various institutions, multiple deep learning-based classifiers (ResNet, EfficientNet, and DenseNet) were trained to classify the 8 MRI series types, and their performance was compared. The best-performing classifier was then identified, and its classification capability under different training data quantities was studied. The model was also evaluated on out-of-training-distribution datasets. Moreover, the model was trained using mpMRI exams obtained from different scanners under two training strategies, and its performance was tested. Experimental results show that the DenseNet-121 model achieved the highest F1-score and accuracy, 0.966 and 0.972, respectively, outperforming the other classification models (p < 0.05). The model showed greater than 0.95 accuracy when trained with over 729 studies, and its performance improved as the training data quantity grew. On the external DLDS and CPTAC-UCEC datasets, the model yielded accuracies of 0.872 and 0.810, respectively. These results indicate that, on both the internal and external datasets, the DenseNet-121 model attains high accuracy for the task of classifying 8 body MRI series types.
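As a rough illustration of the model family used here (not the authors' released code), a torchvision DenseNet-121 backbone can be given an 8-way head for series-type classification; the input handling and labels below are placeholders.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_SERIES_TYPES = 8  # e.g., pre/post-contrast T1 phases, T2, DWI, ADC (placeholder labels)

# DenseNet-121 backbone with the default 1000-class head swapped for 8 classes.
model = models.densenet121(weights=None)
model.classifier = nn.Linear(model.classifier.in_features, NUM_SERIES_TYPES)

# A single-channel MRI slice can be repeated to 3 channels to match the backbone.
x = torch.randn(4, 1, 224, 224).repeat(1, 3, 1, 1)   # dummy batch of 4 slices
labels = torch.randint(0, NUM_SERIES_TYPES, (4,))     # dummy series-type labels

logits = model(x)                                     # shape: (4, 8)
loss = nn.CrossEntropyLoss()(logits, labels)
loss.backward()
print("predicted series types:", logits.argmax(dim=1).tolist())
```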
5
Murugesan GK, McCrumb D, Aboian M, Verma T, Soni R, Memon F, Farahani K, Pei L, Wagner U, Fedorov AY, Clunie D, Moore S, Van Oss J. AI-Generated Annotations Dataset for Diverse Cancer Radiology Collections in NCI Image Data Commons. Sci Data 2024; 11:1165. [PMID: 39443503] [PMCID: PMC11500357] [DOI: 10.1038/s41597-024-03977-8]
Abstract
The National Cancer Institute (NCI) Image Data Commons (IDC) offers publicly available cancer radiology collections for cloud computing, which are crucial for developing advanced imaging tools and algorithms. Despite their potential, these collections are minimally annotated; only 4% of DICOM studies in the collections considered in this project had existing segmentation annotations. This project increases the quantity of segmentations in various IDC collections. We produced a dataset of high-quality, AI-generated imaging annotations of tissues, organs, and/or cancers for 11 distinct IDC image collections. These collections contain images from a variety of modalities, including computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET), and cover various body parts, such as the chest, breast, kidneys, prostate, and liver. A portion of the AI annotations was reviewed and corrected by a radiologist to assess the performance of the AI models. Both the AI's and the radiologist's annotations were encoded in conformance with the Digital Imaging and Communications in Medicine (DICOM) standard, allowing for seamless integration into the IDC collections as third-party analysis collections. All the models, images, and annotations are publicly accessible.
Affiliation(s)
- Tej Verma
- Yale School of Medicine, New Haven, CT, USA
- Linmin Pei
- Frederick National Laboratory for Cancer Research, Frederick, MD, USA
- Ulrike Wagner
- Frederick National Laboratory for Cancer Research, Frederick, MD, USA
- Andrey Y Fedorov
- Brigham and Women's Hospital and Harvard Medical School, Boston, MA, USA
6
Miller CM, Zhu Z, Mazurowski MA, Bashir MR, Wiggins WF. Automated selection of abdominal MRI series using a DICOM metadata classifier and selective use of a pixel-based classifier. Abdom Radiol (NY) 2024; 49:3735-3746. [PMID: 38860997] [DOI: 10.1007/s00261-024-04379-5]
Abstract
Accurate, automated MRI series identification is important for many applications, including display ("hanging") protocols, machine learning, and radiomics. Relying on the series description alone or on a pixel-based classifier alone each has limitations. We demonstrate a combined approach that uses a DICOM metadata-based classifier with selective use of a pixel-based classifier to identify abdominal MRI series. The metadata classifier was assessed alone (Group metadata) and combined with selective use of the pixel-based classifier for predictions with less than 70% certainty (Group combined). The overall accuracies (mean and 95% confidence interval) for the metadata and combined groups on the test dataset were 0.870 (95% CI: 0.824, 0.912) and 0.930 (95% CI: 0.893, 0.963), respectively. With this combined metadata and pixel-based approach, we demonstrate classification accuracy of 95% or greater for all pre-contrast MRI series and improved performance for some post-contrast series.
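The selective-fallback rule is simple to express in code. The sketch below uses hypothetical function and variable names (it is not the authors' implementation) and routes a series to the pixel-based model only when the metadata model's top class probability falls below the 70% threshold mentioned above.

```python
from typing import Callable, Dict, Sequence

CONFIDENCE_THRESHOLD = 0.70  # certainty cutoff reported in the abstract

def classify_series(metadata: Dict[str, str],
                    load_pixels: Callable[[], object],
                    metadata_model: Callable[[Dict[str, str]], Sequence[float]],
                    pixel_model: Callable[[object], Sequence[float]],
                    labels: Sequence[str]) -> str:
    """Predict an abdominal MRI series type from DICOM metadata, deferring to a
    pixel-based classifier only when the metadata prediction is uncertain."""
    meta_probs = metadata_model(metadata)
    best = max(range(len(labels)), key=lambda i: meta_probs[i])
    if meta_probs[best] >= CONFIDENCE_THRESHOLD:
        return labels[best]                    # confident metadata prediction
    pixel_probs = pixel_model(load_pixels())   # load voxels only when needed
    best = max(range(len(labels)), key=lambda i: pixel_probs[i])
    return labels[best]
```

One appeal of this design is that the fast metadata classifier handles the majority of series, while the heavier image-based model is reserved for the uncertain minority.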
Affiliation(s)
- Chad M Miller
- Duke University School of Medicine, Durham, NC, 27710, USA.
- Zhe Zhu
- Duke University School of Medicine, Durham, NC, 27710, USA
7
Patel N, Celaya A, Eltaher M, Glenn R, Savannah KB, Brock KK, Sanchez JI, Calderone TL, Cleere D, Elsaiey A, Cagley M, Gupta N, Victor D, Beretta L, Koay EJ, Netherton TJ, Fuentes DT. Training robust T1-weighted magnetic resonance imaging liver segmentation models using ensembles of datasets with different contrast protocols and liver disease etiologies. Sci Rep 2024; 14:20988. [PMID: 39251664] [PMCID: PMC11385384] [DOI: 10.1038/s41598-024-71674-y]
Abstract
Image segmentation of the liver is an important step in treatment planning for liver cancer. However, manual segmentation at a large scale is not practical, leading to increasing reliance on deep learning models to automatically segment the liver. This manuscript develops a generalizable deep learning model to segment the liver on T1-weighted MR images. In particular, three distinct deep learning architectures (nnUNet, PocketNet, Swin UNETR) were considered using data gathered from six geographically different institutions. A total of 819 T1-weighted MR images were gathered from both public and internal sources. Our experiments compared each architecture's testing performance when trained both intra-institutionally and inter-institutionally. Models trained using nnUNet and its PocketNet variant achieved mean Dice-Sorensen similarity coefficients > 0.9 on both intra- and inter-institutional test set data. The performance of these models suggests that nnUNet and PocketNet liver segmentation models trained on a large and diverse collection of T1-weighted MR images would on average achieve good intra-institutional segmentation performance.
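The intra- versus inter-institutional comparison boils down to two splitting schemes over the same pooled cases. Below is a minimal sketch of the leave-one-institution-out side of that comparison using scikit-learn; the arrays are placeholders, and this is not the authors' pipeline.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

# Placeholder case table: one row per T1-weighted MR image with its source
# institution; the study pooled 819 images from six institutions.
rng = np.random.default_rng(0)
image_ids = np.arange(819)
institutions = rng.integers(0, 6, size=819)

logo = LeaveOneGroupOut()
for train_idx, test_idx in logo.split(image_ids, groups=institutions):
    held_out = institutions[test_idx][0]
    # Here one would train a segmentation model (e.g., nnU-Net) on the
    # training images and report Dice on the held-out institution's cases.
    print(f"hold out institution {held_out}: "
          f"{len(train_idx)} train / {len(test_idx)} test images")
```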
Affiliation(s)
- Nihil Patel
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Department of Molecular and Cellular Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Adrian Celaya
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Department of Computational Applied Mathematics and Operations Research, Rice University, Houston, Texas, USA
- Department of Molecular and Cellular Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Mohamed Eltaher
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Department of Molecular and Cellular Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Rachel Glenn
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Department of Molecular and Cellular Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Kari Brewer Savannah
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Department of Molecular and Cellular Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Kristy K Brock
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Department of Molecular and Cellular Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Jessica I Sanchez
- Department of Molecular and Cellular Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Tiffany L Calderone
- Department of Molecular and Cellular Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Darrel Cleere
- Department of Gastroenterology, Houston Methodist Hospital, Houston, Texas, USA
- Department of Molecular and Cellular Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Ahmed Elsaiey
- Department of Gastroenterology, Houston Methodist Hospital, Houston, Texas, USA
- Department of Molecular and Cellular Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Matthew Cagley
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Department of Molecular and Cellular Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Nakul Gupta
- Department of Radiology, Houston Methodist Hospital, Houston, Texas, USA
- Department of Molecular and Cellular Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- David Victor
- Department of Gastroenterology, Houston Methodist Hospital, Houston, Texas, USA
- Department of Molecular and Cellular Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Laura Beretta
- Department of Molecular and Cellular Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Eugene J Koay
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Department of Molecular and Cellular Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Tucker J Netherton
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Department of Molecular and Cellular Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- David T Fuentes
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Department of Molecular and Cellular Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
8
Kim B, Mathai TS, Helm K, Summers RM. Automated Classification of Multi-Parametric Body MRI Series. arXiv 2024; arXiv:2405.08247v1. [PMID: 38903740] [PMCID: PMC11188138]
Abstract
Multi-parametric MRI (mpMRI) studies are widely available in clinical practice for the diagnosis of various diseases. As the volume of mpMRI exams increases yearly, there are concomitant inaccuracies within the DICOM header fields of these exams. This precludes the use of the header information for the arrangement of the different series as part of the radiologist's hanging protocol, and clinician oversight is needed for correction. In this pilot work, we propose an automated framework to classify 8 different series types in mpMRI studies. We used 1,363 studies acquired by three Siemens scanners to train a DenseNet-121 model with 5-fold cross-validation. Then, we evaluated the performance of the DenseNet-121 ensemble on a held-out test set of 313 mpMRI studies. Our method achieved an average precision of 96.6%, sensitivity of 96.6%, specificity of 99.6%, and F1 score of 96.6% for the MRI series classification task. To the best of our knowledge, we are the first to develop a method to classify the series type in mpMRI studies acquired at the level of the chest, abdomen, and pelvis. Our method has the capability for robust automation of hanging protocols in modern radiology practice.
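The reported metrics (average precision, sensitivity, specificity, and F1 across the 8 series types) can all be derived from a multi-class confusion matrix. A short sketch with scikit-learn follows, using synthetic labels purely as placeholders for the held-out predictions.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, precision_recall_fscore_support

NUM_CLASSES = 8
rng = np.random.default_rng(0)
y_true = rng.integers(0, NUM_CLASSES, size=313)  # placeholder: 313 held-out studies
y_pred = np.where(rng.random(313) < 0.95, y_true,
                  rng.integers(0, NUM_CLASSES, size=313))

# Macro-averaged precision, sensitivity (recall), and F1 over the 8 classes.
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0)

# Per-class specificity from the confusion matrix, then macro-averaged.
cm = confusion_matrix(y_true, y_pred, labels=np.arange(NUM_CLASSES))
tp = np.diag(cm)
fp = cm.sum(axis=0) - tp
fn = cm.sum(axis=1) - tp
tn = cm.sum() - (tp + fp + fn)
specificity = np.mean(tn / (tn + fp))

print(f"precision={precision:.3f} sensitivity={recall:.3f} "
      f"specificity={specificity:.3f} F1={f1:.3f}")
```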
Affiliation(s)
- Boah Kim
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD, USA
- Tejas Sudharshan Mathai
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD, USA
- Kimberly Helm
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD, USA
- Ronald M Summers
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD, USA
9
Patel N, Eltaher M, Glenn R, Savannah KB, Brock K, Sanchez J, Calderone T, Cleere D, Elsaiey A, Cagley M, Gupta N, Victor D, Beretta L, Celaya A, Koay E, Netherton T, Fuentes D. Training Robust T1-Weighted Magnetic Resonance Imaging Liver Segmentation Models Using Ensembles of Datasets with Different Contrast Protocols and Liver Disease Etiologies. Research Square 2024; rs.3.rs-4259791. [PMID: 38746406] [PMCID: PMC11092841] [DOI: 10.21203/rs.3.rs-4259791/v1]
Abstract
Image segmentation of the liver is an important step in several treatments for liver cancer. However, manual segmentation at a large scale is not practical, leading to increasing reliance on deep learning models to automatically segment the liver. This manuscript develops a deep learning model to segment the liver on T1w MR images. We sought to determine the best architecture by training, validating, and testing three different deep learning architectures using a total of 819 T1w MR images gathered from six different datasets, both publicly and internally available. Our experiments compared each architecture's testing performance when trained on data from the same dataset via 5-fold cross validation to its testing performance when trained on all other datasets. Models trained using nnUNet achieved mean Dice-Sorensen similarity coefficients > 90% when tested on each of the six datasets individually. The performance of these models suggests that an nnUNet liver segmentation model trained on a large and diverse collection of T1w MR images would be robust to potential changes in contrast protocol and disease etiology.
Affiliation(s)
- Nihil Patel
- The University of Texas MD Anderson Cancer Center
- Rachel Glenn
- The University of Texas MD Anderson Cancer Center
- Kristy Brock
- The University of Texas MD Anderson Cancer Center
- Eugene Koay
- The University of Texas MD Anderson Cancer Center
10
Kahn CE. Data Resources: Milestones and Building Blocks. Radiol Artif Intell 2023; 5:e230418. [PMID: 38074788] [PMCID: PMC10698608] [DOI: 10.1148/ryai.230418]
Affiliation(s)
- Charles E. Kahn
- From the Department of Radiology, University of Pennsylvania, 3400 Spruce St, 1 Silverstein, Philadelphia, PA 19104-6243
11
Mongan J, Halabi SS. On the Centrality of Data: Data Resources in Radiologic Artificial Intelligence. Radiol Artif Intell 2023; 5:e230231. [PMID: 37795139] [PMCID: PMC10546351] [DOI: 10.1148/ryai.230231]
Affiliation(s)
- John Mongan
- From the Department of Radiology and Biomedical Imaging; Center for Intelligent Imaging, University of California San Francisco, 505 Parnassus Ave, Room M-391, San Francisco, CA 94143 (J.M.); and Department of Medical Imaging, Ann & Robert H. Lurie Children’s Hospital of Chicago, Chicago, Ill (S.S.H.)
- Safwan S. Halabi
- From the Department of Radiology and Biomedical Imaging; Center for Intelligent Imaging, University of California San Francisco, 505 Parnassus Ave, Room M-391, San Francisco, CA 94143 (J.M.); and Department of Medical Imaging, Ann & Robert H. Lurie Children’s Hospital of Chicago, Chicago, Ill (S.S.H.)