1
Lei W, Xu W, Li K, Zhang X, Zhang S. MedLSAM: Localize and segment anything model for 3D CT images. Med Image Anal 2025; 99:103370. [PMID: 39447436] [DOI: 10.1016/j.media.2024.103370]
Abstract
Recent advancements in foundation models have shown significant potential in medical image analysis. However, there is still a gap in models specifically designed for medical image localization. To address this, we introduce MedLAM, a 3D medical foundation localization model that accurately identifies any anatomical part within the body using only a few template scans. MedLAM employs two self-supervision tasks: unified anatomical mapping (UAM) and multi-scale similarity (MSS), trained on a comprehensive dataset of 14,012 CT scans. Furthermore, we developed MedLSAM by integrating MedLAM with the Segment Anything Model (SAM). This framework requires only extreme point annotations along three directions on several templates: MedLAM locates the target anatomical structure in the image, and SAM performs the segmentation. It significantly reduces the amount of manual annotation required by SAM in 3D medical imaging scenarios. We conducted extensive experiments on two 3D datasets covering 38 distinct organs. Our findings are twofold: (1) MedLAM can directly localize anatomical structures using just a few template scans, achieving performance comparable to fully supervised models; (2) MedLSAM closely matches the performance of SAM and its specialized medical adaptations with manual prompts, while minimizing the need for extensive point annotations across the entire dataset. Moreover, MedLAM has the potential to be seamlessly integrated with future 3D SAM models, paving the way for enhanced segmentation performance. Our code is publicly available at https://github.com/openmedlab/MedLSAM.
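At the prompting interface described above, the six extreme-point annotations (two per axis) reduce to an axis-aligned 3D bounding box for prompting the segmentation stage. A minimal sketch of that reduction, with entirely hypothetical voxel coordinates (not taken from the paper's implementation):

```python
import numpy as np

# Six extreme points of an organ, two per axis, as (z, y, x) voxel
# coordinates -- hypothetical values for illustration only.
extreme_pts = np.array([
    [10, 40, 52], [34, 41, 50],   # inferior / superior
    [22, 28, 51], [23, 60, 49],   # anterior / posterior
    [21, 44, 30], [24, 45, 72],   # left / right
])

# The tight axis-aligned box spanned by the points is the localization
# result that can prompt a segmentation model such as SAM.
box_min = extreme_pts.min(axis=0)   # [10, 28, 30]
box_max = extreme_pts.max(axis=0)   # [34, 60, 72]
print(box_min.tolist(), box_max.tolist())
```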
Affiliation(s)
- Wenhui Lei
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China; Shanghai AI Lab, Shanghai, China
- Wei Xu
- School of Biomedical Engineering, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, China; West China Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu, China
- Kang Li
- Shanghai AI Lab, Shanghai, China; West China Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu, China
- Xiaofan Zhang
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China; Shanghai AI Lab, Shanghai, China
- Shaoting Zhang
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China; Shanghai AI Lab, Shanghai, China
2
Zhang Z, Keles E, Durak G, Taktak Y, Susladkar O, Gorade V, Jha D, Ormeci AC, Medetalibeyoglu A, Yao L, Wang B, Isler IS, Peng L, Pan H, Vendrami CL, Bourhani A, Velichko Y, Gong B, Spampinato C, Pyrros A, Tiwari P, Klatte DCF, Engels M, Hoogenboom S, Bolan CW, Agarunov E, Harfouch N, Huang C, Bruno MJ, Schoots I, Keswani RN, Miller FH, Gonda T, Yazici C, Tirkes T, Turkbey B, Wallace MB, Bagci U. Large-scale multi-center CT and MRI segmentation of pancreas with deep learning. Med Image Anal 2025; 99:103382. [PMID: 39541706] [PMCID: PMC11698238] [DOI: 10.1016/j.media.2024.103382]
Abstract
Automated volumetric segmentation of the pancreas on cross-sectional imaging is needed for diagnosis and follow-up of pancreatic diseases. While CT-based pancreatic segmentation is more established, MRI-based segmentation methods are understudied, largely due to a lack of publicly available datasets, benchmarking research efforts, and domain-specific deep learning methods. In this retrospective study, we collected a large dataset (767 scans from 499 participants) of T1-weighted (T1W) and T2-weighted (T2W) abdominal MRI series from five centers between March 2004 and November 2022. We also collected CT scans of 1,350 patients from publicly available sources for benchmarking purposes. We introduced a new pancreas segmentation method, called PanSegNet, combining the strengths of nnUNet and a Transformer network with a new linear attention module enabling volumetric computation. We tested PanSegNet's accuracy in cross-modality (a total of 2,117 scans) and cross-center settings with Dice and Hausdorff distance (HD95) evaluation metrics. We used Cohen's kappa statistics for intra- and inter-rater agreement evaluation and paired t-tests for volume and Dice comparisons. For segmentation accuracy, we achieved Dice coefficients of 88.3% (±7.2%, at case level) with CT, 85.0% (±7.9%) with T1W MRI, and 86.3% (±6.4%) with T2W MRI. There was a high correlation for pancreas volume prediction, with R² of 0.91, 0.84, and 0.85 for CT, T1W, and T2W, respectively. We found moderate inter-observer (0.624 and 0.638 for T1W and T2W MRI, respectively) and high intra-observer agreement scores. All MRI data are made available at https://osf.io/kysnj/. Our source code is available at https://github.com/NUBagciLab/PaNSegNet.
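The Dice and HD95 metrics used above are standard; as a reference point, here is a minimal brute-force NumPy sketch of both on toy 2D masks (an illustration, not the authors' evaluation code):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hd95(a: np.ndarray, b: np.ndarray, spacing: float = 1.0) -> float:
    """95th-percentile symmetric Hausdorff distance (brute force)."""
    pa = np.argwhere(a) * spacing   # foreground coordinates of each mask
    pb = np.argwhere(b) * spacing
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return max(np.percentile(d.min(axis=1), 95),
               np.percentile(d.min(axis=0), 95))

pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:6] = True  # 4x4 square
gt   = np.zeros((8, 8), dtype=bool); gt[3:7, 3:7] = True    # shifted by 1
print(dice(pred, gt))  # 0.5625 (9 overlapping pixels, 16 + 16 foreground)
```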
Affiliation(s)
- Zheyuan Zhang
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, USA
- Elif Keles
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, USA
- Gorkem Durak
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, USA
- Yavuz Taktak
- Department of Internal Medicine, Istanbul University Faculty of Medicine, Istanbul, Turkey
- Onkar Susladkar
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, USA
- Vandan Gorade
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, USA
- Debesh Jha
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, USA
- Asli C Ormeci
- Department of Internal Medicine, Istanbul University Faculty of Medicine, Istanbul, Turkey
- Alpay Medetalibeyoglu
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, USA; Department of Internal Medicine, Istanbul University Faculty of Medicine, Istanbul, Turkey
- Lanhong Yao
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, USA
- Bin Wang
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, USA
- Ilkin Sevgi Isler
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, USA; Department of Computer Science, University of Central Florida, FL, USA
- Linkai Peng
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, USA
- Hongyi Pan
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, USA
- Camila Lopes Vendrami
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, USA
- Amir Bourhani
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, USA
- Yury Velichko
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, USA
- Ayis Pyrros
- Department of Radiology, Duly Health and Care, and Department of Biomedical and Health Information Sciences, University of Illinois Chicago, Chicago, IL, USA
- Pallavi Tiwari
- Department of Biomedical Engineering, University of Wisconsin-Madison, WI, USA
- Derk C F Klatte
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology and Metabolism, Amsterdam UMC, University of Amsterdam, Netherlands; Department of Radiology, Mayo Clinic, Jacksonville, FL, USA
- Megan Engels
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology and Metabolism, Amsterdam UMC, University of Amsterdam, Netherlands; Department of Radiology, Mayo Clinic, Jacksonville, FL, USA
- Sanne Hoogenboom
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology and Metabolism, Amsterdam UMC, University of Amsterdam, Netherlands; Department of Radiology, Mayo Clinic, Jacksonville, FL, USA
- Emil Agarunov
- Division of Gastroenterology and Hepatology, New York University, NY, USA
- Nassier Harfouch
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
- Chenchan Huang
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
- Marco J Bruno
- Department of Gastroenterology and Hepatology, Erasmus Medical Center, Rotterdam, Netherlands
- Ivo Schoots
- Department of Radiology and Nuclear Medicine, Erasmus University Medical Center, Rotterdam, Netherlands
- Rajesh N Keswani
- Department of Gastroenterology and Hepatology, Northwestern University, IL, USA
- Frank H Miller
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, USA
- Tamas Gonda
- Division of Gastroenterology and Hepatology, New York University, NY, USA
- Cemal Yazici
- Division of Gastroenterology and Hepatology, University of Illinois at Chicago, Chicago, IL, USA
- Temel Tirkes
- Department of Radiology and Imaging Sciences, Indiana University School of Medicine, Indianapolis, IN, USA
- Baris Turkbey
- Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Michael B Wallace
- Division of Gastroenterology and Hepatology, Mayo Clinic in Florida, Jacksonville, USA
- Ulas Bagci
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, USA
3
Ajani SN, Mulla RA, Limkar S, Ashtagi R, Wagh SK, Pawar ME. RETRACTED ARTICLE: DLMBHCO: design of an augmented bioinspired deep learning-based multidomain body parameter analysis via heterogeneous correlative body organ analysis. Soft Comput 2024; 28:635. [PMID: 37362266] [PMCID: PMC10248994] [DOI: 10.1007/s00500-023-08613-y]
Affiliation(s)
- Samir N. Ajani
- Department of Computer Science & Engineering (Data Science), St. Vincent Pallotti College of Engineering and Technology, Nagpur, Maharashtra, India
- Rais Allauddin Mulla
- Department of Computer Engineering, Vasantdada Patil Pratishthan College of Engineering and Visual Arts, Mumbai, Maharashtra, India
- Suresh Limkar
- Department of Artificial Intelligence and Data Science, AISSMS Institute of Information Technology, Pune, Maharashtra, India
- Rashmi Ashtagi
- Department of Computer Engineering, Vishwakarma Institute of Technology, Bibwewadi, Pune, 411037, Maharashtra, India
- Sharmila K. Wagh
- Department of Computer Engineering, Modern Education Society's College of Engineering, Pune, Maharashtra, India
- Mahendra Eknath Pawar
- Department of Computer Engineering, Vasantdada Patil Pratishthan College of Engineering and Visual Arts, Mumbai, Maharashtra, India
4
Xu W, Lai C, Mo Z, Liu C, Li M, Zhao G, Xu K. Clinical-Inspired Framework for Automatic Kidney Stone Recognition and Analysis on Transverse CT Images. IEEE J Biomed Health Inform 2024; 28:7263-7274. [PMID: 38861442] [DOI: 10.1109/jbhi.2024.3411801]
Abstract
Stone recognition and analysis in CT images are important for automatic kidney stone diagnosis. Although certain contributions have been made, existing methods overlook the promoting effect of clinical knowledge on model performance and clinical interpretation. It is therefore attractive to establish methods for detecting and evaluating kidney stones that originate from the practical diagnostic process. Inspired by this, a novel clinical-inspired framework is proposed that incorporates the diagnostic process of urologists for better analysis. That process contains three main steps: localization, identification, and evaluation. Three modules integrating the decision-making mode of urologists are designed to mimic this process. The object attention module simulates the localization step, providing the position of the kidneys by embedding a weight feature factor and an angle loss. The feature-driven discriminative module mimics the identification step, detecting stones by extracting geometric and positional features. The analysis module, based on the principle of clustering and graphic combination, is a quantitative analysis strategy simulating the evaluation step. This work constructed a clinical dataset of 27,885 transverse CT images with stones and/or clinical interference. Experiments on the dataset show that the object attention module outperforms the well-performing YOLOv7 model by 1%, and that the analysis module outperforms the well-performing AR-DBSCAN model and the formula method by 21.9% in average cluster accuracy and 17.35% in average error, respectively. The experiments demonstrate that the proposed framework is currently the most effective solution for recognizing and evaluating kidney stones.
5
Chen Z, Lu Y, Long S, Campello VM, Bai J, Lekadir K. Fetal Head and Pubic Symphysis Segmentation in Intrapartum Ultrasound Image Using a Dual-Path Boundary-Guided Residual Network. IEEE J Biomed Health Inform 2024; 28:4648-4659. [PMID: 38739504] [DOI: 10.1109/jbhi.2024.3399762]
Abstract
Accurate segmentation of the fetal head and pubic symphysis in intrapartum ultrasound images and measurement of the fetal angle of progression (AoP) are critical to both outcome prediction and complication prevention in delivery. However, due to the poor quality of perinatal ultrasound imaging, with blurred target boundaries and the relatively small target of the pubic symphysis, fully automated and accurate segmentation remains challenging. In this paper, we propose a dual-path boundary-guided residual network (DBRN), a novel approach to tackle these challenges. The model contains a multi-scale weighted module (MWM) to gather global context information and enhance the feature response within the target region by weighting the feature map. The model also incorporates an enhanced boundary module (EBM) to obtain more precise boundary information. Furthermore, the model introduces a boundary-guided dual-attention residual module (BDRM) for residual learning. BDRM leverages boundary information as prior knowledge and employs spatial attention to simultaneously focus on background and foreground information, in order to capture concealed details and improve segmentation accuracy. Extensive comparative experiments have been conducted on three datasets. The proposed method achieves an average Dice score of 0.908 ± 0.05 and an average Hausdorff distance of 3.396 ± 0.66 mm. Compared with state-of-the-art competitors, the proposed DBRN achieves better results. In addition, the average difference between automatic AoP measurement based on this model and manual measurement is 6.157°, showing good consistency and broad application prospects in clinical practice.
6
Wendler T, Kreissl MC, Schemmer B, Rogasch JMM, De Benetti F. Artificial Intelligence-powered automatic volume calculation in medical images - available tools, performance and challenges for nuclear medicine. Nuklearmedizin 2023; 62:343-353. [PMID: 37995707] [PMCID: PMC10667065] [DOI: 10.1055/a-2200-2145]
Abstract
Volumetry is crucial in oncology and endocrinology for diagnosis, treatment planning, and evaluating response to therapy in several diseases. The integration of Artificial Intelligence (AI) and Deep Learning (DL) has significantly accelerated the automation of volumetric calculations, enhancing accuracy and reducing variability and labor. In this review, we show that a high correlation has been observed between Machine Learning (ML) methods and expert assessments in tumor volumetry, although it is recognized as more challenging than organ volumetry. Liver volumetry has shown progressive improvement in accuracy with decreasing error. If a relative error below 10% is acceptable, ML-based liver volumetry can be considered reliable for standardized imaging protocols when used in patients without major anomalies. Similarly, ML-supported automatic kidney volumetry has shown consistency and reliability in volumetric calculations. In contrast, AI-supported thyroid volumetry has not been extensively developed, despite initial work in 3D ultrasound showing promising results in terms of accuracy and reproducibility. Despite the advancements presented in the reviewed literature, a lack of standardization limits the generalizability of ML methods across diverse scenarios. The domain gap, i.e., the difference between the probability distributions of training and inference data, is of paramount importance to address before the clinical deployment of AI, in order to maintain accuracy and reliability in patient care. The increasing availability of improved segmentation tools is expected to further incorporate AI methods into routine workflows, where volumetry will play a more prominent role in radionuclide therapy planning and quantitative follow-up of disease evolution.
Affiliation(s)
- Thomas Wendler
- Clinical Computational Medical Imaging Research, Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Augsburg, Germany
- Institute of Digital Medicine, Universitätsklinikum Augsburg, Germany
- Computer-Aided Medical Procedures and Augmented Reality, School of Computation, Information and Technology, Technical University of Munich, Munich, Germany
- Julian Manuel Michael Rogasch
- Department of Nuclear Medicine, Charité – Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Germany
- Francesca De Benetti
- Computer-Aided Medical Procedures and Augmented Reality, School of Computation, Information and Technology, Technical University of Munich, Munich, Germany
7
Müller L, Tibyampansha D, Mildenberger P, Panholzer T, Jungmann F, Halfmann MC. Convolutional neural network-based kidney volume estimation from low-dose unenhanced computed tomography scans. BMC Med Imaging 2023; 23:187. [PMID: 37968580] [PMCID: PMC10648730] [DOI: 10.1186/s12880-023-01142-y]
Abstract
PURPOSE Kidney volume is important in the management of renal diseases. Unfortunately, the currently available semi-automated kidney volume determination is time-consuming and prone to errors. Recent advances in its automation are promising but mostly require contrast-enhanced computed tomography (CT) scans. This study aimed at establishing automated estimation of kidney volume in non-contrast, low-dose CT scans of patients with suspected urolithiasis. METHODS The kidney segmentation process was automated with 2D Convolutional Neural Network (CNN) models trained on manually segmented 2D transverse images extracted from low-dose, unenhanced CT scans of 210 patients. The models' segmentation accuracy was assessed using the Dice Similarity Coefficient (DSC) for the overlap with manually generated masks on a set of images not used in training. Next, the models were applied to 22 previously unseen cases to segment kidney regions. The volume of each kidney was calculated as the product of the voxel count and the per-voxel volume in each segmented mask. Kidney volume results were then validated against results obtained semi-automatically by radiologists. RESULTS The CNN-enabled kidney volume estimation took a mean of 32 s for both kidneys in a CT scan with an average of 1026 slices. The DSC was 0.91 and 0.86 for the left and right kidneys, respectively. Between the CNN-enabled and semi-automated volume estimations, inter-rater reliability showed consistency ICCs of 0.89 (right) and 0.92 (left), and absolute-agreement ICCs of 0.89 (right) and 0.93 (left). CONCLUSION In our work, we demonstrated that CNN-enabled kidney volume estimation is feasible and highly reproducible in low-dose, non-enhanced CT scans. Automatic segmentation can thereby quantitatively enhance radiological reports.
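The voxel-count volume computation described in METHODS is simple enough to sketch directly; the spacing values below are hypothetical stand-ins for a CT header, not figures from the study:

```python
import numpy as np

# Voxel spacing as it might appear in a low-dose CT header
# (assumed values for illustration): in-plane x, y and slice thickness, mm.
spacing_mm = (0.8, 0.8, 3.0)
voxel_vol_ml = np.prod(spacing_mm) / 1000.0   # mm^3 -> millilitres

mask = np.zeros((4, 10, 10), dtype=bool)      # toy segmentation mask
mask[1:3, 2:8, 2:8] = True                    # 2 * 6 * 6 = 72 voxels

# Volume = number of foreground voxels * volume of one voxel.
kidney_vol_ml = mask.sum() * voxel_vol_ml
print(f"{kidney_vol_ml:.5f} mL")
```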
Affiliation(s)
- Lukas Müller
- Department of Diagnostic and Interventional Radiology, University Medical Center of the Johannes Gutenberg University Mainz, Langenbeckstr. 1, 55131, Mainz, Germany
- Dativa Tibyampansha
- Institute of Medical Biostatistics, Epidemiology and Informatics, University Medical Center of the Johannes Gutenberg University Mainz, Obere Zahlbacher Str. 69, 55131, Mainz, Germany
- Peter Mildenberger
- Department of Diagnostic and Interventional Radiology, University Medical Center of the Johannes Gutenberg University Mainz, Langenbeckstr. 1, 55131, Mainz, Germany
- Torsten Panholzer
- Institute of Medical Biostatistics, Epidemiology and Informatics, University Medical Center of the Johannes Gutenberg University Mainz, Obere Zahlbacher Str. 69, 55131, Mainz, Germany
- Florian Jungmann
- Department of Diagnostic and Interventional Radiology, University Medical Center of the Johannes Gutenberg University Mainz, Langenbeckstr. 1, 55131, Mainz, Germany
- Moritz C Halfmann
- Department of Diagnostic and Interventional Radiology, University Medical Center of the Johannes Gutenberg University Mainz, Langenbeckstr. 1, 55131, Mainz, Germany
8
Zhao Q, Zhong L, Xiao J, Zhang J, Chen Y, Liao W, Zhang S, Wang G. Efficient Multi-Organ Segmentation From 3D Abdominal CT Images With Lightweight Network and Knowledge Distillation. IEEE Trans Med Imaging 2023; 42:2513-2523. [PMID: 37030798] [DOI: 10.1109/tmi.2023.3262680]
Abstract
Accurate segmentation of multiple abdominal organs from Computed Tomography (CT) images plays an important role in computer-aided diagnosis, treatment planning, and follow-up. Currently, 3D Convolutional Neural Networks (CNNs) have achieved promising performance on automatic medical image segmentation tasks. However, most existing 3D CNNs have large parameter counts and high numbers of floating-point operations (FLOPs), and 3D CT volumes are large, leading to high computational cost, which limits their clinical application. To tackle this issue, we propose a novel framework based on a lightweight network and Knowledge Distillation (KD) for delineating multiple organs from 3D CT volumes. We first propose a novel lightweight medical image segmentation network named LCOV-Net to reduce the model size, and then introduce two knowledge distillation modules (i.e., Class-Affinity KD and Multi-Scale KD) to effectively distill knowledge from a heavy-weight teacher model and improve LCOV-Net's segmentation accuracy. Experiments on two public abdominal CT datasets for multi-organ segmentation showed that: 1) our LCOV-Net outperformed existing lightweight 3D segmentation models in both computational cost and accuracy; 2) the proposed KD strategy effectively improved the performance of the lightweight network and outperformed existing KD methods; 3) combining the proposed LCOV-Net and KD strategy, our framework achieved better performance than the state-of-the-art 3D nnU-Net with only one-fifth of the parameters. The code is available at https://github.com/HiLab-git/LCOVNet-and-KD.
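The distillation idea above follows the usual teacher-student pattern. As a reference, here is a minimal NumPy sketch of plain logit distillation (temperature-softened KL divergence, in the style of Hinton et al.); it is a deliberate simplification, not the paper's class-affinity or multi-scale KD modules:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax over the last axis."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on softened class probabilities,
    scaled by T^2 as is standard in logit distillation."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    return float(T * T * np.sum(p_t * (np.log(p_t) - np.log(p_s))))

# One voxel's class logits from hypothetical teacher and student heads:
loss = distillation_loss([1.0, 0.2, -0.5], [2.0, 0.1, -1.0])
print(loss)  # non-negative; zero only when the two distributions match
```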
9
Mahmud S, Abbas TO, Mushtak A, Prithula J, Chowdhury MEH. Kidney Cancer Diagnosis and Surgery Selection by Machine Learning from CT Scans Combined with Clinical Metadata. Cancers (Basel) 2023; 15:3189. [PMID: 37370799] [DOI: 10.3390/cancers15123189]
Abstract
Kidney cancers are among the most common malignancies worldwide. Accurate diagnosis is a critical step in the management of kidney cancer patients and is influenced by multiple factors, including tumor size or volume, cancer type, and stage. For malignant tumors, partial or radical surgery of the kidney might be required, but for clinicians the basis for making this decision is often unclear. Partial nephrectomy could result in patient death due to cancer if kidney removal was necessary, whereas radical nephrectomy in less severe cases could consign patients to lifelong dialysis or future transplantation without sufficient cause. Using machine learning to consider clinical data alongside computed tomography images could potentially help resolve some of these surgical ambiguities by enabling more robust classification of kidney cancers and selection of optimal surgical approaches. In this study, we used the publicly available KiTS dataset of contrast-enhanced CT images and corresponding patient metadata to differentiate four major classes of kidney cancer: clear cell (ccRCC), chromophobe (chRCC), and papillary (pRCC) renal cell carcinoma, and oncocytoma (ONC). We processed these data to overcome the large field of view (FoV), extract tumor regions of interest (ROIs), classify patients using deep machine-learning models, and extract and post-process CT image features for combination with clinical data. Despite marked data imbalance, our combined approach achieved a high level of performance (85.66% accuracy, 84.18% precision, 85.66% recall, and 84.92% F1-score). When selecting surgical procedures for malignant tumors (RCC), our method proved even more reliable (90.63% accuracy, 90.83% precision, 90.61% recall, and 90.50% F1-score). Using feature ranking, we confirmed that tumor volume and cancer stage are the most relevant clinical features for predicting surgical procedures. Once fully mature, the approach we propose could be used to assist surgeons in performing nephrectomies by guiding the choice of optimal procedures for individual patients with kidney cancer.
Affiliation(s)
- Sakib Mahmud
- Department of Electrical Engineering, Qatar University, Doha 2713, Qatar
- Tariq O Abbas
- Urology Division, Surgery Department, Sidra Medicine, Doha 26999, Qatar
- Department of Surgery, Weill Cornell Medicine-Qatar, Doha 24811, Qatar
- College of Medicine, Qatar University, Doha 2713, Qatar
- Adam Mushtak
- Clinical Imaging Department, Hamad Medical Corporation, Doha 3050, Qatar
- Johayra Prithula
- Department of Electrical and Electronics Engineering, University of Dhaka, Dhaka 1000, Bangladesh
10
Zhao D, Wang W, Tang T, Zhang YY, Yu C. Current progress in artificial intelligence-assisted medical image analysis for chronic kidney disease: A literature review. Comput Struct Biotechnol J 2023; 21:3315-3326. [PMID: 37333860] [PMCID: PMC10275698] [DOI: 10.1016/j.csbj.2023.05.029]
Abstract
Chronic kidney disease (CKD) causes irreversible damage to kidney structure and function. Arising from various etiologies, risk factors for CKD include hypertension and diabetes. With a progressively increasing global prevalence, CKD is an important public health problem worldwide. Medical imaging has become an important diagnostic tool for CKD through the non-invasive identification of macroscopic renal structural abnormalities. Artificial intelligence (AI)-assisted medical imaging techniques aid clinicians in the analysis of characteristics that cannot be easily discriminated by the naked eye, providing valuable information for the identification and management of CKD. Recent studies have demonstrated the effectiveness of AI-assisted medical image analysis as a clinical support tool using radiomics- and deep learning-based AI algorithms for improving the early detection, pathological assessment, and prognostic evaluation of various forms of CKD, including autosomal dominant polycystic kidney disease. Herein, we provide an overview of the potential roles of AI-assisted medical image analysis for the diagnosis and management of CKD.
Affiliation(s)
- Dan Zhao
- Department of Nephrology, Tongji Hospital, School of Medicine, Tongji University, Shanghai 200065, China
- Wei Wang
- Department of Radiology, Tongji Hospital, School of Medicine, Tongji University, Shanghai 200065, China
- Tian Tang
- Department of Nephrology, Tongji Hospital, School of Medicine, Tongji University, Shanghai 200065, China
- Ying-Ying Zhang
- Department of Nephrology, Tongji Hospital, School of Medicine, Tongji University, Shanghai 200065, China
- Chen Yu
- Department of Nephrology, Tongji Hospital, School of Medicine, Tongji University, Shanghai 200065, China
11
Mu N, Lyu Z, Rezaeitaleshmahalleh M, Zhang X, Rasmussen T, McBane R, Jiang J. Automatic segmentation of abdominal aortic aneurysms from CT angiography using a context-aware cascaded U-Net. Comput Biol Med 2023; 158:106569. [PMID: 36989747] [PMCID: PMC10625464] [DOI: 10.1016/j.compbiomed.2023.106569]
Abstract
We delineate abdominal aortic aneurysms (AAAs), including the lumen and intraluminal thrombosis (ILT), from contrast-enhanced computed tomography angiography (CTA) data in 70 patients with complete automation. A novel context-aware cascaded U-Net configuration enables the automated image segmentation. Notably, an auto-context structure, in conjunction with dilated convolutions, an anisotropic context module, hierarchical supervision, and a multi-class loss function, is proposed to improve the delineation of ILT in an unbalanced, low-contrast multi-class labeling problem. A quantitative analysis shows that the automated image segmentation produces results comparable to those of trained human users (e.g., Dice scores of 0.945 and 0.804 for lumen and ILT, respectively). The resultant morphological metrics (e.g., volume and surface area) are highly correlated with the parameters generated by trained human users. In conclusion, the proposed automated multi-class image segmentation tool has the potential to be further developed into a translational software tool for improving the clinical management of AAAs.
Affiliation(s)
- Nan Mu
- Biomedical Engineering, Michigan Technological University, Houghton, MI, 49931, USA
- Zonghan Lyu
- Biomedical Engineering, Michigan Technological University, Houghton, MI, 49931, USA
- Jingfeng Jiang
- Biomedical Engineering, Michigan Technological University, Houghton, MI, 49931, USA; Center for Biocomputing and Digital Health, Health Research Institute, Institute of Computing and Cybernetics, Michigan Technological University, Houghton, MI, 49931, USA.
12
Khafaga DS, Ibrahim A, El-Kenawy ESM, Abdelhamid AA, Karim FK, Mirjalili S, Khodadadi N, Lim WH, Eid MM, Ghoneim ME. An Al-Biruni Earth Radius Optimization-Based Deep Convolutional Neural Network for Classifying Monkeypox Disease. Diagnostics (Basel) 2022; 12:2892. [PMID: 36428952 PMCID: PMC9689640 DOI: 10.3390/diagnostics12112892] [Received: 10/18/2022] [Revised: 11/04/2022] [Accepted: 11/18/2022] [Indexed: 11/23/2022]
Abstract
Human skin diseases have become increasingly prevalent in recent decades, with millions of individuals in developed countries experiencing monkeypox. Such conditions often carry less obvious but no less devastating risks, including increased vulnerability to monkeypox, cancer, and low self-esteem. Due to the low visual resolution of monkeypox disease images, medical specialists with high-level tools are typically required for a proper diagnosis. The manual diagnosis of monkeypox disease is subjective, time-consuming, and labor-intensive. Therefore, it is necessary to create a computer-aided approach for the automated diagnosis of monkeypox disease. Most research articles on monkeypox disease have relied on convolutional neural networks (CNNs) with classical loss functions, allowing them to pick up discriminative elements in monkeypox images. To enhance this, a novel framework using Al-Biruni Earth radius (BER) optimization-based stochastic fractal search (BERSFS) is proposed to fine-tune the deep CNN layers for classifying monkeypox disease from images. As a first step in the proposed approach, we use deep CNN-based models to learn the embedding of input images in Euclidean space. In the second step, we use an optimized classification model based on the triplet loss function to calculate the distance between pairs of images in Euclidean space and learn features that may be used to distinguish between different cases, including monkeypox cases. The proposed approach uses images of human skin diseases obtained from an African hospital. The experimental results of the study demonstrate the proposed framework's efficacy, as it outperforms numerous examples of prior research on skin disease problems. In addition, statistical experiments with the Wilcoxon and analysis of variance (ANOVA) tests are conducted to evaluate the proposed approach in terms of effectiveness and stability. The recorded results confirm the superiority of the proposed method when compared with other optimization algorithms and machine learning models.
Affiliation(s)
- Doaa Sami Khafaga
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh 11671, Saudi Arabia
- Correspondence: (D.S.K.); (E.-S.M.E.-K.); (A.A.A.); (F.K.K.)
- Abdelhameed Ibrahim
- Computer Engineering and Control Systems Department, Faculty of Engineering, Mansoura University, Mansoura 35516, Egypt
- El-Sayed M. El-Kenawy
- Department of Communications and Electronics, Delta Higher Institute of Engineering and Technology, Mansoura 35111, Egypt
- Correspondence: (D.S.K.); (E.-S.M.E.-K.); (A.A.A.); (F.K.K.)
- Abdelaziz A. Abdelhamid
- Department of Computer Science, College of Computing and Information Technology, Shaqra University, Shaqra 11961, Saudi Arabia
- Department of Computer Science, Faculty of Computer and Information Sciences, Ain Shams University, Cairo 11566, Egypt
- Correspondence: (D.S.K.); (E.-S.M.E.-K.); (A.A.A.); (F.K.K.)
- Faten Khalid Karim
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh 11671, Saudi Arabia
- Correspondence: (D.S.K.); (E.-S.M.E.-K.); (A.A.A.); (F.K.K.)
- Seyedali Mirjalili
- Centre for Artificial Intelligence Research and Optimization, Torrens University Australia, Fortitude Valley, QLD 4006, Australia
- Yonsei Frontier Lab, Yonsei University, Seoul 03722, Republic of Korea
- Nima Khodadadi
- Department of Civil and Environmental Engineering, Florida International University, Miami, FL 33199, USA
- Wei Hong Lim
- Faculty of Engineering, Technology and Built Environment, UCSI University, Kuala Lumpur 56000, Malaysia
- Marwa M. Eid
- Faculty of Artificial Intelligence, Delta University for Science and Technology, Mansoura 35712, Egypt
- Mohamed E. Ghoneim
- Department of Mathematical Sciences, Faculty of Applied Science, Umm Al-Qura University, Makkah 21955, Saudi Arabia
- Faculty of Computers and Artificial Intelligence, Damietta University, Damietta 34511, Egypt
13
Gong Z, Song J, Guo W, Ju R, Zhao D, Tan W, Zhou W, Zhang G. Abdomen tissues segmentation from computed tomography images using deep learning and level set methods. Math Biosci Eng 2022; 19:14074-14085. [PMID: 36654080 DOI: 10.3934/mbe.2022655] [Indexed: 06/17/2023]
Abstract
Accurate abdomen tissue segmentation is one of the crucial tasks in radiation therapy planning for related diseases. However, abdomen tissue segmentation (liver, kidney) is difficult because of the low contrast between abdomen tissues and their surrounding organs. In this paper, an attention-based deep learning method for automated abdomen tissue segmentation is proposed. In our method, image cropping is first applied to the original images. A U-Net model with an attention mechanism is then constructed to obtain the initial abdomen tissues. Finally, level set evolution, which consists of three energy terms, is used to optimize the initial abdomen segmentation. The proposed model is evaluated across 470 subsets. For liver segmentation, the mean Dice scores are 96.2% and 95.1% for the FLARE21 and LiTS datasets, respectively. For kidney segmentation, the mean Dice scores are 96.6% and 95.7% for the FLARE21 and LiTS datasets, respectively. Experimental evaluation shows that the proposed method obtains better segmentation results than other methods.
Affiliation(s)
- Zhaoxuan Gong
- Department of Computer Science and Information Engineering, Shenyang Aerospace University, Shenyang 110136, China
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang 110819, China
- Jing Song
- Department of Computer Science and Information Engineering, Shenyang Aerospace University, Shenyang 110136, China
- Wei Guo
- Department of Computer Science and Information Engineering, Shenyang Aerospace University, Shenyang 110136, China
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang 110819, China
- Ronghui Ju
- Liaoning Provincial People's Hospital, Shenyang 110067, China
- Dazhe Zhao
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang 110819, China
- Wenjun Tan
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang 110819, China
- Wei Zhou
- Department of Computer Science and Information Engineering, Shenyang Aerospace University, Shenyang 110136, China
- Guodong Zhang
- Department of Computer Science and Information Engineering, Shenyang Aerospace University, Shenyang 110136, China
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang 110819, China
14
Zabihollahy F, Viswanathan AN, Schmidt EJ, Lee J. Fully automated segmentation of clinical target volume in cervical cancer from magnetic resonance imaging with convolutional neural network. J Appl Clin Med Phys 2022; 23:e13725. [PMID: 35894782 PMCID: PMC9512359 DOI: 10.1002/acm2.13725] [Received: 02/07/2022] [Accepted: 06/25/2022] [Indexed: 01/14/2023]
Abstract
PURPOSE Contouring clinical target volume (CTV) from medical images is an essential step for radiotherapy (RT) planning. Magnetic resonance imaging (MRI) is used as a standard imaging modality for CTV segmentation in cervical cancer due to its superior soft-tissue contrast. However, the delineation of CTV is challenging as CTV contains microscopic extensions that are not clearly visible even in MR images, resulting in significant contour variability among radiation oncologists depending on their knowledge and experience. In this study, we propose a fully automated deep learning-based method to segment CTV from MR images. METHODS Our method begins with the bladder segmentation, from which the CTV position is estimated in the axial view. The superior-inferior CTV span is then detected using an Attention U-Net. A CTV-specific region of interest (ROI) is determined, and three-dimensional (3-D) blocks are extracted from the ROI volume. Finally, a CTV segmentation map is computed using a 3-D U-Net from the extracted 3-D blocks. RESULTS We developed and evaluated our method using 213 MRI scans obtained from 125 patients (183 for training, 30 for test). Our method achieved a (mean ± SD) Dice similarity coefficient of 0.85 ± 0.03 and a 95th percentile Hausdorff distance of 3.70 ± 0.35 mm on test cases, outperforming other state-of-the-art methods significantly (p-value < 0.05). Our method also produces an uncertainty map along with the CTV segmentation by employing the Monte Carlo dropout technique to draw the physician's attention to the regions with high uncertainty, where careful review and manual correction may be needed. CONCLUSIONS Experimental results show that the developed method is accurate, fast, and reproducible for contouring CTV from MRI, demonstrating its potential to assist radiation oncologists in alleviating the burden of tedious contouring for RT planning in cervical cancer.
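The 95th percentile Hausdorff distance reported above is a robust variant of the maximum surface distance. A minimal NumPy sketch on point sets (e.g. surface voxel coordinates), not taken from the cited paper:

```python
import numpy as np

def hd95(pts_a: np.ndarray, pts_b: np.ndarray) -> float:
    """Symmetric 95th-percentile Hausdorff distance between two point sets,
    e.g. the surface voxels of a predicted and a reference contour."""
    # Pairwise Euclidean distances via broadcasting: shape (len(A), len(B)).
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    a_to_b = d.min(axis=1)  # each point of A to its nearest point of B
    b_to_a = d.min(axis=0)  # and vice versa
    return float(np.percentile(np.concatenate([a_to_b, b_to_a]), 95))
```

Taking the 95th percentile instead of the maximum discards outlier surface points, which is why this variant is preferred for evaluating clinical contours.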
Affiliation(s)
- Fatemeh Zabihollahy
- Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Akila N. Viswanathan
- Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Ehud J. Schmidt
- Division of Cardiology, Department of Medicine, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Junghoon Lee
- Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
15
Navarro F, Sasahara G, Shit S, Sekuboyina A, Ezhov I, Peeken JC, Combs SE, Menze BH. A Unified 3D Framework for Organs-at-Risk Localization and Segmentation for Radiation Therapy Planning. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:1544-1547. [PMID: 36086554 DOI: 10.1109/embc48229.2022.9871680] [Indexed: 06/15/2023]
Abstract
Automatic localization and segmentation of organs-at-risk (OAR) in CT are essential pre-processing steps in medical image analysis tasks, such as radiation therapy planning. For instance, the segmentation of OAR surrounding tumors enables the maximization of radiation to the tumor area without compromising the healthy tissues. However, the current medical workflow requires manual delineation of OAR, which is prone to errors and is annotator-dependent. In this work, we aim to introduce a unified 3D pipeline for OAR localization-segmentation rather than novel localization or segmentation architectures. To the best of our knowledge, our proposed framework fully enables the exploitation of 3D context information inherent in medical imaging. In the first step, a 3D multi-variate regression network predicts organs' centroids and bounding boxes. Secondly, 3D organ-specific segmentation networks are leveraged to generate a multi-organ segmentation map. Our method achieved an overall Dice score of 0.9260 ± 0.18% on the VISCERAL dataset containing CT scans with varying fields of view and multiple organs.
16
Hsiao CH, Sun TL, Lin PC, Peng TY, Chen YH, Cheng CY, Yang FJ, Yang SY, Wu CH, Lin FYS, Huang Y. A deep learning-based precision volume calculation approach for kidney and tumor segmentation on computed tomography images. Comput Methods Programs Biomed 2022; 221:106861. [PMID: 35588664 DOI: 10.1016/j.cmpb.2022.106861] [Received: 09/28/2021] [Revised: 03/24/2022] [Accepted: 05/07/2022] [Indexed: 06/15/2023]
Abstract
Previously, doctors interpreted computed tomography (CT) images based on their experience in diagnosing kidney diseases. However, with the rapid increase in CT images, such interpretation required considerable time and effort and produced inconsistent results. Several novel neural network models have been proposed to automatically identify kidney or tumor areas in CT images to solve this problem. In most of these models, only the neural network structure was modified to improve accuracy. However, data pre-processing is also a crucial step in improving the results. This study systematically discusses the pre-processing methods needed before feeding medical images into a neural network model. The experimental results show that the proposed pre-processing methods and models significantly improve accuracy compared with the case without data pre-processing. Specifically, the Dice score improved from 0.9436 to 0.9648 for kidney segmentation and reached 0.7294 for all types of tumor detection. With the proposed medical image processing methods and deep learning models, the performance is suitable for clinical applications with lower computational resources, and accurate automatic kidney volume calculation and tumor detection are achieved cost-effectively.
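A typical example of the CT pre-processing this abstract emphasizes is intensity windowing: clipping Hounsfield units to an organ-appropriate window before normalization. The window values below are illustrative defaults, not the settings used in the cited study:

```python
import numpy as np

def window_hu(volume: np.ndarray, center: float = 50.0, width: float = 400.0) -> np.ndarray:
    """Clip a CT volume (in Hounsfield units) to an intensity window and
    rescale to [0, 1] for neural network input."""
    lo, hi = center - width / 2.0, center + width / 2.0
    vol = np.clip(volume.astype(np.float32), lo, hi)
    return (vol - lo) / (hi - lo)
```

Windowing suppresses irrelevant extremes (air, dense bone) so that the limited dynamic range of the normalized input is spent on the soft-tissue contrast the segmentation network actually needs.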
Affiliation(s)
- Chiu-Han Hsiao
- Research Center for Information Technology Innovation, Academia Sinica, Taipei City, Taiwan, ROC
- Tzu-Lung Sun
- Department of Information Management, National Taiwan University, Taipei City, Taiwan, ROC
- Ping-Cherng Lin
- Research Center for Information Technology Innovation, Academia Sinica, Taipei City, Taiwan, ROC
- Tsung-Yu Peng
- Department of Information Management, National Taiwan University, Taipei City, Taiwan, ROC
- Yu-Hsin Chen
- Department of Information Management, National Taiwan University, Taipei City, Taiwan, ROC
- Chieh-Yun Cheng
- Department of Information Management, National Taiwan University, Taipei City, Taiwan, ROC
- Feng-Jung Yang
- Department of Internal Medicine, National Taiwan University Hospital Yunlin Branch, Douliu City, Yunlin County; School of Medicine, College of Medicine, National Taiwan University, Taipei, Taiwan, ROC.
- Shao-Yu Yang
- Department of Internal Medicine, National Taiwan University Hospital, Taipei City, Taiwan, ROC
- Chih-Horng Wu
- Department of Medical Imaging, National Taiwan University Hospital, Taipei City, Taiwan, ROC
- Frank Yeong-Sung Lin
- Department of Information Management, National Taiwan University, Taipei City, Taiwan, ROC
- Yennun Huang
- Research Center for Information Technology Innovation, Academia Sinica, Taipei City, Taiwan, ROC
17
Poonia RC, Gupta MK, Abunadi I, Albraikan AA, Al-Wesabi FN, Hamza MA, B T. Intelligent Diagnostic Prediction and Classification Models for Detection of Kidney Disease. Healthcare (Basel) 2022; 10:371. [PMID: 35206985 PMCID: PMC8871759 DOI: 10.3390/healthcare10020371] [Received: 12/07/2021] [Revised: 02/02/2022] [Accepted: 02/03/2022] [Indexed: 11/17/2022]
Abstract
Kidney disease is a major public health concern that has only recently emerged. Toxins are removed from the body by the kidneys through urine. In the early stages of the condition, the patient has no problems, but recovery is difficult in the later stages. Doctors must be able to recognize this condition early in order to save the lives of their patients. To detect this illness early on, researchers have used a variety of methods. Prediction analysis based on machine learning has been shown to be more accurate than other methodologies. This research can help us to better understand global disparities in kidney disease, as well as what we can do to address them and coordinate our efforts to achieve global kidney health equity. This study provides an excellent feature-based prediction model for detecting kidney disease. Various machine learning algorithms, including the k-nearest neighbors algorithm (KNN), artificial neural networks (ANN), support vector machines (SVM), naive Bayes (NB), and others, as well as Recursive Feature Elimination (RFE) and Chi-square test feature-selection techniques, were used to build and analyze various prediction models on a publicly available dataset of healthy and kidney disease patients. The study found that a logistic regression-based prediction model with optimal features chosen using the Chi-square technique had the highest accuracy of 98.75 percent. White blood cell count (Wbcc), blood glucose random (Bgr), blood urea (Bu), serum creatinine (Sc), packed cell volume (Pcv), albumin (Al), hemoglobin (Hemo), age, sugar (Su), hypertension (Htn), diabetes mellitus (Dm), and blood pressure (Bp) are examples of these features.
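The Chi-square feature-selection step described above scores each non-negative feature against the class labels and keeps the top-ranked ones. A self-contained NumPy sketch of that scoring (a generic reimplementation for illustration, not the authors' code):

```python
import numpy as np

def chi2_scores(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Chi-square statistic of each non-negative feature against class labels,
    as commonly used to rank features before classification."""
    classes = np.unique(y)
    Y = (y[:, None] == classes[None, :]).astype(float)  # one-hot labels
    observed = Y.T @ X                                  # per-class feature sums
    expected = np.outer(Y.mean(axis=0), X.sum(axis=0))  # under independence
    return ((observed - expected) ** 2 / expected).sum(axis=0)
```

Higher scores mean a feature's mass is distributed unevenly across classes and is therefore informative; a feature identical across all samples scores 0.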
Affiliation(s)
- Ramesh Chandra Poonia
- Department of Computer Science, CHRIST (Deemed to be University), Bangalore 560029, India; (R.C.P.); (T.B.)
- Mukesh Kumar Gupta
- Department of Computer Science & Engineering, Swami Keshvanand Institute of Technology, Management & Gramothan (SKIT), Jaipur 302017, India;
- Ibrahim Abunadi
- Department of Information Systems, Prince Sultan University, P.O. Box No. 66833 Rafha Street, Riyadh 11586, Saudi Arabia;
- Amani Abdulrahman Albraikan
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia;
- Fahd N. Al-Wesabi
- Department of Computer Science, College of Science & Art at Mahayil, King Khalid University, Abha 61421, Saudi Arabia
- Correspondence: ; Tel.: +966-534227096
- Manar Ahmed Hamza
- Department of Computer and Self Development, Preparatory Year Deanship, Prince Sattam bin Abdulaziz University, Al-Kharj 16273, Saudi Arabia;
- Tulasi B
- Department of Computer Science, CHRIST (Deemed to be University), Bangalore 560029, India; (R.C.P.); (T.B.)
18
Zabihollahy F, Viswanathan AN, Schmidt EJ, Morcos M, Lee J. Fully automated multiorgan segmentation of female pelvic magnetic resonance images with coarse-to-fine convolutional neural network. Med Phys 2021; 48:7028-7042. [PMID: 34609756 PMCID: PMC8597653 DOI: 10.1002/mp.15268] [Received: 04/25/2021] [Revised: 08/25/2021] [Accepted: 09/17/2021] [Indexed: 02/03/2023]
Abstract
PURPOSE Brachytherapy combined with external beam radiotherapy (EBRT) is the standard treatment for cervical cancer and has been shown to improve overall survival rates compared to EBRT only. Magnetic resonance (MR) imaging is used for radiotherapy (RT) planning and image guidance due to its excellent soft tissue image contrast. Rapid and accurate segmentation of organs at risk (OAR) is a crucial step in MR image-guided RT. In this paper, we propose a fully automated two-step convolutional neural network (CNN) approach to delineate multiple OARs from T2-weighted (T2W) MR images. METHODS We employ a coarse-to-fine segmentation strategy. The coarse segmentation step first identifies the approximate boundary of each organ of interest and crops the MR volume around the centroid of the organ-specific region of interest (ROI). The cropped ROI volumes are then fed to organ-specific fine segmentation networks to produce detailed segmentation of each organ. A three-dimensional (3-D) U-Net is trained to perform the coarse segmentation. For the fine segmentation, a 3-D Dense U-Net is employed in which a modified 3-D dense block is incorporated into the 3-D U-Net-like network to acquire inter- and intra-slice features and improve information flow while reducing computational complexity. Two sets of T2W MR images (221 cases for MR1 and 62 for MR2) were taken with slightly different imaging parameters and used for our network training and test. The network was first trained on MR1, which was the larger sample set. The trained model was then transferred to the MR2 domain via a fine-tuning approach. An active learning strategy was utilized to select the most valuable data from MR2 to be included in the adaptation via transfer learning. RESULTS The proposed method was tested on 20 MR1 and 32 MR2 test sets. Mean ± SD Dice similarity coefficients are 0.93 ± 0.04, 0.87 ± 0.03, and 0.80 ± 0.10 on MR1 and 0.94 ± 0.05, 0.88 ± 0.04, and 0.80 ± 0.05 on MR2 for bladder, rectum, and sigmoid, respectively. Hausdorff distances (95th percentile) are 4.18 ± 0.52, 2.54 ± 0.41, and 5.03 ± 1.31 mm on MR1 and 2.89 ± 0.33, 2.24 ± 0.40, and 3.28 ± 1.08 mm on MR2, respectively. The performance of our method is superior to other state-of-the-art segmentation methods. CONCLUSIONS We proposed a two-step CNN approach for fully automated segmentation of the female pelvic bladder, rectum, and sigmoid from T2W MR volumes. Our experimental results demonstrate that the developed method is accurate, fast, and reproducible, and outperforms alternative state-of-the-art methods for OAR segmentation significantly (p < 0.05).
Affiliation(s)
- Fatemeh Zabihollahy
- Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University, Baltimore, MD 21287, USA
- Akila N Viswanathan
- Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University, Baltimore, MD 21287, USA
- Ehud J Schmidt
- Division of Cardiology, Department of Medicine, Johns Hopkins University, Baltimore, MD 21287, USA
- Marc Morcos
- Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University, Baltimore, MD 21287, USA
- Junghoon Lee
- Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University, Baltimore, MD 21287, USA
19
Wang X, Wang F, Niu Y. A Convolutional Neural Network Combining Discriminative Dictionary Learning and Sequence Tracking for Left Ventricular Detection. Sensors (Basel) 2021; 21:3693. [PMID: 34073315 PMCID: PMC8199243 DOI: 10.3390/s21113693] [Received: 03/13/2021] [Revised: 05/06/2021] [Accepted: 05/10/2021] [Indexed: 11/16/2022]
Abstract
Cardiac MRI left ventricular (LV) detection is frequently employed to assist cardiac registration or segmentation in the computer-aided diagnosis of heart diseases. Focusing on the challenging problems in LV detection, such as the large span and varying size of LV areas in MRI, as well as the heterogeneous myocardial and blood pool parts within LV areas, a convolutional neural network (CNN) detection method combining discriminative dictionary learning and sequence tracking is proposed in this paper. To efficiently represent the different sub-objects in the LV area, the method deploys a discriminative dictionary to classify superpixel-oversegmented regions; the target LV region is then constructed by label merging, and multi-scale adaptive anchors are generated in the target region to handle the varying sizes. Combined with the non-differential anchors of a region proposal network, the left ventricle is localized by a CNN-based regression and classification strategy. To address the slow classification speed of the discriminative dictionary, a fast generation module for left ventricular scale-adaptive anchors based on sequence tracking is also proposed for the same individual. The method and its variants were tested on the heart atlas dataset. Experimental results verified the effectiveness of the proposed method; it obtained 92.95% on the AP50 metric, the most competitive result compared to typical related methods. The combination of discriminative dictionary learning and scale-adaptive anchors improves the adaptability of the proposed algorithm to varying left ventricular areas. This study would be beneficial for cardiac image processing tasks such as region-of-interest cropping and left ventricle volume measurement.
Collapse
Affiliation(s)
- Xuchu Wang
- Key Laboratory of Optoelectronic Technology and Systems of Ministry of Education, College of Optoelectronic Engineering, Chongqing University, Chongqing 400044, China;
- Correspondence:
- Fusheng Wang
- Key Laboratory of Optoelectronic Technology and Systems of Ministry of Education, College of Optoelectronic Engineering, Chongqing University, Chongqing 400044, China;
- Yanmin Niu
- College of Computer and Information Science, Chongqing Normal University, Chongqing 400050, China;