1
Sharon JJ, Anbarasi LJ. An attention enhanced dilated bottleneck network for kidney disease classification. Sci Rep 2025; 15:9865. [PMID: 40118887 PMCID: PMC11928611 DOI: 10.1038/s41598-025-90519-w]
Abstract
Computer-Aided Diagnosis (CAD) techniques have been developed to assist nephrologists by optimising clinical workflows, ensuring accurate results and effectively handling extensive datasets. The proposed work introduces a Dilated Bottleneck Attention-based Renal Network (DBAR-Net) to automate the diagnosis and classification of kidney diseases such as cysts, stones, and tumours. To overcome the challenges caused by complex and overlapping features, the DBAR-Net model implements a multi-feature fusion technique. Two convolved layer-normalization blocks capture fine-grained detail and abstract patterns to achieve faster convergence and improved robustness. Spatially focused and channel-wise refined features are generated through dual bottleneck attention modules, which improve the representation of convolved features by highlighting informative channels and spatial regions, resulting in enhanced interpretability and feature generalisation. Additionally, adaptive contextual features are obtained from a dilated convolved layer-normalisation block, which effectively captures contextual insights from semantic feature interpretation. The resulting features are fused additively and processed through a linear layer with global average pooling and layer normalization. This combination reduces spatial dimensions and internal covariate shift and improves generalization while retaining essential features. The proposed approach was evaluated on the CT KIDNEY DATASET, which includes 8750 CT images classified into four categories: Normal, Cyst, Tumour, and Stone. Experimental results showed that the improved feature detection ability of these blocks enhanced the performance of the DBAR-Net model, attaining an F1 score of 0.98 with minimal computational complexity and an optimum classification accuracy of 98.86%. The integration of these blocks resulted in precise multi-class kidney disease detection, leading to the superior performance of DBAR-Net compared with transfer learning models such as VGG16, VGG19, ResNet50, EfficientNetB0, Inception V3, MobileNetV2, and Xception.
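For readers who want a concrete sense of the building blocks this abstract names (dilated convolution, layer normalization, bottleneck attention, and global-average-pooled classification over four classes), the following is a minimal PyTorch sketch. It is not the authors' DBAR-Net code; the block structure, kernel sizes, dilation rate, and reduction ratio are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BottleneckAttention(nn.Module):
    """Channel plus spatial attention, loosely following BAM-style designs (assumed layout)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        # Channel branch: squeeze spatially, excite per channel
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
        )
        # Spatial branch: a single-channel map from a dilated convolution
        self.spatial = nn.Conv2d(channels, 1, kernel_size=3, padding=2, dilation=2)

    def forward(self, x):
        attn = torch.sigmoid(self.channel(x) + self.spatial(x))  # broadcast over H, W and C
        return x * attn

class DilatedConvLNBlock(nn.Module):
    """Dilated convolution followed by layer normalization over (C, H, W)."""
    def __init__(self, in_ch, out_ch, size, dilation=2):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=dilation, dilation=dilation)
        self.norm = nn.LayerNorm([out_ch, size, size])
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.norm(self.conv(x)))

# Toy forward pass on a 224x224 single-channel CT slice
x = torch.randn(1, 1, 224, 224)
feat = DilatedConvLNBlock(1, 32, size=224)(x)
feat = BottleneckAttention(32)(feat)
pooled = nn.AdaptiveAvgPool2d(1)(feat).flatten(1)   # global average pooling
logits = nn.Linear(32, 4)(pooled)                   # Normal / Cyst / Tumour / Stone
print(logits.shape)  # torch.Size([1, 4])
```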
Affiliation(s)
- J Jenifa Sharon
- School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, India
- L Jani Anbarasi
- School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, India.
2
Karunanayake N, Lu L, Yang H, Geng P, Akin O, Furberg H, Schwartz LH, Zhao B. Dual-Stage AI Model for Enhanced CT Imaging: Precision Segmentation of Kidney and Tumors. Tomography 2025; 11:3. [PMID: 39852683 PMCID: PMC11769543 DOI: 10.3390/tomography11010003]
Abstract
OBJECTIVES Accurate kidney and tumor segmentation of computed tomography (CT) scans is vital for diagnosis and treatment, but manual methods are time-consuming and inconsistent, highlighting the value of AI automation. This study develops a fully automated AI model using vision transformers (ViTs) and convolutional neural networks (CNNs) to detect and segment kidneys and kidney tumors in contrast-enhanced CT (CECT) scans, with a focus on improving sensitivity for small, indistinct tumors. METHODS The segmentation framework employs a ViT-based model for the kidney organ, followed by a 3D UNet model with enhanced connections and attention mechanisms for tumor detection and segmentation. Two CECT datasets were used: a public dataset (KiTS23: 489 scans) and a private institutional dataset (Private: 592 scans). The AI model was trained on 389 public scans, with validation performed on the remaining 100 scans and external validation performed on all 592 private scans. Tumors were categorized by TNM staging as small (≤4 cm) (KiTS23: 54%, Private: 41%), medium (>4 cm to ≤7 cm) (KiTS23: 24%, Private: 35%), and large (>7 cm) (KiTS23: 22%, Private: 24%) for detailed evaluation. RESULTS Kidney and kidney tumor segmentations were evaluated against manual annotations as the reference standard. The model achieved a Dice score of 0.97 ± 0.02 for kidney organ segmentation. For tumor detection and segmentation on the KiTS23 dataset, the sensitivities and average false-positive rates per patient were as follows: 0.90 and 0.23 for small tumors, 1.0 and 0.08 for medium tumors, and 0.96 and 0.04 for large tumors. The corresponding Dice scores were 0.84 ± 0.11, 0.89 ± 0.07, and 0.91 ± 0.06, respectively. External validation on the private data confirmed the model's effectiveness, achieving the following sensitivities and average false-positive rates per patient: 0.89 and 0.15 for small tumors, 0.99 and 0.03 for medium tumors, and 1.0 and 0.01 for large tumors. The corresponding Dice scores were 0.84 ± 0.08, 0.89 ± 0.08, and 0.92 ± 0.06. CONCLUSIONS The proposed model demonstrates consistent and robust performance in segmenting kidneys and kidney tumors of various sizes, with effective generalization to unseen data. This underscores the model's significant potential for clinical integration, offering enhanced diagnostic precision and reliability in radiological assessments.
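The size-stratified evaluation reported above (sensitivity and average false positives per patient for small, medium, and large tumors) can be reproduced from per-lesion detection results with a few lines of Python. The sketch below uses made-up records; only the ≤4 cm / 4-7 cm / >7 cm grouping is taken from the abstract.

```python
import numpy as np

# Illustrative per-lesion records: (patient_id, longest_diameter_cm, detected_flag)
lesions = [("p1", 2.8, True), ("p1", 5.1, True), ("p2", 8.3, True), ("p3", 3.0, False)]
# Illustrative per-patient false-positive counts from the detector
false_positives = {"p1": 0, "p2": 1, "p3": 0}

def size_group(diameter_cm):
    """TNM-style grouping used in the study: small <=4 cm, medium 4-7 cm, large >7 cm."""
    if diameter_cm <= 4.0:
        return "small"
    return "medium" if diameter_cm <= 7.0 else "large"

groups = {"small": [], "medium": [], "large": []}
for _, diameter, detected in lesions:
    groups[size_group(diameter)].append(detected)

for name, flags in groups.items():
    if flags:
        print(f"{name}: sensitivity={np.mean(flags):.2f} (n={len(flags)})")

print(f"average false positives per patient: {np.mean(list(false_positives.values())):.2f}")
```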
Affiliation(s)
- Nalan Karunanayake
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Lin Lu
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Hao Yang
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Pengfei Geng
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Oguz Akin
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Helena Furberg
- Department of Epidemiology and Biostatistics, Memorial Sloan Kettering Cancer Center, New York, NY 10017, USA
- Lawrence H. Schwartz
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Binsheng Zhao
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
3
Buriboev AS, Khashimov A, Abduvaitov A, Jeon HS. CNN-Based Kidney Segmentation Using a Modified CLAHE Algorithm. Sensors (Basel) 2024; 24:7703. [PMID: 39686240 DOI: 10.3390/s24237703]
Abstract
This paper presents an enhanced approach to kidney segmentation using a modified CLAHE preprocessing method, aimed at improving image clarity and CNN performance on the KiTS19 dataset. To assess the impact of the modified CLAHE method, we conducted quality evaluations using the BRISQUE metric, comparing the original, standard CLAHE, and modified CLAHE versions of the dataset. The BRISQUE score decreased from 28.8 in the original dataset to 21.1 with the modified CLAHE method, indicating a significant improvement in image quality. Furthermore, CNN segmentation accuracy rose from 0.951 with the original dataset to 0.996 with the modified CLAHE method, outperforming the accuracy achieved with standard CLAHE preprocessing (0.969). These results demonstrate the benefits of the modified CLAHE method in refining image quality and enhancing segmentation performance, and they underscore the value of adaptive preprocessing in medical imaging workflows: CNN-based kidney segmentation accuracy can be greatly increased by altering conventional CLAHE. Our method provides useful guidance on optimizing preprocessing for medical imaging applications, leading to more accurate and dependable segmentation results for better clinical diagnosis.
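As a point of reference for the preprocessing step this entry builds on, the sketch below applies standard OpenCV CLAHE to a Hounsfield-windowed CT slice. The paper's modified CLAHE variant and its BRISQUE evaluation are not reproduced here; the window, clip limit, and tile grid are assumed values.

```python
import cv2
import numpy as np

def clahe_preprocess(ct_slice_hu, window=(-100, 300), clip_limit=2.0, tiles=(8, 8)):
    """Window a CT slice (Hounsfield units), rescale to 8-bit, then apply CLAHE.

    This is plain CLAHE as implemented in OpenCV; the modified variant described
    in the paper changes the enhancement step and is not reproduced here.
    """
    lo, hi = window
    clipped = np.clip(ct_slice_hu, lo, hi)
    img8 = ((clipped - lo) / (hi - lo) * 255).astype(np.uint8)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tiles)
    return clahe.apply(img8)

# Example on a synthetic slice; replace with a real KiTS19 slice loaded from NIfTI/DICOM.
slice_hu = np.random.randint(-200, 400, size=(512, 512)).astype(np.int16)
enhanced = clahe_preprocess(slice_hu)
print(enhanced.shape, enhanced.dtype)  # (512, 512) uint8
```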
Affiliation(s)
- Ahmadjon Khashimov
- Department of Digital Technologies and Mathematics, Kokand University, Kokand 150700, Uzbekistan
- Akmal Abduvaitov
- Department of IT, Samarkand Branch of Tashkent University of Information Technologies, Samarkand 100084, Uzbekistan
- Heung Seok Jeon
- Department of Computer Engineering, Konkuk University, Chungju 27478, Republic of Korea
4
Appati JK, Yirenkyi IA. A cascading approach using SE-ResNeXt, ResNet and feature pyramid network for kidney tumor segmentation. Heliyon 2024; 10:e38612. [PMID: 39430467 PMCID: PMC11489355 DOI: 10.1016/j.heliyon.2024.e38612]
Abstract
Accurate segmentation of kidney tumors in CT images is very important in the diagnosis of kidney cancer. Automatic semantic segmentation of kidney tumors has shown promising results towards developing advanced surgical planning techniques for the treatment of kidney tumors. However, the relatively small size of the kidney tumor volume in comparison to the overall kidney volume, together with its irregular distribution and shape, makes it difficult to segment the tumors accurately. To address this issue, we proposed a coarse-to-fine segmentation approach that leverages transfer learning, using an SE-ResNeXt model for the initial segmentation and ResNet with a Feature Pyramid Network for the final segmentation. The two stages are linked: the output of the initial segmentation was used to train the final model. We trained and evaluated our method on the KiTS19 dataset and achieved a Dice score of 0.7388 and a Jaccard score of 0.7321 for the final segmentation, demonstrating promising results when compared with other approaches.
5
Koukoutegos K, 's Heeren R, De Wever L, De Keyzer F, Maes F, Bosmans H. Segmentation-based quantitative measurements in renal CT imaging using deep learning. Eur Radiol Exp 2024; 8:110. [PMID: 39382755 PMCID: PMC11465135 DOI: 10.1186/s41747-024-00507-4]
Abstract
BACKGROUND Renal quantitative measurements are important descriptors for assessing kidney function. We developed a deep learning-based method for automated kidney measurements from computed tomography (CT) images. METHODS The study datasets comprised potential kidney donors (n = 88), with both contrast-enhanced (Dataset 1 CE) and noncontrast (Dataset 1 NC) CT scans, and test sets of contrast-enhanced cases (Test set 2, n = 18), cases from a photon-counting (PC)CT scanner reconstructed at 60 and 190 keV (Test set 3 PCCT, n = 15), and low-dose cases (Test set 4, n = 8), which were retrospectively analyzed to train, validate, and test two networks for kidney segmentation and subsequent measurements. Segmentation performance was evaluated using the Dice similarity coefficient (DSC). The quantitative measurements' effectiveness was compared to manual annotations using the intraclass correlation coefficient (ICC). RESULTS The contrast-enhanced and noncontrast models demonstrated excellent reliability in renal segmentation, with DSCs of 0.95 (Test set 1 CE), 0.94 (Test set 2), and 0.92 (Test set 3 PCCT), and of 0.94 (Test set 1 NC), 0.92 (Test set 3 PCCT), and 0.93 (Test set 4), respectively. Volume estimation was accurate, with mean volume errors of 4%, 3%, and 6% (contrast-enhanced test sets) and 4%, 5%, and 7% (noncontrast test sets). Renal axes measurements (length, width, and thickness) had ICC values greater than 0.90 (p < 0.001) for all test sets, supported by narrow 95% confidence intervals. CONCLUSION Two deep learning networks were shown to derive quantitative measurements from contrast-enhanced and noncontrast renal CT imaging at the human performance level. RELEVANCE STATEMENT Deep learning-based networks can automatically obtain renal clinical descriptors from both noncontrast and contrast-enhanced CT images. When healthy subjects comprise the training cohort, careful consideration is required during model adaptation, especially in scenarios involving unhealthy kidneys. This creates an opportunity for improved clinical decision-making without labor-intensive manual effort. KEY POINTS Trained 3D UNet models quantify renal measurements from contrast and noncontrast CT. The models performed interchangeably with the manual annotator and with each other. The models can provide expert-level, quantitative, accurate, and rapid renal measurements.
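Deriving the volume measurements evaluated here from a binary kidney mask is straightforward once the voxel spacing is known; the sketch below shows the arithmetic on a synthetic mask. The spacing values and the error definition are illustrative assumptions, not the study's exact protocol.

```python
import numpy as np

def kidney_volume_ml(mask, spacing_mm):
    """Volume of a binary kidney mask in millilitres.

    mask: 3D boolean/int array (z, y, x); spacing_mm: per-axis voxel spacing in mm.
    """
    voxel_volume_mm3 = float(np.prod(spacing_mm))
    return mask.astype(bool).sum() * voxel_volume_mm3 / 1000.0  # mm^3 -> mL

def volume_error_percent(predicted_ml, manual_ml):
    return 100.0 * abs(predicted_ml - manual_ml) / manual_ml

# Toy example; in practice the mask comes from the trained network and the
# spacing from the CT header (e.g., SimpleITK's GetSpacing()).
mask = np.zeros((40, 128, 128), dtype=np.uint8)
mask[10:30, 40:90, 40:90] = 1
vol = kidney_volume_ml(mask, spacing_mm=(3.0, 0.8, 0.8))
print(f"volume: {vol:.1f} mL, error vs. 150 mL reference: {volume_error_percent(vol, 150):.1f}%")
```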
Affiliation(s)
- Konstantinos Koukoutegos
- KU Leuven, Department of Imaging and Pathology, Division of Medical Physics, Herestraat 49, 3000, Leuven, Belgium.
- UZ Leuven, Department of Radiology, Herestraat 49, 3000, Leuven, Belgium.
- Richard 's Heeren
- UZ Leuven, Department of Radiology, Herestraat 49, 3000, Leuven, Belgium
- Liesbeth De Wever
- UZ Leuven, Department of Radiology, Herestraat 49, 3000, Leuven, Belgium
- Frederik De Keyzer
- UZ Leuven, Department of Radiology, Herestraat 49, 3000, Leuven, Belgium
- Frederik Maes
- KU Leuven, Department of Electrical Engineering, ESAT/PSI, 3000, Leuven, Belgium
- Hilde Bosmans
- KU Leuven, Department of Imaging and Pathology, Division of Medical Physics, Herestraat 49, 3000, Leuven, Belgium.
- UZ Leuven, Department of Radiology, Herestraat 49, 3000, Leuven, Belgium.
6
Moldovanu CG. Virtual and augmented reality systems and three-dimensional printing of the renal model-novel trends to guide preoperative planning for renal cancer. Asian J Urol 2024; 11:521-529. [PMID: 39534007 PMCID: PMC11551381 DOI: 10.1016/j.ajur.2023.10.004]
Abstract
Objective This study aimed to explore the applications of three-dimensional (3D) technology, including virtual reality, augmented reality (AR), and 3D printing systems, in the field of medicine, particularly in renal interventions for cancer treatment. Methods Specialized software transforms 2D medical images into precise 3D digital models, facilitating improved anatomical understanding and surgical planning. Patient-specific 3D printed anatomical models are utilized for preoperative planning, intraoperative guidance, and surgical education. AR technology enables the overlay of digital perceptions onto real-world surgical environments. Results Patient-specific 3D printed anatomical models have multiple applications, such as preoperative planning, intraoperative guidance, trainee education, and patient counseling. Virtual reality involves substituting the real world with a computer-generated 3D environment, while AR overlays digitally created perceptions onto the existing reality. Advances in 3D modeling technology have sparked considerable interest in its application to partial nephrectomy in the realm of renal cancer. 3D printing, also known as additive manufacturing, constructs 3D objects based on computer-aided design or digital 3D models. Utilizing 3D-printed preoperative renal models provides benefits for surgical planning, offering a more reliable assessment of the tumor's relationship with vital anatomical structures and enabling better preparation for procedures. AR technology allows surgeons to visualize patient-specific renal anatomical structures and their spatial relationships with surrounding organs by projecting CT/MRI images onto a live laparoscopic video. Incorporating patient-specific 3D digital models into healthcare enhances best practice, resulting in improved patient care, increased patient satisfaction, and cost savings for the healthcare system.
Affiliation(s)
- Claudia-Gabriela Moldovanu
- Department of Radiology, Municipal Clinical Hospital, Cluj-Napoca, Romania
- Department of Radiology, Emergency Heart Institute “N. Stancioiu”, Cluj-Napoca, Romania
7
Becker J, Woźnicki P, Decker JA, Risch F, Wudy R, Kaufmann D, Canalini L, Wollny C, Scheurig-Muenkler C, Kroencke T, Bette S, Schwarz F. Radiomics signature for automatic hydronephrosis detection in unenhanced Low-Dose CT. Eur J Radiol 2024; 179:111677. [PMID: 39178684 DOI: 10.1016/j.ejrad.2024.111677]
Abstract
PURPOSE To investigate the diagnostic performance of an automatic pipeline for the detection of hydronephrosis from the kidney parenchyma on unenhanced low-dose CT of the abdomen. METHODS This retrospective study included 95 patients with confirmed unilateral hydronephrosis on an unenhanced low-dose CT of the abdomen. Data were split into training (n = 67) and test (n = 28) cohorts. Both kidneys of each case were included in further analyses, with the kidney without hydronephrosis used as control. Using the training cohort, we developed a pipeline consisting of a deep-learning model for automatic segmentation of the kidney parenchyma (a convolutional neural network based on the nnU-Net architecture) and a radiomics classifier to detect hydronephrosis. The models were assessed using standard classification metrics, such as area under the ROC curve (AUC), sensitivity, and specificity, as well as semantic segmentation metrics, including the Dice coefficient and Jaccard index. RESULTS Using manual segmentation of the kidney parenchyma, hydronephrosis could be detected with an AUC of 0.84, a sensitivity of 75%, a specificity of 82%, a PPV of 81%, and an NPV of 77%. Automatic kidney segmentation achieved a mean Dice score of 0.87 and 0.91 for the right and left kidney, respectively. With automatic segmentation, the pipeline achieved an AUC of 0.83, a sensitivity of 86%, a specificity of 64%, a PPV of 71%, and an NPV of 82%. CONCLUSION Our proposed radiomics signature using automatic segmentation of the kidney parenchyma allows accurate hydronephrosis detection on unenhanced low-dose CT scans of the abdomen, independently of a widened renal pelvis. This method could be used in clinical routine to highlight hydronephrosis to radiologists as well as clinicians, especially in patients with concurrent parapelvic cysts, and might reduce the time and costs associated with diagnosing hydronephrosis.
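The radiomics-plus-classifier part of this pipeline can be approximated with simple first-order features computed inside the segmented parenchyma and a scikit-learn random forest, as sketched below. The feature set and the synthetic data are assumptions; the study's actual extractor and model configuration are not reproduced.

```python
import numpy as np
from scipy import stats
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

def first_order_features(ct_volume_hu, parenchyma_mask):
    """A few first-order, radiomics-style features from the segmented parenchyma."""
    vals = ct_volume_hu[parenchyma_mask > 0].astype(float)
    return np.array([
        vals.mean(), vals.std(), stats.skew(vals), stats.kurtosis(vals),
        np.percentile(vals, 10), np.percentile(vals, 90), float(vals.size),
    ])

# Toy volume and mask to show the feature call; real inputs come from the CT and the nnU-Net mask.
rng = np.random.default_rng(0)
volume = rng.normal(30, 20, size=(40, 64, 64))
mask = np.zeros_like(volume)
mask[10:30, 20:50, 20:50] = 1
print(first_order_features(volume, mask).round(2))

# Synthetic training of the classifier stage (one row per kidney, label 1 = hydronephrosis).
X = rng.normal(size=(134, 7))          # 67 training patients x 2 kidneys, 7 features each
y = rng.integers(0, 2, size=134)       # placeholder labels
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print("training AUC:", round(roc_auc_score(y, clf.predict_proba(X)[:, 1]), 3))
```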
Affiliation(s)
- Judith Becker
- Department of Diagnostic and Interventional Radiology, University Hospital Augsburg, Stenglinstr. 2, 86156 Augsburg, Germany
- Piotr Woźnicki
- Diagnostic and Interventional Radiology, University Hospital Würzburg, Josef-Schneider-Straße 2, 97080 Würzburg, Germany
- Josua A Decker
- Department of Diagnostic and Interventional Radiology, University Hospital Augsburg, Stenglinstr. 2, 86156 Augsburg, Germany
- Franka Risch
- Department of Diagnostic and Interventional Radiology, University Hospital Augsburg, Stenglinstr. 2, 86156 Augsburg, Germany
- Ramona Wudy
- Department of Diagnostic and Interventional Radiology, University Hospital Augsburg, Stenglinstr. 2, 86156 Augsburg, Germany
- David Kaufmann
- Department of Diagnostic and Interventional Radiology, University Hospital Augsburg, Stenglinstr. 2, 86156 Augsburg, Germany
- Luca Canalini
- Department of Diagnostic and Interventional Radiology, University Hospital Augsburg, Stenglinstr. 2, 86156 Augsburg, Germany
- Claudia Wollny
- Department of Diagnostic and Interventional Radiology, University Hospital Augsburg, Stenglinstr. 2, 86156 Augsburg, Germany
- Christian Scheurig-Muenkler
- Department of Diagnostic and Interventional Radiology, University Hospital Augsburg, Stenglinstr. 2, 86156 Augsburg, Germany
- Thomas Kroencke
- Department of Diagnostic and Interventional Radiology, University Hospital Augsburg, Stenglinstr. 2, 86156 Augsburg, Germany; Centre for Advanced Analytics and Predictive Sciences (CAAPS), University of Augsburg, Universitätsstr. 2, 86159 Augsburg, Germany.
- Stefanie Bette
- Department of Diagnostic and Interventional Radiology, University Hospital Augsburg, Stenglinstr. 2, 86156 Augsburg, Germany
- Florian Schwarz
- Centre for Diagnostic Imaging and Interventional Therapy, Donau-Isar-Klinikum, Perlasberger Straße 41, 94469 Deggendorf, Germany; Medical Faculty, Ludwig Maximilian University Munich, Bavariaring 19, 80336 Munich, Germany
8
Delgado-Rodriguez P, Lamanna-Rama N, Saande C, Aldabe R, Soto-Montenegro ML, Munoz-Barrutia A. Multiscale and multimodal evaluation of autosomal dominant polycystic kidney disease development. Commun Biol 2024; 7:1183. [PMID: 39300231 DOI: 10.1038/s42003-024-06868-1]
Abstract
Autosomal Dominant Polycystic Kidney Disease (ADPKD) is the most prevalent genetic kidney disorder, producing structural abnormalities and impaired function. This research investigates its evolution in mouse models, utilizing a combination of histology imaging, Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) to evaluate its progression thoroughly. ADPKD was induced in mice via PKD2 gene knockout, followed by image acquisition at different stages. Histology data provide two-dimensional details, such as the cystic area ratio, whereas CT and MRI facilitate three-dimensional temporal monitoring. Our approach allows the affected tissue to be quantified at different disease stages through multiple quantitative metrics. A pivotal point appears at approximately ten weeks after induction, marked by a swift acceleration in disease advancement and leading to a notable increase in cyst formation. This multimodal strategy augments our comprehension of ADPKD dynamics and suggests the possibility of employing higher-resolution imaging in the future for more accurate volumetric analyses.
Affiliation(s)
- Pablo Delgado-Rodriguez
- Bioengineering Department, Universidad Carlos III de Madrid, Madrid, Spain.
- Instituto de Investigacion Sanitaria Gregorio Marañon (IiSGM), Madrid, Spain.
- Nicolás Lamanna-Rama
- Instituto de Investigacion Sanitaria Gregorio Marañon (IiSGM), Madrid, Spain
- Instituto de Investigacion Sanitaria Fundación Jimenez Diaz (IIS - FJD), Madrid, Spain
- Cassondra Saande
- Division of Gene Therapy and Regulation of Gene Expression, Centre for Applied Medical Research (CIMA), University of Navarra, Pamplona, Spain
- Rafael Aldabe
- Division of Gene Therapy and Regulation of Gene Expression, Centre for Applied Medical Research (CIMA), University of Navarra, Pamplona, Spain
- María L Soto-Montenegro
- Instituto de Investigacion Sanitaria Gregorio Marañon (IiSGM), Madrid, Spain
- CIBER de Salud Mental (CIBERSAM), Madrid, Spain
- High Performance Research Group in Physiopathology and Pharmacology of the Digestive System (NeuGut), University Rey Juan Carlos (URJC), Alcorcon, Spain
- Arrate Munoz-Barrutia
- Bioengineering Department, Universidad Carlos III de Madrid, Madrid, Spain
- Instituto de Investigacion Sanitaria Gregorio Marañon (IiSGM), Madrid, Spain
9
Correa-Medero RL, Jeong J, Patel B, Banerjee I, Abdul-Muhsin H. Automated Analysis of Split Kidney Function from CT Scans Using Deep Learning and Delta Radiomics. J Endourol 2024; 38:817-823. [PMID: 38695176 DOI: 10.1089/end.2023.0488]
Abstract
Background: Differential kidney function assessment is an important part of the preoperative evaluation for various urological interventions. It is obtained through dedicated nuclear medicine imaging and is not yet implemented through conventional imaging. Objective: We assess whether differential kidney function can be obtained through evaluation of contrast-enhanced computed tomography (CT) using a combination of deep learning and (2D and 3D) radiomic features. Methods: All patients who underwent kidney nuclear scanning at Mayo Clinic sites between 2018 and 2022 were collected. CT scans of the kidneys obtained within a 3-month interval before or after the nuclear scans were extracted. Patients who underwent a urological or radiological intervention within this time frame were excluded. A segmentation model was used to segment both kidneys. 2D and 3D radiomics features were extracted and compared between the two kidneys to compute delta radiomics and assess their ability to predict differential kidney function. Performance was reported using receiver operating characteristics, sensitivity, and specificity. Results: Studies from Arizona and Rochester formed our internal dataset (n = 1,159). Studies from Florida were separately processed as an external test set to validate generalizability. We obtained 323 studies from our internal sites and 39 studies from external sites. The best results were obtained by a random forest model trained on 3D delta radiomics features. This model achieved an area under the curve (AUC) of 0.85 and 0.81 on the internal and external test sets, while specificity and sensitivity were 0.84 and 0.68 on the internal set and 0.70 and 0.65 on the external set. Conclusion: This proposed automated pipeline can derive important differential kidney function information from contrast-enhanced CT and reduce the need for dedicated nuclear scans for early-stage differential kidney function assessment. Clinical Impact: We establish a machine learning methodology for assessing differential kidney function from routine CT without the need for expensive and radioactive nuclear medicine scans.
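The delta-radiomics idea (comparing feature vectors of the two kidneys and classifying the difference) can be sketched as follows with synthetic features and a scikit-learn random forest. The array sizes loosely echo the internal cohort; everything else is an illustrative assumption rather than the authors' pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Illustrative feature matrices: one row per study, columns are radiomics features
# computed separately for the left and right kidney (real features would come from
# a 2D/3D radiomics extractor applied to the kidney segmentations).
rng = np.random.default_rng(42)
left = rng.normal(size=(323, 50))
right = rng.normal(size=(323, 50))
labels = rng.integers(0, 2, size=323)       # e.g., 1 = clinically relevant functional asymmetry

delta = left - right                         # "delta radiomics": per-feature difference
X_train, X_test, y_train, y_test = train_test_split(delta, labels, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)
print("AUC:", round(roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]), 3))
```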
Affiliation(s)
- Jiwoong Jeong
- School of Computing and Augmented Intelligence, Arizona State University, Arizona, USA
- Bhavik Patel
- School of Computing and Augmented Intelligence, Arizona State University, Arizona, USA
- Department of Radiology, Mayo Clinic Hospital, Phoenix, Arizona, USA
- Imon Banerjee
- School of Computing and Augmented Intelligence, Arizona State University, Arizona, USA
- Department of Radiology, Mayo Clinic Hospital, Phoenix, Arizona, USA
10
Luo H, Li J, Huang H, Jiao L, Zheng S, Ying Y, Li Q. AI-based segmentation of renal enhanced CT images for quantitative evaluate of chronic kidney disease. Sci Rep 2024; 14:16890. [PMID: 39043766 PMCID: PMC11266695 DOI: 10.1038/s41598-024-67658-7]
Abstract
To quantitatively evaluate chronic kidney disease (CKD), a deep convolutional neural network-based segmentation model was applied to renal enhanced computed tomography (CT) images. A retrospective analysis was conducted on a cohort of 100 individuals diagnosed with CKD and 90 individuals with healthy kidneys, who underwent contrast-enhanced CT scans of the kidneys or abdomen. Demographic and clinical data were collected from all participants. The study consisted of two distinct stages: first, the development and validation of a three-dimensional (3D) nnU-Net model for segmenting the arterial phase of renal enhanced CT scans; second, the utilization of the 3D nnU-Net model for quantitative evaluation of CKD. The 3D nnU-Net model achieved a mean Dice similarity coefficient (DSC) of 93.53% for renal parenchyma and 81.48% for renal cortex. Statistically significant differences were observed among different stages of renal function for renal parenchyma volume (VRP), renal cortex volume (VRC), renal medulla volume (VRM), and the CT values of the renal parenchyma (HuRP), renal cortex (HuRC), and renal medulla (HuRM) (F = 93.476, 144.918, 9.637, 170.533, 216.616, and 94.283; p < 0.001). Pearson correlation analysis revealed significant positive associations between estimated glomerular filtration rate (eGFR) and VRP, VRC, VRM, HuRP, HuRC, and HuRM (r = 0.749, 0.818, 0.321, 0.819, 0.820, and 0.747, respectively, all p < 0.001). Similarly, a negative correlation was observed between serum creatinine (Scr) levels and VRP, VRC, VRM, HuRP, HuRC, and HuRM (r = - 0.759, - 0.777, - 0.420, - 0.762, - 0.771, and - 0.726, respectively, all p < 0.001). For predicting CKD in males, VRP had an area under the curve (AUC) of 0.726, p < 0.001; VRC, AUC 0.765, p < 0.001; VRM, AUC 0.578, p = 0.018; HuRP, AUC 0.912, p < 0.001; HuRC, AUC 0.952, p < 0.001; and HuRM, AUC 0.772, p < 0.001. In females, VRP had an AUC of 0.813, p < 0.001; VRC, AUC 0.851, p < 0.001; VRM, AUC 0.623, p = 0.060; HuRP, AUC 0.904, p < 0.001; HuRC, AUC 0.934, p < 0.001; and HuRM, AUC 0.840, p < 0.001. The optimal cutoff values for predicting CKD were 99.9 Hu (males) and 98.4 Hu (females) for HuRP, and 120.1 Hu (males) and 111.8 Hu (females) for HuRC. The kidney was effectively segmented by our AI-based 3D nnU-Net model on enhanced renal CT images. For mild kidney injury, the CT values exhibited higher sensitivity than kidney volume. The correlation analysis revealed a stronger association of VRC, HuRP, and HuRC with renal function, a weaker association for VRP and HuRM, and the weakest for VRM. In particular, HuRP and HuRC demonstrated significant potential for predicting renal function. For diagnosing CKD, it is recommended to set the threshold values as follows: HuRP < 99.9 Hu and HuRC < 120.1 Hu in males, and HuRP < 98.4 Hu and HuRC < 111.8 Hu in females.
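The optimal attenuation cutoffs reported here are the kind of result produced by a ROC analysis with a Youden-index criterion; the sketch below runs that analysis on synthetic HuRC values. The simulated distributions are assumptions and will not reproduce the paper's exact thresholds.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Illustrative data: renal-cortex attenuation (HuRC) for CKD patients vs. healthy kidneys.
rng = np.random.default_rng(1)
hu_rc = np.concatenate([rng.normal(100, 15, 100), rng.normal(135, 15, 90)])
is_ckd = np.concatenate([np.ones(100), np.zeros(90)])

# CKD is associated with LOWER attenuation, so use score = -HuRC for the ROC analysis.
fpr, tpr, thresholds = roc_curve(is_ckd, -hu_rc)
auc = roc_auc_score(is_ckd, -hu_rc)
best = np.argmax(tpr - fpr)                       # Youden index J = sensitivity + specificity - 1
print(f"AUC={auc:.3f}, optimal cutoff: predict CKD when HuRC < {-thresholds[best]:.1f} Hu")
```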
Affiliation(s)
- Hui Luo
- Department of Radiology, Ningbo Yinzhou Second Hospital, Ningbo, China
- Jingzhen Li
- Department of Nephrology, Ningbo Yinzhou Second Hospital, Ningbo, China
- Haiyang Huang
- Department of Radiology, Ningbo Yinzhou Second Hospital, Ningbo, China
- Lianghong Jiao
- Department of Radiology, Ningbo Yinzhou Second Hospital, Ningbo, China
- Siyuan Zheng
- Department of Radiology, Ningbo Yinzhou Second Hospital, Ningbo, China
- Yibo Ying
- Department of Radiology, Ningbo Yinzhou Second Hospital, Ningbo, China
- Qiang Li
- Department of Radiology, The Affiliated People's Hospital of Ningbo University, Ningbo, 315000, China.
11
Jang DH, Lee J, Jeon YJ, Yoon YE, Ahn H, Kang BK, Choi WS, Oh J, Lee DK. Kidney, ureter, and urinary bladder segmentation based on non-contrast enhanced computed tomography images using modified U-Net. Sci Rep 2024; 14:15325. [PMID: 38961140 PMCID: PMC11222420 DOI: 10.1038/s41598-024-66045-6]
Abstract
This study was performed to segment the urinary system, as the basis for diagnosing urinary system diseases, on non-contrast computed tomography (CT). The study was conducted with images obtained between January 2016 and December 2020. During the study period, non-contrast abdominopelvic CT scans of patients diagnosed and treated with urinary stones at the emergency departments of two institutions were collected. Region-of-interest extraction was performed first, and urinary system segmentation was then performed using a modified U-Net. Thereafter, fivefold cross-validation was performed to evaluate the robustness of the model performance. In the fivefold cross-validation results for segmentation of the urinary system, the average Dice coefficient was 0.8673, and the Dice coefficients for each class (kidney, ureter, and urinary bladder) were 0.9651, 0.7172, and 0.9196, respectively. On the test dataset, the average Dice coefficient of the best-performing model in fivefold cross-validation for the whole urinary system was 0.8623, and the Dice coefficients for each class (kidney, ureter, and urinary bladder) were 0.9613, 0.7225, and 0.9032, respectively. The segmentation of the urinary system using the modified U-Net proposed in this study could form the basis for the detection of kidney, ureter, and urinary bladder lesions, such as stones and tumours, through machine learning.
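The per-class Dice coefficients reported for kidney, ureter, and urinary bladder follow directly from integer label maps; a minimal NumPy version is sketched below. The label convention (1 = kidney, 2 = ureter, 3 = bladder) is an assumption for illustration.

```python
import numpy as np

def per_class_dice(pred, target, labels=(1, 2, 3)):
    """Dice coefficient per class for integer label maps.

    Assumed label convention (illustrative): 1 = kidney, 2 = ureter, 3 = urinary bladder.
    """
    scores = {}
    for c in labels:
        p, t = (pred == c), (target == c)
        denom = p.sum() + t.sum()
        scores[c] = 2.0 * np.logical_and(p, t).sum() / denom if denom else 1.0
    return scores

# Toy label volumes standing in for model output and manual annotation.
pred = np.random.randint(0, 4, size=(64, 256, 256))
target = np.random.randint(0, 4, size=(64, 256, 256))
scores = per_class_dice(pred, target)
print(scores, "mean:", round(float(np.mean(list(scores.values()))), 4))
```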
Affiliation(s)
- Dong-Hyun Jang
- Department of Public Healthcare Service, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
- Juncheol Lee
- Department of Emergency Medicine, College of Medicine, Hanyang University, 222 Wangsimni-ro, Seongdong-gu, Seoul, 04763, Republic of Korea
- Young Eun Yoon
- Department of Urology, College of Medicine, Hanyang University, Seoul, Republic of Korea
- Hyungwoo Ahn
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
- Bo-Kyeong Kang
- Department of Radiology, College of Medicine, Hanyang University, Seoul, Republic of Korea
- Won Seok Choi
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
- Jaehoon Oh
- Department of Emergency Medicine, College of Medicine, Hanyang University, 222 Wangsimni-ro, Seongdong-gu, Seoul, 04763, Republic of Korea.
- Dong Keon Lee
- Department of Emergency Medicine, Seoul National University Bundang Hospital, 13620, 82, Gumi-ro 173 Beon-gil, Bundang-gu, Seongnam-si, Gyeonggi-do, Republic of Korea.
- Department of Emergency Medicine, Seoul National University College of Medicine, Seoul, Republic of Korea.
12
Özbay E, Özbay FA, Gharehchopogh FS. Kidney Tumor Classification on CT images using Self-supervised Learning. Comput Biol Med 2024; 176:108554. [PMID: 38744013 DOI: 10.1016/j.compbiomed.2024.108554]
Abstract
Kidney tumors are among the most common diseases affecting people around the world, and the risk of kidney disease is increased by factors such as consumption of ready-made food and unhealthy habits. Early diagnosis of kidney tumors is essential for effective treatment, reducing side effects, and reducing the number of deaths. With the development of computer-aided diagnostic methods, the need for accurate renal tumor classification is also increasing. Because traditional methods based on manual detection are time-consuming, tedious, and costly, deep learning (DL) methods allow high-accuracy kidney tumor detection (KTD) to be performed faster and at lower cost. Among the current challenges of artificial intelligence-assisted KTD, obtaining more precise diagnostic information and the capacity to classify with high accuracy are vital for clinical decision-making and current treatment in KTD prediction. This motivates us to propose a more effective DL model that can effectively assist specialist physicians in the diagnosis of kidney tumors. In this way, the workload of radiologists can be alleviated and errors in clinical diagnoses that may occur due to the complex structure of the kidney can be prevented. A large amount of data is usually needed to train such methods. Although various studies have attempted to reduce the amount of data with feature selection techniques, these techniques provide little improvement in classification accuracy. In this paper, a masked autoencoder (MAE) is proposed for KTD that can produce effective results on datasets containing few samples and can be directly fine-tuned and pre-trained. Self-supervised learning (SSL) is achieved through self-distillation (SD), in which the masked patches are reintroduced into the reconstruction loss calculation. In the proposed SSLSD-KTD method, the SD loss is calculated on the latent representations of the encoder and decoder outputs: the encoder captures local attention, while the decoder transfers its global attention to the loss calculation. The SSLSD-KTD method reached 98.04% classification accuracy on the KAUH-kidney dataset, comprising 8400 samples, and 82.14% on the CT-kidney dataset, comprising 840 samples. By adding more external information to the SSLSD-KTD method with transfer learning, accuracies of 99.82% and 95.24% were obtained on the same datasets. Experimental results show that the SSLSD-KTD method can effectively extract kidney tumor features from limited data and can be an aid, or even an alternative, to radiologists in decision-making during the diagnosis of the disease.
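The masking step at the heart of MAE-style self-supervised pre-training can be sketched in a few lines of PyTorch, as below. This shows only random patch masking of image tensors; the self-distillation loss and the SSLSD-KTD architecture are not reproduced, and the patch size and mask ratio are assumed values.

```python
import torch

def random_patch_mask(images, patch=16, mask_ratio=0.75):
    """Randomly hide a fraction of non-overlapping patches (MAE-style masking).

    images: (N, C, H, W) tensor with H and W divisible by `patch`.
    Returns the masked images and a boolean patch mask (True = hidden).
    """
    n, c, h, w = images.shape
    gh, gw = h // patch, w // patch
    num_patches = gh * gw
    num_masked = int(mask_ratio * num_patches)

    scores = torch.rand(n, num_patches)                 # random priority per patch
    idx = scores.argsort(dim=1)[:, :num_masked]         # patches with the lowest scores get hidden
    mask = torch.zeros(n, num_patches, dtype=torch.bool)
    mask[torch.arange(n).unsqueeze(1), idx] = True

    # Expand the patch mask to pixel resolution and zero out the hidden regions.
    mask_2d = mask.view(n, 1, gh, gw)
    mask_2d = mask_2d.repeat_interleave(patch, dim=2).repeat_interleave(patch, dim=3)
    masked = images.clone()
    masked[mask_2d.expand(-1, c, -1, -1)] = 0.0
    return masked, mask

x = torch.randn(2, 1, 224, 224)          # e.g., two grayscale kidney CT slices
masked, mask = random_patch_mask(x)
print(masked.shape, round(mask.float().mean().item(), 2))   # ~0.75 of patches hidden
```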
Affiliation(s)
- Erdal Özbay
- Department of Computer Engineering, Firat University, 23119, Elazig, Turkey.
13
Kellner E, Sekula P, Reisert M, Köttgen A, Lipovsek J, Russe M, Horbach H, Schlett CL, Nauck M, Völzke H, Kröncke T, Bette S, Kauczor HU, Keil T, Pischon T, Heid IM, Peters A, Niendorf T, Lieb W, Bamberg F, Büchert M, Reichardt W. Imaging Markers Derived From MRI-Based Automated Kidney Segmentation—an Analysis of Data From the German National Cohort (NAKO Gesundheitsstudie). Dtsch Arztebl Int 2024; 121:284-290. [PMID: 38530931 PMCID: PMC11381199 DOI: 10.3238/arztebl.m2024.0040]
Abstract
BACKGROUND Population-wide research on potential new imaging biomarkers of the kidney depends on accurate automated segmentation of the kidney and its compartments (cortex, medulla, and sinus). METHODS We developed a robust deep-learning framework for kidney (sub-)segmentation based on a hierarchical, three-dimensional convolutional neural network (CNN) that was optimized for multiscale problems of combined localization and segmentation. We applied the CNN to abdominal magnetic resonance images from the population-based German National Cohort (NAKO) study. RESULTS There was good to excellent agreement between the model predictions and manual segmentations. The median values for the body-surface normalized total kidney, cortex, medulla, and sinus volumes of 9934 persons were 158, 115, 43, and 24 mL/m2. Distributions of these markers are provided both for the overall study population and for a subgroup of persons without kidney disease or any associated conditions. Multivariable adjusted regression analyses revealed that diabetes, male sex, and a higher estimated glomerular filtration rate (eGFR) are important predictors of higher total and cortical volumes. Each increase of eGFR by one unit (i.e., 1 mL/min per 1.73 m2 body surface area) was associated with a 0.98 mL/m2 increase in total kidney volume, and this association was significant. Volumes were lower in persons with eGFR-defined chronic kidney disease. CONCLUSION The extraction of image-based biomarkers through CNN-based renal sub-segmentation using data from a population-based study yields reliable results, forming a solid foundation for future investigations.
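Body-surface normalization of kidney volumes (mL/m2) and the volume-eGFR association can be illustrated as follows. The Du Bois BSA formula and the synthetic regression data are assumptions, since the article does not specify its exact normalization code.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def bsa_du_bois(weight_kg, height_cm):
    """Du Bois body-surface area in m^2 (an assumed choice; the study may use another formula)."""
    return 0.007184 * weight_kg ** 0.425 * height_cm ** 0.725

def normalized_volume(volume_ml, weight_kg, height_cm):
    """Body-surface-normalized volume in mL/m^2, the unit tabulated in the article."""
    return volume_ml / bsa_du_bois(weight_kg, height_cm)

print(f"{normalized_volume(300.0, 80.0, 175.0):.0f} mL/m^2")

# Synthetic check of a volume-eGFR association of the kind reported in the article.
rng = np.random.default_rng(3)
egfr = rng.normal(90, 15, 1000)
volume = 70 + 0.98 * egfr + rng.normal(0, 15, 1000)   # slope chosen to mimic the reported effect size
fit = LinearRegression().fit(egfr.reshape(-1, 1), volume)
print(f"fitted slope: {fit.coef_[0]:.2f} mL/m^2 per eGFR unit")
```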
Affiliation(s)
- Elias Kellner
- Division of Medical Physics, Department of Radiology, University Medical Center Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Peggy Sekula
- Institute of Genetic Epidemiology, Faculty of Medicine and Medical Center – University of Freiburg, Freiburg, Germany
- Marco Reisert
- Division of Medical Physics, Department of Radiology, University Medical Center Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Anna Köttgen
- Institute of Genetic Epidemiology, Faculty of Medicine and Medical Center – University of Freiburg, Freiburg, Germany
- Jan Lipovsek
- Institute of Genetic Epidemiology, Faculty of Medicine and Medical Center – University of Freiburg, Freiburg, Germany
- Maximilian Russe
- Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, Albert-Ludwigs-University Freiburg, Germany
- Harald Horbach
- Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, Albert-Ludwigs-University Freiburg, Germany
- Christopher L. Schlett
- Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, Albert-Ludwigs-University Freiburg, Germany
- Matthias Nauck
- Institute of Clinical Chemistry and Laboratory Medicine, University Medicine Greifswald, Germany
- DZHK (German Centre for Cardiovascular Research), Partner Site Greifswald, University Medicine Greifswald, Germany
- Henry Völzke
- DZHK (German Centre for Cardiovascular Research), Partner Site Greifswald, University Medicine Greifswald, Germany
- Institute for Community Medicine, University Medicine Greifswald, Germany
- Thomas Kröncke
- Department of Diagnostic and Interventional Radiology, University Hospital Augsburg, Germany
- Centre for Advanced Analytics and Predictive Sciences (CAAPS), University of Augsburg, Germany
- Stefanie Bette
- Department of Diagnostic and Interventional Radiology, University Hospital Augsburg, Germany
- Hans-Ulrich Kauczor
- Department of Diagnostical and Interventional Radiology, University Hospital Heidelberg, Germany
- Thomas Keil
- Institute of Social Medicine, Epidemiology and Health Economics, Charité – Universitätsmedizin Berlin, Institute of Clinical Epidemiology and Biometry, University of Würzburg, State Institute of Health I, Bavarian Health and Food Safety Authority, Erlangen, Germany
- Tobias Pischon
- Max-Delbrueck-Center for Molecular Medicine in the Helmholtz Association (MDC), Molecular Epidemiology Research Group; Max-Delbrueck-Center for Molecular Medicine in the Helmholtz Association (MDC), Biobank Technology Platform, Berlin; Charité – Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Germany
- Iris M. Heid
- Chair of Genetic Epidemiology, University of Regensburg, Germany
- Annette Peters
- Institute of Epidemiology, Helmholtz Center Munich, German Research Center for Environmental Health, Neuherberg; Chair of Epidemiology, Institute for Medical Information Processing, Biometrics, and Epidemiology, Medical Faculty, Ludwig-Maximilians-University Munich; DZHK (German Centre for Cardiovascular Research), Partner Site Munich, Munich Heart Alliance, Munich; DZD (German Centre for Diabetes Research), Neuherberg
- Thoralf Niendorf
- Berlin Ultrahigh Field Facility (B.U.F.F.), Max Delbrück Center for Molecular Medicine in the Helmholtz Association, Berlin
- Wolfgang Lieb
- Institute of Epidemiology, Kiel University, Kiel, Germany
- Fabian Bamberg
- Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, Albert-Ludwigs-University Freiburg, Germany
- Martin Büchert
- Division of Medical Physics, Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Department of Diagnostic and Interventional Radiology, Core Facility MRDAC, University Medical Center Freiburg, Faculty of Medicine, Albert-Ludwigs-University Freiburg, Germany
- Wilfried Reichardt
- Division of Medical Physics, Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
14
Gharahbagh AA, Hajihashemi V, Machado JJM, Tavares JMRS. Feature Extraction Based on Local Histogram with Unequal Bins and a Recurrent Neural Network for the Diagnosis of Kidney Diseases from CT Images. Bioengineering (Basel) 2024; 11:220. [PMID: 38534494 DOI: 10.3390/bioengineering11030220]
Abstract
Kidney disease remains one of the most common ailments worldwide, with cancer being one of its most common forms. Early diagnosis can significantly improve the prognosis for the patient. The development of an artificial intelligence-based system to assist in kidney cancer diagnosis is crucial because kidney illness is a global health concern and there are limited nephrologists qualified to evaluate kidney cancer. Diagnosing and categorising different forms of renal failure presents the biggest treatment hurdle for kidney cancer. Thus, this article presents a novel method for detecting and classifying kidney cancer subgroups in Computed Tomography (CT) images based on an asymmetric local statistical pixel distribution. In the first step, the input image is divided into non-overlapping windows, and a statistical distribution of its pixels is built for each cancer type. The method then builds the asymmetric statistical distribution of the image's gradient pixels. Finally, the cancer type is identified by feeding the two statistical distributions to a Deep Neural Network (DNN). The proposed method was evaluated using a dataset collected and authorised by the Dhaka Central International Medical Hospital in Bangladesh, which includes 12,446 CT images of the whole abdomen and urogram, acquired with and without contrast. Based on the results, it is possible to confirm that the proposed method outperformed state-of-the-art methods in terms of the usual correctness criteria. The accuracy of the proposed method for all kidney cancer subtypes presented in the dataset was 99.89%, which is promising.
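The windowed-histogram feature extraction described here (unequal intensity bins plus a gradient histogram per non-overlapping window) can be sketched with NumPy as below. The window size and bin edges are illustrative assumptions, not the paper's values.

```python
import numpy as np

def window_histogram_features(image, win=32, bin_edges=(0, 40, 80, 120, 180, 256)):
    """Histogram features with unequal bins from non-overlapping windows.

    Returns one feature vector per window, concatenating an intensity histogram
    (with the assumed unequal bin edges) and a gradient-magnitude histogram.
    """
    gy, gx = np.gradient(image.astype(float))
    gradient = np.hypot(gy, gx)
    feats = []
    h, w = image.shape
    for r in range(0, h - win + 1, win):
        for c in range(0, w - win + 1, win):
            patch = image[r:r + win, c:c + win]
            gpatch = gradient[r:r + win, c:c + win]
            hist_i, _ = np.histogram(patch, bins=bin_edges, density=True)
            hist_g, _ = np.histogram(gpatch, bins=len(bin_edges) - 1, density=True)
            feats.append(np.concatenate([hist_i, hist_g]))
    return np.array(feats)

# Toy 8-bit slice; real input would be a rescaled abdominal CT image.
slice_8bit = np.random.randint(0, 256, size=(512, 512)).astype(np.uint8)
features = window_histogram_features(slice_8bit)
print(features.shape)   # (num_windows, 2 * num_bins)
```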
Affiliation(s)
- Vahid Hajihashemi
- Faculdade de Engenharia, Universidade do Porto, Rua Dr. Roberto Frias, s/n, 4200-465 Porto, Portugal
- José J M Machado
- Instituto de Ciência e Inovação em Engenharia Mecânica e Engenharia Industrial, Departamento de Engenharia Mecânica, Faculdade de Engenharia, Universidade do Porto, Rua Dr. Roberto Frias, s/n, 4200-465 Porto, Portugal
- João Manuel R S Tavares
- Instituto de Ciência e Inovação em Engenharia Mecânica e Engenharia Industrial, Departamento de Engenharia Mecânica, Faculdade de Engenharia, Universidade do Porto, Rua Dr. Roberto Frias, s/n, 4200-465 Porto, Portugal
15
Wang L, Ye M, Lu Y, Qiu Q, Niu Z, Shi H, Wang J. A combined encoder-transformer-decoder network for volumetric segmentation of adrenal tumors. Biomed Eng Online 2023; 22:106. [PMID: 37940921 PMCID: PMC10631161 DOI: 10.1186/s12938-023-01160-5]
Abstract
BACKGROUND The morphology of an adrenal tumor and the clinical statistics of the adrenal tumor area are two crucial features for diagnosis and differential diagnosis, indicating that precise tumor segmentation is essential. Therefore, we built a CT image segmentation method based on an encoder-decoder structure combined with a Transformer for volumetric segmentation of adrenal tumors. METHODS This study included a total of 182 patients with adrenal metastases, and an adrenal tumor volumetric segmentation method combining an encoder-decoder structure and a Transformer was constructed. The Dice score coefficient (DSC), Hausdorff distance, intersection over union (IOU), average surface distance (ASD) and mean average error (MAE) were calculated to evaluate the performance of the segmentation method. RESULTS Our proposed method was compared with other CNN-based and Transformer-based methods. The results showed excellent segmentation performance, with a mean DSC of 0.858, a mean Hausdorff distance of 10.996, a mean IOU of 0.814, a mean MAE of 0.0005, and a mean ASD of 0.509. The boxplot of the segmentation performance of all test samples implies that the proposed method has the lowest skewness and the highest average prediction performance. CONCLUSIONS Our proposed method can directly generate 3D lesion maps and showed excellent segmentation performance. The comparison of segmentation metrics and visualization results showed that our proposed method performs very well in segmentation.
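Two of the metrics used in this evaluation, IoU and the Hausdorff distance, can be computed on small binary masks as in the sketch below; Dice and the surface-distance metrics follow similar patterns. The toy masks are assumptions for illustration only.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def iou(pred, target):
    """Intersection over union of two binary masks."""
    p, t = pred.astype(bool), target.astype(bool)
    union = np.logical_or(p, t).sum()
    return np.logical_and(p, t).sum() / union if union else 1.0

def hausdorff(pred, target):
    """Symmetric Hausdorff distance between the voxel coordinate sets of two masks (in voxels)."""
    p_pts, t_pts = np.argwhere(pred), np.argwhere(target)
    return max(directed_hausdorff(p_pts, t_pts)[0], directed_hausdorff(t_pts, p_pts)[0])

# Small toy masks standing in for predicted and reference tumor segmentations.
pred = np.zeros((32, 64, 64), dtype=np.uint8)
pred[8:20, 20:40, 20:40] = 1
gt = np.zeros_like(pred)
gt[10:22, 22:42, 22:42] = 1
print(f"IoU={iou(pred, gt):.3f}, Hausdorff={hausdorff(pred, gt):.1f} voxels")
```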
Affiliation(s)
- Liping Wang
- College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, Zhejiang, China
- Mingtao Ye
- College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, Zhejiang, China
- Yanjie Lu
- College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, Zhejiang, China
- Qicang Qiu
- Zhejiang Lab, No. 1818, Western Road of Wenyi, Hangzhou, Zhejiang, China.
- Zhongfeng Niu
- Department of Radiology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
- Hengfeng Shi
- Department of Radiology, Anqing Municipal Hospital, Anqing, Anhui, China
- Jian Wang
- Department of Radiology, Tongde Hospital of Zhejiang Province, No.234, Gucui Road, Hangzhou, Zhejiang, China.
16
Rao PK, Chatterjee S, Janardhan M, Nagaraju K, Khan SB, Almusharraf A, Alharbe AI. Optimizing Inference Distribution for Efficient Kidney Tumor Segmentation Using a UNet-PWP Deep-Learning Model with XAI on CT Scan Images. Diagnostics (Basel) 2023; 13:3244. [PMID: 37892065 PMCID: PMC10606269 DOI: 10.3390/diagnostics13203244]
Abstract
Kidney tumors represent a significant medical challenge, characterized by their often-asymptomatic nature and the need for early detection to facilitate timely and effective intervention. Although neural networks have shown great promise in disease prediction, their computational demands have limited their practicality in clinical settings. This study introduces a novel methodology, the UNet-PWP architecture, tailored explicitly for kidney tumor segmentation and designed to optimize resource utilization and overcome computational complexity constraints. A key novelty in our approach is the application of adaptive partitioning, which deconstructs the intricate UNet architecture into smaller submodels. This partitioning strategy reduces computational requirements and enhances the model's efficiency in processing kidney tumor images. Additionally, we augment the UNet's depth by incorporating pre-trained weights, thereby significantly boosting its capacity to handle intricate and detailed segmentation tasks. Furthermore, we employ weight-pruning techniques to eliminate redundant zero-weighted parameters, further streamlining the UNet-PWP model without compromising its performance. To rigorously assess the effectiveness of our proposed UNet-PWP model, we conducted a comparative evaluation alongside the DeepLab V3+ model, both trained on the "KiTS 19, 21, and 23" kidney tumor datasets. Our results are promising, with the UNet-PWP model achieving an exceptional accuracy rate of 97.01% on both the training and test datasets, surpassing the DeepLab V3+ model in performance. Furthermore, to ensure that our model's results are easily understandable and explainable, we included a fusion of the attention and Grad-CAM XAI methods. This approach provides valuable insights into the decision-making process of our model and the regions of interest that affect its predictions. In the medical field, this interpretability is crucial for healthcare professionals to trust and comprehend the model's reasoning.
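The weight-pruning step mentioned here can be approximated by simple magnitude pruning, shown below on a toy network. The authors' exact pruning criterion and the UNet-PWP partitioning are not reproduced, and the threshold is an assumed value.

```python
import torch
import torch.nn as nn

def prune_small_weights(model, threshold=1e-2):
    """Zero out conv/linear weights whose magnitude is below `threshold` (magnitude pruning)."""
    total, zeroed = 0, 0
    with torch.no_grad():
        for module in model.modules():
            if isinstance(module, (nn.Conv2d, nn.Linear)):
                mask = module.weight.abs() < threshold
                module.weight[mask] = 0.0
                total += module.weight.numel()
                zeroed += int(mask.sum())
    return zeroed / total

# Toy stand-in for a U-Net-style segmentation network.
model = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 1, kernel_size=1))
print(f"fraction of weights pruned: {prune_small_weights(model):.2%}")
```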
Affiliation(s)
- P. Kiran Rao
- Artificial Intelligence, Department of Computer Science and Engineering, Ravindra College of Engineering for Women, Kurnool 518001, India
- Department of Computer Science and Engineering, Faculty of Engineering, MS Ramaiah University of Applied Sciences, Bengaluru 560058, India;
- Subarna Chatterjee
- Department of Computer Science and Engineering, Faculty of Engineering, MS Ramaiah University of Applied Sciences, Bengaluru 560058, India
- M. Janardhan
- Artificial Intelligence, Department of Computer Science and Engineering, G. Pullaiah College of Engineering and Technology, Kurnool 518008, India
- K. Nagaraju
- Department of Computer Science and Engineering, Indian Institute of Information Technology Design and Manufacturing Kurnool, Kurnool 518008, India
- Surbhi Bhatia Khan
- Department of Data Science, School of Science, Engineering and Environment, University of Salford, Salford M5 4WT, UK
- Department of Electrical and Computer Engineering, Lebanese American University, Byblos 13-5053, Lebanon
- Ahlam Almusharraf
- Department of Business Administration, College of Business and Administration, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Abdullah I. Alharbe
- Department of Computer Science, Faculty of Computing and Information Technology, King Abdulaziz University, Rabigh 21911, Saudi Arabia
17
Ji Y, Hwang G, Lee SJ, Lee K, Yoon H. A deep learning model for automated kidney calculi detection on non-contrast computed tomography scans in dogs. Front Vet Sci 2023; 10:1236579. [PMID: 37799401 PMCID: PMC10548669 DOI: 10.3389/fvets.2023.1236579]
Abstract
Nephrolithiasis is one of the most common urinary disorders in dogs. Although the majority of kidney calculi are non-obstructive and likely to be asymptomatic, they can lead to parenchymal loss and obstruction as they progress. Thus, early diagnosis of kidney calculi is important for patient monitoring and a better prognosis. However, detecting kidney calculi and monitoring changes in the sizes of the calculi on computed tomography (CT) images is time-consuming for clinicians. This study, the first of its kind, aims to develop a deep learning model for automatic kidney calculi detection using pre-contrast CT images of dogs. A total of 34,655 transverse image slices obtained from 76 dogs with kidney calculi were used to develop the deep learning model. Because of the differences in kidney location and calculi sizes in dogs compared with humans, several processing methods were used. The first stage of the models, based on the Attention U-Net (AttUNet), was designed to detect the kidney for the coarse feature map. Five different models (AttUNet, UTNet, TransUNet, SwinUNet, and RBCANet) were used in the second stage to detect the calculi in the kidneys, and the performance of the models was evaluated. Compared with a previously developed model, all the models developed in this study yielded better Dice similarity coefficients (DSCs) for automatic segmentation of the kidney. For detecting kidney calculi, RBCANet and SwinUNet yielded the best DSC, which was 0.74. In conclusion, the deep learning model developed in this study can be useful for the automated detection of kidney calculi.
Collapse
Affiliation(s)
- Yewon Ji
- Department of Veterinary Medical Imaging, College of Veterinary Medicine, Jeonbuk National University, Iksan, Republic of Korea
| | - Gyeongyeon Hwang
- Division of Electronic Engineering, College of Engineering, Jeonbuk National University, Jeonju, Republic of Korea
| | - Sang Jun Lee
- Division of Electronic Engineering, College of Engineering, Jeonbuk National University, Jeonju, Republic of Korea
| | - Kichang Lee
- Department of Veterinary Medical Imaging, College of Veterinary Medicine, Jeonbuk National University, Iksan, Republic of Korea
| | - Hakyoung Yoon
- Department of Veterinary Medical Imaging, College of Veterinary Medicine, Jeonbuk National University, Iksan, Republic of Korea
| |
Collapse
|
18
|
Mahmud S, Abbas TO, Mushtak A, Prithula J, Chowdhury MEH. Kidney Cancer Diagnosis and Surgery Selection by Machine Learning from CT Scans Combined with Clinical Metadata. Cancers (Basel) 2023; 15:3189. [PMID: 37370799 DOI: 10.3390/cancers15123189] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2023] [Revised: 05/30/2023] [Accepted: 06/07/2023] [Indexed: 06/29/2023] Open
Abstract
Kidney cancers are among the most common malignancies worldwide. Accurate diagnosis is a critical step in the management of kidney cancer patients and is influenced by multiple factors including tumor size or volume, cancer types and stages, etc. For malignant tumors, partial or radical surgery of the kidney might be required, but for clinicians, the basis for making this decision is often unclear. Partial nephrectomy could result in patient death due to cancer if kidney removal was necessary, whereas radical nephrectomy in less severe cases could consign patients to lifelong dialysis or future transplantation without sufficient cause. Using machine learning to consider clinical data alongside computed tomography images could potentially help resolve some of these surgical ambiguities, by enabling a more robust classification of kidney cancers and selection of optimal surgical approaches. In this study, we used the publicly available KiTS dataset of contrast-enhanced CT images and corresponding patient metadata to differentiate four major classes of kidney cancer: clear cell (ccRCC), chromophobe (chRCC), and papillary (pRCC) renal cell carcinoma, and oncocytoma (ONC). We rationalized these data to overcome the high field of view (FoV), extract tumor regions of interest (ROIs), classify patients using deep machine-learning models, and extract/post-process CT image features for combination with clinical data. Despite marked data imbalance, our combined approach achieved a high level of performance (85.66% accuracy, 84.18% precision, 85.66% recall, and 84.92% F1-score). When selecting surgical procedures for malignant tumors (RCC), our method proved even more reliable (90.63% accuracy, 90.83% precision, 90.61% recall, and 90.50% F1-score). Using feature ranking, we confirmed that tumor volume and cancer stage are the most relevant clinical features for predicting surgical procedures. Once fully mature, the approach we propose could be used to assist surgeons in performing nephrectomies by guiding the choice of optimal procedures for individual patients with kidney cancer.
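The core idea of combining CT-derived deep features with clinical metadata before a downstream classifier can be sketched as below. The synthetic feature arrays, the random forest classifier, and the importance-based ranking are illustrative stand-ins under stated assumptions, not the study's actual models or data.

```python
# Illustrative sketch: fuse CT-derived deep features with clinical metadata
# (e.g., tumor volume, cancer stage) before a downstream classifier.
# The synthetic arrays below are placeholders for real extracted features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_patients = 200
deep_features = rng.normal(size=(n_patients, 128))       # CNN embedding per tumor ROI
clinical = rng.normal(size=(n_patients, 4))              # volume, stage, age, sex (encoded)
labels = rng.integers(0, 4, size=n_patients)             # ccRCC / chRCC / pRCC / ONC

X = np.hstack([deep_features, clinical])                 # simple feature-level fusion
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))

# Feature importances give a rough analogue of the paper's feature-ranking step.
ranking = np.argsort(clf.feature_importances_)[::-1][:10]
```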
Collapse
Affiliation(s)
- Sakib Mahmud
- Department of Electrical Engineering, Qatar University, Doha 2713, Qatar
| | - Tariq O Abbas
- Urology Division, Surgery Department, Sidra Medicine, Doha 26999, Qatar
- Department of Surgery, Weill Cornell Medicine-Qatar, Doha 24811, Qatar
- College of Medicine, Qatar University, Doha 2713, Qatar
| | - Adam Mushtak
- Clinical Imaging Department, Hamad Medical Corporation, Doha 3050, Qatar
| | - Johayra Prithula
- Department of Electrical and Electronics Engineering, University of Dhaka, Dhaka 1000, Bangladesh
| | | |
Collapse
|
19
|
Sun P, Mo Z, Hu F, Song X, Mo T, Yu B, Zhang Y, Chen Z. 2.5D MFFAU-Net: a convolutional neural network for kidney segmentation. BMC Med Inform Decis Mak 2023; 23:92. [PMID: 37165349 PMCID: PMC10173575 DOI: 10.1186/s12911-023-02189-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2022] [Accepted: 05/04/2023] [Indexed: 05/12/2023] Open
Abstract
BACKGROUND Kidney tumors have become increasingly prevalent among adults and are now considered one of the most common types of tumors. Accurate segmentation of kidney tumors can help physicians assess tumor complexity and aggressiveness before surgery. However, segmenting kidney tumors manually can be difficult because of their heterogeneity. METHODS This paper proposes a 2.5D MFFAU-Net (multi-level Feature Fusion Attention U-Net) to segment kidneys, tumors and cysts. First, we propose a 2.5D model that learns to combine and represent a given slice together with its neighbouring 2D slices, thereby introducing 3D information while balancing memory consumption and model complexity. Then, we propose a ResConv architecture in MFFAU-Net and use both high-level and low-level features in the model. Finally, we use multi-level information to analyze the spatial features between slices to segment kidneys and tumors. RESULTS The 2.5D MFFAU-Net was evaluated on the KiTS19 and KiTS21 kidney datasets and achieved average Dice scores of 0.924 and 0.875, respectively, and an average Surface Dice (SD) score of 0.794 on KiTS21. CONCLUSION The 2.5D MFFAU-Net model can effectively segment kidney tumors; the results are comparable to those obtained with high-performance 3D CNN models and have the potential to serve as a point of reference in clinical practice.
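A minimal sketch of how a 2.5D input can be assembled, a target slice stacked with its neighbouring slices as channels so that a 2D network receives limited 3D context, is shown below; the five-slice window is an assumed value, not necessarily the paper's setting.

```python
# Minimal sketch of 2.5D input construction: each training sample stacks a
# target slice with its neighbouring slices as channels, giving a 2D network
# limited 3D context. The window of 5 slices is an assumption for illustration.
import numpy as np

def make_25d_stack(volume: np.ndarray, index: int, half_window: int = 2) -> np.ndarray:
    """volume: (depth, H, W) CT volume; returns (2*half_window+1, H, W)."""
    depth = volume.shape[0]
    idxs = np.clip(np.arange(index - half_window, index + half_window + 1), 0, depth - 1)
    return volume[idxs]                      # neighbouring slices become channels

volume = np.random.rand(64, 256, 256).astype(np.float32)   # placeholder CT volume
sample = make_25d_stack(volume, index=10)                   # shape (5, 256, 256)
print(sample.shape)
```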
Collapse
Affiliation(s)
- Peng Sun
- School of Electronic Engineering and Automation, Guilin University of Electronic Technology, Guilin, 541004, Guangxi, China
| | - Zengnan Mo
- Center for Genomic and Personalized Medicine, Guangxi Medical University, Nanning, 530021, Guangxi, China
| | - Fangrong Hu
- School of Electronic Engineering and Automation, Guilin University of Electronic Technology, Guilin, 541004, Guangxi, China
| | - Xin Song
- School of Electronic Engineering and Automation, Guilin University of Electronic Technology, Guilin, 541004, Guangxi, China
| | - Taiping Mo
- School of Electronic Engineering and Automation, Guilin University of Electronic Technology, Guilin, 541004, Guangxi, China
| | - Bonan Yu
- School of Architecture and Transportation Engineering, Guilin University of Electronic Technology, Guilin, 541004, Guangxi, China.
| | - Yewei Zhang
- Hepatopancreatobiliary Center, The Second Affiliated Hospital of Nanjing Medical University, Nanjing, China
| | - Zhencheng Chen
- School of Electronic Engineering and Automation, Guilin University of Electronic Technology, Guilin, 541004, Guangxi, China.
| |
Collapse
|
20
|
Ivanov KO, Kazarinov AV, Dubrovin VN, Rozhentsov AA, Baev AA, Evdokimov AO. An Algorithm for Segmentation of Kidney Tissues on CT Images Based on a U-Net Convolutional Neural Network. BIOMEDICAL ENGINEERING 2023. [DOI: 10.1007/s10527-023-10249-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/18/2023]
|
21
|
Sun P, Yang S, Guan H, Mo T, Yu B, Chen Z. MTAN: A semi-supervised learning model for kidney tumor segmentation. JOURNAL OF X-RAY SCIENCE AND TECHNOLOGY 2023; 31:1295-1313. [PMID: 37718833 DOI: 10.3233/xst-230133] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/19/2023]
Abstract
BACKGROUND Medical image segmentation is crucial in disease diagnosis and treatment planning. Deep learning (DL) techniques have shown promise. However, optimizing DL models requires setting numerous parameters and demands substantial labeled datasets, which are labor-intensive to create. OBJECTIVE This study proposes a semi-supervised model that can utilize labeled and unlabeled data to accurately segment kidneys, tumors, and cysts on CT images, even with limited labeled samples. METHODS An end-to-end semi-supervised learning model named MTAN (Mean Teacher Attention N-Net) is designed to segment kidneys, tumors, and cysts on CT images. The MTAN model is built on the foundation of the AN-Net architecture, which functions dually as teacher and student. In its student role, AN-Net learns conventionally. In its teacher role, it generates targets and guides the student model in using them to enhance learning quality. The semi-supervised nature of MTAN allows it to effectively utilize unlabeled data for training, thus improving performance and reducing overfitting. RESULTS We evaluate the proposed model using two CT image datasets (KiTS19 and KiTS21). On the KiTS19 dataset, MTAN achieved segmentation results with average Dice scores of 0.975 for kidneys and 0.869 for tumors, respectively. Moreover, on the KiTS21 dataset, MTAN demonstrates its robustness, yielding average Dice scores of 0.977 for kidneys, 0.886 for masses, 0.861 for tumors, and 0.759 for cysts, respectively. CONCLUSION The proposed MTAN model presents a compelling solution for accurate medical image segmentation, particularly in scenarios where labeled data are scarce. By effectively utilizing unlabeled data through a semi-supervised learning approach, MTAN mitigates overfitting concerns and achieves high-quality segmentation results. The consistent performance across two distinct datasets, KiTS19 and KiTS21, underscores the model's reliability and potential for clinical reference.
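A generic mean-teacher training step of the kind MTAN builds on can be sketched as follows: the teacher is an exponential moving average (EMA) of the student, and unlabeled images contribute a consistency loss between teacher and student predictions. The toy network, noise level, decay, and loss weight are assumptions, not the AN-Net/MTAN code.

```python
# Generic mean-teacher sketch: the teacher is an EMA of the student, and
# unlabeled images add a consistency loss between teacher and student outputs.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

student = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(8, 1, 3, padding=1))
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

@torch.no_grad()
def ema_update(teacher, student, decay=0.99):
    # Teacher weights track the student via an exponential moving average.
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(decay).add_(s, alpha=1.0 - decay)

labeled_x = torch.randn(2, 1, 64, 64)
labeled_y = torch.randint(0, 2, (2, 1, 64, 64)).float()
unlabeled_x = torch.randn(2, 1, 64, 64)

sup_loss = F.binary_cross_entropy_with_logits(student(labeled_x), labeled_y)
with torch.no_grad():
    noisy = unlabeled_x + 0.05 * torch.randn_like(unlabeled_x)
    teacher_pred = torch.sigmoid(teacher(noisy))          # teacher-generated targets
cons_loss = F.mse_loss(torch.sigmoid(student(unlabeled_x)), teacher_pred)

loss = sup_loss + 0.1 * cons_loss                         # weight is illustrative
optimizer.zero_grad(); loss.backward(); optimizer.step()
ema_update(teacher, student)
```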
Collapse
Affiliation(s)
- Peng Sun
- School of Electronic Engineering and Automation Guilin University of Electronic Technology, Guilin, Guangxi, China
| | - Sijing Yang
- School of Life and Environmental Science Guilin University of Electronic Technology, Guilin, Guangxi, China
| | - Haolin Guan
- School of Electronic Engineering and Automation Guilin University of Electronic Technology, Guilin, Guangxi, China
| | - Taiping Mo
- School of Electronic Engineering and Automation Guilin University of Electronic Technology, Guilin, Guangxi, China
| | - Bonan Yu
- School of Architecture and Transportation Engineering Guilin University of Electronic Technology, Guilin, Guangxi, China
| | - Zhencheng Chen
- School of Electronic Engineering and Automation Guilin University of Electronic Technology, Guilin, Guangxi, China
- School of Life and Environmental Science Guilin University of Electronic Technology, Guilin, Guangxi, China
- Guangxi Colleges and Universities Key Laboratory of Biomedical Sensors and Intelligent Instruments, Guilin, Guangxi, China
- Guangxi Engineering Technology Research Center of Human Physiological Information Noninvasive Detection, Guilin, Guangxi, China
| |
Collapse
|
22
|
Deng X, Tian L, Zhang Y, Li A, Cai S, Zhou Y, Jie Y. Is histogram manipulation always beneficial when trying to improve model performance across devices? Experiments using a Meibomian gland segmentation model. Front Cell Dev Biol 2022; 10:1067914. [PMID: 36544900 PMCID: PMC9760981 DOI: 10.3389/fcell.2022.1067914] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2022] [Accepted: 11/14/2022] [Indexed: 12/12/2022] Open
Abstract
Meibomian gland dysfunction (MGD) is caused by abnormalities of the meibomian glands (MG) and is one of the causes of evaporative dry eye (DED). Precise MG segmentation is crucial for MGD-related DED diagnosis because the morphological parameters of MG are of importance. Deep learning has achieved state-of-the-art performance in medical image segmentation tasks, especially when training and test data come from the same distribution. But in practice, MG images can be acquired from different devices or hospitals. When testing image data from different distributions, deep learning models that have been trained on a specific distribution are prone to poor performance. Histogram specification (HS) has been reported as an effective method for contrast enhancement and for improving model performance on images of different modalities. Additionally, contrast limited adaptive histogram equalization (CLAHE) was used as a preprocessing method to enhance the contrast of MG images. In this study, we developed and evaluated an automatic CNN-based segmentation method for the eyelid area and the MG area and automatically calculated the MG loss rate. This method was evaluated on internal and external testing sets from two meibography devices. In addition, to assess whether HS and CLAHE improve segmentation results, we trained the network model using images from one device (internal testing set) and tested on images from another device (external testing set). A high DSC (0.84 for the MG region, 0.92 for the eyelid region) was obtained for the internal testing set, while a lower DSC (0.69-0.71 for the MG region, 0.89-0.91 for the eyelid region) was obtained for the external testing set. Also, HS and CLAHE showed no statistically significant improvement in the segmentation results of MG in this experiment.
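The two preprocessing operations examined here, CLAHE and histogram specification (histogram matching to a reference device's intensity distribution), can be applied with standard libraries as in the sketch below; the clip limit, tile size, and random placeholder images are assumptions for illustration.

```python
# Sketch of the two preprocessing steps discussed: CLAHE for contrast
# enhancement and histogram specification (matching) to a reference device's
# intensity distribution. Clip limit and tile size are illustrative choices.
import numpy as np
import cv2
from skimage import exposure

source = (np.random.rand(480, 640) * 255).astype(np.uint8)     # image from device A
reference = (np.random.rand(480, 640) * 255).astype(np.uint8)  # image from device B

# Contrast Limited Adaptive Histogram Equalization
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(source)

# Histogram specification: map source intensities onto the reference histogram
matched = exposure.match_histograms(source, reference)
```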
Collapse
Affiliation(s)
- Xianyu Deng
- Health Science Center, School of Biomedical Engineering, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen, China
| | - Lei Tian
- Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing Tongren Eye Center, Beijing Tongren Hospital, Beijing Institute of Ophthalmology, Capital Medical University, Beijing, China; Ophthalmology and Visual Sciences Key Laboratory, Beijing, China
| | - Yinghuai Zhang
- Health Science Center, School of Biomedical Engineering, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen, China
| | - Ao Li
- Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing Tongren Eye Center, Beijing Tongren Hospital, Beijing Institute of Ophthalmology, Capital Medical University, Beijing, China; Ophthalmology and Visual Sciences Key Laboratory, Beijing, China
| | - Shangyu Cai
- Health Science Center, School of Biomedical Engineering, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen, China
| | - Yongjin Zhou
- Health Science Center, School of Biomedical Engineering, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen, China. Correspondence: Yongjin Zhou; Ying Jie
| | - Ying Jie
- Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing Tongren Eye Center, Beijing Tongren Hospital, Beijing Institute of Ophthalmology, Capital Medical University, Beijing, China; Ophthalmology and Visual Sciences Key Laboratory, Beijing, China. Correspondence: Yongjin Zhou; Ying Jie
| |
Collapse
|
23
|
Ji Y, Cho H, Seon S, Lee K, Yoon H. A deep learning model for CT-based kidney volume determination in dogs and normal reference definition. Front Vet Sci 2022; 9:1011804. [PMID: 36387402 PMCID: PMC9649823 DOI: 10.3389/fvets.2022.1011804] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2022] [Accepted: 10/13/2022] [Indexed: 10/07/2023] Open
Abstract
Kidney volume is associated with renal function and the severity of renal diseases, thus accurate assessment of the kidney is important. Although the voxel count method is reported to be more accurate than several other methods, its laborious and time-consuming process is considered a main limitation. In need of a new technology that is fast and as accurate as the manual voxel count method, the aim of this study was to develop the first deep learning model for automatic kidney detection and volume estimation from computed tomography (CT) images of dogs. A total of 182,974 image slices from 386 CT scans of 211 dogs were used to develop this deep learning model. Owing to the variance in kidney size and location in dogs compared to humans, several processing methods were applied, together with an architecture based on UNEt TRansformers, which is known to show promising results for various medical image segmentation tasks, including this one. A combined loss function and data augmentation were applied to elevate the performance of the model. The Dice similarity coefficient (DSC), which shows the similarity between manual segmentation and automated segmentation by the deep learning model, was 0.915 ± 0.054 (mean ± SD) with post-processing. Kidney volume agreement analysis assessing the similarity between the kidney volume estimated by the manual voxel count method and by the deep learning model gave r = 0.960 (p < 0.001), a Lin's concordance correlation coefficient (CCC) of 0.95, and an intraclass correlation coefficient (ICC) of 0.975. Kidney volume was positively correlated with body weight (BW), and insignificantly correlated with body condition score (BCS), age, and sex. The relations between BW, BCS, and kidney volume were as follows: kidney volume = 3.701 × BW + 11.962 (R² = 0.74, p < 0.001) and kidney volume = 19.823 × BW/BCS index + 10.705 (R² = 0.72, p < 0.001). The deep learning model developed in this study is useful for the automatic estimation of kidney volume. Furthermore, the reference range established in this study for CT-based normal kidney volume considering BW and BCS can be helpful in the assessment of kidneys in dogs.
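A short sketch of the voxel-count volume estimate from a binary kidney mask, together with the body-weight regression reported above, is given below; the mask and voxel spacing are placeholders, not the study's acquisition settings.

```python
# Sketch of the voxel-count volume estimate from a binary kidney mask, plus the
# reported reference relation between kidney volume and body weight (BW).
# The voxel spacing below is a placeholder, not the study's acquisition setting.
import numpy as np

mask = np.zeros((120, 256, 256), dtype=bool)     # placeholder segmentation mask
mask[40:80, 100:140, 100:150] = True
spacing_mm = (1.0, 0.7, 0.7)                     # slice thickness, row, column (mm)

voxel_volume_ml = np.prod(spacing_mm) / 1000.0   # mm^3 -> mL (cm^3)
kidney_volume_ml = mask.sum() * voxel_volume_ml

# Reference relation reported in the abstract: volume = 3.701 * BW + 11.962
def expected_volume_ml(body_weight_kg: float) -> float:
    return 3.701 * body_weight_kg + 11.962

print(kidney_volume_ml, expected_volume_ml(10.0))
```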
Collapse
Affiliation(s)
- Yewon Ji
- Department of Veterinary Medical Imaging, College of Veterinary Medicine, Jeonbuk National University, Iksan, South Korea
| | | | | | - Kichang Lee
- Department of Veterinary Medical Imaging, College of Veterinary Medicine, Jeonbuk National University, Iksan, South Korea
| | - Hakyoung Yoon
- Department of Veterinary Medical Imaging, College of Veterinary Medicine, Jeonbuk National University, Iksan, South Korea
| |
Collapse
|
24
|
Gong Z, Song J, Guo W, Ju R, Zhao D, Tan W, Zhou W, Zhang G. Abdomen tissues segmentation from computed tomography images using deep learning and level set methods. MATHEMATICAL BIOSCIENCES AND ENGINEERING : MBE 2022; 19:14074-14085. [PMID: 36654080 DOI: 10.3934/mbe.2022655] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/17/2023]
Abstract
Accurate abdomen tissue segmentation is one of the crucial tasks in radiation therapy planning of related diseases. However, abdomen tissue segmentation (liver, kidney) is difficult because of the low contrast between abdomen tissues and their surrounding organs. In this paper, an attention-based deep learning method for automated abdomen tissue segmentation is proposed. In our method, image cropping is first applied to the original images. A U-Net model with an attention mechanism is then constructed to obtain the initial abdomen tissue segmentation. Finally, level set evolution consisting of three energy terms is used to optimize the initial segmentation. The proposed model is evaluated across 470 subsets. For liver segmentation, the mean Dice scores are 96.2% and 95.1% for the FLARE21 and LiTS datasets, respectively. For kidney segmentation, the mean Dice scores are 96.6% and 95.7% for the FLARE21 and LiTS datasets, respectively. Experimental evaluation shows that the proposed method can obtain better segmentation results than other methods.
Collapse
Affiliation(s)
- Zhaoxuan Gong
- Department of Computer Science and Information Engineering, Shenyang Aerospace University, Shenyang 110136, China
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang 110819, China
| | - Jing Song
- Department of Computer Science and Information Engineering, Shenyang Aerospace University, Shenyang 110136, China
| | - Wei Guo
- Department of Computer Science and Information Engineering, Shenyang Aerospace University, Shenyang 110136, China
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang 110819, China
| | - Ronghui Ju
- Liaoning provincial people's hospital, Shenyang 110067, China
| | - Dazhe Zhao
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang 110819, China
| | - Wenjun Tan
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang 110819, China
| | - Wei Zhou
- Department of Computer Science and Information Engineering, Shenyang Aerospace University, Shenyang 110136, China
| | - Guodong Zhang
- Department of Computer Science and Information Engineering, Shenyang Aerospace University, Shenyang 110136, China
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang 110819, China
| |
Collapse
|
25
|
Rani G, Thakkar P, Verma A, Mehta V, Chavan R, Dhaka VS, Sharma RK, Vocaturo E, Zumpano E. KUB-UNet: Segmentation of Organs of Urinary System from a KUB X-ray Image. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 224:107031. [PMID: 35878485 DOI: 10.1016/j.cmpb.2022.107031] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/09/2022] [Revised: 07/01/2022] [Accepted: 07/17/2022] [Indexed: 06/15/2023]
Abstract
PURPOSE The alarming increase in diseases of the urinary system is a cause of concern for the populace and health experts. The traditional techniques used for the diagnosis of these diseases are inconvenient for patients, require high cost, and involve additional waiting time for generating reports. The objective of this research is to utilize the proven potential of Artificial Intelligence for organ segmentation. Correct identification and segmentation of the region of interest in a medical image are important to enhance the accuracy of disease diagnosis. They also improve the reliability of the system by ensuring the extraction of features only from the region of interest. METHOD Many research works have been proposed in the literature for the segmentation of organs using MRI, CT scans, and ultrasound images. However, the segmentation of kidneys, ureters, and bladder from KUB X-ray images remains underexplored. Also, there is a lack of validated datasets comprising KUB X-ray images. These challenges motivated the authors to collaborate with a team of radiologists and gather an anonymized, validated dataset that can be used to automate the diagnosis of diseases of the urinary system. Further, they proposed a KUB-UNet model for semantic segmentation of the urinary system. RESULTS The proposed KUB-UNet model reported the highest accuracy of 99.18% for segmentation of the organs of the urinary system. CONCLUSION The comparative analysis of its performance with state-of-the-art models and the validation of results by radiology experts prove its reliability, robustness, and supremacy. This segmentation phase may prove useful in extracting features only from the region of interest and improving the accuracy of diagnosis.
Collapse
Affiliation(s)
- Geeta Rani
- Department of Computer and Communication Engineering, Manipal University Jaipur, Jaipur, India, 303007.
| | - Priyam Thakkar
- Department of Computer and Communication Engineering, Manipal University Jaipur, Jaipur, India, 303007.
| | - Akshat Verma
- Department of Computer and Communication Engineering, Manipal University Jaipur, Jaipur, India, 303007.
| | - Vanshika Mehta
- Department of Computer and Communication Engineering, Manipal University Jaipur, Jaipur, India, 303007.
| | - Rugved Chavan
- Department of Computer and Communication Engineering, Manipal University Jaipur, Jaipur, India, 303007.
| | - Vijaypal Singh Dhaka
- Department of Computer and Communication Engineering, Manipal University Jaipur, Jaipur, India, 303007.
| | | | - Eugenio Vocaturo
- Department of Computer Engineering, Modeling, Electronics and Systems (DIMES), University of Calabria, Italy; CNR NANOTEC, National Research Council, Rende, Italy.
| | - Ester Zumpano
- Department of Computer Engineering, Modeling, Electronics and Systems (DIMES), University of Calabria, Italy; CNR NANOTEC, National Research Council, Rende, Italy.
| |
Collapse
|
26
|
|
27
|
Pandey M, Gupta A. Tumorous kidney segmentation in abdominal CT images using active contour and 3D-UNet. Ir J Med Sci 2022:10.1007/s11845-022-03113-8. [PMID: 35930139 DOI: 10.1007/s11845-022-03113-8] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2022] [Accepted: 07/20/2022] [Indexed: 11/29/2022]
Abstract
BACKGROUND AND PURPOSE The precise segmentation of the kidneys in computed tomography (CT) images is vital in urology for diagnosis, treatment, and surgical planning. Medical experts can get assistance through segmentation, as it provides information about kidney malformations in terms of shape and size. Manual segmentation is slow, tedious, and not reproducible. An automatic computer-aided system is a solution to this problem. This paper presents an automated kidney segmentation technique based on active contours and deep learning. MATERIALS AND METHODS In this work, 210 CTs from the KiTS19 repository were used. The dataset was divided into a training set (168 CTs), a test set (21 CTs), and a validation set (21 CTs). The suggested technique broadly comprises four phases: (1) extraction of kidney regions using active contours, (2) preprocessing, (3) kidney segmentation using 3D U-Net, and (4) reconstruction of the segmented CT images. RESULTS The proposed segmentation method achieved a Dice score of 97.62%, a Jaccard index of 95.74%, an average sensitivity of 98.28%, a specificity of 99.95%, and an accuracy of 99.93% on the validation dataset. CONCLUSION The proposed method can efficiently solve the problem of tumorous kidney segmentation in CT images by using active contours and deep learning. The active contour was used to select kidney regions, and 3D U-Net was used to precisely segment the tumorous kidney.
Collapse
Affiliation(s)
- Mohit Pandey
- School of Computer Science & Engineering, Shri Mata Vaishno Devi University, Kakryal, Katra-182320, Jammu & Kashmir, India
| | - Abhishek Gupta
- School of Computer Science & Engineering, Shri Mata Vaishno Devi University, Kakryal, Katra-182320, Jammu & Kashmir, India.
| |
Collapse
|
28
|
|
29
|
Hsiao CH, Lin PC, Chung LA, Lin FYS, Yang FJ, Yang SY, Wu CH, Huang Y, Sun TL. A deep learning-based precision and automatic kidney segmentation system using efficient feature pyramid networks in computed tomography images. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 221:106854. [PMID: 35567864 DOI: 10.1016/j.cmpb.2022.106854] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/24/2021] [Revised: 11/07/2021] [Accepted: 05/02/2022] [Indexed: 06/15/2023]
Abstract
This paper proposes an encoder-decoder architecture for kidney segmentation. A hyperparameter optimization process is implemented, including the development of a model architecture, the selection of a windowing method and a loss function, and data augmentation. The model consists of EfficientNet-B5 as the encoder and a feature pyramid network as the decoder, which yields the best performance with a Dice score of 0.969 on the 2019 Kidney and Kidney Tumor Segmentation Challenge dataset. The proposed model is tested with different voxel spacings, anatomical planes, and kidney and tumor volumes. Moreover, case studies are conducted to analyze segmentation outliers. Finally, five-fold cross-validation and the 3D-IRCAD-01 dataset are used to evaluate the developed model in terms of the following evaluation metrics: the Dice score, recall, precision, and the Intersection over Union score. This paper demonstrates a new development and application of artificial intelligence algorithms for image analysis and interpretation. Overall, our experimental results show that the proposed kidney segmentation solution for CT images can be effectively applied to clinical needs to assist surgeons in surgical planning. It enables the calculation of total kidney volume for kidney function estimation in ADPKD and supports radiologists or doctors in disease diagnosis and in monitoring disease progression.
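The encoder-decoder pairing described (an EfficientNet-B5 encoder with a feature pyramid network decoder) can be assembled with the segmentation_models_pytorch library roughly as sketched below; the single input channel, loss choice, and untrained weights are illustrative assumptions rather than the authors' configuration.

```python
# Illustrative assembly of the described architecture: an EfficientNet-B5
# encoder with a feature pyramid network (FPN) decoder, built here with the
# segmentation_models_pytorch library. Channels and loss choice are assumptions.
import torch
import segmentation_models_pytorch as smp

model = smp.FPN(
    encoder_name="efficientnet-b5",
    encoder_weights=None,      # or "imagenet" when pretrained weights are wanted
    in_channels=1,             # single-channel CT slices
    classes=1,                 # binary kidney mask
)

dice_loss = smp.losses.DiceLoss(mode="binary")
x = torch.randn(2, 1, 256, 256)
y = torch.randint(0, 2, (2, 1, 256, 256)).float()
loss = dice_loss(model(x), y)
```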
Collapse
Affiliation(s)
- Chiu-Han Hsiao
- Research Center for Information Technology Innovation, Academia Sinica, Taipei City, (R.O.C.) Taiwan
| | - Ping-Cherng Lin
- Research Center for Information Technology Innovation, Academia Sinica, Taipei City, (R.O.C.) Taiwan
| | - Li-An Chung
- Research Center for Information Technology Innovation, Academia Sinica, Taipei City, (R.O.C.) Taiwan
| | - Frank Yeong-Sung Lin
- Department of Information Management, National Taiwan University, Taipei City, (R.O.C.) Taiwan
| | - Feng-Jung Yang
- Department of Internal Medicine, National Taiwan University Hospital Yunlin Branch, Douliu City, Yunlin County; School of Medicine, College of Medicine, National Taiwan University, Taipei, (R.O.C.) Taiwan.
| | - Shao-Yu Yang
- Department of Internal Medicine, National Taiwan University Hospital, Taipei City, (R.O.C.) Taiwan
| | - Chih-Horng Wu
- Department of Radiology, National Taiwan University Hospital, Taipei City, (R.O.C.) Taiwan
| | - Yennun Huang
- Research Center for Information Technology Innovation, Academia Sinica, Taipei City, (R.O.C.) Taiwan
| | - Tzu-Lung Sun
- Research Center for Information Technology Innovation, Academia Sinica, Taipei City, (R.O.C.) Taiwan
| |
Collapse
|
30
|
Hsiao CH, Sun TL, Lin PC, Peng TY, Chen YH, Cheng CY, Yang FJ, Yang SY, Wu CH, Lin FYS, Huang Y. A deep learning-based precision volume calculation approach for kidney and tumor segmentation on computed tomography images. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 221:106861. [PMID: 35588664 DOI: 10.1016/j.cmpb.2022.106861] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/28/2021] [Revised: 03/24/2022] [Accepted: 05/07/2022] [Indexed: 06/15/2023]
Abstract
Previously, doctors interpreted computed tomography (CT) images based on their experience in diagnosing kidney diseases. However, with the rapid increase in the number of CT images, such interpretation required considerable time and effort and produced inconsistent results. Several novel neural network models have been proposed to automatically identify kidney or tumor areas in CT images to solve this problem. In most of these models, only the neural network structure was modified to improve accuracy. However, data pre-processing is also a crucial step in improving the results. This study systematically discusses the necessary pre-processing methods to apply before feeding medical images into a neural network model. The experimental results show that the proposed pre-processing methods and models significantly improve the accuracy compared with the case without data pre-processing. Specifically, the Dice score was improved from 0.9436 to 0.9648 for kidney segmentation and reached 0.7294 for all types of tumor detection. Based on the proposed medical image processing methods and deep learning models, the performance is suitable for clinical applications with modest computational resources, enabling cost-efficient and accurate automatic kidney volume calculation and tumor detection.
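One representative CT pre-processing step of the kind this study examines, Hounsfield-unit windowing followed by rescaling to [0, 1], is sketched below; the soft-tissue window centre and width are generic values, not necessarily the paper's settings.

```python
# Minimal example of a common CT pre-processing step: clip Hounsfield units
# (HU) to a window and rescale to [0, 1]. The window centre/width below are
# generic soft-tissue values, not the paper's specific settings.
import numpy as np

def window_ct(hu: np.ndarray, center: float = 40.0, width: float = 400.0) -> np.ndarray:
    lo, hi = center - width / 2.0, center + width / 2.0
    clipped = np.clip(hu, lo, hi)
    return (clipped - lo) / (hi - lo)

ct_slice_hu = np.random.uniform(-1000, 1000, size=(512, 512)).astype(np.float32)
normalized = window_ct(ct_slice_hu)
```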
Collapse
Affiliation(s)
- Chiu-Han Hsiao
- Research Center for Information Technology Innovation, Academia Sinica, Taipei City, Taiwan, ROC
| | - Tzu-Lung Sun
- Department of Information Management, National Taiwan University, Taipei City, Taiwan, ROC
| | - Ping-Cherng Lin
- Research Center for Information Technology Innovation, Academia Sinica, Taipei City, Taiwan, ROC
| | - Tsung-Yu Peng
- Department of Information Management, National Taiwan University, Taipei City, Taiwan, ROC
| | - Yu-Hsin Chen
- Department of Information Management, National Taiwan University, Taipei City, Taiwan, ROC
| | - Chieh-Yun Cheng
- Department of Information Management, National Taiwan University, Taipei City, Taiwan, ROC
| | - Feng-Jung Yang
- Department of Internal Medicine, National Taiwan University Hospital Yunlin Branch, Douliu City, Yunlin County; School of Medicine, College of Medicine, National Taiwan University, Taipei, Taiwan, ROC.
| | - Shao-Yu Yang
- Department of Internal Medicine, National Taiwan University Hospital, Taipei City, Taiwan, ROC
| | - Chih-Horng Wu
- Department of Medical Imaging, National Taiwan University Hospital, Taipei City, Taiwan, ROC
| | - Frank Yeong-Sung Lin
- Department of Information Management, National Taiwan University, Taipei City, Taiwan, ROC
| | - Yennun Huang
- Research Center for Information Technology Innovation, Academia Sinica, Taipei City, Taiwan, ROC
| |
Collapse
|
31
|
Sun P, Mo Z, Hu F, Liu F, Mo T, Zhang Y, Chen Z. Kidney Tumor Segmentation Based on FR2PAttU-Net Model. Front Oncol 2022; 12:853281. [PMID: 35372025 PMCID: PMC8968695 DOI: 10.3389/fonc.2022.853281] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2022] [Accepted: 02/17/2022] [Indexed: 11/14/2022] Open
Abstract
The incidence rate of kidney tumors increases year by year, especially for incidental small tumors. It is challenging for doctors to segment kidney tumors from kidney CT images. Therefore, this paper proposes a deep learning model based on FR2PAttU-Net to help doctors process many CT images quickly and efficiently and save medical resources. FR2PAttU-Net is not a new CNN structure but focuses on improving the segmentation of kidney tumors, even when the tumors are not clearly visible. Firstly, we use the R2Att network in the "U" structure of the original U-Net and add parallel convolution to construct the FR2PAttU-Net model, increasing the width of the model, improving its adaptability to image features at different scales, and avoiding the failure of deeper networks to learn valuable features. Then, we apply a fuzzy-set enhancement algorithm to the input images so that more prominent features are available to the model. Finally, we used the KiTS19 dataset, taking kidney tumor size as the category criterion and augmenting the small-sample classes to balance the dataset. We tested the segmentation performance of the model at different convolution widths and depths and obtained a kidney Dice of 0.948 and a tumor Dice of 0.911, for a composite score of 0.930, showing a good segmentation effect.
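The parallel-convolution idea, branches with different kernel sizes running side by side and fused to widen the network, can be sketched as a small PyTorch block; this is an illustrative module under assumed kernel sizes, not the FR2PAttU-Net implementation.

```python
# Sketch of a parallel-convolution block in the spirit described: branches with
# different kernel sizes run side by side and their outputs are fused, widening
# the model so it can respond to features at several scales.
import torch
import torch.nn as nn

class ParallelConvBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.branch3 = nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                                     nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
        self.branch5 = nn.Sequential(nn.Conv2d(in_ch, out_ch, 5, padding=2),
                                     nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
        self.fuse = nn.Conv2d(2 * out_ch, out_ch, 1)   # 1x1 conv merges the branches

    def forward(self, x):
        return self.fuse(torch.cat([self.branch3(x), self.branch5(x)], dim=1))

block = ParallelConvBlock(1, 16)
out = block(torch.randn(1, 1, 128, 128))    # -> (1, 16, 128, 128)
```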
Collapse
Affiliation(s)
- Peng Sun
- School of Electronic Engineering and Automation, Guilin University of Electronic Technology, Guilin, China
| | - Zengnan Mo
- Center for Genomic and Personalized Medicine, Guangxi Medical University, Nanning, China
| | - Fangrong Hu
- School of Electronic Engineering and Automation, Guilin University of Electronic Technology, Guilin, China
| | - Fang Liu
- College of Life and Environment Science, Guilin University of Electronic Technology, Guilin, China
| | - Taiping Mo
- School of Electronic Engineering and Automation, Guilin University of Electronic Technology, Guilin, China
| | - Yewei Zhang
- Hepatopancreatobiliary Center, The Second Affiliated Hospital of Nanjing Medical University, Nanjing, China
| | - Zhencheng Chen
- School of Electronic Engineering and Automation, Guilin University of Electronic Technology, Guilin, China
| |
Collapse
|
32
|
Abdelrahman A, Viriri S. Kidney Tumor Semantic Segmentation Using Deep Learning: A Survey of State-of-the-Art. J Imaging 2022; 8:55. [PMID: 35324610 PMCID: PMC8954467 DOI: 10.3390/jimaging8030055] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2021] [Revised: 01/26/2022] [Accepted: 02/10/2022] [Indexed: 01/27/2023] Open
Abstract
Cure rates for kidney cancer vary according to stage and grade; hence, accurate diagnostic procedures for early detection and diagnosis are crucial. Difficulties with manual segmentation have necessitated the use of deep learning models to assist clinicians in effectively recognizing and segmenting tumors. Deep learning (DL), particularly convolutional neural networks, has produced outstanding success in classifying and segmenting images. At the same time, researchers in the field of medical image segmentation employ DL approaches to solve problems such as tumor segmentation, cell segmentation, and organ segmentation. Semantic segmentation of tumors is critical in radiation and therapeutic practice. This article discusses current advances in DL-based kidney tumor segmentation systems. We discuss the various types of medical images, segmentation techniques, and assessment criteria for segmentation outcomes in kidney tumor segmentation, highlighting their building blocks and various strategies.
Collapse
Affiliation(s)
| | - Serestina Viriri
- School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Durban 4000, South Africa;
| |
Collapse
|
33
|
Araújo JDL, da Cruz LB, Diniz JOB, Ferreira JL, Silva AC, de Paiva AC, Gattass M. Liver segmentation from computed tomography images using cascade deep learning. Comput Biol Med 2022; 140:105095. [PMID: 34902610 DOI: 10.1016/j.compbiomed.2021.105095] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2021] [Revised: 11/17/2021] [Accepted: 11/27/2021] [Indexed: 12/18/2022]
Abstract
BACKGROUND Liver segmentation is a fundamental step in the treatment planning and diagnosis of liver cancer. However, manual segmentation of the liver is time-consuming because of the large number of slices and the subjectiveness associated with the specialist's experience, which can lead to segmentation errors. Thus, the segmentation process can be automated using computational methods for better time efficiency and accuracy. However, automatic liver segmentation is a challenging task, as the liver can vary in shape, have ill-defined borders, and contain lesions, all of which affect its appearance. We aim to propose an automatic method for liver segmentation using computed tomography (CT) images. METHODS The proposed method, based on deep convolutional neural network models and image processing techniques, comprises four main steps: (1) image preprocessing, (2) initial segmentation, (3) reconstruction, and (4) final segmentation. RESULTS We evaluated the proposed method using 131 CT images from the LiTS image base. An average sensitivity of 95.45%, an average specificity of 99.86%, an average Dice coefficient of 95.64%, an average volumetric overlap error (VOE) of 8.28%, an average relative volume difference (RVD) of -0.41%, and an average Hausdorff distance (HD) of 26.60 mm were achieved. CONCLUSIONS This study demonstrates that liver segmentation, even when lesions are present in CT images, can be efficiently performed using a cascade approach that includes a reconstruction step based on deep convolutional neural networks.
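One post-processing step commonly used in cascade pipelines of this kind, stacking per-slice predictions into a volume and keeping only the largest 3D connected component, is sketched below as a generic illustration; it is not necessarily the paper's exact reconstruction procedure.

```python
# Generic illustration of a post-processing step often used in cascade
# pipelines: stack per-slice predictions into a 3D volume and keep only the
# largest connected component as the organ mask.
import numpy as np
from scipy import ndimage

slice_preds = [np.random.rand(256, 256) > 0.8 for _ in range(40)]  # per-slice masks
volume = np.stack(slice_preds, axis=0)                             # (depth, H, W)

labeled, n_components = ndimage.label(volume)
if n_components > 0:
    sizes = ndimage.sum(volume, labeled, index=range(1, n_components + 1))
    largest = int(np.argmax(sizes)) + 1
    cleaned = labeled == largest            # final reconstructed organ mask
```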
Collapse
Affiliation(s)
- José Denes Lima Araújo
- Applied Computing Group (NCA - UFMA), Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, 65 085-580, São Luís, MA, Brazil.
| | - Luana Batista da Cruz
- Applied Computing Group (NCA - UFMA), Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, 65 085-580, São Luís, MA, Brazil.
| | - João Otávio Bandeira Diniz
- Applied Computing Group (NCA - UFMA), Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, 65 085-580, São Luís, MA, Brazil; Federal Institute of Maranhão, BR-226, SN, Campus Grajaú, Vila Nova, 65 940-000, Grajaú, MA, Brazil.
| | - Jonnison Lima Ferreira
- Applied Computing Group (NCA - UFMA), Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, 65 085-580, São Luís, MA, Brazil; Federal Institute of Amazonas, Rua Santos Dumont, SN, Campus Tabatinga, Vila Verde, 69 640-000, Tabatinga, AM, Brazil.
| | - Aristófanes Corrêa Silva
- Applied Computing Group (NCA - UFMA), Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, 65 085-580, São Luís, MA, Brazil.
| | - Anselmo Cardoso de Paiva
- Applied Computing Group (NCA - UFMA), Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, 65 085-580, São Luís, MA, Brazil.
| | - Marcelo Gattass
- Pontifical Catholic University of Rio de Janeiro, R. São Vicente, 225, Gávea, 22 453-900, Rio de Janeiro, RJ, Brazil.
| |
Collapse
|
34
|
Bandeira Diniz JO, Ferreira JL, Bandeira Diniz PH, Silva AC, Paiva AC. A deep learning method with residual blocks for automatic spinal cord segmentation in planning CT. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103074] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|
35
|
Dias Júnior DA, da Cruz LB, Bandeira Diniz JO, França da Silva GL, Junior GB, Silva AC, de Paiva AC, Nunes RA, Gattass M. Automatic method for classifying COVID-19 patients based on chest X-ray images, using deep features and PSO-optimized XGBoost. EXPERT SYSTEMS WITH APPLICATIONS 2021; 183:115452. [PMID: 34177133 PMCID: PMC8218245 DOI: 10.1016/j.eswa.2021.115452] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/30/2020] [Revised: 02/18/2021] [Accepted: 06/14/2021] [Indexed: 05/05/2023]
Abstract
The COVID-19 pandemic, which originated in December 2019 in the city of Wuhan, China, continues to have a devastating effect on the health and well-being of the global population. Currently, approximately 8.8 million people have already been infected and more than 465,740 people have died worldwide. An important step in combating COVID-19 is the screening of infected patients using chest X-ray (CXR) images. However, this task is extremely time-consuming and prone to variability among specialists owing to its heterogeneity. Therefore, the present study aims to assist specialists in identifying COVID-19 patients from their chest radiographs using automated computational techniques. The proposed method has four main steps: (1) the acquisition of the dataset from two public databases; (2) the standardization of images through preprocessing; (3) the extraction of features using a deep features-based approach implemented through the networks VGG19, Inception-v3, and ResNet50; (4) the classification of images into COVID-19 and non-COVID-19 groups using eXtreme Gradient Boosting (XGBoost) optimized by particle swarm optimization (PSO). In the best-case scenario, the proposed method achieved an accuracy of 98.71%, a precision of 98.89%, a recall of 99.63%, and an F1-score of 99.25%. In our study, we demonstrated that the problem of classifying CXR images of patients under COVID-19 and non-COVID-19 conditions can be solved efficiently by combining a deep features-based approach with a robust classifier (XGBoost) optimized by an evolutionary algorithm (PSO). The proposed method offers considerable advantages for clinicians seeking to tackle the current COVID-19 pandemic.
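The deep-features-plus-boosting idea can be sketched as below: pooled CNN embeddings (represented here by a synthetic matrix) are fed to an XGBoost classifier. The fixed hyperparameters stand in for values that the paper selects with particle swarm optimization, and the data are synthetic.

```python
# Sketch of the deep-features-plus-gradient-boosting idea: embeddings extracted
# from pretrained CNNs (represented here by a synthetic feature matrix) are fed
# to an XGBoost classifier. PSO tuning is only indicated, not implemented.
import numpy as np
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(42)
deep_features = rng.normal(size=(300, 512))        # e.g. pooled ResNet50/VGG19 features
labels = rng.integers(0, 2, size=300)              # COVID-19 vs. non-COVID-19

X_tr, X_te, y_tr, y_te = train_test_split(deep_features, labels,
                                          test_size=0.2, random_state=42)

# In the paper these hyperparameters are chosen by particle swarm optimization;
# here they are fixed placeholder values.
clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1,
                    subsample=0.8, eval_metric="logloss")
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```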
Collapse
Affiliation(s)
- Domingos Alves Dias Júnior
- Federal University of Maranhão Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, 65085-580 São Luís, MA, Brazil
| | - Luana Batista da Cruz
- Federal University of Maranhão Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, 65085-580 São Luís, MA, Brazil
| | - João Otávio Bandeira Diniz
- Federal University of Maranhão Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, 65085-580 São Luís, MA, Brazil
- Federal Institute of Maranhão BR-226, SN, Campus Grajaú, Vila Nova 65940-00, Grajaú, MA, Brazil
| | | | - Geraldo Braz Junior
- Federal University of Maranhão Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, 65085-580 São Luís, MA, Brazil
| | - Aristófanes Corrêa Silva
- Federal University of Maranhão Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, 65085-580 São Luís, MA, Brazil
| | - Anselmo Cardoso de Paiva
- Federal University of Maranhão Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, 65085-580 São Luís, MA, Brazil
| | - Rodolfo Acatauassú Nunes
- Rio de Janeiro State University, Boulevard 28 de Setembro, 77, Vila Isabel 20551-030, Rio de Janeiro, RJ, Brazil
| | - Marcelo Gattass
- Pontifical Catholic University of Rio de Janeiro, R. São Vicente, 225, Gávea, 22453-900, Rio de Janeiro, RJ, Brazil
| |
Collapse
|
36
|
Pandey M, Gupta A. A systematic review of the automatic kidney segmentation methods in abdominal images. Biocybern Biomed Eng 2021. [DOI: 10.1016/j.bbe.2021.10.006] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/26/2023]
|
37
|
Diniz JOB, Quintanilha DBP, Santos Neto AC, da Silva GLF, Ferreira JL, Netto SMB, Araújo JDL, Da Cruz LB, Silva TFB, da S. Martins CM, Ferreira MM, Rego VG, Boaro JMC, Cipriano CLS, Silva AC, de Paiva AC, Junior GB, de Almeida JDS, Nunes RA, Mogami R, Gattass M. Segmentation and quantification of COVID-19 infections in CT using pulmonary vessels extraction and deep learning. MULTIMEDIA TOOLS AND APPLICATIONS 2021; 80:29367-29399. [PMID: 34188605 PMCID: PMC8224997 DOI: 10.1007/s11042-021-11153-y] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/16/2020] [Revised: 05/26/2021] [Accepted: 06/03/2021] [Indexed: 05/07/2023]
Abstract
At the end of 2019, the World Health Organization (WHO) reported pneumonia that started in Wuhan, China, as a global emergency problem. Researchers quickly advanced efforts to understand COVID-19 and sought solutions for the front-line professionals fighting this fatal disease. One of the tools to aid in the detection, diagnosis, treatment, and prevention of this disease is computed tomography (CT). CT images provide valuable information on how this new disease affects the lungs of patients. However, the analysis of these images is not trivial, especially when researchers are searching for quick solutions. Detecting and evaluating this disease can be tiring, time-consuming, and susceptible to errors. Thus, in this study, we aim to automatically segment infections caused by COVID-19 and provide quantitative measures of these infections to specialists, thus serving as a support tool. We use a database of real clinical cases from Pedro Ernesto University Hospital of the State of Rio de Janeiro, Brazil. The method involves five steps: lung segmentation, segmentation and extraction of pulmonary vessels, infection segmentation, infection classification, and infection quantification. For the lung segmentation and infection segmentation tasks, we propose modifications to the traditional U-Net, including batch normalization, leaky ReLU, dropout, and residual block techniques, and name the result Residual U-Net. The proposed method yields an average Dice value of 77.1% and an average specificity of 99.76%. For the quantification of infectious findings, the proposed method achieves results comparable to those of specialists, and no measure presented p < 0.05 in the paired t-test. The results demonstrate the potential of the proposed method as a tool to help medical professionals combat COVID-19.
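A residual block of the kind described for the modified U-Net, convolutions with batch normalization, leaky ReLU, and dropout wrapped in a skip connection, can be sketched as below; the exact layer ordering and dropout rate are assumptions, not the authors' Residual U-Net code.

```python
# Sketch of a residual block combining the techniques listed in the abstract:
# batch normalization, leaky ReLU and dropout around a skip connection.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, p_drop: float = 0.2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch),
            nn.LeakyReLU(0.1, inplace=True), nn.Dropout2d(p_drop),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch),
        )
        self.skip = (nn.Identity() if in_ch == out_ch
                     else nn.Conv2d(in_ch, out_ch, 1))   # match channels for the sum
        self.act = nn.LeakyReLU(0.1, inplace=True)

    def forward(self, x):
        return self.act(self.conv(x) + self.skip(x))

block = ResidualBlock(1, 32)
out = block(torch.randn(1, 1, 128, 128))    # -> (1, 32, 128, 128)
```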
Collapse
Affiliation(s)
- João O. B. Diniz
- Federal Institute of Maranhão, BR-226, SN, Campus Grajaú, Vila Nova, Grajaú, MA 65940-00 Brazil
- Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, São Luís, MA 65085-580 Brazil
| | - Darlan B. P. Quintanilha
- Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, São Luís, MA 65085-580 Brazil
| | - Antonino C. Santos Neto
- Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, São Luís, MA 65085-580 Brazil
| | - Giovanni L. F. da Silva
- Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, São Luís, MA 65085-580 Brazil
- Dom Bosco Higher Education Unit (UNDB), Av. Colares Moreira, 443 - Jardim Renascença, São Luís, MA 65075-441 Brazil
| | - Jonnison L. Ferreira
- Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, São Luís, MA 65085-580 Brazil
- Federal Institute of Amazonas (IFAM), BR-226, SN, Campus Grajaú, Vila Nova, Grajaú, MA 65940-00 Brazil
| | - Stelmo M. B. Netto
- Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, São Luís, MA 65085-580 Brazil
| | - José D. L. Araújo
- Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, São Luís, MA 65085-580 Brazil
| | - Luana B. Da Cruz
- Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, São Luís, MA 65085-580 Brazil
| | - Thamila F. B. Silva
- Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, São Luís, MA 65085-580 Brazil
| | - Caio M. da S. Martins
- Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, São Luís, MA 65085-580 Brazil
| | - Marcos M. Ferreira
- Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, São Luís, MA 65085-580 Brazil
| | - Venicius G. Rego
- Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, São Luís, MA 65085-580 Brazil
| | - José M. C. Boaro
- Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, São Luís, MA 65085-580 Brazil
| | - Carolina L. S. Cipriano
- Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, São Luís, MA 65085-580 Brazil
| | - Aristófanes C. Silva
- Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, São Luís, MA 65085-580 Brazil
| | - Anselmo C. de Paiva
- Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, São Luís, MA 65085-580 Brazil
| | - Geraldo Braz Junior
- Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, São Luís, MA 65085-580 Brazil
| | - João D. S. de Almeida
- Federal University of Maranhão, Av. dos Portugueses, SN, Campus do Bacanga, Bacanga, São Luís, MA 65085-580 Brazil
| | - Rodolfo A. Nunes
- Rio de Janeiro State University, Boulevard 28 de Setembro, 77, Vila Isabel, Rio de Janeiro, RJ 20551-030 Brazil
| | - Roberto Mogami
- Rio de Janeiro State University, Boulevard 28 de Setembro, 77, Vila Isabel, Rio de Janeiro, RJ 20551-030 Brazil
| | - M. Gattass
- Pontifical Catholic University of Rio de Janeiro, R. São Vicente, 225, Gávea, Rio de Janeiro, RJ 22453-900 Brazil
| |
Collapse
|
38
|
Detection of deep myometrial invasion in endometrial cancer MR imaging based on multi-feature fusion and probabilistic support vector machine ensemble. Comput Biol Med 2021; 134:104487. [PMID: 34022489 DOI: 10.1016/j.compbiomed.2021.104487] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2020] [Revised: 04/25/2021] [Accepted: 05/07/2021] [Indexed: 11/21/2022]
Abstract
The depth of myometrial invasion affects the treatment and prognosis of patients with endometrial cancer (EC) and is conventionally evaluated using MR imaging (MRI). However, only a few computer-aided diagnosis methods have been reported for identifying deep myometrial invasion (DMI) using MRI, and these existing methods exhibit relatively unsatisfactory sensitivity and specificity. This study proposes a novel computerized method to facilitate the accurate detection of DMI on MRI. The method requires only the corpus uteri region provided by humans or computers instead of the tumor region. We also propose a geometric feature called LS to describe the irregularity of the tissue structure inside the corpus uteri triggered by EC, which has not been leveraged in DMI prediction models in other studies. Texture features are extracted and then automatically selected by recursive feature elimination. Utilizing a feature fusion strategy of strong and weak features devised in this study, multiple probabilistic support vector machines incorporate LS and texture features and are then merged to form the ensemble model EPSVM. The model performance is evaluated via leave-one-out cross-validation. We make the following comparisons: EPSVM versus commonly used classifiers such as random forest, logistic regression, and naive Bayes; and EPSVM versus models using LS or texture features alone. The results show that EPSVM attains an accuracy, sensitivity, specificity, and F1 score of 93.7%, 94.7%, 93.3%, and 87.8%, all of which are higher than those of the commonly used classifiers and of the models using LS or texture features alone. Compared with the methods in existing studies, EPSVM exhibits high performance in terms of both sensitivity and specificity. Moreover, LS alone can achieve an accuracy, sensitivity, and specificity of 89.9%, 89.5%, and 90.0%, respectively. Thus, the devised geometric feature LS is significant for DMI detection, and the fusion of LS and texture features in the proposed EPSVM provides more reliable prediction. Computer-aided classification based on the proposed method can assist radiologists in accurately identifying DMI on MRI.
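The general recipe, recursive feature elimination over texture features, probability-calibrated SVMs on separate feature views, and fusion of their class probabilities, can be sketched with scikit-learn as below; the synthetic data and simple averaging fusion are assumptions and simplify the study's EPSVM.

```python
# Sketch of the general recipe described: recursive feature elimination (RFE)
# over texture features, probabilistic SVMs on separate feature views, and a
# simple average of their class probabilities as the ensemble decision.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

rng = np.random.default_rng(1)
texture = rng.normal(size=(120, 40))          # texture features per patient
ls_feature = rng.normal(size=(120, 1))        # geometric "LS"-style feature
y = rng.integers(0, 2, size=120)              # DMI present / absent

# Select a subset of texture features with RFE around a linear SVM.
selector = RFE(SVC(kernel="linear"), n_features_to_select=10).fit(texture, y)
texture_sel = selector.transform(texture)

# Two probabilistic SVMs on different feature views, fused by averaging.
svm_texture = SVC(kernel="rbf", probability=True).fit(texture_sel, y)
svm_ls = SVC(kernel="rbf", probability=True).fit(ls_feature, y)

proba = (svm_texture.predict_proba(texture_sel) +
         svm_ls.predict_proba(ls_feature)) / 2.0
pred = proba.argmax(axis=1)
```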
Collapse
|
39
|
Diniz JOB, Ferreira JL, Diniz PHB, Silva AC, de Paiva AC. Esophagus segmentation from planning CT images using an atlas-based deep learning approach. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 197:105685. [PMID: 32798976 DOI: 10.1016/j.cmpb.2020.105685] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/24/2020] [Accepted: 07/28/2020] [Indexed: 06/11/2023]
Abstract
BACKGROUND AND OBJECTIVE One of the main steps in the planning of radiotherapy (RT) is the segmentation of organs at risk (OARs) in Computed Tomography (CT). The esophagus is one of the most difficult OARs to segment: the boundaries between the esophagus and other surrounding tissues are not well defined, and it spans several slices of the CT. Thus, manually segmenting the esophagus requires substantial experience and time. This difficulty, combined with fatigue due to the number of slices to segment, can cause human errors. To address these challenges, computational solutions for analyzing medical images and proposing automated segmentations have been developed and explored in recent years. In this work, we propose a fully automatic method for esophagus segmentation for better planning of radiotherapy in CT. METHODS The proposed method is a fully automated esophagus segmentation pipeline consisting of five main steps: (a) image acquisition; (b) VOI segmentation; (c) preprocessing; (d) esophagus segmentation; and (e) segmentation refinement. RESULTS The method was applied to a database of 36 CTs acquired from 3 different institutes. It achieved the best results in the literature so far: a Dice coefficient of 82.15%, a Jaccard index of 70.21%, an accuracy of 99.69%, a sensitivity of 90.61%, a specificity of 99.76%, and a Hausdorff distance of 6.1030 mm. CONCLUSIONS With these results, we show how promising the method is and that applying it in large medical centers, where esophagus segmentation is still an arduous and challenging task, can be of great help to specialists.
Collapse
Affiliation(s)
| | - Jonnison Lima Ferreira
- Federal University of Maranho, Brazil; Federal Institute of Amazonas - IFAM, Manaus, AM, Brazil
| | | | | | | |
Collapse
|
40
|
Abstract
Kidney tumors represent a type of cancer that people of advanced age are more likely to develop. For this reason, it is important to exercise caution and provide diagnostic tests in the later stages of life. Medical imaging and deep learning methods are becoming increasingly attractive in this sense. Developing deep learning models that help physicians identify tumors through successful segmentation is of great importance. However, not many successful systems exist for soft tissue organs, such as the kidneys and the prostate, whose segmentation is relatively difficult. In such cases where segmentation is difficult, V-Net-based models are mostly used. This paper proposes a new hybrid model using the superior features of existing V-Net models. The model represents a more successful system, with improvements in the encoder and decoder phases not previously applied. We believe that this new hybrid V-Net model could help the majority of physicians, particularly those focused on kidney and kidney tumor segmentation. The proposed model showed better segmentation performance than existing imaging models and can be easily integrated into all systems due to its flexible structure and applicability. The hybrid V-Net model exhibited average Dice coefficients of 97.7% and 86.5% for kidney and tumor segmentation, respectively, and could therefore be used as a reliable method for soft tissue organ segmentation.
Collapse
|