1. Zhu H, Huang J, Chen K, Ying X, Qian Y. multiPI-TransBTS: A multi-path learning framework for brain tumor image segmentation based on multi-physical information. Comput Biol Med 2025;191:110148. PMID: 40215867. DOI: 10.1016/j.compbiomed.2025.110148.
Abstract
Brain Tumor Segmentation (BraTS) plays a critical role in clinical diagnosis, treatment planning, and monitoring the progression of brain tumors. However, due to the variability in tumor appearance, size, and intensity across different MRI modalities, automated segmentation remains a challenging task. In this study, we propose a novel Transformer-based framework, multiPI-TransBTS, which integrates multi-physical information to enhance segmentation accuracy. The model leverages spatial information, semantic information, and multi-modal imaging data, addressing the inherent heterogeneity in brain tumor characteristics. The multiPI-TransBTS framework consists of an encoder, an Adaptive Feature Fusion (AFF) module, and a multi-source, multi-scale feature decoder. The encoder incorporates a multi-branch architecture to separately extract modality-specific features from different MRI sequences. The AFF module fuses information from multiple sources using channel-wise and element-wise attention, ensuring effective feature recalibration. The decoder combines both common and task-specific features through a Task-Specific Feature Introduction (TSFI) strategy, producing accurate segmentation outputs for Whole Tumor (WT), Tumor Core (TC), and Enhancing Tumor (ET) regions. Comprehensive evaluations on the BraTS2019 and BraTS2020 datasets demonstrate the superiority of multiPI-TransBTS over the state-of-the-art methods. The model consistently achieves better Dice coefficients, Hausdorff distances, and Sensitivity scores, highlighting its effectiveness in addressing the BraTS challenges. Our results also indicate the need for further exploration of the balance between precision and recall in the ET segmentation task. The proposed framework represents a significant advancement in BraTS, with potential implications for improving clinical outcomes for brain tumor patients.
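The paper provides no code here, but the fusion step it describes — channel-wise recalibration followed by element-wise gating of multi-modal features — can be sketched in a few lines of PyTorch. The module below is a hypothetical illustration of that general pattern; its name, layer choices, and reduction ratio are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of an adaptive feature fusion step: two modality feature
# maps are recalibrated channel-wise, then gated element-wise. Names and
# structure are illustrative, not the authors' AFF module.
import torch
import torch.nn as nn

class AdaptiveFeatureFusion(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Channel-wise attention: squeeze spatial dims, excite channels.
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(2 * channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // reduction, 2 * channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Element-wise attention: a voxel-wise gate over the fused map.
        self.element_att = nn.Sequential(
            nn.Conv3d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.project = nn.Conv3d(2 * channels, channels, kernel_size=1)

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        x = torch.cat([feat_a, feat_b], dim=1)   # (B, 2C, D, H, W)
        x = x * self.channel_att(x)              # recalibrate channels
        fused = self.project(x)                  # back to C channels
        return fused * self.element_att(x)       # voxel-wise gating

# Example: fuse two 32-channel volumetric feature maps.
a, b = torch.randn(1, 32, 8, 32, 32), torch.randn(1, 32, 8, 32, 32)
print(AdaptiveFeatureFusion(32)(a, b).shape)     # torch.Size([1, 32, 8, 32, 32])
```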
Affiliation(s)
- Hongjun Zhu
- School of Software Engineering, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China; Chongqing Engineering Research Center of Software Quality Assurance, Testing and Assessment, Chongqing, 400065, China; Key Laboratory of Big Data Intelligent Computing, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China
- Jiaohang Huang
- School of Software Engineering, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China
- Kuo Chen
- School of Software Engineering, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China; Chongqing Engineering Research Center of Software Quality Assurance, Testing and Assessment, Chongqing, 400065, China
- Xuehui Ying
- School of Software Engineering, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China; Chongqing Engineering Research Center of Software Quality Assurance, Testing and Assessment, Chongqing, 400065, China
- Ying Qian
- School of Software Engineering, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China; Chongqing Engineering Research Center of Software Quality Assurance, Testing and Assessment, Chongqing, 400065, China
2. Shaikh A, Amin S, Zeb MA, Sulaiman A, Al Reshan MS, Alshahrani H. Enhanced brain tumor detection and segmentation using densely connected convolutional networks with stacking ensemble learning. Comput Biol Med 2025;186:109703. PMID: 39862469. DOI: 10.1016/j.compbiomed.2025.109703.
Abstract
Brain tumors (BT), both benign and malignant, pose a substantial threat to human health and require precise, early detection for successful treatment. Analysing magnetic resonance imaging (MRI) images is a common method for BT diagnosis and segmentation, yet misdiagnoses can delay effective medical responses, impacting patient survival rates. Recent technological advancements have popularized deep learning-based medical image analysis, leveraging transfer learning to reuse pre-trained models for various applications. BT segmentation with MRI remains challenging despite advancements in image acquisition techniques. Accurate detection and segmentation are essential for proper diagnosis and treatment planning. This study aims to enhance the accuracy of BT detection and segmentation, and the effectiveness of categorization, through an advanced stacking ensemble learning (SEL) approach. SEL, a prominent approach within the machine learning paradigm, combines the predictions of base-level models to improve overall predictive performance and reduce the errors and biases of the individual models. The proposed approach designs a stacked DenseNet201 as the meta-model, called SEL-DenseNet201, complemented by six diverse base models: mobile network version 3 (MobileNet-v3), a 3-dimensional convolutional neural network (3D-CNN), visual geometry group networks with 16 and 19 layers (VGG-16 and VGG-19), a residual network with 50 layers (ResNet50), and Alex network (AlexNet). The base models are chosen to capture distinct aspects of the BT MRI, aiming for enhanced segmentation performance. The proposed SEL-DenseNet201 is trained on BT MRI datasets, with augmentation applied to the scans to balance the data and with image enhancement and segmentation techniques used to improve model performance. The proposed SEL-DenseNet201 achieves impressive results, with an accuracy of 99.65% and a Dice coefficient of 97.43%. These outcomes underscore the superiority of the proposed model over existing approaches. This study holds potential as an initial screening approach for early BT detection, with a high success rate.
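To make the stacking idea concrete: base models produce out-of-fold predictions that become the training features of a meta-model. The sketch below shows this two-level pattern with lightweight scikit-learn classifiers standing in for the deep CNN base models and the DenseNet201 meta-model described in the abstract; it is illustrative only.

```python
# Minimal stacking-ensemble sketch: out-of-fold predictions from several base
# models become the input features of a meta-model. Simple sklearn classifiers
# stand in for the paper's CNN base models and DenseNet201 meta-model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=40, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

base_models = [
    RandomForestClassifier(n_estimators=100, random_state=0),
    SVC(probability=True, random_state=0),
    LogisticRegression(max_iter=1000),
]

# Out-of-fold probabilities avoid leaking training labels into the meta-model.
meta_train = np.column_stack([
    cross_val_predict(m, X_tr, y_tr, cv=5, method="predict_proba")[:, 1]
    for m in base_models
])
meta_model = LogisticRegression().fit(meta_train, y_tr)

# At test time, the base models are refit on all training data first.
meta_test = np.column_stack([
    m.fit(X_tr, y_tr).predict_proba(X_te)[:, 1] for m in base_models
])
print("stacked accuracy:", meta_model.score(meta_test, y_te))
```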
Affiliation(s)
- Asadullah Shaikh
- Department of Information Systems, College of Computer Science and Information Systems, Najran University, Najran, 61441, Saudi Arabia; Emerging Technologies Research Lab (ETRL), College of Computer Science and Information Systems, Najran University, Najran, 61441, Saudi Arabia
- Samina Amin
- Institute of Computing, Kohat University of Science and Technology, Kohat, 26000, Pakistan
- Muhammad Ali Zeb
- Institute of Computing, Kohat University of Science and Technology, Kohat, 26000, Pakistan
- Adel Sulaiman
- Emerging Technologies Research Lab (ETRL), College of Computer Science and Information Systems, Najran University, Najran, 61441, Saudi Arabia; Department of Computer Science, College of Computer Science and Information Systems, Najran University, Najran, 61441, Saudi Arabia
- Mana Saleh Al Reshan
- Department of Information Systems, College of Computer Science and Information Systems, Najran University, Najran, 61441, Saudi Arabia; Emerging Technologies Research Lab (ETRL), College of Computer Science and Information Systems, Najran University, Najran, 61441, Saudi Arabia
- Hani Alshahrani
- Emerging Technologies Research Lab (ETRL), College of Computer Science and Information Systems, Najran University, Najran, 61441, Saudi Arabia; Department of Computer Science, College of Computer Science and Information Systems, Najran University, Najran, 61441, Saudi Arabia
3. Zhang X, Zhao J, Zong D, Ren H, Gao C. Taming vision transformers for clinical laryngoscopy assessment. J Biomed Inform 2025;162:104766. PMID: 39827999. DOI: 10.1016/j.jbi.2024.104766.
Abstract
OBJECTIVE: Laryngoscopy, essential for diagnosing laryngeal cancer (LCA), faces challenges due to high inter-observer variability and reliance on endoscopist expertise. Distinguishing precancerous from early-stage cancerous lesions is particularly challenging, even for experienced practitioners, given their similar appearances. This study aims to enhance laryngoscopic image analysis to improve early screening and detection of cancerous or precancerous conditions. METHODS: We propose MedFormer, a laryngeal cancer classification method based on the Vision Transformer (ViT). To address data scarcity, MedFormer employs a customized transfer learning approach that leverages the representational power of pre-trained transformers, enabling robust out-of-domain generalization by fine-tuning a minimal set of additional parameters. RESULTS: MedFormer exhibits sensitivity-specificity values of 98%-89% for identifying precancerous lesions (leukoplakia) and 89%-97% for detecting cancer, significantly surpassing CNN counterparts. It also outperforms the two selected ViT-based comparison models, matches physician visual evaluation (PVE) performance in all cases, and outperforms PVE in certain scenarios. Visualizations using class activation maps (CAM) and deformable patches demonstrate MedFormer's interpretability, aiding clinicians in understanding the model's predictions. CONCLUSION: We highlight the potential of vision transformers in clinical laryngoscopic assessment, presenting MedFormer as an effective method for the early detection of laryngeal cancer.
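The transfer strategy described — adapting a pre-trained ViT while fine-tuning only a minimal set of additional parameters — can be illustrated with a generic parameter-efficient recipe: freeze the backbone and train only a new task head. This PyTorch sketch uses torchvision's ViT-B/16 as a stand-in and is not the authors' MedFormer code.

```python
# Generic parameter-efficient fine-tuning sketch: keep a pre-trained ViT
# backbone frozen and train only a small new head. Illustrative of the general
# idea, not the paper's exact method.
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

model = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1)
for p in model.parameters():          # freeze the pre-trained backbone
    p.requires_grad = False

# Replace the head with a 2-class layer (e.g., benign vs. lesion); only this
# small parameter set is trained, which suits scarce laryngoscopy data.
model.heads = nn.Sequential(nn.Linear(768, 2))

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-3)
print(sum(p.numel() for p in trainable), "trainable parameters")
```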
Affiliation(s)
- Xinzhu Zhang
- School of Computer Science and Technology, East China Normal University, North Zhongshan Road 3663, Shanghai, 200062, China
- Jing Zhao
- School of Computer Science and Technology, East China Normal University, North Zhongshan Road 3663, Shanghai, 200062, China
- Daoming Zong
- School of Computer Science and Technology, East China Normal University, North Zhongshan Road 3663, Shanghai, 200062, China
- Henglei Ren
- Eye & ENT Hospital of Fudan University, Fenyang Road 83, Shanghai, 200000, China
- Chunli Gao
- Eye & ENT Hospital of Fudan University, Fenyang Road 83, Shanghai, 200000, China
4. Ma B, Sun Q, Ma Z, Li B, Cao Q, Wang Y, Yu G. DTASUnet: a local and global dual transformer with the attention supervision U-network for brain tumor segmentation. Sci Rep 2024;14:28379. PMID: 39551805. PMCID: PMC11570615. DOI: 10.1038/s41598-024-78067-1.
Abstract
Glioma is a highly prevalent type of brain tumor associated with a high mortality rate. Accurate segmentation of gliomas from Magnetic Resonance Imaging (MRI) is particularly important during treatment. However, existing glioma segmentation methods usually rely solely on either local or global features and perform poorly at capturing and exploiting critical information in tumor volume features. Herein, we propose a local and global dual transformer with an attention supervision U-shape network, called DTASUnet, designed for glioma segmentation. First, we built a pyramid hierarchical encoder based on 3D shift local and global transformers to effectively extract the features and relationships of different tumor regions. We also designed a 3D channel and spatial attention supervision module to guide the network, allowing it to capture key information in volumetric features more accurately during training. On the BraTS 2018 validation set, the average Dice scores of DTASUnet for the tumor core (TC), whole tumor (WT), and enhancing tumor (ET) regions were 0.845, 0.905, and 0.808, respectively. These results demonstrate that DTASUnet can assist clinicians in determining the location of gliomas, facilitating more efficient and accurate brain surgery and diagnosis.
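As a rough illustration of the 3D channel and spatial attention supervision module mentioned above, here is a CBAM-style sketch in PyTorch: channel attention from pooled statistics, followed by spatial attention from mean and max channel maps. The layer sizes and 7x7x7 kernel are assumptions, not the paper's exact design.

```python
# Illustrative 3D channel + spatial attention block of the kind used for
# attention supervision; a CBAM-style sketch, not the authors' exact module.
import torch
import torch.nn as nn

class ChannelSpatialAttention3D(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial attention from pooled channel statistics (mean and max maps).
        self.spatial = nn.Sequential(
            nn.Conv3d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_mlp(x)                         # channel attention
        stats = torch.cat([x.mean(1, keepdim=True),
                           x.amax(1, keepdim=True)], dim=1)
        return x * self.spatial(stats)                      # spatial attention

x = torch.randn(1, 16, 16, 32, 32)
print(ChannelSpatialAttention3D(16)(x).shape)   # torch.Size([1, 16, 16, 32, 32])
```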
Affiliation(s)
- Bo Ma
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, China
- Qian Sun
- Cancer Center, The Second Hospital of Shandong University, Jinan, China
- Ze Ma
- School of Materials Science and Engineering, University of Jinan, Jinan, China
- Baosheng Li
- Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, No. 440, Jiyan Road, Jinan, 250117, Shandong Province, People's Republic of China
- Qiang Cao
- Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, No. 440, Jiyan Road, Jinan, 250117, Shandong Province, People's Republic of China
- Yungang Wang
- Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, No. 440, Jiyan Road, Jinan, 250117, Shandong Province, People's Republic of China
- Gang Yu
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, China
5. Zhou T. M2GCNet: Multi-Modal Graph Convolution Network for Precise Brain Tumor Segmentation Across Multiple MRI Sequences. IEEE Trans Image Process 2024;33:4896-4910. PMID: 39236123. DOI: 10.1109/tip.2024.3451936.
Abstract
Accurate segmentation of brain tumors across multiple MRI sequences is essential for diagnosis, treatment planning, and clinical decision-making. In this paper, I propose a cutting-edge framework, named multi-modal graph convolution network (M2GCNet), to explore the relationships across different MR modalities, and address the challenge of brain tumor segmentation. The core of M2GCNet is the multi-modal graph convolution module (M2GCM), a pivotal component that represents MR modalities as graphs, with nodes corresponding to image pixels and edges capturing latent relationships between pixels. This graph-based representation enables the effective utilization of both local and global contextual information. Notably, M2GCM comprises two important modules: the spatial-wise graph convolution module (SGCM), adept at capturing extensive spatial dependencies among distinct regions within an image, and the channel-wise graph convolution module (CGCM), dedicated to modelling intricate contextual dependencies among different channels within the image. Additionally, acknowledging the intrinsic correlation present among different MR modalities, a multi-modal correlation loss function is introduced. This novel loss function aims to capture specific nonlinear relationships between correlated modality pairs, enhancing the model's ability to achieve accurate segmentation results. The experimental evaluation on two brain tumor datasets demonstrates the superiority of the proposed M2GCNet over other state-of-the-art segmentation methods. Furthermore, the proposed method paves the way for improved tumor diagnosis, multi-modal information fusion, and a deeper understanding of brain tumor pathology.
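A toy example helps convey the core idea of treating pixels as graph nodes: build an adjacency matrix from pairwise feature similarity, then propagate and transform node features. The sketch below is a generic single graph-convolution step, not M2GCNet's SGCM or CGCM modules.

```python
# Toy sketch of the graph-convolution idea: pixels become graph nodes, edges
# come from feature similarity, and one propagation step mixes information
# along those edges. Purely illustrative, not the paper's modules.
import torch
import torch.nn.functional as F

def graph_conv_step(feats: torch.Tensor, weight: torch.Tensor) -> torch.Tensor:
    """feats: (N, C) node features for N pixels; weight: (C, C_out)."""
    # Dense adjacency from pairwise similarity of node features.
    sim = feats @ feats.t()                      # (N, N)
    adj = F.softmax(sim, dim=-1)                 # row-normalized edge weights
    return F.relu(adj @ feats @ weight)          # propagate, then transform

# 64 pixels with 8-dim features, projected to 16 dims after one GCN step.
feats = torch.randn(64, 8)
weight = torch.randn(8, 16)
print(graph_conv_step(feats, weight).shape)      # torch.Size([64, 16])
```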
6. Abidin ZU, Naqvi RA, Haider A, Kim HS, Jeong D, Lee SW. Recent deep learning-based brain tumor segmentation models using multi-modality magnetic resonance imaging: a prospective survey. Front Bioeng Biotechnol 2024;12:1392807. PMID: 39104626. PMCID: PMC11298476. DOI: 10.3389/fbioe.2024.1392807.
Abstract
Radiologists encounter significant challenges when segmenting and characterizing brain tumors because this information assists in treatment planning. The utilization of artificial intelligence (AI), especially deep learning (DL), has emerged as a useful tool in healthcare, aiding radiologists in their diagnostic processes. This empowers radiologists to better understand tumor biology and provide personalized care to patients with brain tumors. The segmentation of brain tumors using multi-modal magnetic resonance imaging (MRI) has received considerable attention. In this survey, we first discuss the available MRI modalities and their properties. Subsequently, we discuss the most recent DL-based models for brain tumor segmentation using multi-modal MRI, divided into three parts by architecture: models built on a convolutional neural network (CNN) backbone, vision transformer-based models, and hybrid models that combine convolutional neural networks and transformers. In addition, an in-depth statistical analysis is performed of recent publications, frequently used datasets, and evaluation metrics for segmentation tasks. Finally, open research challenges are identified and promising future directions are suggested for brain tumor segmentation, to improve diagnostic accuracy and treatment outcomes for patients with brain tumors. This aligns with public health goals of using health technologies for better healthcare delivery and population health management.
Affiliation(s)
- Zain Ul Abidin
- Department of Intelligent Mechatronics Engineering, Sejong University, Seoul, Republic of Korea
- Rizwan Ali Naqvi
- Department of Intelligent Mechatronics Engineering, Sejong University, Seoul, Republic of Korea
- Amir Haider
- Department of Intelligent Mechatronics Engineering, Sejong University, Seoul, Republic of Korea
- Hyung Seok Kim
- Department of Intelligent Mechatronics Engineering, Sejong University, Seoul, Republic of Korea
- Daesik Jeong
- College of Convergence Engineering, Sangmyung University, Seoul, Republic of Korea
- Seung Won Lee
- School of Medicine, Sungkyunkwan University, Suwon, Republic of Korea
7. Saluja S, Trivedi MC, Saha A. Deep CNNs for glioma grading on conventional MRIs: Performance analysis, challenges, and future directions. Math Biosci Eng 2024;21:5250-5282. PMID: 38872535. DOI: 10.3934/mbe.2024232.
Abstract
The increasing global incidence of gliomas has raised significant healthcare concerns due to their high mortality rates. Traditionally, tumor diagnosis relies on visual analysis of medical imaging and invasive biopsies for precise grading. As an alternative, computer-assisted methods, particularly deep convolutional neural networks (DCNNs), have gained traction. This paper reviews advancements in DCNNs for glioma grading using brain magnetic resonance images (MRIs) from 2015 to 2023. The study evaluated various DCNN architectures and their performance, revealing remarkable results, with hybrid and ensemble-based DCNNs achieving accuracy levels of up to 98.91%. However, challenges persist in the form of limited datasets, lack of external validation, and variations in grading formulations across the literature. Addressing these challenges by expanding datasets, conducting external validation, and standardizing grading formulations can enhance the performance and reliability of DCNNs in glioma grading, thereby advancing brain tumor classification and extending its applications to other neurological disorders.
Affiliation(s)
- Sonam Saluja
- Department of Computer Science and Engineering, National Institute of Technology Agartala, Tripura 799046, India
- Munesh Chandra Trivedi
- Department of Computer Science and Engineering, National Institute of Technology Agartala, Tripura 799046, India
- Ashim Saha
- Department of Computer Science and Engineering, National Institute of Technology Agartala, Tripura 799046, India
8. Hassan MT, Tayara H, Chong KT. An integrative machine learning model for the identification of tumor T-cell antigens. Biosystems 2024;237:105177. PMID: 38458346. DOI: 10.1016/j.biosystems.2024.105177.
Abstract
The escalating global incidence of cancer poses significant health challenges, underscoring the need for innovative and more efficacious treatments. Cancer immunotherapy, a promising approach leveraging the body's immune system against cancer, emerges as a compelling solution. Consequently, the identification and characterization of tumor T-cell antigens (TTCAs) have become pivotal for exploration. In this manuscript, we introduce TTCA-IF, an integrative machine learning-based framework designed for TTCAs identification. TTCA-IF employs ten feature encoding types in conjunction with five conventional machine learning classifiers. To establish a robust foundation, these classifiers are trained, resulting in the creation of 150 baseline models. The outputs from these baseline models are then fed back into the five classifiers, generating their respective meta-models. Through an ensemble approach, the five meta-models are seamlessly integrated to yield the final predictive model, the TTCA-IF model. Our proposed model, TTCA-IF, surpasses both baseline models and existing predictors in performance. In a comparative analysis involving nine novel peptide sequences, TTCA-IF demonstrated exceptional accuracy by correctly identifying 8 out of 9 peptides as TTCAs. As a tool for screening and pinpointing potential TTCAs, we anticipate TTCA-IF to be invaluable in advancing cancer immunotherapy.
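The two-level design — conventional classifiers whose probability outputs feed a second-level learner — maps directly onto scikit-learn's StackingClassifier, as the sketch below shows. The classifiers, feature encoding, and data here are placeholders rather than TTCA-IF's actual configuration.

```python
# Two-level ensemble sketch in the spirit of TTCA-IF: several conventional
# classifiers produce probability outputs that a second-level model combines.
# Feature encodings, data, and model choices here are placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

# Stand-in for peptide sequences already converted by a feature encoding
# (e.g., amino-acid composition gives one numeric vector per peptide).
X, y = make_classification(n_samples=500, n_features=20, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(random_state=1)),
        ("et", ExtraTreesClassifier(random_state=1)),
        ("knn", KNeighborsClassifier()),
        ("nb", GaussianNB()),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    stack_method="predict_proba",   # meta-model sees class probabilities
    cv=5,
)
print("stacked accuracy:", stack.fit(X_tr, y_tr).score(X_te, y_te))
```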
Affiliation(s)
- Mir Tanveerul Hassan
- Department of Electronics and Information Engineering, Jeonbuk National University, Jeonju, 54896, South Korea
- Hilal Tayara
- School of International Engineering and Science, Jeonbuk National University, Jeonju, 54896, South Korea
- Kil To Chong
- Department of Electronics and Information Engineering, Jeonbuk National University, Jeonju, 54896, South Korea; Advances Electronics and Information Research Centre, Jeonbuk National University, Jeonju, 54896, South Korea
9. Saluja S, Trivedi MC, Sarangdevot SS. Advancing glioma diagnosis: Integrating custom U-Net and VGG-16 for improved grading in MR imaging. Math Biosci Eng 2024;21:4328-4350. PMID: 38549330. DOI: 10.3934/mbe.2024191.
Abstract
In the realm of medical imaging, the precise segmentation and classification of gliomas represent fundamental challenges with profound clinical implications. Leveraging the BraTS 2018 dataset as a standard benchmark, this study delves into the potential of advanced deep learning models for addressing these challenges. We propose a novel approach that integrates a customized U-Net for segmentation and VGG-16 for classification. The U-Net, with its tailored encoder-decoder pathways, accurately identifies glioma regions, thus improving tumor localization. The fine-tuned VGG-16, featuring a customized output layer, precisely differentiates between low-grade and high-grade gliomas. To ensure consistency in data pre-processing, a standardized methodology involving gamma correction, data augmentation, and normalization is introduced. This integration surpasses existing methods, offering significantly improved glioma diagnosis, validated by high segmentation Dice scores (WT: 0.96, TC: 0.92, ET: 0.89) and a remarkable overall classification accuracy of 97.89%. The experimental findings underscore the potential of integrating deep learning-based methodologies for tumor segmentation and classification to enhance glioma diagnosis and inform subsequent treatment strategies.
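Of the pre-processing steps listed, gamma correction and normalization are easy to make concrete. The following sketch applies an assumed gamma value and z-score normalization to a raw slice; the exact parameters used in the study are not specified here.

```python
# Sketch of the standardized pre-processing described: gamma correction
# followed by intensity normalization of an MR slice. Parameter values are
# illustrative, not the study's exact settings.
import numpy as np

def preprocess_slice(img: np.ndarray, gamma: float = 0.8) -> np.ndarray:
    """Gamma-correct and z-score-normalize a 2D slice of raw intensities."""
    img = img.astype(np.float32)
    scaled = (img - img.min()) / (np.ptp(img) + 1e-8)  # map to [0, 1]
    corrected = np.power(scaled, gamma)                # gamma correction
    return (corrected - corrected.mean()) / (corrected.std() + 1e-8)

slice_ = np.random.randint(0, 4096, size=(240, 240))   # fake 12-bit MR slice
out = preprocess_slice(slice_)
print(out.shape, round(float(out.mean()), 3), round(float(out.std()), 3))
```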
Affiliation(s)
- Sonam Saluja
- Department of Computer Science and Engineering, National Institute of Technology Agartala, Tripura, 799046, India
- Munesh Chandra Trivedi
- Department of Computer Science and Engineering, National Institute of Technology Agartala, Tripura, 799046, India
10. Zhou Y, Yang Z, Bai X, Li C, Wang S, Peng G, Li G, Wang Q, Chang H. Semantic Segmentation of Surface Cracks in Urban Comprehensive Pipe Galleries Based on Global Attention. Sensors (Basel) 2024;24:1005. PMID: 38339722. PMCID: PMC10857760. DOI: 10.3390/s24031005.
Abstract
Cracks inside urban underground comprehensive pipe galleries are small and their characteristics are not obvious. Due to low lighting and large shadow areas, the differentiation between cracks and background in an image is low. Most current semantic segmentation methods focus on overall segmentation and have a large perceptual range, making it difficult to attend to the detailed features of local edges and obtain accurate segmentation results for pipe gallery cracks. A Global Attention Segmentation Network (GA-SegNet) is proposed in this paper to perform semantic segmentation by incorporating global attention mechanisms. To achieve precise pixel classification, a residual separable convolution attention model is employed in the encoder to extract features at multiple scales. A global attention upsample model (GAM) is utilized in the decoder to enhance the connection between shallow-level features and deep abstract features, increasing the network's attention to small cracks. By employing a balanced loss function, the contribution of crack pixels is increased while the focus on background pixels in the overall loss is reduced, improving the segmentation accuracy of cracks. Comparative experimental results with other classic models show that the proposed GA-SegNet achieves better segmentation performance across multiple evaluation indicators, with advantages in segmentation accuracy and efficiency.
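One common way to realize the balanced loss described — boosting the contribution of rare crack pixels relative to background — is a positively weighted binary cross-entropy, sketched below in PyTorch. The background/foreground ratio weighting is one standard choice and may differ from the authors' exact formulation.

```python
# Sketch of a class-balanced loss: crack pixels get a larger weight so the
# rare foreground class is not drowned out by background. One common choice,
# not necessarily the paper's exact formulation.
import torch
import torch.nn.functional as F

def balanced_bce(logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """logits, target: (B, 1, H, W); target is 1 on crack pixels, 0 elsewhere."""
    pos = target.sum()
    neg = target.numel() - pos
    # Up-weight the positive (crack) class by the background/foreground ratio.
    pos_weight = neg / pos.clamp(min=1)
    return F.binary_cross_entropy_with_logits(logits, target,
                                              pos_weight=pos_weight)

logits = torch.randn(2, 1, 64, 64)
target = (torch.rand(2, 1, 64, 64) < 0.02).float()   # ~2% crack pixels
print(balanced_bce(logits, target))
```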
Affiliation(s)
- Yuan Zhou
- School of Instrument Science and Engineering, Harbin Institute of Technology, Harbin 150001, China; (Y.Z.); (X.B.)
- Zhiyu Yang
- School of Control and Mechanical, Tianjin Chengjian University, Tianjin 300384, China
- Xiaofeng Bai
- School of Instrument Science and Engineering, Harbin Institute of Technology, Harbin 150001, China; (Y.Z.); (X.B.)
- Chengwei Li
- School of Instrument Science and Engineering, Harbin Institute of Technology, Harbin 150001, China; (Y.Z.); (X.B.)
- Shoubin Wang
- School of Control and Mechanical, Tianjin Chengjian University, Tianjin 300384, China
- Guili Peng
- School of Control and Mechanical, Tianjin Chengjian University, Tianjin 300384, China
- Guodong Li
- STECOL Corporation, Power Construction Corporation of China, Tianjin 300384, China; (G.L.); (Q.W.); (H.C.)
- Qinghua Wang
- STECOL Corporation, Power Construction Corporation of China, Tianjin 300384, China; (G.L.); (Q.W.); (H.C.)
- Huailei Chang
- STECOL Corporation, Power Construction Corporation of China, Tianjin 300384, China; (G.L.); (Q.W.); (H.C.)
11. Bayareh-Mancilla R, Medina-Ramos LA, Toriz-Vázquez A, Hernández-Rodríguez YM, Cigarroa-Mayorga OE. Automated Computer-Assisted Medical Decision-Making System Based on Morphological Shape and Skin Thickness Analysis for Asymmetry Detection in Mammographic Images. Diagnostics (Basel) 2023;13:3440. PMID: 37998576. PMCID: PMC10670641. DOI: 10.3390/diagnostics13223440.
Abstract
Breast cancer is a significant health concern for women, emphasizing the need for early detection. This research develops a computer system for asymmetry detection in mammographic images, employing two critical approaches: Dynamic Time Warping (DTW) for shape analysis and the Growing Seed Region (GSR) method for breast skin segmentation. The methodology involves processing mammograms in DICOM format. In the morphological study, a centroid-based mask is computed using images extracted from DICOM files. Distances between the centroid and the breast perimeter are then calculated to assess similarity through Dynamic Time Warping analysis. For skin thickness asymmetry identification, a seed is initially set on skin pixels and expanded based on intensity and depth similarities. The DTW analysis achieves an accuracy of 83%, identifying 23 possible asymmetry cases against 20 ground truth cases. The GSR method is validated using Average Symmetric Surface Distance and Relative Volumetric metrics, yielding similarities of 90.47% and 66.66%, respectively, for asymmetry cases compared with 182 ground truth segmented images, successfully identifying 35 patients with potential skin asymmetry. Additionally, a Graphical User Interface is designed to facilitate the insertion of DICOM files and provide visual representations of asymmetrical findings for validation and accessibility by physicians.
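Dynamic Time Warping itself is a small dynamic program, shown below applied to two synthetic centroid-to-perimeter distance profiles of the kind the morphological analysis compares. The signals and any decision threshold are invented for illustration.

```python
# Minimal dynamic-time-warping sketch: compare two centroid-to-perimeter
# distance profiles (one per breast) as in the shape analysis described.
# The profiles here are synthetic.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return float(cost[n, m])

t = np.linspace(0, 2 * np.pi, 180)
left = 100 + 10 * np.sin(t)            # distance profile, left breast
right = 100 + 10 * np.sin(t + 0.2)     # slightly shifted right profile
print("DTW distance:", round(dtw_distance(left, right), 2))
```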
Affiliation(s)
- Rafael Bayareh-Mancilla
- Department Advanced Technologies, UPIITA-Instituto Politécnico Nacional, Av. IPN No. 2580, Mexico City C.P. 07340, Mexico
- Alfonso Toriz-Vázquez
- Academic Unit, Institute of Applied Mathematics and Systems Research of the State of Yucatan, National Autonomous University of Mexico, Merida C.P. 97302, Yucatan, Mexico
- Oscar Eduardo Cigarroa-Mayorga
- Department Advanced Technologies, UPIITA-Instituto Politécnico Nacional, Av. IPN No. 2580, Mexico City C.P. 07340, Mexico
12. Wu H, Niyogisubizo J, Zhao K, Meng J, Xi W, Li H, Pan Y, Wei Y. A Weakly Supervised Learning Method for Cell Detection and Tracking Using Incomplete Initial Annotations. Int J Mol Sci 2023;24:16028. PMID: 38003217. PMCID: PMC10670924. DOI: 10.3390/ijms242216028.
Abstract
The automatic detection of cells in microscopy image sequences is a significant task in biomedical research. However, routine microscopy images of cells, taken during processes of constant division and differentiation, are notoriously difficult to analyze because the cells change in appearance and number. Recently, convolutional neural network (CNN)-based methods have made significant progress in cell detection and tracking. However, these approaches require large amounts of manually annotated data for fully supervised training, which is time-consuming and often requires professional researchers. To alleviate this labor-intensive cost, we propose a novel weakly supervised cell detection and tracking framework that trains the deep neural network using incomplete initial labels. Our approach uses incomplete cell markers obtained from fluorescent images for initial training on the Induced Pluripotent Stem (iPS) cell dataset, which is rarely studied for cell detection and tracking. During training, the incomplete initial labels are updated iteratively by combining detection and tracking results, yielding a model with better robustness. Our method was evaluated on two fields of the iPS cell dataset using the cell detection accuracy (DET) metric from the Cell Tracking Challenge (CTC) initiative, achieving 0.862 and 0.924 DET, respectively. The transferability of the developed model was tested on the public dataset Fluo-N2DH-GOWT1 from the CTC, which contains two datasets with reference annotations. We randomly removed parts of the annotations in each labeled dataset to simulate incomplete initial annotations. After training on the two datasets with labels comprising 10% of the cell markers, the DET improved from 0.130 to 0.903 and from 0.116 to 0.877. When trained with labels comprising 60% of the cell markers, the performance was better than that of the model trained using fully supervised learning. These outcomes indicate that the model's performance improved as the quality of the training labels increased.
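The iterative label-update loop can be caricatured with a self-training toy: start from a small labeled subset, train, absorb confident predictions as new labels, and retrain. The sketch below replaces the paper's detection-plus-tracking consistency check with a simple confidence threshold, and uses logistic regression on synthetic data purely for illustration.

```python
# Toy version of iterative label refinement: begin with incomplete labels,
# train, add confident predictions as new labels, retrain. Logistic regression
# on synthetic data stands in for the paper's detection network.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y_true = make_classification(n_samples=1000, n_features=20, random_state=0)
labeled = np.zeros(len(y_true), dtype=bool)
labeled[:100] = True                     # only 10% of markers known initially
y = np.where(labeled, y_true, -1)        # -1 = unlabeled

for round_ in range(5):
    clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X)
    confident = proba.max(axis=1) > 0.95          # accept confident predictions
    newly = confident & ~labeled
    y[newly] = proba[newly].argmax(axis=1)
    labeled |= newly
    print(f"round {round_}: {labeled.sum()} labeled, "
          f"acc on all = {clf.score(X, y_true):.3f}")
```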
Affiliation(s)
- Hao Wu
- Shenzhen Key Laboratory of Intelligent Bioinformatics and Center for High Performance Computing, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; (H.W.); (J.N.); (K.Z.); (J.M.); (W.X.)
- Jovial Niyogisubizo
- Shenzhen Key Laboratory of Intelligent Bioinformatics and Center for High Performance Computing, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; (H.W.); (J.N.); (K.Z.); (J.M.); (W.X.)
- University of Chinese Academy of Sciences, Beijing 100049, China
- Keliang Zhao
- Shenzhen Key Laboratory of Intelligent Bioinformatics and Center for High Performance Computing, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; (H.W.); (J.N.); (K.Z.); (J.M.); (W.X.)
- University of Chinese Academy of Sciences, Beijing 100049, China
- Jintao Meng
- Shenzhen Key Laboratory of Intelligent Bioinformatics and Center for High Performance Computing, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; (H.W.); (J.N.); (K.Z.); (J.M.); (W.X.)
- Wenhui Xi
- Shenzhen Key Laboratory of Intelligent Bioinformatics and Center for High Performance Computing, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; (H.W.); (J.N.); (K.Z.); (J.M.); (W.X.)
- Hongchang Li
- Institute of Biomedicine and Biotechnology, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Yi Pan
- College of Computer Science and Control Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Yanjie Wei
- Shenzhen Key Laboratory of Intelligent Bioinformatics and Center for High Performance Computing, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; (H.W.); (J.N.); (K.Z.); (J.M.); (W.X.)
13. Kalantar R, Curcean S, Winfield JM, Lin G, Messiou C, Blackledge MD, Koh DM. Deep Learning Framework with Multi-Head Dilated Encoders for Enhanced Segmentation of Cervical Cancer on Multiparametric Magnetic Resonance Imaging. Diagnostics (Basel) 2023;13:3381. PMID: 37958277. PMCID: PMC10647438. DOI: 10.3390/diagnostics13213381.
Abstract
T2-weighted magnetic resonance imaging (MRI) and diffusion-weighted imaging (DWI) are essential components of cervical cancer diagnosis. However, combining these channels for the training of deep learning models is challenging due to image misalignment. Here, we propose a novel multi-head framework that uses dilated convolutions and shared residual connections for the separate encoding of multiparametric MRI images. We employ a residual U-Net model as a baseline, and perform a series of architectural experiments to evaluate the tumor segmentation performance based on multiparametric input channels and different feature encoding configurations. All experiments were performed on a cohort of 207 patients with locally advanced cervical cancer. Our proposed multi-head model using separate dilated encoding for T2W MRI and combined b1000 DWI and apparent diffusion coefficient (ADC) maps achieved the best median Dice similarity coefficient (DSC) score, 0.823 (confidence interval (CI), 0.595-0.797), outperforming the conventional multi-channel model, DSC 0.788 (95% CI, 0.568-0.776), although the difference was not statistically significant (p > 0.05). We investigated channel sensitivity using 3D GRAD-CAM and channel dropout, and highlighted the critical importance of T2W and ADC channels for accurate tumor segmentation. However, our results showed that b1000 DWI had a minor impact on the overall segmentation performance. We demonstrated that the use of separate dilated feature extractors and independent contextual learning improved the model's ability to reduce the boundary effects and distortion of DWI, leading to improved segmentation performance. Our findings could have significant implications for the development of robust and generalizable models that can extend to other multi-modal segmentation applications.
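The multi-head idea — one dilated convolutional encoder per channel group, fused before a shared decoder — is easy to sketch. The PyTorch snippet below is a schematic with invented depths and channel counts, not the study's residual U-Net configuration.

```python
# Sketch of the multi-head idea: each MRI channel group gets its own dilated
# convolutional encoder before fusion, instead of concatenating all channels
# at the input. Depths and channel counts are illustrative.
import torch
import torch.nn as nn

def dilated_encoder(in_ch: int, out_ch: int, dilation: int = 2) -> nn.Module:
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=dilation, dilation=dilation),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=dilation, dilation=dilation),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

t2w_encoder = dilated_encoder(1, 32)      # head for the T2W channel
dwi_encoder = dilated_encoder(2, 32)      # head for combined b1000 DWI + ADC

t2w = torch.randn(1, 1, 256, 256)
dwi_adc = torch.randn(1, 2, 256, 256)
fused = torch.cat([t2w_encoder(t2w), dwi_encoder(dwi_adc)], dim=1)
print(fused.shape)   # torch.Size([1, 64, 256, 256]) -> shared decoder input
```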
Affiliation(s)
- Reza Kalantar
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SW7 3RP, UK; (R.K.); (J.M.W.); (C.M.); (D.-M.K.)
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
- Sebastian Curcean
- Department of Radiation Oncology, Iuliu Hatieganu University of Medicine and Pharmacy, 400347 Cluj-Napoca, Romania
- Jessica M. Winfield
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SW7 3RP, UK; (R.K.); (J.M.W.); (C.M.); (D.-M.K.)
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
- Gigin Lin
- Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou, Chang Gung University, Guishan, Taoyuan 333, Taiwan
- Christina Messiou
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SW7 3RP, UK; (R.K.); (J.M.W.); (C.M.); (D.-M.K.)
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
- Matthew D. Blackledge
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SW7 3RP, UK; (R.K.); (J.M.W.); (C.M.); (D.-M.K.)
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
- Dow-Mu Koh
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SW7 3RP, UK; (R.K.); (J.M.W.); (C.M.); (D.-M.K.)
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
14. Fu X, Song C, Zhang R, Shi H, Jiao Z. Multimodal Classification Framework Based on Hypergraph Latent Relation for End-Stage Renal Disease Associated with Mild Cognitive Impairment. Bioengineering (Basel) 2023;10:958. PMID: 37627843. PMCID: PMC10451373. DOI: 10.3390/bioengineering10080958.
Abstract
Combined arterial spin labeling (ASL) and functional magnetic resonance imaging (fMRI) can reveal more comprehensive spatiotemporal and quantitative properties of brain networks, from which imaging markers of end-stage renal disease associated with mild cognitive impairment (ESRDaMCI) are sought. Current multimodal classification methods often neglect the high-order relationships of brain regions and fail to remove noise from the feature matrix. A multimodal classification framework based on hypergraph latent relation (HLR) is proposed to address this issue. A brain functional network with hypergraph structural information is constructed from fMRI data, and the feature matrix is obtained through graph theory (GT). Cerebral blood flow (CBF) from ASL is selected as the second modal feature matrix. Then, an adaptive similarity matrix is constructed by learning the latent relation between feature matrices. Latent relation adaptive similarity learning (LRAS) is introduced into multi-task feature learning to construct a multimodal feature selection method based on latent relation (LRMFS). The experimental results show that the best classification accuracy (ACC) reaches 88.67%, at least 2.84% better than state-of-the-art methods. The proposed framework preserves more valuable information between brain regions and reduces noise among feature matrices, providing an essential reference for ESRDaMCI recognition.
Affiliation(s)
- Xidong Fu
- School of Computer Science and Artificial Intelligence, Changzhou University, Changzhou 213164, China
- Chaofan Song
- School of Computer Science and Artificial Intelligence, Changzhou University, Changzhou 213164, China
- Rupu Zhang
- School of Computer Science and Artificial Intelligence, Changzhou University, Changzhou 213164, China
- Haifeng Shi
- Department of Radiology, The Affiliated Changzhou No.2 People's Hospital of Nanjing Medical University, Changzhou 213003, China
- Zhuqing Jiao
- School of Computer Science and Artificial Intelligence, Changzhou University, Changzhou 213164, China
15. Khan MM, Chowdhury MEH, Arefin ASMS, Podder KK, Hossain MSA, Alqahtani A, Murugappan M, Khandakar A, Mushtak A, Nahiduzzaman M. A Deep Learning-Based Automatic Segmentation and 3D Visualization Technique for Intracranial Hemorrhage Detection Using Computed Tomography Images. Diagnostics (Basel) 2023;13:2537. PMID: 37568900. PMCID: PMC10417300. DOI: 10.3390/diagnostics13152537.
Abstract
Intracranial hemorrhage (ICH) occurs when blood leaks inside the skull as a result of trauma or medical conditions. ICH usually requires immediate medical and surgical attention because the disease has a high mortality rate, long-term disability potential, and other potentially life-threatening complications. ICHs span a wide range of severity levels, sizes, and morphologies, making accurate identification challenging. Small hemorrhages are more likely to be missed, particularly in healthcare systems that experience high turnover in computed tomography (CT) investigations. Although many neuroimaging modalities have been developed, CT remains the standard for diagnosing trauma and hemorrhage (including non-traumatic cases). Because a CT-based diagnosis can be obtained rapidly, it can enable time-critical, urgent ICH surgery that saves lives. The purpose of this study is to develop a machine-learning algorithm that can detect intracranial hemorrhage in plain CT images taken from 75 patients. CT images were preprocessed using brain windowing, skull-stripping, and image inversion techniques. Hemorrhage segmentation was performed on the preprocessed CT images using multiple pre-trained models previously applied in many other medical applications. A U-Net model with a DenseNet201 pre-trained encoder outperformed the other U-Net, U-Net++, and FPN (Feature Pyramid Network) models, achieving the highest Dice similarity coefficient (DSC) and intersection over union (IoU) scores. We present a three-dimensional brain model highlighting hemorrhages from ground truth and predicted masks, and the hemorrhage volume was measured volumetrically to determine the size of the hematoma. This study is relevant to the clinical diagnostic examination of ICH, comparing the predicted 3D model with the ground truth.
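The brain-windowing step mentioned in the pre-processing pipeline clips Hounsfield units to a window and rescales them for display, as sketched below. A level/width of 40/80 HU is a common brain window; the study's exact settings may differ.

```python
# Sketch of CT brain windowing: clip Hounsfield units to a window and rescale
# to [0, 255]. The 40/80 HU level/width is a common brain window, assumed here.
import numpy as np

def window_ct(hu: np.ndarray, level: float = 40.0, width: float = 80.0) -> np.ndarray:
    lo, hi = level - width / 2, level + width / 2
    clipped = np.clip(hu, lo, hi)
    return ((clipped - lo) / (hi - lo) * 255.0).astype(np.uint8)

hu_slice = np.random.randint(-1000, 1000, size=(512, 512))  # fake HU slice
img = window_ct(hu_slice)
print(img.min(), img.max())   # 0 255
```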
Affiliation(s)
- Muntakim Mahmud Khan
- Department of Biomedical Physics and Technology, University of Dhaka, Dhaka 1000, Bangladesh
- A. S. M. Shamsul Arefin
- Department of Biomedical Physics and Technology, University of Dhaka, Dhaka 1000, Bangladesh
- Kanchon Kanti Podder
- Department of Biomedical Physics and Technology, University of Dhaka, Dhaka 1000, Bangladesh
- Md. Sakib Abrar Hossain
- Department of Biomedical Physics and Technology, University of Dhaka, Dhaka 1000, Bangladesh
- Abdulrahman Alqahtani
- Department of Medical Equipment Technology, College of Applied Medical Science, Majmaah University, Majmaah City 11952, Saudi Arabia
- Department of Biomedical Technology, College of Applied Medical Sciences in Al-Kharj, Prince Sattam Bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
- M. Murugappan
- Intelligent Signal Processing (ISP) Research Lab, Department of Electronics and Communication Engineering, Kuwait College of Science and Technology, Block 4, Doha 13133, Kuwait
- Department of Electronics and Communication Engineering, School of Engineering, Vels Institute of Sciences, Technology, and Advanced Studies, Chennai 600117, India
- Center of Excellence for Unmanned Aerial Systems (CoEUAS), Universiti Malaysia Perlis, Perlis 02600, Malaysia
- Amith Khandakar
- Department of Electrical Engineering, Qatar University, Doha 2713, Qatar
- Adam Mushtak
- Clinical Imaging Department, Hamad Medical Corporation, Doha 3050, Qatar
- Md. Nahiduzzaman
- Department of Electrical Engineering, Qatar University, Doha 2713, Qatar
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
16. Song K, Zhang Y, Bao Y, Zhao Y, Yan Y. Self-Enhanced Mixed Attention Network for Three-Modal Images Few-Shot Semantic Segmentation. Sensors (Basel) 2023;23:6612. PMID: 37514905. PMCID: PMC10386587. DOI: 10.3390/s23146612.
Abstract
As an important computer vision technique, image segmentation has been widely used in various tasks. However, in some extreme cases, insufficient illumination can greatly degrade model performance, so more and more fully supervised methods use multi-modal images as input. Large, densely annotated datasets are difficult to obtain, but few-shot methods can still achieve satisfactory results with only a few pixel-annotated samples. We therefore propose a few-shot semantic segmentation method for Visible-Depth-Thermal (three-modal) images. It utilizes the homogeneous information of three-modal images and the complementary information across different modalities, which improves the performance of few-shot segmentation tasks. We constructed a novel indoor dataset, VDT-2048-5i, for the three-modal few-shot semantic segmentation task. We also propose a Self-Enhanced Mixed Attention Network (SEMANet), which consists of a Self-Enhanced (SE) module and a Mixed Attention (MA) module. The SE module amplifies the differences between different kinds of features and strengthens weak connections for foreground features. The MA module fuses the three-modal features to obtain a better representation. Compared with the most advanced prior methods, our model improves mIoU by 3.8% and 3.3% in the 1-shot and 5-shot settings, respectively, achieving state-of-the-art performance. In future work, we will address failure cases by obtaining more discriminative and robust feature representations, and explore achieving high performance with fewer parameters and computational costs.
Affiliation(s)
- Kechen Song
- School of Mechanical Engineering & Automation, Northeastern University, Shenyang 110819, China
- Yiming Zhang
- School of Mechanical Engineering & Automation, Northeastern University, Shenyang 110819, China
- Yanqi Bao
- National Key Laboratory for Novel Software Technology, Department of Computer Science and Technology, Nanjing University, Nanjing 210023, China
- Ying Zhao
- School of Mechanical Engineering & Automation, Northeastern University, Shenyang 110819, China
- Yunhui Yan
- School of Mechanical Engineering & Automation, Northeastern University, Shenyang 110819, China
17. Kodipalli A, Fernandes SL, Gururaj V, Varada Rameshbabu S, Dasar S. Performance Analysis of Segmentation and Classification of CT-Scanned Ovarian Tumours Using U-Net and Deep Convolutional Neural Networks. Diagnostics (Basel) 2023;13:2282. PMID: 37443676. DOI: 10.3390/diagnostics13132282.
Abstract
Despite advancements in treatment and research regarding ovarian cancer, difficulty in detecting tumours at early stages remains a major cause of patient mortality. Deep learning algorithms were applied as a diagnostic tool to CT scan images of the ovarian region. The images went through a series of pre-processing techniques, and the tumour was then segmented using the UNet model. The instances were classified into two categories: benign and malignant tumours. Classification was performed using deep learning models such as CNN, ResNet, DenseNet, Inception-ResNet, VGG16 and Xception, along with machine learning models such as Random Forest, Gradient Boosting, AdaBoost and XGBoost. DenseNet121 emerged as the best model on this dataset, obtaining an accuracy of 95.7% after optimization was applied to the machine learning models. The current work demonstrates a comparison of multiple CNN architectures with common machine learning algorithms, with and without optimization techniques applied.
Affiliation(s)
- Ashwini Kodipalli
- Department of Artificial Intelligence & Data Science, Global Academy of Technology, Bangalore 560098, India
- Steven L Fernandes
- Department of Computer Science, Design, Journalism, Creighton University, Omaha, NE 68178, USA
- Vaishnavi Gururaj
- Department of Computer Science, George Mason University, Fairfax, VA 22030, USA
- Shriya Varada Rameshbabu
- Department of Computer Science & Engineering, Global Academy of Technology, Bangalore 560098, India
- Santosh Dasar
- Department of Radiology, SDM College of Medical Sciences and Hospital, Dharwad 580009, India
Collapse
|