1. Kumar S, Bhowmik B. ADConv-Net: Advanced Deep Convolution Neural Network for COVID-19 Diagnostics Using Chest X-Ray and CT Images. SN Computer Science 2025; 6:423. [DOI: 10.1007/s42979-025-03923-4]
2. Liao H, Huang C, Liu C, Zhang J, Tao F, Liu H, Liang H, Hu X, Li Y, Chen S, Li Y. Deep learning-based MVIT-MLKA model for accurate classification of pancreatic lesions: a multicenter retrospective cohort study. La Radiologia Medica 2025; 130:508-523. [PMID: 39832039] [DOI: 10.1007/s11547-025-01949-5]
Abstract
BACKGROUND Accurate differentiation between benign and malignant pancreatic lesions is critical for effective patient management. This study aimed to develop and validate a novel deep learning network using baseline computed tomography (CT) images to predict the classification of pancreatic lesions. METHODS This retrospective study included 864 patients (422 men, 442 women) with confirmed histopathological results across three medical centers, forming a training cohort, internal testing cohort, and external validation cohort. A novel hybrid model, Multi-Scale Large Kernel Attention with Mobile Vision Transformer (MVIT-MLKA), was developed, integrating CNN and Transformer architectures to classify pancreatic lesions. The model's performance was compared with traditional machine learning methods and advanced deep learning models. We also evaluated the diagnostic accuracy of radiologists with and without the assistance of the optimal model. Model performance was assessed through discrimination, calibration, and clinical applicability. RESULTS The MVIT-MLKA model demonstrated superior performance in classifying pancreatic lesions, achieving an AUC of 0.974 (95% CI 0.967-0.980) in the training set, 0.935 (95% CI 0.915-0.954) in the internal testing set, and 0.924 (95% CI 0.902-0.945) in the external validation set, outperforming traditional models and other deep learning models (P < 0.05). Radiologists aided by the MVIT-MLKA model showed significant improvements in diagnostic accuracy and sensitivity compared to those without model assistance (P < 0.05). Grad-CAM visualization enhanced model interpretability by effectively highlighting key lesion areas. CONCLUSION The MVIT-MLKA model efficiently differentiates between benign and malignant pancreatic lesions, surpassing traditional methods and significantly improving radiologists' diagnostic performance. The integration of this advanced deep learning model into clinical practice has the potential to reduce diagnostic errors and optimize treatment strategies.
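The hybrid CNN-Transformer design described above can be illustrated with a minimal PyTorch sketch of a multi-scale large-kernel attention block; the kernel sizes, channel counts, and module structure below are illustrative assumptions, not the authors' published MVIT-MLKA implementation.

```python
import torch
import torch.nn as nn

class MultiScaleLargeKernelAttention(nn.Module):
    """Illustrative multi-scale large-kernel attention: depthwise convolutions
    with different receptive fields produce an attention map that reweights
    the input features (a common MLKA-style pattern, not the paper's code)."""
    def __init__(self, channels, kernel_sizes=(7, 11, 21)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, k, padding=k // 2, groups=channels)
            for k in kernel_sizes
        ])
        self.pointwise = nn.Conv2d(channels * len(kernel_sizes), channels, 1)

    def forward(self, x):
        # Each depthwise branch captures context at a different spatial scale.
        multi_scale = torch.cat([branch(x) for branch in self.branches], dim=1)
        attention = torch.sigmoid(self.pointwise(multi_scale))
        return x * attention  # reweight the CNN/Transformer feature maps

if __name__ == "__main__":
    features = torch.randn(2, 64, 32, 32)   # stand-in CT feature maps
    out = MultiScaleLargeKernelAttention(64)(features)
    print(out.shape)  # torch.Size([2, 64, 32, 32])
```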
Affiliation(s)
- Hongfan Liao
- Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, 400016, China
- Cheng Huang
- College of Computer and Information Science, Southwest University, Chongqing, 400715, China
- Chunhua Liu
- Department of Radiology, Daping Hospital, Army Medical University, Chongqing, China
- Jiao Zhang
- Department of Radiology, The Third Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Fengming Tao
- Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, 400016, China
- Haotian Liu
- Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, 400016, China
- Hongwei Liang
- Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, 400016, China
- Xiaoli Hu
- Department of Radiology, The Third Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Yi Li
- Department of Radiology, The Third People's Hospital of Chengdu, Chengdu, China
- Shanxiong Chen
- College of Computer and Information Science, Southwest University, Chongqing, 400715, China.
- Yongmei Li
- Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, 400016, China.
3. Wolf D, Payer T, Lisson CS, Lisson CG, Beer M, Götz M, Ropinski T. Less is More: Selective reduction of CT data for self-supervised pre-training of deep learning models with contrastive learning improves downstream classification performance. Comput Biol Med 2024; 183:109242. [PMID: 39388839] [DOI: 10.1016/j.compbiomed.2024.109242]
Abstract
BACKGROUND Self-supervised pre-training of deep learning models with contrastive learning is a widely used technique in image analysis. Current findings indicate a strong potential for contrastive pre-training on medical images. However, further research is necessary to incorporate the particular characteristics of these images. METHOD We hypothesize that the similarity of medical images hinders the success of contrastive learning in the medical imaging domain. To this end, we investigate different strategies based on deep embedding, information theory, and hashing in order to identify and reduce redundancy in medical pre-training datasets. The effect of these different reduction strategies on contrastive learning is evaluated on two pre-training datasets and several downstream classification tasks. RESULTS In all of our experiments, dataset reduction leads to a considerable performance gain in downstream tasks, e.g., an AUC score improvement from 0.78 to 0.83 for the COVID CT Classification Grand Challenge, 0.97 to 0.98 for the OrganSMNIST Classification Challenge and 0.73 to 0.83 for a brain hemorrhage classification task. Furthermore, pre-training is up to nine times faster due to the dataset reduction. CONCLUSIONS In conclusion, the proposed approach highlights the importance of dataset quality and provides a transferable approach to improve contrastive pre-training for classification downstream tasks on medical images.
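One way such redundancy reduction can work is greedy filtering of near-duplicate images by cosine similarity of deep embeddings; the sketch below is a simplified assumption of this idea, not the paper's exact deep-embedding, information-theoretic, or hashing procedures.

```python
import numpy as np

def reduce_redundancy(embeddings, threshold=0.95):
    """Greedily keep an image only if its embedding is not too similar
    (cosine similarity below `threshold`) to any already-kept image."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    kept = []
    for i, vec in enumerate(normed):
        if all(float(vec @ normed[j]) < threshold for j in kept):
            kept.append(i)
    return kept

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    emb = rng.normal(size=(1000, 128))                   # stand-in CT embeddings
    emb[1] = emb[0] + 0.001 * rng.normal(size=128)       # a near-duplicate scan
    keep = reduce_redundancy(emb, threshold=0.95)
    print(len(keep), "of", len(emb), "kept")             # the near-duplicate is dropped
```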
Affiliation(s)
- Daniel Wolf
- Visual Computing Research Group, Institute of Media Informatics, Ulm University, James-Franck-Ring, Ulm, 89081, Germany; Experimental Radiology Research Group, Department for Diagnostic and Interventional Radiology, Ulm University Medical Center, Albert Einstein Allee, Ulm, 89081, Germany.
- Tristan Payer
- Visual Computing Research Group, Institute of Media Informatics, Ulm University, James-Franck-Ring, Ulm, 89081, Germany
- Catharina Silvia Lisson
- Experimental Radiology Research Group, Department for Diagnostic and Interventional Radiology, Ulm University Medical Center, Albert Einstein Allee, Ulm, 89081, Germany
- Christoph Gerhard Lisson
- Experimental Radiology Research Group, Department for Diagnostic and Interventional Radiology, Ulm University Medical Center, Albert Einstein Allee, Ulm, 89081, Germany
- Meinrad Beer
- Experimental Radiology Research Group, Department for Diagnostic and Interventional Radiology, Ulm University Medical Center, Albert Einstein Allee, Ulm, 89081, Germany
- Michael Götz
- Experimental Radiology Research Group, Department for Diagnostic and Interventional Radiology, Ulm University Medical Center, Albert Einstein Allee, Ulm, 89081, Germany
- Timo Ropinski
- Visual Computing Research Group, Institute of Media Informatics, Ulm University, James-Franck-Ring, Ulm, 89081, Germany
4. Nastase INA, Moldovanu S, Biswas KC, Moraru L. Role of inter- and extra-lesion tissue, transfer learning, and fine-tuning in the robust classification of breast lesions. Sci Rep 2024; 14:22754. [PMID: 39354128] [PMCID: PMC11448494] [DOI: 10.1038/s41598-024-74316-5]
Abstract
Accurate and unbiased classification of breast lesions is pivotal for early diagnosis and treatment, and a deep learning approach can effectively represent and utilize the digital content of images for more precise medical image analysis. Breast ultrasound imaging is useful for detecting and distinguishing benign masses from malignant masses. Based on the different ways in which benign and malignant tumors affect neighboring tissues, i.e., the pattern of growth and border irregularities, the penetration degree of the adjacent tissue, and tissue-level changes, we investigated the relationship between breast cancer imaging features and the roles of inter- and extra-lesional tissues and their impact on refining the performance of deep learning classification. The novelty of the proposed approach lies in considering the features extracted from the tissue inside the tumor (by performing an erosion operation) and from the lesion and surrounding tissue (by performing a dilation operation) for classification. This study uses these new features and three pre-trained deep neural networks to address the challenge of breast lesion classification in ultrasound images. To improve the classification accuracy and interpretability of the model, the proposed model leverages transfer learning to accelerate the training process. Three modern pre-trained CNN architectures (MobileNetV2, VGG16, and EfficientNetB7) are used for transfer learning and fine-tuning for optimization. There are concerns related to neural networks producing erroneous outputs in the presence of noisy images, variations in input data, or adversarial attacks; thus, the proposed system uses the BUS-BRA database (two classes/benign and malignant) for training and testing and the unseen BUSI database (two classes/benign and malignant) for testing. Extensive experiments have recorded accuracy and AUC as performance parameters. The results indicate that the proposed system outperforms the existing breast cancer detection algorithms reported in the literature. AUC values of 1.00 are calculated for VGG16 and EfficientNet-B7 in the dilation cases. The proposed approach will facilitate this challenging and time-consuming classification task.
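The intra-lesion and lesion-plus-margin inputs described above can be sketched with OpenCV morphology on a binary lesion mask; the structuring-element size and the way the masked views are built are illustrative assumptions.

```python
import cv2
import numpy as np

def build_rois(image, lesion_mask, kernel_size=15):
    """Return two masked views of a breast-ultrasound image:
    (1) tissue strictly inside the tumor (erosion of the mask) and
    (2) the lesion plus surrounding tissue (dilation of the mask)."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    inner = cv2.erode(lesion_mask, kernel, iterations=1)
    outer = cv2.dilate(lesion_mask, kernel, iterations=1)
    intra_lesion = cv2.bitwise_and(image, image, mask=inner)
    lesion_plus_margin = cv2.bitwise_and(image, image, mask=outer)
    return intra_lesion, lesion_plus_margin

if __name__ == "__main__":
    img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)  # stand-in ultrasound
    mask = np.zeros_like(img)
    cv2.circle(mask, (128, 128), 60, 255, -1)                    # stand-in lesion mask
    inner_view, outer_view = build_rois(img, mask)
    # Either view can then be fed to a fine-tuned MobileNetV2 / VGG16 / EfficientNet model.
```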
Affiliation(s)
- Iulia-Nela Anghelache Nastase
- The Modeling & Simulation Laboratory, Dunarea de Jos University of Galati, 47 Domneasca Street, Galati, 800008, Romania
- Emil Racovita Theoretical Highschool, 12-14, Regiment 11 Siret Street, Galati, 800332, Romania
- Simona Moldovanu
- The Modeling & Simulation Laboratory, Dunarea de Jos University of Galati, 47 Domneasca Street, Galati, 800008, Romania.
- Department of Computer Science and Information Technology, Faculty of Automation, Computers, Electrical Engineering and Electronics, Dunarea de Jos University of Galati, 47 Domneasca Street, Galati, 800008, Romania.
- Keka C Biswas
- Department of Biological Sciences, University of Alabama at Huntsville, Huntsville, AL, 35899, USA
- Luminita Moraru
- The Modeling & Simulation Laboratory, Dunarea de Jos University of Galati, 47 Domneasca Street, Galati, 800008, Romania.
- Department of Chemistry, Physics & Environment, Faculty of Sciences and Environment, Dunarea de Jos University of Galati, 47 Domneasca Street, Galati, 800008, Romania.
- Department of Physics, School of Science and Technology, Sefako Makgatho Health Sciences University, Medunsa-0204, Pretoria, South Africa.
5. Badkul A, Vamsi I, Sudha R. Comparative study of DCNN and image processing based classification of chest X-rays for identification of COVID-19 patients using fine-tuning. J Med Eng Technol 2024; 48:213-222. [PMID: 39648993] [DOI: 10.1080/03091902.2024.2438158]
Abstract
The conventional detection of COVID-19 by evaluating CT scan images is laborious and often suffers from high inter-observer variability and uncertainty. This work proposes the automatic detection and classification of COVID-19 by analysing chest X-ray images (CXR) with deep convolutional neural network (DCNN) models through a fine-tuning and pre-training approach. CXR images pertaining to four health scenarios, namely, healthy, COVID-19, bacterial pneumonia and viral pneumonia, are considered and subjected to data augmentation. Two types of input datasets are prepared: dataset I contains the original images categorised under four classes, whereas the original CXR images are subjected to image pre-processing via the Contrast Limited Adaptive Histogram Equalisation (CLAHE) algorithm and a Blackhat Morphological Operation (BMO) to devise input dataset II. Both datasets are supplied as input to various DCNN models such as DenseNet, MobileNet, ResNet, VGG16, and Xception for achieving multi-class classification. It is observed that the classification accuracies are improved, and the classification errors are reduced with the image pre-processing. Overall, the VGG16 model resulted in better classification accuracies and reduced classification errors while accomplishing multi-class classification. Thus, the proposed work would assist clinical diagnosis and reduce the workload of the front-line healthcare workforce and medical professionals.
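The pre-processing used to devise dataset II can be sketched directly with OpenCV; the CLAHE clip limit, tile grid, kernel size, and the way the two outputs are combined are illustrative assumptions rather than the paper's settings.

```python
import cv2
import numpy as np

def preprocess_cxr(gray_cxr, clip_limit=2.0, tile_grid=(8, 8), kernel_size=15):
    """CLAHE contrast enhancement plus a blackhat morphological operation,
    the two steps used to build input dataset II (parameter values are
    illustrative, not the paper's settings)."""
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    enhanced = clahe.apply(gray_cxr)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_size, kernel_size))
    blackhat = cv2.morphologyEx(enhanced, cv2.MORPH_BLACKHAT, kernel)
    return enhanced, blackhat

if __name__ == "__main__":
    cxr = np.random.randint(0, 256, (224, 224), dtype=np.uint8)  # stand-in chest X-ray
    enhanced, blackhat = preprocess_cxr(cxr)
    # One possible (assumed) way to combine the two outputs before the DCNN:
    combined = cv2.addWeighted(enhanced, 1.0, blackhat, 1.0, 0)
    print(combined.shape, combined.dtype)
```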
Affiliation(s)
- Amitesh Badkul
- Department of Electrical and Electronics, Birla Institute of Technology and Science-Pilani, Hyderabad, India
- Inturi Vamsi
- Mechanical Engineering Department, Chaitanya Bharathi Institute of Technology (A), Hyderabad, India
- Radhika Sudha
- Department of Electrical and Electronics, Birla Institute of Technology and Science-Pilani, Hyderabad, India
6. Balaha HM, Elgendy M, Alksas A, Shehata M, Alghamdi NS, Taher F, Ghazal M, Ghoneim M, Abdou EH, Sherif F, Elgarayhi A, Sallah M, Abdelbadie Salem M, Kamal E, Sandhu H, El-Baz A. A non-invasive AI-based system for precise grading of anosmia in COVID-19 using neuroimaging. Heliyon 2024; 10:e32726. [PMID: 38975154] [PMCID: PMC11226840] [DOI: 10.1016/j.heliyon.2024.e32726]
Abstract
COVID-19 (Coronavirus), an acute respiratory disorder, is caused by SARS-CoV-2 (severe acute respiratory syndrome coronavirus 2). The high prevalence of COVID-19 infection has drawn attention to a frequent illness symptom: olfactory and gustatory dysfunction. The primary purpose of this manuscript is to create a Computer-Assisted Diagnostic (CAD) system to determine whether a COVID-19 patient has normal, mild, or severe anosmia. To achieve this goal, we used fluid-attenuated inversion recovery Magnetic Resonance Imaging (FLAIR-MRI) and Diffusion Tensor Imaging (DTI) to extract the appearance, morphological, and diffusivity markers from the olfactory nerve. The proposed system begins with the identification of the olfactory nerve, which is performed by a skilled expert or radiologist. It then proceeds to carry out the subsequent primary steps: (i) extract appearance markers (i.e., 1st- and 2nd-order markers), morphology/shape markers (i.e., spherical harmonics), and diffusivity markers (i.e., Fractional Anisotropy (FA) & Mean Diffusivity (MD)), (ii) apply marker fusion based on the integrated markers, and (iii) determine the decision and corresponding performance metrics based on the most-promising classifier. The current study is unusual in that it ensembles (bags) the learned and fine-tuned ML classifiers and diagnoses olfactory bulb (OB) anosmia using majority voting. In the 5-fold approach, it achieved an accuracy of 94.1%, a balanced accuracy (BAC) of 92.18%, precision of 91.6%, recall of 90.61%, specificity of 93.75%, F1 score of 89.82%, and Intersection over Union (IoU) of 82.62%. In the 10-fold approach, stacking continued to demonstrate impressive results with an accuracy of 94.43%, BAC of 93.0%, precision of 92.03%, recall of 91.39%, specificity of 94.61%, F1 score of 91.23%, and IoU of 84.56%. In the leave-one-subject-out (LOSO) approach, the model continues to exhibit notable outcomes, achieving an accuracy of 91.6%, BAC of 90.27%, precision of 88.55%, recall of 87.96%, specificity of 92.59%, F1 score of 87.94%, and IoU of 78.69%. These results indicate that stacking and majority voting are crucial components of the CAD system, contributing significantly to the overall performance improvements. The proposed technology can help doctors assess which patients need more intensive clinical care.
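The ensembling step, bagging fine-tuned classifiers and deciding by majority vote, can be sketched with scikit-learn; the base classifiers and the synthetic stand-in for the fused MRI markers below are assumptions, not the study's actual models or data.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Stand-in for fused appearance/morphology/diffusivity markers (3 anosmia grades).
X, y = make_classification(n_samples=300, n_features=30, n_informative=10,
                           n_classes=3, random_state=0)

# Bag each tuned base learner, then combine them by hard (majority) voting.
ensemble = VotingClassifier(
    estimators=[
        ("svm", BaggingClassifier(SVC(), n_estimators=10, random_state=0)),
        ("knn", BaggingClassifier(KNeighborsClassifier(), n_estimators=10, random_state=0)),
        ("lr", BaggingClassifier(LogisticRegression(max_iter=1000), n_estimators=10, random_state=0)),
    ],
    voting="hard",
)
ensemble.fit(X, y)
print("Training accuracy:", ensemble.score(X, y))
```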
Affiliation(s)
- Hossam Magdy Balaha
- Department of Bioengineering, J.B. Speed School of Engineering, University of Louisville, Louisville, KY 40292, USA
- Mayada Elgendy
- Applied Theoretical Physics Research Group, Physics Department, Faculty of Science, Mansoura University, Mansoura 35516, Egypt
- Ahmed Alksas
- Department of Bioengineering, J.B. Speed School of Engineering, University of Louisville, Louisville, KY 40292, USA
- Mohamed Shehata
- Department of Bioengineering, J.B. Speed School of Engineering, University of Louisville, Louisville, KY 40292, USA
- Norah Saleh Alghamdi
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh 11671, Saudi Arabia
- Fatma Taher
- The College of Technological Innovation, Zayed University, Dubai, 19282, United Arab Emirates
- Mohammed Ghazal
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Mahitab Ghoneim
- Department of Radiology, Faculty of Medicine, Mansoura University, Mansoura 35516, Egypt
- Eslam Hamed Abdou
- Otolaryngology Department, Faculty of Medicine, Mansoura University, Mansoura 35516, Egypt
- Fatma Sherif
- Department of Radiology, Faculty of Medicine, Mansoura University, Mansoura 35516, Egypt
- Ahmed Elgarayhi
- Applied Theoretical Physics Research Group, Physics Department, Faculty of Science, Mansoura University, Mansoura 35516, Egypt
- Mohammed Sallah
- Applied Theoretical Physics Research Group, Physics Department, Faculty of Science, Mansoura University, Mansoura 35516, Egypt
- Department of Physics, College of Sciences, University of Bisha, Saudi Arabia
- Elsharawy Kamal
- Otolaryngology Department, Faculty of Medicine, Mansoura University, Mansoura 35516, Egypt
- Harpal Sandhu
- Department of Bioengineering, University of Louisville, Louisville, KY 40292, USA
- Ayman El-Baz
- Department of Bioengineering, J.B. Speed School of Engineering, University of Louisville, Louisville, KY 40292, USA
7. Balaha HM, Ayyad SM, Alksas A, Shehata M, Elsorougy A, Badawy MA, Abou El-Ghar M, Mahmoud A, Alghamdi NS, Ghazal M, Contractor S, El-Baz A. Precise Prostate Cancer Assessment Using IVIM-Based Parametric Estimation of Blood Diffusion from DW-MRI. Bioengineering (Basel) 2024; 11:629. [PMID: 38927865] [PMCID: PMC11200510] [DOI: 10.3390/bioengineering11060629]
Abstract
Prostate cancer is a significant health concern with high mortality rates and substantial economic impact. Early detection plays a crucial role in improving patient outcomes. This study introduces a non-invasive computer-aided diagnosis (CAD) system that leverages intravoxel incoherent motion (IVIM) parameters for the detection and diagnosis of prostate cancer (PCa). IVIM imaging enables the differentiation of water molecule diffusion within capillaries and outside vessels, offering valuable insights into tumor characteristics. The proposed approach utilizes a two-step segmentation approach through the use of three U-Net architectures for extracting tumor-containing regions of interest (ROIs) from the segmented images. The performance of the CAD system is thoroughly evaluated, considering the optimal classifier and IVIM parameters for differentiation and comparing the diagnostic value of IVIM parameters with the commonly used apparent diffusion coefficient (ADC). The results demonstrate that the combination of central zone (CZ) and peripheral zone (PZ) features with the Random Forest Classifier (RFC) yields the best performance. The CAD system achieves an accuracy of 84.08% and a balanced accuracy of 82.60%. This combination showcases high sensitivity (93.24%) and reasonable specificity (71.96%), along with good precision (81.48%) and F1 score (86.96%). These findings highlight the effectiveness of the proposed CAD system in accurately segmenting and diagnosing PCa. This study represents a significant advancement in non-invasive methods for early detection and diagnosis of PCa, showcasing the potential of IVIM parameters in combination with machine learning techniques. This developed solution has the potential to revolutionize PCa diagnosis, leading to improved patient outcomes and reduced healthcare costs.
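The IVIM parameters used by this CAD system come from the bi-exponential signal model, which can be fitted per voxel with SciPy; the b-values, initial guesses, and bounds below are illustrative assumptions, not the authors' acquisition protocol.

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim_signal(b, s0, f, d_star, d):
    """Bi-exponential IVIM model: perfusion fraction f with pseudo-diffusion
    D* (capillary blood) and true tissue diffusion coefficient D."""
    return s0 * (f * np.exp(-b * d_star) + (1.0 - f) * np.exp(-b * d))

# Illustrative b-values (s/mm^2) and a synthetic single-voxel signal.
b_values = np.array([0, 50, 100, 200, 400, 600, 800], dtype=float)
true_params = (1.0, 0.12, 0.02, 0.0015)            # s0, f, D*, D
signal = ivim_signal(b_values, *true_params)
signal += np.random.default_rng(0).normal(0, 0.005, signal.shape)

popt, _ = curve_fit(
    ivim_signal, b_values, signal,
    p0=(1.0, 0.1, 0.01, 0.001),
    bounds=([0, 0, 0.003, 0], [2, 1, 0.5, 0.003]),
)
print("fitted s0, f, D*, D:", popt)
```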
Affiliation(s)
- Hossam Magdy Balaha
- Department of Bioengineering, J.B. Speed School of Engineering, University of Louisville, Louisville, KY 40292, USA
- Sarah M. Ayyad
- Computers and Control Systems Engineering Department, Faculty of Engineering, Mansoura University, Mansoura 35516, Egypt
- Ahmed Alksas
- Department of Bioengineering, J.B. Speed School of Engineering, University of Louisville, Louisville, KY 40292, USA
- Mohamed Shehata
- Department of Bioengineering, J.B. Speed School of Engineering, University of Louisville, Louisville, KY 40292, USA
- Ali Elsorougy
- Radiology Department, Urology and Nephrology Center, Mansoura University, Mansoura 35516, Egypt
- Mohamed Ali Badawy
- Radiology Department, Urology and Nephrology Center, Mansoura University, Mansoura 35516, Egypt
- Mohamed Abou El-Ghar
- Radiology Department, Urology and Nephrology Center, Mansoura University, Mansoura 35516, Egypt
- Ali Mahmoud
- Department of Bioengineering, J.B. Speed School of Engineering, University of Louisville, Louisville, KY 40292, USA
- Norah Saleh Alghamdi
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh 84428, Saudi Arabia
- Mohammed Ghazal
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Sohail Contractor
- Department of Radiology, University of Louisville, Louisville, KY 40202, USA
- Ayman El-Baz
- Department of Bioengineering, J.B. Speed School of Engineering, University of Louisville, Louisville, KY 40292, USA
8. Morani K, Ayana EK, Kollias D, Unay D. COVID-19 Detection from Computed Tomography Images Using Slice Processing Techniques and a Modified Xception Classifier. Int J Biomed Imaging 2024; 2024:9962839. [PMID: 38883272] [PMCID: PMC11178392] [DOI: 10.1155/2024/9962839]
Abstract
This paper extends our previous method for COVID-19 diagnosis, proposing an enhanced solution for detecting COVID-19 from computed tomography (CT) images using a lean transfer learning-based model. To decrease model misclassifications, two key steps of image processing were employed. Firstly, the uppermost and lowermost slices were removed, preserving sixty percent of each patient's slices. Secondly, all slices underwent manual cropping to emphasize the lung areas. Subsequently, resized CT scans (224 × 224) were input into an Xception transfer learning model with a modified output. Both Xception's architecture and pretrained weights were leveraged in the method. A big and rigorously annotated database of CT images was used to verify the method. The number of patients/subjects in the dataset is more than 5000, and the number and shape of the slices in each CT scan varies greatly. Verification was made both on the validation partition and on the test partition of unseen images. Results on the COV19-CT database showcased not only improvement from our previous solution and the baseline but also comparable performance to the highest-achieving methods on the same dataset. Further validation studies could explore the scalability and adaptability of the developed methodologies across diverse healthcare settings and patient populations. Additionally, investigating the integration of advanced image processing techniques, such as automated region of interest detection and segmentation algorithms, could enhance the efficiency and accuracy of COVID-19 diagnosis.
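The slice-selection step, keeping the central sixty percent of each CT volume, can be sketched in a few lines of NumPy; the exact rounding convention is an assumption.

```python
import numpy as np

def keep_central_slices(volume, keep_fraction=0.6):
    """Drop the uppermost and lowermost slices of a CT volume (axis 0),
    preserving the central `keep_fraction` of slices for classification."""
    n_slices = volume.shape[0]
    n_keep = max(1, int(round(n_slices * keep_fraction)))
    start = (n_slices - n_keep) // 2
    return volume[start:start + n_keep]

if __name__ == "__main__":
    ct = np.random.rand(120, 512, 512).astype(np.float32)  # stand-in CT scan
    central = keep_central_slices(ct)
    print(central.shape)  # (72, 512, 512); each slice is then cropped to the
    # lung region and resized to 224x224 before the modified Xception model.
```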
Affiliation(s)
- Kenan Morani
- Izmir Democracy University Uckuyular, Gursel Aksel Blv No: 14 35140, Karabaglar, Izmir, Türkiye
- Esra Kaya Ayana
- Yildiz Technical University Yildiz 34349 Besiktas, Istanbul, Türkiye
- Devrim Unay
- Izmir Democracy University Uckuyular, Gursel Aksel Blv No: 14 35140, Karabaglar, Izmir, Türkiye
9. Tegegne AM, Lohani TK, Eshete AA. Groundwater potential delineation using geodetector based convolutional neural network in the Gunabay watershed of Ethiopia. Environmental Research 2024; 242:117790. [PMID: 38036202] [DOI: 10.1016/j.envres.2023.117790]
Abstract
Groundwater potential delineation is essential for efficient water resource utilization and long-term development. The scarcity of potable and irrigation water has become a critical issue due to natural and anthropogenic activities in meeting the demands of human survival and productivity. With these constraints, groundwater resources are now being used extensively in Ethiopia. Therefore, an innovative convolutional neural network (CNN) is successfully applied in the Gunabay watershed to delineate groundwater potential based on the selected major influencing factors. Groundwater recharge, lithology, drainage density, lineament density, transmissivity, and geomorphology were selected as the major influencing factors for groundwater potential mapping of the study area. Of the total 128 samples, 70% were used for training and 30% for testing. The spatial distribution of groundwater potential has been classified into five groups: very low (10.72%), low (25.67%), moderate (31.62%), high (19.93%), and very high (12.06%). The area receives high rainfall but has a very low amount of recharge due to the lack of proper soil and water conservation structures. The major outcome of the study showed that moderate and low potential is dominant. Geodetector results revealed that the magnitudes of influence on groundwater potential were ranked as transmissivity (0.48), recharge (0.26), lineament density (0.26), lithology (0.13), drainage density (0.12), and geomorphology (0.06). The model results showed that, using a convolutional neural network (CNN), groundwater potentiality can be delineated with high predictive capability and accuracy. AUC-based validation of the CNN yielded training and testing accuracies of 81.58% and 86.84%, respectively. Based on the findings, the local government can receive technical assistance for groundwater exploration and sustainable water resource development in the Gunabay watershed. Finally, the use of a geodetector-based deep learning algorithm can provide a new platform for industrial sectors, groundwater experts, scholars, and decision-makers.
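The factor ranking reported above is based on the geodetector q-statistic, which measures how much of the variance of the target is explained by stratifying on a factor, q = 1 - (sum_h N_h * var_h) / (N * var); a minimal NumPy version with synthetic data is sketched below.

```python
import numpy as np

def geodetector_q(values, strata):
    """Geodetector q-statistic: 1 - within-strata variance / total variance.
    `values` is the target (e.g., a groundwater potential index) and `strata`
    the class label of an influencing factor at each sample point."""
    values = np.asarray(values, dtype=float)
    strata = np.asarray(strata)
    n, total_var = len(values), values.var()
    within = sum(len(values[strata == s]) * values[strata == s].var()
                 for s in np.unique(strata))
    return 1.0 - within / (n * total_var)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    strata = rng.integers(0, 4, size=128)                 # e.g., lithology classes
    potential = strata * 0.5 + rng.normal(0, 0.3, 128)    # synthetic potential index
    print("q =", round(geodetector_q(potential, strata), 3))
```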
Affiliation(s)
| | - Tarun Kumar Lohani
- Arba Minch Water Technology Institute, Arba Minch University, Arba Minch, Ethiopia
| | | |
10. Abd El-Khalek AA, Balaha HM, Alghamdi NS, Ghazal M, Khalil AT, Abo-Elsoud MEA, El-Baz A. A concentrated machine learning-based classification system for age-related macular degeneration (AMD) diagnosis using fundus images. Sci Rep 2024; 14:2434. [PMID: 38287062] [PMCID: PMC10825213] [DOI: 10.1038/s41598-024-52131-2]
Abstract
The increase in eye disorders among older individuals has raised concerns, necessitating early detection through regular eye examinations. Age-related macular degeneration (AMD), a prevalent condition in individuals over 45, is a leading cause of vision impairment in the elderly. This paper presents a comprehensive computer-aided diagnosis (CAD) framework to categorize fundus images into geographic atrophy (GA), intermediate AMD, normal, and wet AMD categories. This is crucial for early detection and precise diagnosis of age-related macular degeneration (AMD), enabling timely intervention and personalized treatment strategies. We have developed a novel system that extracts both local and global appearance markers from fundus images. These markers are obtained from the entire retina and iso-regions aligned with the optical disc. Applying weighted majority voting on the best classifiers improves performance, resulting in an accuracy of 96.85%, sensitivity of 93.72%, specificity of 97.89%, precision of 93.86%, F1 of 93.72%, ROC of 95.85%, balanced accuracy of 95.81%, and weighted sum of 95.38%. This system not only achieves high accuracy but also provides a detailed assessment of the severity of each retinal region. This approach ensures that the final diagnosis aligns with the physician's understanding of AMD, aiding them in ongoing treatment and follow-up for AMD patients.
Affiliation(s)
- Aya A Abd El-Khalek
- Communications and Electronics Engineering Department, Nile Higher Institute for Engineering and Technology, Mansoura, Egypt
- Hossam Magdy Balaha
- BioImaging Lab, Department of Bioengineering, J.B. Speed School of Engineering, University of Louisville, Louisville, KY, USA
- Norah Saleh Alghamdi
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
- Mohammed Ghazal
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi, UAE
- Abeer T Khalil
- Communications and Electronics Engineering Department, Faculty of Engineering, Mansoura University, Mansoura, Egypt
- Mohy Eldin A Abo-Elsoud
- Communications and Electronics Engineering Department, Faculty of Engineering, Mansoura University, Mansoura, Egypt
- Ayman El-Baz
- BioImaging Lab, Department of Bioengineering, J.B. Speed School of Engineering, University of Louisville, Louisville, KY, USA.
11. Henao JAG, Depotter A, Bower DV, Bajercius H, Todorova PT, Saint-James H, de Mortanges AP, Barroso MC, He J, Yang J, You C, Staib LH, Gange C, Ledda RE, Caminiti C, Silva M, Cortopassi IO, Dela Cruz CS, Hautz W, Bonel HM, Sverzellati N, Duncan JS, Reyes M, Poellinger A. A Multiclass Radiomics Method-Based WHO Severity Scale for Improving COVID-19 Patient Assessment and Disease Characterization From CT Scans. Invest Radiol 2023; 58:882-893. [PMID: 37493348] [PMCID: PMC10662611] [DOI: 10.1097/rli.0000000000001005]
Abstract
OBJECTIVES The aim of this study was to evaluate the severity of COVID-19 patients' disease by comparing a multiclass lung lesion model to a single-class lung lesion model and radiologists' assessments in chest computed tomography scans. MATERIALS AND METHODS The proposed method, AssessNet-19, was developed in 2 stages in this retrospective study. Four COVID-19-induced tissue lesions were manually segmented to train a 2D-U-Net network for a multiclass segmentation task followed by extensive extraction of radiomic features from the lung lesions. LASSO regression was used to reduce the feature set, and the XGBoost algorithm was trained to classify disease severity based on the World Health Organization Clinical Progression Scale. The model was evaluated using 2 multicenter cohorts: a development cohort of 145 COVID-19-positive patients from 3 centers to train and test the severity prediction model using manually segmented lung lesions. In addition, an evaluation set of 90 COVID-19-positive patients was collected from 2 centers to evaluate AssessNet-19 in a fully automated fashion. RESULTS AssessNet-19 achieved an F1-score of 0.76 ± 0.02 for severity classification in the evaluation set, which was superior to the 3 expert thoracic radiologists (F1 = 0.63 ± 0.02) and the single-class lesion segmentation model (F1 = 0.64 ± 0.02). In addition, AssessNet-19 automated multiclass lesion segmentation obtained a mean Dice score of 0.70 for ground-glass opacity, 0.68 for consolidation, 0.65 for pleural effusion, and 0.30 for band-like structures compared with ground truth. Moreover, it achieved a high agreement with radiologists for quantifying disease extent with Cohen κ of 0.94, 0.92, and 0.95. CONCLUSIONS A novel artificial intelligence multiclass radiomics model including 4 lung lesions to assess disease severity based on the World Health Organization Clinical Progression Scale more accurately determines the severity of COVID-19 patients than a single-class model and radiologists' assessment.
12. Wolf D, Payer T, Lisson CS, Lisson CG, Beer M, Götz M, Ropinski T. Self-supervised pre-training with contrastive and masked autoencoder methods for dealing with small datasets in deep learning for medical imaging. Sci Rep 2023; 13:20260. [PMID: 37985685] [PMCID: PMC10662445] [DOI: 10.1038/s41598-023-46433-0]
Abstract
Deep learning in medical imaging has the potential to minimize the risk of diagnostic errors, reduce radiologist workload, and accelerate diagnosis. Training such deep learning models requires large and accurate datasets, with annotations for all training samples. However, in the medical imaging domain, annotated datasets for specific tasks are often small due to the high complexity of annotations, limited access, or the rarity of diseases. To address this challenge, deep learning models can be pre-trained on large image datasets without annotations using methods from the field of self-supervised learning. After pre-training, small annotated datasets are sufficient to fine-tune the models for a specific task. The most popular self-supervised pre-training approaches in medical imaging are based on contrastive learning. However, recent studies in natural image processing indicate a strong potential for masked autoencoder approaches. Our work compares state-of-the-art contrastive learning methods with the recently introduced masked autoencoder approach "SparK" for convolutional neural networks (CNNs) on medical images. Therefore, we pre-train on a large unannotated CT image dataset and fine-tune on several CT classification tasks. Due to the challenge of obtaining sufficient annotated training data in medical imaging, it is of particular interest to evaluate how the self-supervised pre-training methods perform when fine-tuning on small datasets. By experimenting with gradually reducing the training dataset size for fine-tuning, we find that the reduction has different effects depending on the type of pre-training chosen. The SparK pre-training method is more robust to the training dataset size than the contrastive methods. Based on our results, we propose the SparK pre-training for medical imaging tasks with only small annotated datasets.
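The masked-autoencoder idea can be sketched for a CNN as follows: hide random patches of the input, reconstruct the image, and compute the loss only on the hidden regions. The toy encoder-decoder and masking scheme below are illustrative assumptions; the actual SparK method uses sparse convolutions and a hierarchical decoder.

```python
import torch
import torch.nn as nn

class TinyMaskedAutoencoder(nn.Module):
    """Toy convolutional encoder-decoder for masked-image pretraining."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def random_patch_mask(images, patch=16, ratio=0.6):
    """Zero out a random `ratio` of non-overlapping patches; return the masked
    images and a pixel-level mask marking the hidden regions."""
    b, _, h, w = images.shape
    grid = torch.rand(b, 1, h // patch, w // patch) < ratio
    mask = grid.repeat_interleave(patch, dim=2).repeat_interleave(patch, dim=3).float()
    return images * (1 - mask), mask

model = TinyMaskedAutoencoder()
ct_slices = torch.randn(8, 1, 64, 64)                       # stand-in CT slices
masked, mask = random_patch_mask(ct_slices)
recon = model(masked)
loss = ((recon - ct_slices) ** 2 * mask).sum() / mask.sum() # loss on masked pixels only
loss.backward()  # the pretrained encoder is later fine-tuned for classification
```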
Affiliation(s)
- Daniel Wolf
- Visual Computing Research Group, Institute of Media Informatics, Ulm University, Ulm, Germany.
- Experimental Radiology Research Group, Department for Diagnostic and Interventional Radiology, Ulm University Medical Center, Ulm, Germany.
- Tristan Payer
- Visual Computing Research Group, Institute of Media Informatics, Ulm University, Ulm, Germany
- Catharina Silvia Lisson
- Experimental Radiology Research Group, Department for Diagnostic and Interventional Radiology, Ulm University Medical Center, Ulm, Germany
- Christoph Gerhard Lisson
- Experimental Radiology Research Group, Department for Diagnostic and Interventional Radiology, Ulm University Medical Center, Ulm, Germany
- Meinrad Beer
- Experimental Radiology Research Group, Department for Diagnostic and Interventional Radiology, Ulm University Medical Center, Ulm, Germany
- Michael Götz
- Experimental Radiology Research Group, Department for Diagnostic and Interventional Radiology, Ulm University Medical Center, Ulm, Germany
- Timo Ropinski
- Visual Computing Research Group, Institute of Media Informatics, Ulm University, Ulm, Germany
13. Nahiduzzaman M, Goni MOF, Hassan R, Islam MR, Syfullah MK, Shahriar SM, Anower MS, Ahsan M, Haider J, Kowalski M. Parallel CNN-ELM: A multiclass classification of chest X-ray images to identify seventeen lung diseases including COVID-19. Expert Systems with Applications 2023; 229:120528. [PMID: 37274610] [PMCID: PMC10223636] [DOI: 10.1016/j.eswa.2023.120528]
Abstract
Numerous epidemic lung diseases such as COVID-19, tuberculosis (TB), and pneumonia have spread over the world, killing millions of people. Medical specialists have experienced challenges in correctly identifying these diseases due to their subtle differences in Chest X-ray images (CXR). To assist the medical experts, this study proposed a computer-aided lung illness identification method based on the CXR images. For the first time, 17 different forms of lung disorders were considered and the study was divided into six trials with each containing two, two, three, four, fourteen, and seventeen different forms of lung disorders. The proposed framework combined robust feature extraction capabilities of a lightweight parallel convolutional neural network (CNN) with the classification abilities of the extreme learning machine algorithm named CNN-ELM. An optimistic accuracy of 90.92% and an area under the curve (AUC) of 96.93% was achieved when 17 classes were classified side by side. It also accurately identified COVID-19 and TB with 99.37% and 99.98% accuracy, respectively, in 0.996 microseconds for a single image. Additionally, the current results also demonstrated that the framework could outperform the existing state-of-the-art (SOTA) models. On top of that, a secondary conclusion drawn from this study was that the prospective framework retained its effectiveness over a range of real-world environments, including balanced-unbalanced or large-small datasets, large multiclass or simple binary class, and high- or low-resolution images. A prototype Android App was also developed to establish the potential of the framework in real-life implementation.
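The extreme learning machine stage can be sketched in NumPy: hidden-layer weights are random and fixed, and only the output weights are solved in closed form from the CNN features. The feature dimensionality and synthetic data below are illustrative assumptions.

```python
import numpy as np

class ELM:
    """Minimal extreme learning machine: random hidden layer + least-squares output."""
    def __init__(self, n_hidden=512, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, features, labels, n_classes):
        self.w = self.rng.normal(size=(features.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        h = np.tanh(features @ self.w + self.b)           # hidden activations
        targets = np.eye(n_classes)[labels]               # one-hot labels
        self.beta = np.linalg.pinv(h) @ targets           # closed-form output weights
        return self

    def predict(self, features):
        h = np.tanh(features @ self.w + self.b)
        return (h @ self.beta).argmax(axis=1)

# Stand-in for features produced by the lightweight parallel CNN (17 classes).
rng = np.random.default_rng(1)
X, y = rng.normal(size=(500, 256)), rng.integers(0, 17, size=500)
elm = ELM().fit(X, y, n_classes=17)
print("training accuracy:", (elm.predict(X) == y).mean())
```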
Affiliation(s)
- Md Nahiduzzaman
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Md Omaer Faruq Goni
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Rakibul Hassan
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Md Robiul Islam
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Md Khalid Syfullah
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Saleh Mohammed Shahriar
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Md Shamim Anower
- Department of Electrical & Electronic Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Mominul Ahsan
- Department of Computer Science, University of York, Deramore Lane, Heslington, York YO10 5GH, UK
- Julfikar Haider
- Department of Engineering, Manchester Metropolitan University, Chester St, Manchester M1 5GD, UK
- Marcin Kowalski
- Institute of Optoelectronics, Military University of Technology, Gen. S. Kaliskiego 2, 00-908 Warsaw, Poland
14. Badawy M, Balaha HM, Maklad AS, Almars AM, Elhosseini MA. Revolutionizing Oral Cancer Detection: An Approach Using Aquila and Gorilla Algorithms Optimized Transfer Learning-Based CNNs. Biomimetics (Basel) 2023; 8:499. [PMID: 37887629] [PMCID: PMC10604828] [DOI: 10.3390/biomimetics8060499]
Abstract
The early detection of oral cancer is pivotal for improving patient survival rates. However, the high cost of manual initial screenings poses a challenge, especially in resource-limited settings. Deep learning offers an enticing solution by enabling automated and cost-effective screening. This study introduces a groundbreaking empirical framework designed to revolutionize the accurate and automatic classification of oral cancer using microscopic histopathology slide images. This innovative system capitalizes on the power of convolutional neural networks (CNNs), strengthened by the synergy of transfer learning (TL), and further fine-tuned using the novel Aquila Optimizer (AO) and Gorilla Troops Optimizer (GTO), two cutting-edge metaheuristic optimization algorithms. This integration is a novel approach, addressing bias and unpredictability issues commonly encountered in the preprocessing and optimization phases. In the experiments, the capabilities of well-established pre-trained TL models, including VGG19, VGG16, MobileNet, MobileNetV3Small, MobileNetV2, MobileNetV3Large, NASNetMobile, and DenseNet201, all initialized with 'ImageNet' weights, were harnessed. The experimental dataset consisted of the Histopathologic Oral Cancer Detection dataset, which includes a 'normal' class with 2494 images and an 'OSCC' (oral squamous cell carcinoma) class with 2698 images. The results reveal a remarkable performance distinction between the AO and GTO, with the AO consistently outperforming the GTO across all models except for the Xception model. The DenseNet201 model stands out as the most accurate, achieving an astounding average accuracy rate of 99.25% with the AO and 97.27% with the GTO. This innovative framework signifies a significant leap forward in automating oral cancer detection, showcasing the tremendous potential of applying optimized deep learning models in the realm of healthcare diagnostics. The integration of the AO and GTO in our CNN-based system not only pushes the boundaries of classification accuracy but also underscores the transformative impact of metaheuristic optimization techniques in the field of medical image analysis.
Affiliation(s)
- Mahmoud Badawy
- Department of Computer Science and Informatics, Applied College, Taibah University, Al Madinah Al Munawwarah 41461, Saudi Arabia
- Department of Computers and Control Systems Engineering, Faculty of Engineering, Mansoura University, Mansoura 35516, Egypt (M.A.E.)
- Hossam Magdy Balaha
- Department of Computers and Control Systems Engineering, Faculty of Engineering, Mansoura University, Mansoura 35516, Egypt (M.A.E.)
- Department of Bioengineering, Speed School of Engineering, University of Louisville, Louisville, KY 40208, USA
- Ahmed S. Maklad
- College of Computer Science and Engineering, Taibah University, Yanbu 46421, Saudi Arabia; (A.S.M.); (A.M.A.)
- Information Systems Department, Faculty of Computers and Artificial Intelligence, Beni-Suef University, Beni-Suif 62521, Egypt
- Abdulqader M. Almars
- College of Computer Science and Engineering, Taibah University, Yanbu 46421, Saudi Arabia; (A.S.M.); (A.M.A.)
- Mostafa A. Elhosseini
- Department of Computers and Control Systems Engineering, Faculty of Engineering, Mansoura University, Mansoura 35516, Egypt (M.A.E.)
- College of Computer Science and Engineering, Taibah University, Yanbu 46421, Saudi Arabia; (A.S.M.); (A.M.A.)
15. Santosh KC, GhoshRoy D, Nakarmi S. A Systematic Review on Deep Structured Learning for COVID-19 Screening Using Chest CT from 2020 to 2022. Healthcare (Basel) 2023; 11:2388. [PMID: 37685422] [PMCID: PMC10486542] [DOI: 10.3390/healthcare11172388]
Abstract
The emergence of the COVID-19 pandemic in Wuhan in 2019 led to the discovery of a novel coronavirus. The World Health Organization (WHO) designated it as a global pandemic on 11 March 2020 due to its rapid and widespread transmission. Its impact has had profound implications, particularly in the realm of public health. Extensive scientific endeavors have been directed towards devising effective treatment strategies and vaccines. Within the healthcare and medical imaging domain, the application of artificial intelligence (AI) has brought significant advantages. This study delves into peer-reviewed research articles spanning the years 2020 to 2022, focusing on AI-driven methodologies for the analysis and screening of COVID-19 through chest CT scan data. We assess the efficacy of deep learning algorithms in facilitating decision making processes. Our exploration encompasses various facets, including data collection, systematic contributions, emerging techniques, and encountered challenges. However, the comparison of outcomes between 2020 and 2022 proves intricate due to shifts in dataset magnitudes over time. The initiatives aimed at developing AI-powered tools for the detection, localization, and segmentation of COVID-19 cases are primarily centered on educational and training contexts. We deliberate on their merits and constraints, particularly in the context of necessitating cross-population train/test models. Our analysis encompassed a review of 231 research publications, bolstered by a meta-analysis employing search keywords (COVID-19 OR Coronavirus) AND chest CT AND (deep learning OR artificial intelligence OR medical imaging) on both the PubMed Central Repository and Web of Science platforms.
Affiliation(s)
- KC Santosh
- 2AI: Applied Artificial Intelligence Research Lab, Vermillion, SD 57069, USA
- Debasmita GhoshRoy
- School of Automation, Banasthali Vidyapith, Tonk 304022, Rajasthan, India;
- Suprim Nakarmi
- Department of Computer Science, University of South Dakota, Vermillion, SD 57069, USA;
16. Almutairi SA. A multimodal AI-based non-invasive COVID-19 grading framework powered by deep learning, manta ray, and fuzzy inference system from multimedia vital signs. Heliyon 2023; 9:e16552. [PMID: 37251492] [PMCID: PMC10210825] [DOI: 10.1016/j.heliyon.2023.e16552]
Abstract
The COVID-19 pandemic has presented unprecedented challenges to healthcare systems worldwide. One of the key challenges in controlling and managing the pandemic is accurate and rapid diagnosis of COVID-19 cases. Traditional diagnostic methods such as RT-PCR tests are time-consuming and require specialized equipment and trained personnel. Computer-aided diagnosis systems and artificial intelligence (AI) have emerged as promising tools for developing cost-effective and accurate diagnostic approaches. Most studies in this area have focused on diagnosing COVID-19 based on a single modality, such as chest X-rays or cough sounds. However, relying on a single modality may not accurately detect the virus, especially in its early stages. In this research, we propose a non-invasive diagnostic framework consisting of four cascaded layers that work together to accurately detect COVID-19 in patients. The first layer of the framework performs basic diagnostics such as patient temperature, blood oxygen level, and breathing profile, providing initial insights into the patient's condition. The second layer analyzes the coughing profile, while the third layer evaluates chest imaging data such as X-ray and CT scans. Finally, the fourth layer utilizes a fuzzy logic inference system based on the previous three layers to generate a reliable and accurate diagnosis. To evaluate the effectiveness of the proposed framework, we used two datasets: the Cough Dataset and the COVID-19 Radiography Database. The experimental results demonstrate that the proposed framework is effective and trustworthy in terms of accuracy, precision, sensitivity, specificity, F1-score, and balanced accuracy. The audio-based classification achieved an accuracy of 96.55%, while the CXR-based classification achieved an accuracy of 98.55%. The proposed framework has the potential to significantly improve the accuracy and speed of COVID-19 diagnosis, allowing for more effective control and management of the pandemic. Furthermore, the framework's non-invasive nature makes it a more attractive option for patients, reducing the risk of infection and discomfort associated with traditional diagnostic methods.
Affiliation(s)
- Saleh Ateeq Almutairi
- Taibah University, Applied College, Computer Science and Information department, Medinah, 41461, Saudi Arabia
17. COVINet: A hybrid model for classification of COVID and Non-COVID pneumonia in CT and X-Ray imagery. International Journal of Cognitive Computing in Engineering 2023; 4:149-159. [PMCID: PMC10017176] [DOI: 10.1016/j.ijcce.2023.03.005]
Abstract
The COVID-19 pandemic has resulted in a significant increase in the number of pneumonia cases, including those caused by the Coronavirus. RT-PCR is used as the primary detection tool for COVID-19 pneumonia, but chest imaging, including CT scans and X-Ray imagery, can also serve as an important secondary tool for the diagnosis of pneumonia, including COVID pneumonia. However, the interpretation of chest imaging in COVID-19 pneumonia can be challenging, as the signs of the disease on imaging may be subtle and may overlap with those of non-COVID pneumonia. In this paper, we propose a hybrid model named COVINet, which uses ResNet-101 as the feature extractor and a classical K-Nearest Neighbors classifier to provide automated detection of COVID pneumonia in X-Ray and CT imagery. The proposed hybrid model achieved a classification accuracy of 98.6%. The model's precision, recall, and F1-Score values were also impressive, ranging from 98% to 99%. To support the proposed model, several CNN-based feature extractors and classical machine learning classifiers were also evaluated. The outcomes across these combinations suggest that our model can significantly enhance the accuracy and precision of detecting COVID-19 pneumonia on chest imaging and holds the potential to be a valuable resource for early identification and diagnosis of the illness by radiologists and medical practitioners.
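A minimal sketch of this kind of hybrid pipeline, a frozen ResNet-101 backbone as feature extractor with a scikit-learn K-Nearest Neighbors classifier on top, is shown below; the preprocessing, k value, and label coding are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.neighbors import KNeighborsClassifier

# ResNet-101 with its final fully connected layer replaced by an identity,
# so it outputs 2048-dimensional feature vectors (downloads ImageNet weights).
backbone = models.resnet101(weights=models.ResNet101_Weights.DEFAULT)
backbone.fc = nn.Identity()
backbone.eval()

@torch.no_grad()
def extract_features(batch):
    """batch: float tensor of shape (N, 3, 224, 224), ImageNet-normalized."""
    return backbone(batch).numpy()

# Stand-in tensors for preprocessed chest X-ray / CT images and labels.
train_images, train_labels = torch.randn(16, 3, 224, 224), [0, 1] * 8
test_images = torch.randn(4, 3, 224, 224)

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(extract_features(train_images), train_labels)
print(knn.predict(extract_features(test_images)))  # 0 = non-COVID, 1 = COVID (assumed coding)
```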
18. Wu Y, Dai Q, Lu H. COVID-19 diagnosis utilizing wavelet-based contrastive learning with chest CT images. Chemometrics and Intelligent Laboratory Systems 2023; 236:104799. [PMID: 36883063] [PMCID: PMC9981271] [DOI: 10.1016/j.chemolab.2023.104799]
Abstract
The pandemic caused by the coronavirus disease 2019 (COVID-19) has continuously wreaked havoc on human health. Computer-aided diagnosis (CAD) systems based on chest computed tomography (CT) have become a popular option for COVID-19 diagnosis. However, due to the high cost of data annotation in the medical field, the amount of unannotated data is much larger than that of annotated data, while a highly accurate CAD system typically requires a large amount of labeled training data. To address this problem, this paper presents an automated and accurate COVID-19 diagnosis system that uses few labeled CT images. The overall framework of this system is based on self-supervised contrastive learning (SSCL). On top of this framework, our enhancements can be summarized as follows. 1) We integrate a two-dimensional discrete wavelet transform with contrastive learning to fully exploit the features of the images. 2) We use the recently proposed COVID-Net as the encoder, redesigned for the specificity of the task and for learning efficiency. 3) A new pretraining strategy based on contrastive learning is applied for broader generalization ability. 4) An additional auxiliary task is introduced to promote classification performance. The final experimental results of our system attained 93.55%, 91.59%, 96.92%, and 94.18% for accuracy, recall, precision, and F1-score, respectively. By comparing results with existing schemes, we demonstrate the performance enhancement and superiority of our proposed system.
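The wavelet component can be sketched with PyWavelets: a single-level 2-D discrete wavelet transform splits each CT slice into approximation and detail subbands, which can then serve as complementary views for a contrastive objective; the choice of wavelet and of which subbands to pair is an illustrative assumption.

```python
import numpy as np
import pywt

def dwt_views(ct_slice, wavelet="haar"):
    """Single-level 2-D DWT: returns the approximation subband (LL) and the
    stacked detail subbands (LH, HL, HH) as two complementary 'views'."""
    ll, (lh, hl, hh) = pywt.dwt2(ct_slice, wavelet)
    return ll, np.stack([lh, hl, hh])

def cosine_contrastive_score(z1, z2):
    """Cosine similarity between two flattened embeddings; in a contrastive
    loss such as NT-Xent, matching views are pulled together, others apart."""
    a, b = z1.ravel(), z2.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

if __name__ == "__main__":
    slice_ = np.random.rand(128, 128).astype(np.float32)  # stand-in CT slice
    ll, details = dwt_views(slice_)
    print(ll.shape, details.shape)              # (64, 64) and (3, 64, 64)
    print(cosine_contrastive_score(ll, ll))     # 1.0 for identical views
```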
Affiliation(s)
- Yanfu Wu
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, 211106, PR China
- Qun Dai
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, 211106, PR China
- Han Lu
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, 211106, PR China
19. Rahman T, Chowdhury MEH, Khandakar A, Mahbub ZB, Hossain MSA, Alhatou A, Abdalla E, Muthiyal S, Islam KF, Kashem SBA, Khan MS, Zughaier SM, Hossain M. BIO-CXRNET: a robust multimodal stacking machine learning technique for mortality risk prediction of COVID-19 patients using chest X-ray images and clinical data. Neural Comput Appl 2023; 35:1-23. [PMID: 37362565] [PMCID: PMC10157130] [DOI: 10.1007/s00521-023-08606-w]
Abstract
Quick and accurate diagnosis of COVID-19 is a pressing need, and this study presents a multimodal system to meet it. The system employs a machine learning module trained on data collected from 930 COVID-19 patients hospitalized in Italy during the first wave of the pandemic (March-June 2020). The dataset consists of twenty-five biomarkers from electronic health records and chest X-ray (CXR) images. The system can classify patients as low or high risk with an accuracy, sensitivity, and F1-score of 89.03%, 90.44%, and 89.03%, respectively, which is 6% more accurate than systems that use either CXR images or biomarker data alone. In addition, the system can estimate the mortality risk of high-risk patients using a multivariate logistic regression-based nomogram scoring technique. Interested physicians can use the presented system to predict the early mortality risk of COVID-19 patients via the web link Covid-severity-grading-AI by providing the following inputs: a CXR image file, lactate dehydrogenase (LDH), oxygen saturation (O2%), white blood cell count, C-reactive protein, and age. In this way, the study contributes to the management of COVID-19 patients by predicting early mortality risk. Supplementary Information The online version contains supplementary material available at 10.1007/s00521-023-08606-w.
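The multimodal stacking idea described above can be sketched as follows: a logistic regression meta-learner combines an image model's risk probability with the clinical biomarkers. This is a minimal, hypothetical illustration with scikit-learn; the array names and the base image model are placeholders, not the published pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def stack_features(cxr_probs, biomarkers):
    # cxr_probs: (N, 1) predicted risk probabilities from an image model.
    # biomarkers: (N, K) clinical values, e.g. LDH, O2 saturation, WBC, CRP, age.
    return np.hstack([cxr_probs, biomarkers])

# Assumed arrays: p_train/p_test from a CXR classifier, b_train/b_test biomarkers,
# and y_train binary low/high-risk labels.
# meta = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
# meta.fit(stack_features(p_train, b_train), y_train)
# risk = meta.predict_proba(stack_features(p_test, b_test))[:, 1]
```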
Affiliation(s)
- Tawsifur Rahman
- Department of Electrical Engineering, Qatar University, P.O. Box 2713, Doha, Qatar
| | | | - Amith Khandakar
- Department of Electrical Engineering, Qatar University, P.O. Box 2713, Doha, Qatar
| | - Zaid Bin Mahbub
- Department of Physics and Mathematics, North South University, Dhaka, 1229 Bangladesh
| | | | - Abraham Alhatou
- Department of Biology, University of South Carolina (USC), Columbia, SC 29208 USA
| | - Eynas Abdalla
- Anesthesia Department, Hamad General Hospital, P.O. Box 3050, Doha, Qatar
| | - Sreekumar Muthiyal
- Department of Radiology, Hamad General Hospital, P.O. Box 3050, Doha, Qatar
| | | | - Saad Bin Abul Kashem
- Department of Computer Science, AFG College with the University of Aberdeen, Doha, Qatar
| | - Muhammad Salman Khan
- Department of Electrical Engineering, Qatar University, P.O. Box 2713, Doha, Qatar
| | - Susu M. Zughaier
- Department of Basic Medical Sciences, College of Medicine, QU Health, Qatar University, P.O. Box 2713, Doha, Qatar
| | - Maqsud Hossain
- NSU Genome Research Institute (NGRI), North South University, Dhaka, 1229 Bangladesh
| |
|
20
|
Badawy M, Almars AM, Balaha HM, Shehata M, Qaraad M, Elhosseini M. A two-stage renal disease classification based on transfer learning with hyperparameters optimization. Front Med (Lausanne) 2023; 10:1106717. [PMID: 37089598 PMCID: PMC10113505 DOI: 10.3389/fmed.2023.1106717] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2022] [Accepted: 03/14/2023] [Indexed: 04/09/2023] Open
Abstract
Renal diseases are common health problems that affect millions of people around the world. Among them, kidney stones affect roughly 1 to 15% of the global population and are considered one of the leading causes of chronic kidney disease (CKD). In addition, renal cancer is the tenth most prevalent type of cancer, accounting for 2.5% of all cancers. Artificial intelligence (AI) in medical systems can assist radiologists and other healthcare professionals in diagnosing different renal diseases (RD) with high reliability. This study proposes an AI-based transfer learning framework to detect RD at an early stage. The framework, applied to CT scans and images from microscopic histopathological examinations, automatically and accurately classifies patients with RD using convolutional neural networks (CNNs), pre-trained models, and an optimization algorithm. The study used the pre-trained CNN models VGG16, VGG19, Xception, DenseNet201, MobileNet, MobileNetV2, MobileNetV3Large, and NASNetMobile, and the sparrow search algorithm (SpaSA) is used to find the best configuration and enhance each pre-trained model's performance. Two datasets were used: the first contains four classes (cyst, normal, stone, and tumor), while the second contains five categories reflecting tumor severity (Grade 0 through Grade 4). DenseNet201 and MobileNet are the best pre-trained models for the four-class dataset; the SGD Nesterov optimizer is recommended by three models, while two models recommend AdaGrad and AdaMax. For the five-class dataset, DenseNet201 and Xception are the best. Experimental results prove the superiority of the proposed framework over other state-of-the-art classification models, with recorded accuracies of 99.98% (four classes) and 100% (five classes).
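A bare-bones version of the transfer-learning stage might look like the snippet below: a pre-trained DenseNet201 backbone with a new four-class head, and the three optimizer families reported in the study listed as candidates. The search loop that SpaSA would perform is only indicated by a comment; this is an assumed PyTorch setup, not the authors' implementation.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # cyst, normal, stone, tumor

def build_model():
    model = models.densenet201(weights=models.DenseNet201_Weights.IMAGENET1K_V1)
    for p in model.parameters():          # freeze the pre-trained backbone
        p.requires_grad = False
    model.classifier = nn.Linear(model.classifier.in_features, NUM_CLASSES)
    return model

def candidate_optimizers(model, lr=1e-3):
    # The three optimizer families mentioned in the abstract; a metaheuristic such
    # as SpaSA would search over these (and other hyperparameters) rather than
    # relying on this hard-coded list.
    return {
        "sgd_nesterov": torch.optim.SGD(model.classifier.parameters(), lr=lr,
                                        momentum=0.9, nesterov=True),
        "adagrad": torch.optim.Adagrad(model.classifier.parameters(), lr=lr),
        "adamax": torch.optim.Adamax(model.classifier.parameters(), lr=lr),
    }
```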
Affiliation(s)
- Mahmoud Badawy
- Department of Computers and Control Systems Engineering, Faculty of Engineering, Mansoura University, Mansoura, Egypt
- Department of Computer Science and Informatics, Applied College, Taibah University, Al Madinah Al Munawwarah, Saudi Arabia
| | - Abdulqader M Almars
- College of Computer Science and Engineering, Taibah University, Yanbu, Saudi Arabia
| | - Hossam Magdy Balaha
- Department of Computers and Control Systems Engineering, Faculty of Engineering, Mansoura University, Mansoura, Egypt
- Department of Bioengineering, Speed School of Engineering, University of Louisville, Louisville, KY, United States
| | - Mohamed Shehata
- Department of Computer Science and Engineering, Speed School of Engineering, University of Louisville, Louisville, KY, United States
| | - Mohammed Qaraad
- Department of Computer Science, Faculty of Science, Amran University, Amran, Yemen
- TIMS, Faculty of Science, Abdelmalek Essaadi University, Tetouan, Morocco
| | - Mostafa Elhosseini
- Department of Computers and Control Systems Engineering, Faculty of Engineering, Mansoura University, Mansoura, Egypt
- College of Computer Science and Engineering, Taibah University, Yanbu, Saudi Arabia
| |
|
21
|
Novel Light Convolutional Neural Network for COVID Detection with Watershed Based Region Growing Segmentation. J Imaging 2023; 9:jimaging9020042. [PMID: 36826961 PMCID: PMC9963211 DOI: 10.3390/jimaging9020042] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2022] [Revised: 02/06/2023] [Accepted: 02/08/2023] [Indexed: 02/16/2023] Open
Abstract
A rapidly spreading epidemic, COVID-19 had a serious effect on millions of people and took many lives, so early detection is essential for halting the infection's progress in affected individuals. To diagnose COVID-19 quickly and accurately, imaging modalities such as computed tomography (CT) scans and chest X-ray radiographs are frequently employed, and artificial intelligence (AI) approaches have further enabled automated and precise COVID-19 detection systems. Deep learning techniques are widely used to identify coronavirus infection in lung imaging. In this paper, we developed a novel light CNN architecture with watershed-based region-growing segmentation of chest X-rays. Both CT scans and X-ray radiographs were used with 5-fold cross-validation. Compared to earlier state-of-the-art models, our model is lighter and outperforms previous methods, achieving a mean accuracy of 98.8% on X-ray images and 98.6% on CT scans, with positive predictive values (PPV) of 0.99 and 0.97 and negative predictive values (NPV) of 0.98 and 0.99, respectively.
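A plausible form of the watershed-based region-growing step is sketched below with scikit-image; the intensity thresholds used to seed the markers are illustrative assumptions and would need tuning for real chest X-rays or CT slices.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import filters
from skimage.segmentation import watershed

def watershed_region_growing(gray):
    # gray: 2-D float array scaled to [0, 1] (a chest X-ray or CT slice).
    elevation = filters.sobel(gray)            # gradient map that the watershed floods
    markers = np.zeros_like(gray, dtype=np.int32)
    markers[gray < 0.2] = 1                    # assumed background seed range
    markers[gray > 0.6] = 2                    # assumed lung/tissue seed range
    labels = watershed(elevation, markers)     # regions grow outward from the seeds
    mask = ndi.binary_fill_holes(labels == 2)  # keep the foreground region
    return mask

# The binary mask can then be multiplied with the original image before it is
# passed to the light CNN classifier.
```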
|
22
|
Zaitseva E, Rabcan J, Levashenko V, Kvassay M. Importance analysis of decision making factors based on fuzzy decision trees. Appl Soft Comput 2023. [DOI: 10.1016/j.asoc.2023.109988] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/11/2023]
|
23
|
Gharehchopogh FS, Namazi M, Ebrahimi L, Abdollahzadeh B. Advances in Sparrow Search Algorithm: A Comprehensive Survey. ARCHIVES OF COMPUTATIONAL METHODS IN ENGINEERING : STATE OF THE ART REVIEWS 2023. [PMID: 36034191 DOI: 10.1007/s11831-021-09698-0] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/12/2023]
Abstract
Mathematical programming and meta-heuristics are two broad types of optimization methods. Meta-heuristic algorithms can identify optimal or near-optimal solutions by mimicking natural behaviours or phenomena, and they offer benefits such as simple implementation, few parameters, avoidance of local optima, and flexibility. Many meta-heuristic algorithms have been introduced to solve optimization problems, each with its own advantages and disadvantages, and studies in prestigious journals show that their hybrid, improved, and mutated variants perform well. This paper reviews the sparrow search algorithm (SSA), one of the newer and more robust algorithms for solving optimization problems, covering the SSA literature on variants, improvement, hybridization, and optimization. According to the surveyed studies, the use of SSA in these areas amounts to 32%, 36%, 4%, and 28%, respectively. The largest share belongs to improved SSA, which is analyzed in three subsections: meta-heuristics, artificial neural networks, and deep learning.
Affiliation(s)
| | - Mohammad Namazi
- Department of Computer Engineering, Maybod Branch. Islamic Azad University, Maybod, Iran
| | - Laya Ebrahimi
- Department of Computer Engineering, Urmia Branch, Islamic Azad University, Urmia, Iran
| | | |
|
24
|
Ghashghaei S, Wood DA, Sadatshojaei E, Jalilpoor M. Grayscale Image Statistical Attributes Effectively Distinguish the Severity of Lung Abnormalities in CT Scan Slices of COVID-19 Patients. SN COMPUTER SCIENCE 2023; 4:201. [PMID: 36789248 PMCID: PMC9912234 DOI: 10.1007/s42979-022-01642-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/15/2022] [Accepted: 12/27/2022] [Indexed: 02/12/2023]
Abstract
Grayscale statistical attributes analysed for 513 extracted images taken from pulmonary computed tomography (CT) scan slices of 57 individuals (49 confirmed COVID-19 positive; eight confirmed COVID-19 negative) are able to accurately predict a visual score (VS, from 0 to 4) used by a clinician to assess the severity of lung abnormalities in the patients. Some of these attributes can be used graphically to distinguish useful but overlapping distributions for the VS classes. Using machine and deep learning (ML/DL) algorithms with twelve grayscale image attributes as inputs enables the VS classes to be accurately distinguished: a convolutional neural network achieves this with better than 96% accuracy (only 18 of the 513 images misclassified) on a supervised learning basis. Analysis of confusion matrices allows the VS prediction performance of the ML/DL algorithms to be explored in detail and shows that the best-performing algorithms successfully distinguish between VS classes 0 and 1, which clinicians cannot readily do with the naked eye. Just five grayscale image attributes can also be used to generate an algorithmically defined scoring system (AS) that graphically distinguishes the degree of pulmonary impact in the dataset evaluated. The AS classification involves less overlap between its classes than the VS system and could be exploited as an automated expert system. The best-performing ML/DL models predict the AS classes with better than 99% accuracy using twelve grayscale attributes as inputs; the decision tree and random forest algorithms accomplish that distinction with just one classification error in the 513 images tested.
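As an indication of how such grayscale attributes can be computed and fed to a classifier, the sketch below derives a dozen simple first-order statistics per slice with NumPy/SciPy and trains a random forest; the attribute list is an assumption standing in for the twelve attributes used in the study.

```python
import numpy as np
from scipy import stats
from sklearn.ensemble import RandomForestClassifier

def grayscale_attributes(img):
    # img: 2-D grayscale array. Twelve simple first-order statistics per image.
    x = img.ravel().astype(np.float64)
    counts, _ = np.histogram(x, bins=64)
    p = counts[counts > 0] / counts.sum()
    return np.array([
        x.mean(), x.std(), x.min(), x.max(),
        np.median(x), np.percentile(x, 10), np.percentile(x, 25),
        np.percentile(x, 75), np.percentile(x, 90),
        stats.skew(x), stats.kurtosis(x),
        -(p * np.log2(p)).sum(),              # Shannon entropy of the histogram
    ])

# Assumed inputs: `slices` is a list of 2-D arrays, `vs_labels` the clinician scores.
# X = np.stack([grayscale_attributes(s) for s in slices])
# clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, vs_labels)
```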
Affiliation(s)
- Sara Ghashghaei
- Medical School, Shiraz University of Medical Sciences, Shiraz, Iran
| | | | - Erfan Sadatshojaei
- Department of Chemical Engineering, Shiraz University, Shiraz, 71345 Iran
| | | |
|
25
|
Balaha HM, Hassan AES. A variate brain tumor segmentation, optimization, and recognition framework. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10337-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/16/2022]
|
26
|
Phung KA, Nguyen TT, Wangad N, Baraheem S, Vo ND, Nguyen K. Disease Recognition in X-ray Images with Doctor Consultation-Inspired Model. J Imaging 2022; 8:jimaging8120323. [PMID: 36547488 PMCID: PMC9786084 DOI: 10.3390/jimaging8120323] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2022] [Revised: 11/24/2022] [Accepted: 11/30/2022] [Indexed: 12/12/2022] Open
Abstract
The application of chest X-ray imaging for early disease screening is attracting interest from the computer vision and deep learning community. To date, various deep learning models have been applied to X-ray image analysis, but individual models perform inconsistently depending on the dataset. In this paper, we treat each individual model as a medical doctor and propose a doctor consultation-inspired method that fuses multiple models. In particular, we consider both early and late fusion mechanisms for the consultation: the early fusion mechanism combines the deep learned features from multiple models, whereas the late fusion mechanism combines the confidence scores of all individual models. Experiments on two X-ray imaging datasets demonstrate the superiority of the proposed method over the baselines. The results also show that early consultation consistently outperforms the late consultation mechanism on both benchmark datasets; in particular, the early doctor consultation-inspired model outperforms all individual models by a large margin, i.e., by 3.03 and 1.86 in terms of accuracy on the UIT COVID-19 and chest X-ray datasets, respectively.
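The early/late fusion distinction described above can be made concrete with a small sketch: early fusion concatenates feature vectors from the individual "doctor" models before a shared classifier, while late fusion averages their per-class confidence scores. The helper names and array shapes are hypothetical.

```python
import numpy as np

def early_fusion(feature_vectors):
    # feature_vectors: list of (N, d_i) arrays, one per model ("doctor").
    # The concatenated features would feed a single downstream classifier.
    return np.concatenate(feature_vectors, axis=1)

def late_fusion(confidence_scores):
    # confidence_scores: list of (N, C) arrays of per-class probabilities.
    # Averaging the scores and taking the argmax is the simplest consultation rule.
    avg = np.mean(np.stack(confidence_scores, axis=0), axis=0)
    return avg.argmax(axis=1)
```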
Affiliation(s)
- Kim Anh Phung
- Department of Computer Science, University of Dayton, Dayton, OH 45469, USA
| | - Thuan Trong Nguyen
- Faculty of Software Engineering, University of Information Technology, Linh Trung Ward, Thu Duc District, Ho Chi Minh City 70000, Vietnam
| | - Nileshkumar Wangad
- Department of Computer Science, University of Dayton, Dayton, OH 45469, USA
| | - Samah Baraheem
- Department of Computer Science, University of Dayton, Dayton, OH 45469, USA
| | - Nguyen D. Vo
- Faculty of Software Engineering, University of Information Technology, Linh Trung Ward, Thu Duc District, Ho Chi Minh City 70000, Vietnam
| | - Khang Nguyen
- Faculty of Software Engineering, University of Information Technology, Linh Trung Ward, Thu Duc District, Ho Chi Minh City 70000, Vietnam
- Correspondence:
| |
|
27
|
Ahuja S, Panigrahi BK, Dey N, Taneja A, Gandhi TK. McS-Net: Multi-class Siamese network for severity of COVID-19 infection classification from lung CT scan slices. Appl Soft Comput 2022; 131:109683. [PMID: 36277300 PMCID: PMC9573862 DOI: 10.1016/j.asoc.2022.109683] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2022] [Revised: 08/25/2022] [Accepted: 09/22/2022] [Indexed: 11/29/2022]
Abstract
COVID-19 is a highly infectious and rapidly spreading disease that affects almost all age groups worldwide. Computed tomography (CT) scans of the lungs have been found to be accurate for the timely diagnosis of COVID-19 infection. In the proposed work, a deep learning-based P-shot N-ways Siamese network combined with prototypical nearest-neighbour classifiers is implemented to classify COVID-19 infection severity from lung CT scan slices. A Siamese network with identical weight-sharing sub-networks is used for image classification with a limited dataset for each class, and the feature vectors are obtained from the pre-trained shared sub-networks. The performance of the proposed methodology is evaluated on the benchmark MosMed dataset, which includes a zero category (healthy controls) and several COVID-19 infection severity categories. The methodology is evaluated on (a) chest CT scans of 1110 patients provided by medical hospitals in Moscow, Russia, and (b) a case study of low-dose CT scans of 42 patients provided by Avtaran Healthcare in India. The deep learning-based Siamese network (15-shot, 5-ways) obtained an accuracy of 98.07%, sensitivity of 95.66%, specificity of 98.83%, and F1-score of 95.10%. The proposed work thus improves COVID-19 infection severity classification when only a limited number of scans is available for the various infection categories.
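The prototypical nearest-neighbour step of a P-shot, N-way setup can be summarized by the sketch below: class prototypes are the mean embeddings of the support shots, and each query is assigned to the nearest prototype. The embedding network is assumed to come from the shared-weight Siamese sub-network and is not shown.

```python
import torch

def prototypes(support_embeddings, support_labels, n_ways):
    # support_embeddings: (P * N, d) tensor; support_labels: (P * N,) in [0, n_ways).
    return torch.stack([
        support_embeddings[support_labels == c].mean(dim=0) for c in range(n_ways)
    ])                                           # (N, d): one prototype per class

def classify(query_embeddings, protos):
    # Nearest prototype in Euclidean distance; returns (Q,) predicted class indices.
    dists = torch.cdist(query_embeddings, protos)
    return dists.argmin(dim=1)
```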
Affiliation(s)
- Sakshi Ahuja
- Electrical Engineering Department, Indian Institute of Technology Delhi, New Delhi, 110016, India
| | - Bijaya Ketan Panigrahi
- Electrical Engineering Department, Indian Institute of Technology Delhi, New Delhi, 110016, India
| | - Nilanjan Dey
- Department of Computer Science and Engineering, Techno International New Town, Kolkata, 700156, India
| | - Arpit Taneja
- Department of Radiology, Avtaran Healthcare LLP, Kurukshetra, 136118, India
| | - Tapan Kumar Gandhi
- Electrical Engineering Department, Indian Institute of Technology Delhi, New Delhi, 110016, India
| |
|
28
|
Liu S, Cai T, Tang X, Zhang Y, Wang C. COVID-19 diagnosis via chest X-ray image classification based on multiscale class residual attention. Comput Biol Med 2022; 149:106065. [PMID: 36081225 PMCID: PMC9433340 DOI: 10.1016/j.compbiomed.2022.106065] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2022] [Revised: 08/07/2022] [Accepted: 08/27/2022] [Indexed: 12/11/2022]
Abstract
Aiming to detect COVID-19 effectively, a multiscale class residual attention (MCRA) network is proposed for chest X-ray (CXR) image classification. First, to overcome the data shortage and improve the robustness of our network, pixel-level image mixing of local regions is introduced to achieve data augmentation and reduce noise. Second, a multi-scale fusion strategy is adopted to extract global contextual information at different scales and enhance semantic representation. Finally, class residual attention is employed to generate spatial attention for each class, which avoids inter-class interference and enhances related features to further improve COVID-19 detection. Experimental results show that our network achieves superior diagnostic performance on the COVIDx dataset, with accuracy, PPV, sensitivity, specificity, and F1-score of 97.71%, 96.76%, 96.56%, 98.96%, and 96.64%, respectively; moreover, the heat maps endow our deep model with a degree of interpretability.
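A simplified multi-scale fusion block in the spirit of the description above might look like the PyTorch sketch below: parallel convolutions with different dilation rates are concatenated and projected, capturing context at several scales. The class residual attention branch is omitted; the rates and channel counts are illustrative.

```python
import torch
import torch.nn as nn

class MultiScaleFusion(nn.Module):
    def __init__(self, in_ch, out_ch, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        # Concatenate same-resolution feature maps computed at different dilations.
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

# Example: MultiScaleFusion(256, 128)(torch.randn(1, 256, 28, 28)) has shape (1, 128, 28, 28).
```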
Affiliation(s)
- Shangwang Liu
- College of Computer and Information Engineering, Henan Normal University, Xinxiang, 453007, China; Engineering Lab of Intelligence Business & Internet of Things, Henan Province, China.
| | - Tongbo Cai
- College of Computer and Information Engineering, Henan Normal University, Xinxiang, 453007, China; Engineering Lab of Intelligence Business & Internet of Things, Henan Province, China
| | - Xiufang Tang
- College of Computer and Information Engineering, Henan Normal University, Xinxiang, 453007, China; Engineering Lab of Intelligence Business & Internet of Things, Henan Province, China
| | - Yangyang Zhang
- College of Computer and Information Engineering, Henan Normal University, Xinxiang, 453007, China; Engineering Lab of Intelligence Business & Internet of Things, Henan Province, China
| | - Changgeng Wang
- College of Computer and Information Engineering, Henan Normal University, Xinxiang, 453007, China; Engineering Lab of Intelligence Business & Internet of Things, Henan Province, China
| |
|
29
|
Skin cancer diagnosis based on deep transfer learning and sparrow search algorithm. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07762-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
Abstract
Skin cancer affects the lives of millions of people every year, as it is considered the most popular form of cancer. In the USA alone, approximately three and a half million people are diagnosed with skin cancer annually. The survival rate diminishes steeply as the skin cancer progresses. Despite this, it is an expensive and difficult procedure to discover this cancer type in the early stages. In this study, a threshold-based automatic approach for skin cancer detection, classification, and segmentation utilizing a meta-heuristic optimizer named sparrow search algorithm (SpaSA) is proposed. Five U-Net models (i.e., U-Net, U-Net++, Attention U-Net, V-net, and Swin U-Net) with different configurations are utilized to perform the segmentation process. Besides this, the meta-heuristic SpaSA optimizer is used to perform the optimization of the hyperparameters using eight pre-trained CNN models (i.e., VGG16, VGG19, MobileNet, MobileNetV2, MobileNetV3Large, MobileNetV3Small, NASNetMobile, and NASNetLarge). The dataset is gathered from five public sources in which two types of datasets are generated (i.e., 2-classes and 10-classes). For the segmentation, concerning the "skin cancer segmentation and classification" dataset, the best reported scores by U-Net++ with DenseNet201 as a backbone architecture are 0.104, 94.16%, 91.39%, 99.03%, 96.08%, 96.41%, 77.19%, and 75.47% in terms of loss, accuracy, F1-score, AUC, IoU, dice, hinge, and squared hinge, respectively, while for the "PH2" dataset, the best reported scores by the Attention U-Net with DenseNet201 as backbone architecture are 0.137, 94.75%, 92.65%, 92.56%, 92.74%, 96.20%, 86.30%, 92.65%, 69.28%, and 68.04% in terms of loss, accuracy, F1-score, precision, sensitivity, specificity, IoU, dice, hinge, and squared hinge, respectively. For the "ISIC 2019 and 2020 Melanoma" dataset, the best reported overall accuracy from the applied CNN experiments is 98.27% by the MobileNet pre-trained model. Similarly, for the "Melanoma Classification (HAM10K)" dataset, the best reported overall accuracy from the applied CNN experiments is 98.83% by the MobileNet pre-trained model. For the "skin diseases image" dataset, the best reported overall accuracy from the applied CNN experiments is 85.87% by the MobileNetV2 pre-trained model. After computing the results, the suggested approach is compared with 13 related studies.
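Since the segmentation results above are reported in terms of IoU and Dice, a short reference implementation of those two overlap metrics for binary masks may be helpful; it assumes NumPy arrays of 0/1 values and is not tied to any of the cited frameworks.

```python
import numpy as np

def iou_and_dice(pred, target, eps=1e-7):
    # pred, target: binary segmentation masks of identical shape (values 0 or 1).
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    iou = inter / (union + eps)
    dice = 2 * inter / (pred.sum() + target.sum() + eps)
    return iou, dice
```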
|
30
|
Polat H. A modified DeepLabV3+ based semantic segmentation of chest computed tomography images for COVID-19 lung infections. INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY 2022; 32:1481-1495. [PMID: 35941930 PMCID: PMC9349869 DOI: 10.1002/ima.22772] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/05/2022] [Revised: 04/19/2022] [Accepted: 05/23/2022] [Indexed: 06/15/2023]
Abstract
Coronavirus disease (COVID-19) affects the lives of billions of people worldwide and has had destructive impacts on daily life, the global economy, and public health. Early diagnosis and quantification of COVID-19 infection play a vital role in improving treatment outcomes and interrupting transmission. For this purpose, advances in medical imaging techniques such as computed tomography (CT) offer great potential as an alternative to the RT-PCR assay, since CT scans enable a better understanding of infection morphology and tracking of lesion boundaries. Because manual analysis of CT can be extremely tedious and time-consuming, robust automated image segmentation is necessary for clinical diagnosis and decision support. This paper proposes an efficient segmentation framework based on a modified DeepLabV3+ that uses lower atrous rates in the Atrous Spatial Pyramid Pooling (ASPP) module; the lower atrous rates make the receptive fields smaller so that intricate morphological details are captured. The encoder part of the framework utilizes a pre-trained residual network based on dilated convolutions for an optimal resolution of the feature maps. To evaluate the robustness of the modified model, a comprehensive comparison with other state-of-the-art segmentation methods was also performed. The experiments were carried out using a fivefold cross-validation technique on a publicly available database containing 100 single-slice CT scans from more than 40 patients with COVID-19. The modified DeepLabV3+ achieved good segmentation performance using around 43.9 M parameters, and the lower atrous rates in the ASPP module improved segmentation performance. After fivefold cross-validation, the framework achieved an overall Dice similarity coefficient of 0.881. The results demonstrate that several minor modifications to the DeepLabV3+ pipeline can provide robust solutions for improving segmentation performance and hardware implementation.
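To show where the "lower atrous rates" sit, here is a compact ASPP-style module in PyTorch with dilation rates smaller than the commonly used (6, 12, 18); it is a generic sketch under that assumption, not the modified DeepLabV3+ used in the paper.

```python
import torch
import torch.nn as nn

class SmallRateASPP(nn.Module):
    def __init__(self, in_ch, out_ch, rates=(2, 4, 6)):   # reduced atrous rates
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)] +
            [nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r, bias=False)
             for r in rates]
        )
        self.project = nn.Conv2d(out_ch * (len(rates) + 1), out_ch, kernel_size=1)

    def forward(self, x):
        # Smaller dilation rates shrink the effective receptive field of each branch,
        # which helps preserve fine infection boundaries in the pooled features.
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))
```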
Affiliation(s)
- Hasan Polat
- Department of Electrical and Energy, Bingol University, Bingöl, Turkey
| |
|
31
|
Baghdadi NA, Malki A, Magdy Balaha H, AbdulAzeem Y, Badawy M, Elhosseini M. An optimized deep learning approach for suicide detection through Arabic tweets. PeerJ Comput Sci 2022; 8:e1070. [PMID: 36092010 PMCID: PMC9455273 DOI: 10.7717/peerj-cs.1070] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2022] [Accepted: 07/28/2022] [Indexed: 06/15/2023]
Abstract
Many people worldwide suffer from mental illnesses such as major depressive disorder (MDD), which affect their thoughts, behavior, and quality of life, and suicide is regarded as the second leading cause of death among teenagers when treatment is not received. Twitter is a platform where people express their emotions and thoughts on many subjects, and many studies, including this one, suggest using social media data to track depression and other mental illnesses. Even though Arabic is widely spoken and has a complex syntax, depression detection methods have not been applied to the language, so an Arabic tweet dataset first had to be scraped and annotated. This study then proposes a complete framework for categorizing tweets into two classes (Normal or Suicide), together with an Arabic tweet preprocessing algorithm that contrasts lemmatization, stemming, and various lexical analysis methods. Experiments are conducted using Twitter data scraped from the Internet and annotated by five different annotators. Performance metrics are reported on the suggested dataset using recent Bidirectional Encoder Representations from Transformers (BERT) and Universal Sentence Encoder (USE) models; the measured metrics are balanced accuracy, specificity, F1-score, IoU, ROC, Youden index, NPV, and a weighted sum metric (WSM). The best WSM is 80.2% for the USE models and 95.26% for the Arabic BERT models.
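For orientation, the two-class tweet setup can be reproduced with a very small baseline such as TF-IDF plus logistic regression; this stands in for, and is much weaker than, the BERT and USE models evaluated in the study, and the variable names are placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Assumed inputs: `tweets` is a list of preprocessed Arabic strings and
# `labels` the corresponding 0 (Normal) / 1 (Suicide) annotations.
def train_baseline(tweets, labels):
    X_tr, X_te, y_tr, y_te = train_test_split(
        tweets, labels, test_size=0.2, stratify=labels, random_state=0)
    clf = make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # character n-grams
        LogisticRegression(max_iter=1000),
    )
    clf.fit(X_tr, y_tr)
    return clf, f1_score(y_te, clf.predict(X_te))
```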
Affiliation(s)
- Nadiah A. Baghdadi
- Nursing Management and Education Department, College of Nursing, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
| | - Amer Malki
- College of Computer Science and Engineering, Taibah University, Yanbu, Saudi Arabia
| | - Hossam Magdy Balaha
- Computers and Control Systems Engineering Department, Faculty of Engineering, Mansoura University, Mansoura, Egypt
| | - Yousry AbdulAzeem
- Computer Engineering Department, Misr Higher Institute for Engineering and Technology, Mansoura, Egypt
| | - Mahmoud Badawy
- Computers and Control Systems Engineering Department, Faculty of Engineering, Mansoura University, Mansoura, Egypt
| | - Mostafa Elhosseini
- College of Computer Science and Engineering, Taibah University, Yanbu, Saudi Arabia
- Computers and Control Systems Engineering Department, Faculty of Engineering, Mansoura University, Mansoura, Egypt
| |
|
32
|
Gharehchopogh FS, Namazi M, Ebrahimi L, Abdollahzadeh B. Advances in Sparrow Search Algorithm: A Comprehensive Survey. ARCHIVES OF COMPUTATIONAL METHODS IN ENGINEERING : STATE OF THE ART REVIEWS 2022; 30:427-455. [PMID: 36034191 PMCID: PMC9395821 DOI: 10.1007/s11831-022-09804-w] [Citation(s) in RCA: 34] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/04/2022] [Accepted: 08/02/2022] [Indexed: 05/29/2023]
Abstract
Mathematical programming and meta-heuristics are two broad types of optimization methods. Meta-heuristic algorithms can identify optimal or near-optimal solutions by mimicking natural behaviours or phenomena, and they offer benefits such as simple implementation, few parameters, avoidance of local optima, and flexibility. Many meta-heuristic algorithms have been introduced to solve optimization problems, each with its own advantages and disadvantages, and studies in prestigious journals show that their hybrid, improved, and mutated variants perform well. This paper reviews the sparrow search algorithm (SSA), one of the newer and more robust algorithms for solving optimization problems, covering the SSA literature on variants, improvement, hybridization, and optimization. According to the surveyed studies, the use of SSA in these areas amounts to 32%, 36%, 4%, and 28%, respectively. The largest share belongs to improved SSA, which is analyzed in three subsections: meta-heuristics, artificial neural networks, and deep learning.
Affiliation(s)
| | - Mohammad Namazi
- Department of Computer Engineering, Maybod Branch. Islamic Azad University, Maybod, Iran
| | - Laya Ebrahimi
- Department of Computer Engineering, Urmia Branch, Islamic Azad University, Urmia, Iran
| | | |
|
33
|
Mohammed MA, Al-Khateeb B, Yousif M, Mostafa SA, Kadry S, Abdulkareem KH, Garcia-Zapirain B. Novel Crow Swarm Optimization Algorithm and Selection Approach for Optimal Deep Learning COVID-19 Diagnostic Model. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:1307944. [PMID: 35996653 PMCID: PMC9392599 DOI: 10.1155/2022/1307944] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/01/2022] [Revised: 03/16/2022] [Accepted: 07/19/2022] [Indexed: 02/07/2023]
Abstract
Due to the COVID-19 pandemic, computerized COVID-19 diagnosis studies are proliferating. The diversity of COVID-19 models raises the questions of which COVID-19 diagnostic model should be selected and which performance criteria decision-makers in healthcare organizations should consider, so a selection scheme is necessary to address these issues. This study proposes an integrated method for selecting the optimal deep learning model for COVID-19 diagnosis based on a novel crow swarm optimization algorithm. Crow swarm optimization is employed to find an optimal set of coefficients using a designed fitness function for evaluating the performance of the deep learning models, and it is modified to obtain a well-distributed set of coefficients by considering the best average fitness. Two datasets are utilized: the first includes 746 computed tomography images (349 from confirmed COVID-19 cases and 397 from healthy individuals), and the second is composed of unenhanced lung computed tomography images from 632 positive COVID-19 cases; 15 trained and pretrained deep learning models with nine evaluation metrics are used to evaluate the developed methodology. Among the pretrained CNN and deep models on the first dataset, ResNet50 achieves an accuracy of 91.46% and an F1-score of 90.49%, and it is selected as the optimal deep learning model for COVID-19 identification with a closeness overall fitness value of 5715.988 for the COVID-19 CT lung image case considering differential advancement. In contrast, VGG16 is selected as the optimal deep learning model for the second dataset, with a closeness overall fitness value of 5758.791. Overall, InceptionV3 had the lowest performance on both datasets. The proposed evaluation methodology is a helpful tool to assist healthcare managers in selecting and evaluating optimal deep learning-based COVID-19 diagnosis models.
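The model-selection idea above ultimately reduces a vector of evaluation metrics to a single fitness value per candidate model. A minimal weighted-sum stand-in is sketched below; the weights that crow swarm optimization would learn are simply passed in, so this illustrates only the scoring step, not the optimizer itself, and the function names are hypothetical.

```python
import numpy as np

def fitness_scores(metric_matrix, weights):
    # metric_matrix: (M, K) array, one row per candidate model, one column per
    # evaluation metric (e.g., accuracy, F1-score, ...), all scaled to [0, 1].
    # weights: (K,) non-negative coefficients; a metaheuristic such as crow swarm
    # optimization would search over these instead of fixing them by hand.
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return np.asarray(metric_matrix, dtype=float) @ w

def select_best(model_names, metric_matrix, weights):
    scores = fitness_scores(metric_matrix, weights)
    return model_names[int(np.argmax(scores))], scores
```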
Affiliation(s)
- Mazin Abed Mohammed
- College of Computer Science and Information Technology, University of Anbar, Ramadi 31001, Anbar, Iraq
| | - Belal Al-Khateeb
- College of Computer Science and Information Technology, University of Anbar, Ramadi 31001, Anbar, Iraq
| | - Mohammed Yousif
- Directorate of Regions and Governorates Affairs, Ministry of Youth & Sport, Ramadi 31065, Anbar, Iraq
| | - Salama A. Mostafa
- Faculty of Computer Science and Information Technology, Universiti Tun Hussein Onn Malaysia, Johor 86400, Malaysia
| | - Seifedine Kadry
- Department of Applied Data Science, Noroff University College, Kristiansand 4608, Norway
| | - Karrar Hameed Abdulkareem
- College of Agriculture, Al-Muthanna University, Samawah 66001, Iraq
- College of Engineering, University of Warith Al-Anbiyaa, Karbala, Iraq
| | | |
|
34
|
Baghdadi NA, Malki A, Magdy Balaha H, AbdulAzeem Y, Badawy M, Elhosseini M. Classification of breast cancer using a manta-ray foraging optimized transfer learning framework. PeerJ Comput Sci 2022; 8:e1054. [PMID: 36092017 PMCID: PMC9454783 DOI: 10.7717/peerj-cs.1054] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2022] [Accepted: 07/07/2022] [Indexed: 06/15/2023]
Abstract
Due to its high prevalence and wide dissemination, breast cancer is a particularly dangerous disease, but survival chances can be improved by early detection and diagnosis. For medical image analysts, diagnosis is a tough, time-consuming, routine, and repetitive task, and medical image analysis can be a useful aid for detecting such a disease. Recently, artificial intelligence technology has been utilized to help radiologists identify breast cancer more rapidly and reliably, and convolutional neural networks, among other technologies, are promising tools for medical image recognition and classification. This study proposes a framework for automatic and reliable breast cancer classification based on histological and ultrasound data. The system is built on CNNs and employs transfer learning and metaheuristic optimization: the Manta Ray Foraging Optimization (MRFO) approach is deployed to improve the framework's adaptability. Using the Breast Cancer Dataset (two classes) and the Breast Ultrasound Dataset (three classes), eight modern pre-trained CNN architectures are examined with the transfer learning technique, and the framework uses MRFO to optimize their hyperparameters. Extensive experiments recorded performance metrics including accuracy, AUC, precision, F1-score, sensitivity, dice, recall, IoU, and cosine similarity. The proposed framework scored 97.73% accuracy on histopathological data and 99.01% on ultrasound data, and the experimental results show that it is superior to other state-of-the-art approaches in the literature.
Affiliation(s)
- Nadiah A. Baghdadi
- College of Nursing, Nursing Management and Education Department, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
| | - Amer Malki
- College of Computer Science and Engineering, Taibah University, Yanbu, Saudi Arabia
| | - Hossam Magdy Balaha
- Computers and Control Systems Engineering Department, Faculty of Engineering, Mansoura University, Mansoura, Egypt
| | - Yousry AbdulAzeem
- Computer Engineering Department, Misr Higher Institute for Engineering and Technology, Mansoura, Egypt
| | - Mahmoud Badawy
- Computers and Control Systems Engineering Department, Faculty of Engineering, Mansoura University, Mansoura, Egypt
| | - Mostafa Elhosseini
- College of Computer Science and Engineering, Taibah University, Yanbu, Saudi Arabia
- Computers and Control Systems Engineering Department, Faculty of Engineering, Mansoura University, Mansoura, Egypt
| |
|