1
Yang M, Zhang X, Jin J. Radiomics and Deep Learning Model for Benign and Malignant Soft Tissue Tumors Differentiation of Extremities and Trunk. Acad Radiol 2025; 32:2838-2846. PMID: 39753479. DOI: 10.1016/j.acra.2024.12.026.
Abstract
RATIONALE AND OBJECTIVES To develop radiomics and deep learning models for the preoperative differentiation of malignant and benign soft tissue tumors (STTs) based on fat-saturation T2-weighted imaging (FS-T2WI). MATERIALS AND METHODS Data from 115 patients with STTs of the extremities and trunk were collected from our hospital as the training set, and data from another 70 patients were collected from a second center as the external validation set. The outlined regions of interest comprised the intratumoral region and the peritumoral region extending 5 mm outward, and the corresponding radiomics features were extracted from each. Deep learning was performed using pretrained 3D ResNet algorithms, and deep learning features were extracted from the entire FS-T2WI of each patient. Recursive feature elimination and the least absolute shrinkage and selection operator were used to select the radiomics and deep learning features with predictive value. Five machine learning algorithms were applied to build the radiomics models; the area under the ROC curve (AUC) in the validation set was used to evaluate diagnostic performance, and decision curve analysis (DCA) was used to evaluate the clinical benefit of the models. RESULTS Based on 20 selected deep learning and radiomics features, the deep learning radiomics (DLR) model had the best predictive performance in the validation set, with an AUC of 0.9410. DCA and calibration curves showed that the DLR model had better clinical net benefit and goodness of fit. CONCLUSION By extracting more features from FS-T2WI, the DLR model provides a noninvasive, low-cost, and highly accurate preoperative differential diagnosis of benign and malignant STTs.
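The feature-selection pipeline described here (recursive feature elimination followed by LASSO over pooled radiomics and deep features) can be sketched with scikit-learn. The array shapes, feature counts, and labels below are synthetic placeholders, not the study's data.

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.preprocessing import StandardScaler

# Placeholder feature matrix: rows = patients, columns = radiomics + deep features.
rng = np.random.default_rng(0)
X = rng.normal(size=(115, 200))      # e.g., 115 training patients, 200 candidate features
y = rng.integers(0, 2, size=115)     # 0 = benign, 1 = malignant (synthetic labels)

X_scaled = StandardScaler().fit_transform(X)

# Step 1: recursive feature elimination with a linear model.
rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=50)
X_rfe = rfe.fit_transform(X_scaled, y)

# Step 2: LASSO keeps only the features with non-zero coefficients.
lasso = LassoCV(cv=5).fit(X_rfe, y)
selected = np.flatnonzero(lasso.coef_)
print(f"{selected.size} features retained after RFE + LASSO")
```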
Affiliation(s)
- Miaomiao Yang
- Department of Radiology, Southeast University Zhongda Hospital, No. 87 Dingjiaqiao Road, Gulou District, Nanjing, Jiangsu Province, China (M.Y., J.J.)
- Xiuming Zhang
- Department of Radiology, Jiangsu Cancer Hospital, Nanjing, Jiangsu Province, China (X.Z.)
- Jiyang Jin
- Department of Radiology, Southeast University Zhongda Hospital, No. 87 Dingjiaqiao Road, Gulou District, Nanjing, Jiangsu Province, China (M.Y., J.J.)
2
Hansun S, Argha A, Bakhshayeshi I, Wicaksana A, Alinejad-Rokny H, Fox GJ, Liaw ST, Celler BG, Marks GB. Diagnostic Performance of Artificial Intelligence-Based Methods for Tuberculosis Detection: Systematic Review. J Med Internet Res 2025; 27:e69068. PMID: 40053773; PMCID: PMC11928776. DOI: 10.2196/69068.
Abstract
BACKGROUND Tuberculosis (TB) remains a significant health concern, contributing to the highest mortality among infectious diseases worldwide. However, none of the various TB diagnostic tools introduced is deemed sufficient on its own for the diagnostic pathway, so various artificial intelligence (AI)-based methods have been developed to address this issue. OBJECTIVE We aimed to provide a comprehensive evaluation of AI-based algorithms for TB detection across various data modalities. METHODS Following PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analysis) 2020 guidelines, we conducted a systematic review to synthesize current knowledge on this topic. Our search across 3 major databases (Scopus, PubMed, Association for Computing Machinery [ACM] Digital Library) yielded 1146 records, of which we included 152 (13.3%) studies in our analysis. QUADAS-2 (Quality Assessment of Diagnostic Accuracy Studies version 2) was performed for the risk-of-bias assessment of all included studies. RESULTS Radiographic biomarkers (n=129, 84.9%) and deep learning (DL; n=122, 80.3%) approaches were predominantly used, with convolutional neural networks (CNNs) using Visual Geometry Group (VGG)-16 (n=37, 24.3%), ResNet-50 (n=33, 21.7%), and DenseNet-121 (n=19, 12.5%) architectures being the most common DL approach. The majority of studies focused on model development (n=143, 94.1%) and used a single modality approach (n=141, 92.8%). AI methods demonstrated good performance in all studies: mean accuracy=91.93% (SD 8.10%, 95% CI 90.52%-93.33%; median 93.59%, IQR 88.33%-98.32%), mean area under the curve (AUC)=93.48% (SD 7.51%, 95% CI 91.90%-95.06%; median 95.28%, IQR 91%-99%), mean sensitivity=92.77% (SD 7.48%, 95% CI 91.38%-94.15%; median 94.05% IQR 89%-98.87%), and mean specificity=92.39% (SD 9.4%, 95% CI 90.30%-94.49%; median 95.38%, IQR 89.42%-99.19%). AI performance across different biomarker types showed mean accuracies of 92.45% (SD 7.83%), 89.03% (SD 8.49%), and 84.21% (SD 0%); mean AUCs of 94.47% (SD 7.32%), 88.45% (SD 8.33%), and 88.61% (SD 5.9%); mean sensitivities of 93.8% (SD 6.27%), 88.41% (SD 10.24%), and 93% (SD 0%); and mean specificities of 94.2% (SD 6.63%), 85.89% (SD 14.66%), and 95% (SD 0%) for radiographic, molecular/biochemical, and physiological types, respectively. AI performance across various reference standards showed mean accuracies of 91.44% (SD 7.3%), 93.16% (SD 6.44%), and 88.98% (SD 9.77%); mean AUCs of 90.95% (SD 7.58%), 94.89% (SD 5.18%), and 92.61% (SD 6.01%); mean sensitivities of 91.76% (SD 7.02%), 93.73% (SD 6.67%), and 91.34% (SD 7.71%); and mean specificities of 86.56% (SD 12.8%), 93.69% (SD 8.45%), and 92.7% (SD 6.54%) for bacteriological, human reader, and combined reference standards, respectively. The transfer learning (TL) approach showed increasing popularity (n=89, 58.6%). Notably, only 1 (0.7%) study conducted domain-shift analysis for TB detection. CONCLUSIONS Findings from this review underscore the considerable promise of AI-based methods in the realm of TB detection. Future research endeavors should prioritize conducting domain-shift analyses to better simulate real-world scenarios in TB detection. TRIAL REGISTRATION PROSPERO CRD42023453611; https://www.crd.york.ac.uk/PROSPERO/view/CRD42023453611.
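As a rough illustration of the dominant approach reported in this review (a pretrained CNN fine-tuned by transfer learning for TB detection on chest radiographs), the PyTorch sketch below adapts an ImageNet-pretrained VGG16 to a binary TB-versus-normal task; the dummy data, learning rate, and two-class head are illustrative assumptions, not taken from any reviewed study.

```python
import torch
import torch.nn as nn
from torchvision import models

# Binary TB-vs-normal classifier built on an ImageNet-pretrained VGG16 backbone.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)

# Freeze the convolutional feature extractor (typical transfer-learning setup).
for param in model.features.parameters():
    param.requires_grad = False

# Replace the final classification layer with a 2-class head (TB / normal).
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 2)

# Only trainable parameters are passed to the optimizer.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
criterion = nn.CrossEntropyLoss()

# One illustrative forward/backward pass on a dummy batch of chest X-rays.
dummy_batch = torch.randn(4, 3, 224, 224)
dummy_labels = torch.tensor([0, 1, 0, 1])
loss = criterion(model(dummy_batch), dummy_labels)
loss.backward()
optimizer.step()
```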
Affiliation(s)
- Seng Hansun
- School of Clinical Medicine, South West Sydney, UNSW Medicine & Health, UNSW Sydney, Sydney, Australia
- Woolcock Vietnam Research Group, Woolcock Institute of Medical Research, Sydney, Australia
- Ahmadreza Argha
- Graduate School of Biomedical Engineering, UNSW Sydney, Sydney, Australia
- Tyree Institute of Health Engineering, UNSW Sydney, Sydney, Australia
- Ageing Future Institute, UNSW Sydney, Sydney, Australia
- Ivan Bakhshayeshi
- Graduate School of Biomedical Engineering, UNSW Sydney, Sydney, Australia
- BioMedical Machine Learning Lab, Graduate School of Biomedical Engineering, UNSW Sydney, Sydney, Australia
- Arya Wicaksana
- Informatics Department, Universitas Multimedia Nusantara, Tangerang, Indonesia
- Hamid Alinejad-Rokny
- Tyree Institute of Health Engineering, UNSW Sydney, Sydney, Australia
- Ageing Future Institute, UNSW Sydney, Sydney, Australia
- BioMedical Machine Learning Lab, Graduate School of Biomedical Engineering, UNSW Sydney, Sydney, Australia
- Greg J Fox
- NHMRC Clinical Trials Centre, Faculty of Medicine and Health, University of Sydney, Sydney, Australia
- Siaw-Teng Liaw
- School of Population Health and School of Clinical Medicine, UNSW Sydney, Sydney, Australia
- Branko G Celler
- Biomedical Systems Research Laboratory, School of Electrical Engineering and Telecommunications, UNSW Sydney, Sydney, Australia
- Guy B Marks
- School of Clinical Medicine, South West Sydney, UNSW Medicine & Health, UNSW Sydney, Sydney, Australia
- Woolcock Vietnam Research Group, Woolcock Institute of Medical Research, Sydney, Australia
- Burnet Institute, Melbourne, Australia
3
Fouad S, Usman M, Kabir R, Rajasekaran A, Morlese J, Nagori P, Bhatia B. Explained Deep Learning Framework for COVID-19 Detection in Volumetric CT Images Aligned with the British Society of Thoracic Imaging Reporting Guidance: A Pilot Study. Journal of Imaging Informatics in Medicine 2025. PMID: 40011345. DOI: 10.1007/s10278-025-01444-3.
Abstract
In March 2020, the British Society of Thoracic Imaging (BSTI) introduced reporting guidance for COVID-19 detection to streamline standardised reporting and enhance agreement between radiologists. However, most current deep learning (DL) methods do not conform to this guidance. This study introduces a multi-class DL model to identify BSTI COVID-19 categories within CT volumes, classified as 'Classic', 'Probable', 'Indeterminate', or 'Non-COVID'. A total of 56 pseudoanonymised CT scans were collected from patients with suspected COVID-19 and annotated by an experienced chest subspecialty radiologist following the BSTI guidance. We evaluated the performance of multiple DL-based models, including three-dimensional (3D) ResNet architectures pre-trained on the Kinetics-700 video dataset. For better interpretability of the results, our approach incorporates a post-hoc visual explainability feature to highlight the areas of the image most indicative of the COVID-19 category. Our four-class classification DL framework achieves an overall accuracy of 75%. However, the model struggled to detect the 'Indeterminate' COVID-19 group, whose removal significantly improved the model's accuracy to 90%. The proposed explainable multi-class DL model yields accurate detection of 'Classic', 'Probable', and 'Non-COVID' categories, with poor detection ability for 'Indeterminate' COVID-19 cases. These findings are consistent with clinical studies that aimed to validate the BSTI reporting guidance manually amongst consultant radiologists.
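A minimal sketch of the backbone choice described above: torchvision ships a 3D ResNet (r3d_18) pretrained on Kinetics-400 rather than the Kinetics-700 weights used in the study, so treat this purely as an illustration of adapting a video-pretrained 3D CNN to four BSTI categories; the input volume shape is a placeholder.

```python
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18, R3D_18_Weights

# 3D ResNet backbone pretrained on video data (Kinetics-400 here),
# with the head replaced for the four BSTI categories.
model = r3d_18(weights=R3D_18_Weights.KINETICS400_V1)
model.fc = nn.Linear(model.fc.in_features, 4)  # Classic / Probable / Indeterminate / Non-COVID

# A CT volume treated like a video clip: (batch, channels, depth, height, width).
volume = torch.randn(1, 3, 64, 112, 112)
logits = model(volume)
print(logits.shape)  # torch.Size([1, 4])
```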
Affiliation(s)
- Shereen Fouad
- School of Computer Science and Digital Technologies, Aston University, Birmingham, UK
- Muhammad Usman
- School of Computer Science and Digital Technologies, Aston University, Birmingham, UK
- Ra'eesa Kabir
- School of Computer Science and Digital Technologies, Aston University, Birmingham, UK
- John Morlese
- Sandwell and West Birmingham Hospitals NHS Trust, West Birmingham, UK
- Pankaj Nagori
- Sandwell and West Birmingham Hospitals NHS Trust, West Birmingham, UK
- Bahadar Bhatia
- Sandwell and West Birmingham Hospitals NHS Trust, West Birmingham, UK
4
Islam O, Assaduzzaman M, Hasan MZ. An explainable AI-based blood cell classification using optimized convolutional neural network. J Pathol Inform 2024; 15:100389. PMID: 39161471; PMCID: PMC11332798. DOI: 10.1016/j.jpi.2024.100389.
Abstract
White blood cells (WBCs) are a vital component of the immune system. The efficient and precise classification of WBCs is crucial for medical professionals to diagnose diseases accurately. This study presents an enhanced convolutional neural network (CNN) for detecting blood cells with the help of image pre-processing techniques such as padding, thresholding, erosion, dilation, and masking, which are utilized to minimize noise and improve feature enhancement. Additionally, performance is further enhanced by experimenting with various architectural structures and hyperparameters to optimize the proposed model. A comparative evaluation is conducted against three transfer learning models: Inception V3, MobileNetV2, and DenseNet201. The results indicate that the proposed model outperforms existing models, achieving a testing accuracy of 99.12%, precision of 99%, and F1-score of 99%. In addition, we utilized SHAP (Shapley Additive Explanations) and LIME (Local Interpretable Model-agnostic Explanations) techniques to improve the interpretability of the proposed model, providing valuable insights into how the model makes decisions. Furthermore, the proposed model has been further explained using the Grad-CAM and Grad-CAM++ techniques, which are class-discriminative localization approaches, to improve trust and transparency. Grad-CAM++ performed slightly better than Grad-CAM in identifying the location of the predicted area. Finally, the most efficient model has been integrated into an end-to-end (E2E) system, accessible through both web and Android platforms, for medical professionals to classify blood cells.
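The pre-processing chain named above (padding, thresholding, erosion, dilation, masking) can be sketched with OpenCV; the padding width, kernel size, and Otsu thresholding choice are illustrative assumptions rather than the paper's exact settings.

```python
import cv2
import numpy as np

def preprocess_blood_cell(image_bgr: np.ndarray) -> np.ndarray:
    """Noise-reducing pre-processing in the spirit described above:
    padding, thresholding, erosion, dilation, and masking."""
    # Pad the image so cells touching the border are not clipped.
    padded = cv2.copyMakeBorder(image_bgr, 10, 10, 10, 10, cv2.BORDER_CONSTANT, value=0)

    # Otsu thresholding on the grayscale image separates cells from background.
    gray = cv2.cvtColor(padded, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # Erosion removes small speckle noise; dilation restores the cell body.
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.erode(mask, kernel, iterations=1)
    mask = cv2.dilate(mask, kernel, iterations=2)

    # Apply the binary mask so only the segmented cell region is kept.
    return cv2.bitwise_and(padded, padded, mask=mask)

# Example usage with a synthetic image stand-in.
dummy = (np.random.rand(128, 128, 3) * 255).astype(np.uint8)
result = preprocess_blood_cell(dummy)
print(result.shape)  # (148, 148, 3) after 10-pixel padding on each side
```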
Affiliation(s)
- Oahidul Islam
- Dept. of EEE, Daffodil International University, Dhaka, Bangladesh
- Md Assaduzzaman
- Health Informatics Research Laboratory (HIRL), Dept. of CSE, Daffodil International University, Dhaka, Bangladesh
- Md Zahid Hasan
- Health Informatics Research Laboratory (HIRL), Dept. of CSE, Daffodil International University, Dhaka, Bangladesh
5
Ko K, Lee B, Hong J, Ko H. Open Set Medical Diagnosis via Difficulty-Aware Multi-Label Thorax Disease Classification. Annual International Conference of the IEEE Engineering in Medicine and Biology Society 2024; 2024:1-4. PMID: 40039027. DOI: 10.1109/embc53108.2024.10782506.
Abstract
Interest in emerging diseases is increasing due to recent global outbreaks like COVID-19. Unlike general image classification tasks, medical image diagnosis is a multi-label classification problem in which multiple diseases can be present simultaneously. When none of the class scores exceeds its threshold, the case is classified as normal rather than unknown, which is why existing open-set recognition (OSR) methods cannot be applied directly to open-set medical diagnosis. In this paper, we propose a novel open-set medical diagnosis method to solve this fundamental problem of OSR in multi-label classification, employing Copycat and entropy-based thresholds. To our knowledge, open-set multi-label medical diagnosis has not yet been addressed in research. Our experiments confirm that the proposed method performs well both in multi-label classification and in recognizing normal and unknown cases.
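The decision rule implied above (per-class thresholds, plus an entropy check to separate 'normal' from 'unknown' when no class fires) can be sketched in a few lines of PyTorch; the thresholds and toy logits are made-up values for illustration, and the paper's Copycat component is not reproduced here.

```python
import torch

def open_set_multilabel_decision(logits: torch.Tensor,
                                 class_threshold: float = 0.5,
                                 entropy_threshold: float = 0.65) -> str:
    """Toy rule: per-class sigmoid thresholds, then a mean binary-entropy check
    to separate 'normal' from 'unknown' when no class fires."""
    probs = torch.sigmoid(logits)
    positives = (probs > class_threshold).nonzero(as_tuple=True)[0]
    if positives.numel() > 0:
        return f"diseases present: {positives.tolist()}"

    # Binary entropy per class; high average entropy means the model is uncertain,
    # which we treat as a potential unknown (open-set) case.
    eps = 1e-7
    entropy = -(probs * (probs + eps).log() + (1 - probs) * (1 - probs + eps).log())
    return "unknown" if entropy.mean().item() > entropy_threshold else "normal"

print(open_set_multilabel_decision(torch.tensor([-3.0, -2.5, -4.0])))    # confident -> "normal"
print(open_set_multilabel_decision(torch.tensor([-0.1, -0.2, -0.05])))   # uncertain -> "unknown"
```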
6
Duan W, Wu Z, Zhu H, Zhu Z, Liu X, Shu Y, Zhu X, Wu J, Peng D. Deep learning modeling using mammography images for predicting estrogen receptor status in breast cancer. Am J Transl Res 2024; 16:2411-2422. PMID: 39006260; PMCID: PMC11236640. DOI: 10.62347/puhr6185.
Abstract
BACKGROUND The estrogen receptor (ER) serves as a pivotal indicator for assessing endocrine therapy efficacy and breast cancer prognosis. Invasive biopsy is a conventional approach for appraising ER expression levels, but it bears disadvantages due to tumor heterogeneity. To address the issue, a deep learning model leveraging mammography images was developed in this study for accurate evaluation of ER status in patients with breast cancer. OBJECTIVES To predict the ER status in breast cancer patients with a newly developed deep learning model leveraging mammography images. MATERIALS AND METHODS Datasets comprising preoperative mammography images, ER expression levels, and clinical data spanning from October 2016 to October 2021 were retrospectively collected from 358 patients diagnosed with invasive ductal carcinoma. Following collection, these datasets were divided into a training dataset (n = 257) and a testing dataset (n = 101). Subsequently, a deep learning prediction model, referred to as IP-SE-DResNet model, was developed utilizing two deep residual networks along with the Squeeze-and-Excitation attention mechanism. This model was tailored to forecast the ER status in breast cancer patients utilizing mammography images from both craniocaudal view and mediolateral oblique view. Performance measurements including prediction accuracy, sensitivity, specificity, and the area under the receiver operating characteristic curves (AUCs) were employed to assess the effectiveness of the model. RESULTS In the training dataset, the AUCs for the IP-SE-DResNet model utilizing mammography images from the craniocaudal view, mediolateral oblique view, and the combined images from both views, were 0.849 (95% CIs: 0.809-0.868), 0.858 (95% CIs: 0.813-0.872), and 0.895 (95% CIs: 0.866-0.913), respectively. Correspondingly, the AUCs for these three image categories in the testing dataset were 0.835 (95% CIs: 0.790-0.887), 0.746 (95% CIs: 0.793-0.889), and 0.886 (95% CIs: 0.809-0.934), respectively. A comprehensive comparison between performance measurements underscored a substantial enhancement achieved by the proposed IP-SE-DResNet model in contrast to a traditional radiomics model employing the naive Bayesian classifier. For the latter, the AUCs stood at only 0.614 (95% CIs: 0.594-0.638) in the training dataset and 0.613 (95% CIs: 0.587-0.654) in the testing dataset, both utilizing a combination of mammography images from the craniocaudal and mediolateral oblique views. CONCLUSIONS The proposed IP-SE-DResNet model presents a potent and non-invasive approach for predicting ER status in breast cancer patients, potentially enhancing the efficiency and diagnostic precision of radiologists.
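The Squeeze-and-Excitation attention mechanism named in the model's design is a standard channel-attention block; a minimal PyTorch version is sketched below (the reduction ratio of 16 is the conventional default, not necessarily the one used in IP-SE-DResNet).

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation channel attention applied to a feature map."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)       # squeeze: global spatial average
        self.fc = nn.Sequential(                  # excitation: channel-wise gating
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights                        # re-weight each feature channel

# Example: re-weighting a batch of 256-channel feature maps.
features = torch.randn(2, 256, 28, 28)
print(SEBlock(256)(features).shape)  # torch.Size([2, 256, 28, 28])
```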
Affiliation(s)
- Wenfeng Duan
- Department of Radiology, The First Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang, Jiangxi, China
- Zhiheng Wu
- School of Information Engineering, Nanchang University, Nanchang, Jiangxi, China
- Huijun Zhu
- School of Information Engineering, Nanchang University, Nanchang, Jiangxi, China
- Zhiyun Zhu
- Department of Cardiology, Jiangxi Provincial People’s Hospital, Nanchang, Jiangxi, China
- Xiang Liu
- Department of Radiology, The First Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang, Jiangxi, China
- Yongqiang Shu
- Department of Radiology, The First Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang, Jiangxi, China
- Xishun Zhu
- School of Advanced Manufacturing, Nanchang University, Nanchang, Jiangxi, China
- Jianhua Wu
- School of Information Engineering, Nanchang University, Nanchang, Jiangxi, China
- Dechang Peng
- Department of Radiology, The First Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang, Jiangxi, China
Collapse
|
7
Ma J, Choi SJ, Kim S, Hong M. Performance Comparison of Convolutional Neural Network-Based Hearing Loss Classification Model Using Auditory Brainstem Response Data. Diagnostics (Basel) 2024; 14:1232. PMID: 38928647; PMCID: PMC11202863. DOI: 10.3390/diagnostics14121232.
Abstract
This study evaluates the efficacy of several Convolutional Neural Network (CNN) models for the classification of hearing loss in patients using preprocessed auditory brainstem response (ABR) image data. Specifically, we employed six CNN architectures-VGG16, VGG19, DenseNet121, DenseNet-201, AlexNet, and InceptionV3-to differentiate between patients with hearing loss and those with normal hearing. A dataset comprising 7990 preprocessed ABR images was utilized to assess the performance and accuracy of these models. Each model was systematically tested to determine its capability to accurately classify hearing loss. A comparative analysis of the models focused on metrics of accuracy and computational efficiency. The results indicated that the AlexNet model exhibited superior performance, achieving an accuracy of 95.93%. The findings from this research suggest that deep learning models, particularly AlexNet in this instance, hold significant potential for automating the diagnosis of hearing loss using ABR graph data. Future work will aim to refine these models to enhance their diagnostic accuracy and efficiency, fostering their practical application in clinical settings.
Affiliation(s)
- Jun Ma
- Department of Software Convergence, Soonchunhyang University, Asan 31538, Republic of Korea
- Seong Jun Choi
- Department of Otorhinolaryngology-Head and Neck Surgery, College of Medicine, Soonchunhyang University Cheonan Hospital, Cheonan 31151, Republic of Korea
- Sungyeup Kim
- Institute for Artificial Intelligence and Software, Soonchunhyang University, Asan 31538, Republic of Korea
- Min Hong
- Department of Computer Software Engineering, Soonchunhyang University, Asan 31538, Republic of Korea
Collapse
|
8
Shayegan MJ. A brief review and scientometric analysis on ensemble learning methods for handling COVID-19. Heliyon 2024; 10:e26694. PMID: 38420425; PMCID: PMC10901105. DOI: 10.1016/j.heliyon.2024.e26694.
Abstract
Numerous efforts and research have been conducted worldwide to combat the coronavirus disease 2019 (COVID-19) pandemic. In this regard, some researchers have focused on deep and machine-learning approaches to discover more about this disease. There have been many articles on using ensemble learning methods for COVID-19 detection, yet there seems to be no scientometric analysis or brief review of this body of research. Hence, a combined method of scientometric analysis and brief review was used to study the published articles that employed an ensemble learning approach to detect COVID-19. This research used both methods to overcome their individual limitations, leading to enhanced and reliable outcomes. The related articles were retrieved from the Scopus database. A two-step procedure was then employed: a concise review of the collected articles was conducted, after which they underwent scientometric and bibliometric analyses. The findings revealed that the convolutional neural network (CNN) is the most commonly employed algorithm, while support vector machine (SVM), random forest, ResNet, DenseNet, and visual geometry group (VGG) networks were also frequently used. Additionally, China has had a significant presence in the numerous top-ranking categories of this field of research. Both study phases yielded valuable results and rankings.
9
Misra S, Yoon C, Kim K, Managuli R, Barr RG, Baek J, Kim C. Deep learning-based multimodal fusion network for segmentation and classification of breast cancers using B-mode and elastography ultrasound images. Bioeng Transl Med 2023; 8:e10480. PMID: 38023698; PMCID: PMC10658476. DOI: 10.1002/btm2.10480.
Abstract
Ultrasonography is one of the key medical imaging modalities for evaluating breast lesions. For differentiating benign from malignant lesions, computer-aided diagnosis (CAD) systems have greatly assisted radiologists by automatically segmenting and identifying features of lesions. Here, we present deep learning (DL)-based methods to segment the lesions and then classify benign from malignant, utilizing both B-mode and strain elastography (SE-mode) images. We propose a weighted multimodal U-Net (W-MM-U-Net) model for segmenting lesions where optimum weight is assigned on different imaging modalities using a weighted-skip connection method to emphasize its importance. We design a multimodal fusion framework (MFF) on cropped B-mode and SE-mode ultrasound (US) lesion images to classify benign and malignant lesions. The MFF consists of an integrated feature network (IFN) and a decision network (DN). Unlike other recent fusion methods, the proposed MFF method can simultaneously learn complementary information from convolutional neural networks (CNNs) trained using B-mode and SE-mode US images. The features from the CNNs are ensembled using the multimodal EmbraceNet model and DN classifies the images using those features. The experimental results (sensitivity of 100 ± 0.00% and specificity of 94.28 ± 7.00%) on the real-world clinical data showed that the proposed method outperforms the existing single- and multimodal methods. The proposed method predicts seven benign patients as benign three times out of five trials and six malignant patients as malignant five out of five trials. The proposed method would potentially enhance the classification accuracy of radiologists for breast cancer detection in US images.
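As a simplified stand-in for the multimodal fusion framework described here, the sketch below encodes B-mode and SE-mode images with two separate CNN branches and classifies their concatenated embeddings with a small decision network; the paper's weighted skip connections and EmbraceNet ensembling are not reproduced, and the ResNet-18 backbones and layer sizes are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

class TwoBranchFusion(nn.Module):
    """Late-fusion sketch: separate CNN encoders for B-mode and elastography
    images, concatenated embeddings, and a small decision network."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.bmode_encoder = models.resnet18(weights=None)
        self.semode_encoder = models.resnet18(weights=None)
        feat_dim = self.bmode_encoder.fc.in_features   # 512 for ResNet-18
        self.bmode_encoder.fc = nn.Identity()          # keep the embeddings
        self.semode_encoder.fc = nn.Identity()
        self.decision = nn.Sequential(
            nn.Linear(2 * feat_dim, 128), nn.ReLU(), nn.Linear(128, num_classes)
        )

    def forward(self, bmode: torch.Tensor, semode: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.bmode_encoder(bmode), self.semode_encoder(semode)], dim=1)
        return self.decision(fused)

model = TwoBranchFusion()
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 2]) -> benign vs malignant scores
```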
Affiliation(s)
- Sampa Misra
- Department of Electrical Engineering, Convergence IT Engineering, Mechanical Engineering, Medical Device Innovation Center, and Graduate School of Artificial Intelligence, Pohang University of Science and Technology, Pohang, South Korea
- Chiho Yoon
- Department of Electrical Engineering, Convergence IT Engineering, Mechanical Engineering, Medical Device Innovation Center, and Graduate School of Artificial Intelligence, Pohang University of Science and Technology, Pohang, South Korea
- Kwang-Ju Kim
- Daegu-Gyeongbuk Research Center, Electronics and Telecommunications Research Institute (ETRI), Daegu, South Korea
- Ravi Managuli
- Department of Bioengineering, University of Washington, Seattle, Washington, USA
- Richard G. Barr
- Department of Radiology, Northeastern Ohio Medical University, Youngstown, Ohio, USA
- Jongduk Baek
- School of Integrated Technology, Yonsei University, Seoul, South Korea
- Chulhong Kim
- Department of Electrical Engineering, Convergence IT Engineering, Mechanical Engineering, Medical Device Innovation Center, and Graduate School of Artificial Intelligence, Pohang University of Science and Technology, Pohang, South Korea
Collapse
|
10
Ghassemi N, Shoeibi A, Khodatars M, Heras J, Rahimi A, Zare A, Zhang YD, Pachori RB, Gorriz JM. Automatic diagnosis of COVID-19 from CT images using CycleGAN and transfer learning. Appl Soft Comput 2023; 144:110511. PMID: 37346824; PMCID: PMC10263244. DOI: 10.1016/j.asoc.2023.110511.
Abstract
The outbreak of the coronavirus disease (COVID-19) has changed the lives of most people on Earth. Given the high prevalence of this disease, its correct diagnosis, in order to quarantine patients, is of the utmost importance in fighting the pandemic. Among the various modalities used for diagnosis, medical imaging, especially computed tomography (CT) imaging, has been the focus of many previous studies due to its accuracy and availability. In addition, automation of diagnostic methods can be of great help to physicians. In this paper, a method based on pre-trained deep neural networks is presented which, by taking advantage of a cyclic generative adversarial network (CycleGAN) model for data augmentation, reaches state-of-the-art performance for the task at hand, i.e., 99.60% accuracy. Also, in order to evaluate the method, a dataset containing 3163 images from 189 patients was collected and labeled by physicians. Unlike prior datasets, the normal cases were collected from people suspected of having COVID-19 rather than from patients with other diseases, and this database is made publicly available. Moreover, the method's reliability is further evaluated with calibration metrics, and its decisions are interpreted with Grad-CAM, which also highlights suspicious regions as an additional output, making the method trustworthy and explainable.
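Grad-CAM, as used above for interpretation, can be implemented with a forward hook on the last convolutional block; the sketch below uses a stock ImageNet ResNet-18 and a random input tensor purely to show the mechanics, not the paper's trained network or data.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Minimal Grad-CAM over the last convolutional block of a stock ResNet-18.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1).eval()
store = {}

def save_activation(module, inputs, output):
    output.retain_grad()              # keep the gradient of this feature map
    store["features"] = output

model.layer4.register_forward_hook(save_activation)

image = torch.randn(1, 3, 224, 224)   # stand-in for a preprocessed CT slice
logits = model(image)
logits[0, logits.argmax()].backward() # gradient of the top-scoring class

feats = store["features"]                                     # (1, 512, 7, 7)
weights = feats.grad.mean(dim=(2, 3), keepdim=True)           # channel importance
cam = F.relu((weights * feats).sum(dim=1, keepdim=True))      # weighted combination
cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)      # normalise to [0, 1]
print(cam.shape)  # torch.Size([1, 1, 224, 224]) heat map over the input
```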
Affiliation(s)
- Navid Ghassemi
- Faculty of Electrical Engineering, FPGA Lab, K. N. Toosi University of Technology, Tehran, Iran
- Computer Engineering Department, Ferdowsi University of Mashhad, Mashhad, Iran
- Afshin Shoeibi
- Faculty of Electrical Engineering, FPGA Lab, K. N. Toosi University of Technology, Tehran, Iran
- Computer Engineering Department, Ferdowsi University of Mashhad, Mashhad, Iran
- Marjane Khodatars
- Department of Medical Engineering, Mashhad Branch, Islamic Azad University, Mashhad, Iran
- Jonathan Heras
- Department of Mathematics and Computer Science, University of La Rioja, La Rioja, Spain
- Alireza Rahimi
- Computer Engineering Department, Ferdowsi University of Mashhad, Mashhad, Iran
- Assef Zare
- Faculty of Electrical Engineering, Gonabad Branch, Islamic Azad University, Gonabad, Iran
- Yu-Dong Zhang
- School of Informatics, University of Leicester, Leicester, LE1 7RH, UK
- Ram Bilas Pachori
- Department of Electrical Engineering, Indian Institute of Technology Indore, Indore 453552, India
- J Manuel Gorriz
- Department of Signal Theory, Networking and Communications, Universidad de Granada, Spain
- Department of Psychiatry, University of Cambridge, UK
Collapse
|
11
Chen Y, Wang L, Dong X, Luo R, Ge Y, Liu H, Zhang Y, Wang D. Deep Learning Radiomics of Preoperative Breast MRI for Prediction of Axillary Lymph Node Metastasis in Breast Cancer. J Digit Imaging 2023; 36:1323-1331. PMID: 36973631; PMCID: PMC10042410. DOI: 10.1007/s10278-023-00818-9.
Abstract
The objective of this study is to develop a radiomic signature constructed from deep learning features and a nomogram for prediction of axillary lymph node metastasis (ALNM) in breast cancer patients. Preoperative magnetic resonance imaging data from 479 breast cancer patients with 488 lesions were studied. The included patients were divided into two cohorts by time (training/testing cohort, n = 366/122). Deep learning features were extracted from diffusion-weighted imaging-quantitatively measured apparent diffusion coefficient (DWI-ADC) imaging and dynamic contrast-enhanced MRI (DCE-MRI) by a pretrained neural network of DenseNet121. After the selection of both radiomic and clinicopathological features, deep learning signature and a nomogram were built for independent validation. Twenty-three deep learning features were automatically selected in the training cohort to establish the deep learning signature of ALNM. Three clinicopathological factors, including LN palpability (odds ratio (OR) = 6.04; 95% confidence interval (CI) = 3.06-12.54, P = 0.004), tumor size in MRI (OR = 1.45, 95% CI = 1.18-1.80, P = 0.104), and Ki-67 (OR = 1.01; 95% CI = 1.00-1.02, P = 0.099), were selected and combined with radiomic signature to build a combined nomogram. The nomogram showed excellent predictive ability for ALNM (AUC 0.80 and 0.71 in training and testing cohorts, respectively). The sensitivity, specificity, and accuracy were 65%, 80%, and 75%, respectively, in the testing cohort. MRI-based deep learning radiomics in patients with breast cancer could be used to predict ALNM, providing a noninvasive approach to structuring the treatment strategy.
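The deep-feature extraction step described here (a pretrained DenseNet121 used as a fixed feature extractor) can be sketched by replacing the network's classifier with an identity layer; the batch of random patches stands in for the DCE-MRI and ADC inputs, and the ImageNet weights are an assumption about the pretraining source.

```python
import torch
import torch.nn as nn
from torchvision import models

# Deep-feature extraction with a pretrained DenseNet121: replacing the classifier
# with identity makes the network return a 1024-d embedding per image/ROI.
densenet = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
densenet.classifier = nn.Identity()
densenet.eval()

with torch.no_grad():
    roi_batch = torch.randn(8, 3, 224, 224)    # stand-in for DCE-MRI / ADC patches
    deep_features = densenet(roi_batch)
print(deep_features.shape)  # torch.Size([8, 1024]) -> input to feature selection
```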
Affiliation(s)
- Yanhong Chen
- Department of Radiology, Xinhua Hospital, Shanghai Jiao Tong University School of Medicine, No. 1665 Kongjiang Road, 200092, Shanghai, China
- Lijun Wang
- Department of Radiology, Xinhua Hospital, Shanghai Jiao Tong University School of Medicine, No. 1665 Kongjiang Road, 200092, Shanghai, China
- Xue Dong
- Department of Radiology, Xinhua Hospital, Shanghai Jiao Tong University School of Medicine, No. 1665 Kongjiang Road, 200092, Shanghai, China
- Ran Luo
- Department of Radiology, Xinhua Hospital, Shanghai Jiao Tong University School of Medicine, No. 1665 Kongjiang Road, 200092, Shanghai, China
- Yaqiong Ge
- Department of Medicine, GE Healthcare, No. 1, Huatuo Road, 210000, Shanghai, China
- Huanhuan Liu
- Department of Radiology, Xinhua Hospital, Shanghai Jiao Tong University School of Medicine, No. 1665 Kongjiang Road, 200092, Shanghai, China
- Yuzhen Zhang
- Department of Radiology, Xinhua Hospital, Shanghai Jiao Tong University School of Medicine, No. 1665 Kongjiang Road, 200092, Shanghai, China
- Dengbin Wang
- Department of Radiology, Xinhua Hospital, Shanghai Jiao Tong University School of Medicine, No. 1665 Kongjiang Road, 200092, Shanghai, China
Collapse
|
12
Challenges, opportunities, and advances related to COVID-19 classification based on deep learning. Data Science and Management 2023. PMCID: PMC10063459. DOI: 10.1016/j.dsm.2023.03.005.
Abstract
The novel coronavirus disease, or COVID-19, is a hazardous disease that is endangering the lives of many people in more than two hundred countries and directly affects the lungs. In general, two main imaging modalities, computed tomography (CT) and chest X-ray (CXR), are used to achieve a speedy and reliable medical diagnosis. Identifying the coronavirus in medical images for diagnosis, assessment, and treatment is exceedingly difficult; it is demanding, time-consuming, and subject to human error. In biological disciplines, excellent performance can be achieved by employing artificial intelligence (AI) models. As a subfield of AI, deep learning (DL) networks have drawn considerably more attention than standard machine learning (ML) methods because DL models automatically carry out all the steps of feature extraction, feature selection, and classification. This study performs a comprehensive analysis of coronavirus classification with DL architectures using CXR and CT imaging modalities. Additionally, we discuss how transfer learning is helpful in this regard. Finally, the problem of designing and implementing a computer-aided diagnosis (CAD) system to find COVID-19 using DL approaches is highlighted as a future research possibility.
13
Jiang Y, Sui X, Ding Y, Xiao W, Zheng Y, Zhang Y. A semi-supervised learning approach with consistency regularization for tumor histopathological images analysis. Front Oncol 2023; 12:1044026. PMID: 36698401; PMCID: PMC9870542. DOI: 10.3389/fonc.2022.1044026.
Abstract
Introduction Manual inspection of histopathological images is important in clinical cancer diagnosis. Pathologists perform pathological diagnosis and prognostic evaluation through the microscopic examination of histopathological slices. This entire process is time-consuming, laborious, and challenging for pathologists. The modern use of whole-slide imaging, which scans histopathology slides into digital slices, and their analysis with computer-aided diagnosis is therefore an important problem. Methods To address the difficulty of labeling histopathological data and improve the flexibility of histopathological analysis in clinical applications, we propose a semi-supervised learning algorithm coupled with a consistency regularization strategy, called "Semi-supervised Histopathology Analysis Network" (Semi-His-Net), for automated normal-versus-tumor and subtype classification. Specifically, when given perturbed versions of the same image, the model should predict similar outputs. Based on this, the model can assign artificial labels to unlabeled data for subsequent model training, thereby effectively reducing the amount of labeled data required for training. Results Our Semi-His-Net is able to classify patches from breast cancer histopathological images into normal tissue and three different tumor subtypes, achieving an accuracy of 90%. The average AUC of cross-classification between tumors reached 0.893. Discussion To overcome the limitations of visual inspection of histopathology images by pathologists, such as long reading times and low repeatability, we have developed a deep learning-based framework (Semi-His-Net) for automatic classification of the subtypes contained in whole pathological images. This learning-based framework has great potential to improve the efficiency and repeatability of histopathological image diagnosis.
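The consistency-regularization idea stated above (perturbed versions of the same image should receive similar predictions) can be sketched as an auxiliary loss on unlabeled data; the Gaussian-noise perturbation, toy classifier, and loss weight below are illustrative assumptions, not Semi-His-Net's actual components.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def consistency_loss(model: nn.Module, unlabeled: torch.Tensor) -> torch.Tensor:
    """Two random perturbations of the same image should yield similar predictions
    (here, small Gaussian noise stands in for the perturbation)."""
    view1 = unlabeled + 0.1 * torch.randn_like(unlabeled)
    view2 = unlabeled + 0.1 * torch.randn_like(unlabeled)
    p1 = F.softmax(model(view1), dim=1)
    p2 = F.softmax(model(view2), dim=1)
    return F.mse_loss(p1, p2)

# Tiny classifier used only to make the example self-contained.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 4))  # 4 tissue classes
labeled = torch.randn(8, 3, 32, 32)
labels = torch.randint(0, 4, (8,))
unlabeled = torch.randn(16, 3, 32, 32)

supervised = F.cross_entropy(model(labeled), labels)
total_loss = supervised + 1.0 * consistency_loss(model, unlabeled)  # weighted sum
total_loss.backward()
```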
Affiliation(s)
- Yanyun Jiang
- School of Mathematics and Statistics, Shandong Normal University, Jinan, China
- Xiaodan Sui
- School of Mathematics and Statistics, Shandong Normal University, Jinan, China
- Yanhui Ding
- School of Mathematics and Statistics, Shandong Normal University, Jinan, China
- Wei Xiao
- Shandong Provincial Hospital, Shandong University, Jinan, China
- Yuanjie Zheng
- School of Mathematics and Statistics, Shandong Normal University, Jinan, China
- Yongxin Zhang
- School of Mathematics and Statistics, Shandong Normal University, Jinan, China
- Correspondence: Yuanjie Zheng; Yongxin Zhang
Collapse
|
14
Deep learning-based important weights-only transfer learning approach for COVID-19 CT-scan classification. Appl Intell 2023; 53:7201-7215. PMID: 35875199; PMCID: PMC9289654. DOI: 10.1007/s10489-022-03893-7.
Abstract
COVID-19 has become a pandemic for the entire world, and it has significantly affected the world economy. The importance of early detection and treatment of the infection cannot be overstated, yet traditional diagnostic techniques take considerable time to detect the infection. Although numerous deep learning-based automated solutions have recently been developed in this regard, the limited computational and battery power of resource-constrained devices makes it difficult to deploy trained models for real-time inference. In this paper, to detect the presence of COVID-19 in CT-scan images, an important weights-only transfer learning method is proposed for devices with limited run-time resources. In the proposed method, the pre-trained models are made point-of-care-device friendly by pruning the less important weight parameters of the model. The experiments were performed on two popular models, VGG16 and ResNet34, and the empirical results showed that the pruned ResNet34 model achieved 95.47% accuracy, 0.9216 sensitivity, 0.9567 F-score, and 0.9942 specificity with 41.96% fewer FLOPs and 20.64% fewer weight parameters on the SARS-CoV-2 CT-scan dataset. These results show that the proposed method significantly reduces the run-time resource requirements of computationally intensive models and makes them ready to be utilized on point-of-care devices.
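Weight pruning of the kind described here can be sketched with PyTorch's pruning utilities; the sketch applies global L1-magnitude pruning to a ResNet-34's convolutional weights as a stand-in for the paper's importance criterion, with the 20% pruning ratio chosen arbitrarily (note that zeroed weights save compute only with sparse or structured execution).

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune
from torchvision import models

# Keep-important-weights-only idea, sketched with global magnitude pruning:
# the smallest 20% of convolutional weights are zeroed before deployment.
model = models.resnet34(weights=None)
conv_params = [(m, "weight") for m in model.modules() if isinstance(m, nn.Conv2d)]

prune.global_unstructured(conv_params, pruning_method=prune.L1Unstructured, amount=0.2)

# Make the pruning permanent (removes the masks, leaving sparse weight tensors).
for module, name in conv_params:
    prune.remove(module, name)

zeros = sum((m.weight == 0).sum().item() for m, _ in conv_params)
total = sum(m.weight.numel() for m, _ in conv_params)
print(f"{100 * zeros / total:.1f}% of conv weights pruned")
```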
15
Bhowal P, Sen S, Sarkar R. A two-tier feature selection method using Coalition game and Nystrom sampling for screening COVID-19 from chest X-Ray images. Journal of Ambient Intelligence and Humanized Computing 2023; 14:3659-3674. PMID: 34567278; PMCID: PMC8455233. DOI: 10.1007/s12652-021-03491-4.
Abstract
The world is still under the threat of different strains of the coronavirus, and the pandemic situation is far from over. The method that is widely used for the detection of COVID-19 is reverse transcription polymerase chain reaction (RT-PCR), which is time-consuming, prone to manual errors, and has poor precision. Although many nations across the globe have begun mass immunization, the COVID-19 vaccine will take a long time to reach everyone. Artificial intelligence (AI) and computer-aided diagnosis (CAD) have been applied in the domain of medical imaging for a long time, and it is quite evident that the use of CAD in the detection of COVID-19 is inevitable. The main objective of this paper is to use a convolutional neural network (CNN) and a novel feature selection technique to analyze chest X-ray (CXR) images for the detection of COVID-19. We propose a novel two-tier feature selection method, which increases the accuracy of the overall classification model used for screening COVID-19 CXRs. Filter feature selection models are often more effective than wrapper methods, as wrapper methods tend to be computationally more expensive and are not useful for large datasets dealing with a large number of features. However, most filter methods do not take into consideration how a group of features would work together; rather, they just look at the features individually and decide on a score. We use the approximate Shapley value, a concept from coalition game theory, to deal with this problem. Further, in the case of a large dataset, it is important to work with shorter embeddings of the features, so we use CUR decomposition and Nystrom sampling to further reduce the feature space. To check the efficacy of this two-tier feature selection method, we applied it to the features extracted by three standard deep learning models, namely VGG16, Xception and InceptionV3, where the features were extracted from the CXR images of COVID-19 datasets, and we found that the selection procedure works quite well for the features extracted by Xception and InceptionV3. The source code of this work is available at https://github.com/subhankar01/covidfs-aihc.
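The dimensionality-reduction step named here, Nystrom sampling, is available in scikit-learn as a kernel approximation; the sketch below maps high-dimensional deep features to a shorter embedding before a linear SVM, with synthetic data, an RBF kernel, and 128 components chosen only for illustration (the paper's Shapley-value filter and CUR decomposition are not reproduced).

```python
import numpy as np
from sklearn.kernel_approximation import Nystroem
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Shortening high-dimensional CNN feature embeddings with Nystrom sampling
# before classification (placeholder data, not the CXR features from the paper).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2048))        # e.g., 500 images, 2048-d deep features
y = rng.integers(0, 2, size=500)        # COVID vs non-COVID (synthetic labels)

clf = make_pipeline(
    Nystroem(kernel="rbf", n_components=128, random_state=0),  # 2048-d -> 128-d
    LinearSVC(),
)
clf.fit(X, y)
print("train accuracy:", clf.score(X, y))
```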
Affiliation(s)
- Pratik Bhowal
- Department of Instrumentation and Electronics Engineering, Jadavpur University, Kolkata, India
- Subhankar Sen
- Department of Computer Science and Engineering, Manipal University Jaipur, Jaipur, India
- Ram Sarkar
- Department of Computer Science and Engineering, Jadavpur University, Kolkata, India
Collapse
|
16
Bhatele KR, Jha A, Tiwari D, Bhatele M, Sharma S, Mithora MR, Singhal S. COVID-19 Detection: A Systematic Review of Machine and Deep Learning-Based Approaches Utilizing Chest X-Rays and CT Scans. Cognit Comput 2022; 16:1-38. PMID: 36593991; PMCID: PMC9797382. DOI: 10.1007/s12559-022-10076-6.
Abstract
This review presents the state-of-the-art machine and deep learning-based COVID-19 detection approaches utilizing chest X-rays or computed tomography (CT) scans. This study aims to systematically scrutinize, and to discuss the challenges and limitations of, the existing state-of-the-art research published in this domain from March 2020 to August 2021. It also presents a comparative analysis of the performance of four widely used deep transfer learning (DTL) models, VGG16, VGG19, ResNet50, and DenseNet, over a local COVID-19 CT scans dataset and a global chest X-ray dataset. A brief illustration of the most commonly used chest X-ray and CT scan datasets of COVID-19 patients utilized in state-of-the-art COVID-19 detection approaches is also presented for future research. Research databases such as IEEE Xplore, PubMed, and Web of Science were searched exhaustively to carry out this survey. For the comparative analysis, the four deep transfer learning models are first fine-tuned and trained using the augmented local CT scans and global chest X-ray dataset in order to observe their performance. This review summarizes major findings, such as the AI technique employed, the type of classification performed, the datasets used, and the results in terms of accuracy, specificity, sensitivity, F1 score, etc., along with the limitations and future work for COVID-19 detection, in a tabular manner for conciseness. The performance analysis of the four deep transfer learning models affirms that the Visual Geometry Group 19 (VGG19) model delivered the best performance over both the COVID-19 local CT scans dataset and the global chest X-ray dataset.
Affiliation(s)
- Anand Jha
- RJIT BSF Academy, Tekanpur, Gwalior, India
17
Mohan R, Kadry S, Rajinikanth V, Majumdar A, Thinnukool O. Automatic Detection of Tuberculosis Using VGG19 with Seagull-Algorithm. Life (Basel, Switzerland) 2022; 12:life12111848. PMID: 36430983; PMCID: PMC9692667. DOI: 10.3390/life12111848.
Abstract
For various reasons, the incidence rate of communicable diseases in humans is steadily rising, and timely detection and management will slow their spread. Tuberculosis (TB) is a severe communicable illness caused by the bacterium Mycobacterium tuberculosis (M. tuberculosis), which predominantly affects the lungs and causes severe respiratory problems. Given its significance, several clinical-level detection procedures for TB have been suggested, including lung assessment with chest X-ray images. The proposed work aims to develop an automatic TB detection system to assist the pulmonologist in confirming the severity of the disease, decision-making, and treatment execution. The proposed system employs a pre-trained VGG19 with the following phases: (i) image pre-processing, (ii) mining of deep features, (iii) enhancement of the X-ray images with chosen procedures and mining of the handcrafted features, (iv) feature optimization using the Seagull Algorithm and serial concatenation, and (v) binary classification and validation. The classification is executed with 10-fold cross-validation in this work, and the proposed work is investigated using MATLAB® software. The proposed research work was executed using the concatenated deep and handcrafted features, which provided a classification accuracy of 98.6190% with the SVM-Medium Gaussian (SVM-MG) classifier.
Affiliation(s)
- Ramya Mohan
- Department of Computer Science and Engineering, Division of Research and Innovation, Saveetha School of Engineering, SIMATS, Chennai 602105, India
- Seifedine Kadry
- Department of Applied Data Science, Noroff University College, 4612 Kristiansand, Norway
- Artificial Intelligence Research Center (AIRC), College of Engineering and Information Technology, Ajman University, Ajman 346, United Arab Emirates
- Department of Electrical and Computer Engineering, Lebanese American University, Byblos 1401, Lebanon
- Venkatesan Rajinikanth
- Department of Computer Science and Engineering, Division of Research and Innovation, Saveetha School of Engineering, SIMATS, Chennai 602105, India
- Arnab Majumdar
- Faculty of Engineering, Imperial College London, London SW7 2AZ, UK
- Orawit Thinnukool
- Faculty of Engineering, Imperial College London, London SW7 2AZ, UK
- College of Arts, Media, and Technology, Chiang Mai University, Chiang Mai 50200, Thailand
Collapse
|
18
Tuberculosis Detection in Chest Radiographs Using Spotted Hyena Algorithm Optimized Deep and Handcrafted Features. Computational Intelligence and Neuroscience 2022; 2022:9263379. PMID: 36248926; PMCID: PMC9560840. DOI: 10.1155/2022/9263379.
Abstract
Lung abnormality in humans is steadily increasing due to various causes, and early recognition and treatment are widely recommended. Tuberculosis (TB) is one such lung disease, and due to its occurrence rate and severity, the World Health Organization (WHO) lists TB among the top ten diseases which lead to death. Clinical-level detection of TB is usually performed using biomedical imaging methods, and chest X-ray is a commonly adopted imaging modality. This work aims to develop an automated procedure to detect TB from X-ray images using VGG-UNet-supported joint segmentation and classification. The various phases of the proposed scheme involve: (i) image collection and resizing, (ii) deep-feature mining, (iii) segmentation of the lung section, (iv) local binary pattern (LBP) generation and feature extraction, (v) optimal feature selection using the spotted hyena algorithm (SHA), (vi) serial feature concatenation, and (vii) classification and validation. This research considered 3000 test images (1500 healthy and 1500 TB class) for the assessment, and the proposed experiment was implemented using Matlab®. This work implements pretrained models to detect TB in X-rays with improved accuracy, and the research achieved a classification accuracy of >99% with a fine-tree classifier.
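The handcrafted-feature branch described above uses local binary patterns; a minimal scikit-image sketch of extracting a uniform-LBP histogram and serially concatenating it with a (placeholder) deep-feature vector is shown below, with the radius, number of points, and feature sizes chosen arbitrarily.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_image: np.ndarray, radius: int = 1, n_points: int = 8) -> np.ndarray:
    """Uniform-LBP texture descriptor of a (segmented) lung region, returned as a
    normalised histogram suitable for concatenation with deep features."""
    lbp = local_binary_pattern(gray_image, n_points, radius, method="uniform")
    n_bins = n_points + 2                         # uniform patterns + one "non-uniform" bin
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

# Example on a synthetic grayscale chest X-ray stand-in.
image = (np.random.rand(256, 256) * 255).astype(np.uint8)
handcrafted = lbp_histogram(image)
deep_features = np.random.rand(512)               # placeholder CNN embedding
combined = np.concatenate([handcrafted, deep_features])  # serial feature concatenation
print(combined.shape)  # (522,)
```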
19
Alsaaidah B, Al-Hadidi MR, Al-Nsour H, Masadeh R, AlZubi N. Comprehensive Survey of Machine Learning Systems for COVID-19 Detection. J Imaging 2022; 8:267. PMID: 36286361; PMCID: PMC9604704. DOI: 10.3390/jimaging8100267.
Abstract
The last two years are considered the most crucial and critical period of the COVID-19 pandemic affecting most life aspects worldwide. This virus spreads quickly within a short period, increasing the fatality rate associated with the virus. From a clinical perspective, several diagnosis methods are carried out for early detection to avoid virus propagation. However, the capabilities of these methods are limited and have various associated challenges. Consequently, many studies have been performed for COVID-19 automated detection without involving manual intervention and allowing an accurate and fast decision. As is the case with other diseases and medical issues, Artificial Intelligence (AI) provides the medical community with potential technical solutions that help doctors and radiologists diagnose based on chest images. In this paper, a comprehensive review of the mentioned AI-based detection solution proposals is conducted. More than 200 papers are reviewed and analyzed, and 145 articles have been extensively examined to specify the proposed AI mechanisms with chest medical images. A comprehensive examination of the associated advantages and shortcomings is illustrated and summarized. Several findings are concluded as a result of a deep analysis of all the previous works using machine learning for COVID-19 detection, segmentation, and classification.
Affiliation(s)
- Bayan Alsaaidah
- Department of Computer Science, Prince Abdullah bin Ghazi Faculty of Information Technology and Communications, Al-Balqa Applied University, Salt 19117, Jordan
- Moh’d Rasoul Al-Hadidi
- Department of Electrical Engineering, Electrical Power Engineering and Computer Engineering, Faculty of Engineering, Al-Balqa Applied University, Salt 19117, Jordan
- Heba Al-Nsour
- Department of Computer Science, Prince Abdullah bin Ghazi Faculty of Information Technology and Communications, Al-Balqa Applied University, Salt 19117, Jordan
- Raja Masadeh
- Computer Science Department, The World Islamic Sciences and Education University, Amman 11947, Jordan
- Nael AlZubi
- Department of Electrical Engineering, Electrical Power Engineering and Computer Engineering, Faculty of Engineering, Al-Balqa Applied University, Salt 19117, Jordan
Collapse
|
20
A segmentation-based sequence residual attention model for KRAS gene mutation status prediction in colorectal cancer. Appl Intell 2022. DOI: 10.1007/s10489-022-04011-3.
21
Gao MZ, Chou YH, Chang YZ, Pai JY, Bair H, Pai S, Yu NC. Designing Mobile Epidemic Prevention Medical Stations for the COVID-19 Pandemic and International Medical Aid. International Journal of Environmental Research and Public Health 2022; 19:ijerph19169959. PMID: 36011595; PMCID: PMC9407823. DOI: 10.3390/ijerph19169959.
Abstract
The demand for mobile epidemic prevention medical stations originated from the rapid spread of the COVID-19 pandemic. In order to reduce the infection risk of medical practitioners and provide flexible medical facilities in response to the variable needs of the pandemic, this research, begun in February 2020, aimed to design mobile medical stations for COVID-19 epidemic prevention. The mobile medical stations include a negative-pressure isolation ward, a positive-pressure swabbing station, a fever clinic, and a laboratory. In Taiwan, many medical institutions used the mobile swabbing station design of this study to carry out COVID-19 screening pre-tests. Internationally, this study assisted Palau in setting up medical stations and providing anti-epidemic goods and materials. The design not only provides a highly flexible and safe medical environment, but the benefits of screening can also serve as resources for medical research, forming an economic cycle for operational sustainability. In addition, the design can also be used during non-epidemic periods as a healthcare station for rural areas or as a long-term community medical station.
Affiliation(s)
- Mi-Zuo Gao
- Institute of Medicine, Chung Shan Medical University, No. 110, Sec. 1, Jianguo N. Rd., South Dist., Taichung City 40201, Taiwan
- Ying-Hsiang Chou
- Radiotherapy, Department of Medical Imaging and Radiological Sciences, Chung Shan Medical University Hospital, Chung Shan Medical University, No. 110, Sec. 1, Jianguo N. Rd., South Dist., Taichung City 40201, Taiwan
- Yan-Zin Chang
- Institute of Medicine, Chung Shan Medical University, No. 110, Sec. 1, Jianguo N. Rd., South Dist., Taichung City 40201, Taiwan
- Jar-Yuan Pai
- Department of Health Policy and Management, Chung Shan Medical University Hospital, Chung Shan Medical University, No. 110, Sec. 1, Jianguo N. Rd., South Dist., Taichung City 40202, Taiwan
- Henry Bair
- Byers Eye Institute, Department of Ophthalmology, Stanford University School of Medicine, 450 Jane Stanford Way, Stanford, CA 94305, USA
- Sharon Pai
- Department of Health Science, University of Washington, 4218 Roosevelt Way, Seattle, WA 98105, USA
- Nai-Chi Yu
- Department of Health Policy and Management, Chung Shan Medical University Hospital, Chung Shan Medical University, No. 110, Sec. 1, Jianguo N. Rd., South Dist., Taichung City 40202, Taiwan
Collapse
|
22
|
Deep Residual Learning Image Recognition Model for Skin Cancer Disease Detection and Classification. ACTA INFORMATICA PRAGENSIA 2022. [DOI: 10.18267/j.aip.189] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
|
23
|
Hassan H, Ren Z, Zhou C, Khan MA, Pan Y, Zhao J, Huang B. Supervised and weakly supervised deep learning models for COVID-19 CT diagnosis: A systematic review. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 218:106731. [PMID: 35286874 PMCID: PMC8897838 DOI: 10.1016/j.cmpb.2022.106731] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/01/2021] [Revised: 01/28/2022] [Accepted: 03/03/2022] [Indexed: 05/05/2023]
Abstract
Artificial intelligence (AI) and computer vision (CV) methods have become reliable for extracting features from radiological images, aiding COVID-19 diagnosis ahead of pathogenic tests and saving critical time for disease management and control. This review article therefore organizes a large body of deep learning-based COVID-19 computed tomography (CT) diagnosis research, providing a baseline for future work. Compared with previous review articles on the topic, this study categorizes the collected literature differently, using a multi-level arrangement. For this purpose, 71 relevant studies were found using trustworthy databases and search engines, including Google Scholar, IEEE Xplore, Web of Science, PubMed, Science Direct, and Scopus. We classify the selected literature into multi-level machine learning groups, such as supervised and weakly supervised learning. The review reveals that weak supervision has been adopted more extensively for COVID-19 CT diagnosis than supervised learning. Weakly supervised (conventional transfer learning) techniques can be used effectively in real-time clinical practice by reusing learned features rather than over-parameterizing standard models. Few-shot and self-supervised learning are recent trends for addressing data scarcity and improving model efficacy. Deep learning (AI) based models are mainly used for disease management and control, so this review helps readers grasp the current perspective on deep learning approaches in ongoing COVID-19 CT diagnosis research.
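A minimal transfer-learning sketch in PyTorch, illustrating the feature-reuse idea the review highlights: an ImageNet-pretrained backbone is frozen and only a small COVID/non-COVID head is trained. The backbone choice, learning rate, and two-class setup are illustrative assumptions, not code from any reviewed study, and the weights string assumes a recent torchvision release.

```python
# Minimal transfer-learning sketch (not from the reviewed papers): reuse frozen
# pretrained features and train only a small two-class head.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")   # pretrained backbone (torchvision >= 0.13 API)
for param in model.parameters():                   # freeze the reused features
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)      # new head: COVID-19 vs non-COVID

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One optimisation step over a batch of preprocessed CT slices."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```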
Collapse
Affiliation(s)
- Haseeb Hassan
- College of Big data and Internet, Shenzhen Technology University, Shenzhen, 518118, China; Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Shenzhen University Health Science Center, Shenzhen, China; College of Applied Sciences, Shenzhen University, Shenzhen, 518060, China
| | - Zhaoyu Ren
- College of Big data and Internet, Shenzhen Technology University, Shenzhen, 518118, China
| | - Chengmin Zhou
- College of Big data and Internet, Shenzhen Technology University, Shenzhen, 518118, China
| | - Muazzam A Khan
- Department of Computer Sciences, Quaid-i-Azam University, Islamabad, Pakistan
| | - Yi Pan
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, China
| | - Jian Zhao
- College of Big data and Internet, Shenzhen Technology University, Shenzhen, 518118, China.
| | - Bingding Huang
- College of Big data and Internet, Shenzhen Technology University, Shenzhen, 518118, China.
| |
Collapse
|
24
|
Yu Z, Liu Y, Yu S, Wang R, Song Z, Yan Y, Li F, Wang Z, Tian F. Automatic Detection Method of Dairy Cow Feeding Behaviour Based on YOLO Improved Model and Edge Computing. SENSORS 2022; 22:s22093271. [PMID: 35590962 PMCID: PMC9102446 DOI: 10.3390/s22093271] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/04/2022] [Revised: 04/22/2022] [Accepted: 04/22/2022] [Indexed: 02/02/2023]
Abstract
The feeding behaviour of cows is an essential indicator of their health in dairy farming, so precise and quick assessment of feeding behaviour is critical for judging cow health status. This research presents a method for monitoring dairy cow feeding behaviour using edge computing and deep learning algorithms tailored to the characteristics of that behaviour. Images of cow feeding behaviour were captured and processed in real time on an edge computing device. A DenseResNet-You Only Look Once (DRN-YOLO) deep learning method was presented to address the low accuracy and the sensitivity to the open farm environment of existing cow feeding behaviour detection algorithms. Building on YOLOv4, the model's feature extraction was improved by replacing the CSPDarknet backbone with the self-designed DRNet backbone, and by using multiple feature scales and the Spatial Pyramid Pooling (SPP) structure to enrich multi-scale semantic feature interactions, enabling recognition of cow feeding behaviour in the farm feeding environment. The experimental results showed that DRN-YOLO improved accuracy, recall, and mAP by 1.70%, 1.82%, and 0.97%, respectively, compared to YOLOv4. These results can effectively address the low recognition accuracy and insufficient feature extraction of traditional methods for analysing dairy cow feeding behaviour in complex breeding environments, and provide an important reference for intelligent animal husbandry and precision breeding.
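A small sketch of a Spatial Pyramid Pooling (SPP) block of the kind used in YOLOv4-style backbones, included only to illustrate the multi-scale pooling idea; the self-designed DRNet backbone is not public and is not reproduced here, and the pool sizes are the conventional YOLOv4 defaults.

```python
# Sketch of an SPP block: max-pool the same feature map at several kernel sizes
# (stride 1, padded so spatial size is preserved), then concatenate and fuse.
import torch
import torch.nn as nn

class SPPBlock(nn.Module):
    def __init__(self, channels, pool_sizes=(5, 9, 13)):
        super().__init__()
        self.pools = nn.ModuleList(
            nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2) for k in pool_sizes
        )
        self.fuse = nn.Conv2d(channels * (len(pool_sizes) + 1), channels, kernel_size=1)

    def forward(self, x):
        pooled = [x] + [pool(x) for pool in self.pools]
        return self.fuse(torch.cat(pooled, dim=1))

features = torch.randn(1, 256, 13, 13)   # dummy backbone output
print(SPPBlock(256)(features).shape)     # torch.Size([1, 256, 13, 13])
```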
Collapse
Affiliation(s)
- Zhenwei Yu
- College of Mechanical and Electronic Engineering, Shandong Agricultural University, Tai’an 271018, China; (Z.Y.); (Z.S.); (Y.Y.); (F.L.)
| | - Yuehua Liu
- Shandong Provincial Key Laboratory of Horticultural Machineries and Equipment, Tai’an 271018, China;
- Shandong Provincial Engineering Laboratory of Agricultural Equipment Intelligence, Tai’an 271018, China
| | - Sufang Yu
- College of Life Sciences, Shangdong Agriculture University, Tai’an 271018, China;
| | - Ruixue Wang
- Chinese Academy of Agricultural Mechanization Sciences, Beijing 100083, China;
| | - Zhanhua Song
- College of Mechanical and Electronic Engineering, Shandong Agricultural University, Tai’an 271018, China; (Z.Y.); (Z.S.); (Y.Y.); (F.L.)
| | - Yinfa Yan
- College of Mechanical and Electronic Engineering, Shandong Agricultural University, Tai’an 271018, China; (Z.Y.); (Z.S.); (Y.Y.); (F.L.)
| | - Fade Li
- College of Mechanical and Electronic Engineering, Shandong Agricultural University, Tai’an 271018, China; (Z.Y.); (Z.S.); (Y.Y.); (F.L.)
| | - Zhonghua Wang
- College of Animal Science and Technology, Shangdong Agriculture University, Tai’an 271018, China;
| | - Fuyang Tian
- College of Mechanical and Electronic Engineering, Shandong Agricultural University, Tai’an 271018, China; (Z.Y.); (Z.S.); (Y.Y.); (F.L.)
- Correspondence:
| |
Collapse
|
25
|
A Reliable Machine Intelligence Model for Accurate Identification of Cardiovascular Diseases Using Ensemble Techniques. JOURNAL OF HEALTHCARE ENGINEERING 2022; 2022:2585235. [PMID: 35299686 PMCID: PMC8923755 DOI: 10.1155/2022/2585235] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/29/2021] [Revised: 05/30/2021] [Accepted: 08/13/2021] [Indexed: 11/19/2022]
Abstract
Machine intelligence can convert raw clinical data into an informational source that supports decisions and predictions, making it more likely that cardiovascular diseases are addressed as early as possible, before they shorten the lifespan. Artificial intelligence has taken research on disease diagnosis and identification to another level, yet despite the many methods and models already available, classification and forecasting accuracy can still be improved by selecting the right combination of models and features. To achieve a better solution, this paper proposes a reliable ensemble model. The proposed model achieved an accuracy of 96.75% on the cardiovascular disease dataset obtained from the Mendeley Data Center, 93.39% on the comprehensive dataset collected from IEEE DataPort, and 88.24% on the Cleveland dataset, supporting earlier protection of an individual's health.
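A generic soft-voting ensemble sketch in scikit-learn. The abstract does not list the paper's base learners, so the three classifiers below and the synthetic tabular data are illustrative assumptions only.

```python
# Soft-voting ensemble sketch on synthetic tabular data standing in for a
# cardiovascular-disease dataset; base learners are assumed, not the paper's.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=13, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
    ],
    voting="soft",  # average predicted probabilities across the base models
)
ensemble.fit(X_train, y_train)
print("ensemble accuracy:", ensemble.score(X_test, y_test))
```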
Collapse
|
26
|
Binh NT, Hien NM, Tin DT. Improving U-Net architecture and graph cuts optimization to classify arterioles and venules in retina fundus images. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2022. [DOI: 10.3233/jifs-212259] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
The central retinal artery and its branches supply blood to the inner retina. Because the retina shares a similar vascular structure with organs such as the heart, kidneys, and brain, vascular manifestations in the retina indirectly reflect vascular changes and damage in those organs. Increased venular caliber is associated with diabetic retinopathy and stroke risk, and the severity of these diseases depends on changes in the arterioles and venules. The ratio between the calibers of arterioles and venules (AVR) varies and is considered a useful diagnostic indicator of several associated health problems. However, the classification task is not easy because of the limited information in the features used to label retinal vessels as arterioles or venules. This paper proposes a method to classify retinal vessels into arterioles and venules based on an improved U-Net architecture and graph cuts. The accuracy of the proposed method is about 97.6%, outperforming other methods on the RITE and AVRDB datasets.
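A minimal U-Net-style encoder/decoder sketch in PyTorch for a three-class vessel map (background, arteriole, venule). The paper's architectural improvements and the graph-cuts refinement are not included; layer widths and depth are illustrative.

```python
# Tiny U-Net-style network: one downsampling stage, one skip connection,
# and a per-pixel three-class head.
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, classes=3):  # background / arteriole / venule
        super().__init__()
        self.enc1 = double_conv(3, 32)
        self.enc2 = double_conv(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = double_conv(64, 32)          # 64 = upsampled 32 + skip 32
        self.head = nn.Conv2d(32, classes, 1)

    def forward(self, x):
        s1 = self.enc1(x)
        s2 = self.enc2(self.pool(s1))
        up = self.up(s2)
        return self.head(self.dec(torch.cat([up, s1], dim=1)))

print(TinyUNet()(torch.randn(1, 3, 128, 128)).shape)  # torch.Size([1, 3, 128, 128])
```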
Collapse
Affiliation(s)
- Nguyen Thanh Binh
- Department of Information Systems, Faculty of Computer Science and Engineering, Ho Chi Minh City University of Technology (HCMUT), Ho Chi Minh City, Vietnam
- Vietnam National University Ho Chi Minh City, Linh Trung Ward, Thu Duc District, Ho Chi Minh City, Vietnam
| | - Nguyen Mong Hien
- Department of Information Systems, Faculty of Computer Science and Engineering, Ho Chi Minh City University of Technology (HCMUT), Ho Chi Minh City, Vietnam
- Tra Vinh University, Vietnam
| | - Dang Thanh Tin
- Vietnam National University Ho Chi Minh City, Linh Trung Ward, Thu Duc District, Ho Chi Minh City, Vietnam
- Information Systems Engineering Laboratory, Faculty of Electrical and Electronics Engineering, Ho Chi Minh City University of Technology (HCMUT), Ho Chi Minh City, Vietnam
| |
Collapse
|
27
|
Kini AS, Gopal Reddy AN, Kaur M, Satheesh S, Singh J, Martinetz T, Alshazly H. Ensemble Deep Learning and Internet of Things-Based Automated COVID-19 Diagnosis Framework. CONTRAST MEDIA & MOLECULAR IMAGING 2022; 2022:7377502. [PMID: 35280708 PMCID: PMC8896964 DOI: 10.1155/2022/7377502] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/29/2021] [Accepted: 01/24/2022] [Indexed: 12/17/2022]
Abstract
Coronavirus disease (COVID-19) is a viral infection caused by SARS-CoV-2. Modalities such as computed tomography (CT) have been successfully utilized for early-stage diagnosis of COVID-19-infected patients. Recently, many researchers have utilized deep learning models for the automated screening of COVID-19 suspected cases. An ensemble deep learning and Internet of Things (IoT) based framework is proposed for screening COVID-19 suspected cases: three well-known pretrained deep learning models are ensembled, medical IoT devices are used to collect the CT scans, and automated diagnoses are performed on IoT servers. The proposed framework is compared with thirteen competitive models on a four-class dataset. Experimental results reveal that the proposed ensembled deep learning model yielded 98.98% accuracy. Moreover, the model outperforms all competitive models on the other performance metrics, achieving 98.56% precision, 98.58% recall, 98.75% F-score, and 98.57% AUC. Therefore, the proposed framework can accelerate COVID-19 diagnosis.
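A sketch of probability averaging over three pretrained CNNs, which is one common way to ensemble pretrained backbones. The abstract does not name the three models, so ResNet-50, DenseNet-121, and VGG-16, and the four-class head, are assumptions; the IoT data-collection side is not shown.

```python
# Probability-averaging ensemble of three pretrained torchvision backbones,
# each fitted with an assumed 4-class head.
import torch
import torch.nn as nn
from torchvision import models

def make_head(backbone, n_classes=4):
    """Replace the classifier of a torchvision model with an n-class head."""
    if hasattr(backbone, "fc"):                                   # ResNet-style
        backbone.fc = nn.Linear(backbone.fc.in_features, n_classes)
    elif isinstance(backbone.classifier, nn.Linear):              # DenseNet-style
        backbone.classifier = nn.Linear(backbone.classifier.in_features, n_classes)
    else:                                                         # VGG-style Sequential
        backbone.classifier[-1] = nn.Linear(backbone.classifier[-1].in_features, n_classes)
    return backbone

members = [
    make_head(models.resnet50(weights="IMAGENET1K_V1")),
    make_head(models.densenet121(weights="IMAGENET1K_V1")),
    make_head(models.vgg16(weights="IMAGENET1K_V1")),
]
for m in members:
    m.eval()

@torch.no_grad()
def ensemble_predict(ct_batch):
    probs = [torch.softmax(m(ct_batch), dim=1) for m in members]
    return torch.stack(probs).mean(dim=0)             # average the class probabilities

print(ensemble_predict(torch.randn(2, 3, 224, 224)).shape)  # torch.Size([2, 4])
```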
Collapse
Affiliation(s)
- Anita S. Kini
- Manipal Institute of Technology MAHE, Manipal, Karnataka 576104, India
| | - A. Nanda Gopal Reddy
- Department of IT, Mahaveer Institute of Science and Technology, Hyderabad, Telangana 500005, India
| | - Manjit Kaur
- School of Electrical Engineering and Computer Science, Gwangju Institute of Science and Technology, Gwangju 61005, Republic of Korea
| | - S. Satheesh
- Department of Electronics and Communication Engineering, Malineni Lakshmaiah Women's Engineering College, Guntur, Andhra Pradesh 522017, India
| | - Jagendra Singh
- School of Computer Science Engineering and Technology, Bennett University, Greater Noida-203206, India
| | - Thomas Martinetz
- Institute for Neuro- and Bioinformatics, University of Lübeck, Lübeck 23562, Germany
| | - Hammam Alshazly
- Faculty of Computers and Information, South Valley University, Qena 83523, Egypt
| |
Collapse
|
28
|
Nneji GU, Deng J, Monday HN, Hossin MA, Obiora S, Nahar S, Cai J. COVID-19 Identification from Low-Quality Computed Tomography Using a Modified Enhanced Super-Resolution Generative Adversarial Network Plus and Siamese Capsule Network. Healthcare (Basel) 2022; 10:healthcare10020403. [PMID: 35207017 PMCID: PMC8871692 DOI: 10.3390/healthcare10020403] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2022] [Revised: 02/09/2022] [Accepted: 02/17/2022] [Indexed: 12/22/2022] Open
Abstract
Computed tomography has become a vital screening method for detecting coronavirus disease 2019 (COVID-19). Given the high mortality rate and the overload faced by domain experts, radiologists, and clinicians, a computerized diagnostic technique is needed. To this end, we improve COVID-19 identification by tackling the low quality and resolution of computed tomography images. We report a technique, a modified enhanced super-resolution generative adversarial network, for producing higher-resolution computed tomography images. Furthermore, instead of increasing network depth and complexity to boost performance, we incorporate a Siamese capsule network that extracts distinctive features for COVID-19 identification. Qualitative and quantitative results establish that the proposed model is effective, accurate, and robust for COVID-19 screening. We demonstrate the model on the publicly available COVID-CT dataset, which contains 349 COVID-19 and 463 non-COVID-19 computed tomography images. The proposed method achieves an accuracy of 97.92%, sensitivity of 98.85%, specificity of 97.21%, AUC of 98.03%, precision of 98.44%, and F1 score of 97.52%. According to the experimental results, our approach obtains state-of-the-art performance, which is helpful for COVID-19 screening. This conceptual framework is intended to play an influential role in addressing COVID-19 and related ailments when only a few datasets are available.
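A sketch of the Siamese idea only: a shared encoder maps two CT crops to embeddings whose distance drives a contrastive loss. The capsule layers and the modified ESRGAN super-resolution stage are not reproduced, and the encoder, margin, and input sizes are assumptions.

```python
# Shared-encoder Siamese sketch with a contrastive loss; not the paper's
# capsule-based architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 64),
)

def contrastive_loss(x1, x2, same_label, margin=1.0):
    """Pull embeddings of same-class pairs together, push different pairs apart."""
    d = F.pairwise_distance(encoder(x1), encoder(x2))
    return torch.mean(same_label * d.pow(2) +
                      (1 - same_label) * F.relu(margin - d).pow(2))

a, b = torch.randn(8, 1, 64, 64), torch.randn(8, 1, 64, 64)
labels = torch.randint(0, 2, (8,)).float()     # 1 = same class, 0 = different
print(contrastive_loss(a, b, labels).item())
```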
Collapse
Affiliation(s)
- Grace Ugochi Nneji
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China; (G.U.N.); (J.D.)
| | - Jianhua Deng
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China; (G.U.N.); (J.D.)
| | - Happy Nkanta Monday
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China;
| | - Md Altab Hossin
- School of Management and Economics, University of Electronic Science and Technology of China, Chengdu 611731, China; (M.A.H.); (S.O.)
| | - Sandra Obiora
- School of Management and Economics, University of Electronic Science and Technology of China, Chengdu 611731, China; (M.A.H.); (S.O.)
| | - Saifun Nahar
- Department of Information System and Technology, University of Missouri St. Louis, St. Louis 63121, MO, USA;
| | - Jingye Cai
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China; (G.U.N.); (J.D.)
- Correspondence:
| |
Collapse
|
29
|
Yao HY, Wan WG, Li X. A deep adversarial model for segmentation-assisted COVID-19 diagnosis using CT images. EURASIP JOURNAL ON ADVANCES IN SIGNAL PROCESSING 2022; 2022:10. [PMID: 35194421 PMCID: PMC8830991 DOI: 10.1186/s13634-022-00842-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/07/2021] [Accepted: 01/27/2022] [Indexed: 06/14/2023]
Abstract
Coronavirus disease 2019 (COVID-19) is spreading rapidly around the world, resulting in a global pandemic. Imaging techniques such as computed tomography (CT) play an essential role in the diagnosis and treatment of the disease, since lung infection or pneumonia is a common complication. However, training a deep network to diagnose COVID-19 rapidly and accurately in CT images and to segment the infected regions like a radiologist is challenging. Because the infected area is difficult to distinguish, manual annotation of segmentation masks is time-consuming. To tackle these problems, we propose an efficient method based on a deep adversarial network to segment the infection regions automatically. The predicted segmentation results then assist the diagnostic network in identifying COVID-19 samples from the CT images. In addition, a radiologist-like segmentation network provides detailed information about the infected regions by separating areas of ground-glass opacity, consolidation, and pleural effusion. Our method can accurately predict the probability of COVID-19 infection and provide lesion regions in CT images with limited training data. Additionally, we have established a public dataset for multitask learning. Extensive experiments on diagnosis and segmentation show superior performance over state-of-the-art methods.
Collapse
Affiliation(s)
- Hai-yan Yao
- School of Communication and Information Engineering, Shanghai University, Shanghai, China
- Anyang Institute of Technology, Anyang, China
| | - Wang-gen Wan
- School of Communication and Information Engineering, Shanghai University, Shanghai, China
| | - Xiang Li
- School of Communication and Information Engineering, Shanghai University, Shanghai, China
| |
Collapse
|
30
|
Hassan H, Ren Z, Zhao H, Huang S, Li D, Xiang S, Kang Y, Chen S, Huang B. Review and classification of AI-enabled COVID-19 CT imaging models based on computer vision tasks. Comput Biol Med 2022; 141:105123. [PMID: 34953356 PMCID: PMC8684223 DOI: 10.1016/j.compbiomed.2021.105123] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2021] [Revised: 12/03/2021] [Accepted: 12/03/2021] [Indexed: 01/12/2023]
Abstract
This article presents a systematic overview of artificial intelligence (AI) and computer vision strategies for diagnosing coronavirus disease 2019 (COVID-19) using computed tomography (CT) medical images. We analyzed previous review works and found that none of them classified and categorized the COVID-19 literature by computer vision task, such as classification, segmentation, and detection. Most COVID-19 CT diagnosis methods make comprehensive use of segmentation and classification tasks. Moreover, most of the review articles are diverse and cover CT as well as X-ray images. Therefore, we focused on COVID-19 diagnostic methods based on CT images. Well-known search engines and databases such as Google, Google Scholar, Kaggle, Baidu, IEEE Xplore, Web of Science, PubMed, ScienceDirect, and Scopus were utilized to collect relevant studies. After deep analysis, we collected 114 studies and reported highly enriched information for each selected study. According to our analysis, AI and computer vision have substantial potential for rapid COVID-19 diagnosis as they could significantly assist in automating the diagnosis process. Accurate and efficient models will have real-time clinical implications, though further research is still required. Categorization of literature based on computer vision tasks could be helpful for future research; therefore, this review article will provide a good foundation for conducting such research.
Collapse
Affiliation(s)
- Haseeb Hassan
- College of Big Data and Internet, Shenzhen Technology University, Shenzhen, China; Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Shenzhen University Health Science Center, Shenzhen, China
| | - Zhaoyu Ren
- College of Big Data and Internet, Shenzhen Technology University, Shenzhen, China
| | - Huishi Zhao
- College of Big Data and Internet, Shenzhen Technology University, Shenzhen, China
| | - Shoujin Huang
- College of Big Data and Internet, Shenzhen Technology University, Shenzhen, China
| | - Dan Li
- College of Big Data and Internet, Shenzhen Technology University, Shenzhen, China
| | - Shaohua Xiang
- College of Big Data and Internet, Shenzhen Technology University, Shenzhen, China
| | - Yan Kang
- Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Shenzhen University Health Science Center, Shenzhen, China; Medical Device Innovation Research Center, Shenzhen Technology University, Shenzhen, China
| | - Sifan Chen
- Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Guangdong-Hong Kong Joint Laboratory for RNA Medicine, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, China; Medical Research Center, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, China
| | - Bingding Huang
- College of Big Data and Internet, Shenzhen Technology University, Shenzhen, China.
| |
Collapse
|
31
|
Meivel S, Sindhwani N, Anand R, Pandey D, Alnuaim AA, Altheneyan AS, Jabarulla MY, Lelisho ME. Mask Detection and Social Distance Identification Using Internet of Things and Faster R-CNN Algorithm. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:2103975. [PMID: 35116063 PMCID: PMC8804552 DOI: 10.1155/2022/2103975] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/25/2021] [Revised: 12/30/2021] [Accepted: 01/03/2022] [Indexed: 02/03/2023]
Abstract
Drones can be used to detect groups of people who are unmasked or do not maintain social distance. In this paper, a deep learning-enabled drone is designed for mask detection and social distance monitoring. A drone is an unmanned system that can be automated, and this system focuses on Industrial Internet of Things (IIoT) monitoring using a Raspberry Pi 4. The drone automation system sends alerts to people via a speaker, reminding them to maintain social distance. The system captures images and detects unmasked persons using a Faster Region-based Convolutional Neural Network (Faster R-CNN) model; when unmasked persons are detected, their details are sent to the respective authorities and the nearest police station. The model covers most face detection scenarios across different benchmark datasets, and the OpenCV camera pipeline on the Raspberry Pi 4, combined with the Faster R-CNN algorithm, provides 24/7 service with daily reports.
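An inference sketch with a generic torchvision Faster R-CNN. The paper trains its own detector on a mask dataset, so the COCO-pretrained model here is only a stand-in, the 0.7 score threshold is an assumed value, and the weights argument assumes a recent torchvision release (older versions used pretrained=True).

```python
# Generic Faster R-CNN inference sketch; not the paper's trained mask detector.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

@torch.no_grad()
def detect(image_tensor, score_threshold=0.7):
    """image_tensor: float tensor of shape (3, H, W) scaled to [0, 1]."""
    output = model([image_tensor])[0]
    keep = output["scores"] > score_threshold
    return output["boxes"][keep], output["labels"][keep], output["scores"][keep]

boxes, labels, scores = detect(torch.rand(3, 480, 640))
print(len(boxes), "detections above threshold")
```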
Collapse
Affiliation(s)
- S. Meivel
- M. Kumarasamy College of Engineering, Karur, Tamil Nadu, India
| | | | - Rohit Anand
- DSEU, G. B. Pant Okhla-1 Campus, New Delhi, India
| | - Digvijay Pandey
- Department of Technical Education, IET Lucknow, Dr. A. P. J Abdul Kalam Technical University Lucknow, Lucknow, India
| | - Abeer Ali Alnuaim
- Department of Computer Science and Engineering, College of Applied Studies and Community Services, King Saud University, P.O. Box 22459, Riyadh 11495, Saudi Arabia
| | - Alaa S. Altheneyan
- Department of Computer Science and Engineering, College of Applied Studies and Community Services, King Saud University, P.O. Box 22459, Riyadh 11495, Saudi Arabia
| | - Mohamed Yaseen Jabarulla
- School of Electrical Engineering and Computer Science, Gwangju Institute of Science and Technology, Republic of Korea
| | - Mesfin Esayas Lelisho
- Department of Statistics, College of Natural and Computational Science, Mizan-Tepi University, Tepi, Ethiopia
| |
Collapse
|
32
|
Yadav A, Saxena R, Kumar A, Walia TS, Zaguia A, Kamal SMM. FVC-NET: An Automated Diagnosis of Pulmonary Fibrosis Progression Prediction Using Honeycombing and Deep Learning. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:2832400. [PMID: 35103054 PMCID: PMC8799953 DOI: 10.1155/2022/2832400] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/03/2021] [Revised: 11/29/2021] [Accepted: 12/28/2021] [Indexed: 11/17/2022]
Abstract
Pulmonary fibrosis is a severe chronic lung disease that causes irreversible scarring of lung tissue, resulting in a loss of lung capacity. The patient's Forced Vital Capacity (FVC) is therefore a useful measure for investigating the disease and estimating its prognosis. This paper proposes a deep learning-based FVC-Net architecture to predict disease progression from the patient's computed tomography (CT) scan and metadata. The model's input combines an image score, generated from the degree of honeycombing identified in segmented lung images, with the patient's metadata; this input is then fed to a 3-layer network to obtain the final output. The performance of the proposed FVC-Net model is compared with various contemporary state-of-the-art deep learning models on a cohort from the pulmonary fibrosis progression dataset, and the model showed a significant improvement over other models in modified Laplace Log-Likelihood (-6.64). Finally, the paper concludes with some prospects for further exploration.
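A sketch of the final stage only: a small 3-layer fully connected network that takes the honeycombing-based image score together with patient metadata. The metadata fields, layer sizes, and tensors are assumptions; the lung segmentation and honeycombing scoring steps are not shown.

```python
# Three-layer head combining a scalar image score with tabular metadata.
import torch
import torch.nn as nn

class FVCHead(nn.Module):
    def __init__(self, n_metadata=4):                  # e.g. age, sex, smoking status, weeks (assumed)
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1 + n_metadata, 64), nn.ReLU(),  # image score + metadata
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 1),                          # predicted FVC
        )

    def forward(self, image_score, metadata):
        return self.net(torch.cat([image_score, metadata], dim=1))

head = FVCHead()
score = torch.rand(8, 1)        # honeycombing score per patient (placeholder)
meta = torch.rand(8, 4)         # normalised metadata (placeholder)
print(head(score, meta).shape)  # torch.Size([8, 1])
```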
Collapse
Affiliation(s)
- Anju Yadav
- Manipal University Jaipur, Jaipur, India
| | | | | | | | - Atef Zaguia
- Department of Computer Science, College of Computers and Information Technology, Taif University, Taif 21944, Saudi Arabia
| | | |
Collapse
|
33
|
Shan C, Chen X. Multichannel concat-fusional convolutional neural networks. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2022. [DOI: 10.3233/jifs-212718] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Because of the advantages of deep learning and information fusion technology, combining them for target recognition, positioning, and tracking has drawn much attention from researchers. However, when existing neural networks process multichannel images (e.g., color images), the channels are fed in as a whole, which makes it hard for the networks to fully learn the information in the R, G, and B channels and limits the final learning effect. To solve this problem, this paper uses different combinations of the R, G, and B channels of color images for feature-level fusion and proposes three fusion types, "R/G/B", "R+G/G+B/B+R", and "R+G+B/R+G+B/R+G+B", for multichannel concat-fusional convolutional neural networks. Experimental results show that networks with the "R+G/G+B/B+R" and "R+G+B/R+G+B/R+G+B" fusion types outperform the corresponding non-fusional convolutional neural networks on different datasets, indicating that these fusion types let the networks learn the information in the R, G, and B channels more fully and improve learning performance.
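A NumPy sketch of how the fused input channels described above can be built from an RGB image before being fed to the per-channel network branches. The downstream network is not shown, and the unnormalized sums are an assumption about how the combinations are formed.

```python
# Build the "R/G/B", "R+G/G+B/B+R" and "R+G+B/R+G+B/R+G+B" channel variants.
import numpy as np

def fuse_channels(rgb, mode="R+G/G+B/B+R"):
    r = rgb[..., 0].astype(np.float32)
    g = rgb[..., 1].astype(np.float32)
    b = rgb[..., 2].astype(np.float32)
    if mode == "R/G/B":
        planes = (r, g, b)
    elif mode == "R+G/G+B/B+R":
        planes = (r + g, g + b, b + r)
    elif mode == "R+G+B/R+G+B/R+G+B":
        s = r + g + b
        planes = (s, s, s)
    else:
        raise ValueError(mode)
    return np.stack(planes, axis=-1)

image = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)
print(fuse_channels(image).shape)                       # (32, 32, 3)
print(fuse_channels(image, "R+G+B/R+G+B/R+G+B").shape)  # (32, 32, 3)
```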
Collapse
Affiliation(s)
- Chuanhui Shan
- College of Electrical Engineering, Anhui Polytechnic University, Wuhu, China
| | - Xiumei Chen
- College of Biological and Food Engineering, Anhui Polytechnic University, Wuhu, China
| |
Collapse
|
34
|
Singh D, Kumar V, Kaur M, Kumari R. Early diagnosis of COVID-19 patients using deep learning-based deep forest model. J EXP THEOR ARTIF IN 2022. [DOI: 10.1080/0952813x.2021.2021300] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/30/2022]
Affiliation(s)
- Dilbag Singh
- School of Electrical Engineering and Computer Science, Gwangju Institute of Science and Technology, Gwangju, South Korea
| | - Vijay Kumar
- Department of Computer Science & Engineering National Institute of Technology Hamirpur, Hamirpur, India
| | - Manjit Kaur
- School of Electrical Engineering and Computer Science, Gwangju Institute of Science and Technology, Gwangju, South Korea
| | - Rajani Kumari
- Department of Computer Science, Christ (Deemed to Be University), Bangalore, India
| |
Collapse
|
35
|
Danilov VV, Proutski A, Karpovsky A, Kirpich A, Litmanovich D, Nefaridze D, Talalov O, Semyonov S, Koniukhovskii V, Shvartc V, Gankin Y. Indirect supervision applied to COVID-19 and pneumonia classification. INFORMATICS IN MEDICINE UNLOCKED 2021; 28:100835. [PMID: 34977331 PMCID: PMC8712713 DOI: 10.1016/j.imu.2021.100835] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2021] [Revised: 12/11/2021] [Accepted: 12/23/2021] [Indexed: 01/08/2023] Open
Abstract
The novel coronavirus disease 2019 (COVID-19) continues to have a devastating effect around the globe, leading many scientists and clinicians to actively develop new techniques to help tackle the disease. Modern machine learning methods have shown promise in assisting the healthcare industry through data- and analytics-driven decision making, inspiring researchers to find new angles from which to fight the virus. In this paper, we develop a CNN-based method for detecting COVID-19 from patients' chest X-ray images. Building on convolutional units, the proposed method uses indirect supervision based on Grad-CAM: during training, Grad-CAM's attention heatmaps support the network's predictions. Despite recent progress, data scarcity has so far limited the development of a robust solution, so we extend existing work by combining publicly available data from 5 different sources and carefully annotating the constituent images across three categories: normal, pneumonia, and COVID-19. To achieve high classification accuracy, we propose a training pipeline based on indirect supervision of traditional classification networks, where the guidance is provided by an external algorithm. With this method, we observed that widely used standard networks can achieve an accuracy comparable to tailor-made COVID-19 models, with one network in particular, VGG-16, outperforming the best of the tailor-made models.
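A Grad-CAM sketch on a VGG-16, computing a class-activation heatmap from the output of the convolutional trunk. How the paper turns these heatmaps into a training signal (the indirect supervision) is not reproduced; the target class and input tensor are placeholders.

```python
# Grad-CAM heatmap from the convolutional trunk of a VGG-16.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.vgg16(weights="IMAGENET1K_V1").eval()
store = {}

def save_activation(module, inputs, output):
    store["act"] = output                                   # (1, 512, 7, 7) for 224x224 input
    output.register_hook(lambda grad: store.update(grad=grad))

model.features.register_forward_hook(save_activation)

def grad_cam(image, target_class):
    logits = model(image)
    model.zero_grad()
    logits[0, target_class].backward()
    weights = store["grad"].mean(dim=(2, 3), keepdim=True)  # pooled gradients per channel
    cam = F.relu((weights * store["act"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

heatmap = grad_cam(torch.randn(1, 3, 224, 224), target_class=0)
print(heatmap.shape)  # torch.Size([1, 1, 224, 224])
```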
Collapse
Affiliation(s)
- Viacheslav V Danilov
- Tomsk Polytechnic University, Tomsk, Russia
- Research Institute for Complex Issues of Cardiovascular Diseases, Kemerovo, Russia
| | | | | | | | | | | | | | | | | | | | | |
Collapse
|
36
|
Data-Driven Analytics Leveraging Artificial Intelligence in the Era of COVID-19: An Insightful Review of Recent Developments. Symmetry (Basel) 2021. [DOI: 10.3390/sym14010016] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/06/2023] Open
Abstract
This paper presents the role of artificial intelligence (AI) and other recent technologies employed to fight the novel coronavirus disease 2019 (COVID-19) pandemic. These technologies assisted with early detection/diagnosis, trend analysis, intervention planning, healthcare burden forecasting, comorbidity analysis, and mitigation and control, to name a few. The key enabler of these technologies was data obtained from heterogeneous sources (i.e., social networks (SN), the internet of (medical) things (IoT/IoMT), cellular networks, transport usage, epidemiological investigations, and other digital/sensing platforms). To this end, we provide an insightful overview of the role of data-driven analytics leveraging AI in the era of COVID-19. Specifically, we discuss the major services that AI can provide in the context of the COVID-19 pandemic on six grounds: (i) the role of AI in seven different epidemic containment strategies (a.k.a. non-pharmaceutical interventions (NPIs)), (ii) the role of AI in the data life cycle phases employed to control the pandemic via digital solutions, (iii) the role of AI in performing analytics on the heterogeneous types of data stemming from the COVID-19 pandemic, (iv) the role of AI in the healthcare sector in the context of the COVID-19 pandemic, (v) general-purpose applications of AI in the COVID-19 era, and (vi) the role of AI in drug design and repurposing (e.g., iteratively aligning protein spikes and applying three/four-fold symmetry to yield a low-resolution candidate template) against COVID-19. Further, we discuss the challenges of applying AI to the available data and the privacy issues that can arise when personal data transitions into cyberspace. We also provide a concise overview of other recent technologies that have been increasingly applied to limit the spread of the ongoing pandemic. Finally, we discuss avenues of future research in this area. This review aims to highlight existing AI-based technological developments and future research dynamics in this area.
Collapse
|
37
|
Gudigar A, Raghavendra U, Nayak S, Ooi CP, Chan WY, Gangavarapu MR, Dharmik C, Samanth J, Kadri NA, Hasikin K, Barua PD, Chakraborty S, Ciaccio EJ, Acharya UR. Role of Artificial Intelligence in COVID-19 Detection. SENSORS (BASEL, SWITZERLAND) 2021; 21:8045. [PMID: 34884045 PMCID: PMC8659534 DOI: 10.3390/s21238045] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/05/2021] [Revised: 11/26/2021] [Accepted: 11/26/2021] [Indexed: 12/15/2022]
Abstract
The global pandemic of coronavirus disease (COVID-19) has caused millions of deaths and affected the livelihoods of many more people. Early and rapid detection of COVID-19 is a challenging task for the medical community, but it is crucial to stopping the spread of the SARS-CoV-2 virus. Prior successes of artificial intelligence (AI) in various fields of science have encouraged researchers to further address this problem. AI techniques applied to medical imaging modalities including X-ray, computed tomography (CT), and ultrasound (US) have greatly helped to curb the COVID-19 outbreak by assisting with early diagnosis. We carried out a systematic review of state-of-the-art AI techniques applied to X-ray, CT, and US images to detect COVID-19. In this paper, we discuss the approaches used by various authors and the significance of these research efforts, the potential challenges, and future trends related to the implementation of an AI system for disease detection during the COVID-19 pandemic.
Collapse
Affiliation(s)
- Anjan Gudigar
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India; (A.G.); (S.N.); (M.R.G.); (C.D.)
| | - U Raghavendra
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India; (A.G.); (S.N.); (M.R.G.); (C.D.)
| | - Sneha Nayak
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India; (A.G.); (S.N.); (M.R.G.); (C.D.)
| | - Chui Ping Ooi
- School of Science and Technology, Singapore University of Social Sciences, Singapore 599494, Singapore;
| | - Wai Yee Chan
- Department of Biomedical Imaging, Faculty of Medicine, University of Malaya, Kuala Lumpur 50603, Malaysia;
| | - Mokshagna Rohit Gangavarapu
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India; (A.G.); (S.N.); (M.R.G.); (C.D.)
| | - Chinmay Dharmik
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India; (A.G.); (S.N.); (M.R.G.); (C.D.)
| | - Jyothi Samanth
- Department of Cardiovascular Technology, Manipal College of Health Professions, Manipal Academy of Higher Education, Manipal 576104, India;
| | - Nahrizul Adib Kadri
- Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur 50603, Malaysia; (N.A.K.); (K.H.)
| | - Khairunnisa Hasikin
- Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur 50603, Malaysia; (N.A.K.); (K.H.)
| | - Prabal Datta Barua
- Cogninet Brain Team, Cogninet Australia, Sydney, NSW 2010, Australia;
- School of Business (Information Systems), Faculty of Business, Education, Law & Arts, University of Southern Queensland, Toowoomba, QLD 4350, Australia
- Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW 2007, Australia;
| | - Subrata Chakraborty
- Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW 2007, Australia;
- Faculty of Science, Agriculture, Business and Law, University of New England, Armidale, NSW 2351, Australia
| | - Edward J. Ciaccio
- Department of Medicine, Columbia University Medical Center, New York, NY 10032, USA;
| | - U. Rajendra Acharya
- School of Engineering, Ngee Ann Polytechnic, Singapore 599489, Singapore;
- Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung 41354, Taiwan
- International Research Organization for Advanced Science and Technology (IROAST), Kumamoto University, Kumamoto 860-8555, Japan
| |
Collapse
|
38
|
Sharma R, Sharma M, Shukla A, Chaudhury S. Conditional Deep 3D-Convolutional Generative Adversarial Nets for RGB-D Generation. MATHEMATICAL PROBLEMS IN ENGINEERING 2021; 2021:1-8. [DOI: 10.1155/2021/8358314] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 07/19/2023]
Abstract
Generating synthetic data is a challenging task: there are only a few significant works on RGB video generation and no pertinent works on RGB-D data generation. In the present work, we focus on synthesizing RGB-D data that can be used as a dataset for applications such as object tracking, gesture recognition, and action recognition. This paper proposes a novel architecture that uses conditional deep 3D-convolutional generative adversarial networks to synthesize RGB-D data by exploiting a 3D spatio-temporal convolutional framework, and it can be used to generate virtually unlimited data. The architecture generates RGB-D data conditioned on class labels, using two parallel paths, one generating the RGB data and the other synthesizing the depth map; the outputs of the two paths are combined into RGB-D data. The proposed model generates video at 30 fps (frames per second), where each frame is an RGB-D image with a spatial resolution of 512 × 512.
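A toy sketch of the two-path conditional generator idea: a label-conditioned latent code is decoded by parallel 3D transposed-convolution paths into an RGB clip and a depth clip, which are concatenated into RGB-D. The resolutions here are tiny (16 per axis) for illustration, far below the paper's 512 × 512 frames, and all layer sizes and the class count are assumptions; the discriminator and training loop are not shown.

```python
# Two parallel ConvTranspose3d paths producing RGB and depth, concatenated to RGB-D.
import torch
import torch.nn as nn

def path(out_channels, latent=128):
    return nn.Sequential(
        nn.ConvTranspose3d(latent, 64, 4), nn.ReLU(),                      # (B, 64, 4, 4, 4)
        nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1), nn.ReLU(),     # (B, 32, 8, 8, 8)
        nn.ConvTranspose3d(32, out_channels, 4, stride=2, padding=1),      # (B, out, 16, 16, 16)
        nn.Tanh(),
    )

class RGBDGenerator(nn.Module):
    def __init__(self, n_classes=10, noise_dim=100, latent=128):
        super().__init__()
        self.embed = nn.Embedding(n_classes, latent - noise_dim)  # condition on the class label
        self.rgb_path, self.depth_path = path(3, latent), path(1, latent)

    def forward(self, noise, labels):
        z = torch.cat([noise, self.embed(labels)], dim=1)[..., None, None, None]
        return torch.cat([self.rgb_path(z), self.depth_path(z)], dim=1)    # RGB-D: 4 channels

gen = RGBDGenerator()
clip = gen(torch.randn(2, 100), torch.tensor([3, 7]))
print(clip.shape)  # torch.Size([2, 4, 16, 16, 16]) -> (batch, RGBD, time, H, W)
```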
Collapse
Affiliation(s)
| | - Manoj Sharma
- ECE Department of Bennet University, Greater Noida, India
| | - Ankit Shukla
- ECE Department of Bennet University, Greater Noida, India
| | - Santanu Chaudhury
- Department of Electrical Engineering, IIT Delhi and Director of IIT Jodhpur, New Delhi, India
| |
Collapse
|
39
|
Khan MA, Alhaisoni M, Tariq U, Hussain N, Majid A, Damaševičius R, Maskeliūnas R. COVID-19 Case Recognition from Chest CT Images by Deep Learning, Entropy-Controlled Firefly Optimization, and Parallel Feature Fusion. SENSORS (BASEL, SWITZERLAND) 2021; 21:7286. [PMID: 34770595 PMCID: PMC8588229 DOI: 10.3390/s21217286] [Citation(s) in RCA: 42] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/26/2021] [Revised: 10/28/2021] [Accepted: 10/29/2021] [Indexed: 12/12/2022]
Abstract
In healthcare, a multitude of data is collected from medical sensors and devices, such as X-ray machines, magnetic resonance imaging, and computed tomography (CT), that can be analyzed by artificial intelligence methods for early diagnosis of diseases. Recently, the outbreak of COVID-19 caused many deaths, and computer vision researchers support medical doctors by employing deep learning techniques on medical images to diagnose COVID-19 patients. Various methods have been proposed for COVID-19 case classification. Here, a new automated technique is proposed using parallel fusion and optimization of deep learning models. The proposed technique starts with contrast enhancement using a combination of top-hat and Wiener filters. Two pre-trained deep learning models (AlexNet and VGG16) are employed and fine-tuned according to the target classes (COVID-19 and healthy). Features are extracted and fused using a parallel fusion approach, parallel positive correlation, and optimal features are selected using the entropy-controlled firefly optimization method. The selected features are classified using machine learning classifiers such as a multiclass support vector machine (MC-SVM). Experiments were carried out using the Radiopaedia database and achieved an accuracy of 98%. Moreover, a detailed analysis shows the improved performance of the proposed scheme.
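A fusion-then-classify sketch: features from two backbones are combined and fed to a multiclass SVM. The parallel positive-correlation fusion and the entropy-controlled firefly selection are specific to the paper and are replaced here, for brevity, by plain concatenation and a univariate filter; the feature matrices are random placeholders.

```python
# Concatenate two deep-feature sets, select a subset, classify with an SVM.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
feats_alexnet = rng.normal(size=(200, 256))    # placeholder deep features, model A
feats_vgg16 = rng.normal(size=(200, 256))      # placeholder deep features, model B
labels = rng.integers(0, 2, size=200)          # COVID-19 vs healthy

fused = np.concatenate([feats_alexnet, feats_vgg16], axis=1)

clf = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=128), SVC(kernel="rbf"))
clf.fit(fused, labels)
print("training accuracy:", clf.score(fused, labels))
```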
Collapse
Affiliation(s)
- Muhammad Attique Khan
- Department of Computer Science, HITEC University, Taxila 47080, Pakistan; (M.A.K.); (N.H.); (A.M.)
| | - Majed Alhaisoni
- College of Computer Science and Engineering, University of Ha’il, Ha’il 55211, Saudi Arabia;
| | - Usman Tariq
- Information Systems Department, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Al Khraj 11942, Saudi Arabia;
| | - Nazar Hussain
- Department of Computer Science, HITEC University, Taxila 47080, Pakistan; (M.A.K.); (N.H.); (A.M.)
| | - Abdul Majid
- Department of Computer Science, HITEC University, Taxila 47080, Pakistan; (M.A.K.); (N.H.); (A.M.)
| | - Robertas Damaševičius
- Faculty of Applied Mathematics, Silesian University of Technology, 44-100 Gliwice, Poland
| | - Rytis Maskeliūnas
- Department of Multimedia Engineering, Kaunas University of Technology, 51368 Kaunas, Lithuania;
| |
Collapse
|
40
|
Goyal S, Singh R. Detection and classification of lung diseases for pneumonia and Covid-19 using machine and deep learning techniques. JOURNAL OF AMBIENT INTELLIGENCE AND HUMANIZED COMPUTING 2021; 14:3239-3259. [PMID: 34567277 PMCID: PMC8449225 DOI: 10.1007/s12652-021-03464-7] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/05/2021] [Accepted: 08/31/2021] [Indexed: 05/16/2023]
Abstract
Since the arrival of the novel Covid-19, numerous research efforts have been initiated worldwide for its accurate prediction. The earlier lung disease pneumonia is closely related to Covid-19, as several patients died of severe chest congestion (a pneumonic condition), and it is challenging even for medical experts to differentiate the Covid-19 and pneumonia lung diseases. Chest X-ray imaging is the most reliable method for lung disease prediction. In this paper, we propose a novel framework for predicting lung diseases such as pneumonia and Covid-19 from patients' chest X-ray images. The framework consists of dataset acquisition, image quality enhancement, adaptive and accurate region of interest (ROI) estimation, feature extraction, and disease prediction. For dataset acquisition, we used two publicly available chest X-ray image datasets. Because image quality is degraded during X-ray acquisition, we applied image quality enhancement using median filtering followed by histogram equalization. For accurate ROI extraction of the chest regions, we designed a modified region-growing technique consisting of dynamic region selection based on pixel intensity values and morphological operations. A robust set of features plays a vital role in accurate disease detection, so we extracted visual, shape, texture, and intensity features from each ROI image, followed by normalization, for which we formulated a technique that enhances the detection and classification results. Soft computing methods such as an artificial neural network (ANN), support vector machine (SVM), K-nearest neighbour (KNN), ensemble classifier, and deep learning classifier are used for classification. For accurate detection of lung disease, a deep learning architecture has been proposed using a recurrent neural network (RNN) with long short-term memory (LSTM). Experimental results show the robustness and efficiency of the proposed model in comparison with existing state-of-the-art methods.
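A sketch of the image-enhancement step only (median filtering followed by histogram equalization) using OpenCV. The ROI growing, feature extraction, and RNN-LSTM classifier are not shown, and "chest_xray.png" is a placeholder path.

```python
# Median filtering followed by histogram equalization on a grayscale chest X-ray.
import cv2

def enhance(path):
    image = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if image is None:
        raise FileNotFoundError(path)
    denoised = cv2.medianBlur(image, 3)       # suppress impulse noise
    return cv2.equalizeHist(denoised)         # stretch contrast

enhanced = enhance("chest_xray.png")          # placeholder file name
cv2.imwrite("chest_xray_enhanced.png", enhanced)
```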
Collapse
Affiliation(s)
- Shimpy Goyal
- Department of Computer Science, Banasthali Vidyapith, Banasthali, 304022 Rajasthan India
| | - Rajiv Singh
- Department of Computer Science, Banasthali Vidyapith, Banasthali, 304022 Rajasthan India
| |
Collapse
|
41
|
Zhang Z, Chen B, Sun J, Luo Y. A bagging dynamic deep learning network for diagnosing COVID-19. Sci Rep 2021; 11:16280. [PMID: 34381079 PMCID: PMC8358001 DOI: 10.1038/s41598-021-95537-y] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2021] [Accepted: 07/26/2021] [Indexed: 01/19/2023] Open
Abstract
COVID-19 is a serious ongoing worldwide pandemic. Using chest X-ray radiography images to automatically diagnose COVID-19 is an effective and convenient means of providing diagnostic assistance to clinicians in practice. This paper proposes a bagging dynamic deep learning network (B-DDLN) for diagnosing COVID-19 by intelligently recognizing its symptoms in chest X-ray radiography images. After a series of preprocessing steps, we pre-train convolution blocks as a feature extractor. On the extracted features, a bagging dynamic learning network classifier is trained based on a neural dynamic learning algorithm and the bagging algorithm. B-DDLN connects the feature extractor and the bagging classifier in series. Experimental results verify that the proposed B-DDLN achieves 98.8889% testing accuracy, the best diagnostic performance among existing state-of-the-art methods on the open image set, and it provides support for further detection and treatment.
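A sketch of the "bag of classifiers on top of CNN features" idea. scikit-learn's BaggingClassifier over small MLPs stands in for the paper's neural dynamic learning algorithm, and the feature matrix is a random placeholder; the `estimator=` keyword assumes scikit-learn 1.2 or newer (older versions used `base_estimator=`).

```python
# Bagging an assumed base learner over pre-extracted CNN features.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
cnn_features = rng.normal(size=(300, 128))     # placeholder for extracted CNN features
labels = rng.integers(0, 2, size=300)          # COVID-19 vs normal

X_train, X_test, y_train, y_test = train_test_split(cnn_features, labels, random_state=0)

bagged = BaggingClassifier(
    estimator=MLPClassifier(hidden_layer_sizes=(64,), max_iter=500),
    n_estimators=10,                           # 10 bootstrap-trained members vote
    random_state=0,
)
bagged.fit(X_train, y_train)
print("test accuracy:", bagged.score(X_test, y_test))
```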
Collapse
Affiliation(s)
- Zhijun Zhang
- School of Automation Science and Engineering, South China University of Technology, Guangzhou, 510640, China.
- Guangdong Artificial Intelligence and Digital Economy Laboratory (Pazhou Lab), Guangzhou, 510335, China.
- School of Automation Science and Engineering, East China Jiaotong University, Nanchang, 330052, China.
- Shaanxi Provincial Key Laboratory of Industrial Automation, School of Mechanical Engineering, Shaanxi University of Technology, Hanzhong, 723001, China.
- School of Information Technology and Management, Hunan University of Finance and Economics, Changsha, 410205, China.
| | - Bozhao Chen
- School of Automation Science and Engineering, South China University of Technology, Guangzhou, 510640, China
| | - Jiansheng Sun
- School of Automation Science and Engineering, South China University of Technology, Guangzhou, 510640, China
| | - Yamei Luo
- School of Automation Science and Engineering, South China University of Technology, Guangzhou, 510640, China
| |
Collapse
|
42
|
Chen J, Chen L, Shabaz M. Image Fusion Algorithm at Pixel Level Based on Edge Detection. JOURNAL OF HEALTHCARE ENGINEERING 2021; 2021:5760660. [PMID: 34422244 PMCID: PMC8371621 DOI: 10.1155/2021/5760660] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/29/2021] [Accepted: 07/29/2021] [Indexed: 02/07/2023]
Abstract
Image fusion is now used widely across many applications, but existing techniques and algorithms are cumbersome and time-consuming. To address the problems of low efficiency, long running time, missing image detail, and poor fusion quality, a pixel-level image fusion algorithm based on edge detection is proposed. An improved ROEWA (Ratio of Exponentially Weighted Averages) operator is used to detect image edges, and a variable-precision fitting algorithm together with edge curvature changes is used to extract the edge feature lines and edge corner points, improving the stability of the fusion. Different fusion rules are set according to the information and characteristics of the high-frequency and low-frequency regions: the high-frequency region is handled with a local energy weighted fusion approach based on edge information, while the low-frequency region is processed by merging the region energy with a weighting factor, and the fused high- and low-frequency results are then combined. The results demonstrate that the image fusion technique presented in this work increases resolution by 1.23 and 1.01, respectively, compared with two standard approaches, and that the proposed algorithm effectively reduces the loss of image information. The sharpness and information entropy of the fused image are higher than those of the comparison methods, the running time is shorter, and the method is more robust.
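A sketch of a local-energy weighted fusion rule of the general kind described above, applied directly to two grayscale source images. The ROEWA edge step, the feature-line extraction, and the high/low-frequency decomposition are not reproduced, and the window size is an assumption.

```python
# Weight each pixel toward whichever source image has more local energy.
import numpy as np
from scipy.ndimage import uniform_filter

def local_energy_fusion(img_a, img_b, window=5):
    a = img_a.astype(np.float64)
    b = img_b.astype(np.float64)
    energy_a = uniform_filter(a ** 2, size=window)   # mean squared intensity in a window
    energy_b = uniform_filter(b ** 2, size=window)
    w = energy_a / (energy_a + energy_b + 1e-12)     # weight toward the higher-energy source
    return w * a + (1.0 - w) * b

img_a = np.random.rand(64, 64)
img_b = np.random.rand(64, 64)
print(local_energy_fusion(img_a, img_b).shape)       # (64, 64)
```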
Collapse
Affiliation(s)
- Jiming Chen
- School of Computer and Information Science, Hunan Institute of Technology, Hengyang 421002, China
| | - Liping Chen
- School of Computer and Information Science, Hunan Institute of Technology, Hengyang 421002, China
| | - Mohammad Shabaz
- Arba Minch University, Arba Minch, Ethiopia
- Department of Computer Science Engineering, Chitkara University, Chandigarh, India
| |
Collapse
|
43
|
Hou J, Gao T. Explainable DCNN based chest X-ray image analysis and classification for COVID-19 pneumonia detection. Sci Rep 2021; 11:16071. [PMID: 34373554 PMCID: PMC8352869 DOI: 10.1038/s41598-021-95680-6] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2021] [Accepted: 07/28/2021] [Indexed: 02/07/2023] Open
Abstract
To speed up the discovery of COVID-19 disease mechanisms from X-ray images, this research developed a new diagnosis platform using a deep convolutional neural network (DCNN) that assists radiologists by distinguishing COVID-19 pneumonia from non-COVID-19 pneumonia based on chest X-ray classification and analysis. Such a tool can save time in interpreting chest X-rays, increase accuracy, and thereby enhance our medical capacity for the detection and diagnosis of COVID-19. An explainable method is also used in the DCNN to select instances of the X-ray dataset images and explain the behavior of the training-learning models, achieving higher prediction accuracy. The average accuracy of our method is above 96%, which can replace manual reading and has the potential to be applied to large-scale rapid screening of COVID-19 across a wide range of use cases.
Collapse
Affiliation(s)
- Jie Hou
- School of Biomedical Engineering, Guangdong Medical University, Dongguan, Guangdong, China
| | - Terry Gao
- Counties Manukau District Health Board, Auckland, 1640, New Zealand.
| |
Collapse
|
44
|
Khanna M, Agarwal A, Singh LK, Thawkar S, Khanna A, Gupta D. Radiologist-Level Two Novel and Robust Automated Computer-Aided Prediction Models for Early Detection of COVID-19 Infection from Chest X-ray Images. ARABIAN JOURNAL FOR SCIENCE AND ENGINEERING 2021; 48:1-33. [PMID: 34395156 PMCID: PMC8349241 DOI: 10.1007/s13369-021-05880-5] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/31/2020] [Accepted: 06/15/2021] [Indexed: 12/24/2022]
Abstract
COVID-19 is an ongoing pandemic that spreads widely every day and has reached significant community transmission. X-ray images, computed tomography (CT) images, and test kits (RT-PCR) are three readily available options for detecting this infection. Compared with screening for COVID-19 infection from X-ray and CT images, the RT-PCR test kits used to diagnose COVID-19 suffer from long analysis times, high false-negative rates, and poor sensitivity and specificity. Radiological signatures detectable on X-rays have been found in COVID-19-positive patients. Radiologists can examine these signatures, but doing so is a time-consuming and error-prone process riddled with intra-observer variability. Thus, chest X-ray analysis needs to be automated, and AI-driven tools have proven to be the best choice for increasing accuracy and speeding up analysis, especially for medical image analysis. We shortlisted four datasets and 20 CNN-based models, testing and validating the best ones through 16 detailed experiments with fivefold cross-validation. The two proposed models, an ensemble deep transfer learning CNN model and a hybrid LSTMCNN, perform best. The accuracy of the ensemble CNN was up to 99.78% (96.51% on average), F1-score up to 0.9977 (0.9682 on average), and AUC up to 0.9978 (0.9583 on average). The accuracy of the LSTMCNN was up to 98.66% (96.46% on average), F1-score up to 0.9974 (0.9668 on average), and AUC up to 0.9856 (0.9645 on average). These two best pre-trained transfer learning-based detection models can contribute clinically by providing patients with correct and rapid predictions.
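A sketch of a hybrid CNN+LSTM classifier of the general kind named above: convolutional features are read row by row as a sequence by an LSTM, whose final state feeds the COVID/normal head. Layer sizes, input resolution, and the sequence ordering are illustrative assumptions, not the paper's exact LSTMCNN design.

```python
# Hybrid CNN feature extractor followed by an LSTM over feature-map rows.
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.lstm = nn.LSTM(input_size=32 * 56, hidden_size=128, batch_first=True)
        self.head = nn.Linear(128, n_classes)

    def forward(self, x):                       # x: (B, 1, 224, 224) chest X-rays
        f = self.cnn(x)                         # (B, 32, 56, 56)
        seq = f.permute(0, 2, 1, 3).flatten(2)  # (B, 56, 32*56): one step per image row
        _, (h, _) = self.lstm(seq)
        return self.head(h[-1])

print(CNNLSTM()(torch.randn(4, 1, 224, 224)).shape)  # torch.Size([4, 2])
```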
Collapse
Affiliation(s)
- Munish Khanna
- Hindustan College of Science and Technology, Mathura, 281122 India
| | - Astitwa Agarwal
- Hindustan College of Science and Technology, Mathura, 281122 India
| | - Law Kumar Singh
- Hindustan College of Science and Technology, Mathura, 281122 India
| | - Shankar Thawkar
- Hindustan College of Science and Technology, Mathura, 281122 India
| | - Ashish Khanna
- Maharaja Agrasen Institute of Technology, Delhi, 110034 India
| | - Deepak Gupta
- Maharaja Agrasen Institute of Technology, Delhi, 110034 India
| |
Collapse
|
45
|
Sengupta K, Srivastava PR. Quantum algorithm for quicker clinical prognostic analysis: an application and experimental study using CT scan images of COVID-19 patients. BMC Med Inform Decis Mak 2021; 21:227. [PMID: 34330278 PMCID: PMC8323083 DOI: 10.1186/s12911-021-01588-6] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2020] [Accepted: 07/18/2021] [Indexed: 12/31/2022] Open
Abstract
BACKGROUND In medical diagnosis and clinical practice, diagnosing a disease early is crucial for effective treatment and lessens the burden on the healthcare system. In medical imaging research, image processing techniques are vital for analyzing and characterizing diseases with a high degree of accuracy. This paper establishes a new image classification and segmentation method through simulation, conducted on images of COVID-19 patients in India, introducing the use of Quantum Machine Learning (QML) in medical practice. METHODS This study establishes a prototype model for classifying COVID-19 against non-COVID pneumonia signals in computed tomography (CT) images. The simulation work evaluates quantum machine learning algorithms alongside deep learning models for the image classification problem, and thereby establishes the performance quality required for an improved prediction rate when dealing with complex, highly biased clinical image data. RESULTS The study presents a novel algorithmic implementation leveraging a quantum neural network (QNN). The proposed model outperformed conventional deep learning models on the specific classification task, owing to the efficiency of quantum simulation and the faster convergence of the network-training optimization, particularly for the large-scale biased image classification task. The model run-time on quantum-optimized hardware was 52 min, compared with 1 h 30 min on K80 GPU hardware for a similar sample size. The simulation shows that the QNN outperforms DNN, CNN, and 2D CNN models by more than 2.92% in accuracy, with an average recall of around 97.7%. CONCLUSION The results suggest that quantum neural networks outperform deep learning on the COVID-19 trait classification task with respect to model efficacy and training time. However, further study is needed to evaluate deployment scenarios in which the model is integrated within medical devices.
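A toy variational quantum classifier in PennyLane conveys the flavor of such a QNN; the encoding, ansatz, feature pipeline (e.g., a few components extracted per CT slice), and training loop are illustrative assumptions rather than the paper's architecture.

import pennylane as qml
from pennylane import numpy as np

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(weights, features):
    qml.AngleEmbedding(features, wires=range(n_qubits))          # encode a few image-derived features
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits)) # trainable variational ansatz
    return qml.expval(qml.PauliZ(0))                             # expectation value in [-1, 1]

def predict(weights, x):
    return (circuit(weights, x) + 1) / 2                         # map to a pseudo-probability

def cost(weights, X, y):
    return sum((predict(weights, x) - t) ** 2 for x, t in zip(X, y)) / len(y)

shape = qml.StronglyEntanglingLayers.shape(n_layers=2, n_wires=n_qubits)
weights = np.array(np.random.uniform(0, np.pi, size=shape), requires_grad=True)
X = np.random.uniform(0, np.pi, (8, n_qubits))                   # toy feature vectors
y = np.array([0.0, 1.0] * 4)                                     # toy COVID / non-COVID labels

opt = qml.GradientDescentOptimizer(stepsize=0.2)
for _ in range(20):
    weights = opt.step(lambda w: cost(w, X, y), weights)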
Collapse
Affiliation(s)
- Kinshuk Sengupta
- Microsoft Corporation, New Delhi, India
- Department of Information System, Indian Institute of Management, Rohtak, India
- City Southern Bypass, Sunaria, Rohtak, Haryana 124010 India
| | - Praveen Ranjan Srivastava
- Department of Information System, Indian Institute of Management, Rohtak, India
- City Southern Bypass, Sunaria, Rohtak, Haryana 124010 India
| |
Collapse
|
46
|
Alshazly H, Linse C, Abdalla M, Barth E, Martinetz T. COVID-Nets: deep CNN architectures for detecting COVID-19 using chest CT scans. PeerJ Comput Sci 2021; 7:e655. [PMID: 34401477 PMCID: PMC8330434 DOI: 10.7717/peerj-cs.655] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2021] [Accepted: 07/09/2021] [Indexed: 05/10/2023]
Abstract
In this paper, we propose two novel deep convolutional network architectures, CovidResNet and CovidDenseNet, to diagnose COVID-19 based on CT images. The models enable transfer learning between different architectures, which might significantly boost the diagnostic performance. Whereas novel architectures usually suffer from the lack of pretrained weights, our proposed models can be partly initialized with larger baseline models like ResNet50 and DenseNet121, which is attractive because of the abundance of public repositories. The architectures are utilized in a first experimental study on the SARS-CoV-2 CT-scan dataset, which contains 4173 CT images for 210 subjects structured subject-wise into three different classes. The models differentiate between COVID-19, non-COVID-19 viral pneumonia, and healthy samples. We also investigate their performance under three binary classification scenarios where we distinguish COVID-19 from healthy, COVID-19 from non-COVID-19 viral pneumonia, and non-COVID-19 from healthy, respectively. Our proposed models achieve up to 93.87% accuracy, 99.13% precision, 92.49% sensitivity, 97.73% specificity, 95.70% F1-score, and 96.80% AUC score for binary classification, and up to 83.89% accuracy, 80.36% precision, 82.04% sensitivity, 92.07% specificity, 81.05% F1-score, and 94.20% AUC score for the three-class classification tasks. We also validated our models on the COVID19-CT dataset to differentiate COVID-19 from other non-COVID-19 viral infections, and our CovidDenseNet model achieved the best performance with 81.77% accuracy, 79.05% precision, 84.69% sensitivity, 79.05% specificity, 81.77% F1-score, and 87.50% AUC score. The experimental results reveal the effectiveness of the proposed networks in automated COVID-19 detection, where they outperform standard models on the considered datasets while being more efficient.
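The partial-initialization idea can be sketched as follows (an assumption about one way to do it in PyTorch, not the authors' code): copy from a larger pretrained baseline only those tensors whose names and shapes also exist in the smaller target network, and leave everything else at its random initialization.

import torch
from torchvision import models

# Stand-in for a smaller custom architecture; CovidResNet/CovidDenseNet themselves
# are not reproduced here.
target = models.resnet18(weights=None)
target.fc = torch.nn.Linear(target.fc.in_features, 3)     # COVID-19 / viral pneumonia / healthy

donor = models.resnet50(weights="IMAGENET1K_V1").state_dict()
own = target.state_dict()

# Keep only donor tensors whose name and shape coincide with the target network.
transferable = {k: v for k, v in donor.items() if k in own and v.shape == own[k].shape}
own.update(transferable)
target.load_state_dict(own)
print(f"initialized {len(transferable)}/{len(own)} tensors from the pretrained baseline")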
Collapse
Affiliation(s)
- Hammam Alshazly
- Institut für Neuro- und Bioinformatik, University of Lübeck, Lübeck, Germany
- Faculty of Computers and Information, South Valley University, Qena, Egypt
| | - Christoph Linse
- Institut für Neuro- und Bioinformatik, University of Lübeck, Lübeck, Germany
| | - Mohamed Abdalla
- Mathematics Department, Faculty of Science, King Khalid University, Abha, Saudi Arabia
- Mathematics Department, Faculty of Science, South Valley University, Qena, Egypt
| | - Erhardt Barth
- Institut für Neuro- und Bioinformatik, University of Lübeck, Lübeck, Germany
| | - Thomas Martinetz
- Institut für Neuro- und Bioinformatik, University of Lübeck, Lübeck, Germany
| |
Collapse
|
47
|
Chaahat, Kumar Gondhi N, Kumar Lehana P. An Evolutionary Approach for the Enhancement of Dermatological Images and Their Classification Using Deep Learning Models. JOURNAL OF HEALTHCARE ENGINEERING 2021; 2021:8113403. [PMID: 34326979 PMCID: PMC8302402 DOI: 10.1155/2021/8113403] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/18/2021] [Accepted: 07/06/2021] [Indexed: 12/24/2022]
Abstract
Dermatological problems are among the most widespread diseases affecting human beings. They can be infectious or chronic and may sometimes lead to serious conditions such as skin cancer. Rural clinics generally lack trained dermatologists and mostly rely on remote experts, sharing images and related information over mobile networks. Under such circumstances, poor image quality introduced by the capturing device can lead to misleading diagnoses. Here, a genetic-algorithm (GA)-based image enhancement technique is explored to improve the low quality of dermatological images received from rural clinics. Diagnosis is then performed on the enhanced images using a convolutional neural network (CNN) classifier to identify the diseases. The scope of this paper is limited to motion-blurred images, the most prevalent problem in image capture, which arises when either the device or the subject moves unpredictably. Seven types of skin disease, namely melanoma, melanocytic nevus, basal cell carcinoma, actinic keratosis, benign keratosis, vascular lesion, and squamous cell carcinoma, were investigated using ResNet-152, giving an overall accuracy of 87.40% on the blurred images. Using GA-enhanced images increased the accuracy to 95.85%. The results were further analyzed using a confusion matrix and t-test-based statistical investigations. The advantage of the proposed technique is that it reduces the analysis time and errors associated with manual diagnosis. Furthermore, speedy and reliable diagnosis at the earliest stage reduces the risk of developing more severe skin problems.
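A toy version of the GA-based enhancement step might look like the following (the gene encoding, fitness function, and operators are assumptions for illustration, not the paper's design), with OpenCV used for the image operations: evolve a small parameter vector (gamma, unsharp amount, blur sigma) that maximizes a Laplacian-variance sharpness score on a motion-blurred image.

import cv2
import numpy as np

rng = np.random.default_rng(0)

def enhance(img, gamma, amount, sigma):
    gamma, amount, sigma = float(gamma), float(amount), float(sigma)
    g = np.clip(((img / 255.0) ** gamma) * 255.0, 0, 255).astype(np.uint8)   # gamma correction
    blur = cv2.GaussianBlur(g, (0, 0), sigma)
    return cv2.addWeighted(g, 1 + amount, blur, -amount, 0)                  # unsharp masking

def fitness(img, genes):
    return cv2.Laplacian(enhance(img, *genes), cv2.CV_64F).var()             # sharpness proxy

def ga(img, pop_size=20, generations=30):
    low, high = np.array([0.5, 0.0, 0.5]), np.array([2.0, 2.0, 5.0])         # gamma, amount, sigma bounds
    pop = rng.uniform(low, high, size=(pop_size, 3))
    for _ in range(generations):
        scores = np.array([fitness(img, p) for p in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]                   # selection of the fittest half
        idx = rng.integers(0, len(parents), size=(pop_size - len(parents), 2))
        children = (parents[idx[:, 0]] + parents[idx[:, 1]]) / 2             # arithmetic crossover
        children += rng.normal(0, 0.1, children.shape)                       # Gaussian mutation
        pop = np.clip(np.vstack([parents, children]), low, high)
    return pop[np.argmax([fitness(img, p) for p in pop])]

img = cv2.imread("lesion.jpg", cv2.IMREAD_GRAYSCALE)                         # hypothetical input image
best = ga(img)
cv2.imwrite("lesion_enhanced.jpg", enhance(img, *best))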
Collapse
Affiliation(s)
- Chaahat
- Department of Computer Science and Engineering, Shri Mata Vaishno Devi University, Katra 182301, India
- MIET, Jammu 181122, India
| | - Naveen Kumar Gondhi
- Department of Computer Science and Engineering, Shri Mata Vaishno Devi University, Katra 182301, India
| | | |
Collapse
|
48
|
Applications of Machine Learning and High-Performance Computing in the Era of COVID-19. APPLIED SYSTEM INNOVATION 2021. [DOI: 10.3390/asi4030040] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
Abstract
During the ongoing pandemic of the novel coronavirus disease 2019 (COVID-19), the latest technologies such as artificial intelligence (AI), blockchain, learning paradigms (machine, deep, smart, few-shot, and extreme learning, among others), high-performance computing (HPC), the Internet of Medical Things (IoMT), and Industry 4.0 have played a vital role. These technologies helped to contain the disease's spread by predicting infected people and contaminated places, as well as forecasting future trends. In this article, we provide insights into the applications of machine learning (ML) and high-performance computing (HPC) in the era of COVID-19. We discuss the person-specific data being collected to limit the spread of COVID-19 and highlight the remarkable opportunities they provide for knowledge extraction using low-cost ML and HPC techniques. We demonstrate the role of ML and HPC in the COVID-19 era through successful implementations or propositions in three contexts: (i) the use of ML and HPC in the data life cycle, (ii) the use of ML and HPC for analytics on COVID-19 data, and (iii) general-purpose applications of both techniques in the COVID-19 arena. In addition, we discuss privacy and security issues and the architecture of a prototype system that demonstrates the proposed research. Finally, we discuss the challenges of the available data and highlight the issues that hinder the applicability of ML and HPC solutions to it.
Collapse
|
49
|
Mehedi IM, Shah HSM, Al-Saggaf UM, Mansouri R, Bettayeb M. Fuzzy PID Control for Respiratory Systems. JOURNAL OF HEALTHCARE ENGINEERING 2021; 2021:7118711. [PMID: 34257855 PMCID: PMC8253636 DOI: 10.1155/2021/7118711] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/31/2021] [Revised: 06/08/2021] [Accepted: 06/12/2021] [Indexed: 01/10/2023]
Abstract
This paper presents the implementation of a fuzzy proportional-integral-derivative (FPID) control design to track airway pressure during mechanical ventilation. The respiratory system is modeled as a combination of a blower-hose-patient system and a single-compartment lung with nonlinear lung compliance. For comparison, a classical PID controller is also designed and simulated on the same system. Under the proposed control strategy, the ventilator provides an airway flow that keeps the peak pressure below critical levels even when the hose-leak parameters and the patient's breathing effort are unknown. Results show that the FPID controller performs better, with a quicker response, lower overshoot, and smaller tracking error. This provides valuable insight for the application of the proposed controller.
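As a point of reference for the control problem, a plain (non-fuzzy) PID loop on a toy single-compartment lung model can be sketched as below; the model constants, gains, and setpoint profile are illustrative assumptions, and the paper's fuzzy gain adaptation and blower-hose dynamics are not reproduced.

import numpy as np

R, C = 5.0, 0.02            # airway resistance [cmH2O/(L/s)] and compliance [L/cmH2O]
dt, T = 0.001, 3.0          # integration step and horizon [s]
Kp, Ki, Kd = 0.05, 0.8, 0.001

V, integ, prev_err = 0.0, 0.0, 0.0      # lung volume [L], integral term, previous error
log = []
for k in range(int(T / dt)):
    t = k * dt
    target = 15.0 if (t % 3.0) < 1.0 else 5.0        # square-wave pressure setpoint [cmH2O]
    err = target - V / C                             # error on the elastic (alveolar) pressure
    integ += err * dt
    deriv = (err - prev_err) / dt
    Q = np.clip(Kp * err + Ki * integ + Kd * deriv, -2.0, 2.0)   # commanded flow [L/s]
    p_aw = V / C + R * Q                             # resulting airway pressure
    V += Q * dt                                      # the lung integrates the delivered flow
    prev_err = err
    log.append((t, target, p_aw))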
Collapse
Affiliation(s)
- Ibrahim M. Mehedi
- Department of Electrical and Computer Engineering (ECE), King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Center of Excellence in Intelligent Engineering Systems (CEIES), King Abdulaziz University, Jeddah 21589, Saudi Arabia
| | - Heidir S. M. Shah
- Department of Electrical and Computer Engineering (ECE), King Abdulaziz University, Jeddah 21589, Saudi Arabia
| | - Ubaid M. Al-Saggaf
- Department of Electrical and Computer Engineering (ECE), King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Center of Excellence in Intelligent Engineering Systems (CEIES), King Abdulaziz University, Jeddah 21589, Saudi Arabia
| | - Rachid Mansouri
- Laboratoire de Conception et Conduite des Systemes de Production (L2CSP), Tizi Ouzou, Algeria
| | - Maamar Bettayeb
- Electrical Engineering Department, University of Sharjah, Sharjah, UAE
| |
Collapse
|
50
|
Mehedi IM, Shah HSM, Al-Saggaf UM, Mansouri R, Bettayeb M. Adaptive Fuzzy Sliding Mode Control of a Pressure-Controlled Artificial Ventilator. JOURNAL OF HEALTHCARE ENGINEERING 2021; 2021:1926711. [PMID: 34257849 PMCID: PMC8249163 DOI: 10.1155/2021/1926711] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/31/2021] [Revised: 06/08/2021] [Accepted: 06/12/2021] [Indexed: 11/19/2022]
Abstract
This paper presents the application of adaptive fuzzy sliding mode control (AFSMC) to a respiratory system intended to assist patients who have difficulty breathing. The ventilator system consists of a blower-hose-patient system and a patient lung model with nonlinear lung compliance. The AFSMC is based on two components: a singleton control action and a discontinuous term. The singleton control action uses fuzzy logic with adjustable tuning parameters to approximate the perfect feedback linearization control. The switching control law, based on the sliding mode principle, aims to minimize the estimation error between the approximated singleton fuzzy control action and the perfect feedback linearization control. The proposed control strategy manipulates the airway flow delivered by the ventilator so that the peak pressure remains below critical values in the presence of unknown patient-hose-leak parameters and patient breathing effort. The closed-loop stability of the AFSMC is proven in the sense of Lyapunov. For comparative analysis, classical PID and sliding mode controllers are also designed and implemented for the mechanical ventilation problem. For performance analysis, numerical simulations were performed on a mechanical ventilator simulator. Simulation results reveal that the proposed controller tracks the targeted airway pressure better than its counterparts, with faster convergence, less overshoot, and smaller tracking error. Hence, the proposed controller provides useful insight for its application to real-world scenarios.
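The sliding-mode component can be illustrated in isolation on the same kind of toy lung model (the adaptive fuzzy approximation of the equivalent control is replaced here by a known-model term, and all constants are illustrative assumptions): a first-order sliding surface with a boundary layer drives the tracking error toward zero while limiting chattering.

import numpy as np

C = 0.02                    # lung compliance [L/cmH2O]; alveolar pressure obeys dp/dt = Q / C
dt, T = 0.001, 2.0
K, phi = 40.0, 0.5          # switching gain and boundary-layer width

def target(t):              # smooth rise of the pressure setpoint to 15 cmH2O
    return 15.0 * (1.0 - np.exp(-5.0 * t))

def d_target(t):            # time derivative of the setpoint
    return 75.0 * np.exp(-5.0 * t)

sat = lambda x: np.clip(x, -1.0, 1.0)   # smoothed sign() inside the boundary layer

p_l, log = 0.0, []
for k in range(int(T / dt)):
    t = k * dt
    s = target(t) - p_l                                              # sliding variable (tracking error)
    Q = np.clip(C * (d_target(t) + K * sat(s / phi)), -2.0, 2.0)     # equivalent term + switching term
    p_l += (Q / C) * dt
    log.append((t, target(t), p_l))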
Collapse
Affiliation(s)
- Ibrahim M. Mehedi
- Department of Electrical and Computer Engineering (ECE), King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Center of Excellence in Intelligent Engineering Systems (CEIES), King Abdulaziz University, Jeddah 21589, Saudi Arabia
| | - Heidir S. M. Shah
- Department of Electrical and Computer Engineering (ECE), King Abdulaziz University, Jeddah 21589, Saudi Arabia
| | - Ubaid M. Al-Saggaf
- Department of Electrical and Computer Engineering (ECE), King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Center of Excellence in Intelligent Engineering Systems (CEIES), King Abdulaziz University, Jeddah 21589, Saudi Arabia
| | - Rachid Mansouri
- Laboratoire de Conception et Conduite des Systemes de Production (L2CSP), Tizi-Ouzou 15000, Algeria
| | - Maamar Bettayeb
- Electrical Engineering Department, University of Sharjah, Sharjah, UAE
| |
Collapse
|