1. Hao H, Zhao Y, Leng S, Gu Y, Ma Y, Wang F, Dai Q, Zheng J, Liu Y, Zhang J. Local salient location-aware anomaly mask synthesis for pulmonary disease anomaly detection and lesion localization in CT images. Med Image Anal 2025;102:103523. PMID: 40086182. DOI: 10.1016/j.media.2025.103523.
Abstract
Automated pulmonary anomaly detection using computed tomography (CT) examinations is important for the early warning of pulmonary diseases and can support clinical diagnosis and decision-making. Training most existing pulmonary disease detection and lesion segmentation models requires expert annotations, which are time-consuming and labour-intensive to obtain, and such models struggle to generalize to atypical diseases. In contrast, unsupervised anomaly detection alleviates the demand for dataset annotation and is more generalizable than supervised methods in detecting rare pathologies. However, due to the large distribution differences among CT slices within a volume and the high similarity between lesions and normal tissue, existing anomaly detection methods struggle to accurately localize small lesions, leading to a low anomaly detection rate. To alleviate these challenges, we propose a local salient location-aware anomaly mask generation and reconstruction framework for pulmonary disease anomaly detection and lesion localization. The framework consists of four components: (1) a Vector Quantized Variational AutoEncoder (VQVAE)-based reconstruction network that generates a codebook storing high-dimensional features; (2) an unsupervised, feature-statistics-based anomaly feature synthesizer that produces features matching the realistic anomaly distribution by filtering salient features and interacting with the codebook; (3) a transformer-based feature classification network that identifies synthetic anomaly features; (4) a residual neighbourhood aggregation feature classification loss that mitigates network overfitting by penalizing the classification loss of recoverable corrupted features. Our approach is based on two intuitions. First, generating synthetic anomalies in feature space is more effective because lesions have diverse morphologies in image space and may share little in common. Second, regions with salient features or high reconstruction errors in CT images tend to resemble lesions and are more suitable sites for synthesizing abnormal features. The performance of the proposed method is validated on one public dataset with COVID-19 and one in-house dataset containing 63,610 CT images with five lung diseases. Experimental results show that, compared with feature-based, synthesis-based, and reconstruction-based methods, the proposed method adapts to CT images with four pneumonia types (COVID-19, bacterial, fungal, and mycoplasma) and one non-pneumonia disease (cancer), and achieves state-of-the-art performance in image-level anomaly detection and lesion localization.
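The codebook interaction and salient-location corruption described above can be pictured with a small sketch. This is not the authors' implementation: the saliency criterion (per-location feature norm), the fraction of corrupted locations, and the farthest-code substitution rule are all illustrative assumptions.

```python
import torch

def vq_lookup(features, codebook):
    """Assign each spatial feature (B, C, H, W) to its nearest codebook entry (K, C)."""
    B, C, H, W = features.shape
    flat = features.permute(0, 2, 3, 1).reshape(-1, C)              # (B*H*W, C)
    idx = torch.cdist(flat, codebook).argmin(dim=1)                 # nearest code per location
    quant = codebook[idx].view(B, H, W, C).permute(0, 3, 1, 2)
    return quant, idx.view(B, H, W)

def synthesize_anomalous_features(features, codebook, top_frac=0.05):
    """Corrupt the most salient feature locations by swapping in a distant code.

    Saliency is approximated by the per-location feature norm; the returned mask
    serves as the pixel-level label for the feature classification network.
    """
    quant, idx = vq_lookup(features, codebook)
    B, C, H, W = features.shape
    saliency = features.norm(dim=1)                                  # (B, H, W)
    k = max(1, int(top_frac * H * W))
    corrupted, mask = quant.clone(), torch.zeros(B, H, W, dtype=torch.bool)
    for b in range(B):
        top = saliency[b].flatten().topk(k).indices
        for pos in top.tolist():
            r, c = divmod(pos, W)
            dists = (codebook - codebook[idx[b, r, c]]).norm(dim=1)  # distance to assigned code
            corrupted[b, :, r, c] = codebook[dists.argmax()]         # farthest code acts as "anomaly"
            mask[b, r, c] = True
    return corrupted, mask

codebook = torch.randn(512, 64)                                      # K=512 codes, C=64 dims (assumed)
feats = torch.randn(2, 64, 16, 16)
anom_feats, anom_mask = synthesize_anomalous_features(feats, codebook)
```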
Affiliation(s)
- Huaying Hao
- School of Optics and Photonics, Beijing Institute of Technology, China
- Yitian Zhao
- Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China; Ningbo Cixi Institute of Biomedical Engineering and Ningbo Key Laboratory of Biomedical Imaging Probe Materials and Technology, Cixi, China
- Shaoyi Leng
- Department of Radiology, Ningbo No. 2 Hospital, Ningbo, China
- Yuanyuan Gu
- Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China; Ningbo Cixi Institute of Biomedical Engineering and Ningbo Key Laboratory of Biomedical Imaging Probe Materials and Technology, Cixi, China
- Yuhui Ma
- Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China; Ningbo Cixi Institute of Biomedical Engineering and Ningbo Key Laboratory of Biomedical Imaging Probe Materials and Technology, Cixi, China
- Feiming Wang
- Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Qi Dai
- Department of Radiology, Ningbo No. 2 Hospital, Ningbo, China
- Jianjun Zheng
- Department of Radiology, Ningbo No. 2 Hospital, Ningbo, China
- Yue Liu
- School of Optics and Photonics, Beijing Institute of Technology, China
- Jingfeng Zhang
- Department of Radiology, Ningbo No. 2 Hospital, Ningbo, China
2. Chen C, Mat Isa NA, Liu X. A review of convolutional neural network based methods for medical image classification. Comput Biol Med 2025;185:109507. PMID: 39631108. DOI: 10.1016/j.compbiomed.2024.109507.
Abstract
This study systematically reviews CNN-based medical image classification methods. We surveyed 149 of the latest and most important papers published to date and conducted an in-depth analysis of the methods used therein. Based on the selected literature, we organized this review systematically. First, the development and evolution of CNN in the field of medical image classification are analyzed. Subsequently, we provide an in-depth overview of the main techniques of CNN applied to medical image classification, which is also the current research focus in this field, including data preprocessing, transfer learning, CNN architectures, and explainability, and their role in improving classification accuracy and efficiency. In addition, this overview summarizes the main public datasets for various diseases. Although CNN has great potential in medical image classification tasks and has achieved good results, clinical application is still difficult. Therefore, we conclude by discussing the main challenges faced by CNNs in medical image analysis and pointing out future research directions to address these challenges. This review will help researchers with their future studies and can promote the successful integration of deep learning into clinical practice and smart medical systems.
Affiliation(s)
- Chao Chen
- School of Electrical and Electronic Engineering, Engineering Campus, Universiti Sains Malaysia, 14300, Nibong Tebal, Pulau Pinang, Malaysia; School of Automation and Information Engineering, Sichuan University of Science and Engineering, Yibin, 644000, China
- Nor Ashidi Mat Isa
- School of Electrical and Electronic Engineering, Engineering Campus, Universiti Sains Malaysia, 14300, Nibong Tebal, Pulau Pinang, Malaysia
- Xin Liu
- School of Electrical and Electronic Engineering, Engineering Campus, Universiti Sains Malaysia, 14300, Nibong Tebal, Pulau Pinang, Malaysia
3. Kanwal K, Asif M, Khalid SG, Liu H, Qurashi AG, Abdullah S. Current Diagnostic Techniques for Pneumonia: A Scoping Review. Sensors (Basel) 2024;24:4291. PMID: 39001069. PMCID: PMC11244398. DOI: 10.3390/s24134291.
Abstract
Community-acquired pneumonia is one of the most lethal infectious diseases, especially for infants and the elderly. Given the variety of causative agents, the accurate early detection of pneumonia is an active research area. To the best of our knowledge, scoping reviews on diagnostic techniques for pneumonia are lacking. In this scoping review, three major electronic databases were searched and the resulting research was screened. We categorized these diagnostic techniques into four classes (i.e., lab-based methods, imaging-based techniques, acoustic-based techniques, and physiological-measurement-based techniques) and summarized their recent applications. Major research has been skewed towards imaging-based techniques, especially after COVID-19. Currently, chest X-rays and blood tests are the most common tools in the clinical setting to establish a diagnosis; however, there is a need to look for safe, non-invasive, and more rapid techniques for diagnosis. Recently, some non-invasive techniques based on wearable sensors achieved reasonable diagnostic accuracy that could open a new chapter for future applications. Consequently, further research and technology development are still needed for pneumonia diagnosis using non-invasive physiological parameters to attain a better point of care for pneumonia patients.
Affiliation(s)
- Kehkashan Kanwal
- College of Speech, Language, and Hearing Sciences, Ziauddin University, Karachi 75000, Pakistan
- Muhammad Asif
- Faculty of Computing and Applied Sciences, Sir Syed University of Engineering and Technology, Karachi 75300, Pakistan
- Syed Ghufran Khalid
- Department of Engineering, Faculty of Science and Technology, Nottingham Trent University, Nottingham B15 3TN, UK
- Haipeng Liu
- Research Centre for Intelligent Healthcare, Coventry University, Coventry CV1 5FB, UK
- Saad Abdullah
- School of Innovation, Design and Engineering, Mälardalen University, 721 23 Västerås, Sweden
4. Chen Z, Yao L, Liu Y, Han X, Gong Z, Luo J, Zhao J, Fang G. Deep learning-aided 3D proxy-bridged region-growing framework for multi-organ segmentation. Sci Rep 2024;14:9784. PMID: 38684904. PMCID: PMC11059262. DOI: 10.1038/s41598-024-60668-5.
Abstract
Accurate multi-organ segmentation in 3D CT images is imperative for enhancing computer-aided diagnosis and radiotherapy planning. However, current deep learning-based methods for 3D multi-organ segmentation face challenges such as the need for labor-intensive manual pixel-level annotations and high hardware resource demands, especially regarding GPU resources. To address these issues, we propose a 3D proxy-bridged region-growing framework specifically designed for the segmentation of the liver and spleen. Specifically, a key slice is selected from each 3D volume according to the corresponding intensity histogram. Subsequently, a deep learning model is employed to pinpoint the semantic central patch on this key slice and to calculate the growing seed. To counteract the impact of noise, segmentation of the liver and spleen is conducted on superpixel images created through a proxy-bridging strategy. The segmentation process is then extended to adjacent slices by applying the same methodology iteratively, culminating in the complete segmentation result. Experimental results demonstrate that the proposed framework accomplishes segmentation of the liver and spleen with an average Dice Similarity Coefficient of approximately 0.93 and a Jaccard Similarity Coefficient of around 0.88. These outcomes substantiate the framework's capability to achieve performance on par with deep learning methods while requiring less guidance information and fewer GPU resources.
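The slice-wise growing step described above can be sketched as a plain intensity-based region grower. The 4-connectivity, tolerance value, and single-seed handling below are illustrative assumptions rather than the paper's exact procedure, which grows on proxy-bridged superpixel images and propagates across slices.

```python
import numpy as np
from collections import deque

def region_grow(slice_2d, seed, tol=40):
    """Grow a region from `seed` (row, col) over pixels within `tol` of the seed value.

    A minimal 4-connected breadth-first grower on one slice; iterative slice-to-slice
    propagation would reuse the grown mask to derive seeds for the adjacent slice.
    """
    h, w = slice_2d.shape
    mask = np.zeros((h, w), dtype=bool)
    seed_val = float(slice_2d[seed])
    queue = deque([seed])
    mask[seed] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and not mask[rr, cc]:
                if abs(float(slice_2d[rr, cc]) - seed_val) <= tol:
                    mask[rr, cc] = True
                    queue.append((rr, cc))
    return mask

ct_slice = np.random.randint(-200, 200, size=(128, 128))   # placeholder HU values
liver_mask = region_grow(ct_slice, seed=(64, 64), tol=30)
```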
Affiliation(s)
- Zhihong Chen
- Institute of Computing Science and Technology, Guangzhou University, Guangzhou, 510006, China
- Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China
- Lisha Yao
- Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China
- School of Medicine, South China University of Technology, Guangzhou, 510180, China
- Yue Liu
- Institute of Computing Science and Technology, Guangzhou University, Guangzhou, 510006, China
- School of Information Engineering, Jiangxi College of Applied Technology, Ganzhou, 341000, China
- Xiaorui Han
- Department of Radiology, School of Medicine, Guangzhou First People's Hospital, South China University of Technology, Guangzhou, 510180, China
- Zhengze Gong
- Information and Data Centre, School of Medicine, Guangzhou First People's Hospital, South China University of Technology Guangdong, Guangzhou, 510180, China
- Jichao Luo
- Institute of Computing Science and Technology, Guangzhou University, Guangzhou, 510006, China
- Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China
- Jietong Zhao
- Institute of Computing Science and Technology, Guangzhou University, Guangzhou, 510006, China
- Gang Fang
- Institute of Computing Science and Technology, Guangzhou University, Guangzhou, 510006, China
- Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China
5. Wang F, Li X, Wen R, Luo H, Liu D, Qi S, Jing Y, Wang P, Deng G, Huang C, Du T, Wang L, Liang H, Wang J, Liu C. Pneumonia-Plus: a deep learning model for the classification of bacterial, fungal, and viral pneumonia based on CT tomography. Eur Radiol 2023;33:8869-8878. PMID: 37389609. DOI: 10.1007/s00330-023-09833-4.
Abstract
OBJECTIVES This study aims to develop a deep learning algorithm, Pneumonia-Plus, based on computed tomography (CT) images for the accurate classification of bacterial, fungal, and viral pneumonia. METHODS A total of 2763 participants with chest CT images and a definite pathogen diagnosis were included to train and validate the algorithm. Pneumonia-Plus was prospectively tested on a nonoverlapping dataset of 173 patients. The algorithm's performance in classifying the three types of pneumonia was compared with that of three radiologists using the McNemar test to verify its clinical usefulness. RESULTS Among the 173 patients, area under the curve (AUC) values for viral, fungal, and bacterial pneumonia were 0.816, 0.715, and 0.934, respectively. Viral pneumonia was accurately classified with a sensitivity, specificity, and accuracy of 0.847, 0.919, and 0.873. The three radiologists also showed good consistency with Pneumonia-Plus. Their AUC values for bacterial, fungal, and viral pneumonia were 0.480, 0.541, and 0.580 (radiologist 1: 3 years of experience); 0.637, 0.693, and 0.730 (radiologist 2: 7 years of experience); and 0.734, 0.757, and 0.847 (radiologist 3: 12 years of experience), respectively. The McNemar test results for sensitivity showed that the diagnostic performance of the algorithm was significantly better than that of radiologist 1 and radiologist 2 (p < 0.05) in differentiating bacterial and viral pneumonia. Radiologist 3 had a higher diagnostic accuracy than the algorithm. CONCLUSIONS The Pneumonia-Plus algorithm differentiates between bacterial, fungal, and viral pneumonia, reaches the level of an attending radiologist, and reduces the risk of misdiagnosis. Pneumonia-Plus is important for choosing appropriate treatment, avoiding unnecessary antibiotics, and providing timely information to guide clinical decision-making and improve patient outcomes. CLINICAL RELEVANCE STATEMENT The Pneumonia-Plus algorithm could assist in the accurate classification of pneumonia based on CT images, which has great clinical value in avoiding unnecessary antibiotics and providing timely information to guide clinical decision-making and improve patient outcomes. KEY POINTS • The Pneumonia-Plus algorithm, trained on data collected from multiple centers, can accurately identify bacterial, fungal, and viral pneumonia. • The Pneumonia-Plus algorithm was found to have better sensitivity in classifying viral and bacterial pneumonia than radiologist 1 (5-year experience) and radiologist 2 (7-year experience). • The Pneumonia-Plus algorithm differentiates between bacterial, fungal, and viral pneumonia at the level of an attending radiologist.
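The algorithm-versus-radiologist comparison reported above rests on the McNemar test for paired binary outcomes. A minimal sketch of how such a test is typically run is shown below; the counts are placeholders, not the study's data.

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Paired correctness of two readers on the same cases (placeholder values).
# Off-diagonal cells are the discordant pairs that drive the test.
table = np.array([[50, 12],   # both correct | algorithm correct, radiologist wrong
                  [4,  10]])  # radiologist correct, algorithm wrong | both wrong
result = mcnemar(table, exact=True)   # exact binomial test on the discordant pairs
print(f"statistic={result.statistic}, p-value={result.pvalue:.4f}")
```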
Affiliation(s)
- Fang Wang
- Department of Radiology, Southwest Hospital, Third Military Medical University (Army Medical University), 30 Gao Tan Yan St, Chongqing, 400038, China
- Xiaoming Li
- Department of Radiology, Southwest Hospital, Third Military Medical University (Army Medical University), 30 Gao Tan Yan St, Chongqing, 400038, China
- Ru Wen
- Medical College, Guizhou University, Guiyang, Guizhou Province, 550000, China
- Hu Luo
- No 1. Intensive Care Unit, Huoshenshan Hospital, Wuhan, China
- Department of Respiratory and Critical Care Medicine, Southwest Hospital, Third Military Medical University (Army Medical University), Chongqing, China
- Dong Liu
- Huiying Medical Technology Co., Ltd, Dongsheng Science and Technology Park, Haidian District, Beijing, China
- Shuai Qi
- Huiying Medical Technology Co., Ltd, Dongsheng Science and Technology Park, Haidian District, Beijing, China
- Yang Jing
- Huiying Medical Technology Co., Ltd, Dongsheng Science and Technology Park, Haidian District, Beijing, China
- Peng Wang
- Medical Big Data and Artificial Intelligence Center, Southwest Hospital, Third Military Medical University (Army Medical University), Chongqing, China
- Gang Deng
- Department of Radiology, Maternal and Child Health Hospital of Hubei Province, Guanggu District, Wuhan, China
- Cong Huang
- Department of Radiology, The 926 Hospital of PLA, Kaiyuan, China
- Tingting Du
- Department of Radiology, Chongqing Traditional Chinese Medicine Hospital, Chongqing, China
- Limei Wang
- Department of Radiology, Southwest Hospital, Third Military Medical University (Army Medical University), 30 Gao Tan Yan St, Chongqing, 400038, China
- Hongqin Liang
- Department of Radiology, Southwest Hospital, Third Military Medical University (Army Medical University), 30 Gao Tan Yan St, Chongqing, 400038, China
- Jian Wang
- Department of Radiology, Southwest Hospital, Third Military Medical University (Army Medical University), 30 Gao Tan Yan St, Chongqing, 400038, China
- Chen Liu
- Department of Radiology, Southwest Hospital, Third Military Medical University (Army Medical University), 30 Gao Tan Yan St, Chongqing, 400038, China
6. Murphy K, Muhairwe J, Schalekamp S, van Ginneken B, Ayakaka I, Mashaete K, Katende B, van Heerden A, Bosman S, Madonsela T, Gonzalez Fernandez L, Signorell A, Bresser M, Reither K, Glass TR. COVID-19 screening in low resource settings using artificial intelligence for chest radiographs and point-of-care blood tests. Sci Rep 2023;13:19692. PMID: 37952026. PMCID: PMC10640556. DOI: 10.1038/s41598-023-46461-w.
Abstract
Artificial intelligence (AI) systems for detection of COVID-19 using chest X-Ray (CXR) imaging and point-of-care blood tests were applied to data from four low resource African settings. The performance of these systems to detect COVID-19 using various input data was analysed and compared with antigen-based rapid diagnostic tests. Participants were tested using the gold standard of RT-PCR test (nasopharyngeal swab) to determine whether they were infected with SARS-CoV-2. A total of 3737 (260 RT-PCR positive) participants were included. In our cohort, AI for CXR images was a poor predictor of COVID-19 (AUC = 0.60), since the majority of positive cases had mild symptoms and no visible pneumonia in the lungs. AI systems using differential white blood cell counts (WBC), or a combination of WBC and C-Reactive Protein (CRP) both achieved an AUC of 0.74 with a suggested optimal cut-off point at 83% sensitivity and 63% specificity. The antigen-RDT tests in this trial obtained 65% sensitivity at 98% specificity. This study is the first to validate AI tools for COVID-19 detection in an African setting. It demonstrates that screening for COVID-19 using AI with point-of-care blood tests is feasible and can operate at a higher sensitivity level than antigen testing.
Affiliation(s)
- Keelin Murphy
- Radboud University Medical Center, 6525 GA, Nijmegen, The Netherlands
- Steven Schalekamp
- Radboud University Medical Center, 6525 GA, Nijmegen, The Netherlands
- Bram van Ginneken
- Radboud University Medical Center, 6525 GA, Nijmegen, The Netherlands
- Irene Ayakaka
- SolidarMed, Partnerships for Health, Maseru, Lesotho
- Alastair van Heerden
- Centre for Community Based Research, Human Sciences Research Council, Pietermaritzburg, South Africa
- SAMRC/WITS Developmental Pathways for Health Research Unit, Department of Paediatrics, School of Clinical Medicine, Faculty of Health Sciences, University of the Witwatersrand, Johannesburg, Gauteng, South Africa
- Shannon Bosman
- Centre for Community Based Research, Human Sciences Research Council, Pietermaritzburg, South Africa
- Thandanani Madonsela
- Centre for Community Based Research, Human Sciences Research Council, Pietermaritzburg, South Africa
- Lucia Gonzalez Fernandez
- Department of Infectious Diseases and Hospital Epidemiology, University Hospital Basel, Basel, Switzerland
- SolidarMed, Partnerships for Health, Lucerne, Switzerland
- Aita Signorell
- Swiss Tropical and Public Health Institute, Allschwil, Switzerland
- University of Basel, Basel, Switzerland
- Moniek Bresser
- Swiss Tropical and Public Health Institute, Allschwil, Switzerland
- University of Basel, Basel, Switzerland
- Klaus Reither
- Swiss Tropical and Public Health Institute, Allschwil, Switzerland
- University of Basel, Basel, Switzerland
- Tracy R Glass
- Swiss Tropical and Public Health Institute, Allschwil, Switzerland
- University of Basel, Basel, Switzerland
7. Haennah JHJ, Christopher CS, King GRG. Prediction of the COVID disease using lung CT images by Deep Learning algorithm: DETS-optimized Resnet 101 classifier. Front Med (Lausanne) 2023;10:1157000. PMID: 37746067. PMCID: PMC10513469. DOI: 10.3389/fmed.2023.1157000.
Abstract
The COVID-19 disease caused by SARS-CoV-2 has become a pandemic and spread across the globe. Because the number of cases rises each day, evaluating laboratory test results takes time, which constrains both diagnosis and therapy. A clinical decision-making system built on predictive Deep Learning (DL) algorithms is therefore needed to alleviate the pressure on healthcare systems. Using DL and chest CT scans, this research aims to identify COVID-19 patients by utilizing a Transfer Learning (TL)-based Generative Adversarial Network (Pix2Pix-GAN). The COVID-19 images are then classified as either positive or negative using a Duffing Equation Tuna Swarm (DETS)-optimized ResNet-101 classifier trained on synthetic and real images from the Kaggle lung CT COVID dataset. The proposed technique is implemented using MATLAB simulations and evaluated via accuracy, precision, F1-score, recall, and AUC. Experimental findings show that the proposed prediction model identifies COVID-19 patients with 97.2% accuracy, a recall of 95.9%, and a specificity of 95.5%, suggesting that the model can be used by medical specialists to forecast COVID-19 infection in clinical prediction research.
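Setting aside the Pix2Pix-GAN augmentation and the DETS hyperparameter search, the backbone described above is a standard pretrained ResNet-101 with a replaced classification head. A hedged sketch follows; the framework choice, frozen backbone, and hyperparameters are assumptions (the paper itself works in MATLAB).

```python
import torch
import torch.nn as nn
from torchvision import models

def build_covid_classifier(num_classes: int = 2) -> nn.Module:
    """ImageNet-pretrained ResNet-101 with its final layer swapped for CT classification."""
    model = models.resnet101(weights=models.ResNet101_Weights.IMAGENET1K_V1)
    for p in model.parameters():          # freeze the backbone for transfer learning
        p.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new trainable head
    return model

model = build_covid_classifier()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)  # tune only the head
criterion = nn.CrossEntropyLoss()
logits = model(torch.randn(4, 3, 224, 224))                   # dummy CT batch
loss = criterion(logits, torch.tensor([0, 1, 1, 0]))
loss.backward()
optimizer.step()
```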
Affiliation(s)
- J. H. Jensha Haennah
- St. Xavier’s Catholic College of Engineering, Affiliated to Anna University Chennai, Tamil Nadu, India
- G. R. Gnana King
- Sahrdaya College of Engineering and Technology, Thrissur, Kerala, India
8. Li W, Cao Y, Wang S, Wan B. Fully feature fusion based neural network for COVID-19 lesion segmentation in CT images. Biomed Signal Process Control 2023;86:104939. PMID: 37082352. PMCID: PMC10083211. DOI: 10.1016/j.bspc.2023.104939.
Abstract
Coronavirus Disease 2019 (COVID-19) has spread around the world, seriously affecting people's health. As an auxiliary diagnostic modality, computed tomography (CT) images contain rich semantic information. However, the automatic segmentation of COVID-19 lesions in CT images faces several challenges, including inconsistency in the size and shape of lesions, high lesion variability, and low contrast between lesions and the surrounding normal tissue. Therefore, this paper proposes a Fully Feature Fusion Based Neural Network for COVID-19 Lesion Segmentation in CT Images (F3-Net). F3-Net uses an encoder-decoder architecture. In F3-Net, the Multiple Scale Module (MSM) senses features at different scales, and the Dense Path Module (DPM) eliminates the semantic gap between features. The Attention Fusion Module (AFM) is the attention component, which better fuses the multiple features. Furthermore, we propose an improved loss function, Loss_Covid-BCE, that pays more attention to the lesions based on prior knowledge of the distribution of COVID-19 lesions in the lungs. Finally, we verified the superior performance of F3-Net on a COVID-19 segmentation dataset; experiments demonstrate that the proposed model segments COVID-19 lesions in CT images more accurately than state-of-the-art benchmarks.
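The abstract describes Loss_Covid-BCE only as a BCE variant that up-weights lesion-prone regions using prior knowledge of where COVID-19 lesions occur in the lungs. A minimal sketch of one such prior-weighted BCE is given below; the prior map, its construction from training masks, and the weighting strength are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def prior_weighted_bce(logits, targets, lesion_prior, alpha=4.0):
    """Pixel-wise BCE in which regions with a high lesion prior contribute more.

    logits, targets: (B, 1, H, W); lesion_prior: (1, 1, H, W) in [0, 1], e.g. a
    normalized frequency map of lesion locations estimated from training masks.
    """
    per_pixel = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    weights = 1.0 + alpha * lesion_prior      # >= 1 everywhere, larger on lesion-prone areas
    return (weights * per_pixel).mean()

# Example: prior built from the empirical mean of training masks (placeholder data).
train_masks = torch.rand(16, 1, 64, 64).round()
prior = train_masks.mean(dim=0, keepdim=True)
loss = prior_weighted_bce(torch.randn(2, 1, 64, 64), torch.rand(2, 1, 64, 64).round(), prior)
```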
Affiliation(s)
- Wei Li
- Key Laboratory of Intelligent Computing in Medical Image (MIIC), Northeastern University, Ministry of Education, Shenyang, China
- Yangyong Cao
- School of Computer Science and Engineering, Northeastern University, Shenyang, China
- Shanshan Wang
- School of Computer Science and Engineering, Northeastern University, Shenyang, China
- Bolun Wan
- School of Computer Science and Engineering, Northeastern University, Shenyang, China
9. Santosh KC, GhoshRoy D, Nakarmi S. A Systematic Review on Deep Structured Learning for COVID-19 Screening Using Chest CT from 2020 to 2022. Healthcare (Basel) 2023;11:2388. PMID: 37685422. PMCID: PMC10486542. DOI: 10.3390/healthcare11172388.
Abstract
The emergence of the COVID-19 pandemic in Wuhan in 2019 led to the discovery of a novel coronavirus. The World Health Organization (WHO) designated it as a global pandemic on 11 March 2020 due to its rapid and widespread transmission. Its impact has had profound implications, particularly in the realm of public health. Extensive scientific endeavors have been directed towards devising effective treatment strategies and vaccines. Within the healthcare and medical imaging domain, the application of artificial intelligence (AI) has brought significant advantages. This study delves into peer-reviewed research articles spanning the years 2020 to 2022, focusing on AI-driven methodologies for the analysis and screening of COVID-19 through chest CT scan data. We assess the efficacy of deep learning algorithms in facilitating decision making processes. Our exploration encompasses various facets, including data collection, systematic contributions, emerging techniques, and encountered challenges. However, the comparison of outcomes between 2020 and 2022 proves intricate due to shifts in dataset magnitudes over time. The initiatives aimed at developing AI-powered tools for the detection, localization, and segmentation of COVID-19 cases are primarily centered on educational and training contexts. We deliberate on their merits and constraints, particularly in the context of necessitating cross-population train/test models. Our analysis encompassed a review of 231 research publications, bolstered by a meta-analysis employing search keywords (COVID-19 OR Coronavirus) AND chest CT AND (deep learning OR artificial intelligence OR medical imaging) on both the PubMed Central Repository and Web of Science platforms.
Affiliation(s)
- KC Santosh
- 2AI: Applied Artificial Intelligence Research Lab, Vermillion, SD 57069, USA
- Debasmita GhoshRoy
- School of Automation, Banasthali Vidyapith, Tonk 304022, Rajasthan, India
- Suprim Nakarmi
- Department of Computer Science, University of South Dakota, Vermillion, SD 57069, USA
10. Zeng LL, Gao K, Hu D, Feng Z, Hou C, Rong P, Wang W. SS-TBN: A Semi-Supervised Tri-Branch Network for COVID-19 Screening and Lesion Segmentation. IEEE Trans Pattern Anal Mach Intell 2023;45:10427-10442. PMID: 37022260. DOI: 10.1109/tpami.2023.3240886.
Abstract
Insufficient annotated data and minor lung lesions pose major challenges for computed tomography (CT)-aided automatic COVID-19 diagnosis at an early outbreak stage. To address this issue, we propose a Semi-Supervised Tri-Branch Network (SS-TBN). First, we develop a joint TBN model for dual-task application scenarios of image segmentation and classification, such as CT-based COVID-19 diagnosis, in which pixel-level lesion segmentation and slice-level infection classification branches are simultaneously trained via lesion attention, and an individual-level diagnosis branch aggregates the slice-level outputs for COVID-19 screening. Second, we propose a novel hybrid semi-supervised learning method to make full use of unlabeled data, combining a new double-threshold pseudo-labeling method specifically designed for the joint model and a new inter-slice consistency regularization method specifically tailored to CT images. Besides two publicly available external datasets, we collect internal and our own external datasets including 210,395 images (1,420 cases versus 498 controls) from ten hospitals. Experimental results show that the proposed method achieves state-of-the-art performance in COVID-19 classification with limited annotated data, even when lesions are subtle, and that the segmentation results promote interpretability for diagnosis, suggesting the potential of SS-TBN for early screening when labeled data are insufficient at the early stage of a pandemic outbreak like COVID-19.
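The double-threshold pseudo-labeling idea — keep an unlabeled slice only when its predicted infection probability is confidently high or confidently low — can be sketched as follows; the thresholds and tensor shapes are illustrative, not the SS-TBN settings.

```python
import torch

def double_threshold_pseudo_labels(probs, hi=0.9, lo=0.1):
    """Select confident unlabeled samples and assign hard pseudo-labels.

    probs: (N,) sigmoid outputs of the slice-level infection branch.
    Samples with probability between `lo` and `hi` are discarded as too uncertain.
    """
    keep = (probs >= hi) | (probs <= lo)
    labels = (probs >= hi).long()
    return keep.nonzero(as_tuple=True)[0], labels[keep]

probs = torch.tensor([0.02, 0.55, 0.95, 0.40, 0.97])
idx, pseudo = double_threshold_pseudo_labels(probs)
# idx -> tensor([0, 2, 4]); pseudo -> tensor([0, 1, 1])
```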
11. Tian F, Tian Z, Chen Z, Zhang D, Du S. Surface-GCN: Learning interaction experience for organ segmentation in 3D medical images. Med Phys 2023;50:5030-5044. PMID: 36738103. DOI: 10.1002/mp.16280.
Abstract
BACKGROUND Accurate segmentation of organs is of great significance for clinical diagnosis, but it remains difficult because of obscure organ boundaries caused by tissue adhesion in medical images. Owing to the continuity of medical image volumes, segmentation on ambiguous slices can be inferred from adjacent slices with a clear organ boundary; radiologists delineate a clear organ boundary by observing adjacent slices. PURPOSE Inspired by the radiologists' delineating procedure, we design an organ segmentation model based on the boundary information of adjacent slices and a human-machine interactive learning strategy that introduces clinical experience. METHODS We propose an interactive organ segmentation method for medical image volumes based on a Graph Convolution Network (GCN), called Surface-GCN. First, we propose a Surface Feature Extraction Network (SFE-Net) to capture surface features of a target organ, supervised by a Mini-batch Adaptive Surface Matching (MBASM) module. Then, to predict organ boundaries precisely, we design an automatic segmentation module based on a Surface Convolution Unit (SCU), which propagates information over organ surfaces to refine the generated boundaries. In addition, an interactive segmentation module is proposed to learn radiologists' experience of interactive corrections on organ surfaces and reduce the number of interaction clicks. RESULTS We evaluate the proposed method on one prostate MR image dataset and two abdominal multi-organ CT datasets. The experimental results show that our method outperforms other state-of-the-art methods. For prostate segmentation, the proposed method achieves a DSC score of 94.49% on the PROMISE12 test dataset. For abdominal multi-organ segmentation, the proposed method achieves DSC scores of 95%, 91%, 95%, and 88% for the left kidney, gallbladder, spleen, and esophagus, respectively. For interactive segmentation, the proposed method requires 5-10 fewer interaction clicks to reach the same accuracy. CONCLUSIONS To address the medical organ segmentation challenge, we propose a Graph Convolutional Network called Surface-GCN that imitates radiologist interactions and learns clinical experience. On single- and multi-organ segmentation tasks, the proposed method obtains more accurate segmentation boundaries than other state-of-the-art methods.
Affiliation(s)
- Fengrui Tian
- School of Software Engineering, Xi'an Jiaotong University, Xi'an, China
- Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, China
- Zhiqiang Tian
- School of Software Engineering, Xi'an Jiaotong University, Xi'an, China
- Zhang Chen
- School of Software Engineering, Xi'an Jiaotong University, Xi'an, China
- Dong Zhang
- Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, China
- School of Automation Science and Engineering, Xi'an Jiaotong University, Xi'an, China
- Shaoyi Du
- Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, China
12. Tian M, Wang H, Sun Y, Wu S, Tang Q, Zhang M. Fine-grained attention & knowledge-based collaborative network for diabetic retinopathy grading. Heliyon 2023;9:e17217. PMID: 37449186. PMCID: PMC10336422. DOI: 10.1016/j.heliyon.2023.e17217.
Abstract
Accurate diabetic retinopathy (DR) grading is crucial for making a proper treatment plan to reduce the damage caused by vision loss. The task is challenging because DR-related lesions are often small, show subtle visual differences, and exhibit large intra-class variation. Moreover, the relationships between the lesions and the DR levels are complicated. Although many deep learning (DL) DR grading systems have been developed with some success, there is still room for improvement in grading accuracy. A common issue is that little medical knowledge is used in these DL DR grading systems; as a result, the grading results are not properly interpretable by ophthalmologists, which hinders their potential for practical application. This paper proposes a novel fine-grained attention & knowledge-based collaborative network (FA+KC-Net) to address this concern. The fine-grained attention network dynamically divides the extracted feature maps into smaller patches and effectively captures small image features that become meaningful through training on a large number of retinopathy fundus images. The knowledge-based collaborative network extracts a-priori medical knowledge features, i.e., lesions such as microaneurysms (MAs), soft exudates (SEs), hard exudates (EXs), and hemorrhages (HEs). Finally, decision rules are developed to fuse the DR grading results from the fine-grained network and the knowledge-based collaborative network into the final grade. Extensive experiments are carried out on four widely used datasets, DDR, Messidor, APTOS, and EyePACS, to evaluate the efficacy of our method and compare it with other state-of-the-art (SOTA) DL models. Simulation results show that the proposed FA+KC-Net is accurate and stable and achieves the best performance on the DDR, Messidor, and APTOS datasets.
Affiliation(s)
- Miao Tian
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Hongqiu Wang
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Yingxue Sun
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Shaozhi Wu
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Qingqing Tang
- Department of Ophthalmology, West China Hospital, Sichuan University, Chengdu, 610041, China
- Meixia Zhang
- Department of Ophthalmology, West China Hospital, Sichuan University, Chengdu, 610041, China
13. Zhu H, Zhu Z, Wang S, Zhang Y. CovC-ReDRNet: A Deep Learning Model for COVID-19 Classification. Mach Learn Knowl Extr 2023;5:684-712. PMID: 38560420. PMCID: PMC7615781. DOI: 10.3390/make5030037.
Abstract
Since the COVID-19 pandemic outbreak, over 760 million confirmed cases and over 6.8 million deaths have been reported globally, according to the World Health Organization. While the SARS-CoV-2 virus carried by COVID-19 patients can be identified through the reverse transcription-polymerase chain reaction (RT-PCR) test with high accuracy, clinical misdiagnosis between COVID-19 and pneumonia patients remains a challenge. Therefore, we developed a novel CovC-ReDRNet model to distinguish COVID-19 patients from pneumonia patients as well as normal cases. ResNet-18 was introduced as the backbone model and subsequently tailored for feature representation. In our feature-based randomized neural network (RNN) framework, the feature representation is automatically paired with the deep random vector functional link network (dRVFL) as the optimal classifier, producing the CovC-ReDRNet model for the classification task. Results based on five-fold cross-validation reveal that our method achieved 94.94% MA sensitivity, 97.01% MA specificity, 97.56% MA accuracy, 96.81% MA precision, and 95.84% MA F1-score. Ablation studies demonstrate the superiority of ResNet-18 over different backbone networks, of RNNs over traditional classifiers, and of deep RNNs over shallow RNNs. Moreover, our proposed model achieved a better MA accuracy than state-of-the-art (SOTA) methods, the highest of which was 95.57%. In conclusion, the CovC-ReDRNet model can be regarded as an advanced computer-aided diagnostic model with high speed and high accuracy for classifying and predicting COVID-19.
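The RVFL head described above pairs frozen deep features with randomized hidden nodes and a closed-form ridge solution rather than gradient training. Below is a single-layer RVFL sketch on pre-extracted backbone features; the hidden width, activation, and regularization strength are assumptions, and the paper's classifier is the deeper, multi-layer dRVFL variant.

```python
import numpy as np

class RVFL:
    """Single-layer random vector functional link classifier (ridge-solved output)."""
    def __init__(self, n_hidden=256, reg=1e-2, seed=0):
        self.n_hidden, self.reg, self.rng = n_hidden, reg, np.random.default_rng(seed)

    def fit(self, X, y_onehot):
        d = X.shape[1]
        self.W = self.rng.standard_normal((d, self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)
        D = np.hstack([X, H])                     # direct link: raw features + random features
        # closed-form ridge regression for the output weights
        self.beta = np.linalg.solve(D.T @ D + self.reg * np.eye(D.shape[1]), D.T @ y_onehot)
        return self

    def predict(self, X):
        D = np.hstack([X, np.tanh(X @ self.W + self.b)])
        return (D @ self.beta).argmax(axis=1)

# Usage on placeholder "deep features": 100 samples x 512-dim ResNet-18 embeddings, 3 classes.
X = np.random.randn(100, 512)
y = np.random.randint(0, 3, 100)
clf = RVFL().fit(X, np.eye(3)[y])
pred = clf.predict(X)
```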
Affiliation(s)
- Hanruo Zhu
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
- Ziquan Zhu
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
- Shuihua Wang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
- Yudong Zhang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
- School of Computer Science and Technology, Henan Polytechnic University, Jiaozuo 454000, China
- Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
14. Wen R, Xu P, Cai Y, Wang F, Li M, Zeng X, Liu C. A Deep Learning Model for the Diagnosis and Discrimination of Gram-Positive and Gram-Negative Bacterial Pneumonia for Children Using Chest Radiography Images and Clinical Information. Infect Drug Resist 2023;16:4083-4092. PMID: 37388188. PMCID: PMC10305772. DOI: 10.2147/idr.s404786.
Abstract
Purpose This study aimed to develop a deep learning model based on chest radiography (CXR) images and clinical data to accurately classify gram-positive and gram-negative bacterial pneumonia in children to guide the use of antibiotics. Methods We retrospectively collected CXR images along with clinical information for gram-positive (n=447) and gram-negative (n=395) bacterial pneumonia in children from January 1, 2016, to June 30, 2021. Four types of machine learning models based on clinical data and six types of deep learning algorithm models based on image data were constructed, and multi-modal decision fusion was performed. Results In the machine learning models, CatBoost, which only used clinical data, had the best performance; its area under the receiver operating characteristic curve (AUC) was significantly higher than that of the other models (P<0.05). The incorporation of clinical information improved the performance of deep learning models that relied solely on image-based classification. Consequently, AUC and F1 increased by 5.6% and 10.2% on average, respectively. The best quality was achieved with ResNet101 (model accuracy: 0.75, recall rate: 0.84, AUC: 0.803, F1: 0.782). Conclusion Our study established a pediatric bacterial pneumonia model that utilizes CXR and clinical data to accurately classify cases of gram-negative and gram-positive bacterial pneumonia. The results confirmed that the addition of image data to the convolutional neural network model significantly improved its performance. While the CatBoost-based classifier had greater advantages owing to a smaller dataset, the quality of the Resnet101 model trained using multi-modal data was comparable to that of the CatBoost model, even with a limited number of samples.
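The multi-modal decision fusion mentioned above amounts to combining the image model's probability with the clinical-data model's probability for each child. A simple late-fusion sketch is shown below; the weighted-average rule and the weight value are assumptions, not necessarily the fusion used in the paper.

```python
import numpy as np

def fuse_predictions(p_image: np.ndarray, p_clinical: np.ndarray, w_image: float = 0.5):
    """Late fusion of two probability vectors for gram-positive (=1) vs gram-negative (=0).

    p_image: probabilities from the CXR CNN (e.g. a ResNet101-style model).
    p_clinical: probabilities from the tabular model (e.g. a CatBoost-style model).
    """
    fused = w_image * p_image + (1.0 - w_image) * p_clinical
    return fused, (fused >= 0.5).astype(int)

p_img = np.array([0.81, 0.30, 0.62])
p_cli = np.array([0.70, 0.45, 0.40])
probs, labels = fuse_predictions(p_img, p_cli, w_image=0.6)
```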
Affiliation(s)
- Ru Wen
- Medical College, Guizhou University, Guizhou, 550000, People’s Republic of China
- Department of Medical Imaging, Guizhou Provincial People Hospital, Guiyang City, Guizhou Province, 550000, People’s Republic of China
- Department of Radiology, Southwest Hospital, Army Medical University (Third Military Medical University), Chongqing, 400038, People’s Republic of China
- Peng Xu
- Department of Radiology, Southwest Hospital, Army Medical University (Third Military Medical University), Chongqing, 400038, People’s Republic of China
- Yimin Cai
- Medical College, Guizhou University, Guizhou, 550000, People’s Republic of China
- Fang Wang
- Department of Radiology, Southwest Hospital, Army Medical University (Third Military Medical University), Chongqing, 400038, People’s Republic of China
- Mengfei Li
- Department of Radiology, Southwest Hospital, Army Medical University (Third Military Medical University), Chongqing, 400038, People’s Republic of China
- Xianchun Zeng
- Department of Medical Imaging, Guizhou Provincial People Hospital, Guiyang City, Guizhou Province, 550000, People’s Republic of China
- Chen Liu
- Department of Radiology, Southwest Hospital, Army Medical University (Third Military Medical University), Chongqing, 400038, People’s Republic of China
15. Rondinella A, Crispino E, Guarnera F, Giudice O, Ortis A, Russo G, Di Lorenzo C, Maimone D, Pappalardo F, Battiato S. Boosting multiple sclerosis lesion segmentation through attention mechanism. Comput Biol Med 2023;161:107021. PMID: 37216775. DOI: 10.1016/j.compbiomed.2023.107021.
Abstract
Magnetic resonance imaging is a fundamental tool for reaching a diagnosis of multiple sclerosis and monitoring its progression. Although several attempts have been made to segment multiple sclerosis lesions using artificial intelligence, fully automated analysis is not yet available. State-of-the-art methods rely on slight variations of standard segmentation architectures (e.g., U-Net). However, recent research has demonstrated how exploiting temporal-aware features and attention mechanisms can provide a significant boost to traditional architectures. This paper proposes a framework that exploits an augmented U-Net architecture with a convolutional long short-term memory layer and an attention mechanism, which is able to segment and quantify multiple sclerosis lesions detected in magnetic resonance images. Quantitative and qualitative evaluation on challenging examples demonstrates that the method outperforms previous state-of-the-art approaches, reporting an overall Dice score of 89% and also demonstrating robustness and generalization ability on previously unseen test samples from a new dedicated dataset that is still under construction.
Affiliation(s)
- Alessia Rondinella
- Department of Mathematics and Computer Science, University of Catania, Viale Andrea Doria 6, Catania, 95125, Italy
- Elena Crispino
- Department of Biomedical and Biotechnological Sciences, University of Catania, Via Santa Sofia 97, Catania, 95125, Italy
- Francesco Guarnera
- Department of Mathematics and Computer Science, University of Catania, Viale Andrea Doria 6, Catania, 95125, Italy
- Oliver Giudice
- Department of Mathematics and Computer Science, University of Catania, Viale Andrea Doria 6, Catania, 95125, Italy
- Alessandro Ortis
- Department of Mathematics and Computer Science, University of Catania, Viale Andrea Doria 6, Catania, 95125, Italy
- Giulia Russo
- Department of Drug and Health Sciences, University of Catania, Viale Andrea Doria 6, Catania, 95125, Italy
- Clara Di Lorenzo
- UOC Radiologia, ARNAS Garibaldi, P.zza S. Maria di Gesù, Catania, 95124, Italy
- Davide Maimone
- Centro Sclerosi Multipla, UOC Neurologia, ARNAS Garibaldi, P.zza S. Maria di Gesù, Catania, 95124, Italy
- Francesco Pappalardo
- Department of Drug and Health Sciences, University of Catania, Viale Andrea Doria 6, Catania, 95125, Italy
- Sebastiano Battiato
- Department of Mathematics and Computer Science, University of Catania, Viale Andrea Doria 6, Catania, 95125, Italy
16. Alablani IAL, Alenazi MJF. COVID-ConvNet: A Convolutional Neural Network Classifier for Diagnosing COVID-19 Infection. Diagnostics (Basel) 2023;13:1675. PMID: 37238159. DOI: 10.3390/diagnostics13101675.
Abstract
The novel coronavirus (COVID-19) pandemic still has a significant impact on the worldwide population's health and well-being. Effective patient screening, including radiological examination employing chest radiography as one of the main screening modalities, is an important step in the battle against the disease. Indeed, the earliest studies on COVID-19 found that patients infected with COVID-19 present with characteristic anomalies in chest radiography. In this paper, we introduce COVID-ConvNet, a deep convolutional neural network (DCNN) design suitable for detecting COVID-19 symptoms from chest X-ray (CXR) scans. The proposed deep learning (DL) model was trained and evaluated using 21,165 CXR images from the COVID-19 Database, a publicly available dataset. The experimental results demonstrate that our COVID-ConvNet model has a high prediction accuracy at 97.43% and outperforms recent related works by up to 5.9% in terms of prediction accuracy.
Affiliation(s)
- Ibtihal A L Alablani
- Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh P.O. Box 11451, Saudi Arabia
- Mohammed J F Alenazi
- Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh P.O. Box 11451, Saudi Arabia
17. Wang X, Cheng L, Zhang D, Liu Z, Jiang L. Broad learning solution for rapid diagnosis of COVID-19. Biomed Signal Process Control 2023;83:104724. PMID: 36811035. PMCID: PMC9935280. DOI: 10.1016/j.bspc.2023.104724.
Abstract
COVID-19 has put all of humanity in a health dilemma as it spreads rapidly. For many infectious diseases, delays in detection results lead to the spread of infection and an increase in healthcare costs. Existing COVID-19 diagnostic methods rely on large amounts of redundant labeled data and time-consuming training processes to obtain satisfactory results. However, as COVID-19 is a new epidemic, obtaining large clinical datasets is still challenging, which inhibits the training of deep models, and a model that can rapidly diagnose COVID-19 at all stages has still not been proposed. To address these limitations, we combine feature attention and broad learning to propose a diagnostic system (FA-BLS) for COVID-19 pulmonary infection, which introduces a broad learning structure to address the slow diagnosis speed of existing deep learning methods. In our network, transfer learning is performed with ResNet50 convolutional modules with fixed weights to extract image features, and an attention mechanism is used to enhance the feature representation. After that, feature nodes and enhancement nodes are generated by broad learning with random weights to adaptively select features for diagnosis. Finally, three publicly accessible datasets were used to evaluate our optimized model. The FA-BLS model trained 26-130 times faster than deep learning approaches with a similar level of accuracy, achieving a fast and accurate diagnosis and supporting effective isolation of COVID-19 cases; the proposed method also opens up a new approach to other types of chest CT image recognition problems.
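The broad-learning classifier sits on top of fixed ResNet-50 features: randomly weighted feature nodes and enhancement nodes are generated once, and the output weights are solved in closed form, which is what makes training fast. A compact sketch under assumed dimensions and activation choices, not the FA-BLS configuration:

```python
import numpy as np

def broad_learning_fit(X, y_onehot, n_feature=128, n_enhance=256, reg=1e-3, seed=0):
    """Fit a minimal broad learning system: random feature nodes -> random
    enhancement nodes -> ridge-regressed output weights (no gradient training)."""
    rng = np.random.default_rng(seed)
    Wf = rng.standard_normal((X.shape[1], n_feature))
    Z = np.tanh(X @ Wf)                                   # feature nodes
    We = rng.standard_normal((n_feature, n_enhance))
    H = np.tanh(Z @ We)                                   # enhancement nodes
    A = np.hstack([Z, H])
    Wout = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ y_onehot)
    return Wf, We, Wout

def broad_learning_predict(X, params):
    Wf, We, Wout = params
    Z = np.tanh(X @ Wf)
    A = np.hstack([Z, np.tanh(Z @ We)])
    return (A @ Wout).argmax(axis=1)

# Placeholder ResNet-50 embeddings (2048-dim) for a binary COVID/non-COVID task.
X, y = np.random.randn(200, 2048), np.random.randint(0, 2, 200)
params = broad_learning_fit(X, np.eye(2)[y])
pred = broad_learning_predict(X, params)
```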
Affiliation(s)
- Xiaowei Wang
- School of Physical Science and Technology, Shenyang Normal University, Shenyang, 110034, China
- Liying Cheng
- School of Physical Science and Technology, Shenyang Normal University, Shenyang, 110034, China
- Dan Zhang
- Navigation College, Dalian Maritime University, Dalian, 116026, China
- Zuchen Liu
- School of Physical Science and Technology, Shenyang Normal University, Shenyang, 110034, China
- Longtao Jiang
- School of Physical Science and Technology, Shenyang Normal University, Shenyang, 110034, China
18. Jia Z, You K, He W, Tian Y, Feng Y, Wang Y, Jia X, Lou Y, Zhang J, Li G, Zhang Z. Event-Based Semantic Segmentation With Posterior Attention. IEEE Trans Image Process 2023;32:1829-1842. PMID: 37028052. DOI: 10.1109/tip.2023.3249579.
Abstract
In the past years, attention-based Transformers have swept across the field of computer vision, starting a new stage of backbones in semantic segmentation. Nevertheless, semantic segmentation under poor light conditions remains an open problem. Moreover, most papers about semantic segmentation work on images produced by commodity frame-based cameras with a limited framerate, hindering their deployment to auto-driving systems that require instant perception and response at milliseconds. An event camera is a new sensor that generates event data at microseconds and can work in poor light conditions with a high dynamic range. It looks promising to leverage event cameras to enable perception where commodity cameras are incompetent, but algorithms for event data are far from mature. Pioneering researchers stack event data as frames so that event-based segmentation is converted to frame-based segmentation, but characteristics of event data are not explored. Noticing that event data naturally highlight moving objects, we propose a posterior attention module that adjusts the standard attention by the prior knowledge provided by event data. The posterior attention module can be readily plugged into many segmentation backbones. Plugging the posterior attention module into a recently proposed SegFormer network, we get EvSegFormer (the event-based version of SegFormer) with state-of-the-art performance in two datasets (MVSEC and DDD-17) collected for event-based segmentation. Code is available at https://github.com/zexiJia/EvSegFormer to facilitate research on event-based vision.
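The posterior attention idea — rescaling ordinary attention weights by a prior derived from event activity — can be sketched as below; how the event prior is computed and injected here is an assumption for illustration, not the paper's exact module.

```python
import torch
import torch.nn.functional as F

def posterior_attention(q, k, v, event_prior):
    """Scaled dot-product attention whose weights are modulated by an event-based prior.

    q, k, v: (B, N, D) token tensors; event_prior: (B, N) non-negative scores, e.g.
    per-token event counts, which up-weight keys at locations with recent motion.
    """
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5           # (B, N, N) standard attention logits
    prior = torch.log(event_prior + 1e-6).unsqueeze(1)    # (B, 1, N), broadcast over queries
    # softmax(scores + log prior) is the renormalized product of likelihood and prior
    weights = F.softmax(scores + prior, dim=-1)
    return weights @ v

q = k = v = torch.randn(2, 16, 64)
prior = torch.rand(2, 16)
out = posterior_attention(q, k, v, prior)                  # (2, 16, 64)
```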
19. Chen Y, Tang Y, Huang J, Xiong S. Multi-scale Triplet Hashing for Medical Image Retrieval. Comput Biol Med 2023;155:106633. PMID: 36827786. DOI: 10.1016/j.compbiomed.2023.106633.
Abstract
For the medical image retrieval task, deep hashing algorithms are widely applied to large-scale datasets for auxiliary diagnosis because of the retrieval-efficiency advantage of hash codes. Most of these algorithms focus on feature learning while neglecting the discriminative areas of medical images and the hierarchical similarity between deep features and hash codes. In this paper, we tackle these dilemmas with a new Multi-scale Triplet Hashing (MTH) algorithm, which can leverage multi-scale information, convolutional self-attention, and hierarchical similarity to learn effective hash codes simultaneously. The MTH algorithm first designs a multi-scale DenseBlock module to learn multi-scale information from medical images. Meanwhile, a convolutional self-attention mechanism is developed to perform information interaction in the channel domain, which can effectively capture the discriminative areas of medical images. On top of the two paths, a novel loss function is proposed not only to preserve the category-level information of deep features and the semantic information of hash codes during learning, but also to capture the hierarchical similarity between deep features and hash codes. Extensive experiments on the Curated X-ray Dataset, Skin Cancer MNIST Dataset, and COVID-19 Radiography Dataset illustrate that the MTH algorithm can further enhance medical retrieval performance compared with other state-of-the-art medical image retrieval algorithms.
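The triplet-plus-hashing objective can be pictured as a margin-based triplet loss on continuous codes plus a quantization penalty that pushes outputs toward ±1 before binarization. The margin, penalty weight, and code length below are assumptions, and the paper's full loss additionally encodes category-level and hierarchical similarity terms.

```python
import torch
import torch.nn.functional as F

def triplet_hashing_loss(anchor, positive, negative, margin=0.5, quant_weight=0.1):
    """Triplet ranking loss on real-valued hash codes plus a quantization term.

    anchor/positive/negative: (B, L) tanh-activated network outputs in (-1, 1).
    The quantization term encourages codes to saturate so that sign() loses little.
    """
    triplet = F.triplet_margin_loss(anchor, positive, negative, margin=margin)
    quant = sum(((c.abs() - 1.0) ** 2).mean() for c in (anchor, positive, negative))
    return triplet + quant_weight * quant

def to_hash_codes(codes: torch.Tensor) -> torch.Tensor:
    """Binarize continuous codes to {-1, +1} for retrieval."""
    return torch.sign(codes)

a, p, n = (torch.tanh(torch.randn(8, 48)) for _ in range(3))
loss = triplet_hashing_loss(a, p, n)
binary = to_hash_codes(a)
```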
Affiliation(s)
- Yaxiong Chen
- School of Computer Science and Artificial Intelligence, Wuhan University of Technology, Wuhan 430070, China; Sanya Science and Education Innovation Park, Wuhan University of Technology, Sanya 572000, China; Wuhan University of Technology Chongqing Research Institute, Chongqing 401120, China
- Yibo Tang
- School of Computer Science and Artificial Intelligence, Wuhan University of Technology, Wuhan 430070, China
- Jinghao Huang
- School of Computer Science and Artificial Intelligence, Wuhan University of Technology, Wuhan 430070, China; Sanya Science and Education Innovation Park, Wuhan University of Technology, Sanya 572000, China
- Shengwu Xiong
- School of Computer Science and Artificial Intelligence, Wuhan University of Technology, Wuhan 430070, China; Sanya Science and Education Innovation Park, Wuhan University of Technology, Sanya 572000, China
20
|
A review of deep learning-based multiple-lesion recognition from medical images: classification, detection and segmentation. Comput Biol Med 2023; 157:106726. [PMID: 36924732 DOI: 10.1016/j.compbiomed.2023.106726] [Citation(s) in RCA: 31] [Impact Index Per Article: 15.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2022] [Revised: 02/07/2023] [Accepted: 02/27/2023] [Indexed: 03/05/2023]
Abstract
Deep learning-based methods have become the dominant methodology in medical image processing with the advancement of deep learning in natural image classification, detection, and segmentation. Deep learning-based approaches have proven to be quite effective for single-lesion recognition and segmentation. Multiple-lesion recognition is more difficult than single-lesion recognition because of the small variation between lesions or the wide range of lesions involved. Several studies have recently explored deep learning-based algorithms to address the multiple-lesion recognition challenge. This paper provides an in-depth overview and analysis of deep learning-based methods for multiple-lesion recognition developed in recent years, covering multiple-lesion recognition in diverse body areas as well as recognition of whole-body multiple diseases. We discuss the challenges that still persist in multiple-lesion recognition tasks by critically assessing these efforts. Finally, we outline open problems and potential future research directions, with the hope that this review will help researchers develop approaches that drive further advances.
Collapse
|
21
|
Malik H, Anees T, Naeem A, Naqvi RA, Loh WK. Blockchain-Federated and Deep-Learning-Based Ensembling of Capsule Network with Incremental Extreme Learning Machines for Classification of COVID-19 Using CT Scans. Bioengineering (Basel) 2023; 10:203. [PMID: 36829697 PMCID: PMC9952069 DOI: 10.3390/bioengineering10020203] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2023] [Revised: 01/30/2023] [Accepted: 02/01/2023] [Indexed: 02/09/2023] Open
Abstract
Due to the rapid rate of SARS-CoV-2 dissemination, an effective strategy must be employed to identify and isolate COVID-19 cases. Among the most significant obstacles researchers face in detecting COVID-19 are the rapid propagation of the virus and the dearth of trustworthy testing models, and this remains one of the most difficult problems for clinicians to deal with. The use of AI in image processing has made the formerly daunting challenge of detecting COVID-19 cases more manageable. A further practical problem is sharing data between hospitals while honoring the privacy concerns of the organizations involved. When training a global deep learning (DL) model, it is crucial to handle fundamental concerns such as user privacy and collaborative model development. For this study, a novel framework is designed that compiles information from five different databases (several hospitals) and builds a global model using blockchain-based federated learning (FL). The data are validated through blockchain technology (BCT), and FL trains the model on a global scale while maintaining the confidentiality of the organizations. The proposed framework is divided into three parts. First, we provide a data normalization method that can handle the heterogeneity of data collected from five different sources using several computed tomography (CT) scanners. Second, to categorize COVID-19 patients, we ensemble a capsule network (CapsNet) with incremental extreme learning machines (IELMs). Third, we provide a strategy for interactively training a global model using BCT and FL while maintaining anonymity. Extensive tests employing chest CT scans compared the classification performance of the proposed model with that of five DL algorithms for predicting COVID-19 while protecting data privacy for a variety of users. Our findings indicate improved effectiveness in identifying COVID-19 patients, with an accuracy of 98.99%. Thus, our model provides substantial aid to medical practitioners in the diagnosis of COVID-19.
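The blockchain validation step is beyond the scope of a short sketch, but the federated aggregation at the heart of such a framework is typically FedAvg-style weight averaging. The hedged Python sketch below shows size-weighted averaging of client `state_dict`s; the function name and interface are assumptions, not the authors' implementation.

```python
import copy

def federated_average(client_states, client_sizes):
    """Size-weighted average of client model state_dicts (FedAvg-style).

    client_states: list of PyTorch state_dicts with identical keys.
    client_sizes:  list of local dataset sizes, one per client.
    """
    total = float(sum(client_sizes))
    averaged = copy.deepcopy(client_states[0])
    for key in averaged:
        # Weight each client's parameters by its share of the total data.
        averaged[key] = sum(state[key].float() * (n / total)
                            for state, n in zip(client_states, client_sizes))
    return averaged
```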
Collapse
Affiliation(s)
- Hassaan Malik
- Department of Computer Science, University of Management and Technology, Lahore 54000, Pakistan
| | - Tayyaba Anees
- Department of Software Engineering, University of Management and Technology, Lahore 54000, Pakistan
| | - Ahmad Naeem
- Department of Computer Science, University of Management and Technology, Lahore 54000, Pakistan
| | - Rizwan Ali Naqvi
- Department of Unmanned Vehicle Engineering, Sejong University, Seoul 05006, Republic of Korea
| | - Woong-Kee Loh
- School of Computing, Gachon University, Seongnam 13120, Republic of Korea
| |
Collapse
|
22
|
Aslani S, Jacob J. Utilisation of deep learning for COVID-19 diagnosis. Clin Radiol 2023; 78:150-157. [PMID: 36639173 PMCID: PMC9831845 DOI: 10.1016/j.crad.2022.11.006] [Citation(s) in RCA: 27] [Impact Index Per Article: 13.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2022] [Revised: 11/21/2022] [Accepted: 11/22/2022] [Indexed: 01/12/2023]
Abstract
The COVID-19 pandemic that began in 2019 has resulted in millions of deaths worldwide. Over this period, the economic and healthcare consequences of acute COVID-19 infection in survivors have become apparent. During the course of the pandemic, computer analysis of medical images and data has been widely used by the medical research community. In particular, deep-learning methods, which are artificial intelligence (AI)-based approaches, have been frequently employed. This paper reviews deep-learning-based AI techniques for COVID-19 diagnosis using chest radiography and computed tomography. Thirty papers published from February 2020 to March 2022 that used two-dimensional (2D)/three-dimensional (3D) deep convolutional neural networks combined with transfer learning for COVID-19 detection were reviewed. The review describes how deep-learning methods detect COVID-19 and highlights several limitations of the proposed methods.
Collapse
Affiliation(s)
- S Aslani
- Centre for Medical Image Computing and Department of Respiratory Medicine, University College London, London, UK.
| | - J Jacob
- Centre for Medical Image Computing and Department of Respiratory Medicine, University College London, London, UK
| |
Collapse
|
23
|
Ding W, Abdel-Basset M, Hawash H, ELkomy OM. MT-nCov-Net: A Multitask Deep-Learning Framework for Efficient Diagnosis of COVID-19 Using Tomography Scans. IEEE TRANSACTIONS ON CYBERNETICS 2023; 53:1285-1298. [PMID: 34748510 DOI: 10.1109/tcyb.2021.3123173] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
The localization and segmentation of novel coronavirus disease 2019 (COVID-19) lesions from computed tomography (CT) scans are of great significance for developing an efficient computer-aided diagnosis system. Deep learning (DL) has emerged as one of the best choices for developing such a system. However, several challenges limit the efficiency of DL approaches, including data heterogeneity, considerable variety in the shape and size of the lesions, lesion imbalance, and scarce annotation. In this article, a novel multitask regression network for segmenting COVID-19 lesions is proposed to address these challenges. We name the framework MT-nCov-Net. We formulate lesion segmentation as a multitask shape regression problem that enables low-, intermediate-, and high-quality features to be shared across tasks. A multiscale feature learning (MFL) module is presented to capture multiscale semantic information, which helps to learn small and large lesion features efficiently while reducing the semantic gap between representations at different scales. In addition, a fine-grained lesion localization (FLL) module is introduced to detect infection lesions using an adaptive dual-attention mechanism. The generated location map and the fused multiscale representations are subsequently passed to the lesion regression (LR) module to segment the infection lesions. MT-nCov-Net learns complete lesion properties to accurately segment the COVID-19 lesion by regressing its shape. MT-nCov-Net is experimentally evaluated on two public multisource datasets, and the overall performance validates its superiority over current cutting-edge approaches and demonstrates its effectiveness in tackling the problems facing the diagnosis of COVID-19.
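The adaptive dual-attention mechanism is described only at a high level in the abstract; a common realisation of dual (channel plus spatial) attention is sketched below purely as a generic stand-in, with illustrative names, and should not be read as the FLL module itself.

```python
import torch
import torch.nn as nn

class DualAttention(nn.Module):
    """Generic channel + spatial attention block (illustrative stand-in)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel = nn.Sequential(                    # channel re-weighting
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )
        self.spatial = nn.Sequential(nn.Conv2d(2, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, x):
        x = x * self.channel(x)                          # emphasise informative channels
        pooled = torch.cat([x.mean(1, keepdim=True),
                            x.max(1, keepdim=True).values], dim=1)
        return x * self.spatial(pooled)                  # highlight lesion locations
```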
Collapse
|
24
|
Meng Y, Bridge J, Addison C, Wang M, Merritt C, Franks S, Mackey M, Messenger S, Sun R, Fitzmaurice T, McCann C, Li Q, Zhao Y, Zheng Y. Bilateral adaptive graph convolutional network on CT based Covid-19 diagnosis with uncertainty-aware consensus-assisted multiple instance learning. Med Image Anal 2023; 84:102722. [PMID: 36574737 PMCID: PMC9753459 DOI: 10.1016/j.media.2022.102722] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2022] [Revised: 10/17/2022] [Accepted: 12/02/2022] [Indexed: 12/23/2022]
Abstract
Coronavirus disease (COVID-19) has caused a worldwide pandemic, putting millions of people's health and lives in jeopardy. Detecting infected patients early on chest computed tomography (CT) is critical in combating COVID-19. Harnessing uncertainty-aware consensus-assisted multiple instance learning (UC-MIL), we propose to diagnose COVID-19 using a new bilateral adaptive graph-based (BA-GCN) model that can use both 2D and 3D discriminative information in 3D CT volumes with an arbitrary number of slices. Given the importance of lung segmentation for this task, we have created the largest manual annotation dataset so far, with 7,768 slices from COVID-19 patients, and used it to train a 2D segmentation model that segments the lungs from individual slices and masks them as the regions of interest for the subsequent analyses. We then used the UC-MIL model to estimate the uncertainty of each prediction and the consensus between multiple predictions on each CT slice, automatically selecting a fixed number of CT slices with reliable predictions for the subsequent model reasoning. Finally, we adaptively constructed a BA-GCN with vertices from different granularity levels (2D and 3D) to aggregate multi-level features for the final diagnosis, benefiting from the graph convolutional network's ability to model cross-granularity relationships. Experimental results on the three largest COVID-19 CT datasets demonstrated that our model can produce reliable and accurate COVID-19 predictions using CT volumes with any number of slices, outperforming existing approaches in terms of learning and generalisation ability. To promote reproducible research, we have made the datasets, including the manual annotations and cleaned CT dataset, as well as the implementation code, available at https://doi.org/10.5281/zenodo.6361963.
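The abstract describes selecting slices with reliable predictions via uncertainty and consensus. A hedged stand-in for that step is Monte-Carlo-dropout-based predictive entropy, sketched below; the function name and the choice of entropy as the uncertainty measure are assumptions rather than the UC-MIL formulation.

```python
import torch

@torch.no_grad()
def select_reliable_slices(model, slices, k=16, passes=8):
    # slices: (N, C, H, W) CT slices from one volume; model contains dropout layers.
    model.train()                      # keep dropout active for MC sampling
    probs = torch.stack([model(slices).softmax(dim=1) for _ in range(passes)])
    mean = probs.mean(dim=0)                                    # (N, num_classes)
    entropy = -(mean * mean.clamp_min(1e-8).log()).sum(dim=1)   # predictive entropy
    idx = entropy.argsort()[:k]        # keep the k most confident slices
    return slices[idx], idx
```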
Collapse
Affiliation(s)
- Yanda Meng
- Department of Eye and Vision Science, University of Liverpool, Liverpool, United Kingdom
| | - Joshua Bridge
- Department of Eye and Vision Science, University of Liverpool, Liverpool, United Kingdom
| | - Cliff Addison
- Advanced Research Computing, University of Liverpool, Liverpool, United Kingdom
| | - Manhui Wang
- Advanced Research Computing, University of Liverpool, Liverpool, United Kingdom
| | | | - Stu Franks
- Alces Flight Limited, Bicester, United Kingdom
| | - Maria Mackey
- Amazon Web Services, 60 Holborn Viaduct, London, United Kingdom
| | - Steve Messenger
- Amazon Web Services, 60 Holborn Viaduct, London, United Kingdom
| | - Renrong Sun
- Department of Radiology, Hubei Provincial Hospital of Integrated Chinese and Western Medicine, Hubei University of Chinese Medicine, Wuhan, China
| | - Thomas Fitzmaurice
- Adult Cystic Fibrosis Unit, Liverpool Heart and Chest Hospital NHS Foundation Trust, Liverpool, United Kingdom
| | - Caroline McCann
- Radiology, Liverpool Heart and Chest Hospital NHS Foundation Trust, United Kingdom
| | - Qiang Li
- The Affiliated People’s Hospital of Ningbo University, Ningbo, China
| | - Yitian Zhao
- The Affiliated People's Hospital of Ningbo University, Ningbo, China; Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Science, Ningbo, China.
| | - Yalin Zheng
- Department of Eye and Vision Science, University of Liverpool, Liverpool, United Kingdom; Liverpool Centre for Cardiovascular Science, University of Liverpool and Liverpool Heart & Chest Hospital, Liverpool, United Kingdom.
| |
Collapse
|
25
|
Khan A, Khan SH, Saif M, Batool A, Sohail A, Waleed Khan M. A Survey of Deep Learning Techniques for the Analysis of COVID-19 and their usability for Detecting Omicron. J EXP THEOR ARTIF IN 2023. [DOI: 10.1080/0952813x.2023.2165724] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/15/2023]
Affiliation(s)
- Asifullah Khan
- Pattern Recognition Lab, Department of Computer & Information Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad, Pakistan
- PIEAS Artificial Intelligence Center (PAIC), Pakistan Institute of Engineering & Applied Sciences, Islamabad, Pakistan
- Center for Mathematical Sciences, Pakistan Institute of Engineering & Applied Sciences, Islamabad, Pakistan
| | - Saddam Hussain Khan
- Pattern Recognition Lab, Department of Computer & Information Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad, Pakistan
- Department of Computer Systems Engineering, University of Engineering and Applied Sciences (UEAS), Swat, Pakistan
| | - Mahrukh Saif
- Pattern Recognition Lab, Department of Computer & Information Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad, Pakistan
| | - Asiya Batool
- Pattern Recognition Lab, Department of Computer & Information Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad, Pakistan
| | - Anabia Sohail
- Pattern Recognition Lab, Department of Computer & Information Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad, Pakistan
- Department of Computer Science, Faculty of Computing & Artificial Intelligence, Air University, Islamabad, Pakistan
| | - Muhammad Waleed Khan
- Pattern Recognition Lab, Department of Computer & Information Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad, Pakistan
- Department of Mechanical and Aerospace Engineering, Columbus, OH, USA
| |
Collapse
|
26
|
Deep SVDD and Transfer Learning for COVID-19 Diagnosis Using CT Images. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2023; 2023:6070970. [PMID: 36926185 PMCID: PMC10014155 DOI: 10.1155/2023/6070970] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/20/2022] [Revised: 01/25/2023] [Accepted: 02/06/2023] [Indexed: 03/09/2023]
Abstract
The novel coronavirus disease (COVID-19), which first appeared in Wuhan, China, has spread rapidly worldwide. Health systems in many countries have collapsed as a result of this pandemic, and hundreds of thousands of people have died from acute respiratory distress syndrome caused by the virus. Diagnosing COVID-19 in the early stages of infection is therefore critical in the fight against the disease, because it saves the patient's life and prevents further spread. In this study, we propose a novel approach based on transfer learning and deep support vector data description (DSVDD) to distinguish among COVID-19, non-COVID-19 pneumonia, and normal CT images. Our approach consists of three models, each of which classifies one specific category as normal and the others as anomalous. To our knowledge, this is the first study to use one-class DSVDD and transfer learning to diagnose lung disease. For the proposed approach, we used two scenarios: one with a pretrained VGG16 backbone and one with ResNet50. The proposed models were trained end to end on data gathered, with the assistance of an expert radiologist, from three internet-accessible sources, using three data split ratios. Based on training with 70%, 50%, and 30% of the data, the proposed VGG16 models achieved F1 scores of 0.8281, 0.9170, and 0.9294, respectively, while the proposed ResNet50 models achieved 0.9109, 0.9188, and 0.9333.
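The one-class objective underlying Deep SVDD is well established: embeddings are pulled toward a fixed hypersphere centre, and distance to that centre serves as the anomaly score. A minimal sketch follows; the function names are illustrative, and the centre is assumed to be the mean embedding from an initial pass over the training data, as is common practice, rather than a detail taken from this paper.

```python
import torch

def dsvdd_loss(features, center):
    # features: (B, D) embeddings from a pretrained backbone (e.g. VGG16/ResNet50);
    # center:   (D,) fixed hypersphere centre (e.g. mean embedding of the training set).
    return ((features - center) ** 2).sum(dim=1).mean()

def anomaly_score(features, center):
    # Larger distance from the centre = more anomalous (e.g. non-"normal" class).
    return ((features - center) ** 2).sum(dim=1)
```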
Collapse
|
27
|
Detection of COVID-19 Case from Chest CT Images Using Deformable Deep Convolutional Neural Network. JOURNAL OF HEALTHCARE ENGINEERING 2023; 2023:4301745. [PMID: 36844950 PMCID: PMC9949952 DOI: 10.1155/2023/4301745] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 05/24/2022] [Revised: 06/14/2022] [Accepted: 01/24/2023] [Indexed: 02/18/2023]
Abstract
The infectious coronavirus disease (COVID-19) has become a great threat to global human health. Timely and rapid detection of COVID-19 cases is crucial for controlling its spread through isolation measures as well as for proper treatment. Although the real-time reverse transcription-polymerase chain reaction (RT-PCR) test is a widely used technique for detecting COVID-19 infection, recent research suggests chest computed tomography (CT)-based screening as an effective substitute when time or RT-PCR availability is limited. Consequently, deep learning-based COVID-19 detection from chest CT images is gaining momentum. Furthermore, visual analysis of the data enhances the opportunity to maximize prediction performance in this big-data and deep-learning setting. In this article, we propose two separate deformable deep networks, converted from a conventional convolutional neural network (CNN) and the state-of-the-art ResNet-50, to detect COVID-19 cases from chest CT images. The impact of the deformable concept is assessed through a comparative analysis between the deformable and standard models, and the deformable models are found to give better predictions than their standard counterparts. Furthermore, the proposed deformable ResNet-50 model outperforms the proposed deformable CNN model. The gradient class activation mapping (Grad-CAM) technique is used to visualize and check the localization of the targeted regions at the final convolutional layer, and the localization is found to be excellent. A total of 2,481 chest CT images were used to evaluate the proposed models, with a random train-validation-test split of 80:10:10. The proposed deformable ResNet-50 model achieved a training accuracy of 99.5% and a test accuracy of 97.6%, with a specificity of 98.5% and a sensitivity of 96.5%, which are satisfactory compared with related work. The comprehensive discussion demonstrates that the proposed deformable ResNet-50-based COVID-19 detection technique can be useful for clinical applications.
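A deformable convolution layer of the kind this paper builds on is available off the shelf in torchvision. The sketch below wraps `torchvision.ops.DeformConv2d` with an offset-predicting convolution; it illustrates the mechanism only and is not the authors' network.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableBlock(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        # A plain conv predicts 2 offsets (dx, dy) per kernel sampling location.
        self.offset = nn.Conv2d(in_ch, 2 * k * k, kernel_size=k, padding=k // 2)
        self.deform = DeformConv2d(in_ch, out_ch, kernel_size=k, padding=k // 2)
        self.bn = nn.BatchNorm2d(out_ch)

    def forward(self, x):
        offsets = self.offset(x)                 # learned sampling offsets
        return torch.relu(self.bn(self.deform(x, offsets)))
```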
Collapse
|
28
|
Takateyama Y, Haruishi T, Hashimoto M, Otake Y, Akashi T, Shimizu A. Attention induction for a CT volume classification of COVID-19. Int J Comput Assist Radiol Surg 2023; 18:289-301. [PMID: 36251150 PMCID: PMC9574825 DOI: 10.1007/s11548-022-02769-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2022] [Accepted: 09/29/2022] [Indexed: 02/03/2023]
Abstract
PURPOSE This study proposes a method to draw attention toward the specific radiological findings of coronavirus disease 2019 (COVID-19) in CT images, such as bilaterality of ground glass opacity (GGO) and/or consolidation, in order to improve the classification accuracy of input CT images. METHODS We propose an induction mask that combines a similarity and a bilateral mask. A similarity mask guides attention to regions with similar appearances, and a bilateral mask induces attention to the opposite side of the lung to capture bilaterally distributed lesions. An induction mask for pleural effusion is also proposed in this study. ResNet18 with nonlocal blocks was trained by minimizing the loss function defined by the induction mask. RESULTS The four-class classification accuracy of the CT images of 1504 cases was 0.6443, where class 1 was the typical appearance of COVID-19 pneumonia, class 2 was the indeterminate appearance of COVID-19 pneumonia, class 3 was the atypical appearance of COVID-19 pneumonia, and class 4 was negative for pneumonia. The four classes were divided into two subgroups. The accuracy of COVID-19 and pneumonia classifications was evaluated, which were 0.8205 and 0.8604, respectively. The accuracy of the four-class and COVID-19 classifications improved when attention was paid to pleural effusion. CONCLUSION The proposed attention induction method was effective for the classification of CT images of COVID-19 patients. Improvement of the classification accuracy of class 3 by focusing on features specific to the class remains a topic for future work.
Collapse
Affiliation(s)
- Yusuke Takateyama
- Institute of Engineering, Tokyo University of Agriculture and Technology, Koganei, Tokyo, Japan
| | - Takahito Haruishi
- Institute of Engineering, Tokyo University of Agriculture and Technology, Koganei, Tokyo, Japan
| | - Masahiro Hashimoto
- Department of Radiology, Keio University School of Medicine, Shinjuku-ku, Tokyo, Japan
| | - Yoshito Otake
- Graduate School of Science and Technology, Nara Institute of Science and Technology, Ikoma-shi, Nara, Japan
| | - Toshiaki Akashi
- Department of Radiology, Juntendo University, Bunkyo-ku, Tokyo, Japan
| | - Akinobu Shimizu
- Institute of Engineering, Tokyo University of Agriculture and Technology, Koganei, Tokyo, Japan
| |
Collapse
|
29
|
Deep Learning for Detecting COVID-19 Using Medical Images. BIOENGINEERING (BASEL, SWITZERLAND) 2022; 10:bioengineering10010019. [PMID: 36671590 PMCID: PMC9854504 DOI: 10.3390/bioengineering10010019] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/29/2022] [Accepted: 12/19/2022] [Indexed: 12/24/2022]
Abstract
The global spread of COVID-19 (the disease caused by SARS-CoV-2) is a major international public health crisis [...].
Collapse
|
30
|
Li J, Wang S, Hu S, Sun Y, Wang Y, Xu P, Ye J. Class-Aware Attention Network for infectious keratitis diagnosis using corneal photographs. Comput Biol Med 2022; 151:106301. [PMID: 36403354 DOI: 10.1016/j.compbiomed.2022.106301] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2022] [Revised: 10/18/2022] [Accepted: 11/06/2022] [Indexed: 11/11/2022]
Abstract
Infectious keratitis is one of the common ophthalmic diseases and also one of the main blinding eye diseases in China; rapid and accurate diagnosis and treatment of infectious keratitis are therefore urgently needed to prevent progression of the disease and limit the degree of corneal injury. Unfortunately, the accuracy of traditional manual diagnosis is usually unsatisfactory because the visual features are hard to distinguish. In this paper, we propose a novel end-to-end fully convolutional network, named Class-Aware Attention Network (CAA-Net), for automatically diagnosing infectious keratitis (normal, viral keratitis, fungal keratitis, and bacterial keratitis) using corneal photographs. In CAA-Net, a class-aware classification module is first trained to learn class-related discriminative features using separate branches for each class. Then, the learned class-aware discriminative features are fed into the main branch and fused with other feature maps using two attention strategies to improve the final multi-class classification performance. For the experiments, we built a new corneal photograph dataset with 1,886 images from 519 patients and conducted comprehensive experiments to verify the effectiveness of our proposed method. The code is available at https://github.com/SWF-hao/CAA-Net_Pytorch.
Collapse
Affiliation(s)
- Jinhao Li
- School of Mechanical, Electrical and Information Engineering, Shandong University, Weihai, 264209, Shandong, China.
| | - Shuai Wang
- School of Mechanical, Electrical and Information Engineering, Shandong University, Weihai, 264209, Shandong, China; Suzhou Research Institute of Shandong University, Suzhou, 215123, Jiangsu, China.
| | - Shaodan Hu
- Eye Center, Second Affiliated Hospital of Zhejiang University, School of Medicine, Hangzhou, 310009, Zhejiang, China.
| | - Yiming Sun
- Eye Center, Second Affiliated Hospital of Zhejiang University, School of Medicine, Hangzhou, 310009, Zhejiang, China.
| | - Yaqi Wang
- College of Media Engineering, Communication University of Zhejiang, Hangzhou, 310018, Zhejiang, China.
| | - Peifang Xu
- Eye Center, Second Affiliated Hospital of Zhejiang University, School of Medicine, Hangzhou, 310009, Zhejiang, China.
| | - Juan Ye
- Eye Center, Second Affiliated Hospital of Zhejiang University, School of Medicine, Hangzhou, 310009, Zhejiang, China.
| |
Collapse
|
31
|
Lasker A, Obaidullah SM, Chakraborty C, Roy K. Application of Machine Learning and Deep Learning Techniques for COVID-19 Screening Using Radiological Imaging: A Comprehensive Review. SN COMPUTER SCIENCE 2022; 4:65. [PMID: 36467853 PMCID: PMC9702883 DOI: 10.1007/s42979-022-01464-8] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/22/2022] [Accepted: 10/18/2022] [Indexed: 11/26/2022]
Abstract
The lung, one of the most important organs in the human body, is often affected by SARS-type diseases, among which COVID-19 has been the most fatal in recent times. SARS-CoV-2 caused a pandemic that spread rapidly through communities, causing respiratory problems. In this situation, radiological imaging-based screening [mostly chest X-ray and computed tomography (CT) modalities] has been used for rapid, non-invasive screening of the disease. Given the scarcity of physicians, chest specialists, and expert doctors, technology-enabled disease-screening techniques have been developed by several researchers with the help of artificial intelligence and machine learning (AI/ML). Researchers have introduced numerous AI/ML/DL (deep learning) algorithms for computer-assisted detection of COVID-19 using chest X-ray and CT images. In this paper, a comprehensive review summarizes work on the application of AI/ML/DL to the diagnostic prediction of COVID-19, mainly using X-ray and CT images. Following the PRISMA guidelines, a total of 265 articles were selected from 1,715 articles published up to the third quarter of 2021. Furthermore, this review summarizes and compares a variety of ML/DL techniques, datasets, and their results on X-ray and CT imaging. A detailed discussion is provided on the novelty of the published works, along with their advantages and limitations.
Collapse
Affiliation(s)
- Asifuzzaman Lasker
- Department of Computer Science & Engineering, Aliah University, Kolkata, India
| | - Sk Md Obaidullah
- Department of Computer Science & Engineering, Aliah University, Kolkata, India
| | - Chandan Chakraborty
- Department of Computer Science & Engineering, National Institute of Technical Teachers’ Training & Research Kolkata, Kolkata, India
| | - Kaushik Roy
- Department of Computer Science, West Bengal State University, Barasat, India
| |
Collapse
|
32
|
Lee KW, Chin RKY. Diverse COVID-19 CT Image-to-Image Translation with Stacked Residual Dropout. Bioengineering (Basel) 2022; 9:698. [PMID: 36421099 PMCID: PMC9688018 DOI: 10.3390/bioengineering9110698] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2022] [Revised: 10/31/2022] [Accepted: 11/13/2022] [Indexed: 01/11/2024] Open
Abstract
Machine learning models are renowned for their high dependency on a large corpus of data when solving real-world problems, including the recent COVID-19 pandemic. In practice, data acquisition is an onerous process, especially in medical applications, due to the lack of data for newly emerged diseases and to privacy concerns. This study introduces a data synthesis framework (sRD-GAN) that generates synthetic COVID-19 CT images using a novel stacked-residual dropout mechanism (sRD). sRD-GAN aims to alleviate data paucity by generating synthetic lung medical images that contain precise radiographic annotations. The sRD mechanism is designed using a regularization-based strategy to facilitate perceptually significant instance-level diversity without content-style attribute disentanglement. Extensive experiments show that sRD-GAN can generate COVID-19 CT images of exceptional perceptual realism, as examined by an experienced radiologist, with an outstanding Fréchet Inception Distance (FID) of 58.68 and a Learned Perceptual Image Patch Similarity (LPIPS) of 0.1370 on the test set. In a benchmarking experiment, sRD-GAN shows superior performance compared to GAN, CycleGAN, and one-to-one CycleGAN. The encouraging results achieved by sRD-GAN in different clinical cases, such as community-acquired pneumonia CT images and COVID-19 X-ray images, suggest that the proposed method can be easily extended to other similar image synthesis problems.
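The stacked-residual dropout mechanism is described only at a high level here; one plausible reading is a residual generator block whose residual branch passes through spatial dropout, so the identity path preserves content while the stochastic branch injects instance-level diversity. The block below is a hedged sketch under that assumption, not the sRD-GAN code.

```python
import torch.nn as nn

class ResidualDropoutBlock(nn.Module):
    def __init__(self, channels, p=0.5):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Dropout2d(p),                     # stochastic residual branch
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.InstanceNorm2d(channels),
        )

    def forward(self, x):
        # Identity path keeps content; dropped residual features add diversity.
        return x + self.body(x)
```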
Collapse
Affiliation(s)
| | - Renee Ka Yin Chin
- Faculty of Engineering, Universiti Malaysia Sabah, Kota Kinabalu 88400, Malaysia
| |
Collapse
|
33
|
Diwakar M, Singh P, Swarup C, Bajal E, Jindal M, Ravi V, Singh KU, Singh T. Noise Suppression and Edge Preservation for Low-Dose COVID-19 CT Images Using NLM and Method Noise Thresholding in Shearlet Domain. Diagnostics (Basel) 2022; 12:diagnostics12112766. [PMID: 36428826 PMCID: PMC9689094 DOI: 10.3390/diagnostics12112766] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2022] [Revised: 11/09/2022] [Accepted: 11/09/2022] [Indexed: 11/16/2022] Open
Abstract
In the COVID-19 era, it may be possible to detect COVID-19 by detecting lesions in CT scans, i.e., ground-glass opacity, consolidation, nodules, reticulation, or thickened interlobular septa, together with their distribution, but this becomes difficult at the early stages because lesions are still small and the use of high-dose X-ray acquisition is restricted. A patient who may or may not be infected with coronavirus could be imaged at a high radiation dose, but this carries additional risk. Consequently, producing CT scans with low-dose X-rays and then applying a rigorous denoising algorithm is the best way to protect patients from side effects while still detecting coronavirus involvement early. Hence, this paper proposes a denoising scheme for noisy COVID-19 CT images that combines an NLM filter with method-noise thresholding in the shearlet domain, so that low-dose COVID-19 CT images can be used for further analysis. The results and comparative analysis show that, in most cases, the proposed method gives better outcomes than existing ones.
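The general recipe (NLM denoising, then recovering detail from the method noise by transform-domain thresholding) can be sketched with off-the-shelf tools. In the sketch below, a wavelet transform from PyWavelets stands in for the shearlet transform used in the paper, and the universal threshold is an illustrative choice; none of this is the authors' implementation.

```python
import numpy as np
import pywt
from skimage.restoration import denoise_nl_means, estimate_sigma

def denoise_ct_slice(img):
    # img: 2D float array (a single low-dose CT slice, grayscale).
    sigma = float(estimate_sigma(img))
    nlm = denoise_nl_means(img, h=1.15 * sigma, sigma=sigma,
                           patch_size=5, patch_distance=6, fast_mode=True)
    method_noise = img - nlm                      # detail removed by NLM
    coeffs = pywt.wavedec2(method_noise, 'db4', level=2)
    thr = sigma * np.sqrt(2 * np.log(method_noise.size))   # universal threshold
    kept = [coeffs[0]] + [
        tuple(pywt.threshold(c, thr, mode='soft') for c in level)
        for level in coeffs[1:]
    ]
    recovered = pywt.waverec2(kept, 'db4')
    # Add the salvaged edge detail back onto the NLM result.
    return nlm + recovered[:img.shape[0], :img.shape[1]]
```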
Collapse
Affiliation(s)
- Manoj Diwakar
- Computer Science and Engineering Department, Graphic Era Deemed to be University, Dehradun 248007, India
| | - Prabhishek Singh
- School of Computer Science Engineering and Technology, Bennett University, Greater Noida 201310, India
| | - Chetan Swarup
- Department of Basic Science, College of Science and Theoretical Studies, Saudi Electronic University, Riyadh-Male Campus, Riyadh 13316, Saudi Arabia
| | - Eshan Bajal
- Department of Computer Science and Engineering, Amity School of Engineering and Technology, Amity University, Noida 201303, India
| | - Muskan Jindal
- Department of Computer Science and Engineering, Amity School of Engineering and Technology, Amity University, Noida 201303, India
| | - Vinayakumar Ravi
- Center for Artificial Intelligence, Prince Mohammad Bin Fahd University, Khobar 34754, Saudi Arabia
| | - Kamred Udham Singh
- Department of Computer Science and Information Engineering, National Cheng Kung University, Tainan 701, Taiwan
| | - Teekam Singh
- School of Computer Science, University of Petroleum and Energy Studies, Dehradun 248007, India
| |
Collapse
|
34
|
Roth HR, Xu Z, Tor-Díez C, Sanchez Jacob R, Zember J, Molto J, Li W, Xu S, Turkbey B, Turkbey E, Yang D, Harouni A, Rieke N, Hu S, Isensee F, Tang C, Yu Q, Sölter J, Zheng T, Liauchuk V, Zhou Z, Moltz JH, Oliveira B, Xia Y, Maier-Hein KH, Li Q, Husch A, Zhang L, Kovalev V, Kang L, Hering A, Vilaça JL, Flores M, Xu D, Wood B, Linguraru MG. Rapid artificial intelligence solutions in a pandemic-The COVID-19-20 Lung CT Lesion Segmentation Challenge. Med Image Anal 2022; 82:102605. [PMID: 36156419 PMCID: PMC9444848 DOI: 10.1016/j.media.2022.102605] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2021] [Revised: 07/01/2022] [Accepted: 08/25/2022] [Indexed: 11/30/2022]
Abstract
Artificial intelligence (AI) methods for the automatic detection and quantification of COVID-19 lesions in chest computed tomography (CT) might play an important role in the monitoring and management of the disease. We organized an international challenge and competition for the development and comparison of AI algorithms for this task, supported with public data and state-of-the-art benchmark methods. Board-certified radiologists annotated 295 public images from two sources (A and B) for algorithm training (n=199, source A), validation (n=50, source A) and testing (n=23, source A; n=23, source B). There were 1,096 registered teams, of which 225 and 98 completed the validation and testing phases, respectively. The challenge showed that AI models could be rapidly designed by diverse teams with the potential to measure disease or facilitate timely and patient-specific interventions. This paper provides an overview and the major outcomes of the COVID-19 Lung CT Lesion Segmentation Challenge - 2020.
Collapse
Affiliation(s)
- Holger R Roth
- NVIDIA, Bethesda, MD, USA; Santa Clara, CA, USA; Munich, Germany.
| | - Ziyue Xu
- NVIDIA, Bethesda, MD, USA; Santa Clara, CA, USA; Munich, Germany
| | - Carlos Tor-Díez
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, WA, DC, USA
| | - Ramon Sanchez Jacob
- Division of Diagnostic Imaging and Radiology, Children's National Hospital, WA,DC, USA
| | - Jonathan Zember
- Division of Diagnostic Imaging and Radiology, Children's National Hospital, WA,DC, USA
| | - Jose Molto
- Division of Diagnostic Imaging and Radiology, Children's National Hospital, WA,DC, USA
| | - Wenqi Li
- NVIDIA, Bethesda, MD, USA; Santa Clara, CA, USA; Munich, Germany
| | - Sheng Xu
- Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD, USA
| | - Baris Turkbey
- Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD, USA
| | - Evrim Turkbey
- Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD, USA
| | - Dong Yang
- NVIDIA, Bethesda, MD, USA; Santa Clara, CA, USA; Munich, Germany
| | - Ahmed Harouni
- NVIDIA, Bethesda, MD, USA; Santa Clara, CA, USA; Munich, Germany
| | - Nicola Rieke
- NVIDIA, Bethesda, MD, USA; Santa Clara, CA, USA; Munich, Germany
| | - Shishuai Hu
- School of Computer Science and Engineering, Northwestern Polytechnical University, China
| | - Fabian Isensee
- Applied Computer Vision Lab, Helmholtz Imaging , Heidelberg, Germany; Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | | | - Qinji Yu
- Shanghai Jiao Tong University, China
| | - Jan Sölter
- Luxembourg Centre for Systems Biomedicine, University of Luxembourg, Luxembourg
| | - Tong Zheng
- School of Informatics, Nagoya University, Japan
| | - Vitali Liauchuk
- Biomedical Image Analysis Department, United Institute of Informatics Problems, Belarus
| | - Ziqi Zhou
- Guangdong Key Laboratory of Intelligent Information Processing, Shenzhen University, China
| | | | - Bruno Oliveira
- Life and Health Sciences Research Institute (ICVS), School of Medicine, University of Minho, Braga, Portugal; ICVS/3B's - PT Government Associate Laboratory, Braga/Guimarães, Portugal; Algoritmi Center, School of Engineering, University of Minho, Guimarães, Portugal; 2Ai - School of Technology, IPCA, Barcelos, Portugal
| | - Yong Xia
- School of Computer Science and Engineering, Northwestern Polytechnical University, China
| | - Klaus H Maier-Hein
- Pattern Analysis and Learning Group, Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany
| | - Qikai Li
- Shanghai Jiao Tong University, China
| | - Andreas Husch
- Fraunhofer Institute for Digital Medicine MEVIS, Lübeck, Germany
| | | | - Vassili Kovalev
- Biomedical Image Analysis Department, United Institute of Informatics Problems, Belarus
| | - Li Kang
- Guangdong Key Laboratory of Intelligent Information Processing, Shenzhen University, China
| | - Alessa Hering
- Fraunhofer Institute for Digital Medicine MEVIS, Lübeck, Germany
| | - João L Vilaça
- 2Ai - School of Technology, IPCA, Barcelos, Portugal
| | - Mona Flores
- NVIDIA, Bethesda, MD, USA; Santa Clara, CA, USA; Munich, Germany
| | - Daguang Xu
- NVIDIA, Bethesda, MD, USA; Santa Clara, CA, USA; Munich, Germany
| | - Bradford Wood
- Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD, USA
| | - Marius George Linguraru
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, WA, DC, USA; School of Medicine and Health Sciences, George Washington University, WA, DC, USA
| |
Collapse
|
35
|
Jalali Moghaddam M, Ghavipour M. Towards smart diagnostic methods for COVID-19: Review of deep learning for medical imaging. IPEM-TRANSLATION 2022; 3:100008. [PMID: 36312890 PMCID: PMC9597575 DOI: 10.1016/j.ipemt.2022.100008] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/01/2022] [Revised: 10/20/2022] [Accepted: 10/24/2022] [Indexed: 11/08/2022]
Abstract
The infectious disease known as COVID-19 has spread dramatically all over the world since December 2019. The fast diagnosis and isolation of infected patients are key factors in slowing down the spread of this virus and better management of the pandemic. Although the CT and X-ray modalities are commonly used for the diagnosis of COVID-19, identifying COVID-19 patients from medical images is a time-consuming and error-prone task. Artificial intelligence has shown to have great potential to speed up and optimize the prognosis and diagnosis process of COVID-19. Herein, we review publications on the application of deep learning (DL) techniques for diagnostics of patients with COVID-19 using CT and X-ray chest images for a period from January 2020 to October 2021. Our review focuses solely on peer-reviewed, well-documented articles. It provides a comprehensive summary of the technical details of models developed in these articles and discusses the challenges in the smart diagnosis of COVID-19 using DL techniques. Based on these challenges, it seems that the effectiveness of the developed models in clinical use needs to be further investigated. This review provides some recommendations to help researchers develop more accurate prediction models.
Collapse
Affiliation(s)
- Marjan Jalali Moghaddam
- Department of Computer Engineering and Information Technology, Amirkabir University of Technology, Tehran, Iran
| | - Mina Ghavipour
- Department of Computer Engineering and Information Technology, Amirkabir University of Technology, Tehran, Iran
| |
Collapse
|
36
|
Wang J, Ji X, Zhao M, Wen Y, She Y, Deng J, Chen C, Qian D, Lu H, Zhao D. Size-adaptive mediastinal multilesion detection in chest CT images via deep learning and a benchmark dataset. Med Phys 2022; 49:7222-7236. [PMID: 35689486 DOI: 10.1002/mp.15804] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2021] [Revised: 05/12/2022] [Accepted: 06/03/2022] [Indexed: 12/13/2022] Open
Abstract
PURPOSE Many deep learning methods have been developed for pulmonary lesion detection in chest computed tomography (CT) images. However, these methods generally target one particular lesion type, namely pulmonary nodules. In this work, we develop and evaluate a novel deep learning method for a more challenging task: detecting various benign and malignant mediastinal lesions with wide variations in size, shape, intensity, and location in chest CT images. METHODS Our method for mediastinal lesion detection contains two main stages: (a) size-adaptive lesion candidate detection followed by (b) false-positive (FP) reduction and benign-malignant classification. For candidate detection, an anchor-free, one-stage detector, namely 3D-CenterNet, is designed to locate suspicious regions (i.e., candidates of various sizes) within the mediastinum. Then, a 3D-SEResNet-based classifier is used to differentiate FPs, benign lesions, and malignant lesions among the candidates. RESULTS We evaluate the proposed method by conducting five-fold cross-validation on a relatively large-scale dataset, consisting of data collected from 1,136 patients at a grade A tertiary hospital. The method achieves sensitivity scores of 84.3% ± 1.9%, 90.2% ± 1.4%, 93.2% ± 0.8%, and 93.9% ± 1.1%, respectively, in finding all benign and malignant lesions at 1/8, 1/4, 1/2, and 1 FPs per scan, and the accuracy of benign-malignant classification reaches up to 78.7% ± 2.5%. CONCLUSIONS The proposed method can effectively detect mediastinal lesions of various sizes, shapes, and locations in chest CT images. It can be integrated into most existing pulmonary lesion detection systems to promote their clinical application, and it can be readily extended to other similar 3D lesion detection tasks.
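The classifier stage builds on squeeze-and-excitation; a minimal 3D SE block of the kind a 3D-SEResNet would use is sketched below for reference. This is the generic SE formulation, not the authors' code.

```python
import torch.nn as nn

class SEBlock3D(nn.Module):
    """Squeeze-and-excitation for 3D feature maps (generic formulation)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)          # squeeze: global context per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c = x.shape[:2]
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1, 1)
        return x * w                                  # excite: re-weight channels
```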
Collapse
Affiliation(s)
- Jun Wang
- School of Computer and Computing Science, Zhejiang University City College, Hangzhou, China
| | - Xiawei Ji
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China
| | - Mengmeng Zhao
- Department of Thoracic Surgery, Shanghai Pulmonary Hospital, Tongji University School of Medicine, Shanghai, China
| | - Yaofeng Wen
- Lanhui Medical Technology Co., Ltd, Shanghai, China
| | - Yunlang She
- Department of Thoracic Surgery, Shanghai Pulmonary Hospital, Tongji University School of Medicine, Shanghai, China
| | - Jiajun Deng
- Department of Thoracic Surgery, Shanghai Pulmonary Hospital, Tongji University School of Medicine, Shanghai, China
| | - Chang Chen
- Department of Thoracic Surgery, Shanghai Pulmonary Hospital, Tongji University School of Medicine, Shanghai, China
| | - Dahong Qian
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Hongbing Lu
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China
| | - Deping Zhao
- Department of Thoracic Surgery, Shanghai Pulmonary Hospital, Tongji University School of Medicine, Shanghai, China
| |
Collapse
|
37
|
Peng Y, Zhang T, Guo Y. Cov-TransNet: Dual branch fusion network with transformer for COVID-19 infection segmentation. Biomed Signal Process Control 2022; 80:104366. [PMCID: PMC9671472 DOI: 10.1016/j.bspc.2022.104366] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2022] [Revised: 09/06/2022] [Accepted: 10/30/2022] [Indexed: 11/09/2022]
Abstract
Segmentation of COVID-19 infection is a challenging task due to the blurred boundaries and low contrast between infected and non-infected areas in COVID-19 CT images, especially for small infection regions. COV-TransNet is presented in this paper to achieve high-precision segmentation of COVID-19 infection regions. The proposed segmentation network is composed of an auxiliary branch and a backbone branch. The auxiliary branch adopts a transformer to provide global information, helping the convolutional layers in the backbone branch learn specific local features better. A multi-scale feature attention module is introduced to capture contextual information and adaptively enhance feature representations. Specifically, a high internal resolution is maintained during the attention calculation. Moreover, a feature activation module effectively reduces the loss of valid information during sampling. The proposed network can take full advantage of features at different depths and scales to achieve high sensitivity in identifying lesions of varied sizes and locations. We experiment on several datasets for the COVID-19 lesion segmentation task, including COVID-19-CT-Seg, UESTC-COVID-19, MosMedData and COVID-19-MedSeg. Comprehensive results demonstrate that COV-TransNet outperforms existing state-of-the-art segmentation methods and achieves better segmentation performance for multi-scale lesions.
Collapse
|
38
|
Sherwani MK, Marzullo A, De Momi E, Calimeri F. Lesion segmentation in lung CT scans using unsupervised adversarial learning. Med Biol Eng Comput 2022; 60:3203-3215. [PMID: 36125656 PMCID: PMC9486778 DOI: 10.1007/s11517-022-02651-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2021] [Accepted: 07/28/2022] [Indexed: 12/01/2022]
Abstract
Lesion segmentation in medical images is difficult yet crucial for proper diagnosis and treatment. Identifying lesions in medical images is costly and time-consuming and requires highly specialized knowledge. For this reason, supervised and semi-supervised learning techniques have been developed. Nevertheless, the lack of annotated data, which is common in medical imaging, is an issue; in this context, interesting approaches can use unsupervised learning to accurately distinguish between healthy tissues and lesions, training the network without annotations. In this work, an unsupervised learning technique is proposed to automatically segment coronavirus disease 2019 (COVID-19) lesions on 2D axial CT lung slices. The proposed approach uses image translation to generate healthy lung images from infected lung images without the need for lesion annotations. Attention masks are used to further improve the quality of the segmentation. Experiments show the capability of the proposed approach to segment the lesions, and it outperforms a range of unsupervised lesion detection approaches. The average results on the test dataset for the metrics Dice Score, Sensitivity, Specificity, Structure Measure, Enhanced-Alignment Measure, and Mean Absolute Error are 0.695, 0.694, 0.961, 0.791, 0.875, and 0.082, respectively. The achieved results are promising compared with the state of the art and could constitute a valuable tool for future developments.
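Once a pseudo-healthy translation of an infected slice is available, the segmentation step essentially reduces to thresholding the masked residual between the two images. The sketch below shows that step under our own assumptions (the names and threshold value are illustrative); it is not the authors' pipeline, which additionally uses attention masks.

```python
import numpy as np

def lesion_mask_from_translation(infected, pseudo_healthy, lung_mask, thr=0.15):
    # infected, pseudo_healthy: 2D slices scaled to [0, 1];
    # pseudo_healthy is the generator's "healthy" translation of `infected`;
    # lung_mask restricts the comparison to lung tissue.
    residual = np.abs(infected - pseudo_healthy) * lung_mask
    return (residual > thr).astype(np.uint8)       # 1 = candidate lesion pixel
```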
Collapse
Affiliation(s)
- Moiz Khan Sherwani
- Department of Mathematics and Computer Science, University of Calabria, Rende, Italy.
| | - Aldo Marzullo
- Department of Mathematics and Computer Science, University of Calabria, Rende, Italy
| | - Elena De Momi
- Department of Electronics, Information and Bioengineering (DEIB), Politecnico di Milano, Milan, Italy
| | - Francesco Calimeri
- Department of Mathematics and Computer Science, University of Calabria, Rende, Italy
| |
Collapse
|
39
|
Doraiswami PR, Sarveshwaran V, Swamidason ITJ, Sorna SCD. Jaya-tunicate swarm algorithm based generative adversarial network for COVID-19 prediction with chest computed tomography images. CONCURRENCY AND COMPUTATION : PRACTICE & EXPERIENCE 2022; 34:e7211. [PMID: 35945987 PMCID: PMC9353441 DOI: 10.1002/cpe.7211] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/11/2021] [Revised: 03/30/2022] [Accepted: 05/06/2022] [Indexed: 06/15/2023]
Abstract
A novel coronavirus (COVID-19) emerged as a respiratory syndrome in recent years. Chest computed tomography scanning is a significant technology for monitoring and predicting COVID-19. Predicting COVID-19 in patients at an early stage remains an open challenge for the research community. Therefore, an effective prediction mechanism named the Jaya-tunicate swarm algorithm driven generative adversarial network (Jaya-TSA with GAN) is proposed in this research to identify patients with COVID-19 infection. The developed Jaya-TSA is the incorporation of the Jaya algorithm with the tunicate swarm algorithm (TSA). Lung lobes are segmented using Bayesian fuzzy clustering, which effectively finds the boundary regions of the lobes. Based on the extracted features, COVID-19 prediction is accomplished using a GAN. The optimal solution is obtained by training the GAN with the proposed Jaya-TSA with respect to a fitness measure. The dimensionality of the features is reduced by extracting optimal features, which increases the speed of the training process. Moreover, the developed Jaya-TSA-based GAN attained specificity, accuracy, and sensitivity of 0.8857, 0.8727, and 0.85, respectively, with varying amounts of training data.
Collapse
Affiliation(s)
| | - Velliangiri Sarveshwaran
- Department of Computational Intelligence, SRM Institute of Science and Technology, Kattankulathur Campus, Chennai, India
| | | | | |
Collapse
|
40
|
Sadik F, Dastider AG, Subah MR, Mahmud T, Fattah SA. A dual-stage deep convolutional neural network for automatic diagnosis of COVID-19 and pneumonia from chest CT images. Comput Biol Med 2022; 149:105806. [PMID: 35994932 PMCID: PMC9295386 DOI: 10.1016/j.compbiomed.2022.105806] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2022] [Revised: 06/05/2022] [Accepted: 06/26/2022] [Indexed: 11/15/2022]
Abstract
In the coronavirus disease 2019 (COVID-19) pandemic, automated diagnostic tools are urgently required, alongside traditional methods, for fast and accurate diagnosis of large numbers of patients. In this paper, a deep convolutional neural network (CNN)-based scheme is proposed for automated, accurate diagnosis of COVID-19 from lung computed tomography (CT) scan images. First, for automated segmentation of lung regions in a chest CT scan, a modified CNN architecture, namely SKICU-Net, is proposed by incorporating additional skip interconnections in the U-Net model to overcome the loss of information caused by dimension scaling. Next, agglomerative hierarchical clustering is deployed to eliminate CT slices without significant information. Finally, for effective feature extraction and diagnosis of COVID-19 and pneumonia from the segmented lung slices, a modified DenseNet architecture, namely P-DenseCOVNet, is designed, in which parallel convolutional paths are introduced on top of the conventional DenseNet model to improve performance by mitigating the loss of positional information. Outstanding performance was achieved, with an F1 score of 0.97 on the segmentation task and an accuracy of 87.5% in diagnosing COVID-19, common pneumonia, and normal cases. Significant experimental results and comparison with other studies show that the proposed scheme performs very satisfactorily and can serve as an effective diagnostic tool in the current pandemic.
Collapse
Affiliation(s)
- Farhan Sadik
- Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology, Dhaka 1205, Bangladesh
| | - Ankan Ghosh Dastider
- Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology, Dhaka 1205, Bangladesh
| | - Mohseu Rashid Subah
- Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology, Dhaka 1205, Bangladesh
| | - Tanvir Mahmud
- Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology, Dhaka 1205, Bangladesh
| | - Shaikh Anowarul Fattah
- Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology, Dhaka 1205, Bangladesh.
| |
Collapse
|
41
|
Li M, Li X, Jiang Y, Zhang J, Luo H, Yin S. Explainable multi-instance and multi-task learning for COVID-19 diagnosis and lesion segmentation in CT images. Knowl Based Syst 2022; 252:109278. [PMID: 35783000 PMCID: PMC9235304 DOI: 10.1016/j.knosys.2022.109278] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2022] [Revised: 06/12/2022] [Accepted: 06/13/2022] [Indexed: 11/16/2022]
Abstract
Coronavirus Disease 2019 (COVID-19) still shows a pandemic trend globally. Detecting infected individuals and analyzing their status can provide patients with proper healthcare while protecting the uninfected population. Chest CT (computed tomography) is an effective tool for screening COVID-19, as it displays detailed pathology-related information. To achieve automated COVID-19 diagnosis and lung CT image segmentation, convolutional neural networks (CNNs) have become mainstream methods. However, most previous works treat automated diagnosis and image segmentation as two independent tasks, some focusing on lung-field segmentation and others on single-lesion segmentation. Moreover, a lack of clinical explainability is a common problem for CNN-based methods. In this context, we develop a multi-task learning framework in which the diagnosis of COVID-19 and multi-lesion recognition (segmentation of CT images) are achieved simultaneously. The core of the proposed framework is an explainable multi-instance multi-task network. The network learns task-related features adaptively with learnable weights and gives explainable diagnosis results by presenting local CT images with lesions as additional evidence. Then, severity assessment of COVID-19 and lesion quantification are performed to analyze patient status. Extensive experimental results on real-world datasets show that the proposed framework outperforms all compared approaches for COVID-19 diagnosis and multi-lesion segmentation.
Collapse
Affiliation(s)
- Minglei Li
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, Heilongjiang, China
| | - Xiang Li
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, Heilongjiang, China
| | - Yuchen Jiang
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, Heilongjiang, China
| | - Jiusi Zhang
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, Heilongjiang, China
| | - Hao Luo
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, Heilongjiang, China
| | - Shen Yin
- Department of Mechanical and Industrial Engineering, Norwegian University of Science and Technology, Trondheim, 7034, Norway
| |
Collapse
|
42
|
Chen J, Li Y, Guo L, Zhou X, Zhu Y, He Q, Han H, Feng Q. Machine learning techniques for CT imaging diagnosis of novel coronavirus pneumonia: a review. Neural Comput Appl 2022; 36:1-19. [PMID: 36159188 PMCID: PMC9483435 DOI: 10.1007/s00521-022-07709-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2022] [Accepted: 08/04/2022] [Indexed: 11/20/2022]
Abstract
Since 2020, novel coronavirus pneumonia has been spreading rapidly around the world, placing tremendous pressure on hospital diagnosis and treatment. Medical imaging methods such as computed tomography (CT) play a crucial role in diagnosing and treating COVID-19, and CT-based diagnosis produces large numbers of images. In this situation, diagnostic judgement by the human eye across thousands of CT images is inefficient and time-consuming. Recently, to improve diagnostic efficiency, machine learning has been widely used in computer-aided diagnosis and treatment systems for CT imaging to help doctors perform accurate analysis and provide effective diagnostic decision support. In this paper, we comprehensively review the machine learning methods frequently applied to CT imaging diagnosis of COVID-19, discussing machine learning-based applications across image acquisition and pre-processing, image segmentation, quantitative analysis and diagnosis, and disease follow-up and prognosis. We also discuss the limitations of current machine learning technology in the context of CT imaging computer-aided diagnosis.
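For orientation, the review's application areas map onto a simple computer-aided diagnosis pipeline. The sketch below is a hypothetical skeleton with placeholder functions, not an API from any of the reviewed systems.

```python
import numpy as np

def preprocess(volume: np.ndarray) -> np.ndarray:
    """Clip to a lung window and normalize to [0, 1]."""
    v = np.clip(volume, -1000, 400)
    return (v + 1000) / 1400.0

def segment_lungs(volume: np.ndarray) -> np.ndarray:
    """Placeholder: return a binary lung mask (in practice, a trained segmentation model)."""
    return (volume > 0.1).astype(np.uint8)

def quantify(volume: np.ndarray, mask: np.ndarray) -> dict:
    """Placeholder quantitative analysis inside the lung mask."""
    return {"lung_voxels": int(mask.sum())}

def diagnose(features: dict) -> str:
    """Placeholder decision rule standing in for a trained classifier."""
    return "suspicious" if features["lung_voxels"] > 10_000 else "unremarkable"

scan = np.random.uniform(-1000, 400, size=(64, 128, 128))  # toy CT volume
vol = preprocess(scan)
mask = segment_lungs(vol)
print(diagnose(quantify(vol, mask)))
```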
Collapse
Affiliation(s)
- Jingjing Chen
- Zhejiang University City College, Hangzhou, China
- Zhijiang College of Zhejiang University of Technology, Shaoxing, China
| | - Yixiao Li
- Faculty of Science, Zhejiang University of Technology, Hangzhou, China
| | - Lingling Guo
- College of Chemical Engineering, Zhejiang University of Technology, Hangzhou, China
| | - Xiaokang Zhou
- Faculty of Data Science, Shiga University, Hikone, Japan
- RIKEN Center for Advanced Intelligence Project, Tokyo, Japan
| | - Yihan Zhu
- College of Chemical Engineering, Zhejiang University of Technology, Hangzhou, China
| | - Qingfeng He
- School of Pharmacy, Fudan University, Shanghai, China
| | - Haijun Han
- School of Medicine, Zhejiang University City College, Hangzhou, China
| | - Qilong Feng
- College of Chemical Engineering, Zhejiang University of Technology, Hangzhou, China
| |
Collapse
|
43
|
Soni M, Singh AK, Babu KS, Kumar S, Kumar A, Singh S. Convolutional neural network based CT scan classification method for COVID-19 test validation. Smart Health (Amsterdam, Netherlands) 2022; 25:100296. [PMID: 35722028 PMCID: PMC9188200 DOI: 10.1016/j.smhl.2022.100296] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/26/2022] [Revised: 04/24/2022] [Accepted: 05/28/2022] [Indexed: 11/19/2022]
Abstract
The novel coronavirus was first identified in Wuhan, China, in December 2019. Because RT-PCR has a high false-negative rate and its results are time-consuming to obtain, research has shown that computed tomography (CT) has become an essential auxiliary means of diagnosing and treating novel coronavirus pneumonia. Since few COVID-19 CT datasets are currently available, conditional generative adversarial networks (CGANs) are used to augment the data and obtain CT datasets with more samples, reducing the risk of overfitting. In addition, a BIN residual block-based method is proposed: the improved U-Net network is used for image segmentation and is then combined with a multi-layer perceptron for classification prediction. Comparison with network models such as AlexNet and GoogleNet shows that the proposed BUF-Net model performs best, reaching an accuracy of 93%. Using Grad-CAM to visualize the system's output illustrates more intuitively the critical role of CT images in diagnosing COVID-19. Applying deep learning with the proposed techniques in medical imaging can help radiologists achieve more effective diagnoses, which is the main objective of this research.
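The abstract names a "BIN residual block" without detailing it; one plausible reading is a residual block that mixes batch and instance normalization. The PyTorch sketch below follows that assumption and is not the authors' released code.

```python
import torch
import torch.nn as nn

class BINResidualBlock(nn.Module):
    """Residual block using batch norm after the first conv and instance norm after the second."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.inorm = nn.InstanceNorm2d(channels, affine=True)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.act(self.bn(self.conv1(x)))
        out = self.inorm(self.conv2(out))
        return self.act(out + x)  # identity shortcut

# usage: y = BINResidualBlock(64)(torch.randn(2, 64, 128, 128))
```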
Collapse
Affiliation(s)
- Mukesh Soni
- Department of CSE, University Centre for Research & Development, Chandigarh University, Mohali, Punjab, 140413, India
| | | | - K Suresh Babu
- Department of Biochemistry, Symbiosis Medical College for Women, Symbiosis International (Deemed University), Pune, India
| | - Sumit Kumar
- Indian Institute of Management, Kozhikode, India
| | - Akhilesh Kumar
- Department of Information Technology, Gaya College, Gaya, Bihar, India
| | - Shweta Singh
- Electronics and Communication Department, IES College of Technology, Bhopal, India
| |
Collapse
|
44
|
Shah A, Shah M. Advancement of deep learning in pneumonia/Covid-19 classification and localization: A systematic review with qualitative and quantitative analysis. Chronic Dis Transl Med 2022; 8:154-171. [PMID: 35572951 PMCID: PMC9086991 DOI: 10.1002/cdt3.17] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2021] [Accepted: 01/20/2022] [Indexed: 12/15/2022] Open
Abstract
Around 450 million people are affected by pneumonia every year, which results in 2.5 million deaths. Coronavirus disease 2019 (Covid-19) has also affected 181 million people, which led to 3.92 million casualties. The chances of death in both of these diseases can be significantly reduced if they are diagnosed early. However, the current methods of diagnosing pneumonia (complaints + chest X-ray) and Covid-19 (real-time polymerase chain reaction) require the presence of expert radiologists and time, respectively. With the help of deep learning models, pneumonia and Covid-19 can be detected instantly from chest X-rays or computerized tomography (CT) scans. The process of diagnosing pneumonia/Covid-19 can become faster and more widespread. In this paper, we aimed to elicit, explain, and evaluate qualitatively and quantitatively all advancements in deep learning methods aimed at detecting community-acquired pneumonia, viral pneumonia, and Covid-19 from images of chest X-rays and CT scans. Being a systematic review, the focus of this paper lies in explaining various deep learning model architectures, which have either been modified or created from scratch for the task at hand. For each model, this paper answers the question of why the model is designed the way it is, the challenges that a particular model overcomes, and the tradeoffs that come with modifying a model to the required specifications. A grouped quantitative analysis of all models described in the paper is also provided to quantify the effectiveness of different models with a similar goal. Some tradeoffs cannot be quantified and, hence, they are mentioned explicitly in the qualitative analysis, which is done throughout the paper. By compiling and analyzing a large quantum of research details in one place with all the data sets, model architectures, and results, we aimed to provide a one-stop solution to beginners and current researchers interested in this field.
Collapse
Affiliation(s)
- Aakash Shah
- Department of Computer Science & Engineering, Institute of Technology, Nirma University, Ahmedabad, India
| | - Manan Shah
- Department of Chemical Engineering, School of Technology, Pandit Deendayal Energy University, Gandhinagar, India
| |
Collapse
|
45
|
Liang S, Nie R, Cao J, Wang X, Zhang G. FCF: Feature complement fusion network for detecting COVID-19 through CT scan images. Appl Soft Comput 2022; 125:109111. [PMID: 35693545 PMCID: PMC9167685 DOI: 10.1016/j.asoc.2022.109111] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2022] [Revised: 05/12/2022] [Accepted: 05/26/2022] [Indexed: 11/17/2022]
Abstract
COVID-19 spreads rapidly from person to person, so diagnosing the disease accurately and promptly is essential for quarantine and medical treatment. RT-PCR plays a crucial role in diagnosing COVID-19, whereas computed tomography (CT) delivers a faster result when combined with artificial-intelligence assistance. Developing a deep learning classification model for detecting COVID-19 from CT images can help doctors during consultation. We propose a feature complement fusion network (FCF) for detecting COVID-19 from lung CT scan images. The framework extracts local features with a CNN extractor and global features with a ViT extractor, so that each compensates for the limited receptive field of the other. Thanks to the attention mechanism in the designed feature complement Transformer (FCT), the extracted local and global feature embeddings achieve a better representation. We combine a supervised with a weakly supervised strategy to train the model, which encourages the CNN to guide the ViT to converge faster. The model achieves 99.34% accuracy on our test set, surpassing current state-of-the-art classification models. Moreover, the proposed structure can easily be extended to other classification tasks by substituting appropriate extractors.
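A hedged sketch of the fusion idea: local CNN feature tokens attend to global ViT tokens via cross-attention before pooling and classification. The FeatureComplementFusion module below is a simplified assumption, not the paper's FCT implementation.

```python
import torch
import torch.nn as nn

class FeatureComplementFusion(nn.Module):
    """Fuse CNN tokens (queries) with ViT tokens (keys/values) by cross-attention."""
    def __init__(self, dim=256, heads=4, num_classes=2):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, cnn_tokens, vit_tokens):
        # cnn_tokens, vit_tokens: (batch, n_tokens, dim) from the two extractors
        fused, _ = self.attn(query=cnn_tokens, key=vit_tokens, value=vit_tokens)
        fused = self.norm(fused + cnn_tokens)  # residual "complement" of local features
        return self.head(fused.mean(dim=1))    # pooled classification logits

# usage: logits = FeatureComplementFusion()(torch.randn(2, 49, 256), torch.randn(2, 50, 256))
```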
Collapse
Affiliation(s)
- Shu Liang
- School of Information Science and Engineering, Yunnan University, Kunming, 650500, Yunnan, China
| | - Rencan Nie
- School of Information Science and Engineering, Yunnan University, Kunming, 650500, Yunnan, China; School of Automation, Southeast University, Nanjing, 210096, Jiangsu, China
| | - Jinde Cao
- School of Mathematics, Southeast University, Nanjing, 210096, Jiangsu, China; Yonsei Frontier Lab, Yonsei University, Seoul, 03722, South Korea
| | - Xue Wang
- School of Information Science and Engineering, Yunnan University, Kunming, 650500, Yunnan, China
| | - Gucheng Zhang
- School of Information Science and Engineering, Yunnan University, Kunming, 650500, Yunnan, China
| |
Collapse
|
46
|
Karnati M, Seal A, Sahu G, Yazidi A, Krejcar O. A novel multi-scale based deep convolutional neural network for detecting COVID-19 from X-rays. Appl Soft Comput 2022; 125:109109. [PMID: 35693544 PMCID: PMC9167691 DOI: 10.1016/j.asoc.2022.109109] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2021] [Revised: 04/26/2022] [Accepted: 05/26/2022] [Indexed: 11/23/2022]
Abstract
The COVID-19 pandemic has posed an unprecedented threat to the global public health system, primarily infecting the airway epithelial cells in the respiratory tract. Chest X-ray (CXR) is widely available, faster, and less expensive, so it is preferred for monitoring the lungs in COVID-19 diagnosis over other techniques such as molecular tests, antigen tests, antibody tests, and chest computed tomography (CT). As the pandemic continues to reveal the limitations of our current ecosystems, researchers are coming together to share their knowledge and experience in order to develop new systems to tackle it. In this work, an end-to-end IoT infrastructure is designed and built to diagnose patients remotely in the case of a pandemic, limiting COVID-19 dissemination while also improving measurement science. The proposed framework comprises six steps. In the last step, a model is designed to interpret CXR images and intelligently measure the severity of COVID-19 lung infections using a novel deep neural network (DNN). The proposed DNN employs multi-scale sampling filters to extract reliable and noise-invariant features from a variety of image patches. Experiments are conducted on five publicly available databases, including COVIDx, COVID-19 Radiography, COVID-XRay-5K, COVID-19-CXR, and COVIDchestxray, with classification accuracies of 96.01%, 99.62%, 99.22%, 98.83%, and 100%, and testing times of 0.541, 0.692, 1.28, 0.461, and 0.202 s, respectively. The obtained results show that the proposed model surpasses fourteen baseline techniques. As a result, the newly developed model could be utilized to evaluate treatment efficacy, particularly in remote locations.
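The "multi-scale sampling filters" can be illustrated with parallel convolutions of different kernel sizes whose outputs are concatenated. The block below is a generic sketch under that assumption, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    """Parallel 3x3, 5x5 and 7x7 convolutions concatenated channel-wise."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        branch = out_ch // 3
        self.b3 = nn.Conv2d(in_ch, branch, 3, padding=1)
        self.b5 = nn.Conv2d(in_ch, branch, 5, padding=2)
        self.b7 = nn.Conv2d(in_ch, out_ch - 2 * branch, 7, padding=3)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(torch.cat([self.b3(x), self.b5(x), self.b7(x)], dim=1))

# usage: y = MultiScaleBlock(1, 48)(torch.randn(2, 1, 224, 224))  # -> (2, 48, 224, 224)
```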
Collapse
Affiliation(s)
- Mohan Karnati
- Department of Computer Science and Engineering, PDPM Indian Institute of Information Technology Design & Manufacturing Jabalpur, Jabalpur, Madhya Pradesh 482005, India
| | - Ayan Seal
- Department of Computer Science and Engineering, PDPM Indian Institute of Information Technology Design & Manufacturing Jabalpur, Jabalpur, Madhya Pradesh 482005, India
| | - Geet Sahu
- Department of Computer Science and Engineering, PDPM Indian Institute of Information Technology Design & Manufacturing Jabalpur, Jabalpur, Madhya Pradesh 482005, India
| | - Anis Yazidi
- Department of Computer Science, OsloMet - Oslo Metropolitan University, Oslo, 460167, Norway
- Department of Computer Science, Norwegian University of Science and Technology, Trondheim, 460167, Norway
- Department of Plastic and Reconstructive Surgery, Oslo University Hospital, Oslo, 460167, Norway
| | - Ondrej Krejcar
- Center for Basic and Applied Science, Faculty of Informatics and Management, University of Hradec Kralove, Rokitanskeho 62, 500 03 Hradec Kralove, Czech Republic
- Malaysia-Japan International Institute of Technology (MJIIT), Universiti Teknologi Malaysia, Jalan Sultan Yahya Petra, 54100 Kuala Lumpur, Malaysia
| |
Collapse
|
47
|
Ravi V, Acharya V, Alazab M. A multichannel EfficientNet deep learning-based stacking ensemble approach for lung disease detection using chest X-ray images. Cluster Computing 2022; 26:1181-1203. [PMID: 35874187 PMCID: PMC9295885 DOI: 10.1007/s10586-022-03664-6] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/30/2022] [Revised: 05/21/2022] [Accepted: 06/17/2022] [Indexed: 06/15/2023]
Abstract
This paper proposes a multichannel deep learning approach for lung disease detection using chest X-rays. The multichannel models used in this work are the EfficientNetB0, EfficientNetB1, and EfficientNetB2 pretrained models. The features from the EfficientNet models are fused together and then passed into more than one non-linear fully connected layer. Finally, the features are passed into a stacked ensemble learning classifier for lung disease detection, which contains random forest and SVM in the first stage and logistic regression in the second stage. The performance of the proposed method is studied in detail for more than one lung disease, namely pneumonia, tuberculosis (TB), and COVID-19. The performance of the proposed method for lung disease detection using chest X-rays is compared with that of similar methods to show that the method is robust and can achieve better results. In all the experiments, the proposed method showed better performance and outperformed similar existing lung disease detection methods, indicating that it is robust and generalizes to unseen chest X-ray samples. To ensure that the features learnt by the proposed method are optimal, t-SNE feature visualizations are shown for all three lung disease models. Overall, the proposed method achieved 98% detection accuracy for pediatric pneumonia, 99% for TB, and 98% for COVID-19. The proposed method can be used as a tool for point-of-care diagnosis by healthcare radiologists.
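The stacked ensemble described here maps directly onto scikit-learn's stacking API. The sketch below assumes the fused EfficientNet features have already been computed and uses random placeholders for them; it is an illustration of the ensemble stage, not the authors' code.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

# Placeholder stand-ins for fused EfficientNetB0/B1/B2 features and binary labels.
X = np.random.rand(200, 256)
y = np.random.randint(0, 2, size=200)

# First stage: random forest + SVM; second stage: logistic regression meta-learner.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100)),
                ("svm", SVC(probability=True))],
    final_estimator=LogisticRegression(max_iter=1000),
)
stack.fit(X, y)
print(stack.predict(X[:5]))
```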
Collapse
Affiliation(s)
- Vinayakumar Ravi
- Center for Artificial Intelligence, Prince Mohammad Bin Fahd University, Khobar, Saudi Arabia
| | - Vasundhara Acharya
- Manipal Institute of Technology (MIT), Manipal Academy of Higher Education (MAHE), Manipal, India
| | - Mamoun Alazab
- College of Engineering, IT and Environment, Charles Darwin University, Casuarina, NT, Australia
| |
Collapse
|
48
|
Szepesi P, Szilágyi L. Detection of pneumonia using convolutional neural networks and deep learning. Biocybern Biomed Eng 2022. [DOI: 10.1016/j.bbe.2022.08.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
|
49
|
Yaşar H, Ceylan M, Cebeci H, Kılınçer A, Kanat F, Koplay M. A novel study to increase the classification parameters on automatic three-class COVID-19 classification from CT images, including cases from Turkey. J EXP THEOR ARTIF IN 2022. [DOI: 10.1080/0952813x.2022.2093980] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/17/2022]
Affiliation(s)
- Hüseyin Yaşar
- General Directorate of Health Investments, Ministry of Health of Republic of Turkey, Ankara, Turkey
| | - Murat Ceylan
- Department of Electrical and Electronics Engineering, Faculty of Engineering and Natural Sciences, Konya Technical University, Konya, Turkey
| | - Hakan Cebeci
- Department of Radiology, Selçuk University Faculty of Medicine, Konya, Turkey
| | - Abidin Kılınçer
- Department of Radiology, Selçuk University Faculty of Medicine, Konya, Turkey
| | - Fikret Kanat
- Department of Chest Diseases, Selçuk University Faculty of Medicine, Konya, Turkey
| | - Mustafa Koplay
- Department of Radiology, Selçuk University Faculty of Medicine, Konya, Turkey
| |
Collapse
|
50
|
Liu J, Qi J, Chen W, Nian Y. Multi-branch fusion auxiliary learning for the detection of pneumonia from chest X-ray images. Comput Biol Med 2022; 147:105732. [PMID: 35779478 PMCID: PMC9212341 DOI: 10.1016/j.compbiomed.2022.105732] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2022] [Revised: 05/23/2022] [Accepted: 06/11/2022] [Indexed: 11/26/2022]
Abstract
Lung infections caused by bacteria and viruses are contagious and require timely screening and isolation, and different types of pneumonia require different treatment plans. Therefore, finding a rapid and accurate screening method for lung infections is critical. To achieve this goal, we proposed a multi-branch fusion auxiliary learning (MBFAL) method for pneumonia detection from chest X-ray (CXR) images. The MBFAL method performs two tasks through a double-branch network. The first task is to recognize the absence of pneumonia (normal), COVID-19, other viral pneumonia, and bacterial pneumonia from CXR images, and the second task is to recognize the three types of pneumonia from CXR images. The latter task is used to assist the learning of the former to achieve a better recognition effect. During auxiliary parameter updating, the feature maps of the different branches are fused after sample screening through label information, enhancing the model's ability to recognize cases of pneumonia without impacting its ability to recognize normal cases. Experiments show that an average classification accuracy of 95.61% is achieved using MBFAL. The single-class accuracy for normal, COVID-19, other viral pneumonia, and bacterial pneumonia was 98.70%, 99.10%, 96.60%, and 96.80%, respectively, and the recall was 97.20%, 98.60%, 96.10%, and 89.20%, respectively. Compared with the baseline model and models built with the above methods separately, MBFAL achieved better results for the rapid screening of pneumonia.
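A minimal sketch of the two-branch auxiliary setup: a shared backbone with a four-class main head and a three-class pneumonia-only auxiliary head, where the auxiliary loss is computed only on label-screened pneumonia samples. Module names, the 0.5 auxiliary weight, and the toy backbone are assumptions for illustration, not the MBFAL implementation.

```python
import torch
import torch.nn as nn

class TwoBranchAux(nn.Module):
    """Shared backbone with a main (4-class) head and an auxiliary (3-class) head."""
    def __init__(self, feat_dim=512):
        super().__init__()
        self.backbone = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU())
        self.main_head = nn.Linear(feat_dim, 4)  # normal / COVID-19 / other viral / bacterial
        self.aux_head = nn.Linear(feat_dim, 3)   # the three pneumonia types only

    def forward(self, x):
        f = self.backbone(x)
        return self.main_head(f), self.aux_head(f)

model = TwoBranchAux()
ce = nn.CrossEntropyLoss()
x = torch.randn(8, 1, 64, 64)                    # toy CXR batch
y_main = torch.randint(0, 4, (8,))               # 0 = normal, 1..3 = pneumonia types
main_logits, aux_logits = model(x)
pneumonia = y_main > 0                           # screen samples by label information
loss = ce(main_logits, y_main)
if pneumonia.any():
    loss = loss + 0.5 * ce(aux_logits[pneumonia], y_main[pneumonia] - 1)
loss.backward()
```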
Collapse
|