1
Wang C, Shao J, He Y, Wu J, Liu X, Yang L, Wei Y, Zhou XS, Zhan Y, Shi F, Shen D, Li W. Data-driven risk stratification and precision management of pulmonary nodules detected on chest computed tomography. Nat Med 2024. [PMID: 39289570 DOI: 10.1038/s41591-024-03211-3]
Abstract
The widespread implementation of low-dose computed tomography (LDCT) in lung cancer screening has led to the increasing detection of pulmonary nodules. However, precisely evaluating the malignancy risk of pulmonary nodules remains a formidable challenge. Here we propose a triage-driven Chinese Lung Nodules Reporting and Data System (C-Lung-RADS) utilizing a medical checkup cohort of 45,064 cases. The system operates in a stepwise fashion, initially distinguishing low-, mid-, high- and extremely high-risk nodules based on their size and density. Subsequently, it progressively integrates imaging information, demographic characteristics and follow-up data to pinpoint suspicious malignant nodules and refine the risk scale. The multidimensional system achieved state-of-the-art performance with an area under the curve (AUC) of 0.918 (95% confidence interval (CI) 0.918-0.919) on the internal testing dataset, outperforming the single-dimensional approach (AUC of 0.881, 95% CI 0.880-0.882). Moreover, C-Lung-RADS exhibited superior sensitivity compared with Lung-RADS v2022 (87.1% versus 63.3%) in an independent cohort, which was screened using mobile computed tomography scanners to broaden screening accessibility in resource-constrained settings. With its foundation in precise risk stratification and tailored management, this system minimized unnecessary invasive procedures for low-risk cases and recommended prompt intervention for extremely high-risk nodules to avert diagnostic delays. This approach has the potential to enhance the decision-making paradigm and facilitate a more efficient diagnosis of lung cancer during routine checkups as well as screening scenarios.
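The first, rule-based stage of such a system can be illustrated with a minimal sketch. The thresholds and density labels below are placeholders invented for illustration only; they are not the published C-Lung-RADS cut-offs.

```python
# Illustrative sketch of stepwise, size/density-based nodule triage.
# All thresholds and category labels are PLACEHOLDERS, not the
# published C-Lung-RADS rules.

def triage_nodule(diameter_mm: float, density: str) -> str:
    """Assign a coarse risk band from nodule size and density.

    density: 'solid', 'part-solid', or 'ground-glass' (assumed labels).
    """
    if density == "part-solid" and diameter_mm >= 15:
        return "extremely high"
    if diameter_mm >= 15:
        return "high"
    if diameter_mm >= 8 or density == "part-solid":
        return "mid"
    return "low"

band = triage_nodule(6, "solid")  # small solid nodule
```

Later stages would then refine this coarse band with demographic and follow-up features, as the abstract describes.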
Affiliation(s)
- Chengdi Wang
- Department of Pulmonary and Critical Care Medicine, Targeted Tracer Research and Development Laboratory, Frontiers Science Center for Disease-Related Molecular Network, State Key Laboratory of Respiratory Health and Multimorbidity, West China Hospital, West China School of Medicine, Sichuan University, Chengdu, China.
- Frontiers Medical Center, Tianfu Jincheng Laboratory, Chengdu, China.
- Jun Shao
- Department of Pulmonary and Critical Care Medicine, Targeted Tracer Research and Development Laboratory, Frontiers Science Center for Disease-Related Molecular Network, State Key Laboratory of Respiratory Health and Multimorbidity, West China Hospital, West China School of Medicine, Sichuan University, Chengdu, China
- Yichu He
- Department of Research and Development, United Imaging Intelligence, Shanghai, China
- Jiaojiao Wu
- Department of Research and Development, United Imaging Intelligence, Shanghai, China
- Xingting Liu
- Department of Pulmonary and Critical Care Medicine, Targeted Tracer Research and Development Laboratory, Frontiers Science Center for Disease-Related Molecular Network, State Key Laboratory of Respiratory Health and Multimorbidity, West China Hospital, West China School of Medicine, Sichuan University, Chengdu, China
- Liuqing Yang
- Department of Pulmonary and Critical Care Medicine, Targeted Tracer Research and Development Laboratory, Frontiers Science Center for Disease-Related Molecular Network, State Key Laboratory of Respiratory Health and Multimorbidity, West China Hospital, West China School of Medicine, Sichuan University, Chengdu, China
- Ying Wei
- Department of Research and Development, United Imaging Intelligence, Shanghai, China
- Xiang Sean Zhou
- School of Biomedical Engineering and State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai, China
- Yiqiang Zhan
- School of Biomedical Engineering and State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai, China
- Feng Shi
- Department of Research and Development, United Imaging Intelligence, Shanghai, China.
- Dinggang Shen
- School of Biomedical Engineering and State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai, China.
- Shanghai Clinical Research and Trial Center, Shanghai, China.
- Weimin Li
- Department of Pulmonary and Critical Care Medicine, Targeted Tracer Research and Development Laboratory, Frontiers Science Center for Disease-Related Molecular Network, State Key Laboratory of Respiratory Health and Multimorbidity, West China Hospital, West China School of Medicine, Sichuan University, Chengdu, China.
- Frontiers Medical Center, Tianfu Jincheng Laboratory, Chengdu, China.
2
Mousavi M, Hosseini S. A deep convolutional neural network approach using medical image classification. BMC Med Inform Decis Mak 2024; 24:239. [PMID: 39210320 PMCID: PMC11360845 DOI: 10.1186/s12911-024-02646-5]
Abstract
Epidemic diseases such as COVID-19 spread rapidly around the world. Diagnosis at an early stage is critical for providing medical care to infected people, supporting their recovery, and protecting the uninfected population. In this paper, an automatic COVID-19 detection model using respiratory sounds and medical images, based on the Internet of Health Things (IoHT), is proposed. In the first step, to screen people with suspected coronavirus disease, the sound of coughing is used to distinguish healthy people from those suffering from COVID-19, reaching an accuracy of 94.999%. This approach not only expedites diagnosis and enhances accuracy but also enables rapid screening in public places using simple equipment. In the second step, to help radiologists interpret medical images as accurately as possible, three pre-trained convolutional neural network models (InceptionResNetV2, InceptionV3 and EfficientNetB4) are applied to two datasets of chest medical images, radiography and CT scans, in a three-class classification. Utilizing transfer learning and the pre-existing knowledge in these models leads to notable improvements in disease diagnosis and identification compared with traditional techniques. The best result for CT scan images was obtained by the InceptionResNetV2 architecture with 99.414% accuracy, and for radiography images by the InceptionV3 and EfficientNetB4 architectures with 96.943% accuracy. The proposed model can therefore help radiology specialists confirm initial assessments of COVID-19 disease.
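Figures such as "99.414% accuracy" in multi-class classification are overall accuracies: the diagonal mass of the confusion matrix. A stand-alone sketch with a made-up three-class confusion matrix (not the paper's data or code):

```python
# Overall classification accuracy from a confusion matrix.
# The matrix values below are INVENTED for illustration.

def overall_accuracy(confusion):
    """Fraction of correctly classified samples: diagonal / total."""
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    total = sum(sum(row) for row in confusion)
    return correct / total

# Rows = true class, columns = predicted class (3-class example).
cm = [
    [98, 1, 1],
    [2, 97, 1],
    [0, 2, 98],
]
acc = overall_accuracy(cm)  # (98 + 97 + 98) / 300
```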
Affiliation(s)
- Mohammad Mousavi
- Department of Computer Science, Faculty of Mathematics and Computer, Shahid Bahonar University of Kerman, Kerman, Iran
- Soodeh Hosseini
- Department of Computer Science, Faculty of Mathematics and Computer, Shahid Bahonar University of Kerman, Kerman, Iran.
3
Kanwal K, Asif M, Khalid SG, Liu H, Qurashi AG, Abdullah S. Current Diagnostic Techniques for Pneumonia: A Scoping Review. Sensors (Basel) 2024; 24:4291. [PMID: 39001069 PMCID: PMC11244398 DOI: 10.3390/s24134291]
Abstract
Community-acquired pneumonia is one of the most lethal infectious diseases, especially for infants and the elderly. Given the variety of causative agents, the accurate early detection of pneumonia is an active research area. To the best of our knowledge, scoping reviews on diagnostic techniques for pneumonia are lacking. In this scoping review, three major electronic databases were searched and the resulting research was screened. We categorized these diagnostic techniques into four classes (i.e., lab-based methods, imaging-based techniques, acoustic-based techniques, and physiological-measurement-based techniques) and summarized their recent applications. Major research has been skewed towards imaging-based techniques, especially after COVID-19. Currently, chest X-rays and blood tests are the most common tools in the clinical setting to establish a diagnosis; however, there is a need to look for safe, non-invasive, and more rapid techniques for diagnosis. Recently, some non-invasive techniques based on wearable sensors achieved reasonable diagnostic accuracy that could open a new chapter for future applications. Consequently, further research and technology development are still needed for pneumonia diagnosis using non-invasive physiological parameters to attain a better point of care for pneumonia patients.
Affiliation(s)
- Kehkashan Kanwal
- College of Speech, Language, and Hearing Sciences, Ziauddin University, Karachi 75000, Pakistan
- Muhammad Asif
- Faculty of Computing and Applied Sciences, Sir Syed University of Engineering and Technology, Karachi 75300, Pakistan
- Syed Ghufran Khalid
- Department of Engineering, Faculty of Science and Technology, Nottingham Trent University, Nottingham B15 3TN, UK
- Haipeng Liu
- Research Centre for Intelligent Healthcare, Coventry University, Coventry CV1 5FB, UK
- Saad Abdullah
- School of Innovation, Design and Engineering, Mälardalen University, 721 23 Västerås, Sweden
4
AlJabri M, Alghamdi M, Collado-Mesa F, Abdel-Mottaleb M. Recurrent attention U-Net for segmentation and quantification of breast arterial calcifications on synthesized 2D mammograms. PeerJ Comput Sci 2024; 10:e2076. [PMID: 38855260 PMCID: PMC11157579 DOI: 10.7717/peerj-cs.2076]
Abstract
Breast arterial calcifications (BAC) are a type of calcification commonly observed on mammograms and are generally considered benign and not associated with breast cancer. However, there is accumulating observational evidence of an association between BAC and cardiovascular disease, the leading cause of death in women. We present a deep learning method that could assist radiologists in detecting and quantifying BAC in synthesized 2D mammograms. We present a recurrent attention U-Net model consisting of encoder and decoder modules that include multiple blocks, each combining a recurrent mechanism with an attention module between them. The model also includes skip connections between the encoder and the decoder, similar to a U-shaped network. The attention module was used to enhance the capture of long-range dependencies and enable the network to effectively separate BAC from the background, whereas the recurrent blocks ensured better feature representation. The model was evaluated using a dataset containing 2,000 synthesized 2D mammogram images. We obtained 99.8861% overall accuracy, 69.6107% sensitivity, a 66.5758% F1 score, and a 59.5498% Jaccard coefficient. The presented model achieved promising performance compared with related models.
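The segmentation metrics reported here (sensitivity, F1/Dice, Jaccard) are pixel-wise overlap statistics between predicted and ground-truth masks. A minimal stand-alone computation on flattened binary masks (toy values, not the authors' evaluation code):

```python
# Pixel-wise segmentation metrics from flattened binary masks.
# pred and truth are 0/1 sequences of equal length; values are toy data.

def seg_metrics(pred, truth):
    """Return (sensitivity, F1/Dice, Jaccard) for a binary segmentation."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)
    fn = sum(1 for p, t in zip(pred, truth) if not p and t)
    sens = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * tp / (2 * tp + fp + fn) if tp + fp + fn else 1.0
    jaccard = tp / (tp + fp + fn) if tp + fp + fn else 1.0
    return sens, f1, jaccard

pred  = [1, 1, 0, 1, 0, 0]
truth = [1, 0, 0, 1, 1, 0]
sens, f1, jac = seg_metrics(pred, truth)
```

Note that with a rare foreground like BAC, overall accuracy is dominated by background pixels, which is why it can sit near 99.9% while Jaccard stays much lower.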
Affiliation(s)
- Manar AlJabri
- Department of Computer Science and Artificial Intelligence, Umm Al-Qura University, Makkah, Makkah, Saudi Arabia
- King Abdul Aziz University, Jeddah, Makkah, Saudi Arabia
- Manal Alghamdi
- Department of Computer Science and Artificial Intelligence, Umm Al-Qura University, Makkah, Makkah, Saudi Arabia
- Fernando Collado-Mesa
- Department of Radiology, Miller School of Medicine, University of Miami, Miami, Florida, United States
- Mohamed Abdel-Mottaleb
- Department of Electrical and Computer Engineering, University of Miami, Miami, Florida, United States
5
Arefin MS, Rahman MM, Hasan MT, Mahmud M. A Topical Review on Enabling Technologies for the Internet of Medical Things: Sensors, Devices, Platforms, and Applications. Micromachines 2024; 15:479. [PMID: 38675290 PMCID: PMC11051832 DOI: 10.3390/mi15040479]
Abstract
The Internet of Things (IoT) is still a relatively new field of research, and its potential for use in the healthcare and medical sectors is enormous. In the last five years, IoT has been a go-to option for various applications, such as sensor-based monitoring and machine-to-machine communication, but the medical sector still lags far behind others. Hence, this study emphasises IoT applications in medical fields, medical IoT sensors and devices, IoT platforms for data visualisation, and artificial intelligence in medical applications. A systematic review following PRISMA guidelines was carried out on research articles as well as websites covering IoMT sensors and devices. From publications after 2001, 986 articles were initially selected; applying the inclusion-exclusion criteria left 597 articles, and 23 further studies were identified from websites and citations. The review then analyses different sensor monitoring circuits in detail in an Intensive Care Unit (ICU) scenario, device applications, and the data management system, including IoT platforms for patients. Lastly, challenges are discussed in detail and possible prospects are presented.
Affiliation(s)
- Md. Shamsul Arefin
- Department of Electrical and Electronic Engineering (EEE), Bangladesh University of Business & Technology, Dhaka 1216, Bangladesh;
- Md. Tanvir Hasan
- Department of Electrical and Electronic Engineering (EEE), Jashore University of Science & Technology, Jashore 7408, Bangladesh
- Department of Electrical Engineering, University of South Carolina, Columbia, SC 29208, USA
- Mufti Mahmud
- Department of Computer Science, Nottingham Trent University, Nottingham NG11 8NS, UK
- Computing and Informatics Research Centre, Nottingham Trent University, Nottingham NG11 8NS, UK
- Medical Technologies Innovation Facility, Nottingham Trent University, Nottingham NG11 8NS, UK
6
Li Y, Deng W, Zhou Y, Luo Y, Wu Y, Wen J, Cheng L, Liang X, Wu T, Wang F, Huang Z, Tan C, Liu Y. A nomogram based on clinical factors and CT radiomics for predicting anti-MDA5+ DM complicated by RP-ILD. Rheumatology (Oxford) 2024; 63:809-816. [PMID: 37267146 DOI: 10.1093/rheumatology/kead263]
Abstract
OBJECTIVES Anti-melanoma differentiation-associated gene 5 antibody-positive (anti-MDA5+) DM complicated by rapidly progressive interstitial lung disease (RP-ILD) has a high incidence and poor prognosis. The objective of this study was to establish a model for the prediction and early diagnosis of anti-MDA5+ DM-associated RP-ILD based on clinical manifestations and imaging features. METHODS A total of 103 patients with anti-MDA5+ DM were included. The patients were randomly split into training and testing sets of 72 and 31 patients, respectively. After image analysis, we collected clinical, imaging and radiomics features from each patient. Feature selection was performed first with the minimum redundancy and maximum relevance algorithm and then with the best subset selection method. The final remaining features comprised the radscore. A clinical model and an imaging model were then constructed with the selected independent risk factors for the prediction of non-RP-ILD and RP-ILD. We also combined these models in different ways and compared their predictive abilities. A nomogram was also established. The predictive performances of the models were assessed based on receiver operating characteristic curves, calibration curves, discriminability and clinical utility. RESULTS The analyses showed that two clinical factors, dyspnoea (P < 0.001) and duration of illness in months (P = 0.001), and three radiomics features (P = 0.001, 0.044 and 0.008, respectively) were independent predictors of non-RP-ILD and RP-ILD. However, no imaging features were significantly different between the two groups. The radiomics model built with the three radiomics features performed worse than the clinical model, showing areas under the curve (AUCs) of 0.805 and 0.754 in the training and test sets, respectively.
The clinical model demonstrated a good predictive ability for RP-ILD in MDA5+ DM patients, with an AUC, sensitivity, specificity and accuracy of 0.954, 0.931, 0.837 and 0.847 in the training set and 0.890, 0.875, 0.800 and 0.774 in the testing set, respectively. The combination model built with clinical and radiomics features performed slightly better than the clinical model, with an AUC, sensitivity, specificity and accuracy of 0.994, 0.966, 0.977 and 0.931 in the training set and 0.890, 0.812, 1.000 and 0.839 in the testing set, respectively. The calibration curve and decision curve analyses showed satisfactory consistency and clinical utility of the nomogram. CONCLUSION Our results suggest that the combination model built with clinical and radiomics features could reliably predict the occurrence of RP-ILD in MDA5+ DM patients.
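A nomogram of this kind is typically a graphical rendering of a logistic model's linear predictor. The sketch below uses invented coefficients purely to illustrate how dyspnoea, illness duration and a radscore could combine into a risk probability; it is not the fitted model from this study.

```python
import math

# Illustrative logistic risk score of the kind a nomogram visualizes.
# All coefficients are INVENTED placeholders, not the study's values.
COEFS = {"intercept": -2.0, "dyspnoea": 1.5, "duration": -0.1, "radscore": 2.0}

def rpild_risk(dyspnoea: int, duration_months: float, radscore: float) -> float:
    """Predicted probability of RP-ILD: sigmoid of a linear predictor."""
    lp = (COEFS["intercept"]
          + COEFS["dyspnoea"] * dyspnoea
          + COEFS["duration"] * duration_months
          + COEFS["radscore"] * radscore)
    return 1.0 / (1.0 + math.exp(-lp))

p = rpild_risk(dyspnoea=1, duration_months=3, radscore=0.8)
```

Each axis of the printed nomogram corresponds to one term of the linear predictor; summing the axis points and reading off the probability scale reproduces the sigmoid step.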
Affiliation(s)
- Yanhong Li
- Department of Rheumatology and Immunology, West China Hospital, Sichuan University, Chengdu, China
- Rare Diseases Center, West China Hospital, Sichuan University, Chengdu, China
- Institute of Immunology and Inflammation, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Chengdu, China
- Wen Deng
- Department of Radiology, West China Hospital, Sichuan University, Chengdu, China
- Yu Zhou
- Department of Respiratory and Critical Care Medicine, Chengdu First People's Hospital, Chengdu, China
- Yubin Luo
- Department of Rheumatology and Immunology, West China Hospital, Sichuan University, Chengdu, China
- Rare Diseases Center, West China Hospital, Sichuan University, Chengdu, China
- Institute of Immunology and Inflammation, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Chengdu, China
- Yinlan Wu
- Department of Rheumatology and Immunology, West China Hospital, Sichuan University, Chengdu, China
- Rare Diseases Center, West China Hospital, Sichuan University, Chengdu, China
- Institute of Immunology and Inflammation, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Chengdu, China
- Ji Wen
- Department of Rheumatology and Immunology, West China Hospital, Sichuan University, Chengdu, China
- Rare Diseases Center, West China Hospital, Sichuan University, Chengdu, China
- Institute of Immunology and Inflammation, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Chengdu, China
- Lu Cheng
- Department of Rheumatology and Immunology, West China Hospital, Sichuan University, Chengdu, China
- Rare Diseases Center, West China Hospital, Sichuan University, Chengdu, China
- Institute of Immunology and Inflammation, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Chengdu, China
- Xiuping Liang
- Department of Rheumatology and Immunology, West China Hospital, Sichuan University, Chengdu, China
- Rare Diseases Center, West China Hospital, Sichuan University, Chengdu, China
- Institute of Immunology and Inflammation, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Chengdu, China
- Tong Wu
- Department of Rheumatology and Immunology, West China Hospital, Sichuan University, Chengdu, China
- Rare Diseases Center, West China Hospital, Sichuan University, Chengdu, China
- Institute of Immunology and Inflammation, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Chengdu, China
- Fang Wang
- Department of Research and Development, Shanghai United Imaging Intelligence, Shanghai, China
- Zixing Huang
- Department of Radiology, West China Hospital, Sichuan University, Chengdu, China
- Chunyu Tan
- Department of Rheumatology and Immunology, West China Hospital, Sichuan University, Chengdu, China
- Rare Diseases Center, West China Hospital, Sichuan University, Chengdu, China
- Institute of Immunology and Inflammation, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Chengdu, China
- Yi Liu
- Department of Rheumatology and Immunology, West China Hospital, Sichuan University, Chengdu, China
- Rare Diseases Center, West China Hospital, Sichuan University, Chengdu, China
- Institute of Immunology and Inflammation, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Chengdu, China
7
Vaikunta Pai T, Maithili K, Arun Kumar R, Nagaraju D, Anuradha D, Kumar S, Ravuri A, Sunilkumar Reddy T, Sivaram M, Vidhya RG. DKCNN: Improving deep kernel convolutional neural network-based COVID-19 identification from CT images of the chest. J Xray Sci Technol 2024; 32:913-930. [PMID: 38820059 DOI: 10.3233/xst-230424]
Abstract
BACKGROUND An efficient deep convolutional neural network (DeepCNN) is proposed in this article for the classification of COVID-19 disease. OBJECTIVE A novel structure known as the pointwise-temporal-pointwise convolution unit is developed, incorporating varying-kernel depthwise temporal convolutions before and after the pointwise convolution operations. METHODS The outcome is optimized by the Salp Swarm Algorithm (SSA). The proposed DeepCNN is composed of depthwise temporal convolutions and performs end-to-end automatic detection of disease. First, the SARS-COV-2 Ct-Scan Dataset and the CT scan COVID Prediction dataset are preprocessed using the min-max approach, and features are extracted for further processing. RESULTS An experimental comparison between the proposed method and several state-of-the-art works shows that the proposed work classifies the disease more effectively than the other approaches. CONCLUSION The proposed structural unit is used to design the deep CNN with increasing kernel sizes. Classification is improved by the inclusion of depthwise temporal convolutions along with the kernel variation, and computational complexity is reduced by introducing strided convolutions in the residual linkages among adjacent structural units.
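The min-max preprocessing mentioned in METHODS rescales input intensities to a fixed range. A generic stand-alone sketch of that step (not the paper's exact pipeline):

```python
# Min-max normalization: rescale values linearly to [lo, hi].
# Generic preprocessing sketch; toy pixel values, not the paper's data.

def min_max_scale(values, lo=0.0, hi=1.0):
    vmin, vmax = min(values), max(values)
    if vmax == vmin:                      # constant input: map to lo
        return [lo for _ in values]
    scale = (hi - lo) / (vmax - vmin)
    return [lo + (v - vmin) * scale for v in values]

pixels = [0, 64, 128, 255]
scaled = min_max_scale(pixels)
```

In an imaging pipeline this is applied per image (or per dataset) so that all inputs to the network share the same numeric range.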
Affiliation(s)
- T Vaikunta Pai
- Department of Information Science and Engineering, NMAM Institute of Technology-Affiliated to NITTE (Deemed to be University), Bangalore, Karnataka, India
- K Maithili
- Department of Computer Science and Engineering (Ai & ML), KG Reddy College of Engineering and Technology, Hyderabad, Telangana, India
- Ravula Arun Kumar
- Department of Computer Science and Engineering, Vardhaman College of Engineering, Hyderabad, Telangana, India
- D Nagaraju
- Department of Computer Science and Engineering, Sri Venkatesa Perumal College of Engineering and Technology, Puttur, Andhra Pradesh, India
- D Anuradha
- Department of Computer Science and Business Systems, Panimalar Engineering College, Chennai, India
- Shailendra Kumar
- Department of Electronics and Communication Engineering, Integral University Lucknow, Uttar Pradesh, India
- T Sunilkumar Reddy
- Department of Computer Science and Engineering, Sri Venkatesa Perumal College of Engineering and Technology, Puttur, Andhra Pradesh, India
- M Sivaram
- Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Saveetha Nagar, Thandalam, Tamil Nadu, India
- R G Vidhya
- Department of ECE, HKBKCE, Bangalore, India
8
Zhang X, Yang S, Shi Y, Ji J, Liu Y, Wang Z, Xu H. Weakly guided attention model with hierarchical interaction for brain CT report generation. Comput Biol Med 2023; 167:107650. [PMID: 37976828 DOI: 10.1016/j.compbiomed.2023.107650]
Abstract
Brain Computed Tomography (CT) report generation, which aims to assist radiologists in diagnosing cerebrovascular diseases efficiently, is challenging in feature representation for dozens of images and language descriptions spanning several sentences. Existing report generation methods have made significant progress based on the encoder-decoder framework and attention mechanism. However, current research has limitations in solving the many-to-many alignment between the multiple images of Brain CT imaging and the multiple sentences of the Brain CT report, and fails to attend to critical images and lesion areas, resulting in inaccurate descriptions. In this paper, we propose a novel Weakly Guided Attention Model with Hierarchical Interaction, named WGAM-HI, to improve Brain CT report generation. Specifically, WGAM-HI conducts many-to-many matching between multiple visual images and semantic sentences via a hierarchical interaction framework with a two-layer attention model and a two-layer report generator. In addition, two weakly guided mechanisms are proposed to help the attention model focus more on important images and lesion areas under the guidance of pathological events and Gradient-weighted Class Activation Mapping (Grad-CAM), respectively. The pathological event acts as a bridge between the essential serial images and the corresponding sentence, and Grad-CAM bridges the lesion areas and pathology words. Therefore, under the hierarchical interaction with the weakly guided attention model, the report generator generates more accurate words and sentences. Experiments on the Brain CT dataset demonstrate the effectiveness of WGAM-HI in gradually attending to important images and lesion areas, and generating more accurate reports.
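At its core, an attention model of this kind softmax-normalizes relevance scores over the image set and takes a weighted sum of the image features. A generic, stand-alone sketch of soft attention (toy vectors, not the WGAM-HI implementation):

```python
import math

# Generic soft attention: softmax-normalize relevance scores, then take
# the weighted sum of feature vectors. Toy data; not the WGAM-HI code.

def softmax(scores):
    m = max(scores)                       # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(features, scores):
    """Weighted sum of feature vectors under softmax attention weights."""
    w = softmax(scores)
    dim = len(features[0])
    return [sum(w[i] * f[d] for i, f in enumerate(features)) for d in range(dim)]

feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]    # three "image" features
ctx = attend(feats, scores=[2.0, 0.5, 0.5])     # first image dominates
```

The "weak guidance" described above would shape the scores themselves (via pathological events or Grad-CAM maps) rather than change this weighted-sum mechanics.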
Affiliation(s)
- Xiaodan Zhang
- Faculty of Information Technology, Beijing University of Technology, Beijing, China.
- Sisi Yang
- Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Yanzhao Shi
- Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Junzhong Ji
- Faculty of Information Technology, Beijing University of Technology, Beijing, China.
- Ying Liu
- Department of Radiology, Peking University Third Hospital, Beijing, China.
- Zheng Wang
- Department of Radiology, Peking University Third Hospital, Beijing, China
- Huimin Xu
- Department of Radiology, Peking University Third Hospital, Beijing, China
9
Murphy K, Muhairwe J, Schalekamp S, van Ginneken B, Ayakaka I, Mashaete K, Katende B, van Heerden A, Bosman S, Madonsela T, Gonzalez Fernandez L, Signorell A, Bresser M, Reither K, Glass TR. COVID-19 screening in low resource settings using artificial intelligence for chest radiographs and point-of-care blood tests. Sci Rep 2023; 13:19692. [PMID: 37952026 PMCID: PMC10640556 DOI: 10.1038/s41598-023-46461-w]
Abstract
Artificial intelligence (AI) systems for detection of COVID-19 using chest X-Ray (CXR) imaging and point-of-care blood tests were applied to data from four low resource African settings. The performance of these systems in detecting COVID-19 from various input data was analysed and compared with antigen-based rapid diagnostic tests (RDTs). Participants were tested using the gold-standard RT-PCR test (nasopharyngeal swab) to determine whether they were infected with SARS-CoV-2. A total of 3737 (260 RT-PCR positive) participants were included. In our cohort, AI for CXR images was a poor predictor of COVID-19 (AUC = 0.60), since the majority of positive cases had mild symptoms and no visible pneumonia in the lungs. AI systems using differential white blood cell counts (WBC), or a combination of WBC and C-reactive protein (CRP), both achieved an AUC of 0.74, with a suggested optimal cut-off point at 83% sensitivity and 63% specificity. The antigen-RDT tests in this trial obtained 65% sensitivity at 98% specificity. This study is the first to validate AI tools for COVID-19 detection in an African setting. It demonstrates that screening for COVID-19 using AI with point-of-care blood tests is feasible and can operate at a higher sensitivity level than antigen testing.
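An AUC such as the 0.74 quoted above has a direct interpretation via the Mann-Whitney U statistic: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal stand-alone computation (toy scores, not the study's data):

```python
# AUC via the Mann-Whitney U statistic: the probability that a randomly
# chosen positive outranks a randomly chosen negative (ties count 0.5).
# Toy scores for illustration, not the study's data.

def auc(pos_scores, neg_scores):
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

perfect = auc([0.9, 0.8], [0.1, 0.2])   # complete separation
chance = auc([0.5, 0.5], [0.5, 0.5])    # indistinguishable groups
```

This O(n*m) form is fine for small cohorts; rank-based implementations compute the same quantity in O(n log n).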
Affiliation(s)
- Keelin Murphy
- Radboud University Medical Center, 6525 GA, Nijmegen, The Netherlands.
- Steven Schalekamp
- Radboud University Medical Center, 6525 GA, Nijmegen, The Netherlands
- Bram van Ginneken
- Radboud University Medical Center, 6525 GA, Nijmegen, The Netherlands
- Irene Ayakaka
- SolidarMed, Partnerships for Health, Maseru, Lesotho
- Alastair van Heerden
- Centre for Community Based Research, Human Sciences Research Council, Pietermaritzburg, South Africa
- SAMRC/WITS Developmental Pathways for Health Research Unit, Department of Paediatrics, School of Clinical Medicine, Faculty of Health Sciences, University of the Witwatersrand, Johannesburg, Gauteng, South Africa
- Shannon Bosman
- Centre for Community Based Research, Human Sciences Research Council, Pietermaritzburg, South Africa
- Thandanani Madonsela
- Centre for Community Based Research, Human Sciences Research Council, Pietermaritzburg, South Africa
- Lucia Gonzalez Fernandez
- Department of Infectious Diseases and Hospital Epidemiology, University Hospital Basel, Basel, Switzerland
- SolidarMed, Partnerships for Health, Lucerne, Switzerland
- Aita Signorell
- Swiss Tropical and Public Health Institute, Allschwil, Switzerland
- University of Basel, Basel, Switzerland
- Moniek Bresser
- Swiss Tropical and Public Health Institute, Allschwil, Switzerland
- University of Basel, Basel, Switzerland
- Klaus Reither
- Swiss Tropical and Public Health Institute, Allschwil, Switzerland
- University of Basel, Basel, Switzerland
- Tracy R Glass
- Swiss Tropical and Public Health Institute, Allschwil, Switzerland
- University of Basel, Basel, Switzerland
10
Li Y, Zhou T, He K, Zhou Y, Shen D. Multi-Scale Transformer Network With Edge-Aware Pre-Training for Cross-Modality MR Image Synthesis. IEEE Trans Med Imaging 2023; 42:3395-3407. [PMID: 37339020 DOI: 10.1109/tmi.2023.3288001]
Abstract
Cross-modality magnetic resonance (MR) image synthesis can be used to generate missing modalities from given ones. Existing (supervised learning) methods often require a large number of paired multi-modal data to train an effective synthesis model. However, it is often challenging to obtain sufficient paired data for supervised training; in reality, we often have a small amount of paired data but a large amount of unpaired data. To take advantage of both paired and unpaired data, in this paper we propose a Multi-scale Transformer Network (MT-Net) with edge-aware pre-training for cross-modality MR image synthesis. Specifically, an Edge-preserving Masked AutoEncoder (Edge-MAE) is first pre-trained in a self-supervised manner to simultaneously perform 1) image imputation for randomly masked patches in each image and 2) whole edge map estimation, which effectively learns both contextual and structural information. In addition, a novel patch-wise loss is proposed to enhance the performance of Edge-MAE by treating masked patches differently according to the difficulty of their respective imputations. Based on this pre-training, in the subsequent fine-tuning stage, a Dual-scale Selective Fusion (DSF) module is designed (in our MT-Net) to synthesize missing-modality images by integrating multi-scale features extracted from the encoder of the pre-trained Edge-MAE. Furthermore, this pre-trained encoder is also employed to extract high-level features from the synthesized image and the corresponding ground-truth image, which are required to be similar (consistent) during training. Experimental results show that our MT-Net achieves comparable performance to competing methods even when using only 70% of the available paired data. Our code will be released at https://github.com/lyhkevin/MT-Net.
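The random patch masking at the core of the masked-autoencoder pre-training described above can be sketched in a few lines; this is our own illustrative NumPy code, not the paper's implementation, and the patch size, mask ratio, and function name are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_random_patches(img, patch=4, mask_ratio=0.75):
    """Zero out a random subset of non-overlapping patches (MAE-style).

    Returns the masked image and a boolean vector marking masked patches.
    """
    h, w = img.shape
    ph, pw = h // patch, w // patch              # patch grid dimensions
    n_mask = int(ph * pw * mask_ratio)           # how many patches to hide
    idx = rng.permutation(ph * pw)[:n_mask]      # which patches to hide
    out = img.copy()
    for k in idx:
        r, c = divmod(int(k), pw)
        out[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch] = 0.0
    mask = np.zeros(ph * pw, dtype=bool)
    mask[idx] = True
    return out, mask

img = rng.random((16, 16))                       # stand-in for an MR slice
masked_img, mask = mask_random_patches(img)      # 12 of 16 patches hidden
```

During pre-training, the encoder would see `masked_img` and be trained to impute the hidden patches (and, in Edge-MAE, to estimate the whole edge map as well).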
11
Chu WT, Reza SMS, Anibal JT, Landa A, Crozier I, Bağci U, Wood BJ, Solomon J. Artificial Intelligence and Infectious Disease Imaging. J Infect Dis 2023; 228:S322-S336. [PMID: 37788501] [PMCID: PMC10547369] [DOI: 10.1093/infdis/jiad158]
Abstract
The mass production of the graphics processing unit and the coronavirus disease 2019 (COVID-19) pandemic have provided the means and the motivation, respectively, for rapid developments in artificial intelligence (AI) and medical imaging techniques. This has led to new opportunities to improve patient care but also new challenges that must be overcome before these techniques are put into practice. In particular, early AI models reported high performances but failed to perform as well on new data. However, these mistakes motivated further innovation focused on developing models that were not only accurate but also stable and generalizable to new data. The recent developments in AI in response to the COVID-19 pandemic will reap future dividends by facilitating, expediting, and informing other medical AI applications and educating the broad academic audience on the topic. Furthermore, AI research on imaging animal models of infectious diseases offers a unique problem space that can fill in evidence gaps that exist in clinical infectious disease research. Here, we aim to provide a focused assessment of the AI techniques leveraged in the infectious disease imaging research space, highlight the unique challenges, and discuss burgeoning solutions.
Affiliation(s)
- Winston T Chu
- Center for Infectious Disease Imaging, Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, Maryland, USA
- Integrated Research Facility at Fort Detrick, Division of Clinical Research, National Institute of Allergy and Infectious Diseases, National Institutes of Health, Frederick, Maryland, USA
- Syed M S Reza
- Center for Infectious Disease Imaging, Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, Maryland, USA
- James T Anibal
- Center for Interventional Oncology, Clinical Center, National Institutes of Health, Bethesda, Maryland, USA
- Adam Landa
- Center for Interventional Oncology, Clinical Center, National Institutes of Health, Bethesda, Maryland, USA
- Ian Crozier
- Clinical Monitoring Research Program Directorate, Frederick National Laboratory for Cancer Research sponsored by the National Cancer Institute, Frederick, Maryland, USA
- Ulaş Bağci
- Department of Radiology, Feinberg School of Medicine, Northwestern University, Chicago, Illinois, USA
- Bradford J Wood
- Center for Interventional Oncology, Clinical Center, National Institutes of Health, Bethesda, Maryland, USA
- Center for Interventional Oncology, National Cancer Institute, National Institutes of Health, Bethesda, Maryland, USA
- Jeffrey Solomon
- Clinical Monitoring Research Program Directorate, Frederick National Laboratory for Cancer Research sponsored by the National Cancer Institute, Frederick, Maryland, USA
12
Ghassemi N, Shoeibi A, Khodatars M, Heras J, Rahimi A, Zare A, Zhang YD, Pachori RB, Gorriz JM. Automatic diagnosis of COVID-19 from CT images using CycleGAN and transfer learning. Appl Soft Comput 2023; 144:110511. [PMID: 37346824] [PMCID: PMC10263244] [DOI: 10.1016/j.asoc.2023.110511]
Abstract
The outbreak of coronavirus disease 2019 (COVID-19) has changed the lives of most people on Earth. Given the high prevalence of this disease, its correct diagnosis, in order to quarantine patients, is of the utmost importance in fighting this pandemic. Among the various modalities used for diagnosis, medical imaging, especially computed tomography (CT) imaging, has been the focus of many previous studies due to its accuracy and availability. In addition, automation of diagnostic methods can be of great help to physicians. In this paper, a method based on pre-trained deep neural networks is presented which, by taking advantage of a cyclic generative adversarial network (CycleGAN) model for data augmentation, reaches state-of-the-art performance for the task at hand, i.e., 99.60% accuracy. To evaluate the method, a dataset containing 3163 images from 189 patients was collected and labeled by physicians. Unlike prior datasets, normal data were collected from people suspected of having COVID-19 rather than from patients with other diseases, and this database is made publicly available. Moreover, the method's reliability is further evaluated with calibration metrics, and its decisions are interpreted by Grad-CAM to highlight suspicious regions as an additional output and make them trustworthy and explainable.
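The calibration evaluation mentioned above is commonly summarized with the expected calibration error (ECE); the following is a minimal illustrative sketch of that metric (our own code, not the paper's implementation):

```python
import numpy as np

def expected_calibration_error(conf, correct, n_bins=10):
    """ECE: bin predictions by confidence, then average the |accuracy -
    mean confidence| gap per bin, weighted by each bin's share of samples."""
    conf = np.asarray(conf, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - conf[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece

# a classifier that is 80% confident and right 80% of the time is calibrated
conf = np.full(10, 0.8)
correct = np.array([True] * 8 + [False] * 2)
ece = expected_calibration_error(conf, correct)   # ~0.0 (well calibrated)
```

A well-calibrated model yields an ECE near zero, while a model that is systematically over- or under-confident accumulates a large weighted gap.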
Affiliation(s)
- Navid Ghassemi
- Faculty of Electrical Engineering, FPGA Lab, K. N. Toosi University of Technology, Tehran, Iran
- Computer Engineering department, Ferdowsi University of Mashhad, Mashhad, Iran
- Afshin Shoeibi
- Faculty of Electrical Engineering, FPGA Lab, K. N. Toosi University of Technology, Tehran, Iran
- Computer Engineering department, Ferdowsi University of Mashhad, Mashhad, Iran
- Marjane Khodatars
- Department of Medical Engineering, Mashhad Branch, Islamic Azad University, Mashhad, Iran
- Jonathan Heras
- Department of Mathematics and Computer Science, University of La Rioja, La Rioja, Spain
- Alireza Rahimi
- Computer Engineering department, Ferdowsi University of Mashhad, Mashhad, Iran
- Assef Zare
- Faculty of Electrical Engineering, Gonabad Branch, Islamic Azad University, Gonabad, Iran
- Yu-Dong Zhang
- School of Informatics, University of Leicester, Leicester, LE1 7RH, UK
- Ram Bilas Pachori
- Department of Electrical Engineering, Indian Institute of Technology Indore, Indore 453552, India
- J Manuel Gorriz
- Department of Signal Theory, Networking and Communications, Universidad de Granada, Spain
- Department of Psychiatry, University of Cambridge, UK
13
Li W, Cao Y, Wang S, Wan B. Fully feature fusion based neural network for COVID-19 lesion segmentation in CT images. Biomed Signal Process Control 2023; 86:104939. [PMID: 37082352] [PMCID: PMC10083211] [DOI: 10.1016/j.bspc.2023.104939]
Abstract
Coronavirus Disease 2019 (COVID-19) has spread around the world, seriously affecting people's health. As an auxiliary diagnosis method, computed tomography (CT) images contain rich semantic information. However, the automatic segmentation of COVID-19 lesions in CT images faces several challenges, including inconsistency in the size and shape of the lesions, their high variability, and the low contrast in pixel values between a lesion and the surrounding normal tissue. Therefore, this paper proposes a Fully Feature Fusion Based Neural Network for COVID-19 Lesion Segmentation in CT Images (F3-Net). F3-Net uses an encoder-decoder architecture. In F3-Net, the Multiple Scale Module (MSM) senses features of different scales, and the Dense Path Module (DPM) eliminates the semantic gap between features. The Attention Fusion Module (AFM) is an attention module that better fuses the multiple features. Furthermore, we propose an improved loss function, Loss_Covid-BCE, that pays more attention to the lesions based on prior knowledge of the distribution of COVID-19 lesions in the lungs. Finally, we verified the superior performance of F3-Net on a COVID-19 segmentation dataset; experiments demonstrate that the proposed model segments COVID-19 lesions in CT images more accurately than state-of-the-art benchmarks.
Affiliation(s)
- Wei Li
- Key Laboratory of Intelligent Computing in Medical Image (MIIC), Northeastern University, Ministry of Education, Shenyang, China
- Yangyong Cao
- School of Computer Science and Engineering, Northeastern University, Shenyang, China
- Shanshan Wang
- School of Computer Science and Engineering, Northeastern University, Shenyang, China
- Bolun Wan
- School of Computer Science and Engineering, Northeastern University, Shenyang, China
14
Reza SMS, Chu WT, Homayounieh F, Blain M, Firouzabadi FD, Anari PY, Lee JH, Worwa G, Finch CL, Kuhn JH, Malayeri A, Crozier I, Wood BJ, Feuerstein IM, Solomon J. Deep-Learning-Based Whole-Lung and Lung-Lesion Quantification Despite Inconsistent Ground Truth: Application to Computerized Tomography in SARS-CoV-2 Nonhuman Primate Models. Acad Radiol 2023; 30:2037-2045. [PMID: 36966070] [PMCID: PMC9968618] [DOI: 10.1016/j.acra.2023.02.027]
Abstract
RATIONALE AND OBJECTIVES Animal modeling of infectious diseases such as coronavirus disease 2019 (COVID-19) is important for exploration of natural history, understanding of pathogenesis, and evaluation of countermeasures. Preclinical studies enable rigorous control of experimental conditions as well as pre-exposure baseline and longitudinal measurements, including medical imaging, that are often unavailable in the clinical research setting. Computerized tomography (CT) imaging provides important diagnostic, prognostic, and disease characterization to clinicians and clinical researchers. In that context, automated deep-learning systems for the analysis of CT imaging have been broadly proposed, but their practical utility has been limited. Manual outlining of the ground truth (i.e., lung-lesions) requires accurate distinctions between abnormal and normal tissues that often have vague boundaries and is subject to reader heterogeneity in interpretation. Indeed, this subjectivity is demonstrated as wide inconsistency in manual outlines among experts and from the same expert. The application of deep-learning data-science tools has been less well-evaluated in the preclinical setting, including in nonhuman primate (NHP) models of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection/COVID-19, in which the translation of human-derived deep-learning tools is challenging. The automated segmentation of the whole lung and lung lesions provides a potentially standardized and automated method to detect and quantify disease. MATERIALS AND METHODS We used deep-learning-based quantification of the whole lung and lung lesions on CT scans of NHPs exposed to SARS-CoV-2. We proposed a novel multi-model ensemble technique to address the inconsistency in the ground truths for deep-learning-based automated segmentation of the whole lung and lung lesions. 
Multiple models were obtained by training the convolutional neural network (CNN) on different subsets of the training data instead of having a single model using the entire training dataset. Moreover, we employed a feature pyramid network (FPN), a CNN that provides predictions at different resolution levels, enabling the network to predict objects with wide size variations. RESULTS We achieved an average of 99.4 and 60.2% Dice coefficients for whole-lung and lung-lesion segmentation, respectively. The proposed multi-model FPN outperformed well-accepted methods U-Net (50.5%), V-Net (54.5%), and Inception (53.4%) for the challenging lesion-segmentation task. We show the application of segmentation outputs for longitudinal quantification of lung disease in SARS-CoV-2-exposed and mock-exposed NHPs. CONCLUSION Deep-learning methods should be optimally characterized for and targeted specifically to preclinical research needs in terms of impact, automation, and dynamic quantification independently from purely clinical applications.
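The Dice coefficients reported above measure voxel overlap between a predicted mask and a reference mask; a minimal NumPy sketch of the metric (illustrative only, not the authors' pipeline):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# toy 2D "lesion" masks
a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1   # 4 pixels
b = np.zeros((4, 4), dtype=int); b[1:3, 2:4] = 1   # 4 pixels, 2 overlapping
print(dice_coefficient(a, b))  # 2*2/(4+4) → 0.5
```

A score of 1.0 means perfect overlap and 0.0 no overlap, which is why a 99.4% whole-lung score is far easier to reach than 60.2% on small, ambiguous lesions.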
Affiliation(s)
- Syed M S Reza
- Center for Infectious Disease Imaging, Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, Maryland
- Winston T Chu
- Center for Infectious Disease Imaging, Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, Maryland
- Fatemeh Homayounieh
- Center for Infectious Disease Imaging, Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, Maryland
- Maxim Blain
- Center for Interventional Oncology, Radiology and Imaging Sciences, NIH Clinical Center and National Cancer Institute, Center for Cancer Research, National Institutes of Health, Bethesda, Maryland
- Fatemeh D Firouzabadi
- Center for Infectious Disease Imaging, Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, Maryland
- Pouria Y Anari
- Center for Infectious Disease Imaging, Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, Maryland
- Ji Hyun Lee
- Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, Maryland
- Gabriella Worwa
- Integrated Research Facility at Fort Detrick, Division of Clinical Research, National Institute of Allergy and Infectious Diseases, National Institutes of Health, Frederick, Maryland
- Courtney L Finch
- Integrated Research Facility at Fort Detrick, Division of Clinical Research, National Institute of Allergy and Infectious Diseases, National Institutes of Health, Frederick, Maryland
- Jens H Kuhn
- Integrated Research Facility at Fort Detrick, Division of Clinical Research, National Institute of Allergy and Infectious Diseases, National Institutes of Health, Frederick, Maryland
- Ashkan Malayeri
- Center for Infectious Disease Imaging, Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, Maryland
- Ian Crozier
- Clinical Monitoring Research Program Directorate, Frederick National Laboratory for Cancer Research, Frederick, Maryland
- Bradford J Wood
- Center for Interventional Oncology, Radiology and Imaging Sciences, NIH Clinical Center and National Cancer Institute, Center for Cancer Research, National Institutes of Health, Bethesda, Maryland
- Irwin M Feuerstein
- Integrated Research Facility at Fort Detrick, Division of Clinical Research, National Institute of Allergy and Infectious Diseases, National Institutes of Health, Frederick, Maryland
- Jeffrey Solomon
- Clinical Monitoring Research Program Directorate, Frederick National Laboratory for Cancer Research, Frederick, Maryland
15
Santosh KC, GhoshRoy D, Nakarmi S. A Systematic Review on Deep Structured Learning for COVID-19 Screening Using Chest CT from 2020 to 2022. Healthcare (Basel) 2023; 11:2388. [PMID: 37685422] [PMCID: PMC10486542] [DOI: 10.3390/healthcare11172388]
Abstract
The emergence of the COVID-19 pandemic in Wuhan in 2019 led to the discovery of a novel coronavirus. The World Health Organization (WHO) designated it as a global pandemic on 11 March 2020 due to its rapid and widespread transmission. Its impact has had profound implications, particularly in the realm of public health. Extensive scientific endeavors have been directed towards devising effective treatment strategies and vaccines. Within the healthcare and medical imaging domain, the application of artificial intelligence (AI) has brought significant advantages. This study delves into peer-reviewed research articles spanning the years 2020 to 2022, focusing on AI-driven methodologies for the analysis and screening of COVID-19 through chest CT scan data. We assess the efficacy of deep learning algorithms in facilitating decision making processes. Our exploration encompasses various facets, including data collection, systematic contributions, emerging techniques, and encountered challenges. However, the comparison of outcomes between 2020 and 2022 proves intricate due to shifts in dataset magnitudes over time. The initiatives aimed at developing AI-powered tools for the detection, localization, and segmentation of COVID-19 cases are primarily centered on educational and training contexts. We deliberate on their merits and constraints, particularly in the context of necessitating cross-population train/test models. Our analysis encompassed a review of 231 research publications, bolstered by a meta-analysis employing search keywords (COVID-19 OR Coronavirus) AND chest CT AND (deep learning OR artificial intelligence OR medical imaging) on both the PubMed Central Repository and Web of Science platforms.
Affiliation(s)
- KC Santosh
- 2AI: Applied Artificial Intelligence Research Lab, Vermillion, SD 57069, USA
- Debasmita GhoshRoy
- School of Automation, Banasthali Vidyapith, Tonk 304022, Rajasthan, India
- Suprim Nakarmi
- Department of Computer Science, University of South Dakota, Vermillion, SD 57069, USA
16
Zhang Y, Teng Q, He X, Niu T, Zhang L, Liu Y, Ren C. Attention-based 3D CNN with Multi-layer Features for Alzheimer's Disease Diagnosis using Brain Images. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. [PMID: 38083225] [DOI: 10.1109/embc40787.2023.10340536]
Abstract
Structural MRI (sMRI) and PET imaging play an important role in the diagnosis of Alzheimer's disease (AD), showing morphological changes and glucose metabolism changes in the brain, respectively. The manifestations in the brain images of some cognitive impairment patients are relatively inconspicuous; for example, accurate diagnosis through sMRI remains difficult in clinical practice. With the emergence of deep learning, the convolutional neural network (CNN) has become a valuable method in AD-aided diagnosis, but some CNN methods cannot effectively learn the features of brain images, so the diagnosis of AD still presents challenges. In this work, we propose an end-to-end 3D CNN framework for AD diagnosis based on ResNet, which integrates multi-layer features obtained under the effect of an attention mechanism to better capture subtle differences in brain images. The attention maps show that our model can focus on key brain regions related to the diagnosis. Our method was verified in ablation experiments with two modality images on 792 subjects from the ADNI database, where AD diagnostic accuracies of 89.71% and 91.18% were achieved based on sMRI and PET, respectively, outperforming some state-of-the-art methods.
17
Wu H, Huang X, Guo X, Wen Z, Qin J. Cross-Image Dependency Modeling for Breast Ultrasound Segmentation. IEEE Trans Med Imaging 2023; 42:1619-1631. [PMID: 37018315] [DOI: 10.1109/tmi.2022.3233648]
Abstract
We present a novel deep network (namely BUSSeg) equipped with both within- and cross-image long-range dependency modeling for automated lesions segmentation from breast ultrasound images, which is a quite daunting task due to (1) the large variation of breast lesions, (2) the ambiguous lesion boundaries, and (3) the existence of speckle noise and artifacts in ultrasound images. Our work is motivated by the fact that most existing methods only focus on modeling the within-image dependencies while neglecting the cross-image dependencies, which are essential for this task under limited training data and noise. We first propose a novel cross-image dependency module (CDM) with a cross-image contextual modeling scheme and a cross-image dependency loss (CDL) to capture more consistent feature expression and alleviate noise interference. Compared with existing cross-image methods, the proposed CDM has two merits. First, we utilize more complete spatial features instead of commonly used discrete pixel vectors to capture the semantic dependencies between images, mitigating the negative effects of speckle noise and making the acquired features more representative. Second, the proposed CDM includes both intra- and inter-class contextual modeling rather than just extracting homogeneous contextual dependencies. Furthermore, we develop a parallel bi-encoder architecture (PBA) to tame a Transformer and a convolutional neural network to enhance BUSSeg's capability in capturing within-image long-range dependencies and hence offer richer features for CDM. We conducted extensive experiments on two representative public breast ultrasound datasets, and the results demonstrate that the proposed BUSSeg consistently outperforms state-of-the-art approaches in most metrics.
18
Guetari R, Ayari H, Sakly H. Computer-aided diagnosis systems: a comparative study of classical machine learning versus deep learning-based approaches. Knowl Inf Syst 2023; 65:1-41. [PMID: 37361377] [PMCID: PMC10205571] [DOI: 10.1007/s10115-023-01894-7]
Abstract
The diagnostic phase of the treatment process is essential for patient guidance and follow-up. The accuracy and effectiveness of this phase can determine the life or death of a patient. For the same symptoms, different doctors may come up with different diagnoses, whose treatments may, instead of curing a patient, be fatal. Machine learning (ML) brings new solutions to healthcare professionals to save time and optimize the appropriate diagnosis. ML is a data analysis method that automates the creation of analytical models and promotes predictive data analysis. There are several ML models and algorithms that rely on features extracted from, for example, a patient's medical images to indicate whether a tumor is benign or malignant. The models differ in the way they operate and in the method used to extract the discriminative features of the tumor. In this article, we review different ML models for tumor classification and COVID-19 infection to evaluate these works. The computer-aided diagnosis (CAD) systems that we refer to as classical are based on accurate feature identification, usually performed manually or with other ML techniques that are not involved in classification. Deep learning-based CAD systems automatically perform the identification and extraction of discriminative features. The results show that the two types of CAD systems have quite similar performance, but the choice of one or the other depends on the dataset: manual feature extraction is necessary when the dataset is small; otherwise, deep learning is used.
Affiliation(s)
- Ramzi Guetari
- SERCOM Laboratory, Polytechnic School of Tunisia, University of Carthage, PO Box 743, La Marsa, 2078 Tunisia
- Helmi Ayari
- SERCOM Laboratory, Polytechnic School of Tunisia, University of Carthage, PO Box 743, La Marsa, 2078 Tunisia
- Houneida Sakly
- RIADI Laboratory, National School of Computer Sciences, University of Manouba, Manouba, 2010 Tunisia
19
Wang X, Cheng L, Zhang D, Liu Z, Jiang L. Broad learning solution for rapid diagnosis of COVID-19. Biomed Signal Process Control 2023; 83:104724. [PMID: 36811035] [PMCID: PMC9935280] [DOI: 10.1016/j.bspc.2023.104724]
Abstract
COVID-19 has put all of humanity in a health dilemma as it spreads rapidly. For many infectious diseases, delayed detection results lead to the spread of infection and an increase in healthcare costs. COVID-19 diagnostic methods rely on large amounts of redundant labeled data and time-consuming training processes to obtain satisfactory results. However, as COVID-19 is a new epidemic, obtaining large clinical datasets is still challenging, which inhibits the training of deep models, and a model that can rapidly diagnose COVID-19 at all stages has still not been proposed. To address these limitations, we combine feature attention and broad learning to propose a diagnostic system (FA-BLS) for COVID-19 pulmonary infection, which introduces a broad learning structure to address the slow diagnosis speed of existing deep learning methods. In our network, transfer learning is performed with ResNet50 convolutional modules with fixed weights to extract image features, and an attention mechanism is used to enhance feature representation. After that, feature nodes and enhancement nodes are generated by broad learning with random weights to adaptively select features for diagnosis. Finally, three publicly accessible datasets were used to evaluate our optimization model. The FA-BLS model had a 26-130 times faster training speed than deep learning with a similar level of accuracy, achieving fast and accurate diagnosis and effective isolation of COVID-19; the proposed method also opens up a new approach for other types of chest CT image recognition problems.
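The broad-learning step described above (random-weight feature and enhancement nodes with a single closed-form readout, rather than iterative backpropagation) can be sketched as follows; the node counts, function names, and toy regression task are our own illustrative choices, not the FA-BLS implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def bls_fit(X, Y, n_feature=40, n_enhance=60, reg=1e-6):
    """Broad Learning System sketch: random feature nodes plus tanh
    enhancement nodes, trained by one closed-form ridge regression."""
    Wf = rng.standard_normal((X.shape[1], n_feature)) / np.sqrt(X.shape[1])
    Z = X @ Wf                                   # feature nodes
    We = rng.standard_normal((n_feature, n_enhance)) / np.sqrt(n_feature)
    A = np.hstack([Z, np.tanh(Z @ We)])          # feature + enhancement nodes
    # single ridge-regression solve for the output weights (no backprop)
    W = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ Y)
    return Wf, We, W

def bls_predict(X, Wf, We, W):
    Z = X @ Wf
    return np.hstack([Z, np.tanh(Z @ We)]) @ W

# toy regression target that the random feature nodes can represent exactly
X = rng.standard_normal((200, 5))
Y = X @ rng.standard_normal((5, 2))
params = bls_fit(X, Y)
err = np.abs(bls_predict(X, *params) - Y).max()  # near zero on this toy task
```

Because training reduces to one linear solve, this family of models avoids the long gradient-descent schedules of deep networks, which is the speed advantage the abstract reports.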
Affiliation(s)
- Xiaowei Wang
- School of Physical Science and Technology, Shenyang Normal University, Shenyang, 110034, China
- Liying Cheng
- School of Physical Science and Technology, Shenyang Normal University, Shenyang, 110034, China
- Dan Zhang
- Navigation College, Dalian Maritime University, Dalian, 116026, China
- Zuchen Liu
- School of Physical Science and Technology, Shenyang Normal University, Shenyang, 110034, China
- Longtao Jiang
- School of Physical Science and Technology, Shenyang Normal University, Shenyang, 110034, China
20
Liu Y, Chen B, Zhang Z, Yu H, Ru S, Chen X, Lu G. Self-paced Multi-view Learning for CT-based severity assessment of COVID-19. Biomed Signal Process Control 2023; 83:104672. [PMID: 36777556] [PMCID: PMC9905104] [DOI: 10.1016/j.bspc.2023.104672]
Abstract
Prior studies on the task of severity assessment of COVID-19 (SA-COVID) usually suffer from domain-specific cognitive deficits. They mainly focus on visual cues based on single cognitive functions but fail to reconcile valuable information from other alternative views. Inspired by the cognitive process of radiologists, this paper shifts naturally from single-symptom measurements to a multi-view analysis and proposes a novel Self-paced Multi-view Learning (SPML) framework for automated SA-COVID. Specifically, the proposed SPML framework first comprehensively aggregates multi-view contexts of lung infection with different measure paradigms, i.e., a Global Feature Branch, a Texture Feature Branch, and a Volume Feature Branch. In this way, multiple-perspective clues are taken into account to reflect the most essential pathological manifestations in CT images. To alleviate small-sample learning problems, we also introduce a self-paced learning strategy that increases the characterization capabilities of training samples by learning from simple to complex. In contrast to traditional batch-wise learning, a purely self-paced approach can further guarantee the efficiency and accuracy of SPML when dealing with small and biased samples. Furthermore, we construct a well-established SA-COVID dataset that contains 300 CT images with fine annotations. Extensive experiments on this dataset demonstrate that SPML consistently outperforms state-of-the-art baselines. The SA-COVID dataset is publicly released at https://github.com/YishuLiu/SA-COVID.
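The easy-to-hard scheduling at the heart of self-paced learning can be illustrated with ordinary least squares: each round keeps only the samples whose current loss falls below a threshold, refits, then relaxes the threshold. The threshold schedule and toy data below are our own illustrative choices, not the SPML implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

def self_paced_linear_fit(X, y, lam0=1.0, growth=1.5, iters=5):
    """Self-paced least squares: keep samples with loss below lam ("easy"
    samples), refit on them, then grow lam to admit harder samples."""
    w = np.linalg.lstsq(X, y, rcond=None)[0]     # initial fit on everything
    lam = lam0
    for _ in range(iters):
        loss = (X @ w - y) ** 2
        keep = loss < lam                        # easy samples this round
        if keep.sum() >= X.shape[1]:             # need enough rows to refit
            w = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
        lam *= growth                            # admit harder samples next
    return w

# toy data: clean linear trend y = 1 + 2t, plus a few gross outliers
X = np.c_[np.ones(50), np.linspace(0, 1, 50)]
y = X @ np.array([1.0, 2.0]) + 0.01 * rng.standard_normal(50)
y[:3] += 8.0                                     # corrupt three samples
w = self_paced_linear_fit(X, y)                  # recovers ~[1.0, 2.0]
```

On this toy problem the gross outliers never fall below the loss threshold, so the final fit is driven by the clean "easy" samples, mirroring how self-paced learning mitigates small, biased training sets.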
Affiliation(s)
- Yishu Liu
- Harbin Institute of Technology, Shenzhen, 518055, China
- Bingzhi Chen
- South China Normal University, Guangzhou, 510631, China
- Zheng Zhang
- Harbin Institute of Technology, Shenzhen, 518055, China
- Hongbing Yu
- Nanshan District Chronic Disease Prevention and Control Hospital, Shenzhen, 518055, China
- Shouhang Ru
- Shenzhen Second People's Hospital, Shenzhen, 518000, China
- Xiaosheng Chen
- Shenzhen Second People's Hospital, Shenzhen, 518000, China
- Guangming Lu
- Harbin Institute of Technology, Shenzhen, 518055, China
21
Wu J, Xia Y, Wang X, Wei Y, Liu A, Innanje A, Zheng M, Chen L, Shi J, Wang L, Zhan Y, Zhou XS, Xue Z, Shi F, Shen D. uRP: An integrated research platform for one-stop analysis of medical images. Front Radiol 2023; 3:1153784. [PMID: 37492386] [PMCID: PMC10365282] [DOI: 10.3389/fradi.2023.1153784]
Abstract
Introduction: Medical image analysis is of tremendous importance for clinical diagnosis, treatment planning, and prognosis assessment. However, the image analysis process usually involves multiple modality-specific software packages and relies on rigorous manual operations, which is time-consuming and potentially poorly reproducible. Methods: We present an integrated platform, the uAI Research Portal (uRP), to achieve one-stop analysis of multimodal images such as CT, MRI, and PET for clinical research applications. The proposed uRP adopts a modularized architecture to be multifunctional, extensible, and customizable. Results and Discussion: The uRP offers three advantages: 1) it spans a wealth of image-processing algorithms, including semi-automatic delineation, automatic segmentation, registration, classification, quantitative analysis, and image visualization, to realize a one-stop analytic pipeline; 2) it integrates a variety of functional modules that can be directly applied, combined, or customized for specific application domains, such as brain, pneumonia, and knee-joint analyses; 3) it enables full-stack analysis of one disease, including diagnosis, treatment planning, and prognosis assessment, as well as full-spectrum coverage of multiple disease applications. With the continuous development and inclusion of advanced algorithms, we expect this platform to greatly simplify the clinical scientific research process and promote more and better discoveries.
Affiliation(s)
- Jiaojiao Wu, Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Yuwei Xia, Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Xuechun Wang, Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Ying Wei, Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Aie Liu, Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Arun Innanje, Department of Research and Development, United Imaging Intelligence Co., Ltd., Cambridge, MA, United States
- Meng Zheng, Department of Research and Development, United Imaging Intelligence Co., Ltd., Cambridge, MA, United States
- Lei Chen, Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Jing Shi, Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Liye Wang, Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Yiqiang Zhan, Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Xiang Sean Zhou, Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Zhong Xue, Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Feng Shi, Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Dinggang Shen, Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China; School of Biomedical Engineering, ShanghaiTech University, Shanghai, China; Shanghai Clinical Research and Trial Center, Shanghai, China
22
Liu R, Wang T, Li H, Zhang P, Li J, Yang X, Shen D, Sheng B. TMM-Nets: Transferred Multi- to Mono-Modal Generation for Lupus Retinopathy Diagnosis. IEEE Transactions on Medical Imaging 2023; 42:1083-1094. [PMID: 36409801 DOI: 10.1109/tmi.2022.3223683] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 06/16/2023]
Abstract
Rare diseases, which are severely underrepresented in basic and clinical research, can particularly benefit from machine learning techniques. However, current learning-based approaches usually focus on either mono-modal image data or matched multi-modal data, whereas the diagnosis of rare diseases necessitates the aggregation of unstructured and unmatched multi-modal image data due to their rare and diverse nature. In this study, we therefore propose diagnosis-guided multi-to-mono modal generation networks (TMM-Nets) along with training and testing procedures. TMM-Nets can transfer data from multiple sources to a single modality for diagnostic data structurization. To demonstrate their potential in the context of rare diseases, TMM-Nets were deployed to diagnose lupus retinopathy (LR-SLE), leveraging unmatched regular and ultra-wide-field fundus images for transfer learning. The TMM-Nets encoded the transfer learning from diabetic retinopathy to LR-SLE based on the similarity of the fundus lesions. In addition, a lesion-aware multi-scale attention mechanism was developed for clinical alerts, enabling TMM-Nets not only to inform patient care, but also to provide insights consistent with those of clinicians. An adversarial strategy was also developed to refine multi- to mono-modal image generation based on diagnostic results and the data distribution to enhance the data augmentation performance. Compared to the baseline model, the TMM-Nets showed 35.19% and 33.56% F1-score improvements on the test and external validation sets, respectively. In addition, the TMM-Nets can be used to develop diagnostic models for other rare diseases.
23
Wu Y, Qi Q, Qi S, Yang L, Wang H, Yu H, Li J, Wang G, Zhang P, Liang Z, Chen R. Classification of COVID-19 from community-acquired pneumonia: Boosting the performance with capsule network and maximum intensity projection image of CT scans. Comput Biol Med 2023; 154:106567. [PMID: 36738705 PMCID: PMC9869624 DOI: 10.1016/j.compbiomed.2023.106567] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 10/30/2022] [Revised: 12/30/2022] [Accepted: 01/22/2023] [Indexed: 01/24/2023]
Abstract
BACKGROUND The coronavirus disease 2019 (COVID-19) and community-acquired pneumonia (CAP) present a high degree of similarity in chest computed tomography (CT) images. Therefore, a procedure for accurately and automatically distinguishing between them is crucial. METHODS A deep learning method for distinguishing COVID-19 from CAP is developed using maximum intensity projection (MIP) images from CT scans. LinkNet is employed for lung segmentation of chest CT images. MIP images are produced by projecting the maximum gray value of the intrapulmonary CT voxels. The MIP images are input into a capsule network for patient-level prediction and diagnosis of COVID-19. The network is trained using 333 CT scans (168 COVID-19/165 CAP) and validated on three external datasets containing 3581 CT scans (2110 COVID-19/1471 CAP). RESULTS LinkNet achieves the highest Dice coefficient of 0.983 for lung segmentation. For the classification of COVID-19 and CAP, the capsule network with the DenseNet-121 feature extractor outperforms ResNet-50 and Inception-V3, achieving an accuracy of 0.970 on the training dataset. Without MIP or the capsule network, the accuracy decreases to 0.857 and 0.818, respectively. Accuracy scores of 0.961, 0.997, and 0.949 are achieved on the external validation datasets. The proposed method has higher or comparable sensitivity compared with ten state-of-the-art methods. CONCLUSIONS The proposed method illustrates the feasibility of applying MIP images from CT scans to distinguish COVID-19 from CAP using capsule networks. MIP images provide conspicuous benefits when exploiting deep learning to detect COVID-19 lesions from CT scans, and the capsule network improves COVID-19 diagnosis.
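The MIP construction in the Methods reduces to an axis-wise maximum over the lung-masked volume. A minimal NumPy sketch, using a tiny synthetic array as a stand-in for masked Hounsfield values (the paper's actual preprocessing and LinkNet masking are not reproduced here):

```python
import numpy as np

def maximum_intensity_projection(volume, axis=0):
    """Collapse a 3D CT volume (z, y, x) into a 2D image by keeping
    the maximum voxel value along the chosen axis."""
    return np.asarray(volume).max(axis=axis)

# 2x2x2 toy "volume": the MIP along z keeps the brighter voxel
# of each (y, x) column.
vol = np.array([[[-1000, 40], [30, -500]],
                [[20, -900], [-200, 60]]])
print(maximum_intensity_projection(vol, axis=0))
# [[20 40]
#  [30 60]]
```

In practice the non-lung voxels would first be set to a very low HU value so they never win the maximum.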
Affiliation(s)
- Yanan Wu, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China
- Qianqian Qi, Research Center for Healthcare Data Science, Zhejiang Lab, Hangzhou, China
- Shouliang Qi, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China
- Liming Yang, Department of Radiology, The Affiliated Hospital of Guizhou Medical University, Guiyang, China
- Hanlin Wang, Department of Radiology, General Hospital of the Yangtze River Shipping, Wuhan, China
- Hui Yu, General Practice Center, The Seventh Affiliated Hospital, Southern Medical University, Guangzhou, China
- Jianpeng Li, Department of Radiology, Affiliated Dongguan Hospital, Southern Medical University, Dongguan, China
- Gang Wang, Department of Radiology, Affiliated Dongguan Hospital, Southern Medical University, Dongguan, China
- Ping Zhang, Department of Pulmonary and Critical Care Medicine, Affiliated Dongguan Hospital, Southern Medical University, Dongguan, China
- Zhenyu Liang, State Key Laboratory of Respiratory Disease, National Clinical Research Center for Respiratory Disease, Guangzhou Institute of Respiratory Health, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- Rongchang Chen, Key Laboratory of Respiratory Disease of Shenzhen, Shenzhen Institute of Respiratory Disease, Shenzhen People's Hospital (Second Affiliated Hospital of Jinan University, First Affiliated Hospital of South University of Science and Technology of China), Shenzhen, China
24
Malik H, Anees T, Naeem A, Naqvi RA, Loh WK. Blockchain-Federated and Deep-Learning-Based Ensembling of Capsule Network with Incremental Extreme Learning Machines for Classification of COVID-19 Using CT Scans. Bioengineering (Basel) 2023; 10:203. [PMID: 36829697 PMCID: PMC9952069 DOI: 10.3390/bioengineering10020203] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 01/16/2023] [Revised: 01/30/2023] [Accepted: 02/01/2023] [Indexed: 02/09/2023] Open Access
Abstract
Due to the rapid rate of SARS-CoV-2 dissemination, an informed and effective strategy must be employed to isolate COVID-19. When it comes to identifying COVID-19, one of the most significant obstacles researchers must overcome is the rapid propagation of the virus, in addition to the dearth of trustworthy testing models; this remains the most difficult problem for clinicians to deal with. The use of AI in image processing has made the formerly insurmountable challenge of detecting COVID-19 cases more manageable. In practice, the difficulty of sharing data between hospitals must be handled while still honoring the privacy concerns of the organizations. When training a global deep learning (DL) model, it is crucial to address fundamental concerns such as user privacy and collaborative model development. For this study, a novel framework is designed that compiles information from five different databases (several hospitals) and trains a global model using blockchain-based federated learning (FL). The data are validated through the use of blockchain technology (BCT), and FL trains the model on a global scale while maintaining the secrecy of the organizations. The proposed framework is divided into three parts. First, we provide a method of data normalization that can handle the diversity of data collected from five different sources using several computed tomography (CT) scanners. Second, to categorize COVID-19 patients, we ensemble the capsule network (CapsNet) with incremental extreme learning machines (IELMs). Third, we provide a strategy for interactively training a global model using BCT and FL while maintaining anonymity. Extensive tests employing chest CT scans, comparing the classification performance of the proposed model to that of five DL algorithms for predicting COVID-19 while protecting the privacy of data for a variety of users, were undertaken. Our findings indicate improved effectiveness in identifying COVID-19 patients, with an accuracy of 98.99%. Thus, our model provides substantial aid to medical practitioners in their diagnosis of COVID-19.
Affiliation(s)
- Hassaan Malik, Department of Computer Science, University of Management and Technology, Lahore 54000, Pakistan
- Tayyaba Anees, Department of Software Engineering, University of Management and Technology, Lahore 54000, Pakistan
- Ahmad Naeem, Department of Computer Science, University of Management and Technology, Lahore 54000, Pakistan
- Rizwan Ali Naqvi, Department of Unmanned Vehicle Engineering, Sejong University, Seoul 05006, Republic of Korea
- Woong-Kee Loh, School of Computing, Gachon University, Seongnam 13120, Republic of Korea
25
Zhuang Z, Si L, Wang S, Xuan K, Ouyang X, Zhan Y, Xue Z, Zhang L, Shen D, Yao W, Wang Q. Knee Cartilage Defect Assessment by Graph Representation and Surface Convolution. IEEE Transactions on Medical Imaging 2023; 42:368-379. [PMID: 36094985 DOI: 10.1109/tmi.2022.3206042] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 06/15/2023]
Abstract
Knee osteoarthritis (OA) is the most common form of osteoarthritis and a leading cause of disability. Cartilage defects are regarded as major manifestations of knee OA and are visible on magnetic resonance imaging (MRI). Thus, early detection and assessment of knee cartilage defects are important for protecting patients from knee OA. To this end, many attempts have been made at knee cartilage defect assessment by applying convolutional neural networks (CNNs) to knee MRI. However, the physiologic characteristics of the cartilage may hinder such efforts: the cartilage is a thin curved layer, implying that only a small portion of voxels in knee MRI can contribute to the cartilage defect assessment; heterogeneous scanning protocols further challenge the feasibility of CNNs in clinical practice; and CNN-based knee cartilage evaluation results lack interpretability. To address these challenges, we model the cartilage's structure and appearance from knee MRI as a graph representation, which is capable of handling highly diverse clinical data. Then, guided by the cartilage graph representation, we design a non-Euclidean deep learning network with a self-attention mechanism to extract local and global cartilage features and to derive the final assessment with a visualized result. Our comprehensive experiments show that the proposed method yields superior performance in knee cartilage defect assessment, plus convenient 3D visualization for interpretability.
26
Ding W, Abdel-Basset M, Hawash H, ELkomy OM. MT-nCov-Net: A Multitask Deep-Learning Framework for Efficient Diagnosis of COVID-19 Using Tomography Scans. IEEE Transactions on Cybernetics 2023; 53:1285-1298. [PMID: 34748510 DOI: 10.1109/tcyb.2021.3123173] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Indexed: 06/13/2023]
Abstract
The localization and segmentation of the novel coronavirus disease of 2019 (COVID-19) lesions from computerized tomography (CT) scans are of great significance for developing an efficient computer-aided diagnosis system. Deep learning (DL) has emerged as one of the best choices for developing such a system. However, several challenges limit the efficiency of DL approaches, including data heterogeneity, considerable variety in the shape and size of the lesions, lesion imbalance, and scarce annotation. In this article, a novel multitask regression network for segmenting COVID-19 lesions is proposed to address these challenges. We name the framework MT-nCov-Net. We formulate lesion segmentation as a multitask shape regression problem that enables the sharing of poor-, intermediate-, and high-quality features between the various tasks. A multiscale feature learning (MFL) module is presented to capture multiscale semantic information, which helps to efficiently learn small and large lesion features while reducing the semantic gap between different scale representations. In addition, a fine-grained lesion localization (FLL) module is introduced to detect infection lesions using an adaptive dual-attention mechanism. The generated location map and the fused multiscale representations are subsequently passed to the lesion regression (LR) module to segment the infection lesions. MT-nCov-Net enables learning complete lesion properties to accurately segment the COVID-19 lesion by regressing its shape. MT-nCov-Net is experimentally evaluated on two public multisource datasets, and the overall performance validates its superiority over the current cutting-edge approaches and demonstrates its effectiveness in tackling the problems facing the diagnosis of COVID-19.
27
Meng Y, Bridge J, Addison C, Wang M, Merritt C, Franks S, Mackey M, Messenger S, Sun R, Fitzmaurice T, McCann C, Li Q, Zhao Y, Zheng Y. Bilateral adaptive graph convolutional network on CT based Covid-19 diagnosis with uncertainty-aware consensus-assisted multiple instance learning. Med Image Anal 2023; 84:102722. [PMID: 36574737 PMCID: PMC9753459 DOI: 10.1016/j.media.2022.102722] [Citation(s) in RCA: 10] [Impact Index Per Article: 10.0] [Received: 04/11/2022] [Revised: 10/17/2022] [Accepted: 12/02/2022] [Indexed: 12/23/2022]
Abstract
Coronavirus disease (COVID-19) has caused a worldwide pandemic, putting millions of people's health and lives in jeopardy. Detecting infected patients early on chest computed tomography (CT) is critical in combating COVID-19. Harnessing uncertainty-aware consensus-assisted multiple instance learning (UC-MIL), we propose to diagnose COVID-19 using a new bilateral adaptive graph-based (BA-GCN) model that can use both 2D and 3D discriminative information in 3D CT volumes with an arbitrary number of slices. Given the importance of lung segmentation for this task, we have created the largest manual annotation dataset so far, with 7,768 slices from COVID-19 patients, and have used it to train a 2D segmentation model that segments the lungs from individual slices and masks them as the regions of interest for the subsequent analyses. We then used the UC-MIL model to estimate the uncertainty of each prediction and the consensus between multiple predictions on each CT slice, automatically selecting a fixed number of CT slices with reliable predictions for the subsequent model reasoning. Finally, we adaptively constructed a BA-GCN with vertices from different granularity levels (2D and 3D) to aggregate multi-level features for the final diagnosis, benefiting from the graph convolutional network's strength in tackling cross-granularity relationships. Experimental results on the three largest COVID-19 CT datasets demonstrated that our model can produce reliable and accurate COVID-19 predictions using CT volumes with any number of slices, outperforming existing approaches in learning and generalisation ability. To promote reproducible research, we have made the datasets, including the manual annotations and cleaned CT dataset, as well as the implementation code, available at https://doi.org/10.5281/zenodo.6361963.
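The slice-selection step — keeping a fixed number of slices whose predictions are reliable — can be illustrated with predictive entropy as a simple uncertainty proxy. This is a sketch of the idea only: the actual UC-MIL module also measures consensus between multiple predictions, which is omitted here.

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy of per-slice class probabilities; lower means more confident."""
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=-1)

def select_reliable_slices(slice_probs, k):
    """Indices of the k slices with the least uncertain predictions,
    mimicking the fixed-size slice-selection step."""
    return np.argsort(predictive_entropy(slice_probs))[:k]

# Four slices, two classes: slices 0 and 3 are confident, 1 and 2 are not.
probs = np.array([[0.95, 0.05], [0.55, 0.45], [0.60, 0.40], [0.02, 0.98]])
print(sorted(select_reliable_slices(probs, 2).tolist()))  # [0, 3]
```

Selecting a fixed k makes every CT volume, whatever its slice count, yield the same-sized input for the downstream graph model.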
Affiliation(s)
- Yanda Meng, Department of Eye and Vision Science, University of Liverpool, Liverpool, United Kingdom
- Joshua Bridge, Department of Eye and Vision Science, University of Liverpool, Liverpool, United Kingdom
- Cliff Addison, Advanced Research Computing, University of Liverpool, Liverpool, United Kingdom
- Manhui Wang, Advanced Research Computing, University of Liverpool, Liverpool, United Kingdom
- Stu Franks, Alces Flight Limited, Bicester, United Kingdom
- Maria Mackey, Amazon Web Services, 60 Holborn Viaduct, London, United Kingdom
- Steve Messenger, Amazon Web Services, 60 Holborn Viaduct, London, United Kingdom
- Renrong Sun, Department of Radiology, Hubei Provincial Hospital of Integrated Chinese and Western Medicine, Hubei University of Chinese Medicine, Wuhan, China
- Thomas Fitzmaurice, Adult Cystic Fibrosis Unit, Liverpool Heart and Chest Hospital NHS Foundation Trust, Liverpool, United Kingdom
- Caroline McCann, Radiology, Liverpool Heart and Chest Hospital NHS Foundation Trust, United Kingdom
- Qiang Li, The Affiliated People's Hospital of Ningbo University, Ningbo, China
- Yitian Zhao, The Affiliated People's Hospital of Ningbo University, Ningbo, China; Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Science, Ningbo, China
- Yalin Zheng, Department of Eye and Vision Science, University of Liverpool, Liverpool, United Kingdom; Liverpool Centre for Cardiovascular Science, University of Liverpool and Liverpool Heart & Chest Hospital, Liverpool, United Kingdom
28
Wen C, Liu S, Liu S, Heidari AA, Hijji M, Zarco C, Muhammad K. ACSN: Attention capsule sampling network for diagnosing COVID-19 based on chest CT scans. Comput Biol Med 2023; 153:106338. [PMID: 36640529 PMCID: PMC9678829 DOI: 10.1016/j.compbiomed.2022.106338] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 09/10/2022] [Revised: 11/08/2022] [Accepted: 11/16/2022] [Indexed: 11/23/2022]
Abstract
Automated diagnostic techniques based on computed tomography (CT) scans of the chest for the coronavirus disease (COVID-19) help physicians detect suspected cases rapidly and precisely, which is critical in providing timely medical treatment and preventing the spread of epidemic outbreaks. Existing capsule networks have played a significant role in automatic COVID-19 detection systems based on small datasets. However, extracting key slices is difficult because CT scans typically show many scattered lesion sections. In addition, existing max-pooling sampling methods cannot effectively fuse the features from multiple regions. Therefore, in this study, we propose an attention capsule sampling network (ACSN) to detect COVID-19 based on chest CT scans. A key-slice enhancement method is used to obtain critical information from a large number of slices by applying attention enhancement to key slices. Then, the lost active and background features are retained by integrating two types of sampling. The results of experiments on an open dataset of 35,000 slices show that the proposed ACSN achieves high performance compared with state-of-the-art models, exhibiting 96.3% accuracy, 98.8% sensitivity, 93.8% specificity, and 98.3% area under the receiver operating characteristic curve.
Affiliation(s)
- Cuihong Wen, College of Information Science and Engineering, Hunan Normal University, Changsha, 410081, China; State Key Laboratory for Turbulence and Complex Systems, Department of Advanced Manufacturing and Robotics, College of Engineering, Peking University, Beijing, 100871, China
- Shaowu Liu, College of Information Science and Engineering, Hunan Normal University, Changsha, 410081, China
- Shuai Liu, College of Information Science and Engineering, Hunan Normal University, Changsha, 410081, China; School of Educational Science, Hunan Normal University, Changsha, 410081, China; Key Laboratory of Big Data Research and Application for Basic Education, Hunan Normal University, Changsha, 410081, China
- Ali Asghar Heidari, School of Surveying and Geospatial Engineering, College of Engineering, University of Tehran, Tehran, 1439957131, Iran
- Mohammad Hijji, Faculty of Computers and Information Technology (FCIT), University of Tabuk, Tabuk, 47711, Saudi Arabia
- Carmen Zarco, Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI), University of Granada (UGR), Spain
- Khan Muhammad, Visual Analytics for Knowledge Laboratory (VIS2KNOW Lab), Department of Applied AI, School of Convergence, College of Computing and Informatics, Sungkyunkwan University, Seoul, 03063, South Korea
29
Wang X, Yang B, Pan X, Liu F, Zhang S. BPCN: bilateral progressive compensation network for lung infection image segmentation. Phys Med Biol 2023; 68. [PMID: 36580682 DOI: 10.1088/1361-6560/acaf21] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 08/08/2022] [Accepted: 12/29/2022] [Indexed: 12/31/2022]
Abstract
Lung infection image segmentation is a key technology for autonomous understanding of potential illness. However, current approaches usually lose low-level details, which leads to a considerable accuracy decrease for lung infection areas with varied shapes and sizes. In this paper, we propose the bilateral progressive compensation network (BPCN), which improves the accuracy of lung lesion segmentation through complementary learning of spatial and semantic features. The proposed BPCN is mainly composed of two deep branches. One branch performs multi-scale progressive fusion of main region features. The other branch is a flow-field-based adaptive body-edge aggregation operation that explicitly learns detail features of lung infection areas as a supplement to the region features. In addition, we propose a bilateral spatial-channel down-sampling method to generate a hierarchical complementary feature that avoids the loss of discriminative features caused by pooling operations. Experimental results show that our proposed network outperforms state-of-the-art segmentation methods in lung infection segmentation on two public image datasets, with or without a pseudo-label training strategy.
Affiliation(s)
- Xiaoyan Wang, Zhejiang University of Technology, Zhejiang Province, People's Republic of China
- Baoqi Yang, Zhejiang University of Technology, Zhejiang Province, People's Republic of China
- Xiang Pan, Zhejiang University of Technology, Zhejiang Province, People's Republic of China
- Fuchang Liu, Hangzhou Normal University, Zhejiang Province, People's Republic of China
- Sanyuan Zhang, Zhejiang University, Zhejiang Province, People's Republic of China
30
Hybrid intelligent model for classifying chest X-ray images of COVID-19 patients using genetic algorithm and neutrosophic logic. Soft Comput 2023; 27:3427-3442. [PMID: 34421342 PMCID: PMC8371596 DOI: 10.1007/s00500-021-06103-7] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Accepted: 07/30/2021] [Indexed: 12/23/2022]
Abstract
The highly spreading virus, COVID-19, created a huge need for an accurate and speedy diagnosis method. The famous RT-PCR test is costly and not available for many suspected cases. This article proposes a neutrosophic model to diagnose COVID-19 patients based on their chest X-ray images. The proposed model has five main phases. First, the speeded-up robust features (SURF) method is applied to each X-ray image to extract robust invariant features. Second, three sampling algorithms are applied to treat the imbalanced dataset. Third, the neutrosophic rule-based classification system is proposed to generate a set of rules based on the three neutrosophic values <T; I; F>, the degrees of truth, indeterminacy, and falsity. Fourth, a genetic algorithm is applied to select the optimal neutrosophic rules to improve the classification performance. Fifth, classification based on neutrosophic logic is performed: the testing rule matrix is constructed with no class label, and the goal of this phase is to determine the class label for each testing rule using the intersection percentage between testing and training rules. The proposed model is referred to as GNRCS. It is compared with six state-of-the-art classifiers, namely multilayer perceptron (MLP), support vector machines (SVM), linear discriminant analysis (LDA), decision tree (DT), naive Bayes (NB), and random forest classifiers (RFC), using quality measures of accuracy, precision, sensitivity, specificity, and F1-score. The results show that the proposed model is powerful for COVID-19 recognition, with high specificity, high sensitivity, and less computational complexity. Therefore, the proposed GNRCS model could be used for real-time automatic early recognition of COVID-19.
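The fifth phase labels each unlabeled testing rule by its intersection percentage with the labeled training rules. A minimal sketch of that matching step, in which rules are modeled as sets of discretized attribute-value pairs; the attribute names are hypothetical and the neutrosophic <T; I; F> rule generation itself is omitted:

```python
def rule_overlap(test_rule, train_rule):
    """Intersection percentage: fraction of the test rule's
    attribute-value pairs also present in the training rule."""
    if not test_rule:
        return 0.0
    return len(test_rule & train_rule) / len(test_rule)

def classify(test_rule, labeled_rules):
    """Assign the label of the training rule with the highest overlap."""
    rule, label = max(labeled_rules, key=lambda r: rule_overlap(test_rule, r[0]))
    return label

# Hypothetical discretized SURF-feature rules with class labels.
train = [({("f1", "high"), ("f2", "low")}, "covid"),
         ({("f1", "low"), ("f2", "low")}, "normal")]
print(classify({("f1", "high"), ("f2", "low")}, train))  # covid
```

In the full model the candidate training rules would themselves be the GA-selected optimal neutrosophic rules rather than raw feature sets.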
31
Jyoti K, Sushma S, Yadav S, Kumar P, Pachori RB, Mukherjee S. Automatic diagnosis of COVID-19 with MCA-inspired TQWT-based classification of chest X-ray images. Comput Biol Med 2023; 152:106331. [PMID: 36502692 PMCID: PMC9683525 DOI: 10.1016/j.compbiomed.2022.106331] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Received: 06/21/2022] [Revised: 11/01/2022] [Accepted: 11/14/2022] [Indexed: 11/25/2022]
Abstract
In this era of coronavirus disease 2019 (COVID-19), an accurate diagnosis method with low diagnosis time and cost can effectively help in controlling the disease spread as new variants continue to emerge. To achieve this, a two-dimensional (2D) tunable Q-wavelet transform (TQWT) based on a memristive crossbar array (MCA) is introduced in this work for the decomposition of chest X-ray images from two different datasets. TQWT has resulted in promising values of peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) at the optimum values of its parameters, namely a quality factor (Q) of 4, an oversampling rate (r) of 3, and a decomposition level (J) of 2. The MCA-based model is used to process the decomposed images for further classification with efficient storage. These images have then been used for the classification of COVID-19 and non-COVID-19 images using the ResNet50 and AlexNet convolutional neural network (CNN) models. The average accuracy values achieved for the processed chest X-ray image classification in the small and large datasets are 98.82% and 94.64%, respectively, which are higher than those of reported conventional methods based on different deep learning models. The average accuracy of COVID-19 detection via the proposed image classification method has also been achieved with less complexity, energy, power, and area consumption, along with lower estimated cost, as compared to CMOS-based technology.
Affiliation(s)
- Kumari Jyoti, Hybrid Nanodevice Research Group (HNRG), Department of Electrical Engineering, Indian Institute of Technology Indore, Madhya Pradesh, 453552, India
- Sai Sushma, Hybrid Nanodevice Research Group (HNRG), Department of Electrical Engineering, Indian Institute of Technology Indore, Madhya Pradesh, 453552, India
- Saurabh Yadav, Hybrid Nanodevice Research Group (HNRG), Centre for Advanced Electronics (CAE), Indian Institute of Technology Indore, Madhya Pradesh, 453552, India
- Pawan Kumar, Hybrid Nanodevice Research Group (HNRG), Department of Electrical Engineering, Indian Institute of Technology Indore, Madhya Pradesh, 453552, India
- Ram Bilas Pachori, Department of Electrical Engineering, Indian Institute of Technology Indore, Madhya Pradesh, 453552, India
- Shaibal Mukherjee, Hybrid Nanodevice Research Group (HNRG), Department of Electrical Engineering, Indian Institute of Technology Indore, Madhya Pradesh, 453552, India; Hybrid Nanodevice Research Group (HNRG), Centre for Advanced Electronics (CAE), Indian Institute of Technology Indore, Madhya Pradesh, 453552, India; Centre for Rural Development and Technology (CRDT), Indian Institute of Technology Indore, Madhya Pradesh, 453552, India; School of Engineering, RMIT University, Melbourne, Victoria, 3001, Australia
Collapse
|
32
|
Performance improvement in multi-label thoracic abnormality classification of chest X-rays with noisy labels. Int J Comput Assist Radiol Surg 2023; 18:181-189. [PMID: 35616775] [DOI: 10.1007/s11548-022-02684-2] [Received: 08/18/2021] [Accepted: 04/21/2022] [Indexed: 02/01/2023]
Abstract
PURPOSE This study aimed to develop a deep learning-based method for multi-label thoracic abnormality classification on frontal-view chest X-rays (CXRs). To improve classification performance, the issues of class imbalance, noisy labels, and network ensembling are addressed in the paper. METHODS The experiments were performed on a public dataset called Chest X-ray 14 (CXR14), which includes 112,120 frontal-view CXRs from 30,805 patients. We developed an ensemble learning framework to improve the classification and a noisy label detection method to identify CXRs with noisy labels. The detected CXRs were reviewed by two board-certified radiologists in a consensus fashion to evaluate the detected noisy labels. Classification was assessed on CXR14 with the area under the receiver operating characteristic curve (AUC). RESULTS The radiologists' report indicated that the detected noisy labels were highly likely to be true positives. A notable improvement over baseline classification performance was observed with the ensemble learning framework. After removing the CXRs with detected noisy labels, 8 of 14 abnormalities improved significantly on CXR14. The suggested framework achieved an AUC score of 0.827 on CXR14. CONCLUSION The methods of this study boost classification on CXRs with awareness of label noise. Extended experimental results show that each of them improves multi-label thoracic abnormality classification performance. A new state of the art is achieved in this study.
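A minimal sketch of the ensembling idea in this entry, using invented probability arrays and simple probability averaging (the paper's framework may combine its networks differently):

```python
import numpy as np

# Hypothetical sigmoid outputs from three independently trained networks for
# two CXRs across 14 abnormality labels (values invented for illustration).
rng = np.random.default_rng(0)
probs = rng.uniform(size=(3, 2, 14))  # (n_models, n_images, n_labels)

# One common ensembling form: average the per-label probabilities across
# models, then threshold each label independently for the multi-label decision.
ensemble_probs = probs.mean(axis=0)
preds = (ensemble_probs >= 0.5).astype(int)
```

Averaging tends to reduce the variance of individual networks' errors; weighted averaging or a learned combiner are common alternatives.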
33
Chen H, Jiang Y, Ko H, Loew M. A teacher-student framework with Fourier Transform augmentation for COVID-19 infection segmentation in CT images. Biomed Signal Process Control 2023; 79:104250. [PMID: 36188130] [PMCID: PMC9510070] [DOI: 10.1016/j.bspc.2022.104250] [Received: 02/07/2022] [Revised: 08/11/2022] [Accepted: 09/18/2022] [Indexed: 11/23/2022]
Abstract
Automatic segmentation of infected regions in computed tomography (CT) images is necessary for the initial diagnosis of COVID-19. Deep-learning-based methods have the potential to automate this task but require a large amount of data with pixel-level annotations. Training a deep network with annotated lung cancer CT images, which are easier to obtain, can alleviate this problem to some extent. However, this approach may suffer reduced performance when applied to unseen COVID-19 images during the testing phase, caused by differences in image intensity and object region distribution between the training and test sets. In this paper, we propose a novel unsupervised method for COVID-19 infection segmentation that aims to learn domain-invariant features from lung cancer and COVID-19 images to improve the generalization ability of the segmentation network on COVID-19 CT images. First, to address the intensity difference, we propose a novel data augmentation module based on the Fourier Transform, which transfers the annotated lung cancer data into the style of COVID-19 images. Second, to reduce the distribution difference, we design a teacher-student network to learn rotation-invariant features for segmentation. The experiments demonstrate that, even without access to annotations of the COVID-19 CT images during the training phase, the proposed network achieves state-of-the-art segmentation performance on COVID-19 infection.
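The Fourier-based augmentation described in this entry can be illustrated with a small sketch: swap the low-frequency amplitude spectrum of a source image with that of a target-style image while keeping the source phase. The band fraction `beta` and this exact mixing rule are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def fourier_style_transfer(source, target, beta=0.1):
    """Give `source` the low-frequency amplitude 'style' of `target`.

    beta is the (assumed) fraction of the spectrum treated as low-frequency.
    """
    fs, ft = np.fft.fft2(source), np.fft.fft2(target)
    amp_s, pha_s = np.abs(fs), np.angle(fs)
    amp_t = np.abs(ft)
    # Centre the zero frequency so the low-frequency band is one block.
    amp_s, amp_t = np.fft.fftshift(amp_s), np.fft.fftshift(amp_t)
    h, w = source.shape
    bh, bw = int(h * beta), int(w * beta)
    ch, cw = h // 2, w // 2
    # Replace the central (low-frequency) amplitudes with the target's.
    amp_s[ch - bh:ch + bh, cw - bw:cw + bw] = amp_t[ch - bh:ch + bh, cw - bw:cw + bw]
    # Recombine the mixed amplitude with the original phase and invert.
    mixed = np.fft.ifftshift(amp_s) * np.exp(1j * pha_s)
    return np.real(np.fft.ifft2(mixed))

rng = np.random.default_rng(0)
src = rng.uniform(size=(64, 64))  # stand-in for an annotated lung cancer slice
tgt = rng.uniform(size=(64, 64))  # stand-in for a COVID-19-style slice
stylized = fourier_style_transfer(src, tgt)
```

Because the phase spectrum, which carries most of the structural content, is left untouched, the annotations of the source image remain valid for the stylized output.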
Affiliation(s)
- Han Chen: School of Electrical Engineering, Korea University, Seoul, South Korea
- Yifan Jiang: School of Electrical Engineering, Korea University, Seoul, South Korea
- Hanseok Ko: School of Electrical Engineering, Korea University, Seoul, South Korea
- Murray Loew: Biomedical Engineering, George Washington University, Washington D.C., USA

34
Shorfuzzaman M. IoT-enabled stacked ensemble of deep neural networks for the diagnosis of COVID-19 using chest CT scans. Computing 2023; 105. [PMCID: PMC8216100] [DOI: 10.1007/s00607-021-00971-5] [Indexed: 05/15/2023]
Abstract
The ongoing COVID-19 (novel coronavirus disease 2019) pandemic has triggered a global emergency, resulting in significant casualties and negative effects on socioeconomic and healthcare systems around the world. Hence, automatic and fast screening for COVID-19 infection has become an urgent need in this pandemic. Real-time reverse transcription polymerase chain reaction (RT-PCR), the commonly used primary clinical method, is expensive and time-consuming and requires skilled health professionals. With the aid of various AI functionalities and advanced technologies, chest CT scans may thus be a viable alternative for quick and automatic screening of COVID-19. At the moment, significant advances in 5G cellular and Internet of Things (IoT) technology are finding use in various applications in the healthcare sector. This study presents an IoT-enabled deep learning-based stacking model to analyze chest CT scans for effective diagnosis of COVID-19. First, patient data are obtained using IoT devices and sent to a cloud server during the data procurement stage. Then, different fine-tuned CNN sub-models, stacked together using a meta-learner, detect COVID-19 infection from input CT scans. The proposed model is evaluated using an open-access dataset containing both COVID-19-infected and non-COVID CT images. Evaluation results show the efficacy of the proposed stacked model containing fine-tuned CNNs and a meta-learner in detecting coronavirus infections using CT scans.
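The stacking step in this entry can be sketched in a few lines: a meta-learner is trained on the sub-models' outputs rather than on the raw images. All numbers below are invented, and the tiny logistic-regression meta-learner is just one plausible choice, not the paper's configuration:

```python
import numpy as np

# Hypothetical COVID-19 probabilities from two fine-tuned CNN sub-models for
# six CT scans (1 = COVID-19, 0 = non-COVID); all values are invented.
p_model_a = np.array([0.90, 0.80, 0.20, 0.10, 0.70, 0.30])
p_model_b = np.array([0.85, 0.60, 0.30, 0.20, 0.90, 0.40])
labels = np.array([1, 1, 0, 0, 1, 0])

# The meta-learner treats the stacked sub-model outputs as its features.
X = np.column_stack([p_model_a, p_model_b])

def train_meta(X, y, lr=0.5, steps=2000):
    """Fit a tiny logistic-regression meta-learner by gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid of the logit
        grad = p - y                            # dLoss/dlogit for log-loss
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

w, b = train_meta(X, labels)
meta_pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) >= 0.5).astype(int)
```

In practice the meta-learner is fit on held-out (out-of-fold) sub-model predictions to avoid leaking the sub-models' training data into the combiner.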
Affiliation(s)
- Mohammad Shorfuzzaman: Department of Computer Science, College of Computers and Information Technology, Taif University, Taif, 21944, Saudi Arabia

35
Lung Diseases Detection Using Various Deep Learning Algorithms. Journal of Healthcare Engineering 2023; 2023:3563696. [PMID: 36776955] [PMCID: PMC9918362] [DOI: 10.1155/2023/3563696] [Received: 03/24/2022] [Revised: 08/17/2022] [Accepted: 11/24/2022] [Indexed: 02/05/2023]
Abstract
The primary objective of this proposed framework is to detect and classify various lung diseases, such as pneumonia, tuberculosis, and lung cancer, from standard X-ray images and computerized tomography (CT) scan images with the help of large datasets. We implemented three deep learning models, namely Sequential, Functional, and Transfer models, and trained them on open-source training datasets. Deep learning techniques, which extend the machine learning domain, are promising for augmenting patient treatment: CNNs trained to extract features from image datasets offer great potential in biomedical applications. Our primary aim is to validate our models as a new direction to address the problem on these datasets and then to compare their performance with other existing models. Our models reached high levels of accuracy, enabling faster detection of these diseases, whereas conventional networks perform poorly on tilted, rotated, and otherwise abnormally oriented images and have a weak learning framework. The results demonstrated that the proposed framework with a Sequential model outperforms other existing methods, with an F1 score of 98.55%, accuracy of 98.43%, and recall of 96.33% for pneumonia, and an F1 score of 97.99%, accuracy of 99.4%, and recall of 98.88% for tuberculosis. In addition, the Functional model for cancer outperformed with an accuracy of 99.9% and specificity of 99.89%, and it requires fewer trained parameters, leading to lower computational overhead and expense than existing pretrained models. In our work, we implemented a state-of-the-art CNN with various models to classify lung diseases accurately.
36
Deep Learning for Detecting COVID-19 Using Medical Images. Bioengineering (Basel, Switzerland) 2022; 10:bioengineering10010019. [PMID: 36671590] [PMCID: PMC9854504] [DOI: 10.3390/bioengineering10010019] [Received: 11/29/2022] [Accepted: 12/19/2022] [Indexed: 12/24/2022]
Abstract
The global spread of COVID-19 (the disease caused by SARS-CoV-2) is a major international public health crisis [...].
37
Li J, Wang S, Hu S, Sun Y, Wang Y, Xu P, Ye J. Class-Aware Attention Network for infectious keratitis diagnosis using corneal photographs. Comput Biol Med 2022; 151:106301. [PMID: 36403354] [DOI: 10.1016/j.compbiomed.2022.106301] [Received: 06/10/2022] [Revised: 10/18/2022] [Accepted: 11/06/2022] [Indexed: 11/11/2022]
Abstract
Infectious keratitis is one of the common ophthalmic diseases and also one of the main blinding eye diseases in China; hence, rapid and accurate diagnosis and treatment of infectious keratitis are urgently needed to prevent progression of the disease and limit the degree of corneal injury. Unfortunately, the accuracy of traditional manual diagnosis is usually unsatisfactory due to indistinguishable visual features. In this paper, we propose a novel end-to-end fully convolutional network, named Class-Aware Attention Network (CAA-Net), for automatically diagnosing infectious keratitis (normal, viral keratitis, fungal keratitis, and bacterial keratitis) using corneal photographs. In CAA-Net, a class-aware classification module is first trained to learn class-related discriminative features using separate branches for each class. Then, the learned class-aware discriminative features are fed into the main branch and fused with other feature maps using two attention strategies to assist the final multi-class classification. For the experiments, we built a new corneal photograph dataset with 1886 images from 519 patients and conducted comprehensive experiments to verify the effectiveness of our proposed method. The code is available at https://github.com/SWF-hao/CAA-Net_Pytorch.
Affiliation(s)
- Jinhao Li: School of Mechanical, Electrical and Information Engineering, Shandong University, Weihai, 264209, Shandong, China
- Shuai Wang: School of Mechanical, Electrical and Information Engineering, Shandong University, Weihai, 264209, Shandong, China; Suzhou Research Institute of Shandong University, Suzhou, 215123, Jiangsu, China
- Shaodan Hu: Eye Center, Second Affiliated Hospital of Zhejiang University, School of Medicine, Hangzhou, 310009, Zhejiang, China
- Yiming Sun: Eye Center, Second Affiliated Hospital of Zhejiang University, School of Medicine, Hangzhou, 310009, Zhejiang, China
- Yaqi Wang: College of Media Engineering, Communication University of Zhejiang, Hangzhou, 310018, Zhejiang, China
- Peifang Xu: Eye Center, Second Affiliated Hospital of Zhejiang University, School of Medicine, Hangzhou, 310009, Zhejiang, China
- Juan Ye: Eye Center, Second Affiliated Hospital of Zhejiang University, School of Medicine, Hangzhou, 310009, Zhejiang, China

38
Lee KW, Chin RKY. Diverse COVID-19 CT Image-to-Image Translation with Stacked Residual Dropout. Bioengineering (Basel) 2022; 9:698. [PMID: 36421099] [PMCID: PMC9688018] [DOI: 10.3390/bioengineering9110698] [Received: 10/14/2022] [Revised: 10/31/2022] [Accepted: 11/13/2022] [Indexed: 01/11/2024]
Abstract
Machine learning models are renowned for their high dependency on a large corpus of data in solving real-world problems, including the recent COVID-19 pandemic. In practice, data acquisition is an onerous process, especially in medical applications, due to the lack of data availability for newly emerged diseases and privacy concerns. This study introduces a data synthesis framework (sRD-GAN) that generates synthetic COVID-19 CT images using a novel stacked-residual dropout mechanism (sRD). sRD-GAN aims to alleviate the problem of data paucity by generating synthetic lung medical images that contain precise radiographic annotations. The sRD mechanism is designed using a regularization-based strategy to facilitate perceptually significant instance-level diversity without content-style attribute disentanglement. Extensive experiments show that sRD-GAN can generate exceptional perceptual realism on COVID-19 CT images, as examined by an experienced radiologist, with an outstanding Fréchet Inception Distance (FID) of 58.68 and Learned Perceptual Image Patch Similarity (LPIPS) of 0.1370 on the test set. In a benchmarking experiment, sRD-GAN shows superior performance compared with GAN, CycleGAN, and one-to-one CycleGAN. The encouraging results achieved by sRD-GAN in different clinical cases, such as community-acquired pneumonia CT images and COVID-19 X-ray images, suggest that the proposed method can be easily extended to other similar image synthesis problems.
Affiliation(s)
- Renee Ka Yin Chin: Faculty of Engineering, Universiti Malaysia Sabah, Kota Kinabalu 88400, Malaysia

39
Celard P, Iglesias EL, Sorribes-Fdez JM, Romero R, Vieira AS, Borrajo L. A survey on deep learning applied to medical images: from simple artificial neural networks to generative models. Neural Comput Appl 2022; 35:2291-2323. [PMID: 36373133] [PMCID: PMC9638354] [DOI: 10.1007/s00521-022-07953-4] [Received: 05/23/2022] [Accepted: 10/12/2022] [Indexed: 11/06/2022]
Abstract
Deep learning techniques, in particular generative models, have taken on great importance in medical image analysis. This paper surveys fundamental deep learning concepts related to medical image generation. It provides concise overviews of studies that use some of the latest state-of-the-art models of recent years, applied to medical images of different injured body areas or organs with an associated disease (e.g., brain tumors and COVID-19 lung pneumonia). The motivation for this study is to offer a comprehensive overview of artificial neural networks (NNs) and deep generative models in medical imaging, so that more groups and authors who are not familiar with deep learning may consider its use in medical work. We review the use of generative models, such as generative adversarial networks and variational autoencoders, as techniques to achieve semantic segmentation, data augmentation, and better classification algorithms, among other purposes. In addition, a collection of widely used public medical datasets containing magnetic resonance (MR) images, computed tomography (CT) scans, and common pictures is presented. Finally, we feature a summary of the current state of generative models in medical imaging, including key features, current challenges, and future research paths.
Affiliation(s)
- P. Celard, E. L. Iglesias, J. M. Sorribes-Fdez, R. Romero, A. Seara Vieira, L. Borrajo: Computer Science Department, Universidade de Vigo, Escuela Superior de Ingeniería Informática, Campus Universitario As Lagoas, 32004 Ourense, Spain; CINBIO - Biomedical Research Centre, Universidade de Vigo, Campus Universitario Lagoas-Marcosende, 36310 Vigo, Spain; SING Research Group, Galicia Sur Health Research Institute (IIS Galicia Sur), SERGAS-UVIGO, Vigo, Spain

40
Li Y, Shi X, Yang L, Pu C, Tan Q, Yang Z, Huang H. MC-GAT: multi-layer collaborative generative adversarial transformer for cholangiocarcinoma classification from hyperspectral pathological images. Biomedical Optics Express 2022; 13:5794-5812. [PMID: 36733731] [PMCID: PMC9872896] [DOI: 10.1364/boe.472106] [Received: 08/02/2022] [Revised: 09/24/2022] [Accepted: 10/01/2022] [Indexed: 06/18/2023]
Abstract
Accurate histopathological analysis is the core step in early diagnosis of cholangiocarcinoma (CCA). Compared with color pathological images, hyperspectral pathological images have the advantage of providing rich band information. Existing algorithms for hyperspectral image (HSI) classification are dominated by the convolutional neural network (CNN), which has the deficiency of distorting the spectral sequence information of HSI data. Although the vision transformer (ViT) alleviates this problem to a certain extent, the expressive power of the transformer encoder gradually decreases with an increasing number of layers, which still degrades classification performance. In addition, labeled HSI samples are limited in practical applications, which restricts the performance of these methods. To address these issues, this paper proposes a multi-layer collaborative generative adversarial transformer, termed MC-GAT, for CCA classification from hyperspectral pathological images. MC-GAT consists of two pure transformer-based neural networks: a generator and a discriminator. The generator learns the implicit probability of real samples and transforms noise sequences into band sequences, producing fake samples. These fake samples and the corresponding real samples are mixed together as input to confuse the discriminator, which increases model generalization. In the discriminator, a multi-layer collaborative transformer encoder is designed to integrate output features from different layers into collaborative features, which adaptively mines progressive relations from shallow to deep encoders and enhances the discriminating power of the discriminator. Experimental results on the Multidimensional Choledoch Datasets demonstrate that the proposed MC-GAT achieves better classification results than many state-of-the-art methods. This confirms the potential of the proposed method for aiding pathologists in CCA histopathological analysis from hyperspectral imagery.
Affiliation(s)
- Yuan Li: Key Laboratory of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing 400044, China
- Xu Shi: Key Laboratory of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing 400044, China
- Liping Yang: Key Laboratory of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing 400044, China
- Chunyu Pu: Key Laboratory of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing 400044, China
- Qijuan Tan: Department of Radiology, Chongqing University Cancer Hospital & Chongqing Cancer Institute & Chongqing Cancer Hospital, Chongqing 400030, China
- Zhengchun Yang: Department of Ultrasound, Chongqing Health Center for Women and Children, Chongqing 401147, China; Department of Ultrasound, Women and Children's Hospital of Chongqing Medical University, Chongqing 401147, China
- Hong Huang: Key Laboratory of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing 400044, China

41
Jalali Moghaddam M, Ghavipour M. Towards smart diagnostic methods for COVID-19: Review of deep learning for medical imaging. IPEM-Translation 2022; 3:100008. [PMID: 36312890] [PMCID: PMC9597575] [DOI: 10.1016/j.ipemt.2022.100008] [Received: 02/01/2022] [Revised: 10/20/2022] [Accepted: 10/24/2022] [Indexed: 11/08/2022]
Abstract
The infectious disease known as COVID-19 has spread dramatically all over the world since December 2019. Fast diagnosis and isolation of infected patients are key factors in slowing down the spread of this virus and better management of the pandemic. Although the CT and X-ray modalities are commonly used for the diagnosis of COVID-19, identifying COVID-19 patients from medical images is a time-consuming and error-prone task. Artificial intelligence has shown great potential to speed up and optimize the prognosis and diagnosis of COVID-19. Herein, we review publications on the application of deep learning (DL) techniques for the diagnosis of patients with COVID-19 using CT and X-ray chest images over the period from January 2020 to October 2021. Our review focuses solely on peer-reviewed, well-documented articles. It provides a comprehensive summary of the technical details of the models developed in these articles and discusses the challenges in the smart diagnosis of COVID-19 using DL techniques. Based on these challenges, it seems that the effectiveness of the developed models in clinical use needs to be further investigated. This review provides some recommendations to help researchers develop more accurate prediction models.
Affiliation(s)
- Marjan Jalali Moghaddam: Department of Computer Engineering and Information Technology, Amirkabir University of Technology, Tehran, Iran
- Mina Ghavipour: Department of Computer Engineering and Information Technology, Amirkabir University of Technology, Tehran, Iran

42
Sharma A, Mishra PK. Covid-MANet: Multi-task attention network for explainable diagnosis and severity assessment of COVID-19 from CXR images. Pattern Recognition 2022; 131:108826. [PMID: 35698723] [PMCID: PMC9170279] [DOI: 10.1016/j.patcog.2022.108826] [Received: 09/21/2021] [Revised: 04/24/2022] [Accepted: 06/02/2022] [Indexed: 05/17/2023]
Abstract
The devastating outbreak of Coronavirus Disease (COVID-19) cases in early 2020 led the world to face a health crisis. The exponential reproduction rate of COVID-19 can only be reduced by early and correct diagnosis of infection cases. Initial research findings reported that radiological examinations using the CT and CXR modalities successfully reduced the false negatives of the RT-PCR test. This research study aims to develop an explainable diagnosis system for the detection and infection-region quantification of COVID-19 disease. Existing research studies have successfully explored deep learning approaches with high performance measures but lacked generalization and interpretability for COVID-19 diagnosis. In this study, we address these issues with the Covid-MANet network, an automated end-to-end multi-task attention network that works on 5 classes in three stages for COVID-19 infection screening. The first stage of the Covid-MANet network localizes the attention of the model to the relevant lung region for disease recognition. The second stage differentiates COVID-19 cases from bacterial pneumonia, viral pneumonia, normal, and tuberculosis cases, respectively. To improve interpretation and explainability, three experiments were conducted to explore the most coherent and appropriate classification approach, and the multi-scale attention model MA-DenseNet201 is proposed for the classification of COVID-19 cases. The final stage of the Covid-MANet network quantifies the proportion of infection and the severity of COVID-19 in the lungs. COVID-19 cases are graded into more specific severity levels, such as mild, moderate, severe, and critical, per the score assigned by the RALE scoring system. The MA-DenseNet201 classification model outperforms eight state-of-the-art CNN models in terms of sensitivity and interpretation with the lung localization network. The COVID-19 infection segmentation by UNet with a DenseNet121 encoder achieves a dice score of 86.15%, outperforming UNet, UNet++, AttentionUNet, and R2UNet with VGG16, ResNet50, and DenseNet201 encoders. The proposed network not only classifies images based on the predicted label but also highlights the infection by segmentation/localization of model-focused regions to support explainable decisions. The MA-DenseNet201 model with a segmentation-based cropping approach achieves a maximum interpretation of 96% with a COVID-19 sensitivity of 97.75%. Finally, based on class-varied sensitivity analysis, the Covid-MANet ensemble network of MA-DenseNet201, ResNet50, and MobileNet achieves 95.05% accuracy and 98.75% COVID-19 sensitivity. The proposed model is externally validated on an unseen dataset, yielding 98.17% COVID-19 sensitivity.
Affiliation(s)
- Ajay Sharma: Department of Computer Science, Institute of Science, Banaras Hindu University, Varanasi 221005, India
- Pramod Kumar Mishra: Department of Computer Science, Institute of Science, Banaras Hindu University, Varanasi 221005, India

43
Zhang R, Wei Y, Shi F, Ren J, Zhou Q, Li W, Chen B. The diagnostic and prognostic value of radiomics and deep learning technologies for patients with solid pulmonary nodules in chest CT images. BMC Cancer 2022; 22:1118. [PMID: 36319968] [PMCID: PMC9628173] [DOI: 10.1186/s12885-022-10224-z] [Received: 07/24/2022] [Accepted: 10/17/2022] [Indexed: 12/12/2022]
Abstract
BACKGROUND Solid pulmonary nodules differ from subsolid nodules, and their diagnosis is much more challenging. We intended to evaluate the diagnostic and prognostic value of radiomics and deep learning technologies for solid pulmonary nodules. METHODS Patients with pathologically confirmed solid pulmonary nodules were retrospectively enrolled and their clinical data collected. Pre-treatment high-resolution thoracic CT scans were obtained, and the nodules were manually delineated in 3D. All patients were then randomly divided into training and testing sets at a ratio of 7:3, and convolutional neural network (CNN) models and random forest (RF) models were established. Survival analyses were performed for patients with solid adenocarcinomas. RESULTS In total, 720 solid pulmonary nodules were enrolled: 348 benign and 372 malignant. The CNN model with clinical features achieved the highest AUC [0.819, 95% confidence interval (CI): 0.760-0.877], with a sensitivity of 0.778, specificity of 0.788, and accuracy of 0.783. No significant differences were observed between the CNN and radiomics models. There were 295 solid adenocarcinomas in the survival analysis. Different disease-free survival was observed between the low-risk and high-risk groups divided according to the radiomics Rad-score. However, the groups based on deep learning signatures showed similar survival. Cox regression analysis indicated that the radiomics Rad-score (hazard ratio: 5.08, 95% CI: 2.61-9.90) was an independent predictor of recurrence. CONCLUSIONS The radiomics and deep learning models can well predict the malignancy of solid pulmonary nodules. Radiomics signatures also demonstrate prognostic value in solid adenocarcinomas.
Affiliation(s)
- Rui Zhang - Department of Pulmonary and Critical Care Medicine, West China Hospital, Sichuan University, 37 GuoXue Alley, Wuhou District, Chengdu, Sichuan Province 610041, People’s Republic of China
- Ying Wei - Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Feng Shi - Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Jing Ren - Department of Pulmonary and Critical Care Medicine, West China Hospital, Sichuan University, 37 GuoXue Alley, Wuhou District, Chengdu, Sichuan Province 610041, People’s Republic of China
- Qing Zhou - Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Weimin Li - Department of Pulmonary and Critical Care Medicine, West China Hospital, Sichuan University, 37 GuoXue Alley, Wuhou District, Chengdu, Sichuan Province 610041, People’s Republic of China
- Bojiang Chen - Department of Pulmonary and Critical Care Medicine, West China Hospital, Sichuan University, 37 GuoXue Alley, Wuhou District, Chengdu, Sichuan Province 610041, People’s Republic of China
44
Liu S, Cai T, Tang X, Zhang Y, Wang C. COVID-19 diagnosis via chest X-ray image classification based on multiscale class residual attention. Comput Biol Med 2022; 149:106065. [PMID: 36081225 PMCID: PMC9433340 DOI: 10.1016/j.compbiomed.2022.106065] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2022] [Revised: 08/07/2022] [Accepted: 08/27/2022] [Indexed: 12/11/2022]
Abstract
To detect COVID-19 effectively, a multiscale class residual attention (MCRA) network is proposed for chest X-ray (CXR) image classification. First, to overcome the data shortage and improve the robustness of our network, pixel-level image mixing of local regions is introduced to achieve data augmentation and reduce noise. Second, a multi-scale fusion strategy is adopted to extract global contextual information at different scales and enhance semantic representation. Finally, class residual attention is employed to generate spatial attention for each class, which avoids inter-class interference and enhances related features to further improve COVID-19 detection. Experimental results show that our network achieves superior diagnostic performance on the COVIDx dataset; its accuracy, PPV, sensitivity, specificity and F1-score are 97.71%, 96.76%, 96.56%, 98.96% and 96.64%, respectively. Moreover, the heat maps lend our deep model a degree of interpretability.
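The pixel-level mixing of local regions described above can be sketched as copying randomly chosen patches from one training image into another. The patch size, mixing ratio and sampling policy below are assumptions for illustration, not the paper's exact scheme:

```python
import numpy as np

def local_region_mix(img_a, img_b, patch=8, ratio=0.5, rng=None):
    """Copy randomly placed patch-sized regions of img_b into a copy of img_a.

    ratio controls roughly how many patches are mixed in, relative to the
    number of non-overlapping patch positions in the image.
    """
    rng = np.random.default_rng(rng)
    out = img_a.astype(float).copy()
    h, w = out.shape
    n_patches = int(ratio * (h // patch) * (w // patch))
    for _ in range(n_patches):
        y = rng.integers(0, h - patch + 1)
        x = rng.integers(0, w - patch + 1)
        out[y:y + patch, x:x + patch] = img_b[y:y + patch, x:x + patch]
    return out
```

With `ratio=0` the function returns the first image unchanged, which makes the augmentation easy to switch off for ablation.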
Affiliation(s)
- Shangwang Liu - College of Computer and Information Engineering, Henan Normal University, Xinxiang 453007, China; Engineering Lab of Intelligence Business & Internet of Things, Henan Province, China
- Tongbo Cai - College of Computer and Information Engineering, Henan Normal University, Xinxiang 453007, China; Engineering Lab of Intelligence Business & Internet of Things, Henan Province, China
- Xiufang Tang - College of Computer and Information Engineering, Henan Normal University, Xinxiang 453007, China; Engineering Lab of Intelligence Business & Internet of Things, Henan Province, China
- Yangyang Zhang - College of Computer and Information Engineering, Henan Normal University, Xinxiang 453007, China; Engineering Lab of Intelligence Business & Internet of Things, Henan Province, China
- Changgeng Wang - College of Computer and Information Engineering, Henan Normal University, Xinxiang 453007, China; Engineering Lab of Intelligence Business & Internet of Things, Henan Province, China
45
Karthik R, Menaka R, Hariharan M, Kathiresan GS. AI for COVID-19 Detection from Radiographs: Incisive Analysis of State of the Art Techniques, Key Challenges and Future Directions. Ing Rech Biomed 2022; 43:486-510. [PMID: 34336141 PMCID: PMC8312058 DOI: 10.1016/j.irbm.2021.07.002] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2021] [Revised: 06/14/2021] [Accepted: 07/19/2021] [Indexed: 12/24/2022]
Abstract
Background and objective In recent years, Artificial Intelligence has had an evident impact on how research addresses challenges across domains. It has proven to be a major asset, especially in the medical field, enabling time-efficient and reliable solutions. This research aims to spotlight the impact of deep learning and machine learning models on the detection of COVID-19 from medical images by reviewing the state-of-the-art approaches proposed in recent work. Methods The main focus of this study is recent developments in classification and segmentation approaches to image-based COVID-19 detection. The study reviews 140 research papers published in different academic research databases. These papers were screened and filtered based on specified criteria to acquire insights pertinent to image-based COVID-19 detection. Results The methods discussed in this review span different imaging modalities, predominantly X-rays and CT scans, used for both classification and segmentation tasks. This review categorizes and discusses the deep learning and machine learning architectures employed for these tasks according to the imaging modality used, and suggests other architectures that could yield better COVID-19 detection results. A detailed overview of emerging trends and breakthroughs in Artificial Intelligence-based COVID-19 detection is also provided. Conclusion This work concludes by stipulating the technical and non-technical challenges faced by researchers and illustrates the advantages of image-based COVID-19 detection with Artificial Intelligence techniques.
Affiliation(s)
- R Karthik - Centre for Cyber Physical Systems, Vellore Institute of Technology, Chennai, India
- R Menaka - Centre for Cyber Physical Systems, Vellore Institute of Technology, Chennai, India
- M Hariharan - School of Computing Sciences and Engineering, Vellore Institute of Technology, Chennai, India
- G S Kathiresan - School of Electronics Engineering, Vellore Institute of Technology, Chennai, India
46
Sadik F, Dastider AG, Subah MR, Mahmud T, Fattah SA. A dual-stage deep convolutional neural network for automatic diagnosis of COVID-19 and pneumonia from chest CT images. Comput Biol Med 2022; 149:105806. [PMID: 35994932 PMCID: PMC9295386 DOI: 10.1016/j.compbiomed.2022.105806] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2022] [Revised: 06/05/2022] [Accepted: 06/26/2022] [Indexed: 11/15/2022]
Abstract
In the Coronavirus disease-2019 (COVID-19) pandemic, automated diagnostic tools are urgently needed alongside traditional methods for fast and accurate diagnosis of large numbers of patients. In this paper, a deep convolutional neural network (CNN) based scheme is proposed for automated, accurate diagnosis of COVID-19 from lung computed tomography (CT) scan images. First, for automated segmentation of lung regions in a chest CT scan, a modified CNN architecture, SKICU-Net, is proposed by incorporating additional skip interconnections into the U-Net model to overcome the loss of information in dimension scaling. Next, agglomerative hierarchical clustering is deployed to eliminate CT slices without significant information. Finally, for effective feature extraction and diagnosis of COVID-19 and pneumonia from the segmented lung slices, a modified DenseNet architecture, P-DenseCOVNet, is designed in which parallel convolutional paths are introduced on top of the conventional DenseNet model to achieve better performance by overcoming the loss of positional information. Outstanding performance was achieved, with an F1 score of 0.97 in the segmentation task and an accuracy of 87.5% in diagnosing COVID-19, common pneumonia, and normal cases. Extensive experimental results and comparison with other studies show that the proposed scheme performs very satisfactorily and can serve as an effective diagnostic tool in the current pandemic.
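The slice-filtering step (agglomerative hierarchical clustering to discard slices without significant information) can be sketched with a tiny single-linkage clustering over a per-slice scalar feature, such as segmented-lung area. The feature choice, linkage, and two-cluster keep-the-higher-mean rule below are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def filter_slices(slice_features, n_clusters=2):
    """Single-linkage agglomerative clustering of per-slice features.

    Merges the closest pair of clusters until n_clusters remain, then keeps
    the indices of the cluster with the higher mean feature value (assumed
    to correspond to informative slices).
    """
    feats = np.asarray(slice_features, dtype=float)
    clusters = [[i] for i in range(len(feats))]
    while len(clusters) > n_clusters:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # single linkage: distance between closest members
                d = min(abs(feats[i] - feats[j])
                        for i in clusters[a] for j in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] += clusters.pop(b)
    keep = max(clusters, key=lambda c: feats[c].mean())
    return sorted(keep)
```

For real volumes one would use `scipy.cluster.hierarchy` or scikit-learn's `AgglomerativeClustering` instead of this quadratic toy loop.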
Affiliation(s)
- Farhan Sadik - Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology, Dhaka 1205, Bangladesh
- Ankan Ghosh Dastider - Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology, Dhaka 1205, Bangladesh
- Mohseu Rashid Subah - Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology, Dhaka 1205, Bangladesh
- Tanvir Mahmud - Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology, Dhaka 1205, Bangladesh
- Shaikh Anowarul Fattah - Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology, Dhaka 1205, Bangladesh
47
Chen J, Li Y, Guo L, Zhou X, Zhu Y, He Q, Han H, Feng Q. Machine learning techniques for CT imaging diagnosis of novel coronavirus pneumonia: a review. Neural Comput Appl 2022; 36:1-19. [PMID: 36159188 PMCID: PMC9483435 DOI: 10.1007/s00521-022-07709-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2022] [Accepted: 08/04/2022] [Indexed: 11/20/2022]
Abstract
Since 2020, novel coronavirus pneumonia has spread rapidly around the world, placing tremendous pressure on hospitals' capacity for diagnosis and treatment. Medical imaging methods, such as computed tomography (CT), play a crucial role in diagnosing and treating COVID-19. Large volumes of CT images are produced during CT-based diagnosis, and visual assessment of thousands of CT images by human readers is inefficient and time-consuming. Recently, to improve diagnostic efficiency, machine learning technology has been widely used in computer-aided diagnosis and treatment systems based on CT imaging to help doctors perform accurate analysis and provide them with effective diagnostic decision support. In this paper, we comprehensively review the machine learning methods frequently applied in CT imaging diagnosis of COVID-19, discussing machine learning-based applications across image acquisition and pre-processing, image segmentation, quantitative analysis and diagnosis, and disease follow-up and prognosis. We also discuss the limitations of current machine learning technology in the context of CT imaging computer-aided diagnosis.
Affiliation(s)
- Jingjing Chen - Zhejiang University City College, Hangzhou, China; Zhijiang College of Zhejiang University of Technology, Shaoxing, China
- Yixiao Li - Faculty of Science, Zhejiang University of Technology, Hangzhou, China
- Lingling Guo - College of Chemical Engineering, Zhejiang University of Technology, Hangzhou, China
- Xiaokang Zhou - Faculty of Data Science, Shiga University, Hikone, Japan; RIKEN Center for Advanced Intelligence Project, Tokyo, Japan
- Yihan Zhu - College of Chemical Engineering, Zhejiang University of Technology, Hangzhou, China
- Qingfeng He - School of Pharmacy, Fudan University, Shanghai, China
- Haijun Han - School of Medicine, Zhejiang University City College, Hangzhou, China
- Qilong Feng - College of Chemical Engineering, Zhejiang University of Technology, Hangzhou, China
48
Cheng J, Zhao W, Liu J, Xie X, Wu S, Liu L, Yue H, Li J, Wang J, Liu J. Automated Diagnosis of COVID-19 Using Deep Supervised Autoencoder With Multi-View Features From CT Images. IEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS 2022; 19:2723-2736. [PMID: 34351863 PMCID: PMC9647725 DOI: 10.1109/tcbb.2021.3102584] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/03/2023]
Abstract
Accurate and rapid diagnosis of coronavirus disease 2019 (COVID-19) from chest CT scans is of great importance and urgency during the worldwide outbreak. However, radiologists must distinguish COVID-19 pneumonia from other pneumonias across a large number of CT scans, which is tedious and inefficient. An efficient and accurate diagnostic tool is therefore urgently and clinically needed to help radiologists fulfill this difficult task. In this study, we propose a deep supervised autoencoder (DSAE) framework to automatically identify COVID-19 using multi-view features extracted from CT images. To fully exploit features characterizing CT images in different frequency domains, the DSAE learns a latent representation via multi-task learning. The framework is designed both to encode valuable information from different frequency features and to construct a compact class structure for separability. To achieve this, we designed a multi-task loss function consisting of a supervised loss and a reconstruction loss. Our proposed method was evaluated on a newly collected dataset of 787 subjects including COVID-19 pneumonia patients, other pneumonia patients, and normal subjects without abnormal CT findings. Extensive experimental results demonstrate that our proposed method achieves encouraging diagnostic performance and may have potential clinical application in the diagnosis of COVID-19.
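The multi-task objective described above, a supervised classification term plus a reconstruction term, can be written compactly. The weighting factor `alpha`, the choice of cross-entropy and mean-squared error, and all shapes are illustrative assumptions rather than the paper's exact formulation:

```python
import numpy as np

def dsae_loss(logits, labels, recon, target, alpha=1.0):
    """Multi-task loss: softmax cross-entropy + alpha * reconstruction MSE.

    logits: (N, num_classes) class scores
    labels: (N,) integer class labels
    recon, target: reconstruction output and its target, same shape
    """
    # numerically stable log-softmax
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce = -log_probs[np.arange(len(labels)), labels].mean()
    mse = ((recon - target) ** 2).mean()
    return ce + alpha * mse
```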
49
Manh VT, Zhou J, Jia X, Lin Z, Xu W, Mei Z, Dong Y, Yang X, Huang R, Ni D. Multi-Attribute Attention Network for Interpretable Diagnosis of Thyroid Nodules in Ultrasound Images. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2022; 69:2611-2620. [PMID: 35820014 DOI: 10.1109/tuffc.2022.3190012] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Ultrasound (US) is the primary imaging technique for the diagnosis of thyroid cancer. However, accurately identifying nodule malignancy is a challenging task that can elude less-experienced clinicians. Recently, many computer-aided diagnosis (CAD) systems have been proposed to assist this process, but most do not expose the reasoning behind their classification, which may jeopardize their credibility in practical use. To overcome this, we propose a novel deep learning (DL) framework, the multi-attribute attention network (MAA-Net), designed to mimic the clinical diagnosis process. The proposed model learns to predict nodular attributes and to infer malignancy from these clinically relevant features. A multi-attention scheme is adopted to generate customized attention that improves each task and the malignancy diagnosis. Furthermore, MAA-Net uses nodule delineations as spatial priors during training, rather than cropping the nodules with additional models or human intervention, to avoid losing context information. Validation experiments were performed on a large and challenging dataset of 4554 patients. Results show that the proposed method outperformed other state-of-the-art methods and provides interpretable predictions that may better suit clinical needs.
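The per-attribute attention idea can be illustrated with a toy forward pass: one spatial softmax per attribute pools the feature map, each pooled vector scores its attribute, and malignancy is inferred from the attribute scores. All weights, shapes, and the pooling scheme are illustrative assumptions, not the MAA-Net architecture itself:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def maa_forward(feat_map, attn_w, attr_w, mal_w):
    """Toy attribute-then-malignancy forward pass.

    feat_map: (C, H, W) feature map
    attn_w:   (K, C) one attention projection per attribute
    attr_w:   (K, C) per-attribute scoring weights
    mal_w:    (K,) weights mapping attribute scores to malignancy
    """
    C, H, W = feat_map.shape
    flat = feat_map.reshape(C, H * W)                 # (C, HW)
    scores = attn_w @ flat                            # (K, HW) attention logits
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)           # per-attribute spatial softmax
    pooled = attn @ flat.T                            # (K, C) attended features
    attrs = sigmoid((attr_w * pooled).sum(axis=1))    # per-attribute probability
    malignancy = sigmoid(mal_w @ attrs)               # inferred from attributes
    return attrs, malignancy
```

The design point this sketch captures is that malignancy is computed from the attribute predictions, so each prediction can be traced back to clinically meaningful features.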
50