1. Parikh V, Tariq A, Patel B, Banerjee I. Comparative Analysis of Fusion Strategies for Imaging and Non-imaging Data - Use-case of Hospital Discharge Prediction. AMIA Jt Summits Transl Sci Proc 2024; 2024:652-661. PMID: 38827051; PMCID: PMC11141810.
Abstract
Accurate prediction of future clinical events such as discharge from hospital can not only improve hospital resource management but also provide an indicator of a patient's clinical condition. Within the scope of this work, we perform a comparative analysis of deep learning-based fusion strategies against traditional single-source models for prediction of discharge from hospital by fusing information encoded in two diverse but relevant data modalities, i.e., chest X-ray images and tabular electronic health records (EHR). We evaluate multiple fusion strategies, including late, early, and joint fusion, in terms of their efficacy for target prediction compared with EHR-only and image-only predictive models. Results indicated the importance of merging information from the two modalities: fusion models tended to outperform single-modality models, and the joint fusion scheme was the most effective for target prediction. The joint fusion model merges the two modalities through a branched neural network that is trained jointly, end to end, to extract target-relevant information from both modalities.
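To make the joint-fusion scheme concrete, the sketch below wires an image branch and an EHR branch into one network trained end to end through a shared head. It is a minimal PyTorch illustration under assumed shapes (single-channel 224x224 images, 64 tabular features, a binary target), not the authors' implementation.

```python
# Minimal sketch of joint fusion: both branches receive gradients from the same loss.
import torch
import torch.nn as nn

class JointFusionNet(nn.Module):
    def __init__(self, ehr_dim=64, n_classes=2):
        super().__init__()
        # Image branch: a small CNN stands in for any chest X-ray encoder.
        self.img_branch = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # EHR branch: a small MLP over tabular features.
        self.ehr_branch = nn.Sequential(nn.Linear(ehr_dim, 32), nn.ReLU())
        # Shared head over the concatenated embeddings.
        self.head = nn.Linear(32 + 32, n_classes)

    def forward(self, image, ehr):
        z = torch.cat([self.img_branch(image), self.ehr_branch(ehr)], dim=1)
        return self.head(z)

model = JointFusionNet()
logits = model(torch.randn(4, 1, 224, 224), torch.randn(4, 64))
print(logits.shape)  # torch.Size([4, 2])
```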
2. Wu D, Ni J, Fan W, Jiang Q, Wang L, Sun L, Cai Z. Opportunities and challenges of computer aided diagnosis in new millennium: A bibliometric analysis from 2000 to 2023. Medicine (Baltimore) 2023; 102:e36703. PMID: 38134105; PMCID: PMC10735127; DOI: 10.1097/md.0000000000036703.
Abstract
BACKGROUND After entering the new millennium, computer-aided diagnosis (CAD) is rapidly developing as an emerging technology worldwide. Expanding the spectrum of CAD-related diseases is a possible future research trend. Nevertheless, bibliometric studies in this area have not yet been reported. This study aimed to explore the hotspots and frontiers of research on CAD from 2000 to 2023, which may provide a reference for researchers in this field. METHODS In this paper, we use bibliometrics to analyze CAD-related literature in the Web of Science database between 2000 and 2023. The scientometric software tools VOSviewer and CiteSpace were used to visually analyze the countries, institutions, authors, journals, references, and keywords involved in the literature. Keyword burst analysis was utilized to further explore the current state and development trends of research on CAD. RESULTS A total of 13,970 publications were included in this study, with a noticeably rising annual publication trend. China and the United States are the major contributors to these publications, with the United States holding the dominant position in CAD research. American research institutions, led by the University of Chicago, are pioneers of CAD. Acharya UR, Zheng B, and Chan HP are the most prolific authors. IEEE Transactions on Medical Imaging focuses on CAD and publishes the most articles. New computer technologies related to CAD are at the forefront of attention. Currently, CAD is used extensively in breast diseases, pulmonary diseases, and brain diseases. CONCLUSION Expanding the spectrum of CAD-related diseases is a possible future research trend. How to overcome the lack of large-sample datasets and how to establish a universally accepted standard for evaluating CAD system performance are urgent issues for CAD development and validation. In conclusion, this paper provides valuable information on the current state of CAD research and future developments.
Affiliation(s)
- Di Wu: Department of Proctology, Yongchuan Hospital of Traditional Chinese Medicine, Chongqing Medical University, Chongqing, China; Department of Proctology, Bishan Hospital of Traditional Chinese Medicine, Chongqing, China; Chongqing College of Traditional Chinese Medicine, Chongqing, China
- Jiachun Ni: Department of Coloproctology, Yueyang Hospital of Integrated Traditional Chinese and Western Medicine, Shanghai University of Traditional Chinese Medicine, Shanghai, China
- Wenbin Fan: Department of Proctology, Bishan Hospital of Traditional Chinese Medicine, Chongqing, China; Chongqing College of Traditional Chinese Medicine, Chongqing, China
- Qiong Jiang: Chongqing College of Traditional Chinese Medicine, Chongqing, China
- Ling Wang: Department of Proctology, Yongchuan Hospital of Traditional Chinese Medicine, Chongqing Medical University, Chongqing, China
- Li Sun: Department of Proctology, Yongchuan Hospital of Traditional Chinese Medicine, Chongqing Medical University, Chongqing, China
- Zengjin Cai: Department of Proctology, Yongchuan Hospital of Traditional Chinese Medicine, Chongqing Medical University, Chongqing, China
3. Xu W, Nie L, Chen B, Ding W. Dual-stream EfficientNet with adversarial sample augmentation for COVID-19 computer aided diagnosis. Comput Biol Med 2023; 165:107451. PMID: 37696184; DOI: 10.1016/j.compbiomed.2023.107451.
Abstract
Although a series of computer-aided measures, including the recently popular deep learning-based methods, have been taken for the rapid and definite diagnosis of coronavirus disease 2019 (COVID-19), they generally fail to achieve sufficiently high accuracy. The main reasons are that: (a) they generally focus on improving model structures while ignoring important information contained in the medical images themselves; (b) the existing small-scale datasets have difficulty meeting the training requirements of deep learning. In this paper, a dual-stream network based on EfficientNet is proposed for COVID-19 diagnosis from CT scans. The dual-stream network takes into account the important information in both the spatial and frequency domains of CT scans. In addition, Adversarial Propagation (AdvProp) is used to address the insufficient training data typically available for deep learning-based computer-aided diagnosis, as well as the overfitting issue. A Feature Pyramid Network (FPN) is utilized to fuse the dual-stream features. Experimental results on the public dataset COVIDx CT-2A demonstrate that the proposed method outperforms 12 existing deep learning-based methods for COVID-19 diagnosis, achieving an accuracy of 0.9870 for multi-class classification and 0.9958 for binary classification. The source code is available at https://github.com/imagecbj/covid-efficientnet.
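The sketch below illustrates the dual-stream idea: the same CT slice is encoded in the spatial domain and in the frequency domain (here, the log-magnitude of its 2D FFT) before the two feature vectors are fused. The small stand-in backbone, input shape, and fusion by concatenation are assumptions for illustration; the released code at https://github.com/imagecbj/covid-efficientnet uses EfficientNet streams with FPN fusion.

```python
# Minimal sketch of a dual-stream (spatial + frequency) classifier for CT slices.
import torch
import torch.nn as nn

def small_backbone():
    # Stand-in encoder for one stream (the paper uses EfficientNet).
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

class DualStreamNet(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.spatial_stream = small_backbone()
        self.frequency_stream = small_backbone()
        self.classifier = nn.Linear(32 + 32, n_classes)

    def forward(self, x):
        # Frequency-domain view: log-magnitude of the 2D FFT of the slice.
        freq = torch.log1p(torch.abs(torch.fft.fft2(x)))
        z = torch.cat([self.spatial_stream(x), self.frequency_stream(freq)], dim=1)
        return self.classifier(z)

model = DualStreamNet()
print(model(torch.randn(2, 1, 256, 256)).shape)  # torch.Size([2, 3])
```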
Affiliation(s)
- Weijie Xu: Engineering Research Center of Digital Forensics, Ministry of Education, Nanjing University of Information Science and Technology, Nanjing, 210044, China
- Lina Nie: Engineering Research Center of Digital Forensics, Ministry of Education, Nanjing University of Information Science and Technology, Nanjing, 210044, China
- Beijing Chen: Engineering Research Center of Digital Forensics, Ministry of Education, Nanjing University of Information Science and Technology, Nanjing, 210044, China; Jiangsu Collaborative Innovation Center of Atmospheric Environment and Equipment Technology (CICAEET), Nanjing University of Information Science and Technology, Nanjing, 210044, China
- Weiping Ding: School of Information Science and Technology, Nantong University, Nantong, 226019, China
4. Liao C, Wen X, Qi S, Liu Y, Cao R. FSE-Net: feature selection and enhancement network for mammogram classification. Phys Med Biol 2023; 68:195001. PMID: 37712226; DOI: 10.1088/1361-6560/acf559.
Abstract
Objective. Early detection and diagnosis allow for intervention and treatment at an early stage of breast cancer. Despite recent advances in computer-aided diagnosis systems based on convolutional neural networks for breast cancer diagnosis, improving the classification performance of mammograms remains a challenge due to the various sizes of breast lesions and the difficulty of extracting small lesion features. To obtain more accurate classification results, many studies choose to directly classify region of interest (ROI) annotations, but labeling ROIs is labor intensive. The purpose of this research is to design a novel network that automatically classifies mammogram images as cancer or no cancer, aiming to mitigate or address the above challenges and help radiologists perform mammogram diagnosis more accurately. Approach. We propose a novel feature selection and enhancement network (FSE-Net) to fully exploit the features of mammogram images, which requires only mammogram images and image-level labels without any bounding boxes or masks. Specifically, to obtain more contextual information, an effective feature selection module is proposed to adaptively select the receptive fields and fuse features from receptive fields of different scales. Moreover, a feature enhancement module is designed to explore the correlation between feature maps of different resolutions and to enhance the representation capacity of low-resolution feature maps with high-resolution feature maps. Main results. The performance of the proposed network has been evaluated on the CBIS-DDSM and INbreast datasets. It achieves an accuracy of 0.806 with an AUC of 0.866 on the CBIS-DDSM dataset and an accuracy of 0.956 with an AUC of 0.974 on the INbreast dataset. Significance. Through extensive experiments and saliency map visualization analysis, the proposed network achieves satisfactory performance on the mammogram classification task and can roughly locate suspicious regions to assist in the final prediction for the entire image.
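As a rough illustration of the feature selection idea (adaptively weighting receptive fields of different scales), the sketch below fuses a small-dilation and a large-dilation branch with learned per-channel weights, in the spirit of selective-kernel attention. The channel counts and kernel choices are assumptions; this is not the published FSE-Net.

```python
# Minimal sketch: per-channel soft selection between two receptive-field branches.
import torch
import torch.nn as nn

class FeatureSelection(nn.Module):
    def __init__(self, channels=32, reduction=4):
        super().__init__()
        self.branch3 = nn.Conv2d(channels, channels, 3, padding=1)               # small receptive field
        self.branch5 = nn.Conv2d(channels, channels, 3, padding=2, dilation=2)   # larger receptive field
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, 2 * channels, 1),
        )

    def forward(self, x):
        f3, f5 = self.branch3(x), self.branch5(x)
        # Per-channel weights over the two branches, normalized with softmax.
        w = self.gate(f3 + f5).view(x.size(0), 2, x.size(1), 1, 1).softmax(dim=1)
        return w[:, 0] * f3 + w[:, 1] * f5

m = FeatureSelection()
print(m(torch.randn(1, 32, 56, 56)).shape)  # torch.Size([1, 32, 56, 56])
```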
Affiliation(s)
- Caiqing Liao: College of Software Engineering, Taiyuan University of Technology, Taiyuan 030600, People's Republic of China
- Xin Wen: College of Software Engineering, Taiyuan University of Technology, Taiyuan 030600, People's Republic of China
- Shuman Qi: College of Software Engineering, Taiyuan University of Technology, Taiyuan 030600, People's Republic of China
- Yanan Liu: College of Software Engineering, Taiyuan University of Technology, Taiyuan 030600, People's Republic of China
- Rui Cao: College of Software Engineering, Taiyuan University of Technology, Taiyuan 030600, People's Republic of China
5. Mora-Rubio A, Bravo-Ortíz MA, Quiñones Arredondo S, Saborit Torres JM, Ruz GA, Tabares-Soto R. Classification of Alzheimer's disease stages from magnetic resonance images using deep learning. PeerJ Comput Sci 2023; 9:e1490. PMID: 37705614; PMCID: PMC10495979; DOI: 10.7717/peerj-cs.1490.
Abstract
Alzheimer's disease (AD) is a progressive type of dementia characterized by loss of memory and other cognitive abilities, including speech. Since AD is a progressive disease, detection in the early stages is essential for the appropriate care of the patient throughout its development, which goes from an asymptomatic stage to mild cognitive impairment (MCI) and then progresses to dementia and severe dementia. It is worth mentioning that everyone suffers from cognitive impairment to some degree with age, but the relevant task here is to identify which people are most likely to develop AD. Along with cognitive tests, evaluation of brain morphology is the primary tool for AD diagnosis, where atrophy and loss of volume of the frontotemporal lobe are common features in patients who suffer from the disease. Regarding medical imaging techniques, magnetic resonance imaging (MRI) scans are one of the methods used by specialists to assess brain morphology. Recently, with the rise of deep learning (DL) and its successful implementation in medical imaging applications, there is growing interest in the research community in developing computer-aided diagnosis systems that can help physicians detect this disease, especially in the early stages where macroscopic changes are not so easily identified. This article presents a DL-based approach to classifying MRI scans into the different stages of AD, using a curated set of images from the Alzheimer's Disease Neuroimaging Initiative and Open Access Series of Imaging Studies databases. Our methodology involves image pre-processing using FreeSurfer; spatial data-augmentation operations such as rotation, flip, and random zoom during training; state-of-the-art 3D convolutional neural networks such as EfficientNet, DenseNet, and a custom siamese network; and the relatively new vision transformer architecture. With this approach, the best detection percentage among all four architectures was around 89% for AD vs. Control, 80% for Late MCI vs. Control, 66% for MCI vs. Control, and 67% for Early MCI vs. Control.
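A minimal sketch of the kind of spatial augmentation listed above (random flip, rotation, and zoom) applied to a 3D MRI volume is shown below; the tensor layout and the crop-or-pad strategy after zooming are assumptions, not the paper's pipeline.

```python
# Minimal sketch of random flip / rotation / zoom for a volume shaped (C, D, H, W).
import torch
import torch.nn.functional as F

def augment_volume(vol, zoom_range=(0.9, 1.1)):
    # Random left-right flip along the width axis.
    if torch.rand(1) < 0.5:
        vol = torch.flip(vol, dims=[3])
    # Random 90-degree rotation in the axial (H, W) plane.
    vol = torch.rot90(vol, k=int(torch.randint(0, 4, (1,))), dims=[2, 3])
    # Random zoom: rescale, then crop or zero-pad back to the original size.
    scale = float(torch.empty(1).uniform_(*zoom_range))
    zoomed = F.interpolate(vol.unsqueeze(0), scale_factor=scale,
                           mode="trilinear", align_corners=False).squeeze(0)
    out = torch.zeros_like(vol)
    d, h, w = vol.shape[1:]
    sd, sh, sw = (min(a, b) for a, b in zip((d, h, w), zoomed.shape[1:]))
    out[:, :sd, :sh, :sw] = zoomed[:, :sd, :sh, :sw]
    return out

print(augment_volume(torch.randn(1, 96, 96, 96)).shape)  # torch.Size([1, 96, 96, 96])
```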
Affiliation(s)
- Alejandro Mora-Rubio: Department of Electronics and Automation, Universidad Autónoma de Manizales, Manizales, Caldas, Colombia
- Jose Manuel Saborit Torres: Unidad Mixta de Imagen Biomédica FISABIO-CIPF, Fundación para el Fomento de la Investigación Sanitaria y Biomédica de la Comunidad Valenciana, Valencia, Spain
- Gonzalo A. Ruz: Center of Applied Ecology and Sustainability (CAPES), Santiago, Chile; Data Observatory Foundation, Santiago, Chile; Facultad de Ingeniería y Ciencias, Universidad Adolfo Ibáñez, Santiago, Chile
- Reinel Tabares-Soto: Department of Electronics and Automation, Universidad Autónoma de Manizales, Manizales, Caldas, Colombia; Facultad de Ingeniería y Ciencias, Universidad Adolfo Ibáñez, Santiago, Chile; Department of Systems and Informatics, Universidad de Caldas, Manizales, Caldas, Colombia
6. Tenali N, Babu GRM. A Systematic Literature Review and Future Perspectives for Handling Big Data Analytics in COVID-19 Diagnosis. New Gener Comput 2023; 41:243-280. PMID: 37229177; PMCID: PMC10019802; DOI: 10.1007/s00354-023-00211-8.
Abstract
In today's digital world, information is growing along with the expansion of Internet usage worldwide. As a consequence, a bulk of data, known as "Big Data", is generated constantly. Big Data analytics, one of the fastest-evolving technologies of the twenty-first century, is a promising field for extracting knowledge from very large datasets and enhancing benefits while lowering costs. Owing to the enormous success of big data analytics, the healthcare sector is increasingly shifting toward adopting these approaches to diagnose diseases. Thanks to the recent boom in medical big data and the development of computational methods, researchers and practitioners have gained the ability to mine and visualize medical big data on a larger scale. Thus, with the integration of big data analytics in healthcare, precise medical data analysis covering early sickness detection, health status monitoring, patient treatment, and community services is now achievable. With all these improvements, this comprehensive review considers the deadly disease COVID-19 with the intention of offering remedies utilizing big data analytics. The use of big data applications is vital to managing pandemic conditions, such as predicting outbreaks of COVID-19 and identifying cases and patterns of its spread. Research is still being done on leveraging big data analytics to forecast COVID-19, but precise and early identification of the disease is still lacking because of the volume and heterogeneity of medical records, such as dissimilar medical imaging modalities. Meanwhile, digital imaging has become essential to COVID-19 diagnosis, but the main challenge is the storage of massive volumes of data. Taking these limitations into account, a comprehensive analysis is presented in this systematic literature review (SLR) to provide a deeper understanding of big data in the field of COVID-19.
Affiliation(s)
- Nagamani Tenali: Department of CSE, Dr. Y.S. Rajasekhar Reddy University College of Engineering & Technology, Acharya Nagarjuna University, Nagarjuna Nagar, Guntur, India
- Gatram Rama Mohan Babu: Computer Science and Engineering (AI&ML), RVR & JC College of Engineering, Chowdavaram, Guntur, India
7. Rao Y, Lv Q, Zeng S, Yi Y, Huang C, Gao Y, Cheng Z, Sun J. COVID-19 CT ground-glass opacity segmentation based on attention mechanism threshold. Biomed Signal Process Control 2023; 81:104486. PMID: 36505089; PMCID: PMC9721288; DOI: 10.1016/j.bspc.2022.104486.
Abstract
Ground-glass opacity (GGO) of the lung is one of the essential features of COVID-19. GGO in computed tomography (CT) images has varied appearances and low intensity contrast between the GGO and edge structures. These problems pose significant challenges for segmenting the GGO. To tackle them, we propose a new threshold method for accurate segmentation of GGO. Specifically, we offer a framework that adjusts the threshold parameters according to image contrast and comprises three functions: attention mechanism threshold, contour equalization, and lung segmentation (ACL). The lung is divided into three areas using the attention mechanism threshold, and the segmentation parameters of the attention mechanism thresholds of the three parts are adaptively adjusted according to image contrast. Only the segmentation regions restricted by the lung segmentation results are retained. Extensive experiments on four COVID-19 datasets show that ACL segments low-contrast GGO images well. Compared with state-of-the-art methods, the Dice similarity of the ACL segmentation results is improved by 8.9%, the average symmetric surface distance (ASD) is reduced by 23%, and the required computation (FLOPs) is only 0.09% of that of deep learning models. For GGO segmentation, ACL is more lightweight and more accurate. Code will be released at https://github.com/Lqs-github/ACL.
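The sketch below illustrates the core contrast-adaptive thresholding idea in plain NumPy: the threshold applied to a lung region is shifted according to that region's intensity contrast. The base threshold, gain, and contrast measure are illustrative assumptions; the authors' ACL code is to be released at https://github.com/Lqs-github/ACL.

```python
# Minimal sketch of contrast-adaptive thresholding for candidate GGO voxels.
import numpy as np

def adaptive_threshold_mask(region, base_hu=-700.0, gain=0.25):
    """Flag candidate GGO voxels in one lung region of a CT slice (HU values)."""
    contrast = region.std()                # simple contrast measure for the region
    threshold = base_hu + gain * contrast  # higher contrast shifts the threshold up
    return region > threshold              # GGO is denser than aerated lung

region = np.random.normal(loc=-800.0, scale=120.0, size=(128, 128))
mask = adaptive_threshold_mask(region)
print(mask.mean())  # fraction of voxels flagged as candidate GGO
```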
Affiliation(s)
- Yunbo Rao: School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Qingsong Lv: School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Shaoning Zeng: Yangtze Delta Region Institute (Huzhou), University of Electronic Science and Technology of China, Huzhou, 313000, China
- Yuling Yi: School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Cheng Huang: Fifth Clinical College of Chongqing Medical University, Chongqing, 402177, China
- Yun Gao: Chongqing University of Posts and Telecommunications, Chongqing, 400065, China
- Zhanglin Cheng: Advanced Technology Chinese Academy of Sciences, Shenzhen, 610042, China
- Jihong Sun: Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, 310014, China
8. Deep Learning for Detecting COVID-19 Using Medical Images. Bioengineering (Basel) 2022; 10:19. PMID: 36671590; PMCID: PMC9854504; DOI: 10.3390/bioengineering10010019.
Abstract
The global spread of COVID-19 (also known as SARS-CoV-2) is a major international public health crisis [...].
9. Han X, Hu Z, Wang S, Zhang Y. A Survey on Deep Learning in COVID-19 Diagnosis. J Imaging 2022; 9:1. PMID: 36662099; PMCID: PMC9866755; DOI: 10.3390/jimaging9010001.
Abstract
According to World Health Organization statistics, as of 25 October 2022 there have been 625,248,843 confirmed cases of COVID-19, including 65,622,281 deaths worldwide. The spread and severity of COVID-19 are alarming, and the economies and daily life of countries worldwide have been greatly affected. The rapid and accurate diagnosis of COVID-19 directly affects the spread of the virus and the degree of harm. Currently, the classification of chest X-ray or CT images based on artificial intelligence is an important method for COVID-19 diagnosis; it can assist doctors in making judgments and reduce the misdiagnosis rate. The convolutional neural network (CNN) is very popular in computer vision applications such as biological image segmentation, traffic sign recognition, face recognition, and other fields, and it is one of the most widely used machine learning methods. This paper mainly introduces the latest deep learning methods and techniques for diagnosing COVID-19 from chest X-ray or CT images based on the convolutional neural network. It reviews CNN techniques at various stages, such as rectified linear units, batch normalization, data augmentation, and dropout. Several well-performing network architectures are explained in detail, such as AlexNet, ResNet, DenseNet, VGG, and GoogLeNet. We analyzed and discussed existing CNN-based automatic COVID-19 diagnosis systems in terms of sensitivity, accuracy, precision, specificity, and F1 score. The systems use chest X-ray or CT images as datasets, and all of them perform well in the existing experiments. Overall, CNNs have essential value in COVID-19 diagnosis. If the datasets are expanded, GPU acceleration and data preprocessing techniques are added, and more types of medical images are included, CNN performance will improve further. This paper aims to contribute to future research.
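The building blocks the survey reviews (convolution, batch normalization, ReLU, and dropout) can be wired into a small chest X-ray classifier as in the sketch below; it is a generic illustration, not taken from any of the surveyed systems.

```python
# Minimal sketch of a CNN classifier using the reviewed building blocks.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Dropout(0.5),
    nn.Linear(32, 2),  # e.g. COVID-19 vs. non-COVID
)
print(model(torch.randn(4, 1, 224, 224)).shape)  # torch.Size([4, 2])
```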
Affiliation(s)
- Xue Han: School of Mathematics and Information Science, Nanjing Normal University of Special Education, Nanjing 210038, China; School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
- Zuojin Hu: School of Mathematics and Information Science, Nanjing Normal University of Special Education, Nanjing 210038, China
- Shuihua Wang: School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
- Yudong Zhang: School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
10. Malik H, Anees T, Din M, Naeem A. CDC_Net: multi-classification convolutional neural network model for detection of COVID-19, pneumothorax, pneumonia, lung cancer, and tuberculosis using chest X-rays. Multimed Tools Appl 2022; 82:13855-13880. PMID: 36157356; PMCID: PMC9485026; DOI: 10.1007/s11042-022-13843-7.
Abstract
Coronavirus disease (COVID-19) has severely harmed healthcare systems and economies throughout the world. COVID-19 has symptoms similar to other chest disorders, such as lung cancer (LC), pneumothorax, tuberculosis (TB), and pneumonia, which might mislead clinical professionals in detecting the coronavirus. This motivates us to design a model to classify multiple chest infections. Chest X-ray is the most ubiquitous diagnostic procedure in medical practice; as a result, chest X-ray examinations are the primary diagnostic tool for all of these chest infections. To save human lives, paramedics and researchers are working tirelessly to establish a precise and reliable method for diagnosing COVID-19 at an early stage. However, the clinical diagnosis of COVID-19 is exceedingly idiosyncratic and varied. A multi-classification method based on a deep learning (DL) model is developed and tested in this work to automatically classify COVID-19, LC, pneumothorax, TB, and pneumonia from chest X-ray images. COVID-19 and the other chest disorders are diagnosed using a convolutional neural network (CNN) model called CDC_Net that incorporates residual network ideas and dilated convolution. For this study, we used this model in conjunction with publicly available benchmark data to identify these diseases. To the best of our knowledge, this is the first time a single deep learning model has been used to diagnose five different chest ailments. In terms of classification accuracy, recall, precision, and F1-score, we compared the proposed model to three CNN-based pre-trained models: VGG-19, ResNet-50, and Inception-v3. CDC_Net attained an AUC of 0.9953 in identifying the various chest diseases (with an accuracy of 99.39%, a recall of 98.13%, and a precision of 99.42%). Moreover, the CNN-based pre-trained models VGG-19, ResNet-50, and Inception-v3 achieved accuracies of 95.61%, 96.15%, and 95.16%, respectively, in classifying the multiple chest diseases. Using chest X-rays, the proposed model was found to be highly accurate in diagnosing chest diseases. On our test dataset, the proposed model shows significantly better performance than its competitor methods. Statistical analyses of the datasets using McNemar's and ANOVA tests also showed the robustness of the proposed model.
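A minimal sketch of the two stated ingredients, residual connections and dilated convolution, combined with a five-way head for the listed diseases is shown below; the layer sizes are assumptions, and this is not the published CDC_Net.

```python
# Minimal sketch: residual blocks with dilated convolutions and a 5-class head
# (COVID-19, pneumothorax, pneumonia, lung cancer, tuberculosis).
import torch
import torch.nn as nn

class DilatedResidualBlock(nn.Module):
    def __init__(self, channels=32, dilation=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))  # identity shortcut

backbone = nn.Sequential(
    nn.Conv2d(1, 32, 7, stride=2, padding=3), nn.ReLU(),
    DilatedResidualBlock(32), DilatedResidualBlock(32, dilation=4),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 5),
)
print(backbone(torch.randn(2, 1, 224, 224)).shape)  # torch.Size([2, 5])
```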
Affiliation(s)
- Hassaan Malik: Department of Computer Science, University of Management and Technology, Lahore 54000, Pakistan
- Tayyaba Anees: Department of Software Engineering, University of Management and Technology, Lahore 54000, Pakistan
- Muizzud Din: Department of Computer Science, Ghazi University, Dera Ghazi Khan 32200, Pakistan
- Ahmad Naeem: Department of Computer Science, University of Management and Technology, Lahore 54000, Pakistan
11.
12. Enshaei N, Oikonomou A, Rafiee MJ, Afshar P, Heidarian S, Mohammadi A, Plataniotis KN, Naderkhani F. COVID-rate: an automated framework for segmentation of COVID-19 lesions from chest CT images. Sci Rep 2022; 12:3212. PMID: 35217712; PMCID: PMC8881477; DOI: 10.1038/s41598-022-06854-9.
Abstract
Novel coronavirus disease (COVID-19) is a highly contagious respiratory infection that has had devastating effects on the world. Recently, new COVID-19 variants have been emerging, making the situation more challenging and threatening. Evaluation and quantification of COVID-19 lung abnormalities based on chest computed tomography (CT) images can help determine the disease stage, efficiently allocate limited healthcare resources, and inform treatment decisions. During the pandemic era, however, visual assessment and quantification of COVID-19 lung lesions by expert radiologists becomes expensive and prone to error, raising an urgent need for practical autonomous solutions. In this context, first, the paper introduces an open-access COVID-19 CT segmentation dataset containing 433 CT images from 82 patients that have been annotated by an expert radiologist. Second, a Deep Neural Network (DNN)-based framework, referred to as COVID-rate, is proposed that autonomously segments lung abnormalities associated with COVID-19 from chest CT images. Performance of the proposed COVID-rate framework is evaluated through several experiments based on the introduced and external datasets. Third, an unsupervised enhancement approach is introduced that can reduce the gap between the training set and test set and improve model generalization. The enhanced results show a Dice score of 0.8069, a specificity of 0.9969, and a sensitivity of 0.8354. Furthermore, the results indicate that the COVID-rate model can efficiently segment COVID-19 lesions in both 2D CT images and whole lung volumes. Results on the external dataset illustrate the generalization capabilities of the COVID-rate model to CT images obtained from a different scanner.
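The three reported metrics (Dice score, sensitivity, and specificity) follow the standard confusion-matrix definitions; the sketch below computes them from binary prediction and ground-truth masks and is not the paper's evaluation code.

```python
# Minimal sketch of Dice, sensitivity, and specificity for binary segmentation masks.
import numpy as np

def segmentation_metrics(pred, target, eps=1e-7):
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.logical_and(pred, target).sum()
    fp = np.logical_and(pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()
    tn = np.logical_and(~pred, ~target).sum()
    dice = 2 * tp / (2 * tp + fp + fn + eps)
    sensitivity = tp / (tp + fn + eps)
    specificity = tn / (tn + fp + eps)
    return dice, sensitivity, specificity

pred = np.random.rand(256, 256) > 0.5
target = np.random.rand(256, 256) > 0.5
print(segmentation_metrics(pred, target))
```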
Affiliation(s)
- Nastaran Enshaei: Concordia Institute for Information Systems Engineering, Concordia University, Montreal, QC, Canada
- Anastasia Oikonomou: Department of Medical Imaging, Sunnybrook Health Sciences Centre, University of Toronto, Toronto, ON, Canada
- Moezedin Javad Rafiee: Department of Medicine and Diagnostic Radiology, McGill University, Montreal, QC, Canada
- Parnian Afshar: Concordia Institute for Information Systems Engineering, Concordia University, Montreal, QC, Canada
- Shahin Heidarian: Department of Electrical and Computer Engineering, Concordia University, Montreal, QC, Canada
- Arash Mohammadi: Concordia Institute for Information Systems Engineering, Concordia University, Montreal, QC, Canada
- Farnoosh Naderkhani: Concordia Institute for Information Systems Engineering, Concordia University, Montreal, QC, Canada
13. Yousef R, Gupta G, Yousef N, Khari M. A holistic overview of deep learning approach in medical imaging. Multimed Syst 2022; 28:881-914. PMID: 35079207; PMCID: PMC8776556; DOI: 10.1007/s00530-021-00884-5.
Abstract
Medical images are a rich source of invaluable information used by clinicians. Recent technologies have introduced many advancements for exploiting the most of this information and using it to generate better analyses. Deep learning (DL) techniques have been applied to medical image analysis in computer-assisted imaging contexts, offering many solutions and improvements for the analysis of these images by radiologists and other specialists. In this paper, we present a survey of DL techniques used for a variety of tasks across the different medical imaging modalities, providing a critical review of the recent developments in this direction. We have organized the paper to present the significant traits of deep learning and to explain its concepts, which is in turn helpful for non-experts in the medical community. We then present several applications of deep learning (e.g., segmentation, classification, detection) that are commonly used for clinical purposes at different anatomical sites, and we also present the main key terms for DL attributes, such as basic architectures, data augmentation, transfer learning, and feature selection methods. Medical images as inputs to deep learning architectures will be the mainstream in the coming years, and novel DL techniques are predicted to be the core of medical image analysis. We conclude the paper by addressing some research challenges and the solutions for them suggested in the literature, as well as future promises and directions for further development.
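One of the key DL attributes mentioned above, transfer learning, is commonly applied as in the sketch below: freeze a pretrained backbone and fine-tune only a new task-specific head. It is a generic recipe, not drawn from any specific work surveyed in the paper, and the backbone, class count, and optimizer are assumptions.

```python
# Minimal sketch of transfer learning: frozen backbone, trainable task head.
import torch
import torch.nn as nn
import torchvision

model = torchvision.models.resnet18()            # pretrained weights could be loaded here
for p in model.parameters():                     # freeze the backbone
    p.requires_grad_(False)
model.fc = nn.Linear(model.fc.in_features, 2)    # new trainable head for a binary task

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
print(model(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 2])
```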
Affiliation(s)
- Rammah Yousef: Yogananda School of AI Computer and Data Sciences, Shoolini University, Solan, 173229, Himachal Pradesh, India
- Gaurav Gupta: Yogananda School of AI Computer and Data Sciences, Shoolini University, Solan, 173229, Himachal Pradesh, India
- Nabhan Yousef: Electronics and Communication Engineering, Marwadi University, Rajkot, Gujarat, India
- Manju Khari: Jawaharlal Nehru University, New Delhi, India
14. Malik H, Anees T. BDCNet: multi-classification convolutional neural network model for classification of COVID-19, pneumonia, and lung cancer from chest radiographs. Multimed Syst 2022; 28:815-829. PMID: 35068705; PMCID: PMC8763428; DOI: 10.1007/s00530-021-00878-3.
Abstract
Globally, coronavirus disease (COVID-19) has severely affected medical systems and economies. The deadly COVID-19 sometimes has the same symptoms as other chest diseases, such as pneumonia and lung cancer, and can mislead doctors in diagnosing the coronavirus. Frontline doctors and researchers are working assiduously to find a rapid, automatic process for detecting COVID-19 at the initial stage, in order to save human lives. However, the clinical diagnosis of COVID-19 is highly subjective and variable. The objective of this study is to implement a multi-classification algorithm based on a deep learning (DL) model for identifying COVID-19, pneumonia, and lung cancer from chest radiographs. In the present study, we propose a model named BDCNet that combines VGG-19 with a convolutional neural network (CNN) and apply it to different publicly available benchmark databases to diagnose COVID-19 and the other chest diseases. To the best of our knowledge, this is the first study to diagnose the three chest diseases in a single deep learning model. We also computed and compared the classification accuracy of our proposed model with that of four well-known pre-trained models: ResNet-50, VGG-16, VGG-19, and Inception-v3. Our proposed model achieved an AUC of 0.9833 (with an accuracy of 99.10%, a recall of 98.31%, a precision of 99.9%, and an F1-score of 99.09%) in classifying the different chest diseases. Moreover, the CNN-based pre-trained models VGG-16, VGG-19, ResNet-50, and Inception-v3 achieved accuracies of 97.35%, 97.14%, 97.15%, and 95.10%, respectively, in classifying the multiple diseases. The results revealed that our proposed model performs remarkably compared with its competitor approaches, thus providing significant assistance to diagnostic radiographers and health experts.
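A minimal sketch of the stated combination, a VGG-19 convolutional backbone feeding a small custom CNN head, is shown below; the head layout and four-way class count (COVID-19, pneumonia, lung cancer, normal) are assumptions, not the authors' released BDCNet.

```python
# Minimal sketch of a VGG-19 backbone with a small custom CNN classification head.
import torch
import torch.nn as nn
import torchvision

class BDCNetSketch(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.backbone = torchvision.models.vgg19().features  # 512-channel feature maps
        self.head = nn.Sequential(
            nn.Conv2d(512, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Dropout(0.5), nn.Linear(128, n_classes),
        )

    def forward(self, x):
        return self.head(self.backbone(x))

model = BDCNetSketch()
print(model(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 4])
```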
Affiliation(s)
- Hassaan Malik: Department of Computer Science, University of Management and Technology, Lahore 54000, Pakistan; Department of Computer Science, National College of Business Administration & Economics Sub Campus Multan, Multan 60000, Pakistan
- Tayyaba Anees: Department of Software Engineering, University of Management and Technology, Lahore 54000, Pakistan
15. Lin Z, He Z, Xie S, Wang X, Tan J, Lu J, Tan B. AANet: Adaptive Attention Network for COVID-19 Detection From Chest X-Ray Images. IEEE Trans Neural Netw Learn Syst 2021; 32:4781-4792. PMID: 34613921; DOI: 10.1109/tnnls.2021.3114747.
Abstract
Accurate and rapid diagnosis of COVID-19 using chest X-ray (CXR) plays an important role in large-scale screening and epidemic prevention. Unfortunately, identifying COVID-19 from CXR images is challenging, as its radiographic features have a variety of complex appearances, such as widespread ground-glass opacities and diffuse reticular-nodular opacities. To solve this problem, we propose an adaptive attention network (AANet), which can adaptively extract the characteristic radiographic findings of COVID-19 from infected regions with various scales and appearances. It contains two main components: an adaptive deformable ResNet and an attention-based encoder. First, the adaptive deformable ResNet, which adaptively adjusts the receptive fields to learn feature representations according to the shape and scale of infected regions, is designed to handle the diversity of COVID-19 radiographic features. Then, the attention-based encoder is developed to model nonlocal interactions via a self-attention mechanism, which learns rich context information to detect lesion regions with complex shapes. Extensive experiments on several public datasets show that the proposed AANet outperforms state-of-the-art methods.
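The attention-based encoder component can be approximated by a generic non-local block, as in the sketch below: self-attention over the positions of a CNN feature map so that lesion regions can attend to distant context. This is an assumed simplification, not the published AANet.

```python
# Minimal sketch of self-attention over CNN feature-map positions (non-local style).
import torch
import torch.nn as nn

class SelfAttentionEncoder(nn.Module):
    def __init__(self, channels=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, feat):                       # feat: (B, C, H, W)
        b, c, h, w = feat.shape
        tokens = feat.flatten(2).transpose(1, 2)   # (B, H*W, C)
        attended, _ = self.attn(tokens, tokens, tokens)
        tokens = self.norm(tokens + attended)      # residual connection + layer norm
        return tokens.transpose(1, 2).reshape(b, c, h, w)

enc = SelfAttentionEncoder()
print(enc(torch.randn(2, 64, 14, 14)).shape)  # torch.Size([2, 64, 14, 14])
```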