1
Malik H, Anees T. Multi-modal deep learning methods for classification of chest diseases using different medical imaging and cough sounds. PLoS One 2024; 19:e0296352. [PMID: 38470893] [DOI: 10.1371/journal.pone.0296352]
Abstract
Chest disease refers to a wide range of conditions affecting the lungs, such as COVID-19, lung cancer (LC), consolidation of the lung (COL), and many more. When diagnosing chest disorders, medical professionals may be misled by overlapping symptoms such as fever, cough, and sore throat. Researchers and medical professionals therefore make use of chest X-rays (CXR), cough sounds, and computed tomography (CT) scans to diagnose chest disorders. The present study aims to classify nine different chest disorders, including COVID-19, LC, COL, atelectasis (ATE), tuberculosis (TB), pneumothorax (PNEUTH), edema (EDE), and pneumonia (PNEU). To this end, we propose four novel convolutional neural network (CNN) models that learn distinct image-level representations for the nine chest disease classes by extracting features from images. The proposed CNN employs several approaches, including max pooling, batch normalization layers (BANL), dropout, rank-based average pooling (RBAP), and multiple-way data generation (MWDG). The scalogram method is used to transform cough sounds into a visual representation. Before training the developed model, the SMOTE approach is used to balance the CXR and CT scans as well as the cough sound images (CSI) across the nine chest disorders. The CXR, CT scans, and CSI used for training and evaluating the proposed model come from 24 publicly available benchmark chest illness datasets. The classification performance of the proposed model is compared with that of seven baseline models, namely VGG-19, ResNet-101, ResNet-50, DenseNet-121, EfficientNetB0, DenseNet-201, and Inception-V3, in addition to state-of-the-art (SOTA) classifiers. Ablation experiments further demonstrate the model's effectiveness. The proposed model achieved an accuracy of 99.01%, outperforming both the baseline models and the SOTA classifiers, and can therefore offer significant support to radiologists and other medical professionals.
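As a rough illustration of the scalogram step described in this abstract (not the authors' pipeline), the Python sketch below converts a one-dimensional cough signal into a scalogram image with the continuous wavelet transform from PyWavelets; the synthetic signal, sampling rate, wavelet, and scale range are all assumptions.

```python
# Minimal sketch (not the authors' code): convert a 1-D cough signal into a
# scalogram image via the continuous wavelet transform, as the abstract describes.
import numpy as np
import pywt
import matplotlib.pyplot as plt

sr = 8000                                                # assumed sampling rate
t = np.linspace(0, 1, sr, endpoint=False)
signal = np.sin(2 * np.pi * 300 * t) * np.exp(-5 * t)    # stand-in for a real cough recording

scales = np.arange(1, 128)                               # wavelet scales (assumption)
coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1.0 / sr)

plt.imshow(np.abs(coeffs), aspect="auto", origin="lower", cmap="jet")
plt.axis("off")
plt.savefig("cough_scalogram.png", bbox_inches="tight", pad_inches=0)
```

In the same spirit, the SMOTE balancing step could be sketched with imblearn's SMOTE().fit_resample applied to flattened image arrays before training.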
Affiliation(s)
- Hassaan Malik
- Department of Computer Science, School of Systems and Technology, University of Management and Technology, Lahore, Pakistan
- Tayyaba Anees
- Department of Software Engineering, School of Systems and Technology, University of Management and Technology, Lahore, Pakistan
2
Oura D, Gekka M, Sugimori H. The montage method improves the classification of suspected acute ischemic stroke using the convolution neural network and brain MRI. Radiol Phys Technol 2024; 17:297-305. [PMID: 37934345] [DOI: 10.1007/s12194-023-00754-x]
Abstract
This study investigated the usefulness of the montage method, which combines four different magnetic resonance images into one image, for automatic acute ischemic stroke (AIS) diagnosis with a deep learning method. The montage image consisted of the diffusion-weighted image (DWI), fluid-attenuated inversion recovery (FLAIR), arterial spin labeling (ASL), and apparent diffusion coefficient (ADC) images. The montage method was compared with a pseudo color map (pCM) composed of FLAIR, ASL, and ADC. A total of 473 AIS patients were classified into four categories: mechanical thrombectomy, conservative therapy, hemorrhage, and other diseases. The results showed that the montage image significantly outperformed the pCM in terms of accuracy (montage image = 0.76 ± 0.01, pCM = 0.54 ± 0.05) and the area under the curve (AUC) (montage image = 0.94 ± 0.01, pCM = 0.76 ± 0.01). This study demonstrates the usefulness of the montage method and its potential for overcoming the limitations of pCM.
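A minimal sketch of the montage idea, assuming four co-registered, preprocessed slices of equal size: the modalities are normalized and tiled into a single 2x2 image that a CNN can take as one input. The array sizes and normalization are assumptions, not the authors' exact preprocessing.

```python
# Minimal sketch of the montage idea: tile four co-registered MR images
# (DWI, FLAIR, ASL, ADC) into a single 2x2 input image. Synthetic arrays stand in
# for real, preprocessed slices; sizes and normalization are assumptions.
import numpy as np

def normalize(img):
    """Scale one modality to [0, 1] so the tiles share an intensity range."""
    img = img.astype(np.float32)
    return (img - img.min()) / (img.max() - img.min() + 1e-8)

h, w = 256, 256
dwi, flair, asl, adc = (np.random.rand(h, w) for _ in range(4))  # stand-ins for real slices

montage = np.block([
    [normalize(dwi), normalize(flair)],
    [normalize(asl), normalize(adc)],
])                                           # shape (512, 512), fed to the CNN as one image
print(montage.shape)
```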
Affiliation(s)
- Daisuke Oura
- Department of Radiology, Otaru General Hospital, Otaru, 047-0152, Japan
- Graduate School of Health Sciences, Hokkaido University, Sapporo, 060-0812, Japan
- Masayuki Gekka
- Department of Neurosurgery, Otaru General Hospital, Otaru, 047-0152, Japan
- Hiroyuki Sugimori
- Faculty of Health Sciences, Hokkaido University, Sapporo, 060-0812, Japan
3
A novel ensemble CNN model for COVID-19 classification in computerized tomography scans. Results in Control and Optimization 2023; 11:100215. [PMCID: PMC9936787] [DOI: 10.1016/j.rico.2023.100215]
Abstract
COVID-19 is a rapidly spreading infectious disease caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) that can lead to death in just a few days. Thus, early disease detection can provide more time for successful treatment or action, even though an efficient treatment is still unknown. In this context, this work proposes and investigates four ensemble CNNs using transfer learning and compares them with state-of-the-art CNN architectures. To select which models to use, we tested 11 state-of-the-art CNN architectures: DenseNet121, DenseNet169, DenseNet201, VGG16, VGG19, Xception, ResNet50, ResNet50v2, InceptionV3, MobileNet, and MobileNetv2. We used a public dataset comprising 2477 computerized tomography images divided into two classes: patients diagnosed with COVID-19 and patients with a negative diagnosis. Three architectures were then selected: DenseNet169, VGG16, and Xception. Finally, the ensemble models were tested in all possible combinations. The results showed that the ensemble models tend to present the best results. Moreover, the best ensemble CNN, called EnsenbleDVX, comprising all three CNNs, provides the best results, achieving an average accuracy of 97.7%, an average precision of 97.7%, an average recall of 97.8%, and an average F1-score of 97.7%.
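A hedged sketch of the averaging-ensemble idea over the three selected backbones (DenseNet169, VGG16, Xception); the classification heads, input size, weight initialization, and soft-voting rule are assumptions rather than the paper's exact configuration.

```python
# Sketch of an averaging ensemble over three fine-tuned backbones (DenseNet169,
# VGG16, Xception), in the spirit of the abstract. Heads, input size and the
# averaging rule are assumptions, not the paper's exact setup.
import numpy as np
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import DenseNet169, VGG16, Xception

def build_branch(backbone_cls, name):
    base = backbone_cls(weights=None, include_top=False, input_shape=(224, 224, 3))
    x = layers.GlobalAveragePooling2D()(base.output)
    out = layers.Dense(2, activation="softmax")(x)        # COVID-19 vs. negative
    return Model(base.input, out, name=name)

branches = [build_branch(DenseNet169, "densenet169"),
            build_branch(VGG16, "vgg16"),
            build_branch(Xception, "xception")]

def ensemble_predict(batch):
    """Average the softmax outputs of the three branches (soft voting)."""
    probs = [m.predict(batch, verbose=0) for m in branches]
    return np.mean(probs, axis=0)

dummy = np.random.rand(1, 224, 224, 3).astype("float32")  # stand-in for a CT slice
print(ensemble_predict(dummy))
```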
4
Iqbal U, Imtiaz R, Saudagar AKJ, Alam KA. CRV-NET: Robust Intensity Recognition of Coronavirus in Lung Computerized Tomography Scan Images. Diagnostics (Basel) 2023; 13:1783. [PMID: 37238266] [DOI: 10.3390/diagnostics13101783]
Abstract
Digital healthcare systems demand the early diagnosis of infectious diseases. Currently, the detection of the novel coronavirus disease (COVID-19) is a major clinical requirement. Deep learning models have been used for COVID-19 detection in various studies, but their robustness is still limited. In recent years, deep learning models have grown in popularity in almost every area, particularly in medical image processing and analysis. Visualization of the human body's internal structure is critical in medical analysis, and many imaging techniques are in use to perform this job. A computerized tomography (CT) scan is one of them, and it has generally been used for non-invasive observation of the human body. The development of an automatic segmentation method for lung CT scans showing COVID-19 can save experts' time and reduce human error. In this article, CRV-NET is proposed for the robust detection of COVID-19 in lung CT scan images. A public dataset (the SARS-CoV-2 CT Scan dataset) is used for the experimental work and customized according to the scenario of the proposed model. The proposed modified deep-learning-based U-Net model is trained on a custom dataset of 221 training images and their ground truth, labeled by an expert. The proposed model is tested on 100 test images, and the results show that it segments COVID-19 with a satisfactory level of accuracy. Moreover, the comparison of the proposed CRV-NET with different state-of-the-art convolutional neural network (CNN) models, including the U-Net model, shows better results in terms of accuracy (96.67%) and robustness (a low epoch count for detection and the smallest training data size).
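Since CRV-NET is described as a modified U-Net, a miniature U-Net-style encoder-decoder is sketched below purely for orientation; it is not CRV-NET, and the depth, filter counts, and input size are assumptions.

```python
# Miniature U-Net-style encoder-decoder (illustration only; this is not CRV-NET,
# whose exact depth, filters and modifications are described in the paper).
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

inputs = layers.Input((128, 128, 1))                     # assumed CT slice size
c1 = conv_block(inputs, 16)
p1 = layers.MaxPooling2D()(c1)
c2 = conv_block(p1, 32)
p2 = layers.MaxPooling2D()(c2)
b = conv_block(p2, 64)                                   # bottleneck
u2 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(b)
c3 = conv_block(layers.concatenate([u2, c2]), 32)        # skip connection
u1 = layers.Conv2DTranspose(16, 2, strides=2, padding="same")(c3)
c4 = conv_block(layers.concatenate([u1, c1]), 16)
outputs = layers.Conv2D(1, 1, activation="sigmoid")(c4)  # per-pixel lesion mask

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```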
Affiliation(s)
- Uzair Iqbal
- Department of Artificial Intelligence and Data Science, National University of Computer and Emerging Sciences, Islamabad Campus, Islamabad 44000, Pakistan
- Romil Imtiaz
- Information and Communication Engineering, Northwestern Polytechnical University, Xi'an 710072, China
- Abdul Khader Jilani Saudagar
- Information Systems Department, College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
- Khubaib Amjad Alam
- Department of Software Engineering, National University of Computer and Emerging Sciences, Islamabad Campus, Islamabad 44000, Pakistan
5
Hamza A, Attique Khan M, Wang SH, Alhaisoni M, Alharbi M, Hussein HS, Alshazly H, Kim YJ, Cha J. COVID-19 classification using chest X-ray images based on fusion-assisted deep Bayesian optimization and Grad-CAM visualization. Front Public Health 2022; 10:1046296. [PMID: 36408000] [PMCID: PMC9672507] [DOI: 10.3389/fpubh.2022.1046296]
Abstract
The COVID-19 virus's rapid global spread has caused millions of illnesses and deaths. As a result, it has had disastrous consequences for people's lives, public health, and the global economy. Clinical studies have revealed a link between the severity of COVID-19 cases and the amount of virus present in infected people's lungs. Imaging techniques such as computed tomography (CT) and chest X-rays (CXR) can detect COVID-19. Manual inspection of these images is a difficult process, so computerized techniques are widely used. Deep convolutional neural networks (DCNNs) are a type of machine learning frequently used in computer vision applications, particularly in medical imaging, to detect and classify infected regions. These techniques can assist medical personnel in the detection of patients with COVID-19. In this article, a Bayesian-optimized DCNN and explainable-AI-based framework is proposed for the classification of COVID-19 from chest X-ray images. The proposed method starts with a multi-filter contrast enhancement technique that increases the visibility of the infected part. Two pre-trained deep models, EfficientNet-B0 and MobileNet-V2, are fine-tuned according to the target classes and then trained by employing Bayesian optimization (BO). Through BO, hyperparameters are selected instead of being statically initialized. Features are extracted from the trained models and fused using a slicing-based serial fusion approach. The fused features are classified using machine learning classifiers for the final classification. Moreover, visualization is performed using Grad-CAM, which highlights the infected part in the image. Three publicly available COVID-19 datasets are used for the experimental process, yielding improved accuracies of 98.8%, 97.9%, and 99.4%, respectively.
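The Grad-CAM visualization mentioned above can be illustrated with a short, generic sketch: gradients of the class score with respect to the last convolutional feature maps are averaged into channel weights, and the weighted feature maps form a heatmap. The backbone (MobileNetV2) and its layer name are assumptions used only to make the example runnable, not the paper's model.

```python
# Minimal Grad-CAM sketch (assumptions: a Keras model and the name of its last
# convolutional layer; this mirrors the visualization idea, not the paper's code).
import numpy as np
import tensorflow as tf

def grad_cam(model, image, last_conv_layer_name, class_index=None):
    """Return a heatmap highlighting the regions that drive the predicted class."""
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(last_conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        if class_index is None:
            class_index = tf.argmax(preds[0])
        score = preds[:, class_index]
    grads = tape.gradient(score, conv_out)                # d(score)/d(feature maps)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))       # global-average-pooled gradients
    cam = tf.reduce_sum(conv_out[0] * weights, axis=-1)   # weighted sum of feature maps
    cam = tf.nn.relu(cam)
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()    # normalized heatmap

# Usage sketch with a generic backbone (MobileNetV2's final conv layer is "Conv_1"):
model = tf.keras.applications.MobileNetV2(weights=None)
image = np.random.rand(224, 224, 3).astype("float32")     # stand-in for a CXR
heatmap = grad_cam(model, image, "Conv_1")
print(heatmap.shape)
```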
Affiliation(s)
- Ameer Hamza
- Department of Computer Science, HITEC University, Taxila, Pakistan
- Shui-Hua Wang
- Department of Mathematics, University of Leicester, Leicester, United Kingdom
- Majed Alhaisoni
- Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia
- Meshal Alharbi
- Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia
- Hany S. Hussein
- Electrical Engineering Department, College of Engineering, King Khalid University, Abha, Saudi Arabia
- Electrical Engineering Department, Faculty of Engineering, Aswan University, Aswan, Egypt
- Hammam Alshazly
- Faculty of Computers and Information, South Valley University, Qena, Egypt
- Ye Jin Kim
- Department of Computer Science, Hanyang University, Seoul, South Korea
- Jaehyuk Cha
- Department of Computer Science, Hanyang University, Seoul, South Korea
6
Hamza A, Attique Khan M, Wang SH, Alqahtani A, Alsubai S, Binbusayyis A, Hussein HS, Martinetz TM, Alshazly H. COVID-19 classification using chest X-ray images: A framework of CNN-LSTM and improved max value moth flame optimization. Front Public Health 2022; 10:948205. [PMID: 36111186] [PMCID: PMC9468600] [DOI: 10.3389/fpubh.2022.948205]
Abstract
Coronavirus disease 2019 (COVID-19) is a highly contagious disease that has claimed the lives of millions of people worldwide in the last two years. Because of the disease's rapid spread, it is critical to diagnose it at an early stage in order to reduce the rate of transmission. Images of the lungs are used to diagnose this infection. In the last two years, many studies have been introduced to help with the diagnosis of COVID-19 from chest X-ray images. Because all researchers are looking for a quick method to diagnose this virus, deep learning-based computer-controlled techniques are well suited as a second opinion for radiologists. In this article, we look at the issues of multisource fusion and redundant features. We propose a CNN-LSTM and improved max value features optimization framework for COVID-19 classification to address these issues. In the proposed architecture, the original images are acquired and the contrast is increased using a combination of filtering algorithms. The dataset is then augmented to increase its size and used to train two deep learning networks, Modified EfficientNet B0 and CNN-LSTM. Both networks are built from scratch and extract information from the deep layers. Following feature extraction, a serial-based maximum value fusion technique is proposed to combine the best information of both deep models. However, some redundant information is also noted; therefore, an improved max value based moth flame optimization algorithm is proposed. Through this algorithm, the best features are selected and finally classified using machine learning classifiers. The experimental process was conducted on three publicly available datasets and achieved higher accuracy than existing techniques. Moreover, a classifier-based comparison is also conducted, and the cubic support vector machine gives the best accuracy.
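One plausible reading of the serial maximum-value fusion step is sketched below in numpy under stated assumptions: features from the two networks are concatenated per sample, and a simple max-activation criterion keeps the top-k columns. The paper's exact fusion rule and the moth flame optimization selector are not reproduced here.

```python
# Rough numpy illustration of serial (concatenation-based) feature fusion followed
# by a simple max-value based selection step. This is only one plausible reading of
# the abstract; the paper's exact fusion rule and moth-flame optimizer are not shown.
import numpy as np

rng = np.random.default_rng(0)
feats_cnn_lstm = rng.random((100, 512))      # stand-in features from the CNN-LSTM branch
feats_effnet = rng.random((100, 1280))       # stand-in features from Modified EfficientNet B0

# Serial fusion: place both feature sets side by side for every sample.
fused = np.concatenate([feats_cnn_lstm, feats_effnet], axis=1)   # shape (100, 1792)

# Max-value criterion: keep the k columns whose maximum activation is largest.
k = 256
scores = fused.max(axis=0)
selected_idx = np.argsort(scores)[-k:]
selected = fused[:, selected_idx]            # reduced feature matrix passed to the classifiers
print(selected.shape)
```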
Affiliation(s)
- Ameer Hamza
- Department of Computer Science, HITEC University, Taxila, Pakistan
- Shui-Hua Wang
- Department of Mathematics, University of Leicester, Leicester, United Kingdom
- Abdullah Alqahtani
- College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia
- Shtwai Alsubai
- College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia
- Adel Binbusayyis
- College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia
- Hany S. Hussein
- Department of Electrical Engineering, College of Engineering, King Khalid University, Abha, Saudi Arabia
- Department of Electrical Engineering, Faculty of Engineering, Aswan University, Aswan, Egypt
- Hammam Alshazly
- Faculty of Computers and Information, South Valley University, Qena, Egypt
7
Linse C, Alshazly H, Martinetz T. A walk in the black-box: 3D visualization of large neural networks in virtual reality. Neural Comput Appl 2022; 34:21237-21252. [PMID: 35996678] [PMCID: PMC9387423] [DOI: 10.1007/s00521-022-07608-4]
Abstract
Within the last decade, deep learning has become a tool for solving challenging problems like image recognition. Still, convolutional neural networks (CNNs) are considered black boxes, which are difficult for humans to understand. Hence, there is an urge to visualize CNN architectures, their internal processes, and what they actually learn. Previously, virtual reality has been successfully applied to display small CNNs in immersive 3D environments. In this work, we address the problem of how to feasibly render large-scale CNNs, thereby enabling the visualization of popular architectures with tens of thousands of feature maps and branches in the computational graph in 3D. Our software "DeepVisionVR" enables the user to freely walk through the layered network, pick up and place images, move and scale layers for better readability, perform feature visualization, and export the results. We also provide a novel PyTorch module to dynamically link PyTorch with Unity, which gives developers and researchers a convenient interface to visualize their own architectures. The visualization is created directly from the PyTorch class that defines the model used for training and testing. This approach allows full access to the network's internals and direct control over what exactly is visualized. In a use-case study, we apply the module to analyze models with different generalization abilities in order to understand how networks memorize images. We train two recent architectures, CovidResNet and CovidDenseNet, on the Caltech101 and SARS-CoV-2 datasets and find that bad generalization is driven by high-frequency features and susceptibility to specific pixel arrangements, leading to implications for the practical application of CNNs. The code is available on GitHub: https://github.com/Criscraft/DeepVisionVR.
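A small, hedged sketch of how intermediate feature maps can be captured from a PyTorch model with forward hooks, which is the kind of data such a 3D viewer needs; this is not the DeepVisionVR code (see the linked repository), and the backbone and layer filter are assumptions.

```python
# Small PyTorch sketch of grabbing intermediate feature maps with forward hooks,
# the kind of data a 3D network viewer needs. Not the DeepVisionVR code; the model
# choice and the Conv2d-only filter are assumptions.
import torch
import torchvision

model = torchvision.models.resnet18(weights=None).eval()
activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Register a hook on every convolutional layer.
for name, module in model.named_modules():
    if isinstance(module, torch.nn.Conv2d):
        module.register_forward_hook(save_activation(name))

with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))       # stand-in input image

for name, fmap in list(activations.items())[:3]:
    print(name, tuple(fmap.shape))           # feature maps that could be rendered as 3D layers
```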
Affiliation(s)
- Christoph Linse
- Institute for Neuro- and Bioinformatics, University of Lübeck, Ratzeburger Allee 160, 23562 Lübeck, Germany
- Hammam Alshazly
- Faculty of Computers and Information, South Valley University, Qena, 83523 Egypt
- Thomas Martinetz
- Institute for Neuro- and Bioinformatics, University of Lübeck, Ratzeburger Allee 160, 23562 Lübeck, Germany
8
Deep Sentiment Analysis of Twitter Data Using a Hybrid Ghost Convolution Neural Network Model. Computational Intelligence and Neuroscience 2022; 2022:6595799. [PMID: 35898769] [PMCID: PMC9313995] [DOI: 10.1155/2022/6595799]
Abstract
Several problems remain despite the evident advantages of sentiment analysis of public opinion expressed on Twitter and Facebook. On complicated training data, hybrid approaches may reduce sentiment errors. This research assesses the dependability of numerous hybrid approaches on a variety of datasets. Across domains and datasets, we compare hybrid models to single models. Text tweets and reviews are included in our deep sentiment analysis learning systems. The support vector machine (SVM), Long Short-Term Memory (LSTM), and ghost convolution neural network (CNN) are combined to build the hybrid model. The dependability and computation time of each approach were evaluated. On all datasets, hybrid models that combine deep learning and SVM outperform single models. The traditional models were less reliable, while deep learning algorithms have recently shown their enormous promise in sentiment analysis. Linear transformations are used in the feature maps to eliminate duplicate or related features. The ghost unit creates ghost features by removing attributes that are similar or duplicated from each intrinsic feature. LSTM produces better results but takes longer to process, while CNN needs less hyperparameter adjustment and monitoring. The effectiveness of the integrated model varies depending on the task, and all hybrid models performed better than the single ones. Hybrid deep sentiment analysis learning models require LSTM networks, CNNs, and SVMs. Hybrid models are used to compare SVM, LSTM, and CNN, and we tested each method's accuracy and errors. Deep learning-SVM hybrid models improve sentiment analysis accuracy. Experimental results show that the accuracy of the proposed model is 91.3 percent and 91.5 percent for dataset types 1 and 8, respectively.
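A toy sketch of the hybrid idea, assuming a small CNN-LSTM text encoder whose output features are classified by a linear SVM; the vocabulary size, layer sizes, and random stand-in data are assumptions, and the paper's ghost-module CNN is not reproduced.

```python
# Toy sketch of the hybrid idea: a small CNN-LSTM text encoder produces features
# that a linear SVM then classifies. Vocabulary size, layer sizes and the tiny toy
# dataset are assumptions; the paper's ghost-module CNN is not reproduced here.
import numpy as np
from tensorflow.keras import layers, Model
from sklearn.svm import SVC

vocab_size, seq_len = 5000, 40
x = np.random.randint(1, vocab_size, size=(200, seq_len))   # stand-in tokenized tweets
y = np.random.randint(0, 2, size=200)                       # stand-in sentiment labels

inputs = layers.Input((seq_len,), dtype="int32")
e = layers.Embedding(vocab_size, 64)(inputs)
c = layers.Conv1D(64, 3, activation="relu")(e)               # convolutional feature maps
p = layers.MaxPooling1D(2)(c)
h = layers.LSTM(32)(p)                                       # sequence summary used as features
encoder = Model(inputs, h)                                   # in practice this would be trained first

features = encoder.predict(x, verbose=0)
clf = SVC(kernel="linear").fit(features, y)                  # SVM does the final classification
print("toy training accuracy:", clf.score(features, y))
```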
9
Abstract
Deep learning in the last decade has been very successful in computer vision and machine learning applications. Deep learning networks provide state-of-the-art performance in almost all of the applications where they have been employed. In this review, we aim to summarize the essential deep learning techniques and then apply them to COVID-19, a highly contagious viral infection that wreaks havoc on everyone’s lives in various ways. According to the World Health Organization and scientists, more testing potentially helps contain the virus’s spread. The use of chest radiographs is one of the early screening tests for determining disease, as the infection affects the lungs severely. To detect the COVID-19 infection, this experimental survey investigates and automates the process of testing by employing state-of-the-art deep learning classifiers. Moreover, the viruses are of many types, such as influenza, hepatitis, and COVID. Here, our focus is on COVID-19. Therefore, we employ binary classification, where one class is COVID-19 while the other viral infection types are treated as non-COVID-19 in the radiographs. The classification task is challenging due to the limited number of scans available for COVID-19 and the minute variations in the viral infections. We aim to employ current state-of-the-art CNN architectures, compare their results, and determine whether deep learning algorithms can handle the crisis appropriately and accurately. We train and evaluate 34 models. We also provide the limitations and future direction.
10
COV-DLS: Prediction of COVID-19 from X-Rays Using Enhanced Deep Transfer Learning Techniques. Journal of Healthcare Engineering 2022; 2022:6216273. [PMID: 35422979] [PMCID: PMC9002900] [DOI: 10.1155/2022/6216273]
Abstract
In this paper, modifications to neoteric architectures such as VGG16, VGG19, ResNet50, and InceptionV3 are proposed for the classification of COVID-19 using chest X-rays. The proposed architectures, termed "COV-DLS," consist of two phases: heading model construction and classification. The heading model construction phase utilizes four modified deep learning architectures, namely Modified-VGG16, Modified-VGG19, Modified-ResNet50, and Modified-InceptionV3. These neoteric architectures are modified by incorporating average pooling and dense layers. A dropout layer is added to prevent overfitting, along with two dense layers that use different activation functions. The outputs of these modified models are then used in the classification phase, in which COV-DLS is applied to a COVID-19 chest X-ray image dataset. Classification accuracies of 98.61% for Modified-VGG16, 97.22% for Modified-VGG19, 95.13% for Modified-ResNet50, and 99.31% for Modified-InceptionV3 are achieved. COV-DLS outperforms existing deep learning models in terms of accuracy and F1-score.
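A hedged sketch of the kind of modification the abstract describes: a pretrained backbone with average pooling, dropout, and two dense layers appended. The layer sizes, frozen base, and two-class head are assumptions, not the exact COV-DLS configuration.

```python
# Sketch of the kind of modification COV-DLS describes: a pretrained backbone with
# average pooling, dropout and two dense layers appended. Layer sizes and the
# two-class head are assumptions, not the exact COV-DLS configuration.
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16

base = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))
base.trainable = False                        # transfer learning: freeze the convolutional base

x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dense(256, activation="relu")(x)   # first added dense layer
x = layers.Dropout(0.5)(x)                    # dropout against overfitting
outputs = layers.Dense(2, activation="softmax")(x)   # second dense layer / classifier head

model = Model(base.input, outputs, name="modified_vgg16_sketch")
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```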
11
Kini AS, Gopal Reddy AN, Kaur M, Satheesh S, Singh J, Martinetz T, Alshazly H. Ensemble Deep Learning and Internet of Things-Based Automated COVID-19 Diagnosis Framework. Contrast Media & Molecular Imaging 2022; 2022:7377502. [PMID: 35280708] [PMCID: PMC8896964] [DOI: 10.1155/2022/7377502]
Abstract
Coronavirus disease (COVID-19) is a viral infection caused by SARS-CoV-2. Modalities such as computed tomography (CT) have been successfully utilized for the early-stage diagnosis of patients infected with COVID-19. Recently, many researchers have utilized deep learning models for the automated screening of COVID-19 suspected cases. In this work, an ensemble deep learning and Internet of Things (IoT) based framework is proposed for the screening of COVID-19 suspected cases. Three well-known pretrained deep learning models are ensembled. Medical IoT devices are utilized to collect the CT scans, and automated diagnoses are performed on IoT servers. The proposed framework is compared with thirteen competitive models on a four-class dataset. Experimental results reveal that the proposed ensembled deep learning model yields 98.98% accuracy. Moreover, the model outperforms all competitive models in terms of other performance metrics, achieving 98.56% precision, 98.58% recall, 98.75% F-score, and 98.57% AUC. Therefore, the proposed framework can help accelerate COVID-19 diagnosis.
Affiliation(s)
- Anita S. Kini
- Manipal Institute of Technology MAHE, Manipal, Karnataka 576104, India
- A. Nanda Gopal Reddy
- Department of IT, Mahaveer Institute of Science and Technology, Hyderabad, Telangana 500005, India
- Manjit Kaur
- School of Electrical Engineering and Computer Science, Gwangju Institute of Science and Technology, Gwangju 61005, Republic of Korea
- S. Satheesh
- Department of Electronics and Communication Engineering, Malineni Lakshmaiah Women's Engineering College, Guntur, Andhra Pradesh 522017, India
- Jagendra Singh
- School of Computer Science Engineering and Technology, Bennett University, Greater Noida-203206, India
- Thomas Martinetz
- Institute for Neuro- and Bioinformatics, University of Lübeck, Lübeck 23562, Germany
- Hammam Alshazly
- Faculty of Computers and Information, South Valley University, Qena 83523, Egypt
12
Attallah O. A computer-aided diagnostic framework for coronavirus diagnosis using texture-based radiomics images. Digit Health 2022; 8:20552076221092543. [PMID: 35433024] [PMCID: PMC9005822] [DOI: 10.1177/20552076221092543]
Abstract
The accurate and rapid detection of the novel coronavirus infection is very important to prevent the fast spread of the disease and thus reduce the negative effects that have influenced many sectors, especially healthcare. Artificial intelligence techniques, in particular deep learning, could help in the fast and precise diagnosis of coronavirus from computed tomography (CT) images. Most artificial intelligence-based studies used the original CT images to build their models; however, integrating texture-based radiomics images with deep learning techniques could improve diagnostic accuracy. This study proposes a computer-assisted diagnostic framework based on multiple deep learning and texture-based radiomics approaches. It first trains three Residual Network (ResNet) deep learning models with two types of texture-based radiomics images, the discrete wavelet transform and the gray-level covariance matrix, instead of the original CT images. It then fuses the texture-based radiomics deep feature sets extracted from each using the discrete cosine transform, and further combines the fused texture-based radiomics deep features obtained from the three convolutional neural networks. Finally, three support vector machine classifiers are utilized for the classification procedure. The proposed method is validated experimentally on the benchmark severe acute respiratory syndrome coronavirus 2 CT image dataset. The accuracies attained indicate that using texture-based radiomics (gray-level covariance matrix, discrete wavelet transform) images for training ResNet-18 (83.22%, 74.9%), ResNet-50 (80.94%, 78.39%), and ResNet-101 (80.54%, 77.99%) is better than using the original CT images (70.34%, 76.51%, and 73.42% for ResNet-18, ResNet-50, and ResNet-101, respectively). Furthermore, the sensitivity, specificity, accuracy, precision, and F1-score achieved by the proposed framework after the two fusion steps are 99.47%, 99.72%, 99.60%, 99.72%, and 99.60%, which proves that combining the texture-based radiomics deep features obtained from the three ResNets boosts performance. Thus, fusing multiple texture-based radiomics deep features mined from several convolutional neural networks is better than using only one type of radiomics approach and a single convolutional neural network. The performance of the proposed framework allows it to be used by radiologists for fast and accurate diagnosis.
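A minimal sketch of building a wavelet-based texture image from a CT slice with a two-dimensional discrete wavelet transform, in the spirit of the radiomics inputs described above; the synthetic slice, wavelet choice, and sub-band arrangement are assumptions.

```python
# Illustration of deriving a wavelet-based texture image from a CT slice with a
# 2-D discrete wavelet transform, in the spirit of the radiomics inputs described
# in the abstract. The synthetic slice, wavelet and sub-band tiling are assumptions.
import numpy as np
import pywt

ct_slice = np.random.rand(256, 256).astype(np.float32)   # stand-in for a real CT slice

# Single-level 2-D DWT: one approximation and three detail sub-bands.
cA, (cH, cV, cD) = pywt.dwt2(ct_slice, "haar")

# Tile the four sub-bands into one texture image that a ResNet could be trained on.
texture_image = np.block([[cA, cH],
                          [cV, cD]])
print(texture_image.shape)                                # (256, 256) for a 256x256 input
```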
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, Egypt
13
Tazin T, Sarker S, Gupta P, Ayaz FI, Islam S, Monirujjaman Khan M, Bourouis S, Idris SA, Alshazly H. A Robust and Novel Approach for Brain Tumor Classification Using Convolutional Neural Network. Computational Intelligence and Neuroscience 2021; 2021:2392395. [PMID: 34970309] [PMCID: PMC8714377] [DOI: 10.1155/2021/2392395]
Abstract
Brain tumors are among the most common and aggressive illnesses, with a relatively short life expectancy in their most severe form. Thus, treatment planning is an important step in improving patients' quality of life. In general, imaging methods such as computed tomography (CT), magnetic resonance imaging (MRI), and ultrasound are used to assess tumors in the brain, lung, liver, breast, prostate, and so on. X-ray images, in particular, are utilized in this study to diagnose brain tumors. This paper describes the investigation of convolutional neural networks (CNNs) to identify brain tumors from X-ray images, which expedites treatment and increases its reliability. Because there has been a significant amount of study in this field, the presented model focuses on boosting accuracy while using a transfer learning strategy. Python and Google Colab were utilized to perform this investigation. Deep feature extraction was accomplished with the pretrained deep CNN models VGG19, InceptionV3, and MobileNetV2. Classification accuracy is used to assess performance: MobileNetV2 achieved an accuracy of 92%, InceptionV3 91%, and VGG19 88%, so MobileNetV2 offered the highest accuracy among these networks. Such precision aids in the early identification of tumors before they produce physical adverse effects such as paralysis and other impairments.
Affiliation(s)
- Tahia Tazin
- Department of Electrical and Computer Engineering, North South University, Bashundhara, Dhaka 1229, Bangladesh
- Sraboni Sarker
- Department of Electrical and Computer Engineering, North South University, Bashundhara, Dhaka 1229, Bangladesh
- Punit Gupta
- Department of Computer and Communication, Manipal University Jaipur, Jaipur, India
- Fozayel Ibn Ayaz
- Department of Electrical and Computer Engineering, North South University, Bashundhara, Dhaka 1229, Bangladesh
- Sumaia Islam
- Department of Electrical and Computer Engineering, North South University, Bashundhara, Dhaka 1229, Bangladesh
- Mohammad Monirujjaman Khan
- Department of Electrical and Computer Engineering, North South University, Bashundhara, Dhaka 1229, Bangladesh
- Sami Bourouis
- Department of Information Technology, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
- Sahar Ahmed Idris
- College of Industrial Engineering, King Khalid University, Abha, Saudi Arabia
- Hammam Alshazly
- Faculty of Computers and Information, South Valley University, Qena 83523, Egypt
14
Evaluating the Impact of COVID-19 on Society, Environment, Economy, and Education. Sustainability 2021. [DOI: 10.3390/su132413642]
Abstract
The COVID-19 pandemic has caused drastic changes across the globe, affecting all areas of life. This paper provides a comprehensive study of the influence of COVID-19 on various fields such as the economy, education, society, the environment, and globalization. Both the positive and negative consequences of the COVID-19 pandemic on education are studied. Modern technologies are combined with conventional teaching to improve communication between instructors and learners. COVID-19 also greatly affected people with disabilities and older people, who experienced more complications in their normal routine activities. Additionally, COVID-19 had negative impacts on world economies, greatly affecting the business, agriculture, entertainment, tourism, and service sectors. The impact of COVID-19 on these sectors is also investigated, and the study provides some meaningful insights and suggestions for revitalizing the tourism sector. The association between globalization and travel restrictions is studied. In addition to economic and human health concerns, the influence of lockdowns on environmental health is also investigated. During periods of lockdown, the amount of pollutants in the air, soil, and water was significantly reduced. This study motivates researchers to investigate the positive and negative consequences of the COVID-19 pandemic in various unexplored areas.
15
Facial Recognition Intensity in Disease Diagnosis Using Automatic Facial Recognition. J Pers Med 2021; 11:1172. [PMID: 34834524] [PMCID: PMC8621146] [DOI: 10.3390/jpm11111172]
Abstract
Artificial intelligence (AI) technology is widely applied in different medical fields, including the diagnosis of various diseases on the basis of facial phenotypes, but there has been no evaluation or quantitative synthesis of its performance. Here, for the first time, we summarize and quantitatively analyze studies on the diagnosis of heterogeneous diseases on the basis of facial features. In pooled data from 20 systematically identified studies involving 7 single diseases and 12,557 subjects, quantitative random-effects models revealed a pooled sensitivity of 89% (95% CI 82% to 93%) and a pooled specificity of 92% (95% CI 87% to 95%). A new index, the facial recognition intensity (FRI), was established to describe the complexity of the association between diseases and facial phenotypes. Meta-regression revealed an important contribution of FRI to the heterogeneous diagnostic accuracy (p = 0.021), and a similar result was found in subgroup analyses (p = 0.003). An appropriate increase in training size and the use of deep learning models helped to improve diagnostic accuracy for diseases with low FRI, although no statistically significant association was found between accuracy and photographic resolution, training size, AI architecture, or number of diseases. In addition, a novel hypothesis is proposed for universal rules in AI performance, providing a new idea that could be explored in other AI applications.
16
Yang D, Martinez C, Visuña L, Khandhar H, Bhatt C, Carretero J. Detection and analysis of COVID-19 in medical images using deep learning techniques. Sci Rep 2021; 11:19638. [PMID: 34608186] [PMCID: PMC8490426] [DOI: 10.1038/s41598-021-99015-3]
Abstract
The main purpose of this work is to investigate and compare several deep learning enhanced techniques applied to X-ray and CT-scan medical images for the detection of COVID-19. In this paper, we used four powerful pre-trained CNN models, VGG16, DenseNet121, ResNet50, and ResNet152, for the COVID-19 CT-scan binary classification task. The proposed Fast.AI ResNet framework was designed to find the best architecture, pre-processing, and training parameters for the models largely automatically. The accuracy and F1-score were both above 96% in the diagnosis of COVID-19 using CT-scan images. In addition, we applied transfer learning techniques to compensate for the insufficient data and to improve the training time. The binary and multi-class classification of X-ray images was performed using an enhanced VGG16 deep transfer learning architecture. A high accuracy of 99% was achieved by the enhanced VGG16 in detecting COVID-19 and pneumonia from X-ray images. The accuracy and validity of the algorithms were assessed on well-known public X-ray and CT-scan datasets. The proposed methods yield better results for COVID-19 diagnosis than other related methods in the literature. In our opinion, our work can help virologists and radiologists make a better and faster diagnosis in the struggle against the COVID-19 outbreak.
Affiliation(s)
- Dandi Yang
- Beijing Electro-Mechanical Engineering Institute, Beijing, 100074, China
- Cristhian Martinez
- Department of Computer Science and Engineering, Carlos III University of Madrid, 28911, Madrid, Spain
- Lara Visuña
- Department of Computer Science and Engineering, Carlos III University of Madrid, 28911, Madrid, Spain
- Hardev Khandhar
- U & P U. Patel Department of Computer Engineering, CSPIT, Charotar University of Science and Technology (CHARUSAT), Changa, India
- Chintan Bhatt
- U & P U. Patel Department of Computer Engineering, CSPIT, Charotar University of Science and Technology (CHARUSAT), Changa, India
- Jesus Carretero
- Department of Computer Science and Engineering, Carlos III University of Madrid, 28911, Madrid, Spain