1. Contrastive learning with token projection for Omicron pneumonia identification from few-shot chest CT images. Front Med (Lausanne) 2024; 11:1360143. PMID: 38756944. PMCID: PMC11096503. DOI: 10.3389/fmed.2024.1360143. Received 12/22/2023; accepted 04/05/2024; indexed 05/18/2024.
Abstract
Introduction: Deep learning-based methods can accelerate the diagnosis of pneumonia from chest computed tomography (CT) images, but they usually rely on large amounts of labeled data to learn good visual representations. Medical images, however, are difficult to obtain and must be labeled by professional radiologists.
Methods: To address this issue, a novel contrastive learning model with token projection, CoTP, is proposed to improve diagnostic quality on few-shot chest CT images. Specifically, (1) CoTP is fitted using solely unlabeled data, with only a small number of labeled samples used for fine-tuning; (2) a new Omicron dataset is presented and the data augmentation strategy is modified, i.e., random Poisson noise perturbation for the CT interpretation task; and (3) token projection is used to further improve the quality of the global visual representations.
Results: A ResNet50 pre-trained by CoTP attained accuracy (ACC) of 92.35%, sensitivity (SEN) of 92.96%, precision (PRE) of 91.54%, and area under the receiver-operating characteristic curve (AUC) of 98.90% on the presented Omicron dataset. In contrast, a ResNet50 without pre-training achieved ACC, SEN, PRE, and AUC of 77.61%, 77.90%, 76.69%, and 85.66%, respectively.
Conclusion: Extensive experiments reveal that a model pre-trained by CoTP greatly outperforms one without pre-training. CoTP can improve diagnostic efficacy and reduce the heavy workload of radiologists in screening for Omicron pneumonia.
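The random Poisson noise perturbation used above for CT augmentation can be sketched as follows; the scale parameter and the toy image are illustrative assumptions, not values from the paper:

```python
import numpy as np

def poisson_noise_augment(image, scale=255.0, rng=None):
    """Perturb a normalized image (values in [0, 1]) with random Poisson noise.

    `scale` controls the noise strength (higher scale -> lower relative noise);
    it is an illustrative choice, not a parameter reported by the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    counts = rng.poisson(image * scale)           # photon-count-like sampling
    return np.clip(counts / scale, 0.0, 1.0)      # rescale back to [0, 1]

ct_slice = np.full((4, 4), 0.5)                   # dummy normalized CT patch
noisy = poisson_noise_augment(ct_slice, rng=np.random.default_rng(0))
```

Because the noise variance tracks the local intensity, this perturbation mimics the count statistics of CT acquisition better than additive Gaussian noise would.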
2. From Organelle Morphology to Whole-Plant Phenotyping: A Phenotypic Detection Method Based on Deep Learning. Plants (Basel, Switzerland) 2024; 13:1177. PMID: 38732392. PMCID: PMC11085357. DOI: 10.3390/plants13091177. Received 02/28/2024; revised 04/17/2024; accepted 04/19/2024; indexed 05/13/2024.
Abstract
The analysis of plant phenotype parameters is closely related to breeding, so plant phenotype research has strong practical significance. This paper used deep learning to classify Arabidopsis thaliana from the macro (plant) to the micro (organelle) level. First, a multi-output model identifies Arabidopsis accession lines and uses regression to predict Arabidopsis's growth status over 22 days. The experimental results showed that the model performed excellently in identifying Arabidopsis lines, with a classification accuracy of 99.92%. The model also performed well in predicting plant growth status, with a regression root mean square error (RMSE) of 1.536. Next, a new dataset was obtained by increasing the time interval between Arabidopsis images, and the model's performance was verified at different time intervals. Finally, the model was applied to classify Arabidopsis organelles to verify its generalizability. This research suggests that deep learning will broaden plant phenotype detection methods and facilitate the design and development of a high-throughput information collection platform for plant phenotypes.
3. Diagnostic performance of a deep-learning model using 18F-FDG PET/CT for evaluating recurrence after radiation therapy in patients with lung cancer. Ann Nucl Med 2024. PMID: 38589677. DOI: 10.1007/s12149-024-01925-5. Received 01/22/2024; accepted 03/21/2024; indexed 04/10/2024.
Abstract
Objective: We developed a deep learning model for distinguishing radiation therapy (RT)-related changes from tumour recurrence in patients with lung cancer who underwent RT, and evaluated its performance.
Methods: We retrospectively recruited 308 patients with lung cancer with RT-related changes observed on 18F-fluorodeoxyglucose positron emission tomography-computed tomography (18F-FDG PET/CT) performed after RT. Patients were labelled as positive or negative for tumour recurrence through histologic diagnosis or clinical follow-up after 18F-FDG PET/CT. A two-dimensional (2D) slice-based convolutional neural network (CNN) model was created with a total of 3329 slices as input, and performance was evaluated on five independent test sets.
Results: Across the five independent test sets, the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity were in the ranges of 0.98-0.99, 95-98%, and 87-95%, respectively. Using explainable artificial intelligence (AI) via gradient-weighted class activation mapping (Grad-CAM), the region identified by the model was confirmed to correspond to the actual recurrent tumour.
Conclusion: The 2D slice-based CNN model using 18F-FDG PET imaging distinguished well between RT-related changes and tumour recurrence in patients with lung cancer.
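The Grad-CAM step used here for explanation reduces, for a chosen convolutional layer, to weighting that layer's activation maps by their spatially pooled gradients; a minimal numpy sketch with dummy tensors (shapes are illustrative, not from the paper):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Compute a Grad-CAM heatmap for one conv layer.

    activations, gradients: arrays of shape (channels, H, W), taken from the
    layer's forward pass and from backprop of the class score, respectively.
    """
    weights = gradients.mean(axis=(1, 2))             # global-average-pool the gradients
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum over channels
    cam = np.maximum(cam, 0.0)                        # ReLU: keep positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalize to [0, 1] for display
    return cam

rng = np.random.default_rng(1)
acts = rng.standard_normal((8, 7, 7))                 # dummy layer activations
grads = rng.standard_normal((8, 7, 7))                # dummy class-score gradients
heatmap = grad_cam(acts, grads)
```

Upsampling `heatmap` to the input size and overlaying it on the PET/CT slice gives the visual check against the recurrent-tumour location.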
4. Artificial intelligence and explanation: How, why, and when to explain black boxes. Eur J Radiol 2024; 173:111393. PMID: 38417186. DOI: 10.1016/j.ejrad.2024.111393. Received 02/10/2024; accepted 02/22/2024; indexed 03/01/2024.
Abstract
Artificial intelligence (AI) is taking nearly all fields of science by storm. One notorious property of AI algorithms is their so-called black box character; in particular, they are said to be inherently unexplainable. Such characteristics would, of course, pose a problem for the medical world, including radiology. The patient journey is filled with explanations along the way, from diagnosis to treatment, follow-up, and more. If we were to replace part of these steps with non-explanatory algorithms, we could lose grip on vital aspects such as finding mistakes, patient trust, and even the creation of new knowledge. In this article, we argue that, even for the darkest of black boxes, there is hope of understanding them. In particular, we compare the situation of understanding black box models to that of understanding the laws of nature in physics. In physics, we are given a 'black box' law of nature about which there is no upfront explanation; yet, as current physical theories show, we can learn plenty about it. During this discussion, we present the process by which such explanations are made and the human role therein, keeping a solid focus on radiological AI situations. We outline the role of AI developers in this process, but also the critical role fulfilled by the practitioners, the radiologists, in providing a healthy system of continuous improvement of AI models. Furthermore, we explore the role of the explainable AI (XAI) research program in the broader context we describe.
5. Fine-grained image classification on bats using VGG16-CBAM: a practical example with 7 horseshoe bats taxa (CHIROPTERA: Rhinolophidae: Rhinolophus) from Southern China. Front Zool 2024; 21:10. PMID: 38561769. PMCID: PMC10983684. DOI: 10.1186/s12983-024-00531-5. Received 10/24/2023; accepted 03/18/2024; indexed 04/04/2024.
Abstract
Background: Rapid identification and classification of bats are critical for practical applications. However, species identification of bats is typically a demanding and time-consuming manual task that depends on taxonomists and well-trained experts. Deep convolutional neural networks (DCNNs) provide a practical approach to visual feature extraction and object classification, with potential application to bat classification.
Results: In this study, we investigated the capability of deep learning models to classify 7 horseshoe bat taxa (CHIROPTERA: Rhinolophus) from Southern China. We constructed an image dataset of 879 front, oblique, and lateral targeted facial images of live individuals collected during surveys between 2012 and 2021. All images were taken using a standard photographic protocol and setting aimed at enhancing the effectiveness of DCNN classification. The results demonstrated that our customized VGG16-CBAM model achieved up to 92.15% classification accuracy, performing better than other mainstream models. Furthermore, Grad-CAM visualization reveals that the model attends to the taxonomic key regions in its decision-making process; these regions are often preferred by bat taxonomists for the classification of horseshoe bats, corroborating the validity of our methods.
Conclusion: Our findings will inspire further research on image-based automatic classification of chiropteran species for early detection and potential application in taxonomy.
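The CBAM module grafted onto VGG16 combines channel and spatial attention; its channel half gates each feature map by a shared MLP applied to average- and max-pooled descriptors. A minimal numpy sketch (the random weights, channel count, and reduction ratio are illustrative assumptions):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cbam_channel_attention(feat, w1, w2):
    """CBAM channel attention: gate each channel of feat (C, H, W) by a
    shared two-layer MLP applied to avg- and max-pooled descriptors."""
    avg = feat.mean(axis=(1, 2))                          # (C,) average-pooled descriptor
    mx = feat.max(axis=(1, 2))                            # (C,) max-pooled descriptor
    score = w2 @ np.maximum(w1 @ avg, 0) + w2 @ np.maximum(w1 @ mx, 0)
    gate = sigmoid(score)                                 # per-channel weight in (0, 1)
    return feat * gate[:, None, None]

rng = np.random.default_rng(0)
C, r = 8, 2                                               # channels, reduction ratio (assumed)
w1 = rng.standard_normal((C // r, C))                     # MLP squeeze weights (illustrative)
w2 = rng.standard_normal((C, C // r))                     # MLP excite weights (illustrative)
feat = rng.standard_normal((C, 5, 5))
refined = cbam_channel_attention(feat, w1, w2)
```

In the full module, a spatial-attention stage then reweights positions in the same fashion, which is what steers the network toward the taxonomic key regions.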
6. Multi-modal deep learning methods for classification of chest diseases using different medical imaging and cough sounds. PLoS One 2024; 19:e0296352. PMID: 38470893. DOI: 10.1371/journal.pone.0296352. Received 09/16/2023; accepted 12/11/2023; indexed 03/14/2024.
Abstract
Chest disease refers to a wide range of conditions affecting the lungs, such as COVID-19, lung cancer (LC), consolidation of the lung (COL), and many more. When diagnosing chest disorders, medical professionals may be thrown off by overlapping symptoms (such as fever, cough, and sore throat). Additionally, researchers and medical professionals make use of chest X-rays (CXR), cough sounds, and computed tomography (CT) scans to diagnose chest disorders. The present study aims to classify nine different conditions of chest disorders, including COVID-19, LC, COL, atelectasis (ATE), tuberculosis (TB), pneumothorax (PNEUTH), edema (EDE), and pneumonia (PNEU). Thus, we propose four novel convolutional neural network (CNN) models that train distinct image-level representations for the nine chest disease classes by extracting features from images. Furthermore, the proposed CNN employs several approaches, including a max-pooling layer, batch normalization layers (BANL), dropout, rank-based average pooling (RBAP), and multiple-way data generation (MWDG). The scalogram method is utilized to transform the sounds of coughing into a visual representation. Before training the developed model, the SMOTE approach is used to balance the CXR and CT scans as well as the cough sound images (CSI) across the nine chest disorders. The CXR, CT scans, and CSI used for training and evaluating the proposed model come from 24 publicly available benchmark chest illness datasets. The classification performance of the proposed model is compared with that of seven baseline models, namely VGG-19, ResNet-101, ResNet-50, DenseNet-121, EfficientNetB0, DenseNet-201, and Inception-V3, in addition to state-of-the-art (SOTA) classifiers. The effectiveness of the proposed model is further demonstrated by the results of the ablation experiments. The proposed model achieved an accuracy of 99.01%, making it superior to both the baseline models and the SOTA classifiers. As a result, the proposed approach is capable of offering significant support to radiologists and other medical professionals.
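The SMOTE balancing step mentioned above generates synthetic minority-class samples by interpolating between a sample and one of its nearest minority neighbors. A minimal sketch on flattened feature vectors (the dummy data and neighbor count are illustrative, not the paper's settings):

```python
import numpy as np

def smote_oversample(X, n_new, k=3, rng=None):
    """Generate n_new synthetic minority-class samples from X of shape (n, d)."""
    rng = np.random.default_rng() if rng is None else rng
    synth = []
    for _ in range(n_new):
        i = rng.integers(len(X))
        d = np.linalg.norm(X - X[i], axis=1)       # distances to every sample
        nn = np.argsort(d)[1:k + 1]                # k nearest neighbors (skip self)
        j = rng.choice(nn)
        lam = rng.random()                         # interpolation factor in [0, 1)
        synth.append(X[i] + lam * (X[j] - X[i]))   # point on the segment i -> j
    return np.array(synth)

minority = np.random.default_rng(0).random((10, 4))   # dummy minority-class features
extra = smote_oversample(minority, n_new=5, rng=np.random.default_rng(1))
```

Because every synthetic point lies on a segment between two real minority samples, the oversampled class stays inside the original feature distribution rather than merely duplicating rows.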
7. Pattern classification of interstitial lung diseases from computed tomography images using a ResNet-based network with a split-transform-merge strategy and split attention. Phys Eng Sci Med 2024. PMID: 38436886. DOI: 10.1007/s13246-024-01404-1. Received 09/08/2023; accepted 02/09/2024; indexed 03/05/2024.
Abstract
In patients with interstitial lung disease (ILD), accurate pattern assessment from computed tomography (CT) images could help track lung abnormalities and evaluate treatment efficacy. Owing to their excellent image classification performance, convolutional neural networks (CNNs) have been extensively investigated for classifying and labeling pathological patterns in the CT images of ILD patients. However, previous studies rarely considered the three-dimensional (3D) structure of the pathological patterns of ILD and used two-dimensional network input. In addition, ResNet-based networks with high classification performance, such as SE-ResNet and ResNeXt, have not been used for pattern classification of ILD. This study proposed SE-ResNeXt-SA-18 for classifying pathological patterns of ILD. The SE-ResNeXt-SA-18 integrates the multipath design of ResNeXt and the feature weighting of the squeeze-and-excitation network with split attention. Its classification performance was compared with ResNet-18 and SE-ResNeXt-18, and the influence of input patch size was also evaluated. Results show that classification accuracy increased with patch size: with a 32 × 32 × 16 input, SE-ResNeXt-SA-18 achieved the highest performance, with average accuracy, sensitivity, and specificity of 0.991, 0.979, and 0.994. High-weight regions in its class activation maps also matched the specific pattern features. In comparison, the performance of SE-ResNeXt-SA-18 is superior to previously reported CNNs for classifying ILD patterns. We conclude that SE-ResNeXt-SA-18 could help track or monitor the progress of ILD through accurate pattern classification.
8. Explainable deep learning diagnostic system for prediction of lung disease from medical images. Comput Biol Med 2024; 170:108012. PMID: 38262202. DOI: 10.1016/j.compbiomed.2024.108012. Received 09/22/2023; revised 12/26/2023; accepted 01/17/2024; indexed 01/25/2024.
Abstract
Around the globe, respiratory lung diseases pose a severe threat to human survival. With the central goal of reducing transmission from infected to healthy persons, several technologies have evolved for diagnosing lung pathologies. One emerging technology is computer-vision-based Artificial Intelligence (AI) for processing a wide variety of medical imaging, but AI methods without explainability are often treated as a black box. With a view to demystifying the rationale behind AI decisions, this paper designed and developed a novel low-cost explainable deep-learning diagnostic tool for predicting lung disease from medical images. For this, we investigated explainable deep learning (DL) models (conventional DL and vision transformers (ViTs)) for predicting the presence of pneumonia, COVID-19, or no disease from both original and data-augmentation (DA)-based medical images (from two chest X-ray datasets). The results show that a DA scheme combining cropping, rotation, and horizontal flipping (CROP+ROT+HF) to transform input images, passed to an Inception-V3 architecture, yielded performance surpassing all the ViTs and other conventional DL approaches on most of the evaluated metrics. Overall, the results suggest that data augmentation schemes helped the DL methods achieve higher classification accuracies. Furthermore, we compared five class activation mapping (CAM) algorithms (GradCAM, GradCAM++, EigenGradCAM, AblationCAM, and RandomCAM); most of the examined CAM algorithms were effective in identifying the attention region indicating pneumonia or COVID-19 in the medical images (chest X-rays). Our low-cost AI diagnostic tool (a pilot system) can assist medical experts and radiographers in providing early diagnosis of lung disease. For this, we selected five to seven deep learning models, and the explainable algorithms were deployed on a novel web interface implemented via the Gradio framework.
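The CROP+ROT+HF chain can be illustrated on a plain 2-D array; 90-degree rotation stands in for the arbitrary-angle rotation an image library would provide, and the crop margin is an illustrative assumption:

```python
import numpy as np

def crop_rot_hf(img, crop=1, k=1):
    """Apply the CROP+ROT+HF augmentation chain to a 2-D image array.

    crop: pixels trimmed from each border; k: number of 90-degree rotations
    (a stand-in for arbitrary-angle rotation, which needs interpolation).
    """
    h, w = img.shape
    out = img[crop:h - crop, crop:w - crop]   # central crop
    out = np.rot90(out, k)                    # rotate
    return out[:, ::-1]                       # horizontal flip

x = np.arange(36, dtype=float).reshape(6, 6)  # dummy 6x6 image
aug = crop_rot_hf(x, crop=1, k=1)
```

Each transform preserves the diagnostic content while changing pixel geometry, which is why chaining them multiplies the effective size of a small chest X-ray dataset.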
9. EfficientNet-Based System for Detecting EGFR-Mutant Status and Predicting Prognosis of Tyrosine Kinase Inhibitors in Patients with NSCLC. Journal of Imaging Informatics in Medicine 2024. PMID: 38361006. DOI: 10.1007/s10278-024-01022-z. Received 09/29/2023; revised 12/29/2023; accepted 01/09/2024; indexed 02/17/2024.
Abstract
We aimed to develop and validate a deep learning-based system using pre-therapy computed tomography (CT) images to detect epidermal growth factor receptor (EGFR)-mutant status in patients with non-small cell lung cancer (NSCLC) and predict the prognosis of advanced-stage patients with EGFR mutations treated with EGFR tyrosine kinase inhibitors (TKI). This retrospective, multicenter study included 485 patients with NSCLC from four hospitals. Of them, 339 patients from three centers were included in the training dataset to develop an EfficientNetV2-L-based model (EME) for predicting EGFR-mutant status, and the remaining patients were assigned to an independent test dataset. EME semantic features were extracted to construct an EME-prognostic model to stratify the prognosis of EGFR-mutant NSCLC patients receiving EGFR-TKI. A comparison of EME and radiomics was conducted. Additionally, we included patients from The Cancer Genome Atlas lung adenocarcinoma dataset with both CT images and RNA sequencing data to explore the biological associations between EME score and EGFR-related biological processes. EME obtained an area under the curve (AUC) of 0.907 (95% CI 0.840-0.926) on the test dataset, superior to the radiomics model (P = 0.007). The EME and radiomics fusion model showed better (AUC, 0.941) but not significantly increased performance (P = 0.895) compared with EME. In prognostic stratification, the EME-prognostic model achieved the best performance (C-index, 0.711). Moreover, the EME-prognostic score showed strong associations with biological pathways related to EGFR expression and EGFR-TKI efficacy. EME demonstrated a non-invasive and biologically interpretable approach to predict EGFR status, stratify survival prognosis, and correlate biological pathways in patients with NSCLC.
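The C-index of 0.711 reported for the prognostic model is the fraction of comparable patient pairs whose predicted risks are ordered consistently with their survival times. A minimal sketch of Harrell's concordance index on toy data:

```python
import numpy as np

def c_index(time, event, risk):
    """Harrell's concordance index.

    time: follow-up times; event: 1 if the event occurred (uncensored);
    risk: predicted risk scores (higher risk -> expected shorter survival).
    """
    num = den = 0.0
    n = len(time)
    for i in range(n):
        for j in range(n):
            # a pair is comparable if i had the event and did so before j's time
            if event[i] == 1 and time[i] < time[j]:
                den += 1
                if risk[i] > risk[j]:
                    num += 1          # concordant pair
                elif risk[i] == risk[j]:
                    num += 0.5        # tied risks count half
    return num / den

t = np.array([2.0, 4.0, 6.0, 8.0])      # toy survival times
e = np.array([1, 1, 1, 1])              # all events observed
r = np.array([0.9, 0.7, 0.4, 0.1])      # risks perfectly anti-ordered with time
ci = c_index(t, e, r)
```

A value of 0.5 means random ordering and 1.0 means perfect risk ranking, so 0.711 indicates a moderately informative stratification.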
10. A Novel Classification Model Using Optimal Long Short-Term Memory for Classification of COVID-19 from CT Images. J Digit Imaging 2023; 36:2480-2493. PMID: 37491543. PMCID: PMC10584759. DOI: 10.1007/s10278-023-00852-7. Received 02/22/2023; revised 05/13/2023; accepted 05/15/2023; indexed 07/27/2023.
Abstract
The human respiratory system is affected when an individual is infected with COVID-19, which became a global pandemic in 2020 and affected millions of people worldwide. However, accurate diagnosis of COVID-19 can be challenging due to small differences between typical pneumonia and COVID-19 pneumonia, as well as the complexities involved in classifying infection regions. Currently, various deep learning (DL)-based methods are being introduced for the automatic detection of COVID-19 using computed tomography (CT) scan images. In this paper, we propose the pelican optimization algorithm-based long short-term memory (POA-LSTM) method for classifying coronavirus from CT scan images. A data preprocessing technique converts raw image data into a suitable format for the subsequent steps. We develop a general framework, no new U-Net (nnU-Net), for region of interest (ROI) segmentation in medical images, applying a set of domain-derived heuristic guidelines to systematically optimize the ROI segmentation task according to the dataset's key properties. Furthermore, high-resolution net (HRNet), a standard neural network design for feature extraction, is used. HRNet adopts the top-down strategy over the bottom-up method: it first detects the subject, generates a bounding box around the object, and then estimates the relevant features. The POA is used to minimize the subjective influence of manually selected parameters and to tune the LSTM's parameters. The resulting POA-LSTM classifier achieves high performance on every metric, with accuracy, sensitivity, F1-score, precision, and specificity of 99%, 98.67%, 98.88%, 98.72%, and 98.43%, respectively.
11. Multimodal and multi-omics-based deep learning model for screening of optic neuropathy. Heliyon 2023; 9:e22244. PMID: 38046141. PMCID: PMC10686864. DOI: 10.1016/j.heliyon.2023.e22244. Received 06/08/2023; revised 11/06/2023; accepted 11/07/2023; indexed 12/05/2023.
Abstract
Purpose: To examine the use of multimodal data and multi-omics strategies for optic nerve disease screening.
Methods: This was a single-center retrospective study. A deep learning model was created from fundus photography and infrared reflectance (IR) images of patients with diabetic optic neuropathy, glaucomatous optic neuropathy, and optic neuritis. Patients seen at the Ophthalmology Department of the First Affiliated Hospital of Nanchang University in Jiangxi Province from November 2019 to April 2023 were included. The data were analyzed in single-modal and multimodal modes using traditional omics, Resnet101, and fusion models, and the accuracy and area under the curve (AUC) of each model were compared.
Results: A total of 312 images (fundus and infrared fundus photographs) were collected from 156 patients. With multimodal data, the accuracy of the traditional omics, Resnet101, and fusion models on the training set was 0.97, 0.98, and 0.99, respectively; on the test set, it was 0.72, 0.87, and 0.88, respectively. We compared single-modal and multimodal states by applying the data to the different groups in the learning model. In the traditional omics model, the macro-average AUCs of features extracted from fundus photography, IR images, and multimodal data were 0.94, 0.90, and 0.96, respectively. When the same data were processed in the Resnet101 model, the scores were all 0.97. With multimodal data, the macro-average AUCs of the traditional omics, Resnet101, and fusion models were 0.96, 0.97, and 0.99, respectively.
Conclusion: A deep learning model based on multimodal data and multi-omics strategies can improve the accuracy of screening and diagnosing diabetic optic neuropathy, glaucomatous optic neuropathy, and optic neuritis.
12. Accurate staging of chick embryonic tissues via deep learning of salient features. Development 2023; 150:dev202068. PMID: 37830145. PMCID: PMC10690058. DOI: 10.1242/dev.202068. Received 06/07/2023; accepted 10/05/2023; indexed 10/14/2023.
Abstract
Recent work shows that the developmental potential of progenitor cells in the HH10 chick brain changes rapidly, accompanied by subtle changes in morphology. This demands increased temporal resolution for studies of the brain at this stage, necessitating precise and unbiased staging. Here, we investigated whether we could train a deep convolutional neural network to sub-stage HH10 chick brains using a small dataset of 151 expertly labelled images. By augmenting our images with biologically informed transformations and data-driven preprocessing steps, we successfully trained a classifier to sub-stage HH10 brains to 87.1% test accuracy. To determine whether our classifier could be generally applied, we re-trained it using 269 images of randomised control and experimental chick wings, and obtained similarly high test accuracy (86.1%). Saliency analyses revealed that biologically relevant features are used for classification. Our strategy enables training of image classifiers for various applications in developmental biology with limited microscopy data.
13. COVID-19 and beyond: leveraging artificial intelligence for enhanced outbreak control. Front Artif Intell 2023; 6:1266560. PMID: 38028660. PMCID: PMC10663297. DOI: 10.3389/frai.2023.1266560. Received 07/25/2023; accepted 10/02/2023; indexed 12/01/2023.
Abstract
COVID-19 has brought significant changes to our political, social, and technological landscape. This paper explores the emergence and global spread of the disease and focuses on the role of Artificial Intelligence (AI) in containing its transmission. To the best of our knowledge, there has been no prior scientific presentation of the early pictorial representation of the disease's spread. Additionally, we outline various domains where AI made a significant impact during the pandemic. Our methodology involves searching relevant articles on COVID-19 and AI in leading databases such as PubMed and Scopus to identify the ways AI has addressed pandemic-related challenges and its potential for further assistance. While research suggests that AI has not fully realized its potential against COVID-19, likely due to limitations in data quality and diversity, we review and identify key areas where AI has been crucial in preparing the fight against any sudden disease outbreak. We also propose ways to maximize the utilization of AI's capabilities in this regard.
14. Impact of social distancing on disease transmission risk in the context of a pandemic. Phys Rev E 2023; 108:054115. PMID: 38115525. DOI: 10.1103/physreve.108.054115. Received 06/28/2023; accepted 10/12/2023; indexed 12/21/2023.
Abstract
Changes in pedestrian dynamics caused by social distancing policies place new demands on pedestrian motion modeling during a pandemic. This study summarizes pedestrian movement characteristics during the pandemic, based on which the traditional floor-field cellular automaton model was improved by introducing two floor fields related to pedestrian density to simulate social distancing in crowded places. In particular, the cumulative density field guides pedestrians in route selection, thereby compensating for the limitation of previous models in which only local repulsion was considered. By selecting an appropriate combination of parameters, the desired social-distancing behavior can be observed, and the rationality of the model is verified against the fundamental diagram. Moreover, to assess the influence of social distancing on the risk of disease transmission, we considered both person-to-person and environment-to-person transmission. The simulation results show that although social distancing is effective in preventing interpersonal transmission, an increase in environmental transmission may somewhat offset this effect. We also examined the influence of individual motion heterogeneity on infection spread and found that containment was best when only patients complied with the social distancing restriction. The trade-off between safety and efficiency associated with social distancing was also initially explored in this study.
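The floor-field move rule behind such models can be illustrated minimally: each pedestrian scores candidate cells by a static field (here, distance to the exit), with occupied cells blocked as a crude stand-in for the paper's density-based fields (the grid and rule details are illustrative assumptions):

```python
import numpy as np

def step(pos, exit_pos, occupied):
    """Move one pedestrian one cell toward the exit on a grid.

    Each free von Neumann neighbor is scored by the static floor field
    (Euclidean distance to the exit); occupied cells are blocked, a
    simplification of the model's density-repulsion fields.
    """
    best = pos
    best_score = np.hypot(*(np.subtract(exit_pos, pos)))   # staying put is the baseline
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        cand = (pos[0] + dr, pos[1] + dc)
        if cand in occupied:
            continue                                        # occupancy blocks the move
        score = np.hypot(*(np.subtract(exit_pos, cand)))
        if score < best_score:
            best, best_score = cand, score
    return best

free_move = step((5, 5), exit_pos=(0, 5), occupied=set())       # straight path open
blocked_move = step((5, 5), exit_pos=(0, 5), occupied={(4, 5)})  # straight path blocked
```

Replacing the blocking set with graded density penalties, as the paper does, lets pedestrians trade detour length against keeping distance.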
15. The CNN model aided the study of the clinical value hidden in the implant images. J Appl Clin Med Phys 2023; 24:e14141. PMID: 37656066. PMCID: PMC10562019. DOI: 10.1002/acm2.14141. Received 03/10/2023; revised 08/14/2023; accepted 08/16/2023; indexed 09/02/2023.
Abstract
Purpose: This article aims to construct a new method for evaluating radiographic image identification results based on artificial intelligence, which can complement the limited vision of researchers when studying the effect of various factors on clinical implantation outcomes.
Methods: We constructed a convolutional neural network (CNN) model using clinical implant radiographic images and used gradient-weighted class activation mapping (Grad-CAM) to obtain heat maps presenting identification differences before performing statistical analyses. To verify whether the differences highlighted by the Grad-CAM algorithm would be of value to clinical practice, we measured the bone thickness around the identified sites. Finally, we analyzed the influence of implant type on implantation according to the measurement results.
Results: (1) The heat maps showed that the sites with significant differences between Straumann BL and Bicon implants, as identified by the CNN model, were mainly the thread and neck areas. (2) The heights of the mesial, distal, buccal, and lingual bone of the Bicon implant post-op were greater than those of the Straumann BL (P < 0.05). (3) Between the first and second stages of surgery, the bone thickness variation at the buccal and lingual sides of the Bicon implant platform was greater than that of the Straumann BL implant (P < 0.05).
Conclusion: We found that the identified neck area of the Bicon implant was placed deeper than that of the Straumann BL implant, and there was more bone resorption on the buccal and lingual sides of the Bicon implant platform between the first and second stages of surgery. In summary, this study shows that a CNN classification model can identify differences that complement our limited vision.
16. FP-CNN: Fuzzy pooling-based convolutional neural network for lung ultrasound image classification with explainable AI. Comput Biol Med 2023; 165:107407. PMID: 37678140. DOI: 10.1016/j.compbiomed.2023.107407. Received 07/11/2022; revised 08/08/2023; accepted 08/26/2023; indexed 09/09/2023.
Abstract
The COVID-19 pandemic wreaked havoc on healthcare systems across the world. In pandemic scenarios like COVID-19, the applicability of diagnostic modalities is crucial in medical diagnosis, and non-invasive ultrasound imaging has the potential to be a useful biomarker. This research develops a computer-assisted intelligent methodology for ultrasound lung image classification using a fuzzy pooling-based convolutional neural network (FP-CNN) that provides underlying evidence for particular decisions. The fuzzy-pooling method finds more representative features for ultrasound image classification. The FP-CNN model categorizes ultrasound images into one of three classes: COVID-19, disease-free (normal), and pneumonia. Explanations of diagnostic decisions are crucial to ensure the fairness of an intelligent system. This research uses Shapley Additive Explanations (SHAP) to explain the predictions of the FP-CNN models; the prediction of the black-box model is illustrated using SHAP explanations of its intermediate layers. To determine the most effective model, we tested different state-of-the-art convolutional neural network architectures with various training strategies, including fine-tuned models, single-layer fuzzy pooling models, and fuzzy pooling at all pooling layers. Among the architectures, the Xception model with fuzzy pooling at all pooling layers achieves the best classification accuracy of 97.2%. We hope our proposed method will be helpful for the clinical diagnosis of COVID-19 from lung ultrasound (LUS) images.
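SHAP approximates Shapley values from cooperative game theory; for a handful of features they can be computed exactly by averaging each feature's marginal contribution over all subsets. A toy sketch of that exact computation (the `value` function passed in is a stand-in, not the FP-CNN pipeline):

```python
# Hedged illustration of the exact Shapley value that SHAP approximates.
from itertools import combinations
from math import factorial

def shapley_values(features, value):
    """Exact Shapley value of each feature, given value(subset) -> float."""
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for r in range(n):
            for subset in combinations(others, r):
                s = len(subset)
                # Weight of this coalition in the Shapley average.
                weight = factorial(s) * factorial(n - s - 1) / factorial(n)
                # Marginal contribution of f to the coalition.
                total += weight * (value(set(subset) | {f}) - value(set(subset)))
        phi[f] = total
    return phi
```

For an additive value function, each feature's Shapley value equals its individual contribution, which makes the exact computation easy to sanity-check; real SHAP libraries use sampling or model-specific shortcuts because this brute force is exponential in the number of features.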
|
17
|
A Systematic Review on Deep Structured Learning for COVID-19 Screening Using Chest CT from 2020 to 2022. Healthcare (Basel) 2023; 11:2388. [PMID: 37685422 PMCID: PMC10486542 DOI: 10.3390/healthcare11172388] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2023] [Revised: 08/16/2023] [Accepted: 08/22/2023] [Indexed: 09/10/2023] Open
Abstract
The emergence of the COVID-19 pandemic in Wuhan in 2019 led to the discovery of a novel coronavirus. The World Health Organization (WHO) designated it as a global pandemic on 11 March 2020 due to its rapid and widespread transmission. Its impact has had profound implications, particularly in the realm of public health. Extensive scientific endeavors have been directed towards devising effective treatment strategies and vaccines. Within the healthcare and medical imaging domain, the application of artificial intelligence (AI) has brought significant advantages. This study delves into peer-reviewed research articles spanning the years 2020 to 2022, focusing on AI-driven methodologies for the analysis and screening of COVID-19 through chest CT scan data. We assess the efficacy of deep learning algorithms in facilitating decision-making processes. Our exploration encompasses various facets, including data collection, systematic contributions, emerging techniques, and encountered challenges. However, the comparison of outcomes between 2020 and 2022 proves intricate due to shifts in dataset magnitudes over time. The initiatives aimed at developing AI-powered tools for the detection, localization, and segmentation of COVID-19 cases are primarily centered on educational and training contexts. We deliberate on their merits and constraints, particularly in the context of necessitating cross-population train/test models. Our analysis encompassed a review of 231 research publications, bolstered by a meta-analysis employing the search keywords (COVID-19 OR Coronavirus) AND chest CT AND (deep learning OR artificial intelligence OR medical imaging) on both the PubMed Central Repository and Web of Science platforms.
|
18
|
Rank-ordering of known enzymes as starting points for re-engineering novel substrate activity using a convolutional neural network. Metab Eng 2023; 78:171-182. [PMID: 37301359 DOI: 10.1016/j.ymben.2023.06.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2022] [Revised: 05/19/2023] [Accepted: 06/02/2023] [Indexed: 06/12/2023]
Abstract
Retro-biosynthetic approaches have made significant advances in predicting synthesis routes of target biofuel, bio-renewable, or bio-active molecules. The use of only cataloged enzymatic activities limits the discovery of new production routes. Recent retro-biosynthetic algorithms increasingly use novel conversions that require altering the substrate or cofactor specificities of existing enzymes while connecting pathways leading to a target metabolite. However, identifying and re-engineering enzymes for desired novel conversions are currently the bottlenecks in implementing such designed pathways. Herein, we present EnzRank, a convolutional neural network (CNN)-based approach to rank-order existing enzymes by their suitability for successful protein engineering, through directed evolution or de novo design, towards activity on a desired substrate. We train the CNN model on 11,800 known active enzyme-substrate pairs from the BRENDA database as positive samples, and we generate negative samples by scrambling these pairs, using the Tanimoto similarity score between an enzyme's native substrate and every other molecule in the dataset to ensure substrate dissimilarity. EnzRank achieves average recovery rates of 80.72% and 73.08% for positive and negative pairs on test data after using a 10-fold holdout method for training and cross-validation. We further developed a web-based user interface (available at https://huggingface.co/spaces/vuu10/EnzRank) that predicts enzyme-substrate activity from substrate SMILES strings and enzyme sequences as input, allowing convenient and easy access to EnzRank. In summary, this effort can aid de novo pathway design tools in prioritizing starting candidates for enzyme re-engineering towards novel reactions, as well as in predicting the potential secondary activity of enzymes in cell metabolism.
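The Tanimoto (Jaccard) score underlying the substrate-dissimilarity check is straightforward once molecules are reduced to fingerprint bit sets. A minimal sketch (illustrative only; EnzRank presumably computes fingerprints with a cheminformatics library such as RDKit rather than this hand-rolled helper):

```python
# Hedged sketch: Tanimoto similarity on sets of "on" fingerprint bit indices.
def tanimoto(fp_a, fp_b):
    """Return |A ∩ B| / |A ∪ B| for two fingerprint bit sets."""
    if not fp_a and not fp_b:
        return 1.0  # convention: two empty fingerprints are identical
    inter = len(fp_a & fp_b)
    return inter / (len(fp_a) + len(fp_b) - inter)

def dissimilarity(fp_a, fp_b):
    """Substrate dissimilarity as used for negative-sample selection."""
    return 1.0 - tanimoto(fp_a, fp_b)
```

Two fingerprints sharing 2 of 4 total bits score 0.5, and a scrambled pair is kept as a negative sample only when this score (equivalently, a high dissimilarity) indicates the decoy substrate differs enough from the native one.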
|
19
|
Detection of various lung diseases including COVID-19 using extreme learning machine algorithm based on the features extracted from a lightweight CNN architecture. Biocybern Biomed Eng 2023; 43:S0208-5216(23)00037-2. [PMID: 38620111 PMCID: PMC10292668 DOI: 10.1016/j.bbe.2023.06.003] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2022] [Revised: 04/04/2023] [Accepted: 06/16/2023] [Indexed: 11/09/2023]
Abstract
Around the world, several lung diseases, such as pneumonia, cardiomegaly, and tuberculosis (TB), contribute to severe illness, hospitalization, or even death, particularly for elderly and medically vulnerable patients. In the last few decades, several new types of lung-related diseases have taken the lives of millions of people, and COVID-19 alone has taken almost 6.27 million lives. To fight lung diseases, timely and correct diagnosis with appropriate treatment is crucial in the current COVID-19 pandemic. In this study, an intelligent recognition system for seven lung diseases is proposed, based on machine learning (ML) techniques, to aid medical experts. Chest X-ray (CXR) images of lung diseases were collected from several publicly available databases. A lightweight convolutional neural network (CNN) was used to extract characteristic features from the raw pixel values of the CXR images. The best feature subset was identified using the Pearson Correlation Coefficient (PCC). Finally, an extreme learning machine (ELM) was used to perform the classification task, providing faster learning and reduced computational complexity. The proposed CNN-PCC-ELM model achieved an accuracy of 96.22% with an Area Under the Curve (AUC) of 99.48% for eight-class classification. The model outperformed existing state-of-the-art (SOTA) models in COVID-19, pneumonia, and tuberculosis detection in both binary and multiclass classification. For eight-class classification, the proposed model achieved precision, recall, F1-score, and ROC-AUC of 100%, 99%, 100%, and 99.99%, respectively, for COVID-19 detection, demonstrating its robustness. The proposed model therefore surpasses existing pioneering models in accurately differentiating COVID-19 from other lung diseases, which can assist physicians in treating patients effectively.
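Pearson-correlation feature ranking of the kind described can be sketched in a few lines. Note that the `rank_features` helper and its select-by-absolute-correlation rule are our assumption of how "the best feature subset" was scored, not the paper's exact procedure:

```python
# Hedged sketch of PCC-based feature ranking (illustrative, pure Python).
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def rank_features(feature_columns, labels):
    """Return feature indices sorted by |PCC with the label|, strongest first."""
    scores = [abs(pearson(col, labels)) for col in feature_columns]
    return sorted(range(len(scores)), key=lambda i: -scores[i])
```

The top-ranked indices would then be kept as the reduced feature subset fed to the ELM classifier.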
|
20
|
A comprehensive review of COVID-19 detection with machine learning and deep learning techniques. HEALTH AND TECHNOLOGY 2023; 13:1-14. [PMID: 37363343 PMCID: PMC10244837 DOI: 10.1007/s12553-023-00757-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2022] [Accepted: 05/14/2023] [Indexed: 06/28/2023]
Abstract
Purpose The first transmission of coronavirus to humans started in the Wuhan city of China and took the shape of a pandemic called Coronavirus Disease 2019 (COVID-19), posing a principal threat to the entire world. Researchers are trying to incorporate artificial intelligence (machine learning or deep learning models) for the efficient detection of COVID-19. This review explores the existing machine learning (ML) and deep learning (DL) models used for COVID-19 detection, which may help researchers explore different directions. Its main purpose is to present a compact overview of the application of artificial intelligence to research experts, helping them explore future scopes of improvement. Methods Researchers have used various machine learning, deep learning, and combined machine and deep learning models to extract significant features and classify various health conditions in COVID-19 patients. For this purpose, they have utilized different image modalities, such as CT scan and X-ray. This study collected over 200 research papers from various repositories such as Google Scholar, PubMed, and Web of Science. These papers were passed through several levels of scrutiny, and 50 research articles were finally selected. Results In the listed articles, the ML/DL models showed an accuracy of 99% and above when classifying COVID-19. This study also presents the clinical applications of this research and specifies the importance of machine and deep learning models in the field of medical diagnosis and research. Conclusion In conclusion, it is evident that ML/DL models have made significant progress in recent years, but limitations remain. Overfitting is one such limitation, which can lead to incorrect predictions and overburdened models. The research community must continue working to overcome these limitations and make machine and deep learning models even more effective and efficient. Through this ongoing research and development, we can expect even greater advances in the future.
|
21
|
A Novel Approach for Prediction of Lung Disease Using Chest X-ray Images Based on DenseNet and MobileNet. WIRELESS PERSONAL COMMUNICATIONS 2023:1-15. [PMID: 37360137 PMCID: PMC10177707 DOI: 10.1007/s11277-023-10489-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Accepted: 04/24/2023] [Indexed: 06/28/2023]
Abstract
The COVID-19 coronavirus has caused widespread disruption across the world in terms of health, the economy, and society. X-ray images of the chest can be helpful in making an accurate diagnosis because the coronavirus typically first manifests its symptoms in patients' lungs. In this study, a classification method based on deep learning is proposed as a means of identifying lung disease from chest X-ray images. The detection of COVID-19 from chest X-ray images was performed with MobileNet and DenseNet models, which are deep learning methods. Several different use cases can be built with the MobileNet model, and a case modelling approach is utilized to achieve 96% accuracy and an Area Under the Curve (AUC) value of 94%. According to these results, the proposed method may be able to more accurately identify the signs of an impurity in a dataset of chest X-ray images. This research also compares various performance parameters, such as precision, recall, and F1-score.
|
22
|
The Feasibility and Performance of Total Hip Replacement Prediction Deep Learning Algorithm with Real World Data. Bioengineering (Basel) 2023; 10:bioengineering10040458. [PMID: 37106645 PMCID: PMC10136253 DOI: 10.3390/bioengineering10040458] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2023] [Revised: 03/15/2023] [Accepted: 04/04/2023] [Indexed: 04/29/2023] Open
Abstract
(1) Background: Hip degenerative disorder is a common geriatric disease and one of the main causes of total hip replacement (THR). The surgical timing of THR is crucial for post-operative recovery. Deep learning (DL) algorithms can be used to detect anomalies in medical images and predict the need for THR. Real-world data (RWD) have been used to validate artificial intelligence and DL algorithms in medicine, but no previous study had demonstrated their use for THR prediction. (2) Methods: We designed a sequential two-stage hip replacement prediction deep learning algorithm to identify, from plain pelvic radiography (PXR), hip joints likely to require THR within three months. We also collected RWD to validate the performance of this algorithm. (3) Results: The RWD comprised 3766 PXRs from 2018 to 2019 in total. The overall accuracy of the algorithm was 0.9633; sensitivity was 0.9450; specificity was 1.000; and precision was 1.000. The negative predictive value was 0.9009, the false negative rate was 0.0550, and the F1 score was 0.9717. The area under the curve was 0.972, with a 95% confidence interval from 0.953 to 0.987. (4) Conclusions: This DL algorithm provides an accurate and reliable method for detecting hip degeneration and predicting the need for further THR. RWD offered an alternative means of supporting the algorithm and validated its function, saving time and cost.
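The reported figures are internally consistent with the standard confusion-matrix formulas; for instance, with precision 1.000 and sensitivity 0.9450, F1 = 2·(1.000·0.9450)/(1.000 + 0.9450) ≈ 0.9717, exactly as stated. A sketch of those formulas (the formulas are standard; any counts used in an example call are hypothetical, not the study's data):

```python
# Standard binary-classification metrics from confusion-matrix counts.
def binary_metrics(tp, fp, tn, fn):
    sens = tp / (tp + fn)                  # sensitivity / recall
    spec = tn / (tn + fp)                  # specificity
    prec = tp / (tp + fp)                  # precision / PPV
    npv = tn / (tn + fn)                   # negative predictive value
    acc = (tp + tn) / (tp + fp + tn + fn)  # accuracy
    f1 = 2 * prec * sens / (prec + sens)   # F1 score (harmonic mean)
    return {"sens": sens, "spec": spec, "prec": prec,
            "npv": npv, "acc": acc, "f1": f1}
```

With zero false positives (specificity and precision both 1.000, as reported), F1 depends only on sensitivity, which is why the F1 of 0.9717 follows directly from the 0.9450 sensitivity.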
|
23
|
Deep Convolutional Neural Networks for Detecting COVID-19 Using Medical Images: A Survey. NEW GENERATION COMPUTING 2023; 41:343-400. [PMID: 37229176 PMCID: PMC10071474 DOI: 10.1007/s00354-023-00213-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/14/2022] [Accepted: 02/23/2023] [Indexed: 05/27/2023]
Abstract
Coronavirus Disease 2019 (COVID-19), which is caused by Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), surprised the world in December 2019 and has threatened the lives of millions of people. Countries all over the world closed worship places and shops, prevented gatherings, and implemented curfews to stand against the spread of COVID-19. Deep Learning (DL) and Artificial Intelligence (AI) can have a great role in detecting and fighting this disease. Deep learning can be used to detect COVID-19 symptoms and signs from different imaging modalities, such as X-ray, Computed Tomography (CT), and Ultrasound (US) images. This could help in identifying COVID-19 cases as a first step toward curing them. In this paper, we reviewed the research studies conducted from January 2020 to September 2022 on deep learning models used in COVID-19 detection. This paper clarifies the three most common imaging modalities (X-ray, CT, and US), in addition to the DL approaches used in this detection, and compares these approaches. It also provides future directions for this field in the fight against COVID-19.
|
24
|
StynMedGAN: Medical images augmentation using a new GAN model for improved diagnosis of diseases. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2023. [DOI: 10.3233/jifs-223996] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/07/2023]
Abstract
Deep networks require a considerable amount of training data; otherwise, they generalize poorly. Data augmentation techniques help a network generalize better by providing more variety in the training data. Standard data augmentation techniques, such as flipping and scaling, produce new data that are modified versions of the original data. Generative adversarial networks (GANs) have been designed to generate new data that can be exploited. In this paper, we propose a new GAN model, named StynMedGAN, for synthetically generating medical images to improve the performance of classification models. StynMedGAN builds upon the state-of-the-art StyleGANv2, which has produced remarkable results generating all kinds of natural images. We introduce a regularization term, a normalized loss factor, into the existing discriminator loss of StyleGANv2. It forces the generator to produce normalized images and penalizes it if it fails. Because medical imaging modalities such as X-rays, CT scans, and MRIs differ in nature, we show that the proposed GAN extends the capacity of StyleGANv2 to handle medical images better. This new GAN model (StynMedGAN) is applied to three types of medical imaging, X-rays, CT scans, and MRI, to produce more data for classification tasks. To validate the effectiveness of the proposed model for classification, three classifiers (CNN, DenseNet121, and VGG-16) were used. The results show that classifiers trained with StynMedGAN-augmented data outperform the same methods trained on the original data alone. The proposed model achieved accuracies of 100%, 99.6%, and 100% for chest X-ray, chest CT scan, and brain MRI, respectively. The results are promising and point to a potentially important resource that practitioners and radiologists can use to diagnose different diseases.
|
25
|
Multimodality Imaging of COVID-19 Using Fine-Tuned Deep Learning Models. Diagnostics (Basel) 2023; 13:diagnostics13071268. [PMID: 37046486 PMCID: PMC10093688 DOI: 10.3390/diagnostics13071268] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2023] [Revised: 03/22/2023] [Accepted: 03/23/2023] [Indexed: 03/30/2023] Open
Abstract
In the face of the COVID-19 pandemic, many studies have been undertaken to provide assistive recommendations to patients to help overcome the burden of the expected shortage in clinicians. Thus, this study focused on diagnosing the COVID-19 virus using a set of fine-tuned deep learning models to overcome the latency in virus checkups. Five recent deep learning algorithms (EfficientB0, VGG-19, DenseNet121, EfficientB7, and MobileNetV2) were utilized to label both CT scan and chest X-ray images as positive or negative for COVID-19. The experimental results showed the superiority of the proposed method compared to state-of-the-art methods in terms of precision, sensitivity, specificity, F1 score, accuracy, and data access time.
|
26
|
COVID-19 disease identification network based on weakly supervised feature selection. MATHEMATICAL BIOSCIENCES AND ENGINEERING : MBE 2023; 20:9327-9348. [PMID: 37161245 DOI: 10.3934/mbe.2023409] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/11/2023]
Abstract
The coronavirus disease 2019 (COVID-19) outbreak has resulted in countless infections and deaths worldwide, posing increasing challenges for the health care system. The use of artificial intelligence to assist in diagnosis not only achieved a high accuracy rate but also saved time and effort during the sudden outbreak phase, when doctors and medical equipment were scarce. This study proposes a weakly supervised COVID-19 classification network (W-COVNet). The network is divided into three main modules: a weakly supervised feature selection module (W-FS), a deep learning bilinear feature fusion module (DBFF), and a Grad-CAM++-based network visualization module (Grad-V). The first module, W-FS, mainly removes redundant background features from computed tomography (CT) images, performs feature selection, and retains core feature regions. The second module, DBFF, mainly uses two symmetric networks to extract different features and thus obtain rich complementary features. The third module, Grad-V, allows the visualization of lesions in unlabeled images. A fivefold cross-validation experiment showed an average classification accuracy of 85.3%, and a comparison with seven advanced classification models showed that our proposed network performs better.
|
27
|
Machine learning-based lung disease diagnosis from CT images using Gabor features in Littlewood Paley empirical wavelet transform (LPEWT) and LLE. COMPUTER METHODS IN BIOMECHANICS AND BIOMEDICAL ENGINEERING: IMAGING & VISUALIZATION 2023. [DOI: 10.1080/21681163.2023.2187244] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/14/2023]
|
28
|
Deep-Learning-Based Automated Identification and Visualization of Oral Cancer in Optical Coherence Tomography Images. Biomedicines 2023; 11:biomedicines11030802. [PMID: 36979780 PMCID: PMC10044902 DOI: 10.3390/biomedicines11030802] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2023] [Revised: 02/15/2023] [Accepted: 03/04/2023] [Indexed: 03/09/2023] Open
Abstract
Early detection and diagnosis of oral cancer are critical for a better prognosis, but accurate and automatic identification is difficult with the available technologies. Optical coherence tomography (OCT) can be used as a diagnostic aid owing to its high resolution and non-invasiveness. We aim to evaluate deep-learning-based algorithms for OCT images to assist clinicians in oral cancer screening and diagnosis. An OCT data set was first established, including normal mucosa, precancerous lesions, and oral squamous cell carcinoma. Then, three kinds of convolutional neural networks (CNNs) were trained and evaluated using four metrics (accuracy, precision, sensitivity, and specificity). Moreover, the CNN-based methods were compared against machine learning approaches on the same dataset. The results show that the performance of the CNNs, with a classification accuracy of up to 96.76%, is better than that of the machine-learning-based method, which achieved an accuracy of 92.52%. Moreover, lesions in the OCT images were visualized, and the rationality and interpretability of the model for distinguishing different oral tissues were evaluated. The automatic identification algorithm for OCT images based on deep learning thus has the potential to provide decision support for the effective screening and diagnosis of oral cancer.
|
29
|
Developing and validating a highly sensitive platelet clump detection model for the Sysmex haematology analyser. Ann Clin Biochem 2023; 60:126-135. [PMID: 36653307 DOI: 10.1177/00045632231154782] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/20/2023]
Abstract
BACKGROUND Mainstream haematology analysers (HAs) are reported to have low detection sensitivity for platelet clumps. In this study, a deep learning (DL) algorithm, a convolutional neural network (CNN), was implemented to detect platelet clumps. METHODS Adenosine diphosphate (ADP) was used to induce platelet aggregation to mimic platelet clumps detected (PCD) samples. Six types of leukocyte scattergrams were collected from the Sysmex XN-10. Multiple CNNs were then trained and validated on scattergrams using a fivefold cross-validation (CV) method. Finally, the CNN model with the best CV accuracy was tested on samples from practical routine work. RESULTS A total of 386 samples (190 PCD and 196 negative samples) and 4253 samples (150 PCD and 4103 negative samples) were eligible for CNN training and the practical test, respectively. The CNN with the highest CV accuracy was trained using scattergrams of side scatter (SSC) vs. forward scatter (FSC) from the white count and nucleated red blood cells (WNR) channel; its mean area under the curve (AUC), accuracy, specificity, and sensitivity were 0.968, 0.940, 0.937, and 0.942, respectively, in the CV. In the practical test, the AUC, accuracy, specificity, and sensitivity of the CNN were 0.916, 0.961, 0.860, and 0.965, respectively. Dispersed spots appearing around the leucocytes in the WNR channel may be a sign of platelet clumping. CONCLUSIONS This study demonstrates that CNN algorithms can identify platelet clumps based on optical information from dedicated leukocyte channels and have a higher ability to detect platelet clumps than the XN-10 device's internal algorithm under practical circumstances.
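The fivefold cross-validation used to select the best CNN can be sketched as a simple index splitter: each sample serves as validation data exactly once across the five folds. A minimal illustration (the study presumably relied on a library implementation such as scikit-learn's `KFold`; this helper is ours):

```python
# Hedged sketch of a k-fold cross-validation index splitter.
def k_fold_indices(n_samples, k=5):
    """Yield (train_idx, val_idx) pairs; every sample is validated exactly once."""
    # Distribute any remainder across the first folds so sizes differ by at most 1.
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        val = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n_samples))
        yield train, val
        start += size
```

A model would be trained on each `train` split and scored on the matching `val` split, and the reported CV metrics are the means over the five folds.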
|
30
|
Machine Learning in Metastatic Cancer Research: Potentials, Possibilities, and Prospects. Comput Struct Biotechnol J 2023; 21:2454-2470. [PMID: 37077177 PMCID: PMC10106342 DOI: 10.1016/j.csbj.2023.03.046] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2023] [Revised: 03/26/2023] [Accepted: 03/27/2023] [Indexed: 03/31/2023] Open
Abstract
Cancer is widely recognized for its high mortality rate, with metastatic cancer being the top cause of cancer-related deaths. Metastatic cancer involves the spread of the primary tumor to other body organs. Just as the early detection of cancer is essential, the timely detection of metastasis, the identification of biomarkers, and treatment choice are valuable for improving the quality of life of metastatic cancer patients. This study reviews the existing work on classical machine learning (ML) and deep learning (DL) in metastatic cancer research. Since the majority of metastatic cancer research data are collected in the form of PET/CT and MRI images, deep learning techniques are heavily involved. However, their black-box nature and expensive computational cost are notable concerns. Furthermore, existing models may be overestimated in their generality due to the non-diverse populations of clinical trial datasets. Therefore, research gaps are itemized; follow-up studies should be carried out on metastatic cancer using machine learning and deep learning tools with data handled in a symmetric manner.
|
31
|
RADIC: A tool for diagnosing COVID-19 from chest CT and X-ray scans using deep learning and quad-radiomics. CHEMOMETRICS AND INTELLIGENT LABORATORY SYSTEMS: AN INTERNATIONAL JOURNAL SPONSORED BY THE CHEMOMETRICS SOCIETY 2023; 233:104750. [PMID: 36619376 PMCID: PMC9807270 DOI: 10.1016/j.chemolab.2022.104750] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/11/2022] [Revised: 12/29/2022] [Accepted: 12/30/2022] [Indexed: 05/28/2023]
Abstract
Deep learning (DL) algorithms have demonstrated a high ability to perform speedy and accurate COVID-19 diagnosis utilizing computed tomography (CT) and X-ray scans. The spatial information in these images was used to train DL models in the majority of relevant studies. However, training these models with images generated by radiomics approaches could enhance diagnostic accuracy. Furthermore, combining information from several radiomics approaches with time-frequency representations of the COVID-19 patterns can increase performance even further. This study introduces "RADIC", an automated tool that uses three DL models trained on radiomics-generated images to detect COVID-19. First, four radiomics approaches are used to analyze the original CT and X-ray images. Next, each of the three DL models is trained on a different set of radiomics, X-ray, and CT images. Then, for each DL model, deep features are obtained and their dimensions are reduced using the Fast Walsh-Hadamard Transform, yielding a time-frequency representation of the COVID-19 patterns. The tool then uses the discrete cosine transform to combine these deep features. Finally, four classification models are used to perform the classification. To validate the performance of RADIC, two benchmark COVID-19 datasets (CT and X-ray) are employed. The final accuracy attained using RADIC is 99.4% and 99% for the first and second datasets, respectively. To prove the competitiveness of RADIC, its performance is compared with related studies in the literature; the results show that RADIC achieves superior performance. The results also prove that a DL model can be trained more effectively with images generated by radiomics techniques than with the original X-ray and CT images, and that incorporating deep features extracted from DL models trained with multiple radiomics approaches improves diagnostic accuracy.
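The Fast Walsh-Hadamard Transform used to compress the deep features is a standard O(n log n) butterfly over a vector whose length is a power of two. A minimal in-place sketch of the unnormalized transform (generic, not RADIC's implementation; real pipelines typically also divide by n or √n to normalize):

```python
# Hedged sketch of the unnormalized Fast Walsh-Hadamard Transform.
def fwht(a):
    """In-place FWHT; len(a) must be a power of two. Returns a."""
    h = 1
    n = len(a)
    while h < n:
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                # Butterfly: replace the pair with its sum and difference.
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2
    return a
```

The transform is its own inverse up to a factor of n, so applying it twice recovers n times the input, which is a convenient correctness check.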
|
32
|
Recent trends in carbon nanotube (CNT)-based biosensors for the fast and sensitive detection of human viruses: a critical review. NANOSCALE ADVANCES 2023; 5:992-1010. [PMID: 36798507 PMCID: PMC9926911 DOI: 10.1039/d2na00236a] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/15/2022] [Accepted: 10/13/2022] [Indexed: 06/18/2023]
Abstract
The current COVID-19 pandemic, with its numerous variants including Omicron, which is 50-70% more transmissible than the previously dominant Delta variant, demands a fast, robust, cheap, and easily deployed identification strategy to reduce the chain of transmission, for which biosensors have been shown to be a feasible solution at the laboratory scale. The use of nanomaterials has significantly enhanced the performance of biosensors, and the addition of CNTs has increased detection capabilities to an unrivaled level. Among the various CNT-based detection systems, CNT-based field-effect transistors possess ultra-sensitivity and low-noise detection capacity, allowing immediate analyte determination even at the limited analyte concentrations typical of early infection stages. Recently, CNT field-effect transistor-type biosensors have been successfully used in the fast diagnosis of COVID-19, which has increased research and commercial interest in exploiting current developments of CNT field-effect transistors. Recent progress in the design and deployment of CNT-based biosensors for viral monitoring is covered in this paper, as are the remaining obstacles and prospects. This work also highlights the enormous potential for synergistic effects of CNTs used in combination with other nanomaterials for viral detection.
|
33
|
Deep Learning Applied to Intracranial Hemorrhage Detection. J Imaging 2023; 9:jimaging9020037. [PMID: 36826956 PMCID: PMC9963867 DOI: 10.3390/jimaging9020037] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2022] [Revised: 01/22/2023] [Accepted: 01/26/2023] [Indexed: 02/10/2023] Open
Abstract
Intracranial hemorrhage is a serious medical problem that requires rapid and often intensive medical care. Identifying the location and type of any hemorrhage present is a critical step in the treatment of the patient. Detecting and diagnosing a hemorrhage that requires an urgent procedure is a difficult and time-consuming process for human experts. In this paper, we propose methods based on EfficientDet's deep-learning technology that can be applied to the diagnosis of hemorrhages at a patient level and could, thus, become a decision-support system. Our proposal is two-fold. On the one hand, the proposed technique classifies slices of computed tomography scans for the presence or absence of hemorrhage and evaluates whether the patient is positive for hemorrhage, achieving, in this regard, 92.7% accuracy and 0.978 ROC AUC. On the other hand, our methodology provides visual explanations of the chosen classification using the Grad-CAM methodology.
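The slice-to-patient aggregation mentioned in the abstract can be sketched in a few lines. This is a minimal, hypothetical illustration, not the paper's code: the threshold, the "any positive slice" rule, and the probabilities are all assumptions.

```python
# Hypothetical aggregation of per-slice hemorrhage probabilities from a CNN
# classifier into a patient-level decision: the patient is flagged positive
# if any slice exceeds a confidence threshold.
import numpy as np

def patient_positive(slice_probs, threshold=0.5):
    slice_probs = np.asarray(slice_probs, dtype=float)
    return bool((slice_probs >= threshold).any())

ct_scan = [0.02, 0.10, 0.81, 0.33]  # made-up slice-level probabilities
print(patient_positive(ct_scan))    # True: one slice is confidently positive
```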
|
34
|
Data augmentation based semi-supervised method to improve COVID-19 CT classification. MATHEMATICAL BIOSCIENCES AND ENGINEERING : MBE 2023; 20:6838-6852. [PMID: 37161130 DOI: 10.3934/mbe.2023294] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/11/2023]
Abstract
The Coronavirus (COVID-19) outbreak of December 2019 has become a serious threat to people around the world, creating a health crisis that has affected millions of lives and damaged the global economy. Early detection and diagnosis are essential to prevent further transmission. The classification of COVID-19 computed tomography (CT) images is one of the important approaches to rapid diagnosis. Many different branches of deep learning methods have played an important role in this area, including transfer learning, contrastive learning, and ensemble strategies. However, these works require a large number of samples with expensive manual labels, so to save costs, scholars have adopted semi-supervised learning that applies only a few labels to classify COVID-19 CT images. Nevertheless, the existing semi-supervised methods focus primarily on class imbalance and pseudo-label filtering rather than on pseudo-label generation. Accordingly, in this paper, we propose a semi-supervised classification framework based on data augmentation to classify the CT images of COVID-19. We revised the classic teacher-student framework and introduced the popular data augmentation method Mixup, which widened the distribution of high confidence to improve the accuracy of selected pseudo-labels and ultimately obtain a model with better performance. For the COVID-CT dataset, our method makes precision, F1 score, accuracy and specificity 21.04%, 12.95%, 17.13% and 38.29% higher than the average values of other methods, respectively. For the SARS-COV-2 dataset, these increases were 8.40%, 7.59%, 9.35% and 12.80%, respectively. For the Harvard Dataverse dataset, the gains were 17.64%, 18.89%, 19.81% and 20.20%, respectively. The codes are available at https://github.com/YutingBai99/COVID-19-SSL.
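The Mixup step named in this abstract blends two labeled examples with a Beta-distributed coefficient. A minimal sketch, assuming toy vectors in place of CT images and an illustrative `alpha` (the paper's exact hyperparameters are not reproduced here):

```python
# Sketch of the Mixup augmentation step used in the teacher-student framework:
# two labeled examples are blended with a Beta(alpha, alpha) coefficient.
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.75, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    lam = max(lam, 1.0 - lam)        # keep the mix close to the first sample
    x = lam * x1 + (1.0 - lam) * x2  # blended image (or feature vector)
    y = lam * y1 + (1.0 - lam) * y2  # blended one-hot label
    return x, y, lam

rng = np.random.default_rng(1)
x, y, lam = mixup(np.ones(4), np.array([1.0, 0.0]),
                  np.zeros(4), np.array([0.0, 1.0]), rng=rng)
print(lam >= 0.5)  # True by construction
```

Taking `max(lam, 1 - lam)` is one common convention so the mixed sample stays dominated by its first input; whether the paper uses it is not stated in the abstract.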
|
35
|
DTLCx: An Improved ResNet Architecture to Classify Normal and Conventional Pneumonia Cases from COVID-19 Instances with Grad-CAM-Based Superimposed Visualization Utilizing Chest X-ray Images. Diagnostics (Basel) 2023; 13:diagnostics13030551. [PMID: 36766662 PMCID: PMC9914155 DOI: 10.3390/diagnostics13030551] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2022] [Revised: 01/04/2023] [Accepted: 01/31/2023] [Indexed: 02/05/2023] Open
Abstract
COVID-19 is a severe, contagious respiratory disease that has now spread all over the world. COVID-19 has terribly impacted public health, daily life and the global economy. Although some developed countries have advanced well in detecting and containing this coronavirus, most developing countries are having difficulty detecting COVID-19 cases among the mass population. In many countries, there is a scarcity of COVID-19 testing kits and other resources due to the increasing rate of COVID-19 infections. This deficit of testing resources and the increasing number of daily cases encouraged us to develop a deep learning model to aid clinicians and radiologists and provide timely assistance to patients. In this article, an efficient deep learning-based model to detect COVID-19 cases from a chest X-ray image dataset is proposed and investigated. The proposed model is developed based on the ResNet50V2 architecture. The base ResNet50V2 architecture is concatenated with six extra layers to make the model more robust and efficient. Finally, Grad-CAM-based discriminative localization is used to readily interpret the detection on radiological images. Two datasets with class labels normal, confirmed COVID-19, bacterial pneumonia and viral pneumonia were gathered from different publicly available sources. Our proposed model obtained an overall accuracy of 99.51% for the four-class case (COVID-19/normal/bacterial pneumonia/viral pneumonia) on Dataset-2, 96.52% for the three-class case (normal/COVID-19/bacterial pneumonia) and 99.13% for the two-class case (COVID-19/normal) on Dataset-1. The accuracy level of the proposed model might motivate radiologists to rapidly detect and diagnose COVID-19 cases.
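The Grad-CAM localization this abstract relies on reduces to a simple weighting rule: channel weights are the spatial averages of the gradients, and the map is the ReLU of the weighted activation sum. A bare-bones sketch with random stand-in tensors (not real network activations):

```python
# Generic Grad-CAM weighting: alpha_k = spatial mean of the gradients for
# channel k; CAM = ReLU(sum_k alpha_k * A_k), normalized to [0, 1].
import numpy as np

def grad_cam(activations, gradients):
    """activations, gradients: (channels, H, W) arrays from the last conv layer."""
    weights = gradients.mean(axis=(1, 2))               # alpha_k
    cam = np.einsum('k,khw->hw', weights, activations)  # weighted channel sum
    cam = np.maximum(cam, 0.0)                          # ReLU
    if cam.max() > 0:
        cam = cam / cam.max()                           # normalize to [0, 1]
    return cam

rng = np.random.default_rng(2)
cam = grad_cam(rng.standard_normal((8, 7, 7)), rng.standard_normal((8, 7, 7)))
print(cam.shape)  # (7, 7)
```

In the paper's setting the normalized map would be upsampled and superimposed on the chest X-ray; that rendering step is omitted here.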
|
36
|
A comprehensive survey of deep learning research on medical image analysis with focus on transfer learning. Clin Imaging 2023; 94:18-41. [PMID: 36462229 DOI: 10.1016/j.clinimag.2022.11.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2022] [Revised: 10/17/2022] [Accepted: 11/01/2022] [Indexed: 11/13/2022]
Abstract
This survey aims to identify commonly used methods, datasets, future trends, knowledge gaps, constraints, and limitations in the field to provide an overview of current solutions used in medical image analysis in parallel with the rapid developments in transfer learning (TL). Unlike previous studies, this survey grouped current studies from the last five years, covering the period between January 2017 and February 2021, according to different anatomical regions and detailed the modality, medical task, TL method, source data, target data, and public or private datasets used in medical imaging. It also provides readers with detailed information on technical challenges, opportunities, and future research trends. In this way, an overview of recent developments is provided to help researchers select the most effective and efficient methods and access widely used, publicly available medical datasets, as well as the research gaps and limitations of the available literature.
|
37
|
Applications of Deep Learning in Disease Diagnosis of Chest Radiographs: A Survey on Materials and Methods. BIOMEDICAL ENGINEERING ADVANCES 2023. [DOI: 10.1016/j.bea.2023.100076] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/05/2023] Open
|
38
|
UncertaintyFuseNet: Robust uncertainty-aware hierarchical feature fusion model with Ensemble Monte Carlo Dropout for COVID-19 detection. AN INTERNATIONAL JOURNAL ON INFORMATION FUSION 2023; 90:364-381. [PMID: 36217534 PMCID: PMC9534540 DOI: 10.1016/j.inffus.2022.09.023] [Citation(s) in RCA: 16] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/22/2022] [Revised: 09/23/2022] [Accepted: 09/25/2022] [Indexed: 05/03/2023]
Abstract
The COVID-19 (Coronavirus disease 2019) pandemic has become a major global threat to human health and well-being. Thus, the development of computer-aided detection (CAD) systems that are capable of accurately distinguishing COVID-19 from other diseases using chest computed tomography (CT) and X-ray data is of immediate priority. Such automatic systems are usually based on traditional machine learning or deep learning methods. Unlike most of the existing studies, which used either CT scan or X-ray images in COVID-19-case classification, we present a new, simple but efficient deep learning feature fusion model, called UncertaintyFuseNet, which is able to accurately classify large datasets of both of these types of images. We argue that the uncertainty of the model's predictions should be taken into account in the learning process, even though most of the existing studies have overlooked it. We quantify the prediction uncertainty in our feature fusion model using the effective Ensemble Monte Carlo Dropout (EMCD) technique. A comprehensive simulation study has been conducted to compare the results of our new model to the existing approaches, evaluating the performance of competing models in terms of Precision, Recall, F-Measure, Accuracy and ROC curves. The obtained results prove the efficiency of our model, which provided prediction accuracies of 99.08% and 96.35% for the considered CT scan and X-ray datasets, respectively. Moreover, our UncertaintyFuseNet model was generally robust to noise and performed well with previously unseen data. The source code of our implementation is freely available at: https://github.com/moloud1987/UncertaintyFuseNet-for-COVID-19-Classification.
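The Monte Carlo Dropout idea behind EMCD can be sketched in a toy form: the same input is passed through the model several times with dropout kept active, and the spread of the softmax outputs serves as the predictive uncertainty. The "model" below is a random linear layer purely for illustration; the real model, ensemble size, and dropout rate are the paper's, not reproduced here.

```python
# Toy Monte Carlo Dropout: repeated stochastic forward passes give a mean
# prediction and a per-class standard deviation (the uncertainty estimate).
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def mc_dropout_predict(x, weights, n_samples=50, p_drop=0.5, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    outs = []
    for _ in range(n_samples):
        mask = rng.random(x.shape) >= p_drop  # Bernoulli dropout mask
        outs.append(softmax(weights @ (x * mask / (1.0 - p_drop))))
    outs = np.stack(outs)
    return outs.mean(axis=0), outs.std(axis=0)  # mean prediction, uncertainty

rng = np.random.default_rng(3)
mean, unc = mc_dropout_predict(rng.standard_normal(10),
                               rng.standard_normal((3, 10)), rng=rng)
print(mean.shape)  # (3,): a distribution over three classes
```

An ensemble variant (EMCD) would average such MC estimates over several independently trained models; that outer loop is omitted for brevity.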
|
39
|
Computer-aided COVID-19 diagnosis: a possibility? J EXP THEOR ARTIF IN 2023. [DOI: 10.1080/0952813x.2023.2165722] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/18/2023]
|
40
|
AMTLDC: a new adversarial multi-source transfer learning framework to diagnosis of COVID-19. EVOLVING SYSTEMS 2023; 14:1-15. [PMID: 38625255 PMCID: PMC9838404 DOI: 10.1007/s12530-023-09484-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2022] [Accepted: 01/02/2023] [Indexed: 01/13/2023]
Abstract
In recent years, deep learning techniques have been widely used to diagnose diseases. However, in some tasks, such as the diagnosis of COVID-19, the model is not properly trained due to insufficient data, and as a result its generalizability decreases. For example, if the model is trained on one CT scan dataset and tested on another CT scan dataset, it produces near-random predictions. To address this, data from several different sources can be combined using transfer learning, taking into account the intrinsic and natural differences in existing datasets obtained with different medical imaging tools and approaches. In this paper, to improve the transfer learning technique and achieve better generalizability across multiple data sources, we propose a multi-source adversarial transfer learning model, namely AMTLDC. In AMTLDC, representations are learned that are similar across the sources. In other words, the extracted representations are general and not dependent on the particular dataset domain. We apply AMTLDC to predict COVID-19 from medical images using a convolutional neural network. We show that accuracy can be improved using the AMTLDC framework, surpassing the results of current successful transfer learning approaches. In particular, we show that AMTLDC works well when using different dataset domains, or when there is insufficient data.
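Adversarial domain alignment of the kind this abstract describes is commonly built on a gradient-reversal layer (GRL): identity in the forward pass, negated and scaled gradient in the backward pass, so the feature extractor learns domain-invariant representations. The sketch below shows that generic building block with explicit forward/backward functions; it is a standard GRL, not necessarily the exact mechanism inside AMTLDC.

```python
# Generic gradient-reversal layer: features pass through unchanged, while the
# gradient flowing back from the domain discriminator is reversed and scaled,
# pushing the extractor toward source-invariant representations.
import numpy as np

def grl_forward(x):
    return x                      # identity in the forward pass

def grl_backward(grad_output, lam=1.0):
    return -lam * grad_output     # reversed, scaled gradient to the extractor

g = np.array([0.5, -2.0, 1.0])
print(grl_forward(g))             # features are unchanged
print(grl_backward(g, lam=0.1))   # [-0.05  0.2  -0.1]
```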
|
41
|
Novel Comparative Study for the Detection of COVID-19 Using CT Scan and Chest X-ray Images. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2023; 20:1268. [PMID: 36674023 PMCID: PMC9858730 DOI: 10.3390/ijerph20021268] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/21/2022] [Revised: 01/04/2023] [Accepted: 01/05/2023] [Indexed: 06/17/2023]
Abstract
The number of coronavirus disease (COVID-19) cases is constantly rising as the pandemic continues, with new variants constantly emerging. Therefore, to prevent the virus from spreading, coronavirus cases must be diagnosed as soon as possible. The COVID-19 pandemic has had a devastating impact on people's health and the economy worldwide. For COVID-19 detection, reverse transcription-polymerase chain reaction testing is the benchmark. However, this test takes a long time and necessitates a lot of laboratory resources. A new trend is emerging to address these limitations, namely the use of machine learning and deep learning techniques for automatic analysis, as these can attain high diagnostic performance, especially when using medical imaging techniques. However, a key question arises as to whether a chest computed tomography scan or a chest X-ray should be used for COVID-19 detection. A total of 17,599 images were examined in this work to develop the models used to classify the occurrence of COVID-19 infection, and four different classifiers were studied: two convolutional neural networks (the proposed architecture, named SCovNet, and ResNet18), a support vector machine, and logistic regression. Out of all four models, the proposed SCovNet architecture reached the best performance, with an accuracy of almost 99% and 98% on chest computed tomography scan images and chest X-ray images, respectively.
|
42
|
Prediction of bone mineral density in CT using deep learning with explainability. Front Physiol 2023; 13:1061911. [PMID: 36703938 PMCID: PMC9871249 DOI: 10.3389/fphys.2022.1061911] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2022] [Accepted: 12/19/2022] [Indexed: 01/12/2023] Open
Abstract
Bone mineral density (BMD) is a key feature in diagnosing bone diseases. Although computed tomography (CT) is a common imaging modality, it seldom provides bone mineral density information in the clinic owing to technical difficulties. Thus, dual-energy X-ray absorptiometry (DXA) is required to measure bone mineral density at the expense of additional radiation exposure. In this study, a deep learning framework was developed to estimate bone mineral density from an axial cut of the L1 bone on computed tomography. As a result, the correlation coefficient between the bone mineral density estimates and dual-energy X-ray absorptiometry bone mineral density was 0.90. When the samples were categorized into abnormal and normal groups using a standard threshold (T-score = -1.0), the maximum F1 score in the diagnostic test was 0.875. In addition, explainable artificial intelligence techniques identified that the network intensively attends to a local area spanning the tissues around the vertebral foramen. This method is well suited as an auxiliary tool in clinical practice and as an automatic screener for identifying latent patients in computed tomography databases.
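The evaluation step in this abstract (thresholding estimated T-scores at -1.0 into normal/abnormal and scoring against DXA labels with F1) can be sketched directly. The scores and labels below are made up for illustration.

```python
# Threshold BMD T-score estimates at -1.0 (abnormal = positive class) and
# compute the F1 score against reference labels.
import numpy as np

def f1_at_threshold(t_scores, labels, threshold=-1.0):
    pred = np.asarray(t_scores) < threshold
    labels = np.asarray(labels, dtype=bool)
    tp = np.sum(pred & labels)
    fp = np.sum(pred & ~labels)
    fn = np.sum(~pred & labels)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

scores = [-2.1, -0.4, -1.5, 0.3, -1.2]  # hypothetical estimated T-scores
truth = [True, False, True, False, False]  # hypothetical DXA-derived labels
print(f1_at_threshold(scores, truth))  # ~0.8
```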
|
43
|
A systematic literature review of machine learning application in COVID-19 medical image classification. PROCEDIA COMPUTER SCIENCE 2023; 216:749-756. [PMID: 36643182 PMCID: PMC9829419 DOI: 10.1016/j.procs.2022.12.192] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/11/2023]
Abstract
Detecting COVID-19 as early and as quickly as possible is one way to stop its spread. Machine learning developments can help to diagnose COVID-19 more quickly and accurately. This report aims to find out how far research has progressed and what lessons can be learned for future research in this sector. By filtering titles, abstracts, and content in the Google Scholar database, this literature review found 19 related papers to answer two research questions, i.e., which medical images are commonly used for COVID-19 classification and what the methods for COVID-19 classification are. According to the findings, chest X-rays were the most commonly used data for categorizing COVID-19, and transfer learning was the most commonly used method in these studies. Researchers also concluded that lung segmentation and the use of multimodal data could improve performance.
|
44
|
Deep learning-based important weights-only transfer learning approach for COVID-19 CT-scan classification. APPL INTELL 2023; 53:7201-7215. [PMID: 35875199 PMCID: PMC9289654 DOI: 10.1007/s10489-022-03893-7] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 06/13/2022] [Indexed: 11/18/2022]
Abstract
COVID-19 has become a pandemic for the entire world, and it has significantly affected the world economy. The importance of early detection and treatment of the infection cannot be overstated. Traditional diagnosis techniques take more time to detect the infection. Although numerous deep learning-based automated solutions have recently been developed in this regard, the limited computational and battery power of resource-constrained devices makes it difficult to deploy trained models for real-time inference. In this paper, to detect the presence of COVID-19 in CT-scan images, an important weights-only transfer learning method has been proposed for devices with limited run-time resources. In the proposed method, the pre-trained models are made friendly to point-of-care devices by pruning the less important weight parameters of the model. The experiments were performed on the two popular VGG16 and ResNet34 models, and the empirical results showed that the pruned ResNet34 model achieved 95.47% accuracy, 0.9216 sensitivity, 0.9567 F-score, and 0.9942 specificity with 41.96% fewer FLOPs and 20.64% fewer weight parameters on the SARS-CoV-2 CT-scan dataset. The results of our experiments showed that the proposed method significantly reduces the run-time resource requirements of computationally intensive models and makes them ready to be utilized on point-of-care devices.
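The pruning idea described here ("important weights-only") is commonly realized as magnitude pruning: zero out the smallest-magnitude fraction of a layer's weights. A simplified sketch, with an illustrative prune ratio and a random stand-in weight matrix (the paper's importance criterion may differ):

```python
# Magnitude pruning: remove (zero) the prune_ratio fraction of weights with
# the smallest absolute values, keeping only the "important" weights.
import numpy as np

def prune_by_magnitude(weights, prune_ratio=0.2):
    w = np.asarray(weights, dtype=float).copy()
    k = int(w.size * prune_ratio)
    if k:
        cutoff = np.partition(np.abs(w).ravel(), k - 1)[k - 1]  # k-th smallest |w|
        w[np.abs(w) <= cutoff] = 0.0
    return w

rng = np.random.default_rng(4)
layer = rng.standard_normal((10, 10))  # stand-in for one layer's weights
pruned = prune_by_magnitude(layer, prune_ratio=0.2)
sparsity = np.mean(pruned == 0.0)
print(sparsity >= 0.2)  # True: at least 20% of the weights were removed
```

In practice the pruned model is then fine-tuned to recover accuracy; that step is beyond this sketch.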
|
45
|
AI-based radiodiagnosis using chest X-rays: A review. Front Big Data 2023; 6:1120989. [PMID: 37091458 PMCID: PMC10116151 DOI: 10.3389/fdata.2023.1120989] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2022] [Accepted: 01/06/2023] [Indexed: 04/25/2023] Open
Abstract
Chest Radiograph or Chest X-ray (CXR) is a common, fast, non-invasive, relatively cheap radiological examination method in medical sciences. CXRs can aid in diagnosing many lung ailments such as Pneumonia, Tuberculosis, Pneumoconiosis, COVID-19, and lung cancer. Apart from other radiological examinations, every year, 2 billion CXRs are performed worldwide. However, the workforce available to handle this workload in hospitals is limited, particularly in developing and low-income nations. Recent advances in AI, particularly in computer vision, have drawn attention to solving challenging medical image analysis problems. Healthcare is one of the areas where AI/ML-based assistive screening/diagnostic aids can play a crucial part in social welfare. However, it faces multiple challenges, such as small sample spaces, data privacy, poor quality samples, adversarial attacks and, most importantly, the model interpretability required for reliability on machine intelligence. This paper provides a structured review of CXR-based analysis for different tasks and lung diseases and, in particular, the challenges faced by AI/ML-based systems for diagnosis. Further, we provide an overview of existing datasets, evaluation metrics for different tasks, and patents issued. We also present key challenges and open problems in this research domain.
|
46
|
A deep transfer learning-based convolution neural network model for COVID-19 detection using computed tomography scan images for medical applications. ADVANCES IN ENGINEERING SOFTWARE (BARKING, LONDON, ENGLAND : 1992) 2023; 175:103317. [PMID: 36311489 PMCID: PMC9595382 DOI: 10.1016/j.advengsoft.2022.103317] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/26/2022] [Revised: 10/09/2022] [Accepted: 10/19/2022] [Indexed: 05/26/2023]
Abstract
The Coronavirus (COVID-19) has become a critical and extreme epidemic because of its international dissemination. COVID-19 is the world's most serious health, economic, and survival danger. This disease affects not just a single country but the entire planet. COVID-19 spreads at a much faster rate than usual influenza cases. Because of its high transmissibility and the need for early diagnosis, COVID-19 is not easy to manage. The popularly used RT-PCR method for COVID-19 disease diagnosis may provide false negatives. COVID-19 can be detected non-invasively using medical imaging procedures such as chest CT and chest X-ray. Deep learning is the most effective machine learning approach for examining a considerable quantity of chest computed tomography (CT) images and can significantly affect COVID-19 screening. The convolutional neural network (CNN) is one of the most popular deep learning techniques right now, and it is gaining traction due to its potential to transform several spheres of human life. This research aims to develop transfer learning-enhanced CNN framework models for detecting COVID-19 from CT scan images. Though trained on minimal datasets, these techniques were demonstrated to be effective in detecting the presence of COVID-19. This proposed research looks into several deep transfer learning-based CNN approaches for detecting the presence of COVID-19 in chest CT images. VGG16, VGG19, Densenet121, InceptionV3, Xception, and Resnet50 are the foundation models used in this work. Each model's performance was evaluated using a confusion matrix and various performance measures such as accuracy, recall, precision, F1-score, loss, and ROC. The VGG16 model performed much better than the other models in this study (98.00% accuracy). Promising experimental outcomes have revealed the merits of the proposed model for detecting and monitoring COVID-19 patients. This could help practitioners and academics create a tool to help health professionals decide on the best course of therapy.
|
47
|
An Analysis of Image Features Extracted by CNNs to Design Classification Models for COVID-19 and Non-COVID-19. JOURNAL OF SIGNAL PROCESSING SYSTEMS 2023; 95:101-113. [PMID: 34777680 PMCID: PMC8572648 DOI: 10.1007/s11265-021-01714-7] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/13/2021] [Revised: 10/05/2021] [Accepted: 10/13/2021] [Indexed: 05/11/2023]
Abstract
The SARS-CoV-2 virus causes a respiratory disease in humans, known as COVID-19. The confirmatory diagnosis of this disease occurs through the real-time reverse transcription and polymerase chain reaction test (RT-qPCR). However, the period required to obtain the results limits the application of mass testing. Thus, chest X-ray and computed tomography (CT) images are analyzed to help diagnose the disease. However, during an outbreak of a disease that causes respiratory problems, radiologists may be overwhelmed with analyzing medical images. In the literature, some studies have used feature extraction techniques based on CNNs, with classification models, to identify COVID-19 and non-COVID-19. This work compares the performance of pre-trained CNNs applied in conjunction with classification methods based on machine learning algorithms. The main objective is to analyze the impact of the features extracted by CNNs on the construction of models to classify COVID-19 and non-COVID-19. A SARS-CoV-2 CT dataset is used in the experimental tests. The CNNs implemented are visual geometry group (VGG-16 and VGG-19), Inception V3 (IV3), and EfficientNet-B0 (EB0). The classification methods were k-nearest neighbor (KNN), support vector machine (SVM), and explainable deep neural networks (xDNN). In the experiments, the best results were obtained by the EfficientNet model used to extract features and the SVM with an RBF kernel. This approach achieved an average performance of 0.9856 in macro precision, 0.9853 in macro sensitivity, 0.9853 in macro specificity, and 0.9853 in macro F1 score.
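The pipeline in this study is: frozen pre-trained CNN → feature vectors → classic classifier. As a dependency-free stand-in for the classifier stage, here is the KNN option named in the abstract, run over made-up "CNN feature" vectors; the feature-extraction stage itself is omitted, and the class separation is artificial.

```python
# KNN over hypothetical CNN-extracted feature vectors: majority vote among
# the k nearest training features in Euclidean distance.
import numpy as np

def knn_predict(train_x, train_y, query, k=3):
    dists = np.linalg.norm(train_x - query, axis=1)
    nearest = np.argsort(dists)[:k]
    return int(np.bincount(train_y[nearest]).argmax())  # majority vote

rng = np.random.default_rng(5)
covid = rng.normal(loc=1.0, size=(20, 8))       # made-up COVID-19 features
non_covid = rng.normal(loc=-1.0, size=(20, 8))  # made-up non-COVID features
train_x = np.vstack([covid, non_covid])
train_y = np.array([1] * 20 + [0] * 20)
print(knn_predict(train_x, train_y, np.full(8, 1.0)))  # 1 (the COVID class)
```

The SVM-with-RBF variant that performed best in the paper would replace this vote with a kernelized decision function; KNN is shown only because it needs no extra libraries.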
|
48
|
CovidExpert: A Triplet Siamese Neural Network framework for the detection of COVID-19. INFORMATICS IN MEDICINE UNLOCKED 2023; 37:101156. [PMID: 36686559 PMCID: PMC9837208 DOI: 10.1016/j.imu.2022.101156] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2022] [Revised: 12/27/2022] [Accepted: 12/28/2022] [Indexed: 01/15/2023] Open
Abstract
Patients with the COVID-19 infection may have pneumonia-like symptoms as well as respiratory problems that may harm the lungs. Coronavirus illness may be accurately identified and predicted from medical images using a variety of machine learning methods. Most of the published machine learning methods may need extensive hyperparameter tuning and are unsuitable for small datasets. By leveraging the data in a comparatively small dataset, few-shot learning algorithms aim to reduce the requirement for large datasets. This inspired us to develop a few-shot learning model for early detection of COVID-19 to reduce the after-effects of this dangerous disease. The proposed architecture combines few-shot learning with an ensemble of pre-trained convolutional neural networks to extract feature vectors from CT scan images for similarity learning. The proposed Triplet Siamese Network, used as the few-shot learning model, classified CT scan images into Normal, COVID-19, and Community-Acquired Pneumonia. The suggested model achieved an overall accuracy of 98.719%, a specificity of 99.36%, a sensitivity of 98.72%, and a ROC score of 99.9% with only 200 CT scans per category as training data.
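The similarity learning behind a Triplet Siamese Network rests on the triplet loss: the anchor embedding should be closer to a same-class (positive) embedding than to a different-class (negative) one by at least a margin. A minimal sketch with toy vectors in place of CT feature embeddings:

```python
# Standard triplet loss: max(0, d(anchor, positive) - d(anchor, negative) + margin).
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])      # same class: close to the anchor
n = np.array([3.0, 0.0])      # different class: far from the anchor
print(triplet_loss(a, p, n))  # 0.0: the margin is already satisfied
```

During training the loss is averaged over mined triplets and backpropagated through the shared embedding network; that loop is omitted here.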
|
49
|
Improving COVID-19 CT classification of CNNs by learning parameter-efficient representation. Comput Biol Med 2023; 152:106417. [PMID: 36543003 PMCID: PMC9750504 DOI: 10.1016/j.compbiomed.2022.106417] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2022] [Revised: 11/22/2022] [Accepted: 12/04/2022] [Indexed: 12/23/2022]
Abstract
The COVID-19 pandemic continues to spread rapidly over the world and causes a tremendous crisis in global human health and the economy. Its early detection and diagnosis are crucial for controlling further spread. Many deep learning-based methods have been proposed to assist clinicians in automatic COVID-19 diagnosis based on computed tomography imaging. However, challenges remain, including low data diversity in existing datasets and unsatisfactory detection resulting from the insufficient accuracy and sensitivity of deep learning models. To enhance data diversity, we design augmentation techniques of incremental levels and apply them to the largest open-access benchmark dataset, COVIDx CT-2A. Meanwhile, similarity regularization (SR) derived from contrastive learning is proposed in this study to enable CNNs to learn more parameter-efficient representations and thus improve their accuracy and sensitivity. The results on seven commonly used CNNs demonstrate that CNN performance can be consistently improved by applying the designed augmentation and SR techniques. In particular, DenseNet121 with SR achieves an average test accuracy of 99.44% over three trials for three-category classification, including normal, non-COVID-19 pneumonia, and COVID-19 pneumonia. The achieved precision, sensitivity, and specificity for the COVID-19 pneumonia category are 98.40%, 99.59%, and 99.50%, respectively. These statistics suggest that our method has surpassed the existing state-of-the-art methods on the COVIDx CT-2A dataset. Source code is available at https://github.com/YujiaKCL/COVID-CT-Similarity-Regularization.
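A similarity-regularization term in the spirit this abstract describes can be sketched as a cosine-similarity penalty between representations of two augmented views of the same scan, added to the usual classification loss. This is a generic contrastive-style regularizer; the paper's exact SR formulation is not reproduced here.

```python
# Generic similarity regularizer: penalty is 1 - cos(z1, z2), which is ~0 when
# the two views' representations agree and grows as they diverge.
import numpy as np

def cosine_similarity(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def sr_penalty(z1, z2):
    return 1.0 - cosine_similarity(z1, z2)

z = np.array([1.0, 2.0, -1.0])
print(sr_penalty(z, z))   # ~0.0 for identical representations
print(sr_penalty(z, -z))  # ~2.0 for opposite representations
```

In training this penalty would be weighted and summed with the cross-entropy loss over a batch of augmented pairs.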
|
50
|
ABOA-CNN: auction-based optimization algorithm with convolutional neural network for pulmonary disease prediction. Neural Comput Appl 2023; 35:7463-7474. [PMID: 36788792 PMCID: PMC9910772 DOI: 10.1007/s00521-022-08033-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2021] [Accepted: 11/04/2022] [Indexed: 02/12/2023]
Abstract
Nowadays, deep learning plays a vital role behind many emerging technologies. A few applications of deep learning include speech recognition, virtual assistants, healthcare, entertainment, and so on. In healthcare applications, deep learning can be used to predict diseases effectively. It is a type of computer model that learns to conduct classification tasks directly from text, sound, or images. It also provides better accuracy and sometimes outdoes human performance. We present a novel approach that makes use of deep learning in our proposed work. The prediction of pulmonary disease can be performed with the aid of a convolutional neural network (CNN) incorporating an auction-based optimization algorithm (ABOA) and the DSC process. The traditional CNN ignores the dominant features of the X-ray images while performing the feature extraction process. This can be effectively circumvented by the adoption of ABOA, and the DSC is used to classify the pulmonary disease types, such as fibrosis, pneumonia, cardiomegaly, and normal, from the X-ray images. We have taken two datasets, namely the NIH Chest X-ray dataset and ChestX-ray8. The performance of the proposed approach is compared with deep learning-based state-of-the-art works such as BPD, DL, CSS-DL, and Grad-CAM. From the performance analyses, it is confirmed that the proposed approach effectively extracts the features from the X-ray images, and thus the prediction of pulmonary diseases is more accurate than with the state-of-the-art approaches.
|