51. Tian Y, Wang J, Yang W, Wang J, Qian D. Deep multi-instance transfer learning for pneumothorax classification in chest X-ray images. Med Phys 2021; 49:231-243. [PMID: 34802144] [DOI: 10.1002/mp.15328]
Abstract
PURPOSE Pneumothorax is a life-threatening emergency that requires immediate treatment. Frontal-view chest X-ray images are typically used for pneumothorax detection in clinical practice. However, manual review of radiographs is time-consuming, labor-intensive, and highly dependent on the experience of radiologists, which may lead to misdiagnosis. Here, we aim to develop a reliable automatic classification method to assist radiologists in rapidly and accurately diagnosing pneumothorax in frontal chest radiographs. METHODS A novel residual neural network (ResNet)-based two-stage deep-learning strategy is proposed for pneumothorax identification: local feature learning (LFL) followed by global multi-instance learning (GMIL). Most of the nonlesion regions in the images are removed for learning discriminative features. Two datasets are used for large-scale validation: a private dataset (27 955 frontal-view chest X-ray images) and a public dataset (the National Institutes of Health [NIH] ChestX-ray14; 112 120 frontal-view X-ray images). The model performance of the identification was evaluated using the accuracy, precision, recall, specificity, F1-score, receiver operating characteristic (ROC), and area under ROC curve (AUC). Fivefold cross-validation is conducted on the datasets, and then the mean and standard deviation of the above-mentioned metrics are calculated to assess the overall performance of the model. RESULTS The experimental results demonstrate that the proposed learning strategy can achieve state-of-the-art performance on the NIH dataset with an accuracy, AUC, precision, recall, specificity, and F1-score of 94.4% ± 0.7%, 97.3% ± 0.5%, 94.2% ± 0.3%, 94.6% ± 1.5%, 94.2% ± 0.4%, and 94.4% ± 0.7%, respectively. CONCLUSIONS The experimental results demonstrate that our proposed CAD system is an efficient assistive tool in the identification of pneumothorax.
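The two-stage strategy is described only at a high level above; as a toy illustration of the global multi-instance step, the sketch below (not the authors' code) pools patch-level pneumothorax scores produced by some local model into a single image-level probability, using max or mean pooling over the bag of patches.

```python
# Illustrative sketch only: a bag-of-patches view of multi-instance learning,
# where per-patch scores are pooled into one image-level probability.
import numpy as np

rng = np.random.default_rng(0)

def bag_probability(patch_scores, pooling="max"):
    """Aggregate per-patch probabilities into one image-level probability."""
    patch_scores = np.asarray(patch_scores, dtype=float)
    if pooling == "max":        # a single positive patch drives the decision
        return patch_scores.max()
    if pooling == "mean":       # smoother, uses evidence from all patches
        return patch_scores.mean()
    raise ValueError("unknown pooling")

# Pretend a local model already scored 16 patches cropped from one radiograph.
patch_scores = rng.uniform(0.0, 0.3, size=16)
patch_scores[5] = 0.92          # one suspicious region

print("max-pooled bag probability :", bag_probability(patch_scores, "max"))
print("mean-pooled bag probability:", bag_probability(patch_scores, "mean"))
```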
Affiliation(s)
- Yuchi Tian
- Academy of Engineering and Technology, Fudan University, Shanghai, China
- Jiawei Wang
- Department of Radiology, The Second Affiliated Hospital Zhejiang University School of Medicine, Hangzhou, China
- Wenjie Yang
- Department of Radiology, Ruijin Hospital Affiliated to School of Medicine, Shanghai Jiao Tong University, China
- Jun Wang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Dahong Qian
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
52.
53. Magnide E, Tchaha GW, Joncas J, Bellefleur C, Barchi S, Roy-Beaudry M, Parent S, Grimard G, Labelle H, Duong L. Automatic bone maturity grading from EOS radiographs in Adolescent Idiopathic Scoliosis. Comput Biol Med 2021; 136:104681. [PMID: 34332349] [DOI: 10.1016/j.compbiomed.2021.104681]
Abstract
Adolescent Idiopathic Scoliosis (AIS) is a deformation of the spine that is routinely diagnosed using posteroanterior and lateral radiographs. The Risser sign used in skeletal maturity assessment is commonly accepted in the management of AIS patients. However, the Risser sign is subject to inter-observer variability and relies mainly on the observation of ossification of the iliac crests. This study proposes a new machine-learning-based approach for Risser sign skeletal maturity assessment using EOS radiographs. Regions of interest, including the right and left humeral heads, the left and right femoral heads, and the pelvis, are extracted from the radiographs. First, a total of 24 image features is extracted from the EOS radiographs using a ResNet101-type convolutional neural network (CNN) pre-trained on the ImageNet database. Then, a support vector machine (SVM) algorithm is used for the final Risser sign classification. The experimental results demonstrate overall accuracies of 84%, 78%, and 80% for the iliac crests, humeral heads, and femoral heads, respectively. Class activation maps using Grad-CAM were also investigated to understand the features of our model. In conclusion, our machine learning approach shows promise for incorporating a large number of image features from different regions of interest to improve Risser grading for skeletal maturity. Automatic classification could contribute to the management of AIS patients.
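As a rough sketch of the generic pipeline summarised above (deep features from a pretrained CNN followed by an SVM), the snippet below uses torchvision's ResNet101 as a fixed feature extractor and scikit-learn's SVC. The ROI images and Risser grades are synthetic placeholders, the ImageNet weights are downloaded on first use (torchvision 0.13+ API), and nothing here reproduces the authors' actual 24-feature design.

```python
# Sketch only: pretrained CNN as a fixed feature extractor for ROI crops,
# followed by an SVM that outputs the Risser grade.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.svm import SVC

backbone = models.resnet101(weights=models.ResNet101_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()          # keep the 2048-d pooled features
backbone.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def roi_features(img):
    """2048-d deep feature vector for one region-of-interest crop (PIL image)."""
    with torch.no_grad():
        return backbone(preprocess(img).unsqueeze(0)).squeeze(0).numpy()

# Stand-ins for cropped ROIs and their Risser grades (0-5).
rois = [Image.fromarray(np.uint8(np.random.rand(256, 256, 3) * 255)) for _ in range(6)]
grades = [0, 1, 2, 3, 4, 5]
X = np.stack([roi_features(r) for r in rois])
clf = SVC(kernel="rbf", C=1.0).fit(X, grades)
print("predicted grades on the toy ROIs:", clf.predict(X))
```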
Affiliation(s)
- Eddie Magnide
- Department of Software and IT Engineering, École de Technologie Supérieure, Montreal, Canada.
- Georges Wona Tchaha
- Department of Software and IT Engineering, École de Technologie Supérieure, Montreal, Canada
- Julie Joncas
- Department of Orthopedics, Sainte-Justine Hospital, Montreal, Canada
- Soraya Barchi
- Department of Orthopedics, Sainte-Justine Hospital, Montreal, Canada
- Stefan Parent
- Department of Orthopedics, Sainte-Justine Hospital, Montreal, Canada; Université de Montréal, Montreal, Canada
- Guy Grimard
- Department of Orthopedics, Sainte-Justine Hospital, Montreal, Canada; Université de Montréal, Montreal, Canada
- Hubert Labelle
- Department of Orthopedics, Sainte-Justine Hospital, Montreal, Canada; Université de Montréal, Montreal, Canada
- Luc Duong
- Department of Software and IT Engineering, École de Technologie Supérieure, Montreal, Canada
54. Moses DA. Deep learning applied to automatic disease detection using chest X-rays. J Med Imaging Radiat Oncol 2021; 65:498-517. [PMID: 34231311] [DOI: 10.1111/1754-9485.13273]
Abstract
Deep learning (DL) has shown rapid advancement and considerable promise when applied to the automatic detection of diseases using CXRs. This is important given the widespread use of CXRs across the world in diagnosing significant pathologies, and the lack of trained radiologists to report them. This review article introduces the basic concepts of DL as applied to CXR image analysis including basic deep neural network (DNN) structure, the use of transfer learning and the application of data augmentation. It then reviews the current literature on how DNN models have been applied to the detection of common CXR abnormalities (e.g. lung nodules, pneumonia, tuberculosis and pneumothorax) over the last few years. This includes DL approaches employed for the classification of multiple different diseases (multi-class classification). Performance of different techniques and models and their comparison with human observers are presented. Some of the challenges facing DNN models, including their future implementation and relationships to radiologists, are also discussed.
Affiliation(s)
- Daniel A Moses
- Graduate School of Biomedical Engineering, Faculty of Engineering, University of New South Wales, Sydney, New South Wales, Australia; Department of Medical Imaging, Prince of Wales Hospital, Sydney, New South Wales, Australia
55. Tasci E, Uluturk C, Ugur A. A voting-based ensemble deep learning method focusing on image augmentation and preprocessing variations for tuberculosis detection. Neural Comput Appl 2021; 33:15541-15555. [PMID: 34121816] [PMCID: PMC8182991] [DOI: 10.1007/s00521-021-06177-2]
Abstract
Tuberculosis (TB) is a potentially dangerous infectious disease that mostly affects the lungs worldwide. Detection and treatment of TB at an early stage are critical for preventing the disease and decreasing the risk of mortality and of transmission to others. As the most common medical imaging technique, chest radiography (CXR) is useful for identifying thoracic diseases. Computer-aided detection (CADe) systems are also crucial mechanisms that provide more reliable, efficient, and systematic approaches while accelerating clinicians' decision-making. In this study, we propose a voting- and preprocessing-variation-based ensemble CNN model for TB detection. We utilize 40 different variations of fine-tuned CNN models based on InceptionV3 and Xception, using the CLAHE (contrast-limited adaptive histogram equalization) preprocessing technique and 10 different image transformations as data augmentation types. After analyzing all these combination schemes, the three or five best classifier models are selected as base learners for voting. We apply Bayesian-optimization-based weighted voting and the average of probabilities as combination rules in soft voting on two TB CXR image datasets to obtain better results across various numbers of models. The computational results indicate that the proposed method achieves accuracy rates of 97.500% and 97.699% on the Montgomery and Shenzhen datasets, respectively. Furthermore, our method outperforms state-of-the-art results on the two TB detection datasets in terms of accuracy.
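The combination rules mentioned above (average of probabilities and weighted soft voting) reduce to a few lines; the sketch below uses made-up probability matrices, and the weights are simply illustrative stand-ins for values a Bayesian optimiser would produce.

```python
# Sketch only: soft voting over base CNN variants, either as a plain average
# of probabilities or as a weighted average with externally tuned weights.
import numpy as np

# Probabilities for 4 images x 2 classes (normal, TB) from three base models.
p1 = np.array([[0.9, 0.1], [0.3, 0.7], [0.6, 0.4], [0.2, 0.8]])
p2 = np.array([[0.8, 0.2], [0.4, 0.6], [0.7, 0.3], [0.1, 0.9]])
p3 = np.array([[0.7, 0.3], [0.2, 0.8], [0.4, 0.6], [0.3, 0.7]])
probs = np.stack([p1, p2, p3])           # shape: (models, samples, classes)

avg_vote = probs.mean(axis=0)            # average of probabilities
weights = np.array([0.5, 0.3, 0.2])      # e.g. found by an optimiser
weighted_vote = np.tensordot(weights, probs, axes=1)

print("average-vote labels :", avg_vote.argmax(axis=1))
print("weighted-vote labels:", weighted_vote.argmax(axis=1))
```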
Affiliation(s)
- Erdal Tasci
- Computer Engineering Department, Ege University, Izmir, Turkey
- Caner Uluturk
- Computer Engineering Department, Ege University, Izmir, Turkey
- Aybars Ugur
- Computer Engineering Department, Ege University, Izmir, Turkey
56. Govindarajan S, Swaminathan R. Extreme Learning Machine based Differentiation of Pulmonary Tuberculosis in Chest Radiographs using Integrated Local Feature Descriptors. Comput Methods Programs Biomed 2021; 204:106058. [PMID: 33789212] [DOI: 10.1016/j.cmpb.2021.106058]
Abstract
BACKGROUND AND OBJECTIVE Computer-aided diagnosis of Pulmonary Tuberculosis in chest radiographs relies on the differentiation of subtle and non-specific alterations in the images. In this study, an attempt has been made to identify and classify Tuberculosis conditions against healthy subjects in chest radiographs using integrated local feature descriptors and variants of the extreme learning machine. METHODS Lung fields in the chest images are segmented using the Reaction Diffusion Level Set method. Local feature descriptors such as Median Robust Extended Local Binary Patterns and Gradient Local Ternary Patterns are extracted. Extreme Learning Machine (ELM) and Online Sequential ELM (OSELM) classifiers are employed to identify Tuberculosis conditions, and their performance is analysed using standard metrics. RESULTS Results show that the adopted segmentation method is able to delineate lung fields in both healthy and Tuberculosis images. Extracted features are statistically significant even in images with inter- and intra-subject variability. The sigmoid activation function yields accuracy and sensitivity values greater than 98% for both classifiers. The highest sensitivity is observed with OSELM using a minimal set of significant features for detecting Tuberculosis images. CONCLUSION As the ELM-based method is able to differentiate the subtle changes across inter- and intra-subject variations of chest X-ray images, the proposed methodology seems useful for computer-based detection of Pulmonary Tuberculosis.
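For readers unfamiliar with ELM, the toy sketch below (synthetic data, not the study's descriptors) shows the defining trick: hidden-layer weights are random and fixed, and only the output weights are obtained in closed form with a pseudo-inverse, which is what makes ELM training fast.

```python
# Minimal extreme learning machine sketch on synthetic data (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 36))                 # e.g. texture descriptors
y = (X[:, :5].sum(axis=1) > 0).astype(float)   # toy binary target
T_mat = np.column_stack([1 - y, y])            # one-hot targets

n_hidden = 64
W = rng.normal(size=(X.shape[1], n_hidden))    # random input weights (fixed)
b = rng.normal(size=n_hidden)                  # random biases (fixed)

def hidden(X):
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))  # sigmoid activation

beta = np.linalg.pinv(hidden(X)) @ T_mat       # closed-form output weights
pred = hidden(X) @ beta
acc = (pred.argmax(axis=1) == y).mean()
print(f"training accuracy of the toy ELM: {acc:.3f}")
```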
Affiliation(s)
- Satyavratan Govindarajan
- Biomedical Engineering Group, Department of Applied Mechanics, Indian Institute of Technology Madras, Chennai, India.
- Ramakrishnan Swaminathan
- Biomedical Engineering Group, Department of Applied Mechanics, Indian Institute of Technology Madras, Chennai, India
57. Afzali A, Babapour Mofrad F, Pouladian M. 2D Statistical Lung Shape Analysis Using Chest Radiographs: Modelling and Segmentation. J Digit Imaging 2021; 34:523-540. [PMID: 33754214] [PMCID: PMC8329117] [DOI: 10.1007/s10278-021-00440-7]
Abstract
Accurate information about lung shape and its anatomical variations is valuable in medical imaging. Normal variations of the lung shape can be interpreted as a normal lung; in contrast, abnormal variations can result from a pulmonary disease. The goal of this study is twofold: (1) to present two lung shape models that differ in the reference points used in the registration process, in order to show their impact on estimating inter-patient 2D lung shape variations, and (2) to use the obtained models for lung field segmentation with the active shape model (ASM) technique. The presented models, which capture inter-patient 2D lung shape variations in two different forms, are fully compared and evaluated. The results show that the models, together with standard principal component analysis (PCA), can explain more than 95% of the total variation in all cases using only the first 7 principal component (PC) modes for both lungs. Both models are used in an ASM-based technique for lung field segmentation. The segmentation results are evaluated using leave-one-out cross-validation. According to the experimental results, the proposed method has average Dice similarity coefficients of 97.1% and 96.1% for the right and the left lung, respectively. The results show that the proposed segmentation method is more stable and accurate than other model-based techniques for inter-patient lung field segmentation.
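A minimal sketch of the statistical-shape-model side of this pipeline is given below, on synthetic landmark data: PCA reports how many modes are needed to reach 95% of the shape variance, and a small helper computes the Dice similarity coefficient used to score the segmentation. The landmark data and mask shapes are made up for illustration.

```python
# Illustrative sketch: PCA over flattened 2D landmark shapes plus a Dice helper.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# 120 training shapes, each 40 (x, y) landmarks flattened to 80 values,
# built from a low-rank latent structure plus noise.
latent = rng.normal(size=(120, 5))
shapes = latent @ rng.normal(size=(5, 80)) + 0.02 * rng.normal(size=(120, 80))

pca = PCA().fit(shapes)
cum_var = np.cumsum(pca.explained_variance_ratio_)
n_modes = int(np.searchsorted(cum_var, 0.95) + 1)
print(f"modes needed for 95% of shape variance: {n_modes}")

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

a = np.zeros((64, 64), bool); a[10:50, 10:40] = True
b = np.zeros((64, 64), bool); b[12:52, 12:42] = True
print(f"Dice of two toy masks: {dice(a, b):.3f}")
```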
Affiliation(s)
- Ali Afzali
- Department of Medical Radiation Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
- Farshid Babapour Mofrad
- Department of Medical Radiation Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
- Majid Pouladian
- Department of Biomedical Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
58. Altaf F, Islam SMS, Janjua NK. A novel augmented deep transfer learning for classification of COVID-19 and other thoracic diseases from X-rays. Neural Comput Appl 2021; 33:14037-14048. [PMID: 33948047] [PMCID: PMC8083924] [DOI: 10.1007/s00521-021-06044-0]
Abstract
Deep learning has provided numerous breakthroughs in natural imaging tasks. However, its successful application to medical images is severely handicapped by the limited amount of annotated training data. Transfer learning is commonly adopted for medical imaging tasks. However, a large covariate shift between the source domain of natural images and the target domain of medical images results in poor transfer learning. Moreover, the scarcity of annotated data for medical imaging tasks causes further problems for effective transfer learning. To address these problems, we develop an augmented ensemble transfer learning technique that leads to significant performance gains over conventional transfer learning. Our technique uses an ensemble of deep learning models, where the architecture of each network is modified with extra layers to account for the dimensionality change between the images of the source and target data domains. Moreover, the model is hierarchically tuned to the target domain with augmented training data. Along with the network ensemble, we also utilize an ensemble of dictionaries that are based on features extracted from the augmented models. The dictionary ensemble provides an additional performance boost to our method. We first establish the effectiveness of our technique on the challenging ChestXray-14 radiography data set. Our experimental results show more than a 50% reduction in the error rate with our method as compared to the baseline transfer learning technique. We then apply our technique to a recent COVID-19 data set for binary and multi-class classification tasks. Our technique achieves 99.49% accuracy for binary classification and 99.24% for multi-class classification.
Affiliation(s)
- Fouzia Altaf
- School of Science, Edith Cowan University, Joondalup, WA Australia
- Syed M. S. Islam
- School of Science, Edith Cowan University, Joondalup, WA Australia
59. Puttagunta M, Ravi S. Medical image analysis based on deep learning approach. Multimed Tools Appl 2021; 80:24365-24398. [PMID: 33841033] [PMCID: PMC8023554] [DOI: 10.1007/s11042-021-10707-4]
Abstract
Medical imaging plays a significant role in different clinical applications, such as procedures used for early detection, monitoring, diagnosis, and treatment evaluation of various medical conditions. Basics of the principles and implementations of artificial neural networks and deep learning are essential for understanding medical image analysis in computer vision. The Deep Learning Approach (DLA) in medical image analysis has emerged as a fast-growing research field. DLA has been widely used in medical imaging to detect the presence or absence of disease. This paper presents the development of artificial neural networks and a comprehensive analysis of DLA, which delivers promising medical imaging applications. Most DLA implementations concentrate on X-ray images, computed tomography, mammography images, and digital histopathology images. It provides a systematic review of articles on classification, detection, and segmentation of medical images based on DLA. This review guides researchers in considering appropriate developments in medical image analysis based on DLA.
Affiliation(s)
- Muralikrishna Puttagunta
- Department of Computer Science, School of Engineering and Technology, Pondicherry University, Pondicherry, India
- S. Ravi
- Department of Computer Science, School of Engineering and Technology, Pondicherry University, Pondicherry, India
60. Guo R, Passi K, Jain CK. Tuberculosis Diagnostics and Localization in Chest X-Rays via Deep Learning Models. Front Artif Intell 2021; 3:583427. [PMID: 33733221] [PMCID: PMC7861240] [DOI: 10.3389/frai.2020.583427]
Abstract
For decades, tuberculosis (TB), a potentially serious infectious lung disease, has remained a leading cause of death worldwide. Proven to be convenient, efficient, and cost-effective, chest X-ray (CXR) has become the preliminary medical imaging tool for detecting TB. Arguably, the quality of TB diagnosis will improve vastly with automated analysis of CXRs for TB detection and for the localization of suspected areas that may manifest TB. The current line of research aims to develop an efficient computer-aided detection system that supports doctors and radiologists in making well-informed TB diagnoses from patients' CXRs. Here, an integrated process to improve TB diagnostics via convolutional neural networks (CNNs) and localization in CXRs via deep-learning models is proposed. Three key steps in the TB diagnostics process include (a) modifying CNN model structures, (b) model fine-tuning via the artificial bee colony algorithm, and (c) the implementation of a linear average-based ensemble method. Comparisons of the overall performance are made across all three steps among the experimented deep CNN models on two publicly available CXR datasets, namely, the Shenzhen Hospital CXR dataset and the National Institutes of Health CXR dataset. Validated performance includes detecting CXR abnormalities and differentiating among seven TB-related manifestations (consolidation, effusion, fibrosis, infiltration, mass, nodule, and pleural thickening). Importantly, class activation mapping is employed to provide a visual interpretation of the diagnostic result by localizing the detected lung abnormality manifestation on the CXR. Compared to the state of the art, the resulting approach shows outstanding performance both in lung abnormality detection and in specific TB-related manifestation diagnosis with localization in CXRs.
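Class activation mapping, used above for visual interpretation, can be illustrated with random tensors: the final convolutional feature maps are weighted by the classification weights of the predicted class to produce a coarse localisation map. The sketch below is illustrative only and not the authors' implementation.

```python
# Toy class-activation-map computation with random stand-in tensors.
import numpy as np

rng = np.random.default_rng(0)
feature_maps = rng.random((512, 7, 7))       # last conv block, C x H x W
fc_weights = rng.random((2, 512))            # class weights after global pooling

logits = fc_weights @ feature_maps.mean(axis=(1, 2))   # global average pooling
predicted = int(logits.argmax())

cam = np.tensordot(fc_weights[predicted], feature_maps, axes=1)  # 7 x 7 map
cam = np.maximum(cam, 0)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)         # normalise

print("predicted class:", predicted)
print("coarse activation map (7x7 grid):")
print(np.round(cam, 2))
```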
Affiliation(s)
- Ruihua Guo
- Department of Mathematics and Computer Science, Laurentian University, Greater Sudbury, ON, Canada
- Kalpdrum Passi
- Department of Mathematics and Computer Science, Laurentian University, Greater Sudbury, ON, Canada
- Chakresh Kumar Jain
- Department of Biotechnology, Jaypee Institute of Information Technology, Noida, India
61. A new automated CNN deep learning approach for identification of ECG congestive heart failure and arrhythmia using constant-Q non-stationary Gabor transform. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2020.102326]
62. Proposing a novel multi-instance learning model for tuberculosis recognition from chest X-ray images based on CNNs, complex networks and stacked ensemble. Phys Eng Sci Med 2021; 44:291-311. [PMID: 33616887] [DOI: 10.1007/s13246-021-00980-w]
Abstract
Tuberculosis (TB), caused by Mycobacterium tuberculosis, is an infectious bacterial disease. In 2018, about 10 million people were diagnosed with TB worldwide. Early diagnosis of TB is necessary for effective treatment, a higher survival rate, and preventing its further transmission. The gold standard for tuberculosis diagnosis is sputum culture. Nevertheless, posterior-anterior chest radiography (CXR) is an effective central method, with low cost and a relatively low radiation dose, for screening TB with immediate results. TB diagnosis from CXR is a challenging task requiring a high level of expertise due to the diverse presentation of the disease. Significant intra-class variation and inter-class similarity in CXR images make TB diagnosis from CXR even more challenging. The main aim of this study is tuberculosis recognition from CXR images for reducing the disease burden. For this purpose, a novel multi-instance classification model is proposed which is based on CNNs, complex networks and stacked ensemble (CCNSE). A main advantage of CCNSE is that it does not require an accurate lung segmentation to localize the suspicious regions. Several overlapping patches are extracted from each CXR image. Features describing each patch are obtained by CNNs, and the feature vectors are then clustered. Local complex networks (LCN) and global ones (GCN) of the cluster representatives are formed, and feature engineering on the LCN (GCN) generates further features at the image level (at the patch and image levels). Global clustering on these feature sets is performed for all patches. Each patch is assigned the purity score of its corresponding cluster. Patch-level features and purity scores are aggregated for each image. Finally, the images are classified into normal and TB classes with a proposed stacked ensemble classifier. Two datasets are used in this study: the Montgomery County CXR set (MC) and the Shenzhen dataset (SZ). MC/SZ includes 138/662 chest X-rays, of which 80 and 58/326 and 336 images belong to the normal/TB classes, respectively. The experimental results show that the proposed method, with an AUC of 99.00 ± 0.28/98.00 ± 0.16 for MC/SZ and an accuracy of 99.26 ± 0.40/99.22 ± 0.32 for MC/SZ under a fivefold cross-validation strategy, is superior to the compared methods for diagnosis of TB from CXR images. The proposed method can be used as a computer-aided diagnosis system to reduce the manual time, effort and dependence on specialists' expertise.
63. Ayaz M, Shaukat F, Raja G. Ensemble learning based automatic detection of tuberculosis in chest X-ray images using hybrid feature descriptors. Phys Eng Sci Med 2021; 44:183-194. [PMID: 33459996] [PMCID: PMC7812355] [DOI: 10.1007/s13246-020-00966-0]
Abstract
Tuberculosis (TB) remains one of the major health problems in modern times, with a high mortality rate. While efforts are being made to make early diagnosis accessible and more reliable in high TB burden countries, digital chest radiography has become a popular source for this purpose. However, the screening process requires expert radiologists, which may be a potential barrier in developing countries. A fully automatic computer-aided diagnosis system can reduce the need for trained personnel in the early diagnosis of TB using chest X-ray images. In this paper, we have proposed a novel TB detection technique that combines hand-crafted features with deep (convolutional neural network-based) features through ensemble learning. Hand-crafted features were extracted via the Gabor filter, and deep features were extracted via pre-trained deep learning models. Two publicly available datasets, namely (i) Montgomery and (ii) Shenzhen, were used to evaluate the proposed system. The proposed methodology was validated with a k-fold cross-validation scheme. Areas under the receiver operating characteristic curve of 0.99 and 0.97 were achieved for the Shenzhen and Montgomery datasets, respectively, which shows the superiority of the proposed scheme.
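As an illustration of the hand-crafted side of this hybrid scheme, the sketch below computes Gabor filter-bank statistics on a stand-in random texture and concatenates them with a placeholder deep feature vector; the ensemble classifier itself is omitted, and none of this reproduces the authors' exact filter settings.

```python
# Sketch of the feature side only: Gabor filter-bank statistics fused with a
# placeholder deep feature vector that a pretrained CNN would supply.
import numpy as np
from skimage.filters import gabor

rng = np.random.default_rng(3)
image = rng.random((128, 128))                # stand-in for a chest X-ray

handcrafted = []
for frequency in (0.1, 0.2, 0.3):
    for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
        real, _ = gabor(image, frequency=frequency, theta=theta)
        handcrafted.extend([real.mean(), real.std()])   # simple response statistics
handcrafted = np.array(handcrafted)           # 24 Gabor-based descriptors

deep_features = np.zeros(2048)                # placeholder for CNN features
fused = np.concatenate([handcrafted, deep_features])
print("hand-crafted length:", handcrafted.shape[0], "| fused length:", fused.shape[0])
```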
Affiliation(s)
- Muhammad Ayaz
- Faculty of Electronics & Electrical Engineering, University of Engineering & Technology, Taxila, 47080, Pakistan
- Furqan Shaukat
- Department of Electronics Engineering, University of Chakwal, Chakwal, Pakistan
- Gulistan Raja
- Faculty of Electronics & Electrical Engineering, University of Engineering & Technology, Taxila, 47080, Pakistan
64. Saleem HN, Sheikh UU, Khalid SA. Classification of Chest Diseases from X-ray Images on the CheXpert Dataset. Lecture Notes in Electrical Engineering 2021:837-850. [DOI: 10.1007/978-981-16-0749-3_64]
65. Hu Y, Xie C, Yang H, Ho JWK, Wen J, Han L, Lam KO, Wong IYH, Law SYK, Chiu KWH, Vardhanabhuti V, Fu J. Computed tomography-based deep-learning prediction of neoadjuvant chemoradiotherapy treatment response in esophageal squamous cell carcinoma. Radiother Oncol 2021; 154:6-13. [PMID: 32941954] [DOI: 10.1016/j.radonc.2020.09.014]
Abstract
BACKGROUND Deep learning shows promise for predicting treatment response. We aimed to evaluate and validate the predictive performance of a CT-based model using deep learning features for predicting pathologic complete response to neoadjuvant chemoradiotherapy (nCRT) in esophageal squamous cell carcinoma (ESCC). MATERIALS AND METHODS Patients were retrospectively enrolled between April 2007 and December 2018 from two institutions. We extracted deep learning features from pretreatment CT images in the training cohort (n = 161) using six pre-trained convolutional neural networks. Support vector machine was adopted as the classifier. Validation was performed in an external testing cohort (n = 70). We assessed the performance using the area under the receiver operating characteristic curve (AUC) and selected an optimal model, which was compared with a radiomics model developed from the training cohort. A clinical model consisting of clinical factors only was also built for baseline comparison. We further conducted a radiogenomics analysis using gene expression profiles to reveal underlying biology associated with the radiological prediction. RESULTS The optimal model with features extracted from ResNet50 achieved an AUC and accuracy of 0.805 (95% CI, 0.696-0.913) and 77.1% (65.6%-86.3%) in the testing cohort, compared with 0.725 (0.605-0.846) and 67.1% (54.9%-77.9%) for the radiomics model. All the radiological models showed better predictive performance than the clinical model. Radiogenomics analysis suggested a potential association mainly with the WNT signaling pathway and the tumor microenvironment. CONCLUSIONS The novel and noninvasive deep learning approach could provide efficient and accurate prediction of treatment response to nCRT in ESCC, and benefit clinical decision-making on therapeutic strategy.
Affiliation(s)
- Yihuai Hu
- Department of Thoracic Surgery, Sun Yat-sen University Cancer Center, Guangzhou, China; State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, China; Guangdong Esophageal Cancer Institute, Guangzhou, China
- Chenyi Xie
- Department of Diagnostic Radiology, Li Ka Shing Faculty of Medicine, University of Hong Kong, Hong Kong, China
- Hong Yang
- Department of Thoracic Surgery, Sun Yat-sen University Cancer Center, Guangzhou, China; State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, China; Guangdong Esophageal Cancer Institute, Guangzhou, China
- Joshua W K Ho
- School of Biomedical Sciences, Li Ka Shing Faculty of Medicine, University of Hong Kong, Hong Kong, China
- Jing Wen
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, China; Guangdong Esophageal Cancer Institute, Guangzhou, China
- Lujun Han
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, China; Department of Medical Imaging, Sun Yat-sen University Cancer Center, Guangzhou, China
- Ka-On Lam
- Department of Clinical Oncology, Li Ka Shing Faculty of Medicine, University of Hong Kong, Hong Kong, China
- Ian Y H Wong
- Department of Surgery, Li Ka Shing Faculty of Medicine, University of Hong Kong, Hong Kong, China
- Simon Y K Law
- Department of Surgery, Li Ka Shing Faculty of Medicine, University of Hong Kong, Hong Kong, China
- Keith W H Chiu
- Department of Diagnostic Radiology, Li Ka Shing Faculty of Medicine, University of Hong Kong, Hong Kong, China
- Varut Vardhanabhuti
- Department of Diagnostic Radiology, Li Ka Shing Faculty of Medicine, University of Hong Kong, Hong Kong, China
- Jianhua Fu
- Department of Thoracic Surgery, Sun Yat-sen University Cancer Center, Guangzhou, China; State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, China; Guangdong Esophageal Cancer Institute, Guangzhou, China
66. Owais M, Arsalan M, Mahmood T, Kim YH, Park KR. Comprehensive Computer-Aided Decision Support Framework to Diagnose Tuberculosis From Chest X-Ray Images: Data Mining Study. JMIR Med Inform 2020; 8:e21790. [PMID: 33284119] [PMCID: PMC7752539] [DOI: 10.2196/21790]
Abstract
Background Tuberculosis (TB) is a highly infectious disease that can be fatal. Its early diagnosis and treatment can significantly reduce the mortality rate. In the literature, several computer-aided diagnosis (CAD) tools have been proposed for the efficient diagnosis of TB from chest radiograph (CXR) images. However, the majority of previous studies adopted conventional handcrafted feature-based algorithms. In addition, some recent CAD tools utilized the strength of deep learning methods to further enhance diagnostic performance. Nevertheless, all these existing methods can only classify a given CXR image into a binary class (either TB positive or TB negative) without providing further descriptive information. Objective The main objective of this study is to propose a comprehensive CAD framework for the effective diagnosis of TB by providing visual as well as descriptive information from the previous patients' database. Methods To accomplish our objective, we first propose a fusion-based deep classification network for the CAD decision that exhibits promising performance over various state-of-the-art methods. Furthermore, a multilevel similarity measure algorithm is devised based on multiscale information fusion to retrieve the best-matched cases from the previous database. Results The performance of the framework was evaluated on 2 well-known CXR data sets made available by the US National Library of Medicine and the National Institutes of Health. Our classification model exhibited the best diagnostic performance (0.929, 0.937, 0.921, 0.928, and 0.965 for F1 score, average precision, average recall, accuracy, and area under the curve, respectively) and outperforms various state-of-the-art methods. Conclusions This paper presents a comprehensive CAD framework to diagnose TB from CXR images by retrieving the relevant cases and their clinical observations from the previous patients' database. These retrieval results assist the radiologist in making an effective diagnostic decision related to the current medical condition of a patient. Moreover, the retrieval results can help radiologists subjectively validate the CAD decision.
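The retrieval component can be pictured as a nearest-neighbour search in feature space; the sketch below (random placeholder features, plain cosine similarity rather than the proposed multilevel measure) returns the top-matched archived cases whose stored findings would be shown alongside the CAD decision.

```python
# Minimal retrieval sketch with placeholder data: cosine similarity between the
# query CXR's feature vector and a database of previous cases.
import numpy as np

rng = np.random.default_rng(7)
database = rng.normal(size=(500, 256))        # features of archived cases
findings = [f"case_{i:03d}" for i in range(500)]
query = rng.normal(size=256)                  # features of the current CXR

def top_k_matches(query, database, k=3):
    q = query / np.linalg.norm(query)
    d = database / np.linalg.norm(database, axis=1, keepdims=True)
    sims = d @ q
    order = np.argsort(-sims)[:k]
    return [(findings[i], float(sims[i])) for i in order]

for case_id, score in top_k_matches(query, database):
    print(f"{case_id}: cosine similarity {score:.3f}")
```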
Affiliation(s)
- Muhammad Owais
- Division of Electronics and Electrical Engineering, Dongguk University, Seoul, Republic of Korea
- Muhammad Arsalan
- Division of Electronics and Electrical Engineering, Dongguk University, Seoul, Republic of Korea
- Tahir Mahmood
- Division of Electronics and Electrical Engineering, Dongguk University, Seoul, Republic of Korea
- Yu Hwan Kim
- Division of Electronics and Electrical Engineering, Dongguk University, Seoul, Republic of Korea
- Kang Ryoung Park
- Division of Electronics and Electrical Engineering, Dongguk University, Seoul, Republic of Korea
67. Tavolara TE, Niazi MKK, Ginese M, Piedra-Mora C, Gatti DM, Beamer G, Gurcan MN. Automatic discovery of clinically interpretable imaging biomarkers for Mycobacterium tuberculosis supersusceptibility using deep learning. EBioMedicine 2020; 62:103094. [PMID: 33166789] [PMCID: PMC7658666] [DOI: 10.1016/j.ebiom.2020.103094]
Abstract
BACKGROUND Identifying which individuals will develop tuberculosis (TB) remains an unresolved problem due to the few animal models and computational approaches that effectively address its heterogeneity. To address these shortcomings, we show that Diversity Outbred (DO) mice reflect human-like genetic diversity and develop human-like lung granulomas when infected with Mycobacterium tuberculosis (M.tb). METHODS Following M.tb infection, a "supersusceptible" phenotype develops in approximately one-third of DO mice, characterized by rapid morbidity and mortality within 8 weeks. These supersusceptible DO mice develop lung granuloma patterns akin to those in humans. This led us to utilize deep learning to identify supersusceptibility from hematoxylin & eosin (H&E) lung tissue sections, utilizing only clinical outcomes (supersusceptible or not-supersusceptible) as labels. FINDINGS The proposed machine learning model diagnosed supersusceptibility with high accuracy (91.50 ± 4.68%) compared to two expert pathologists using H&E-stained lung sections (94.95% and 94.58%). Two non-experts used the imaging biomarker to diagnose supersusceptibility with high accuracy (88.25% and 87.95%) and agreement (96.00%). A board-certified veterinary pathologist (GB) examined the imaging biomarker and determined the model was making diagnostic decisions using a form of granuloma necrosis (karyorrhectic and pyknotic nuclear debris). This was corroborated by one other board-certified veterinary pathologist. Finally, the imaging biomarker was quantified, providing a novel means to convert visual patterns within granulomas to data suitable for statistical analyses. IMPLICATIONS Overall, our results have translatable implications for improving our understanding of TB and also for the broader field of computational pathology, in which clinical outcomes alone can drive automatic identification of interpretable imaging biomarkers, knowledge discovery, and validation of existing clinical biomarkers. FUNDING National Institutes of Health and American Lung Association.
Affiliation(s)
- Thomas E Tavolara
- Center for Biomedical Informatics, Wake Forest School of Medicine, 486 Patterson Avenue, Winston-Salem, NC 27101, United States
- M Khalid Khan Niazi
- Center for Biomedical Informatics, Wake Forest School of Medicine, 486 Patterson Avenue, Winston-Salem, NC 27101, United States
- Melanie Ginese
- Department of Infectious Disease and Global Health, Tufts University Cummings School of Veterinary Medicine, 200 Westboro Rd., North Grafton, MA 01536, United States
- Cesar Piedra-Mora
- Department of Biomedical Sciences, Tufts University Cummings School of Veterinary Medicine, 200 Westboro Rd., North Grafton, MA 01536, United States
- Daniel M Gatti
- The College of the Atlantic, 105 Eden Street, Bar Harbor, ME 04609, United States
- Gillian Beamer
- Department of Infectious Disease and Global Health, Tufts University Cummings School of Veterinary Medicine, 200 Westboro Rd., North Grafton, MA 01536, United States
- Metin N Gurcan
- Center for Biomedical Informatics, Wake Forest School of Medicine, 486 Patterson Avenue, Winston-Salem, NC 27101, United States
68. A Survey of Deep Learning for Lung Disease Detection on Medical Images: State-of-the-Art, Taxonomy, Issues and Future Directions. J Imaging 2020; 6:131. [PMID: 34460528] [PMCID: PMC8321202] [DOI: 10.3390/jimaging6120131]
Abstract
The recent developments of deep learning support the identification and classification of lung diseases in medical images. Hence, numerous works on the detection of lung disease using deep learning can be found in the literature. This paper presents a survey of deep learning for lung disease detection in medical images. Only one survey paper on deep learning for lung disease detection has been published in the last five years; however, it lacks a taxonomy and an analysis of the trends in recent work. The objectives of this paper are to present a taxonomy of state-of-the-art deep learning based lung disease detection systems, visualise the trends of recent work in the domain, and identify the remaining issues and potential future directions in this domain. Ninety-eight articles published from 2016 to 2020 were considered in this survey. The taxonomy consists of seven attributes that are common in the surveyed articles: image types, features, data augmentation, types of deep learning algorithms, transfer learning, the ensemble of classifiers and types of lung diseases. The presented taxonomy could be used by other researchers to plan their research contributions and activities. The potential future directions suggested could further improve the efficiency and increase the number of deep learning aided lung disease detection applications.
69. Chalakkal R, Hafiz F, Abdulla W, Swain A. An efficient framework for automated screening of Clinically Significant Macular Edema. Comput Biol Med 2020; 130:104128. [PMID: 33529843] [DOI: 10.1016/j.compbiomed.2020.104128]
Abstract
The present study proposes a new approach to automated screening of Clinically Significant Macular Edema (CSME) and addresses two major challenges associated with such screenings, i.e., exudate segmentation and imbalanced datasets. The proposed approach replaces the conventional exudate segmentation based feature extraction by combining a pre-trained deep neural network with meta-heuristic feature selection. A feature space over-sampling technique is being used to overcome the effects of skewed datasets and the screening is accomplished by a k-NN based classifier. The role of each data-processing step (e.g., class balancing, feature selection) and the effects of limiting the region of interest to fovea on the classification performance are critically analyzed. Finally, the selection and implication of operating points on Receiver Operating Characteristic curve are discussed. The results of this study convincingly demonstrate that by following these fundamental practices of machine learning, a basic k-NN based classifier could effectively accomplish the CSME screening.
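A minimal sketch of the class-balancing and classification steps is shown below on synthetic features, with imbalanced-learn's SMOTE standing in for the paper's feature-space over-sampling technique and a k-NN classifier on top; it assumes the imbalanced-learn package is installed and does not reproduce the deep-feature extraction or meta-heuristic selection stages.

```python
# Sketch: over-sample the minority class in feature space, then classify with k-NN.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from imblearn.over_sampling import SMOTE

X, y = make_classification(n_samples=600, n_features=50, weights=[0.9, 0.1],
                           random_state=0)           # skewed CSME vs non-CSME
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
print("class counts before/after:", np.bincount(y_tr), np.bincount(y_bal))

knn = KNeighborsClassifier(n_neighbors=5).fit(X_bal, y_bal)
print("held-out accuracy:", knn.score(X_te, y_te))
```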
Affiliation(s)
- Renoh Chalakkal
- Department of Electrical & Computer Engineering, The University of Auckland, Auckland, New Zealand; oDocs Eye Care, Dunedin, New Zealand.
- Faizal Hafiz
- Department of Electrical & Computer Engineering, The University of Auckland, Auckland, New Zealand; oDocs Eye Care, Dunedin, New Zealand
- Waleed Abdulla
- Department of Electrical & Computer Engineering, The University of Auckland, Auckland, New Zealand
- Akshya Swain
- Department of Electrical & Computer Engineering, The University of Auckland, Auckland, New Zealand
70. Tartaglione E, Barbano CA, Berzovini C, Calandri M, Grangetto M. Unveiling COVID-19 from Chest X-Ray with Deep Learning: A Hurdles Race with Small Data. Int J Environ Res Public Health 2020; 17:E6933. [PMID: 32971995] [PMCID: PMC7557723] [DOI: 10.3390/ijerph17186933]
Abstract
The possibility of using widespread and simple chest X-ray (CXR) imaging for early screening of COVID-19 patients is attracting much interest from both the clinical and the AI communities. In this study we provide insights, and also raise warnings, on what it is reasonable to expect when applying deep learning to COVID classification of CXR images. We provide a methodological guide and a critical reading of an extensive set of statistical results that can be obtained using currently available datasets. In particular, we take on the challenge posed by the current small-size COVID data and show how significant the bias introduced by transfer learning from larger public non-COVID CXR datasets can be. We also contribute by providing results on a medium-size COVID CXR dataset, recently collected by one of the major emergency hospitals in Northern Italy during the peak of the COVID pandemic. These novel data allow us to help validate the generalization capacity of preliminary results circulating in the scientific community. Our conclusions shed some light on the possibility of effectively discriminating COVID using CXR.
Affiliation(s)
- Enzo Tartaglione
- Computer Science Department, University of Turin, 10149 Torino, Italy
- Carlo Alberto Barbano
- Computer Science Department, University of Turin, 10149 Torino, Italy
- Claudio Berzovini
- Azienda Ospedaliera Città della Salute e della Scienza Presidio Molinette, 10126 Torino, Italy
- Marco Calandri
- Oncology Department, University of Turin, AOU San Luigi Gonzaga, 10043 Orbassano, Italy
- Marco Grangetto
- Computer Science Department, University of Turin, 10149 Torino, Italy
71. Breast cancer detection from biopsy images using nucleus guided transfer learning and belief based fusion. Comput Biol Med 2020; 124:103954. [DOI: 10.1016/j.compbiomed.2020.103954]
72. Computer-Aided System for the Detection of Multicategory Pulmonary Tuberculosis in Radiographs. J Healthc Eng 2020; 2020:9205082. [PMID: 32908660] [PMCID: PMC7463336] [DOI: 10.1155/2020/9205082]
Abstract
The early screening and diagnosis of tuberculosis play an important role in the control and treatment of tuberculosis infections. In this paper, an integrated computer-aided system based on deep learning is proposed for the detection of multiple categories of tuberculosis lesions in chest radiographs. In this system, a fully convolutional neural network method is used to segment the lung area from the entire chest radiograph for pulmonary tuberculosis detection. Different from previous analyses of the whole chest radiograph, we focus the analysis on specific tuberculosis lesion areas and propose the first multicategory tuberculosis lesion detection method. In it, a learning scalable pyramid structure is introduced into the Faster Region-based Convolutional Network (Faster RCNN), which effectively improves the detection of small-area lesions, mines indistinguishable samples during the training process, and uses reinforcement learning to reduce the detection of false-positive lesions. To compare our method with current tuberculosis detection systems, we propose a classification rule for whole chest X-rays using the multicategory tuberculosis lesion detection model and achieve good performance on two public datasets (Montgomery: AUC = 0.977 and accuracy = 0.926; Shenzhen: AUC = 0.941 and accuracy = 0.902). Our proposed computer-aided system is superior to current systems and can be used to assist radiologists in diagnosis and public health providers in screening for tuberculosis in areas where tuberculosis is endemic.
73. Hybrid Learning of Hand-Crafted and Deep-Activated Features Using Particle Swarm Optimization and Optimized Support Vector Machine for Tuberculosis Screening. Appl Sci (Basel) 2020. [DOI: 10.3390/app10175749]
Abstract
Tuberculosis (TB) is a leading infectious killer, especially for people with Human Immunodeficiency Virus (HIV) and Acquired Immunodeficiency Syndrome (AIDS). Early diagnosis of TB is crucial for disease treatment and control. Radiology is a fundamental diagnostic tool used to screen or triage TB. Automated chest X-ray analysis can facilitate and expedite TB screening with fast and accurate reports of radiological findings, can rapidly screen large populations, and can alleviate a shortage of skilled experts in remote areas. We describe a hybrid feature-learning algorithm for automatic screening of TB in chest X-rays: lung regions were first segmented using the DeepLabv3+ model. Then, six sets of hand-crafted features from statistical textures, local binary pattern, GIST, histogram of oriented gradients (HOG), pyramid histogram of oriented gradients and bags of visual words (BoVW), and nine sets of deep-activated features from AlexNet, GoogLeNet, InceptionV3, XceptionNet, ResNet-50, SqueezeNet, ShuffleNet, MobileNet, and DenseNet, were extracted. The dominant features of each feature set were selected using particle swarm optimization, and then separately input to an optimized support vector machine classifier to label 'normal' and 'TB' X-rays. GIST, HOG, and BoVW from the hand-crafted features, and MobileNet and DenseNet from the deep-activated features, performed better than the others. Finally, we combined these five best-performing feature sets to build a hybrid-learning algorithm. Using the Montgomery County (MC) and Shenzhen datasets, we found that the hybrid features of GIST, HOG, BoVW, MobileNet and DenseNet performed best, achieving an accuracy of 92.5% for the MC dataset and 95.5% for the Shenzhen dataset.
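The feature-selection step can be sketched as a binary particle swarm over 0/1 feature masks scored by a cross-validated SVM; the toy example below uses synthetic data and a bare-bones PSO, so it illustrates the idea rather than the paper's tuned implementation.

```python
# Compact binary-PSO feature selection: each particle is a 0/1 mask over
# features, scored by cross-validated SVM accuracy on the selected columns.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=40, n_informative=8, random_state=0)

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(SVC(), X[:, mask.astype(bool)], y, cv=3).mean()

n_particles, n_iter, dim = 12, 15, X.shape[1]
pos = (rng.random((n_particles, dim)) < 0.5).astype(float)   # 0/1 feature masks
vel = rng.normal(scale=0.1, size=(n_particles, dim))
pbest = pos.copy()
pbest_fit = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = (rng.random((n_particles, dim)) < 1.0 / (1.0 + np.exp(-vel))).astype(float)
    fits = np.array([fitness(p) for p in pos])
    better = fits > pbest_fit
    pbest[better], pbest_fit[better] = pos[better], fits[better]
    gbest = pbest[pbest_fit.argmax()].copy()

print(f"features kept: {int(gbest.sum())}/{dim}, best CV accuracy: {pbest_fit.max():.3f}")
```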
74. Sethy PK, Barpanda NK, Rath AK, Behera SK. Deep feature based rice leaf disease identification using support vector machine. Comput Electron Agric 2020; 175:105527. [DOI: 10.1016/j.compag.2020.105527]
75. Sethy PK, Behera SK, Ratha PK, Biswas P. Detection of coronavirus disease (COVID-19) based on deep features and support vector machine. Int J Math Eng Manag Sci 2020; 5:643-651. [DOI: 10.33889/ijmems.2020.5.4.052]
Abstract
The detection of coronavirus disease (COVID-19) is now a critical task for medical practitioners. The coronavirus spreads quickly between people and is approaching 100,000 people worldwide. Consequently, it is essential to identify infected people so that the spread of the disease can be prevented. In this paper, a methodology based on deep features plus a support vector machine (SVM) is suggested for detecting coronavirus-infected patients using X-ray images. For classification, SVM is used instead of a deep-learning-based classifier, as the latter needs a large dataset for training and validation. The deep features from the fully connected layer of a CNN model are extracted and fed to the SVM for classification. The SVM separates the corona-affected X-ray images from the others. The methodology considers three categories of X-ray images: COVID-19, pneumonia and normal. The method helps medical practitioners to discriminate among COVID-19 patients, pneumonia patients and healthy people. SVM is evaluated for the detection of COVID-19 using the deep features of 13 different CNN models. The SVM produced the best results using the deep features of ResNet50. The classification model, i.e. ResNet50 plus SVM, achieved accuracy, sensitivity, FPR and F1 score of 95.33%, 95.33%, 2.33% and 95.34%, respectively, for the detection of COVID-19 (ignoring SARS, MERS and ARDS). Again, the highest accuracy achieved by ResNet50 plus SVM is 98.66%. The results are based on the X-ray images available in GitHub and Kaggle repositories. As the dataset is only in the hundreds, classification based on SVM is more robust than the transfer learning approach. A comparative analysis with other traditional classification methods is also carried out. The traditional methods are local binary patterns (LBP) plus SVM, histogram of oriented gradients (HOG) plus SVM and Gray Level Co-occurrence Matrix (GLCM) plus SVM. Among the traditional image classification methods, LBP plus SVM achieved 93.4% accuracy.
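One of the traditional baselines mentioned above, LBP plus SVM, is easy to sketch; the example below runs on synthetic textures rather than X-rays, and swapping the descriptor function for CNN-extracted features would give the deep-feature variant.

```python
# Sketch of an LBP-histogram + SVM baseline on synthetic textures.
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def lbp_histogram(image, points=8, radius=1):
    """Normalised histogram of uniform LBP codes for one image."""
    lbp = local_binary_pattern(image, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

def make_image(smooth):
    """Toy texture: raw noise for one class, smoothed noise for the other."""
    img = rng.random((64, 64))
    if smooth:
        img = uniform_filter(img, size=5)
    return (img * 255).astype(np.uint8)

images = [make_image(i % 2 == 1) for i in range(40)]
labels = [i % 2 for i in range(40)]
X = np.array([lbp_histogram(img) for img in images])

clf = SVC(kernel="linear").fit(X, labels)
print("training accuracy of the toy LBP + SVM baseline:", clf.score(X, labels))
```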
76. Farhat H, Sakr GE, Kilany R. Deep learning applications in pulmonary medical imaging: recent updates and insights on COVID-19. Mach Vis Appl 2020; 31:53. [PMID: 32834523] [PMCID: PMC7386599] [DOI: 10.1007/s00138-020-01101-5]
Abstract
Shortly after deep learning algorithms were applied to image analysis, and more importantly to medical imaging, their applications increased significantly and became a trend. Likewise, deep learning (DL) applications on pulmonary medical images have achieved remarkable advances leading to promising clinical trials. Yet, the coronavirus may be the real trigger that opens the route for fast integration of DL in hospitals and medical centers. This paper reviews the development of deep learning applications in medical image analysis targeting pulmonary imaging and gives insights into contributions to COVID-19. It covers more than 160 contributions and surveys in this field, all issued between February 2017 and May 2020 inclusive, highlighting various deep learning tasks such as classification, segmentation, and detection, as well as different pulmonary pathologies like airway diseases, lung cancer, COVID-19 and other infections. It summarizes and discusses the current state-of-the-art approaches in this research domain, highlighting the challenges, especially in the current situation of the COVID-19 pandemic.
Affiliation(s)
- Hanan Farhat
- Saint Joseph University of Beirut, Mar Roukos, Beirut, Lebanon
- George E. Sakr
- Saint Joseph University of Beirut, Mar Roukos, Beirut, Lebanon
- Rima Kilany
- Saint Joseph University of Beirut, Mar Roukos, Beirut, Lebanon
77. Research of Epidemic Big Data Based on Improved Deep Convolutional Neural Network. Comput Math Methods Med 2020; 2020:3641745. [PMID: 32774444] [PMCID: PMC7396034] [DOI: 10.1155/2020/3641745]
Abstract
In recent years, with the acceleration of population aging and increasing life pressures, the proportion of chronic diseases has gradually increased. A large amount of medical data is generated during the hospitalization of diabetic patients, and discovering potential medical patterns and valuable information in these data has important practical significance and social value. In view of this, an improved deep convolutional neural network ("CNN+" for short) algorithm is proposed to predict the progression of diabetes. First, a bagging ensemble classification algorithm is used in place of the output layer function of the deep CNN, which helps the improved deep CNN constructed for the diabetic patient dataset to improve its classification accuracy. In this way, the "CNN+" algorithm combines the advantages of both the deep CNN and the bagging algorithm. On the one hand, it can extract the potential features of the dataset by using the powerful feature extraction ability of the deep CNN. On the other hand, the bagging ensemble classification algorithm can be used for feature classification, so as to improve the classification accuracy and obtain a better disease prediction effect to assist doctors in diagnosis and treatment. Experimental results show that, compared with a traditional convolutional neural network and other classification algorithms, the "CNN+" model obtains more reliable prediction results.
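The core idea, replacing the CNN's output layer with a bagging ensemble, can be sketched with scikit-learn if one treats the convolutional features as given; the synthetic vectors below merely stand in for those features and do not reproduce the paper's network.

```python
# Sketch: a bagging ensemble (decision trees by default) acting as the
# "output layer" on top of stand-in CNN-extracted feature vectors.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=800, n_features=128, n_informative=20,
                           random_state=0)        # stand-in for CNN features
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

bagging_head = BaggingClassifier(n_estimators=25, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy with a bagging output layer:", bagging_head.score(X_te, y_te))
```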
Collapse
|
78
|
A Novel Method for Detection of Tuberculosis in Chest Radiographs Using Artificial Ecosystem-Based Optimisation of Deep Neural Network Features. Symmetry (Basel) 2020. [DOI: 10.3390/sym12071146] [Citation(s) in RCA: 45] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022] Open
Abstract
Tuberculosis (TB) is an infectious disease that generally attacks the lungs and causes death for millions of people annually. Chest radiography and deep-learning-based image segmentation techniques can be utilized for TB diagnostics. Convolutional Neural Networks (CNNs) have shown advantages in medical image recognition applications as powerful models for extracting informative features from images. Here, we present a novel hybrid method for efficient classification of chest X-ray images. First, features are extracted from chest X-ray images using MobileNet, a CNN model previously trained on the ImageNet dataset. Then, to determine which of these features are the most relevant, we apply the Artificial Ecosystem-based Optimization (AEO) algorithm as a feature selector. The proposed method is applied to two public benchmark datasets (Shenzhen and Dataset 2) and achieves high performance with reduced computational time. It successfully selected only the best 25 and 19 features (for Shenzhen and Dataset 2, respectively) out of about 50,000 features extracted with MobileNet, while improving the classification accuracy (90.2% for the Shenzhen dataset and 94.1% for Dataset 2). The proposed approach outperforms other deep learning methods, and the results are the best among recently published works on both datasets.
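A minimal sketch of this kind of pipeline is shown below, assuming TensorFlow/Keras and scikit-learn; a simple univariate selector stands in for the AEO feature selector, and the images, labels, and final classifier are placeholders rather than the authors' setup.

```python
# Illustrative sketch (not the authors' code): extract deep features from chest
# X-ray images with an ImageNet-pretrained MobileNet, then keep only a small
# subset of them before classification. A univariate selector stands in for
# the Artificial Ecosystem-based Optimization (AEO) step described above.
import numpy as np
import tensorflow as tf
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neighbors import KNeighborsClassifier

images = np.random.rand(40, 224, 224, 3).astype("float32")  # placeholder CXRs
labels = np.random.randint(0, 2, size=40)                    # TB vs. normal (toy)

# MobileNet without its classification head; flattened feature maps give the
# large raw feature vector that is then pruned by the selector.
backbone = tf.keras.applications.MobileNet(
    weights="imagenet", include_top=False, pooling=None, input_shape=(224, 224, 3))
raw = backbone.predict(
    tf.keras.applications.mobilenet.preprocess_input(images), verbose=0)
raw = raw.reshape(len(images), -1)                           # tens of thousands of features

selector = SelectKBest(score_func=f_classif, k=25)           # keep 25 features, as in the paper
selected = selector.fit_transform(raw, labels)

clf = KNeighborsClassifier(n_neighbors=3).fit(selected, labels)
print("Selected feature matrix:", selected.shape, "train acc:", clf.score(selected, labels))
```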
Collapse
|
79
|
Trusculescu AA, Manolescu D, Tudorache E, Oancea C. Deep learning in interstitial lung disease-how long until daily practice. Eur Radiol 2020; 30:6285-6292. [PMID: 32537728 PMCID: PMC7554005 DOI: 10.1007/s00330-020-06986-4] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2020] [Revised: 03/28/2020] [Accepted: 05/27/2020] [Indexed: 12/19/2022]
Abstract
Interstitial lung diseases are a diverse group of disorders that involve inflammation and fibrosis of the interstitium, with overlapping clinical, radiological, and pathological features. They are an important cause of morbidity and mortality among lung diseases. This review describes computer-aided diagnosis systems centered on deep learning approaches that improve the diagnosis of interstitial lung diseases. We highlight the challenges of, and the path toward, implementation in daily practice, especially for the early diagnosis of idiopathic pulmonary fibrosis (IPF). Developing a convolutional neural network (CNN) that could be deployed on any computer station and be accessible to non-academic centers is the next frontier that needs to be crossed. In the future, early diagnosis of IPF should be possible. CNNs might not only spare human resources but also reduce the costs spent on all the social and healthcare aspects of this deadly disease. Key Points • Deep learning algorithms are used in pattern recognition of different interstitial lung diseases. • High-resolution computed tomography plays a central role in the diagnosis and management of all interstitial lung diseases, especially fibrotic lung disease. • Developing an accessible algorithm that could be deployed on any computer station and be used in non-academic centers is the next frontier in the early diagnosis of idiopathic pulmonary fibrosis.
Collapse
Affiliation(s)
- Ana Adriana Trusculescu
- Department of Pulmonology, University of Medicine and Pharmacy "Victor Babes", Timisoara, Romania
| | - Diana Manolescu
- Department of Radiology, University of Medicine and Pharmacy "Victor Babes", Eftimie Murgu Square, Number 2, Timisoara, Romania.
| | - Emanuela Tudorache
- Department of Pulmonology, University of Medicine and Pharmacy "Victor Babes", Timisoara, Romania
| | - Cristian Oancea
- Department of Pulmonology, University of Medicine and Pharmacy "Victor Babes", Timisoara, Romania
| |
Collapse
|
80
|
Schwalbe N, Wahl B. Artificial intelligence and the future of global health. Lancet 2020; 395:1579-1586. [PMID: 32416782 PMCID: PMC7255280 DOI: 10.1016/s0140-6736(20)30226-9] [Citation(s) in RCA: 270] [Impact Index Per Article: 54.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/21/2019] [Revised: 01/21/2020] [Accepted: 01/22/2020] [Indexed: 02/07/2023]
Abstract
Concurrent advances in information technology infrastructure and mobile computing power in many low and middle-income countries (LMICs) have raised hopes that artificial intelligence (AI) might help to address challenges unique to the field of global health and accelerate achievement of the health-related sustainable development goals. A series of fundamental questions have been raised about AI-driven health interventions, and whether the tools, methods, and protections traditionally used to make ethical and evidence-based decisions about new technologies can be applied to AI. Deployment of AI has already begun for a broad range of health issues common to LMICs, with interventions focused primarily on communicable diseases, including tuberculosis and malaria. Types of AI vary, but most use some form of machine learning or signal processing. Several types of machine learning methods are frequently used together, as is machine learning with other approaches, most often signal processing. AI-driven health interventions fit into four categories relevant to global health researchers: (1) diagnosis, (2) patient morbidity or mortality risk assessment, (3) disease outbreak prediction and surveillance, and (4) health policy and planning. However, much of the AI-driven intervention research in global health does not describe ethical, regulatory, or practical considerations required for widespread use or deployment at scale. Despite the field remaining nascent, AI-driven health interventions could lead to improved health outcomes in LMICs. Although some challenges of developing and deploying these interventions might not be unique to these settings, the global health community will need to work quickly to establish guidelines for development, testing, and use, and develop a user-driven research agenda to facilitate equitable and ethical use.
Collapse
Affiliation(s)
- Nina Schwalbe
- Heilbrunn Department of Population and Family Health, Columbia Mailman School of Public Health, New York, NY, USA; Spark Street Advisors, New York, NY, USA.
| | - Brian Wahl
- Spark Street Advisors, New York, NY, USA; Department of International Health, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA
| |
Collapse
|
81
|
Abubakar A, Ugail H, Bukar AM. Assessment of Human Skin Burns: A Deep Transfer Learning Approach. J Med Biol Eng 2020. [DOI: 10.1007/s40846-020-00520-z] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
Abstract
Purpose
Accurate assessment of burns is increasingly sought due to the diagnostic challenges faced with traditional visual assessment methods. While visual assessment is the most established means of evaluating burns globally, specialised dermatologists are not readily available in most locations and assessment is highly subjective. Other technical devices such as Laser Doppler Imaging are highly expensive, while the rate of burn occurrences is high in low- and middle-income countries. These factors necessitate robust and cost-effective assessment techniques that can act as an affordable alternative to human expertise.
Method
In this paper, we present a technique to discriminate skin burns using deep transfer learning, chosen because the available datasets are too small to train a model from scratch. Two dense layers and a classification layer were added to replace the existing top layers of a pre-trained ResNet50 model.
Results
The proposed approach was able to discriminate between burns and healthy skin in subjects of both ethnic groups studied (Caucasian and African). We present an extensive analysis of the effect of using both homogeneous and heterogeneous datasets when training a machine learning algorithm. The findings show that training on a homogeneous dataset produces a diagnostic model biased against under-represented racial groups, while training on heterogeneous datasets produces a robust diagnostic model. Recognition accuracies of up to 97.1% and 99.3% were achieved on the African and Caucasian datasets, respectively.
Conclusion
We concluded that it is feasible to build a robust diagnostic machine learning model for burns assessment that can be deployed to remote locations lacking access to specialized burn care, thereby aiding decision-making as quickly as possible.
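The transfer-learning recipe described in the Method section can be sketched as follows, under the assumption of a Keras ResNet50 backbone; the layer widths, optimizer, and dataset path in the comments are illustrative guesses, not the authors' configuration.

```python
# A minimal transfer-learning sketch along the lines described above: a
# pre-trained ResNet50 backbone whose top layers are replaced by two dense
# layers and a binary classification layer (burn vs. healthy skin).
import tensorflow as tf

base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, pooling="avg", input_shape=(224, 224, 3))
base.trainable = False  # freeze the pre-trained convolutional layers

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(256, activation="relu"),   # first added dense layer (width assumed)
    tf.keras.layers.Dense(128, activation="relu"),   # second added dense layer (width assumed)
    tf.keras.layers.Dense(1, activation="sigmoid"),  # classification layer
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()

# Training would then use an image dataset of burn and healthy-skin photos, e.g.:
# train_ds = tf.keras.utils.image_dataset_from_directory("skin_images/", image_size=(224, 224))
# model.fit(train_ds, epochs=10)
```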
Collapse
|
82
|
Analyzing Lung Disease Using Highly Effective Deep Learning Techniques. Healthcare (Basel) 2020; 8:healthcare8020107. [PMID: 32340344 PMCID: PMC7348888 DOI: 10.3390/healthcare8020107] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2020] [Revised: 04/14/2020] [Accepted: 04/20/2020] [Indexed: 01/14/2023] Open
Abstract
Image processing technologies and computer-aided diagnosis are medical technologies used to support the decision-making of radiologists and medical professionals who provide treatment for lung disease. These methods use chest X-ray images to diagnose and detect lung lesions, though some abnormal cases take time to become apparent. This experiment used 5810 images for training and validation with the MobileNet, DenseNet-121, and ResNet-50 models, which are popular networks for image classification, and applied a rotation technique to augment the lung disease dataset and support learning with these convolutional neural network models. The evaluation showed that DenseNet-121, combined with the state-of-the-art Mish activation function and the Nadam optimizer, performed best: accuracy, recall, precision, and F1 measures all reached 98.88%. We then used this model to test 10% of the total images, held out from training and validation, and obtained an accuracy of 98.97%, providing significant components for the development of a computer-aided diagnosis system with the best performance for the detection of lung lesions.
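A hedged sketch of such a configuration is given below: a DenseNet-121 backbone with a Mish-activated head, the Nadam optimizer, and rotation augmentation. The head size, class count, and learning rate are assumptions for illustration only, and a recent TensorFlow (with tf.keras.layers.RandomRotation) is assumed.

```python
# Sketch of the reported setup: DenseNet-121 backbone, Mish activation in the
# added head, Nadam optimizer, and rotation-based augmentation.
import tensorflow as tf

def mish(x):
    # Mish activation: x * tanh(softplus(x))
    return x * tf.math.tanh(tf.math.softplus(x))

base = tf.keras.applications.DenseNet121(
    weights="imagenet", include_top=False, pooling="avg", input_shape=(224, 224, 3))

inputs = tf.keras.Input(shape=(224, 224, 3))
x = tf.keras.layers.RandomRotation(0.1)(inputs)              # rotation augmentation
x = base(x)
x = tf.keras.layers.Dense(256, activation=mish)(x)           # Mish-activated dense layer
outputs = tf.keras.layers.Dense(3, activation="softmax")(x)  # e.g. 3 lung-disease classes (assumed)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Nadam(learning_rate=1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```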
Collapse
|
83
|
Rajaraman S, Kim I, Antani SK. Detection and visualization of abnormality in chest radiographs using modality-specific convolutional neural network ensembles. PeerJ 2020; 8:e8693. [PMID: 32211231 PMCID: PMC7083159 DOI: 10.7717/peerj.8693] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2019] [Accepted: 02/05/2020] [Indexed: 11/20/2022] Open
Abstract
Convolutional neural networks (CNNs) trained on natural images are extremely successful in image classification and localization due to their superior automated feature-extraction capability. In extending their use to biomedical recognition tasks, it is important to note that visual features of medical images tend to differ markedly from those of natural images. There are advantages to training these networks on large-scale collections of medical images from a common modality relevant to the recognition task. Further, improved generalization in transferring knowledge across similar tasks is possible when the models are trained to learn modality-specific features and then suitably repurposed for the target task. In this study, we propose modality-specific ensemble learning toward improving abnormality detection in chest X-rays (CXRs). CNN models are trained on a large-scale CXR collection to learn modality-specific features and then repurposed for detecting and localizing abnormalities. Model predictions are combined using different ensemble strategies toward reducing prediction variance and sensitivity to the training data while improving overall performance and generalization. Class-selective relevance mapping (CRM) is used to visualize the learned behavior of the individual models and their ensembles. It localizes discriminative regions of interest (ROIs) showing abnormal regions and offers an improved explanation of model predictions. It was observed that the model ensembles demonstrate superior localization performance in terms of Intersection over Union (IoU) and mean Average Precision (mAP) metrics than any individual constituent model.
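The ensemble strategies mentioned here can be illustrated with a small numerical example; the probability values below are made up and simply show how probability averaging and majority voting combine per-image predictions.

```python
# Simple sketch of two ensemble strategies (averaging and majority voting)
# applied to per-image abnormality probabilities from several CNNs. The
# probability arrays are placeholders standing in for real model outputs.
import numpy as np

# Predicted abnormality probabilities from three hypothetical CXR models
# for five images (rows: models, columns: images).
probs = np.array([
    [0.91, 0.12, 0.55, 0.80, 0.30],
    [0.85, 0.20, 0.48, 0.77, 0.42],
    [0.95, 0.08, 0.61, 0.69, 0.35],
])

# Strategy 1: average the probabilities, then threshold.
avg_pred = (probs.mean(axis=0) >= 0.5).astype(int)

# Strategy 2: majority vote on each model's thresholded decision.
votes = (probs >= 0.5).astype(int)
maj_pred = (votes.sum(axis=0) > probs.shape[0] / 2).astype(int)

print("Averaging ensemble :", avg_pred)   # [1 0 1 1 0] for these values
print("Majority-vote      :", maj_pred)
```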
Collapse
Affiliation(s)
- Sivaramakrishnan Rajaraman
- Lister Hill National Center for Biomedical Communications, National Library of Medicine, National Institutes of Health, Bethesda, MD, United States of America
| | - Incheol Kim
- Lister Hill National Center for Biomedical Communications, National Library of Medicine, National Institutes of Health, Bethesda, MD, United States of America
| | - Sameer K. Antani
- Lister Hill National Center for Biomedical Communications, National Library of Medicine, National Institutes of Health, Bethesda, MD, United States of America
| |
Collapse
|
84
|
Rajaraman S, Antani SK. Modality-specific deep learning model ensembles toward improving TB detection in chest radiographs. IEEE ACCESS : PRACTICAL INNOVATIONS, OPEN SOLUTIONS 2020; 8:27318-27326. [PMID: 32257736 PMCID: PMC7120763 DOI: 10.1109/access.2020.2971257] [Citation(s) in RCA: 46] [Impact Index Per Article: 9.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/05/2023]
Abstract
The proposed study evaluates the efficacy of knowledge transfer gained through an ensemble of modality-specific deep learning models toward improving the state-of-the-art in Tuberculosis (TB) detection. A custom convolutional neural network (CNN) and selected popular pretrained CNNs are trained to learn modality-specific features from large-scale publicly available chest x-ray (CXR) collections including (i) RSNA dataset (normal = 8851, abnormal = 17833), (ii) Pediatric pneumonia dataset (normal = 1583, abnormal = 4273), and (iii) Indiana dataset (normal = 1726, abnormal = 2378). The knowledge acquired through modality-specific learning is transferred and fine-tuned for TB detection on the publicly available Shenzhen CXR collection (normal = 326, abnormal = 336). The predictions of the best performing models are combined using different ensemble methods to demonstrate improved performance over any individual constituent model in classifying TB-infected and normal CXRs. The models are evaluated through cross-validation (n = 5) at the patient level with the aim of preventing overfitting and improving robustness and generalization. It is observed that a stacked ensemble of the top-3 retrained models demonstrates promising performance (accuracy: 0.941; 95% confidence interval (CI): [0.899, 0.985], area under the curve (AUC): 0.995; 95% CI: [0.945, 1.00]). One-way ANOVA analyses show there are no statistically significant differences in accuracy (P = .759) and AUC (P = .831) among the ensemble methods. Knowledge transferred through modality-specific learning of relevant features helped improve the classification. The ensemble model resulted in reduced prediction variance and sensitivity to training data fluctuations. Results from their combined use are superior to the state-of-the-art.
Collapse
Affiliation(s)
- Sivaramakrishnan Rajaraman
- Lister Hill National Center for Biomedical Communications, National Library of Medicine, 8600 Rockville Pike, Bethesda, MD 20894 USA
| | - Sameer K. Antani
- Lister Hill National Center for Biomedical Communications, National Library of Medicine, 8600 Rockville Pike, Bethesda, MD 20894 USA
| |
Collapse
|
85
|
Zhang T, Luo YM, Li P, Liu PZ, Du YZ, Sun P, Dong B, Xue H. Cervical precancerous lesions classification using pre-trained densely connected convolutional networks with colposcopy images. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2019.101566] [Citation(s) in RCA: 34] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/06/2023]
|
86
|
Ul Abideen Z, Ghafoor M, Munir K, Saqib M, Ullah A, Zia T, Tariq SA, Ahmed G, Zahra A. Uncertainty Assisted Robust Tuberculosis Identification With Bayesian Convolutional Neural Networks. IEEE ACCESS : PRACTICAL INNOVATIONS, OPEN SOLUTIONS 2020; 8:22812-22825. [PMID: 32391238 PMCID: PMC7176037 DOI: 10.1109/access.2020.2970023] [Citation(s) in RCA: 25] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/05/2020] [Accepted: 01/21/2020] [Indexed: 05/07/2023]
Abstract
Tuberculosis (TB) is an infectious disease that can lead to death if left untreated. TB detection involves extraction of complex TB manifestation features such as lung cavities, air space consolidation, endobronchial spread, and pleural effusions from chest X-rays (CXRs). A deep learning approach, the convolutional neural network (CNN), has the ability to learn complex features from CXR images. The main problem is that a CNN classifying CXRs with a softmax layer does not account for uncertainty; it cannot present true probabilities or differentiate confusing cases during TB detection. This paper presents a solution for TB identification using a Bayesian convolutional neural network (B-CNN). It deals with uncertain cases that have low discernibility between TB-manifested and non-TB CXRs. The proposed B-CNN-based TB identification methodology is evaluated on two TB benchmark datasets, i.e., Montgomery and Shenzhen. For training and testing of the proposed scheme we used the Google Colab platform, which provides an NVIDIA Tesla K80 with 12 GB of VRAM, a single-core 2.3 GHz Xeon processor, 12 GB of RAM, and 320 GB of disk. B-CNN achieves 96.42% and 86.46% accuracy on the two datasets, respectively, compared with state-of-the-art machine learning and CNN approaches. Moreover, B-CNN validates its results by flagging CXRs as confusing cases when the variance of its predicted outputs exceeds a certain threshold. The results demonstrate the superiority of B-CNN for the identification of TB and non-TB sample CXRs compared with its counterparts in terms of accuracy, variance of the predicted probabilities, and model uncertainty.
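One common approximation to this kind of Bayesian inference is Monte Carlo dropout; the sketch below uses it to flag high-variance ("confusing") predictions, but it is only a stand-in for the authors' B-CNN, and the network, threshold, and data are hypothetical.

```python
# Sketch of uncertainty-aware prediction via Monte Carlo dropout, a common
# approximation to Bayesian CNN inference (the authors' exact B-CNN may differ).
# CXRs whose predictive variance exceeds a threshold are flagged as "confusing".
import numpy as np
import tensorflow as tf

inputs = tf.keras.Input(shape=(128, 128, 1))
x = tf.keras.layers.Conv2D(16, 3, activation="relu")(inputs)
x = tf.keras.layers.MaxPooling2D()(x)
x = tf.keras.layers.Conv2D(32, 3, activation="relu")(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dropout(0.5)(x)          # kept active at inference time below
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)
model = tf.keras.Model(inputs, outputs)       # untrained; for illustration only

def mc_predict(model, images, n_samples=30, var_threshold=0.02):
    """Run repeated stochastic forward passes and flag high-variance cases."""
    samples = np.stack([model(images, training=True).numpy().ravel()
                        for _ in range(n_samples)])
    mean, var = samples.mean(axis=0), samples.var(axis=0)
    return mean, var, var > var_threshold     # mean prob, uncertainty, "confusing" flag

cxrs = np.random.rand(4, 128, 128, 1).astype("float32")  # placeholder CXRs
mean_p, var_p, confusing = mc_predict(model, cxrs)
print("mean prob:", mean_p, "variance:", var_p, "flagged:", confusing)
```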
Collapse
Affiliation(s)
- Zain Ul Abideen
- Department of Computer Science, COMSATS University Islamabad (CUI), Islamabad 44000, Pakistan
| | - Mubeen Ghafoor
- Department of Computer Science, COMSATS University Islamabad (CUI), Islamabad 44000, Pakistan
- FET - Computer Science and Creative Technologies, University of the West of England, Bristol BS16 1QY, U.K.
| | - Kamran Munir
- FET - Computer Science and Creative Technologies, University of the West of England, Bristol BS16 1QY, U.K.
| | - Madeeha Saqib
- Department of Computer Information Systems, College of Computer Science and Information Technology, Imam Abdulrahman Bin Faisal University, Dammam 34212, Saudi Arabia
| | - Ata Ullah
- Department of Computer Science, National University of Modern Languages (NUML), Islamabad 44000, Pakistan
| | - Tehseen Zia
- Department of Computer Science, COMSATS University Islamabad (CUI), Islamabad 44000, Pakistan
| | - Syed Ali Tariq
- Department of Computer Science, COMSATS University Islamabad (CUI), Islamabad 44000, Pakistan
| | - Ghufran Ahmed
- Department of Computer Science, National University of Computer and Emerging Sciences (NUCES), Karachi 54700, Pakistan
| | - Asma Zahra
- Department of Computer Science, COMSATS University Islamabad (CUI), Islamabad 44000, Pakistan
| |
Collapse
|
87
|
Harris M, Qi A, Jeagal L, Torabi N, Menzies D, Korobitsyn A, Pai M, Nathavitharana RR, Ahmad Khan F. A systematic review of the diagnostic accuracy of artificial intelligence-based computer programs to analyze chest x-rays for pulmonary tuberculosis. PLoS One 2019; 14:e0221339. [PMID: 31479448 PMCID: PMC6719854 DOI: 10.1371/journal.pone.0221339] [Citation(s) in RCA: 74] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2019] [Accepted: 08/05/2019] [Indexed: 12/11/2022] Open
Abstract
We undertook a systematic review of the diagnostic accuracy of artificial intelligence-based software for identification of radiologic abnormalities (computer-aided detection, or CAD) compatible with pulmonary tuberculosis on chest x-rays (CXRs). We searched four databases for articles published between January 2005 and February 2019. We summarized data on CAD type, study design, and diagnostic accuracy. We assessed risk of bias with QUADAS-2. We included 53 of the 4712 articles reviewed: 40 focused on CAD design methods (“Development” studies) and 13 focused on evaluation of CAD (“Clinical” studies). Meta-analyses were not performed due to methodological differences. Development studies were more likely to use CXR databases with greater potential for bias than Clinical studies. Areas under the receiver operating characteristic curve (median AUC [IQR]) were significantly higher in Development studies (0.88 [0.82–0.90]) versus Clinical studies (0.75 [0.66–0.87]; p = 0.004), and with deep learning (0.91 [0.88–0.99]) versus machine learning (0.82 [0.75–0.89]; p = 0.001). We conclude that CAD programs are promising, but the majority of work thus far has been on development rather than clinical evaluation. We provide concrete suggestions on what study design elements should be improved.
Collapse
Affiliation(s)
- Miriam Harris
- Department of Epidemiology and Biostatistics, McGill University, Montreal, Canada
- Department of Medicine, McGill University Health Centre, Montreal, Canada
- Department of Medicine, Boston University–Boston Medical Center, Boston, Massachusetts, United States of America
| | - Amy Qi
- Department of Medicine, McGill University Health Centre, Montreal, Canada
- Respiratory Epidemiology and Clinical Research Unit, Montreal Chest Institute & Research Institute of the McGill University Health Centre, Montreal, Canada
| | - Luke Jeagal
- Respiratory Epidemiology and Clinical Research Unit, Montreal Chest Institute & Research Institute of the McGill University Health Centre, Montreal, Canada
| | - Nazi Torabi
- St. Michael's Hospital, Li Ka Shing International Healthcare Education Centre, Toronto, Canada
| | - Dick Menzies
- Department of Epidemiology and Biostatistics, McGill University, Montreal, Canada
- Respiratory Epidemiology and Clinical Research Unit, Montreal Chest Institute & Research Institute of the McGill University Health Centre, Montreal, Canada
- McGill International TB Centre, Montreal, Canada
| | - Alexei Korobitsyn
- Laboratories, Diagnostics & Drug Resistance Global TB Programme WHO, Geneva, Switzerland
| | - Madhukar Pai
- Department of Epidemiology and Biostatistics, McGill University, Montreal, Canada
- Respiratory Epidemiology and Clinical Research Unit, Montreal Chest Institute & Research Institute of the McGill University Health Centre, Montreal, Canada
- McGill International TB Centre, Montreal, Canada
| | - Ruvandhi R. Nathavitharana
- Division of Infectious Diseases, Beth Israel Deaconess Medical Center, Boston, Massachusetts, United States of America
| | - Faiz Ahmad Khan
- Department of Epidemiology and Biostatistics, McGill University, Montreal, Canada
- Respiratory Epidemiology and Clinical Research Unit, Montreal Chest Institute & Research Institute of the McGill University Health Centre, Montreal, Canada
- McGill International TB Centre, Montreal, Canada
| |
Collapse
|
88
|
Dixit U, Mishra A, Shukla A, Tiwari R. Texture classification using convolutional neural network optimized with whale optimization algorithm. SN APPLIED SCIENCES 2019. [DOI: 10.1007/s42452-019-0678-y] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022] Open
|
89
|
Efficient Deep Network Architectures for Fast Chest X-Ray Tuberculosis Screening and Visualization. Sci Rep 2019; 9:6268. [PMID: 31000728 PMCID: PMC6472370 DOI: 10.1038/s41598-019-42557-4] [Citation(s) in RCA: 117] [Impact Index Per Article: 19.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2018] [Accepted: 03/22/2019] [Indexed: 01/23/2023] Open
Abstract
Automated diagnosis of tuberculosis (TB) from chest X-rays (CXRs) has been tackled with either hand-crafted algorithms or machine learning approaches such as support vector machines (SVMs) and convolutional neural networks (CNNs). Most deep neural networks applied to the task of tuberculosis diagnosis have been adapted from natural image classification. These models have a large number of parameters as well as high hardware requirements, which makes them prone to overfitting and harder to deploy in mobile settings. We propose a simple convolutional neural network optimized for the problem which is faster and more efficient than previous models but preserves their accuracy. Moreover, the visualization capabilities of CNNs have not been fully investigated. We test saliency maps and grad-CAMs as tuberculosis visualization methods, and discuss them from a radiological perspective.
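A minimal Grad-CAM sketch in Keras is shown below; it uses an ImageNet MobileNetV2 and its "Conv_1" layer purely as stand-ins for the paper's network, and a random array in place of a real chest X-ray.

```python
# Minimal Grad-CAM sketch for visualizing which image regions drive a CNN's
# prediction. Backbone and layer name are assumptions; any Keras CNN with a
# known final convolutional layer would work the same way.
import numpy as np
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights="imagenet")  # stand-in classifier
last_conv = "Conv_1"                                           # its last conv layer

def grad_cam(model, image, layer_name, class_index=None):
    grad_model = tf.keras.Model(model.inputs,
                                [model.get_layer(layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis])
        if class_index is None:
            class_index = int(tf.argmax(preds[0]))
        score = preds[:, class_index]
    grads = tape.gradient(score, conv_out)
    weights = tf.reduce_mean(grads, axis=(1, 2))               # pooled gradients per channel
    cam = tf.nn.relu(tf.reduce_sum(conv_out[0] * weights[0], axis=-1))
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()         # normalized heatmap

img = np.random.rand(224, 224, 3).astype("float32")            # placeholder image
heatmap = grad_cam(model, img, last_conv)
print("Grad-CAM heatmap shape:", heatmap.shape)                 # (7, 7) for this backbone
```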
Collapse
|
90
|
Govindarajan S, Swaminathan R. Analysis of Tuberculosis in Chest Radiographs for Computerized Diagnosis using Bag of Keypoint Features. J Med Syst 2019; 43:87. [DOI: 10.1007/s10916-019-1222-8] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/01/2019] [Accepted: 02/21/2019] [Indexed: 10/27/2022]
|
91
|
Lopez-Garnier S, Sheen P, Zimic M. Automatic diagnostics of tuberculosis using convolutional neural networks analysis of MODS digital images. PLoS One 2019; 14:e0212094. [PMID: 30811445 PMCID: PMC6392246 DOI: 10.1371/journal.pone.0212094] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2017] [Accepted: 01/26/2019] [Indexed: 11/23/2022] Open
Abstract
Tuberculosis is an infectious disease that causes ill health and death in millions of people each year worldwide. Timely diagnosis and treatment are key to full patient recovery. The Microscopic Observed Drug Susceptibility (MODS) assay is a test to diagnose TB infection and drug susceptibility directly from a sputum sample in 7-10 days, with low cost and high sensitivity and specificity, based on the visual recognition of specific growth cording patterns of M. tuberculosis in a broth culture. Despite its advantages, MODS is still limited in remote, low-resource settings, because it requires permanent and trained technical staff for the image-based diagnostics. Hence, it is important to develop alternative solutions based on reliable automated analysis and interpretation of MODS cultures. In this study, we trained and evaluated a convolutional neural network (CNN) for automatic interpretation of digital images of MODS cultures. The CNN was trained on a dataset of 12,510 MODS positive and negative images obtained from three different laboratories, where it achieved 96.63 +/- 0.35% accuracy, and a sensitivity and specificity ranging from 91% to 99%, when validated across held-out laboratory datasets. The model's learned features resemble visual cues used by expert diagnosticians to interpret MODS cultures, suggesting that our model may have the ability to generalize and scale. It performed robustly when validated across held-out laboratory datasets and can be improved upon with data from new laboratories. This CNN can assist laboratory personnel in low-resource settings and is a step towards facilitating automated diagnostics access to critical areas in developing countries.
Collapse
Affiliation(s)
- Santiago Lopez-Garnier
- Unidad de Bioinformática / Laboratorio de Enfermedades Infecciosas, Laboratorio de Investigación y Desarrollo, Facultad de Ciencias y Filosofía—Universidad Peruana Cayetano Heredia, Lima, Peru
- Wyss Institute for Biologically Inspired Engineering, Harvard University, Cambridge, Massachusetts, United States of America
| | - Patricia Sheen
- Unidad de Bioinformática / Laboratorio de Enfermedades Infecciosas, Laboratorio de Investigación y Desarrollo, Facultad de Ciencias y Filosofía—Universidad Peruana Cayetano Heredia, Lima, Peru
| | - Mirko Zimic
- Unidad de Bioinformática / Laboratorio de Enfermedades Infecciosas, Laboratorio de Investigación y Desarrollo, Facultad de Ciencias y Filosofía—Universidad Peruana Cayetano Heredia, Lima, Peru
| |
Collapse
|
92
|
A Novel Genetically Optimized Convolutional Neural Network for Traffic Sign Recognition: A New Benchmark on Belgium and Chinese Traffic Sign Datasets. Neural Process Lett 2019. [DOI: 10.1007/s11063-019-09991-x] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
|
93
|
Soffer S, Ben-Cohen A, Shimon O, Amitai MM, Greenspan H, Klang E. Convolutional Neural Networks for Radiologic Images: A Radiologist's Guide. Radiology 2019; 290:590-606. [PMID: 30694159 DOI: 10.1148/radiol.2018180547] [Citation(s) in RCA: 308] [Impact Index Per Article: 51.3] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Abstract
Deep learning has rapidly advanced in various fields within the past few years and has recently gained particular attention in the radiology community. This article provides an introduction to deep learning technology and presents the stages that are entailed in the design process of deep learning radiology research. In addition, the article details the results of a survey of the application of deep learning-specifically, the application of convolutional neural networks-to radiologic imaging that was focused on the following five major system organs: chest, breast, brain, musculoskeletal system, and abdomen and pelvis. The survey of the studies is followed by a discussion about current challenges and future trends and their potential implications for radiology. This article may be used as a guide for radiologists planning research in the field of radiologic image analysis using convolutional neural networks.
Collapse
Affiliation(s)
- Shelly Soffer
- From the Department of Diagnostic Imaging, Sheba Medical Center, Emek HaEla St 1, Ramat Gan, Israel (S.S., M.M.A., E.K.); Faculty of Engineering, Department of Biomedical Engineering, Medical Image Processing Laboratory, Tel Aviv University, Tel Aviv, Israel (A.B., H.G.); and Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel (S.S., O.S.)
| | - Avi Ben-Cohen
- From the Department of Diagnostic Imaging, Sheba Medical Center, Emek HaEla St 1, Ramat Gan, Israel (S.S., M.M.A., E.K.); Faculty of Engineering, Department of Biomedical Engineering, Medical Image Processing Laboratory, Tel Aviv University, Tel Aviv, Israel (A.B., H.G.); and Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel (S.S., O.S.)
| | - Orit Shimon
- From the Department of Diagnostic Imaging, Sheba Medical Center, Emek HaEla St 1, Ramat Gan, Israel (S.S., M.M.A., E.K.); Faculty of Engineering, Department of Biomedical Engineering, Medical Image Processing Laboratory, Tel Aviv University, Tel Aviv, Israel (A.B., H.G.); and Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel (S.S., O.S.)
| | - Michal Marianne Amitai
- From the Department of Diagnostic Imaging, Sheba Medical Center, Emek HaEla St 1, Ramat Gan, Israel (S.S., M.M.A., E.K.); Faculty of Engineering, Department of Biomedical Engineering, Medical Image Processing Laboratory, Tel Aviv University, Tel Aviv, Israel (A.B., H.G.); and Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel (S.S., O.S.)
| | - Hayit Greenspan
- From the Department of Diagnostic Imaging, Sheba Medical Center, Emek HaEla St 1, Ramat Gan, Israel (S.S., M.M.A., E.K.); Faculty of Engineering, Department of Biomedical Engineering, Medical Image Processing Laboratory, Tel Aviv University, Tel Aviv, Israel (A.B., H.G.); and Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel (S.S., O.S.)
| | - Eyal Klang
- From the Department of Diagnostic Imaging, Sheba Medical Center, Emek HaEla St 1, Ramat Gan, Israel (S.S., M.M.A., E.K.); Faculty of Engineering, Department of Biomedical Engineering, Medical Image Processing Laboratory, Tel Aviv University, Tel Aviv, Israel (A.B., H.G.); and Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel (S.S., O.S.)
| |
Collapse
|
94
|
Deep Learning Algorithms with Demographic Information Help to Detect Tuberculosis in Chest Radiographs in Annual Workers' Health Examination Data. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2019; 16:ijerph16020250. [PMID: 30654560 PMCID: PMC6352082 DOI: 10.3390/ijerph16020250] [Citation(s) in RCA: 43] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/27/2018] [Revised: 01/04/2019] [Accepted: 01/09/2019] [Indexed: 12/12/2022]
Abstract
We aimed to use deep learning to detect tuberculosis in chest radiographs from annual workers' health examination data and to compare the performance of convolutional neural networks (CNNs) based on images only (I-CNN) with that of CNNs also including demographic variables (D-CNN). The I-CNN and D-CNN models were trained on 1000 chest X-ray images, both positive and negative for tuberculosis. Feature extraction was conducted using VGG19, InceptionV3, ResNet50, DenseNet121, and InceptionResNetV2. Age, weight, height, and gender were recorded as demographic variables. The area under the receiver operating characteristic (ROC) curve (AUC) was calculated for model comparison. The AUC values of the D-CNN models were greater than those of the I-CNN models. The AUC values for VGG19 increased by 0.0144 (0.957 to 0.9714) in the training set, and by 0.0138 (0.9075 to 0.9213) in the test set (both p < 0.05). The D-CNN models show greater sensitivity than the I-CNN models (0.815 vs. 0.775, respectively) at the same cut-off point for the same specificity of 0.962. The sensitivity of D-CNN does not attenuate as much as that of I-CNN, even when specificity is increased by shifting the cut-off point. In conclusion, our results indicate that machine learning can facilitate the detection of tuberculosis in chest X-rays, and demographic factors can improve this process.
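A possible way to wire such an image-plus-demographics model in Keras is sketched below; the VGG19 backbone matches one of the networks listed, but the layer sizes and fusion strategy are assumptions rather than the published architecture.

```python
# Sketch of an image-plus-demographics network (the "D-CNN" idea above): CNN
# features from the radiograph are concatenated with age, weight, height, and
# gender before the final prediction. Backbone and layer sizes are assumptions.
import tensorflow as tf

image_in = tf.keras.Input(shape=(224, 224, 3), name="cxr")
demo_in = tf.keras.Input(shape=(4,), name="demographics")  # age, weight, height, gender

backbone = tf.keras.applications.VGG19(weights="imagenet", include_top=False, pooling="avg")
img_feats = backbone(image_in)

demo_feats = tf.keras.layers.Dense(16, activation="relu")(demo_in)

merged = tf.keras.layers.Concatenate()([img_feats, demo_feats])
x = tf.keras.layers.Dense(128, activation="relu")(merged)
output = tf.keras.layers.Dense(1, activation="sigmoid", name="tb_probability")(x)

model = tf.keras.Model(inputs=[image_in, demo_in], outputs=output)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=[tf.keras.metrics.AUC()])
model.summary()
# Training would pass both inputs, e.g.
# model.fit({"cxr": images, "demographics": demo}, labels, epochs=10)
```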
Collapse
|
95
|
Utilizing Pretrained Deep Learning Models for Automated Pulmonary Tuberculosis Detection Using Chest Radiography. INTELLIGENT INFORMATION AND DATABASE SYSTEMS 2019. [DOI: 10.1007/978-3-030-14802-7_34] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
|
96
|
Sahiner B, Pezeshk A, Hadjiiski LM, Wang X, Drukker K, Cha KH, Summers RM, Giger ML. Deep learning in medical imaging and radiation therapy. Med Phys 2019; 46:e1-e36. [PMID: 30367497 PMCID: PMC9560030 DOI: 10.1002/mp.13264] [Citation(s) in RCA: 398] [Impact Index Per Article: 66.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2018] [Revised: 09/18/2018] [Accepted: 10/09/2018] [Indexed: 12/15/2022] Open
Abstract
The goals of this review paper on deep learning (DL) in medical imaging and radiation therapy are to (a) summarize what has been achieved to date; (b) identify common and unique challenges, and strategies that researchers have taken to address these challenges; and (c) identify some of the promising avenues for the future both in terms of applications as well as technical innovations. We introduce the general principles of DL and convolutional neural networks, survey five major areas of application of DL in medical imaging and radiation therapy, identify common themes, discuss methods for dataset expansion, and conclude by summarizing lessons learned, remaining challenges, and future directions.
Collapse
Affiliation(s)
- Berkman Sahiner
- DIDSR/OSEL/CDRH, U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
| | - Aria Pezeshk
- DIDSR/OSEL/CDRH, U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
| | | | - Xiaosong Wang
- Imaging Biomarkers and Computer-aided Diagnosis Lab, Radiology and Imaging Sciences, NIH Clinical Center, Bethesda, MD 20892-1182, USA
| | - Karen Drukker
- Department of Radiology, University of Chicago, Chicago, IL 60637, USA
| | - Kenny H. Cha
- DIDSR/OSEL/CDRH, U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
| | - Ronald M. Summers
- Imaging Biomarkers and Computer-aided Diagnosis Lab, Radiology and Imaging Sciences, NIH Clinical Center, Bethesda, MD 20892-1182, USA
| | | |
Collapse
|
97
|
Rajaraman S, Candemir S, Xue Z, Alderson PO, Kohli M, Abuya J, Thoma GR, Antani S. A novel stacked generalization of models for improved TB detection in chest radiographs. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2018; 2018:718-721. [PMID: 30440497 PMCID: PMC11995885 DOI: 10.1109/embc.2018.8512337] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Chest X-ray (CXR) analysis is a common part of the protocol for confirming active pulmonary tuberculosis (TB). However, many TB-endemic regions are severely resource-constrained in radiological services, impairing timely detection and treatment. Computer-aided diagnosis (CADx) tools can supplement decision-making while simultaneously addressing the gap in expert radiological interpretation during mobile field screening. These tools use hand-engineered and/or convolutional neural network (CNN)-computed image features. CNNs, a class of deep learning (DL) models, have gained research prominence in visual recognition. Ensemble learning has been shown to have an inherent advantage in constructing non-linear decision functions and improving visual recognition. We create a stacking of classifiers with hand-engineered and CNN features toward improving TB detection in CXRs. The results obtained are highly promising and superior to the state-of-the-art.
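A generic stacked-generalization sketch, assuming scikit-learn and pre-extracted feature vectors, is given below; the base learners, meta-learner, and random features are illustrative and not the authors' exact configuration.

```python
# Illustrative stacking sketch: base classifiers trained on (pre-extracted)
# hand-engineered and CNN feature vectors, combined by a logistic-regression
# meta-learner. Feature arrays are random placeholders for real CXR features.
import numpy as np
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(0)
hand_feats = rng.random((200, 30))          # e.g. texture/shape descriptors
cnn_feats = rng.random((200, 128))          # e.g. pretrained-CNN embeddings
X = np.hstack([hand_feats, cnn_feats])      # concatenated feature representation
y = rng.integers(0, 2, size=200)            # TB vs. normal labels (toy)

stack = StackingClassifier(
    estimators=[
        ("svm", SVC(probability=True)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ],
    final_estimator=LogisticRegression(),   # meta-learner on base predictions
    cv=5,
)
stack.fit(X, y)
print("Stacked ensemble training accuracy:", stack.score(X, y))
```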
Collapse
|
98
|
Santosh KC, Antani S. Automated Chest X-Ray Screening: Can Lung Region Symmetry Help Detect Pulmonary Abnormalities? IEEE TRANSACTIONS ON MEDICAL IMAGING 2018; 37:1168-1177. [PMID: 29727280 DOI: 10.1109/tmi.2017.2775636] [Citation(s) in RCA: 56] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
Our primary motivator is the need for screening HIV+ populations in resource-constrained regions for exposure to tuberculosis, using posteroanterior chest radiographs (CXRs). The proposed method is motivated by the observation that radiological examinations routinely conduct bilateral comparisons of the lung field. In addition, abnormal CXRs tend to exhibit changes in lung shape, size, and content (textures), and, overall, in the reflection symmetry between the lungs. We analyze the lung region symmetry using multi-scale shape features and edge plus texture features. Shape features exploit local and global representations of the lung regions, while edge and texture features capture internal content, including spatial arrangements of the structures. For classification, we perform a voting-based combination of three different classifiers: Bayesian network, multilayer perceptron neural network, and random forest. We used three CXR benchmark collections made available by the U.S. National Library of Medicine and the National Institute of Tuberculosis and Respiratory Diseases, India, and achieved a maximum abnormality detection accuracy (ACC) of 91.00% and area under the ROC curve (AUC) of 0.96. The proposed method outperforms the previously reported methods by more than 5% in ACC and 3% in AUC.
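The voting-based combination can be approximated with scikit-learn as sketched below; since scikit-learn has no Bayesian-network classifier, Gaussian naive Bayes stands in for it, and the feature matrix is a random placeholder.

```python
# Sketch of a voting-based classifier combination. Gaussian naive Bayes stands
# in for the Bayesian network, alongside a multilayer perceptron and a random
# forest; the symmetry/shape/texture features are random placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.random((300, 40))                   # multi-scale shape + edge/texture features (toy)
y = rng.integers(0, 2, size=300)            # abnormal vs. normal CXR labels (toy)

voter = VotingClassifier(
    estimators=[
        ("nb", GaussianNB()),                                   # stand-in for Bayesian network
        ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=1)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=1)),
    ],
    voting="soft",                           # average predicted probabilities
)
voter.fit(X, y)
print("Voting ensemble training accuracy:", voter.score(X, y))
```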
Collapse
|
99
|
Abidin AZ, Deng B, DSouza AM, Nagarajan MB, Coan P, Wismüller A. Deep transfer learning for characterizing chondrocyte patterns in phase contrast X-Ray computed tomography images of the human patellar cartilage. Comput Biol Med 2018; 95:24-33. [PMID: 29433038 PMCID: PMC5869140 DOI: 10.1016/j.compbiomed.2018.01.008] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2017] [Revised: 01/22/2018] [Accepted: 01/23/2018] [Indexed: 10/18/2022]
Abstract
Phase contrast X-ray computed tomography (PCI-CT) has been demonstrated to be effective for visualization of the human cartilage matrix at micrometer resolution, thereby capturing osteoarthritis-induced changes to chondrocyte organization. This study aims to systematically assess the efficacy of deep transfer learning methods for classifying between healthy and diseased tissue patterns. We extracted features from two different convolutional neural network architectures, CaffeNet and Inception-v3, for characterizing such patterns. These features were quantitatively evaluated in a classification task, measured by the area under the Receiver Operating Characteristic (ROC) curve (AUC), and qualitatively visualized through a dimensionality reduction approach, t-Distributed Stochastic Neighbor Embedding (t-SNE). The best classification performance for CaffeNet was observed when using features from the last convolutional layer and the last fully connected layer (AUCs > 0.91). Meanwhile, off-the-shelf features from Inception-v3 produced similar classification performance (AUC > 0.95). Visualization of features from these layers further confirmed adequate characterization of chondrocyte patterns for reliably distinguishing between healthy and osteoarthritic tissue classes. Such techniques can potentially be used for detecting the presence of osteoarthritis-related changes in the human patellar cartilage.
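A simplified version of this transfer-feature workflow, assuming Keras Inception-v3 features, a logistic-regression classifier, and scikit-learn's t-SNE, is sketched below; the patches and labels are random placeholders rather than PCI-CT data.

```python
# Sketch of an off-the-shelf transfer-feature pipeline: Inception-v3 features
# for small image patches, evaluated with ROC AUC and visualized with t-SNE.
import numpy as np
import tensorflow as tf
from sklearn.linear_model import LogisticRegression
from sklearn.manifold import TSNE
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

patches = np.random.rand(60, 299, 299, 3).astype("float32")   # ROI stand-ins
labels = np.random.randint(0, 2, size=60)                      # healthy vs. osteoarthritic (toy)

backbone = tf.keras.applications.InceptionV3(weights="imagenet", include_top=False, pooling="avg")
feats = backbone.predict(tf.keras.applications.inception_v3.preprocess_input(patches), verbose=0)

X_tr, X_te, y_tr, y_te = train_test_split(feats, labels, test_size=0.3,
                                          stratify=labels, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("ROC AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))

# 2-D embedding of the deep features for qualitative inspection.
embedding = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(feats)
print("t-SNE embedding shape:", embedding.shape)
```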
Collapse
Affiliation(s)
- Anas Z Abidin
- Department of Biomedical Engineering, University of Rochester Medical Center, Rochester, NY, USA.
| | - Botao Deng
- Department of Electrical Engineering, University of Rochester Medical Center, Rochester, NY, USA
| | - Adora M DSouza
- Department of Electrical Engineering, University of Rochester Medical Center, Rochester, NY, USA
| | - Mahesh B Nagarajan
- Department of Radiological Sciences, University of California Los Angeles, Los Angeles, USA
| | - Paola Coan
- European Synchrotron Radiation Facility, Grenoble, France; Faculty of Medicine and Institute of Clinical Radiology, Ludwig Maximilians University, Munich, Germany
| | - Axel Wismüller
- Department of Biomedical Engineering, University of Rochester Medical Center, Rochester, NY, USA; Department of Imaging Sciences, University of Rochester Medical Center, Rochester, NY, USA; Department of Electrical Engineering, University of Rochester Medical Center, Rochester, NY, USA; Faculty of Medicine and Institute of Clinical Radiology, Ludwig Maximilians University, Munich, Germany
| |
Collapse
|
100
|
Disease Diagnosis in Smart Healthcare: Innovation, Technologies and Applications. SUSTAINABILITY 2017. [DOI: 10.3390/su9122309] [Citation(s) in RCA: 72] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
|