1. Isleem UN, Zaidat B, Ren R, Geng EA, Burapachaisri A, Tang JE, Kim JS, Cho SK. Can generative artificial intelligence pass the orthopaedic board examination? J Orthop 2024;53:27-33. [PMID: 38450060] [PMCID: PMC10912220] [DOI: 10.1016/j.jor.2023.10.026]
Abstract
Background Resident training programs in the US use the Orthopaedic In-Training Examination (OITE) developed by the American Academy of Orthopaedic Surgeons (AAOS) to assess the current knowledge of their residents and to identify residents at risk of failing the American Board of Orthopaedic Surgery (ABOS) examination. Optimal strategies for OITE preparation are constantly being explored, and there may be a role for Large Language Models (LLMs) in orthopaedic resident education. ChatGPT, an LLM launched in late 2022, has demonstrated the ability to produce accurate, detailed answers, potentially enabling it to aid in medical education and clinical decision-making. The purpose of this study is to evaluate the performance of ChatGPT on Orthopaedic In-Training Examinations using Self-Assessment Examination (SAE) questions from the AAOS database and approved literature as a proxy for the Orthopaedic Board Examination. Methods 301 SAE questions from the AAOS database and associated AAOS literature were input into ChatGPT's interface in a question and multiple-choice format, and the answers were then analyzed to determine which answer choice was selected. A new chat was used for every question. All answers were recorded, categorized, and compared to the answers given by the OITE and SAE exams, noting whether the answer was right or wrong. Results Of the 301 questions asked, ChatGPT correctly answered 183 (60.8%). The subjects with the highest percentage of correct answers were basic science (81%), oncology (72.7%), shoulder and elbow (71.9%), and sports (71.4%). The questions were further subdivided into three groups: management, diagnosis, and knowledge recall. Of 86 management questions, 47 were correct (54.7%); of 45 diagnosis questions, 32 were correct (71.7%); and of 168 knowledge recall questions, 102 were correct (60.7%).
Conclusions ChatGPT has the potential to provide orthopedic educators and trainees with accurate clinical conclusions for the majority of board-style questions, although its reasoning should be carefully analyzed for accuracy and clinical validity. As such, its usefulness in a clinical educational context is currently limited but rapidly evolving. Clinical relevance ChatGPT can access a multitude of medical data and may help provide accurate answers to clinical questions.
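The per-subject and per-question-type breakdowns reported above are simple grouped tallies. A minimal sketch of that bookkeeping, using hypothetical records rather than the study's data:

```python
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, is_correct) pairs -> {group: fraction correct}."""
    totals = defaultdict(int)
    correct = defaultdict(int)
    for group, ok in records:
        totals[group] += 1
        correct[group] += int(ok)
    return {g: correct[g] / totals[g] for g in totals}

# Hypothetical example records, not the study's data
records = [("diagnosis", True), ("diagnosis", False),
           ("management", True), ("management", True)]
acc = accuracy_by_group(records)
```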
Affiliation(s)
- Ula N. Isleem, Bashar Zaidat, Renee Ren, Eric A. Geng, Aonnicha Burapachaisri, Justin E. Tang, Jun S. Kim, Samuel K. Cho: Department of Orthopaedic Surgery, Icahn School of Medicine at Mount Sinai, New York, NY, USA
2. Wang L, Wang Q, Wang X, Ma Y, Zhang L, Liu M. Triplet-constrained deep hashing for chest X-ray image retrieval in COVID-19 assessment. Neural Netw 2024;173:106182. [PMID: 38387203] [DOI: 10.1016/j.neunet.2024.106182]
Abstract
Radiology images of the chest, such as computer tomography scans and X-rays, have been prominently used in computer-aided COVID-19 analysis. Learning-based radiology image retrieval has attracted increasing attention recently, which generally involves image feature extraction and finding matches in extensive image databases based on query images. Many deep hashing methods have been developed for chest radiology image search due to the high efficiency of retrieval using hash codes. However, they often overlook the complex triple associations between images; that is, images belonging to the same category tend to share similar characteristics and vice versa. To this end, we develop a triplet-constrained deep hashing (TCDH) framework for chest radiology image retrieval to facilitate automated analysis of COVID-19. The TCDH consists of two phases, including (a) feature extraction and (b) image retrieval. For feature extraction, we have introduced a triplet constraint and an image reconstruction task to enhance discriminative ability of learned features, and these features are then converted into binary hash codes to capture semantic information. Specifically, the triplet constraint is designed to pull closer samples within the same category and push apart samples from different categories. Additionally, an auxiliary image reconstruction task is employed during feature extraction to help effectively capture anatomical structures of images. For image retrieval, we utilize learned hash codes to conduct searches for medical images. Extensive experiments on 30,386 chest X-ray images demonstrate the superiority of the proposed method over several state-of-the-art approaches in automated image search. The code is now available online.
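The triplet constraint described in this abstract (pulling same-category samples together and pushing different-category samples apart) is conventionally realized as a margin-based triplet loss. A minimal NumPy sketch of such a loss, not the authors' TCDH implementation:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Margin-based triplet loss on feature vectors:
    max(0, ||a-p||^2 - ||a-n||^2 + margin)."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])
p = np.array([1.0, 0.0])   # same category: squared distance 1
n = np.array([3.0, 0.0])   # different category: squared distance 9
loss = triplet_loss(a, p, n)  # 1 - 9 + 1 < 0, so the loss clamps to 0
```

Minimizing this quantity over many (anchor, positive, negative) triples is what "pulls closer" and "pushes apart" the learned features before they are binarized into hash codes.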
Affiliation(s)
- Linmin Wang, Xiaochuan Wang, Yunling Ma: School of Mathematics Science, Liaocheng University, Liaocheng, Shandong, 252000, China
- Qianqian Wang, Mingxia Liu: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA
- Limei Zhang: School of Computer Science and Technology, Shandong Jianzhu University, Jinan, Shandong, 250101, China
3. Rueckel J, Huemmer C, Shahidi C, Buizza G, Hoppe BF, Liebig T, Ricke J, Rudolph J, Sabel BO. Artificial Intelligence to Assess Tracheal Tubes and Central Venous Catheters in Chest Radiographs Using an Algorithmic Approach With Adjustable Positioning Definitions. Invest Radiol 2024;59:306-313. [PMID: 37682731] [DOI: 10.1097/rli.0000000000001018]
Abstract
PURPOSE To develop and validate an artificial intelligence algorithm for the positioning assessment of tracheal tubes (TTs) and central venous catheters (CVCs) in supine chest radiographs (SCXRs) by using an algorithm approach allowing for adjustable definitions of intended device positioning. MATERIALS AND METHODS Positioning quality of CVCs and TTs is evaluated by spatially correlating the respective tip positions with anatomical structures. For CVC analysis, a configurable region of interest is defined to approximate the expected region of well-positioned CVC tips from segmentations of anatomical landmarks. The CVC/TT information is estimated by introducing a new multitask neural network architecture for jointly performing type/existence classification, course segmentation, and tip detection. Validation data consisted of 589 SCXRs that have been radiologically annotated for inserted TTs/CVCs, including an experts' categorical positioning assessment (reading 1). In-image positions of algorithm-detected TT/CVC tips could be corrected using a validation software tool (reading 2) that finally allowed for localization accuracy quantification. Algorithmic detection of images with misplaced devices (reading 1 as reference standard) was quantified by receiver operating characteristics. RESULTS Supine chest radiographs were correctly classified according to inserted TTs/CVCs in 100%/98% of the cases, and the medical device tips were spatially localized with high accuracy: corrections of less than 3 mm were needed in >86% (TTs) and 77% (CVCs) of the cases. Chest radiographs with malpositioned devices were detected with areas under the curve of >0.98 (TTs), >0.96 (CVCs with accidental vessel turnover), and >0.93 (when suboptimal CVC insertion length was also considered).
The receiver operating characteristic limitations regarding CVC assessment were mainly caused by limitations of the applied CVC position definitions (region of interest derived from anatomical landmarks), not by algorithmic spatial detection inaccuracies. CONCLUSIONS The TT and CVC tips were accurately localized in SCXRs by the presented algorithms, but triaging applications for CVC positioning assessment still suffer from the vague definition of optimal CVC positioning. Our algorithm, however, allows these criteria to be adjusted, theoretically enabling it to meet user-specific or patient-subgroup requirements. Besides CVC tip analysis, future work should also include specific course analysis for accidental vessel turnover detection.
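The areas under the curve reported above can be computed from raw scores and binary labels without plotting the ROC at all, via the rank statistic: AUC equals the probability that a randomly chosen positive scores above a randomly chosen negative. A small illustrative sketch, unrelated to the authors' code:

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: fraction of (positive, negative)
    pairs in which the positive sample scores higher (ties count half)."""
    labels = np.asarray(labels, bool)
    scores = np.asarray(scores, float)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical malposition scores: two misplaced (label 1), two well-placed
auc = roc_auc([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.2])  # 3 of 4 pairs ranked correctly
```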
Affiliation(s)
- Johannes Rueckel et al.: Department of Radiology, University Hospital, LMU Munich, Munich, Germany (J.Rueckel, C.S., B.F.H., J.Ricke, J.Rudolph, B.O.S.); Institute of Neuroradiology, University Hospital, LMU Munich, Munich, Germany (J.Rueckel, T.L.); XP Technology and Innovation, Siemens Healthcare GmbH, Forchheim, Germany (C.H., G.B.)
4. Sorace L, Raju N, O'Shaughnessy J, Kachel S, Jansz K, Yang N, Lim RP. Assessment of inspiration and technical quality in anteroposterior thoracic radiographs using machine learning. Radiography (Lond) 2024;30:107-115. [PMID: 37918335] [DOI: 10.1016/j.radi.2023.10.014]
Abstract
INTRODUCTION Chest radiographs are the most performed radiographic procedure, but suboptimal technical factors can impact clinical interpretation. A deep learning model was developed to assess technical and inspiratory adequacy of anteroposterior chest radiographs. METHODS Adult anteroposterior chest radiographs (n = 2375) were assessed for technical adequacy, and if otherwise technically adequate, for adequacy of inspiration. Images were labelled by an experienced radiologist with one of three ground truth labels: inadequate technique (n = 605, 25.5 %), adequate inspiration (n = 900, 37.9 %), and inadequate inspiration (n = 870, 36.6 %). A convolutional neural network was then iteratively trained to predict these labels and evaluated using recall, precision, F1 and micro-F1, and Gradient-weighted Class Activation Mapping analysis on a hold-out test set. Impact of kyphosis on model accuracy was assessed. RESULTS The model performed best for radiographs with adequate technique, and worst for images with inadequate technique. Recall was highest (89 %) for radiographs with both adequate technique and inspiration, with recall of 81 % for images with adequate technique and inadequate inspiration, and 60 % for images with inadequate technique, although precision was highest (85 %) for this category. Per-class F1 was 80 %, 81 % and 70 % for adequate inspiration, inadequate inspiration, and inadequate technique respectively. Weighted F1 and Micro F1 scores were 78 %. Presence or absence of kyphosis had no significant impact on model accuracy in images with adequate technique. CONCLUSION This study explores the promising performance of a machine learning algorithm for assessment of inspiratory adequacy and overall technical adequacy for anteroposterior chest radiograph acquisition. IMPLICATIONS FOR PRACTICE With further refinement, machine learning can contribute to education and quality improvement in radiology departments.
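Per-class precision, recall, and F1, along with micro-F1, the metrics used above, all derive from one confusion matrix. A short sketch with a hypothetical two-class matrix (the study itself used three classes):

```python
import numpy as np

def per_class_f1(cm):
    """cm[i, j] = count of true class i predicted as class j.
    Returns (precision, recall, f1) arrays, one entry per class."""
    cm = np.asarray(cm, float)
    tp = np.diag(cm)
    precision = tp / cm.sum(axis=0)   # column sums: all predictions of a class
    recall = tp / cm.sum(axis=1)      # row sums: all true members of a class
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

def micro_f1(cm):
    """Micro-averaged F1; for single-label classification this equals accuracy."""
    cm = np.asarray(cm, float)
    return np.diag(cm).sum() / cm.sum()

# Hypothetical 2-class confusion matrix, not the study's results
cm = [[8, 2],
      [1, 9]]
p, r, f = per_class_f1(cm)
```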
Affiliation(s)
- L Sorace, N Raju, J O'Shaughnessy, K Jansz: Department of Radiology, Austin Hospital, Heidelberg, Australia
- S Kachel: Department of Radiology, Austin Hospital, Heidelberg, Australia; The University of Melbourne, Parkville, Australia; Columbia University, New York, NY, USA
- N Yang, R P Lim: Department of Radiology, Austin Hospital, Heidelberg, Australia; The University of Melbourne, Parkville, Australia
5. Hadi MU, Qureshi R, Ahmed A, Iftikhar N. A lightweight CORONA-NET for COVID-19 detection in X-ray images. Expert Syst Appl 2023;225:120023. [PMID: 37063778] [PMCID: PMC10088342] [DOI: 10.1016/j.eswa.2023.120023]
Abstract
Since December 2019, COVID-19 has posed a serious threat to human life. Even with the advancement of vaccination programs around the globe, the ability to diagnose COVID-19 quickly and with minimal logistics remains of foremost importance. Consequently, the fastest diagnostic option to stop COVID-19 from spreading, especially among senior patients, is an automated detection system. This study provides a lightweight deep learning method, called CORONA-NET, that incorporates a convolutional neural network (CNN), discrete wavelet transform (DWT), and long short-term memory (LSTM) for diagnosing COVID-19 from chest X-ray images. In this system, deep feature extraction is performed by the CNN, the feature vector is reduced yet strengthened by the DWT, and the resulting features are classified by the LSTM for prediction. The dataset included 3000 X-rays, 1000 of which were COVID-19 images obtained locally. Within minutes of the test, the proposed platform's prototype can accurately detect COVID-19 patients. The proposed method achieves state-of-the-art performance in comparison with existing deep learning methods. We hope that the suggested method will hasten clinical diagnosis and may be used for patients in remote areas where clinical labs are not easily accessible due to a lack of resources, location, or other factors.
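The DWT step described above, which shrinks the CNN feature vector while preserving its salient structure, can be illustrated with one level of the Haar transform: it splits a signal into half-length approximation and detail coefficients, and keeping only the approximation halves the feature length. A toy NumPy sketch assuming an even-length vector, not the CORONA-NET code:

```python
import numpy as np

def haar_dwt_level(x):
    """One level of the Haar DWT: returns (approximation, detail),
    each half the input length."""
    x = np.asarray(x, float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # pairwise scaled sums
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # pairwise scaled differences
    return approx, detail

features = np.array([4.0, 2.0, 5.0, 7.0])
approx, detail = haar_dwt_level(features)  # keep approx as the reduced feature
```

The transform is invertible (the original vector is recoverable from approximation plus detail), which is why the reduction loses little information.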
Affiliation(s)
- Muhammad Usman Hadi: Nanotechnology and Integrated Bio-Engineering Centre (NIBEC), School of Engineering, Ulster University, BT15 1AP Belfast, UK
- Rizwan Qureshi: Department of Imaging Physics, MD Anderson Cancer Center, The University of Texas, Houston, TX 77030, USA
- Ayesha Ahmed: Department of Radiology, Aalborg University Hospital, Aalborg 9000, Denmark
- Nadeem Iftikhar: University College of Northern Denmark, Aalborg 9200, Denmark
6. Saboo YS, Kapse S, Prasanna P. Convolutional Neural Networks (CNNs) for Pneumonia Classification on Pediatric Chest Radiographs. Cureus 2023;15:e44130. [PMID: 37753018] [PMCID: PMC10518240] [DOI: 10.7759/cureus.44130]
Abstract
BACKGROUND Pneumonia is an infectious disease that is especially harmful to those with weak immune systems, such as children under the age of 5. While radiologists' diagnosis of pediatric pneumonia on chest radiographs (CXRs) is often accurate, subtle findings can be missed due to the subjective nature of the diagnosis process. Artificial intelligence (AI) techniques, such as convolutional neural networks (CNNs), can help make the process more objective and precise. However, off-the-shelf CNNs may perform poorly if they are not tuned to their appropriate hyperparameters. Our study aimed to identify the CNNs and their hyperparameter combinations (dropout, batch size, and optimizer) that optimize model performance. METHODOLOGY Sixty models based on five CNNs (VGG 16, VGG 19, DenseNet 121, DenseNet 169, and InceptionResNet V2) and 12 hyperparameter combinations were tested. Adam, Root Mean Squared Propagation (RmsProp), and Mini-Batch Stochastic Gradient Descent (SGD) optimizers were used. Two batch sizes, 32 and 64, were utilized. A dropout rate of either 0.5 or 0.7 was used in all dropout layers. We used a deidentified CXR dataset of 4200 pneumonia (Figure 1a) and 1600 normal images (Figure 1b). Seventy percent of the CXRs in the dataset were used for training the model, 20% were used for validating the model, and 10% were used for testing the model. All CNNs were trained first on the ImageNet dataset. They were then trained, with frozen weights, on the CXR-containing dataset. Results: Among the 60 models, VGG-19 (dropout of 0.5, batch size of 32, and Adam optimizer) was the most accurate. This model achieved an accuracy of 87.9%. A dropout of 0.5 consistently gave higher accuracy, area under the receiver operating characteristics curve (AUROC), and area under the precision-recall curve (AUPRC) compared to a dropout of 0.7. The CNNs InceptionResNet V2, DenseNet 169, VGG 16, and VGG 19 significantly outperformed the DenseNet121 CNN in accuracy and AUROC. 
The Adam and RmsProp optimizer had improved AUROC and AUPRC compared to the SGD optimizer. The batch size had no statistically significant effect on model performance. CONCLUSION We recommend using low dropout rates (0.5) and RmsProp or Adam optimizer for pneumonia-detecting CNNs. Additionally, we discourage using the DenseNet121 CNN when other CNNs are available. Finally, the batch size may be set to any value, dependent on computational resources.
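The 60 models above are just the Cartesian product of 5 architectures with 12 hyperparameter combinations (3 optimizers x 2 batch sizes x 2 dropout rates). A sketch of enumerating that grid:

```python
from itertools import product

cnns = ["VGG16", "VGG19", "DenseNet121", "DenseNet169", "InceptionResNetV2"]
optimizers = ["Adam", "RMSProp", "SGD"]
batch_sizes = [32, 64]
dropouts = [0.5, 0.7]

# 3 optimizers x 2 batch sizes x 2 dropouts = 12 hyperparameter combinations
hyper_combos = list(product(optimizers, batch_sizes, dropouts))
# 5 CNNs x 12 combinations = 60 candidate models
models = list(product(cnns, hyper_combos))
```

Each `(cnn, (optimizer, batch_size, dropout))` tuple would then be trained and evaluated in turn, as the study describes.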
Affiliation(s)
- Yash S Saboo: Radiology, The University of Texas Health Science Center at San Antonio, San Antonio, USA
- Saarthak Kapse, Prateek Prasanna: Biomedical Informatics, Stony Brook University, Stony Brook, USA
7. Ramsey WA, O'Neil CF, Saberi RA, Meece MS, Gilna GP, Kaufman JI, Lieberman HM, Lineen EB, Meizoso JP, Pizano LR, Satahoo SS, Danton GH, Proctor KG, Namias N. Examining the Definition of Ventilator-Associated Pneumonia in the Trauma Setting: A Single-Center Analysis. Surg Infect (Larchmt) 2023;24:322-326. [PMID: 36944154] [DOI: 10.1089/sur.2022.272]
Abstract
Background: Ventilator-associated pneumonia (VAP) is defined by the American College of Surgeons Trauma Quality Improvement Program (ACS TQIP) using laboratory findings, pathophysiologic signs/symptoms, and imaging criteria. However, many critically ill trauma patients meet the non-specific laboratory and sign/symptom thresholds for VAP, so the TQIP designation of VAP depends heavily upon imaging evidence. We hypothesized that physician opinions widely vary regarding chest radiograph findings significant for VAP. Patients and Methods: The TQIP Spring 2021 Benchmark Report (BR) was used to identify 14 patients with VAP at an academic Level 1 Trauma Center. Critically ill trauma patients (n = 7) who spent at least four days intubated and met TQIP's laboratory and sign/symptom thresholds for VAP but did not appear as VAPs on the BR comprised the control group. For each deidentified patient, four successive chest radiographic images were compiled and arranged chronologically. Cases and controls were randomly arranged in digital format. Blinded physicians (n = 27) were asked to identify patients with VAP based solely on imaging evidence. Results: Radiographic evidence of VAP was highly subjective (Krippendorff α = 0.134). Among physicians of the same job description, inter-rater reliability remained low (α = 0.137 for trauma attending physicians; α = 0.141 for trauma fellows; α = 0.271 for radiologists). When majority judgment was compared to the TQIP BR, there was disagreement between the two tests (Cohen κ = -0.071; sensitivity, 64.3%; specificity, 28.6%). Conclusions: Current definitions of VAP rely on subjective imaging interpretation and ignore the reality that there are numerous explanations for opacities on CXR. The inconsistency of physicians' imaging interpretation and protean physiologic findings for VAP in trauma patients should preclude the current definition of VAP from being used as a quality improvement metric in TQIP.
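Cohen's kappa, used above to compare the majority judgment against the TQIP Benchmark Report, corrects observed agreement for the agreement expected by chance: kappa = (p_o - p_e) / (1 - p_e). A minimal sketch for two binary raters, with hypothetical ratings:

```python
import numpy as np

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa for two binary raters: chance-corrected agreement."""
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    po = np.mean(a == b)  # observed agreement
    # expected agreement if the two raters labeled independently
    pe = sum(np.mean(a == c) * np.mean(b == c) for c in (0, 1))
    return (po - pe) / (1 - pe)

# Hypothetical ratings where agreement is no better than chance
kappa = cohen_kappa([1, 1, 0, 0], [1, 0, 0, 1])
```

Krippendorff's alpha, the other statistic reported, generalizes this chance-corrected idea to more than two raters and to missing data.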
Affiliation(s)
- Walter A Ramsey, Christopher F O'Neil, Rebecca A Saberi, Matthew S Meece, Gareth P Gilna, Joyce I Kaufman, Howard M Lieberman, Edward B Lineen, Jonathan P Meizoso, Louis R Pizano, Shevonne S Satahoo, Kenneth G Proctor, Nicholas Namias: DeWitt Daughtry Family Department of Surgery, University of Miami Miller School of Medicine, and Ryder Trauma Center, Jackson Memorial Hospital, Miami, Florida, USA
- Gary H Danton: Department of Radiology, University of Miami Miller School of Medicine, Miami, Florida, USA
8. Rehman A, Khan A, Fatima G, Naz S, Razzak I. Review on chest pathogies detection systems using deep learning techniques. Artif Intell Rev 2023;56:1-47. [PMID: 37362896] [PMCID: PMC10027283] [DOI: 10.1007/s10462-023-10457-9]
Abstract
Chest radiography is the standard and most affordable way to diagnose, analyze, and examine different thoracic and chest diseases. Typically, the radiograph is examined by an expert radiologist or physician to decide whether a particular anomaly exists. Moreover, computer-aided methods are used to assist radiologists and make the analysis process accurate, fast, and more automated. A tremendous improvement in automatic chest pathology detection and analysis can be observed with the emergence of deep learning. This survey aims to review, technically evaluate, and synthesize the different computer-aided chest pathology detection systems. The state of the art in single- and multi-pathology detection systems published in the last five years is thoroughly discussed. A taxonomy of image acquisition, dataset preprocessing, feature extraction, and deep learning models is presented. The mathematical concepts related to feature extraction model architectures are discussed, and the different articles are compared based on their contributions, datasets, methods used, and the results achieved. The article ends with the main findings, current trends, challenges, and future recommendations.
Affiliation(s)
- Arshia Rehman, Ahmad Khan: COMSATS University Islamabad, Abbottabad Campus, Abbottabad, Pakistan
- Gohar Fatima: The Islamia University of Bahawalpur, Bahawal Nagar Campus, Bahawal Nagar, Pakistan
- Saeeda Naz: Govt Girls Post Graduate College No.1, Abbottabad, Pakistan
- Imran Razzak: School of Computer Science and Engineering, University of New South Wales, Sydney, Australia
9. Niehoff JH, Kalaitzidis J, Kroeger JR, Schoenbeck D, Borggrefe J, Michael AE. Evaluation of the clinical performance of an AI-based application for the automated analysis of chest X-rays. Sci Rep 2023;13:3680. [PMID: 36872333] [PMCID: PMC9985819] [DOI: 10.1038/s41598-023-30521-2]
Abstract
The AI-Rad Companion Chest X-ray (AI-Rad, Siemens Healthineers) is an artificial-intelligence based application for the analysis of chest X-rays. The purpose of the present study is to evaluate the performance of the AI-Rad. In total, 499 radiographs were retrospectively included. Radiographs were independently evaluated by radiologists and the AI-Rad. Findings indicated by the AI-Rad and findings described in the written report (WR) were compared to the findings of a ground truth reading (consensus decision of two radiologists after assessing additional radiographs and CT scans). The AI-Rad can offer superior sensitivity for the detection of lung lesions (0.83 versus 0.52), consolidations (0.88 versus 0.78) and atelectasis (0.54 versus 0.43) compared to the WR. However, the superior sensitivity is accompanied by higher false-detection-rates. The sensitivity of the AI-Rad for the detection of pleural effusions is lower compared to the WR (0.74 versus 0.88). The negative-predictive-values (NPV) of the AI-Rad for the detection of all pre-defined findings are on a high level and comparable to the WR. The seemingly advantageous high sensitivity of the AI-Rad is partially offset by the disadvantage of a high false-detection-rate. At the current stage of development, therefore, the high NPVs may be the greatest benefit of the AI-Rad giving radiologists the possibility to re-insure their own negative search for pathologies and thus boosting their confidence in their reports.
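Sensitivity and negative predictive value, the headline metrics above, follow directly from the binary confusion counts. A tiny sketch with hypothetical counts, not the study's data:

```python
def sensitivity_npv(tp, fp, fn, tn):
    """Sensitivity = TP / (TP + FN): fraction of true findings detected.
    NPV = TN / (TN + FN): fraction of negative calls that are truly negative."""
    return tp / (tp + fn), tn / (tn + fn)

# Hypothetical counts for one finding type
sens, npv = sensitivity_npv(tp=83, fp=20, fn=17, tn=380)
```

The trade-off the abstract describes is visible here: raising sensitivity (fewer false negatives) tends to raise the false-detection count (`fp`), which lowers precision but leaves NPV high when negatives dominate.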
Affiliation(s)
- Julius Henning Niehoff, Jana Kalaitzidis, Jan Robert Kroeger, Denise Schoenbeck, Jan Borggrefe, Arwed Elias Michael: Department of Radiology, Neuroradiology and Nuclear Medicine, Johannes Wesling University Hospital, Ruhr University Bochum, Bochum, Germany
10. Azad AK, Ahmed I, Ahmed MU. In Search of an Efficient and Reliable Deep Learning Model for Identification of COVID-19 Infection from Chest X-ray Images. Diagnostics (Basel) 2023;13:574. [PMID: 36766679] [PMCID: PMC9914163] [DOI: 10.3390/diagnostics13030574]
Abstract
The virus responsible for COVID-19 is mutating day by day with more infectious characteristics. With the limited healthcare resources and overburdened medical practitioners, it is almost impossible to contain this virus. The automatic identification of this viral infection from chest X-ray (CXR) images is now in greater demand as it is a cheaper and less time-consuming diagnostic option. To that end, we have applied deep learning (DL) approaches for four-class classification of CXR images comprising COVID-19, normal, lung opacity, and viral pneumonia. At first, we extracted features of CXR images by applying a local binary pattern (LBP) and pre-trained convolutional neural network (CNN). Afterwards, we utilized a pattern recognition network (PRN), support vector machine (SVM), decision tree (DT), random forest (RF), and k-nearest neighbors (KNN) classifiers on the extracted features to classify the aforementioned four-class CXR images. The performances of the proposed methods have been analyzed rigorously in terms of classification performance and classification speed. Among different methods applied to the four-class test images, the best method achieved classification performances with 97.41% accuracy, 94.94% precision, 94.81% recall, 98.27% specificity, and 94.86% F1 score. The results indicate that the proposed method can offer an efficient and reliable framework for COVID-19 detection from CXR images, which could be immensely conducive to the effective diagnosis of COVID-19-infected patients.
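The LBP feature extraction mentioned above encodes each pixel by thresholding its eight neighbours against the centre value and reading the results as one byte. A toy sketch for a single 3x3 patch, not the authors' pipeline:

```python
import numpy as np

def lbp_code(patch):
    """8-bit LBP code of the centre pixel of a 3x3 patch: neighbours are read
    clockwise from the top-left corner; bit = 1 if neighbour >= centre."""
    p = np.asarray(patch)
    c = p[1, 1]
    neighbours = [p[0, 0], p[0, 1], p[0, 2], p[1, 2],
                  p[2, 2], p[2, 1], p[2, 0], p[1, 0]]
    # most significant bit first, so the code fits in one byte (0..255)
    return sum(int(n >= c) << (7 - i) for i, n in enumerate(neighbours))

code = lbp_code([[5, 4, 3],
                 [2, 4, 6],
                 [7, 8, 1]])
```

Sliding this over an image and histogramming the codes yields the texture descriptor that is then fed, alongside CNN features, to the downstream classifiers.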
11. Umar Ibrahim A, Al-Turjman F, Ozsoz M, Serte S. Computer aided detection of tuberculosis using two classifiers. Biomed Tech (Berl) 2022;67:513-524. [PMID: 36165698] [DOI: 10.1515/bmt-2021-0310]
Abstract
OBJECTIVES Tuberculosis, caused by Mycobacterium tuberculosis, has been a major challenge for medical and healthcare sectors in many underdeveloped countries with limited diagnostic tools. Tuberculosis can be detected from microscopic slides and chest X-rays, but given the high case load, these methods can be tedious for both microbiologists and radiologists and can lead to misdiagnosis. The main objective of this study is to address these challenges through Computer Aided Detection (CAD) using artificial-intelligence-driven models that learn convolutional features and produce highly accurate output. METHOD In this paper, we describe the automated discrimination of X-ray and microscopic slide images of tuberculosis into positive and negative cases using pretrained AlexNet models. The study employed a chest X-ray dataset made available on the Kaggle repository and microscopic slide images from both Near East University Hospital and the Kaggle repository. RESULTS For classification of tuberculosis versus healthy microscopic slides, AlexNet+Softmax achieved 98.14% accuracy and AlexNet+SVM achieved 98.73% accuracy. For classification of tuberculosis versus healthy chest X-ray images, AlexNet+Softmax achieved 98.19% accuracy and AlexNet+SVM achieved 98.38% accuracy. CONCLUSION The results obtained outperform several studies in the current literature. Future studies will attempt to integrate the Internet of Medical Things (IoMT) to design an IoMT/AI-enabled platform for the detection of tuberculosis from both X-ray and microscopic slide images.
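The AlexNet+Softmax design described above follows a common transfer-learning pattern: a frozen pretrained backbone supplies features, and only a lightweight classification head is trained. The sketch below illustrates that pattern under loud assumptions — a fixed random projection stands in for AlexNet, the data are synthetic Gaussian clouds rather than slides or X-rays, and the softmax head is trained with plain gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pretrained backbone (AlexNet in the paper):
# a fixed random projection from "pixels" to a feature vector, plus ReLU.
W_backbone = rng.normal(size=(64, 16))

def extract_features(x):
    return np.maximum(x @ W_backbone, 0.0)

def train_softmax_head(feats, labels, classes=2, lr=0.1, steps=300):
    """Train only the classification head, as in transfer learning."""
    W = np.zeros((feats.shape[1], classes))
    y = np.eye(classes)[labels]
    for _ in range(steps):
        logits = feats @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        W -= lr * feats.T @ (p - y) / len(feats)   # cross-entropy gradient
    return W

# Toy "TB-positive vs healthy" data: two well-separated Gaussian clouds.
x_pos = rng.normal(loc=2.0, size=(50, 64))
x_neg = rng.normal(loc=-2.0, size=(50, 64))
X = np.vstack([x_pos, x_neg])
y = np.array([1] * 50 + [0] * 50)

F = extract_features(X)
F = (F - F.mean(axis=0)) / (F.std(axis=0) + 1e-9)  # standardise features
W_head = train_softmax_head(F, y)
accuracy = ((F @ W_head).argmax(axis=1) == y).mean()
```

Swapping the softmax head for an SVM, as the AlexNet+SVM variant does, changes only the final classifier while the frozen feature extractor stays identical.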
Affiliation(s)
| | - Fadi Al-Turjman
- Department of Artificial Intelligence, Research Center for AI and IoT, Near East University, Nicosia, Turkey
| | - Mehmet Ozsoz
- Department of Biomedical Engineering, Near East University, Nicosia, Turkey
| | - Sertan Serte
- Department of Electrical and Electronics Engineering, Near East University, Nicosia, Turkey
|
12
|
Multithreshold Segmentation and Machine Learning Based Approach to Differentiate COVID-19 from Viral Pneumonia. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:2728866. [PMID: 36039344 PMCID: PMC9420061 DOI: 10.1155/2022/2728866] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/30/2022] [Revised: 06/13/2022] [Accepted: 07/05/2022] [Indexed: 11/17/2022]
Abstract
Coronavirus disease (COVID-19) has caused unprecedented devastation and the loss of millions of lives globally. Its contagious nature and fatalities invariably pose challenges to physicians and healthcare support systems. Clinical diagnostic evaluation using reverse transcription-polymerase chain reaction and other approaches is currently in use. Chest X-ray (CXR) and CT images have been effectively utilized for screening, as they can provide relevant data on localized regions affected by the infection. A step towards automated screening and diagnosis using CXR and CT could be of considerable importance in these turbulent times. The main objective is to probe a simple threshold-based segmentation approach to identify possible infection regions in CXR images and to investigate intensity-based, wavelet transform (WT)-based, and Laws-based texture features with statistical measures. A feature selection strategy using Random Forest (RF) was then applied, and the selected features were used to build Machine Learning (ML) models with Support Vector Machine (SVM) and RF classifiers to differentiate COVID-19 from viral pneumonia (VP). The results clearly indicate that the intensity- and WT-based features differ between the two pathologies, which are better differentiated with the combined features trained using the SVM and RF classifiers. Classifier performance measures, such as an Area Under the Curve (AUC) of 0.97 and an overall classification accuracy of 0.9 with the RF model, clearly indicate that the implemented methodology is useful in characterizing COVID-19 and viral pneumonia.
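The segmentation and intensity-feature steps described above can be sketched as follows. This is a hedged, minimal version: a single global threshold stands in for the paper's multithreshold scheme, and only first-order intensity statistics are computed (the wavelet- and Laws-based texture features and the RF selection step are omitted).

```python
import numpy as np

def threshold_mask(img, t):
    """Simple threshold-based segmentation: keep pixels above t as a
    candidate infection region (a stand-in for multithreshold schemes)."""
    return np.asarray(img) > t

def first_order_features(img, mask):
    """Intensity-based statistical features over the segmented region:
    mean, variance, skewness, and histogram entropy."""
    vals = np.asarray(img)[mask].astype(np.float64)
    mean = vals.mean()
    var = vals.var()
    skew = ((vals - mean) ** 3).mean() / (var ** 1.5 + 1e-12)
    hist, _ = np.histogram(vals, bins=32)
    p = hist / hist.sum()
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return np.array([mean, var, skew, entropy])
```

Vectors like these, concatenated with wavelet and Laws texture descriptors, would then feed the RF feature selection and the SVM/RF classifiers.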
|
13
|
Ahn H, Jun I, Seo KY, Kim EK, Kim TI. Artificial Intelligence for the Estimation of Visual Acuity Using Multi-Source Anterior Segment Optical Coherence Tomographic Images in Senile Cataract. Front Med (Lausanne) 2022; 9:871382. [PMID: 35655854 PMCID: PMC9152093 DOI: 10.3389/fmed.2022.871382] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2022] [Accepted: 04/04/2022] [Indexed: 12/05/2022] Open
Abstract
Purpose To investigate an artificial intelligence (AI) model performance using multi-source anterior segment optical coherence tomographic (OCT) images in estimating the preoperative best-corrected visual acuity (BCVA) in patients with senile cataract. Design Retrospective, cross-instrument validation study. Subjects A total of 2,332 anterior segment images obtained using swept-source OCT, optical biometry for intraocular lens calculation, and a femtosecond laser platform in patients with senile cataract and postoperative BCVA ≥ 0.0 logMAR were included in the training/validation dataset. A total of 1,002 images obtained using optical biometry and another femtosecond laser platform in patients who underwent cataract surgery in 2021 were used for the test dataset. Methods AI modeling was based on an ensemble model of Inception-v4 and ResNet. The BCVA training/validation dataset was used for model training. The model performance was evaluated using the test dataset. Analysis of absolute error (AE) was performed by comparing the difference between true preoperative BCVA and estimated preoperative BCVA, as ≥0.1 logMAR (AE≥0.1) or <0.1 logMAR (AE <0.1). AE≥0.1 was classified into underestimation and overestimation groups based on the logMAR scale. Outcome Measurements Mean absolute error (MAE), root mean square error (RMSE), mean percentage error (MPE), and correlation coefficient between true preoperative BCVA and estimated preoperative BCVA. Results The test dataset MAE, RMSE, and MPE were 0.050 ± 0.130 logMAR, 0.140 ± 0.134 logMAR, and 1.3 ± 13.9%, respectively. The correlation coefficient was 0.969 (p < 0.001). The percentage of cases with AE≥0.1 was 8.4%. The incidence of postoperative BCVA > 0.1 was 21.4% in the AE≥0.1 group, of which 88.9% were in the underestimation group. The incidence of vision-impairing disease in the underestimation group was 95.7%. 
Preoperative corneal astigmatism and lens thickness were higher, and nuclear cataract was more severe, in the AE≥0.1 group than in the AE<0.1 group (p < 0.001, 0.007, and 0.024, respectively). The longer the axial length and the more severe the cortical/posterior subcapsular opacity, the more the estimated BCVA exceeded the true BCVA. Conclusions The AI model achieved high-level visual acuity estimation in patients with senile cataract. This quantification method encompassed both visual acuity and cataract severity from OCT images, which are the main indications for cataract surgery, showing its potential to objectively evaluate cataract severity.
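The error metrics reported above (MAE, RMSE, MPE) can be computed directly; the sketch below assumes the standard definitions, since the abstract does not spell them out. Note that MPE divides by the true value, so it is unstable when true values approach zero, as logMAR BCVA can.

```python
import numpy as np

def regression_metrics(true_va, est_va):
    """MAE, RMSE and mean percentage error between true and estimated
    values (here, preoperative BCVA on the logMAR scale)."""
    true_va = np.asarray(true_va, dtype=np.float64)
    est_va = np.asarray(est_va, dtype=np.float64)
    err = est_va - true_va
    mae = np.abs(err).mean()
    rmse = np.sqrt((err ** 2).mean())
    mpe = 100.0 * (err / true_va).mean()   # signed, so over/under-estimates cancel
    return mae, rmse, mpe
```

The signed MPE explains how the study can distinguish underestimation from overestimation groups while MAE treats both the same.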
Affiliation(s)
- Hyunmin Ahn
- Department of Ophthalmology, Institute of Vision Research, Yonsei University College of Medicine, Seoul, South Korea
| | - Ikhyun Jun
- Department of Ophthalmology, Institute of Vision Research, Yonsei University College of Medicine, Seoul, South Korea.,Corneal Dystrophy Research Institute, Yonsei University College of Medicine, Seoul, South Korea
| | - Kyoung Yul Seo
- Department of Ophthalmology, Institute of Vision Research, Yonsei University College of Medicine, Seoul, South Korea
| | - Eung Kweon Kim
- Corneal Dystrophy Research Institute, Yonsei University College of Medicine, Seoul, South Korea.,Saevit Eye Hospital, Goyang, South Korea
| | - Tae-Im Kim
- Department of Ophthalmology, Institute of Vision Research, Yonsei University College of Medicine, Seoul, South Korea.,Corneal Dystrophy Research Institute, Yonsei University College of Medicine, Seoul, South Korea
|
14
|
Liu J, Qi J, Chen W, Nian Y. Multi-branch fusion auxiliary learning for the detection of pneumonia from chest X-ray images. Comput Biol Med 2022; 147:105732. [PMID: 35779478 PMCID: PMC9212341 DOI: 10.1016/j.compbiomed.2022.105732] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2022] [Revised: 05/23/2022] [Accepted: 06/11/2022] [Indexed: 11/26/2022]
Abstract
Lung infections caused by bacteria and viruses are contagious and require timely screening and isolation, and different types of pneumonia require different treatment plans. Therefore, finding a rapid and accurate screening method for lung infections is critical. To achieve this goal, we proposed a multi-branch fusion auxiliary learning (MBFAL) method for pneumonia detection from chest X-ray (CXR) images. The MBFAL method performs two tasks through a double-branch network. The first task is to recognize the absence of pneumonia (normal), COVID-19, other viral pneumonia and bacterial pneumonia from CXR images, and the second task is to recognize the three types of pneumonia from CXR images. The latter task assists the learning of the former to achieve a better recognition effect. During auxiliary parameter updating, the feature maps of different branches are fused after sample screening through label information to enhance the model's ability to recognize cases of pneumonia without impacting its ability to recognize normal cases. Experiments show that an average classification accuracy of 95.61% is achieved using MBFAL. The single-class accuracy for normal, COVID-19, other viral pneumonia and bacterial pneumonia was 98.70%, 99.10%, 96.60% and 96.80%, respectively, and the recall was 97.20%, 98.60%, 96.10% and 89.20%, respectively. Compared with the baseline model and models built with the above methods separately, MBFAL achieved better results for the rapid screening of pneumonia.
|
15
|
Sevli O. A deep learning-based approach for diagnosing COVID-19 on chest x-ray images, and a test study with clinical experts. Comput Intell 2022; 38:COIN12526. [PMID: 35941907 PMCID: PMC9348396 DOI: 10.1111/coin.12526] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2021] [Revised: 03/06/2022] [Accepted: 04/05/2022] [Indexed: 01/20/2023]
Abstract
Pneumonia is among the common symptoms of the virus that causes COVID-19, which has turned into a worldwide pandemic. It is possible to diagnose pneumonia by examining chest radiographs. Chest X-ray (CXR) is a fast, low-cost, and practical method widely used in this field. The fact that pathogens other than COVID-19 also cause pneumonia, and that the radiographic images of all of them are similar, makes it difficult to detect the source of the disease. In this study, automatic detection of COVID-19 cases from CXR images was performed using convolutional neural networks (CNNs), a deep learning technique. Classifications were carried out using six different architectures on a dataset consisting of 15,153 images of three types: healthy, COVID-19, and other viral-induced pneumonia. In the classifications performed with five state-of-the-art models, ResNet18, GoogLeNet, AlexNet, VGG16, and DenseNet161, and a minimal CNN architecture specific to this study, the most successful result was obtained with the ResNet18 architecture, at 99.25% accuracy. Although the minimal CNN model developed for this study has a simpler structure, it was observed to compete with more complex models. The performance of the models used in this study was compared with similar studies in the literature, revealing that they generally achieved higher success. The model with the highest success was turned into a test application, tested by 10 volunteer clinicians, and found to provide 99.06% accuracy in practical use. This result shows that the study can serve as a successful decision support system for experts.
Affiliation(s)
- Onur Sevli
- Faculty of Engineering and Architecture, Computer Engineering Department, Burdur Mehmet Akif Ersoy University, Burdur, Turkey
|
16
|
Zouch W, Sagga D, Echtioui A, Khemakhem R, Ghorbel M, Mhiri C, Hamida AB. Detection of COVID-19 from CT and Chest X-ray Images Using Deep Learning Models. Ann Biomed Eng 2022; 50:825-835. [PMID: 35415768 PMCID: PMC9005164 DOI: 10.1007/s10439-022-02958-5] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2021] [Accepted: 03/21/2022] [Indexed: 12/12/2022]
Abstract
Coronavirus disease 2019 (COVID-19) is a highly transmissible and pathogenic disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), which first appeared in Wuhan, China, and has since spread throughout the world. This pathology has caused a major global health crisis, and early detection is a key task in minimizing its spread. Artificial intelligence is one of the approaches commonly used by researchers to analyze the problems it causes and provide solutions; such estimates would help health systems take the necessary steps to diagnose and track cases of COVID-19. In this work, we offer a method for the automatic detection of COVID-19 using tomographic (CT) and radiographic (chest X-ray) images. To improve the performance of the detection system, we used two deep learning models: VGG and ResNet. The experimental results show that our proposed models achieved best accuracies of 99.35% and 96.77%, respectively, for VGG19 and ResNet50 on the chest X-ray images.
Affiliation(s)
- Wassim Zouch
- King Abdulaziz University (KAU), Jeddah, Saudi Arabia.
| | - Dhouha Sagga
- ATMS Lab, Advanced Technologies for Medicine and Signals, ENIS, Sfax University, Sfax, Tunisia.,Higher Institute of Management of Gabes, Gabes University, Gabès, Tunisia
| | - Amira Echtioui
- ATMS Lab, Advanced Technologies for Medicine and Signals, ENIS, Sfax University, Sfax, Tunisia
| | - Rafik Khemakhem
- ATMS Lab, Advanced Technologies for Medicine and Signals, ENIS, Sfax University, Sfax, Tunisia.,Higher Institute of Management of Gabes, Gabes University, Gabès, Tunisia
| | - Mohamed Ghorbel
- ATMS Lab, Advanced Technologies for Medicine and Signals, ENIS, Sfax University, Sfax, Tunisia
| | - Chokri Mhiri
- Department of Neurology, Habib Bourguiba University Hospital, Sfax, Tunisia.,Neuroscience Laboratory "LR-12-SP-19", Faculty of Medicine, Sfax University, Sfax, Tunisia
| | - Ahmed Ben Hamida
- ATMS Lab, Advanced Technologies for Medicine and Signals, ENIS, Sfax University, Sfax, Tunisia
|
17
|
Soni M, Gomathi S, Kumar P, Churi PP, Mohammed MA, Salman AO. Hybridizing Convolutional Neural Network for Classification of Lung Diseases. INTERNATIONAL JOURNAL OF SWARM INTELLIGENCE RESEARCH 2022. [DOI: 10.4018/ijsir.287544] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
Pulmonary disease is widespread worldwide, including chronic obstruction of the lungs, pneumonia, asthma, TB, etc. Prompt diagnosis of lung disease is essential, and machine learning models have been developed for this purpose. Many deep learning technologies, including the CNN and the capsule network, are used for lung disease prediction. The fundamental CNN handles rotated, inclined, or otherwise irregularly oriented images poorly. Therefore, by integrating a spatial transformer network (STN) with a CNN, we propose a new hybrid deep learning architecture named STNCNN. The new model was trained on the NIH chest X-ray image dataset from the Kaggle repository. STNCNN achieved an accuracy of 69% on the entire dataset, while vanilla grayscale, vanilla RGB, and hybrid CNN achieved 67.8%, 69.5%, and 63.8%, respectively. When the sample dataset is applied, STNCNN takes much less time to train at the cost of slightly lower validation reliability. The proposed STNCNN system therefore simplifies the diagnosis of lung disease for both specialists and physicians.
Affiliation(s)
| | - S. Gomathi
- UK International Qualifications, Ltd., India
| | - Pankaj Kumar
- Noida Institute of Engineering and Technology, Greater Noida, India
|
18
|
Shoaib MR, Emara HM, Elwekeil M, El-Shafai W, Taha TE, El-Fishawy AS, El-Rabaie ESM, El-Samie FEA. Hybrid classification structures for automatic COVID-19 detection. JOURNAL OF AMBIENT INTELLIGENCE AND HUMANIZED COMPUTING 2022; 13:4477-4492. [PMID: 35280854 PMCID: PMC8898749 DOI: 10.1007/s12652-021-03686-9] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/24/2020] [Accepted: 12/21/2021] [Indexed: 06/14/2023]
Abstract
This paper explores the issue of COVID-19 detection from X-ray images. X-ray images, in general, suffer from low quality and low resolution, which is why detecting diseases from them requires sophisticated algorithms. First, machine learning (ML) is applied to features extracted manually from the X-ray images; twelve classifiers are compared for this task. Simulation results reveal the superiority of the Gaussian process (GP) and random forest (RF) classifiers. To extend the study, we modified the feature extraction strategy to produce deep features using four pre-trained models, namely ResNet50, ResNet101, Inception-v3 and InceptionResNet-v2. Simulation results show that InceptionResNet-v2 and ResNet101 with the GP classifier achieve the best performance. Moreover, transfer learning (TL) is also introduced to enhance the COVID-19 detection process. The selected classification hierarchy is also compared with a convolutional neural network (CNN) model built from scratch to prove its quality of classification. Simulation results show that deep features and TL methods provide the best performance, reaching 100% accuracy.
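The workflow of benchmarking many classifiers on one feature set, as in the twelve-classifier comparison above, can be sketched as below. This is only an illustration: two toy classifiers (nearest centroid and 1-nearest-neighbour) written in plain NumPy stand in for the classifiers compared in the paper, and the data are synthetic.

```python
import numpy as np

def nearest_centroid(train_X, train_y, test_X):
    """Classify by distance to each class mean in feature space."""
    classes = np.unique(train_y)
    centroids = np.stack([train_X[train_y == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(test_X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]

def one_nn(train_X, train_y, test_X):
    """Classify by the label of the single closest training sample."""
    d = np.linalg.norm(test_X[:, None, :] - train_X[None, :, :], axis=2)
    return train_y[d.argmin(axis=1)]

def compare(classifiers, train_X, train_y, test_X, test_y):
    """Rank candidate classifiers by accuracy on a held-out split."""
    return {name: float((clf(train_X, train_y, test_X) == test_y).mean())
            for name, clf in classifiers.items()}

rng = np.random.default_rng(1)
train_X = np.vstack([rng.normal(-1, 0.3, (40, 5)), rng.normal(1, 0.3, (40, 5))])
train_y = np.array([0] * 40 + [1] * 40)
test_X = np.vstack([rng.normal(-1, 0.3, (10, 5)), rng.normal(1, 0.3, (10, 5))])
test_y = np.array([0] * 10 + [1] * 10)

scores = compare({"centroid": nearest_centroid, "1-nn": one_nn},
                 train_X, train_y, test_X, test_y)
```

Real comparisons would substitute deep features for the synthetic ones and add cross-validation, but the harness shape stays the same.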
Affiliation(s)
- Mohamed R. Shoaib
- Department of Electronics and Electrical Communications Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf, 32952 Egypt
| | - Heba M. Emara
- Department of Electronics and Electrical Communications Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf, 32952 Egypt
| | - Mohamed Elwekeil
- Department of Electronics and Electrical Communications Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf, 32952 Egypt
- Department of Electrical and Information Engineering, University of Cassino and Southern Lazio, Cassino, Italy
| | - Walid El-Shafai
- Department of Electronics and Electrical Communications Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf, 32952 Egypt
- Security Engineering Lab, Computer Science Department, Prince Sultan University, Riyadh, 11586 Saudi Arabia
| | - Taha E. Taha
- Department of Electronics and Electrical Communications Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf, 32952 Egypt
| | - Adel S. El-Fishawy
- Department of Electronics and Electrical Communications Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf, 32952 Egypt
| | - El-Sayed M. El-Rabaie
- Department of Electronics and Electrical Communications Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf, 32952 Egypt
| | - Fathi E. Abd El-Samie
- Department of Electronics and Electrical Communications Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf, 32952 Egypt
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia
|
19
|
Panahi A, Askari Moghadam R, Akrami M, Madani K. Deep Residual Neural Network for COVID-19 Detection from Chest X-ray Images. SN COMPUTER SCIENCE 2022; 3:169. [PMID: 35224513 PMCID: PMC8860458 DOI: 10.1007/s42979-022-01067-3] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/17/2021] [Accepted: 11/27/2021] [Indexed: 12/22/2022]
Abstract
COVID-19 spread quickly throughout the world and became a pandemic, with destructive effects on daily life, public health, and global business. It is crucial to identify positive patients as early as possible to limit the epidemic's further diffusion and to manage affected cases immediately, so the demand for quick diagnostic support tools has grown. Recent findings obtained from radiology imaging suggest that such images contain salient information about COVID-19, and the use of advanced artificial intelligence (AI) methods coupled with radiological imaging can help the reliable diagnosis of COVID-19. As radiography images can reveal pneumonia infections, this research presents an accurate and automatic technique based on a deep residual network that analyzes chest X-ray images to screen for COVID-19 and diagnose confirmed patients. Physicians report that it is significantly challenging to separate COVID-19 from common viral and bacterial pneumonia, even though COVID-19 is itself a viral disease. The proposed network performs detailed diagnostics for two multi-class classification tasks (COVID-19, Normal, Viral Pneumonia; and COVID-19, Normal, Viral Pneumonia, Bacterial Pneumonia) as well as binary classification. Comparing the proposed network with popular methods on public databases shows that the proposed algorithm can provide an accuracy of 92.1% in classifying the multi-class case of COVID-19, normal, viral pneumonia, and bacterial pneumonia. It can be applied to support radiologists in verifying their initial assessment.
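The central idea of the deep residual network used above is the identity shortcut: each block computes a learned transformation and adds the input back, so the block only has to model a residual. A minimal single-block sketch (not the paper's architecture) is:

```python
import numpy as np

def residual_block(x, W1, W2):
    """y = ReLU(x·W1)·W2 + x — the identity shortcut lets the input
    signal (and, during training, the gradient) bypass the learned
    transformation, which is what makes very deep networks trainable."""
    return np.maximum(x @ W1, 0.0) @ W2 + x
```

With all-zero weights the block reduces exactly to the identity, which is why stacking many such blocks does not degrade an already-learned mapping.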
Affiliation(s)
- Amirhossein Panahi
- Faculty of New Sciences and Technologies, University of Tehran, Tehran, Iran
| | | | - Mohammadreza Akrami
- Faculty of New Sciences and Technologies, University of Tehran, Tehran, Iran
| | - Kurosh Madani
- LISSI Lab, Senart-FB Institute of Technology, University Paris Est-Creteil (UPEC), Lieusaint, France
|
20
|
Sourab SY, Kabir MA. A comparison of hybrid deep learning models for pneumonia diagnosis from chest radiograms. SENSORS INTERNATIONAL 2022. [DOI: 10.1016/j.sintl.2022.100167] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/25/2022] Open
|
21
|
Mehrotra R, Agrawal R, Ansari MA. Diagnosis of hypercritical chronic pulmonary disorders using dense convolutional network through chest radiography. MULTIMEDIA TOOLS AND APPLICATIONS 2022; 81:7625-7649. [PMID: 35125924 PMCID: PMC8798313 DOI: 10.1007/s11042-021-11748-5] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 05/19/2021] [Revised: 08/30/2021] [Accepted: 11/22/2021] [Indexed: 06/14/2023]
Abstract
Lung-related ailments are prevalent all over the world, majorly including asthma, chronic obstructive pulmonary disease (COPD), tuberculosis, pneumonia, fibrosis, etc., and now COVID-19 is added to this list. COVID-19 infection poses respiratory complications with other indications such as cough, high fever, and pneumonia. The WHO has identified lung cancer as one of the most fatal cancer types, so timely detection of such disease is pivotal for an individual's health. Since elementary convolutional neural networks have not performed well in identifying atypical image types, we recommend a novel, completely automated framework with a deep learning approach for the recognition and classification of chronic pulmonary disorders (CPD) and COVID-pneumonia using thoracic or chest X-ray (CXR) images. A novel three-step, completely automated approach is presented that first extracts the region of interest from CXR images for preprocessing, then detects infected lung X-rays from normal ones. Thereafter, the infected lung images are further classified into COVID-pneumonia, pneumonia, and other chronic pulmonary disorders (OCPD), which might be utilized in the current scenario to help radiologists substantiate their diagnoses and start timely treatment of these deadly lung diseases. Finally, the regions of the CXR indicative of severe chronic pulmonary disorders like COVID-19 and pneumonia are highlighted. A detailed investigation of various pivotal parameters based on several experimental outcomes is made. The presented approach separates normal lung X-rays from infected ones and further classifies the infected lung images into COVID-pneumonia, pneumonia, and other chronic pulmonary disorders with an utmost accuracy of 96.8%. Several other collective performance measurements validate the superiority of the presented model. The proposed framework shows effective results in classifying lung images into Normal, COVID-pneumonia, pneumonia, and OCPD, and can be effectively utilized in the current pandemic scenario to help radiologists substantiate their diagnoses and start timely treatment of these deadly lung diseases.
Affiliation(s)
- Rajat Mehrotra
- Department of Electrical & Electronics Engineering, GL Bajaj Institute of Technology & Management, Gr. Noida, India
| | - Rajeev Agrawal
- Department of Electronics & Communication Engineering, GL Bajaj Institute of Technology & Management, Gr. Noida, India
| | - M. A. Ansari
- Department of Electrical Engineering, School of Engineering, Gautam Buddha University, Gr. Noida, India
|
22
|
Moujahid H, Cherradi B, Al-Sarem M, Bahatti L, Bakr Assedik Mohammed Yahya Eljialy A, Alsaeedi A, Saeed F. Combining CNN and Grad-Cam for COVID-19 Disease Prediction and Visual Explanation. INTELLIGENT AUTOMATION & SOFT COMPUTING 2022; 32:723-745. [DOI: 10.32604/iasc.2022.022179] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/30/2021] [Accepted: 09/02/2021] [Indexed: 06/15/2023]
|
23
|
Shah PM, Ullah F, Shah D, Gani A, Maple C, Wang Y, Abrar M, Islam SU. Deep GRU-CNN Model for COVID-19 Detection From Chest X-Rays Data. IEEE ACCESS : PRACTICAL INNOVATIONS, OPEN SOLUTIONS 2022; 10:35094-35105. [PMID: 35582498 PMCID: PMC9088790 DOI: 10.1109/access.2021.3077592] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/04/2021] [Accepted: 04/20/2021] [Indexed: 05/03/2023]
Abstract
In the current era, data is growing exponentially due to advancements in smart devices. Data scientists apply a variety of learning-based techniques to identify underlying patterns in the medical data to address various health-related issues. In this context, automated disease detection has now become a central concern in medical science. Such approaches can reduce the mortality rate through accurate and timely diagnosis. COVID-19 is a modern virus that has spread all over the world and is affecting millions of people. Many countries are facing a shortage of testing kits, vaccines, and other resources due to significant and rapid growth in cases. In order to accelerate the testing process, scientists around the world have sought to create novel methods for the detection of the virus. In this paper, we propose a hybrid deep learning model based on a convolutional neural network (CNN) and gated recurrent unit (GRU) to detect the viral disease from chest X-rays (CXRs). In the proposed model, a CNN is used to extract features, and a GRU is used as a classifier. The model has been trained on 424 CXR images with 3 classes (COVID-19, Pneumonia, and Normal). The proposed model achieves encouraging results of 0.96, 0.96, and 0.95 in terms of precision, recall, and f1-score, respectively. These findings indicate how deep learning can significantly contribute to the early detection of COVID-19 in patients through the analysis of X-ray scans. Such indications can pave the way to mitigate the impact of the disease. We believe that this model can be an effective tool for medical practitioners for early diagnosis.
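The GRU half of the hybrid model above gates how much of its hidden state to carry forward at each step of the feature sequence produced by the CNN. A single GRU update, written out in plain NumPy with the standard gate equations (a sketch, not the paper's trained model), looks like:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x, h, params):
    """One GRU update: the update gate z decides how much of the previous
    hidden state h to keep versus replace with the candidate state."""
    Wz, Uz, Wr, Ur, Wh, Uh = params
    z = sigmoid(x @ Wz + h @ Uz)              # update gate
    r = sigmoid(x @ Wr + h @ Ur)              # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)  # candidate state
    return (1.0 - z) * h + z * h_tilde
```

In the CNN+GRU design, `x` at each step would be a slice of the CNN feature map, and the final hidden state feeds the class prediction.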
Affiliation(s)
- Pir Masoom Shah
- Department of Computer Science, Bacha Khan University, Charsadda 24000, Pakistan
- School of Computer Science, Wuhan University, Wuhan 430072, China
| | - Faizan Ullah
- Department of Computer Science, Bacha Khan University, Charsadda 24000, Pakistan
| | - Dilawar Shah
- Department of Computer Science, Bacha Khan University, Charsadda 24000, Pakistan
| | - Abdullah Gani
- Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur 50603, Malaysia
- Faculty of Computing and Informatics, University Malaysia Sabah, Labuan 88400, Malaysia
| | - Carsten Maple
- Secure Cyber Systems Research Group, WMG, University of Warwick, Coventry CV4 7AL, U.K.
- Alan Turing Institute, London NW1 2DB, U.K.
| | - Yulin Wang
- School of Computer Science, Wuhan University, Wuhan 430072, China
| | - Mohammad Abrar
- Department of Computer Science, Mohi-ud-Din Islamic University, Nerian Sharif 12080, Pakistan
| | - Saif Ul Islam
- Department of Computer Science, Institute of Space Technology, Islamabad 44000, Pakistan
|
24
|
Yang G, Ye Q, Xia J. Unbox the black-box for the medical explainable AI via multi-modal and multi-centre data fusion: A mini-review, two showcases and beyond. AN INTERNATIONAL JOURNAL ON INFORMATION FUSION 2022; 77:29-52. [PMID: 34980946 PMCID: PMC8459787 DOI: 10.1016/j.inffus.2021.07.016] [Citation(s) in RCA: 130] [Impact Index Per Article: 65.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/27/2021] [Revised: 05/25/2021] [Accepted: 07/25/2021] [Indexed: 05/04/2023]
Abstract
Explainable Artificial Intelligence (XAI) is an emerging research topic in machine learning aimed at unboxing how AI systems' black-box choices are made. This research field inspects the measures and models involved in decision-making and seeks solutions to explain them explicitly. Many machine learning algorithms cannot reveal how and why a decision has been made; this is particularly true of the most popular deep neural network approaches currently in use. Consequently, our confidence in AI systems can be hindered by the lack of explainability in these black-box models. XAI is becoming more and more crucial for deep learning powered applications, especially for medical and healthcare studies, even though these deep neural networks can generally deliver impressive performance. The insufficient explainability and transparency of most existing AI systems may be one of the major reasons why successful implementation and integration of AI tools into routine clinical practice remain uncommon. In this study, we first surveyed the current progress of XAI, in particular its advances in healthcare applications. We then introduced our solutions for XAI leveraging multi-modal and multi-centre data fusion, and subsequently validated them in two showcases following real clinical scenarios. Comprehensive quantitative and qualitative analyses prove the efficacy of our proposed XAI solutions, from which we envisage successful applications in a broader range of clinical questions.
Affiliation(s)
- Guang Yang: National Heart and Lung Institute, Imperial College London, London, UK; Royal Brompton Hospital, London, UK; Imperial Institute of Advanced Technology, Hangzhou, China
- Qinghao Ye: Hangzhou Ocean's Smart Boya Co., Ltd, China; University of California, San Diego, La Jolla, CA, USA
- Jun Xia: Radiology Department, Shenzhen Second People's Hospital, Shenzhen, China
25
Tjoa E, Guan C. A Survey on Explainable Artificial Intelligence (XAI): Toward Medical XAI. IEEE Transactions on Neural Networks and Learning Systems 2021; 32:4793-4813. [PMID: 33079674] [DOI: 10.1109/tnnls.2020.3027314]
Abstract
Artificial intelligence and machine learning have recently demonstrated remarkable performance in many tasks, from image processing to natural language processing, especially with the advent of deep learning (DL). Along with this research progress, they have encroached upon many different fields and disciplines, some of which, such as the medical sector, require a high level of accountability and thus transparency. Explanations for machine decisions and predictions are therefore needed to justify their reliability. This requires greater interpretability, which often means we need to understand the mechanism underlying the algorithms. Unfortunately, the black-box nature of DL remains unresolved, and many machine decisions are still poorly understood. We provide a review of the interpretability methods suggested by different research works and categorize them. The categories reveal different dimensions of interpretability research, from approaches that provide "obviously" interpretable information to studies of complex patterns. By applying the same categorization to interpretability in medical research, it is hoped that: 1) clinicians and practitioners can approach these methods with caution; 2) insight into interpretability will develop with more consideration for medical practice; and 3) initiatives to push forward data-based, mathematically grounded, and technically grounded medical education are encouraged.
26
Chakraborty S, Paul S, Hasan KMA. A Transfer Learning-Based Approach with Deep CNN for COVID-19- and Pneumonia-Affected Chest X-ray Image Classification. SN Computer Science 2021; 3:17. [PMID: 34723208] [PMCID: PMC8547126] [DOI: 10.1007/s42979-021-00881-5]
Abstract
The COVID-19 pandemic has had a significant impact on everyone's life. One of the fundamental steps in coping with this challenge is identifying COVID-19-affected patients as early as possible. In this paper, we classified COVID-19, pneumonia, and healthy cases from chest X-ray images by applying a transfer learning approach to the pre-trained VGG-19 architecture. We used MongoDB as a database to store the original images and their corresponding categories. The analysis was performed on a public dataset of 3797 X-ray images, comprising COVID-19-affected (1184 images), pneumonia-affected (1294 images), and healthy (1319 images) cases (https://www.kaggle.com/tawsifurrahman/covid19-radiography-database/version/3). This research achieved an accuracy of 97.11%, an average precision of 97%, and an average recall of 97% on the test dataset.
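Accuracy, precision, and recall figures such as those reported above are all derived from the test-set confusion matrix. A minimal, illustrative sketch (the class layout and counts below are hypothetical, not the paper's data):

```python
# Per-class precision/recall and overall accuracy from a 3-class
# confusion matrix (rows = true class, columns = predicted class).

def metrics(cm):
    n = len(cm)
    total = sum(sum(row) for row in cm)
    correct = sum(cm[i][i] for i in range(n))
    accuracy = correct / total
    # Precision for class i: diagonal over its predicted-column sum.
    precision = [cm[i][i] / sum(cm[r][i] for r in range(n)) for i in range(n)]
    # Recall for class i: diagonal over its true-row sum.
    recall = [cm[i][i] / sum(cm[i]) for i in range(n)]
    return accuracy, precision, recall

# Hypothetical counts for COVID-19 / pneumonia / healthy test images.
cm = [[110, 5, 3],
      [4, 120, 6],
      [2, 4, 125]]

acc, prec, rec = metrics(cm)
print(round(acc, 3), [round(p, 3) for p in prec], [round(r, 3) for r in rec])
```

Averaging the per-class precision and recall values gives the "average precision" and "average recall" quoted in such abstracts.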
Affiliation(s)
- Soarov Chakraborty: Department of Computer Science and Engineering, Khulna University of Engineering & Technology, Khulna 9203, Bangladesh
- Shourav Paul: Department of Computer Science and Engineering, Khulna University of Engineering & Technology, Khulna 9203, Bangladesh
- K. M. Azharul Hasan: Department of Computer Science and Engineering, Khulna University of Engineering & Technology, Khulna 9203, Bangladesh
27
Moses DA. Deep learning applied to automatic disease detection using chest X-rays. J Med Imaging Radiat Oncol 2021; 65:498-517. [PMID: 34231311] [DOI: 10.1111/1754-9485.13273]
Abstract
Deep learning (DL) has shown rapid advancement and considerable promise when applied to the automatic detection of diseases using CXRs. This is important given the widespread use of CXRs across the world in diagnosing significant pathologies, and the lack of trained radiologists to report them. This review article introduces the basic concepts of DL as applied to CXR image analysis including basic deep neural network (DNN) structure, the use of transfer learning and the application of data augmentation. It then reviews the current literature on how DNN models have been applied to the detection of common CXR abnormalities (e.g. lung nodules, pneumonia, tuberculosis and pneumothorax) over the last few years. This includes DL approaches employed for the classification of multiple different diseases (multi-class classification). Performance of different techniques and models and their comparison with human observers are presented. Some of the challenges facing DNN models, including their future implementation and relationships to radiologists, are also discussed.
Affiliation(s)
- Daniel A Moses: Graduate School of Biomedical Engineering, Faculty of Engineering, University of New South Wales, Sydney, New South Wales, Australia; Department of Medical Imaging, Prince of Wales Hospital, Sydney, New South Wales, Australia
28
El Asnaoui K, Chawki Y. Using X-ray images and deep learning for automated detection of coronavirus disease. J Biomol Struct Dyn 2021; 39:3615-3626. [PMID: 32397844] [DOI: 10.1109/access.2020.3010287]
Abstract
Coronavirus remains a leading cause of death worldwide. Hospitals have access to only a limited number of COVID-19 test kits as cases increase daily, so it is important to implement an automatic detection and classification system as a rapid alternative diagnostic option to prevent COVID-19 from spreading among individuals. Medical image analysis is one of the most promising research areas; it provides facilities for diagnosis and decision-making for a number of diseases such as coronavirus. This paper conducts a comparative study of recent deep learning models (VGG16, VGG19, DenseNet201, Inception_ResNet_V2, Inception_V3, ResNet50, and MobileNet_V2) for the detection and classification of coronavirus pneumonia. The experiments were conducted using a chest X-ray and CT dataset of 6087 images (2780 images of bacterial pneumonia, 1493 of coronavirus, 231 of COVID-19, and 1583 normal), and confusion matrices were used to evaluate model performance. The results show that Inception_ResNet_V2 and DenseNet201 provide better results than the other models used in this work (92.18% accuracy for Inception_ResNet_V2 and 88.09% for DenseNet201).
Affiliation(s)
- Khalid El Asnaoui: Complex System Engineering and Human System, Mohammed VI Polytechnic University, Benguerir, Morocco
- Youness Chawki: Faculty of Sciences and Techniques, Moulay Ismail University, Errachidia, Morocco
29
de Souza LA, Mendel R, Strasser S, Ebigbo A, Probst A, Messmann H, Papa JP, Palm C. Convolutional Neural Networks for the evaluation of cancer in Barrett's esophagus: Explainable AI to lighten up the black-box. Comput Biol Med 2021; 135:104578. [PMID: 34171639] [DOI: 10.1016/j.compbiomed.2021.104578]
Abstract
Even though artificial intelligence and machine learning have demonstrated remarkable performance in medical image computing, an adequate level of accountability and transparency must accompany such evaluations. The reliability of machine learning predictions must be explained and interpreted, especially when diagnosis support is addressed. For this task, the black-box nature of deep learning techniques must be opened up so that their promising results can be transferred into clinical practice. Hence, we investigate the use of explainable artificial intelligence techniques to quantitatively highlight discriminative regions during the classification of early-cancerous tissues in patients diagnosed with Barrett's esophagus. Four Convolutional Neural Network models (AlexNet, SqueezeNet, ResNet50, and VGG16) were analyzed using five interpretation techniques (saliency, guided backpropagation, integrated gradients, input × gradients, and DeepLIFT) to compare their agreement with experts' prior annotations of cancerous tissue. We show that saliency attributes match the experts' manual delineations best. Moreover, there is a moderate to high correlation between a model's sensitivity and the human-computer segmentation agreement: the higher the model's sensitivity, the stronger the agreement. We observed a relevant relation between computational learning and experts' insights, demonstrating how human knowledge may influence correct computational learning.
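Attribution methods such as saliency score each input by how strongly the model output responds to it. A minimal, illustrative sketch approximating saliency with finite differences on a toy linear "model" (the function and weights are hypothetical; real saliency maps backpropagate gradients through the network):

```python
# Saliency as |d output / d input|, approximated by central finite
# differences; inputs with larger scores influence the output more.

def saliency(f, x, eps=1e-5):
    scores = []
    for i in range(len(x)):
        hi = x[:]  # copy, then perturb coordinate i upward
        hi[i] += eps
        lo = x[:]  # copy, then perturb coordinate i downward
        lo[i] -= eps
        scores.append(abs((f(hi) - f(lo)) / (2 * eps)))
    return scores

# Toy "model": a weighted sum, so saliency should recover |weights|.
weights = [0.5, -2.0, 0.0, 1.5]
model = lambda x: sum(w * v for w, v in zip(weights, x))

print([round(s, 3) for s in saliency(model, [1.0, 1.0, 1.0, 1.0])])
```

For a linear function the finite-difference estimate recovers the absolute weights, which is the intuition behind ranking image pixels by gradient magnitude.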
Affiliation(s)
- Luis A de Souza: Department of Computing, São Carlos Federal University - UFSCar, Brazil; Regensburg Medical Image Computing (ReMIC), Ostbayerische Technische Hochschule Regensburg (OTH Regensburg), Germany
- Robert Mendel: Regensburg Medical Image Computing (ReMIC), Ostbayerische Technische Hochschule Regensburg (OTH Regensburg), Germany
- Sophia Strasser: Regensburg Medical Image Computing (ReMIC), Ostbayerische Technische Hochschule Regensburg (OTH Regensburg), Germany
- Alanna Ebigbo: Medizinische Klinik III, Universitätsklinikum Augsburg, Germany
- Andreas Probst: Medizinische Klinik III, Universitätsklinikum Augsburg, Germany
- Helmut Messmann: Medizinische Klinik III, Universitätsklinikum Augsburg, Germany
- João P Papa: Department of Computing, São Paulo State University, UNESP, Brazil
- Christoph Palm: Regensburg Medical Image Computing (ReMIC), Ostbayerische Technische Hochschule Regensburg (OTH Regensburg), Germany; Regensburg Center of Health Sciences and Technology (RCHST), OTH Regensburg, Germany
30
Emara HM, Shoaib MR, Elwekeil M, El-Shafai W, Taha TE, El-Fishawy AS, El-Rabaie EM, Alshebeili SA, Dessouky MI, Abd El-Samie FE. Deep convolutional neural networks for COVID-19 automatic diagnosis. Microsc Res Tech 2021; 84:2504-2516. [PMID: 34121273] [PMCID: PMC8420362] [DOI: 10.1002/jemt.23713]
Abstract
This article is mainly concerned with COVID‐19 diagnosis from X‐ray images. The number of cases infected with COVID‐19 is increasing daily, and there is a limitation in the number of test kits needed in hospitals. Therefore, there is an imperative need to implement an efficient automatic diagnosis system to alleviate COVID‐19 spreading among people. This article presents a discussion of the utilization of convolutional neural network (CNN) models with different learning strategies for automatic COVID‐19 diagnosis. First, we consider the CNN‐based transfer learning approach for automatic diagnosis of COVID‐19 from X‐ray images with different training and testing ratios. Different pre‐trained deep learning models in addition to a transfer learning model are considered and compared for the task of COVID‐19 detection from X‐ray images. Confusion matrices of these studied models are presented and analyzed. Considering the performance results obtained, ResNet models (ResNet18, ResNet50, and ResNet101) provide the highest classification accuracy on the two considered datasets with different training and testing ratios, namely 80/20, 70/30, 60/40, and 50/50. The accuracies obtained using the first dataset with 70/30 training and testing ratio are 97.67%, 98.81%, and 100% for ResNet18, ResNet50, and ResNet101, respectively. For the second dataset, the reported accuracies are 99%, 99.12%, and 99.29% for ResNet18, ResNet50, and ResNet101, respectively. The second approach is the training of a proposed CNN model from scratch. The results confirm that training of the CNN from scratch can lead to the identification of the signs of COVID‐19 disease.
Affiliation(s)
- Heba M. Emara: Department of Electronics and Electrical Communications Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt
- Mohamed R. Shoaib: Department of Electronics and Electrical Communications Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt
- Mohamed Elwekeil: Department of Electronics and Electrical Communications Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt
- Walid El-Shafai: Department of Electronics and Electrical Communications Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt; Security Engineering Lab, Computer Science Department, Prince Sultan University, Riyadh, Saudi Arabia
- Taha E. Taha: Department of Electronics and Electrical Communications Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt
- Adel S. El-Fishawy: Department of Electronics and Electrical Communications Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt
- El-Sayed M. El-Rabaie: Department of Electronics and Electrical Communications Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt
- Saleh A. Alshebeili: Electrical Engineering Department, KACST-TIC in Radio Frequency and Photonics for the e-Society (RFTONICS), King Saud University, Riyadh, Saudi Arabia; Department of Electrical Engineering, King Saud University, Riyadh, Saudi Arabia
- Moawad I. Dessouky: Department of Electronics and Electrical Communications Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt
- Fathi E. Abd El-Samie: Department of Electronics and Electrical Communications Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt; Department of Information Technology, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia
31
Surianarayanan C, Chelliah PR. Leveraging Artificial Intelligence (AI) Capabilities for COVID-19 Containment. New Generation Computing 2021; 39:717-741. [PMID: 34131359] [PMCID: PMC8191724] [DOI: 10.1007/s00354-021-00128-0]
Abstract
Coronavirus disease (COVID-19) is an infectious disease caused by the newly discovered Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2). Most people do not have acquired immunity to fight this virus, and there is no specific treatment or medicine to cure the disease. Its effects appear to vary from individual to individual, from mild cough and fever to respiratory disease, and it leads to mortality in many people. As the virus has a very rapid transmission rate, the entire world is in distress, and the control and prevention of this disease have become an urgent and critical issue to be addressed through technological solutions. The healthcare industry therefore needs support from the domain of artificial intelligence (AI). AI has the inherent capability of imitating the human brain and assisting decision-making by automatically learning from input data; it can process huge amounts of data quickly without tiring or making errors. AI technologies and tools significantly relieve the burden on healthcare professionals. In this paper, we review the critical role of AI in responding to different research challenges around the COVID-19 crisis. A sample implementation of a probabilistic machine learning (ML) algorithm for assessing the risk levels of individuals is included. Other pertinent application areas such as surveillance of people and hotspots, mortality prediction, diagnosis, prognostic assistance, drug repurposing, discovery of protein structure, and vaccine development are also presented. The paper further describes various challenges associated with implementing AI-based tools and solutions for practical use.
Affiliation(s)
- Chellammal Surianarayanan: Government Arts and Science College (formerly Bharathidasan University Constituent Arts and Science College), affiliated to Bharathidasan University, Tiruchirappalli, Tamil Nadu, India
- Pethuru Raj Chelliah: Site Reliability Engineering Division, Reliance Jio Platforms Ltd, Bangalore, India
32
Çallı E, Sogancioglu E, van Ginneken B, van Leeuwen KG, Murphy K. Deep learning for chest X-ray analysis: A survey. Med Image Anal 2021; 72:102125. [PMID: 34171622] [DOI: 10.1016/j.media.2021.102125]
Abstract
Recent advances in deep learning have led to a promising performance in many medical image analysis tasks. As the most commonly performed radiological exam, chest radiographs are a particularly important modality for which a variety of applications have been researched. The release of multiple, large, publicly available chest X-ray datasets in recent years has encouraged research interest and boosted the number of publications. In this paper, we review all studies using deep learning on chest radiographs published before March 2021, categorizing works by task: image-level prediction (classification and regression), segmentation, localization, image generation and domain adaptation. Detailed descriptions of all publicly available datasets are included and commercial systems in the field are described. A comprehensive discussion of the current state of the art is provided, including caveats on the use of public datasets, the requirements of clinically useful systems and gaps in the current literature.
Affiliation(s)
- Erdi Çallı: Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands
- Ecem Sogancioglu: Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands
- Bram van Ginneken: Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands
- Kicky G van Leeuwen: Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands
- Keelin Murphy: Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands
33
Panetta K, Sanghavi F, Agaian S, Madan N. Automated Detection of COVID-19 Cases on Radiographs using Shape-Dependent Fibonacci-p Patterns. IEEE J Biomed Health Inform 2021; 25:1852-1863. [PMID: 33788696] [PMCID: PMC8768975] [DOI: 10.1109/jbhi.2021.3069798]
Abstract
The coronavirus (COVID-19) pandemic has been adversely affecting people's health globally. To diminish the effect of this widespread pandemic, it is essential to detect COVID-19 cases as quickly as possible. Chest radiographs are less expensive and more widely available than CT images for detecting chest pathology, and they play a vital role in early prediction and in developing treatment plans for patients with suspected or confirmed COVID-19 chest infection. In this paper, a novel shape-dependent Fibonacci-p-pattern-based feature descriptor using a machine learning approach is proposed. Computer simulations show that the presented system (1) increases the effectiveness of differentiating COVID-19, viral pneumonia, and normal conditions, (2) is effective on small datasets, and (3) has faster inference time than deep learning methods of comparable performance. Computer simulations are performed on two publicly available datasets: (a) the Kaggle dataset and (b) the COVIDGR dataset. To assess the performance of the presented system, evaluation parameters such as accuracy, recall, specificity, precision, and F1-score are used. Nearly 100% differentiation between normal and COVID-19 radiographs is observed for the three-class classification scheme using the lung-area-specific Kaggle radiographs, while a recall of 72.65 ± 6.83 and a specificity of 77.72 ± 8.06 are observed for the COVIDGR dataset.
34
A Review of Explainable Deep Learning Cancer Detection Models in Medical Imaging. Applied Sciences (Basel) 2021. [DOI: 10.3390/app11104573]
Abstract
Deep learning has demonstrated remarkable accuracy in analyzing images for cancer detection tasks in recent years, rivaling radiologists and making implementation as a clinical tool plausible. However, a significant problem is that these models are black-box algorithms and are therefore intrinsically unexplainable. This creates a barrier to clinical implementation due to the lack of trust and transparency characteristic of black-box algorithms. Additionally, recent regulations prevent the deployment of unexplainable models in clinical settings, which further demonstrates the need for explainability. To mitigate these concerns, recent studies attempt to overcome these issues by modifying deep learning architectures or providing after-the-fact explanations. A review of the deep learning explanation literature focused on cancer detection using MR images is presented here. The gap between what clinicians deem explainable and what current methods provide is discussed, and future suggestions to close this gap are offered.
35
Peters AA, Decasper A, Munz J, Klaus J, Loebelenz LI, Hoffner MKM, Hourscht C, Heverhagen JT, Christe A, Ebner L. Performance of an AI based CAD system in solid lung nodule detection on chest phantom radiographs compared to radiology residents and fellow radiologists. J Thorac Dis 2021; 13:2728-2737. [PMID: 34164165] [PMCID: PMC8182550] [DOI: 10.21037/jtd-20-3522]
Abstract
Background: Despite the decreasing relevance of chest radiography in lung cancer screening, it is still frequently used to assess for lung nodules. The aim of the current study was to determine the accuracy of a commercial AI-based CAD system for detecting artificial lung nodules on chest radiograph phantoms and to compare its performance to that of radiologists in training. Methods: Sixty-one anthropomorphic lung phantoms were equipped with 140 randomly deployed artificial lung nodules (5, 8, 10, and 12 mm). A random generator chose nodule size and distribution before a two-plane chest X-ray (CXR) of each phantom was performed. Seven blinded radiologists in training (2 fellows, 5 residents) with 2 to 5 years of experience in chest imaging read the CXRs independently on a PACS workstation; the software's results were recorded separately. The McNemar test was used to compare each radiologist's results to the AI computer-aided diagnosis (CAD) software in per-nodule and per-phantom approaches, and Fleiss' kappa was applied for inter-rater and intra-observer agreement. Results: Five of the seven readers showed significantly higher accuracy than the AI algorithm. The pooled accuracies of the radiologists in the nodule-based and phantom-based approaches were 0.59 and 0.82, respectively, versus 0.47 and 0.67 for the AI-CAD. The radiologists' average sensitivity was 0.80 for 10 and 12 mm nodules and dropped to 0.66 for 8 mm (P=0.04) and 0.14 for 5 mm nodules (P<0.001). Both the radiologists and the algorithm demonstrated significantly higher sensitivity for peripheral than for central nodules (0.66 vs. 0.48, P=0.004 and 0.64 vs. 0.094, P=0.025, respectively). Inter-rater agreement was moderate among the radiologists and between radiologists and the AI-CAD software (K'=0.58±0.13 and 0.51±0.1). Intra-observer agreement was calculated for two readers and was almost perfect for the phantom-based approach (K'=0.85±0.05; K'=0.80±0.02) and substantial to almost perfect for the nodule-based approach (K'=0.83±0.02; K'=0.78±0.02). Conclusions: As a primary reader, the AI-based CAD system performed worse than radiologists at lung nodule detection in chest phantoms. Chest radiography has reasonable accuracy in lung nodule detection when read by a radiologist alone and may be further optimized by an AI-based CAD system as a second reader.
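Agreement statistics such as the kappa values reported above compare observed agreement against the agreement expected by chance. A minimal sketch of the two-rater (Cohen's) variant on hypothetical binary nodule calls (the study itself applies Fleiss' kappa, which generalizes this to multiple raters):

```python
# Cohen's kappa for two readers' binary nodule calls:
# kappa = (p_observed - p_expected) / (1 - p_expected).

def cohens_kappa(a, b):
    labels = set(a) | set(b)
    n = len(a)
    # Fraction of items on which the two raters agree.
    p_obs = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement from each rater's marginal label frequencies.
    p_exp = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical per-nodule detections (1 = detected) by two readers.
r1 = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
r2 = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]
print(round(cohens_kappa(r1, r2), 3))
```

Values around 0.41-0.60 are conventionally read as "moderate" agreement, matching the interpretation used in the abstract.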
Affiliation(s)
- Alan A Peters: Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Amanda Decasper: Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Jaro Munz: Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Jeremias Klaus: Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Laura I Loebelenz: Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Maximilian Korbinian Michael Hoffner: Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Cynthia Hourscht: Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Johannes T Heverhagen: Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland; Department of BioMedical Research, Experimental Radiology, University of Bern, Bern, Switzerland; Department of Radiology, The Ohio State University, Columbus, OH, USA
- Andreas Christe: Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Lukas Ebner: Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
36
Umar Ibrahim A, Ozsoz M, Serte S, Al-Turjman F, Habeeb Kolapo S. Convolutional neural network for diagnosis of viral pneumonia and COVID-19 alike diseases. Expert Systems 2021; 39:e12705. [PMID: 34177037] [PMCID: PMC8209916] [DOI: 10.1111/exsy.12705]
Abstract
The reverse-transcription polymerase chain reaction (RT-PCR) method is currently the gold standard for detecting viral strains in human samples, but this technique is expensive, takes time, and often leads to misdiagnosis. The recent outbreak of COVID-19 has led scientists to explore other options, such as artificial-intelligence-driven tools, as an alternative or confirmatory approach for detecting viral pneumonia. In this paper, we used a Convolutional Neural Network (CNN) to detect viral pneumonia in X-ray images with a pretrained AlexNet model, thereby adopting a transfer learning approach. The dataset used for the study consists of optical coherence tomography and chest X-ray images made available by Kermany et al. (2018, https://doi.org/10.17632/rscbjbr9sj.3), with a total of 5853 pneumonia (positive) and normal (negative) images. To evaluate the average efficiency of the model, the dataset was split 50:50, 60:40, 70:30, 80:20, and 90:10 for training and testing, respectively, and 10-fold cross-validation was carried out. The performance of the model on the overall dataset was compared with the cross-validation means and with the current state of the art. The classification model showed high performance in terms of accuracy, sensitivity, and specificity; the 70:30 split performed best, with an accuracy of 98.73%, a sensitivity of 98.59%, and a specificity of 99.84%.
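K-fold cross-validation, as used above to estimate average model performance, partitions the data so that every sample is held out for testing exactly once. A minimal, illustrative sketch of the index bookkeeping (the sample and fold counts are arbitrary):

```python
# Plain k-fold cross-validation: partition sample indices into k folds,
# hold each fold out once for testing and train on the rest.

def k_fold_indices(n_samples, k):
    # Round-robin assignment gives folds of near-equal size.
    folds = [list(range(n_samples))[i::k] for i in range(k)]
    splits = []
    for i in range(k):
        test = folds[i]
        train = [idx for j, f in enumerate(folds) if j != i for idx in f]
        splits.append((train, test))
    return splits

for train, test in k_fold_indices(10, 5):
    print(len(train), len(test))
```

A model would be retrained on each `train` set and scored on the matching `test` set, and the k scores averaged, which is the "mean of cross-validation" compared against the overall result in the abstract.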
Affiliation(s)
- Mehmet Ozsoz: Department of Biomedical Engineering, Near East University, Nicosia, Mersin 10, Turkey
- Sertan Serte: Department of Electrical Engineering, Near East University, Nicosia, Mersin 10, Turkey
- Fadi Al-Turjman: Department of Artificial Intelligence, Research Center for AI and IoT, Near East University, Nicosia, Mersin 10, Turkey
37
Salehi M, Mohammadi R, Ghaffari H, Sadighi N, Reiazi R. Automated detection of pneumonia cases using deep transfer learning with paediatric chest X-ray images. Br J Radiol 2021; 94:20201263. [PMID: 33861150] [DOI: 10.1259/bjr.20201263]
Abstract
OBJECTIVE Pneumonia is a lung infection that causes inflammation of the small air sacs (alveoli) in one or both lungs. Proper and fast diagnosis of pneumonia at an early stage is imperative for optimal patient care. Currently, chest X-ray is considered the best imaging modality for diagnosing pneumonia, but the interpretation of chest X-ray images is challenging. To this end, we aimed to use an automated convolutional neural network-based transfer-learning approach to detect pneumonia in paediatric chest radiographs. METHODS An automated convolutional neural network-based transfer-learning approach using four different pre-trained models (VGG19, DenseNet121, Xception, and ResNet50) was applied to detect pneumonia in chest X-ray images of children aged 1-5 years. The performance of the proposed models on the test set was evaluated using five performance metrics: accuracy, sensitivity/recall, precision, area under the curve (AUC), and F1 score. RESULTS All proposed models achieved accuracy greater than 83.0% for binary classification. The pre-trained DenseNet121 model provided the highest classification performance, with 86.8% accuracy, followed by the Xception model with 86.0%. The sensitivity of the proposed models was greater than 91.0%. The Xception and DenseNet121 models achieved the highest classification performance, with F1 scores greater than 89.0%. The areas under the receiver operating characteristic curves of the VGG19, Xception, ResNet50, and DenseNet121 models were 0.78, 0.81, 0.81, and 0.86, respectively. CONCLUSION Our data show that the proposed models achieve high accuracy for binary classification. Transfer learning was used to accelerate training and to mitigate the problem of insufficient data. We hope these models can help radiologists make quick diagnoses of pneumonia in radiology departments.
Moreover, our proposed models may be useful for detecting other chest diseases, such as coronavirus disease 2019 (COVID-19). ADVANCES IN KNOWLEDGE We used transfer learning to accelerate training of the proposed models and to mitigate the problem of insufficient data; the proposed models achieved accuracy greater than 83.0% for binary classification.
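The five metrics reported above are simple functions of the binary confusion matrix. As an illustration (not the study's code or data), a minimal numpy sketch with toy labels:

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity/recall, precision, and F1 from binary labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn)          # also called recall
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, precision, f1

# toy example: 8 radiographs, 1 = pneumonia, 0 = normal
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 0]
acc, sens, prec, f1 = binary_metrics(y_true, y_pred)
```

AUC is the remaining metric and requires continuous scores rather than hard labels, which is why it is omitted from this confusion-matrix sketch.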
Affiliation(s)
- Mohammad Salehi: Department of Medical Physics, School of Medicine, Iran University of Medical Sciences, Tehran, Iran; Medical Image and Signal Processing Research Core, Iran University of Medical Sciences, Tehran, Iran
- Reza Mohammadi: Department of Medical Physics, School of Medicine, Iran University of Medical Sciences, Tehran, Iran; Medical Image and Signal Processing Research Core, Iran University of Medical Sciences, Tehran, Iran
- Hamed Ghaffari: Department of Medical Physics, School of Medicine, Iran University of Medical Sciences, Tehran, Iran
- Nahid Sadighi: Advanced Diagnostic & Interventional Radiology Research Center (ADIR), Tehran University of Medical Sciences (TUMS), Tehran, Iran
- Reza Reiazi: Department of Medical Physics, School of Medicine, Iran University of Medical Sciences, Tehran, Iran; Medical Image and Signal Processing Research Core, Iran University of Medical Sciences, Tehran, Iran; Princess Margaret Cancer Research Center, University Health Network, Toronto, Canada

38
Rueckel J, Huemmer C, Fieselmann A, Ghesu FC, Mansoor A, Schachtner B, Wesp P, Trappmann L, Munawwar B, Ricke J, Ingrisch M, Sabel BO. Pneumothorax detection in chest radiographs: optimizing artificial intelligence system for accuracy and confounding bias reduction using in-image annotations in algorithm training. Eur Radiol 2021; 31:7888-7900. [PMID: 33774722] [PMCID: PMC8452588] [DOI: 10.1007/s00330-021-07833-w]
Abstract
OBJECTIVES Diagnostic accuracy of artificial intelligence (AI) pneumothorax (PTX) detection in chest radiographs (CXR) is limited by the noisy annotation quality of public training data and by confounding thoracic tubes (TT). We hypothesized that in-image annotations of the dehiscent visceral pleura during algorithm training boost the algorithm's performance and suppress confounders. METHODS Our single-center evaluation cohort of 3062 supine CXRs includes 760 PTX-positive cases with radiological annotations of PTX size and inserted TTs. Three incrementally improved algorithms (differing in architecture, training data from public datasets/clinical sites, and in-image annotations included in training) were characterized by the area under the receiver operating characteristic curve (AUROC) in detailed subgroup analyses and referenced against the well-established "CheXNet" algorithm. RESULTS The performance of established algorithms trained exclusively on publicly available data without in-image annotations is limited to an AUROC of 0.778 and is strongly biased by TTs, which can completely eliminate the algorithm's discriminative power in individual subgroups. In contrast, our final "algorithm 2", trained on fewer images but with additional in-image annotations of the dehiscent pleura, achieved an overall AUROC of 0.877 for unilateral PTX detection with a significantly reduced TT-related confounding bias. CONCLUSIONS We demonstrated strong limitations of an established PTX-detecting AI algorithm that can be significantly reduced by designing an AI system capable of learning to both classify and localize PTX. Our results draw attention to the necessity of high-quality in-image localization in training data to reduce the risk of unintentionally biasing the training of pathology-detecting AI algorithms.
KEY POINTS • Established pneumothorax-detecting artificial intelligence algorithms trained on public training data are strongly limited and biased by confounding thoracic tubes. • We used high-quality in-image annotated training data to effectively boost algorithm performance and suppress the impact of confounding thoracic tubes. • Based on our results, we hypothesize that even hidden confounders might be effectively addressed by in-image annotations of pathology-related image features.
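The subgroup analysis described above can be reproduced in miniature: compute a rank-based AUROC overall and separately within thoracic-tube subgroups. The scores and flags below are invented to illustrate how a confounder can erase discriminative power in one subgroup; they are not the study's data:

```python
import numpy as np

def auroc(scores, labels):
    """Rank-based AUROC: probability a random positive outranks a random negative."""
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    pos, neg = scores[labels == 1], scores[labels == 0]
    # count pairwise wins; ties count half
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

# toy cohort: algorithm score, PTX label, and thoracic-tube flag per image
scores = np.array([0.9, 0.8, 0.3, 0.6, 0.4, 0.3, 0.7, 0.2])
ptx    = np.array([1,   1,   1,   1,   0,   0,   0,   0])
tube   = np.array([1,   0,   1,   0,   1,   0,   1,   0])

overall      = auroc(scores, ptx)
with_tube    = auroc(scores[tube == 1], ptx[tube == 1])
without_tube = auroc(scores[tube == 0], ptx[tube == 0])
```

In this toy cohort the overall AUROC looks respectable, yet within the tube subgroup the score is no better than chance, which is exactly the kind of hidden bias a subgroup analysis exposes.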
Affiliation(s)
- Johannes Rueckel: Department of Radiology, University Hospital, LMU Munich, Marchioninistr. 15, 81377, Munich, Germany
- Awais Mansoor: Digital Technology and Innovation, Siemens Healthineers, Princeton, NJ, USA
- Balthasar Schachtner: Department of Radiology, University Hospital, LMU Munich, Marchioninistr. 15, 81377, Munich, Germany; Comprehensive Pneumology Center (CPC-M), Member of the German Center for Lung Research (DZL), Munich, Germany
- Philipp Wesp: Department of Radiology, University Hospital, LMU Munich, Marchioninistr. 15, 81377, Munich, Germany
- Lena Trappmann: Department of Radiology, University Hospital, LMU Munich, Marchioninistr. 15, 81377, Munich, Germany
- Basel Munawwar: Department of Radiology, University Hospital, LMU Munich, Marchioninistr. 15, 81377, Munich, Germany
- Jens Ricke: Department of Radiology, University Hospital, LMU Munich, Marchioninistr. 15, 81377, Munich, Germany
- Michael Ingrisch: Department of Radiology, University Hospital, LMU Munich, Marchioninistr. 15, 81377, Munich, Germany
- Bastian O Sabel: Department of Radiology, University Hospital, LMU Munich, Marchioninistr. 15, 81377, Munich, Germany

39
Ghaderzadeh M, Asadi F. Deep Learning in the Detection and Diagnosis of COVID-19 Using Radiology Modalities: A Systematic Review. J Healthc Eng 2021; 2021:6677314. [PMID: 33747419] [PMCID: PMC7958142] [DOI: 10.1155/2021/6677314]
Abstract
Introduction The early detection and diagnosis of COVID-19 and the accurate separation of non-COVID-19 cases at the lowest cost and in the early stages of the disease are among the main challenges of the current COVID-19 pandemic. Given the novelty of the disease, diagnostic methods based on radiological images still suffer from shortcomings despite their many applications in diagnostic centers. Accordingly, medical and computer researchers have turned to machine-learning models to analyze radiology images. Materials and Methods The present systematic review was conducted by searching the PubMed, Scopus, and Web of Science databases from November 1, 2019, to July 20, 2020, using a predefined search strategy. A total of 168 articles were retrieved and, after applying the inclusion and exclusion criteria, 37 articles were selected as the study population. Results This review provides an overview of the current state of deep learning-based models for the detection and diagnosis of COVID-19 using radiology modalities. According to the findings, deep learning-based models have an extraordinary capacity to offer accurate and efficient detection and diagnosis of COVID-19, and their use in processing these modalities leads to significant increases in sensitivity and specificity. Conclusion The application of deep learning to COVID-19 radiologic image processing reduces false-positive and false-negative errors in the detection and diagnosis of this disease and offers a unique opportunity to provide fast, cheap, and safe diagnostic services to patients.
Affiliation(s)
- Mustafa Ghaderzadeh: Student Research Committee, Department and Faculty of Health Information Technology and Management, School of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Farkhondeh Asadi: Department of Health Information Technology and Management, School of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran

40
Zhong A, Li X, Wu D, Ren H, Kim K, Kim Y, Buch V, Neumark N, Bizzo B, Tak WY, Park SY, Lee YR, Kang MK, Park JG, Kim BS, Chung WJ, Guo N, Dayan I, Kalra MK, Li Q. Deep metric learning-based image retrieval system for chest radiograph and its clinical applications in COVID-19. Med Image Anal 2021; 70:101993. [PMID: 33711739] [PMCID: PMC8032481] [DOI: 10.1016/j.media.2021.101993]
Abstract
In recent years, deep learning-based image analysis methods have been widely applied in computer-aided detection, diagnosis, and prognosis, and have shown their value during the public health crisis of the novel coronavirus disease 2019 (COVID-19) pandemic. The chest radiograph (CXR) has played a crucial role in the triage, diagnosis, and monitoring of COVID-19 patients, particularly in the United States. Considering the mixed and nonspecific signals in CXR, an image retrieval model that provides both similar images and the associated clinical information can be more clinically meaningful than a direct image diagnostic model. In this work we develop a novel CXR image retrieval model based on deep metric learning. Unlike traditional diagnostic models, which learn a direct mapping from images to labels, the proposed model learns an optimized embedding space of images, in which images with the same labels and similar contents are pulled together. The model uses a multi-similarity loss with a hard-mining sampling strategy and an attention mechanism to learn the embedding space, and provides similar images, visualizations of disease-related attention maps, and useful clinical information to assist clinical decisions. The model is trained and validated on an international multi-site COVID-19 dataset collected from three different sources. Experimental results on COVID-19 image retrieval and diagnosis tasks show that the proposed model can serve as a robust solution for CXR analysis and patient management in COVID-19.
The model is also tested for transferability on a different clinical decision support task for COVID-19, in which the pre-trained model is applied to extract image features from a new dataset without any further training. The extracted features are then combined with COVID-19 patients' vital signs, lab tests, and medical histories to predict the likelihood of airway intubation within 72 hours, which is strongly associated with patient prognosis and is crucial for patient care and hospital resource planning. These results demonstrate that our deep metric learning-based image retrieval model is highly effective for CXR retrieval, diagnosis, and prognosis, and thus has great clinical value for the treatment and management of COVID-19 patients.
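Once an embedding space has been learned, retrieval at inference time reduces to nearest-neighbor search under cosine similarity. A toy numpy sketch with random stand-in embeddings (not the paper's model or data):

```python
import numpy as np

def retrieve(query, gallery, labels, k=3):
    """Return indices and labels of the k gallery embeddings most cosine-similar to the query."""
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = g @ q                      # cosine similarity to every gallery item
    top = np.argsort(-sims)[:k]       # indices of the k best matches
    return top, [labels[i] for i in top]

rng = np.random.default_rng(0)
# toy 16-dim embeddings: two well-separated clusters standing in for
# "COVID-19" and "other pneumonia" cases in the learned space
covid = rng.normal(loc=3.0, size=(5, 16))
other = rng.normal(loc=-3.0, size=(5, 16))
gallery = np.vstack([covid, other])
labels = ["COVID-19"] * 5 + ["other"] * 5

query = rng.normal(loc=3.0, size=16)  # an unseen COVID-like case
idx, lab = retrieve(query, gallery, labels, k=3)
```

In a real system the retrieved neighbors would be shown to the clinician together with their labels and clinical records, which is the retrieval-as-decision-support idea of the paper.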
Affiliation(s)
- Aoxiao Zhong: Department of Radiology, Massachusetts General Hospital, Boston, MA, United States; School of Engineering and Applied Sciences, Harvard University, Boston, MA, United States
- Xiang Li: Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Dufan Wu: Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Hui Ren: Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Kyungsang Kim: Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Younggon Kim: Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Varun Buch: MGH & BWH Center for Clinical Data Science, Boston, MA, United States
- Nir Neumark: MGH & BWH Center for Clinical Data Science, Boston, MA, United States
- Bernardo Bizzo: MGH & BWH Center for Clinical Data Science, Boston, MA, United States
- Won Young Tak: Department of Internal Medicine, School of Medicine, Kyungpook National University, Daegu, South Korea
- Soo Young Park: Department of Internal Medicine, School of Medicine, Kyungpook National University, Daegu, South Korea
- Yu Rim Lee: Department of Internal Medicine, School of Medicine, Kyungpook National University, Daegu, South Korea
- Min Kyu Kang: Department of Internal Medicine, Yeungnam University College of Medicine, Daegu, South Korea
- Jung Gil Park: Department of Internal Medicine, Yeungnam University College of Medicine, Daegu, South Korea
- Byung Seok Kim: Department of Internal Medicine, Catholic University of Daegu School of Medicine, Daegu, South Korea
- Woo Jin Chung: Department of Internal Medicine, Keimyung University School of Medicine, Daegu, South Korea
- Ning Guo: Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Ittai Dayan: School of Engineering and Applied Sciences, Harvard University, Boston, MA, United States
- Mannudeep K Kalra: Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Quanzheng Li: Department of Radiology, Massachusetts General Hospital, Boston, MA, United States; MGH & BWH Center for Clinical Data Science, Boston, MA, United States

41
Attention-Based Transfer Learning for Efficient Pneumonia Detection in Chest X-ray Images. Appl Sci (Basel) 2021. [DOI: 10.3390/app11031242]
Abstract
Pneumonia is a form of acute respiratory infection commonly caused by bacteria, viruses, and fungi, and can prove fatal at any age. Chest X-ray is the most common technique for diagnosing pneumonia. There have been several attempts to apply transfer learning based on convolutional neural networks to build stable models for computer-aided diagnosis. Recently, attention mechanisms, which automatically focus on the parts of an image most critical for diagnosing disease, have made it possible to improve on previous models. The goal of this study is to improve the accuracy of a computer-aided diagnostic approach that medical professionals can easily use as an auxiliary tool. In this paper, we propose an attention-based transfer learning framework for efficient pneumonia detection in chest X-ray images. We collected features from three types of pre-trained models, ResNet152, DenseNet121, and ResNet18, serving as feature extractors. We redefined the classifier for the new task and applied an attention mechanism as a feature selector. As a result, the proposed approach achieved accuracy, F-score, area under the curve (AUC), precision, and recall of 96.63%, 0.973, 96.03%, 96.23%, and 98.46%, respectively.
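The "feature selector" idea, scoring feature vectors from several pre-trained extractors and combining them with softmax attention weights, can be sketched in a few lines. Everything below (equal feature dimensions, a random scoring vector, random stand-in features) is a simplifying assumption for illustration, not the paper's implementation:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_fuse(features, w):
    """Score each extractor's feature vector, softmax the scores into
    attention weights, and return the weighted sum as the fused feature."""
    scores = np.array([f @ w for f in features])  # one relevance score per extractor
    alpha = softmax(scores)                       # attention weights, sum to 1
    fused = sum(a * f for a, f in zip(alpha, features))
    return fused, alpha

rng = np.random.default_rng(1)
dim = 8
# stand-ins for pooled features from three extractors (e.g. ResNet152,
# DenseNet121, ResNet18), all projected to a common dimension here
features = [rng.normal(size=dim) for _ in range(3)]
w = rng.normal(size=dim)  # scoring vector; learned in practice, random here

fused, alpha = attention_fuse(features, w)
```

The fused vector would then feed the redefined classifier head; in training, the scoring vector and head are learned jointly so that the attention weights emphasize the most informative extractor per image.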
42
Ibrahim AU, Ozsoz M, Serte S, Al-Turjman F, Yakoi PS. Pneumonia Classification Using Deep Learning from Chest X-ray Images During COVID-19. Cognit Comput 2021:1-13. [PMID: 33425044] [PMCID: PMC7781428] [DOI: 10.1007/s12559-020-09787-5]
Abstract
The outbreak of the novel coronavirus disease (COVID-19) in December 2019 has led to a global crisis. The disease was declared a pandemic by the World Health Organization (WHO) on 11 March 2020. The outbreak has affected more than 200 countries, with more than 37 million confirmed cases and more than 1 million deaths as of 10 October 2020. Reverse-transcription polymerase chain reaction (RT-PCR) is the standard method for detecting COVID-19, but it has many challenges, including false positives, low sensitivity, high cost, and the need for experts to conduct the test. As the number of cases continues to grow, there is a pressing need for a rapid screening method that is accurate, fast, and cheap. Chest X-ray (CXR) images can serve as an alternative or confirmatory approach, as they are fast to obtain and easily accessible. Though the literature reports a number of approaches to classify CXR images and detect COVID-19 infections, the majority of these approaches recognize only two classes (e.g., COVID-19 vs. normal). However, well-developed models are needed that can distinguish a wider range of CXR classes beyond COVID-19, such as bacterial pneumonia, non-COVID-19 viral pneumonia, and normal scans. The current work proposes a deep learning approach based on a pretrained AlexNet model for the classification of COVID-19, non-COVID-19 viral pneumonia, bacterial pneumonia, and normal CXR scans obtained from different public databases. The model was trained to perform two-way classification (i.e., COVID-19 vs. normal, bacterial pneumonia vs. normal, non-COVID-19 viral pneumonia vs. normal, and COVID-19 vs. bacterial pneumonia), three-way classification (i.e., COVID-19 vs. bacterial pneumonia vs. normal), and four-way classification (i.e., COVID-19 vs. bacterial pneumonia vs. non-COVID-19 viral pneumonia vs. normal).
For non-COVID-19 viral pneumonia and normal (healthy) CXR images, the proposed model achieved 94.43% accuracy, 98.19% sensitivity, and 95.78% specificity. For bacterial pneumonia and normal CXR images, the model achieved 91.43% accuracy, 91.94% sensitivity, and 100% specificity. For COVID-19 pneumonia and normal CXR images, the model achieved 99.16% accuracy, 97.44% sensitivity, and 100% specificity. For classifying CXR images of COVID-19 pneumonia and non-COVID-19 viral pneumonia, the model achieved 99.62% accuracy, 90.63% sensitivity, and 99.89% specificity. For the three-way classification, the model achieved 94.00% accuracy, 91.30% sensitivity, and 84.78% specificity. Finally, for the four-way classification, the model achieved an accuracy of 93.42%, sensitivity of 89.18%, and specificity of 98.92%.
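Multi-way sensitivity and specificity figures like those above are typically computed one-vs-rest: each class in turn is treated as positive and all remaining classes as negative. A small sketch with invented labels (not the study's data):

```python
import numpy as np

def ovr_sensitivity_specificity(y_true, y_pred, classes):
    """Per-class one-vs-rest (sensitivity, specificity) for a multi-way classifier."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    out = {}
    for c in classes:
        tp = np.sum((y_true == c) & (y_pred == c))
        fn = np.sum((y_true == c) & (y_pred != c))
        tn = np.sum((y_true != c) & (y_pred != c))
        fp = np.sum((y_true != c) & (y_pred == c))
        out[c] = (tp / (tp + fn), tn / (tn + fp))
    return out

classes = ["covid", "bacterial", "viral", "normal"]
y_true = ["covid", "covid", "bacterial", "bacterial", "viral", "viral", "normal", "normal"]
y_pred = ["covid", "covid", "bacterial", "viral",     "viral", "viral", "normal", "covid"]
stats = ovr_sensitivity_specificity(y_true, y_pred, classes)
```

Averaging the per-class values (macro-averaging) then yields single sensitivity and specificity numbers for the whole multi-way task.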
Affiliation(s)
- Mehmet Ozsoz: Department of Biomedical Engineering, Near East University, Nicosia, Mersin 10, Turkey
- Sertan Serte: Department of Electrical Engineering, Near East University, Nicosia, Mersin 10, Turkey
- Fadi Al-Turjman: Department of Artificial Intelligence, Research Center for AI and IoT, Near East University, Nicosia, Mersin 10, Turkey

43
Guefrechi S, Jabra MB, Ammar A, Koubaa A, Hamam H. Deep learning based detection of COVID-19 from chest X-ray images. Multimed Tools Appl 2021; 80:31803-31820. [PMID: 34305440] [PMCID: PMC8286881] [DOI: 10.1007/s11042-021-11192-5]
Abstract
The whole world is facing a health crisis that is unique in kind due to the COVID-19 pandemic. As the coronavirus continues to spread, researchers are working to provide solutions that save lives and stop the outbreak. Among other technologies, artificial intelligence (AI) has been adapted to address the challenges caused by the pandemic. In this article, we design a deep learning system to extract features and detect COVID-19 from chest X-ray images. Three powerful networks, namely ResNet50, InceptionV3, and VGG16, were fine-tuned on an enhanced dataset constructed by collecting COVID-19 and normal chest X-ray images from different public databases. We applied data augmentation techniques to artificially generate a large number of chest X-ray images: random rotation by an angle between -10 and 10 degrees, random noise, and horizontal flips. Experimental results are encouraging: the proposed models reached an accuracy of 97.20% for ResNet50, 98.10% for InceptionV3, and 98.30% for VGG16 in classifying chest X-ray images as normal or COVID-19. The results show that transfer learning is effective, yielding strong performance and easy-to-deploy COVID-19 detection methods. This enables automating the analysis of X-ray images with high accuracy, which is also useful where materials and RT-PCR tests are limited.
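The augmentation pipeline described (rotation within a -10 to 10 degree range, random noise, horizontal flips) can be sketched without a deep learning framework; the nearest-neighbour rotation below is a simplified stand-in for the interpolating rotations a library would apply, and the image and parameters are invented:

```python
import numpy as np

def rotate_nn(img, deg):
    """Rotate a 2-D image about its centre by `deg` degrees (nearest-neighbour sampling)."""
    h, w = img.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    t = np.deg2rad(deg)
    ys, xs = np.mgrid[0:h, 0:w]
    # inverse mapping: for each output pixel, find the source pixel to sample
    sy = cy + (ys - cy) * np.cos(t) - (xs - cx) * np.sin(t)
    sx = cx + (ys - cy) * np.sin(t) + (xs - cx) * np.cos(t)
    sy = np.clip(np.rint(sy), 0, h - 1).astype(int)
    sx = np.clip(np.rint(sx), 0, w - 1).astype(int)
    return img[sy, sx]

def augment(img, rng):
    """One augmentation pass: rotation in [-10, 10] deg, Gaussian noise, maybe a flip."""
    out = rotate_nn(img, rng.uniform(-10, 10))
    out = out + rng.normal(scale=0.01, size=out.shape)
    if rng.random() < 0.5:
        out = np.fliplr(out)
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(42)
img = rng.random((64, 64))  # stand-in for a normalized chest X-ray
batch = [augment(img, rng) for _ in range(4)]
```

Each pass produces a slightly different training image from the same radiograph, which is how augmentation multiplies a small dataset.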
Affiliation(s)
- Sarra Guefrechi: Faculty of Engineering, University of Moncton, Moncton, NB, Canada
- Marwa Ben Jabra: Charisma University, British Overseas Territories, Englewood, UK; Robotics and Internet-of-Things Unit (RIoT) Lab, Riyadh, Saudi Arabia
- Adel Ammar: Prince Sultan University, Riyadh, Saudi Arabia
- Anis Koubaa: Prince Sultan University, Riyadh, Saudi Arabia; Gaitech Robotics, Shanghai, China; INESC-TEC, ISEP, Polytechnic Institute of Porto, Porto, Portugal
- Habib Hamam: Faculty of Engineering, University of Moncton, Moncton, NB, Canada

44
Kumaraguru T, Abirami P, Darshan K, Angeline Kirubha S, Latha S, Muthu P. Smart access development for classifying lung disease with chest x-ray images using deep learning. Mater Today Proc 2021; 47:76-79. [PMID: 33880332] [PMCID: PMC8049837] [DOI: 10.1016/j.matpr.2021.03.650]
45
Dorr F, Chaves H, Serra MM, Ramirez A, Costa ME, Seia J, Cejas C, Castro M, Eyheremendy E, Fernández Slezak D, Farez MF. COVID-19 pneumonia accurately detected on chest radiographs with artificial intelligence. Intell Based Med 2020; 3:100014. [PMID: 33230503] [PMCID: PMC7674009] [DOI: 10.1016/j.ibmed.2020.100014]
Abstract
Purpose To investigate the diagnostic performance of an artificial intelligence (AI) system for detecting COVID-19 in chest radiographs (CXR), and to compare the results with those of physicians working alone or with AI support. Materials and methods An AI system was fine-tuned to discriminate confirmed COVID-19 pneumonia from other viral and bacterial pneumonia and non-pneumonia patients, and was used to review 302 CXR images from adult patients retrospectively sourced from nine different databases. Fifty-four physicians, blinded to diagnosis, were invited to interpret images under identical conditions in a test set, and were randomly assigned either to receive or not receive support from the AI system. Comparisons were then made between the diagnostic performance of physicians working with and without AI support. AI system performance was evaluated using the area under the receiver operating characteristic curve (AUROC), and the sensitivity and specificity of physician performance were compared with those of the AI system. Results Discrimination of COVID-19 pneumonia by the AI system yielded an AUROC of 0.96 in the validation set and 0.83 in the external test set. The AI system outperformed physicians in overall AUROC (70% increase in sensitivity and 1% increase in specificity, p < 0.0001). When working with AI support, physicians increased their diagnostic sensitivity from 47% to 61% (p < 0.001), although specificity decreased from 79% to 75% (p = 0.007). Conclusions Our results suggest that interpreting chest radiographs (CXR) with AI support increases physician diagnostic sensitivity for COVID-19 detection. This human-machine partnership may help expedite triaging efforts and improve resource allocation in the current crisis.
Affiliation(s)
- Hernán Chaves: Entelai, Buenos Aires, Argentina; Department of Diagnostic Imaging, Fleni, Buenos Aires, Argentina
- María Mercedes Serra: Entelai, Buenos Aires, Argentina; Department of Diagnostic Imaging, Fleni, Buenos Aires, Argentina
- Claudia Cejas: Department of Diagnostic Imaging, Fleni, Buenos Aires, Argentina
- Marcelo Castro: Department of Diagnostic Imaging, Clínica Indisa, Santiago, Chile
- Diego Fernández Slezak: Entelai, Buenos Aires, Argentina; Departamento de Computación, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Buenos Aires, Argentina; Instituto de Investigación en Ciencias de la Computación (ICC), CONICET-Universidad de Buenos Aires, Buenos Aires, Argentina
- Mauricio F Farez: Entelai, Buenos Aires, Argentina; Center for Epidemiology, Biostatistics and Public Health, Fleni, Buenos Aires, Argentina

46
Chen J, See KC. Artificial Intelligence for COVID-19: Rapid Review. J Med Internet Res 2020; 22:e21476. [PMID: 32946413] [PMCID: PMC7595751] [DOI: 10.2196/21476]
Abstract
BACKGROUND COVID-19 was first discovered in December 2019 and has since evolved into a pandemic. OBJECTIVE To address this global health crisis, artificial intelligence (AI) has been deployed at various levels of the health care system. However, AI has both potential benefits and limitations. We therefore conducted a review of AI applications for COVID-19. METHODS We performed an extensive search of the PubMed and EMBASE databases for COVID-19-related English-language studies published between December 1, 2019, and March 31, 2020. We supplemented the database search with reference list checks. A thematic analysis and narrative review of AI applications for COVID-19 was conducted. RESULTS In total, 11 papers were included for review. AI was applied to COVID-19 in four areas: diagnosis, public health, clinical decision making, and therapeutics. We identified several limitations including insufficient data, omission of multimodal methods of AI-based assessment, delay in realization of benefits, poor internal/external validation, inability to be used by laypersons, inability to be used in resource-poor settings, presence of ethical pitfalls, and presence of legal barriers. AI could potentially be explored in four other areas: surveillance, combination with big data, operation of other core clinical services, and management of patients with COVID-19. CONCLUSIONS In view of the continuing increase in the number of cases, and given that multiple waves of infections may occur, there is a need for effective methods to help control the COVID-19 pandemic. Despite its shortcomings, AI holds the potential to greatly augment existing human efforts, which may otherwise be overwhelmed by high patient numbers.
Affiliation(s)
- Jiayang Chen: Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Kay Choong See: Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Division of Respiratory & Critical Care Medicine, Department of Medicine, National University Hospital, Singapore

47
Fusion of medical imaging and electronic health records using deep learning: a systematic review and implementation guidelines. NPJ Digit Med 2020; 3:136. [PMID: 33083571] [PMCID: PMC7567861] [DOI: 10.1038/s41746-020-00341-z]
Abstract
Advancements in deep learning techniques carry the potential to make significant contributions to healthcare, particularly in fields that utilize medical imaging for diagnosis, prognosis, and treatment decisions. The current state-of-the-art deep learning models for radiology applications consider only pixel-value information, without data informing clinical context. Yet in practice, pertinent and accurate non-imaging data based on the clinical history and laboratory data enable physicians to interpret imaging findings in the appropriate clinical context, leading to higher diagnostic accuracy, more informative clinical decision making, and improved patient outcomes. To achieve a similar goal using deep learning, medical imaging pixel-based models must also be able to process contextual data from electronic health records (EHR) in addition to pixel data. In this paper, we describe different data fusion techniques that can be applied to combine medical imaging with EHR, and systematically review the medical data fusion literature published between 2012 and 2020. We conducted a systematic search on PubMed and Scopus for original research articles leveraging deep learning for the fusion of multimodality data. In total, we screened 985 studies and extracted data from 17 papers. By means of this systematic review, we present the current knowledge, summarize important results, and provide implementation guidelines to serve as a reference for researchers interested in the application of multimodal fusion in medical imaging.
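The simplest fusion strategy covered by such reviews is early fusion: normalize each modality's features and concatenate them before a shared classifier. A minimal sketch with invented feature values (the variable names and EHR fields are illustrative, not from the review):

```python
import numpy as np

def early_fusion(img_feat, ehr_feat):
    """Early fusion: z-score each modality separately, then concatenate into one vector."""
    def z(x):
        s = x.std()
        return (x - x.mean()) / (s if s > 0 else 1.0)
    return np.concatenate([z(img_feat), z(ehr_feat)])

# stand-ins: a pooled CNN feature vector and a few EHR variables
# (e.g. age, white-cell count, temperature)
img_feat = np.array([0.12, 0.80, 0.33, 0.55])
ehr_feat = np.array([67.0, 11.2, 38.5])

fused = early_fusion(img_feat, ehr_feat)
# a shared linear or MLP head would then operate on `fused`
```

Per-modality normalization matters here: without it, raw EHR values on large scales (age, lab counts) would dominate the unit-scale imaging features in any distance- or dot-product-based head.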
48
Munawar K, Sugi MD, Prabhu V. Radiology in the News: A Content Analysis of Radiology-Related Information Retrieved From Google Alerts. Curr Probl Diagn Radiol 2020; 50:825-830. [PMID: 33041161] [PMCID: PMC7544702] [DOI: 10.1067/j.cpradiol.2020.09.010]
Abstract
Google Alerts highlighted a diverse set of topics present in online media. Most links were directly to non-radiology lay press, but <1% of links over the 6-month period sent the user directly to a primary peer-reviewed medical journal article. The most common topics were market trends, promotional, COVID-19, and artificial intelligence.
Introduction Radiology topics receive substantial online media attention, with prior studies focusing on social media platform coverage. We used Google Alerts, a content change detection and notification service, to prospectively analyze new radiology-related content appearing on the internet. Materials and Methods An automated notification was created on Google Alerts for the search term “radiology,” sending the user emails with up to 3 new links daily. All links from November 2019 through April 2020 were assessed by 2 of 3 independent raters using a coding system to classify the content source and primary topic of discussion. The top 5 primary topics were retrospectively evaluated to identify prevalent subcategories. Content viewing restrictions were documented. Results 526 links were accessed. The majority (68%) of links were created by non-radiology lay press, followed by radiology-related lay press (28%), university-based publications (2%), and professional society websites (2%). The primary topic of these links most frequently related to market trends (28%), promotional material (20%), COVID-19 (13%), artificial intelligence (8%), and new technology or equipment (5%). 15% of links discussed a topic sourced from another article, such as a peer-reviewed journal, though only 2 linked directly to the journal itself. 8% of links had content viewing restrictions. Conclusion New radiology content was largely disseminated via non-radiology news sources; radiologists should therefore ensure their research and viewpoints are presented in these outlets. Google Alerts may be a useful tool to stay abreast of the most current public radiology subject matters, especially during these times of social isolation and rapidly evolving clinical practice.
Collapse
Affiliation(s)
- Kamran Munawar, NYU Langone Health, Department of Radiology, New York, NY.
- Mark D Sugi, University of California, San Francisco, Department of Radiology, San Francisco, CA. https://twitter.com/markdsugi
- Vinay Prabhu, NYU Langone Health, Department of Radiology, New York, NY. https://twitter.com/yaniv34
Collapse
|
49
|
Healthcare Applications of Artificial Intelligence and Analytics: A Review and Proposed Framework. APPLIED SCIENCES-BASEL 2020. [DOI: 10.3390/app10186553] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
Abstract
Healthcare has been considered one of the most promising application areas for artificial intelligence and analytics (AIA) since the emergence of these technologies. AI combined with analytics is increasingly changing medical practice and healthcare through efficient algorithms drawn from various branches of information technology (IT). Numerous works are published every year by universities and innovation centers worldwide, yet there are concerns about how effectively this progress translates into practice. There are growing examples of AIA being implemented in healthcare with promising results. This review summarizes the past 5 years of healthcare applications of AIA, across different techniques and medical specialties, and discusses the current issues and challenges related to this revolutionary technology. A total of 24,782 articles were identified. The aim of this paper is to provide the research community with the necessary background to push this field even further and to propose a framework that will help integrate diverse AIA technologies around patient needs in various healthcare contexts, especially for chronic care patients, who present the most complex comorbidities and care needs.
Collapse
|
50
|
Hurt B, Yen A, Kligerman S, Hsiao A. Augmenting Interpretation of Chest Radiographs With Deep Learning Probability Maps. J Thorac Imaging 2020; 35:285-293. [PMID: 32205817 PMCID: PMC7483166 DOI: 10.1097/rti.0000000000000505] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
PURPOSE Pneumonia is a common clinical diagnosis for which chest radiographs are often an important part of the diagnostic workup. Deep learning has the potential to expedite and improve the clinical interpretation of chest radiographs. While earlier approaches have emphasized the feasibility of "binary classification" to accomplish this task, alternative strategies may be possible. We explore the feasibility of a "semantic segmentation" deep learning approach to highlight the potential foci of pneumonia on frontal chest radiographs. MATERIALS AND METHODS In this retrospective study, we trained a U-Net convolutional neural network (CNN) to predict pixel-wise probability maps for pneumonia using a public data set provided by the Radiological Society of North America (RSNA) comprising 22,000 radiographs and radiologist-defined bounding boxes. We reserved 3684 radiographs as an independent validation data set and assessed overall performance for localization using Dice overlap and classification performance using the area under the receiver operating characteristic curve. RESULTS For classification/detection of pneumonia, the area under the receiver operating characteristic curve on frontal radiographs was 0.854, with a sensitivity of 82.8% and specificity of 72.6%. Using this strategy of neural network training, probability maps localized pneumonia to lung parenchyma for essentially all validation cases. For segmentation of pneumonia in positive cases, predicted probability maps had a mean Dice score (±SD) of 0.603±0.204, and 60.0% of these had a Dice score >0.5. CONCLUSIONS A "semantic segmentation" deep learning approach can provide a probabilistic map to assist in the diagnosis of pneumonia. In combination with the patient's history, clinical findings, and other imaging, this strategy may help expedite and improve diagnosis.
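The Dice overlap used above to score predicted probability maps against radiologist-defined regions can be sketched as follows. This is a minimal plain-Python illustration (not the authors' code); the 0.5 threshold and the toy 4×4 maps are assumptions for the example.

```python
def dice_score(pred_prob, truth_mask, threshold=0.5):
    """Dice overlap between a thresholded probability map and a binary mask.

    Dice = 2|A ∩ B| / (|A| + |B|), computed over flattened pixels.
    """
    pred = [p >= threshold for row in pred_prob for p in row]
    truth = [bool(t) for row in truth_mask for t in row]
    intersection = sum(p and t for p, t in zip(pred, truth))
    denom = sum(pred) + sum(truth)
    # Both maps empty: treat as perfect agreement by convention.
    return 1.0 if denom == 0 else 2.0 * intersection / denom

# Toy 4x4 probability map vs. a ground-truth opacity in the top-left 2x2 region
pred = [
    [0.9, 0.8, 0.1, 0.0],
    [0.7, 0.6, 0.2, 0.0],
    [0.1, 0.1, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
]
truth = [
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
]
print(dice_score(pred, truth))  # → 1.0 (thresholded prediction matches exactly)
```

A Dice score >0.5, the cutoff reported in the Results, means the thresholded prediction shares more than half of the combined foreground area with the ground truth.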
Collapse
|