201. Kalane P, Patil S, Patil BP, Sharma DP. Automatic detection of COVID-19 disease using U-Net architecture based fully convolutional network. Biomed Signal Process Control 2021;67:102518. [PMID: 33643425; PMCID: PMC7896819; DOI: 10.1016/j.bspc.2021.102518]
Abstract
The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) emerged from China at the end of 2019 and causes the disease named COVID-19, which has since evolved into a pandemic. Among detected COVID-19 cases, several are asymptomatic. The presently available reverse transcription-polymerase chain reaction (RT-PCR) test for detecting COVID-19 is limited by the availability of test kits and by relatively low positivity in the early stages of the disease, urging the need for alternative solutions. Tools based on artificial intelligence might help the world develop additional COVID-19 mitigation policies. In this paper, an automated COVID-19 detection system is proposed that uses indications from computed tomography (CT) images to train a deep learning model based on the U-Net architecture. The performance of the proposed system was evaluated using 1000 chest CT images obtained from three sources: two GitHub repositories and the Italian Society of Medical and Interventional Radiology's collection. Of the 1000 images, 552 were of normal persons and 448 were obtained from people affected by COVID-19. The proposed algorithm achieved a sensitivity of 94.86% and a specificity of 93.47%, with an overall accuracy of 94.10%. The U-Net architecture used for chest CT image analysis was found to be effective. The proposed method can serve as an additional tool available to clinicians for the primary screening of persons affected by COVID-19.
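To make the architecture named in this entry concrete, the following is a minimal PyTorch sketch of a small U-Net-style encoder-decoder. The channel widths, depth, and output head are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal U-Net-style network for chest CT analysis (illustrative sketch only;
# layer sizes and depth are assumptions, not the configuration used in the paper).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU: the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, out_ch=1):
        super().__init__()
        self.enc1 = conv_block(in_ch, 32)
        self.enc2 = conv_block(32, 64)
        self.bottleneck = conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)        # 64 skip + 64 upsampled channels
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)         # 32 skip + 32 upsampled channels
        self.head = nn.Conv2d(32, out_ch, 1)   # per-pixel score map

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

# Example: one 256x256 single-channel CT slice.
scores = TinyUNet()(torch.randn(1, 1, 256, 256))
print(scores.shape)  # torch.Size([1, 1, 256, 256])
```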
Affiliation(s)
- Sarika Patil
- Department of Electronics and Telecommunication Engineering, Sinhgad College of Engineering, Savitribai Phule Pune University, Pune, India
- B P Patil
- Army Institute of Technology, Savitribai Phule Pune University, Pune, India
- Davinder Pal Sharma
- Department of Physics, The University of the West Indies, St. Augustine, Trinidad and Tobago
202. Peters AA, Decasper A, Munz J, Klaus J, Loebelenz LI, Hoffner MKM, Hourscht C, Heverhagen JT, Christe A, Ebner L. Performance of an AI based CAD system in solid lung nodule detection on chest phantom radiographs compared to radiology residents and fellow radiologists. J Thorac Dis 2021;13:2728-2737. [PMID: 34164165; PMCID: PMC8182550; DOI: 10.21037/jtd-20-3522]
Abstract
Background: Despite its decreasing relevance in lung cancer screening, chest radiography is still frequently used to assess for lung nodules. The aim of the current study was to determine the accuracy of a commercial AI-based CAD system for the detection of artificial lung nodules on chest radiograph phantoms and to compare its performance to that of radiologists in training. Methods: Sixty-one anthropomorphic lung phantoms were equipped with 140 randomly deployed artificial lung nodules (5, 8, 10, and 12 mm). A random generator chose nodule size and distribution before a two-plane chest X-ray (CXR) of each phantom was performed. Seven blinded radiologists in training (2 fellows, 5 residents) with 2 to 5 years of experience in chest imaging independently read the CXRs on a PACS workstation. Results of the software were recorded separately. The McNemar test was used to compare each radiologist's results to the AI computer-aided diagnosis (CAD) software in per-nodule and per-phantom analyses, and Fleiss' kappa was applied for inter-rater and intra-observer agreement. Results: Five of the seven readers showed significantly higher accuracy than the AI algorithm. The pooled accuracies of the radiologists in the nodule-based and phantom-based analyses were 0.59 and 0.82, respectively, whereas the AI-CAD showed accuracies of 0.47 and 0.67, respectively. The radiologists' average sensitivity for 10- and 12-mm nodules was 0.80; it dropped to 0.66 for 8-mm (P=0.04) and 0.14 for 5-mm nodules (P<0.001). The radiologists and the algorithm both demonstrated significantly higher sensitivity for peripheral than for central nodules (0.66 vs. 0.48, P=0.004 and 0.64 vs. 0.094, P=0.025, respectively). Inter-rater agreement was moderate among the radiologists and between the radiologists and the AI-CAD software (K'=0.58±0.13 and 0.51±0.1). Intra-observer agreement was calculated for two readers and was almost perfect for the phantom-based analysis (K'=0.85±0.05; K'=0.80±0.02) and substantial to almost perfect for the nodule-based analysis (K'=0.83±0.02; K'=0.78±0.02). Conclusions: The AI-based CAD system as a primary reader is inferior to radiologists in lung nodule detection on chest phantoms. Chest radiography has reasonable accuracy in lung nodule detection if read by a radiologist alone and may be further improved by an AI-based CAD system as a second reader.
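For readers wanting to reproduce the reader-comparison statistics named in this entry, here is a small sketch of McNemar's test and Fleiss' kappa using statsmodels. The contingency counts and ratings are toy values invented for illustration, not the study's data.

```python
# Sketch of the reader-vs-CAD comparison statistics (toy data, not the study's).
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# McNemar's test on paired correct/incorrect outcomes per nodule:
# rows = radiologist (correct, incorrect), cols = AI-CAD (correct, incorrect).
table = np.array([[52, 31],
                  [12, 45]])
result = mcnemar(table, exact=True)  # exact binomial version for small counts
print(result.statistic, result.pvalue)

# Fleiss' kappa across several raters: one row per nodule, one column per rater,
# entries are categorical ratings (e.g., 0 = missed, 1 = detected).
ratings = np.random.randint(0, 2, size=(140, 7))  # 140 nodules, 7 readers
counts, _ = aggregate_raters(ratings)             # convert to category counts
print(fleiss_kappa(counts))
```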
Affiliation(s)
- Alan A Peters
- Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Amanda Decasper
- Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Jaro Munz
- Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Jeremias Klaus
- Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Laura I Loebelenz
- Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Maximilian Korbinian Michael Hoffner
- Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Cynthia Hourscht
- Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Johannes T Heverhagen
- Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland; Department of BioMedical Research, Experimental Radiology, University of Bern, Bern, Switzerland; Department of Radiology, The Ohio State University, Columbus, OH, USA
- Andreas Christe
- Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Lukas Ebner
- Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
203. Montazeri M, ZahediNasab R, Farahani A, Mohseni H, Ghasemian F. Machine Learning Models for Image-Based Diagnosis and Prognosis of COVID-19: Systematic Review. JMIR Med Inform 2021;9:e25181. [PMID: 33735095; PMCID: PMC8074953; DOI: 10.2196/25181]
Abstract
BACKGROUND: Accurate and timely diagnosis and effective prognosis of COVID-19 are important for providing the best possible care for patients and reducing the burden on the health care system. Machine learning methods can play a vital role in the diagnosis of COVID-19 by processing chest X-ray images. OBJECTIVE: The aim of this study was to summarize information on the use of intelligent models for the diagnosis and prognosis of COVID-19 to help with early and timely diagnosis, minimize prolonged diagnosis, and improve overall health care. METHODS: A systematic search of databases, including PubMed, Web of Science, IEEE, ProQuest, Scopus, bioRxiv, and medRxiv, was performed for COVID-19-related studies published up to May 24, 2020. This study was performed in accordance with the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. All original research articles describing the application of image processing for the prediction and diagnosis of COVID-19 were considered in the analysis. Two reviewers independently assessed the published papers to determine eligibility for inclusion. Risk of bias was evaluated using the Prediction Model Risk of Bias Assessment Tool. RESULTS: Of the 629 articles retrieved, 44 were included. We identified 4 prognosis models, for predicting disease severity and estimating confinement time for individual patients, and 40 diagnostic models for distinguishing COVID-19 from normal lungs or other pneumonias. Most included studies used deep learning methods based on convolutional neural networks, which have been widely used as classification algorithms. The most frequently reported predictors of prognosis in patients with COVID-19 included age, computed tomography data, gender, comorbidities, symptoms, and laboratory findings. Deep convolutional neural networks obtained better results than non-neural-network-based methods. Moreover, all of the models were found to be at high risk of bias owing to a lack of information about the study population and intended user groups and to inappropriate reporting. CONCLUSIONS: Machine learning models used for the diagnosis and prognosis of COVID-19 showed excellent discriminative performance. However, these models were at high risk of bias for various reasons, such as inadequate information about study participants and the randomization process and a lack of external validation, which may have resulted in optimistic reporting. Hence, based on our findings, none of the current models can be recommended for use in practice for the diagnosis and prognosis of COVID-19.
Affiliation(s)
- Mahdieh Montazeri
- Medical Informatics Research Center, Institute for Futures Studies in Health, Kerman University of Medical Sciences, Kerman, Iran
- Roxana ZahediNasab
- Computer Engineering Department, Faculty of Engineering, Shahid Bahonar University of Kerman, Kerman, Iran
- Ali Farahani
- Computer Engineering Department, Faculty of Engineering, Shahid Bahonar University of Kerman, Kerman, Iran
- Hadis Mohseni
- Computer Engineering Department, Faculty of Engineering, Shahid Bahonar University of Kerman, Kerman, Iran
- Fahimeh Ghasemian
- Computer Engineering Department, Faculty of Engineering, Shahid Bahonar University of Kerman, Kerman, Iran
204. Analysis of the Region of Interest According to CNN Structure in Hierarchical Pattern Surface Inspection Using CAM. Materials 2021;14:2095. [PMID: 33919231; PMCID: PMC8122604; DOI: 10.3390/ma14092095]
Abstract
A convolutional neural network (CNN), which exhibits excellent performance on image-based problems, has been widely applied to various industrial problems. In general, CNN models have been applied to defect inspection on the surfaces of raw materials or final products, with accuracy surpassing human inspection. However, surfaces with heterogeneous and complex backgrounds make it difficult to separate defect regions from the background, a typical challenge in this field. In this study, a CNN model was applied to detect surface defects on a hierarchically patterned surface, one of the representative complex-background surfaces. To optimize the CNN structure, the change in inspection performance was analyzed according to the number of layers and the kernel size of the model using evaluation metrics. In addition, the change in the CNN's decision criteria with model structure was analyzed using the class activation map (CAM) technique, which can highlight the regions the CNN considers most important when performing classification. As a result, we were able to accurately understand how the CNN classifies the hierarchically patterned surface, and an accuracy of 93.7% was achieved using the optimized model.
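A minimal sketch of the CAM computation referenced in this entry, following the original formulation: the map for a class is the weighted sum of the final convolutional feature maps, with weights taken from the classifier layer after global average pooling. The ResNet-18 backbone here is only a stand-in for the authors' inspection network.

```python
# Class activation map (CAM) sketch: weight the final conv feature maps by the
# classifier weights of the predicted class (backbone is a stand-in, not the
# authors' inspection network).
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights="IMAGENET1K_V1").eval()
feats = {}
model.layer4.register_forward_hook(lambda m, i, o: feats.update(out=o))

x = torch.randn(1, 3, 224, 224)          # one input image (e.g., a surface patch)
with torch.no_grad():
    logits = model(x)
cls = logits.argmax(dim=1).item()

fmap = feats["out"][0]                   # (512, 7, 7) final conv feature maps
w = model.fc.weight[cls]                 # (512,) classifier weights for class `cls`
cam = torch.einsum("c,chw->hw", w, fmap) # weighted sum over channels
cam = F.relu(cam)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
cam = F.interpolate(cam[None, None], size=(224, 224), mode="bilinear")[0, 0]
print(cam.shape)                         # upsampled heatmap over the input image
```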
205. Mehmood A, Yang S, Feng Z, Wang M, Ahmad AS, Khan R, Maqsood M, Yaqub M. A Transfer Learning Approach for Early Diagnosis of Alzheimer's Disease on MRI Images. Neuroscience 2021;460:43-52. [PMID: 33465405; DOI: 10.1016/j.neuroscience.2021.01.002]
Abstract
Mild cognitive impairment (MCI) detection using magnetic resonance imaging (MRI) plays a crucial role in treating dementia at an early stage. Deep learning architectures produce impressive results in such research, but the algorithms require large annotated datasets for training. In this study, we overcome this issue by using layer-wise transfer learning together with tissue segmentation of brain images to diagnose early-stage Alzheimer's disease (AD). For layer-wise transfer learning, we used the VGG architecture family with pre-trained weights. The proposed model discriminates between normal controls (NC), early mild cognitive impairment (EMCI), late mild cognitive impairment (LMCI), and AD. In this paper, 85 NC, 70 EMCI, 70 LMCI, and 75 AD subjects were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Tissue segmentation was applied to each subject to extract the gray matter (GM) tissue. To check its validity, the proposed method was tested on the preprocessed data and achieved its highest classification accuracy of 98.73% for AD vs NC, a testing accuracy of 83.72% for distinguishing EMCI vs LMCI, and accuracies above 80% for the remaining class pairs. Finally, we provide a comparative analysis with other studies, showing that the proposed model outperformed state-of-the-art models in terms of testing accuracy.
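A sketch of the layer-wise transfer learning idea this entry describes, using a pre-trained VGG16 from torchvision: earlier convolutional blocks stay frozen while the last block and a new classification head are fine-tuned. The unfreezing depth and the four-class head are assumptions, not the paper's exact setup.

```python
# Layer-wise transfer learning sketch with pre-trained VGG16 (the unfreezing
# depth and 4-way head are illustrative assumptions, not the paper's recipe).
import torch.nn as nn
from torchvision.models import vgg16

model = vgg16(weights="IMAGENET1K_V1")

# Freeze every convolutional layer first ...
for p in model.features.parameters():
    p.requires_grad = False
# ... then unfreeze only the last convolutional block for fine-tuning.
for p in model.features[24:].parameters():
    p.requires_grad = True

# Replace the final classifier layer: 4 classes (NC, EMCI, LMCI, AD).
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 4)

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable:,}")
```

In a layer-wise scheme, training would be repeated with progressively more blocks unfrozen, keeping the configuration that validates best.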
Affiliation(s)
- Atif Mehmood
- School of Artificial Intelligence, Xidian University, Xi'an 710071, China
- Shuyuan Yang
- School of Artificial Intelligence, Xidian University, Xi'an 710071, China
- Zhixi Feng
- School of Artificial Intelligence, Xidian University, Xi'an 710071, China
- Min Wang
- Key Laboratory of Radar Signal Processing, Xidian University, Xi'an 710071, China
- Al Smadi Ahmad
- School of Artificial Intelligence, Xidian University, Xi'an 710071, China
- Rizwan Khan
- School of Electronic Information and Communications, HUST University, Wuhan 4370074, China
- Muazzam Maqsood
- Department of Computer Science, COMSATS University Islamabad, Attock Campus, Attock 43600, Pakistan
- Muhammad Yaqub
- Faculty of Information Technology, Beijing University of Technology, Beijing 10000, China
206. Gefter WB, Lee KS, Schiebler ML, Parraga G, Seo JB, Ohno Y, Hatabu H. Pulmonary Functional Imaging: Part 2-State-of-the-Art Clinical Applications and Opportunities for Improved Patient Care. Radiology 2021;299:524-538. [PMID: 33847518; DOI: 10.1148/radiol.2021204033]
Abstract
Pulmonary functional imaging may be defined as the regional quantification of lung function by using primarily CT, MRI, and nuclear medicine techniques. The distribution of pulmonary physiologic parameters, including ventilation, perfusion, gas exchange, and biomechanics, can be noninvasively mapped and measured throughout the lungs. This information is not accessible by using conventional pulmonary function tests, which measure total lung function without viewing the regional distribution. The latter is important because of the heterogeneous distribution of virtually all lung disorders. Moreover, techniques such as hyperpolarized xenon 129 and helium 3 MRI can probe lung physiologic structure and microstructure at the level of the alveolar-air and alveolar-red blood cell interface, which is well beyond the spatial resolution of other clinical methods. The opportunities, challenges, and current stage of clinical deployment of pulmonary functional imaging are reviewed, including applications to chronic obstructive pulmonary disease, asthma, interstitial lung disease, pulmonary embolism, and pulmonary hypertension. Among the challenges to the deployment of pulmonary functional imaging in routine clinical practice are the need for further validation, establishment of normal values, standardization of imaging acquisition and analysis, and evidence of patient outcomes benefit. When these challenges are addressed, it is anticipated that pulmonary functional imaging will have an expanding role in the evaluation and management of patients with lung disease.
Affiliation(s)
- Warren B Gefter, Kyung Soo Lee, Mark L Schiebler, Grace Parraga, Joon Beom Seo, Yoshiharu Ohno, Hiroto Hatabu
- From the Department of Radiology, Penn Medicine, University of Pennsylvania, Philadelphia, Pa (W.B.G.); Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine (SKKU-SOM), Seoul, South Korea (K.S.L.); Department of Radiology, University of Wisconsin-Madison School of Medicine and Public Health, Madison, Wis (M.L.S.); Departments of Medicine and Medical Biophysics, Robarts Research Institute, Western University, London, Canada (G.P.); Department of Radiology, Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, South Korea (J.B.S.); Department of Radiology and Joint Research Laboratory of Advanced Medical Imaging, Fujita Health University School of Medicine, Toyoake, Japan (Y.O.); and Center for Pulmonary Functional Imaging, Department of Radiology, Brigham and Women's Hospital and Harvard Medical School, 75 Francis St, Boston, MA 02215 (H.H.)
207. Yang S, Zhu F, Ling X, Liu Q, Zhao P. Intelligent Health Care: Applications of Deep Learning in Computational Medicine. Front Genet 2021;12:607471. [PMID: 33912213; PMCID: PMC8075004; DOI: 10.3389/fgene.2021.607471]
Abstract
With the progress of medical technology, the biomedical field has ushered in the era of big data; building on these data and driven by artificial intelligence technology, computational medicine has emerged. The effective information contained in these biomedical big data needs to be extracted to promote the development of precision medicine. Traditionally, machine learning methods are used to mine biomedical data for features, which generally relies on feature engineering and the domain knowledge of experts and requires tremendous time and human resources. Unlike traditional approaches, deep learning, as a cutting-edge branch of machine learning, can automatically learn complex and robust features from raw data without the need for feature engineering. The applications of deep learning to medical images, electronic health records, genomics, and drug development are reviewed here, suggesting that deep learning has clear advantages in making full use of biomedical data and improving the level of medical care. Deep learning plays an increasingly important role in the field of medical health and has broad application prospects. However, problems and challenges for deep learning in computational medicine remain, including insufficient data, limited interpretability, data privacy, and heterogeneity. Analysis and discussion of these problems provide a reference for improving the application of deep learning in medical health.
Affiliation(s)
- Sijie Yang
- School of Computer Science and Technology, Soochow University, Suzhou, China
- Fei Zhu
- School of Computer Science and Technology, Soochow University, Suzhou, China
- Xinghong Ling
- School of Computer Science and Technology, Soochow University, Suzhou, China
- WenZheng College of Soochow University, Suzhou, China
- Quan Liu
- School of Computer Science and Technology, Soochow University, Suzhou, China
- Peiyao Zhao
- School of Computer Science and Technology, Soochow University, Suzhou, China
208. Li S, Liu D. Automated classification of solitary pulmonary nodules using convolutional neural network based on transfer learning strategy. J Mech Med Biol 2021. [DOI: 10.1142/s0219519421400029]
Abstract
This study aimed to propose an effective malignant solitary pulmonary nodule classification method based on an improved Faster R-CNN and a transfer learning strategy. In practice, existing solitary pulmonary nodule classification methods divide lung cancer images into only two categories: normal and cancerous. This study proposed a deep convolutional neural network to classify computed tomography (CT) images of lung cancer into four categories: lung adenocarcinoma, lung squamous cell carcinoma, metastatic lung cancer, and normal. Some high-resolution lung CT images have unfavorable characteristics, such as a large number of high-density continuous features, small lung nodule targets, and complex image backgrounds. In this study, a CT image sub-block preprocessing strategy was used to extract and enhance nodule features and alleviate the aforementioned problems. The experimental results showed that the proposed system was effective in resolving issues of the original Faster R-CNN detection method, such as its high false-positive rate and long classification time. Meanwhile, the transfer learning strategy was used to improve classification efficiency and avoid the overfitting caused by the small number of labeled lung cancer samples. The classification results were integrated using a majority-vote algorithm. The classification results on lung CT images showed that the proposed method had an average detection accuracy of 89.7% and reduced the misdiagnosis rate to meet clinical needs.
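The majority-vote integration step mentioned in this abstract can be sketched in a few lines; the per-sub-block predictions below are toy values, and the authors' exact voting scheme may differ.

```python
# Majority-vote sketch: combine per-sub-block class predictions into one label
# per CT image (toy predictions; the authors' exact scheme may differ).
from collections import Counter

CLASSES = ["adenocarcinoma", "squamous", "metastatic", "normal"]

def majority_vote(subblock_preds):
    # subblock_preds: list of class indices predicted for each image sub-block.
    return Counter(subblock_preds).most_common(1)[0][0]

preds = [0, 0, 3, 0, 2, 0, 0]          # detector outputs for 7 sub-blocks
print(CLASSES[majority_vote(preds)])   # -> "adenocarcinoma"
```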
Affiliation(s)
- Shiwei Li
- Department of Data Science and Technology, Heilongjiang University, Harbin, Heilongjiang 150080, P. R. China
- Dandan Liu
- Department of Oncology, Heilongjiang Province Hospital, Harbin, Heilongjiang 150036, P. R. China
209. Spiesman BJ, Gratton C, Hatfield RG, Hsu WH, Jepsen S, McCornack B, Patel K, Wang G. Assessing the potential for deep learning and computer vision to identify bumble bee species from images. Sci Rep 2021;11:7580. [PMID: 33828196; PMCID: PMC8027374; DOI: 10.1038/s41598-021-87210-1]
Abstract
Pollinators are undergoing a global decline. Although vital to pollinator conservation and ecological research, species-level identification is expensive, time consuming, and requires specialized taxonomic training. However, deep learning and computer vision are providing ways to open this methodological bottleneck through automated identification from images. Focusing on bumble bees, we compare four convolutional neural network classification models to evaluate prediction speed, accuracy, and the potential of this technology for automated bee identification. We gathered over 89,000 images of bumble bees, representing 36 species in North America, to train the ResNet, Wide ResNet, InceptionV3, and MnasNet models. Among these models, InceptionV3 presented a good balance of accuracy (91.6%) and average speed (3.34 ms). Species-level error rates were generally smaller for species represented by more training images. However, error rates also depended on the level of morphological variability among individuals within a species and similarity to other species. Continued development of this technology for automatic species identification and monitoring has the potential to be transformative for the fields of ecology and conservation. To this end, we present BeeMachine, a web application that allows anyone to use our classification model to identify bumble bees in their own images.
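As a companion to this entry, here is a sketch of how the four backbones could be instantiated and timed for single-image inference with torchvision. The 36-class heads and the timing loop are illustrative assumptions, not the authors' training pipeline.

```python
# Sketch: adapt four torchvision backbones to 36 bumble bee classes and time
# single-image inference (illustrative; not the authors' training pipeline).
import time
import torch
import torch.nn as nn
from torchvision import models

def build(name, n_classes=36):
    if name == "resnet":
        m = models.resnet50(weights=None)
        m.fc = nn.Linear(m.fc.in_features, n_classes)
    elif name == "wide_resnet":
        m = models.wide_resnet50_2(weights=None)
        m.fc = nn.Linear(m.fc.in_features, n_classes)
    elif name == "inception":
        m = models.inception_v3(weights=None, aux_logits=False, init_weights=True)
        m.fc = nn.Linear(m.fc.in_features, n_classes)
    elif name == "mnasnet":
        m = models.mnasnet1_0(weights=None)
        m.classifier[1] = nn.Linear(m.classifier[1].in_features, n_classes)
    return m.eval()

for name, size in [("resnet", 224), ("wide_resnet", 224),
                   ("inception", 299), ("mnasnet", 224)]:
    model, x = build(name), torch.randn(1, 3, size, size)
    with torch.no_grad():
        start = time.perf_counter()
        for _ in range(20):
            model(x)  # average over 20 runs to smooth out timer noise
    print(f"{name}: {(time.perf_counter() - start) / 20 * 1000:.1f} ms/image")
```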
Affiliation(s)
- Brian J Spiesman
- Department of Entomology, Kansas State University, Manhattan, KS, USA
- Claudio Gratton
- Department of Entomology, University of Wisconsin-Madison, Madison, WI, USA
- William H Hsu
- Department of Computer Science, Kansas State University, Manhattan, KS, USA
- Sarina Jepsen
- The Xerces Society for Invertebrate Conservation, Portland, OR, USA
- Brian McCornack
- Department of Entomology, Kansas State University, Manhattan, KS, USA
- Krushi Patel
- Department of Electrical Engineering and Computer Science, University of Kansas, Lawrence, KS, USA
- Guanghui Wang
- Department of Electrical Engineering and Computer Science, University of Kansas, Lawrence, KS, USA; Department of Computer Science, Ryerson University, Toronto, ON, Canada
210. Bhinder B, Gilvary C, Madhukar NS, Elemento O. Artificial Intelligence in Cancer Research and Precision Medicine. Cancer Discov 2021;11:900-915. [PMID: 33811123; DOI: 10.1158/2159-8290.cd-21-0090]
Abstract
Artificial intelligence (AI) is rapidly reshaping cancer research and personalized clinical care. Availability of high-dimensionality datasets coupled with advances in high-performance computing, as well as innovative deep learning architectures, has led to an explosion of AI use in various aspects of oncology research. These applications range from detection and classification of cancer, to molecular characterization of tumors and their microenvironment, to drug discovery and repurposing, to predicting treatment outcomes for patients. As these advances start penetrating the clinic, we foresee a shifting paradigm in cancer care becoming strongly driven by AI. SIGNIFICANCE: AI has the potential to dramatically affect nearly all aspects of oncology-from enhancing diagnosis to personalizing treatment and discovering novel anticancer drugs. Here, we review the recent enormous progress in the application of AI to oncology, highlight limitations and pitfalls, and chart a path for adoption of AI in the cancer clinic.
Affiliation(s)
- Bhavneet Bhinder
- Caryl and Israel Englander Institute for Precision Medicine, Weill Cornell Medicine, New York, New York; Department of Physiology and Biophysics, Weill Cornell Medicine, New York, New York
- Olivier Elemento
- Caryl and Israel Englander Institute for Precision Medicine, Weill Cornell Medicine, New York, New York; Department of Physiology and Biophysics, Weill Cornell Medicine, New York, New York; OneThree Biotech, New York, New York
211. Li Z, Wang J, Cao D, Li Y, Sun X, Zhang J, Liu H, Wang G. Investigating Neural Activation Effects on Deep Belief Echo-State Networks for Prediction Toward Smart Ocean Environment Monitoring. Arab J Sci Eng 2021. [DOI: 10.1007/s13369-020-05319-3]
212. Sun L, Wang Z, Pu H, Yuan G, Guo L, Pu T, Peng Z. Attention-embedded complementary-stream CNN for false positive reduction in pulmonary nodule detection. Comput Biol Med 2021;133:104357. [PMID: 33836449; DOI: 10.1016/j.compbiomed.2021.104357]
Abstract
False positive reduction plays a key role in computer-aided detection systems for pulmonary nodule detection in computed tomography (CT) scans. However, this remains a challenge owing to the heterogeneity and similarity of anisotropic pulmonary nodules. In this study, a novel attention-embedded complementary-stream convolutional neural network (AECS-CNN) is proposed to obtain more representative nodule features for false positive reduction. The proposed network comprises three function blocks: 1) attention-guided multi-scale feature extraction, 2) a complementary-stream block with an attention module for feature integration, and 3) a classification block. The inputs of the network are multi-scale 3D CT volumes, to accommodate variations in nodule size. A gradual multi-scale feature extraction block with an attention module was first applied to acquire more contextual information about the nodules. A subsequent complementary-stream integration block with an attention module was then used to learn significant complementary features. Finally, the candidates were classified using a fully connected layer block. An exhaustive experiment on the LUNA16 challenge dataset was conducted to verify the effectiveness and performance of the proposed network. The AECS-CNN achieved a sensitivity of 0.92 at 4 false positives per scan. The results indicate that the attention mechanism can improve network performance in false positive reduction, that the proposed AECS-CNN can learn more representative features, and that the attention module can guide the network toward discriminative feature channels and the crucial information embedded in the data, thereby effectively enhancing the performance of the detection system.
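The exact attention modules in this entry are not specified in the abstract; below is a generic squeeze-and-excitation-style channel attention block for 3D volumes, offered only as a sketch of the kind of mechanism described, not the paper's module.

```python
# Generic squeeze-and-excitation-style channel attention for 3D CT volumes
# (a sketch of the kind of attention described, not the paper's exact module).
import torch
import torch.nn as nn

class ChannelAttention3D(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)          # squeeze: global context
        self.fc = nn.Sequential(                     # excitation: channel weights
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c = x.shape[:2]
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1, 1)
        return x * w                                 # reweight feature channels

feat = torch.randn(2, 32, 16, 32, 32)                # candidate nodule volumes
print(ChannelAttention3D(32)(feat).shape)            # same shape, reweighted
```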
Affiliation(s)
- Lingma Sun
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610054, China; Laboratory of Imaging Detection and Intelligent Perception, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Zhuoran Wang
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610054, China; Laboratory of Imaging Detection and Intelligent Perception, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Hong Pu
- Sichuan Provincial People's Hospital, Chengdu, Sichuan, 610072, China; School of Medicine, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610054, China
- Guohui Yuan
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610054, China; Laboratory of Imaging Detection and Intelligent Perception, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Lu Guo
- Sichuan Provincial People's Hospital, Chengdu, Sichuan, 610072, China; School of Medicine, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610054, China
- Tian Pu
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610054, China; Laboratory of Imaging Detection and Intelligent Perception, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Zhenming Peng
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610054, China; Laboratory of Imaging Detection and Intelligent Perception, University of Electronic Science and Technology of China, Chengdu, 611731, China
213. Ren G, Lam SK, Zhang J, Xiao H, Cheung ALY, Ho WY, Qin J, Cai J. Investigation of a Novel Deep Learning-Based Computed Tomography Perfusion Mapping Framework for Functional Lung Avoidance Radiotherapy. Front Oncol 2021;11:644703. [PMID: 33842356; PMCID: PMC8024641; DOI: 10.3389/fonc.2021.644703]
Abstract
Functional lung avoidance radiation therapy aims to minimize dose delivery to normal lung tissue while favoring dose deposition in defective lung tissue, based on regional function information. However, the clinical acquisition of pulmonary functional images is resource-demanding, inconvenient, and technically challenging. This study investigates deep learning-based synthesis of lung functional images from the CT domain. Forty-two pulmonary macro-aggregated albumin SPECT/CT perfusion scans were retrospectively collected from the hospital. A deep learning-based framework (comprising image preparation, image processing, and the proposed convolutional neural network) was adopted to extract features from 3D CT images and synthesize perfusion maps as estimates of regional lung function. Ablation experiments assessed the contribution of each framework component by removing each element and analyzing testing performance. Removing the CT contrast enhancement component in the image processing stage caused the largest drop in framework performance relative to the optimum (~12%). In the CNN, the three components (residual module, ROI attention, and skip attention) were approximately equally important; removing any one of them reduced performance by 3-5%. The proposed CNN improved overall performance by ~4% and computational efficiency by ~350% compared with the U-Net model. The deep convolutional neural network, in conjunction with image processing for feature enhancement, is capable of extracting features from CT images for pulmonary perfusion synthesis. Within the proposed framework, image processing, especially CT contrast enhancement, plays a crucial role. This CT perfusion mapping (CTPM) framework provides insights for future studies and can be leveraged by other researchers to develop optimized CNN models for functional lung avoidance radiation therapy.
Affiliation(s)
- Ge Ren
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, Hong Kong
- Sai-Kit Lam
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, Hong Kong
- Jiang Zhang
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, Hong Kong
- Haonan Xiao
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, Hong Kong
- Andy Lai-Yin Cheung
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, Hong Kong
- Wai-Yin Ho
- Department of Nuclear Medicine, Queen Mary Hospital, Hong Kong, Hong Kong
- Jing Qin
- School of Nursing, The Hong Kong Polytechnic University, Hong Kong, Hong Kong
- Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, Hong Kong
214. Ram S, Hoff BA, Bell AJ, Galban S, Fortuna AB, Weinheimer O, Wielpütz MO, Robinson TE, Newman B, Vummidi D, Chughtai A, Kazerooni EA, Johnson TD, Han MK, Hatt CR, Galban CJ. Improved detection of air trapping on expiratory computed tomography using deep learning. PLoS One 2021;16:e0248902. [PMID: 33760861; PMCID: PMC7990199; DOI: 10.1371/journal.pone.0248902]
Abstract
BACKGROUND: Radiologic evidence of air trapping (AT) on expiratory computed tomography (CT) scans is associated with early pulmonary dysfunction in patients with cystic fibrosis (CF). However, standard techniques for quantitative assessment of AT are highly variable, limiting their efficacy for monitoring disease progression. OBJECTIVE: To investigate the effectiveness of a convolutional neural network (CNN) model for quantifying and monitoring AT, and to compare it with quantitative AT measures obtained from threshold-based techniques. MATERIALS AND METHODS: Paired volumetric whole-lung inspiratory and expiratory CT scans were obtained at four time points (0, 3, 12, and 24 months) in 36 subjects with mild CF lung disease. A densely connected CNN (DN) was trained using AT segmentation maps generated by a personalized threshold-based method (PTM). Quantitative AT (QAT) values, expressed as the relative volume of AT over the lungs, from the DN approach were compared to QAT values from the PTM. Radiographic assessment, spirometric measures, and clinical scores were correlated with the DN QAT values using a linear mixed-effects model. RESULTS: QAT values from the DN increased from 8.65% ± 1.38% to 21.38% ± 1.82% over the two-year period. Comparison of the CNN model results to intensity-based measures demonstrated a systematic drop in the Dice coefficient over time (from 0.86 ± 0.03 to 0.45 ± 0.04). The trends observed in DN QAT values were consistent with clinical scores for AT, bronchiectasis, and mucus plugging. In addition, the DN approach was less susceptible to variations in expiratory deflation levels than the threshold-based approach. CONCLUSION: The CNN model effectively delineated AT on expiratory CT scans, providing an automated and objective approach for assessing and monitoring AT in patients with CF.
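The Dice coefficient used above to compare the CNN and threshold-based AT maps can be computed directly; this is an illustrative NumPy sketch with toy masks.

```python
# Dice coefficient between two binary air-trapping masks (NumPy sketch).
import numpy as np

def dice(a, b, eps=1e-8):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)

cnn_mask = np.random.rand(64, 128, 128) > 0.7   # toy CNN segmentation
ptm_mask = np.random.rand(64, 128, 128) > 0.7   # toy threshold-based map
print(f"Dice = {dice(cnn_mask, ptm_mask):.3f}")
```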
Affiliation(s)
- Sundaresh Ram
- Department of Radiology, Michigan Medicine, University of Michigan, Ann Arbor, Michigan, United States of America
- Department of Biomedical Engineering, Michigan Medicine, University of Michigan, Ann Arbor, Michigan, United States of America
- Benjamin A. Hoff
- Department of Radiology, Michigan Medicine, University of Michigan, Ann Arbor, Michigan, United States of America
- Alexander J. Bell
- Department of Radiology, Michigan Medicine, University of Michigan, Ann Arbor, Michigan, United States of America
- Stefanie Galban
- Department of Radiology, Michigan Medicine, University of Michigan, Ann Arbor, Michigan, United States of America
- Aleksa B. Fortuna
- Department of Radiology, Michigan Medicine, University of Michigan, Ann Arbor, Michigan, United States of America
- Oliver Weinheimer
- Department of Diagnostic and Interventional Radiology, University Hospital of Heidelberg, Heidelberg, Germany
- Translational Lung Research Center, Heidelberg (TLRC), German Lung Research Center (DZL), Heidelberg, Germany
- Mark O. Wielpütz
- Department of Diagnostic and Interventional Radiology, University Hospital of Heidelberg, Heidelberg, Germany
- Translational Lung Research Center, Heidelberg (TLRC), German Lung Research Center (DZL), Heidelberg, Germany
- Terry E. Robinson
- Department of Pediatrics, Center of Excellence in Pulmonary Biology, Stanford University School of Medicine, Stanford, California, United States of America
- Beverley Newman
- Department of Pediatric Radiology, Lucile Packard Children’s Hospital at Stanford, Stanford, California, United States of America
- Dharshan Vummidi
- Department of Radiology, Michigan Medicine, University of Michigan, Ann Arbor, Michigan, United States of America
- Aamer Chughtai
- Department of Radiology, Michigan Medicine, University of Michigan, Ann Arbor, Michigan, United States of America
- Ella A. Kazerooni
- Department of Radiology, Michigan Medicine, University of Michigan, Ann Arbor, Michigan, United States of America
- Department of Internal Medicine, Michigan Medicine, University of Michigan, Ann Arbor, Michigan, United States of America
- Timothy D. Johnson
- Department of Biostatistics, University of Michigan, School of Public Health, Ann Arbor, Michigan, United States of America
- MeiLan K. Han
- Department of Internal Medicine, Michigan Medicine, University of Michigan, Ann Arbor, Michigan, United States of America
- Charles R. Hatt
- Department of Radiology, Michigan Medicine, University of Michigan, Ann Arbor, Michigan, United States of America
- Imbio LLC, Minneapolis, Minnesota, United States of America
- Craig J. Galban
- Department of Radiology, Michigan Medicine, University of Michigan, Ann Arbor, Michigan, United States of America
- Department of Biomedical Engineering, Michigan Medicine, University of Michigan, Ann Arbor, Michigan, United States of America
215. P SAB, Annavarapu CSR. Deep learning-based improved snapshot ensemble technique for COVID-19 chest X-ray classification. Appl Intell 2021;51:3104-3120. [PMID: 34764590; PMCID: PMC7986181; DOI: 10.1007/s10489-021-02199-4]
Abstract
COVID-19 has proven to be a deadly virus and, unfortunately, triggered a worldwide pandemic. Its detection for further treatment poses a severe challenge to researchers, scientists, health professionals, and administrators worldwide. One of the daunting tasks for doctors in radiology during the pandemic is using chest X-ray or CT images for COVID-19 diagnosis, since each report must be inspected manually, which takes time. While a CT scan is the better standard, an X-ray remains useful because it is cheaper, faster, and more widely available. To diagnose COVID-19, this paper proposes a deep learning-based improved snapshot ensemble technique for efficient COVID-19 chest X-ray classification. In addition, the proposed method takes advantage of transfer learning with the pre-trained ResNet-50 model. The proposed model uses a publicly accessible COVID-19 chest X-ray dataset consisting of 2905 images, including COVID-19, viral pneumonia, and normal chest X-rays. For performance evaluation, the model used metrics such as AU-ROC, AU-PR, and the Jaccard index. Furthermore, it obtained a multi-class micro-average of 97% specificity, 95% F1-score, and 95% classification accuracy. The obtained results demonstrate that the proposed method outperformed several existing methods and appears to be a suitable and efficient approach for COVID-19 chest X-ray classification.
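A sketch of the snapshot-ensemble idea this entry builds on: train one network under a cyclic learning rate, save a snapshot at the end of each cycle, and average the snapshots' softmax outputs at test time. The ResNet-50 head, cycle length, and epoch counts are generic assumptions rather than the paper's exact recipe.

```python
# Snapshot ensemble sketch: cyclic LR, one snapshot per cycle, averaged softmax
# at test time (generic recipe; hyperparameters are illustrative assumptions).
import copy
import torch
import torch.nn as nn
from torchvision.models import resnet50

model = resnet50(weights=None)
model.fc = nn.Linear(model.fc.in_features, 3)  # COVID-19 / viral pneumonia / normal
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
sched = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(opt, T_0=10)

snapshots, epochs_per_cycle = [], 10
for epoch in range(30):                        # 3 cycles -> 3 snapshots
    # ... one epoch of training on chest X-ray batches goes here ...
    sched.step()
    if (epoch + 1) % epochs_per_cycle == 0:    # LR has annealed to near zero
        snapshots.append(copy.deepcopy(model.state_dict()))

def ensemble_predict(x):
    probs = []
    for state in snapshots:
        model.load_state_dict(state)
        model.eval()
        with torch.no_grad():
            probs.append(torch.softmax(model(x), dim=1))
    return torch.stack(probs).mean(dim=0)      # average over snapshots

print(ensemble_predict(torch.randn(1, 3, 224, 224)).shape)  # (1, 3)
```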
Affiliation(s)
- Samson Anosh Babu P
- Department of Computer Science and Engineering, Indian Institute of Technology (ISM), Dhanbad, 826004 India
216. Andersen NK, Trøjgaard P, Herschend NO, Størling ZM. Automated Assessment of Peristomal Skin Discoloration and Leakage Area Using Artificial Intelligence. Front Artif Intell 2021;3:72. [PMID: 33733189; PMCID: PMC7861335; DOI: 10.3389/frai.2020.00072]
Abstract
For people living with an ostomy, development of peristomal skin complications (PSCs) is the most common post-operative challenge. A visual sign of PSCs is discoloration (redness) of the peristomal skin, often resulting from leakage of ostomy output under the baseplate. If left unattended, a mild skin condition may progress into a severe disorder; consequently, it is important to monitor discoloration and leakage patterns closely. The Ostomy Skin Tool is the current state of the art for evaluation of peristomal skin, but it relies on patients visiting their healthcare professional regularly. To enable close monitoring of peristomal skin over time, an automated strategy that does not rely on scheduled consultations is required. Several medical fields have implemented automated image analysis based on artificial intelligence, and these deep learning algorithms have become increasingly recognized as a valuable tool in healthcare. Therefore, the main objective of this study was to develop deep learning algorithms that provide automated, consistent, and objective assessments of changes in peristomal skin discoloration and leakage patterns. A total of 614 peristomal skin images were used for development of the discoloration model, which predicted the area of discolored peristomal skin with an accuracy of 95%, alongside precision and recall scores of 79.6% and 75.0%, respectively. The algorithm predicting leakage patterns was developed based on 954 product images, and leakage area was determined with 98.8% accuracy, 75.0% precision, and 71.5% recall. Combined, these data demonstrate for the first time the implementation of artificial intelligence for automated assessment of changes in peristomal skin discoloration and leakage patterns.
217. Yu W, Zhou H, Goldin JG, Wong WK, Kim GHJ. End-to-end domain knowledge-assisted automatic diagnosis of idiopathic pulmonary fibrosis (IPF) using computed tomography (CT). Med Phys 2021;48:2458-2467. [PMID: 33547645; DOI: 10.1002/mp.14754]
Abstract
PURPOSE: Domain knowledge (DK) acquired from prior studies is important for medical diagnosis. This paper leverages population-level DK, via an optimality design criterion, to train a deep learning model in an end-to-end manner. The problem of interest is patient-level diagnosis of idiopathic pulmonary fibrosis (IPF) among subjects with interstitial lung disease (ILD) using computed tomography (CT). IPF diagnosis is a complicated process involving multidisciplinary discussion among experts and is subject to interobserver variability, even for experienced radiologists. To this end, we propose a new statistical method to construct a time- and memory-efficient IPF diagnosis model using axial chest CT and DK, along with an optimality design criterion, via a DK-enhanced deep learning loss function. METHODS: Four state-of-the-art two-dimensional convolutional neural network (2D-CNN) architectures (MobileNet, VGG16, ResNet-50, and DenseNet-121) and one baseline 2D-CNN were implemented to automatically diagnose IPF among ILD patients. Axial lung CT images were retrospectively acquired from 389 IPF patients and 700 non-IPF ILD patients in five multicenter clinical trials. To enrich the sample size and boost model performance, we sampled 20 three-slice samples (triplets) from each CT scan, with the three slices randomly selected from the top, middle, and bottom of both lungs, respectively. Model performance was evaluated using fivefold cross-validation, with each fold stratified to a fixed proportion of IPF vs non-IPF. RESULTS: Using the DK-enhanced loss function increased the study-wise accuracy of the baseline CNN model from 0.77 to 0.89. The four other well-developed models reached satisfactory performance, with overall accuracy >0.95, but for them the benefit of the DK-enhanced loss function was not noticeable. CONCLUSIONS: We believe this is the first attempt that (a) uses population-level DK with an optimality design criterion to train deep learning-based diagnostic models in an end-to-end manner and (b) focuses on patient-level IPF diagnosis. Further evaluation of population-level DK in prospective studies is warranted and underway.
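The paper's DK-enhanced loss is not fully specified in this abstract; the snippet below is one hypothetical way such a loss could look: standard cross-entropy plus a penalty pulling the model's batch-level predicted IPF rate toward a population-level prior. The prior value (0.36) and the weight `lam` are invented for illustration, not taken from the paper.

```python
# Hypothetical sketch of a domain-knowledge-enhanced loss: cross-entropy plus a
# penalty tying the batch-level predicted IPF rate to a population-level prior.
# The prior (0.36) and weight (lam) are invented for illustration.
import torch
import torch.nn.functional as F

def dk_loss(logits, targets, prior_ipf_rate=0.36, lam=0.1):
    ce = F.cross_entropy(logits, targets)              # per-sample supervision
    p_ipf = torch.softmax(logits, dim=1)[:, 1].mean()  # batch-level IPF probability
    dk_penalty = (p_ipf - prior_ipf_rate) ** 2         # population-level DK term
    return ce + lam * dk_penalty

logits = torch.randn(8, 2, requires_grad=True)         # 8 triplet samples, 2 classes
targets = torch.randint(0, 2, (8,))
loss = dk_loss(logits, targets)
loss.backward()
print(float(loss))
```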
Affiliation(s)
- Wenxi Yu
- Department of Biostatistics, University of California, Los Angeles, CA, 90024, USA
- Hua Zhou
- Department of Biostatistics, University of California, Los Angeles, CA, 90024, USA
- Jonathan G Goldin
- Department of Radiology, University of California, Los Angeles, CA, 90024, USA
- Weng Kee Wong
- Department of Biostatistics, University of California, Los Angeles, CA, 90024, USA
- Grace Hyun J Kim
- Department of Biostatistics, University of California, Los Angeles, CA, 90024, USA; Department of Radiology, University of California, Los Angeles, CA, 90024, USA
218. Guo R, Passi K, Jain CK. Tuberculosis Diagnostics and Localization in Chest X-Rays via Deep Learning Models. Front Artif Intell 2021;3:583427. [PMID: 33733221; PMCID: PMC7861240; DOI: 10.3389/frai.2020.583427]
Abstract
For decades, tuberculosis (TB), a potentially serious infectious lung disease, has remained a leading cause of death worldwide. Proven to be convenient, efficient, and cost-effective, chest X-ray (CXR) has become the preliminary medical imaging tool for detecting TB. Arguably, the quality of TB diagnosis will improve vastly with automated detection of TB and localization of suspected areas that may manifest it on CXRs. The current line of research aims to develop an efficient computer-aided detection system that supports doctors and radiologists in making well-informed TB diagnoses from patients' CXRs. Here, an integrated process is proposed to improve TB diagnostics via convolutional neural networks (CNNs) and localization in CXRs via deep learning models. Three key steps in the TB diagnostics process include (a) modifying CNN model structures, (b) fine-tuning the models via the artificial bee colony algorithm, and (c) implementing a linear average-based ensemble method. Overall performance is compared across all three steps among the experimented deep CNN models on two publicly available CXR datasets, namely, the Shenzhen Hospital CXR dataset and the National Institutes of Health CXR dataset. Validated performance includes detecting CXR abnormalities and differentiating among seven TB-related manifestations (consolidation, effusion, fibrosis, infiltration, mass, nodule, and pleural thickening). Importantly, class activation mapping is employed to provide a visual interpretation of the diagnostic result by localizing the detected lung abnormality manifestation on the CXR. Compared to the state of the art, the resulting approach shows outstanding performance in both lung abnormality detection and the diagnosis and localization of specific TB-related manifestations in CXRs.
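The linear average-based ensemble step named in this entry can be sketched as simple (optionally weighted) averaging of the member CNNs' predicted class probabilities; the three members and equal weights below are placeholders, not the paper's configuration.

```python
# Linear-average ensemble sketch: average class probabilities from several
# fine-tuned CNNs (member models and equal weights are placeholders).
import torch

def linear_average_ensemble(prob_list, weights=None):
    # prob_list: per-model tensors of shape (batch, n_classes), rows sum to 1.
    probs = torch.stack(prob_list)                  # (n_models, batch, classes)
    if weights is None:
        weights = torch.full((len(prob_list),), 1.0 / len(prob_list))
    return torch.einsum("m,mbc->bc", weights, probs)

# Toy outputs from three CNNs for one CXR over 7 TB-related manifestations.
members = [torch.softmax(torch.randn(1, 7), dim=1) for _ in range(3)]
avg = linear_average_ensemble(members)
print(avg.argmax(dim=1))                            # ensemble manifestation index
```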
Collapse
Affiliation(s)
- Ruihua Guo
- Department of Mathematics and Computer Science, Laurentian University, Greater Sudbury, ON, Canada
| | - Kalpdrum Passi
- Department of Mathematics and Computer Science, Laurentian University, Greater Sudbury, ON, Canada
| | - Chakresh Kumar Jain
- Department of Biotechnology, Jaypee Institute of Information Technology, Noida, India
| |
Collapse
|
219
|
Ackermans LLGC, Volmer L, Wee L, Brecheisen R, Sánchez-González P, Seiffert AP, Gómez EJ, Dekker A, Ten Bosch JA, Olde Damink SMW, Blokhuis TJ. Deep Learning Automated Segmentation for Muscle and Adipose Tissue from Abdominal Computed Tomography in Polytrauma Patients. SENSORS 2021; 21:s21062083. [PMID: 33809710 PMCID: PMC8002279 DOI: 10.3390/s21062083] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/29/2021] [Revised: 02/28/2021] [Accepted: 03/10/2021] [Indexed: 12/15/2022]
Abstract
Manual segmentation of muscle and adipose compartments from computed tomography (CT) axial images is a potential bottleneck in early rapid detection and quantification of sarcopenia. A prototype deep learning neural network was trained on a multi-center collection of 3413 abdominal cancer surgery subjects to automatically segment truncal muscle, subcutaneous adipose tissue, and visceral adipose tissue at the L3 lumbar vertebral level. Segmentations were externally tested on 233 polytrauma subjects. Although abdominal CT scans after severe trauma are acquired quickly and robustly, often with motion or scatter artefacts, incomplete vertebral bodies, or arms in the field of view that degrade image quality, concordance was generally very good for the body composition indices of Skeletal Muscle Radiation Attenuation (SMRA) (Concordance Correlation Coefficient (CCC) = 0.92), Visceral Adipose Tissue Index (VATI) (CCC = 0.99), and Subcutaneous Adipose Tissue Index (SATI) (CCC = 0.99). In conclusion, this article presented an automated and accurate segmentation system for the cross-sectional muscle and adipose areas at the L3 lumbar vertebral level on abdominal CT. Future work will include fine-tuning the algorithm and minimizing the outliers.
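For readers unfamiliar with the agreement metric quoted here, Lin's Concordance Correlation Coefficient can be computed directly from paired measurements; the sketch below uses the standard formula and toy values, not data from the study.

```python
# Sketch of the Concordance Correlation Coefficient (CCC) used to compare
# automated vs. manual body-composition indices; standard Lin's CCC
# formula, not code from the paper.
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient between two raters."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()            # population variances
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Perfect agreement gives 1.0; a constant offset lowers the score.
a = np.array([40.1, 38.7, 45.2, 50.3])
print(round(ccc(a, a), 3))        # 1.0
print(round(ccc(a, a + 5), 3))    # < 1.0 despite perfect correlation
```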
Collapse
Affiliation(s)
- Leanne L. G. C. Ackermans
- Department of Traumatology, Maastricht University Medical Centre+, 6229 HX Maastricht, The Netherlands; (J.A.T.B.); (T.J.B.)
- Department of Surgery, NUTRIM School of Nutrition and Translational Research in Metabolism, Maastricht University Medical Centre+, 6229 HX Maastricht, The Netherlands; (R.B.); (S.M.W.O.D.)
- Correspondence: (L.L.G.C.A.); (L.V.); Tel.: +31-433-877-489 (L.L.G.C.A.); +31-884-456-00 (L.V.)
| | - Leroy Volmer
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Development Biology, Maastricht University Medical Centre+, 6229 HX Maastricht, The Netherlands; (L.W.); (A.D.)
- Correspondence: (L.L.G.C.A.); (L.V.); Tel.: +31-433-877-489 (L.L.G.C.A.); +31-884-456-00 (L.V.)
| | - Leonard Wee
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Development Biology, Maastricht University Medical Centre+, 6229 HX Maastricht, The Netherlands; (L.W.); (A.D.)
- Clinical Data Science, Faculty of Health Medicine and Lifesciences, Maastricht University, Paul Henri Spaaklaan 1, 6229 GT Maastricht, The Netherlands
| | - Ralph Brecheisen
- Department of Surgery, NUTRIM School of Nutrition and Translational Research in Metabolism, Maastricht University Medical Centre+, 6229 HX Maastricht, The Netherlands; (R.B.); (S.M.W.O.D.)
| | - Patricia Sánchez-González
- Biomedical Engineering and Telemedicine Centre, ETSI Telecomunicación, Center for Biomedical Technology, Universidad Politécnica de Madrid, 28040 Madrid, Spain; (P.S.-G.); (A.P.S.); (E.J.G.)
- Centro de Investigación Biomédica en Red de Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), 28029 Madrid, Spain
| | - Alexander P. Seiffert
- Biomedical Engineering and Telemedicine Centre, ETSI Telecomunicación, Center for Biomedical Technology, Universidad Politécnica de Madrid, 28040 Madrid, Spain; (P.S.-G.); (A.P.S.); (E.J.G.)
| | - Enrique J. Gómez
- Biomedical Engineering and Telemedicine Centre, ETSI Telecomunicación, Center for Biomedical Technology, Universidad Politécnica de Madrid, 28040 Madrid, Spain; (P.S.-G.); (A.P.S.); (E.J.G.)
- Centro de Investigación Biomédica en Red de Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), 28029 Madrid, Spain
| | - Andre Dekker
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Development Biology, Maastricht University Medical Centre+, 6229 HX Maastricht, The Netherlands; (L.W.); (A.D.)
- Clinical Data Science, Faculty of Health Medicine and Lifesciences, Maastricht University, Paul Henri Spaaklaan 1, 6229 GT Maastricht, The Netherlands
| | - Jan A. Ten Bosch
- Department of Traumatology, Maastricht University Medical Centre+, 6229 HX Maastricht, The Netherlands; (J.A.T.B.); (T.J.B.)
| | - Steven M. W. Olde Damink
- Department of Surgery, NUTRIM School of Nutrition and Translational Research in Metabolism, Maastricht University Medical Centre+, 6229 HX Maastricht, The Netherlands; (R.B.); (S.M.W.O.D.)
- Department of General, Visceral and Transplantation Surgery, RWTH University Hospital Aachen, 52074 Aachen, Germany
| | - Taco J. Blokhuis
- Department of Traumatology, Maastricht University Medical Centre+, 6229 HX Maastricht, The Netherlands; (J.A.T.B.); (T.J.B.)
| |
Collapse
|
220
|
Wei G, Liu Y, Ji X, Li Q, Xing Y, Xue Y, Liu H. Micro-morphological feature visualization, auto-classification, and evolution quantitative analysis of tumors by using SR-PCT. Cancer Med 2021; 10:2319-2331. [PMID: 33682368 PMCID: PMC7982622 DOI: 10.1002/cam4.3796] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2020] [Revised: 02/05/2021] [Accepted: 02/06/2021] [Indexed: 11/15/2022] Open
Abstract
Tissue micro-morphological abnormalities and the related quantitative data can provide immediate evidence for tumorigenesis and metastasis in the microenvironment. However, multiscale three-dimensional nondestructive pathological visualization, measurement, and quantitative analysis remain challenging for medical imaging and diagnosis. In this work, we employed synchrotron-based X-ray phase-contrast tomography (SR-PCT) combined with phase-and-attenuation-duality phase retrieval to reconstruct and extract the volumetric inner-structural characteristics of tumors of the digestive system, which is helpful for tumor typing and statistical analysis of different tumor specimens. On the basis of a feature set comprising eight types of tumor micro-lesions resolved by our high-density-resolution SR-PCT reconstruction, an AlexNet-based deep convolutional neural network model was trained and achieved an average auto-classification accuracy of 94.21% for the eight types of digestive-system tumors. The micro-pathomorphological relationships of liver tumor angiogenesis and progression were revealed by quantitatively analyzing the microscopic changes of texture and grayscale features screened by the machine learning methods of area under the curve and principal component analysis. The results showed the specific path and clinical manifestations of tumor evolution and indicated that the progression of tumor lesions relies on its inflammatory microenvironment. Hence, these high-phase-contrast 3D pathological characteristics and automatic analysis methods proved highly effective in recognizing and classifying micro tumor lesions.
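The feature-screening step (ranking candidate texture and grayscale features by univariate AUC, then applying principal component analysis) can be sketched as follows; the data, the number of retained features, and the selection logic are invented for illustration and are not taken from the paper.

```python
# Hedged sketch of AUC-based feature screening followed by PCA. All
# numbers and the "keep the 4 most informative features" rule are
# assumptions made for this illustration.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 12))           # 60 lesions x 12 toy features
y = rng.integers(0, 2, size=60)         # toy binary lesion labels

# Score each feature by how far its univariate AUC departs from chance.
aucs = np.array([roc_auc_score(y, X[:, j]) for j in range(X.shape[1])])
top = np.argsort(np.abs(aucs - 0.5))[-4:]   # keep 4 best (assumed rule)

components = PCA(n_components=2).fit_transform(X[:, top])
print(aucs.round(2), components.shape)      # (60, 2)
```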
Collapse
Affiliation(s)
- Gong‐Xiang Wei
- School of Physics and Optoelectronic Engineering, Shandong University of Technology, Zibo, China
- State Key Laboratory of Pathogenesis, Prevention, Treatment of Central Asian High Incidence Diseases, First Affiliated Hospital of Xinjiang Medical University, Urumqi, China
| | - Yun‐Yan Liu
- School of Physics and Optoelectronic Engineering, Shandong University of Technology, Zibo, China
- State Key Laboratory of Pathogenesis, Prevention, Treatment of Central Asian High Incidence Diseases, First Affiliated Hospital of Xinjiang Medical University, Urumqi, China
| | - Xue‐Wen Ji
- State Key Laboratory of Pathogenesis, Prevention, Treatment of Central Asian High Incidence Diseases, First Affiliated Hospital of Xinjiang Medical University, Urumqi, China
- Hepatobiliary Surgery, First Affiliated Hospital, Xinjiang Medical University, Urumqi, China
| | - Qiao‐Xin Li
- State Key Laboratory of Pathogenesis, Prevention, Treatment of Central Asian High Incidence Diseases, First Affiliated Hospital of Xinjiang Medical University, Urumqi, China
- Department of Pathology, First Affiliated Hospital, Xinjiang Medical University, Urumqi, China
| | - Yan Xing
- State Key Laboratory of Pathogenesis, Prevention, Treatment of Central Asian High Incidence Diseases, First Affiliated Hospital of Xinjiang Medical University, Urumqi, China
- Imaging Center, First Affiliated Hospital, Xinjiang Medical University, Urumqi, China
| | - Yan‐Ling Xue
- SSRF, Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai, China
| | - Hui‐Qiang Liu
- School of Physics and Optoelectronic Engineering, Shandong University of Technology, Zibo, China
- State Key Laboratory of Pathogenesis, Prevention, Treatment of Central Asian High Incidence Diseases, First Affiliated Hospital of Xinjiang Medical University, Urumqi, China
| |
Collapse
|
221
|
Elmuogy S, Hikal NA, Hassan E. An efficient technique for CT scan images classification of COVID-19. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2021. [DOI: 10.3233/jifs-201985] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/30/2022]
Abstract
Nowadays, Coronavirus disease (COVID-19) is considered one of the most critical pandemics on Earth, owing to its ability to spread rapidly among humans as well as animals. COVID-19 is expected to keep spreading around the world; around 70% of the world population might become infected in the coming years. Therefore, an accurate and efficient diagnostic tool is highly required, which is the main objective of our study. Manual classification has mainly been used to detect different diseases, but it takes too much time, in addition to the probability of human error. Automatic image classification reduces doctors' diagnostic time, which could save human lives. We propose an automatic classification architecture based on a deep neural network called the Worried Deep Neural Network (WDNN) model with transfer learning. Comparative analysis reveals that the proposed WDNN model outperforms three pre-trained models, InceptionV3, ResNet50, and VGG19, in terms of various performance metrics. Owing to the shortage of COVID-19 data, data augmentation was used to increase the number of images in the positive class, and normalization was then used to bring all images to the same size. Experimentation was done on a COVID-19 dataset collected from different cases, with 2623 images in total (1573 training, 524 validation, 524 test). Our proposed model achieved 99.046%, 98.684%, 99.119%, and 98.90% in terms of accuracy, precision, recall, and F-score, respectively. The results are compared with both traditional machine learning methods and those using Convolutional Neural Networks (CNNs), and demonstrate that our classification model can be used as an alternative to the current diagnostic tool.
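The augmentation step used to enlarge the positive class can be illustrated with standard Keras tooling; the transform parameters below are placeholders, not the values the authors used.

```python
# Sketch of class-balancing data augmentation with Keras; parameter
# values are illustrative assumptions, not the paper's settings.
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(rotation_range=15,
                               width_shift_range=0.1,
                               height_shift_range=0.1,
                               zoom_range=0.1,
                               horizontal_flip=True)

positives = np.random.rand(8, 224, 224, 3)   # toy positive-class images,
                                             # already resized to 224x224
batches = augmenter.flow(positives, batch_size=8, shuffle=False)
augmented = next(batches)                    # one augmented copy per image
print(augmented.shape)                       # (8, 224, 224, 3)
```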
Collapse
Affiliation(s)
- Samir Elmuogy
- Department of Computer Science, Faculty of Computers and Information, Mansoura University, Mansoura, Egypt
| | - Noha A. Hikal
- Department of Information Technology, Faculty of Computers and Information, Mansoura University, Mansoura, Egypt
| | - Esraa Hassan
- Department of Machine Learning and Information Retrieval, Faculty of Artificial Intelligence, Kafrelsheikh University, Kafrelsheikh, Egypt
| |
Collapse
|
222
|
Wang Z, Xiao Y, Weng F, Li X, Zhu D, Lu F, Liu X, Hou M, Meng Y. R-JaunLab: Automatic Multi-Class Recognition of Jaundice on Photos of Subjects with Region Annotation Networks. J Digit Imaging 2021; 34:337-350. [PMID: 33634415 DOI: 10.1007/s10278-021-00432-7] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2020] [Revised: 07/01/2020] [Accepted: 02/09/2021] [Indexed: 12/21/2022] Open
Abstract
Jaundice occurs as a symptom of various diseases, such as hepatitis and cancers of the liver, gallbladder, or pancreas, and clinical measurement with special equipment is the common method used to determine patients' total serum bilirubin levels. Fully automated multi-class recognition of jaundice involves two key issues: (1) the critical difficulty of multi-class recognition of jaundice in contrast with the binary-class setting, and (2) the subtle difficulties posed by the data, namely extensive individual variability in high-resolution photos of subjects, strong similarity between healthy controls and occult jaundice, and broadly inhomogeneous color distribution. We introduce a novel approach for multi-class recognition of jaundice that distinguishes occult jaundice, obvious jaundice, and healthy controls. First, a region annotation network is developed and trained to propose eye candidates. Subsequently, an efficient jaundice recognizer is proposed to learn similarity, context, localization features, and global characteristics from photos of subjects. Finally, both networks are unified by a shared convolutional layer. Evaluation of the structured model in a comparative study resulted in a significant performance boost (mean categorical accuracy 91.38%) over the independent human observer. Our model exceeded the state-of-the-art convolutional neural network (96.85% and 90.06% on the training and validation subsets, respectively) and showed a remarkable mean categorical accuracy of 95.33% on the testing subset. The proposed network performs better than physicians, demonstrating the strength of our proposal as an efficient tool for bringing multi-class recognition of jaundice into clinical practice.
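A rough sketch of the "shared convolutional layer" idea, in which one backbone feeds both a region proposal head and a classification head, may help; the layer sizes and heads below are placeholders and do not reproduce R-JaunLab.

```python
# Rough PyTorch sketch of a shared backbone feeding two heads: an
# eye-region proposal head and a jaundice classification head. All
# sizes are placeholders, not R-JaunLab's architecture.
import torch
import torch.nn as nn

class SharedBackboneNet(nn.Module):
    def __init__(self, n_classes=3):  # occult / obvious / healthy
        super().__init__()
        self.backbone = nn.Sequential(       # shared convolutional layers
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.region_head = nn.Conv2d(32, 4, 1)   # crude per-cell box map
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes))

    def forward(self, x):
        feats = self.backbone(x)
        return self.region_head(feats), self.cls_head(feats)

boxes, logits = SharedBackboneNet()(torch.randn(2, 3, 128, 128))
print(boxes.shape, logits.shape)  # [2, 4, 32, 32] and [2, 3]
```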
Collapse
Affiliation(s)
- Zheng Wang
- School of Mathematics and Statistics, Central South University, Changsha, Hunan, 410083, China.,Science and Engineering School, Hunan First Normal University, Changsha, 410205, China
| | - Ying Xiao
- Gastroenterology Department of Xiangya Hospital, Central South University, Changsha, 410083, China
| | - Futian Weng
- School of Mathematics and Statistics, Central South University, Changsha, Hunan, 410083, China
| | - Xiaojun Li
- Gastroenterology Department of Xiangya Hospital, Central South University, Changsha, 410083, China
| | - Danhua Zhu
- Department of Gastroenterology, Hunan Provincial People's Hospital, Changsha, 410002, China
| | - Fanggen Lu
- The Second Xiangya Hospital, Central South University, 410083, Changsha, China
| | - Xiaowei Liu
- Gastroenterology Department of Xiangya Hospital, Central South University, Changsha, 410083, China
| | - Muzhou Hou
- School of Mathematics and Statistics, Central South University, Changsha, Hunan, 410083, China.
| | - Yu Meng
- Department of Gastroenterology and Hepatology, Shenzhen University General Hospital, Shenzhen, 518055, China.
| |
Collapse
|
223
|
A-DenseUNet: Adaptive Densely Connected UNet for Polyp Segmentation in Colonoscopy Images with Atrous Convolution. SENSORS 2021; 21:s21041441. [PMID: 33669539 PMCID: PMC7922083 DOI: 10.3390/s21041441] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/12/2021] [Revised: 02/14/2021] [Accepted: 02/17/2021] [Indexed: 01/05/2023]
Abstract
Colon carcinoma is one of the leading causes of cancer-related death in both men and women. Automatic colorectal polyp segmentation and detection in colonoscopy videos help endoscopists identify colorectal disease more easily, making it a promising method to prevent colon cancer. In this study, we developed a fully automated pixel-wise polyp segmentation model named A-DenseUNet. The proposed architecture adapts to different datasets, adjusting for the unknown depth of the network by sharing multiscale encoding information with the different levels of the decoder side. We also used multiple dilated convolutions with various atrous rates to observe a large field of view without increasing the computational cost and to prevent the loss of spatial information that dimensionality reduction would cause. We utilized an attention mechanism to remove noise and irrelevant information, leading to a comprehensive re-establishment of contextual features. Our experiments demonstrated that the proposed architecture achieved significant segmentation results on public datasets. A-DenseUNet achieved a 90% Dice coefficient score on the Kvasir-SEG dataset and a 91% Dice coefficient score on the CVC-612 dataset, both of which were higher than the scores of other deep learning models such as UNet++, ResUNet, U-Net, PraNet, and ResUNet++ for segmenting polyps in colonoscopy images.
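The multi-rate dilated (atrous) convolution idea is compact enough to sketch: parallel 3x3 convolutions with different dilation rates enlarge the receptive field while preserving spatial resolution. The rates and channel counts below are assumptions, not A-DenseUNet's.

```python
# Sketch of multi-rate dilated ("atrous") convolution: parallel 3x3
# convolutions with different dilation rates widen the field of view at
# constant resolution. Rates and channel counts are assumptions.
import torch
import torch.nn as nn

class MultiDilationBlock(nn.Module):
    def __init__(self, in_ch, out_ch, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r)
            for r in rates)                 # padding=r keeps spatial size
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

y = MultiDilationBlock(32, 32)(torch.randn(1, 32, 64, 64))
print(y.shape)  # torch.Size([1, 32, 64, 64])
```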
Collapse
|
224
|
An F, Li X, Ma X. Medical Image Classification Algorithm Based on Visual Attention Mechanism-MCNN. OXIDATIVE MEDICINE AND CELLULAR LONGEVITY 2021; 2021:6280690. [PMID: 33688390 PMCID: PMC7914083 DOI: 10.1155/2021/6280690] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/04/2020] [Revised: 02/02/2021] [Accepted: 02/06/2021] [Indexed: 11/23/2022]
Abstract
Due to the complexity of medical images, traditional medical image classification methods can no longer meet actual application needs. In recent years, the rapid development of deep learning theory has provided a technical approach for medical image classification. However, deep learning faces the following problems in this application. First, it is difficult to construct a deep learning model with excellent performance tailored to the characteristics of medical images. Second, current deep learning network structures and training strategies are poorly adapted to medical images. Therefore, this paper first introduces a visual attention mechanism into the deep learning model so that information can be extracted more effectively for the problem at hand and reasoning is realized at a finer granularity, which also increases the interpretability of the model. Additionally, to better match the deep learning network structure and training strategy to medical images, this paper constructs a novel multiscale convolutional neural network model that automatically extracts high-level discriminative appearance features from the original image; the loss function uses Mahalanobis distance optimization to obtain a better training strategy, which improves the robustness of the network model. The medical image classification task is completed by the above method. Based on these ideas, this paper proposes a medical image classification algorithm based on a visual attention mechanism and a multiscale convolutional neural network. Lung nodule and breast cancer images were classified by the proposed method. The experimental results show that the accuracy of medical image classification achieved here is not only higher than that of traditional machine learning methods but also improved compared with other deep learning methods, and the method has good stability and robustness.
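Since the loss function is built on the Mahalanobis distance, a short reminder of that distance may help; this is the standard definition with toy data, not the paper's optimization code.

```python
# Sketch of the Mahalanobis distance underlying the loss above;
# standard definition with a toy class-covariance estimate.
import numpy as np

def mahalanobis(x, mean, cov):
    """d(x) = sqrt((x - mu)^T Sigma^{-1} (x - mu))."""
    diff = x - mean
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

rng = np.random.default_rng(2)
feats = rng.normal(size=(100, 8))            # toy feature vectors of a class
mu, sigma = feats.mean(axis=0), np.cov(feats, rowvar=False)
print(round(mahalanobis(feats[0], mu, sigma), 3))
```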
Collapse
Affiliation(s)
- Fengping An
- School of Physics and Electronic Electrical Engineering, Huaiyin Normal University, Huaian 223300, China
| | - Xiaowei Li
- School of Physics and Electronic Electrical Engineering, Huaiyin Normal University, Huaian 223300, China
| | - Xingmin Ma
- System Second Department, North China Institute of Computing Technology, Beijing 100083, China
| |
Collapse
|
225
|
Mallio CA, Napolitano A, Castiello G, Giordano FM, D’Alessio P, Iozzino M, Sun Y, Angeletti S, Russano M, Santini D, Tonini G, Zobel BB, Vincenzi B, Quattrocchi CC. Deep Learning Algorithm Trained with COVID-19 Pneumonia Also Identifies Immune Checkpoint Inhibitor Therapy-Related Pneumonitis. Cancers (Basel) 2021; 13:652. [PMID: 33562011 PMCID: PMC7914551 DOI: 10.3390/cancers13040652] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2020] [Revised: 02/01/2021] [Accepted: 02/02/2021] [Indexed: 12/20/2022] Open
Abstract
BACKGROUND Coronavirus disease 2019 (COVID-19) pneumonia and immune checkpoint inhibitor (ICI) therapy-related pneumonitis share common features. The aim of this study was to determine on chest computed tomography (CT) images whether a deep convolutional neural network algorithm is able to solve the challenge of differential diagnosis between COVID-19 pneumonia and ICI therapy-related pneumonitis. METHODS We enrolled three groups: a pneumonia-free group (n = 30), a COVID-19 group (n = 34), and a group of patients with ICI therapy-related pneumonitis (n = 21). Computed tomography images were analyzed with an artificial intelligence (AI) algorithm based on a deep convolutional neural network structure. Statistical analysis included the Mann-Whitney U test (significance threshold at p < 0.05) and the receiver operating characteristic curve (ROC curve). RESULTS The algorithm showed low specificity in distinguishing COVID-19 from ICI therapy-related pneumonitis (sensitivity 97.1%, specificity 14.3%, area under the curve (AUC) = 0.62). ICI therapy-related pneumonitis was identified by the AI when compared to pneumonia-free controls (sensitivity = 85.7%, specificity 100%, AUC = 0.97). CONCLUSIONS The deep learning algorithm is not able to distinguish between COVID-19 pneumonia and ICI therapy-related pneumonitis. Awareness must be increased among clinicians about imaging similarities between COVID-19 and ICI therapy-related pneumonitis. ICI therapy-related pneumonitis can be applied as a challenge population for cross-validation to test the robustness of AI models used to analyze interstitial pneumonias of variable etiology.
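The reported metrics follow directly from per-scan scores and a decision threshold; the sketch below uses toy numbers (the threshold is an assumption) purely to show how sensitivity, specificity, and AUC are obtained.

```python
# Sketch of computing sensitivity, specificity, and ROC AUC from
# per-scan algorithm scores. Toy data; scikit-learn for the AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])   # 1 = COVID-19, 0 = pneumonitis
scores = np.array([.9, .8, .7, .65, .3, .6, .55, .7])
y_pred = (scores >= 0.5).astype(int)           # assumed decision threshold

tp = np.sum((y_pred == 1) & (y_true == 1))
tn = np.sum((y_pred == 0) & (y_true == 0))
fp = np.sum((y_pred == 1) & (y_true == 0))
fn = np.sum((y_pred == 0) & (y_true == 1))

print("sensitivity", tp / (tp + fn))           # high here, like the paper
print("specificity", tn / (tn + fp))           # low here, like the paper
print("AUC", roc_auc_score(y_true, scores))
```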
Collapse
Affiliation(s)
- Carlo Augusto Mallio
- Departmental Faculty of Medicine and Surgery, Unit of Diagnostic Imaging and Interventional Radiology, Università Campus Bio-Medico di Roma, 00128 Rome, Italy; (G.C.); (F.M.G.); (P.D.); (B.B.Z.); (C.C.Q.)
| | - Andrea Napolitano
- Departmental Faculty of Medicine and Surgery, Unit of Medical Oncology, 00128 Rome, Italy; (M.R.); (D.S.); (G.T.); (B.V.)
| | - Gennaro Castiello
- Departmental Faculty of Medicine and Surgery, Unit of Diagnostic Imaging and Interventional Radiology, Università Campus Bio-Medico di Roma, 00128 Rome, Italy; (G.C.); (F.M.G.); (P.D.); (B.B.Z.); (C.C.Q.)
| | - Francesco Maria Giordano
- Departmental Faculty of Medicine and Surgery, Unit of Diagnostic Imaging and Interventional Radiology, Università Campus Bio-Medico di Roma, 00128 Rome, Italy; (G.C.); (F.M.G.); (P.D.); (B.B.Z.); (C.C.Q.)
| | - Pasquale D’Alessio
- Departmental Faculty of Medicine and Surgery, Unit of Diagnostic Imaging and Interventional Radiology, Università Campus Bio-Medico di Roma, 00128 Rome, Italy; (G.C.); (F.M.G.); (P.D.); (B.B.Z.); (C.C.Q.)
| | - Mario Iozzino
- Department of Interventional Radiology, S. Maria Goretti Hospital, 04100 Latina, Italy;
| | - Yipeng Sun
- Infervision Europe GmbH, Mainzer Strasse 75, D-65189 Wiesbaden, Germany;
| | - Silvia Angeletti
- Departmental Faculty of Medicine and Surgery, Unit of Clinical Laboratory Science, Università Campus Bio-Medico di Roma, 00128 Rome, Italy;
| | - Marco Russano
- Departmental Faculty of Medicine and Surgery, Unit of Medical Oncology, 00128 Rome, Italy; (M.R.); (D.S.); (G.T.); (B.V.)
| | - Daniele Santini
- Departmental Faculty of Medicine and Surgery, Unit of Medical Oncology, 00128 Rome, Italy; (M.R.); (D.S.); (G.T.); (B.V.)
| | - Giuseppe Tonini
- Departmental Faculty of Medicine and Surgery, Unit of Medical Oncology, 00128 Rome, Italy; (M.R.); (D.S.); (G.T.); (B.V.)
| | - Bruno Beomonte Zobel
- Departmental Faculty of Medicine and Surgery, Unit of Diagnostic Imaging and Interventional Radiology, Università Campus Bio-Medico di Roma, 00128 Rome, Italy; (G.C.); (F.M.G.); (P.D.); (B.B.Z.); (C.C.Q.)
| | - Bruno Vincenzi
- Departmental Faculty of Medicine and Surgery, Unit of Medical Oncology, 00128 Rome, Italy; (M.R.); (D.S.); (G.T.); (B.V.)
| | - Carlo Cosimo Quattrocchi
- Departmental Faculty of Medicine and Surgery, Unit of Diagnostic Imaging and Interventional Radiology, Università Campus Bio-Medico di Roma, 00128 Rome, Italy; (G.C.); (F.M.G.); (P.D.); (B.B.Z.); (C.C.Q.)
| |
Collapse
|
226
|
Application of Artificial Intelligence in Gastrointestinal Endoscopy. J Clin Gastroenterol 2021; 55:110-120. [PMID: 32925304 DOI: 10.1097/mcg.0000000000001423] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/10/2020] [Accepted: 08/07/2020] [Indexed: 12/24/2022]
Abstract
Artificial intelligence (AI), also known here as computer-aided diagnosis, is a technology that enables machines to process information and function at or above the human level, and it has great potential in gastrointestinal endoscopy applications. At present, research on medical image recognition usually adopts deep-learning algorithms based on convolutional neural networks. AI has been used in gastrointestinal endoscopy, including esophagogastroduodenoscopy, capsule endoscopy, and colonoscopy. AI can help endoscopists improve the diagnosis rate of various lesions, reduce the rate of missed diagnoses, improve the quality of endoscopy, assess the severity of disease, and improve the efficiency of endoscopy. The diversity, susceptibility, and imaging specificity of gastrointestinal endoscopic images all present difficulties and challenges on the road to intelligence. More large-scale, high-quality, multicenter prospective studies are needed to explore the clinical applicability of AI, and ethical issues need to be taken into account.
Collapse
|
227
|
Li Q, Chen G. Recognition of industrial machine parts based on transfer learning with convolutional neural network. PLoS One 2021; 16:e0245735. [PMID: 33507901 PMCID: PMC7842930 DOI: 10.1371/journal.pone.0245735] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2020] [Accepted: 01/07/2021] [Indexed: 12/12/2022] Open
Abstract
As industry gradually enters the unmanned and intelligent stage, factories of the future will need to realize intelligent monitoring, diagnosis, and maintenance of parts and components. In order to achieve this goal, the parts in the factory must first be accurately identified and classified. However, the existing literature rarely studies the classification and identification of parts across an entire factory. Given the scarcity of existing data samples, this paper studies the identification and classification of industrial machine parts from small samples. To solve this problem, this paper establishes a convolutional neural network model based on the pretrained InceptionNet-V3 model through transfer learning. Through experimental design, the influence of data augmentation, learning rate, and optimizer algorithm on model effectiveness is studied; the optimal model is finally determined, and its test accuracy reaches 99.74%. By comparison with the accuracy of other classifiers, the experimental results prove that a convolutional neural network model based on transfer learning can effectively solve the problem of recognizing and classifying industrial machine parts from small samples, and the idea of transfer learning can be further promoted.
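The transfer-learning recipe described here (freeze a pretrained convolutional base, train a new classification head) can be sketched with Keras; the head layout and the 10-class output are placeholders, not the paper's configuration.

```python
# Hedged sketch of transfer learning from pretrained InceptionV3: the
# convolutional base is frozen and a new classification head is trained.
# Class count and head layout are placeholders, not the paper's.
import tensorflow as tf

base = tf.keras.applications.InceptionV3(weights="imagenet",
                                         include_top=False,
                                         input_shape=(299, 299, 3))
base.trainable = False                     # keep pretrained features fixed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # 10 part classes (assumed)
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```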
Collapse
Affiliation(s)
- Qiaoyang Li
- Xi'an Research Institute of High-Tech, Xi’an, China
| | - Guiming Chen
- Xi'an Research Institute of High-Tech, Xi’an, China
| |
Collapse
|
228
|
Joseph Raj AN, Zhu H, Khan A, Zhuang Z, Yang Z, Mahesh VGV, Karthik G. ADID-UNET-a segmentation model for COVID-19 infection from lung CT scans. PeerJ Comput Sci 2021; 7:e349. [PMID: 33816999 PMCID: PMC7924694 DOI: 10.7717/peerj-cs.349] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2020] [Accepted: 12/07/2020] [Indexed: 05/23/2023]
Abstract
Currently, the new coronavirus disease (COVID-19) is one of the biggest health crises threatening the world. Automatic detection from computed tomography (CT) scans is a classic method for detecting lung infection, but it faces problems such as high variation in intensity, indistinct edges near the infected region, and noise from the data acquisition process. Therefore, this article proposes a new COVID-19 pulmonary infection segmentation network referred to as the Attention Gate-Dense Network-Improved Dilation Convolution-UNET (ADID-UNET). The dense network replaces the convolution and max-pooling functions to enhance feature propagation and alleviates the vanishing-gradient problem. An improved dilation convolution is used to increase the receptive field of the encoder output and thereby capture more edge features from the small infected regions. The integration of attention gates into the model suppresses the background and improves prediction accuracy. The experimental results show that the ADID-UNET model can accurately segment COVID-19 lung-infected areas, with performance measures greater than 80% for metrics like Accuracy, Specificity, and Dice Coefficient (DC). Furthermore, when compared to other state-of-the-art architectures, the proposed model showed excellent segmentation performance, with a high DC and F1 score of 0.8031 and 0.82, respectively.
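The attention-gate component can be sketched in the general Attention U-Net form, in which decoder features gate the encoder skip connection; ADID-UNET's exact wiring may differ, and all sizes below are assumptions.

```python
# PyTorch sketch of an attention gate on a skip connection (generic
# Attention U-Net formulation; ADID-UNET's exact design may differ).
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    def __init__(self, x_ch, g_ch, inter_ch):
        super().__init__()
        self.wx = nn.Conv2d(x_ch, inter_ch, 1)   # skip-connection features
        self.wg = nn.Conv2d(g_ch, inter_ch, 1)   # gating (decoder) features
        self.psi = nn.Sequential(nn.Conv2d(inter_ch, 1, 1), nn.Sigmoid())

    def forward(self, x, g):
        att = self.psi(torch.relu(self.wx(x) + self.wg(g)))  # (N,1,H,W)
        return x * att                     # suppress background activations

skip = torch.randn(1, 64, 32, 32)
gate = torch.randn(1, 128, 32, 32)         # assumed upsampled to match skip
print(AttentionGate(64, 128, 32)(skip, gate).shape)  # [1, 64, 32, 32]
```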
Collapse
Affiliation(s)
- Alex Noel Joseph Raj
- Department of Electronic Engineering, College of Engineering, Shantou University, Shantou, China
| | - Haipeng Zhu
- Department of Electronic Engineering, College of Engineering, Shantou University, Shantou, China
| | - Asiya Khan
- School of Engineering, Computing and Mathematics, University of Plymouth, Plymouth, UK
| | - Zhemin Zhuang
- Department of Electronic Engineering, College of Engineering, Shantou University, Shantou, China
| | - Zengbiao Yang
- Department of Electronic Engineering, College of Engineering, Shantou University, Shantou, China
| | - Vijayalakshmi G. V. Mahesh
- Department of Electronics and Communication, BMS Institute of Technology and Management, Bangalore, India
| | - Ganesan Karthik
- COVID CARE - Institute of Orthopedics and Traumatology, Madras Medical College, Chennai, India
| |
Collapse
|
229
|
Qayyum A, Qadir J, Bilal M, Al-Fuqaha A. Secure and Robust Machine Learning for Healthcare: A Survey. IEEE Rev Biomed Eng 2021; 14:156-180. [PMID: 32746371 DOI: 10.1109/rbme.2020.3013489] [Citation(s) in RCA: 106] [Impact Index Per Article: 26.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Recent years have witnessed widespread adoption of machine learning (ML) and deep learning (DL) techniques due to their superior performance in a variety of healthcare applications, ranging from the prediction of cardiac arrest from one-dimensional heart signals to computer-aided diagnosis (CADx) using multi-dimensional medical images. Notwithstanding this impressive performance, there are still lingering doubts about the robustness of ML/DL in healthcare settings (traditionally considered quite challenging due to the myriad security and privacy issues involved), especially in light of recent results showing that ML/DL are vulnerable to adversarial attacks. In this paper, we present an overview of various application areas in healthcare that leverage such techniques from a security and privacy point of view and present the associated challenges. In addition, we present potential methods to ensure secure and privacy-preserving ML for healthcare applications. Finally, we provide insight into the current research challenges and promising directions for future research.
Collapse
|
230
|
Nogales A, García-Tejedor ÁJ, Monge D, Vara JS, Antón C. A survey of deep learning models in medical therapeutic areas. Artif Intell Med 2021; 112:102020. [PMID: 33581832 DOI: 10.1016/j.artmed.2021.102020] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2020] [Revised: 12/21/2020] [Accepted: 01/10/2021] [Indexed: 12/18/2022]
Abstract
Artificial intelligence is a broad field that comprises a wide range of techniques, of which deep learning presently has the greatest impact. Moreover, in medicine the data are both complex and massive, and this, together with the importance of the decisions made by doctors, makes it one of the fields in which deep learning techniques can have the greatest impact. A systematic review following the Cochrane recommendations was conducted by a multidisciplinary team comprising physicians, research methodologists, and computer scientists. This survey aims to identify the main therapeutic areas and the deep learning models used for diagnosis and treatment tasks. The most relevant databases included were MedLine, Embase, Cochrane Central, Astrophysics Data System, Europe PubMed Central, Web of Science, and Science Direct. Inclusion and exclusion criteria were defined and applied in the first and second peer-review screenings. A set of quality criteria was developed to select the papers obtained after the second screening. Finally, 126 studies were selected from the initial 3493 papers, and 64 were described. Results show that the number of publications on deep learning in medicine is increasing every year. Convolutional neural networks are the most widely used models, and the most developed area is oncology, where they are used mainly for image analysis.
Collapse
Affiliation(s)
- Alberto Nogales
- CEIEC, Research Institute, Universidad Francisco de Vitoria, Ctra. M-515 Pozuelo-Majadahonda km 1800, 28223, Pozuelo de Alarcón, Spain.
| | - Álvaro J García-Tejedor
- CEIEC, Research Institute, Universidad Francisco de Vitoria, Ctra. M-515 Pozuelo-Majadahonda km 1800, 28223, Pozuelo de Alarcón, Spain.
| | - Diana Monge
- Faculty of Medicine, Research Institute, Universidad Francisco de Vitoria, Ctra. M-515 Pozuelo-Majadahonda km 1800, 28223, Pozuelo de Alarcón, Spain.
| | - Juan Serrano Vara
- CEIEC, Research Institute, Universidad Francisco de Vitoria, Ctra. M-515 Pozuelo-Majadahonda km 1800, 28223, Pozuelo de Alarcón, Spain.
| | - Cristina Antón
- Faculty of Medicine, Research Institute, Universidad Francisco de Vitoria, Ctra. M-515 Pozuelo-Majadahonda km 1800, 28223, Pozuelo de Alarcón, Spain.
| |
Collapse
|
231
|
Sarvamangala DR, Kulkarni RV. Convolutional neural networks in medical image understanding: a survey. EVOLUTIONARY INTELLIGENCE 2021; 15:1-22. [PMID: 33425040 PMCID: PMC7778711 DOI: 10.1007/s12065-020-00540-3] [Citation(s) in RCA: 180] [Impact Index Per Article: 45.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2020] [Revised: 10/05/2020] [Accepted: 11/22/2020] [Indexed: 12/23/2022]
Abstract
Imaging techniques are used to capture anomalies of the human body. The captured images must be understood for diagnosis, prognosis, and treatment planning of the anomalies. Medical image understanding is generally performed by skilled medical professionals. However, the scarcity of human experts, together with fatigue and the rough-estimate procedures involved, limits the effectiveness of image understanding performed by them. Convolutional neural networks (CNNs) are effective tools for image understanding and have outperformed human experts in many image understanding tasks. This article aims to provide a comprehensive survey of applications of CNNs in medical image understanding, with the underlying objective of motivating researchers to apply CNNs extensively in their research and diagnosis. A brief introduction to CNNs is presented, along with a discussion of CNNs and their various award-winning frameworks. The major medical image understanding tasks, namely image classification, segmentation, localization, and detection, are introduced. Applications of CNNs in medical image understanding of ailments of the brain, breast, lung, and other organs are surveyed critically and comprehensively. A critical discussion of some of the challenges is also presented.
Collapse
|
232
|
Wu DJ, Badamjav O, Reddy VV, Eisenberg M, Behr B. A preliminary study of sperm identification in microdissection testicular sperm extraction samples with deep convolutional neural networks. Asian J Androl 2021; 23:135-139. [PMID: 33106465 PMCID: PMC7991821 DOI: 10.4103/aja.aja_66_20] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022] Open
Abstract
Sperm identification and selection is an essential task when processing human testicular samples for in vitro fertilization. Locating and identifying sperm cell(s) in human testicular biopsy samples is labor intensive and time consuming. We developed a new computer-aided sperm analysis (CASA) system, which utilizes deep learning for near human-level performance on testicular sperm extraction (TESE), trained on a custom dataset. The system automates the identification of sperm in testicular biopsy samples. A dataset of 702 de-identified images from testicular biopsy samples of 30 patients was collected. Each image was normalized and passed through glare filters and diffraction correction. The data were split 80%, 10%, and 10% into training, validation, and test sets, respectively. Then, a deep object detection network, composed of a feature extraction network and object detection network, was trained on this dataset. The model was benchmarked against embryologists' performance on the detection task. Our deep learning CASA system achieved a mean average precision (mAP) of 0.741, with an average recall (AR) of 0.376 on our dataset. Our proposed method can work in real time; its speed is effectively limited only by the imaging speed of the microscope. Our results indicate that deep learning-based technologies can improve the efficiency of finding sperm in testicular biopsy samples.
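The mAP and AR figures rest on matching predicted boxes to ground truth at an intersection-over-union (IoU) threshold. The sketch below shows only that matching step with greedy assignment and toy boxes; full COCO-style mAP additionally averages over thresholds and recall levels.

```python
# Sketch of IoU-based matching of detections to ground truth, the basis
# of precision/recall (and hence mAP/AR). Greedy matching with toy boxes;
# not the authors' evaluation code.

def iou(a, b):
    """Boxes as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def match(preds, gts, thr=0.5):
    """preds assumed sorted by confidence; returns (precision, recall)."""
    used, tp = set(), 0
    for p in preds:
        best = max(range(len(gts)), key=lambda i: iou(p, gts[i]), default=None)
        if best is not None and best not in used and iou(p, gts[best]) >= thr:
            used.add(best)
            tp += 1
    return tp / max(len(preds), 1), tp / max(len(gts), 1)

print(match([(0, 0, 10, 10), (20, 20, 30, 30)], [(1, 1, 10, 10)]))  # (0.5, 1.0)
```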
Collapse
Affiliation(s)
- Daniel J Wu
- Department of Computer Science, Stanford University, Stanford, CA 94305, USA
| | - Odgerel Badamjav
- Department of Urology, University of Utah Health, Salt Lake City, UT 84108, USA
| | - Vikrant V Reddy
- Department of Obstetrics and Gynecology, Stanford Children's Health, Stanford, CA 94305, USA
| | - Michael Eisenberg
- Department of Urology, Stanford University School of Medicine, Stanford, CA 94305, USA
| | - Barry Behr
- Department of Obstetrics and Gynecology, Stanford Children's Health, Stanford, CA 94305, USA
| |
Collapse
|
233
|
Abstract
The interest in artificial intelligence (AI) has ballooned within radiology in the past few years primarily due to notable successes of deep learning. With the advances brought by deep learning, AI has the potential to recognize and localize complex patterns from different radiological imaging modalities, many of which even achieve comparable performance to human decision-making in recent applications. In this chapter, we review several AI applications in radiology for different anatomies: chest, abdomen, pelvis, as well as general lesion detection/identification that is not limited to specific anatomies. For each anatomy site, we focus on introducing the tasks of detection, segmentation, and classification with an emphasis on describing the technology development pathway with the aim of providing the reader with an understanding of what AI can do in radiology and what still needs to be done for AI to better fit in radiology. Combining with our own research experience of AI in medicine, we elaborate how AI can enrich knowledge discovery, understanding, and decision-making in radiology, rather than replacing the radiologist.
Collapse
|
234
|
Zheng Q, Yang L, Zeng B, Li J, Guo K, Liang Y, Liao G. Artificial intelligence performance in detecting tumor metastasis from medical radiology imaging: A systematic review and meta-analysis. EClinicalMedicine 2021; 31:100669. [PMID: 33392486 PMCID: PMC7773591 DOI: 10.1016/j.eclinm.2020.100669] [Citation(s) in RCA: 31] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/07/2020] [Revised: 11/14/2020] [Accepted: 11/17/2020] [Indexed: 02/06/2023] Open
Abstract
BACKGROUND Early diagnosis of tumor metastasis is crucial for clinical treatment. Artificial intelligence (AI) has shown great promise in the field of medicine. We therefore aimed to evaluate the diagnostic accuracy of AI algorithms in detecting tumor metastasis using medical radiology imaging. METHODS We searched PubMed and Web of Science for studies published from January 1, 1997, to January 30, 2020. Studies evaluating an AI model for the diagnosis of tumor metastasis from medical images were included. We excluded studies that used histopathology images or medical wave-form data and those focused on region-of-interest segmentation. Studies providing enough information to construct contingency tables were included in a meta-analysis. FINDINGS We identified 2620 studies, of which 69 were included. Among them, 34 studies were included in a meta-analysis with a pooled sensitivity of 82% (95% CI 79-84%), specificity of 84% (82-87%) and AUC of 0·90 (0·87-0·92). Analysis of the different AI algorithms showed a pooled sensitivity of 87% (83-90%) for machine learning and 86% (82-89%) for deep learning, and a pooled specificity of 89% (82-93%) for machine learning and 87% (82-91%) for deep learning. INTERPRETATION AI algorithms may be used for the diagnosis of tumor metastasis from medical radiology imaging with performance equivalent to, or even better than, that of health-care professionals in terms of sensitivity and specificity. At the same time, rigorous reporting standards with external validation and comparison to health-care professionals are urgently needed for AI applications in the medical field. FUNDING College Students' Innovative Entrepreneurial Training Plan Program.
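As a purely arithmetic illustration of pooling across studies, one can sum the contingency tables; note that a formal meta-analysis relies on proper (e.g., random-effects or bivariate) modeling, so the naive pooling below only conveys the idea.

```python
# Naive illustration of pooling sensitivity/specificity over studies by
# summing contingency tables. Toy numbers; a real meta-analysis would
# use random-effects/bivariate modeling rather than simple summation.
import numpy as np

# One row per study: true positives, false negatives, true negatives,
# false positives.
tables = np.array([[80, 15, 70, 12],
                   [45,  9, 60, 10],
                   [120, 30, 95, 20]])
tp, fn, tn, fp = tables.sum(axis=0)
print("pooled sensitivity", tp / (tp + fn))
print("pooled specificity", tn / (tn + fp))
```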
Collapse
Affiliation(s)
- Qiuhan Zheng
- Department of Oral and Maxillofacial Surgery, Guanghua School of Stomatology, Hospital of Stomatology, Sun Yat-sen University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Stomatology, Guangzhou, China
| | - Le Yang
- Department of Oral and Maxillofacial Surgery, Guanghua School of Stomatology, Hospital of Stomatology, Sun Yat-sen University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Stomatology, Guangzhou, China
| | - Bin Zeng
- Department of Oral and Maxillofacial Surgery, Guanghua School of Stomatology, Hospital of Stomatology, Sun Yat-sen University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Stomatology, Guangzhou, China
| | - Jiahao Li
- Department of Oral and Maxillofacial Surgery, Guanghua School of Stomatology, Hospital of Stomatology, Sun Yat-sen University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Stomatology, Guangzhou, China
| | - Kaixin Guo
- Department of Oral and Maxillofacial Surgery, Guanghua School of Stomatology, Hospital of Stomatology, Sun Yat-sen University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Stomatology, Guangzhou, China
| | - Yujie Liang
- Department of Oral and Maxillofacial Surgery, Guanghua School of Stomatology, Hospital of Stomatology, Sun Yat-sen University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Stomatology, Guangzhou, China
| | - Guiqing Liao
- Department of Oral and Maxillofacial Surgery, Guanghua School of Stomatology, Hospital of Stomatology, Sun Yat-sen University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Stomatology, Guangzhou, China
| |
Collapse
|
235
|
Sadeghzadeh H, Koohi S, Paranj AF. Free-Space Optical Neural Network Based on Optical Nonlinearity and Pooling Operations. IEEE ACCESS 2021; 9:146533-146549. [DOI: 10.1109/access.2021.3123230] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/19/2023]
|
236
|
Sari S, Soesanti I, Setiawan NA. Development of CAD System for Automatic Lung Nodule Detection: A Review. BIO WEB OF CONFERENCES 2021. [DOI: 10.1051/bioconf/20214104001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022] Open
Abstract
Lung cancer is a type of cancer that spreads rapidly and is the leading cause of cancer mortality globally, so a Computer-Aided Detection (CAD) system for automatic lung cancer detection has a significant influence on human survival. In this article, we summarize the relevant literature on CAD systems for lung cancer detection. A CAD system comprises preprocessing techniques, segmentation, lung nodule detection, and false-positive reduction with feature extraction. In evaluating the work on this topic, we considered the literature selected, the dataset used for method validation, the number of cases, the image size, the techniques used for nodule detection and feature extraction, the sensitivity, and the false-positive rate. The best-performing CAD systems in our analysis achieve high sensitivity with low false-positive rates for lung nodule detection, and they use large datasets, giving them improved accuracy and precision in detection. CNN-based methods are the most promising for lung nodule detection and merit further development: they have seen substantial growth in recent years and have yielded impressive outcomes. We hope this article will help professional researchers and radiologists in developing CAD systems for lung cancer detection.
Collapse
|
237
|
Applying Machine Learning for Integration of Multi-Modal Genomics Data and Imaging Data to Quantify Heterogeneity in Tumour Tissues. Methods Mol Biol 2021; 2190:209-228. [PMID: 32804368 DOI: 10.1007/978-1-0716-0826-5_10] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/27/2022]
Abstract
With rapid advances in experimental instruments and protocols, imaging and sequencing data are being generated at an unprecedented rate, contributing significantly to current and future big biomedical data. Meanwhile, unprecedented advances in computational infrastructure and analysis algorithms are realizing image-based digital diagnosis not only in radiology and cardiology but also in oncology and other diseases. Machine learning methods, especially deep learning techniques, are already broadly implemented in diverse technological and industrial sectors, but their applications in healthcare are just starting. Uniquely in biomedical research, a vast potential exists to integrate genomics data with histopathological imaging data. This integration has the potential to extend the pathologist's limits and boundaries, which may create breakthroughs in diagnosis, treatment, and monitoring at the molecular and tissue levels. Moreover, applications of genomics data are realizing the potential of personalized medicine, making diagnosis, treatment, monitoring, and prognosis more accurate. In this chapter, we discuss machine learning methods readily available for digital pathology applications, new prospects for integrating spatial genomics data with tissue morphology, and frontier approaches to combining genomics data with pathological imaging data. We present perspectives on how artificial intelligence can be synergized with molecular genomics and imaging to make breakthroughs in biomedical and translational research for computer-aided applications.
Collapse
|
238
|
U S, K. PT, K S. Computer aided diagnosis of obesity based on thermal imaging using various convolutional neural networks. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2020.102233] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
|
239
|
Madan S, Gandhi TK, Chaudhury S. Bone age assessment using metric learning on small dataset of hand radiographs. ADVANCED MACHINE VISION PARADIGMS FOR MEDICAL IMAGE ANALYSIS 2021:259-271. [DOI: 10.1016/b978-0-12-819295-5.00010-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 07/19/2023]
|
240
|
Drukker K, Yan P, Sibley A, Wang G. Biomedical imaging and analysis through deep learning. Artif Intell Med 2021. [DOI: 10.1016/b978-0-12-821259-2.00004-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/01/2022]
|
241
|
Chen D, Zhang X, Mei Y, Liao F, Xu H, Li Z, Xiao Q, Guo W, Zhang H, Yan T, Xiong J, Ventikos Y. Multi-stage learning for segmentation of aortic dissections using a prior aortic anatomy simplification. Med Image Anal 2020; 69:101931. [PMID: 33618153 DOI: 10.1016/j.media.2020.101931] [Citation(s) in RCA: 26] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2020] [Revised: 11/20/2020] [Accepted: 11/27/2020] [Indexed: 12/30/2022]
Abstract
Aortic dissection (AD) is a life-threatening cardiovascular disease with a high mortality rate. Accurate and generalizable 3-D reconstruction of AD from CT angiography could effectively assist clinical procedures and surgical planning; however, it is clinically unavailable due to the lack of efficient tools. In this study, we present a novel multi-stage segmentation framework for type B AD that extracts the true lumen (TL), false lumen (FL), and all branches (BR) as different classes. Two cascaded neural networks are used to segment the aortic trunk and branches and to separate the dual lumen, respectively. An aortic straightening method was designed based on the prior vascular anatomy of AD, simplifying the curved aortic shape before the second network. The straightening-based method achieved mean Dice scores of 0.96, 0.95, and 0.89 for TL, FL, and BR on a multi-center dataset involving 120 patients, outperforming end-to-end multi-class methods and multi-stage methods without straightening on the dual-lumen segmentation, even across different network architectures. Both the global volumetric features of the aorta and the local characteristics of the primary tear could be better identified and quantified after straightening. Compared to previous deep learning methods for AD segmentation, the proposed framework offers advantages in segmentation accuracy.
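The Dice score reported per class (TL, FL, BR) is the standard overlap measure; a small sketch with toy label maps:

```python
# Sketch of the per-class Dice score used above; standard definition
# applied to toy label maps, not the study's data.
import numpy as np

def dice(pred, truth, label):
    p, t = pred == label, truth == label
    denom = p.sum() + t.sum()
    return 2 * np.logical_and(p, t).sum() / denom if denom else 1.0

pred  = np.array([[1, 1, 2], [0, 2, 2], [3, 3, 0]])
truth = np.array([[1, 1, 2], [0, 2, 0], [3, 3, 3]])
for lbl, name in [(1, "TL"), (2, "FL"), (3, "BR")]:
    print(name, round(dice(pred, truth, lbl), 3))  # 1.0, 0.8, 0.8
```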
Collapse
Affiliation(s)
- Duanduan Chen
- School of Life Science, Beijing Institute of Technology, Beijing, China.
| | - Xuyang Zhang
- School of Life Science, Beijing Institute of Technology, Beijing, China
| | - Yuqian Mei
- School of Life Science, Beijing Institute of Technology, Beijing, China
| | - Fangzhou Liao
- Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China
| | - Huanming Xu
- School of Life Science, Beijing Institute of Technology, Beijing, China
| | - Zhenfeng Li
- School of Life Science, Beijing Institute of Technology, Beijing, China
| | - Qianjiang Xiao
- Shukun (Beijing) Network Technology Co.Ltd., Beijing, China
| | - Wei Guo
- Department of Vascular and Endovascular Surgery, Chinese PLA General Hospital, Beijing, China
| | - Hongkun Zhang
- Department of Vascular Surgery, First Affiliated Hospital of Medical College, Zhejiang University, Hangzhou, China
| | - Tianyi Yan
- School of Life Science, Beijing Institute of Technology, Beijing, China.
| | - Jiang Xiong
- Department of Vascular and Endovascular Surgery, Chinese PLA General Hospital, Beijing, China.
| | - Yiannis Ventikos
- Department of Mechanical Engineering, University College London, London, UK; School of Life Science, Beijing Institute of Technology, Beijing, China
| |
Collapse
|
242
|
Zeng F, Liang X, Chen Z. New Roles for Clinicians in the Age of Artificial Intelligence. BIO INTEGRATION 2020. [DOI: 10.15212/bioi-2020-0014] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022] Open
Abstract
With the rapid development of digital picture processing, pattern recognition, and intelligent algorithms, artificial intelligence (AI) has been widely applied in the medical field. The applications of artificial intelligence in medicine (AIM) include diagnosis generation, therapy selection, healthcare management, disease stratification, etc. Among these applications, the focus of AIM is assisting clinicians in implementing disease detection, quantitative measurement, and differential diagnosis to improve diagnostic accuracy and optimize treatment selection. Thus, researchers focus on creating and refining modeling processes, including data collection, data preprocessing, and data partitioning, as well as how models are configured, evaluated, optimized, clinically applied, and used for training. However, there is little research that considers the clinician in the age of AI. Meanwhile, in some head-to-head comparisons, AI has been more accurate and faster than clinicians in diagnosis, and AIM is gradually becoming a hot topic: barely a day goes by without a claim that AI techniques are poised to replace most of today's professionals. Despite the huge promise surrounding this technology, AI alone cannot support all the requirements of precision medicine; rather, AI should be used in cohesive collaboration with clinicians. However, the integration of AIM has created confusion among clinicians about their role in this era. Therefore, it is necessary to explore new roles for clinicians in the age of AI.
Statement of significance: With the advent of the era of AI, the integration of the medical field and AI is on the rise. Medicine has undergone significant changes, and what was previously labor-intensive work is now being solved through intelligent means. This change has also raised concerns among scholars: Will doctors eventually be replaced by AI? From this perspective, this study elaborates on the reasons why AI cannot replace doctors and points out how doctors should change their roles to accelerate the integration of these fields, so as to adapt to the developing times.
Collapse
Affiliation(s)
- Fengyi Zeng
- Department of Ultrasound Medicine, Laboratory of Ultrasound Molecular Imaging, The Third Affiliated Hospital of Guangzhou Medical University, The Liwan Hospital of the Third Affiliated Hospital of Guangzhou Medical University, Guangzhou, Guangdong 510000, China
| | - Xiaowen Liang
- Department of Ultrasound Medicine, Laboratory of Ultrasound Molecular Imaging, The Third Affiliated Hospital of Guangzhou Medical University, The Liwan Hospital of the Third Affiliated Hospital of Guangzhou Medical University, Guangzhou, Guangdong 510000, China
| | - Zhiyi Chen
- Department of Ultrasound Medicine, Laboratory of Ultrasound Molecular Imaging, The Third Affiliated Hospital of Guangzhou Medical University, The Liwan Hospital of the Third Affiliated Hospital of Guangzhou Medical University, Guangzhou, Guangdong 510000, China
| |
Collapse
|
243
Zheng S, Lin X, Zhang W, He B, Jia S, Wang P, Jiang H, Shi J, Jia F. MDCC-Net: Multiscale double-channel convolution U-Net framework for colorectal tumor segmentation. Comput Biol Med 2020; 130:104183. [PMID: 33360107] [DOI: 10.1016/j.compbiomed.2020.104183]
Abstract
PURPOSE Multiscale feature fusion is a feasible way to improve tumor segmentation accuracy, but current multiscale networks share two common problems: (1) some allow feature fusion only between encoders and decoders of the same scale, which is clearly insufficient; and (2) some have so many dense skip connections and so much nesting between the coding and decoding layers that features are lost and too little is learned across scales. To overcome these problems, we propose a multiscale double-channel convolution U-Net (MDCC-Net) framework for colorectal tumor segmentation. METHODS In the coding layer, we designed a dual-channel separation and convolution module and added residual connections to perform multiscale feature fusion on the input image and on the feature map after dual-channel separation and convolution. By fusing features at different scales within the same coding layer, the network can fully extract the fine detail of the original image and learn more tumor boundary information. RESULTS The segmentation results show that our proposed method is highly accurate, with a Dice similarity coefficient (DSC) of 83.57%, an improvement of 9.59%, 6.42%, and 1.57% over nnU-Net, U-Net, and U-Net++, respectively. CONCLUSION The experiments show that our proposed method performs well in colorectal tumor segmentation, approaches expert level, and has potential clinical applicability.
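The dual-channel idea can be made concrete with a minimal sketch: split the incoming feature channels into two groups, convolve each at a different receptive field, re-fuse them, and add a residual shortcut. The kernel sizes, channel widths, and PyTorch framing below are illustrative assumptions, not the authors' published configuration.

```python
import torch
import torch.nn as nn

class DualChannelBlock(nn.Module):
    """Sketch of a dual-channel separation-and-convolution encoder block
    with a residual connection (settings are illustrative guesses)."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        half = in_ch // 2
        # Two parallel paths over the separated channel halves,
        # each with a different receptive field
        self.path_small = nn.Sequential(
            nn.Conv2d(half, out_ch // 2, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch // 2), nn.ReLU(inplace=True))
        self.path_large = nn.Sequential(
            nn.Conv2d(in_ch - half, out_ch // 2, kernel_size=5, padding=2),
            nn.BatchNorm2d(out_ch // 2), nn.ReLU(inplace=True))
        # 1x1 projection so the residual shortcut matches the output width
        self.shortcut = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        half = x.shape[1] // 2
        a, b = x[:, :half], x[:, half:]                   # channel separation
        fused = torch.cat([self.path_small(a),
                           self.path_large(b)], dim=1)    # multiscale fusion
        return fused + self.shortcut(x)                   # residual connection

# Example: one encoder stage on a feature map embedded to 32 channels
x = torch.randn(1, 32, 128, 128)
print(DualChannelBlock(32, 64)(x).shape)  # torch.Size([1, 64, 128, 128])
```

In a full MDCC-Net-style encoder, several such blocks would be stacked with downsampling between them, feeding a U-Net decoder.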
Affiliation(s)
- Suichang Zheng: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Xue Lin: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Department of Radiology, The Second Affiliated Hospital of Harbin Medical University, Harbin, China
- Weifeng Zhang: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Baochun He: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Shuangfu Jia: Department of Operating Room, Hejian People's Hospital, Cangzhou, China
- Ping Wang: Department of Surgery, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- Huijie Jiang: Department of Radiology, The Second Affiliated Hospital of Harbin Medical University, Harbin, China
- Jingjing Shi: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Fucang Jia: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
244
Alam M, Samad MD, Vidyaratne L, Glandon A, Iftekharuddin KM. Survey on Deep Neural Networks in Speech and Vision Systems. Neurocomputing 2020; 417:302-321. [PMID: 33100581] [PMCID: PMC7584105] [DOI: 10.1016/j.neucom.2020.07.053]
Abstract
This survey presents a review of state-of-the-art deep neural network architectures, algorithms, and systems in vision and speech applications. Recent advances in deep artificial neural network algorithms and architectures have spurred rapid innovation and development of intelligent vision and speech systems. With the availability of vast amounts of sensor data and of cloud computing for processing and training deep neural networks, and with increased sophistication in mobile and embedded technology, next-generation intelligent systems are poised to revolutionize personal and commercial computing. The survey begins with the background and evolution of some of the most successful deep learning models for intelligent vision and speech systems to date, and then provides an overview of large-scale industrial research and development efforts to emphasize future trends and prospects. Robust and efficient intelligent systems demand low latency and high fidelity on resource-constrained hardware platforms such as mobile devices, robots, and automobiles. The survey therefore also summarizes key challenges and recent successes in running deep neural networks on hardware-restricted platforms, i.e., within limited memory, battery life, and processing capability. Finally, emerging applications of vision and speech across disciplines such as affective computing, intelligent transportation, and precision medicine are discussed. To our knowledge, this paper provides one of the most comprehensive surveys of the latest developments in intelligent vision and speech applications from the perspective of both software and hardware systems. Many of these emerging technologies show tremendous promise to revolutionize research and development for future vision and speech systems.
Affiliation(s)
- M. D. Samad: Department of Computer Science, Tennessee State University, Nashville, TN 37209
- K. M. Iftekharuddin: Department of Computer Science, Tennessee State University, Nashville, TN 37209
245
Zhang J, Li X, Li Y, Wang M, Huang B, Yao S, Shen L. Three dimensional convolutional neural network-based classification of conduct disorder with structural MRI. Brain Imaging Behav 2020; 14:2333-2340. [PMID: 31538277] [DOI: 10.1007/s11682-019-00186-5]
Abstract
Conduct disorder (CD) is a common child and adolescent psychiatric disorder with varied presenting symptoms that can impose a long-term burden on patients and society. Recently, an increasing number of studies have used deep learning approaches, such as convolutional neural networks (CNNs), to analyze neuroimaging data and identify biomarkers. In this study, we applied an optimized 3D AlexNet CNN to automatically extract multi-layer, high-dimensional features from structural magnetic resonance imaging (sMRI) and to distinguish CD from healthy controls (HCs). We acquired high-resolution sMRI from 60 CD patients and 60 age- and gender-matched HCs. All subjects were male; the age (mean ± SD) of the CD and HC groups was 15.3 ± 1.0 and 15.5 ± 0.7 years, respectively. Five-fold cross-validation (CV) was used to train and test the model, and its receiver operating characteristic (ROC) curve was compared with that of a support vector machine (SVM) model. Feature visualization was performed to gain intuition about the sMRI features learned by our AlexNet model. The proposed model achieved high classification performance, with an accuracy of 0.85, specificity of 0.82, and sensitivity of 0.87. The area under the ROC curve (AUC) of AlexNet was 0.86, significantly higher than that of the SVM (AUC = 0.78; p = 0.046). Saliency maps for each convolutional layer highlighted distinct brain regions in the sMRI of CD, mainly the frontal lobe, superior temporal gyrus, parietal lobe, and occipital lobe. These results indicate that a deep learning-based method can uncover hidden features in the sMRI of CD and might assist clinicians in its diagnosis.
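To give a feel for this kind of pipeline, the sketch below is a minimal 3D AlexNet-style volume classifier. The layer counts, channel widths, and input size (96^3 voxels) are assumptions for illustration; the paper's optimized architecture and preprocessing may differ.

```python
import torch
import torch.nn as nn

class AlexNet3D(nn.Module):
    """Minimal 3D AlexNet-style classifier for structural MRI volumes
    (all sizes are illustrative assumptions)."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(inplace=True), nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True), nn.MaxPool3d(2),
            nn.Conv3d(64, 128, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(4))  # fixed 4x4x4 output regardless of input
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Dropout(0.5),
            nn.Linear(128 * 4 ** 3, 256), nn.ReLU(inplace=True),
            nn.Linear(256, num_classes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# One volume: (batch, channel, depth, height, width)
logits = AlexNet3D()(torch.randn(1, 1, 96, 96, 96))
print(logits.shape)  # torch.Size([1, 2])
```

Training would wrap a model like this in a five-fold cross-validation loop, as the study describes, holding out one fold per iteration for testing.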
Affiliation(s)
- Jianing Zhang: School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, People's Republic of China
- Xuechen Li: Computer Vision Institute, School of Computer Science and Software Engineering, Shenzhen University, Shenzhen, People's Republic of China
- Yuexiang Li: Computer Vision Institute, School of Computer Science and Software Engineering, Shenzhen University, Shenzhen, People's Republic of China
- Mingyu Wang: School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, People's Republic of China
- Bingsheng Huang: School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, People's Republic of China; Medical Psychological Center, Second Xiangya Hospital, Central South University, Changsha, People's Republic of China
- Shuqiao Yao: Medical Psychological Center, Second Xiangya Hospital, Central South University, Changsha, People's Republic of China
- Linlin Shen: Computer Vision Institute, School of Computer Science and Software Engineering, Shenzhen University, Shenzhen, People's Republic of China
246
Xu J, Yang W, Wan C, Shen J. Weakly supervised detection of central serous chorioretinopathy based on local binary patterns and discrete wavelet transform. Comput Biol Med 2020; 127:104056. [PMID: 33096297] [DOI: 10.1016/j.compbiomed.2020.104056]
Abstract
Central serous chorioretinopathy (CSCR) is a common fundus disease, and early detection is of great importance to prevent visual loss. This paper therefore presents a novel automatic detection method that integrates discrete wavelet transform (DWT) image decomposition, local binary pattern (LBP) texture feature extraction, and multi-instance learning (MIL). LBP is selected for its robustness to low-contrast, low-quality images, which reduces the interference of image quality with detection. DWT decomposition supplies high-frequency components rich in detail for LBP feature extraction, removing redundant information in the raw image that is unnecessary for diagnosing CSCR. Using MIL avoids the tedious task of accurately locating and segmenting CSCR lesions. Experiments on 358 optical coherence tomography (OCT) B-scan images demonstrate the effectiveness of the method: even with a single threshold, an accuracy of 99.58% is obtained at K = 35 using only a high-frequency feature-fusion scheme, which is competitive with existing methods. With further refinements, namely multi-threshold optimization (MTO) and integrated decision-making (IDM), performance improves further, reaching a detection accuracy of 100% at K = 40.
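As a rough illustration of the feature-extraction stage, the sketch below computes uniform LBP histograms on the high-frequency sub-bands of a single-level 2-D DWT using PyWavelets and scikit-image. The wavelet choice ('haar'), the LBP parameters, and fusion by histogram concatenation are assumptions; the paper's MIL classifier and its multi-threshold optimization are omitted.

```python
import numpy as np
import pywt
from skimage.feature import local_binary_pattern

def lbp_dwt_features(img: np.ndarray, P: int = 8, R: float = 1.0) -> np.ndarray:
    """LBP histograms over the high-frequency DWT sub-bands of one B-scan
    (wavelet, P, R, and concatenation fusion are illustrative choices)."""
    # Single-level 2-D DWT: approximation + (horizontal, vertical, diagonal)
    _, (cH, cV, cD) = pywt.dwt2(img.astype(float), 'haar')
    feats = []
    for band in (cH, cV, cD):
        # Uniform LBP yields P + 2 distinct code values
        codes = local_binary_pattern(band, P, R, method='uniform')
        hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
        feats.append(hist)
    return np.concatenate(feats)  # fused high-frequency texture descriptor

# Example on a synthetic 256x256 "B-scan"
rng = np.random.default_rng(0)
print(lbp_dwt_features(rng.random((256, 256))).shape)  # (30,)
```

Under MIL, each such descriptor would become one instance in a bag, with the bag labeled positive if any instance shows CSCR-like texture.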
Affiliation(s)
- Jianguo Xu: College of Mechanical & Electrical Engineering, Nanjing University of Aeronautics & Astronautics, 210016, Nanjing, PR China
- Weihua Yang: The Affiliated Eye Hospital of Nanjing Medical University, 210029, Nanjing, PR China
- Cheng Wan: College of Electronic and Information Engineering, Nanjing University of Aeronautics & Astronautics, 211106, Nanjing, PR China
- Jianxin Shen: College of Mechanical & Electrical Engineering, Nanjing University of Aeronautics & Astronautics, 210016, Nanjing, PR China
247
Liu C, Xie H, Zhang S, Mao Z, Sun J, Zhang Y. Misshapen Pelvis Landmark Detection With Local-Global Feature Learning for Diagnosing Developmental Dysplasia of the Hip. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:3944-3954. [PMID: 32746137] [DOI: 10.1109/tmi.2020.3008382]
Abstract
Developmental dysplasia of the hip (DDH) is one of the most common orthopedic disorders in infants and young children, and accurately detecting and identifying the misshapen anatomical landmarks plays a crucial role in its diagnosis. However, variability during calcification and deformity due to dislocation make the misshapen pelvis landmarks difficult to detect, for both human experts and computers. In general, the anatomical landmarks exhibit stable morphological features in local regions and rigid structural features over long ranges, which serve as strong identifying cues. In this paper, we exploit both local morphological features and global structural features for misshapen landmark detection with a novel Pyramid Non-local UNet (PN-UNet). First, we mine the local morphological features with a series of convolutional neural network (CNN) stacks, converting the detection of a landmark into UNet segmentation of the landmark's local neighborhood. Second, a non-local module is employed to capture the global structural features with high-level structural knowledge. With end-to-end, accurate detection of pelvis landmarks, we realize a fully automatic and highly reliable diagnosis of DDH. In addition, a dataset of 10,000 pelvis X-ray images was constructed in our work; it is the first public dataset for diagnosing DDH and has already been released for open research. To the best of our knowledge, this is the first attempt to apply a deep learning method to the diagnosis of DDH. Experimental results show that our approach achieves excellent precision in landmark detection (average point-to-point error 0.9286 mm) and surpasses human experts in illness diagnosis. The project is available at http://imcc.ustc.edu.cn/project/ddh/.
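The long-range structure here is captured by a non-local module. Below is a minimal embedded-Gaussian non-local block of the standard kind (Wang et al., CVPR 2018) that such a network could insert into a UNet feature stage; the channel sizes and the placement inside the network are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class NonLocalBlock2D(nn.Module):
    """Embedded-Gaussian non-local block: every spatial position attends to
    every other, giving global structural context (sizes are assumptions)."""

    def __init__(self, ch: int):
        super().__init__()
        inter = ch // 2                      # common halved embedding width
        self.theta = nn.Conv2d(ch, inter, 1)
        self.phi = nn.Conv2d(ch, inter, 1)
        self.g = nn.Conv2d(ch, inter, 1)
        self.out = nn.Conv2d(inter, ch, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)   # (b, hw, c')
        k = self.phi(x).flatten(2)                     # (b, c', hw)
        v = self.g(x).flatten(2).transpose(1, 2)       # (b, hw, c')
        attn = torch.softmax(q @ k, dim=-1)            # pairwise affinities
        y = (attn @ v).transpose(1, 2).reshape(b, c // 2, h, w)
        return x + self.out(y)                         # residual fusion

feat = torch.randn(1, 64, 32, 32)       # a mid-level UNet feature map
print(NonLocalBlock2D(64)(feat).shape)  # torch.Size([1, 64, 32, 32])
```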
248
Tian Y, Fu S. A descriptive framework for the field of deep learning applications in medical images. Knowl Based Syst 2020. [DOI: 10.1016/j.knosys.2020.106445]
249
Hussain L, Nguyen T, Li H, Abbasi AA, Lone KJ, Zhao Z, Zaib M, Chen A, Duong TQ. Machine-learning classification of texture features of portable chest X-ray accurately classifies COVID-19 lung infection. Biomed Eng Online 2020; 19:88. [PMID: 33239006] [PMCID: PMC7686836] [DOI: 10.1186/s12938-020-00831-x]
Abstract
BACKGROUND The large volume and suboptimal image quality of portable chest X-rays (CXRs) taken during the COVID-19 pandemic could pose significant challenges for radiologists and frontline physicians. Artificial intelligence (AI) methods have the potential to improve diagnostic efficiency and accuracy for reading portable CXRs. PURPOSE The study aimed to develop an AI image-analysis tool to classify COVID-19 lung infection from portable CXRs. MATERIALS AND METHODS Public datasets of COVID-19 (N = 130), bacterial pneumonia (N = 145), non-COVID-19 viral pneumonia (N = 145), and normal (N = 138) CXRs were analyzed. Texture and morphological features were extracted, and five supervised machine-learning algorithms were used to classify COVID-19 from the other conditions in both two-class and multi-class settings. Statistical analysis used unpaired two-tailed t tests with unequal variance between groups, and classification performance was assessed with receiver-operating characteristic (ROC) curve analysis. RESULTS For two-class classification, the accuracy, sensitivity, and specificity were, respectively, 100%, 100%, and 100% for COVID-19 vs normal; 96.34%, 95.35%, and 97.44% for COVID-19 vs bacterial pneumonia; and 97.56%, 97.44%, and 97.67% for COVID-19 vs non-COVID-19 viral pneumonia. For multi-class classification, the combined accuracy and AUC were 79.52% and 0.87, respectively. CONCLUSION AI classification of texture and morphological features of portable CXRs accurately distinguishes COVID-19 lung infection in multi-class datasets, and such methods have the potential to improve diagnostic efficiency and accuracy for portable CXRs.
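A toy version of this texture-plus-classifier recipe is sketched below using grey-level co-occurrence matrix (GLCM) descriptors and an RBF-SVM with five-fold cross-validation. GLCM is a standard texture choice standing in for the paper's unspecified feature set, and the morphological features are omitted; the graycomatrix/graycoprops spellings assume scikit-image >= 0.19.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def glcm_features(img_u8: np.ndarray) -> np.ndarray:
    """GLCM texture descriptors for one 8-bit CXR: contrast, homogeneity,
    energy, and correlation at two offsets and four angles (a standard
    recipe, not the paper's exact feature set)."""
    glcm = graycomatrix(img_u8, distances=[1, 2],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Toy two-class experiment on synthetic 64x64 "radiographs"
rng = np.random.default_rng(1)
X = np.array([glcm_features(rng.integers(0, 256, (64, 64), dtype=np.uint8))
              for _ in range(40)])
y = np.repeat([0, 1], 20)  # e.g. normal vs COVID-19
print(cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean())
```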
Affiliation(s)
- Lal Hussain: Department of Computer Science and IT, King Abdullah Campus, University of Azad Jammu and Kashmir, Muzaffarabad, 13100, Azad Kashmir, Pakistan; Department of Computer Science and IT, Neelum Campus, University of Azad Jammu and Kashmir, Athmuqam, 13230, Azad Kashmir, Pakistan
- Tony Nguyen: Department of Radiology, Renaissance School of Medicine at Stony Brook University, 101 Nicolls Rd, Stony Brook, NY, 11794, USA
- Haifang Li: Department of Radiology, Renaissance School of Medicine at Stony Brook University, 101 Nicolls Rd, Stony Brook, NY, 11794, USA
- Adeel A Abbasi: Department of Computer Science and IT, King Abdullah Campus, University of Azad Jammu and Kashmir, Muzaffarabad, 13100, Azad Kashmir, Pakistan
- Kashif J Lone: Department of Computer Science and IT, King Abdullah Campus, University of Azad Jammu and Kashmir, Muzaffarabad, 13100, Azad Kashmir, Pakistan
- Zirun Zhao: Department of Radiology, Renaissance School of Medicine at Stony Brook University, 101 Nicolls Rd, Stony Brook, NY, 11794, USA
- Mahnoor Zaib: Department of Computer Science and IT, Neelum Campus, University of Azad Jammu and Kashmir, Athmuqam, 13230, Azad Kashmir, Pakistan
- Anne Chen: Department of Radiology, Renaissance School of Medicine at Stony Brook University, 101 Nicolls Rd, Stony Brook, NY, 11794, USA
- Tim Q Duong: Department of Radiology, Renaissance School of Medicine at Stony Brook University, 101 Nicolls Rd, Stony Brook, NY, 11794, USA
250
Ramakrishna RR, Abd Hamid Z, Wan Zaki WMD, Huddin AB, Mathialagan R. Stem cell imaging through convolutional neural networks: current issues and future directions in artificial intelligence technology. PeerJ 2020; 8:e10346. [PMID: 33240655] [PMCID: PMC7680049] [DOI: 10.7717/peerj.10346]
Abstract
Stem cells are primitive precursor cells with the potential to develop into diverse mature and functional cell types throughout the developmental stages of life. Their remarkable potential has led to numerous medical discoveries and scientific breakthroughs, and stem cell-based therapy has emerged as a new subspecialty in medicine. One promising stem cell under investigation is the induced pluripotent stem cell (iPSC), obtained by genetically reprogramming mature cells into embryonic-like stem cells. iPSCs are used to study the onset of disease, drug development, and medical therapies. However, functional studies of iPSCs involve analyzing iPSC-derived colonies through manual identification, which is time-consuming, error-prone, and training-dependent; an automated instrument for colony analysis is therefore needed. Recently, artificial intelligence (AI) has emerged as a technology to tackle this challenge. In particular, deep learning, a subfield of AI, offers an automated platform for analyzing iPSC colonies and other colony-forming stem cells. Deep learning extracts and refines data features using a convolutional neural network (CNN), a type of multi-layered neural network with an established role in image recognition. CNNs can distinguish cells with high accuracy based on morphologic and textural changes, and thus have the potential to support a growing range of deep learning tasks aimed at solving various challenges in stem cell studies. This review discusses the progress and future of CNNs in stem cell imaging for therapy and research.
Affiliation(s)
- Ramanaesh Rao Ramakrishna: Biomedical Science Programme and Centre for Diagnostic, Therapeutic and Investigative Science, Faculty of Health Sciences, Universiti Kebangsaan Malaysia, Kuala Lumpur, Malaysia
- Zariyantey Abd Hamid: Biomedical Science Programme and Centre for Diagnostic, Therapeutic and Investigative Science, Faculty of Health Sciences, Universiti Kebangsaan Malaysia, Kuala Lumpur, Malaysia
- Wan Mimi Diyana Wan Zaki: Department of Electrical, Electronic & Systems Engineering, Faculty of Engineering & Built Environment, Universiti Kebangsaan Malaysia, Bangi, Selangor, Malaysia
- Aqilah Baseri Huddin: Department of Electrical, Electronic & Systems Engineering, Faculty of Engineering & Built Environment, Universiti Kebangsaan Malaysia, Bangi, Selangor, Malaysia
- Ramya Mathialagan: Biomedical Science Programme and Centre for Diagnostic, Therapeutic and Investigative Science, Faculty of Health Sciences, Universiti Kebangsaan Malaysia, Kuala Lumpur, Malaysia