51
52
Shaban M, Ogur Z, Mahmoud A, Switala A, Shalaby A, Abu Khalifeh H, Ghazal M, Fraiwan L, Giridharan G, Sandhu H, El-Baz AS. A convolutional neural network for the screening and staging of diabetic retinopathy. PLoS One 2020;15:e0233514. PMID: 32569310; PMCID: PMC7307769; DOI: 10.1371/journal.pone.0233514.
Abstract
Diabetic retinopathy (DR) is a serious retinal disease and a leading cause of blindness worldwide. Ophthalmologists use optical coherence tomography (OCT) and fundus photography to assess retinal thickness and structure and to detect edema, hemorrhage, and scars. Deep learning models are mainly used to analyze OCT or fundus images, extracting features unique to each stage of DR in order to classify images and stage the disease. In this paper, a deep convolutional neural network (CNN) with 18 convolutional layers and 3 fully connected layers is proposed to analyze fundus images and automatically distinguish between controls (no DR), moderate DR (a combination of mild and moderate non-proliferative DR (NPDR)), and severe DR (a group of severe NPDR and proliferative DR (PDR)), achieving a validation accuracy of 88%-89%, a sensitivity of 87%-89%, a specificity of 94%-95%, and a quadratic weighted kappa of 0.91-0.92 under 5-fold and 10-fold cross-validation, respectively. A pre-processing stage applied image resizing and class-specific data augmentation. The proposed approach is considerably accurate in objectively diagnosing and grading diabetic retinopathy, which obviates the need for a retina specialist and expands access to retinal care. This technology enables both early diagnosis and objective tracking of disease progression, which may help optimize medical therapy to minimize vision loss.
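The quadratic weighted kappa reported above is a standard agreement metric for ordinal grading tasks such as DR staging: it penalizes disagreements by the squared distance between grades. As a refresher, here is a minimal NumPy sketch of the metric; the function name and the 3-class integer coding are illustrative, not taken from the paper.

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_classes=3):
    """Agreement between integer-coded gradings, with disagreements
    penalized by the squared distance between grades."""
    observed = np.zeros((n_classes, n_classes))      # confusion matrix
    for t, p in zip(y_true, y_pred):
        observed[t, p] += 1
    weights = np.array([[(i - j) ** 2 / (n_classes - 1) ** 2
                         for j in range(n_classes)]
                        for i in range(n_classes)])
    # agreement expected by chance from the marginal grade frequencies
    expected = np.outer(observed.sum(axis=1),
                        observed.sum(axis=0)) / observed.sum()
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()
```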
Affiliations
- Mohamed Shaban: Electrical and Computer Engineering, University of South Alabama, Mobile, AL, USA
- Zeliha Ogur: Bioengineering Department, University of Louisville, Louisville, KY, USA
- Ali Mahmoud: Bioengineering Department, University of Louisville, Louisville, KY, USA
- Andrew Switala: Bioengineering Department, University of Louisville, Louisville, KY, USA
- Ahmed Shalaby: Bioengineering Department, University of Louisville, Louisville, KY, USA
- Guruprasad Giridharan: Bioengineering Department, University of Louisville, Louisville, KY, USA
- Harpal Sandhu: Department of Ophthalmology and Visual Sciences, University of Louisville, Louisville, KY, USA
- Ayman S. El-Baz: Bioengineering Department, University of Louisville, Louisville, KY, USA
53
Rajan SP. Recognition of Cardiovascular Diseases through Retinal Images Using Optic Cup to Optic Disc Ratio. Pattern Recognition and Image Analysis 2020. DOI: 10.1134/s105466182002011x.
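The cup-to-disc ratio (CDR) used in this entry is the extent of the optic cup relative to that of the optic disc in a fundus image. Assuming binary cup and disc segmentation masks are already available (the entry does not describe the segmentation step), a minimal sketch of the vertical CDR computation might look like this; the paper's exact decision rule is not reproduced here.

```python
import numpy as np

def vertical_cup_to_disc_ratio(cup_mask, disc_mask):
    """Vertical CDR from binary H x W segmentation masks.

    Assumes both masks are non-empty; segmentation itself is out of scope.
    """
    cup_rows = np.flatnonzero(cup_mask.any(axis=1))    # rows containing cup
    disc_rows = np.flatnonzero(disc_mask.any(axis=1))  # rows containing disc
    cup_height = cup_rows.max() - cup_rows.min() + 1
    disc_height = disc_rows.max() - disc_rows.min() + 1
    return cup_height / disc_height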
54
Shu X, Zhang L, Wang Z, Lv Q, Yi Z. Deep Neural Networks With Region-Based Pooling Structures for Mammographic Image Classification. IEEE Transactions on Medical Imaging 2020;39:2246-2255. PMID: 31985411; DOI: 10.1109/tmi.2020.2968397.
Abstract
Breast cancer is one of the most frequently diagnosed solid cancers, and mammography is the most commonly used screening technology for detecting it. Traditional machine learning methods for mammographic image classification or segmentation rely on manual features and require a large quantity of manually annotated segmentation data to train the model and test the results. Manual labeling is expensive, time-consuming, and laborious, and greatly increases the cost of system construction. To reduce this cost and the workload of radiologists, an end-to-end full-image mammogram classification method based on deep neural networks is proposed that can be trained without bounding-box or mask ground-truth labels. The only label required is the image-level classification, which is relatively easy to collect from diagnostic reports. Because breast lesions usually occupy only a fraction of the area visualized in a mammogram, different pooling structures are proposed for convolutional neural networks (CNNs) in place of common pooling methods: the image is divided into regions, and the few regions with the highest probability of malignancy are selected to represent the whole image. The proposed pooling structures can be applied to most CNN-based models and may greatly improve their performance on mammographic image data with the same input. Experimental results on the publicly available INbreast and CBIS datasets indicate that the proposed pooling structures perform satisfactorily compared with previous state-of-the-art mammographic image classifiers and a detection algorithm that uses segmentation annotations.
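The region-based pooling idea in the abstract, letting a few suspicious regions rather than the whole image drive the classifier, can be sketched as a top-k pooling layer. The PyTorch sketch below is a simplified stand-in, not the authors' exact structure: it treats each spatial position of the final feature map as a region and averages the k largest responses per channel.

```python
import torch
import torch.nn as nn

class TopKRegionPooling(nn.Module):
    """Keep only the k highest responses per channel, so a few
    high-scoring regions represent the whole mammogram."""
    def __init__(self, k: int = 4):
        super().__init__()
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, H, W) feature map from a CNN backbone
        b, c, h, w = x.shape
        flat = x.view(b, c, h * w)               # each location = one "region"
        topk, _ = flat.topk(min(self.k, h * w), dim=2)
        return topk.mean(dim=2)                  # (batch, channels)
```

In use, such a layer would replace the global average pooling between a CNN backbone and its final classifier, leaving the rest of the model unchanged.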
55
Wang L, Zhang L, Zhu M, Qi X, Yi Z. Automatic diagnosis for thyroid nodules in ultrasound images by deep neural networks. Med Image Anal 2020;61:101665. DOI: 10.1016/j.media.2020.101665.
57
Scruggs BA, Chan RVP, Kalpathy-Cramer J, Chiang MF, Campbell JP. Artificial Intelligence in Retinopathy of Prematurity Diagnosis. Transl Vis Sci Technol 2020;9:5. PMID: 32704411; PMCID: PMC7343673; DOI: 10.1167/tvst.9.2.5.
Abstract
Retinopathy of prematurity (ROP) is a leading cause of childhood blindness worldwide. The diagnosis of ROP is subclassified by zone, stage, and plus disease, each of which shows significant intra- and inter-expert subjectivity and disagreement. Beyond improved efficiency of ROP screening, artificial intelligence may enable automated, quantifiable, and objective diagnosis of ROP. This review focuses on the development of artificial intelligence for automated diagnosis of plus disease in ROP and highlights the clinical and technical challenges of both developing and implementing artificial intelligence in the real world.
Affiliations
- Brittni A. Scruggs: Casey Eye Institute, Department of Ophthalmology, Oregon Health & Science University, Portland, OR, USA
- R. V. Paul Chan: Department of Ophthalmology, University of Illinois, Chicago, IL, USA
- Jayashree Kalpathy-Cramer: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital/Harvard Medical School, Boston, MA, USA
- Michael F. Chiang: Casey Eye Institute, Department of Ophthalmology, and Department of Medical Informatics and Clinical Epidemiology, Oregon Health & Science University, Portland, OR, USA
- J. Peter Campbell: Casey Eye Institute, Department of Ophthalmology, and Department of Medical Informatics and Clinical Epidemiology, Oregon Health & Science University, Portland, OR, USA
59
He T, Guo J, Chen N, Xu X, Wang Z, Fu K, Liu L, Yi Z. MediMLP: Using Grad-CAM to Extract Crucial Variables for Lung Cancer Postoperative Complication Prediction. IEEE J Biomed Health Inform 2019;24:1762-1771. PMID: 31670685; DOI: 10.1109/jbhi.2019.2949601.
Abstract
Lung cancer postoperative complication prediction (PCP) is significant for decreasing the perioperative mortality rate after lung cancer surgery. In this paper, we concentrate on two PCP tasks: (1) binary classification, predicting whether a patient will have postoperative complications; and (2) three-class multi-label classification, predicting which postoperative complication a patient will experience. An important clinical requirement of PCP is the extraction of crucial variables from electronic medical records. We propose a novel multi-layer perceptron (MLP) model, medical MLP (MediMLP), together with the gradient-weighted class activation mapping (Grad-CAM) algorithm for lung cancer PCP. MediMLP, which consists of one locally connected layer and fully connected layers with a shortcut connection, simultaneously extracts crucial variables and performs PCP tasks. Experimental results indicated that MediMLP outperformed a normal MLP on both PCP tasks and had performance comparable to existing feature selection methods. Using MediMLP and further experimental analysis, we found that the variable "time of indwelling drainage tube" was highly relevant to lung cancer postoperative complications.
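The pairing of a locally connected first layer with Grad-CAM is what lets the model score individual clinical variables: each first-layer activation corresponds to exactly one input variable, so Grad-CAM relevance over those activations reads out per-variable importance. Below is a hedged PyTorch sketch of that idea; MediMLPSketch and variable_relevance are illustrative names, and the real MediMLP's layer sizes and details differ.

```python
import torch
import torch.nn as nn

class MediMLPSketch(nn.Module):
    """Sketch, not the authors' exact model: a locally connected layer
    (one weight per variable) feeding fully connected layers with a
    shortcut connection, as the abstract describes."""
    def __init__(self, n_vars: int, n_hidden: int = 64, n_classes: int = 3):
        super().__init__()
        self.w = nn.Parameter(torch.randn(n_vars))   # one weight per variable
        self.b = nn.Parameter(torch.zeros(n_vars))
        self.fc = nn.Sequential(nn.Linear(n_vars, n_hidden), nn.ReLU(),
                                nn.Linear(n_hidden, n_vars))
        self.out = nn.Linear(n_vars, n_classes)

    def local(self, x):
        # locally connected layer: activation i depends only on variable i
        return torch.relu(x * self.w + self.b)

    def forward(self, x):
        a = self.local(x)
        return self.out(a + self.fc(a))              # shortcut connection

def variable_relevance(model, x, target_class):
    """Grad-CAM-style relevance: gradient of the class score w.r.t. the
    locally connected activations, weighted by those activations.
    (Call model.zero_grad() between repeated uses.)"""
    a = model.local(x)
    a.retain_grad()                                  # keep grad of non-leaf
    score = model.out(a + model.fc(a))[:, target_class].sum()
    score.backward()
    return (a.grad * a).relu().mean(dim=0)           # one score per variable
```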