101
Jiang Y, Pan J, Yuan M, Shen Y, Zhu J, Wang Y, Li Y, Zhang K, Yu Q, Xie H, Li H, Wang X, Luo Y. Segmentation of Laser Marks of Diabetic Retinopathy in the Fundus Photographs Using Lightweight U-Net. J Diabetes Res 2021; 2021:8766517. [PMID: 34712739 PMCID: PMC8548126 DOI: 10.1155/2021/8766517]
Abstract
Diabetic retinopathy (DR) is a prevalent vision-threatening disease worldwide. Laser marks are the scars left after panretinal photocoagulation, a treatment to prevent patients with severe DR from losing vision. In this study, we develop a deep learning algorithm based on a lightweight U-Net to segment laser marks from color fundus photos, which could help indicate the disease stage or provide valuable auxiliary information for the care of DR patients. We made our training and testing data, manually annotated by trained and experienced graders from the Image Reading Center, Zhongshan Ophthalmic Center, publicly available to fill the vacancy of public image datasets dedicated to the segmentation of laser marks. The lightweight U-Net, along with two postprocessing procedures, achieved an AUC of 0.9824, an optimal sensitivity of 94.16%, and an optimal specificity of 92.82% on the segmentation of laser marks in fundus photographs. With accurate segmentation and high numeric metrics, the lightweight U-Net method showed reliable performance in automatically segmenting laser marks in fundus photographs, which could help AI systems assist in the diagnosis of severe-stage DR.
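As a rough sketch of the approach described above, the following PyTorch snippet builds a two-level "lightweight" U-Net; the channel widths, depth, and input size are illustrative assumptions rather than the authors' published architecture, and the single-channel output is a per-pixel laser-mark logit to be thresholded into a segmentation mask.

```python
import torch
import torch.nn as nn

def block(cin, cout):
    # two 3x3 convolutions, as in standard U-Net encoder/decoder stages
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

class LightUNet(nn.Module):
    def __init__(self, base=8):  # small channel count keeps the model "lightweight"
        super().__init__()
        self.enc1, self.enc2 = block(3, base), block(base, base * 2)
        self.bott = block(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = block(base * 4, base * 2)   # input doubled by skip connection
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = block(base * 2, base)
        self.head = nn.Conv2d(base, 1, 1)       # per-pixel laser-mark logit

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bott(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

logits = LightUNet()(torch.randn(1, 3, 256, 256))
print(logits.shape)  # torch.Size([1, 1, 256, 256])
```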
Affiliation(s)
- Yukang Jiang
- State Key Laboratory of Ophthalmology, Image Reading Center, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangzhou 510060, China
- Department of Statistical Science, School of Mathematics, Southern China Research Center of Statistical Science, Sun Yat-Sen University, Guangzhou 510275, China
- Jianying Pan
- State Key Laboratory of Ophthalmology, Image Reading Center, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangzhou 510060, China
- Ming Yuan
- Department of Statistical Science, School of Mathematics, Southern China Research Center of Statistical Science, Sun Yat-Sen University, Guangzhou 510275, China
- Yanhe Shen
- Department of Statistical Science, School of Mathematics, Southern China Research Center of Statistical Science, Sun Yat-Sen University, Guangzhou 510275, China
- Jin Zhu
- Department of Statistical Science, School of Mathematics, Southern China Research Center of Statistical Science, Sun Yat-Sen University, Guangzhou 510275, China
- Yishen Wang
- State Key Laboratory of Ophthalmology, Image Reading Center, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangzhou 510060, China
- Yewei Li
- Department of Statistical Science, School of Mathematics, Southern China Research Center of Statistical Science, Sun Yat-Sen University, Guangzhou 510275, China
- Ke Zhang
- Department of Statistical Science, School of Mathematics, Southern China Research Center of Statistical Science, Sun Yat-Sen University, Guangzhou 510275, China
- Qingyun Yu
- Department of Statistical Science, School of Mathematics, Southern China Research Center of Statistical Science, Sun Yat-Sen University, Guangzhou 510275, China
- Huirui Xie
- State Key Laboratory of Ophthalmology, Image Reading Center, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangzhou 510060, China
- Huiting Li
- State Key Laboratory of Ophthalmology, Image Reading Center, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangzhou 510060, China
- Xueqin Wang
- Department of Statistical Science, School of Mathematics, Southern China Research Center of Statistical Science, Sun Yat-Sen University, Guangzhou 510275, China
- Department of Statistics and Finance, School of Management, University of Science and Technology of China, Hefei, Anhui 230026, China
- Xinhua College, Sun Yat-Sen University, Guangzhou 510520, China
- Yan Luo
- State Key Laboratory of Ophthalmology, Image Reading Center, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangzhou 510060, China
102
Hong N, Park Y, You SC, Rhee Y. AIM in Endocrinology. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_328-1]
103
Cao B, Zhang N, Zhang Y, Fu Y, Zhao D. Plasma cytokines for predicting diabetic retinopathy among type 2 diabetic patients via machine learning algorithms. Aging (Albany NY) 2020; 13:1972-1988. [PMID: 33323553 PMCID: PMC7880388 DOI: 10.18632/aging.202168]
Abstract
AIMS This study aimed to investigate changes in plasma cytokines and to develop machine learning classifiers for predicting non-proliferative diabetic retinopathy (NPDR) among type 2 diabetes mellitus patients. RESULTS Twelve plasma cytokines were significantly higher in the NPDR group in the pilot cohort. The validation cohort showed that angiopoietin 1, platelet-derived growth factor-BB, tissue inhibitor of metalloproteinase 2, and vascular endothelial growth factor receptor 2 were significantly higher in the NPDR group. Among the machine learning algorithms, random forest yielded the best performance, with sensitivity of 92.3%, specificity of 75%, PPV of 82.8%, NPV of 88.2%, and area under the curve of 0.84. CONCLUSIONS Plasma angiopoietin 1, platelet-derived growth factor-BB, and vascular endothelial growth factor receptor 2 were associated with the presence of NPDR and may be good biomarkers that play important roles in the pathophysiology of diabetic retinopathy. MATERIALS AND METHODS In the pilot cohort, 60 plasma cytokines were measured simultaneously. In the validation cohort, angiopoietin 1, CXC-chemokine ligand 16, platelet-derived growth factor-BB, tissue inhibitor of metalloproteinase 1, tissue inhibitor of metalloproteinase 2, and vascular endothelial growth factor receptor 2 were validated using ELISA kits. Machine learning algorithms were developed to build a prediction model for NPDR.
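The classifier and metrics named in this abstract map directly onto scikit-learn; the sketch below uses synthetic stand-ins for the four validated cytokine features and is not the authors' pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 4))   # stand-ins for Ang-1, PDGF-BB, TIMP-2, VEGFR-2
y = (X[:, 0] + rng.normal(scale=1.0, size=120) > 0).astype(int)  # 1 = NPDR
X_tr, X_te, y_tr, y_te = X[:90], X[90:], y[:90], y[90:]

clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print("sensitivity", tp / (tp + fn), "specificity", tn / (tn + fp))
print("PPV", tp / (tp + fp), "NPV", tn / (tn + fn))
print("AUC", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```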
Affiliation(s)
- Bin Cao
- Center for Endocrine Metabolism and Immune Diseases, Beijing Luhe Hospital, Capital Medical University, Beijing 101149, China
- Beijing Key Laboratory of Diabetes Research and Care, Beijing 101149, China
- Ning Zhang
- Center for Endocrine Metabolism and Immune Diseases, Beijing Luhe Hospital, Capital Medical University, Beijing 101149, China
- Beijing Key Laboratory of Diabetes Research and Care, Beijing 101149, China
- Yuanyuan Zhang
- Center for Endocrine Metabolism and Immune Diseases, Beijing Luhe Hospital, Capital Medical University, Beijing 101149, China
- Beijing Key Laboratory of Diabetes Research and Care, Beijing 101149, China
- Ying Fu
- Center for Endocrine Metabolism and Immune Diseases, Beijing Luhe Hospital, Capital Medical University, Beijing 101149, China
- Beijing Key Laboratory of Diabetes Research and Care, Beijing 101149, China
- Dong Zhao
- Center for Endocrine Metabolism and Immune Diseases, Beijing Luhe Hospital, Capital Medical University, Beijing 101149, China
- Beijing Key Laboratory of Diabetes Research and Care, Beijing 101149, China
104
Sun J, Huang X, Egwuagu C, Badr Y, Dryden SC, Fowler BT, Yousefi S. Identifying Mouse Autoimmune Uveitis from Fundus Photographs Using Deep Learning. Transl Vis Sci Technol 2020; 9:59. [PMID: 33294300 PMCID: PMC7718814 DOI: 10.1167/tvst.9.2.59]
Abstract
Purpose To develop a deep learning model for objective evaluation of experimental autoimmune uveitis (EAU), the animal model of posterior uveitis that reveals its essential pathological features via fundus photographs. Methods We developed a deep learning construct to identify uveitis using reference mouse fundus images and further categorized the severity levels of disease into mild and severe EAU. We evaluated the performance of the model using the area under the receiver operating characteristic curve (AUC) and confusion matrices. We further assessed the clinical relevance of the model by visualizing the principal components of features at different layers and through the use of gradient-weighted class activation maps, which presented retinal regions having the most significant influence on the model. Results Our model was trained, validated, and tested on 1500 fundus images (training, 1200; validation, 150; testing, 150) and achieved an average AUC of 0.98 for identifying the normal, trace (small and local lesions), and disease classes (large and spreading lesions). The AUCs of the model using an independent subset with 180 images were 1.00 (95% confidence interval [CI], 0.99-1.00), 0.97 (95% CI, 0.94-0.99), and 0.96 (95% CI, 0.90-1.00) for the normal, trace and disease classes, respectively. Conclusions The proposed deep learning model is able to identify three severity levels of EAU with high accuracy. The model also achieved high accuracy on independent validation subsets, reflecting a substantial degree of generalizability. Translational Relevance The proposed model represents an important new tool for use in animal medical research and provides a step toward clinical uveitis identification in clinical practice.
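One way to reproduce the feature-space inspection described above (principal components of features at a given layer) is to pool a CNN trunk's activations and project them with PCA; the ResNet-18 backbone below is an assumed stand-in for the paper's model, not its actual network.

```python
import torch
from sklearn.decomposition import PCA
from torchvision.models import resnet18  # torchvision >= 0.13 weights API

model = resnet18(weights=None).eval()
trunk = torch.nn.Sequential(*list(model.children())[:-1])  # drop the fc head

images = torch.randn(16, 3, 224, 224)          # stand-in fundus batch
with torch.no_grad():
    feats = trunk(images).flatten(1).numpy()   # (16, 512) pooled features

coords = PCA(n_components=2).fit_transform(feats)
print(coords.shape)  # (16, 2): one 2-D point per image, ready to plot by class
```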
Affiliation(s)
- Jian Sun
- Molecular Immunology Section, Laboratory of Immunology, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Xiaoqin Huang
- The Pennsylvania State University Great Valley, Malvern, PA, USA
- Charles Egwuagu
- Molecular Immunology Section, Laboratory of Immunology, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Youakim Badr
- The Pennsylvania State University Great Valley, Malvern, PA, USA
- Siamak Yousefi
- University of Tennessee Health Science Center, Memphis, TN, USA
105
Tian Y, Fu S. A descriptive framework for the field of deep learning applications in medical images. Knowl Based Syst 2020. [DOI: 10.1016/j.knosys.2020.106445]
106
Son J, Shin JY, Chun EJ, Jung KH, Park KH, Park SJ. Predicting High Coronary Artery Calcium Score From Retinal Fundus Images With Deep Learning Algorithms. Transl Vis Sci Technol 2020; 9:28. [PMID: 33184590 PMCID: PMC7410115 DOI: 10.1167/tvst.9.2.28]
Abstract
Purpose To evaluate detection of high coronary artery calcium (CAC) accumulation from retinal fundus images with deep learning technologies as an inexpensive and radiation-free screening method. Methods Individuals who underwent bilateral retinal fundus imaging and CAC score (CACS) evaluation from coronary computed tomography scans on the same day were identified. With this database, the performance of deep learning algorithms (inception-v3) in distinguishing high CACS from a CACS of 0 was evaluated at various thresholds for high CACS. Vessel-inpainted and fovea-inpainted images were also used as input to investigate the areas of interest in determining CACS. Results A total of 44,184 images from 20,130 individuals were included. A deep learning algorithm for discrimination of no CAC from CACS >100 achieved an area under the receiver operating characteristic curve (AUROC) of 82.3% (79.5%-85.0%) and 83.2% (80.2%-86.3%) using unilateral and bilateral fundus images, respectively, under a 5-fold cross-validation setting. AUROC increased as the criterion for high CACS was raised, showing a plateau at 100 and losing significant improvement thereafter. AUROC decreased when the fovea was inpainted and decreased further when vessels were inpainted, whereas AUROC increased when bilateral images were used as input. Conclusions Visual patterns of retinal fundus images in subjects with CACS >100 could be recognized by deep learning algorithms compared with those with no CAC. Exploiting bilateral images improves discrimination performance, and ablation studies removing the retinal vasculature or fovea suggest that recognizable patterns reside mainly in these areas. Translational Relevance Retinal fundus images can be used by deep learning algorithms for prediction of high CACS.
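A minimal sketch of the inception-v3 set-up the abstract names, using torchvision; the re-initialized binary head (no CAC vs. CACS >100), the 0.4 auxiliary-loss weight, and the training details are assumptions, not the study's code.

```python
import torch
import torch.nn as nn
from torchvision.models import inception_v3

model = inception_v3(weights=None, init_weights=True)   # or ImageNet weights
model.fc = nn.Linear(model.fc.in_features, 2)           # CACS == 0 vs CACS > 100
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, 2)

x = torch.randn(2, 3, 299, 299)        # inception-v3 expects 299x299 inputs
y = torch.tensor([0, 1])
model.train()
out, aux = model(x)                    # training mode also returns aux logits
loss = nn.CrossEntropyLoss()(out, y) + 0.4 * nn.CrossEntropyLoss()(aux, y)
loss.backward()
```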
Affiliation(s)
- Joo Young Shin
- Department of Ophthalmology, Seoul National University College of Medicine, Seoul Metropolitan Government Seoul National University Boramae Medical Center, Seoul, Korea
- Eun Ju Chun
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, Korea
- Kyu Hyung Park
- Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, Seongnam, Korea
- Sang Jun Park
- Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, Seongnam, Korea
107
Li Z, Guo C, Nie D, Lin D, Zhu Y, Chen C, Zhao L, Wu X, Dongye M, Xu F, Jin C, Zhang P, Han Y, Yan P, Lin H. Deep learning from "passive feeding" to "selective eating" of real-world data. NPJ Digit Med 2020; 3:143. [PMID: 33145439 PMCID: PMC7603327 DOI: 10.1038/s41746-020-00350-y]
Abstract
Artificial intelligence (AI) based on deep learning has shown excellent diagnostic performance in detecting various diseases with good-quality clinical images. Recently, AI diagnostic systems developed from ultra-widefield fundus (UWF) images have become popular standard-of-care tools in screening for ocular fundus diseases. However, in real-world settings, these systems must base their diagnoses on images with uncontrolled quality ("passive feeding"), leading to uncertainty about their performance. Here, using 40,562 UWF images, we develop a deep learning-based image filtering system (DLIFS) for detecting and filtering out poor-quality images in an automated fashion such that only good-quality images are transferred to the subsequent AI diagnostic system ("selective eating"). In three independent datasets from different clinical institutions, the DLIFS performed well with sensitivities of 96.9%, 95.6% and 96.6%, and specificities of 96.6%, 97.9% and 98.8%, respectively. Furthermore, we show that the application of our DLIFS significantly improves the performance of established AI diagnostic systems in real-world settings. Our work demonstrates that "selective eating" of real-world data is necessary and needs to be considered in the development of image-based AI systems.
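The "selective eating" idea reduces to a two-stage cascade in which a quality model gates what reaches the diagnostic model. A schematic sketch follows; both predictor functions are hypothetical stand-ins for the DLIFS and the downstream diagnostic system.

```python
from typing import Callable, Iterable, List, Tuple

def selective_eating(images: Iterable,
                     predict_quality: Callable[[object], float],
                     predict_disease: Callable[[object], str],
                     threshold: float = 0.5) -> List[Tuple[str, object]]:
    results = []
    for img in images:
        if predict_quality(img) < threshold:     # poor quality: filter out,
            results.append(("recapture", None))  # request a new photograph
        else:
            results.append(("graded", predict_disease(img)))
    return results

# toy usage: the quality score is the "image" itself here
print(selective_eating([0.2, 0.9],
                       predict_quality=lambda x: x,
                       predict_disease=lambda x: "referable"))
# [('recapture', None), ('graded', 'referable')]
```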
Affiliation(s)
- Zhongwen Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
- Chong Guo
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
- Danyao Nie
- Shenzhen Eye Hospital, Shenzhen Key Laboratory of Ophthalmology, Affiliated Shenzhen Eye Hospital of Jinan University, 518001 Shenzhen, China
- Duoru Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
- Yi Zhu
- Department of Molecular and Cellular Pharmacology, University of Miami Miller School of Medicine, Miami, FL 33136 USA
- Chuan Chen
- Sylvester Comprehensive Cancer Centre, University of Miami Miller School of Medicine, Miami, FL 33136 USA
- Lanqin Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
- Xiaohang Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
- Meimei Dongye
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
- Fabao Xu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
- Chenjin Jin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
- Ping Zhang
- Xudong Ophthalmic Hospital, 015000 Inner Mongolia, China
- Yu Han
- EYE and ENT Hospital of Fudan University, 200031 Shanghai, China
- Pisong Yan
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
- Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
- Centre for Precision Medicine, Sun Yat-sen University, 510060 Guangzhou, China
108
Sex judgment using color fundus parameters in elementary school students. Graefes Arch Clin Exp Ophthalmol 2020; 258:2781-2789. [PMID: 33064194 DOI: 10.1007/s00417-020-04969-1]
Abstract
PURPOSE Recently, artificial intelligence has been used to determine sex from fundus photographs alone. We had earlier reported that sex can be distinguished using known factors obtained from color fundus photography (CFP) in adult eyes. However, it is not clear when the sex difference in fundus parameters begins. Therefore, we conducted this study to investigate sex determination based on fundus parameters using binomial logistic regression in elementary school students. METHODS This prospective observational cross-sectional study was conducted on 119 right eyes of elementary school students (aged 8 or 9 years; 59 boys and 60 girls). Through CFP, the tessellation fundus index was calculated as R/(R + G + B) using the mean values of red-green-blue intensity at eight locations around the optic disc. Optic disc ovality ratio, papillomacular angle, retinal artery trajectory, and retinal vessels were quantified based on our earlier reports. Regularized binomial logistic regression was applied to these variables to select the decisive factors, and its discriminative performance was evaluated using the leave-one-out cross-validation method. Sex differences in the parameters were assessed using the Mann-Whitney U test. RESULTS The optimal model yielded by the ridge binomial logistic regression suggested that the ovality ratio of girls was significantly smaller, whereas their nasal green and blue intensities were significantly higher, than those of boys. Using this approach, the area under the receiver-operating characteristic curve was 63.2%. CONCLUSIONS Although sex can be distinguished using CFP even in elementary school students, the discrimination accuracy was relatively low. Some sex differences in the ocular fundus may begin after the age of 10 years.
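The quantities in this abstract translate to a few lines of Python: the tessellation fundus index is a ratio of mean channel intensities, and the classifier is an L2-penalized (ridge) logistic regression assessed with leave-one-out cross-validation. The data below are synthetic stand-ins, not the study's measurements.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def tessellation_index(patch):
    # R / (R + G + B) from the mean RGB intensities of a fundus patch
    r, g, b = (patch[..., c].mean() for c in range(3))
    return r / (r + g + b)

rng = np.random.default_rng(0)
print(tessellation_index(rng.random((64, 64, 3))))   # ~0.33 for random noise

X = rng.normal(size=(119, 5))          # stand-in fundus parameters
y = rng.integers(0, 2, size=119)       # 1 = girl, 0 = boy
ridge_lr = LogisticRegression(penalty="l2", C=1.0)   # ridge-penalized model
prob = cross_val_predict(ridge_lr, X, y, cv=LeaveOneOut(),
                         method="predict_proba")[:, 1]
print("LOOCV AUC:", roc_auc_score(y, prob))
```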
109
Cho BH, Lee DY, Park KA, Oh SY, Moon JH, Lee GI, Noh H, Chung JK, Kang MC, Chung MJ. Computer-aided recognition of myopic tilted optic disc using deep learning algorithms in fundus photography. BMC Ophthalmol 2020; 20:407. [PMID: 33036582 PMCID: PMC7547463 DOI: 10.1186/s12886-020-01657-w]
Abstract
Background Myopic optic disc tilt must be considered because it seriously impacts normal ocular parameters. However, the relevant ophthalmologic measurements are subject to inter-observer variability and time-consuming to obtain. This study aimed to develop and evaluate deep learning models that automatically recognize a myopic tilted optic disc in fundus photography. Methods This study used 937 fundus photographs of patients with normal or myopic tilted discs, collected from Samsung Medical Center between April 2016 and December 2018. We developed an automated computer-aided recognition system for optic disc tilt on color fundus photographs via a deep learning algorithm. We preprocessed all images with two image-resizing techniques, and the GoogleNet Inception-v3 architecture was implemented. The performances of the models were compared with the human examiner's results. Activation map visualization was qualitatively analyzed using the generalized visualization technique based on gradient-weighted class activation mapping (Grad-CAM++). Results Nine hundred thirty-seven fundus images were collected and annotated from 509 subjects. In total, 397 images from eyes with tilted optic discs and 540 images from eyes with non-tilted optic discs were analyzed. We included data from both eyes of most patients and analyzed the eyes separately. For comparison, we conducted training using two aspect ratios, a simply resized dataset and an original aspect ratio (AR) preserving dataset, and evaluated the impact of augmentation for both datasets. The constructed deep learning models for myopic optic disc tilt achieved the best results when simple image resizing and augmentation were used: an area under the receiver operating characteristic curve (AUC) of 0.978 ± 0.008, an accuracy of 0.960 ± 0.010, sensitivity of 0.937 ± 0.023, and specificity of 0.963 ± 0.015. The heatmaps revealed that the model could effectively identify the locations of the optic discs, the superior retinal vascular arcades, and the retinal maculae. Conclusions We developed an automated deep learning-based system to detect optic disc tilt. The model demonstrated excellent agreement with the previous clinical criteria, and the results are promising for developing future programs that adjust for and identify the effect of optic disc tilt on ophthalmic measurements.
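The two resizing routes compared above can be sketched as a naive square resize (which distorts the circular fundus) versus an aspect-ratio-preserving resize with zero padding; the 299-pixel target (Inception-v3's usual input) and the centering are illustrative assumptions.

```python
import cv2
import numpy as np

def simple_resize(img, size=299):
    return cv2.resize(img, (size, size))   # squashes non-square images

def ar_preserving_resize(img, size=299):
    h, w = img.shape[:2]
    scale = size / max(h, w)
    resized = cv2.resize(img, (int(w * scale), int(h * scale)))
    canvas = np.zeros((size, size, 3), dtype=img.dtype)   # zero padding
    top = (size - resized.shape[0]) // 2
    left = (size - resized.shape[1]) // 2
    canvas[top:top + resized.shape[0], left:left + resized.shape[1]] = resized
    return canvas

fundus = np.zeros((1500, 2100, 3), dtype=np.uint8)   # stand-in photograph
print(simple_resize(fundus).shape, ar_preserving_resize(fundus).shape)
```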
Affiliation(s)
- Baek Hwan Cho
- Medical AI Research Center, Institute of Smart Healthcare, Samsung Medical Center, Seoul, Korea
- Department of Medical Device Management and Research, SAIHST, Sungkyunkwan University, Seoul, Korea
- Da Young Lee
- Medical AI Research Center, Institute of Smart Healthcare, Samsung Medical Center, Seoul, Korea
- Department of Digital Health, SAIHST, Sungkyunkwan University, Seoul, Korea
- Kyung-Ah Park
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Korea
- Sei Yeul Oh
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Korea
- Jong Hak Moon
- Medical AI Research Center, Institute of Smart Healthcare, Samsung Medical Center, Seoul, Korea
- Department of Medical Device Management and Research, SAIHST, Sungkyunkwan University, Seoul, Korea
- Ga-In Lee
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Korea
- Hoon Noh
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Korea
- Joon Kyo Chung
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Korea
- Min Chae Kang
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Korea
- Myung Jin Chung
- Medical AI Research Center, Institute of Smart Healthcare, Samsung Medical Center, Seoul, Korea
- Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
110
Artificial intelligence for diabetic retinopathy screening, prediction and management. Curr Opin Ophthalmol 2020; 31:357-365. [PMID: 32740069 DOI: 10.1097/icu.0000000000000693]
Abstract
PURPOSE OF REVIEW Diabetic retinopathy is the most common specific complication of diabetes mellitus. Traditional care for patients with diabetes and diabetic retinopathy is fragmented, uncoordinated, and delivered in a piecemeal fashion, often in the most expensive and high-resource tertiary settings. Transformative new models incorporating digital technology are needed to address these gaps in clinical care. RECENT FINDINGS Artificial intelligence and telehealth may improve access to, and the financial sustainability and coverage of, diabetic retinopathy screening programs. They enable risk stratification of patients based on individual risk of vision-threatening diabetic retinopathy, including diabetic macular edema (DME), and prediction of which patients with DME respond best to antivascular endothelial growth factor therapy. SUMMARY Progress in artificial intelligence and tele-ophthalmology for diabetic retinopathy screening, including artificial intelligence applications in 'real-world settings' and cost-effectiveness studies, is summarized. Furthermore, initial research on the use of artificial intelligence models for diabetic retinopathy risk stratification and management of DME is outlined, along with potential future directions. Finally, the need for artificial intelligence adoption within ophthalmology in response to coronavirus disease 2019 is discussed. Digital health solutions such as artificial intelligence and telehealth can facilitate the integration of community, primary, and specialist eye care services, optimize the flow of patients within healthcare networks, and improve the efficiency of diabetic retinopathy management.
111
Odaibo SG. Re: Wang et al.: Machine learning models for diagnosing glaucoma from retinal nerve fiber layer thickness maps (Ophthalmology Glaucoma. 2019;2:422-428). Ophthalmol Glaucoma 2020; 3:e3. [PMID: 32672624 DOI: 10.1016/j.ogla.2020.03.002]
112
Hirota M, Mizota A, Mimura T, Hayashi T, Kotoku J, Sawa T, Inoue K. Effect of color information on the diagnostic performance of glaucoma in deep learning using few fundus images. Int Ophthalmol 2020; 40:3013-3022. [PMID: 32594350 DOI: 10.1007/s10792-020-01485-3]
Abstract
PURPOSE The purpose of this study was to evaluate the accuracy of convolutional neural network (CNN) models for glaucoma identification built from the three primary colors (red, green, blue; RGB) and from split color channels, using fundus photographs with a small sample size. METHODS The dataset was prepared using color fundus photographs captured with a fundus camera (VX-10i, Kowa Co., Ltd., Tokyo, Japan). The training dataset consisted of 200 images, and the validation dataset contained 60 images. In the preprocessing stage, the color channels of the fundus images were separated into red (red model), green (green model), and blue (blue model) using OpenCV on Windows. All images were resized to squares of 512 × 512 pixels before input into the model, and the model was fine-tuned with VGG16. RESULTS The diagnostic performance was significantly higher in the green model [area under the curve (AUC) 0.946; 95% confidence interval (CI) 0.851-0.982] than in the RGB model (AUC 0.800; 95% CI 0.658-0.893; P = 0.006), red model (AUC 0.746; 95% CI 0.601-0.851; P = 0.002), and blue model (AUC 0.558; 95% CI 0.405-0.700; P < 0.001). CONCLUSION The present study showed that a green digital filter is useful for constructing CNN models for automatic discrimination of glaucoma using fundus photographs with a small sample size. The present findings suggest that preprocessing, when creating a CNN model, is an important step for the identification of a large number of retinal diseases using color fundus photographs.
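A sketch of the channel-split preprocessing with OpenCV: cv2 stores images in BGR order, and replicating the selected plane into three channels so an RGB-pretrained VGG16 will accept it is a common convention assumed here, not a detail taken from the paper.

```python
import cv2
import numpy as np

img = np.random.randint(0, 256, (600, 800, 3), dtype=np.uint8)  # stand-in fundus
b, g, r = cv2.split(img)                     # cv2 uses BGR channel order

def to_model_input(plane, size=512):
    plane = cv2.resize(plane, (size, size))  # 512 x 512, as in the paper
    return cv2.merge([plane, plane, plane])  # replicate so VGG16 sees 3 channels

green_input = to_model_input(g)              # the best-performing model's input
print(green_input.shape)                     # (512, 512, 3)
```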
Affiliation(s)
- Masakazu Hirota
- Department of Orthoptics, Faculty of Medical Technology, Teikyo University, Itabashi, Tokyo, Japan
- Department of Ophthalmology, School of Medicine, Teikyo University, 2-11-1 Kaga, Itabashi-ku, Tokyo, 173-8605, Japan
- Atsushi Mizota
- Department of Ophthalmology, School of Medicine, Teikyo University, 2-11-1 Kaga, Itabashi-ku, Tokyo, 173-8605, Japan
- Tatsuya Mimura
- Department of Ophthalmology, School of Medicine, Teikyo University, 2-11-1 Kaga, Itabashi-ku, Tokyo, 173-8605, Japan
- Takao Hayashi
- Department of Orthoptics, Faculty of Medical Technology, Teikyo University, Itabashi, Tokyo, Japan
- Department of Ophthalmology, School of Medicine, Teikyo University, 2-11-1 Kaga, Itabashi-ku, Tokyo, 173-8605, Japan
- Junichi Kotoku
- Division of Clinical Radiology, Graduate School of Medical Care and Technology, Teikyo University, Itabashi, Tokyo, Japan
- Tomohiro Sawa
- Medical Information Systems Research Center, Teikyo University, Itabashi, Tokyo, Japan
113
He M, Li Z, Liu C, Shi D, Tan Z. Deployment of Artificial Intelligence in Real-World Practice: Opportunity and Challenge. Asia Pac J Ophthalmol (Phila) 2020; 9:299-307. [PMID: 32694344 DOI: 10.1097/apo.0000000000000301]
Abstract
Artificial intelligence has rapidly evolved from the experimental phase to the implementation phase in many image-driven clinical disciplines, including ophthalmology. A combination of the increasing availability of large datasets and computing power with revolutionary progress in deep learning has created unprecedented opportunities for major breakthrough improvements in the performance and accuracy of automated diagnoses that primarily focus on image recognition and feature detection. Such an automated disease classification would significantly improve the accessibility, efficiency, and cost-effectiveness of eye care systems where it is less dependent on human input, potentially enabling diagnosis to be cheaper, quicker, and more consistent. Although this technology will have a profound impact on clinical flow and practice patterns sooner or later, translating such a technology into clinical practice is challenging and requires similar levels of accountability and effectiveness as any new medication or medical device due to the potential problems of bias, and ethical, medical, and legal issues that might arise. The objective of this review is to summarize the opportunities and challenges of this transition and to facilitate the integration of artificial intelligence (AI) into routine clinical practice based on our best understanding and experience in this area.
Affiliation(s)
- Mingguang He
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Centre for Eye Research Australia, Royal Victorian Eye & Ear Hospital, Melbourne, Australia
- Zhixi Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Chi Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- School of Computer Science, University of Technology Sydney, Ultimo NSW, Australia
- Danli Shi
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Zachary Tan
- Faculty of Medicine, The University of Queensland, Brisbane, Australia
- Schwarzman College, Tsinghua University, Beijing, China
114
Chang J, Lee J, Ha A, Han YS, Bak E, Choi S, Yun JM, Kang U, Shin IH, Shin JY, Ko T, Bae YS, Oh BL, Park KH, Park SM. Explaining the Rationale of Deep Learning Glaucoma Decisions with Adversarial Examples. Ophthalmology 2020; 128:78-88. [PMID: 32598951 DOI: 10.1016/j.ophtha.2020.06.036]
Abstract
PURPOSE To illustrate what is inside the so-called black box of deep learning models (DLMs), so that clinicians can have greater confidence in the conclusions of artificial intelligence, by evaluating the ability of adversarial explanation to explain the rationale of DLM decisions for glaucoma and glaucoma-related findings. Adversarial explanation generates adversarial examples (AEs), or images that have been changed to gain or lose pathologic characteristic-specific traits, to explain the DLM's rationale. DESIGN Evaluation of explanation methods for DLMs. PARTICIPANTS Health screening participants (n = 1653) at the Seoul National University Hospital Health Promotion Center, Seoul, Republic of Korea. METHODS We trained DLMs for referable glaucoma (RG), increased cup-to-disc ratio (ICDR), disc rim narrowing (DRN), and retinal nerve fiber layer defect (RNFLD) using 6430 retinal fundus images. Surveys consisting of explanations using AEs and gradient-weighted class activation mapping (GradCAM), a conventional heatmap-based explanation method, were generated for 400 pathologic and healthy patient eyes. For each method, board-trained glaucoma specialists rated location explainability, the ability to pinpoint decision-relevant areas in the image, and rationale explainability, the ability to inform the user of the model's reasoning for the decision based on pathologic features. Scores were compared by paired Wilcoxon signed-rank test. MAIN OUTCOME MEASURES Area under the receiver operating characteristic curve (AUC), sensitivities, and specificities of DLMs; visualization of clinical pathologic changes in AEs; and survey scores for location and rationale explainability. RESULTS The AUCs were 0.90, 0.99, 0.95, and 0.79, and sensitivities were 0.79, 1.00, 0.82, and 0.55 at 0.90 specificity for the RG, ICDR, DRN, and RNFLD DLMs, respectively. Generated AEs showed valid clinical feature changes, and survey results for location explainability were 3.94 ± 1.33 for AEs and 2.55 ± 1.24 for GradCAM, of a possible maximum score of 5 points. The scores for rationale explainability were 3.97 ± 1.31 for AEs and 2.10 ± 1.25 for GradCAM. Adversarial examples provided significantly better explainability than GradCAM. CONCLUSIONS Adversarial explanation increased explainability over GradCAM, a conventional heatmap-based explanation method. Adversarial explanation may help medical professionals understand more clearly the rationale of DLMs when using them for clinical decisions.
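For reference, the baseline the study compares against, gradient-weighted class activation mapping, can be sketched in a few lines of PyTorch; the ResNet-18 backbone and target layer here are assumptions, not the study's fundus DLM.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
acts, grads = {}, {}
model.layer4.register_forward_hook(lambda m, i, o: acts.update(v=o))
model.layer4.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

x = torch.randn(1, 3, 224, 224)      # stand-in fundus image
model(x)[0].max().backward()         # backprop from the top-class logit

w = grads["v"].mean(dim=(2, 3), keepdim=True)   # channel importance weights
cam = F.relu((w * acts["v"]).sum(dim=1))        # weighted activation map
cam = F.interpolate(cam[None], size=x.shape[2:], mode="bilinear")[0, 0]
print(cam.shape)                     # (224, 224) heatmap over the input
```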
Affiliation(s)
- Jooyoung Chang
- Department of Biomedical Sciences, Seoul National University Graduate School, Seoul, Republic of Korea
- Jinho Lee
- Department of Ophthalmology, Seoul National University College of Medicine, Seoul, Republic of Korea; Department of Ophthalmology, Hallym University Chuncheon Sacred Heart Hospital, Chuncheon, Republic of Korea
- Ahnul Ha
- Department of Ophthalmology, Seoul National University College of Medicine, Seoul, Republic of Korea; Department of Ophthalmology, Seoul National University Hospital, Seoul, Republic of Korea
- Young Soo Han
- Department of Ophthalmology, Seoul National University College of Medicine, Seoul, Republic of Korea; Department of Ophthalmology, Seoul National University Hospital, Seoul, Republic of Korea
- Eunoo Bak
- Department of Ophthalmology, Seoul National University College of Medicine, Seoul, Republic of Korea; Department of Ophthalmology, Seoul National University Hospital, Seoul, Republic of Korea
- Seulggie Choi
- Department of Biomedical Sciences, Seoul National University Graduate School, Seoul, Republic of Korea
- Jae Moon Yun
- Department of Family Medicine, Seoul National University Hospital, Seoul, Republic of Korea
- Uk Kang
- InTheSmart Co., Ltd., Seoul, Republic of Korea; Biomedical Research Institute, Seoul National University Hospital, Seoul, Republic of Korea
- Joo Young Shin
- Department of Ophthalmology, Seoul Metropolitan Government Seoul National University Boramae Medical Center, Seoul, Republic of Korea
- Taehoon Ko
- Office of Hospital Information, Seoul National University Hospital, Seoul, Republic of Korea
- Ye Seul Bae
- Department of Family Medicine, Seoul National University Hospital, Seoul, Republic of Korea; Office of Hospital Information, Seoul National University Hospital, Seoul, Republic of Korea
- Baek-Lok Oh
- Department of Ophthalmology, Seoul National University College of Medicine, Seoul, Republic of Korea; Department of Ophthalmology, Seoul National University Hospital, Seoul, Republic of Korea
- Ki Ho Park
- Department of Ophthalmology, Seoul National University College of Medicine, Seoul, Republic of Korea; Department of Ophthalmology, Seoul National University Hospital, Seoul, Republic of Korea
- Sang Min Park
- Department of Biomedical Sciences, Seoul National University Graduate School, Seoul, Republic of Korea; Department of Family Medicine, Seoul National University Hospital, Seoul, Republic of Korea
115
Arsalan M, Baek NR, Owais M, Mahmood T, Park KR. Deep Learning-Based Detection of Pigment Signs for Analysis and Diagnosis of Retinitis Pigmentosa. Sensors (Basel, Switzerland) 2020; 20:E3454. [PMID: 32570943 PMCID: PMC7349531 DOI: 10.3390/s20123454]
Abstract
Ophthalmological analysis plays a vital role in the diagnosis of various eye diseases, such as glaucoma, retinitis pigmentosa (RP), and diabetic and hypertensive retinopathy. RP is a genetic retinal disorder that leads to progressive vision degeneration and initially causes night blindness. Currently, the most commonly applied method for diagnosing retinal diseases is optical coherence tomography (OCT)-based disease analysis. In contrast, fundus imaging-based disease diagnosis is considered a low-cost diagnostic solution for retinal diseases. This study focuses on the detection of RP from the fundus image, which is a crucial task because of the low quality of fundus images and non-cooperative image acquisition conditions. Automatic detection of pigment signs in fundus images can help ophthalmologists and medical practitioners in diagnosing and analyzing RP disorders. To accurately segment pigment signs for diagnostic purposes, we present an automatic RP segmentation network (RPS-Net), which is a specifically designed deep learning-based semantic segmentation network to accurately detect and segment the pigment signs with fewer trainable parameters. Compared with the conventional deep learning methods, the proposed method applies a feature enhancement policy through multiple dense connections between the convolutional layers, which enables the network to discriminate between normal and diseased eyes, and accurately segment the diseased area from the background. Because pigment spots can be very small and consist of very few pixels, the RPS-Net provides fine segmentation, even in the case of degraded images, by importing high-frequency information from the preceding layers through concatenation inside and outside the encoder-decoder. To evaluate the proposed RPS-Net, experiments were performed based on 4-fold cross-validation using the publicly available Retinal Images for Pigment Signs (RIPS) dataset for detection and segmentation of retinal pigments. Experimental results show that RPS-Net achieved superior segmentation performance for RP diagnosis, compared with the state-of-the-art methods.
Affiliation(s)
- Kang Ryoung Park
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea; (M.A.); (N.R.B.); (M.O.); (T.M.)
116
Christopher M, Nakahara K, Bowd C, Proudfoot JA, Belghith A, Goldbaum MH, Rezapour J, Weinreb RN, Fazio MA, Girkin CA, Liebmann JM, De Moraes G, Murata H, Tokumo K, Shibata N, Fujino Y, Matsuura M, Kiuchi Y, Tanito M, Asaoka R, Zangwill LM. Effects of Study Population, Labeling and Training on Glaucoma Detection Using Deep Learning Algorithms. Transl Vis Sci Technol 2020; 9:27. [PMID: 32818088 PMCID: PMC7396194 DOI: 10.1167/tvst.9.2.27]
Abstract
Purpose To compare performance of independently developed deep learning algorithms for detecting glaucoma from fundus photographs and to evaluate strategies for incorporating new data into models. Methods Two fundus photograph datasets from the Diagnostic Innovations in Glaucoma Study/African Descent and Glaucoma Evaluation Study and Matsue Red Cross Hospital were used to independently develop deep learning algorithms for detection of glaucoma at the University of California, San Diego, and the University of Tokyo. We compared three versions of the University of California, San Diego, and University of Tokyo models: original (no retraining), sequential (retraining only on new data), and combined (training on combined data). Independent datasets were used to test the algorithms. Results The original University of California, San Diego and University of Tokyo models performed similarly (area under the receiver operating characteristic curve = 0.96 and 0.97, respectively) for detection of glaucoma in the Matsue Red Cross Hospital dataset, but not the Diagnostic Innovations in Glaucoma Study/African Descent and Glaucoma Evaluation Study data (0.79 and 0.92; P < .001), respectively. Model performance was higher when classifying moderate-to-severe compared with mild disease (area under the receiver operating characteristic curve = 0.98 and 0.91; P < .001), respectively. Models trained with the combined strategy generally had better performance across all datasets than the original strategy. Conclusions Deep learning glaucoma detection can achieve high accuracy across diverse datasets with appropriate training strategies. Because model performance was influenced by the severity of disease, labeling, training strategies, and population characteristics, reporting accuracy stratified by relevant covariates is important for cross study comparisons. Translational Relevance High sensitivity and specificity of deep learning algorithms for moderate-to-severe glaucoma across diverse populations suggest a role for artificial intelligence in the detection of glaucoma in primary care.
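The three strategies compared here (original, sequential, combined) can be made concrete with any incrementally trainable classifier; the scikit-learn sketch below uses an SGD logistic model on synthetic features as a stand-in for the fundus CNNs.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier  # loss="log_loss" needs sklearn >= 1.1

rng = np.random.default_rng(0)
X_a, y_a = rng.normal(size=(500, 64)), rng.integers(0, 2, 500)   # site A data
X_b, y_b = rng.normal(size=(300, 64)), rng.integers(0, 2, 300)   # site B data

# Original: fit once on the first dataset, then freeze.
original = SGDClassifier(loss="log_loss", random_state=0).fit(X_a, y_a)

# Sequential: continue gradient steps on the new data only
# (risks drifting away from the original distribution).
sequential = SGDClassifier(loss="log_loss", random_state=0)
sequential.partial_fit(X_a, y_a, classes=np.array([0, 1]))
for _ in range(5):
    sequential.partial_fit(X_b, y_b)

# Combined: retrain from scratch on the pooled data.
combined = SGDClassifier(loss="log_loss", random_state=0).fit(
    np.vstack([X_a, X_b]), np.concatenate([y_a, y_b]))
```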
Affiliation(s)
- Mark Christopher
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, CA, USA
- Christopher Bowd
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, CA, USA
- James A Proudfoot
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, CA, USA
- Akram Belghith
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, CA, USA
- Michael H Goldbaum
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, CA, USA
- Jasmin Rezapour
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, CA, USA
- Department of Ophthalmology, University Medical Center Mainz, Germany
- Robert N Weinreb
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, CA, USA
- Massimo A Fazio
- School of Medicine, University of Alabama-Birmingham, Birmingham, AL, USA
- Jeffrey M Liebmann
- Bernard and Shirlee Brown Glaucoma Research Laboratory, Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Irving Medical Center, New York, NY, USA
- Gustavo De Moraes
- Bernard and Shirlee Brown Glaucoma Research Laboratory, Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Irving Medical Center, New York, NY, USA
- Hiroshi Murata
- Department of Ophthalmology, The University of Tokyo, Tokyo, Japan
- Kana Tokumo
- Department of Ophthalmology and Visual Science, Hiroshima University, Hiroshima, Japan
- Yuri Fujino
- Department of Ophthalmology, The University of Tokyo, Tokyo, Japan
- Department of Ophthalmology, Graduate School of Medical Science, Kitasato University, Sagamihara, Kanagawa, Japan
- Masato Matsuura
- Department of Ophthalmology, The University of Tokyo, Tokyo, Japan
- Department of Ophthalmology, Graduate School of Medical Science, Kitasato University, Sagamihara, Kanagawa, Japan
- Yoshiaki Kiuchi
- Department of Ophthalmology and Visual Science, Hiroshima University, Hiroshima, Japan
- Masaki Tanito
- Department of Ophthalmology, Shimane University Faculty of Medicine, Shimane, Japan
- Ryo Asaoka
- Department of Ophthalmology, The University of Tokyo, Tokyo, Japan
- Seirei Hamamatsu General Hospital, Seirei Christopher University, Hamamatsu, Japan
- Linda M Zangwill
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, CA, USA
117
Barros DMS, Moura JCC, Freire CR, Taleb AC, Valentim RAM, Morais PSG. Machine learning applied to retinal image processing for glaucoma detection: review and perspective. Biomed Eng Online 2020; 19:20. [PMID: 32293466 PMCID: PMC7160894 DOI: 10.1186/s12938-020-00767-2]
Abstract
INTRODUCTION This is a systematic review of the main algorithms using machine learning (ML) in retinal image processing for glaucoma diagnosis and detection. ML has proven to be a significant tool for the development of computer-aided technology. Furthermore, secondary research has been widely conducted over the years for ophthalmologists. Such aspects indicate the importance of ML in the context of retinal image processing. METHODS The publications chosen to compose this review were gathered from the Scopus, PubMed, IEEEXplore, and Science Direct databases, and papers published between 2014 and 2019 were selected. Studies that used the segmented optic disc method were excluded. Moreover, only methods that applied the classification process were considered. A systematic analysis was performed on these studies and the results were summarized. DISCUSSION Among the architectures used for ML in retinal image processing, some studies applied feature extraction and dimensionality reduction to detect and isolate important parts of the analyzed image. Other works instead used a deep convolutional network. Based on the evaluated studies, the main difference between the architectures is the number of images demanded for processing and the high computational cost required by deep learning techniques. CONCLUSIONS All the analyzed publications indicated that it is possible to develop an automated system for glaucoma diagnosis. The disease's severity and high occurrence rates justify the research that has been carried out. Recent computational techniques, such as deep learning, have shown to be promising technologies in fundus imaging. Although such techniques require extensive databases and high computational costs, the studies show that data augmentation and transfer learning have been applied as alternative ways to optimize and reduce network training.
Affiliation(s)
- Daniele M S Barros
- Laboratory of Technological Innovation in Health, Federal University of Rio Grande do Norte, Natal, Brazil.
- Julio C C Moura
- Laboratory of Technological Innovation in Health, Federal University of Rio Grande do Norte, Natal, Brazil
- Cefas R Freire
- Laboratory of Technological Innovation in Health, Federal University of Rio Grande do Norte, Natal, Brazil
- Ricardo A M Valentim
- Laboratory of Technological Innovation in Health, Federal University of Rio Grande do Norte, Natal, Brazil
- Philippi S G Morais
- Laboratory of Technological Innovation in Health, Federal University of Rio Grande do Norte, Natal, Brazil
118
Kim YD, Noh KJ, Byun SJ, Lee S, Kim T, Sunwoo L, Lee KJ, Kang SH, Park KH, Park SJ. Effects of Hypertension, Diabetes, and Smoking on Age and Sex Prediction from Retinal Fundus Images. Sci Rep 2020; 10:4623. [PMID: 32165702 PMCID: PMC7067849 DOI: 10.1038/s41598-020-61519-9]
Abstract
Retinal fundus images are used to detect organ damage from vascular diseases (e.g. diabetes mellitus and hypertension) and to screen for ocular diseases. We aimed to assess convolutional neural network (CNN) models that predict age and sex from retinal fundus images in normal participants and in participants with underlying systemic vascular-altered status. In addition, we tried to investigate clues regarding differences between normal ageing and pathologic vascular changes using the CNN models. In this study, we developed CNN age and sex prediction models using 219,302 fundus images from normal participants without hypertension, diabetes mellitus (DM), or any smoking history. The trained models were assessed in four test-sets with 24,366 images from normal participants, 40,659 images from hypertension participants, 14,189 images from DM participants, and 113,510 images from smokers. The CNN model accurately predicted age in normal participants; the correlation between predicted age and chronologic age was R2 = 0.92, and the mean absolute error (MAE) was 3.06 years. MAEs in the test-sets with hypertension (3.46 years), DM (3.55 years), and smoking (2.65 years) were similar to that of normal participants; however, R2 values were relatively low (hypertension, R2 = 0.74; DM, R2 = 0.75; smoking, R2 = 0.86). In subgroups with participants over 60 years, the MAEs increased to above 4.0 years and the accuracies declined for all test-sets. Fundus-predicted sex demonstrated acceptable accuracy (area under curve > 0.96) in all test-sets. Retinal fundus images from participants with underlying vascular-altered conditions (hypertension, DM, or smoking) showed similar MAEs but low coefficients of determination (R2) between predicted age and chronologic age, suggesting that the ageing process and pathologic vascular changes exhibit different features. Our models demonstrate the most improved performance yet and provide clues to the relationship and difference between ageing and pathologic changes from underlying systemic vascular conditions. In the process of fundus change, systemic vascular diseases are thought to have a different effect from ageing. Research in context. Evidence before this study. The human retina and optic disc continuously change with ageing, and they share physiologic and pathologic characteristics with brain and systemic vascular status. As retinal fundus images provide high-resolution in-vivo images of retinal vessels and parenchyma without any invasive procedure, they have been used to screen ocular diseases and have attracted significant attention as a predictive biomarker for cerebral and systemic vascular diseases. Recently, deep neural networks have revolutionised the field of medical image analysis, including retinal fundus images, and shown reliable results in predicting age, sex, and the presence of cardiovascular diseases. Added value of this study. This is the first study demonstrating how a convolutional neural network (CNN) trained using retinal fundus images from normal participants measures the age of participants with underlying vascular conditions such as hypertension, diabetes mellitus (DM), or a history of smoking, using a large database, SBRIA, which contains 412,026 retinal fundus images from 155,449 participants. Our results indicated that the model accurately predicted age in normal participants, while correlations (coefficients of determination, R2) in the test-sets with hypertension, DM, and smoking were relatively low. Additionally, a subgroup analysis indicated that mean absolute errors (MAEs) increased and accuracies declined significantly in subgroups with participants over 60 years of age, in both normal participants and participants with vascular-altered conditions. These results suggest that the pathologic retinal vascular changes occurring in systemic vascular diseases differ from the changes of the spontaneous ageing process, and that the ageing process observed in retinal fundus images may saturate at about 60 years of age. Implications of all available evidence. Based on this study and previous reports, CNNs can accurately and reliably predict age and sex using retinal fundus images. The fact that retinal changes caused by ageing and by systemic vascular diseases occur differently motivates deeper study of the retina. Deep learning-based fundus image reading may become a more useful and beneficial tool for screening and diagnosing systemic and ocular diseases after further development.
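The headline numbers here (MAE and the coefficient of determination between predicted and chronologic age, recomputed within an over-60 subgroup) reduce to a few lines; the ages below are toy values, not study data.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, r2_score

def report(y_true, y_pred, name):
    print(f"{name}: MAE = {mean_absolute_error(y_true, y_pred):.2f} years, "
          f"R2 = {r2_score(y_true, y_pred):.2f}")

age = np.array([45, 52, 63, 70, 38, 59])    # chronologic ages
pred = np.array([47, 50, 67, 64, 40, 62])   # fundus-predicted ages

report(age, pred, "whole test-set")
over60 = age > 60                            # subgroup analysis, as in the paper
report(age[over60], pred[over60], "over-60 subgroup")
```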
Affiliation(s)
- Yong Dae Kim
- Department of Ophthalmology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Republic of Korea
- Department of Ophthalmology, Kangdong Sacred Heart Hospital, Seoul, Korea
- Kyoung Jin Noh
- Department of Ophthalmology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Republic of Korea
- Seong Jun Byun
- Department of Ophthalmology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Republic of Korea
- Soochahn Lee
- School of Electrical Engineering, Kookmin University, Seoul, Republic of Korea
- Tackeun Kim
- Department of Neurosurgery, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
- Leonard Sunwoo
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
- Kyong Joon Lee
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
- Si-Hyuck Kang
- Division of Cardiology, Department of Internal Medicine, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
- Kyu Hyung Park
- Department of Ophthalmology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Republic of Korea
- Sang Jun Park
- Department of Ophthalmology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Republic of Korea
| |
Collapse
|
119
|
Gunasekeran DV, Wong TY. Artificial Intelligence in Ophthalmology in 2020: A Technology on the Cusp for Translation and Implementation. Asia Pac J Ophthalmol (Phila) 2020; 9:61-66. [PMID: 32349112 DOI: 10.1097/01.apo.0000656984.56467.2c] [Citation(s) in RCA: 34] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023] Open
Affiliation(s)
- Dinesh Visva Gunasekeran
- Singapore Eye Research Institute (SERI), Singapore
- National University of Singapore (NUS), Singapore
| | - Tien Yin Wong
- Singapore Eye Research Institute (SERI), Singapore
- National University of Singapore (NUS), Singapore
- Singapore National Eye Center (SNEC), Singapore
| |
Collapse
|
120
|
Ruamviboonsuk P, Cheung CY, Zhang X, Raman R, Park SJ, Ting DSW. Artificial Intelligence in Ophthalmology: Evolutions in Asia. Asia Pac J Ophthalmol (Phila) 2020; 9:78-84. [PMID: 32349114 DOI: 10.1097/01.apo.0000656980.41190.bf] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022] Open
Abstract
Artificial intelligence (AI) has been studied in ophthalmology since digital information became available in ophthalmic care. A significant turning point was the availability of commercial digital color fundus photography in the late 1990s, which allowed digital screening for diabetic retinopathy (DR) to take off. Automated Retinal Disease Assessment software was then developed using machine learning to detect abnormal lesions in the fundus to screen for DR. This version of AI was not widely adopted because its specificity of 45% was not high enough, although its sensitivity reached 90%. The recent breakthrough in machine learning is the advent of deep learning, which has brought performance on par with that of experts. The first 2 breakthrough studies on deep learning for screening DR were conducted in Asia. The first represented a collaboration on datasets between Asia and the United States for algorithm development, whereas the second represented algorithms developed in Asia but validated in different populations across the world. Both found an accuracy of >95% for detecting referable DR. Diversity and variety are unique strengths of Asia for AI studies. Many more AI studies are ongoing in Asia, not only as prospective deployments in DR but also in glaucoma, age-related macular degeneration, cataract, and systemic diseases such as Alzheimer's disease. Some Asian countries have laid out plans for digital health care systems that use AI as one of the puzzle pieces in addressing blindness. More studies on AI and digital health are expected to come from Asia in this new decade.
Collapse
Affiliation(s)
| | - Carol Y Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
| | - Xiulan Zhang
- Zhongshan Ophthalmic Center, State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, People's Republic of China
| | - Rajiv Raman
- Shri Bhagwan Mahavir Vitreoretinal Services, Sankara Nethralaya, Chennai, India
| | - Sang Jun Park
- Department of Ophthalmology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Republic of Korea
| | - Daniel Shu Wei Ting
- Consultant, Vitreo-retinal Department, Singapore National Eye Center, Duke-NUS Medical School, Singapore
| |
Collapse
|
121
|
Li Z, Guo C, Nie D, Lin D, Zhu Y, Chen C, Xiang Y, Xu F, Jin C, Zhang X, Yang Y, Zhang K, Zhao L, Zhang P, Han Y, Yun D, Wu X, Yan P, Lin H. Development and Evaluation of a Deep Learning System for Screening Retinal Hemorrhage Based on Ultra-Widefield Fundus Images. Transl Vis Sci Technol 2020; 9:3. [PMID: 32518708 PMCID: PMC7255628 DOI: 10.1167/tvst.9.2.3] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2019] [Accepted: 11/21/2019] [Indexed: 12/15/2022] Open
Abstract
Purpose To develop and evaluate a deep learning (DL) system for retinal hemorrhage (RH) screening using ultra-widefield fundus (UWF) images. Methods A total of 16,827 UWF images from 11,339 individuals were used to develop the DL system. Three experienced retina specialists were recruited to grade the UWF images independently. Three independent data sets from 3 different institutions were used to validate the effectiveness of the DL system. The data set from Zhongshan Ophthalmic Center (ZOC) was selected to compare the classification performance of the DL system and general ophthalmologists. A heatmap was generated to identify the most important area used by the DL model to classify RH and to discern whether the RH involved the anatomical macula. Results In the three independent data sets, the DL model for detecting RH achieved areas under the curve of 0.997, 0.998, and 0.999, with sensitivities of 97.6%, 96.7%, and 98.9% and specificities of 98.0%, 98.7%, and 99.4%. In the ZOC data set, the sensitivity of the DL model was better than that of the general ophthalmologists, although the general ophthalmologists had slightly higher specificities. The heatmaps highlighted RH regions in all true-positive images, and whether the RH involved the anatomical macula could be determined from the heatmaps. Conclusions Our DL system showed reliable performance in detecting RH and could be used to screen for RH-related diseases. Translational Relevance As a screening tool, this automated system may aid the early diagnosis and management of RH-related retinal and systemic diseases by allowing timely referral.
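Operating-point metrics such as the paired sensitivity/specificity values quoted above are commonly derived from the ROC curve alongside the AUC. The sketch below, assuming scikit-learn and synthetic labels/scores rather than the authors' pipeline, picks the threshold that maximizes Youden's J statistic; the function name and data are illustrative only.

```python
# Hedged sketch: AUC plus sensitivity/specificity at a Youden-optimal
# threshold for a binary screening model. Labels and scores are synthetic.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def screening_metrics(labels, scores):
    """labels: 1 = hemorrhage present; scores: predicted probabilities."""
    auc = roc_auc_score(labels, scores)
    fpr, tpr, thresholds = roc_curve(labels, scores)
    j = tpr - fpr                      # Youden's J at each candidate threshold
    best = np.argmax(j)                # operating point maximizing J
    return auc, tpr[best], 1.0 - fpr[best], thresholds[best]

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=200)
scores = np.clip(labels * 0.6 + rng.normal(0.3, 0.2, size=200), 0, 1)
auc, sens, spec, thr = screening_metrics(labels, scores)
print(f"AUC={auc:.3f}, sensitivity={sens:.3f}, specificity={spec:.3f} at threshold {thr:.2f}")
```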
Collapse
Affiliation(s)
- Zhongwen Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Chong Guo
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Danyao Nie
- Shenzhen Eye Hospital, Shenzhen Key Laboratory of Ophthalmology, Affiliated Shenzhen Eye Hospital of Jinan University, Shenzhen, China
| | - Duoru Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Yi Zhu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China.,Department of Molecular and Cellular Pharmacology, University of Miami Miller School of Medicine, Miami, FL, USA
| | - Chuan Chen
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China.,Department of Molecular and Cellular Pharmacology, University of Miami Miller School of Medicine, Miami, FL, USA
| | - Yifan Xiang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Fabao Xu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Chenjin Jin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Xiayin Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Yahan Yang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Kai Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China.,School of Computer Science and Technology, Xidian University, Xi'an, China
| | - Lanqin Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Ping Zhang
- Xudong Ophthalmic Hospital, Inner Mongolia, China
| | - Yu Han
- EYE & ENT Hospital of Fudan University, Shanghai, China
| | - Dongyuan Yun
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Xiaohang Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Pisong Yan
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| |
Collapse
|
122
|
Li F, Yan L, Wang Y, Shi J, Chen H, Zhang X, Jiang M, Wu Z, Zhou K. Deep learning-based automated detection of glaucomatous optic neuropathy on color fundus photographs. Graefes Arch Clin Exp Ophthalmol 2020; 258:851-867. [PMID: 31989285 DOI: 10.1007/s00417-020-04609-8] [Citation(s) in RCA: 40] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2019] [Revised: 12/09/2019] [Accepted: 01/20/2020] [Indexed: 01/08/2023] Open
Abstract
PURPOSE To develop a deep learning approach based on a deep residual neural network (ResNet101) for the automated detection of glaucomatous optic neuropathy (GON) using color fundus images, to understand the process by which the model makes predictions, and to explore the effect of integrating fundus images with patients' medical history data. METHODS A total of 34,279 fundus images and the corresponding medical history data were retrospectively collected from cohorts of 2371 adult patients and labeled by 8 glaucoma experts; 26,585 of these fundus images (12,618 images of GON-confirmed eyes, 1114 images of GON-suspected eyes, and 12,853 images of NORMAL eyes) were included. We adopted a 10-fold cross-validation strategy to train and optimize our model. The model was tested on an independent dataset of 3481 images (1524 images of NORMAL eyes, 1442 images of GON-confirmed eyes, and 515 images of GON-suspected eyes) from 249 patients. Moreover, the performance of the best model was compared with the results obtained by two experts. Accuracy, sensitivity, specificity, kappa value, and area under the receiver operating characteristic curve (AUC) were calculated. Furthermore, we performed a qualitative evaluation of model predictions and occlusion testing. Finally, we assessed the effect of integrating medical history data into the final classification. RESULTS In a multiclass comparison among GON-confirmed, GON-suspected, and NORMAL eyes, our model achieved 0.941 (95% confidence interval [CI], 0.936-0.946) accuracy, 0.957 (95% CI, 0.953-0.961) sensitivity, and 0.929 (95% CI, 0.923-0.935) specificity. The AUC for distinguishing referrals (GON-confirmed and GON-suspected eyes) from observation was 0.992 (95% CI, 0.991-0.993). Our best model had a kappa value of 0.927, while the two experts' kappa values were 0.928 and 0.925, respectively. The two best binary classifiers, distinguishing GON-confirmed and GON-suspected eyes from NORMAL eyes, obtained accuracies of 0.955 and 0.965, sensitivities of 0.977 and 0.998, and specificities of 0.929 and 0.954, with AUCs of 0.992 and 0.999, respectively. Additionally, the occlusion testing showed that our model identified the neuroretinal rim region and retinal nerve fiber layer (RNFL) defect areas (superior or inferior) as the most important regions for discriminating GON, indicating that the model evaluated fundus images in a way similar to clinicians. Finally, integrating fundus images with medical history data yielded a slight improvement in sensitivity and specificity with similar AUCs. CONCLUSIONS This approach could discriminate GON with high accuracy, sensitivity, specificity, and AUC using color fundus photographs. It may quickly, efficiently, and inexpensively provide specialists with a second opinion on the diagnosis of glaucoma and assist doctors and the public in large-scale screening for glaucoma.
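The occlusion testing mentioned in this abstract can be sketched as follows: slide a gray patch across the image and record how much the model's GON probability drops, so that large drops mark regions the model relies on. This is a simplified, hypothetical sketch, not the authors' code; `predict_gon_probability` is a placeholder for any trained classifier, and the patch size and stride are arbitrary choices.

```python
# Hypothetical occlusion-testing sketch: measure the drop in predicted GON
# probability when each image region is masked with a uniform gray patch.
import numpy as np

def occlusion_map(image, predict_gon_probability, patch=32, stride=16):
    """Return a heatmap of probability drops under a sliding occluder."""
    h, w = image.shape[:2]
    baseline = predict_gon_probability(image)     # unoccluded prediction
    heat = np.zeros(((h - patch) // stride + 1,
                     (w - patch) // stride + 1))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = image.mean()  # gray occluder
            heat[i, j] = baseline - predict_gon_probability(occluded)
    return heat  # large values mark regions important to the GON decision
```

In the study's reported findings, such maps would peak over the neuroretinal rim and RNFL defect areas, mirroring the regions clinicians inspect.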
Collapse
Affiliation(s)
- Feng Li
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
| | - Lei Yan
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
| | - Yuguang Wang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
| | - Jianxun Shi
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
| | - Hua Chen
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
| | - Xuedian Zhang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
| | - Minshan Jiang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China.
| | - Zhizheng Wu
- Department of Precision Mechanical Engineering, Shanghai University, Shanghai, 200072, China
| | - Kaiqian Zhou
- Liver Cancer Institute, Zhongshan Hospital, Shanghai, 200032, China
| |
Collapse
|
123
|
Lee CS, Yanagihara RT, Lee AY. Using Deep Learning Models to Characterize Major Retinal Features on Color Fundus Photographs. Ophthalmology 2020; 127:95-96. [PMID: 31864476 PMCID: PMC7335005 DOI: 10.1016/j.ophtha.2019.07.014] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2019] [Revised: 07/15/2019] [Accepted: 07/17/2019] [Indexed: 10/25/2022] Open
Affiliation(s)
- Cecilia S. Lee
- Department of Ophthalmology, University of Washington, Seattle, WA
| | | | - Aaron Y. Lee
- Department of Ophthalmology, University of Washington, Seattle, WA
| |
Collapse
|