1. Shin JY, Son J, Kong ST, Park J, Park B, Park KH, Jung KH, Park SJ. Clinical Utility of Deep Learning Assistance for Detecting Various Abnormal Findings in Color Retinal Fundus Images: A Reader Study. Transl Vis Sci Technol 2024;13:34. PMID: 39441571; PMCID: PMC11512572; DOI: 10.1167/tvst.13.10.34.
Abstract
Purpose: To evaluate the clinical usefulness of a deep learning-based detection device for multiple abnormal findings on retinal fundus photographs for readers with varying expertise.
Methods: Fourteen ophthalmologists (six residents, eight specialists) assessed 399 fundus images for 12 major ophthalmologic findings, with and without the assistance of a deep learning algorithm, in two separate reading sessions. Sensitivity, specificity, and reading time per image were compared.
Results: With algorithmic assistance, readers improved significantly in sensitivity for all 12 findings (P < 0.05) but tended to be less specific (P < 0.05) for hemorrhage, drusen, membrane, and vascular abnormality, more markedly so among residents. Without assistance, sensitivity was significantly lower in residents (23.1%–75.8%) than in specialists (55.1%–97.1%) for nine findings, but with assistance it improved to similar levels (67.8%–99.4% in residents, 83.2%–99.5% in specialists), with only hemorrhage remaining significantly lower. Variance in sensitivity was significantly reduced for all findings. Reading time decreased for images with fewer than three findings, more markedly among residents. In a simulation based on images acquired from a health screening center, average reading time was estimated to decrease by 25.9% (from 16.4 to 12.1 seconds per image) for residents and by 2.0% (from 9.6 to 9.4 seconds) for specialists.
Conclusions: Deep learning-based computer-assisted detection devices increase sensitivity, reduce inter-reader variance in sensitivity, and reduce reading time for less complicated images.
Translational Relevance: This study evaluated how algorithmic assistance in detecting abnormal findings on retinal fundus photographs influences clinicians, which may predict its influence in clinical application.
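A minimal sketch of the reader-study arithmetic follows: per-finding sensitivity and specificity for each reader, plus the inter-reader variance in sensitivity that assistance reduced. All data below are synthetic placeholders; the layout (12 findings, 14 readers, 399 images) and the error rates are assumptions that only mirror the study design, not its data.

```python
# Synthetic illustration of per-finding reader metrics; not the study's code.
import numpy as np

def sens_spec(truth, calls):
    """truth, calls: boolean arrays of shape (n_images,) for one finding/reader."""
    tp = np.sum(calls & truth)
    fn = np.sum(~calls & truth)
    tn = np.sum(~calls & ~truth)
    fp = np.sum(calls & ~truth)
    return tp / (tp + fn), tn / (tn + fp)

rng = np.random.default_rng(0)
truth = rng.random((12, 399)) < 0.3             # ground truth per finding (assumed prevalence)
error = {"unaided": 0.35, "aided": 0.10}        # assumed per-read label-flip rates
for session, p in error.items():
    # each of 14 readers flips the true label with probability p
    sens = np.array([[sens_spec(truth[f], truth[f] ^ (rng.random(399) < p))[0]
                      for _ in range(14)] for f in range(12)])
    print(session, "mean sensitivity:", sens.mean(axis=1).round(2),
          "inter-reader variance:", sens.var(axis=1).round(4))
```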
Affiliations
- Joo Young Shin: Department of Ophthalmology, Seoul Metropolitan Government Seoul National University Boramae Medical Centre, Seoul, Republic of Korea
- Kyu Hyung Park: Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
- Kyu-Hwan Jung: VUNO Inc., Seoul, Republic of Korea; Department of Medical Device Research and Management, Samsung Advanced Institute for Health Sciences and Technology, Sungkyunkwan University, Seoul, Republic of Korea
- Sang Jun Park: Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
2. Bae SH, Go S, Kim J, Park KH, Lee S, Park SJ. A novel vector field analysis for quantitative structure changes after macular epiretinal membrane surgery. Sci Rep 2024;14:8242. PMID: 38589440; PMCID: PMC11002028; DOI: 10.1038/s41598-024-58089-5.
Abstract
The aim of this study was to introduce a novel vector field analysis for quantitative measurement of retinal displacement after epiretinal membrane (ERM) removal. We developed a framework to measure retinal displacement from retinal fundus images as follows: (1) rigid registration of the preoperative fundus image to the postoperative fundus image, (2) extraction of retinal vessel segmentation masks from both images, (3) non-rigid registration of the preoperative vessel mask to the postoperative vessel mask, and (4) calculation of the transformation required for non-rigid registration at each pixel. These pixel-wise vector field results were summarized over 24 predefined sectors after standardization. We applied this framework to 20 patients who underwent ERM removal, obtaining retinal displacement vector fields between fundus images taken preoperatively and at 1, 4, 10, and 22 months postoperatively. The mean direction of the displacement vectors was nasal. The mean standardized magnitudes of retinal displacement between the preoperative and 1-month images, and between postoperative months 1 and 4, 4 and 10, and 10 and 22, were 38.6, 14.9, 7.6, and 5.4, respectively. In conclusion, the proposed method provides a computerized, reproducible, and scalable way to analyze structural changes in the retina, with a powerful visualization tool. Retinal structural changes were mostly concentrated in the early postoperative period and tended to be directed nasally.
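The four-step pipeline can be sketched with generic tools, as below. The authors' actual registration and vessel-segmentation methods are not specified here; ECC rigid alignment, Frangi vesselness as a stand-in segmenter, and Farneback optical flow as the non-rigid step are all assumptions made for illustration.

```python
# Hedged sketch of the displacement-field pipeline using off-the-shelf methods.
import cv2
import numpy as np
from skimage.filters import frangi

def displacement_field(pre_gray, post_gray):
    h, w = post_gray.shape
    # (1) Rigid registration of the preoperative image to the postoperative one.
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
    _, warp = cv2.findTransformECC(post_gray, pre_gray, warp,
                                   cv2.MOTION_EUCLIDEAN, criteria, None, 5)
    pre_rigid = cv2.warpAffine(pre_gray, warp, (w, h))
    # (2) Vessel masks via a vesselness filter (stand-in for a learned segmenter).
    to8 = lambda v: (v / (v.max() + 1e-9) * 255).astype(np.uint8)
    pre_vessels, post_vessels = frangi(pre_rigid / 255.0), frangi(post_gray / 255.0)
    # (3)+(4) Dense non-rigid displacement between vessel masks: per-pixel (dx, dy).
    return cv2.calcOpticalFlowFarneback(to8(pre_vessels), to8(post_vessels),
                                        None, 0.5, 4, 31, 5, 7, 1.5, 0)

def sector_summary(flow, center, n_sectors=24):
    """Mean displacement vector in each angular sector around `center` (x, y)."""
    h, w, _ = flow.shape
    ys, xs = np.mgrid[0:h, 0:w]
    angles = np.arctan2(ys - center[1], xs - center[0])
    bins = ((angles + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    return np.array([flow[bins == k].mean(axis=0) for k in range(n_sectors)])
```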
Affiliations
- Seok Hyun Bae: Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, 173-82 Gumi-ro, Bundang-gu, Seongnam-si, Gyeonggi-do, 13620, South Korea; Department of Ophthalmology, HanGil Eye Hospital, Incheon, South Korea
- Sojung Go: Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, 173-82 Gumi-ro, Bundang-gu, Seongnam-si, Gyeonggi-do, 13620, South Korea
- Jooyoung Kim: Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, 173-82 Gumi-ro, Bundang-gu, Seongnam-si, Gyeonggi-do, 13620, South Korea
- Kyu Hyung Park: Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Hospital, Seoul, South Korea
- Soochahn Lee: School of Electrical Engineering, Kookmin University, Seoul, South Korea
- Sang Jun Park: Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, 173-82 Gumi-ro, Bundang-gu, Seongnam-si, Gyeonggi-do, 13620, South Korea
3. Son J, Shin JY, Kong ST, Park J, Kwon G, Kim HD, Park KH, Jung KH, Park SJ. An interpretable and interactive deep learning algorithm for a clinically applicable retinal fundus diagnosis system by modelling finding-disease relationship. Sci Rep 2023;13:5934. PMID: 37045856; PMCID: PMC10097752; DOI: 10.1038/s41598-023-32518-3.
Abstract
The identification of abnormal findings in retinal fundus images and the diagnosis of ophthalmic diseases are essential to the management of potentially vision-threatening eye conditions. Recently, deep learning-based computer-aided diagnosis (CAD) systems have demonstrated their potential to reduce reading time and discrepancy amongst readers. However, the obscure reasoning of deep neural networks (DNNs) has been a leading cause of reluctance to adopt them clinically as CAD systems. Here, we present a novel architectural and algorithmic design of DNNs that comprehensively identifies 15 abnormal retinal findings and diagnoses 8 major ophthalmic diseases from macula-centered fundus images with accuracy comparable to that of experts. We then define a notion of counterfactual attribution ratio (CAR), which illuminates the system's diagnostic reasoning by representing how each abnormal finding contributed to the diagnostic prediction. Using CAR, we show that both quantitative and qualitative interpretation, as well as interactive adjustment of the CAD result, can be achieved. A comparison of the model's CAR with experts' finding-disease diagnosis correlations confirms that the proposed model identifies the relationships between findings and diseases much as ophthalmologists do.
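The paper's exact CAR definition is not reproduced here. One plausible formalization, sketched below under stated assumptions, treats CAR for finding k as the odds of the disease given the predicted findings divided by the odds when finding k is counterfactually suppressed; the logistic disease head is a stand-in, not the authors' architecture.

```python
# Hedged sketch: one possible counterfactual attribution ratio (CAR).
import numpy as np

def disease_probability(findings, weights, bias):
    """Stand-in disease head: logistic model over finding activations."""
    return 1.0 / (1.0 + np.exp(-(findings @ weights + bias)))

def car(findings, weights, bias, k):
    p = disease_probability(findings, weights, bias)
    counterfactual = findings.copy()
    counterfactual[k] = 0.0                    # suppress finding k
    q = disease_probability(counterfactual, weights, bias)
    return (p / (1 - p)) / (q / (1 - q))       # odds ratio: contribution of finding k

findings = np.array([0.9, 0.1, 0.7])           # hypothetical finding scores
weights, bias = np.array([2.0, 0.5, 1.0]), -2.0
print([round(car(findings, weights, bias, k), 2) for k in range(3)])
```

A CAR well above 1 would indicate that the finding materially raised the disease odds; values near 1 indicate little contribution.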
Affiliations
- Joo Young Shin: Department of Ophthalmology, Seoul Metropolitan Government Seoul National University Boramae Medical Center, Seoul, Republic of Korea
- Hoon Dong Kim: Department of Ophthalmology, College of Medicine, Soonchunhyang University, Cheonan, Republic of Korea
- Kyu Hyung Park: Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, 82, Gumi-ro 173 Beon-gil, Bundang-gu, Seongnam-si, Gyeonggi-do, 13620, Republic of Korea
- Kyu-Hwan Jung: Department of Medical Device Research and Management, Samsung Advanced Institute for Health Sciences and Technology, Sungkyunkwan University, 81 Irwon-ro, Gangnam-gu, Seoul, Republic of Korea
- Sang Jun Park: Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, 82, Gumi-ro 173 Beon-gil, Bundang-gu, Seongnam-si, Gyeonggi-do, 13620, Republic of Korea
4. Sarhan A, Swift A, Gorner A, Rokne J, Alhajj R, Docherty G, Crichton A. Utilizing a responsive web portal for studying disc tracing agreement in retinal images. PLoS One 2021;16:e0251703. PMID: 34032798; PMCID: PMC8148353; DOI: 10.1371/journal.pone.0251703.
Abstract
Glaucoma, a leading cause of blindness worldwide, is detected based on multiple factors, including the cup-to-disc ratio, the retinal nerve fiber layer, and visual field defects. Advances in image processing and machine learning have enabled automated approaches for segmenting objects from fundus images. However, building a robust system requires a reliable ground-truth dataset for proper training and validation of the model. In this study, we investigated the level of agreement in detecting the retinal disc in fundus images using an online portal built for this purpose. Two doctors of optometry independently traced the discs for 159 fundus images obtained from publicly available datasets. We measured tracing precision, interobserver variability, and the average boundary distance between the tracings, and we studied whether ellipse fitting, used to handle misalignments in tracing, has a positive or negative impact on properly detecting disc boundaries. The overall agreement between the optometrists in locating the disc region in these images was 0.87; however, agreement on the disc border was only fair (kappa = 0.21). Disagreements occurred mainly in fundus images obtained from glaucomatous patients. The resulting dataset was deemed an acceptable ground truth for training and validation of models for automatic detection of objects in fundus images.
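The two analysis steps named above can be sketched as follows: fitting an ellipse to a traced disc boundary, and scoring pixel-wise agreement between two tracings with Cohen's kappa. The data shapes and the 1024×1024 canvas are assumptions for illustration, not the portal's actual code.

```python
# Hedged sketch of ellipse fitting and tracing-agreement scoring.
import cv2
import numpy as np
from sklearn.metrics import cohen_kappa_score

def disc_mask_from_trace(points, shape=(1024, 1024)):
    """points: (N, 2) traced boundary coordinates; returns a filled disc mask."""
    box = cv2.fitEllipse(points.astype(np.float32))  # ((cx, cy), (w, h), angle)
    mask = np.zeros(shape, dtype=np.uint8)
    cv2.ellipse(mask, box, 255, thickness=-1)        # fill the fitted ellipse
    return mask > 0

def tracing_kappa(mask_a, mask_b):
    """Cohen's kappa between two binary disc masks, as in the border analysis."""
    return cohen_kappa_score(mask_a.ravel(), mask_b.ravel())

# usage: kappa = tracing_kappa(disc_mask_from_trace(trace1), disc_mask_from_trace(trace2))
```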
Affiliations
- Abdullah Sarhan: Department of Computer Science, University of Calgary, Calgary, Canada
- Andrew Swift: Cumming School of Medicine, University of Calgary, Calgary, Canada
- Adam Gorner: Cumming School of Medicine, University of Calgary, Calgary, Canada
- Jon Rokne: Department of Computer Science, University of Calgary, Calgary, Canada
- Reda Alhajj: Department of Computer Science, University of Calgary, Calgary, Canada; Department of Computer Engineering, Istanbul Medipol University, Istanbul, Turkey; Department of Health Informatics, University of Southern Denmark, Odense, Denmark
- Gavin Docherty: Department of Ophthalmology and Visual Sciences, University of Calgary, Calgary, Canada
- Andrew Crichton: Department of Ophthalmology and Visual Sciences, University of Calgary, Calgary, Canada
5. Morya AK, Gowdar J, Kaushal A, Makwana N, Biswas S, Raj P, Singh S, Hegde S, Vaishnav R, Shetty S, S P V, Shah V, Paul S, Muralidhar S, Velis G, Padua W, Waghule T, Nazm N, Jeganathan S, Reddy Mallidi A, Susan John D, Sen S, Choudhary S, Parashar N, Sharma B, Raghav P, Udawat R, Ram S, Salodia UP. Evaluating the Viability of a Smartphone-Based Annotation Tool for Faster and Accurate Image Labelling for Artificial Intelligence in Diabetic Retinopathy. Clin Ophthalmol 2021;15:1023-1039. PMID: 33727785; PMCID: PMC7953891; DOI: 10.2147/opth.s289425.
Abstract
Introduction: Deep learning (DL) and artificial intelligence (AI) have become widespread owing to advances in technology and the availability of digital data. Supervised learning algorithms have shown human-level performance or better, and they extract and quantify features more effectively than unsupervised algorithms. Building a huge dataset with good quality control requires an annotation tool with a customizable feature set. This paper evaluates the viability of an in-house annotation tool that works on a smartphone and can be used in a healthcare setting.
Methods: We developed a smartphone-based grading system to help researchers grade large numbers of retinal fundus images. The process consisted of designing the user interface (UI) flow based on feedback from experts. We performed quantitative and qualitative analyses of the change in each grader's speed over time and of feature usage statistics. The dataset comprised approximately 16,000 images with labels adjudicated by a minimum of two doctors. An AI model was trained on the images graded using this tool and validated on public datasets.
Results: We created a DL model and analysed its performance on a binary classification task: whether a retinal image shows referable diabetic retinopathy (DR). A total of 32 doctors used the tool, grading a minimum of 20 images each. Usage analytics suggested the tool was highly portable and flexible. Inter-grader agreement, assessed on 550 images, averaged 75.9%.
Conclusion: Our aim was to make annotation of medical images easier and to minimize annotation time without degrading quality. User feedback and feature usage statistics support our hypothesis that brightness and contrast variation, green-channel viewing, and zooming add-ons help with certain disease types. Simulating multiple review cycles and establishing quality control could boost the accuracy of AI models even further. Although this study aimed at developing an annotation tool for diagnosing and classifying diabetic retinopathy fundus images, the same concept can be used for fundus images of other ocular diseases, as well as in other image-based fields of medicine such as radiology.
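Two of the viewing add-ons the graders found useful, green-channel extraction and contrast enhancement, can be sketched as below. CLAHE is an assumed stand-in for the app's contrast control, and the file path is hypothetical; this is not the tool's documented implementation.

```python
# Hedged sketch of green-channel viewing with contrast enhancement.
import cv2

def green_channel_enhanced(fundus_bgr):
    """Contrast-enhanced green channel, often used to highlight hemorrhages
    and microaneurysms in DR grading."""
    green = fundus_bgr[:, :, 1]                        # OpenCV loads images as BGR
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(green)

img = cv2.imread("fundus.jpg")                         # hypothetical input path
if img is not None:
    cv2.imwrite("fundus_green_clahe.jpg", green_channel_enhanced(img))
```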
Affiliations
- Arvind Kumar Morya: Department of Ophthalmology, All India Institute of Medical Sciences, Jodhpur, Rajasthan, 342005, India
- Jaitra Gowdar: Radiate Healthcare Innovations Private Limited, Bangalore, Karnataka, 560038, India
- Abhishek Kaushal: Radiate Healthcare Innovations Private Limited, Bangalore, Karnataka, 560038, India
- Nachiket Makwana: Radiate Healthcare Innovations Private Limited, Bangalore, Karnataka, 560038, India
- Saurav Biswas: Radiate Healthcare Innovations Private Limited, Bangalore, Karnataka, 560038, India
- Puneeth Raj: Radiate Healthcare Innovations Private Limited, Bangalore, Karnataka, 560038, India
- Shabnam Singh: Sri Narayani Hospital & Research Centre, Vellore, Tamilnadu, 632 055, India
- Sharat Hegde: Prasad Netralaya, Udupi, Karnataka, 576101, India
- Raksha Vaishnav: Bhaktivedanta Hospital, Mira Bhayandar, Maharashtra, 401107, India
- Sharan Shetty: Prime Retina Eye Care Centre, Hyderabad, Telangana, 500029, India
- Vedang Shah: Shree Netra Eye Foundation, Kolkata, West Bengal, 700020, India
- Winston Padua: St. John's Medical College & Hospital, Bengaluru, 560034, India
- Tushar Waghule: Reti Vision Eye Clinic, KK Eye Institute, Pune, Maharashtra, 411001, India
- Nazneen Nazm: ESI PGIMSR, ESI Medical College and Hospital, Kolkata, West Bengal, 700104, India
- Sangeetha Jeganathan: Srinivas Institute of Medical Sciences and Research Centre, Mangalore, Karnataka, 574146, India
- Dona Susan John: Diya Speciality Eye Care, Bengaluru, Karnataka, 560061, India
- Sagnik Sen: Aravind Eye Hospital, Madurai, Tamil Nadu, 625 020, India
- Sandeep Choudhary: Department of Ophthalmology, All India Institute of Medical Sciences, Jodhpur, Rajasthan, 342005, India
- Nishant Parashar: Department of Ophthalmology, All India Institute of Medical Sciences, Jodhpur, Rajasthan, 342005, India
- Bhavana Sharma: All India Institute of Medical Sciences, Bhopal, Madhya Pradesh, 462020, India
- Pankaja Raghav: Department of Ophthalmology, All India Institute of Medical Sciences, Jodhpur, Rajasthan, 342005, India
- Raghuveer Udawat: Department of Ophthalmology, All India Institute of Medical Sciences, Jodhpur, Rajasthan, 342005, India
- Sampat Ram: Department of Ophthalmology, All India Institute of Medical Sciences, Jodhpur, Rajasthan, 342005, India
- Umang P Salodia: Department of Ophthalmology, All India Institute of Medical Sciences, Jodhpur, Rajasthan, 342005, India
6. Development of Decision Support Software for Deep Learning-Based Automated Retinal Disease Screening Using Relatively Limited Fundus Photograph Data. Electronics 2021. DOI: 10.3390/electronics10020163.
Abstract
Purpose: This study was conducted to develop an automated detection algorithm for screening fundus abnormalities, including age-related macular degeneration (AMD), diabetic retinopathy (DR), epiretinal membrane (ERM), retinal vascular occlusion (RVO), and suspected glaucoma, among health screening program participants.
Methods: The development dataset consisted of 43,221 retinal fundus photographs (from 25,564 participants; mean age 53.38 ± 10.97 years; 39.0% female) from a health screening program and from patients of the Kangbuk Samsung Hospital Ophthalmology Department between 2006 and 2017. We evaluated our screening algorithm on independent validation datasets. Five separate one-versus-rest (OVR) classification algorithms based on deep convolutional neural networks (CNNs) were trained to detect AMD, ERM, DR, RVO, and suspected glaucoma. The ground truth for both development and validation datasets was graded at least twice by three ophthalmologists. The area under the receiver operating characteristic curve (AUC), sensitivity, and specificity were calculated for each disease, as well as their macro-averages.
Results: For the internal validation dataset, average sensitivity was 0.9098 (95% confidence interval (CI), 0.8660–0.9536), average specificity was 0.9079 (95% CI, 0.8576–0.9582), and overall accuracy was 0.9092 (95% CI, 0.8769–0.9415). For the external validation dataset of 1698 images, the average of the AUCs was 0.9025 (95% CI, 0.8671–0.9379).
Conclusions: Our algorithm had high sensitivity and specificity for detecting major fundus abnormalities. This study should facilitate the expansion of deep learning-based computer-aided diagnostic decision support tools to actual clinical settings. Further research is needed to improve the generalization of this algorithm.
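The evaluation described above, per-disease AUC for five one-versus-rest classifiers plus their macro-average, can be sketched as follows. The arrays are random placeholders; the study's labels and model outputs are not reproduced.

```python
# Sketch of per-disease AUC and macro-averaging for five OVR classifiers.
import numpy as np
from sklearn.metrics import roc_auc_score

DISEASES = ["AMD", "ERM", "DR", "RVO", "suspected glaucoma"]

def macro_auc(y_true, y_score):
    """y_true, y_score: (n_images, 5) binary labels and OVR model outputs."""
    aucs = {d: roc_auc_score(y_true[:, i], y_score[:, i])
            for i, d in enumerate(DISEASES)}
    return aucs, float(np.mean(list(aucs.values())))

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(1698, 5))                     # placeholder labels
y_score = np.clip(y_true + rng.normal(0, 0.7, (1698, 5)), 0, 1) # placeholder scores
per_disease, macro = macro_auc(y_true, y_score)
print(per_disease, "macro-average AUC:", round(macro, 4))
```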
7. Son J, Shin JY, Chun EJ, Jung KH, Park KH, Park SJ. Predicting High Coronary Artery Calcium Score From Retinal Fundus Images With Deep Learning Algorithms. Transl Vis Sci Technol 2020;9:28. PMID: 33184590; PMCID: PMC7410115; DOI: 10.1167/tvst.9.2.28.
Abstract
Purpose: To evaluate detection of high coronary artery calcium (CAC) accumulation from retinal fundus images with deep learning technologies, as an inexpensive and radiation-free screening method.
Methods: We identified individuals who underwent bilateral retinal fundus imaging and CAC score (CACS) evaluation from coronary computed tomography scans on the same day. With this database, the performance of deep learning algorithms (inception-v3) in distinguishing high CACS from a CACS of 0 was evaluated at various thresholds for high CACS. Vessel-inpainted and fovea-inpainted images were also used as input to investigate the areas of interest in determining CACS.
Results: A total of 44,184 images from 20,130 individuals were included. A deep learning algorithm discriminating no CAC from CACS >100 achieved an area under the receiver operating characteristic curve (AUROC) of 82.3% (79.5%–85.0%) with unilateral fundus images and 83.2% (80.2%–86.3%) with bilateral images, under 5-fold cross-validation. AUROC increased as the criterion for high CACS was raised, plateauing at 100 with no significant improvement thereafter. AUROC decreased when the fovea was inpainted and decreased further when vessels were inpainted, whereas it increased when bilateral images were used as input.
Conclusions: Deep learning algorithms could distinguish the visual patterns of retinal fundus images in subjects with CACS >100 from those with no CAC. Exploiting bilateral images improves discrimination performance, and ablation studies removing the retinal vasculature or fovea suggest that the recognizable patterns reside mainly in these areas.
Translational Relevance: Retinal fundus images can be used by deep learning algorithms to predict high CACS.
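A binary CACS classifier in the spirit described above can be sketched with an inception-v3 backbone and a sigmoid head. The input size, head design, and optimizer are assumptions; the study's training details are not shown here.

```python
# Hedged sketch: inception-v3 backbone for P(CACS > 100) from a fundus image.
import tensorflow as tf

def build_cacs_model(input_shape=(299, 299, 3)) -> tf.keras.Model:
    backbone = tf.keras.applications.InceptionV3(
        include_top=False, weights="imagenet", input_shape=input_shape)
    x = tf.keras.layers.GlobalAveragePooling2D()(backbone.output)
    out = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # P(CACS > 100)
    model = tf.keras.Model(backbone.input, out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(name="auroc")])
    return model
```

Bilateral input, which improved AUROC in the study, could be approximated by running this backbone on each eye and concatenating the pooled features before the sigmoid head.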
Affiliations
- Joo Young Shin: Department of Ophthalmology, Seoul National University College of Medicine, Seoul Metropolitan Government Seoul National University Boramae Medical Center, Seoul, Korea
- Eun Ju Chun: Department of Radiology, Seoul National University Bundang Hospital, Seongnam, Korea
- Kyu Hyung Park: Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, Seongnam, Korea
- Sang Jun Park: Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, Seongnam, Korea
8. Ting DSW, Tan TE, Lim CCT. Development and Validation of a Deep Learning System for Detection of Active Pulmonary Tuberculosis on Chest Radiographs: Clinical and Technical Considerations. Clin Infect Dis 2020;69:748-750. PMID: 30418534; DOI: 10.1093/cid/ciy969.
Affiliations
- Daniel Shu Wei Ting: Singapore Eye Research Institute, Singapore; Singapore National Eye Centre, Singapore; Duke-National University of Singapore Medical School, Singapore
- Tien-En Tan: Singapore Eye Research Institute, Singapore; Singapore National Eye Centre, Singapore
- C C Tchoyoson Lim: Duke-National University of Singapore Medical School, Singapore; Department of Neuroradiology, National Neuroscience Institute, Singapore
9. Kim YD, Noh KJ, Byun SJ, Lee S, Kim T, Sunwoo L, Lee KJ, Kang SH, Park KH, Park SJ. Effects of Hypertension, Diabetes, and Smoking on Age and Sex Prediction from Retinal Fundus Images. Sci Rep 2020;10:4623. PMID: 32165702; PMCID: PMC7067849; DOI: 10.1038/s41598-020-61519-9.
Abstract
Retinal fundus images are used to detect organ damage from vascular diseases (e.g., diabetes mellitus and hypertension) and to screen for ocular diseases. We aimed to assess convolutional neural network (CNN) models that predict age and sex from retinal fundus images in normal participants and in participants with underlying systemic vascular-altered status, and to investigate clues regarding the differences between normal ageing and pathologic vascular changes using these models. We developed CNN age and sex prediction models using 219,302 fundus images from normal participants without hypertension, diabetes mellitus (DM), or any smoking history. The trained models were assessed in four test sets: 24,366 images from normal participants, 40,659 images from participants with hypertension, 14,189 images from participants with DM, and 113,510 images from smokers. The CNN model accurately predicted age in normal participants; the correlation between predicted and chronologic age was R² = 0.92, and the mean absolute error (MAE) was 3.06 years. MAEs in the test sets with hypertension (3.46 years), DM (3.55 years), and smoking (2.65 years) were similar to that of normal participants; however, R² values were relatively low (hypertension, R² = 0.74; DM, R² = 0.75; smoking, R² = 0.86). In subgroups of participants over 60 years, MAEs increased to above 4.0 years and accuracies declined for all test sets. Fundus-predicted sex demonstrated acceptable accuracy (area under the curve > 0.96) in all test sets. Retinal fundus images from participants with underlying vascular-altered conditions (hypertension, DM, or smoking) yielded similar MAEs but low coefficients of determination (R²) between predicted and chronologic age, suggesting that the ageing process and pathologic vascular changes exhibit different features. Our models demonstrate the best performance yet reported and provide clues to the relationship and differences between ageing and pathologic changes from underlying systemic vascular conditions; systemic vascular diseases appear to affect the fundus differently from ageing.
Research in context.
Evidence before this study: The human retina and optic disc change continuously with ageing, and they share physiologic and pathologic characteristics with the brain and with systemic vascular status. Because retinal fundus images provide high-resolution in-vivo images of retinal vessels and parenchyma without any invasive procedure, they have been used to screen ocular diseases and have attracted significant attention as a predictive biomarker for cerebral and systemic vascular diseases. Recently, deep neural networks have revolutionised the field of medical image analysis, including retinal fundus images, and have shown reliable results in predicting age, sex, and the presence of cardiovascular diseases.
Added value of this study: This is the first study to demonstrate how a convolutional neural network (CNN) trained on retinal fundus images from normal participants measures the age of participants with underlying vascular conditions such as hypertension, diabetes mellitus (DM), or a history of smoking, using a large database, SBRIA, which contains 412,026 retinal fundus images from 155,449 participants. Our results indicated that the model accurately predicted age in normal participants, whereas correlations (coefficients of determination, R²) in the test sets with hypertension, DM, and smoking were relatively low. Additionally, a subgroup analysis indicated that MAEs increased and accuracies declined significantly in participants over 60 years of age, both in normal participants and in those with vascular-altered conditions. These results suggest that the pathologic retinal vascular changes occurring in systemic vascular diseases differ from the changes of the spontaneous ageing process, and that the ageing process observed in retinal fundus images may saturate at about age 60.
Implications of all available evidence: Based on this study and previous reports, CNNs can accurately and reliably predict age and sex from retinal fundus images. The fact that retinal changes caused by ageing and by systemic vascular diseases occur differently motivates a deeper understanding of the retina. Deep learning-based fundus image reading may become a more useful and beneficial tool for screening and diagnosing systemic and ocular diseases after further development.
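The reported metrics (MAE and R² between predicted and chronologic age, with an over-60 subgroup) can be sketched as below. The arrays are synthetic placeholders standing in for a test set's predictions.

```python
# Sketch of the age-prediction evaluation metrics described above.
import numpy as np
from sklearn.metrics import mean_absolute_error, r2_score

def age_metrics(age_true, age_pred):
    report = {"MAE": mean_absolute_error(age_true, age_pred),
              "R2": r2_score(age_true, age_pred)}
    over60 = age_true > 60                      # subgroup analysed in the study
    if over60.any():
        report["MAE_over60"] = mean_absolute_error(age_true[over60], age_pred[over60])
    return report

rng = np.random.default_rng(0)
age_true = rng.uniform(20, 80, 5000)            # placeholder chronologic ages
age_pred = age_true + rng.normal(0, 3.9, 5000)  # synthetic ~3-year-MAE predictions
print({k: round(v, 2) for k, v in age_metrics(age_true, age_pred).items()})
```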
Affiliations
- Yong Dae Kim: Department of Ophthalmology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Republic of Korea; Department of Ophthalmology, Kangdong Sacred Heart Hospital, Seoul, Korea
- Kyoung Jin Noh: Department of Ophthalmology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Republic of Korea
- Seong Jun Byun: Department of Ophthalmology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Republic of Korea
- Soochahn Lee: School of Electrical Engineering, Kookmin University, Seoul, Republic of Korea
- Tackeun Kim: Department of Neurosurgery, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
- Leonard Sunwoo: Department of Radiology, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
- Kyong Joon Lee: Department of Radiology, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
- Si-Hyuck Kang: Division of Cardiology, Department of Internal Medicine, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
- Kyu Hyung Park: Department of Ophthalmology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Republic of Korea
- Sang Jun Park: Department of Ophthalmology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Republic of Korea
10. Ruamviboonsuk P, Cheung CY, Zhang X, Raman R, Park SJ, Ting DSW. Artificial Intelligence in Ophthalmology: Evolutions in Asia. Asia Pac J Ophthalmol (Phila) 2020;9:78-84. PMID: 32349114; DOI: 10.1097/01.apo.0000656980.41190.bf.
Abstract
Artificial intelligence (AI) has been studied in ophthalmology since digital information became available in ophthalmic care. A significant turning point was the availability of commercial digital color fundus photography in the late 1990s, which allowed digital screening for diabetic retinopathy (DR) to take off. Automated Retinal Disease Assessment software was then developed using machine learning to detect abnormal fundus lesions for DR screening. This early version of AI was not widely adopted because, although sensitivity reached 90%, specificity, at 45%, was not high enough. The recent breakthrough in machine learning is the invention of deep learning, which has accelerated performance to be on par with experts. The first two breakthrough studies on deep learning for DR screening were conducted in Asia: the first represented a collaboration between Asian and United States datasets for algorithm development, whereas the second involved algorithms developed in Asia but validated in different populations across the world. Both found accuracy of >95% for detecting referable DR. Diversity and variety are unique strengths of Asia for AI studies. Many more AI studies are ongoing in Asia, not only as prospective deployments in DR but also in glaucoma, age-related macular degeneration, cataract, and systemic diseases such as Alzheimer's disease. Some Asian countries have laid out plans for digital healthcare systems that use AI as one of the puzzle pieces for solving blindness. More studies on AI and digital health are expected to come from Asia in this new decade.
Affiliations
- Carol Y Cheung: Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Xiulan Zhang: Zhongshan Ophthalmic Center, State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, People's Republic of China
- Rajiv Raman: Shri Bhagwan Mahavir Vitreoretinal Services, Sankara Nethralaya, Chennai, India
- Sang Jun Park: Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, Seongnam, South Korea
- Daniel Shu Wei Ting: Vitreo-retinal Department, Singapore National Eye Center, Singapore; Duke-NUS Medical School, Singapore
11. Orlando JI, Fu H, Barbosa Breda J, van Keer K, Bathula DR, Diaz-Pinto A, Fang R, Heng PA, Kim J, Lee J, Lee J, Li X, Liu P, Lu S, Murugesan B, Naranjo V, Phaye SSR, Shankaranarayana SM, Sikka A, Son J, van den Hengel A, Wang S, Wu J, Wu Z, Xu G, Xu Y, Yin P, Li F, Zhang X, Xu Y, Bogunović H. REFUGE Challenge: A unified framework for evaluating automated methods for glaucoma assessment from fundus photographs. Med Image Anal 2020;59:101570. DOI: 10.1016/j.media.2019.101570.
12. Son J, Shin JY, Kim HD, Jung KH, Park KH, Park SJ. Development and Validation of Deep Learning Models for Screening Multiple Abnormal Findings in Retinal Fundus Images. Ophthalmology 2020;127:85-94. DOI: 10.1016/j.ophtha.2019.05.029.