1. Shamsan A, Senan EM, Ahmad Shatnawi HS. Predicting of diabetic retinopathy development stages of fundus images using deep learning based on combined features. PLoS One 2023; 18:e0289555. [PMID: 37862328; PMCID: PMC10588832; DOI: 10.1371/journal.pone.0289555]
Abstract
The number of diabetic retinopathy (DR) patients increases every year, posing a public health problem. Regular examination of diabetes patients is therefore necessary to prevent DR from progressing to advanced stages that lead to blindness. Manual diagnosis requires effort and expertise, and is prone to errors and disagreement between experts; artificial intelligence techniques can help clinicians reach a proper diagnosis and resolve such disagreements. This study developed three approaches, each with two systems, for early diagnosis of DR progression. All colour fundus images were enhanced and the contrast of the region of interest was increased using filters. Features extracted by DenseNet-121 and AlexNet (Dense-121 and Alex) were fed to Principal Component Analysis (PCA) to select the important features and reduce their dimensionality. The first approach analyses DR images for early prediction of disease progression with an Artificial Neural Network (ANN) fed the selected, low-dimensional features of the Dense-121 and Alex models. The second approach integrates the important, low-dimensional features of the Dense-121 and Alex models before and after PCA. The third approach applies the ANN to radiomic features, which combine the features of each CNN model (Dense-121 and Alex) separately with handcrafted features extracted by Discrete Wavelet Transform (DWT), Local Binary Pattern (LBP), Fuzzy Colour Histogram (FCH), and Gray Level Co-occurrence Matrix (GLCM) methods. With the radiomic features of the Alex model and the handcrafted features, the ANN reached a sensitivity of 97.92%, an AUC of 99.56%, an accuracy of 99.1%, a specificity of 99.4% and a precision of 99.06%.
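As a rough illustration of the combined-features idea described in this abstract, the sketch below concatenates two CNN feature sets, reduces them with PCA, and classifies them with a small neural network using scikit-learn. The feature dimensions, number of components, and the random placeholder data are assumptions for illustration, not the paper's actual configuration.

```python
# Minimal sketch (not the authors' code): fuse two CNN feature sets, reduce the
# dimensionality with PCA, and classify with a small neural network. Random
# arrays stand in for DenseNet-121 / AlexNet embeddings of fundus images.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_images = 200
dense_feats = rng.normal(size=(n_images, 1024))  # placeholder DenseNet-121 features
alex_feats = rng.normal(size=(n_images, 4096))   # placeholder AlexNet features
labels = rng.integers(0, 5, size=n_images)       # placeholder DR stage labels

combined = np.hstack([dense_feats, alex_feats])  # feature fusion before PCA
model = make_pipeline(
    PCA(n_components=100),                       # keep the most informative components
    MLPClassifier(hidden_layer_sizes=(256,), max_iter=300, random_state=0),
)
model.fit(combined, labels)
print("training accuracy:", model.score(combined, labels))
```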
Affiliation(s)
- Ahlam Shamsan
- Computer Department, Applied College, Najran University, Najran, Saudi Arabia
- Ebrahim Mohammed Senan
- Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Alrazi University, Sana’a, Yemen
2. Liu TYA, Ling C, Hahn L, Jones CK, Boon CJ, Singh MS. Prediction of visual impairment in retinitis pigmentosa using deep learning and multimodal fundus images. Br J Ophthalmol 2023; 107:1484-1489. [PMID: 35896367; PMCID: PMC10579177; DOI: 10.1136/bjo-2021-320897]
Abstract
BACKGROUND The efficiency of clinical trials for retinitis pigmentosa (RP) treatment is limited by the screening burden and lack of reliable surrogate markers for functional end points. Automated methods to determine visual acuity (VA) may help address these challenges. We aimed to determine if VA could be estimated using confocal scanning laser ophthalmoscopy (cSLO) imaging and deep learning (DL). METHODS Snellen corrected VA and cSLO imaging were obtained retrospectively. The Johns Hopkins University (JHU) dataset was used for 10-fold cross-validations and internal testing. The Amsterdam University Medical Centers (AUMC) dataset was used for external independent testing. Both datasets had the same exclusion criteria: visually significant media opacities and images not centred on the central macula. The JHU dataset included patients with RP with and without molecular confirmation. The AUMC dataset only included molecularly confirmed patients with RP. Using transfer learning, three versions of the ResNet-152 neural network were trained: infrared (IR), optical coherence tomography (OCT) and combined image (CI). RESULTS In internal testing (JHU dataset, 2569 images, 462 eyes, 231 patients), the area under the curve (AUC) for the binary classification task of distinguishing between Snellen VA 20/40 or better and worse than Snellen VA 20/40 was 0.83, 0.87 and 0.85 for IR, OCT and CI, respectively. In external testing (AUMC dataset, 349 images, 166 eyes, 83 patients), the AUC was 0.78, 0.87 and 0.85 for IR, OCT and CI, respectively. CONCLUSIONS Our algorithm showed robust performance in predicting visual impairment in patients with RP, thus providing proof-of-concept for predicting structure-function correlation based solely on cSLO imaging in patients with RP.
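A hedged sketch of the transfer-learning setup this abstract describes: an ImageNet-pretrained ResNet-152 whose final layer is replaced with a single logit for the binary VA task. The input size, optimizer, and dummy batch are illustrative assumptions; the real pipeline (cSLO preprocessing, cross-validation) is omitted.

```python
# Illustrative only: ResNet-152 adapted by transfer learning for a binary label
# (Snellen VA 20/40 or better vs. worse). Dummy tensors replace the cSLO images.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet152(weights=models.ResNet152_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 1)    # single logit for the binary task

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

images = torch.randn(4, 3, 224, 224)             # placeholder batch of IR/OCT/combined images
targets = torch.tensor([[1.0], [0.0], [1.0], [0.0]])

optimizer.zero_grad()
loss = criterion(model(images), targets)
loss.backward()
optimizer.step()
print("loss:", float(loss))
```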
Affiliation(s)
- Tin Yan Alvin Liu
- Wilmer Eye Institute, Johns Hopkins Hospital, Baltimore, Maryland, USA
- Carlthan Ling
- Department of Ophthalmology, University of Maryland Medical System, Baltimore, Maryland, USA
- Leo Hahn
- Department of Ophthalmology, Amsterdam UMC Locatie AMC, Amsterdam, The Netherlands
- Craig K Jones
- Malone Center for Engineering in Healthcare, Whiting School of Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Camiel JF Boon
- Department of Ophthalmology, Amsterdam UMC Locatie AMC, Amsterdam, The Netherlands
- Department of Ophthalmology, Leiden University Medical Center, Leiden, The Netherlands
- Mandeep S Singh
- Wilmer Eye Institute, Johns Hopkins Hospital, Baltimore, Maryland, USA
- Department of Genetic Medicine, School of Medicine, Johns Hopkins University, Baltimore, Maryland, USA
3. Shimizu E, Ishikawa T, Tanji M, Agata N, Nakayama S, Nakahara Y, Yokoiwa R, Sato S, Hanyuda A, Ogawa Y, Hirayama M, Tsubota K, Sato Y, Shimazaki J, Negishi K. Artificial intelligence to estimate the tear film breakup time and diagnose dry eye disease. Sci Rep 2023; 13:5822. [PMID: 37037877; PMCID: PMC10085985; DOI: 10.1038/s41598-023-33021-5]
Abstract
The use of artificial intelligence (AI) in the diagnosis of dry eye disease (DED) remains limited due to the lack of standardized image formats and analysis models. To overcome these issues, we used the Smart Eye Camera (SEC), a video-recordable slit-lamp device, and collected videos of the anterior segment of the eye. This study aimed to evaluate the accuracy of the AI algorithm in estimating the tear film breakup time and apply this model for the diagnosis of DED according to the Asia Dry Eye Society (ADES) DED diagnostic criteria. Using retrospectively collected DED videos of 158 eyes from 79 patients, 22,172 frames were annotated by a DED specialist to label whether or not each frame showed breakup. The AI algorithm was developed using the training dataset and machine learning. The DED criteria of the ADES were used to determine the diagnostic performance. The accuracy of tear film breakup time estimation was 0.789 (95% confidence interval (CI) 0.769-0.809), and the area under the receiver operating characteristic curve of this AI model was 0.877 (95% CI 0.861-0.893). The sensitivity and specificity of this AI model for the diagnosis of DED were 0.778 (95% CI 0.572-0.912) and 0.857 (95% CI 0.564-0.866), respectively. We successfully developed a novel AI-based diagnostic model for DED. Our diagnostic model has the potential to enable ophthalmology examination outside hospitals and clinics.
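The conversion from frame-level breakup predictions to a breakup time is simple enough to show directly; the sketch below is an assumption about how such a conversion could work (time of the first flagged frame at a given frame rate), not the study's actual post-processing, and the frame rate and predictions are placeholders.

```python
# Hypothetical post-processing step: convert per-frame "breakup" predictions from a
# classifier into a tear film breakup time, assuming frame 0 marks the eye opening.
from typing import Optional, Sequence

def estimate_tfbut(breakup_flags: Sequence[bool], fps: float = 30.0) -> Optional[float]:
    """Seconds until the first frame flagged as breakup, or None if none is flagged."""
    for index, has_breakup in enumerate(breakup_flags):
        if has_breakup:
            return index / fps
    return None

frame_predictions = [False] * 90 + [True] * 30   # breakup first detected at frame 90
print(estimate_tfbut(frame_predictions))         # 3.0 seconds at 30 fps
```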
Affiliation(s)
- Eisuke Shimizu
- Department of Ophthalmology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-8582, Japan
- OUI Inc., DF Building 510, 2-2-8 Minami-Aoyama, Minato-ku, Tokyo, 107-0062, Japan
- Yokohama Keiai Eye Clinic, Courtley House 2F, 1-11-17 Wada, Hodogaya-ku, Kanagawa, 240-0065, Japan
- Toshiki Ishikawa
- Department of Ophthalmology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-8582, Japan
- OUI Inc., DF Building 510, 2-2-8 Minami-Aoyama, Minato-ku, Tokyo, 107-0062, Japan
- Makoto Tanji
- Department of Ophthalmology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-8582, Japan
- OUI Inc., DF Building 510, 2-2-8 Minami-Aoyama, Minato-ku, Tokyo, 107-0062, Japan
- Naomichi Agata
- OUI Inc., DF Building 510, 2-2-8 Minami-Aoyama, Minato-ku, Tokyo, 107-0062, Japan
- Shintaro Nakayama
- Department of Ophthalmology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-8582, Japan
- OUI Inc., DF Building 510, 2-2-8 Minami-Aoyama, Minato-ku, Tokyo, 107-0062, Japan
- Yo Nakahara
- OUI Inc., DF Building 510, 2-2-8 Minami-Aoyama, Minato-ku, Tokyo, 107-0062, Japan
- Ryota Yokoiwa
- OUI Inc., DF Building 510, 2-2-8 Minami-Aoyama, Minato-ku, Tokyo, 107-0062, Japan
- Shinri Sato
- Department of Ophthalmology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-8582, Japan
- Yokohama Keiai Eye Clinic, Courtley House 2F, 1-11-17 Wada, Hodogaya-ku, Kanagawa, 240-0065, Japan
- Akiko Hanyuda
- Department of Ophthalmology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-8582, Japan
- Yoko Ogawa
- Department of Ophthalmology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-8582, Japan
- Masatoshi Hirayama
- Department of Ophthalmology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-8582, Japan
- Kazuo Tsubota
- Department of Ophthalmology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-8582, Japan
- Yasunori Sato
- Department of Preventive Medicine and Public Health, School of Medicine, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-8582, Japan
- Jun Shimazaki
- Department of Ophthalmology, Tokyo Dental College Ichikawa General Hospital, 5-11-13 Sugano, Ichikawa-shi, Chiba, 272-8513, Japan
- Kazuno Negishi
- Department of Ophthalmology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-8582, Japan
4. Berbar MA. Features extraction using encoded local binary pattern for detection and grading diabetic retinopathy. Health Inf Sci Syst 2022; 10:14. [PMID: 35782197; DOI: 10.1007/s13755-022-00181-z]
Abstract
Introduction Reliable computer diagnosis of diabetic retinopathy (DR) is needed to rescue many with diabetes who may be under threat of blindness. This research aims to detect the presence of diabetic retinopathy in fundus images and grade the disease severity without lesion segmentation. Methods To ensure that the fundus images are in a standard state of brightness, a series of preprocessing steps were applied to the green channel image using histogram matching and a median filter. Then, contrast-limited adaptive histogram equalisation is performed, followed by the unsharp filter. The preprocessed image is divided into small blocks, and each block is processed to extract uniform local binary pattern (LBP) features. The extracted features are encoded, and the feature size is reduced to 3.5 percent of its original size. A Support Vector Machine (SVM) classifier and a proposed CNN model were used to classify retinal fundus images as normal or abnormal and to grade the severity of DR. Results Our feature extraction method was tested on a binary classifier and resulted in an accuracy of 98.37% and 98.84% on the Messidor2 and EyePACS databases, respectively. The proposed system could grade DR severity into three grades (0: no DR, 1: mild DR, and 5: moderate, severe NPDR, and PDR). It obtains an F1-score of 0.9617 and an accuracy of 95.37% on the EyePACS database, and an F1-score of 0.9860 and an accuracy of 97.57% on the Messidor2 database. The resultant values depend on the selection of (neighbours, radius) pairs during the extraction of LBP features. Conclusions This study's results show that the preprocessing steps are significant and have a great effect on highlighting image features. The novel method of stacking and encoding the LBP values in the feature vector greatly affects the results when using SVM or CNN for classification. The proposed system outperforms the state of the art. The proposed CNN model performs better than the SVM.
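As a sketch of the block-wise LBP step described above (without the paper's encoding/stacking scheme), the snippet below histograms uniform LBP codes per block with scikit-image; the block size and the (neighbours, radius) pair are illustrative choices, not the study's tuned values.

```python
# Rough sketch: uniform LBP histograms per non-overlapping block of the
# preprocessed green channel. Not the paper's encoded/stacked feature vector.
import numpy as np
from skimage.feature import local_binary_pattern

def block_lbp_features(image: np.ndarray, block: int = 64,
                       neighbours: int = 8, radius: int = 1) -> np.ndarray:
    """Concatenate normalised uniform-LBP histograms over non-overlapping blocks."""
    lbp = local_binary_pattern(image, P=neighbours, R=radius, method="uniform")
    n_bins = neighbours + 2                      # uniform codes 0..P plus one non-uniform bin
    feats = []
    for r in range(0, image.shape[0] - block + 1, block):
        for c in range(0, image.shape[1] - block + 1, block):
            hist, _ = np.histogram(lbp[r:r + block, c:c + block],
                                   bins=n_bins, range=(0, n_bins))
            feats.append(hist / hist.sum())
    return np.concatenate(feats)

green_channel = np.random.rand(512, 512)         # stands in for a preprocessed fundus image
print(block_lbp_features(green_channel).shape)   # 64 blocks x 10 bins = (640,)
```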
5. Srinivasan V, Strodthoff N, Ma J, Binder A, Müller KR, Samek W. To pretrain or not? A systematic analysis of the benefits of pretraining in diabetic retinopathy. PLoS One 2022; 17:e0274291. [PMID: 36256665; PMCID: PMC9578637; DOI: 10.1371/journal.pone.0274291]
Abstract
There is an increasing number of medical use cases where classification algorithms based on deep neural networks reach performance levels that are competitive with human medical experts. To alleviate the challenges of small dataset sizes, these systems often rely on pretraining. In this work, we aim to assess the broader implications of these approaches in order to better understand what type of pretraining works reliably (with respect to performance, robustness, learned representations, etc.) in practice and what type of pretraining dataset is best suited to achieve good performance in small target dataset size scenarios. Considering diabetic retinopathy grading as an exemplary use case, we compare the impact of different training procedures, including recently established self-supervised pretraining methods based on contrastive learning. To this end, we investigate different aspects such as quantitative performance, statistics of the learned feature representations, interpretability and robustness to image distortions. Our results indicate that models initialized from ImageNet pretraining show a significant increase in performance, generalization and robustness to image distortions. In particular, self-supervised models show further benefits over supervised models. Self-supervised models with initialization from ImageNet pretraining not only show higher performance, they also reduce overfitting to large lesions while better accounting for minute lesions indicative of disease progression. Understanding the effects of pretraining in a broader sense that goes beyond simple performance comparisons is of crucial importance for the broader medical imaging community beyond the use case considered in this work.
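A minimal sketch of the initialization comparison being analysed: the same architecture is fine-tuned either from random weights or from ImageNet weights. ResNet-50 and the five-class grading head are assumptions for illustration; the self-supervised (contrastive) variants studied in the paper are not shown.

```python
# Schematic comparison of initializations, assuming ResNet-50 and five DR grades.
import torch.nn as nn
from torchvision import models

def build_model(imagenet_init: bool, num_classes: int = 5) -> nn.Module:
    """Same architecture, different starting weights for the fine-tuning comparison."""
    weights = models.ResNet50_Weights.IMAGENET1K_V2 if imagenet_init else None
    model = models.resnet50(weights=weights)
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # DR grading head
    return model

pretrained_model = build_model(imagenet_init=True)   # ImageNet initialization
scratch_model = build_model(imagenet_init=False)     # random-initialization baseline
```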
Affiliation(s)
- Vignesh Srinivasan
- Department of Artificial Intelligence, Fraunhofer Heinrich Hertz Institute, Berlin, Germany
- Nils Strodthoff
- School of Medicine and Health Services, Oldenburg University, Oldenburg, Germany
- Jackie Ma
- Department of Artificial Intelligence, Fraunhofer Heinrich Hertz Institute, Berlin, Germany
- Alexander Binder
- Singapore Institute of Technology, ICT Cluster, Singapore, Singapore
- Department of Informatics, Oslo University, Oslo, Norway
- Klaus-Robert Müller
- BIFOLD - Berlin Institute for the Foundations of Learning and Data, Berlin, Germany
- Department of Electrical Engineering and Computer Science, Technische Universität Berlin, Berlin, Germany
- Department of Artificial Intelligence, Korea University, Seoul, South Korea
- Max Planck Institute for Informatics, Saarbrücken, Germany
- Wojciech Samek
- Department of Artificial Intelligence, Fraunhofer Heinrich Hertz Institute, Berlin, Germany
- BIFOLD - Berlin Institute for the Foundations of Learning and Data, Berlin, Germany
- Department of Electrical Engineering and Computer Science, Technische Universität Berlin, Berlin, Germany
6. Khan NC, Perera C, Dow ER, Chen KM, Mahajan VB, Mruthyunjaya P, Do DV, Leng T, Myung D. Predicting Systemic Health Features from Retinal Fundus Images Using Transfer-Learning-Based Artificial Intelligence Models. Diagnostics (Basel) 2022; 12:1714. [PMID: 35885619; PMCID: PMC9322827; DOI: 10.3390/diagnostics12071714]
Abstract
While color fundus photos are used in routine clinical practice to diagnose ophthalmic conditions, evidence suggests that ocular imaging contains valuable information regarding the systemic health features of patients. These features can be identified through computer vision techniques including deep learning (DL) artificial intelligence (AI) models. We aim to construct a DL model that can predict systemic features from fundus images and to determine the optimal method of model construction for this task. Data were collected from a cohort of patients undergoing diabetic retinopathy screening between March 2020 and March 2021. Two models were created for each of 12 systemic health features based on the DenseNet201 architecture: one utilizing transfer learning with images from ImageNet and another from 35,126 fundus images. Here, 1277 fundus images were used to train the AI models. Area under the receiver operating characteristic curve (AUROC) scores were used to compare the model performance. Models utilizing the ImageNet transfer learning data were superior to those using retinal images for transfer learning (mean AUROC 0.78 vs. 0.65, p-value < 0.001). Models using ImageNet pretraining were able to predict systemic features including ethnicity (AUROC 0.93), age > 70 (AUROC 0.90), gender (AUROC 0.85), ACE inhibitor use (AUROC 0.82), and ARB medication use (AUROC 0.78). We conclude that fundus images contain valuable information about the systemic characteristics of a patient. To optimize DL model performance, we recommend that even domain specific models consider using transfer learning from more generalized image sets to improve accuracy.
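The AUROC comparison reported above reduces to scoring two sets of model outputs against the same labels; the snippet below shows this with synthetic labels and scores (placeholders, not study data).

```python
# Toy AUROC comparison with synthetic labels and prediction scores.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
labels = rng.integers(0, 2, size=500)                          # e.g. age > 70 yes/no
imagenet_scores = labels * 0.40 + rng.normal(0.3, 0.25, 500)   # stronger separation
retina_scores = labels * 0.15 + rng.normal(0.4, 0.30, 500)     # weaker separation

print("ImageNet-pretrained AUROC:", round(roc_auc_score(labels, imagenet_scores), 2))
print("Retina-pretrained AUROC:  ", round(roc_auc_score(labels, retina_scores), 2))
```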
Affiliation(s)
- Nergis C. Khan
- Byers Eye Institute at Stanford, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA 94305, USA
- Chandrashan Perera
- Byers Eye Institute at Stanford, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA 94305, USA
- Department of Ophthalmology, Fremantle Hospital, Perth, WA 6004, Australia
- Eliot R. Dow
- Byers Eye Institute at Stanford, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA 94305, USA
- Karen M. Chen
- Byers Eye Institute at Stanford, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA 94305, USA
- Vinit B. Mahajan
- Byers Eye Institute at Stanford, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA 94305, USA
- Prithvi Mruthyunjaya
- Byers Eye Institute at Stanford, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA 94305, USA
- Diana V. Do
- Byers Eye Institute at Stanford, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA 94305, USA
- Theodore Leng
- Byers Eye Institute at Stanford, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA 94305, USA
- David Myung
- Byers Eye Institute at Stanford, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA 94305, USA
- VA Palo Alto Health Care System, Palo Alto, CA 94304, USA
- Correspondence: Tel.: +1-650-724-3948
7. Liu TYA, Wu JH. The Ethical and Societal Considerations for the Rise of Artificial Intelligence and Big Data in Ophthalmology. Front Med (Lausanne) 2022; 9:845522. [PMID: 35836952; PMCID: PMC9273876; DOI: 10.3389/fmed.2022.845522]
Abstract
Medical specialties with access to a large amount of imaging data, such as ophthalmology, have been at the forefront of the artificial intelligence (AI) revolution in medicine, driven by deep learning (DL) and big data. With the rise of AI and big data, there has also been increasing concern about the issues of bias and privacy, which can be partially addressed by low-shot learning, generative DL, federated learning and a "model-to-data" approach, as demonstrated by various groups of investigators in ophthalmology. However, to adequately tackle the ethical and societal challenges associated with the rise of AI in ophthalmology, a more comprehensive approach is preferable. Specifically, AI should be viewed as sociotechnical, meaning that this technology shapes, and is shaped by, social phenomena.
Affiliation(s)
- T. Y. Alvin Liu
- Wilmer Eye Institute, Johns Hopkins University, Baltimore, MD, United States
- Correspondence: T. Y. Alvin Liu
- Jo-Hsuan Wu
- Shiley Eye Institute and Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, CA, United States
8. Jansen LG, Schultz T, Holz FG, Finger RP, Wintergerst MWM. [Smartphone-based fundus imaging: applications and adapters]. Ophthalmologe 2021; 119:112-126. [PMID: 34913992; DOI: 10.1007/s00347-021-01536-9]
Abstract
BACKGROUND Smartphone-based fundus imaging (SBFI) is an innovative and low-cost alternative to conventional color fundus photography. Since the first reports on this topic more than 10 years ago, a large number of studies on different adapters and clinical applications have been published. OBJECTIVE The aim of this review article is to provide an overview of the development of SBFI and of the adapters and clinical applications published so far. MATERIAL AND METHODS A literature search was performed using the MEDLINE and Science Citation Index Expanded databases without time restrictions. RESULTS Overall, 11 adapters were included and compared in terms of exemplary image material, field of view, acquisition costs, weight, software, application range, smartphone compatibility and certification. Previously published SBFI applications include screening for diabetic retinopathy, glaucoma and retinopathy of prematurity, as well as use in emergency medicine, pediatrics and medical education/teaching. Image quality of conventional retinal cameras is in general superior to SBFI. First approaches to automatic detection of diabetic retinopathy with SBFI are promising, and the use of automatic image processing algorithms enables the generation of wide-field image montages. CONCLUSION SBFI is a versatile, mobile, low-cost alternative to conventional equipment for color fundus photography. In addition, it facilitates the delegation of ophthalmological examinations to assistance personnel in telemedical settings, could simplify retinal documentation, improve teaching, and improve ophthalmological care, particularly in low- and middle-income countries.
Affiliation(s)
- Linus G Jansen
- Klinik für Augenheilkunde, Universitätsklinikum Bonn, Ernst-Abbe-Str. 2, 53127, Bonn, Germany
- Thomas Schultz
- Institut für Informatik II, Universität Bonn, Friedrich-Hirzebruch-Allee 5, 53115, Bonn, Germany
- Bonn-Aachen International Center for Information Technology (B-IT), Universität Bonn, Friedrich-Hirzebruch-Allee 5, 53115, Bonn, Germany
- Frank G Holz
- Klinik für Augenheilkunde, Universitätsklinikum Bonn, Ernst-Abbe-Str. 2, 53127, Bonn, Germany
- Robert P Finger
- Klinik für Augenheilkunde, Universitätsklinikum Bonn, Ernst-Abbe-Str. 2, 53127, Bonn, Germany
9. Yu TT, Ma D, Lo J, Ju MJ, Beg MF, Sarunic MV. Effect of optical coherence tomography and angiography sampling rate towards diabetic retinopathy severity classification. Biomed Opt Express 2021; 12:6660-6673. [PMID: 34745763; PMCID: PMC8547994; DOI: 10.1364/boe.431992]
Abstract
Optical coherence tomography (OCT) and OCT angiography (OCT-A) may benefit the screening of diabetic retinopathy (DR). This study investigated the effect of laterally subsampling OCT/OCT-A en face scans by up to a factor of 8 when using deep neural networks for automated referable DR (rDR) classification. There was no significant difference in classification performance across all evaluation metrics when subsampling up to a factor of 3, and only minimal differences up to a factor of 8. Our findings suggest that OCT/OCT-A can reduce the number of samples (and hence the acquisition time) acquired per volume over a given field of view on the retina for rDR classification.
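Lateral subsampling of an en face scan by an integer factor can be illustrated with simple array slicing; the grid size and factors below are assumptions used only to show the shape reduction, not the study's acquisition parameters.

```python
# Simple sketch of lateral subsampling of an en face OCT/OCT-A image.
import numpy as np

def subsample_enface(enface: np.ndarray, factor: int) -> np.ndarray:
    """Keep every `factor`-th pixel along both lateral dimensions."""
    return enface[::factor, ::factor]

enface = np.random.rand(304, 304)                # placeholder en face grid
for factor in (1, 2, 3, 8):
    print(factor, subsample_enface(enface, factor).shape)
```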
Affiliation(s)
- Timothy T. Yu
- Engineering Science, Simon Fraser University, Burnaby BC V5A1S6, Canada
- Da Ma
- Engineering Science, Simon Fraser University, Burnaby BC V5A1S6, Canada
- Julian Lo
- Engineering Science, Simon Fraser University, Burnaby BC V5A1S6, Canada
- Myeong Jin Ju
- Dept. of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, BC, V5Z 3N9, Canada
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, V5Z 3N9, Canada
- Mirza Faisal Beg
- Engineering Science, Simon Fraser University, Burnaby BC V5A1S6, Canada
10. Liu TYA, Wei J, Zhu H, Subramanian PS, Myung D, Yi PH, Hui FK, Unberath M, Ting DSW, Miller NR. Detection of Optic Disc Abnormalities in Color Fundus Photographs Using Deep Learning. J Neuroophthalmol 2021; 41:368-374. [PMID: 34415271; PMCID: PMC10637344; DOI: 10.1097/wno.0000000000001358]
Abstract
BACKGROUND To date, deep learning-based detection of optic disc abnormalities in color fundus photographs has mostly been limited to the field of glaucoma. However, many life-threatening systemic and neurological conditions can manifest as optic disc abnormalities. In this study, we aimed to extend the application of deep learning (DL) in optic disc analyses to detect a spectrum of nonglaucomatous optic neuropathies. METHODS Using transfer learning, we trained a ResNet-152 deep convolutional neural network (DCNN) to distinguish between normal and abnormal optic discs in color fundus photographs (CFPs). Our training data set included 944 deidentified CFPs (abnormal 364; normal 580). Our testing data set included 151 deidentified CFPs (abnormal 71; normal 80). Both the training and testing data sets contained a wide range of optic disc abnormalities, including but not limited to ischemic optic neuropathy, atrophy, compressive optic neuropathy, hereditary optic neuropathy, hypoplasia, papilledema, and toxic optic neuropathy. The standard measures of performance (sensitivity, specificity, and area under the receiver operating characteristic curve (AUC-ROC)) were used for evaluation. RESULTS During the 10-fold cross-validation test, our DCNN for distinguishing between normal and abnormal optic discs achieved the following mean performance: AUC-ROC 0.99 (95% CI: 0.98-0.99), sensitivity 94% (95% CI: 91%-97%), and specificity 96% (95% CI: 93%-99%). When evaluated against the external testing data set, our model achieved the following mean performance: AUC-ROC 0.87, sensitivity 90%, and specificity 69%. CONCLUSION In summary, we have developed a deep learning algorithm that is capable of detecting a spectrum of optic disc abnormalities in color fundus photographs, with a focus on neuro-ophthalmological etiologies. As the next step, we plan to validate our algorithm prospectively as a focused screening tool in the emergency department, which, if successful, could be beneficial because current practice patterns and training predict a shortage of neuro-ophthalmologists and of ophthalmologists in general in the near future.
Affiliation(s)
- T Y Alvin Liu
- Department of Ophthalmology (TYAL, NRM), Wilmer Eye Institute, Johns Hopkins University, Baltimore, Maryland; Department of Biomedical Engineering (JW), Johns Hopkins University, Baltimore, Maryland; Malone Center for Engineering in Healthcare (HZ, MU), Johns Hopkins University, Baltimore, Maryland; Department of Radiology (PHY, FKH), Johns Hopkins University, Baltimore, Maryland; Singapore Eye Research Institute (DSWT), Singapore National Eye Center, Duke-NUS Medical School, National University of Singapore, Singapore ; Department of Ophthalmology (PSS), University of Colorado School of Medicine, Aurora, Colorado; and Department of Ophthalmology (DM), Byers Eye Institute, Stanford University, Palo Alto, California
11.
Abstract
PURPOSE OF REVIEW Artificial intelligence and deep learning have become important tools in extracting data from ophthalmic surgery to evaluate, teach, and aid the surgeon in all phases of surgical management. The purpose of this review is to highlight the ever-increasing intersection of computer vision, machine learning, and ophthalmic microsurgery. RECENT FINDINGS Deep learning algorithms are being applied to help evaluate and teach surgical trainees. Artificial intelligence tools are improving real-time surgical instrument tracking, phase segmentation, as well as enhancing the safety of robotic-assisted vitreoretinal surgery. SUMMARY Similar to strides appreciated in ophthalmic medical disease, artificial intelligence will continue to become an important part of surgical management of ocular conditions. Machine learning applications will help push the boundaries of what surgeons can accomplish to improve patient outcomes.
Affiliation(s)
- Kapil Mishra
- Department of Ophthalmology, Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California, USA
12. Lakshminarayanan V, Kheradfallah H, Sarkar A, Jothi Balaji J. Automated Detection and Diagnosis of Diabetic Retinopathy: A Comprehensive Survey. J Imaging 2021; 7:165. [PMID: 34460801; PMCID: PMC8468161; DOI: 10.3390/jimaging7090165]
Abstract
Diabetic Retinopathy (DR) is a leading cause of vision loss in the world. In the past few years, artificial intelligence (AI) based approaches have been used to detect and grade DR. Early detection enables appropriate treatment and thus prevents vision loss. For this purpose, both fundus and optical coherence tomography (OCT) images are used to image the retina. Next, Deep-learning (DL)-/machine-learning (ML)-based approaches make it possible to extract features from the images and to detect the presence of DR, grade its severity and segment associated lesions. This review covers the literature dealing with AI approaches to DR such as ML and DL in classification and segmentation that have been published in the open literature within six years (2016-2021). In addition, a comprehensive list of available DR datasets is reported. This list was constructed using both the PICO (P-Patient, I-Intervention, C-Control, O-Outcome) and Preferred Reporting Items for Systematic Review and Meta-analysis (PRISMA) 2009 search strategies. We summarize a total of 114 published articles which conformed to the scope of the review. In addition, a list of 43 major datasets is presented.
Affiliation(s)
- Vasudevan Lakshminarayanan
- Theoretical and Experimental Epistemology Lab, School of Optometry and Vision Science, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Hoda Kheradfallah
- Theoretical and Experimental Epistemology Lab, School of Optometry and Vision Science, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Arya Sarkar
- Department of Computer Engineering, University of Engineering and Management, Kolkata 700 156, India
13. Pujari A, Saluja G, Agarwal D, Sinha A, P R A, Kumar A, Sharma N. Clinical Role of Smartphone Fundus Imaging in Diabetic Retinopathy and Other Neuro-retinal Diseases. Curr Eye Res 2021; 46:1605-1613. [PMID: 34325587; DOI: 10.1080/02713683.2021.1958347]
Abstract
Purpose: Many electronic gadgets, including smartphones and smartwatches, have the potential to become invaluable health care devices in the future. The role of smartphones has been highlighted on many occasions in different areas, and they continue to play a major role in clinical documentation, clinical consultation, and the digitalization of ocular care. Over the last decade, many treatable conditions, including diabetic retinopathy, glaucoma, and other pediatric retinal diseases, have been imaged using smartphones. Methods: To comprehend this cumulative knowledge, a detailed medical literature search was conducted on PubMed/Medline, Scopus, and Web of Science up to February 2021. Results: The included literature revealed definitive progress in posterior segment imaging. From simple torch-light examination with a smartphone to present-day compact handheld devices with integrated artificial intelligence software, these tools have changed the perspective of ocular imaging in ophthalmology. Consistently reproducible results, constantly improving imaging techniques and, most importantly, affordable costs have strengthened their role as effective screening devices in ophthalmology. Moreover, the achievable field of view, ocular safety, and their utility in non-ophthalmic specialties continue to grow. Conclusions: Smartphone imaging can now be considered a quick, cost-effective, and digital tool for posterior segment screening; however, its definitive role in routine ophthalmic clinics is yet to be established.
Affiliation(s)
- Amar Pujari
- Dr. Rajendra Prasad Centre for Ophthalmic Sciences, All India Institute of Medical Sciences, New Delhi, India
- Gunjan Saluja
- Dr. Rajendra Prasad Centre for Ophthalmic Sciences, All India Institute of Medical Sciences, New Delhi, India
- Divya Agarwal
- Dr. Rajendra Prasad Centre for Ophthalmic Sciences, All India Institute of Medical Sciences, New Delhi, India
- Ayushi Sinha
- Dr. Rajendra Prasad Centre for Ophthalmic Sciences, All India Institute of Medical Sciences, New Delhi, India
- Ananya P R
- Dr. Rajendra Prasad Centre for Ophthalmic Sciences, All India Institute of Medical Sciences, New Delhi, India
- Atul Kumar
- Dr. Rajendra Prasad Centre for Ophthalmic Sciences, All India Institute of Medical Sciences, New Delhi, India
- Namrata Sharma
- Dr. Rajendra Prasad Centre for Ophthalmic Sciences, All India Institute of Medical Sciences, New Delhi, India