1
Yu S, Yan C, Qin G, Pazo EE, He X, Qi P, Li M, Han D, He W, He X. Assessing the Impact of AI-Assisted Portable Slit Lamps on Rural Primary Ophthalmic Medical Service. Curr Eye Res 2025;50:551-558. [PMID: 39910748] [DOI: 10.1080/02713683.2025.2458131]
Abstract
PURPOSE To investigate the effect of an AI-assisted portable slit lamp (iSpector) and basic ophthalmology training on cataract detection, referral, and surgery rates in rural areas. METHODS This randomized controlled trial assigned 63 village doctors to either the AI-assisted group (iSpector plus training) or the control group (training only). Doctors were followed for 1 year before the intervention, as a baseline, and for 1 year after. Multivariable Poisson regression was used to compare cataract detection, referral, and surgery rates between the two groups, adjusted for the doctors' baseline characteristics. Subgroup analyses estimated the change after the intervention within each group. RESULTS Compared to the control group, the AI-assisted group's cataract detection rate was comparable, while its referral rate was 1.7 times higher and its surgery rate 4.9 times higher. Providing iSpector plus training increased all three rates relative to baseline, whereas training alone raised the detection rate but did not change the referral or surgery rates. CONCLUSIONS iSpector helps village doctors detect and refer cataract patients appropriately, increasing the probability that patients receive cataract surgery.
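The rate comparisons above come from multivariable Poisson regression. As a minimal, illustrative sketch (the counts and person-time below are hypothetical, not the study's data), the unadjusted incidence rate ratio that such a model estimates for a two-group comparison can be computed directly; a full adjusted analysis would instead fit a Poisson GLM with covariates:

```python
import math

def rate_ratio(events_a, persontime_a, events_b, persontime_b, z=1.96):
    """Incidence rate ratio (group A vs. group B) with a Wald 95% CI.
    This is the quantity a Poisson regression with a log link estimates
    for a two-group comparison without covariates."""
    irr = (events_a / persontime_a) / (events_b / persontime_b)
    # Standard error of log(IRR) under Poisson counts
    se = math.sqrt(1 / events_a + 1 / events_b)
    lo = math.exp(math.log(irr) - z * se)
    hi = math.exp(math.log(irr) + z * se)
    return irr, (lo, hi)

# Hypothetical: 49 referrals over 1000 doctor-months (AI-assisted)
# vs. 18 referrals over 1000 doctor-months (control)
irr, ci = rate_ratio(49, 1000, 18, 1000)
```

The confidence interval is computed on the log scale and exponentiated back, which is the standard approach for ratio estimates.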
Affiliation(s)
- Sile Yu
- He University, Shenyang, China
- Peng Qi
- He Eye Specialist Hospital, Shenyang, China
- Mingze Li
- He Eye Specialist Hospital, Shenyang, China
- Wei He
- He Eye Specialist Hospital, Shenyang, China
2
Zhao Z, Zhang W, Chen X, Song F, Gunasegaram J, Huang W, Shi D, He M, Liu N. Slit Lamp Report Generation and Question Answering: Development and Validation of a Multimodal Transformer Model with Large Language Model Integration. J Med Internet Res 2024;26:e54047. [PMID: 39753218] [PMCID: PMC11729784] [DOI: 10.2196/54047]
Abstract
BACKGROUND Large language models have shown remarkable efficacy in various medical research and clinical applications. However, their skills in medical image recognition and subsequent report generation or question answering (QA) remain limited. OBJECTIVE We aim to fine-tune a multimodal, transformer-based model for generating medical reports from slit lamp images and to develop a QA system using Llama2. We term this entire pipeline slit lamp-GPT. METHODS Our research used a dataset of 25,051 slit lamp images from 3409 participants, paired with the corresponding physician-written medical reports. These data, split into training, validation, and test sets, were used to fine-tune the Bootstrapping Language-Image Pre-training (BLIP) framework for report generation. The generated text reports and human-posed questions were then input into Llama2 for QA. We evaluated performance using quantitative metrics (BLEU [bilingual evaluation understudy], CIDEr [consensus-based image description evaluation], ROUGE-L [Recall-Oriented Understudy for Gisting Evaluation-Longest Common Subsequence], SPICE [Semantic Propositional Image Caption Evaluation], accuracy, sensitivity, specificity, precision, and F1-score) and the subjective assessments of two experienced ophthalmologists on a 1-3 scale (1 indicating high quality). RESULTS We identified 50 conditions related to diseases or postoperative complications through keyword matching in the initial reports. The refined slit lamp-GPT model achieved BLEU-1 through BLEU-4 scores of 0.67, 0.66, 0.65, and 0.65, respectively, a CIDEr score of 3.24, a ROUGE-L score of 0.61, and a SPICE score of 0.37. The most frequently identified conditions were cataract (22.95%), age-related cataract (22.03%), and conjunctival concretion (13.13%). Disease classification metrics showed an overall accuracy of 0.82 and an F1-score of 0.64, with high accuracies (≥0.9) for intraocular lens, conjunctivitis, and chronic conjunctivitis, and high F1-scores (≥0.9) for cataract and age-related cataract. For both the report generation and QA components, the two evaluating ophthalmologists reached substantial agreement, with κ scores between 0.71 and 0.84. Across 100 generated reports, they awarded mean scores of 1.36 for both completeness and correctness; 64% (64/100) were considered "entirely good," and 93% (93/100) were "acceptable." Across 300 generated answers to questions, the mean scores were 1.33 for completeness, 1.14 for correctness, and 1.15 for possible harm, with 66.3% (199/300) rated "entirely good" and 91.3% (274/300) "acceptable." CONCLUSIONS This study introduces the slit lamp-GPT model for report generation and subsequent QA, highlighting the potential of large language models to assist ophthalmologists and patients.
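The substantial inter-rater agreement above (κ between 0.71 and 0.84) refers to Cohen's kappa, which discounts raw agreement by the agreement expected from chance. A minimal pure-Python sketch with hypothetical 1-3 quality scores (not the study's ratings):

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters over the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement from each rater's marginal label frequencies
    c1, c2 = Counter(rater1), Counter(rater2)
    chance = sum(c1[k] * c2[k] for k in c1) / (n * n)
    return (observed - chance) / (1 - chance)

# Hypothetical 1-3 quality scores from two graders
r1 = [1, 1, 2, 1, 3, 2, 1, 1, 2, 3]
r2 = [1, 1, 2, 2, 3, 2, 1, 1, 1, 3]
kappa = cohens_kappa(r1, r2)
```

Values of 0.61-0.80 are conventionally read as "substantial" agreement, which is the band the study's κ scores fall into.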
Affiliation(s)
- Ziwei Zhao
- School of Optometry, The Hong Kong Polytechnic University, Hong Kong, China
- Weiyi Zhang
- School of Optometry, The Hong Kong Polytechnic University, Hong Kong, China
- Xiaolan Chen
- School of Optometry, The Hong Kong Polytechnic University, Hong Kong, China
- Fan Song
- School of Optometry, The Hong Kong Polytechnic University, Hong Kong, China
- Wenyong Huang
- Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Danli Shi
- School of Optometry, The Hong Kong Polytechnic University, Hong Kong, China
- Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Hong Kong, China
- Mingguang He
- School of Optometry, The Hong Kong Polytechnic University, Hong Kong, China
- Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Hong Kong, China
- Centre for Eye and Vision Research (CEVR), Hong Kong, China
- Na Liu
- Guangzhou Cadre and Talent Health Management Center, Guangzhou, China
3
Wichmann J, Gesk TS, Leyer M. Acceptance of AI in Health Care for Short- and Long-Term Treatments: Pilot Development Study of an Integrated Theoretical Model. JMIR Form Res 2024;8:e48600. [PMID: 39024565] [PMCID: PMC11294784] [DOI: 10.2196/48600]
Abstract
BACKGROUND As digital technologies, and artificial intelligence (AI) in particular, become increasingly important in health care, it is essential to determine whether and why potential users intend to use the related health information systems (HIS). Several theories exist, but they focus mainly on aspects of either health care or information systems, alongside general psychological theories, and hence offer only a small number of variables to explain future behavior. Research that provides a larger set of variables by combining theories from health care, information systems, and psychology is therefore necessary. OBJECTIVE This study aims to investigate the intention to use new HIS for decisions concerning short- and long-term medical treatments, using an integrated approach with several variables to explain future behavior. METHODS We developed an integrated theoretical model based on theories from health care, information systems, and psychology that allowed us to analyze the duality of adaptive and nonadaptive appraisals and their influence on the intention to use HIS. We applied the model to a short-term treatment (AI-based HIS for surgery) and a long-term treatment (diabetes tracking), analyzing survey data with structural equation modeling. To differentiate between levels of AI involvement, we used scenarios covering treatment by physicians only, by physicians with AI support, and by AI only, to understand how individuals perceive the influence of AI. RESULTS For both short- and long-term treatments, the variables perceived threats, fear (disease), perceived efficacy, attitude (HIS), and perceived norms were important in determining the intention to use AI-based HIS. Perceived efficacy and attitude (HIS) were the most important determinants across all treatments and scenarios. In contrast, abilities (HIS) mattered for short-term treatments only. Across our 9 scenarios, adaptive and nonadaptive appraisals were both important determinants of intention to use, depending on whether the treatment was familiar. The R² values for our scenarios ranged from 57.9% to 81.7%, indicating medium-to-good explanatory power. CONCLUSIONS We contribute to the HIS literature by highlighting the importance of integrating disease- and technology-related factors and by providing an integrated theoretical model. We show how adaptive and nonadaptive appraisals should be weighed when reporting on medical decisions in the future, for both short- and long-term treatments. Physicians and HIS developers can use our insights to identify promising rationales for HIS adoption concerning short- and long-term treatments and to adapt and develop HIS accordingly. In particular, HIS developers should ensure that future HIS perform well functionally, as our study shows that efficient HIS lead to a more positive attitude toward the HIS and ultimately a higher intention to use.
Affiliation(s)
- Johannes Wichmann
- Working group Digitalization and Process Management, Department of Business, Philipps-University Marburg, Marburg, Germany
- Tanja Sophie Gesk
- Working group Digitalization and Process Management, Department of Business, Philipps-University Marburg, Marburg, Germany
- Michael Leyer
- Working group Digitalization and Process Management, Department of Business, Philipps-University Marburg, Marburg, Germany
- Management Department, Queensland University of Technology, Brisbane, Australia
4
Vanathi M. Cataract surgery innovations. Indian J Ophthalmol 2024;72:613-614. [PMID: 38648429] [PMCID: PMC11168568] [DOI: 10.4103/ijo.ijo_888_24]
Affiliation(s)
- M Vanathi
- Cornea and Ocular Surface, Cataract and Refractive Services, Dr R P Centre for Ophthalmic Sciences, All India Institute of Medical Sciences, New Delhi, India
5
Lu MC, Deng C, Greenwald MF, Farsiu S, Prajna NV, Nallasamy N, Pawar M, Hart JN, S.R. S, Kochar P, Selvaraj S, Levine H, Amescua G, Sepulveda-Beltran PA, Niziol LM, Woodward MA. Automatic Classification of Slit-Lamp Photographs by Imaging Illumination. Cornea 2024;43:419-424. [PMID: 37267474] [PMCID: PMC10689570] [DOI: 10.1097/ico.0000000000003318]
Abstract
PURPOSE The aim of this study was to facilitate deep learning systems in image annotation for diagnosing keratitis type by developing an automated algorithm that classifies slit-lamp photographs (SLPs) by illumination technique. METHODS SLPs were collected from patients with corneal ulcers at Kellogg Eye Center, Bascom Palmer Eye Institute, and Aravind Eye Care Systems. Illumination techniques were slit beam, diffuse white light, diffuse blue light with fluorescein, and sclerotic scatter (ScS). Images were manually labeled for illumination and randomly split into training, validation, and testing data sets (70%:15%:15%). Classification algorithms, including MobileNetV2, ResNet50, LeNet, AlexNet, multilayer perceptron, and k-nearest neighbors, were trained to distinguish the 4 illumination techniques. Algorithm performance on the test data set was evaluated with 95% confidence intervals (CIs) for accuracy, F1 score, and area under the receiver operating characteristic curve (AUC-ROC), overall and by class (one-vs-rest). RESULTS A total of 12,132 images from 409 patients were analyzed, including 41.8% (n = 5069) slit-beam photographs, 21.2% (2571) diffuse white light, 19.5% (2364) diffuse blue light, and 17.5% (2128) ScS. MobileNetV2 achieved the highest overall F1 score of 97.95% (CI, 97.94%-97.97%), AUC-ROC of 99.83% (99.72%-99.9%), and accuracy of 98.98% (98.97%-98.98%). The F1 scores for slit beam, diffuse white light, diffuse blue light, and ScS were 97.82% (97.80%-97.84%), 96.62% (96.58%-96.66%), 99.88% (99.87%-99.89%), and 97.59% (97.55%-97.62%), respectively. Slit beam and ScS were the 2 most frequently misclassified illumination techniques. CONCLUSIONS MobileNetV2 accurately labeled the illumination of SLPs using a large data set of corneal images. Effective, automatic classification of SLPs is key to integrating deep learning systems for clinical decision support into practice workflows.
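The per-class scores above use the one-vs-rest convention: each illumination class is scored as positives against all other classes pooled as negatives. A minimal sketch with hypothetical labels (the class names here are placeholders, not the study's label set):

```python
def one_vs_rest_f1(y_true, y_pred, cls):
    """Precision, recall, and F1 for a single class treated one-vs-rest."""
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
    fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    # F1 is the harmonic mean of precision and recall
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical true and predicted illumination labels
truth = ["slit", "slit", "diffuse", "blue", "scs", "slit", "blue"]
pred  = ["slit", "scs",  "diffuse", "blue", "scs", "slit", "blue"]
p, r, f = one_vs_rest_f1(truth, pred, "slit")
```

Repeating this for each class, then averaging, yields the overall (macro) F1 reported for multiclass classifiers of this kind.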
Affiliation(s)
- Ming-Chen Lu
- Department of Ophthalmology and Visual Sciences, School of Medicine, University of Michigan, Ann Arbor, MI, USA
- Callie Deng
- Department of Ophthalmology and Visual Sciences, School of Medicine, University of Michigan, Ann Arbor, MI, USA
- Miles F. Greenwald
- Department of Ophthalmology and Visual Sciences, School of Medicine, University of Michigan, Ann Arbor, MI, USA
- Sina Farsiu
- Department of Biomedical Engineering, Duke University, Durham, NC, USA
- Department of Ophthalmology, Duke University Medical Center, Durham, NC, USA
- Nambi Nallasamy
- Department of Ophthalmology and Visual Sciences, School of Medicine, University of Michigan, Ann Arbor, MI, USA
- Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, MI, USA
- Mercy Pawar
- Department of Ophthalmology and Visual Sciences, School of Medicine, University of Michigan, Ann Arbor, MI, USA
- Jenna N. Hart
- Department of Ophthalmology and Visual Sciences, School of Medicine, University of Michigan, Ann Arbor, MI, USA
- Harry Levine
- Bascom Palmer Eye Institute, Department of Ophthalmology, University of Miami Miller School of Medicine, Miami, FL, USA
- Guillermo Amescua
- Bascom Palmer Eye Institute, Department of Ophthalmology, University of Miami Miller School of Medicine, Miami, FL, USA
- Paula A. Sepulveda-Beltran
- Bascom Palmer Eye Institute, Department of Ophthalmology, University of Miami Miller School of Medicine, Miami, FL, USA
- Leslie M. Niziol
- Department of Ophthalmology and Visual Sciences, School of Medicine, University of Michigan, Ann Arbor, MI, USA
- Maria A. Woodward
- Department of Ophthalmology and Visual Sciences, School of Medicine, University of Michigan, Ann Arbor, MI, USA
- Institute for Healthcare Policy and Innovation, University of Michigan, Ann Arbor, MI, USA
6
Zhu J, Li FF, Li GX, Jiang SY, Cheng D, Bao FJ, Wu SQ, Dai Q, Ye YF. Enhancing Vault Prediction and ICL Sizing Through Advanced Machine Learning Models. J Refract Surg 2024;40:e126-e132. [PMID: 38466764] [DOI: 10.3928/1081597x-20240131-01]
Abstract
PURPOSE To use artificial intelligence (AI) technology to accurately predict vault and Implantable Collamer Lens (ICL) size. METHODS The methodology focused on enhancing predictive capability by fusing machine learning algorithms. Specifically, AdaBoost, Random Forest, Decision Tree, Support Vector Regression, LightGBM, and XGBoost were integrated into a majority-vote model. The performance of each model was evaluated using metrics such as accuracy, precision, F1-score, and area under the curve (AUC). RESULTS The majority-vote model exhibited the highest performance among the classification models, with an accuracy of 81.9% and an AUC of 0.807. Notably, LightGBM (accuracy = 0.788, AUC = 0.803) and XGBoost (accuracy = 0.790, AUC = 0.801) produced competitive results. For ICL size prediction, the Random Forest model achieved an accuracy of 85.3% (AUC = 0.973), while XGBoost (accuracy = 0.834, AUC = 0.961) and LightGBM (accuracy = 0.816, AUC = 0.961) remained competitive. CONCLUSIONS This study highlights the potential of diverse machine learning algorithms to enhance postoperative vault and ICL size prediction, ultimately contributing to the safety of ICL implantation procedures. The novel majority-vote model demonstrates that combining the advantages of multiple models yields superior accuracy, giving ophthalmologists a precise tool for vault prediction and facilitating informed ICL size selection in clinical practice. [J Refract Surg. 2024;40(3):e126-e132.].
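A majority-vote ensemble of the kind described fuses the per-sample class predictions of its constituent models by plurality. A minimal sketch with hypothetical model outputs (the base learners here are stand-ins, not the study's trained models):

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-model class predictions by plurality vote.
    `predictions` is a list of equal-length prediction lists, one per model.
    Ties are broken in favor of the earliest-listed model's vote."""
    combined = []
    for votes in zip(*predictions):  # one tuple of votes per sample
        counts = Counter(votes)
        top = max(counts.values())
        # first model whose vote reaches the top count wins ties
        combined.append(next(v for v in votes if counts[v] == top))
    return combined

# Hypothetical per-sample votes from three classifiers
ada = ["low", "high", "high", "low"]
rf  = ["low", "low",  "high", "high"]
xgb = ["high", "high", "high", "low"]
fused = majority_vote([ada, rf, xgb])  # ["low", "high", "high", "low"]
```

With an odd number of voters over two classes there are no ties; the tie-break rule only matters for even panels or more classes.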
7
Vilela MAP, Arrigo A, Parodi MB, da Silva Mengue C. Smartphone Eye Examination: Artificial Intelligence and Telemedicine. Telemed J E Health 2024;30:341-353. [PMID: 37585566] [DOI: 10.1089/tmj.2023.0041]
Abstract
Background: The current medical scenario is closely linked to recent progress in telecommunications, photodocumentation, and artificial intelligence (AI). Smartphone eye examination is a promising tool in this technological spectrum, of special interest for primary health care services. Obtaining fundus imaging with this technique has improved and democratized the teaching of fundoscopy and, in particular, contributes greatly to screening for diseases with high rates of blindness. Eye examination using smartphones is essentially a cheap and safe method, and thus supports public policies on population screening. This review provides an update on the use of this resource and its future prospects, especially as a screening and ophthalmic diagnostic tool. Methods: We surveyed major published advances in retinal and anterior segment analysis using AI. We performed an electronic search of the Medical Literature Analysis and Retrieval System Online (MEDLINE), EMBASE, and the Cochrane Library for published literature without a date limit. We included studies that compared the diagnostic accuracy of smartphone ophthalmoscopy for detecting prevalent diseases against an accurate or commonly employed reference standard. Results: There are few databases with complete metadata providing demographic data, and few with sufficient images involving current or new therapies. The available databases contain images captured with different systems and formats, and information is often excluded without essential detail on the reasons for exclusion, further distancing them from real-life conditions. Nevertheless, the safety, portability, low cost, and reproducibility of smartphone eye imaging are discussed in several studies, with encouraging results. Conclusions: The high level of agreement between conventional and smartphone-based methods provides a powerful arsenal for screening and early diagnosis of the main causes of blindness, such as cataract, glaucoma, diabetic retinopathy, and age-related macular degeneration. In addition to streamlining the medical workflow and benefiting public health policies, smartphone eye examination can make safe, quality assessment available to the population.
Affiliation(s)
- Alessandro Arrigo
- Department of Ophthalmology, Scientific Institute San Raffaele, Milan, Italy
- University Vita-Salute, Milan, Italy
- Maurizio Battaglia Parodi
- Department of Ophthalmology, Scientific Institute San Raffaele, Milan, Italy
- University Vita-Salute, Milan, Italy
- Carolina da Silva Mengue
- Post-Graduation Ophthalmological School, Ivo Corrêa-Meyer/Cardiology Institute, Porto Alegre, Brazil
8
Lapka M, Straňák Z. The Current State of Artificial Intelligence in Neuro-Ophthalmology. A Review. Ceska a Slovenska Oftalmologie 2024;80:179-186. [PMID: 38538291] [DOI: 10.31348/2023/33]
Abstract
This article summarizes recent advances in the development and use of complex systems employing artificial intelligence (AI) in neuro-ophthalmology. The aim is to present the principles of AI and the algorithms currently in use, or still under evaluation or validation, within neuro-ophthalmology. For the purposes of this text, a literature search was conducted using specific keywords in available scientific databases, cumulatively up to April 2023. The AI systems developed for neuro-ophthalmology mostly achieve high sensitivity, specificity, and accuracy. Individual AI systems and algorithms are selected, briefly described, and compared in the article. The results of the individual studies differ significantly depending on the chosen methodology, the stated goals, the size of the evaluated data set, and the evaluated parameters. AI has been shown to greatly speed up the evaluation of various diseases and promises to make diagnosis more efficient in the future, demonstrating high potential as a useful tool in clinical practice even with a significant increase in the number of patients.
9
Shimizu E, Tanji M, Nakayama S, Ishikawa T, Agata N, Yokoiwa R, Nishimura H, Khemlani RJ, Sato S, Hanyuda A, Sato Y. AI-based diagnosis of nuclear cataract from slit-lamp videos. Sci Rep 2023;13:22046. [PMID: 38086904] [PMCID: PMC10716159] [DOI: 10.1038/s41598-023-49563-7]
Abstract
In ophthalmology, the availability of many fundus photographs and optical coherence tomography images has spurred consideration of using artificial intelligence (AI) to diagnose retinal and optic nerve disorders. However, applying AI to the diagnosis of anterior segment eye conditions remains difficult owing to the limited availability of standardized images and analysis models. We addressed this limitation by augmenting the quantity of standardized optical images using a video-recordable slit-lamp device, and then investigated whether our proposed machine learning (ML) algorithm could accurately diagnose cataracts from videos recorded with this device. We collected 206,574 cataract frames from 1812 cataract eye videos. Ophthalmologists graded the nuclear cataracts (NUCs) using the World Health Organization cataract grading scale, and these gradings were used to train and validate the ML algorithm. A validation dataset was used to compare the NUC diagnosis and grading of the AI with those of ophthalmologists. The results for the individual cataract grades were: NUC 0, area under the curve (AUC) = 0.967; NUC 1, AUC = 0.928; NUC 2, AUC = 0.923; and NUC 3, AUC = 0.949. Our ML-based cataract diagnostic model achieved performance comparable to a conventional device, presenting a promising and accurate automated diagnostic AI tool.
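The per-grade AUCs reported here summarize ranking performance: the probability that a randomly chosen positive frame receives a higher score than a randomly chosen negative one. A minimal sketch of that rank-based (Mann-Whitney) computation on hypothetical scores (not the study's data):

```python
def roc_auc(labels, scores):
    """AUC via the Mann-Whitney formulation: the fraction of
    positive/negative pairs where the positive outscores the negative,
    counting ties as half a win."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical per-frame probabilities for a binary grade cutoff
y = [1, 1, 1, 0, 0, 0]
s = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]
auc = roc_auc(y, s)
```

This pairwise form is O(P×N) and fine for a sketch; production metrics libraries compute the same quantity from sorted ranks in O(n log n).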
Affiliation(s)
- Eisuke Shimizu
- OUI Inc., Tokyo, Japan
- Department of Ophthalmology, Keio University School of Medicine, Tokyo, Japan
- Yokohama Keiai Eye Clinic, Yokohama, Japan
- Makoto Tanji
- OUI Inc., Tokyo, Japan
- Department of Ophthalmology, Keio University School of Medicine, Tokyo, Japan
- Shintato Nakayama
- OUI Inc., Tokyo, Japan
- Department of Ophthalmology, Keio University School of Medicine, Tokyo, Japan
- Toshiki Ishikawa
- OUI Inc., Tokyo, Japan
- Department of Ophthalmology, Keio University School of Medicine, Tokyo, Japan
- Hiroki Nishimura
- OUI Inc., Tokyo, Japan
- Department of Ophthalmology, Keio University School of Medicine, Tokyo, Japan
- Yokohama Keiai Eye Clinic, Yokohama, Japan
- Shinri Sato
- Department of Ophthalmology, Keio University School of Medicine, Tokyo, Japan
- Yokohama Keiai Eye Clinic, Yokohama, Japan
- Akiko Hanyuda
- Department of Ophthalmology, Keio University School of Medicine, Tokyo, Japan
- Yasunori Sato
- Department of Preventive Medicine and Public Health, School of Medicine, Keio University, Tokyo, Japan
10
Anton N, Doroftei B, Curteanu S, Catãlin L, Ilie OD, Târcoveanu F, Bogdănici CM. Comprehensive Review on the Use of Artificial Intelligence in Ophthalmology and Future Research Directions. Diagnostics (Basel) 2022;13:100. [PMID: 36611392] [PMCID: PMC9818832] [DOI: 10.3390/diagnostics13010100]
Abstract
BACKGROUND With several applications in medicine, and in ophthalmology in particular, artificial intelligence (AI) tools have been used to detect visual function deficits, playing a key role in diagnosing eye diseases and predicting the evolution of these common and disabling conditions. AI tools such as artificial neural networks (ANNs) are progressively involved in the detection and customized control of ophthalmic diseases. This review analyzes studies on the efficiency of AI in medicine, and especially in ophthalmology. MATERIALS AND METHODS We conducted a comprehensive review collecting all accounts published between 2015 and 2022 that refer to these applications of AI in medicine, and especially in ophthalmology. Neural networks have a major role in establishing the need to initiate preliminary anti-glaucoma therapy to stop the advance of the disease. RESULTS Different surveys in the literature show the remarkable benefit of these AI tools in ophthalmology for evaluating the visual field, optic nerve, and retinal nerve fiber layer, ensuring higher precision in detecting the progression of glaucoma and retinal changes in diabetes. We identified 1762 publications on artificial intelligence in ophthalmology, comprising review and research articles (301 PubMed, 144 Scopus, 445 Web of Science, 872 ScienceDirect). Of these, after applying the inclusion and exclusion criteria, we analyzed 70 articles and review papers (diabetic retinopathy, N = 24; glaucoma, N = 24; DMLV, N = 15; other pathologies, N = 7). CONCLUSION In medicine, AI tools are used in surgery, radiology, gynecology, oncology, and other fields for making diagnoses, predicting disease evolution, and assessing prognosis in patients with oncological pathologies. In ophthalmology, AI can increase patients' access to screening/clinical diagnosis and decrease healthcare costs, mainly when there is a high risk of disease or communities face financial shortages. AI/DL (deep learning) algorithms using both OCT and FO images will change image analysis techniques and methodologies. Optimizing these (combined) technologies will accelerate progress in this area.
Affiliation(s)
- Nicoleta Anton
- Faculty of Medicine, University of Medicine and Pharmacy “Grigore T. Popa”, University Street, No 16, 700115 Iasi, Romania
- Bogdan Doroftei
- Faculty of Medicine, University of Medicine and Pharmacy “Grigore T. Popa”, University Street, No 16, 700115 Iasi, Romania
- Silvia Curteanu
- Department of Chemical Engineering, Cristofor Simionescu Faculty of Chemical Engineering and Environmental Protection, Gheorghe Asachi Technical University, Prof.dr.doc Dimitrie Mangeron Avenue, No 67, 700050 Iasi, Romania
- Lisa Catãlin
- Department of Chemical Engineering, Cristofor Simionescu Faculty of Chemical Engineering and Environmental Protection, Gheorghe Asachi Technical University, Prof.dr.doc Dimitrie Mangeron Avenue, No 67, 700050 Iasi, Romania
- Ovidiu-Dumitru Ilie
- Department of Biology, Faculty of Biology, “Alexandru Ioan Cuza” University, Carol I Avenue, No 20A, 700505 Iasi, Romania
- Filip Târcoveanu
- Faculty of Medicine, University of Medicine and Pharmacy “Grigore T. Popa”, University Street, No 16, 700115 Iasi, Romania
- Camelia Margareta Bogdănici
- Faculty of Medicine, University of Medicine and Pharmacy “Grigore T. Popa”, University Street, No 16, 700115 Iasi, Romania
11
Pietris J, Lam A, Bacchi S, Gupta AK, Kovoor JG, Chan WO. Health Economic Implications of Artificial Intelligence Implementation for Ophthalmology in Australia: A Systematic Review. Asia Pac J Ophthalmol (Phila) 2022;11:554-562. [PMID: 36218837] [DOI: 10.1097/apo.0000000000000565]
Abstract
PURPOSE The health care industry is an inherently resource-intensive sector. Emerging technologies such as artificial intelligence (AI) are at the forefront of advances in health care, but the health economic implications of this technology have not been clearly established and represent a substantial barrier to adoption both in Australia and globally. This review aims to determine the health economic impact of implementing AI in ophthalmology in Australia. METHODS A systematic search of the PubMed/MEDLINE, EMBASE, and CENTRAL databases was conducted to March 2022, followed by data collection and risk-of-bias analysis in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 guidelines (PROSPERO number CRD42022325511). Included were full-text primary research articles analyzing a population of patients who have, or are being evaluated for, an ophthalmological diagnosis, using a health economic assessment system to assess the cost-effectiveness of AI. RESULTS Seven articles were identified for inclusion. Economic viability was defined as a direct cost to the patient equal to or less than the cost incurred with human clinician assessment. Despite the lack of Australia-specific data, foreign analyses overwhelmingly showed that AI is at least as economically viable as traditional human screening programs while maintaining comparable clinical effectiveness; this evidence was largely in the setting of diabetic retinopathy screening. CONCLUSIONS Primary Australian research is needed to accurately analyze the health economic implications of implementing AI at scale. Further research is also required to analyze the economic feasibility of adopting AI in other areas of ophthalmology, such as glaucoma and cataract screening.
Affiliation(s)
- James Pietris
- University of Queensland, Herston, QLD, Australia
- Princess Alexandra Hospital, Woolloongabba, QLD, Australia
| | - Antoinette Lam
- University of Adelaide, Adelaide, SA, Australia
- Royal Adelaide Hospital, Adelaide, SA, Australia
| | - Stephen Bacchi
- University of Adelaide, Adelaide, SA, Australia
- Royal Adelaide Hospital, Adelaide, SA, Australia
| | - Aashray K Gupta
- University of Adelaide, Adelaide, SA, Australia
- Gold Coast University Hospital, Southport, QLD, Australia
| | - Joshua G Kovoor
- University of Adelaide, Adelaide, SA, Australia
- Royal Adelaide Hospital, Adelaide, SA, Australia
| | - Weng Onn Chan
- University of Adelaide, Adelaide, SA, Australia
- Royal Adelaide Hospital, Adelaide, SA, Australia
|
12
|
Zhang XQ, Hu Y, Xiao ZJ, Fang JS, Higashita R, Liu J. Machine Learning for Cataract Classification/Grading on Ophthalmic Imaging Modalities: A Survey. MACHINE INTELLIGENCE RESEARCH 2022; 19:184-208. [DOI: 10.1007/s11633-022-1329-0] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/16/2022] [Accepted: 03/28/2022] [Indexed: 01/04/2025]
Abstract
Cataracts are the leading cause of visual impairment and blindness globally. Over the years, researchers have achieved significant progress in developing state-of-the-art machine learning techniques for automatic cataract classification and grading, aiming to detect cataracts early and improve clinicians' diagnostic efficiency. This article provides a comprehensive survey of recent advances in machine learning techniques for cataract classification/grading based on ophthalmic images. We summarize the existing literature along two research directions: conventional machine learning methods and deep learning methods, and discuss the merits and limitations of existing work. In addition, we identify several challenges of machine learning-based automatic cataract classification/grading and present possible solutions to these challenges for future research.
|
13
|
Martínez-Plaza E, Ruiz-Fortes P, Soto-Negro R, Hernández-Rodríguez CJ, Molina-Martín A, Arias-Puente A, Piñero DP. Characterization of Dysfunctional Lens Index and Opacity Grade in a Healthy Population. Diagnostics (Basel) 2022; 12:diagnostics12051167. [PMID: 35626322 PMCID: PMC9140515 DOI: 10.3390/diagnostics12051167] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2022] [Revised: 05/05/2022] [Accepted: 05/06/2022] [Indexed: 11/28/2022] Open
Abstract
This study enrolled 61 volunteers (102 eyes) classified into subjects < 50 years (group 1) and subjects ≥ 50 years (group 2). Dysfunctional Lens Index (DLI); opacity grade; pupil diameter; and corneal, internal, and ocular higher-order aberrations (HOAs) were measured with the i-Trace system (Tracey Technologies). Mean DLI was 8.89 ± 2.00 and 6.71 ± 2.97 in groups 1 and 2, respectively, and was significantly higher in group 1 for all eyes and right eyes (both p < 0.001). DLI correlated significantly with age (Rho = −0.41, p < 0.001) and pupil diameter (Rho = 0.20, p = 0.043) for all eyes, and with numerous internal and ocular root-mean-square HOAs for right, left, and all eyes (Rho ≤ −0.25, p ≤ 0.001). Mean opacity grade was 1.21 ± 0.63 and 1.48 ± 1.15 in groups 1 and 2, respectively, with no significant differences between groups (p ≥ 0.29). Opacity grade correlated significantly with pupil diameter for right and all eyes (Rho ≤ 0.33, p ≤ 0.013), and with some ocular root-mean-square HOAs for right and all eyes (Rho ≥ 0.23, p ≤ 0.020). DLI correlates with age and might be used as a complement to other diagnostic measurements for assessing dysfunctional lens syndrome. Both DLI and opacity grade are related to pupil diameter and to internal and ocular HOAs, suggesting that the algorithms used by the device may be based, in part, on these parameters.
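The rank correlations reported in this abstract are Spearman coefficients. As a minimal pure-Python sketch of that calculation (the age/DLI values below are hypothetical, chosen only to illustrate the reported negative association, not the study's data):

```python
def ranks(values):
    # assign 1-based ranks; tied values share the mean of their positions
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean rank of the tied run, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    # Spearman's rho = Pearson correlation of the two rank vectors
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

age = [35, 42, 48, 55, 63, 70]           # hypothetical ages
dli = [9.8, 9.1, 8.5, 7.2, 6.0, 5.1]     # hypothetical DLI values
print(round(spearman_rho(age, dli), 2))  # → -1.0 (strictly decreasing example)
```

In practice `scipy.stats.spearmanr` does the same computation (plus a p-value); the hand-rolled version just makes the rank-then-correlate logic explicit.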
Affiliation(s)
- Elena Martínez-Plaza
- Group of Optics and Visual Perception, Department of Optics, Pharmacology and Anatomy, University of Alicante, 03690 Alicante, Spain; (E.M.-P.); (C.J.H.-R.); (A.M.-M.)
- University of Valladolid, 47002 Valladolid, Spain
| | - Pedro Ruiz-Fortes
- Department of Ophthalmology, Vithas Medimar International Hospital, 03016 Alicante, Spain; (P.R.-F.); (R.S.-N.); (A.A.-P.)
| | - Roberto Soto-Negro
- Department of Ophthalmology, Vithas Medimar International Hospital, 03016 Alicante, Spain; (P.R.-F.); (R.S.-N.); (A.A.-P.)
| | - Carlos J. Hernández-Rodríguez
- Group of Optics and Visual Perception, Department of Optics, Pharmacology and Anatomy, University of Alicante, 03690 Alicante, Spain; (E.M.-P.); (C.J.H.-R.); (A.M.-M.)
- Department of Ophthalmology, Vithas Medimar International Hospital, 03016 Alicante, Spain; (P.R.-F.); (R.S.-N.); (A.A.-P.)
| | - Ainhoa Molina-Martín
- Group of Optics and Visual Perception, Department of Optics, Pharmacology and Anatomy, University of Alicante, 03690 Alicante, Spain; (E.M.-P.); (C.J.H.-R.); (A.M.-M.)
| | - Alfonso Arias-Puente
- Department of Ophthalmology, Vithas Medimar International Hospital, 03016 Alicante, Spain; (P.R.-F.); (R.S.-N.); (A.A.-P.)
| | - David P. Piñero
- Group of Optics and Visual Perception, Department of Optics, Pharmacology and Anatomy, University of Alicante, 03690 Alicante, Spain; (E.M.-P.); (C.J.H.-R.); (A.M.-M.)
- Department of Ophthalmology, Vithas Medimar International Hospital, 03016 Alicante, Spain; (P.R.-F.); (R.S.-N.); (A.A.-P.)
- Correspondence: ; Tel.: +34-965-903400
|
14
|
Lam TCH, Lok JKH, Lin TPH, Yuen HKL, Wong MOM. Survey-based Evaluation of the Use of Picture Archiving and Communication Systems in an Eye Hospital-Ophthalmologists' Perspective. Asia Pac J Ophthalmol (Phila) 2022; 11:258-266. [PMID: 34923520 DOI: 10.1097/apo.0000000000000467] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022] Open
Abstract
PURPOSE A picture archiving and communication system (PACS) is a medical imaging system for the sharing, storage, retrieval, and access of medical images. Our study aimed to identify ophthalmologists' views on PACS, comparing 3 platforms, namely electronic patient record (ePR), HEYEX (Heidelberg Engineering, Switzerland), and FORUM (Zeiss, US), following their implementation in an eye hospital for common ophthalmic investigations [visual field, optical coherence tomography (OCT) of the retinal nerve fiber layer and macula, and fluorescein/indocyanine green angiography (FA/ICG)]. METHODS An online survey was distributed among ophthalmologists in a single center. The primary outcome was the comparison of PACS with the paper-based system. Secondary outcomes included patterns of use and comparison of the different PACS platforms. RESULTS The survey response rate was 28/37 (75.7%). Images were most commonly accessed through ePR (median: 80% of the time, interquartile range: 50 to 90%). All systems scored highly on information display items (median scores ≥7.5 out of 10) and on reducing patient identification errors in investigation filing and retrieval during consultation compared to paper (scores ≥7.0). However, ePR was inferior to paper in "facilitating comparison with previous results" for all investigation types (scores 3.0 to 4.5). ePR scored significantly higher than HEYEX (P < 0.001) and FORUM (P < 0.022) on all system quality items except login response time (P = 0.081). HEYEX scored significantly higher among vitreoretinal-uveitis (VRU) subspecialty members on information quality items for OCT macula and FA/ICG [VRU: 10.0 (8.0 to 10.0), non-VRU: 8.0 (6.75 to 9.25), P = 0.042]. CONCLUSIONS Overall feedback on PACS among ophthalmologists was positive, with limitations in efficiency of information use, for example, comparison with previous results. Subspecialty played an important role in the evaluation of PACS.
Affiliation(s)
- Thomas Chi Ho Lam
- Hong Kong Eye Hospital, Hong Kong SAR, China
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
| | - Jerry Ka Hing Lok
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
| | - Timothy Pak Ho Lin
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
| | - Hunter Kwok Lai Yuen
- Hong Kong Eye Hospital, Hong Kong SAR, China
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
| | - Mandy Oi Man Wong
- Hong Kong Eye Hospital, Hong Kong SAR, China
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
|
15
|
Al-Khaled T, Acaba-Berrocal L, Cole E, Ting DSW, Chiang MF, Chan RVP. Digital Education in Ophthalmology. Asia Pac J Ophthalmol (Phila) 2022; 11:267-272. [PMID: 34966034 PMCID: PMC9240107 DOI: 10.1097/apo.0000000000000484] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022] Open
Abstract
ABSTRACT Accessibility of the Internet and computer systems has prompted a gravitation toward digital learning in medicine, including ophthalmology. Using the PubMed database and Google search engine, we reviewed current initiatives in ophthalmology that serve as alternatives to traditional in-person learning, with the purpose of enhancing clinical and surgical training. These include the development of tele-education modules, construction of libraries of clinical and surgical videos, delivery of didactics via video communication, and the integration of simulators and intelligent tutoring systems into clinical and surgical training programs. In this age of digital communication, teleophthalmology programs, virtual ophthalmological society meetings, and online examinations have become necessary for conducting clinical work and educational training in ophthalmology, especially in light of recent global events that have prevented large gatherings, as well as the rural location of various populations. Looking forward, web-based modules and resources, artificial intelligence-based systems, and telemedicine programs will augment current curricula for ophthalmology trainees.
Affiliation(s)
- Tala Al-Khaled
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, IL, US
| | - Luis Acaba-Berrocal
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, IL, US
| | - Emily Cole
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, IL, US
| | - Daniel S W Ting
- Singapore Eye Research institute, Singapore National Eye centre, Singapore
- Duke-NUS Medical School, National University Singapore, Singapore
| | - Michael F Chiang
- National Eye Institute, National Institutes of Health, Bethesda, MD, US
| | - R V Paul Chan
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, IL, US
|
16
|
Huemer J, Kronschläger M, Ruiss M, Sim D, Keane PA, Findl O, Wagner SK. Diagnostic accuracy of code-free deep learning for detection and evaluation of posterior capsule opacification. BMJ Open Ophthalmol 2022; 7:e000992. [PMID: 36161827 PMCID: PMC9174773 DOI: 10.1136/bmjophth-2022-000992] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2022] [Accepted: 05/09/2022] [Indexed: 11/15/2022] Open
Abstract
OBJECTIVE To train and validate a code-free deep learning system (CFDLS) for classifying high-resolution digital retroillumination images of posterior capsule opacification (PCO) and for discriminating between clinically significant and non-significant PCO. METHODS AND ANALYSIS For this retrospective registry study, three expert observers graded two independent datasets of 279 images three separate times, from no PCO to severe PCO, providing binary labels for clinical significance. The CFDLS was trained and internally validated using a training dataset of 179 images and externally validated with 100 images. Model development was performed with Google Cloud AutoML Vision. Intraobserver and interobserver variability were assessed using Fleiss kappa (κ) coefficients, and model performance through sensitivity, specificity, and area under the curve (AUC). RESULTS Intraobserver variability κ values for observers 1, 2, and 3 were 0.90 (95% CI 0.86 to 0.95), 0.94 (95% CI 0.90 to 0.97), and 0.88 (95% CI 0.82 to 0.93). Interobserver agreement was high, ranging from 0.85 (95% CI 0.79 to 0.90) between observers 1 and 2 to 0.90 (95% CI 0.85 to 0.94) between observers 1 and 3. On internal validation, the AUC of the CFDLS was 0.99 (95% CI 0.92 to 1.0); sensitivity was 0.89 at a specificity of 1. On external validation, the AUC was 0.97 (95% CI 0.93 to 0.99); sensitivity was 0.84 and specificity was 0.92. CONCLUSION This CFDLS provides highly accurate discrimination between clinically significant and non-significant PCO, equivalent to human expert graders. Its clinical value as a potential decision support tool in different models of care warrants further research.
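The agreement statistic used above is Fleiss' κ, which generalizes Cohen's κ to more than two raters. A minimal sketch of the standard formula follows; the grading counts are illustrative placeholders, not the study's data:

```python
def fleiss_kappa(table):
    """table[i][j]: number of raters assigning subject i to category j.

    Every row must sum to the same number of raters n.
    """
    n = sum(table[0])  # raters per subject
    N = len(table)     # number of subjects
    # mean observed agreement across subjects
    P_bar = sum((sum(c * c for c in row) - n) / (n * (n - 1)) for row in table) / N
    # chance agreement from the marginal category proportions
    p_j = [sum(row[j] for row in table) / (N * n) for j in range(len(table[0]))]
    P_e = sum(p * p for p in p_j)
    return (P_bar - P_e) / (1 - P_e)

# three graders, binary label (non-significant vs. significant PCO), 4 example images
ratings = [[3, 0], [0, 3], [1, 2], [3, 0]]
print(round(fleiss_kappa(ratings), 2))  # → 0.66
```

`statsmodels.stats.inter_rater.fleiss_kappa` implements the same statistic for real datasets; the point here is only to show how the κ values in the abstract are derived from rating count tables.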
Affiliation(s)
- Josef Huemer
- Department of Medical Retina, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
| | - Martin Kronschläger
- VIROS-Vienna Institute for Research in Ocular Surgery, a Karl Landsteiner Institute, Hanusch Hospital, Vienna, Austria
| | - Manuel Ruiss
- VIROS-Vienna Institute for Research in Ocular Surgery, a Karl Landsteiner Institute, Hanusch Hospital, Vienna, Austria
| | - Dawn Sim
- Department of Medical Retina, Moorfields Eye Hospital NHS Foundation Trust, London, UK
| | - Pearse A Keane
- Department of Medical Retina, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- Institute of Ophthalmology, UCL, London, UK
| | - Oliver Findl
- VIROS-Vienna Institute for Research in Ocular Surgery, a Karl Landsteiner Institute, Hanusch Hospital, Vienna, Austria
| | - Siegfried K Wagner
- Department of Medical Retina, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- Institute of Ophthalmology, UCL, London, UK
|
17
|
Affiliation(s)
- Santosh G Honavar
- Editor, Indian Journal of Ophthalmology, Centre for Sight, Road No 2, Banjara Hills, Hyderabad, Telangana, India
|
18
|
Tham YC, Goh JHL, Anees A, Lei X, Rim TH, Chee ML, Wang YX, Jonas JB, Thakur S, Teo ZL, Cheung N, Hamzah H, Tan GSW, Husain R, Sabanayagam C, Wang JJ, Chen Q, Lu Z, Keenan TD, Chew EY, Tan AG, Mitchell P, Goh RSM, Xu X, Liu Y, Wong TY, Cheng CY. Detecting visually significant cataract using retinal photograph-based deep learning. NATURE AGING 2022; 2:264-271. [PMID: 37118370 PMCID: PMC10154193 DOI: 10.1038/s43587-022-00171-6] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/12/2021] [Accepted: 01/10/2022] [Indexed: 02/06/2023]
Abstract
Age-related cataracts are the leading cause of visual impairment among older adults. Many significant cases remain undiagnosed or neglected in communities, owing to the limited availability or accessibility of cataract screening. In the present study, we report the development and validation of a retinal photograph-based deep-learning algorithm for automated detection of visually significant cataracts, using more than 25,000 images from population-based studies. In the internal test set, the area under the receiver operating characteristic curve (AUROC) was 96.6%. External testing performed across three studies showed AUROCs of 91.6-96.5%. In a separate test set of 186 eyes, we further compared the algorithm's performance with evaluations by 4 ophthalmologists. The algorithm performed comparably, if not slightly better (sensitivity of 93.3% versus 51.7-96.6% by ophthalmologists, and specificity of 99.0% versus 90.7-97.9% by ophthalmologists). Our findings show the potential of a retinal photograph-based screening tool for visually significant cataracts among older adults, enabling more appropriate referrals to tertiary eye centers.
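The AUROC and operating-point metrics reported in this abstract need no ML framework to compute. A small sketch using the Mann-Whitney formulation of AUC (the labels and scores below are made up for illustration):

```python
def auroc(labels, scores):
    # probability that a random positive outscores a random negative (ties count half)
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0 for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

def sens_spec(labels, scores, threshold):
    # sensitivity and specificity at a fixed decision threshold
    tp = sum(1 for l, s in zip(labels, scores) if l == 1 and s >= threshold)
    tn = sum(1 for l, s in zip(labels, scores) if l == 0 and s < threshold)
    return tp / labels.count(1), tn / labels.count(0)

labels = [0, 0, 1, 1]                  # 1 = visually significant cataract
scores = [0.10, 0.40, 0.35, 0.80]      # hypothetical model outputs
print(auroc(labels, scores))           # → 0.75
print(sens_spec(labels, scores, 0.5))  # → (0.5, 1.0)
```

`sklearn.metrics.roc_auc_score` gives the same AUC on real data; the sensitivity/specificity pair shifts as the threshold moves along the ROC curve, which is how papers like this one report an operating point alongside the AUROC.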
Affiliation(s)
- Yih-Chung Tham
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Duke-NUS Medical School, Singapore, Singapore
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
| | - Jocelyn Hui Lin Goh
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
| | - Ayesha Anees
- Institute of High Performance Computing, A*STAR, Singapore, Singapore
| | - Xiaofeng Lei
- Institute of High Performance Computing, A*STAR, Singapore, Singapore
| | - Tyler Hyungtaek Rim
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Duke-NUS Medical School, Singapore, Singapore
| | - Miao-Li Chee
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
| | - Ya Xing Wang
- Beijing Institute of Ophthalmology, Beijing Ophthalmology and Visual Science Key Lab, Beijing, China
| | - Jost B Jonas
- Department of Ophthalmology, Medical Faculty Mannheim of the Ruprecht-Karis-University Heidelberg, Mannheim, Germany
| | - Sahil Thakur
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
| | - Zhen Ling Teo
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
| | - Ning Cheung
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Duke-NUS Medical School, Singapore, Singapore
| | - Haslina Hamzah
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
| | - Gavin S W Tan
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Duke-NUS Medical School, Singapore, Singapore
| | - Rahat Husain
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Duke-NUS Medical School, Singapore, Singapore
| | - Charumathi Sabanayagam
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Duke-NUS Medical School, Singapore, Singapore
| | | | - Qingyu Chen
- National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
| | - Zhiyong Lu
- National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
| | - Tiarnan D Keenan
- National Eye Institute, National Institutes of Health, Bethesda, MD, USA
| | - Emily Y Chew
- National Eye Institute, National Institutes of Health, Bethesda, MD, USA
| | - Ava Grace Tan
- Centre for Vision Research, Department of Ophthalmology, The Westmead Institute for Medical Research, University of Sydney, Westmead Hospital, Westmead, New South Wales, Australia
- National Health Medical Research Council Clinical Trials Centre, University of Sydney, Sydney, New South Wales, Australia
| | - Paul Mitchell
- Centre for Vision Research, Department of Ophthalmology, The Westmead Institute for Medical Research, University of Sydney, Westmead Hospital, Westmead, New South Wales, Australia
| | - Rick S M Goh
- Institute of High Performance Computing, A*STAR, Singapore, Singapore
| | - Xinxing Xu
- Institute of High Performance Computing, A*STAR, Singapore, Singapore
| | - Yong Liu
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Duke-NUS Medical School, Singapore, Singapore
- Institute of High Performance Computing, A*STAR, Singapore, Singapore
| | - Tien Yin Wong
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Duke-NUS Medical School, Singapore, Singapore
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
| | - Ching-Yu Cheng
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore.
- Duke-NUS Medical School, Singapore, Singapore.
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore.
|
19
|
Keenan TDL, Chen Q, Agrón E, Tham YC, Lin Goh JH, Lei X, Ng YP, Liu Y, Xu X, Cheng CY, Bikbov MM, Jonas JB, Bhandari S, Broadhead GK, Colyer MH, Corsini J, Cousineau-Krieger C, Gensheimer W, Grasic D, Lamba T, Magone MT, Maiberger M, Oshinsky A, Purt B, Shin SY, Thavikulwat AT, Lu Z, Chew EY. Deep Learning Automated Diagnosis and Quantitative Classification of Cataract Type and Severity. Ophthalmology 2022; 129:571-584. [PMID: 34990643 PMCID: PMC9038670 DOI: 10.1016/j.ophtha.2021.12.017] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2021] [Revised: 12/10/2021] [Accepted: 12/27/2021] [Indexed: 12/14/2022] Open
Abstract
PURPOSE To develop and evaluate deep learning models to perform automated diagnosis and quantitative classification of age-related cataract, including all three anatomical types, from anterior segment photographs. DESIGN Application of deep learning models to Age-Related Eye Disease Study (AREDS) dataset. PARTICIPANTS 18,999 photographs (6,333 triplets) from longitudinal follow-up of 1,137 eyes (576 AREDS participants). METHODS Deep learning models were trained to detect and quantify nuclear cataract (NS; scale 0.9-7.1) from 45-degree slit-lamp photographs and cortical (CLO; scale 0-100%) and posterior subcapsular (PSC; scale 0-100%) cataract from retroillumination photographs. Model performance was compared with that of 14 ophthalmologists and 24 medical students. The ground truth labels were from reading center grading. MAIN OUTCOME MEASURES Mean squared error (MSE). RESULTS On the full test set, mean MSE values for the deep learning models were: 0.23 (SD 0.01) for NS, 13.1 (SD 1.6) for CLO, and 16.6 (SD 2.4) for PSC. On a subset of the test set (substantially enriched for positive cases of CLO and PSC), for NS, mean MSE for the models was 0.23 (SD 0.02), compared to 0.98 (SD 0.23; p=0.000001) for the ophthalmologists, and 1.24 (SD 0.33; p=0.000005) for the medical students. For CLO, mean MSE values were 53.5 (SD 14.8), compared to 134.9 (SD 89.9; p=0.003) and 422.0 (SD 944.4; p=0.0007), respectively. For PSC, mean MSE values were 171.9 (SD 38.9), compared to 176.8 (SD 98.0; p=0.67) and 395.2 (SD 632.5; p=0.18), respectively. In external validation on the Singapore Malay Eye Study (sampled to reflect the distribution of cataract severity in AREDS), MSE was 1.27 for NS and 25.5 for PSC. CONCLUSIONS A deep learning framework was able to perform automated and quantitative classification of cataract severity for all three types of age-related cataract. For the two most common types (NS and CLO), the accuracy was significantly superior to that of ophthalmologists; for the least common type (PSC), the accuracy was similar. The framework may have wide potential applications in both clinical and research domains. In the future, such approaches may increase the accessibility of cataract assessment globally. The code and models are publicly available at https://XXX.
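The MSE comparison against reading-center grades used above reduces to a one-line metric applied to each grader. A sketch with illustrative nuclear-sclerosis grades (not AREDS data):

```python
def mse(truth, pred):
    # mean squared error between ground-truth and predicted continuous grades
    return sum((t - p) ** 2 for t, p in zip(truth, pred)) / len(truth)

# reading-center ground truth vs. hypothetical model and human grades (NS scale 0.9-7.1)
truth  = [2.0, 3.5, 5.0, 6.2]
model  = [2.2, 3.4, 5.3, 6.0]
grader = [2.5, 4.5, 4.0, 5.0]
print(mse(truth, model) < mse(truth, grader))  # → True: model tracks ground truth more closely
```

Because MSE squares each deviation, occasional large grading errors are penalized heavily, which is why the human-grader MSE values in the abstract carry such large standard deviations.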
Affiliation(s)
- Tiarnan D L Keenan
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD, USA.
| | - Qingyu Chen
- National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA.
| | - Elvira Agrón
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
| | - Yih-Chung Tham
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Duke-NUS Medical School, Singapore
| | | | - Xiaofeng Lei
- Institute of High Performance Computing, A*STAR, Singapore
| | - Yi Pin Ng
- Institute of High Performance Computing, A*STAR, Singapore
| | - Yong Liu
- Duke-NUS Medical School, Singapore; Institute of High Performance Computing, A*STAR, Singapore
| | - Xinxing Xu
- Duke-NUS Medical School, Singapore; Institute of High Performance Computing, A*STAR, Singapore
| | - Ching-Yu Cheng
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Duke-NUS Medical School, Singapore; Institute of High Performance Computing, A*STAR, Singapore; Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
| | | | - Jost B Jonas
- Department of Ophthalmology, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany; Institute of Molecular and Clinical Ophthalmology Basel, Switzerland; Privatpraxis Prof Jonas und Dr Panda-Jonas, Heidelberg, Germany
| | - Sanjeeb Bhandari
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
| | - Geoffrey K Broadhead
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
| | - Marcus H Colyer
- Department of Ophthalmology, Madigan Army Medical Center, Tacoma, WA, USA; Department of Surgery, Uniformed Services University of the Health Sciences, Bethesda, MD, USA
| | - Jonathan Corsini
- Warfighter Eye Center, Malcolm Grow Medical Clinics and Surgery Center, Joint Base Andrews, MD, USA
| | - Chantal Cousineau-Krieger
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
| | - William Gensheimer
- White River Junction Veterans Affairs Medical Center, White River Junction, VT, USA; Geisel School of Medicine, Dartmouth, NH, USA
| | - David Grasic
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
| | - Tania Lamba
- Washington DC Veterans Affairs Medical Center, Washington DC, USA
| | - M Teresa Magone
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
| | | | - Arnold Oshinsky
- Washington DC Veterans Affairs Medical Center, Washington DC, USA
| | - Boonkit Purt
- Department of Surgery, Uniformed Services University of the Health Sciences, Bethesda, MD, USA; Department of Ophthalmology, Walter Reed National Military Medical Center, Bethesda, MD, USA
| | - Soo Y Shin
- Washington DC Veterans Affairs Medical Center, Washington DC, USA
| | - Alisa T Thavikulwat
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
| | - Zhiyong Lu
- National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA.
| | - Emily Y Chew
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD, USA.
|
20
|
Zhang YY, Zhao H, Lin JY, Wu SN, Liu XW, Zhang HD, Shao Y, Yang WF. Artificial Intelligence to Detect Meibomian Gland Dysfunction From in-vivo Laser Confocal Microscopy. Front Med (Lausanne) 2021; 8:774344. [PMID: 34901091 PMCID: PMC8655877 DOI: 10.3389/fmed.2021.774344] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/11/2021] [Accepted: 11/04/2021] [Indexed: 02/05/2023] Open
Abstract
Background: In recent years, deep learning has been widely used for a variety of ophthalmic diseases. As a common ophthalmic disease, meibomian gland dysfunction (MGD) has a unique phenotype on in-vivo laser confocal microscope imaging (VLCMI). The purpose of our study was to investigate a deep learning algorithm to differentiate and classify obstructive MGD (OMGD), atrophic MGD (AMGD), and normal groups. Methods: A multi-layer deep convolutional neural network (CNN) was trained using VLCMI from OMGD, AMGD, and healthy subjects as verified by medical experts. Its automatic differential diagnosis of OMGD, AMGD, and healthy subjects was tested by comparing its image-based identification of each group with the medical expert diagnosis. The CNN was trained and validated with 4,985 and 1,663 VLCMI images, respectively. Using established enhancement techniques, 1,663 VLCMI images not used in training were tested. Results: We included 2,766 healthy control VLCMIs, 2,744 from OMGD, and 2,801 from AMGD. Of the three models, the differential diagnostic accuracy of the DenseNet169 CNN was highest, at over 97%. The sensitivity and specificity of the DenseNet169 model for OMGD were 88.8% and 95.4%, respectively, and for AMGD 89.4% and 98.4%, respectively. Conclusion: This study described a deep learning algorithm to automatically detect and classify VLCMI images of MGD. By optimizing the algorithm, the classifier model displayed excellent accuracy. With further development, this model may become an effective tool for the differential diagnosis of MGD.
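Per-class sensitivity and specificity for a three-way classifier (OMGD / AMGD / normal), as reported above, follow from a one-vs-rest reading of the confusion matrix. A sketch with an illustrative matrix (the counts are invented, not the study's results):

```python
def per_class_metrics(cm):
    """cm[i][j]: count of samples with true class i predicted as class j.

    Returns a (sensitivity, specificity) pair per class, one-vs-rest.
    """
    k = len(cm)
    total = sum(sum(row) for row in cm)
    metrics = []
    for c in range(k):
        tp = cm[c][c]
        fn = sum(cm[c]) - tp                         # true c, predicted elsewhere
        fp = sum(cm[r][c] for r in range(k)) - tp    # other classes predicted as c
        tn = total - tp - fn - fp
        metrics.append((tp / (tp + fn), tn / (tn + fp)))
    return metrics

# rows/columns ordered OMGD, AMGD, normal (illustrative counts)
cm = [[8, 1, 1],
      [0, 9, 1],
      [1, 0, 9]]
for cls, (sens, spec) in zip(("OMGD", "AMGD", "normal"), per_class_metrics(cm)):
    print(f"{cls}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```

This one-vs-rest treatment is the standard way a multi-class model like the DenseNet169 classifier described here yields a separate sensitivity/specificity pair for each diagnostic category.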
Affiliation(s)
- Ye-Ye Zhang
- Department of Electronic Engineering, School of Science, Hainan University, Haikou, China
- Department of Electronic Engineering, College of Engineering, Shantou University, Shantou, China
| | - Hui Zhao
- Department of Ophthalmology, Shanghai First People's Hospital, Shanghai Jiao Tong University, National Clinical Research Center for Eye Diseases, Shanghai, China
| | - Jin-Yan Lin
- Research Center for Advanced Optics and Photoelectronics, Department of Physics, College of Science, Shantou University, Shantou, China
| | - Shi-Nan Wu
- Jiangxi Centre of National Ophthalmology Clinical Research Center, Department of Ophthalmology, The First Affiliated Hospital of Nanchang University, Nanchang, China
| | - Xi-Wang Liu
- Research Center for Advanced Optics and Photoelectronics, Department of Physics, College of Science, Shantou University, Shantou, China
- Department of Mathematics, College of Science, Shantou University, Shantou, China
| | - Hong-Dan Zhang
- Research Center for Advanced Optics and Photoelectronics, Department of Physics, College of Science, Shantou University, Shantou, China
- Department of Mathematics, College of Science, Shantou University, Shantou, China
| | - Yi Shao
- Jiangxi Centre of National Ophthalmology Clinical Research Center, Department of Ophthalmology, The First Affiliated Hospital of Nanchang University, Nanchang, China
| | - Wei-Feng Yang
- Department of Electronic Engineering, College of Engineering, Shantou University, Shantou, China
- Research Center for Advanced Optics and Photoelectronics, Department of Physics, College of Science, Shantou University, Shantou, China
- Department of Mathematics, College of Science, Shantou University, Shantou, China
21
Chen X, Xu J, Chen X, Yao K. Cataract: Advances in surgery and whether surgery remains the only treatment in future. ADVANCES IN OPHTHALMOLOGY PRACTICE AND RESEARCH 2021; 1:100008. [PMID: 37846393 PMCID: PMC10577864 DOI: 10.1016/j.aopr.2021.100008] [Citation(s) in RCA: 47] [Impact Index Per Article: 11.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/01/2021] [Revised: 08/27/2021] [Accepted: 10/12/2021] [Indexed: 10/18/2023]
Abstract
Background Cataract is the world's leading cause of blindness. The prevalence of cataract among people aged 40 years and older is approximately 11.8%-18.8%. Currently, surgery is the only way to treat cataracts. Main Text From early intracapsular cataract extraction, to extracapsular cataract extraction, to current phacoemulsification cataract surgery, the incision has shrunk from 12 mm to 3 mm, and sometimes to 1.8 mm or less, and the revolution in cataract surgery is ongoing. Cataract surgery has transformed from vision-restoring to refractive surgery, ushering in the era of refractive cataract surgery, and premium intraocular lenses (IOLs) such as toric IOLs, multifocal IOLs, and extended depth-of-focus IOLs are increasingly used to meet the individual needs of patients. With its advantages of better visual acuity and fewer complications, phacoemulsification is currently the mainstream cataract surgery technique worldwide. However, patient expectations for the safety and accuracy of the operation continue to rise, and femtosecond laser-assisted cataract surgery (FLACS) has entered the public's field of vision. FLACS combines new laser technology and artificial intelligence to replace fine manual clear corneal incision, capsulorhexis, and nuclear pre-fragmentation, providing a new alternative for patients and ophthalmologists. As FLACS matures, it is increasingly applied in complex cases; however, some consider it not cost-effective. Although more than 26 million cataract surgeries are performed each year, surgical volume still falls short of the cataract burden, especially in developing countries. Although cataract surgery is a nearly ideal procedure and its complications are manageable, both patients and doctors dream of curing cataracts with drugs. Is surgery really the only way to treat cataracts in the future?
Animal experiments have shown that lanosterol therapy in rabbits and dogs can alleviate cataract severity and partially restore lens transparency. Although there is still much to learn about cataract reversal, this groundbreaking work provides a new strategy for the prevention and treatment of cataracts. Conclusions Although cataract surgery is nearly ideal, it remains insufficient, and we expect the prospects for cataract drugs to be bright.
Collapse
Affiliation(s)
- Xinyi Chen: Eye Center, Second Affiliated Hospital of School of Medicine, Zhejiang University, Hangzhou, Zhejiang, 310009, China
- Jingjie Xu: Eye Center, Second Affiliated Hospital of School of Medicine, Zhejiang University, Hangzhou, Zhejiang, 310009, China
- Xiangjun Chen: Eye Center, Second Affiliated Hospital of School of Medicine, Zhejiang University, Hangzhou, Zhejiang, 310009, China
- Ke Yao: Eye Center, Second Affiliated Hospital of School of Medicine, Zhejiang University, Hangzhou, Zhejiang, 310009, China
22
A deep learning approach for successful big-bubble formation prediction in deep anterior lamellar keratoplasty. Sci Rep 2021; 11:18559. [PMID: 34535722 PMCID: PMC8448733 DOI: 10.1038/s41598-021-98157-8] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2021] [Accepted: 08/30/2021] [Indexed: 12/02/2022] Open
Abstract
The efficacy of deep learning in predicting successful big-bubble (SBB) formation during deep anterior lamellar keratoplasty (DALK) was evaluated. Medical records of patients undergoing DALK at the University of Cologne, Germany between March 2013 and July 2019 were retrospectively analyzed. Patients were divided into two groups: (1) SBB or (2) failed big-bubble (FBB). Preoperative anterior segment optical coherence tomography images and corneal biometric values (corneal thickness, corneal curvature, and densitometry) were evaluated. A deep neural network model, Visual Geometry Group-16 (VGG-16), was selected; it was evaluated on the validation data, used to create heat-map images, and used to calculate the area under the curve (AUC). This pilot study included 46 patients overall (11 women, 35 men). SBB was more common in keratoconus eyes (KC eyes) than in corneal opacifications of other etiologies (non-KC eyes) (p = 0.006). The AUC was 0.746 (95% confidence interval [CI] 0.603–0.889). The determination success rate was 78.3% (18/23 eyes; 95% CI 56.3–92.5%) for SBB and 69.6% (16/23 eyes; 95% CI 47.1–86.8%) for FBB. This automated system demonstrates the potential of SBB prediction in DALK. Although KC eyes had a higher SBB rate, no other specific predictors were identified in the corneal biometric data.
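The AUC of 0.746 reported here summarizes how well the network's scores separate SBB from FBB eyes. For reference, a binary classifier's AUC equals the probability that a randomly chosen positive case is scored above a randomly chosen negative case (the Mann-Whitney formulation); a minimal pure-Python sketch, with made-up scores rather than the study's data:

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic.

    Equals the probability that a random positive case scores higher
    than a random negative case (ties count as half).
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical model outputs: label 1 = SBB, 0 = FBB.
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.55, 0.3, 0.2]
labels = [1,   1,   0,   1,   1,   0,    0,   0]
print(auc(scores, labels))  # 0.8125 for this toy data
```

This rank-based definition is equivalent to integrating the ROC curve, and is how library implementations (e.g. scikit-learn's `roc_auc_score`) behave on finite samples.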
23
Yuen V, Ran A, Shi J, Sham K, Yang D, Chan VTT, Chan R, Yam JC, Tham CC, McKay GJ, Williams MA, Schmetterer L, Cheng CY, Mok V, Chen CL, Wong TY, Cheung CY. Deep-Learning-Based Pre-Diagnosis Assessment Module for Retinal Photographs: A Multicenter Study. Transl Vis Sci Technol 2021; 10:16. [PMID: 34524409 PMCID: PMC8444486 DOI: 10.1167/tvst.10.11.16] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2021] [Accepted: 08/12/2021] [Indexed: 12/23/2022] Open
Abstract
Purpose Artificial intelligence (AI) deep learning (DL) has shown significant potential for eye disease detection and screening on retinal photographs in different clinical settings, particularly in primary care. However, automated pre-diagnosis image assessment is essential to streamline the application of developed AI-DL algorithms. In this study, we developed and validated a DL-based pre-diagnosis assessment module for retinal photographs, targeting image quality (gradable vs. ungradable), field of view (macula-centered vs. optic-disc-centered), and laterality of the eye (right vs. left). Methods A total of 21,348 retinal photographs from 1,914 subjects from various clinical settings in Hong Kong, Singapore, and the United Kingdom were used for training, internal validation, and external testing of the DL module, which was built on two DL architectures (EfficientNet-B0 and MobileNet-V2). Results For image-quality assessment, the pre-diagnosis module achieved area under the receiver operating characteristic curve (AUROC) values of 0.975, 0.999, and 0.987 in the internal validation dataset and the two external testing datasets, respectively. For field-of-view assessment, the module had an AUROC value of 1.000 in all datasets. For laterality-of-the-eye assessment, the module had AUROC values of 1.000, 0.999, and 0.985 in the internal validation dataset and the two external testing datasets, respectively. Conclusions This three-in-one DL module for assessing image quality, field of view, and laterality of retinal photographs achieved excellent performance and generalizability across different centers and ethnicities.
Translational Relevance The proposed DL-based pre-diagnosis module realized accurate and automated assessments of image quality, field of view, and laterality of the eye of retinal photographs, which could be further integrated into AI-based models to improve operational flow for enhancing disease screening and diagnosis.
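Operationally, a pre-diagnosis module of this kind acts as a gatekeeper in front of the disease-grading models: ungradable images are rejected for recapture, and gradable ones are routed with their field-of-view and laterality metadata attached. A minimal sketch of such a gate, where the thresholds, field names, and routing labels are illustrative assumptions rather than the paper's implementation:

```python
def pre_diagnosis_gate(p_gradable, p_macula_centered, p_right_eye, threshold=0.5):
    """Route a retinal photograph before disease-grading models run.

    Each argument is a probability from one of the three assessment heads
    (image quality, field of view, laterality). The 0.5 threshold is an
    illustrative default; a deployed system would calibrate per dataset.
    """
    if p_gradable < threshold:
        return {"route": "recapture", "reason": "ungradable image"}
    return {
        "route": "grade",
        "field_of_view": "macula-centered" if p_macula_centered >= threshold
                         else "optic-disc-centered",
        "laterality": "right" if p_right_eye >= threshold else "left",
    }

# A sharp, macula-centered left-eye photograph passes through to grading.
print(pre_diagnosis_gate(0.97, 0.88, 0.12))
```

Because all three checks run on the same image in one pass, the gate adds negligible latency while preventing downstream models from grading unusable or mislabeled inputs.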
Affiliation(s)
- Vincent Yuen: Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Anran Ran: Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Jian Shi: Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Kaiser Sham: Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Dawei Yang: Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Victor T. T. Chan: Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Raymond Chan: Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Jason C. Yam: Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong; Hong Kong Eye Hospital, Hong Kong
- Clement C. Tham: Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong; Hong Kong Eye Hospital, Hong Kong
- Gareth J. McKay: Center for Public Health, Royal Victoria Hospital, Queen's University Belfast, Belfast, UK
- Michael A. Williams: Center for Medical Education, Royal Victoria Hospital, Queen's University Belfast, Belfast, UK
- Leopold Schmetterer: Singapore Eye Research Institute, Singapore National Eye Center, Singapore; Ophthalmology and Visual Sciences Academic Clinical Programme, Duke-NUS Medical School, Singapore; SERI-NTU Advanced Ocular Engineering (STANCE) Program, Nanyang Technological University, Singapore; School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore; Department of Clinical Pharmacology, Medical University of Vienna, Vienna, Austria; Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria; Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
- Ching-Yu Cheng: Singapore Eye Research Institute, Singapore National Eye Center, Singapore; Ophthalmology and Visual Sciences Academic Clinical Programme, Duke-NUS Medical School, Singapore
- Vincent Mok: Gerald Choa Neuroscience Center, Therese Pei Fong Chow Research Center for Prevention of Dementia, Lui Che Woo Institute of Innovative Medicine, Department of Medicine and Therapeutics, The Chinese University of Hong Kong, Hong Kong
- Christopher L. Chen: Memory, Aging and Cognition Center, Department of Pharmacology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Tien Y. Wong: Singapore Eye Research Institute, Singapore National Eye Center, Singapore; Ophthalmology and Visual Sciences Academic Clinical Programme, Duke-NUS Medical School, Singapore
- Carol Y. Cheung: Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
24
Ong J, Selvam A, Chhablani J. Artificial intelligence in ophthalmology: Optimization of machine learning for ophthalmic care and research. Clin Exp Ophthalmol 2021; 49:413-415. [PMID: 34279854 DOI: 10.1111/ceo.13952] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/04/2023]
Affiliation(s)
- Joshua Ong: Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA
- Amrish Selvam: Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA
- Jay Chhablani: Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA
25
Ting DSW, Wong TY, Park KH, Cheung CY, Tham CC, Lam DSC. Ocular Imaging Standardization for Artificial Intelligence Applications in Ophthalmology: the Joint Position Statement and Recommendations From the Asia-Pacific Academy of Ophthalmology and the Asia-Pacific Ocular Imaging Society. Asia Pac J Ophthalmol (Phila) 2021; 10:348-349. [PMID: 34415245 DOI: 10.1097/apo.0000000000000421] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022] Open
Affiliation(s)
- Daniel S W Ting: Singapore National Eye Centre, Singapore Eye Research Institute, Singapore; Duke-NUS Medical School, National University Singapore, Singapore
- Tien Y Wong: Singapore National Eye Centre, Singapore Eye Research Institute, Singapore; Duke-NUS Medical School, National University Singapore, Singapore
- Carol Y Cheung: The Chinese University of Hong Kong, Hong Kong SAR, China
- Clement C Tham: The Chinese University of Hong Kong, Hong Kong SAR, China
- Dennis S C Lam: C-MER International Eye Care Group Limited, Hong Kong SAR, China
26
Gunasekeran DV, Wong TY. Artificial Intelligence in Ophthalmology in 2020: A Technology on the Cusp for Translation and Implementation. Asia Pac J Ophthalmol (Phila) 2020; 9:61-66. [PMID: 32349112 DOI: 10.1097/01.apo.0000656984.56467.2c] [Citation(s) in RCA: 34] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023] Open
Affiliation(s)
- Dinesh Visva Gunasekeran: Singapore Eye Research Institute (SERI), Singapore; National University of Singapore (NUS), Singapore
- Tien Yin Wong: Singapore Eye Research Institute (SERI), Singapore; National University of Singapore (NUS), Singapore; Singapore National Eye Center (SNEC), Singapore