1. Cataract surgery innovations. Indian J Ophthalmol 2024; 72:613-614. [PMID: 38648429] [DOI: 10.4103/ijo.ijo_888_24]
2. Automatic Classification of Slit-Lamp Photographs by Imaging Illumination. Cornea 2024; 43:419-424. [PMID: 37267474] [PMCID: PMC10689570] [DOI: 10.1097/ico.0000000000003318]
Abstract
PURPOSE The aim of this study was to facilitate deep learning systems in image annotation for diagnosing keratitis type by developing an automated algorithm to classify slit-lamp photographs (SLPs) based on illumination technique. METHODS SLPs were collected from patients with corneal ulcers at Kellogg Eye Center, Bascom Palmer Eye Institute, and Aravind Eye Care Systems. Illumination techniques were slit beam, diffuse white light, diffuse blue light with fluorescein, and sclerotic scatter (ScS). Images were manually labeled for illumination and randomly split into training, validation, and testing data sets (70%:15%:15%). Classification algorithms including MobileNetV2, ResNet50, LeNet, AlexNet, multilayer perceptron, and k-nearest neighbors were trained to distinguish the 4 illumination techniques. The algorithm performances on the test data set were evaluated with 95% confidence intervals (CIs) for accuracy, F1 score, and area under the receiver operating characteristic curve (AUC-ROC), overall and by class (one-vs-rest). RESULTS A total of 12,132 images from 409 patients were analyzed, including 41.8% (n = 5069) slit-beam photographs, 21.2% (2571) diffuse white light, 19.5% (2364) diffuse blue light, and 17.5% (2128) ScS. MobileNetV2 achieved the highest overall F1 score of 97.95% (CI, 97.94%-97.97%), AUC-ROC of 99.83% (99.72%-99.9%), and accuracy of 98.98% (98.97%-98.98%). The F1 scores for slit beam, diffuse white light, diffuse blue light, and ScS were 97.82% (97.80%-97.84%), 96.62% (96.58%-96.66%), 99.88% (99.87%-99.89%), and 97.59% (97.55%-97.62%), respectively. Slit beam and ScS were the 2 most frequently misclassified illumination types. CONCLUSIONS MobileNetV2 accurately labeled the illumination of SLPs using a large data set of corneal images. Effective, automatic classification of SLPs is key to integrating deep learning systems for clinical decision support into practice workflows.
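The evaluation protocol described above (70%:15%:15% train/validation/test split, overall and one-vs-rest metrics) can be sketched with scikit-learn. This is a minimal stand-in, not the study's pipeline: it uses synthetic feature vectors and the k-nearest neighbors baseline named in the abstract in place of real slit-lamp images and MobileNetV2.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import f1_score, roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in features for the four illumination classes
# (slit beam, diffuse white, diffuse blue, sclerotic scatter).
class_means = rng.normal(size=(4, 16))
X = rng.normal(size=(1000, 16)) + np.repeat(class_means, 250, axis=0) * 3
y = np.repeat(np.arange(4), 250)

# 70%:15%:15% train/validation/test split, as in the abstract.
X_train, X_tmp, y_train, y_tmp = train_test_split(
    X, y, train_size=0.70, stratify=y, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, train_size=0.50, stratify=y_tmp, random_state=0)

clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
proba = clf.predict_proba(X_test)

macro_f1 = f1_score(y_test, clf.predict(X_test), average="macro")
ovr_auc = roc_auc_score(y_test, proba, multi_class="ovr")  # one-vs-rest AUC
print(f"macro F1={macro_f1:.3f}  OvR AUC-ROC={ovr_auc:.3f}")
```

In the study, the same split and metrics would be applied to CNN outputs rather than a k-NN on synthetic features.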
3. Enhancing Vault Prediction and ICL Sizing Through Advanced Machine Learning Models. J Refract Surg 2024; 40:e126-e132. [PMID: 38466764] [DOI: 10.3928/1081597x-20240131-01]
Abstract
PURPOSE To use artificial intelligence (AI) technology to accurately predict vault and Implantable Collamer Lens (ICL) size. METHODS The methodology focused on enhancing predictive capability through the fusion of machine-learning algorithms. Specifically, AdaBoost, Random Forest, Decision Tree, Support Vector Regression, LightGBM, and XGBoost were integrated into a majority-vote model. The performance of each model was evaluated using appropriate metrics such as accuracy, precision, F1 score, and area under the curve (AUC). RESULTS The majority-vote model exhibited the highest performance among the classification models, with an accuracy of 81.9% and an AUC of 0.807. Notably, LightGBM (accuracy = 0.788, AUC = 0.803) and XGBoost (accuracy = 0.790, AUC = 0.801) demonstrated competitive results. For ICL size prediction, the Random Forest model achieved an accuracy of 85.3% (AUC = 0.973), whereas XGBoost (accuracy = 0.834, AUC = 0.961) and LightGBM (accuracy = 0.816, AUC = 0.961) remained competitive. CONCLUSIONS This study highlights the potential of diverse machine learning algorithms to enhance postoperative vault and ICL size prediction, ultimately contributing to the safety of ICL implantation procedures. Furthermore, the novel majority-vote model demonstrates the capability to combine the advantages of multiple models, yielding superior accuracy. Importantly, this study will give ophthalmologists a precise tool for vault prediction, facilitating informed ICL size selection in clinical practice. [J Refract Surg. 2024;40(3):e126-e132.]
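A majority-vote ensemble of the kind described can be sketched with scikit-learn's `VotingClassifier`. This is an illustrative sketch on synthetic data, not the authors' model: LightGBM and XGBoost are omitted (third-party packages), and Support Vector Regression is replaced by an SVC, since hard voting requires classifiers.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier, VotingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic binary task standing in for "predicted vault in target range: yes/no".
X, y = make_classification(n_samples=600, n_features=12, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

# Hard (majority) voting over several of the base learners named in the abstract.
vote = VotingClassifier(
    estimators=[
        ("ada", AdaBoostClassifier(random_state=1)),
        ("rf", RandomForestClassifier(random_state=1)),
        ("dt", DecisionTreeClassifier(random_state=1)),
        ("svm", SVC(random_state=1)),
    ],
    voting="hard",
)
vote.fit(X_tr, y_tr)
acc = accuracy_score(y_te, vote.predict(X_te))
print(f"majority-vote accuracy: {acc:.3f}")
```

Hard voting takes the most common class label across estimators, which is how a majority-vote model can outperform any single base learner.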
4.
Abstract
Background: The current medical scenario is closely linked to recent progress in telecommunications, photodocumentation, and artificial intelligence (AI). Smartphone eye examination may represent a promising tool in the technological spectrum, of special interest for primary health care services. Fundus imaging with this technique has improved and democratized the teaching of fundoscopy and, in particular, contributes greatly to screening for diseases with high rates of blindness. Eye examination using smartphones is essentially a cheap and safe method, thus contributing to public policies on population screening. This review aims to provide an update on the use of this resource and its future prospects, especially as a screening and ophthalmic diagnostic tool. Methods: In this review, we surveyed major published advances in retinal and anterior segment analysis using AI. We performed an electronic search of the Medical Literature Analysis and Retrieval System Online (MEDLINE), EMBASE, and the Cochrane Library for published literature without a date restriction. We included studies that compared the diagnostic accuracy of smartphone ophthalmoscopy for detecting prevalent diseases with an accurate or commonly employed reference standard. Results: Few databases have complete metadata providing demographic data, and few contain sufficient images involving current or new therapies. These databases contain images captured with different systems and formats, and information is often excluded without the reasons being adequately detailed, which further distances them from real-life conditions. The safety, portability, low cost, and reproducibility of smartphone eye images are discussed in several studies, with encouraging results.
Conclusions: The high level of agreement between conventional and smartphone methods makes this a powerful tool for screening and early diagnosis of the main causes of blindness, such as cataract, glaucoma, diabetic retinopathy, and age-related macular degeneration. In addition to streamlining the medical workflow and bringing benefits for public health policies, smartphone eye examination can make safe, high-quality assessment available to the population.
5. AI-based diagnosis of nuclear cataract from slit-lamp videos. Sci Rep 2023; 13:22046. [PMID: 38086904] [PMCID: PMC10716159] [DOI: 10.1038/s41598-023-49563-7]
Abstract
In ophthalmology, the availability of large numbers of fundus photographs and optical coherence tomography images has spurred consideration of using artificial intelligence (AI) for diagnosing retinal and optic nerve disorders. However, AI application for diagnosing anterior segment eye conditions has remained unfeasible owing to the limited availability of standardized images and analysis models. We addressed this limitation by augmenting the quantity of standardized optical images using a video-recordable slit-lamp device. We then investigated whether our proposed machine learning (ML) AI algorithm could accurately diagnose cataracts from videos recorded with this device. We collected 206,574 cataract frames from 1812 cataract eye videos. Ophthalmologists graded the nuclear cataracts (NUCs) using the cataract grading scale of the World Health Organization. These gradings were used to train and validate an ML algorithm, and a validation dataset was used to compare the NUC diagnosis and grading of the AI with those of ophthalmologists. The results for the individual cataract grades were: NUC 0: area under the curve (AUC) = 0.967; NUC 1: AUC = 0.928; NUC 2: AUC = 0.923; and NUC 3: AUC = 0.949. Our ML-based cataract diagnostic model achieved performance comparable to a conventional device, presenting a promising and accurate automatic diagnostic AI tool.
6. Comprehensive Review on the Use of Artificial Intelligence in Ophthalmology and Future Research Directions. Diagnostics (Basel) 2022; 13:100. [PMID: 36611392] [PMCID: PMC9818832] [DOI: 10.3390/diagnostics13010100]
Abstract
BACKGROUND With several applications in medicine, and in ophthalmology in particular, artificial intelligence (AI) tools have been used to detect visual function deficits, thus playing a key role in diagnosing eye diseases and in predicting the evolution of these common and disabling conditions. AI tools, i.e., artificial neural networks (ANNs), are progressively involved in the detection and customized control of ophthalmic diseases. Studies that address the efficiency of AI in medicine, and especially in ophthalmology, were analyzed in this review. MATERIALS AND METHODS We conducted a comprehensive review in order to collect all accounts published between 2015 and 2022 that refer to these applications of AI in medicine and especially in ophthalmology. Neural networks play a major role in establishing the need to initiate preliminary anti-glaucoma therapy to halt the advance of the disease. RESULTS Different surveys in the literature show the remarkable benefit of these AI tools in ophthalmology in evaluating the visual field, optic nerve, and retinal nerve fiber layer, thus ensuring higher precision in detecting progression in glaucoma and retinal changes in diabetes. We identified 1762 publications on artificial intelligence in ophthalmology, comprising review and research articles (301 in PubMed, 144 in Scopus, 445 in Web of Science, 872 in ScienceDirect). Of these, after applying the inclusion and exclusion criteria, we analyzed 70 articles and review papers (diabetic retinopathy, N = 24; glaucoma, N = 24; DMLV, N = 15; other pathologies, N = 7). CONCLUSION In medicine, AI tools are used in surgery, radiology, gynecology, oncology, etc., in making a diagnosis, predicting the evolution of a disease, and assessing the prognosis of patients with oncological pathologies.
In ophthalmology, AI potentially increases the patient's access to screening/clinical diagnosis and decreases healthcare costs, mainly when there is a high risk of disease or communities face financial shortages. AI/DL (deep learning) algorithms using both OCT and FO images will change image analysis techniques and methodologies. Optimizing these (combined) technologies will accelerate progress in this area.
7. Health Economic Implications of Artificial Intelligence Implementation for Ophthalmology in Australia: A Systematic Review. Asia Pac J Ophthalmol (Phila) 2022; 11:554-562. [PMID: 36218837] [DOI: 10.1097/apo.0000000000000565]
Abstract
PURPOSE The health care industry is an inherently resource-intense sector. Emerging technologies such as artificial intelligence (AI) are at the forefront of advancements in health care. The health economic implications of this technology have not been clearly established and represent a substantial barrier to adoption, both in Australia and globally. This review aims to determine the health economic impact of implementing AI in ophthalmology in Australia. METHODS A systematic search of the PubMed/MEDLINE, EMBASE, and CENTRAL databases was conducted to March 2022, followed by data collection and risk-of-bias analysis in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 guidelines (PROSPERO number CRD42022325511). Included were full-text primary research articles analyzing a population of patients who have or are being evaluated for an ophthalmological diagnosis, using a health economic assessment system to assess the cost-effectiveness of AI. RESULTS Seven articles were identified for inclusion. Economic viability was defined as a direct cost to the patient equal to or less than the costs incurred with human clinician assessment. Despite the lack of Australia-specific data, foreign analyses overwhelmingly showed that AI is at least as economically viable as traditional human screening programs, if not more so, while maintaining comparable clinical effectiveness. This evidence was largely in the setting of diabetic retinopathy screening. CONCLUSIONS Primary Australian research is needed to accurately analyze the health economic implications of implementing AI on a large scale. Further research is also required to analyze the economic feasibility of adopting AI technology in other areas of ophthalmology, such as glaucoma and cataract screening.
8. Characterization of Dysfunctional Lens Index and Opacity Grade in a Healthy Population. Diagnostics (Basel) 2022; 12:1167. [PMID: 35626322] [PMCID: PMC9140515] [DOI: 10.3390/diagnostics12051167]
Abstract
This study enrolled 61 volunteers (102 eyes) classified into subjects < 50 years (group 1) and subjects ≥ 50 years (group 2). Dysfunctional Lens Index (DLI); opacity grade; pupil diameter; and corneal, internal, and ocular higher order aberrations (HOAs) were measured with the i-Trace system (Tracey Technologies). Mean DLI was 8.89 ± 2.00 and 6.71 ± 2.97 in groups 1 and 2, respectively, being significantly higher in group 1 in all and right eyes (both p < 0.001). DLI correlated significantly with age (Rho = −0.41, p < 0.001) and pupil diameter (Rho = 0.20, p = 0.043) for all eyes, and with numerous internal and ocular root-mean-square HOAs for right, left, and all eyes (Rho ≤ −0.25, p ≤ 0.001). Mean opacity grade was 1.21 ± 0.63 and 1.48 ± 1.15 in groups 1 and 2, respectively, with no significant differences between groups (p ≥ 0.29). Opacity grade significantly correlated with pupil diameter for right and all eyes (Rho ≤ 0.33, p ≤ 0.013), and with some ocular root-mean-square HOAs for right and all eyes (Rho ≥ 0.23, p ≤ 0.020). DLI correlates with age and might be used as a complement to other diagnostic measurements for assessing dysfunctional lens syndrome. Both DLI and opacity grade maintain a relationship with pupil diameter and with internal and ocular HOAs, suggesting that the algorithms used by the device may be based, in part, on these parameters.
9. Survey-based Evaluation of the Use of Picture Archiving and Communication Systems in an Eye Hospital-Ophthalmologists' Perspective. Asia Pac J Ophthalmol (Phila) 2022; 11:258-266. [PMID: 34923520] [DOI: 10.1097/apo.0000000000000467]
Abstract
PURPOSE Picture archiving and communication system (PACS) is a medical imaging system for the sharing, storage, retrieval, and access of medical images. Our study aimed to identify ophthalmologists' views on PACS, with a comparison between 3 platforms, namely electronic patient record (ePR), HEYEX (Heidelberg Engineering, Switzerland), and FORUM (Zeiss, US), following their implementation in an eye hospital for common ophthalmic investigations [visual field, optical coherence tomography (OCT) of the retinal nerve fiber layer and macula, and fluorescein/indocyanine green angiography (FA/ICG)]. METHODS An online survey was distributed among ophthalmologists in a single center. The primary outcome was the comparison of PACS with the paper-based system. Secondary outcomes included the pattern of use and comparison of the different PACS platforms. RESULTS The survey response rate was 28/37 (75.7%). Images were most commonly accessed through ePR (median: 80% of the time, interquartile range: 50 to 90%). All systems scored highly on information display items (median scores ≥7.5 out of 10) and in reducing patient identification errors in investigation filing and retrieval during consultation compared with paper (scores ≥7.0). However, ePR was inferior to paper in "facilitating comparison with previous results" for all investigation types (scores 3.0 to 4.5). ePR scored significantly higher than HEYEX (P < 0.001) and FORUM (P < 0.022) on all system quality items, except login response time (P = 0.081). HEYEX scored significantly higher among vitreoretinal-uveitis (VRU) subspecialty members on information quality items for OCT macula and FA/ICG [VRU: 10.0 (8.0 to 10.0), non-VRU: 8.0 (6.75 to 9.25), P = 0.042]. CONCLUSIONS Overall feedback on PACS among ophthalmologists was positive, with limitations in the efficiency of information use, for example, comparison with previous results. Subspecialty played an important role in evaluating PACS.
10.
Abstract
Access to the Internet and computer systems has prompted a shift towards digital learning in medicine, including ophthalmology. Using the PubMed database and the Google search engine, current initiatives in ophthalmology that serve as alternatives to traditional in-person learning, with the purpose of enhancing clinical and surgical training, were reviewed. These include the development of tele-education modules, the construction of libraries of clinical and surgical videos, the delivery of didactic teaching via video communication, and the implementation of simulators and intelligent tutoring systems in clinical and surgical training programs. In this age of digital communication, teleophthalmology programs, virtual ophthalmological society meetings, and online examinations have become necessary for conducting clinical work and educational training in ophthalmology, especially in light of recent global events that have prevented large gatherings, as well as the rural location of various populations. Looking forward, web-based modules and resources, artificial intelligence-based systems, and telemedicine programs will augment current curricula for ophthalmology trainees.
11. Diagnostic accuracy of code-free deep learning for detection and evaluation of posterior capsule opacification. BMJ Open Ophthalmol 2022; 7:e000992. [PMID: 36161827] [PMCID: PMC9174773] [DOI: 10.1136/bmjophth-2022-000992]
Abstract
OBJECTIVE To train and validate a code-free deep learning system (CFDLS) for classifying high-resolution digital retroillumination images of posterior capsule opacification (PCO) and for discriminating between clinically significant and non-significant PCO. METHODS AND ANALYSIS For this retrospective registry study, three expert observers graded two independent datasets totalling 279 images three separate times, from no PCO to severe PCO, providing binary labels for clinical significance. The CFDLS was trained and internally validated using the 179 images of the training dataset and externally validated with the remaining 100 images. Model development was through Google Cloud AutoML Vision. Intraobserver and interobserver variability were assessed using Fleiss kappa (κ) coefficients, and model performance through sensitivity, specificity, and area under the curve (AUC). RESULTS Intraobserver variability κ values for observers 1, 2, and 3 were 0.90 (95% CI 0.86 to 0.95), 0.94 (95% CI 0.90 to 0.97), and 0.88 (95% CI 0.82 to 0.93). Interobserver agreement was high, ranging from 0.85 (95% CI 0.79 to 0.90) between observers 1 and 2 to 0.90 (95% CI 0.85 to 0.94) between observers 1 and 3. On internal validation, the AUC of the CFDLS was 0.99 (95% CI 0.92 to 1.0); sensitivity was 0.89 at a specificity of 1.0. On external validation, the AUC was 0.97 (95% CI 0.93 to 0.99); sensitivity was 0.84 and specificity was 0.92. CONCLUSION This CFDLS provides highly accurate discrimination between clinically significant and non-significant PCO, equivalent to human expert graders. Its clinical value as a potential decision support tool in different models of care warrants further research.
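Fleiss' kappa, used above for intra- and interobserver variability, can be computed directly from a subjects-by-categories matrix of rating counts. A minimal sketch from the standard formula; the data below are made up for illustration, not the study's gradings:

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for a subjects x categories matrix of rating counts.
    counts[i, j] = number of raters assigning subject i to category j;
    assumes the same number of raters per subject."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum(axis=1)[0]                         # raters per subject
    p_j = counts.sum(axis=0) / counts.sum()           # category proportions
    P_i = (counts * (counts - 1)).sum(axis=1) / (n * (n - 1))  # per-subject agreement
    P_bar, P_e = P_i.mean(), (p_j ** 2).sum()         # observed vs chance agreement
    return (P_bar - P_e) / (1 - P_e)

# Three gradings of the same four images, two categories
# (clinically significant PCO: yes/no). Perfect agreement -> kappa = 1.
perfect = np.array([[3, 0], [0, 3], [3, 0], [0, 3]])
print(round(fleiss_kappa(perfect), 3))  # 1.0
```

The same function applied to one observer's three repeated gradings gives intraobserver kappa; applied across observers, interobserver kappa.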
12.
13. Detecting visually significant cataract using retinal photograph-based deep learning. Nat Aging 2022; 2:264-271. [PMID: 37118370] [PMCID: PMC10154193] [DOI: 10.1038/s43587-022-00171-6]
Abstract
Age-related cataracts are the leading cause of visual impairment among older adults. Many significant cases remain undiagnosed or neglected in communities, owing to the limited availability or accessibility of cataract screening. In the present study, we report the development and validation of a retinal photograph-based deep-learning algorithm for automated detection of visually significant cataracts, using more than 25,000 images from population-based studies. In the internal test set, the area under the receiver operating characteristic curve (AUROC) was 96.6%. External testing performed across three studies showed AUROCs of 91.6-96.5%. In a separate test set of 186 eyes, we further compared the algorithm's performance with 4 ophthalmologists' evaluations. The algorithm performed comparably, if not slightly better (sensitivity of 93.3% versus 51.7-96.6% by ophthalmologists, and specificity of 99.0% versus 90.7-97.9% by ophthalmologists). Our findings show the potential of a retinal photograph-based screening tool for visually significant cataracts among older adults, enabling more appropriate referrals to tertiary eye centers.
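Sensitivity and specificity of the kind reported above come straight from a 2x2 confusion matrix. A minimal sketch with hypothetical eye-level labels; the counts are illustrative, not the study's data:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical labels for a 186-eye test set: 1 = visually significant cataract.
y_true = np.array([1] * 30 + [0] * 156)
y_pred = np.concatenate([
    np.ones(28), np.zeros(2),     # 28 of 30 true cases detected
    np.zeros(154), np.ones(2),    # 2 false positives among 156 negatives
])

# sklearn's binary confusion matrix ravels to (tn, fp, fn, tp).
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"sensitivity={sensitivity:.3f} specificity={specificity:.3f}")
```

The ophthalmologist ranges quoted in the abstract are just these two ratios computed separately for each grader on the same eyes.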
14. Deep Learning Automated Diagnosis and Quantitative Classification of Cataract Type and Severity. Ophthalmology 2022; 129:571-584. [PMID: 34990643] [PMCID: PMC9038670] [DOI: 10.1016/j.ophtha.2021.12.017]
Abstract
PURPOSE To develop and evaluate deep learning models to perform automated diagnosis and quantitative classification of age-related cataract, including all three anatomical types, from anterior segment photographs. DESIGN Application of deep learning models to Age-Related Eye Disease Study (AREDS) dataset. PARTICIPANTS 18,999 photographs (6,333 triplets) from longitudinal follow-up of 1,137 eyes (576 AREDS participants). METHODS Deep learning models were trained to detect and quantify nuclear cataract (NS; scale 0.9-7.1) from 45-degree slit-lamp photographs and cortical (CLO; scale 0-100%) and posterior subcapsular (PSC; scale 0-100%) cataract from retroillumination photographs. Model performance was compared with that of 14 ophthalmologists and 24 medical students. The ground truth labels were from reading center grading. MAIN OUTCOME MEASURES Mean squared error (MSE). RESULTS On the full test set, mean MSE values for the deep learning models were: 0.23 (SD 0.01) for NS, 13.1 (SD 1.6) for CLO, and 16.6 (SD 2.4) for PSC. On a subset of the test set (substantially enriched for positive cases of CLO and PSC), for NS, mean MSE for the models was 0.23 (SD 0.02), compared to 0.98 (SD 0.23; p=0.000001) for the ophthalmologists, and 1.24 (SD 0.33; p=0.000005) for the medical students. For CLO, mean MSE values were 53.5 (SD 14.8), compared to 134.9 (SD 89.9; p=0.003) and 422.0 (SD 944.4; p=0.0007), respectively. For PSC, mean MSE values were 171.9 (SD 38.9), compared to 176.8 (SD 98.0; p=0.67) and 395.2 (SD 632.5; p=0.18), respectively. In external validation on the Singapore Malay Eye Study (sampled to reflect the distribution of cataract severity in AREDS), MSE was 1.27 for NS and 25.5 for PSC. CONCLUSIONS A deep learning framework was able to perform automated and quantitative classification of cataract severity for all three types of age-related cataract. 
For the two most common types (NS and CLO), the accuracy was significantly superior to that of ophthalmologists; for the least common type (PSC), the accuracy was similar. The framework may have wide potential applications in both clinical and research domains. In the future, such approaches may increase the accessibility of cataract assessment globally. The code and models are publicly available at https://XXX.
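Mean squared error, the study's main outcome measure, is straightforward to compute for continuous severity grades. A minimal sketch with hypothetical gradings on the NS scale described above (the values are invented for illustration):

```python
import numpy as np

# Hypothetical continuous severity grades (e.g., NS on the 0.9-7.1 scale):
# reading-center ground truth vs. model predictions for six eyes.
truth = np.array([2.0, 3.1, 4.5, 1.2, 5.0, 2.8])
pred = np.array([2.3, 3.0, 4.1, 1.5, 5.4, 2.6])

mse = np.mean((truth - pred) ** 2)  # the paper's main outcome measure
print(round(mse, 3))  # 0.092
```

Lower MSE means the predicted grades sit closer to the reading-center labels; comparing per-grader MSE against model MSE on the same eyes is how the human-vs-model contrast above is made.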
15. Artificial Intelligence to Detect Meibomian Gland Dysfunction From in-vivo Laser Confocal Microscopy. Front Med (Lausanne) 2021; 8:774344. [PMID: 34901091] [PMCID: PMC8655877] [DOI: 10.3389/fmed.2021.774344]
Abstract
Background: In recent years, deep learning has been widely used for a variety of ophthalmic diseases. As a common ophthalmic disease, meibomian gland dysfunction (MGD) has a unique phenotype on in-vivo laser confocal microscope imaging (VLCMI). The purpose of our study was to investigate a deep learning algorithm to differentiate and classify obstructive MGD (OMGD), atrophic MGD (AMGD), and normal groups. Methods: In this study, a multi-layer deep convolutional neural network (CNN) was trained using VLCMI from OMGD, AMGD, and healthy subjects as verified by medical experts. Automatic differential diagnosis of OMGD, AMGD, and healthy subjects was tested by comparing the network's image-based identification of each group with the medical experts' diagnoses. The CNN was trained and validated with 4,985 and 1,663 VLCMI images, respectively. Using established enhancement techniques, 1,663 untrained VLCMI images were tested. Results: In this study, we included 2,766 healthy control VLCMIs, 2,744 from OMGD, and 2,801 from AMGD. Of the three models tested, the differential diagnostic accuracy of the DenseNet169 CNN was highest, at over 97%. The sensitivity and specificity of the DenseNet169 model for OMGD were 88.8% and 95.4%, respectively, and for AMGD 89.4% and 98.4%, respectively. Conclusion: This study described a deep learning algorithm to automatically screen and classify VLCMI images of MGD. After optimization, the classifier model displayed excellent accuracy. With further development, this model may become an effective tool for the differential diagnosis of MGD.
16. Cataract: Advances in surgery and whether surgery remains the only treatment in future. Adv Ophthalmol Pract Res 2021; 1:100008. [PMID: 37846393] [PMCID: PMC10577864] [DOI: 10.1016/j.aopr.2021.100008]
Abstract
Background Cataract is the world's leading eye disease causing blindness. The prevalence of cataract among people aged 40 years and older is approximately 11.8%-18.8%. Currently, surgery is the only way to treat cataracts. Main Text From early intracapsular cataract extraction to extracapsular cataract extraction, to current phacoemulsification cataract surgery, incision size has decreased from 12 mm to 3 mm, and sometimes even to 1.8 mm or less, and the revolution in cataract surgery is ongoing. Cataract surgery has transformed from vision recovery to refractive surgery, leading to the era of refractive cataract surgery, and premium intraocular lenses (IOLs) such as toric IOLs, multifocal IOLs, and extended depth-of-focus IOLs are being increasingly used to meet the individual needs of patients. With its advantages of providing better visual acuity and causing fewer complications, phacoemulsification is currently the mainstream cataract surgery technique worldwide. However, patient expectations for the safety and accuracy of the operation are continually increasing, and femtosecond laser-assisted cataract surgery (FLACS) has entered the public's field of vision. FLACS combines new laser technology and artificial intelligence to replace fine manual clear corneal incision, capsulorhexis, and nuclear pre-fragmentation, providing a new alternative technology for patients and ophthalmologists. As FLACS matures, it is being increasingly applied in complex cases; however, some consider it not cost-effective. Although more than 26 million cataract surgeries are performed each year, a large unmet burden of cataract remains, especially in developing countries. Although cataract surgery is a nearly ideal procedure and its complications are manageable, both patients and doctors dream of using drugs to cure cataracts. Is surgery really the only way to treat cataracts in the future?
Animal experiments have shown that lanosterol therapy in rabbits and dogs can alleviate cataract severity and partially restore lens transparency. Although there is still much to learn about cataract reversal, this groundbreaking work provides a new strategy for the prevention and treatment of cataracts. Conclusions Although cataract surgery is nearly ideal, it remains insufficient; we expect the prospects for cataract drugs to be bright.
17. A deep learning approach for successful big-bubble formation prediction in deep anterior lamellar keratoplasty. Sci Rep 2021; 11:18559. [PMID: 34535722] [PMCID: PMC8448733] [DOI: 10.1038/s41598-021-98157-8]
Abstract
The efficacy of deep learning in predicting successful big-bubble (SBB) formation during deep anterior lamellar keratoplasty (DALK) was evaluated. Medical records of patients undergoing DALK at the University of Cologne, Germany, between March 2013 and July 2019 were retrospectively analyzed. Patients were divided into two groups: (1) SBB or (2) failed big-bubble (FBB). Preoperative anterior segment optical coherence tomography images and corneal biometric values (corneal thickness, corneal curvature, and densitometry) were evaluated. A deep neural network model, Visual Geometry Group-16 (VGG-16), was selected to test the validation data, evaluate the model, create a heat map image, and calculate the area under the curve (AUC). This pilot study included 46 patients overall (11 women, 35 men). SBBs were more common in keratoconus eyes (KC eyes) than in corneal opacifications of other etiologies (non-KC eyes) (p = 0.006). The AUC was 0.746 (95% confidence interval [CI] 0.603–0.889). The determination success rate was 78.3% (18/23 eyes) (95% CI 56.3–92.5%) for SBB and 69.6% (16/23 eyes) (95% CI 47.1–86.8%) for FBB. This automated system demonstrates the potential of SBB prediction in DALK. Although KC eyes had a higher SBB rate, no other specific findings emerged from the corneal biometric data.
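An AUC with a 95% CI of the kind reported above can be estimated for a small cohort with a percentile bootstrap. A hedged sketch on simulated labels and scores (not the study's data), assuming the CI is obtained by resampling eyes:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# Hypothetical SBB outcomes (1 = successful big bubble) and model scores
# for a cohort the size of the pilot study (46 eyes).
y = rng.integers(0, 2, size=46)
scores = y * 0.6 + rng.normal(0, 0.5, size=46)  # informative but noisy scores

auc = roc_auc_score(y, scores)

# Percentile bootstrap for the 95% CI, resampling eyes with replacement.
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(y), size=len(y))
    if len(np.unique(y[idx])) < 2:  # AUC needs both classes in the resample
        continue
    boot.append(roc_auc_score(y[idx], scores[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC={auc:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```

With only 46 eyes, the interval is wide, which mirrors the broad CI (0.603–0.889) the pilot study reports.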
|
18
|
Deep-Learning-Based Pre-Diagnosis Assessment Module for Retinal Photographs: A Multicenter Study. Transl Vis Sci Technol 2021; 10:16. [PMID: 34524409 PMCID: PMC8444486 DOI: 10.1167/tvst.10.11.16] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2021] [Accepted: 08/12/2021] [Indexed: 12/23/2022] Open
Abstract
Purpose Artificial intelligence (AI) deep learning (DL) has been shown to have significant potential for eye disease detection and screening on retinal photographs in different clinical settings, particularly in primary care. However, an automated pre-diagnosis image assessment is essential to streamline the application of the developed AI-DL algorithms. In this study, we developed and validated a DL-based pre-diagnosis assessment module for retinal photographs, targeting image quality (gradable vs. ungradable), field of view (macula-centered vs. optic-disc-centered), and laterality of the eye (right vs. left). Methods A total of 21,348 retinal photographs from 1914 subjects from various clinical settings in Hong Kong, Singapore, and the United Kingdom were used for training, internal validation, and external testing for the DL module, developed by two DL-based algorithms (EfficientNet-B0 and MobileNet-V2). Results For image-quality assessment, the pre-diagnosis module achieved area under the receiver operating characteristic curve (AUROC) values of 0.975, 0.999, and 0.987 in the internal validation dataset and the two external testing datasets, respectively. For field-of-view assessment, the module had an AUROC value of 1.000 in all of the datasets. For laterality-of-the-eye assessment, the module had AUROC values of 1.000, 0.999, and 0.985 in the internal validation dataset and the two external testing datasets, respectively. Conclusions Our study showed that this three-in-one DL module for assessing image quality, field of view, and laterality of the eye of retinal photographs achieved excellent performance and generalizability across different centers and ethnicities.
Translational Relevance The proposed DL-based pre-diagnosis module realized accurate and automated assessments of image quality, field of view, and laterality of the eye of retinal photographs, which could be further integrated into AI-based models to improve operational flow for enhancing disease screening and diagnosis.
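The three-in-one module described above gates downstream diagnosis on three binary assessments. A minimal sketch of what that gating step might look like, assuming the per-head probabilities come from the trained classifiers; the function names and the 0.5 threshold are illustrative assumptions, not details from the paper:

```python
from dataclasses import dataclass

@dataclass
class PreDiagnosisResult:
    gradable: bool
    field: str       # "macula" or "disc"
    laterality: str  # "right" or "left"

def pre_diagnosis(p_gradable, p_macula, p_right, threshold=0.5):
    """Combine the three binary-head probabilities into one routing decision.
    Ungradable images are rejected before any disease model runs."""
    if p_gradable < threshold:
        return None  # route to recapture rather than to the diagnostic model
    return PreDiagnosisResult(
        gradable=True,
        field="macula" if p_macula >= threshold else "disc",
        laterality="right" if p_right >= threshold else "left",
    )
```

Running the quality check first means downstream disease models only ever see gradable images with known field of view and laterality, which is the operational-flow benefit the authors describe.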
|
19
|
Artificial intelligence in ophthalmology: Optimization of machine learning for ophthalmic care and research. Clin Exp Ophthalmol 2021; 49:413-415. [PMID: 34279854 DOI: 10.1111/ceo.13952] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/04/2023]
|
20
|
Ocular Imaging Standardization for Artificial Intelligence Applications in Ophthalmology: the Joint Position Statement and Recommendations From the Asia-Pacific Academy of Ophthalmology and the Asia-Pacific Ocular Imaging Society. Asia Pac J Ophthalmol (Phila) 2021; 10:348-349. [PMID: 34415245 DOI: 10.1097/apo.0000000000000421] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022] Open
|
21
|
Artificial Intelligence in Ophthalmology in 2020: A Technology on the Cusp for Translation and Implementation. Asia Pac J Ophthalmol (Phila) 2020; 9:61-66. [PMID: 32349112 DOI: 10.1097/01.apo.0000656984.56467.2c] [Citation(s) in RCA: 25] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023] Open
|