1
Capuano V, Semoun O, Combes A, Mehanna CJ, Oubraham H, Souied EH. [Diagnostic approach and treatment paradigm in atrophic age-related macular degeneration: Recommendations of the France Macula Federation]. J Fr Ophtalmol 2025; 48:104473. PMID: 40058064. DOI: 10.1016/j.jfo.2025.104473.
Abstract
Atrophic age-related macular degeneration (AMD) represents a detrimental progression of age-related maculopathy, with advanced retinal lesions associated with drusen and pseudodrusen. It is characterized by thinning of the neuroretinal tissue linked to the disappearance of the outer layers of the retina and the retinal pigment epithelium (RPE). Our goal is to offer ophthalmologists a standardized approach to the diagnosis and management of atrophic AMD, in order to facilitate and optimize the management of this disease. The diagnosis of atrophic AMD is based on multimodal imaging; color fundus photography, fundus autofluorescence (FAF) and structural optical coherence tomography (OCT) are the first-line examinations to assess lesion size and foveolar sparing. OCT-angiography (OCT-A) is useful in diagnosing associated choroidal neovascularization. At times, the differential diagnosis will require other complementary examinations, such as fluorescein and/or indocyanine green angiography. The assessment of visual function relies essentially on the measurement of visual acuity; other functional tests such as reading speed, low-luminance visual acuity (LLVA), contrast sensitivity or microperimetry are of definite interest but are not yet used in routine clinical practice. The therapeutic approach to this disease is multidisciplinary, combining regular clinical monitoring, medical treatment, psychological support, orthoptic rehabilitation and optical visual aids. Support groups are of significant benefit.
Affiliation(s)
- V Capuano: Centre hospitalier intercommunal de Créteil, 40, avenue de Verdun, 94000 Créteil, France
- O Semoun: Centre hospitalier intercommunal de Créteil, 40, avenue de Verdun, 94000 Créteil, France
- A Combes: Centre hospitalier intercommunal de Créteil, 40, avenue de Verdun, 94000 Créteil, France
- C-J Mehanna: Centre hospitalier intercommunal de Créteil, 40, avenue de Verdun, 94000 Créteil, France
- H Oubraham: Centre hospitalier intercommunal de Créteil, 40, avenue de Verdun, 94000 Créteil, France
- E H Souied: Centre hospitalier intercommunal de Créteil, 40, avenue de Verdun, 94000 Créteil, France; Asso DMLA, 40, avenue de Verdun, 94000 Créteil, France
2
Piatti A, Rui C, Gazzina S, Tartaglino B, Romeo F, Manti R, Doglio M, Nada E, Giorda CB. Diabetic retinopathy screening with confocal fundus camera and artificial intelligence-assisted grading. Eur J Ophthalmol 2025; 35:679-688. PMID: 39109554. DOI: 10.1177/11206721241272229.
Abstract
PURPOSE Screening for diabetic retinopathy (DR) by ophthalmologists is costly and labour-intensive. Artificial intelligence (AI) for automated DR detection could be a clinically and economically viable alternative. We assessed the performance of a confocal fundus imaging system (DRSplus, Centervue SpA), coupled with an AI algorithm (RetCAD, Thirona B.V.), in a real-world setting. METHODS 45° non-mydriatic retinal images from 506 patients with diabetes were graded both by an ophthalmologist and by the AI algorithm according to the International Clinical Diabetic Retinopathy severity scale. Less than moderate retinopathy (DR scores 0, 1) was defined as non-referable, while more severe stages were defined as referable retinopathy. The gradings were then compared at both eye level and patient level. Key metrics included sensitivity and specificity, each reported with a 95% confidence interval. RESULTS The percentage of ungradable eyes according to the AI was 2.58%. For detecting referable DR, the AI algorithm achieved 97.18% sensitivity and 93.73% specificity at eye level, and 98.70% sensitivity and 91.06% specificity at patient level. CONCLUSIONS DRSplus paired with RetCAD represents a reliable DR screening solution in a real-world setting. The high sensitivity of the system ensures that almost all patients requiring medical attention for DR are referred to an ophthalmologist for further evaluation.
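Editor's note: as a rough illustration of the metrics reported in this abstract, sensitivity and specificity with normal-approximation (Wald) 95% confidence intervals can be computed from confusion-matrix counts. The counts below are hypothetical, chosen only so that the rates match the reported eye-level figures; the abstract gives rates, not raw counts.

```python
import math

def rate_with_ci(successes, total, z=1.96):
    """Proportion with a normal-approximation (Wald) 95% confidence interval."""
    p = successes / total
    half = z * math.sqrt(p * (1 - p) / total)
    return p, max(0.0, p - half), min(1.0, p + half)

# Hypothetical confusion-matrix counts chosen to reproduce the reported
# eye-level rates (the abstract reports rates, not raw counts):
tp, fn = 138, 4    # referable eyes flagged / missed   -> 97.18% sensitivity
tn, fp = 777, 52   # non-referable cleared / flagged   -> 93.73% specificity

sens, s_lo, s_hi = rate_with_ci(tp, tp + fn)
spec, sp_lo, sp_hi = rate_with_ci(tn, tn + fp)
print(f"sensitivity {sens:.2%} (95% CI {s_lo:.2%}-{s_hi:.2%})")
print(f"specificity {spec:.2%} (95% CI {sp_lo:.2%}-{sp_hi:.2%})")
```

The Wald interval is the simplest choice; the study may well have used a different interval (e.g. Wilson or Clopper-Pearson), which matters for proportions close to 1.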
Affiliation(s)
- A Piatti: Eye-Unit, Primary Care, ASL TO5, Regione Piemonte, Italy
- C Rui: Centervue SpA, Padova, Italy
- F Romeo: Metabolism and Diabetes Unit, ASL TO5, Regione Piemonte, Italy
- R Manti: Metabolism and Diabetes Unit, ASL TO5, Regione Piemonte, Italy
- M Doglio: Metabolism and Diabetes Unit, ASL TO5, Regione Piemonte, Italy
- E Nada: Metabolism and Diabetes Unit, ASL TO5, Regione Piemonte, Italy
- C B Giorda: Metabolism and Diabetes Unit, ASL TO5, Regione Piemonte, Italy
3
Nagel ID, Heinke A, Agnihotri AP, Yassin S, Cheng L, Camp AS, Scott NL, Kalaw FGP, Borooah S, Bartsch DUG, Mueller AJ, Mehta N, Freeman WR. Comparison of a Novel Ultra-Widefield Three-Color Scanning Laser Ophthalmoscope to Other Retinal Imaging Modalities in Chorioretinal Lesion Imaging. Transl Vis Sci Technol 2025; 14:11. PMID: 39804659. PMCID: PMC11737455. DOI: 10.1167/tvst.14.1.11.
Abstract
Purpose To compare the assessment of clinically relevant retinal and choroidal lesions, as well as optic nerve pathologies, using a novel three-wavelength ultra-widefield (UWF) scanning laser ophthalmoscope against established retinal imaging techniques for ophthalmoscopic imaging. Methods Eighty eyes with a variety of retinal and choroidal lesions were assessed at the same time point using Topcon color fundus photography (CFP) montage, Optos red/green (RG), Heidelberg SPECTRALIS MultiColor 55-color montage (MCI), and novel Optos red/green/blue (RGB). Optic nerve, retinal, or choroidal lesions in paired images were initially diagnosed based on CFP imaging. The accuracy of the imaging was then evaluated against CFP using a grading scale ranging from -1 (loss of imaging information) to +1 (gain of imaging information). Results Eighty eyes of 43 patients with 116 retinal or choroidal pathologies, as well as 59 eyes with optic nerve imaging using CFP, MCI, RG, and RGB, were included in this study. Across all subgroups, RGB provided significantly more accurate clinical imaging, with CFP as ground truth, compared with the other modalities. This held when comparing RGB with both RG (P = 0.0225) and MCI (P < 0.001) overall. Although RGB provided more accurate clinical information overall, it was inferior to RG for melanocytic choroidal lesions (P = 0.011). Conclusions RGB can be considered a useful tool to detect characteristics of central, midperipheral, and peripheral retinal lesions. For melanocytic choroidal lesions, RGB was inferior to RG, and MCI was inferior to both RG and RGB due to color changes. Translational Relevance Traditional retinal ultra-widefield imaging uses two wavelengths. Here, we evaluated three wavelengths for ultra-widefield imaging, examining the effect of new optics (basic science) on patient imaging (clinical care).
Affiliation(s)
- Ines D. Nagel: Jacobs Retina Center, Shiley Eye Institute, University of California San Diego, La Jolla, CA, USA; Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, USA; Department of Ophthalmology, University Hospital Augsburg, Augsburg, Germany
- Anna Heinke: Jacobs Retina Center, Shiley Eye Institute, University of California San Diego, La Jolla, CA, USA; Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, USA
- Akshay P. Agnihotri: Jacobs Retina Center, Shiley Eye Institute, University of California San Diego, La Jolla, CA, USA; Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, USA
- Shaden Yassin: Jacobs Retina Center, Shiley Eye Institute, University of California San Diego, La Jolla, CA, USA; Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, USA
- Lingyun Cheng: Jacobs Retina Center, Shiley Eye Institute, University of California San Diego, La Jolla, CA, USA; Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, USA
- Andrew S. Camp: Jacobs Retina Center, Shiley Eye Institute, University of California San Diego, La Jolla, CA, USA; Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, USA
- Nathan L. Scott: Jacobs Retina Center, Shiley Eye Institute, University of California San Diego, La Jolla, CA, USA; Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, USA
- Fritz Gerald P. Kalaw: Jacobs Retina Center, Shiley Eye Institute, University of California San Diego, La Jolla, CA, USA; Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, USA; Division of Ophthalmology Informatics and Data Science, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, USA
- Shyamanga Borooah: Jacobs Retina Center, Shiley Eye Institute, University of California San Diego, La Jolla, CA, USA; Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, USA
- Dirk-Uwe G. Bartsch: Jacobs Retina Center, Shiley Eye Institute, University of California San Diego, La Jolla, CA, USA; Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, USA
- Arthur J. Mueller: Department of Ophthalmology, University Hospital Augsburg, Augsburg, Germany
- Nehal Mehta: Jacobs Retina Center, Shiley Eye Institute, University of California San Diego, La Jolla, CA, USA; Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, USA
- William R. Freeman: Jacobs Retina Center, Shiley Eye Institute, University of California San Diego, La Jolla, CA, USA; Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, USA
4
Park SH, Chey JH, Heo J, Han KE, Park SW, Byon I, Kwon HJ. Diagnostic ability of confocal scanning ophthalmoscope for the detection of concurrent retinal disease in eyes with asteroid hyalosis. PLoS One 2024; 19:e0306091. PMID: 39636945. PMCID: PMC11620638. DOI: 10.1371/journal.pone.0306091.
Abstract
PURPOSE To compare the diagnostic capacity of a color fundus camera (CFC), an ultra-wide-field bicolor confocal scanning laser ophthalmoscope (BC-cSLO; OPTOS), and a true-color confocal scanning ophthalmoscope (TC-cSO; EIDON) in detecting coexisting retinal diseases in eyes with asteroid hyalosis (AH). METHODS The medical records of consecutive patients with AH who were referred to a tertiary hospital for subsequent assessment by a vitreoretinal specialist were retrospectively reviewed. Fundus images obtained simultaneously using CFC, BC-cSLO, and TC-cSO were classified into four grades based on their obscuration by asteroid bodies. The proportion of Grade 1 images (minimal obscuration) was assessed for each imaging modality. The diagnostic and screening abilities for concurrent retinal diseases were compared in terms of the accuracy and sensitivity of each device. RESULTS Among the 100 eyes with AH, 76 had coexisting retinal diseases, such as diabetic retinopathy (DR), retinal vascular occlusion, age-related macular degeneration, epiretinal membrane, and retinitis pigmentosa. TC-cSO had the highest proportion of Grade 1 images (94%, P<0.001), followed by CFC (67%) and BC-cSLO (63%). CFC and BC-cSLO exhibited a 5.3-fold higher rate of significant obscuration than TC-cSO (P<0.001, 95% confidence interval = 2.4 to 11.6). TC-cSO demonstrated the highest accuracy and sensitivity (95% and 81%, respectively) compared with CFC (89% and 43%) and BC-cSLO (89% and 39%) for all retinal diseases. BC-cSLO showed the best performance for DR diagnosis. CONCLUSIONS TC-cSO images showed minimal obscuration and a superior ability to diagnose retinal diseases accompanying AH compared with the other imaging devices. TC-cSO can be a valuable alternative screening tool for detecting retinal diseases when AH impedes fundus imaging.
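Editor's note: the "5.3-fold higher rate" with its confidence interval is a risk ratio. Below is a minimal sketch of the standard log-scale (Katz) interval; the counts are hypothetical, chosen only so the result roughly reproduces the figures reported in this abstract, and the study itself may have used a different estimator.

```python
import math

def risk_ratio_ci(a, n1, b, n2, z=1.96):
    """Risk ratio (a/n1) / (b/n2) with a log-scale 95% CI (Katz method)."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts (significantly obscured / total images):
# 64/200 for CFC and BC-cSLO combined vs. 6/100 for TC-cSO.
rr, lo, hi = risk_ratio_ci(64, 200, 6, 100)
print(f"risk ratio {rr:.1f} (95% CI {lo:.1f}-{hi:.1f})")  # prints "risk ratio 5.3 (95% CI 2.4-11.9)"
```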
Affiliation(s)
- Su Hwan Park: Department of Ophthalmology, Research Institute for Convergence of Biomedical Science and Technology, Pusan National University Yangsan Hospital, Yangsan-si, Gyeongsangnam-do, South Korea
- Ji Hyoung Chey: Department of Ophthalmology, Ulsan University Hospital, University of Ulsan College of Medicine, Ulsan, South Korea
- Jun Heo: Department of Ophthalmology, Biomedical Research Institute, Pusan National University Hospital, Pusan National University School of Medicine, Busan, South Korea
- Kwang Eon Han: Department of Ophthalmology, Research Institute for Convergence of Biomedical Science and Technology, Pusan National University Yangsan Hospital, Yangsan-si, Gyeongsangnam-do, South Korea
- Sung Who Park: Department of Ophthalmology, Biomedical Research Institute, Pusan National University Hospital, Pusan National University School of Medicine, Busan, South Korea
- Iksoo Byon: Department of Ophthalmology, Biomedical Research Institute, Pusan National University Hospital, Pusan National University School of Medicine, Busan, South Korea
- Han Jo Kwon: Department of Ophthalmology, Biomedical Research Institute, Pusan National University Hospital, Pusan National University School of Medicine, Busan, South Korea
5
Scanlon PH, Gruszka-Goh M, Javed U, Vukic A, Hapeshi J, Chave S, Galsworthy P, Vallance S, Aldington SJ. The scanning CONfoCal Ophthalmoscopy foR DIAbetic eye screening (CONCORDIA) study paper 2. Eye (Lond) 2024; 38:3547-3553. PMID: 39394368. PMCID: PMC11621414. DOI: 10.1038/s41433-024-03361-1.
Abstract
PURPOSE To determine whether the Eidon white-light 60-degree-field scanning confocal ophthalmoscope (SCO) camera is safe to use with staged mydriasis in a Diabetic Eye Screening Programme (DESP). METHODS Trial participants were recruited from people with diabetes attending appointments in the DESP or in virtual eye clinics for post-COVID delayed hospital appointments. Using staged mydriasis, SCO images were taken before the pupils were dilated and compared with two-field 45-degree mydriatic digital photography (the reference standard). Mydriatic SCO images were compared with the reference standard only if the non-mydriatic SCO images were unassessable. RESULTS 1050 patients were recruited; 35 individuals were withdrawn, the majority (18) due to an imaging protocol deviation, leaving 1015 individuals (2029 eyes). Using staged mydriasis, the sensitivity and specificity for any retinopathy were 97.5% (95% CI: 96.4-98.4%) and 82.3% (95% CI: 79.6-84.7%), respectively. The sensitivity and specificity for referable retinopathy were 92.7% (95% CI: 89.9-94.9%) and 85.4% (95% CI: 83.6-87.2%), respectively. The number of eyes unassessable with the Eidon was 85/2029 (4.2%) without mydriasis and 34/2029 (1.7%) after mydriasis; with the reference standard, 34/2029 (1.7%; not always the same images) were unassessable. CONCLUSIONS This study provides promising early results on the performance of the Eidon camera using staged mydriasis in a DESP; further evidence is needed from non-Caucasian populations and from cost-effectiveness analyses.
Affiliation(s)
- Peter H Scanlon: Gloucestershire Retinal Research Group (GRRG), Cheltenham General Hospital, Cheltenham, UK; Nuffield Department of Clinical Neuroscience, University of Oxford, Oxford, UK; University of Gloucestershire, Cheltenham, UK
- Marta Gruszka-Goh: Gloucestershire Retinal Research Group (GRRG), Cheltenham General Hospital, Cheltenham, UK; The Royal College of Ophthalmologists' National Ophthalmology Audit, London, UK
- Ushna Javed: Gloucestershire Retinal Research Group (GRRG), Cheltenham General Hospital, Cheltenham, UK
- Anthony Vukic: Gloucestershire Retinal Research Group (GRRG), Cheltenham General Hospital, Cheltenham, UK
- Julie Hapeshi: Gloucestershire Retinal Research Group (GRRG), Cheltenham General Hospital, Cheltenham, UK
- Steve Chave: Gloucestershire Retinal Research Group (GRRG), Cheltenham General Hospital, Cheltenham, UK
- Paul Galsworthy: Gloucestershire Retinal Research Group (GRRG), Cheltenham General Hospital, Cheltenham, UK
- Scott Vallance: Gloucestershire Retinal Research Group (GRRG), Cheltenham General Hospital, Cheltenham, UK
- Stephen J Aldington: Gloucestershire Retinal Research Group (GRRG), Cheltenham General Hospital, Cheltenham, UK
6
Ahn SJ, Kim YH. Clinical Applications and Future Directions of Smartphone Fundus Imaging. Diagnostics (Basel) 2024; 14:1395. PMID: 39001285. PMCID: PMC11240943. DOI: 10.3390/diagnostics14131395.
Abstract
The advent of smartphone fundus imaging technology has marked a significant evolution in the field of ophthalmology, offering a novel approach to the diagnosis and management of retinopathy. This review provides an overview of smartphone fundus imaging, including its clinical applications, advantages, limitations, and future directions. Traditional fundus imaging techniques are limited by their cost, portability, and accessibility, particularly in resource-limited settings. Smartphone fundus imaging emerges as a cost-effective, portable, and accessible alternative. This technology facilitates the early detection and monitoring of various retinal pathologies, including diabetic retinopathy, age-related macular degeneration, and retinal vascular disorders, thereby democratizing access to essential diagnostic services. Despite its advantages, smartphone fundus imaging faces challenges in image quality, standardization, regulatory considerations, and medicolegal issues. By addressing these limitations, this review highlights areas for future research and development to fully harness the potential of smartphone fundus imaging in enhancing patient care and visual outcomes. The integration of this technology into telemedicine is also discussed, underscoring its role in facilitating remote patient care and collaboration among physicians. Through this review, we aim to contribute to the understanding and advancement of smartphone fundus imaging as a valuable tool in ophthalmic practice, paving the way for its broader adoption and integration into medical diagnostics.
Affiliation(s)
- Seong Joon Ahn: Department of Ophthalmology, Hanyang University Hospital, Hanyang University College of Medicine, Seoul 04763, Republic of Korea
7
Veritti D, Rubinato L, Sarao V, De Nardin A, Foresti GL, Lanzetta P. Behind the mask: a critical perspective on the ethical, moral, and legal implications of AI in ophthalmology. Graefes Arch Clin Exp Ophthalmol 2024; 262:975-982. PMID: 37747539. PMCID: PMC10907411. DOI: 10.1007/s00417-023-06245-4.
Abstract
PURPOSE This narrative review aims to provide an overview of the dangers, controversial aspects, and implications of artificial intelligence (AI) use in ophthalmology and other medical-related fields. METHODS We conducted a decade-long comprehensive search (January 2013-May 2023) of both academic and grey literature, focusing on the application of AI in ophthalmology and healthcare. This search included key web-based academic databases, non-traditional sources, and targeted searches of specific organizations and institutions. We reviewed and selected documents for relevance to AI, healthcare, ethics, and guidelines, aiming for a critical analysis of ethical, moral, and legal implications of AI in healthcare. RESULTS Six main issues were identified, analyzed, and discussed. These include bias and clinical safety, cybersecurity, health data and AI algorithm ownership, the "black-box" problem, medical liability, and the risk of widening inequality in healthcare. CONCLUSION Solutions to address these issues include collecting high-quality data of the target population, incorporating stronger security measures, using explainable AI algorithms and ensemble methods, and making AI-based solutions accessible to everyone. With careful oversight and regulation, AI-based systems can be used to supplement physician decision-making and improve patient care and outcomes.
Affiliation(s)
- Daniele Veritti: Department of Medicine - Ophthalmology, University of Udine, Udine, Italy
- Leopoldo Rubinato: Department of Medicine - Ophthalmology, University of Udine, Udine, Italy
- Valentina Sarao: Department of Medicine - Ophthalmology, University of Udine, Udine, Italy; Istituto Europeo di Microchirurgia Oculare - IEMO, Udine, Italy
- Axel De Nardin: Department of Mathematics, Informatics and Physics, University of Udine, Udine, Italy
- Gian Luca Foresti: Department of Mathematics, Informatics and Physics, University of Udine, Udine, Italy
- Paolo Lanzetta: Department of Medicine - Ophthalmology, University of Udine, Udine, Italy; Istituto Europeo di Microchirurgia Oculare - IEMO, Udine, Italy
8
Cicinelli MV, Gravina S, Rutigliani C, Checchin L, La Franca L, Lattanzio R, Bandello F. Assessing Diabetic Retinopathy Staging With AI: A Comparative Analysis Between Pseudocolor and LED Imaging. Transl Vis Sci Technol 2024; 13:11. PMID: 38488432. PMCID: PMC10946690. DOI: 10.1167/tvst.13.3.11.
Abstract
Purpose To compare the diagnostic performance of an artificial intelligence (AI)-based diabetic retinopathy (DR) staging system across pseudocolor, simulated white light (SWL), and light-emitting diode (LED) camera imaging modalities. Methods A cross-sectional investigation involved patients with diabetes undergoing imaging with an iCare DRSplus confocal LED camera and an Optos confocal, ultra-widefield pseudocolor camera, with and without SWL. Macula-centered and optic nerve-centered 45 × 45-degree photographs were processed using EyeArt v2.1. Human graders established the ground truth (GT) for DR severity on dilated fundus exams. Sensitivity and Cohen's weighted kappa (wκ) were calculated. An ordinal generalized linear mixed model identified factors influencing accurate DR staging. Results The study included 362 eyes from 189 patients. The LED camera excelled in identifying sight-threatening DR stages (sensitivity = 0.83, specificity = 0.95 for proliferative DR) and had the highest agreement with the GT (wκ = 0.71). The addition of SWL to pseudocolor imaging decreased performance (sensitivity = 0.33, specificity = 0.98 for proliferative DR; wκ = 0.55). Peripheral lesions reduced the likelihood of an eye being staged in the same or a higher DR category by 80% (P < 0.001). Conclusions Pseudocolor and LED cameras, although proficient, demonstrated non-interchangeable performance, with the LED camera exhibiting superior accuracy in identifying advanced DR stages. These findings underscore the importance of implementing AI systems trained for ultra-widefield imaging and of considering the impact of peripheral lesions on correct DR staging. Translational Relevance This study underscores the need for AI-based systems specifically trained for ultra-widefield imaging in diabetic retinopathy assessment.
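Editor's note: Cohen's weighted kappa (wκ), the agreement statistic used in this abstract, discounts disagreements by their distance on the ordinal DR scale. The sketch below is not the study's code; the quadratic weighting and the toy confusion matrix are assumptions for illustration, since the abstract does not state the weighting scheme.

```python
def weighted_kappa(conf, weights="quadratic"):
    """Cohen's weighted kappa from a square confusion matrix conf, where
    conf[i][j] counts items rated category i by rater A and j by rater B."""
    k = len(conf)
    n = sum(sum(row) for row in conf)
    row_tot = [sum(row) for row in conf]
    col_tot = [sum(conf[i][j] for i in range(k)) for j in range(k)]

    def w(i, j):
        d = abs(i - j) / (k - 1)          # normalized ordinal distance
        return d * d if weights == "quadratic" else d

    # Weighted observed and chance-expected disagreement.
    observed = sum(w(i, j) * conf[i][j] / n
                   for i in range(k) for j in range(k))
    expected = sum(w(i, j) * row_tot[i] * col_tot[j] / (n * n)
                   for i in range(k) for j in range(k))
    return 1.0 - observed / expected

# Toy 3-category example with a heavy diagonal (strong agreement).
conf = [[40, 5, 0],
        [4, 30, 3],
        [0, 2, 16]]
print(f"quadratic wκ = {weighted_kappa(conf):.3f}")  # prints "quadratic wκ = 0.876"
```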
Affiliation(s)
- Maria Vittoria Cicinelli: Department of Ophthalmology, IRCCS San Raffaele Hospital, Milan, Italy; School of Medicine, Vita-Salute San Raffaele University, Milan, Italy
- Salvatore Gravina: School of Medicine, Vita-Salute San Raffaele University, Milan, Italy
- Carola Rutigliani: School of Medicine, Vita-Salute San Raffaele University, Milan, Italy
- Lisa Checchin: Department of Ophthalmology, IRCCS San Raffaele Hospital, Milan, Italy
- Francesco Bandello: Department of Ophthalmology, IRCCS San Raffaele Hospital, Milan, Italy; School of Medicine, Vita-Salute San Raffaele University, Milan, Italy
9
Fehler N, Schneider D, Hessling M. Advancement of a RGBW-LED pen for diaphanoscopic illumination with adjustable color and intensity with tests on ex-vivo porcine eyes in terms of retinal risk and correlated color temperature. Biomed Eng Lett 2024; 14:115-126. PMID: 38186954. PMCID: PMC10770004. DOI: 10.1007/s13534-023-00317-4.
Abstract
Diaphanoscopic illumination has the disadvantage that the intraocular spectrum is red-shifted due to the transmission properties of the eyewall. This red-shift can be counteracted, and the retinal risk reduced, by adjusting the spectral distribution of the illumination light. Likewise, the illumination spectrum has to be adapted to the eye color of the patient. With the further development of a red, green, blue and white light-emitting diode (RGBW-LED) diaphanoscopy pen, the intensity of each color can be varied. The functionality of the LED pen was tested on ex-vivo porcine eyes. By measuring the transmission of the sclera and choroidea, the photochemical and thermal retinal hazards and the maximum exposure time were determined according to the standard DIN EN ISO 15004-2:2007. With this RGBW-LED pen, the intraocular space can be clearly illuminated for up to 1.5 h without potential retinal damage according to DIN EN ISO 15004-2:2007. By adjusting the illumination spectrum, the red-shift can be compensated and the retinal risk reduced. By varying the LED intensities, the correlated color temperature in the eye can also be varied from a cold-white to a warm-white appearance, as comfortable for the ophthalmologist. Additionally, a simple adjustment of the illumination to the eye color of the patient is possible. Using this RGBW-LED pen, ophthalmologists can set the intraocular color appearance they prefer for specific applications, and can also adjust the illumination to the eye color of the patient, which would reduce the retinal hazard.
Affiliation(s)
- Nicole Fehler: Institute of Medical Engineering and Mechatronics, Ulm University of Applied Sciences, Ulm, Germany
- David Schneider: Institute of Medical Engineering and Mechatronics, Ulm University of Applied Sciences, Ulm, Germany
- Martin Hessling: Institute of Medical Engineering and Mechatronics, Ulm University of Applied Sciences, Ulm, Germany
10
Sarao V, Veritti D, De Nardin A, Misciagna M, Foresti G, Lanzetta P. Explainable artificial intelligence model for the detection of geographic atrophy using colour retinal photographs. BMJ Open Ophthalmol 2023; 8:e001411. PMID: 38057106. PMCID: PMC10711821. DOI: 10.1136/bmjophth-2023-001411.
Abstract
OBJECTIVE To develop and validate an explainable artificial intelligence (AI) model for detecting geographic atrophy (GA) from colour retinal photographs. METHODS AND ANALYSIS We conducted a prospective study in which colour fundus images were collected from healthy individuals and patients with retinal diseases using an automated imaging system. All images were categorised into three classes: healthy, GA and other retinal diseases, by two experienced retinologists. In parallel, an explainable learning model using class activation mapping techniques categorised each image into one of the three classes. The AI system's performance was then compared with the manual evaluations. RESULTS A total of 540 colour retinal photographs were collected and divided into 300 images for training the AI model, 120 for validation and 120 for performance testing. In distinguishing between GA and healthy eyes, the model demonstrated a sensitivity of 100%, a specificity of 97.5% and an overall diagnostic accuracy of 98.4%. The areas under the receiver operating characteristic (AUC-ROC, 0.988) and precision-recall (AUC-PR, 0.952) curves reinforced the model's robust performance. When differentiating GA from other retinal conditions, the model preserved a diagnostic accuracy of 96.8%, with a precision of 90.9% and a recall of 100%, yielding an F1-score of 0.952. The AUC-ROC and AUC-PR scores were 0.975 and 0.909, respectively. CONCLUSIONS Our explainable AI model exhibits excellent performance in detecting GA from colour retinal images. With its high sensitivity, specificity and overall diagnostic accuracy, the AI model stands as a powerful tool for the automated diagnosis of GA.
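Editor's note: the F1-score reported here is the harmonic mean of precision and recall. A small sketch follows; the counts are hypothetical, chosen only so the rates reproduce the reported precision (90.9%) and recall (100%), since the paper's abstract gives rates rather than raw counts.

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall, and F1 from true positives, false positives,
    and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts: 40 true positives, 4 false positives, 0 false negatives.
p, r, f1 = precision_recall_f1(tp=40, fp=4, fn=0)
print(f"precision={p:.1%} recall={r:.0%} F1={f1:.3f}")  # prints "precision=90.9% recall=100% F1=0.952"
```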
Affiliation(s)
- Valentina Sarao: Department of Medicine-Ophthalmology, University of Udine, Udine, Italy; Istituto Europeo di Microchirurgia Oculare (IEMO), Udine, Italy
- Daniele Veritti: Department of Medicine-Ophthalmology, University of Udine, Udine, Italy
- Axel De Nardin: Department of Mathematics, Computer Science and Physics, University of Udine, Udine, Italy
- Micaela Misciagna: Department of Medicine-Ophthalmology, University of Udine, Udine, Italy
- Gianluca Foresti: Department of Mathematics, Computer Science and Physics, University of Udine, Udine, Italy
- Paolo Lanzetta: Department of Medicine-Ophthalmology, University of Udine, Udine, Italy; Istituto Europeo di Microchirurgia Oculare (IEMO), Udine, Italy
11
Pfau M, Künzel SH, Pfau K, Schmitz-Valckenberg S, Fleckenstein M, Holz FG. Multimodal imaging and deep learning in geographic atrophy secondary to age-related macular degeneration. Acta Ophthalmol 2023; 101:881-890. [PMID: 37933610] [PMCID: PMC11044135] [DOI: 10.1111/aos.15796]
Abstract
Geographic atrophy (GA) secondary to age-related macular degeneration is among the most common causes of irreversible vision loss in industrialized countries. Recently, two therapies have been approved by the US FDA. However, because their treatment effect is primarily a relative slowing of disease progression, the treatment response may not be readily apparent at the individual level. Clinical decision-making may therefore have to rely on quantifying the slope of GA progression before and during treatment. A panel of imaging modalities and artificial intelligence (AI)-based algorithms is available for such quantification. This article provides a comprehensive overview of the fundamentals of GA imaging, the procedures for diagnosis and classification using these images, and the emerging role of AI algorithms in automatically deriving diagnostic and prognostic insights from imaging data.
Affiliation(s)
- Maximilian Pfau: Institute of Molecular and Clinical Ophthalmology Basel, Basel, Switzerland; Department of Ophthalmology, University of Basel, Basel, Switzerland
- Kristina Pfau: Institute of Molecular and Clinical Ophthalmology Basel, Basel, Switzerland; Department of Ophthalmology, University of Basel, Basel, Switzerland; Department of Ophthalmology, University of Bonn, Bonn, Germany
- Steffen Schmitz-Valckenberg: Department of Ophthalmology, University of Bonn, Bonn, Germany; John A. Moran Eye Center, Department of Ophthalmology & Visual Sciences, University of Utah, Salt Lake City, Utah, USA
- Monika Fleckenstein: John A. Moran Eye Center, Department of Ophthalmology & Visual Sciences, University of Utah, Salt Lake City, Utah, USA
- Frank G. Holz: Department of Ophthalmology, University of Bonn, Bonn, Germany
12
Fasoula NA, Xie Y, Katsouli N, Reidl M, Kallmayer MA, Eckstein HH, Ntziachristos V, Hadjileontiadis L, Avgerinos DV, Briasoulis A, Siasos G, Hosseini K, Doulamis I, Kampaktsis PN, Karlas A. Clinical and Translational Imaging and Sensing of Diabetic Microangiopathy: A Narrative Review. J Cardiovasc Dev Dis 2023; 10:383. [PMID: 37754812] [PMCID: PMC10531807] [DOI: 10.3390/jcdd10090383]
Abstract
Microvascular changes in diabetes affect the function of several critical organs, including the kidneys, heart, brain, eyes, and skin. Detecting such changes early enough to take appropriate action makes the development of suitable tools and techniques imperative. To this end, several sensing and imaging techniques have been developed or employed to assess microangiopathy in patients with diabetes. Herein, we present these techniques, provide insights into their principles of operation, and discuss the characteristics that make them appropriate for this use. Finally, beyond already established techniques, we present novel ones with great translational potential, such as optoacoustic technologies, which are expected to enter clinical practice in the foreseeable future.
Affiliation(s)
- Nikolina-Alexia Fasoula: Institute of Biological and Medical Imaging, Helmholtz Zentrum München, 85764 Neuherberg, Germany; Chair of Biological Imaging at the Central Institute for Translational Cancer Research (TranslaTUM), School of Medicine, Technical University of Munich, 81675 Munich, Germany
- Yi Xie: Institute of Biological and Medical Imaging, Helmholtz Zentrum München, 85764 Neuherberg, Germany; Chair of Biological Imaging at the Central Institute for Translational Cancer Research (TranslaTUM), School of Medicine, Technical University of Munich, 81675 Munich, Germany
- Nikoletta Katsouli: Institute of Biological and Medical Imaging, Helmholtz Zentrum München, 85764 Neuherberg, Germany; Chair of Biological Imaging at the Central Institute for Translational Cancer Research (TranslaTUM), School of Medicine, Technical University of Munich, 81675 Munich, Germany
- Mario Reidl: Institute of Biological and Medical Imaging, Helmholtz Zentrum München, 85764 Neuherberg, Germany; Chair of Biological Imaging at the Central Institute for Translational Cancer Research (TranslaTUM), School of Medicine, Technical University of Munich, 81675 Munich, Germany
- Michael A. Kallmayer: Department for Vascular and Endovascular Surgery, Klinikum rechts der Isar, Technical University of Munich (TUM), 81675 Munich, Germany
- Hans-Henning Eckstein: Department for Vascular and Endovascular Surgery, Klinikum rechts der Isar, Technical University of Munich (TUM), 81675 Munich, Germany
- Vasilis Ntziachristos: Institute of Biological and Medical Imaging, Helmholtz Zentrum München, 85764 Neuherberg, Germany; Chair of Biological Imaging at the Central Institute for Translational Cancer Research (TranslaTUM), School of Medicine, Technical University of Munich, 81675 Munich, Germany; DZHK (German Centre for Cardiovascular Research), Partner Site Munich Heart Alliance, 80336 Munich, Germany
- Leontios Hadjileontiadis: Department of Biomedical Engineering, Healthcare Engineering Innovation Center (HEIC), Khalifa University, Abu Dhabi P.O. Box 127788, United Arab Emirates; Department of Electrical and Computer Engineering, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
- Alexandros Briasoulis: Aleksandra Hospital, National and Kapodistrian University of Athens Medical School, 11527 Athens, Greece
- Gerasimos Siasos: Sotiria Hospital, National and Kapodistrian University of Athens Medical School, 11527 Athens, Greece
- Kaveh Hosseini: Cardiac Primary Prevention Research Center, Cardiovascular Disease Research Institute, Tehran University of Medical Sciences, Tehran 1411713138, Iran
- Ilias Doulamis: Department of Surgery, The Johns Hopkins Hospital, School of Medicine, Baltimore, MD 21287, USA
- Angelos Karlas: Institute of Biological and Medical Imaging, Helmholtz Zentrum München, 85764 Neuherberg, Germany; Chair of Biological Imaging at the Central Institute for Translational Cancer Research (TranslaTUM), School of Medicine, Technical University of Munich, 81675 Munich, Germany; Department for Vascular and Endovascular Surgery, Klinikum rechts der Isar, Technical University of Munich (TUM), 81675 Munich, Germany; DZHK (German Centre for Cardiovascular Research), Partner Site Munich Heart Alliance, 80336 Munich, Germany
13
Fehler N, Hessling M. Determination of Correlated Color Temperature in Ex Vivo Porcine Eyes during Intraocular Illumination. J Clin Med 2023; 12:3034. [PMID: 37109369] [PMCID: PMC10143230] [DOI: 10.3390/jcm12083034]
Abstract
(1) Background: In ophthalmic surgery, white light is mostly applied to illuminate the intraocular space, and ophthalmologists are comfortable working with it. Diaphanoscopic illumination changes the spectral composition of the light, resulting in a change in the correlated color temperature (CCT) of the intraocular illumination. This color change makes it difficult for surgeons to recognize structures in the eye. CCT during intraocular illumination has not been measured before, and the aim of this study is to perform such measurements. (2) Methods: CCT was measured inside ex vivo porcine eyes during diaphanoscopic illumination and endoillumination using a current ophthalmic illumination system with a detection fiber inside the eye. By applying pressure on the eye with the diaphanoscopic fiber, the dependency of CCT on pressure was examined. (3) Results: The intraocular CCT values during endoillumination were 3923 K and 5407 K for the halogen and xenon lamps, respectively. During diaphanoscopic illumination, a strong unwanted red shift was observed, resulting in 2199 K and 2675 K for the xenon and halogen lamps, respectively. The CCT did not differ considerably across the applied pressures. (4) Conclusions: This red shift should be compensated for in the development of new illumination systems, since surgeons are used to white-light illumination, which also simplifies the identification of retinal structures.
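The study derives CCT from measured intraocular spectra. As a generic illustration of how a correlated colour temperature can be estimated once CIE 1931 chromaticity coordinates are known, McCamy's approximation is a common shortcut (this is a standard textbook formula, not necessarily the method the authors used):

```python
def mccamy_cct(x: float, y: float) -> float:
    """McCamy's approximation: CCT in kelvin from CIE 1931 (x, y) chromaticity."""
    n = (x - 0.3320) / (0.1858 - y)  # inverse line slope relative to the epicenter
    return 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33

# Sanity check: the D65 white point should come out near 6504 K
print(round(mccamy_cct(0.3127, 0.3290)))
```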
Affiliation(s)
- Nicole Fehler: Institute of Medical Engineering and Mechatronics, Ulm University of Applied Sciences, 89081 Ulm, Germany
14
Artificial Intelligence for Diabetic Retinopathy Screening Using Color Retinal Photographs: From Development to Deployment. Ophthalmol Ther 2023; 12:1419-1437. [PMID: 36862308] [PMCID: PMC10164194] [DOI: 10.1007/s40123-023-00691-3]
Abstract
Diabetic retinopathy (DR), a leading cause of preventable blindness, is expected to remain a growing health burden worldwide. Screening to detect early sight-threatening lesions of DR can reduce the burden of vision loss; nevertheless, the process requires intensive manual labor and extensive resources to accommodate the increasing number of patients with diabetes. Artificial intelligence (AI) has been shown to be an effective tool that can potentially lower the burden of DR screening and vision loss. In this article, we review the use of AI for DR screening on color retinal photographs across the phases of application, from development to deployment. Early studies of machine learning (ML)-based algorithms using feature extraction to detect DR achieved high sensitivity but relatively lower specificity. Robust sensitivity and specificity were achieved with the application of deep learning (DL), although ML is still used in some tasks. Most algorithms used public datasets for retrospective validation during development, which requires a large number of photographs. Large prospective clinical validation studies led to the approval of DL for autonomous screening of DR, although a semi-autonomous approach may be preferable in some real-world settings. There have been few reports on real-world implementations of DL for DR screening. AI may improve some real-world indicators of eye care in DR, such as increased screening uptake and referral adherence, but this has not been proven. Challenges in deployment include workflow issues, such as mydriasis to reduce the number of ungradable cases; technical issues, such as integration into electronic health record systems and existing camera systems; ethical issues, such as data privacy and security; acceptance by personnel and patients; and health-economic issues, such as the need for country-specific health economic evaluations of AI. The deployment of AI for DR screening should follow the governance model for AI in healthcare, which outlines four main components: fairness, transparency, trustworthiness, and accountability.
15
Fantaguzzi F, Servillo A, Sacconi R, Tombolini B, Bandello F, Querques G. Comparison of peripheral extension, acquisition time, and image chromaticity of Optos, Clarus, and EIDON systems. Graefes Arch Clin Exp Ophthalmol 2022; 261:1289-1297. [PMID: 36456861] [DOI: 10.1007/s00417-022-05923-z]
Abstract
PURPOSE To evaluate differences in acquisition time, peripheral extension, and chromaticity between three commercially available ultra-wide-field (UWF) fundus cameras. METHODS Patients were prospectively enrolled from 07/2021 to 11/2021 and underwent fundus photography with the following scanning protocols: (1) single shot with Silverstone (Optos, California), (2) two-shot montage with Clarus 500 (Carl Zeiss, Dublin, CA), and (3) three-shot montage with iCare EIDON FA with UWF module (CenterVue Spa, a company of iCare Finland Oy; Vantaa, Finland). Acquisition time was calculated as the interval between the beginning and the end of the acquisition. Peripheral extension was quantified as the average ratio between the total retinal pixel area and the optic nerve head (ONH) pixel area. The average chromaticity of all pixels in the red-green-blue (RGB) space was calculated. RESULTS Twenty-three eyes of 13 prospectively enrolled healthy controls were included in the study. Optos Silverstone had a higher total retina area/ONH area ratio (509.1 [480.9;559.3]) than Zeiss Clarus (442.0 [431.9;510.5], p = 0.02) and iCare EIDON (369.7 [345.3;387.8], p < 0.0001). Silverstone demonstrated the shortest acquisition time (median [interquartile range]: 32 [20;58.5] s) compared with Zeiss Clarus (42 [28.5;53.5] s, p = 0.6733) and iCare EIDON (72 [68.5;78] s, p = 0.0003). iCare EIDON demonstrated the lowest variability in acquisition time (9.5 s), compared with Zeiss Clarus (25 s) and Optos Silverstone (38.5 s). A statistically significant difference was found in the RGB distribution between each of the three devices (p < 0.001). iCare EIDON demonstrated an average barycenter position (RGB = [0.412, 0.314, 0.275]) that represented the best color balance of the image. Zeiss Clarus had a noticeable red shift at the expense of the blue and green channels (RGB = [0.515, 0.294, 0.191]). Optos Silverstone showed an absence of the blue channel (RGB = [0.621, 0.372, 0.007]), resulting in distorted image color. CONCLUSION Optos Silverstone and Zeiss Clarus required less time than iCare EIDON to acquire images of comparable size and captured larger areas of the retina. iCare EIDON provided more color-balanced retinal images with greater richness of color content than the other two devices.
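The chromaticity comparison above reduces each image to mean channel proportions that sum to one. A minimal sketch of how such an RGB barycenter can be computed from an image array (synthetic example data, not the study's pipeline):

```python
import numpy as np

def rgb_barycenter(image: np.ndarray) -> np.ndarray:
    """Normalized RGB proportions of an H x W x 3 image; components sum to 1."""
    totals = image.reshape(-1, 3).astype(float).sum(axis=0)
    return totals / totals.sum()

# Synthetic reddish image: the red proportion dominates the barycenter
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[..., 0] = 200  # R
img[..., 1] = 100  # G
img[..., 2] = 50   # B
print(rgb_barycenter(img))  # proportions ≈ [0.571, 0.286, 0.143]
```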
Affiliation(s)
- Federico Fantaguzzi: School of Medicine, Vita-Salute San Raffaele University, Via Olgettina 60, 20132, Milan, Italy; Division of Head and Neck, Ophthalmology Unit, IRCCS Ospedale San Raffaele, Via Olgettina 60, 20132, Milan, Italy
- Andrea Servillo: School of Medicine, Vita-Salute San Raffaele University, Via Olgettina 60, 20132, Milan, Italy; Division of Head and Neck, Ophthalmology Unit, IRCCS Ospedale San Raffaele, Via Olgettina 60, 20132, Milan, Italy
- Riccardo Sacconi: School of Medicine, Vita-Salute San Raffaele University, Via Olgettina 60, 20132, Milan, Italy; Division of Head and Neck, Ophthalmology Unit, IRCCS Ospedale San Raffaele, Via Olgettina 60, 20132, Milan, Italy
- Beatrice Tombolini: School of Medicine, Vita-Salute San Raffaele University, Via Olgettina 60, 20132, Milan, Italy; Division of Head and Neck, Ophthalmology Unit, IRCCS Ospedale San Raffaele, Via Olgettina 60, 20132, Milan, Italy
- Francesco Bandello: School of Medicine, Vita-Salute San Raffaele University, Via Olgettina 60, 20132, Milan, Italy; Division of Head and Neck, Ophthalmology Unit, IRCCS Ospedale San Raffaele, Via Olgettina 60, 20132, Milan, Italy
- Giuseppe Querques: School of Medicine, Vita-Salute San Raffaele University, Via Olgettina 60, 20132, Milan, Italy; Division of Head and Neck, Ophthalmology Unit, IRCCS Ospedale San Raffaele, Via Olgettina 60, 20132, Milan, Italy; Department of Ophthalmology, University Vita-Salute, IRCCS San Raffaele, Milan, Italy
16
Light color efficiency-balanced trans-palpebral illumination for widefield fundus photography of the retina and choroid. Sci Rep 2022; 12:13850. [PMID: 35974053] [PMCID: PMC9381777] [DOI: 10.1038/s41598-022-18061-7]
Abstract
A wide-field fundus camera that can selectively evaluate the retina and choroid is desirable for better detection and treatment evaluation of eye diseases. Trans-palpebral illumination has been demonstrated for wide-field fundus photography, but its application to true-color retinal imaging is challenging because the light efficiency delivered through the eyelid and sclera is highly wavelength dependent. This study tests the feasibility of true-color retinal imaging using efficiency-balanced visible-light illumination and validates multispectral imaging (MSI) of the retina and choroid. Light-emitting diodes (LEDs) at 530 nm, 625 nm, 780 nm and 970 nm were used to quantitatively evaluate the spectral efficiency of the trans-palpebral illumination. Compared with 530 nm illumination, the 625 nm, 780 nm and 970 nm light efficiencies were 30.25, 523.05, and 1238.35 times higher, respectively. Efficiency-balanced control of the 530 nm and 625 nm illumination can be used to produce a true-color retinal image with contrast enhancement. The 780 nm image enhances the visibility of the choroidal vasculature, and the 970 nm image is dominated by large veins in the choroid. Without the need for pharmacological pupillary dilation, a 140° eye-angle field of view (FOV) is demonstrated in a snapshot fundus image. In coordination with a fixation target, the FOV can readily be expanded past the equator of the eye to visualize vortex ampullas.
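The reported efficiency ratios imply how the channels could be balanced: to deliver comparable effective illumination, each source is driven at a power inversely proportional to its transmission efficiency. A toy sketch using the ratios from the abstract (the inverse-scaling scheme is an illustration, not the authors' exact control law):

```python
# Relative trans-palpebral light efficiency, normalized to the 530 nm channel
# (ratios reported in the abstract)
efficiency = {530: 1.0, 625: 30.25, 780: 523.05, 970: 1238.35}

# Drive each LED at a power inversely proportional to its efficiency,
# taking the 530 nm channel as the reference
drive = {wl: efficiency[530] / eff for wl, eff in efficiency.items()}

for wl, p in sorted(drive.items()):
    print(f"{wl} nm: relative drive power {p:.5f}")
```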
17
Li Z, Guo C, Nie D, Lin D, Cui T, Zhu Y, Chen C, Zhao L, Zhang X, Dongye M, Wang D, Xu F, Jin C, Zhang P, Han Y, Yan P, Lin H. Automated detection of retinal exudates and drusen in ultra-widefield fundus images based on deep learning. Eye (Lond) 2022; 36:1681-1686. [PMID: 34345030] [PMCID: PMC9307785] [DOI: 10.1038/s41433-021-01715-7]
Abstract
BACKGROUND Retinal exudates and/or drusen (RED) can be signs of many fundus diseases that can lead to irreversible vision loss. Early detection and treatment of these diseases are critical for improving vision prognosis. However, manual RED screening on a large scale is time-consuming and labour-intensive. Here, we aim to develop and assess a deep learning system for automated detection of RED using ultra-widefield fundus (UWF) images. METHODS A total of 26,409 UWF images from 14,994 subjects were used to develop and evaluate the deep learning system. The Zhongshan Ophthalmic Center (ZOC) dataset was selected to compare the performance of the system to that of retina specialists in RED detection. The saliency map visualization technique was used to understand which areas in the UWF image had the most influence on our deep learning system when detecting RED. RESULTS The system for RED detection achieved areas under the receiver operating characteristic curve of 0.994 (95% confidence interval [CI]: 0.991-0.996), 0.972 (95% CI: 0.957-0.984), and 0.988 (95% CI: 0.983-0.992) in three independent datasets. The performance of the system in the ZOC dataset was comparable to that of an experienced retina specialist. Regions of RED were highlighted by saliency maps in UWF images. CONCLUSIONS Our deep learning system is reliable in the automated detection of RED in UWF images. As a screening tool, our system may promote the early diagnosis and management of RED-related fundus diseases.
Affiliation(s)
- Zhongwen Li: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Chong Guo: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Danyao Nie: Shenzhen Eye Hospital, Shenzhen Key Laboratory of Ophthalmology, Affiliated Shenzhen Eye Hospital of Jinan University, Shenzhen, China
- Duoru Lin: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Tingxin Cui: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Yi Zhu: Department of Molecular and Cellular Pharmacology, University of Miami Miller School of Medicine, Miami, Florida, USA
- Chuan Chen: Sylvester Comprehensive Cancer Center, University of Miami Miller School of Medicine, Miami, Florida, USA
- Lanqin Zhao: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Xulin Zhang: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Meimei Dongye: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Dongni Wang: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Fabao Xu: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Chenjin Jin: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Ping Zhang: Xudong Ophthalmic Hospital, Inner Mongolia, China
- Yu Han: EYE & ENT Hospital of Fudan University, Shanghai, China
- Pisong Yan: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Haotian Lin: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China; Center for Precision Medicine, Sun Yat-sen University, Guangzhou, China
18
Yoo TK, Ryu IH, Kim JK, Lee IS, Kim HK. A deep learning approach for detection of shallow anterior chamber depth based on the hidden features of fundus photographs. Comput Methods Programs Biomed 2022; 219:106735. [PMID: 35305492] [DOI: 10.1016/j.cmpb.2022.106735]
Abstract
BACKGROUND AND OBJECTIVES Patients with angle-closure glaucoma (ACG) are asymptomatic until they experience a painful attack. A shallow anterior chamber depth (ACD) is considered a significant risk factor for ACG. We propose a deep learning approach to detect shallow ACD using fundus photographs and to identify the hidden features of shallow ACD. METHODS This retrospective study assigned healthy subjects to the training (n = 1188 eyes) and test (n = 594) datasets (prospective validation design). We used a deep learning approach to estimate ACD and built a classification model to identify eyes with a shallow ACD. The proposed method, comprising subtraction of the input and output images of CycleGAN and a thresholding algorithm, was adopted to visualize the characteristic features of fundus photographs with a shallow ACD. RESULTS The deep learning model integrating fundus photographs and clinical variables achieved areas under the receiver operating characteristic curve of 0.978 (95% confidence interval [CI], 0.963-0.988) for an ACD ≤ 2.60 mm and 0.895 (95% CI, 0.868-0.919) for an ACD ≤ 2.80 mm, outperforming the regression model using only clinical variables. However, the difference between shallow and deep ACD classes on fundus photographs was difficult to detect with the naked eye, and we were unable to identify the features of shallow ACD using Grad-CAM. The CycleGAN-based feature images showed that the areas around the macula and optic disk contributed significantly to the classification of fundus photographs with a shallow ACD. CONCLUSIONS We demonstrated the feasibility of a novel deep learning model to detect a shallow ACD as a screening tool for ACG using fundus photographs. The CycleGAN-based feature map revealed hidden characteristic features of shallow ACD that were previously undetectable by conventional techniques and ophthalmologists. This framework will facilitate the early detection of shallow ACD, helping to avoid overlooking the risks associated with ACG.
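The visualization step described, subtracting the CycleGAN input from its output and thresholding the residual, can be sketched generically with NumPy (synthetic arrays stand in for the model's images; the threshold value is an assumption):

```python
import numpy as np

def feature_map(input_img: np.ndarray, output_img: np.ndarray,
                threshold: float = 0.1) -> np.ndarray:
    """Binary mask of the regions the image-to-image model changed most."""
    diff = np.abs(output_img.astype(float) - input_img.astype(float))
    return (diff > threshold).astype(np.uint8)

# Synthetic example: a small patch altered by a (hypothetical) model
inp = np.zeros((8, 8))
out = inp.copy()
out[2:4, 2:4] = 0.5  # region modified by the translation
mask = feature_map(inp, out)
print(mask.sum())  # → 4 highlighted pixels
```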
Affiliation(s)
- Tae Keun Yoo: B&VIIT Eye Center, Seoul, South Korea; Department of Ophthalmology, Aerospace Medical Center, Republic of Korea Air Force, Cheongju, South Korea
- Ik Hee Ryu: B&VIIT Eye Center, Seoul, South Korea; VISUWORKS, Seoul, South Korea
- Jin Kuk Kim: B&VIIT Eye Center, Seoul, South Korea; VISUWORKS, Seoul, South Korea
- Hong Kyu Kim: Department of Ophthalmology, Dankook University Hospital, Dankook University College of Medicine, Cheonan, South Korea
19
Fogel-Levin M, Sadda SR, Rosenfeld PJ, Waheed N, Querques G, Freund KB, Sarraf D. Advanced retinal imaging and applications for clinical practice: A consensus review. Surv Ophthalmol 2022; 67:1373-1390. [DOI: 10.1016/j.survophthal.2022.02.004]
20
Cruz SFSD, Gauch IR, Cruz MFSD, Araújo ACMD, Cruz NFSD, Bichara CNC. Ultra-wide field imaging for ophthalmological evaluation of pregnant women with positive serology for toxoplasmosis. Rev Bras Oftalmol 2021. [DOI: 10.37039/1982.8551.20210056]
21
Daich Varela M, Esener B, Hashem SA, Cabral de Guimaraes TA, Georgiou M, Michaelides M. Structural evaluation in inherited retinal diseases. Br J Ophthalmol 2021; 105:1623-1631. [PMID: 33980508] [PMCID: PMC8639906] [DOI: 10.1136/bjophthalmol-2021-319228]
Abstract
Ophthalmic genetics is a field that has evolved rapidly over the last decade, mainly due to the flourishing of translational medicine for inherited retinal diseases (IRD). In this review, we address the different methods by which retinal structure can be objectively and accurately assessed in IRD. We review standard-of-care imaging for these patients: colour fundus photography, fundus autofluorescence imaging and optical coherence tomography (OCT), as well as higher-resolution and/or newer technologies including OCT angiography, adaptive optics imaging, fundus imaging using a range of wavelengths, magnetic resonance imaging, laser speckle flowgraphy and retinal oximetry, illustrating their utility using paradigm genotypes with ongoing therapeutic efforts/trials.
Affiliation(s)
- Malena Daich Varela: Moorfields Eye Hospital City Road Campus, London, UK; UCL Institute of Ophthalmology, University College London, London, UK
- Burak Esener: Department of Ophthalmology, Inonu University School of Medicine, Malatya, Turkey
- Shaima A Hashem: Moorfields Eye Hospital City Road Campus, London, UK; UCL Institute of Ophthalmology, University College London, London, UK
- Michalis Georgiou: Moorfields Eye Hospital City Road Campus, London, UK; UCL Institute of Ophthalmology, University College London, London, UK
- Michel Michaelides: Moorfields Eye Hospital City Road Campus, London, UK; UCL Institute of Ophthalmology, University College London, London, UK
22
Wongchaisuwat N, Trinavarat A, Rodanant N, Thoongsuwan S, Phasukkijwatana N, Prakhunhungsit S, Preechasuk L, Wongchaisuwat P. In-Person Verification of Deep Learning Algorithm for Diabetic Retinopathy Screening Using Different Techniques Across Fundus Image Devices. Transl Vis Sci Technol 2021; 10:17. [PMID: 34767624] [PMCID: PMC8590162] [DOI: 10.1167/tvst.10.13.17]
Abstract
Purpose To evaluate the clinical performance of an automated diabetic retinopathy (DR) screening model to detect referable cases at Siriraj Hospital, Bangkok, Thailand. Methods A retrospective review of two sets of fundus photographs (Eidon and Nidek) was undertaken. The images were classified by DR staging prior to the development of a DR screening model. In a prospective cross-sectional enrollment of patients with diabetes, automated detection of referable DR was compared with the results of the gold standard, a dilated fundus examination. Results The study analyzed 2533 Nidek fundus images and 1989 Eidon images. The sensitivities calculated for the Nidek and Eidon images were 0.93 and 0.88 and the specificities were 0.91 and 0.85, respectively. In a clinical verification phase using 982 Nidek and 674 Eidon photographs, the calculated sensitivities and specificities were 0.86 and 0.92 for Nidek along with 0.92 and 0.84 for Eidon, respectively. The 60°-field images from the Eidon yielded a more desirable performance in differentiating referable DR than did the corresponding images from the Nidek. Conclusions A conventional fundus examination requires intense healthcare resources. It is time-consuming and possibly leads to unavoidable human errors. The deep learning algorithm for the detection of referable DR exhibited a favorable performance and is a promising alternative for DR screening. However, variations in the color and pixels of photographs can cause differences in sensitivity and specificity. The image angle and poor quality of fundus photographs were the main limitations of the automated method. Translational Relevance The deep learning algorithm, developed from basic research in image processing, was applied to detect referable DR in a real-world clinical care setting.
Affiliation(s)
- Nida Wongchaisuwat, Department of Ophthalmology, Faculty of Medicine, Siriraj Hospital, Mahidol University, Bangkok, Thailand
- Adisak Trinavarat, Department of Ophthalmology, Faculty of Medicine, Siriraj Hospital, Mahidol University, Bangkok, Thailand
- Nuttawut Rodanant, Department of Ophthalmology, Faculty of Medicine, Siriraj Hospital, Mahidol University, Bangkok, Thailand
- Somanus Thoongsuwan, Department of Ophthalmology, Faculty of Medicine, Siriraj Hospital, Mahidol University, Bangkok, Thailand
- Nopasak Phasukkijwatana, Department of Ophthalmology, Faculty of Medicine, Siriraj Hospital, Mahidol University, Bangkok, Thailand
- Supalert Prakhunhungsit, Department of Ophthalmology, Faculty of Medicine, Siriraj Hospital, Mahidol University, Bangkok, Thailand
- Lukana Preechasuk, Siriraj Diabetes Center of Excellence, Faculty of Medicine, Siriraj Hospital, Mahidol University, Bangkok, Thailand
- Papis Wongchaisuwat, Department of Industrial Engineering, Kasetsart University, Bangkok, Thailand
23
Sivaraman A, Nagarajan S, Vadivel S, Dutt S, Tiwari P, Narayana S, Rao DP. A Novel, Smartphone-Based, Teleophthalmology-Enabled, Widefield Fundus Imaging Device With an Autocapture Algorithm. Transl Vis Sci Technol 2021; 10:21. [PMID: 34661624 PMCID: PMC8525841 DOI: 10.1167/tvst.10.12.21] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2021] [Accepted: 08/06/2021] [Indexed: 12/13/2022] Open
Abstract
Purpose Widefield imaging can detect signs of retinal pathology extending beyond the posterior pole and is currently moving to the forefront of posterior segment imaging. We report a novel, smartphone-based, telemedicine-enabled, mydriatic, widefield retinal imaging device with autofocus and autocapture capabilities to be used by non-specialist operators. Methods The Remidio Vistaro uses an annular illumination design without cross-polarizers to eliminate Purkinje reflexes. The measured resolution using the US Air Force target test was 64 line pairs (lp)/mm in the center, 57 lp/mm in the middle, and 45 lp/mm in the periphery of a single-shot retinal image. An autocapture algorithm was developed to capture images automatically upon reaching the correct working distance. The field of view (FOV) was validated using both model and real eyes. A pilot study was conducted to objectively assess image quality. The FOVs of montaged images from the Vistaro were compared with regulatory-approved widefield and ultra-widefield devices. Results The FOV of the Vistaro was found to be approximately 65° in one shot. Automatic image capture was achieved in 80% of patient examinations within an average of 10 to 15 seconds. Consensus grading of image quality among three graders showed that 91.6% of the images were clinically useful. A two-field montage on the Vistaro was shown to exceed the cumulative FOV of a seven-field Early Treatment Diabetic Retinopathy Study image. Conclusions A novel, smartphone-based, portable, mydriatic, widefield imaging device can view the retina beyond the posterior pole with a FOV of 65° in one shot. Translational Relevance Smartphone-based widefield imaging can be widely used to screen for retinal pathologies beyond the posterior pole.
Affiliation(s)
- Anand Sivaraman, Research & Development, Remidio Innovative Solutions Pvt. Ltd., Bangalore, Karnataka, India
- Sivasundara Vadivel, Research & Development, Remidio Innovative Solutions Pvt. Ltd., Bangalore, Karnataka, India
- Sreetama Dutt, Research & Development, Remidio Innovative Solutions Pvt. Ltd., Bangalore, Karnataka, India
- Priyamvada Tiwari, Research & Development, Remidio Innovative Solutions Pvt. Ltd., Bangalore, Karnataka, India
- Srikanth Narayana, Department of Eye and Retinal Diseases, Diacon Hospital, Bangalore, Karnataka, India
24
Sarao V, Veritti D, Lanzetta P. Automated diabetic retinopathy detection with two different retinal imaging devices using artificial intelligence: a comparison study. Graefes Arch Clin Exp Ophthalmol 2020; 258:2647-2654. [DOI: 10.1007/s00417-020-04853-y] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2020] [Revised: 06/12/2020] [Accepted: 07/14/2020] [Indexed: 12/14/2022] Open
25
Borrelli E, Querques L, Lattanzio R, Cavalleri M, Grazioli Moretti A, Di Biase C, Signorino A, Gelormini F, Sacconi R, Bandello F, Querques G. Nonmydriatic widefield retinal imaging with an automatic white LED confocal imaging system compared with dilated ophthalmoscopy in screening for diabetic retinopathy. Acta Diabetol 2020; 57:1043-1047. [PMID: 32246268 DOI: 10.1007/s00592-020-01520-w] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/10/2020] [Accepted: 03/17/2020] [Indexed: 12/28/2022]
Abstract
PURPOSE To compare nonmydriatic montage widefield images with dilated fundus ophthalmoscopy for determining diabetic retinopathy (DR) severity. MATERIALS AND METHODS In this prospective, observational, cross-sectional study, patients with a previous diagnosis of diabetes and without a history of diabetes-associated ocular disease were screened for DR. Montage widefield imaging was obtained with a system that combines confocal technology with white light-emitting diode (LED) illumination (DRSplus, Centervue, Padua, Italy). Dilated fundus examination was performed by a retina specialist. RESULTS Thirty-seven eyes (20 patients, 8 females) were included in the analysis. The mean age of the patients enrolled was 58.0 ± 11.6 years (range, 31-80 years). The level of DR identified on montage widefield images agreed exactly with indirect ophthalmoscopy in 97.3% (36) of eyes and was within 1 step in 100% (37) of eyes. Cohen's kappa coefficient (κ) was 0.96, suggesting almost perfect agreement between the two modalities in DR screening. Nonmydriatic montage widefield imaging acquisition time was significantly shorter than that of the dilated clinical examination (p = 0.010). CONCLUSION Nonmydriatic montage widefield images compared favorably with dilated fundus examination in defining DR severity and were acquired more rapidly.
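The agreement statistic reported here, Cohen's kappa, discounts the agreement two graders would reach by chance given their marginal grade distributions. A minimal sketch of the computation (the ratings below are illustrative, not the study's data):

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two graders rating the same items."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    labels = sorted(set(ratings_a) | set(ratings_b))
    # Observed agreement: fraction of items given identical grades.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement: product of the graders' marginal frequencies,
    # summed over all grade labels.
    p_e = sum(
        (ratings_a.count(label) / n) * (ratings_b.count(label) / n)
        for label in labels
    )
    return (p_o - p_e) / (1 - p_e)
```

On the conventional Landis-Koch scale, values above 0.80 (such as the κ = 0.96 reported here) are read as almost perfect agreement.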
Affiliation(s)
- Enrico Borrelli, Department of Ophthalmology, Ospedale San Raffaele Scientific Institute, University Vita-Salute, Via Olgettina 60, Milan, Italy
- Lea Querques, Department of Ophthalmology, Ospedale San Raffaele Scientific Institute, University Vita-Salute, Via Olgettina 60, Milan, Italy
- Rosangela Lattanzio, Department of Ophthalmology, Ospedale San Raffaele Scientific Institute, University Vita-Salute, Via Olgettina 60, Milan, Italy
- Michele Cavalleri, Department of Ophthalmology, Ospedale San Raffaele Scientific Institute, University Vita-Salute, Via Olgettina 60, Milan, Italy
- Alessio Grazioli Moretti, Department of Ophthalmology, Ospedale San Raffaele Scientific Institute, University Vita-Salute, Via Olgettina 60, Milan, Italy
- Carlo Di Biase, Department of Ophthalmology, Ospedale San Raffaele Scientific Institute, University Vita-Salute, Via Olgettina 60, Milan, Italy
- Alberto Signorino, Department of Ophthalmology, Ospedale San Raffaele Scientific Institute, University Vita-Salute, Via Olgettina 60, Milan, Italy
- Francesco Gelormini, Department of Ophthalmology, Ospedale San Raffaele Scientific Institute, University Vita-Salute, Via Olgettina 60, Milan, Italy
- Riccardo Sacconi, Department of Ophthalmology, Ospedale San Raffaele Scientific Institute, University Vita-Salute, Via Olgettina 60, Milan, Italy
- Francesco Bandello, Department of Ophthalmology, Ospedale San Raffaele Scientific Institute, University Vita-Salute, Via Olgettina 60, Milan, Italy
- Giuseppe Querques, Department of Ophthalmology, Ospedale San Raffaele Scientific Institute, University Vita-Salute, Via Olgettina 60, Milan, Italy
26
Reviewing the Role of Ultra-Widefield Imaging in Inherited Retinal Dystrophies. Ophthalmol Ther 2020; 9:249-263. [PMID: 32141037 PMCID: PMC7196101 DOI: 10.1007/s40123-020-00241-1] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2020] [Indexed: 11/28/2022] Open
Abstract
Inherited retinal dystrophies (IRD) are a heterogeneous group of rare chronic disorders caused by genetically determined degeneration of photoreceptors and retinal pigment epithelium cells. Ultra-widefield (UWF) imaging is a useful diagnostic tool for evaluating retinal integrity in IRD, including Stargardt disease, retinitis pigmentosa, cone dystrophies, and Best vitelliform dystrophy. Color or pseudocolor and fundus autofluorescence images obtained with UWF provide previously unavailable information on the retinal periphery, which correlates well with visual field measurements and electroretinography. Despite the unavoidable artifacts of UWF devices, the feasibility of examinations in infants and in patients with poor fixation makes UWF imaging a valuable resource in the diagnostic armamentarium for IRD.