1. Britten-Jones AC, Thai L, Flanagan JPM, Bedggood PA, Edwards TL, Metha AB, Ayton LN. Adaptive optics imaging in inherited retinal diseases: A scoping review of the clinical literature. Surv Ophthalmol 2024; 69:51-66. [PMID: 37778667] [DOI: 10.1016/j.survophthal.2023.09.006]
Abstract
Adaptive optics (AO) imaging enables direct, objective assessments of retinal cells. Applications of AO show great promise in advancing our understanding of the etiology of inherited retinal diseases (IRDs) and discovering new imaging biomarkers. This scoping review systematically identifies and summarizes clinical studies evaluating AO imaging in IRDs. Ovid MEDLINE and EMBASE were searched on February 6, 2023. Studies describing AO imaging in monogenic IRDs were included. Study screening and data extraction were performed by 2 reviewers independently. This review presents (1) a broad overview of the dominant areas of research; (2) a summary of IRD characteristics revealed by AO imaging; and (3) a discussion of methodological considerations relating to AO imaging in IRDs. Of 140 studies with AO outcomes, including 2 following subretinal gene therapy treatments, 75% included fewer than 10 participants with AO imaging data. Of 100 studies that included participants' genetic diagnoses, the most common IRD genes with AO outcomes were CNGA3, CNGB3, CHM, USH2A, and ABCA4. Confocal reflectance AO scanning laser ophthalmoscopy was the most frequently reported imaging modality, followed by flood-illuminated AO and split-detector AO. The most common outcome was cone density, reported quantitatively in 56% of studies. Future research areas include guidelines to reduce variability in the reporting of AO methodology and a focus on functional AO techniques to guide the development of therapeutic interventions.
Affiliation(s)
- Alexis Ceecee Britten-Jones
- Department of Optometry and Vision Sciences, Faculty of Medicine, Dentistry and Health Sciences, University of Melbourne, Parkville, VIC, Australia; Department of Surgery (Ophthalmology), Faculty of Medicine, Dentistry and Health Sciences, University of Melbourne, Parkville, VIC, Australia; Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Melbourne, VIC, Australia.
- Lawrence Thai
- Department of Surgery (Ophthalmology), Faculty of Medicine, Dentistry and Health Sciences, University of Melbourne, Parkville, VIC, Australia; Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Melbourne, VIC, Australia
- Jeremy P M Flanagan
- Department of Surgery (Ophthalmology), Faculty of Medicine, Dentistry and Health Sciences, University of Melbourne, Parkville, VIC, Australia; Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Melbourne, VIC, Australia
- Phillip A Bedggood
- Department of Optometry and Vision Sciences, Faculty of Medicine, Dentistry and Health Sciences, University of Melbourne, Parkville, VIC, Australia
- Thomas L Edwards
- Department of Surgery (Ophthalmology), Faculty of Medicine, Dentistry and Health Sciences, University of Melbourne, Parkville, VIC, Australia; Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Melbourne, VIC, Australia
- Andrew B Metha
- Department of Optometry and Vision Sciences, Faculty of Medicine, Dentistry and Health Sciences, University of Melbourne, Parkville, VIC, Australia
- Lauren N Ayton
- Department of Optometry and Vision Sciences, Faculty of Medicine, Dentistry and Health Sciences, University of Melbourne, Parkville, VIC, Australia; Department of Surgery (Ophthalmology), Faculty of Medicine, Dentistry and Health Sciences, University of Melbourne, Parkville, VIC, Australia; Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Melbourne, VIC, Australia
2. Duncan JL, Carroll J. Adaptive Optics Imaging of Inherited Retinal Disease. Cold Spring Harb Perspect Med 2023; 13:a041285. [PMID: 36220331] [PMCID: PMC10317068] [DOI: 10.1101/cshperspect.a041285]
Abstract
The human retina is amenable to direct, noninvasive visualization using a wide array of imaging modalities. In the ∼140 years since the publication of the first image of the living human retina, there has been a continued evolution of retinal imaging technology. Advances in image acquisition and processing speed now allow real-time visualization of retinal structure, which has revolutionized the diagnosis and management of eye disease. Enormous advances have come in image resolution, with adaptive optics (AO)-based systems capable of imaging the retina with single-cell resolution. In addition, newer functional imaging techniques provide the ability to assess function with exquisite spatial and temporal resolution. These imaging advances have had an especially profound impact on the field of inherited retinal disease research. Here we will review some of the advances and applications of AO retinal imaging in patients with inherited retinal disease.
Affiliation(s)
- Jacque L Duncan
- Department of Ophthalmology, University of California, San Francisco, California 94143-4081, USA
- Joseph Carroll
- Department of Ophthalmology & Visual Sciences, Medical College of Wisconsin Eye Institute, Milwaukee, Wisconsin 53226, USA
3. Soltanian-Zadeh S, Liu Z, Liu Y, Lassoued A, Cukras CA, Miller DT, Hammer DX, Farsiu S. Deep learning-enabled volumetric cone photoreceptor segmentation in adaptive optics optical coherence tomography images of normal and diseased eyes. Biomed Opt Express 2023; 14:815-833. [PMID: 36874491] [PMCID: PMC9979662] [DOI: 10.1364/boe.478693]
Abstract
Objective quantification of photoreceptor cell morphology, such as cell diameter and outer segment length, is crucial for early, accurate, and sensitive diagnosis and prognosis of retinal neurodegenerative diseases. Adaptive optics optical coherence tomography (AO-OCT) provides three-dimensional (3-D) visualization of photoreceptor cells in the living human eye. The current gold standard for extracting cell morphology from AO-OCT images involves the tedious process of 2-D manual marking. To automate this process and extend to 3-D analysis of the volumetric data, we propose a comprehensive deep learning framework to segment individual cone cells in AO-OCT scans. Our automated method achieved human-level performance in assessing cone photoreceptors of healthy and diseased participants captured with three different AO-OCT systems representing two different types of point scanning OCT: spectral domain and swept source.
Affiliation(s)
- Zhuolin Liu
- Center for Devices and Radiological Health (CDRH), U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
- Yan Liu
- School of Optometry, Indiana University, Bloomington, IN 47405, USA
- Ayoub Lassoued
- School of Optometry, Indiana University, Bloomington, IN 47405, USA
- Catherine A. Cukras
- National Eye Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Donald T. Miller
- School of Optometry, Indiana University, Bloomington, IN 47405, USA
- Daniel X. Hammer
- Center for Devices and Radiological Health (CDRH), U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
- Sina Farsiu
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Department of Ophthalmology, Duke University Medical Center, Durham, NC 27710, USA
4. Patterson EJ, Kalitzeos A, Kane TM, Singh N, Kreis J, Pennesi ME, Hardcastle AJ, Neitz J, Neitz M, Michaelides M, Carroll J. Foveal Cone Structure in Patients With Blue Cone Monochromacy. Invest Ophthalmol Vis Sci 2022; 63:23. [PMID: 36301530] [PMCID: PMC9624264] [DOI: 10.1167/iovs.63.11.23]
Abstract
Purpose Blue cone monochromacy (BCM) is a rare inherited cone disorder in which both long- (L-) and middle- (M-) wavelength sensitive cone classes are either impaired or nonfunctional. Assessing genotype-phenotype relationships in BCM can improve our understanding of retinal development in the absence of functional L- and M-cones. Here we examined foveal cone structure in patients with genetically confirmed BCM, using adaptive optics scanning light ophthalmoscopy (AOSLO). Methods Twenty-three male patients (aged 6-75 years) with genetically confirmed BCM were recruited for high-resolution imaging. Eight patients had a deletion of the locus control region (LCR), and 15 had a missense mutation (Cys203Arg) affecting the first two genes in the opsin gene array. Foveal cone structure was assessed using confocal and non-confocal split-detection AOSLO across a 300 × 300 µm area, centered on the location of peak cell density. Results Only one of eight patients with LCR deletions and 10 of 15 patients with Cys203Arg mutations had analyzable images. Mean total cone density for Cys203Arg patients was 16,664 ± 11,513 cones/mm2 (n = 10), which is, on average, around 40% of normal. Waveguiding cone density was 2073 ± 963 cones/mm2 (n = 9), which was consistent with published histological estimates of S-cone density in the normal eye. The one patient with an LCR deletion had a total cone density of 10,246 cones/mm2 and a waveguiding density of 1535 cones/mm2. Conclusions Our results show that BCM patients with LCR deletions and Cys203Arg mutations have a population of non-waveguiding photoreceptors, although the spectral identity and level of function remain unknown.
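The density figures above follow directly from cone counts within the 300 × 300 µm sampling window. A minimal sketch of that arithmetic (the count and the normative density below are illustrative assumptions, not study data):

```python
# Hypothetical illustration: converting a cone count within a fixed
# sampling window to a density (cones/mm^2), and expressing it as a
# percentage of an assumed normative value.

def cone_density(count, window_um=300):
    """Density in cones/mm^2 for `count` cones in a window_um x window_um patch."""
    area_mm2 = (window_um / 1000.0) ** 2  # convert um to mm, then square
    return count / area_mm2

def percent_of_normal(density, normative_density):
    return 100.0 * density / normative_density

# e.g. 1500 cones counted in a 300 x 300 um patch (0.09 mm^2):
d = cone_density(1500)
print(round(d))                               # 16667 cones/mm^2
print(round(percent_of_normal(d, 41000), 1))  # 40.7, vs. a hypothetical normative 41,000 cones/mm^2
```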
Affiliation(s)
- Emily J. Patterson
- UCL Institute of Ophthalmology, University College London, London, United Kingdom
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Angelos Kalitzeos
- UCL Institute of Ophthalmology, University College London, London, United Kingdom
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Thomas M. Kane
- UCL Institute of Ophthalmology, University College London, London, United Kingdom
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Navjit Singh
- UCL Institute of Ophthalmology, University College London, London, United Kingdom
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Joseph Kreis
- Cell Biology, Neurobiology & Anatomy, Medical College of Wisconsin, Milwaukee, Wisconsin, United States
- Mark E. Pennesi
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon, United States
- Alison J. Hardcastle
- UCL Institute of Ophthalmology, University College London, London, United Kingdom
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Jay Neitz
- Ophthalmology, University of Washington, Seattle, Washington, United States
- Maureen Neitz
- Ophthalmology, University of Washington, Seattle, Washington, United States
- Michel Michaelides
- UCL Institute of Ophthalmology, University College London, London, United Kingdom
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Joseph Carroll
- Cell Biology, Neurobiology & Anatomy, Medical College of Wisconsin, Milwaukee, Wisconsin, United States
- Ophthalmology & Visual Sciences, Medical College of Wisconsin, Milwaukee, Wisconsin, United States
5. Zhou M, Doble N, Choi SS, Jin T, Xu C, Parthasarathy S, Ramnath R. Using deep learning for the automated identification of cone and rod photoreceptors from adaptive optics imaging of the human retina. Biomed Opt Express 2022; 13:5082-5097. [PMID: 36425636] [PMCID: PMC9664895] [DOI: 10.1364/boe.470071]
Abstract
Adaptive optics imaging has enabled enhanced in vivo visualization of individual cone and rod photoreceptors in the retina. Effective analysis of such high-resolution, feature-rich images requires automated, robust algorithms. This paper describes RC-UPerNet, a novel deep learning algorithm for identifying both types of photoreceptors, which was evaluated on images from the central and peripheral retina extending out to 30° from the fovea in the nasal and temporal directions. Precision, recall, and Dice scores were 0.928, 0.917, and 0.922, respectively, for cones, and 0.876, 0.867, and 0.870 for rods. These scores agree well with human graders and improve on previously reported AI-based approaches.
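The precision, recall, and Dice scores quoted above can be computed from matched true-positive, false-positive, and false-negative detection counts. A sketch with illustrative counts chosen to roughly reproduce the reported cone scores (these counts are assumptions, not the paper's data):

```python
# Detection metrics from TP/FP/FN counts for matched photoreceptor detections.

def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)

def dice(tp, fp, fn):
    # Dice (F1) is the harmonic mean of precision and recall.
    return 2 * tp / (2 * tp + fp + fn)

tp, fp, fn = 917, 71, 83  # illustrative counts
print(round(precision(tp, fp), 3))  # 0.928
print(round(recall(tp, fn), 3))     # 0.917
print(round(dice(tp, fp, fn), 3))   # 0.923
```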
Affiliation(s)
- Mengxi Zhou
- The Ohio State University, Department of Computer Science and Engineering, 2015 Neil Ave., Columbus, OH 43210, USA
- Nathan Doble
- The Ohio State University, College of Optometry, 338 W 10th Ave., Columbus, OH 43210, USA
- The Ohio State University, Department of Ophthalmology and Visual Science, Havener Eye Institute, 915 Olentangy River Road, Columbus, OH 43212, USA
- Stacey S. Choi
- The Ohio State University, College of Optometry, 338 W 10th Ave., Columbus, OH 43210, USA
- The Ohio State University, Department of Ophthalmology and Visual Science, Havener Eye Institute, 915 Olentangy River Road, Columbus, OH 43212, USA
- Tianyu Jin
- The Ohio State University, Department of Computer Science and Engineering, 2015 Neil Ave., Columbus, OH 43210, USA
- Chenwei Xu
- The Ohio State University, Department of Statistics, 127 Pomerene Hall, 1760 Neil Ave, Columbus, OH 43212, USA
- Srinivasan Parthasarathy
- The Ohio State University, Department of Computer Science and Engineering, 2015 Neil Ave., Columbus, OH 43210, USA
- Rajiv Ramnath
- The Ohio State University, Department of Computer Science and Engineering, 2015 Neil Ave., Columbus, OH 43210, USA
6. Wynne N, Cava JA, Gaffney M, Heitkotter H, Scheidt A, Reiniger JL, Grieshop J, Yang K, Harmening WM, Cooper RF, Carroll J. Intergrader agreement of foveal cone topography measured using adaptive optics scanning light ophthalmoscopy. Biomed Opt Express 2022; 13:4445-4454. [PMID: 36032569] [PMCID: PMC9408252] [DOI: 10.1364/boe.460821]
Abstract
The foveal cone mosaic can be directly visualized using adaptive optics scanning light ophthalmoscopy (AOSLO). Previous studies in individuals with normal vision report wide variability in the topography of the foveal cone mosaic, especially the value of peak cone density (PCD). While these studies often involve a human grader, there have been no studies examining intergrader reproducibility of foveal cone mosaic metrics. Here we re-analyzed published AOSLO foveal cone images from 44 individuals to assess the relationship between the cone density centroid (CDC) location and the location of PCD. Across 5 graders with variable experience, we found a measurement error of 11.7% in PCD estimates and higher intergrader reproducibility of CDC location compared to PCD location (p < 0.0001). These estimates of measurement error can be used in future studies of the foveal cone mosaic, and our results support use of the CDC location as a more reproducible anchor for cross-modality analyses.
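The cone density centroid (CDC) above can be understood as a density-weighted average of cone locations. A minimal sketch assuming a simple neighbor-count density estimate (the authors' actual estimator may differ, e.g. using smoothed density maps):

```python
# Assumed illustration of a cone density centroid (CDC): each cone
# coordinate is weighted by a local density estimate, here a plain
# neighbor count within a fixed radius (O(n^2), fine for small mosaics).
import math

def local_density(points, i, radius):
    x0, y0 = points[i]
    return sum(1 for (x, y) in points
               if math.hypot(x - x0, y - y0) <= radius)

def density_centroid(points, radius=10.0):
    weights = [local_density(points, i, radius) for i in range(len(points))]
    total = sum(weights)
    cx = sum(w * x for w, (x, _) in zip(weights, points)) / total
    cy = sum(w * y for w, (_, y) in zip(weights, points)) / total
    return cx, cy

# A dense cluster near the origin pulls the centroid toward it,
# away from the plain (unweighted) mean of the coordinates:
pts = [(0, 0), (1, 0), (0, 1), (1, 1), (20, 20)]
cx, cy = density_centroid(pts, radius=5.0)
print(cx < 10 and cy < 10)  # True
```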
Affiliation(s)
- Niamh Wynne
- Department of Ophthalmology and Visual Sciences, Medical College of Wisconsin, 8701 W Watertown Plank Rd, Milwaukee, WI 53226, USA
- Jenna A. Cava
- Department of Ophthalmology and Visual Sciences, Medical College of Wisconsin, 8701 W Watertown Plank Rd, Milwaukee, WI 53226, USA
- Mina Gaffney
- Joint Department of Biomedical Engineering, Medical College of Wisconsin and Marquette University, 1250 W Wisconsin Ave, Milwaukee, WI 53233, USA
- Heather Heitkotter
- Department of Cell Biology, Neurobiology and Anatomy, 8701 W Watertown Plank Rd, Milwaukee, WI 53226, USA
- Abigail Scheidt
- Department of Ophthalmology and Visual Sciences, Medical College of Wisconsin, 8701 W Watertown Plank Rd, Milwaukee, WI 53226, USA
- Jenny L. Reiniger
- Department of Ophthalmology, University of Bonn, Ernst-Abbe-Str. 2, 53127 Bonn, Germany
- Jenna Grieshop
- Joint Department of Biomedical Engineering, Medical College of Wisconsin and Marquette University, 1250 W Wisconsin Ave, Milwaukee, WI 53233, USA
- Kai Yang
- Division of Biostatistics, Medical College of Wisconsin, 8701 W Watertown Plank Rd, Milwaukee, WI 53226, USA
- Wolf M. Harmening
- Department of Ophthalmology, University of Bonn, Ernst-Abbe-Str. 2, 53127 Bonn, Germany
- Robert F. Cooper
- Department of Ophthalmology and Visual Sciences, Medical College of Wisconsin, 8701 W Watertown Plank Rd, Milwaukee, WI 53226, USA
- Joint Department of Biomedical Engineering, Medical College of Wisconsin and Marquette University, 1250 W Wisconsin Ave, Milwaukee, WI 53233, USA
- Joseph Carroll
- Department of Ophthalmology and Visual Sciences, Medical College of Wisconsin, 8701 W Watertown Plank Rd, Milwaukee, WI 53226, USA
- Joint Department of Biomedical Engineering, Medical College of Wisconsin and Marquette University, 1250 W Wisconsin Ave, Milwaukee, WI 53233, USA
- Department of Cell Biology, Neurobiology and Anatomy, 8701 W Watertown Plank Rd, Milwaukee, WI 53226, USA
7.
Abstract
The eye, the photoreceptive organ used to perceive the external environment, is of great importance to humans. Many human diseases are accompanied by fundus changes, so a person's health status may be partly inferred from retinal images. However, because of ocular aberrations, the human eye is not a perfect refractive system. These aberrations not only limit human visual discrimination and recognition but also restrict observation of the fine structures of the eye, reducing the possibility of exploring the mechanisms of eye disease. Adaptive optics (AO) is a technique that corrects optical wavefront aberrations. Once integrated into ophthalmoscopes, AO enables retinal imaging at the cellular level. This paper illustrates the principle of AO in correcting wavefront aberrations in human eyes, and then reviews the applications and advances of AO in ophthalmology, including the adaptive optics fundus camera (AO-FC), the adaptive optics scanning laser ophthalmoscope (AO-SLO), the adaptive optics optical coherence tomography (AO-OCT), and their combined multimodal imaging technologies. Future directions for AO in ophthalmology are also discussed.
8. Choy KC, Li G, Stamer WD, Farsiu S. Open-source deep learning-based automatic segmentation of mouse Schlemm's canal in optical coherence tomography images. Exp Eye Res 2022; 214:108844. [PMID: 34793828] [PMCID: PMC8792324] [DOI: 10.1016/j.exer.2021.108844]
Abstract
The purpose of this study was to develop an automatic deep learning-based approach and corresponding free, open-source software to perform segmentation of the Schlemm's canal (SC) lumen in optical coherence tomography (OCT) scans of living mouse eyes. A novel convolutional neural network (CNN) for semantic segmentation grounded in a U-Net architecture was developed by incorporating a late fusion scheme, a multi-scale input image pyramid, dilated residual convolution blocks, and attention-gating. A total of 163 pairs of intensity and speckle variance (SV) OCT B-scans acquired from 32 living mouse eyes were used for training, validation, and testing of this CNN model for segmentation of the SC lumen. The proposed model achieved a mean Dice Similarity Coefficient (DSC) of 0.694 ± 0.256 and a median DSC of 0.791, while manual segmentation performed by a second expert grader achieved a mean and median DSC of 0.713 ± 0.209 and 0.763, respectively. This work presents the first automatic method for segmentation of the SC lumen in OCT images of living mouse eyes. The performance of the proposed model is comparable to that of a second human grader. Open-source automatic software for segmentation of the SC lumen is expected to accelerate experiments for studying treatment efficacy of new drugs affecting intraocular pressure and related diseases such as glaucoma, which present as changes in the SC area.
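The Dice Similarity Coefficient (DSC) reported above measures region overlap between a predicted and a reference binary mask, DSC = 2|A ∩ B| / (|A| + |B|). A toy sketch on flattened masks (the masks below are illustrative, not study data):

```python
# Dice Similarity Coefficient for two binary masks, flattened to 1-D lists.

def dsc(pred, ref):
    inter = sum(p and r for p, r in zip(pred, ref))  # |A ∩ B|
    size = sum(pred) + sum(ref)                       # |A| + |B|
    return 2 * inter / size if size else 1.0          # both empty: perfect agreement

a = [1, 1, 1, 0, 0, 1]  # predicted mask
b = [1, 1, 0, 0, 1, 1]  # reference mask
print(round(dsc(a, b), 3))  # 0.75
```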
Affiliation(s)
- Kevin C Choy
- Department of Biomedical Engineering, Duke University, Durham, NC, United States
- Guorong Li
- Department of Ophthalmology, Duke University, Durham, NC, United States
- W Daniel Stamer
- Department of Biomedical Engineering, Duke University, Durham, NC, United States; Department of Ophthalmology, Duke University, Durham, NC, United States
- Sina Farsiu
- Department of Biomedical Engineering, Duke University, Durham, NC, United States; Department of Ophthalmology, Duke University, Durham, NC, United States
9. Liu J, Shen C, Aguilera N, Cukras C, Hufnagel RB, Zein WM, Liu T, Tam J. Active Cell Appearance Model Induced Generative Adversarial Networks for Annotation-Efficient Cell Segmentation and Identification on Adaptive Optics Retinal Images. IEEE Trans Med Imaging 2021; 40:2820-2831. [PMID: 33507868] [PMCID: PMC8548993] [DOI: 10.1109/tmi.2021.3055483]
Abstract
Data annotation is a fundamental precursor for establishing large training sets to effectively apply deep learning methods to medical image analysis. For cell segmentation, obtaining high quality annotations is an expensive process that usually requires manual grading by experts. This work introduces an approach to efficiently generate annotated images, called "A-GANs", created by combining an active cell appearance model (ACAM) with conditional generative adversarial networks (C-GANs). ACAM is a statistical model that captures a realistic range of cell characteristics and is used to ensure that the image statistics of generated cells are guided by real data. C-GANs utilize cell contours generated by ACAM to produce cells that match input contours. By pairing ACAM-generated contours with A-GANs-based generated images, high quality annotated images can be efficiently generated. Experimental results on adaptive optics (AO) retinal images showed that A-GANs robustly synthesize realistic, artificial images whose cell distributions are exquisitely specified by ACAM. The cell segmentation performance using as few as 64 manually-annotated real AO images combined with 248 artificially-generated images from A-GANs was similar to the case of using 248 manually-annotated real images alone (Dice coefficients of 88% for both). Finally, application to rare diseases in which images exhibit never-seen characteristics demonstrated improvements in cell segmentation without the need for incorporating manual annotations from these new retinal images. Overall, A-GANs introduce a methodology for generating high quality annotated data that statistically captures the characteristics of any desired dataset and can be used to more efficiently train deep-learning-based medical image analysis applications.
10. Young LK, Smithson HE. Emulated retinal image capture (ERICA) to test, train and validate processing of retinal images. Sci Rep 2021; 11:11225. [PMID: 34045507] [PMCID: PMC8160341] [DOI: 10.1038/s41598-021-90389-y]
Abstract
High-resolution retinal imaging systems, such as adaptive optics scanning laser ophthalmoscopes (AOSLO), are increasingly being used for clinical research and fundamental studies in neuroscience. These systems offer unprecedented spatial and temporal resolution of retinal structures in vivo. However, a major challenge is the development of robust and automated methods for processing and analysing these images. We present ERICA (Emulated Retinal Image CApture), a simulation tool that generates realistic synthetic images of the human cone mosaic, mimicking images that would be captured by an AOSLO, with specified image quality and with corresponding ground-truth data. The simulation includes a self-organising mosaic of photoreceptors, the eye movements an observer might make during image capture, and data capture through a real system incorporating diffraction, residual optical aberrations and noise. The retinal photoreceptor mosaics generated by ERICA have a similar packing geometry to the human retina, as determined by expert labelling of AOSLO images of real eyes. In the current implementation, ERICA outputs convincingly realistic en face images of the cone photoreceptor mosaic, but extensions to other imaging modalities and structures are also discussed. These images and associated ground-truth data can be used to develop, test and validate image processing and analysis algorithms, or to train and validate machine learning approaches. The use of synthetic images has the advantage that neither access to an imaging system nor human participants is required for development.
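One ingredient of such a simulation is generating cone center positions with realistic packing geometry. A toy sketch of a jittered hexagonal lattice (an illustrative assumption about mosaic generation, not the ERICA algorithm itself):

```python
# Toy generator of cone-center positions on a jittered hexagonal lattice,
# the packing geometry typical of the cone mosaic. All parameters here are
# hypothetical; a full simulation would add self-organisation, optics, and noise.
import math
import random

def hex_mosaic(rows, cols, spacing, jitter=0.0, seed=0):
    rng = random.Random(seed)  # seeded for reproducibility
    pts = []
    for r in range(rows):
        for c in range(cols):
            x = c * spacing + (spacing / 2 if r % 2 else 0)  # offset alternate rows
            y = r * spacing * math.sqrt(3) / 2               # hexagonal row pitch
            pts.append((x + rng.uniform(-jitter, jitter),
                        y + rng.uniform(-jitter, jitter)))
    return pts

pts = hex_mosaic(4, 4, spacing=2.0, jitter=0.1)
print(len(pts))  # 16
```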
Affiliation(s)
- Laura K Young
- Biosciences Institute, Newcastle University, Newcastle, NE2 4HH, UK.
- Hannah E Smithson
- Department of Experimental Psychology, University of Oxford, Oxford, OX2 6GG, UK
11. Ringel MJ, Tang EM, Tao YK. Advances in multimodal imaging in ophthalmology. Ther Adv Ophthalmol 2021; 13:25158414211002400. [PMID: 35187398] [PMCID: PMC8855415] [DOI: 10.1177/25158414211002400]
Abstract
Multimodality ophthalmic imaging systems aim to enhance the contrast, resolution, and functionality of existing technologies to improve disease diagnostics and therapeutic guidance. These systems include advanced acquisition and post-processing methods using optical coherence tomography (OCT), combined scanning laser ophthalmoscopy and OCT systems, adaptive optics, surgical guidance, and photoacoustic technologies. Here, we provide an overview of these ophthalmic imaging systems and their clinical and basic science applications.
Affiliation(s)
- Morgan J. Ringel
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, USA
- Eric M. Tang
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, USA
- Yuankai K. Tao
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN 37235, USA
12. Xie H, Zeng X, Lei H, Du J, Wang J, Zhang G, Cao J, Wang T, Lei B. Cross-attention multi-branch network for fundus diseases classification using SLO images. Med Image Anal 2021; 71:102031. [PMID: 33798993] [DOI: 10.1016/j.media.2021.102031]
Abstract
Fundus disease classification is vital to human health. However, most existing methods detect diseases from single-angle fundus images, which lack pathological information. To address this limitation, this paper proposes a novel deep learning method for different fundus disease classification tasks using ultra-wide-field scanning laser ophthalmoscopy (SLO) images, which have an ultra-wide field of view of 180-200°. The proposed deep model consists of a multi-branch network, an atrous spatial pyramid pooling (ASPP) module, cross-attention, and a depth-wise attention module. Specifically, the multi-branch network employs the ResNet-34 model as the backbone to extract feature information, where the two-branch ResNet-34 model is followed by the ASPP module to extract multi-scale spatial contextual features by setting different dilation rates. The depth-wise attention module provides a global attention map from the multi-branch network, which enables the network to focus on the salient targets of interest. The cross-attention module adopts a cross-fusion mode to fuse the channel and spatial attention maps from the two-branch ResNet-34 model, which can enhance the representation ability of the disease-specific features. Extensive experiments on our collected SLO images and two publicly available datasets demonstrate that the proposed method outperforms state-of-the-art methods and achieves promising classification performance on fundus diseases.
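The dilated (atrous) convolutions inside an ASPP module enlarge the receptive field without adding parameters: a kernel of size k with dilation d covers an effective extent of k + (k - 1)(d - 1) pixels. A small sketch of that relationship (the rates shown are common in ASPP variants, not necessarily those used in this paper):

```python
# Effective spatial footprint of a dilated convolution kernel:
# inserting (d - 1) gaps between the k taps widens coverage while the
# number of learned weights stays at k (per dimension).

def effective_kernel(k, dilation):
    return k + (k - 1) * (dilation - 1)

for d in (1, 6, 12, 18):  # illustrative dilation rates
    print(d, effective_kernel(3, d))
# prints: 1 3, 6 13, 12 25, 18 37
```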
Affiliation(s)
- Hai Xie
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Xianlu Zeng
- Shenzhen Eye Hospital, Shenzhen Key Ophthalmic Laboratory, Health Science Center, Shenzhen University, The Second Affiliated Hospital of Jinan University, Shenzhen, China
- Haijun Lei
- Guangdong Province Key Laboratory of Popular High-performance Computers, School of Computer and Software Engineering, Shenzhen University, Shenzhen, China
- Jie Du
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Jiantao Wang
- Shenzhen Eye Hospital, Shenzhen Key Ophthalmic Laboratory, Health Science Center, Shenzhen University, The Second Affiliated Hospital of Jinan University, Shenzhen, China
- Guoming Zhang
- Shenzhen Eye Hospital, Shenzhen Key Ophthalmic Laboratory, Health Science Center, Shenzhen University, The Second Affiliated Hospital of Jinan University, Shenzhen, China
- Jiuwen Cao
- Key Lab for IOT and Information Fusion Technology of Zhejiang, Artificial Intelligence Institute, Hangzhou Dianzi University, Hangzhou, China
- Tianfu Wang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Baiying Lei
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
13
Sredar N, Razeen M, Kowalski B, Carroll J, Dubra A. Comparison of confocal and non-confocal split-detection cone photoreceptor imaging. Biomed Opt Express 2021; 12:737-755. [PMID: 33680539] [PMCID: PMC7901313] [DOI: 10.1364/boe.403907]
Abstract
Quadrant reflectance confocal and non-confocal scanning light ophthalmoscope images of the photoreceptor mosaic were recorded in a subject with congenital achromatopsia (ACHM) and a normal control. These images, captured with various circular and annular apertures, were used to calculate split-detection images, revealing two cone photoreceptor contrast mechanisms. The first contrast mechanism, maximal in the non-confocal 5.5-10 Airy disk diameter annular region, is unrelated to the cone reflectivity seen in confocal or flood-illumination imaging. The second mechanism, maximal for confocal split-detection, is related to the cone reflectivity in confocal or flood-illumination imaging that originates from the ellipsoid zone and/or inner-outer segment junction. Seeking to maximize image contrast, split-detection images were generated using various quadrant detector combinations, with opposite (diagonal) quadrant detectors producing the highest contrast. Split-detection images generated by summing adjacent quadrant detector pairs showed lower contrast, while azimuthal split-detection images, calculated from adjacent quadrant detectors, showed the lowest contrast. Finally, the integration of image pairs with orthogonal split directions was used to produce images in which the photoreceptor contrast does not change with direction.
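At its core, the split-detection computation described in this abstract is a normalized difference of two detector signals. A minimal sketch, with hypothetical pixel values (the actual acquisition geometry and detector pairings follow the paper, not this toy):

```python
def split_detection(i_a, i_b, eps=1e-12):
    """Pixel-wise split-detection contrast: (A - B) / (A + B).

    i_a, i_b: 2D lists of non-negative intensities from two opposing
    (e.g. diagonal) quadrant detectors; eps guards against division by zero.
    """
    return [[(a - b) / (a + b + eps) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(i_a, i_b)]

# Hypothetical 1x2 patch recorded by two opposing quadrant detectors:
det_a = [[2.0, 1.0]]
det_b = [[1.0, 3.0]]
contrast = split_detection(det_a, det_b)
```

Because the numerator and denominator share the illumination term, the result is insensitive to overall reflectivity, which is one reason split-detection reveals inner segments that are dim in confocal imaging.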
Affiliation(s)
- Nripun Sredar
- Department of Ophthalmology, Stanford University, Palo Alto, CA 94303, USA
- Moataz Razeen
- Department of Ophthalmology, Stanford University, Palo Alto, CA 94303, USA
- Joseph Carroll
- Department of Ophthalmology & Visual Sciences, Medical College of Wisconsin, Milwaukee, WI 53226, USA
- Alfredo Dubra
- Department of Ophthalmology, Stanford University, Palo Alto, CA 94303, USA
14
Abstract
Adaptive optics (AO) is a technique that corrects for optical aberrations. It was originally proposed to correct for the blurring effect of atmospheric turbulence on images in ground-based telescopes and was instrumental in the work that resulted in the Nobel prize-winning discovery of a supermassive compact object at the centre of our galaxy. When AO is used to correct for the eye's imperfect optics, retinal changes at the cellular level can be detected, allowing us to study the operation of the visual system and to assess ocular health in the microscopic domain. By correcting for sample-induced blur in microscopy, AO has pushed the boundaries of imaging in thick tissue specimens, such as when observing neuronal processes in the brain. In this primer, we focus on the application of AO for high-resolution imaging in astronomy, vision science and microscopy. We begin with an overview of the general principles of AO and its main components, which include methods to measure the aberrations, devices for aberration correction, and how these components are linked in operation. We present results and applications from each field along with reproducibility considerations and limitations. Finally, we discuss future directions.
15
Loo J, Kriegel MF, Tuohy MM, Kim KH, Prajna V, Woodward MA, Farsiu S. Open-Source Automatic Segmentation of Ocular Structures and Biomarkers of Microbial Keratitis on Slit-Lamp Photography Images Using Deep Learning. IEEE J Biomed Health Inform 2021; 25:88-99. [PMID: 32248131] [PMCID: PMC7781042] [DOI: 10.1109/jbhi.2020.2983549]
Abstract
We propose a fully-automatic deep learning-based algorithm for segmentation of ocular structures and microbial keratitis (MK) biomarkers on slit-lamp photography (SLP) images. The dataset consisted of SLP images from 133 eyes with manual annotations by a physician, P1. A modified region-based convolutional neural network, SLIT-Net, was developed and trained using P1's annotations to identify and segment four pathological regions of interest (ROIs) on diffuse white light images (stromal infiltrate (SI), hypopyon, white blood cell (WBC) border, corneal edema border), one pathological ROI on diffuse blue light images (epithelial defect (ED)), and two non-pathological ROIs on all images (corneal limbus, light reflexes). To assess inter-reader variability, 75 eyes were manually annotated for pathological ROIs by a second physician, P2. Performance was evaluated using the Dice similarity coefficient (DSC) and Hausdorff distance (HD). Using seven-fold cross-validation, the DSC of the algorithm (as compared to P1) for all ROIs was good (range: 0.62-0.95) on all 133 eyes. For the subset of 75 eyes with manual annotations by P2, the DSC for pathological ROIs ranged from 0.69-0.85 (SLIT-Net) vs. 0.37-0.92 (P2). DSCs for SLIT-Net were not significantly different than P2 for segmenting hypopyons (p > 0.05) and higher than P2 for WBCs (p < 0.001) and edema (p < 0.001). DSCs were higher for P2 for segmenting SIs (p < 0.001) and EDs (p < 0.001). HDs were lower for P2 for segmenting SIs (p = 0.005) and EDs (p < 0.001) and not significantly different for hypopyons (p > 0.05), WBCs (p > 0.05), and edema (p > 0.05). This prototype fully-automatic algorithm to segment MK biomarkers on SLP images performed to expectations on an exploratory dataset and holds promise for quantification of corneal physiology and pathology.
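For readers unfamiliar with the two agreement metrics used throughout this abstract, a minimal sketch with toy inputs (not the authors' implementation; real evaluations run on full-resolution masks and boundary point sets):

```python
import math

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks (2D lists of 0/1)."""
    a = {(i, j) for i, row in enumerate(mask_a) for j, v in enumerate(row) if v}
    b = {(i, j) for i, row in enumerate(mask_b) for j, v in enumerate(row) if v}
    if not a and not b:
        return 1.0  # two empty segmentations agree perfectly
    return 2 * len(a & b) / (len(a) + len(b))

def hausdorff(pts_a, pts_b):
    """Symmetric Hausdorff distance between two sets of boundary points."""
    directed = lambda xs, ys: max(min(math.dist(p, q) for q in ys) for p in xs)
    return max(directed(pts_a, pts_b), directed(pts_b, pts_a))

# Toy example: two 2x2 masks overlapping in one pixel -> Dice = 2/3.
d = dice([[1, 1], [0, 0]], [[1, 0], [0, 0]])
```

Dice rewards overlapping area, while the Hausdorff distance penalizes the single worst boundary disagreement, which is why the paper reports both.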
16
Liu J, Han YJ, Liu T, Aguilera N, Tam J. Spatially Aware Dense-LinkNet Based Regression Improves Fluorescent Cell Detection in Adaptive Optics Ophthalmic Images. IEEE J Biomed Health Inform 2020; 24:3520-3528. [PMID: 32750947] [DOI: 10.1109/jbhi.2020.3004271]
Abstract
Retinal pigment epithelial (RPE) cells play an important role in nourishing retinal neurosensory photoreceptor cells, and numerous blinding diseases are associated with RPE defects. Their fluorescence signature can now be visualized in the living human eye using adaptive optics (AO) imaging combined with indocyanine green (ICG), which motivates us to develop an automated RPE detection method to improve the quantitative evaluation of RPE status in patients. This paper proposes a spatially-aware, Dense-LinkNet-based regression approach to improve the detection of in vivo fluorescent cell patterns, achieving precision, recall, and F1-Score of 93.6 ± 4.3%, 81.4 ± 9.5%, and 86.7 ± 5.7%, respectively. These results demonstrate the utility of incorporating spatial inputs into a deep learning-based regression framework for cell detection.
17
Wynne N, Carroll J, Duncan JL. Promises and pitfalls of evaluating photoreceptor-based retinal disease with adaptive optics scanning light ophthalmoscopy (AOSLO). Prog Retin Eye Res 2020; 83:100920. [PMID: 33161127] [DOI: 10.1016/j.preteyeres.2020.100920]
Abstract
Adaptive optics scanning light ophthalmoscopy (AOSLO) allows visualization of the living human retina with exquisite single-cell resolution. This technology has improved our understanding of normal retinal structure and revealed pathophysiological details of a number of retinal diseases. Despite the remarkable capabilities of AOSLO, it has not seen the widespread commercial adoption and mainstream clinical success of other modalities developed in a similar time frame. Nevertheless, continued advancements in AOSLO hardware and software have expanded use to a broader range of patients. Current devices enable imaging of a number of different retinal cell types, with recent improvements in stimulus and detection schemes enabling monitoring of retinal function, microscopic structural changes, and even subcellular activity. This has positioned AOSLO for use in clinical trials, primarily as exploratory outcome measures or biomarkers that can be used to monitor disease progression or therapeutic response. AOSLO metrics could facilitate patient selection for such trials, to refine inclusion criteria or to guide the choice of therapy, depending on the presence, absence, or functional viability of specific cell types. Here we explore the potential of AOSLO retinal imaging by reviewing clinical applications as well as some of the pitfalls and barriers to more widespread clinical adoption.
Affiliation(s)
- Niamh Wynne
- Department of Ophthalmology and Visual Sciences, Medical College of Wisconsin, Milwaukee, WI, USA
- Joseph Carroll
- Department of Ophthalmology and Visual Sciences, Medical College of Wisconsin, Milwaukee, WI, USA
- Department of Cell Biology, Neurobiology & Anatomy, Medical College of Wisconsin, Milwaukee, WI, USA
- Department of Biomedical Engineering, Medical College of Wisconsin, Milwaukee, WI, USA
- Jacque L Duncan
- Department of Ophthalmology, University of California, San Francisco, CA, USA
18
Litts KM, Georgiou M, Langlo CS, Patterson EJ, Mastey RR, Kalitzeos A, Linderman RE, Lam BL, Fishman GA, Pennesi ME, Kay CN, Hauswirth WW, Michaelides M, Carroll J. Interocular Symmetry of Foveal Cone Topography in Congenital Achromatopsia. Curr Eye Res 2020; 45:1257-1264. [PMID: 32108519] [PMCID: PMC7487033] [DOI: 10.1080/02713683.2020.1737138]
Abstract
Purpose: To determine the interocular symmetry of foveal cone topography in achromatopsia (ACHM) using non-confocal split-detection adaptive optics scanning light ophthalmoscopy (AOSLO). Methods: Split-detector AOSLO images of the foveal cone mosaic were acquired from both eyes of 26 subjects (mean age 24.3 years; range 8-44 years, 14 females) with genetically confirmed CNGA3- or CNGB3-associated ACHM. Cones were identified within a manually delineated rod-free zone. Peak cone density (PCD) was determined using an 80 × 80 μm sampling window within the rod-free zone. The mean and standard deviation (SD) of inter-cell distance (ICD) were calculated to derive the coefficient of variation (CV). Cone density difference maps were generated to compare cone topography between eyes. Results: PCD (mean ± SD) was 17,530 ± 9,614 cones/mm2 and 17,638 ± 9,753 cones/mm2 for right and left eyes, respectively (p = .677, Wilcoxon test). The mean (±SD) ICD was 9.05 ± 2.55 µm and 9.24 ± 2.55 µm for right and left eyes, respectively (p = .410, paired t-test). The mean (±SD) CV of ICD was 0.16 ± 0.03 and 0.16 ± 0.04 for right and left eyes, respectively (p = .562, paired t-test). Cone density maps demonstrated that cone topography of the ACHM fovea is non-uniform, with local variations in cone density between eyes. Conclusions: These results demonstrate the interocular symmetry of the foveal cone mosaic (both density and packing) in ACHM. As cone topography can differ between eyes of a subject, PCD does not completely describe the foveal cone mosaic in ACHM. Nonetheless, these findings are of value in longitudinal monitoring of patients during treatment trials and further suggest that both eyes of a given subject may have similar therapeutic potential and that the non-study eye can be used as a control.
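As a rough illustration of the metrics above, density, ICD, and CV can be computed from marked cone coordinates. This sketch uses nearest-neighbor distances as a simplified stand-in for inter-cell distance (the paper's exact neighbor definition may differ); the window size follows the abstract, the function names are hypothetical:

```python
import math

def cone_density(coords, window_um=80.0):
    """Cones/mm^2 for cones marked inside a square sampling window of side window_um."""
    area_mm2 = (window_um / 1000.0) ** 2  # 80 um -> 0.0064 mm^2
    return len(coords) / area_mm2

def icd_stats(coords):
    """Mean, SD, and CV (SD/mean) of nearest-neighbor distances between cones (um)."""
    nn = [min(math.dist(coords[i], coords[j])
              for j in range(len(coords)) if j != i)
          for i in range(len(coords))]
    mean = sum(nn) / len(nn)
    sd = math.sqrt(sum((d - mean) ** 2 for d in nn) / len(nn))
    return mean, sd, sd / mean
```

Note that CV is a ratio of two lengths and therefore dimensionless, which is why it carries no µm unit in the results above.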
Affiliation(s)
- Katie M. Litts
- Ophthalmology & Visual Sciences, Medical College of Wisconsin, Milwaukee, Wisconsin, United States of America
- Michalis Georgiou
- Moorfields Eye Hospital, London, United Kingdom
- UCL Institute of Ophthalmology, University College London, London, United Kingdom
- Christopher S. Langlo
- Cell Biology, Neurobiology and Anatomy, Medical College of Wisconsin, Milwaukee, Wisconsin, United States of America
- Emily J. Patterson
- Ophthalmology & Visual Sciences, Medical College of Wisconsin, Milwaukee, Wisconsin, United States of America
- Rebecca R. Mastey
- Ophthalmology & Visual Sciences, Medical College of Wisconsin, Milwaukee, Wisconsin, United States of America
- Angelos Kalitzeos
- Moorfields Eye Hospital, London, United Kingdom
- UCL Institute of Ophthalmology, University College London, London, United Kingdom
- Rachel E. Linderman
- Cell Biology, Neurobiology and Anatomy, Medical College of Wisconsin, Milwaukee, Wisconsin, United States of America
- Byron L. Lam
- Bascom Palmer Eye Institute, University of Miami, Miami, Florida, United States of America
- Gerald A. Fishman
- Pangere Center for Inherited Retinal Diseases, The Chicago Lighthouse, Chicago, Illinois, United States
- Mark E. Pennesi
- Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239
- Michel Michaelides
- Moorfields Eye Hospital, London, United Kingdom
- UCL Institute of Ophthalmology, University College London, London, United Kingdom
- Joseph Carroll
- Ophthalmology & Visual Sciences, Medical College of Wisconsin, Milwaukee, Wisconsin, United States of America
- Cell Biology, Neurobiology and Anatomy, Medical College of Wisconsin, Milwaukee, Wisconsin, United States of America
19
Morgan JIW, Chen M, Huang AM, Jiang YY, Cooper RF. Cone Identification in Choroideremia: Repeatability, Reliability, and Automation Through Use of a Convolutional Neural Network. Transl Vis Sci Technol 2020; 9:40. [PMID: 32855844] [PMCID: PMC7424931] [DOI: 10.1167/tvst.9.2.40]
Abstract
Purpose Adaptive optics imaging has enabled the visualization of photoreceptors both in health and disease. However, there remains a need for automated, accurate cone photoreceptor identification in images of disease. Here, we apply an open-source convolutional neural network (CNN) to automatically identify cones in images of choroideremia (CHM). We further compare the results to the repeatability and reliability of manual cone identifications in CHM. Methods We used split-detection adaptive optics scanning laser ophthalmoscopy to image the inner segment cone mosaic of 17 patients with CHM. Cones were manually identified twice by one experienced grader and once by two additional experienced graders in 204 regions of interest (ROIs). An open-source CNN either pre-trained on normal images or trained on CHM images automatically identified cones in the ROIs. True and false positive rates and Dice's coefficient were used to determine the agreement in cone locations between data sets. The intraclass correlation coefficient was used to assess agreement in bound cone density. Results Intra- and intergrader agreement for cone density is high in CHM. CNN performance increased when it was trained on CHM images in comparison to normal images, but had lower agreement than manual grading. Conclusions Manual cone identifications and cone density measurements are repeatable and reliable for images of CHM. CNNs show promise for automated cone selections, although additional improvements are needed to equal the accuracy of manual measurements. Translational Relevance These results are important for designing and interpreting longitudinal studies of cone mosaic metrics in disease progression or treatment intervention in CHM.
Affiliation(s)
- Jessica I W Morgan
- Scheie Eye Institute, Department of Ophthalmology, University of Pennsylvania, Philadelphia, PA, USA
- Center for Advanced Retinal and Ocular Therapeutics, University of Pennsylvania, Philadelphia, PA, USA
- Min Chen
- Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
- Andrew M Huang
- Scheie Eye Institute, Department of Ophthalmology, University of Pennsylvania, Philadelphia, PA, USA
- Yu You Jiang
- Scheie Eye Institute, Department of Ophthalmology, University of Pennsylvania, Philadelphia, PA, USA
- Center for Advanced Retinal and Ocular Therapeutics, University of Pennsylvania, Philadelphia, PA, USA
- Robert F Cooper
- Scheie Eye Institute, Department of Ophthalmology, University of Pennsylvania, Philadelphia, PA, USA
- Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA
- Currently at the Joint Department of Biomedical Engineering, Marquette University and Medical College of Wisconsin and the Department of Ophthalmology, Medical College of Wisconsin, Milwaukee, WI, USA
20
Jin H, Morgan JIW, Gee JC, Chen M. Spatially informed CNN for automated cone detection in adaptive optics retinal images. Proc IEEE Int Symp Biomed Imaging 2020; 2020:1383-1386. [PMID: 32647558] [DOI: 10.1109/isbi45749.2020.9098455]
Abstract
Adaptive optics (AO) scanning laser ophthalmoscopy offers cellular-level in vivo imaging of the human cone mosaic. Existing analysis of cone photoreceptor density in AO images requires accurate identification of cone cells, which is a time- and labor-intensive task. Recently, several methods have been introduced for automated cone detection in AO retinal images using convolutional neural networks (CNNs). However, these approaches have been limited in their ability to correctly identify cones when applied to AO images originating from different locations in the retina, due to changes in the reflectance and arrangement of the cone mosaic with eccentricity. To address these limitations, we present an adapted CNN architecture that incorporates spatial information directly into the network. Our approach, inspired by conditional generative adversarial networks, embeds the retinal location from which each AO image was acquired as part of the training. Using manual cone identification as ground truth, our evaluation shows general improvement over existing approaches when detecting cones in the middle and periphery regions of the retina, but decreased performance near the fovea.
Affiliation(s)
- Heng Jin
- School of Automation Science and Electrical Engineering, Beihang University, China
- Department of Radiology, University of Pennsylvania, USA
- Jessica I W Morgan
- Scheie Eye Institute, Department of Ophthalmology, University of Pennsylvania, USA
- Center for Advanced Retinal and Ocular Therapeutics, University of Pennsylvania, USA
- James C Gee
- Department of Radiology, University of Pennsylvania, USA
- Min Chen
- Department of Radiology, University of Pennsylvania, USA
21
Georgiou M, Litts KM, Singh N, Kane T, Patterson EJ, Hirji N, Kalitzeos A, Dubra A, Michaelides M, Carroll J. Intraobserver Repeatability and Interobserver Reproducibility of Foveal Cone Density Measurements in CNGA3- and CNGB3-Associated Achromatopsia. Transl Vis Sci Technol 2020; 9:37. [PMID: 32832242] [PMCID: PMC7414701] [DOI: 10.1167/tvst.9.7.37]
Abstract
Purpose To examine repeatability and reproducibility of foveal cone density measurements in patients with CNGA3 - and CNGB3-associated achromatopsia (ACHM) using split-detection adaptive optics scanning light ophthalmoscopy (AOSLO). Methods Thirty foveae from molecularly confirmed subjects with ACHM, half of whom harbored disease-causing variants in CNGA3 and half in CNGB3, underwent nonconfocal split-detection AOSLO imaging. Cone photoreceptors within the manually delineated rod-free zone were manually identified twice by two independent observers. The coordinates of the marked cones were used for quantifying foveal cone density. Cone density and difference maps were generated to compare cone topography between trials. Results We observed excellent intraobserver repeatability in foveal cone density estimates, with intraclass correlation coefficients (ICCs) ranging from 0.963 to 0.991 for CNGA3 and CNGB3 subjects. Interobserver reproducibility was also excellent for both CNGA3 (ICC = 0.952; 95% confidence interval [CI], 0.903-1.0) and CNGB3 (ICC = 0.968; 95% CI, 0.935-1.0). However, Bland-Altman analysis revealed bias between observers. Conclusions Foveal cone density can be measured using the described method with good repeatability and reproducibility both for CNGA3- and CNGB3-associated ACHM. Any degree of bias observed among the observers is of uncertain clinical significance but should be evaluated on a study-specific basis. Translational Relevance This approach could be used to explore disease natural history, as well as to facilitate stratification of patients and monitor efficacy of interventions for ongoing and upcoming ACHM gene therapy trials.
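The Bland-Altman analysis mentioned above reduces to a bias (the mean of the paired differences) and 95% limits of agreement around it. A minimal sketch with hypothetical paired measurements (the density values below are illustrative, not from the study):

```python
def bland_altman(obs1, obs2):
    """Bias and 95% limits of agreement between paired measurements
    from two observers (differences taken as obs1 - obs2)."""
    diffs = [a - b for a, b in zip(obs1, obs2)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5  # sample SD
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical foveal cone density estimates (cones/mm^2) from two observers:
o1 = [17800.0, 19200.0, 21050.0, 16400.0]
o2 = [17650.0, 19500.0, 20800.0, 16150.0]
bias, limits = bland_altman(o1, o2)
```

A high ICC with a nonzero bias, as reported here, means the observers rank subjects the same way while one systematically marks more (or fewer) cones than the other.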
Affiliation(s)
- Michalis Georgiou
- UCL Institute of Ophthalmology, University College London, London, UK
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Katie M Litts
- Department of Ophthalmology & Visual Sciences, Medical College of Wisconsin, Milwaukee, WI, USA
- Navjit Singh
- UCL Institute of Ophthalmology, University College London, London, UK
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Thomas Kane
- UCL Institute of Ophthalmology, University College London, London, UK
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Emily J Patterson
- UCL Institute of Ophthalmology, University College London, London, UK
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Department of Ophthalmology & Visual Sciences, Medical College of Wisconsin, Milwaukee, WI, USA
- Nashila Hirji
- UCL Institute of Ophthalmology, University College London, London, UK
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Angelos Kalitzeos
- UCL Institute of Ophthalmology, University College London, London, UK
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Alfredo Dubra
- Department of Ophthalmology, Stanford University, Palo Alto, CA, USA
- Michel Michaelides
- UCL Institute of Ophthalmology, University College London, London, UK
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Joseph Carroll
- Department of Ophthalmology & Visual Sciences, Medical College of Wisconsin, Milwaukee, WI, USA
22
Hamwood J, Alonso-Caneiro D, Sampson DM, Collins MJ, Chen FK. Automatic Detection of Cone Photoreceptors With Fully Convolutional Networks. Transl Vis Sci Technol 2019; 8:10. [PMID: 31737434] [PMCID: PMC6855369] [DOI: 10.1167/tvst.8.6.10]
Abstract
PURPOSE To develop a fully automatic method, based on deep learning algorithms, for determining the locations of cone photoreceptors within adaptive optics scanning laser ophthalmoscope images and evaluate its performance against a dataset of manually segmented images. METHODS A fully convolutional network (FCN) based on the U-Net architecture was used to generate prediction probability maps, and a localization algorithm then reduced each prediction map to a collection of points. The proposed method was trained and tested on two publicly available datasets of different imaging modalities, with Dice overlap, false discovery rate, and true positive rate reported to assess performance. RESULTS The proposed method achieves a Dice coefficient of 0.989, true positive rate of 0.987, and false discovery rate of 0.009 on the first (confocal) dataset, and a Dice coefficient of 0.926, true positive rate of 0.909, and false discovery rate of 0.051 on the second (split detector) dataset. These results compare favorably with those of a previously proposed method, while evaluation with the proposed method is considerably quicker (25 times faster). CONCLUSIONS The proposed FCN-based method demonstrates that deep learning algorithms can achieve accurate cone localizations, almost comparable to those of a human expert labeling the images. TRANSLATIONAL RELEVANCE Manual cone photoreceptor identification is a time-consuming task due to the large number of cones present within a single image; the proposed FCN-based method could support the image analysis task, drastically reducing the need for manual assessment of the photoreceptor mosaic.
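Unlike the pixel-mask Dice used in segmentation papers, the metrics here are computed over point detections, which requires matching predicted cone locations to manual marks within some tolerance. A simplified greedy-matching sketch (the tolerance value and matching scheme are illustrative assumptions, not the authors' protocol):

```python
import math

def detection_metrics(pred, truth, tol=2.0):
    """Greedily match predicted points to ground-truth points within tol pixels
    (one-to-one), then report (true positive rate, false discovery rate, Dice)."""
    unmatched = list(truth)
    tp = 0
    for p in pred:
        best = min(unmatched, key=lambda t: math.dist(p, t), default=None)
        if best is not None and math.dist(p, best) <= tol:
            unmatched.remove(best)  # each true cone may be claimed only once
            tp += 1
    fp, fn = len(pred) - tp, len(truth) - tp
    tpr = tp / (tp + fn) if truth else 1.0
    fdr = fp / (tp + fp) if pred else 0.0
    dsc = 2 * tp / (2 * tp + fp + fn) if (pred or truth) else 1.0
    return tpr, fdr, dsc
```

Greedy matching in detection order is a simplification; published evaluations often use optimal (e.g. Hungarian) assignment, which can differ when detections are densely packed.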
Affiliation(s)
- Jared Hamwood
- School of Optometry & Vision Science, Queensland University of Technology, Queensland, Australia
- David Alonso-Caneiro
- School of Optometry & Vision Science, Queensland University of Technology, Queensland, Australia
- Centre for Ophthalmology and Visual Science (incorporating Lions Eye Institute), The University of Western Australia, Perth, Western Australia, Australia
- Danuta M. Sampson
- Centre for Ophthalmology and Visual Science (incorporating Lions Eye Institute), The University of Western Australia, Perth, Western Australia, Australia
- Surrey Biophotonics, Centre for Vision, Speech and Signal Processing and School of Biosciences and Medicine, The University of Surrey, Guildford, UK
- Michael J. Collins
- School of Optometry & Vision Science, Queensland University of Technology, Queensland, Australia
- Fred K. Chen
- Centre for Ophthalmology and Visual Science (incorporating Lions Eye Institute), The University of Western Australia, Perth, Western Australia, Australia
- Department of Ophthalmology, Royal Perth Hospital, Perth, Western Australia, Australia
23
Interocular symmetry, intraobserver repeatability, and interobserver reliability of cone density measurements in the 13-lined ground squirrel. PLoS One 2019; 14:e0223110. [PMID: 31557245] [PMCID: PMC6762077] [DOI: 10.1371/journal.pone.0223110]
Abstract
BACKGROUND The 13-lined ground squirrel (13-LGS) possesses a cone-dominant retina that is highly amenable to non-invasive high-resolution retinal imaging. The ability for longitudinal assessment of a cone-dominant photoreceptor mosaic with an adaptive optics scanning light ophthalmoscope (AOSLO) has positioned the 13-LGS to become an accessible model for vision research. Here, we examine the interocular symmetry, repeatability, and reliability of cone density measurements in the 13-LGS. METHODS Thirteen 13-LGS (18 eyes) were imaged along the vertical meridian with a custom AOSLO. Regions of interest were selected superior and inferior to the optic nerve head, including the cone-rich visual streak. Non-confocal split-detection was used to capture images of the cone mosaic. Five masked observers each manually identified photoreceptors for 26 images three times and corrected an algorithm's cell identification outputs for all 214 images three times. Intraobserver repeatability and interobserver reliability of cone density were characterized using data collected from all five observers, while interocular symmetry was assessed in five animals using the average values of all observers. The distribution of image quality for all images in this study was assessed with open-sourced software. RESULTS Manual identification was less repeatable than semi-automated correction for four of the five observers. Excellent repeatability was seen from all observers (ICC = 0.997-0.999), and there was good agreement between repeat cell identification corrections in all five observers (range: 9.43-25.71 cells/degree2). Reliability of cell identification was significantly different in two of the five observers, and worst in images taken from hibernating 13-LGS. Interocular symmetry of cone density was seen in the five 13-LGS assessed. Image quality was variable between blur- and pixel intensity-based metrics. 
CONCLUSIONS Interocular symmetry with repeatable cone density measurements suggest that the 13-LGS is well-suited for longitudinal examination of the cone mosaic using split-detection AOSLO. Differences in reliability highlight the importance of observer training and automation of AOSLO cell detection. Cone density measurements from hibernating 13-LGS are not repeatable. Additional studies are warranted to assess other metrics of cone health to detect deviations from normal 13-LGS in future models of cone disorder in this species.
24
Cunefare D, Huckenpahler AL, Patterson EJ, Dubra A, Carroll J, Farsiu S. RAC-CNN: multimodal deep learning based automatic detection and classification of rod and cone photoreceptors in adaptive optics scanning light ophthalmoscope images. Biomed Opt Express 2019; 10:3815-3832. [PMID: 31452977] [PMCID: PMC6701534] [DOI: 10.1364/boe.10.003815]
Abstract
Quantification of the human rod and cone photoreceptor mosaic in adaptive optics scanning light ophthalmoscope (AOSLO) images is useful for the study of various retinal pathologies. Subjective and time-consuming manual grading has remained the gold standard for evaluating these images, with no well validated automatic methods for detecting individual rods having been developed. We present a novel deep learning based automatic method, called the rod and cone CNN (RAC-CNN), for detecting and classifying rods and cones in multimodal AOSLO images. We test our method on images from healthy subjects as well as subjects with achromatopsia over a range of retinal eccentricities. We show that our method is on par with human grading for detecting rods and cones.
Affiliation(s)
- David Cunefare
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Alison L. Huckenpahler
- Department of Cell Biology, Neurobiology, & Anatomy, Medical College of Wisconsin, Milwaukee, WI 53226, USA
- Emily J. Patterson
- Department of Ophthalmology & Visual Sciences, Medical College of Wisconsin, Milwaukee, WI 53226, USA
- Alfredo Dubra
- Department of Ophthalmology, Stanford University, Palo Alto, CA 94303, USA
- Joseph Carroll
- Department of Cell Biology, Neurobiology, & Anatomy, Medical College of Wisconsin, Milwaukee, WI 53226, USA
- Department of Ophthalmology & Visual Sciences, Medical College of Wisconsin, Milwaukee, WI 53226, USA
- Sina Farsiu
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Department of Ophthalmology, Duke University Medical Center, Durham, NC 27710, USA
25
Georgiou M, Litts KM, Kalitzeos A, Langlo CS, Kane T, Singh N, Kassilian M, Hirji N, Kumaran N, Dubra A, Carroll J, Michaelides M. Adaptive Optics Retinal Imaging in CNGA3-Associated Achromatopsia: Retinal Characterization, Interocular Symmetry, and Intrafamilial Variability. Invest Ophthalmol Vis Sci 2019; 60:383-396. [PMID: 30682209 PMCID: PMC6354941 DOI: 10.1167/iovs.18-25880] [Citation(s) in RCA: 34] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2018] [Accepted: 11/21/2018] [Indexed: 11/24/2022] Open
Abstract
Purpose To investigate retinal structure in subjects with CNGA3-associated achromatopsia and to evaluate disease symmetry and intrafamilial variability. Methods Thirty-eight molecularly confirmed subjects underwent ocular examination, optical coherence tomography (OCT), and nonconfocal split-detection adaptive optics scanning light ophthalmoscopy (AOSLO). OCT scans were used to evaluate foveal hypoplasia, grade foveal ellipsoid zone (EZ) disruption, and measure outer nuclear layer (ONL) thickness. AOSLO images were used to quantify peak foveal cone density, intercell distance (ICD), and the coefficient of variation (CV) of ICD. Results Mean (±SD) age was 25.9 (±13.1) years. Mean (±SD) best corrected visual acuity (BCVA) was 0.87 (±0.14) logarithm of the minimum angle of resolution. Examination with OCT showed variable disruption or loss of the EZ. Seven subjects were evaluated for disease symmetry; peak foveal cone density, ICD, CV, ONL thickness, and BCVA did not differ significantly between eyes. Cross-sectional evaluation of AOSLO imaging showed a mean (±SD) peak foveal cone density of 19,844 (±13,046) cones/mm2. There was a weak negative association between age and peak foveal cone density (r = -0.397, P = 0.102), as well as between EZ grade and age (P = 0.086). Conclusions The remnant cone mosaics were irregular and variably disrupted, with significantly lower peak foveal cone density than in unaffected individuals. Variability was also seen among subjects with identical mutations; subjects should therefore be considered on an individual basis for stratification in clinical trials. Interocular symmetry suggests that both eyes have comparable therapeutic potential and that the fellow eye can serve as a valid control. Longitudinal studies are needed to further examine the weak negative association between age and foveal cone structure observed here.
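Two of the AOSLO mosaic metrics above, intercell distance (ICD) and its coefficient of variation (CV), can be computed directly from cone coordinates. A minimal sketch assuming ICD is taken as each cone's nearest-neighbor distance (the paper's exact neighbor definition may differ); the coordinates are illustrative, not patient data:

```python
import math
import statistics

def intercell_distances(cones):
    """Nearest-neighbor distance for each cone, a simple proxy for ICD."""
    dists = []
    for i, (x1, y1) in enumerate(cones):
        nn = min(math.hypot(x1 - x2, y1 - y2)
                 for j, (x2, y2) in enumerate(cones) if i != j)
        dists.append(nn)
    return dists

# Illustrative coordinates (arbitrary units): a perfectly regular patch
cones = [(0, 0), (2, 0), (0, 2), (2, 2)]
icd = intercell_distances(cones)
mean_icd = statistics.mean(icd)
cv = statistics.pstdev(icd) / mean_icd  # CV of ICD: spread / mean
```

A perfectly regular mosaic like this toy patch gives CV = 0; disrupted mosaics, as in the subjects above, yield larger CV values.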
Affiliation(s)
- Michalis Georgiou: UCL Institute of Ophthalmology, University College London, London, United Kingdom; Moorfields Eye Hospital NHS Foundation Trust, City Road, London, United Kingdom
- Katie M. Litts: Department of Ophthalmology & Visual Sciences, Medical College of Wisconsin, Milwaukee, Wisconsin, United States
- Angelos Kalitzeos: UCL Institute of Ophthalmology, University College London, London, United Kingdom; Moorfields Eye Hospital NHS Foundation Trust, City Road, London, United Kingdom
- Christopher S. Langlo: Department of Ophthalmology & Visual Sciences, Medical College of Wisconsin, Milwaukee, Wisconsin, United States
- Thomas Kane: UCL Institute of Ophthalmology, University College London, London, United Kingdom; Moorfields Eye Hospital NHS Foundation Trust, City Road, London, United Kingdom
- Navjit Singh: UCL Institute of Ophthalmology, University College London, London, United Kingdom; Moorfields Eye Hospital NHS Foundation Trust, City Road, London, United Kingdom
- Melissa Kassilian: UCL Institute of Ophthalmology, University College London, London, United Kingdom; Moorfields Eye Hospital NHS Foundation Trust, City Road, London, United Kingdom
- Nashila Hirji: UCL Institute of Ophthalmology, University College London, London, United Kingdom; Moorfields Eye Hospital NHS Foundation Trust, City Road, London, United Kingdom
- Neruban Kumaran: UCL Institute of Ophthalmology, University College London, London, United Kingdom; Moorfields Eye Hospital NHS Foundation Trust, City Road, London, United Kingdom
- Alfredo Dubra: Department of Ophthalmology, Stanford University, Palo Alto, California, United States
- Joseph Carroll: Department of Ophthalmology & Visual Sciences, Medical College of Wisconsin, Milwaukee, Wisconsin, United States
- Michel Michaelides: UCL Institute of Ophthalmology, University College London, London, United Kingdom; Moorfields Eye Hospital NHS Foundation Trust, City Road, London, United Kingdom
26
Desai AD, Peng C, Fang L, Mukherjee D, Yeung A, Jaffe SJ, Griffin JB, Farsiu S. Open-source, machine and deep learning-based automated algorithm for gestational age estimation through smartphone lens imaging. BIOMEDICAL OPTICS EXPRESS 2018; 9:6038-6052. [PMID: 31065411 PMCID: PMC6491013 DOI: 10.1364/boe.9.006038] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/28/2018] [Revised: 10/25/2018] [Accepted: 10/26/2018] [Indexed: 05/20/2023]
Abstract
Gestational age estimation at time of birth is critical for determining the degree of prematurity of the infant and for administering appropriate postnatal treatment. We present a fully automated algorithm for estimating gestational age of premature infants through smartphone lens imaging of the anterior lens capsule vasculature (ALCV). Our algorithm uses a fully convolutional network and blind image quality analyzers to segment usable anterior capsule regions. Then, it extracts ALCV features using a residual neural network architecture and trains on these features using a support vector machine-based classifier. The classification algorithm is validated using leave-one-out cross-validation on videos captured from 124 neonates. The algorithm is expected to be an influential tool for remote and point-of-care gestational age estimation of premature neonates in low-income countries. To this end, we have made the software open source.
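The leave-one-out cross-validation used to validate the classifier above holds out one sample at a time, trains on the rest, and scores the held-out prediction. A minimal sketch of the procedure; the 1-D features and the nearest-centroid classifier are hypothetical stand-ins for the paper's ResNet-feature/SVM pipeline:

```python
import statistics

def nearest_centroid_predict(train, label_of, x):
    """Classify x by the closest per-class mean of 1-D features
    (a toy stand-in for the paper's SVM-based classifier)."""
    by_class = {}
    for xi in train:
        by_class.setdefault(label_of[xi], []).append(xi)
    centroids = {c: statistics.mean(v) for c, v in by_class.items()}
    return min(centroids, key=lambda c: abs(centroids[c] - x))

def loocv_accuracy(samples, labels):
    """Leave-one-out: train on n-1 samples, test on the held-out one."""
    label_of = dict(zip(samples, labels))
    correct = 0
    for i, x in enumerate(samples):
        train = samples[:i] + samples[i + 1:]
        if nearest_centroid_predict(train, label_of, x) == labels[i]:
            correct += 1
    return correct / len(samples)

# Toy, well-separated 1-D features for two classes (illustrative only)
xs = [1.0, 1.2, 0.9, 5.0, 5.2, 4.8]
ys = ["preterm", "preterm", "preterm", "term", "term", "term"]
acc = loocv_accuracy(xs, ys)
```

Because each fold's test sample never contributes to its own training set, LOOCV gives a nearly unbiased estimate of generalization on small cohorts like the 124 neonates above.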
Affiliation(s)
- Arjun D. Desai: Department of Biomedical Engineering, Duke University, Durham 27708, USA; Department of Computer Science, Duke University, Durham 27708, USA
- Chunlei Peng: Department of Biomedical Engineering, Duke University, Durham 27708, USA; School of Cyber Engineering, Xidian University, Xi'an 710071, China
- Leyuan Fang: Department of Biomedical Engineering, Duke University, Durham 27708, USA
- Dibyendu Mukherjee: Department of Biomedical Engineering, Duke University, Durham 27708, USA
- Andrew Yeung: Department of Biomedical Engineering, Duke University, Durham 27708, USA
- Stephanie J. Jaffe: Department of Biomedical Engineering, Duke University, Durham 27708, USA
- Jennifer B. Griffin: Center for Global Health, RTI International, Research Triangle Park 27709, USA
- Sina Farsiu: Department of Biomedical Engineering, Duke University, Durham 27708, USA; Department of Ophthalmology, Duke University Medical Center, Durham 27710, USA; Department of Computer Science, Duke University, Durham 27708, USA
27
Heisler M, Ju MJ, Bhalla M, Schuck N, Athwal A, Navajas EV, Beg MF, Sarunic MV. Automated identification of cone photoreceptors in adaptive optics optical coherence tomography images using transfer learning. BIOMEDICAL OPTICS EXPRESS 2018; 9:5353-5367. [PMID: 30460133 PMCID: PMC6238943 DOI: 10.1364/boe.9.005353] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/13/2018] [Revised: 10/01/2018] [Accepted: 10/02/2018] [Indexed: 05/11/2023]
Abstract
Automated measurements of the human cone mosaic require the identification of individual cone photoreceptors. The current gold standard, manual labeling, is a tedious process and cannot be done in a clinically useful timeframe. As such, we present an automated algorithm for identifying cone photoreceptors in adaptive optics optical coherence tomography (AO-OCT) images. Our approach fine-tunes a pre-trained convolutional neural network, originally trained on AO scanning laser ophthalmoscope (AO-SLO) images, to work on previously unseen data from a different imaging modality. On average, the automated method correctly identified 94% of manually labeled cones across twenty AO-OCT images acquired from five normal subjects. Voronoi analysis confirmed the general hexagonal-packing structure of the cone mosaic as well as the general cone density variability across portions of the retina. The consistency of our measurements demonstrates the high reliability and practical utility of having an automated solution to this problem.
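Cone density, the quantity whose variability the Voronoi analysis above examines, is simply the cone count divided by the sampled retinal area after converting the image scale to millimeters. A minimal sketch of that unit conversion; the window size and microns-per-pixel scale below are illustrative assumptions, not values from the paper:

```python
def cone_density_per_mm2(n_cones, window_px, microns_per_px):
    """Convert a cone count in a square sampling window to cones/mm^2."""
    side_mm = window_px * microns_per_px / 1000.0  # window side in mm
    return n_cones / (side_mm ** 2)               # count / area

# Illustrative values: 180 cones counted in a 100x100-px window at 5 um/px
# (window side = 500 um = 0.5 mm, so area = 0.25 mm^2)
density = cone_density_per_mm2(180, window_px=100, microns_per_px=5.0)
```

Getting the microns-per-pixel scale right per eye (it depends on axial length) is what makes densities comparable across subjects.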
Affiliation(s)
- Morgan Heisler: Simon Fraser University, Department of Engineering Science, 8888 University Drive, Burnaby, BC, V5A 1S6, Canada
- Myeong Jin Ju: Simon Fraser University, Department of Engineering Science, 8888 University Drive, Burnaby, BC, V5A 1S6, Canada
- Mahadev Bhalla: University of British Columbia, Faculty of Medicine, 317 - 2194 Health Sciences Mall, Vancouver, BC, V6T 1Z3, Canada
- Nathan Schuck: University of British Columbia, Faculty of Medicine, 317 - 2194 Health Sciences Mall, Vancouver, BC, V6T 1Z3, Canada
- Arman Athwal: Simon Fraser University, Department of Engineering Science, 8888 University Drive, Burnaby, BC, V5A 1S6, Canada
- Eduardo V. Navajas: University of British Columbia, Department of Ophthalmology & Vision Science, 2550 Willow Street, Vancouver, BC, V5Z 3N9, Canada
- Mirza Faisal Beg: Simon Fraser University, Department of Engineering Science, 8888 University Drive, Burnaby, BC, V5A 1S6, Canada
- Marinko V. Sarunic: Simon Fraser University, Department of Engineering Science, 8888 University Drive, Burnaby, BC, V5A 1S6, Canada