1. Ahn SJ. Retinal Thickness Analysis Using Optical Coherence Tomography: Diagnostic and Monitoring Applications in Retinal Diseases. Diagnostics (Basel) 2025; 15:833. [PMID: 40218183; PMCID: PMC11988421; DOI: 10.3390/diagnostics15070833]
Abstract
Retinal thickness analysis using optical coherence tomography (OCT) has become an indispensable tool in retinal disease management, providing high-resolution quantitative data for diagnosis, monitoring, and treatment planning. Across a wide range of retinal diseases, it enables precise disease characterization and treatment evaluation. This paper explores its applications in major retinal conditions, including age-related macular degeneration, diabetic retinopathy, retinal vein occlusion, and inherited retinal diseases, and highlights emerging roles in neurodegenerative diseases and retinal drug toxicity. Despite challenges such as measurement variability, segmentation errors, and interpretation difficulties, advances in artificial intelligence and machine learning have substantially improved accuracy and efficiency. Integration of retinal thickness analysis with telemedicine platforms and standardized protocols further underscores its potential for delivering personalized care and enabling early detection of ocular and systemic diseases. Retinal thickness analysis thus plays a pivotal and growing role in both clinical practice and research, bridging ophthalmology and broader medical fields.
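The quantitative output referred to above is typically summarized as mean thickness within the sectors of the ETDRS grid (a central 1 mm subfield surrounded by 1-3 mm and 3-6 mm rings). The sketch below illustrates that reduction for a generic macular thickness map; it is a minimal illustration, not code from the paper, and the 6 x 6 mm scan geometry and function name are assumptions.

```python
import numpy as np

def etdrs_sector_means(thickness_um: np.ndarray, scan_width_mm: float = 6.0):
    """Mean retinal thickness in the three ETDRS rings of a macular scan.

    thickness_um: 2D map (rows x cols) of retinal thickness in micrometers,
                  assumed fovea-centered and covering a square
                  scan_width_mm x scan_width_mm area.
    Returns means for the central 1 mm subfield, the 1-3 mm inner ring,
    and the 3-6 mm outer ring.
    """
    rows, cols = thickness_um.shape
    y, x = np.mgrid[0:rows, 0:cols]
    # Radial distance of every pixel from the map center, in millimeters.
    cy, cx = (rows - 1) / 2.0, (cols - 1) / 2.0
    mm_per_px_y = scan_width_mm / rows
    mm_per_px_x = scan_width_mm / cols
    r_mm = np.hypot((y - cy) * mm_per_px_y, (x - cx) * mm_per_px_x)

    central = thickness_um[r_mm <= 0.5].mean()                 # 1 mm central subfield
    inner = thickness_um[(r_mm > 0.5) & (r_mm <= 1.5)].mean()  # 1-3 mm ring
    outer = thickness_um[(r_mm > 1.5) & (r_mm <= 3.0)].mean()  # 3-6 mm ring
    return {"central_1mm": central, "inner_3mm": inner, "outer_6mm": outer}
```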
Affiliation(s)
- Seong Joon Ahn
- Department of Ophthalmology, Hanyang University Hospital, Hanyang University College of Medicine, Seoul 04763, Republic of Korea
2. Remtulla R, Samet A, Kulbay M, Akdag A, Hocini A, Volniansky A, Kahn Ali S, Qian CX. A Future Picture: A Review of Current Generative Adversarial Neural Networks in Vitreoretinal Pathologies and Their Future Potentials. Biomedicines 2025; 13:284. [PMID: 40002698; PMCID: PMC11852121; DOI: 10.3390/biomedicines13020284]
Abstract
Machine learning has transformed ophthalmology, particularly through predictive and discriminative models for vitreoretinal pathologies. However, generative modeling, especially generative adversarial networks (GANs), remains underexplored. GANs consist of two neural networks, a generator and a discriminator, that are trained in opposition to synthesize highly realistic images. These synthetic images can enhance diagnostic accuracy, expand the capabilities of imaging technologies, and support the prediction of treatment responses. GANs have already been applied to fundus imaging, optical coherence tomography (OCT), and fluorescein angiography (FA). Despite their potential, GANs face challenges in reliability and accuracy. This review explores GAN architectures, their advantages over other deep learning models, and their clinical applications in retinal disease diagnosis and treatment monitoring. Furthermore, we discuss the limitations of current GAN models and propose novel applications combining GANs with OCT, OCT angiography, fluorescein angiography, fundus imaging, electroretinograms, visual fields, and indocyanine green angiography.
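For readers unfamiliar with the adversarial setup described above, the following minimal PyTorch sketch shows how a generator and a discriminator are trained in opposition; the toy layer sizes, image dimensions, and names are illustrative and do not correspond to any model covered in the review.

```python
import torch
import torch.nn as nn

# Toy generator/discriminator for flattened 64x64 grayscale patches (e.g., OCT-like images).
G = nn.Sequential(nn.Linear(128, 4096), nn.ReLU(), nn.Linear(4096, 64 * 64), nn.Tanh())
D = nn.Sequential(nn.Linear(64 * 64, 512), nn.LeakyReLU(0.2), nn.Linear(512, 1))

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))

def gan_step(real: torch.Tensor):
    """One adversarial update: D learns to separate real from fake,
    G learns to fool D. `real` has shape (batch, 64*64), scaled to [-1, 1]."""
    b = real.size(0)

    # --- discriminator update ---
    fake = G(torch.randn(b, 128)).detach()
    d_loss = bce(D(real), torch.ones(b, 1)) + bce(D(fake), torch.zeros(b, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # --- generator update (non-saturating loss) ---
    fake = G(torch.randn(b, 128))
    g_loss = bce(D(fake), torch.ones(b, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```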
Affiliation(s)
- Raheem Remtulla
- Department of Ophthalmology & Visual Sciences, McGill University, Montreal, QC H4A 3S5, Canada
- Adam Samet
- Department of Ophthalmology & Visual Sciences, McGill University, Montreal, QC H4A 3S5, Canada
- Merve Kulbay
- Department of Ophthalmology & Visual Sciences, McGill University, Montreal, QC H4A 3S5, Canada
- Centre de Recherche de l’Hôpital Maisonneuve-Rosemont, Université de Montréal, Montreal, QC H1T 2M4, Canada
- Arjin Akdag
- Faculty of Medicine and Health Sciences, McGill University, Montreal, QC H3G 2M1, Canada
- Adam Hocini
- Faculty of Medicine, Université de Montréal, Montreal, QC H3T 1J4, Canada
- Anton Volniansky
- Department of Psychiatry, Université Laval, Quebec City, QC G1V 0A6, Canada
- Shigufa Kahn Ali
- Centre de Recherche de l’Hôpital Maisonneuve-Rosemont, Université de Montréal, Montreal, QC H1T 2M4, Canada
- Department of Ophthalmology, Centre Universitaire d’Ophtalmologie (CUO), Hôpital Maisonneuve-Rosemont, Université de Montréal, Montreal, QC H1T 2M4, Canada
- Cynthia X. Qian
- Centre de Recherche de l’Hôpital Maisonneuve-Rosemont, Université de Montréal, Montreal, QC H1T 2M4, Canada
- Department of Ophthalmology, Centre Universitaire d’Ophtalmologie (CUO), Hôpital Maisonneuve-Rosemont, Université de Montréal, Montreal, QC H1T 2M4, Canada
3. Diao S, Yin Z, Chen X, Li M, Zhu W, Mateen M, Xu X, Shi F, Fan Y. Two-stage adversarial learning based unsupervised domain adaptation for retinal OCT segmentation. Med Phys 2024; 51:5374-5385. [PMID: 38426594; DOI: 10.1002/mp.17012]
Abstract
BACKGROUND: Deep learning-based optical coherence tomography (OCT) segmentation methods have achieved excellent results, allowing quantitative analysis of large-scale data. However, OCT images are often acquired by different devices or under different imaging protocols, which leads to a serious domain shift problem and, in turn, to performance degradation of segmentation models.
PURPOSE: To address the domain shift problem, we propose a two-stage adversarial learning-based network (TSANet) that accomplishes unsupervised cross-domain OCT segmentation.
METHODS: In the first stage, a Fourier transform-based approach is adopted to reduce image style differences at the image level. Adversarial learning networks, comprising a segmenter and a discriminator, are then designed to achieve inter-domain consistency in the segmentation output. In the second stage, pseudo labels of selected unlabeled target-domain training data are used to fine-tune the segmenter, further improving its generalization capability. The proposed method was tested on cross-domain datasets for choroid and retinoschisis segmentation tasks. For choroid segmentation, the model was trained on 400 images and validated on 100 images from the source domain, then trained on 1320 unlabeled images and tested on 330 images from target domain I, and trained on 400 unlabeled images and tested on 200 images from target domain II. For retinoschisis segmentation, the model was trained on 1284 images and validated on 312 images from the source domain, then trained on 1024 unlabeled images and tested on 200 images from the target domain.
RESULTS: The proposed method achieved significantly better results than the model without domain adaptation, with improvements of 8.34%, 55.82%, and 3.53% in intersection over union (IoU) on the three test sets, respectively, and outperformed several state-of-the-art domain adaptation methods.
CONCLUSIONS: The proposed TSANet, combining image-level adaptation, feature-level adaptation, and pseudo-label-based fine-tuning, achieved excellent cross-domain generalization. This alleviates the burden of obtaining additional manual labels when adapting deep learning models to new OCT data.
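The abstract does not spell out the Fourier-based step, but a widely used way to reduce image-level style differences is to swap the low-frequency amplitude spectrum of a source-domain B-scan with that of a target-domain B-scan while keeping the source phase. The NumPy sketch below illustrates that idea under the assumption of equally sized 2D scans; the function name and the beta parameter are illustrative, not the paper's implementation.

```python
import numpy as np

def fourier_style_transfer(src: np.ndarray, tgt: np.ndarray, beta: float = 0.05) -> np.ndarray:
    """Give a source-domain OCT B-scan the low-frequency 'style' of a target-domain scan.

    Both inputs are 2D float arrays of the same shape. A centred low-frequency
    block of the amplitude spectrum (half-width = beta * image size) is copied
    from target to source, while the source phase, which carries most of the
    anatomical structure, is kept unchanged.
    """
    fft_src = np.fft.fftshift(np.fft.fft2(src))
    fft_tgt = np.fft.fftshift(np.fft.fft2(tgt))
    amp_src, phase_src = np.abs(fft_src), np.angle(fft_src)
    amp_tgt = np.abs(fft_tgt)

    h, w = src.shape
    bh, bw = int(beta * h), int(beta * w)
    cy, cx = h // 2, w // 2
    # Replace only the central (low-frequency) amplitude block.
    amp_src[cy - bh:cy + bh + 1, cx - bw:cx + bw + 1] = \
        amp_tgt[cy - bh:cy + bh + 1, cx - bw:cx + bw + 1]

    mixed = amp_src * np.exp(1j * phase_src)
    return np.fft.ifft2(np.fft.ifftshift(mixed)).real
```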
Affiliation(s)
- Shengyong Diao
- MIPAV Lab, the School of Electronics and Information Engineering, Soochow University, Suzhou, China
- Ziting Yin
- MIPAV Lab, the School of Electronics and Information Engineering, Soochow University, Suzhou, China
- Xinjian Chen
- MIPAV Lab, the School of Electronics and Information Engineering, Soochow University, Suzhou, China
- The State Key Laboratory of Radiation Medicine and Protection, Soochow University, Suzhou, China
- Menghan Li
- Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Weifang Zhu
- MIPAV Lab, the School of Electronics and Information Engineering, Soochow University, Suzhou, China
- Muhammad Mateen
- MIPAV Lab, the School of Electronics and Information Engineering, Soochow University, Suzhou, China
- Xun Xu
- Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Fei Shi
- MIPAV Lab, the School of Electronics and Information Engineering, Soochow University, Suzhou, China
- Ying Fan
- Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
4. Kumari S, Singh P. Deep learning for unsupervised domain adaptation in medical imaging: Recent advancements and future perspectives. Comput Biol Med 2024; 170:107912. [PMID: 38219643; DOI: 10.1016/j.compbiomed.2023.107912]
Abstract
Deep learning has demonstrated remarkable performance across various tasks in medical imaging. However, these approaches primarily focus on supervised learning, assuming that the training and testing data are drawn from the same distribution. Unfortunately, this assumption often does not hold in practice. To address this issue, unsupervised domain adaptation (UDA) techniques have been developed to transfer knowledge from a labeled domain to a related but unlabeled domain. In recent years, significant advancements have been made in UDA, resulting in a wide range of methodologies, including feature alignment, image translation, self-supervision, and disentangled representation methods, among others. In this paper, we provide a comprehensive literature review of recent deep UDA approaches in medical imaging from a technical perspective. Specifically, we categorize current UDA research in medical imaging into six groups and further divide them into finer subcategories based on the different tasks they perform. We also discuss the respective datasets used in the studies to assess the divergence between the different domains. Finally, we discuss emerging areas and conclude the survey with insights on future research directions.
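As a concrete instance of the feature-alignment family mentioned above, the sketch below computes a squared maximum mean discrepancy (MMD) between encoder features from the source and target domains, which can be added to the supervised loss as an alignment penalty. This is a generic illustration rather than a method from any surveyed paper; the kernel bandwidth and the `encoder` in the usage comment are assumptions.

```python
import torch

def mmd_rbf(feat_src: torch.Tensor, feat_tgt: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Squared maximum mean discrepancy with an RBF kernel.

    feat_src, feat_tgt: (n, d) and (m, d) feature batches produced by the same
    encoder on source- and target-domain images. Adding this term to the usual
    supervised loss pulls the two feature distributions together.
    """
    def rbf(a, b):
        d2 = torch.cdist(a, b) ** 2
        return torch.exp(-d2 / (2 * sigma ** 2))

    k_ss = rbf(feat_src, feat_src).mean()
    k_tt = rbf(feat_tgt, feat_tgt).mean()
    k_st = rbf(feat_src, feat_tgt).mean()
    return k_ss + k_tt - 2 * k_st

# Example usage: total_loss = task_loss + 0.1 * mmd_rbf(encoder(x_src), encoder(x_tgt))
```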
Affiliation(s)
- Suruchi Kumari
- Department of Computer Science and Engineering, Indian Institute of Technology Roorkee, India.
- Pravendra Singh
- Department of Computer Science and Engineering, Indian Institute of Technology Roorkee, India.
5. Vrettos K, Koltsakis E, Zibis AH, Karantanas AH, Klontzas ME. Generative adversarial networks for spine imaging: A critical review of current applications. Eur J Radiol 2024; 171:111313. [PMID: 38237518; DOI: 10.1016/j.ejrad.2024.111313]
Abstract
PURPOSE: In recent years, the field of medical imaging has witnessed remarkable advancements, with innovative technologies that have revolutionized the visualization and analysis of the human spine. Among these developments, Generative Adversarial Networks (GANs) have emerged as a transformative tool, offering unprecedented possibilities for enhancing spinal imaging techniques and diagnostic outcomes. This review provides a comprehensive overview of the use of GANs in spinal imaging and emphasizes their potential to improve the diagnosis and treatment of spine-related disorders. A review dedicated to GANs in spine imaging is needed because the unique challenges, applications, and advancements of this domain are not fully addressed in broader reviews of GANs in general medical imaging; a focused review can highlight the tailored solutions and innovations that GANs bring to spinal imaging.
METHODS: An extensive literature search covering 2017 to July 2023 was conducted using the major search engines to identify studies that applied GANs to spinal imaging.
RESULTS: Reported applications include generating fat-suppressed T2-weighted (fsT2W) images from T1- and T2-weighted sequences to reduce scan time; the generated images had significantly better image quality than true fsT2W images and could improve diagnostic accuracy for certain pathologies. GANs have also been used to generate virtual thin-slice images of intervertebral spaces, create digital twins of human vertebrae, and predict fracture response. Finally, GANs can convert CT to MRI-like images, with the potential to provide near-MR contrast without an MRI acquisition.
CONCLUSIONS: GANs have promising applications in personalized medicine, image augmentation, and improved diagnostic accuracy. However, limitations such as small databases and misalignment of CT-MRI pairs must be considered.
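The CT-to-MRI and fsT2W synthesis applications described above are paired image-to-image translation tasks. A common formulation, sketched below, is a pix2pix-style generator objective that combines an adversarial term with a pixel-wise L1 term; the discriminator `D`, the tensor names, and the weight `lam` are illustrative and not taken from any reviewed study. The reliance on co-registered pairs is also why the misalignment limitation noted in the conclusions matters.

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

def generator_translation_loss(D, fake_mr: torch.Tensor, real_mr: torch.Tensor,
                               source_ct: torch.Tensor, lam: float = 100.0) -> torch.Tensor:
    """Paired translation objective: fool the discriminator on (input, output)
    pairs while staying close to the ground-truth target image.

    D         - discriminator taking a channel-concatenated (source, image) pair
    fake_mr   - generator output for source_ct
    real_mr   - co-registered ground-truth MR image
    lam       - weight of the pixel-wise L1 term (100 is a common choice)
    """
    pred_fake = D(torch.cat([source_ct, fake_mr], dim=1))
    adv = bce(pred_fake, torch.ones_like(pred_fake))   # adversarial term
    rec = l1(fake_mr, real_mr)                         # pixel-wise fidelity term
    return adv + lam * rec
```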
Affiliation(s)
- Konstantinos Vrettos
- Department of Radiology, School of Medicine, University of Crete, Voutes Campus, Heraklion, Greece
- Emmanouil Koltsakis
- Department of Radiology, Karolinska University Hospital, Solna, Stockholm, Sweden
- Aristeidis H Zibis
- Department of Anatomy, Medical School, University of Thessaly, Larissa, Greece
- Apostolos H Karantanas
- Department of Radiology, School of Medicine, University of Crete, Voutes Campus, Heraklion, Greece; Computational BioMedicine Laboratory, Institute of Computer Science, Foundation for Research and Technology (FORTH), Heraklion, Crete, Greece; Department of Medical Imaging, University Hospital of Heraklion, Heraklion, Crete, Greece
- Michail E Klontzas
- Department of Radiology, School of Medicine, University of Crete, Voutes Campus, Heraklion, Greece; Computational BioMedicine Laboratory, Institute of Computer Science, Foundation for Research and Technology (FORTH), Heraklion, Crete, Greece; Department of Medical Imaging, University Hospital of Heraklion, Heraklion, Crete, Greece
6. Ma D, Deng W, Khera Z, Sajitha TA, Wang X, Wollstein G, Schuman JS, Lee S, Shi H, Ju MJ, Matsubara J, Beg MF, Sarunic M, Sappington RM, Chan KC. Early inner plexiform layer thinning and retinal nerve fiber layer thickening in excitotoxic retinal injury using deep learning-assisted optical coherence tomography. Acta Neuropathol Commun 2024; 12:19. [PMID: 38303097; PMCID: PMC10835918; DOI: 10.1186/s40478-024-01732-z]
Abstract
Excitotoxicity from the impairment of glutamate uptake constitutes an important mechanism in neurodegenerative diseases such as Alzheimer's disease, multiple sclerosis, and Parkinson's disease. Within the eye, excitotoxicity is thought to play a critical role in retinal ganglion cell death in glaucoma, diabetic retinopathy, retinal ischemia, and optic nerve injury, yet how excitotoxic injury impacts different retinal layers is not well understood. Here, we investigated the longitudinal effects of N-methyl-D-aspartate (NMDA)-induced excitotoxic retinal injury in a rat model using deep learning-assisted retinal layer thickness estimation. Before and after unilateral intravitreal NMDA injection in nine adult Long Evans rats, spectral-domain optical coherence tomography (OCT) was used to acquire volumetric retinal images in both eyes over 4 weeks. Ten retinal layers were automatically segmented from the OCT data using our deep learning-based algorithm. Retinal degeneration was evaluated using layer-specific retinal thickness changes at each time point (before injection and at 3, 7, and 28 days after injection). Within the inner retina, our OCT results showed that retinal thinning occurred first in the inner plexiform layer at 3 days after NMDA injection, followed by the inner nuclear layer at 7 days post-injury. In contrast, the retinal nerve fiber layer exhibited an initial thickening 3 days after NMDA injection, followed by normalization and thinning up to 4 weeks post-injury. Our results demonstrated the pathological cascades of NMDA-induced neurotoxicity across different layers of the retina. The early inner plexiform layer thinning suggests early dendritic shrinkage, whereas the initial retinal nerve fiber layer thickening before subsequent normalization and thinning indicates early inflammation before axonal loss and cell death. These findings implicate the inner plexiform layer as an early imaging biomarker of excitotoxic retinal degeneration, whereas caution is warranted when interpreting the ganglion cell complex, which combines retinal nerve fiber layer, ganglion cell layer, and inner plexiform layer thicknesses, in conventional OCT measures. Deep learning-assisted retinal layer segmentation and longitudinal OCT monitoring can help evaluate the different phases of retinal layer damage upon excitotoxicity.
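The layer-specific thickness measurements underlying these findings reduce, per B-scan, to counting the segmented pixels of each layer in every A-scan and scaling by the axial pixel size. The sketch below illustrates that step for a generic label map; the function name and the layer-ID mapping are assumptions for illustration, not the study's actual pipeline.

```python
import numpy as np

def layer_thickness_um(label_map: np.ndarray, axial_um_per_px: float,
                       layer_ids: dict) -> dict:
    """Mean thickness of each segmented retinal layer in a single B-scan.

    label_map       - 2D integer array (depth x A-scans); each pixel holds a layer ID
    axial_um_per_px - axial sampling of the OCT device in micrometers per pixel
    layer_ids       - hypothetical mapping, e.g. {"RNFL": 1, "GCL": 2, "IPL": 3, "INL": 4}

    Thickness per A-scan is the pixel count of a layer in that column times the
    axial pixel size; values are then averaged across A-scans.
    """
    results = {}
    for name, lid in layer_ids.items():
        px_per_ascan = (label_map == lid).sum(axis=0)  # pixels of this layer in each column
        results[name] = float(px_per_ascan.mean() * axial_um_per_px)
    return results
```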
Affiliation(s)
- Da Ma
- Wake Forest University School of Medicine, 1 Medical Center Blvd, Winston-Salem, NC, 27157, USA.
- Wake Forest University Health Sciences, Winston-Salem, NC, USA.
- Translational Eye and Vision Research Center, Wake Forest University School of Medicine, Winston-Salem, NC, USA.
- School of Engineering Science, Simon Fraser University, Burnaby, BC, Canada.
- Wenyu Deng
- Department of Ophthalmology, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY, USA
- Department of Ophthalmology, SUNY Downstate Medical Center, Brooklyn, NY, USA
- Zain Khera
- Department of Ophthalmology, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY, USA
- Thajunnisa A Sajitha
- Department of Ophthalmology, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY, USA
- Xinlei Wang
- Department of Ophthalmology, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY, USA
- Gadi Wollstein
- Department of Ophthalmology, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY, USA
- Center for Neural Science, College of Arts and Science, New York University, New York, NY, USA
- Department of Biomedical Engineering, Tandon School of Engineering, New York University, Brooklyn, NY, USA
- Joel S Schuman
- Department of Ophthalmology, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY, USA
- Center for Neural Science, College of Arts and Science, New York University, New York, NY, USA
- Department of Biomedical Engineering, Tandon School of Engineering, New York University, Brooklyn, NY, USA
- Wills Eye Hospital, Philadelphia, PA, USA
- Department of Biomedical Engineering, Drexel University, Philadelphia, PA, USA
- Neuroscience Institute, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY, USA
- Sieun Lee
- School of Engineering Science, Simon Fraser University, Burnaby, BC, Canada
- Department of Ophthalmology and Visual Sciences, The University of British Columbia, Vancouver, BC, Canada
- Mental Health and Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham, UK
- Haolun Shi
- Department of Statistics and Actuarial Science, Simon Fraser University, Burnaby, BC, Canada
- Myeong Jin Ju
- Department of Ophthalmology and Visual Sciences, The University of British Columbia, Vancouver, BC, Canada
- Joanne Matsubara
- Department of Ophthalmology and Visual Sciences, The University of British Columbia, Vancouver, BC, Canada
- Mirza Faisal Beg
- School of Engineering Science, Simon Fraser University, Burnaby, BC, Canada
- Marinko Sarunic
- Institute of Ophthalmology, University College London, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Rebecca M Sappington
- Wake Forest University School of Medicine, 1 Medical Center Blvd, Winston-Salem, NC, 27157, USA
- Wake Forest University Health Sciences, Winston-Salem, NC, USA
- Translational Eye and Vision Research Center, Wake Forest University School of Medicine, Winston-Salem, NC, USA
- Kevin C Chan
- Department of Ophthalmology, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY, USA.
- Center for Neural Science, College of Arts and Science, New York University, New York, NY, USA.
- Department of Biomedical Engineering, Tandon School of Engineering, New York University, Brooklyn, NY, USA.
- Neuroscience Institute, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY, USA.
- Department of Radiology, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY, USA.
7. Leingang O, Riedl S, Mai J, Reiter GS, Faustmann G, Fuchs P, Scholl HPN, Sivaprasad S, Rueckert D, Lotery A, Schmidt-Erfurth U, Bogunović H. Automated deep learning-based AMD detection and staging in real-world OCT datasets (PINNACLE study report 5). Sci Rep 2023; 13:19545. [PMID: 37945665; PMCID: PMC10636170; DOI: 10.1038/s41598-023-46626-7]
Abstract
Real-world retinal optical coherence tomography (OCT) scans are available in abundance in primary and secondary eye care centres and contain a wealth of information to be analyzed in retrospective studies, yet the associated electronic health records alone are often not enough to generate a high-quality dataset for clinical, statistical, and machine learning analysis. We have developed a deep learning-based age-related macular degeneration (AMD) stage classifier to efficiently identify the first onset of the early/intermediate (iAMD), atrophic (GA), and neovascular (nAMD) stages of AMD in retrospective data. We trained a two-stage convolutional neural network to classify macula-centered 3D volumes from Topcon OCT images into four classes: normal, iAMD, GA, and nAMD. In the first stage, a 2D ResNet50 is trained to identify the disease categories on individual OCT B-scans, while in the second stage, four smaller models (ResNets) use the concatenated B-scan-wise outputs from the first stage to classify the entire OCT volume. Classification uncertainty estimates are generated with Monte Carlo dropout at inference time. The model was trained on a real-world OCT dataset of 3765 scans from 1849 eyes and extensively evaluated, reaching an average ROC-AUC of 0.94 on a real-world test set.
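Monte Carlo dropout, used here for uncertainty estimation, keeps the dropout layers stochastic at inference time, averages the class probabilities over repeated forward passes, and treats the spread (for example, predictive entropy) as an uncertainty signal. The PyTorch sketch below illustrates the mechanism only; the function and variable names are placeholders, and the exact PINNACLE implementation may differ.

```python
import torch
import torch.nn.functional as F

def mc_dropout_predict(model: torch.nn.Module, volume_feats: torch.Tensor, T: int = 20):
    """Monte Carlo dropout inference: keep dropout active at test time,
    run T stochastic forward passes, and summarize the mean class probabilities
    plus predictive entropy as an uncertainty estimate."""
    model.eval()
    for m in model.modules():                 # re-enable only the plain dropout layers
        if isinstance(m, torch.nn.Dropout):
            m.train()

    with torch.no_grad():
        probs = torch.stack([F.softmax(model(volume_feats), dim=-1) for _ in range(T)])
    mean_probs = probs.mean(dim=0)                                    # (batch, n_classes)
    entropy = -(mean_probs * torch.log(mean_probs + 1e-12)).sum(-1)   # uncertainty per sample
    return mean_probs.argmax(dim=-1), entropy
```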
Affiliation(s)
- Oliver Leingang
- Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Sophie Riedl
- Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Julia Mai
- Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Gregor S Reiter
- Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Georg Faustmann
- Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Christian Doppler Lab for Artificial Intelligence in Retina, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Philipp Fuchs
- Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Hendrik P N Scholl
- Institute of Molecular and Clinical Ophthalmology Basel, Basel, Switzerland
- Department of Ophthalmology, University of Basel, Basel, Switzerland
- Sobha Sivaprasad
- NIHR Moorfields Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Daniel Rueckert
- BioMedIA, Imperial College London, London, UK
- Institute for AI and Informatics in Medicine, Klinikum rechts der Isar, Technical University Munich, Munich, Germany
- Andrew Lotery
- Clinical and Experimental Sciences, Faculty of Medicine, University of Southampton, Southampton, UK
- Ursula Schmidt-Erfurth
- Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Hrvoje Bogunović
- Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria.
- Christian Doppler Lab for Artificial Intelligence in Retina, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria.