1
Doorly R, Ong J, Waisberg E, Sarker P, Zaman N, Tavakkoli A, Lee AG. Applications of generative adversarial networks in the diagnosis, prognosis, and treatment of ophthalmic diseases. Graefes Arch Clin Exp Ophthalmol 2025. PMID: 40263170. DOI: 10.1007/s00417-025-06830-9.
Abstract
PURPOSE Generative adversarial networks (GANs) are key components of many artificial intelligence (AI) systems that are applied to image-informed bioengineering and medicine. GANs combat key limitations facing deep learning models: small, unbalanced datasets containing few images of severe disease. The predictive capacity of conditional GANs may also be extremely useful in managing disease on an individual basis. This narrative review focuses on the application of GANs in ophthalmology, in order to provide a critical account of the current state and ongoing challenges for healthcare professionals and allied scientists who are interested in this rapidly evolving field. METHODS We performed a search of studies that apply GANs to the diagnosis, therapy and prognosis of eight eye diseases. These disparate tasks were selected to highlight developments in GAN techniques, and their differences and common features, to aid practitioners and future adopters in the field of ophthalmology. RESULTS The studies we identified show that GANs have demonstrated the capacity to generate realistic and useful synthetic images, convert image modality, improve image quality, enhance extraction of relevant features, and provide prognostic predictions based on input images and other relevant data. CONCLUSION The broad range of architectures considered describes how GAN technology is evolving to meet different challenges (including segmentation and multi-modal imaging) that are of particular relevance to ophthalmology. The wide availability of datasets now facilitates the entry of new researchers to the field. However, mainstream adoption of GAN technology for clinical use remains contingent on larger public datasets for widespread validation and on the necessary regulatory oversight.
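The adversarial setup this review describes, a generator learning to synthesize images that a discriminator cannot tell apart from real ones, can be illustrated with a minimal training loop. This is a generic sketch, not a model from any of the reviewed studies; the network sizes, the 64x64 grayscale patches, and the optimizer settings are illustrative assumptions.

```python
# Minimal GAN training sketch (illustrative only; not a model from the review).
# Assumes 64x64 grayscale image patches; all layer sizes are arbitrary choices.
import torch
import torch.nn as nn

latent_dim = 100

generator = nn.Sequential(           # maps a noise vector to a synthetic 64x64 image
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 64 * 64), nn.Tanh(),
)
discriminator = nn.Sequential(       # scores an image as real (1) or synthetic (0)
    nn.Linear(64 * 64, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

criterion = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor):
    """One adversarial update: the discriminator learns real vs. fake, the generator learns to fool it."""
    batch = real_images.size(0)
    real = real_images.view(batch, -1)
    noise = torch.randn(batch, latent_dim)
    fake = generator(noise)

    # Discriminator update
    opt_d.zero_grad()
    loss_d = criterion(discriminator(real), torch.ones(batch, 1)) + \
             criterion(discriminator(fake.detach()), torch.zeros(batch, 1))
    loss_d.backward()
    opt_d.step()

    # Generator update
    opt_g.zero_grad()
    loss_g = criterion(discriminator(fake), torch.ones(batch, 1))
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()

# Example with a dummy batch standing in for real image patches.
d_loss, g_loss = train_step(torch.rand(16, 1, 64, 64))
```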
Affiliation(s)
- Joshua Ong
- Department of Ophthalmology and Visual Sciences, University of Michigan Kellogg Eye Center, Ann Arbor, MI, USA
- Prithul Sarker
- Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, Reno, NV, USA
- Nasif Zaman
- Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, Reno, NV, USA
- Alireza Tavakkoli
- Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, Reno, NV, USA
- Andrew G Lee
- Center for Space Medicine, Baylor College of Medicine, Houston, TX, USA
- Department of Ophthalmology, Blanton Eye Institute, Houston Methodist Hospital, Houston, TX, USA
- The Houston Methodist Research Institute, Houston Methodist Hospital, Houston, TX, USA
- Departments of Ophthalmology, Neurology, and Neurosurgery, Weill Cornell Medicine, New York, NY, USA
- Department of Ophthalmology, University of Texas Medical Branch, Galveston, TX, USA
- University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Texas A&M School of Medicine, Bryan, TX, USA
- Department of Ophthalmology, The University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA
2
Yang X, Xu F, Yu H, Li Z, Yu X, Li Z, Zhang L, Liu J, Wang S, Liu S, Hong J, Li J. Prediction of OCT contours of short-term response to anti-VEGF treatment for diabetic macular edema using generative adversarial networks. Photodiagnosis Photodyn Ther 2025; 52:104482. PMID: 39826600. DOI: 10.1016/j.pdpdt.2025.104482.
Abstract
Diabetic macular edema (DME) is a leading cause of vision loss among the working-age population. Anti-vascular endothelial growth factor (VEGF) agents are currently recognized as the first-line treatment. However, a significant portion of patients remain insensitive to anti-VEGF therapy, resulting in sustained visual impairment, so it is imperative to predict prognosis and formulate personalized therapeutic regimens. Generative adversarial networks (GANs) have performed remarkably well in forecasting disease prognosis, yet their performance is still constrained by the limited availability of real-world data and by suboptimal image quality, which in turn affect the models' outputs. We employ preoperative images, together with postoperative OCT contours annotated and extracted with LabelMe and OpenCV, to train a model that generates postoperative contours of critical OCT structures rather than the whole retinal morphology, which considerably simplifies the output space and reduces the quantity of training data required. Our study shows that the GAN could serve as an auxiliary instrument for ophthalmologists in determining the prognosis of individual patients and screening patients with poor responses to anti-VEGF therapy.
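The contour-extraction step described above (annotations converted to structure outlines with OpenCV) can be sketched as follows. This is a hedged illustration, not the authors' code; the file name, the single binary mask per structure, and the threshold value are assumptions.

```python
# Sketch of extracting structure contours from a segmentation mask with OpenCV,
# in the spirit of the LabelMe/OpenCV pipeline described above.
import cv2
import numpy as np

def extract_contour_image(mask_path: str) -> np.ndarray:
    """Return a black image with the outer contours of the annotated structure drawn in white."""
    mask = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE)
    if mask is None:
        raise FileNotFoundError(mask_path)
    _, binary = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    canvas = np.zeros_like(binary)
    cv2.drawContours(canvas, contours, -1, color=255, thickness=1)
    return canvas

# Example: build the contour target that a GAN could be trained to generate
# (the file name is hypothetical).
# contour_target = extract_contour_image("postop_retina_mask.png")
```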
Affiliation(s)
- Xueying Yang
- Department of Ophthalmology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan, PR China
- Fabao Xu
- Department of Ophthalmology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan, PR China
- Han Yu
- Department of Ophthalmology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan, PR China
- Zhongwen Li
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, PR China
- Xuechen Yu
- Department of Ophthalmology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan, PR China
- Zhiwen Li
- Department of Ophthalmology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan, PR China
- Li Zhang
- Department of Ophthalmology, The Central Hospital of Wuhan, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, PR China
- Jie Liu
- Department of Endocrinology, People's Hospital of Zoucheng, Jining, PR China
- Shaopeng Wang
- Zibo Central Hospital, Binzhou Medical University, Zibo, Shandong province, PR China
- Shaopeng Liu
- School of Computer Science, Guangdong Polytechnic Normal University, Guangzhou, PR China
- Jiaming Hong
- School of Medical Information Engineering, Guangzhou University of Chinese Medicine, Guangzhou, Guangdong, PR China
- Jianqiao Li
- Department of Ophthalmology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan, PR China
3
Phipps B, Hadoux X, Sheng B, Campbell JP, Liu TYA, Keane PA, Cheung CY, Chung TY, Wong TY, van Wijngaarden P. AI image generation technology in ophthalmology: Use, misuse and future applications. Prog Retin Eye Res 2025; 106:101353. PMID: 40107410. DOI: 10.1016/j.preteyeres.2025.101353.
Abstract
BACKGROUND AI-powered image generation technology holds the potential to reshape medical practice, yet it remains an unfamiliar technology for both medical researchers and clinicians alike. Given the adoption of this technology relies on clinician understanding and acceptance, we sought to demystify its use in ophthalmology. To this end, we present a literature review on image generation technology in ophthalmology, examining both its theoretical applications and future role in clinical practice. METHODS First, we consider the key model designs used for image synthesis, including generative adversarial networks, autoencoders, and diffusion models. We then perform a survey of the literature for image generation technology in ophthalmology prior to September 2024, presenting both the type of model used and its clinical application. Finally, we discuss the limitations of this technology, the risks of its misuse and the future directions of research in this field. RESULTS Applications of this technology include improving AI diagnostic models, inter-modality image transformation, more accurate treatment and disease prognostication, image denoising, and individualised education. Key barriers to its adoption include bias in generative models, risks to patient data security, computational and logistical barriers to development, challenges with model explainability, inconsistent use of validation metrics between studies and misuse of synthetic images. Looking forward, researchers are placing a further emphasis on clinically grounded metrics, the development of image generation foundation models and the implementation of methods to ensure data provenance. CONCLUSION Compared to other medical applications of AI, image generation is still in its infancy. Yet, it holds the potential to revolutionise ophthalmology across research, education and clinical practice. This review aims to guide ophthalmic researchers wanting to leverage this technology, while also providing an insight for clinicians on how it may change ophthalmic practice in the future.
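Since the review above contrasts GANs with autoencoder-based synthesis, a minimal convolutional autoencoder is sketched here for comparison with the GAN example under entry 1. All layer sizes and the 64x64 grayscale input are illustrative assumptions, not details from any reviewed study.

```python
# Minimal convolutional autoencoder sketch for grayscale retinal-image patches.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                        # 1x64x64 -> 32x16x16 latent feature map
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(                        # latent map -> reconstructed 1x64x64 image
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ConvAutoencoder()
recon = model(torch.rand(4, 1, 64, 64))                       # training would minimize e.g. nn.MSELoss()(recon, x)
```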
Affiliation(s)
- Benjamin Phipps
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, 3002, VIC, Australia; Ophthalmology, Department of Surgery, University of Melbourne, Parkville, 3010, VIC, Australia
- Xavier Hadoux
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, 3002, VIC, Australia; Ophthalmology, Department of Surgery, University of Melbourne, Parkville, 3010, VIC, Australia
- Bin Sheng
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- J Peter Campbell
- Department of Ophthalmology, Casey Eye Institute, Oregon Health and Science University, Portland, USA
- T Y Alvin Liu
- Retina Division, Wilmer Eye Institute, Johns Hopkins University, Baltimore, MD, 21287, USA
- Pearse A Keane
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust, London, UK; Institute of Ophthalmology, University College London, London, UK
- Carol Y Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, 999077, China
- Tham Yih Chung
- Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Centre for Innovation and Precision Eye Health, Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore and National University Health System, Singapore; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Eye Academic Clinical Program (Eye ACP), Duke NUS Medical School, Singapore
- Tien Y Wong
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Tsinghua Medicine, Tsinghua University, Beijing, China; Beijing Visual Science and Translational Eye Research Institute, Beijing Tsinghua Changgung Hospital, Beijing, China
- Peter van Wijngaarden
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, 3002, VIC, Australia; Ophthalmology, Department of Surgery, University of Melbourne, Parkville, 3010, VIC, Australia; Florey Institute of Neuroscience & Mental Health, Parkville, VIC, Australia
4
Ra H, Jee D, Han S, Lee SH, Kwon JW, Jung Y, Baek J. Prediction of short-term anatomic prognosis for central serous chorioretinopathy using a generative adversarial network. Graefes Arch Clin Exp Ophthalmol 2025. PMID: 40032768. DOI: 10.1007/s00417-025-06786-w.
Abstract
PURPOSE To train generative adversarial network (GAN) models to generate predictive optical coherence tomography (OCT) images of central serous chorioretinopathy (CSC) at 3 months after observation using multi-modal OCT images. METHODS Four hundred forty CSC eyes of 440 patients who underwent Cirrus OCT imaging were included. Baseline OCT B-scan images through the foveal center, en face choroid, and en face ellipsoid zone were collected from each patient. The datasets were divided into training and validation (n = 390) and test (n = 50) sets. The input images for each model comprised either baseline B-scan alone or a combination of en face choroid and ellipsoid zones. Predictive post-treatment OCT B-scan images were generated using GAN models and compared with real 3-month images. RESULTS Of 50 generated OCT images, there were 48, 47, and 48 acceptable images for UNIT, CycleGAN, and RegGAN, respectively. In comparison with real 3-month images, the generated images showed sensitivity, specificity, and positive predictive values (PPV) for residual fluid in the ranges of 0.762-1.000, 0.483-0.724, and 0.583-0.704; for pigment epithelial detachment (PED) of 0.917-1.000, 0.974-1.000, and 0.917-1.000; and for subretinal hyperreflective material (SHRM) of 0.667-0.778, 0.925-0.950 and 0.700-0.750, respectively. RegGAN exhibited the highest values except for sensitivity. CONCLUSIONS GAN models could generate prognostic OCT images with good performance for prediction of residual fluid, PED, and SHRM presence in CSC. Implementation of the models may help predict disease activity in CSC, facilitating the establishment of a proper treatment plan.
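The per-feature agreement metrics reported above (sensitivity, specificity, and PPV for residual fluid, PED, and SHRM) follow directly from a 2x2 confusion table of gradings on real versus generated scans. A minimal sketch is given below; the example gradings are invented, and only the formulas mirror the evaluation described in the abstract.

```python
# Compute sensitivity, specificity, and PPV from binary feature gradings
# on real vs. GAN-generated OCT images.
import numpy as np

def binary_metrics(real_labels, generated_labels):
    """real_labels / generated_labels: 1 if the feature (e.g., residual fluid) is graded present."""
    real = np.asarray(real_labels, dtype=bool)
    gen = np.asarray(generated_labels, dtype=bool)
    tp = np.sum(gen & real)
    tn = np.sum(~gen & ~real)
    fp = np.sum(gen & ~real)
    fn = np.sum(~gen & real)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
    }

# Hypothetical gradings for residual fluid in 10 eyes (1 = present on the real 3-month scan).
print(binary_metrics([1, 1, 0, 0, 1, 0, 1, 0, 0, 1],
                     [1, 1, 0, 1, 1, 0, 1, 0, 0, 0]))
```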
Affiliation(s)
- Ho Ra
- Department of Ophthalmology, Bucheon St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Bucheon, Gyeonggi-Do, Republic of Korea
- Department of Ophthalmology, The Catholic University of Korea, Seoul, Republic of Korea
- Donghyun Jee
- Department of Ophthalmology, The Catholic University of Korea, Seoul, Republic of Korea
- Department of Ophthalmology, St. Vincent Hospital, College of Medicine, The Catholic University of Korea, Suwon, Gyeonggi-Do, Republic of Korea
- Suyeon Han
- Department of Ophthalmology, Bucheon St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Bucheon, Gyeonggi-Do, Republic of Korea
- Seung-Hoon Lee
- Department of Ophthalmology, Bucheon St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Bucheon, Gyeonggi-Do, Republic of Korea
- Jin-Woo Kwon
- Department of Ophthalmology, The Catholic University of Korea, Seoul, Republic of Korea
- Department of Ophthalmology, St. Vincent Hospital, College of Medicine, The Catholic University of Korea, Suwon, Gyeonggi-Do, Republic of Korea
- Yunhea Jung
- Department of Ophthalmology, The Catholic University of Korea, Seoul, Republic of Korea
- Department of Ophthalmology, Yeoui-Do St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Jiwon Baek
- Department of Ophthalmology, Bucheon St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Bucheon, Gyeonggi-Do, Republic of Korea
- Department of Ophthalmology, The Catholic University of Korea, Seoul, Republic of Korea
5
Remtulla R, Samet A, Kulbay M, Akdag A, Hocini A, Volniansky A, Kahn Ali S, Qian CX. A Future Picture: A Review of Current Generative Adversarial Neural Networks in Vitreoretinal Pathologies and Their Future Potentials. Biomedicines 2025; 13:284. PMID: 40002698. PMCID: PMC11852121. DOI: 10.3390/biomedicines13020284.
Abstract
Machine learning has transformed ophthalmology, particularly in predictive and discriminatory models for vitreoretinal pathologies. However, generative modeling, especially generative adversarial networks (GANs), remains underexplored. GANs consist of two neural networks-the generator and discriminator-that work in opposition to synthesize highly realistic images. These synthetic images can enhance diagnostic accuracy, expand the capabilities of imaging technologies, and predict treatment responses. GANs have already been applied to fundus imaging, optical coherence tomography (OCT), and fluorescein autofluorescence (FA). Despite their potential, GANs face challenges in reliability and accuracy. This review explores GAN architecture, their advantages over other deep learning models, and their clinical applications in retinal disease diagnosis and treatment monitoring. Furthermore, we discuss the limitations of current GAN models and propose novel applications combining GANs with OCT, OCT-angiography, fluorescein angiography, fundus imaging, electroretinograms, visual fields, and indocyanine green angiography.
Affiliation(s)
- Raheem Remtulla
- Department of Ophthalmology & Visual Sciences, McGill University, Montreal, QC H4A 3SE, Canada; (R.R.); (M.K.)
- Adam Samet
- Department of Ophthalmology & Visual Sciences, McGill University, Montreal, QC H4A 3SE, Canada; (R.R.); (M.K.)
- Merve Kulbay
- Department of Ophthalmology & Visual Sciences, McGill University, Montreal, QC H4A 3SE, Canada; (R.R.); (M.K.)
- Centre de Recherche de l’Hôpital Maisonneuve-Rosemont, Université de Montréal, Montreal, QC H1T 2M4, Canada
- Arjin Akdag
- Faculty of Medicine and Health Sciences, McGill University, Montreal, QC H3G 2M1, Canada
- Adam Hocini
- Faculty of Medicine, Université de Montréal, Montreal, QC H3T 1J4, Canada
- Anton Volniansky
- Department of Psychiatry, Université Laval, Quebec City, QC G1V 0A6, Canada
- Shigufa Kahn Ali
- Centre de Recherche de l’Hôpital Maisonneuve-Rosemont, Université de Montréal, Montreal, QC H1T 2M4, Canada
- Department of Ophthalmology, Centre Universitaire d’Ophtalmologie (CUO), Hôpital Maisonneuve-Rosemont, University of Montreal, Montreal, QC H1T 2M4, Canada
- Cynthia X. Qian
- Centre de Recherche de l’Hôpital Maisonneuve-Rosemont, Université de Montréal, Montreal, QC H1T 2M4, Canada
- Department of Ophthalmology, Centre Universitaire d’Ophtalmologie (CUO), Hôpital Maisonneuve-Rosemont, University of Montreal, Montreal, QC H1T 2M4, Canada
6
Waisberg E, Ong J, Kamran SA, Masalkhi M, Paladugu P, Zaman N, Lee AG, Tavakkoli A. Generative artificial intelligence in ophthalmology. Surv Ophthalmol 2025; 70:1-11. PMID: 38762072. DOI: 10.1016/j.survophthal.2024.04.009.
Abstract
Generative artificial intelligence (AI) has revolutionized medicine over the past several years. A generative adversarial network (GAN) is a deep learning framework that has become a powerful technique in medicine, particularly in ophthalmology for image analysis. In this paper, we review the current ophthalmic literature involving GANs and highlight key contributions in the field. We briefly touch on ChatGPT, another application of generative AI, and its potential in ophthalmology. We also explore the potential uses for GANs in ocular imaging, with a specific emphasis on 3 primary domains: image enhancement, disease identification, and generation of synthetic data. PubMed, Ovid MEDLINE, and Google Scholar were searched from inception to October 30, 2022, to identify applications of GANs in ophthalmology. A total of 40 papers were included in this review. We cover various applications of GANs in ophthalmic-related imaging, including optical coherence tomography, orbital magnetic resonance imaging, fundus photography, and ultrasound; however, we also highlight several challenges that resulted in the generation of inaccurate and atypical results during certain iterations. Finally, we examine future directions and considerations for generative AI in ophthalmology.
Affiliation(s)
- Ethan Waisberg
- Department of Clinical Neurosciences, University of Cambridge, Cambridge, United Kingdom
- Joshua Ong
- Michigan Medicine, University of Michigan, Ann Arbor, USA
- Sharif Amit Kamran
- School of Medicine, University College Dublin, Belfield, Dublin, Ireland
- Mouayad Masalkhi
- School of Medicine, University College Dublin, Belfield, Dublin, Ireland
- Phani Paladugu
- Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, PA, USA; Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Nasif Zaman
- Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, Reno, NV, USA
- Andrew G Lee
- Center for Space Medicine, Baylor College of Medicine, Houston, TX, USA; Department of Ophthalmology, Blanton Eye Institute, Houston Methodist Hospital, Houston, TX, USA; The Houston Methodist Research Institute, Houston Methodist Hospital, Houston, TX, USA; Departments of Ophthalmology, Neurology, and Neurosurgery, Weill Cornell Medicine, New York, NY, USA; Department of Ophthalmology, University of Texas Medical Branch, Galveston, TX, USA; University of Texas MD Anderson Cancer Center, Houston, TX, USA; Texas A&M College of Medicine, TX, USA; Department of Ophthalmology, The University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Alireza Tavakkoli
- Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, Reno, NV, USA
7
Han JM, Han J, Ko J, Jung J, Park JI, Hwang JS, Yoon J, Jung JH, Hwang DDJ. Anti-VEGF treatment outcome prediction based on optical coherence tomography images in neovascular age-related macular degeneration using a deep neural network. Sci Rep 2024; 14:28253. PMID: 39548212. PMCID: PMC11568167. DOI: 10.1038/s41598-024-79034-6.
Abstract
Age-related macular degeneration (AMD) is a major cause of blindness in developed countries, and the number of affected patients is increasing worldwide. Intravitreal injections of anti-vascular endothelial growth factor (VEGF) agents are the standard therapy for neovascular AMD (nAMD), and optical coherence tomography (OCT) is a crucial tool for evaluating the anatomical condition of the macula. However, OCT has limitations in accurately predicting the degree of functional and morphological improvement following intravitreal injections. Artificial intelligence (AI) has been proposed as a tool for predicting the treatment response of nAMD based on OCT biomarkers. Our study focuses on the development and assessment of an AI model utilizing the DenseNet201 algorithm. The model aims to predict anatomical improvement based on OCT images obtained before and during anti-VEGF therapy. The training process involves two scenarios: (1) using only preinjection OCT images and (2) utilizing both OCT images before and during anti-VEGF therapy for model training. The outcomes of our investigation, involving 2068 images from a cohort of 517 Korean patients diagnosed with nAMD, indicate that the AI model we introduced surpassed the predictive performance of ophthalmologists. The model exhibited a sensitivity of 0.915, specificity of 0.426, and accuracy of 0.820. Notably, its predictive capabilities were further enhanced with the inclusion of additional OCT images taken after the first and second injections during the loading phase. The treatment prediction performance of the model was the highest when using all input modalities (before injection, and after the first and second injections) and concatenation-based fusion layers. This study highlights the potential of AI in assisting individualized and tailored nAMD treatment.
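A concatenation-based fusion head over a DenseNet201 backbone, loosely in the spirit of the setup described above, can be sketched as follows. This is not the authors' implementation: the two-class output, the three input time points, and the layer sizes are assumptions, and the sketch assumes a recent torchvision release.

```python
# Sketch of a multi-input classifier with concatenation-based fusion of DenseNet201 features.
import torch
import torch.nn as nn
from torchvision import models

class FusionPredictor(nn.Module):
    def __init__(self, n_inputs: int = 3, n_classes: int = 2):
        super().__init__()
        backbone = models.densenet201(weights=None)     # shared feature extractor
        feat_dim = backbone.classifier.in_features      # 1920 for DenseNet201
        backbone.classifier = nn.Identity()             # return pooled features instead of class scores
        self.backbone = backbone
        self.head = nn.Linear(feat_dim * n_inputs, n_classes)

    def forward(self, scans):
        # scans: list of [B, 3, H, W] tensors, one per time point
        # (e.g., baseline, after the first injection, after the second injection)
        features = [self.backbone(x) for x in scans]
        fused = torch.cat(features, dim=1)              # concatenation-based fusion
        return self.head(fused)

model = FusionPredictor()
dummy = [torch.randn(2, 3, 224, 224) for _ in range(3)]
logits = model(dummy)                                   # shape [2, 2]
```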
Affiliation(s)
- Jeong Mo Han
- Kong Eye Hospital, Seoul, Korea
- Department of Ophthalmology, Seoul National University College of Medicine, Seoul, Korea
- Jinyoung Han
- Department of Applied Artificial Intelligence, Sungkyunkwan University, Seoul, Korea
- Department of Human-Artificial Intelligence Interaction, Sungkyunkwan University, Seoul, Korea
- Junseo Ko
- Department of Applied Artificial Intelligence, Sungkyunkwan University, Seoul, Korea
- Juho Jung
- Department of Applied Artificial Intelligence, Sungkyunkwan University, Seoul, Korea
- Ji In Park
- Department of Medicine, Kangwon National University Hospital, Kangwon National University School of Medicine, Chuncheon, Gangwon-do, Korea
- Jeewoo Yoon
- Department of Applied Artificial Intelligence, Sungkyunkwan University, Seoul, Korea
- Jae Ho Jung
- Department of Ophthalmology, Seoul National University College of Medicine, Seoul, Korea
- Daniel Duck-Jin Hwang
- Department of Ophthalmology, Hangil Eye Hospital, 35 Bupyeong-daero, Bupyeong-gu, Incheon, 21388, Korea
- Lux Mind, Incheon, Korea
8
Sorrentino FS, Zeppieri M, Culiersi C, Florido A, De Nadai K, Adamo GG, Pellegrini M, Nasini F, Vivarelli C, Mura M, Parmeggiani F. Application of Artificial Intelligence Models to Predict the Onset or Recurrence of Neovascular Age-Related Macular Degeneration. Pharmaceuticals (Basel) 2024; 17:1440. PMID: 39598352. PMCID: PMC11597877. DOI: 10.3390/ph17111440.
Abstract
Neovascular age-related macular degeneration (nAMD) is one of the major causes of vision impairment, affecting millions of people worldwide. Early detection of nAMD is crucial because, if untreated, it can lead to blindness. Software and algorithms that utilize artificial intelligence (AI) have become valuable tools for early detection, assisting doctors in diagnosis and facilitating differential diagnosis. AI is particularly important for remote or isolated communities, as it allows patients to undergo testing and receive rapid initial diagnoses without extensive travel and long waits for medical consultations. AI is also valuable in large centers, where cutting-edge technology and networking speed up processes such as detection, diagnosis, and follow-up. The automatic detection of retinal changes can be optimized by AI, helping clinicians choose the most effective treatment for nAMD. The complex retinal tissue is well suited to scanning and easily accessible by modern AI-assisted multi-imaging techniques. AI enhances patient management by effectively evaluating extensive data, facilitating timely diagnosis and long-term prognosis. Novel applications of AI to nAMD have focused on image analysis, specifically the automated segmentation, extraction, and quantification of imaging-based features within optical coherence tomography (OCT) images. To date, AI cannot accurately forecast the therapy that an individual patient would need to achieve the best visual outcome. The small number of large datasets with high-quality OCT, the lack of data on alternative treatment strategies, and the absence of OCT standards are the main challenges for the development of AI models for nAMD.
Affiliation(s)
- Francesco Saverio Sorrentino
- Unit of Ophthalmology, Department of Surgical Sciences, Ospedale Maggiore, 40100 Bologna, Italy; (F.S.S.); (C.C.); (A.F.)
- Marco Zeppieri
- Department of Ophthalmology, University Hospital of Udine, 33100 Udine, Italy
- Carola Culiersi
- Unit of Ophthalmology, Department of Surgical Sciences, Ospedale Maggiore, 40100 Bologna, Italy; (F.S.S.); (C.C.); (A.F.)
- Antonio Florido
- Unit of Ophthalmology, Department of Surgical Sciences, Ospedale Maggiore, 40100 Bologna, Italy; (F.S.S.); (C.C.); (A.F.)
- Katia De Nadai
- Department of Translational Medicine and for Romagna, University of Ferrara, 44121 Ferrara, Italy; (K.D.N.); (G.G.A.); (M.P.); (C.V.); (M.M.)
- ERN-EYE Network-Center for Retinitis Pigmentosa of Veneto Region, Camposampiero Hospital, 35012 Padua, Italy
- Ginevra Giovanna Adamo
- Department of Translational Medicine and for Romagna, University of Ferrara, 44121 Ferrara, Italy; (K.D.N.); (G.G.A.); (M.P.); (C.V.); (M.M.)
- Unit of Ophthalmology, Azienda Ospedaliero Universitaria di Ferrara, 44100 Ferrara, Italy
- Marco Pellegrini
- Department of Translational Medicine and for Romagna, University of Ferrara, 44121 Ferrara, Italy; (K.D.N.); (G.G.A.); (M.P.); (C.V.); (M.M.)
- Unit of Ophthalmology, Azienda Ospedaliero Universitaria di Ferrara, 44100 Ferrara, Italy
- Francesco Nasini
- Unit of Ophthalmology, Azienda Ospedaliero Universitaria di Ferrara, 44100 Ferrara, Italy
- Chiara Vivarelli
- Department of Translational Medicine and for Romagna, University of Ferrara, 44121 Ferrara, Italy; (K.D.N.); (G.G.A.); (M.P.); (C.V.); (M.M.)
- Marco Mura
- Department of Translational Medicine and for Romagna, University of Ferrara, 44121 Ferrara, Italy; (K.D.N.); (G.G.A.); (M.P.); (C.V.); (M.M.)
- King Khaled Eye Specialist Hospital, Riyadh 12211, Saudi Arabia
- Francesco Parmeggiani
- Department of Translational Medicine and for Romagna, University of Ferrara, 44121 Ferrara, Italy; (K.D.N.); (G.G.A.); (M.P.); (C.V.); (M.M.)
- ERN-EYE Network-Center for Retinitis Pigmentosa of Veneto Region, Camposampiero Hospital, 35012 Padua, Italy
9
Assaf JF, Abou Mrad A, Reinstein DZ, Amescua G, Zakka C, Archer TJ, Yammine J, Lamah E, Haykal M, Awwad ST. Creating realistic anterior segment optical coherence tomography images using generative adversarial networks. Br J Ophthalmol 2024; 108:1414-1422. PMID: 38697800. DOI: 10.1136/bjo-2023-324633.
Abstract
AIMS To develop a generative adversarial network (GAN) capable of generating realistic high-resolution anterior segment optical coherence tomography (AS-OCT) images. METHODS This study included 142 628 AS-OCT B-scans from the American University of Beirut Medical Center. The Style and WAvelet based GAN architecture was trained to generate realistic AS-OCT images and was evaluated through the Fréchet Inception Distance (FID) Score and a blinded assessment by three refractive surgeons who were asked to distinguish between real and generated images. To assess the suitability of the generated images for machine learning tasks, a convolutional neural network (CNN) was trained using a dataset of real and generated images over a classification task. The generated AS-OCT images were then upsampled using an enhanced super-resolution GAN (ESRGAN) to achieve high resolution. RESULTS The generated images exhibited visual and quantitative similarity to real AS-OCT images. Quantitative similarity assessed using FID scored an average of 6.32. Surgeons scored 51.7% in identifying real versus generated images which was not significantly better than chance (p value >0.3). The CNN accuracy improved from 78% to 100% when synthetic images were added to the dataset. The ESRGAN upsampled images were objectively more realistic and accurate compared with traditional upsampling techniques by scoring a lower Learned Perceptual Image Patch Similarity of 0.0905 compared with 0.4244 of bicubic interpolation. CONCLUSIONS This study successfully developed and leveraged GANs capable of generating high-definition synthetic AS-OCT images that are realistic and suitable for machine learning and image analysis tasks.
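The perceptual-similarity comparison reported above (LPIPS between an upsampled image and its high-resolution reference) can be reproduced in outline with the open-source lpips package. The tensors below are random placeholders standing in for an ESRGAN-upsampled scan and its reference; only the metric itself mirrors the study.

```python
# LPIPS sketch: lower scores indicate images that are more perceptually similar.
import torch
import lpips

loss_fn = lpips.LPIPS(net="alex")   # AlexNet-based LPIPS, as provided by the lpips package

def perceptual_distance(img_a: torch.Tensor, img_b: torch.Tensor) -> float:
    """img_a, img_b: [1, 3, H, W] tensors scaled to [-1, 1]."""
    with torch.no_grad():
        return loss_fn(img_a, img_b).item()

a = torch.rand(1, 3, 256, 256) * 2 - 1   # placeholder for an upsampled AS-OCT image
b = torch.rand(1, 3, 256, 256) * 2 - 1   # placeholder for the high-resolution reference
print(perceptual_distance(a, b))
```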
Affiliation(s)
- Jad F Assaf
- Faculty of Medicine, American University of Beirut, Beirut, Lebanon
- Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
- Dan Z Reinstein
- London Vision Clinic, London, UK
- Reinstein Vision, London, UK
- Columbia University Medical Center, New York, NY, USA
- Sorbonne Université, Paris, France
- Biomedical Science Research Institute, Ulster University, Coleraine, UK
- Cyril Zakka
- Department of Cardiothoracic Surgery, Stanford University, Stanford, California, USA
- Jeffrey Yammine
- Faculty of Medicine, American University of Beirut, Beirut, Lebanon
- Elsa Lamah
- Faculty of Medicine, American University of Beirut, Beirut, Lebanon
- Michèle Haykal
- Faculty of Medicine, Saint Joseph University, Beirut, Lebanon
- Shady T Awwad
- Department of Ophthalmology, American University of Beirut Medical Center, Beirut, Lebanon
10
Baek J, He Y, Emamverdi M, Mahmoudi A, Nittala MG, Corradetti G, Ip M, Sadda SR. Prediction of Long-Term Treatment Outcomes for Diabetic Macular Edema Using a Generative Adversarial Network. Transl Vis Sci Technol 2024; 13:4. PMID: 38958946. PMCID: PMC11223618. DOI: 10.1167/tvst.13.7.4.
Abstract
Purpose The purpose of this study was to evaluate generative adversarial networks (GANs) for predicting optical coherence tomography (OCT) images of diabetic macular edema after long-term treatment. Methods Eyes with diabetic macular edema (DME) (n = 327) that underwent anti-vascular endothelial growth factor (VEGF) treatment every 4 weeks for 52 weeks in a randomized controlled trial (CRTH258B2305, KINGFISHER) were included. OCT B-scan images through the foveal center at weeks 0, 4, 12, and 52, fundus photography, and retinal thickness (RT) maps were collected. GAN models were trained to generate probable OCT images after treatment. Input for each model comprised either the baseline B-scan alone or the baseline B-scan combined with additional OCT, thickness map, or fundus images. Generated OCT B-scan images were compared with real week 52 images. Results For 30 test images, 28, 29, 15, and 30 gradable OCT images were generated by CycleGAN, UNIT, Pix2PixHD, and RegGAN, respectively. In comparison with the real week 52 images, these GAN models showed positive predictive value (PPV), sensitivity, specificity, and kappa for residual fluid ranging from 0.500 to 0.889, 0.455 to 1.000, 0.357 to 0.857, and 0.537 to 0.929, respectively. For hard exudate (HE), these values ranged from 0.500 to 1.000, 0.545 to 0.900, 0.600 to 1.000, and 0.642 to 0.894, respectively. Models trained with week 4 and 12 B-scans as additional inputs to the baseline B-scan showed improved performance. Conclusions GAN models could predict residual fluid and HE after long-term anti-VEGF treatment of DME. Translational Relevance The implementation of this tool may help identify potential nonresponders after long-term treatment, thereby facilitating management planning for these eyes.
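Conditional image-to-image GANs of the kind compared above are typically trained with an adversarial term plus an L1 reconstruction term against the real follow-up scan. The sketch below shows that generic, pix2pix-style objective; the weighting, tensor shapes, and variable names are assumptions, and it is not the training code of any model in the study.

```python
# Generic conditional-GAN generator objective: adversarial term + weighted L1 reconstruction.
import torch
import torch.nn as nn

adv_loss = nn.BCEWithLogitsLoss()
l1_loss = nn.L1Loss()
lambda_l1 = 100.0   # common weighting in pix2pix-style setups

def generator_objective(disc_fake_logits: torch.Tensor,
                        fake_week52: torch.Tensor,
                        real_week52: torch.Tensor) -> torch.Tensor:
    """disc_fake_logits: discriminator output on (baseline, generated) pairs."""
    adversarial = adv_loss(disc_fake_logits, torch.ones_like(disc_fake_logits))
    reconstruction = l1_loss(fake_week52, real_week52)
    return adversarial + lambda_l1 * reconstruction

# Dummy tensors standing in for generated and real week-52 B-scans.
loss = generator_objective(torch.randn(4, 1),
                           torch.rand(4, 1, 256, 256),
                           torch.rand(4, 1, 256, 256))
```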
Affiliation(s)
- Jiwon Baek
- Doheny Eye Institute, Pasadena, CA, USA
- Department of Ophthalmology, Bucheon St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Bucheon, Gyeonggi-do, Republic of Korea
- Department of Ophthalmology, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Department of Ophthalmology, David Geffen School of Medicine at UCLA, Los Angeles, CA, USA
- Ye He
- Doheny Eye Institute, Pasadena, CA, USA
- Department of Ophthalmology, David Geffen School of Medicine at UCLA, Los Angeles, CA, USA
- Mehdi Emamverdi
- Doheny Eye Institute, Pasadena, CA, USA
- Department of Ophthalmology, David Geffen School of Medicine at UCLA, Los Angeles, CA, USA
- Alireza Mahmoudi
- Doheny Eye Institute, Pasadena, CA, USA
- Department of Ophthalmology, David Geffen School of Medicine at UCLA, Los Angeles, CA, USA
- Giulia Corradetti
- Doheny Eye Institute, Pasadena, CA, USA
- Department of Ophthalmology, David Geffen School of Medicine at UCLA, Los Angeles, CA, USA
- Michael Ip
- Doheny Eye Institute, Pasadena, CA, USA
- Department of Ophthalmology, David Geffen School of Medicine at UCLA, Los Angeles, CA, USA
- SriniVas R Sadda
- Doheny Eye Institute, Pasadena, CA, USA
- Department of Ophthalmology, David Geffen School of Medicine at UCLA, Los Angeles, CA, USA
11
Feng X, Xu K, Luo MJ, Chen H, Yang Y, He Q, Song C, Li R, Wu Y, Wang H, Tham YC, Ting DSW, Lin H, Wong TY, Lam DSC. Latest developments of generative artificial intelligence and applications in ophthalmology. Asia Pac J Ophthalmol (Phila) 2024; 13:100090. PMID: 39128549. DOI: 10.1016/j.apjo.2024.100090.
Abstract
The emergence of generative artificial intelligence (AI) has revolutionized various fields. In ophthalmology, generative AI has the potential to enhance efficiency, accuracy, personalization and innovation in clinical practice and medical research, through processing data, streamlining medical documentation, facilitating patient-doctor communication, aiding in clinical decision-making, and simulating clinical trials. This review focuses on the development and integration of generative AI models into clinical workflows and scientific research of ophthalmology. It outlines the need for development of a standard framework for comprehensive assessments, robust evidence, and exploration of the potential of multimodal capabilities and intelligent agents. Additionally, the review addresses the risks in AI model development and application in clinical service and research of ophthalmology, including data privacy, data bias, adaptation friction, over interdependence, and job replacement, based on which we summarized a risk management framework to mitigate these concerns. This review highlights the transformative potential of generative AI in enhancing patient care, improving operational efficiency in the clinical service and research in ophthalmology. It also advocates for a balanced approach to its adoption.
Affiliation(s)
- Xiaoru Feng
- School of Biomedical Engineering, Tsinghua Medicine, Tsinghua University, Beijing, China; Institute for Hospital Management, Tsinghua Medicine, Tsinghua University, Beijing, China
- Kezheng Xu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Ming-Jie Luo
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Haichao Chen
- School of Clinical Medicine, Beijing Tsinghua Changgung Hospital, Tsinghua Medicine, Tsinghua University, Beijing, China
- Yangfan Yang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Qi He
- Research Centre of Big Data and Artificial Research for Medicine, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Chenxin Song
- Research Centre of Big Data and Artificial Research for Medicine, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Ruiyao Li
- Research Centre of Big Data and Artificial Research for Medicine, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- You Wu
- Institute for Hospital Management, Tsinghua Medicine, Tsinghua University, Beijing, China; School of Basic Medical Sciences, Tsinghua Medicine, Tsinghua University, Beijing, China; Department of Health Policy and Management, Bloomberg School of Public Health, Johns Hopkins University, Baltimore, MD, USA
- Haibo Wang
- Research Centre of Big Data and Artificial Research for Medicine, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Yih Chung Tham
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Science Academic Clinical Program, Duke-NUS Medical School, Singapore
- Daniel Shu Wei Ting
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Science Academic Clinical Program, Duke-NUS Medical School, Singapore; Byers Eye Institute, Stanford University, Palo Alto, CA, USA
- Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China; Center for Precision Medicine and Department of Genetics and Biomedical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China; Hainan Eye Hospital and Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Haikou, China
- Tien Yin Wong
- School of Clinical Medicine, Beijing Tsinghua Changgung Hospital, Tsinghua Medicine, Tsinghua University, Beijing, China; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Tsinghua Medicine, Tsinghua University, Beijing, China
- Dennis Shun-Chiu Lam
- The International Eye Research Institute, The Chinese University of Hong Kong (Shenzhen), Shenzhen, China; The C-MER International Eye Care Group, Hong Kong, Hong Kong, China
12
Borrelli E, Serafino S, Ricardi F, Coletto A, Neri G, Olivieri C, Ulla L, Foti C, Marolo P, Toro MD, Bandello F, Reibaldi M. Deep Learning in Neovascular Age-Related Macular Degeneration. Medicina (Kaunas) 2024; 60:990. PMID: 38929607. PMCID: PMC11205843. DOI: 10.3390/medicina60060990.
Abstract
Background and Objectives: Age-related macular degeneration (AMD) is a complex and multifactorial condition that can lead to permanent vision loss once it progresses to the neovascular exudative stage. This review aims to summarize the use of deep learning in neovascular AMD. Materials and Methods: PubMed search. Results: Deep learning has demonstrated effectiveness in analyzing structural OCT images in patients with neovascular AMD. This review outlines the role of deep learning in identifying and measuring biomarkers linked to an elevated risk of transitioning to the neovascular form of AMD. Additionally, deep learning techniques can quantify critical OCT features associated with neovascular AMD, which have prognostic implications for these patients. Incorporating deep learning into the assessment of neovascular AMD eyes holds promise for enhancing clinical management strategies for affected individuals. Conclusions: Several studies have demonstrated the effectiveness of deep learning in assessing patients with neovascular AMD, and it holds a promising role in the clinical assessment of these patients.
Affiliation(s)
- Enrico Borrelli
- Division of Ophthalmology, Department of Surgical Sciences, University of Turin, Via Verdi, 8, 10124 Turin, Italy; (S.S.); (F.R.); (A.C.); (G.N.); (C.O.); (L.U.); (C.F.); (M.R.)
- Department of Ophthalmology, “City of Health and Science” Hospital, 10126 Turin, Italy
- Sonia Serafino
- Division of Ophthalmology, Department of Surgical Sciences, University of Turin, Via Verdi, 8, 10124 Turin, Italy; (S.S.); (F.R.); (A.C.); (G.N.); (C.O.); (L.U.); (C.F.); (M.R.)
- Department of Ophthalmology, “City of Health and Science” Hospital, 10126 Turin, Italy
- Federico Ricardi
- Division of Ophthalmology, Department of Surgical Sciences, University of Turin, Via Verdi, 8, 10124 Turin, Italy; (S.S.); (F.R.); (A.C.); (G.N.); (C.O.); (L.U.); (C.F.); (M.R.)
- Department of Ophthalmology, “City of Health and Science” Hospital, 10126 Turin, Italy
- Andrea Coletto
- Division of Ophthalmology, Department of Surgical Sciences, University of Turin, Via Verdi, 8, 10124 Turin, Italy; (S.S.); (F.R.); (A.C.); (G.N.); (C.O.); (L.U.); (C.F.); (M.R.)
- Department of Ophthalmology, “City of Health and Science” Hospital, 10126 Turin, Italy
- Giovanni Neri
- Division of Ophthalmology, Department of Surgical Sciences, University of Turin, Via Verdi, 8, 10124 Turin, Italy; (S.S.); (F.R.); (A.C.); (G.N.); (C.O.); (L.U.); (C.F.); (M.R.)
- Department of Ophthalmology, “City of Health and Science” Hospital, 10126 Turin, Italy
- Chiara Olivieri
- Division of Ophthalmology, Department of Surgical Sciences, University of Turin, Via Verdi, 8, 10124 Turin, Italy; (S.S.); (F.R.); (A.C.); (G.N.); (C.O.); (L.U.); (C.F.); (M.R.)
- Department of Ophthalmology, “City of Health and Science” Hospital, 10126 Turin, Italy
- Lorena Ulla
- Division of Ophthalmology, Department of Surgical Sciences, University of Turin, Via Verdi, 8, 10124 Turin, Italy; (S.S.); (F.R.); (A.C.); (G.N.); (C.O.); (L.U.); (C.F.); (M.R.)
- Department of Ophthalmology, “City of Health and Science” Hospital, 10126 Turin, Italy
- Claudio Foti
- Division of Ophthalmology, Department of Surgical Sciences, University of Turin, Via Verdi, 8, 10124 Turin, Italy; (S.S.); (F.R.); (A.C.); (G.N.); (C.O.); (L.U.); (C.F.); (M.R.)
- Department of Ophthalmology, “City of Health and Science” Hospital, 10126 Turin, Italy
- Paola Marolo
- Division of Ophthalmology, Department of Surgical Sciences, University of Turin, Via Verdi, 8, 10124 Turin, Italy; (S.S.); (F.R.); (A.C.); (G.N.); (C.O.); (L.U.); (C.F.); (M.R.)
- Department of Ophthalmology, “City of Health and Science” Hospital, 10126 Turin, Italy
- Mario Damiano Toro
- Eye Clinic, Public Health Department, University of Naples Federico II, 80138 Naples, Italy
- Francesco Bandello
- Department of Ophthalmology, Vita-Salute San Raffaele University, 20132 Milan, Italy
- IRCCS San Raffaele Scientific Institute, 20132 Milan, Italy
- Michele Reibaldi
- Division of Ophthalmology, Department of Surgical Sciences, University of Turin, Via Verdi, 8, 10124 Turin, Italy; (S.S.); (F.R.); (A.C.); (G.N.); (C.O.); (L.U.); (C.F.); (M.R.)
- Department of Ophthalmology, “City of Health and Science” Hospital, 10126 Turin, Italy
13
Bellemo V, Kumar Das A, Sreng S, Chua J, Wong D, Shah J, Jonas R, Tan B, Liu X, Xu X, Tan GSW, Agrawal R, Ting DSW, Yong L, Schmetterer L. Optical coherence tomography choroidal enhancement using generative deep learning. NPJ Digit Med 2024; 7:115. PMID: 38704440. PMCID: PMC11069520. DOI: 10.1038/s41746-024-01119-3.
Abstract
Spectral-domain optical coherence tomography (SDOCT) is the gold standard for imaging the eye in the clinic. Penetration depth with such devices is, however, restricted, and visualization of the choroid, which is essential for diagnosing chorioretinal disease, remains limited. Whereas swept-source OCT (SSOCT) devices allow for visualization of the choroid, these instruments are expensive and their availability in clinical practice is limited. We present an artificial intelligence (AI)-based solution to enhance the visualization of the choroid in OCT scans and allow for quantitative measurements of choroidal metrics using generative deep learning (DL). Synthetically enhanced SDOCT B-scans with improved choroidal visibility were generated, leveraging matching images to learn deep anatomical features during training. Using a single-center tertiary eye care institution cohort comprising a total of 362 SDOCT-SSOCT paired subjects, we trained our model with 150,784 images from 410 healthy, 192 glaucoma, and 133 diabetic retinopathy eyes. An independent external test dataset of 37,376 images from 146 eyes was deployed to assess the authenticity and quality of the synthetically enhanced SDOCT images. Experts' ability to differentiate real versus synthetic images was poor (47.5% accuracy). Measurements of choroidal thickness, area, volume, and vascularity index, from the reference SSOCT and synthetically enhanced SDOCT, showed high Pearson's correlations of 0.97 [95% CI: 0.96-0.98], 0.97 [0.95-0.98], 0.95 [0.92-0.98], and 0.87 [0.83-0.91], with intra-class correlation values of 0.99 [0.98-0.99], 0.98 [0.98-0.99], 0.95 [0.96-0.98], and 0.93 [0.91-0.95], respectively. Thus, our DL generative model successfully generated realistic enhanced SDOCT data that are indistinguishable from SSOCT images, providing improved visualization of the choroid. This technology enabled accurate measurements of choroidal metrics previously limited by the imaging depth constraints of SDOCT. The findings open new possibilities for utilizing affordable SDOCT devices in studying the choroid in both healthy and pathological conditions.
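The agreement analysis described above reduces, for each choroidal metric, to correlating measurements from the reference SSOCT against measurements from the synthetically enhanced SDOCT. A minimal sketch is given below; the thickness values are placeholders invented for illustration, not study data.

```python
# Pearson correlation between reference and enhanced-scan measurements of one choroidal metric.
from scipy.stats import pearsonr

ssoct_thickness_um    = [265, 310, 242, 288, 199, 330, 275, 254]  # reference SSOCT measurements
enhanced_sdoct_um     = [260, 305, 250, 280, 205, 325, 270, 258]  # measurements on enhanced SDOCT

r, p_value = pearsonr(ssoct_thickness_um, enhanced_sdoct_um)
print(f"Pearson r = {r:.3f} (p = {p_value:.4f})")
```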
Affiliation(s)
- Valentina Bellemo
- Singapore Eye Research Institute, National Eye Centre, Singapore, Singapore
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore
- SERI-NTU Advanced Ocular Engineering (STANCE) Program, Singapore, Singapore
- Ankit Kumar Das
- Institute of High Performance Computing, Agency for Science, Technology and Research (A∗STAR), Singapore, Singapore
- Syna Sreng
- Singapore Eye Research Institute, National Eye Centre, Singapore, Singapore
- SERI-NTU Advanced Ocular Engineering (STANCE) Program, Singapore, Singapore
- Jacqueline Chua
- Singapore Eye Research Institute, National Eye Centre, Singapore, Singapore
- SERI-NTU Advanced Ocular Engineering (STANCE) Program, Singapore, Singapore
- Ophthalmology & Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore, Singapore
- Damon Wong
- Singapore Eye Research Institute, National Eye Centre, Singapore, Singapore
- SERI-NTU Advanced Ocular Engineering (STANCE) Program, Singapore, Singapore
- Ophthalmology & Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore, Singapore
- Centre for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Janika Shah
- Singapore Eye Research Institute, National Eye Centre, Singapore, Singapore
- Ophthalmology & Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore, Singapore
- Rahul Jonas
- University of Cologne, Faculty of Medicine and University Hospital Cologne, Department Ophthalmology, Cologne, Germany
- Bingyao Tan
- Singapore Eye Research Institute, National Eye Centre, Singapore, Singapore
- SERI-NTU Advanced Ocular Engineering (STANCE) Program, Singapore, Singapore
- University of Cologne, Faculty of Medicine and University Hospital Cologne, Department Ophthalmology, Cologne, Germany
- Xinyu Liu
- Singapore Eye Research Institute, National Eye Centre, Singapore, Singapore
- SERI-NTU Advanced Ocular Engineering (STANCE) Program, Singapore, Singapore
- Ophthalmology & Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore, Singapore
- Xinxing Xu
- Institute of High Performance Computing, Agency for Science, Technology and Research (A∗STAR), Singapore, Singapore
- Gavin Siew Wei Tan
- Singapore Eye Research Institute, National Eye Centre, Singapore, Singapore
- Ophthalmology & Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore, Singapore
- Rupesh Agrawal
- Singapore Eye Research Institute, National Eye Centre, Singapore, Singapore
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore
- National Healthcare Group Eye Institute, Tan Tock Seng Hospital, Singapore School of Chemical and Biomedical Engineering, Nanyang Technological University (NTU), Singapore, Singapore
- Daniel Shu Wei Ting
- Singapore Eye Research Institute, National Eye Centre, Singapore, Singapore
- Ophthalmology & Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore, Singapore
- Liu Yong
- Singapore Eye Research Institute, National Eye Centre, Singapore, Singapore
- Institute of High Performance Computing, Agency for Science, Technology and Research (A∗STAR), Singapore, Singapore
- Leopold Schmetterer
- Singapore Eye Research Institute, National Eye Centre, Singapore, Singapore
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore
- SERI-NTU Advanced Ocular Engineering (STANCE) Program, Singapore, Singapore
- Ophthalmology & Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore, Singapore
- Centre for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- School of Chemistry, Chemical Engineering and Biotechnology, Nanyang Technological University, Singapore, Singapore
- Department of Clinical Pharmacology, Medical University of Vienna, Vienna, Austria
- Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
14
Zhao P, Song X, Xi X, Nie X, Meng X, Qu Y, Yin Y. Biomarkers-Aware Asymmetric Bibranch GAN With Adaptive Memory Batch Normalization for Prediction of Anti-VEGF Treatment Response in Neovascular Age-Related Macular Degeneration. IEEE J Biomed Health Inform 2024; 28:557-568. PMID: 37549082. DOI: 10.1109/jbhi.2023.3302989.
Abstract
The emergence of anti-vascular endothelial growth factor (anti-VEGF) therapy has revolutionized the management of neovascular age-related macular degeneration (nAMD). Post-therapeutic optical coherence tomography (OCT) imaging facilitates the prediction of therapeutic response to anti-VEGF therapy for nAMD. Although the generative adversarial network (GAN) is a popular generative model for post-therapeutic OCT image generation, it is challenging in practice to gather sufficient pre- and post-therapeutic OCT image pairs, resulting in overfitting. Moreover, the available GAN-based methods ignore local details, such as the biomarkers that are essential for nAMD treatment. To address these issues, a Biomarkers-aware Asymmetric Bibranch GAN (BAABGAN) is proposed to efficiently generate post-therapeutic OCT images. Specifically, one branch, termed the source branch, is developed to learn prior knowledge with a high degree of transferability from large-scale data. The source branch then transfers this knowledge to another branch, termed the target branch, which is trained on small-scale paired data. To boost transferability, a novel Adaptive Memory Batch Normalization (AMBN) is introduced in the source branch, which learns more effective global knowledge that is impervious to noise via a memory mechanism. In addition, a novel Adaptive Biomarkers-aware Attention (ABA) module is proposed to encode biomarker information into the latent features of the target branch to learn finer local details of the biomarkers. Experimental results show that the proposed method outperforms traditional GAN models and can produce high-quality post-treatment OCT images from limited datasets.
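The idea of normalizing with an accumulated "memory" of batch statistics, rather than the current batch alone, can be illustrated generically. The sketch below is not the paper's AMBN module; it is a simplified exponential-moving-average normalization written only to convey the underlying concept, and every design detail in it is an assumption.

```python
# Generic "memory" normalization sketch: normalize using an EMA of batch statistics.
import torch
import torch.nn as nn

class MemoryNorm(nn.Module):
    def __init__(self, num_features: int, momentum: float = 0.1, eps: float = 1e-5):
        super().__init__()
        self.momentum, self.eps = momentum, eps
        self.weight = nn.Parameter(torch.ones(num_features))
        self.bias = nn.Parameter(torch.zeros(num_features))
        self.register_buffer("mem_mean", torch.zeros(num_features))
        self.register_buffer("mem_var", torch.ones(num_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: [B, C, H, W]
        if self.training:
            batch_mean = x.mean(dim=(0, 2, 3))
            batch_var = x.var(dim=(0, 2, 3), unbiased=False)
            with torch.no_grad():                          # update the memorized statistics
                self.mem_mean.mul_(1 - self.momentum).add_(self.momentum * batch_mean)
                self.mem_var.mul_(1 - self.momentum).add_(self.momentum * batch_var)
        mean, var = self.mem_mean, self.mem_var            # normalize with the memorized statistics
        x_hat = (x - mean[None, :, None, None]) / torch.sqrt(var[None, :, None, None] + self.eps)
        return x_hat * self.weight[None, :, None, None] + self.bias[None, :, None, None]

out = MemoryNorm(8)(torch.randn(2, 8, 16, 16))
```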
Collapse
|
15
|
Maunz A, Barras L, Kawczynski MG, Dai J, Lee AY, Spaide RF, Sahni J, Ferrara D. Machine Learning to Predict Response to Ranibizumab in Neovascular Age-Related Macular Degeneration. OPHTHALMOLOGY SCIENCE 2023; 3:100319. [PMID: 37304043 PMCID: PMC10251067 DOI: 10.1016/j.xops.2023.100319] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/28/2022] [Revised: 04/14/2023] [Accepted: 04/15/2023] [Indexed: 06/13/2023]
Abstract
Purpose Neovascular age-related macular degeneration (nAMD) shows variable treatment response to intravitreal anti-VEGF. This analysis compared the potential of different artificial intelligence (AI)-based machine learning models using OCT and clinical variables to accurately predict at baseline the best-corrected visual acuity (BCVA) at 9 months in response to ranibizumab in patients with nAMD. Design Retrospective analysis. Participants Baseline and imaging data from patients with subfoveal choroidal neovascularization secondary to age-related macular degeneration. Methods Baseline data from 502 study eyes from the HARBOR (NCT00891735) prospective clinical trial (monthly ranibizumab 0.5 and 2.0 mg arms) were pooled; 432 baseline OCT volume scans were included in the analysis. Seven models, based on baseline quantitative OCT features (Least absolute shrinkage and selection operator [Lasso] OCT minimum [min], Lasso OCT 1 standard error [SE]); on quantitative OCT features and clinical variables at baseline (Lasso min, Lasso 1SE, CatBoost, RF [random forest]); or on baseline OCT images only (deep learning [DL] model), were systematically compared with a benchmark linear model of baseline age and BCVA. Quantitative OCT features were derived by a DL segmentation model on the volume images, including retinal layer volumes and thicknesses, and retinal fluid biomarkers, including statistics on fluid volume and distribution. Main Outcome Measures Prognostic ability of the models was evaluated using coefficient of determination (R2) and median absolute error (MAE; letters). Results In the first cross-validation split, mean R2 (MAE) of the Lasso min, Lasso 1SE, CatBoost, and RF models was 0.46 (7.87), 0.42 (8.43), 0.45 (7.75), and 0.43 (7.60), respectively. These models ranked higher than or similar to the benchmark model (mean R2, 0.41; mean MAE, 8.20 letters) and better than OCT-only models (mean R2: Lasso OCT min, 0.20; Lasso OCT 1SE, 0.16; DL, 0.34). The Lasso min model was selected for detailed analysis; mean R2 (MAE) of the Lasso min and benchmark models for 1000 repeated cross-validation splits were 0.46 (7.7) and 0.42 (8.0), respectively. Conclusions Machine learning models based on AI-segmented OCT features and clinical variables at baseline may predict future response to ranibizumab treatment in patients with nAMD. However, further developments will be needed to realize the clinical utility of such AI-based tools. Financial Disclosures Proprietary or commercial disclosure may be found after the references.
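The study's feature set and modelling pipeline are its own; as a hedged sketch of the general approach it describes (Lasso regression on OCT-derived features plus baseline clinical variables, scored with R2 and median absolute error under cross-validation), a scikit-learn version might look like the following. The feature matrix `X` and target `y` below are synthetic placeholders, not HARBOR data.

```python
# Sketch: Lasso on OCT-derived features + baseline clinical variables,
# evaluated with R^2 and median absolute error via cross-validation.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import cross_validate
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(432, 20))   # placeholder: layer volumes, fluid stats, age, baseline BCVA
y = 60 + 5 * X[:, 0] - 3 * X[:, 1] + rng.normal(scale=8, size=432)  # placeholder 9-month BCVA

model = make_pipeline(StandardScaler(), LassoCV(cv=5))
scores = cross_validate(model, X, y, cv=5,
                        scoring=("r2", "neg_median_absolute_error"))
print("mean R2:", scores["test_r2"].mean())
print("MAE (letters):", -scores["test_neg_median_absolute_error"].mean())
```

Standardizing before Lasso matters because the L1 penalty is scale-sensitive; the benchmark model in the paper would simply replace the Lasso pipeline with a linear regression on age and baseline BCVA.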
Collapse
Affiliation(s)
- Andreas Maunz
- Roche Pharma Research and Early Development, F. Hoffmann-La Roche AG, Basel, Switzerland
| | - Laura Barras
- Roche Data & Statistical Sciences, Genentech, Inc, South San Francisco, California
| | - Michael G. Kawczynski
- Roche Personalized Healthcare Program, Genentech, Inc, South San Francisco, California
| | - Jian Dai
- Roche Personalized Healthcare Program, Genentech, Inc, South San Francisco, California
| | - Aaron Y. Lee
- Department of Ophthalmology, School of Medicine, University of Washington, Seattle, Washington
| | | | - Jayashree Sahni
- Roche Pharma Research and Early Development, F. Hoffmann-La Roche AG, Basel, Switzerland
| | - Daniela Ferrara
- Roche Personalized Healthcare Program, Genentech, Inc, South San Francisco, California
| |
Collapse
|
16
|
Kim J, Chin HS. Deep learning-based prediction of the retinal structural alterations after epiretinal membrane surgery. Sci Rep 2023; 13:19275. [PMID: 37935769 PMCID: PMC10630279 DOI: 10.1038/s41598-023-46063-6] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2023] [Accepted: 10/27/2023] [Indexed: 11/09/2023] Open
Abstract
To generate and evaluate synthesized postoperative OCT images of eyes with epiretinal membrane (ERM) based on preoperative OCT images using deep learning methodology. This study included a total of 500 pairs of preoperative and postoperative optical coherence tomography (OCT) images for training a neural network. Sixty preoperative OCT images were used to test the neural network's performance, and the corresponding postoperative OCT images were used to evaluate the synthesized images in terms of the structural similarity index measure (SSIM). The SSIM quantified how similar each synthesized postoperative OCT image was to the actual postoperative OCT image. A Pix2Pix GAN model was used to generate the synthesized postoperative OCT images. A total of 60 synthesized OCT images were generated after training for 800 epochs. The mean SSIM between the synthesized and actual postoperative OCT images was 0.913. The Pix2Pix GAN model shows potential for generating predictive postoperative OCT images following ERM removal surgery.
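The SSIM evaluation described here can be reproduced with scikit-image; a minimal sketch follows. The paired image lists are assumed inputs (loading and preprocessing are omitted), and this is not the authors' evaluation script.

```python
# Minimal sketch: mean SSIM between synthesized and real postoperative OCT B-scans.
# Assumes two equal-length lists of grayscale image arrays.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def mean_ssim(synthetic_images, real_images):
    """Average SSIM over paired synthesized/real postoperative images."""
    scores = []
    for syn, real in zip(synthetic_images, real_images):
        syn = syn.astype(np.float64)
        real = real.astype(np.float64)
        scores.append(ssim(syn, real, data_range=real.max() - real.min()))
    return float(np.mean(scores))
```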
Collapse
Affiliation(s)
- Joseph Kim
- Retina Division, Nune Eye Hospital, Seoul, Republic of Korea
| | - Hee Seung Chin
- Department of Ophthalmology, Inha University School of Medicine, Incheon, Republic of Korea.
| |
Collapse
|
17
|
Muntean GA, Marginean A, Groza A, Damian I, Roman SA, Hapca MC, Muntean MV, Nicoară SD. The Predictive Capabilities of Artificial Intelligence-Based OCT Analysis for Age-Related Macular Degeneration Progression-A Systematic Review. Diagnostics (Basel) 2023; 13:2464. [PMID: 37510207 PMCID: PMC10378064 DOI: 10.3390/diagnostics13142464] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2023] [Revised: 06/16/2023] [Accepted: 07/13/2023] [Indexed: 07/30/2023] Open
Abstract
The era of artificial intelligence (AI) has revolutionized our daily lives and AI has become a powerful force that is gradually transforming the field of medicine. Ophthalmology sits at the forefront of this transformation thanks to the effortless acquisition of an abundance of imaging modalities. There has been tremendous work in the field of AI for retinal diseases, with age-related macular degeneration being at the top of the most studied conditions. The purpose of the current systematic review was to identify and evaluate, in terms of strengths and limitations, the articles that apply AI to optical coherence tomography (OCT) images in order to predict the future evolution of age-related macular degeneration (AMD) during its natural history and after treatment in terms of OCT morphological structure and visual function. After a thorough search through seven databases up to 1 January 2022 using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, 1800 records were identified. After screening, 48 articles were selected for full-text retrieval and 19 articles were finally included. From these 19 articles, 4 articles concentrated on predicting the anti-VEGF requirement in neovascular AMD (nAMD), 4 articles focused on predicting anti-VEGF efficacy in nAMD patients, 3 articles predicted the conversion from early or intermediate AMD (iAMD) to nAMD, 1 article predicted the conversion from iAMD to geographic atrophy (GA), 1 article predicted the conversion from iAMD to both nAMD and GA, 3 articles predicted the future growth of GA and 3 articles predicted the future outcome for visual acuity (VA) after anti-VEGF treatment in nAMD patients. Since using AI methods to predict future changes in AMD is only in its initial phase, a systematic review provides the opportunity of setting the context of previous work in this area and can present a starting point for future research.
Collapse
Affiliation(s)
- George Adrian Muntean
- Department of Ophthalmology, "Iuliu Hatieganu" University of Medicine and Pharmacy, Emergency County Hospital, 400347 Cluj-Napoca, Romania
| | - Anca Marginean
- Department of Computer Science, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
| | - Adrian Groza
- Department of Computer Science, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
| | - Ioana Damian
- Department of Ophthalmology, "Iuliu Hatieganu" University of Medicine and Pharmacy, Emergency County Hospital, 400347 Cluj-Napoca, Romania
| | - Sara Alexia Roman
- Faculty of Medicine, "Iuliu Hatieganu" University of Medicine and Pharmacy, 400347 Cluj-Napoca, Romania
| | - Mădălina Claudia Hapca
- Department of Ophthalmology, "Iuliu Hatieganu" University of Medicine and Pharmacy, Emergency County Hospital, 400347 Cluj-Napoca, Romania
| | - Maximilian Vlad Muntean
- Plastic Surgery Department, "Prof. Dr. I. Chiricuta" Institute of Oncology, 400015 Cluj-Napoca, Romania
| | - Simona Delia Nicoară
- Department of Ophthalmology, "Iuliu Hatieganu" University of Medicine and Pharmacy, Emergency County Hospital, 400347 Cluj-Napoca, Romania
| |
Collapse
|
18
|
Wang Z, Lim G, Ng WY, Tan TE, Lim J, Lim SH, Foo V, Lim J, Sinisterra LG, Zheng F, Liu N, Tan GSW, Cheng CY, Cheung GCM, Wong TY, Ting DSW. Synthetic artificial intelligence using generative adversarial network for retinal imaging in detection of age-related macular degeneration. Front Med (Lausanne) 2023; 10:1184892. [PMID: 37425325 PMCID: PMC10324667 DOI: 10.3389/fmed.2023.1184892] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2023] [Accepted: 05/30/2023] [Indexed: 07/11/2023] Open
Abstract
Introduction Age-related macular degeneration (AMD) is one of the leading causes of vision impairment globally, and early detection is crucial to prevent vision loss. However, screening for AMD is resource dependent and demands experienced healthcare providers. Recently, deep learning (DL) systems have shown potential for effective detection of various eye diseases from retinal fundus images, but the development of such robust systems requires large datasets, which can be limited by disease prevalence and patient privacy. In the case of AMD, images of the advanced phenotype are often too scarce for DL analysis, which may be addressed by generating synthetic images using generative adversarial networks (GANs). This study aimed to develop GAN-synthesized fundus photos with AMD lesions and to assess the realness of these images with an objective scale. Methods To build our GAN models, a total of 125,012 fundus photos from a real-world non-AMD phenotypical dataset were used. StyleGAN2 and a human-in-the-loop (HITL) method were then applied to synthesize fundus images with AMD features. To objectively assess the quality of the synthesized images, we proposed a novel realness scale based on the frequency of broken vessels observed in the fundus photos. Four residents conducted two rounds of grading on 300 images to distinguish real from synthetic images, based on their subjective impression and the objective scale, respectively. Results and discussion The introduction of HITL training increased the percentage of synthetic images with AMD lesions, despite the limited number of AMD images in the initial training dataset. Qualitatively, the synthesized images proved robust in that our residents had limited ability to distinguish real from synthetic ones, as evidenced by an overall accuracy of 0.66 (95% CI: 0.61-0.66) and a Cohen's kappa of 0.320. For the non-referable AMD classes (no or early AMD), the accuracy was only 0.51. With the objective scale, the overall accuracy improved to 0.72. In conclusion, GAN models built with HITL training are capable of producing realistic-looking fundus images that can fool human experts, while our objective realness scale based on broken vessels can help identify synthetic fundus photos.
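The realness scale itself is the authors' contribution; the agreement statistics they report (overall accuracy and Cohen's kappa for graders labelling images as real or synthetic) can be computed as in the following hedged sketch, where the ground-truth and grader labels are placeholder values.

```python
# Sketch: accuracy and Cohen's kappa for real-vs-synthetic grading.
# 1 = real image, 0 = synthetic image; labels below are placeholders.
from sklearn.metrics import accuracy_score, cohen_kappa_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # ground truth for the graded set
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]   # one grader's calls

print("accuracy:", accuracy_score(y_true, y_pred))
print("Cohen's kappa:", cohen_kappa_score(y_true, y_pred))
```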
Collapse
Affiliation(s)
- Zhaoran Wang
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
| | - Gilbert Lim
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore, Singapore
| | - Wei Yan Ng
- Singapore Eye Research Institute, Singapore, Singapore
- Singapore National Eye Centre, Singapore, Singapore
| | - Tien-En Tan
- Singapore Eye Research Institute, Singapore, Singapore
- Singapore National Eye Centre, Singapore, Singapore
| | - Jane Lim
- Singapore Eye Research Institute, Singapore, Singapore
- Singapore National Eye Centre, Singapore, Singapore
| | - Sing Hui Lim
- Singapore Eye Research Institute, Singapore, Singapore
- Singapore National Eye Centre, Singapore, Singapore
| | - Valencia Foo
- Singapore Eye Research Institute, Singapore, Singapore
- Singapore National Eye Centre, Singapore, Singapore
| | - Joshua Lim
- Singapore Eye Research Institute, Singapore, Singapore
- Singapore National Eye Centre, Singapore, Singapore
| | | | - Feihui Zheng
- Singapore Eye Research Institute, Singapore, Singapore
| | - Nan Liu
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore, Singapore
| | - Gavin Siew Wei Tan
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore, Singapore
- Singapore National Eye Centre, Singapore, Singapore
| | - Ching-Yu Cheng
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore, Singapore
- Singapore National Eye Centre, Singapore, Singapore
| | - Gemmy Chui Ming Cheung
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore, Singapore
- Singapore National Eye Centre, Singapore, Singapore
| | - Tien Yin Wong
- Singapore National Eye Centre, Singapore, Singapore
- School of Medicine, Tsinghua University, Beijing, China
| | - Daniel Shu Wei Ting
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore, Singapore
- Singapore National Eye Centre, Singapore, Singapore
| |
Collapse
|
19
|
Moon S, Lee Y, Hwang J, Kim CG, Kim JW, Yoon WT, Kim JH. Prediction of anti-vascular endothelial growth factor agent-specific treatment outcomes in neovascular age-related macular degeneration using a generative adversarial network. Sci Rep 2023; 13:5639. [PMID: 37024576 PMCID: PMC10079864 DOI: 10.1038/s41598-023-32398-7] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2022] [Accepted: 03/27/2023] [Indexed: 04/08/2023] Open
Abstract
To develop an artificial intelligence (AI) model that predicts anti-vascular endothelial growth factor (VEGF) agent-specific anatomical treatment outcomes in neovascular age-related macular degeneration (AMD), thereby assisting clinicians in selecting the most suitable anti-VEGF agent for each patient. This retrospective study included patients diagnosed with neovascular AMD who received three loading injections of either ranibizumab or aflibercept. Training was performed using optical coherence tomography (OCT) images with an attention generative adversarial network (GAN) model. To test the performance of the AI model, the sensitivity and specificity for predicting the presence of retinal fluid after treatment were calculated for the AI model and for two human examiners, one experienced (Examiner 1) and one less experienced (Examiner 2). A total of 1684 OCT images from 842 patients (419 treated with ranibizumab and 423 treated with aflibercept) were used as the training set. Testing was performed using images from 98 patients. In patients treated with ranibizumab, the sensitivity and specificity, respectively, were 0.615 and 0.667 for the AI model, 0.385 and 0.861 for Examiner 1, and 0.231 and 0.806 for Examiner 2. In patients treated with aflibercept, the sensitivity and specificity, respectively, were 0.857 and 0.881 for the AI model, 0.429 and 0.976 for Examiner 1, and 0.429 and 0.857 for Examiner 2. In 18.5% of cases, the fluid status of synthetic posttreatment images differed between ranibizumab and aflibercept. The AI model using a GAN might predict anti-VEGF agent-specific short-term treatment outcomes with relatively higher sensitivity than human examiners. Additionally, there was a difference in fluid-resolution efficacy between the anti-VEGF agents. These results suggest the potential of AI in personalized medicine for patients with neovascular AMD.
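The sensitivity and specificity figures quoted above follow directly from a 2x2 confusion matrix over the binary outcome (post-treatment fluid present or absent); a hedged sketch with placeholder labels is shown below, not the study's actual test data.

```python
# Sketch: sensitivity and specificity for predicting post-treatment fluid
# presence (1 = fluid present, 0 = fluid absent). Labels are placeholders.
from sklearn.metrics import confusion_matrix

y_true = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 0, 1, 0, 1, 1]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
sensitivity = tp / (tp + fn)   # true-positive rate
specificity = tn / (tn + fp)   # true-negative rate
print(f"sensitivity={sensitivity:.3f}, specificity={specificity:.3f}")
```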
Collapse
Affiliation(s)
- Sehwan Moon
- School of Electrical Engineering and Computer Science, Gwangju Institute of Science and Technology, Gwangju, South Korea
- MODULABS, Seoul, South Korea
| | - Youngsuk Lee
- INGRADIENT Inc., Seoul, South Korea
- MODULABS, Seoul, South Korea
| | - Jeongyoung Hwang
- AI Graduated School, Gwangju Institute of Science and Technology, Gwangju, South Korea
- MODULABS, Seoul, South Korea
| | - Chul Gu Kim
- Department of Ophthalmology, Kim's Eye Hospital, #156 Youngdeungpo-dong 4ga, Youngdeungpo-gu, Seoul, 150-034, South Korea
| | - Jong Woo Kim
- Department of Ophthalmology, Kim's Eye Hospital, #156 Youngdeungpo-dong 4ga, Youngdeungpo-gu, Seoul, 150-034, South Korea
| | - Won Tae Yoon
- Department of Ophthalmology, Kim's Eye Hospital, #156 Youngdeungpo-dong 4ga, Youngdeungpo-gu, Seoul, 150-034, South Korea.
- Kim's Eye Hospital Data Center, Seoul, South Korea.
| | - Jae Hui Kim
- Department of Ophthalmology, Kim's Eye Hospital, #156 Youngdeungpo-dong 4ga, Youngdeungpo-gu, Seoul, 150-034, South Korea.
- Kim's Eye Hospital Data Center, Seoul, South Korea.
| |
Collapse
|
20
|
Zhang Y, Huang K, Li M, Yuan S, Chen Q. Learn Single-horizon Disease Evolution for Predictive Generation of Post-therapeutic Neovascular Age-related Macular Degeneration. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 230:107364. [PMID: 36716636 DOI: 10.1016/j.cmpb.2023.107364] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/06/2022] [Revised: 01/16/2023] [Accepted: 01/20/2023] [Indexed: 06/18/2023]
Abstract
BACKGROUND AND OBJECTIVE Most existing disease prediction methods in medical image processing fall into two classes, namely image-to-category predictions and image-to-parameter predictions. Few works have focused on image-to-image predictions. In contrast to multi-horizon predictions in other fields, ophthalmologists place more confidence in single-horizon predictions because of the low tolerance for predictive risk. METHODS We propose a single-horizon disease evolution network (SHENet) to predictively generate post-therapeutic SD-OCT images from pre-therapeutic SD-OCT images of eyes with neovascular age-related macular degeneration (nAMD). In SHENet, a feature encoder converts the input SD-OCT images into deep features; a graph evolution module then predicts the process of disease evolution in a high-dimensional latent space and outputs the predicted deep features; and lastly, a feature decoder recovers the predicted deep features into SD-OCT images. We further propose an evolution reinforcement module to ensure effective disease evolution learning, and we obtain realistic SD-OCT images by adversarial training. RESULTS SHENet was validated on 383 SD-OCT cubes from 22 nAMD patients under three well-designed schemes (P-0, P-1 and P-M) using quantitative and qualitative evaluations. Three metrics (PSNR, SSIM, 1-LPIPS) were used for quantitative evaluation. Compared with other generative methods, the SD-OCT images generated by SHENet had the highest image quality by PSNR (P-0: 23.659, P-1: 23.875, P-M: 24.198). In addition, SHENet achieved the best structure preservation by SSIM (P-0: 0.326, P-1: 0.337, P-M: 0.349) and the best content prediction by 1-LPIPS (P-0: 0.609, P-1: 0.626, P-M: 0.642). Qualitative evaluations also demonstrated that SHENet produced better visual results than the other methods. CONCLUSIONS SHENet can generate post-therapeutic SD-OCT images with both high prediction performance and good image quality, which has great potential to help ophthalmologists forecast the therapeutic effect in nAMD.
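Of the three metrics reported, PSNR follows directly from the mean squared error between a generated and a reference B-scan; a definition-level sketch is given below. The `data_range` argument is an assumption about how the images are scaled, and this is not tied to the SHENet implementation.

```python
# Definition-level sketch of PSNR between a generated and a real SD-OCT B-scan.
import numpy as np

def psnr(generated, reference, data_range=255.0):
    """Peak signal-to-noise ratio in dB: 20*log10(MAX) - 10*log10(MSE)."""
    mse = np.mean((generated.astype(np.float64) - reference.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")   # identical images
    return 20.0 * np.log10(data_range) - 10.0 * np.log10(mse)
```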
Collapse
Affiliation(s)
- Yuhan Zhang
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, 210094, China.
| | - Kun Huang
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, 210094, China.
| | - Mingchao Li
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, 210094, China.
| | - Songtao Yuan
- Department of Ophthalmology, The First Affiliated Hospital with Nanjing Medical University, Nanjing, 210094, China.
| | - Qiang Chen
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, 210094, China.
| |
Collapse
|
21
|
Faatz H, Rothaus K, Ziegler M, Book M, Spital G, Lange C, Lommatzsch A. The Architecture of Macular Neovascularizations Predicts Treatment Responses to Anti-VEGF Therapy in Neovascular AMD. Diagnostics (Basel) 2022; 12:2807. [PMID: 36428867 PMCID: PMC9688972 DOI: 10.3390/diagnostics12112807] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2022] [Revised: 11/08/2022] [Accepted: 11/14/2022] [Indexed: 11/18/2022] Open
Abstract
Introduction: Anti-VEGF therapy is an effective option for improving and stabilizing vision in neovascular age-related macular degeneration (nAMD). However, the response to treatment is markedly heterogeneous. The aim of this study was therefore to analyze the vascular characteristics of type 1, 2, and 3 macular neovascularizations (MNV) in order to identify biomarkers that predict treatment response, especially with regard to changes in intraretinal and subretinal fluid. Materials and Methods: Overall, 90 treatment-naive eyes with nAMD confirmed by optical coherence tomography (OCT), fluorescein angiography, and OCT angiography (OCTA) were included in this retrospective study. The MNV detected by OCTA were subjected to quantitative vascular analysis by binarization and skeletonization of the vessels using ImageJ. We determined their area, total vascular length (sumL), fractal dimension (FD), flow density, number of vascular nodes (numN), and average vascular diameter (avgW). The results were correlated with the treatment response to the initial three anti-VEGF injections, the changes in intraretinal fluid (IRF) and subretinal fluid (SRF), and the occurrence of pigment epithelial detachments (PED). Results: All patients found to have no subretinal or intraretinal fluid after the initial three anti-VEGF injections showed a significantly smaller MNV area (p < 0.001), a lower sumL (p < 0.0005), and a lower FD (p < 0.005) before treatment than those who still exhibited signs of activity. These parameters also showed a significant influence in the separate analyses of persistent SRF (p < 0.005) and persistent PED (p < 0.05), whereas we could not detect any influence on changes in IRF. The vascular parameters avgW, numN, and flow density showed no significant influence on SRF/IRF or PED changes. Conclusions: The size, total vessel length, and fractal dimension of MNV at baseline are predictors of the response to anti-VEGF therapy. Therefore, particularly regarding the development of new classes of drugs, these parameters could yield new insights into treatment response.
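The study performed its vessel analysis in ImageJ; an open-source sketch of analogous quantities (vessel area, skeleton length as a pixel count, and a box-counting estimate of fractal dimension) using scikit-image and NumPy might look like the following. The Otsu threshold, pixel size, and box sizes are illustrative assumptions rather than the authors' settings, and the sketch assumes a non-empty vessel mask.

```python
# Sketch of OCTA vessel metrics analogous to the ImageJ workflow:
# binarize -> skeletonize -> area, total skeleton length, box-counting FD.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import skeletonize

def box_counting_fd(mask, box_sizes=(2, 4, 8, 16, 32)):
    """Slope of log(box count) vs. log(1/box size) over a binary mask."""
    counts = []
    for s in box_sizes:
        h, w = mask.shape
        trimmed = mask[: h - h % s, : w - w % s]
        blocks = trimmed.reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(blocks.sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

def vessel_metrics(octa_enface, pixel_size_mm=0.01):
    binary = octa_enface > threshold_otsu(octa_enface)   # vessel mask
    skeleton = skeletonize(binary)                       # 1-pixel centrelines
    area_mm2 = binary.sum() * pixel_size_mm ** 2
    length_mm = skeleton.sum() * pixel_size_mm           # crude length estimate
    return area_mm2, length_mm, box_counting_fd(skeleton)
```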
Collapse
Affiliation(s)
- Henrik Faatz
- Department of Ophthalmology, St. Franziskus Hospital, 48145 Münster, Germany
- Achim Wessing Institute for Diagnostic Ophthalmology, Duisburg–Essen University, 45147 Essen, Germany
| | - Kai Rothaus
- Department of Ophthalmology, St. Franziskus Hospital, 48145 Münster, Germany
| | - Martin Ziegler
- Department of Ophthalmology, St. Franziskus Hospital, 48145 Münster, Germany
| | - Marius Book
- AugenZentrum Siegburg, MVZ ADTC Siegburg GmbH, 53721 Siegburg, Germany
| | - Georg Spital
- Department of Ophthalmology, St. Franziskus Hospital, 48145 Münster, Germany
| | - Clemens Lange
- Department of Ophthalmology, St. Franziskus Hospital, 48145 Münster, Germany
- Department of Ophthalmology, Freiburg University Hospital, 79106 Freiburg, Germany
| | - Albrecht Lommatzsch
- Department of Ophthalmology, St. Franziskus Hospital, 48145 Münster, Germany
- Achim Wessing Institute for Diagnostic Ophthalmology, Duisburg–Essen University, 45147 Essen, Germany
- Department of Ophthalmology, Essen University Hospital, 45147 Essen, Germany
| |
Collapse
|
22
|
Xu F, Yu X, Gao Y, Ning X, Huang Z, Wei M, Zhai W, Zhang R, Wang S, Li J. Predicting OCT images of short-term response to anti-VEGF treatment for retinal vein occlusion using generative adversarial network. Front Bioeng Biotechnol 2022; 10:914964. [PMID: 36312556 PMCID: PMC9596772 DOI: 10.3389/fbioe.2022.914964] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2022] [Accepted: 09/23/2022] [Indexed: 11/26/2022] Open
Abstract
To generate and evaluate post-therapeutic optical coherence tomography (OCT) images based on pre-therapeutic images with generative adversarial network (GAN) to predict the short-term response of patients with retinal vein occlusion (RVO) to anti-vascular endothelial growth factor (anti-VEGF) therapy. Real-world imaging data were retrospectively collected from 1 May 2017, to 1 June 2021. A total of 515 pairs of pre-and post-therapeutic OCT images of patients with RVO were included in the training set, while 68 pre-and post-therapeutic OCT images were included in the validation set. A pix2pixHD method was adopted to predict post-therapeutic OCT images in RVO patients after anti-VEGF therapy. The quality and similarity of synthetic OCT images were evaluated by screening and evaluation experiments. We quantitatively and qualitatively assessed the prognostic accuracy of the synthetic post-therapeutic OCT images. The post-therapeutic OCT images generated by the pix2pixHD algorithm were comparable to the actual images in edema resorption response. Retinal specialists found most synthetic images (62/68) difficult to differentiate from the real ones. The mean absolute error (MAE) of the central macular thickness (CMT) between the synthetic and real OCT images was 26.33 ± 15.81 μm. There was no statistical difference in CMT between the synthetic and the real images. In this retrospective study, the application of the pix2pixHD algorithm objectively predicted the short-term response of each patient to anti-VEGF therapy based on OCT images with high accuracy, suggestive of its clinical value, especially for screening patients with relatively poor prognosis and potentially guiding clinical treatment. Importantly, our artificial intelligence-based prediction approach's non-invasiveness, repeatability, and cost-effectiveness can improve compliance and follow-up management of this patient population.
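The central macular thickness comparison reported here (a mean absolute error between synthetic and real images, plus a check for a statistical difference) can be sketched as follows. The thickness values are placeholders, and the paired t-test is an assumed choice of test since the abstract does not name one.

```python
# Sketch: mean absolute error of central macular thickness (CMT, micrometres)
# between synthetic and real post-treatment OCT, plus a paired significance test.
import numpy as np
from scipy import stats

cmt_real = np.array([310.0, 285.0, 402.0, 350.0, 298.0])        # placeholder values
cmt_synthetic = np.array([322.0, 280.0, 390.0, 361.0, 305.0])   # placeholder values

abs_err = np.abs(cmt_synthetic - cmt_real)
print(f"MAE = {abs_err.mean():.1f} +/- {abs_err.std(ddof=1):.1f} um")

t_stat, p_value = stats.ttest_rel(cmt_synthetic, cmt_real)      # paired t-test (assumed)
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
```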
Collapse
Affiliation(s)
- Fabao Xu
- Department of Ophthalmology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan, China
| | - Xuechen Yu
- Department of Ophthalmology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan, China
| | - Yang Gao
- School of Physics, Beihang University, Beijing, China
- Hangzhou Innovation Institute, Beihang University, Hangzhou, China
| | - Xiaolin Ning
- Hangzhou Innovation Institute, Beihang University, Hangzhou, China
- Research Institute of Frontier Science, Beihang University, Beijing, China
| | - Ziyuan Huang
- Research Institute of Frontier Science, Beihang University, Beijing, China
| | - Min Wei
- Department of Ophthalmology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan, China
| | - Weibin Zhai
- Department of Ophthalmology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan, China
| | - Rui Zhang
- Department of Ophthalmology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan, China
| | - Shaopeng Wang
- Department of Ophthalmology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan, China
| | - Jianqiao Li
- Department of Ophthalmology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan, China
| |
Collapse
|
23
|
Zhang Z, Cheng N, Liu Y, Song J, Liu X, Zhang S, Zhang G. Prediction of corneal astigmatism based on corneal tomography after femtosecond laser arcuate keratotomy using a pix2pix conditional generative adversarial network. Front Public Health 2022; 10:1012929. [PMID: 36187623 PMCID: PMC9523441 DOI: 10.3389/fpubh.2022.1012929] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2022] [Accepted: 08/29/2022] [Indexed: 01/27/2023] Open
Abstract
Purpose This study aimed to develop a deep learning model to generate postoperative corneal axial curvature maps after femtosecond laser arcuate keratotomy (FLAK) based on corneal tomography, using a pix2pix conditional generative adversarial network (pix2pix cGAN), for surgical planning. Methods A total of 451 eyes of 318 nonconsecutive patients were subjected to FLAK for corneal astigmatism correction during cataract surgery. Paired or single anterior penetrating FLAKs were performed at an 8.0-mm optical zone with a depth of 90% using a femtosecond laser (LenSx laser, Alcon Laboratories, Inc.). Corneal tomography images were acquired with the Oculus Pentacam HR (Optikgeräte GmbH, Wetzlar, Germany) before and 3 months after the surgery. The raw data required for analysis consisted of the anterior corneal curvature for a range of ± 3.5 mm around the corneal apex in 0.1-mm steps, on which the synthesized pseudo-color corneal curvature maps were based. The deep learning model used was a pix2pix conditional generative adversarial network. The prediction accuracy of synthetic postoperative corneal astigmatism in zones of different diameters centered on the corneal apex was assessed using vector analysis. The synthetic postoperative corneal axial curvature maps were compared with the real postoperative corneal axial curvature maps using the structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR). Results A total of 386 pairs of preoperative and postoperative corneal tomography data were included in the training set, whereas 65 preoperative datasets were retrospectively included in the test set. The correlation coefficient between synthetic and real postoperative astigmatism (difference vector) in the 3-mm zone was 0.89, and that for surgically induced astigmatism (SIA) was 0.93. The mean absolute errors of SIA for real and synthetic postoperative corneal axial curvature maps in the 1-, 3-, and 5-mm zones were 0.20 ± 0.25, 0.12 ± 0.17, and 0.09 ± 0.13 diopters, respectively. The average SSIM and PSNR in the 3-mm zone were 0.86 ± 0.04 and 18.24 ± 5.78, respectively. Conclusion Our results showed that a pix2pix cGAN can synthesize plausible postoperative corneal tomography for FLAK, demonstrating the possibility of using GANs to predict corneal tomography and the potential of applying artificial intelligence to construct surgical planning models.
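The vector analysis of astigmatism mentioned above relies on a double-angle representation, in which astigmatism magnitudes and axes are mapped to Cartesian components before subtraction. The sketch below shows this standard power-vector arithmetic for computing surgically induced astigmatism (SIA); it is generic textbook arithmetic, not the authors' exact pipeline, and the example values are invented.

```python
# Sketch: double-angle (power-vector) arithmetic for astigmatism analysis.
# Each astigmatism value is (magnitude in dioptres, axis in degrees).
import numpy as np

def to_xy(magnitude, axis_deg):
    """Map (magnitude, axis) to Cartesian components in double-angle space."""
    theta = np.deg2rad(2.0 * axis_deg)
    return magnitude * np.cos(theta), magnitude * np.sin(theta)

def from_xy(x, y):
    """Map double-angle Cartesian components back to (magnitude, axis)."""
    magnitude = np.hypot(x, y)
    axis_deg = (np.rad2deg(np.arctan2(y, x)) / 2.0) % 180.0
    return magnitude, axis_deg

def surgically_induced_astigmatism(preop, postop):
    """SIA = postoperative minus preoperative astigmatism, as a vector."""
    px, py = to_xy(*preop)
    qx, qy = to_xy(*postop)
    return from_xy(qx - px, qy - py)

# Example (invented values): 1.50 D @ 90 deg before surgery, 0.75 D @ 85 deg after.
print(surgically_induced_astigmatism((1.50, 90.0), (0.75, 85.0)))
```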
Collapse
Affiliation(s)
- Zhe Zhang
- Shenzhen Eye Hospital, Jinan University, Shenzhen Eye Institute, Shenzhen, China
- Department of Cataract, Shanxi Eye Hospital, Taiyuan, China
- First Hospital of Shanxi Medical University, Taiyuan, China
| | - Nan Cheng
- College of Biomedical Engineering, Taiyuan University of Technology, Taiyuan, China
| | - Yunfang Liu
- Department of Ophthalmology, First Affiliated Hospital of Huzhou University, Huzhou, China
| | - Junyang Song
- Department of Ophthalmology, First Affiliated Hospital of Huzhou University, Huzhou, China
| | - Xinhua Liu
- Shenzhen Eye Hospital, Jinan University, Shenzhen Eye Institute, Shenzhen, China
| | - Suhua Zhang
- Department of Cataract, Shanxi Eye Hospital, Taiyuan, China
- Taiyuan Central Hospital of Shanxi Medical University, Taiyuan, China
| | - Guanghua Zhang
- Department of Intelligence and Automation, Taiyuan University, Taiyuan, China
- Graphics and Imaging Laboratory, University of Girona, Girona, Spain
| |
Collapse
|
24
|
A Systematic Review of Deep Learning Applications for Optical Coherence Tomography in Age-Related Macular Degeneration. Retina 2022; 42:1417-1424. [DOI: 10.1097/iae.0000000000003535] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
25
|
You A, Kim JK, Ryu IH, Yoo TK. Application of generative adversarial networks (GAN) for ophthalmology image domains: a survey. EYE AND VISION (LONDON, ENGLAND) 2022; 9:6. [PMID: 35109930 PMCID: PMC8808986 DOI: 10.1186/s40662-022-00277-3] [Citation(s) in RCA: 57] [Impact Index Per Article: 19.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/15/2021] [Accepted: 01/11/2022] [Indexed: 12/12/2022]
Abstract
BACKGROUND Recent advances in deep learning techniques have led to improved diagnostic abilities in ophthalmology. A generative adversarial network (GAN), which consists of two competing deep neural networks, a generator and a discriminator, has demonstrated remarkable performance in image synthesis and image-to-image translation. The adoption of GANs in medical imaging is increasing for image generation and translation, but the technique is not yet familiar to researchers in the field of ophthalmology. In this work, we present a literature review on the application of GANs in ophthalmology image domains to discuss important contributions and to identify potential future research directions. METHODS We surveyed studies using GANs published before June 2021 and introduce various applications of GANs in ophthalmology image domains. The search identified 48 peer-reviewed papers for the final review. The type of GAN used in each analysis, the task, the imaging domain, and the outcome were collected to verify the usefulness of the GAN. RESULTS In ophthalmology image domains, GANs can perform segmentation, data augmentation, denoising, domain transfer, super-resolution, post-intervention prediction, and feature extraction. GAN techniques have extended the datasets and modalities available in ophthalmology. GANs have several limitations, such as mode collapse, spatial deformities, unintended changes, and the generation of high-frequency noise and checkerboard artifacts. CONCLUSIONS The use of GANs has benefited various tasks in ophthalmology image domains. Based on our observations, the adoption of GANs in ophthalmology is still at a very early stage of clinical validation compared with deep learning classification techniques, because several problems need to be overcome for practical use. However, proper selection of the GAN technique and statistical modeling of ocular imaging will greatly improve the performance of each image analysis. Finally, this survey should enable researchers to access the appropriate GAN technique and maximize the potential of ophthalmology datasets for deep learning research.
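For readers new to the technique, the generator-versus-discriminator competition described in this survey corresponds to two networks trained with opposing losses. A bare-bones, hedged sketch of the two networks and their losses is given below; the fully connected layer sizes and random "real" batch are arbitrary illustrations, whereas ophthalmic GANs use convolutional architectures on fundus or OCT images.

```python
# Bare-bones sketch of the two competing networks in a GAN and their losses.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28

generator = nn.Sequential(            # maps noise vector -> fake image
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh())

discriminator = nn.Sequential(        # maps image -> real/fake logit
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1))

bce = nn.BCEWithLogitsLoss()
z = torch.randn(16, latent_dim)
real_images = torch.rand(16, image_dim) * 2 - 1   # placeholder "real" batch

fake_images = generator(z)
d_loss = bce(discriminator(real_images), torch.ones(16, 1)) + \
         bce(discriminator(fake_images.detach()), torch.zeros(16, 1))
g_loss = bce(discriminator(fake_images), torch.ones(16, 1))   # generator tries to fool D
print(d_loss.item(), g_loss.item())
```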
Collapse
Affiliation(s)
- Aram You
- School of Architecture, Kumoh National Institute of Technology, Gumi, Gyeongbuk, South Korea
| | - Jin Kuk Kim
- B&VIIT Eye Center, Seoul, South Korea
- VISUWORKS, Seoul, South Korea
| | - Ik Hee Ryu
- B&VIIT Eye Center, Seoul, South Korea
- VISUWORKS, Seoul, South Korea
| | - Tae Keun Yoo
- B&VIIT Eye Center, Seoul, South Korea.
- Department of Ophthalmology, Aerospace Medical Center, Republic of Korea Air Force, 635 Danjae-ro, Namil-myeon, Cheongwon-gun, Cheongju, Chungcheongbuk-do, 363-849, South Korea.
| |
Collapse
|
26
|
Updates in deep learning research in ophthalmology. Clin Sci (Lond) 2021; 135:2357-2376. [PMID: 34661658 DOI: 10.1042/cs20210207] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2021] [Revised: 09/14/2021] [Accepted: 09/29/2021] [Indexed: 12/13/2022]
Abstract
Ophthalmology has been one of the early adopters of artificial intelligence (AI) within the medical field. Deep learning (DL), in particular, has garnered significant attention due to the availability of large amounts of data and digitized ocular images. Currently, AI in ophthalmology is mainly focused on improving disease classification and supporting decision-making when treating ophthalmic diseases such as diabetic retinopathy, age-related macular degeneration (AMD), glaucoma and retinopathy of prematurity (ROP). However, most of the DL systems (DLSs) developed thus far remain in the research stage and only a handful have achieved clinical translation. This phenomenon is due to a combination of factors including concerns over security and privacy, poor generalizability, trust and explainability issues, unfavorable end-user perceptions and uncertain economic value. Overcoming this challenge will require a combined approach. Firstly, emerging techniques such as federated learning (FL), generative adversarial networks (GANs), autonomous AI and blockchain will play an increasingly critical role in enhancing privacy, collaboration and DLS performance. Next, compliance with reporting and regulatory guidelines, such as CONSORT-AI and STARD-AI, will be required in order to improve transparency, minimize abuse and ensure reproducibility. Thirdly, frameworks will be required to obtain patient consent, perform ethical assessment and evaluate end-user perception. Lastly, proper health economic assessment (HEA) must be performed to provide financial visibility during the early phases of DLS development. This is necessary to manage resources prudently and guide the development of DLSs.
Collapse
|
27
|
Ferrara D, Newton EM, Lee AY. Artificial intelligence-based predictions in neovascular age-related macular degeneration. Curr Opin Ophthalmol 2021; 32:389-396. [PMID: 34265783 PMCID: PMC8373444 DOI: 10.1097/icu.0000000000000782] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/03/2022]
Abstract
PURPOSE OF REVIEW Predicting treatment response and optimizing treatment regimen in patients with neovascular age-related macular degeneration (nAMD) remains challenging. Artificial intelligence-based tools have the potential to increase confidence in clinical development of new therapeutics, facilitate individual prognostic predictions, and ultimately inform treatment decisions in clinical practice. RECENT FINDINGS To date, most advances in applying artificial intelligence to nAMD have focused on facilitating image analysis, particularly for automated segmentation, extraction, and quantification of imaging-based features from optical coherence tomography (OCT) images. No studies in our literature search evaluated whether artificial intelligence could predict the treatment regimen required for an optimal visual response for an individual patient. Challenges identified for developing artificial intelligence-based models for nAMD include the limited number of large datasets with high-quality OCT data, limiting the patient populations included in model development; lack of counterfactual data to inform how individual patients may have fared with an alternative treatment strategy; and absence of OCT data standards, impairing the development of models usable across devices. SUMMARY Artificial intelligence has the potential to enable powerful prognostic tools for a complex nAMD treatment landscape; however, additional work remains before these tools are applicable to informing treatment decisions for nAMD in clinical practice.
Collapse
Affiliation(s)
| | | | - Aaron Y. Lee
- Department of Ophthalmology, University of Washington, School of Medicine, Seattle, Washington, USA
| |
Collapse
|
28
|
Wang Z, Lim G, Ng WY, Keane PA, Campbell JP, Tan GSW, Schmetterer L, Wong TY, Liu Y, Ting DSW. Generative adversarial networks in ophthalmology: what are these and how can they be used? Curr Opin Ophthalmol 2021; 32:459-467. [PMID: 34324454 PMCID: PMC10276657 DOI: 10.1097/icu.0000000000000794] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
Abstract
PURPOSE OF REVIEW The development of deep learning (DL) systems requires a large amount of data, which may be limited by costs, protection of patient information and low prevalence of some conditions. Recent developments in artificial intelligence techniques have provided an innovative alternative to this challenge via the synthesis of biomedical images within a DL framework known as generative adversarial networks (GANs). This paper aims to introduce how GANs can be deployed for image synthesis in ophthalmology and to discuss the potential applications of GANs-produced images. RECENT FINDINGS Image synthesis is the most relevant function of GANs to the medical field, and it has been widely used for generating 'new' medical images of various modalities. In ophthalmology, GANs have mainly been utilized for augmenting classification and predictive tasks, by synthesizing fundus images and optical coherence tomography images with and without pathologies such as age-related macular degeneration and diabetic retinopathy. Despite their ability to generate high-resolution images, the development of GANs remains data intensive, and there is a lack of consensus on how best to evaluate the outputs produced by GANs. SUMMARY Although the problem of artificial biomedical data generation is of great interest, image synthesis by GANs represents an innovation with yet unclear relevance for ophthalmology.
Collapse
Affiliation(s)
- Zhaoran Wang
- Duke-NUS Medical School, National University of Singapore
| | - Gilbert Lim
- Duke-NUS Medical School, National University of Singapore
- Singapore Eye Research Institute, Singapore, Singapore National Eye Centre, Singapore
| | - Wei Yan Ng
- Duke-NUS Medical School, National University of Singapore
- Singapore Eye Research Institute, Singapore, Singapore National Eye Centre, Singapore
| | - Pearse A. Keane
- Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore
| | - J. Peter Campbell
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon, USA
| | - Gavin Siew Wei Tan
- Duke-NUS Medical School, National University of Singapore
- Singapore Eye Research Institute, Singapore, Singapore National Eye Centre, Singapore
| | - Leopold Schmetterer
- Duke-NUS Medical School, National University of Singapore
- Singapore Eye Research Institute, Singapore, Singapore National Eye Centre, Singapore
- SERI-NTU Advanced Ocular Engineering (STANCE)
- School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore
- Institute of Molecular and Clinical Ophthalmology Basel, Basel, Switzerland
- Department of Clinical Pharmacology
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
| | - Tien Yin Wong
- Duke-NUS Medical School, National University of Singapore
- Singapore Eye Research Institute, Singapore, Singapore National Eye Centre, Singapore
| | - Yong Liu
- Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore
| | - Daniel Shu Wei Ting
- Duke-NUS Medical School, National University of Singapore
- Singapore Eye Research Institute, Singapore, Singapore National Eye Centre, Singapore
| |
Collapse
|