1. Waisberg E, Ong J, Kamran SA, Masalkhi M, Paladugu P, Zaman N, Lee AG, Tavakkoli A. Generative artificial intelligence in ophthalmology. Surv Ophthalmol 2024:S0039-6257(24)00044-4. [PMID: 38762072] [DOI: 10.1016/j.survophthal.2024.04.009]
Abstract
Generative AI has revolutionized medicine over the past several years. A generative adversarial network (GAN) is a deep learning framework that has become a powerful technique in medicine, particularly in ophthalmology and image analysis. In this paper we review the current ophthalmic literature involving GANs and highlight key contributions in the field. We briefly touch on ChatGPT, another application of generative AI, and its potential in ophthalmology. We also explore potential uses for GANs in ocular imaging, with a specific emphasis on 3 primary domains: image enhancement, disease identification, and generation of synthetic data. PubMed, Ovid MEDLINE, and Google Scholar were searched from inception to October 30, 2022 to identify applications of GANs in ophthalmology. A total of 40 papers were included in this review. We cover various applications of GANs in ophthalmic imaging, including optical coherence tomography, orbital magnetic resonance imaging, fundus photography, and ultrasound; however, we also highlight several challenges that resulted in the generation of inaccurate and atypical results during certain iterations. Finally, we examine future directions and considerations for generative AI in ophthalmology.
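The adversarial setup this review centres on, a generator pitted against a discriminator, can be illustrated with a deliberately tiny sketch: a linear generator learns to imitate a 1-D Gaussian against a logistic discriminator, with hand-derived gradients. Everything here (the toy data, learning rate, and update rules) is invented for illustration and is far simpler than the convolutional GANs the review covers.

```python
# Toy GAN: generator g(z) = a*z + b vs. discriminator d(x) = sigmoid(w*x + c).
# The discriminator ascends log d(real) + log(1 - d(fake)); the generator
# ascends the non-saturating objective log d(fake).
import numpy as np

rng = np.random.default_rng(0)

a, b = 1.0, 0.0     # generator parameters
w, c = 0.1, 0.0     # discriminator parameters
lr = 0.05

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(2000):
    real = rng.normal(4.0, 1.0, 64)      # "real" data: N(4, 1)
    z = rng.normal(0.0, 1.0, 64)         # noise input
    fake = a * z + b

    # Discriminator update (gradient ascent on its objective)
    dr, df = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * np.mean((1 - dr) * real - df * fake)
    c += lr * np.mean((1 - dr) - df)

    # Generator update (gradient ascent on log d(fake))
    fake = a * z + b
    df = sigmoid(w * fake + c)
    a += lr * np.mean((1 - df) * w * z)
    b += lr * np.mean((1 - df) * w)

# After training, generated samples should drift toward the real mean of 4
samples = a * rng.normal(0.0, 1.0, 1000) + b
```

The same two-player objective underlies the ophthalmic GANs in the review; only the networks and data differ.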
Affiliation(s)
- Ethan Waisberg
- Department of Ophthalmology, University of Cambridge, Cambridge, United Kingdom
- Joshua Ong
- Michigan Medicine, University of Michigan, Ann Arbor, United States
- Sharif Amit Kamran
- School of Medicine, University College Dublin, Belfield, Dublin, Ireland
- Mouayad Masalkhi
- School of Medicine, University College Dublin, Belfield, Dublin, Ireland
- Phani Paladugu
- Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, Pennsylvania, United States; Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts, United States
- Nasif Zaman
- Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, Reno, Nevada, United States
- Andrew G Lee
- Center for Space Medicine, Baylor College of Medicine, Houston, Texas, United States; Department of Ophthalmology, Blanton Eye Institute, Houston Methodist Hospital, Houston, Texas, United States; The Houston Methodist Research Institute, Houston Methodist Hospital, Houston, Texas, United States; Departments of Ophthalmology, Neurology, and Neurosurgery, Weill Cornell Medicine, New York, New York, United States; Department of Ophthalmology, University of Texas Medical Branch, Galveston, Texas, United States; University of Texas MD Anderson Cancer Center, Houston, Texas, United States; Texas A&M College of Medicine, Texas, United States; Department of Ophthalmology, The University of Iowa Hospitals and Clinics, Iowa City, Iowa, United States
- Alireza Tavakkoli
- Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, Reno, Nevada, United States
2. Li J, Zhang H, Wang X, Wang H, Hao J, Bai G. Inpainting Saturation Artifact in Anterior Segment Optical Coherence Tomography. Sensors (Basel) 2023; 23:9439. [PMID: 38067812] [PMCID: PMC10708580] [DOI: 10.3390/s23239439]
Abstract
The cornea is an important refractive structure of the human eye, and corneal segmentation provides valuable information for clinical diagnosis, such as corneal thickness. Non-contact anterior segment optical coherence tomography (AS-OCT) is a prevalent ophthalmic imaging technique that can visualize the anterior and posterior surfaces of the cornea. During imaging, however, saturation artifacts are commonly generated at points where the corneal surface is normal to the incident light beam. This stripe-shaped saturation artifact covers the corneal surface, blurring the corneal edge and reducing the accuracy of corneal segmentation. To address this problem, an inpainting method that introduces structural similarity and frequency losses is proposed to remove saturation artifacts from AS-OCT images. Specifically, the structural similarity loss reconstructs the corneal structure and restores corneal textural details, while the frequency loss combines the spatial domain with the frequency domain to ensure overall consistency of the image in both. Furthermore, the performance of the proposed method on corneal segmentation tasks is evaluated, and the results indicate a significant benefit for subsequent clinical analysis.
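The two loss terms described in the abstract can be sketched as follows. The single-window SSIM, the L1 distance between FFT magnitude spectra, and the 0.1 weighting are simplifying assumptions made for this illustration; the paper's exact windowing and weighting are not reproduced here.

```python
# Sketch of a combined structural-similarity + frequency-domain loss for
# inpainting, of the kind the abstract describes. Illustrative only.
import numpy as np

def ssim_loss(x, y, c1=0.01**2, c2=0.03**2):
    """1 - SSIM computed over the whole image (intensities in [0, 1])."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    ssim = ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx**2 + my**2 + c1) * (vx + vy + c2))
    return 1.0 - ssim

def frequency_loss(x, y):
    """Mean L1 distance between 2-D FFT magnitude spectra."""
    return np.abs(np.abs(np.fft.fft2(x)) - np.abs(np.fft.fft2(y))).mean()

def inpainting_loss(pred, target, w_freq=0.1):
    # Combined objective: structure/texture term + spectral-consistency term
    return ssim_loss(pred, target) + w_freq * frequency_loss(pred, target)

rng = np.random.default_rng(1)
target = rng.random((64, 64))
corrupted = target.copy()
corrupted[20:30, :] = 1.0          # crude stand-in for a saturation stripe
loss = inpainting_loss(corrupted, target)
```

A network trained to minimise such a loss is penalised both for structural mismatch in the spatial domain and for spectral mismatch in the frequency domain.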
Affiliation(s)
- He Zhang
- Electronics Information Engineering College, Changchun University, Changchun 130022, China; (J.L.); (X.W.); (H.W.); (J.H.); (G.B.)
3. Garcia-Marin YF, Alonso-Caneiro D, Fisher D, Vincent SJ, Collins MJ. Patch-based CNN for corneal segmentation of AS-OCT images: Effect of the number of classes and image quality upon performance. Comput Biol Med 2023; 152:106342. [PMID: 36481759] [DOI: 10.1016/j.compbiomed.2022.106342]
Abstract
Anterior segment optical coherence tomography (AS-OCT) is a fundamental ophthalmic imaging technique. AS-OCT images can be examined by experts and segmented to provide quantitative metrics that inform clinical decision making. Manual segmentation of these images is time-consuming and subjective, encouraging software developers in the field to automate segmentation procedures. Traditional programming-based segmentation approaches are being replaced by deep learning methods, which have shown state-of-the-art performance in AS-OCT image analysis. In this study, a method based on patch-based convolutional neural networks (CNNs) was used to segment the three main boundaries of the cornea: the epithelium, Bowman's layer, and the endothelium. To assess the effect of the number of classes on performance, the model was designed as a patch-based boundary classifier using 4 and 8 classes. The effect of image quality was also assessed using different data distributions during the training process. While the Dice coefficient and probability revealed greater precision for the 8-class models, the boundary error metric indicated comparable performance. Additionally, for the 8-class models, the image quality test had only a small negative effect on performance, which may indicate the robustness of the model and could also suggest that the data augmentation methods did not yield significant improvement. These findings contribute to the development of automatic segmentation techniques for AS-OCT images, since patch-based methods have been largely unexplored in favor of other deep learning techniques. The overall performance of the proposed method is comparable to other well-established segmentation methods.
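The patch-based formulation above can be sketched minimally: a small patch centred on each pixel is classified into one of a fixed set of classes, producing a per-pixel class map. The trained CNN is replaced here by a trivial intensity-threshold stand-in so the sketch runs end to end; the patch size and class list are assumptions, not the paper's.

```python
# Patch-based per-pixel classification sketch (CNN replaced by a threshold).
import numpy as np

PATCH = 7                      # patch side length (odd, so it has a centre)
CLASSES = ["background", "epithelium", "bowman", "endothelium"]  # assumed

def extract_patch(img, row, col, size=PATCH):
    """Reflect-pad the image and cut out a size x size patch centred at (row, col)."""
    pad = size // 2
    padded = np.pad(img, pad, mode="reflect")
    return padded[row:row + size, col:col + size]

def classify_patch(patch):
    """Stand-in for the trained CNN: class from mean patch intensity."""
    m = patch.mean()
    if m < 0.25:
        return 0
    return 1 + min(2, int((m - 0.25) / 0.25))

def label_map(img):
    out = np.zeros(img.shape, dtype=int)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = classify_patch(extract_patch(img, r, c))
    return out

img = np.zeros((16, 16))
img[4:12, :] = 0.9             # bright horizontal band, like a tissue layer
labels = label_map(img)
```

In the paper this per-pixel classification is followed by boundary extraction from the class map; that post-processing step is omitted here.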
Affiliation(s)
- Yoel F Garcia-Marin
- Queensland University of Technology (QUT), Contact Lens and Visual Optics Laboratory, Centre for Vision and Eye Research, School of Optometry and Vision Science, Kelvin Grove, Qld, 4059, Australia
- David Alonso-Caneiro
- Queensland University of Technology (QUT), Contact Lens and Visual Optics Laboratory, Centre for Vision and Eye Research, School of Optometry and Vision Science, Kelvin Grove, Qld, 4059, Australia
- Damien Fisher
- Queensland University of Technology (QUT), Contact Lens and Visual Optics Laboratory, Centre for Vision and Eye Research, School of Optometry and Vision Science, Kelvin Grove, Qld, 4059, Australia
- Stephen J Vincent
- Queensland University of Technology (QUT), Contact Lens and Visual Optics Laboratory, Centre for Vision and Eye Research, School of Optometry and Vision Science, Kelvin Grove, Qld, 4059, Australia
- Michael J Collins
- Queensland University of Technology (QUT), Contact Lens and Visual Optics Laboratory, Centre for Vision and Eye Research, School of Optometry and Vision Science, Kelvin Grove, Qld, 4059, Australia
4. Bitton K, Zéboulon P, Ghazal W, Rizk M, Elahi S, Gatinel D. Deep Learning Model for the Detection of Corneal Edema Before Descemet Membrane Endothelial Keratoplasty on Optical Coherence Tomography Images. Transl Vis Sci Technol 2022; 11:19. [PMID: 36583911] [PMCID: PMC9807180] [DOI: 10.1167/tvst.11.12.19]
Abstract
Purpose Descemet membrane endothelial keratoplasty (DMEK) is the preferred method for treating corneal endothelial dysfunction such as Fuchs endothelial corneal dystrophy (FECD). The surgical indication is based on the patient's symptoms and the presence of corneal edema. We developed an automated deep learning tool to detect edema in corneal optical coherence tomography images, and this study evaluated the approach for edema detection before DMEK surgery in patients with or without FECD. Methods We used our previously described model, which classifies each pixel in a corneal optical coherence tomography image as "normal" or "edema." We included 1992 images of normal and preoperative edematous corneas. For each patient we calculated the edema fraction (EF), defined as the ratio between the number of pixels labeled as "edema" and those representing the cornea. Differential central corneal thickness (DCCT), defined as the difference in central corneal thickness before and 6 months after surgery, was used to quantify preoperative edema. The area under the curve (AUC) of EF for edema detection was calculated for several DCCT thresholds, and a value of 20 µm was selected to define significant edema because it provided the highest AUC. Results The AUC of the receiver operating characteristic curve for EF in the detection of 20 µm of DCCT was 0.97 for all patients, 0.96 for FECD and normal patients only, and 0.99 for non-FECD and normal patients. The optimal EF threshold was 0.143 for all patients and for patients with FECD. Conclusions Our model is capable of objectively detecting minimal corneal edema before DMEK surgery. Translational Relevance Deep learning can help interpret optical coherence tomography scans and aid the surgeon in decision-making.
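The edema-fraction metric above is simple enough to state in code. The label encoding below is an assumption made for the sketch; the 0.143 decision threshold comes from the abstract.

```python
# Edema fraction (EF): edema-labelled pixels over all corneal pixels,
# computed from a per-pixel segmentation map. Label codes are assumed.
import numpy as np

BACKGROUND, NORMAL, EDEMA = 0, 1, 2

def edema_fraction(label_map):
    cornea = np.count_nonzero(label_map != BACKGROUND)
    if cornea == 0:
        return 0.0
    return np.count_nonzero(label_map == EDEMA) / cornea

def is_edematous(label_map, threshold=0.143):
    """Threshold from the abstract: optimal EF cut-off for all patients."""
    return edema_fraction(label_map) >= threshold

# Synthetic scan: a 20-row corneal band, a quarter of it labelled edema
scan = np.full((100, 100), BACKGROUND)
scan[40:60, :] = NORMAL          # 2000 corneal pixels
scan[55:60, :] = EDEMA           # 500 of them labelled edema -> EF = 0.25
```

In the study, the AUC of this single scalar against the 20 µm DCCT criterion is what establishes its diagnostic value.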
Affiliation(s)
- Karen Bitton
- Department of Ophthalmology, Rothschild Foundation Hospital, Paris, France
- Pierre Zéboulon
- Department of Ophthalmology, Rothschild Foundation Hospital, Paris, France
- Wassim Ghazal
- Department of Ophthalmology, Rothschild Foundation Hospital, Paris, France
- Maria Rizk
- Department of Ophthalmology, Rothschild Foundation Hospital, Paris, France
- Sina Elahi
- Department of Ophthalmology, Rothschild Foundation Hospital, Paris, France
- Damien Gatinel
- Department of Ophthalmology, Rothschild Foundation Hospital, Paris, France
5. Kugelman J, Alonso-Caneiro D, Read SA, Collins MJ. A review of generative adversarial network applications in optical coherence tomography image analysis. J Optom 2022; 15 Suppl 1:S1-S11. [PMID: 36241526] [PMCID: PMC9732473] [DOI: 10.1016/j.optom.2022.09.004]
Abstract
Optical coherence tomography (OCT) has revolutionized ophthalmic clinical practice and research, as a result of the high-resolution images that the method is able to capture in a fast, non-invasive manner. Although clinicians can interpret OCT images qualitatively, the ability to quantitatively and automatically analyse these images represents a key goal for eye care by providing clinicians with immediate and relevant metrics to inform best clinical practice. The range of applications and methods to analyse OCT images is rich and rapidly expanding. With the advent of deep learning methods, the field has experienced significant progress, with state-of-the-art performance for several OCT image analysis tasks. Generative adversarial networks (GANs) represent a subfield of deep learning that allows for a range of novel applications not possible with most other deep learning methods, with the potential to provide more accurate and robust analyses. In this review, the progress and clinical impact of GANs in this field are examined, and potential future developments of GAN applications in OCT image processing are discussed.
Affiliation(s)
- Jason Kugelman
- Queensland University of Technology (QUT), Contact Lens and Visual Optics Laboratory, Centre for Vision and Eye Research, School of Optometry and Vision Science, Kelvin Grove, QLD 4059, Australia
- David Alonso-Caneiro
- Queensland University of Technology (QUT), Contact Lens and Visual Optics Laboratory, Centre for Vision and Eye Research, School of Optometry and Vision Science, Kelvin Grove, QLD 4059, Australia
- Scott A Read
- Queensland University of Technology (QUT), Contact Lens and Visual Optics Laboratory, Centre for Vision and Eye Research, School of Optometry and Vision Science, Kelvin Grove, QLD 4059, Australia
- Michael J Collins
- Queensland University of Technology (QUT), Contact Lens and Visual Optics Laboratory, Centre for Vision and Eye Research, School of Optometry and Vision Science, Kelvin Grove, QLD 4059, Australia
6. Kim SH, Kim J, Yang S, Oh SH, Lee SP, Yang HJ, Kim TI, Yi WJ. Automatic and quantitative measurement of alveolar bone level in OCT images using deep learning. Biomed Opt Express 2022; 13:5468-5482. [PMID: 36425614] [PMCID: PMC9664875] [DOI: 10.1364/boe.468212]
Abstract
We propose a method to automatically segment the periodontal structures of the tooth enamel and the alveolar bone using convolutional neural networks (CNNs), and to quantitatively and automatically measure the alveolar bone level (ABL) by detecting the cemento-enamel junction (CEJ) and the alveolar bone crest (ABC) in optical coherence tomography (OCT) images. The tooth enamel and alveolar bone regions were automatically segmented using U-Net, Dense-UNet, and U2-Net, and the ABL was quantitatively measured as the distance between the CEJ and the ABC using image processing. The mean distance difference (MDD) measured by our method ranged from 0.19 to 0.22 mm for the ABC and from 0.18 to 0.32 mm for the CEJ. All CNN models showed a mean absolute error (MAE) of less than 0.25 mm in the x and y coordinates and a successful detection rate (SDR) greater than 90% at 0.5 mm for both the ABC and the CEJ. The CNN models showed high segmentation accuracy for the tooth enamel and alveolar bone regions, and the ABL measurements at the incisors derived from the CNN predictions demonstrated high correlation and reliability with the ground truth in OCT images.
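Once the two landmarks have been located, the measurement step above reduces to a distance computation plus simple detection metrics. A sketch, assuming made-up landmark coordinates and a 10 µm/pixel spacing (the paper's actual scan geometry is not given here):

```python
# ABL as the CEJ-ABC distance, with MAE-style tolerance checking via the
# successful detection rate (SDR). Coordinates and pixel spacing are assumed.
import math

UM_PER_PX = 10.0  # assumed isotropic pixel spacing, in micrometres

def abl_mm(cej, abc):
    """Alveolar bone level: Euclidean CEJ-ABC distance, converted to mm."""
    return math.dist(cej, abc) * UM_PER_PX / 1000.0

def sdr(pred, truth, tol_mm=0.5):
    """Fraction of predicted landmarks within tol_mm of ground truth."""
    hits = sum(
        math.dist(p, t) * UM_PER_PX / 1000.0 <= tol_mm
        for p, t in zip(pred, truth))
    return hits / len(truth)

cej, abc = (120, 64), (200, 70)      # hypothetical (row, col) landmarks
print(round(abl_mm(cej, abc), 3))    # → 0.802
```

The paper reports SDR at a 0.5 mm tolerance, which is the default used above.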
Affiliation(s)
- Sul-Hee Kim
- Department of Periodontology, School of Dentistry, Seoul National University, Seoul, 03080, Republic of Korea
- These authors contributed equally as the first author
- Jin Kim
- Interdisciplinary Program in Bioengineering, Seoul National University, Seoul, 08826, Republic of Korea
- These authors contributed equally as the first author
- Su Yang
- Department of Applied Bioengineering, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, Republic of Korea
- Sung-Hye Oh
- Department of Periodontology, School of Dentistry, Seoul National University, Seoul, 03080, Republic of Korea
- Seung-Pyo Lee
- Department of Oral Anatomy and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, Republic of Korea
- Hoon Joo Yang
- Department of Oral and Maxillofacial Surgery and Dental Research Institute, School of Dentistry, Seoul National University, Seoul 03080, Republic of Korea
- Tae-Il Kim
- Department of Periodontology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, 03080, Republic of Korea
- Won-Jin Yi
- Interdisciplinary Program in Bioengineering, Seoul National University, Seoul, 08826, Republic of Korea
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, 03080, Republic of Korea
7. Garcia Marin YF, Alonso-Caneiro D, Vincent SJ, Collins MJ. Anterior segment optical coherence tomography (AS-OCT) image analysis methods and applications: A systematic review. Comput Biol Med 2022; 146:105471. [DOI: 10.1016/j.compbiomed.2022.105471]
8. You A, Kim JK, Ryu IH, Yoo TK. Application of generative adversarial networks (GAN) for ophthalmology image domains: a survey. Eye Vis (Lond) 2022; 9:6. [PMID: 35109930] [PMCID: PMC8808986] [DOI: 10.1186/s40662-022-00277-3]
Abstract
BACKGROUND Recent advances in deep learning techniques have led to improved diagnostic abilities in ophthalmology. A generative adversarial network (GAN), which consists of two competing deep neural networks, a generator and a discriminator, has demonstrated remarkable performance in image synthesis and image-to-image translation. The adoption of GANs for medical imaging is increasing for image generation and translation, but they remain unfamiliar to researchers in the field of ophthalmology. In this work, we present a literature review on the application of GANs in ophthalmology image domains to discuss important contributions and to identify potential future research directions. METHODS We performed a survey of studies using GANs published before June 2021 and introduce various applications of GANs in ophthalmology image domains. The search identified 48 peer-reviewed papers for the final review. The type of GAN used in the analysis, the task, the imaging domain, and the outcome were collected to verify the usefulness of each GAN. RESULTS In ophthalmology image domains, GANs can perform segmentation, data augmentation, denoising, domain transfer, super-resolution, post-intervention prediction, and feature extraction. GAN techniques have extended the datasets and modalities available in ophthalmology. GANs have several limitations, such as mode collapse, spatial deformities, unintended changes, and the generation of high-frequency noise and checkerboard artifacts. CONCLUSIONS The use of GANs has benefited various tasks in ophthalmology image domains. Based on our observations, the adoption of GANs in ophthalmology is still at a very early stage of clinical validation compared with deep learning classification techniques, because several problems need to be overcome for practical use. However, proper selection of the GAN technique and statistical modeling of ocular imaging will greatly improve the performance of each image analysis. Finally, this survey should enable researchers to select the appropriate GAN technique to maximize the potential of ophthalmology datasets for deep learning research.
Affiliation(s)
- Aram You
- School of Architecture, Kumoh National Institute of Technology, Gumi, Gyeongbuk, South Korea
- Jin Kuk Kim
- B&VIIT Eye Center, Seoul, South Korea
- VISUWORKS, Seoul, South Korea
- Ik Hee Ryu
- B&VIIT Eye Center, Seoul, South Korea
- VISUWORKS, Seoul, South Korea
- Tae Keun Yoo
- B&VIIT Eye Center, Seoul, South Korea
- Department of Ophthalmology, Aerospace Medical Center, Republic of Korea Air Force, 635 Danjae-ro, Namil-myeon, Cheongwon-gun, Cheongju, Chungcheongbuk-do, 363-849, South Korea
9. Dong Y, Li D, Guo Z, Liu Y, Lin P, Lv B, Lv C, Xie G, Xie L. Dissecting the Profile of Corneal Thickness With Keratoconus Progression Based on Anterior Segment Optical Coherence Tomography. Front Neurosci 2022; 15:804273. [PMID: 35173574] [PMCID: PMC8842478] [DOI: 10.3389/fnins.2021.804273]
Abstract
Purpose To characterize corneal and epithelial thickness at different stages of keratoconus (KC), using a deep learning based corneal segmentation algorithm for anterior segment optical coherence tomography (AS-OCT). Methods An AS-OCT dataset was constructed with 1,430 images from 715 eyes, including 118 normal eyes, 134 mild KC, 239 moderate KC, 153 severe KC, and 71 scarring KC. A deep learning based corneal segmentation algorithm was applied to isolate the epithelial and corneal tissues from the background. Based on the segmentation results, the thickness of the epithelial and corneal tissues was automatically measured in the central 6 mm area. One-way ANOVA and linear regression were performed in 20 equally divided zones to explore the trend of thickness changes at different locations with KC progression. The 95% confidence intervals (CI) of epithelial and corneal thickness in each zone were calculated to reveal differences in thickness distribution among groups. Results Our data showed that the deep learning based corneal segmentation algorithm achieved accurate tissue segmentation, with an error in measured thickness of less than 4 μm (approximately one image pixel) between our method and the results from clinical experts. Statistical analyses revealed significant corneal thickness differences in all zones (P < 0.05). The entire cornea grew gradually thinner with KC progression, and this trend was more pronounced around the pupil center, with a slight shift toward the temporal and inferior side. In particular, epithelial thickness decreased gradually from normal eyes to severe KC, while in scarring KC the epithelial thickness fluctuated irregularly due to the formation of corneal scarring. Conclusion Our study demonstrates that our deep learning method based on AS-OCT images can accurately delineate corneal tissues and successfully characterize epithelial and corneal thickness changes at different stages of KC progression.
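The zone-wise summary statistics described above can be sketched with a normal-approximation 95% CI per zone (mean ± 1.96 × SE). The 20 zones follow the abstract; the synthetic thickness profile (central thinning, as in KC) and sample counts are invented for illustration.

```python
# Per-zone mean thickness and 95% confidence interval, stdlib only.
import math
import random

random.seed(0)
N_ZONES = 20

def zone_ci(values):
    """Mean and 95% CI half-width for one zone's thickness samples (um)."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)   # sample variance
    half = 1.96 * math.sqrt(var / n)                       # normal approx.
    return mean, half

# Synthetic per-zone samples: a Gaussian dip centred on zone 10 mimics
# keratoconic thinning toward the centre; 30 eyes per zone.
zones = [
    [500 - 40 * math.exp(-((z - 10) / 4) ** 2) + random.gauss(0, 5)
     for _ in range(30)]
    for z in range(N_ZONES)
]
stats = [zone_ci(v) for v in zones]
```

Non-overlapping CIs between groups in a given zone are what the study uses to flag a thickness difference at that location.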
Affiliation(s)
- Yanling Dong
- Qingdao Eye Hospital of Shandong First Medical University, Qingdao, China
- State Key Laboratory Cultivation Base, Shandong Provincial Key Laboratory of Ophthalmology, Eye Institute of Shandong First Medical University, Qingdao, China
- Dongfang Li
- Qingdao Eye Hospital of Shandong First Medical University, Qingdao, China
- State Key Laboratory Cultivation Base, Shandong Provincial Key Laboratory of Ophthalmology, Eye Institute of Shandong First Medical University, Qingdao, China
- Zhen Guo
- Qingdao Eye Hospital of Shandong First Medical University, Qingdao, China
- State Key Laboratory Cultivation Base, Shandong Provincial Key Laboratory of Ophthalmology, Eye Institute of Shandong First Medical University, Qingdao, China
- Yang Liu
- Ping An Technology (Shenzhen) Co. Ltd., Shenzhen, China
- Ping Lin
- Qingdao Eye Hospital of Shandong First Medical University, Qingdao, China
- State Key Laboratory Cultivation Base, Shandong Provincial Key Laboratory of Ophthalmology, Eye Institute of Shandong First Medical University, Qingdao, China
- Bin Lv
- Ping An Technology (Shenzhen) Co. Ltd., Shenzhen, China
- Chuanfeng Lv
- Ping An Technology (Shenzhen) Co. Ltd., Shenzhen, China
- Guotong Xie
- Ping An Technology (Shenzhen) Co. Ltd., Shenzhen, China
- Ping An Health Cloud Co. Ltd., Shenzhen, China
- Ping An International Smart City Technology Co. Ltd., Shenzhen, China
- *Correspondence: Guotong Xie
- Lixin Xie
- Qingdao Eye Hospital of Shandong First Medical University, Qingdao, China
- State Key Laboratory Cultivation Base, Shandong Provincial Key Laboratory of Ophthalmology, Eye Institute of Shandong First Medical University, Qingdao, China
10. Xun S, Li D, Zhu H, Chen M, Wang J, Li J, Chen M, Wu B, Zhang H, Chai X, Jiang Z, Zhang Y, Huang P. Generative adversarial networks in medical image segmentation: A review. Comput Biol Med 2022; 140:105063. [PMID: 34864584] [DOI: 10.1016/j.compbiomed.2021.105063]
Abstract
PURPOSE Since the generative adversarial network (GAN) was introduced into the field of deep learning in 2014, it has received extensive attention from academia and industry, and many high-quality papers have been published. GANs effectively improve the accuracy of medical image segmentation because of their strong generative ability and capacity to capture data distributions. This paper introduces the origin, working principle, and extended variants of GANs, and reviews the latest development of GAN-based medical image segmentation methods. METHOD To find the papers, we searched Google Scholar and PubMed with keywords such as "segmentation", "medical image", and "GAN (or generative adversarial network)". Additional searches were performed on Semantic Scholar, Springer, arXiv, and the top conferences in computer science with the above GAN-related keywords. RESULTS We reviewed more than 120 GAN-based architectures for medical image segmentation published before September 2021. We categorized and summarized these papers according to the segmentation regions, imaging modality, and classification methods. In addition, we discussed the advantages, challenges, and future research directions of GANs in medical image segmentation. CONCLUSIONS We discussed in detail the recent papers on medical image segmentation using GANs. The application of GANs and their extended variants has effectively improved the accuracy of medical image segmentation. Gaining the acceptance of clinicians and patients, and overcoming the instability, low repeatability, and poor interpretability of GANs, will be important research directions in the future.
Affiliation(s)
- Siyi Xun
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong, 250358, China
- Dengwang Li
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong, 250358, China
- Hui Zhu
- Department of Radiation Oncology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, Shandong, China
- Min Chen
- Department of Medicine, The Second Hospital of Shandong University, Shandong University, Jinan, China
- Jianbo Wang
- Department of Radiation Oncology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan, Shandong, 250012, China
- Jie Li
- Department of Infectious Disease, Shandong Provincial Hospital Affiliated to Shandong University, Cheeloo College of Medicine, Shandong University, Jinan, Shandong, China
- Meirong Chen
- Affiliated Hospital of Shandong University of Traditional Chinese Medicine, Jinan, Shandong, China
- Bing Wu
- Laibo Biotechnology Co., Ltd., Jinan, Shandong, China
- Hua Zhang
- LinkingMed Technology Co., Ltd., Beijing, China
- Xiangfei Chai
- Huiying Medical Technology (Beijing) Co., Ltd., Beijing, China
- Zekun Jiang
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong, 250358, China
- Yan Zhang
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong, 250358, China
- Pu Huang
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong, 250358, China
11. Updates in deep learning research in ophthalmology. Clin Sci (Lond) 2021; 135:2357-2376. [PMID: 34661658] [DOI: 10.1042/cs20210207]
Abstract
Ophthalmology has been one of the early adopters of artificial intelligence (AI) within the medical field. Deep learning (DL), in particular, has garnered significant attention due to the availability of large amounts of data and digitized ocular images. Currently, AI in ophthalmology is mainly focused on improving disease classification and supporting decision-making when treating ophthalmic diseases such as diabetic retinopathy, age-related macular degeneration (AMD), glaucoma, and retinopathy of prematurity (ROP). However, most of the DL systems (DLSs) developed thus far remain in the research stage, and only a handful have achieved clinical translation. This is due to a combination of factors, including concerns over security and privacy, poor generalizability, trust and explainability issues, unfavorable end-user perceptions, and uncertain economic value. Overcoming this challenge will require a combined approach. First, emerging techniques such as federated learning (FL), generative adversarial networks (GANs), autonomous AI, and blockchain will play an increasingly critical role in enhancing privacy, collaboration, and DLS performance. Second, compliance with reporting and regulatory guidelines, such as CONSORT-AI and STARD-AI, will be required to improve transparency, minimize abuse, and ensure reproducibility. Third, frameworks will be required to obtain patient consent, perform ethical assessment, and evaluate end-user perception. Finally, proper health economic assessment (HEA) must be performed to provide financial visibility during the early phases of DLS development; this is necessary to manage resources prudently and guide the development of DLSs.
12
Zéboulon P, Ghazal W, Gatinel D. Corneal Edema Visualization With Optical Coherence Tomography Using Deep Learning: Proof of Concept. Cornea 2021; 40:1267-1275. [PMID: 33410639] [DOI: 10.1097/ico.0000000000002640]
Abstract
PURPOSE: Optical coherence tomography (OCT) is essential for the diagnosis and follow-up of corneal edema, but assessment can be challenging in minimal or localized edema. The objective was to develop and validate a novel automated tool to detect and visualize corneal edema with OCT.
METHODS: We trained a convolutional neural network to classify each pixel in corneal OCT images as "normal" or "edema" and to generate colored heat maps of the result. The development set included 199 OCT images of normal and edematous corneas. We validated the model's performance on 607 images of normal and edematous corneas of various conditions. The main outcome measure was the edema fraction (EF), defined as the ratio between the number of pixels labeled as edema and those representing the cornea for each scan. Overall accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve were determined to evaluate the model's performance.
RESULTS: Mean EF was 0.0087 ± 0.01 in the normal scans and 0.805 ± 0.26 in the edema scans (P < 0.0001). The area under the receiver operating characteristic curve for EF in the diagnosis of corneal edema in individual scans was 0.994. The optimal threshold for distinguishing normal from edematous corneas was 6.8%, with an accuracy of 98.7%, sensitivity of 96.4%, and specificity of 100%.
CONCLUSIONS: The model accurately detected corneal edema and distinguished between normal and edematous cornea OCT scans while providing colored heat maps of edema presence.
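The edema-fraction metric described in this abstract is straightforward to reproduce. The sketch below is a minimal illustration, not the authors' code: the CNN is not reproduced, and the label map is a fabricated example. It computes EF from a hypothetical per-pixel label map and applies the 6.8% cut-off reported above.

```python
import numpy as np

# Hypothetical per-pixel labels for one OCT B-scan:
# 0 = background, 1 = normal cornea, 2 = edema.
# In the paper a CNN produces this map; here we fabricate a small example.
label_map = np.zeros((64, 128), dtype=np.uint8)
label_map[20:44, :] = 1        # corneal band
label_map[25:30, 40:90] = 2    # a localized patch labeled "edema"

def edema_fraction(labels):
    """EF = edema pixels / all corneal pixels (normal + edema)."""
    edema = np.count_nonzero(labels == 2)
    cornea = np.count_nonzero(labels > 0)
    return edema / cornea if cornea else 0.0

EF_THRESHOLD = 0.068  # 6.8% optimal cut-off reported in the abstract

ef = edema_fraction(label_map)
print(f"EF = {ef:.3f} -> {'edema' if ef >= EF_THRESHOLD else 'normal'}")
```

Because EF is a single scalar per scan, a receiver operating characteristic curve over a validation set reduces to sweeping this threshold.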
Affiliation(s)
- Pierre Zéboulon, Department of Ophthalmology, Rothschild Foundation, Paris, France
- Wassim Ghazal, Department of Ophthalmology, Rothschild Foundation, Paris, France
- Damien Gatinel, Department of Ophthalmology, Rothschild Foundation, Paris, France; CEROC (Center of Expertise and Research in Optics for Clinicians)

13
Liu H, Cao S, Ling Y, Gan Y. Inpainting for Saturation Artifacts in Optical Coherence Tomography Using Dictionary-Based Sparse Representation. IEEE Photonics J 2021; 13:3900110. [PMID: 33927799] [PMCID: PMC8081289] [DOI: 10.1109/jphot.2021.3056574]
Abstract
Saturation artifacts in optical coherence tomography (OCT) occur when the received signal exceeds the dynamic range of the spectrometer. These artifacts appear as streaking patterns that can degrade OCT image quality and lead to inaccurate medical diagnosis. In this paper, we automatically localize saturation artifacts and propose an artifact correction method based on inpainting, adopting a dictionary-based sparse representation scheme. Experimental results demonstrate that, for both synthetic and real artifacts, our method outperforms the interpolation and Euler's elastica methods both qualitatively and quantitatively. A generic dictionary offers similar image quality when applied to tissue samples excluded from dictionary training. This method may therefore be widely applicable for localizing and inpainting saturation artifacts in a variety of OCT images.
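The core idea of sparse-representation inpainting can be sketched in a few lines: code the observed samples against a dictionary, then reconstruct the full signal, filling in the masked (saturated) region. This is a numpy-only illustration under simplifying assumptions: the paper learns its dictionary from OCT patches and works on 2-D images, whereas here a fixed DCT-like dictionary and a 1-D toy A-line are used, and all names and sizes are hypothetical.

```python
import numpy as np

def dct_dictionary(n, k):
    """Overcomplete DCT-like dictionary: k unit-norm atoms of length n."""
    t = np.arange(n)
    D = np.cos(np.pi * np.outer(t + 0.5, np.arange(k)) / k)
    return D / np.linalg.norm(D, axis=0)

def masked_omp(D, y, mask, sparsity=5):
    """Greedy sparse coding (orthogonal matching pursuit) that fits only the
    observed samples (mask=True), then reconstructs the full signal."""
    Dm = D[mask]
    residual = y[mask].astype(float)
    idx = []
    sol = np.zeros(0)
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(Dm.T @ residual)))   # best-matching atom
        if j not in idx:
            idx.append(j)
        sol, *_ = np.linalg.lstsq(Dm[:, idx], y[mask], rcond=None)
        residual = y[mask] - Dm[:, idx] @ sol
    coef = np.zeros(D.shape[1])
    coef[idx] = sol
    return D @ coef   # full reconstruction, including the masked region

# Toy A-line that is exactly sparse in the dictionary, with a simulated
# saturated (unusable) streak at samples 10..15.
n, k = 32, 48
D = dct_dictionary(n, k)
y = 1.0 * D[:, 3] - 0.7 * D[:, 12] + 0.5 * D[:, 30]
mask = np.ones(n, dtype=bool)
mask[10:16] = False               # streak region to inpaint
recon = masked_omp(D, y, mask)
```

Because the toy signal is exactly sparse in the dictionary, the masked region is recovered almost perfectly; real OCT data is only approximately sparse, which is why the dictionary is learned in the paper.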
Affiliation(s)
- Hongshan Liu, Department of Electrical and Computer Engineering, The University of Alabama, Tuscaloosa, AL 35487 USA
- Shengting Cao, Department of Electrical and Computer Engineering, The University of Alabama, Tuscaloosa, AL 35487 USA
- Yuye Ling, John Hopcroft Center for Computer Science, Shanghai Jiao Tong University, Shanghai 200240, China
- Yu Gan, Department of Electrical and Computer Engineering, The University of Alabama, Tuscaloosa, AL 35487 USA

14
Pradhan P, Meyer T, Vieth M, Stallmach A, Waldner M, Schmitt M, Popp J, Bocklitz T. Computational tissue staining of non-linear multimodal imaging using supervised and unsupervised deep learning. Biomed Opt Express 2021; 12:2280-2298. [PMID: 33996229] [PMCID: PMC8086483] [DOI: 10.1364/boe.415962]
Abstract
Hematoxylin and eosin (H&E) staining is the 'gold-standard' method in histopathology. However, standard H&E staining of high-quality tissue sections requires long sample preparation times, including sample embedding, which restricts its application for 'real-time' disease diagnosis. For this reason, this work proposes a label-free alternative: non-linear multimodal (NLM) imaging, which combines three non-linear optical modalities, namely coherent anti-Stokes Raman scattering, two-photon excitation fluorescence, and second-harmonic generation. To correlate the information in NLM images with H&E images, this work proposes computational staining of NLM images using deep learning models in a supervised and an unsupervised approach. The supervised approach uses conditional generative adversarial networks (CGANs), and the unsupervised approach uses cycle conditional generative adversarial networks (cycle CGANs). Both models generate pseudo-H&E images, which are quantitatively analyzed based on mean squared error, the structural similarity index, and a color shading similarity index. The means of the three metrics calculated for the computationally generated H&E images indicate strong performance. Thus, CGAN and cycle CGAN models for computational staining could benefit diagnostic applications without requiring a laboratory-based staining procedure. To the authors' best knowledge, this is the first time NLM images have been computationally stained to H&E images using GANs in an unsupervised manner.
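Two of the three evaluation metrics mentioned above are easy to sketch. The snippet below is an illustration only: it implements mean squared error and a simplified single-window SSIM over global statistics (standard SSIM averages over sliding windows), and it omits the paper's color shading similarity index. The `real`/`fake` arrays are fabricated stand-ins for a real H&E image and its generated pseudo-H&E counterpart.

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images scaled to [0, 1]."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.mean((a - b) ** 2))

def global_ssim(a, b, data_range=1.0):
    """Simplified SSIM using global image statistics (no sliding window)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    c1 = (0.01 * data_range) ** 2   # stabilizing constants from the
    c2 = (0.03 * data_range) ** 2   # standard SSIM formulation
    return float(((2 * mu_a * mu_b + c1) * (2 * cov + c2)) /
                 ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2)))

# Identical images score MSE 0 and SSIM 1; a corrupted copy scores worse.
rng = np.random.default_rng(0)
real = rng.random((64, 64))
fake = np.clip(real + rng.normal(0.0, 0.1, real.shape), 0.0, 1.0)
```

In practice a windowed SSIM (e.g. from an image-processing library) would be preferred; the global version here only conveys the structure of the formula.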
Affiliation(s)
- Pranita Pradhan, Institute of Physical Chemistry and Abbe Center of Photonics, Friedrich-Schiller-University, Jena, Germany; Leibniz Institute of Photonic Technology, Member of Leibniz Health Technologies, Jena, Germany
- Tobias Meyer, Leibniz Institute of Photonic Technology, Member of Leibniz Health Technologies, Jena, Germany
- Michael Vieth, Institute of Pathology, Klinikum Bayreuth, Bayreuth, Germany
- Andreas Stallmach, Department of Internal Medicine IV (Gastroenterology, Hepatology, and Infectious Diseases), Jena University Hospital, Jena, Germany
- Maximilian Waldner, Erlangen Graduate School in Advanced Optical Technologies (SAOT) and Medical Department 1, Friedrich-Alexander University of Erlangen-Nuremberg, Erlangen, Germany
- Michael Schmitt, Institute of Physical Chemistry and Abbe Center of Photonics, Friedrich-Schiller-University, Jena, Germany
- Juergen Popp, Institute of Physical Chemistry and Abbe Center of Photonics, Friedrich-Schiller-University, Jena, Germany; Leibniz Institute of Photonic Technology, Member of Leibniz Health Technologies, Jena, Germany
- Thomas Bocklitz, Institute of Physical Chemistry and Abbe Center of Photonics, Friedrich-Schiller-University, Jena, Germany; Leibniz Institute of Photonic Technology, Member of Leibniz Health Technologies, Jena, Germany

15
Dong Y, Li D, Guo Z, Liu Y, Lin P, Lv B, Lv C, Xie G, Xie L. Dissecting the Profile of Corneal Thickness With Keratoconus Progression Based on Anterior Segment Optical Coherence Tomography. Front Neurosci 2021; 15:804273. [PMID: 35173574] [PMCID: PMC8842478] [DOI: 10.3389/fnins.2021.804273]
Abstract
PURPOSE: To characterize corneal and epithelial thickness at different stages of keratoconus (KC), using a deep learning based corneal segmentation algorithm for anterior segment optical coherence tomography (AS-OCT).
METHODS: An AS-OCT dataset was constructed with 1,430 images from 715 eyes, comprising 118 normal eyes and 134 mild, 239 moderate, 153 severe, and 71 scarring KC eyes. A deep learning based corneal segmentation algorithm was applied to isolate the epithelial and corneal tissues from the background. Based on the segmentation results, epithelial and corneal thickness was automatically measured in the central 6 mm area. One-way ANOVA and linear regression were performed in 20 equally divided zones to explore how thickness changes at different locations with KC progression. The 95% confidence intervals (CI) of epithelial and corneal thickness in each zone were calculated to reveal differences in thickness distribution among the groups.
RESULTS: The deep learning based corneal segmentation algorithm achieved accurate tissue segmentation; measured thickness differed from clinical experts' results by less than 4 μm, approximately one image pixel. Statistical analyses revealed significant corneal thickness differences in all zones (P < 0.05). The cornea grew gradually thinner with KC progression, and the trend was most pronounced around the pupil center, with a slight shift toward the temporal and inferior side. In particular, epithelial thickness decreased gradually from normal eyes to severe KC. Owing to scar formation, epithelial thickness fluctuated irregularly in scarring KC.
CONCLUSION: Our deep learning method based on AS-OCT images accurately delineated the corneal tissues and successfully characterized epithelial and corneal thickness changes at different stages of KC progression.
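The zonal analysis described in this abstract (mean thickness in 20 equal zones across the central 6 mm, plus a 95% CI) can be sketched with numpy alone. This is an illustrative sketch, not the authors' pipeline: the segmentation network and the one-way ANOVA step are omitted, the CI uses a normal approximation, and the thickness profiles below are synthetic stand-ins.

```python
import numpy as np

def zonal_means(x_mm, thickness_um, n_zones=20, diameter_mm=6.0):
    """Mean thickness in n_zones equal-width zones across the central area."""
    edges = np.linspace(-diameter_mm / 2, diameter_mm / 2, n_zones + 1)
    zone = np.clip(np.digitize(x_mm, edges) - 1, 0, n_zones - 1)
    return np.array([thickness_um[zone == z].mean() for z in range(n_zones)])

def ci95(samples):
    """Normal-approximation 95% confidence interval for the mean."""
    s = np.asarray(samples, float)
    half = 1.96 * s.std(ddof=1) / np.sqrt(s.size)
    return s.mean() - half, s.mean() + half

# Synthetic profiles: a flat 'normal' cornea and one with central thinning,
# as a stand-in for segmentation-derived thickness measurements.
x = np.linspace(-3.0, 3.0, 400)                 # positions across 6 mm
normal = np.full_like(x, 540.0)                 # flat 540 um profile
kc = 540.0 - 120.0 * np.exp(-(x / 1.2) ** 2)    # thinner near the center

z_normal = zonal_means(x, normal)
z_kc = zonal_means(x, kc)
```

Group comparisons per zone (the ANOVA in the paper) would then be run on the per-eye zonal means collected across the normal, mild, moderate, severe, and scarring groups.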
Affiliation(s)
- Yanling Dong, Qingdao Eye Hospital of Shandong First Medical University, Qingdao, China; State Key Laboratory Cultivation Base, Shandong Provincial Key Laboratory of Ophthalmology, Eye Institute of Shandong First Medical University, Qingdao, China
- Dongfang Li, Qingdao Eye Hospital of Shandong First Medical University, Qingdao, China; State Key Laboratory Cultivation Base, Shandong Provincial Key Laboratory of Ophthalmology, Eye Institute of Shandong First Medical University, Qingdao, China
- Zhen Guo, Qingdao Eye Hospital of Shandong First Medical University, Qingdao, China; State Key Laboratory Cultivation Base, Shandong Provincial Key Laboratory of Ophthalmology, Eye Institute of Shandong First Medical University, Qingdao, China
- Yang Liu, Ping An Technology (Shenzhen) Co. Ltd., Shenzhen, China
- Ping Lin, Qingdao Eye Hospital of Shandong First Medical University, Qingdao, China; State Key Laboratory Cultivation Base, Shandong Provincial Key Laboratory of Ophthalmology, Eye Institute of Shandong First Medical University, Qingdao, China
- Bin Lv, Ping An Technology (Shenzhen) Co. Ltd., Shenzhen, China
- Chuanfeng Lv, Ping An Technology (Shenzhen) Co. Ltd., Shenzhen, China
- Guotong Xie (correspondence), Ping An Technology (Shenzhen) Co. Ltd., Shenzhen, China; Ping An Health Cloud Co. Ltd., Shenzhen, China; Ping An International Smart City Technology Co. Ltd., Shenzhen, China
- Lixin Xie, Qingdao Eye Hospital of Shandong First Medical University, Qingdao, China; State Key Laboratory Cultivation Base, Shandong Provincial Key Laboratory of Ophthalmology, Eye Institute of Shandong First Medical University, Qingdao, China

16
Liu TYA, Farsiu S, Ting DS. Generative adversarial networks to predict treatment response for neovascular age-related macular degeneration: interesting, but is it useful? Br J Ophthalmol 2020; 104:1629-1630. [PMID: 32862129] [DOI: 10.1136/bjophthalmol-2020-316300]
Affiliation(s)
- T Y Alvin Liu, Johns Hopkins Wilmer Eye Institute, Baltimore, Maryland, USA
- Sina Farsiu, Ophthalmology, Duke University, Durham, North Carolina, USA
- Daniel S Ting, Vitreo-Retinal Department, Singapore National Eye Center, Singapore, Singapore