1
Poddar R, Shukla V, Alam Z, Mohan M. Automatic segmentation of layers in chorio-retinal complex using Graph-based method for ultra-speed 1.7 MHz wide field swept source FDML optical coherence tomography. Med Biol Eng Comput 2024; 62:1375-1393. [PMID: 38191981] [DOI: 10.1007/s11517-023-03007-6]
Abstract
The posterior segment of the human eye contains two distinct microstructure and vasculature network systems, the retina and the choroid. We present a single segmentation framework for delineating all of the layers in the chorio-retinal complex of the human eye using optical coherence tomography (OCT) images. The automatic program is based on graph theory and segments seven retinal layers as well as the choroid-scleral interface. Graph theory was used to compute the probability matrix and the subsequent boundaries of the different layers. The program was also applied to segment angiographic maps of different chorio-retinal layers using "segmentation matrices." The method was tested and validated on OCT images from six normal human eyes as well as eyes with non-exudative age-related macular degeneration (AMD). The thickness of the microstructure and microvasculature of the different chorio-retinal layers was also generated and compared. Good efficiency in terms of processing time, sensitivity, and accuracy was observed relative to manual segmentation and other existing methods. The proposed method automatically segments whole OCT images of the chorio-retinal complex and generates augmented probability maps across the OCT volume dataset. We also evaluated the segmentation results using quantitative metrics such as the Dice coefficient and the Hausdorff distance; the method achieves a mean Dice similarity coefficient (DSC) of 0.82 (range, 0.816-0.864) for the RPE and CSI layers.
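The Dice similarity coefficient and Hausdorff distance used for evaluation here have compact definitions; the following is a minimal NumPy sketch for binary masks and boundary point sets (an illustration, not the authors' implementation):

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity coefficient (DSC) between two binary masks."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff_distance(pts_a, pts_b):
    """Symmetric Hausdorff distance between two point sets of shape (N, 2)."""
    pts_a = np.asarray(pts_a, dtype=float)
    pts_b = np.asarray(pts_b, dtype=float)
    # Pairwise Euclidean distances, then the worst-case nearest-neighbour gap.
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))
```

For layer boundaries, the point sets would be the (row, column) coordinates of the predicted and manually traced boundary pixels.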
Affiliation(s)
- Raju Poddar
- Biophotonics Lab, Department of Bioengineering & Biotechnology, Birla Institute of Technology-Mesra, Ranchi, JH, 835 215, India
- Vinita Shukla
- Biophotonics Lab, Department of Bioengineering & Biotechnology, Birla Institute of Technology-Mesra, Ranchi, JH, 835 215, India
- Zoya Alam
- Biophotonics Lab, Department of Bioengineering & Biotechnology, Birla Institute of Technology-Mesra, Ranchi, JH, 835 215, India
- Muktesh Mohan
- Biophotonics Lab, Department of Bioengineering & Biotechnology, Birla Institute of Technology-Mesra, Ranchi, JH, 835 215, India
2
Hasan MM, Phu J, Sowmya A, Meijering E, Kalloniatis M. Artificial intelligence in the diagnosis of glaucoma and neurodegenerative diseases. Clin Exp Optom 2024; 107:130-146. [PMID: 37674264] [DOI: 10.1080/08164622.2023.2235346]
Abstract
Artificial intelligence is a rapidly expanding field within computer science that encompasses the emulation of human intelligence by machines. Machine learning and deep learning, the two primary data-driven pattern analysis approaches under the umbrella of artificial intelligence, have created considerable interest in the last few decades. The evolution of technology has resulted in a substantial amount of artificial intelligence research on ophthalmic and neurodegenerative disease diagnosis using retinal images. Various artificial intelligence-based techniques have been used for diagnostic purposes, including traditional machine learning, deep learning, and their combinations. Presented here is a review of the literature covering the last 10 years on this topic, discussing the use of artificial intelligence in analysing data from different modalities and their combinations for the diagnosis of glaucoma and neurodegenerative diseases. The performance of published artificial intelligence methods varies due to several factors, yet the results suggest that such methods can potentially facilitate clinical diagnosis. Generally, the accuracy of artificial intelligence-assisted diagnosis ranges from 67% to 98%, and the area under the sensitivity-specificity curve (AUC) ranges from 0.71 to 0.98, which outperforms typical human performance of 71.5% accuracy and 0.86 area under the curve. This indicates that artificial intelligence-based tools can provide clinicians with useful information that would assist in improved diagnosis. The review suggests that there is room for improvement of existing artificial intelligence-based models using retinal imaging modalities before they are incorporated into clinical practice.
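The area-under-the-curve figures cited across these studies have a simple rank-based interpretation: the AUC equals the probability that a randomly chosen diseased case receives a higher classifier score than a randomly chosen healthy one. A minimal sketch (illustrative only, not any study's implementation):

```python
def auc_from_scores(pos_scores, neg_scores):
    """ROC AUC computed as the Mann-Whitney statistic: the fraction of
    (diseased, healthy) pairs where the diseased case scores higher,
    with ties counted as half a win."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores
        for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))
```

This pairwise form is exact but quadratic in the number of cases; production metrics libraries compute the same quantity from sorted ranks.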
Affiliation(s)
- Md Mahmudul Hasan
- School of Computer Science and Engineering, University of New South Wales, Kensington, New South Wales, Australia
- Jack Phu
- School of Optometry and Vision Science, University of New South Wales, Kensington, Australia
- Centre for Eye Health, University of New South Wales, Sydney, New South Wales, Australia
- School of Medicine (Optometry), Deakin University, Waurn Ponds, Victoria, Australia
- Arcot Sowmya
- School of Computer Science and Engineering, University of New South Wales, Kensington, New South Wales, Australia
- Erik Meijering
- School of Computer Science and Engineering, University of New South Wales, Kensington, New South Wales, Australia
- Michael Kalloniatis
- School of Optometry and Vision Science, University of New South Wales, Kensington, Australia
- School of Medicine (Optometry), Deakin University, Waurn Ponds, Victoria, Australia
3
Cao G, Wu Y, Peng Z, Zhou Z, Dai C. Self-attention CNN for retinal layer segmentation in OCT. Biomed Opt Express 2024; 15:1605-1617. [PMID: 38495698] [PMCID: PMC10942697] [DOI: 10.1364/boe.510464]
Abstract
The structure of the retinal layers provides valuable diagnostic information for many ophthalmic diseases. Optical coherence tomography (OCT) obtains cross-sectional images of the retina that reveal information about the retinal layers. U-net based approaches are prominent among retinal layer segmentation methods; they capture local characteristics well but are poor at modeling the long-distance dependencies needed for contextual information. Furthermore, the morphology of retinal layers in diseased eyes is more complex, which makes the segmentation task significantly more challenging. We propose a U-shaped network combining an encoder-decoder architecture with self-attention mechanisms. In response to the characteristics of retinal OCT cross-sectional images, a self-attention module operating in the vertical direction is added at the bottom of the U-shaped network, and attention mechanisms are also added to the skip connections and up-sampling to enhance essential features. In this method, the transformer's self-attention mechanism provides a global receptive field, supplying the context information that convolutions miss, while the convolutional neural network efficiently extracts the local details that the transformer ignores. Experimental results showed that our method is accurate and outperforms other retinal layer segmentation methods, with average Dice scores of 0.871 and 0.820 on two public retinal OCT image datasets. By incorporating the transformer's self-attention mechanism into a U-shaped network, the proposed method performs retinal layer segmentation better, which is helpful for ophthalmic disease diagnosis.
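The vertical self-attention idea can be sketched in a few lines: each A-scan column of a feature map attends over its own depth positions, so every pixel can draw context from the full retinal depth. The sketch below uses identity query/key/value projections for brevity; a real module would learn these projections, and the paper's architecture is more elaborate:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def vertical_self_attention(feat):
    """Scaled dot-product self-attention along the vertical (depth) axis of a
    feature map of shape (H, W, C): each column attends over its H positions.
    Identity Q/K/V projections are used here purely for illustration."""
    H, W, C = feat.shape
    out = np.empty_like(feat, dtype=float)
    for w in range(W):
        x = feat[:, w, :].astype(float)                # (H, C): one A-scan column
        attn = softmax(x @ x.T / np.sqrt(C), axis=-1)  # (H, H) attention weights
        out[:, w, :] = attn @ x                        # weighted mix over depth
    return out
```

Each output row is a convex combination of all depth positions in the same column, which is exactly the "global field of perception" the abstract refers to.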
Affiliation(s)
- Guogang Cao
- Shanghai Institute of Technology, Shanghai 201418, China
- Yan Wu
- Shanghai Institute of Technology, Shanghai 201418, China
- Zeyu Peng
- Shanghai Institute of Technology, Shanghai 201418, China
- Zhilin Zhou
- Shanghai Institute of Technology, Shanghai 201418, China
- Cuixia Dai
- Shanghai Institute of Technology, Shanghai 201418, China
4
Liu K, Zhang J. Cost-efficient and glaucoma-specifical model by exploiting normal OCT images with knowledge transfer learning. Biomed Opt Express 2023; 14:6151-6171. [PMID: 38420316] [PMCID: PMC10898582] [DOI: 10.1364/boe.500917]
Abstract
Monitoring the progression of glaucoma is crucial for preventing further vision loss. However, deep learning-based models emphasize early glaucoma detection, leaving a significant performance gap on glaucoma-confirmed subjects. Moreover, developing a fully-supervised model suffers from insufficient annotated glaucoma datasets. Currently, plentiful low-cost normal OCT images with pixel-level annotations can serve as valuable resources, but effectively transferring shared knowledge from normal datasets is a challenge. To alleviate this issue, we propose a knowledge transfer learning model that exploits shared knowledge from low-cost, sufficiently annotated normal OCT images by explicitly establishing the relationship between the normal domain and the glaucoma domain. Specifically, we directly introduce glaucoma domain information into the training stage through a three-step adversarial-based strategy. Additionally, our model exploits shared features at different levels in both the output space and the encoding space, with a suitable output size, through a multi-level strategy. We have collected and collated a dataset called the TongRen OCT glaucoma dataset, including pixel-level annotated glaucoma OCT images and diagnostic information. Results on this dataset demonstrate that our model outperforms the unsupervised model and the mixed training strategy, with mIoU increases of 5.28% and 5.77%, respectively. Moreover, it narrows the gap to the fully-supervised model to only 1.01% mIoU. Our model can therefore serve as a valuable tool for extracting glaucoma-related features, facilitating the tracking of glaucoma progression.
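The mIoU figures above are the mean intersection-over-union across segmentation classes. A minimal sketch of the metric (not the paper's code; classes absent from both prediction and ground truth are skipped, one common convention):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union over class labels 0..num_classes-1.
    Classes absent from both masks are excluded from the mean."""
    pred = np.asarray(pred)
    target = np.asarray(target)
    ious = []
    for c in range(num_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class appears in neither mask
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))
```

A reported "increase of 5.28% on mIoU" is then a difference of 0.0528 in this quantity between two models evaluated on the same labels.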
Affiliation(s)
- Kai Liu
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China
- Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, 100083, China
- Department of Computer Science, City University of Hong Kong, Hong Kong, 98121, China
- Jicong Zhang
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China
- Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, 100083, China
- Hefei Innovation Research Institute, Beihang University, Hefei, 230012, China
5
Pucchio A, Krance S, Pur DR, Bassi A, Miranda R, Felfeli T. The role of artificial intelligence in analysis of biofluid markers for diagnosis and management of glaucoma: A systematic review. Eur J Ophthalmol 2023; 33:1816-1833. [PMID: 36426575] [PMCID: PMC10469503] [DOI: 10.1177/11206721221140948]
Abstract
PURPOSE This review focuses on the utility of artificial intelligence (AI) in the analysis of biofluid markers in glaucoma. We detail the accuracy and validity of AI in the exploration of biomarkers to provide insight into glaucoma pathogenesis. METHODS A comprehensive search was conducted across five electronic databases including Embase, Medline, Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, and Web of Science. Studies pertaining to biofluid marker analysis using AI or bioinformatics in glaucoma were included. Identified studies were critically appraised and assessed for risk of bias using the Joanna Briggs Institute Critical Appraisal tools. RESULTS A total of 10,258 studies were screened and 39 met the inclusion criteria, including 23 cross-sectional studies (59%), nine prospective cohort studies (23%), six retrospective cohort studies (15%), and one case-control study (3%). Primary open angle glaucoma (POAG) was the most commonly studied subtype (55% of included studies). Twenty-four studies examined disease characteristics, 10 explored treatment decisions, and 5 provided diagnostic clarification. While studies examined entire metabolomic or proteomic profiles to determine changes in POAG, the data were heterogeneous, with over 175 unique, differentially expressed biomarkers reported. Discriminant analysis and artificial neural network predictive models displayed strong ability to differentiate glaucoma patients from controls, although these tools remain untested in a clinical context. CONCLUSION The use of AI models could inform glaucoma diagnosis with high sensitivity and specificity. While insight into differentially expressed biomarkers is valuable for pathogenic exploration, no clear pathogenic mechanism in glaucoma has emerged.
Affiliation(s)
- Aidan Pucchio
- School of Medicine, Queen's University, Kingston, Ontario, Canada
- Saffire Krance
- Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
- Daiana R Pur
- Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
- Arshpreet Bassi
- Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
- Rafael Miranda
- Toronto Health Economics and Technology Assessment Collaborative, University of Toronto, Toronto, Ontario, Canada
- The Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, Ontario, Canada
- Tina Felfeli
- Toronto Health Economics and Technology Assessment Collaborative, University of Toronto, Toronto, Ontario, Canada
- Department of Ophthalmology and Visual Sciences, University of Toronto, Toronto, Ontario, Canada
- The Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, Ontario, Canada
6
He Y, Carass A, Liu Y, Calabresi PA, Saidha S, Prince JL. Longitudinal deep network for consistent OCT layer segmentation. Biomed Opt Express 2023; 14:1874-1893. [PMID: 37206119] [PMCID: PMC10191669] [DOI: 10.1364/boe.487518]
Abstract
Retinal layer thickness is an important biomarker for people with multiple sclerosis (PwMS). In clinical practice, retinal layer thickness changes in optical coherence tomography (OCT) are widely used for monitoring multiple sclerosis (MS) progression. Recent developments in automated retinal layer segmentation algorithms allow cohort-level retinal thinning to be observed in large studies of PwMS. However, the variability in these results makes it difficult to identify patient-level trends, which prevents patient-specific disease monitoring and treatment planning using OCT. Deep learning based retinal layer segmentation algorithms have achieved state-of-the-art accuracy, but segmentation is performed on each individual scan without utilizing longitudinal information, which can be important for reducing segmentation error and revealing subtle changes in retinal layers. In this paper, we propose a longitudinal OCT segmentation network that achieves more accurate and consistent layer thickness measurements for PwMS.
Affiliation(s)
- Yufan He
- Dept. of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218, USA
- Aaron Carass
- Dept. of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218, USA
- Yihao Liu
- Dept. of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218, USA
- Peter A. Calabresi
- Dept. of Neurology, The Johns Hopkins University School of Medicine, MD 21287, USA
- Shiv Saidha
- Dept. of Neurology, The Johns Hopkins University School of Medicine, MD 21287, USA
- Jerry L. Prince
- Dept. of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218, USA
7
Lou S, Chen X, Wang Y, Cai H, Chen S, Liu L. Multiscale joint segmentation method for retinal optical coherence tomography images using a bidirectional wave algorithm and improved graph theory. Opt Express 2023; 31:6862-6876. [PMID: 36823933] [DOI: 10.1364/oe.472154]
Abstract
Morphology and functional metrics of the retinal layers are important biomarkers for many human ophthalmic diseases, and automatic, accurate segmentation of the retinal layers is crucial for disease diagnosis and research. To improve retinal layer segmentation performance, a multiscale joint segmentation framework for retinal optical coherence tomography (OCT) images based on a bidirectional wave algorithm and improved graph theory is proposed. In this framework, the bidirectional wave algorithm segments edge information in multiscale images, and the improved graph theory refines that edge information globally, realizing automatic and accurate segmentation of eight retinal layer boundaries. The framework was tested on two public datasets and two OCT imaging systems. The results show that, compared with other state-of-the-art methods, it requires no data pre-training or parameter pre-adjustment on different datasets and can achieve sub-pixel retinal layer segmentation on a low-configuration computer.
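The global boundary search that graph-theoretic layer segmentation relies on can be illustrated with a toy dynamic-programming version: trace the minimum-cost left-to-right path through a cost image (e.g. an inverted vertical-gradient map), allowing one pixel of vertical movement per column. This is a simplified stand-in for such methods, not the paper's implementation:

```python
import numpy as np

def trace_boundary(cost):
    """Minimum-cost left-to-right path through a (H, W) cost image,
    moving at most one pixel up or down per column. Returns one row
    index per column (the traced layer boundary)."""
    H, W = cost.shape
    acc = cost.astype(float).copy()
    for w in range(1, W):
        prev = acc[:, w - 1]
        # Cheapest of the three reachable predecessors for every row.
        best = np.minimum(np.minimum(np.roll(prev, 1), prev), np.roll(prev, -1))
        best[0] = min(prev[0], prev[1])      # undo np.roll wrap-around at edges
        best[-1] = min(prev[-1], prev[-2])
        acc[:, w] += best
    rows = [int(acc[:, -1].argmin())]        # backtrack from cheapest endpoint
    for w in range(W - 2, -1, -1):
        r = rows[-1]
        lo = max(0, r - 1)
        rows.append(lo + int(acc[lo:r + 2, w].argmin()))
    return rows[::-1]
```

Full graph-based formulations replace this column-ordered recursion with a shortest path (e.g. Dijkstra) on a general adjacency graph, which permits richer connectivity and multi-boundary constraints.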
8
Yousefi S. Clinical Applications of Artificial Intelligence in Glaucoma. J Ophthalmic Vis Res 2023; 18:97-112. [PMID: 36937202] [PMCID: PMC10020779] [DOI: 10.18502/jovr.v18i1.12730]
Abstract
Ophthalmology is one of the major imaging-intensive fields of medicine and thus has potential for extensive applications of artificial intelligence (AI) to advance diagnosis, drug efficacy, and other treatment-related aspects of ocular disease. AI has made impressive progress in ophthalmology within the past few years and two autonomous AI-enabled systems have received US regulatory approvals for autonomously screening for mid-level or advanced diabetic retinopathy and macular edema. While no autonomous AI-enabled system for glaucoma screening has yet received US regulatory approval, numerous assistive AI-enabled software tools are already employed in commercialized instruments for quantifying retinal images and visual fields to augment glaucoma research and clinical practice. In this literature review (non-systematic), we provide an overview of AI applications in glaucoma, and highlight some limitations and considerations for AI integration and adoption into clinical practice.
Affiliation(s)
- Siamak Yousefi
- Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, TN, USA
- Department of Genetics, Genomics, and Informatics, University of Tennessee Health Science Center, Memphis, TN, USA
9
Song D, Li F, Li C, Xiong J, He J, Zhang X, Qiao Y. Asynchronous feature regularization and cross-modal distillation for OCT based glaucoma diagnosis. Comput Biol Med 2022; 151:106283. [PMID: 36442272] [DOI: 10.1016/j.compbiomed.2022.106283]
Abstract
Glaucoma has become a major cause of vision loss. Early-stage diagnosis of glaucoma is critical for treatment planning to avoid irreversible vision damage. Meanwhile, interpreting the rapidly accumulating medical data from ophthalmic exams is cumbersome and resource-intensive, so automated methods are highly desired to assist ophthalmologists in achieving fast and accurate glaucoma diagnosis. Deep learning has achieved great success in diagnosing glaucoma by analyzing data from different kinds of tests, such as peripapillary optical coherence tomography (OCT) and visual field (VF) testing. Nevertheless, applying these models in clinical practice remains challenging because of various limiting factors: OCT-only models show worse glaucoma diagnosis performance than OCT&VF based models, whereas VF testing is time-consuming and highly variable, which restricts the wide deployment of OCT&VF models. To this end, we develop a novel deep learning framework that leverages the OCT&VF model to enhance the performance of the OCT model. To transfer the complementary knowledge from the structural and functional assessments to the OCT model, a cross-modal knowledge transfer method is designed by integrating a distillation loss and a proposed asynchronous feature regularization (AFR) module. We demonstrate the effectiveness of the proposed method for glaucoma diagnosis by training on a public OCT&VF dataset and evaluating on an external OCT dataset. Our final model with only OCT inputs achieves an accuracy of 87.4% (a 3.1% absolute improvement) and an AUC of 92.3%, on par with the OCT&VF joint model. Moreover, results on the external dataset indicate the effectiveness and generalization capability of our model.
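The distillation loss that cross-modal knowledge transfer builds on is, in its generic form, a temperature-softened KL divergence between teacher and student outputs. The sketch below shows that generic formulation only; the paper's exact loss and its AFR module are more elaborate:

```python
import numpy as np

def softened(logits, T):
    """Temperature-softened softmax of a logit vector."""
    z = np.asarray(logits, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Generic knowledge-distillation loss: KL(teacher || student) on
    temperature-softened distributions, scaled by T^2 so gradient
    magnitudes stay comparable across temperatures."""
    p = softened(teacher_logits, T)
    q = softened(student_logits, T)
    return float(T * T * np.sum(p * np.log(p / q)))
```

Here the "teacher" would be the OCT&VF joint model and the "student" the OCT-only model; the loss is zero exactly when their softened output distributions match.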
Affiliation(s)
- Diping Song
- Guangdong Provincial Key Laboratory of Computer Vision and Virtual Reality Technology, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; University of Chinese Academy of Sciences, Beijing, 100049, China; Shanghai Artificial Intelligence Laboratory, Shanghai, 200232, China
- Fei Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangzhou, 510060, China
- Cheng Li
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Jian Xiong
- Ophthalmic Center, The Second Affiliated Hospital of Nanchang University, Nanchang, 330000, China
- Junjun He
- Shanghai Artificial Intelligence Laboratory, Shanghai, 200232, China
- Xiulan Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangzhou, 510060, China
- Yu Qiao
- Guangdong Provincial Key Laboratory of Computer Vision and Virtual Reality Technology, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; Shanghai Artificial Intelligence Laboratory, Shanghai, 200232, China
10
Wu JH, Nishida T, Weinreb RN, Lin JW. Performances of Machine Learning in Detecting Glaucoma Using Fundus and Retinal Optical Coherence Tomography Images: A Meta-Analysis. Am J Ophthalmol 2022; 237:1-12. [PMID: 34942113] [DOI: 10.1016/j.ajo.2021.12.008]
Abstract
PURPOSE To evaluate the performance of machine learning (ML) in detecting glaucoma using fundus and retinal optical coherence tomography (OCT) images. DESIGN Meta-analysis. METHODS PubMed and EMBASE were searched on August 11, 2021. A bivariate random-effects model was used to pool ML's diagnostic sensitivity, specificity, and area under the curve (AUC). Subgroup analyses were performed based on ML classifier categories and dataset types. RESULTS One hundred and five studies (3.3%) were retrieved. Seventy-three (69.5%), 30 (28.6%), and 2 (1.9%) studies tested ML using fundus, OCT, and both image types, respectively. Total testing data numbers were 197,174 for fundus and 16,039 for OCT. Overall, ML showed excellent performance for both fundus (pooled sensitivity = 0.92 [95% CI, 0.91-0.93]; specificity = 0.93 [95% CI, 0.91-0.94]; and AUC = 0.97 [95% CI, 0.95-0.98]) and OCT (pooled sensitivity = 0.90 [95% CI, 0.86-0.92]; specificity = 0.91 [95% CI, 0.89-0.92]; and AUC = 0.96 [95% CI, 0.93-0.97]). ML performed similarly on all data and on external data for fundus, whereas the external test result for OCT was less robust (AUC = 0.87). When comparing classifier categories, support vector machines showed the highest performance (pooled sensitivity, specificity, and AUC ranges, 0.92-0.96, 0.95-0.97, and 0.96-0.99, respectively), though results from neural networks and other classifiers were still good (pooled sensitivity, specificity, and AUC ranges, 0.88-0.93, 0.90-0.93, and 0.95-0.97, respectively). When analyzed based on dataset types, ML demonstrated consistent performance on clinical datasets (fundus AUC = 0.98 [95% CI, 0.97-0.99] and OCT AUC = 0.95 [95% CI, 0.93-0.97]). CONCLUSIONS The performance of ML in detecting glaucoma compares favorably to that of experts and is promising for clinical application. Future prospective studies are needed to better evaluate its real-world utility.
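Meta-analytic pooling of per-study sensitivities or specificities is typically done on the logit scale. The sketch below is a fixed-effect, inverse-variance simplification for illustration only; the study itself used a bivariate random-effects model, which additionally estimates between-study variance and the sensitivity-specificity correlation:

```python
import math

def pool_proportions(props, ns):
    """Inverse-variance fixed-effect pooling of proportions (e.g. per-study
    sensitivities) on the logit scale, back-transformed to a proportion."""
    logits, weights = [], []
    for p, n in zip(props, ns):
        logits.append(math.log(p / (1.0 - p)))
        weights.append(n * p * (1.0 - p))  # inverse of the variance of logit(p)
    pooled = sum(l * w for l, w in zip(logits, weights)) / sum(weights)
    return 1.0 / (1.0 + math.exp(-pooled))
```

Larger studies and proportions estimated more precisely receive more weight, and the pooled value always lies between the smallest and largest study estimates.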
11
Energetic Glaucoma Segmentation and Classification Strategies Using Depth Optimized Machine Learning Strategies. Contrast Media Mol Imaging 2021; 2021:5709257. [PMID: 34908911] [PMCID: PMC8639261] [DOI: 10.1155/2021/5709257]
Abstract
Glaucoma is a major threat to vision: it damages the optic nerve and can leave individuals permanently blind. Its principal risk factors include elevated intraocular pressure, family history, and irregular sleeping habits, and the resulting damage to the optic nervous system is heavy; an affected person may lose sight within a few months. The central problem with this disease is that it is incurable; progression can, however, be slowed so that the current level of impairment is maintained over a long period, but only if the disease is identified at an early stage. Glaucoma alters the structure of the eyeball in ways that are difficult to assess during routine examination. In clinical terms, the Cup to Disc Ratio (CDR) changes sharply in glaucoma patients, leading to severe ocular damage. The standard way to assess glaucoma is an Optical Coherence Tomography (OCT) test, which captures the posterior segment of the eyeball and is an efficient way to visualize the different regions of the eye with the optic nerve clearly shown. OCT images are therefore well suited to identifying diseases such as glaucoma with proper and robust accuracy. In this work, a new methodology called the Depth Optimized Machine Learning Strategy (DOMLS) is introduced to identify glaucoma in its earlier stages; it adopts a new optimization step called the Modified K-Means Optimization Logic (MkMOL) and assures an accuracy level of more than 96.2% with a least error rate of 0.002%. This paper thus focuses on the identification of early-stage glaucoma and provides an efficient solution, based on OCT images, for people affected by the disease. The exact position of interest is located using Region of Interest (ROI) based optical region selection, which makes it easy to localize the optic cup (OC) and optic disc (OD). Practical results supporting the accuracy of DOMLS in glaucoma estimation are presented clearly in the Results and Discussion section.
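The cup-to-disc ratio referred to above can be computed directly from binary optic-cup and optic-disc masks once both are segmented; the following is an illustrative sketch of the vertical CDR, not the paper's DOMLS pipeline:

```python
import numpy as np

def vertical_cdr(cup_mask, disc_mask):
    """Vertical cup-to-disc ratio from binary masks: the ratio of the
    vertical extent of the optic cup to that of the optic disc.
    Assumes the disc mask is non-empty."""
    cup_rows = np.flatnonzero(np.asarray(cup_mask, dtype=bool).any(axis=1))
    disc_rows = np.flatnonzero(np.asarray(disc_mask, dtype=bool).any(axis=1))
    cup_h = cup_rows[-1] - cup_rows[0] + 1 if cup_rows.size else 0
    disc_h = disc_rows[-1] - disc_rows[0] + 1
    return cup_h / disc_h
```

Area-based variants replace the vertical extents with the pixel counts of the two masks; either way, the ratio depends entirely on the quality of the OC and OD segmentations.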
12
An U, Bhardwaj A, Shameer K, Subramanian L. High Precision Mammography Lesion Identification From Imprecise Medical Annotations. Front Big Data 2021; 4:742779. [PMID: 34977563] [PMCID: PMC8716325] [DOI: 10.3389/fdata.2021.742779]
Abstract
Breast cancer screening using mammography serves as the earliest defense against breast cancer, revealing anomalous tissue years before it can be detected through physical screening. Despite the use of high resolution radiography, the presence of densely overlapping patterns challenges the consistency of human-driven diagnosis and drives interest in leveraging the state-of-the-art localization ability of deep convolutional neural networks (DCNN). The growing availability of digitized clinical archives enables the training of deep segmentation models, but training on the most widely available form of coarse hand-drawn annotations works against learning the precise boundary of cancerous tissue, producing results that align with the annotations rather than the underlying lesions. The expense of collecting high quality pixel-level data in medical science makes this even more difficult. To surmount this fundamental challenge, we propose LatentCADx, a deep learning segmentation model capable of precisely annotating the cancer lesions underlying hand-drawn annotations, which we obtain procedurally using joint classification training and a strict segmentation penalty. We demonstrate the capability of LatentCADx on a publicly available dataset of 2,620 mammogram case files, where it obtains a classification ROC of 0.97, AP of 0.87, and segmentation AP of 0.75 (IOU = 0.5), giving comparable or better performance than other models. Qualitative and precision evaluation of LatentCADx annotations on validation samples reveals that LatentCADx increases the specificity of segmentations beyond that of existing models trained on hand-drawn annotations, with pixel-level specificity reaching 0.90. It also obtains sharp boundaries around lesions, unlike other methods, reducing the confused pixels in the output by more than 60%.
Affiliation(s)
- Ulzee An
- Department of Computer Science, University of California, Los Angeles, Los Angeles, CA, United States
- Ankit Bhardwaj
- Department of Computer Science, Courant Institute of Mathematical Sciences, New York University, New York, NY, United States
- Lakshminarayanan Subramanian
- Department of Computer Science, Courant Institute of Mathematical Sciences, New York University, New York, NY, United States
- Department of Population Health, NYU Grossman School of Medicine, New York University, New York, NY, United States
13
Zhang Y, Li M, Yuan S, Liu Q, Chen Q. Robust region encoding and layer attribute protection for the segmentation of retina with multifarious abnormalities. Med Phys 2021; 48:7773-7789. [PMID: 34716932 DOI: 10.1002/mp.15315] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2021] [Revised: 09/30/2021] [Accepted: 10/19/2021] [Indexed: 11/07/2022] Open
Abstract
PURPOSE To robustly segment retinal layers affected by a complex variety of retinal diseases for optical coherence tomography angiography (OCTA) en face projection generation. METHODS In this paper, we propose a robust retinal layer segmentation model that reduces the impact of multifarious abnormalities on model performance. OCTA vascular distribution, regarded as a supplement to spectral-domain optical coherence tomography (SD-OCT) structural information, is introduced to improve the robustness of layer region encoding. To further reduce the sensitivity of region encoding to retinal abnormalities, we propose a multitask layer-wise refinement (MLR) module that refines the initial layer region segmentation results layer by layer. Finally, we design a region-to-surface transformation (RtST) module, with no additional training parameters, to convert the encoded layer regions to their corresponding layer surfaces. This transformation from layer regions to layer surfaces removes inaccurately segmented regions, and layer surfaces are easier to use than layer regions for preserving the natural properties of retinal layers. RESULTS Experimental data include 273 eyes, of which 95 are normal and 178 contain complex retinal diseases, including age-related macular degeneration (AMD), diabetic retinopathy (DR), central serous chorioretinopathy (CSC), choroidal neovascularization (CNV), and so forth. The Dice similarity coefficient (DSC, %) of the superficial, deep, and outer retina reaches 98.92, 97.48, and 98.87 on normal eyes and 98.35, 95.33, and 98.17 on abnormal eyes. Compared with other commonly used layer segmentation models, our model achieves state-of-the-art layer segmentation performance. CONCLUSIONS The final results prove that our proposed model obtains outstanding performance and is robust to retinal abnormalities. Besides, the OCTA modality is helpful for retinal layer segmentation.
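The region-to-surface idea in this abstract can be illustrated with a short sketch: given a per-pixel layer-region mask, the layer surface is recovered as the topmost row assigned to that layer in each A-scan (column). The function name, the toy mask, and the absence of any smoothing are our own simplifications, not the authors' implementation.

```python
# Toy region-to-surface transformation: per-pixel layer mask -> layer surface.
# Purely illustrative; the paper's RtST operates inside a trained network.

def region_to_surface(mask, layer_label):
    """For each column, return the topmost row labeled layer_label,
    or None when the layer is absent from that column."""
    rows, cols = len(mask), len(mask[0])
    surface = []
    for c in range(cols):
        top = next((r for r in range(rows) if mask[r][c] == layer_label), None)
        surface.append(top)
    return surface

# 4x4 toy B-scan mask: 0 = background, 1 = one retinal layer.
mask = [
    [0, 0, 0, 0],
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 1, 1, 0],
]
print(region_to_surface(mask, 1))  # [1, 1, 2, None]
```

Representing layers as surfaces makes ordering constraints between boundaries straightforward to check, which is one way to read the "layer attribute protection" in the title.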
Affiliation(s)
- Yuhan Zhang
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
- Mingchao Li
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
- Songtao Yuan
- Department of Ophthalmology, The First Affiliated Hospital with Nanjing Medical University, Nanjing, China
- Qinghuai Liu
- Department of Ophthalmology, The First Affiliated Hospital with Nanjing Medical University, Nanjing, China
- Qiang Chen
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
14
Song D, Fu B, Li F, Xiong J, He J, Zhang X, Qiao Y. Deep Relation Transformer for Diagnosing Glaucoma With Optical Coherence Tomography and Visual Field Function. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:2392-2402. [PMID: 33945474 DOI: 10.1109/tmi.2021.3077484] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Glaucoma is the leading cause of irreversible blindness. Early detection and timely treatment of glaucoma are essential for preventing visual field loss or even blindness. In clinical practice, Optical Coherence Tomography (OCT) and Visual Field (VF) exams are two widely used and complementary techniques for diagnosing glaucoma. OCT provides quantitative measurements of the optic nerve head (ONH) structure, while the VF test is a functional assessment of peripheral vision. In this paper, we propose a Deep Relation Transformer (DRT) to perform glaucoma diagnosis with OCT and VF information combined. A novel deep reasoning mechanism is proposed to explore implicit pairwise relations between OCT and VF information in global and regional manners. With the pairwise relations, a carefully designed deep transformer mechanism is developed to enhance the representation of each modality with complementary information. Based on the reasoning and transformer mechanisms, three successive modules are designed to extract and collect valuable information for glaucoma diagnosis, namely the global relation module, the guided regional relation module, and the interaction transformer module. Moreover, we build a large dataset, the ZOC-OCT&VF dataset, which includes 1395 OCT-VF pairs for developing and evaluating our DRT. We conduct extensive experiments to validate the effectiveness of the proposed method. Experimental results show that our method achieves 88.3% accuracy and outperforms existing single-modal approaches by a large margin. The code and dataset will be made publicly available in the future.
15
Rajagopalan N, Venkateswaran N, Josephraj AN, Srithaladevi E. Diagnosis of retinal disorders from Optical Coherence Tomography images using CNN. PLoS One 2021; 16:e0254180. [PMID: 34314421 PMCID: PMC8315505 DOI: 10.1371/journal.pone.0254180] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2021] [Accepted: 06/21/2021] [Indexed: 12/04/2022] Open
Abstract
An efficient automatic decision support system for the detection of retinal disorders is important and is the need of the hour. Optical Coherence Tomography (OCT) is the current imaging modality for the early, non-invasive detection of retinal disorders. In this work, a Convolutional Neural Network (CNN) model is proposed to classify three types of retinal disorders, namely choroidal neovascularization (CNV), drusen macular degeneration (DMD), and diabetic macular edema (DME). The hyperparameters of the model, such as batch size, number of epochs, dropout rate, and the type of optimizer, are tuned using the random search optimization method for better performance in classifying different retinal disorders. The proposed architecture provides an accuracy of 97.01%, a sensitivity of 93.43%, and a specificity of 98.07%, outperforming other existing models in comparison. The proposed model can be used for the effective large-scale screening of retinal disorders.
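Random search over the hyperparameters named in this abstract (batch size, epochs, dropout rate, optimizer) can be sketched as below; the search space, trial count, and the stand-in scoring function are illustrative assumptions, not the paper's settings.

```python
# Random-search sketch over the hyperparameters the abstract lists.
# score_fn stands in for training a CNN and returning validation accuracy.
import random

SEARCH_SPACE = {
    "batch_size": [16, 32, 64],
    "epochs": [10, 20, 50],
    "dropout": [0.2, 0.3, 0.5],
    "optimizer": ["adam", "sgd", "rmsprop"],
}

def sample_config(rng):
    return {name: rng.choice(values) for name, values in SEARCH_SPACE.items()}

def random_search(score_fn, n_trials=20, seed=0):
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = sample_config(rng)
        score = score_fn(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Stand-in score: pretend validation accuracy is best with adam and dropout 0.3.
def fake_score(cfg):
    return (cfg["optimizer"] == "adam") + (cfg["dropout"] == 0.3)

best, score = random_search(fake_score)
print(best)
```

Unlike grid search, random search samples the joint space, which tends to cover the influential hyperparameters better for the same number of trials.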
Affiliation(s)
- Nithya Rajagopalan
- Department of Biomedical Engineering, Sri Sivasubramaniya Nadar College of Engineering, Chennai, India
- Venkateswaran N.
- Department of Electronics and Communication Engineering, Sri Sivasubramaniya Nadar College of Engineering, Chennai, India
- Alex Noel Josephraj
- Department of Electronic Engineering, College of Engineering, Shantou University, Shantou, China
- Srithaladevi E.
- Department of Biomedical Engineering, Sri Sivasubramaniya Nadar College of Engineering, Chennai, India
16
He X, Deng Y, Fang L, Peng Q. Multi-Modal Retinal Image Classification With Modality-Specific Attention Network. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:1591-1602. [PMID: 33625978 DOI: 10.1109/tmi.2021.3059956] [Citation(s) in RCA: 29] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Recently, automatic diagnostic approaches have been widely used to classify ocular diseases. Most of these approaches are based on a single imaging modality (e.g., fundus photography or optical coherence tomography (OCT)), which usually reflects the oculopathy only to a certain extent, and they neglect the modality-specific information among different imaging modalities. This paper proposes a novel modality-specific attention network (MSAN) for multi-modal retinal image classification, which can effectively utilize the modality-specific diagnostic features from fundus and OCT images. The MSAN comprises two attention modules that extract the modality-specific features from fundus and OCT images, respectively. Specifically, for the fundus image, ophthalmologists need to observe local and global pathologies at multiple scales (e.g., from microaneurysms at the micrometer level and the optic disc at the millimeter level to blood vessels across the whole eye). Therefore, we propose a multi-scale attention module to extract both the local and global features from fundus images. Moreover, large background regions exist in the OCT image, which are meaningless for diagnosis. Thus, a region-guided attention module is proposed to encode the retinal layer-related features and ignore the background in OCT images. Finally, we fuse the modality-specific features to form a multi-modal feature and train the multi-modal retinal image classification network. The fusion of modality-specific features allows the model to combine the advantages of the fundus and OCT modalities for a more accurate diagnosis. Experimental results on a clinically acquired multi-modal retinal image (fundus and OCT) dataset demonstrate that our MSAN outperforms other well-known single-modal and multi-modal retinal image classification methods.
17
Wang C, Gan M. Tissue self-attention network for the segmentation of optical coherence tomography images on the esophagus. BIOMEDICAL OPTICS EXPRESS 2021; 12:2631-2646. [PMID: 34123493 PMCID: PMC8176794 DOI: 10.1364/boe.419809] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/13/2021] [Revised: 04/01/2021] [Accepted: 04/01/2021] [Indexed: 05/06/2023]
Abstract
Automatic segmentation of layered tissue is the key to esophageal optical coherence tomography (OCT) image processing. With the advent of deep learning techniques, frameworks based on fully convolutional networks have proved effective in classifying pixels in images. However, due to speckle noise and unfavorable imaging conditions, the esophageal tissue relevant to diagnosis is not always easy to identify. An effective approach to this problem is extracting more powerful feature maps, which have similar representations for pixels within the same tissue while remaining discriminative across different tissues. In this study, we propose a novel framework, the tissue self-attention network (TSA-Net), which introduces the self-attention mechanism for esophageal OCT image segmentation. The self-attention module in the network captures long-range context dependencies from the image and analyzes the input image in a global view, which helps to cluster pixels in the same tissue and reveal differences between layers, thus achieving more powerful feature maps for segmentation. Experiments visually illustrate the effectiveness of the self-attention map, and its advantages over other deep networks are also discussed.
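The core of the self-attention mechanism this abstract relies on, reduced to its simplest form: each position's new feature is a softmax-weighted average over the features at all positions, so pixels of similar tissue reinforce each other regardless of distance. This toy version uses the raw features as queries, keys, and values (no learned projections), a deliberate simplification of the module in TSA-Net.

```python
# Minimal self-attention over a list of feature vectors (Q = K = V = input).
# A toy reduction of the attention idea, not the TSA-Net module itself.
import math

def softmax(xs):
    m = max(xs)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(feats):
    """feats: list of equal-length feature vectors, one per position."""
    d = len(feats[0])
    out = []
    for q in feats:
        # scaled dot-product similarity of this position to every position
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in feats]
        w = softmax(scores)
        # weighted average of all value vectors
        out.append([sum(wi * v[j] for wi, v in zip(w, feats))
                    for j in range(d)])
    return out

print(self_attention([[1.0, 0.0], [1.0, 0.0]]))
```

Because every position attends to every other, the receptive field is global in a single step, which is what lets attention "cluster pixels in the same tissue" across the whole B-scan.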
Affiliation(s)
- Cong Wang
- Jiangsu Key Laboratory of Medical Optics, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Meng Gan
- Jiangsu Key Laboratory of Medical Optics, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
18
Li J, Jin P, Zhu J, Zou H, Xu X, Tang M, Zhou M, Gan Y, He J, Ling Y, Su Y. Multi-scale GCN-assisted two-stage network for joint segmentation of retinal layers and discs in peripapillary OCT images. BIOMEDICAL OPTICS EXPRESS 2021; 12:2204-2220. [PMID: 33996224 PMCID: PMC8086482 DOI: 10.1364/boe.417212] [Citation(s) in RCA: 35] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/11/2020] [Revised: 03/15/2021] [Accepted: 03/16/2021] [Indexed: 05/03/2023]
Abstract
An accurate and automated tissue segmentation algorithm for retinal optical coherence tomography (OCT) images is crucial for the diagnosis of glaucoma. However, due to the presence of the optic disc, the anatomical structure of the peripapillary region of the retina is complicated and is challenging for segmentation. To address this issue, we develop a novel graph convolutional network (GCN)-assisted two-stage framework to simultaneously label the nine retinal layers and the optic disc. Specifically, a multi-scale global reasoning module is inserted between the encoder and decoder of a U-shape neural network to exploit anatomical prior knowledge and perform spatial reasoning. We conduct experiments on human peripapillary retinal OCT images. We also provide public access to the collected dataset, which might contribute to the research in the field of biomedical image processing. The Dice score of the proposed segmentation network is 0.820 ± 0.001 and the pixel accuracy is 0.830 ± 0.002, both of which outperform those from other state-of-the-art techniques.
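For reference, the Dice score reported here (and throughout this list) compares a predicted mask against the ground truth as twice the overlap divided by the total foreground. A minimal implementation on flat binary masks:

```python
def dice(pred, truth):
    """Dice similarity coefficient of two flat binary masks (lists of 0/1)."""
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * intersection / total if total else 1.0  # empty masks agree

print(dice([1, 1, 0, 0], [1, 0, 1, 0]))  # 0.5
```

Dice weights overlap against the sizes of both masks, so it is less forgiving of small-structure errors than plain pixel accuracy, which is why both metrics are quoted above.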
Affiliation(s)
- Jiaxuan Li
- John Hopcroft Center for Computer Science, Shanghai Jiao Tong University, Shanghai 200240, China
- Peiyao Jin
- Department of Preventative Ophthalmology, Shanghai Eye Disease Prevention and Treatment Center, Shanghai Eye Hospital, Shanghai 200040, China
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University, Shanghai 200080, China
- Jianfeng Zhu
- Department of Preventative Ophthalmology, Shanghai Eye Disease Prevention and Treatment Center, Shanghai Eye Hospital, Shanghai 200040, China
- Haidong Zou
- Department of Preventative Ophthalmology, Shanghai Eye Disease Prevention and Treatment Center, Shanghai Eye Hospital, Shanghai 200040, China
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University, Shanghai 200080, China
- Xun Xu
- Department of Preventative Ophthalmology, Shanghai Eye Disease Prevention and Treatment Center, Shanghai Eye Hospital, Shanghai 200040, China
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University, Shanghai 200080, China
- Min Tang
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University, Shanghai 200080, China
- Minwen Zhou
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University, Shanghai 200080, China
- Yu Gan
- Department of Electrical and Computer Engineering, The University of Alabama, AL 35487, USA
- Jiangnan He
- Department of Preventative Ophthalmology, Shanghai Eye Disease Prevention and Treatment Center, Shanghai Eye Hospital, Shanghai 200040, China
- Yuye Ling
- John Hopcroft Center for Computer Science, Shanghai Jiao Tong University, Shanghai 200240, China
- Yikai Su
- State Key Lab of Advanced Optical Communication Systems and Networks, Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
19
Zhong P, Wang J, Guo Y, Fu X, Wang R. Multiclass retinal disease classification and lesion segmentation in OCT B-scan images using cascaded convolutional networks. APPLIED OPTICS 2020; 59:10312-10320. [PMID: 33361962 DOI: 10.1364/ao.409414] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/07/2020] [Accepted: 10/24/2020] [Indexed: 06/12/2023]
Abstract
Disease classification and lesion segmentation of retinal optical coherence tomography images play important roles in ophthalmic computer-aided diagnosis. However, existing methods achieve the two tasks separately, which is insufficient for clinical application and ignores the internal relation of disease and lesion features. In this paper, a framework of cascaded convolutional networks is proposed to jointly classify retinal diseases and segment lesions. First, we adopt an auxiliary binary classification network to identify normal and abnormal images. Then a novel, to the best of our knowledge, U-shaped multi-task network, BDA-Net, combined with a bidirectional decoder and self-attention mechanism, is used to further analyze abnormal images. Experimental results show that the proposed method reaches an accuracy of 0.9913 in classification and achieves an improvement of around 3% in Dice compared to the baseline U-shaped model in segmentation.
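The cascade this abstract describes reduces to a two-stage control flow: a cheap binary gate, then the heavier joint analysis network only for abnormal scans. The stand-in classifier and segmenter below are toys marking any pixel above 0.5; only the control flow mirrors the paper.

```python
# Two-stage cascade: binary normal/abnormal gate, then lesion segmentation.
# classify_fn and segment_fn stand in for the trained networks.

def cascaded_analysis(image, classify_fn, segment_fn):
    if not classify_fn(image):                    # stage 1: screen
        return {"label": "normal", "lesion_mask": None}
    return {"label": "abnormal", "lesion_mask": segment_fn(image)}  # stage 2

classify = lambda img: any(p > 0.5 for row in img for p in row)
segment = lambda img: [[int(p > 0.5) for p in row] for row in img]

normal_scan = [[0.1, 0.2], [0.0, 0.3]]
lesion_scan = [[0.1, 0.9], [0.0, 0.3]]
print(cascaded_analysis(normal_scan, classify, segment)["label"])   # normal
print(cascaded_analysis(lesion_scan, classify, segment)["lesion_mask"])  # [[0, 1], [0, 0]]
```

Gating on the classifier means segmentation cost is paid only for suspected pathology, and the two tasks can share features about the same lesions, which is the "internal relation" the abstract exploits.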
20
Kugelman J, Alonso-Caneiro D, Chen Y, Arunachalam S, Huang D, Vallis N, Collins MJ, Chen FK. Retinal Boundary Segmentation in Stargardt Disease Optical Coherence Tomography Images Using Automated Deep Learning. Transl Vis Sci Technol 2020; 9:12. [PMID: 33133774 PMCID: PMC7581491 DOI: 10.1167/tvst.9.11.12] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2020] [Accepted: 09/11/2020] [Indexed: 12/13/2022] Open
Abstract
Purpose To use a deep learning model to develop a fully automated method (fully semantic network and graph search [FS-GS]) of retinal segmentation for optical coherence tomography (OCT) images from patients with Stargardt disease. Methods Eighty-seven manually segmented (ground truth) OCT volume scan sets (5171 B-scans) from 22 patients with Stargardt disease were used for training, validation, and testing of a novel retinal boundary detection approach (FS-GS) that combines a fully semantic deep learning segmentation method, which generates a per-pixel class prediction map, with a graph-search method that extracts retinal boundary positions. The performance was evaluated using the mean absolute boundary error and the differences in two clinical metrics (retinal thickness and volume) compared with the ground truth. The performance of a separate deep learning method and of two publicly available software algorithms was also evaluated against the ground truth. Results FS-GS showed excellent agreement with the ground truth, with a boundary mean absolute error of 0.23 and 1.12 pixels for the internal limiting membrane and the base of the retinal pigment epithelium or Bruch's membrane, respectively. The mean differences in thickness and volume across the central 6-mm zone were 2.10 µm and 0.059 mm3, respectively. The performance of the proposed method was more accurate and consistent than that of the publicly available OCTExplorer and AURA tools. Conclusions The FS-GS method delivers good performance in the segmentation of OCT images of the pathologic retina in Stargardt disease. Translational Relevance Deep learning models can provide a robust method for retinal segmentation and support a high-throughput analysis pipeline for measuring retinal thickness and volume in Stargardt disease.
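The graph-search half of an FS-GS-style pipeline can be sketched as a shortest-path problem: the network emits a per-pixel boundary probability, and the boundary is the minimum-cost path crossing the B-scan left to right, here constrained to move at most one row between adjacent columns. The cost definition, the smoothness constraint, and the toy probability map are our assumptions, not the paper's exact formulation.

```python
# Dynamic-programming boundary trace on a boundary-probability map.
# A sketch of the graph-search idea only; FS-GS's actual graph differs.

def trace_boundary(prob):
    """prob[r][c] in [0, 1]; returns one row index per column."""
    rows, cols = len(prob), len(prob[0])
    INF = float("inf")
    cost = [[INF] * rows for _ in range(cols)]   # best cost to reach (c, r)
    back = [[0] * rows for _ in range(cols)]     # predecessor row
    for r in range(rows):
        cost[0][r] = 1.0 - prob[r][0]
    for c in range(1, cols):
        for r in range(rows):
            best_prev, best_cost = 0, INF
            for dr in (-1, 0, 1):                # smoothness: step <= 1 row
                pr = r + dr
                if 0 <= pr < rows and cost[c - 1][pr] < best_cost:
                    best_prev, best_cost = pr, cost[c - 1][pr]
            cost[c][r] = best_cost + (1.0 - prob[r][c])
            back[c][r] = best_prev
    end = min(range(rows), key=lambda r: cost[cols - 1][r])
    path = [end]                                 # backtrack cheapest path
    for c in range(cols - 1, 0, -1):
        path.append(back[c][path[-1]])
    return path[::-1]

# Toy map with a high-probability ridge drifting downward.
prob = [
    [0.9, 0.1, 0.1, 0.1],
    [0.1, 0.9, 0.9, 0.1],
    [0.1, 0.1, 0.1, 0.9],
]
print(trace_boundary(prob))  # [0, 1, 1, 2]
```

Combining a learned probability map with a global path search is what keeps the extracted boundary continuous even where the per-pixel prediction is noisy.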
Affiliation(s)
- Jason Kugelman
- Queensland University of Technology (QUT), Contact Lens and Visual Optics Laboratory, Centre for Vision and Eye Research, School of Optometry and Vision Science, Queensland, Australia
- David Alonso-Caneiro
- Queensland University of Technology (QUT), Contact Lens and Visual Optics Laboratory, Centre for Vision and Eye Research, School of Optometry and Vision Science, Queensland, Australia
- Centre for Ophthalmology and Visual Science (incorporating Lions Eye Institute), The University of Western Australia, Perth, Western Australia, Australia
- Yi Chen
- Centre for Ophthalmology and Visual Science (incorporating Lions Eye Institute), The University of Western Australia, Perth, Western Australia, Australia
- Sukanya Arunachalam
- Centre for Ophthalmology and Visual Science (incorporating Lions Eye Institute), The University of Western Australia, Perth, Western Australia, Australia
- Di Huang
- Centre for Ophthalmology and Visual Science (incorporating Lions Eye Institute), The University of Western Australia, Perth, Western Australia, Australia
- Centre for Molecular Medicine and Innovative Therapeutics, Murdoch University, Murdoch, Western Australia, Australia
- Centre for Neuromuscular and Neurological Disorders, The University of Western Australia and Perron Institute for Neurological and Translational Science, Nedlands, Western Australia, Australia
- Natasha Vallis
- Centre for Ophthalmology and Visual Science (incorporating Lions Eye Institute), The University of Western Australia, Perth, Western Australia, Australia
- Michael J Collins
- Queensland University of Technology (QUT), Contact Lens and Visual Optics Laboratory, Centre for Vision and Eye Research, School of Optometry and Vision Science, Queensland, Australia
- Fred K Chen
- Centre for Ophthalmology and Visual Science (incorporating Lions Eye Institute), The University of Western Australia, Perth, Western Australia, Australia
- Department of Ophthalmology, Royal Perth Hospital, Perth, Western Australia, Australia
- Department of Ophthalmology, Perth Children's Hospital, Nedlands, Western Australia, Australia
21
Raja H, Hassan T, Akram MU, Werghi N. Clinically Verified Hybrid Deep Learning System for Retinal Ganglion Cells Aware Grading of Glaucomatous Progression. IEEE Trans Biomed Eng 2020; 68:2140-2151. [PMID: 33044925 DOI: 10.1109/tbme.2020.3030085] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
OBJECTIVE Glaucoma is the second leading cause of blindness worldwide. Glaucomatous progression can be monitored by analyzing the degeneration of retinal ganglion cells (RGCs). Many researchers have screened for glaucoma by measuring cup-to-disc ratios from fundus and optical coherence tomography scans. In contrast, this paper presents a novel strategy that pays attention to RGC atrophy for screening glaucomatous pathologies and grading their severity. METHODS The proposed framework encompasses a hybrid convolutional network that extracts the retinal nerve fiber layer, ganglion cell with inner plexiform layer, and ganglion cell complex regions, allowing a quantitative screening of glaucomatous subjects. Furthermore, the severity of glaucoma in screened cases is objectively graded by analyzing the thickness of these regions. RESULTS The proposed framework is rigorously tested on the publicly available Armed Forces Institute of Ophthalmology (AFIO) dataset, where it achieved an F1 score of 0.9577 for diagnosing glaucoma, a mean dice coefficient of 0.8697 for extracting the RGC regions, and an accuracy of 0.9117 for grading glaucomatous progression. Furthermore, the performance of the proposed framework is clinically verified against the markings of four expert ophthalmologists, achieving a statistically significant Pearson correlation coefficient of 0.9236. CONCLUSION An automated assessment of RGC degeneration yields better glaucomatous screening and grading compared to state-of-the-art solutions. SIGNIFICANCE An RGC-aware system not only screens for glaucoma but can also grade its severity; here we present an end-to-end solution that is thoroughly evaluated on a standardized dataset and clinically validated for analyzing glaucomatous pathologies.
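The grading step rests on a simple quantity: layer thickness derived from two segmented boundaries. A sketch follows; the axial resolution and the severity thresholds are made-up numbers for illustration, not the paper's grading criteria.

```python
# Thickness from two boundary surfaces, then a threshold-based severity grade.
# axial_res_um and both thresholds are illustrative values, not AFIO settings.

def layer_thickness_um(top, bottom, axial_res_um=3.9):
    """Mean thickness between two surfaces given as row indices per A-scan."""
    per_ascan = [(b - t) * axial_res_um for t, b in zip(top, bottom)]
    return sum(per_ascan) / len(per_ascan)

def grade_severity(thickness_um, moderate_below=70.0, severe_below=50.0):
    # A thinner RGC-related layer suggests more advanced degeneration.
    if thickness_um < severe_below:
        return "severe"
    if thickness_um < moderate_below:
        return "moderate"
    return "early/normal"

t = layer_thickness_um([10, 10, 11, 11], [30, 29, 30, 31])
print(round(t, 2), grade_severity(t))
```

Grading from thickness rather than the raw segmentation makes the decision interpretable: a clinician can inspect the same thickness map the system thresholds.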
22
Zhang Y, Huang C, Li M, Xie S, Xie K, Ji Z, Yuan S, Chen Q. Robust Layer Segmentation Against Complex Retinal Abnormalities for en face OCTA Generation. Med Image Comput Comput Assist Interv 2020. [DOI: 10.1007/978-3-030-59722-1_62] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/16/2023]
23
Wang C, Gan M, Zhang M, Li D. Adversarial convolutional network for esophageal tissue segmentation on OCT images. BIOMEDICAL OPTICS EXPRESS 2020; 11:3095-3110. [PMID: 32637244 PMCID: PMC7316031 DOI: 10.1364/boe.394715] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/08/2020] [Revised: 05/08/2020] [Accepted: 05/08/2020] [Indexed: 05/20/2023]
Abstract
Automatic segmentation is important for esophageal OCT image processing, as it provides tissue characteristics such as shape and thickness for disease diagnosis. Existing automatic segmentation methods based on deep convolutional networks may not generate accurate segmentation results due to limited training sets and varied layer shapes. This study proposes a novel adversarial convolutional network (ACN) that segments esophageal OCT images using a convolutional network trained by adversarial learning. The proposed framework includes a generator and a discriminator, both with a U-Net-like fully convolutional architecture. The discriminator is a hybrid network that discriminates whether the generated results are real and performs pixel classification at the same time. Leveraging adversarial training, the discriminator becomes more powerful. In addition, the adversarial loss is able to encode high-order relationships among pixels, thus eliminating the requirement for post-processing. Experiments on segmenting esophageal OCT images from guinea pigs confirm that the ACN outperforms several deep learning frameworks in pixel classification accuracy and improves the segmentation result. A potential clinical application of the ACN for detecting eosinophilic esophagitis (EoE), an esophageal disease, is also presented.
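The loss structure this abstract implies can be sketched as a weighted sum: a per-pixel segmentation term plus an adversarial term pushing the generator toward masks the discriminator judges "real". The binary cross-entropy form, the 0.1 weight, and all names below are our assumptions, not the ACN's actual losses.

```python
# Sketch of a generator objective combining segmentation and adversarial terms.
# Loss forms and the adv_weight value are illustrative assumptions.
import math

def bce(p, y):
    """Binary cross-entropy of predicted probability p against label y."""
    eps = 1e-7
    p = min(max(p, eps), 1 - eps)    # clip away from 0/1 for finite logs
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def generator_loss(pred_mask, true_mask, realism_score, adv_weight=0.1):
    """pred_mask: per-pixel probabilities; true_mask: 0/1 labels;
    realism_score: discriminator's probability that the mask is real."""
    seg = sum(bce(p, y) for p, y in zip(pred_mask, true_mask)) / len(true_mask)
    adv = bce(realism_score, 1)      # generator wants masks judged "real"
    return seg + adv_weight * adv

perfect = generator_loss([1.0, 0.0], [1, 0], realism_score=1.0)
fooled_less = generator_loss([1.0, 0.0], [1, 0], realism_score=0.5)
print(perfect < fooled_less)  # True
```

Because the adversarial term scores the whole mask, it penalizes implausible pixel configurations that a purely per-pixel loss cannot see, which is the stated reason post-processing becomes unnecessary.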
Affiliation(s)
- Cong Wang
- Jiangsu Key Laboratory of Medical Optics, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- These authors contributed equally to this work and should be considered co-first authors
- Meng Gan
- Jiangsu Key Laboratory of Medical Optics, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- These authors contributed equally to this work and should be considered co-first authors
- Miao Zhang
- School of Electronic and Information Engineering, Soochow University, Suzhou 215006, China
- Deyin Li
- School of Electronic and Information Engineering, Soochow University, Suzhou 215006, China