1. Liang X, Luo S, Liu Z, Liu Y, Luo S, Zhang K, Li L. Unsupervised machine learning analysis of optical coherence tomography radiomics features for predicting treatment outcomes in diabetic macular edema. Sci Rep 2025; 15:13389. PMID: 40251316; PMCID: PMC12008428; DOI: 10.1038/s41598-025-96988-3.
Abstract
This study aimed to identify distinct clusters of diabetic macular edema (DME) patients with differential anti-vascular endothelial growth factor (VEGF) treatment outcomes using an unsupervised machine learning (ML) approach based on radiomic features extracted from pre-treatment optical coherence tomography (OCT) images. Retrospective data from 234 eyes with DME treated with three anti-VEGF therapies between January 2020 and March 2024 were collected from two clinical centers. Radiomic analysis was conducted on pre-treatment OCT images. Following principal component analysis (PCA) for dimensionality reduction, two unsupervised clustering methods (K-means and hierarchical clustering) were applied. Baseline characteristics and treatment outcomes were compared across clusters to assess clustering efficacy. Feature selection employed a three-stage pipeline: exclusion of collinear features (Pearson's r > 0.8); sequential filtering through ANOVA (P < 0.05) and Boruta algorithm (500 iterations); multivariate stepwise regression (entry criteria: univariate P < 0.1) to identify outcome-associated predictors. From 1165 extracted radiomic features, four distinct DME clusters were identified. Cluster 4 exhibited a significantly lower incidence of residual/recurrent DME (RDME) (34.29%) compared to Clusters 1-3 (P = 0.003, P = 0.005 and P = 0.002, respectively). This cluster also demonstrated the highest proportion of eyes (71.43%) with best-corrected visual acuity (BCVA) exceeding 20/63 (P = 0.003, P = 0.005 and P = 0.002, respectively). Multivariate analysis identified logarithm_gldm_DependenceVariance as an independent risk factor for RDME (OR 1.75, 95% CI 1.28-2.40; P < 0.001), while Wavelet-LH_Firstorder_Mean correlated with worse visual outcomes (OR 8.76, 95% CI 1.22-62.84; P = 0.031). Unsupervised ML leveraging pre-treatment OCT radiomics successfully stratifies DME eyes into clinically distinct subgroups with divergent therapeutic responses. These quantitative features may serve as non-invasive biomarkers for personalized outcome prediction and retinal pathology assessment.
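The clustering stage described in this abstract follows a standard radiomics workflow. The sketch below illustrates that workflow with scikit-learn, assuming the radiomic features have already been extracted from the segmented OCT scans (for example with a tool such as PyRadiomics); the feature matrix and all variable names are placeholders, not the authors' code.

```python
# Sketch of the dimensionality-reduction + clustering stage described above.
# `features` stands in for an (n_eyes x n_radiomic_features) matrix already
# extracted from pre-treatment OCT scans; everything here is illustrative.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans, AgglomerativeClustering

rng = np.random.default_rng(0)
features = rng.normal(size=(234, 1165))   # placeholder radiomic feature matrix

X = StandardScaler().fit_transform(features)                       # z-score each feature
X_pca = PCA(n_components=0.95, random_state=0).fit_transform(X)    # keep 95% of variance

kmeans_labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_pca)
hier_labels = AgglomerativeClustering(n_clusters=4).fit_predict(X_pca)

print(np.bincount(kmeans_labels), np.bincount(hier_labels))
```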
Affiliation(s)
- Xuemei Liang: Department of Ophthalmology, Aier Eye Hospital, Jinan University, No. 191 Huanshi Middle Road, Yuexiu District, Guangzhou 510071, Guangdong, People's Republic of China; Department of Ophthalmology, Nanning Aier Eye Hospital, No. 63 Chaoyang Road, Xingning District, Nanning 530012, Guangxi Zhuang Autonomous Region, People's Republic of China
- Shaozhao Luo: Department of Ophthalmology, Aier Eye Hospital, Jinan University, No. 191 Huanshi Middle Road, Yuexiu District, Guangzhou 510071, Guangdong, People's Republic of China
- Zhigao Liu: Department of Ophthalmology, Jinan Aier Eye Hospital, No. 1916 Erhuan East Road, Licheng District, Jinan City, Shandong Province, People's Republic of China
- Yunsheng Liu: Department of Ophthalmology, Cenxi Aier Eye Hospital, No. 101 Yuwu Avenue, Cenxi City, Wuzhou City, Guangxi Zhuang Autonomous Region, People's Republic of China
- Shinan Luo: Department of Ophthalmology, Nanning Aier Eye Hospital, No. 63 Chaoyang Road, Xingning District, Nanning 530012, Guangxi Zhuang Autonomous Region, People's Republic of China
- Kaiqing Zhang: Department of Ophthalmology, Aier Eye Hospital, Jinan University, No. 191 Huanshi Middle Road, Yuexiu District, Guangzhou 510071, Guangdong, People's Republic of China
- Li Li: Department of Ophthalmology, Aier Eye Hospital, Jinan University, No. 191 Huanshi Middle Road, Yuexiu District, Guangzhou 510071, Guangdong, People's Republic of China; Department of Ophthalmology, Nanning Aier Eye Hospital, No. 63 Chaoyang Road, Xingning District, Nanning 530012, Guangxi Zhuang Autonomous Region, People's Republic of China
2. Luo Y, Lin T, Lin A, Mai X, Chen H. Self-supervised based clustering for retinal optical coherence tomography images. Eye (Lond) 2025; 39:331-336. PMID: 39468266; PMCID: PMC11751171; DOI: 10.1038/s41433-024-03444-z.
Abstract
BACKGROUND In response to the inadequacy of manual analysis in meeting the rising demand for retinal optical coherence tomography (OCT) images, a self-supervised learning-based clustering model was implemented. METHODS A public dataset of 83,484 OCT images was utilized, covering the categories of choroidal neovascularization (CNV), diabetic macular edema (DME), drusen, and normal fundus. This study employed the Semantic Pseudo Labeling for Image Clustering (SPICE) framework, a self-supervised learning-based method, to cluster unlabeled OCT images into two and into four categories, and the performance was compared with baseline models. We also analysed the feature distribution using t-SNE and explored the cluster centers, attention maps, and misclassified images. In addition, the DME and CNV subsets were each clustered into two groups, and the results were interpreted by two retinal specialists. RESULTS SPICE demonstrated superior performance in the binary and four-category classification tasks, achieving accuracies of 0.886 and 0.846, respectively. In the t-SNE analysis, the four types formed clearly separated groups. The cluster centers corresponded to the human labels, and the heat maps revealed that the model focused on important biomarkers. The misclassified images shared features with the classes they were mistaken for. The model also grouped DME and CNV into two distinct categories each. CONCLUSIONS Self-supervised clustering effectively distinguished disease variants and revealed common features, with a notable capability to detect disease heterogeneity through biomarkers.
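Unsupervised clustering results such as the 0.886 and 0.846 accuracies quoted above are conventionally scored by first matching cluster IDs to ground-truth classes with the Hungarian algorithm. A generic sketch of that scoring step (not the SPICE implementation) follows.

```python
# How clustering "accuracy" is typically computed: cluster IDs are matched to
# ground-truth classes with the Hungarian algorithm before scoring.
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    n = max(y_true.max(), y_pred.max()) + 1
    cost = np.zeros((n, n), dtype=int)
    for t, p in zip(y_true, y_pred):
        cost[p, t] += 1                      # contingency table: cluster x class
    row, col = linear_sum_assignment(-cost)  # best one-to-one cluster-to-class map
    return cost[row, col].sum() / len(y_true)

y_true = np.array([0, 0, 1, 1, 2, 2, 3, 3])
y_pred = np.array([2, 2, 0, 0, 3, 3, 1, 1])  # perfect clustering up to relabeling
print(clustering_accuracy(y_true, y_pred))   # -> 1.0
```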
Affiliation(s)
- Yilong Luo: Joint Shantou International Eye Center, Shantou University & the Chinese University of Hong Kong, Shantou, China
- Tian Lin: Joint Shantou International Eye Center, Shantou University & the Chinese University of Hong Kong, Shantou, China
- Aidi Lin: Joint Shantou International Eye Center, Shantou University & the Chinese University of Hong Kong, Shantou, China
- Xiaoting Mai: Joint Shantou International Eye Center, Shantou University & the Chinese University of Hong Kong, Shantou, China
- Haoyu Chen: Joint Shantou International Eye Center, Shantou University & the Chinese University of Hong Kong, Shantou, China
3. Oliveira S, Guimarães P, Campos EJ, Fernandes R, Martins J, Castelo-Branco M, Serranho P, Matafome P, Bernardes R, Ambrósio AF. Retinal OCT-Derived Texture Features as Potential Biomarkers for Early Diagnosis and Progression of Diabetic Retinopathy. Invest Ophthalmol Vis Sci 2025; 66:7. PMID: 39760689; PMCID: PMC11717131; DOI: 10.1167/iovs.66.1.7.
Abstract
Purpose Diabetic retinopathy (DR) is usually diagnosed many years after diabetes onset. Indeed, an early diagnosis of DR remains a notable challenge, and, thus, developing novel approaches for earlier disease detection is of utmost importance. We aim to explore the potential of texture analysis of optical coherence tomography (OCT) retinal images in detecting retinal changes in streptozotocin (STZ)-induced diabetic animals at "silent" disease stages, when early retinal molecular and cellular changes that are not yet clinically detectable are already occurring. Methods Volume OCT scans and electroretinograms were acquired before and 1, 2, and 4 weeks after diabetes induction. Automated OCT image segmentation was performed, followed by retinal thickness and texture analysis. Blood-retinal barrier breakdown, glial reactivity, and neuroinflammation were also assessed. Results Type 1 diabetes induced significant early changes in several texture metrics. At week 4 of diabetes, the autocorrelation, correlation, homogeneity, information measure of correlation II (IMCII), inverse difference moment normalized (IDN), inverse difference normalized (INN), and sum average texture metrics decreased in all retinal layers. Similar effects were observed for correlation, homogeneity, IMCII, IDN, and INN at week 2. Moreover, the values of those seven texture metrics decreased throughout disease progression. In diabetic animals, subtle retinal thinning and impaired retinal function were detected, as well as an increase in the number of Iba1-positive cells (microglia/macrophages) and a subtle decrease in tight junction protein immunoreactivity, which did not induce any physiologically relevant effect on the blood-retinal barrier. Conclusions The effects of diabetes on the retina can be spotted through retinal texture analysis in the early stages of the disease. Changes in retinal texture are concomitant with biological retinal changes, thus unlocking the potential of texture analysis for the early diagnosis of DR. However, this remains to be proven in clinical studies.
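The texture metrics in this study are derived from gray-level co-occurrence matrices (GLCMs). The sketch below shows how two of them (homogeneity and correlation) can be computed with scikit-image on a placeholder retinal-layer patch; metrics such as IMC II, IDN, INN, and sum average would have to be derived from the same co-occurrence matrix by hand.

```python
# Minimal GLCM texture sketch on a stand-in OCT layer region of interest.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
layer_patch = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # placeholder ROI

glcm = graycomatrix(layer_patch, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)

homogeneity = graycoprops(glcm, "homogeneity").mean()
correlation = graycoprops(glcm, "correlation").mean()
print(f"homogeneity={homogeneity:.3f}, correlation={correlation:.3f}")
```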
Affiliation(s)
- Sara Oliveira: University of Coimbra, Coimbra Institute for Clinical and Biomedical Research (iCBR), Faculty of Medicine, Coimbra, Portugal; University of Coimbra, Center for Innovative Biomedicine and Biotechnology (CIBB), Coimbra, Portugal; Clinical Academic Center of Coimbra (CACC), Coimbra, Portugal
- Pedro Guimarães: Clinical Academic Center of Coimbra (CACC), Coimbra, Portugal; University of Coimbra, Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), Institute for Nuclear Sciences Applied to Health (ICNAS), Coimbra, Portugal; University of Coimbra, Faculty of Medicine (FMUC), Coimbra, Portugal
- Elisa Julião Campos: University of Coimbra, Coimbra Institute for Clinical and Biomedical Research (iCBR), Faculty of Medicine, Coimbra, Portugal; University of Coimbra, Center for Innovative Biomedicine and Biotechnology (CIBB), Coimbra, Portugal; Clinical Academic Center of Coimbra (CACC), Coimbra, Portugal; University of Coimbra, Chemical Engineering and Renewable Resources for Sustainability (CERES), Department of Chemical Engineering (DEQ), Faculty of Sciences and Technology (FCTUC), Coimbra, Portugal; University of Coimbra, Center for Neuroscience and Cell Biology (CNC-UC), Coimbra, Portugal
- Rosa Fernandes: University of Coimbra, Coimbra Institute for Clinical and Biomedical Research (iCBR), Faculty of Medicine, Coimbra, Portugal; University of Coimbra, Center for Innovative Biomedicine and Biotechnology (CIBB), Coimbra, Portugal; Clinical Academic Center of Coimbra (CACC), Coimbra, Portugal; University of Coimbra, Institute of Pharmacology and Experimental Therapeutics, Faculty of Medicine, Coimbra, Portugal
- João Martins: University of Coimbra, Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), Institute for Nuclear Sciences Applied to Health (ICNAS), Coimbra, Portugal
- Miguel Castelo-Branco: Clinical Academic Center of Coimbra (CACC), Coimbra, Portugal; University of Coimbra, Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), Institute for Nuclear Sciences Applied to Health (ICNAS), Coimbra, Portugal; University of Coimbra, Faculty of Medicine (FMUC), Coimbra, Portugal
- Pedro Serranho: University of Coimbra, Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), Institute for Nuclear Sciences Applied to Health (ICNAS), Coimbra, Portugal; Universidade Aberta, Department of Sciences and Technology, Lisbon, Portugal
- Paulo Matafome: University of Coimbra, Coimbra Institute for Clinical and Biomedical Research (iCBR), Faculty of Medicine, Coimbra, Portugal; University of Coimbra, Center for Innovative Biomedicine and Biotechnology (CIBB), Coimbra, Portugal; Clinical Academic Center of Coimbra (CACC), Coimbra, Portugal; University of Coimbra, Institute of Physiology, Faculty of Medicine, Coimbra, Portugal; Polytechnic University of Coimbra, Health and Technology Research Center (H&TRC), Coimbra Health School (ESTeSC), Coimbra, Portugal
- Rui Bernardes: Clinical Academic Center of Coimbra (CACC), Coimbra, Portugal; University of Coimbra, Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), Institute for Nuclear Sciences Applied to Health (ICNAS), Coimbra, Portugal; University of Coimbra, Faculty of Medicine (FMUC), Coimbra, Portugal
- António Francisco Ambrósio: University of Coimbra, Coimbra Institute for Clinical and Biomedical Research (iCBR), Faculty of Medicine, Coimbra, Portugal; University of Coimbra, Center for Innovative Biomedicine and Biotechnology (CIBB), Coimbra, Portugal; Clinical Academic Center of Coimbra (CACC), Coimbra, Portugal
4. Guo M, Gong D, Yang W. In-depth analysis of research hotspots and emerging trends in AI for retinal diseases over the past decade. Front Med (Lausanne) 2024; 11:1489139. PMID: 39635592; PMCID: PMC11614663; DOI: 10.3389/fmed.2024.1489139.
Abstract
Background The application of Artificial Intelligence (AI) in diagnosing retinal diseases represents a significant advancement in ophthalmological research, with the potential to reshape future practices in the field. This study explores the extensive applications and emerging research frontiers of AI in retinal diseases. Objective This study aims to uncover the developments and predict future directions of AI research in retinal disease over the past decade. Methods This study analyzes AI utilization in retinal disease research through articles, using citation data sourced from the Web of Science (WOS) Core Collection database, covering the period from January 1, 2014, to December 31, 2023. A combination of WOS analyzer, CiteSpace 6.2 R4, and VOSviewer 1.6.19 was used for a bibliometric analysis focusing on citation frequency, collaborations, and keyword trends from an expert perspective. Results A total of 2,861 articles across 93 countries or regions were cataloged, with notable growth in article numbers since 2017. China leads with 926 articles, constituting 32% of the total. The United States has the highest h-index at 66, while England has the most significant network centrality at 0.24. Notably, the University of London is the leading institution with 99 articles and shares the highest h-index (25) with University College London. The National University of Singapore stands out for its central role with a score of 0.16. Research primarily spans ophthalmology and computer science, with "network," "transfer learning," and "convolutional neural networks" being prominent burst keywords from 2021 to 2023. Conclusion China leads globally in article counts, while the United States has a significant research impact. The University of London and University College London have made significant contributions to the literature. Diabetic retinopathy is the retinal disease with the highest volume of research. AI applications have focused on developing algorithms for diagnosing retinal diseases and investigating abnormal physiological features of the eye. Future research should pivot toward more advanced diagnostic systems for ophthalmic diseases.
Affiliation(s)
- Mingkai Guo: The Third School of Clinical Medicine, Guangzhou Medical University, Guangzhou, China
- Di Gong: Shenzhen Eye Institute, Shenzhen Eye Hospital, Jinan University, Shenzhen, China
- Weihua Yang: Shenzhen Eye Institute, Shenzhen Eye Hospital, Jinan University, Shenzhen, China
5. Holland R, Kaye R, Hagag AM, Leingang O, Taylor TR, Bogunović H, Schmidt-Erfurth U, Scholl HP, Rueckert D, Lotery AJ, Sivaprasad S, Menten MJ. Deep Learning-Based Clustering of OCT Images for Biomarker Discovery in Age-Related Macular Degeneration (PINNACLE Study Report 4). Ophthalmol Sci 2024; 4:100543. PMID: 39139544; PMCID: PMC11321288; DOI: 10.1016/j.xops.2024.100543.
Abstract
Purpose We introduce a deep learning-based biomarker proposal system for the purpose of accelerating biomarker discovery in age-related macular degeneration (AMD). Design Retrospective analysis of a large data set of retinal OCT images. Participants A total of 3456 adults aged between 51 and 102 years whose OCT images were collected under the PINNACLE project. Methods Our system proposes candidates for novel AMD imaging biomarkers in OCT. It works by first training a neural network using self-supervised contrastive learning to discover, without any clinical annotations, features relating to both known and unknown AMD biomarkers present in 46 496 retinal OCT images. To interpret the learned biomarkers, we partition the images into 30 subsets, termed clusters, that contain similar features. We conduct 2 parallel 1.5-hour semistructured interviews with 2 independent teams of retinal specialists to assign descriptions in clinical language to each cluster. Descriptions of clusters achieving consensus can potentially inform new biomarker candidates. Main Outcome Measures We checked if each cluster showed clear features comprehensible to retinal specialists, if they related to AMD, and how many described established biomarkers used in grading systems as opposed to recently proposed or potentially new biomarkers. We also compared their prognostic value for late-stage wet and dry AMD against an established clinical grading system and a demographic baseline model. Results Overall, both teams independently identified clearly distinct characteristics in 27 of 30 clusters, of which 23 were related to AMD. Seven were recognized as known biomarkers used in established grading systems, and 16 depicted biomarker combinations or subtypes that are either not yet used in grading systems, were only recently proposed, or were unknown. Clusters separated incomplete from complete retinal atrophy, intraretinal from subretinal fluid, and thick from thin choroids, and, in simulation, outperformed clinically used grading systems in prognostic value. Conclusions Using self-supervised deep learning, we were able to automatically propose AMD biomarkers going beyond the set used in clinically established grading systems. Without any clinical annotations, contrastive learning discovered subtle differences between fine-grained biomarkers. Ultimately, we envision that equipping clinicians with discovery-oriented deep learning tools can accelerate the discovery of novel prognostic biomarkers. Financial Disclosures Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
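The core of the pipeline described above is "embed with a self-supervised encoder, then partition the embeddings into 30 clusters." The sketch below mimics that step with an untrained ResNet-18 standing in for the contrastively pre-trained encoder and random tensors standing in for preprocessed OCT B-scans; it is illustrative only, not the PINNACLE code.

```python
# Embed-then-cluster sketch; encoder and data are placeholders.
import torch
import torchvision.models as models
from sklearn.cluster import KMeans

encoder = models.resnet18(weights=None)   # stand-in for a contrastively trained encoder
encoder.fc = torch.nn.Identity()          # expose the 512-d embedding
encoder.eval()

oct_batch = torch.rand(100, 3, 224, 224)  # placeholder preprocessed OCT B-scans
with torch.no_grad():
    embeddings = encoder(oct_batch).numpy()

cluster_ids = KMeans(n_clusters=30, n_init=10, random_state=0).fit_predict(embeddings)
print(cluster_ids[:10])
```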
Affiliation(s)
- Robbie Holland: BioMedIA, Department of Computing, Imperial College London, London, United Kingdom
- Rebecca Kaye: Clinical and Experimental Sciences, Faculty of Medicine, University of Southampton, Southampton, United Kingdom
- Ahmed M. Hagag: Institute of Ophthalmology, University College London, London, United Kingdom; Moorfields Eye Unit, National Institute for Health Research, London, United Kingdom
- Oliver Leingang: Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Thomas R.P. Taylor: Clinical and Experimental Sciences, Faculty of Medicine, University of Southampton, Southampton, United Kingdom
- Hrvoje Bogunović: Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria; Christian Doppler Laboratory for Artificial Intelligence in Retina, Christian Doppler Forschungsgesellschaft, Vienna, Austria
- Ursula Schmidt-Erfurth: Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Hendrik P.N. Scholl: Institute of Molecular and Clinical Ophthalmology Basel, Basel, Switzerland; Department of Ophthalmology, University of Basel, Basel, Switzerland
- Daniel Rueckert: BioMedIA, Department of Computing, Imperial College London, London, United Kingdom; Institute for AI and Informatics in Medicine, School of Computation, Information and Technology, School of Medicine and Health, Technical University Munich, Munich, Germany
- Andrew J. Lotery: Clinical and Experimental Sciences, Faculty of Medicine, University of Southampton, Southampton, United Kingdom
- Sobha Sivaprasad: Institute of Ophthalmology, University College London, London, United Kingdom; Moorfields Eye Unit, National Institute for Health Research, London, United Kingdom
- Martin J. Menten: BioMedIA, Department of Computing, Imperial College London, London, United Kingdom; Institute for AI and Informatics in Medicine, School of Computation, Information and Technology, School of Medicine and Health, Technical University Munich, Munich, Germany
6. Kang C, Lo JE, Zhang H, Ng SM, Lin JC, Scott IU, Kalpathy-Cramer J, Liu SHA, Greenberg PB. Artificial intelligence for diagnosing exudative age-related macular degeneration. Cochrane Database Syst Rev 2024; 10:CD015522. PMID: 39417312; PMCID: PMC11483348; DOI: 10.1002/14651858.cd015522.pub2.
Abstract
BACKGROUND Age-related macular degeneration (AMD) is a retinal disorder characterized by central retinal (macular) damage. Approximately 10% to 20% of non-exudative AMD cases progress to the exudative form, which may result in rapid deterioration of central vision. Individuals with exudative AMD (eAMD) need prompt consultation with retinal specialists to minimize the risk and extent of vision loss. Traditional methods of diagnosing ophthalmic disease rely on clinical evaluation and multiple imaging techniques, which can be resource-consuming. Tests leveraging artificial intelligence (AI) hold the promise of automatically identifying and categorizing pathological features, enabling the timely diagnosis and treatment of eAMD. OBJECTIVES To determine the diagnostic accuracy of artificial intelligence (AI) as a triaging tool for exudative age-related macular degeneration (eAMD). SEARCH METHODS We searched CENTRAL, MEDLINE, Embase, three clinical trials registries, and Data Archiving and Networked Services (DANS) for gray literature. We did not restrict searches by language or publication date. The date of the last search was April 2024. SELECTION CRITERIA Included studies compared the test performance of algorithms with that of human readers to detect eAMD on retinal images collected from people with AMD who were evaluated at eye clinics in community or academic medical centers, and who were not receiving treatment for eAMD when the images were taken. We included algorithms that were either internally or externally validated or both. DATA COLLECTION AND ANALYSIS Pairs of review authors independently extracted data and assessed study quality using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool with revised signaling questions. For studies that reported more than one set of performance results, we extracted only one set of diagnostic accuracy data per study based on the last development stage or the optimal algorithm as indicated by the study authors. For two-class algorithms, we collected data from the 2x2 table whenever feasible. For multi-class algorithms, we first consolidated data from all classes other than eAMD before constructing the corresponding 2x2 tables. Assuming a common positivity threshold applied by the included studies, we chose random-effects, bivariate logistic models to estimate summary sensitivity and specificity as the primary performance metrics. MAIN RESULTS We identified 36 eligible studies that reported 40 sets of algorithm performance data, encompassing over 16,000 participants and 62,000 images. We included 28 studies (78%) that reported 31 algorithms with performance data in the meta-analysis. The remaining nine studies (25%) reported eight algorithms that lacked usable performance data; we reported them in the qualitative synthesis. Study characteristics and risk of bias Most studies were conducted in Asia, followed by Europe, the USA, and collaborative efforts spanning multiple countries. Most studies identified study participants from the hospital setting, while others used retinal images from public repositories; a few studies did not specify image sources. Based on four of the 36 studies reporting demographic information, the age of the study participants ranged from 62 to 82 years. The included algorithms used various retinal image types as model input, such as optical coherence tomography (OCT) images (N = 15), fundus images (N = 6), and multi-modal imaging (N = 7). The predominant core method used was deep neural networks. 
All studies that reported externally validated algorithms were at high risk of bias mainly due to potential selection bias from either a two-gate design or the inappropriate exclusion of potentially eligible retinal images (or participants). Findings Only three of the 40 included algorithms were externally validated (7.5%, 3/40). The summary sensitivity and specificity were 0.94 (95% confidence interval (CI) 0.90 to 0.97) and 0.99 (95% CI 0.76 to 1.00), respectively, when compared to human graders (3 studies; 27,872 images; low-certainty evidence). The prevalence of images with eAMD ranged from 0.3% to 49%. Twenty-eight algorithms were reportedly either internally validated (20%, 8/40) or tested on a development set (50%, 20/40); the pooled sensitivity and specificity were 0.93 (95% CI 0.89 to 0.96) and 0.96 (95% CI 0.94 to 0.98), respectively, when compared to human graders (28 studies; 33,409 images; low-certainty evidence). We did not identify significant sources of heterogeneity among these 28 algorithms. Although algorithms using OCT images appeared more homogeneous and had the highest summary specificity (0.97, 95% CI 0.93 to 0.98), they were not superior to algorithms using fundus images alone (0.94, 95% CI 0.89 to 0.97) or multimodal imaging (0.96, 95% CI 0.88 to 0.99; P for meta-regression = 0.239). The median prevalence of images with eAMD was 30% (interquartile range [IQR] 22% to 39%). We did not include eight studies that described nine algorithms (one study reported two sets of algorithm results) to distinguish eAMD from normal images, images of other AMD, or other non-AMD retinal lesions in the meta-analysis. Five of these algorithms were generally based on smaller datasets (range 21 to 218 participants per study) yet with a higher prevalence of eAMD images (range 33% to 66%). Relative to human graders, the reported sensitivity in these studies ranged from 0.95 to 0.97, while the specificity ranged from 0.94 to 0.99. Similarly, using small datasets (range 46 to 106), an additional four algorithms for detecting eAMD from other retinal lesions showed high sensitivity (range 0.96 to 1.00) and specificity (range 0.77 to 1.00). AUTHORS' CONCLUSIONS Low- to very low-certainty evidence suggests that an algorithm-based test may correctly identify most individuals with eAMD without increasing unnecessary referrals (false positives) in either the primary or the specialty care settings. There were significant concerns for applying the review findings due to variations in the eAMD prevalence in the included studies. In addition, among the included algorithm-based tests, diagnostic accuracy estimates were at risk of bias due to study participants not reflecting real-world characteristics, inadequate model validation, and the likelihood of selective results reporting. Limited quality and quantity of externally validated algorithms highlighted the need for high-certainty evidence. This evidence will require a standardized definition for eAMD on different imaging modalities and external validation of the algorithm to assess generalizability.
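The per-study inputs to the pooled estimates above are 2x2 tables. As a brief illustration of how sensitivity and specificity are derived from such a table (the counts below are made up, not taken from the review):

```python
# Per-study sensitivity/specificity from a 2x2 table; illustrative counts only.
def sens_spec(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# e.g. an algorithm flagging eAMD on 1000 images with 30% prevalence
print(sens_spec(tp=282, fp=14, fn=18, tn=686))  # -> (0.94, 0.98)
```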
Affiliation(s)
- Chaerim Kang: Division of Ophthalmology, Brown University, Providence, RI, USA
- Jui-En Lo: Department of Internal Medicine, MetroHealth Medical Center/Case Western Reserve University, Cleveland, USA
- Helen Zhang: Program in Liberal Medical Education, Brown University, Providence, RI, USA
- Sueko M Ng: Department of Ophthalmology, University of Colorado Anschutz Medical Campus, Aurora, Colorado, USA
- John C Lin: Department of Medicine, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
- Ingrid U Scott: Department of Ophthalmology and Public Health Sciences, Penn State College of Medicine, Hershey, PA, USA
- Su-Hsun Alison Liu: Department of Ophthalmology, University of Colorado Anschutz Medical Campus, Aurora, Colorado, USA; Department of Epidemiology, University of Colorado Anschutz Medical Campus, Aurora, Colorado, USA
- Paul B Greenberg: Division of Ophthalmology, Brown University, Providence, RI, USA; Section of Ophthalmology, VA Providence Healthcare System, Providence, RI, USA
7. Hu Y, Gao Y, Gao W, Luo W, Yang Z, Xiong F, Chen Z, Lin Y, Xia X, Yin X, Deng Y, Ma L, Li G. AMD-SD: An Optical Coherence Tomography Image Dataset for wet AMD Lesions Segmentation. Sci Data 2024; 11:1014. PMID: 39294152; PMCID: PMC11410981; DOI: 10.1038/s41597-024-03844-6.
Abstract
Wet Age-related Macular Degeneration (wet AMD) is a common ophthalmic disease that significantly impacts patients' vision. Optical coherence tomography (OCT) examination has been widely utilized for diagnosing, treating, and monitoring wet AMD due to its cost-effectiveness, non-invasiveness, and repeatability, positioning it as the most valuable tool for diagnosis and tracking. OCT can provide clear visualization of retinal layers and precise segmentation of lesion areas, facilitating the identification and quantitative analysis of abnormalities. However, the lack of high-quality datasets for assessing wet AMD has impeded the advancement of related algorithms. To address this issue, we have curated a comprehensive wet AMD OCT Segmentation Dataset (AMD-SD), comprising 3049 B-scan images from 138 patients, each annotated with five segmentation labels: subretinal fluid, intraretinal fluid, ellipsoid zone continuity, subretinal hyperreflective material, and pigment epithelial detachment. This dataset presents a valuable opportunity to investigate the accuracy and reliability of various segmentation algorithms for wet AMD, offering essential data support for developing AI-assisted clinical applications targeting wet AMD.
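A dataset such as AMD-SD is typically used to score segmentation models per lesion class with the Dice overlap. A minimal sketch with dummy masks for the five annotated labels (illustrative only, not part of the dataset release):

```python
# Per-label Dice scoring sketch with placeholder masks.
import numpy as np

def dice(pred, gt, eps=1e-7):
    inter = np.logical_and(pred, gt).sum()
    return (2 * inter + eps) / (pred.sum() + gt.sum() + eps)

labels = ["SRF", "IRF", "EZ continuity", "SHRM", "PED"]
rng = np.random.default_rng(0)
pred = rng.random((5, 256, 256)) > 0.5   # placeholder predicted masks, one per label
gt = rng.random((5, 256, 256)) > 0.5     # placeholder ground-truth masks

for name, p, g in zip(labels, pred, gt):
    print(f"{name}: Dice={dice(p, g):.3f}")
```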
Affiliation(s)
- Yunwei Hu: Ophthalmic Center, The Second Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang 330000, P. R. China
- Yundi Gao: Ophthalmic Center, The Second Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang 330000, P. R. China
- Weihao Gao: Shenzhen International Graduate School, Tsinghua University, Lishui Rd, Shenzhen 518055, Guangdong, P. R. China
- Wenbin Luo: Ophthalmic Center, The Second Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang 330000, P. R. China
- Zhongyi Yang: Ophthalmic Center, The Second Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang 330000, P. R. China
- Fen Xiong: Ophthalmic Center, The Second Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang 330000, P. R. China
- Zidan Chen: Ophthalmic Center, The Second Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang 330000, P. R. China
- Yucai Lin: Ophthalmic Center, The Second Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang 330000, P. R. China
- Xinjing Xia: Ophthalmic Center, The Second Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang 330000, P. R. China
- Xiaolong Yin: Ophthalmic Center, The Second Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang 330000, P. R. China
- Yan Deng: Ophthalmic Center, The Second Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang 330000, P. R. China
- Lan Ma: Shenzhen International Graduate School, Tsinghua University, Lishui Rd, Shenzhen 518055, Guangdong, P. R. China
- Guodong Li: Ophthalmic Center, The Second Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang 330000, P. R. China
8. Husvogt L, Yaghy A, Camacho A, Lam K, Schottenhamml J, Ploner SB, Fujimoto JG, Waheed NK, Maier A. Ensembling U-Nets for microaneurysm segmentation in optical coherence tomography angiography in patients with diabetic retinopathy. Sci Rep 2024; 14:21520. PMID: 39277636; PMCID: PMC11401926; DOI: 10.1038/s41598-024-72375-2.
Abstract
Diabetic retinopathy is one of the leading causes of blindness around the world. This makes early diagnosis and treatment important in preventing vision loss in a large number of patients. Microaneurysms are the key hallmark of the early stage of the disease, non-proliferative diabetic retinopathy, and can be detected using OCT angiography quickly and non-invasively. Screening tools for non-proliferative diabetic retinopathy using OCT angiography thus have the potential to lead to improved outcomes in patients. We compared different configurations of ensembled U-nets to automatically segment microaneurysms from OCT angiography fundus projections. For this purpose, we created a new database to train and evaluate the U-nets, created by two expert graders in two stages of grading. We present the first U-net neural networks using ensembling for the detection of microaneurysms from OCT angiography en face images from the superficial and deep capillary plexuses in patients with non-proliferative diabetic retinopathy trained on a database labeled by two experts with repeats.
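The ensembling idea described above amounts to averaging the probability maps of several independently trained networks before thresholding. A toy sketch of that step follows; the "U-Net" factory below is a stand-in, not the authors' architecture or data.

```python
# Average the sigmoid probability maps of several models, then threshold.
import torch

def unet_factory():
    # stand-in "U-Net": any model mapping (N,1,H,W) -> (N,1,H,W) logits
    return torch.nn.Sequential(torch.nn.Conv2d(1, 8, 3, padding=1),
                               torch.nn.ReLU(),
                               torch.nn.Conv2d(8, 1, 3, padding=1))

ensemble = [unet_factory() for _ in range(5)]
octa_enface = torch.rand(2, 1, 128, 128)   # placeholder OCTA fundus projections

with torch.no_grad():
    probs = torch.stack([torch.sigmoid(m(octa_enface)) for m in ensemble]).mean(dim=0)
microaneurysm_mask = probs > 0.5
print(microaneurysm_mask.float().mean().item())
```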
Affiliation(s)
- Lennart Husvogt: Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058 Erlangen, Germany
- Antonio Yaghy: New England Eye Center, Tufts School of Medicine, Boston, MA, 02111, USA
- Alex Camacho: New England Eye Center, Tufts School of Medicine, Boston, MA, 02111, USA
- Kenneth Lam: New England Eye Center, Tufts School of Medicine, Boston, MA, 02111, USA
- Julia Schottenhamml: Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058 Erlangen, Germany
- Stefan B Ploner: Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058 Erlangen, Germany
- James G Fujimoto: Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- Nadia K Waheed: New England Eye Center, Tufts School of Medicine, Boston, MA, 02111, USA
- Andreas Maier: Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058 Erlangen, Germany
9. Tiosano L, Abutbul R, Lender R, Shwartz Y, Chowers I, Hoshen Y, Levy J. Anomaly Detection and Biomarkers Localization in Retinal Images. J Clin Med 2024; 13:3093. PMID: 38892804; PMCID: PMC11173078; DOI: 10.3390/jcm13113093.
Abstract
Background: To design a novel artificial intelligence-based anomaly detection and localization approach for retinal diseases using optical coherence tomography (OCT) scans. Methods: High-resolution OCT scans from the publicly available Kaggle dataset and a local dataset were used by four state-of-the-art self-supervised frameworks. The backbone model of all the frameworks was a pre-trained convolutional neural network (CNN), which enabled the extraction of meaningful features from OCT images. Anomalous images included choroidal neovascularization (CNV), diabetic macular edema (DME), and the presence of drusen. Anomaly detectors were evaluated by commonly accepted performance metrics, including area under the receiver operating characteristic curve, F1 score, and accuracy. Results: A total of 25,315 high-resolution retinal OCT slabs were used for training. Test and validation sets consisted of 968 and 4000 slabs, respectively. The best-performing anomaly detector achieved an area under the receiver operating characteristic curve of 0.99. All frameworks were shown to achieve high performance and generalize well for the different retinal diseases. Heat maps were generated to visualize how well the frameworks localize anomalous areas of the image. Conclusions: This study shows that with the use of pre-trained feature extractors, the frameworks tested can generalize to the domain of retinal OCT scans and achieve high image-level ROC-AUC scores. The localization results of these frameworks are promising and successfully capture areas that indicate the presence of retinal pathology. Moreover, such frameworks have the potential to uncover new biomarkers that are difficult for the human eye to detect. Frameworks for anomaly detection and localization can potentially be integrated into clinical decision support and automatic screening systems that will aid ophthalmologists in patient diagnosis, follow-up, and treatment design. This work establishes a solid basis for further development of automated anomaly detection frameworks for clinical use.
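A common recipe behind such frameworks is to score anomalies by the distance of a pre-trained CNN embedding from a bank of features extracted from normal scans. A rough sketch of that recipe follows; the encoder and all data are placeholders, not the study's pipeline.

```python
# Feature-bank anomaly scoring sketch: larger distance to normal features = more anomalous.
import torch
import torchvision.models as models
from sklearn.neighbors import NearestNeighbors

encoder = models.resnet18(weights=None)   # stand-in for a pre-trained feature extractor
encoder.fc = torch.nn.Identity()
encoder.eval()

normal_scans = torch.rand(200, 3, 224, 224)   # placeholder normal OCT slabs
test_scans = torch.rand(8, 3, 224, 224)       # placeholder mix of normal/pathological slabs

with torch.no_grad():
    bank = encoder(normal_scans).numpy()
    queries = encoder(test_scans).numpy()

knn = NearestNeighbors(n_neighbors=5).fit(bank)
dists, _ = knn.kneighbors(queries)
anomaly_scores = dists.mean(axis=1)
print(anomaly_scores.round(3))
```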
Affiliation(s)
- Liran Tiosano: Department of Ophthalmology, Hadassah-Hebrew University Medical Center, Hadassah School of Medicine, Hebrew University, Jerusalem 9574409, Israel
- Ron Abutbul: School of Computer Science and Engineering, Hebrew University of Jerusalem, Jerusalem 9574409, Israel
- Rivkah Lender: Department of Ophthalmology, Hadassah-Hebrew University Medical Center, Hadassah School of Medicine, Hebrew University, Jerusalem 9574409, Israel
- Yahel Shwartz: Department of Ophthalmology, Hadassah-Hebrew University Medical Center, Hadassah School of Medicine, Hebrew University, Jerusalem 9574409, Israel
- Itay Chowers: Department of Ophthalmology, Hadassah-Hebrew University Medical Center, Hadassah School of Medicine, Hebrew University, Jerusalem 9574409, Israel
- Yedid Hoshen: Department of Ophthalmology, Hadassah-Hebrew University Medical Center, Hadassah School of Medicine, Hebrew University, Jerusalem 9574409, Israel
- Jaime Levy: Department of Ophthalmology, Hadassah-Hebrew University Medical Center, Hadassah School of Medicine, Hebrew University, Jerusalem 9574409, Israel
10. Seeböck P, Orlando JI, Michl M, Mai J, Schmidt-Erfurth U, Bogunović H. Anomaly guided segmentation: Introducing semantic context for lesion segmentation in retinal OCT using weak context supervision from anomaly detection. Med Image Anal 2024; 93:103104. PMID: 38350222; DOI: 10.1016/j.media.2024.103104.
Abstract
Automated lesion detection in retinal optical coherence tomography (OCT) scans has shown promise for several clinical applications, including diagnosis, monitoring and guidance of treatment decisions. However, segmentation models still struggle to achieve the desired results for some complex lesions or datasets that commonly occur in real-world practice, e.g. due to variability in lesion phenotypes, image quality or disease appearance. While several techniques have been proposed to improve them, one line of research that has not yet been investigated is the incorporation of additional semantic context through the application of anomaly detection models. In this study we experimentally show that incorporating weak anomaly labels into standard segmentation models consistently improves lesion segmentation results. This can be done relatively easily by detecting anomalies with a separate model and then adding these output masks as an extra class for training the segmentation model. This provides additional semantic context without requiring extra manual labels. We empirically validated this strategy using two in-house and two publicly available retinal OCT datasets for multiple lesion targets, demonstrating the potential of this generic anomaly guided segmentation approach to be used as an extra tool for improving lesion detection models.
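The strategy is simple to express in code: pixels flagged by the anomaly detector but not covered by any manual lesion label become one extra class in the training target. A minimal illustrative sketch (toy masks, not the authors' data):

```python
# Append the anomaly-detector output as an extra class in the segmentation target.
import numpy as np

lesion_labels = np.zeros((256, 256), dtype=np.int64)   # 0 = background, 1..K = lesions
lesion_labels[100:140, 80:160] = 1                      # e.g. an annotated fluid pocket

anomaly_mask = np.zeros((256, 256), dtype=bool)         # output of a separate anomaly model
anomaly_mask[90:200, 60:200] = True

K = lesion_labels.max()                                 # number of annotated classes
augmented = lesion_labels.copy()
# pixels flagged as anomalous but without any manual label become class K+1
augmented[np.logical_and(anomaly_mask, lesion_labels == 0)] = K + 1

print(np.unique(augmented))                             # [0 1 2] -> extra "anomaly" class
```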
Affiliation(s)
- Philipp Seeböck: Lab for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Austria; Computational Imaging Research Lab, Department of Biomedical Imaging and Image-Guided Therapy, Medical University of Vienna, Austria
- José Ignacio Orlando: Lab for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Austria; Yatiris Group at PLADEMA Institute, CONICET, Universidad Nacional del Centro de la Provincia de Buenos Aires, Gral. Pinto 399, Tandil, Buenos Aires, Argentina
- Martin Michl: Lab for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Austria
- Julia Mai: Lab for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Austria
- Ursula Schmidt-Erfurth: Lab for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Austria
- Hrvoje Bogunović: Lab for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Austria
11. Li J, Jiang P, An Q, Wang GG, Kong HF. Medical image identification methods: A review. Comput Biol Med 2024; 169:107777. PMID: 38104516; DOI: 10.1016/j.compbiomed.2023.107777.
Abstract
The identification of medical images is an essential task in computer-aided diagnosis, medical image retrieval and mining. Medical image data mainly include electronic health record data and gene information data, among others. Although intelligent imaging provides a good scheme for medical image analysis over traditional methods that rely on handcrafted features, it remains challenging due to the diversity of imaging modalities and clinical pathologies. Many medical image identification methods have been proposed for medical image analysis. The concepts pertinent to these methods, such as machine learning, deep learning, convolutional neural networks, transfer learning, and other image processing technologies for medical images, are analyzed and summarized in this paper. We reviewed these recent studies to provide a comprehensive overview of applying these methods in various medical image analysis tasks, such as object detection, image classification, image registration, segmentation, and other tasks. In particular, we emphasize the latest progress and contributions of different methods in medical image analysis, summarized by application scenario, including classification, segmentation, detection, and image registration. In addition, the applications of different methods are summarized by application area, such as pulmonary, brain, digital pathology, skin, lung, renal, breast, neuromyelitis, vertebral, and musculoskeletal imaging. A critical discussion of open challenges and directions for future research is finally provided. In particular, leading algorithms from computer vision, natural language processing, and autonomous driving are expected to be applied to medical image recognition in the future.
Affiliation(s)
- Juan Li: School of Information Engineering, Wuhan Business University, Wuhan 430056, China; School of Artificial Intelligence, Wuchang University of Technology, Wuhan 430223, China; Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, China
- Pan Jiang: School of Information Engineering, Wuhan Business University, Wuhan 430056, China
- Qing An: School of Artificial Intelligence, Wuchang University of Technology, Wuhan 430223, China
- Gai-Ge Wang: School of Computer Science and Technology, Ocean University of China, Qingdao 266100, China
- Hua-Feng Kong: School of Information Engineering, Wuhan Business University, Wuhan 430056, China
12. Araújo T, Aresta G, Schmidt-Erfurth U, Bogunović H. Few-shot out-of-distribution detection for automated screening in retinal OCT images using deep learning. Sci Rep 2023; 13:16231. PMID: 37758754; PMCID: PMC10533534; DOI: 10.1038/s41598-023-43018-9.
Abstract
Deep neural networks have been increasingly proposed for automated screening and diagnosis of retinal diseases from optical coherence tomography (OCT), but often provide high-confidence predictions on out-of-distribution (OOD) cases, compromising their clinical usage. With this in mind, we performed an in-depth comparative analysis of the state-of-the-art uncertainty estimation methods for OOD detection in retinal OCT imaging. The analysis was performed within the use-case of automated screening and staging of age-related macular degeneration (AMD), one of the leading causes of blindness worldwide, where we achieved a macro-average area under the curve (AUC) of 0.981 for AMD classification. We focus on a few-shot Outlier Exposure (OE) method and the detection of near-OOD cases that share pathomorphological characteristics with the inlier AMD classes. Scoring OOD cases based on the cosine distance in the feature space from the penultimate network layer proved to be a robust approach for OOD detection, especially in combination with the OE. Using the cosine distance and only 8 outliers exposed per class, we were able to improve the near-OOD detection performance of the OE with Reject Bucket method by approximately 10% compared to without OE, reaching an AUC of 0.937. The cosine distance served as a robust metric for OOD detection of both known and unknown classes and should thus be considered as an alternative to the reject bucket class probability in OE approaches, especially in the few-shot scenario. The inclusion of these methodologies did not come at the expense of classification performance, and can substantially improve the reliability and trustworthiness of the resulting deep learning-based diagnostic systems in the context of retinal OCT.
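The cosine-distance OOD score described above can be sketched as the distance of a test embedding from the nearest in-distribution class mean in the penultimate-layer feature space. The toy example below uses random features and is not the authors' implementation.

```python
# Cosine-distance OOD scoring sketch: score = 1 - max cosine similarity to class means.
import numpy as np

def cosine_ood_score(feat, class_means):
    feat = feat / np.linalg.norm(feat)
    means = class_means / np.linalg.norm(class_means, axis=1, keepdims=True)
    return 1.0 - np.max(means @ feat)   # small for inliers, large for OOD cases

rng = np.random.default_rng(0)
class_means = rng.normal(size=(4, 512))            # one mean per inlier AMD class
inlier = class_means[2] + 0.05 * rng.normal(size=512)
outlier = rng.normal(size=512)

print(cosine_ood_score(inlier, class_means), cosine_ood_score(outlier, class_means))
```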
Affiliation(s)
- Teresa Araújo: Christian Doppler Laboratory for Artificial Intelligence in Retina, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Guilherme Aresta: Christian Doppler Laboratory for Artificial Intelligence in Retina, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Ursula Schmidt-Erfurth: Christian Doppler Laboratory for Artificial Intelligence in Retina, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Hrvoje Bogunović: Christian Doppler Laboratory for Artificial Intelligence in Retina, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
13. Danese C, Kale AU, Aslam T, Lanzetta P, Barratt J, Chou YB, Eldem B, Eter N, Gale R, Korobelnik JF, Kozak I, Li X, Li X, Loewenstein A, Ruamviboonsuk P, Sakamoto T, Ting DS, van Wijngaarden P, Waldstein SM, Wong D, Wu L, Zapata MA, Zarranz-Ventura J. The impact of artificial intelligence on retinal disease management: Vision Academy retinal expert consensus. Curr Opin Ophthalmol 2023; 34:396-402. PMID: 37326216; PMCID: PMC10399953; DOI: 10.1097/icu.0000000000000980.
Abstract
PURPOSE OF REVIEW The aim of this review is to define the "state-of-the-art" in artificial intelligence (AI)-enabled devices that support the management of retinal conditions and to provide Vision Academy recommendations on the topic. RECENT FINDINGS Most of the AI models described in the literature have not been approved for disease management purposes by regulatory authorities. These new technologies are promising as they may be able to provide personalized treatments as well as a personalized risk score for various retinal diseases. However, several issues still need to be addressed, such as the lack of a common regulatory pathway and a lack of clarity regarding the applicability of AI-enabled medical devices in different populations. SUMMARY It is likely that current clinical practice will need to change following the application of AI-enabled medical devices. These devices are likely to have an impact on the management of retinal disease. However, a consensus needs to be reached to ensure they are safe and effective for the overall population.
Affiliation(s)
- Carla Danese: Department of Medicine – Ophthalmology, University of Udine, Udine, Italy; Department of Ophthalmology, AP-HP Hôpital Lariboisière, Université Paris Cité, Paris, France
- Aditya U. Kale: Academic Unit of Ophthalmology, Institute of Inflammation & Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
- Tariq Aslam: Division of Pharmacy and Optometry, Faculty of Biology, Medicine and Health, University of Manchester School of Health Sciences, Manchester, UK
- Paolo Lanzetta: Department of Medicine – Ophthalmology, University of Udine, Udine, Italy; Istituto Europeo di Microchirurgia Oculare, Udine, Italy
- Jane Barratt: International Federation on Ageing, Toronto, Canada
- Yu-Bai Chou: Department of Ophthalmology, Taipei Veterans General Hospital; School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Bora Eldem: Department of Ophthalmology, Hacettepe University, Ankara, Turkey
- Nicole Eter: Department of Ophthalmology, University of Münster Medical Center, Münster, Germany
- Richard Gale: Department of Ophthalmology, York Teaching Hospital NHS Foundation Trust, York, UK
- Jean-François Korobelnik: Service d'ophtalmologie, CHU Bordeaux; University of Bordeaux, INSERM, BPH, UMR1219, F-33000 Bordeaux, France
- Igor Kozak: Moorfields Eye Hospital Centre, Abu Dhabi, UAE
- Xiaorong Li: Tianjin Key Laboratory of Retinal Functions and Diseases, Tianjin Branch of National Clinical Research Center for Ocular Disease, Eye Institute and School of Optometry, Tianjin Medical University Eye Hospital, Tianjin, China
- Xiaoxin Li: Xiamen Eye Center, Xiamen University, Xiamen, China
- Anat Loewenstein: Division of Ophthalmology, Tel Aviv Sourasky Medical Center, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- Paisan Ruamviboonsuk: Department of Ophthalmology, College of Medicine, Rangsit University, Rajavithi Hospital, Bangkok, Thailand
- Taiji Sakamoto: Department of Ophthalmology, Kagoshima University, Kagoshima, Japan
- Daniel S.W. Ting: Singapore National Eye Center, Duke-NUS Medical School, Singapore
- Peter van Wijngaarden: Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Australia; Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia
- David Wong: Unity Health Toronto – St. Michael's Hospital, University of Toronto, Toronto, Canada
- Lihteh Wu: Macula, Vitreous and Retina Associates of Costa Rica, San José, Costa Rica
14. Yun C, Eom B, Park S, Kim C, Kim D, Jabeen F, Kim WH, Kim HJ, Kim J. A Study on the Effectiveness of Deep Learning-Based Anomaly Detection Methods for Breast Ultrasonography. Sensors (Basel) 2023; 23:2864. PMID: 36905074; PMCID: PMC10007509; DOI: 10.3390/s23052864.
Abstract
In the medical field, it is difficult to anticipate good performance when using deep learning due to the lack of large-scale training data and class imbalance. In particular, ultrasound, a key breast cancer diagnosis method, is difficult to interpret accurately, as the quality and interpretation of images can vary depending on the operator's experience and proficiency. Therefore, computer-aided diagnosis technology can facilitate diagnosis by visualizing abnormal information such as tumors and masses in ultrasound images. In this study, we implemented deep learning-based anomaly detection methods for breast ultrasound images and validated their effectiveness in detecting abnormal regions. Herein, we specifically compared the sliced-Wasserstein autoencoder with two representative unsupervised learning models, the autoencoder and the variational autoencoder. Anomalous-region detection performance was estimated using the normal-region labels. Our experimental results showed that the sliced-Wasserstein autoencoder model outperformed the other models in anomaly detection. However, anomaly detection using the reconstruction-based approach may not be effective because of the occurrence of numerous false-positive values. Reducing these false positives will be an important challenge in future studies.
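All three compared models share the same reconstruction-based scoring principle: train on normal images only, then use the per-pixel reconstruction error of a test image as the anomaly map. A toy sketch of that principle (stand-in model and data, not the study's networks):

```python
# Reconstruction-error anomaly map sketch with a stand-in autoencoder.
import torch

autoencoder = torch.nn.Sequential(              # placeholder for AE / VAE / SWAE
    torch.nn.Conv2d(1, 8, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(8, 1, 3, padding=1), torch.nn.Sigmoid(),
)

ultrasound = torch.rand(1, 1, 128, 128)         # placeholder breast ultrasound image
with torch.no_grad():
    reconstruction = autoencoder(ultrasound)

anomaly_map = (ultrasound - reconstruction).abs().squeeze()   # high error = anomalous
print(anomaly_map.mean().item(), anomaly_map.max().item())
```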
Affiliation(s)
- Changhee Yun: National Information Society Agency, Daegu 41068, Republic of Korea
- Bomi Eom: National Information Society Agency, Daegu 41068, Republic of Korea
- Sungjun Park: School of Computer Science and Engineering, Kyungpook National University, Daegu 41566, Republic of Korea
- Chanho Kim: School of Computer Science and Engineering, Kyungpook National University, Daegu 41566, Republic of Korea
- Dohwan Kim: Department of Artificial Intelligence, Kyungpook National University, Daegu 41566, Republic of Korea
- Farah Jabeen: School of Computer Science and Engineering, Kyungpook National University, Daegu 41566, Republic of Korea
- Won Hwa Kim: Department of Radiology, Kyungpook National University Chilgok Hospital, Kyungpook National University, Daegu 41404, Republic of Korea
- Hye Jung Kim: Department of Radiology, Kyungpook National University Chilgok Hospital, Kyungpook National University, Daegu 41404, Republic of Korea
- Jaeil Kim: School of Computer Science and Engineering, Kyungpook National University, Daegu 41566, Republic of Korea
15
|
Computational intelligence in eye disease diagnosis: a comparative study. Med Biol Eng Comput 2023; 61:593-615. [PMID: 36595155 DOI: 10.1007/s11517-022-02737-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2021] [Accepted: 12/09/2022] [Indexed: 01/04/2023]
Abstract
In recent years, eye disorders have become an important health issue among older people. Individuals with eye diseases are generally unaware of the gradual progression of symptoms, so routine eye examinations are required for early diagnosis. Eye disorders are usually identified by an ophthalmologist via slit-lamp examination, but slit-lamp interpretations can be inadequate because of differences in ophthalmologists' analytical skills, inconsistency in eye disorder analysis, and record-maintenance issues. Digital eye images and computational intelligence (CI)-based approaches are therefore preferred as assistive methods for eye disease diagnosis. This paper presents a comparative study of CI-based decision support models for eye disorder diagnosis. The CI-based decision support systems were grouped into anterior and retinal abnormality diagnostic systems, and the algorithms used for diagnosing these abnormalities are briefly reviewed. The paper also discusses the eye imaging modalities, pre-processing methods such as reflection removal and contrast enhancement, region-of-interest segmentation methods, and public eye image databases used for developing CI-based diagnosis systems. The reliability of the various CI-based systems for anterior eye and retinal disorder diagnosis was compared on the basis of precision, sensitivity, and specificity. The outcomes of the comparative analysis indicate that the CI-based anterior and retinal disease diagnosis systems attained high prediction accuracy. Hence, these systems can be used in clinics to reduce the burden on physicians, minimize fatigue-related misdetection, and support precise clinical decisions.
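Since the comparison rests on precision, sensitivity, and specificity, a short reminder of how these are derived from a binary confusion matrix may be useful; the labels and predictions below are made-up toy values.

```python
# Illustrative computation of precision, sensitivity, and specificity from
# binary predictions (toy data; not tied to any system in the survey).
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])  # 1 = diseased, 0 = healthy
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
precision   = tp / (tp + fp)
sensitivity = tp / (tp + fn)   # recall / true-positive rate
specificity = tn / (tn + fp)   # true-negative rate
print(f"precision={precision:.2f} sensitivity={sensitivity:.2f} specificity={specificity:.2f}")
```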
Collapse
|
16
|
Wang X, Tang F, Chen H, Cheung CY, Heng PA. Deep semi-supervised multiple instance learning with self-correction for DME classification from OCT images. Med Image Anal 2023; 83:102673. [PMID: 36403310 DOI: 10.1016/j.media.2022.102673] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2021] [Revised: 07/03/2022] [Accepted: 10/20/2022] [Indexed: 11/18/2022]
Abstract
Supervised deep learning has achieved prominent success in various diabetic macular edema (DME) recognition tasks from optical coherence tomography (OCT) volumetric images. A common problem in this field is the shortage of labeled data due to expensive fine-grained annotation, which substantially hinders accurate analysis by supervised learning. The morphological changes in the retina caused by DME may be distributed sparsely across the B-scans of an OCT volume, and OCT data are often coarsely labeled at the volume level. Hence, DME identification can be formulated as a multiple instance classification problem addressable by multiple instance learning (MIL) techniques. Nevertheless, none of the previous studies simultaneously utilized unlabeled data to improve classification accuracy, which is particularly important for high-quality analysis at minimal annotation cost. To this end, we present a novel deep semi-supervised multiple instance learning framework that explores the feasibility of leveraging a small amount of coarsely labeled data and a large amount of unlabeled data. Specifically, we design several modules to further improve performance according to the availability and granularity of labels. To warm up training, we propagate the bag labels to the corresponding instances as supervision and propose a self-correction strategy to handle label noise in the positive bags. This strategy is based on confidence-based pseudo-labeling with consistency regularization: the model uses its prediction to generate a pseudo-label for each weakly augmented input only if it is highly confident, and the pseudo-label is then used to supervise the same input in a strongly augmented version. This learning scheme is also applicable to unlabeled data. To enhance the discrimination capability of the model, we introduce a Student-Teacher architecture and impose consistency constraints between the two models. The proposed approach was evaluated on two large-scale DME OCT image datasets. Extensive results indicate that the proposed method improves DME classification by incorporating unlabeled data and significantly outperforms competing MIL methods, which confirms the feasibility of deep semi-supervised multiple instance learning at a low annotation cost.
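The self-correction strategy follows the general pattern of confidence-thresholded pseudo-labeling with consistency regularization (FixMatch-style). The sketch below illustrates that pattern only; the function, threshold, and augmentation handling are assumptions for illustration, not the authors' code.

```python
# Schematic sketch of confidence-thresholded pseudo-labeling with consistency
# regularization (FixMatch-style); placeholders, not the paper's implementation.
import torch
import torch.nn.functional as F

def self_correction_loss(model, weak_batch, strong_batch, threshold=0.95):
    """weak_batch / strong_batch: weakly and strongly augmented views of the same instances."""
    with torch.no_grad():
        probs = F.softmax(model(weak_batch), dim=1)
        conf, pseudo = probs.max(dim=1)        # per-instance confidence and pseudo-label
        mask = (conf >= threshold).float()     # keep only high-confidence predictions
    logits_strong = model(strong_batch)
    loss = F.cross_entropy(logits_strong, pseudo, reduction="none")
    return (loss * mask).mean()                # supervise the strong view with pseudo-labels
```

The same loss can be applied to unlabeled volumes, which is how unlabeled data enter training in this kind of scheme.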
Collapse
Affiliation(s)
- Xi Wang
- Zhejiang Lab, Hangzhou, China; Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China; Department of Radiation Oncology, Stanford University School of Medicine, Palo Alto, CA, USA
| | - Fangyao Tang
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
| | - Hao Chen
- Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong, China.
| | - Carol Y Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
| | - Pheng-Ann Heng
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China; Guangdong-Hong Kong-Macao Joint Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, China
| |
Collapse
|
17
|
Self-supervised patient-specific features learning for OCT image classification. Med Biol Eng Comput 2022; 60:2851-2863. [DOI: 10.1007/s11517-022-02627-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2021] [Accepted: 04/28/2022] [Indexed: 11/26/2022]
|
18
|
Binary dose level classification of tumour microvascular response to radiotherapy using artificial intelligence analysis of optical coherence tomography images. Sci Rep 2022; 12:13995. [PMID: 35978040 PMCID: PMC9385745 DOI: 10.1038/s41598-022-18393-4] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2022] [Accepted: 08/10/2022] [Indexed: 12/26/2022] Open
Abstract
The dominant consequence of irradiating biological systems is cellular damage, yet microvascular damage begins to assume an increasingly important role as the radiation dose levels increase. This is currently becoming more relevant in radiation medicine with its pivot towards a higher-dose-per-fraction/fewer-fractions treatment paradigm (e.g., stereotactic body radiotherapy (SBRT)). We have thus developed a 3D preclinical imaging platform based on speckle-variance optical coherence tomography (svOCT) for longitudinal monitoring of tumour microvascular radiation responses in vivo. Here we present an artificial intelligence (AI) approach to analyze the resultant microvascular data. In this initial study, we show that AI can successfully classify SBRT-relevant clinical radiation dose levels at multiple timepoints (t = 2–4 weeks) following irradiation (10 Gy and 30 Gy cohorts) based on induced changes in the detected microvascular networks. The practicality of the obtained results, the challenges associated with the modest number of animals, their successful mitigation via data-augmentation approaches, and the advantages of 3D deep learning methodologies are discussed. Extension of this encouraging initial study to longitudinal AI-based time-series analysis for treatment outcome prediction at finer dose-level gradations is envisioned.
Collapse
|
19
|
Ara RK, Matiolański A, Dziech A, Baran R, Domin P, Wieczorkiewicz A. Fast and Efficient Method for Optical Coherence Tomography Images Classification Using Deep Learning Approach. SENSORS (BASEL, SWITZERLAND) 2022; 22:4675. [PMID: 35808169 PMCID: PMC9269557 DOI: 10.3390/s22134675] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/11/2022] [Revised: 06/13/2022] [Accepted: 06/16/2022] [Indexed: 05/18/2023]
Abstract
The use of optical coherence tomography (OCT) in medical diagnostics is now common. The growing amount of data leads us to propose an automated support system for medical staff. The key part of the system is a classification algorithm developed with modern machine learning techniques. The main contribution is a new approach to classifying eye diseases using a convolutional neural network model. The research concerns the classification of patients on the basis of OCT B-scans into one of four categories: Diabetic Macular Edema (DME), Choroidal Neovascularization (CNV), Drusen, and Normal. These categories are available in a publicly available dataset of over 84,000 images utilized for the research. After testing several architectures, our 5-layer neural network gives promising results. We compared it with other available solutions, which confirms the high quality of our algorithm. Equally important for the application of the algorithm is the computational time, which is reduced by the limited size of the model. In addition, the article presents a detailed method of image data augmentation and its impact on the classification results. Results are also presented for several derived convolutional network architectures that were tested during the research. Improving processes in medical treatment is important; the algorithm cannot replace a doctor but can, for example, be a valuable tool for speeding up diagnosis during screening tests.
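A compact four-class OCT B-scan classifier of the kind described can be written in a few lines; the sketch below is illustrative only, and its layer widths and depth do not reproduce the authors' 5-layer architecture.

```python
# Illustrative compact CNN for 4-class OCT B-scan classification
# (DME / CNV / drusen / normal); layer sizes are placeholders.
import torch.nn as nn

class SmallOCTNet(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # global pooling keeps the model small for any input size
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))
```

Keeping the parameter count this low is what makes the short inference times emphasized in the abstract achievable.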
Collapse
Affiliation(s)
- Rouhollah Kian Ara
- Institute of Telecommunications, AGH University of Science and Technology, 30-059 Krakow, Poland; (R.K.A.); (A.D.)
| | - Andrzej Matiolański
- Institute of Telecommunications, AGH University of Science and Technology, 30-059 Krakow, Poland; (R.K.A.); (A.D.)
| | - Andrzej Dziech
- Institute of Telecommunications, AGH University of Science and Technology, 30-059 Krakow, Poland; (R.K.A.); (A.D.)
| | - Remigiusz Baran
- Faculty of Electrical Engineering, Automatic Control and Computer Science, Kielce University of Technology, 25-314 Kielce, Poland;
| | - Paweł Domin
- Consultronix S.A., 32-083 Balice, Poland; (P.D.); (A.W.)
| | | |
Collapse
|
20
|
Zehnder P, Feng J, Fuji RN, Sullivan R, Hu F. Multiscale generative model using regularized skip-connections and perceptual loss for anomaly detection in toxicologic histopathology. J Pathol Inform 2022; 13:100102. [PMID: 36268071 PMCID: PMC9576973 DOI: 10.1016/j.jpi.2022.100102] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2022] [Revised: 04/12/2022] [Accepted: 05/02/2022] [Indexed: 11/18/2022] Open
Abstract
Background Automated anomaly detection is an important tool that has been developed for many real-world applications, including security systems, industrial inspection, and medical diagnostics. Despite extensive use of machine learning for anomaly detection in these varied contexts, it is challenging to generalize and apply these methods to complex tasks such as toxicologic histopathology (TOXPATH) assessment (i.e., finding abnormalities in organ tissues). In this work, we introduce a deep learning-based anomaly detection method that greatly improves model generalizability to TOXPATH data. Methods We evaluated a one-class classification approach that leverages novel regularization and perceptual techniques within generative adversarial network (GAN) and autoencoder architectures to accurately detect anomalous histopathological findings of varying degrees of complexity. We also utilized multiscale contextual data and conducted a thorough ablation study to demonstrate the efficacy of our method. We trained our models on data from normal whole slide images (WSIs) of rat liver sections and validated on WSIs from three anomalous classes. Anomaly scores are collated into heatmaps to localize anomalies within WSIs and provide human-interpretable results. Results Our method achieves an area under the receiver operating characteristic curve of 0.953 on a real-world TOXPATH dataset. The model also performs well at detecting a wide variety of anomalies, demonstrating our method's ability to generalize to TOXPATH data. Conclusion Anomalies in both TOXPATH histological and non-histological datasets were accurately identified with our method, which was trained only on normal data.
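The heatmap step, collating per-patch anomaly scores back into slide coordinates, is straightforward to sketch. The snippet below assumes a regular grid of non-overlapping patches and is purely illustrative of the collation idea, not the authors' pipeline.

```python
# Illustrative collation of per-patch anomaly scores into a slide-level
# heatmap, assuming a regular grid of non-overlapping patches.
import numpy as np

def build_heatmap(scores, coords, wsi_shape, patch_size):
    """scores: one anomaly score per patch; coords: (row, col) top-left corner
    of each patch in slide pixels; returns a low-resolution heatmap."""
    rows = wsi_shape[0] // patch_size
    cols = wsi_shape[1] // patch_size
    heatmap = np.zeros((rows, cols), dtype=np.float32)
    for s, (r, c) in zip(scores, coords):
        heatmap[r // patch_size, c // patch_size] = s
    return heatmap  # upsample and overlay on the WSI for human review
```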
Collapse
Affiliation(s)
| | | | - Reina N. Fuji
- Department of Safety Assessment, Genentech Inc., 1 DNA Way, South San Francisco, CA 94080, USA
| | - Ruth Sullivan
- Department of Safety Assessment, Genentech Inc., 1 DNA Way, South San Francisco, CA 94080, USA
| | - Fangyao Hu
- Department of Safety Assessment, Genentech Inc., 1 DNA Way, South San Francisco, CA 94080, USA
| |
Collapse
|
21
|
Schürer-Waldheim S, Seeböck P, Bogunović H, Gerendas BS, Schmidt-Erfurth U. Robust Fovea Detection in Retinal OCT Imaging using Deep Learning. IEEE J Biomed Health Inform 2022; 26:3927-3937. [PMID: 35394920 DOI: 10.1109/jbhi.2022.3166068] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
The fovea centralis is an essential landmark in the retina where the photoreceptor layer is entirely composed of cones responsible for sharp, central vision. The localization of this anatomical landmark in optical coherence tomography (OCT) volumes is important for assessing visual function correlates and guiding treatment in macular disease. In this study, the "PRE U-net" is introduced as a novel approach for fully automated fovea centralis detection, addressing the localization as a pixel-wise regression task. 2D B-scans are sampled from each image volume and concatenated with spatial location information to train the deep network. A total of 5,586 OCT volumes from 1,541 eyes were used to train, validate and test the deep learning method. The test data comprise healthy subjects and patients affected by neovascular age-related macular degeneration (nAMD), diabetic macular edema (DME) and macular edema from retinal vein occlusion (RVO), covering the three major retinal diseases responsible for blindness. Our experiments demonstrate that the PRE U-net significantly outperforms state-of-the-art methods and improves the robustness of automated localization, which is of value for clinical practice.
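One common way to concatenate a B-scan with spatial location information is to append normalized coordinate channels (a CoordConv-style encoding). The sketch below shows that idea only; it is an assumption for illustration and not necessarily the exact encoding used by the PRE U-net.

```python
# Sketch of appending normalized coordinate channels to a B-scan before
# feeding it to a localization network (one possible spatial encoding).
import torch

def add_coord_channels(bscan):
    """bscan: tensor of shape (B, 1, H, W); returns (B, 3, H, W)."""
    b, _, h, w = bscan.shape
    ys = torch.linspace(-1, 1, h, device=bscan.device).view(1, 1, h, 1).expand(b, 1, h, w)
    xs = torch.linspace(-1, 1, w, device=bscan.device).view(1, 1, 1, w).expand(b, 1, h, w)
    return torch.cat([bscan, ys, xs], dim=1)
```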
Collapse
|
22
|
Chen M, Jin K, Yan Y, Liu X, Huang X, Gao Z, Wang Y, Wang S, Ye J. Automated diagnosis of age‐related macular degeneration using multi‐modal vertical plane feature fusion via deep learning. Med Phys 2022; 49:2324-2333. [PMID: 35172022 DOI: 10.1002/mp.15541] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2021] [Revised: 01/22/2022] [Accepted: 02/09/2022] [Indexed: 11/06/2022] Open
Affiliation(s)
- Menglu Chen
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University College of Medicine, Hangzhou, China
| | - Kai Jin
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University College of Medicine, Hangzhou, China
| | - Yan Yan
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University College of Medicine, Hangzhou, China
| | - Xindi Liu
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University College of Medicine, Hangzhou, China
| | - Xiaoling Huang
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University College of Medicine, Hangzhou, China
| | - Zhiyuan Gao
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University College of Medicine, Hangzhou, China
| | - Yao Wang
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University College of Medicine, Hangzhou, China
| | - Shuai Wang
- School of Mechanical, Electrical and Information Engineering, Shandong University, Weihai 264209, PR China
| | - Juan Ye
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University College of Medicine, Hangzhou, China
| |
Collapse
|
23
|
Anomaly localization in regular textures based on deep convolutional generative adversarial networks. APPL INTELL 2022. [DOI: 10.1007/s10489-021-02475-3] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
24
|
Barua PD, Chan WY, Dogan S, Baygin M, Tuncer T, Ciaccio EJ, Islam N, Cheong KH, Shahid ZS, Acharya UR. Multilevel Deep Feature Generation Framework for Automated Detection of Retinal Abnormalities Using OCT Images. ENTROPY (BASEL, SWITZERLAND) 2021; 23:1651. [PMID: 34945957 PMCID: PMC8700736 DOI: 10.3390/e23121651] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/11/2021] [Revised: 11/22/2021] [Accepted: 11/25/2021] [Indexed: 01/04/2023]
Abstract
Optical coherence tomography (OCT) images coupled with many learning techniques have been used to diagnose retinal disorders. This work aims to develop a novel framework for extracting deep features from 18 pre-trained convolutional neural networks (CNN) and to attain high performance using OCT images. We have developed a new framework for automated detection of retinal disorders using transfer learning. The model consists of three phases: deep fused and multilevel feature extraction using 18 pre-trained networks and tent maximal pooling; feature selection with ReliefF; and classification using an optimized classifier. The novelty of the proposed framework lies in generating features with widely used CNNs and selecting the most suitable features for classification. The features extracted by our intelligent feature extractor are fed to iterative ReliefF (IRF) to automatically select the best feature vector. A quadratic support vector machine (QSVM) is utilized as the classifier in this work. We developed our model using two public OCT image datasets, named database 1 (DB1) and database 2 (DB2). The proposed framework attains 97.40% and 100% classification accuracy on DB1 and DB2, respectively. These results illustrate the success of our model.
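The overall pipeline, frozen pre-trained CNN features followed by feature selection and a classical classifier, can be sketched with a single backbone. Note the simplifications: the paper fuses 18 CNNs, uses iterative ReliefF, and a quadratic SVM, whereas this sketch substitutes one ResNet-18 and univariate selection purely for brevity.

```python
# Sketch of a transfer-learning pipeline: frozen pretrained CNN features,
# feature selection, classical classifier. Simplified stand-in for the paper's
# 18-network fusion + iterative ReliefF + QSVM.
import torch
import torchvision.models as models
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()        # expose the 512-d penultimate features
backbone.eval()

def extract_features(batch):             # batch: (N, 3, 224, 224) preprocessed OCT images
    with torch.no_grad():
        return backbone(batch).numpy()

# With X_train / y_train / X_test built from extract_features:
# selector = SelectKBest(f_classif, k=256).fit(X_train, y_train)
# clf = SVC(kernel="poly", degree=2).fit(selector.transform(X_train), y_train)
# preds = clf.predict(selector.transform(X_test))
```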
Collapse
Affiliation(s)
- Prabal Datta Barua
- School of Management & Enterprise, University of Southern Queensland, Toowoomba, QLD 4350, Australia;
- Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW 2007, Australia
- Cogninet Brain Team, Cogninet Australia, Sydney, NSW 2010, Australia
| | - Wai Yee Chan
- University Malaya Research Imaging Centre, Department of Biomedical Imaging, Faculty of Medicine, University of Malaya, Kuala Lumpur 59100, Malaysia;
| | - Sengul Dogan
- Department of Digital Forensics Engineering, College of Technology, Firat University, Elazig 23002, Turkey; (S.D.); (T.T.)
| | - Mehmet Baygin
- Department of Computer Engineering, College of Engineering, Ardahan University, Ardahan 75000, Turkey;
| | - Turker Tuncer
- Department of Digital Forensics Engineering, College of Technology, Firat University, Elazig 23002, Turkey; (S.D.); (T.T.)
| | - Edward J. Ciaccio
- Department of Medicine, Columbia University Irving Medical Center, New York, NY 10032-3784, USA;
| | - Nazrul Islam
- Glaucoma Faculty, Bangladesh Eye Hospital & Institute, Dhaka 1206, Bangladesh;
| | - Kang Hao Cheong
- Science, Mathematics and Technology Cluster, Singapore University of Technology and Design, Singapore 487372, Singapore
| | - Zakia Sultana Shahid
- Department of Ophthalmology, Anwer Khan Modern Medical College, Dhaka 1205, Bangladesh;
| | - U. Rajendra Acharya
- Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, Singapore 599489, Singapore
- Department of Biomedical Engineering, School of Science and Technology, SUSS University, Singapore 129799, Singapore
- Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung 41354, Taiwan
| |
Collapse
|
25
|
Kim B, Kwon K, Oh C, Park H. Unsupervised anomaly detection in MR images using multicontrast information. Med Phys 2021; 48:7346-7359. [PMID: 34628653 DOI: 10.1002/mp.15269] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2021] [Revised: 09/14/2021] [Accepted: 09/14/2021] [Indexed: 11/07/2022] Open
Abstract
PURPOSE Anomaly detection in magnetic resonance imaging (MRI) aims to distinguish the relevant biomarkers of diseases from those of normal tissues. In this paper, an unsupervised algorithm is proposed for pixel-level anomaly detection in multicontrast MRI. METHODS A deep neural network is developed that uses only normal MR images as training data. The network has two stages: feature generation and density estimation. For feature generation, relevant features are extracted from multicontrast MR images by performing contrast translation and dimension reduction. For density estimation, the distributions of the extracted features are estimated using a Gaussian mixture model (GMM). The two processes are trained so that the estimated normative distributions represent large normal datasets well. In the test phase, the proposed method detects anomalies by measuring the log-likelihood that a test sample belongs to the estimated normative distributions. RESULTS The proposed method and its variants were applied to detect glioblastoma and ischemic stroke lesions. Comparison studies with six previous anomaly detection algorithms demonstrated that the proposed method achieved relevant improvements in quantitative and qualitative evaluations. Ablation studies, removing each module from the proposed framework, validated the effectiveness of each proposed module. CONCLUSION The proposed deep learning framework is an effective tool for detecting anomalies in multicontrast MRI. Such unsupervised approaches have great potential for detecting various lesions where annotated lesion data collection is limited.
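The density-estimation stage can be illustrated with scikit-learn: fit a Gaussian mixture to features of normal data, then flag low-likelihood test samples. The random features below are stand-ins for the learned multicontrast features; this is a minimal sketch, not the paper's network.

```python
# Minimal sketch of GMM-based normative density estimation and
# log-likelihood anomaly scoring (random stand-in features).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
normal_features = rng.normal(size=(1000, 16))       # features from normal training data
test_features = rng.normal(size=(50, 16))

gmm = GaussianMixture(n_components=5, covariance_type="full").fit(normal_features)
log_likelihood = gmm.score_samples(test_features)   # per-sample log-likelihood
threshold = np.percentile(gmm.score_samples(normal_features), 1)
is_anomaly = log_likelihood < threshold              # far below the normal range -> anomaly
```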
Collapse
Affiliation(s)
- Byungjai Kim
- Department of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Guseong-dong, Yuseong-gu, Daejeon, Republic of Korea
| | - Kinam Kwon
- Samsung Electronics, Maetan-dong, Yeongtong-gu, Suwon-si, Gyeonggi-do, Republic of Korea
| | - Changheun Oh
- Department of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Guseong-dong, Yuseong-gu, Daejeon, Republic of Korea
| | - Hyunwook Park
- Department of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Guseong-dong, Yuseong-gu, Daejeon, Republic of Korea
| |
Collapse
|
26
|
Wang J, Li W, Chen Y, Fang W, Kong W, He Y, Shi G. Weakly supervised anomaly segmentation in retinal OCT images using an adversarial learning approach. BIOMEDICAL OPTICS EXPRESS 2021; 12:4713-4729. [PMID: 34513220 PMCID: PMC8407839 DOI: 10.1364/boe.426803] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/02/2021] [Revised: 06/17/2021] [Accepted: 06/26/2021] [Indexed: 05/09/2023]
Abstract
Lesion detection is a critical component of disease diagnosis, but the manual segmentation of lesions in medical images is time-consuming and experience-demanding. These issues have recently been addressed through deep learning models. However, most existing algorithms were developed using supervised training, which requires time-intensive manual labeling and prevents the model from detecting previously unseen lesions. This study therefore proposes a weakly supervised learning network based on CycleGAN for lesion segmentation in full-width optical coherence tomography (OCT) images. The model was trained to reconstruct the underlying normal anatomic structures from abnormal input images, so that lesions can be detected by calculating the difference between the input and output images. A customized network architecture and a multi-scale similarity perceptual reconstruction loss were used to extend the CycleGAN model to transfer between objects exhibiting shape deformations. The proposed technique was validated using an open-source retinal OCT image dataset. Image-level anomaly detection and pixel-level lesion detection were assessed using the area under the curve (AUC) and the Dice similarity coefficient, yielding 96.94% and 0.8239, respectively, both higher than all comparative methods. The average test time required to generate a single full-width image was 0.039 s, which is shorter than that reported in recent studies. These results indicate that our model can accurately detect and segment retinopathy lesions in real time, without the need for supervised labeling. We hope this method will help accelerate the clinical diagnosis process and reduce the misdiagnosis rate.
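For reference, the two reported metrics, image-level AUC from anomaly scores and pixel-level Dice between a thresholded anomaly map and the lesion ground truth, can be computed as in this illustrative sketch (variable names are placeholders).

```python
# Illustrative evaluation: image-level AUC from anomaly scores and
# pixel-level Dice between a thresholded anomaly map and the lesion mask.
import numpy as np
from sklearn.metrics import roc_auc_score

def dice_coefficient(pred_mask, true_mask, eps=1e-8):
    pred, true = pred_mask.astype(bool), true_mask.astype(bool)
    inter = np.logical_and(pred, true).sum()
    return (2.0 * inter + eps) / (pred.sum() + true.sum() + eps)

# image_scores: one anomaly score per image; image_labels: 1 = abnormal
# auc  = roc_auc_score(image_labels, image_scores)
# dice = dice_coefficient(anomaly_map > tau, lesion_mask)   # tau: chosen threshold
```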
Collapse
Affiliation(s)
- Jing Wang
- Jiangsu Key Laboratory of Medical Optics, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Science, Suzhou 215163, China
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China
| | - Wanyue Li
- Jiangsu Key Laboratory of Medical Optics, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Science, Suzhou 215163, China
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China
| | - Yiwei Chen
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China
| | - Wangyi Fang
- Department of Ophthalmology and Vision Science, Eye and ENT Hospital, Fudan University, Shanghai 201112, China
- Key Laboratory of Myopia of State Health Ministry, and Key Laboratory of Visual Impairment and Restoration of Shanghai, Shanghai 200003, China
| | - Wen Kong
- Jiangsu Key Laboratory of Medical Optics, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Science, Suzhou 215163, China
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China
| | - Yi He
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China
| | - Guohua Shi
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China
- Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Science, Shanghai 200031, China
| |
Collapse
|
27
|
Han Y, Li W, Liu M, Wu Z, Zhang F, Liu X, Tao L, Li X, Guo X. Application of an Anomaly Detection Model to Screen for Ocular Diseases Using Color Retinal Fundus Images: Design and Evaluation Study. J Med Internet Res 2021; 23:e27822. [PMID: 34255681 PMCID: PMC8317033 DOI: 10.2196/27822] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2021] [Revised: 05/07/2021] [Accepted: 05/24/2021] [Indexed: 12/15/2022] Open
Abstract
BACKGROUND The supervised deep learning approach provides state-of-the-art performance in a variety of fundus image classification tasks, but it is not applicable for screening tasks with numerous or unknown disease types. The unsupervised anomaly detection (AD) approach, which needs only normal samples to develop a model, may be a workable and cost-saving method of screening for ocular diseases. OBJECTIVE This study aimed to develop and evaluate an AD model for detecting ocular diseases on the basis of color fundus images. METHODS A generative adversarial network-based AD method for detecting possible ocular diseases was developed and evaluated using 90,499 retinal fundus images derived from 4 large-scale real-world data sets. Four other independent external test sets were used for external testing and further analysis of the model's performance in detecting 6 common ocular diseases (diabetic retinopathy [DR], glaucoma, cataract, age-related macular degeneration, hypertensive retinopathy [HR], and myopia), DR of different severity levels, and 36 categories of abnormal fundus images. The area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity of the model's performance were calculated and presented. RESULTS Our model achieved an AUC of 0.896 with 82.69% sensitivity and 82.63% specificity in detecting abnormal fundus images in the internal test set, and it achieved an AUC of 0.900 with 83.25% sensitivity and 85.19% specificity in 1 external proprietary data set. In the detection of 6 common ocular diseases, the AUCs for DR, glaucoma, cataract, AMD, HR, and myopia were 0.891, 0.916, 0.912, 0.867, 0.895, and 0.961, respectively. Moreover, the AD model had an AUC of 0.868 for detecting any DR, 0.908 for detecting referable DR, and 0.926 for detecting vision-threatening DR. CONCLUSIONS The AD approach achieved high sensitivity and specificity in detecting ocular diseases on the basis of fundus images, which implies that this model might be an efficient and economical tool for optimizing current clinical pathways for ophthalmologists. Future studies are required to evaluate the practical applicability of the AD approach in ocular disease screening.
Collapse
Affiliation(s)
- Yong Han
- Department of Epidemiology and Health Statistics, School of Public Health, Capital Medical University, Beijing, China
- Beijing Municipal Key Laboratory of Clinical Epidemiology, Capital Medical University, Beijing, China
| | - Weiming Li
- Department of Epidemiology and Health Statistics, School of Public Health, Capital Medical University, Beijing, China
- Beijing Municipal Key Laboratory of Clinical Epidemiology, Capital Medical University, Beijing, China
| | - Mengmeng Liu
- Department of Epidemiology and Health Statistics, School of Public Health, Capital Medical University, Beijing, China
- Beijing Municipal Key Laboratory of Clinical Epidemiology, Capital Medical University, Beijing, China
| | - Zhiyuan Wu
- Department of Epidemiology and Health Statistics, School of Public Health, Capital Medical University, Beijing, China
- Beijing Municipal Key Laboratory of Clinical Epidemiology, Capital Medical University, Beijing, China
| | - Feng Zhang
- Department of Epidemiology and Health Statistics, School of Public Health, Capital Medical University, Beijing, China
- Beijing Municipal Key Laboratory of Clinical Epidemiology, Capital Medical University, Beijing, China
| | - Xiangtong Liu
- Department of Epidemiology and Health Statistics, School of Public Health, Capital Medical University, Beijing, China
- Beijing Municipal Key Laboratory of Clinical Epidemiology, Capital Medical University, Beijing, China
| | - Lixin Tao
- Department of Epidemiology and Health Statistics, School of Public Health, Capital Medical University, Beijing, China
- Beijing Municipal Key Laboratory of Clinical Epidemiology, Capital Medical University, Beijing, China
| | - Xia Li
- Department of Mathematics and Statistics, La Trobe University, Melbourne, Australia
| | - Xiuhua Guo
- Department of Epidemiology and Health Statistics, School of Public Health, Capital Medical University, Beijing, China
- Beijing Municipal Key Laboratory of Clinical Epidemiology, Capital Medical University, Beijing, China
| |
Collapse
|
28
|
Schmidt-Erfurth U, Reiter GS, Riedl S, Seeböck P, Vogl WD, Blodi BA, Domalpally A, Fawzi A, Jia Y, Sarraf D, Bogunović H. AI-based monitoring of retinal fluid in disease activity and under therapy. Prog Retin Eye Res 2021; 86:100972. [PMID: 34166808 DOI: 10.1016/j.preteyeres.2021.100972] [Citation(s) in RCA: 47] [Impact Index Per Article: 11.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2021] [Revised: 05/11/2021] [Accepted: 05/13/2021] [Indexed: 12/21/2022]
Abstract
Retinal fluid as the major biomarker in exudative macular disease is accurately visualized by high-resolution three-dimensional optical coherence tomography (OCT), which is used world-wide as a diagnostic gold standard largely replacing clinical examination. Artificial intelligence (AI) with its capability to objectively identify, localize and quantify fluid introduces fully automated tools into OCT imaging for personalized disease management. Deep learning performance has already proven superior to human experts, including physicians and certified readers, in terms of accuracy and speed. Reproducible measurement of retinal fluid relies on precise AI-based segmentation methods that assign a label to each OCT voxel denoting its fluid type such as intraretinal fluid (IRF) and subretinal fluid (SRF) or pigment epithelial detachment (PED) and its location within the central 1-, 3- and 6-mm macular area. Such reliable analysis is most relevant to reflect differences in pathophysiological mechanisms and impacts on retinal function, and the dynamics of fluid resolution during therapy with different regimens and substances. Yet, an in-depth understanding of the mode of action of supervised and unsupervised learning, the functionality of a convolutional neural net (CNN) and various network architectures is needed. Greater insight regarding adequate methods for performance, validation assessment, and device- and scanning-pattern-dependent variations is necessary to empower ophthalmologists to become qualified AI users. Fluid/function correlation can lead to a better definition of valid fluid variables relevant for optimal outcomes on an individual and a population level. AI-based fluid analysis opens the way for precision medicine in real-world practice of the leading retinal diseases of modern times.
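Turning a voxel-wise fluid segmentation into per-zone volumes amounts to counting labelled voxels inside concentric macular circles and multiplying by the voxel volume. The sketch below illustrates that arithmetic under simplifying assumptions (a label map with known spacing, centred on the fovea); it is not any vendor's or research group's implementation.

```python
# Simplified sketch of per-zone fluid quantification: count labelled voxels
# inside the central 1/3/6 mm circles and multiply by the voxel volume.
# Assumes a fluid label map with known voxel spacing and a known fovea centre.
import numpy as np

def fluid_volume_per_zone(label_map, spacing_mm, fluid_label, fovea_yx, diameters_mm=(1, 3, 6)):
    """label_map: (Z, Y, X) integer labels; spacing_mm: (dz, dy, dx) in mm."""
    zdim, ydim, xdim = label_map.shape
    yy, xx = np.meshgrid(np.arange(ydim), np.arange(xdim), indexing="ij")
    dist = np.hypot((yy - fovea_yx[0]) * spacing_mm[1], (xx - fovea_yx[1]) * spacing_mm[2])
    voxel_vol = float(np.prod(spacing_mm))                   # mm^3 per voxel
    fluid = (label_map == fluid_label)
    volumes = {}
    for d in diameters_mm:
        in_zone = dist <= d / 2.0                            # circular zone in the en-face plane
        volumes[d] = (fluid & in_zone[None, :, :]).sum() * voxel_vol
    return volumes  # mm^3 of the given fluid type (e.g., IRF, SRF, PED) per central zone
```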
Collapse
Affiliation(s)
- Ursula Schmidt-Erfurth
- Department of Ophthalmology Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria.
| | - Gregor S Reiter
- Department of Ophthalmology Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria.
| | - Sophie Riedl
- Department of Ophthalmology Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria.
| | - Philipp Seeböck
- Department of Ophthalmology Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria.
| | - Wolf-Dieter Vogl
- Department of Ophthalmology Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria.
| | - Barbara A Blodi
- Fundus Photograph Reading Center, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, WI, USA.
| | - Amitha Domalpally
- Fundus Photograph Reading Center, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, WI, USA.
| | - Amani Fawzi
- Feinberg School of Medicine, Northwestern University, Chicago, IL, USA.
| | - Yali Jia
- Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA.
| | - David Sarraf
- Stein Eye Institute, University of California Los Angeles, Los Angeles, CA, USA.
| | - Hrvoje Bogunović
- Department of Ophthalmology Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria.
| |
Collapse
|
29
|
Abstract
Ophthalmology has been at the forefront of medical specialties adopting artificial intelligence. This is primarily due to the "image-centric" nature of the field. Thanks to the abundance of patients' OCT scans, analysis of OCT imaging has greatly benefited from artificial intelligence to expand patient screening and facilitate clinical decision-making. In this review, we define the concepts of artificial intelligence, machine learning, and deep learning and how different artificial intelligence algorithms have been applied in OCT image analysis for disease screening, diagnosis, management, and prognosis. Finally, we address some of the challenges and limitations that might affect the incorporation of artificial intelligence in ophthalmology. These limitations mainly revolve around the quality and accuracy of datasets used in the algorithms and their generalizability, false negatives, and the cultural challenges around the adoption of the technology.
Collapse
Affiliation(s)
- Mohammad Dahrouj
- Department of Ophthalmology, Retina Service, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA
| | - John B Miller
- Department of Ophthalmology, Harvard Retinal Imaging Lab, Massachusetts Eye and Ear, Boston, MA, USA
| |
Collapse
|
30
|
A P S, Kar S, S G, Gopi VP, Palanisamy P. OctNET: A Lightweight CNN for Retinal Disease Classification from Optical Coherence Tomography Images. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 200:105877. [PMID: 33339630 DOI: 10.1016/j.cmpb.2020.105877] [Citation(s) in RCA: 28] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/10/2019] [Accepted: 11/22/2020] [Indexed: 05/15/2023]
Abstract
BACKGROUND AND OBJECTIVE Retinal diseases have become a major health problem in recent years. Their early detection and ensuing treatment are essential to prevent visual damage, as the number of people affected by diabetes is expected to grow exponentially. Retinal diseases progress slowly, without any discernible symptoms. Therefore, routine eye examinations are required for early diagnosis. Optical Coherence Tomography (OCT) is a diagnostic tool capable of resolving and quantifying changes in disease-affected retinal layers with high resolution. This paper proposes a deep neural network-based classifier for the computer-aided classification of Diabetic Macular Edema (DME), drusen, and Choroidal NeoVascularization (CNV) versus normal OCT images of the retina. METHODS In the proposed method, we demonstrate the feasibility of classifying and detecting severe retinal pathologies from OCT images using a deep convolutional neural network having six convolutional blocks. The classification results are explained using a gradient-based class activation mapping algorithm. RESULTS Training and validation of the model are performed on a public dataset of 83,484 images with expert-level disease grading of CNV, DME, and drusen, in addition to normal retinal images. We achieved a precision of 99.69%, recall of 99.69%, and accuracy of 99.69% with only three misclassifications out of 968 test cases. CONCLUSION In the proposed work, downsampling and weight sharing were introduced to improve the training efficiency and were found to reduce the trainable parameters significantly. Class activation mapping was also performed, and the resulting maps were consistent with the actual OCT images of the retina. The proposed network uses only 6.9% of the learnable parameters of the existing ResNet-50 model and yet outperforms it in classification. The proposed work can potentially be employed in real-time applications due to its reduced complexity and fewer learnable parameters compared with other models.
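Gradient-based class activation mapping weights the last convolutional feature maps by the average gradient of the target class score, producing a coarse localization map. The Grad-CAM-style sketch below illustrates the general technique; it is an assumption for illustration, not the authors' exact explanation code.

```python
# Minimal Grad-CAM-style sketch: weight the last conv feature maps by the
# average gradient of the target class score and sum into a coarse map.
import torch
import torch.nn.functional as F

def grad_cam(model, feature_layer, image, target_class):
    feats, grads = [], []
    h1 = feature_layer.register_forward_hook(lambda m, i, o: feats.append(o))
    h2 = feature_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    score = model(image)[0, target_class]     # image: (1, C, H, W)
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()
    weights = grads[0].mean(dim=(2, 3), keepdim=True)            # global-average-pooled gradients
    cam = F.relu((weights * feats[0]).sum(dim=1, keepdim=True))  # weighted sum of feature maps
    return F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
```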
Collapse
Affiliation(s)
- Sunija A P
- Department of Electronics and Communication Engineering, National Institute of Technology Tiruchirappalli, Tamilnadu-620015, India.
| | - Saikat Kar
- Department of Electronics and Communication Engineering, National Institute of Technology Tiruchirappalli, Tamilnadu-620015, India.
| | - Gayathri S
- Department of Electronics and Communication Engineering, National Institute of Technology Tiruchirappalli, Tamilnadu-620015, India.
| | - Varun P Gopi
- Department of Electronics and Communication Engineering, National Institute of Technology Tiruchirappalli, Tamilnadu-620015, India.
| | - P Palanisamy
- Department of Electronics and Communication Engineering, National Institute of Technology Tiruchirappalli, Tamilnadu-620015, India.
| |
Collapse
|
31
|
Gong D, Kras A, Miller JB. Application of Deep Learning for Diagnosing, Classifying, and Treating Age-Related Macular Degeneration. Semin Ophthalmol 2021; 36:198-204. [PMID: 33617390 DOI: 10.1080/08820538.2021.1889617] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 01/20/2023]
Abstract
Age-related macular degeneration (AMD) affects nearly 200 million people and is the third leading cause of irreversible vision loss worldwide. Deep learning, a branch of artificial intelligence that can learn image recognition based on pre-existing datasets, creates an opportunity for more accurate and efficient diagnosis, classification, and treatment of AMD on both individual and population levels. Current algorithms based on fundus photography and optical coherence tomography imaging have already achieved diagnostic accuracy levels comparable to human graders. This accuracy can be further increased when deep learning algorithms are simultaneously applied to multiple diagnostic imaging modalities. Combined with advances in telemedicine and imaging technology, deep learning can enable large populations of patients to be screened than would otherwise be possible and allow ophthalmologists to focus on seeing those patients who are in need of treatment, thus reducing the number of patients with significant visual impairment from AMD.
Collapse
Affiliation(s)
- Dan Gong
- Department of Ophthalmology, Retina Service, Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, MA,USA
| | - Ashley Kras
- Harvard Retinal Imaging Lab, Massachusetts Eye and Ear Infirmary, Boston, MA
| | - John B Miller
- Department of Ophthalmology, Retina Service, Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, MA,USA.,Harvard Retinal Imaging Lab, Massachusetts Eye and Ear Infirmary, Boston, MA
| |
Collapse
|
32
|
Reiter GS, Told R, Schranz M, Baumann L, Mylonas G, Sacu S, Pollreisz A, Schmidt-Erfurth U. Subretinal Drusenoid Deposits and Photoreceptor Loss Detecting Global and Local Progression of Geographic Atrophy by SD-OCT Imaging. Invest Ophthalmol Vis Sci 2021; 61:11. [PMID: 32503052 PMCID: PMC7415285 DOI: 10.1167/iovs.61.6.11] [Citation(s) in RCA: 49] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022] Open
Abstract
Purpose To investigate the impact of subretinal drusenoid deposits (SDD) and photoreceptor integrity on global and local geographic atrophy (GA) progression. Methods Eighty-three eyes of 49 patients, aged 50 years and older with GA secondary to age-related macular degeneration (AMD), were prospectively included in this study. Participants underwent spectral-domain optical coherence tomography (SD-OCT) and fundus autofluorescence (FAF) imaging at baseline and after 12 months. The junctional zone and presence of SDD were delineated on SD-OCT and FAF images. Linear mixed models were calculated to investigate the association between GA progression and the junctional zone area, baseline GA area, age, global and local presence of SDD and unifocal versus multifocal lesions. Results The area of the junctional zone was significantly associated with the progression of GA, both globally and locally (all P < 0.001). SDD were associated with faster growth in the overall model (P = 0.039), as well as in the superior-temporal (P = 0.005) and temporal (P = 0.002) sections. Faster progression was associated with GA baseline area (P < 0.001). No difference was found between unifocal and multifocal lesions (P > 0.05). Age did not have an effect on GA progression (P > 0.05). Conclusions Photoreceptor integrity and SDD are useful for predicting global and local growth in GA. Investigation of the junctional zone is merited because this area is destined to become atrophic. Photoreceptor loss visible on SD-OCT might lead to new structural outcome measurements visible before irreversible loss of retinal pigment epithelium occurs.
Collapse
|
33
|
Wang X, Tang F, Chen H, Luo L, Tang Z, Ran AR, Cheung CY, Heng PA. UD-MIL: Uncertainty-Driven Deep Multiple Instance Learning for OCT Image Classification. IEEE J Biomed Health Inform 2020; 24:3431-3442. [DOI: 10.1109/jbhi.2020.2983730] [Citation(s) in RCA: 36] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 08/30/2023]
|
34
|
A critical literature survey and prospects on tampering and anomaly detection in image data. Appl Soft Comput 2020. [DOI: 10.1016/j.asoc.2020.106727] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
|
35
|
Pianykh OS, Langs G, Dewey M, Enzmann DR, Herold CJ, Schoenberg SO, Brink JA. Continuous Learning AI in Radiology: Implementation Principles and Early Applications. Radiology 2020; 297:6-14. [DOI: 10.1148/radiol.2020200038] [Citation(s) in RCA: 50] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
|
36
|
Sun Y, Zhang H, Yao X. Automatic diagnosis of macular diseases from OCT volume based on its two-dimensional feature map and convolutional neural network with attention mechanism. JOURNAL OF BIOMEDICAL OPTICS 2020; 25:JBO-200085R. [PMID: 32940026 PMCID: PMC7493033 DOI: 10.1117/1.jbo.25.9.096004] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/28/2020] [Accepted: 09/03/2020] [Indexed: 05/29/2023]
Abstract
SIGNIFICANCE Automatic and accurate classification of three-dimensional (3-D) retinal optical coherence tomography (OCT) images is essential for assisting ophthalmologists in the diagnosis and grading of macular diseases. Therefore, more effective OCT volume classification for automatic recognition of macular diseases is needed. AIM For OCT volumes in which only volume-level labels are known, OCT volume classifiers based on their global features and deep learning are designed, validated, and compared with other methods. APPROACH We present a general framework to classify OCT volumes for automatically recognizing macular diseases. The architecture of the framework consists of three modules: B-scan feature extractor, two-dimensional (2-D) feature map generation, and volume-level classifier. Our architecture can address OCT volume classification using two 2-D image machine-learning classification algorithms. Specifically, a convolutional neural network (CNN) model is trained and used as a B-scan feature extractor to construct a 2-D feature map of an OCT volume, and volume-level classifiers such as a support vector machine and a CNN with/without attention mechanism for 2-D feature maps are described. RESULTS Our proposed methods are validated on the publicly available Duke dataset, which consists of 269 intermediate age-related macular degeneration (AMD) volumes and 115 normal volumes. Fivefold cross-validation was performed, and average accuracy, sensitivity, and specificity of 98.17%, 99.26%, and 95.65%, respectively, were achieved. The experiments show that our methods outperform the state-of-the-art methods. Our methods are also validated on our private clinical OCT volume dataset, consisting of 448 AMD volumes and 462 diabetic macular edema volumes. CONCLUSIONS We present a general framework for OCT volume classification based on a 2-D feature map and a CNN with attention mechanism and describe its implementation schemes. Our proposed methods can classify OCT volumes automatically and effectively with high accuracy, and they are a potential practical tool for screening ophthalmic diseases from OCT volumes.
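The volume-level stage can be pictured as follows: the 2-D feature map is a stack of per-B-scan feature vectors, and an attention layer pools them into a single volume descriptor before classification. The sketch below illustrates this pattern with placeholder dimensions; it is not the paper's exact architecture.

```python
# Sketch of a volume-level classifier over a 2-D feature map built from
# per-B-scan CNN feature vectors, with simple attention pooling.
import torch
import torch.nn as nn

class AttentionVolumeClassifier(nn.Module):
    def __init__(self, feat_dim=512, n_classes=2):
        super().__init__()
        self.attention = nn.Sequential(nn.Linear(feat_dim, 128), nn.Tanh(), nn.Linear(128, 1))
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, feature_map):
        # feature_map: (n_bscans, feat_dim), one row per B-scan from the extractor
        weights = torch.softmax(self.attention(feature_map), dim=0)   # (n_bscans, 1)
        volume_feature = (weights * feature_map).sum(dim=0)           # attention-pooled descriptor
        return self.classifier(volume_feature)
```

The attention weights also indicate which B-scans drove the volume-level decision, which is useful when only volume-level labels are available.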
Collapse
Affiliation(s)
- Yankui Sun
- Tsinghua University, Department of Computer Science and Technology, Beijing, China
| | - Haoran Zhang
- Tsinghua University, Department of Computer Science and Technology, Beijing, China
| | - Xianlin Yao
- Tsinghua University, Department of Computer Science and Technology, Beijing, China
| |
Collapse
|
37
|
Application of Automated Quantification of Fluid Volumes to Anti–VEGF Therapy of Neovascular Age-Related Macular Degeneration. Ophthalmology 2020; 127:1211-1219. [DOI: 10.1016/j.ophtha.2020.03.010] [Citation(s) in RCA: 57] [Impact Index Per Article: 11.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2019] [Revised: 03/02/2020] [Accepted: 03/04/2020] [Indexed: 01/18/2023] Open
|
38
|
Abstract
PURPOSE OF REVIEW As artificial intelligence continues to develop new applications in ophthalmic image recognition, we provide here an introduction for ophthalmologists and a primer on the mechanisms of deep learning systems. RECENT FINDINGS Deep learning has lent itself to the automated interpretation of various retinal imaging modalities, including fundus photography and optical coherence tomography. Convolutional neural networks (CNN) represent the primary class of deep neural networks applied to these image analyses. These have been configured to aid in the detection of diabetes retinopathy, AMD, retinal detachment, glaucoma, and ROP, among other ocular disorders. Predictive models for retinal disease prognosis and treatment are also being validated. SUMMARY Deep learning systems have begun to demonstrate a reliable level of diagnostic accuracy equal or better to human graders for narrow image recognition tasks. However, challenges regarding the use of deep learning systems in ophthalmology remain. These include trust of unsupervised learning systems and the limited ability to recognize broad ranges of disorders.
Collapse
|
39
|
Schmidt-Erfurth U, Bogunovic H, Grechenig C, Bui P, Fabianska M, Waldstein S, Reiter GS. Role of Deep Learning-Quantified Hyperreflective Foci for the Prediction of Geographic Atrophy Progression. Am J Ophthalmol 2020; 216:257-270. [PMID: 32277942 DOI: 10.1016/j.ajo.2020.03.042] [Citation(s) in RCA: 66] [Impact Index Per Article: 13.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2019] [Revised: 03/26/2020] [Accepted: 03/31/2020] [Indexed: 12/22/2022]
Abstract
PURPOSE To quantitatively measure hyperreflective foci (HRF) during the progression of geographic atrophy (GA) secondary to age-related macular degeneration (AMD) using deep learning (DL) and investigate the association with local and global growth of GA. METHODS Eyes with GA were prospectively included. Spectral-domain optical coherence tomography (SDOCT) and fundus autofluorescence images were acquired every 6 months. A 500-μm-wide junctional zone adjacent to the GA border was delineated and HRF were quantified using a validated DL algorithm. HRF concentrations in progressing and nonprogressing areas, as well as correlations between HRF quantifications and global and local GA progression, were assessed. RESULTS A total of 491 SDOCT volumes from 87 eyes of 54 patients were assessed with a median follow-up of 28 months. Two-thirds of HRF were localized within a millimeter adjacent to the GA border. HRF concentration was positively correlated with GA progression in unifocal and multifocal GA (all P < .001) and de novo GA development (P = .037). Local progression speed correlated positively with local increase of HRF (P value range <.001-.004). Global progression speed, however, did not correlate with HRF concentrations (P > .05). Changes in HRF over time did not have an impact on the growth in GA (P > .05). CONCLUSION Advanced artificial intelligence (AI) methods in high-resolution retinal imaging allows to identify, localize, and quantify biomarkers such as HRF. Increased HRF concentrations in the junctional zone and future macular atrophy may represent progressive migration and loss of retinal pigment epithelium. AI-based biomarker monitoring may pave the way into the era of individualized risk assessment and objective decision-making processes. NOTE: Publication of this article is sponsored by the American Ophthalmological Society.
Collapse
|
40
|
Waldstein SM, Seeböck P, Donner R, Sadeghipour A, Bogunović H, Osborne A, Schmidt-Erfurth U. Unbiased identification of novel subclinical imaging biomarkers using unsupervised deep learning. Sci Rep 2020; 10:12954. [PMID: 32737379 PMCID: PMC7395081 DOI: 10.1038/s41598-020-69814-1] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2019] [Accepted: 06/27/2020] [Indexed: 01/25/2023] Open
Abstract
Artificial intelligence has recently made a disruptive impact in medical imaging by successfully automatizing expert-level diagnostic tasks. However, replicating human-made decisions may inherently be biased by the fallible and dogmatic nature of human experts, in addition to requiring prohibitive amounts of training data. In this paper, we introduce an unsupervised deep learning architecture particularly designed for OCT representations for unbiased, purely data-driven biomarker discovery. We developed artificial intelligence technology that provides biomarker candidates without any restricting input or domain knowledge beyond raw images. Analyzing 54,900 retinal optical coherence tomography (OCT) volume scans of 1094 patients with age-related macular degeneration, we generated a vocabulary of 20 local and global markers capturing characteristic retinal patterns. The resulting markers were validated by linking them with clinical outcomes (visual acuity, lesion activity and retinal morphology) using correlation and machine learning regression. The newly identified features correlated well with specific biomarkers traditionally used in clinical practice (r up to 0.73), and outperformed them in correlating with visual acuity ([Formula: see text] compared to [Formula: see text] for conventional markers), despite representing an enormous compression of OCT imaging data (67 million voxels to 20 features). In addition, our method also discovered hitherto unknown, clinically relevant biomarker candidates. The presented deep learning approach identified known as well as novel medical imaging biomarkers without any prior domain knowledge. Similar approaches may be worthwhile across other medical imaging fields.
Collapse
Affiliation(s)
- Sebastian M Waldstein
- Christian Doppler Laboratory for Ophthalmic Image Analysis, Vienna Reading Center, Department of Ophthalmology and Optometry, Medical University Vienna, Waehringer Guertel 18-20, 1090, Vienna, Austria
| | - Philipp Seeböck
- Christian Doppler Laboratory for Ophthalmic Image Analysis, Vienna Reading Center, Department of Ophthalmology and Optometry, Medical University Vienna, Waehringer Guertel 18-20, 1090, Vienna, Austria
| | - René Donner
- Christian Doppler Laboratory for Ophthalmic Image Analysis, Vienna Reading Center, Department of Ophthalmology and Optometry, Medical University Vienna, Waehringer Guertel 18-20, 1090, Vienna, Austria
| | - Amir Sadeghipour
- Christian Doppler Laboratory for Ophthalmic Image Analysis, Vienna Reading Center, Department of Ophthalmology and Optometry, Medical University Vienna, Waehringer Guertel 18-20, 1090, Vienna, Austria
| | - Hrvoje Bogunović
- Christian Doppler Laboratory for Ophthalmic Image Analysis, Vienna Reading Center, Department of Ophthalmology and Optometry, Medical University Vienna, Waehringer Guertel 18-20, 1090, Vienna, Austria
| | - Aaron Osborne
- Genentech, Inc, 1 DNA Way, South San Francisco, CA, USA
| | - Ursula Schmidt-Erfurth
- Christian Doppler Laboratory for Ophthalmic Image Analysis, Vienna Reading Center, Department of Ophthalmology and Optometry, Medical University Vienna, Waehringer Guertel 18-20, 1090, Vienna, Austria.
| |
|
41
|
Rajan SP. Recognition of Cardiovascular Diseases through Retinal Images Using Optic Cup to Optic Disc Ratio. PATTERN RECOGNITION AND IMAGE ANALYSIS 2020. [DOI: 10.1134/s105466182002011x] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
|
42
|
|
43
|
Lauw HW, Wong RCW, Ntoulas A, Lim EP, Ng SK, Pan SJ. Semi-supervised Learning Approach to Generate Neuroimaging Modalities with Adversarial Training. ADVANCES IN KNOWLEDGE DISCOVERY AND DATA MINING 2020. [PMCID: PMC7206232 DOI: 10.1007/978-3-030-47436-2_31] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
Abstract
Magnetic Resonance Imaging (MRI) of the brain can come in the form of different modalities, such as T1-weighted and Fluid Attenuated Inversion Recovery (FLAIR), which have been used to investigate a wide range of neurological disorders. Current state-of-the-art models for brain tissue segmentation and disease classification require multiple modalities for training and inference. However, acquiring all of these modalities is expensive, time-consuming, and inconvenient, and the required modalities are often not available. As a result, these datasets contain large amounts of unpaired data, where examples do not contain all modalities. A smaller fraction of examples contain all modalities (paired data), and each modality is high-dimensional compared to the number of data points. In this work, we develop a semi-supervised learning method that addresses these issues when translating between two neuroimaging modalities. Our proposed model, the Semi-Supervised Adversarial CycleGAN (SSA-CGAN), uses an adversarial loss to learn from unpaired data points, a cycle loss to enforce consistent reconstructions of the mappings, and another adversarial loss to take advantage of paired data points. Our experiments demonstrate that our proposed framework produces improved reconstruction error and reduced variance for the pairwise translation of multiple modalities and is more robust to thermal noise when compared to existing methods.
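One way to read the three losses in the abstract is as a single generator objective combining an unpaired adversarial term, a cycle-consistency term, and a paired adversarial term. The sketch below is a hedged interpretation, not the published SSA-CGAN: the network interfaces (G_ab, G_ba, D_b, D_pair), loss weights, and the omission of the discriminator updates are all assumptions.

```python
# Hedged sketch of a combined generator loss in the spirit of the abstract.
# G_ab/G_ba map modality A<->B, D_b scores realism of B images,
# D_pair scores (a, b) pairs; discriminator training on real data is omitted.
import torch
import torch.nn.functional as F

def generator_losses(G_ab, G_ba, D_b, D_pair, a_unpaired, a_paired,
                     lam_cycle=10.0, lam_pair=1.0):
    # 1) unpaired adversarial loss: translated A should fool the modality-B discriminator
    fake_b = G_ab(a_unpaired)
    logits_fake_b = D_b(fake_b)
    adv_unpaired = F.binary_cross_entropy_with_logits(
        logits_fake_b, torch.ones_like(logits_fake_b))

    # 2) cycle-consistency loss: A -> B -> A should reconstruct the original input
    cycle = F.l1_loss(G_ba(fake_b), a_unpaired)

    # 3) paired adversarial loss: (a, G_ab(a)) should resemble a real (a, b) pair
    logits_fake_pair = D_pair(a_paired, G_ab(a_paired))
    adv_paired = F.binary_cross_entropy_with_logits(
        logits_fake_pair, torch.ones_like(logits_fake_pair))

    return adv_unpaired + lam_cycle * cycle + lam_pair * adv_paired
```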
Affiliation(s)
- Hady W. Lauw
- School of Information Systems, Singapore Management University, Singapore, Singapore
| | - Raymond Chi-Wing Wong
- Department of Computer Science and Engineering, Hong Kong University of Science and Technology, Hong Kong, Hong Kong
| | - Alexandros Ntoulas
- Department of Informatics and Telecommunications, National and Kapodistrian University of Athens, Athens, Greece
| | - Ee-Peng Lim
- School of Information Systems, Singapore Management University, Singapore, Singapore
| | - See-Kiong Ng
- Institute of Data Science, National University of Singapore, Singapore, Singapore
| | - Sinno Jialin Pan
- School of Computer Science and Engineering, Nanyang Technological University, Singapore, Singapore
| |
|
44
|
Seeböck P, Orlando JI, Schlegl T, Waldstein SM, Bogunović H, Klimscha S, Langs G, Schmidt-Erfurth U. Exploiting Epistemic Uncertainty of Anatomy Segmentation for Anomaly Detection in Retinal OCT. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:87-98. [PMID: 31170065 DOI: 10.1109/tmi.2019.2919951] [Citation(s) in RCA: 48] [Impact Index Per Article: 9.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/26/2023]
Abstract
Diagnosis and treatment guidance are aided by detecting relevant biomarkers in medical images. Although supervised deep learning can perform accurate segmentation of pathological areas, it is limited by requiring a priori definitions of these regions, large-scale annotations, and a representative patient cohort in the training set. In contrast, anomaly detection is not limited to specific definitions of pathologies and allows training on healthy samples without annotation. Anomalous regions can then serve as candidates for biomarker discovery. Knowledge about normal anatomical structure provides implicit information for detecting anomalies. We propose to exploit this property using Bayesian deep learning, based on the assumption that epistemic uncertainties will correlate with anatomical deviations from a normal training set. A Bayesian U-Net is trained on a well-defined healthy environment using weak labels of healthy anatomy produced by existing methods. At test time, we capture epistemic uncertainty estimates of our model using Monte Carlo dropout. A novel post-processing technique is then applied to exploit these estimates and transfer their layered appearance to smooth, blob-shaped segmentations of the anomalies. We experimentally validated this approach on retinal optical coherence tomography (OCT) images, using weak labels of retinal layers. Our method achieved a Dice index of 0.789 on an independent anomaly test set of age-related macular degeneration (AMD) cases. The resulting segmentations allowed very high accuracy in separating healthy cases from diseased cases with late wet AMD, dry geographic atrophy (GA), diabetic macular edema (DME), and retinal vein occlusion (RVO). Finally, we qualitatively observed that our approach can also detect other deviations in normal scans, such as cut-edge artifacts.
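The Monte Carlo dropout step referenced above is a standard technique and can be sketched generically. The code below is not the authors' Bayesian U-Net; the toy network, input shape, and sample count are illustrative assumptions, and the uncertainty map is simply the per-pixel variance of repeated stochastic predictions.

```python
# Generic MC-dropout sketch: keep dropout active at inference, run several
# stochastic forward passes, and use prediction variance as epistemic uncertainty.
import torch
import torch.nn as nn

def enable_mc_dropout(model: nn.Module) -> None:
    """Put only the dropout layers into training mode so they keep sampling."""
    model.eval()
    for m in model.modules():
        if isinstance(m, (nn.Dropout, nn.Dropout2d, nn.Dropout3d)):
            m.train()

@torch.no_grad()
def epistemic_uncertainty(model: nn.Module, bscan: torch.Tensor, n_samples: int = 20):
    enable_mc_dropout(model)
    probs = torch.stack([torch.softmax(model(bscan), dim=1) for _ in range(n_samples)])
    mean_probs = probs.mean(dim=0)             # averaged segmentation probabilities
    uncertainty = probs.var(dim=0).sum(dim=1)  # per-pixel variance summed over classes
    return mean_probs, uncertainty

# usage sketch with a toy dropout-equipped segmentation network (4 classes)
model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.Dropout2d(0.5), nn.Conv2d(8, 4, 1))
seg, unc = epistemic_uncertainty(model, torch.randn(1, 1, 64, 64))
print(seg.shape, unc.shape)  # torch.Size([1, 4, 64, 64]) torch.Size([1, 64, 64])
```

High-uncertainty regions produced this way are the kind of candidate anomalies that the abstract's post-processing step then converts into blob-shaped segmentations.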
|
45
|
Dysli M, Rückert R, Munk MR. Differentiation of Underlying Pathologies of Macular Edema Using Spectral Domain Optical Coherence Tomography (SD-OCT). Ocul Immunol Inflamm 2019; 27:474-483. [PMID: 31184556 DOI: 10.1080/09273948.2019.1603313] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/17/2023]
Abstract
Purpose: To describe the morphological characteristics of macular edema (ME) of different origins using spectral domain optical coherence tomography (SD-OCT). Methods: This article summarizes and highlights key morphologic findings, based on published articles, describing the characteristic presentations of ME of different origins on SD-OCT. The following pathologies were included: uveitic macular edema, pseudophakic cystoid macular edema (PCME), diabetic macular edema (DME), macular edema secondary to central or branch retinal vein occlusion (CRVO/BRVO), microcystic macular edema (MME), ME associated with epiretinal membrane (ERM), and retinitis pigmentosa (RP). Conclusions: Macular edema of different origins shows characteristic patterns that are often indicative of the underlying cause and pathology. Thus, trained algorithms may in the future be able to automatically differentiate underlying causes and support clinical diagnosis. Knowledge of these different appearances supports the clinical diagnosis and can lead to improved and more targeted treatment of ME.
Affiliation(s)
- Muriel Dysli
- Department of Ophthalmology, Inselspital, Bern University Hospital and University of Bern, Bern, Switzerland
- BPRC, Bern Photographic Reading Center, University of Bern, Bern, Switzerland
| | - René Rückert
- Department of Ophthalmology, eye.gnos consulting, Bern, Switzerland
| | - Marion R Munk
- Department of Ophthalmology, Inselspital, Bern University Hospital and University of Bern, Bern, Switzerland
- BPRC, Bern Photographic Reading Center, University of Bern, Bern, Switzerland
- Feinberg School of Medicine, Northwestern University, Chicago, IL, USA
| |
|
46
|
Tobore I, Li J, Yuhang L, Al-Handarish Y, Kandwal A, Nie Z, Wang L. Deep Learning Intervention for Health Care Challenges: Some Biomedical Domain Considerations. JMIR Mhealth Uhealth 2019; 7:e11966. [PMID: 31376272 PMCID: PMC6696854 DOI: 10.2196/11966] [Citation(s) in RCA: 61] [Impact Index Per Article: 10.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2018] [Revised: 04/14/2019] [Accepted: 06/12/2019] [Indexed: 01/10/2023] Open
Abstract
The use of deep learning (DL) for the analysis and diagnosis of biomedical and health care problems has received unprecedented attention in the last decade. The technique has achieved a number of successes in unearthing meaningful features and accomplishing tasks that were hitherto difficult for other methods and for human experts. Currently, biological and medical devices, treatments, and applications are capable of generating large volumes of data in the form of images, sounds, text, graphs, and signals, creating the concept of big data. The innovation of DL is a developing trend in the wake of big data for data representation and analysis. DL is a type of machine learning algorithm that cascades many hidden layers of similar function into a network and has the capability to extract meaning from medical big data. The current drive toward personalized health care delivery will be made possible by mobile health (mHealth), and DL can provide the analysis for the deluge of data generated by mHealth apps. This paper reviews the fundamentals of DL methods and presents a general view of trends in DL by capturing publications from PubMed and the Institute of Electrical and Electronics Engineers database that implement different variants of DL. We highlight the implementation of DL in health care, which we categorize into biological systems, electronic health records, medical images, and physiological signals. In addition, we discuss some inherent challenges of DL affecting the biomedical and health domains, as well as prospective research directions that focus on improving health management by promoting the application of physiological signals and modern internet technology.
Affiliation(s)
- Igbe Tobore
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Graduate University, Chinese Academy of Sciences, Beijing, China
| | - Jingzhen Li
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Liu Yuhang
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Yousef Al-Handarish
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Abhishek Kandwal
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Zedong Nie
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Lei Wang
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| |
|
47
|
Holzinger A, Langs G, Denk H, Zatloukal K, Müller H. Causability and explainability of artificial intelligence in medicine. WILEY INTERDISCIPLINARY REVIEWS. DATA MINING AND KNOWLEDGE DISCOVERY 2019; 9:e1312. [PMID: 32089788 PMCID: PMC7017860 DOI: 10.1002/widm.1312] [Citation(s) in RCA: 377] [Impact Index Per Article: 62.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/19/2018] [Revised: 01/26/2019] [Accepted: 02/24/2019] [Indexed: 05/02/2023]
Abstract
Explainable artificial intelligence (AI) is attracting much interest in medicine. Technically, the problem of explainability is as old as AI itself, and classic AI represented comprehensible, retraceable approaches. However, their weakness lay in dealing with the uncertainties of the real world. Through the introduction of probabilistic learning, applications became increasingly successful, but increasingly opaque. Explainable AI deals with the implementation of transparency and traceability of statistical black-box machine learning methods, particularly deep learning (DL). We argue that there is a need to go beyond explainable AI. To reach a level of explainable medicine we need causability. In the same way that usability encompasses measurements for the quality of use, causability encompasses measurements for the quality of explanations. In this article, we provide some necessary definitions to discriminate between explainability and causability, as well as a use case of DL interpretation and of human explanation in histopathology. The main contribution of this article is the notion of causability, which is differentiated from explainability in that causability is a property of a person, while explainability is a property of a system. This article is categorized under: Fundamental Concepts of Data and Knowledge > Human Centricity and User Interaction.
Affiliation(s)
- Andreas Holzinger
- Institute for Medical Informatics, Statistics and Documentation, Medical University Graz, Graz, Austria
| | - Georg Langs
- Department of Biomedical Imaging and Image-guided Therapy, Computational Imaging Research Lab, Medical University of Vienna, Vienna, Austria
| | - Helmut Denk
- Institute of Pathology, Medical University Graz, Graz, Austria
| | - Kurt Zatloukal
- Institute of Pathology, Medical University Graz, Graz, Austria
| | - Heimo Müller
- Institute for Medical Informatics, Statistics and Documentation, Medical University Graz, Graz, Austria
- Institute of Pathology, Medical University Graz, Graz, Austria
| |
|
48
|
Self-supervised iterative refinement learning for macular OCT volumetric data classification. Comput Biol Med 2019; 111:103327. [PMID: 31302456 DOI: 10.1016/j.compbiomed.2019.103327] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2019] [Revised: 06/12/2019] [Accepted: 06/12/2019] [Indexed: 11/23/2022]
Abstract
We present self-supervised iterative refinement learning (SIRL) as a pipeline to improve a class of macular optical coherence tomography (OCT) volumetric image classification algorithms. In these algorithms, two-dimensional (2D) image classification is first applied to each B-scan in an OCT volume, and the B-scan-level results are then combined to obtain the classification of the volume. Specifically, SIRL consists of repeated training-sieving-relabeling steps. In the initialization stage, each 2D image is assigned the label of the volume it belongs to, yielding an initial label set. In the training stage, the network is trained using the current label set. In the sieving and relabeling stage, the label of each 2D image is renewed based on the classification result of the trained network, producing a new label set. Experiments were conducted on a clinical dataset and a public dataset, on which the performance of models trained with a standard scheme and with our proposed method was compared under five-fold cross-validation. Our proposed method achieves sensitivity, specificity, and accuracy of 89.74%, 94.87%, and 93.18%, respectively, on the clinical dataset. On the public dataset, the corresponding three metrics are 98.22%, 90.43%, and 95.88%. The results demonstrate the effectiveness of our proposed method as an approach to improving B-scan-classification-based macular OCT volumetric image classification algorithms.
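The train-sieve-relabel loop can be illustrated with a deliberately simplified stand-in. In the sketch below, a linear classifier on synthetic B-scan feature vectors replaces the 2D network, the sieving step is simplified to relabeling within diseased volumes, and all dataset sizes and the aggregation rule are assumptions rather than the published configuration.

```python
# Simplified SIRL-style loop: B-scans inherit their volume's label, then are
# iteratively relabeled by the model's own predictions; volumes are classified
# by aggregating B-scan predictions. All data here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_volumes, bscans_per_vol, n_feat = 40, 16, 32
volume_labels = rng.integers(0, 2, n_volumes)

# synthetic B-scan features: in diseased volumes only some B-scans carry signal
X = rng.normal(size=(n_volumes, bscans_per_vol, n_feat))
for v in np.flatnonzero(volume_labels):
    affected = rng.choice(bscans_per_vol, size=bscans_per_vol // 2, replace=False)
    X[v, affected, 0] += 3.0

bscan_labels = np.repeat(volume_labels, bscans_per_vol)  # initialization stage
X_flat = X.reshape(-1, n_feat)

for _ in range(3):  # training / sieving-and-relabeling rounds
    clf = LogisticRegression(max_iter=1000).fit(X_flat, bscan_labels)
    pred = clf.predict(X_flat)
    # B-scans from healthy volumes stay negative; in diseased volumes keep the model's call
    bscan_labels = np.where(np.repeat(volume_labels, bscans_per_vol) == 0, 0, pred)

# volume-level decision: positive if any of its B-scans is predicted positive
vol_pred = clf.predict(X_flat).reshape(n_volumes, bscans_per_vol).max(axis=1)
print("volume-level accuracy:", (vol_pred == volume_labels).mean())
```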
|