1
Mikhail D, Milad D, Antaki F, Hammamji K, Qian CX, Rezende FA, Duval R. The Role of Artificial Intelligence in Epiretinal Membrane Care: A Scoping Review. Ophthalmol Sci 2025; 5:100689. [PMID: 40182981] [PMCID: PMC11964620] [DOI: 10.1016/j.xops.2024.100689]
Abstract
Topic: In ophthalmology, artificial intelligence (AI) has shown potential in analyzing ophthalmic imaging across diverse diseases, often matching ophthalmologists' performance. However, the range of machine learning models for epiretinal membrane (ERM) management, which differ in methodology, application, and performance, remains largely unsynthesized.
Clinical Relevance: Epiretinal membrane management relies on clinical evaluation and imaging, with surgical intervention considered in cases of significant impairment. AI analysis of ophthalmic images and clinical features could enhance ERM detection, characterization, and prognostication, potentially improving clinical decision-making. This scoping review evaluates the methodologies, applications, and reported performance of AI models in ERM diagnosis, characterization, and prognostication.
Methods: A comprehensive literature search was conducted across 5 electronic databases (Ovid MEDLINE, EMBASE, Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, and Web of Science Core Collection) from inception to November 14, 2024. Studies pertaining to AI algorithms in the context of ERM were included. The primary outcomes measured were the reported design, application in ERM management, and performance of each AI model.
Results: Three hundred ninety articles were retrieved, with 33 studies meeting the inclusion criteria. Thirty studies (91%) reported their training and validation methods. Altogether, 61 distinct AI models were included. OCT scans and fundus photographs were used in 26 (79%) and 7 (21%) papers, respectively. Supervised learning alone was used in 32 studies (97%), and combined supervised and unsupervised learning in 1 (3%). Twenty-seven studies (82%) developed or adapted AI models using images, whereas 5 (15%) used both images and clinical features, and 1 (3%) used preoperative and postoperative clinical features without ophthalmic images. Study objectives were categorized into 3 stages of ERM care. Twenty-three studies (70%) implemented AI for diagnosis (stage 1), 1 (3%) identified ERM characteristics (stage 2), and 6 (18%) predicted vision impairment after diagnosis or postoperative vision outcomes (stage 3). No articles studied treatment planning. Three studies (9%) used AI in both stages 1 and 2. Of the 16 studies comparing AI performance to human graders (i.e., retinal specialists, general ophthalmologists, and trainees), 10 (63%) reported equivalent or higher performance.
Conclusion: AI-driven assessment of ophthalmic images and clinical features demonstrated high performance in detecting ERM, identifying its morphological properties, and predicting visual outcomes following ERM surgery. Future research might consider validating algorithms for clinical application in personalized treatment planning, ideally to identify the patients who would benefit most from surgery.
Financial Disclosures: The author(s) have no proprietary or commercial interest in any materials discussed in this article.
Affiliation(s)
- David Mikhail
- Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
- Department of Ophthalmology, University of Montreal, Montreal, Canada
- Daniel Milad
- Department of Ophthalmology, University of Montreal, Montreal, Canada
- Department of Ophthalmology, Hôpital Maisonneuve-Rosemont, Montreal, Canada
- Department of Ophthalmology, Centre Hospitalier de l'Université de Montréal (CHUM), Montreal, Canada
- Fares Antaki
- Department of Ophthalmology, University of Montreal, Montreal, Canada
- Department of Ophthalmology, Centre Hospitalier de l'Université de Montréal (CHUM), Montreal, Canada
- Karim Hammamji
- Department of Ophthalmology, University of Montreal, Montreal, Canada
- Department of Ophthalmology, Centre Hospitalier de l'Université de Montréal (CHUM), Montreal, Canada
- Cynthia X. Qian
- Department of Ophthalmology, University of Montreal, Montreal, Canada
- Department of Ophthalmology, Hôpital Maisonneuve-Rosemont, Montreal, Canada
- Flavio A. Rezende
- Department of Ophthalmology, University of Montreal, Montreal, Canada
- Department of Ophthalmology, Hôpital Maisonneuve-Rosemont, Montreal, Canada
- Renaud Duval
- Department of Ophthalmology, University of Montreal, Montreal, Canada
- Department of Ophthalmology, Hôpital Maisonneuve-Rosemont, Montreal, Canada
2
Tan Z, Feng J, Lu W, Yin Y, Yang G, Zhou J. Multi-task global optimization-based method for vascular landmark detection. Comput Med Imaging Graph 2024; 114:102364. [PMID: 38432060] [DOI: 10.1016/j.compmedimag.2024.102364]
Abstract
Vascular landmark detection plays an important role in medical analysis and clinical treatment. However, due to the complex topology and similar local appearance around landmarks, popular heatmap-regression-based methods often suffer from the landmark confusion problem. Vascular landmarks are connected by vascular segments and exhibit particular spatial correlations that can be exploited to improve performance. In this paper, we propose a multi-task global optimization-based framework for accurate and automatic vascular landmark detection. A multi-task deep learning network accomplishes landmark heatmap regression, vascular semantic segmentation, and orientation field regression simultaneously. The two auxiliary objectives are highly correlated with the heatmap regression task and help the network incorporate structural prior knowledge. During inference, instead of applying a max-voting strategy, we propose a global optimization-based post-processing method for the final landmark decision. The spatial relationships between neighboring landmarks are used explicitly to resolve landmark confusion. We evaluated our method on a cerebral MRA dataset with 564 volumes, a cerebral CTA dataset with 510 volumes, and an aortic CTA dataset with 50 volumes. The experiments demonstrate that the proposed method is effective for vascular landmark localization and achieves state-of-the-art performance.
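The global optimization step described in the abstract can be read as a discrete assignment problem: keep a few candidate peaks per heatmap and pick the combination that best trades off heatmap response against distance priors between neighboring landmarks. A minimal illustrative sketch, not the authors' implementation (the function name, the brute-force search, and the absolute-deviation penalty are all assumptions):

```python
import itertools
import numpy as np

def select_landmarks(candidates, scores, prior_dists, lam=1.0):
    """Globally choose one candidate per landmark.

    candidates: list (one per landmark) of (k_i, 3) arrays of voxel coords.
    scores: list of (k_i,) arrays of heatmap responses (higher is better).
    prior_dists: dict mapping landmark index pairs (i, j) to the expected
        inter-landmark distance, e.g. estimated from a training set.
    lam: weight of the spatial-prior penalty.
    Returns the tuple of chosen candidate indices.
    """
    best, best_cost = None, np.inf
    # Brute-force search over all candidate combinations (feasible when
    # only a few peaks per heatmap survive non-maximum suppression).
    for combo in itertools.product(*(range(len(c)) for c in candidates)):
        # Unary term: reward strong heatmap responses.
        cost = -sum(scores[i][k] for i, k in enumerate(combo))
        # Pairwise term: penalize deviation from the expected distances.
        for (i, j), d in prior_dists.items():
            dist = np.linalg.norm(candidates[i][combo[i]] - candidates[j][combo[j]])
            cost += lam * abs(dist - d)
        if cost < best_cost:
            best, best_cost = combo, cost
    return best
```

With a distance prior of 5 voxels between two landmarks, a slightly weaker peak at the correct location beats a stronger peak that violates the prior, which is exactly the confusion case the post-processing targets.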
Affiliation(s)
- Zimeng Tan
- Department of Automation, Tsinghua University, Beijing, China
- Jianjiang Feng
- Department of Automation, Tsinghua University, Beijing, China
- Wangsheng Lu
- UnionStrong (Beijing) Technology Co., Ltd., Beijing, China
- Yin Yin
- UnionStrong (Beijing) Technology Co., Ltd., Beijing, China
- Jie Zhou
- Department of Automation, Tsinghua University, Beijing, China
3
Ayhan MS, Neubauer J, Uzel MM, Gelisken F, Berens P. Interpretable detection of epiretinal membrane from optical coherence tomography with deep neural networks. Sci Rep 2024; 14:8484. [PMID: 38605115] [PMCID: PMC11009346] [DOI: 10.1038/s41598-024-57798-1]
Abstract
This study aimed to automatically detect epiretinal membranes (ERM) in various OCT scans of the central and paracentral macula region and to classify them by size using deep neural networks (DNNs). To this end, 11,061 OCT images were included and graded according to the presence of an ERM and its size (small: 100-1000 µm; large: >1000 µm). The data set was divided into training, validation, and test sets (75%, 10%, and 15% of the data, respectively). An ensemble of DNNs was trained, and saliency maps were generated using Guided Backpropagation. OCT scans were also projected onto a one-dimensional value using t-SNE analysis. The DNNs' receiver operating characteristics on the test set showed high performance for no-ERM, small-ERM, and large-ERM cases (AUC: 0.99, 0.92, and 0.99, respectively; 3-way accuracy: 89%), with small ERMs being the most difficult to detect. The t-SNE analysis sorted cases by size and, in particular, revealed increased classification uncertainty at the transitions between groups. Saliency maps reliably highlighted ERMs, regardless of the presence of other OCT features (i.e., retinal thickening, intraretinal pseudocysts, epiretinal proliferation) and entities such as ERM retinoschisis, macular pseudohole, and lamellar macular hole. This study therefore showed that DNNs can reliably detect and grade ERMs by size not only in the fovea but also in the paracentral region, including hard-to-detect small ERMs. In addition, the generated saliency maps can be used to highlight small ERMs that might otherwise be missed. The proposed model could be used in screening programs or decision-support systems in the future.
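The ensemble decision rule can be illustrated with a toy sketch: average the per-model softmax outputs over the three classes and take the winning class, with the mean probability serving as a rough confidence signal. The class names follow the abstract; everything else (function names, the averaging rule) is an assumed illustration, not the authors' code:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_predict(logits_per_model,
                     classes=("no-ERM", "small-ERM", "large-ERM")):
    """Average the softmax outputs of the ensemble members and return the
    winning class plus its mean probability as a crude confidence value."""
    probs = np.mean([softmax(l) for l in logits_per_model], axis=0)
    idx = int(np.argmax(probs))
    return classes[idx], float(probs[idx])
```

A low mean probability near a class boundary corresponds to the increased uncertainty the t-SNE analysis revealed at the transitions between size groups.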
Affiliation(s)
- Murat Seçkin Ayhan
- Institute for Ophthalmic Research, University of Tübingen, Elfriede Aulhorn Str. 7, 72076 Tübingen, Germany
- Jonas Neubauer
- University Eye Clinic, University of Tübingen, Tübingen, Germany
- Mehmet Murat Uzel
- University Eye Clinic, University of Tübingen, Tübingen, Germany
- Department of Ophthalmology, Balıkesir University School of Medicine, Balıkesir, Turkey
- Faik Gelisken
- University Eye Clinic, University of Tübingen, Tübingen, Germany
- Philipp Berens
- Institute for Ophthalmic Research, University of Tübingen, Elfriede Aulhorn Str. 7, 72076 Tübingen, Germany
- Tübingen AI Center, Tübingen, Germany
4
Trout RM, Viehland C, Li JD, Raynor W, Dhalla AH, Vajzovic L, Kuo AN, Toth CA, Izatt JA. Methods for real-time feature-guided image fusion of intrasurgical volumetric optical coherence tomography with digital microscopy. Biomed Opt Express 2023; 14:3308-3326. [PMID: 37497493] [PMCID: PMC10368056] [DOI: 10.1364/boe.488975]
Abstract
4D-microscope-integrated optical coherence tomography (4D-MIOCT) is an emergent multimodal imaging technology in which live volumetric OCT (4D-OCT) is implemented in tandem with standard stereo color microscopy. 4D-OCT provides ophthalmic surgeons with many useful visual cues not available in standard microscopy; however, it is challenging for the surgeon to effectively integrate cues from simultaneous-but-separate imaging in real time. In this work, we demonstrate progress toward solving this challenge via the fusion of data from each modality, guided by segmented 3D features. In this way, a more readily interpretable visualization that combines and registers important cues from both modalities is presented to the surgeon.
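The feature-guided fusion idea can be caricatured in 2D: once the OCT volume is registered to the microscope view and a feature of interest is segmented, an OCT-derived cue is blended into the color frame only where that feature projects, leaving the rest of the surgical scene untouched. A minimal 2D sketch under stated assumptions (the function name, the alpha-blend rule, and the idea of a precomputed depth color map are illustrative; the paper works with live volumetric data):

```python
import numpy as np

def fuse_views(color_img, oct_cue, mask, alpha=0.6):
    """Overlay an OCT-derived cue onto the microscope frame, but only
    where a segmented feature projects into the view (mask == True).

    color_img: (H, W, 3) float array in [0, 1], the microscope frame.
    oct_cue:   (H, W, 3) float array in [0, 1], e.g. a depth color map
               rendered from the registered OCT volume.
    mask:      (H, W) boolean array from the feature segmentation.
    """
    out = color_img.copy()
    # Alpha-blend only inside the segmented region.
    out[mask] = (1.0 - alpha) * color_img[mask] + alpha * oct_cue[mask]
    return out
```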
Affiliation(s)
- Robert M Trout
- Department of Biomedical Engineering, Duke University, 101 Science Drive, Durham, NC 27708, USA
- Christian Viehland
- Department of Biomedical Engineering, Duke University, 101 Science Drive, Durham, NC 27708, USA
- Jianwei D Li
- Department of Biomedical Engineering, Duke University, 101 Science Drive, Durham, NC 27708, USA
- William Raynor
- Department of Ophthalmology, Duke University Medical Center, 2351 Erwin Road, Durham, NC 27705, USA
- Al-Hafeez Dhalla
- Department of Biomedical Engineering, Duke University, 101 Science Drive, Durham, NC 27708, USA
- Lejla Vajzovic
- Department of Ophthalmology, Duke University Medical Center, 2351 Erwin Road, Durham, NC 27705, USA
- Anthony N Kuo
- Department of Biomedical Engineering, Duke University, 101 Science Drive, Durham, NC 27708, USA
- Department of Ophthalmology, Duke University Medical Center, 2351 Erwin Road, Durham, NC 27705, USA
- Cynthia A Toth
- Department of Biomedical Engineering, Duke University, 101 Science Drive, Durham, NC 27708, USA
- Department of Ophthalmology, Duke University Medical Center, 2351 Erwin Road, Durham, NC 27705, USA
- Joseph A Izatt
- Department of Biomedical Engineering, Duke University, 101 Science Drive, Durham, NC 27708, USA
5
Zhang J, Liu J, Wei S, Chen D, Xiong J, Gao F. Semi-supervised aortic dissections segmentation: A time-dependent weighted feedback fusion framework. Comput Med Imaging Graph 2023; 106:102219. [PMID: 37001423] [DOI: 10.1016/j.compmedimag.2023.102219]
Abstract
The segmentation of the true lumen (TL) and false lumen (FL) plays an important role in the diagnosis and treatment of aortic dissection (AD). Although deep learning methods have achieved remarkable performance on this task, they require a large amount of labeled data for training. To alleviate the burden of manual labeling, this paper proposes a novel semi-supervised aortic dissection segmentation framework based on time-dependent weighted feedback fusion that effectively leverages unlabeled data. A feedback network extends the backbone by encoding the predicted output into a high-level feature space, which is then fused with the original feature information of the image to correct previous potential mistakes, so that segmentation accuracy improves iteratively. To exploit both labeled and unlabeled data, the fused feature space flows through the network again to generate a second feedback, which is constrained to be consistent with the first. The use of the image feature space gives the proposed structure better robustness and accuracy. Experiments show that our method outperforms five existing state-of-the-art semi-supervised segmentation methods on both a type-B AD dataset and a public dataset.
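The "time-dependent weighting" can be read as the ramp-up schedule common in semi-supervised training: the consistency term between the two feedback passes contributes almost nothing early on, when feedback predictions are unreliable, and ramps up to full weight later. A sketch assuming the widely used Gaussian ramp-up (the exact schedule in the paper may differ):

```python
import math

def consistency_weight(step, rampup_steps, w_max=1.0):
    """Gaussian ramp-up: near 0 at step 0, reaching w_max at rampup_steps."""
    if step >= rampup_steps:
        return w_max
    t = step / rampup_steps
    return w_max * math.exp(-5.0 * (1.0 - t) ** 2)

def total_loss(supervised_loss, consistency_loss, step, rampup_steps, w_max=1.0):
    """The supervised term always counts; the feedback-consistency term
    is scaled by the time-dependent weight."""
    return supervised_loss + consistency_weight(step, rampup_steps, w_max) * consistency_loss
```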
Affiliation(s)
- Jinhui Zhang
- School of Automation, Beijing Institute of Technology, Beijing 100081, China
- Jian Liu
- School of Automation, Beijing Institute of Technology, Beijing 100081, China
- Siyi Wei
- School of Automation, Beijing Institute of Technology, Beijing 100081, China
- Duanduan Chen
- School of Life Science, Beijing Institute of Technology, Beijing 100081, China
- Jiang Xiong
- Department of Vascular and Endovascular Surgery, The First Medical Center, Chinese PLA General Hospital, Beijing 100853, China
- Department of Vascular and Endovascular Surgery, Hainan Hospital, Chinese PLA General Hospital, Hainan 572013, China
- Feng Gao
- Department of Neurology, Beijing Tiantan Hospital, Capital Medical University, Beijing 100070, China
6
Parra-Mora E, da Silva Cruz LA. LOCTseg: A lightweight fully convolutional network for end-to-end optical coherence tomography segmentation. Comput Biol Med 2022; 150:106174. [PMID: 36252364] [DOI: 10.1016/j.compbiomed.2022.106174]
Abstract
This article presents a novel end-to-end automatic solution for semantic segmentation of optical coherence tomography (OCT) images. OCT is a non-invasive imaging technology widely used in clinical practice due to its ability to acquire high-resolution cross-sectional images of the ocular fundus. Because of the large variability of retinal structures, OCT segmentation is usually carried out manually and requires expert knowledge. This study introduces a novel fully convolutional network (FCN) architecture, designated LOCTSeg, for end-to-end automatic segmentation of diagnostic markers in OCT B-scans. LOCTSeg is a lightweight deep FCN optimized to balance performance and efficiency. Unlike state-of-the-art FCNs used in image segmentation, LOCTSeg achieves competitive inference speed without sacrificing segmentation accuracy. LOCTSeg is evaluated on two publicly available benchmarking datasets: (1) the annotated retinal OCT image database (AROI), comprising 1136 images, and (2) the healthy controls and multiple sclerosis lesions (HCMS) dataset, consisting of 1715 images. Moreover, we evaluated LOCTSeg on a private dataset of 250 OCT B-scans acquired from patients with epiretinal membrane (ERM) and healthy patients. The evaluation results empirically demonstrate the effectiveness of the proposed algorithm, which improves the state-of-the-art Dice score from 69% to 73% on AROI and from 91% to 92% on HCMS. Furthermore, LOCTSeg outperforms comparable lightweight FCNs on ERM segmentation by Dice-score margins of 4% to 15%.
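The Dice score reported throughout the evaluation is simple to state: twice the overlap between prediction and ground truth, divided by the sum of the two mask sizes. A minimal sketch for binary masks (the smoothing constant `eps` is an assumption added to avoid division by zero on empty masks):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks: 2*|A & B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

Identical masks score 1.0 and disjoint masks score approximately 0, so a jump from 69% to 73% on AROI reflects a meaningful gain in voxel overlap.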
Affiliation(s)
- Esther Parra-Mora
- Department of Electrical and Computer Engineering, University of Coimbra, Coimbra, 3030-290, Portugal
- Instituto de Telecomunicações, Coimbra, 3030-290, Portugal
- Luís A da Silva Cruz
- Department of Electrical and Computer Engineering, University of Coimbra, Coimbra, 3030-290, Portugal
- Instituto de Telecomunicações, Coimbra, 3030-290, Portugal