1. Sreng S, Ramesh P, Nam Phuong PD, Binte Abdul Gani NF, Chua J, Nongpiur ME, Aung T, Husain R, Schmetterer L, Wong D. Wide-field OCT volumetric segmentation using semi-supervised CNN and transformer integration. Sci Rep 2025;15:6676. PMID: 39994298; PMCID: PMC11850926; DOI: 10.1038/s41598-025-89476-1.
Abstract
Wide-field optical coherence tomography (OCT) imaging can enable monitoring of peripheral changes in the retina, beyond the conventional fields of view used in current clinical OCT imaging systems. However, wide-field scans can present significant challenges for retinal layer segmentation. Deep Convolutional Neural Networks (CNNs) have shown strong performance in medical imaging segmentation but typically require large-scale, high-quality, pixel-level annotated datasets for effective training. To address this challenge, we propose an advanced semi-supervised learning framework that combines the detailed capabilities of convolutional networks with the broader perspective of transformers. This method efficiently leverages labelled and unlabelled data to reduce dependence on extensive, manually annotated datasets. We evaluated model performance on a dataset of 74 volumetric OCT scans, each acquired using a prototype swept-source OCT system following a wide-field scan protocol with a 15 × 9 mm field of view, comprising 11,750 labelled and 29,016 unlabelled images. Wide-field retinal layer segmentation using the semi-supervised approach showed significant improvements (P < 0.001) of up to 11% over a UNet baseline model. Comparisons with a clinical spectral-domain OCT system revealed significant correlations of up to 0.91 (P < 0.001) in retinal layer thickness measurements. These findings highlight the effectiveness of semi-supervised learning with cross-teaching between CNNs and transformers for automated OCT layer segmentation.
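The cross-teaching idea this abstract refers to — each network's hard predictions on unlabelled images acting as pseudo-labels to supervise the other network — can be sketched numerically. This is a minimal NumPy illustration of the loss computation only, not the authors' implementation: the array shapes, toy class count, and cross-entropy form are assumptions for demonstration.

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax over the class axis.
    e = np.exp(logits - logits.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_entropy(probs, hard_labels, eps=1e-12):
    # Mean cross-entropy of predicted probabilities against hard (argmax) labels.
    onehot = np.eye(probs.shape[-1])[hard_labels]
    return float(-(onehot * np.log(probs + eps)).mean())

def cross_teaching_losses(cnn_logits, transformer_logits):
    """On an unlabelled batch, each model is supervised by the other's
    argmax pseudo-labels. (In a real framework, gradients would flow only
    through the student side; here we just compute the loss values.)"""
    p_cnn = softmax(cnn_logits)
    p_trans = softmax(transformer_logits)
    pseudo_cnn = p_cnn.argmax(-1)      # CNN's hard predictions
    pseudo_trans = p_trans.argmax(-1)  # transformer's hard predictions
    loss_cnn = cross_entropy(p_cnn, pseudo_trans)    # CNN learns from transformer
    loss_trans = cross_entropy(p_trans, pseudo_cnn)  # transformer learns from CNN
    return loss_cnn, loss_trans

# Toy unlabelled batch: 2 images, 4x4 pixels, 3 hypothetical layer classes.
rng = np.random.default_rng(0)
cnn_logits = rng.normal(size=(2, 4, 4, 3))
trans_logits = rng.normal(size=(2, 4, 4, 3))
l1, l2 = cross_teaching_losses(cnn_logits, trans_logits)
```

The asymmetry matters: because the CNN and transformer have different inductive biases, their disagreements on unlabelled pixels provide a training signal that a single-model self-training scheme would not.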
Affiliation(s)
- Syna Sreng
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore City, Singapore
- SERI-NTU Advanced Ocular Engineering (STANCE) Program, Singapore City, Singapore
- Padmini Ramesh
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore City, Singapore
- SERI-NTU Advanced Ocular Engineering (STANCE) Program, Singapore City, Singapore
- Pham Duc Nam Phuong
- School of Chemistry, Chemical Engineering and Biotechnology, Nanyang Technological University, Singapore City, Singapore
- Jacqueline Chua
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore City, Singapore
- SERI-NTU Advanced Ocular Engineering (STANCE) Program, Singapore City, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore City, Singapore
- Tin Aung
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore City, Singapore
- Rahat Husain
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore City, Singapore
- Leopold Schmetterer
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore City, Singapore
- SERI-NTU Advanced Ocular Engineering (STANCE) Program, Singapore City, Singapore
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore City, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore City, Singapore
- School of Chemistry, Chemical Engineering and Biotechnology, Nanyang Technological University, Singapore City, Singapore
- Centre for Medical Physics and Biomedical Engineering, Nanyang Technological University (NTU), Singapore City, Singapore
- Department of Clinical Pharmacology, Medical University of Vienna, Vienna, Austria
- Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
- Fondation Ophtalmologique Adolphe De Rothschild, Paris, France
- Damon Wong
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore City, Singapore
- SERI-NTU Advanced Ocular Engineering (STANCE) Program, Singapore City, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore City, Singapore
- Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
2. Madadi Y, Delsoz M, Khouri AS, Boland M, Grzybowski A, Yousefi S. Applications of artificial intelligence-enabled robots and chatbots in ophthalmology: recent advances and future trends. Curr Opin Ophthalmol 2024;35:238-243. PMID: 38277274; PMCID: PMC10959691; DOI: 10.1097/icu.0000000000001035.
Abstract
PURPOSE OF REVIEW: Recent advances in artificial intelligence (AI), robotics, and chatbots have brought these technologies to the forefront of medicine, particularly ophthalmology. These technologies have been applied to diagnosis, prognosis, surgical operations, and patient-specific care in ophthalmology. It is thus both timely and pertinent to assess the existing landscape, recent advances, and trends of AI, AI-enabled robots, and chatbots in ophthalmology.
RECENT FINDINGS: Recent developments have integrated AI-enabled robotics into diagnostic and surgical procedures in ophthalmology. More recently, large language models (LLMs) such as ChatGPT have shown promise in augmenting research capabilities and diagnosing ophthalmic diseases. These developments may portend a new era of doctor-patient-machine collaboration.
SUMMARY: Ophthalmology is undergoing revolutionary change in research, clinical practice, and surgical interventions. Ophthalmic AI-enabled robotics and chatbot technologies based on LLMs are converging to create a new era of digital ophthalmology. Collectively, these developments portend a future in which conventional ophthalmic knowledge is seamlessly integrated with AI to improve the patient experience and enhance therapeutic outcomes.
Affiliation(s)
- Yeganeh Madadi
- Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, TN, USA
- Mohammad Delsoz
- Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, TN, USA
- Albert S. Khouri
- Institute of Ophthalmology and Visual Science, University of Medicine and Dentistry of New Jersey, NJ, USA
- Michael Boland
- Department of Ophthalmology, Massachusetts Eye and Ear, Boston, MA, USA
- Andrzej Grzybowski
- Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznan, Poland
- Siamak Yousefi
- Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, TN, USA
- Department of Genetics, Genomics, and Informatics, University of Tennessee Health Science Center, Memphis, TN, USA
3. Trout RM, Viehland C, Li JD, Raynor W, Dhalla AH, Vajzovic L, Kuo AN, Toth CA, Izatt JA. Methods for real-time feature-guided image fusion of intrasurgical volumetric optical coherence tomography with digital microscopy. Biomed Opt Express 2023;14:3308-3326. PMID: 37497493; PMCID: PMC10368056; DOI: 10.1364/boe.488975.
Abstract
4D-microscope-integrated optical coherence tomography (4D-MIOCT) is an emerging multimodal imaging technology in which live volumetric OCT (4D-OCT) is implemented in tandem with standard stereo color microscopy. 4D-OCT provides ophthalmic surgeons with many useful visual cues not available in standard microscopy; however, it is challenging for the surgeon to effectively integrate cues from simultaneous but separate imaging in real time. In this work, we demonstrate progress towards solving this challenge by fusing data from each modality, guided by segmented 3D features. In this way, a more readily interpretable visualization that combines and registers important cues from both modalities is presented to the surgeon.
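As a loose illustration of the fusion step described above — overlaying a segmented OCT feature, once registered to the microscope view, onto the color frame — here is a minimal NumPy sketch. The binary mask, overlay color, and alpha-blending weight are illustrative assumptions, not the authors' rendering pipeline, which operates on live volumetric data.

```python
import numpy as np

def fuse_overlay(microscope_rgb, feature_mask, color=(0.0, 1.0, 0.0), alpha=0.5):
    """Alpha-blend a binary OCT-derived feature mask (assumed already
    registered to the microscope frame) into an RGB image in [0, 1]."""
    fused = microscope_rgb.astype(float).copy()
    overlay = np.asarray(color, dtype=float)
    m = feature_mask.astype(bool)
    # Blend only the masked pixels; the rest of the frame is untouched.
    fused[m] = (1.0 - alpha) * fused[m] + alpha * overlay
    return fused

# Toy 8x8 gray frame with a 3x3 feature (e.g. a segmented instrument tip).
frame = np.full((8, 8, 3), 0.2)
mask = np.zeros((8, 8), dtype=bool)
mask[2:5, 2:5] = True
fused = fuse_overlay(frame, mask)
```

In the real intrasurgical setting, the hard part is not the blending but the registration and segmentation that must run at volume rate; this sketch only shows the final compositing step.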
Affiliation(s)
- Robert M Trout
- Department of Biomedical Engineering, Duke University, 101 Science Drive, Durham, NC 27708, USA
- Christian Viehland
- Department of Biomedical Engineering, Duke University, 101 Science Drive, Durham, NC 27708, USA
- Jianwei D Li
- Department of Biomedical Engineering, Duke University, 101 Science Drive, Durham, NC 27708, USA
- William Raynor
- Department of Ophthalmology, Duke University Medical Center, 2351 Erwin Road, Durham, NC 27705, USA
- Al-Hafeez Dhalla
- Department of Biomedical Engineering, Duke University, 101 Science Drive, Durham, NC 27708, USA
- Lejla Vajzovic
- Department of Ophthalmology, Duke University Medical Center, 2351 Erwin Road, Durham, NC 27705, USA
- Anthony N Kuo
- Department of Biomedical Engineering, Duke University, 101 Science Drive, Durham, NC 27708, USA
- Department of Ophthalmology, Duke University Medical Center, 2351 Erwin Road, Durham, NC 27705, USA
- Cynthia A Toth
- Department of Biomedical Engineering, Duke University, 101 Science Drive, Durham, NC 27708, USA
- Department of Ophthalmology, Duke University Medical Center, 2351 Erwin Road, Durham, NC 27705, USA
- Joseph A Izatt
- Department of Biomedical Engineering, Duke University, 101 Science Drive, Durham, NC 27708, USA