1. Mehravaran S, Eghrari A, Yousefi S, Khalifa F, Ghiasi G, Farahi A. Screening with the Bilateral Corneal Symmetry 3-D Analyzer. Int J Environ Res Public Health 2025; 22:747. [PMID: 40427862] [PMCID: PMC12110939] [DOI: 10.3390/ijerph22050747]
Abstract
This study aimed to evaluate the effectiveness of an innovative platform (the Bilateral Corneal Symmetry 3-D Analyzer-BiCSA) and a novel corneal symmetry index (the Volume Between Spheres-VBS) in differentiating normal corneas from those with keratoconus. Pentacam imaging data from 30 healthy corneas and 30 keratoconus cases were analyzed. BiCSA was utilized to determine the VBS for each case. Statistical analyses included comparing mean VBS values between groups and assessing sensitivity, specificity, and positive predictive values (PPVs). Keratoconus patients exhibited significantly higher VBS scores compared to healthy controls, particularly within the central 4.0 mm zone (11.4 versus 6.3). Using a VBS threshold of 11.3 in the central zone identified 40% of keratoconus cases (40% sensitivity), but 100% of cases surpassing the threshold were keratoconus (100% PPV). Lowering the threshold to 10.4 increased case detection to 90% while maintaining a high PPV (84.2%). These findings suggest that VBS, particularly when focused on the central 4.0 mm zone, can be a valuable tool for early keratoconus screening and identifying potential corneal abnormalities requiring further clinical evaluation. No healthy control corneas in this study exceeded a VBS threshold of 11.4 at 4 mm, indicating that values above this warrant further investigation.
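The screening arithmetic described above (flag a cornea when its VBS exceeds a fixed cut-off, then score the rule by sensitivity, specificity, and PPV) can be sketched as follows. This is a minimal illustration with made-up VBS-like values, not the study's data; the function name and numbers are assumptions for demonstration only:

```python
def screen_by_threshold(values, labels, threshold):
    """Classify each cornea as positive when its index exceeds `threshold`,
    then compute sensitivity, specificity, and PPV against the true labels
    (1 = keratoconus, 0 = healthy)."""
    tp = sum(1 for v, y in zip(values, labels) if v > threshold and y == 1)
    fn = sum(1 for v, y in zip(values, labels) if v <= threshold and y == 1)
    tn = sum(1 for v, y in zip(values, labels) if v <= threshold and y == 0)
    fp = sum(1 for v, y in zip(values, labels) if v > threshold and y == 0)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    ppv = tp / (tp + fp) if tp + fp else 0.0
    return sensitivity, specificity, ppv

# Hypothetical central-zone VBS values (illustrative, not from the study):
vbs = [5.1, 6.0, 6.8, 7.2, 9.9, 10.6, 11.5, 12.3, 13.0, 14.8]
kc  = [0,   0,   0,   0,   0,   1,    1,    1,    1,    1]
sens, spec, ppv = screen_by_threshold(vbs, kc, 11.3)
print(sens, spec, ppv)  # 0.8 1.0 1.0
```

Lowering the threshold in this toy example raises sensitivity at some cost to PPV, which is the same trade-off the study reports when moving the cut-off from 11.3 to 10.4.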
Affiliation(s)
- Shiva Mehravaran
- Department of Biology, School of Computer, Mathematical, and Natural Sciences, Morgan State University, Baltimore, MD 21251, USA
- Allen Eghrari
- Department of Ophthalmology, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
- Siamak Yousefi
- Department of Ophthalmology and Department of Genetics, Genomics, and Informatics, University of Tennessee Health Science Center, Memphis, TN 38136, USA
- Fahmi Khalifa
- Department of Electrical and Computer Engineering, School of Engineering, Morgan State University, Baltimore, MD 21251, USA
- Guita Ghiasi
- Department of Biology, School of Computer, Mathematical, and Natural Sciences, Morgan State University, Baltimore, MD 21251, USA
- Azadeh Farahi
- Noor Ophthalmology Research Center, Tehran P.O. Box 3475-19395, Iran
2. Rong H, Liu G, Wang Y, Hu J, Sun Z, Gao N, Kee CS, Du B, Wei R. Using 3D Convolutional Neural Network and Corvis ST Corneal Dynamic Video for Detecting Forme Fruste Keratoconus. J Refract Surg 2025; 41:e356-e364. [PMID: 40197082] [DOI: 10.3928/1081597x-20250226-01]
Abstract
PURPOSE To evaluate the performance of a three-dimensional convolutional neural network (3D CNN) in detecting forme fruste keratoconus (FFKC). METHODS A total of 415 anonymized corneal dynamic videos were collected for this study. The video dataset consisted of 150 patients with FFKC (150 videos) and 265 normal patients (265 videos). These patients underwent comprehensive ocular examinations, including slit lamp, Pentacam (Oculus Optikgeräte GmbH), and Corvis ST (Oculus Optikgeräte GmbH), and were classified by corneal experts. A 3D CNN-based algorithm was developed to establish an FFKC detection model. The performance of the model was evaluated using metrics such as accuracy, area under the receiver operating characteristic curve (AUC), confusion matrices, and F1 score. Gradient-weighted class activation mapping (Grad-CAM) was used to observe the regions that the model attended to. RESULTS In the test dataset, the model achieved an accuracy of 87.95% in identifying FFKC. The ResNet3D model achieved an AUC of 0.95 at a cut-off value of 0.49, with an F1 score of 0.85; sensitivity was 83.33% and specificity was 90.57%. CONCLUSIONS Combining a 3D CNN with Corvis ST corneal dynamic videos provides a new method for distinguishing between FFKC and normal corneas. This could offer valuable clinical insights and recommendations for detecting FFKC. Nevertheless, the generalizability of the model is still a concern, and external validation is required prior to its clinical implementation.
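All four headline numbers in this abstract (accuracy, sensitivity, specificity, F1) derive from a single binary confusion matrix. The sketch below shows that derivation; the counts are an assumption (the abstract does not report its confusion matrix), chosen so the resulting accuracy, sensitivity, and specificity land near the reported 87.95%, 83.33%, and 90.57%:

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # recall on the positive (FFKC) class
    specificity = tn / (tn + fp)   # recall on the negative (normal) class
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, f1

# Hypothetical test-set counts (not reported in the paper):
acc, sens, spec, f1 = classification_metrics(tp=25, fp=5, tn=48, fn=5)
print(f"accuracy={acc:.4f} sensitivity={sens:.4f} specificity={spec:.4f} F1={f1:.4f}")
```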
3. Muhsin ZJ, Qahwaji R, Ghafir I, AlShawabkeh M, Al Bdour M, AlRyalat S, Al-Taee M. Advances in machine learning for keratoconus diagnosis. Int Ophthalmol 2025; 45:128. [PMID: 40159519] [PMCID: PMC11955434] [DOI: 10.1007/s10792-025-03496-4]
Abstract
PURPOSE To review studies reporting the role of Machine Learning (ML) techniques in the diagnosis of keratoconus (KC) over the past decade, shedding light on recent developments while also highlighting the existing gaps between academic research and practical implementation in clinical settings. METHODS The review process begins with a systematic search of primary digital libraries using relevant keywords. A rigorous set of inclusion and exclusion criteria is then applied, resulting in the identification of 62 articles for analysis. Key research questions are formulated to address advancements in ML for KC diagnosis, corneal imaging modalities, types of datasets utilised, and the spectrum of KC conditions investigated over the past decade. A significant gap between academic research and practical implementation in clinical settings is identified, forming the basis for actionable recommendations tailored for both ML developers and ophthalmologists. Additionally, a proposed roadmap model is presented to facilitate the integration of ML models into clinical practice, enhancing diagnostic accuracy and patient care. RESULTS The analysis revealed that the diagnosis of KC predominantly relies on supervised classifiers (97%), with Random Forest being the most frequently used algorithm (27%), followed by Deep Learning including Convolutional Neural Networks (16%), Feedforward and Feedback Neural Networks (12%), and Support Vector Machines (12%). Pentacam is identified as the leading corneal imaging modality (56%), and a substantial majority of studies (91%) utilise local datasets, primarily consisting of numerical corneal parameters (77%). The most studied KC conditions were non-KC (NKC) vs. clinical KC (CKC) (29%), NKC vs. subclinical KC (SCKC) (24%), NKC vs. SCKC vs. CKC (20%), and SCKC vs. CKC (7%). However, only 20% of studies focused on addressing KC severity stages, emphasizing the need for more research in this area. These findings highlight the current landscape of ML in KC diagnosis, uncover existing challenges, and suggest potential avenues for further research and development, with particular emphasis on the dominance of certain algorithms and imaging modalities. CONCLUSION Key obstacles include the lack of consensus on an objective diagnostic standard for early KC detection and severity staging, limited multidisciplinary collaboration, and restricted access to public datasets. Further research is crucial to overcome these challenges and apply findings in clinical practice.
Affiliation(s)
- Zahra J Muhsin
- Faculty of Engineering and Digital Technologies, University of Bradford, Bradford, BD7 1DP, UK
- Rami Qahwaji
- Faculty of Engineering and Digital Technologies, University of Bradford, Bradford, BD7 1DP, UK
- Ibrahim Ghafir
- Faculty of Engineering and Digital Technologies, University of Bradford, Bradford, BD7 1DP, UK
- Saif AlRyalat
- School of Medicine, The University of Jordan, Amman, Jordan
4. Rampat R, Debellemanière G, Gatinel D, Ting DSJ. Artificial intelligence applications in cataract and refractive surgeries. Curr Opin Ophthalmol 2024; 35:480-486. [PMID: 39259648] [DOI: 10.1097/icu.0000000000001090]
Abstract
PURPOSE OF REVIEW This review highlights the recent advancements in the applications of artificial intelligence within the field of cataract and refractive surgeries. Given the rapid evolution of artificial intelligence technologies, it is essential to provide an updated overview of the significant strides and emerging trends in this field. RECENT FINDINGS Key themes include artificial intelligence-assisted diagnostics and intraoperative support, image analysis for anterior segment surgeries, development of artificial intelligence-based diagnostic scores and calculators for early disease detection and treatment planning, and integration of generative artificial intelligence for patient education and postoperative monitoring. SUMMARY The impact of artificial intelligence on cataract and refractive surgeries is becoming increasingly evident through improved diagnostic accuracy, enhanced patient education, and streamlined clinical workflows. These advancements hold significant implications for clinical practice, promising more personalized patient care and facilitating early disease detection and intervention. Equally, the review highlights that only some of this work reaches the clinical stage, and that successful clinical integration may benefit from greater focus.
Affiliation(s)
- Guillaume Debellemanière
- Department of Anterior Segment and Refractive Surgery, Rothschild Foundation Hospital, Paris, France
- Damien Gatinel
- Department of Anterior Segment and Refractive Surgery, Rothschild Foundation Hospital, Paris, France
- Darren S J Ting
- Academic Unit of Ophthalmology, Institute of Inflammation and Ageing, College of Medical and Dental Sciences, University of Birmingham; Birmingham and Midland Eye Centre, Sandwell and West Birmingham NHS Trust, Birmingham; Academic Ophthalmology, School of Medicine, University of Nottingham, Nottingham, UK
5. Goodman D, Zhu AY. Utility of artificial intelligence in the diagnosis and management of keratoconus: a systematic review. Front Ophthalmol 2024; 4:1380701. [PMID: 38984114] [PMCID: PMC11182163] [DOI: 10.3389/fopht.2024.1380701]
Abstract
Introduction The application of artificial intelligence (AI) systems in ophthalmology is rapidly expanding. Early detection and management of keratoconus is important for preventing disease progression and the need for corneal transplant. We review studies regarding the utility of AI in the diagnosis and management of keratoconus and other corneal ectasias. Methods We conducted a systematic search for relevant original, English-language research studies in the PubMed, Web of Science, Embase, and Cochrane databases from inception to October 31, 2023, using a combination of the following keywords: artificial intelligence, deep learning, machine learning, keratoconus, and corneal ectasia. Case reports, literature reviews, conference proceedings, and editorials were excluded. We extracted the following data from each eligible study: type of AI, input used for training, output, ground truth or reference, dataset size, availability of algorithm/model, availability of dataset, and major study findings. Results Ninety-three original research studies were included in this review, with publication dates ranging from 1994 to 2023. The majority of studies concerned the use of AI in detecting keratoconus or subclinical keratoconus (n=61). Among studies of keratoconus diagnosis, the most common inputs were corneal topography, Scheimpflug-based corneal tomography, and anterior segment optical coherence tomography. This review also summarized 16 original research studies on AI-based assessment of severity and clinical features, 7 studies on the prediction of disease progression, and 6 studies on the characterization of treatment response. Only three studies addressed the use of AI in identifying susceptibility genes involved in the etiology and pathogenesis of keratoconus. Discussion Algorithms trained on Scheimpflug-based tomography seem to be promising tools for the early diagnosis of keratoconus and could be particularly applicable in low-resource communities. Future studies could investigate the application of AI models trained on multimodal patient information for staging keratoconus severity and tracking disease progression.
6. Zhang P, Yang L, Mao Y, Zhang X, Cheng J, Miao Y, Bao F, Chen S, Zheng Q, Wang J. CorNet: Autonomous feature learning in raw Corvis ST data for keratoconus diagnosis via residual CNN approach. Comput Biol Med 2024; 172:108286. [PMID: 38493602] [DOI: 10.1016/j.compbiomed.2024.108286]
Abstract
PURPOSE To ascertain whether the integration of raw Corvis ST data with an end-to-end CNN can enhance the diagnosis of keratoconus (KC). METHOD The Corvis ST is a non-contact device for in vivo measurement of corneal biomechanics. The CorNet was trained and validated on a dataset consisting of 1786 Corvis ST raw data from 1112 normal eyes and 674 KC eyes. Each raw data record consists of the anterior and posterior corneal surface elevation during air-puff induced dynamic deformation. The architecture of CorNet utilizes four ResNet-inspired convolutional structures that employ 1 × 1 convolution in identity mapping. Gradient-weighted Class Activation Mapping (Grad-CAM) was adopted to visualize the attention allocation to diagnostic areas. Discriminative performance was assessed using metrics including the AUC of the ROC curve, sensitivity, specificity, precision, accuracy, and F1 score. RESULTS CorNet demonstrated outstanding performance in distinguishing KC from normal eyes, achieving an AUC of 0.971 (sensitivity: 92.49%, specificity: 91.54%) in the validation set, outperforming the best existing Corvis ST parameters, namely the Corvis Biomechanical Index (CBI) with an AUC of 0.947, and its updated version for Chinese populations (cCBI) with an AUC of 0.963. Though the ROC curve analysis showed no significant difference between CorNet and cCBI (p = 0.295), it indicated a significant difference between CorNet and CBI (p = 0.011). The Grad-CAM visualizations highlighted the significance of corneal deformation data during the loading phase rather than the unloading phase for KC diagnosis. CONCLUSION This study proposed an end-to-end CNN approach utilizing raw biomechanical data by Corvis ST for KC detection, showing effectiveness comparable to or surpassing existing parameters provided by Corvis ST. The CorNet, autonomously learning comprehensive temporal and spatial features, demonstrated promising performance for advancing KC diagnosis in ophthalmology.
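The AUC values compared in this abstract (CorNet 0.971 vs. CBI 0.947 and cCBI 0.963) can each be estimated without plotting an ROC curve, using the standard Mann-Whitney formulation: the AUC equals the probability that a randomly chosen diseased eye receives a higher classifier score than a randomly chosen normal eye. A stdlib-only sketch on toy scores (illustrative values, not the paper's data):

```python
def auc_mann_whitney(pos_scores, neg_scores):
    """AUC = P(score_pos > score_neg) + 0.5 * P(tie), computed by
    exhaustive pairwise comparison (fine for small samples)."""
    wins = ties = 0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(pos_scores) * len(neg_scores))

# Toy classifier scores for five KC eyes and five normal eyes:
kc_scores     = [0.9, 0.8, 0.75, 0.6, 0.3]
normal_scores = [0.7, 0.4, 0.35, 0.2, 0.1]
print(auc_mann_whitney(kc_scores, normal_scores))  # 0.84
```

The pairwise form runs in O(n*m); for datasets the size of CorNet's, a rank-sum implementation would be the practical choice, but the estimated quantity is identical.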
Affiliation(s)
- PeiPei Zhang
- School of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- LanTing Yang
- School of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- YiCheng Mao
- School of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- XinYu Zhang
- School of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- JiaXuan Cheng
- School of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- YuanYuan Miao
- School of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- FangJun Bao
- School of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China; National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- ShiHao Chen
- School of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China; National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- QinXiang Zheng
- School of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China; National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- JunJie Wang
- School of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China; National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China; Department of Ophthalmology, The Third Hospital of Mianyang, Sichuan Mental Health Center, Mianyang, 621054, China