1. Sun Y, Chu H. The outcome prediction method of football matches by the quantum neural network based on deep learning. Sci Rep 2025;15:19875. PMID: 40481179; PMCID: PMC12144118; DOI: 10.1038/s41598-025-91870-8.
Abstract
The precise prediction of football match outcomes holds significant value in the sports domain. However, traditional prediction methods are limited by data complexity and model capabilities, struggling to meet the demands for high accuracy. Quantum neural networks (QNNs) leverage the unique quantum properties of quantum bits (qubits) such as superposition and entanglement. They have enhanced information processing capabilities and potential pattern mining abilities when dealing with vast, high-dimensional, and complex football match data. This makes QNNs a superior choice compared to traditional neural networks and other advanced models for football match prediction. This study focuses on a deep learning (DL)-based QNN model, aiming to construct and optimize this model to analyze historical football match data for high-precision predictions of future match outcomes. Specifically, detailed match records from 2008 to 2022 of major European football leagues were obtained from the "European Football Database" public dataset on Kaggle. The data include various factors such as match outcomes, team information, player statistics, and match venues. The data are cleaned, standardized, and feature-engineered to meet the input requirements of neural network models. A multilayer perceptron model consisting of an input layer, multiple hidden layers, and an output layer is designed and implemented. During the model training phase, gradient descent is used to optimize weight parameters, and quantum algorithms are integrated to continuously adjust network weights to minimize prediction errors. The model is trained, parameter tuning is completed, and performance is evaluated using the training, validation, and independent test sets. The model's effectiveness is measured using indicators such as F1 score, accuracy, and recall. The study results indicate that the optimized QNN model significantly outperforms other advanced models in prediction accuracy: it improves precision by more than 20.5%, recall by over 23.2%, and accuracy and F1 score by over 22.3% and 21.8%, respectively. Additionally, the model predicts the championship probabilities for Spain, France, England, and the Netherlands in the European Championship as 31.72%, 27.61%, 22.58%, and 18.09%, respectively. This study innovatively applies the optimized QNN model to outcome prediction in football matches, validating its effectiveness in the sports prediction field. It provides new ideas and methods for football match outcome prediction while offering valuable references for developing prediction models for other sports events. By integrating public data with DL technology, this study lays the foundation for the practical application of sports data analysis and prediction models, holding significant theoretical and practical value. Future research can further explore the integration of QNN models with mathematical analysis systems, expanding their application scenarios in the real world: for example, such models could provide sports betting agencies with more accurate risk assessments, help teams formulate more scientific tactical strategies, and optimize event organization, fully leveraging their potential value.
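As an illustrative aside (not taken from the paper), the classical backbone described above, a multilayer perceptron trained by gradient descent and scored with accuracy, F1, and recall, can be sketched as follows; the features, dataset sizes, and hyperparameters are hypothetical placeholders and the quantum components are omitted.

```python
# Illustrative sketch only: a classical multilayer perceptron baseline for
# three-way match outcomes (home win / draw / away win). The quantum
# components described in the paper are not reproduced here, and the
# feature values are hypothetical placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score, f1_score, recall_score

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 12))          # e.g., team form, player stats, venue encodings
y = rng.integers(0, 3, size=5000)        # 0 = home win, 1 = draw, 2 = away win

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_train)

# Input layer -> two hidden layers -> softmax output, optimized by gradient descent (Adam).
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
model.fit(scaler.transform(X_train), y_train)

pred = model.predict(scaler.transform(X_test))
print("accuracy:", accuracy_score(y_test, pred))
print("macro F1:", f1_score(y_test, pred, average="macro"))
print("macro recall:", recall_score(y_test, pred, average="macro"))
```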
Affiliation(s)
- Yang Sun
- College of Physical Education and Health Science, Chongqing Normal University, Chongqing City, 401331, China
- Hongyang Chu
- Sports Training College, Tianjin University of Sport, Tianjin City, 301617, China
2. Chappidi S, Belue MJ, Harmon SA, Jagasia S, Zhuge Y, Tasci E, Turkbey B, Singh J, Camphausen K, Krauze AV. From manual clinical criteria to machine learning algorithms: Comparing outcome endpoints derived from diverse electronic health record data modalities. PLOS Digit Health 2025;4:e0000755. PMID: 40367064; PMCID: PMC12077705; DOI: 10.1371/journal.pdig.0000755.
Abstract
BACKGROUND Progression free survival (PFS) is a critical clinical outcome endpoint during cancer management and treatment evaluation. Yet, PFS is often missing from publicly available datasets due to the current subjective, expert, and time-intensive nature of generating PFS metrics. Given emerging research in multi-modal machine learning (ML), we explored the benefits and challenges associated with mining different electronic health record (EHR) data modalities and automating extraction of PFS metrics via ML algorithms. METHODS We analyzed EHR data from 92 pathology-proven GBM patients, obtaining 233 corticosteroid prescriptions, 2080 radiology reports, and 743 brain MRI scans. Three methods were developed to derive clinical PFS: 1) frequency analysis of corticosteroid prescriptions, 2) natural language processing (NLP) of reports, and 3) computer vision (CV) volumetric analysis of imaging. Outputs from these methods were compared to manually annotated clinical guideline PFS metrics. RESULTS Employing data-driven methods, standalone progression rates were 63% (prescription), 78% (NLP), and 54% (CV), compared to the 99% progression rate from manually applied clinical guidelines using integrated data sources. The prescription method identified progression an average of 5.2 months later than the clinical standard, while the CV and NLP algorithms identified progression earlier by 2.6 and 6.9 months, respectively. While lesion growth is a clinical guideline progression indicator, only half of patients exhibited increasing contrast-enhancing tumor volumes during scan-based CV analysis. CONCLUSION Our results indicate that data-driven algorithms can extract tumor progression outcomes from existing EHR data. However, ML methods are subject to varying availability bias, supporting contextual information, and pre-processing resource burdens that influence the extracted PFS endpoint distributions. Our scan-based CV results also suggest that the automation of clinical criteria may not align with human intuition. Our findings indicate a need for improved data source integration, validation, and revisiting of clinical criteria in parallel to multi-modal ML algorithm development.
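As a hedged illustration of the first method only, the sketch below assumes a simplified prescription-frequency rule (monthly corticosteroid counts rising above an early post-treatment baseline); the actual rule, data schema, and thresholds used in the study are not specified in the abstract.

```python
# Simplified sketch of a prescription-frequency progression flag. The study's
# actual rule is not given in the abstract; here we assume, purely for
# illustration, that a rise in monthly corticosteroid prescription counts above
# the patient's early baseline marks progression. The 'date' column is a
# hypothetical schema.
import pandas as pd

def prescription_progression_date(rx: pd.DataFrame, baseline_months: int = 3):
    """rx: one patient's corticosteroid orders with a datetime 'date' column."""
    monthly = (rx.assign(month=rx["date"].dt.to_period("M"))
                 .groupby("month").size().sort_index())
    if len(monthly) <= baseline_months:
        return None                           # not enough follow-up to call progression
    baseline = monthly.iloc[:baseline_months].mean()
    later = monthly.iloc[baseline_months:]
    flagged = later[later > baseline]         # months with counts above baseline
    return None if flagged.empty else flagged.index[0].to_timestamp()

rx = pd.DataFrame({"date": pd.to_datetime(
    ["2021-01-05", "2021-02-10", "2021-03-02", "2021-07-15", "2021-07-28", "2021-08-03"])})
print(prescription_progression_date(rx))     # first month whose count exceeds baseline
```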
Affiliation(s)
- Shreya Chappidi
- Radiation Oncology Branch, Center for Cancer Research, National Cancer Institute, National Institutes of Health, Bethesda, Maryland, United States of America
- Department of Computer Science and Technology, University of Cambridge, Cambridge, United Kingdom
- Mason J. Belue
- Artificial Intelligence Resource, Center for Cancer Research, National Cancer Institute, National Institutes of Health, Bethesda, Maryland, United States of America
- Stephanie A. Harmon
- Artificial Intelligence Resource, Center for Cancer Research, National Cancer Institute, National Institutes of Health, Bethesda, Maryland, United States of America
- Sarisha Jagasia
- Radiation Oncology Branch, Center for Cancer Research, National Cancer Institute, National Institutes of Health, Bethesda, Maryland, United States of America
- Ying Zhuge
- Radiation Oncology Branch, Center for Cancer Research, National Cancer Institute, National Institutes of Health, Bethesda, Maryland, United States of America
- Erdal Tasci
- Radiation Oncology Branch, Center for Cancer Research, National Cancer Institute, National Institutes of Health, Bethesda, Maryland, United States of America
- Baris Turkbey
- Artificial Intelligence Resource, Center for Cancer Research, National Cancer Institute, National Institutes of Health, Bethesda, Maryland, United States of America
- Jatinder Singh
- Department of Computer Science and Technology, University of Cambridge, Cambridge, United Kingdom
- Research Center Trustworthy Data Science and Security, University Alliance Ruhr, Duisburg-Essen, Germany
- Kevin Camphausen
- Radiation Oncology Branch, Center for Cancer Research, National Cancer Institute, National Institutes of Health, Bethesda, Maryland, United States of America
- Andra V. Krauze
- Radiation Oncology Branch, Center for Cancer Research, National Cancer Institute, National Institutes of Health, Bethesda, Maryland, United States of America
3. Yu J, Li F, Liu M, Zhang M, Liu X. Application of Artificial Intelligence in the Diagnosis, Follow-Up and Prediction of Treatment of Ophthalmic Diseases. Semin Ophthalmol 2025;40:173-181. PMID: 39435874; DOI: 10.1080/08820538.2024.2414353.
Abstract
PURPOSE To describe the application of artificial intelligence (AI) in ophthalmic diseases and its possible future directions. METHODS A retrospective review of the literature from PubMed, Web of Science, and Embase databases (2019-2024). RESULTS AI assists in cataract diagnosis, classification, preoperative lens calculation, surgical risk, postoperative vision prediction, and follow-up. For glaucoma, AI enhances early diagnosis, progression prediction, and surgical risk assessment. It detects diabetic retinopathy early and predicts treatment effects for diabetic macular edema. AI analyzes fundus images for age-related macular degeneration (AMD) diagnosis and risk prediction. Additionally, AI quantifies and grades vitreous opacities in uveitis. For retinopathy of prematurity, AI facilitates disease classification, predicting disease occurrence and severity. Recently, AI also predicts systemic diseases by analyzing fundus vascular changes. CONCLUSIONS AI has been extensively used in diagnosing, following up, and predicting treatment outcomes for common blinding eye diseases. In addition, it also has a unique role in the prediction of systemic diseases.
Affiliation(s)
- Jinwei Yu
- Ophthalmologic Center of the Second Hospital, Jilin University, Changchun, P.R. China
- Fuqiang Li
- Ophthalmologic Center of the Second Hospital, Jilin University, Changchun, P.R. China
- Mingzhu Liu
- Ophthalmologic Center of the Second Hospital, Jilin University, Changchun, P.R. China
- Mengdi Zhang
- Ophthalmologic Center of the Second Hospital, Jilin University, Changchun, P.R. China
- Xiaoli Liu
- Ophthalmologic Center of the Second Hospital, Jilin University, Changchun, P.R. China
4. Mohammadjafari A, Lin M, Shi M. Deep Learning-Based Glaucoma Detection Using Clinical Notes: A Comparative Study of Long Short-Term Memory and Convolutional Neural Network Models. Diagnostics (Basel) 2025;15:807. PMID: 40218157; PMCID: PMC11988537; DOI: 10.3390/diagnostics15070807.
Abstract
Background/Objectives: Glaucoma is the second-leading cause of irreversible blindness globally. Retinal images such as color fundus photography have been widely used to detect glaucoma. However, little is known about the effectiveness of using raw clinical notes generated by glaucoma specialists in detecting glaucoma. This study aims to investigate the capability of deep learning approaches to detect glaucoma from clinical notes based on a real-world dataset including 10,000 patients. Different popular models are explored to predict the binary glaucomatous status defined from a comprehensive vision function assessment. Methods: We compared multiple deep learning architectures, including Long Short-Term Memory (LSTM) networks, Convolutional Neural Networks (CNNs), and transformer-based models BERT and BioBERT. LSTM exploits temporal feature dependencies within the clinical notes, while CNNs focus on extracting local textual features, and transformer-based models leverage self-attention to capture rich contextual information and feature correlations. We also investigated the group disparities of deep learning for glaucoma detection in various demographic groups. Results: The experimental results indicate that the CNN model achieved an Overall AUC of 0.80, slightly outperforming LSTM by 0.01. Both models showed disparities and biases in performance across different racial groups. However, the CNN showed reduced group disparities compared to LSTM across Asian, Black, and White groups, meaning it has the advantage of achieving more equitable outcomes. Conclusions: This study demonstrates the potential of deep learning models to detect glaucoma from clinical notes and highlights the need for fairness-aware modeling to address health disparities.
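For readers unfamiliar with the CNN variant compared above, a minimal 1D-convolutional text classifier might look like the following PyTorch sketch; vocabulary size, filter width, and other hyperparameters are illustrative choices, not the authors' configuration.

```python
# Minimal 1D-convolutional text classifier of the general kind compared in the
# study. Tokenization and hyperparameters here are illustrative only.
import torch
import torch.nn as nn

class NotesCNN(nn.Module):
    def __init__(self, vocab_size=20000, emb_dim=100, n_filters=64, kernel_size=5):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size)
        self.pool = nn.AdaptiveMaxPool1d(1)   # global max pooling over the note
        self.fc = nn.Linear(n_filters, 1)     # binary glaucomatous-status logit

    def forward(self, token_ids):                   # token_ids: (batch, seq_len) word indices
        x = self.emb(token_ids).permute(0, 2, 1)    # -> (batch, emb_dim, seq_len)
        x = torch.relu(self.conv(x))                # local textual features
        x = self.pool(x).squeeze(-1)                # -> (batch, n_filters)
        return self.fc(x).squeeze(-1)               # raw logits; pair with BCEWithLogitsLoss

model = NotesCNN()
dummy_batch = torch.randint(1, 20000, (8, 256))     # 8 tokenized notes, 256 tokens each
print(model(dummy_batch).shape)                      # torch.Size([8])
```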
Affiliation(s)
- Ali Mohammadjafari
- School of Computing and Informatics, University of Louisiana at Lafayette, Lafayette, LA 70504, USA
- Maohua Lin
- Department of Biomedical Engineering, Florida Atlantic University, Boca Raton, FL 33431, USA
- Min Shi
- School of Computing and Informatics, University of Louisiana at Lafayette, Lafayette, LA 70504, USA
5. Koornwinder A, Zhang Y, Ravindranath R, Chang RT, Bernstein IA, Wang SY. Multimodal Artificial Intelligence Models Predicting Glaucoma Progression Using Electronic Health Records and Retinal Nerve Fiber Layer Scans. Transl Vis Sci Technol 2025;14:27. PMID: 40152766; PMCID: PMC11954538; DOI: 10.1167/tvst.14.3.27.
Abstract
Purpose The purpose of this study was to develop models that predict which patients with glaucoma will progress to require surgery, combining structured data from electronic health records (EHRs) and retinal nerve fiber layer optical coherence tomography (RNFL OCT) scans. Methods EHR data (demographics and clinical eye examinations) and RNFL OCT scans were identified for patients with glaucoma from an academic center (2008-2023). Comparing the novel TabNet deep learning architecture to a baseline XGBoost model, we trained and evaluated single-modality models using either EHR or RNFL features, as well as fusion models combining both EHR and RNFL features as inputs, to predict glaucoma surgery within 12 months (binary). Results A total of 1472 patients with glaucoma were included in this study, of whom 29.9% (N = 367) progressed to glaucoma surgery. The TabNet fusion model achieved the highest performance on the test set with an area under the receiver operating characteristic curve (AUROC) of 0.832, compared to the XGBoost fusion model (AUROC = 0.747). EHR-only models performed with an AUROC of 0.764 and 0.720 for the deep learning and XGBoost models, respectively. RNFL-only models performed with an AUROC of 0.624 and 0.633 for the deep learning and XGBoost models, respectively. Conclusions Fusion models which integrate RNFL with EHR data outperform models utilizing only one data type or the other to predict glaucoma progression. The deep learning TabNet architecture demonstrated superior performance to traditional XGBoost models. Translational Relevance Prediction models that utilize the wealth of structured clinical and imaging data to predict glaucoma progression could form the basis of future clinical decision support tools to personalize glaucoma care.
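A minimal early-fusion baseline in the spirit of the XGBoost fusion model can be sketched as below; the EHR and RNFL feature matrices are random placeholders, and the TabNet architecture is not reproduced here.

```python
# Minimal early-fusion baseline: EHR features and RNFL-thickness features are
# concatenated into one input matrix for a gradient-boosted classifier.
# Feature values are random placeholders, not study data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
ehr = rng.normal(size=(1472, 20))      # demographics + clinical eye-exam features
rnfl = rng.normal(size=(1472, 64))     # RNFL OCT thickness features
y = rng.integers(0, 2, size=1472)      # surgery within 12 months (binary)

X = np.hstack([ehr, rnfl])             # early fusion: concatenate both modalities
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0, stratify=y)

clf = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05, eval_metric="logloss")
clf.fit(X_tr, y_tr)
print("fusion AUROC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```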
Affiliation(s)
- Abigail Koornwinder
- Department of Ophthalmology, Byers Eye Institute, Stanford University, Palo Alto, CA, USA
- Youchen Zhang
- Department of Ophthalmology, Byers Eye Institute, Stanford University, Palo Alto, CA, USA
- Rohith Ravindranath
- Department of Ophthalmology, Byers Eye Institute, Stanford University, Palo Alto, CA, USA
- Robert T. Chang
- Department of Ophthalmology, Byers Eye Institute, Stanford University, Palo Alto, CA, USA
- Isaac A. Bernstein
- Department of Ophthalmology, Byers Eye Institute, Stanford University, Palo Alto, CA, USA
- Sophia Y. Wang
- Department of Ophthalmology, Byers Eye Institute, Stanford University, Palo Alto, CA, USA
6. Ravindranath R, Stein JD, Hernandez-Boussard T, Fisher AC, Wang SY. The Impact of Race, Ethnicity, and Sex on Fairness in Artificial Intelligence for Glaucoma Prediction Models. Ophthalmol Sci 2025;5:100596. PMID: 39386055; PMCID: PMC11462200; DOI: 10.1016/j.xops.2024.100596.
Abstract
Objective Despite advances in artificial intelligence (AI) in glaucoma prediction, most works lack multicenter focus and do not consider fairness concerning sex, race, or ethnicity. This study aims to examine the impact of these sensitive attributes on developing fair AI models that predict glaucoma progression to necessitating incisional glaucoma surgery. Design Database study. Participants Thirty-nine thousand ninety patients with glaucoma, as identified by International Classification of Disease codes from 7 academic eye centers participating in the Sight OUtcomes Research Collaborative. Methods We developed XGBoost models using 3 approaches: (1) excluding sensitive attributes as input features, (2) including them explicitly as input features, and (3) training separate models for each group. Model input features included demographic details, diagnosis codes, medications, and clinical information (intraocular pressure, visual acuity, etc.), from electronic health records. The models were trained on patients from 5 sites (N = 27 999) and evaluated on a held-out internal test set (N = 3499) and 2 external test sets consisting of N = 1550 and N = 2542 patients. Main Outcomes and Measures Area under the receiver operating characteristic curve (AUROC) and equalized odds on the test set and external sites. Results Six thousand six hundred eighty-two (17.1%) of 39 090 patients underwent glaucoma surgery with a mean age of 70.1 (standard deviation 14.6) years, 54.5% female, 62.3% White, 22.1% Black, and 4.7% Latinx/Hispanic. We found that not including the sensitive attributes led to better classification performance (AUROC: 0.77-0.82) but worsened fairness when evaluated on the internal test set. However, on external test sites, the opposite was true: including sensitive attributes resulted in better classification performance (AUROC: external #1 - [0.73-0.81], external #2 - [0.67-0.70]), but varying degrees of fairness for sex and race as measured by equalized odds. Conclusions Artificial intelligence models predicting whether patients with glaucoma progress to surgery demonstrated bias with respect to sex, race, and ethnicity. The effect of sensitive attribute inclusion and exclusion on fairness and performance varied based on internal versus external test sets. Prior to deployment, AI models should be evaluated for fairness on the target population. Financial Disclosures Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
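The equalized-odds criterion used above can be checked, in simplified form, by comparing true-positive and false-positive rates across groups defined by a sensitive attribute; the sketch below uses placeholder predictions and group labels rather than the study's data.

```python
# Sketch of an equalized-odds check: compare true-positive and false-positive
# rates across groups. Labels, predictions, and groups are placeholders; the
# study's evaluation is more involved.
import numpy as np

def group_rates(y_true, y_pred, groups):
    rates = {}
    for g in np.unique(groups):
        m = groups == g
        pos, neg = (y_true[m] == 1), (y_true[m] == 0)
        tpr = (y_pred[m][pos] == 1).mean() if pos.any() else float("nan")
        fpr = (y_pred[m][neg] == 1).mean() if neg.any() else float("nan")
        rates[g] = (tpr, fpr)
    return rates

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
y_pred = rng.integers(0, 2, 1000)
groups = rng.choice(["A", "B", "C"], 1000)

rates = group_rates(y_true, y_pred, groups)
tprs = [r[0] for r in rates.values()]
fprs = [r[1] for r in rates.values()]
# Equalized odds is satisfied when both gaps are (close to) zero.
print("TPR gap:", max(tprs) - min(tprs), "FPR gap:", max(fprs) - min(fprs))
```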
Affiliation(s)
- Rohith Ravindranath
- Department of Ophthalmology, Byers Eye Institute, Stanford University, Palo Alto, California
- Joshua D. Stein
- Department of Ophthalmology & Visual Sciences, University of Michigan Kellogg Eye Center, Ann Arbor, Michigan
- A. Caroline Fisher
- Department of Ophthalmology, Byers Eye Institute, Stanford University, Palo Alto, California
- Sophia Y. Wang
- Department of Ophthalmology, Byers Eye Institute, Stanford University, Palo Alto, California
7. Wolf J, Chemudupati T, Kumar A, Franco JA, Montague AA, Lin CC, Lee WS, Fisher AC, Goldberg JL, Mruthyunjaya P, Chang RT, Mahajan VB. Using Electronic Health Record Data to Determine the Safety of Aqueous Humor Liquid Biopsies for Molecular Analyses. Ophthalmol Sci 2024;4:100517. PMID: 38881613; PMCID: PMC11179400; DOI: 10.1016/j.xops.2024.100517.
Abstract
Purpose Knowing the surgical safety of anterior chamber liquid biopsies will support the increased use of proteomics and other molecular analyses to better understand disease mechanisms and therapeutic responses in patients and clinical trials. Manual review of operative notes from different surgeons and procedures in electronic health records (EHRs) is cumbersome, but free-text software tools could facilitate efficient searches. Design Retrospective case series. Participants A total of 1418 aqueous humor liquid biopsies from patients undergoing intraocular surgery. Methods Free-text EHR searches were performed using the Stanford Research Repository cohort discovery tool to identify complications associated with anterior chamber paracentesis and subsequent endophthalmitis. Complications of the surgery unrelated to the biopsy were not reviewed. Main Outcome Measures Biopsy-associated intraoperative complications and endophthalmitis. Results A total of 1418 aqueous humor liquid biopsies were performed by 17 experienced surgeons. EHR free-text searches were 100% error-free for surgical complications, >99% for endophthalmitis (<1% false positive), and >93.6% for anesthesia type, requiring manual review for only a limited number of cases. More than 85% of cases were performed under local anesthesia without ocular muscle akinesia. Although the most common indication was cataract (50.1%), other diagnoses included glaucoma, diabetic retinopathy, uveitis, age-related macular degeneration, endophthalmitis, retinitis pigmentosa, and uveal melanoma. A 50- to 100-μL sample was collected in all cases using either a 30-gauge needle or a blunt cannula via a paracentesis. The median follow-up was >7 months. There was only one minor complication (0.07%) identified: a case of a small tear in Descemet membrane without long-term sequelae. No other complications occurred, including other corneal injuries, lens or iris trauma, hyphema, or suprachoroidal hemorrhage. There was no case of postoperative endophthalmitis. Conclusions Anterior chamber liquid biopsy during intraocular surgery is a safe procedure and may be considered for large-scale collection of aqueous humor samples for molecular analyses. Free-text EHR searches are an efficient approach to reviewing intraoperative procedures. Financial Disclosures Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
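As a generic illustration of free-text screening (not the Stanford Research Repository cohort discovery tool used in the study), a keyword screen over operative notes might look like the sketch below; the complication terms and note texts are invented.

```python
# Generic illustration of a free-text screen over operative notes: flag notes
# containing complication-related terms so only those need manual chart review.
# This is not the cohort discovery tool used in the study.
import re

COMPLICATION_TERMS = [
    r"descemet", r"hyphema", r"suprachoroidal", r"endophthalmitis",
    r"iris (trauma|damage)", r"lens (touch|trauma)",
]
pattern = re.compile("|".join(COMPLICATION_TERMS), flags=re.IGNORECASE)

notes = {
    "case_001": "Paracentesis performed with 30-gauge needle. No complications noted.",
    "case_002": "Small tear in Descemet membrane observed near the paracentesis site.",
}
flagged = {case: sorted(set(m.group(0).lower() for m in pattern.finditer(text)))
           for case, text in notes.items() if pattern.search(text)}
print(flagged)   # only flagged cases are sent for manual review
```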
Affiliation(s)
- Julian Wolf
- Department of Ophthalmology, Spencer Center for Vision Research, Byers Eye Institute, Stanford University, Palo Alto, California
- Molecular Surgery Laboratory, Stanford University, Palo Alto, California
- Faculty of Medicine, Eye Center, Medical Center, University of Freiburg, Freiburg, Germany
- Teja Chemudupati
- Department of Ophthalmology, Spencer Center for Vision Research, Byers Eye Institute, Stanford University, Palo Alto, California
- Molecular Surgery Laboratory, Stanford University, Palo Alto, California
- Aarushi Kumar
- Department of Ophthalmology, Spencer Center for Vision Research, Byers Eye Institute, Stanford University, Palo Alto, California
- Molecular Surgery Laboratory, Stanford University, Palo Alto, California
- Joel A Franco
- Department of Ophthalmology, Spencer Center for Vision Research, Byers Eye Institute, Stanford University, Palo Alto, California
- Molecular Surgery Laboratory, Stanford University, Palo Alto, California
- Artis A Montague
- Molecular Surgery Laboratory, Stanford University, Palo Alto, California
- Charles C Lin
- Molecular Surgery Laboratory, Stanford University, Palo Alto, California
- Wen-Shin Lee
- Molecular Surgery Laboratory, Stanford University, Palo Alto, California
- A Caroline Fisher
- Molecular Surgery Laboratory, Stanford University, Palo Alto, California
- Jeffrey L Goldberg
- Molecular Surgery Laboratory, Stanford University, Palo Alto, California
- Prithvi Mruthyunjaya
- Molecular Surgery Laboratory, Stanford University, Palo Alto, California
- Department of Radiation Oncology, Stanford University, Palo Alto, California
- Robert T Chang
- Molecular Surgery Laboratory, Stanford University, Palo Alto, California
- Vinit B Mahajan
- Department of Ophthalmology, Spencer Center for Vision Research, Byers Eye Institute, Stanford University, Palo Alto, California
- Molecular Surgery Laboratory, Stanford University, Palo Alto, California
- Veterans Affairs Palo Alto Health Care System, Palo Alto, California
8. Higgins BE, Leonard-Hawkhead B, Azuara-Blanco A. Quality of Reporting Electronic Health Record Data in Glaucoma: A Systematic Literature Review. Ophthalmol Glaucoma 2024;7:422-430. PMID: 38599318; DOI: 10.1016/j.ogla.2024.04.002.
Abstract
TOPIC Assessing reporting standards in glaucoma studies utilizing electronic health records (EHR). CLINICAL RELEVANCE Glaucoma's significance, underscored by its status as a leading cause of irreversible blindness worldwide, necessitates reliable research findings. This study evaluates adherence to the CODE-EHR best-practice framework in glaucoma studies using EHR, aiming to improve clinical care and patient outcomes. METHODS A systematic review, following Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines (PROSPERO CRD42023430025), identified relevant studies (January 2022-May 2023) in MEDLINE, EMBASE, CINAHL, and Web of Science. Eligible studies, using EHR data from clinical institutions for glaucoma research, were assessed for study design, participant characteristics, EHR data, and sources. Quality appraisal used the CODE-EHR best-practice framework, focusing on data construction, linkage, fitness for purpose, disease and outcome definitions, analysis, and ethics and governance. RESULTS Of 31 identified studies, predominant EHR sources were hospitals and clinical warehouses. Commonly reported elements included age, gender, glaucoma diagnosis, and intraocular pressure. Only 16% fully adhered to CODE-EHR best-practice framework's minimum standards, with none meeting preferred standards. While statistical analysis and ethical considerations were relatively well-addressed, areas such as EHR data management and study design showed room for improvement. Patient and public involvement, and acknowledgment of data linkage processes, data security, and storage reporting were often missed. CONCLUSION Adherence to CODE-EHR best-practice framework's standards in EHR-based studies of glaucoma can be improved upon. Standardized reporting of EHR data are essential to ensure the reliability of research, facilitating its translation into clinical practice and improving healthcare decision-making for better patient outcomes. FINANCIAL DISCLOSURE(S) Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
Affiliation(s)
- Bethany E Higgins
- Centre for Public Health, Institute of Clinical Science Block A, Royal Victoria Hospital, Belfast, United Kingdom; Optometry and Visual Sciences, School of Health & Psychological Sciences, City, University of London, London, United Kingdom
- Benedict Leonard-Hawkhead
- Centre for Public Health, Institute of Clinical Science Block A, Royal Victoria Hospital, Belfast, United Kingdom
- Augusto Azuara-Blanco
- Centre for Public Health, Institute of Clinical Science Block A, Royal Victoria Hospital, Belfast, United Kingdom
9. Târcoveanu F, Leon F, Lisa C, Curteanu S, Feraru A, Ali K, Anton N. The use of artificial neural networks in studying the progression of glaucoma. Sci Rep 2024;14:19597. PMID: 39179625; PMCID: PMC11344130; DOI: 10.1038/s41598-024-70748-1.
Abstract
In ophthalmology, artificial intelligence methods show great promise due to their potential to enhance clinical observations with predictive capabilities and support physicians in diagnosing and treating patients. This paper focuses on modelling glaucoma evolution because it requires early diagnosis, individualized treatment, and lifelong monitoring. Glaucoma is a chronic, progressive, irreversible, multifactorial optic neuropathy that primarily affects elderly individuals. It is important to emphasize that the processed data are taken from medical records, unlike other studies in the literature that rely on image acquisition and processing. Although more challenging to handle, this approach has the advantage of including a wide range of parameters in large numbers, which can highlight their potential influence. Artificial neural networks are used to study glaucoma progression, designed through successive trials for near-optimal configurations using the NeuroSolutions and PyTorch frameworks. Furthermore, different problems are formulated to demonstrate the influence of various structural and functional parameters on the study of glaucoma progression. Optimal neural networks were obtained using a program written in Python using the PyTorch deep learning framework. For various tasks, very small errors in training and validation, under 5%, were obtained. It has been demonstrated that very good results can be achieved, making them credible and useful for medical practice.
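A compact PyTorch sketch of fitting a small feed-forward network to tabular record data, with train and validation error reporting, is shown below; the data are synthetic, and the layer sizes are not the near-optimal configurations identified in the paper.

```python
# Compact PyTorch sketch of fitting a small feed-forward network to tabular
# medical-record parameters. Data are synthetic placeholders; the architecture
# is illustrative, not the paper's tuned configuration.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(800, 15)                       # structural/functional parameters
y = (X[:, 0] + 0.5 * X[:, 3] > 0).float()      # synthetic progression label
X_tr, y_tr, X_va, y_va = X[:600], y[:600], X[600:], y[600:]

net = nn.Sequential(nn.Linear(15, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(net(X_tr).squeeze(-1), y_tr)
    loss.backward()
    opt.step()

with torch.no_grad():
    for name, Xs, ys in [("train", X_tr, y_tr), ("validation", X_va, y_va)]:
        err = ((torch.sigmoid(net(Xs).squeeze(-1)) > 0.5).float() != ys).float().mean()
        print(f"{name} error: {err.item():.3f}")
```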
Affiliation(s)
- Filip Târcoveanu
- Ophthalmology Department, Faculty of Medicine, University of Medicine and Pharmacy "Gr. T. Popa" Iasi, University Street No 16, 700115, Iasi, Romania
- Florin Leon
- Faculty of Automatic Control and Computer Engineering, "Gheorghe Asachi" Technical University of Iasi, 27 Mangeron Street, 700050, Iasi, Romania
- Cătălin Lisa
- Department of Chemical Engineering, Faculty of Chemical Engineering and Environmental Protection "Cristofor Simionescu", "Gheorghe Asachi" Technical University of Iasi, 73 Mangeron Street, 700050, Iasi, Romania
- Silvia Curteanu
- Department of Chemical Engineering, Faculty of Chemical Engineering and Environmental Protection "Cristofor Simionescu", "Gheorghe Asachi" Technical University of Iasi, 73 Mangeron Street, 700050, Iasi, Romania
- Andreea Feraru
- Faculty of Economic Science, "Vasile Alecsandri" University of Bacau, Calea Marasesti 156, 600115, Bacau, Romania
- Kashif Ali
- Countess of Chester Hospital, Liverpool Rd, Chester, CH21UL, UK
- Nicoleta Anton
- Ophthalmology Department, Faculty of Medicine, University of Medicine and Pharmacy "Gr. T. Popa" Iasi, University Street No 16, 700115, Iasi, Romania
10. Wu JH, Lin S, Moghimi S. Big data to guide glaucoma treatment. Taiwan J Ophthalmol 2024;14:333-339. PMID: 39430357; PMCID: PMC11488808; DOI: 10.4103/tjo.tjo-d-23-00068.
Abstract
Ophthalmology has been at the forefront of the medical application of big data. Often harnessed with a machine learning approach, big data has demonstrated potential to transform ophthalmic care, as evidenced by prior success on clinical tasks such as the screening of ophthalmic diseases and lesions via retinal images. With the recent establishment of various large ophthalmic datasets, there has been greater interest in determining whether the benefits of big data may extend to the downstream process of ophthalmic disease management. An area of substantial investigation has been the use of big data to help guide or streamline management of glaucoma, which remains a leading cause of irreversible blindness worldwide. In this review, we summarize relevant studies utilizing big data and discuss the application of the findings in the risk assessment and treatment of glaucoma.
Affiliation(s)
- Jo-Hsuan Wu
- Hamilton Glaucoma Center, Shiley Eye Institute and Viterbi Family Department of Ophthalmology, University of California San Diego, La Jolla, CA, United States
- Shan Lin
- Glaucoma Center of San Francisco, San Francisco, CA, United States
- Sasan Moghimi
- Hamilton Glaucoma Center, Shiley Eye Institute and Viterbi Family Department of Ophthalmology, University of California San Diego, La Jolla, CA, United States
11. Pham AT, Pan AA, Yohannan J. Big data in visual field testing for glaucoma. Taiwan J Ophthalmol 2024;14:289-298. PMID: 39430358; PMCID: PMC11488814; DOI: 10.4103/tjo.tjo-d-24-00059.
Abstract
Recent technological advancements and the advent of ever-growing databases in health care have fueled the emergence of "big data" analytics. Big data has the potential to revolutionize health care, particularly ophthalmology, given the data-intensive nature of the medical specialty. As one of the leading causes of irreversible blindness worldwide, glaucoma is an ocular disease that receives significant interest for developing innovations in eye care. Among the most vital sources of data in glaucoma is visual field (VF) testing, which stands as a cornerstone for diagnosing and managing the disease. The expanding accessibility of large VF databases has led to a surge in studies investigating various applications of big data analytics in glaucoma. In this study, we review the use of big data for evaluating the reliability of VF tests, gaining insights into real-world clinical practices and outcomes, understanding new disease associations and risk factors, characterizing the patterns of VF loss, defining the structure-function relationship of glaucoma, enhancing early diagnosis or earlier detection of progression, informing clinical decisions, and improving clinical trials. Equally important, we discuss current challenges in big data analytics and future directions for improvement.
Affiliation(s)
- Alex T. Pham
- Wilmer Eye Institute, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Annabelle A. Pan
- Wilmer Eye Institute, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Jithin Yohannan
- Wilmer Eye Institute, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, Maryland, USA
12. Bernstein IA, Fernandez KS, Stein JD, Pershing S, Wang SY. Big data and electronic health records for glaucoma research. Taiwan J Ophthalmol 2024;14:352-359. PMID: 39430348; PMCID: PMC11488813; DOI: 10.4103/tjo.tjo-d-24-00055.
Abstract
The digitization of health records through electronic health records (EHRs) has transformed the landscape of ophthalmic research, particularly in the study of glaucoma. EHRs offer a wealth of structured and unstructured data, allowing for comprehensive analyses of patient characteristics, treatment histories, and outcomes. This review comprehensively discusses different EHR data sources, their strengths, limitations, and applicability towards glaucoma research. Institutional EHR repositories provide detailed multimodal clinical data, enabling in-depth investigations into conditions such as glaucoma and facilitating the development of artificial intelligence applications. Multicenter initiatives such as the Sight Outcomes Research Collaborative and the Intelligent Research In Sight registry offer larger, more diverse datasets, enhancing the generalizability of findings and supporting large-scale studies on glaucoma epidemiology, treatment outcomes, and practice patterns. The All of Us Research Program, with a special emphasis on diversity and inclusivity, presents a unique opportunity for glaucoma research by including underrepresented populations and offering comprehensive health data even beyond the EHR. Challenges persist, such as data access restrictions and standardization issues, but may be addressed through continued collaborative efforts between researchers, institutions, and regulatory bodies. Standardized data formats and improved data linkage methods, especially for ophthalmic imaging and testing, would further enhance the utility of EHR datasets for ophthalmic research, ultimately advancing our understanding and treatment of glaucoma and other ocular diseases on a global scale.
Affiliation(s)
- Isaac A. Bernstein
- Department of Ophthalmology, Byers Eye Institute, Stanford University, California
- Karen S. Fernandez
- Department of Ophthalmology, Byers Eye Institute, Stanford University, California
- Joshua D. Stein
- Division of Ophthalmology and Visual Sciences, University of Michigan Kellogg Eye Center, Ann Arbor, MI, USA
- Suzann Pershing
- Department of Ophthalmology, Byers Eye Institute, Stanford University, California
- Sophia Y. Wang
- Department of Ophthalmology, Byers Eye Institute, Stanford University, California
13. Karimi A, Stanik A, Kozitza C, Chen A. Integrating Deep Learning with Electronic Health Records for Early Glaucoma Detection: A Multi-Dimensional Machine Learning Approach. Bioengineering (Basel) 2024;11:577. PMID: 38927813; PMCID: PMC11200568; DOI: 10.3390/bioengineering11060577.
Abstract
BACKGROUND Recent advancements in deep learning have significantly impacted ophthalmology, especially in glaucoma, a leading cause of irreversible blindness worldwide. In this study, we developed a reliable predictive model for glaucoma detection using deep learning models based on clinical data, social and behavioral risk factors, and demographic data from 1652 participants, split evenly between 826 control subjects and 826 glaucoma patients. METHODS We extracted structured data from control and glaucoma patients' electronic health records (EHRs). Three distinct machine learning classifiers were employed to conduct predictive analyses across our dataset: the Random Forest and Gradient Boosting algorithms and the Sequential model from TensorFlow's Keras library. Key performance metrics such as accuracy, F1 score, precision, recall, and the area under the receiver operating characteristic curve (AUC) were computed to both train and optimize these models. RESULTS The Random Forest model achieved an accuracy of 67.5%, with a ROC AUC of 0.67, outperforming the Gradient Boosting and Sequential models, which registered accuracies of 66.3% and 64.5%, respectively. Our results highlighted key predictive factors such as intraocular pressure, family history, and body mass index, substantiating their roles in glaucoma risk assessment. CONCLUSIONS This study demonstrates the potential of utilizing readily available clinical, lifestyle, and demographic data from EHRs for glaucoma detection through deep learning models. While our model, using EHR data alone, has a lower accuracy compared to those incorporating imaging data, it still offers a promising avenue for early glaucoma risk assessment in primary care settings. The observed disparities in model performance and feature significance show the importance of tailoring detection strategies to individual patient characteristics, potentially leading to more effective and personalized glaucoma screening and intervention.
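A sketch comparing the two scikit-learn ensemble classifiers named above on a synthetic EHR-style table is given below (the Keras Sequential model is omitted); features such as IOP, family history, and BMI are simulated placeholders.

```python
# Sketch comparing the two scikit-learn ensemble classifiers named in the study
# on a synthetic EHR-style table. Feature values and labels are simulated.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1652, 10))        # e.g., IOP, family history, BMI, demographics
y = rng.integers(0, 2, size=1652)      # control (0) vs. glaucoma (1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0, stratify=y)

for name, clf in [("Random Forest", RandomForestClassifier(n_estimators=300, random_state=0)),
                  ("Gradient Boosting", GradientBoostingClassifier(random_state=0))]:
    clf.fit(X_tr, y_tr)
    proba = clf.predict_proba(X_te)[:, 1]
    print(name, "accuracy:", accuracy_score(y_te, proba > 0.5),
          "ROC AUC:", roc_auc_score(y_te, proba))
```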
Affiliation(s)
- Alireza Karimi
- Department of Ophthalmology, Casey Eye Institute, Oregon Health and Science University, Portland, OR 97239, USA
- Department of Biomedical Engineering, Oregon Health and Science University, Portland, OR 97239, USA
- Ansel Stanik
- Department of Ophthalmology, Casey Eye Institute, Oregon Health and Science University, Portland, OR 97239, USA
- Cooper Kozitza
- Department of Ophthalmology, Casey Eye Institute, Oregon Health and Science University, Portland, OR 97239, USA
- Aiyin Chen
- Department of Ophthalmology, Casey Eye Institute, Oregon Health and Science University, Portland, OR 97239, USA
14. He D, Chung STL. Using natural language processing to link patients' narratives to visual capabilities and sentiments. Optom Vis Sci 2024;101:379-387. PMID: 38990236; PMCID: PMC11245166; DOI: 10.1097/opx.0000000000002154.
Abstract
SIGNIFICANCE Analyzing narratives in patients' medical records using a framework that combines natural language processing (NLP) and machine learning may help uncover the underlying patterns of patients' visual capabilities and challenges that they are facing and could be useful in analyzing big data in optometric research. PURPOSE The primary goal of this study was to demonstrate the feasibility of applying a framework that combines NLP and machine learning to analyze narratives in patients' medical records. To test and validate our framework, we applied it to analyze records of low vision patients and to address two questions: Was there association between patients' narratives related to activities of daily living and the quality of their vision? Was there association between patients' narratives related to activities of daily living and their sentiments toward certain "assistive items"? METHODS Our dataset consisted of 616 records of low vision patients. From patients' complaint history, we selected multiple keywords that were related to common activities of daily living. Sentences related to each keyword were converted to numerical data using NLP techniques. Machine learning was then applied to classify the narratives related to each keyword into two categories, labeled based on different "factors of interest" (acuity, contrast sensitivity, and sentiments of patients toward certain "assistive items"). RESULTS Using our proposed framework, when patients' narratives related to specific keywords were used as input, our model effectively predicted the categories of different factors of interest with promising performance. For example, we found strong associations between patients' narratives and their acuity or contrast sensitivity for certain activities of daily living (e.g., "drive" in association with acuity and contrast sensitivity). CONCLUSIONS Despite our limited dataset, our results show that the proposed framework was able to extract the semantic patterns stored in medical narratives and to predict patients' sentiments and quality of vision.
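A minimal sketch of this keyword-anchored pipeline (extract sentences mentioning a keyword, vectorize them, and classify a binary factor of interest) is shown below; the records, labels, and the TF-IDF plus logistic-regression choice are illustrative assumptions rather than the authors' exact NLP stack.

```python
# Minimal sketch of the keyword-anchored pipeline: pull the sentences that
# mention a keyword (e.g., "drive"), vectorize them, and fit a classifier for a
# binary factor of interest (e.g., better vs. worse acuity). Data are invented.
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def sentences_with(keyword, record):
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", record)
            if keyword in s.lower()]

records = [
    "Patient reports difficulty reading road signs. Cannot drive at night anymore.",
    "Able to drive to the grocery store without problems. Reads large print.",
]
labels = [1, 0]   # 1 = worse acuity group, 0 = better acuity group (invented labels)

texts = [" ".join(sentences_with("drive", r)) for r in records]
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict([" ".join(sentences_with("drive", "He can no longer drive after dusk."))]))
```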
Affiliation(s)
- Susana T L Chung
- Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, Berkeley, California
15. Biswas S, Davies LN, Sheppard AL, Logan NS, Wolffsohn JS. Utility of artificial intelligence-based large language models in ophthalmic care. Ophthalmic Physiol Opt 2024;44:641-671. PMID: 38404172; DOI: 10.1111/opo.13284.
Abstract
PURPOSE With the introduction of ChatGPT, artificial intelligence (AI)-based large language models (LLMs) are rapidly becoming popular within the scientific community. They use natural language processing to generate human-like responses to queries. However, the application of LLMs and comparison of the abilities among different LLMs with their human counterparts in ophthalmic care remain under-reported. RECENT FINDINGS Hitherto, studies in eye care have demonstrated the utility of ChatGPT in generating patient information, clinical diagnosis and passing ophthalmology question-based examinations, among others. LLMs' performance (median accuracy, %) is influenced by factors such as the iteration, prompts utilised and the domain. Human expert (86%) demonstrated the highest proficiency in disease diagnosis, while ChatGPT-4 outperformed others in ophthalmology examinations (75.9%), symptom triaging (98%) and providing information and answering questions (84.6%). LLMs exhibited superior performance in general ophthalmology but reduced accuracy in ophthalmic subspecialties. Although AI-based LLMs like ChatGPT are deemed more efficient than their human counterparts, these AIs are constrained by their nonspecific and outdated training, no access to current knowledge, generation of plausible-sounding 'fake' responses or hallucinations, inability to process images, lack of critical literature analysis and ethical and copyright issues. A comprehensive evaluation of recently published studies is crucial to deepen understanding of LLMs and the potential of these AI-based LLMs. SUMMARY Ophthalmic care professionals should undertake a conservative approach when using AI, as human judgement remains essential for clinical decision-making and monitoring the accuracy of information. This review identified the ophthalmic applications and potential usages which need further exploration. With the advancement of LLMs, setting standards for benchmarking and promoting best practices is crucial. Potential clinical deployment requires the evaluation of these LLMs to move away from artificial settings, delve into clinical trials and determine their usefulness in the real world.
Affiliation(s)
- Sayantan Biswas
- School of Optometry, College of Health and Life Sciences, Aston University, Birmingham, UK
- Leon N Davies
- School of Optometry, College of Health and Life Sciences, Aston University, Birmingham, UK
- Amy L Sheppard
- School of Optometry, College of Health and Life Sciences, Aston University, Birmingham, UK
- Nicola S Logan
- School of Optometry, College of Health and Life Sciences, Aston University, Birmingham, UK
- James S Wolffsohn
- School of Optometry, College of Health and Life Sciences, Aston University, Birmingham, UK
16. Sulonen S, Leinonen S, Lehtonen E, Hujanen P, Vaajanen A, Syvänen U, Hemelings R, Stalmans I, Tuulonen A, Uusitalo-Jarvinen H. A prototype protocol for evaluating the real-world data set using a structured electronic health record in glaucoma. Acta Ophthalmol 2024;102:216-227. PMID: 37753831; DOI: 10.1111/aos.15763.
Abstract
PURPOSE As the first step in monitoring and evaluating day-to-day glaucoma care, this study reports all real-world data recorded during the first full year after the implementation of a prototype for glaucoma-specific structured electronic healthcare record (EHR). METHODS In 2019, 4618 patients visited Tays Medical Glaucoma Clinic at Tays Eye Centre, Tampere University Hospital, Finland, that serves a population of 0.53 M. Patient data were entered into a glaucoma-specific EHR by trained nurses to be checked by glaucoma specialists. Tays Eye Centre follows the Finnish Current Care Guideline for glaucoma in which glaucoma is defined using a '2 out of 3' rule, that is, ≥2 findings evaluated as glaucomatous in optic nerve head (ONH), retinal nerve fibre layer (RNFL) and visual field (VF). RESULTS The clinical evaluations of ONH, RNFL and VF were recorded in 95%-100% of all eyes. ONH was evaluated as glaucomatous more often (44%) than RNFL (33%) and VF tests (30%). Progressive changes in any of the three tests were recorded in 35% of the '≥2/3 glaucoma group' compared to 2%-9% in the other groups. The mean IOP at visit was 15 mmHg. The mean target IOP was 17 mmHg, and it was recorded in 94% of eyes. CONCLUSION The developed structured data presentation enables comparisons between different population-based real-world glaucoma data sets and glaucoma clinics. Compared to a data set from the UK, the proportion of glaucoma suspicion-related visits was smaller in Tays Eye Centre and test intervals were longer.
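The "2 out of 3" rule described above can be written directly as a small helper; the function name and boolean inputs are illustrative.

```python
# Tiny sketch of the Finnish Current Care "2 out of 3" rule described above:
# an eye is classified as glaucomatous when at least two of the optic nerve
# head (ONH), retinal nerve fibre layer (RNFL), and visual field (VF)
# evaluations are recorded as glaucomatous.
def two_of_three_glaucoma(onh_glaucomatous: bool, rnfl_glaucomatous: bool, vf_glaucomatous: bool) -> bool:
    return sum([onh_glaucomatous, rnfl_glaucomatous, vf_glaucomatous]) >= 2

print(two_of_three_glaucoma(True, True, False))   # True  -> meets the >=2/3 definition
print(two_of_three_glaucoma(True, False, False))  # False -> does not meet the definition
```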
Affiliation(s)
- Sakari Sulonen
- Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland
- Sanna Leinonen
- Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland
- Tays Eye Centre, Tampere University Hospital, Tampere, Finland
- Eemil Lehtonen
- Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland
- Pekko Hujanen
- Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland
- Tays Eye Centre, Tampere University Hospital, Tampere, Finland
- Anu Vaajanen
- Eye and Vision Research SILK, Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland
- Ulla Syvänen
- Tays Eye Centre, Tampere University Hospital, Tampere, Finland
- Ruben Hemelings
- Department of Neurosciences, Research Group Ophthalmology, KU Leuven, Leuven, Belgium
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- SERI-NTU Advanced Ocular Engineering (STANCE) programme, Singapore, Singapore
- Ingeborg Stalmans
- Department of Neurosciences, Research Group Ophthalmology, KU Leuven, Leuven, Belgium
- Ophthalmology Department, UZ Leuven, Leuven, Belgium
- Anja Tuulonen
- Tays Eye Centre, Tampere University Hospital, Tampere, Finland
- Hannele Uusitalo-Jarvinen
- Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland
- Tays Eye Centre, Tampere University Hospital, Tampere, Finland
17. Lee CS. Entering the Exciting Era of Artificial Intelligence and Big Data in Ophthalmology. Ophthalmol Sci 2024;4:100469. PMID: 38333043; PMCID: PMC10851194; DOI: 10.1016/j.xops.2024.100469.
18. Christopher M, Gonzalez R, Huynh J, Walker E, Radha Saseendrakumar B, Bowd C, Belghith A, Goldbaum MH, Fazio MA, Girkin CA, De Moraes CG, Liebmann JM, Weinreb RN, Baxter SL, Zangwill LM. Proactive Decision Support for Glaucoma Treatment: Predicting Surgical Interventions with Clinically Available Data. Bioengineering (Basel) 2024;11:140. PMID: 38391627; PMCID: PMC10886033; DOI: 10.3390/bioengineering11020140.
Abstract
A longitudinal ophthalmic dataset was used to investigate multi-modal machine learning (ML) models incorporating patient demographics and history, clinical measurements, optical coherence tomography (OCT), and visual field (VF) testing in predicting glaucoma surgical interventions. The cohort included 369 patients who underwent glaucoma surgery and 592 patients who did not undergo surgery. The data types used for prediction included patient demographics, history of systemic conditions, medication history, ophthalmic measurements, 24-2 VF results, and thickness measurements from OCT imaging. The ML models were trained to predict surgical interventions and evaluated on independent data collected at a separate study site. The models were evaluated based on their ability to predict surgeries at varying lengths of time prior to surgical intervention. The highest performing predictions achieved an AUC of 0.93, 0.92, and 0.93 in predicting surgical intervention at 1 year, 2 years, and 3 years, respectively. The models were also able to achieve high sensitivity (0.89, 0.77, 0.86 at 1, 2, and 3 years, respectively) and specificity (0.85, 0.90, and 0.91 at 1, 2, and 3 years, respectively) at an 0.80 level of precision. The multi-modal models trained on a combination of data types predicted surgical interventions with high accuracy up to three years prior to surgery and could provide an important tool to predict the need for glaucoma intervention.
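Reporting sensitivity and specificity at a fixed 0.80 precision operating point, as in the evaluation above, can be sketched as follows; the scores and labels are random placeholders.

```python
# Sketch of reporting sensitivity and specificity at a fixed 0.80 precision
# operating point. Scores and labels are random placeholders, not study data.
import numpy as np
from sklearn.metrics import precision_recall_curve

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
scores = np.clip(y_true * 0.3 + rng.normal(0.4, 0.25, 1000), 0, 1)  # synthetic model scores

precision, recall, thresholds = precision_recall_curve(y_true, scores)
ok = np.where(precision[:-1] >= 0.80)[0]          # thresholds achieving >= 0.80 precision
if ok.size:
    thr = thresholds[ok[0]]                        # lowest such threshold keeps recall highest
    pred = scores >= thr
    sensitivity = pred[y_true == 1].mean()
    specificity = (~pred[y_true == 0]).mean()
    print(f"threshold={thr:.2f} sensitivity={sensitivity:.2f} specificity={specificity:.2f}")
else:
    print("no threshold reaches 0.80 precision")
```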
Collapse
Affiliation(s)
- Mark Christopher
- Hamilton Glaucoma Center and Division of Ophthalmology Informatics and Data Science, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, CA 92037, USA
- Ruben Gonzalez
- Hamilton Glaucoma Center and Division of Ophthalmology Informatics and Data Science, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, CA 92037, USA
- Justin Huynh
- Hamilton Glaucoma Center and Division of Ophthalmology Informatics and Data Science, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, CA 92037, USA
- Evan Walker
- Hamilton Glaucoma Center and Division of Ophthalmology Informatics and Data Science, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, CA 92037, USA
- Bharanidharan Radha Saseendrakumar
- Hamilton Glaucoma Center and Division of Ophthalmology Informatics and Data Science, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, CA 92037, USA
- Christopher Bowd
- Hamilton Glaucoma Center and Division of Ophthalmology Informatics and Data Science, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, CA 92037, USA
- Akram Belghith
- Hamilton Glaucoma Center and Division of Ophthalmology Informatics and Data Science, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, CA 92037, USA
- Michael H Goldbaum
- Hamilton Glaucoma Center and Division of Ophthalmology Informatics and Data Science, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, CA 92037, USA
- Massimo A Fazio
- Department of Ophthalmology and Vision Sciences, Heersink School of Medicine, University of Alabama at Birmingham, Birmingham, AL 35233, USA
- Christopher A Girkin
- Department of Ophthalmology and Vision Sciences, Heersink School of Medicine, University of Alabama at Birmingham, Birmingham, AL 35233, USA
- Carlos Gustavo De Moraes
- Bernard and Shirlee Brown Glaucoma Research Laboratory, Department of Ophthalmology, Edward S. Harkness Eye Institute, Columbia University Medical Center, New York, NY 10032, USA
- Jeffrey M Liebmann
- Bernard and Shirlee Brown Glaucoma Research Laboratory, Department of Ophthalmology, Edward S. Harkness Eye Institute, Columbia University Medical Center, New York, NY 10032, USA
- Robert N Weinreb
- Hamilton Glaucoma Center and Division of Ophthalmology Informatics and Data Science, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, CA 92037, USA
- Sally L Baxter
- Hamilton Glaucoma Center and Division of Ophthalmology Informatics and Data Science, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, CA 92037, USA
- Linda M Zangwill
- Hamilton Glaucoma Center and Division of Ophthalmology Informatics and Data Science, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, CA 92037, USA
19
Wang R, Bradley C, Herbert P, Hou K, Ramulu P, Breininger K, Unberath M, Yohannan J. Deep learning-based identification of eyes at risk for glaucoma surgery. Sci Rep 2024; 14:599. [PMID: 38182701 PMCID: PMC10770345 DOI: 10.1038/s41598-023-50597-0] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2023] [Accepted: 12/21/2023] [Indexed: 01/07/2024] Open
Abstract
To develop and evaluate the performance of a deep learning model (DLM) that predicts eyes at high risk of surgical intervention for uncontrolled glaucoma based on multimodal data from an initial ophthalmology visit. Longitudinal, observational, retrospective study. 4898 unique eyes from 4038 adult glaucoma or glaucoma-suspect patients who underwent surgery for uncontrolled glaucoma (trabeculectomy, tube shunt, Xen, or diode surgery) between 2013 and 2021, or did not undergo glaucoma surgery but had 3 or more ophthalmology visits. We constructed a DLM to predict the occurrence of glaucoma surgery within various time horizons from a baseline visit. Model inputs included spatially oriented visual field (VF) and optical coherence tomography (OCT) data as well as clinical and demographic features. Separate DLMs with the same architecture were trained to predict the occurrence of surgery within 3 months, within 3-6 months, within 6 months-1 year, within 1-2 years, within 2-3 years, within 3-4 years, and within 4-5 years from the baseline visit. Included eyes were randomly split into 60%, 20%, and 20% for training, validation, and testing. DLM performance was measured using the area under the receiver operating characteristic curve (AUC) and the precision-recall curve (PRC). Shapley additive explanations (SHAP) were utilized to assess the importance of different features. Model prediction of surgery for uncontrolled glaucoma within 3 months had the best AUC of 0.92 (95% CI 0.88, 0.96). DLMs achieved clinically useful AUC values (> 0.8) for all models that predicted the occurrence of surgery within 3 years. According to SHAP analysis, all 7 models placed intraocular pressure (IOP) among the five most important features in predicting the occurrence of glaucoma surgery. Mean deviation (MD) and average retinal nerve fiber layer (RNFL) thickness were listed among the top five most important features by 6 of the 7 models. DLMs can successfully identify eyes requiring surgery for uncontrolled glaucoma within specific time horizons. Predictive performance decreases as the time horizon for forecasting surgery increases. Implementing prediction models in a clinical setting may help identify patients who should be referred to a glaucoma specialist for surgical evaluation.
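As a rough illustration of the SHAP feature-ranking step only (not the authors' deep model, features, or data), the sketch below trains a gradient-boosted stand-in classifier on synthetic features named after the measurements highlighted above and ranks them by mean absolute SHAP value; the shap and scikit-learn packages are assumed.

import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Placeholder features standing in for the clinical, VF, and OCT inputs.
feature_names = ["IOP", "MD", "mean_RNFL_thickness", "age", "CCT", "cup_disc_ratio"]
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 1] - 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

clf = GradientBoostingClassifier().fit(X, y)
explainer = shap.TreeExplainer(clf)
# For this binary gradient-boosting model the result is (samples, features);
# return shapes can differ with other model types or shap versions.
shap_values = np.asarray(explainer.shap_values(X[:200]))

mean_abs = np.abs(shap_values).mean(axis=0)
for name, value in sorted(zip(feature_names, mean_abs), key=lambda t: -t[1]):
    print(f"{name}: {value:.3f}")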
Affiliation(s)
- Ruolin Wang
- Malone Center of Engineering in Healthcare, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Chris Bradley
- Wilmer Eye Institute, Johns Hopkins University School of Medicine, 600 N Wolfe Street, Baltimore, MD, 21287, USA
- Patrick Herbert
- Wilmer Eye Institute, Johns Hopkins University School of Medicine, 600 N Wolfe Street, Baltimore, MD, 21287, USA
- Kaihua Hou
- Malone Center of Engineering in Healthcare, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Pradeep Ramulu
- Wilmer Eye Institute, Johns Hopkins University School of Medicine, 600 N Wolfe Street, Baltimore, MD, 21287, USA
- Katharina Breininger
- Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Mathias Unberath
- Malone Center of Engineering in Healthcare, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Jithin Yohannan
- Malone Center of Engineering in Healthcare, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Wilmer Eye Institute, Johns Hopkins University School of Medicine, 600 N Wolfe Street, Baltimore, MD, 21287, USA
20
Valbuena Rubio S, García-Ordás MT, García-Olalla Olivera O, Alaiz-Moretón H, González-Alonso MI, Benítez-Andrades JA. Survival and grade of the glioma prediction using transfer learning. PeerJ Comput Sci 2023; 9:e1723. [PMID: 38192446 PMCID: PMC10773899 DOI: 10.7717/peerj-cs.1723] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2023] [Accepted: 11/06/2023] [Indexed: 01/10/2024]
Abstract
Glioblastoma is a highly malignant brain tumor with a life expectancy of only 3-6 months without treatment. Detecting it and predicting its survival and grade accurately are crucial. This study introduces a novel approach using transfer learning techniques. Various pre-trained networks, including EfficientNet, ResNet, VGG16, and Inception, were tested through exhaustive optimization to identify the most suitable architecture. Transfer learning was applied to fine-tune these models on a glioblastoma image dataset, aiming to achieve two objectives: survival and tumor grade prediction. The experimental results show 65% accuracy in survival prediction, classifying patients into short, medium, or long survival categories. Additionally, the prediction of tumor grade achieved an accuracy of 97%, accurately differentiating low-grade gliomas (LGG) and high-grade gliomas (HGG). The success of the approach is attributed to the effectiveness of transfer learning, surpassing the current state-of-the-art methods. In conclusion, this study presents a promising method for predicting the survival and grade of glioblastoma. Transfer learning demonstrates its potential in enhancing prediction models, particularly in scenarios where large datasets are not available. These findings hold promise for improving diagnostic and treatment approaches for glioblastoma patients.
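The authors' exact architectures and training schedule are not reproduced here; as a minimal sketch of the transfer-learning idea (PyTorch/torchvision assumed, with a ResNet-50 backbone standing in for the backbones named above and a placeholder data loader), one might freeze the pretrained weights and train only a new classification head first.

import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained backbone and replace its head for LGG vs. HGG grading.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for param in model.parameters():                # freeze the pretrained backbone
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)   # new trainable head: LGG vs. HGG

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

def train_one_epoch(loader, device="cpu"):
    # `loader` is assumed to yield batches of (B, 3, H, W) image tensors and integer grade labels.
    model.to(device).train()
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images.to(device)), labels.to(device))
        loss.backward()
        optimizer.step()

A second stage that unfreezes the upper backbone layers at a much lower learning rate is a common refinement of this recipe.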
Affiliation(s)
- María Teresa García-Ordás
- SECOMUCI Research Group, Escuela de Ingenierías Industrial e Informática, Universidad de León, León, Spain
- Héctor Alaiz-Moretón
- SECOMUCI Research Group, Escuela de Ingenierías Industrial e Informática, Universidad de León, León, Spain
21
Tao S, Ravindranath R, Wang SY. Predicting Glaucoma Progression to Surgery with Artificial Intelligence Survival Models. OPHTHALMOLOGY SCIENCE 2023; 3:100336. [PMID: 37415920 PMCID: PMC10320266 DOI: 10.1016/j.xops.2023.100336] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/25/2023] [Revised: 05/16/2023] [Accepted: 05/17/2023] [Indexed: 07/08/2023]
Abstract
Purpose Prior artificial intelligence (AI) models for predicting glaucoma progression have used traditional classifiers that do not consider the longitudinal nature of patients' follow-up. In this study, we developed survival-based AI models for predicting glaucoma patients' progression to surgery, comparing the performance of regression-, tree-, and deep learning-based approaches. Design Retrospective observational study. Subjects Patients with glaucoma seen at a single academic center from 2008 to 2020 identified from electronic health records (EHRs). Methods From the EHRs, we identified 361 baseline features, including demographics, eye examinations, diagnoses, and medications. We trained AI survival models to predict patients' progression to glaucoma surgery using the following: (1) a penalized Cox proportional hazards (CPH) model with principal component analysis (PCA); (2) random survival forests (RSFs); (3) gradient-boosting survival (GBS); and (4) a deep learning model (DeepSurv). The concordance index (C-index) and mean cumulative/dynamic area under the curve (mean AUC) were used to evaluate model performance on a held-out test set. Explainability was investigated using Shapley values for feature importance and visualization of model-predicted cumulative hazard curves for patients with different treatment trajectories. Main Outcome Measures Progression to glaucoma surgery. Results Of the 4512 patients with glaucoma, 748 underwent glaucoma surgery, with a median follow-up of 1038 days. The DeepSurv model performed best overall (C-index, 0.775; mean AUC, 0.802) among the models studied in this article (CPH with PCA: C-index, 0.745; mean AUC, 0.780; RSF: C-index, 0.766; mean AUC, 0.804; GBS: C-index, 0.764; mean AUC, 0.791). Predicted cumulative hazard curves demonstrate how the models can distinguish between patients who underwent early surgery and patients who underwent surgery after more than 3000 days of follow-up or did not undergo surgery. Conclusions Artificial intelligence survival models can predict progression to glaucoma surgery using structured data from EHRs. Tree-based and deep learning-based models performed better at predicting glaucoma progression to surgery than the CPH regression model, potentially because of their better suitability for high-dimensional data sets. Future work predicting ophthalmic outcomes should consider using tree-based and deep learning-based survival AI models. Additional research is needed to develop and evaluate more sophisticated deep learning survival models that can incorporate clinical notes or imaging. Financial Disclosures Proprietary or commercial disclosure may be found after the references.
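The authors' models are not reproduced here; the sketch below only shows, with synthetic placeholder data and the scikit-survival package, how one of the listed approaches, a random survival forest, can be fit to a time-to-surgery outcome and scored with the concordance index.

import numpy as np
from sksurv.ensemble import RandomSurvivalForest
from sksurv.metrics import concordance_index_censored
from sksurv.util import Surv

# Synthetic stand-in for the EHR-derived baseline features and time-to-surgery outcome.
rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 20))
follow_up_days = rng.exponential(scale=1000, size=n)
had_surgery = rng.random(n) < 0.17            # roughly the event rate reported above

y = Surv.from_arrays(event=had_surgery, time=follow_up_days)
rsf = RandomSurvivalForest(n_estimators=200, random_state=0).fit(X[:400], y[:400])

risk_scores = rsf.predict(X[400:])            # higher score = higher predicted hazard
c_index = concordance_index_censored(had_surgery[400:], follow_up_days[400:], risk_scores)[0]
print("C-index:", round(c_index, 3))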
Affiliation(s)
- Shiqi Tao
- Byers Eye Institute, Department of Ophthalmology, Stanford University, Palo Alto, California
- Rohith Ravindranath
- Byers Eye Institute, Department of Ophthalmology, Stanford University, Palo Alto, California
- Sophia Y. Wang
- Byers Eye Institute, Department of Ophthalmology, Stanford University, Palo Alto, California
22
Hua C, Wu Y, Shi Y, Hu M, Xie R, Zhai G, Zhang XP. Steganography for medical record image. Comput Biol Med 2023; 165:107344. [PMID: 37603961 DOI: 10.1016/j.compbiomed.2023.107344] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2023] [Revised: 07/31/2023] [Accepted: 08/07/2023] [Indexed: 08/23/2023]
Abstract
Medical record images in EHR systems contain users' private information and are a valuable asset, and there is an urgent need to protect these data. Image steganography offers a potential solution. A steganographic model for medical record images is therefore developed based on StegaStamp. In contrast to natural images, medical record images are document images, which are very vulnerable to image cropping attacks. Therefore, we use text region segmentation and watermark region localization to combat image cropping attacks. The distortion network is designed to account for the distortion that can occur during transmission of medical record images, making the model robust against communication-induced distortions. In addition, building on StegaStamp, we introduce FISM as part of the loss function to reduce ripple texture in the steganographic image. The experimental results show that the designed distortion network and the FISM loss term are well suited to the steganographic task for medical record images in terms of decoding accuracy and image quality.
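The paper's encoder, decoder, distortion network, and FISM term are not reproduced here; purely as a sketch of the general shape of such a composite objective (PyTorch assumed, with `encoder`, `decoder`, and `distort` as placeholder callables and a simple total-variation term standing in for the FISM term the authors use against ripple texture), a training loss might combine message recovery and image fidelity as follows.

import torch.nn.functional as F

def total_variation(img):
    # Penalizes high-frequency "ripple" texture in the stego image.
    dh = (img[..., 1:, :] - img[..., :-1, :]).abs().mean()
    dw = (img[..., :, 1:] - img[..., :, :-1]).abs().mean()
    return dh + dw

def stego_loss(encoder, decoder, distort, cover, message,
               w_msg=1.0, w_img=1.0, w_tex=0.1):
    # encoder(cover, message) -> stego image; decoder(image) -> message logits;
    # distort(image) simulates transmission and cropping distortions.
    stego = encoder(cover, message)
    logits = decoder(distort(stego))
    msg_loss = F.binary_cross_entropy_with_logits(logits, message)   # decoding accuracy
    img_loss = F.mse_loss(stego, cover)                              # visual fidelity to the cover
    return w_msg * msg_loss + w_img * img_loss + w_tex * total_variation(stego)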
Affiliation(s)
- Chunjun Hua
- Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, 500 Dongchuan Road, Shanghai 200241, China
- Yue Wu
- Ophthalmology Department, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, 639 Zhizaoju Road, Shanghai 200011, China
- Yiqiao Shi
- Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, 500 Dongchuan Road, Shanghai 200241, China
- Menghan Hu
- Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, 500 Dongchuan Road, Shanghai 200241, China
- Rong Xie
- Institute of Image Communication and Network Engineering, Shanghai Jiao Tong University, 800 Dongchuan Road, Shanghai 200241, China
- Guangtao Zhai
- Institute of Image Communication and Network Engineering, Shanghai Jiao Tong University, 800 Dongchuan Road, Shanghai 200241, China
- Xiao-Ping Zhang
- Department of Electrical, Computer and Biomedical Engineering, Ryerson University, 350 Victoria Street, Toronto M5B 2K3, Canada
23
Choi JY, Yoo TK. New era after ChatGPT in ophthalmology: advances from data-based decision support to patient-centered generative artificial intelligence. ANNALS OF TRANSLATIONAL MEDICINE 2023; 11:337. [PMID: 37675304 PMCID: PMC10477620 DOI: 10.21037/atm-23-1598] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/07/2023] [Accepted: 06/28/2023] [Indexed: 09/08/2023]
Affiliation(s)
- Joon Yul Choi
- Department of Biomedical Engineering, Yonsei University, Wonju, South Korea
- Tae Keun Yoo
- B&VIIT Eye Center, Seoul, South Korea
- VISUWORKS, Seoul, South Korea
24
Jalamangala Shivananjaiah SK, Kumari S, Majid I, Wang SY. Predicting near-term glaucoma progression: An artificial intelligence approach using clinical free-text notes and data from electronic health records. Front Med (Lausanne) 2023; 10:1157016. [PMID: 37122330 PMCID: PMC10133544 DOI: 10.3389/fmed.2023.1157016] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2023] [Accepted: 02/15/2023] [Indexed: 05/02/2023] Open
Abstract
Purpose The purpose of this study was to develop a model to predict whether glaucoma will progress to the point of requiring surgery within the following year, using data from electronic health records (EHRs), including both structured data and free-text progress notes. Methods A cohort of adult glaucoma patients was identified from the EHR at Stanford University between 2008 and 2020, with data including free-text clinical notes, demographics, diagnosis codes, prior surgeries, and clinical information, including intraocular pressure, visual acuity, and central corneal thickness. Words from patients' notes were mapped to ophthalmology domain-specific neural word embeddings. Word embeddings and structured clinical data were combined as inputs to deep learning models to predict whether a patient would undergo glaucoma surgery in the following 12 months using the previous 4-12 months of clinical data. We also evaluated models using only structured data inputs (regression-, tree-, and deep-learning-based models) and models using only text inputs. Results Of the 3,469 glaucoma patients included in our cohort, 26% underwent surgery. The baseline penalized logistic regression model achieved an area under the receiver operating characteristic curve (AUC) of 0.873 and an F1 score of 0.750, compared with the best tree-based model (random forest, AUC 0.876; F1 0.746), the deep learning structured features model (AUC 0.885; F1 0.757), the deep learning clinical free-text features model (AUC 0.767; F1 0.536), and the deep learning model with both the structured clinical features and free-text features (AUC 0.899; F1 0.745). Discussion Fusion models combining text and EHR structured data successfully and accurately predicted glaucoma progression to surgery. Future research incorporating imaging data could further optimize this predictive approach and be translated into clinical decision support tools.
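The study's architecture and ophthalmology-specific word embeddings are not reproduced here; the sketch below (PyTorch assumed, with made-up vocabulary size, feature count, and layer widths) only illustrates the fusion idea of concatenating an averaged text representation of a note with structured EHR features before a small classification head.

import torch
import torch.nn as nn

class FusionModel(nn.Module):
    """Combines averaged note-word embeddings with structured clinical features."""
    def __init__(self, vocab_size=20000, embed_dim=300, n_structured=30):
        super().__init__()
        self.embedding = nn.EmbeddingBag(vocab_size, embed_dim, mode="mean")
        self.classifier = nn.Sequential(
            nn.Linear(embed_dim + n_structured, 128),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(128, 1),          # logit for surgery within the next 12 months
        )

    def forward(self, note_token_ids, structured_features):
        text_vector = self.embedding(note_token_ids)             # (batch, embed_dim)
        fused = torch.cat([text_vector, structured_features], dim=1)
        return self.classifier(fused)

model = FusionModel()
tokens = torch.randint(0, 20000, (4, 200))      # 4 notes, 200 token ids each (placeholder)
structured = torch.randn(4, 30)                 # placeholder IOP, acuity, CCT, demographics, ...
logits = model(tokens, structured)              # shape (4, 1)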
Affiliation(s)
- Sophia Y. Wang
- Department of Ophthalmology, Byers Eye Institute, Stanford University, Palo Alto, CA, United States
25
Thakur S, Dinh LL, Lavanya R, Quek TC, Liu Y, Cheng CY. Use of artificial intelligence in forecasting glaucoma progression. Taiwan J Ophthalmol 2023; 13:168-183. [PMID: 37484617 PMCID: PMC10361424 DOI: 10.4103/tjo.tjo-d-23-00022] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2023] [Accepted: 03/03/2023] [Indexed: 07/25/2023] Open
Abstract
Artificial intelligence (AI) has been widely used in ophthalmology for disease detection and for monitoring progression. In glaucoma research, AI has been used to understand progression patterns and forecast disease trajectory based on analysis of clinical and imaging data. Techniques such as machine learning, natural language processing, and deep learning have been employed for this purpose. The results of studies using AI to forecast glaucoma progression, however, vary considerably owing to dataset constraints, the lack of a standard definition of progression, and differences in methodology and approach. While glaucoma detection and screening have been the focus of most research published in the last few years, this narrative review focuses on studies that specifically address glaucoma progression. We also summarize the current evidence, highlight studies with translational potential, and provide suggestions on how future research addressing glaucoma progression can be improved.
Affiliation(s)
- Sahil Thakur
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Linh Le Dinh
- Institute of High Performance Computing, The Agency for Science, Technology and Research, Singapore
- Raghavan Lavanya
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Ten Cheer Quek
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Yong Liu
- Institute of High Performance Computing, The Agency for Science, Technology and Research, Singapore
- Ching-Yu Cheng
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Department of Ophthalmology, Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore
26
Anton N, Doroftei B, Curteanu S, Catãlin L, Ilie OD, Târcoveanu F, Bogdănici CM. Comprehensive Review on the Use of Artificial Intelligence in Ophthalmology and Future Research Directions. Diagnostics (Basel) 2022; 13:100. [PMID: 36611392 PMCID: PMC9818832 DOI: 10.3390/diagnostics13010100] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2022] [Revised: 12/12/2022] [Accepted: 12/26/2022] [Indexed: 12/31/2022] Open
Abstract
BACKGROUND Having several applications in medicine, and in ophthalmology in particular, artificial intelligence (AI) tools have been used to detect visual function deficits, thus playing a key role in diagnosing eye diseases and in predicting the evolution of these common and disabling diseases. AI tools, i.e., artificial neural networks (ANNs), are increasingly involved in the detection and customized management of ophthalmic diseases. The studies that refer to the efficiency of AI in medicine and especially in ophthalmology were analyzed in this review. MATERIALS AND METHODS We conducted a comprehensive review in order to collect all accounts published between 2015 and 2022 that refer to these applications of AI in medicine and especially in ophthalmology. Neural networks have a major role in establishing the need to initiate preliminary anti-glaucoma therapy to stop the advance of the disease. RESULTS Different surveys in the literature review show the remarkable benefit of these AI tools in ophthalmology in evaluating the visual field, optic nerve, and retinal nerve fiber layer, thus ensuring higher precision in detecting advances in glaucoma and retinal shifts in diabetes. We thus identified 1762 applications of artificial intelligence in ophthalmology: review articles and research articles (301 PubMed, 144 Scopus, 445 Web of Science, 872 ScienceDirect). Of these, we analyzed 70 articles and review papers (diabetic retinopathy (N = 24), glaucoma (N = 24), DMLV (N = 15), other pathologies (N = 7)) after applying the inclusion and exclusion criteria. CONCLUSION In medicine, AI tools are used in surgery, radiology, gynecology, oncology, etc., in making a diagnosis, predicting the evolution of a disease, and assessing the prognosis in patients with oncological pathologies. In ophthalmology, AI potentially increases patients' access to screening/clinical diagnosis and decreases healthcare costs, mainly when there is a high risk of disease or communities face financial shortages. AI/DL (deep learning) algorithms using both OCT and FO images will change image analysis techniques and methodologies. Optimizing these (combined) technologies will accelerate progress in this area.
Affiliation(s)
- Nicoleta Anton
- Faculty of Medicine, University of Medicine and Pharmacy “Grigore T. Popa”, University Street, No 16, 700115 Iasi, Romania
- Bogdan Doroftei
- Faculty of Medicine, University of Medicine and Pharmacy “Grigore T. Popa”, University Street, No 16, 700115 Iasi, Romania
- Silvia Curteanu
- Department of Chemical Engineering, Cristofor Simionescu Faculty of Chemical Engineering and Environmental Protection, Gheorghe Asachi Technical University, Prof.dr.doc Dimitrie Mangeron Avenue, No 67, 700050 Iasi, Romania
- Lisa Catãlin
- Department of Chemical Engineering, Cristofor Simionescu Faculty of Chemical Engineering and Environmental Protection, Gheorghe Asachi Technical University, Prof.dr.doc Dimitrie Mangeron Avenue, No 67, 700050 Iasi, Romania
- Ovidiu-Dumitru Ilie
- Department of Biology, Faculty of Biology, “Alexandru Ioan Cuza” University, Carol I Avenue, No 20A, 700505 Iasi, Romania
- Filip Târcoveanu
- Faculty of Medicine, University of Medicine and Pharmacy “Grigore T. Popa”, University Street, No 16, 700115 Iasi, Romania
- Camelia Margareta Bogdănici
- Faculty of Medicine, University of Medicine and Pharmacy “Grigore T. Popa”, University Street, No 16, 700115 Iasi, Romania
27
Chen JS, Lin WC, Yang S, Chiang MF, Hribar MR. Development of an Open-Source Annotated Glaucoma Medication Dataset From Clinical Notes in the Electronic Health Record. Transl Vis Sci Technol 2022; 11:20. [PMID: 36441131 PMCID: PMC9710490 DOI: 10.1167/tvst.11.11.20] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/05/2022] [Accepted: 10/21/2022] [Indexed: 11/30/2022] Open
Abstract
Purpose To describe the processing methods and characteristics of an open dataset of clinical notes from the electronic health record (EHR) annotated for glaucoma medications. Methods In this study, 480 clinical notes from office visits, medical record numbers (MRNs), visit identification numbers, provider names, and billing codes were extracted for 480 patients seen for glaucoma by a comprehensive or glaucoma ophthalmologist from January 1, 2019, to August 31, 2020. MRNs and all visit data were de-identified using a hash function with salt from the deidentifyr package. All progress notes were annotated for glaucoma medication name, route, frequency, dosage, and drug use using an open-source annotation tool, Doccano. Annotations were saved separately. All protected health information (PHI) in progress notes and annotated files was de-identified using the published de-identifying algorithm Philter. All progress notes and annotations were manually validated by two ophthalmologists to ensure complete de-identification. Results The final dataset contained 5520 annotated sentences, including those with and without medications, for 480 clinical notes. Manual validation revealed 10 instances of remaining PHI, which were manually corrected. Conclusions Annotated free-text clinical notes can be de-identified for upload as an open dataset. As data availability increases with the adoption of EHRs, free-text open datasets will become increasingly valuable for "big data" research and artificial intelligence development. This dataset is published online and publicly available at https://github.com/jche253/Glaucoma_Med_Dataset. Translational Relevance This open access medication dataset may be a source of raw data for future big data and artificial intelligence research using free text.
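The de-identification pipeline above used the R deidentifyr package and Philter; purely to illustrate the salted-hash idea in Python (not the authors' code, and with simplified salt handling and a made-up MRN), identifiers can be mapped to stable pseudonyms like this:

import hashlib
import secrets

salt = secrets.token_hex(16)   # generated once and kept secret for the whole dataset

def pseudonymize(mrn: str, salt: str) -> str:
    # Salted SHA-256 digest: the same MRN always maps to the same token, so a
    # patient's visits stay linkable without exposing the original identifier.
    return hashlib.sha256((salt + mrn).encode("utf-8")).hexdigest()[:16]

print(pseudonymize("12345678", salt))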
Affiliation(s)
- Jimmy S. Chen
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
- Division of Ophthalmology Informatics and Data Science, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, USA
- Wei-Chun Lin
- Department of Medical Informatics and Clinical Epidemiology, Oregon Health & Science University, Portland, OR, USA
- Sen Yang
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
- Michael F. Chiang
- National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Michelle R. Hribar
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
- Department of Medical Informatics and Clinical Epidemiology, Oregon Health & Science University, Portland, OR, USA
28
Intelligent Data Extraction System for RNFL Examination Reports. ARTIF INTELL 2022. [DOI: 10.1007/978-3-031-20503-3_45] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]