1
Carrillo-Larco RM, Bravo-Rocca G, Castillo-Cara M, Xu X, Bernabe-Ortiz A. A multimodal approach using fundus images and text meta-data in a machine learning classifier with embeddings to predict years with self-reported diabetes - An exploratory analysis. Prim Care Diabetes 2024;18:327-332. PMID: 38616442. DOI: 10.1016/j.pcd.2024.04.002.
Abstract
AIMS Machine learning models can use image and text data to predict the number of years since diabetes diagnosis; such a model could be applied to new patients to estimate, approximately, how long a patient may have lived with diabetes unknowingly. We aimed to develop a model to predict self-reported diabetes duration. METHODS We used the Brazilian Multilabel Ophthalmological Dataset. The unit of analysis was the fundus image and its meta-data, regardless of the patient. We included people aged 40+ years and fundus images without diabetic retinopathy. Fundus images and meta-data (sex, age, comorbidities, and insulin use) were passed to the MedCLIP model to extract embedding representations, which were then passed to an Extra Trees classifier to predict 0-4, 5-9, 10-14, or 15+ years with self-reported diabetes. RESULTS There were 988 images from 563 people (mean age = 67 years; 64% were women). Overall, the F1 score was 57%. The group with 15+ years of self-reported diabetes had the highest precision (64%) and F1 score (63%), while the highest recall (69%) was observed in the 0-4 years group. The proportion of correctly classified observations was 55% for 0-4 years, 51% for 5-9 years, 58% for 10-14 years, and 64% for 15+ years with self-reported diabetes. CONCLUSIONS The machine learning model had acceptable accuracy and F1 score, and correctly classified more than half of the patients according to diabetes duration. Using large foundation models to extract image and text embeddings appears to be a feasible and efficient approach to predicting years living with self-reported diabetes.
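The pipeline this abstract describes — a frozen multimodal encoder producing embeddings that feed a conventional tree ensemble — can be sketched as follows. This is a minimal illustration, not the authors' code: random vectors stand in for the MedCLIP image-plus-meta-data embeddings, the labels are the four duration bins, and the embedding dimension of 512 is an assumption.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

# Stand-ins for MedCLIP embeddings: one vector per fundus image and
# its meta-data (the real embeddings come from the frozen encoder,
# not from random noise).
n_images, dim = 988, 512
X = rng.normal(size=(n_images, dim))

# Four self-reported diabetes-duration bins: 0-4, 5-9, 10-14, 15+ years.
labels = ["0-4", "5-9", "10-14", "15+"]
y = rng.choice(labels, size=n_images)

clf = ExtraTreesClassifier(n_estimators=200, random_state=0)
clf.fit(X, y)
pred = clf.predict(X)

# Weighted F1 across the four duration classes; the paper reports
# ~57% on real embeddings, whereas this in-sample score on random
# labels only demonstrates the API.
score = f1_score(y, pred, average="weighted")
```

On real data the evaluation would of course use held-out images; the fit/predict pattern is unchanged.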
Affiliation(s)
- Rodrigo M Carrillo-Larco
- Hubert Department of Global Health, Rollins School of Public Health, Emory University, Atlanta, GA, USA; Emory Global Diabetes Research Center, Emory University, Atlanta, GA, USA.
- Xiaolin Xu
- School of Public Health, The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China; The Key Laboratory of Intelligent Preventive Medicine of Zhejiang Province, Hangzhou, China; School of Public Health, Faculty of Medicine, The University of Queensland, Brisbane, Australia
2
Ramoutar RR. An Economic Analysis for the Use of Artificial Intelligence in Screening for Diabetic Retinopathy in Trinidad and Tobago. Cureus 2024;16:e55745. PMID: 38586698. PMCID: PMC10999161. DOI: 10.7759/cureus.55745.
Abstract
This is a systematic review of 25 publications on the prevalence and cost of diabetic retinopathy (DR) in Trinidad and Tobago, the cost of traditional methods of screening for DR, and the use and cost of artificial intelligence (AI) in screening for DR. These publications were analysed to estimate how resources allocated to ophthalmology in Trinidad and Tobago's public health system could be used more efficiently by employing AI to diagnose treatable DR. DR screening was found to be an effective method of detecting the disease, and a universally cost-effective method of disease prevention and of altering the natural history of the disease across low-middle- to high-income economies such as Rwanda, Thailand, China, South Korea, and Singapore. AI and deep learning systems were found to be clinically superior to, or as effective as, human graders in areas where they were deployed, indicating that the systems are clinically safe. They have been shown to improve access to diabetic retinal screening, improve compliance with screening appointments, and be cost-effective, especially in rural areas. Trinidad and Tobago, which is projected to be disproportionately affected by the burden of DR by the mid-21st century, stands to save as much as US$60 million annually by implementing an AI-based DR screening system instead of conventional manual grading.
Affiliation(s)
- Ryan R Ramoutar
- Ophthalmology, University Hospitals of Leicester NHS Trust, Leicester, GBR
3
Tomić M, Vrabec R, Hendelja Đ, Kolarić V, Bulum T, Rahelić D. Diagnostic Accuracy of Hand-Held Fundus Camera and Artificial Intelligence in Diabetic Retinopathy Screening. Biomedicines 2023;12:34. PMID: 38255141. PMCID: PMC10813433. DOI: 10.3390/biomedicines12010034.
Abstract
Our study aimed to assess the role of a hand-held fundus camera and an artificial intelligence (AI)-based grading system in diabetic retinopathy (DR) screening, and to determine their diagnostic accuracy in detecting DR compared with clinical examination and a standard fundus camera. This cross-sectional instrument validation study, part of the International Diabetes Federation (IDF) Diabetic Retinopathy Screening Project, included 160 patients (320 eyes) with type 2 diabetes (T2DM). After standard indirect slit-lamp fundoscopy, each patient first underwent fundus photography with a standard 45° camera, the VISUCAM Zeiss, and then with a hand-held camera, the TANG (Shanghai Zhi Tang Health Technology Co., Ltd.). Two retina specialists independently graded the images taken with the standard camera, while the images taken with the hand-held camera were graded using the DeepDR system and by an independent IDF ophthalmologist. The three screening methods did not differ in detecting moderate/severe nonproliferative and proliferative DR. Compared with clinical examination, the hand-held camera achieved an area under the curve of 0.921, sensitivity 89.1%, specificity 100%, positive predictive value 100%, negative predictive value 91.4%, positive likelihood ratio infinity, negative likelihood ratio 0.11, kappa (κ) agreement 0.86, diagnostic odds ratio 936.48, and diagnostic effectiveness 94.9%; compared with the standard fundus camera, the corresponding values were 0.883, 83.2%, 100%, 100%, 87.3%, infinity, 0.17, 0.78, 574.6, and 92.2%. The results of our study suggest that fundus photography with a hand-held camera and an AI-based grading system is a short, simple, and accurate method for the screening and early detection of DR, comparable to clinical examination and fundus photography with a standard camera.
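The battery of accuracy statistics quoted above all derive from a single 2x2 confusion table. A small helper makes the relationships explicit; the counts below are illustrative only (chosen to give sensitivity 89% and specificity 100%, not the study's data) and show why a specificity of exactly 100% drives the positive likelihood ratio to infinity, as reported.

```python
import math

def diagnostic_metrics(tp, fp, fn, tn):
    """Screening-test statistics from a 2x2 confusion table."""
    sens = tp / (tp + fn)                  # sensitivity (recall)
    spec = tn / (tn + fp)                  # specificity
    ppv = tp / (tp + fp)                   # positive predictive value
    npv = tn / (tn + fn)                   # negative predictive value
    # LR+ = sens / (1 - spec) is infinite when specificity is 100%.
    lr_pos = sens / (1 - spec) if spec < 1 else math.inf
    lr_neg = (1 - sens) / spec             # negative likelihood ratio
    dor = lr_pos / lr_neg                  # diagnostic odds ratio
    eff = (tp + tn) / (tp + fp + fn + tn)  # diagnostic effectiveness
    return {"sens": sens, "spec": spec, "ppv": ppv, "npv": npv,
            "lr_pos": lr_pos, "lr_neg": lr_neg, "dor": dor, "eff": eff}

# Hypothetical counts: 89 true positives, 11 false negatives,
# 100 true negatives, 0 false positives.
m = diagnostic_metrics(tp=89, fp=0, fn=11, tn=100)
```

With no false positives, PPV is also 100% and the diagnostic odds ratio is unbounded; the finite DOR in the abstract reflects the study's own counts.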
Affiliation(s)
- Martina Tomić
- Department of Ophthalmology, Vuk Vrhovac University Clinic for Diabetes, Endocrinology and Metabolic Diseases, Merkur University Hospital, Dugi dol 4a, 10000 Zagreb, Croatia
- Romano Vrabec
- Department of Ophthalmology, Vuk Vrhovac University Clinic for Diabetes, Endocrinology and Metabolic Diseases, Merkur University Hospital, Dugi dol 4a, 10000 Zagreb, Croatia
- Đurđica Hendelja
- Department of Ophthalmology, Vuk Vrhovac University Clinic for Diabetes, Endocrinology and Metabolic Diseases, Merkur University Hospital, Dugi dol 4a, 10000 Zagreb, Croatia
- Vilma Kolarić
- Department of Diabetes and Endocrinology, Vuk Vrhovac University Clinic for Diabetes, Endocrinology and Metabolic Diseases, Merkur University Hospital, Dugi dol 4a, 10000 Zagreb, Croatia
- Tomislav Bulum
- Department of Diabetes and Endocrinology, Vuk Vrhovac University Clinic for Diabetes, Endocrinology and Metabolic Diseases, Merkur University Hospital, Dugi dol 4a, 10000 Zagreb, Croatia
- School of Medicine, University of Zagreb, Šalata 3, 10000 Zagreb, Croatia
- Dario Rahelić
- Department of Diabetes and Endocrinology, Vuk Vrhovac University Clinic for Diabetes, Endocrinology and Metabolic Diseases, Merkur University Hospital, Dugi dol 4a, 10000 Zagreb, Croatia
- School of Medicine, Catholic University of Croatia, Ilica 242, 10000 Zagreb, Croatia
- School of Medicine, Josip Juraj Strossmayer University, Josipa Huttlera 4, 31000 Osijek, Croatia
4
Rajesh AE, Olvera-Barrios A, Warwick AN, Wu Y, Stuart KV, Biradar M, Ung CY, Khawaja AP, Luben R, Foster PJ, Lee CS, Tufail A, Lee AY, Egan C. Ethnicity is not biology: retinal pigment score to evaluate biological variability from ophthalmic imaging using machine learning. medRxiv [preprint] 2023:2023.06.28.23291873. PMID: 37461664. PMCID: PMC10350142. DOI: 10.1101/2023.06.28.23291873.
Abstract
Background Few metrics exist to describe phenotypic diversity within ophthalmic imaging datasets, and researchers often use ethnicity as an inappropriate marker for biological variability. Methods We derived a continuous, measured metric, the retinal pigment score (RPS), that quantifies the degree of pigmentation from a colour fundus photograph of the eye. RPS was validated using two large epidemiological studies with demographic and genetic data (UK Biobank and the EPIC-Norfolk Study). Findings A genome-wide association study (GWAS) of RPS in UK Biobank identified 20 loci with known associations with skin, iris, and hair pigmentation, of which 8 were replicated in the EPIC-Norfolk cohort. There was a strong association between RPS and ethnicity; however, the RPS distributions of the ethnic groups overlapped substantially. Interpretation RPS serves to decouple traditional demographic variables, such as ethnicity, from clinical imaging characteristics. RPS may serve as a useful metric to quantify the diversity of the training, validation, and testing datasets used in the development of AI algorithms, helping to ensure adequate inclusion and explainability of model performance, which is critical in evaluating all currently deployed AI models. The code to derive RPS is publicly available at: https://github.com/uw-biomedical-ml/retinal-pigmentation-score. Funding The authors did not receive support from any organisation for the submitted work.
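As a rough intuition for what a continuous pigmentation metric measures, the toy function below summarizes a fundus photograph as the mean darkness of its non-background pixels. This is only a sketch under that simplifying assumption; the published RPS derivation differs and is available in the repository linked above.

```python
import numpy as np

def toy_pigment_score(rgb):
    """Toy pigmentation score for an HxWx3 image with values in [0, 1]:
    mean darkness of pixels outside the near-black border that typically
    surrounds a fundus photograph. Higher score = darker retina."""
    brightness = rgb.mean(axis=2)
    fundus = brightness > 0.05  # mask away the black border pixels
    return float(1.0 - brightness[fundus].mean())

# Two synthetic "fundus" images: a darker (more pigmented) retina
# should receive a higher score than a lighter one.
light = np.full((64, 64, 3), 0.8)
dark = np.full((64, 64, 3), 0.4)
```

The key property such a score provides is exactly the one the abstract argues for: a measured, continuous image characteristic rather than a categorical demographic label.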
Affiliation(s)
- Anand E Rajesh
- Department of Ophthalmology, University of Washington, Seattle, WA, USA
- The Roger and Angie Karalis Johnson Retina Center, Seattle, WA, USA
- Abraham Olvera-Barrios
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust & University College London Institute of Ophthalmology, London, UK
- Alasdair N Warwick
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust & University College London Institute of Ophthalmology, London, UK
- Yue Wu
- Department of Ophthalmology, University of Washington, Seattle, WA, USA
- The Roger and Angie Karalis Johnson Retina Center, Seattle, WA, USA
- Kelsey V Stuart
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust & University College London Institute of Ophthalmology, London, UK
- Mahantesh Biradar
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust & University College London Institute of Ophthalmology, London, UK
- University of Cambridge, Cambridge, UK
- Anthony P Khawaja
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust & University College London Institute of Ophthalmology, London, UK
- MRC Epidemiology Unit, University of Cambridge, Cambridge, UK
- Robert Luben
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust & University College London Institute of Ophthalmology, London, UK
- Paul J Foster
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust & University College London Institute of Ophthalmology, London, UK
- Cecilia S Lee
- Department of Ophthalmology, University of Washington, Seattle, WA, USA
- The Roger and Angie Karalis Johnson Retina Center, Seattle, WA, USA
- Adnan Tufail
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust & University College London Institute of Ophthalmology, London, UK
- Aaron Y Lee
- Department of Ophthalmology, University of Washington, Seattle, WA, USA
- The Roger and Angie Karalis Johnson Retina Center, Seattle, WA, USA
- Catherine Egan
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust & University College London Institute of Ophthalmology, London, UK