1. Alqahtani AS, Alshareef WM, Aljadani HT, Hawsawi WO, Shaheen MH. The efficacy of artificial intelligence in diabetic retinopathy screening: a systematic review and meta-analysis. Int J Retina Vitreous 2025; 11:48. PMID: 40264218; PMCID: PMC12012971; DOI: 10.1186/s40942-025-00670-9.
Abstract
BACKGROUND To evaluate the efficacy of artificial intelligence (AI) in screening for diabetic retinopathy (DR) using fundus images and optical coherence tomography (OCT) in comparison to traditional screening methods. METHODS This systematic review was registered with PROSPERO (ID: CRD42024560750). Systematic searches were conducted in PubMed Medline, Cochrane Central, ScienceDirect, and Web of Science using keywords such as "diabetic retinopathy," "screening," and "artificial intelligence." Only studies published in English from 2019 to July 22, 2024, were considered. We also manually reviewed the reference lists of relevant reviews. Two independent reviewers assessed the risk of bias using the QUADAS-2 tool, resolving disagreements through discussion with the principal investigator. Meta-analysis was performed using MetaDiSc software (version 1.4). Combined sensitivity and specificity were calculated, and summary receiver operating characteristic (SROC) plots, forest plots, and subgroup analyses were performed according to clinician type (ophthalmologists vs. retina specialists) and imaging modality (fundus images vs. fundus images + OCT). RESULTS Eighteen studies were included. Meta-analysis showed that AI systems demonstrated superior diagnostic performance compared to doctors, with the pooled sensitivity, specificity, diagnostic odds ratio, and Cochrane Q index of the AI being 0.877, 0.906, 0.94, and 153.79, respectively. The Fagan nomogram analysis further confirmed the strong diagnostic value of AI. Subgroup analyses revealed that factors such as imaging modality and doctor expertise can influence diagnostic performance. CONCLUSION AI systems have demonstrated strong diagnostic performance in detecting diabetic retinopathy, with sensitivity and specificity comparable to or exceeding those of traditional clinicians.
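For orientation, the sketch below shows the kind of pooling this review reports (combined sensitivity and specificity across studies). It is a minimal fixed-effect pooling on the logit scale with invented 2x2 counts, not the MetaDiSc procedure used by the authors.

```python
import numpy as np

# Hypothetical per-study 2x2 counts: (TP, FP, FN, TN). Invented for illustration only.
studies = [
    (90, 8, 12, 95),
    (120, 15, 10, 140),
    (75, 5, 14, 80),
]

def pooled_logit(events, totals):
    """Fixed-effect (inverse-variance) pooling of proportions on the logit scale."""
    # A 0.5 continuity correction guards against zero cells.
    p = (events + 0.5) / (totals + 1.0)
    logit = np.log(p / (1 - p))
    var = 1.0 / (events + 0.5) + 1.0 / (totals - events + 0.5)
    w = 1.0 / var
    pooled = np.sum(w * logit) / np.sum(w)
    return 1.0 / (1.0 + np.exp(-pooled))   # back-transform to a proportion

tp, fp, fn, tn = (np.array(x, dtype=float) for x in zip(*studies))
print("Pooled sensitivity:", round(pooled_logit(tp, tp + fn), 3))
print("Pooled specificity:", round(pooled_logit(tn, tn + fp), 3))
```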
Affiliation(s)
- Abdullah S Alqahtani
- Department of Surgery, Division of Ophthalmology, King Abdulaziz Medical City, Ministry of National Guard Health Affairs, Jeddah, Saudi Arabia.
- King Saud Bin Abdulaziz University for Health Sciences, Jeddah, Saudi Arabia.
- King Abdullah International Medical Research Center, Jeddah, Saudi Arabia.
- Wasan M Alshareef
- King Saud Bin Abdulaziz University for Health Sciences, Jeddah, Saudi Arabia
- Hanan T Aljadani
- King Saud Bin Abdulaziz University for Health Sciences, Jeddah, Saudi Arabia
- Wesal O Hawsawi
- King Saud Bin Abdulaziz University for Health Sciences, Jeddah, Saudi Arabia
- Marya H Shaheen
- King Saud Bin Abdulaziz University for Health Sciences, Jeddah, Saudi Arabia
2. Mikhail D, Mihalache A, Huang RS, Khairy T, Popovic MM, Milad D, Shor R, Pereira A, Kwok J, Yan P, Wong DT, Kertes PJ, Duval R, Muni RH. Performance of ChatGPT in French language analysis of multimodal retinal cases. J Fr Ophtalmol 2025; 48:104391. PMID: 39708623; DOI: 10.1016/j.jfo.2024.104391.
Abstract
PURPOSE Prior literature has suggested a reduced performance of large language models (LLMs) in non-English analyses, including Arabic and French. However, there are no current studies testing the multimodal performance of ChatGPT in French ophthalmology cases, and comparing this to the results observed in the English literature. We compared the performance of ChatGPT-4 in French and English on open-ended prompts using multimodal input data from retinal cases. METHODS GPT-4 was prompted in English and French using a public dataset containing 67 retinal cases from the ophthalmology education website OCTCases.com. The clinical case and accompanying ophthalmic images comprised the prompt, along with the open-ended question: "What is the most likely diagnosis?" Systematic prompting was used to identify and compare relevant factor(s) contributing to correct and incorrect responses. Diagnostic accuracy was the primary outcome, defined as the proportion of correctly diagnosed cases in French and English. Diagnoses were compared with the answer key on OCTCases to confirm correct or incorrect responses. Clinically relevant factors reported by the LLM as contributory to its decision-making were secondary endpoints. RESULTS The diagnostic accuracies of GPT-4 in English and French were 35.8% and 28.4%, respectively (χ2, P=0.36). Imaging findings were reported as most influential for correct diagnosis in English (37.5%) and French (42.1%) (P=0.76). In incorrectly diagnosed cases, imaging findings were primarily implicated in English (35.6%) and French (33.3%) (P=0.81). In incorrectly diagnosed cases, the differential diagnosis list contained the correct diagnosis in 39.5% of English cases and 41.7% of French cases (P=0.83). CONCLUSION Our results suggest that GPT-4 performed similarly in English and French on all quantitative performance metrics measured. Ophthalmic images were identified in both languages as critical for correct diagnosis. Future research should assess LLM comprehension through the clarity, grammatical, cultural, and idiomatic accuracy of its responses.
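The English-versus-French accuracy comparison (35.8% vs. 28.4%, P=0.36) is a standard chi-square test of two proportions. The sketch below reconstructs approximate correct/incorrect counts from the reported percentages of 67 cases, so the numbers are illustrative rather than taken from the paper's data.

```python
from scipy.stats import chi2_contingency

# Approximate counts inferred from 35.8% and 28.4% of 67 cases (illustrative only).
english = [24, 67 - 24]   # [correct, incorrect]
french = [19, 67 - 19]

# correction=False (no Yates correction) lands near the reported P = 0.36.
chi2, p, dof, expected = chi2_contingency([english, french], correction=False)
print(f"chi2 = {chi2:.2f}, p = {p:.2f}")
```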
Affiliation(s)
- D Mikhail
- Temerty Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- A Mihalache
- Temerty Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- R S Huang
- Temerty Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- T Khairy
- Faculty of Medicine, McGill University, Montreal, Quebec, Canada
- M M Popovic
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- D Milad
- Department of Ophthalmology, University of Montreal, Montreal, Quebec, Canada
- R Shor
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- A Pereira
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- J Kwok
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- P Yan
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- D T Wong
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada; Department of Ophthalmology, St. Michael's Hospital/Unity Health Toronto, Toronto, Ontario, Canada
- P J Kertes
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada; John and Liz Tory Eye Centre, Sunnybrook Health Science Centre, Toronto, Ontario, Canada
- R Duval
- Department of Ophthalmology, University of Montreal, Montreal, Quebec, Canada; Department of Ophthalmology, Hospital Maisonneuve-Rosemont, Montreal, Quebec, Canada
- R H Muni
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada; Department of Ophthalmology, St. Michael's Hospital/Unity Health Toronto, Toronto, Ontario, Canada
3. Song S, Li T, Lin W, Liu R, Zhang Y. Application of artificial intelligence in Alzheimer's disease: a bibliometric analysis. Front Neurosci 2025; 19:1511350. PMID: 40027465; PMCID: PMC11868282; DOI: 10.3389/fnins.2025.1511350.
Abstract
Background Understanding how artificial intelligence (AI) is employed to predict, diagnose, and perform relevant analyses in Alzheimer's disease research is a rapidly evolving field. This study integrated and analyzed the relevant literature from the Science Citation Index (SCI) and Social Science Citation Index (SSCI) on the application of AI in Alzheimer's disease (AD), covering publications from 2004 to 2023. Objective This study aims to identify the key research hotspots and trends of the application of AI in AD over the past 20 years through a bibliometric analysis. Methods Using the Web of Science Core Collection database, we conducted a comprehensive visual analysis of literature on AI and AD published between January 1, 2004, and December 31, 2023. The study utilized Excel, Scimago Graphica, VOSviewer, and CiteSpace software to visualize trends in annual publications and the distribution of research by countries, institutions, journals, references, authors, and keywords related to this topic. Results A total of 2,316 papers were obtained through the research process, with a significant increase in publications observed since 2018, signaling notable growth in this field. The United States, China, and the United Kingdom made notable contributions to this research area. The University of London led in institutional productivity with 80 publications, followed by the University of California System with 74 publications. Regarding total publications, the Journal of Alzheimer's Disease was the most prolific, while Neuroimage ranked as the most cited journal. Shen Dinggang was the top author in both total publications and average citations. Analysis of references and keywords highlighted research hotspots, including the identification of various stages of AD, early diagnostic screening, risk prediction, and prediction of disease progression. The "task analysis" keyword emerged as a research frontier from 2021 to 2023. Conclusion Research on AI applications in AD holds significant potential for practical advancements, attracting increasing attention from scholars. Deep learning (DL) techniques have emerged as a key research focus for AD diagnosis. Future research will explore AI methods, particularly task analysis, emphasizing the integration of multimodal data and the use of deep neural networks. These approaches aim to identify emerging risk factors, such as environmental influences on AD onset, predict disease progression with high accuracy, and support the development of prevention strategies. Ultimately, AI-driven innovations will transform AD management from a progressive, incurable state to a more manageable and potentially reversible condition, thereby improving healthcare, rehabilitation, and long-term care solutions.
Affiliation(s)
- Sijia Song
- School of Intelligent Medicine, Chengdu University of Traditional Chinese Medicine, Chengdu, China
- Tong Li
- School of Intelligent Medicine, Chengdu University of Traditional Chinese Medicine, Chengdu, China
- Wei Lin
- School of Intelligent Medicine, Chengdu University of Traditional Chinese Medicine, Chengdu, China
- Ran Liu
- School of Biomedical Engineering, Tsinghua University, Beijing, China
- Yujie Zhang
- School of Intelligent Medicine, Chengdu University of Traditional Chinese Medicine, Chengdu, China
4. Tran L, Kandel H, Sari D, Chiu CH, Watson SL. Artificial Intelligence and Ophthalmic Clinical Registries. Am J Ophthalmol 2024; 268:263-274. PMID: 39111520; DOI: 10.1016/j.ajo.2024.07.039.
Abstract
PURPOSE The recent advances in artificial intelligence (AI) represent a promising solution to increasing clinical demand and ever-limited health resources. Whilst powerful, AI models require vast amounts of representative training data to output meaningful predictions in the clinical environment. Clinical registries represent a promising source of large-volume real-world data that could be used to train more accurate and widely applicable AI models. This review aims to provide an overview of the current applications of AI to ophthalmic clinical registry data. DESIGN AND METHODS A systematic search of EMBASE, Medline, PubMed, Scopus and Web of Science for primary research articles that applied AI to ophthalmic clinical registry data was conducted in July 2024. RESULTS Twenty-three primary research articles applying AI to ophthalmic clinical registries (n = 14) were found. Registries were primarily defined by the condition captured, and the most common conditions where AI was applied were glaucoma (n = 3) and neovascular age-related macular degeneration (n = 3). Tabular clinical data was the most common form of input into AI algorithms, and outputs were primarily classifiers (n = 8, 40%) and risk quantifier models (n = 7, 35%). The AI algorithms applied were almost exclusively supervised conventional machine learning models (n = 39, 85%) such as decision tree classifiers and logistic regression, with only 7 applications of deep learning or natural language processing algorithms. Significant heterogeneity was found with regards to model validation methodology and measures of performance. CONCLUSIONS Limited applications of deep learning algorithms to clinical registry data have been reported. The lack of standardized validation methodology and the heterogeneity of performance outcome reporting suggest that the application of AI to clinical registries is still in its infancy, constrained by the poor accessibility of registry data, and reflect the need for standardization of methodology and greater involvement of domain experts in the future development of clinically deployable AI.
Affiliation(s)
- Luke Tran
- Faculty of Medicine and Health, Save Sight Institute, The University of Sydney, Sydney, New South Wales, Australia
- Himal Kandel
- Faculty of Medicine and Health, Save Sight Institute, The University of Sydney, Sydney, New South Wales, Australia
- Daliya Sari
- Faculty of Medicine and Health, Save Sight Institute, The University of Sydney, Sydney, New South Wales, Australia
- Christopher Hy Chiu
- Faculty of Medicine and Health, Save Sight Institute, The University of Sydney, Sydney, New South Wales, Australia
- Stephanie L Watson
- Faculty of Medicine and Health, Save Sight Institute, The University of Sydney, Sydney, New South Wales, Australia
5. Chuntranapaporn S, Choontanom R, Srimanan W. Ocular Duction Measurement Using Three Convolutional Neural Network Models: A Comparative Study. Cureus 2024; 16:e73985. PMID: 39703282; PMCID: PMC11658897; DOI: 10.7759/cureus.73985.
Abstract
OBJECTIVE This study primarily aimed to compare the accuracy of three convolutional neural network (CNN) models in measuring the four positions of ocular duction. Further, it secondarily aimed to compare the accuracy of each CNN model in the training dataset versus the ophthalmologist measurements. METHODS This study included 526 subjects aged over 18 who visited the ophthalmology outpatient department. Ocular images were captured using mobile phones in various gaze positions and stored anonymously as JPEG files. Ocular duction was measured by assessing corneal light reflex deviation from the central cornea. Ductions were classified into 30, 60, and 90 prism diopters (PD) and full ductions from the primary position. Three CNN models, MobileNet, ResNet, and EfficientNet, were used to classify ocular duction. Their predictive ability was evaluated using the area under the receiver operating characteristic (AUROC) curve. The dataset was divided into the training (2,001 images), evaluation (213 images), and testing (190 images) groups, which were reconstructed using the routine follow-up data of volunteers at the Ophthalmology Department of Phramongkutklao Hospital between February 2023 and June 2023. RESULTS To evaluate the data, the MobileNet_V3_Large, ResNet101, and EfficientNet_B5 models were utilized to measure duction angles with the receiver operating characteristic (ROC) curves. The training times for MobileNet, ResNet, and EfficientNet were 5.54, 9.56, and 26.39 minutes, respectively. In the testing phase, MobileNet, ResNet, and EfficientNet were used to measure each duction position: 30 PD with corresponding ROC curve values of 0.77, 0.5, and 0.58; 60 PD with ROC curve values of 0.71, 0.83, and 0.81; 90 PD with ROC curve values of 0.7, 0.73, and 0.81; and full duction with ROC curve values of 0.91, 0.93, and 0.94, respectively. Analysis of variance revealed no significant difference in the mean AUROC curves among the models, yielding a p-value of 0.936. MobileNet has the narrowest confidence intervals for average prediction accuracy across three CNN models. CONCLUSIONS The three CNN models did not significantly differ in terms of efficacy in detecting various duction positions. However, MobileNet stands out, with a narrower confidence interval and shorter training time, which indicates its potential application.
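A typical way to set up this kind of comparison is to fine-tune ImageNet-pretrained backbones and score them with one-vs-rest AUROC. The sketch below does this for MobileNetV3-Large with torchvision and scikit-learn; the four-class setup, learning rate, and data loaders are assumptions for illustration, not the authors' exact pipeline.

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.metrics import roc_auc_score

NUM_CLASSES = 4  # e.g. 30 PD, 60 PD, 90 PD, full duction (hypothetical label set)

# Start from an ImageNet-pretrained backbone and replace the classification head.
model = models.mobilenet_v3_large(weights=models.MobileNet_V3_Large_Weights.DEFAULT)
model.classifier[3] = nn.Linear(model.classifier[3].in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_one_epoch(model, loader, device="cpu"):
    """One pass over a labelled training loader (images, integer class targets)."""
    model.train()
    for images, targets in loader:
        optimizer.zero_grad()
        loss = criterion(model(images.to(device)), targets.to(device))
        loss.backward()
        optimizer.step()

def evaluate_auroc(model, loader, device="cpu"):
    """One-vs-rest AUROC over a labelled evaluation loader."""
    model.eval()
    probs, labels = [], []
    with torch.no_grad():
        for images, targets in loader:
            logits = model(images.to(device))
            probs.append(torch.softmax(logits, dim=1).cpu())
            labels.append(targets)
    return roc_auc_score(torch.cat(labels).numpy(), torch.cat(probs).numpy(), multi_class="ovr")
```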
6. Misson GP, Anderson SJ, Dunne MCM. Radial polarisation patterns identify macular damage: a machine learning approach. Clin Exp Optom 2024:1-8. PMID: 39374948; DOI: 10.1080/08164622.2024.2410890.
Abstract
CLINICAL RELEVANCE Identifying polarisation-modulated patterns may be an effective method for both detecting and monitoring macular damage. BACKGROUND The aim of this work is to determine the effectiveness of polarisation-modulated patterns in identifying macular damage and foveolar involvement using a methodology that involved feature selection, Naïve Bayes supervised machine learning, cross validation, and use of an interpretable nomogram. METHODS A cross-sectional study involving 520 eyes was undertaken, encompassing both normal and abnormal cases, including those with age-related macular disease, diabetic retinopathy or epiretinal membrane. Macular damage and foveolar integrity were assessed using optical coherence tomography. Various polarisation-modulated geometrical and optotype patterns were employed, along with traditional methods for visual function measurement, to complete perceptual detection and identification measures. Other variables assessed included age, sex, eye (right, left) and ocular media (normal, pseudophakic, cataract). Redundant variables were removed using a Fast Correlation-Based Filter. The area under the receiver operating characteristic curve and Matthews correlation coefficient were calculated, following 5-fold stratified cross validation, for Naïve Bayes models describing the relationship between the selected predictors of macular damage and foveolar involvement. RESULTS Only radially structured polarisation-modulated patterns and age emerged as predictors of macular damage and foveolar involvement. All other variables, including traditional logMAR measures of visual acuity, were identified as redundant. Naïve Bayes, utilising the Fast Correlation-Based Filter selected features, provided a good prediction for macular damage and foveolar involvement, with an area under the receiver operating curve exceeding 0.7. Additionally, Matthews correlation coefficient showed a medium size effect for both conditions. CONCLUSIONS Radially structured polarisation-modulated geometric patterns outperform polarisation-modulated optotypes and standard logMAR acuity measures in predicting macular damage, regardless of foveolar involvement.
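The modelling pipeline described here (feature selection, Naïve Bayes, 5-fold stratified cross-validation, AUROC and Matthews correlation coefficient) can be assembled in a few lines of scikit-learn. The sketch below uses synthetic data and mutual-information feature selection as a stand-in for the Fast Correlation-Based Filter, so it mirrors the workflow rather than reproducing the study.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import roc_auc_score, matthews_corrcoef

# Synthetic stand-in data (520 "eyes", a dozen candidate predictors).
X, y = make_classification(n_samples=520, n_features=12, n_informative=3, random_state=0)

# Mutual-information selection is used here as a simple stand-in for the FCBF step.
clf = make_pipeline(SelectKBest(mutual_info_classif, k=3), GaussianNB())

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
proba = cross_val_predict(clf, X, y, cv=cv, method="predict_proba")[:, 1]

print("AUROC:", round(roc_auc_score(y, proba), 3))
print("MCC:  ", round(matthews_corrcoef(y, (proba >= 0.5).astype(int)), 3))
```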
Affiliation(s)
- Gary P Misson
- School of Optometry, Aston University, Birmingham, UK
7. Rajabi MT, Sadeghi R, Abdol Homayuni MR, Pezeshgi S, Hosseini SS, Rajabi MB, Poshtdar S. Optical coherence tomography angiography in thyroid associated ophthalmopathy: a systematic review. BMC Ophthalmol 2024; 24:304. PMID: 39039451; PMCID: PMC11265183; DOI: 10.1186/s12886-024-03569-5.
Abstract
PURPOSE To evaluate the evidence for alterations of blood flow, vascular and perfusion densities in the choroid, macula, peripapillary region, and the area surrounding the optic nerve head (ONH) in patients with thyroid-associated ophthalmopathy (TAO) based on changes of OCTA parameters. METHODS A systematic review of Pubmed, Google Scholar, Scopus, WOS, Cochrane, and Embase databases, including quality assessment of published studies, investigating the alterations of OCTA parameters in TAO patients was conducted. The outcomes of interest comprised changes of perfusion and vascular densities in radial peripapillary capillary (RPC), ONH, superficial and deep retinal layers (SRL and DRL), choriocapillaris (CC) flow, and the extent of the foveal avascular zone (FAZ). RESULTS From the total of 1253 articles obtained from the databases, the pool of papers was narrowed down to studies published until March 20th, 2024. Lastly, 42 studies were taken into consideration which contained the data regarding the alterations of OCTA parameters including choriocapillary vascular flow, vascular and perfusion densities of retinal microvasculature, SRL, and DRL, changes in macular all grid sessions, changes of foveal, perifoveal and parafoveal densities, macular whole image vessel density (m-wiVD) and FAZ, in addition to alterations of ONH and RPC whole image vessel densities (onh-wiVD and rpc-wiVD) among TAO patients. The correlation of these parameters with visual field-associated parameters, such as Best-corrected visual acuity (BCVA), Visual field mean defect (VF-MD), axial length (AL), P100 amplitude, and latency, was also evaluated among TAO patients. CONCLUSION The application of OCTA has proven helpful in distinguishing active and inactive TAO patients, as well as differentiation of patients with or without DON, indicating the potential promising role of some OCTA measures for early detection of TAO with high sensitivity and specificity in addition to preventing the irreversible outcomes of TAO. OCTA assessments have also been applied to evaluate the effectiveness of TAO treatment approaches, including systemic corticosteroid therapy and surgical decompression.
Affiliation(s)
- Mohammad Taher Rajabi
- Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Zip Code: 1336616351, Tehran, Iran
- Reza Sadeghi
- Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Zip Code: 1336616351, Tehran, Iran
- School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
- Mohammad Reza Abdol Homayuni
- Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Zip Code: 1336616351, Tehran, Iran
- School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
- Saharnaz Pezeshgi
- Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Zip Code: 1336616351, Tehran, Iran
- School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
- Seyedeh Simindokht Hosseini
- Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Zip Code: 1336616351, Tehran, Iran
- Mohammad Bagher Rajabi
- Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Zip Code: 1336616351, Tehran, Iran
- Sepideh Poshtdar
- Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Zip Code: 1336616351, Tehran, Iran
- Students' Scientific Research Center, Tehran University of Medical Sciences, Tehran, Iran
8. Oloruntoba A, Ingvar Å, Sashindranath M, Anthony O, Abbott L, Guitera P, Caccetta T, Janda M, Soyer HP, Mar V. Examining labelling guidelines for AI-based software as a medical device: A review and analysis of dermatology mobile applications in Australia. Australas J Dermatol 2024. PMID: 38693690; DOI: 10.1111/ajd.14269.
Abstract
In recent years, there has been a surge in the development of AI-based Software as a Medical Device (SaMD), particularly in visual specialties such as dermatology. In Australia, the Therapeutic Goods Administration (TGA) regulates AI-based SaMD to ensure its safe use. Proper labelling of these devices is crucial to ensure that healthcare professionals and the general public understand how to use them and interpret results accurately. However, guidelines for labelling AI-based SaMD in dermatology are lacking, which may result in products failing to provide essential information about algorithm development and performance metrics. This review examines existing labelling guidelines for AI-based SaMD across visual medical specialties, with a specific focus on dermatology. Common recommendations for labelling are identified and applied to currently available dermatology AI-based SaMD mobile applications to determine usage of these labels. Of the 21 AI-based SaMD mobile applications identified, none fully comply with common labelling recommendations. Results highlight the need for standardized labelling guidelines. Ensuring transparency and accessibility of information is essential for the safe integration of AI into health care and preventing potential risks associated with inaccurate clinical decisions.
Affiliation(s)
- Åsa Ingvar
- School of Public Health and Preventive Medicine, Monash University, Melbourne, Victoria, Australia
- Victorian Melanoma Service, Alfred Health, Melbourne, Victoria, Australia
- Department of Dermatology, Skåne University Hospital, Lund University, Lund, Sweden
- Department of Clinical Sciences, Skåne University Hospital, Lund University, Lund, Sweden
- Maithili Sashindranath
- School of Public Health and Preventive Medicine, Monash University, Melbourne, Victoria, Australia
- Ojochonu Anthony
- Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Victoria, Australia
- Lisa Abbott
- Melanoma Institute Australia, The University of Sydney, Sydney, New South Wales, Australia
- Pascale Guitera
- Faculty of Medicine and Health, The University of Sydney, Sydney, New South Wales, Australia
- Sydney Melanoma Diagnostic Centre, Royal Prince Alfred Hospital, Camperdown, New South Wales, Australia
- Perth Dermatology Clinic, Perth, Western Australia, Australia
- Tony Caccetta
- Perth Dermatology Clinic, Perth, Western Australia, Australia
- Monika Janda
- Dermatology Research Centre, Frazer Institute, The University of Queensland, Brisbane, Queensland, Australia
- H Peter Soyer
- Dermatology Research Centre, Frazer Institute, The University of Queensland, Brisbane, Queensland, Australia
- Victoria Mar
- School of Public Health and Preventive Medicine, Monash University, Melbourne, Victoria, Australia
- Victorian Melanoma Service, Alfred Health, Melbourne, Victoria, Australia
9. Afifah A, Syafira F, Afladhanti PM, Dharmawidiarini D. Artificial intelligence as diagnostic modality for keratoconus: A systematic review and meta-analysis. J Taibah Univ Med Sci 2024; 19:296-303. PMID: 38283379; PMCID: PMC10821587; DOI: 10.1016/j.jtumed.2023.12.007.
Abstract
Objectives The challenges in diagnosing keratoconus (KC) have led researchers to explore the use of artificial intelligence (AI) as a diagnostic tool. AI has emerged as a new way to improve the efficiency of KC diagnosis. This study analyzed the use of AI as a diagnostic modality for KC. Methods This study used a systematic review and meta-analysis following the 2020 Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. We searched selected databases using a combination of search terms: "((Artificial Intelligence) OR (Diagnostic Modality)) AND (Keratoconus)" from PubMed, Medline, and ScienceDirect within the last 5 years (2018-2023). Following a systematic review protocol, we selected 11 articles, and 6 articles were eligible for final analysis. The relevant data were analyzed with Review Manager 5.4 software and the final output was presented in a forest plot. Results This research found that neural networks were the most commonly used AI models for diagnosing KC. Neural networks and naïve Bayes showed the highest accuracy in diagnosing KC, with a sensitivity of 1.00, while random forests achieved >0.90. All studies in each group demonstrated high sensitivity and specificity (over 0.90). Conclusions AI can potentially improve the diagnosis of KC given its high performance, particularly its sensitivity and specificity, which can help clinicians make medical decisions about individual patients.
Affiliation(s)
- Azzahra Afifah
- Undaan Eye Hospital, Surabaya, Indonesia
- Medical Profession Program, Faculty of Medicine, Universitas Sriwijaya, Palembang, South Sumatra, Indonesia
- Fara Syafira
- Medical Profession Program, Faculty of Medicine, Universitas Sriwijaya, Palembang, South Sumatra, Indonesia
- Putri Mahirah Afladhanti
- Medical Profession Program, Faculty of Medicine, Universitas Sriwijaya, Palembang, South Sumatra, Indonesia
- Dini Dharmawidiarini
- Lens, Cornea and Refractive Surgery Division, Undaan Eye Hospital, Surabaya, Indonesia
10. Gurnani B, Kaur K. Leveraging ChatGPT for ophthalmic education: A critical appraisal. Eur J Ophthalmol 2024; 34:323-327. PMID: 37974429; DOI: 10.1177/11206721231215862.
Abstract
In recent years, the advent of artificial intelligence (AI) has transformed many sectors, including medical education. This editorial critically appraises the integration of ChatGPT, a state-of-the-art AI language model, into ophthalmic education, focusing on its potential, limitations, and ethical considerations. The application of ChatGPT in teaching and training ophthalmologists presents an innovative method to offer real-time, customized learning experiences. Through a systematic analysis of both experimental and clinical data, this editorial examines how ChatGPT enhances engagement, understanding, and retention of complex ophthalmological concepts. The study also evaluates the efficacy of ChatGPT in simulating patient interactions and clinical scenarios, which can foster improved diagnostic and interpersonal skills. Despite the promising advantages, concerns regarding reliability, lack of personal touch, and potential biases in the AI-generated content are scrutinized. Ethical considerations concerning data privacy and potential misuse are also explored. The findings underline the need for carefully designed integration, continuous evaluation, and adherence to ethical guidelines to maximize benefits while mitigating risks. By shedding light on these multifaceted aspects, this paper contributes to the ongoing discourse on the incorporation of AI in medical education, offering valuable insights and guidance for educators, practitioners, and policymakers aiming to leverage modern technology for enhancing ophthalmic education.
Affiliation(s)
- Bharat Gurnani
- Cataract, Cornea, Trauma, External Diseases, Ocular Surface and Refractive Services, ASG Eye Hospital, Jodhpur, Rajasthan, India
- Sadguru Netra Chikitsalya, Shri Sadguru Seva Sangh Trust, Chitrakoot, Madhya Pradesh, India
- Kirandeep Kaur
- Cataract, Pediatric Ophthalmology and Strabismus, ASG Eye Hospital, Jodhpur, Rajasthan, India
- Children Eye Care Centre, Sadguru Netra Chikitsalya, Shri Sadguru Seva Sangh Trust, Chitrakoot, Madhya Pradesh, India
11. Zang M, Mukund P, Forsyth B, Laine AF, Thakoor KA. Predicting Clinician Fixations on Glaucoma OCT Reports via CNN-Based Saliency Prediction Methods. IEEE Open Journal of Engineering in Medicine and Biology 2024; 5:191-197. PMID: 38606397; PMCID: PMC11008801; DOI: 10.1109/ojemb.2024.3367492.
Abstract
Goal: To predict physician fixations specifically on ophthalmology optical coherence tomography (OCT) reports from eye tracking data using CNN based saliency prediction methods in order to aid in the education of ophthalmologists and ophthalmologists-in-training. Methods: Fifteen ophthalmologists were recruited to each examine 20 randomly selected OCT reports and evaluate the likelihood of glaucoma for each report on a scale of 0-100. Eye movements were collected using a Pupil Labs Core eye-tracker. Fixation heat maps were generated using fixation data. Results: A model trained with traditional saliency mapping resulted in a correlation coefficient (CC) value of 0.208, a Normalized Scanpath Saliency (NSS) value of 0.8172, a Kullback-Leibler (KLD) value of 2.573, and a Structural Similarity Index (SSIM) of 0.169. Conclusions: The TranSalNet model was able to predict fixations within certain regions of the OCT report with reasonable accuracy, but more data is needed to improve model accuracy. Future steps include increasing data collection, improving quality of data, and modifying the model architecture.
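The evaluation metrics named here (CC, NSS, KLD) have compact standard definitions. The sketch below implements them for predicted and ground-truth saliency maps; the toy arrays are random and purely illustrative, and SSIM is usually taken from an image library rather than re-implemented.

```python
import numpy as np

def cc(pred, gt):
    """Pearson correlation coefficient between predicted and ground-truth saliency maps."""
    p = (pred - pred.mean()) / (pred.std() + 1e-8)
    g = (gt - gt.mean()) / (gt.std() + 1e-8)
    return float((p * g).mean())

def kld(pred, gt, eps=1e-8):
    """KL divergence between the two maps treated as probability distributions."""
    p = pred / (pred.sum() + eps)
    g = gt / (gt.sum() + eps)
    return float(np.sum(g * np.log(g / (p + eps) + eps)))

def nss(pred, fixation_mask):
    """Normalized Scanpath Saliency: mean normalized saliency value at fixated pixels."""
    p = (pred - pred.mean()) / (pred.std() + 1e-8)
    return float(p[fixation_mask.astype(bool)].mean())

# Toy example with random maps, purely to show the calling convention.
rng = np.random.default_rng(0)
pred = rng.random((64, 64))
gt = rng.random((64, 64))
fix = rng.random((64, 64)) > 0.99
print(cc(pred, gt), kld(pred, gt), nss(pred, fix))
```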
12. Marquez E, Barrón-Palma EV, Rodríguez K, Savage J, Sanchez-Sandoval AL. Supervised Machine Learning Methods for Seasonal Influenza Diagnosis. Diagnostics (Basel) 2023; 13:3352. PMID: 37958248; PMCID: PMC10647880; DOI: 10.3390/diagnostics13213352.
Abstract
Influenza has been a seasonal disease in Mexico since 2009, and it imposes a high cost on the national public health system, including the costs of detection using RT-qPCR tests, treatment, and workplace absenteeism. Despite influenza's relevance, the main clinical features used to detect the disease, as defined by international institutions like the World Health Organization (WHO) and the United States Centers for Disease Control and Prevention (CDC), do not follow the same pattern in all populations. The aim of this work is to find a machine learning method to facilitate decision making in the clinical differentiation between positive and negative influenza patients, based on their symptoms and demographic features. The research sample consisted of 15,480 records, including clinical and demographic data of patients with positive or negative RT-qPCR influenza tests, from 2010 to 2020 in the public healthcare institutions of Mexico City. The performance of the methods for classifying influenza cases was evaluated with indices like accuracy, specificity, sensitivity, precision, the f1-measure and the area under the curve (AUC). Results indicate that random forest and bagging classifiers were the best supervised methods; they showed promise in supporting clinical diagnosis, especially in places where performing molecular tests might be challenging or not feasible.
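A minimal version of the reported workflow, training random forest and bagging classifiers and scoring them with AUC, F1, sensitivity and precision, looks like the following; the synthetic dataset stands in for the clinical records, which are not reproduced here.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, f1_score, recall_score, precision_score

# Synthetic stand-in for symptom/demographic features with an RT-qPCR label.
X, y = make_classification(n_samples=15480, n_features=20, weights=[0.7, 0.3], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, test_size=0.2, random_state=1)

for name, model in [("random forest", RandomForestClassifier(n_estimators=100, random_state=1)),
                    ("bagging", BaggingClassifier(n_estimators=100, random_state=1))]:
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]
    pred = model.predict(X_te)
    print(name,
          "AUC=%.3f" % roc_auc_score(y_te, proba),
          "F1=%.3f" % f1_score(y_te, pred),
          "sens=%.3f" % recall_score(y_te, pred),
          "prec=%.3f" % precision_score(y_te, pred))
```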
Affiliation(s)
- Edna Marquez
- Genomic Medicine Department, General Hospital of México “Dr. Eduardo Liceaga”, Mexico City 06726, Mexico
- Eira Valeria Barrón-Palma
- Genomic Medicine Department, General Hospital of México “Dr. Eduardo Liceaga”, Mexico City 06726, Mexico
- Katya Rodríguez
- Institute for Research in Applied Mathematics and Systems, National Autonomous University of Mexico, Mexico City 04510, Mexico
- Jesus Savage
- Signal Processing Department, Engineering School, National Autonomous University of Mexico, Mexico City 04510, Mexico
- Ana Laura Sanchez-Sandoval
- Genomic Medicine Department, General Hospital of México “Dr. Eduardo Liceaga”, Mexico City 06726, Mexico
13. Xiao J, Kopycka-Kedzierawski D, Ragusa P, Mendez Chagoya LA, Funkhouser K, Lischka T, Wu TT, Fiscella K, Kar KS, Al Jallad N, Rashwan N, Ren J, Meyerowitz C. Acceptance and Usability of an Innovative mDentistry eHygiene Model Amid the COVID-19 Pandemic Within the US National Dental Practice-Based Research Network: Mixed Methods Study. JMIR Hum Factors 2023; 10:e45418. PMID: 37594795; PMCID: PMC10474507; DOI: 10.2196/45418.
Abstract
BACKGROUND Amid the COVID-19 pandemic and other possible future infectious disease pandemics, dentistry needs to consider modified dental examination regimens that render quality care and ensure the safety of patients and dental health care personnel (DHCP). OBJECTIVE This study aims to assess the acceptance and usability of an innovative mDentistry eHygiene model amid the COVID-19 pandemic. METHODS This pilot study used a 2-stage implementation design to assess 2 critical components of an innovative mDentistry eHygiene model: virtual hygiene examination (eHygiene) and patient self-taken intraoral images (SELFIE), within the National Dental Practice-Based Research Network. Mixed methods (quantitative and qualitative) were used to assess the acceptance and usability of the eHygiene model. RESULTS A total of 85 patients and 18 DHCP participated in the study. Overall, the eHygiene model was well accepted by patients (System Usability Scale [SUS] score: mean 70.0, SD 23.7) and moderately accepted by dentists (SUS score: mean 51.3, SD 15.9) and hygienists (SUS score: mean 57.1, SD 23.8). Dentists and patients had good communication during the eHygiene examination, as assessed using the Dentist-Patient Communication scale. In the SELFIE session, patients completed tasks with minimum challenges and obtained diagnostic intraoral photos. Patients and DHCP suggested that although eHygiene has the potential to improve oral health care services, it should be used selectively depending on patients' conditions. CONCLUSIONS The study results showed promise for the 2 components of the eHygiene model. eHygiene offers a complementary modality for oral health data collection and examination in dental offices, which would be particularly useful during an infectious disease outbreak. In addition, patients being able to capture critical oral health data in their home could facilitate dental treatment triage and oral health self-monitoring and potentially trigger oral health-promoting behaviors.
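The usability figures above are System Usability Scale (SUS) scores, which are computed from ten 1-5 ratings with a fixed rescaling. A small helper, with an invented example response set, is shown below.

```python
def sus_score(responses):
    """System Usability Scale: 10 items rated 1-5; odd-numbered items contribute
    (rating - 1), even-numbered items contribute (5 - rating), and the sum is
    multiplied by 2.5 to give a 0-100 score."""
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even i = odd item
                for i, r in enumerate(responses))
    return total * 2.5

# Example: a fairly positive respondent (ratings invented for illustration).
print(sus_score([4, 2, 4, 2, 5, 1, 4, 2, 4, 2]))  # -> 80.0
```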
Affiliation(s)
- Jin Xiao
- Eastman Institute for Oral Health, University of Rochester, Rochester, NY, United States
- Patricia Ragusa
- Eastman Institute for Oral Health, University of Rochester, Rochester, NY, United States
- Tamara Lischka
- Kaiser Permanente Center for Health Research, Portland, OR, United States
- Tong Tong Wu
- Department of Biostatistics and Computational Biology, University of Rochester, Rochester, NY, United States
- Kevin Fiscella
- Department of Family Medicine, University of Rochester, Rochester, NY, United States
- Kumari Saswati Kar
- Eastman Institute for Oral Health, University of Rochester, Rochester, NY, United States
- Nisreen Al Jallad
- Eastman Institute for Oral Health, University of Rochester, Rochester, NY, United States
- Noha Rashwan
- Eastman Institute for Oral Health, University of Rochester, Rochester, NY, United States
- Johana Ren
- River Campus, University of Rochester, Rochester, NY, United States
- Cyril Meyerowitz
- Eastman Institute for Oral Health, University of Rochester, Rochester, NY, United States
14. Wei W, Southern J, Zhu K, Li Y, Cordeiro MF, Veselkov K. Deep learning to detect macular atrophy in wet age-related macular degeneration using optical coherence tomography. Sci Rep 2023; 13:8296. PMID: 37217770; DOI: 10.1038/s41598-023-35414-y.
Abstract
Here, we have developed a deep learning method to fully automatically detect and quantify six main clinically relevant atrophic features associated with macular atrophy (MA) using optical coherence tomography (OCT) analysis of patients with wet age-related macular degeneration (AMD). The development of MA in patients with AMD results in irreversible blindness, and there is currently no effective method of early diagnosis of this condition, despite the recent development of unique treatments. Using an OCT dataset of 2211 B-scans from 45 volumetric scans of 8 patients, a convolutional neural network using a one-against-all strategy was trained to detect all six atrophic features, followed by a validation to evaluate the performance of the models. The model achieved a mean Dice similarity coefficient of 0.706 ± 0.039, a mean precision of 0.834 ± 0.048, and a mean sensitivity of 0.615 ± 0.051. These results show the unique potential of using artificial intelligence-aided methods for early detection and identification of the progression of MA in wet AMD, which can further support and assist clinical decisions.
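The headline metric here, the Dice similarity coefficient, is simple to compute from binary masks; a short reference implementation with toy masks follows (not the authors' code).

```python
import numpy as np

def dice(pred_mask, gt_mask, eps=1e-8):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)

# Toy masks standing in for one atrophic feature on a single B-scan.
a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True
print(f"Dice = {dice(a, b):.4f}")  # 2*9 / (16 + 16) = 0.5625
```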
Affiliation(s)
- Wei Wei
- Department of Surgery and Cancer, Imperial College London, London, UK
- Ningbo Medical Center Lihuili Hospital, Ningbo, China
- Imperial College Ophthalmology Research Group, London, UK
- Kexuan Zhu
- Ningbo Medical Center Lihuili Hospital, Ningbo, China
- Yefeng Li
- School of Cyber Science and Engineering, Ningbo University of Technology, Ningbo, China
- Maria Francesca Cordeiro
- Department of Surgery and Cancer, Imperial College London, London, UK
- Imperial College Ophthalmology Research Group, London, UK
- Kirill Veselkov
- Department of Surgery and Cancer, Imperial College London, London, UK
15. Tang YW, Ji J, Lin JW, Wang J, Wang Y, Liu Z, Hu Z, Yang JF, Ng TK, Zhang M, Pang CP, Cen LP. Automatic Detection of Peripheral Retinal Lesions From Ultrawide-Field Fundus Images Using Deep Learning. Asia Pac J Ophthalmol (Phila) 2023; 12:284-292. PMID: 36912572; DOI: 10.1097/apo.0000000000000599.
Abstract
PURPOSE To establish a multilabel-based deep learning (DL) algorithm for automatic detection and categorization of clinically significant peripheral retinal lesions using ultrawide-field fundus images. METHODS A total of 5958 ultrawide-field fundus images from 3740 patients were randomly split into a training set, validation set, and test set. A multilabel classifier was developed to detect rhegmatogenous retinal detachment, cystic retinal tuft, lattice degeneration, and retinal breaks. Referral decision was automatically generated based on the results of each disease class. t-distributed stochastic neighbor embedding heatmaps were used to visualize the features extracted by the neural networks. Gradient-weighted class activation mapping and guided backpropagation heatmaps were generated to investigate the image locations for decision-making by the DL models. The performance of the classifier(s) was evaluated by sensitivity, specificity, accuracy, F1 score, area under the receiver operating characteristic curve (AUROC) with 95% CI, and area under the precision-recall curve. RESULTS In the test set, all categories achieved a sensitivity of 0.836-0.918, a specificity of 0.858-0.989, an accuracy of 0.854-0.977, an F1 score of 0.400-0.931, an AUROC of 0.9205-0.9882, and an area under the precision-recall curve of 0.6723-0.9745. The referral decisions achieved an AUROC of 0.9758 (95% CI = 0.9648-0.9869). The multilabel classifier had significantly better performance in cystic retinal tuft detection than the binary classifier (AUROC = 0.9781 vs 0.6112, P < 0.001). The model showed comparable performance with human experts. CONCLUSIONS This new DL model of a multilabel classifier is capable of automatic, accurate, and early detection of clinically significant peripheral retinal lesions with various sample sizes. It can be applied in peripheral retinal screening in clinics.
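Per-label AUROC and a referral decision derived from the union of positive labels, as described above, can be evaluated as in the following sketch; the label matrix and scores are randomly generated placeholders, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, f1_score

# Hypothetical outputs for a 4-label classifier (RRD, cystic retinal tuft,
# lattice degeneration, retinal breaks); labels and scores are invented.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(200, 4))                            # ground-truth label matrix
y_score = np.clip(y_true * 0.6 + rng.random((200, 4)) * 0.5, 0, 1)    # correlated scores

for k, name in enumerate(["RRD", "cystic tuft", "lattice", "breaks"]):
    print(name, "AUROC = %.3f" % roc_auc_score(y_true[:, k], y_score[:, k]))

# A simple referral rule: refer if any label exceeds its threshold.
y_pred = (y_score >= 0.5).astype(int)
refer_true = y_true.max(axis=1)
refer_pred = y_pred.max(axis=1)
print("Referral F1 = %.3f" % f1_score(refer_true, refer_pred))
```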
Affiliation(s)
- Yi-Wen Tang
- Joint Shantou International Eye Center of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Jie Ji
- Network and Information Center, Shantou University, Shantou, Guangdong, China
- Jian-Wei Lin
- Joint Shantou International Eye Center of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Ji Wang
- Joint Shantou International Eye Center of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Yun Wang
- Joint Shantou International Eye Center of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Zibo Liu
- Joint Shantou International Eye Center of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Zhanchi Hu
- Joint Shantou International Eye Center of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Jian-Feng Yang
- Joint Shantou International Eye Center of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Tsz Kin Ng
- Joint Shantou International Eye Center of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Shantou University Medical College, Shantou, Guangdong, China
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Mingzhi Zhang
- Joint Shantou International Eye Center of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Chi Pui Pang
- Joint Shantou International Eye Center of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Ling-Ping Cen
- Joint Shantou International Eye Center of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
16. Li M, Liu S, Wang Z, Li X, Yan Z, Zhu R, Wan Z. MyopiaDETR: End-to-end pathological myopia detection based on transformer using 2D fundus images. Front Neurosci 2023; 17:1130609. PMID: 36824210; PMCID: PMC9941630; DOI: 10.3389/fnins.2023.1130609.
Abstract
Background Automated diagnosis of various retinal diseases based on fundus images can serve as an important clinical decision aid for curing vision loss. However, developing such an automated diagnostic solution is challenged by the characteristics of lesion areas in 2D fundus images, such as irregular morphology, varying imaging angles, and insufficient data. Methods To overcome those challenges, we propose a novel deep learning model named MyopiaDETR to detect the lesion area of normal myopia (NM), high myopia (HM) and pathological myopia (PM) using 2D fundus images provided by the iChallenge-PM dataset. To address the challenge of irregular morphology, we present a novel attentional FPN architecture that generates multi-scale feature maps for a traditional Detection Transformer (DETR), allowing irregular lesions to be detected more accurately. We choose the DETR structure to view the lesion from the perspective of set prediction and to capture better global information. Several data augmentation methods are used on the iChallenge-PM dataset to address the challenge of insufficient data. Results The experimental results demonstrate that our model achieves excellent localization and classification performance on the iChallenge-PM dataset, reaching an AP50 of 86.32%. Conclusion Our model is effective at detecting lesion areas in 2D fundus images. It achieves a significant improvement not only in capturing small objects but also in convergence speed during training.
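To make the FPN-plus-DETR idea concrete, the sketch below feeds multi-scale FPN features, flattened into tokens, through a transformer encoder. It is a conceptual illustration with random tensors and is not the authors' MyopiaDETR implementation or their attentional FPN.

```python
from collections import OrderedDict
import torch
import torch.nn as nn
from torchvision.ops import FeaturePyramidNetwork

# Project three backbone scales to a common channel width.
fpn = FeaturePyramidNetwork(in_channels_list=[256, 512, 1024], out_channels=256)

# Fake backbone outputs at three scales for one 2D fundus image.
feats = OrderedDict(
    c3=torch.randn(1, 256, 32, 32),
    c4=torch.randn(1, 512, 16, 16),
    c5=torch.randn(1, 1024, 8, 8),
)
pyramid = fpn(feats)  # each level now has 256 channels

# Flatten every level to (batch, tokens, channels) and concatenate into one sequence.
tokens = torch.cat([p.flatten(2).transpose(1, 2) for p in pyramid.values()], dim=1)

encoder_layer = nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
memory = encoder(tokens)
print(memory.shape)  # (1, 32*32 + 16*16 + 8*8, 256)
```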
Affiliation(s)
- Manyu Li
- School of Information Engineering, Nanchang University, Jiangxi, China
- Shichang Liu
- School of Computer Science, Shaanxi Normal University, Xi’an, China
- Zihan Wang
- School of Information Engineering, Nanchang University, Jiangxi, China
- Xin Li
- School of Computer Science, Shaanxi Normal University, Xi’an, China
- Zezhong Yan
- School of Information Engineering, Nanchang University, Jiangxi, China
- Renping Zhu
- School of Information Engineering, Nanchang University, Jiangxi, China
- Industrial Institute of Artificial Intelligence, Nanchang University, Jiangxi, China
- School of Information Management, Wuhan University, Hubei, China
- Zhijiang Wan
- School of Information Engineering, Nanchang University, Jiangxi, China
- Industrial Institute of Artificial Intelligence, Nanchang University, Jiangxi, China
17. Ciecierski-Holmes T, Singh R, Axt M, Brenner S, Barteit S. Artificial intelligence for strengthening healthcare systems in low- and middle-income countries: a systematic scoping review. NPJ Digit Med 2022; 5:162. PMID: 36307479; PMCID: PMC9614192; DOI: 10.1038/s41746-022-00700-y.
Abstract
In low- and middle-income countries (LMICs), AI has been promoted by a growing number of publications as a potential means of strengthening healthcare systems. We aimed to evaluate the scope and nature of AI technologies in the specific context of LMICs. In this systematic scoping review, we used a broad variety of AI and healthcare search terms. Our literature search included records published between 1st January 2009 and 30th September 2021 from the Scopus, EMBASE, MEDLINE, Global Health and APA PsycInfo databases, and grey literature from a Google Scholar search. We included studies that reported a quantitative and/or qualitative evaluation of a real-world application of AI in an LMIC health context. A total of 10 references evaluating the application of AI in an LMIC were included. Applications varied widely, including clinical decision support systems, treatment planning and triage assistants, and health chatbots. Only half of the papers reported which algorithms and datasets were used in order to train the AI. A number of challenges of using AI tools were reported, including issues with reliability, mixed impacts on workflows, poor user-friendliness and lack of adeptness with local contexts. Many barriers exist that prevent the successful development and adoption of well-performing, context-specific AI tools, such as limited data availability, trust and evidence of cost-effectiveness in LMICs. Additional evaluations of the use of AI in healthcare in LMICs are needed in order to identify their effectiveness and reliability in real-world settings and to generate understanding of best practices for future implementations.
Affiliation(s)
- Tadeusz Ciecierski-Holmes
- Heidelberg Institute of Global Health (HIGH), Faculty of Medicine and University Hospital, Heidelberg University, Heidelberg, Germany.
- University of Cambridge, School of Clinical Medicine, Addenbrooke's Hospital, Cambridge, CB2 0SP, UK.
- Ritvij Singh
- Imperial College London, Faculty of Medicine, Sir Alexander Fleming Building, London, SW7 2DD, UK
- Miriam Axt
- Heidelberg Institute of Global Health (HIGH), Faculty of Medicine and University Hospital, Heidelberg University, Heidelberg, Germany
- Stephan Brenner
- Heidelberg Institute of Global Health (HIGH), Faculty of Medicine and University Hospital, Heidelberg University, Heidelberg, Germany
- Sandra Barteit
- Heidelberg Institute of Global Health (HIGH), Faculty of Medicine and University Hospital, Heidelberg University, Heidelberg, Germany
18. Ferro Desideri L, Rutigliani C, Corazza P, Nastasi A, Roda M, Nicolo M, Traverso CE, Vagge A. The upcoming role of Artificial Intelligence (AI) for retinal and glaucomatous diseases. Journal of Optometry 2022; 15 Suppl 1:S50-S57. PMID: 36216736; PMCID: PMC9732476; DOI: 10.1016/j.optom.2022.08.001.
Abstract
In recent years, the role of artificial intelligence (AI) and deep learning (DL) models has attracted increasing global interest in the field of ophthalmology. DL models are considered the current state-of-the-art among AI technologies. In fact, DL systems have the capability to recognize, quantify and describe pathological clinical features. Their role is currently being investigated for the early diagnosis and management of several retinal diseases and glaucoma. The application of DL models to fundus photographs, visual fields and optical coherence tomography (OCT) imaging has provided promising results in the early detection of diabetic retinopathy (DR), wet age-related macular degeneration (w-AMD), retinopathy of prematurity (ROP) and glaucoma. In this review we analyze the current evidence on AI applied to these ocular diseases and discuss possible future developments and potential clinical implications, without neglecting the present limitations and challenges that must be addressed before AI and DL models can be adopted as powerful tools in everyday routine clinical practice.
Affiliation(s)
- Lorenzo Ferro Desideri
- University Eye Clinic of Genoa, IRCCS Ospedale Policlinico San Martino, Genoa, Italy; Department of Neurosciences, Rehabilitation, Ophthalmology, Genetics, Maternal and Child Health (DiNOGMI), University of Genoa, Italy.
- Paolo Corazza
- University Eye Clinic of Genoa, IRCCS Ospedale Policlinico San Martino, Genoa, Italy; Department of Neurosciences, Rehabilitation, Ophthalmology, Genetics, Maternal and Child Health (DiNOGMI), University of Genoa, Italy
- Matilde Roda
- Ophthalmology Unit, Department of Experimental, Diagnostic and Specialty Medicine (DIMES), Alma Mater Studiorum University of Bologna and S.Orsola-Malpighi Teaching Hospital, Bologna, Italy
- Massimo Nicolo
- University Eye Clinic of Genoa, IRCCS Ospedale Policlinico San Martino, Genoa, Italy; Department of Neurosciences, Rehabilitation, Ophthalmology, Genetics, Maternal and Child Health (DiNOGMI), University of Genoa, Italy
- Carlo Enrico Traverso
- University Eye Clinic of Genoa, IRCCS Ospedale Policlinico San Martino, Genoa, Italy; Department of Neurosciences, Rehabilitation, Ophthalmology, Genetics, Maternal and Child Health (DiNOGMI), University of Genoa, Italy
- Aldo Vagge
- University Eye Clinic of Genoa, IRCCS Ospedale Policlinico San Martino, Genoa, Italy; Department of Neurosciences, Rehabilitation, Ophthalmology, Genetics, Maternal and Child Health (DiNOGMI), University of Genoa, Italy
19
|
Yasin A, Ren Y, Li J, Sheng Y, Cao C, Zhang K. Advances in Hyaluronic Acid for Biomedical Applications. Front Bioeng Biotechnol 2022; 10:910290. [PMID: 35860333 PMCID: PMC9289781 DOI: 10.3389/fbioe.2022.910290] [Citation(s) in RCA: 83] [Impact Index Per Article: 27.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2022] [Accepted: 05/24/2022] [Indexed: 11/13/2022] Open
Abstract
Hyaluronic acid (HA) is a large non-sulfated glycosaminoglycan that is the main component of the extracellular matrix (ECM). Because of its strong and diversified functions across broad fields, HA has been widely studied and reported. The molecular properties of HA and its derivatives, including a wide range of molecular weights with distinct effects on cells, moisture retention and anti-aging activity, and CD44 targeting, have established its role as a popular participant in tissue engineering, wound healing, cancer treatment, ophthalmology, and cosmetics. In recent years, HA and its derivatives have played an increasingly important role in these biomedical fields in the formulation of coatings, nanoparticles, and hydrogels. This article highlights recent efforts in converting HA into smart formulations, such as multifunctional coatings, targeted nanoparticles, or injectable hydrogels, which are used in advanced biomedical applications.
Collapse
Affiliation(s)
- Aqeela Yasin
- School of Materials Science and Engineering, and Henan Key Laboratory of Advanced Magnesium Alloy and Key Laboratory of Materials Processing and Mold Technology (Ministry of Education), Zhengzhou University, Zhengzhou, China
| | - Ying Ren
- School of Materials Science and Engineering, Henan University of Technology, Zhengzhou, China
| | - Jingan Li
- School of Materials Science and Engineering, and Henan Key Laboratory of Advanced Magnesium Alloy and Key Laboratory of Materials Processing and Mold Technology (Ministry of Education), Zhengzhou University, Zhengzhou, China
| | - Yulong Sheng
- School of Materials Science and Engineering, and Henan Key Laboratory of Advanced Magnesium Alloy and Key Laboratory of Materials Processing and Mold Technology (Ministry of Education), Zhengzhou University, Zhengzhou, China
| | - Chang Cao
- Department of Cardiology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
| | - Kun Zhang
- School of Life Science, Zhengzhou University, Zhengzhou, China
| |
Collapse
|
20
|
Use of Artificial Neural Networks to Predict the Progression of Glaucoma in Patients with Sleep Apnea. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12126061] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Abstract
Aim: To construct neural models to predict the progression of glaucoma in patients with sleep apnea. Materials and Methods: Neural network modeling was performed using the Neurosolutions commercial simulator. The databases gather information on a group of patients with primary open-angle glaucoma and normal-tension glaucoma associated with sleep apnea syndrome at various stages of disease severity. The data within the database were divided as follows: 65 records were used in the neural network training stage and 8 were kept for the validation stage. In total, 21 parameters were selected as input parameters for the neural models, including: age of patients, BMI (body mass index), systolic and diastolic blood pressure, intraocular pressure, central corneal thickness, corneal biomechanical parameters (IOPcc, HC, CRF), AHI, desaturation index, nocturnal oxygen saturation, remaining AHI, type of apnea, and associated general conditions (diabetes, hypertension, obesity, COPD). The selected output parameters are: c/d ratio, modified visual field parameters (MD, PSD), and ganglion cell layer thickness. Feed-forward neural networks (multilayer perceptrons) with a single hidden layer were constructed, and the constructed neural models generated the output values for these data. The obtained results were then compared with the experimental values. Results: The best results were obtained during the training stage with the ANN network (21:35:4). If a 25% error margin is allowed, very good results are obtained during the validation stage, except for the average GCL thickness, for which the errors are slightly higher. Conclusions: Excellent results were obtained during the validation stage, which support the results of other studies in the literature and strengthen the connection between sleep apnea syndrome and glaucomatous changes.
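A minimal sketch of a comparable feed-forward network (one hidden layer mapping 21 clinical inputs to 4 outputs, mirroring the 21:35:4 topology described above) using scikit-learn's MLPRegressor; the feature matrix, targets, and hyperparameters are illustrative placeholders, not the authors' Neurosolutions setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Illustrative placeholder data: 73 patients, 21 clinical inputs, 4 outputs
# (c/d ratio, MD, PSD, GCL thickness) -- random numbers stand in for the
# real measurements used in the study.
rng = np.random.default_rng(0)
X = rng.normal(size=(73, 21))
y = rng.normal(size=(73, 4))

# 65 records for training, 8 held back for validation, as in the abstract.
X_train, X_val = X[:65], X[65:]
y_train, y_val = y[:65], y[65:]

scaler = StandardScaler().fit(X_train)

# Feed-forward network with one hidden layer of 35 neurons (21:35:4).
model = MLPRegressor(hidden_layer_sizes=(35,), activation="tanh",
                     max_iter=5000, random_state=0)
model.fit(scaler.transform(X_train), y_train)

pred = model.predict(scaler.transform(X_val))
# Fraction of validation predictions within a 25% margin of the target value.
within_25pct = np.mean(np.abs(pred - y_val) <= 0.25 * np.abs(y_val))
print(f"Validation predictions within 25% of target: {within_25pct:.2%}")
```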
Collapse
|
21
|
Al-Jallad N, Ly-Mapes O, Hao P, Ruan J, Ramesh A, Luo J, Wu TT, Dye T, Rashwan N, Ren J, Jang H, Mendez L, Alomeir N, Bullock S, Fiscella K, Xiao J. Artificial intelligence-powered smartphone application, AICaries, improves at-home dental caries screening in children: Moderated and unmoderated usability test. PLOS DIGITAL HEALTH 2022; 1:e0000046. [PMID: 36381137 PMCID: PMC9645586 DOI: 10.1371/journal.pdig.0000046] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/04/2022] [Accepted: 04/15/2022] [Indexed: 06/16/2023]
Abstract
Early Childhood Caries (ECC) is the most common childhood disease worldwide and a health disparity among underserved children. ECC is preventable and reversible if detected early. However, many children from low-income families encounter barriers to dental care. An at-home caries detection technology could potentially improve access to dental care regardless of patients' economic status and address the overwhelming prevalence of ECC. Our team has developed a smartphone application (app), AICaries, that uses artificial intelligence (AI)-powered technology to detect caries using photos of children's teeth. We used mixed methods to assess the acceptance, usability, and feasibility of the AICaries app among underserved parent-child dyads. We conducted moderated usability testing (Step 1) with ten parent-child dyads using "think-aloud" methods to assess the flow and functionality of the app, and analyzed the data to refine the app and procedures. Next, we conducted unmoderated field testing (Step 2) with 32 parent-child dyads to test the app within their natural environment (home) over two weeks. We administered the System Usability Scale (SUS), conducted semi-structured individual interviews with parents, and performed thematic analyses. The AICaries app received a SUS score of 78.4 from the participants, indicating excellent acceptance. Notably, the majority (78.5%) of parent-taken photos of children's teeth were of satisfactory quality for caries detection using the AI app. Parents suggested using community health workers to train parents who need assistance in taking high-quality photos of their young child's teeth. Perceived benefits of using the AICaries app include convenient at-home caries screening, information on caries risk and education, and engagement of family members. Data from this study support a future clinical trial that evaluates the real-world impact of using this innovative smartphone app on early detection and prevention of ECC among low-income children.
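For context, the SUS score reported above is computed from ten 1-5 Likert items with the standard scoring rule (odd items contribute score minus 1, even items contribute 5 minus score, and the sum is scaled by 2.5). A small sketch of that calculation; the example responses are made up.

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items (positively worded) contribute (response - 1);
    even-numbered items (negatively worded) contribute (5 - response).
    The total is scaled by 2.5 to give a 0-100 score.
    """
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Made-up example: one parent's responses to the ten SUS items.
print(sus_score([5, 2, 4, 1, 5, 2, 4, 1, 5, 2]))  # -> 87.5
```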
Collapse
Affiliation(s)
- Nisreen Al-Jallad
- Eastman Institute for Oral Health, University of Rochester Medical Center, Rochester, NY, United States of America
| | - Oriana Ly-Mapes
- Eastman Institute for Oral Health, University of Rochester Medical Center, Rochester, NY, United States of America
| | - Peirong Hao
- Department of Computer Science, University of Rochester, United States of America
| | - Jinlong Ruan
- Department of Computer Science, University of Rochester, United States of America
| | - Ashwin Ramesh
- Department of Computer Science, University of Rochester, United States of America
| | - Jiebo Luo
- Department of Computer Science, University of Rochester, United States of America
| | - Tong Tong Wu
- Department of Biostatistics and computational biology, University of Rochester Medical Center, Rochester, United States of America
| | - Timothy Dye
- Department of Obstetrics and Gynecology, University of Rochester Medical Center, Rochester, United States of America
| | - Noha Rashwan
- Eastman Institute for Oral Health, University of Rochester Medical Center, Rochester, NY, United States of America
| | - Johana Ren
- University of Rochester, United States of America
| | - Hoonji Jang
- Temple University School of Dentistry, Pennsylvania, United States of America
| | - Luis Mendez
- Eastman Institute for Oral Health, University of Rochester Medical Center, Rochester, NY, United States of America
| | - Nora Alomeir
- Eastman Institute for Oral Health, University of Rochester Medical Center, Rochester, NY, United States of America
| | | | - Kevin Fiscella
- Department of Family Medicine, University of Rochester Medical Center, Rochester, NY, United States of America
| | - Jin Xiao
- Eastman Institute for Oral Health, University of Rochester Medical Center, Rochester, NY, United States of America
| |
Collapse
|
22
|
Kothandan S, Radhakrishana A, Kuppusamy G. Review on Artificial Intelligence Based Ophthalmic Application. Curr Pharm Des 2022; 28:2150-2160. [PMID: 35619317 DOI: 10.2174/1381612828666220520112240] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2021] [Accepted: 02/14/2022] [Indexed: 11/22/2022]
Abstract
Artificial intelligence is a leading branch of technology and innovation, and its utility in the field of medicine is remarkable. From drug discovery and development to the introduction of products in the market, artificial intelligence can play a role. As people age, they are more prone to be affected by eye diseases around the globe. Early diagnosis and detection help in minimizing the risk of vision loss and providing a quality life. With the help of artificial intelligence, the workload of humans and human errors can be reduced to an extent. The need for artificial intelligence in the area of ophthalmic care is therefore significant. In this review, we elaborate on the use of artificial intelligence in the field of pharmaceutical product development, mainly with its application in ophthalmic care. That AI has a high potential to increase the success rate in the drug discovery phase has already been established. The application of artificial intelligence for drug development, diagnosis, and treatment is also reported with supporting scientific evidence in this paper.
Collapse
Affiliation(s)
- Sudhakar Kothandan
- Department of Pharmaceutics, JSS College of Pharmacy (JSS Academy of Higher Education & Research), Ooty
| | - Arun Radhakrishana
- Department of Pharmaceutics, JSS College of Pharmacy (JSS Academy of Higher Education & Research), Ooty
| | - Gowthamarajan Kuppusamy
- Department of Pharmaceutics, JSS College of Pharmacy (JSS Academy of Higher Education & Research), Ooty
| |
Collapse
|
23
|
Trends in Neonatal Ophthalmic Screening Methods. Diagnostics (Basel) 2022; 12:diagnostics12051251. [PMID: 35626406 PMCID: PMC9140133 DOI: 10.3390/diagnostics12051251] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2022] [Revised: 05/12/2022] [Accepted: 05/17/2022] [Indexed: 11/30/2022] Open
Abstract
Neonatal ophthalmic screening should lead to early diagnosis of ocular abnormalities and thereby reduce long-term visual impairment in selected diseases. If a treatable pathology is diagnosed within a few days after birth, adequate therapy may be indicated to facilitate the best possible conditions for the further development of visual functions. Traditional neonatal ophthalmic screening uses the red reflex test (RRT), which tests the transmittance of light through the optical media towards the retina and the general disposition of the central part of the retina. However, RRT has weaknesses, especially for posterior segment affections. Wide-field digital imaging techniques have shown promising results in detecting anterior and posterior segment pathologies. Particular attention should be paid to telemedicine and artificial intelligence: these methods can improve the specificity and sensitivity of neonatal eye screening, and both are already highly advanced in the diagnosis and monitoring of retinopathy of prematurity.
Collapse
|
24
|
A Particle Swarm Optimization Backtracking Technique Inspired by Science-Fiction Time Travel. AI 2022. [DOI: 10.3390/ai3020024] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
Artificial intelligence techniques, such as particle swarm optimization, are used to solve problems throughout society. Optimization, in particular, seeks to identify the best possible decision within a search space. Problematically, particle swarm optimization will sometimes have particles that become trapped inside local minima, preventing them from identifying a global optimal solution. As a solution to this issue, this paper proposes a science-fiction inspired enhancement of particle swarm optimization where an impactful iteration is identified and the algorithm is rerun from this point, with a change made to the swarm. The proposed technique is tested using multiple variations on several different functions representing optimization problems and several standard test functions used to test various particle swarm optimization techniques.
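A minimal baseline particle swarm optimizer in Python, shown only to illustrate the velocity and position update that the enhancement above builds on; the backtracking step itself (identifying an impactful iteration and rerunning from it with a perturbed swarm) is not reproduced here, and the objective, bounds, and coefficients are arbitrary choices.

```python
import numpy as np

def pso(objective, dim=5, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=0):
    """Minimal particle swarm optimization for minimization."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))      # positions
    v = np.zeros_like(x)                                   # velocities
    pbest = x.copy()                                       # personal bests
    pbest_val = np.apply_along_axis(objective, 1, x)
    gbest = pbest[np.argmin(pbest_val)].copy()             # global best

    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Standard PSO velocity update: inertia + cognitive + social terms.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(objective, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

# Example: minimize the sphere function; the global optimum is the origin.
best_x, best_f = pso(lambda z: float(np.sum(z ** 2)))
print(best_x, best_f)
```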
Collapse
|
25
|
Diagnostic accuracy of current machine learning classifiers for age-related macular degeneration: a systematic review and meta-analysis. Eye (Lond) 2022; 36:994-1004. [PMID: 33958739 PMCID: PMC9046206 DOI: 10.1038/s41433-021-01540-y] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2020] [Revised: 02/23/2021] [Accepted: 04/06/2021] [Indexed: 02/06/2023] Open
Abstract
BACKGROUND AND OBJECTIVE The objective of this study was to systematically review and meta-analyze the diagnostic accuracy of current machine learning classifiers for age-related macular degeneration (AMD). Artificial intelligence diagnostic algorithms can automatically detect and diagnose AMD by training on large sets of fundus or OCT images. The use of AI algorithms is a powerful tool and a method of obtaining a cost-effective, simple, and fast diagnosis of AMD. METHODS MEDLINE, EMBASE, CINAHL, and ProQuest Dissertations and Theses were searched systematically and thoroughly. Conferences held through the Association for Research in Vision and Ophthalmology, the American Academy of Ophthalmology, and the Canadian Society of Ophthalmology were also searched. Studies were screened using Covidence software, and data on sensitivity, specificity and area under the curve were extracted from the included studies. STATA 15.0 was used to conduct the meta-analysis. RESULTS Our search strategy identified 307 records from online databases and 174 records from gray literature. A total of 13 records, comprising 64,798 subjects (and 612,429 images), were used for the quantitative analysis. The pooled estimate for sensitivity was 0.918 [95% CI: 0.678, 0.98] and specificity was 0.888 [95% CI: 0.578, 0.98] for AMD screening using machine learning classifiers. The odds of a positive screening test were 89.74 [95% CI: 3.05-2641.59] times higher in AMD cases than in non-AMD cases. The positive likelihood ratio was 8.22 [95% CI: 1.52-44.48] and the negative likelihood ratio was 0.09 [95% CI: 0.02-0.52]. CONCLUSION The included studies show promising results for the diagnostic accuracy of machine learning classifiers for AMD and their implementation in clinical settings.
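A quick check of how the likelihood ratios and diagnostic odds ratio relate to the pooled sensitivity and specificity quoted above, using the standard identities LR+ = sens/(1-spec), LR- = (1-sens)/spec, and DOR = LR+/LR-; the point estimates are reproduced only approximately, with small differences expected from the bivariate pooling model actually used in the meta-analysis.

```python
# Pooled point estimates reported in the abstract above.
sensitivity = 0.918
specificity = 0.888

# Standard diagnostic-accuracy identities.
lr_pos = sensitivity / (1 - specificity)          # positive likelihood ratio
lr_neg = (1 - sensitivity) / specificity          # negative likelihood ratio
dor = lr_pos / lr_neg                             # diagnostic odds ratio

print(f"LR+ ~ {lr_pos:.2f}")   # ~8.2  (abstract reports 8.22)
print(f"LR- ~ {lr_neg:.2f}")   # ~0.09 (abstract reports 0.09)
print(f"DOR ~ {dor:.1f}")      # ~89   (abstract reports 89.74)
```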
Collapse
|
26
|
Detecting glaucoma with only OCT: Implications for the clinic, research, screening, and AI development. Prog Retin Eye Res 2022; 90:101052. [PMID: 35216894 DOI: 10.1016/j.preteyeres.2022.101052] [Citation(s) in RCA: 43] [Impact Index Per Article: 14.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2021] [Revised: 01/21/2022] [Accepted: 02/01/2022] [Indexed: 12/25/2022]
Abstract
A method for detecting glaucoma based only on optical coherence tomography (OCT) is of potential value for routine clinical decisions, for inclusion criteria for research studies and trials, for large-scale clinical screening, as well as for the development of artificial intelligence (AI) decision models. Recent work suggests that the OCT probability (p-) maps, also known as deviation maps, can play a key role in an OCT-based method. However, artifacts seen on the p-maps of healthy control eyes can resemble patterns of damage due to glaucoma. We document in section 2 that these glaucoma-like artifacts are relatively common and are probably due to normal anatomical variations in healthy eyes. We also introduce a simple anatomical artifact model based upon known anatomical variations to help distinguish these artifacts from actual glaucomatous damage. In section 3, we apply this model to an OCT-based method for detecting glaucoma that starts with an examination of the retinal nerve fiber layer (RNFL) p-map. While this method requires a judgment by the clinician, sections 4 and 5 describe automated methods that do not. In section 4, the simple model helps explain the relatively poor performance of commonly employed summary statistics, including circumpapillary RNFL thickness. In section 5, the model helps account for the success of an AI deep learning model, which in turn validates our focus on the RNFL p-map. Finally, in section 6 we consider the implications of OCT-based methods for the clinic, research, screening, and the development of AI models.
Collapse
|
27
|
Gour N, Tanveer M, Khanna P. Challenges for ocular disease identification in the era of artificial intelligence. Neural Comput Appl 2022. [DOI: 10.1007/s00521-021-06770-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|
28
|
Kozioł M, Nowak MS, Koń B, Udziela M, Szaflik JP. Regional analysis of diabetic retinopathy and co-existing social and demographic factors in the overall population of Poland. Arch Med Sci 2022; 18:320-327. [PMID: 35316912 PMCID: PMC8924831 DOI: 10.5114/aoms/131264] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/20/2020] [Accepted: 12/07/2020] [Indexed: 01/23/2023] Open
Abstract
INTRODUCTION The aim of our study was to analyse the regional differences in diabetic retinopathy (DR) prevalence and its co-existing social and demographic factors in the overall population of Poland in the year 2017. MATERIAL AND METHODS Data from all levels of healthcare services at public and private institutions recorded in the National Health Fund database were evaluated. International Classification of Diseases codes were used to identify patients with type 1 and type 2 diabetes mellitus (DM) and with DR. Moran's I statistic and a spatial autoregressive (SAR) model allowed us to understand the distribution of DR prevalence and its possible association with environmental and demographic exposures. RESULTS In total, 310,815 individuals with diabetic retinopathy (DR) were diagnosed in the year 2017 in Poland. Of them, 174,384 (56.11%) were women, 221,144 (71.15%) lived in urban areas, and 40,231 (12.94%) and 270,584 (87.06%) had type 1 and type 2 DM, respectively. The analysis of the SAR model showed that the significant factors for the occurrence of DR in particular counties were a higher level of average income and a higher number of ophthalmologic consultations per 10,000 adults. CONCLUSIONS The analyses of social, demographic, and systemic factors co-existing with DR revealed that the level of income and access to ophthalmologic and diabetic services are crucial determinants of DR prevalence in Poland.
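A bare-bones illustration of the global Moran's I statistic used above, computed directly from its definition with a row-standardized spatial weights matrix; the county-level prevalence values and adjacency matrix here are tiny made-up examples, not the National Health Fund data.

```python
import numpy as np

def morans_i(x, w):
    """Global Moran's I for values x and spatial weights matrix w.

    I = (n / S0) * sum_ij w_ij (x_i - xbar)(x_j - xbar) / sum_i (x_i - xbar)^2
    """
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    n = len(x)
    z = x - x.mean()
    s0 = w.sum()
    return (n / s0) * (z @ w @ z) / (z @ z)

# Made-up example: DR prevalence (%) in 4 neighbouring counties and a
# row-standardized adjacency matrix (each row sums to 1).
prevalence = [1.2, 1.4, 0.8, 0.7]
weights = np.array([[0, 1, 0, 0],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [0, 0, 1, 0]], dtype=float)
weights /= weights.sum(axis=1, keepdims=True)

print(f"Moran's I = {morans_i(prevalence, weights):.3f}")
```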
Collapse
Affiliation(s)
| | - Michał S. Nowak
- Provisus Eye Clinic, Czestochowa, Poland
- Saint Family Hospital Medical Center, Lodz, Poland
| | - Beata Koń
- Collegium of Economic Analysis, SGH Warsaw School of Economics, Warsaw, Poland
| | - Monika Udziela
- Department of Ophthalmology, Medical University of Warsaw, Public Ophthalmic Clinical Hospital (SPKSO), Warsaw, Poland
| | - Jacek P. Szaflik
- Department of Ophthalmology, Medical University of Warsaw, Public Ophthalmic Clinical Hospital (SPKSO), Warsaw, Poland
| |
Collapse
|
29
|
Chattopadhyay AK, Chattopadhyay S. VIRDOCD: A VIRtual DOCtor to predict dengue fatality. EXPERT SYSTEMS 2022; 39. [DOI: 10.1111/exsy.12796] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/11/2021] [Accepted: 08/06/2021] [Indexed: 02/05/2023]
Abstract
Clinicians make routine diagnoses by scrutinizing patients' medical signs and symptoms, a skill popularly referred to as the 'Clinical Eye'. This skill evolves through trial-and-error and improves with time. The success of the therapeutic regime relies largely on the accuracy of interpretation of such sign-symptoms, by analysing which a clinician assesses the severity of the illness. The present study is an attempt to propose a complementary medical front by mathematically modelling the 'Clinical Eye' of a VIRtual DOCtor, using statistical and machine intelligence tools (SMI), to analyse dengue epidemic infected patients (100 case studies with 11 weighted sign-symptoms). The SMI in VIRDOCD reads medical data and translates these into a vector comprising multiple linear regression (MLR) coefficients to predict infection severity grades of dengue patients that clone the clinician's experience-based assessment. With risk managed through ANOVA, the dengue severity grade prediction accuracy of VIRDOCD is found to be higher (ca 75%) than that of conventional clinical practice (ca 71.4%, mean accuracy profile assessed by a team of 10 senior consultants). Free of human errors and capable of deciphering even minute differences between almost identical symptoms (to the Clinical Eye), VIRDOCD is uniquely individualized in its decision-making ability. The algorithm has been validated against Random Forest classification (RF, ca 63%), another regression-based classifier similar to MLR that can be trained through supervised learning. We find that MLR-based VIRDOCD is superior to RF in predicting the grade of dengue morbidity. VIRDOCD can be further extended to analyse other epidemic infections, such as COVID-19.
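A toy sketch of the kind of multiple-linear-regression severity-grade predictor the abstract describes, using scikit-learn; the sign-symptom matrix, weights, and grade cut-offs are invented for illustration and do not reflect the VIRDOCD coefficients.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Invented example: 100 patients x 11 weighted sign-symptom scores, with an
# integer severity grade (1-4) as the target, standing in for real records.
rng = np.random.default_rng(42)
X = rng.integers(0, 4, size=(100, 11)).astype(float)
true_w = rng.uniform(0.05, 0.3, size=11)
grade = np.clip(np.rint(X @ true_w), 1, 4)

X_train, X_test = X[:80], X[80:]
y_train, y_test = grade[:80], grade[80:]

# Multiple linear regression; predicted values are rounded to the nearest
# severity grade, mimicking an MLR-based grade classifier.
mlr = LinearRegression().fit(X_train, y_train)
pred_grade = np.clip(np.rint(mlr.predict(X_test)), 1, 4)

accuracy = np.mean(pred_grade == y_test)
print(f"Grade prediction accuracy on held-out patients: {accuracy:.2%}")
```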
Collapse
|
30
|
Fully-automated atrophy segmentation in dry age-related macular degeneration in optical coherence tomography. Sci Rep 2021; 11:21893. [PMID: 34751189 PMCID: PMC8575929 DOI: 10.1038/s41598-021-01227-0] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2021] [Accepted: 09/23/2021] [Indexed: 11/09/2022] Open
Abstract
Age-related macular degeneration (AMD) is a progressive retinal disease causing vision loss. A more detailed characterization of its atrophic form became possible thanks to the introduction of optical coherence tomography (OCT). However, manual atrophy quantification in 3D retinal scans is a tedious task and prevents taking full advantage of the accurate retinal depiction. In this study, we developed a fully automated algorithm segmenting Retinal Pigment Epithelial and Outer Retinal Atrophy (RORA) in dry AMD on macular OCT. 62 SD-OCT scans from eyes with atrophic AMD (57 patients) were collected and split into training and test sets. The training set was used to develop a convolutional neural network (CNN). The performance of the algorithm was established by cross-validation and by comparison to the test set, with ground truth annotated by two graders. Additionally, the effect of using retinal layer segmentation during training was investigated. The algorithm achieved mean Dice scores of 0.881 and 0.844, sensitivity of 0.850 and 0.915, and precision of 0.928 and 0.799 in comparison with Expert 1 and Expert 2, respectively. Using retinal layer segmentation improved the model performance. The proposed model identified RORA with performance matching human experts. It has the potential to rapidly identify atrophy with high consistency.
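For reference, the Dice score reported above measures the overlap between a predicted atrophy mask and a grader's annotation: Dice = 2|A∩B| / (|A| + |B|). A short sketch on made-up binary masks.

```python
import numpy as np

def dice_score(pred, truth):
    """Dice coefficient between two binary masks (1 = atrophy, 0 = background)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Made-up 2D masks standing in for en-face RORA segmentations.
rng = np.random.default_rng(1)
truth = rng.random((64, 64)) > 0.7
pred = np.logical_xor(truth, rng.random((64, 64)) > 0.95)  # truth with some noise

print(f"Dice = {dice_score(pred, truth):.3f}")
```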
Collapse
|
31
|
Xiao J, Luo J, Ly-Mapes O, Wu TT, Dye T, Al Jallad N, Hao P, Ruan J, Bullock S, Fiscella K. Assessing a Smartphone App (AICaries) That Uses Artificial Intelligence to Detect Dental Caries in Children and Provides Interactive Oral Health Education: Protocol for a Design and Usability Testing Study. JMIR Res Protoc 2021; 10:e32921. [PMID: 34529582 PMCID: PMC8571694 DOI: 10.2196/32921] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2021] [Accepted: 09/14/2021] [Indexed: 01/09/2023] Open
Abstract
BACKGROUND Early childhood caries (ECC) is the most common chronic childhood disease, with nearly 1.8 billion new cases per year worldwide. ECC afflicts approximately 55% of low-income and minority US preschool children, resulting in harmful short- and long-term effects on health and quality of life. Clinical evidence shows that caries is reversible if detected and addressed in its early stages. However, many low-income US children often have poor access to pediatric dental services. In this underserved group, dental caries is often diagnosed at a late stage when extensive restorative treatment is needed. With more than 85% of lower-income Americans owning a smartphone, mobile health tools such as smartphone apps hold promise in achieving patient-driven early detection and risk control of ECC. OBJECTIVE This study aims to use a community-based participatory research strategy to refine and test the usability of an artificial intelligence-powered smartphone app, AICaries, to be used by children's parents/caregivers for dental caries detection in their children. METHODS Our previous work has led to the prototype of AICaries, which offers artificial intelligence-powered caries detection using photos of children's teeth taken by the parents' smartphones, interactive caries risk assessment, and personalized education on reducing children's ECC risk. This AICaries study will use a two-step qualitative study design to assess the feedback and usability of the app component and app flow, and whether parents can take photos of children's teeth on their own. Specifically, in step 1, we will conduct individual usability tests among 10 pairs of end users (parents with young children) to facilitate app module modification and fine-tuning using think aloud and instant data analysis strategies. In step 2, we will conduct unmoderated field testing for app feasibility and acceptability among 32 pairs of parents with their young children to assess the usability and acceptability of AICaries, including assessing the number/quality of teeth images taken by the parents for their children and parents' satisfaction. RESULTS The study is funded by the National Institute of Dental and Craniofacial Research, United States. This study received institutional review board approval and launched in August 2021. Data collection and analysis are expected to conclude by March 2022 and June 2022, respectively. CONCLUSIONS Using AICaries, parents can use their regular smartphones to take photos of their children's teeth and detect ECC aided by AICaries so that they can actively seek treatment for their children at an early and reversible stage of ECC. Using AICaries, parents can also obtain essential knowledge on reducing their children's caries risk. Data from this study will support a future clinical trial that evaluates the real-world impact of using this smartphone app on early detection and prevention of ECC among low-income children. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID) PRR1-10.2196/32921.
Collapse
Affiliation(s)
- Jin Xiao
- Eastman Institute for Oral Health, University of Rochester, Rochester, NY, United States
| | - Jiebo Luo
- Computer Science, University of Rochester, Rochester, NY, United States
| | - Oriana Ly-Mapes
- Eastman Institute for Oral Health, University of Rochester, Rochester, NY, United States
| | - Tong Tong Wu
- Department of Biostatistics and Computational Biology, University of Rochester Medical Center, Rochester, NY, United States
| | - Timothy Dye
- Department of Obstetrics and Gynecology, University of Rochester Medical Center, Rochester, NY, United States
| | - Nisreen Al Jallad
- Eastman Institute for Oral Health, University of Rochester, Rochester, NY, United States
| | - Peirong Hao
- Computer Science, University of Rochester, Rochester, NY, United States
| | - Jinlong Ruan
- Computer Science, University of Rochester, Rochester, NY, United States
| | | | - Kevin Fiscella
- Department of Family Medicine, University of Rochester Medical Center, Rochester, NY, United States
| |
Collapse
|
32
|
Ajitha S, Akkara JD, Judy MV. Identification of glaucoma from fundus images using deep learning techniques. Indian J Ophthalmol 2021; 69:2702-2709. [PMID: 34571619 PMCID: PMC8597466 DOI: 10.4103/ijo.ijo_92_21] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/04/2023] Open
Abstract
Purpose Glaucoma is one of the preeminent causes of incurable visual disability and blindness across the world, owing to elevated intraocular pressure within the eye. Accurate and timely diagnosis is essential for preventing visual disability. Manual detection of glaucoma is a challenging task that needs expertise and years of experience. Methods In this paper, we propose a powerful and accurate algorithm using a convolutional neural network (CNN) for the automatic diagnosis of glaucoma. In this work, 1113 fundus images, consisting of 660 normal and 453 glaucomatous images from four databases, were used for the diagnosis of glaucoma. A 13-layer CNN was trained on this dataset to extract vital features, and these features were classified as either glaucomatous or normal during testing. The proposed algorithm was implemented in Google Colab, which made the task straightforward without spending hours installing the environment and supporting libraries. To evaluate the effectiveness of our algorithm, the dataset was divided into 70% for training, 20% for validation, and the remaining 10% for testing. The training images were augmented to 12,012 fundus images. Results Our model with the SoftMax classifier achieved an accuracy of 93.86%, sensitivity of 85.42%, specificity of 100%, and precision of 100%. In contrast, the model with the SVM classifier achieved accuracy, sensitivity, specificity, and precision of 95.61%, 89.58%, 100%, and 100%, respectively. Conclusion These results demonstrate the ability of the deep learning model to identify glaucoma from fundus images and suggest that the proposed system can help ophthalmologists achieve a fast, accurate, and reliable diagnosis of glaucoma.
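A compact illustration of a CNN fundus-image classifier of the kind described above, written with tf.keras; the depth, filter counts, and input size are arbitrary (far shallower than the paper's 13-layer network), and no real fundus data are loaded.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 2  # normal vs. glaucomatous

def build_small_cnn(input_shape=(224, 224, 3)):
    """A small illustrative CNN ending in a softmax over the two classes."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_small_cnn()
model.summary()
# Training would then follow a 70/20/10 split as described in the abstract, e.g.:
# model.fit(train_images, train_labels, validation_data=(val_images, val_labels))
```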
Collapse
Affiliation(s)
- S Ajitha
- Department of Computer Applications, Cochin University of Science and Technology, Kochi, Kerala, India
| | - John D Akkara
- Ophthalmology Department, Sri Ramachandra Institute of Higher Education and Research, Chennai, Tamil Nadu, India
| | - M V Judy
- Department of Computer Applications, Cochin University of Science and Technology, Kochi, Kerala, India
| |
Collapse
|
33
|
Accuracy of Deep Learning Algorithms for the Diagnosis of Retinopathy of Prematurity by Fundus Images: A Systematic Review and Meta-Analysis. J Ophthalmol 2021; 2021:8883946. [PMID: 34394982 PMCID: PMC8363465 DOI: 10.1155/2021/8883946] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2020] [Revised: 06/30/2021] [Accepted: 07/27/2021] [Indexed: 12/14/2022] Open
Abstract
Background Retinopathy of prematurity (ROP) occurs in preterm infants and may contribute to blindness. Deep learning (DL) models have been used for ophthalmologic diagnoses. We performed a systematic review and meta-analysis of published evidence to summarize and evaluate the diagnostic accuracy of DL algorithms for ROP by fundus images. Methods We searched PubMed, EMBASE, Web of Science, and Institute of Electrical and Electronics Engineers Xplore Digital Library on June 13, 2021, for studies using a DL algorithm to distinguish individuals with ROP of different grades, which provided accuracy measurements. The pooled sensitivity and specificity values and the area under the curve (AUC) of summary receiver operating characteristics curves (SROC) summarized overall test performance. The performances in validation and test datasets were assessed together and separately. Subgroup analyses were conducted between the definition and grades of ROP. Threshold and nonthreshold effects were tested to assess biases and evaluate accuracy factors associated with DL models. Results Nine studies with fifteen classifiers were included in our meta-analysis. A total of 521,586 objects were applied to DL models. For combined validation and test datasets in each study, the pooled sensitivity and specificity were 0.953 (95% confidence intervals (CI): 0.946-0.959) and 0.975 (0.973-0.977), respectively, and the AUC was 0.984 (0.978-0.989). For the validation dataset and test dataset, the AUC was 0.977 (0.968-0.986) and 0.987 (0.982-0.992), respectively. In the subgroup analysis of ROP vs. normal and differentiation of two ROP grades, the AUC was 0.990 (0.944-0.994) and 0.982 (0.964-0.999), respectively. Conclusions Our study shows that DL models can play an essential role in detecting and grading ROP with high sensitivity, specificity, and repeatability. The application of a DL-based automated system may improve ROP screening and diagnosis in the future.
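As a reminder of what the pooled metrics summarize at the level of a single classifier, a brief sketch computing sensitivity, specificity, and AUC from predicted probabilities with scikit-learn; the labels and scores below are synthetic.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

# Synthetic example: true ROP labels (1 = ROP) and model probabilities.
rng = np.random.default_rng(7)
y_true = rng.integers(0, 2, size=200)
y_prob = np.clip(0.35 * y_true + rng.normal(0.4, 0.2, size=200), 0, 1)
y_pred = (y_prob >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
auc = roc_auc_score(y_true, y_prob)

print(f"Sensitivity = {sensitivity:.3f}, "
      f"Specificity = {specificity:.3f}, AUC = {auc:.3f}")
```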
Collapse
|
34
|
Abstract
As resources in the healthcare environment continue to wane, leaders are seeking ways to continue to provide quality care bounded by the constraints of a reduced budget. This manuscript synthesizes the experience from a number of institutions to provide the healthcare leadership with an understanding of the value of an enterprise imaging program. The value of such a program extends across the entire health system. It leads to operational efficiencies through infrastructure and application consolidation and the creation of focused support capabilities with increased depth of skill. An enterprise imaging program provides a centralized foundation for all phases of image management from every image-producing specialty. Through centralization, standardized image exchange functions can be provided to all image producers. Telehealth services can be more tightly integrated into the electronic medical record. Mobile platforms can be utilized for image viewing and sharing by patients and providers. Mobile tools can also be utilized for image upload directly into the centralized image repository. Governance and data standards are more easily distributed, setting the stage for artificial intelligence and data analytics. Increased exposure to all image producers provides opportunities for cybersecurity optimization and increased awareness.
Collapse
|
35
|
Gupta K, Reddy S. Heart, Eye, and Artificial Intelligence: A Review. Cardiol Res 2021; 12:132-139. [PMID: 34046105 PMCID: PMC8139752 DOI: 10.14740/cr1179] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2020] [Accepted: 11/12/2020] [Indexed: 12/30/2022] Open
Abstract
Heart disease continues to be the leading cause of death in the USA. Deep learning-based artificial intelligence (AI) methods have become increasingly common in studying the various factors involved in cardiovascular disease. The usage of retinal scanning techniques to diagnose retinal diseases, such as diabetic retinopathy, age-related macular degeneration, glaucoma and others, using fundus photographs and optical coherence tomography angiography (OCTA) has been extensively documented. Researchers are now looking to combine the power of AI with the non-invasive ease of retinal scanning to examine the workings of the heart and predict changes in the macrovasculature based on microvascular features and function. In this review, we summarize the current state of the field in using retinal imaging to diagnose cardiovascular issues and other diseases.
Collapse
Affiliation(s)
- Kush Gupta
- Kasturba Medical College, Mangalore, India
| | | |
Collapse
|
36
|
Dutt S, Sivaraman A, Savoy F, Rajalakshmi R. Insights into the growing popularity of artificial intelligence in ophthalmology. Indian J Ophthalmol 2021; 68:1339-1346. [PMID: 32587159 PMCID: PMC7574057 DOI: 10.4103/ijo.ijo_1754_19] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/04/2023] Open
Abstract
Artificial intelligence (AI) in healthcare is the use of computer algorithms to analyze complex medical data, detect associations and provide diagnostic support outputs. AI and deep learning (DL) find obvious applications in fields like ophthalmology, wherein a huge amount of image-based data needs to be analyzed and the outcomes related to image recognition are reasonably well defined. AI and DL have found important roles in ophthalmology in the early screening and detection of conditions such as diabetic retinopathy (DR), age-related macular degeneration (ARMD), retinopathy of prematurity (ROP), glaucoma, and other ocular disorders, making successful inroads in early screening and diagnosis and appearing promising, with the advantages of high screening accuracy, consistency, and scalability. AI algorithms need equally skilled manpower: trained optometrists/ophthalmologists (annotators) must provide accurate ground truth for training the images. The basis of diagnoses made by AI algorithms is mechanical, and some amount of human intervention is necessary for further interpretation. This review was conducted after tracing the history of AI in ophthalmology across multiple research databases and aims to summarise the journey of AI in ophthalmology so far, with a close observation of the most crucial studies conducted. The article further aims to highlight the potential impact of AI in ophthalmology, its pitfalls, and how to use it optimally for the maximum benefit of ophthalmologists, healthcare systems and patients alike.
Collapse
Affiliation(s)
- Sreetama Dutt
- Department of Research & Development, Remidio Innovative Solutions, Bengaluru, Karnataka, India
| | - Anand Sivaraman
- Department of Research & Development, Remidio Innovative Solutions, Bengaluru, Karnataka, India
| | - Florian Savoy
- Department of Artificial Intelligence, Medios Technologies, Singapore
| | - Ramachandran Rajalakshmi
- Department of Ophthalmology, Dr. Mohan's Diabetes Specialities Centre Madras Diabetes Research Foundation, Chennai, Tamil Nadu, India
| |
Collapse
|
37
|
Pathological Myopia Image Recognition Strategy Based on Data Augmentation and Model Fusion. JOURNAL OF HEALTHCARE ENGINEERING 2021; 2021:5549779. [PMID: 34035883 PMCID: PMC8118733 DOI: 10.1155/2021/5549779] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/24/2021] [Revised: 04/02/2021] [Accepted: 04/27/2021] [Indexed: 11/17/2022]
Abstract
The automatic diagnosis of various retinal diseases based on fundus images is important in supporting clinical decision-making. Convolutional neural networks (CNNs) have achieved remarkable results in such tasks; however, their high expressive capacity can lead to overfitting. Therefore, data augmentation (DA) techniques have been proposed to prevent overfitting while enriching datasets. Recent CNN architectures with more parameters render traditional DA techniques insufficient. In this study, we proposed a new DA strategy based on multimodal fusion (DAMF), which integrates a standard DA method, a data-disrupting method, a data-mixing method, and an auto-adjustment method to enhance the images in the training dataset and create new training images. In addition, we fused the results of the classifiers by voting on the basis of DAMF, which further improved the generalization ability of the model. The experimental results showed that the optimal DA mode could be matched to the image dataset through our DA strategy. We evaluated DAMF on the iChallenge-PM dataset and compared training results between 12 DAMF-processed datasets and the original training dataset. Compared with the original dataset, the optimal DAMF achieved an accuracy increase of 2.85% on iChallenge-PM.
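A small sketch of the two generic ingredients mentioned above, standard image augmentation and majority-vote fusion of several classifiers' predictions, using tf.keras's ImageDataGenerator and NumPy; the augmentation settings and the vote example are illustrative and are not the DAMF pipeline itself.

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Standard augmentation: random rotations, shifts, zooms and flips applied
# on the fly to training fundus images (here a random stand-in batch).
datagen = ImageDataGenerator(rotation_range=20,
                             width_shift_range=0.1,
                             height_shift_range=0.1,
                             zoom_range=0.1,
                             horizontal_flip=True)
images = np.random.rand(8, 224, 224, 3).astype("float32")
labels = np.random.randint(0, 2, size=8)
augmented_batch, _ = next(datagen.flow(images, labels, batch_size=8))

# Majority-vote fusion: each row is one classifier's predicted labels for
# the same test images; the fused prediction is the most common label.
predictions = np.array([[1, 0, 1, 1],   # classifier trained on DA variant A
                        [1, 0, 0, 1],   # classifier trained on DA variant B
                        [1, 1, 1, 1]])  # classifier trained on DA variant C
fused = np.array([np.bincount(col).argmax() for col in predictions.T])
print("Fused predictions:", fused)      # -> [1 0 1 1]
```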
Collapse
|
38
|
Chan EJJ, Najjar RP, Tang Z, Milea D. Deep Learning for Retinal Image Quality Assessment of Optic Nerve Head Disorders. Asia Pac J Ophthalmol (Phila) 2021; 10:282-288. [PMID: 34383719 DOI: 10.1097/apo.0000000000000404] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022] Open
Abstract
ABSTRACT Deep learning (DL)-based retinal image quality assessment (RIQA) algorithms have been gaining popularity, as a solution to reduce the frequency of diagnostically unusable images. Most existing RIQA tools target retinal conditions, with a dearth of studies looking into RIQA models for optic nerve head (ONH) disorders. The recent success of DL systems in detecting ONH abnormalities on color fundus images prompts the development of tailored RIQA algorithms for these specific conditions. In this review, we discuss recent progress in DL-based RIQA models in general and the need for RIQA models tailored for ONH disorders. Finally, we propose suggestions for such models in the future.
Collapse
Affiliation(s)
| | - Raymond P Najjar
- Duke-NUS School of Medicine, Singapore
- Visual Neuroscience Group, Singapore Eye Research Institute, Singapore
| | - Zhiqun Tang
- Visual Neuroscience Group, Singapore Eye Research Institute, Singapore
| | - Dan Milea
- Duke-NUS School of Medicine, Singapore
- Visual Neuroscience Group, Singapore Eye Research Institute, Singapore
- Ophthalmology Department, Singapore National Eye Centre, Singapore
- Rigshospitalet, Copenhagen University, Denmark
| |
Collapse
|
39
|
Tseng RMWW, Gunasekeran DV, Tan SSH, Rim TH, Lum E, Tan GSW, Wong TY, Tham YC. Considerations for Artificial Intelligence Real-World Implementation in Ophthalmology: Providers' and Patients' Perspectives. Asia Pac J Ophthalmol (Phila) 2021; 10:299-306. [PMID: 34383721 DOI: 10.1097/apo.0000000000000400] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022] Open
Abstract
ABSTRACT Artificial Intelligence (AI), in particular deep learning, has made waves in the health care industry, with several prominent examples shown in ophthalmology. Despite the burgeoning reports on the development of new AI algorithms for detection and management of various eye diseases, few have reached the stage of regulatory approval for real-world implementation. To better enable real-world translation of AI systems, it is important to understand the demands, needs, and concerns of both health care professionals and patients, as providers and recipients of clinical care are impacted by these solutions. This review outlines the advantages and concerns of incorporating AI in ophthalmology care delivery, from both the providers' and patients' perspectives, and the key enablers for seamless transition to real-world implementation.
Collapse
Affiliation(s)
| | - Dinesh Visva Gunasekeran
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Yong Loo Lin School of Medicine, National University of Singapore (NUS), Singapore
| | | | - Tyler Hyungtaek Rim
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Duke-NUS Medical School, Singapore
| | | | - Gavin S W Tan
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Duke-NUS Medical School, Singapore
| | - Tien Yin Wong
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Duke-NUS Medical School, Singapore
| | - Yih-Chung Tham
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Duke-NUS Medical School, Singapore
| |
Collapse
|
40
|
Park S, Kim H, Kim L, Kim JK, Lee IS, Ryu IH, Kim Y. Artificial intelligence-based nomogram for small-incision lenticule extraction. Biomed Eng Online 2021; 20:38. [PMID: 33892729 PMCID: PMC8063457 DOI: 10.1186/s12938-021-00867-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2020] [Accepted: 03/12/2021] [Indexed: 11/26/2022] Open
Abstract
BACKGROUND Small-incision lenticule extraction (SMILE) is a surgical procedure for the refractive correction of myopia and astigmatism, which has been reported to be safe and effective. However, over- and under-correction still occur after SMILE, and nomograms are needed to achieve optimal refractive results. Ophthalmologists derive nomograms by analyzing preoperative refractive data with the individual knowledge they accumulate over years of experience. Our aim was to accurately predict the nomograms of sphere, cylinder, and astigmatism axis for SMILE by applying machine learning algorithms. METHODS We retrospectively analyzed the data of 3,034 eyes, composed of four categorical features and 28 numerical features selected from 46 features. Multiple linear regression, decision tree, AdaBoost, XGBoost, and multi-layer perceptron models were employed in developing the nomogram models for sphere, cylinder, and astigmatism axis. Root-mean-square error (RMSE) and accuracy scores were evaluated and compared, and the feature importance of the best models was then calculated. RESULTS AdaBoost achieved the highest performance, with RMSE of 0.1378, 0.1166, and 5.17 for the sphere, cylinder, and astigmatism axis, respectively. The accuracies, defined as the proportion of predictions with error below 0.25 D for the sphere and cylinder nomograms and below 25° for the astigmatism axis nomograms, were 0.969, 0.976, and 0.994, respectively. The feature with the highest importance was the preoperative manifest refraction for all nomograms; for the sphere and cylinder nomograms, the next most important feature was the surgeon. CONCLUSIONS Among the diverse machine learning algorithms, AdaBoost exhibited the highest performance in predicting the sphere, cylinder, and astigmatism axis nomograms for SMILE. The study proved the feasibility of applying artificial intelligence (AI) to nomograms for SMILE, which may enhance the quality of SMILE surgical results by assisting with nomogram planning and preventing nomogram errors.
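A brief sketch of an AdaBoost regression nomogram model of the kind compared above, with RMSE and a within-0.25 D "accuracy" computed in the same spirit; the synthetic features and target stand in for the preoperative refractive data and do not reflect the authors' dataset or tuning.

```python
import numpy as np
from sklearn.ensemble import AdaBoostRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic stand-in: 3,034 eyes x 32 preoperative features, with a sphere
# nomogram value (in diopters) as the regression target.
rng = np.random.default_rng(0)
X = rng.normal(size=(3034, 32))
y = 0.5 * X[:, 0] - 0.3 * X[:, 1] + 0.1 * rng.normal(size=3034)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = AdaBoostRegressor(n_estimators=200, learning_rate=0.5, random_state=0)
model.fit(X_train, y_train)
pred = model.predict(X_test)

rmse = np.sqrt(mean_squared_error(y_test, pred))
accuracy_025 = np.mean(np.abs(pred - y_test) < 0.25)   # error below 0.25 D
print(f"RMSE = {rmse:.4f}, within-0.25 D accuracy = {accuracy_025:.3f}")
# model.feature_importances_ would give the feature-importance ranking.
```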
Collapse
Affiliation(s)
- Seungbin Park
- Center for Bionics, Korea Institute of Science and Technology, Seoul, Korea
| | - Hannah Kim
- Center for Bionics, Korea Institute of Science and Technology, Seoul, Korea
- Division of Bio-Medical Science &Technology, KIST School, Korea University of Science and Technology, Seoul, Korea
| | - Laehyun Kim
- Center for Bionics, Korea Institute of Science and Technology, Seoul, Korea
| | | | | | | | - Youngjun Kim
- Center for Bionics, Korea Institute of Science and Technology, Seoul, Korea.
- Division of Bio-Medical Science &Technology, KIST School, Korea University of Science and Technology, Seoul, Korea.
| |
Collapse
|
41
|
Digital Image Processing and Development of Machine Learning Models for the Discrimination of Corneal Pathology: An Experimental Model. PHOTONICS 2021. [DOI: 10.3390/photonics8040118] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
Abstract
Machine learning (ML) has an impressive capacity to learn from and analyze a large volume of data. This study aimed to train different algorithms to discriminate between healthy and pathologic corneal images by evaluating digitally processed spectral-domain optical coherence tomography (SD-OCT) corneal images. A set of 22 SD-OCT images belonging to a random set of corneal pathologies was compared to 71 healthy corneas (control group). A binary classification method was applied, in which three ML approaches were explored. Once all images were analyzed, representative areas from every digital image were also extracted, processed and analyzed for a statistical feature comparison between healthy and pathologic corneas. The best performance was obtained from the transfer learning-support vector machine (TL-SVM) method (AUC = 0.94, SPE 88%, SEN 100%) and the transfer learning-random forest (TL-RF) method (AUC = 0.92, SPE 84%, SEN 100%), followed by a convolutional neural network (CNN) (AUC = 0.84, SPE 77%, SEN 91%) and random forest (AUC = 0.77, SPE 60%, SEN 95%). The highest diagnostic accuracy in classifying corneal images was achieved with the TL-SVM and TL-RF models; in direct image classification, the CNN was a strong predictor. This pilot experimental study developed a systematic, mechanized system to discern pathologic from healthy corneas using a small sample.
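A minimal sketch of the transfer-learning-plus-SVM idea (a pretrained CNN used as a fixed feature extractor, with an SVM classifier on top), using a tf.keras ImageNet backbone and scikit-learn; the backbone choice, image size, and random images are assumptions for illustration and do not replicate the paper's pipeline.

```python
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Pretrained ImageNet backbone used as a frozen feature extractor
# (global average pooling gives one 2048-dim vector per image).
backbone = tf.keras.applications.ResNet50(weights="imagenet",
                                          include_top=False,
                                          pooling="avg")
preprocess = tf.keras.applications.resnet50.preprocess_input

# Random stand-in "corneal OCT" images and labels (1 = pathologic).
images = np.random.rand(93, 224, 224, 3) * 255.0
labels = np.concatenate([np.ones(22, dtype=int), np.zeros(71, dtype=int)])

features = backbone.predict(preprocess(images), verbose=0)

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.3, stratify=labels, random_state=0)

# SVM trained on the deep features (the "TL-SVM" combination).
svm = SVC(kernel="rbf", probability=True, random_state=0)
svm.fit(X_train, y_train)
auc = roc_auc_score(y_test, svm.predict_proba(X_test)[:, 1])
print(f"AUC on the held-out split: {auc:.2f}")
```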
Collapse
|
42
|
Perepelkina T, Fulton AB. Artificial Intelligence (AI) Applications for Age-Related Macular Degeneration (AMD) and Other Retinal Dystrophies. Semin Ophthalmol 2021; 36:304-309. [PMID: 33764255 DOI: 10.1080/08820538.2021.1896756] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/30/2022]
Abstract
Artificial intelligence (AI), with its subdivisions (machine and deep learning), is a new branch of computer science that has shown impressive results across a variety of domains. The applications of AI to medicine and biology are being widely investigated. Medical specialties that rely heavily on images, including radiology, dermatology, oncology and ophthalmology, were the first to explore AI approaches in analysis and diagnosis. Applications of AI in ophthalmology have concentrated on diseases with high prevalence, such as diabetic retinopathy, retinopathy of prematurity, age-related macular degeneration (AMD), and glaucoma. Here we provide an overview of AI applications for diagnosis, classification, and clinical management of AMD and other macular dystrophies.
Collapse
Affiliation(s)
- Tatiana Perepelkina
- Department of Ophthalmology, Boston Children's Hospital, Harvard Medical School, Boston, United States
| | - Anne B Fulton
- Department of Ophthalmology, Boston Children's Hospital, Harvard Medical School, Boston, United States
| |
Collapse
|
43
|
Oke I, VanderVeen D. Machine Learning Applications in Pediatric Ophthalmology. Semin Ophthalmol 2021; 36:210-217. [PMID: 33641598 DOI: 10.1080/08820538.2021.1890151] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
Purpose: To describe emerging applications of machine learning (ML) in pediatric ophthalmology with an emphasis on the diagnosis and treatment of disorders affecting visual development. Methods: Literature review of studies applying ML algorithms to problems in pediatric ophthalmology. Results: At present, the ML literature emphasizes applications in retinopathy of prematurity. However, there are increasing efforts to apply ML techniques in the diagnosis of amblyogenic conditions such as pediatric cataracts, strabismus, and high refractive error. Conclusions: A greater understanding of the principles governing ML will enable pediatric eye care providers to apply the methodology to unexplored challenges within the subspecialty.
Collapse
Affiliation(s)
- Isdin Oke
- Department of Ophthalmology, Boston Children's Hospital, Boston, MA, USA.,Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
| | - Deborah VanderVeen
- Department of Ophthalmology, Boston Children's Hospital, Boston, MA, USA.,Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
| |
Collapse
|
44
|
Prabhakar B, Singh RK, Yadav KS. Artificial intelligence (AI) impacting diagnosis of glaucoma and understanding the regulatory aspects of AI-based software as medical device. Comput Med Imaging Graph 2021; 87:101818. [DOI: 10.1016/j.compmedimag.2020.101818] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2020] [Revised: 09/01/2020] [Accepted: 11/13/2020] [Indexed: 12/12/2022]
|
45
|
Zimmerman C, Bruggeman B, LaPorte A, Kaushal S, Stalvey M, Beauchamp G, Dayton K, Hiers P, Filipp SL, Gurka MJ, Silverstein JH, Jacobsen LM. Real-World Screening for Retinopathy in Youth With Type 1 Diabetes Using a Nonmydriatic Fundus Camera. Diabetes Spectr 2021; 34:27-33. [PMID: 33627991 PMCID: PMC7887527 DOI: 10.2337/ds20-0017] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
Abstract
OBJECTIVE To assess the use of a portable retinal camera in diabetic retinopathy (DR) screening in multiple settings and the presence of associated risk factors among children, adolescents, and young adults with type 1 diabetes. DESIGN AND METHODS Five hundred youth with type 1 diabetes of at least 1 year's duration were recruited from clinics, diabetes camp, and a diabetes conference and underwent retinal imaging using a nonmydriatic fundus camera. Retinal characterization was performed remotely by a licensed ophthalmologist. Risk factors for DR development were evaluated by a patient-reported questionnaire and medical chart review. RESULTS Of the 500 recruited subjects aged 9-26 years (mean 14.9, SD 3.8), 10 cases of DR were identified (nine mild and one moderate nonproliferative DR) with 100% of images of gradable quality. The prevalence of DR was 2.04% (95% CI 0.78-3.29), at an average age of 20.2 years, with the youngest affected subject being 17.1 years of age. The rate of DR was higher, at 6.5%, with diabetes duration >10 years (95% CI 0.86-12.12, P = 0.0002). In subjects with DR, the average duration of diabetes was 12.1 years (SD 4.6, range 6.2-20.0), and in a subgroup of clinic-only subjects (n = 114), elevated blood pressure in the year before screening was associated with DR (P = 0.0068). CONCLUSION This study in a large cohort of subjects with type 1 diabetes demonstrates that older adolescents and young adults (>17 years) with longer disease duration (>6 years) are at risk for DR development, and screening using a portable retinal camera is feasible in clinics and other locations. Recent elevated blood pressure was a risk factor in an analyzed subgroup.
Collapse
Affiliation(s)
- Chelsea Zimmerman
- Division of Pediatric Endocrinology, University of Florida, Gainesville, FL
| | - Brittany Bruggeman
- Division of Pediatric Endocrinology, University of Florida, Gainesville, FL
| | - Amanda LaPorte
- University of Florida College of Medicine, Gainesville, FL
| | | | - Michael Stalvey
- Division of Pediatric Endocrinology, University of Alabama at Birmingham, Birmingham, AL
| | - Giovanna Beauchamp
- Division of Pediatric Endocrinology, University of Alabama at Birmingham, Birmingham, AL
| | - Kristin Dayton
- Division of Pediatric Endocrinology, University of Florida, Gainesville, FL
| | - Paul Hiers
- Division of Pediatric Endocrinology, University of Florida, Gainesville, FL
| | - Stephanie L. Filipp
- Department of Health Outcomes and Policy, University of Florida, Gainesville, FL
| | - Matthew J. Gurka
- Department of Health Outcomes and Policy, University of Florida, Gainesville, FL
| | | | - Laura M. Jacobsen
- Division of Pediatric Endocrinology, University of Florida, Gainesville, FL
| |
Collapse
|
46
|
Affiliation(s)
| | - Paulo Schor
- Universidade Federal de São Paulo, São Paulo, SP, Brazil
| |
Collapse
|
47
|
Jayadev C, Shetty R. Artificial intelligence in laser refractive surgery - Potential and promise! Indian J Ophthalmol 2020; 68:2650-2651. [PMID: 33229635 PMCID: PMC7856980 DOI: 10.4103/ijo.ijo_3304_20] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/26/2022] Open
Affiliation(s)
- Chaitra Jayadev
- Narayana Nethralaya Eye Institute, 121/C, Chord Road, Rajajinagar, Bangalore - 560 010, Karnataka, India
| | - Rohit Shetty
- Narayana Nethralaya Eye Institute, 121/C, Chord Road, Rajajinagar, Bangalore - 560 010, Karnataka, India
| |
Collapse
|
48
|
Suri R, Neupane YR, Jain GK, Kohli K. Recent theranostic paradigms for the management of Age-related macular degeneration. Eur J Pharm Sci 2020; 153:105489. [PMID: 32717428 DOI: 10.1016/j.ejps.2020.105489] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2020] [Revised: 07/07/2020] [Accepted: 07/23/2020] [Indexed: 12/21/2022]
Abstract
Age-related macular degeneration (AMD), a degenerative disease of the eye that affects the central portion of the retina (macula), is one of the leading causes of blindness worldwide, especially in the elderly population. It is classified mainly into wet and dry forms. With expanding knowledge of the underlying pathophysiology, various treatment strategies are being employed to halt disease progression. To date, no therapy cures the disease completely, and targeting the posterior segment of the eye remains an additional challenge. The purpose of this review is to summarize recent advances in the management and treatment of AMD (therapies, delivery systems, and diagnostic tools), including molecular targeting, stem cell therapy, nanotechnology, and exosomes, with special reference to newer technologies such as artificial intelligence and 3D printing. The role of diet and nutritional supplements in the prevention and treatment of the disease is also highlighted. The alarming global increase in AMD demands exhaustive research into its treatment, and this review therefore also directs attention to the challenges and future perspectives of the different treatment approaches.
Collapse
Affiliation(s)
- Reshal Suri
- Department of Pharmaceutics, School of Pharmaceutical Education & Research, Jamia Hamdard, New Delhi, 110062, India
| | - Yub Raj Neupane
- Department of Pharmacy, National University of Singapore, 117559, Singapore
| | - Gaurav Kumar Jain
- Department of Pharmaceutics, School of Pharmaceutical Education & Research, Jamia Hamdard, New Delhi, 110062, India
| | - Kanchan Kohli
- Department of Pharmaceutics, School of Pharmaceutical Education & Research, Jamia Hamdard, New Delhi, 110062, India.
| |
Collapse
|
49
|
Pao SI, Lin HZ, Chien KH, Tai MC, Chen JT, Lin GM. Detection of Diabetic Retinopathy Using Bichannel Convolutional Neural Network. J Ophthalmol 2020; 2020:9139713. [PMID: 32655944 PMCID: PMC7322591 DOI: 10.1155/2020/9139713] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2020] [Accepted: 05/18/2020] [Indexed: 01/14/2023] Open
Abstract
Deep learning on fundus photographs has emerged as a practical and cost-effective technique for the automatic screening and diagnosis of more severe diabetic retinopathy (DR). The entropy image of the luminance of a fundus photograph has been shown to improve the detection of referable DR by a convolutional neural network (CNN)-based system. In this paper, an entropy image computed from the green component of the fundus photograph is proposed, and image enhancement by unsharp masking (UM) is applied as preprocessing before the entropy images are calculated. A bichannel CNN that combines the entropy images of the gray level and of the UM-preprocessed green component is also proposed to further improve the detection of referable DR.
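As a rough illustration of the preprocessing described above, the Python sketch below computes a local-entropy image from an unsharp-masked green channel. The window size, Gaussian sigma, sharpening amount, and histogram bin count are illustrative assumptions rather than the authors' parameters, and the gray-level and green-channel entropy maps would then be stacked as the two input channels of a bichannel CNN.
```python
import numpy as np
from scipy.ndimage import gaussian_filter, generic_filter

def unsharp_mask(channel: np.ndarray, sigma: float = 5.0, amount: float = 1.0) -> np.ndarray:
    """Classic unsharp masking: add the high-frequency residual back to the image."""
    blurred = gaussian_filter(channel, sigma=sigma)
    return np.clip(channel + amount * (channel - blurred), 0.0, 1.0)

def local_entropy(window: np.ndarray, bins: int = 16) -> float:
    """Shannon entropy (in bits) of the intensity histogram inside one window."""
    hist, _ = np.histogram(window, bins=bins, range=(0.0, 1.0))
    p = hist[hist > 0] / hist.sum()
    return float(-np.sum(p * np.log2(p)))

def entropy_image(rgb: np.ndarray, window: int = 9) -> np.ndarray:
    """Entropy image of the unsharp-masked green channel of an 8-bit RGB fundus photo."""
    green = rgb[..., 1].astype(np.float64) / 255.0
    enhanced = unsharp_mask(green)
    return generic_filter(enhanced, local_entropy, size=window)

# The gray-level entropy image is obtained the same way from the luminance channel,
# and the two maps are stacked as an (H, W, 2) array to feed the two-channel CNN input.
```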
Collapse
Affiliation(s)
- Shu-I Pao
- Department of Ophthalmology, Tri-Service General Hospital and National Defense Medical Center, Taipei 114, Taiwan
| | - Hong-Zin Lin
- Department of Ophthalmology, Buddhist Tzu Chi General Hospital, Hualien 970, Taiwan
- Institute of Medical Sciences, Tzu Chi University, Hualien 970, Taiwan
| | - Ke-Hung Chien
- Department of Ophthalmology, Tri-Service General Hospital and National Defense Medical Center, Taipei 114, Taiwan
- Department of Medicine, Hualien Armed Forces General Hospital, Hualien 971, Taiwan
| | - Ming-Cheng Tai
- Department of Ophthalmology, Tri-Service General Hospital and National Defense Medical Center, Taipei 114, Taiwan
- Department of Medicine, Hualien Armed Forces General Hospital, Hualien 971, Taiwan
| | - Jiann-Torng Chen
- Department of Ophthalmology, Tri-Service General Hospital and National Defense Medical Center, Taipei 114, Taiwan
| | - Gen-Min Lin
- Department of Medicine, Hualien Armed Forces General Hospital, Hualien 971, Taiwan
- Department of Medicine, Tri-Service General Hospital and National Defense Medical Center, Taipei 114, Taiwan
- Department of Preventive Medicine, Northwestern University, Chicago, IL 60611, USA
| |
Collapse
|
50
|
Gupta V, Rajendran A, Narayanan R, Chawla S, Kumar A, Palanivelu MS, Muralidhar NS, Jayadev C, Pappuru R, Khatri M, Agarwal M, Aurora A, Bhende P, Bhende M, Bawankule P, Rishi P, Vinekar A, Trehan HS, Biswas J, Agarwal R, Natarajan S, Verma L, Ramasamy K, Giridhar A, Rishi E, Talwar D, Pathangey A, Azad R, Honavar SG. Evolving consensus on managing vitreo-retina and uvea practice in post-COVID-19 pandemic era. Indian J Ophthalmol 2020; 68:962-973. [PMID: 32461407 PMCID: PMC7508071 DOI: 10.4103/ijo.ijo_1404_20] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2020] [Revised: 05/09/2020] [Accepted: 05/09/2020] [Indexed: 02/06/2023] Open
Abstract
The COVID-19 pandemic has brought new challenges to the health care community. Many super-speciality practices are planning to re-open once the lockdown is lifted, yet there is considerable apprehension about adopting practices that would safeguard patients, ophthalmologists, and healthcare workers, as well as protect equipment and minimize damage. The aim of this article is to develop preferred practice patterns, through consensus among lead experts, to help institutes and individual vitreo-retina and uveitis specialists restart their practices with confidence. As the situation remains volatile, these suggestions are evolving and are likely to change as understanding and experience improve. Further, the suggestions apply to routine patients, as COVID-19-positive patients may be managed in designated hospitals according to local protocols, and they must be implemented in compliance with local rules and regulations.
Collapse
Affiliation(s)
- Vishali Gupta
- Advanced Eye Centre, Post Graduate Institute of Medical Education and Research, Chandigarh, India
| | | | | | | | - Atul Kumar
- Dr. R.P. Centre for Ophthalmic Sciences, All India Institute of Medical Sciences, New Delhi, India
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | - Rupesh Agarwal
- National Healthcare Group Eye Institute, Tan Tock Seng Hospital, Singapore
| | | | | | | | | | | | | | | | - Rajvardhan Azad
- Regional Institute of Ophthalmology, Indira Gandhi Institute of Medical Sciences, Patna, India
| | | |
Collapse
|