1.
D S R, Saji KS. Hybrid deep learning framework for diabetic retinopathy classification with optimized attention AlexNet. Comput Biol Med 2025; 190:110054. [PMID: 40154203] [DOI: 10.1016/j.compbiomed.2025.110054]
Abstract
Diabetic retinopathy (DR) is a chronic condition associated with diabetes that can lead to vision impairment and, if not addressed, may progress to irreversible blindness. Consequently, it is essential to detect pathological changes in the retina to assess DR severity accurately. Manual examination of retinal disorders is often complex, time-consuming, and susceptible to error because the relevant retinal changes are subtle. In recent years, Deep Learning (DL) based optimizations have shown significant promise in improving DR recognition and classification. This work presents an automated DR classification method that grades severity in fundus images using a metaheuristic-optimization-based advanced DL model. Four stages are involved in the suggested DR classification. First, the pre-processing stage performs green channel conversion, CLAHE, and Gaussian filtering (GF). Then, the fundus lesions are segmented by Fuzzy Possibilistic C Ordered Means (FPCOM). Finally, the lesions are classified by an Attention AlexNet with an Improved Nutcracker Optimizer (At-AlexNet-ImNO). The ImNO optimizes the At-AlexNet's weights and hyperparameters and boosts classification performance. Experiments were performed on two benchmark datasets, APTOS-2019 Blindness Detection and EyePacs. Accuracy, precision, and recall reached 99.23 %, 98 %, and 98.2 % on APTOS-2019, and 99.43 %, 98.2 %, and 98.65 % on EyePacs, respectively.
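The three pre-processing operations named in this abstract (green-channel conversion, contrast enhancement, and Gaussian filtering) can be sketched as below. This is a minimal NumPy stand-in, not the authors' implementation: a global contrast stretch substitutes for CLAHE, and a fixed 3×3 kernel substitutes for the paper's Gaussian filter.

```python
import numpy as np

def preprocess_fundus(rgb):
    """Sketch of a fundus pre-processing pipeline: green channel,
    contrast enhancement, Gaussian smoothing (all simplified)."""
    g = rgb[..., 1].astype(np.float64)        # green channel of an H x W x 3 image
    lo, hi = g.min(), g.max()
    g = (g - lo) / (hi - lo + 1e-9)           # global contrast stretch to [0, 1]
    k = np.array([1.0, 2.0, 1.0])
    k = np.outer(k, k)
    k /= k.sum()                              # 3x3 Gaussian-like kernel
    padded = np.pad(g, 1, mode="edge")        # edge padding keeps output size
    h, w = g.shape
    return sum(k[i, j] * padded[i:i + h, j:j + w]
               for i in range(3) for j in range(3))
```

In practice CLAHE would be applied tile-wise (e.g., via an image-processing library) rather than as a single global stretch; this sketch only shows the order of operations.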
Affiliation(s)
- Renu D S
- Department of Computer Science and Engineering, Mar Ephraem College of Engineering and Technology, Elavuvilai, Tamilnadu, India.
- K S Saji
- Department of Electrical and Electronics Engineering, Meenakshi Sundararajan Engineering College, Kodambakkam, Chennai, Tamilnadu, India.
2.
Dinesen S, Schou MG, Hedegaard CV, Subhi Y, Savarimuthu TR, Peto T, Andersen JKH, Grauslund J. A Deep Learning Segmentation Model for Detection of Active Proliferative Diabetic Retinopathy. Ophthalmol Ther 2025; 14:1053-1063. [PMID: 40146482] [PMCID: PMC12006569] [DOI: 10.1007/s40123-025-01127-w]
Abstract
INTRODUCTION Existing deep learning (DL) algorithms lack the capability to accurately identify patients in immediate need of treatment for proliferative diabetic retinopathy (PDR). We aimed to develop a DL segmentation model to detect active PDR in six-field retinal images by the annotation of new retinal vessels and preretinal hemorrhages. METHODS We identified six-field retinal images classified at level 4 of the International Clinical Diabetic Retinopathy Disease Severity Scale collected on the island of Funen from 2009 to 2019 as part of the Danish screening program for diabetic retinopathy (DR). A certified grader (grader 1) manually dichotomized the images into active or inactive PDR, and the images were then reassessed by two independent certified graders. In cases of disagreement, the final classification decision was made in collaboration between grader 1 and one of the secondary graders. Overall, 637 images were classified as active PDR. We then applied our pre-established DL segmentation model to annotate nine lesion types before training the algorithm. The segmentations of new vessels and preretinal hemorrhages were corrected for any inaccuracies before training the DL algorithm. After the classification and pre-segmentation phases, the images were divided into training (70%), validation (10%), and testing (20%) datasets. We added 301 images with inactive PDR to the testing dataset. RESULTS We included 637 images of active PDR and 301 images of inactive PDR from 199 individuals. The training dataset had 1381 new vessel and preretinal hemorrhage lesions, while the validation dataset had 123 lesions and the testing dataset 374 lesions. The DL system demonstrated a sensitivity of 90% and a specificity of 70% for annotation-assisted classification of active PDR. The negative predictive value was 94%, while the positive predictive value was 57%.
CONCLUSIONS Our DL segmentation model achieved excellent sensitivity and acceptable specificity in distinguishing active from inactive PDR.
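The four figures reported above (sensitivity, specificity, NPV, PPV) are standard confusion-matrix ratios. A minimal sketch, using illustrative counts rather than the study's actual test-set tallies:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity/specificity: recall on diseased/healthy eyes.
    PPV/NPV: how trustworthy a positive/negative call is."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Illustrative counts only, not the study's data:
print(diagnostic_metrics(tp=90, fp=30, fn=10, tn=70))
```

Note that when active PDR is rare relative to inactive PDR, PPV can stay modest even at high sensitivity, which is consistent with the 57% PPV reported for this imbalanced test set.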
Affiliation(s)
- Sebastian Dinesen
- Department of Ophthalmology, Odense University Hospital, Sdr. Boulevard 29, 5000, Odense, Denmark.
- Department of Clinical Research, University of Southern Denmark, Odense, Denmark.
- Steno Diabetes Centre Odense, Odense University Hospital, Odense, Denmark.
- Marianne G Schou
- Department of Ophthalmology, Odense University Hospital, Sdr. Boulevard 29, 5000, Odense, Denmark
- Department of Clinical Research, University of Southern Denmark, Odense, Denmark
- Christoffer V Hedegaard
- Department of Ophthalmology, Odense University Hospital, Sdr. Boulevard 29, 5000, Odense, Denmark
- Yousif Subhi
- Department of Clinical Research, University of Southern Denmark, Odense, Denmark
- Department of Ophthalmology, Rigshospitalet, Glostrup, Denmark
- Department of Clinical Medicine, University of Copenhagen, Copenhagen, Denmark
- Tunde Peto
- Centre for Public Health, Queen's University Belfast, Belfast, UK
- Jakob K H Andersen
- The Maersk Mc-Kinney Moller Institute, University of Southern Denmark, Odense, Denmark
- Jakob Grauslund
- Department of Ophthalmology, Odense University Hospital, Sdr. Boulevard 29, 5000, Odense, Denmark
- Department of Clinical Research, University of Southern Denmark, Odense, Denmark
- Steno Diabetes Centre Odense, Odense University Hospital, Odense, Denmark
3.
Ciapponi A, Ballivian J, Gentile C, Mejia JR, Ruiz-Baena J, Bardach A. Diagnostic utility of artificial intelligence software through non-mydriatic digital retinography in the screening of diabetic retinopathy: an overview of reviews. Eye (Lond) 2025. [PMID: 40301668] [DOI: 10.1038/s41433-025-03809-y]
Abstract
OBJECTIVE To evaluate the capability of artificial intelligence (AI) in screening for diabetic retinopathy (DR) utilizing digital retinography captured by non-mydriatic (NM) ≥45° cameras, focusing on diagnostic accuracy, effectiveness, and clinical safety. METHODS We performed an overview of systematic reviews (SRs) up to May 2023 in Medline, Embase, CINAHL, and Web of Science. We used the AMSTAR-2 tool to assess the reliability of each SR. We reported meta-analysis estimates or ranges of diagnostic performance figures. RESULTS Out of 1336 records, ten SRs were selected, most deemed low or critically low quality. Eight primary studies were included in at least five of the ten SRs, and 125 in fewer than five SRs. No SR reported efficacy, effectiveness, or safety outcomes. The sensitivity and specificity for referable DR were 68-100% and 20-100%, respectively, with an AUROC range of 88-99%. For detecting DR at any stage, sensitivity was 79-100% and specificity was 50-100%, with an AUROC range of 93-98%. CONCLUSIONS AI demonstrates strong diagnostic potential for DR screening using NM cameras, with adequate sensitivity but variable specificity. While AI is increasingly integrated into routine practice, this overview highlights significant heterogeneity in the AI models and cameras used. It also highlights the low quality of existing systematic reviews and the significant challenge of integrating the rapidly growing volume of emerging evidence in this field. Policymakers should carefully evaluate AI tools in specific contexts, and future research must generate updated high-quality evidence to optimize their application and improve patient outcomes.
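The AUROC ranges reported above can be read through the Mann-Whitney interpretation: AUROC is the probability that a randomly chosen diseased image scores higher than a randomly chosen healthy one. A minimal sketch with illustrative scores:

```python
def auroc(pos_scores, neg_scores):
    """AUROC as the fraction of (positive, negative) score pairs
    ranked correctly; tied pairs count half."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))
```

This pairwise form is equivalent to the area under the ROC curve and needs no threshold choice, which is why it is a common summary when sensitivity and specificity vary as widely as they do across these reviews.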
Affiliation(s)
- Agustín Ciapponi
- Instituto de Efectividad Clínica y Sanitaria (IECS), Buenos Aires, Argentina.
- Jamile Ballivian
- Instituto de Efectividad Clínica y Sanitaria (IECS), Buenos Aires, Argentina
- Carolina Gentile
- Hospital Italiano de Buenos Aires, Servicio de Oftalmología, Buenos Aires, Argentina
- Jhonatan R Mejia
- Instituto de Efectividad Clínica y Sanitaria (IECS), Buenos Aires, Argentina
- Jessica Ruiz-Baena
- Àrea d'Avaluació i Qualitat, Agència de Qualitat i Avaluació Sanitàries de Catalunya (AQuAS), Catalunya, Spain
- Ariel Bardach
- Instituto de Efectividad Clínica y Sanitaria (IECS), Buenos Aires, Argentina
4.
Zhang W, Rong H, Hei K, Liu G, He M, Du B, Wei R, Zhang Y. A deep learning-assisted automatic measurement of tear meniscus height on ocular surface images and its application in myopia control. Front Bioeng Biotechnol 2025; 13:1554432. [PMID: 40291564] [PMCID: PMC12021850] [DOI: 10.3389/fbioe.2025.1554432]
Abstract
Purpose Modalities for myopia control, such as orthokeratology, repeated low-intensity red light (RLRL) treatment, and low-concentration atropine, have become popular topics. However, the effects of these three modalities on ocular surface health remain unclear. The tear meniscus height (TMH), a crucial criterion for evaluating ocular surface health and diagnosing dry eye, is conventionally measured via manual demarcation of ocular surface images, which is inefficient and involves subjective judgment. Therefore, this study sought to establish a deep learning model for automatic TMH measurement on ocular surface images to improve the efficiency and accuracy of the initial screening of dry eye associated with myopia control modalities. Methods To establish a model, 1,200 ocular surface images captured with an OCULUS Keratograph 5M were collected. The tear meniscus area on each image was initially marked by one experienced ophthalmologist and verified by a second. The whole image dataset was divided into a training set (70%), a validation set (20%), and a test set (10%) for model construction, plus an external validation set (100 ocular surface images). The deep learning model was applied to ocular surface imaging data from previous clinical trials using orthokeratology, RLRL therapy, and 0.01% atropine for myopia control. TMHs at follow-ups were automatically measured by the deep learning model. Results Two hundred training iterations were performed to establish the model. At the 124th iteration, the intersection over union (IoU) of the validation set peaked at 0.913, and the parameters of the model were saved for the testing process. The model IoU was 0.928 during testing. The AUC of the ROC curve was 0.935, and the R2 of the linear regression analysis was 0.92. The good performance and comprehensive validation of the model warrant its application to automatic TMH measurement in clinical trials of myopia control.
There were no significant changes in the TMH during the follow-up period after treatment with orthokeratology, RLRL, or 0.01% atropine. Conclusion A deep learning model was established for automatic measurement of the TMH on Keratograph 5M-captured ocular surface images. This model demonstrated high accuracy, great consistency with manual measurements, and applicability to the initial screening of dry eye associated with myopia control modalities.
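The IoU figures quoted above are intersection-over-union scores between predicted and ground-truth tear meniscus masks. A minimal sketch for binary masks given as nested 0/1 lists (illustrative, not the study's evaluation code):

```python
def iou(pred, truth):
    """Intersection over union of two same-shaped binary masks
    (nested lists of 0/1); empty-vs-empty counts as perfect overlap."""
    inter = sum(p & t for rp, rt in zip(pred, truth)
                for p, t in zip(rp, rt))
    union = sum(p | t for rp, rt in zip(pred, truth)
                for p, t in zip(rp, rt))
    return inter / union if union else 1.0
```

An IoU of 0.913 on validation therefore means the predicted and manually demarcated meniscus regions overlap in about 91% of their combined area.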
Affiliation(s)
- Ruihua Wei
- Tianjin Key Laboratory of Retinal Functions and Diseases, Tianjin Branch of National Clinical Research Center for Ocular Disease, Eye Institute and School of Optometry, Tianjin Medical University Eye Hospital, Tianjin, China
- Yan Zhang
- Tianjin Key Laboratory of Retinal Functions and Diseases, Tianjin Branch of National Clinical Research Center for Ocular Disease, Eye Institute and School of Optometry, Tianjin Medical University Eye Hospital, Tianjin, China
5.
Zafar A, Kim KS, Ali MU, Byun JH, Kim SH. A lightweight multi-deep learning framework for accurate diabetic retinopathy detection and multi-level severity identification. Front Med (Lausanne) 2025; 12:1551315. [PMID: 40241910] [PMCID: PMC12000039] [DOI: 10.3389/fmed.2025.1551315]
Abstract
Accurate and timely detection of diabetic retinopathy (DR) is crucial for managing its progression and improving patient outcomes. However, developing algorithms to analyze complex fundus images continues to be a major challenge. This work presents a lightweight deep-learning network developed for DR detection. The proposed framework consists of two stages. In the first stage, the developed model assesses the presence of DR [i.e., healthy (no DR) or diseased (DR)]. The second stage uses transfer learning for further subclassification of DR severity (i.e., mild, moderate, severe, and proliferative DR). The designed model is reused for transfer learning, as correlated images facilitate further classification of DR severity. An online dataset is used to validate the proposed framework, and results show that the proposed model is lightweight, with comparatively few learnable parameters. The proposed two-stage framework enhances classification performance, achieving a 99.06% classification rate for DR detection and an accuracy of 90.75% for DR severity identification on the APTOS 2019 dataset.
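The two-stage decision logic described above (first DR presence, then severity subclassification only for positives) can be sketched as follows; the function name, grade labels, and threshold are illustrative, not taken from the paper:

```python
def two_stage_grade(p_dr, severity_probs, threshold=0.5):
    """Stage 1: binary DR screen on a single probability.
    Stage 2: argmax over severity probabilities, run only if stage 1 fires."""
    if p_dr < threshold:
        return "no DR"
    grades = ["mild", "moderate", "severe", "proliferative"]
    return grades[max(range(len(grades)), key=lambda i: severity_probs[i])]
```

Splitting the problem this way lets the easier, better-balanced binary task reach very high accuracy, while the harder 4-way severity task is attempted only on images already flagged as diseased.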
Affiliation(s)
- Amad Zafar
- Department of Artificial Intelligence and Robotics, Sejong University, Seoul, Republic of Korea
- Kwang Su Kim
- Department of Scientific Computing, Pukyong National University, Busan, Republic of Korea
- Muhammad Umair Ali
- Department of Artificial Intelligence and Robotics, Sejong University, Seoul, Republic of Korea
- Jong Hyuk Byun
- Department of Mathematics, Institute of Mathematical Science, Pusan National University, Busan, Republic of Korea
- Finance Fishery Manufacture Industrial Mathematics Center on Big Data, Pusan National University, Busan, Republic of Korea
- Seong-Han Kim
- Department of Artificial Intelligence and Robotics, Sejong University, Seoul, Republic of Korea
6.
Pachade S, Porwal P, Kokare M, Deshmukh G, Sahasrabuddhe V, Luo Z, Han F, Sun Z, Qihan L, Kamata SI, Ho E, Wang E, Sivajohan A, Youn S, Lane K, Chun J, Wang X, Gu Y, Lu S, Oh YT, Park H, Lee CY, Yeh H, Cheng KW, Wang H, Ye J, He J, Gu L, Müller D, Soto-Rey I, Kramer F, Arai H, Ochi Y, Okada T, Giancardo L, Quellec G, Mériaudeau F. RFMiD: Retinal Image Analysis for multi-Disease Detection challenge. Med Image Anal 2025; 99:103365. [PMID: 39395210] [DOI: 10.1016/j.media.2024.103365]
Abstract
In recent decades, many publicly available large fundus image datasets have been collected for diabetic retinopathy, glaucoma, age-related macular degeneration, and a few other frequent pathologies. These publicly available datasets were used to develop computer-aided disease diagnosis systems by training deep learning models to detect these frequent pathologies. One challenge limiting the adoption of such systems by ophthalmologists is that they ignore sight-threatening rare pathologies, such as central retinal artery occlusion or anterior ischemic optic neuropathy, that ophthalmologists currently detect. Aiming to advance the state of the art in automatic ocular disease classification of frequent diseases along with rare pathologies, a grand challenge on "Retinal Image Analysis for multi-Disease Detection" was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI 2021). This paper reports the challenge organization, dataset, top-performing participants' solutions, evaluation measures, and results based on a new "Retinal Fundus Multi-disease Image Dataset" (RFMiD). There were two principal sub-challenges: disease screening (i.e., presence versus absence of pathology, a binary classification problem) and disease/pathology classification (a 28-class multi-label classification problem). The challenge received a positive response from the scientific community, with 74 submissions by individuals/teams. The top-performing methodologies utilized a blend of data pre-processing, data augmentation, pre-trained models, and model ensembling. This multi-disease (frequent and rare pathologies) detection will enable the development of generalizable models for screening the retina, unlike previous efforts that focused on the detection of specific diseases.
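The 28-class multi-label sub-challenge differs from ordinary multi-class classification in that several pathologies can be present in one image, so each class gets an independent probability. A common formulation (assumed here; the challenge paper does not mandate it) thresholds per-class sigmoid scores:

```python
import math

def multilabel_predict(logits, threshold=0.5):
    """One independent sigmoid per pathology; a disease is flagged
    when its probability reaches the threshold (value illustrative)."""
    probs = [1.0 / (1.0 + math.exp(-z)) for z in logits]
    return [int(p >= threshold) for p in probs]
```

Unlike a softmax over 28 classes, this allows an image to be flagged with zero, one, or several pathologies at once.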
Affiliation(s)
- Samiksha Pachade
- Center of Excellence in Signal and Image Processing, Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded 431606, India.
- Prasanna Porwal
- Center of Excellence in Signal and Image Processing, Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded 431606, India
- Manesh Kokare
- Center of Excellence in Signal and Image Processing, Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded 431606, India
- Vivek Sahasrabuddhe
- Department of Ophthalmology, Shankarrao Chavan Government Medical College, Nanded 431606, India
- Zhengbo Luo
- Graduate School of Information Production and Systems, Waseda University, Japan
- Feng Han
- University of Shanghai for Science and Technology, Shanghai, China
- Zitang Sun
- Graduate School of Information Production and Systems, Waseda University, Japan
- Li Qihan
- Graduate School of Information Production and Systems, Waseda University, Japan
- Sei-Ichiro Kamata
- Graduate School of Information Production and Systems, Waseda University, Japan
- Edward Ho
- Schulich Applied Computing in Medicine, University of Western Ontario, Schulich School of Medicine and Dentistry, Canada
- Edward Wang
- Schulich Applied Computing in Medicine, University of Western Ontario, Schulich School of Medicine and Dentistry, Canada
- Asaanth Sivajohan
- Schulich Applied Computing in Medicine, University of Western Ontario, Schulich School of Medicine and Dentistry, Canada
- Saerom Youn
- Schulich Applied Computing in Medicine, University of Western Ontario, Schulich School of Medicine and Dentistry, Canada
- Kevin Lane
- Schulich Applied Computing in Medicine, University of Western Ontario, Schulich School of Medicine and Dentistry, Canada
- Jin Chun
- Schulich Applied Computing in Medicine, University of Western Ontario, Schulich School of Medicine and Dentistry, Canada
- Xinliang Wang
- Beihang University School of Computer Science, China
- Yunchao Gu
- Beihang University School of Computer Science, China
- Sixu Lu
- Beijing Normal University School of Artificial Intelligence, China
- Young-Tack Oh
- Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon, Republic of Korea
- Hyunjin Park
- Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, Republic of Korea; School of Electronic and Electrical Engineering, Sungkyunkwan University, Suwon, Republic of Korea
- Chia-Yen Lee
- Department of Electrical Engineering, National United University, Miaoli 360001, Taiwan, ROC
- Hung Yeh
- Department of Electrical Engineering, National United University, Miaoli 360001, Taiwan, ROC; Institute of Biomedical Engineering, National Yang Ming Chiao Tung University, 1001 Ta-Hsueh Road, Hsinchu, Taiwan, ROC
- Kai-Wen Cheng
- Department of Electrical Engineering, National United University, Miaoli 360001, Taiwan, ROC
- Haoyu Wang
- School of Biomedical Engineering, the Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China
- Jin Ye
- ShenZhen Key Lab of Computer Vision and Pattern Recognition, Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Junjun He
- School of Biomedical Engineering, the Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China; ShenZhen Key Lab of Computer Vision and Pattern Recognition, Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Lixu Gu
- School of Biomedical Engineering, the Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China
- Dominik Müller
- IT-Infrastructure for Translational Medical Research, University of Augsburg, Germany; Medical Data Integration Center, University Hospital Augsburg, Germany
- Iñaki Soto-Rey
- IT-Infrastructure for Translational Medical Research, University of Augsburg, Germany; Medical Data Integration Center, University Hospital Augsburg, Germany
- Frank Kramer
- IT-Infrastructure for Translational Medical Research, University of Augsburg, Germany
- Yuma Ochi
- National Institute of Technology, Kisarazu College, Japan
- Takami Okada
- Institute of Industrial Ecological Sciences, University of Occupational and Environmental Health, Japan
- Luca Giancardo
- Center for Precision Health, School of Biomedical Informatics, University of Texas Health Science Center at Houston (UTHealth), Houston, TX 77030, USA
7.
Burlina S, Radin S, Poggiato M, Cioccoloni D, Raimondo D, Romanello G, Tommasi C, Lombardi S. Screening for diabetic retinopathy with artificial intelligence: a real world evaluation. Acta Diabetol 2024; 61:1603-1607. [PMID: 38995312] [DOI: 10.1007/s00592-024-02333-x]
Abstract
AIM Periodic screening for diabetic retinopathy (DR) is effective for preventing blindness. Artificial intelligence (AI) systems could be useful for increasing the screening of DR in diabetic patients. The aim of this study was to compare the performance of the DAIRET system in detecting DR to that of ophthalmologists in a real-world setting. METHODS Fundus photography was performed with a nonmydriatic camera in 958 consecutive patients older than 18 years who were affected by diabetes and who were enrolled in DR screening in the Diabetes and Endocrinology Unit and the Eye Unit of ULSS8 Berica (Italy) between June 2022 and June 2023. All retinal images were evaluated by DAIRET, an AI-based machine learning algorithm. In addition, all the images obtained were analysed by an ophthalmologist, who graded them. The results obtained by DAIRET were compared with those obtained by the ophthalmologist. RESULTS We included 958 patients, but only 867 (90.5%) had retinal images sufficient for evaluation by a human grader. The sensitivity for detecting cases of moderate DR and above was 1 (100%), and the sensitivity for detecting cases of mild DR was 0.84 ± 0.03. The specificity for detecting the absence of DR was lower (0.59 ± 0.04) because of the high number of false positives. CONCLUSION DAIRET showed excellent sensitivity in detecting all cases of referable DR (moderate DR or above) compared with a human grader. On the other hand, the specificity of DAIRET was low because of the high number of false positives, which limits its cost-effectiveness.
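Figures such as "0.84 ± 0.03" above pair a sensitivity estimate with its binomial standard error. A minimal sketch, with illustrative counts rather than the study's case tallies:

```python
import math

def sensitivity_with_se(tp, fn):
    """Point estimate p = tp / n and its binomial standard error
    sqrt(p * (1 - p) / n), where n = tp + fn."""
    n = tp + fn
    p = tp / n
    return p, math.sqrt(p * (1.0 - p) / n)
```

The same formula applied to specificity (with tn and fp) yields the ± values quoted for the absence-of-DR result.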
Affiliation(s)
- Silvia Burlina
- Diabetes and Endocrinology Unit, ULSS8 Berica, Arzignano, Veneto, VI, Italy.
- Sandra Radin
- Eye Unit, ULSS 8 Berica, Montecchio Maggiore, Veneto, VI, Italy
- Marzia Poggiato
- Eye Unit, ULSS 8 Berica, Montecchio Maggiore, Veneto, VI, Italy
- Dario Cioccoloni
- Diabetes and Endocrinology Unit, ULSS8 Berica, Arzignano, Veneto, VI, Italy
- Daniele Raimondo
- Diabetes and Endocrinology Unit, ULSS8 Berica, Arzignano, Veneto, VI, Italy
- Giovanni Romanello
- Diabetes and Endocrinology Unit, ULSS8 Berica, Arzignano, Veneto, VI, Italy
- Chiara Tommasi
- Diabetes and Endocrinology Unit, ULSS8 Berica, Arzignano, Veneto, VI, Italy
- Simonetta Lombardi
- Diabetes and Endocrinology Unit, ULSS8 Berica, Arzignano, Veneto, VI, Italy
8.
Mihalache A, Huang RS, Mikhail D, Popovic MM, Shor R, Pereira A, Kwok J, Yan P, Wong DT, Kertes PJ, Kohly RP, Muni RH. Interpretation of Clinical Retinal Images Using an Artificial Intelligence Chatbot. Ophthalmol Sci 2024; 4:100556. [PMID: 39139542] [PMCID: PMC11321281] [DOI: 10.1016/j.xops.2024.100556]
Abstract
Purpose To assess the performance of Chat Generative Pre-Trained Transformer-4 in providing accurate diagnoses to retina teaching cases from OCTCases. Design Cross-sectional study. Subjects Retina teaching cases from OCTCases. Methods We prompted a custom chatbot with 69 retina cases containing multimodal ophthalmic images, asking it to provide the most likely diagnosis. In a sensitivity analysis, we inputted increasing amounts of clinical information pertaining to each case until the chatbot achieved a correct diagnosis. We performed multivariable logistic regressions on Stata v17.0 (StataCorp LLC) to investigate associations between the amount of text-based information inputted per prompt and the odds of the chatbot achieving a correct diagnosis, adjusting for the laterality of cases, number of ophthalmic images inputted, and imaging modalities. Main Outcome Measures Our primary outcome was the proportion of cases for which the chatbot was able to provide a correct diagnosis. Our secondary outcome was the chatbot's performance in relation to the amount of text-based information accompanying ophthalmic images. Results Across 69 retina cases collectively containing 139 ophthalmic images, the chatbot was able to provide a definitive, correct diagnosis for 35 (50.7%) cases. The chatbot needed variable amounts of clinical information to achieve a correct diagnosis, where the entire patient description as presented by OCTCases was required for a majority of correctly diagnosed cases (23 of 35 cases, 65.7%). Relative to when the chatbot was only prompted with a patient's age and sex, the chatbot achieved a higher odds of a correct diagnosis when prompted with an entire patient description (odds ratio = 10.1, 95% confidence interval = 3.3-30.3, P < 0.01). Despite providing an incorrect diagnosis for 34 (49.3%) cases, the chatbot listed the correct diagnosis within its differential diagnosis for 7 (20.6%) of these incorrectly answered cases. 
Conclusions This custom chatbot was able to accurately diagnose approximately half of the retina cases requiring multimodal input, albeit relying heavily on text-based contextual information that accompanied ophthalmic images. The diagnostic ability of the chatbot in interpretation of multimodal imaging without text-based information is currently limited. The appropriate use of the chatbot in this setting is of utmost importance, given bioethical concerns. Financial Disclosures Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
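The adjusted odds ratio and confidence interval reported above (OR = 10.1, 95% CI 3.3-30.3) are typically obtained from a logistic regression coefficient as OR = exp(β) with a Wald interval. A minimal sketch, using an illustrative β and standard error rather than the study's fitted values:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Exponentiate a logistic regression coefficient and the
    endpoints of its Wald interval beta ± z * se."""
    return math.exp(beta), (math.exp(beta - z * se), math.exp(beta + z * se))
```

Because the interval is symmetric on the log-odds scale, it comes out asymmetric around the odds ratio itself, which is why published CIs like 3.3-30.3 are not centered on the point estimate.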
Affiliation(s)
- Andrew Mihalache
- Temerty School of Medicine, University of Toronto, Toronto, Ontario, Canada
- Ryan S. Huang
- Temerty School of Medicine, University of Toronto, Toronto, Ontario, Canada
- David Mikhail
- Temerty School of Medicine, University of Toronto, Toronto, Ontario, Canada
- Marko M. Popovic
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- Reut Shor
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- Austin Pereira
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- Jason Kwok
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- Peng Yan
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- David T. Wong
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- Department of Ophthalmology, St. Michael’s Hospital/Unity Health Toronto, Toronto, Ontario, Canada
- Peter J. Kertes
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- John and Liz Tory Eye Centre, Sunnybrook Health Science Centre, Toronto, Ontario, Canada
- Radha P. Kohly
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- John and Liz Tory Eye Centre, Sunnybrook Health Science Centre, Toronto, Ontario, Canada
- Rajeev H. Muni
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- Department of Ophthalmology, St. Michael’s Hospital/Unity Health Toronto, Toronto, Ontario, Canada
Collapse
|
9
|
Domalpally A, Slater R, Linderman RE, Balaji R, Bogost J, Voland R, Pak J, Blodi BA, Channa R, Fong D, Chew EY. Strong versus Weak Data Labeling for Artificial Intelligence Algorithms in the Measurement of Geographic Atrophy. OPHTHALMOLOGY SCIENCE 2024; 4:100477. [PMID: 38827491 PMCID: PMC11141255 DOI: 10.1016/j.xops.2024.100477] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/11/2023] [Revised: 11/15/2023] [Accepted: 01/19/2024] [Indexed: 06/04/2024]
Abstract
Purpose To gain an understanding of data labeling requirements to train deep learning models for measurement of geographic atrophy (GA) with fundus autofluorescence (FAF) images. Design Evaluation of artificial intelligence (AI) algorithms. Subjects The Age-Related Eye Disease Study 2 (AREDS2) images were used for training and cross-validation, and GA clinical trial images were used for testing. Methods Training data consisted of 2 sets of FAF images: one with area measurements only and no indication of GA location (Weakly labeled), and the other with GA segmentation masks (Strongly labeled). Main Outcome Measures Bland-Altman plots and scatter plots were used to compare GA area measurements between ground truth and AI. The Dice coefficient was used to assess the segmentation accuracy of the Strong model. Results In the cross-validation AREDS2 data set (n = 601), the mean (standard deviation [SD]) area of GA measured by the human grader, Weakly labeled AI model, and Strongly labeled AI model was 6.65 (6.3) mm2, 6.83 (6.29) mm2, and 6.58 (6.24) mm2, respectively. The mean difference between ground truth and AI was 0.18 mm2 (95% confidence interval [CI], -7.57 to 7.92) for the Weakly labeled model and -0.07 mm2 (95% CI, -1.61 to 1.47) for the Strongly labeled model. With the GlaxoSmithKline testing data (n = 156), the mean (SD) GA area was 9.79 (5.6) mm2, 8.82 (4.61) mm2, and 9.55 (5.66) mm2 for the human grader, Strongly labeled AI model, and Weakly labeled AI model, respectively. The mean difference between ground truth and AI for the 2 models was -0.97 mm2 (95% CI, -4.36 to 2.41) and -0.24 mm2 (95% CI, -4.98 to 4.49), respectively. The Dice coefficient was 0.99 for intergrader agreement, 0.89 for the cross-validation data, and 0.92 for the testing data. Conclusions Deep learning models can achieve reasonable accuracy even with Weakly labeled data.
Training methods that integrate large volumes of Weakly labeled images with small number of Strongly labeled images offer a promising solution to overcome the burden of cost and time for data labeling. Financial Disclosures Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
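The two agreement statistics used in this study, the Dice coefficient for segmentation overlap and Bland-Altman limits of agreement for area measurements, can be reproduced with a few lines of numpy. A minimal sketch; the GA areas below are hypothetical illustrations, not the study's data:

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity between two binary segmentation masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * intersection / total if total else 1.0

def bland_altman(ground_truth: np.ndarray, predicted: np.ndarray):
    """Mean difference and 95% limits of agreement (mean +/- 1.96 SD)."""
    diff = predicted - ground_truth
    mean_diff = diff.mean()
    sd = diff.std(ddof=1)
    return mean_diff, (mean_diff - 1.96 * sd, mean_diff + 1.96 * sd)

# Hypothetical GA areas (mm^2) for a handful of eyes
truth = np.array([6.1, 3.4, 9.8, 12.0, 5.5])
ai = np.array([6.0, 3.9, 9.5, 12.4, 5.2])
print(bland_altman(truth, ai))
```

A bias (mean difference) near zero with narrow limits of agreement is what the Strongly labeled model's -0.07 mm2 (95% CI, -1.61 to 1.47) reflects.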
Affiliation(s)
- Amitha Domalpally: A-EYE Research Unit, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin; Wisconsin Reading Center, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin
- Robert Slater: A-EYE Research Unit, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin
- Rachel E. Linderman: A-EYE Research Unit, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin; Wisconsin Reading Center, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin
- Rohit Balaji: Wisconsin Reading Center, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin
- Jacob Bogost: A-EYE Research Unit, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin
- Rick Voland: Wisconsin Reading Center, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin
- Jeong Pak: Wisconsin Reading Center, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin
- Barbara A. Blodi: A-EYE Research Unit, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin
- Roomasa Channa: Wisconsin Reading Center, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin
- Emily Y. Chew: Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland
10
Lim WX, Chen Z. Enhancing deep learning pre-trained networks on diabetic retinopathy fundus photographs with SLIC-G. Med Biol Eng Comput 2024; 62:2571-2583. [PMID: 38649629] [DOI: 10.1007/s11517-024-03093-0]
Abstract
Diabetic retinopathy produces lesions (e.g., exudates, hemorrhages, and microaneurysms) that are minute to the naked eye. Identifying lesions at the pixel level is challenging because a single pixel does not reflect any semantic entity. Furthermore, inspecting each pixel is computationally expensive, because the number of pixels is large even at low resolution. In this work, we propose a hybrid image processing method, Simple Linear Iterative Clustering with Gaussian Filter (SLIC-G), to overcome these pixel-level constraints. The SLIC-G image processing method is divided into two stages: (1) simple linear iterative clustering superpixel segmentation and (2) a Gaussian smoothing operation. In this way, a large number of transformed datasets are generated and then used for model training. Finally, two performance evaluation metrics suitable for imbalanced diabetic retinopathy datasets were used to validate the effectiveness of the proposed SLIC-G. The results indicate that, in comparison with previously published results, the proposed SLIC-G performs better on image classification of class-imbalanced diabetic retinopathy datasets. This research reveals the importance of image processing and how it influences the performance of deep learning networks. The proposed SLIC-G enhances pre-trained network performance by eliminating the local redundancy of an image, which preserves local structures while avoiding over-segmented, noisy regions. It closes the research gap by introducing superpixel segmentation and Gaussian smoothing as image processing methods in diabetic retinopathy-related tasks.
Affiliation(s)
- Wei Xiang Lim: Faculty of Science and Engineering, School of Computer Science, University of Nottingham Malaysia, Semenyih, Malaysia
- Zhiyuan Chen: Faculty of Science and Engineering, School of Computer Science, University of Nottingham Malaysia, Semenyih, Malaysia
11
Rao DP, Savoy FM, Sivaraman A, Dutt S, Shahsuvaryan M, Jrbashyan N, Hambardzumyan N, Yeghiazaryan N, Das T. Evaluation of an AI algorithm trained on an ethnically diverse dataset to screen a previously unseen population for diabetic retinopathy. Indian J Ophthalmol 2024; 72:1162-1167. [PMID: 39078960] [PMCID: PMC11451790] [DOI: 10.4103/ijo.ijo_2151_23]
Abstract
PURPOSE This study aimed to determine the generalizability of an artificial intelligence (AI) algorithm trained on an ethnically diverse dataset to screen for referable diabetic retinopathy (RDR) in the Armenian population, which was unseen during AI development. METHODS This study comprised 550 patients with diabetes mellitus who visited polyclinics in Armenia over 10 months and required diabetic retinopathy (DR) screening. The Medios AI-DR algorithm was developed using a robust, diverse, ethnically balanced dataset with no inherent bias and deployed offline on a smartphone-based fundus camera. The algorithm analyzed the retinal images captured using the target device for the presence of RDR (moderate non-proliferative diabetic retinopathy [NPDR] and/or clinically significant diabetic macular edema [CSDME], or more severe disease) and sight-threatening DR (STDR: severe NPDR and/or CSDME, or more severe disease). The AI output was compared with a consensus or majority image grading by three expert graders according to the International Clinical Diabetic Retinopathy severity scale. RESULTS Among the 478 subjects included in the analysis, the algorithm achieved a high classification sensitivity of 95.30% (95% CI: 91.9%-98.7%) and a specificity of 83.89% (95% CI: 79.9%-87.9%) for the detection of RDR. The sensitivity for STDR detection was 100%. CONCLUSION The study showed that the Medios AI-DR algorithm yields good accuracy in screening for RDR in the Armenian population. In our literature search, this is the only smartphone-based, offline AI model validated in different populations.
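Sensitivity and specificity with confidence intervals of the kind quoted above are derived from the four confusion counts. A minimal sketch using the Wilson score interval (one common choice; the paper does not state its CI method, and the counts below are hypothetical):

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96):
    """Wilson score 95% confidence interval for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

def screening_metrics(tp: int, fp: int, tn: int, fn: int):
    """Sensitivity and specificity, each with a Wilson 95% CI."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return (sens, wilson_ci(tp, tp + fn)), (spec, wilson_ci(tn, tn + fp))

# Hypothetical counts: 100 diseased, 90 detected; 90 healthy, 80 correctly cleared
print(screening_metrics(tp=90, fp=10, tn=80, fn=10))
```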
Affiliation(s)
- Divya P Rao: AI&ML, Remidio Innovative Solutions, Inc, Glen Allen, USA
- Florian M Savoy: AI&ML, Medios Technologies Pte Ltd, Remidio Innovative Solutions, Singapore
- Anand Sivaraman: AI&ML, Remidio Innovative Solutions Pvt Ltd, Bengaluru, India
- Sreetama Dutt: AI&ML, Remidio Innovative Solutions Pvt Ltd, Bengaluru, India
- Marianne Shahsuvaryan: Ophthalmology, Yerevan State Medical University, Armenia; Armenian Eyecare Project, Yerevan State University, Armenia
- Taraprasad Das: Vitreoretinal Services, Kallam Anji Reddy Campus, LV Prasad Eye Institute, Hyderabad, India
12
Yun C, Tang F, Gao Z, Wang W, Bai F, Miller JD, Liu H, Lee Y, Lou Q. Construction of Risk Prediction Model of Type 2 Diabetic Kidney Disease Based on Deep Learning. Diabetes Metab J 2024; 48:771-779. [PMID: 38685670] [PMCID: PMC11307115] [DOI: 10.4093/dmj.2023.0033]
Abstract
BACKGROUND This study aimed to develop a diabetic kidney disease (DKD) prediction model using a long short-term memory (LSTM) neural network and to evaluate its performance using accuracy, precision, recall, and the area under the curve (AUC) of the receiver operating characteristic (ROC) curve. METHODS The study identified DKD risk factors through a literature review and a physician focus group, and collected 7 years of data from 6,040 type 2 diabetes mellitus patients based on those risk factors. PyTorch was used to build the LSTM neural network, with 70% of the data used for training and the remaining 30% for testing. Three models were established to examine the impact of glycosylated hemoglobin (HbA1c), systolic blood pressure (SBP), and pulse pressure (PP) variabilities on the model's performance. RESULTS The developed model achieved an accuracy of 83% and an AUC of 0.83. When HbA1c variability, SBP variability, or PP variability was removed in turn, the accuracy of each model was significantly lower than that of the optimal model, at 78% (P<0.001), 79% (P<0.001), and 81% (P<0.001), respectively. The ROC AUC was also significantly lower for each model, with values of 0.72 (P<0.001), 0.75 (P<0.001), and 0.77 (P<0.05). CONCLUSION The developed DKD risk prediction model using LSTM neural networks demonstrated a high accuracy and AUC value. When HbA1c, SBP, and PP variabilities were added to the model as featured characteristics, the model's performance was greatly improved.
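A numpy sketch of a single LSTM cell unrolled over a patient's visit sequence may clarify the recurrence the model relies on. This illustrates the standard LSTM equations only; dimensions, initialization, and the sequence are arbitrary, and it is not the authors' trained PyTorch model:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step. W: (4H, D), U: (4H, H), b: (4H,).
    Gate order in the stacked weights: input, forget, candidate, output."""
    H = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[0:H])           # input gate
    f = sigmoid(z[H:2 * H])       # forget gate
    g = np.tanh(z[2 * H:3 * H])   # cell candidate
    o = sigmoid(z[3 * H:4 * H])   # output gate
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

def run_sequence(xs, D, H, rng):
    """Unroll the cell over a sequence of per-visit feature vectors;
    the final hidden state would feed a classification head."""
    W = rng.standard_normal((4 * H, D)) * 0.1
    U = rng.standard_normal((4 * H, H)) * 0.1
    b = np.zeros(4 * H)
    h, c = np.zeros(H), np.zeros(H)
    for x in xs:
        h, c = lstm_step(x, h, c, W, U, b)
    return h
```

In the study's setting, each `x` would hold one year's risk-factor measurements (e.g., HbA1c, SBP, PP and their variabilities), so the forget/input gates decide how much of the earlier clinical history to carry forward.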
Affiliation(s)
- Chuan Yun: Department of Endocrinology, The First Affiliated Hospital of Hainan Medical University, Haikou, China
- Fangli Tang: International School of Nursing, Hainan Medical University, Haikou, China
- Zhenxiu Gao: School of International Education, Nanjing Medical University, Nanjing, China
- Wenjun Wang: Department of Endocrinology, The First Affiliated Hospital of Hainan Medical University, Haikou, China
- Fang Bai: Nursing Department 531, The First Affiliated Hospital of Hainan Medical University, Haikou, China
- Joshua D. Miller: Department of Medicine, Division of Endocrinology & Metabolism, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY, USA
- Huanhuan Liu: Department of Endocrinology, Hainan General Hospital, Haikou, China
- Qingqing Lou: The First Affiliated Hospital of Hainan Medical University, Hainan Clinical Research Center for Metabolic Disease, Haikou, China
13
Carrillo-Larco RM, Bravo-Rocca G, Castillo-Cara M, Xu X, Bernabe-Ortiz A. A multimodal approach using fundus images and text meta-data in a machine learning classifier with embeddings to predict years with self-reported diabetes - An exploratory analysis. Prim Care Diabetes 2024; 18:327-332. [PMID: 38616442] [DOI: 10.1016/j.pcd.2024.04.002]
Abstract
AIMS Machine learning models can use image and text data to predict the number of years since diabetes diagnosis; such a model can be applied to new patients to estimate how long they may have lived with diabetes unknowingly. We aimed to develop a model to predict self-reported diabetes duration. METHODS We used the Brazilian Multilabel Ophthalmological Dataset. The unit of analysis was the fundus image and its meta-data, regardless of the patient. We included people aged 40+ years and fundus images without diabetic retinopathy. Fundus images and meta-data (sex, age, comorbidities, and taking insulin) were passed to the MedCLIP model to extract the embedding representation. The embedding representation was passed to an Extra Trees classifier to predict 0-4, 5-9, 10-14, and 15+ years with self-reported diabetes. RESULTS There were 988 images from 563 people (mean age = 67 years; 64% were women). Overall, the F1 score was 57%. The group with 15+ years of self-reported diabetes had the highest precision (64%) and F1 score (63%), while the highest recall (69%) was observed in the group with 0-4 years. The proportion of correctly classified observations was 55% for the group with 0-4 years, 51% for 5-9 years, 58% for 10-14 years, and 64% for 15+ years with self-reported diabetes. CONCLUSIONS The machine learning model had acceptable accuracy and F1 score, and correctly classified more than half of the patients according to diabetes duration. Using large foundational models to extract image and text embeddings seems a feasible and efficient approach to predicting years living with self-reported diabetes.
Affiliation(s)
- Rodrigo M Carrillo-Larco: Hubert Department of Global Health, Rollins School of Public Health, Emory University, Atlanta, GA, USA; Emory Global Diabetes Research Center, Emory University, Atlanta, GA, USA
- Xiaolin Xu: School of Public Health, The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China; The Key Laboratory of Intelligent Preventive Medicine of Zhejiang Province, Hangzhou, China; School of Public Health, Faculty of Medicine, The University of Queensland, Brisbane, Australia
14
Whitestone N, Nkurikiye J, Patnaik JL, Jaccard N, Lanouette G, Cherwek DH, Congdon N, Mathenge W. Feasibility and acceptance of artificial intelligence-based diabetic retinopathy screening in Rwanda. Br J Ophthalmol 2024; 108:840-845. [PMID: 37541766] [DOI: 10.1136/bjo-2022-322683]
Abstract
BACKGROUND Evidence on the practical application of artificial intelligence (AI)-based diabetic retinopathy (DR) screening is needed. METHODS Consented participants were screened for DR using retinal imaging with AI interpretation from March 2021 to June 2021 at four diabetes clinics in Rwanda. Additionally, images were graded by a UK National Health System-certified retinal image grader. DR grades were based on the International Classification of Diabetic Retinopathy, with a grade of 2.0 or higher considered referable. The AI system was designed to detect optic nerve and macular anomalies outside of DR. A vertical cup-to-disc ratio of 0.7 or higher and/or macular anomalies recognised at a cut-off of 60% or higher were also considered referable by AI. RESULTS Among 827 participants (59.6% women (n=493)) screened by AI, 33.2% (n=275) were referred for follow-up. Satisfaction with AI screening was high (99.5%, n=823), and 63.7% of participants (n=527) preferred AI over human grading. Compared with human grading, the sensitivity of the AI for referable DR was 92% (95% CI 86.3% to 96.8%), with a specificity of 85% (95% CI 75.1% to 88.2%). Of the participants referred by AI, 88 (32.0%) were referred for DR only, 109 (39.6%) for DR and an anomaly, 65 (23.6%) for an anomaly only, and 13 (4.73%) for other reasons. Adherence to referrals was highest, at 53.4%, for those referred for DR. CONCLUSION DR screening using AI led to accurate referrals from diabetes clinics in Rwanda and high rates of participant satisfaction, suggesting AI screening for DR is practical and acceptable.
Affiliation(s)
- John Nkurikiye: RIIO iHospital, Rwanda International Institute of Ophthalmology, Kigali, Rwanda; Department of Ophthalmology, Rwanda Military Hospital, Kigali, Rwanda
- Jennifer L Patnaik: Clinical Services, Orbis International, New York, New York, USA; Department of Ophthalmology, University of Colorado Denver School of Medicine, Aurora, Colorado, USA
- Nicolas Jaccard: Clinical Services, Orbis International, New York, New York, USA
- David H Cherwek: Clinical Services, Orbis International, New York, New York, USA
- Nathan Congdon: Clinical Services, Orbis International, New York, New York, USA; Centre for Public Health, Queen's University Belfast, Belfast, UK
- Wanjiku Mathenge: Clinical Services, Orbis International, New York, New York, USA; RIIO iHospital, Rwanda International Institute of Ophthalmology, Kigali, Rwanda
15
Yang Y, Cai Z, Qiu S, Xu P. Vision transformer with masked autoencoders for referable diabetic retinopathy classification based on large-size retina image. PLoS One 2024; 19:e0299265. [PMID: 38446810] [PMCID: PMC10917269] [DOI: 10.1371/journal.pone.0299265]
Abstract
Computer-aided diagnosis systems based on deep learning algorithms have shown potential for rapid diagnosis of diabetic retinopathy (DR). Because Transformers outperform convolutional neural networks (CNNs) on natural images, we attempted to develop a new Transformer-based model to classify referable DR from a limited number of large-size retinal images. A Vision Transformer (ViT) with Masked Autoencoders (MAE) was applied in this study to improve the classification performance of referable DR. We collected over 100,000 publicly available fundus retinal images larger than 224×224 and pre-trained a ViT on these images using MAE. The pre-trained ViT was then applied to classify referable DR, and its performance was compared with that of a ViT pre-trained on ImageNet. Pre-training with over 100,000 retinal images using MAE improved classification performance more than pre-training on ImageNet. The accuracy, area under curve (AUC), highest sensitivity, and highest specificity of the present model are 93.42%, 0.9853, 0.973, and 0.9539, respectively. This study shows that MAE provides more flexibility with respect to the input image and substantially reduces the number of images required. Moreover, the pre-training dataset in this study is much smaller than ImageNet, and pre-trained ImageNet weights are not required.
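The core of MAE pre-training is random patch masking: the encoder sees only a small subset of image patches and a lightweight decoder reconstructs the rest. The sketch below shows the patchify-and-mask step only, in numpy; patch size, the 75% mask ratio, and array shapes are illustrative defaults (75% is the ratio proposed in the original MAE paper), not necessarily this paper's exact configuration:

```python
import numpy as np

def patchify(img: np.ndarray, p: int) -> np.ndarray:
    """Split an (H, W) image into non-overlapping p x p patches,
    returned as rows of length p*p."""
    H, W = img.shape
    patches = img.reshape(H // p, p, W // p, p).swapaxes(1, 2)
    return patches.reshape(-1, p * p)

def random_masking(patches: np.ndarray, mask_ratio: float = 0.75, rng=None):
    """MAE-style masking: keep a random subset of patches. The encoder
    processes only the kept patches; the decoder reconstructs the rest."""
    rng = rng or np.random.default_rng()
    n = patches.shape[0]
    n_keep = int(n * (1 - mask_ratio))
    keep_idx = np.sort(rng.permutation(n)[:n_keep])
    mask = np.ones(n, dtype=bool)   # True = masked (to be reconstructed)
    mask[keep_idx] = False          # False = visible to the encoder
    return patches[keep_idx], keep_idx, mask
```

Because the encoder only ever processes ~25% of the patches, pre-training cost per image drops sharply, which is one reason a 100,000-image retinal corpus can substitute for ImageNet-scale pre-training.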
Affiliation(s)
- Yaoming Yang: College of Science, China Jiliang University, Hangzhou, Zhejiang, China
- Zhili Cai: College of Science, China Jiliang University, Hangzhou, Zhejiang, China
- Shuxia Qiu: College of Science, China Jiliang University, Hangzhou, Zhejiang, China; Key Laboratory of Intelligent Manufacturing Quality Big Data Tracing and Analysis of Zhejiang Province, Hangzhou, Zhejiang, China
- Peng Xu: College of Science, China Jiliang University, Hangzhou, Zhejiang, China; Key Laboratory of Intelligent Manufacturing Quality Big Data Tracing and Analysis of Zhejiang Province, Hangzhou, Zhejiang, China
16
Abou Taha A, Dinesen S, Vergmann AS, Grauslund J. Present and future screening programs for diabetic retinopathy: a narrative review. Int J Retina Vitreous 2024; 10:14. [PMID: 38310265] [PMCID: PMC10838429] [DOI: 10.1186/s40942-024-00534-8]
Abstract
Diabetes is a prevalent global concern, with an estimated 12% of the global adult population expected to be affected by 2045. Diabetic retinopathy (DR), a sight-threatening complication, has spurred diverse screening approaches worldwide, driven by advances in DR knowledge, rapid technological developments in retinal imaging, and variations in healthcare resources. Many high-income countries have fully implemented, or are on the verge of completing, a national Diabetic Eye Screening Programme (DESP). Although there have been some improvements in DR screening in African, Asian, and American countries, further progress is needed. Among low-income countries, only one out of 29 has partially implemented a DESP, while 21 out of 50 lower-middle-income countries have started the DR policy cycle. Among upper-middle-income countries, a third of 59 nations have advanced in DR agenda-setting, with five having a comprehensive national DESP and 11 in the early stages of implementation. Many nations use two- to four-field fundus images, proven effective with 80-98% sensitivity and 86-100% specificity compared with the traditional seven-field evaluation for DR. Cell-phone-based screening with a handheld retinal camera presents a potential low-cost alternative imaging device. While this method may not entirely match the sensitivity and specificity of seven-field stereoscopic photography in low-resource settings, positive outcomes have been observed. Individualized DR screening intervals are the standard in many high-resource nations. In countries that lack a national DESP and resources, screening is more sporadic: intervals are not evidence-based and screening is often less frequent, which can lead to late recognition of DR requiring treatment. The rising global prevalence of DR poses an economic challenge to nationwide screening programs. AI algorithms have shown high sensitivity and specificity for the detection of DR and could offer a promising solution to the future screening burden. In summary, this narrative review sheds light on the epidemiology of DR and the necessity of effective DR screening programs. Worldwide evolution of existing approaches to DR screening has shown promising results but has also revealed limitations. Technological advancements such as handheld imaging devices, teleophthalmology, and artificial intelligence enhance cost-effectiveness as well as the accessibility of DR screening in countries with low resources or where distance to, or a shortage of, ophthalmologists exists.
Affiliation(s)
- Andreas Abou Taha: Department of Ophthalmology, Odense University Hospital, Sdr. Boulevard 29, 5000, Odense, Denmark
- Sebastian Dinesen: Department of Ophthalmology, Odense University Hospital, Sdr. Boulevard 29, 5000, Odense, Denmark; Department of Clinical Research, University of Southern Denmark, Odense, Denmark; Steno Diabetes Center Odense, Odense University Hospital, Odense, Denmark
- Anna Stage Vergmann: Department of Ophthalmology, Odense University Hospital, Sdr. Boulevard 29, 5000, Odense, Denmark; Department of Clinical Research, University of Southern Denmark, Odense, Denmark
- Jakob Grauslund: Department of Ophthalmology, Odense University Hospital, Sdr. Boulevard 29, 5000, Odense, Denmark; Department of Clinical Research, University of Southern Denmark, Odense, Denmark; Steno Diabetes Center Odense, Odense University Hospital, Odense, Denmark
17
Blair JPM, Rodriguez JN, Lasagni Vitar RM, Stadelmann MA, Abreu-González R, Donate J, Ciller C, Apostolopoulos S, Bermudez C, De Zanet S. Development of LuxIA, a Cloud-Based AI Diabetic Retinopathy Screening Tool Using a Single Color Fundus Image. Transl Vis Sci Technol 2023; 12:38. [PMID: 38032322] [PMCID: PMC10691390] [DOI: 10.1167/tvst.12.11.38]
Abstract
Purpose Diabetic retinopathy (DR) is the leading cause of vision impairment in working-age adults. Automated screening can increase DR detection at early stages at relatively low cost. We developed and evaluated a cloud-based screening tool that uses artificial intelligence (AI), the LuxIA algorithm, to detect DR from a single fundus image. Methods Color fundus images previously graded by expert readers were collected from the Canarian Health Service (Retisalud) and used to train LuxIA, a deep-learning-based algorithm for the detection of more than mild DR. The algorithm was deployed on the Discovery cloud platform to evaluate each test set. Sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve were computed using a bootstrapping method to evaluate algorithm performance, and results were compared across different publicly available datasets. A usability test was performed to assess integration into a clinical tool. Results Three separate datasets, Messidor-2, APTOS, and a holdout set from Retisalud, were evaluated. Mean sensitivity and specificity with 95% confidence intervals (CIs) for these three datasets were 0.901 (0.901-0.902) and 0.955 (0.955-0.956), 0.995 (0.995-0.995) and 0.821 (0.821-0.823), and 0.911 (0.907-0.912) and 0.880 (0.879-0.880), respectively. The usability test confirmed the successful integration of LuxIA into Discovery. Conclusions Clinical data were used to train the deep-learning-based algorithm LuxIA to expert-level performance. The whole process (image uploading and analysis) was integrated into the cloud-based platform Discovery, allowing more patients access to expert-level screening tools. Translational Relevance Using the cloud-based LuxIA tool as part of a screening program may give diabetic patients greater access to specialist-level decisions, without the need for consultation.
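The bootstrapping approach used for the CIs above can be sketched as a percentile bootstrap: resample cases with replacement and recompute the metric on each resample. The labels and metric below are illustrative, since the paper does not detail its exact resampling scheme:

```python
import numpy as np

def sensitivity(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Fraction of true positives detected."""
    pos = y_true == 1
    return (y_pred[pos] == 1).mean()

def bootstrap_ci(y_true, y_pred, metric, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for any metric of (y_true, y_pred)."""
    rng = np.random.default_rng(seed)
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    n = len(y_true)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)          # resample cases with replacement
        stats.append(metric(y_true[idx], y_pred[idx]))
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi
```

Very tight intervals such as 0.901 (0.901-0.902) arise when bootstrap resamples barely change the metric, e.g. with large test sets or little per-case variability.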
Affiliation(s)
- Jose Natan Rodriguez: Department of Information Technology, Nuestra Señora de la Candelaria University Hospital, Santa Cruz de Tenerife, Canarias, Spain
- Rodrigo Abreu-González: Department of Ophthalmology, Nuestra Señora de la Candelaria University Hospital, Santa Cruz de Tenerife, Canarias, Spain
- Juan Donate: Department of Ophthalmology, Nuestra Señora de la Candelaria University Hospital, Santa Cruz de Tenerife, Canarias, Spain
- Carlos Bermudez: Department of Information Technology, Nuestra Señora de la Candelaria University Hospital, Santa Cruz de Tenerife, Canarias, Spain
18
Curran K, Whitestone N, Zabeen B, Ahmed M, Husain L, Alauddin M, Hossain MA, Patnaik JL, Lanoutee G, Cherwek DH, Congdon N, Peto T, Jaccard N. CHILDSTAR: CHIldren Living With Diabetes See and Thrive with AI Review. Clin Med Insights Endocrinol Diabetes 2023; 16:11795514231203867. [PMID: 37822362] [PMCID: PMC10563496] [DOI: 10.1177/11795514231203867]
Abstract
Background Artificial intelligence (AI) appears capable of detecting diabetic retinopathy (DR) with a high degree of accuracy in adults; however, there are few studies in children and young adults. Methods Children and young adults (3-26 years) with type 1 diabetes mellitus (T1DM) or type 2 diabetes mellitus (T2DM) were screened at the BIRDEM-2 hospital in Dhaka, Bangladesh. All gradable fundus images were uploaded to Cybersight AI for interpretation. Two main outcomes were considered at a patient level: (1) any DR, defined as mild non-proliferative diabetic retinopathy (NPDR) or more severe; and (2) referable DR, defined as moderate NPDR or more severe. Diagnostic test performance comparing Orbis International's Cybersight AI with the reference standard, a fully qualified optometrist certified in DR grading, was assessed using the Matthews correlation coefficient (MCC), area under the receiver operating characteristic curve (AUC-ROC), area under the precision-recall curve (AUC-PR), sensitivity, specificity, and positive and negative predictive values. Results Among 1274 participants (53.1% female, mean age 16.7 years), 19.4% (n = 247) had any DR according to AI. For referable DR, 2.35% (n = 30) were detected by AI. The sensitivity and specificity of AI for any DR were 75.5% (CI 69.7-81.3%) and 91.8% (CI 90.2-93.5%), respectively, and for referable DR these values were 84.2% (CI 67.8-100%) and 98.9% (CI 98.3-99.5%). The MCC, AUC-ROC, and AUC-PR for referable DR were 63.4%, 91.2%, and 76.2%, respectively. AI was most successful in accurately classifying younger children with a shorter duration of diabetes. Conclusions Cybersight AI accurately detected any DR and referable DR among children and young adults, despite its algorithms having been trained on adults. The observed high specificity is particularly important to avoid over-referral in low-resource settings. AI may be an effective tool to reduce demands on scarce physician resources for the care of children with diabetes in low-resource settings.
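The Matthews correlation coefficient reported above is computed directly from the four binary confusion counts; it stays informative on imbalanced data (here only 2.35% referable DR), where raw accuracy can be misleading. A small sketch with hypothetical counts:

```python
import math

def mcc(tp: int, fp: int, tn: int, fn: int) -> float:
    """Matthews correlation coefficient from binary confusion counts.
    Ranges from -1 (total disagreement) through 0 (chance) to +1 (perfect)."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

# Hypothetical: 25 of 30 referable cases caught, 12 false alarms in 1244 negatives
print(mcc(tp=25, fp=12, tn=1232, fn=5))
```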
Affiliation(s)
- Katie Curran: Centre for Public Health, Queen's University Belfast, Belfast, UK
- Bedowra Zabeen: Department of Paediatrics, Life for a Child & Changing Diabetes in Children Programme, Bangladesh Institute of Research & Rehabilitation in Diabetes, Endocrine & Metabolic Disorders (BIRDEM), Diabetic Association of Bangladesh, Dhaka, Bangladesh
- Jennifer L Patnaik: Orbis International, New York, NY, USA; Department of Ophthalmology, University of Colorado School of Medicine, Aurora, CO, USA
- Nathan Congdon: Centre for Public Health, Queen's University Belfast, Belfast, UK; Orbis International, New York, NY, USA; Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Tunde Peto: Centre for Public Health, Queen's University Belfast, Belfast, UK
19
Chen Y, Clayton EW, Novak LL, Anders S, Malin B. Human-Centered Design to Address Biases in Artificial Intelligence. J Med Internet Res 2023; 25:e43251. [PMID: 36961506] [PMCID: PMC10132017] [DOI: 10.2196/43251]
Abstract
The potential of artificial intelligence (AI) to reduce health care disparities and inequities is recognized, but it can also exacerbate these issues if not implemented in an equitable manner. This perspective identifies potential biases in each stage of the AI life cycle, including data collection, annotation, machine learning model development, evaluation, deployment, operationalization, monitoring, and feedback integration. To mitigate these biases, we suggest involving a diverse group of stakeholders, using human-centered AI principles. Human-centered AI can help ensure that AI systems are designed and used in a way that benefits patients and society, which can reduce health disparities and inequities. By recognizing and addressing biases at each stage of the AI life cycle, AI can achieve its potential in health care.
Affiliation(s)
- You Chen: Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN, United States; Department of Computer Science, Vanderbilt University, Nashville, TN, United States
- Ellen Wright Clayton: Law School, Vanderbilt University, Nashville, TN, United States; Center for Biomedical Ethics and Society, Vanderbilt University Medical Center, Nashville, TN, United States; Department of Pediatrics, Vanderbilt University Medical Center, Nashville, TN, United States
- Laurie Lovett Novak: Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN, United States
- Shilo Anders: Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN, United States; Department of Computer Science, Vanderbilt University, Nashville, TN, United States; Department of Anesthesiology, Vanderbilt University Medical Center, Nashville, TN, United States
- Bradley Malin: Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN, United States; Department of Computer Science, Vanderbilt University, Nashville, TN, United States; Center for Biomedical Ethics and Society, Vanderbilt University Medical Center, Nashville, TN, United States; Department of Biostatistics, Vanderbilt University Medical Center, Nashville, TN, United States
20
Development of a Computer System for Automatically Generating a Laser Photocoagulation Plan to Improve the Retinal Coagulation Quality in the Treatment of Diabetic Retinopathy. Symmetry (Basel) 2023. [DOI: 10.3390/sym15020287]
Abstract
In this article, the development of a computer system for high-tech medical use in ophthalmology is proposed. An overview of the main methods and algorithms that form the basis of the coagulation planning system is presented. The system produces a more effective laser coagulation plan than existing coagulation techniques. An analysis of monopulse- and pattern-based laser coagulation techniques in the treatment of diabetic retinopathy has shown that modern treatment methods do not provide the required efficacy of medical laser coagulation procedures, as the laser energy is nonuniformly distributed across the pigment epithelium and may exert an excessive effect on parts of the retina and anatomical elements. The analysis has also shown that the efficacy of retinal laser coagulation for the treatment of diabetic retinopathy is determined by the relative position of coagulates and the parameters of laser exposure. In the course of developing the proposed computer system, the main stages of processing diagnostic data were identified: delineating the laser exposure zone, evaluating laser pulse parameters that are safe for the fundus, mapping a coagulation plan within the exposure zone, and analysing the generated plan to predict the therapeutic effect. The study found that the developed algorithms for placing coagulates in the area of laser exposure provide a more uniform distribution of laser energy across the pigment epithelium than monopulse- and pattern-based laser coagulation techniques.
21
Khanna NN, Maindarkar MA, Viswanathan V, Fernandes JFE, Paul S, Bhagawati M, Ahluwalia P, Ruzsa Z, Sharma A, Kolluri R, Singh IM, Laird JR, Fatemi M, Alizad A, Saba L, Agarwal V, Sharma A, Teji JS, Al-Maini M, Rathore V, Naidu S, Liblik K, Johri AM, Turk M, Mohanty L, Sobel DW, Miner M, Viskovic K, Tsoulfas G, Protogerou AD, Kitas GD, Fouda MM, Chaturvedi S, Kalra MK, Suri JS. Economics of Artificial Intelligence in Healthcare: Diagnosis vs. Treatment. Healthcare (Basel) 2022; 10:2493. [PMID: 36554017] [PMCID: PMC9777836] [DOI: 10.3390/healthcare10122493]
Abstract
Motivation: The price of medical treatment continues to rise due to (i) a growing population; (ii) an aging population; (iii) rising disease prevalence; (iv) an increase in the frequency with which patients use health care services; and (v) rising prices. Objective: Artificial Intelligence (AI) is already well known for its superiority in various healthcare applications, including the segmentation of lesions in images, speech recognition, smartphone personal assistants, navigation, ride-sharing apps, and many more. Our study is based on two hypotheses: (i) AI offers more economical solutions than conventional methods; (ii) AI treatment offers stronger economics than AI diagnosis. This study aims to evaluate AI technology in the context of healthcare costs, namely in the areas of diagnosis and treatment, and then compare it to traditional, non-AI-based approaches. Methodology: PRISMA was used to select the 200 most relevant studies on AI in healthcare with a primary focus on cost reduction, especially in diagnosis and treatment. We defined the diagnosis and treatment architectures, investigated their characteristics, and categorized the roles that AI plays in the diagnostic and therapeutic paradigms. We experimented with various combinations of assumptions by integrating AI and comparing the result against conventional costs. Lastly, we dwell on four powerful future concepts for AI: pruning, bias reduction, explainability, and regulatory approval of AI systems. Conclusions: The model shows tremendous cost savings using AI tools in diagnosis and treatment. The economics of AI can be improved by incorporating pruning, reduction in AI bias, explainability, and regulatory approvals.
Affiliation(s)
- Narendra N. Khanna: Department of Cardiology, Indraprastha APOLLO Hospitals, New Delhi 110001, India
- Mahesh A. Maindarkar: Stroke Monitoring and Diagnostic Division, AtheroPoint™, Roseville, CA 95661, USA; Department of Biomedical Engineering, North Eastern Hill University, Shillong 793022, India
- Sudip Paul: Department of Biomedical Engineering, North Eastern Hill University, Shillong 793022, India
- Mrinalini Bhagawati: Department of Biomedical Engineering, North Eastern Hill University, Shillong 793022, India
- Puneet Ahluwalia: Max Institute of Cancer Care, Max Super Specialty Hospital, New Delhi 110017, India
- Zoltan Ruzsa: Invasive Cardiology Division, Faculty of Medicine, University of Szeged, 6720 Szeged, Hungary
- Aditya Sharma: Division of Cardiovascular Medicine, University of Virginia, Charlottesville, VA 22904, USA
- Raghu Kolluri: Ohio Health Heart and Vascular, Columbus, OH 43214, USA
- Inder M. Singh: Stroke Monitoring and Diagnostic Division, AtheroPoint™, Roseville, CA 95661, USA
- John R. Laird: Heart and Vascular Institute, Adventist Health St. Helena, St. Helena, CA 94574, USA
- Mostafa Fatemi: Department of Physiology & Biomedical Engineering, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
- Azra Alizad: Department of Radiology, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
- Luca Saba: Department of Radiology, Azienda Ospedaliero Universitaria, 40138 Cagliari, Italy
- Vikas Agarwal: Department of Immunology, SGPGIMS, Lucknow 226014, India
- Aman Sharma: Department of Immunology, SGPGIMS, Lucknow 226014, India
- Jagjit S. Teji: Ann and Robert H. Lurie Children’s Hospital of Chicago, Chicago, IL 60611, USA
- Mustafa Al-Maini: Allergy, Clinical Immunology and Rheumatology Institute, Toronto, ON L4Z 4C4, Canada
- Subbaram Naidu: Electrical Engineering Department, University of Minnesota, Duluth, MN 55812, USA
- Kiera Liblik: Department of Medicine, Division of Cardiology, Queen’s University, Kingston, ON K7L 3N6, Canada
- Amer M. Johri: Department of Medicine, Division of Cardiology, Queen’s University, Kingston, ON K7L 3N6, Canada
- Monika Turk: The Hanse-Wissenschaftskolleg Institute for Advanced Study, 27753 Delmenhorst, Germany
- Lopamudra Mohanty: Department of Computer Science, ABES Engineering College, Ghaziabad 201009, India
- David W. Sobel: Rheumatology Unit, National Kapodistrian University of Athens, 15772 Athens, Greece
- Martin Miner: Men’s Health Centre, Miriam Hospital Providence, Providence, RI 02906, USA
- Klaudija Viskovic: Department of Radiology and Ultrasound, University Hospital for Infectious Diseases, 10000 Zagreb, Croatia
- George Tsoulfas: Department of Surgery, Aristoteleion University of Thessaloniki, 54124 Thessaloniki, Greece
- Athanasios D. Protogerou: Cardiovascular Prevention and Research Unit, Department of Pathophysiology, National & Kapodistrian University of Athens, 15772 Athens, Greece
- George D. Kitas: Academic Affairs, Dudley Group NHS Foundation Trust, Dudley DY1 2HQ, UK; Arthritis Research UK Epidemiology Unit, Manchester University, Manchester M13 9PL, UK
- Mostafa M. Fouda: Department of Electrical and Computer Engineering, Idaho State University, Pocatello, ID 83209, USA
- Seemant Chaturvedi: Department of Neurology & Stroke Program, University of Maryland School of Medicine, Baltimore, MD 21201, USA
- Jasjit S. Suri: Stroke Monitoring and Diagnostic Division, AtheroPoint™, Roseville, CA 95661, USA
22
Vujosevic S, Limoli C, Luzi L, Nucci P. Digital innovations for retinal care in diabetic retinopathy. Acta Diabetol 2022; 59:1521-1530. [PMID: 35962258] [PMCID: PMC9374293] [DOI: 10.1007/s00592-022-01941-9]
Abstract
AIM The purpose of this review is to examine the applications of novel digital technology domains for the screening and management of patients with diabetic retinopathy (DR). METHODS A PubMed engine search was performed, using the terms "Telemedicine", "Digital health", "Telehealth", "Telescreening", "Artificial intelligence", "Deep learning", "Smartphone", "Triage", "Screening", "Home-based", "Monitoring", "Ophthalmology", "Diabetes", "Diabetic Retinopathy", "Retinal imaging". Full-text English language studies from January 1, 2010, to February 1, 2022, and reference lists were considered for the conceptual framework of this review. RESULTS Diabetes mellitus and its eye complications, including DR, are particularly well suited to digital technologies, providing an ideal model for telehealth initiatives and real-world applications. The current development in the adoption of telemedicine, artificial intelligence and remote monitoring as an alternative to or in addition to traditional forms of care will be discussed. CONCLUSIONS Advances in digital health have created an ecosystem ripe for telemedicine in the field of DR to thrive. Stakeholders and policymakers should adopt a participatory approach to ensure sustained implementation of these technologies after the COVID-19 pandemic. This article belongs to the Topical Collection "Diabetic Eye Disease", managed by Giuseppe Querques.
Affiliation(s)
- Stela Vujosevic: Department of Biomedical, Surgical and Dental Sciences, University of Milan, Milan, Italy; Eye Clinic, IRCCS MultiMedica, Via San Vittore 12, 20123, Milan, Italy
- Celeste Limoli: Eye Clinic, IRCCS MultiMedica, Via San Vittore 12, 20123, Milan, Italy; University of Milan, Milan, Italy
- Livio Luzi: Department of Biomedical Sciences for Health, University of Milan, Milan, Italy; Department of Endocrinology, Nutrition and Metabolic Diseases, IRCCS MultiMedica, Milan, Italy
- Paolo Nucci: Department of Biomedical, Surgical and Dental Sciences, University of Milan, Milan, Italy
23
Mathenge W, Whitestone N, Nkurikiye J, Patnaik JL, Piyasena P, Uwaliraye P, Lanouette G, Kahook MY, Cherwek DH, Congdon N, Jaccard N. Impact of Artificial Intelligence Assessment of Diabetic Retinopathy on Referral Service Uptake in a Low-Resource Setting: The RAIDERS Randomized Trial. Ophthalmol Sci 2022; 2:100168. [PMID: 36531575] [PMCID: PMC9754978] [DOI: 10.1016/j.xops.2022.100168]
Abstract
PURPOSE This trial was designed to determine if artificial intelligence (AI)-supported diabetic retinopathy (DR) screening improved referral uptake in Rwanda. DESIGN The Rwanda Artificial Intelligence for Diabetic Retinopathy Screening (RAIDERS) study was an investigator-masked, parallel-group randomized controlled trial. PARTICIPANTS Patients ≥ 18 years of age with known diabetes who required referral for DR based on AI interpretation. METHODS The RAIDERS study screened for DR using retinal imaging with AI interpretation implemented at 4 facilities from March 2021 through July 2021. Eligible participants were assigned randomly (1:1) to immediate feedback of AI grading (intervention) or communication of referral advice after human grading was completed 3 to 5 days after the initial screening (control). MAIN OUTCOME MEASURES Difference between study groups in the rate of presentation for referral services within 30 days of being informed of the need for a referral visit. RESULTS Of the 823 clinic patients who met inclusion criteria, 275 participants (33.4%) showed positive findings for referable DR based on AI screening and were randomized for inclusion in the trial. Study participants (mean age, 50.7 years; 58.2% women) were randomized to the intervention (n = 136 [49.5%]) or control (n = 139 [50.5%]) groups. No significant intergroup differences were found at baseline, and main outcome data were available for analyses for 100% of participants. Referral adherence was statistically significantly higher in the intervention group (70/136 [51.5%]) versus the control group (55/139 [39.6%]; P = 0.048), a 30.1% increase. 
Older age (odds ratio [OR], 1.04; 95% confidence interval [CI], 1.02-1.05; P < 0.0001), male sex (OR, 2.07; 95% CI, 1.22-3.51; P = 0.007), rural residence (OR, 1.79; 95% CI, 1.07-3.01; P = 0.027), and intervention group (OR, 1.74; 95% CI, 1.05-2.88; P = 0.031) were statistically significantly associated with acceptance of referral in multivariate analyses. CONCLUSIONS Immediate feedback on referral status based on AI-supported screening was associated with statistically significantly higher referral adherence compared with delayed communications of results from human graders. These results provide evidence for an important benefit of AI screening in promoting adherence to prescribed treatment for diabetic eye care in sub-Saharan Africa.
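The trial's headline effect (51.5% vs 39.6% adherence, P = 0.048) can be reproduced with a standard two-proportion z-test on the reported counts. The sketch below is purely illustrative arithmetic on the published numbers, not the trial's actual analysis code (which also fitted multivariate models):

```python
from math import erf, sqrt

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-sided two-proportion z-test using the pooled standard error."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Referral adherence: 70/136 (intervention) vs 55/139 (control)
z, p = two_proportion_z(70, 136, 55, 139)
print(round(z, 2), round(p, 3))  # z ≈ 1.98, p ≈ 0.048
```

The recovered p-value matches the reported P = 0.048, and 51.5/39.6 ≈ 1.30 confirms the quoted 30.1% relative increase.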
Affiliation(s)
- Wanjiku Mathenge: Rwanda International Institute of Ophthalmology, Kigali, Rwanda; Orbis International, New York, New York
- John Nkurikiye: Rwanda International Institute of Ophthalmology, Kigali, Rwanda; Rwanda Military Hospital, Kigali, Rwanda
- Jennifer L. Patnaik: Orbis International, New York, New York; Department of Ophthalmology, University of Colorado School of Medicine, Aurora, Colorado
- Prabhath Piyasena: Centre for Public Health, Queen’s University Belfast, Belfast, United Kingdom
- Malik Y. Kahook: Department of Ophthalmology, University of Colorado School of Medicine, Aurora, Colorado
- Nathan Congdon: Orbis International, New York, New York; Centre for Public Health, Queen’s University Belfast, Belfast, United Kingdom; Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
24
Chalkidou A, Shokraneh F, Kijauskaite G, Taylor-Phillips S, Halligan S, Wilkinson L, Glocker B, Garrett P, Denniston AK, Mackie A, Seedat F. Recommendations for the development and use of imaging test sets to investigate the test performance of artificial intelligence in health screening. Lancet Digit Health 2022; 4:e899-e905. [PMID: 36427951] [DOI: 10.1016/s2589-7500(22)00186-8]
Abstract
Rigorous evaluation of artificial intelligence (AI) systems for image classification is essential before deployment into health-care settings, such as screening programmes, so that adoption is effective and safe. A key step in the evaluation process is the external validation of diagnostic performance using a test set of images. We conducted a rapid literature review on methods to develop test sets, published from 2012 to 2020, in English. Using thematic analysis, we mapped themes and coded the principles using the Population, Intervention, Comparator or Reference standard, Outcome, and Study design framework. A group of screening and AI experts assessed the evidence-based principles for completeness and provided further considerations. Of the final 15 principles recommended here, five affect the population, one the intervention, two the comparator, one the reference standard, and one both the reference standard and comparator; four are applicable to the outcome and one to study design. Principles from the literature were useful for addressing biases from AI; however, they did not account for screening-specific biases, which we now incorporate. The principles set out here should be used to support the development and use of test sets for studies that assess the accuracy of AI within screening programmes, to ensure they are fit for purpose and minimise bias.
Affiliation(s)
- Farhad Shokraneh: King's Technology Evaluation Centre, King's College London, London, UK
- Goda Kijauskaite: UK National Screening Committee, Office for Health Improvement and Disparities, Department of Health and Social Care, London, UK
- Steve Halligan: Centre for Medical Imaging, Division of Medicine, University College London, London, UK
- Ben Glocker: Department of Computing, Imperial College London, London, UK
- Peter Garrett: Department of Chemical Engineering and Analytical Science, University of Manchester, Manchester, UK
- Alastair K Denniston: Department of Ophthalmology, University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
- Anne Mackie: UK National Screening Committee, Office for Health Improvement and Disparities, Department of Health and Social Care, London, UK
- Farah Seedat: UK National Screening Committee, Office for Health Improvement and Disparities, Department of Health and Social Care, London, UK
25
García-Sierra R, López-Lifante VM, Isusquiza Garcia E, Heras A, Besada I, Verde Lopez D, Alzamora MT, Forés R, Montero-Alia P, Ugarte Anduaga J, Torán-Monserrat P. Automated Systems for Calculating Arteriovenous Ratio in Retinographies: A Scoping Review. Diagnostics (Basel) 2022; 12:2865. [PMID: 36428925] [PMCID: PMC9689345] [DOI: 10.3390/diagnostics12112865]
Abstract
There is evidence of an association between hypertension and retinal arteriolar narrowing. Manual measurement of retinal vessels comes with additional variability, which can be eliminated using automated software. This scoping review aims to summarize research on automated retinal vessel analysis systems. Searches were performed on Medline, Scopus, and Cochrane to find studies examining automated systems for the diagnosis of retinal vascular alterations caused by hypertension using the following keywords: diagnosis; diagnostic screening programs; image processing, computer-assisted; artificial intelligence; electronic data processing; hypertensive retinopathy; hypertension; retinal vessels; arteriovenous ratio and retinal image analysis. The searches generated 433 articles. Of these, 25 articles published from 2010 to 2022 were included in the review. The retinographies analyzed were extracted from international databases and real scenarios. Automated systems to detect alterations in the retinal vasculature are being introduced into clinical practice for diagnosis in ophthalmology and other medical specialties due to the association of such changes with various diseases. These systems make the classification of hypertensive retinopathy and cardiovascular risk more reliable. They also make it possible for diagnosis to be performed in primary care, thus optimizing ophthalmological visits.
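The arteriovenous ratio (AVR) these automated systems compute is conventionally the ratio of two summary calibres, the central retinal artery and vein equivalents, combined from the widths of the largest arterioles and venules. A minimal sketch of that computation is shown below; the 0.88/0.95 constants are the widely used Knudtson revised-formula convention, not values taken from this review, the largest-with-smallest pairing is a simplification, and the vessel widths are hypothetical:

```python
from math import sqrt

def summary_caliber(widths, constant):
    """Iteratively pair widest with narrowest vessel: w = c * sqrt(w1^2 + w2^2)."""
    widths = sorted(widths, reverse=True)
    while len(widths) > 1:
        w1 = widths.pop(0)    # widest remaining vessel
        w2 = widths.pop(-1)   # narrowest remaining vessel
        widths.append(constant * sqrt(w1**2 + w2**2))
        widths.sort(reverse=True)
    return widths[0]

def arteriovenous_ratio(arteriole_widths, venule_widths):
    crae = summary_caliber(arteriole_widths, 0.88)  # central retinal artery equivalent
    crve = summary_caliber(venule_widths, 0.95)     # central retinal vein equivalent
    return crae / crve

# Hypothetical widths (micrometres) of the six largest arterioles and venules
avr = arteriovenous_ratio([110, 105, 98, 95, 90, 88],
                          [155, 150, 140, 138, 130, 125])
# Arterioles are narrower than venules, so AVR is typically below 1;
# arteriolar narrowing in hypertension pushes it lower.
```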
Affiliation(s)
- Rosa García-Sierra: Research Support Unit Metropolitana Nord, Primary Care Research Institut Jordi Gol (IDIAPJGol), 08303 Mataró, Spain; Multidisciplinary Research Group in Health and Society GREMSAS (2017 SGR 917), 08007 Barcelona, Spain; Nursing Department, Faculty of Medicine, Universitat Autònoma de Barcelona, Campus Bellaterra, 08193 Barcelona, Spain; Primary Care Group, Germans Trias i Pujol Research Institute (IGTP), 08916 Badalona, Spain
- Victor M. López-Lifante: Research Support Unit Metropolitana Nord, Primary Care Research Institut Jordi Gol (IDIAPJGol), 08303 Mataró, Spain; Palau-solità i Plegamans Primary Healthcare Centre, Palau-solità i Plegamans, Gerència d’Àmbit d’Atenció Primària Metropolitana Nord, Institut Català de la Salut, 08184 Barcelona, Spain
- Antonio Heras: Research Support Unit Metropolitana Nord, Primary Care Research Institut Jordi Gol (IDIAPJGol), 08303 Mataró, Spain; Primary Healthcare Centre Riu Nord-Riu Sud, Gerència d’Àmbit d’Atenció Primària Metropolitana Nord, Institut Català de la Salut, Santa Coloma de Gramenet, 08921 Barcelona, Spain
- Idoia Besada: ULMA Medical Technologies, S. Coop, 20560 Onati, Spain
- David Verde Lopez: Institut Universitari d’Investigació en Atenció Primària Jordi Gol (IDIAP Jordi Gol), 08007 Barcelona, Spain
- Maria Teresa Alzamora: Research Support Unit Metropolitana Nord, Primary Care Research Institut Jordi Gol (IDIAPJGol), 08303 Mataró, Spain; Primary Healthcare Centre Riu Nord-Riu Sud, Gerència d’Àmbit d’Atenció Primària Metropolitana Nord, Institut Català de la Salut, Santa Coloma de Gramenet, 08921 Barcelona, Spain
- Rosa Forés: Research Support Unit Metropolitana Nord, Primary Care Research Institut Jordi Gol (IDIAPJGol), 08303 Mataró, Spain
- Pilar Montero-Alia: Research Support Unit Metropolitana Nord, Primary Care Research Institut Jordi Gol (IDIAPJGol), 08303 Mataró, Spain
- Pere Torán-Monserrat: Research Support Unit Metropolitana Nord, Primary Care Research Institut Jordi Gol (IDIAPJGol), 08303 Mataró, Spain; Multidisciplinary Research Group in Health and Society GREMSAS (2017 SGR 917), 08007 Barcelona, Spain; Primary Care Group, Germans Trias i Pujol Research Institute (IGTP), 08916 Badalona, Spain; Department of Medicine, Faculty of Medicine, Universitat de Girona, 17004 Girona, Spain
26
Yang Y, Shang F, Wu B, Yang D, Wang L, Xu Y, Zhang W, Zhang T. Robust Collaborative Learning of Patch-Level and Image-Level Annotations for Diabetic Retinopathy Grading From Fundus Image. IEEE Trans Cybern 2022; 52:11407-11417. [PMID: 33961571] [DOI: 10.1109/tcyb.2021.3062638]
Abstract
Diabetic retinopathy (DR) grading from fundus images has attracted increasing interest in both academic and industrial communities. Most convolutional neural network-based algorithms treat DR grading as a classification task via image-level annotations. However, these algorithms have not fully explored the valuable information in the DR-related lesions. In this article, we present a robust framework, which collaboratively utilizes patch-level and image-level annotations, for DR severity grading. By an end-to-end optimization, this framework can bidirectionally exchange the fine-grained lesion and image-level grade information. As a result, it exploits more discriminative features for DR grading. The proposed framework shows better performance than the recent state-of-the-art algorithms and three clinical ophthalmologists with over nine years of experience. By testing on datasets of different distributions (such as label and camera), we prove that our algorithm is robust when facing image quality and distribution variations that commonly exist in real-world practice. We inspect the proposed framework through extensive ablation studies to indicate the effectiveness and necessity of each motivation. The code and some valuable annotations are now publicly available.
27
Martinez-Millana A, Saez-Saez A, Tornero-Costa R, Azzopardi-Muscat N, Traver V, Novillo-Ortiz D. Artificial intelligence and its impact on the domains of universal health coverage, health emergencies and health promotion: An overview of systematic reviews. Int J Med Inform 2022; 166:104855. [PMID: 35998421] [PMCID: PMC9551134] [DOI: 10.1016/j.ijmedinf.2022.104855]
Abstract
BACKGROUND Artificial intelligence is fueling a new revolution in medicine and in the healthcare sector. Despite the growing evidence on the benefits of artificial intelligence, several aspects limit the measurement of its impact on people's health. It is necessary to assess the current status of AI applications for improving people's health in the domains defined by WHO's Thirteenth General Programme of Work (GPW13) and the European Programme of Work (EPW), to inform about trends, gaps, opportunities, and challenges. OBJECTIVE To perform a systematic overview of systematic reviews on the application of artificial intelligence in the people's health domains as defined in the GPW13 and provide a comprehensive and updated map of the application specialties of artificial intelligence in terms of methodologies, algorithms, data sources, outcomes, predictors, performance, and methodological quality. METHODS A systematic search in MEDLINE, EMBASE, Cochrane and IEEEXplore was conducted between January 2015 and June 2021 to collect systematic reviews using a combination of keywords related to the domains of universal health coverage, health emergencies protection, and better health and wellbeing as defined by the WHO's GPW13 and EPW. Eligibility criteria were based on methodological quality and the inclusion of practical implementations of artificial intelligence. Records were classified and labeled using ICD-11 categories into the domains of the GPW13. Descriptors related to the area of implementation, type of modeling, data entities, outcomes and implementation in care delivery were extracted using a structured form, and methodological aspects of the included reviews were assessed using the AMSTAR checklist. RESULTS The search strategy resulted in the screening of 815 systematic reviews, of which 203 were assessed for eligibility and 129 were included in the review. The most predominant domain for artificial intelligence applications was Universal Health Coverage (N = 98), followed by Health Emergencies (N = 16) and Better Health and Wellbeing (N = 15). Within Universal Health Coverage, neoplasms was the disease area featuring the most applications (21.7%, N = 28). The reviews featured analytics primarily over both public and private data sources (67.4%, N = 87). The most used type of data was medical imaging (31.8%, N = 41), with predictors based on regions of interest and clinical data. The most prominent subdomain of artificial intelligence was machine learning (43.4%, N = 56), in which the support vector machine method was predominant (20.9%, N = 27). Regarding purpose, the application of artificial intelligence was focused on the prediction of diseases (36.4%, N = 47). With respect to validation, more than half of the reviews (54.3%, N = 70) did not report a validation procedure and, whenever available, the main performance indicator was accuracy (28.7%, N = 37). According to the methodological quality assessment, a third of the reviews (34.9%, N = 45) implemented methods for analysing the risk of bias, and the overall AMSTAR score was below 5 (4.01 ± 1.93) across the included systematic reviews. CONCLUSION Artificial intelligence is being used for disease modelling, diagnosis, classification and prediction in the three domains of the GPW13. However, the evidence is often limited to the laboratory, and the level of adoption is largely unbalanced between ICD-11 categories and diseases. Data availability is a determinant factor in the developmental stage of artificial intelligence applications. Most of the reviewed studies show poor methodological quality and are at high risk of bias, which limits the reproducibility of the results and the reliability of translating these applications to real clinical scenarios. The analysed papers report results only in laboratory and testing scenarios, not in clinical trials or case studies, limiting the supporting evidence for transferring artificial intelligence to actual care delivery.
Affiliation(s)
- Antonio Martinez-Millana: Instituto Universitario de Investigación de Aplicaciones de las Tecnologías de la Información y de las Comunicaciones Avanzadas (ITACA), Universitat Politècnica de València, Camino de Vera S/N, Valencia 46022, Spain
- Aida Saez-Saez: Instituto Universitario de Investigación de Aplicaciones de las Tecnologías de la Información y de las Comunicaciones Avanzadas (ITACA), Universitat Politècnica de València, Camino de Vera S/N, Valencia 46022, Spain
- Roberto Tornero-Costa: Instituto Universitario de Investigación de Aplicaciones de las Tecnologías de la Información y de las Comunicaciones Avanzadas (ITACA), Universitat Politècnica de València, Camino de Vera S/N, Valencia 46022, Spain
- Natasha Azzopardi-Muscat: Division of Country Health Policies and Systems, World Health Organization, Regional Office for Europe, Copenhagen, Denmark
- Vicente Traver: Instituto Universitario de Investigación de Aplicaciones de las Tecnologías de la Información y de las Comunicaciones Avanzadas (ITACA), Universitat Politècnica de València, Camino de Vera S/N, Valencia 46022, Spain
- David Novillo-Ortiz: Division of Country Health Policies and Systems, World Health Organization, Regional Office for Europe, Copenhagen, Denmark
28
Grauslund J. Diabetic retinopathy screening in the emerging era of artificial intelligence. Diabetologia 2022; 65:1415-1423. [PMID: 35639120] [DOI: 10.1007/s00125-022-05727-0]
Abstract
Diabetic retinopathy is a frequent complication in diabetes and a leading cause of visual impairment. Regular eye screening is imperative to detect sight-threatening stages of diabetic retinopathy such as proliferative diabetic retinopathy and diabetic macular oedema in order to treat these before irreversible visual loss occurs. Screening is cost-effective and has been implemented in various countries in Europe and elsewhere. Along with optimised diabetes care, this has substantially reduced the risk of visual loss. Nevertheless, the growing number of patients with diabetes poses an increasing burden on healthcare systems and automated solutions are needed to alleviate the task of screening and improve diagnostic accuracy. Deep learning by convolutional neural networks is an optimised branch of artificial intelligence that is particularly well suited to automated image analysis. Pivotal studies have demonstrated high sensitivity and specificity for classifying advanced stages of diabetic retinopathy and identifying diabetic macular oedema in optical coherence tomography scans. Based on this, different algorithms have obtained regulatory approval for clinical use and have recently been implemented to some extent in a few countries. Handheld mobile devices are another promising option for self-monitoring, but so far they have not demonstrated comparable image quality to that of fundus photography using non-portable retinal cameras, which is the gold standard for diabetic retinopathy screening. Such technology has the potential to be integrated in telemedicine-based screening programmes, enabling self-captured retinal images to be transferred virtually to reading centres for analysis and planning of further steps. While emerging technologies have shown a lot of promise, clinical implementation has been sparse. Legal obstacles and difficulties in software integration may partly explain this, but it may also indicate that existing algorithms may not necessarily integrate well with national screening initiatives, which often differ substantially between countries.
Affiliation(s)
- Jakob Grauslund
- Department of Ophthalmology, Odense University Hospital, Odense, Denmark.
- Department of Clinical Research, University of Southern Denmark, Odense, Denmark.
- Steno Diabetes Center Odense, Odense University Hospital, Odense, Denmark.
- Vestfold Hospital Trust, Tønsberg, Norway.
29
Pareja-Ríos A, Ceruso S, Romero-Aroca P, Bonaque-González S. A New Deep Learning Algorithm with Activation Mapping for Diabetic Retinopathy: Backtesting after 10 Years of Tele-Ophthalmology. J Clin Med 2022; 11:jcm11174945. [PMID: 36078875 PMCID: PMC9456446 DOI: 10.3390/jcm11174945] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2022] [Revised: 08/17/2022] [Accepted: 08/22/2022] [Indexed: 11/16/2022] Open
Abstract
We report the development of a deep learning algorithm (AI) to detect signs of diabetic retinopathy (DR) from fundus images. For this, we use a ResNet-50 neural network with a double resolution, the addition of Squeeze–Excitation blocks, pre-trained on ImageNet, and trained for 50 epochs using the Adam optimizer. The AI-based algorithm not only classifies an image as pathological or not but also detects and highlights those signs that allow DR to be identified. For development, we have used a database of about half a million images classified in a real clinical environment by family doctors (FDs), ophthalmologists, or both. The AI was able to detect more than 95% of cases worse than mild DR and had 70% fewer misclassifications of healthy cases than FDs. In addition, the AI was able to detect DR signs in 1258 patients before they were detected by FDs, representing 7.9% of the total number of DR patients detected by the FDs. These results suggest that the AI is at least comparable to evaluation by FDs. We suggest that it may be more useful as a signaling tool that aids diagnosis than as a stand-alone diagnostic system.
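The Squeeze–Excitation blocks added to ResNet-50 here can be illustrated with a minimal, framework-free sketch (toy dimensions and hand-picked weights; in the real network both weight matrices are learned and the block sits inside each residual unit):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def squeeze_excite(channels, w1, w2):
    """Channel recalibration: squeeze (global average pool per channel),
    excite (bottleneck MLP, ReLU then sigmoid), rescale each channel."""
    # Squeeze: collapse each HxW channel to a single scalar.
    z = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0])) for ch in channels]
    # Excite: z -> hidden (ReLU) -> per-channel gates in (0, 1).
    hidden = [max(0.0, sum(w * x for w, x in zip(row, z))) for row in w1]
    gates = [sigmoid(sum(w * h for w, h in zip(row, hidden))) for row in w2]
    # Scale: reweight every activation in channel c by gates[c].
    return [[[v * g for v in row] for row in ch] for ch, g in zip(channels, gates)]

# Toy demo: two 2x2 channels, hand-picked weights (learned in practice).
demo = squeeze_excite(
    [[[1.0, 1.0], [1.0, 1.0]], [[0.0, 0.0], [0.0, 0.0]]],
    w1=[[1.0, 1.0]],   # 2 channels -> 1 hidden unit
    w2=[[1.0], [1.0]], # 1 hidden unit -> 2 gates
)
```

The gates act as learned per-channel attention: channels whose global response is informative get amplified, the rest are damped.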
Affiliation(s)
- Alicia Pareja-Ríos
- Department of Ophthalmology, University Hospital of the Canary Islands, 38320 San Cristóbal de La Laguna, Spain
- Sabato Ceruso
- School of Engineering and Technology, University of La Laguna, 38200 San Cristóbal de La Laguna, Spain
- Pedro Romero-Aroca
- Ophthalmology Department, University Hospital Sant Joan, Institute of Health Research Pere Virgili (IISPV), Universitat Rovira & Virgili, 43002 Tarragona, Spain
- Sergio Bonaque-González
- Instituto de Astrofísica de Canarias, 38205 San Cristóbal de La Laguna, Spain
30
Rao DP, Sindal MD, Sengupta S, Baskaran P, Venkatesh R, Sivaraman A, Savoy FM. Towards a Device Agnostic AI for Diabetic Retinopathy Screening: An External Validation Study. Clin Ophthalmol 2022; 16:2659-2667. [PMID: 36003071 PMCID: PMC9393096 DOI: 10.2147/opth.s369675] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2022] [Accepted: 07/11/2022] [Indexed: 11/23/2022] Open
Affiliation(s)
- Divya Parthasarathy Rao
- Artificial Intelligence R&D, Remidio Innovative Solutions Inc, Glen Allen, VA, USA
- Correspondence: Divya Parthasarathy Rao, Artificial Intelligence R&D, Remidio Innovative Solutions Inc, 11357 Nuckols Road, #102, Glen Allen, VA, 23059, USA, Tel +1 855 513-3335, Email
- Manavi D Sindal
- Vitreoretinal Services, Aravind Eye Hospitals and Postgraduate Institute of Ophthalmology, Pondicherry, India
- Sabyasachi Sengupta
- Department of Retina, Future Vision Eye Care and Research Center, Mumbai, India
- Prabu Baskaran
- Vitreoretinal Services, Aravind Eye Hospitals and Postgraduate Institute of Ophthalmology, Chennai, India
- Rengaraj Venkatesh
- Vitreoretinal Services, Aravind Eye Hospitals and Postgraduate Institute of Ophthalmology, Pondicherry, India
- Anand Sivaraman
- Artificial Intelligence R&D, Remidio Innovative Solutions Pvt Ltd, Bangalore, India
31
Cao J, Chang-Kit B, Katsnelson G, Far PM, Uleryk E, Ogunbameru A, Miranda RN, Felfeli T. Protocol for a systematic review and meta-analysis of the diagnostic accuracy of artificial intelligence for grading of ophthalmology imaging modalities. Diagn Progn Res 2022; 6:15. [PMID: 35831880 PMCID: PMC9281030 DOI: 10.1186/s41512-022-00127-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/08/2022] [Accepted: 05/25/2022] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND With the rise of artificial intelligence (AI) in ophthalmology, the need to define its diagnostic accuracy is increasingly important. The review aims to elucidate the diagnostic accuracy of AI algorithms in screening for all ophthalmic conditions in patient care settings that involve digital imaging modalities, using the reference standard of human graders. METHODS This is a systematic review and meta-analysis. A literature search will be conducted on Ovid MEDLINE, Ovid EMBASE, and Wiley Cochrane CENTRAL from January 1, 2000, to December 20, 2021. Studies will be selected via screening the titles and abstracts, followed by full-text screening. Articles that compare the results of AI-graded ophthalmic images with results from human graders as a reference standard will be included; articles that do not will be excluded. The systematic review software DistillerSR will be used to automate part of the screening process as an adjunct to human reviewers. After the full-text screening, data will be extracted from each study via the categories of study characteristics, patient information, AI methods, intervention, and outcomes. Risk of bias will be scored using Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) by two trained independent reviewers. Disagreements at any step will be addressed by a third adjudicator. The study results will include summary receiver operating characteristic (sROC) curve plots as well as pooled sensitivity and specificity of artificial intelligence for detection of any ophthalmic conditions based on imaging modalities compared to the reference standard. Statistics will be calculated in the R statistical software. DISCUSSION This study will provide novel insights into the diagnostic accuracy of AI in new domains of ophthalmology that have not been previously studied. The protocol also outlines the use of an AI-based software to assist in article screening, which may serve as a reference for improving the efficiency and accuracy of future large systematic reviews. TRIAL REGISTRATION PROSPERO, CRD42021274441.
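As a simplified illustration of the pooling step such protocols describe, per-study confusion-matrix counts against the human-grader reference can be aggregated as below. This is a naive fixed pooling, not the bivariate sROC model actually used in diagnostic meta-analysis, and the study counts are invented:

```python
def pooled_accuracy(studies):
    """Naively pool diagnostic accuracy across studies, where each study
    contributes (TP, FP, FN, TN) counts against the reference standard."""
    tp = sum(s[0] for s in studies)
    fp = sum(s[1] for s in studies)
    fn = sum(s[2] for s in studies)
    tn = sum(s[3] for s in studies)
    # Sensitivity = true-positive rate; specificity = true-negative rate.
    return tp / (tp + fn), tn / (tn + fp)

# Two hypothetical studies with invented counts.
sens, spec = pooled_accuracy([(90, 10, 10, 90), (80, 5, 20, 95)])
```

A proper meta-analysis would instead model between-study variation and the sensitivity/specificity trade-off jointly, which is what the sROC curve summarizes.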
Affiliation(s)
- Jessica Cao
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- Glen Katsnelson
- Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Adeteju Ogunbameru
- Institute of Health Policy, Management and Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada
- THETA Collaborative, Toronto General Hospital, University Health Network, Eaton Building, 10th Floor, 200 Elizabeth Street, Toronto, Ontario, ON M5G, Canada
- Rafael N Miranda
- Institute of Health Policy, Management and Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada
- THETA Collaborative, Toronto General Hospital, University Health Network, Eaton Building, 10th Floor, 200 Elizabeth Street, Toronto, Ontario, ON M5G, Canada
- Tina Felfeli
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- Institute of Health Policy, Management and Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada
- THETA Collaborative, Toronto General Hospital, University Health Network, Eaton Building, 10th Floor, 200 Elizabeth Street, Toronto, Ontario, ON M5G, Canada
32
Cheng G, Zhang F, Xing Y, Hu X, Zhang H, Chen S, Li M, Peng C, Ding G, Zhang D, Chen P, Xia Q, Wu M. Artificial Intelligence-Assisted Score Analysis for Predicting the Expression of the Immunotherapy Biomarker PD-L1 in Lung Cancer. Front Immunol 2022; 13:893198. [PMID: 35844508 PMCID: PMC9286729 DOI: 10.3389/fimmu.2022.893198] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2022] [Accepted: 05/27/2022] [Indexed: 12/12/2022] Open
Abstract
Programmed cell death ligand 1 (PD-L1) is a critical biomarker for predicting the response to immunotherapy. However, traditional quantitative evaluation of PD-L1 expression using immunohistochemistry staining remains challenging for pathologists. Here we developed a deep learning (DL)-based artificial intelligence (AI) model to automatically analyze the immunohistochemical expression of PD-L1 in lung cancer patients. A total of 1,288 patients with lung cancer were included in the study. The diagnostic ability of three different AI models (M1, M2, and M3) was assessed in both PD-L1 (22C3) and PD-L1 (SP263) assays. M2 and M3 showed improved performance in the evaluation of PD-L1 expression in the PD-L1 (22C3) assay, especially at 1% cutoff. Highly accurate performance in the PD-L1 (SP263) was also achieved, with accuracy and specificity of 96.4 and 96.8% in both M2 and M3, respectively. Moreover, the diagnostic results of these three AI-assisted models were highly consistent with those from the pathologist. Similar performances of M1, M2, and M3 in the 22C3 dataset were also obtained in lung adenocarcinoma and lung squamous cell carcinoma in both sampling methods. In conclusion, these results suggest that AI-assisted diagnostic models in PD-L1 expression are a promising tool for improving the efficiency of clinical pathologists.
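For context, PD-L1 positivity in lung cancer is conventionally reported as a tumor proportion score (TPS) judged against cutoffs such as the 1% threshold evaluated above; a minimal sketch of that bucketing (the category labels here are illustrative, not the study's own grading scheme):

```python
def tps_category(stained_cells, viable_tumor_cells):
    """Tumor proportion score (TPS): percent of viable tumor cells showing
    PD-L1 membrane staining, bucketed at the standard 1% / 50% cutoffs."""
    tps = 100.0 * stained_cells / viable_tumor_cells
    if tps < 1.0:
        return tps, "negative (<1%)"
    if tps < 50.0:
        return tps, "low expression (1-49%)"
    return tps, "high expression (>=50%)"

# Hypothetical cell counts from one slide region.
score, label = tps_category(300, 1000)
```

Because the clinically meaningful boundaries sit at 1% and 50%, an AI model's accuracy near those cutoffs matters most, which is why performance "especially at 1% cutoff" is singled out.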
Affiliation(s)
- Guoping Cheng
- Department of Pathology, The Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Hangzhou, China
- Institute of Basic Medicine and Cancer, Chinese Academy of Sciences, Hangzhou, China
- Xingyi Hu
- Department of Pathology, The Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Hangzhou, China
- Institute of Basic Medicine and Cancer, Chinese Academy of Sciences, Hangzhou, China
- The Second Clinical Medical College, Zhejiang Chinese Medical University, Hangzhou, China
- He Zhang
- Department of Pathology, Affiliated Cancer Hospital of Zhengzhou University, Zhengzhou, China
- Guangtai Ding
- School of Computer Engineering and Science, Shanghai University, Shanghai, China
- Dadong Zhang
- 3D Medicines Inc., Shanghai, China
- *Correspondence: Dadong Zhang; Peilin Chen; Qingxin Xia; Meijuan Wu
- Peilin Chen
- 3D Medicines Inc., Shanghai, China
- Qingxin Xia
- Department of Pathology, Affiliated Cancer Hospital of Zhengzhou University, Zhengzhou, China
- Meijuan Wu
- Department of Pathology, The Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Hangzhou, China
- Institute of Basic Medicine and Cancer, Chinese Academy of Sciences, Hangzhou, China
33
Oltu B, Karaca BK, Erdem H, Özgür A. A systematic review of transfer learning-based approaches for diabetic retinopathy detection. Gazi Univ J Sci 2022. [DOI: 10.35378/gujs.1081546] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Cases of diabetes and of the related diabetic retinopathy (DR) have been increasing at an alarming rate. Early detection of DR is important because the disease can cause permanent blindness in its late stages. Over the last two decades, many different approaches have been applied to DR detection. A review of the academic literature shows that deep neural networks (DNNs) have become the preferred approach, and among DNN approaches, convolutional neural network (CNN) models are the most widely used for medical image classification. Designing a new CNN architecture is tedious and time consuming, and training its enormous number of parameters is also difficult. For this reason, using pre-trained models as a transfer learning approach, instead of training CNNs from scratch, has been suggested in recent years. Accordingly, the present review focuses on DNN- and transfer learning-based applications for DR detection, considering 43 publications between 2015 and 2021. The published papers are summarized in 3 figures and 10 tables, giving information about 29 pre-trained CNN models, 13 DR data sets and standard performance metrics.
Affiliation(s)
- Burcu Oltu
- Başkent University, Faculty of Engineering
34
Egger J, Gsaxner C, Pepe A, Pomykala KL, Jonske F, Kurz M, Li J, Kleesiek J. Medical deep learning-A systematic meta-review. Comput Methods Programs Biomed 2022; 221:106874. [PMID: 35588660 DOI: 10.1016/j.cmpb.2022.106874] [Citation(s) in RCA: 82] [Impact Index Per Article: 27.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/05/2021] [Revised: 04/22/2022] [Accepted: 05/10/2022] [Indexed: 05/22/2023]
Abstract
Deep learning has remarkably impacted several different scientific disciplines over the last few years. For example, in image processing and analysis, deep learning algorithms were able to outperform other cutting-edge methods. Additionally, deep learning has delivered state-of-the-art results in tasks like autonomous driving, outclassing previous attempts. There are even instances where deep learning outperformed humans, for example with object recognition and gaming. Deep learning is also showing vast potential in the medical domain. With the collection of large quantities of patient records and data, and a trend towards personalized treatments, there is a great need for automated and reliable processing and analysis of health information. Patient data is not only collected in clinical centers, like hospitals and private practices, but also by mobile healthcare apps or online websites. The abundance of collected patient data and the recent growth in the deep learning field has resulted in a large increase in research efforts. In Q2/2020, the search engine PubMed returned already over 11,000 results for the search term 'deep learning', and around 90% of these publications are from the last three years. However, even though PubMed represents the largest search engine in the medical field, it does not cover all medical-related publications. Hence, a complete overview of the field of 'medical deep learning' is almost impossible to obtain and acquiring a full overview of medical sub-fields is becoming increasingly more difficult. Nevertheless, several review and survey articles about medical deep learning have been published within the last few years. They focus, in general, on specific medical scenarios, like the analysis of medical images containing specific pathologies. With these surveys as a foundation, the aim of this article is to provide the first high-level, systematic meta-review of medical deep learning surveys.
Affiliation(s)
- Jan Egger
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Styria, Austria; Department of Oral & Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 5/1, 8036 Graz, Styria, Austria; Computer Algorithms for Medicine Laboratory, Graz, Styria, Austria; Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, 45131 Essen, Germany; Cancer Research Center Cologne Essen (CCCE), University Medicine Essen, Hufelandstraße 55, 45147 Essen, Germany
- Christina Gsaxner
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Styria, Austria; Department of Oral & Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 5/1, 8036 Graz, Styria, Austria; Computer Algorithms for Medicine Laboratory, Graz, Styria, Austria
- Antonio Pepe
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Styria, Austria; Computer Algorithms for Medicine Laboratory, Graz, Styria, Austria
- Kelsey L Pomykala
- Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, 45131 Essen, Germany
- Frederic Jonske
- Computer Algorithms for Medicine Laboratory, Graz, Styria, Austria; Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, 45131 Essen, Germany
- Manuel Kurz
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Styria, Austria; Computer Algorithms for Medicine Laboratory, Graz, Styria, Austria
- Jianning Li
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Styria, Austria; Computer Algorithms for Medicine Laboratory, Graz, Styria, Austria; Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, 45131 Essen, Germany
- Jens Kleesiek
- Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, 45131 Essen, Germany; Cancer Research Center Cologne Essen (CCCE), University Medicine Essen, Hufelandstraße 55, 45147 Essen, Germany; German Cancer Consortium (DKTK), Partner Site Essen, Hufelandstraße 55, 45147 Essen, Germany
35
Munjral S, Maindarkar M, Ahluwalia P, Puvvula A, Jamthikar A, Jujaray T, Suri N, Paul S, Pathak R, Saba L, Chalakkal RJ, Gupta S, Faa G, Singh IM, Chadha PS, Turk M, Johri AM, Khanna NN, Viskovic K, Mavrogeni S, Laird JR, Pareek G, Miner M, Sobel DW, Balestrieri A, Sfikakis PP, Tsoulfas G, Protogerou A, Misra DP, Agarwal V, Kitas GD, Kolluri R, Teji J, Al-Maini M, Dhanjil SK, Sockalingam M, Saxena A, Sharma A, Rathore V, Fatemi M, Alizad A, Viswanathan V, Krishnan PR, Omerzu T, Naidu S, Nicolaides A, Fouda MM, Suri JS. Cardiovascular Risk Stratification in Diabetic Retinopathy via Atherosclerotic Pathway in COVID-19/Non-COVID-19 Frameworks Using Artificial Intelligence Paradigm: A Narrative Review. Diagnostics (Basel) 2022; 12:1234. [PMID: 35626389 PMCID: PMC9140106 DOI: 10.3390/diagnostics12051234] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2022] [Revised: 05/11/2022] [Accepted: 05/11/2022] [Indexed: 11/18/2022] Open
Abstract
Diabetes is one of the main causes of the rising cases of blindness in adults. This microvascular complication of diabetes is termed diabetic retinopathy (DR) and is associated with an expanding risk of cardiovascular events in diabetes patients. DR, in its various forms, is seen to be a powerful indicator of atherosclerosis. Further, the macrovascular complication of diabetes leads to coronary artery disease (CAD). Thus, the timely identification of cardiovascular disease (CVD) complications in DR patients is of utmost importance. Since CAD risk assessment is expensive for low-income countries, it is important to look for surrogate biomarkers for risk stratification of CVD in DR patients. Due to the common genetic makeup between the coronary and carotid arteries, low-cost, high-resolution imaging such as carotid B-mode ultrasound (US) can be used for arterial tissue characterization and risk stratification in DR patients. The advent of artificial intelligence (AI) techniques has facilitated the handling of large cohorts in a big data framework to identify atherosclerotic plaque features in arterial ultrasound. This enables timely CVD risk assessment and risk stratification of patients with DR. Thus, this review focuses on understanding the pathophysiology of DR, retinal and CAD imaging, the role of surrogate markers for CVD, and finally, the CVD risk stratification of DR patients. The review shows a step-by-step cyclic activity of how diabetes and atherosclerotic disease cause DR, leading to the worsening of CVD. We propose a solution to how AI can help in the identification of CVD risk. Lastly, we analyze the role of DR/CVD in the COVID-19 framework.
Affiliation(s)
- Smiksha Munjral
- Stroke Monitoring and Diagnostic Division, AtheroPoint™, Roseville, CA 95661, USA
- Mahesh Maindarkar
- Stroke Monitoring and Diagnostic Division, AtheroPoint™, Roseville, CA 95661, USA
- Department of Biomedical Engineering, North Eastern Hill University, Shillong 793022, India
- Puneet Ahluwalia
- Max Institute of Cancer Care, Max Super Specialty Hospital, New Delhi 110017, India
- Anudeep Puvvula
- Stroke Monitoring and Diagnostic Division, AtheroPoint™, Roseville, CA 95661, USA
- Annu’s Hospitals for Skin and Diabetes, Nellore 524101, India
- Ankush Jamthikar
- Stroke Monitoring and Diagnostic Division, AtheroPoint™, Roseville, CA 95661, USA
- Tanay Jujaray
- Stroke Monitoring and Diagnostic Division, AtheroPoint™, Roseville, CA 95661, USA
- Department of Molecular, Cell and Developmental Biology, University of California, Santa Cruz, CA 95616, USA
- Neha Suri
- Mira Loma High School, Sacramento, CA 95821, USA
- Sudip Paul
- Department of Biomedical Engineering, North Eastern Hill University, Shillong 793022, India
- Rajesh Pathak
- Department of Computer Science Engineering, Rawatpura Sarkar University, Raipur 492015, India
- Luca Saba
- Department of Radiology, Azienda Ospedaliero Universitaria, 40138 Cagliari, Italy
- Suneet Gupta
- CSE Department, Bennett University, Greater Noida 201310, India
- Gavino Faa
- Department of Pathology, Azienda Ospedaliero Universitaria, 09124 Cagliari, Italy
- Inder M. Singh
- Stroke Monitoring and Diagnostic Division, AtheroPoint™, Roseville, CA 95661, USA
- Paramjit S. Chadha
- Stroke Monitoring and Diagnostic Division, AtheroPoint™, Roseville, CA 95661, USA
- Monika Turk
- The Hanse-Wissenschaftskolleg Institute for Advanced Study, 27753 Delmenhorst, Germany
- Amer M. Johri
- Department of Medicine, Division of Cardiology, Queen’s University, Kingston, ON K7L 3N6, Canada
- Narendra N. Khanna
- Department of Cardiology, Indraprastha APOLLO Hospitals, New Delhi 110001, India
- Klaudija Viskovic
- Department of Radiology and Ultrasound, University Hospital for Infectious Diseases, 10 000 Zagreb, Croatia
- Sophie Mavrogeni
- Cardiology Clinic, Onassis Cardiac Surgery Centre, 17674 Athens, Greece
- John R. Laird
- Heart and Vascular Institute, Adventist Health St. Helena, St. Helena, CA 94574, USA
- Gyan Pareek
- Minimally Invasive Urology Institute, Brown University, Providence, RI 02912, USA
- Martin Miner
- Men’s Health Centre, Miriam Hospital Providence, Providence, RI 02906, USA
- David W. Sobel
- Rheumatology Unit, National Kapodistrian University of Athens, 15772 Athens, Greece
- Antonella Balestrieri
- Department of Radiology, Azienda Ospedaliero Universitaria, 40138 Cagliari, Italy
- Petros P. Sfikakis
- Rheumatology Unit, National Kapodistrian University of Athens, 15772 Athens, Greece
- George Tsoulfas
- Department of Surgery, Aristoteleion University of Thessaloniki, 54124 Thessaloniki, Greece
- Athanasios Protogerou
- Cardiovascular Prevention and Research Unit, Department of Pathophysiology, National & Kapodistrian University of Athens, 15772 Athens, Greece
- Durga Prasanna Misra
- Department of Immunology, Sanjay Gandhi Postgraduate Institute of Medical Sciences, Lucknow 226014, India
- Vikas Agarwal
- Department of Immunology, Sanjay Gandhi Postgraduate Institute of Medical Sciences, Lucknow 226014, India
- George D. Kitas
- Academic Affairs, Dudley Group NHS Foundation Trust, Dudley DY1 2HQ, UK
- Arthritis Research UK Epidemiology Unit, Manchester University, Manchester M13 9PL, UK
- Raghu Kolluri
- OhioHealth Heart and Vascular, Columbus, OH 43214, USA
- Jagjit Teji
- Ann and Robert H. Lurie Children’s Hospital of Chicago, Chicago, IL 60611, USA
- Mustafa Al-Maini
- Allergy, Clinical Immunology and Rheumatology Institute, Toronto, ON L4Z 4C4, Canada
- Surinder K. Dhanjil
- Stroke Monitoring and Diagnostic Division, AtheroPoint™, Roseville, CA 95661, USA
- Ajit Saxena
- Department of Cardiology, Indraprastha APOLLO Hospitals, New Delhi 110001, India
- Aditya Sharma
- Division of Cardiovascular Medicine, University of Virginia, Charlottesville, VA 22904, USA
- Vijay Rathore
- Nephrology Department, Kaiser Permanente, Sacramento, CA 95119, USA
- Mostafa Fatemi
- Department of Physiology & Biomedical Engineering, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
- Azra Alizad
- Department of Radiology, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
- Vijay Viswanathan
- MV Hospital for Diabetes and Professor MVD Research Centre, Chennai 600013, India
- Tomaz Omerzu
- Department of Neurology, University Medical Centre Maribor, 1262 Maribor, Slovenia
- Subbaram Naidu
- Electrical Engineering Department, University of Minnesota, Duluth, MN 55812, USA
- Andrew Nicolaides
- Vascular Screening and Diagnostic Centre, University of Nicosia Medical School, Nicosia 2408, Cyprus
- Mostafa M. Fouda
- Department of Electrical and Computer Engineering, Idaho State University, Pocatello, ID 83209, USA
- Jasjit S. Suri
- Stroke Monitoring and Diagnostic Division, AtheroPoint™, Roseville, CA 95661, USA
36
Liu X, Ali TK, Singh P, Shah A, McKinney SM, Ruamviboonsuk P, Turner AW, Keane PA, Chotcomwongse P, Nganthavee V, Chia M, Huemer J, Cuadros J, Raman R, Corrado GS, Peng L, Webster DR, Hammel N, Varadarajan AV, Liu Y, Chopra R, Bavishi P. Deep Learning to Detect OCT-derived Diabetic Macular Edema from Color Retinal Photographs: A Multicenter Validation Study. Ophthalmol Retina 2022; 6:398-410. [PMID: 34999015 DOI: 10.1016/j.oret.2021.12.021] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2021] [Revised: 11/09/2021] [Accepted: 12/29/2021] [Indexed: 01/20/2023]
Abstract
PURPOSE To validate the generalizability of a deep learning system (DLS) that detects diabetic macular edema (DME) from 2-dimensional color fundus photographs (CFP), for which the reference standard for retinal thickness and fluid presence is derived from 3-dimensional OCT. DESIGN Retrospective validation of a DLS across international datasets. PARTICIPANTS Paired CFP and OCT of patients from diabetic retinopathy (DR) screening programs or retina clinics. The DLS was developed using data sets from Thailand, the United Kingdom, and the United States and validated using 3060 unique eyes from 1582 patients across screening populations in Australia, India, and Thailand. The DLS was separately validated in 698 eyes from 537 screened patients in the United Kingdom with mild DR and suspicion of DME based on CFP. METHODS The DLS was trained using DME labels from OCT. The presence of DME was based on retinal thickening or intraretinal fluid. The DLS's performance was compared with expert grades of maculopathy and to a previous proof-of-concept version of the DLS. We further simulated the integration of the current DLS into an algorithm trained to detect DR from CFP. MAIN OUTCOME MEASURES The superiority of specificity and noninferiority of sensitivity of the DLS for the detection of center-involving DME, using device-specific thresholds, compared with experts. RESULTS The primary analysis in a combined data set spanning Australia, India, and Thailand showed the DLS had 80% specificity and 81% sensitivity, compared with expert graders, who had 59% specificity and 70% sensitivity. Relative to human experts, the DLS had significantly higher specificity (P = 0.008) and noninferior sensitivity (P < 0.001). In the data set from the United Kingdom, the DLS had a specificity of 80% (P < 0.001 for specificity of >50%) and a sensitivity of 100% (P = 0.02 for sensitivity of >90%). CONCLUSIONS The DLS can generalize to multiple international populations with an accuracy exceeding that of experts. The clinical value of this DLS to reduce false-positive referrals, thus decreasing the burden on specialist eye care, warrants a prospective evaluation.
Affiliation(s)
- Xinle Liu: Google Health, Google LLC, Mountain View, California
- Tayyeba K Ali: Google Health via Advanced Clinical, Deerfield, Illinois; California Pacific Medical Center, Department of Ophthalmology, San Francisco, CA
- Preeti Singh: Google Health, Google LLC, Mountain View, California
- Ami Shah: Google Health via Advanced Clinical, Deerfield, Illinois
- Paisan Ruamviboonsuk: Department of Ophthalmology, Rajavithi Hospital, College of Medicine, Rangsit University, Bangkok, Thailand
- Angus W Turner: Lions Outback Vision, Lions Eye Institute, Nedlands, Western Australia, Australia; University of Western Australia, Perth, Western Australia, Australia
- Pearse A Keane: NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Peranut Chotcomwongse: Department of Ophthalmology, Rajavithi Hospital, College of Medicine, Rangsit University, Bangkok, Thailand
- Variya Nganthavee: Department of Ophthalmology, Rajavithi Hospital, College of Medicine, Rangsit University, Bangkok, Thailand
- Mark Chia: Lions Outback Vision, Lions Eye Institute, Nedlands, Western Australia, Australia; NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Josef Huemer: NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Rajiv Raman: Shri Bhagwan Mahavir Vitreoretinal Services, Sankara Nethralaya, Chennai, India
- Lily Peng: Google Health, Google LLC, Mountain View, California
- Naama Hammel: Google Health, Google LLC, Mountain View, California
- Yun Liu: Google Health, Google LLC, Mountain View, California
- Reena Chopra: Google Health, Google LLC, Mountain View, California; NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Pinal Bavishi: Google Health, Google LLC, Mountain View, California
37
End-to-End Multi-Task Learning Approaches for the Joint Epiretinal Membrane Segmentation and Screening in OCT Images. Comput Med Imaging Graph 2022; 98:102068. [DOI: 10.1016/j.compmedimag.2022.102068] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2022] [Revised: 03/28/2022] [Accepted: 04/18/2022] [Indexed: 02/07/2023]
38
Fuller SD, Hu J, Liu JC, Gibson E, Gregory M, Kuo J, Rajagopal R. Five-Year Cost-Effectiveness Modeling of Primary Care-Based, Nonmydriatic Automated Retinal Image Analysis Screening Among Low-Income Patients With Diabetes. J Diabetes Sci Technol 2022; 16:415-427. [PMID: 33124449 PMCID: PMC8861785 DOI: 10.1177/1932296820967011] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
BACKGROUND Artificial intelligence-based technology systems offer an alternative solution for diabetic retinopathy (DR) screening compared with standard, in-office dilated eye examinations. We performed a cost-effectiveness analysis of Automated Retinal Image Analysis System (ARIAS)-based DR screening in a primary care medicine clinic that serves a low-income patient population. METHODS A model-based cost-effectiveness analysis of two DR screening systems was created utilizing data from a recent study comparing adherence rates to follow-up eye care among adults ages 18 or older with a clinical diagnosis of diabetes. In the study, patients prescreened with an ARIAS-based, nonmydriatic (undilated), point-of-care tool in the primary care setting were compared with patients with diabetes who were referred for dilated retinal screening without prescreening, as is the current standard of care. Using a Markov model with microsimulation resulting in a total of 600,000 simulated patient experiences, we calculated the incremental cost-utility ratio (ICUR) of the two screening approaches with regard to the five-year cost-effectiveness of DR screening and treatment of vision-threatening DR. RESULTS At five years, ARIAS-based screening showed utility similar to that of the standard-of-care screening system but reduced costs by 23.3%, with an ICUR of $258,721.81 for current practice relative to ARIAS. CONCLUSIONS Primary care-based ARIAS DR screening is cost-effective when compared with standard-of-care screening methods.
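Mechanically, the incremental cost-utility ratio reported above is the extra cost of one strategy over another divided by the extra utility (quality-adjusted life years, QALYs) it delivers. A minimal sketch of that calculation; the cost and QALY figures below are hypothetical, since the abstract does not report the model's inputs:

```python
def icur(cost_a: float, cost_b: float, utility_a: float, utility_b: float) -> float:
    """Incremental cost-utility ratio: extra dollars spent per extra
    QALY gained by strategy A relative to strategy B."""
    d_utility = utility_a - utility_b
    if d_utility == 0:
        raise ValueError("strategies have equal utility; ICUR is undefined")
    return (cost_a - cost_b) / d_utility

# Hypothetical five-year per-patient figures (illustrative only):
# current standard-of-care screening vs. ARIAS-based prescreening.
standard_cost, standard_qaly = 1300.0, 4.1000
arias_cost, arias_qaly = 1000.0, 4.0984

# Dollars per QALY retained by staying with current practice.
print(icur(standard_cost, arias_cost, standard_qaly, arias_qaly))
```

When the comparator is both cheaper and yields nearly identical utility, as in the study, the ICUR becomes very large: the small QALY denominator means each retained QALY costs a great deal.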
Affiliation(s)
- Spencer D. Fuller: John F. Hardesty Department of Ophthalmology and Visual Sciences, Washington University School of Medicine, Saint Louis, MO, USA. Correspondence: Spencer D. Fuller, MD, MPH, 660 South Euclid Avenue, Campus Box 8096, Saint Louis, MO 63110, USA
- Jenny Hu: Shiley Eye Institute, University of California San Diego School of Medicine, La Jolla, CA, USA
- James C. Liu: John F. Hardesty Department of Ophthalmology and Visual Sciences, Washington University School of Medicine, Saint Louis, MO, USA
- Ella Gibson: John F. Hardesty Department of Ophthalmology and Visual Sciences, Washington University School of Medicine, Saint Louis, MO, USA
- Martin Gregory: John T. Milliken Department of Medicine, Division of Gastroenterology, Washington University School of Medicine, St. Louis, MO, USA
- Jessica Kuo: John F. Hardesty Department of Ophthalmology and Visual Sciences, Washington University School of Medicine, Saint Louis, MO, USA
- Rithwick Rajagopal: John F. Hardesty Department of Ophthalmology and Visual Sciences, Washington University School of Medicine, Saint Louis, MO, USA
39
Untangling Computer-Aided Diagnostic System for Screening Diabetic Retinopathy Based on Deep Learning Techniques. SENSORS 2022; 22:s22051803. [PMID: 35270949 PMCID: PMC8914671 DOI: 10.3390/s22051803] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/15/2021] [Revised: 02/16/2022] [Accepted: 02/17/2022] [Indexed: 01/27/2023]
Abstract
Diabetic Retinopathy (DR) is a predominant cause of visual impairment and loss. Approximately 285 million people worldwide are affected by diabetes, and one-third of these patients show symptoms of DR. DR tends to affect patients who have lived with diabetes for 20 years or more, but its impact can be reduced by early detection and proper treatment. Manual diagnosis of DR is a time-consuming and expensive task that requires trained ophthalmologists to observe and evaluate digital fundus images of the retina. This study aims to systematically find and analyze high-quality research on the diagnosis of DR using deep learning approaches. It covers DR grading and staging protocols, presents a DR taxonomy, and identifies, compares, and investigates deep learning-based algorithms, techniques, and methods for classifying DR stages. Publicly available datasets used for deep learning are also analyzed to provide a descriptive and empirical understanding relevant to real-time DR applications. Our in-depth study shows an increasing inclination towards deep learning approaches in recent years: Convolutional Neural Networks (CNNs, used in 35% of studies), Ensemble CNNs (ECNN, 26%), and Deep Neural Networks (DNN, 13%) are among the most used algorithms for DR classification. Deep learning algorithms for DR diagnostics thus hold research potential for solutions based on early detection and prevention.
40
Mehraban Far P, Tai F, Ogunbameru A, Pechlivanoglou P, Sander B, Wong DT, Brent MH, Felfeli T. Diagnostic accuracy of teleretinal screening for detection of diabetic retinopathy and age-related macular degeneration: a systematic review and meta-analysis. BMJ Open Ophthalmol 2022; 7:e000915. [PMID: 35237724 PMCID: PMC8845326 DOI: 10.1136/bmjophth-2021-000915] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2021] [Accepted: 01/07/2022] [Indexed: 11/20/2022] Open
Abstract
Objective To evaluate the diagnostic accuracy of teleretinal screening compared with face-to-face examination for detection of diabetic retinopathy (DR) and age-related macular degeneration (AMD). Methods and analysis This study adhered to the Preferred Reporting Items for a Systematic Review and Meta-analysis of Diagnostic Test Accuracy Studies (PRISMA-DTA). A comprehensive search of OVID MEDLINE, EMBASE and Cochrane CENTRAL was performed from January 2010 to July 2021. The QUADAS-2 tool was used to assess the methodological quality and applicability of the studies. A bivariate random effects model was used to perform the meta-analysis. Referable DR was defined as any disease severity equal to or worse than moderate non-proliferative DR or diabetic macular oedema (DMO). Results 28 articles were included. Teleretinal screening achieved a sensitivity of 0.91 (95% CI: 0.82 to 0.96) and specificity of 0.88 (0.74 to 0.95) for any DR (13 studies, n=7207, Grading of Recommendations, Assessment, Development and Evaluation (GRADE) low). Accuracy for referable DR (10 studies, n=6373, GRADE moderate) was lower, with a sensitivity of 0.88 (0.81 to 0.93) and specificity of 0.86 (0.79 to 0.90). After exclusion of ungradable images, the specificity for referable DR increased to 0.95 (0.90 to 0.98), while the sensitivity remained nearly unchanged at 0.85 (0.76 to 0.91). Teleretinal screening achieved a sensitivity of 0.71 (0.49 to 0.86) and specificity of 0.88 (0.85 to 0.90) for detection of AMD (three studies, n=697, GRADE low). Conclusion Teleretinal screening is highly accurate for detecting both any DR and DR warranting referral. Data for AMD screening are promising but warrant further investigation. PROSPERO registration number CRD42020191994.
Affiliation(s)
- Parsa Mehraban Far: Department of Ophthalmology, Queen's University, Kingston, Ontario, Canada
- Felicia Tai: Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- Adeteju Ogunbameru: Toronto Health Economics and Technology Assessment (THETA) Collaborative, University Health Network, Toronto, Ontario, Canada; Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, Ontario, Canada
- Petros Pechlivanoglou: Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, Ontario, Canada; Peter Gilgan Centre for Research and Learning, The Hospital for Sick Children, University of Toronto, Toronto, Ontario, Canada
- Beate Sander: Toronto Health Economics and Technology Assessment (THETA) Collaborative, University Health Network, Toronto, Ontario, Canada; Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, Ontario, Canada
- David T Wong: Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada; Department of Ophthalmology, St. Michael's Hospital, Unity Health Toronto, Toronto, Ontario, Canada
- Michael H Brent: Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada; Retina Service, Donald K Johnson Eye Institute, University Health Network, Toronto, Ontario, Canada
- Tina Felfeli: Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada; Toronto Health Economics and Technology Assessment (THETA) Collaborative, University Health Network, Toronto, Ontario, Canada; Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, Ontario, Canada
41
Jayakumar S, Sounderajah V, Normahani P, Harling L, Markar SR, Ashrafian H, Darzi A. Quality assessment standards in artificial intelligence diagnostic accuracy systematic reviews: a meta-research study. NPJ Digit Med 2022; 5:11. [PMID: 35087178 PMCID: PMC8795185 DOI: 10.1038/s41746-021-00544-y] [Citation(s) in RCA: 46] [Impact Index Per Article: 15.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2021] [Accepted: 11/28/2021] [Indexed: 01/05/2023] Open
Abstract
Artificial intelligence (AI) centred diagnostic systems are increasingly recognised as robust solutions in healthcare delivery pathways. In turn, there has been a concurrent rise in secondary research studies regarding these technologies in order to influence key clinical and policymaking decisions. It is therefore essential that these studies accurately appraise methodological quality and risk of bias within shortlisted trials and reports. In order to assess whether this critical step is performed, we undertook a meta-research study evaluating adherence to the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) tool within AI diagnostic accuracy systematic reviews. A literature search was conducted on all studies published from 2000 to December 2020. Of 50 included reviews, 36 performed the quality assessment, of which 27 utilised the QUADAS-2 tool. Bias was reported across all four domains of QUADAS-2. Two hundred forty-three of 423 studies (57.5%) across all systematic reviews utilising QUADAS-2 reported a high or unclear risk of bias in the patient selection domain, 110 (26%) reported a high or unclear risk of bias in the index test domain, 121 (28.6%) in the reference standard domain and 157 (37.1%) in the flow and timing domain. This study demonstrates the incomplete uptake of quality assessment tools in reviews of AI-based diagnostic accuracy studies and highlights inconsistent reporting across all domains of quality assessment. Poor standards of reporting act as barriers to clinical implementation. The creation of an AI-specific extension for quality assessment tools of diagnostic accuracy AI studies may facilitate the safe translation of AI tools into clinical practice.
Affiliation(s)
- Shruti Jayakumar: Department of Surgery and Cancer, Imperial College London, London, UK
- Viknesh Sounderajah: Department of Surgery and Cancer, Imperial College London, London, UK; Institute of Global Health Innovation, Imperial College London, London, UK
- Pasha Normahani: Department of Surgery and Cancer, Imperial College London, London, UK; Institute of Global Health Innovation, Imperial College London, London, UK
- Leanne Harling: Department of Surgery and Cancer, Imperial College London, London, UK; Department of Thoracic Surgery, Guy's Hospital, London, UK
- Sheraz R Markar: Department of Surgery and Cancer, Imperial College London, London, UK; Institute of Global Health Innovation, Imperial College London, London, UK
- Hutan Ashrafian: Department of Surgery and Cancer, Imperial College London, London, UK
- Ara Darzi: Department of Surgery and Cancer, Imperial College London, London, UK; Institute of Global Health Innovation, Imperial College London, London, UK
42
Kaya E, Gunec HG, Aydin KC, Urkmez ES, Duranay R, Ates HF. A deep learning approach to permanent tooth germ detection on pediatric panoramic radiographs. Imaging Sci Dent 2022; 52:275-281. [PMID: 36238699 PMCID: PMC9530294 DOI: 10.5624/isd.20220050] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2022] [Revised: 05/19/2022] [Accepted: 06/01/2022] [Indexed: 12/01/2022] Open
Abstract
Purpose The aim of this study was to assess the performance of a deep learning system for permanent tooth germ detection on pediatric panoramic radiographs. Materials and Methods In total, 4518 anonymized panoramic radiographs of children between 5 and 13 years of age were collected. YOLOv4, a convolutional neural network (CNN)-based object detection model, was used to automatically detect permanent tooth germs. Panoramic images annotated in LabelImg were used to train and test the YOLOv4 algorithm. True-positive, false-positive, and false-negative rates were calculated, and a confusion matrix was used to evaluate the performance of the model. Results The YOLOv4 model achieved an average precision of 94.16% and an F1 score of 0.90 for detecting permanent tooth germs on pediatric panoramic radiographs, indicating strong detection performance. The average YOLOv4 inference time was 90 ms. Conclusion The detection of permanent tooth germs on pediatric panoramic radiographs using a deep learning-based approach may facilitate the early diagnosis of tooth deficiency or supernumerary teeth and help dental practitioners find more accurate treatment options while saving time and effort.
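The true-positive, false-positive, and false-negative counts mentioned in the abstract determine the reported precision and F1 values through standard formulas. A minimal sketch; the counts below are invented for illustration (chosen only so the outputs land near the reported 94%/0.90 figures), not taken from the study:

```python
def detection_metrics(tp: int, fp: int, fn: int) -> dict:
    """Precision, recall, and F1 from object-detection counts.

    tp: objects correctly detected; fp: spurious detections;
    fn: objects the model missed.
    """
    precision = tp / (tp + fp)        # fraction of detections that are real
    recall = tp / (tp + fn)           # fraction of real objects found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return {"precision": precision, "recall": recall, "f1": f1}

# Illustrative counts only (not the study's data).
m = detection_metrics(tp=900, fp=56, fn=144)
print(f"precision={m['precision']:.2%} recall={m['recall']:.2%} f1={m['f1']:.2f}")
```

Note that F1 can also be computed directly as 2·TP / (2·TP + FP + FN), which makes its dependence on the raw counts explicit.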
Affiliation(s)
- Emine Kaya: Department of Pediatric Dentistry, Faculty of Dentistry, Istanbul Okan University, Istanbul, Turkey
- Huseyin Gurkan Gunec: Department of Endodontics, Faculty of Dentistry, Atlas University, Istanbul, Turkey
- Kader Cesur Aydin: Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Istanbul Medipol University, Istanbul, Turkey
- Recep Duranay: Department of Computer Engineering, Faculty of Engineering and Natural Sciences, Atlas University, Istanbul, Turkey
- Hasan Fehmi Ates: Department of Computer Engineering, School of Engineering and Natural Sciences, Istanbul Medipol University, Istanbul, Turkey
43
Tsai MJ, Hsieh YT, Tsai CH, Chen M, Hsieh AT, Tsai CW, Chen ML. Cross-Camera External Validation for Artificial Intelligence Software in Diagnosis of Diabetic Retinopathy. J Diabetes Res 2022; 2022:5779276. [PMID: 35308093 PMCID: PMC8926465 DOI: 10.1155/2022/5779276] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/27/2021] [Revised: 02/12/2022] [Accepted: 02/17/2022] [Indexed: 11/18/2022] Open
Abstract
AIMS To investigate the applicability of deep learning image assessment software VeriSee DR to different color fundus cameras for the screening of diabetic retinopathy (DR). METHODS Color fundus images of diabetes patients taken with three different nonmydriatic fundus cameras, including 477 Topcon TRC-NW400, 459 Topcon TRC-NW8 series, and 471 Kowa nonmyd 8 series that were judged as "gradable" by one ophthalmologist were enrolled for validation. VeriSee DR was then used for the diagnosis of referable DR according to the International Clinical Diabetic Retinopathy Disease Severity Scale. Gradability, sensitivity, and specificity were calculated for each camera model. RESULTS All images (100%) from the three camera models were gradable for VeriSee DR. The sensitivity for diagnosing referable DR in the TRC-NW400, TRC-NW8, and nonmyd 8 series was 89.3%, 94.6%, and 95.7%, respectively, while the specificity was 94.2%, 90.4%, and 89.3%, respectively. Neither the sensitivity nor the specificity differed significantly between these camera models and the original camera model used for VeriSee DR development (p = 0.40, p = 0.065, respectively). CONCLUSIONS VeriSee DR was applicable to a variety of color fundus cameras with 100% agreement with ophthalmologists in terms of gradability and good sensitivity and specificity for the diagnosis of referable DR.
Affiliation(s)
- Meng-Ju Tsai: Department of Ophthalmology, Taoyuan General Hospital, Ministry of Health and Welfare, Taoyuan, Taiwan
- Yi-Ting Hsieh: Department of Ophthalmology, National Taiwan University Hospital, Taipei, Taiwan
- An-Tsz Hsieh: Hsieh's Endocrinologic Clinic, New Taipei, Taiwan; Department of Internal Medicine, School of Medicine, National Defense Medical Center, Taipei, Taiwan
44
Gelman R, Fernandez-Granda C. ANALYSIS OF TRANSFER LEARNING FOR SELECT RETINAL DISEASE CLASSIFICATION. Retina 2022; 42:174-183. [PMID: 34393210 PMCID: PMC8702452 DOI: 10.1097/iae.0000000000003282] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/03/2023]
Abstract
PURPOSE To analyze the effect of transfer learning for classification of diabetic retinopathy (DR) by fundus photography and select retinal diseases by spectral domain optical coherence tomography (SD-OCT). METHODS Five widely used open-source deep neural networks and four customized simpler and smaller networks, termed the CBR family, were trained and evaluated on two tasks: 1) classification of DR using fundus photography and 2) classification of drusen, choroidal neovascularization, and diabetic macular edema using SD-OCT. For DR classification, the quadratic weighted Kappa coefficient was used to measure the level of agreement between each network and ground truth-labeled test cases. For SD-OCT-based classification, accuracy was calculated for each network. Kappa and accuracy were compared between iterations with and without use of transfer learning for each network to assess for its effect. RESULTS For DR classification, Kappa increased with transfer learning for all networks (range of increase 0.152-0.556). For SD-OCT-based classification, accuracy increased for four of five open-source deep neural networks (range of increase 1.8%-3.5%), slightly decreased for the remaining deep neural network (-0.6%), decreased slightly for three of four CBR networks (range of decrease 0.9%-1.8%), and decreased by 9.6% for the remaining CBR network. CONCLUSION Transfer learning improved performance, as measured by Kappa, for DR classification for all networks, although the effect ranged from small to substantial. Transfer learning had minimal effect on accuracy for SD-OCT-based classification for eight of the nine networks analyzed. These results imply that transfer learning may substantially increase performance for DR classification but may have minimal effect for SD-OCT-based classification.
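The quadratic weighted Kappa used for the DR task above penalizes rater disagreements by the squared distance between ordinal severity grades, so confusing grade 0 with grade 4 costs far more than confusing adjacent grades. A minimal, self-contained sketch of the standard computation (not the authors' code; the labels at the end are illustrative):

```python
from collections import Counter

def quadratic_weighted_kappa(rater_a, rater_b, n_classes):
    """Cohen's kappa with quadratic weights for ordinal labels 0..n_classes-1.

    Returns 1.0 for perfect agreement and 0.0 for chance-level agreement.
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    denom = (n_classes - 1) ** 2  # normalizes squared distances to [0, 1]

    # Observed disagreement, weighted by squared distance on the scale.
    observed = sum((a - b) ** 2 / denom for a, b in zip(rater_a, rater_b)) / n

    # Expected disagreement if the two raters were statistically independent.
    hist_a, hist_b = Counter(rater_a), Counter(rater_b)
    expected = sum(
        hist_a[i] * hist_b[j] * (i - j) ** 2 / denom
        for i in range(n_classes)
        for j in range(n_classes)
    ) / (n * n)

    return 1.0 - observed / expected

# Perfect agreement on a 5-grade DR scale gives kappa = 1.0.
print(quadratic_weighted_kappa([0, 1, 2, 3, 4], [0, 1, 2, 3, 4], 5))
```

In practice, `sklearn.metrics.cohen_kappa_score(y1, y2, weights="quadratic")` computes the same quantity; the sketch above makes the observed-versus-expected structure explicit.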
Affiliation(s)
- Rony Gelman: Courant Institute of Mathematical Sciences, New York University, New York, New York
- Carlos Fernandez-Granda: Courant Institute of Mathematical Sciences, New York University, New York, New York; Center for Data Science, New York University, New York, New York
45
Watson MJG, McCluskey PJ, Grigg JR, Kanagasingam Y, Daire J, Estai M. Barriers and facilitators to diabetic retinopathy screening within Australian primary care. BMC FAMILY PRACTICE 2021; 22:239. [PMID: 34847874 PMCID: PMC8630186 DOI: 10.1186/s12875-021-01586-7] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/09/2021] [Accepted: 11/09/2021] [Indexed: 11/10/2022]
Abstract
Background Despite recent incentives through Medicare (Australia's universal health insurance scheme) to increase retinal screening rates in primary care, comprehensive diabetic retinopathy (DR) screening coverage has not been achieved in Australia. The current study aimed to identify key factors affecting the delivery of DR screening in Australian general practices. Methods A descriptive qualitative study involving in-depth interviews was carried out from November 2019 to March 2020. Using purposive snowball sampling, 15 general practitioners (GPs) were recruited from urban and rural general practices in New South Wales and Western Australia. A semi-structured interview guide was used to collect data from participants. All interviews were conducted over the phone by one facilitator, and each interview lasted up to 45 min. The Socio-Ecological Model was used to inform the content of the interview topic guides and subsequent data analysis. Recorded data were transcribed verbatim, and thematic analysis was conducted to identify and classify recurrent themes. Results Of the 15 GPs interviewed, 13 were male, and the mean age was 54.7 ± 15.5 years. Seven participants were practising in urban areas, while eight were practising in regional or remote areas. All participants had access to a direct ophthalmoscope, but none owned a retinal camera, and none reported performing DR screening. Only three participants were aware of the Medicare Benefits Schedule (MBS) items 12325 and 12326 that allow GPs to bill for retinal screening. Seven themes, a combination of facilitators and barriers, emerged from the interviews. Despite a strong belief in their role in managing chronic diseases, barriers such as the cost of retinal cameras, time constraints, lack of skills to make a DR diagnosis, and unawareness of Medicare incentives for non-mydriatic retinal photography made it difficult for GPs to conduct DR screening in general practice.
Enabling strategies for delivering DR screening within primary care include increasing GPs' access to continuing professional development, subsidising the cost of retinal cameras, and appointing a practice champion to take responsibility for retinal photography. Conclusion This study identified essential areas at the system level that require addressing to promote the broader implementation of DR screening: in particular, a nationwide awareness campaign to maximise the use of MBS items, improving GPs' competency, and subsidising the cost of retinal cameras for small and rural general practices. Supplementary Information The online version contains supplementary material available at 10.1186/s12875-021-01586-7.
Affiliation(s)
- Matthew J G Watson: The Australian e-Health Research Centre, CSIRO, 147 Underwood Avenue, Floreat, WA, 6014, Australia; Save Sight Institute, The Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Peter J McCluskey: Save Sight Institute, The Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- John R Grigg: Save Sight Institute, The Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Yogesan Kanagasingam: School of Medicine, University of Notre Dame Australia, Fremantle, Australia; St John of God Public and Private Hospitals, Midland, Australia
- Judith Daire: School of Population Health, The Faculty of Health Sciences, Curtin University, Bentley, Australia
- Mohamed Estai: The Australian e-Health Research Centre, CSIRO, 147 Underwood Avenue, Floreat, WA, 6014, Australia; School of Human Sciences, The University of Western Australia, Perth, Australia
46
Storås AM, Strümke I, Riegler MA, Grauslund J, Hammer HL, Yazidi A, Halvorsen P, Gundersen KG, Utheim TP, Jackson CJ. Artificial intelligence in dry eye disease. Ocul Surf 2021; 23:74-86. [PMID: 34843999 DOI: 10.1016/j.jtos.2021.11.004] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2021] [Revised: 11/08/2021] [Accepted: 11/09/2021] [Indexed: 12/21/2022]
Abstract
Dry eye disease (DED) has a prevalence of between 5% and 50%, depending on the diagnostic criteria used and population under study. However, it remains one of the most underdiagnosed and undertreated conditions in ophthalmology. Many tests used in the diagnosis of DED rely on an experienced observer for image interpretation, which may be considered subjective and result in variation in diagnosis. Since artificial intelligence (AI) systems are capable of advanced problem solving, use of such techniques could lead to more objective diagnosis. Although the term 'AI' is commonly used, recent success in its applications to medicine is mainly due to advancements in the sub-field of machine learning, which has been used to automatically classify images and predict medical outcomes. Powerful machine learning techniques have been harnessed to understand nuances in patient data and medical images, aiming for consistent diagnosis and stratification of disease severity. This is the first literature review on the use of AI in DED. We provide a brief introduction to AI, report its current use in DED research and its potential for application in the clinic. Our review found that AI has been employed in a wide range of DED clinical tests and research applications, primarily for interpretation of interferometry, slit-lamp and meibography images. While initial results are promising, much work is still needed on model development, clinical testing and standardisation.
Affiliation(s)
- Andrea M Storås: SimulaMet, Oslo, Norway; Department of Computer Science, Oslo Metropolitan University, Norway
- Jakob Grauslund: Department of Ophthalmology, Odense University Hospital, Odense, Denmark; Department of Clinical Research, University of Southern Denmark, Odense, Denmark; Department of Ophthalmology, Vestfold University Trust, Tønsberg, Norway
- Hugo L Hammer: SimulaMet, Oslo, Norway; Department of Computer Science, Oslo Metropolitan University, Norway
- Anis Yazidi: Department of Computer Science, Oslo Metropolitan University, Norway
- Pål Halvorsen: SimulaMet, Oslo, Norway; Department of Computer Science, Oslo Metropolitan University, Norway
- Tor P Utheim: Department of Computer Science, Oslo Metropolitan University, Norway; Department of Medical Biochemistry, Oslo University Hospital, Norway; Department of Ophthalmology, Oslo University Hospital, Norway
47
Cai S, Han IC, Scott AW. Artificial intelligence for improving sickle cell retinopathy diagnosis and management. Eye (Lond) 2021; 35:2675-2684. [PMID: 33958737 PMCID: PMC8452674 DOI: 10.1038/s41433-021-01556-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/01/2021] [Revised: 03/17/2021] [Accepted: 04/13/2021] [Indexed: 02/04/2023] Open
Abstract
Sickle cell retinopathy is often initially asymptomatic even in proliferative stages, but can progress to cause vision loss due to vitreous haemorrhages or tractional retinal detachments. Challenges with access and adherence to screening dilated fundus examinations, particularly in medically underserved areas where the burden of sickle cell disease is highest, highlight the need for novel approaches to screening for patients with vision-threatening sickle cell retinopathy. This article reviews the existing literature on and suggests future research directions for coupling artificial intelligence with multimodal retinal imaging to expand access to automated, accurate, imaging-based screening for sickle cell retinopathy. Given the variability in retinal specialist practice patterns with regards to monitoring and treatment of sickle cell retinopathy, we also discuss recent progress toward development of machine learning models that can quantitatively track disease progression over time. These artificial intelligence-based applications have great potential for informing evidence-based and resource-efficient clinical diagnosis and management of sickle cell retinopathy.
Affiliation(s)
- Sophie Cai: Retina Division, Duke Eye Center, Durham, NC, USA
- Ian C Han: Institute for Vision Research, Department of Ophthalmology and Visual Sciences, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Adrienne W Scott: Retina Division, Wilmer Eye Institute, Johns Hopkins University School of Medicine and Hospital, Baltimore, MD, USA
48
Freeman K, Geppert J, Stinton C, Todkill D, Johnson S, Clarke A, Taylor-Phillips S. Use of artificial intelligence for image analysis in breast cancer screening programmes: systematic review of test accuracy. BMJ 2021; 374:n1872. [PMID: 34470740 PMCID: PMC8409323 DOI: 10.1136/bmj.n1872] [Citation(s) in RCA: 113] [Impact Index Per Article: 28.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 01/28/2023]
Abstract
OBJECTIVE To examine the accuracy of artificial intelligence (AI) for the detection of breast cancer in mammography screening practice.
DESIGN Systematic review of test accuracy studies.
DATA SOURCES Medline, Embase, Web of Science, and Cochrane Database of Systematic Reviews from 1 January 2010 to 17 May 2021.
ELIGIBILITY CRITERIA Studies reporting test accuracy of AI algorithms, alone or in combination with radiologists, to detect cancer in women's digital mammograms in screening practice, or in test sets. Reference standard was biopsy with histology or follow-up (for screen negative women). Outcomes included test accuracy and cancer type detected.
STUDY SELECTION AND SYNTHESIS Two reviewers independently assessed articles for inclusion and assessed the methodological quality of included studies using the QUality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool. A single reviewer extracted data, which were checked by a second reviewer. Narrative data synthesis was performed.
RESULTS Twelve studies totalling 131 822 screened women were included. No prospective studies measuring test accuracy of AI in screening practice were found. Studies were of poor methodological quality. Three retrospective studies compared AI systems with the clinical decisions of the original radiologist, including 79 910 women, of whom 1878 had screen detected cancer or interval cancer within 12 months of screening. Thirty-four (94%) of 36 AI systems evaluated in these studies were less accurate than a single radiologist, and all were less accurate than the consensus of two or more radiologists. Five smaller studies (1086 women, 520 cancers) at high risk of bias and low generalisability to the clinical context reported that all five evaluated AI systems (as standalone to replace the radiologist or as a reader aid) were more accurate than a single radiologist reading a test set in the laboratory. In three studies, AI used for triage screened out 53%, 45%, and 50% of women at low risk but also 10%, 4%, and 0% of cancers detected by radiologists.
CONCLUSIONS Current evidence for AI does not yet allow judgement of its accuracy in breast cancer screening programmes, and it is unclear where on the clinical pathway AI might be of most benefit. AI systems are not sufficiently specific to replace radiologist double reading in screening programmes. Promising results in smaller studies are not replicated in larger studies. Prospective studies are required to measure the effect of AI in clinical practice. Such studies will require clear stopping rules to ensure that AI does not reduce programme specificity.
STUDY REGISTRATION Protocol registered as PROSPERO CRD42020213590.
Affiliation(s)
- Karoline Freeman
- Division of Health Sciences, University of Warwick, Coventry, UK
- Julia Geppert
- Division of Health Sciences, University of Warwick, Coventry, UK
- Chris Stinton
- Division of Health Sciences, University of Warwick, Coventry, UK
- Daniel Todkill
- Division of Health Sciences, University of Warwick, Coventry, UK
- Samantha Johnson
- Division of Health Sciences, University of Warwick, Coventry, UK
- Aileen Clarke
- Division of Health Sciences, University of Warwick, Coventry, UK
49
Nuzzi R, Boscia G, Marolo P, Ricardi F. The Impact of Artificial Intelligence and Deep Learning in Eye Diseases: A Review. Front Med (Lausanne) 2021; 8:710329. [PMID: 34527682 PMCID: PMC8437147 DOI: 10.3389/fmed.2021.710329]
Abstract
Artificial intelligence (AI) is a subset of computer science dealing with the development and training of algorithms that try to replicate human intelligence. We present a clinical overview of the basic principles of AI that are fundamental to appreciating its application to ophthalmology practice. Here, we review the most common eye diseases, focusing on some of the potential challenges and limitations emerging with the development and application of this new technology to ophthalmology.
Affiliation(s)
- Raffaele Nuzzi
- Ophthalmology Unit, A.O.U. City of Health and Science of Turin, Department of Surgical Sciences, University of Turin, Turin, Italy
50
Diagnostic performance of deep-learning-based screening methods for diabetic retinopathy in primary care-A meta-analysis. PLoS One 2021; 16:e0255034. [PMID: 34375355 PMCID: PMC8354436 DOI: 10.1371/journal.pone.0255034]
Abstract
Background Diabetic retinopathy (DR) affects 10–24% of patients with type 1 or type 2 diabetes mellitus in the primary care (PC) sector. As early detection is crucial for treatment, deep learning screening methods in the PC setting could aid in an accurate and timely diagnosis.
Purpose The purpose of this meta-analysis was to determine the current state of knowledge regarding deep learning (DL) screening methods for DR in PC.
Data sources A systematic literature search was conducted using Medline, Web of Science, and Scopus to identify suitable studies.
Study selection Suitable studies were selected by two researchers independently. Studies assessing DL methods and the suitability of these screening systems (diagnostic parameters such as sensitivity and specificity, information on datasets and setting) in PC were selected. Excluded were studies focusing on lesions, applying conventional diagnostic imaging tools, conducted in secondary or tertiary care, and all publication types other than original research studies on human subjects.
Data extraction The following data were extracted from included studies: authors, title, year of publication, objectives, participants, setting, type of intervention/method, reference standard, grading scale, outcome measures, dataset, risk of bias, and performance measures.
Data synthesis and conclusion The summed sensitivity of all included studies was 87% and specificity was 90%. Given a DR prevalence of 10% in patients with type 2 DM in PC, the negative predictive value is 98% while the positive predictive value is 49%.
Limitations Selected studies showed high variation in sample size and in the quality and quantity of available data.
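The predictive values quoted in the synthesis above follow directly from Bayes' rule applied to the reported sensitivity (87%), specificity (90%), and prevalence (10%). A minimal sketch of that arithmetic (the function name and structure are illustrative, not taken from the paper):

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Derive PPV and NPV from test characteristics via Bayes' rule."""
    tp = sensitivity * prevalence                # true positives per unit population
    fn = (1 - sensitivity) * prevalence          # false negatives (missed cases)
    tn = specificity * (1 - prevalence)          # true negatives
    fp = (1 - specificity) * (1 - prevalence)    # false positives
    ppv = tp / (tp + fp)   # probability of disease given a positive screen
    npv = tn / (tn + fn)   # probability of no disease given a negative screen
    return ppv, npv

# Figures reported in the abstract: sensitivity 87%, specificity 90%, DR prevalence 10%
ppv, npv = predictive_values(0.87, 0.90, 0.10)
print(f"PPV = {ppv:.0%}, NPV = {npv:.0%}")  # prints: PPV = 49%, NPV = 98%
```

The low PPV despite high specificity illustrates why the abstract's 10% prevalence matters: at low disease prevalence, false positives dominate the positive screens.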