51
Vilela MAP, Arrigo A, Parodi MB, da Silva Mengue C. Smartphone Eye Examination: Artificial Intelligence and Telemedicine. Telemed J E Health 2024; 30:341-353. PMID: 37585566. DOI: 10.1089/tmj.2023.0041.
Abstract
Background: The current medical scenario is closely linked to recent progress in telecommunications, photodocumentation, and artificial intelligence (AI). Smartphone eye examination may represent a promising tool in this technological spectrum, of special interest for primary health care services. Obtaining fundus images with this technique has improved and democratized the teaching of fundoscopy and, in particular, contributes greatly to screening diseases with high rates of blindness. Eye examination using smartphones is essentially a cheap and safe method, and thus supports public policies on population screening. This review aims to provide an update on the use of this resource and its future prospects, especially as a screening and ophthalmic diagnostic tool. Methods: We surveyed major published advances in retinal and anterior segment analysis using AI. We performed an electronic search of the Medical Literature Analysis and Retrieval System Online (MEDLINE), EMBASE, and the Cochrane Library for published literature without a date limit. We included studies that compared the diagnostic accuracy of smartphone ophthalmoscopy for detecting prevalent diseases against an accurate or commonly employed reference standard. Results: There are few databases with complete metadata providing demographic data, and few with sufficient images involving current or new therapies. These databases contain images captured using different systems and formats, and information is often excluded without essential detail on the reasons for exclusion, further distancing them from real-life conditions. The safety, portability, low cost, and reproducibility of smartphone eye images are discussed in several studies, with encouraging results.
Conclusions: The high level of agreement between conventional and smartphone methods provides a powerful arsenal for screening and early diagnosis of the main causes of blindness, such as cataract, glaucoma, diabetic retinopathy, and age-related macular degeneration. In addition to streamlining the medical workflow and benefiting public health policies, smartphone eye examination can make safe, quality assessment available to the population.
Affiliation(s)
- Alessandro Arrigo
  - Department of Ophthalmology, Scientific Institute San Raffaele, Milan, Italy
  - University Vita-Salute, Milan, Italy
- Maurizio Battaglia Parodi
  - Department of Ophthalmology, Scientific Institute San Raffaele, Milan, Italy
  - University Vita-Salute, Milan, Italy
- Carolina da Silva Mengue
  - Post-Graduation Ophthalmological School, Ivo Corrêa-Meyer/Cardiology Institute, Porto Alegre, Brazil
52
Ong KTI, Kwon T, Jang H, Kim M, Lee CS, Byeon SH, Kim SS, Yeo J, Choi EY. Multitask Deep Learning for Joint Detection of Necrotizing Viral and Noninfectious Retinitis From Common Blood and Serology Test Data. Invest Ophthalmol Vis Sci 2024; 65:5. PMID: 38306107. PMCID: PMC10851173. DOI: 10.1167/iovs.65.2.5.
Abstract
Purpose Necrotizing viral retinitis is a serious eye infection that requires immediate treatment to prevent permanent vision loss. Uncertain clinical suspicion can result in delayed diagnosis, inappropriate administration of corticosteroids, or repeated intraocular sampling. To quickly and accurately distinguish between viral and noninfectious retinitis, we aimed to develop deep learning (DL) models using only noninvasive blood test data. Methods This cross-sectional study trained DL models using common blood and serology test data from 3080 patients (noninfectious uveitis of the posterior segment [NIU-PS] = 2858, acute retinal necrosis [ARN] = 66, cytomegalovirus [CMV] retinitis = 156). Following the development of separate base DL models for ARN and CMV retinitis, multitask learning (MTL) was employed to enable simultaneous discrimination. Advanced MTL models incorporating adversarial training were used to enhance DL feature extraction from the small, imbalanced data. We evaluated model performance, disease-specific important features, and the causal relationship between DL features and detection results. Results The presented models all achieved excellent detection performance, with the adversarial MTL model achieving the highest areas under the receiver operating characteristic curve (0.932 for ARN and 0.982 for CMV retinitis). Significant features for ARN detection included varicella-zoster virus (VZV) immunoglobulin M (IgM), herpes simplex virus immunoglobulin G, and neutrophil count, while for CMV retinitis they encompassed VZV IgM, CMV IgM, and lymphocyte count. The adversarial MTL model exhibited substantial changes in detection outcomes when the key features were contaminated, indicating stronger causality between DL features and detection results. Conclusions The adversarial MTL model, using blood test data, may serve as a reliable adjunct for the expedited diagnosis of ARN, CMV retinitis, and NIU-PS simultaneously in real clinical settings.
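The detection performance above is reported as area under the receiver operating characteristic curve (AUC). A minimal sketch of how an AUC can be computed from model scores via its rank (Mann-Whitney) interpretation; the labels and scores below are made up for illustration and are not the study's data:

```python
def roc_auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case is scored higher than a randomly
    chosen negative case (ties count as half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: higher score = more suspicious for viral retinitis.
labels = [1, 1, 0, 0, 0]
scores = [0.9, 0.6, 0.7, 0.2, 0.1]
print(roc_auc(labels, scores))  # 5 of 6 positive-negative pairs correctly ranked
```

In practice AUC is computed by a library routine, but the rank formulation makes clear why an AUC of 0.982 means near-perfect separation of cases from controls.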
Affiliation(s)
- Kai Tzu-iunn Ong
  - Department of Artificial Intelligence, Yonsei University College of Computing, Seoul, Republic of Korea
- Taeyoon Kwon
  - Department of Artificial Intelligence, Yonsei University College of Computing, Seoul, Republic of Korea
- Harok Jang
  - Department of Artificial Intelligence, Yonsei University College of Computing, Seoul, Republic of Korea
- Min Kim
  - Department of Ophthalmology, Institute of Vision Research, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
- Christopher Seungkyu Lee
  - Department of Ophthalmology, Institute of Vision Research, Severance Eye Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
- Suk Ho Byeon
  - Department of Ophthalmology, Institute of Vision Research, Severance Eye Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
- Sung Soo Kim
  - Department of Ophthalmology, Institute of Vision Research, Severance Eye Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
- Jinyoung Yeo
  - Department of Artificial Intelligence, Yonsei University College of Computing, Seoul, Republic of Korea
- Eun Young Choi
  - Department of Ophthalmology, Institute of Vision Research, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
53
Li X, Owen LA, Taylor KD, Ostmo S, Chen YDI, Coyner AS, Sonmez K, Hartnett ME, Guo X, Ipp E, Roll K, Genter P, Chan RVP, DeAngelis MM, Chiang MF, Campbell JP, Rotter JI. Genome-wide association identifies novel ROP risk loci in a multiethnic cohort. Commun Biol 2024; 7:107. PMID: 38233474. PMCID: PMC10794688. DOI: 10.1038/s42003-023-05743-9.
Abstract
We conducted a genome-wide association study (GWAS) in a multiethnic cohort of 920 infants at risk for retinopathy of prematurity (ROP), a major cause of childhood blindness, identifying 1 locus at the genome-wide significance level (p < 5×10⁻⁸) and 9 with significance of p < 5×10⁻⁶ for ROP ≥ stage 3. The most significant locus, rs2058019, reached genome-wide significance within the full multiethnic cohort (p = 4.96×10⁻⁹), with Hispanic and European ancestry infants driving the association. The lead single-nucleotide polymorphism (SNP) falls in an intronic region of the glioma-associated oncogene family zinc finger 3 (GLI3) gene. The relevance of GLI3 and other top-associated genes to human ocular disease was substantiated through in-silico extension analyses, genetic risk score analysis, and expression profiling in human donor eye tissues. Thus, we identify a novel locus at GLI3 with relevance to retinal biology, supporting genetic susceptibility for ROP risk with possible variability by race and ethnicity.
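The abstract mentions a genetic risk score analysis. An additive genetic risk score is conventionally the weighted sum of an individual's risk-allele dosages (0, 1, or 2 copies per SNP) times per-SNP effect sizes. A minimal sketch; only rs2058019 is named in the abstract, and all weights and the other SNP names below are invented for illustration:

```python
def genetic_risk_score(dosages, weights):
    """Additive GRS: sum over SNPs of (effect size x risk-allele dosage)."""
    assert dosages.keys() == weights.keys()
    return sum(weights[snp] * dosages[snp] for snp in dosages)

# Hypothetical per-allele log-odds weights (not the study's estimates):
weights = {"rs2058019": 0.40, "rs_hypoA": 0.15, "rs_hypoB": 0.22}
# One infant's risk-allele counts; weighted sum = 0.40*2 + 0.15*0 + 0.22*1
infant = {"rs2058019": 2, "rs_hypoA": 0, "rs_hypoB": 1}
print(genetic_risk_score(infant, weights))
```

Scores computed this way can then be compared between affected and unaffected infants to test whether the top loci jointly stratify risk.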
Affiliation(s)
- Xiaohui Li
  - Institute for Translational Genomics and Population Sciences, The Lundquist Institute for Biomedical Innovation; Department of Pediatrics, Harbor-UCLA Medical Center, Torrance, CA, USA
- Leah A Owen
  - Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, USA
  - Department of Population Health Sciences, University of Utah, Salt Lake City, UT, USA
  - Department of Obstetrics and Gynecology, University of Utah, Salt Lake City, UT, USA
  - Department of Ophthalmology, University at Buffalo, The State University of New York, Buffalo, NY, USA
- Kent D Taylor
  - Institute for Translational Genomics and Population Sciences, The Lundquist Institute for Biomedical Innovation; Department of Pediatrics, Harbor-UCLA Medical Center, Torrance, CA, USA
- Susan Ostmo
  - Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
- Yii-Der Ida Chen
  - Institute for Translational Genomics and Population Sciences, The Lundquist Institute for Biomedical Innovation; Department of Pediatrics, Harbor-UCLA Medical Center, Torrance, CA, USA
- Aaron S Coyner
  - Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
- Kemal Sonmez
  - Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
- Xiuqing Guo
  - Institute for Translational Genomics and Population Sciences, The Lundquist Institute for Biomedical Innovation; Department of Pediatrics, Harbor-UCLA Medical Center, Torrance, CA, USA
- Eli Ipp
  - Division of Endocrinology and Metabolism, Department of Medicine, The Lundquist Institute for Biomedical Innovation at Harbor-UCLA Medical Center, Torrance, CA, USA
- Kathryn Roll
  - Institute for Translational Genomics and Population Sciences, The Lundquist Institute for Biomedical Innovation; Department of Pediatrics, Harbor-UCLA Medical Center, Torrance, CA, USA
- Pauline Genter
  - Division of Endocrinology and Metabolism, Department of Medicine, The Lundquist Institute for Biomedical Innovation at Harbor-UCLA Medical Center, Torrance, CA, USA
- R V Paul Chan
  - Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, IL, USA
- Margaret M DeAngelis
  - Institute for Translational Genomics and Population Sciences, The Lundquist Institute for Biomedical Innovation; Department of Pediatrics, Harbor-UCLA Medical Center, Torrance, CA, USA
  - Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, USA
  - Department of Population Health Sciences, University of Utah, Salt Lake City, UT, USA
  - Department of Ophthalmology, University at Buffalo, The State University of New York, Buffalo, NY, USA
  - Department of Biochemistry, Jacobs School of Medicine and Biomedical Sciences, University at Buffalo/State University of New York (SUNY), Buffalo, NY, USA
  - Department of Neuroscience, Jacobs School of Medicine and Biomedical Sciences, University at Buffalo/State University of New York (SUNY), Buffalo, NY, USA
  - Department of Genetics, Jacobs School of Medicine and Biomedical Sciences, University at Buffalo/State University of New York (SUNY), Buffalo, NY, USA
- Michael F Chiang
  - National Eye Institute, National Institutes of Health, Bethesda, MD, USA
  - National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- J Peter Campbell
  - Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
- Jerome I Rotter
  - Institute for Translational Genomics and Population Sciences, The Lundquist Institute for Biomedical Innovation; Department of Pediatrics, Harbor-UCLA Medical Center, Torrance, CA, USA
54
Li B, Chen H, Yu W, Zhang M, Lu F, Ma J, Hao Y, Li X, Hu B, Shen L, Mao J, He X, Wang H, Ding D, Li X, Chen Y. The performance of a deep learning system in assisting junior ophthalmologists in diagnosing 13 major fundus diseases: a prospective multi-center clinical trial. NPJ Digit Med 2024; 7:8. PMID: 38212607. PMCID: PMC10784504. DOI: 10.1038/s41746-023-00991-9.
Abstract
Artificial intelligence (AI)-based diagnostic systems have been reported to improve fundus disease screening in previous studies. This multicenter, prospective, self-controlled clinical trial aimed to evaluate the diagnostic performance of a deep learning system (DLS) in assisting junior ophthalmologists in detecting 13 major fundus diseases. A total of 1493 fundus images from 748 patients were prospectively collected from five tertiary hospitals in China. Nine junior ophthalmologists were trained and annotated the images with or without the suggestions proposed by the DLS. Diagnostic performance was evaluated among three groups: the DLS-assisted junior ophthalmologist group (test group), the junior ophthalmologist group (control group), and the DLS group. Diagnostic consistency was 84.9% (95% CI, 83.0%-86.9%), 72.9% (95% CI, 70.3%-75.6%), and 85.5% (95% CI, 83.5%-87.4%) in the test, control, and DLS groups, respectively. With the help of the proposed DLS, the diagnostic consistency of junior ophthalmologists improved by approximately 12% (95% CI, 9.1%-14.9%) with statistical significance (P < 0.001). For the detection of the 13 diseases, the test group achieved significantly higher sensitivities (72.2%-100.0%) and comparable specificities (90.8%-98.7%) compared with the control group (sensitivities, 50%-100%; specificities, 96.7%-99.8%). The DLS group showed performance similar to the test group in the detection of any fundus abnormality (sensitivity, 95.7%; specificity, 87.2%) and of each of the 13 diseases (sensitivity, 83.3%-100.0%; specificity, 89.0%-98.0%). The proposed DLS provides a novel approach for the automatic detection of 13 major fundus diseases with high diagnostic consistency and helped improve the performance of junior ophthalmologists, in particular reducing the risk of missed diagnoses. ClinicalTrials.gov NCT04723160.
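The diagnostic consistency figures above are proportions reported with 95% confidence intervals. A minimal sketch of the normal-approximation (Wald) interval commonly used for such reporting; the counts below are illustrative only, and the trial may have used a different interval method:

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Point estimate and Wald 95% CI for a proportion:
    p +/- z * sqrt(p*(1-p)/n)."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, p - half, p + half

# Illustrative: about 84.9% of 1493 gradings consistent with the reference.
p, lo, hi = proportion_ci(successes=1268, n=1493)
print(f"{p:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```

With roughly 1268 of 1493 consistent gradings this reproduces an interval close to the reported 83.0%-86.9%, showing why the interval narrows as the image count grows.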
Affiliation(s)
- Bing Li
  - Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
  - Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
- Huan Chen
  - Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
  - Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
- Weihong Yu
  - Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
  - Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
- Ming Zhang
  - Department of Ophthalmology, West China Hospital, Sichuan University, Chengdu, China
- Fang Lu
  - Department of Ophthalmology, West China Hospital, Sichuan University, Chengdu, China
- Jingxue Ma
  - Department of Ophthalmology, Second Hospital of Hebei Medical University, Shijiazhuang, China
- Yuhua Hao
  - Department of Ophthalmology, Second Hospital of Hebei Medical University, Shijiazhuang, China
- Xiaorong Li
  - Department of Retina, Tianjin Medical University Eye Hospital, Tianjin, China
- Bojie Hu
  - Department of Retina, Tianjin Medical University Eye Hospital, Tianjin, China
- Lijun Shen
  - Department of Retina Center, Affiliated Eye Hospital of Wenzhou Medical University, Hangzhou, Zhejiang Province, China
- Jianbo Mao
  - Department of Retina Center, Affiliated Eye Hospital of Wenzhou Medical University, Hangzhou, Zhejiang Province, China
- Xixi He
  - School of Information Science and Technology, North China University of Technology, Beijing, China
  - Beijing Key Laboratory on Integration and Analysis of Large-scale Stream Data, Beijing, China
- Hao Wang
  - Visionary Intelligence Ltd., Beijing, China
- Xirong Li
  - MoE Key Lab of DEKE, Renmin University of China, Beijing, China
- Youxin Chen
  - Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
  - Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
55
Nguyen TTP, Young BK, Coyner A, Ostmo S, Chan RVP, Kalpathy-Cramer J, Chiang MF, Campbell JP. Discrepancies in Diagnosis of Treatment-Requiring Retinopathy of Prematurity. Ophthalmol Retina 2024; 8:88-91. PMID: 37689182. PMCID: PMC10841666. DOI: 10.1016/j.oret.2023.09.001.
Abstract
In a multicenter cohort, 52% of eyes treated for retinopathy of prematurity did not require intervention per evaluation by an independent reading center. An artificial intelligence system detected worse vascular severity in the group designated as treatment-requiring by the reading center.
Affiliation(s)
- Thanh-Tin P Nguyen
  - Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Benjamin K Young
  - Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Aaron Coyner
  - Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
  - Department of Biomedical Engineering, Oregon Health & Science University, Portland, Oregon
- Susan Ostmo
  - Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- R V Paul Chan
  - Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois
- Michael F Chiang
  - National Eye Institute, National Institutes of Health, Bethesda, Maryland
  - National Library of Medicine, National Institutes of Health, Bethesda, Maryland
- J Peter Campbell
  - Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
56
Sullivan BA, Beam K, Vesoulis ZA, Aziz KB, Husain AN, Knake LA, Moreira AG, Hooven TA, Weiss EM, Carr NR, El-Ferzli GT, Patel RM, Simek KA, Hernandez AJ, Barry JS, McAdams RM. Transforming neonatal care with artificial intelligence: challenges, ethical consideration, and opportunities. J Perinatol 2024; 44:1-11. PMID: 38097685. PMCID: PMC10872325. DOI: 10.1038/s41372-023-01848-5.
Abstract
Artificial intelligence (AI) offers tremendous potential to transform neonatology through improved diagnostics, personalized treatments, and earlier prevention of complications. However, many challenges must be addressed before AI is ready for clinical practice. This review defines key AI concepts and discusses the ethical considerations and implicit biases associated with AI. Next, we review examples from the literature of AI already being explored in neonatology research and suggest future potential applications. Examples discussed in this article include predicting outcomes such as sepsis, optimizing oxygen therapy, and image analysis to detect brain injury and retinopathy of prematurity. Realizing AI's potential necessitates collaboration between diverse stakeholders across the entire process of incorporating AI tools in the NICU to address testability, usability, bias, and transparency. With multi-center and multi-disciplinary collaboration, AI holds tremendous potential to transform the future of neonatology.
Affiliation(s)
- Brynne A Sullivan
  - Division of Neonatology, Department of Pediatrics, University of Virginia School of Medicine, Charlottesville, VA, USA
- Kristyn Beam
  - Department of Neonatology, Beth Israel Deaconess Medical Center, Boston, MA, USA
- Zachary A Vesoulis
  - Division of Newborn Medicine, Department of Pediatrics, Washington University in St. Louis, St. Louis, MO, USA
- Khyzer B Aziz
  - Division of Neonatology, Department of Pediatrics, Johns Hopkins University, Baltimore, MD, USA
- Ameena N Husain
  - Division of Neonatology, Department of Pediatrics, University of Utah School of Medicine, Salt Lake City, UT, USA
- Lindsey A Knake
  - Division of Neonatology, Department of Pediatrics, University of Iowa, Iowa City, IA, USA
- Alvaro G Moreira
  - Division of Neonatology, Department of Pediatrics, University of Texas Health Science Center at San Antonio, San Antonio, TX, USA
- Thomas A Hooven
  - Division of Newborn Medicine, Department of Pediatrics, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Elliott M Weiss
  - Department of Pediatrics, University of Washington School of Medicine, Seattle, WA, USA
  - Treuman Katz Center for Pediatric Bioethics and Palliative Care, Seattle Children's Research Institute, Seattle, WA, USA
- Nicholas R Carr
  - Division of Neonatology, Department of Pediatrics, University of Utah School of Medicine, Salt Lake City, UT, USA
- George T El-Ferzli
  - Division of Neonatology, Department of Pediatrics, Ohio State University, Nationwide Children's Hospital, Columbus, OH, USA
- Ravi M Patel
  - Division of Neonatology, Department of Pediatrics, Emory University School of Medicine and Children's Healthcare of Atlanta, Atlanta, GA, USA
- Kelsey A Simek
  - Division of Neonatology, Department of Pediatrics, University of Utah School of Medicine, Salt Lake City, UT, USA
- Antonio J Hernandez
  - Division of Neonatology, Department of Pediatrics, University of Texas Health Science Center at San Antonio, San Antonio, TX, USA
- James S Barry
  - Division of Neonatology, Department of Pediatrics, University of Colorado School of Medicine, Aurora, CO, USA
- Ryan M McAdams
  - Department of Pediatrics, University of Wisconsin School of Medicine and Public Health, Madison, WI, USA
57
Yang X, Huang K, Yang D, Zhao W, Zhou X. Biomedical Big Data Technologies, Applications, and Challenges for Precision Medicine: A Review. Glob Chall 2024; 8:2300163. PMID: 38223896. PMCID: PMC10784210. DOI: 10.1002/gch2.202300163.
Abstract
The explosive growth of biomedical Big Data presents both significant opportunities and challenges for knowledge discovery and translational applications within precision medicine. Efficient management, analysis, and interpretation of big data can pave the way for groundbreaking advancements in precision medicine. However, the unprecedented strides in the automated collection of large-scale molecular and clinical data have also introduced formidable challenges in terms of data analysis and interpretation, necessitating the development of novel computational approaches. Potential challenges include the curse of dimensionality, data heterogeneity, missing data, class imbalance, and scalability issues. This overview article focuses on the recent progress and breakthroughs in the application of big data within precision medicine. Key aspects are summarized, including content, data sources, technologies, tools, challenges, and existing gaps. Nine fields are discussed: data warehouse and data management; electronic medical records; biomedical imaging informatics; artificial intelligence-aided surgical design and surgery optimization; omics data; health monitoring data; knowledge graphs; public health informatics; and security and privacy.
Affiliation(s)
- Xue Yang
  - Department of Pancreatic Surgery and West China Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu 610041, China
- Kexin Huang
  - Department of Pancreatic Surgery and West China Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu 610041, China
- Dewei Yang
  - College of Advanced Manufacturing Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400000, China
- Weiling Zhao
  - Center for Systems Medicine, School of Biomedical Informatics, UTHealth at Houston, Houston, TX 77030, USA
- Xiaobo Zhou
  - Center for Systems Medicine, School of Biomedical Informatics, UTHealth at Houston, Houston, TX 77030, USA
58
Chen JS, Marra KV, Robles-Holmes HK, Ly KB, Miller J, Wei G, Aguilar E, Bucher F, Ideguchi Y, Coyner AS, Ferrara N, Campbell JP, Friedlander M, Nudleman E. Applications of Deep Learning: Automated Assessment of Vascular Tortuosity in Mouse Models of Oxygen-Induced Retinopathy. Ophthalmol Sci 2024; 4:100338. PMID: 37869029. PMCID: PMC10585474. DOI: 10.1016/j.xops.2023.100338.
Abstract
Objective To develop a generative adversarial network (GAN) to segment major blood vessels from retinal flat-mount images in oxygen-induced retinopathy (OIR) and to demonstrate the utility of these GAN-generated vessel segmentations in quantifying vascular tortuosity. Design Development and validation of a GAN. Subjects Three datasets containing 1084, 50, and 20 flat-mount mouse retina images, with various stains and ages at sacrifice, acquired from previously published manuscripts. Methods Four graders manually segmented major blood vessels from flat-mount images of retinas from OIR mice. Pix2Pix, a high-resolution GAN, was trained on 984 pairs of raw flat-mount images and manual vessel segmentations and then tested on 100 and 50 image pairs from a held-out and an external test set, respectively. GAN-generated and manual vessel segmentations were then used as input into a previously published algorithm (iROP-Assist) to generate a vascular cumulative tortuosity index (CTI) for 20 image pairs containing mouse eyes treated with aflibercept versus control. Main Outcome Measures Mean Dice coefficients were used to compare segmentation accuracy between the GAN-generated and manually annotated segmentation maps. For the image pairs treated with aflibercept versus control, mean CTIs were also calculated for both GAN-generated and manual vessel maps. Statistical significance was evaluated using Wilcoxon signed-rank tests (P ≤ 0.05 threshold for significance). Results The Dice coefficient for the GAN-generated versus manual vessel segmentations was 0.75 ± 0.27 and 0.77 ± 0.17 for the held-out test set and external test set, respectively. The mean CTI generated from the GAN-generated and manual vessel segmentations was 1.12 ± 0.07 versus 1.03 ± 0.02 (P = 0.003) and 1.06 ± 0.04 versus 1.01 ± 0.01 (P < 0.001), respectively, for eyes treated with aflibercept versus control, demonstrating that vascular tortuosity was rescued by aflibercept when quantified by both GAN-generated and manual vessel segmentations. Conclusions GANs can be used to accurately generate vessel map segmentations from flat-mount images. These vessel maps may be used to evaluate novel metrics of vascular tortuosity in OIR, such as the CTI, and have the potential to accelerate research on treatments for ischemic retinopathies. Financial Disclosures The author(s) have no proprietary or commercial interest in any materials discussed in this article.
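Segmentation agreement above is summarized by the Dice coefficient, 2|A∩B| / (|A| + |B|). A minimal sketch on toy flattened binary masks; real inputs would be 2-D vessel segmentation maps, and the convention for two empty masks varies:

```python
def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks: 2*|A intersect B| /
    (|A| + |B|). Returns 1.0 for perfect overlap; here two empty masks
    are scored 1.0 by convention."""
    assert len(mask_a) == len(mask_b)
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2 * inter / total if total else 1.0

# Toy 1-D "masks" standing in for GAN-generated vs. manual segmentations:
gan    = [1, 1, 0, 1, 0, 0]
manual = [1, 0, 0, 1, 1, 0]
print(dice(gan, manual))  # 2*2 / (3 + 3)
```

Averaging this score over a test set gives the reported mean Dice (e.g., 0.75 ± 0.27 on the held-out set).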
Affiliation(s)
- Jimmy S. Chen
  - Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California San Diego, San Diego, California
- Kyle V. Marra
  - Molecular Medicine, The Scripps Research Institute, San Diego, California
  - School of Medicine, University of California San Diego, San Diego, California
- Hailey K. Robles-Holmes
  - Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California San Diego, San Diego, California
- Kristine B. Ly
  - College of Optometry, Pacific University, Forest Grove, Oregon
- Joseph Miller
  - Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California San Diego, San Diego, California
- Guoqin Wei
  - Molecular Medicine, The Scripps Research Institute, San Diego, California
- Edith Aguilar
  - Molecular Medicine, The Scripps Research Institute, San Diego, California
- Felicitas Bucher
  - Eye Center, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Yoichi Ideguchi
  - Molecular Medicine, The Scripps Research Institute, San Diego, California
- Aaron S. Coyner
  - Casey Eye Institute, Department of Ophthalmology, Oregon Health & Science University, Portland, Oregon
- Napoleone Ferrara
  - Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California San Diego, San Diego, California
- J. Peter Campbell
  - Casey Eye Institute, Department of Ophthalmology, Oregon Health & Science University, Portland, Oregon
- Martin Friedlander
  - Molecular Medicine, The Scripps Research Institute, San Diego, California
- Eric Nudleman
  - Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California San Diego, San Diego, California
59
Soleimani M, Esmaili K, Rahdar A, Aminizadeh M, Cheraqpour K, Tabatabaei SA, Mirshahi R, Bibak-Bejandi Z, Mohammadi SF, Koganti R, Yousefi S, Djalilian AR. From the diagnosis of infectious keratitis to discriminating fungal subtypes; a deep learning-based study. Sci Rep 2023; 13:22200. PMID: 38097753. PMCID: PMC10721811. DOI: 10.1038/s41598-023-49635-8.
Abstract
Infectious keratitis (IK) is a major cause of corneal opacity. IK can be caused by a variety of microorganisms; typically, fungal ulcers carry the worst prognosis. Fungal cases can be subdivided into filamentous and yeast forms, which show fundamental differences. Delays in diagnosis or initiation of treatment increase the risk of ocular complications. Currently, the diagnosis of IK is based mainly on slit-lamp examination and corneal scrapings. Notably, these diagnostic methods have drawbacks, including experience dependency, tissue damage, and time consumption. Artificial intelligence (AI) is designed to mimic and enhance human decision-making. An increasing number of studies have utilized AI in the diagnosis of IK. In this paper, we propose to use AI to diagnose IK (model 1), differentiate between bacterial and fungal keratitis (model 2), and discriminate the filamentous type from the yeast type of fungal cases (model 3). Overall, 9329 slit-lamp photographs gathered from 977 patients were enrolled in the study. The models exhibited remarkable accuracy, with model 1 achieving 99.3%, model 2 reaching 84%, and model 3 reaching 77.5%. In conclusion, our study offers valuable support in the early identification of potential fungal and bacterial keratitis cases and helps enable timely management.
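The three-stage design described above (diagnose IK, then bacterial versus fungal, then filamentous versus yeast) can be sketched as a classifier cascade. The `predict_*` callables below are hypothetical stubs standing in for the paper's three models, not the authors' code:

```python
def triage(image, predict_ik, predict_fungal, predict_yeast):
    """Route a slit-lamp photograph through three binary classifiers,
    mirroring the model 1 -> model 2 -> model 3 hierarchy."""
    if not predict_ik(image):          # model 1: IK vs. no IK
        return "no infectious keratitis"
    if not predict_fungal(image):      # model 2: fungal vs. bacterial
        return "bacterial keratitis"
    if predict_yeast(image):           # model 3: yeast vs. filamentous
        return "fungal keratitis (yeast)"
    return "fungal keratitis (filamentous)"

# Toy stubs for demonstration only:
print(triage("img.jpg", lambda x: True, lambda x: True, lambda x: False))
# prints "fungal keratitis (filamentous)"
```

A cascade like this lets each model train on a narrower, better-balanced question, at the cost of compounding errors from earlier stages.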
Affiliation(s)
- Mohammad Soleimani
- Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL, USA
- Kosar Esmaili
- Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Amir Rahdar
- Department of Telecommunication, Faculty of Electrical Engineering, Shahid Beheshti University, Tehran, Iran
- Mehdi Aminizadeh
- Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Kasra Cheraqpour
- Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Seyed Ali Tabatabaei
- Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Reza Mirshahi
- Eye Research Center, The Five Senses Health Institute, Rasoul Akram Hospital, Iran University of Medical Sciences, Tehran, Iran
- Zahra Bibak-Bejandi
- Translational Ophthalmology Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Seyed Farzad Mohammadi
- Translational Ophthalmology Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Raghuram Koganti
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL, USA
- Siamak Yousefi
- Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, USA
- Department of Genetics, Genomics, and Informatics, University of Tennessee Health Science Center, Memphis, USA
- Ali R Djalilian
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL, USA
- Cornea Service, Stem Cell Therapy and Corneal Tissue Engineering Laboratory, Illinois Eye and Ear Infirmary, 1855 W. Taylor Street, M/C 648, Chicago, IL, 60612, USA
60
Than J, Sim PY, Muttuvelu D, Ferraz D, Koh V, Kang S, Huemer J. Teleophthalmology and retina: a review of current tools, pathways and services. Int J Retina Vitreous 2023; 9:76. [PMID: 38053188 PMCID: PMC10699065 DOI: 10.1186/s40942-023-00502-8]
Abstract
Telemedicine, the use of telecommunication and information technology to deliver healthcare remotely, has evolved beyond recognition since its inception in the 1970s. Advances in telecommunication infrastructure, the advent of the Internet, exponential growth in computing power with associated computer-aided diagnosis, and developments in medical imaging have made telemedicine more accessible and capable than ever before, particularly in ophthalmology. Ever-increasing global demand for ophthalmic services, driven by population growth and ageing, together with an insufficient supply of ophthalmologists, requires new models of healthcare provision that integrate telemedicine; the recent COVID-19 pandemic provided the catalyst for the widespread adoption and acceptance of teleophthalmology. In this review we discuss the history, present, and future application of telemedicine within ophthalmology, and specifically retinal disease. We consider the strengths and limitations of teleophthalmology, its role in screening and in community and hospital management of retinal disease, patient and clinician attitudes, and barriers to its adoption.
Affiliation(s)
- Jonathan Than
- Moorfields Eye Hospital NHS Foundation Trust, 162 City Road, London, UK
- Peng Y Sim
- Moorfields Eye Hospital NHS Foundation Trust, 162 City Road, London, UK
- Danson Muttuvelu
- Department of Ophthalmology, Rigshospitalet, Copenhagen University Hospital, Copenhagen, Denmark
- MitØje ApS/Danske Speciallaeger Aps, Aarhus, Denmark
- Daniel Ferraz
- D'Or Institute for Research and Education (IDOR), São Paulo, Brazil
- Institute of Ophthalmology, University College London, London, UK
- Victor Koh
- Department of Ophthalmology, National University Hospital, Singapore, Singapore
- Swan Kang
- Moorfields Eye Hospital NHS Foundation Trust, 162 City Road, London, UK
- Josef Huemer
- Moorfields Eye Hospital NHS Foundation Trust, 162 City Road, London, UK
- Department of Ophthalmology and Optometry, Kepler University Hospital, Johannes Kepler University, Linz, Austria
61
Hanif A, Prajna NV, Lalitha P, NaPier E, Parker M, Steinkamp P, Keenan JD, Campbell JP, Song X, Redd TK. Assessing the Impact of Image Quality on Deep Learning Classification of Infectious Keratitis. Ophthalmol Sci 2023; 3:100331. [PMID: 37920421 PMCID: PMC10618822 DOI: 10.1016/j.xops.2023.100331]
Abstract
Objective: To investigate the impact of corneal photograph quality on convolutional neural network (CNN) predictions.
Design: A CNN trained to classify bacterial and fungal keratitis was evaluated using photographs of ulcers labeled according to 5 corneal image quality parameters: eccentric gaze direction, abnormal eyelid position, over/under-exposure, inadequate focus, and malpositioned light reflection.
Participants: All eligible subjects with culture- and stain-proven bacterial and/or fungal ulcers presenting to Aravind Eye Hospital in Madurai, India, between January 1, 2021 and December 31, 2021.
Methods: CNN classification performance was compared for each quality parameter, and gradient class activation heatmaps were generated to visualize the regions of highest influence on CNN predictions.
Main Outcome Measures: Areas under the receiver operating characteristic and precision-recall curves were calculated to quantify model performance. Bootstrapped confidence intervals were used for statistical comparisons. Logistic loss was calculated to measure individual prediction accuracy.
Results: The presence of either light reflection or eyelids obscuring the corneal surface was associated with significantly higher CNN performance. No other quality parameter significantly influenced CNN performance. Qualitative review of gradient class activation heatmaps generally revealed the infiltrate as having the highest diagnostic relevance.
Conclusions: The CNN demonstrated expert-level performance regardless of image quality. Future studies may investigate the use of smartphone cameras and image sets with greater variance in image quality to further explore the influence of these parameters on model performance.
Financial Disclosures: Proprietary or commercial disclosure may be found after the references.
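The outcome measures named in this abstract (area under the ROC curve, bootstrapped confidence intervals, and per-prediction logistic loss) can be sketched as follows. This is an illustrative reconstruction on synthetic data, not the study's implementation:

```python
import math
import random

def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) statistic."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_auc_ci(labels, scores, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the AUC."""
    rng = random.Random(seed)
    n = len(labels)
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        ys = [labels[i] for i in idx]
        ss = [scores[i] for i in idx]
        if 0 < sum(ys) < n:          # resample must contain both classes
            stats.append(auc(ys, ss))
    stats.sort()
    lo = stats[int((alpha / 2) * len(stats))]
    hi = stats[int((1 - alpha / 2) * len(stats)) - 1]
    return lo, hi

def logistic_loss(y, p, eps=1e-12):
    """Log loss for one label/probability pair; low loss = confident, correct."""
    p = min(max(p, eps), 1 - eps)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))
```

The per-prediction logistic loss is what lets a study rank individual images (e.g. by quality parameter) rather than only report an aggregate curve.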
Affiliation(s)
- Adam Hanif
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Erin NaPier
- John A. Burns School of Medicine, University of Hawai'i, Honolulu, Hawaii
- Maria Parker
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Peter Steinkamp
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Jeremy D. Keenan
- Francis I. Proctor Foundation, University of California, San Francisco, San Francisco, California
- J. Peter Campbell
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Xubo Song
- Department of Medical Informatics and Clinical Epidemiology and Program of Computer Science and Electrical Engineering, Oregon Health & Science University, Portland, Oregon
- Travis K. Redd
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
62
Xu X, Jia Q, Yuan H, Qiu H, Dong Y, Xie W, Yao Z, Zhang J, Nie Z, Li X, Shi Y, Zou JY, Huang M, Zhuang J. A clinically applicable AI system for diagnosis of congenital heart diseases based on computed tomography images. Med Image Anal 2023; 90:102953. [PMID: 37734140 DOI: 10.1016/j.media.2023.102953]
Abstract
Congenital heart disease (CHD) is the most common type of birth defect. Without timely detection and treatment, approximately one-third of children with CHD die in infancy. However, because of the complicated structure of the heart, early diagnosis of CHD and its subtypes is challenging even for experienced radiologists. Here, we present an artificial intelligence (AI) system that matches the performance of human experts on the critical task of classifying 17 categories of CHD. We collected the first large CT dataset of its kind, acquired on three different CT machines and covering more than 3750 CHD patients over 14 years. Experimental results demonstrate that the system achieves diagnostic accuracy (86.03%) comparable to that of junior cardiovascular radiologists (86.27%) at a World Health Organization-appointed research and cooperation center in China on most types of CHD, and obtains higher sensitivity (82.91%) than junior cardiovascular radiologists (76.18%). The accuracy of our AI system combined with senior radiologists (97.20%) is comparable to that of junior radiologists combined with senior radiologists (97.16%), which is the current clinical routine. Our AI system can further provide 3D visualization of the heart to senior radiologists for interpretation and flexible review, to surgeons for precise intuition of heart structures, and to clinicians for more precise outcome prediction. We demonstrate the potential of our model to be integrated into current clinical practice to improve the diagnosis of CHD globally, especially in regions where experienced radiologists are scarce.
Affiliation(s)
- Xiaowei Xu
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Cardiovascular Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China
- Qianjun Jia
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Cardiovascular Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Department of Catheterization Lab, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Haiyun Yuan
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Cardiovascular Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Department of Cardiovascular Surgery, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China
- Hailong Qiu
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Cardiovascular Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Department of Cardiovascular Surgery, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China
- Yuhao Dong
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Cardiovascular Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Department of Catheterization Lab, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Wen Xie
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Cardiovascular Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Department of Cardiovascular Surgery, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China
- Zeyang Yao
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Cardiovascular Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Department of Cardiovascular Surgery, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China
- Jiawei Zhang
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Cardiovascular Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China
- Zhiqaing Nie
- Guangdong Cardiovascular Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China
- Xiaomeng Li
- Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong Special Administrative Region
- Yiyu Shi
- Computer Science and Engineering, University of Notre Dame, IN, 46656, USA
- James Y Zou
- Department of Computer Science, Stanford University, Stanford, CA, 94305, USA; Department of Electrical Engineering, Stanford University, Stanford, CA, 94305, USA
- Meiping Huang
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Cardiovascular Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Department of Catheterization Lab, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Jian Zhuang
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Cardiovascular Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Department of Cardiovascular Surgery, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China
63
Keles E, Bagci U. The past, current, and future of neonatal intensive care units with artificial intelligence: a systematic review. NPJ Digit Med 2023; 6:220. [PMID: 38012349 PMCID: PMC10682088 DOI: 10.1038/s41746-023-00941-5]
Abstract
Machine learning and deep learning are two subsets of artificial intelligence that involve teaching computers to learn and make decisions from data. Most recent developments in artificial intelligence come from deep learning, which has proven revolutionary in almost all fields, from computer vision to the health sciences, and has significantly changed conventional clinical practice in medicine. Although some subfields of medicine, such as pediatrics, have been relatively slow to receive its benefits, related research in pediatrics has now accumulated to a significant level. Hence, in this paper we review recently developed machine learning and deep learning-based solutions for neonatology. Following the PRISMA 2020 guidelines, we systematically evaluate the roles of both classical machine learning and deep learning in neonatology, define the methodologies, including algorithmic developments, and describe the remaining challenges in the assessment of neonatal diseases. To date, the primary areas of focus for AI applications in neonatology have been survival analysis, neuroimaging, analysis of vital parameters and biosignals, and diagnosis of retinopathy of prematurity. We categorically summarize 106 research articles from 1996 to 2022 and discuss the pros and cons of each. We also discuss possible directions for new AI models and the future of neonatology with the rising power of AI, suggesting roadmaps for the integration of AI into neonatal intensive care units.
Affiliation(s)
- Elif Keles
- Northwestern University, Feinberg School of Medicine, Department of Radiology, Chicago, IL, USA.
- Ulas Bagci
- Northwestern University, Feinberg School of Medicine, Department of Radiology, Chicago, IL, USA
- Northwestern University, Department of Biomedical Engineering, Chicago, IL, USA
- Department of Electrical and Computer Engineering, Chicago, IL, USA
64
Vandevenne MM, Favuzza E, Veta M, Lucenteforte E, Berendschot TT, Mencucci R, Nuijts RM, Virgili G, Dickman MM. Artificial intelligence for detecting keratoconus. Cochrane Database Syst Rev 2023; 11:CD014911. [PMID: 37965960 PMCID: PMC10646985 DOI: 10.1002/14651858.cd014911.pub2]
Abstract
BACKGROUND: Keratoconus remains difficult to diagnose, especially in the early stages. It is a progressive disorder of the cornea that starts at a young age. Diagnosis is based on clinical examination and corneal imaging; in the early stages, when there are no clinical signs, diagnosis depends on the interpretation of corneal imaging (e.g. topography and tomography) by trained cornea specialists. Using artificial intelligence (AI) to analyse corneal images and detect cases of keratoconus could help prevent visual acuity loss and even corneal transplantation. However, a missed diagnosis in people seeking refractive surgery could lead to weakening of the cornea and keratoconus-like ectasia. There is a need for a reliable overview of the accuracy of AI for detecting keratoconus and the applicability of this automated method to the clinical setting.
OBJECTIVES: To assess the diagnostic accuracy of AI algorithms for detecting keratoconus in people presenting with refractive errors, especially those whose vision can no longer be fully corrected with glasses, those seeking corneal refractive surgery, and those suspected of having keratoconus. AI could help ophthalmologists, optometrists, and other eye care professionals to make decisions on referral to cornea specialists. Secondary objectives: to assess the following potential causes of heterogeneity in diagnostic performance across studies.
• Different AI algorithms (e.g. neural networks, decision trees, support vector machines)
• Index test methodology (preprocessing techniques, core AI method, and postprocessing techniques)
• Sources of input to train algorithms (topography and tomography images from Placido disc system, Scheimpflug system, slit-scanning system, or optical coherence tomography (OCT); number of training and testing cases/images; label/endpoint variable used for training)
• Study setting
• Study design
• Ethnicity, or geographic area as its proxy
• Different index test positivity criteria provided by the topography or tomography device
• Reference standard, topography or tomography, one or two cornea specialists
• Definition of keratoconus
• Mean age of participants
• Recruitment of participants
• Severity of keratoconus (clinically manifest or subclinical)
SEARCH METHODS: We searched CENTRAL (which contains the Cochrane Eyes and Vision Trials Register), Ovid MEDLINE, Ovid Embase, OpenGrey, the ISRCTN registry, ClinicalTrials.gov, and the World Health Organization International Clinical Trials Registry Platform (WHO ICTRP). There were no date or language restrictions in the electronic searches. We last searched the electronic databases on 29 November 2022.
SELECTION CRITERIA: We included cross-sectional and diagnostic case-control studies that investigated AI for the diagnosis of keratoconus using topography, tomography, or both. We included studies that diagnosed manifest keratoconus, subclinical keratoconus, or both. The reference standard was the interpretation of topography or tomography images by at least two cornea specialists.
DATA COLLECTION AND ANALYSIS: Two review authors independently extracted the study data and assessed the quality of studies using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. When an article contained multiple AI algorithms, we selected the algorithm with the highest Youden's index. We assessed the certainty of evidence using the GRADE approach.
MAIN RESULTS: We included 63 studies, published between 1994 and 2022, that developed and investigated the accuracy of AI for the diagnosis of keratoconus. The studies used three different units of analysis: eyes, participants, and images. Forty-four studies analysed 23,771 eyes, four studies analysed 3843 participants, and 15 studies analysed 38,832 images. Fifty-four articles evaluated the detection of manifest keratoconus, defined as a cornea showing any clinical sign of keratoconus. The accuracy of AI appears almost perfect, with a summary sensitivity of 98.6% (95% confidence interval (CI) 97.6% to 99.1%) and a summary specificity of 98.3% (95% CI 97.4% to 98.9%). However, accuracy varied across studies and the certainty of the evidence was low. Twenty-eight articles evaluated the detection of subclinical keratoconus, although the definition of subclinical varied; we grouped subclinical keratoconus, forme fruste, and very asymmetrical eyes together. The tests showed good accuracy, with a summary sensitivity of 90.0% (95% CI 84.5% to 93.8%) and a summary specificity of 95.5% (95% CI 91.9% to 97.5%). However, the certainty of the evidence was very low for sensitivity and low for specificity. In both groups, we graded most studies at high risk of bias, with high applicability concerns, in the domain of patient selection, since most were case-control studies. Moreover, we graded the certainty of evidence as low to very low owing to selection bias, inconsistency, and imprecision. We could not explain the heterogeneity between the studies. Sensitivity analyses based on study design, AI algorithm, imaging technique (topography versus tomography), and data source (parameters versus images) showed no differences in the results.
AUTHORS' CONCLUSIONS: AI appears to be a promising triage tool in ophthalmologic practice for diagnosing keratoconus. Test accuracy was very high for manifest keratoconus and slightly lower for subclinical keratoconus, indicating a higher chance of missing a diagnosis in people without clinical signs. This could lead to progression of keratoconus or an erroneous indication for refractive surgery, which would worsen the disease. We are unable to draw clear and reliable conclusions due to the high risk of bias, the unexplained heterogeneity of the results, and high applicability concerns, all of which reduced our confidence in the evidence. Greater standardization in future research would increase the quality of studies and improve comparability between studies.
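The review's rule of selecting, within each article, the algorithm with the highest Youden's index (J = sensitivity + specificity - 1) can be sketched as follows; the candidate algorithms and their operating points below are hypothetical, for illustration only:

```python
def youden_index(sensitivity, specificity):
    """Youden's J statistic: J = sensitivity + specificity - 1, ranging -1..1."""
    return sensitivity + specificity - 1.0

# Hypothetical per-algorithm operating points reported by a single article.
candidates = {
    "neural_network":         (0.986, 0.983),
    "decision_tree":          (0.900, 0.955),
    "support_vector_machine": (0.940, 0.930),
}

# Select the algorithm with the highest Youden's index, per the review's rule.
best = max(candidates, key=lambda name: youden_index(*candidates[name]))
```

J weights sensitivity and specificity equally, which makes it a reasonable single-number tiebreaker when an article reports several algorithms at different operating points.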
Affiliation(s)
- Magali Ms Vandevenne
- University Eye Clinic Maastricht, Maastricht University Medical Center (MUMC+), Maastricht, Netherlands
- Eleonora Favuzza
- Department of Neurosciences, Psychology, Pharmacology and Child Health, University of Florence, Florence, Italy
- Mitko Veta
- Biomedical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
- Ersilia Lucenteforte
- Department of Statistics, Computer Science and Applications «G. Parenti», University of Florence, Florence, Italy
- Tos Tjm Berendschot
- University Eye Clinic Maastricht, Maastricht University Medical Center (MUMC+), Maastricht, Netherlands
- Rita Mencucci
- Department of Neurosciences, Psychology, Pharmacology and Child Health, University of Florence, Florence, Italy
- Rudy Mma Nuijts
- University Eye Clinic Maastricht, Maastricht University Medical Center (MUMC+), Maastricht, Netherlands
- Gianni Virgili
- Department of Neurosciences, Psychology, Pharmacology and Child Health, University of Florence, Florence, Italy
- Queen's University Belfast, Belfast, UK
- Mor M Dickman
- University Eye Clinic Maastricht, Maastricht University Medical Center (MUMC+), Maastricht, Netherlands
65
Hoyek S, Cruz NFSD, Patel NA, Al-Khersan H, Fan KC, Berrocal AM. Identification of novel biomarkers for retinopathy of prematurity in preterm infants by use of innovative technologies and artificial intelligence. Prog Retin Eye Res 2023; 97:101208. [PMID: 37611892 DOI: 10.1016/j.preteyeres.2023.101208]
Abstract
Retinopathy of prematurity (ROP) is a leading cause of preventable vision loss in preterm infants. Although appropriate screening is crucial for early identification and treatment of ROP, current screening remains limited by inter-examiner variability in screening modalities, the absence of local ROP screening protocols in some settings, a paucity of resources, and the increased survival of ever younger and smaller infants. This review summarizes the advances and challenges of current innovative technologies, artificial intelligence (AI), and predictive biomarkers for the diagnosis and management of ROP. We provide a contemporary overview of AI-based models for the detection of ROP, its severity, its progression, and its response to treatment. To address the transition from experimental settings to real-world clinical practice, we review the challenges to clinical implementation of AI for ROP and propose potential solutions. The use of optical coherence tomography (OCT) and OCT angiography (OCTA) is also explored, enabling evaluation of subclinical ROP characteristics that are often imperceptible on fundus examination. Furthermore, we explore several potential biomarkers that may reduce the need for invasive procedures and enhance diagnostic accuracy and treatment efficacy. Finally, we emphasize the need for a symbiotic integration of biologic biomarkers, imaging biomarkers, and AI in ROP screening, in which the robustness of biomarkers for early disease detection is complemented by the predictive precision of AI algorithms.
Affiliation(s)
- Sandra Hoyek
- Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA
- Natasha F S da Cruz
- Bascom Palmer Eye Institute, University of Miami Leonard M. Miller School of Medicine, Miami, FL, USA
- Nimesh A Patel
- Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA
- Hasenin Al-Khersan
- Bascom Palmer Eye Institute, University of Miami Leonard M. Miller School of Medicine, Miami, FL, USA
- Kenneth C Fan
- Bascom Palmer Eye Institute, University of Miami Leonard M. Miller School of Medicine, Miami, FL, USA
- Audina M Berrocal
- Bascom Palmer Eye Institute, University of Miami Leonard M. Miller School of Medicine, Miami, FL, USA
66
Subramaniam A, Orge F, Douglass M, Can B, Monteoliva G, Fried E, Schbib V, Saidman G, Peña B, Ulacia S, Acevedo P, Rollins AM, Wilson DL. Image harmonization and deep learning automated classification of plus disease in retinopathy of prematurity. J Med Imaging (Bellingham) 2023; 10:061107. [PMID: 37794884 PMCID: PMC10546198 DOI: 10.1117/1.jmi.10.6.061107]
Abstract
Purpose: Retinopathy of prematurity (ROP) is a retinal vascular disease affecting premature infants that can culminate in blindness within days if not monitored and treated. A disease stage warranting scrutiny and treatment within ROP is "plus disease," characterized by increased tortuosity and dilation of the posterior retinal blood vessels. ROP is monitored via routine imaging, typically using expensive instruments ($50K to $140K) that are unavailable in low-resource settings at the point of care.
Approach: As part of the smartphone-ROP program to enable referrals to expert physicians, fundus images are acquired using smartphone cameras and inexpensive lenses. We developed methods for artificial intelligence determination of plus disease, consisting of a preprocessing pipeline that enhances vessels and harmonizes images, followed by deep learning classification. A deep learning binary classifier (plus disease versus no plus disease) was developed using GoogLeNet.
Results: Vessel contrast was enhanced by 90% after preprocessing, as assessed by the contrast improvement index. In an image quality evaluation, preprocessed and original images were rated by pediatric ophthalmologists from the US and South America with years of experience diagnosing ROP and plus disease; all participating ophthalmologists agreed or strongly agreed that vessel visibility was improved by preprocessing. Using images from various smartphones, harmonized via preprocessing (e.g., vessel enhancement and size normalization) and augmented in physically reasonable ways (e.g., image rotation), we achieved an area under the ROC curve of 0.9754 for plus disease on a limited dataset.
Conclusions: These promising results indicate the potential for developing algorithms and software to facilitate the use of cell phone images for staging of plus disease.
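The contrast improvement index (CII) cited above to quantify vessel enhancement is generally computed as the ratio of mean local contrast after and before processing. A rough sketch, assuming a Michelson-style local contrast (max - min)/(max + min) over small windows; the exact definition used in the study may differ:

```python
def local_contrast(img, win=3):
    """Mean (max-min)/(max+min) contrast over all win x win windows.

    img is a 2D list of non-negative pixel intensities.
    """
    h, w = len(img), len(img[0])
    vals = []
    for i in range(h - win + 1):
        for j in range(w - win + 1):
            patch = [img[i + di][j + dj] for di in range(win) for dj in range(win)]
            top, bot = max(patch), min(patch)
            if top + bot > 0:
                vals.append((top - bot) / (top + bot))
    return sum(vals) / len(vals) if vals else 0.0

def contrast_improvement_index(processed, original, win=3):
    """CII > 1 means preprocessing increased mean local contrast."""
    return local_contrast(processed, win) / local_contrast(original, win)
```

For example, stretching a faint central vessel pixel against its background raises the window contrast, so the CII of the processed image against the original comes out well above 1.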
Affiliation(s)
- Ananya Subramaniam
- Case Western Reserve University, Department of Biomedical Engineering, Cleveland, Ohio, United States
- Faruk Orge
- Case Medical Center University Hospitals, Department of Ophthalmology, Cleveland, Ohio, United States
- Michael Douglass
- Case Western Reserve University, Department of Biomedical Engineering, Cleveland, Ohio, United States
- Basak Can
- Case Medical Center University Hospitals, Department of Ophthalmology, Cleveland, Ohio, United States
- Evelin Fried
- Hospital Italiano de San Justo Agustin Rocca, Buenos Aires, Argentina
- Vanina Schbib
- Hospital de Niños Sor Maria Ludovica, Buenos Aires, Argentina
- Brenda Peña
- Centro Integral de Salud Visual Daponte, Buenos Aires, Argentina
- Soledad Ulacia
- Ministerio de Salud Argentina, Ministry of Public Works Building, Buenos Aires, Argentina
- Andrew M. Rollins
- Case Western Reserve University, Department of Biomedical Engineering, Cleveland, Ohio, United States
- David L. Wilson
- Case Western Reserve University, Department of Biomedical Engineering, Cleveland, Ohio, United States
- Case Western Reserve University, Department of Radiology, Cleveland, Ohio, United States
67
Daich Varela M, Sen S, De Guimaraes TAC, Kabiri N, Pontikos N, Balaskas K, Michaelides M. Artificial intelligence in retinal disease: clinical application, challenges, and future directions. Graefes Arch Clin Exp Ophthalmol 2023; 261:3283-3297. [PMID: 37160501 PMCID: PMC10169139 DOI: 10.1007/s00417-023-06052-x] [Citation(s) in RCA: 17] [Impact Index Per Article: 8.5] [Received: 01/26/2023] [Revised: 03/20/2023] [Accepted: 03/24/2023] [Indexed: 05/11/2023] Open
Abstract
Retinal diseases are a leading cause of blindness in developed countries, accounting for the largest share of visually impaired children, working-age adults (inherited retinal disease), and elderly individuals (age-related macular degeneration). These conditions need specialised clinicians to interpret multimodal retinal imaging, with diagnosis and intervention potentially delayed. With an increasing and ageing population, this is becoming a global health priority. One solution is the development of artificial intelligence (AI) software to facilitate rapid data processing. Herein, we review research offering decision support for the diagnosis, classification, monitoring, and treatment of retinal disease using AI. We have prioritised diabetic retinopathy, age-related macular degeneration, inherited retinal disease, and retinopathy of prematurity. There is cautious optimism that these algorithms will be integrated into routine clinical practice to facilitate access to vision-saving treatments, improve efficiency of healthcare systems, and assist clinicians in processing the ever-increasing volume of multimodal data, thereby also liberating time for doctor-patient interaction and co-development of personalised management plans.
Affiliation(s)
- Malena Daich Varela
- UCL Institute of Ophthalmology, London, UK
- Moorfields Eye Hospital, London, UK
- Nikolas Pontikos
- UCL Institute of Ophthalmology, London, UK
- Moorfields Eye Hospital, London, UK
- Michel Michaelides
- UCL Institute of Ophthalmology, London, UK
- Moorfields Eye Hospital, London, UK
68
Landau Prat D, Zloto O, Kapelushnik N, Leshno A, Klang E, Sina S, Segev S, Soudry S, Ben Simon GJ. Big Data Analysis of Glaucoma Prevalence in Israel. J Glaucoma 2023; 32:962-967. [PMID: 37566879 DOI: 10.1097/ijg.0000000000002281] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Received: 03/03/2023] [Accepted: 07/10/2023] [Indexed: 08/13/2023]
Abstract
Précis: The prevalence of glaucoma in the adult population included in this study was 2.3%. Normal values of routine eye examinations are provided, including age and sex variations. PURPOSE: The purpose of this study was to analyze the prevalence of glaucoma in a very large database. METHODS: Retrospective analysis of medical records of patients examined at the Medical Survey Institute of a tertiary care university referral center between 2001 and 2020. A natural language processing (NLP) algorithm identified patients with a diagnosis of glaucoma. The main outcome measures included the prevalence and age distribution of glaucoma. The secondary outcome measures included the prevalence and distribution of visual acuity (VA), intraocular pressure (IOP), and cup-to-disc ratio (CDR). RESULTS: Data were derived from 184,589 visits of 36,762 patients (mean age: 52 y, 68% males). The NLP model was highly sensitive in identifying glaucoma, achieving an accuracy of 94.98% (area under the curve=93.85%), and 633 of 27,517 patients (2.3%) were diagnosed as having glaucoma, with increasing prevalence in older age. The mean VA was 20/21, IOP 14.4±2.84 mm Hg, and CDR 0.28±0.16, higher in males. The VA decreased with age, while the IOP and CDR increased with age. CONCLUSIONS: The prevalence of glaucoma in the adult population included in this study was 2.3%. Normal values of routine eye examinations are provided, including age and sex variations. We demonstrated the validity and accuracy of the NLP model in identifying glaucoma.
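The core idea of flagging glaucoma diagnoses in free-text records can be illustrated with a toy rule-based pass. This is a hedged sketch only: the study's actual NLP model is not described here, and the term list and negation cues below are invented for illustration.

```python
import re

# Hypothetical trigger terms and negation cues; a real clinical NLP
# system would use a curated lexicon and trained negation detection.
GLAUCOMA_TERMS = re.compile(r"\b(glaucoma|POAG|ocular hypertension)\b", re.I)
NEGATIONS = re.compile(r"\b(no|denies|without|negative for|rule out)\b[^.;]*$", re.I)

def flags_glaucoma(note: str) -> bool:
    """Flag a record if any clause mentions a glaucoma term that is not
    preceded by a negation cue within the same clause."""
    for clause in re.split(r"[.;\n]", note):
        m = GLAUCOMA_TERMS.search(clause)
        if m and not NEGATIONS.search(clause[:m.start()]):
            return True
    return False
```

A keyword-with-negation baseline like this is the usual starting point against which a learned model's sensitivity (94.98% accuracy in the study) is compared.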
Affiliation(s)
- Daphna Landau Prat
- Goldschleger Eye Surveillance Institution & Medical Screening Institute
- Talpiot Medical Leadership Program, Sheba Medical Center
- Sackler School of Medicine, Tel-Aviv University, Tel-Aviv
- Ofira Zloto
- Goldschleger Eye Surveillance Institution & Medical Screening Institute
- Talpiot Medical Leadership Program, Sheba Medical Center
- Sackler School of Medicine, Tel-Aviv University, Tel-Aviv
- Noa Kapelushnik
- Goldschleger Eye Surveillance Institution & Medical Screening Institute
- Sackler School of Medicine, Tel-Aviv University, Tel-Aviv
- Ari Leshno
- Goldschleger Eye Surveillance Institution & Medical Screening Institute
- Talpiot Medical Leadership Program, Sheba Medical Center
- Sackler School of Medicine, Tel-Aviv University, Tel-Aviv
- Eyal Klang
- Talpiot Medical Leadership Program, Sheba Medical Center
- The Sami Sagol AI Hub, ARC Innovation Center, Sheba Medical Center
- Sackler School of Medicine, Tel-Aviv University, Tel-Aviv
- Sigal Sina
- Sackler School of Medicine, Tel-Aviv University, Tel-Aviv
- Shlomo Segev
- Institute for Medical Screening, Chaim Sheba Medical Center
- Sackler School of Medicine, Tel-Aviv University, Tel-Aviv
- Guy J Ben Simon
- Goldschleger Eye Surveillance Institution & Medical Screening Institute
- Talpiot Medical Leadership Program, Sheba Medical Center
- Sackler School of Medicine, Tel-Aviv University, Tel-Aviv
69
Gholami S, Lim JI, Leng T, Ong SSY, Thompson AC, Alam MN. Federated learning for diagnosis of age-related macular degeneration. Front Med (Lausanne) 2023; 10:1259017. [PMID: 37901412 PMCID: PMC10613107 DOI: 10.3389/fmed.2023.1259017] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Received: 07/14/2023] [Accepted: 09/25/2023] [Indexed: 10/31/2023] Open
Abstract
This paper presents a federated learning (FL) approach to train deep learning models for classifying age-related macular degeneration (AMD) using optical coherence tomography image data. We employ the use of residual network and vision transformer encoders for the normal vs. AMD binary classification, integrating four unique domain adaptation techniques to address domain shift issues caused by heterogeneous data distribution in different institutions. Experimental results indicate that FL strategies can achieve competitive performance similar to centralized models even though each local model has access to a portion of the training data. Notably, the Adaptive Personalization FL strategy stood out in our FL evaluations, consistently delivering high performance across all tests due to its additional local model. Furthermore, the study provides valuable insights into the efficacy of simpler architectures in image classification tasks, particularly in scenarios where data privacy and decentralization are critical using both encoders. It suggests future exploration into deeper models and other FL strategies for a more nuanced understanding of these models' performance. Data and code are available at https://github.com/QIAIUNCC/FL_UNCC_QIAI.
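The aggregation step at the heart of any FL scheme like this is federated averaging: the server combines per-client parameter updates weighted by local sample counts. A minimal numpy sketch (the paper's specific strategies, such as Adaptive Personalization FL, add more machinery on top):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg aggregation: element-wise average of per-client parameter
    dicts, weighted by each client's number of training samples."""
    total = sum(client_sizes)
    return {
        k: sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
        for k in client_weights[0]
    }
```

Each institution trains locally and ships only these parameter arrays, never images, which is the privacy argument the abstract makes.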
Affiliation(s)
- Sina Gholami
- Department of Electrical Engineering, University of North Carolina at Charlotte, Charlotte, NC, United States
- Jennifer I. Lim
- Department of Ophthalmology and Visual Science, University of Illinois at Chicago, Chicago, IL, United States
- Theodore Leng
- Department of Ophthalmology, School of Medicine, Stanford University, Stanford, CA, United States
- Sally Shin Yee Ong
- Department of Surgical Ophthalmology, Atrium-Health Wake Forest Baptist, Winston-Salem, NC, United States
- Atalie Carina Thompson
- Department of Surgical Ophthalmology, Atrium-Health Wake Forest Baptist, Winston-Salem, NC, United States
- Minhaj Nur Alam
- Department of Electrical Engineering, University of North Carolina at Charlotte, Charlotte, NC, United States
70
Hou N, Shi J, Ding X, Nie C, Wang C, Wan J. ROP-GAN: an image synthesis method for retinopathy of prematurity based on generative adversarial network. Phys Med Biol 2023; 68:205016. [PMID: 37619572 DOI: 10.1088/1361-6560/acf3c9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 04/24/2023] [Accepted: 08/24/2023] [Indexed: 08/26/2023]
Abstract
Objective. Training data with annotations are scarce in the intelligent diagnosis of retinopathy of prematurity (ROP), and existing typical data augmentation methods cannot generate data with a high degree of diversity. In order to increase the sample size and the generalization ability of the classification model, we propose a method called ROP-GAN for image synthesis of ROP based on a generative adversarial network. Approach. To generate a binary vascular network from color fundus images, we first design an image segmentation model based on U2-Net that can extract multi-scale features without reducing the resolution of the feature map. The vascular network is then fed into an adversarial autoencoder for reconstruction, which increases the diversity of the vascular network diagram. Then, we design an ROP image synthesis algorithm based on a generative adversarial network, in which paired color fundus images and binarized vascular networks are input into the image generation model to train the generator and discriminator, and attention mechanism modules are added to the generator to improve its detail synthesis ability. Main results. Qualitative and quantitative evaluation indicators are applied to evaluate the proposed method, and experiments demonstrate that the proposed method is superior to the existing ROP image synthesis methods, as it can synthesize realistic ROP fundus images. Significance. Our method effectively alleviates the problem of data imbalance in ROP intelligent diagnosis, contributes to the implementation of ROP staging tasks, and lays the foundation for further research. In addition to classification tasks, our synthesized images can facilitate tasks that require large amounts of medical data, such as detecting lesions and segmenting medical images.
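The generator/discriminator training described above is driven by a pair of adversarial objectives. As a self-contained illustration (not the ROP-GAN code, whose losses and attention modules are not detailed here), the standard non-saturating GAN losses on raw logits can be written with a numerically stable binary cross-entropy:

```python
import numpy as np

def bce_with_logits(logits, target):
    """Numerically stable binary cross-entropy on raw logits:
    max(x, 0) - x*z + log(1 + exp(-|x|))."""
    x = np.asarray(logits, dtype=float)
    return float(np.mean(np.maximum(x, 0) - x * target
                         + np.log1p(np.exp(-np.abs(x)))))

def gan_losses(d_real_logits, d_fake_logits):
    """D learns to score real as 1 and fake as 0; the non-saturating
    G loss instead pushes D's score on fakes toward 1."""
    d_loss = bce_with_logits(d_real_logits, 1.0) + bce_with_logits(d_fake_logits, 0.0)
    g_loss = bce_with_logits(d_fake_logits, 1.0)
    return d_loss, g_loss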
Affiliation(s)
- Ning Hou
- School of Mechanical and Automotive Engineering, South China University of Technology, Guangzhou 510641, People's Republic of China
- Jianhua Shi
- School of Mechanical and Electrical Engineering, Shanxi Datong University, Shanxi 037009, People's Republic of China
- Xiaoxuan Ding
- School of Mechanical and Automotive Engineering, South China University of Technology, Guangzhou 510641, People's Republic of China
- Chuan Nie
- Department of Neonatology, Guangdong Women and Children Hospital, Guangzhou 511442, People's Republic of China
- Cuicui Wang
- Graduate School, Guangzhou Medical University, Guangzhou 511495, People's Republic of China
- Jiafu Wan
- School of Mechanical and Automotive Engineering, South China University of Technology, Guangzhou 510641, People's Republic of China
71
Zhang Y, Chai X, Fan Z, Zhang S, Zhang G. Research hotspots and trends in retinopathy of prematurity from 2003 to 2022: a bibliometric analysis. Front Pediatr 2023; 11:1273413. [PMID: 37854031 PMCID: PMC10579817 DOI: 10.3389/fped.2023.1273413] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 08/06/2023] [Accepted: 09/22/2023] [Indexed: 10/20/2023] Open
Abstract
Background: In order to understand the research hotspots and trends in the field of retinopathy of prematurity (ROP), our study analyzed the relevant publications from 2003 to 2022 by using bibliometric analysis. Methods: The Citespace 6.2.R3 system was used to analyze the publications collected from the Web of Science Core Collection (WoSCC) database. Results: In total, 4,957 publications were included in this study. From 2003 to 2022, the number of publications gradually increased and peaked in 2022. The United States was the country with the most publications, while Harvard University was the most productive institution. The top co-cited journal PEDIATRICS is published by the United States. Author analysis showed that Hellström A was the author with the most publications, while Good WV was the top co-cited author. The co-citation analysis of references showed seven major clusters: genetic polymorphism, neurodevelopmental outcome, threshold retinopathy, oxygen-induced retinopathy, low birth weight infant, prematurity diagnosis, and artificial intelligence (AI). For the citation burst analysis, seven keywords remained in their burst phases until 2022: ranibizumab, validation, trends, type 1 retinopathy, preterm, deep learning, and artificial intelligence. Conclusion: Intravitreal anti-vascular endothelial growth factor therapy and AI-assisted clinical decision-making were two major topics of ROP research, which may still be the research trends in the coming years.
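The cluster maps behind an analysis like this start from keyword co-occurrence counts: how often two terms appear on the same publication. A minimal stdlib sketch of that counting step (Citespace's actual clustering and burst detection are much richer):

```python
from collections import Counter
from itertools import combinations

def cooccurrence(records):
    """Count how often each keyword pair appears on the same publication;
    `records` is a list of per-paper keyword lists. Pairs are stored in
    sorted order so (a, b) and (b, a) accumulate together."""
    pairs = Counter()
    for keywords in records:
        for a, b in combinations(sorted(set(keywords)), 2):
            pairs[(a, b)] += 1
    return pairs
```

Thresholding and graph-clustering this matrix is what yields named clusters such as "oxygen-induced retinopathy" or "artificial intelligence".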
Affiliation(s)
- Yulin Zhang
- Shenzhen Eye Hospital, Jinan University, Shenzhen Eye Institute, Shenzhen, China
- Xiaoyan Chai
- Shenzhen Eye Hospital, Jinan University, Shenzhen Eye Institute, Shenzhen, China
- Zixin Fan
- Shenzhen Eye Hospital, Jinan University, Shenzhen Eye Institute, Shenzhen, China
- Sifan Zhang
- Department of Biology, New York University, New York, NY, United States
- Guoming Zhang
- Shenzhen Eye Hospital, Jinan University, Shenzhen Eye Institute, Shenzhen, China
72
Rao DP, Savoy FM, Tan JZE, Fung BPE, Bopitiya CM, Sivaraman A, Vinekar A. Development and validation of an artificial intelligence based screening tool for detection of retinopathy of prematurity in a South Indian population. Front Pediatr 2023; 11:1197237. [PMID: 37794964 PMCID: PMC10545957 DOI: 10.3389/fped.2023.1197237] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Received: 03/30/2023] [Accepted: 08/29/2023] [Indexed: 10/06/2023] Open
Abstract
Purpose: The primary objective of this study was to develop and validate an AI algorithm as a screening tool for the detection of retinopathy of prematurity (ROP). Participants: Images were collected from infants enrolled in the KIDROP tele-ROP screening program. Methods: We developed a deep learning (DL) algorithm with 227,326 wide-field images from multiple camera systems obtained from the KIDROP tele-ROP screening program in India over an 11-year period. 37,477 temporal retina images were utilized, with the dataset split into train (n = 25,982, 69.33%), validation (n = 4,006, 10.69%), and an independent test set (n = 7,489, 19.98%). The algorithm consists of a binary classifier that distinguishes between the presence of ROP (Stages 1-3) and the absence of ROP. The image labels were retrieved from the daily registers of the tele-ROP program. They consist of per-eye diagnoses provided by trained ROP graders based on all images captured during the screening session. Infants requiring treatment and a proportion of those not requiring urgent referral had an additional confirmatory diagnosis from an ROP specialist. Results: Of the 7,489 temporal images analyzed in the test set, 2,249 (30.0%) images showed the presence of ROP. The sensitivity and specificity to detect ROP was 91.46% (95% CI: 90.23%-92.59%) and 91.22% (95% CI: 90.42%-91.97%), respectively, while the positive predictive value (PPV) was 81.72% (95% CI: 80.37%-83.00%), negative predictive value (NPV) was 96.14% (95% CI: 95.60%-96.61%) and the AUROC was 0.970. Conclusion: The novel ROP screening algorithm demonstrated high sensitivity and specificity in detecting the presence of ROP. A prospective clinical validation in a real-world tele-ROP platform is under consideration. It has the potential to lower the number of screening sessions required to be conducted by a specialist for a high-risk preterm infant, thus significantly improving workflow efficiency.
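The headline numbers above (sensitivity, specificity, PPV, NPV, each with a 95% CI) all derive from a 2x2 confusion matrix. A small sketch of that computation, using the Wilson score interval as one common choice for proportion CIs (whether the study used Wilson or another interval is not stated here):

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }
```

Note how PPV and NPV, unlike sensitivity and specificity, shift with disease prevalence in the test set (30.0% ROP-positive here).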
Affiliation(s)
- Divya Parthasarathy Rao
- Artificial Intelligence Research and Development, Remidio Innovative Solutions Inc., Glen Allen, VA, United States
- Florian M. Savoy
- Artificial Intelligence Research and Development, Medios Technologies Pvt. Ltd., Singapore, Singapore
- Joshua Zhi En Tan
- Artificial Intelligence Research and Development, Medios Technologies Pvt. Ltd., Singapore, Singapore
- Brian Pei-En Fung
- Artificial Intelligence Research and Development, Medios Technologies Pvt. Ltd., Singapore, Singapore
- Chiran Mandula Bopitiya
- Artificial Intelligence Research and Development, Medios Technologies Pvt. Ltd., Singapore, Singapore
- Anand Sivaraman
- Artificial Intelligence Research and Development, Remidio Innovative Solutions Pvt. Ltd., Bangalore, India
- Anand Vinekar
- Department of Pediatric Retina, Narayana Nethralaya Eye Institute, Bangalore, India
73
Toofanee MSA, Dowlut S, Hamroun M, Tamine K, Duong AK, Petit V, Sauveron D. DFU-Helper: An Innovative Framework for Longitudinal Diabetic Foot Ulcer Diseases Evaluation Using Deep Learning. Appl Sci 2023; 13:10310. [DOI: 10.3390/app131810310] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 01/06/2025]
Abstract
Diabetes affects roughly 537 million people and is predicted to reach 783 million by 2045. Diabetic foot ulcer (DFU) is a major complication associated with diabetes and can lead to lower limb amputation. The rapid evolution of DFUs necessitates immediate intervention to prevent the severe consequences of amputation and related complications. Continuous and meticulous monitoring of patients with DFUs is crucial and is currently carried out by medical practitioners on a daily basis. This research article introduces DFU-Helper, a novel framework that employs a Siamese Neural Network (SNN) for accurate and objective assessment of the progression of DFUs over time. DFU-Helper provides healthcare professionals with a comprehensive visual and numerical representation of the disease, in terms of similarity distance, considering five distinct disease conditions: none, infection, ischemia, both (presence of ischemia and infection), and healthy. The SNN achieves the best macro F1-score of 0.6455 on the test dataset when applying pseudo-labeling with a pseudo-threshold set to 0.9. The SNN is used in the process of creating anchors for each class using feature vectors. When a patient initially consults a healthcare professional, an image is transmitted to the model, which computes the distances from each class anchor point. It generates a comprehensive table with corresponding figures and a visually intuitive radar chart. In subsequent visits, another image is captured and fed into the model alongside the initial image. DFU-Helper then plots both images and presents the distances from the class anchor points. Our proposed system represents a significant advancement in the application of deep learning for the longitudinal assessment of DFUs. To the best of our knowledge, no existing tool harnesses deep learning for DFU follow-up in a comparable manner.
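The anchor-distance step described above is simple once the SNN has produced an embedding: each class anchor is a reference feature vector, and the patient image is scored by its distance to every anchor. A hedged numpy sketch (the framework's embedding network and distance metric are not specified here; Euclidean distance is assumed):

```python
import numpy as np

def nearest_anchor(embedding, anchors):
    """Distance from one image embedding to each class anchor (a reference
    feature vector per condition); the closest anchor gives the predicted
    condition, and the full distance dict feeds a table or radar chart."""
    dists = {label: float(np.linalg.norm(embedding - a))
             for label, a in anchors.items()}
    return min(dists, key=dists.get), dists
```

Tracking how these per-class distances change between visits is what turns a single-image classifier into the longitudinal follow-up tool the paper proposes.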
Affiliation(s)
- Mohammud Shaad Ally Toofanee
- Department of Computer Science, XLIM, UMR CNRS 7252, University of Limoges, Avenue Albert Thomas, 87060 Limoges, France
- Department of Applied Computer Science, Université des Mascareignes, Concorde Avenue, Roches Brunesl-Rose Hill 71259, Mauritius
- Sabeena Dowlut
- Department of Applied Computer Science, Université des Mascareignes, Concorde Avenue, Roches Brunesl-Rose Hill 71259, Mauritius
- Mohamed Hamroun
- Department of Computer Science, XLIM, UMR CNRS 7252, University of Limoges, Avenue Albert Thomas, 87060 Limoges, France
- 3iL Ingénieurs, 43 Rue de Sainte Anne, 87015 Limoges, France
- Karim Tamine
- Department of Computer Science, XLIM, UMR CNRS 7252, University of Limoges, Avenue Albert Thomas, 87060 Limoges, France
- Anh Kiet Duong
- Faculty of Science and Technology, University of Limoges, 23, Avenue Albert Thomas, 87060 Limoges, France
- Vincent Petit
- Department of Applied Computer Science, Université des Mascareignes, Concorde Avenue, Roches Brunesl-Rose Hill 71259, Mauritius
- Damien Sauveron
- Department of Computer Science, XLIM, UMR CNRS 7252, University of Limoges, Avenue Albert Thomas, 87060 Limoges, France
74
Liu Y, Du Y, Wang X, Zhao X, Zhang S, Yu Z, Wu Z, Ntentakis DP, Tian R, Chen Y, Wang C, Yao X, Li R, Heng PA, Zhang G. An Artificial Intelligence System for Screening and Recommending the Treatment Modalities for Retinopathy of Prematurity. Asia Pac J Ophthalmol (Phila) 2023; 12:468-476. [PMID: 37851564 DOI: 10.1097/apo.0000000000000638] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 03/23/2023] [Accepted: 08/01/2023] [Indexed: 10/20/2023] Open
Abstract
PURPOSE: The purpose of this study was to develop an artificial intelligence (AI) system for the identification of disease status and recommending treatment modalities for retinopathy of prematurity (ROP). METHODS: This retrospective cohort study included a total of 24,495 RetCam images from 1075 eyes of 651 preterm infants who received RetCam examination at the Shenzhen Eye Hospital in Shenzhen, China, from January 2003 to August 2021. Three tasks included ROP identification, severe ROP identification, and treatment modalities identification (retinal laser photocoagulation or intravitreal injections). The AI system was developed to perform the 3 tasks, especially the identification of ROP treatment modalities. The performance of the AI system and ophthalmologists was compared using an additional 200 RetCam images. RESULTS: The AI system exhibited favorable performance in the 3 tasks, including ROP identification [area under the receiver operating characteristic curve (AUC), 0.9531], severe ROP identification (AUC, 0.9132), and treatment modalities identification with laser photocoagulation or intravitreal injections (AUC, 0.9360). The AI system achieved an accuracy of 0.8627, a sensitivity of 0.7059, and a specificity of 0.9412 for identifying the treatment modalities of ROP. External validation results confirmed the good performance of the AI system with an accuracy of 92.0% in all 3 tasks, which was better than 4 experienced ophthalmologists, who scored 56%, 65%, 71%, and 76%, respectively. CONCLUSIONS: The described AI system achieved promising outcomes in the automated identification of ROP severity and treatment modalities. Using such algorithmic approaches as accessory tools in the clinic may improve ROP screening in the future.
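The AUC values reported for each task have a direct probabilistic reading: the chance that a randomly chosen positive case receives a higher score than a randomly chosen negative one (the Mann-Whitney formulation). A compact numpy sketch of that computation:

```python
import numpy as np

def auroc(labels, scores):
    """AUC as the Mann-Whitney statistic: the probability that a random
    positive outranks a random negative, counting score ties as half."""
    labels, scores = np.asarray(labels), np.asarray(scores)
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

Under this reading, the study's treatment-modality AUC of 0.9360 says the model ranks a treatment-requiring eye above a non-requiring one about 94% of the time.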
Affiliation(s)
- Yaling Liu
- Shenzhen Eye Hospital, Jinan University, Shenzhen Eye Institute, Shenzhen, China
- Yueshanyi Du
- Shenzhen Eye Hospital, Jinan University, Shenzhen Eye Institute, Shenzhen, China
- Guizhou Medical University, Guiyang, Guizhou, China
- Xi Wang
- Zhejiang Lab, Hangzhou, China
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, China
- Department of Radiation Oncology, Stanford University School of Medicine, Stanford, Palo Alto, CA
- Xinyu Zhao
- Shenzhen Eye Hospital, Jinan University, Shenzhen Eye Institute, Shenzhen, China
- Sifan Zhang
- Southern University of Science and Technology School of Medicine, Shenzhen, China
- Zhen Yu
- Shenzhen Eye Hospital, Jinan University, Shenzhen Eye Institute, Shenzhen, China
- Zhenquan Wu
- Shenzhen Eye Hospital, Jinan University, Shenzhen Eye Institute, Shenzhen, China
- Dimitrios P Ntentakis
- Retina Service, Ines and Fred Yeatts Retina Research Laboratory, Boston, MA
- Angiogenesis Laboratory, Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA
- Ruyin Tian
- Shenzhen Eye Hospital, Jinan University, Shenzhen Eye Institute, Shenzhen, China
- Yi Chen
- Shenzhen Eye Hospital, Jinan University, Shenzhen Eye Institute, Shenzhen, China
- Cui Wang
- Shenzhen Eye Hospital, Jinan University, Shenzhen Eye Institute, Shenzhen, China
- Xue Yao
- Shenzhen Eye Hospital, Jinan University, Shenzhen Eye Institute, Shenzhen, China
- Ruijiang Li
- Department of Radiation Oncology, Stanford University School of Medicine, Stanford, Palo Alto, CA
- Pheng-Ann Heng
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, China
- Guoming Zhang
- Shenzhen Eye Hospital, Jinan University, Shenzhen Eye Institute, Shenzhen, China
75
Ruamviboonsuk P, Ruamviboonsuk V, Tiwari R. Recent evidence of economic evaluation of artificial intelligence in ophthalmology. Curr Opin Ophthalmol 2023; 34:449-458. [PMID: 37459289 DOI: 10.1097/icu.0000000000000987] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 08/12/2023]
Abstract
PURPOSE OF REVIEW: Health economic evaluation (HEE) is essential for assessing the value of health interventions, including artificial intelligence. Recent approaches, current challenges, and future directions of HEE of artificial intelligence in ophthalmology are reviewed. RECENT FINDINGS: The majority of recent HEEs of artificial intelligence in ophthalmology were for diabetic retinopathy screening. Two models, one conducted in the rural USA (5-year period) and another in China (35-year period), found artificial intelligence to be more cost-effective than no screening for diabetic retinopathy. Two additional models, which compared artificial intelligence with human screeners in Brazil and Thailand for the lifetime of patients, found artificial intelligence to be more expensive from a healthcare system perspective. In the Thailand analysis, however, artificial intelligence was less expensive when opportunity loss from blindness was included. An artificial intelligence model for screening retinopathy of prematurity was cost-effective in the USA. A model for screening age-related macular degeneration in Japan and another for primary angle closure in China did not find artificial intelligence to be cost-effective compared with no screening. The costs of artificial intelligence varied widely in these models. SUMMARY: Like other medical fields, there is limited evidence in assessing the value of artificial intelligence in ophthalmology, and more appropriate HEE models are needed.
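The cost-effectiveness verdicts in these models typically hinge on the incremental cost-effectiveness ratio (ICER) compared against a willingness-to-pay threshold. A minimal sketch of that decision rule (illustrative only; the reviewed models differ in perspective, horizon, and outcome measure):

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY
    when the new strategy replaces the old one."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

def cost_effective(cost_new, qaly_new, cost_old, qaly_old, wtp):
    """Accept the new strategy if it dominates (cheaper, no worse) or its
    ICER falls at or below the willingness-to-pay threshold per QALY."""
    d_cost = cost_new - cost_old
    d_qaly = qaly_new - qaly_old
    if d_cost <= 0 and d_qaly >= 0:  # dominant
        return True
    if d_qaly <= 0:                  # dominated or no health gain
        return False
    return d_cost / d_qaly <= wtp
```

The Thailand finding above is an example of how the verdict flips with perspective: adding the societal cost of blindness to `cost_old` can make the AI arm dominant.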
Affiliation(s)
- Paisan Ruamviboonsuk
- Department of Ophthalmology, Rajavithi Hospital, College of Medicine, Rangsit University
76
Chou YB, Kale AU, Lanzetta P, Aslam T, Barratt J, Danese C, Eldem B, Eter N, Gale R, Korobelnik JF, Kozak I, Li X, Li X, Loewenstein A, Ruamviboonsuk P, Sakamoto T, Ting DS, van Wijngaarden P, Waldstein SM, Wong D, Wu L, Zapata MA, Zarranz-Ventura J. Current status and practical considerations of artificial intelligence use in screening and diagnosing retinal diseases: Vision Academy retinal expert consensus. Curr Opin Ophthalmol 2023; 34:403-413. [PMID: 37326222 PMCID: PMC10399944 DOI: 10.1097/icu.0000000000000979] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Indexed: 06/17/2023]
Abstract
PURPOSE OF REVIEW: The application of artificial intelligence (AI) technologies in screening and diagnosing retinal diseases may play an important role in telemedicine and has potential to shape modern healthcare ecosystems, including within ophthalmology. RECENT FINDINGS: In this article, we examine the latest publications relevant to AI in retinal disease and discuss the currently available algorithms. We summarize four key requirements underlying the successful application of AI algorithms in real-world practice: processing massive data; practicability of an AI model in ophthalmology; policy compliance and the regulatory environment; and balancing profit and cost when developing and maintaining AI models. SUMMARY: The Vision Academy recognizes the advantages and disadvantages of AI-based technologies and gives insightful recommendations for future directions.
Affiliation(s)
- Yu-Bai Chou
- Department of Ophthalmology, Taipei Veterans General Hospital
- School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Aditya U. Kale
- Academic Unit of Ophthalmology, Institute of Inflammation & Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
- Paolo Lanzetta
- Department of Medicine – Ophthalmology, University of Udine
- Istituto Europeo di Microchirurgia Oculare, Udine, Italy
- Tariq Aslam
- Division of Pharmacy and Optometry, Faculty of Biology, Medicine and Health, University of Manchester School of Health Sciences, Manchester, UK
- Jane Barratt
- International Federation on Ageing, Toronto, Canada
- Carla Danese
- Department of Medicine – Ophthalmology, University of Udine
- Department of Ophthalmology, AP-HP Hôpital Lariboisière, Université Paris Cité, Paris, France
- Bora Eldem
- Department of Ophthalmology, Hacettepe University, Ankara, Turkey
- Nicole Eter
- Department of Ophthalmology, University of Münster Medical Center, Münster, Germany
- Richard Gale
- Department of Ophthalmology, York Teaching Hospital NHS Foundation Trust, York, UK
- Jean-François Korobelnik
- Service d'ophtalmologie, CHU Bordeaux
- University of Bordeaux, INSERM, BPH, UMR1219, F-33000 Bordeaux, France
- Igor Kozak
- Moorfields Eye Hospital Centre, Abu Dhabi, UAE
- Xiaorong Li
- Tianjin Key Laboratory of Retinal Functions and Diseases, Tianjin Branch of National Clinical Research Center for Ocular Disease, Eye Institute and School of Optometry, Tianjin Medical University Eye Hospital, Tianjin
- Xiaoxin Li
- Xiamen Eye Center, Xiamen University, Xiamen, China
- Anat Loewenstein
- Division of Ophthalmology, Tel Aviv Sourasky Medical Center, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- Paisan Ruamviboonsuk
- Department of Ophthalmology, College of Medicine, Rangsit University, Rajavithi Hospital, Bangkok, Thailand
- Taiji Sakamoto
- Department of Ophthalmology, Kagoshima University, Kagoshima, Japan
- Daniel S.W. Ting
- Singapore National Eye Center, Duke-NUS Medical School, Singapore
- Peter van Wijngaarden
- Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Australia
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia
- David Wong
- Unity Health Toronto – St. Michael's Hospital, University of Toronto, Toronto, Canada
- Lihteh Wu
- Macula, Vitreous and Retina Associates of Costa Rica, San José, Costa Rica
Collapse
|
77
|
Bryan JM, Bryar PJ, Mirza RG. Convolutional Neural Networks Accurately Identify Ungradable Images in a Diabetic Retinopathy Telemedicine Screening Program. Telemed J E Health 2023; 29:1349-1355. [PMID: 36730708 DOI: 10.1089/tmj.2022.0357]
Abstract
Purpose: Diabetic retinopathy (DR) is a microvascular complication of diabetes mellitus (DM). Standard of care for patients with DM is an annual eye examination or retinal imaging to assess for DR, the latter of which may be completed through telemedicine approaches. One significant issue is poor-quality images that prevent adequate screening and are thus ungradable. We used artificial intelligence to enable point-of-care (at time of imaging) identification of ungradable images in a DR screening program. Methods: Nonmydriatic retinal images were gathered from patients with DM imaged during a primary care or endocrinology visit from September 1, 2017, to June 1, 2021. The Topcon TRC-NW400 retinal camera (Topcon Corp., Tokyo, Japan) was used. Images were interpreted by 5 ophthalmologists for gradeability, presence and stage of DR, and presence of non-DR pathologies. A convolutional neural network with Inception V3 network architecture was trained to assess image gradeability. Images were divided into training and test sets, and 10-fold cross-validation was performed. Results: A total of 1,377 images from 537 patients (56.1% female, median age 58) were analyzed. Ophthalmologists classified 25.9% of images as ungradable. Of gradable images, 18.6% had DR of varying degrees and 26.5% had non-DR pathology. 10-fold cross-validation produced an average area under the receiver operating characteristic curve (AUC) of 0.922 (standard deviation: 0.027, range: 0.882 to 0.961). The final model exhibited similar test set performance with an AUC of 0.924. Conclusions: This model accurately assesses gradeability of nonmydriatic retinal images. It could be used for increasing the efficiency of DR screening programs by enabling point-of-care identification of poor-quality images.
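As a concrete illustration of the evaluation above (an invented sketch, not the study's code): each cross-validation fold yields an AUC, computable via the Mann-Whitney formulation, and the reported average is simply the mean across folds. All numbers below are made up.

```python
from statistics import mean, stdev

def auc(labels, scores):
    """AUC via the Mann-Whitney formulation: the probability that a
    randomly chosen positive (here, an ungradable image) outscores a
    randomly chosen negative, counting ties as half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# One invented "fold": ungradable images (label 1) mostly score higher.
print(round(auc([1, 1, 1, 0, 0, 0], [0.9, 0.8, 0.4, 0.5, 0.2, 0.1]), 3))  # 0.889

# Cross-validation then just summarizes the per-fold AUCs.
fold_aucs = [0.92, 0.95, 0.88, 0.96, 0.90]
print(round(mean(fold_aucs), 3), round(stdev(fold_aucs), 3))  # 0.922 0.033
```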
Affiliation(s)
- John M Bryan
- Department of Ophthalmology, Feinberg School of Medicine, Northwestern University, Chicago, Illinois, USA
- Paul J Bryar
- Department of Ophthalmology, Feinberg School of Medicine, Northwestern University, Chicago, Illinois, USA
- Rukhsana G Mirza
- Department of Ophthalmology, Feinberg School of Medicine, Northwestern University, Chicago, Illinois, USA

78
Tan TF, Dai P, Zhang X, Jin L, Poh S, Hong D, Lim J, Lim G, Teo ZL, Liu N, Ting DSW. Explainable artificial intelligence in ophthalmology. Curr Opin Ophthalmol 2023; 34:422-430. [PMID: 37527200 DOI: 10.1097/icu.0000000000000983]
Abstract
PURPOSE OF REVIEW Despite the growing scope of artificial intelligence (AI) and deep learning (DL) applications in the field of ophthalmology, most have yet to reach clinical adoption. Beyond model performance metrics, there has been an increasing emphasis on the need for explainability of proposed DL models. RECENT FINDINGS Several explainable AI (XAI) methods have been proposed, and increasingly applied in ophthalmological DL applications, predominantly in medical imaging analysis tasks. SUMMARY We provide an overview of the key concepts and categorize examples of commonly employed XAI methods. Specific to ophthalmology, we explore XAI from a clinical perspective, in enhancing end-user trust, assisting clinical management, and uncovering new insights. Finally, we discuss its limitations and future directions to strengthen XAI for application to clinical practice.
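Among the XAI methods such a review covers, occlusion sensitivity is perhaps the simplest to state in code: mask part of the input and record how much the model's score drops. Below is a minimal invented sketch, with a 2x2 "image" and a fixed linear scorer standing in for a fundus image and a trained DL classifier.

```python
def model(img):
    """Toy stand-in for a trained classifier: a fixed linear scorer in
    which only the top-left pixel carries weight."""
    weights = [[1.0, 0.0], [0.0, 0.0]]
    return sum(w * x for wrow, xrow in zip(weights, img)
               for w, x in zip(wrow, xrow))

def occlusion_map(img):
    """Importance of each pixel = how far the score falls when that
    pixel is masked (set to zero)."""
    base = model(img)
    heat = [[0.0] * 2 for _ in range(2)]
    for r in range(2):
        for c in range(2):
            occluded = [row[:] for row in img]
            occluded[r][c] = 0.0
            heat[r][c] = base - model(occluded)
    return heat

print(occlusion_map([[0.8, 0.5], [0.5, 0.5]]))  # [[0.8, 0.0], [0.0, 0.0]]
```

Real DL implementations slide a patch (not a single pixel) over the image, but the principle of attributing importance by score drop is the same.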
Affiliation(s)
- Ting Fang Tan
- Artificial Intelligence and Digital Innovation Research Group
- Singapore National Eye Centre, Singapore General Hospital
- Peilun Dai
- Institute of High Performance Computing, A*STAR
- Xiaoman Zhang
- Duke-National University of Singapore Medical School, Singapore
- Liyuan Jin
- Artificial Intelligence and Digital Innovation Research Group
- Duke-National University of Singapore Medical School, Singapore
- Stanley Poh
- Singapore National Eye Centre, Singapore General Hospital
- Dylan Hong
- Artificial Intelligence and Digital Innovation Research Group
- Joshua Lim
- Singapore National Eye Centre, Singapore General Hospital
- Gilbert Lim
- Artificial Intelligence and Digital Innovation Research Group
- Zhen Ling Teo
- Artificial Intelligence and Digital Innovation Research Group
- Singapore National Eye Centre, Singapore General Hospital
- Nan Liu
- Artificial Intelligence and Digital Innovation Research Group
- Duke-National University of Singapore Medical School, Singapore
- Daniel Shu Wei Ting
- Artificial Intelligence and Digital Innovation Research Group
- Singapore National Eye Centre, Singapore General Hospital
- Duke-National University of Singapore Medical School, Singapore
- Byers Eye Institute, Stanford University, Stanford, California, USA

79
Patel SN, Al-Khaled T, Kang KB, Jonas KE, Ostmo S, Ventura CV, Martinez-Castellanos MA, Anzures RGAS, Campbell JP, Chiang MF, Chan RVP. Characterization of Errors in Retinopathy of Prematurity Diagnosis by Ophthalmologists-in-Training in Middle-Income Countries. J Pediatr Ophthalmol Strabismus 2023; 60:344-352. [PMID: 36263934 DOI: 10.3928/01913913-20220609-02]
Abstract
PURPOSE To characterize common errors in the diagnosis of retinopathy of prematurity (ROP) among ophthalmologists-in-training in middle-income countries. METHODS In this prospective cohort study, 200 ophthalmologists-in-training from programs in Brazil, Mexico, and the Philippines participated. A secure web-based educational system was developed using a repository of more than 2,500 unique image sets of ROP, and a reference standard diagnosis was established by combining the clinical diagnosis and the image-based diagnosis by multiple experts. Twenty web-based cases of wide-field retinal images were presented, and ophthalmologists-in-training were asked to diagnose plus disease, zone, stage, and category for each eye. Trainees' responses were compared to the consensus reference standard diagnosis. The main outcome measures were the frequency and types of diagnostic errors. RESULTS The error rate in the diagnosis of any category of ROP was between 48% and 59% for all countries. The error rate in identifying type 2 or pre-plus disease was 77%, with a tendency for overdiagnosis (27% underdiagnosis vs 50% overdiagnosis; mean difference: 23.4; 95% CI: 12.1 to 34.7; P = .005). Misdiagnosis of treatment-requiring ROP as type 2 ROP was most commonly associated with incorrectly identifying plus disease (plus disease error rate = 18% with correct category diagnosis vs 69% when misdiagnosed; mean difference: 51.0; 95% CI: 49.3 to 52.7; P = .003). CONCLUSIONS Ophthalmologists-in-training from middle-income countries misdiagnosed ROP more than half of the time. Identification of plus disease was the salient factor leading to incorrect diagnosis. These findings emphasize the need for improved access to ROP education to improve competency in diagnosis among ophthalmologists-in-training in middle-income countries. [J Pediatr Ophthalmol Strabismus. 2023;60(5):344-352.].
80
Al-Khaled T, Patel SN, Valikodath NG, Jonas KE, Ostmo S, Allozi R, Hallak J, Campbell JP, Chiang MF, Chan RVP. Characterization of Errors in Retinopathy of Prematurity Diagnosis by Ophthalmologists-in-Training in the United States and Canada. J Pediatr Ophthalmol Strabismus 2023; 60:337-343. [PMID: 36263935 DOI: 10.3928/01913913-20220609-01]
Abstract
PURPOSE To identify the prominent factors that lead to misdiagnosis of retinopathy of prematurity (ROP) by ophthalmologists-in-training in the United States and Canada. METHODS This prospective cohort study included 32 ophthalmologists-in-training at six ophthalmology training programs in the United States and Canada. Twenty web-based cases of ROP using wide-field retinal images were presented, and ophthalmologists-in-training were asked to diagnose plus disease, zone, stage, and category for each eye. Responses were compared to a consensus reference standard diagnosis for accuracy, which was established by combining the clinical diagnosis and the image-based diagnosis by multiple experts. The types of diagnostic errors that occurred were analyzed with descriptive and chi-squared analysis. Main outcome measures were frequency of types (category, zone, stage, plus disease) of diagnostic errors; association of errors in zone, stage, and plus disease diagnosis with incorrectly identified category; and performance of ophthalmologists-in-training across postgraduate years. RESULTS Category of ROP was misdiagnosed at a rate of 48%. Errors in classification of plus disease were most commonly associated with misdiagnosis of treatment-requiring (plus error rate = 16% when treatment-requiring was correctly diagnosed vs 81% when underdiagnosed as type 2 or pre-plus; mean difference: 64.3; 95% CI: 51.9 to 76.7; P < .001) and type 2 or pre-plus (plus error rate = 35% when type 2 or pre-plus was correctly diagnosed vs 76% when overdiagnosed as treatment-requiring; mean difference: 41.0; 95% CI: 28.4 to 53.5; P < .001) disease. The diagnostic error rate of postgraduate year (PGY)-2 trainees was significantly higher than PGY-3 trainees (PGY-2 category error rate = 61% vs PGY-3 = 35%; mean difference, 25.4; 95% CI: 17.7 to 33.0; P < .001). 
CONCLUSIONS Ophthalmologists-in-training in the United States and Canada misdiagnosed ROP nearly half of the time, with incorrect identification of plus disease as a leading cause. Integration of structured learning for ROP in residency education may improve diagnostic competency. [J Pediatr Ophthalmol Strabismus. 2023;60(5):337-343.].
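Both training-error studies report differences in error rates with 95% confidence intervals; a standard Wald interval for a difference of two proportions can be sketched as below. The counts are illustrative only, shaped loosely like the PGY-2 vs PGY-3 comparison, not the studies' raw data.

```python
from math import sqrt

def diff_ci(x1, n1, x2, n2, z=1.96):
    """Point estimate and Wald 95% CI for p1 - p2, where x errors were
    observed out of n graded cases in each group."""
    p1, p2 = x1 / n1, x2 / n2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    d = p1 - p2
    return d, (d - z * se, d + z * se)

# Invented counts: 61/100 errors in one group vs 35/100 in the other.
d, (lo, hi) = diff_ci(61, 100, 35, 100)
print(round(d, 2), round(lo, 2), round(hi, 2))  # 0.26 0.13 0.39
```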
81
Crincoli E, Sacconi R, Querques G. Reshaping the use of Artificial Intelligence in Ophthalmology: Sometimes you Need to go Backwards. Retina 2023; 43:1429-1432. [PMID: 37343295 DOI: 10.1097/iae.0000000000003878]
Affiliation(s)
- Emanuele Crincoli
- Department of Ophthalmology, University Vita-Salute, IRCCS San Raffaele Scientific Institute, Milan, Italy
82
Linde G, Chalakkal R, Zhou L, Huang JL, O’Keeffe B, Shah D, Davidson S, Hong SC. Automatic Refractive Error Estimation Using Deep Learning-Based Analysis of Red Reflex Images. Diagnostics (Basel) 2023; 13:2810. [PMID: 37685347 PMCID: PMC10486607 DOI: 10.3390/diagnostics13172810]
Abstract
Purpose/Background: We evaluate how a deep learning model can be applied to extract refractive error metrics from pupillary red reflex images taken by a low-cost handheld fundus camera. This could potentially provide a rapid and economical vision-screening method, allowing for early intervention to prevent myopic progression and reduce the socioeconomic burden associated with vision impairment in the later stages of life. Methods: Infrared and color images of pupillary crescents were extracted from eccentric photorefraction images of participants from Choithram Hospital in India and Dargaville Medical Center in New Zealand. The pre-processed images were then used to train different convolutional neural networks to predict refractive error in terms of spherical power and cylindrical power metrics. Results: The best-performing trained model achieved an overall accuracy of 75% for predicting spherical power using infrared images and a multiclass classifier. Conclusions: Although the model's performance is not superior, the proposed method demonstrated the feasibility of using red reflex images to estimate refractive error. Such an approach has not been attempted before and can help guide researchers, especially as the future of eye care moves toward highly portable and smartphone-based devices.
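The study above frames a continuous quantity (spherical power, in dioptres) as multiclass classification; the sketch below shows the kind of binning that implies. The bin edges and labels are invented for illustration, not taken from the paper.

```python
def sphere_to_class(dioptres):
    """Map a continuous spherical refraction to a class label using
    half-open bins; edges and labels are illustrative only."""
    bins = [(-100.0, -5.0, "high myopia"),
            (-5.0, -0.5, "myopia"),
            (-0.5, 0.5, "emmetropia"),
            (0.5, 100.0, "hyperopia")]
    for lo, hi, label in bins:
        if lo <= dioptres < hi:
            return label

print(sphere_to_class(-2.75))  # myopia
```

A classifier trained on such labels then reports accuracy per bin rather than a regression error in dioptres, which is one way to read the 75% figure above.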
Affiliation(s)
- Lydia Zhou
- University of Sydney, Sydney, NSW 2050, Australia
- Sheng Chiong Hong
- Public Health Unit, Dunedin Hospital, Te Whatu Ora Southern, Dunedin 9016, New Zealand

83
Nakayama LF, Mitchell WG, Ribeiro LZ, Dychiao RG, Phanphruk W, Celi LA, Kalua K, Santiago APD, Regatieri CVS, Moraes NSB. Fairness and generalisability in deep learning of retinopathy of prematurity screening algorithms: a literature review. BMJ Open Ophthalmol 2023; 8:e001216. [PMID: 37558406 PMCID: PMC10414056 DOI: 10.1136/bmjophth-2022-001216]
Abstract
BACKGROUND Retinopathy of prematurity (ROP) is a vasoproliferative disease responsible for more than 30 000 blind children worldwide. Its diagnosis and treatment are challenging due to the lack of specialists, divergent diagnostic concordance and variation in classification standards. While artificial intelligence (AI) can address the shortage of professionals and provide more cost-effective management, its development needs fairness, generalisability and bias controls prior to deployment to avoid producing harmful unpredictable results. This review aims to compare the characteristics, fairness and generalisability efforts of AI and ROP studies. METHODS Our review yielded 220 articles, of which 18 were included after full-text assessment. The articles were classified into ROP severity grading, plus detection, detecting treatment requiring, ROP prediction and detection of retinal zones. RESULTS All the articles' authors and included patients are from middle-income and high-income countries, with no representation from low-income countries, South America, Australia or Africa. Code is available in two articles and on request in one, while data are not available in any article. 88.9% of the studies use the same retinal camera. In two articles, patients' sex was described, but none applied a bias control in their models. CONCLUSION The reviewed articles included 180 228 images and reported good metrics, but fairness, generalisability and bias control remained limited. Reproducibility is also a critical limitation, with few articles sharing code and none sharing data. Fair and generalisable ROP and AI studies are needed that include diverse datasets, data and code sharing, collaborative research, and bias control to avoid unpredictable and harmful deployments.
Affiliation(s)
- Luis Filipe Nakayama
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Department of Ophthalmology, Sao Paulo Federal University, Sao Paulo, Brazil
- William Greig Mitchell
- Department of Ophthalmology, The Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia
- Lucas Zago Ribeiro
- Department of Ophthalmology, Sao Paulo Federal University, Sao Paulo, Brazil
- Robyn Gayle Dychiao
- University of the Philippines Manila College of Medicine, Manila, Philippines
- Leo Anthony Celi
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Department of Biostatistics, Harvard University T H Chan School of Public Health, Boston, Massachusetts, USA
- Khumbo Kalua
- Department of Ophthalmology, Blantyre Institute for Community Ophthalmology, BICO, Blantyre, Malawi

84
Bai A, Dai S, Hung J, Kirpalani A, Russell H, Elder J, Shah S, Carty C, Tan Z. Multicenter Validation of Deep Learning Algorithm ROP.AI for the Automated Diagnosis of Plus Disease in ROP. Transl Vis Sci Technol 2023; 12:13. [PMID: 37578427 PMCID: PMC10431208 DOI: 10.1167/tvst.12.8.13]
Abstract
Purpose Retinopathy of prematurity (ROP) is a sight-threatening vasoproliferative retinal disease affecting premature infants. The detection of plus disease, a severe form of ROP requiring treatment, remains challenging owing to the subjectivity, frequency, and time intensity of retinal examinations. Recent artificial intelligence (AI) algorithms developed to detect plus disease aim to alleviate these challenges; however, they have not been tested against a diverse neonatal population. Our study aims to validate ROP.AI, an AI algorithm developed from a single cohort, against a multicenter Australian cohort to determine its performance in detecting plus disease. Methods Retinal images captured during routine ROP screening from May 2021 to February 2022 across five major tertiary centers throughout Australia were collected and uploaded to ROP.AI. AI diagnostic output was compared with one of five ROP experts. Sensitivity, specificity, negative predictive value, and area under the receiver operating characteristic curve were determined. Results We collected 8052 images. The area under the receiver operating characteristic curve for the diagnosis of plus disease was 0.75. ROP.AI achieved 84% sensitivity, 43% specificity, and 96% negative predictive value for the detection of plus disease after operating point optimization. Conclusions ROP.AI was able to detect plus disease in an external, multicenter cohort despite being trained on data from a single center. Algorithm performance was demonstrated without preprocessing or augmentation, simulating real-world clinical applicability. Further training may improve generalizability for clinical implementation. Translational Relevance These results demonstrate ROP.AI's potential as a screening tool for the detection of plus disease in future clinical practice and provide a solution to overcome current diagnostic challenges.
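Sensitivity, specificity, and negative predictive value, as reported for ROP.AI above, are all read off a 2x2 confusion table at a chosen operating point. A sketch with invented counts (not the study's data); in a screening setting the negative predictive value is what makes a negative call safe to trust.

```python
def screening_metrics(tp, fp, tn, fn):
    """Read screening metrics off a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),  # diseased eyes flagged
        "specificity": tn / (tn + fp),  # healthy eyes passed
        "npv": tn / (tn + fn),          # how trustworthy a negative call is
    }

# Invented counts; moving the operating point trades fp against fn.
m = screening_metrics(tp=42, fp=57, tn=43, fn=8)
print({k: round(v, 2) for k, v in m.items()})
# {'sensitivity': 0.84, 'specificity': 0.43, 'npv': 0.84}
```

"Operating point optimization" as mentioned above amounts to sweeping the decision threshold and recomputing such a table at each candidate point.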
Affiliation(s)
- Amelia Bai
- Department of Ophthalmology, Queensland Children's Hospital, South Brisbane, Queensland, Australia
- Centre for Children's Health Research, South Brisbane, Queensland, Australia
- School of Medical Science, Griffith University, Southport, Queensland, Australia
- Shuan Dai
- Department of Ophthalmology, Queensland Children's Hospital, South Brisbane, Queensland, Australia
- School of Medical Science, Griffith University, Southport, Queensland, Australia
- University of Queensland, St Lucia, Queensland, Australia
- Jacky Hung
- Centre for Children's Health Research, South Brisbane, Queensland, Australia
- Aditi Kirpalani
- Department of Ophthalmology, Gold Coast University Hospital, Southport, Queensland, Australia
- Heather Russell
- Department of Ophthalmology, Gold Coast University Hospital, Southport, Queensland, Australia
- Bond University, Robina, Queensland, Australia
- James Elder
- Department of Ophthalmology, Royal Women's Hospital, Parkville, Victoria, Australia
- University of Melbourne, Parkville, Victoria, Australia
- Shaheen Shah
- Mater Misericordiae, South Brisbane, Queensland, Australia
- Christopher Carty
- Griffith Centre of Biomedical and Rehabilitation Engineering (GCORE), Menzies Health Institute Queensland, Griffith University, Southport, Australia
- Department of Orthopaedics, Children's Health Queensland Hospital and Health Service, Queensland Children's Hospital, South Brisbane, Australia
- Zachary Tan
- Aegis Ventures, Sydney, New South Wales, Australia

85
Ramanathan A, Athikarisamy SE, Lam GC. Artificial intelligence for the diagnosis of retinopathy of prematurity: A systematic review of current algorithms. Eye (Lond) 2023; 37:2518-2526. [PMID: 36577806 PMCID: PMC10397194 DOI: 10.1038/s41433-022-02366-y]
Abstract
BACKGROUND/OBJECTIVES With the increasing survival of premature infants, there is an increased demand to provide adequate retinopathy of prematurity (ROP) services. Wide-field digital retinal imaging (WFDRI) and artificial intelligence (AI) have shown promise in the field of ROP and have the potential to improve diagnostic performance and reduce the workload for screening ophthalmologists. The aim of this review is to systematically review and provide a summary of the diagnostic characteristics of existing deep learning algorithms. SUBJECT/METHODS Two authors independently searched the literature, and studies using a deep learning system from retinal imaging were included. Data were extracted, assessed and reported using PRISMA guidelines. RESULTS Twenty-seven studies were included in this review. Nineteen studies used AI systems to diagnose ROP, classify the staging of ROP, diagnose the presence of pre-plus or plus disease, or assess the quality of retinal images. The included studies reported a sensitivity of 71-100%, a specificity of 74-99% and an area under the curve of 91-99% for the primary outcome of the study. AI techniques were comparable to the assessment of ophthalmologists in terms of overall accuracy and sensitivity. Eight studies evaluated vascular severity scores and were able to accurately differentiate severity using an automated classification score. CONCLUSION Artificial intelligence for ROP diagnosis is a growing field, and many potential utilities have already been identified, including the detection of plus disease, staging of disease and a new automated severity score. AI has a role as an adjunct to clinical assessment; however, there is currently insufficient evidence to support its use as a sole diagnostic tool.
Affiliation(s)
- Ashwin Ramanathan
- Department of Paediatrics, Perth Children's Hospital, Perth, Australia
- Sam Ebenezer Athikarisamy
- Department of Neonatology, Perth Children's Hospital, Perth, Australia
- School of Medicine, University of Western Australia, Crawley, Australia
- Geoffrey C Lam
- Department of Ophthalmology, Perth Children's Hospital, Perth, Australia
- Centre for Ophthalmology and Visual Science, University of Western Australia, Crawley, Australia

86
Zhang R, Dong L, Li R, Zhang K, Li Y, Zhao H, Shi J, Ge X, Xu X, Jiang L, Shi X, Zhang C, Zhou W, Xu L, Wu H, Li H, Yu C, Li J, Ma J, Wei W. Automatic retinoblastoma screening and surveillance using deep learning. Br J Cancer 2023; 129:466-474. [PMID: 37344582 PMCID: PMC10403507 DOI: 10.1038/s41416-023-02320-z]
Abstract
BACKGROUND Retinoblastoma is the most common intraocular malignancy in childhood. With advanced management strategies, globe salvage and overall survival have significantly improved, which poses subsequent challenges regarding long-term surveillance and offspring screening. This study aimed to apply a deep learning algorithm to reduce the burden of follow-up and offspring screening. METHODS This cohort study includes retinoblastoma patients who visited Beijing Tongren Hospital from March 2018 to January 2022 for deep learning algorithm development. Clinically suspected and treated retinoblastoma patients from February 2022 to June 2022 were collected for prospective validation. Images from the posterior pole and peripheral retina were collected, and reference standards were made according to the consensus of the multidisciplinary management team. A deep learning algorithm was trained to identify "normal fundus", "stable retinoblastoma" in which specific treatment is not required, and "active retinoblastoma" in which specific treatment is required. The performance of each classifier included sensitivity, specificity, accuracy, and cost-utility. RESULTS A total of 36,623 images were included for developing the Deep Learning Assistant for Retinoblastoma Monitoring (DLA-RB) algorithm. In internal fivefold cross-validation, DLA-RB achieved an area under the curve (AUC) of 0.998 (95% confidence interval [CI] 0.986-1.000) in distinguishing normal fundus from active retinoblastoma, and 0.940 (95% CI 0.851-0.996) in distinguishing stable from active retinoblastoma. From February 2022 to June 2022, 139 eyes of 103 patients were prospectively collected. In identifying active retinoblastoma tumours among all clinically suspected patients and active retinoblastoma among all treated retinoblastoma patients, the AUC of DLA-RB reached 0.991 (95% CI 0.970-1.000) and 0.962 (95% CI 0.915-1.000), respectively.
The combination between ophthalmologists and DLA-RB significantly improved the accuracy of competent ophthalmologists and residents regarding both binary tasks. Cost-utility analysis revealed DLA-RB-based diagnosis mode is cost-effective in both retinoblastoma diagnosis and active retinoblastoma identification. CONCLUSIONS DLA-RB achieved high accuracy and sensitivity in identifying active retinoblastoma from the normal and stable retinoblastoma fundus. It can be used to surveil the activity of retinoblastoma during follow-up and screen high-risk offspring. Compared with referral procedures to ophthalmologic centres, DLA-RB-based screening and surveillance is cost-effective and can be incorporated within telemedicine programs. CLINICAL TRIAL REGISTRATION This study was registered on ClinicalTrials.gov (NCT05308043).
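The AUCs above are quoted with 95% confidence intervals; one common, model-agnostic way to obtain such an interval is a percentile bootstrap over cases. A self-contained sketch with invented labels and scores, not DLA-RB's data:

```python
import random

def auc(labels, scores):
    """Mann-Whitney AUC, counting ties as half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_auc_ci(labels, scores, n_boot=1000, seed=0):
    """Percentile-bootstrap 95% CI: resample cases with replacement,
    recompute the AUC, take the 2.5th and 97.5th percentiles."""
    rng = random.Random(seed)
    n, aucs = len(labels), []
    while len(aucs) < n_boot:
        idx = [rng.randrange(n) for _ in range(n)]
        ys = [labels[i] for i in idx]
        if 0 < sum(ys) < n:  # resample must contain both classes
            aucs.append(auc(ys, [scores[i] for i in idx]))
    aucs.sort()
    return aucs[int(0.025 * n_boot)], aucs[int(0.975 * n_boot) - 1]

labels = [1] * 10 + [0] * 10
scores = [0.9, 0.8, 0.85, 0.7, 0.6, 0.95, 0.75, 0.65, 0.55, 0.88,
          0.4, 0.3, 0.2, 0.5, 0.45, 0.35, 0.1, 0.6, 0.25, 0.15]
lo, hi = bootstrap_auc_ci(labels, scores)
print(round(auc(labels, scores), 3), round(lo, 3), round(hi, 3))
```

Published studies may instead use an analytic (e.g. DeLong) interval; the bootstrap is shown here only because it is easy to state in a few lines.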
Collapse
Affiliation(s)
- Ruiheng Zhang
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Li Dong
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Ruyue Li
- Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Kai Zhang
- School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
| | - Yitong Li
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Hongshu Zhao
- Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Jitong Shi
- Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Xin Ge
- Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Xiaolin Xu
- Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Libin Jiang
- Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Xuhan Shi
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Chuan Zhang
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Wenda Zhou
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Liangyuan Xu
- Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Haotian Wu
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Heyan Li
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Chuyao Yu
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Jing Li
- Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Jianmin Ma
- Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China.
- Wenbin Wei
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China.
87
deCampos-Stairiker MA, Coyner AS, Gupta A, Oh M, Shah PK, Subramanian P, Venkatapathy N, Singh P, Kalpathy-Cramer J, Chiang MF, Chan RVP, Campbell JP. Epidemiologic Evaluation of Retinopathy of Prematurity Severity in a Large Telemedicine Program in India Using Artificial Intelligence. Ophthalmology 2023; 130:837-843. [PMID: 37030453] [PMCID: PMC10524227] [DOI: 10.1016/j.ophtha.2023.03.026]
Abstract
PURPOSE Epidemiological changes in retinopathy of prematurity (ROP) depend on neonatal care, neonatal mortality, and the ability to carefully titrate and monitor oxygen. We evaluated whether an artificial intelligence (AI) algorithm for assessing ROP severity can be used to track changes in disease epidemiology in babies from South India over a 5-year period. DESIGN Retrospective cohort study. PARTICIPANTS A total of 3,093 babies screened for ROP at neonatal care units (NCUs) across the Aravind Eye Care System (AECS) in South India. METHODS Images and clinical data were collected as part of routine tele-ROP screening at the AECS over 2 time periods: August 2015 to October 2017 and March 2019 to December 2020. All babies in the original cohort were matched 1:3 by birthweight (BW) and gestational age (GA) with babies in the later cohort. We compared the proportion of eyes with moderate (type 2) or treatment-requiring (TR) ROP, and an AI-derived ROP vascular severity score (VSS, computed from retinal fundus images) at the initial tele-retinal screening exam, between the 2 time periods. MAIN OUTCOME MEASURES Differences in the proportions of type 2 or worse and TR-ROP cases, and in VSS, between time periods. RESULTS Among BW- and GA-matched babies, the proportion [95% confidence interval {CI}] of babies with type 2 or worse ROP decreased from 60.9% [53.8%-67.7%] to 17.1% [14.0%-20.5%] (P < 0.001), and the proportion with TR-ROP decreased from 16.8% [11.9%-22.7%] to 5.1% [3.4%-7.3%] (P < 0.001), over the 2 time periods. Similarly, the median [interquartile range] VSS in the population decreased from 2.9 [1.2] to 2.4 [1.8] (P < 0.001). CONCLUSIONS In South India, over a 5-year period, the proportion of babies developing moderate to severe ROP dropped significantly among babies at similar demographic risk, strongly suggesting improvements in primary prevention of ROP. These results suggest that AI-based assessment of ROP severity may be a useful epidemiologic tool for evaluating temporal changes in ROP epidemiology. FINANCIAL DISCLOSURE(S) Proprietary or commercial disclosure may be found after the references.
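The 1:3 matching on birthweight and gestational age described in this abstract can be illustrated with a greedy nearest-neighbor sketch; the study's actual matching procedure and distance metric are not given here, so both are assumptions for illustration only:

```python
def match_1_to_3(cases, controls):
    """Greedily pair each case with its 3 closest unused controls.

    cases, controls: lists of (baby_id, birthweight_g, gestational_age_wk).
    The distance metric below (100 g of birthweight treated as comparable
    to 1 week of gestational age) is an illustrative assumption, not the
    metric used in the study.
    """
    def dist(a, b):
        return abs(a[1] - b[1]) / 100.0 + abs(a[2] - b[2])

    unused = list(controls)
    matches = {}
    for case in cases:
        # Sort remaining controls by closeness to this case, take three
        unused.sort(key=lambda c: dist(case, c))
        picked, unused = unused[:3], unused[3:]
        matches[case[0]] = [c[0] for c in picked]
    return matches

# Toy example: one early-cohort baby matched against four later-cohort babies
cases = [("A", 1200, 30)]
controls = [("c1", 1180, 30), ("c2", 1500, 34), ("c3", 1210, 29), ("c4", 2000, 38)]
```

Matching without replacement, as here, keeps each control in at most one matched set; published studies often use more sophisticated caliper or propensity-based matching instead.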
Affiliation(s)
- Aaron S Coyner
- Ophthalmology, Oregon Health & Science University, Portland, Oregon
- Aditi Gupta
- Ophthalmology, Oregon Health & Science University, Portland, Oregon
- Minn Oh
- Ophthalmology, Oregon Health & Science University, Portland, Oregon
- Parag K Shah
- Pediatric Retina and Ocular Oncology, Aravind Eye Hospital, Coimbatore, India
- Prema Subramanian
- Pediatric Retina and Ocular Oncology, Aravind Eye Hospital, Coimbatore, India
- Praveer Singh
- Ophthalmology, University of Colorado, Aurora, Colorado; Radiology, MGH/Harvard Medical School, Charlestown, Massachusetts
- Jayashree Kalpathy-Cramer
- Ophthalmology, University of Colorado, Aurora, Colorado; Radiology, MGH/Harvard Medical School, Charlestown, Massachusetts
- Michael F Chiang
- National Eye Institute, National Institute of Health, Bethesda, Maryland; National Library of Medicine, National Institute of Health, Bethesda, Maryland
- R V Paul Chan
- Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois
- J Peter Campbell
- Ophthalmology, Oregon Health & Science University, Portland, Oregon.
88
Raja Sankari VM, Snekhalatha U, Chandrasekaran A, Baskaran P. Automated diagnosis of Retinopathy of prematurity from retinal images of preterm infants using hybrid deep learning techniques. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104883]
89
Popescu (Patoni) SI, Muşat AAM, Patoni C, Popescu MN, Munteanu M, Costache IB, Pîrvulescu RA, Mușat O. Artificial intelligence in ophthalmology. Rom J Ophthalmol 2023; 67:207-213. [PMID: 37876505] [PMCID: PMC10591433] [DOI: 10.22336/rjo.2023.37]
Abstract
Ophthalmology is one of the fields of medicine in which artificial intelligence (A.I.) techniques have made notable progress, and A.I. applications for preventing vision loss in eye disease have developed quickly. Artificial intelligence uses computer programs to execute various activities while mimicking human thought. Machine learning techniques are frequently utilized in ophthalmology, and the specialty holds great promise for advancing artificial intelligence thanks to digital methods such as optical coherence tomography (OCT) and visual field testing. Artificial intelligence has been applied to eye conditions that impair vision, including macular holes (M.H.), age-related macular degeneration (AMD), diabetic retinopathy, glaucoma, and cataracts; the growing prevalence of these diseases has driven A.I. development. Annual screening is important for detecting diseases such as glaucoma, diabetic retinopathy, and age-related macular degeneration, because these conditions can reduce visual acuity, and any change or progression must be identified so that appropriate treatment can be given. Numerous studies based on artificial intelligence, using different algorithms, have sought to improve and simplify current medical practice and to detect eye diseases early enough to prevent vision loss.
Abbreviations: AI = artificial intelligence, AMD = age-related macular degeneration, ANN = artificial neural network, AAO = American Academy of Ophthalmology, CNN = convolutional neural network, DL = deep learning, DVP = deep vascular plexus, FDA = Food and Drug Administration, GCL = ganglion cell layer, IDP = Iowa Detection Program, ML = machine learning, MH = macular hole, MTANN = massive-training artificial neural network, NLP = natural language processing, OCT = optical coherence tomography, RBF = radial basis function, RNFL = retinal nerve fiber layer, ROP = retinopathy of prematurity, SAP = standard automated perimetry, SVP = superficial vascular plexus, U.S. = United States, VEGF = vascular endothelial growth factor.
Affiliation(s)
- Stella Ioana Popescu (Patoni)
- Department of Ophthalmology, “Dr. Carol Davila” Central Military Emergency University Hospital, Bucharest, Romania
- Department of Ophthalmology, “Victor Babeş” University of Medicine and Pharmacy, Timişoara, Romania
- Cristina Patoni
- “Carol Davila” University of Medicine and Pharmacy, Bucharest, Romania
- Department of Gastroenterology, “Dr. Carol Davila” Central Military Emergency University Hospital, Bucharest, Romania
- Marius-Nicolae Popescu
- “Carol Davila” University of Medicine and Pharmacy, Bucharest, Romania
- Physical and Rehabilitation Medicine, Elias Emergency University Hospital, Bucharest, Romania
- Mihnea Munteanu
- Department of Ophthalmology, “Victor Babeş” University of Medicine and Pharmacy, Timişoara, Romania
- Ioana Bianca Costache
- Department of Ophthalmology, “Dr. Carol Davila” Central Military Emergency University Hospital, Bucharest, Romania
- Ruxandra Angela Pîrvulescu
- “Carol Davila” University of Medicine and Pharmacy, Bucharest, Romania
- Department of Ophthalmology, Bucharest Emergency University Hospital, Bucharest, Romania
- Ovidiu Mușat
- Department of Ophthalmology, “Dr. Carol Davila” Central Military Emergency University Hospital, Bucharest, Romania
- “Carol Davila” University of Medicine and Pharmacy, Bucharest, Romania
90
Li Z, Wang L, Wu X, Jiang J, Qiang W, Xie H, Zhou H, Wu S, Shao Y, Chen W. Artificial intelligence in ophthalmology: The path to the real-world clinic. Cell Rep Med 2023:101095. [PMID: 37385253] [PMCID: PMC10394169] [DOI: 10.1016/j.xcrm.2023.101095]
Abstract
Artificial intelligence (AI) has great potential to transform healthcare by enhancing the workflow and productivity of clinicians, enabling existing staff to serve more patients, improving patient outcomes, and reducing health disparities. In the field of ophthalmology, AI systems have shown performance comparable with, or even better than, experienced ophthalmologists in tasks such as diabetic retinopathy detection and grading. However, despite these promising results, very few AI systems have been deployed in real-world clinical settings, which calls the practical value of these systems into question. This review provides an overview of the main current AI applications in ophthalmology, describes the challenges that must be overcome before clinical implementation, and discusses strategies that may pave the way to the clinical translation of these systems.
Affiliation(s)
- Zhongwen Li
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China.
- Lei Wang
- School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
- Xuefang Wu
- Guizhou Provincial People's Hospital, Guizhou University, Guiyang 550002, China
- Jiewei Jiang
- School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an 710121, China
- Wei Qiang
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
- He Xie
- School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
- Hongjian Zhou
- Department of Computer Science, University of Oxford, Oxford, Oxfordshire OX1 2JD, UK
- Shanjun Wu
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
- Yi Shao
- Department of Ophthalmology, the First Affiliated Hospital of Nanchang University, Nanchang 330006, China.
- Wei Chen
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China.
91
Wang Z, Lim G, Ng WY, Tan TE, Lim J, Lim SH, Foo V, Lim J, Sinisterra LG, Zheng F, Liu N, Tan GSW, Cheng CY, Cheung GCM, Wong TY, Ting DSW. Synthetic artificial intelligence using generative adversarial network for retinal imaging in detection of age-related macular degeneration. Front Med (Lausanne) 2023; 10:1184892. [PMID: 37425325] [PMCID: PMC10324667] [DOI: 10.3389/fmed.2023.1184892]
Abstract
Introduction Age-related macular degeneration (AMD) is one of the leading causes of vision impairment globally, and early detection is crucial to prevent vision loss. However, AMD screening is resource dependent and demands experienced healthcare providers. Recently, deep learning (DL) systems have shown potential for effective detection of various eye diseases from retinal fundus images, but developing such robust systems requires large datasets, which can be limited by disease prevalence and patient privacy. In the case of AMD, images of the advanced phenotype are often too scarce for DL analysis, a problem that may be tackled by generating synthetic images with Generative Adversarial Networks (GANs). This study aims to develop GAN-synthesized fundus photos with AMD lesions and to assess the realness of these images with an objective scale. Methods To build our GAN models, a total of 125,012 fundus photos from a real-world non-AMD phenotypical dataset were used. StyleGAN2 and a human-in-the-loop (HITL) method were then applied to synthesize fundus images with AMD features. To objectively assess the quality of the synthesized images, we proposed a novel realness scale based on the frequency of broken vessels observed in the fundus photos. Four residents conducted two rounds of grading on 300 images to distinguish real from synthetic images, based on their subjective impression and on the objective scale, respectively. Results and discussion The introduction of HITL training increased the percentage of synthetic images with AMD lesions, despite the limited number of AMD images in the initial training dataset. Qualitatively, the synthesized images proved robust in that our residents had limited ability to distinguish real from synthetic ones, as evidenced by an overall accuracy of 0.66 (95% CI: 0.61-0.66) and a Cohen's kappa of 0.320. For the non-referable AMD classes (no or early AMD), the accuracy was only 0.51.
With the objective scale, the overall accuracy improved to 0.72. In conclusion, GAN models built with HITL training can produce realistic-looking fundus images that fool human experts, while our objective realness scale based on broken vessels can help identify synthetic fundus photos.
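The grader agreement reported above is summarized with Cohen's kappa. As a reference point, the statistic can be computed from two binary label sequences like so (a generic sketch of the standard formula, not the authors' evaluation code):

```python
def cohens_kappa(truth, graded):
    """Cohen's kappa between two equal-length binary label lists
    (e.g. 0 = real fundus photo, 1 = synthetic)."""
    n = len(truth)
    labels = set(truth) | set(graded)
    # Observed agreement: fraction of items both raters label identically
    p_o = sum(t == g for t, g in zip(truth, graded)) / n
    # Chance agreement from the marginal label frequencies
    p_e = sum((truth.count(k) / n) * (graded.count(k) / n) for k in labels)
    return (p_o - p_e) / (1 - p_e)
```

A kappa of 1.0 means perfect agreement; a value near 0.320, as reported above, indicates only fair agreement beyond chance, consistent with graders struggling to tell real from synthetic images.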
Affiliation(s)
- Zhaoran Wang
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Gilbert Lim
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore, Singapore
- Wei Yan Ng
- Singapore Eye Research Institute, Singapore, Singapore
- Singapore National Eye Centre, Singapore, Singapore
- Tien-En Tan
- Singapore Eye Research Institute, Singapore, Singapore
- Singapore National Eye Centre, Singapore, Singapore
- Jane Lim
- Singapore Eye Research Institute, Singapore, Singapore
- Singapore National Eye Centre, Singapore, Singapore
- Sing Hui Lim
- Singapore Eye Research Institute, Singapore, Singapore
- Singapore National Eye Centre, Singapore, Singapore
- Valencia Foo
- Singapore Eye Research Institute, Singapore, Singapore
- Singapore National Eye Centre, Singapore, Singapore
- Joshua Lim
- Singapore Eye Research Institute, Singapore, Singapore
- Singapore National Eye Centre, Singapore, Singapore
- Feihui Zheng
- Singapore Eye Research Institute, Singapore, Singapore
- Nan Liu
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore, Singapore
- Gavin Siew Wei Tan
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore, Singapore
- Singapore National Eye Centre, Singapore, Singapore
- Ching-Yu Cheng
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore, Singapore
- Singapore National Eye Centre, Singapore, Singapore
- Gemmy Chui Ming Cheung
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore, Singapore
- Singapore National Eye Centre, Singapore, Singapore
- Tien Yin Wong
- Singapore National Eye Centre, Singapore, Singapore
- School of Medicine, Tsinghua University, Beijing, China
- Daniel Shu Wei Ting
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore, Singapore
- Singapore National Eye Centre, Singapore, Singapore
92
Zhao PY, Bommakanti N, Yu G, Aaberg MT, Patel TP, Paulus YM. Deep learning for automated detection of neovascular leakage on ultra-widefield fluorescein angiography in diabetic retinopathy. Sci Rep 2023; 13:9165. [PMID: 37280345] [DOI: 10.1038/s41598-023-36327-6]
Abstract
Diabetic retinopathy is a leading cause of blindness in working-age adults worldwide. Neovascular leakage on fluorescein angiography indicates progression to the proliferative stage of diabetic retinopathy, which is an important distinction that requires timely ophthalmic intervention with laser or intravitreal injection treatment to reduce the risk of severe, permanent vision loss. In this study, we developed a deep learning algorithm to detect neovascular leakage on ultra-widefield fluorescein angiography images obtained from patients with diabetic retinopathy. The algorithm, an ensemble of three convolutional neural networks, was able to accurately classify neovascular leakage and distinguish this disease marker from other angiographic disease features. With additional real-world validation and testing, our algorithm could facilitate identification of neovascular leakage in the clinical setting, allowing timely intervention to reduce the burden of blinding diabetic eye disease.
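The algorithm described in this abstract is an ensemble of three convolutional neural networks. One standard fusion rule for such ensembles is soft voting, i.e. averaging member probabilities before thresholding; the sketch below uses stub models in place of trained CNNs, since the paper's actual fusion rule and architectures are not specified here:

```python
def ensemble_predict(models, image, threshold=0.5):
    """Average member probabilities for P(neovascular leakage) and
    apply a decision threshold (soft voting)."""
    mean_prob = sum(m(image) for m in models) / len(models)
    return mean_prob, mean_prob >= threshold

# Stub members standing in for three trained CNNs; each maps an image
# to a probability of neovascular leakage
members = [lambda img: 0.9, lambda img: 0.7, lambda img: 0.2]
prob, leakage = ensemble_predict(members, image=None)
```

Averaging tends to cancel out uncorrelated errors of individual members, which is one reason ensembles often outperform any single network on classification tasks like this.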
Affiliation(s)
- Peter Y Zhao
- Department of Ophthalmology and Visual Sciences, W.K. Kellogg Eye Center, University of Michigan, 1000 Wall Street, Ann Arbor, MI, 48105, USA
- Nikhil Bommakanti
- Department of Ophthalmology and Visual Sciences, W.K. Kellogg Eye Center, University of Michigan, 1000 Wall Street, Ann Arbor, MI, 48105, USA
- Gina Yu
- Department of Ophthalmology and Visual Sciences, W.K. Kellogg Eye Center, University of Michigan, 1000 Wall Street, Ann Arbor, MI, 48105, USA
- Michael T Aaberg
- Department of Ophthalmology and Visual Sciences, W.K. Kellogg Eye Center, University of Michigan, 1000 Wall Street, Ann Arbor, MI, 48105, USA
- Tapan P Patel
- Department of Ophthalmology and Visual Sciences, W.K. Kellogg Eye Center, University of Michigan, 1000 Wall Street, Ann Arbor, MI, 48105, USA
- Yannis M Paulus
- Department of Ophthalmology and Visual Sciences, W.K. Kellogg Eye Center, University of Michigan, 1000 Wall Street, Ann Arbor, MI, 48105, USA.
93
Gomes RFT, Schuch LF, Martins MD, Honório EF, de Figueiredo RM, Schmith J, Machado GN, Carrard VC. Use of Deep Neural Networks in the Detection and Automated Classification of Lesions Using Clinical Images in Ophthalmology, Dermatology, and Oral Medicine-A Systematic Review. J Digit Imaging 2023; 36:1060-1070. [PMID: 36650299] [PMCID: PMC10287602] [DOI: 10.1007/s10278-023-00775-3]
Abstract
Artificial neural networks (ANNs) are artificial intelligence (AI) techniques used in the automated recognition and classification of pathological changes from clinical images in areas such as ophthalmology, dermatology, and oral medicine. The combination of enterprise imaging and AI is gaining notoriety for its potential benefits in healthcare areas such as cardiology, dermatology, ophthalmology, pathology, physiatry, radiation oncology, radiology, and endoscopy. The present study aimed to analyze, through a systematic literature review, the performance of ANNs and deep learning in the recognition and automated classification of lesions from clinical images, compared with human performance. The PRISMA 2020 approach (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) was used to search four databases for studies that applied AI to the diagnosis of lesions in ophthalmology, dermatology, and oral medicine. Quantitative and qualitative analyses of the articles that met the inclusion criteria were performed. The search yielded 60 included studies. Interest in the topic has increased, especially in the last 3 years. The performance of AI models is promising, with high accuracy, sensitivity, and specificity; most had outcomes equivalent to human comparators. The reproducibility of model performance in real-life practice has been reported as a critical point. Study designs and results have progressively improved. AI resources have the potential to contribute to several areas of health and, in the coming years, are likely to be incorporated into everyday practice, improving precision and reducing the time required by the diagnostic process.
Affiliation(s)
- Rita Fabiane Teixeira Gomes
- Graduate Program in Dentistry, School of Dentistry, Federal University of Rio Grande Do Sul, Barcelos 2492/503, Bairro Santana, Porto Alegre, RS, CEP 90035-003, Brazil.
- Lauren Frenzel Schuch
- Department of Oral Diagnosis, Piracicaba Dental School, University of Campinas, Piracicaba, Brazil
- Manoela Domingues Martins
- Graduate Program in Dentistry, School of Dentistry, Federal University of Rio Grande Do Sul, Barcelos 2492/503, Bairro Santana, Porto Alegre, RS, CEP 90035-003, Brazil
- Department of Oral Diagnosis, Piracicaba Dental School, University of Campinas, Piracicaba, Brazil
- Rodrigo Marques de Figueiredo
- Technology in Automation and Electronics Laboratory - TECAE Lab, University of Vale Do Rio Dos Sinos - UNISINOS, São Leopoldo, Brazil
- Jean Schmith
- Technology in Automation and Electronics Laboratory - TECAE Lab, University of Vale Do Rio Dos Sinos - UNISINOS, São Leopoldo, Brazil
- Giovanna Nunes Machado
- Technology in Automation and Electronics Laboratory - TECAE Lab, University of Vale Do Rio Dos Sinos - UNISINOS, São Leopoldo, Brazil
- Vinicius Coelho Carrard
- Graduate Program in Dentistry, School of Dentistry, Federal University of Rio Grande Do Sul, Barcelos 2492/503, Bairro Santana, Porto Alegre, RS, CEP 90035-003, Brazil
- Department of Epidemiology, School of Medicine, TelessaúdeRS-UFRGS, Federal University of Rio Grande Do Sul, Porto Alegre, RS, Brazil
- Department of Oral Medicine, Otorhinolaryngology Service, Hospital de Clínicas de Porto Alegre (HCPA), Porto Alegre, RS, Brazil
94
Shen Y, Luo Z, Xu M, Liang Z, Fan X, Lu X. Automated detection for Retinopathy of Prematurity with knowledge distilling from multi-stream fusion network. Knowl Based Syst 2023; 269:110461. [DOI: 10.1016/j.knosys.2023.110461]
95
Young BK, Cole ED, Shah PK, Ostmo S, Subramaniam P, Venkatapathy N, Tsai ASH, Coyner AS, Gupta A, Singh P, Chiang MF, Kalpathy-Cramer J, Chan RVP, Campbell JP. Efficacy of Smartphone-Based Telescreening for Retinopathy of Prematurity With and Without Artificial Intelligence in India. JAMA Ophthalmol 2023; 141:582-588. [PMID: 37166816] [PMCID: PMC10176185] [DOI: 10.1001/jamaophthalmol.2023.1466]
Abstract
Importance Retinopathy of prematurity (ROP) telemedicine screening programs have been found to be effective, but they rely on widefield digital fundus imaging (WDFI) cameras, which are expensive, making them less accessible in low- to middle-income countries. Cheaper, smartphone-based fundus imaging (SBFI) systems have been described, but these have a narrower field of view (FOV) and have not been tested in a real-world, operational telemedicine setting. Objective To assess the efficacy of SBFI systems compared with WDFI when used by technicians for ROP screening with both artificial intelligence (AI) and human graders. Design, Setting, and Participants This prospective cross-sectional comparison study took place in a single-center ROP teleophthalmology program in India from January 2021 to April 2022. Premature infants who met normal ROP screening criteria and were enrolled in the teleophthalmology screening program were included; those who had already been treated for ROP were excluded. Exposures All participants had WDFI images and images from 1 of 2 SBFI devices, the Make-In-India (MII) Retcam or the Keeler Monocular Indirect Ophthalmoscope (MIO). Two masked readers evaluated zone, stage, plus, and vascular severity scores (VSS, from 1-9) in all images. Smartphone images were then stratified by patient into training (70%), validation (10%), and test (20%) data sets and used to train a ResNet18 deep learning architecture for binary classification of normal vs preplus or plus disease, which was then used for patient-level predictions of referral-warranted (RW)-ROP and treatment-requiring (TR)-ROP. Main Outcomes and Measures Sensitivity and specificity of detection of RW-ROP and TR-ROP by both human graders and an AI system, and area under the receiver operating characteristic curve (AUC) of grader-assigned VSS. Sensitivity and specificity were compared between the 2 SBFI systems using Pearson χ2 testing. Results A total of 156 infants (312 eyes; mean [SD] gestational age, 33.0 [3.0] weeks; 75 [48%] female) were included with paired examinations. Sensitivity and specificity were not found to be statistically different between the 2 SBFI systems. With SBFI, human graders detected TR-ROP with a sensitivity of 100% and a specificity of 83.49%. The AUCs with grader-assigned VSS alone were 0.95 (95% CI, 0.91-0.99) and 0.96 (95% CI, 0.93-0.99) for RW-ROP and TR-ROP, respectively. For the AI system, sensitivity for detecting TR-ROP was 100% with a specificity of 58.6%, and sensitivity for RW-ROP was 80.0% with a specificity of 59.3%. Conclusions and Relevance In this cross-sectional study, 2 different SBFI systems used by technicians in an ROP screening program were highly sensitive for TR-ROP. SBFI systems with AI may be a cost-effective method to improve the global capacity for ROP screening.
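Screening metrics like the sensitivity and specificity quoted in this abstract follow directly from the 2 x 2 confusion matrix; a minimal sketch of the computation, using made-up toy labels rather than study data:

```python
def sens_spec(y_true, y_pred):
    """Sensitivity and specificity for binary labels (1 = TR-ROP present)."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(t == 1 and p == 1 for t, p in pairs)  # true positives
    fn = sum(t == 1 and p == 0 for t, p in pairs)  # false negatives
    tn = sum(t == 0 and p == 0 for t, p in pairs)  # true negatives
    fp = sum(t == 0 and p == 1 for t, p in pairs)  # false positives
    return tp / (tp + fn), tn / (tn + fp)

# Toy example: every true case flagged (sensitivity 1.0), one false positive
sensitivity, specificity = sens_spec([1, 1, 0, 0, 0], [1, 1, 1, 0, 0])
```

For a screening program, high sensitivity is prioritized, as here, so that no treatment-requiring case is missed, at the cost of extra referrals from lower specificity.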
Affiliation(s)
- Benjamin K. Young
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland
- Emily D. Cole
- Department of Ophthalmology, University of Michigan, Ann Arbor
- Parag K. Shah
- Department of Pediatric Retina and Ocular Oncology, Aravind Eye Hospital, Coimbatore, India
- Susan Ostmo
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland
- Prema Subramaniam
- Department of Pediatric Retina and Ocular Oncology, Aravind Eye Hospital, Coimbatore, India
- Narendran Venkatapathy
- Department of Pediatric Retina and Ocular Oncology, Aravind Eye Hospital, Coimbatore, India
- Andrew S. H. Tsai
- Department of Surgical Retina, Singapore National Eye Center, Singapore
- Aaron S. Coyner
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland
- Aditi Gupta
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland
- Praveer Singh
- Department of Ophthalmology, University of Colorado, Aurora
- Michael F. Chiang
- National Eye Institute, National Institutes of Health, Bethesda, Maryland
- National Library of Medicine, National Institutes of Health, Bethesda, Maryland
- Jayashree Kalpathy-Cramer
- Department of Ophthalmology, University of Colorado, Aurora
- Mass General Brigham and Brigham and Women’s Hospital Center for Clinical Data Science, Boston, Massachusetts
- R. V. Paul Chan
- Department of Ophthalmology, Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago
- J. Peter Campbell
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland
96
Vasseneix C, Nusinovici S, Xu X, Hwang JM, Hamann S, Chen JJ, Loo JL, Milea L, Tan KBK, Ting DSW, Liu Y, Newman NJ, Biousse V, Wong TY, Milea D, Najjar RP. Deep Learning System Outperforms Clinicians in Identifying Optic Disc Abnormalities. J Neuroophthalmol 2023; 43:159-167. [PMID: 36719740] [DOI: 10.1097/wno.0000000000001800]
Abstract
BACKGROUND Examination of the optic nerve head (optic disc) is mandatory in patients with headache, hypertension, or any neurological symptoms, yet it is rarely or poorly performed in general clinics. We recently developed a Brain and Optic Nerve Study with Artificial Intelligence deep learning system (BONSAI-DLS) capable of accurately detecting optic disc abnormalities, including papilledema (swelling due to elevated intracranial pressure), on digital fundus photographs, with classification performance comparable to that of expert neuro-ophthalmologists; its performance relative to first-line clinicians, however, remained unknown. METHODS In this international, cross-sectional, multicenter study, the DLS, trained on 14,341 fundus photographs, was tested on a retrospectively collected convenience sample of 800 photographs (400 normal optic discs, 201 with papilledema, and 199 with other abnormalities) from 454 patients, with a robust ground-truth diagnosis provided by the referring expert neuro-ophthalmologists. Areas under the receiver operating characteristic curves were calculated for the BONSAI-DLS. Error rates, accuracy, sensitivity, and specificity of the algorithm were compared with those of 30 clinicians with or without ophthalmic training (6 general ophthalmologists, 6 optometrists, 6 neurologists, 6 internists, and 6 emergency department [ED] physicians) who graded the same test set of images. RESULTS With an error rate of 15.3%, the DLS outperformed all clinicians (average error rates of 24.4%, 24.8%, 38.2%, 44.8%, and 47.9% for general ophthalmologists, optometrists, neurologists, internists, and ED physicians, respectively) in the overall classification of optic disc appearance. The DLS displayed accuracy significantly higher than that of 100%, 86.7%, and 93.3% of the 30 clinicians for the classification of papilledema, normal discs, and other disc abnormalities, respectively.
CONCLUSIONS The performance of the BONSAI-DLS to classify optic discs on fundus photographs was superior to that of clinicians with or without ophthalmic training. A trained DLS may offer valuable diagnostic aid to clinicians from various clinical settings for the screening of optic disc abnormalities harboring potentially sight- or life-threatening neurological conditions.
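The head-to-head comparison above rests on standard confusion-matrix arithmetic. As a rough illustration only (the gradings and counts below are hypothetical toy data, not taken from the study), error rate, sensitivity, and specificity for a three-category optic disc classification can be computed as:

```python
def error_rate(y_true, y_pred):
    """Fraction of gradings that disagree with the ground-truth label."""
    wrong = sum(t != p for t, p in zip(y_true, y_pred))
    return wrong / len(y_true)

def sensitivity_specificity(y_true, y_pred, positive):
    """One-vs-rest sensitivity and specificity for a single class label."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical gradings over the study's three categories.
truth = ["normal", "papilledema", "other", "normal", "papilledema"]
dls = ["normal", "papilledema", "other", "normal", "other"]
print(error_rate(truth, dls))                              # 0.2
print(sensitivity_specificity(truth, dls, "papilledema"))  # (0.5, 1.0)
```

The same per-grader computation, applied to each clinician group's responses, yields the average error rates the abstract reports.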
Affiliation(s)
- Caroline Vasseneix
- Visual Neuroscience Group (CV, SN, DT, TYW, DM, RPN), Singapore Eye Research Institute, Singapore; Duke NUS Medical School (DT, TYW, DM, RPN), National University of Singapore, Singapore; Institute of High Performance Computing (XX, YL), Agency for Science, Technology and Research (A*STAR), Singapore; Department of Ophthalmology (J-MH), Seoul National University College of Medicine, Seoul National University Bundang Hospital, Seongnam-si, Korea (the Republic of); Department of Ophthalmology (SH), Rigshospitalet, University of Copenhagen, Kobenhavn, Denmark ; Departments of Ophthalmology and Neurology (JJC), Mayo Clinic Rochester, Minnesota; Singapore National Eye Centre (JLL, DT, TYW, DM), Singapore; Berkeley University (LM), Berkeley, California; Department of Emergency Medicine (KT), Singapore General Hospital, Singapore; Departments of Ophthalmology, Neurology and Neurological Surgery (NJN, VB), Emory University School of Medicine, Atlanta, Georgia; and Department of Ophthalmology (RPN), Yong Loo Lin School of Medicine, National University of Singapore, Singapore
|
97
|
Coyner AS, Singh P, Brown JM, Ostmo S, Chan RP, Chiang MF, Kalpathy-Cramer J, Campbell JP. Association of Biomarker-Based Artificial Intelligence With Risk of Racial Bias in Retinal Images. JAMA Ophthalmol 2023; 141:543-552. [PMID: 37140902 PMCID: PMC10160994 DOI: 10.1001/jamaophthalmol.2023.1310] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2022] [Accepted: 03/01/2023] [Indexed: 05/05/2023]
Abstract
Importance Although race is a social construct, it is associated with variations in skin and retinal pigmentation. Image-based medical artificial intelligence (AI) algorithms that use images of these organs have the potential to learn features associated with self-reported race (SRR), which increases the risk of racially biased performance in diagnostic tasks; understanding whether this information can be removed, without affecting the performance of AI algorithms, is critical in reducing the risk of racial bias in medical AI. Objective To evaluate whether converting color fundus photographs to retinal vessel maps (RVMs) of infants screened for retinopathy of prematurity (ROP) removes the risk for racial bias. Design, Setting, and Participants The retinal fundus images (RFIs) of neonates with parent-reported Black or White race were collected for this study. A u-net, a convolutional neural network (CNN) that provides precise segmentation for biomedical images, was used to segment the major arteries and veins in RFIs into grayscale RVMs, which were subsequently thresholded, binarized, and/or skeletonized. CNNs were trained with patients' SRR labels on color RFIs, raw RVMs, and thresholded, binarized, or skeletonized RVMs. Study data were analyzed from July 1 to September 28, 2021. Main Outcomes and Measures Area under the precision-recall curve (AUC-PR) and area under the receiver operating characteristic curve (AUROC) at both the image and eye level for classification of SRR. Results A total of 4095 RFIs were collected from 245 neonates with parent-reported Black (94 [38.4%]; mean [SD] age, 27.2 [2.3] weeks; 55 majority sex [58.5%]) or White (151 [61.6%]; mean [SD] age, 27.6 [2.3] weeks, 80 majority sex [53.0%]) race. CNNs inferred SRR from RFIs nearly perfectly (image-level AUC-PR, 0.999; 95% CI, 0.999-1.000; infant-level AUC-PR, 1.000; 95% CI, 0.999-1.000). 
Raw RVMs were nearly as informative as color RFIs (image-level AUC-PR, 0.938; 95% CI, 0.926-0.950; infant-level AUC-PR, 0.995; 95% CI, 0.992-0.998). Ultimately, CNNs were able to learn whether RFIs or RVMs were from Black or White infants regardless of whether images contained color, vessel segmentation brightness differences were nullified, or vessel segmentation widths were uniform. Conclusions and Relevance Results of this diagnostic study suggest that it can be very challenging to remove information relevant to SRR from fundus photographs. As a result, AI algorithms trained on fundus photographs have the potential for biased performance in practice, even if based on biomarkers rather than raw images. Regardless of the methodology used for training AI, evaluating performance in relevant subpopulations is critical.
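The image-level AUC-PR and AUROC reported above quantify how separable the two SRR classes are given a classifier's scores. A minimal, self-contained sketch of the rank-based (Mann-Whitney) formulation of AUROC, using toy scores rather than study data:

```python
def auroc(scores_pos, scores_neg):
    """Probability that a random positive example scores above a random
    negative example (Mann-Whitney formulation of ROC AUC); ties count 0.5."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical classifier scores for the two SRR labels (toy values).
pos = [0.98, 0.91, 0.87, 0.60]
neg = [0.40, 0.35, 0.12, 0.60]
print(auroc(pos, neg))  # 0.96875
```

An AUROC near 1.0, as the study observed even for skeletonized vessel maps, means the scores for the two groups barely overlap.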
Affiliation(s)
- Aaron S. Coyner
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland
- Praveer Singh
- Radiology, MGH/Harvard Medical School, Charlestown, Massachusetts
- MGH & BWH Center for Clinical Data Science, Boston, Massachusetts
- James M. Brown
- School of Computer Science, University of Lincoln, Lincoln, United Kingdom
- Susan Ostmo
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland
- R.V. Paul Chan
- Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago
- Michael F. Chiang
- National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Jayashree Kalpathy-Cramer
- Radiology, MGH/Harvard Medical School, Charlestown, Massachusetts
- MGH & BWH Center for Clinical Data Science, Boston, Massachusetts
- J. Peter Campbell
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland
|
98
|
Wagner SK, Liefers B, Radia M, Zhang G, Struyven R, Faes L, Than J, Balal S, Hennings C, Kilduff C, Pooprasert P, Glinton S, Arunakirinathan M, Giannakis P, Braimah IZ, Ahmed ISH, Al-Feky M, Khalid H, Ferraz D, Vieira J, Jorge R, Husain S, Ravelo J, Hinds AM, Henderson R, Patel HI, Ostmo S, Campbell JP, Pontikos N, Patel PJ, Keane PA, Adams G, Balaskas K. Development and international validation of custom-engineered and code-free deep-learning models for detection of plus disease in retinopathy of prematurity: a retrospective study. Lancet Digit Health 2023; 5:e340-e349. [PMID: 37088692 PMCID: PMC10279502 DOI: 10.1016/s2589-7500(23)00050-x] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2022] [Revised: 01/08/2023] [Accepted: 02/14/2023] [Indexed: 04/25/2023]
Abstract
BACKGROUND Retinopathy of prematurity (ROP), a leading cause of childhood blindness, is diagnosed through interval screening by paediatric ophthalmologists. However, improved survival of premature neonates coupled with a scarcity of available experts has raised concerns about the sustainability of this approach. We aimed to develop bespoke and code-free deep learning-based classifiers for plus disease, a hallmark of ROP, in an ethnically diverse population in London, UK, and externally validate them in ethnically, geographically, and socioeconomically diverse populations in four countries and three continents. Code-free deep learning is not reliant on the availability of expertly trained data scientists, thus being of particular potential benefit for low-resource health-care settings. METHODS This retrospective cohort study used retinal images from 1370 neonates admitted to a neonatal unit at Homerton University Hospital NHS Foundation Trust, London, UK, between 2008 and 2018. Images were acquired using a Retcam Version 2 device (Natus Medical, Pleasanton, CA, USA) on all babies who were either born at less than 32 weeks gestational age or had a birthweight of less than 1501 g. Each image was graded by two junior ophthalmologists, with disagreements adjudicated by a senior paediatric ophthalmologist. Bespoke and code-free deep learning models (CFDL) were developed for the discrimination of healthy, pre-plus disease, and plus disease. Performance was assessed internally on 200 images with the majority vote of three senior paediatric ophthalmologists as the reference standard. External validation was performed on 338 retinal images from four separate datasets from the USA, Brazil, and Egypt, with images derived from Retcam and the 3nethra neo device (Forus Health, Bengaluru, India). FINDINGS Of the 7414 retinal images in the original dataset, 6141 images were used in the final development dataset.
For the discrimination of healthy versus pre-plus or plus disease, the bespoke model had an area under the curve (AUC) of 0·986 (95% CI 0·973-0·996) and the CFDL model had an AUC of 0·989 (0·979-0·997) on the internal test set. Both models generalised well to external validation test sets acquired using the Retcam for discriminating healthy from pre-plus or plus disease (bespoke range 0·975-1·000; CFDL range 0·969-0·995). The CFDL model was inferior to the bespoke model at discriminating pre-plus disease from healthy or plus disease in the USA dataset (CFDL 0·808 [95% CI 0·671-0·909] vs bespoke 0·942 [0·892-0·982]; p=0·0070). Performance was also reduced when tested on the 3nethra neo imaging device (CFDL 0·865 [0·742-0·965] and bespoke 0·891 [0·783-0·977]). INTERPRETATION Both bespoke and CFDL models conferred similar performance to senior paediatric ophthalmologists for discriminating healthy retinal images from ones with features of pre-plus or plus disease; however, CFDL models might generalise less well when considering minority classes. Care should be taken when testing on data acquired using an imaging device different from that used for the development dataset. Our study justifies further validation of plus disease classifiers in ROP screening and supports a potential role for code-free approaches to help prevent blindness in vulnerable neonates. FUNDING National Institute for Health Research Biomedical Research Centre based at Moorfields Eye Hospital NHS Foundation Trust and the University College London Institute of Ophthalmology. TRANSLATIONS For the Portuguese and Arabic translations of the abstract see Supplementary Materials section.
Affiliation(s)
- Siegfried K Wagner
- NIHR Moorfields Biomedical Research Centre, London, UK; Institute of Ophthalmology, University College London, London, UK; Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Bart Liefers
- NIHR Moorfields Biomedical Research Centre, London, UK
- Meera Radia
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Gongyu Zhang
- NIHR Moorfields Biomedical Research Centre, London, UK
- Robbert Struyven
- NIHR Moorfields Biomedical Research Centre, London, UK; Institute of Ophthalmology, University College London, London, UK; Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Livia Faes
- NIHR Moorfields Biomedical Research Centre, London, UK; Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Jonathan Than
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Shafi Balal
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Periklis Giannakis
- Institute of Health Sciences Education, Queen Mary University of London, London, UK
- Imoro Zeba Braimah
- Lions International Eye Centre, Korle-Bu Teaching Hospital, Accra, Ghana
- Islam S H Ahmed
- Faculty of Medicine, Alexandria University, Alexandria, Egypt; Alexandria University Hospital, Alexandria, Egypt
- Mariam Al-Feky
- Department of Ophthalmology, Ain Shams University Hospitals, Cairo, Egypt; Watany Eye Hospital, Cairo, Egypt
- Hagar Khalid
- Moorfields Eye Hospital NHS Foundation Trust, London, UK; Department of Ophthalmology, Tanta University, Tanta, Egypt
- Daniel Ferraz
- Institute of Ophthalmology, University College London, London, UK; D'Or Institute for Research and Education, São Paulo, Brazil
- Juliana Vieira
- Department of Ophthalmology, Ribeirão Preto Medical School, University of São Paulo, Ribeirão Preto, Brazil
- Rodrigo Jorge
- Department of Ophthalmology, Ribeirão Preto Medical School, University of São Paulo, Ribeirão Preto, Brazil
- Shahid Husain
- The Blizard Institute, Queen Mary University of London, London, UK; Neonatology Department, Homerton University Hospital NHS Foundation Trust, London, UK
- Janette Ravelo
- Neonatology Department, Homerton University Hospital NHS Foundation Trust, London, UK
- Robert Henderson
- UCL Great Ormond Street Institute of Child Health, University College London, London, UK; Clinical and Academic Department of Ophthalmology, Great Ormond Street Hospital for Children, London, UK
- Himanshu I Patel
- Moorfields Eye Hospital NHS Foundation Trust, London, UK; The Royal London Hospital, Barts Health NHS Trust, London, UK
- Susan Ostmo
- Department of Ophthalmology, Oregon Health & Science University, Portland, OR, USA
- J Peter Campbell
- Department of Ophthalmology, Oregon Health & Science University, Portland, OR, USA
- Nikolas Pontikos
- NIHR Moorfields Biomedical Research Centre, London, UK; Institute of Ophthalmology, University College London, London, UK; Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Praveen J Patel
- NIHR Moorfields Biomedical Research Centre, London, UK; Institute of Ophthalmology, University College London, London, UK; Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Pearse A Keane
- NIHR Moorfields Biomedical Research Centre, London, UK; Institute of Ophthalmology, University College London, London, UK; Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Gill Adams
- NIHR Moorfields Biomedical Research Centre, London, UK; Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Konstantinos Balaskas
- NIHR Moorfields Biomedical Research Centre, London, UK; Institute of Ophthalmology, University College London, London, UK; Moorfields Eye Hospital NHS Foundation Trust, London, UK.
|
99
|
Wang B, Li L, Nakashima Y, Kawasaki R, Nagahara H. Real-time estimation of the remaining surgery duration for cataract surgery using deep convolutional neural networks and long short-term memory. BMC Med Inform Decis Mak 2023; 23:80. [PMID: 37143041 PMCID: PMC10161556 DOI: 10.1186/s12911-023-02160-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2022] [Accepted: 03/23/2023] [Indexed: 05/06/2023] Open
Abstract
PURPOSE Estimating surgery length in real time, as the remaining surgery duration (RSD), has potential applications in skill assessment, surgical training, and efficient utilization of surgical facilities. Surgical length reflects a certain level of efficiency and mastery of the surgeon in a well-standardized surgery such as cataract surgery. In this paper, we design and develop a real-time RSD estimation method for cataract surgery that does not require manual labeling and is transferable with minimal fine-tuning. METHODS A regression method consisting of convolutional neural networks (CNNs) and long short-term memory (LSTM) is designed for RSD estimation. The model is first trained and evaluated on a single main surgeon with a large number of surgeries; a fine-tuning strategy is then used to transfer the model to data from two other surgeons. Mean absolute error (MAE, in seconds) was used to evaluate the performance of RSD estimation. The proposed method is compared with a naïve method based on statistics of the historical data, and a transferability experiment was set up to demonstrate the generalizability of the method. RESULTS The mean surgical time for the sample videos was 318.7 seconds (s) (standard deviation 83.4 s) for the main surgeon in the initial training. In our experiments, the lowest MAE of 19.4 s (about 6.4% of the mean surgical time) is achieved by our best-trained model on the independent test data of the main target surgeon, reducing the MAE by 35.5 s (-10.2%) compared with the naïve method. The fine-tuning strategy transfers the model trained on the main target surgeon to the data of other surgeons with only a small amount of training data (20% of the pre-training data). The MAEs for the other two surgeons are 28.3 s and 30.6 s with the fine-tuned model, 8.1 s and 7.5 s lower than the per-surgeon models (an average reduction of 7.8 s, or 1.3% of video duration).
In an external validation study on Cataract-101, the method outperformed three reported methods: TimeLSTM, RSDNet, and CataNet. CONCLUSION Building a pre-trained RSD estimation model from a single surgeon's data and then transferring it to other surgeons demonstrated both low prediction error and good transferability with a minimal number of fine-tuning videos.
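The naïve baseline described above predicts RSD from historical timing statistics, and models are compared by MAE in seconds. A minimal sketch of that comparison with hypothetical elapsed times, ground-truth RSD values, and model outputs (only the 318.7 s mean duration comes from the abstract):

```python
def mae(y_true, y_pred):
    """Mean absolute error, here in seconds of remaining surgery time."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def naive_rsd(elapsed, mean_duration):
    """Naïve baseline: mean historical duration minus elapsed time, floored at 0."""
    return max(mean_duration - elapsed, 0.0)

mean_hist = 318.7                         # mean surgical time (s) from the abstract
elapsed = [60, 120, 180, 240, 300]        # toy sampling points within one surgery
true_rsd = [250, 210, 130, 70, 20]        # hypothetical ground-truth RSD values (s)
naive = [naive_rsd(e, mean_hist) for e in elapsed]
model = [245, 200, 140, 65, 25]           # hypothetical CNN-LSTM outputs (s)
print(mae(true_rsd, naive), mae(true_rsd, model))
```

In the study itself this MAE is computed per frame over full surgical videos; the sketch only shows the shape of the metric, not the reported numbers.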
Affiliation(s)
- Bowen Wang
- Institute for Datability Science (IDS), Osaka University, Suita, 565-0871 Japan
- Liangzhi Li
- Institute for Datability Science (IDS), Osaka University, Suita, 565-0871 Japan
- Yuta Nakashima
- Institute for Datability Science (IDS), Osaka University, Suita, 565-0871 Japan
- Ryo Kawasaki
- Artificial Intelligence Center for Medical Research and Application, Osaka University Hospital, Suita, 565-0871 Japan
- Department of Vision Informatics, Graduate School of Medicine, Osaka University, Suita, 565-0871 Japan
- Hajime Nagahara
- Institute for Datability Science (IDS), Osaka University, Suita, 565-0871 Japan
|
100
|
Jayanna S, Padhi TR, Nedhina EK, Agarwal K, Jalali S. Color fundus imaging in retinopathy of prematurity screening: Present and future. Indian J Ophthalmol 2023; 71:1777-1782. [PMID: 37203030 PMCID: PMC10391467 DOI: 10.4103/ijo.ijo_2913_22] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/20/2023] Open
Abstract
The advent of pediatric handheld fundus cameras such as RetCam, 3netra Forus, and the Phoenix ICON pediatric retinal camera has aided effective screening of retinopathy of prematurity (ROP), especially in countries with a limited number of trained specialists. The recent advent of various smartphone-based cameras has made pediatric fundus photography even more affordable and portable. Future advances such as ultra-widefield fundus cameras, trans-pars-planar illumination pediatric fundus cameras, artificial intelligence, deep learning algorithms, and handheld SS-OCTA can help achieve more accurate imaging and documentation. This article summarizes existing and upcoming imaging modalities in detail, including their features, advantages, challenges, and effectiveness, which can help in the implementation of telescreening as a standard ROP screening protocol across developing as well as developed countries.
Affiliation(s)
- Sushma Jayanna
- Srimati Kanuri Santhamma Center for Vitreoretinal Diseases, Kallam Anji Reddy Campus, LV Prasad Eye Institute, Hyderabad, Telangana; Newborn Eye Health Alliance (NEHA), L. V. Prasad Eye Institute Network; Child Sight Institute, L. V. Prasad Eye Institute Network, Bhubaneshwar, Odisha, India
- Tapas R Padhi
- Department of Vitreo Retina, Mithun Tulsi Chanrai Campus, LV Prasad Eye Institute, Bhubaneshwar, Odisha, India
- E K Nedhina
- Department of Vitreo Retina, Nethra Jyothi Advanced Eye Care, Taliparamba, Kannur, Kerala, India
- Komal Agarwal
- Srimati Kanuri Santhamma Center for Vitreoretinal Diseases, Kallam Anji Reddy Campus, LV Prasad Eye Institute, Hyderabad, Telangana; Newborn Eye Health Alliance (NEHA), L. V. Prasad Eye Institute Network; Child Sight Institute, L. V. Prasad Eye Institute Network, Bhubaneshwar, Odisha, India
- Subhadra Jalali
- Srimati Kanuri Santhamma Center for Vitreoretinal Diseases, Kallam Anji Reddy Campus, LV Prasad Eye Institute, Hyderabad, Telangana; Newborn Eye Health Alliance (NEHA), L. V. Prasad Eye Institute Network; Child Sight Institute, L. V. Prasad Eye Institute Network, Bhubaneshwar, Odisha, India
|