1. Hallaj S, Chuter BG, Lieu AC, Singh P, Kalpathy-Cramer J, Xu BY, Christopher M, Zangwill LM, Weinreb RN, Baxter SL. Federated Learning in Glaucoma: A Comprehensive Review and Future Perspectives. Ophthalmol Glaucoma 2025;8:92-105. PMID: 39214457; PMCID: PMC11911940; DOI: 10.1016/j.ogla.2024.08.004.
Abstract
CLINICAL RELEVANCE Glaucoma is a complex eye condition with varied morphological and clinical presentations, making diagnosis and management challenging. The lack of a consensus definition for glaucoma or glaucomatous optic neuropathy further complicates the development of universal diagnostic tools. Developing robust artificial intelligence (AI) models for glaucoma screening is essential for early detection and treatment but faces significant obstacles. Effective deep learning algorithms require large, well-curated datasets from diverse patient populations and imaging protocols. However, creating centralized data repositories is hindered by concerns over data sharing, patient privacy, regulatory compliance, and intellectual property. Federated Learning (FL) offers a potential solution by enabling data to remain locally hosted while facilitating distributed model training across multiple sites. METHODS A comprehensive literature review was conducted on the application of Federated Learning in training AI models for glaucoma screening. Publications from 1950 to 2024 were searched using databases such as PubMed and IEEE Xplore with keywords including "glaucoma," "federated learning," "artificial intelligence," "deep learning," "machine learning," "distributed learning," "privacy-preserving," "data sharing," "medical imaging," and "ophthalmology." Articles were included if they discussed the use of FL in glaucoma-related AI tasks or addressed data sharing and privacy challenges in ophthalmic AI development. RESULTS FL enables collaborative model development without centralizing sensitive patient data, addressing privacy and regulatory concerns. Studies show that FL can improve model performance and generalizability by leveraging diverse datasets while maintaining data security. FL models have achieved comparable or superior accuracy to those trained on centralized data, demonstrating effectiveness in real-world clinical settings. 
CONCLUSIONS Federated Learning presents a promising strategy to overcome current obstacles in developing AI models for glaucoma screening. By balancing the need for extensive, diverse training data with the imperative to protect patient privacy and comply with regulations, FL facilitates collaborative model training without compromising data security. This approach offers a pathway toward more accurate and generalizable AI solutions for glaucoma detection and management. FINANCIAL DISCLOSURE(S) Proprietary or commercial disclosure may be found after the references in the Footnotes and Disclosures at the end of this article.
Affiliation(s)
- Shahin Hallaj: Division of Ophthalmology Informatics and Data Science, Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, California; Division of Biomedical Informatics, Department of Medicine, University of California San Diego, La Jolla, California
- Benton G Chuter: Division of Ophthalmology Informatics and Data Science, Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, California; Division of Biomedical Informatics, Department of Medicine, University of California San Diego, La Jolla, California
- Alexander C Lieu: Division of Ophthalmology Informatics and Data Science, Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, California; Division of Biomedical Informatics, Department of Medicine, University of California San Diego, La Jolla, California
- Praveer Singh: Division of Artificial Medical Intelligence, Department of Ophthalmology, University of Colorado School of Medicine, Aurora, Colorado
- Jayashree Kalpathy-Cramer: Division of Artificial Medical Intelligence, Department of Ophthalmology, University of Colorado School of Medicine, Aurora, Colorado
- Benjamin Y Xu: Roski Eye Institute, Keck School of Medicine, University of Southern California, Los Angeles, California
- Mark Christopher: Division of Ophthalmology Informatics and Data Science, Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, California
- Linda M Zangwill: Division of Ophthalmology Informatics and Data Science, Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, California
- Robert N Weinreb: Division of Ophthalmology Informatics and Data Science, Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, California
- Sally L Baxter: Division of Ophthalmology Informatics and Data Science, Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, California; Division of Biomedical Informatics, Department of Medicine, University of California San Diego, La Jolla, California
2. Schoenpflug LA, Nie Y, Sheikhzadeh F, Koelzer VH. A review on federated learning in computational pathology. Comput Struct Biotechnol J 2024;23:3938-3945. PMID: 39582895; PMCID: PMC11584763; DOI: 10.1016/j.csbj.2024.10.037.
Abstract
Training generalizable computational pathology (CPATH) algorithms is heavily dependent on large-scale, multi-institutional data. Simultaneously, healthcare data is subject to strict data privacy rules, hindering the creation of large datasets. Federated Learning (FL) is a paradigm addressing this dilemma by allowing separate institutions to collaborate in a training process while keeping each institution's data private and exchanging model parameters instead. In this study, we identify and review key developments of FL for CPATH applications. We consider 15 studies, thereby evaluating the current status of exploring and adapting this emerging technology for CPATH applications. Proof-of-concept studies have been conducted across a wide range of CPATH use cases, showcasing the performance equivalency of models trained in a federated compared to a centralized manner. Six studies focus on model aggregation or model alignment methods, reporting minor (0-3%) performance improvements compared to conventional FL techniques, while four studies explore domain alignment methods, resulting in more substantial performance improvements (4-20%). To further reduce the privacy risk posed by sharing model parameters, four studies investigated the use of privacy preservation methods, all of which demonstrated equivalent or slightly degraded performance (0.2-6% lower). To facilitate broader, real-world adoption, it is imperative to establish guidelines for the setup and deployment of FL infrastructure, alongside the promotion of standardized software frameworks. These steps are crucial to 1) further democratize CPATH research by allowing smaller institutions to pool data and computational resources, 2) investigate rare diseases, 3) conduct multi-institutional studies, and 4) allow rapid prototyping on private data.
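The federated training loop this abstract describes (local training on private data, then parameter exchange instead of data sharing) can be illustrated with a minimal federated-averaging (FedAvg) sketch. The toy linear model, synthetic data, client counts, and learning rates below are illustrative assumptions, not taken from any of the reviewed studies.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """Gradient descent on a linear least-squares model; a stand-in for
    each institution's local training on its own private data."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(w_global, clients):
    """One communication round: every site trains locally, then the server
    averages the returned parameters weighted by local dataset size.
    Raw data never leaves a site; only parameters are exchanged."""
    n_total = sum(len(y) for _, y in clients)
    updates = [local_update(w_global, X, y) for X, y in clients]
    return sum((len(y) / n_total) * w_i for w_i, (_, y) in zip(updates, clients))

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for _ in range(3):  # three simulated institutions, each with private data
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ w_true + 0.01 * rng.normal(size=50)))

w = np.zeros(2)
for _ in range(20):
    w = fedavg_round(w, clients)
```

After a few rounds the globally averaged parameters approach those a centrally trained model would find, which is the performance-equivalency result the reviewed proof-of-concept studies report.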
Affiliation(s)
- Lydia A. Schoenpflug: Department of Pathology and Molecular Pathology, University Hospital and University of Zürich, Zürich, Switzerland
- Yao Nie: Roche Diagnostics, Digital Pathology, Santa Clara, CA, United States
- Viktor H. Koelzer: Department of Pathology and Molecular Pathology, University Hospital and University of Zürich, Zürich, Switzerland; Institute of Medical Genetics and Pathology, University Hospital Basel, Basel, Switzerland; Department of Oncology, University of Oxford, Oxford, UK; Nuffield Department of Medicine, University of Oxford, Oxford, UK
3. Yang Y, Chen X, Lin H. Privacy preserving technology in ophthalmology. Curr Opin Ophthalmol 2024;35:431-437. PMID: 39259650; DOI: 10.1097/icu.0000000000001087.
Abstract
PURPOSE OF REVIEW Patient privacy protection is a critical focus in medical practice. Advances over the past decade in big data have led to the digitization of medical records, making medical data increasingly accessible through frequent data sharing and online communication. Periocular features, iris, and fundus images all contain biometric characteristics of patients, making privacy protection in ophthalmology particularly important. Consequently, privacy-preserving technologies have emerged and are reviewed in this study. RECENT FINDINGS Recent findings indicate that general medical privacy-preserving technologies, such as federated learning and blockchain, have been gradually applied in ophthalmology. However, the exploration of privacy protection techniques for specific ophthalmic examinations, such as the digital mask, is still limited. Moreover, we have observed advancements in addressing ophthalmic ethical issues related to privacy protection in the era of big data, such as algorithm fairness and explainability. SUMMARY Future privacy protection for ophthalmic patients still faces challenges and requires improved strategies. Progress in privacy protection technology for ophthalmology will continue to promote a better healthcare environment and patient experience, as well as more effective data sharing and scientific research.
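A common building block behind the privacy-preserving technologies this review surveys is perturbing shared model updates before transmission: clipping each update's norm and adding calibrated Gaussian noise, in the style of differential privacy. The sketch below is a generic illustration only; the clip bound and noise scale are arbitrary assumptions, not values from the review.

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_std=0.1, rng=None):
    """Clip an update to a maximum L2 norm, then add Gaussian noise, so the
    transmitted vector reveals less about any single patient record."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(scale=noise_std, size=update.shape)

rng = np.random.default_rng(42)
raw = np.array([3.0, 4.0])  # L2 norm 5.0, so it will be scaled down to norm 1.0
private = privatize_update(raw, clip_norm=1.0, noise_std=0.1, rng=rng)
```

In practice the noise scale would be calibrated to the clip bound and a target privacy budget; the trade-off is the small accuracy degradation the surveyed studies report.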
Affiliation(s)
- Yahan Yang: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong
- Xinwei Chen: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong
- Haotian Lin: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong; Hainan Eye Hospital and Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Haikou, Hainan; Centre for Precision Medicine and Department of Genetics and Biomedical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
4. Ran AR, Wang X, Chan PP, Wong MOM, Yuen H, Lam NM, Chan NCY, Yip WWK, Young AL, Yung HW, Chang RT, Mannil SS, Tham YC, Cheng CY, Wong TY, Pang CP, Heng PA, Tham CC, Cheung CY. Developing a privacy-preserving deep learning model for glaucoma detection: a multicentre study with federated learning. Br J Ophthalmol 2024;108:1114-1123. PMID: 37857452; DOI: 10.1136/bjo-2023-324188.
Abstract
BACKGROUND Deep learning (DL) is promising to detect glaucoma. However, patients' privacy and data security are major concerns when pooling all data for model development. We developed a privacy-preserving DL model using the federated learning (FL) paradigm to detect glaucoma from optical coherence tomography (OCT) images. METHODS This is a multicentre study. The FL paradigm consisted of a 'central server' and seven eye centres in Hong Kong, the USA and Singapore. Each centre first trained a model locally with its own OCT optic disc volumetric dataset and then uploaded its model parameters to the central server. The central server used the FedProx algorithm to aggregate all centres' model parameters. Subsequently, the aggregated parameters were redistributed to each centre for its local model optimisation. We experimented with three three-dimensional (3D) networks to evaluate the stability of the FL paradigm. Lastly, we tested the FL model on two prospectively collected unseen datasets. RESULTS We used 9326 volumetric OCT scans from 2785 subjects. The FL model performed consistently well with different networks in 7 centres (accuracies 78.3%-98.5%, 75.9%-97.0%, and 78.3%-97.5%, respectively) and stably in the 2 unseen datasets (accuracies 84.8%-87.7%, 81.3%-84.8%, and 86.0%-87.8%, respectively). The FL model achieved non-inferior performance in classifying glaucoma compared with the traditional model and significantly outperformed the individual models. CONCLUSION The 3D FL model could leverage all the datasets and achieve generalisable performance, without data exchange across centres. This study demonstrated an OCT-based FL paradigm for glaucoma identification with ensured patient privacy and data security, charting another course toward the real-world transition of artificial intelligence in ophthalmology.
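The FedProx aggregation used in this study differs from plain federated averaging in that each centre's local objective adds a proximal term, μ/2·‖w − w_global‖², that discourages local models from drifting far from the shared model under heterogeneous data. A minimal sketch of one such round follows; the toy linear model, synthetic data, and constants (μ, learning rate) are illustrative assumptions, not the study's 3D networks or settings.

```python
import numpy as np

def fedprox_local(w_global, X, y, mu=0.1, lr=0.1, epochs=5):
    """Local training with a proximal term penalising drift from the global
    parameters, as in the FedProx local objective."""
    w = w_global.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y) + mu * (w - w_global)
        w -= lr * grad
    return w

def fedprox_round(w_global, centres, mu=0.1):
    """The central server aggregates the centres' locally optimised
    parameters and redistributes the result for the next round."""
    updates = [fedprox_local(w_global, X, y, mu=mu) for X, y in centres]
    return np.mean(updates, axis=0)

rng = np.random.default_rng(1)
w_true = np.array([1.0, 3.0])
centres = []
for _ in range(7):  # seven simulated eye centres, each holding private data
    X = rng.normal(size=(40, 2))
    centres.append((X, X @ w_true + 0.01 * rng.normal(size=40)))

w = np.zeros(2)
for _ in range(30):
    w = fedprox_round(w, centres)
```

Only parameters cross the network in each round, which is how the paradigm preserves patient privacy while still leveraging all seven centres' data.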
Affiliation(s)
- An Ran Ran: Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
- Xi Wang: Zhejiang Lab, Hangzhou, China; Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China; Department of Radiation Oncology, Stanford University School of Medicine, Palo Alto, California, USA
- Poemen P Chan: Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China; Hong Kong Eye Hospital, Hong Kong SAR, China
- Hunter Yuen: Hong Kong Eye Hospital, Hong Kong SAR, China
- Nai Man Lam: Hong Kong Eye Hospital, Hong Kong SAR, China
- Noel C Y Chan: Ophthalmology and Visual Sciences, Prince of Wales Hospital, Hong Kong SAR, China; Ophthalmology and Visual Sciences, Alice Ho Miu Ling Nethersole Hospital, Hong Kong SAR, China
- Wilson W K Yip: Ophthalmology and Visual Sciences, Prince of Wales Hospital, Hong Kong SAR, China; Ophthalmology and Visual Sciences, Alice Ho Miu Ling Nethersole Hospital, Hong Kong SAR, China
- Alvin L Young: Ophthalmology and Visual Sciences, Prince of Wales Hospital, Hong Kong SAR, China; Ophthalmology and Visual Sciences, Alice Ho Miu Ling Nethersole Hospital, Hong Kong SAR, China
- Robert T Chang: Ophthalmology, Stanford University School of Medicine, Stanford, California, USA
- Suria S Mannil: Ophthalmology, Stanford University School of Medicine, Stanford, California, USA
- Yih-Chung Tham: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Duke-National University of Singapore Medical School, Singapore; Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Ching-Yu Cheng: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Duke-National University of Singapore Medical School, Singapore; Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Tien Yin Wong: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Tsinghua University, Beijing, China; School of Clinical Medicine, Beijing Tsinghua Changgung Hospital, Beijing, China
- Chi Pui Pang: Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
- Pheng-Ann Heng: Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
- Clement C Tham: Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China; Hong Kong Eye Hospital, Hong Kong SAR, China
- Carol Y Cheung: Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
5. Lim JI, Rachitskaya AV, Hallak JA, Gholami S, Alam MN. Artificial intelligence for retinal diseases. Asia Pac J Ophthalmol (Phila) 2024;13:100096. PMID: 39209215; DOI: 10.1016/j.apjo.2024.100096.
Abstract
PURPOSE To discuss the worldwide applications and potential impact of artificial intelligence (AI) for the diagnosis, management and analysis of treatment outcomes of common retinal diseases. METHODS We performed an online literature review, using PubMed Central (PMC), of AI applications to evaluate and manage retinal diseases. Search terms included AI for screening, diagnosis, monitoring, management, and treatment outcomes for age-related macular degeneration (AMD), diabetic retinopathy (DR), retinal surgery, retinal vascular disease, retinopathy of prematurity (ROP) and sickle cell retinopathy (SCR). Additional search terms included AI and color fundus photographs, optical coherence tomography (OCT), and OCT angiography (OCTA). We included original research articles and review articles. RESULTS Research studies have investigated and shown the utility of AI for screening for diseases such as DR, AMD, ROP, and SCR. Research studies using validated and labeled datasets confirmed AI algorithms could predict disease progression and response to treatment. Studies showed AI facilitated rapid and quantitative interpretation of retinal biomarkers seen on OCT and OCTA imaging. Research articles suggest AI may be useful for planning and performing robotic surgery. Studies suggest AI holds the potential to help lessen the impact of socioeconomic disparities on the outcomes of retinal diseases. CONCLUSIONS AI applications for retinal diseases can assist the clinician, not only by disease screening and monitoring for disease recurrence but also in quantitative analysis of treatment outcomes and prediction of treatment response. The public health impact on the prevention of blindness from DR, AMD, and other retinal vascular diseases remains to be determined.
Affiliation(s)
- Jennifer I Lim: Department of Ophthalmology and Visual Sciences, College of Medicine, University of Illinois at Chicago, Chicago, IL, United States
- Aleksandra V Rachitskaya: Department of Ophthalmology at Case Western Reserve University, Cleveland Clinic Lerner College of Medicine, Cleveland Clinic Cole Eye Institute, United States
- Joelle A Hallak: Department of Ophthalmology and Visual Sciences, College of Medicine, University of Illinois at Chicago, Chicago, IL, United States
- Sina Gholami: University of North Carolina at Charlotte, United States
- Minhaj N Alam: University of North Carolina at Charlotte, United States
6. Bernstein IA, Fernandez KS, Stein JD, Pershing S, Wang SY. Big data and electronic health records for glaucoma research. Taiwan J Ophthalmol 2024;14:352-359. PMID: 39430348; PMCID: PMC11488813; DOI: 10.4103/tjo.tjo-d-24-00055.
Abstract
The digitization of health records through electronic health records (EHRs) has transformed the landscape of ophthalmic research, particularly in the study of glaucoma. EHRs offer a wealth of structured and unstructured data, allowing for comprehensive analyses of patient characteristics, treatment histories, and outcomes. This review comprehensively discusses different EHR data sources, their strengths, limitations, and applicability towards glaucoma research. Institutional EHR repositories provide detailed multimodal clinical data, enabling in-depth investigations into conditions such as glaucoma and facilitating the development of artificial intelligence applications. Multicenter initiatives such as the Sight Outcomes Research Collaborative and the Intelligent Research In Sight registry offer larger, more diverse datasets, enhancing the generalizability of findings and supporting large-scale studies on glaucoma epidemiology, treatment outcomes, and practice patterns. The All of Us Research Program, with a special emphasis on diversity and inclusivity, presents a unique opportunity for glaucoma research by including underrepresented populations and offering comprehensive health data even beyond the EHR. Challenges persist, such as data access restrictions and standardization issues, but may be addressed through continued collaborative efforts between researchers, institutions, and regulatory bodies. Standardized data formats and improved data linkage methods, especially for ophthalmic imaging and testing, would further enhance the utility of EHR datasets for ophthalmic research, ultimately advancing our understanding and treatment of glaucoma and other ocular diseases on a global scale.
Affiliation(s)
- Isaac A. Bernstein: Department of Ophthalmology, Byers Eye Institute, Stanford University, California
- Karen S. Fernandez: Department of Ophthalmology, Byers Eye Institute, Stanford University, California
- Joshua D. Stein: Division of Ophthalmology and Visual Sciences, University of Michigan Kellogg Eye Center, Ann Arbor, MI, USA
- Suzann Pershing: Department of Ophthalmology, Byers Eye Institute, Stanford University, California
- Sophia Y. Wang: Department of Ophthalmology, Byers Eye Institute, Stanford University, California
7. Coyner AS, Murickan T, Oh MA, Young BK, Ostmo SR, Singh P, Chan RVP, Moshfeghi DM, Shah PK, Venkatapathy N, Chiang MF, Kalpathy-Cramer J, Campbell JP. Multinational External Validation of Autonomous Retinopathy of Prematurity Screening. JAMA Ophthalmol 2024;142:327-335. PMID: 38451496; PMCID: PMC10921347; DOI: 10.1001/jamaophthalmol.2024.0045.
Abstract
Importance Retinopathy of prematurity (ROP) is a leading cause of blindness in children, with significant disparities in outcomes between high-income and low-income countries, due in part to insufficient access to ROP screening. Objective To evaluate how well autonomous artificial intelligence (AI)-based ROP screening can detect more-than-mild ROP (mtmROP) and type 1 ROP. Design, Setting, and Participants This diagnostic study evaluated the performance of an AI algorithm, trained and calibrated using 2530 examinations from 843 infants in the Imaging and Informatics in Retinopathy of Prematurity (i-ROP) study, on 2 external datasets (6245 examinations from 1545 infants in the Stanford University Network for Diagnosis of ROP [SUNDROP] and 5635 examinations from 2699 infants in the Aravind Eye Care Systems [AECS] telemedicine programs). Data were taken from 11 and 48 neonatal care units in the US and India, respectively. Data were collected from January 2012 to July 2021, and data were analyzed from July to December 2023. Exposures An imaging processing pipeline was created using deep learning to autonomously identify mtmROP and type 1 ROP in eye examinations performed via telemedicine. Main Outcomes and Measures The area under the receiver operating characteristic curve (AUROC) as well as sensitivity and specificity for detection of mtmROP and type 1 ROP at the eye examination and patient levels. Results The prevalence of mtmROP and type 1 ROP were 5.9% (91 of 1545) and 1.2% (18 of 1545), respectively, in the SUNDROP dataset and 6.2% (168 of 2699) and 2.5% (68 of 2699) in the AECS dataset. Examination-level AUROCs for mtmROP and type 1 ROP were 0.896 and 0.985, respectively, in the SUNDROP dataset and 0.920 and 0.982 in the AECS dataset. At the cross-sectional examination level, mtmROP detection had high sensitivity (SUNDROP: mtmROP, 83.5%; 95% CI, 76.6-87.7; type 1 ROP, 82.2%; 95% CI, 81.2-83.1; AECS: mtmROP, 80.8%; 95% CI, 76.2-84.9; type 1 ROP, 87.8%; 95% CI, 86.8-88.7). At the patient level, all infants who developed type 1 ROP screened positive (SUNDROP: 100%; 95% CI, 81.4-100; AECS: 100%; 95% CI, 94.7-100) prior to diagnosis. Conclusions and Relevance Where and when ROP telemedicine programs can be implemented, autonomous ROP screening may be an effective force multiplier for secondary prevention of ROP.
Affiliation(s)
- Aaron S. Coyner: Casey Eye Institute, Oregon Health & Science University, Portland
- Tom Murickan: Casey Eye Institute, Oregon Health & Science University, Portland
- Minn A. Oh: Casey Eye Institute, Oregon Health & Science University, Portland
- Susan R. Ostmo: Casey Eye Institute, Oregon Health & Science University, Portland
- Praveer Singh: Ophthalmology, University of Colorado School of Medicine, Aurora
- R. V. Paul Chan: Illinois Eye and Ear Infirmary, University of Illinois at Chicago
- Darius M. Moshfeghi: Byers Eye Institute, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, California
- Parag K. Shah: Pediatric Retina and Ocular Oncology, Aravind Eye Hospital, Coimbatore, India
- Michael F. Chiang: National Eye Institute, National Institutes of Health, Bethesda, Maryland; National Library of Medicine, National Institutes of Health, Bethesda, Maryland
8. Yan B, Cao D, Jiang X, Chen Y, Dai W, Dong F, Huang W, Zhang T, Gao C, Chen Q, Yan Z, Wang Z. FedEYE: A scalable and flexible end-to-end federated learning platform for ophthalmology. Patterns (N Y) 2024;5:100928. PMID: 38370128; PMCID: PMC10873155; DOI: 10.1016/j.patter.2024.100928.
Abstract
Data-driven machine learning is a promising approach capable of building high-quality, accurate, and robust models from ophthalmic medical data. Ophthalmic medical data, however, presently exist across disparate data silos with privacy limitations, making centralized training challenging. Because ophthalmologists typically do not specialize in machine learning and artificial intelligence (AI), considerable impediments arise in the associated realm of research. To address these issues, we design and develop FedEYE, a scalable and flexible end-to-end ophthalmic federated learning platform. During FedEYE design, we adhere to four fundamental design principles, ensuring that ophthalmologists can effortlessly create independent and federated AI research tasks. Benefiting from these design principles and its architecture, FedEYE offers numerous key features, including rich and customizable capabilities, separation of concerns, scalability, and flexible deployment. We also validated the applicability of FedEYE by employing several prevalent neural networks on ophthalmic disease image classification tasks.
Affiliation(s)
- Bingjie Yan: Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; Beijing Key Laboratory of Mobile Computing and Pervasive Device, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
- Danmin Cao: Aier Eye Hospital of Wuhan University, Wuhan, China
- Xinlong Jiang: Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; Beijing Key Laboratory of Mobile Computing and Pervasive Device, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
- Yiqiang Chen: Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; Beijing Key Laboratory of Mobile Computing and Pervasive Device, Beijing, China; University of Chinese Academy of Sciences, Beijing, China; Peng Cheng Laboratory, Shenzhen, Guangdong, China
- Weiwei Dai: Institute of Digital Ophthalmology and Visual Science, Changsha Aier Eye Hospital, Hunan, China; AnHui Aier Eye Hospital, Anhui Medical University, Anhui, China
- Fan Dong: Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; Beijing Key Laboratory of Mobile Computing and Pervasive Device, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
- Wuliang Huang: Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; Beijing Key Laboratory of Mobile Computing and Pervasive Device, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
- Teng Zhang: Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; Beijing Key Laboratory of Mobile Computing and Pervasive Device, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
- Chenlong Gao: Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; Beijing Key Laboratory of Mobile Computing and Pervasive Device, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
- Qian Chen: Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; Beijing Key Laboratory of Mobile Computing and Pervasive Device, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
- Zhen Yan: Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; Beijing Key Laboratory of Mobile Computing and Pervasive Device, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
- Zhirui Wang: Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; Beijing Key Laboratory of Mobile Computing and Pervasive Device, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
9. Drukker K, Chen W, Gichoya J, Gruszauskas N, Kalpathy-Cramer J, Koyejo S, Myers K, Sá RC, Sahiner B, Whitney H, Zhang Z, Giger M. Toward fairness in artificial intelligence for medical image analysis: identification and mitigation of potential biases in the roadmap from data collection to model deployment. J Med Imaging (Bellingham) 2023;10:061104. PMID: 37125409; PMCID: PMC10129875; DOI: 10.1117/1.jmi.10.6.061104.
Abstract
PURPOSE There is increasing interest in developing medical imaging-based machine learning methods, also known as medical imaging artificial intelligence (AI), for the detection, diagnosis, prognosis, and risk assessment of disease, with the goal of clinical implementation. These tools are intended to improve traditional human decision-making in medical imaging. However, biases introduced along the path to clinical deployment may impede their intended function, potentially exacerbating inequities: medical imaging AI can propagate or amplify biases introduced in the many steps from model inception to deployment, resulting in systematic differences in the treatment of different groups. Recognizing and addressing these sources of bias is essential for algorithmic fairness and trustworthiness and for a just and equitable deployment of AI in medical imaging. APPROACH Our multi-institutional team included medical physicists, medical imaging artificial intelligence/machine learning (AI/ML) researchers, experts in AI/ML bias, statisticians, physicians, and scientists from regulatory bodies. We identified sources of bias in AI/ML and mitigation strategies for these biases, and we developed recommendations for best practices in medical imaging AI/ML development. RESULTS Five main steps along the roadmap of medical imaging AI/ML were identified: (1) data collection, (2) data preparation and annotation, (3) model development, (4) model evaluation, and (5) model deployment. Within these steps, or bias categories, we identified 29 sources of potential bias, many of which can impact multiple steps, as well as mitigation strategies. CONCLUSIONS Our findings provide a valuable resource to researchers, clinicians, and the public at large.
Affiliation(s)
- Karen Drukker
- The University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Weijie Chen
- US Food and Drug Administration, Division of Imaging, Diagnostics, and Software Reliability, Office of Science and Engineering Laboratories, Center for Devices and Radiological Health, Silver Spring, Maryland, United States
- Judy Gichoya
- Emory University, Department of Radiology, Atlanta, Georgia, United States
- Nicholas Gruszauskas
- The University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Jayashree Kalpathy-Cramer
- Sanmi Koyejo
- Stanford University, Department of Computer Science, Stanford, California, United States
- Kyle Myers
- Puente Solutions LLC, Phoenix, Arizona, United States
- Rui C. Sá
- National Institutes of Health, Bethesda, Maryland, United States
- University of California, San Diego, La Jolla, California, United States
- Berkman Sahiner
- US Food and Drug Administration, Division of Imaging, Diagnostics, and Software Reliability, Office of Science and Engineering Laboratories, Center for Devices and Radiological Health, Silver Spring, Maryland, United States
- Heather Whitney
- The University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Zi Zhang
- Jefferson Health, Philadelphia, Pennsylvania, United States
- Maryellen Giger
- The University of Chicago, Department of Radiology, Chicago, Illinois, United States
10
Gholami S, Lim JI, Leng T, Ong SSY, Thompson AC, Alam MN. Federated learning for diagnosis of age-related macular degeneration. Front Med (Lausanne) 2023; 10:1259017. [PMID: 37901412 PMCID: PMC10613107 DOI: 10.3389/fmed.2023.1259017] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2023] [Accepted: 09/25/2023] [Indexed: 10/31/2023] Open
Abstract
This paper presents a federated learning (FL) approach to training deep learning models for classifying age-related macular degeneration (AMD) from optical coherence tomography image data. We employ residual network and vision transformer encoders for the normal vs. AMD binary classification, integrating four domain adaptation techniques to address the domain shift caused by heterogeneous data distributions across institutions. Experimental results indicate that FL strategies can achieve performance competitive with centralized models even though each local model has access to only a portion of the training data. Notably, the Adaptive Personalization FL strategy stood out in our FL evaluations, consistently delivering high performance across all tests due to its additional local model. Furthermore, the study provides valuable insights into the efficacy of simpler architectures in image classification tasks with both encoders, particularly in scenarios where data privacy and decentralization are critical, and suggests future exploration of deeper models and other FL strategies for a more nuanced understanding of their performance. Data and code are available at https://github.com/QIAIUNCC/FL_UNCC_QIAI.
Affiliation(s)
- Sina Gholami
- Department of Electrical Engineering, University of North Carolina at Charlotte, Charlotte, NC, United States
- Jennifer I. Lim
- Department of Ophthalmology and Visual Science, University of Illinois at Chicago, Chicago, IL, United States
- Theodore Leng
- Department of Ophthalmology, School of Medicine, Stanford University, Stanford, CA, United States
- Sally Shin Yee Ong
- Department of Surgical Ophthalmology, Atrium-Health Wake Forest Baptist, Winston-Salem, NC, United States
- Atalie Carina Thompson
- Department of Surgical Ophthalmology, Atrium-Health Wake Forest Baptist, Winston-Salem, NC, United States
- Minhaj Nur Alam
- Department of Electrical Engineering, University of North Carolina at Charlotte, Charlotte, NC, United States
11
deCampos-Stairiker MA, Coyner AS, Gupta A, Oh M, Shah PK, Subramanian P, Venkatapathy N, Singh P, Kalpathy-Cramer J, Chiang MF, Chan RVP, Campbell JP. Epidemiologic Evaluation of Retinopathy of Prematurity Severity in a Large Telemedicine Program in India Using Artificial Intelligence. Ophthalmology 2023; 130:837-843. [PMID: 37030453 PMCID: PMC10524227 DOI: 10.1016/j.ophtha.2023.03.026] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2022] [Revised: 03/08/2023] [Accepted: 03/29/2023] [Indexed: 04/08/2023] Open
Abstract
PURPOSE Epidemiological changes in retinopathy of prematurity (ROP) depend on neonatal care, neonatal mortality, and the ability to carefully titrate and monitor oxygen. We evaluate whether an artificial intelligence (AI) algorithm for assessing ROP severity can be used to evaluate changes in disease epidemiology in babies from South India over a 5-year period. DESIGN Retrospective cohort study. PARTICIPANTS Babies (3093) screened for ROP at neonatal care units (NCUs) across the Aravind Eye Care System (AECS) in South India. METHODS Images and clinical data were collected as part of routine tele-ROP screening at the AECS in India over 2 time periods: August 2015 to October 2017 and March 2019 to December 2020. All babies in the original cohort were matched 1:3 by birthweight (BW) and gestational age (GA) with babies in the later cohort. We compared the proportion of eyes with moderate (type 2) or treatment-requiring (TR) ROP, and an AI-derived ROP vascular severity score (VSS, derived from retinal fundus images) at the initial tele-retinal screening exam for all babies in a district, between the 2 time periods. MAIN OUTCOME MEASURES Differences in the proportions of type 2 or worse and TR-ROP cases, and in VSS, between time periods. RESULTS Among BW- and GA-matched babies, the proportion [95% confidence interval] of babies with type 2 or worse and TR-ROP decreased from 60.9% [53.8%-67.7%] to 17.1% [14.0%-20.5%] (P < 0.001) and from 16.8% [11.9%-22.7%] to 5.1% [3.4%-7.3%] (P < 0.001), respectively, over the 2 time periods. Similarly, the median [interquartile range] VSS in the population decreased from 2.9 [1.2] to 2.4 [1.8] (P < 0.001). CONCLUSIONS In South India, over a 5-year period, the proportion of babies developing moderate to severe ROP dropped significantly for babies at similar demographic risk, strongly suggesting improvements in primary prevention of ROP.
These results suggest that AI-based assessment of ROP severity may be a useful epidemiologic tool to evaluate temporal changes in ROP epidemiology. FINANCIAL DISCLOSURE(S) Proprietary or commercial disclosure may be found after the references.
Affiliation(s)
- Aaron S Coyner
- Ophthalmology, Oregon Health & Science University, Portland, Oregon
- Aditi Gupta
- Ophthalmology, Oregon Health & Science University, Portland, Oregon
- Minn Oh
- Ophthalmology, Oregon Health & Science University, Portland, Oregon
- Parag K Shah
- Pediatric Retina and Ocular Oncology, Aravind Eye Hospital, Coimbatore, India
- Prema Subramanian
- Pediatric Retina and Ocular Oncology, Aravind Eye Hospital, Coimbatore, India
- Praveer Singh
- Ophthalmology, University of Colorado, Aurora, Colorado; Radiology, MGH/Harvard Medical School, Charlestown, Massachusetts
- Jayashree Kalpathy-Cramer
- Ophthalmology, University of Colorado, Aurora, Colorado; Radiology, MGH/Harvard Medical School, Charlestown, Massachusetts
- Michael F Chiang
- National Eye Institute, National Institutes of Health, Bethesda, Maryland; National Library of Medicine, National Institutes of Health, Bethesda, Maryland
- R V Paul Chan
- Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois
- J Peter Campbell
- Ophthalmology, Oregon Health & Science University, Portland, Oregon
12
Matta S, Hassine MB, Lecat C, Borderie L, Guilcher AL, Massin P, Cochener B, Lamard M, Quellec G. Federated Learning for Diabetic Retinopathy Detection in a Multi-center Fundus Screening Network. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. [PMID: 38082571 DOI: 10.1109/embc40787.2023.10340772] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/18/2023]
Abstract
Federated learning (FL) is a machine learning framework that allows remote clients to collaboratively learn a global model while keeping their training data localized. It has emerged as an effective tool to solve the problem of data privacy protection. In particular, in the medical field, it is gaining relevance for achieving collaborative learning while protecting sensitive data. In this work, we demonstrate the feasibility of FL in the development of a deep learning model for screening diabetic retinopathy (DR) in fundus photographs. To this end, we conduct a simulated FL framework using nearly 700,000 fundus photographs collected from OPHDIAT, a French multi-center screening network for detecting DR. We develop two FL algorithms: 1) a cross-center FL algorithm using data distributed across the OPHDIAT centers and 2) a cross-grader FL algorithm using data distributed across the OPHDIAT graders. We explore and assess different FL strategies and compare them to a conventional learning algorithm, namely centralized learning (CL), where all the data is stored in a centralized repository. For the task of referable DR detection, our simulated FL algorithms achieved similar performance to CL, in terms of area under the ROC curve (AUC): AUC = 0.9482 for CL, AUC = 0.9317 for cross-center FL, and AUC = 0.9522 for cross-grader FL. Our work indicates that the FL algorithm is a viable and reliable framework that can be applied in a screening network. CLINICAL RELEVANCE Given that data sharing is regarded as an essential component of modern medical research, achieving collaborative learning while protecting sensitive data is key.
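The cross-center and cross-grader experiments above rest on one aggregation idea: each site trains on its own data and a server combines only the resulting parameters. Below is a minimal sketch of federated averaging weighted by local dataset size; the center count, parameter values, and dataset sizes are illustrative assumptions, not figures from the OPHDIAT study.

```python
# Sketch of federated averaging: the server combines per-client parameter
# vectors weighted by each client's local dataset size, never seeing the
# underlying images. Values below are synthetic.

def fedavg(client_weights, client_sizes):
    """Average per-client parameter vectors, weighted by local dataset size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    aggregated = [0.0] * dim
    for weights, size in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            aggregated[i] += w * (size / total)
    return aggregated

# Three simulated screening centers, each reporting locally updated
# parameters; the larger center contributes more to the global model.
centers = [[0.2, 0.4], [0.6, 0.8], [1.0, 1.2]]
sizes = [100, 300, 600]
global_model = fedavg(centers, sizes)  # ≈ [0.8, 1.0]
```

The same loop run per grader rather than per center would correspond to the cross-grader variant described in the abstract.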
13
Nguyen TX, Ran AR, Hu X, Yang D, Jiang M, Dou Q, Cheung CY. Federated Learning in Ocular Imaging: Current Progress and Future Direction. Diagnostics (Basel) 2022; 12:2835. [PMID: 36428895 PMCID: PMC9689273 DOI: 10.3390/diagnostics12112835] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2022] [Revised: 11/11/2022] [Accepted: 11/14/2022] [Indexed: 11/18/2022] Open
Abstract
Advances in deep learning (DL), a branch of artificial intelligence, have made tremendous impacts on the field of ocular imaging over the last few years. Specifically, DL has been utilised to detect and classify various ocular diseases on retinal photographs, optical coherence tomography (OCT) images, and OCT-angiography images. In order to achieve good robustness and generalisability of model performance, DL training strategies traditionally require extensive and diverse training datasets from various sites to be transferred and pooled into a "centralised location". However, such a data transferring process could raise practical concerns related to data security and patient privacy. Federated learning (FL) is a distributed collaborative learning paradigm which enables the coordination of multiple collaborators without the need for sharing confidential data. This distributed training approach has great potential to ensure data privacy among different institutions and reduce the potential risk of data leakage from data pooling or centralisation. This review article aims to introduce the concept of FL, provide current evidence of FL in ocular imaging, and discuss potential challenges as well as future applications.
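The coordination paradigm described above can be made concrete with a toy simulation of one training round repeated to convergence: the server broadcasts a global parameter, each institution takes a gradient step on its private data, and only the updated parameter travels back for averaging. The sites, data values, learning rate, and round count below are illustrative assumptions, not drawn from any study in this list.

```python
# Toy federated training loop: raw data never leaves each site; only
# locally updated parameters are shared and averaged. All numbers synthetic.

def local_step(theta, data, lr=0.1):
    """One gradient step minimizing mean squared error of a constant model."""
    grad = sum(2 * (theta - y) for y in data) / len(data)
    return theta - lr * grad

def federated_round(theta, private_datasets, lr=0.1):
    """Broadcast theta, collect one local update per site, average them."""
    updates = [local_step(theta, data, lr) for data in private_datasets]
    return sum(updates) / len(updates)  # unweighted average across sites

# Three institutions with private measurements the server never sees.
site_data = [[1.0, 1.2], [0.8], [1.5, 1.4, 1.6]]
theta = 0.0
for _ in range(50):
    theta = federated_round(theta, site_data)
# theta converges to the mean of the site means, (1.1 + 0.8 + 1.5) / 3
```

Real FL systems add secure aggregation, client sampling, and size-weighted averaging on top of this skeleton, but the communication pattern is the same.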
Affiliation(s)
- Truong X. Nguyen
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
- An Ran Ran
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
- Xiaoyan Hu
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
- Dawei Yang
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
- Meirui Jiang
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
- Qi Dou
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
- Carol Y. Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
14
Teo ZL, Lee AY, Campbell P, Chan RVP, Ting DSW. Developments in Artificial Intelligence for Ophthalmology: Federated Learning. Asia Pac J Ophthalmol (Phila) 2022; 11:500-502. [PMID: 36417673 DOI: 10.1097/apo.0000000000000582] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2022] [Accepted: 10/04/2022] [Indexed: 11/24/2022] Open
Affiliation(s)
- Zhen Ling Teo
- Singapore National Eye Centre, Singapore
- Singapore Eye Research Institute, Singapore
- Aaron Y Lee
- Department of Ophthalmology, Roger and Angie Karalis Johnson Retina Center, University of Washington, Seattle, WA
- Peter Campbell
- Department of Ophthalmology, Oregon Health and Science University, Portland, OR
- R V Paul Chan
- Department of Ophthalmology, University of Illinois Chicago, Chicago, IL
- Daniel S W Ting
- Singapore National Eye Centre, Singapore
- Singapore Eye Research Institute, Singapore
- Duke-NUS Medical School, Singapore
15
Jaber A, Vayron R, Harmand S. Effect of temperature on evaporation dynamics of sheep's blood droplets and topographic analysis of induced patterns. Heliyon 2022; 8:e11258. [PMID: 36353154 PMCID: PMC9637573 DOI: 10.1016/j.heliyon.2022.e11258] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2022] [Revised: 06/01/2022] [Accepted: 10/20/2022] [Indexed: 11/13/2022] Open
Abstract
To characterize various induced phenomena and the blood of healthy sheep using several parameters, the evaporation dynamics of 72 drops of sheep blood, evaporated at temperatures of 23, 37, 60, and 90 °C on hydrophilic glass substrates, were studied. This allows the sheep-blood pattern to be predicted from the surface temperature, and vice versa. To determine the variation in the Marangoni number between the center and the triple line, infrared thermography was used to measure the temperature variation along the surface of the drop. Simultaneously, a high-performance camera and drop-shape-analysis software were used to measure the variation in drop height during evaporation under controlled conditions (humidity = 40%, Tatm = 23 °C). The study of the evaporation dynamics and pattern formation shows the effect of temperature on the flow circulation inside the drop, which determines the final deposit. The results showed two categories corresponding to two different evaporation phenomena induced by the thermal Marangoni effect. Furthermore, to transform the induced pattern of sheep-blood evaporation into a 3D image, a topographic study was performed using a highly accurate, fast, and flexible optical 3D measurement system, and topographic parameters were extracted from these 3D images. The statistical study showed a good correlation between the topographic parameters and the surface temperature, and a significant difference between each temperature group for each parameter.
16
Lu C, Hanif A, Singh P, Chang K, Coyner AS, Brown JM, Ostmo S, Chan RVP, Rubin D, Chiang MF, Campbell JP, Kalpathy-Cramer J. Federated Learning for Multicenter Collaboration in Ophthalmology: Improving Classification Performance in Retinopathy of Prematurity. Ophthalmol Retina 2022; 6:657-663. [PMID: 35296449 DOI: 10.1016/j.oret.2022.02.015] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2022] [Revised: 02/10/2022] [Accepted: 02/28/2022] [Indexed: 11/29/2022]
Abstract
OBJECTIVE To compare the performance of deep learning classifiers for the diagnosis of plus disease in retinopathy of prematurity (ROP) trained using 2 methods for developing models on multi-institutional data sets: centralizing data versus federated learning (FL), in which no data leave each institution. DESIGN Evaluation of a diagnostic test or technology. SUBJECTS Deep learning models were trained, validated, and tested on 5255 wide-angle retinal images from the neonatal intensive care units of 7 institutions as part of the Imaging and Informatics in ROP study. All images were labeled for the presence of plus, preplus, or no plus disease with a clinical label and a reference standard diagnosis (RSD) determined by 3 image-based ROP graders and the clinical diagnosis. METHODS We compared the area under the receiver operating characteristic curve (AUROC) for models developed on multi-institutional data, using a central approach first and then FL, and compared locally trained models with both approaches. We compared model performance (κ) with the label agreement (between clinical and RSD labels), data set size, and number of plus disease cases in each training cohort using the Spearman correlation coefficient (CC). MAIN OUTCOME MEASURES Model performance using AUROC and linearly weighted κ. RESULTS Four experimental settings were compared: FL trained on RSD versus centrally trained on RSD, FL trained on clinical labels versus centrally trained on clinical labels, FL trained on RSD versus centrally trained on clinical labels, and FL trained on clinical labels versus centrally trained on RSD (P = 0.046, P = 0.126, P = 0.224, and P = 0.0173, respectively). Four of the 7 (57%) models trained on local institutional data performed inferiorly to the FL models. The model performance for local models was positively correlated with the label agreement (between clinical and RSD labels, CC = 0.389, P = 0.387), the total number of plus cases (CC = 0.759, P = 0.047), and the overall training set size (CC = 0.924, P = 0.002). CONCLUSIONS A trained FL model performs comparably to a centralized model, confirming that FL may provide an effective, more feasible solution for interinstitutional learning. Smaller institutions benefit more from collaboration than larger institutions, showing the potential of FL for addressing disparities in resource access.
Affiliation(s)
- Charles Lu
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts; Center for Clinical Data Science, Massachusetts General Hospital and Brigham and Women's Hospital, Boston, Massachusetts
- Adam Hanif
- Department of Ophthalmology, Oregon Health and Science University, Portland, Oregon
- Praveer Singh
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts; Center for Clinical Data Science, Massachusetts General Hospital and Brigham and Women's Hospital, Boston, Massachusetts
- Ken Chang
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts; Center for Clinical Data Science, Massachusetts General Hospital and Brigham and Women's Hospital, Boston, Massachusetts
- Aaron S Coyner
- Department of Ophthalmology, Oregon Health and Science University, Portland, Oregon
- James M Brown
- School of Computer Science, University of Lincoln, Lincoln, United Kingdom
- Susan Ostmo
- Department of Ophthalmology, Oregon Health and Science University, Portland, Oregon
- Robison V Paul Chan
- Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, Illinois
- Daniel Rubin
- Center for Biomedical Informatics Research, Stanford University School of Medicine, Stanford, California
- Michael F Chiang
- National Eye Institute, National Institutes of Health, Bethesda, Maryland
- John Peter Campbell
- Department of Ophthalmology, Oregon Health and Science University, Portland, Oregon
- Jayashree Kalpathy-Cramer
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts; Center for Clinical Data Science, Massachusetts General Hospital and Brigham and Women's Hospital, Boston, Massachusetts
17
Federated Learning in Ophthalmology: Retinopathy of Prematurity. Ophthalmol Retina 2022; 6:647-649. [PMID: 35933119 DOI: 10.1016/j.oret.2022.03.019] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2022] [Accepted: 03/18/2022] [Indexed: 11/21/2022]