1. Raza A, Guzzo A, Ianni M, Lappano R, Zanolini A, Maggiolini M, Fortino G. Federated Learning in radiomics: A comprehensive meta-survey on medical image analysis. Comput Methods Programs Biomed 2025; 267:108768. PMID: 40279838. DOI: 10.1016/j.cmpb.2025.108768.
Abstract
Federated Learning (FL) has emerged as a promising approach for collaborative medical image analysis while preserving data privacy, making it particularly suitable for radiomics tasks. This paper presents a systematic meta-analysis of recent surveys on Federated Learning in Medical Imaging (FL-MI), published in reputable venues over the past five years. We adopt the PRISMA methodology, categorizing and analyzing the existing body of research in FL-MI. Our analysis identifies common trends, challenges, and emerging strategies for implementing FL in medical imaging, including handling data heterogeneity, privacy concerns, and model performance in non-IID settings. The paper also highlights the most widely used datasets and a comparison of adopted machine learning models. Moreover, we examine FL frameworks in FL-MI applications, such as tumor detection, organ segmentation, and disease classification. We identify several research gaps, including the need for more robust privacy protection. Our findings provide a comprehensive overview of the current state of FL-MI and offer valuable directions for future research and development in this rapidly evolving field.
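The aggregation step at the heart of most FL systems the survey covers is federated averaging (FedAvg): each site trains on its own data and a server averages the resulting parameters, weighted by local sample counts. A minimal sketch of that weighted average (the function name and toy numbers are illustrative, not from the survey):

```python
# Minimal FedAvg aggregation: average per-site parameter vectors,
# weighted by each site's dataset size, so larger cohorts count more.

def fedavg(site_weights, site_sizes):
    """Return the size-weighted mean of per-site parameter vectors."""
    total = sum(site_sizes)
    n_params = len(site_weights[0])
    global_weights = [0.0] * n_params
    for w, n in zip(site_weights, site_sizes):
        for i, p in enumerate(w):
            global_weights[i] += p * (n / total)
    return global_weights

# Three hospitals with unequal (non-IID) data contribute parameter vectors.
sites = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [100, 200, 100]
print(fedavg(sites, sizes))  # [3.0, 4.0]
```

Because only parameters (never raw images) leave each site, this is the privacy-preserving collaboration pattern the surveyed FL-MI work builds on.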
Affiliation(s)
- Asaf Raza: Department of Informatics, Modeling, Electronics, and Systems, University of Calabria, Rende, Italy
- Antonella Guzzo: Department of Informatics, Modeling, Electronics, and Systems, University of Calabria, Rende, Italy
- Michele Ianni: Department of Informatics, Modeling, Electronics, and Systems, University of Calabria, Rende, Italy
- Rosamaria Lappano: Department of Pharmacy, Health and Nutritional Sciences, University of Calabria, Rende, Italy
- Alfredo Zanolini: Radiology Unit, "Annunziata" Hospital, Azienda Ospedaliera di Cosenza, Cosenza, Italy
- Marcello Maggiolini: Department of Pharmacy, Health and Nutritional Sciences, University of Calabria, Rende, Italy
- Giancarlo Fortino: Department of Informatics, Modeling, Electronics, and Systems, University of Calabria, Rende, Italy
2. Li L, Back E, Lee S, Shipley R, Mapitse N, Elbe S, Smallman M, Wilson J, Yasin I, Rees G, Gordon B, Murray V, Roberts SL, Cupani A, Kostkova P. Balancing Risks and Opportunities: Data-Empowered-Health Ecosystems. J Med Internet Res 2025; 27:e57237. PMID: 40132190. PMCID: PMC11979548. DOI: 10.2196/57237.
Abstract
This viewpoint paper addresses the ongoing challenges and opportunities within the data-for-health ecosystem, drawing insights from a multistakeholder workshop. Despite notable progress in the digitization of health care systems, data sharing and interoperability remain limited, so the full potential of health care data is not realized. There is a critical need for data ecosystems that can enable the timely, safe, efficient, and sustainable collection and sharing of health care data. However, efforts to meet this need face risks related to privacy, data protection, security, democratic governance, and exclusion. Key challenges include poor interoperability, inconsistent approaches to data governance, and concerns about the commodification of data. While emerging platforms such as social media play a growing role in gathering and sharing health information, their integration into formal data systems remains limited. A robust and secure data-for-health ecosystem requires stronger frameworks for data governance, interoperability, and citizen engagement to build public trust. This paper argues that reframing health care data as a common good, improving the transparency of data acquisition and processing, and promoting the use of application programming interfaces (APIs) for real-time data access are essential to overcoming these challenges. In addition, it highlights the need for international norms and standards guided by multisector leadership, given the multinational nature of data sharing. Ultimately, this paper emphasizes the need to balance risks and opportunities to create a socially acceptable, secure, and effective data-sharing ecosystem in health care.
Affiliation(s)
- Lan Li: University College London, London, United Kingdom
- Emma Back: University College London, London, United Kingdom
- Suna Lee: University College London, London, United Kingdom
- Néo Mapitse: World Organisation for Animal Health, Paris, France
- Stefan Elbe: University of Sussex, Brighton, United Kingdom
- James Wilson: University College London, London, United Kingdom
- Ifat Yasin: University College London, London, United Kingdom
- Geraint Rees: University College London, London, United Kingdom
- Ben Gordon: Our Future Health, London, United Kingdom
- Anna Cupani: University College London, London, United Kingdom
3. Jiang S, Bukhari SMA, Krishnan A, Bera K, Sharma A, Caovan D, Rosipko B, Gupta A. Deployment of Artificial Intelligence in Radiology: Strategies for Success. AJR Am J Roentgenol 2025; 224:e2431898. PMID: 39475198. DOI: 10.2214/ajr.24.31898.
Abstract
Radiology, as a highly technical and information-rich medical specialty, is well suited for artificial intelligence (AI) product development, and many U.S. FDA-cleared AI medical devices are authorized for uses within the specialty. In this Clinical Perspective, we discuss the deployment of AI tools in radiology, exploring regulatory processes, the need for transparency, and other practical challenges. We further highlight the importance of rigorous validation, real-world testing, seamless workflow integration, and end-user education. We emphasize the role of continuous feedback and robust monitoring processes in guiding AI tools' adaptation and helping ensure sustained performance. Traditional standalone and alternative platform-based approaches to radiology AI implementation are considered. The presented strategies will help achieve successful deployment and fully realize AI's potential benefits in radiology.
Affiliation(s)
- Sirui Jiang: Department of Radiology, University Hospitals Cleveland Medical Center, 11100 Euclid Ave, Cleveland, OH 44106
- Syed M A Bukhari: Department of Radiology, University Hospitals Cleveland Medical Center, 11100 Euclid Ave, Cleveland, OH 44106
- Arjun Krishnan: Department of Biology, Cleveland State University, Cleveland, OH
- Kaustav Bera: Department of Radiology, University Hospitals Cleveland Medical Center, 11100 Euclid Ave, Cleveland, OH 44106
- Avishkar Sharma: Department of Radiology, Jefferson Einstein Philadelphia Hospital, Philadelphia, PA
- Danielle Caovan: Department of Radiology, University Hospitals Cleveland Medical Center, 11100 Euclid Ave, Cleveland, OH 44106
- Beverly Rosipko: Department of Radiology, University Hospitals Cleveland Medical Center, 11100 Euclid Ave, Cleveland, OH 44106
- Amit Gupta: Department of Radiology, University Hospitals Cleveland Medical Center, 11100 Euclid Ave, Cleveland, OH 44106
4. Huang Q, Huang F, Chen C, Xiao P, Liu J, Gao Y. Machine-learning model based on ultrasomics for non-invasive evaluation of fibrosis in IgA nephropathy. Eur Radiol 2025. PMID: 39853332. DOI: 10.1007/s00330-025-11368-9.
Abstract
OBJECTIVES To develop and validate an ultrasomics-based machine-learning (ML) model for non-invasive assessment of interstitial fibrosis and tubular atrophy (IF/TA) in patients with IgA nephropathy (IgAN). MATERIALS AND METHODS In this multi-center retrospective study, 471 patients with primary IgA nephropathy from four institutions were included (training, n = 275; internal testing, n = 69; external testing, n = 127). Least absolute shrinkage and selection operator (LASSO) logistic regression with tenfold cross-validation was used to identify the most relevant features. The ML models were constructed based on ultrasomics. Shapley Additive Explanations (SHAP) were used to explore the interpretability of the models. Logistic regression analysis was employed to combine ultrasomics, clinical data, and ultrasound imaging characteristics into a comprehensive model. Receiver operating characteristic, calibration, decision, and clinical impact curves were used to evaluate prediction performance. RESULTS To differentiate between mild and moderate-to-severe IF/TA, three prediction models were developed: the Rad_SVM_Model, Clinic_LR_Model, and Rad_Clinic_Model. The areas under the curve of these three models were 0.861, 0.884, and 0.913 in the training cohort; 0.760, 0.860, and 0.894 in the internal validation cohort; and 0.794, 0.865, and 0.904 in the external validation cohort. SHAP identified the contribution of individual radiomics features. Difference analysis showed significant differences in radiomics features across fibrosis grades. The comprehensive model was superior to the individual indicators and performed well. CONCLUSIONS We developed and validated an ML model combining ultrasomics, clinical data, and clinical ultrasound characteristics to assess the extent of fibrosis in IgAN.
KEY POINTS
Question: There is currently no comprehensive ultrasomics-based machine-learning model for non-invasive assessment of the extent of Immunoglobulin A nephropathy (IgAN) fibrosis.
Findings: We developed and validated a robust and interpretable machine-learning model based on ultrasomics for assessing the degree of fibrosis in IgAN.
Clinical relevance: The ultrasomics-based comprehensive model has potential for non-invasive assessment of fibrosis in IgAN, helping to evaluate disease progression.
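The models above are compared by area under the ROC curve (AUC). A minimal pure-Python computation of the AUC via the Mann-Whitney formulation, the same quantity reported for the training and validation cohorts; the labels and scores below are illustrative, not the study's data:

```python
# ROC AUC as the probability that a randomly chosen positive case
# scores higher than a randomly chosen negative case (ties count 0.5).

def roc_auc(labels, scores):
    """labels: 0/1 ground truth; scores: model-predicted probabilities."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y = [0, 0, 1, 1]          # e.g., mild vs. moderate-to-severe IF/TA
s = [0.1, 0.4, 0.35, 0.8]  # model scores
print(roc_auc(y, s))  # 0.75
```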
Affiliation(s)
- Qun Huang: Department of Ultrasound, First Affiliated Hospital of Guangxi Medical University, Nanning, China
- Fangyi Huang: Department of Ultrasound, First Affiliated Hospital of Guangxi Medical University, Nanning, China
- Chengcai Chen: Department of Ultrasound, Affiliated Hospital of Youjiang Medical University for Nationalities, Baise, China
- Pan Xiao: Department of Ultrasound, Second Affiliated Hospital of Guangxi Medical University, Nanning, China
- Jiali Liu: Department of Ultrasound, People's Hospital of Guangxi Zhuang Autonomous Region, Nanning, China
- Yong Gao: Department of Ultrasound, First Affiliated Hospital of Guangxi Medical University, Nanning, China
5. Sourlos N, Vliegenthart R, Santinha J, Klontzas ME, Cuocolo R, Huisman M, van Ooijen P. Recommendations for the creation of benchmark datasets for reproducible artificial intelligence in radiology. Insights Imaging 2024; 15:248. PMID: 39400639. PMCID: PMC11473745. DOI: 10.1186/s13244-024-01833-2.
Abstract
Various healthcare domains, including radiology, have witnessed successful preliminary implementation of artificial intelligence (AI) solutions, though limited generalizability hinders their widespread adoption. Currently, most research groups and industry have limited access to the data needed for external validation studies. The creation and accessibility of benchmark datasets to validate such solutions represent a critical step towards generalizability, for which an array of aspects ranging from preprocessing to regulatory issues and biostatistical principles come into play. In this article, the authors provide recommendations for the creation of benchmark datasets in radiology, explain current limitations in this realm, and explore potential new approaches.
CLINICAL RELEVANCE STATEMENT: Benchmark datasets, by facilitating validation of AI software performance, can contribute to the adoption of AI in clinical practice.
KEY POINTS: Benchmark datasets are essential for the validation of AI software performance. Factors like image quality and representativeness of cases should be considered. Benchmark datasets can help adoption by increasing the trustworthiness and robustness of AI.
Affiliation(s)
- Nikos Sourlos: Department of Radiology, University Medical Center of Groningen, Groningen, The Netherlands; DataScience Center in Health, University Medical Center Groningen, Groningen, The Netherlands
- Rozemarijn Vliegenthart: Department of Radiology, University Medical Center of Groningen, Groningen, The Netherlands; DataScience Center in Health, University Medical Center Groningen, Groningen, The Netherlands
- Joao Santinha: Digital Surgery LAB, Champalimaud Foundation, Champalimaud Clinical Centre, Lisbon, Portugal
- Michail E Klontzas: Department of Medical Imaging, University Hospital of Heraklion, Heraklion, Greece; Department of Radiology, School of Medicine, University of Crete, Heraklion, Greece
- Renato Cuocolo: Department of Medicine, Surgery, and Dentistry, University of Salerno, Baronissi, Italy
- Merel Huisman: Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, The Netherlands
- Peter van Ooijen: DataScience Center in Health, University Medical Center Groningen, Groningen, The Netherlands; Department of Radiation Oncology, University Medical Center Groningen, Groningen, The Netherlands
6. Tang W, van Ooijen PMA, Sival DA, Maurits NM. Automatic two-dimensional & three-dimensional video analysis with deep learning for movement disorders: A systematic review. Artif Intell Med 2024; 156:102952. PMID: 39180925. DOI: 10.1016/j.artmed.2024.102952.
Abstract
The advent of computer vision technology and increased usage of video cameras in clinical settings have facilitated advancements in movement disorder analysis. This review investigated these advancements in terms of providing practical, low-cost solutions for the diagnosis and analysis of movement disorders, such as Parkinson's disease (PD), ataxia, dyskinesia, and Tourette syndrome. Traditional diagnostic methods for movement disorders are typically reliant on the subjective assessment of motor symptoms, which poses inherent challenges. Furthermore, early symptoms are often overlooked, and overlapping symptoms across diseases can complicate early diagnosis. Consequently, deep learning has been used for the objective video-based analysis of movement disorders. This study systematically reviewed the latest advancements in automatic two-dimensional & three-dimensional video analysis using deep learning for movement disorders. We comprehensively analyzed the literature published until September 2023 by searching the Web of Science, PubMed, Scopus, and Embase databases. We identified 68 relevant studies and extracted information on their objectives, datasets, modalities, and methodologies. The study aimed to identify, catalogue, and present the most significant advancements, offering a consolidated knowledge base on the role of video analysis and deep learning in movement disorder analysis. First, the objectives, including specific PD symptom quantification, ataxia assessment, cerebral palsy assessment, gait disorder analysis, tremor assessment, tic detection (in the context of Tourette syndrome), dystonia assessment, and abnormal movement recognition, were discussed. Thereafter, the datasets used in the studies were examined. Subsequently, video modalities and deep learning methodologies related to the topic were investigated. Finally, the challenges and opportunities in terms of datasets, interpretability, evaluation methods, and home/remote monitoring were discussed.
Affiliation(s)
- Wei Tang: Department of Neurology, University Medical Center Groningen, University of Groningen, P.O. Box 30001, 9700 RB Groningen, The Netherlands; Data Science Center in Health, University Medical Center Groningen, University of Groningen, P.O. Box 30001, 9700 RB Groningen, The Netherlands
- Peter M A van Ooijen: Data Science Center in Health, University Medical Center Groningen, University of Groningen, P.O. Box 30001, 9700 RB Groningen, The Netherlands
- Deborah A Sival: Department of Pediatric Neurology, University Medical Center Groningen, University of Groningen, P.O. Box 30001, 9700 RB Groningen, The Netherlands
- Natasha M Maurits: Department of Neurology, University Medical Center Groningen, University of Groningen, P.O. Box 30001, 9700 RB Groningen, The Netherlands
7. Vahdati S, Khosravi B, Mahmoudi E, Zhang K, Rouzrokh P, Faghani S, Moassefi M, Tahmasebi A, Andriole KP, Chang P, Farahani K, Flores MG, Folio L, Houshmand S, Giger ML, Gichoya JW, Erickson BJ. A Guideline for Open-Source Tools to Make Medical Imaging Data Ready for Artificial Intelligence Applications: A Society of Imaging Informatics in Medicine (SIIM) Survey. J Imaging Inform Med 2024; 37:2015-2024. PMID: 38558368. PMCID: PMC11522208. DOI: 10.1007/s10278-024-01083-0.
Abstract
In recent years, the role of Artificial Intelligence (AI) in medical imaging has become increasingly prominent: in 2023, the majority of AI applications approved by the FDA were in imaging and radiology. The surge in AI model development to tackle clinical challenges underscores the necessity of preparing high-quality medical imaging data. Proper data preparation is crucial, as it fosters the creation of standardized and reproducible AI models while minimizing biases. Data curation transforms raw data into a valuable, organized, and dependable resource and is fundamental to the success of machine learning and analytical projects. Given the plethora of tools available for data curation at different stages, it is crucial to stay informed about the most relevant tools within specific research areas. In the current work, we propose a descriptive outline of the different steps of data curation and, for each stage, compile the tools identified through a survey administered to members of the Society of Imaging Informatics in Medicine (SIIM). This collection can enhance the decision-making process for researchers as they select the most appropriate tool for their specific tasks.
Affiliation(s)
- Sanaz Vahdati: Artificial Intelligence Laboratory, Department of Radiology, Mayo Clinic, 200 1st Street, SW, Rochester, MN, 55905, USA
- Bardia Khosravi: Artificial Intelligence Laboratory, Department of Radiology, Mayo Clinic, 200 1st Street, SW, Rochester, MN, 55905, USA
- Elham Mahmoudi: Artificial Intelligence Laboratory, Department of Radiology, Mayo Clinic, 200 1st Street, SW, Rochester, MN, 55905, USA
- Kuan Zhang: Artificial Intelligence Laboratory, Department of Radiology, Mayo Clinic, 200 1st Street, SW, Rochester, MN, 55905, USA
- Pouria Rouzrokh: Artificial Intelligence Laboratory, Department of Radiology, Mayo Clinic, 200 1st Street, SW, Rochester, MN, 55905, USA
- Shahriar Faghani: Artificial Intelligence Laboratory, Department of Radiology, Mayo Clinic, 200 1st Street, SW, Rochester, MN, 55905, USA
- Mana Moassefi: Artificial Intelligence Laboratory, Department of Radiology, Mayo Clinic, 200 1st Street, SW, Rochester, MN, 55905, USA
- Aylin Tahmasebi: Department of Radiology, Thomas Jefferson University, Philadelphia, PA, USA
- Katherine P Andriole: Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Peter Chang: Department of Radiological Sciences, Irvine Medical Center, University of California, Orange, CA, USA
- Keyvan Farahani: Center for Biomedical Informatics and Information Technology, National Cancer Institute, Bethesda, MD, USA
- Les Folio: Diagnostic Imaging & Interventional Radiology, Moffitt Cancer Center, Tampa, FL, USA
- Sina Houshmand: Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA, USA
- Maryellen L Giger: Department of Radiology, The University of Chicago, Chicago, IL, USA
- Judy W Gichoya: Department of Radiology, Emory University School of Medicine, Atlanta, GA, USA
- Bradley J Erickson: Artificial Intelligence Laboratory, Department of Radiology, Mayo Clinic, 200 1st Street, SW, Rochester, MN, 55905, USA
8. Jia PF, Li YR, Wang LY, Lu XR, Guo X. Radiomics in esophagogastric junction cancer: A scoping review of current status and advances. Eur J Radiol 2024; 177:111577. PMID: 38905802. DOI: 10.1016/j.ejrad.2024.111577.
Abstract
PURPOSE This scoping review aimed to understand recent advances in radiomics in esophagogastric junction (EGJ) cancer and assess the current status of the field. METHODS We conducted systematic searches of the PubMed, Embase, and Web of Science databases from January 18, 2012, to January 15, 2023, to identify radiomics articles related to EGJ cancer. Two researchers independently screened the literature, extracted data, and assessed study quality using the Radiomics Quality Score (RQS) and the METhodological RadiomICs Score (METRICS) tool. RESULTS A total of 120 articles were retrieved from the three databases; after screening, six papers met the inclusion criteria. These studies investigated the role of radiomics in differentiating adenocarcinoma from squamous carcinoma, diagnosing T-stage, evaluating HER2 overexpression, predicting response to neoadjuvant therapy, and predicting prognosis in EGJ cancer. The median RQS score percentage was 34.7% (range, 22.2% to 38.9%); the median METRICS score percentage was 71.2% (range, 58.2% to 84.9%). CONCLUSION Although there is a considerable difference between the RQS and METRICS scores of the included literature, we believe the research value of radiomics in EGJ cancer has been demonstrated. Future work should actively explore more diagnostic, prognostic, and biological correlation studies in EGJ cancer while placing greater emphasis on the standardization and clinical application of radiomics.
Affiliation(s)
- Ping-Fan Jia: Department of Medical Imaging, Heping Hospital Affiliated to Changzhi Medical College, Changzhi, China
- Yu-Ru Li: Department of Medical Imaging, Heping Hospital Affiliated to Changzhi Medical College, Changzhi, China
- Lu-Yao Wang: Department of Medical Imaging, Heping Hospital Affiliated to Changzhi Medical College, Changzhi, China
- Xiao-Rui Lu: Department of Medical Imaging, Heping Hospital Affiliated to Changzhi Medical College, Changzhi, China
- Xing Guo: Department of Medical Imaging, Heping Hospital Affiliated to Changzhi Medical College, Changzhi, China
9. Yeom JC, Kim JH, Kim YJ, Kim J, Kim KG. A Comparative Study of Performance Between Federated Learning and Centralized Learning Using Pathological Image of Endometrial Cancer. J Imaging Inform Med 2024; 37:1683-1690. PMID: 38381385. PMCID: PMC11300724. DOI: 10.1007/s10278-024-01020-1.
Abstract
Federated learning, an innovative artificial intelligence training method, offers a secure solution for institutions to collaboratively develop models without sharing raw data. This approach offers immense promise and is particularly advantageous for domains dealing with sensitive information, such as patient data. However, when confronted with a distributed data environment, challenges arise due to data paucity or inherent heterogeneity, potentially impacting the performance of federated learning models. Hence, scrutinizing the efficacy of this method in such intricate settings is indispensable. To address this, we harnessed pathological image datasets of endometrial cancer from four hospitals for training and evaluating the performance of a federated learning model and compared it with a centralized learning model. With optimal processing techniques (data augmentation, color normalization, and adaptive optimizer), federated learning exhibited lower precision but higher recall and Dice similarity coefficient (DSC) than centralized learning. Hence, considering the critical importance of recall in the context of medical image processing, federated learning is demonstrated as a viable and applicable approach in this field, offering advantages in terms of both performance and data security.
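The comparison above rests on precision, recall, and the Dice similarity coefficient (DSC) computed over segmentation masks. A minimal sketch of these three metrics for binary masks flattened to pixel lists (the function name and toy masks are illustrative, not from the study):

```python
# Segmentation metrics over binary masks: precision, recall, and the
# Dice similarity coefficient (DSC = 2*TP / (2*TP + FP + FN)).

def seg_metrics(pred, truth):
    """pred, truth: equal-length 0/1 sequences of flattened mask pixels."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)
    fn = sum(1 for p, t in zip(pred, truth) if not p and t)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    dsc = 2 * tp / (2 * tp + fp + fn) if tp + fp + fn else 1.0
    return precision, recall, dsc

pred  = [1, 1, 1, 0, 0, 0]
truth = [1, 1, 0, 1, 0, 0]
print(seg_metrics(pred, truth))  # one FP and one FN: all three equal 2/3
```

High recall, which the authors weight heavily for medical image processing, rewards catching true lesion pixels even at the cost of some false positives.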
Affiliation(s)
- Jong Chan Yeom: Department of Bio-health Medical Engineering, Gachon University, Seongnam, Republic of Korea
- Jae Hoon Kim: Obstetrics and Gynecology, Yonsei University College of Medicine, Seoul, 03722, Republic of Korea; Department of Obstetrics and Gynecology, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, 06229, Republic of Korea; Institute of Women's Life Medical Science, Yonsei University College of Medicine, Seoul, 03722, Republic of Korea
- Young Jae Kim: Department of Biomedical Engineering, Gachon University, 191, Hambangmoe-ro, Yeonsu-gu, Incheon, 21936, Korea; Department of Biomedical Engineering, Gachon University College of Medicine, Gil Medical Center, 38-13 Docjeom-ro 3 Beon-gil, Namdong-gu, Incheon, 21565, Korea; Department of Health Sciences and Technology, Gachon Advanced Institute for Health Sciences and Technology (GAIHST), Gachon University, Seongnam-si, 13120, Korea
- Jisup Kim: Department of Pathology, Gil Medical Center, Gachon University College of Medicine, Incheon, 21565, Republic of Korea
- Kwang Gi Kim: Department of Biomedical Engineering, Gachon University, 191, Hambangmoe-ro, Yeonsu-gu, Incheon, 21936, Korea; Department of Biomedical Engineering, Gachon University College of Medicine, Gil Medical Center, 38-13 Docjeom-ro 3 Beon-gil, Namdong-gu, Incheon, 21565, Korea; Department of Health Sciences and Technology, Gachon Advanced Institute for Health Sciences and Technology (GAIHST), Gachon University, Seongnam-si, 13120, Korea
10. Cho H, Froelicher D, Dokmai N, Nandi A, Sadhuka S, Hong MM, Berger B. Privacy-Enhancing Technologies in Biomedical Data Science. Annu Rev Biomed Data Sci 2024; 7:317-343. PMID: 39178425. PMCID: PMC11346580. DOI: 10.1146/annurev-biodatasci-120423-120107.
Abstract
The rapidly growing scale and variety of biomedical data repositories raise important privacy concerns. Conventional frameworks for collecting and sharing human subject data offer limited privacy protection, often necessitating the creation of data silos. Privacy-enhancing technologies (PETs) promise to safeguard these data and broaden their usage by providing means to share and analyze sensitive data while protecting privacy. Here, we review prominent PETs and illustrate their role in advancing biomedicine. We describe key use cases of PETs and their latest technical advances and highlight recent applications of PETs in a range of biomedical domains. We conclude by discussing outstanding challenges and social considerations that need to be addressed to facilitate a broader adoption of PETs in biomedical data science.
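One PET family the review covers is secure computation via additive secret sharing: each party splits its value into random shares that sum to the value modulo a prime, so an aggregator learns only the total, never any individual input. A minimal sketch under illustrative assumptions (the modulus, party count, and values are ours):

```python
# Additive secret sharing over a prime field: a party's value is split into
# n random shares that sum to it mod P; summing all parties' shares
# reconstructs only the aggregate, protecting each individual contribution.
import random

P = 2**61 - 1  # prime modulus for the share field


def share(value, n_parties):
    """Split `value` into n_parties random shares summing to it mod P."""
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares


def secure_sum(all_shares):
    """Sum every share from every party; equals the sum of the secrets mod P."""
    return sum(sum(col) for col in zip(*all_shares)) % P


values = [42, 7, 13]  # e.g., per-site case counts no site wants to disclose
shares = [share(v, 3) for v in values]
print(secure_sum(shares))  # 62
```

Real deployments route each share column to a different non-colluding server; this sketch keeps everything in one process only to show the arithmetic.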
Affiliation(s)
- Hyunghoon Cho: Department of Biomedical Informatics and Data Science, Yale School of Medicine, New Haven, Connecticut, USA
- David Froelicher: Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Natnatee Dokmai: Department of Biomedical Informatics and Data Science, Yale School of Medicine, New Haven, Connecticut, USA
- Anupama Nandi: Department of Biomedical Informatics and Data Science, Yale School of Medicine, New Haven, Connecticut, USA
- Shuvom Sadhuka: Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Matthew M Hong: Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Bonnie Berger: Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA; Department of Mathematics, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
11. Contino S, Cruciata L, Gambino O, Pirrone R. IODeep: An IOD for the introduction of deep learning in the DICOM standard. Comput Methods Programs Biomed 2024; 248:108113. PMID: 38479148. DOI: 10.1016/j.cmpb.2024.108113.
Abstract
BACKGROUND AND OBJECTIVE In recent years, Artificial Intelligence (AI), and in particular Deep Neural Networks (DNN), has become a relevant research topic in biomedical image segmentation due to the availability of a growing number of datasets and the establishment of well-known competitions. Despite the popularity of DNN-based segmentation on the research side, these techniques are almost unused in daily clinical practice, even though they could effectively support physicians during the diagnostic process. Apart from the issues related to the explainability of a neural model's predictions, such systems are not integrated into the diagnostic workflow, and a standardization of their use is needed to achieve this goal. METHODS This paper presents IODeep, a new DICOM Information Object Definition (IOD) aimed at storing both the weights and the architecture of a DNN already trained on a particular image dataset, labeled with the acquisition modality, the anatomical region, and the disease under investigation. RESULTS The IOD architecture is presented along with a DNN selection algorithm for the PACS server based on the labels outlined above, and a simple PACS viewer purposely designed to demonstrate the effectiveness of the DICOM integration; no modifications are required on the PACS server side. A service-based architecture supporting the entire workflow has also been implemented. CONCLUSION IODeep ensures full integration of a trained AI model in a DICOM infrastructure, and it also enables a scenario in which a trained model can be either fine-tuned with hospital data or trained in a federated learning scheme shared by different hospitals. In this way, AI models can be tailored to the real data produced by a radiology ward, thus improving the physician's decision-making process. Source code is freely available at https://github.com/CHILab1/IODeep.git.
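The selection step described above, picking the stored network whose modality, anatomical region, and disease labels match the current study, can be sketched as follows. The record fields, label values, and model identifiers here are illustrative stand-ins, not IODeep's actual DICOM attribute names:

```python
# Label-based DNN selection in the spirit of the paper: the PACS holds
# trained networks tagged with (modality, region, disease); the viewer asks
# for the one matching the study being read. All names are hypothetical.

MODELS = [
    {"modality": "MR", "region": "brain",   "disease": "glioma",  "id": "dnn-001"},
    {"modality": "CT", "region": "abdomen", "disease": "adrenal", "id": "dnn-002"},
]


def select_model(modality, region, disease):
    """Return the id of the first stored model matching all three labels."""
    for m in MODELS:
        if (m["modality"], m["region"], m["disease"]) == (modality, region, disease):
            return m["id"]
    return None  # no matching trained network stored on the PACS


print(select_model("CT", "abdomen", "adrenal"))  # dnn-002
```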
Affiliation(s)
- Salvatore Contino: Department of Engineering, University of Palermo, Palermo, 90128, Sicily, Italy
- Luca Cruciata: Department of Engineering, University of Palermo, Palermo, 90128, Sicily, Italy
- Orazio Gambino: Department of Engineering, University of Palermo, Palermo, 90128, Sicily, Italy
- Roberto Pirrone: Department of Engineering, University of Palermo, Palermo, 90128, Sicily, Italy
12
Flory MN, Napel S, Tsai EB. Artificial Intelligence in Radiology: Opportunities and Challenges. Semin Ultrasound CT MR 2024; 45:152-160. [PMID: 38403128 DOI: 10.1053/j.sult.2024.02.004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/27/2024]
Abstract
Artificial intelligence's (AI) emergence in radiology elicits both excitement and uncertainty. AI holds promise for improving radiology with regard to clinical practice, education, and research opportunities. Yet AI systems are trained on select datasets that can contain bias and inaccuracies. Radiologists must understand these limitations and engage with AI developers at every step of the process, from algorithm initiation and design to development and implementation, to maximize the benefits this technology can enable and minimize its harms.
Affiliation(s)
- Marta N Flory: Department of Radiology, Stanford University School of Medicine, Center for Academic Medicine, Palo Alto, CA
- Sandy Napel: Department of Radiology, Stanford University School of Medicine, Center for Academic Medicine, Palo Alto, CA
- Emily B Tsai: Department of Radiology, Stanford University School of Medicine, Center for Academic Medicine, Palo Alto, CA
13
Feng B, Ma C, Liu Y, Hu Q, Lei Y, Wan M, Lin F, Cui J, Long W, Cui E. Deep learning vs. robust federal learning for distinguishing adrenal metastases from benign lesions with multi-phase CT images. Heliyon 2024; 10:e25655. [PMID: 38371957 PMCID: PMC10873667 DOI: 10.1016/j.heliyon.2024.e25655] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2023] [Revised: 01/25/2024] [Accepted: 01/31/2024] [Indexed: 02/20/2024] Open
Abstract
Background Differentiating adrenal adenomas from metastases poses a significant challenge, particularly in patients with a history of extra-adrenal malignancy. This study compares the performance of a three-phase computed tomography (CT)-based robust federal learning algorithm with that of traditional deep learning for distinguishing metastases from benign adrenal lesions. Material and methods This retrospective analysis includes 1187 patients who underwent three-phase CT scans between January 2008 and March 2021, comprising 720 benign lesions and 467 metastases. Using the three-phase CT images, both a Robust Federal Learning Signature (RFLS) and a traditional Deep Learning Signature (DLS) were constructed with Least Absolute Shrinkage and Selection Operator (LASSO) logistic regression. Their diagnostic capabilities were then validated and compared using the Area Under the Receiver Operating Characteristic Curve (AUC), Net Reclassification Improvement (NRI), and Decision Curve Analysis (DCA). Results Compared with the DLS, the RFLS was better at distinguishing metastases from benign adrenal lesions in the testing cohorts (average AUC: 0.816 vs. 0.798, NRI = 0.126, P < 0.072; 0.889 vs. 0.838, NRI = 0.209, P < 0.001; 0.903 vs. 0.825, NRI = 0.643, P < 0.001). DCA showed that the RFLS added more net benefit than the DLS for clinical utility. Moreover, comparison with state-of-the-art federal learning methods again confirmed that the RFLS significantly improved diagnostic performance based on three-phase CT (AUC: AP, 0.727 vs. 0.757 vs. 0.739 vs. 0.796; PCP, 0.781 vs. 0.851 vs. 0.790 vs. 0.882; VP, 0.789 vs. 0.814 vs. 0.779 vs. 0.886). Conclusion The RFLS was superior to the DLS for preoperative distinction of metastases from benign adrenal lesions on multi-phase CT images.
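The head-to-head comparison above rests on the AUC, which can be computed directly from ranks: it is the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. A minimal pure-Python version of that metric (illustrative only, not the study's evaluation code):

```python
def auc(labels, scores):
    """AUC as the probability that a positive outranks a negative (ties count 0.5).

    labels: 1 for metastasis, 0 for benign; scores: model outputs.
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfectly separated toy scores give AUC = 1.0; random ties give 0.5.
perfect = auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.2])
```

An AUC difference such as the reported 0.889 vs. 0.838 is simply this probability evaluated for the two signatures on the same cohort.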
Affiliation(s)
- Bao Feng: Department of Radiology, Jiangmen Central Hospital, Jiangmen, 529030, China; Laboratory of Intelligent Detection and Information Processing, Guilin University of Aerospace Technology, Guilin, 541004, China
- Changyi Ma: Department of Radiology, Jiangmen Central Hospital, Jiangmen, 529030, China
- Yu Liu: Laboratory of Intelligent Detection and Information Processing, Guilin University of Aerospace Technology, Guilin, 541004, China
- Qinghui Hu: Laboratory of Intelligent Detection and Information Processing, Guilin University of Aerospace Technology, Guilin, 541004, China
- Yan Lei: Department of Radiology, Jiangmen Central Hospital, Jiangmen, 529030, China
- Meiqi Wan: Department of Radiology, Jiangmen Central Hospital, Jiangmen, 529030, China
- Fan Lin: Department of Radiology, The First Affiliated Hospital of Shenzhen University, Health Science Center, Shenzhen Second People's Hospital, Shenzhen, 518035, China
- Jin Cui: Department of Radiology, Jiangmen Central Hospital, Jiangmen, 529030, China
- Wansheng Long: Department of Radiology, Jiangmen Central Hospital, Jiangmen, 529030, China
- Enming Cui: Department of Radiology, Jiangmen Central Hospital, Jiangmen, 529030, China; Guangzhou Key Laboratory of Molecular and Functional Imaging for Clinical Translation, Guangzhou, 510620, China
14
Klontzas ME, Kalarakis G, Koltsakis E, Papathomas T, Karantanas AH, Tzortzakakis A. Convolutional neural networks for the differentiation between benign and malignant renal tumors with a multicenter international computed tomography dataset. Insights Imaging 2024; 15:26. [PMID: 38270726 PMCID: PMC10811309 DOI: 10.1186/s13244-023-01601-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2023] [Accepted: 12/17/2023] [Indexed: 01/26/2024] Open
Abstract
OBJECTIVES To use convolutional neural networks (CNNs) for the differentiation between benign and malignant renal tumors using contrast-enhanced CT images from a multi-institutional, multi-vendor, and multicenter CT dataset. METHODS A total of 264 histologically confirmed renal tumors were included, from US and Swedish centers. Images were augmented and divided randomly 70%:30% for algorithm training and testing. Three CNNs (InceptionV3, Inception-ResNetV2, VGG-16) were pretrained with transfer learning and fine-tuned with our dataset to distinguish between malignant and benign tumors. The ensemble consensus decision of the three networks was also recorded. The performance of each network was assessed with receiver operating characteristic (ROC) curves and their area under the curve (AUC-ROC). Saliency maps were created to demonstrate the attention of the highest-performing CNN. RESULTS Inception-ResNetV2 achieved the highest AUC of 0.918 (95% CI 0.873-0.963), whereas VGG-16 achieved an AUC of 0.813 (95% CI 0.752-0.874). InceptionV3 and the ensemble achieved the same performance, with an AUC of 0.894 (95% CI 0.844-0.943). Saliency maps indicated that Inception-ResNetV2 based its decisions on the characteristics of the tumor and, in most tumors, on the characteristics of the interface between the tumor and the surrounding renal parenchyma. CONCLUSION Deep learning based on a diverse multicenter international dataset can enable accurate differentiation between benign and malignant renal tumors. CRITICAL RELEVANCE STATEMENT Convolutional neural networks trained on a diverse CT dataset can accurately differentiate between benign and malignant renal tumors. KEY POINTS • Differentiation between benign and malignant tumors based on CT is extremely challenging. • Inception-ResNetV2 trained on a diverse dataset achieved excellent differentiation between tumor types. • Deep learning can be used to distinguish between benign and malignant renal tumors.
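The "ensemble consensus decision" of the three CNNs can be sketched as a majority vote over the per-network class predictions. This is a generic illustration of that aggregation step, not the study's implementation:

```python
from collections import Counter

def consensus(votes):
    """Majority vote over per-network class predictions (e.g. one vote per CNN)."""
    return Counter(votes).most_common(1)[0][0]

# Two of the three networks call the lesion malignant, so the ensemble does too.
decision = consensus(["malignant", "malignant", "benign"])
```

With three voters and two classes a tie is impossible, so the consensus label is always well defined.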
Affiliation(s)
- Michail E Klontzas: Department of Medical Imaging, University Hospital of Heraklion, Heraklion, Crete, Greece; Computational BioMedicine Laboratory, Institute of Computer Science, Foundation for Research and Technology (FORTH), Heraklion, Crete, Greece; Department of Radiology, School of Medicine, University of Crete, Voutes Campus, Heraklion, Greece
- Georgios Kalarakis: Department of Diagnostic Radiology, Karolinska University Hospital, Stockholm, Sweden; Division of Radiology, Department for Clinical Science, Intervention and Technology (CLINTEC), Karolinska Institutet, Stockholm, Sweden
- Emmanouil Koltsakis: Department of Diagnostic Radiology, Karolinska University Hospital, Stockholm, Sweden
- Thomas Papathomas: Institute of Metabolism and Systems Research, University of Birmingham, Birmingham, UK; Department of Clinical Pathology, Vestre Viken Hospital Trust, Drammen, Norway
- Apostolos H Karantanas: Department of Medical Imaging, University Hospital of Heraklion, Heraklion, Crete, Greece; Computational BioMedicine Laboratory, Institute of Computer Science, Foundation for Research and Technology (FORTH), Heraklion, Crete, Greece; Department of Radiology, School of Medicine, University of Crete, Voutes Campus, Heraklion, Greece
- Antonios Tzortzakakis: Division of Radiology, Department for Clinical Science, Intervention and Technology (CLINTEC), Karolinska Institutet, Stockholm, Sweden; Medical Radiation Physics and Nuclear Medicine, Section for Nuclear Medicine, Karolinska University Hospital, 14 186, Huddinge, Stockholm, Sweden
15
Gu X, Sabrina F, Fan Z, Sohail S. A Review of Privacy Enhancement Methods for Federated Learning in Healthcare Systems. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2023; 20:6539. [PMID: 37569079 PMCID: PMC10418741 DOI: 10.3390/ijerph20156539] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/26/2023] [Revised: 07/11/2023] [Accepted: 08/04/2023] [Indexed: 08/13/2023]
Abstract
Federated learning (FL) provides a distributed machine learning system that enables participants to train a shared model on their local data, eliminating the requirement of data sharing. In healthcare systems, FL allows Medical Internet of Things (MIoT) devices and electronic health records (EHRs) to be used for training locally, without sending patients' data to a central server. This enables healthcare decisions and diagnoses based on datasets from all participants, as well as streamlining other healthcare processes. In terms of user data privacy, the technology allows collaborative training without the need to share local data with the central server. However, privacy challenges arise in FL because the model updates shared between client and server can be used to regenerate the client's data, breaching the privacy requirements of applications in domains like healthcare. In this paper, we review the literature to analyse the existing privacy and security enhancement methods proposed for FL in healthcare systems. The research in the domain focuses on seven techniques: Differential Privacy, Homomorphic Encryption, Blockchain, Hierarchical Approaches, Peer-to-Peer Sharing, Intelligence on the Edge Device, and Mixed, Hybrid and Miscellaneous Approaches. The strengths, limitations, and trade-offs of each technique are discussed, and possible futures for these seven privacy enhancement techniques in healthcare FL systems are identified.
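One of the surveyed techniques, differential privacy, is often applied to exactly the leak the abstract describes: before a model update leaves the client, it is clipped to bound its norm and perturbed with Gaussian noise so the server cannot reconstruct the underlying records. A minimal sketch, with an illustrative clip bound and noise scale (not values from the survey):

```python
import math
import random

def privatize(update, clip=1.0, sigma=0.5, rng=random.Random(0)):
    """Clip an update vector to L2 norm `clip`, then add Gaussian noise.

    This is the Gaussian-mechanism pattern used in differentially private FL;
    the privacy guarantee depends on how clip/sigma are calibrated.
    """
    norm = math.sqrt(sum(w * w for w in update))
    scale = min(1.0, clip / norm) if norm > 0 else 1.0
    clipped = [w * scale for w in update]
    return [w + rng.gauss(0.0, sigma * clip) for w in clipped]

noisy = privatize([3.0, 4.0])  # clipped from norm 5 to norm 1, then perturbed
```

With `sigma=0.0` the function reduces to pure norm clipping, which is a convenient way to check the clipping step in isolation.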
Affiliation(s)
- Xin Gu: School of Information Technology, King’s Own Institute, Sydney, NSW 2000, Australia
- Fariza Sabrina: School of Engineering and Technology, Central Queensland University, Sydney, NSW 2000, Australia
- Zongwen Fan: College of Computer Science and Technology, Huaqiao University, Xiamen 361021, China
- Shaleeza Sohail: College of Engineering, Science and Environment, The University of Newcastle, Callaghan, NSW 2308, Australia
16
Jin T, Pan S, Li X, Chen S. Metadata and Image Features Co-Aware Personalized Federated Learning for Smart Healthcare. IEEE J Biomed Health Inform 2023; 27:4110-4119. [PMID: 37220032 DOI: 10.1109/jbhi.2023.3279096] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
Recently, artificial intelligence has been widely used in intelligent disease diagnosis and has achieved great success. However, most existing work relies mainly on the extraction of image features and ignores patients' clinical text information, which may fundamentally limit diagnostic accuracy. In this paper, we propose a metadata and image features co-aware personalized federated learning scheme for smart healthcare. Specifically, we construct an intelligent diagnosis model through which users can obtain fast and accurate diagnosis services. Meanwhile, a personalized federated learning scheme is designed to utilize the knowledge learned from other edge nodes with larger contributions and to customize high-quality personalized classification models for each edge node. Subsequently, a Naïve Bayes classifier is devised to classify patient metadata, and the image and metadata diagnosis results are aggregated with different weights to improve the accuracy of intelligent diagnosis. Finally, simulation results illustrate that, compared with existing methods, our proposed algorithm achieves better classification accuracy, reaching about 97.16% on the PAD-UFES-20 dataset.
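The final fusion step, combining the image-branch and metadata-branch outputs "by different weights", amounts to a weighted average of the two class-probability vectors. The weights below are illustrative placeholders, not the values used in the paper:

```python
def fuse(image_probs, meta_probs, w_image=0.7, w_meta=0.3):
    """Weighted aggregation of image-based and metadata-based class probabilities.

    Assumes both inputs are probability vectors over the same classes and that
    the weights sum to 1, so the fused vector is again a probability vector.
    """
    return [w_image * p + w_meta * q for p, q in zip(image_probs, meta_probs)]

# Image branch leans toward class 0; metadata branch disagrees mildly.
fused = fuse([0.8, 0.2], [0.4, 0.6])  # -> roughly [0.68, 0.32]
```

Because both branches contribute, a confident metadata classifier (here, the Naïve Bayes output) can shift a borderline image-only decision.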
17
Walsh G, Stogiannos N, van de Venter R, Rainey C, Tam W, McFadden S, McNulty JP, Mekis N, Lewis S, O'Regan T, Kumar A, Huisman M, Bisdas S, Kotter E, Pinto dos Santos D, Sá dos Reis C, van Ooijen P, Brady AP, Malamateniou C. Responsible AI practice and AI education are central to AI implementation: a rapid review for all medical imaging professionals in Europe. BJR Open 2023; 5:20230033. [PMID: 37953871 PMCID: PMC10636340 DOI: 10.1259/bjro.20230033] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2023] [Revised: 05/27/2023] [Accepted: 05/30/2023] [Indexed: 11/14/2023] Open
Abstract
Artificial intelligence (AI) has transitioned from the lab to the bedside, and it is increasingly being used in healthcare. Radiology and radiography are on the frontline of AI implementation because of the use of big data for medical imaging and diagnosis across different patient groups. Safe and effective AI implementation requires that responsible and ethical practices are upheld by all key stakeholders, that there is harmonious collaboration between different professional groups, and that educational provision is customised for all involved. This paper outlines key principles of ethical and responsible AI, highlights recent educational initiatives for clinical practitioners, and discusses the synergies between all medical imaging professionals as they prepare for the digital future in Europe. Responsible and ethical AI is vital to enhance a culture of safety and trust for healthcare professionals and patients alike. Educational and training provisions on AI for medical imaging professionals are central to the understanding of basic AI principles and applications, and there are many offerings currently in Europe. Education can facilitate the transparency of AI tools, but more formalised, university-led training is needed to ensure academic scrutiny, appropriate pedagogy, multidisciplinarity, and customisation to the learners' unique needs. As radiographers and radiologists work together and with other professionals to understand and harness the benefits of AI in medical imaging, it becomes clear that they face the same challenges and have the same needs. The digital future belongs to multidisciplinary teams that work seamlessly together, learn together, manage risk collectively, and collaborate for the benefit of the patients they serve.
Affiliation(s)
- Gemma Walsh: Division of Midwifery & Radiography, City University of London, London, United Kingdom
- Clare Rainey: School of Health Sciences, Ulster University, Derry~Londonderry, Northern Ireland
- Winnie Tam: Division of Midwifery & Radiography, City University of London, London, United Kingdom
- Sonyia McFadden: School of Health Sciences, Ulster University, Coleraine, United Kingdom
- Nejc Mekis: Medical Imaging and Radiotherapy Department, University of Ljubljana, Faculty of Health Sciences, Ljubljana, Slovenia
- Sarah Lewis: Discipline of Medical Imaging Science, Sydney School of Health Sciences, Faculty of Medicine and Health, University of Sydney, Sydney, Australia
- Tracy O'Regan: The Society and College of Radiographers, London, United Kingdom
- Amrita Kumar: Frimley Health NHS Foundation Trust, Frimley, United Kingdom
- Merel Huisman: Department of Radiology, University Medical Center Utrecht, Utrecht, Netherlands
- Cláudia Sá dos Reis: School of Health Sciences (HESAV), University of Applied Sciences and Arts Western Switzerland (HES-SO), Lausanne, Switzerland
18
Deep Hybrid Learning Prediction of Patient-Specific Quality Assurance in Radiotherapy: Implementation in Clinical Routine. Diagnostics (Basel) 2023; 13:diagnostics13050943. [PMID: 36900087 PMCID: PMC10001389 DOI: 10.3390/diagnostics13050943] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2023] [Revised: 02/27/2023] [Accepted: 02/27/2023] [Indexed: 03/06/2023] Open
Abstract
BACKGROUND Arc therapy allows for better conformation of dose deposition, but the resulting radiotherapy plans (RT plans) are more complex and require patient-specific pre-treatment quality assurance (QA), which in turn adds to the workload. The objective of this study was to develop a predictive model of Delta4-QA results based on RT-plan complexity indices, in order to reduce the QA workload. METHODS Six complexity indices were extracted from 1632 VMAT RT plans. A machine learning (ML) model was developed for classification (two classes: compliant with the QA plan or not). For more complex locations (breast, pelvis, and head and neck), an innovative deep hybrid learning (DHL) model was trained to achieve better performance. RESULTS For less complex RT plans (brain and thorax tumor locations), the ML model achieved 100% specificity and 98.9% sensitivity. For more complex RT plans, however, specificity fell to 87%; for these plans, the innovative DHL-based QA classification method achieved a sensitivity of 100% and a specificity of 97.72%. CONCLUSIONS The ML and DHL models predicted QA results with a high degree of accuracy. Our predictive QA online platform offers substantial time savings in terms of accelerator occupancy and working time.
19
Zhou B, Miao T, Mirian N, Chen X, Xie H, Feng Z, Guo X, Li X, Zhou SK, Duncan JS, Liu C. Federated Transfer Learning for Low-dose PET Denoising: A Pilot Study with Simulated Heterogeneous Data. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2023; 7:284-295. [PMID: 37789946 PMCID: PMC10544830 DOI: 10.1109/trpms.2022.3194408] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/04/2023]
Abstract
Positron emission tomography (PET) with a reduced injection dose, i.e., low-dose PET, is an efficient way to reduce radiation dose. However, low-dose PET reconstruction suffers from a low signal-to-noise ratio (SNR), affecting diagnosis and other PET-related applications. Recently, deep learning-based PET denoising methods have demonstrated superior performance in generating high-quality reconstructions. However, these methods require a large amount of representative data for training, which can be difficult to collect and share due to medical data privacy regulations. Moreover, low-dose PET data at different institutions may be acquired with different low-dose protocols, leading to non-identical data distributions. While federated learning (FL) algorithms enable multi-institution collaborative training without the need to aggregate local data, it is challenging for existing methods to address the large domain shift caused by different low-dose PET settings, and the application of FL to PET remains under-explored. In this work, we propose a federated transfer learning (FTL) framework for low-dose PET denoising using heterogeneous low-dose data. Our experimental results on simulated multi-institutional data demonstrate that our method can efficiently utilize heterogeneous low-dose data without compromising data privacy, achieving superior low-dose PET denoising performance for institutions with different low-dose settings compared to previous FL methods.
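The "transfer" half of federated transfer learning can be illustrated in miniature: a globally trained model arrives at an institution and is fine-tuned with a few gradient steps on that site's own (here one-dimensional, synthetic) data, adapting the shared weights to the local low-dose protocol. The model and data below are toy placeholders, not the paper's denoising network:

```python
def finetune(global_w, xs, ys, lr=0.1, steps=50):
    """Fine-tune a 1-parameter linear model y = w*x by gradient descent on MSE.

    Starts from the federation's shared weight `global_w` and adapts it to the
    institution's local (x, y) pairs, mimicking the local-adaptation step.
    """
    w = global_w
    for _ in range(steps):
        # d/dw of mean((w*x - y)^2)
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w

# Local data follows y = 2x, while the global model arrived with w = 0.5.
local_w = finetune(0.5, [1.0, 2.0, 3.0], [2.0, 4.0, 6.0])  # converges toward 2.0
```

The design point mirrors the abstract's motivation: rather than forcing one global model to fit every protocol, each institution keeps a locally adapted copy.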
Affiliation(s)
- Bo Zhou: Department of Biomedical Engineering, Yale University, New Haven, CT, 06511, USA
- Tianshun Miao: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, 06511, USA
- Niloufar Mirian: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, 06511, USA
- Xiongchao Chen: Department of Biomedical Engineering, Yale University, New Haven, CT, 06511, USA
- Huidong Xie: Department of Biomedical Engineering, Yale University, New Haven, CT, 06511, USA
- Zhicheng Feng: Department of Biomedical Engineering, University of Southern California, Los Angeles, CA, 90007, USA
- Xueqi Guo: Department of Biomedical Engineering, Yale University, New Haven, CT, 06511, USA
- Xiaoxiao Li: Electrical and Computer Engineering Department, University of British Columbia, Vancouver, Canada
- S Kevin Zhou: School of Biomedical Engineering & Suzhou Institute for Advanced Research, University of Science and Technology of China, Suzhou, China; Institute of Computing Technology, Chinese Academy of Sciences, Beijing, 100190, China
- James S Duncan: Department of Biomedical Engineering and Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, 06511, USA
- Chi Liu: Department of Biomedical Engineering and Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, 06511, USA
20
Bibb A, Schmidt K, Brink L, Pisano E, Coombs L, Apgar C, Dreyer K, Wald C. Specialty Society Support for Multicenter Research in Artificial Intelligence. Acad Radiol 2023; 30:640-643. [PMID: 36813668 DOI: 10.1016/j.acra.2023.01.010] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2022] [Revised: 01/06/2023] [Accepted: 01/08/2023] [Indexed: 02/22/2023]
Affiliation(s)
- Allen Bibb: Grandview Medical Center, ACR Data Science Institute, Birmingham, Alabama
- Laura Brink: American College of Radiology, Reston, Virginia
- E Pisano: American College of Radiology, Reston, Virginia
- Keith Dreyer: Massachusetts General Hospital, ACR Data Science Institute, Boston, Massachusetts
- Christoph Wald: Lahey Hospital and Medical Center, ACR Commission on Informatics, Boston, Massachusetts
21
Oh W, Nadkarni GN. Federated Learning in Health care Using Structured Medical Data. ADVANCES IN KIDNEY DISEASE AND HEALTH 2023; 30:4-16. [PMID: 36723280 PMCID: PMC10208416 DOI: 10.1053/j.akdh.2022.11.007] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/20/2023]
Abstract
The success of machine learning-based studies largely depends on access to a large amount of data. However, such access is typically not feasible within a single health system or hospital. Although multicenter studies are the most effective way to access a vast amount of data, sharing data outside the participating institutions involves legal, business, and technical challenges. Federated learning (FL) is a recently proposed machine learning framework for multicenter studies that tackles data-sharing issues across participating institutions. The promise of FL is simple: it facilitates multicenter studies without losing data access control and allows the construction of a global model by aggregating local models trained at participating institutions. This article reviews recently published studies that utilized FL in clinical studies with structured medical data, and discusses challenges and open questions in this setting.
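The aggregation step described here, building a global model from local models without moving patient records, is canonically done with federated averaging (FedAvg): the server averages each institution's model weights, weighted by its sample count. A minimal sketch with illustrative values:

```python
def fedavg(local_weights, sample_counts):
    """Server-side FedAvg: sample-count-weighted average of local weight vectors.

    local_weights: one weight vector per institution (all the same length);
    sample_counts: number of local training records at each institution.
    The server never sees the records themselves, only the weights.
    """
    total = sum(sample_counts)
    dim = len(local_weights[0])
    return [
        sum(w[i] * n for w, n in zip(local_weights, sample_counts)) / total
        for i in range(dim)
    ]

# Institution B trained on 3x the data of institution A, so it dominates.
global_w = fedavg([[1.0, 0.0], [3.0, 2.0]], [1, 3])  # -> [2.5, 1.5]
```

In practice this average is recomputed every communication round after each institution takes a few local training steps, but the core aggregation is just this weighted mean.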
Affiliation(s)
- Wonsuk Oh: Charles Bronfman Institute for Personalized Medicine, Icahn School of Medicine at Mount Sinai, New York, NY
- Girish N Nadkarni: Charles Bronfman Institute for Personalized Medicine, Icahn School of Medicine at Mount Sinai, New York, NY; Division of Data-Driven and Digital Medicine, Department of Medicine, Icahn School of Medicine at Mount Sinai, New York, NY; Division of Nephrology, Department of Medicine, Icahn School of Medicine at Mount Sinai, New York, NY
22
Florescu LM, Streba CT, Şerbănescu MS, Mămuleanu M, Florescu DN, Teică RV, Nica RE, Gheonea IA. Federated Learning Approach with Pre-Trained Deep Learning Models for COVID-19 Detection from Unsegmented CT images. Life (Basel) 2022; 12:958. [PMID: 35888048 PMCID: PMC9316900 DOI: 10.3390/life12070958] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2022] [Revised: 06/21/2022] [Accepted: 06/23/2022] [Indexed: 12/17/2022] Open
Abstract
(1) Background: Coronavirus disease 2019 (COVID-19) is an infectious disease caused by SARS-CoV-2. Reverse transcription polymerase chain reaction (RT-PCR) remains the current gold standard for detecting SARS-CoV-2 infections in nasopharyngeal swabs. In Romania, the first patient to have contracted COVID-19 was officially declared on 26 February 2020. (2) Methods: This study proposes a federated learning approach with pre-trained deep learning models for COVID-19 detection. Three clients were deployed locally, each with its own dataset; the goal was for the clients to collaborate to obtain a global model without sharing samples from their datasets. The algorithm we developed was connected to our internal picture archiving and communication system and, when run retrospectively, it identified chest CT changes suggestive of COVID-19 in a patient investigated in our medical imaging department on 28 January 2020. (4) Conclusions: Based on our results, we recommend using automated AI-assisted software to detect COVID-19 from lung imaging changes as an adjuvant diagnostic method to the current gold standard (RT-PCR), in order to greatly enhance the management of these patients and limit the spread of the disease, not only to the general population but also to healthcare professionals.
Affiliation(s)
- Lucian Mihai Florescu: Department of Radiology and Medical Imaging, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania
- Costin Teodor Streba: Department of Pneumology, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania
- Mircea-Sebastian Şerbănescu: Department of Medical Informatics and Biostatistics, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania
- Mădălin Mămuleanu: Department of Automatic Control and Electronics, University of Craiova, 200585 Craiova, Romania
- Dan Nicolae Florescu: Department of Gastroenterology, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania
- Rossy Vlăduţ Teică: Doctoral School, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania
- Raluca Elena Nica: Doctoral School, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania
- Ioana Andreea Gheonea: Department of Radiology and Medical Imaging, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania