1
Engelhard M, Wojdyla D, Wang H, Pencina M, Henao R. Exploring trade-offs in equitable stroke risk prediction with parity-constrained and race-free models. Artif Intell Med 2025; 164:103130. [PMID: 40253926] [DOI: 10.1016/j.artmed.2025.103130]
Abstract
A recent analysis of common stroke risk prediction models showed that performance differs between Black and White subgroups, and that applying standard machine learning methods does not reduce these disparities. There have been calls in the clinical literature to correct such disparities by removing race as a predictor (i.e., race-free models). Alternatively, a variety of machine learning methods have been proposed to constrain differences in model predictions between racial groups. In this work, we compare these approaches for equitable stroke risk prediction. We begin by proposing a discrete-time, neural network-based time-to-event model that incorporates a parity constraint designed to make predictions more similar between groups. Using harmonized data from Framingham Offspring, MESA, and ARIC studies, we develop both parity-constrained and unconstrained stroke risk prediction models, then compare their performance with race-free models in a held-out test set and a secondary validation set (REGARDS). Our evaluation includes both intra-group and inter-group performance metrics for right-censored time to event outcomes. Results illustrate a fundamental trade-off in which parity-constrained models must sacrifice intra-group calibration to improve inter-group discrimination performance, while the race-free models strike a balance between the two. Consequently, the choice of model must depend on the potential benefits and harms associated with the intended clinical use. All models as well as code implementing our approach are available in a public repository. More broadly, these results provide a roadmap for development of equitable clinical risk prediction models and illustrate both merits and limitations of a race-free approach.
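The parity-constrained, discrete-time time-to-event approach summarized above can be illustrated with a short sketch. The code below is a hedged, minimal illustration of the general idea (a neural discrete-time hazard model whose training loss adds a penalty on the gap in mean predicted risk between two groups), not the authors' published implementation; the network architecture, number of time bins, penalty form, and weight are assumptions.

```python
# Hedged sketch: a discrete-time hazard network with a demographic-parity-style
# penalty. This illustrates the general idea only, not the published model.
import torch
import torch.nn as nn

N_BINS = 10  # number of discrete time intervals (assumption)

class DiscreteTimeHazardNet(nn.Module):
    """Maps covariates to per-interval hazard probabilities."""
    def __init__(self, n_features: int, n_bins: int = N_BINS):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(),
            nn.Linear(32, n_bins),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(x))  # hazards in (0, 1)

def nll_discrete_time(hazards, event_bin, event_observed):
    """Negative log-likelihood for right-censored discrete-time data.
    hazards: (n, n_bins); event_bin: (n,) interval of event/censoring;
    event_observed: (n,) 1 if event, 0 if censored."""
    n, n_bins = hazards.shape
    t = torch.arange(n_bins).unsqueeze(0)          # (1, n_bins)
    before = (t < event_bin.unsqueeze(1)).float()  # intervals survived
    at = (t == event_bin.unsqueeze(1)).float()     # interval of event/censoring
    log_surv = (before * torch.log(1 - hazards + 1e-8)).sum(dim=1)
    log_event = (at * torch.log(hazards + 1e-8)).sum(dim=1)
    log_censor = (at * torch.log(1 - hazards + 1e-8)).sum(dim=1)
    ll = log_surv + event_observed * log_event + (1 - event_observed) * log_censor
    return -ll.mean()

def parity_penalty(hazards, group):
    """Penalize the gap in mean predicted cumulative risk between two groups."""
    risk = 1 - torch.prod(1 - hazards, dim=1)  # P(event within horizon)
    gap = risk[group == 1].mean() - risk[group == 0].mean()
    return gap.pow(2)

# Toy usage with random data (illustrative only).
torch.manual_seed(0)
x = torch.randn(200, 8)
event_bin = torch.randint(0, N_BINS, (200,))
event_observed = torch.randint(0, 2, (200,)).float()
group = torch.randint(0, 2, (200,))

model = DiscreteTimeHazardNet(n_features=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 1.0  # parity weight (assumption); lam = 0 recovers the unconstrained model
for _ in range(5):
    opt.zero_grad()
    hazards = model(x)
    loss = nll_discrete_time(hazards, event_bin, event_observed) + lam * parity_penalty(hazards, group)
    loss.backward()
    opt.step()
```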
Affiliation(s)
- Matthew Engelhard
- Department of Biostatistics and Bioinformatics, Duke University School of Medicine, United States of America; Duke AI Health, United States of America.
- Daniel Wojdyla
- Duke Clinical Research Institute, United States of America
- Haoyuan Wang
- Department of Biostatistics and Bioinformatics, Duke University School of Medicine, United States of America
- Michael Pencina
- Department of Biostatistics and Bioinformatics, Duke University School of Medicine, United States of America; Duke AI Health, United States of America; Duke Clinical Research Institute, United States of America
- Ricardo Henao
- Department of Biostatistics and Bioinformatics, Duke University School of Medicine, United States of America; Duke AI Health, United States of America; Duke Clinical Research Institute, United States of America
2
An K. Navigating the Future: Opportunities and Challenges of Generative AI in Nursing Research. Res Nurs Health 2025; 48:299-300. [PMID: 40317754] [DOI: 10.1002/nur.22464]
Affiliation(s)
- Kyungeh An
- Georgia State University, Atlanta, Georgia, USA
3
Xie SJ, Spice C, Wedgeworth P, Langevin R, Lybarger K, Singh AP, Wood BR, Klein JW, Hsieh G, Duber HC, Hartzler AL. Patient and clinician acceptability of automated extraction of social drivers of health from clinical notes in primary care. J Am Med Inform Assoc 2025; 32:855-865. [PMID: 40085013] [PMCID: PMC12012364] [DOI: 10.1093/jamia/ocaf046]
Abstract
OBJECTIVE Artificial Intelligence (AI)-based approaches for extracting Social Drivers of Health (SDoH) from clinical notes offer healthcare systems an efficient way to identify patients' social needs, yet we know little about the acceptability of this approach to patients and clinicians. We investigated patient and clinician acceptability through interviews. MATERIALS AND METHODS We interviewed primary care patients experiencing social needs (n = 19) and clinicians (n = 14) about their acceptability of "SDoH autosuggest," an AI-based approach for extracting SDoH from clinical notes. We presented storyboards depicting the approach and asked participants to rate their acceptability and discuss their rationale. RESULTS Participants rated SDoH autosuggest moderately acceptable (mean = 3.9/5 patients; mean = 3.6/5 clinicians). Patients' ratings varied across domains, with substance use rated most and employment rated least acceptable. Both groups raised concern about information integrity, actionability, impact on clinical interactions and relationships, and privacy. In addition, patients raised concern about transparency, autonomy, and potential harm, whereas clinicians raised concern about usability. DISCUSSION Despite reporting moderate acceptability of the envisioned approach, patients and clinicians expressed multiple concerns about AI systems that extract SDoH. Participants emphasized the need for high-quality data, non-intrusive presentation methods, and clear communication strategies regarding sensitive social needs. Findings underscore the importance of engaging patients and clinicians to mitigate unintended consequences when integrating AI approaches into care. CONCLUSION Although AI approaches like SDoH autosuggest hold promise for efficiently identifying SDoH from clinical notes, they must also account for concerns of patients and clinicians to ensure these systems are acceptable and do not undermine trust.
Affiliation(s)
- Serena Jinchen Xie
- Biomedical Informatics and Medical Education, School of Medicine, University of Washington, Seattle, WA 98195, United States
- Carolin Spice
- Biomedical Informatics and Medical Education, School of Medicine, University of Washington, Seattle, WA 98195, United States
- Patrick Wedgeworth
- Biomedical Informatics and Medical Education, School of Medicine, University of Washington, Seattle, WA 98195, United States
- Raina Langevin
- Biomedical Informatics and Medical Education, School of Medicine, University of Washington, Seattle, WA 98195, United States
- Kevin Lybarger
- Information Sciences and Technology, George Mason University, Fairfax, VA 22030, United States
- Angad Preet Singh
- Department of Medicine, University of Washington, Seattle, WA 98195, United States
- Brian R Wood
- Department of Medicine, University of Washington, Seattle, WA 98195, United States
- Jared W Klein
- Department of Medicine, University of Washington, School of Medicine, Seattle, WA 98195, United States
- Gary Hsieh
- Human Centered Design & Engineering, University of Washington, Seattle, WA 98195, United States
- Herbert C Duber
- Washington State Department of Health, Olympia, WA 98501, United States
- Department of Emergency Medicine, University of Washington, Seattle, WA 98195, United States
- Andrea L Hartzler
- Biomedical Informatics and Medical Education, School of Medicine, University of Washington, Seattle, WA 98195, United States
4
Kopalli SR, Shukla M, Jayaprakash B, Kundlas M, Srivastava A, Jagtap J, Gulati M, Chigurupati S, Ibrahim E, Khandige PS, Garcia DS, Koppula S, Gasmi A. Artificial intelligence in stroke rehabilitation: From acute care to long-term recovery. Neuroscience 2025; 572:214-231. [PMID: 40068721] [DOI: 10.1016/j.neuroscience.2025.03.017]
Abstract
Stroke is a leading cause of disability worldwide, driving the need for advanced rehabilitation strategies. The integration of Artificial Intelligence (AI) into stroke rehabilitation presents significant advancements across the continuum of care, from acute diagnosis to long-term recovery. This review explores AI's role in stroke rehabilitation, highlighting its impact on early diagnosis, motor recovery, and cognitive rehabilitation. AI-driven imaging techniques, such as deep learning applied to CT and MRI scans, improve early diagnosis and identify ischemic penumbra, enabling timely, personalized interventions. AI-assisted decision support systems optimize acute stroke treatment, including thrombolysis and endovascular therapy. In motor rehabilitation, AI-powered robotics and exoskeletons provide precise, adaptive assistance, while AI-augmented Virtual and Augmented Reality environments offer immersive, tailored recovery experiences. Brain-Computer Interfaces utilize AI for neurorehabilitation through neural signal processing, supporting motor recovery. Machine learning models predict functional recovery outcomes and dynamically adjust therapy intensities. Wearable technologies equipped with AI enable continuous monitoring and real-time feedback, facilitating home-based rehabilitation. AI-driven tele-rehabilitation platforms overcome geographic barriers by enabling remote assessment and intervention. The review also addresses the ethical, legal, and regulatory challenges associated with AI implementation, including data privacy and technical integration. Future research directions emphasize the transformative potential of AI in stroke rehabilitation, with case studies and clinical trials illustrating the practical benefits and efficacy of AI technologies in improving patient recovery.
Affiliation(s)
- Spandana Rajendra Kopalli
- Department of Bioscience and Biotechnology, Sejong University, Gwangjin-gu, Seoul 05006, Republic of Korea.
- Madhu Shukla
- Marwadi University Research Center, Department of Computer Engineering, Faculty of Engineering & Technology, Marwadi University, Rajkot 360003, Gujarat, India
- B Jayaprakash
- Department of Computer Science & IT, School of Sciences, JAIN (Deemed to be University), Bangalore, Karnataka, India
- Mayank Kundlas
- Centre for Research Impact & Outcome, Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140401, Punjab, India
- Ankur Srivastava
- Department of CSE, Chandigarh Engineering College, Chandigarh Group of Colleges-Jhanjeri, Mohali 140307, Punjab, India
- Jayant Jagtap
- Department of Computing Science and Artificial Intelligence, NIMS Institute of Engineering and Technology, NIMS University Rajasthan, Jaipur, India
- Monica Gulati
- School of Pharmaceutical Sciences, Lovely Professional University, Phagwara, Punjab 1444411, India; ARCCIM, Faculty of Health, University of Technology Sydney, Ultimo, NSW 20227, Australia
- Sridevi Chigurupati
- Department of Medicinal Chemistry and Pharmacognosy, College of Pharmacy, Qassim University, Buraydah 51452, Saudi Arabia
- Eiman Ibrahim
- Department of Pharmacy Practice, College of Pharmacy, Qassim University, Buraydah 51452, Saudi Arabia
- Prasanna Shama Khandige
- NITTE (Deemed to be University) NGSM Institute of Pharmaceutical Sciences, Mangaluru, Karnataka, India
- Dario Salguero Garcia
- Department of Developmental and Educational Psychology, University of Almeria, Almeria, Spain
- Sushruta Koppula
- College of Biomedical and Health Sciences, Konkuk University, Chungju-Si, Chungcheongbuk Do 27478, Republic of Korea
- Amin Gasmi
- International Institute of Nutrition and Micronutrition Sciences, Saint-Etienne, France; Société Francophone de Nutrithérapie et de Nutrigénétique Appliquée, Villeurbanne, France
5
Bottacin WE, de Souza TT, Melchiors AC, Reis WCT. Explanation and elaboration of MedinAI: guidelines for reporting artificial intelligence studies in medicines, pharmacotherapy, and pharmaceutical services. Int J Clin Pharm 2025. [PMID: 40249526] [DOI: 10.1007/s11096-025-01906-2]
Abstract
The increasing adoption of artificial intelligence (AI) in medicines, pharmacotherapy, and pharmaceutical services necessitates clear guidance on reporting standards. While the MedinAI Statement (Bottacin in Int J Clin Pharm, https://doi.org/10.1007/s11096-025-01905-3, 2025) provides core guidelines for reporting AI studies in these fields, detailed explanations and practical examples are crucial for optimal implementation. This companion document was developed to offer comprehensive guidance and real-world examples for each guideline item. The document elaborates on all 14 items and 78 sub-items across four domains: core, ethical considerations in medication and pharmacotherapy, medicines as products, and services related to medicines and pharmacotherapy. Through clear, actionable guidance and diverse examples, this document enhances MedinAI's utility, enabling researchers and stakeholders to improve the quality and transparency of AI research reporting across various contexts, study designs, and development stages.
Affiliation(s)
- Wallace Entringer Bottacin
- Postgraduate Program in Pharmaceutical Services and Policies, Federal University of Paraná, Avenida Prefeito Lothário Meissner, 632 - Jardim Botânico, Curitiba, PR, 80210-170, Brazil.
- Thais Teles de Souza
- Department of Pharmaceutical Sciences, Federal University of Paraíba, João Pessoa, PB, Brazil
- Ana Carolina Melchiors
- Postgraduate Program in Pharmaceutical Services and Policies, Federal University of Paraná, Avenida Prefeito Lothário Meissner, 632 - Jardim Botânico, Curitiba, PR, 80210-170, Brazil
6
Tan JM, Khanna AK. Innovations in Perioperative Medicine: Technologies to Improve Outcomes. Int Anesthesiol Clin 2025. [PMID: 40231372] [DOI: 10.1097/aia.0000000000000479]
Affiliation(s)
- Jonathan M Tan
- Department of Anesthesiology Critical Care Medicine, Children's Hospital Los Angeles
- Department of Anesthesiology, Keck School of Medicine and the Spatial Sciences Institute at the University of Southern California, Los Angeles, California
- Ashish K Khanna
- Department of Anesthesiology, Section on Critical Care Medicine, Wake Forest School of Medicine, Atrium Health Wake Forest Baptist Medical Center, Perioperative Outcomes and Informatics Collaborative, Winston-Salem, North Carolina
- Outcomes Research Consortium, Houston, Texas
7
Huhulea EN, Huang L, Eng S, Sumawi B, Huang A, Aifuwa E, Hirani R, Tiwari RK, Etienne M. Artificial Intelligence Advancements in Oncology: A Review of Current Trends and Future Directions. Biomedicines 2025; 13:951. [PMID: 40299653] [PMCID: PMC12025054] [DOI: 10.3390/biomedicines13040951]
Abstract
Cancer remains one of the leading causes of mortality worldwide, driving the need for innovative approaches in research and treatment. Artificial intelligence (AI) has emerged as a powerful tool in oncology, with the potential to revolutionize cancer diagnosis, treatment, and management. This paper reviews recent advancements in AI applications within cancer research, focusing on early detection through computer-aided diagnosis, personalized treatment strategies, and drug discovery. We survey AI-enhanced diagnostic applications and explore AI techniques such as deep learning, as well as the integration of AI with nanomedicine and immunotherapy for cancer care. Comparative analyses of AI-based models versus traditional diagnostic methods are presented, highlighting AI's superior potential. Additionally, we discuss the importance of integrating social determinants of health to optimize cancer care. Despite these advancements, challenges such as data quality, algorithmic biases, and clinical validation remain, limiting widespread adoption. The review concludes with a discussion of the future directions of AI in oncology, emphasizing its potential to reshape cancer care by enhancing diagnosis, personalizing treatments and targeted therapies, and ultimately improving patient outcomes.
Affiliation(s)
- Ellen N. Huhulea
- School of Medicine, New York Medical College, Valhalla, NY 10595, USA (R.H.)
- Lillian Huang
- School of Medicine, New York Medical College, Valhalla, NY 10595, USA (R.H.)
- Shirley Eng
- School of Medicine, New York Medical College, Valhalla, NY 10595, USA (R.H.)
- Bushra Sumawi
- Barshop Institute, The University of Texas Health Science Center, San Antonio, TX 78229, USA
- Audrey Huang
- School of Medicine, New York Medical College, Valhalla, NY 10595, USA (R.H.)
- Esewi Aifuwa
- School of Medicine, New York Medical College, Valhalla, NY 10595, USA (R.H.)
- Rahim Hirani
- School of Medicine, New York Medical College, Valhalla, NY 10595, USA (R.H.)
- Graduate School of Biomedical Sciences, New York Medical College, Valhalla, NY 10595, USA
- Raj K. Tiwari
- School of Medicine, New York Medical College, Valhalla, NY 10595, USA (R.H.)
- Graduate School of Biomedical Sciences, New York Medical College, Valhalla, NY 10595, USA
- Mill Etienne
- School of Medicine, New York Medical College, Valhalla, NY 10595, USA (R.H.)
- Department of Neurology, New York Medical College, Valhalla, NY 10595, USA
8
Bombiński P, Szatkowski P, Sobieski B, Kwieciński T, Płotka S, Adamek M, Banasiuk M, Furmanek MI, Biecek P. Underestimation of lung regions on chest X-ray segmentation masks assessed by comparison with total lung volume evaluated on computed tomography. Radiography (Lond) 2025; 31:102930. [PMID: 40174327] [DOI: 10.1016/j.radi.2025.102930]
Abstract
INTRODUCTION The lung regions on chest X-ray segmentation masks created according to the current gold standard method for AI-driven applications are underestimated. This can be evaluated by comparison with computed tomography. METHODS This retrospective study included data from non-contrast chest low-dose CT examinations of 55 individuals without pulmonary pathology. Synthetic X-ray images were generated by projecting a 3D CT examination onto a 2D image plane. Two experienced radiologists manually created two types of lung masks: 3D lung masks from CT examinations (ground truth for further calculations) and 2D lung masks from synthetic X-ray images (according to the current gold standard method: following the contours of other anatomical structures). Overlapping and non-overlapping lung regions covered by both types of masks were analyzed. Volume of the overlapping regions was compared with total lung volume, and volume fractions of non-overlapping lung regions in relation to the total lung volume were calculated. The performance results between the two radiologists were compared. RESULTS Significant differences were observed between lung regions covered by CT and synthetic X-ray masks. The mean volume fractions of the lung regions not covered by synthetic X-ray masks for the right lung, the left lung, and both lungs were 22.8 %, 32.9 %, and 27.3 %, respectively, for Radiologist 1 and 22.7 %, 32.9 %, and 27.3 %, respectively, for Radiologist 2. There was excellent spatial agreement between the masks created by the two radiologists. CONCLUSIONS Lung X-ray masks created according to the current gold standard method significantly underestimate lung regions and do not cover substantial portions of the lungs. IMPLICATIONS FOR PRACTICE Standard lung masks fail to encompass the whole range of the lungs and significantly restrict the field of analysis in AI-driven applications, which may lead to false conclusions and diagnoses.
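As a hedged illustration of the comparison described above (checking what fraction of CT-derived lung volume projects onto pixels outside a 2D X-ray lung mask), the sketch below assumes a simple frontal parallel projection and uses synthetic masks; it is not the study's pipeline.

```python
# Hedged sketch: estimating the lung volume fraction not covered by a 2D X-ray mask,
# by checking which CT lung voxels project onto pixels outside the 2D mask.
import numpy as np

def uncovered_volume_fraction(ct_lung_mask: np.ndarray, xray_mask: np.ndarray) -> float:
    """ct_lung_mask: boolean (z, y, x) lung segmentation from CT.
    xray_mask: boolean (y, x) lung mask drawn on the projected (synthetic) X-ray."""
    # Each voxel at (z, y, x) projects to pixel (y, x) under a simple frontal projection.
    covered = ct_lung_mask & xray_mask[np.newaxis, :, :]
    return 1.0 - covered.sum() / ct_lung_mask.sum()

# Toy example: a spherical "lung" and an undersized rectangular X-ray mask.
z, y, x = np.ogrid[:64, :64, :64]
lung = (z - 32) ** 2 + (y - 32) ** 2 + (x - 32) ** 2 < 20 ** 2
xray = np.zeros((64, 64), dtype=bool)
xray[20:45, 16:48] = True  # deliberately misses part of the projected lung
print(f"uncovered fraction: {uncovered_volume_fraction(lung, xray):.2%}")
```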
Affiliation(s)
- P Bombiński
- Department of Pediatric Radiology, Medical University of Warsaw, Pediatric Clinical Hospital, 63A Żwirki i Wigury St, Warsaw 02-091, Poland; upmedic, 2/11 Sądowa St, 20-027 Lublin, Poland.
- P Szatkowski
- 2nd Department of Clinical Radiology, Medical University of Warsaw, Central Clinical Hospital, 1A Banacha St, Warsaw 02-097, Poland.
- B Sobieski
- Faculty of Mathematics and Information Science, Warsaw University of Technology, 75 Koszykowa St, Warsaw 00-661, Poland; MI2.ai, Warsaw University of Technology, 75 Koszykowa St, Warsaw 00-661, Poland.
- T Kwieciński
- Faculty of Mathematics and Information Science, Warsaw University of Technology, 75 Koszykowa St, Warsaw 00-661, Poland; MI2.ai, Warsaw University of Technology, 75 Koszykowa St, Warsaw 00-661, Poland.
- S Płotka
- Faculty of Mathematics and Information Science, Warsaw University of Technology, 75 Koszykowa St, Warsaw 00-661, Poland; MI2.ai, Warsaw University of Technology, 75 Koszykowa St, Warsaw 00-661, Poland; Informatics Institute, University of Amsterdam, Science Park 900, 1098 XH Amsterdam, the Netherlands; Department of Biomedical Engineering and Physics, Amsterdam University Medical Center, Meibergdreef 9, 1105 AZ Amsterdam, the Netherlands.
- M Adamek
- Department of Thoracic Surgery, Medical University of Silesia, 35 Ceglana St, 40-514 Katowice, Poland; Department of Thoracic Surgery, Medical University of Gdańsk, 17 Smoluchowskiego St, 80-214 Gdańsk, Poland.
- M Banasiuk
- Department of Pediatric Gastroenterology and Nutrition, Medical University of Warsaw, Pediatric Clinical Hospital, 63A Żwirki i Wigury, Warsaw 02-091, Poland.
- M I Furmanek
- Department of Pediatric Radiology, Medical University of Warsaw, Pediatric Clinical Hospital, 63A Żwirki i Wigury St, Warsaw 02-091, Poland.
- P Biecek
- Faculty of Mathematics and Information Science, Warsaw University of Technology, 75 Koszykowa St, Warsaw 00-661, Poland; MI2.ai, Warsaw University of Technology, 75 Koszykowa St, Warsaw 00-661, Poland; Faculty of Mathematics, Informatics, and Mechanics, University of Warsaw, 1A Banacha St, Warsaw 02-097, Poland.
9
Acitores Cortina JM, Fatapour Y, Brown KL, Gisladottir U, Zietz M, Bear Don't Walk IV OJ, Peter D, Berkowitz JS, Friedrich NA, Kivelson S, Kuchi A, Liu H, Srinivasan A, Tsang KK, Tatonetti NP. Biases in Race and Ethnicity Introduced by Filtering Electronic Health Records for "Complete Data": Observational Clinical Data Analysis. JMIR Med Inform 2025; 13:e67591. [PMID: 40146917] [PMCID: PMC11967746] [DOI: 10.2196/67591]
Abstract
Background Integrated clinical databases from national biobanks have advanced the capacity for disease research. Data quality and completeness filters are used when building clinical cohorts to address limitations of data missingness. However, these filters may unintentionally introduce systemic biases when they are correlated with race and ethnicity. Objective In this study, we examined the race and ethnicity biases introduced by applying common filters to 4 clinical records databases. Specifically, we evaluated whether these filters introduce biases that disproportionately exclude minoritized groups. Methods We applied 19 commonly used data filters to electronic health record datasets from 4 geographically varied locations comprising close to 12 million patients to understand how using these filters introduces sample bias along racial and ethnic groupings. These filters covered a range of information, including demographics, medication records, visit details, and observation periods. We observed the variation in sample drop-off between self-reported ethnic and racial groups for each site as we applied each filter individually. Results Applying the observation period filter substantially reduced data availability across all races and ethnicities in all 4 datasets. However, among those examined, the availability of data in the white group remained consistently higher compared to other racial groups after applying each filter. Conversely, the Black or African American group was the most impacted by each filter on these 3 datasets: Cedars-Sinai dataset, UK Biobank, and Columbia University dataset. Among the 4 distinct datasets, only applying the filters to the All of Us dataset resulted in minimal deviation from the baseline, with most racial and ethnic groups following a similar pattern. Conclusions Our findings underscore the importance of using only necessary filters, as they might disproportionally affect data availability of minoritized racial and ethnic populations. Researchers must consider these unintentional biases when performing data-driven research and explore techniques to minimize the impact of these filters, such as probabilistic methods or adjusted cohort selection methods. Additionally, we recommend disclosing sample sizes for racial and ethnic groups both before and after data filters are applied to aid the reader in understanding the generalizability of the results. Future work should focus on exploring the effects of filters on downstream analyses.
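A minimal, hedged sketch of the kind of subgroup drop-off audit described above: each completeness filter is applied independently to a patient table, and the fraction of each self-reported race and ethnicity group that survives is reported. Column names, filters, and data are illustrative assumptions, not the authors' code.

```python
# Hedged sketch: auditing how data-completeness filters change subgroup composition.
# Column names ("race_ethnicity", "birth_year", ...) and filters are illustrative.
import pandas as pd

def retention_by_group(df: pd.DataFrame, filters: dict, group_col: str = "race_ethnicity") -> pd.DataFrame:
    """Apply each filter independently and report per-group retention fractions."""
    baseline = df[group_col].value_counts()
    rows = {}
    for name, mask_fn in filters.items():
        kept = df[mask_fn(df)][group_col].value_counts()
        rows[name] = (kept / baseline).fillna(0.0)
    return pd.DataFrame(rows)  # rows: groups, columns: filters

# Toy example (synthetic data, illustrative only).
df = pd.DataFrame({
    "race_ethnicity": ["White", "Black", "Hispanic", "Asian"] * 250,
    "birth_year": [1950, 1960, 1970, 1980] * 250,
    "n_visits": [5, 1, 3, 0] * 250,
    "observation_days": [900, 100, 400, 30] * 250,
})
filters = {
    "has_birth_year": lambda d: d["birth_year"].notna(),
    ">=1 visit": lambda d: d["n_visits"] >= 1,
    ">=365 days observation": lambda d: d["observation_days"] >= 365,
}
print(retention_by_group(df, filters).round(2))
```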
Affiliation(s)
- Jose Miguel Acitores Cortina
- Department of Computational Biomedicine, Cedars-Sinai Medical Center, 700 North San Vicente Boulevard, Pacific Design Center Suite G540, Los Angeles, CA, 90069, United States, 1 424 315 1031
- Cedars-Sinai Cancer, Cedars-Sinai Medical Center, Los Angeles, CA, United States
- Yasaman Fatapour
- Department of Computational Biomedicine, Cedars-Sinai Medical Center, 700 North San Vicente Boulevard, Pacific Design Center Suite G540, Los Angeles, CA, 90069, United States, 1 424 315 1031
- Cedars-Sinai Cancer, Cedars-Sinai Medical Center, Los Angeles, CA, United States
- Kathleen LaRow Brown
- Department of Systems Biology, Columbia University, New York, NY, United States
- Department of Biomedical Informatics, Columbia University, New York, NY, United States
- Undina Gisladottir
- Department of Systems Biology, Columbia University, New York, NY, United States
- Department of Biomedical Informatics, Columbia University, New York, NY, United States
- Michael Zietz
- Department of Computational Biomedicine, Cedars-Sinai Medical Center, 700 North San Vicente Boulevard, Pacific Design Center Suite G540, Los Angeles, CA, 90069, United States, 1 424 315 1031
- Danner Peter
- Department of Biomedical Informatics and Medical Education, University of Washington, Seattle, WA, United States
- Jacob S Berkowitz
- Department of Computational Biomedicine, Cedars-Sinai Medical Center, 700 North San Vicente Boulevard, Pacific Design Center Suite G540, Los Angeles, CA, 90069, United States, 1 424 315 1031
- Cedars-Sinai Cancer, Cedars-Sinai Medical Center, Los Angeles, CA, United States
- Nadine A Friedrich
- Department of Computational Biomedicine, Cedars-Sinai Medical Center, 700 North San Vicente Boulevard, Pacific Design Center Suite G540, Los Angeles, CA, 90069, United States, 1 424 315 1031
- Cedars-Sinai Cancer, Cedars-Sinai Medical Center, Los Angeles, CA, United States
- Sophia Kivelson
- Department of Computational Biomedicine, Cedars-Sinai Medical Center, 700 North San Vicente Boulevard, Pacific Design Center Suite G540, Los Angeles, CA, 90069, United States, 1 424 315 1031
- Cedars-Sinai Cancer, Cedars-Sinai Medical Center, Los Angeles, CA, United States
- Aditi Kuchi
- Department of Computational Biomedicine, Cedars-Sinai Medical Center, 700 North San Vicente Boulevard, Pacific Design Center Suite G540, Los Angeles, CA, 90069, United States, 1 424 315 1031
- Cedars-Sinai Cancer, Cedars-Sinai Medical Center, Los Angeles, CA, United States
- Hongyu Liu
- Department of Computational Biomedicine, Cedars-Sinai Medical Center, 700 North San Vicente Boulevard, Pacific Design Center Suite G540, Los Angeles, CA, 90069, United States, 1 424 315 1031
- Cedars-Sinai Cancer, Cedars-Sinai Medical Center, Los Angeles, CA, United States
- Apoorva Srinivasan
- Department of Computational Biomedicine, Cedars-Sinai Medical Center, 700 North San Vicente Boulevard, Pacific Design Center Suite G540, Los Angeles, CA, 90069, United States, 1 424 315 1031
- Cedars-Sinai Cancer, Cedars-Sinai Medical Center, Los Angeles, CA, United States
- Kevin K Tsang
- Department of Computational Biomedicine, Cedars-Sinai Medical Center, 700 North San Vicente Boulevard, Pacific Design Center Suite G540, Los Angeles, CA, 90069, United States, 1 424 315 1031
- Cedars-Sinai Cancer, Cedars-Sinai Medical Center, Los Angeles, CA, United States
- Nicholas P Tatonetti
- Department of Computational Biomedicine, Cedars-Sinai Medical Center, 700 North San Vicente Boulevard, Pacific Design Center Suite G540, Los Angeles, CA, 90069, United States, 1 424 315 1031
- Cedars-Sinai Cancer, Cedars-Sinai Medical Center, Los Angeles, CA, United States
- Department of Systems Biology, Columbia University, New York, NY, United States
- Department of Biomedical Informatics, Columbia University, New York, NY, United States
10
Mairi A, Hamza L, Touati A. Artificial intelligence and its application in clinical microbiology. Expert Rev Anti Infect Ther 2025:1-22. [PMID: 40131188] [DOI: 10.1080/14787210.2025.2484284]
Abstract
INTRODUCTION Traditional microbiological diagnostics face challenges in pathogen identification speed and antimicrobial resistance (AMR) evaluation. Artificial intelligence (AI) offers transformative solutions, necessitating a comprehensive review of its applications, advancements, and integration challenges in clinical microbiology. AREAS COVERED This review examines AI-driven methodologies, including machine learning (ML), deep learning (DL), and convolutional neural networks (CNNs), for enhancing pathogen detection, AMR prediction, and diagnostic imaging. Applications in virology (e.g. COVID-19 RT-PCR optimization), parasitology (e.g. malaria detection), and bacteriology (e.g. automated colony counting) are analyzed. A literature search was conducted using PubMed, Scopus, and Web of Science (2018-2024), prioritizing peer-reviewed studies on AI's diagnostic accuracy, workflow efficiency, and clinical validation. EXPERT OPINION AI significantly improves diagnostic precision and operational efficiency but requires robust validation to address data heterogeneity, model interpretability, and ethical concerns. Future success hinges on interdisciplinary collaboration to develop standardized, equitable AI tools tailored for global healthcare settings. Advancing explainable AI and federated learning frameworks will be critical for bridging current implementation gaps and maximizing AI's potential in combating infectious diseases.
Affiliation(s)
- Assia Mairi
- Université de Bejaia, Laboratoire d'Ecologie Microbienne, Bejaia, Algeria
- Lamia Hamza
- Université de Bejaia, Département d'informatique, Laboratoire d'Informatique MEDicale (LIMED), Bejaia, Algeria
- Abdelaziz Touati
- Université de Bejaia, Laboratoire d'Ecologie Microbienne, Bejaia, Algeria
11
Hasanzadeh F, Josephson CB, Waters G, Adedinsewo D, Azizi Z, White JA. Bias recognition and mitigation strategies in artificial intelligence healthcare applications. NPJ Digit Med 2025; 8:154. [PMID: 40069303] [PMCID: PMC11897215] [DOI: 10.1038/s41746-025-01503-7]
Abstract
Artificial intelligence (AI) is delivering value across all aspects of clinical practice. However, bias may exacerbate healthcare disparities. This review examines the origins of bias in healthcare AI, strategies for mitigation, and responsibilities of relevant stakeholders towards achieving fair and equitable use. We highlight the importance of systematically identifying bias and engaging relevant mitigation activities throughout the AI model lifecycle, from model conception through to deployment and longitudinal surveillance.
Affiliation(s)
- Fereshteh Hasanzadeh
- Libin Cardiovascular Institute, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada
- Colin B Josephson
- Department of Medicine, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada
- Gabriella Waters
- Morgan State University, Center for Equitable AI & Machine Learning Systems, Baltimore, MD, USA
- Zahra Azizi
- Department of Cardiovascular Medicine, Stanford University, Stanford, CA, USA
- James A White
- Libin Cardiovascular Institute, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada.
- Department of Medicine, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada.
12
He R, Sarwal V, Qiu X, Zhuang Y, Zhang L, Liu Y, Chiang J. Generative AI Models in Time-Varying Biomedical Data: Scoping Review. J Med Internet Res 2025; 27:e59792. [PMID: 40063929] [PMCID: PMC11933772] [DOI: 10.2196/59792]
Abstract
BACKGROUND Trajectory modeling is a long-standing challenge in the application of computational methods to health care. In the age of big data, traditional statistical and machine learning methods do not achieve satisfactory results as they often fail to capture the complex underlying distributions of multimodal health data and long-term dependencies throughout medical histories. Recent advances in generative artificial intelligence (AI) have provided powerful tools to represent complex distributions and patterns with minimal underlying assumptions, with major impact in fields such as finance and environmental sciences, prompting researchers to apply these methods for disease modeling in health care. OBJECTIVE While AI methods have proven powerful, their application in clinical practice remains limited due to their highly complex nature. The proliferation of AI algorithms also poses a significant challenge for nondevelopers to track and incorporate these advances into clinical research and application. In this paper, we introduce basic concepts in generative AI and discuss current algorithms and how they can be applied to health care for practitioners with little background in computer science. METHODS We surveyed peer-reviewed papers on generative AI models with specific applications to time-series health data. Our search included single- and multimodal generative AI models that operated over structured and unstructured data, physiological waveforms, medical imaging, and multi-omics data. We introduce current generative AI methods, review their applications, and discuss their limitations and future directions in each data modality. RESULTS We followed the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines and reviewed 155 articles on generative AI applications to time-series health care data across modalities. Furthermore, we offer a systematic framework for clinicians to easily identify suitable AI methods for their data and task at hand. CONCLUSIONS We reviewed and critiqued existing applications of generative AI to time-series health data with the aim of bridging the gap between computational methods and clinical application. We also identified the shortcomings of existing approaches and highlighted recent advances in generative AI that represent promising directions for health care modeling.
Affiliation(s)
- Rosemary He
- Department of Computer Science, University of California, Los Angeles, Los Angeles, CA, United States
- Department of Computational Medicine, University of California, Los Angeles, Los Angeles, CA, United States
- Varuni Sarwal
- Department of Computer Science, University of California, Los Angeles, Los Angeles, CA, United States
- Department of Computational Medicine, University of California, Los Angeles, Los Angeles, CA, United States
- Xinru Qiu
- Division of Biomedical Sciences, School of Medicine, University of California Riverside, Riverside, CA, United States
- Yongwen Zhuang
- Department of Biostatistics, University of Michigan, Ann Arbor, MI, United States
- Le Zhang
- Institute for Integrative Genome Biology, University of California Riverside, Riverside, CA, United States
- Yue Liu
- Institute for Cellular and Molecular Biology, University of Texas at Austin, Austin, TX, United States
- Jeffrey Chiang
- Department of Computational Medicine, University of California, Los Angeles, Los Angeles, CA, United States
- Department of Neurosurgery, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, United States
13
Modise LM, Alborzi Avanaki M, Ameen S, Celi LA, Chen VXY, Cordes A, Elmore M, Fiske A, Gallifant J, Hayes M, Marcelo A, Matos J, Nakayama L, Ozoani E, Silverman BC, Comeau DS. Introducing the Team Card: Enhancing governance for medical Artificial Intelligence (AI) systems in the age of complexity. PLOS Digit Health 2025; 4:e0000495. [PMID: 40036250] [DOI: 10.1371/journal.pdig.0000495]
Abstract
This paper introduces the Team Card (TC) as a protocol to address harmful biases in the development of clinical artificial intelligence (AI) systems by emphasizing the often-overlooked role of researchers' positionality. While harmful bias in medical AI, particularly in Clinical Decision Support (CDS) tools, is frequently attributed to issues of data quality, this limited framing neglects how researchers' worldviews-shaped by their training, backgrounds, and experiences-can influence AI design and deployment. These unexamined subjectivities can create epistemic limitations, amplifying biases and increasing the risk of inequitable applications in clinical settings. The TC emphasizes reflexivity-critical self-reflection-as an ethical strategy to identify and address biases stemming from the subjectivity of research teams. By systematically documenting team composition, positionality, and the steps taken to monitor and address unconscious bias, TCs establish a framework for assessing how diversity within teams impacts AI development. Studies across business, science, and organizational contexts demonstrate that diversity improves outcomes, including innovation, decision-making quality, and overall performance. However, epistemic diversity-diverse ways of thinking and problem-solving-must be actively cultivated through intentional, collaborative processes to mitigate bias effectively. By embedding epistemic diversity into research practices, TCs may enhance model performance, improve fairness and offer an empirical basis for evaluating how diversity influences bias mitigation efforts over time. This represents a critical step toward developing inclusive, ethical, and effective AI systems in clinical care. A publicly available prototype presenting our TC is accessible at https://www.teamcard.io/team/demo.
Affiliation(s)
- Lesedi Mamodise Modise
- Center for Bioethics, Harvard Medical School, Boston, Massachusetts, United States of America
- Mahsa Alborzi Avanaki
- Department of Radiology, Beth Israel Deaconess Medical Center, Boston, Massachusetts, United States of America
- Saleem Ameen
- Department of Biomedical Informatics, Harvard Medical School, Harvard University, Boston, Massachusetts, United States of America
- Tasmanian School of Medicine, College of Health and Medicine, University of Tasmania, Hobart, Tasmania, Australia
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Leo A Celi
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Division of Pulmonary, Critical Care, and Sleep Medicine, Beth Israel Deaconess Medical Center, Boston, Massachusetts, United States of America
- Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, Massachusetts, United States of America
- Victor Xin Yuan Chen
- Center for Bioethics, Harvard Medical School, Boston, Massachusetts, United States of America
- Faculty of Medicine, The Chinese University of Hong Kong, New Territories, Hong Kong SAR
- Ashley Cordes
- Indigenous Media in Environmental Studies Program and the Department of Data Science, University of Oregon, Eugene, Oregon, United States of America
- Matthew Elmore
- Duke Health, AI Evaluation and Governance, Duke University, Durham, North Carolina, United States of America
- Amelia Fiske
- Department of Preclinical Medicine, Institute of History and Ethics in Medicine, TUM School of Medicine and Health, Technical University of Munich, Bavaria, Germany
- Jack Gallifant
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Department of Critical Care, Guy's and St. Thomas' NHS Trust, London, United Kingdom
- Megan Hayes
- Department of Environmental Studies, University of Oregon, Eugene, Oregon, United States of America
- Alvin Marcelo
- Medical Informatics Unit, College of Medicine, University of the Philippines Manila, Philippines
- Joao Matos
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Faculty of Engineering, University of Porto, Portugal
- Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal
- Luis Nakayama
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Department of Ophthalmology, Sao Paulo Federal University, Sao Paulo, Brazil
- Ezinwanne Ozoani
- Machine Learning and Ethics Research Engineer, Innovation n Ethics, Dublin, Ireland
- Benjamin C Silverman
- Center for Bioethics, Harvard Medical School, Boston, Massachusetts, United States of America
- Department of Human Research Affairs, Mass General Brigham, Somerville, Massachusetts, United States of America
- Institute for Technology in Psychiatry, McLean Hospital, Belmont, Massachusetts, United States of America
- Donnella S Comeau
- Department of Radiology, Beth Israel Deaconess Medical Center, Boston, Massachusetts, United States of America
- Department of Human Research Affairs, Mass General Brigham, Somerville, Massachusetts, United States of America
14
Wang H, Sambamoorthi N, Hoot N, Bryant D, Sambamoorthi U. Evaluating fairness of machine learning prediction of prolonged wait times in Emergency Department with Interpretable eXtreme gradient boosting. PLOS Digit Health 2025; 4:e0000751. [PMID: 40111994] [PMCID: PMC11925291] [DOI: 10.1371/journal.pdig.0000751]
Abstract
It is essential to evaluate performance and assess quality before applying artificial intelligence (AI) and machine learning (ML) models to clinical practice. This study utilized ML to predict patient wait times in the Emergency Department (ED), determine model performance accuracies, and conduct fairness evaluations to further assess ethnic disparities in using ML for wait time prediction among different patient populations in the ED. This retrospective observational study included adult patients (age ≥18 years) in the ED (n=173,856 visits) who were assigned an Emergency Severity Index (ESI) level of 3 at triage. Prolonged wait time was defined as waiting time ≥30 minutes. We employed extreme gradient boosting (XGBoost) for predicting prolonged wait times. Model performance was assessed with accuracy, recall, precision, F1 score, and false negative rate (FNR). To perform the global and local interpretation of feature importance, we utilized Shapley additive explanations (SHAP) to interpret the output from the XGBoost model. Fairness in ML models was evaluated across sensitive attributes (sex, race and ethnicity, and insurance status) at both subgroup and individual levels. We found that nearly half (48.43%, 84,195) of ED patient visits demonstrated prolonged ED wait times. The XGBoost model exhibited moderate accuracy (AUROC = 0.81). When fairness was evaluated with FNRs, unfairness existed across different sensitive attributes (male vs. female, Hispanic vs. Non-Hispanic White, and patients with insurance vs. without insurance). The predicted FNRs were lower among females, Hispanics, and patients without insurance compared to their counterparts. Therefore, the XGBoost model demonstrated acceptable performance in predicting prolonged wait times in ED visits. However, disparities arose in predictions for patients of different sex, race and ethnicity, and insurance status. To enhance the utility of ML model predictions in clinical practice, conducting performance assessments and fairness evaluations is crucial.
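The subgroup fairness check described above, comparing false negative rates (FNRs) across sensitive attributes, can be sketched as follows. This is a hedged illustration with synthetic data and an assumed 0.5 decision threshold; the scores could come from any classifier (for example, a gradient-boosted model), and the code is not the study's implementation.

```python
# Hedged sketch: subgroup fairness check via false negative rates (FNR),
# as described in the abstract above. Thresholds, columns, and data are illustrative.
import numpy as np
import pandas as pd

def fnr_by_group(y_true: np.ndarray, y_pred: np.ndarray, groups: pd.Series) -> pd.Series:
    """FNR = FN / (FN + TP), computed separately within each subgroup."""
    out = {}
    for g in groups.unique():
        idx = (groups == g).to_numpy()
        positives = (y_true[idx] == 1)
        if positives.sum() == 0:
            out[g] = float("nan")
            continue
        fn = ((y_true[idx] == 1) & (y_pred[idx] == 0)).sum()
        out[g] = fn / positives.sum()
    return pd.Series(out, name="FNR")

# Toy usage: y_score could come from any classifier (e.g., a gradient-boosted model).
rng = np.random.default_rng(0)
n = 1000
y_true = rng.integers(0, 2, n)
y_score = np.clip(0.5 * y_true + rng.normal(0.3, 0.25, n), 0, 1)
y_pred = (y_score >= 0.5).astype(int)
groups = pd.Series(rng.choice(["female", "male"], n))
print(fnr_by_group(y_true, y_pred, groups))
```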
Affiliation(s)
- Hao Wang
- Department of Emergency Medicine, JPS Health Network, Fort Worth, Texas, United States of America
- Nethra Sambamoorthi
- Senior biostatistician, CRM Portals LLC, Fort Worth, Texas, United States of America
- Nathan Hoot
- Department of Emergency Medicine, JPS Health Network, Fort Worth, Texas, United States of America
- David Bryant
- Department of Emergency Medicine, JPS Health Network, Fort Worth, Texas, United States of America
- Usha Sambamoorthi
- College of Pharmacy, University of North Texas Health Science Center, Fort Worth, Texas, United States of America
15
Arslan B, Nuhoglu C, Satici MO, Altinbilek E. Evaluating LLM-based generative AI tools in emergency triage: A comparative study of ChatGPT Plus, Copilot Pro, and triage nurses. Am J Emerg Med 2025; 89:174-181. [PMID: 39731895] [DOI: 10.1016/j.ajem.2024.12.024]
Abstract
BACKGROUND The number of emergency department (ED) visits has been on steady increase globally. Artificial Intelligence (AI) technologies, including Large Language Model (LLMs)-based generative AI models, have shown promise in improving triage accuracy. This study evaluates the performance of ChatGPT and Copilot in triage at a high-volume urban hospital, hypothesizing that these tools can match trained physicians' accuracy and reduce human bias amidst ED crowding challenges. METHODS This single-center, prospective observational study was conducted in an urban ED over one week. Adult patients were enrolled through random 24-h intervals. Exclusions included minors, trauma cases, and incomplete data. Triage nurses assessed patients while an emergency medicine (EM) physician documented clinical vignettes and assigned emergency severity index (ESI) levels. These vignettes were then introduced to ChatGPT and Copilot for comparison with the triage nurse's decision. RESULTS The overall triage accuracy was 65.2 % for nurses, 66.5 % for ChatGPT, and 61.8 % for Copilot, with no significant difference (p = 0.000). Moderate agreement was observed between the EM physician and ChatGPT, triage nurses, and Copilot (Cohen's Kappa = 0.537, 0.477, and 0.472, respectively). In recognizing high-acuity patients, ChatGPT and Copilot outperformed triage nurses (87.8 % and 85.7 % versus 32.7 %, respectively). Compared to ChatGPT and Copilot, nurses significantly under-triaged patients (p < 0.05). The analysis of predictive performance for ChatGPT, Copilot, and triage nurses demonstrated varying discrimination abilities across ESI levels, all of which were statistically significant (p < 0.05). ChatGPT and Copilot exhibited consistent accuracy across age, gender, and admission time, whereas triage nurses were more likely to mistriage patients under 45 years old. CONCLUSION ChatGPT and Copilot outperform traditional nurse triage in identifying high-acuity patients, but real-time ED capacity data is crucial to prevent overcrowding and ensure high-quality of emergency care.
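As a hedged illustration of the agreement analysis described above, the sketch below computes Cohen's kappa between a reference clinician's ESI assignments and a second rater's, using synthetic labels on an assumed 5-level ESI scale; it is not the study's code.

```python
# Hedged sketch: quantifying triage agreement with Cohen's kappa, as in the study
# design above. The ESI labels here are synthetic and purely illustrative.
from sklearn.metrics import cohen_kappa_score
import numpy as np

rng = np.random.default_rng(42)
physician_esi = rng.integers(1, 6, 200)               # reference ESI levels 1-5
# Simulated rater that agrees about 60% of the time, otherwise drifts by one level.
noise = rng.choice([0, 0, 0, -1, 1], 200)
rater_esi = np.clip(physician_esi + noise, 1, 5)

kappa = cohen_kappa_score(physician_esi, rater_esi)
under_triage = np.mean(rater_esi > physician_esi)     # higher ESI number = lower acuity
print(f"Cohen's kappa: {kappa:.3f}, under-triage rate: {under_triage:.2%}")
```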
Affiliation(s)
- B Arslan
- Department of Emergency Medicine, Sisli Hamidiye Etfal Training and Research Hospital, Istanbul, Turkey.
- C Nuhoglu
- Department of Emergency Medicine, Sisli Hamidiye Etfal Training and Research Hospital, Istanbul, Turkey
- M O Satici
- Department of Emergency Medicine, Sisli Hamidiye Etfal Training and Research Hospital, Istanbul, Turkey
- E Altinbilek
- Department of Emergency Medicine, Sisli Hamidiye Etfal Training and Research Hospital, Istanbul, Turkey
16
Kim DW, Park CY, Shin JH, Lee HJ. The Role of Artificial Intelligence in Obesity Medicine. Endocrinol Metab Clin North Am 2025; 54:207-215. [PMID: 39919876] [DOI: 10.1016/j.ecl.2024.10.008]
Abstract
The rising prevalence of obesity presents significant health, economic, and social challenges, necessitating a comprehensive approach to prevention, diagnosis, treatment, and long-term management. This review highlights the transformative role of artificial intelligence in obesity medicine, showcasing how technologies such as machine learning, deep learning, natural language processing, and large language models improve obesity management. The capacity of artificial intelligence to analyze extensive datasets enables predictive analytics, personalized treatment plans, and real-time behavioral interventions. Despite its potential, integrating artificial intelligence in obesity medicine faces challenges and ethical considerations, such as data privacy, algorithmic bias, artificial intelligence hallucination, transparency, and implementation barriers.
Affiliation(s)
- Dong Wook Kim
- Division of Endocrinology, Diabetes and Hypertension, Center for Weight Management and Wellness, Brigham and Women's Hospital, 221 Longwood Avenue, RFB 490, Boston, MA 02115, USA.
- Cheol-Young Park
- Division of Endocrinology and Metabolism, Department of Internal Medicine, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Jeong-Hun Shin
- Department of Internal Medicine, Hanyang University College of Medicine, Seoul, Republic of Korea
- Hyunjoo Jenny Lee
- School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea
17
Sullivan BA, Grundmeier RW. Machine Learning Models as Early Warning Systems for Neonatal Infection. Clin Perinatol 2025; 52:167-183. [PMID: 39892951] [DOI: 10.1016/j.clp.2024.10.011]
Abstract
Neonatal infections pose a significant threat to the health of newborns. Associated morbidity and mortality risks underscore the urgency of prompt diagnosis and treatment with appropriate empiric antibiotics. Delay in treatment can be fatal; thus, early detection improves outcomes. However, diagnosing early is a challenge, as signs and symptoms of neonatal infection are non-specific and overlap with non-infectious conditions. Machine learning (ML) offers promise in early detection, utilizing various data sources and methodologies. However, ML models require rigorous validation and consideration of various challenges, including false alarms and user acceptance, as well as careful integration and ongoing evaluation for successful implementation.
Affiliation(s)
- Brynne A Sullivan
- Division of Neonatology, Department of Pediatrics, University of Virginia School of Medicine, 1215 Lee Street, P.O. Box 800386, Charlottesville, VA 22947, USA.
- Robert W Grundmeier
- Department of Pediatrics, Perelman School of Medicine at the University of Pennsylvania; Division of Clinical Informatics, Department of Biomedical and Health Informatics, The Children's Hospital of Philadelphia, 3400 Civic Center Boulevard Ste 10, Philadelphia, PA 19104, USA
18
Cabral BP, Braga LAM, Conte Filho CG, Penteado B, Freire de Castro Silva SL, Castro L, Fornazin M, Mota F. Future Use of AI in Diagnostic Medicine: 2-Wave Cross-Sectional Survey Study. J Med Internet Res 2025; 27:e53892. [PMID: 40053779] [PMCID: PMC11907171] [DOI: 10.2196/53892]
Abstract
BACKGROUND The rapid evolution of artificial intelligence (AI) presents transformative potential for diagnostic medicine, offering opportunities to enhance diagnostic accuracy, reduce costs, and improve patient outcomes. OBJECTIVE This study aimed to assess the expected future impact of AI on diagnostic medicine by comparing global researchers' expectations using 2 cross-sectional surveys. METHODS The surveys were conducted in September 2020 and February 2023. Each survey captured a 10-year projection horizon, gathering insights from >3700 researchers with expertise in AI and diagnostic medicine from all over the world. The survey sought to understand the perceived benefits, integration challenges, and evolving attitudes toward AI use in diagnostic settings. RESULTS Results indicated a strong expectation among researchers that AI will substantially influence diagnostic medicine within the next decade. Key anticipated benefits include enhanced diagnostic reliability, reduced screening costs, improved patient care, and decreased physician workload, addressing the growing demand for diagnostic services outpacing the supply of medical professionals. Specifically, x-ray diagnosis, heart rhythm interpretation, and skin malignancy detection were identified as the diagnostic tools most likely to be integrated with AI technologies due to their maturity and existing AI applications. The surveys highlighted the growing optimism regarding AI's ability to transform traditional diagnostic pathways and enhance clinical decision-making processes. Furthermore, the study identified barriers to the integration of AI in diagnostic medicine. The primary challenges cited were the difficulties of embedding AI within existing clinical workflows, ethical and regulatory concerns, and data privacy issues. Respondents emphasized uncertainties around legal responsibility and accountability for AI-supported clinical decisions, data protection challenges, and the need for robust regulatory frameworks to ensure safe AI deployment. Ethical concerns, particularly those related to algorithmic transparency and bias, were noted as increasingly critical, reflecting a heightened awareness of the potential risks associated with AI adoption in clinical settings. Differences between the 2 survey waves indicated a growing focus on ethical and regulatory issues, suggesting an evolving recognition of these challenges over time. CONCLUSIONS Despite these barriers, there was notable consistency in researchers' expectations across the 2 survey periods, indicating a stable and sustained outlook on AI's transformative potential in diagnostic medicine. The findings show the need for interdisciplinary collaboration among clinicians, AI developers, and regulators to address ethical and practical challenges while maximizing AI's benefits. This study offers insights into the projected trajectory of AI in diagnostic medicine, guiding stakeholders, including health care providers, policy makers, and technology developers, on navigating the opportunities and challenges of AI integration.
Collapse
Affiliation(s)
- Bernardo Pereira Cabral
- Cellular Communication Laboratory, Oswaldo Cruz Institute, Oswaldo Cruz Foundation, Rio de Janeiro, Brazil
- Department of Economics, Faculty of Economics, Federal University of Bahia, Salvador, Brazil
| | - Luiza Amara Maciel Braga
- Cellular Communication Laboratory, Oswaldo Cruz Institute, Oswaldo Cruz Foundation, Rio de Janeiro, Brazil
| | | | - Bruno Penteado
- Fiocruz Strategy for the 2030 Agenda, Oswaldo Cruz Foundation, Rio de Janeiro, Brazil
| | - Sandro Luis Freire de Castro Silva
- National Cancer Institute, Rio de Janeiro, Brazil
- Graduate Program in Management and Strategy, Federal Rural University of Rio de Janeiro, Seropedica, Brazil
| | - Leonardo Castro
- Fiocruz Strategy for the 2030 Agenda, Oswaldo Cruz Foundation, Rio de Janeiro, Brazil
- National School of Public Health, Oswaldo Cruz Foundation, Rio de Janeiro, Brazil
| | - Marcelo Fornazin
- Fiocruz Strategy for the 2030 Agenda, Oswaldo Cruz Foundation, Rio de Janeiro, Brazil
- National School of Public Health, Oswaldo Cruz Foundation, Rio de Janeiro, Brazil
| | - Fabio Mota
- Cellular Communication Laboratory, Oswaldo Cruz Institute, Oswaldo Cruz Foundation, Rio de Janeiro, Brazil
| |
Collapse
|
19
|
Omar M, Sorin V, Agbareia R, Apakama DU, Soroush A, Sakhuja A, Freeman R, Horowitz CR, Richardson LD, Nadkarni GN, Klang E. Evaluating and addressing demographic disparities in medical large language models: a systematic review. Int J Equity Health 2025; 24:57. [PMID: 40011901 DOI: 10.1186/s12939-025-02419-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2024] [Accepted: 02/18/2025] [Indexed: 02/28/2025] Open
Abstract
BACKGROUND Large language models are increasingly evaluated for use in healthcare. However, concerns about their impact on disparities persist. This study reviews current research on demographic biases in large language models to identify prevalent bias types, assess measurement methods, and evaluate mitigation strategies. METHODS We conducted a systematic review, searching publications from January 2018 to July 2024 across five databases. We included peer-reviewed studies evaluating demographic biases in large language models, focusing on gender, race, ethnicity, age, and other factors. Study quality was assessed using the Joanna Briggs Institute Critical Appraisal Tools. RESULTS Our review included 24 studies. Of these, 22 (91.7%) identified biases. Gender bias was the most prevalent, reported in 15 of 16 studies (93.7%). Racial or ethnic biases were observed in 10 of 11 studies (90.9%). Only two studies found minimal or no bias in certain contexts. Mitigation strategies mainly included prompt engineering, with varying effectiveness. However, these findings are tempered by a potential publication bias, as studies with negative results are less frequently published. CONCLUSION Biases are observed in large language models across various medical domains. While bias detection is improving, effective mitigation strategies are still developing. As LLMs increasingly influence critical decisions, addressing these biases and their resultant disparities is essential for ensuring fair artificial intelligence systems. Future research should focus on a wider range of demographic factors, intersectional analyses, and non-Western cultural contexts.
Collapse
Affiliation(s)
- Mahmud Omar
- The Division of Data-Driven and Digital Medicine (D3M), Icahn School of Medicine at Mount Sinai, New York, NY, USA.
| | - Vera Sorin
- Diagnostic Radiology, Mayo Clinic, Rochester, MN, USA
| | - Reem Agbareia
- Ophthalmology Department, Hadassah Medical Center, Jerusalem, Israel
| | - Donald U Apakama
- The Charles Bronfman Institute of Personalized Medicine, Icahn School of Medicine at Mount Sinai, New York, NY, USA
| | - Ali Soroush
- The Division of Data-Driven and Digital Medicine (D3M), Icahn School of Medicine at Mount Sinai, New York, NY, USA
| | - Ankit Sakhuja
- The Division of Data-Driven and Digital Medicine (D3M), Icahn School of Medicine at Mount Sinai, New York, NY, USA
- The Charles Bronfman Institute of Personalized Medicine, Icahn School of Medicine at Mount Sinai, New York, NY, USA
| | - Robert Freeman
- The Division of Data-Driven and Digital Medicine (D3M), Icahn School of Medicine at Mount Sinai, New York, NY, USA
| | - Carol R Horowitz
- Institute for Health Equity Research, Icahn School of Medicine at Mount Sinai, New York, NY, USA
| | - Lynne D Richardson
- Institute for Health Equity Research, Icahn School of Medicine at Mount Sinai, New York, NY, USA
| | - Girish N Nadkarni
- The Division of Data-Driven and Digital Medicine (D3M), Icahn School of Medicine at Mount Sinai, New York, NY, USA
- The Charles Bronfman Institute of Personalized Medicine, Icahn School of Medicine at Mount Sinai, New York, NY, USA
| | - Eyal Klang
- The Division of Data-Driven and Digital Medicine (D3M), Icahn School of Medicine at Mount Sinai, New York, NY, USA
- The Charles Bronfman Institute of Personalized Medicine, Icahn School of Medicine at Mount Sinai, New York, NY, USA
| |
Collapse
|
20
|
Kapur I, Kennedy R, Hickman C. Artificial Intelligence Algorithms, Bias, and Innovation: Implications for Social Work. JOURNAL OF EVIDENCE-BASED SOCIAL WORK (2019) 2025:1-23. [PMID: 40008407 DOI: 10.1080/26408066.2025.2470903] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/27/2025]
Abstract
PURPOSE Artificial Intelligence (AI) technologies are rapidly expanding across diverse contexts. As the reach of AI continues to grow, there is a need to examine student perspectives on the increasing prevalence of AI and AI-based practice approaches in social work. MATERIALS AND METHODS In this qualitative study, we conducted structured interviews with 15 students in bachelor's and master's social work programs. We developed an interview guide with a list of questions to ask students; no prior knowledge of AI was required of the students. The study was framed using an interpretive phenomenological analysis approach. RESULTS Through thematic analysis, five key themes were identified: 1) Risks associated with AI, 2) Ethical Concerns in AI and Technology Use, 3) Bias and Fairness in AI, 4) Applications and Possibilities of AI in Social Work, and 5) Training and Awareness of AI in Social Work. DISCUSSION Social workers can help disadvantaged clients by ensuring access to the various AI technologies and facilitating social welfare interventions created using these technologies. There is a need to address the gap in the existing literature about the use of AI in social work practice and education. Future studies could use mixed-methods designs to evaluate the use of AI in social work domains. CONCLUSION This study highlights the need to increase awareness of AI in social work education and practice settings, given the potential of these technologies to aid various aspects of social work practice.
Collapse
Affiliation(s)
- Ishita Kapur
- College of Social Work, The University of Tennessee, Knoxville, USA
| | - Reeve Kennedy
- School of Social Work, East Carolina University, North Carolina, USA
| | - Christy Hickman
- College of Social Work, The University of Tennessee, Knoxville, USA
| |
Collapse
|
21
|
Sedano R, Solitano V, Vuyyuru SK, Yuan Y, Hanžel J, Ma C, Nardone OM, Jairath V. Artificial intelligence to revolutionize IBD clinical trials: a comprehensive review. Therap Adv Gastroenterol 2025; 18:17562848251321915. [PMID: 39996136 PMCID: PMC11848901 DOI: 10.1177/17562848251321915] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/02/2024] [Accepted: 02/04/2025] [Indexed: 02/26/2025] Open
Abstract
Integrating artificial intelligence (AI) into clinical trials for inflammatory bowel disease (IBD) could be transformative for the field. This article explores how AI-driven technologies, including machine learning (ML), natural language processing, and predictive analytics, have the potential to enhance important aspects of IBD trials, from patient recruitment and trial design to data analysis and personalized treatment strategies. As AI advances, it could address long-standing challenges in trial efficiency, accuracy, and personalization, with the goal of accelerating the discovery of novel therapies and improving outcomes for people living with IBD. AI can streamline multiple trial phases, from target identification and patient recruitment to data analysis and monitoring. By integrating multi-omics data, electronic health records, and imaging repositories, AI can uncover molecular targets and personalize trial strategies, ultimately expediting drug development. However, the adoption of AI in IBD clinical trials faces significant challenges. These include technical barriers in data integration, ethical concerns regarding patient privacy, and regulatory issues related to AI validation standards. Additionally, AI models risk producing biased outcomes if training datasets lack diversity, potentially impacting underrepresented populations in clinical trials. Addressing these limitations requires standardized data formats, interdisciplinary collaboration, and robust ethical frameworks to ensure inclusivity and accuracy. Continued partnerships among clinicians, researchers, data scientists, and regulators will be essential to establish transparent, patient-centered AI frameworks. By overcoming these obstacles, AI has the potential to enhance the efficiency, equity, and efficacy of IBD clinical trials, ultimately benefiting patient care.
Collapse
Affiliation(s)
- Rocio Sedano
- Division of Gastroenterology, Department of Medicine, Western University, London, ON, Canada
- Department of Epidemiology and Biostatistics, Western University, London, ON, Canada
- Lawson Health Research Institute, London, ON, Canada
| | - Virginia Solitano
- Department of Epidemiology and Biostatistics, Western University, London, ON, Canada
- Division of Gastroenterology and Gastrointestinal Endoscopy, IRCCS Ospedale San Raffaele, Università Vita-Salute San Raffaele, Milan, Lombardy, Italy
| | - Sudheer K. Vuyyuru
- Division of Gastroenterology, Department of Medicine, Western University, London, ON, Canada
| | - Yuhong Yuan
- Division of Gastroenterology, Department of Medicine, Western University, London, ON, Canada
- Lawson Health Research Institute, London, ON, Canada
| | - Jurij Hanžel
- Department of Gastroenterology, University Medical Centre Ljubljana, University of Ljubljana, Ljubljana, Slovenia
| | - Christopher Ma
- Department of Community Health Sciences, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada
- Division of Gastroenterology and Hepatology, Department of Medicine, University of Calgary, Calgary, AB, Canada
| | - Olga Maria Nardone
- Gastroenterology, Department of Public Health, University Federico II of Naples, Naples, Italy
| | - Vipul Jairath
- Division of Gastroenterology, Department of Medicine, Western University, London, ON, Canada
- Department of Epidemiology and Biostatistics, Western University, London, ON, Canada
- Lawson Health Research Institute, Room A10-219, University Hospital, 339 Windermere Rd, London, ON N6A 5A5, Canada
| |
Collapse
|
22
|
Rodriguez I, Huckins LM, Bulik CM, Xu J, Igudesman D. Harnessing precision nutrition to individualize weight restoration in anorexia nervosa. J Eat Disord 2025; 13:29. [PMID: 39962541 PMCID: PMC11834214 DOI: 10.1186/s40337-025-01209-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/05/2024] [Accepted: 01/27/2025] [Indexed: 02/20/2025] Open
Abstract
Anorexia nervosa (AN) is a severe psychiatric disorder for which effective treatment and sustained recovery are contingent upon successful weight restoration, yet the efficacy of existing treatments is suboptimal. This narrative review considers the potential of precision nutrition for tailoring dietary interventions to individual characteristics to enhance acute and longer-term weight outcomes in AN. We review key factors that drive variation in nutritional requirements, including energy expenditure, fecal energy loss, the gut microbiota, genetic factors, and psychiatric comorbidities. Although scientific evidence supporting precision nutrition in AN is limited, preliminary findings suggest that individualized nutrition therapies, particularly those considering duration of illness and the gut microbiota, may augment weight gain. Some patients may benefit from microbiota-directed dietary plans that focus on restoring microbial diversity, keystone taxa, or functions that promote energy absorption, which could enhance weight restoration-although stronger evidence is needed to support this approach. Furthermore, accounting for psychiatric comorbidities such as depression and anxiety as well as genetic factors influencing metabolism may help refine nutrition prescriptions improving upon existing energy estimation equations, which were not developed for patients with AN. Given the reliance on large sample sizes, costly data collection, and the need for computationally intensive artificial intelligence algorithms to assimilate deep phenotypes into personalized interventions, we highlight practical considerations related to the implementation of precision nutrition approaches in clinical practice. More research is needed to identify which factors, including metabolic profiles, genetic markers, demographics, and habitual lifestyle behaviors, are most critical to target for individualizing weight restoration, and whether personalized recommendations can be practicably applied to improve and sustain patient recovery from this debilitating disorder with high relapse and mortality rates.
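As a concrete point of reference for the "existing energy estimation equations" mentioned above, the sketch below computes resting energy expenditure with the widely used Mifflin-St Jeor equation. The patient values are hypothetical, and the equation was derived in general populations rather than in anorexia nervosa, which is precisely the limitation the review raises.

```python
# Example of the kind of standard energy estimation equation the review refers
# to: the Mifflin-St Jeor resting energy expenditure (REE) formula. It was
# derived in general populations, which is why such equations may need
# individualization for patients with AN.
def mifflin_st_jeor_ree(weight_kg: float, height_cm: float, age_y: float, sex: str) -> float:
    """REE in kcal/day: 10*weight + 6.25*height - 5*age + 5 (male) or -161 (female)."""
    offset = 5 if sex == "male" else -161
    return 10 * weight_kg + 6.25 * height_cm - 5 * age_y + offset

# Hypothetical patient: 45 kg, 165 cm, 22-year-old female.
print(round(mifflin_st_jeor_ree(45, 165, 22, "female")), "kcal/day")
```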
Collapse
Affiliation(s)
- Isabel Rodriguez
- School of Medicine, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
| | - Laura M Huckins
- Department of Psychiatry, Yale University School of Medicine, New Haven, CT, USA
| | - Cynthia M Bulik
- Department of Nutrition, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA.
- Department of Psychiatry, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA.
- Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden.
| | - Jiayi Xu
- Department of Psychiatry, Yale University School of Medicine, New Haven, CT, USA
| | - Daria Igudesman
- AdventHealth Translational Research Institute, 301 E Princeton St, Orlando, FL, 32805, USA
| |
Collapse
|
23
|
Cau R, Pisu F, Suri JS, Saba L. Addressing hidden risks: Systematic review of artificial intelligence biases across racial and ethnic groups in cardiovascular diseases. Eur J Radiol 2025; 183:111867. [PMID: 39637580 DOI: 10.1016/j.ejrad.2024.111867] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2024] [Revised: 11/25/2024] [Accepted: 11/28/2024] [Indexed: 12/07/2024]
Abstract
BACKGROUND Artificial intelligence (AI)-based models are increasingly being integrated into cardiovascular medicine. Despite promising potential, racial and ethnic biases remain a key concern regarding the development and implementation of AI models in clinical settings. OBJECTIVE This systematic review offers an overview of the accuracy and clinical applicability of AI models for cardiovascular diagnosis and prognosis across diverse racial and ethnic groups. METHOD A comprehensive literature search was conducted across four medical and scientific databases: PubMed, MEDLINE via Ovid, Scopus, and the Cochrane Library, to evaluate racial and ethnic disparities in cardiovascular medicine. RESULTS A total of 1704 references were screened, of which 11 articles were included in the final analysis. Applications of AI-based algorithms across different race/ethnic groups were varied and involved diagnosis, prognosis, and imaging segmentation. Among the 11 studies, 9 (82%) concluded that racial/ethnic bias existed, while 2 (18%) found no differences in the outcomes of AI models across various ethnicities. CONCLUSION Our results suggest significant differences in how AI models perform in cardiovascular medicine across diverse racial and ethnic groups. CLINICAL RELEVANCE STATEMENT The increasing integration of AI into cardiovascular medicine highlights the importance of evaluating its performance across diverse populations. This systematic review underscores the critical need to address racial and ethnic disparities in AI-based models to ensure equitable healthcare delivery.
Collapse
Affiliation(s)
- Riccardo Cau
- Department of Radiology, Azienda Ospedaliero Universitaria, Monserrato, Cagliari, Italy
| | - Francesco Pisu
- Department of Radiology, Azienda Ospedaliero Universitaria, Monserrato, Cagliari, Italy
| | - Jasjit S Suri
- Department of Radiology, Azienda Ospedaliero Universitaria, Monserrato, Cagliari, Italy
| | - Luca Saba
- Department of Radiology, Azienda Ospedaliero Universitaria, Monserrato, Cagliari, Italy.
| |
Collapse
|
24
|
Vos S, Hebeda K, Milota M, Sand M, Drogt J, Grünberg K, Jongsma K. Making Pathologists Ready for the New Artificial Intelligence Era: Changes in Required Competencies. Mod Pathol 2025; 38:100657. [PMID: 39542175 DOI: 10.1016/j.modpat.2024.100657] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2024] [Revised: 09/11/2024] [Accepted: 11/07/2024] [Indexed: 11/17/2024]
Abstract
In recent years, there has been an increasing interest in developing and using artificial intelligence (AI) models in pathology. Although pathologists generally have a positive attitude toward AI, they report a lack of knowledge and skills regarding how to use it in practice. Furthermore, it remains unclear what skills pathologists would require to use AI adequately and responsibly. However, adequate training of (future) pathologists is essential for successful AI use in pathology. In this paper, we assess which entrustable professional activities (EPAs) and associated competencies pathologists should acquire in order to use AI in their daily practice. We make use of the available academic literature, including literature in radiology, another image-based discipline, which is currently more advanced in terms of AI development and implementation. Although microscopy evaluation and reporting could be transferrable to AI in the future, most of the current pathologist EPAs and competencies will likely remain relevant when using AI techniques and interpreting and communicating results for individual patient cases. In addition, new competencies related to technology evaluation and implementation will likely be necessary, along with knowing one's own strengths and limitations in human-AI interactions. Because current EPAs do not sufficiently address the need to train pathologists in developing expertise related to technology evaluation and implementation, we propose a new EPA to enable pathology training programs to make pathologists fit for the new AI era "using AI in diagnostic pathology practice" and outline its associated competencies.
Collapse
Affiliation(s)
- Shoko Vos
- Department of Pathology, Radboud University Medical Center, Nijmegen, the Netherlands.
| | - Konnie Hebeda
- Department of Pathology, Radboud University Medical Center, Nijmegen, the Netherlands
| | - Megan Milota
- Department of Bioethics and Health Humanities, University Medical Center Utrecht, Utrecht, the Netherlands
| | - Martin Sand
- Faculty of Technology, Technical University Delft, Delft, the Netherlands
| | - Jojanneke Drogt
- Department of Bioethics and Health Humanities, University Medical Center Utrecht, Utrecht, the Netherlands
| | - Katrien Grünberg
- Department of Pathology, Radboud University Medical Center, Nijmegen, the Netherlands
| | - Karin Jongsma
- Department of Bioethics and Health Humanities, University Medical Center Utrecht, Utrecht, the Netherlands
| |
Collapse
|
25
|
Yu B, Shao S, Ma W. Frontiers in pancreatic cancer on biomarkers, microenvironment, and immunotherapy. Cancer Lett 2025; 610:217350. [PMID: 39581219 DOI: 10.1016/j.canlet.2024.217350] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2024] [Revised: 11/06/2024] [Accepted: 11/21/2024] [Indexed: 11/26/2024]
Abstract
Pancreatic cancer remains one of the most challenging malignancies to treat due to its late-stage diagnosis, aggressive progression, and high resistance to existing therapies. This review examines the latest advancements in early detection, and therapeutic strategies, with a focus on emerging biomarkers, tumor microenvironment (TME) modulation, and the integration of artificial intelligence (AI) in data analysis. We highlight promising biomarkers, including microRNAs (miRNAs) and circulating tumor DNA (ctDNA), that offer enhanced sensitivity and specificity for early-stage diagnosis when combined with multi-omics panels. A detailed analysis of the TME reveals how components such as cancer-associated fibroblasts (CAFs), immune cells, and the extracellular matrix (ECM) contribute to therapy resistance by creating immunosuppressive barriers. We also discuss therapeutic interventions that target these TME components, aiming to improve drug delivery and overcome immune evasion. Furthermore, AI-driven analyses are explored for their potential to interpret complex multi-omics data, enabling personalized treatment strategies and real-time monitoring of treatment response. We conclude by identifying key areas for future research, including the clinical validation of biomarkers, regulatory frameworks for AI applications, and equitable access to innovative therapies. This comprehensive approach underscores the need for integrated, personalized strategies to improve outcomes in pancreatic cancer.
Collapse
Affiliation(s)
- Baofa Yu
- Taimei Baofa Cancer Hospital, Dongping, Shandong, 271500, China; Jinan Baofa Cancer Hospital, Jinan, Shandong, 250000, China; Beijing Baofa Cancer Hospital, Beijing, 100010, China; Immune Oncology Systems, Inc, San Diego, CA, 92102, USA.
| | - Shengwen Shao
- Institute of Microbiology and Immunology, Huzhou University School of Medicine, Huzhou, Zhejiang, 313000, China.
| | - Wenxue Ma
- Department of Medicine, Sanford Stem Cell Institute, and Moores Cancer Center, University of California San Diego, La Jolla, CA, 92093, USA.
| |
Collapse
|
26
|
van Kessel R, Seghers LE, Anderson M, Schutte NM, Monti G, Haig M, Schmidt J, Wharton G, Roman-Urrestarazu A, Larrain B, Sapanel Y, Stüwe L, Bourbonneux A, Yoon J, Lee M, Paccoud I, Borga L, Ndili N, Sutherland E, Görgens M, Weicken E, Coder M, de Fatima Marin H, Val E, Profili MC, Kosinska M, Browne CE, Marcelo A, Agarwal S, Mrazek MF, Eskandar H, Chestnov R, Smelyanskaya M, Källander K, Buttigieg S, Ramesh K, Holly L, Rys A, Azzopardi-Muscat N, de Barros J, Quintana Y, Spina A, Hyder AA, Labrique A, Kamel Boulos MN, Chen W, Agrawal A, Cho J, Klucken J, Prainsack B, Balicer R, Kickbusch I, Novillo-Ortiz D, Mossialos E. A scoping review and expert consensus on digital determinants of health. Bull World Health Organ 2025; 103:110-125H. [PMID: 39882497 PMCID: PMC11774227 DOI: 10.2471/blt.24.292057] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2024] [Revised: 10/01/2024] [Accepted: 10/02/2024] [Indexed: 01/31/2025] Open
Abstract
Objective To map how social, commercial, political and digital determinants of health have changed or emerged during the recent digital transformation of society and to identify priority areas for policy action. Methods We systematically searched MEDLINE, Embase and Web of Science on 24 September 2023, to identify eligible reviews published in 2018 and later. To ensure we included the most recent literature, we supplemented our review with non-systematic searches in PubMed® and Google Scholar, along with records identified by subject matter experts. Using thematic analysis, we clustered the extracted data into five societal domains affected by digitalization. The clustering also informed a novel framework, which the authors and contributors reviewed for comprehensiveness and accuracy. Using a two-round consensus process, we rated the identified determinants as high, moderate or low urgency for policy action. Findings We identified 13 804 records, of which 204 met the inclusion criteria. A total of 127 health determinants were found to have emerged or changed during the digital transformation of society (37 digital, 33 social, 33 commercial and economic and 24 political determinants). Of these, 30 determinants (23.6%) were considered particularly urgent for policy action. Conclusion This review offers a comprehensive overview of health determinants across the digital, social, commercial and economic, and political domains, highlighting how policy decisions, individual behaviours and broader factors shape health in an increasingly digitalized society. The findings deepen our understanding of how health outcomes manifest within a digital ecosystem and inform strategies for addressing the complex and evolving networks of health determinants.
Collapse
Affiliation(s)
- Robin van Kessel
- LSE Health, Department of Health Policy, London School of Economics and Political Science, Houghton Street, London, WC2A 2AE, London, England
| | - Laure-Elise Seghers
- LSE Health, Department of Health Policy, London School of Economics and Political Science, Houghton Street, London, WC2A 2AE, London, England
| | - Michael Anderson
- LSE Health, Department of Health Policy, London School of Economics and Political Science, Houghton Street, London, WC2A 2AE, London, England
| | - Nienke M Schutte
- Innovation in Health Information Systems Unit, Sciensano, Brussels, Belgium
| | - Giovanni Monti
- LSE Health, Department of Health Policy, London School of Economics and Political Science, Houghton Street, London, WC2A 2AE, London, England
| | - Madeleine Haig
- LSE Health, Department of Health Policy, London School of Economics and Political Science, Houghton Street, London, WC2A 2AE, London, England
| | - Jelena Schmidt
- Department of International Health, Maastricht University, Maastricht, Kingdom of the Netherlands
| | - George Wharton
- LSE Health, Department of Health Policy, London School of Economics and Political Science, Houghton Street, London, WC2A 2AE, London, England
| | | | - Blanca Larrain
- Department of Psychiatry, University of Cambridge, Cambridge, England
| | - Yoann Sapanel
- Institute of Digital Medicine, National University of Singapore, Singapore
| | - Louisa Stüwe
- Digital Health Delegation for Digital Health, Ministry of Labour, Health and Solidarities, Paris, France
| | - Agathe Bourbonneux
- Digital Health Delegation for Digital Health, Ministry of Labour, Health and Solidarities, Paris, France
| | - Junghee Yoon
- Department of Clinical Research Design and Evaluation, Sungkyunkwan University, Seoul, Republic of Korea
| | - Mangyeong Lee
- Department of Clinical Research Design and Evaluation, Sungkyunkwan University, Seoul, Republic of Korea
| | - Ivana Paccoud
- Luxembourg Centre for Systems Biomedicine, Université du Luxembourg, Belvaux, Luxembourg
| | - Liyousew Borga
- Luxembourg Centre for Systems Biomedicine, Université du Luxembourg, Belvaux, Luxembourg
| | - Njide Ndili
- PharmAccess Foundation Nigeria, Lagos, Nigeria
| | | | - Marelize Görgens
- Health, Nutrition and Population Global Practice, World Bank Group, Washington, DC, United States of America (USA)
| | - Eva Weicken
- Fraunhofer Institute for Telecommunications, Heinrich Hertz Institut, Berlin, Germany
| | | | - Heimar de Fatima Marin
- Department of Biomedical and Data Science, Yale University School of Medicine, New Haven, USA
| | - Elena Val
- Migration Health Division, International Organization for Migration Regional Office for the European Economic Area, the EU and NATO, Brussels, Belgium
| | - Maria Cristina Profili
- Migration Health Division, International Organization for Migration Regional Office for the European Economic Area, the EU and NATO, Brussels, Belgium
| | - Monika Kosinska
- Department of Social Determinants of Health, World Health Organization, Geneva, Switzerland
| | | | - Alvin Marcelo
- Medical Informatics Unit, University of the Philippines, Manila, Philippines
| | - Smisha Agarwal
- Department of International Health, The Johns Hopkins University Bloomberg School of Public Health, Baltimore, USA
| | - Monique F. Mrazek
- International Finance Corporation, World Bank Group, Washington, DC, USA
| | - Hani Eskandar
- Digital Services Division, International Telecommunications Union, Geneva, Switzerland
| | - Roman Chestnov
- Digital Services Division, International Telecommunications Union, Geneva, Switzerland
| | - Marina Smelyanskaya
- HIV and Health Group, United Nations Development Programme Europe and Central Asia, Istanbul, Türkiye
| | | | | | | | - Louise Holly
- Digital Transformations for Health Lab, Geneva, Switzerland
| | - Andrzej Rys
- Health Systems, Medical Products and Innovation, European Commission, Brussels, Belgium
| | - Natasha Azzopardi-Muscat
- LSE Health, Department of Health Policy, London School of Economics and Political Science, Houghton Street, London, WC2A 2AE, London, England
- Innovation in Health Information Systems Unit, Sciensano, Brussels, Belgium
| | - Jerome de Barros
- LSE Health, Department of Health Policy, London School of Economics and Political Science, Houghton Street, London, WC2A 2AE, London, England
- Department of International Health, Maastricht University, Maastricht, Kingdom of the Netherlands
| | - Yuri Quintana
- LSE Health, Department of Health Policy, London School of Economics and Political Science, Houghton Street, London, WC2A 2AE, London, England
- Department of Psychiatry, University of Cambridge, Cambridge, England
| | - Antonio Spina
- LSE Health, Department of Health Policy, London School of Economics and Political Science, Houghton Street, London, WC2A 2AE, London, England
- Institute of Digital Medicine, National University of Singapore, Singapore
| | - Adnan A Hyder
- LSE Health, Department of Health Policy, London School of Economics and Political Science, Houghton Street, London, WC2A 2AE, London, England
- Digital Health Delegation for Digital Health, Ministry of Labour, Health and Solidarities, Paris, France
| | - Alain Labrique
- LSE Health, Department of Health Policy, London School of Economics and Political Science, Houghton Street, London, WC2A 2AE, London, England
- Department of Clinical Research Design and Evaluation, Sungkyunkwan University, Seoul, Republic of Korea
| | - Maged N Kamel Boulos
- LSE Health, Department of Health Policy, London School of Economics and Political Science, Houghton Street, London, WC2A 2AE, London, England
- Luxembourg Centre for Systems Biomedicine, Université du Luxembourg, Belvaux, Luxembourg
| | - Wen Chen
- LSE Health, Department of Health Policy, London School of Economics and Political Science, Houghton Street, London, WC2A 2AE, London, England
- PharmAccess Foundation Nigeria, Lagos, Nigeria
| | - Anurag Agrawal
- LSE Health, Department of Health Policy, London School of Economics and Political Science, Houghton Street, London, WC2A 2AE, London, England
- Paris, France
| | - Juhee Cho
- Department of Clinical Research Design and Evaluation, Sungkyunkwan University, Seoul, Republic of Korea
| | - Jochen Klucken
- Luxembourg Centre for Systems Biomedicine, Université du Luxembourg, Belvaux, Luxembourg
| | - Barbara Prainsack
- LSE Health, Department of Health Policy, London School of Economics and Political Science, Houghton Street, London, WC2A 2AE, London, England
- Health, Nutrition and Population Global Practice, World Bank Group, Washington, DC, United States of America (USA)
| | - Ran Balicer
- LSE Health, Department of Health Policy, London School of Economics and Political Science, Houghton Street, London, WC2A 2AE, London, England
- Fraunhofer Institute for Telecommunications, Heinrich Hertz Institut, Berlin, Germany
| | | | - David Novillo-Ortiz
- LSE Health, Department of Health Policy, London School of Economics and Political Science, Houghton Street, London, WC2A 2AE, London, England
- Innovation in Health Information Systems Unit, Sciensano, Brussels, Belgium
| | - Elias Mossialos
- LSE Health, Department of Health Policy, London School of Economics and Political Science, Houghton Street, London, WC2A 2AE, London, England
| |
Collapse
|
27
|
Hussain SA, Bresnahan M, Zhuang J. The bias algorithm: how AI in healthcare exacerbates ethnic and racial disparities - a scoping review. ETHNICITY & HEALTH 2025; 30:197-214. [PMID: 39488857 DOI: 10.1080/13557858.2024.2422848] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/12/2024] [Accepted: 10/24/2024] [Indexed: 11/05/2024]
Abstract
This scoping review examined racial and ethnic bias in artificial intelligence health algorithms (AIHA), the role of stakeholders in oversight, and the consequences of AIHA for health equity. Using the PRISMA-ScR guidelines, databases were searched between 2020 and 2024 using the terms "racial and ethnic bias in health algorithms," resulting in a final sample of 23 sources. Suggestions for how to mitigate algorithmic bias were compiled and evaluated, roles played by stakeholders were identified, and governance and stewardship plans for AIHA were examined. While AIHA represent a significant breakthrough in predictive analytics and treatment optimization, regularly outperforming humans in diagnostic precision and accuracy, they also present serious challenges to patient privacy, data security, institutional transparency, and health equity. Evidence from extant sources, including those in this review, showed that AIHA carry the potential to perpetuate health inequities. Although the current study considered AIHA in the US, the use of AIHA carries implications for global health equity.
Collapse
Affiliation(s)
| | - Mary Bresnahan
- Department of Communication, Michigan State University, East Lansing, MI, USA
| | - Jie Zhuang
- Department of Communication, Texas Christian University, Fort Worth, TX, USA
| |
Collapse
|
28
|
Choudhary OP, Infant SS, As V, Chopra H, Manuta N. Exploring the potential and limitations of artificial intelligence in animal anatomy. Ann Anat 2025; 258:152366. [PMID: 39631569 DOI: 10.1016/j.aanat.2024.152366] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2024] [Revised: 11/29/2024] [Accepted: 11/30/2024] [Indexed: 12/07/2024]
Abstract
BACKGROUND Artificial intelligence (AI) is revolutionizing veterinary medicine, particularly in the domain of veterinary anatomy. At present, no review article in the literature examines the prospects and challenges associated with the use of AI in animal anatomy education. STUDY DESIGN Narrative review. OBJECTIVE This review article explores the prospects and drawbacks of AI applications in veterinary anatomy. AI-powered diagnostic systems grounded in anatomy enhance clinical examination, diagnosis, and treatment by analyzing vast datasets, improving accuracy, and detecting subtle anomalies. METHODS We reviewed and analyzed recent literature on AI applications in veterinary anatomy education, emphasizing their potential, limitations, and future directions. CONCLUSION In veterinary anatomy education, AI integrates advanced tools such as three-dimensional (3D) models, virtual reality (VR), and augmented reality (AR), offering dynamic and interactive learning experiences to students and faculty of veterinary institutions across the globe. Despite these advantages, AI faces challenges such as the need for extensive, high-quality data, potential biases, and issues with algorithmic transparency. Additionally, virtual dissection and other educational tools may reduce hands-on learning, and ethical and legal concerns regarding data privacy must be addressed. Balancing AI integration with traditional skills and addressing these challenges will maximize AI's benefits in veterinary anatomy and ensure comprehensive veterinary care.
Collapse
Affiliation(s)
- Om Prakash Choudhary
- Department of Veterinary Anatomy, College of Veterinary Science, Guru Angad Dev Veterinary and Animal Sciences University, Rampura Phul, Bathinda, Punjab 151103, India.
| | - Shofia Saghya Infant
- Department of Biotechnology, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Chennai, India
| | - Vickram As
- Department of Biotechnology, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Chennai, India
| | - Hitesh Chopra
- Centre for Research Impact & Outcome, Chitkara College of Pharmacy, Chitkara University, Rajpura, Punjab 140401, India
| | - Nicoleta Manuta
- Laboratory of Veterinary Anatomy, Faculty of Veterinary Medicine, Istanbul University- Cerrahpasa, Turkey
| |
Collapse
|
29
|
Sood A, Moyer A, Jahangiri P, Mar D, Nitichaikulvatana P, Ramreddy N, Stolyar L, Lin J. Evaluation of the Reliability of ChatGPT to Provide Guidance on Recombinant Zoster Vaccination for Patients With Rheumatic and Musculoskeletal Diseases. J Clin Rheumatol 2025:00124743-990000000-00303. [PMID: 39814338 DOI: 10.1097/rhu.0000000000002198] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/18/2025]
Abstract
INTRODUCTION Large language models (LLMs) such as ChatGPT can potentially transform the delivery of health information. This study aims to evaluate the accuracy and completeness of ChatGPT in responding to questions on recombinant zoster vaccination (RZV) in patients with rheumatic and musculoskeletal diseases. METHODS A cross-sectional study was conducted using 20 prompts based on information from the Centers for Disease Control and Prevention (CDC), the Advisory Committee on Immunization Practices (ACIP), and the American College of Rheumatology (ACR). These prompts were entered into ChatGPT 3.5. Five rheumatologists independently scored the ChatGPT responses for accuracy (Likert 1 to 5) and completeness (Likert 1 to 3) against validated information sources (CDC, ACIP, and ACR). RESULTS The overall mean accuracy of ChatGPT responses on a 5-point scale was 4.04, with 80% of responses scoring ≥4. The mean completeness score of ChatGPT responses on a 3-point scale was 2.3, with 95% of responses scoring ≥2. Across the 5 raters, ChatGPT was consistently scored as highly accurate and complete on a range of patient and physician questions about RZV; in one instance, a response was rated as having low accuracy and completeness. Although the differences were not statistically significant, ChatGPT demonstrated the highest accuracy and completeness in answering questions related to ACIP guidelines compared with the other information sources. CONCLUSIONS ChatGPT exhibits a promising ability to address specific queries regarding RZV for patients with rheumatic and musculoskeletal diseases. However, it is essential to approach ChatGPT with caution due to the risk of misinformation. This study emphasizes the importance of rigorously validating LLMs as a health information source.
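For readers unfamiliar with how such Likert-based summaries are produced, the sketch below computes a mean rating and the share of responses at or above a threshold from a rater-by-response score matrix; the scores are simulated placeholders, not the study's data.

```python
# Sketch of how the summary statistics in the study (mean rating and share of
# responses at or above a threshold) can be computed from a rater-by-response
# score matrix. The matrix below is simulated, not the study's data.
import numpy as np

rng = np.random.default_rng(1)
accuracy = rng.integers(3, 6, size=(5, 20))       # 5 raters x 20 responses, Likert 1-5
completeness = rng.integers(2, 4, size=(5, 20))   # Likert 1-3

per_response_acc = accuracy.mean(axis=0)           # average over raters
per_response_comp = completeness.mean(axis=0)

print("mean accuracy:", round(per_response_acc.mean(), 2))
print("% responses with accuracy >= 4:", round((per_response_acc >= 4).mean() * 100, 1))
print("mean completeness:", round(per_response_comp.mean(), 2))
print("% responses with completeness >= 2:", round((per_response_comp >= 2).mean() * 100, 1))
```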
Collapse
Affiliation(s)
- Akhil Sood
- From the Division of Immunology and Rheumatology, Stanford University School of Medicine, Palo Alto, CA
| | | | | | | | | | | | | | | |
Collapse
|
30
|
Chia JLL, He GS, Ngiam KY, Hartman M, Ng QX, Goh SSN. Harnessing Artificial Intelligence to Enhance Global Breast Cancer Care: A Scoping Review of Applications, Outcomes, and Challenges. Cancers (Basel) 2025; 17:197. [PMID: 39857979 PMCID: PMC11764353 DOI: 10.3390/cancers17020197] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2024] [Revised: 01/02/2025] [Accepted: 01/07/2025] [Indexed: 01/27/2025] Open
Abstract
BACKGROUND In recent years, Artificial Intelligence (AI) has shown transformative potential in advancing breast cancer care globally. This scoping review seeks to provide a comprehensive overview of AI applications in breast cancer care, examining how they could reshape diagnosis, treatment, and management on a worldwide scale and discussing both the benefits and challenges associated with their adoption. METHODS In accordance with PRISMA-ScR and ensuing guidelines on scoping reviews, PubMed, Web of Science, Cochrane Library, and Embase were systematically searched from inception to end of May 2024. Keywords included "Artificial Intelligence" and "Breast Cancer". Original studies were included based on their focus on AI applications in breast cancer care and narrative synthesis was employed for data extraction and interpretation, with the findings organized into coherent themes. RESULTS Finally, 84 articles were included. The majority were conducted in developed countries (n = 54). The majority of publications were in the last 10 years (n = 83). The six main themes for AI applications were AI for breast cancer screening (n = 32), AI for image detection of nodal status (n = 7), AI-assisted histopathology (n = 8), AI in assessing post-neoadjuvant chemotherapy (NACT) response (n = 23), AI in breast cancer margin assessment (n = 5), and AI as a clinical decision support tool (n = 9). AI has been used as clinical decision support tools to augment treatment decisions for breast cancer and in multidisciplinary tumor board settings. Overall, AI applications demonstrated improved accuracy and efficiency; however, most articles did not report patient-centric clinical outcomes. CONCLUSIONS AI applications in breast cancer care show promise in enhancing diagnostic accuracy and treatment planning. However, persistent challenges in AI adoption, such as data quality, algorithm transparency, and resource disparities, must be addressed to advance the field.
Collapse
Affiliation(s)
- Jolene Li Ling Chia
- NUS Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Dr. S117597, Singapore 119077, Singapore (G.S.H.)
| | - George Shiyao He
- NUS Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Dr. S117597, Singapore 119077, Singapore (G.S.H.)
| | - Kee Yuen Ngiam
- Department of Surgery, National University Hospital, Singapore 119074, Singapore; (K.Y.N.); (M.H.)
- Saw Swee Hock School of Public Health, National University of Singapore and National University Health System, 12 Science Drive 2, #10-01, Singapore 117549, Singapore
| | - Mikael Hartman
- Department of Surgery, National University Hospital, Singapore 119074, Singapore; (K.Y.N.); (M.H.)
- Saw Swee Hock School of Public Health, National University of Singapore and National University Health System, 12 Science Drive 2, #10-01, Singapore 117549, Singapore
| | - Qin Xiang Ng
- Saw Swee Hock School of Public Health, National University of Singapore and National University Health System, 12 Science Drive 2, #10-01, Singapore 117549, Singapore
- SingHealth Duke-NUS Global Health Institute, Singapore 169857, Singapore
| | - Serene Si Ning Goh
- Department of Surgery, National University Hospital, Singapore 119074, Singapore; (K.Y.N.); (M.H.)
- Saw Swee Hock School of Public Health, National University of Singapore and National University Health System, 12 Science Drive 2, #10-01, Singapore 117549, Singapore
| |
Collapse
|
31
|
Sasseville M, Ouellet S, Rhéaume C, Sahlia M, Couture V, Després P, Paquette JS, Darmon D, Bergeron F, Gagnon MP. Bias Mitigation in Primary Health Care Artificial Intelligence Models: Scoping Review. J Med Internet Res 2025; 27:e60269. [PMID: 39773888 PMCID: PMC11751650 DOI: 10.2196/60269] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2024] [Revised: 09/26/2024] [Accepted: 11/07/2024] [Indexed: 01/11/2025] Open
Abstract
BACKGROUND Artificial intelligence (AI) predictive models in primary health care have the potential to enhance population health by rapidly and accurately identifying individuals who should receive care and health services. However, these models also carry the risk of perpetuating or amplifying existing biases toward diverse groups. We identified a gap in the current understanding of strategies used to assess and mitigate bias in primary health care algorithms related to individuals' personal or protected attributes. OBJECTIVE This study aimed to describe the attempts, strategies, and methods used to mitigate bias in AI models within primary health care, to identify the diverse groups or protected attributes considered, and to evaluate the results of these approaches on both bias reduction and AI model performance. METHODS We conducted a scoping review following Joanna Briggs Institute (JBI) guidelines, searching Medline (Ovid), CINAHL (EBSCO), PsycINFO (Ovid), and Web of Science databases for studies published between January 1, 2017, and November 15, 2022. Pairs of reviewers independently screened titles and abstracts, applied selection criteria, and performed full-text screening. Discrepancies regarding study inclusion were resolved by consensus. Following reporting standards for AI in health care, we extracted data on study objectives, model features, targeted diverse groups, mitigation strategies used, and results. Using the mixed methods appraisal tool, we appraised the quality of the studies. RESULTS After removing 585 duplicates, we screened 1018 titles and abstracts. From the remaining 189 full-text articles, we included 17 studies. The most frequently investigated protected attributes were race (or ethnicity), examined in 12 of the 17 studies, and sex (often identified as gender), typically classified as "male versus female" in 10 of the studies. We categorized bias mitigation approaches into four clusters: (1) modifying existing AI models or datasets, (2) sourcing data from electronic health records, (3) developing tools with a "human-in-the-loop" approach, and (4) identifying ethical principles for informed decision-making. Algorithmic preprocessing methods, such as relabeling and reweighing data, along with natural language processing techniques that extract data from unstructured notes, showed the greatest potential for bias mitigation. Other methods aimed at enhancing model fairness included group recalibration and the application of the equalized odds metric. However, these approaches sometimes exacerbated prediction errors across groups or led to overall model miscalibrations. CONCLUSIONS The results suggest that biases toward diverse groups are more easily mitigated when data are open-sourced, multiple stakeholders are engaged, and during the algorithm's preprocessing stage. Further empirical studies that include a broader range of groups, such as Indigenous peoples in Canada, are needed to validate and expand upon these findings. TRIAL REGISTRATION OSF Registry osf.io/9ngz5/; https://osf.io/9ngz5/. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID) RR2-10.2196/46684.
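Two of the mitigation techniques named in this abstract, reweighing during preprocessing and the equalized odds metric, can be illustrated with a short, self-contained sketch on synthetic data (not drawn from any of the reviewed studies); the protected attribute, labels, and predictions below are random placeholders, and a binary protected attribute is assumed.

```python
# Minimal sketch (not from the reviewed studies): Kamiran-Calders reweighing
# weights and an equalized-odds gap for a binary classifier, assuming a binary
# protected attribute `group` and binary labels/predictions.
import numpy as np

rng = np.random.default_rng(0)
group = rng.integers(0, 2, 1000)          # hypothetical protected attribute
y_true = rng.integers(0, 2, 1000)         # hypothetical outcome labels
y_pred = np.where(rng.random(1000) < 0.8, y_true, 1 - y_true)  # noisy predictions

def reweighing_weights(group, y):
    """Preprocessing weights w(s, y) = P(S=s) * P(Y=y) / P(S=s, Y=y)."""
    w = np.empty(len(y), dtype=float)
    for s in np.unique(group):
        for c in np.unique(y):
            mask = (group == s) & (y == c)
            w[mask] = (np.mean(group == s) * np.mean(y == c)) / mask.mean()
    return w

def equalized_odds_gap(y_true, y_pred, group):
    """Max between-group difference in true positive rate and false positive rate."""
    gaps = []
    for positive in (1, 0):  # condition on y_true == 1 for TPR, y_true == 0 for FPR
        rates = []
        for s in np.unique(group):
            mask = (group == s) & (y_true == positive)
            rates.append(np.mean(y_pred[mask] == 1))
        gaps.append(abs(rates[0] - rates[1]))
    return max(gaps)

weights = reweighing_weights(group, y_true)   # could be passed as sample_weight when training
print("equalized-odds gap:", round(equalized_odds_gap(y_true, y_pred, group), 3))
```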
Collapse
Affiliation(s)
- Maxime Sasseville
- Faculté des sciences infirmières, Université Laval, Québec, QC, Canada
- Vitam Research Center on Sustainable Health, Québec, QC, Canada
| | - Steven Ouellet
- Faculté des sciences infirmières, Université Laval, Québec, QC, Canada
| | - Caroline Rhéaume
- Vitam Research Center on Sustainable Health, Québec, QC, Canada
- Département de médecine familiale et de médecine d'urgence de la Faculté de médecine, Université Laval, Québec, QC, Canada
- Research Center of Quebec Heart and Lungs Institute, Québec, QC, Canada
| | - Malek Sahlia
- École Nationale des Sciences de l'Informatique, Université de La Manouba, La Manouba, Tunisia
| | - Vincent Couture
- Faculté des sciences infirmières, Université Laval, Québec, QC, Canada
| | - Philippe Després
- Département de physique, de génie physique et d'optique de la Faculté des sciences et de génie, Université Laval, Québec, QC, Canada
| | - Jean-Sébastien Paquette
- Vitam Research Center on Sustainable Health, Québec, QC, Canada
- Département de médecine familiale et de médecine d'urgence de la Faculté de médecine, Université Laval, Québec, QC, Canada
| | - David Darmon
- Risques, Epidémiologie, Territoires, Informations, Education et Santé. Département d'enseignement et de recherche en médecine générale, Université Côte d'Azur, Nice, France
| | - Frédéric Bergeron
- Direction des services-conseils de la Bibliothèque, Université Laval, Québec, QC, Canada
| | - Marie-Pierre Gagnon
- Faculté des sciences infirmières, Université Laval, Québec, QC, Canada
- Vitam Research Center on Sustainable Health, Québec, QC, Canada
| |
Collapse
|
32
|
Li F, Wang S, Gao Z, Qing M, Pan S, Liu Y, Hu C. Harnessing artificial intelligence in sepsis care: advances in early detection, personalized treatment, and real-time monitoring. Front Med (Lausanne) 2025; 11:1510792. [PMID: 39835096 PMCID: PMC11743359 DOI: 10.3389/fmed.2024.1510792] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2024] [Accepted: 12/10/2024] [Indexed: 01/22/2025] Open
Abstract
Sepsis remains a leading cause of morbidity and mortality worldwide due to its rapid progression and heterogeneous nature. This review explores the potential of Artificial Intelligence (AI) to transform sepsis management, from early detection to personalized treatment and real-time monitoring. AI, particularly through machine learning (ML) techniques such as random forest models and deep learning algorithms, has shown promise in analyzing electronic health record (EHR) data to identify patterns that enable early sepsis detection. For instance, random forest models have demonstrated high accuracy in predicting sepsis onset in intensive care unit (ICU) patients, while deep learning approaches have been applied to recognize complications such as sepsis-associated acute respiratory distress syndrome (ARDS). Personalized treatment plans developed through AI algorithms predict patient-specific responses to therapies, optimizing therapeutic efficacy and minimizing adverse effects. AI-driven continuous monitoring systems, including wearable devices, provide real-time predictions of sepsis-related complications, enabling timely interventions. Beyond these advancements, AI enhances diagnostic accuracy, predicts long-term outcomes, and supports dynamic risk assessment in clinical settings. However, ethical challenges, including data privacy concerns and algorithmic biases, must be addressed to ensure fair and effective implementation. The significance of this review lies in addressing the current limitations in sepsis management and highlighting how AI can overcome these hurdles. By leveraging AI, healthcare providers can significantly enhance diagnostic accuracy, optimize treatment protocols, and improve overall patient outcomes. Future research should focus on refining AI algorithms with diverse datasets, integrating emerging technologies, and fostering interdisciplinary collaboration to address these challenges and realize AI's transformative potential in sepsis care.
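To make the random-forest-on-EHR-data approach described above concrete, the sketch below trains a random forest on synthetic, EHR-style vital-sign features and reports an AUROC; the feature names, data, and label rule are hypothetical and are not taken from any model cited in the review.

```python
# Illustrative sketch only: a random forest on synthetic EHR-style features for
# sepsis-onset prediction, in the spirit of the models described in the review.
# Feature names and data are hypothetical, not from any cited study.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 2000
X = pd.DataFrame({
    "heart_rate": rng.normal(90, 15, n),
    "resp_rate": rng.normal(20, 5, n),
    "temperature": rng.normal(37.5, 0.8, n),
    "wbc_count": rng.normal(11, 4, n),
    "lactate": rng.normal(2.0, 1.0, n),
})
# Synthetic label loosely tied to tachycardia and lactate, for demonstration only.
risk = 0.03 * (X["heart_rate"] - 90) + 0.8 * (X["lactate"] - 2.0)
y = (risk + rng.normal(0, 1, n) > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)
clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_train, y_train)
probs = clf.predict_proba(X_test)[:, 1]
print("AUROC:", round(roc_auc_score(y_test, probs), 3))
```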
Collapse
Affiliation(s)
- Fang Li
- Department of General Surgery, Chongqing General Hospital, Chongqing, China
| | - Shengguo Wang
- Department of Stomatology, The Second Affiliated Hospital of Chongqing Medical University, Chongqing, China
| | - Zhi Gao
- Department of Stomatology, The Second Affiliated Hospital of Chongqing Medical University, Chongqing, China
| | - Maofeng Qing
- Department of Stomatology, The Second Affiliated Hospital of Chongqing Medical University, Chongqing, China
| | - Shan Pan
- Department of Stomatology, The Second Affiliated Hospital of Chongqing Medical University, Chongqing, China
| | - Yingying Liu
- Department of Stomatology, The Second Affiliated Hospital of Chongqing Medical University, Chongqing, China
| | - Chengchen Hu
- Department of Stomatology, The Second Affiliated Hospital of Chongqing Medical University, Chongqing, China
| |
Collapse
|
33
|
Mourid MR, Irfan H, Oduoye MO. Artificial Intelligence in Pediatric Epilepsy Detection: Balancing Effectiveness With Ethical Considerations for Welfare. Health Sci Rep 2025; 8:e70372. [PMID: 39846037 PMCID: PMC11751886 DOI: 10.1002/hsr2.70372] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2024] [Revised: 11/22/2024] [Accepted: 01/03/2025] [Indexed: 01/24/2025] Open
Abstract
BACKGROUND AND AIM Epilepsy is a major neurological challenge, especially for pediatric populations. It profoundly impacts both developmental progress and quality of life in affected children. With the advent of artificial intelligence (AI), there's a growing interest in leveraging its capabilities to improve the diagnosis and management of pediatric epilepsy. This review aims to assess the effectiveness of AI in pediatric epilepsy detection while considering the ethical implications surrounding its implementation. METHODOLOGY A comprehensive systematic review was conducted across multiple databases including PubMed, EMBASE, Google Scholar, Scopus, and Medline. Search terms encompassed "pediatric epilepsy," "artificial intelligence," "machine learning," "ethical considerations," and "data security." Publications from the past decade were scrutinized for methodological rigor, with a focus on studies evaluating AI's efficacy in pediatric epilepsy detection and management. RESULTS AI systems have demonstrated strong potential in diagnosing and monitoring pediatric epilepsy, often matching clinical accuracy. For example, AI-driven decision support achieved 93.4% accuracy in diagnosis, closely aligning with expert assessments. Specific methods, like EEG-based AI for detecting interictal discharges, showed high specificity (93.33%-96.67%) and sensitivity (76.67%-93.33%), while neuroimaging approaches using rs-fMRI and DTI reached up to 97.5% accuracy in identifying microstructural abnormalities. Deep learning models, such as CNN-LSTM, have also enhanced seizure detection from video by capturing subtle movement and expression cues. Non-EEG sensor-based methods effectively identified nocturnal seizures, offering promising support for pediatric care. However, ethical considerations around privacy, data security, and model bias remain crucial for responsible AI integration. CONCLUSION While AI holds immense potential to enhance pediatric epilepsy management, ethical considerations surrounding transparency, fairness, and data security must be rigorously addressed. Collaborative efforts among stakeholders are imperative to navigate these ethical challenges effectively, ensuring responsible AI integration and optimizing patient outcomes in pediatric epilepsy care.
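The sensitivity and specificity figures quoted above come directly from a confusion matrix; the sketch below shows the calculation with hypothetical counts (for example, 30 positive and 30 negative EEG segments) chosen only so the results fall within the reported ranges.

```python
# Sketch of how the reported sensitivity/specificity figures are derived from a
# confusion matrix. The counts below are hypothetical, chosen only to land
# inside the ranges quoted in the review.
def sensitivity_specificity(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    return sensitivity, specificity

sens, spec = sensitivity_specificity(tp=28, fn=2, tn=29, fp=1)
print(f"sensitivity = {sens:.2%}, specificity = {spec:.2%}")
# -> sensitivity = 93.33%, specificity = 96.67%
```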
Collapse
Affiliation(s)
| | - Hamza Irfan
- Department of Medicine, Shaikh Khalifa Bin Zayed Al Nahyan Medical and Dental College, Lahore, Pakistan
| | - Malik Olatunde Oduoye
- Department of Research, The Medical Research Circle (MedReC), Goma, Democratic Republic of the Congo
| |
Collapse
|
34
|
Urbina JT, Vu PD, Nguyen MV. Disability Ethics and Education in the Age of Artificial Intelligence: Identifying Ability Bias in ChatGPT and Gemini. Arch Phys Med Rehabil 2025; 106:14-19. [PMID: 39216786 DOI: 10.1016/j.apmr.2024.08.014] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2024] [Revised: 08/17/2024] [Accepted: 08/19/2024] [Indexed: 09/04/2024]
Abstract
OBJECTIVE To identify and quantify ability bias in generative artificial intelligence large language model chatbots, specifically OpenAI's ChatGPT and Google's Gemini. DESIGN Observational study of language usage in generative artificial intelligence models. SETTING Investigation-only browser profile restricted to ChatGPT and Gemini. PARTICIPANTS Each chatbot generated 60 descriptions of people prompted without specified functional status, 30 descriptions of people with a disability, 30 descriptions of patients with a disability, and 30 descriptions of athletes with a disability (N=300). INTERVENTIONS Not applicable. MAIN OUTCOME MEASURES Generated descriptions produced by the models were parsed into words that were linguistically analyzed into favorable qualities or limiting qualities. RESULTS Both large language models significantly underestimated disability in a population of people, and linguistic analysis showed that descriptions of people, patients, and athletes with a disability were generated as having significantly fewer favorable qualities and significantly more limitations than people without a disability in both ChatGPT and Gemini. CONCLUSIONS Generative artificial intelligence chatbots demonstrate quantifiable ability bias and often exclude people with disabilities in their responses. Ethical use of these generative large language model chatbots in medical systems should recognize this limitation, and further consideration should be taken in developing equitable artificial intelligence technologies.
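The study's linguistic analysis amounts to tallying favorable versus limiting descriptors in generated text. The sketch below illustrates one possible lexicon-based tally; the word lists and sample sentence are invented and do not reproduce the authors' coding instrument.

```python
# Hypothetical lexicon-based tally of favorable vs. limiting descriptors;
# the word lists and sample text are illustrative, not the study's instrument.
import re
from collections import Counter

FAVORABLE = {"capable", "independent", "skilled", "active", "resilient"}
LIMITING = {"confined", "dependent", "unable", "struggles", "limited"}

def tally(description: str) -> Counter:
    words = re.findall(r"[a-z]+", description.lower())
    return Counter(
        "favorable" if w in FAVORABLE else "limiting"
        for w in words if w in FAVORABLE | LIMITING
    )

sample = "She is a capable athlete but is often described as limited and dependent."
print(tally(sample))  # Counter({'limiting': 2, 'favorable': 1})
```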
Collapse
Affiliation(s)
- Jacob T Urbina
- Department of Physical Medicine and Rehabilitation, McGovern Medical School, UTHealth Houston, Houston, TX.
| | - Peter D Vu
- Department of Physical Medicine and Rehabilitation, McGovern Medical School, UTHealth Houston, Houston, TX
| | - Michael V Nguyen
- Department of Physical Medicine and Rehabilitation, McGovern Medical School, UTHealth Houston, Houston, TX
| |
Collapse
|
35
|
Levkovich I, Omar M. Evaluation of BERT-Based and Large Language Models for Suicide Detection, Prevention, and Risk Assessment: A Systematic Review. J Med Syst 2024; 48:113. [PMID: 39738935 PMCID: PMC11685247 DOI: 10.1007/s10916-024-02134-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2024] [Accepted: 12/16/2024] [Indexed: 01/02/2025]
Abstract
Suicide constitutes a public health issue of major concern. Ongoing progress in the field of artificial intelligence, particularly in the domain of large language models (LLMs), has played a significant role in the detection, risk assessment, and prevention of suicide. The purpose of this review was to explore the use of LLM tools in various aspects of suicide prevention. PubMed, Embase, Web of Science, Scopus, APA PsycNet, Cochrane Library, and IEEE Xplore were systematically searched for articles published between January 1, 2018, and April 2024. The 29 reviewed studies utilized LLMs such as GPT, Llama, and BERT. We categorized the studies into three main tasks: detecting suicidal ideation or behaviors, assessing the risk of suicidal ideation, and preventing suicide by predicting attempts. Most of the studies demonstrated that these models are highly efficient, often outperforming mental health professionals in early detection and prediction capabilities. Large language models demonstrate significant potential for identifying and detecting suicidal behaviors and for saving lives. Nevertheless, ethical problems still need to be examined, and cooperation with skilled professionals is essential.
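As a rough illustration of how a BERT-style classifier is applied to screening text, the sketch below runs a generic off-the-shelf sentiment checkpoint as a stand-in; it is not a validated suicide-risk model, and any clinical use would require a purpose-built, ethics-approved model.

```python
# Minimal sketch of a BERT-style classifier applied to screening text.
# The checkpoint below is a generic sentiment model used purely as a stand-in;
# a real deployment would use a clinically validated, ethics-approved risk model.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

posts = [
    "I have been feeling hopeless for weeks.",
    "Had a great day at the park with my family.",
]
for post, result in zip(posts, classifier(posts)):
    print(f"{result['label']:>8}  {result['score']:.3f}  {post}")
```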
Collapse
Affiliation(s)
- Inbar Levkovich
- Tel-Hai Academic College, 2208, Qiryat Shemona, Upper Galilee, Israel.
| | - Mahmud Omar
- Faculty of Medicine, Tel-Aviv University, Tel-Aviv, Israel
| |
Collapse
|
36
|
Alfaraj A, Nagai T, AlQallaf H, Lin WS. Race to the Moon or the Bottom? Applications, Performance, and Ethical Considerations of Artificial Intelligence in Prosthodontics and Implant Dentistry. Dent J (Basel) 2024; 13:13. [PMID: 39851589 PMCID: PMC11763855 DOI: 10.3390/dj13010013] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2024] [Revised: 12/09/2024] [Accepted: 12/24/2024] [Indexed: 01/26/2025] Open
Abstract
Objectives: This review aims to explore the applications of artificial intelligence (AI) in prosthodontics and implant dentistry, focusing on its performance outcomes and associated ethical concerns. Materials and Methods: Following the PRISMA guidelines, a search was conducted across databases such as PubMed, Medline, Web of Science, and Scopus. Studies published between January 2022 and May 2024, in English, were considered. The Population (P) included patients or extracted teeth with AI applications in prosthodontics and implant dentistry; the Intervention (I) was AI-based tools; the Comparison (C) was traditional methods, and the Outcome (O) involved AI performance outcomes and ethical considerations. The Newcastle-Ottawa Scale was used to assess the quality and risk of bias in the studies. Results: Out of 3420 initially identified articles, 18 met the inclusion criteria for AI applications in prosthodontics and implant dentistry. The review highlighted AI's significant role in improving diagnostic accuracy, treatment planning, and prosthesis design. AI models demonstrated high accuracy in classifying dental implants and predicting implant outcomes, although limitations were noted in data diversity and model generalizability. Regarding ethical issues, five studies identified concerns such as data privacy, system bias, and the potential replacement of human roles by AI. While patients generally viewed AI positively, dental professionals expressed hesitancy due to a lack of familiarity and regulatory guidelines, highlighting the need for better education and ethical frameworks. Conclusions: AI has the potential to revolutionize prosthodontics and implant dentistry by enhancing treatment accuracy and efficiency. However, there is a pressing need to address ethical issues through comprehensive training and the development of regulatory frameworks. Future research should focus on broadening AI applications and addressing the identified ethical concerns.
Collapse
Affiliation(s)
- Amal Alfaraj
- Department of Prosthodontics and Dental Implantology, College of Dentistry, King Faisal University, Al Ahsa 31982, Saudi Arabia;
- Department of Prosthodontics, Indiana University School of Dentistry, Indianapolis, IN 46202, USA;
| | - Toshiki Nagai
- Department of Prosthodontics, Indiana University School of Dentistry, Indianapolis, IN 46202, USA;
| | - Hawra AlQallaf
- Department of Periodontology, Indiana University School of Dentistry, Indianapolis, IN 46202, USA;
| | - Wei-Shao Lin
- Department of Prosthodontics, Indiana University School of Dentistry, Indianapolis, IN 46202, USA;
| |
Collapse
|
37
|
Willem T, Fritzsche MC, Zimmermann BM, Sierawska A, Breuer S, Braun M, Ruess AK, Bak M, Schönweitz FB, Meier LJ, Fiske A, Tigard D, Müller R, McLennan S, Buyx A. Embedded Ethics in Practice: A Toolbox for Integrating the Analysis of Ethical and Social Issues into Healthcare AI Research. SCIENCE AND ENGINEERING ETHICS 2024; 31:3. [PMID: 39718728 PMCID: PMC11668859 DOI: 10.1007/s11948-024-00523-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/04/2023] [Accepted: 11/07/2024] [Indexed: 12/25/2024]
Abstract
Integrating artificial intelligence (AI) into critical domains such as healthcare holds immense promise. Nevertheless, significant challenges must be addressed to avoid harm, promote the well-being of individuals and societies, and ensure ethically sound and socially just technology development. Innovative approaches like Embedded Ethics, which refers to integrating ethics and social science into technology development based on interdisciplinary collaboration, are emerging to address issues of bias, transparency, misrepresentation, and more. This paper aims to develop this approach further to enable future projects to effectively deploy it. Based on the practical experience of using ethics and social science methodology in interdisciplinary AI-related healthcare consortia, this paper presents several methods that have proven helpful for embedding ethical and social science analysis and inquiry. They include (1) stakeholder analyses, (2) literature reviews, (3) ethnographic approaches, (4) peer-to-peer interviews, (5) focus groups, (6) interviews with affected groups and external stakeholders, (7) bias analyses, (8) workshops, and (9) interdisciplinary results dissemination. We believe that applying Embedded Ethics offers a pathway to stimulate reflexivity, proactively anticipate social and ethical concerns, and foster interdisciplinary inquiry into such concerns at every stage of technology development. This approach can help shape responsible, inclusive, and ethically aware technology innovation in healthcare and beyond.
Collapse
Affiliation(s)
- Theresa Willem
- Institute of History and Ethics in Medicine, Department of Preclinical Medicine, TUM School of Medicine and Health, Technical University of Munich, Ismaninger Straße 22, 81675, Munich, Germany.
- Department of Science, Technology and Society (STS), School of Social Science and Technology, Technical University of Munich, Munich, Germany.
| | - Marie-Christine Fritzsche
- Institute of History and Ethics in Medicine, Department of Preclinical Medicine, TUM School of Medicine and Health, Technical University of Munich, Ismaninger Straße 22, 81675, Munich, Germany
- Department of Science, Technology and Society (STS), School of Social Science and Technology, Technical University of Munich, Munich, Germany
| | - Bettina M Zimmermann
- Institute of History and Ethics in Medicine, Department of Preclinical Medicine, TUM School of Medicine and Health, Technical University of Munich, Ismaninger Straße 22, 81675, Munich, Germany
- Department of Science, Technology and Society (STS), School of Social Science and Technology, Technical University of Munich, Munich, Germany
- Institute of Philosophy & Multidisciplinary Center for Infectious Diseases, University of Bern, Bern, Switzerland
| | - Anna Sierawska
- Institute of History and Ethics in Medicine, Department of Preclinical Medicine, TUM School of Medicine and Health, Technical University of Munich, Ismaninger Straße 22, 81675, Munich, Germany
- Department of Science, Technology and Society (STS), School of Social Science and Technology, Technical University of Munich, Munich, Germany
- TUD Dresden University of Technology, Dresden, Germany
| | - Svenja Breuer
- Department of Science, Technology and Society (STS), School of Social Science and Technology, Technical University of Munich, Munich, Germany
- Department of Economics and Policy, School of Management, Technical University of Munich, Munich, Germany
- Center for Responsible AI Technologies, Technical University of Munich & University of Augsburg & Munich School of Philosophy, Munich, Germany
| | - Maximilian Braun
- Department of Science, Technology and Society (STS), School of Social Science and Technology, Technical University of Munich, Munich, Germany
- Department of Economics and Policy, School of Management, Technical University of Munich, Munich, Germany
| | - Anja K Ruess
- Department of Science, Technology and Society (STS), School of Social Science and Technology, Technical University of Munich, Munich, Germany
- Department of Economics and Policy, School of Management, Technical University of Munich, Munich, Germany
| | - Marieke Bak
- Institute of History and Ethics in Medicine, Department of Preclinical Medicine, TUM School of Medicine and Health, Technical University of Munich, Ismaninger Straße 22, 81675, Munich, Germany
- Department of Science, Technology and Society (STS), School of Social Science and Technology, Technical University of Munich, Munich, Germany
- Amsterdam UMC, Department of Ethics, Law and Humanities, University of Amsterdam, Amsterdam, The Netherlands
| | - Franziska B Schönweitz
- Institute of History and Ethics in Medicine, Department of Preclinical Medicine, TUM School of Medicine and Health, Technical University of Munich, Ismaninger Straße 22, 81675, Munich, Germany
- Department of Science, Technology and Society (STS), School of Social Science and Technology, Technical University of Munich, Munich, Germany
| | - Lukas J Meier
- Institute of History and Ethics in Medicine, Department of Preclinical Medicine, TUM School of Medicine and Health, Technical University of Munich, Ismaninger Straße 22, 81675, Munich, Germany
- Churchill College, University of Cambridge, Cambridge, UK
- Edmond & Lily Safra Center for Ethics, Harvard University, Cambridge, USA
| | - Amelia Fiske
- Institute of History and Ethics in Medicine, Department of Preclinical Medicine, TUM School of Medicine and Health, Technical University of Munich, Ismaninger Straße 22, 81675, Munich, Germany
- Department of Science, Technology and Society (STS), School of Social Science and Technology, Technical University of Munich, Munich, Germany
| | - Daniel Tigard
- Department of Philosophy, University of San Diego, San Diego, USA
- Institute for Experiential AI, Northeastern University, Boston, USA
| | - Ruth Müller
- Department of Science, Technology and Society (STS), School of Social Science and Technology, Technical University of Munich, Munich, Germany
- Department of Economics and Policy, School of Management, Technical University of Munich, Munich, Germany
- Center for Responsible AI Technologies, Technical University of Munich & University of Augsburg & Munich School of Philosophy, Munich, Germany
| | - Stuart McLennan
- Institute of History and Ethics in Medicine, Department of Preclinical Medicine, TUM School of Medicine and Health, Technical University of Munich, Ismaninger Straße 22, 81675, Munich, Germany
- Institute for Biomedical Ethics, University of Basel, Basel, Switzerland
| | - Alena Buyx
- Institute of History and Ethics in Medicine, Department of Preclinical Medicine, TUM School of Medicine and Health, Technical University of Munich, Ismaninger Straße 22, 81675, Munich, Germany
- Department of Science, Technology and Society (STS), School of Social Science and Technology, Technical University of Munich, Munich, Germany
- Center for Responsible AI Technologies, Technical University of Munich & University of Augsburg & Munich School of Philosophy, Munich, Germany
| |
Collapse
|
38
|
Lewandowska M, Street D, Yim J, Jones S, Viney R. Artificial intelligence in radiation therapy treatment planning: A discrete choice experiment. J Med Radiat Sci 2024. [PMID: 39705152 DOI: 10.1002/jmrs.843] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2024] [Accepted: 11/28/2024] [Indexed: 12/22/2024] Open
Abstract
INTRODUCTION The application of artificial intelligence (AI) in radiation therapy holds promise for addressing challenges, such as healthcare staff shortages, increased efficiency and treatment planning variations. Increased AI adoption has the potential to standardise treatment protocols, enhance quality, improve patient outcomes, and reduce costs. However, drawbacks include impacts on employment and algorithmic biases, making it crucial to navigate trade-offs. A discrete choice experiment (DCE) was undertaken to examine the AI-related characteristics radiation oncology professionals think are most important for adoption in radiation therapy treatment planning. METHODS Radiation oncology professionals completed an online discrete choice experiment to express their preferences about AI systems for radiation therapy planning which were described by five attributes, each with 2-4 levels: accuracy, automation, exploratory ability, compatibility with other systems and impact on workload. The survey also included questions about attitudes to AI. Choices were modelled using mixed logit regression. RESULTS The survey was completed by 82 respondents. The results showed they preferred AI systems that offer the largest time saving, and that provide explanations of the AI reasoning (both in-depth and basic). They also favoured systems that provide improved contouring precision compared with manual systems. Respondents emphasised the importance of AI systems being cost-effective, while also recognising AI's impact on professional roles, responsibilities, and service delivery. CONCLUSIONS This study provides important information about radiation oncology professionals' priorities for AI in treatment planning. The findings from this study can be used to inform future research on economic evaluations and management perspectives of AI-driven technologies in radiation therapy.
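Choices in such an experiment are typically modeled with logit-family models. The sketch below simulates choice tasks and estimates a plain conditional logit by maximum likelihood; a mixed logit, as used in the study, would additionally let coefficients vary across respondents. All attributes and "true" coefficients are invented.

```python
# Simplified conditional logit on simulated discrete-choice data.
# Attribute names and "true" coefficients are invented for illustration; a mixed
# logit (as in the study) would allow coefficients to vary by respondent.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_tasks, n_alts, n_attrs = 2000, 2, 3            # choice tasks, alternatives, attributes
X = rng.normal(size=(n_tasks, n_alts, n_attrs))  # e.g. time saving, accuracy, explanation
beta_true = np.array([1.0, 0.6, 0.4])

utility = X @ beta_true + rng.gumbel(size=(n_tasks, n_alts))
choice = utility.argmax(axis=1)                  # index of the chosen alternative

def neg_loglik(beta):
    v = X @ beta                                 # deterministic utilities
    logp = v - np.log(np.exp(v).sum(axis=1, keepdims=True))
    return -logp[np.arange(n_tasks), choice].sum()

fit = minimize(neg_loglik, x0=np.zeros(n_attrs), method="BFGS")
print("estimated coefficients:", np.round(fit.x, 2))  # close to [1.0, 0.6, 0.4]
```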
Collapse
Affiliation(s)
- Milena Lewandowska
- Centre for Health Economics Research and Evaluation, University of Technology Sydney, Sydney, New South Wales, Australia
| | - Deborah Street
- Centre for Health Economics Research and Evaluation, University of Technology Sydney, Sydney, New South Wales, Australia
| | - Jackie Yim
- Centre for Health Economics Research and Evaluation, University of Technology Sydney, Sydney, New South Wales, Australia
- Radiation Oncology, Royal North Shore Hospital, Sydney, New South Wales, Australia
| | - Scott Jones
- Radiation Oncology, Princess Alexandra Hospital, Raymond Terrace, Brisbane, Queensland, Australia
| | - Rosalie Viney
- Centre for Health Economics Research and Evaluation, University of Technology Sydney, Sydney, New South Wales, Australia
| |
Collapse
|
39
|
Kato Y, Ushida K, Momosaki R. Evaluating the Accuracy of ChatGPT in the Japanese Board-Certified Physiatrist Examination. Cureus 2024; 16:e76214. [PMID: 39845219 PMCID: PMC11753804 DOI: 10.7759/cureus.76214] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 12/21/2024] [Indexed: 01/24/2025] Open
Abstract
Background Generative artificial intelligence (AI), such as Chat Generative Pre-trained Transformer (ChatGPT), has shown potential in various medical applications, including answering licensing examination questions. However, its performance in rehabilitation medicine remains underexplored. This study aimed to evaluate the accuracy of ChatGPT4o in answering questions from the Japanese Board-Certified Physiatrist Examination and assess its potential as an educational and clinical support tool. Methods This study assessed the performance of ChatGPT4o on questions from the 2021-2023 Japanese Board-Certified Physiatrist Examinations. Questions were categorized into text- and image-based types, and correct response rates were calculated. Errors were classified as informational, logical, or statistical. Results ChatGPT4o achieved correct response rates of 79.1% in 2021, 80.0% in 2022, and 86.3% in 2023, with an overall accuracy of 81.8%. The AI performed better on text-based (83.0%) than on image-based (70.0%) questions. Most errors (92.8%) were related to information. Conclusions ChatGPT4o demonstrated high accuracy in the Japanese Board-Certified Physiatrist Examination, particularly for text-based questions, suggesting its potential as an educational tool. However, limitations in image interpretation and specialized topics indicate the need for further improvements for clinical application.
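For illustration, per-type correct response rates of the kind reported above can be derived as follows; the item-level results are hypothetical and do not correspond to the actual examination items.

```python
# Hypothetical item-level results used to show how per-type accuracy is derived;
# counts do not correspond to the actual examination items.
import pandas as pd

items = pd.DataFrame({
    "qtype":   ["text"] * 8 + ["image"] * 4,
    "correct": [1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0],
})
by_type = items.groupby("qtype")["correct"].agg(["mean", "count"])
print(by_type)                      # mean = accuracy per question type
print(f"overall: {items['correct'].mean():.1%}")
```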
Collapse
Affiliation(s)
- Yuki Kato
- Department of Rehabilitation, Saiseikai Meiwa Hospital, Meiwa, JPN
- Department of Rehabilitation Medicine, Mie University Graduate School of Medicine, Tsu, JPN
| | - Kenta Ushida
- Department of Rehabilitation Medicine, Mie University Graduate School of Medicine, Tsu, JPN
- Department of Rehabilitation, Mie University Hospital, Tsu, JPN
| | - Ryo Momosaki
- Department of Rehabilitation Medicine, Mie University Graduate School of Medicine, Tsu, JPN
- Department of Rehabilitation, Mie University Hospital, Tsu, JPN
| |
Collapse
|
40
|
Asediya VS, Anjaria PA, Mathakiya RA, Koringa PG, Nayak JB, Bisht D, Fulmali D, Patel VA, Desai DN. Vaccine development using artificial intelligence and machine learning: A review. Int J Biol Macromol 2024; 282:136643. [PMID: 39426778 DOI: 10.1016/j.ijbiomac.2024.136643] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2024] [Revised: 09/30/2024] [Accepted: 10/15/2024] [Indexed: 10/21/2024]
Abstract
The COVID-19 pandemic has underscored the critical importance of effective vaccines, yet their development is a challenging and demanding process. It requires identifying antigens that elicit protective immunity, selecting adjuvants that enhance immunogenicity, and designing delivery systems that ensure optimal efficacy. Artificial intelligence (AI) can facilitate this process by using machine learning methods to analyze large and diverse datasets, suggest novel vaccine candidates, and refine their design and predict their performance. This review explores how AI can be applied to various aspects of vaccine development, such as predicting immune response from protein sequences, discovering adjuvants, optimizing vaccine doses, modeling vaccine supply chains, and predicting protein structures. We also address the challenges and ethical issues that emerge from the use of AI in vaccine development, such as data privacy, algorithmic bias, and health data sensitivity. We contend that AI has immense potential to accelerate vaccine development and respond to future pandemics, but it also requires careful attention to the quality and validity of the data and methods used.
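As a toy illustration of the sequence-based prediction mentioned above, the sketch below one-hot encodes short peptides and fits a classifier on synthetic labels; the sequences, labels, and model are invented and do not constitute a validated immunogenicity predictor.

```python
# Toy illustration of sequence-based prediction: one-hot encode short peptides and
# fit a classifier on synthetic labels. This is not a validated immunogenicity model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def one_hot(peptide: str) -> np.ndarray:
    vec = np.zeros((len(peptide), len(AMINO_ACIDS)))
    for pos, aa in enumerate(peptide):
        vec[pos, AA_INDEX[aa]] = 1.0
    return vec.ravel()

peptides = ["ACDKLMNPQ", "GHIKLMNPQ", "WYACDEFGH", "KLMNPQRST", "ACDEFGHIK", "PQRSTVWYA"]
labels   = [1, 1, 0, 1, 0, 0]   # synthetic "immunogenic" flags

X = np.vstack([one_hot(p) for p in peptides])
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)
print(model.predict_proba(one_hot("ACDKLMNPQ").reshape(1, -1)))
```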
Collapse
Affiliation(s)
| | | | | | | | | | - Deepanker Bisht
- Indian Veterinary Research Institute, Izatnagar, U.P., India
| | | | | | | |
Collapse
|
41
|
Heydari S, Masoumi N, Esmaeeli E, Ayyoubzadeh SM, Ghorbani-Bidkorpeh F, Ahmadi M. Artificial intelligence in nanotechnology for treatment of diseases. J Drug Target 2024; 32:1247-1266. [PMID: 39155708 DOI: 10.1080/1061186x.2024.2393417] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2024] [Revised: 07/06/2024] [Accepted: 08/11/2024] [Indexed: 08/20/2024]
Abstract
Nano-based drug delivery systems (DDSs) have demonstrated the ability to address challenges posed by therapeutic agents, enhancing drug efficiency and reducing side effects. Various nanoparticles (NPs) are utilised as DDSs with unique characteristics, leading to diverse applications across different diseases. However, the complexity, cost and time-consuming nature of laboratory processes, the large volume of data, and the challenges in data analysis have prompted the integration of artificial intelligence (AI) tools. AI has been employed in designing, characterising and manufacturing drug delivery nanosystems, as well as in predicting treatment efficiency. AI's potential to personalise drug delivery based on individual patient factors, optimise formulation design and predict drug properties has been highlighted. By leveraging AI and large datasets, developing safe and effective DDSs can be accelerated, ultimately improving patient outcomes and advancing pharmaceutical sciences. This review article investigates the role of AI in the development of nano-DDSs, with a focus on their therapeutic applications. The use of AI in DDSs has the potential to revolutionise treatment optimisation and improve patient care.
Collapse
Affiliation(s)
- Soroush Heydari
- Department of Health Information Management, School of Allied Medical Sciences, Tehran University of Medical Sciences, Tehran, Iran
| | - Niloofar Masoumi
- Department of Pharmaceutics and Pharmaceutical Nanotechnology, School of Pharmacy, Shahid Beheshti University of Medical Sciences, Tehran, Iran
| | - Erfan Esmaeeli
- Department of Health Information Management, School of Allied Medical Sciences, Tehran University of Medical Sciences, Tehran, Iran
| | - Seyed Mohammad Ayyoubzadeh
- Department of Health Information Management, School of Allied Medical Sciences, Tehran University of Medical Sciences, Tehran, Iran
- Health Information Management Research Center, Tehran University of Medical Sciences, Tehran, Iran
| | - Fatemeh Ghorbani-Bidkorpeh
- Department of Pharmaceutics and Pharmaceutical Nanotechnology, School of Pharmacy, Shahid Beheshti University of Medical Sciences, Tehran, Iran
| | - Mahnaz Ahmadi
- Department of Tissue Engineering and Applied Cell Sciences, School of Advanced Technologies in Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Medical Nanotechnology and Tissue Engineering Research Center, Shahid Beheshti University of Medical Sciences, Tehran, Iran
| |
Collapse
|
42
|
Mendizabal-Ruiz G, Paredes O, Álvarez Á, Acosta-Gómez F, Hernández-Morales E, González-Sandoval J, Mendez-Zavala C, Borrayo E, Chavez-Badiola A. Artificial Intelligence in Human Reproduction. Arch Med Res 2024; 55:103131. [PMID: 39615376 DOI: 10.1016/j.arcmed.2024.103131] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2024] [Revised: 11/04/2024] [Accepted: 11/12/2024] [Indexed: 01/04/2025]
Abstract
The use of artificial intelligence (AI) in human reproduction is a rapidly evolving field with both exciting possibilities and ethical considerations. This technology has the potential to improve success rates and reduce the emotional and financial burden of infertility. However, it also raises ethical and privacy concerns. This paper presents an overview of the current and potential applications of AI in human reproduction. It explores the use of AI in various aspects of reproductive medicine, including fertility tracking, assisted reproductive technologies, management of pregnancy complications, and laboratory automation. In addition, we discuss the need for robust ethical frameworks and regulations to ensure the responsible and equitable use of AI in reproductive medicine.
Collapse
Affiliation(s)
- Gerardo Mendizabal-Ruiz
- Conceivable Life Sciences, Department of Research and Development, Guadalajara, Jalisco, Mexico; Laboratorio de Percepción Computacional, Departamento de Bioingeniería Traslacional, Universidad de Guadalajara, Guadalajara, Jalisco, Mexico.
| | - Omar Paredes
- Laboratorio de Innovación Biodigital, Departamento de Bioingeniería Traslacional, Universidad de Guadalajara, Guadalajara, Jalisco, Mexico; IVF 2.0 Limited, Department of Research and Development, London, UK
| | - Ángel Álvarez
- Conceivable Life Sciences, Department of Research and Development, Guadalajara, Jalisco, Mexico; Laboratorio de Percepción Computacional, Departamento de Bioingeniería Traslacional, Universidad de Guadalajara, Guadalajara, Jalisco, Mexico
| | - Fátima Acosta-Gómez
- Conceivable Life Sciences, Department of Research and Development, Guadalajara, Jalisco, Mexico; Laboratorio de Percepción Computacional, Departamento de Bioingeniería Traslacional, Universidad de Guadalajara, Guadalajara, Jalisco, Mexico
| | - Estefanía Hernández-Morales
- Conceivable Life Sciences, Department of Research and Development, Guadalajara, Jalisco, Mexico; Laboratorio de Percepción Computacional, Departamento de Bioingeniería Traslacional, Universidad de Guadalajara, Guadalajara, Jalisco, Mexico
| | - Josué González-Sandoval
- Laboratorio de Percepción Computacional, Departamento de Bioingeniería Traslacional, Universidad de Guadalajara, Guadalajara, Jalisco, Mexico
| | - Celina Mendez-Zavala
- Laboratorio de Percepción Computacional, Departamento de Bioingeniería Traslacional, Universidad de Guadalajara, Guadalajara, Jalisco, Mexico
| | - Ernesto Borrayo
- Laboratorio de Innovación Biodigital, Departamento de Bioingeniería Traslacional, Universidad de Guadalajara, Guadalajara, Jalisco, Mexico
| | - Alejandro Chavez-Badiola
- Conceivable Life Sciences, Department of Research and Development, Guadalajara, Jalisco, Mexico; IVF 2.0 Limited, Department of Research and Development, London, UK; New Hope Fertility Center, Deparment of Research, Ciudad de México, Mexico
| |
Collapse
|
43
|
Collins BX, Bélisle-Pipon JC, Evans BJ, Ferryman K, Jiang X, Nebeker C, Novak L, Roberts K, Were M, Yin Z, Ravitsky V, Coco J, Hendricks-Sturrup R, Williams I, Clayton EW, Malin BA. Addressing ethical issues in healthcare artificial intelligence using a lifecycle-informed process. JAMIA Open 2024; 7:ooae108. [PMID: 39553826 PMCID: PMC11565898 DOI: 10.1093/jamiaopen/ooae108] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2024] [Revised: 08/19/2024] [Accepted: 10/04/2024] [Indexed: 11/19/2024] Open
Abstract
Objectives Artificial intelligence (AI) proceeds through an iterative and evaluative process of development, use, and refinement which may be characterized as a lifecycle. Within this context, stakeholders can vary in their interests and perceptions of the ethical issues associated with this rapidly evolving technology in ways that can fail to identify and avert adverse outcomes. Identifying issues throughout the AI lifecycle in a systematic manner can facilitate better-informed ethical deliberation. Materials and Methods We analyzed lifecycles from the current literature on ethical issues of AI in healthcare to identify themes, then consolidated these themes into a more comprehensive lifecycle. We then considered the potential benefits and harms of AI through this lifecycle to identify ethical questions that can arise at each step and to identify where conflicts and errors could arise in ethical analysis. We illustrated the approach in 3 case studies that highlight how different ethical dilemmas arise at different points in the lifecycle. Results, Discussion, and Conclusion Through case studies, we show how a systematic lifecycle-informed approach to the ethical analysis of AI enables mapping of the effects of AI onto different steps to guide deliberations on benefits and harms. The lifecycle-informed approach has broad applicability to different stakeholders and can facilitate communication on ethical issues for patients, healthcare professionals, research participants, and other stakeholders.
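A lifecycle-informed checklist of this kind can be represented as a simple mapping from lifecycle stages to ethical questions, as in the sketch below; the stage names and questions paraphrase the general approach and are not the authors' consolidated lifecycle.

```python
# A toy data structure mapping AI-lifecycle stages to example ethical questions.
# Stage names and questions paraphrase the general approach; they are not the
# authors' consolidated lifecycle.
LIFECYCLE_ETHICS = {
    "problem formulation": ["Who defines the clinical need, and who is left out?"],
    "data acquisition":    ["Is the training population representative?",
                            "Was consent obtained for secondary use?"],
    "model development":   ["Are performance gaps across subgroups measured?"],
    "deployment":          ["Who is accountable for model-informed decisions?"],
    "monitoring":          ["How are post-deployment drift and harm detected?"],
}

for stage, questions in LIFECYCLE_ETHICS.items():
    print(f"\n[{stage}]")
    for q in questions:
        print(f"  - {q}")
```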
Collapse
Affiliation(s)
- Benjamin X Collins
- Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN 37203, United States
- Center for Biomedical Ethics and Society, Vanderbilt University Medical Center, Nashville, TN 37203, United States
- Department of Medicine, Vanderbilt University Medical Center, Nashville, TN 37232, United States
| | | | - Barbara J Evans
- Levin College of Law, University of Florida, Gainesville, FL 32611, United States
- Herbert Wertheim College of Engineering, University of Florida, Gainesville, FL 32611, United States
| | - Kadija Ferryman
- Berman Institute of Bioethics, Johns Hopkins University, Baltimore, MD 21205, United States
| | - Xiaoqian Jiang
- McWilliams School of Biomedical Informatics, UTHealth Houston, Houston, TX 77030, United States
| | - Camille Nebeker
- Herbert Wertheim School of Public Health and Human Longevity Science, University of California, San Diego, La Jolla, CA 92093, United States
| | - Laurie Novak
- Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN 37203, United States
| | - Kirk Roberts
- McWilliams School of Biomedical Informatics, UTHealth Houston, Houston, TX 77030, United States
| | - Martin Were
- Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN 37203, United States
- Department of Medicine, Vanderbilt University Medical Center, Nashville, TN 37232, United States
| | - Zhijun Yin
- Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN 37203, United States
- Department of Computer Science, Vanderbilt University, Nashville, TN 37212, United States
| | | | - Joseph Coco
- Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN 37203, United States
| | - Rachele Hendricks-Sturrup
- National Alliance against Disparities in Patient Health, Woodbridge, VA 22191, United States
- Margolis Center for Health Policy, Duke University, Washington, DC 20004, United States
| | - Ishan Williams
- School of Nursing, University of Virginia, Charlottesville, VA 22903, United States
| | - Ellen W Clayton
- Center for Biomedical Ethics and Society, Vanderbilt University Medical Center, Nashville, TN 37203, United States
- Law School, Vanderbilt University, Nashville, TN 37203, United States
| | - Bradley A Malin
- Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN 37203, United States
- Department of Computer Science, Vanderbilt University, Nashville, TN 37212, United States
| | | |
Collapse
|
44
|
Lyu X, Li J, Li S. Approaches to Reach Trustworthy Patient Education: A Narrative Review. Healthcare (Basel) 2024; 12:2322. [PMID: 39684944 PMCID: PMC11641738 DOI: 10.3390/healthcare12232322] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2024] [Revised: 11/15/2024] [Accepted: 11/15/2024] [Indexed: 12/18/2024] Open
Abstract
BACKGROUND Patient education is a cornerstone of modern healthcare. Health literacy, which is strengthened by effective patient education, improves patients' health-related quality of life and health outcomes. Inadequate patient education competency among healthcare providers prompted this review to summarize common approaches and recent advancements. METHODS This narrative review summarizes common approaches and recent advancements in patient education, their relation to health literacy, and their strengths, limitations, and practical issues. RESULTS This review highlighted the multifaceted approaches to patient education, emphasizing the importance of tailoring methods to meet the diverse needs of patients. By integrating various strategies, including intrapersonal, interpersonal, and societal/community-level interventions, healthcare providers can create a more comprehensive educational experience that addresses the complexities of patient needs while improving patients' health literacy. With the rise of digital media and artificial intelligence, there is an increasing need for innovative educational resources that can effectively reach and engage patients. Ongoing research and collaboration among healthcare professionals and policymakers will be essential to refine educational strategies and adapt to emerging challenges. It is also essential to remain vigilant about potential conflicts of interest that may compromise the integrity of educational content. CONCLUSION Effective patient education empowers individuals and contributes to a healthier society by fostering informed decision-making and encouraging proactive health management.
Collapse
Affiliation(s)
- Xiafei Lyu
- Department of Radiology, West China Hospital, Sichuan University, Chengdu 610041, China;
| | - Jing Li
- Department of Endocrinology and Metabolism, MAGIC China Centre, Chinese Evidence-Based Medicine Centre, West China Hospital, Sichuan University, Chengdu 610041, China
| | - Sheyu Li
- Department of Endocrinology and Metabolism, MAGIC China Centre, Chinese Evidence-Based Medicine Centre, West China Hospital, Sichuan University, Chengdu 610041, China
| |
Collapse
|
45
|
Garg P, Singhal G, Kulkarni P, Horne D, Salgia R, Singhal SS. Artificial Intelligence-Driven Computational Approaches in the Development of Anticancer Drugs. Cancers (Basel) 2024; 16:3884. [PMID: 39594838 PMCID: PMC11593155 DOI: 10.3390/cancers16223884] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2024] [Revised: 11/13/2024] [Accepted: 11/16/2024] [Indexed: 11/28/2024] Open
Abstract
The integration of AI has revolutionized cancer drug development, transforming the landscape of drug discovery through sophisticated computational techniques. AI-powered models and algorithms have enhanced computer-aided drug design (CADD), offering unprecedented precision in identifying potential anticancer compounds. Traditionally, cancer drug design has been a complex, resource-intensive process, but AI introduces new opportunities to accelerate discovery, reduce costs, and optimize efficiency. This manuscript delves into the transformative applications of AI-driven methodologies in predicting and developing anticancer drugs, critically evaluating their potential to reshape the future of cancer therapeutics while addressing their challenges and limitations.
Collapse
Affiliation(s)
- Pankaj Garg
- Department of Chemistry, GLA University, Mathura 281406, Uttar Pradesh, India
| | - Gargi Singhal
- Department of Medical Sciences, S.N. Medical College, Agra 282002, Uttar Pradesh, India
| | - Prakash Kulkarni
- Department of Medical Oncology & Therapeutics Research, Beckman Research Institute of City of Hope, Comprehensive Cancer Center and National Medical Center, Duarte, CA 91010, USA
| | - David Horne
- Department of Molecular Medicine, Beckman Research Institute of City of Hope, Comprehensive Cancer Center and National Medical Center, Duarte, CA 91010, USA
| | - Ravi Salgia
- Department of Medical Oncology & Therapeutics Research, Beckman Research Institute of City of Hope, Comprehensive Cancer Center and National Medical Center, Duarte, CA 91010, USA
| | - Sharad S. Singhal
- Department of Medical Oncology & Therapeutics Research, Beckman Research Institute of City of Hope, Comprehensive Cancer Center and National Medical Center, Duarte, CA 91010, USA
| |
Collapse
|
46
|
Badawy W, Zinhom H, Shaban M. Navigating ethical considerations in the use of artificial intelligence for patient care: A systematic review. Int Nurs Rev 2024. [PMID: 39545614 DOI: 10.1111/inr.13059] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2024] [Accepted: 10/19/2024] [Indexed: 11/17/2024]
Abstract
AIM To explore the ethical considerations and challenges faced by nursing professionals in integrating artificial intelligence (AI) into patient care. BACKGROUND AI's integration into nursing practice enhances clinical decision-making and operational efficiency but raises ethical concerns regarding privacy, accountability, informed consent, and the preservation of human-centered care. METHODS A systematic review was conducted, following Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Thirteen studies were selected from databases including PubMed, Embase, IEEE Xplore, PsycINFO, and CINAHL. Thematic analysis identified key ethical themes related to AI use in nursing. RESULTS The review highlighted critical ethical challenges, such as data privacy and security, accountability for AI-driven decisions, transparency in AI decision-making, and maintaining the human touch in care. The findings underscore the importance of stakeholder engagement, continuous education for nurses, and robust governance frameworks to guide ethical AI implementation in nursing. DISCUSSION The results align with existing literature on AI's ethical complexities in healthcare. Addressing these challenges requires strengthening nursing competencies in AI, advocating for patient-centered AI design, and ensuring that AI integration upholds ethical standards. CONCLUSION Although AI offers significant benefits for nursing practice, it also introduces ethical challenges that must be carefully managed. Enhancing nursing education, promoting stakeholder engagement, and developing comprehensive policies are essential for ethically integrating AI into nursing. IMPLICATIONS FOR NURSING AI can improve clinical decision-making and efficiency, but nurses must actively preserve humanistic care aspects through ongoing education and involvement in AI governance. IMPLICATIONS FOR HEALTH POLICY Establish ethical frameworks and data protection policies tailored to AI in nursing. Support continuous professional development and allocate resources for the ethical integration of AI in healthcare.
Collapse
Affiliation(s)
- Walaa Badawy
- Department of Psychology, College of Education, King Khaled University, Abha, Saudi Arabia
| | - Haithm Zinhom
- Mohammed Bin Zayed University for Humanities, Abu Dhabi, UAE
| | - Mostafa Shaban
- Community Health Nursing Department, College of Nursing, Jouf University, Sakaka, Saudi Arabia
| |
Collapse
|
47
|
Lee C, Vogt KA, Kumar S. Prospects for AI clinical summarization to reduce the burden of patient chart review. Front Digit Health 2024; 6:1475092. [PMID: 39575412 PMCID: PMC11578995 DOI: 10.3389/fdgth.2024.1475092] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2024] [Accepted: 10/22/2024] [Indexed: 11/24/2024] Open
Abstract
Effective summarization of unstructured patient data in electronic health records (EHRs) is crucial for accurate diagnosis and efficient patient care, yet clinicians often struggle with information overload and time constraints. This review dives into recent literature and case studies on both the significant impacts and outstanding issues of patient chart review on communications, diagnostics, and management. It also discusses recent efforts to integrate artificial intelligence (AI) into clinical summarization tasks, and its transformative impact on the clinician's potential, including but not limited to reductions of administrative burden and improved patient-centered care. Furthermore, it takes into account the numerous ethical challenges associated with integrating AI into clinical workflow, including biases, data privacy, and cybersecurity.
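A minimal sketch of AI-assisted clinical summarization follows, using a general-purpose abstractive summarizer as a stand-in; the note is synthetic, and a production system would require a clinically validated model and safeguards for protected health information.

```python
# Minimal sketch of abstractive summarization over a synthetic clinical note.
# distilbart-cnn is a general-purpose summarizer used as a stand-in; a production
# system would use a clinically validated model with PHI safeguards.
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

note = (
    "72-year-old male with a history of type 2 diabetes and hypertension presents "
    "with three days of progressive dyspnea. Exam notable for bibasilar crackles. "
    "BNP elevated; chest radiograph shows pulmonary vascular congestion. Started on "
    "IV furosemide with improvement in symptoms and oxygen requirement."
)
print(summarizer(note, max_length=40, min_length=10)[0]["summary_text"])
```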
Collapse
Affiliation(s)
- Chanseo Lee
- Department of Surgery, Yale School of Medicine, New Haven, CT, United States
| | | | | |
Collapse
|
48
|
Reuben JS, Meiri H, Arien-Zakay H. AI's pivotal impact on redefining stakeholder roles and their interactions in medical education and health care. Front Digit Health 2024; 6:1458811. [PMID: 39564581 PMCID: PMC11573760 DOI: 10.3389/fdgth.2024.1458811] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2024] [Accepted: 10/04/2024] [Indexed: 11/21/2024] Open
Abstract
Artificial Intelligence (AI) has the potential to revolutionize medical training, diagnostics, treatment planning, and healthcare delivery while also bringing challenges such as data privacy, the risk of technological overreliance, and the preservation of critical thinking. This manuscript explores the impact of AI and Machine Learning (ML) on healthcare interactions, focusing on faculty, students, clinicians, and patients. AI and ML's early inclusion in the medical curriculum will support student-centered learning; however, all stakeholders will require specialized training to bridge the gap between medical practice and technological innovation. This underscores the importance of education in the ethical and responsible use of AI and emphasizing collaboration to maximize its benefits. This manuscript calls for a re-evaluation of interpersonal relationships within healthcare to improve the overall quality of care and safeguard the welfare of all stakeholders by leveraging AI's strengths and managing its risks.
Collapse
Affiliation(s)
- Jayne S Reuben
- Texas A&M School of Dentistry, Dallas, TX, United States
| | - Hila Meiri
- Department of Surgery, Cedars-Sinai Medical Center, Los Angeles, CA, United States
- Department of Surgery, Sheba Tel-Hashomer Medical Center, Associated with Tel-Aviv University, Tel-Aviv, Israel
| | - Hadar Arien-Zakay
- The Faculty of Medicine, School of Pharmacy, Institute for Drug Research, The Hebrew University of Jerusalem, Jerusalem, Israel
| |
Collapse
|
49
|
Hochheiser H, Klug J, Mathie T, Pollard TJ, Raffa JD, Ballard SL, Conrad EA, Edakalavan S, Joseph A, Alnomasy N, Nutman S, Hill V, Kapoor S, Claudio EP, Kravchenko OV, Li R, Nourelahi M, Diaz J, Taylor WM, Rooney SR, Woeltje M, Celi LA, Horvat CM. Raising awareness of potential biases in medical machine learning: Experience from a Datathon. MEDRXIV : THE PREPRINT SERVER FOR HEALTH SCIENCES 2024:2024.10.21.24315543. [PMID: 39502657 PMCID: PMC11537317 DOI: 10.1101/2024.10.21.24315543] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Indexed: 11/14/2024]
Abstract
Objective To challenge clinicians and informaticians to learn about potential sources of bias in medical machine learning models through investigation of data and predictions from an open-source severity of illness score. Methods Over a two-day period (total elapsed time approximately 28 hours), we conducted a datathon that challenged interdisciplinary teams to investigate potential sources of bias in the Global Open Source Severity of Illness Score (GOSSIS-1). Teams were invited to develop hypotheses, to use tools of their choosing to identify potential sources of bias, and to provide a final report. Results Five teams participated, three of which included both informaticians and clinicians. Most (4/5) used Python for analyses; the remaining team used R. Common analysis themes included the relationship of the GOSSIS-1 prediction score with demographics and care-related variables; relationships between demographics and outcomes; calibration and factors related to the context of care; and the impact of missingness. Representativeness of the population, differences in calibration and model performance among groups, and differences in performance across hospital settings were identified as possible sources of bias. Discussion Datathons are a promising approach for challenging developers and users to explore questions relating to unrecognized biases in medical machine learning algorithms.
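A subgroup performance check of the kind the datathon teams performed can be sketched as follows; the risk scores, outcomes, and group labels are simulated and are not GOSSIS-1 records.

```python
# Synthetic example of a subgroup performance check; the data and group labels
# are simulated, not GOSSIS-1 records.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 5000
group = rng.choice(["A", "B"], size=n, p=[0.7, 0.3])
y = rng.binomial(1, 0.1, size=n)

# Simulated risk scores that are deliberately less informative for group B.
noise = np.where(group == "B", 1.5, 0.5)
score = y + rng.normal(0, noise, size=n)

for g in ["A", "B"]:
    mask = group == g
    print(f"group {g}: AUC = {roc_auc_score(y[mask], score[mask]):.3f}")
```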
Collapse
Affiliation(s)
- Harry Hochheiser
- Department of Biomedical Informatics, University of Pittsburgh, Pittsburgh, PA USA
| | - Jesse Klug
- UPMC Intensive Care Unit Service Center, UPMC, Pittsburgh, PA, USA
| | - Thomas Mathie
- Department of Critical Care Medicine, University of Pittsburgh, Pittsburgh, PA, USA
| | - Tom J. Pollard
- MIT Laboratory for Computational Physiology, Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Jesse D. Raffa
- MIT Laboratory for Computational Physiology, Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Stephanie L. Ballard
- Health Informatics, School of Health and Rehabilitation Sciences, University of Pittsburgh, Pittsburgh, PA, USA
| | - Evamarie A. Conrad
- Health Informatics, School of Health and Rehabilitation Sciences, University of Pittsburgh, Pittsburgh, PA, USA
| | - Smitha Edakalavan
- Department of Biomedical Informatics, University of Pittsburgh, Pittsburgh, PA USA
| | - Allan Joseph
- Division of Critical Care Medicine, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH, USA
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
| | - Nader Alnomasy
- Health Informatics, School of Health and Rehabilitation Sciences, University of Pittsburgh, Pittsburgh, PA, USA
- College of Nursing, Medical Surgical Department, University of Ha’il, Ha’il, Saudi Arabia
| | - Sarah Nutman
- Department of Critical Care Medicine, University of Pittsburgh, Pittsburgh, PA, USA
| | - Veronika Hill
- Health Informatics, School of Health and Rehabilitation Sciences, University of Pittsburgh, Pittsburgh, PA, USA
| | - Sumit Kapoor
- Department of Critical Care Medicine, University of Pittsburgh, Pittsburgh, PA, USA
| | - Eddie Pérez Claudio
- Department of Biomedical Informatics, University of Pittsburgh, Pittsburgh, PA USA
| | - Olga V. Kravchenko
- Department of Family and Community Medicine, University of Pittsburgh, Pittsburgh, PA, USA
| | - Ruoting Li
- Department of Critical Care Medicine, University of Pittsburgh, Pittsburgh, PA, USA
| | - Mehdi Nourelahi
- Department of Biomedical Informatics, University of Pittsburgh, Pittsburgh, PA USA
| | - Jenny Diaz
- Health Informatics, School of Health and Rehabilitation Sciences, University of Pittsburgh, Pittsburgh, PA, USA
| | - W. Michael Taylor
- Department of Critical Care Medicine, University of Pittsburgh, Pittsburgh, PA, USA
| | - Sydney R. Rooney
- Division of Cardiology, Department of Pediatrics, Children’s Hospital of Pittsburgh, University of Pittsburgh, Pittsburgh, PA, USA
| | - Maeve Woeltje
- Department of Critical Care Medicine, University of Pittsburgh, Pittsburgh, PA, USA
| | - Leo Anthony Celi
- MIT Laboratory for Computational Physiology, Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, MA, USA
- Division of Pulmonary, Critical Care and Sleep Medicine, Beth Israel Deaconess Medical Center, Boston, MA, USA
- Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA, USA
| | | |
Collapse
|
50
|
Stephanian B, Karki S, Debnath K, Saltychev M, Rossi-Meyer M, Kandathil CK, Most SP. Role of Artificial Intelligence and Machine Learning in Facial Aesthetic Surgery: A Systematic Review. Facial Plast Surg Aesthet Med 2024; 26:679-705. [PMID: 39591584 DOI: 10.1089/fpsam.2024.0204] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2024] Open
Abstract
Objective: To analyze the quality of artificial intelligence (AI) and machine learning (ML) tools developed for facial aesthetic surgery. Data Sources: Medline, Embase, CINAHL, Central, Scopus, and Web of Science databases were searched in February 2024. Study Selection: All original research in adults undergoing facial aesthetic surgery was included. Pilot reports, case reports, case series (n < 5), conference proceedings, letters (except research letters and brief reports), and editorials were excluded. Main Outcomes and Measures: Facial aesthetic surgery procedures employing AI and ML tools to measure improvements in diagnostic accuracy, predictive outcomes, precision patient counseling, and the scope of facial aesthetic surgery procedures where these tools have been implemented. Results: Out of 494 initial studies, 66 were included in the qualitative analysis. Of these, 42 (63.6%) were of "good" quality, 20 (30.3%) were of "fair" quality, and 4 (6.1%) were of "poor" quality. Conclusion: AI improves diagnostic accuracy, predictive capabilities, patient counseling, and facial aesthetic surgery treatment planning.
Collapse
Affiliation(s)
| | - Sabin Karki
- Indiana University School of Medicine, Indianapolis, Indiana, USA
| | | | - Mikhail Saltychev
- Department of Physical and Rehabilitation Medicine, Turku University Hospital and University of Turku, Turku, Finland
| | - Monica Rossi-Meyer
- Division of Facial Plastic and Reconstructive Surgery, Stanford University School of Medicine, Stanford, California, USA
| | - Cherian Kurian Kandathil
- Division of Facial Plastic and Reconstructive Surgery, Stanford University School of Medicine, Stanford, California, USA
| | - Sam P Most
- Division of Facial Plastic and Reconstructive Surgery, Stanford University School of Medicine, Stanford, California, USA
| |
Collapse
|