1
Groeneveld SWM, van Os-Medendorp H, van Gemert-Pijnen JEWC, Verdaasdonk RM, van Houwelingen T, Dekkers T, den Ouden MEM. Essential competencies of nurses working with AI-driven lifestyle monitoring in long-term care: A modified Delphi study. Nurse Educ Today 2025; 149:106659. [PMID: 40056483] [DOI: 10.1016/j.nedt.2025.106659]
Abstract
BACKGROUND As growing numbers of older adults prefer to remain in their own homes as they age, technology is needed to support them. One relevant technology is Artificial Intelligence (AI)-driven lifestyle monitoring, which uses data from sensors placed in the home. The technology is not intended to replace nurses but to serve as a support tool, so understanding the specific competencies nurses require to use it effectively is crucial. The aim of this study is to identify the essential competencies nurses need to work with AI-driven lifestyle monitoring in long-term care. METHODS A three-round modified Delphi study was conducted, consisting of two online questionnaires and one focus group. A group of 48 experts participated in the study: nurses, innovators, developers, researchers, managers and educators. In the first two rounds, experts assessed the clarity and relevance of a proposed list of competencies, with the opportunity to suggest adjustments or new competencies. In the third round, the items without consensus were discussed in a focus group. FINDINGS After the first round, consensus on relevance and clarity was reached for 46 (72 %) of the competencies; after the second round, for 54 (83 %). After the third round, a final list of 10 competency domains and 61 sub-competencies was established. The 10 competency domains are: Fundamentals of AI, Participation in AI design, Patient-centered needs assessment, Personalisation of AI to patients' situation, Data reporting, Interpretation of AI output, Integration of AI output into clinical practice, Communication about AI use, Implementation of AI and Evaluation of AI use. These competencies span from a basic understanding of AI-driven lifestyle monitoring to integrating it into daily work, evaluating it, and communicating its use to other stakeholders, including patients and informal caregivers. CONCLUSION Our study introduces a novel framework of the (sub)competencies required for nurses to work with AI-driven lifestyle monitoring in long-term care. These findings provide a foundation for developing initial educational programs and lifelong learning activities for nurses in this evolving field. Moreover, the importance that experts attach to AI competencies calls for a broader discussion about a potential shift in nursing responsibilities and tasks as healthcare becomes increasingly technologically advanced and data-driven, possibly leading to new roles within nursing.
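The round-by-round consensus figures in these findings reduce to a simple tally: the fraction of items on which a sufficient share of experts agree. The sketch below (Python) illustrates that bookkeeping; the 1-4 rating scale, the agree-if-rated-3-or-higher cutoff, and the 70% threshold are illustrative assumptions, not the study's stated criteria.

```python
# Hypothetical sketch of a Delphi consensus tally. The 70% agreement
# threshold and 1-4 rating scale are assumptions for illustration; the
# paper defines its own consensus criteria.

def consensus_rate(ratings_by_item: dict[str, list[int]],
                   agree_if: int = 3, threshold: float = 0.70) -> float:
    """Fraction of items where >= threshold of experts rate >= agree_if."""
    reached = [
        sum(r >= agree_if for r in ratings) / len(ratings) >= threshold
        for ratings in ratings_by_item.values()
    ]
    return sum(reached) / len(reached)

# Toy example: 3 competency items rated by 5 experts on a 1-4 relevance scale.
round1 = {
    "Fundamentals of AI":       [4, 4, 3, 4, 2],  # 80% agree -> consensus
    "Interpretation of output": [3, 3, 4, 4, 4],  # 100% agree -> consensus
    "Participation in design":  [2, 3, 2, 4, 2],  # 40% agree -> no consensus
}
print(f"Round 1 consensus rate: {consensus_rate(round1):.0%}")  # 67%
```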
Affiliation(s)
- S W M Groeneveld
- Research Group Technology, Health & Care, School of Social Work, Saxion University of Applied Sciences, P.O. box 70.000, 7500 KB Enschede, Netherlands; Research Group Smart Health, School of Health, Saxion University of Applied Sciences, P.O. box 70.000, 7500 KB Enschede, Netherlands; TechMed Center, Health Technology Implementation, University of Twente, P.O. box 217, 7500 AE Enschede, Netherlands
- H van Os-Medendorp
- Faculty Health, Sports, and Social Work, Inholland University of Applied Sciences, P.O. box 75068, 1070 AB Amsterdam, Netherlands; Spaarne Gasthuis Academy, P.O. box 417, 2000 AK Haarlem, Netherlands
- J E W C van Gemert-Pijnen
- Centre for eHealth and Wellbeing Research, Section of Psychology, Health and Technology, University of Twente, P.O. box 217, 7500 AE Enschede, Netherlands
- R M Verdaasdonk
- TechMed Center, Health Technology Implementation, University of Twente, P.O. box 217, 7500 AE Enschede, Netherlands
- T van Houwelingen
- Research Group Technology for Healthcare Innovations, Research Centre for Healthy and Sustainable Living, University of Applied Sciences Utrecht, P.O. box 13102, 3507 LC Utrecht, Netherlands
- T Dekkers
- Centre for eHealth and Wellbeing Research, Section of Psychology, Health and Technology, University of Twente, P.O. box 217, 7500 AE Enschede, Netherlands
- M E M den Ouden
- Research Group Technology, Health & Care, School of Social Work, Saxion University of Applied Sciences, P.O. box 70.000, 7500 KB Enschede, Netherlands; Research Group Care and Technology, Regional Community College of Twente, P.O. box 636, 7550 AP Hengelo, Netherlands
2
Arvai N, Katonai G, Mesko B. Health Care Professionals' Concerns About Medical AI and Psychological Barriers and Strategies for Successful Implementation: Scoping Review. J Med Internet Res 2025; 27:e66986. [PMID: 40267462] [DOI: 10.2196/66986]
Abstract
BACKGROUND The rapid progress in the development of artificial intelligence (AI) is having a substantial impact on health care (HC) delivery and the physician-patient interaction. OBJECTIVE This scoping review aims to offer a thorough analysis of the current status of integrating AI into medical practice as well as the apprehensions expressed by HC professionals (HCPs) over its application. METHODS This scoping review used the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) guidelines to examine articles that investigated the apprehensions of HCPs about medical AI. Following the application of inclusion and exclusion criteria, 32 of an initial 217 studies (14.7%) were selected for the final analysis. We aimed to develop an attitude scale that accurately captured the unfavorable emotions of HCPs toward medical AI. We achieved this by selecting attitudes and ranking them by degree of aversion, from mild skepticism to intense fear. The final scale was: skepticism, reluctance, anxiety, resistance, and fear. RESULTS In total, 3 themes were identified through thematic analysis. National surveys performed among HCPs comprehensively analyzed their current emotions, worries, and attitudes regarding the integration of AI in the medical industry. Research on technostress focused primarily on the psychological dimensions of adopting AI, examining the emotional reactions, fears, and difficulties experienced by HCPs when they encountered AI-powered technology. The high-level perspective category included studies that took a broad and comprehensive approach to evaluating overarching themes, trends, and implications related to the integration of AI technology in HC. We discovered 15 sources of attitudes, which we classified into 2 distinct groups: intrinsic and extrinsic. The intrinsic group concerned HCPs' inherent professional identity, encompassing their tasks and capacities. Conversely, the extrinsic group pertained to their patients and the influence of AI on patient care. Next, we examined the shared themes and made suggestions to potentially tackle the problems discovered. Ultimately, we analyzed the results in relation to the attitude scale, assessing the degree to which each attitude was represented. CONCLUSIONS The solution to addressing resistance toward medical AI appears to be centered on comprehensive education, the implementation of suitable legislation, and the delineation of roles. Addressing these issues may foster acceptance and optimize AI integration, enhancing HC delivery while maintaining ethical standards. Given that regulation is already prominent and extensively researched, we suggest that further research be dedicated to education.
Affiliation(s)
- Nora Arvai
- Kálmán Laki Doctoral School of Biomedical and Clinical Sciences, University of Debrecen, Debrecen, Hungary
- Gellért Katonai
- Kálmán Laki Doctoral School of Biomedical and Clinical Sciences, University of Debrecen, Debrecen, Hungary
- Department of Family Medicine, Semmelweis University, Budapest, Hungary
3
Bottacin WE, de Souza TT, Melchiors AC, Reis WCT. Explanation and elaboration of MedinAI: guidelines for reporting artificial intelligence studies in medicines, pharmacotherapy, and pharmaceutical services. Int J Clin Pharm 2025 (Epub ahead of print). [PMID: 40249526] [DOI: 10.1007/s11096-025-01906-2]
Abstract
The increasing adoption of artificial intelligence (AI) in medicines, pharmacotherapy, and pharmaceutical services necessitates clear guidance on reporting standards. While the MedinAI Statement (Bottacin in Int J Clin Pharm, https://doi.org/10.1007/s11096-025-01905-3, 2025) provides core guidelines for reporting AI studies in these fields, detailed explanations and practical examples are crucial for optimal implementation. This companion document was developed to offer comprehensive guidance and real-world examples for each guideline item. The document elaborates on all 14 items and 78 sub-items across four domains: core, ethical considerations in medication and pharmacotherapy, medicines as products, and services related to medicines and pharmacotherapy. Through clear, actionable guidance and diverse examples, this document enhances MedinAI's utility, enabling researchers and stakeholders to improve the quality and transparency of AI research reporting across various contexts, study designs, and development stages.
Affiliation(s)
- Wallace Entringer Bottacin
- Postgraduate Program in Pharmaceutical Services and Policies, Federal University of Paraná, Avenida Prefeito Lothário Meissner, 632 - Jardim Botânico, Curitiba, PR, 80210-170, Brazil
- Thais Teles de Souza
- Department of Pharmaceutical Sciences, Federal University of Paraíba, João Pessoa, PB, Brazil
- Ana Carolina Melchiors
- Postgraduate Program in Pharmaceutical Services and Policies, Federal University of Paraná, Avenida Prefeito Lothário Meissner, 632 - Jardim Botânico, Curitiba, PR, 80210-170, Brazil
4
Jackson GP, Shortliffe EH. Understanding the evidence for artificial intelligence in healthcare. BMJ Qual Saf 2025:bmjqs-2025-018559 (Epub ahead of print). [PMID: 40246317] [DOI: 10.1136/bmjqs-2025-018559]
Affiliation(s)
- Gretchen Purcell Jackson
- Digital, Intuitive Surgical Inc, Sunnyvale, California, USA
- Pediatric Surgery, Pediatrics, and Biomedical Informatics, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Edward H Shortliffe
- Biomedical Informatics, Columbia University, New York, New York, USA
- Population Health Sciences, Weill Cornell Medical College, New York, New York, USA
5
Benda NC, Reading Turchioe M. We Need a Healthcare System That Supports Patients and Health Professionals in Using AI (If They Choose). J Gen Intern Med 2025 (Epub ahead of print). [PMID: 40240617] [DOI: 10.1007/s11606-025-09505-7]
6
Maj A, Makowska M, Sacharczuk K. The content analysis used in nursing research and the possibility of including artificial intelligence support: A methodological review. Appl Nurs Res 2025; 82:151919. [PMID: 40086938] [DOI: 10.1016/j.apnr.2025.151919]
Abstract
BACKGROUND This article explores how AI can support nurses who use content analysis in scientific nursing research. METHODS A narrative literature review was conducted. RESULTS The article summarizes current knowledge about content analysis, outlines the concepts of qualitative and quantitative content analysis, and simplifies the issues related to the coding process. It explains how to identify and assess quality during content analysis and gives examples of topics that can be investigated with it, especially in the field of nursing. CONCLUSIONS Knowledge of AI capabilities is needed to use them productively; these capabilities change very quickly and require constant knowledge updates. Legal and ethical regulations concerning the use of the technology are still lacking, so AI outputs still require human verification.
Affiliation(s)
- Agnieszka Maj
- Warsaw University of Life Sciences, Faculty of Sociology and Pedagogy, Department of Sociology, Poland
- Marta Makowska
- Kozminski University in Warsaw, Department of Economic Psychology, Poland
7
El Arab RA, Abu-Mahfouz MS, Abuadas FH, Alzghoul H, Almari M, Ghannam A, Seweid MM. Bridging the Gap: From AI Success in Clinical Trials to Real-World Healthcare Implementation-A Narrative Review. Healthcare (Basel) 2025; 13:701. [PMID: 40217999] [PMCID: PMC11988730] [DOI: 10.3390/healthcare13070701]
Abstract
BACKGROUND Artificial intelligence (AI) has demonstrated remarkable diagnostic accuracy in controlled clinical trials, sometimes rivaling or even surpassing experienced clinicians. However, AI's real-world effectiveness is frequently diminished when applied to diverse clinical settings, owing to methodological shortcomings, limited multicenter studies, and insufficient real-world validations. OBJECTIVE This narrative review critically examines the discrepancy between AI's robust performance in clinical trials and its inconsistent real-world implementation. Our goal is to synthesize methodological, ethical, and operational challenges impeding AI integration and propose a comprehensive framework to bridge this gap. METHODS We conducted a thematic synthesis of peer-reviewed studies from the PubMed, IEEE Xplore, and Scopus databases, targeting studies from 2014 to 2024. Included studies addressed diagnostic, therapeutic, or operational AI applications and related implementation challenges in healthcare. Non-peer-reviewed articles and studies without rigorous analysis were excluded. RESULTS Our synthesis identified key barriers to AI's real-world deployment, including algorithmic bias from homogeneous datasets, workflow misalignment, increased clinician workload, and ethical concerns surrounding transparency, accountability, and data privacy. Additionally, scalability remains a challenge due to interoperability issues, insufficient methodological rigor, and inconsistent reporting standards. To address these challenges, we introduce the AI Healthcare Integration Framework (AI-HIF), a structured model incorporating theoretical and operational strategies for responsible AI implementation in healthcare. CONCLUSIONS Translating AI from controlled environments to real-world clinical practice necessitates a multifaceted, interdisciplinary approach. Future research should prioritize large-scale pragmatic trials and observational studies to empirically validate the proposed AI Healthcare Integration Framework (AI-HIF) in diverse, real-world healthcare contexts.
Affiliation(s)
- Rabie Adel El Arab
- Department of Health Management and Informatics, Almoosa College of Health Sciences, Al Ahsa 36422, Saudi Arabia
- Department of Nursing, Almoosa College of Health Sciences, Al Ahsa 36422, Saudi Arabia
- Fuad H. Abuadas
- Department of Nursing, Jouf University, Skakka 72388, Saudi Arabia
- Husam Alzghoul
- Department of Nursing, Almoosa College of Health Sciences, Al Ahsa 36422, Saudi Arabia
- Mohammed Almari
- Department of Nursing, Almoosa College of Health Sciences, Al Ahsa 36422, Saudi Arabia
- Ahmad Ghannam
- Department of Computer Science, Princess Sumaya University for Technology, Amman 11941, Jordan
- Mohamed Mahmoud Seweid
- Department of Nursing, Almoosa College of Health Sciences, Al Ahsa 36422, Saudi Arabia
- Faculty of Nursing, Beni-Suef University, Beni-Suef 62111, Egypt
8
Oddiri U, Ryan MS, Collins JE, Han P, Klein M, Lyle ANJ, Kloster HM. A Narrative Review of Key Studies in Medical Education in 2023: Applying the Current Literature to Educational Practice and Scholarship. Acad Pediatr 2025; 25:102605. [PMID: 39571969] [DOI: 10.1016/j.acap.2024.102605]
Abstract
Pediatric clinician educators face the challenge of juggling clinical practice with teaching responsibilities. The balancing act is even more challenging when one considers the need to stay current with evidence from the clinical and medical education literature. In this narrative review of the 2023 medical education literature, the Academic Pediatric Association Education Committee's Top Articles team summarizes high-yield articles that could significantly influence pediatric clinician educators' teaching and practice. A standardized blinded rubric was applied to identify the most impactful articles from 19 medical education and specialty journals. Final selections were categorized into six domains: artificial intelligence and technology, belonging in the learning environment, bias in the workplace, clinical learning, curriculum and assessment, and family and community partnerships. The reviewers summarize key findings from the top articles and describe implications for pediatric clinician educator practice.
Affiliation(s)
- Uchechi Oddiri
- Department of Pediatrics (U Oddiri), Division of Pediatric Critical Care, Renaissance School of Medicine at Stony Brook University, Stony Brook, NY
- Michael S Ryan
- Department of Pediatrics (MS Ryan), University of Virginia School of Medicine, Charlottesville, Va
- Jolene E Collins
- Department of Pediatrics (JE Collins), Division of General Pediatrics, Children's Hospital Los Angeles, Los Angeles, Calif; Department of Pediatrics, Division of General Pediatrics, USC Keck School of Medicine (JE Collins), Los Angeles, Calif
- Peggy Han
- Division of Critical Care Medicine (P Han), Department of Pediatrics, Stanford University School of Medicine, Palo Alto, Calif
- Melissa Klein
- Department of Pediatrics (M Klein), University of Cincinnati College of Medicine, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio
- Allison N J Lyle
- Norton Children's Medical Group (ANJ Lyle), University of Louisville School of Medicine, Department of Pediatrics, Division of Neonatology, Louisville, Ky
- Heidi M Kloster
- Department of Pediatrics (HM Kloster), University of Wisconsin School of Medicine and Public Health, Madison, Wis
9
Gin BC, O'Sullivan PS, Hauer KE, Abdulnour RE, Mackenzie M, Ten Cate O, Boscardin CK. Entrustment and EPAs for Artificial Intelligence (AI): A Framework to Safeguard the Use of AI in Health Professions Education. Acad Med 2025; 100:264-272. [PMID: 39761533] [DOI: 10.1097/acm.0000000000005930]
Abstract
In this article, the authors propose a repurposing of the concept of entrustment to help guide the use of artificial intelligence (AI) in health professions education (HPE). Entrustment can help identify and mitigate the risks of incorporating generative AI tools with limited transparency about their accuracy, source material, and disclosure of bias into HPE practice. With AI's growing role in education-related activities, like automated medical school application screening and feedback quality and content appraisal, there is a critical need for a trust-based approach to ensure these technologies are beneficial and safe. Drawing parallels with HPE's entrustment concept, which assesses a trainee's readiness to perform clinical tasks-or entrustable professional activities-the authors propose assessing the trustworthiness of AI tools to perform an HPE-related task across 3 characteristics: ability (competence to perform tasks accurately), integrity (transparency and honesty), and benevolence (alignment with ethical principles). The authors draw on existing theories of entrustment decision-making to envision a structured way to decide on AI's role and level of engagement in HPE-related tasks, including proposing an AI-specific entrustment scale. Identifying tasks that AI could be entrusted with provides a focus around which considerations of trustworthiness and entrustment decision-making may be synthesized, making explicit the risks associated with AI use and identifying strategies to mitigate these risks. Responsible, trustworthy, and ethical use of AI requires health professions educators to develop safeguards for using it in teaching, learning, and practice-guardrails that can be operationalized via applying the entrustment concept to AI. Without such safeguards, HPE practice stands to be shaped by the oncoming wave of AI innovations tied to commercial motivations, rather than modeled after HPE principles-principles rooted in the trust and transparency that are built together with trainees and patients.
10
Ichikawa T, Olsen E, Vinod A, Glenn N, Hanna K, Lund GC, Pierce-Talsma S. Generative Artificial Intelligence in Medical Education-Policies and Training at US Osteopathic Medical Schools: Descriptive Cross-Sectional Survey. JMIR Med Educ 2025; 11:e58766. [PMID: 39934984] [PMCID: PMC11835596] [DOI: 10.2196/58766]
Abstract
Background Interest has recently increased in generative artificial intelligence (GenAI), a subset of artificial intelligence that can create new content. Although the publicly available GenAI tools are not specifically trained in the medical domain, they have demonstrated proficiency in a wide range of medical assessments. The future integration of GenAI in medicine remains unknown. However, the rapid availability of GenAI with a chat interface, and its potential risks and benefits, are the focus of great interest. As with any significant medical advancement or change, medical schools must adapt their curricula to equip students with the skills necessary to become successful physicians. Furthermore, medical schools must ensure that faculty members have the skills to harness these new opportunities to increase their effectiveness as educators. How medical schools currently fulfill these responsibilities is unclear. Colleges of Osteopathic Medicine (COMs) in the United States currently train a significant proportion of the total number of medical students, in academic settings ranging from large public research universities to small private institutions. Therefore, studying COMs will offer a representative sample of current GenAI integration in medical education. Objective This study aims to describe the policies and training regarding GenAI at US COMs, targeting students, faculty, and administrators. Methods Web-based surveys were sent to deans and Student Government Association (SGA) presidents of the main campuses of fully accredited US COMs. The dean survey included questions regarding current and planned policies and training related to GenAI for students, faculty, and administrators. The SGA president survey included only those questions related to current student policies and training. Results Responses were received from 81% (26/32) of COMs surveyed. This included 47% (15/32) of the deans and 50% (16/32) of the SGA presidents (with 5 COMs represented by both the deans and the SGA presidents). Most COMs did not have a policy on the student use of GenAI, as reported by the dean (14/15, 93%) and the SGA president (14/16, 88%). Of the COMs with no policy, 79% (11/14) had no formal plans for policy development. Only 1 COM had training for students, which focused entirely on the ethics of using GenAI. Most COMs had no formal plans to provide mandatory (11/14, 79%) or elective (11/15, 73%) training. No COM had GenAI policies for faculty or administrators, and 80% had no formal plans for policy development. Furthermore, 33.3% (5/15) of COMs had faculty or administrator GenAI training. Except for examination question development, there was no training to increase faculty or administrator capabilities and efficiency or to decrease their workload. Conclusions The survey revealed that most COMs lack GenAI policies and training for students, faculty, and administrators. The few existing policies and training programs were extremely limited in scope, and most institutions without them had no formal plans for development. The lack of current policies and training initiatives suggests inadequate preparedness for integrating GenAI into the medical school environment, thereby relegating the responsibility for ethical guidance and training to individual COM members.
Affiliation(s)
- Tsunagu Ichikawa
- College of Osteopathic Medicine, University of New England, 11 Hills Beach Road, Biddeford, ME, 04005, United States
- Elizabeth Olsen
- College of Osteopathic Medicine, Rocky Vista University, Parker, CO, United States
- Arathi Vinod
- College of Osteopathic Medicine, Touro University California, Vallejo, CA, United States
- Noah Glenn
- McCombs School of Business, University of Texas at Austin, Austin, TX, United States
- Karim Hanna
- Morsani College of Medicine, University of South Florida, Tampa, FL, United States
- Stacey Pierce-Talsma
- College of Osteopathic Medicine, University of New England, 11 Hills Beach Road, Biddeford, ME, 04005, United States
11
Singhal A, Zhao X, Wall P, So E, Calderini G, Partin A, Koussa N, Vasanthakumari P, Narykov O, Zhu Y, Jones SE, Abbas-Aghababazadeh F, Nair SK, Bélisle-Pipon JC, Jayaram A, Parker BA, Yeung KT, Griffiths JI, Weil R, Nath A, Haibe-Kains B, Ideker T. The Hallmarks of Predictive Oncology. Cancer Discov 2025; 15:271-285. [PMID: 39760657] [PMCID: PMC11969157] [DOI: 10.1158/2159-8290.cd-24-0760]
Abstract
SIGNIFICANCE As the field of artificial intelligence evolves rapidly, these hallmarks are intended to capture fundamental, complementary concepts necessary for the progress and timely adoption of predictive modeling in precision oncology. Through these hallmarks, we hope to establish standards and guidelines that enable the symbiotic development of artificial intelligence and precision oncology.
Affiliation(s)
- Akshat Singhal
- Department of Computer Science and Engineering, University of California, San Diego, La Jolla, CA, USA
- Xiaoyu Zhao
- Division of Human Genomics and Precision Medicine, Department of Medicine, University of California, San Diego, La Jolla, CA, USA
- Patrick Wall
- Department of Bioengineering, University of California, San Diego, La Jolla, CA, USA
- Emily So
- Princess Margaret Cancer Centre, University Health Network, Toronto, ON, Canada
- Guido Calderini
- Faculty of Health Science, Simon Fraser University, Burnaby, BC, Canada
- École de santé publique, Université de Montréal, Montréal, QC, Canada
- Alexander Partin
- Division of Data Science and Learning, Argonne National Laboratory, Lemont, IL, USA
- Natasha Koussa
- Cancer Data Science Initiatives, Cancer Research Technology Program, Frederick National Laboratory for Cancer Research, Frederick, MD, USA
- Oleksandr Narykov
- Division of Data Science and Learning, Argonne National Laboratory, Lemont, IL, USA
- Yitan Zhu
- Division of Data Science and Learning, Argonne National Laboratory, Lemont, IL, USA
- Sara E. Jones
- Cancer Data Science Initiatives, Cancer Research Technology Program, Frederick National Laboratory for Cancer Research, Frederick, MD, USA
- Barbara A. Parker
- Moores Cancer Center, Department of Medicine, University of California, San Diego, La Jolla, CA, USA
- Kay T. Yeung
- Moores Cancer Center, Department of Medicine, University of California, San Diego, La Jolla, CA, USA
- Jason I. Griffiths
- Department of Medical Oncology and Therapeutics Research, Beckman Research Institute, City of Hope National Medical Center, Monrovia, CA, USA
- Ryan Weil
- Cancer Data Science Initiatives, Cancer Research Technology Program, Frederick National Laboratory for Cancer Research, Frederick, MD, USA
- Aritro Nath
- Department of Medical Oncology and Therapeutics Research, Beckman Research Institute, City of Hope National Medical Center, Monrovia, CA, USA
- Benjamin Haibe-Kains
- Princess Margaret Cancer Centre, University Health Network, Toronto, ON, Canada
- Medical Biophysics, University of Toronto, Toronto, Canada
- Vector Institute for Artificial Intelligence, Toronto, Canada
- Department of Biostatistics, Dalla Lana School of Public Health, Toronto, Canada
- Trey Ideker
- Department of Computer Science and Engineering, University of California, San Diego, La Jolla, CA, USA
- Division of Human Genomics and Precision Medicine, Department of Medicine, University of California, San Diego, La Jolla, CA, USA
- Department of Bioengineering, University of California, San Diego, La Jolla, CA, USA
- Moores Cancer Center, Department of Medicine, University of California, San Diego, La Jolla, CA, USA
12
Matheny ME, Goldsack JC, Saria S, Shah NH, Gerhart J, Cohen IG, Price WN, Patel B, Payne PRO, Embí PJ, Anderson B, Horvitz E. Artificial Intelligence In Health And Health Care: Priorities For Action. Health Aff (Millwood) 2025; 44:163-170. [PMID: 39841940] [DOI: 10.1377/hlthaff.2024.01003]
Abstract
The field of artificial intelligence (AI) has entered a new cycle of intense opportunity, fueled by advances in deep learning, including generative AI. Applications of recent advances affect many aspects of everyday life, yet nowhere is it more important to use this technology safely, effectively, and equitably than in health and health care. Here, as part of the National Academy of Medicine's Vital Directions for Health and Health Care: Priorities for 2025 initiative, which is designed to provide guidance on pressing health care issues for the incoming presidential administration, we describe the steps needed to achieve these goals. We focus on four strategic areas: ensuring safe, effective, and trustworthy use of AI; promotion and development of an AI-competent health care workforce; investing in AI research to support the science, practice, and delivery of health and health care; and promotion of policies and procedures to clarify AI liability and responsibilities.
Affiliation(s)
- Michael E. Matheny: Vanderbilt University and Veterans Affairs Tennessee Valley Healthcare System, Nashville, Tennessee
- Suchi Saria: Johns Hopkins University, Baltimore, Maryland
- Nigam H. Shah: Stanford University, Palo Alto, California
- I. Glenn Cohen: Harvard University, Boston, Massachusetts
- Philip R. O. Payne: Washington University in St. Louis, St. Louis, Missouri
- Brian Anderson: Coalition for Health AI, Boston, Massachusetts
13
Burke H, Kazinka R, Gandhi R, Murray A. Artificial Intelligence-Generated Writing in the ERAS Personal Statement: An Emerging Quandary for Post-graduate Medical Education. Acad Psychiatry 2025; 49:13-17. [PMID: 39505810] [DOI: 10.1007/s40596-024-02080-9]
Abstract
OBJECTIVE This study was designed to investigate whether artificial intelligence (AI) detection software can identify the use of AI in personal statements for residency applications. METHOD Previously written personal statements were collected from physicians who had already matched to residency through the Electronic Residency Application System. Physicians were recruited through collegial relationships and were given study information via email. The study team constructed five parallel statements from the shared personal statements to prompt AI to create personal statements of similar content. An online AI detection tool, GPTZero, was used to assess all the personal statements. Statistical analyses were conducted using R; descriptive statistics, t-tests, and Pearson correlations were used to assess the data. RESULTS Eight physicians' statements were compared to eight AI-generated statements. GPTZero correctly identified AI-generated writing, assigning it significantly higher AI probability scores than human-authored essays. Human-generated statements were considered more readable, used shorter words with fewer syllables, and had more sentences than AI-generated essays. Longer average sentence length, low readability scores, and high SAT-word percentages were strongly associated with AI-generated essays. CONCLUSIONS This study shows the capacity of GPTZero to distinguish human-created from AI-generated writing. Use of AI can pose significant ethical challenges and carries a risk of inadvertent harm to certain applicants and erosion of trust in the application process. The authors suggest standardizing protocols regarding the use of AI prior to its integration in post-graduate medical education.
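The core comparison here is a two-sample test on detector scores. A minimal sketch of that analysis follows (Python with SciPy, although the study itself used R); every score and sentence-length value below is invented for illustration, not taken from the paper.

```python
# Illustrative sketch of the reported comparisons; all data are hypothetical.
# GPTZero "AI probability" outputs lie in [0, 1].
from scipy import stats

human_scores = [0.08, 0.12, 0.05, 0.20, 0.15, 0.09, 0.11, 0.07]  # 8 physician essays
ai_scores    = [0.91, 0.88, 0.97, 0.85, 0.93, 0.90, 0.95, 0.89]  # 8 AI-generated essays

# Independent two-sided t-test on AI probability scores, as in the abstract
t, p = stats.ttest_ind(human_scores, ai_scores)
print(f"t = {t:.2f}, p = {p:.4g}")

# Pearson correlation between a text feature (average sentence length,
# hypothetical values: human essays first, then AI essays) and AI probability
sent_len = [14, 15, 13, 18, 16, 14, 15, 13, 24, 23, 26, 22, 25, 24, 27, 23]
r, p_r = stats.pearsonr(sent_len, human_scores + ai_scores)
print(f"Pearson r = {r:.2f}, p = {p_r:.4g}")
```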
Affiliation(s)
- Hugh Burke
- Medical School, University of Minnesota (Twin Cities), Minneapolis, MN, USA
- Rebecca Kazinka
- Medical School, University of Minnesota (Twin Cities), Minneapolis, MN, USA
- Raghu Gandhi
- Abbott Northwestern Hospital, Minneapolis, MN, USA
- Aimee Murray
- Medical School, University of Minnesota (Twin Cities), Minneapolis, MN, USA
14
Gavarkovs AG, Kueper J, Arntfield R, Myslik F, Thompson K, McCauley W. Assessing Physician Motivation to Engage in Continuing Professional Development on Artificial Intelligence. J Contin Educ Health Prof 2025 (Epub ahead of print). [PMID: 39878545] [DOI: 10.1097/ceh.0000000000000594]
Abstract
To realize the transformative potential of artificial intelligence (AI) in health care, physicians must learn how to use AI-based tools effectively, safely, and equitably. Continuing professional development (CPD) activities are one way to learn how to do this. The purpose of this article is to describe a theory-based approach for assessing health professionals' motivation to participate in CPD on AI-based tools. An online survey, based on an AI competency framework developed from existing literature and expert consultations, was administered to practicing physicians in Ontario, Canada. Across eight subcompetencies for using AI-based tools (eg, appraise AI-based tools for their regulatory and legal status), the survey measured physicians' perception that they could successfully enact the competency, the importance of the competency in meeting their practice needs, and the desirability of participating in CPD activities on the competency. Motivation scores were calculated by multiplying the three scores together. Ninety-five physicians completed the survey. The highest motivation scores were for the subcompetency of identifying AI-based tools based on clinical needs; the lowest were for appraising tools' regulatory and legal status. All AI subcompetencies were generally rated as important, and CPD activities were generally perceived as desirable. This survey demonstrates the utility of a theory-based approach for assessing physicians' motivation to learn. Although the survey results are context specific, the approach may be useful for other CPD providers to support decision making about future AI-related CPD activities.
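The scoring rule described is a simple expectancy-value product. A minimal sketch, assuming a 1-5 rating scale and hypothetical responses; only the multiply-the-three-ratings rule comes from the article.

```python
# Sketch of the multiplicative motivation score described in the abstract.
# The 1-5 scale and the example ratings are assumptions for illustration.

def motivation_score(can_do: float, importance: float, desirability: float) -> float:
    """Expectancy-value style score: perceived ability x importance x CPD desirability."""
    return can_do * importance * desirability

# Hypothetical mean ratings for two of the eight subcompetencies
responses = {
    "identify AI tools based on clinical needs":   (4.5, 4.8, 4.6),
    "appraise tools' regulatory and legal status": (2.1, 3.9, 3.2),
}
for subcompetency, (c, i, d) in responses.items():
    print(f"{subcompetency}: {motivation_score(c, i, d):.1f}")
```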
Affiliation(s)
- Adam G. Gavarkovs: Research Associate, Division of Continuing Professional Development, Faculty of Medicine, University of British Columbia
- Jacqueline Kueper: Senior Staff Scientist, Scripps Research Translational Institute, La Jolla, CA
- Robert Arntfield: Professor, Department of Medicine, Schulich School of Medicine & Dentistry, Western University, London, Ontario
- Frank Myslik: Associate Professor, Department of Medicine, Schulich School of Medicine & Dentistry, Western University, London, Ontario
- Keith Thompson: Adjunct Professor, Department of Family Medicine, Schulich School of Medicine & Dentistry, Western University, London, Ontario
- William McCauley: Associate Dean, Continuing Professional Development, Schulich School of Medicine & Dentistry, Western University, London, Ontario
15
Schuitmaker L, Drogt J, Benders M, Jongsma K. Physicians' required competencies in AI-assisted clinical settings: a systematic review. Br Med Bull 2025; 153:ldae025. [PMID: 39821209] [PMCID: PMC11738171] [DOI: 10.1093/bmb/ldae025]
Abstract
BACKGROUND Utilizing Artificial Intelligence (AI) in clinical settings may offer significant benefits. A roadblock to the responsible implementation of medical AI is the remaining uncertainty regarding requirements for AI users at the bedside. An overview of the academic literature on human requirements for the adequate use of AI in clinical settings is therefore of significant value. SOURCES OF DATA A systematic review of the academic literature on the potential implications of medical AI for the required competencies of physicians. AREAS OF AGREEMENT Our findings emphasize the importance of physicians' critical human skills, alongside the growing demand for technical and digital competencies. AREAS OF CONTROVERSY Concrete guidance on physicians' required competencies in AI-assisted clinical settings remains ambiguous and requires further clarification and specification. Dissensus remains over whether physicians are adequately equipped to use and monitor AI in clinical settings, over who should take ownership of normative guidance, and over how physicians' skills should be trained. GROWING POINTS Our review offers a basis for subsequent research and normative analysis on the responsible use of AI in clinical settings. AREAS TIMELY FOR DEVELOPING RESEARCH Future research should clearly outline (i) how physicians must be(come) competent in working with AI in clinical settings, (ii) who or what should take ownership of embedding these competencies in a normative and regulatory framework, (iii) the conditions for achieving a reasonable amount of trust in AI, and (iv) the connection between trust and efficiency in patient care.
Affiliation(s)
- Lotte Schuitmaker
- Department of Bioethics & Health Humanities, University Medical Center Utrecht, Utrecht University, Universiteitsweg 100, 3584 CG Utrecht, the Netherlands
- Jojanneke Drogt
- Department of Bioethics & Health Humanities, University Medical Center Utrecht, Utrecht University, Universiteitsweg 100, 3584 CG Utrecht, the Netherlands
- Manon Benders
- Department of Neonatology, University Medical Center Utrecht, Utrecht University, Heidelberglaan 100, 3584 CX Utrecht, the Netherlands
- Karin Jongsma
- Department of Bioethics & Health Humanities, University Medical Center Utrecht, Utrecht University, Universiteitsweg 100, 3584 CG Utrecht, the Netherlands
16
Levkovich I. Is Artificial Intelligence the Next Co-Pilot for Primary Care in Diagnosing and Recommending Treatments for Depression? Med Sci (Basel) 2025; 13:8. [PMID: 39846703] [PMCID: PMC11755475] [DOI: 10.3390/medsci13010008]
Abstract
Depression poses significant challenges to global healthcare systems and impacts the quality of life of individuals and their family members. Recent advancements in artificial intelligence (AI) have had a transformative impact on the diagnosis and treatment of depression. These innovations have the potential to significantly enhance clinical decision-making processes and improve patient outcomes in healthcare settings. AI-powered tools can analyze extensive patient data-including medical records, genetic information, and behavioral patterns-to identify early warning signs of depression, thereby enhancing diagnostic accuracy. By recognizing subtle indicators that traditional assessments may overlook, these tools enable healthcare providers to make timely and precise diagnostic decisions that are crucial in preventing the onset or escalation of depressive episodes. In terms of treatment, AI algorithms can assist in personalizing therapeutic interventions by predicting the effectiveness of various approaches for individual patients based on their unique characteristics and medical history. This includes recommending tailored treatment plans that consider the patient's specific symptoms. Such personalized strategies aim to optimize therapeutic outcomes and improve the overall efficiency of healthcare. This theoretical review uniquely synthesizes current evidence on AI applications in primary care depression management, offering a comprehensive analysis of both diagnostic and treatment personalization capabilities. Alongside these advancements, we also address conflicting findings in the field and the biases that impose important limitations.
Affiliation(s)
- Inbar Levkovich
- Faculty of Education, Tel-Hai Academic College, Upper Galilee 2208, Israel
17
Benjamin J, Masters K, Agrawal A, MacNeill H, Mehta N. Twelve tips on applying AI tools in HPE scholarship using Boyer's model. Med Teach 2025:1-6 (Epub ahead of print). [PMID: 39791860] [DOI: 10.1080/0142159X.2024.2445058]
Abstract
AI has changed the landscape of health professions education. With the hype now behind us, we find ourselves in a phase of reckoning, considering what comes next: where do we start, and how can educators use these powerful tools for daily teaching and learning? We recognize the great need for training to use AI meaningfully for education. Boyer's model of scholarship provides a pedagogical approach for teaching with AI and for maximizing these efforts towards scholarship. By offering practical solutions and demonstrating their usefulness, this Twelve Tips article shows how to apply AI towards scholarship by leveraging the capabilities of the tools. Despite their potential, we recommend guarding against AI dependency and role-modeling responsible use of AI by evaluating AI outputs critically, with a commitment to accuracy and scrutiny for hallucinations and false citations.
Affiliation(s)
- Jennifer Benjamin
- Department of Pediatrics, Huffington Department of Education Innovation and Technology, Baylor College of Medicine, Houston, TX, USA
- Academic General Pediatrics, Texas Children's Hospital, Houston, TX, USA
- Ken Masters
- College of Medicine and Health Sciences, Sultan Qaboos University, Muscat, Oman
- Anoop Agrawal
- Department of Medicine, Baylor College of Medicine, Houston, TX, USA
- Heather MacNeill
- University of Toronto, Toronto, Canada
- Toronto Metropolitan University, Toronto, Canada
- Neil Mehta
- Cleveland Clinic Lerner College of Medicine of Case Western Reserve University School of Medicine, Cleveland, OH, USA
18
Janumpally R, Nanua S, Ngo A, Youens K. Generative artificial intelligence in graduate medical education. Front Med (Lausanne) 2025; 11:1525604. [PMID: 39867924] [PMCID: PMC11758457] [DOI: 10.3389/fmed.2024.1525604]
Abstract
Generative artificial intelligence (GenAI) is rapidly transforming various sectors, including healthcare and education. This paper explores the potential opportunities and risks of GenAI in graduate medical education (GME). We review the existing literature and provide commentary on how GenAI could impact GME, including five key areas of opportunity: electronic health record (EHR) workload reduction, clinical simulation, individualized education, research and analytics support, and clinical decision support. We then discuss significant risks, including inaccuracy and overreliance on AI-generated content, challenges to authenticity and academic integrity, potential biases in AI outputs, and privacy concerns. As GenAI technology matures, it will likely come to have an important role in the future of GME, but its integration should be guided by a thorough understanding of both its benefits and limitations.
Affiliation(s)
- Kenneth Youens
- Clinical Informatics Fellowship Program, Baylor Scott & White Health, Round Rock, TX, United States
19
Bland T. Enhancing Medical Student Engagement Through Cinematic Clinical Narratives: Multimodal Generative AI-Based Mixed Methods Study. JMIR Med Educ 2025; 11:e63865. [PMID: 39791333] [PMCID: PMC11751740] [DOI: 10.2196/63865]
Abstract
Background Medical students often struggle to engage with and retain complex pharmacology topics during their preclinical education. Traditional teaching methods can lead to passive learning and poor long-term retention of critical concepts. Objective This study aims to enhance the teaching of clinical pharmacology in medical school by using a multimodal generative artificial intelligence (genAI) approach to create compelling, cinematic clinical narratives (CCNs). Methods We transformed a standard clinical case into an engaging, interactive multimedia experience called "Shattered Slippers." This CCN used various genAI tools for content creation: GPT-4 for developing the storyline, Leonardo.ai and Stable Diffusion for generating images, Eleven Labs for creating audio narrations, and Suno for composing a theme song. The CCN integrated narrative styles and pop culture references to enhance student engagement. It was applied in teaching first-year medical students about immune system pharmacology. Student responses were assessed through the Situational Interest Survey for Multimedia and examination performance. The target audience comprised first-year medical students (n=40), 18 of whom responded to the survey. Results The study revealed a marked preference for the genAI-enhanced CCN over traditional teaching methods. Key findings include the majority of surveyed students preferring the CCN over traditional clinical cases (14/18), as well as high average scores for triggered situational interest (mean 4.58, SD 0.53), maintained interest (mean 4.40, SD 0.53), maintained-feeling interest (mean 4.38, SD 0.51), and maintained-value interest (mean 4.42, SD 0.54). Students achieved an average score of 88% on examination questions related to the CCN material, indicating successful learning and retention. Qualitative feedback highlighted increased engagement, improved recall, and appreciation for the narrative style and pop culture references. Conclusions This study demonstrates the potential of using a multimodal genAI-driven approach to create CCNs in medical education. The "Shattered Slippers" case effectively enhanced student engagement and promoted knowledge retention in complex pharmacological topics. This innovative method suggests a novel direction for curriculum development that could improve learning outcomes and student satisfaction in medical education. Future research should explore the long-term retention of knowledge and the applicability of learned material in clinical settings, as well as the potential for broader implementation of this approach across various medical education contexts.
Affiliation(s)
- Tyler Bland
- Department of Medical Education, University of Idaho, 875 Perimeter Drive MS 4061, WWAMI Medical Education, Moscow, ID, 83844-9803, United States
20
Soleas EK, Dittmer D, Waddington A, van Wylick R. Demystifying Artificial Intelligence for Health Care Professionals: Continuing Professional Development as an Agent of Transformation Leading to Artificial Intelligence-Augmented Practice. J Contin Educ Health Prof 2025; 45:52-55. [PMID: 39162740] [DOI: 10.1097/ceh.0000000000000571]
Abstract
The rapid rise of artificial intelligence (AI) is transforming society, yet the education of health care providers in this field is lagging. In health care, where AI promises to improve diagnostic accuracy and allow for personalized treatment, bridging providers' knowledge and skill gaps becomes vital. This article explores the challenges of AI education, such as the emergence of self-proclaimed experts during the pandemic, and the need for comprehensive training in AI language, mechanics, and ethics. It advocates for a new breed of health care professionals who are both practitioners and informaticians, capable, whether through initial training or continuing professional development, of harnessing AI's potential. Interdisciplinary collaboration, ongoing education, and incentives are proposed to ensure health care benefits from AI's trajectory. This perspective article explores the hurdles and the imperative of creating educational programming designed specifically to help health care professionals augment their practice with AI.
Affiliation(s)
- Eleftherios K Soleas
- Dr. Soleas: Director of Lifelong Learning and Innovation, Queen's Health Sciences, Kingston, Ontario, Canada. Dr. Dittmer: Physical Medicine and Rehabilitation, Grand River Hospital, Kitchener, Ontario, Canada. Dr. van Wylick: Vice-Dean, Health Sciences Education, Queen's Health Sciences, and Associate Professor, Pediatrics, Queen's Health Sciences, Kingston, Ontario, Canada. Dr. Waddington: Assistant Dean, Continuing Professional Development, Associate Professor, Obstetrics and Gynecology, Queen's Health Sciences, Kingston, Ontario, Canada
21
Hassan M, Ayad M, Nembhard C, Hayes-Dixon A, Lin A, Janjua M, Franko J, Tee M. Artificial Intelligence Compared to Manual Selection of Prospective Surgical Residents. JOURNAL OF SURGICAL EDUCATION 2025; 82:103308. [PMID: 39509905 DOI: 10.1016/j.jsurg.2024.103308] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/30/2024] [Revised: 09/26/2024] [Accepted: 10/02/2024] [Indexed: 11/15/2024]
Abstract
BACKGROUND Artificial Intelligence (AI) in the selection of residency program applicants is a new tool that is gaining traction, with the aim of screening high numbers of applicants while introducing objectivity and mitigating bias in a traditionally subjective process. This study aims to compare applicants screened by an AI software to a single Program Director (PD) for interview selection. METHODS A single PD at an ACGME-accredited, academic general surgery program screened applicants. A parallel screen by AI software, programmed by the same PD, was conducted on the same pool of applicants. Weighted preferences were assigned in the following order: personal statement, research, medical school rankings, letters of recommendation, personal qualities, board scores, graduate degree, geographic preference, past experiences, program signal, honor society membership, and multilingualism. Statistical analyses were conducted by chi-square, ANOVA, and independent two-sided t-tests. RESULTS Out of 1235 applications, 144 were PD-selected and 150 AI-selected (294 top applications). Twenty applications (7.3%) were both PD- and AI-selected, for a total analysis cohort of 274 prospective residents. We performed two analyses: 1) PD-selected vs. AI-selected vs. both and 2) PD-selected vs. AI-selected with the overlapping applicants censored. In the first analysis, AI selected significantly more White/Hispanic applicants (p < 0.001), fewer signals (p < 0.001), more AOA honor society members (p = 0.016), and more publications (p < 0.001). When overlapping PD and AI selections were censored, AI selected significantly more White/Hispanic applicants (p < 0.001), fewer signals (p < 0.001), more US medical graduates (p = 0.027), fewer applicants needing visa sponsorship (p = 0.01), younger applicants (p = 0.024), applicants with higher USMLE Step 2 CK scores (p < 0.001), and more publications (p < 0.001). CONCLUSIONS There was only a 7% overlap between PD-selected and AI-selected applicants for interview screening in the same applicant pool. Despite the same PD training the AI software, the two selection pools differed significantly. In its present state, AI may be utilized as a tool in resident application selection but should not completely replace human review. We recommend careful analysis of the performance of each AI model in the respective environment of each institution applying it, as it may alter the group of interviewees.
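As a rough illustration of the two quantitative ingredients reported here, the sketch below combines a weighted screening score (the feature names and weights merely mirror the priority order listed in the abstract, not the authors' actual model) with a chi-square comparison of two selection groups using SciPy; all numbers are invented.

```python
# Illustrative weighted screening score plus a chi-square test comparing
# a categorical trait across PD- and AI-selected groups. Weights, feature
# names, and counts are hypothetical.
import numpy as np
from scipy.stats import chi2_contingency

WEIGHTS = {  # descending priority, mirroring the order in the abstract
    "personal_statement": 12, "research": 11, "school_rank": 10,
    "letters": 9, "personal_qualities": 8, "board_scores": 7,
    "graduate_degree": 6, "geographic_pref": 5, "experiences": 4,
    "program_signal": 3, "honor_society": 2, "multilingual": 1,
}

def screening_score(applicant: dict) -> float:
    """Weighted sum of feature ratings normalized to the 0-1 range."""
    return sum(WEIGHTS[k] * applicant.get(k, 0.0) for k in WEIGHTS)

print(screening_score({"personal_statement": 0.9, "research": 0.7}))

# 2x2 contingency table: rows = PD- vs AI-selected, columns = applicants
# with vs without a program signal (counts invented).
table = np.array([[90, 54],
                  [60, 90]])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")
```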
Affiliation(s)
- Monalisa Hassan
- Department of Surgery, Howard University Hospital, Washington, District of Columbia; Department of Surgery, University of California, Davis, California
- Marco Ayad
- Department of Surgery, Howard University Hospital, Washington, District of Columbia
- Christine Nembhard
- Department of Surgery, Howard University Hospital, Washington, District of Columbia
- Andrea Hayes-Dixon
- Department of Surgery, Howard University Hospital, Washington, District of Columbia
- Anna Lin
- Department of Surgery, Howard University Hospital, Washington, District of Columbia
- Mahin Janjua
- Department of Surgery, Howard University Hospital, Washington, District of Columbia
- Jan Franko
- Department of Surgery, MercyOne Medical Center, Des Moines, Iowa
- May Tee
- Department of Surgery, Howard University Hospital, Washington, District of Columbia.
22
Chadha N, Popil E, Gregory J, Armstrong-Davies L, Justin G. How do we teach generative artificial intelligence to medical educators? Pilot of a faculty development workshop using ChatGPT. MEDICAL TEACHER 2025; 47:160-162. [PMID: 38648540 DOI: 10.1080/0142159x.2024.2341806] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/29/2023] [Accepted: 04/08/2024] [Indexed: 04/25/2024]
Abstract
PURPOSE Artificial intelligence (AI) is already impacting the practice of medicine, and it is therefore important for future healthcare professionals and medical educators to gain experience with the benefits, limitations, and applications of this technology. The purpose of this project was to develop, implement, and evaluate a faculty development workshop on generative AI using ChatGPT, to familiarise participants with AI. MATERIALS AND METHODS A brief workshop introducing faculty to generative AI and its applications in medical education was developed for preclinical clinical skills preceptors at our institution. During the workshop, faculty were given prompts to enter into ChatGPT that were relevant to their teaching activities, including generating differential diagnoses and providing feedback on student notes. Participant feedback was collected using an anonymous survey. RESULTS Of the 36 participants, 27 completed the survey. Prior to the workshop, 15% of participants indicated having used ChatGPT, and approximately half were familiar with AI applications in medical education. Interest in using the tool increased from 43% to 65% following the workshop, yet participants expressed concerns regarding accuracy and privacy with the use of ChatGPT. CONCLUSION This brief workshop serves as a model for faculty development in AI applications in medical education. The workshop increased interest in using ChatGPT for educational purposes and was well received.
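The two workshop tasks named in the abstract lend themselves to reusable prompt templates. The sketch below shows one hypothetical way to parameterize them; the wording is illustrative, not the prompts actually used in the workshop.

```python
# Hypothetical prompt templates for the two faculty tasks described:
# generating a differential diagnosis and giving feedback on a student note.
DDX_PROMPT = (
    "You are teaching first-year medical students. A patient presents with: "
    "{presentation}. List a prioritized differential diagnosis with a "
    "one-line teaching point for each item."
)
NOTE_FEEDBACK_PROMPT = (
    "Act as a clinical skills preceptor. Give structured feedback on this "
    "student note, covering organization, completeness, and clinical "
    "reasoning:\n\n{student_note}"
)

# A preceptor could paste the rendered prompt into ChatGPT directly.
print(DDX_PROMPT.format(presentation="fever, productive cough, 3 days"))
```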
Affiliation(s)
- Nisha Chadha
- Department of Ophthalmology and Medical Education, Icahn School of Medicine at Mount Sinai/New York Eye and Ear Infirmary, Eye and Vision Research Institute, New York, NY, USA
- Erik Popil
- Instructional Technology Group, Digital and Technology Partners, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Jill Gregory
- Instructional Technology Group, Digital and Technology Partners, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Lily Armstrong-Davies
- Instructional Technology Group, Digital and Technology Partners, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Gale Justin
- Department of Ophthalmology and Medical Education, Icahn School of Medicine at Mount Sinai/New York Eye and Ear Infirmary, Eye and Vision Research Institute, New York, NY, USA
23
Franco D’Souza R, Mathew M, Mishra V, Surapaneni KM. Twelve tips for addressing ethical concerns in the implementation of artificial intelligence in medical education. MEDICAL EDUCATION ONLINE 2024; 29:2330250. [PMID: 38566608 PMCID: PMC10993743 DOI: 10.1080/10872981.2024.2330250] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/30/2024] [Accepted: 03/08/2024] [Indexed: 04/04/2024]
Abstract
Artificial Intelligence (AI) holds immense potential for revolutionizing medical education and healthcare. Despite its proven benefits, the full integration of AI faces hurdles, with ethical concerns standing out as a key obstacle. Thus, educators should be equipped to address the ethical issues that arise and ensure the seamless integration and sustainability of AI-based interventions. This article presents twelve essential tips for addressing the major ethical concerns in the use of AI in medical education. These include emphasizing transparency, addressing bias, validating content, prioritizing data protection, obtaining informed consent, fostering collaboration, training educators, empowering students, monitoring AI use regularly, establishing accountability, adhering to standard guidelines, and forming an ethics committee to address the issues that arise in the implementation of AI. By adhering to these tips, medical educators and other stakeholders can foster a responsible and ethical integration of AI in medical education, ensuring its long-term success and positive impact.
Affiliation(s)
- Russell Franco D’Souza
- Department of Education, UNESCO Chair in Bioethics, Melbourne, Australia
- Department of Organisational Psychological Medicine, International Institute of Organisational Psychological Medicine, Melbourne, Australia
- Mary Mathew
- Department of Pathology, Kasturba Medical College, Manipal, Manipal Academy of Higher Education (MAHE), Manipal, India
- Vedprakash Mishra
- School of Higher Education and Research, Datta Meghe Institute of Higher Education and Research (Deemed to be University), Nagpur, India
- Krishna Mohan Surapaneni
- Department of Biochemistry, Panimalar Medical College Hospital & Research Institute, Chennai, India
- Department of Medical Education, Panimalar Medical College Hospital & Research Institute, Chennai, India
24
De Busser B, Roth L, De Loof H. The role of large language models in self-care: a study and benchmark on medicines and supplement guidance accuracy. Int J Clin Pharm 2024:10.1007/s11096-024-01839-2. [PMID: 39644377 DOI: 10.1007/s11096-024-01839-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2024] [Accepted: 11/12/2024] [Indexed: 12/09/2024]
Abstract
BACKGROUND The recent surge in the capabilities of artificial intelligence systems, particularly large language models, is also impacting the medical and pharmaceutical field in a major way. Beyond specialized uses in diagnostics and data discovery, these tools have now become accessible to the general public. AIM The study aimed to critically analyse the current performance of large language models in answering patient's self-care questions regarding medications and supplements. METHOD Answers from six major language models were analysed for correctness, language-independence, context-sensitivity, and reproducibility using a newly developed reference set of questions and a scoring matrix. RESULTS The investigated large language models are capable of answering a clear majority of self-care questions accurately, providing relevant health information. However, substantial variability in the responses, including potentially unsafe advice, was observed, influenced by language, question structure, user context and time. GPT 4.0 scored highest on average, while GPT 3.5, Gemini, and Gemini Advanced had varied scores. Responses were context and language sensitive. In terms of consistency over time, Perplexity had the worst performance. CONCLUSION Given the high-quality output of large language models, their potential in self-care applications is undeniable. The newly created benchmark can facilitate further validation and guide the establishment of strict safeguards to combat the sizable risk of misinformation in order to reach a more favourable risk/benefit ratio when this cutting-edge technology is used by patients.
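A scoring matrix of the kind the abstract describes can be represented as a long table of (model, question, dimension) scores and aggregated per model. The sketch below uses pandas; the dimension names, scale, and scores are invented placeholders, not the authors' instrument.

```python
# Toy scoring matrix for LLM self-care answers: one row per model/question,
# scored on invented 0-2 dimensions, then averaged per model.
import pandas as pd

scores = pd.DataFrame(
    [
        ("GPT-4.0",    "ibuprofen_max_dose", 2, 2, 2),
        ("GPT-3.5",    "ibuprofen_max_dose", 1, 2, 1),
        ("Gemini",     "ibuprofen_max_dose", 2, 1, 1),
        ("Perplexity", "ibuprofen_max_dose", 1, 0, 1),
    ],
    columns=["model", "question", "correctness",
             "reproducibility", "context_sensitivity"],
)

# Mean score per model across questions and dimensions, highest first.
summary = (scores.drop(columns="question")
                 .groupby("model").mean()
                 .assign(overall=lambda d: d.mean(axis=1))
                 .sort_values("overall", ascending=False))
print(summary)
```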
Affiliation(s)
- Branco De Busser
- Laboratory of Physiopharmacology, University of Antwerp, Universiteitsplein 1, 2610, Antwerp, Belgium
- Lynn Roth
- Laboratory of Physiopharmacology, University of Antwerp, Universiteitsplein 1, 2610, Antwerp, Belgium
- Hans De Loof
- Laboratory of Physiopharmacology, University of Antwerp, Universiteitsplein 1, 2610, Antwerp, Belgium.
25
Weissman GE, Zwaan L, Bell SK. Diagnostic scope: the AI can't see what the mind doesn't know. Diagnosis (Berl) 2024:dx-2024-0151. [PMID: 39624993 DOI: 10.1515/dx-2024-0151] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2024] [Accepted: 11/20/2024] [Indexed: 01/27/2025]
Abstract
BACKGROUND Diagnostic scope is the range of diagnoses found in a clinical setting. Although the diagnostic scope is an essential feature of training and evaluating artificial intelligence (AI) systems to promote diagnostic excellence, its impact on AI systems and the diagnostic process remains under-explored. CONTENT We define the concept of diagnostic scope, discuss its nuanced role in building safe and effective AI-based diagnostic decision support systems, review current challenges to measurement and use, and highlight knowledge gaps for future research. SUMMARY The diagnostic scope parallels the differential diagnosis, although the latter operates at the level of an encounter and the former at the level of a clinical setting. Therefore, diagnostic scope will vary by local characteristics including geography, population, and resources. The true, observed, and considered scope in each setting may also diverge, posing challenges for clinicians, patients, and AI developers, while also highlighting opportunities to improve safety. Further work is needed to systematically define and measure diagnostic scope in terms that are accurate, equitable, and meaningful at the bedside. AI tools tailored to a particular setting, such as a primary care clinic or intensive care unit, will each require specifying and measuring the appropriate diagnostic scope. OUTLOOK AI tools will promote diagnostic excellence if they are aligned with patient and clinician needs and trained on an accurately measured diagnostic scope. A careful understanding and rigorous evaluation of the diagnostic scope in each clinical setting will promote optimal care through human-AI collaborations in the diagnostic process.
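Although the paper is conceptual, the "observed" diagnostic scope is straightforward to operationalize as the distinct diagnoses (and their frequencies) recorded in a setting over some window. A toy sketch, with hypothetical field names and codes:

```python
# Toy measurement of observed diagnostic scope per clinical setting:
# the distinct diagnosis codes seen there, with frequencies.
from collections import Counter

encounters = [  # one diagnosis code per encounter (codes illustrative)
    {"setting": "primary_care", "dx": "J06.9"},
    {"setting": "primary_care", "dx": "I10"},
    {"setting": "primary_care", "dx": "I10"},
    {"setting": "icu", "dx": "A41.9"},
    {"setting": "icu", "dx": "J96.00"},
]

def observed_scope(records, setting):
    """Return diagnosis frequencies for one setting."""
    return Counter(r["dx"] for r in records if r["setting"] == setting)

scope = observed_scope(encounters, "primary_care")
print(f"primary_care: {len(scope)} distinct diagnoses -> {dict(scope)}")
# An AI diagnostic tool should be specified and evaluated against the
# scope of the setting where it will be deployed.
```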
Affiliation(s)
- Gary E Weissman
- Palliative and Advanced Illness Research (PAIR) Center, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
- Pulmonary, Allergy, and Critical Care Division, Department of Medicine, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
- Division of Informatics, Department of Biostatistics, Epidemiology & Informatics, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
- Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia, PA, USA
- Laura Zwaan
- Institute of Medical Education Research, Erasmus Medical Center, Rotterdam, The Netherlands
- Sigall K Bell
- Department of Medicine, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA
26
Collins BX, Bélisle-Pipon JC, Evans BJ, Ferryman K, Jiang X, Nebeker C, Novak L, Roberts K, Were M, Yin Z, Ravitsky V, Coco J, Hendricks-Sturrup R, Williams I, Clayton EW, Malin BA. Addressing ethical issues in healthcare artificial intelligence using a lifecycle-informed process. JAMIA Open 2024; 7:ooae108. [PMID: 39553826 PMCID: PMC11565898 DOI: 10.1093/jamiaopen/ooae108] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2024] [Revised: 08/19/2024] [Accepted: 10/04/2024] [Indexed: 11/19/2024] Open
Abstract
Objectives Artificial intelligence (AI) proceeds through an iterative and evaluative process of development, use, and refinement which may be characterized as a lifecycle. Within this context, stakeholders can vary in their interests and perceptions of the ethical issues associated with this rapidly evolving technology in ways that can fail to identify and avert adverse outcomes. Identifying issues throughout the AI lifecycle in a systematic manner can facilitate better-informed ethical deliberation. Materials and Methods We analyzed lifecycles in the current literature on the ethical issues of AI in healthcare to identify themes, which we consolidated into a more comprehensive lifecycle. We then considered the potential benefits and harms of AI through this lifecycle to identify ethical questions that can arise at each step and to identify where conflicts and errors could arise in ethical analysis. We illustrated the approach in 3 case studies that highlight how different ethical dilemmas arise at different points in the lifecycle. Results, Discussion, and Conclusion Through case studies, we show how a systematic lifecycle-informed approach to the ethical analysis of AI enables mapping of the effects of AI onto different steps to guide deliberations on benefits and harms. The lifecycle-informed approach has broad applicability to different stakeholders and can facilitate communication on ethical issues for patients, healthcare professionals, research participants, and other stakeholders.
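One way to make a lifecycle-informed checklist concrete is as a mapping from lifecycle steps to the ethical questions raised at each step. The sketch below is an illustration with paraphrased steps and questions, not the authors' consolidated lifecycle:

```python
# Illustrative lifecycle-to-ethics mapping; step names and questions
# paraphrase common themes rather than the paper's consolidated lifecycle.
AI_LIFECYCLE_ETHICS = {
    "problem_selection": ["Whose needs define the problem?"],
    "data_collection": ["Is consent meaningful?", "Who is missing from the data?"],
    "model_development": ["How are bias and accuracy measured and mitigated?"],
    "deployment": ["Who is accountable when the system errs?"],
    "monitoring": ["How are harms detected and reported after deployment?"],
}

def ethics_review(step: str) -> list[str]:
    """Questions to deliberate at a given lifecycle step."""
    return AI_LIFECYCLE_ETHICS.get(step, ["Unknown step: review manually."])

for question in ethics_review("data_collection"):
    print("-", question)
```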
Affiliation(s)
- Benjamin X Collins
- Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN 37203, United States
- Center for Biomedical Ethics and Society, Vanderbilt University Medical Center, Nashville, TN 37203, United States
- Department of Medicine, Vanderbilt University Medical Center, Nashville, TN 37232, United States
- Barbara J Evans
- Levin College of Law, University of Florida, Gainesville, FL 32611, United States
- Herbert Wertheim College of Engineering, University of Florida, Gainesville, FL 32611, United States
- Kadija Ferryman
- Berman Institute of Bioethics, Johns Hopkins University, Baltimore, MD 21205, United States
- Xiaoqian Jiang
- McWilliams School of Biomedical Informatics, UTHealth Houston, Houston, TX 77030, United States
- Camille Nebeker
- Herbert Wertheim School of Public Health and Human Longevity Science, University of California, San Diego, La Jolla, CA 92093, United States
- Laurie Novak
- Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN 37203, United States
- Kirk Roberts
- McWilliams School of Biomedical Informatics, UTHealth Houston, Houston, TX 77030, United States
- Martin Were
- Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN 37203, United States
- Department of Medicine, Vanderbilt University Medical Center, Nashville, TN 37232, United States
- Zhijun Yin
- Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN 37203, United States
- Department of Computer Science, Vanderbilt University, Nashville, TN 37212, United States
- Joseph Coco
- Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN 37203, United States
- Rachele Hendricks-Sturrup
- National Alliance against Disparities in Patient Health, Woodbridge, VA 22191, United States
- Margolis Center for Health Policy, Duke University, Washington, DC 20004, United States
- Ishan Williams
- School of Nursing, University of Virginia, Charlottesville, VA 22903, United States
- Ellen W Clayton
- Center for Biomedical Ethics and Society, Vanderbilt University Medical Center, Nashville, TN 37203, United States
- Law School, Vanderbilt University, Nashville, TN 37203, United States
- Bradley A Malin
- Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN 37203, United States
- Department of Computer Science, Vanderbilt University, Nashville, TN 37212, United States
27
Shah RM, Shah KM, Bahar P, James CA. Preparing Physicians of the Future: Incorporating Data Science into Medical Education. MEDICAL SCIENCE EDUCATOR 2024; 34:1565-1570. [PMID: 39758456 PMCID: PMC11699019 DOI: 10.1007/s40670-024-02137-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 08/02/2024] [Indexed: 01/07/2025]
Abstract
The recent excitement surrounding artificial intelligence (AI) in health care underscores the importance of physician engagement with new technologies. Future clinicians must develop a strong understanding of data science (DS) to further enhance patient care. However, DS remains largely absent from medical school curricula, even though it is recognized as vital by medical students and residents alike. Here, we evaluate the current DS landscape in medical education and illustrate its impact in medicine through examples in pathology classification and sepsis detection. We also explore reasons for the exclusion of DS and propose solutions to integrate it into existing medical education frameworks.
Affiliation(s)
- Rishi M. Shah
- Department of Applied Mathematics, Yale College, New Haven, CT USA
- Kavya M. Shah
- Department of Clinical Neurosciences, University of Cambridge, Hills Road, Cambridge, England CB2 0QQ UK
- Piroz Bahar
- University of Michigan Medical School, Ann Arbor, MI USA
- Cornelius A. James
- Department of Internal Medicine, Pediatrics, and Learning Health Sciences, University of Michigan Medical School, Ann Arbor, MI USA
28
Reading Turchioe M, Desai P, Harkins S, Kim J, Kumar S, Zhang Y, Joly R, Pathak J, Hermann A, Benda N. Differing perspectives on artificial intelligence in mental healthcare among patients: a cross-sectional survey study. Front Digit Health 2024; 6:1410758. [PMID: 39679142 PMCID: PMC11638230 DOI: 10.3389/fdgth.2024.1410758] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2024] [Accepted: 10/14/2024] [Indexed: 12/17/2024] Open
Abstract
Introduction Artificial intelligence (AI) is being developed for mental healthcare, but patients' perspectives on its use are unknown. This study examined differences in attitudes towards AI being used in mental healthcare by history of mental illness, current mental health status, demographic characteristics, and social determinants of health. Methods We conducted a cross-sectional survey of an online sample of 500 adults asking about general perspectives, comfort with AI, specific concerns, explainability and transparency, responsibility and trust, and the importance of relevant bioethical constructs. Results Multiple vulnerable subgroups perceive potential harms related to AI being used in mental healthcare, place importance on upholding bioethical constructs, and would blame or reduce trust in multiple parties, including mental healthcare professionals, if harm or conflicting assessments resulted from AI. Discussion Future research examining strategies for ethical AI implementation and supporting clinician AI literacy is critical for optimal patient and clinician interactions with AI in mental healthcare.
Affiliation(s)
- Pooja Desai
- Department of Biomedical Informatics, Columbia University, New York, NY, United States
- Sarah Harkins
- Columbia University School of Nursing, New York, NY, United States
- Jessica Kim
- Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, United States
- Shiveen Kumar
- College of Agriculture and Life Sciences, Cornell University, Ithaca, NY, United States
- Yiye Zhang
- Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, United States
- Rochelle Joly
- Department of Obstetrics and Gynecology, Weill Cornell Medicine, New York, NY, United States
- Jyotishman Pathak
- Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, United States
- Alison Hermann
- Department of Psychiatry, Weill Cornell Medicine, New York, NY, United States
- Natalie Benda
- Columbia University School of Nursing, New York, NY, United States
29
Anderson HD, Kwon S, Linnebur LA, Valdez CA, Linnebur SA. Pharmacy student use of ChatGPT: A survey of students at a U.S. School of Pharmacy. CURRENTS IN PHARMACY TEACHING & LEARNING 2024; 16:102156. [PMID: 39029382 DOI: 10.1016/j.cptl.2024.102156] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/10/2024] [Revised: 07/01/2024] [Accepted: 07/02/2024] [Indexed: 07/21/2024]
Abstract
OBJECTIVE To learn how students in an accredited PharmD program in the United States are using ChatGPT for personal, academic, and clinical reasons, and whether students think ChatGPT training should be incorporated into their program's curriculum. METHODS In August 2023, an 18-item survey was developed, pilot tested, and sent to all students who were enrolled during the Spring 2023 semester in the entry-level PharmD program at the University of Colorado. E-mail addresses were separated from survey responses to maintain anonymity. Responses were described using descriptive statistics. RESULTS In total, 206 pharmacy students responded to the survey, a 49% response rate. Nearly one-half (48.5%) indicated they had used ChatGPT for personal reasons; 30.2% had used it for academic reasons; and 7.5% had used it for clinical reasons. The most common personal use for ChatGPT was answering questions and looking up information (67.0%). The top academic reason for using ChatGPT was summarizing information or a body of text (42.6%), while the top clinical reason was simplifying a complex topic (53.3%). Most respondents (61.8%) indicated they would be interested in learning about how ChatGPT could help them in pharmacy school, and 28.1% thought ChatGPT training should be incorporated into their pharmacy curriculum. CONCLUSION At the time of the survey, ChatGPT was being used by approximately one-half of our pharmacy student respondents for personal, academic, or clinical reasons. Overall, many students indicated they want to learn how to use ChatGPT to help them with their education and think ChatGPT training should be integrated into their curriculum.
Affiliation(s)
- Heather D Anderson
- University of Colorado Anschutz Medical Campus, Skaggs School of Pharmacy and Pharmaceutical Sciences, Department of Clinical Pharmacy, 12850 E. Montview Blvd, Mail stop C238, Aurora, CO 80045, United States of America.
- Sue Kwon
- University of Colorado Anschutz Medical Campus, Skaggs School of Pharmacy and Pharmaceutical Sciences, Department of Clinical Pharmacy, 12850 E. Montview Blvd, Mail stop C238, Aurora, CO 80045, United States of America.
- Lauren A Linnebur
- University of Colorado Anschutz Medical Campus, School of Medicine, Division of Geriatric Medicine, 12631 East 17th Avenue, Suite 8111, Aurora, CO 80045, United States of America.
- Connie A Valdez
- University of Colorado Anschutz Medical Campus, Skaggs School of Pharmacy and Pharmaceutical Sciences, Department of Clinical Pharmacy, 12850 E. Montview Blvd, Mail stop C238, Aurora, CO 80045, United States of America.
- Sunny A Linnebur
- University of Colorado Anschutz Medical Campus, Skaggs School of Pharmacy and Pharmaceutical Sciences, Department of Clinical Pharmacy, 12850 E. Montview Blvd, Mail stop C238, Aurora, CO 80045, United States of America.
30
Labkoff S, Oladimeji B, Kannry J, Solomonides A, Leftwich R, Koski E, Joseph AL, Lopez-Gonzalez M, Fleisher LA, Nolen K, Dutta S, Levy DR, Price A, Barr PJ, Hron JD, Lin B, Srivastava G, Pastor N, Luque US, Bui TTT, Singh R, Williams T, Weiner MG, Naumann T, Sittig DF, Jackson GP, Quintana Y. Toward a responsible future: recommendations for AI-enabled clinical decision support. J Am Med Inform Assoc 2024; 31:2730-2739. [PMID: 39325508 PMCID: PMC11491642 DOI: 10.1093/jamia/ocae209] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2024] [Revised: 07/08/2024] [Accepted: 08/13/2024] [Indexed: 09/27/2024] Open
Abstract
BACKGROUND Integrating artificial intelligence (AI) in healthcare settings has the potential to benefit clinical decision-making. Addressing challenges such as ensuring trustworthiness, mitigating bias, and maintaining safety is paramount. The lack of established methodologies for pre- and post-deployment evaluation of AI tools regarding crucial attributes such as transparency, performance monitoring, and adverse event reporting makes this situation challenging. OBJECTIVES This paper offers practical suggestions for creating methods, rules, and guidelines to ensure that AI-based clinical decision support (CDS) systems are developed, tested, supervised, and used safely and effectively for patients. MATERIALS AND METHODS In May 2023, the Division of Clinical Informatics at Beth Israel Deaconess Medical Center and the American Medical Informatics Association co-sponsored a working group on AI in healthcare. Four webinars on AI topics were held in August 2023, followed by a 2-day consensus-building workshop in September 2023. The event included over 200 industry stakeholders, including clinicians, software developers, academics, ethicists, attorneys, government policy experts, scientists, and patients. The goal was to identify challenges associated with the trusted use of AI-enabled CDS in medical practice. Key issues were identified, and solutions were proposed through qualitative analysis and a 4-month iterative consensus process. RESULTS Our work culminated in several key recommendations: (1) building safe and trustworthy systems; (2) developing validation, verification, and certification processes for AI-CDS systems; (3) providing a means of safety monitoring and reporting at the national level; and (4) ensuring that appropriate documentation and end-user training are provided. DISCUSSION AI-enabled clinical decision support (AI-CDS) systems promise to revolutionize healthcare decision-making, necessitating a comprehensive framework for their development, implementation, and regulation that emphasizes trustworthiness, transparency, and safety. This framework encompasses various aspects including model training, explainability, validation, certification, monitoring, and continuous evaluation, while also addressing challenges such as data privacy, fairness, and the need for regulatory oversight to ensure responsible integration of AI into clinical workflows. CONCLUSIONS Achieving responsible AI-CDS systems requires a collective effort from many healthcare stakeholders. This involves implementing robust safety, monitoring, and transparency measures while fostering innovation. Future steps include testing and piloting proposed trust mechanisms, such as safety reporting protocols, and establishing best practice guidelines.
Affiliation(s)
- Steven Labkoff
- Quantori, Boston, MA 02142, United States
- Division of Clinical Informatics, Department of Medicine, Beth Israel Deaconess Medical Center, Boston, MA 02215, United States
- Joseph Kannry
- Division of General Internal Medicine, Department of Medicine, Icahn School of Medicine at Mount Sinai, New York, NY 10029, United States
- Russell Leftwich
- Department of Biomedical Informatics, Vanderbilt University, Nashville, TN, United States
- Eileen Koski
- IBM Research, Yorktown Heights, NY, United States
- Amanda L Joseph
- School of Health Information Science, University of Victoria, Victoria, BC, Canada
- Lee A Fleisher
- Anesthesiology and Critical Care, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, United States
- Sayon Dutta
- Department of Emergency Medicine, Massachusetts General Hospital, Boston, MA, United States
- Clinical Informatics, Mass General Brigham Digital, Boston, MA, United States
- Harvard Medical School, Boston, MA, United States
- Deborah R Levy
- Department of Medicine, Pain Research Informatics Multimorbidities and Epidemiology (PRIME) Center, VA-Connecticut Healthcare System, West Haven, CT, United States
- Department of Biomedical Informatics and Data Sciences, Yale School of Medicine, New Haven, CT, United States
- Amy Price
- The Dartmouth Institute for Health Policy and Clinical Practice, Geisel School of Medicine at Dartmouth, Hanover, NH, United States
- BMJ, London, United Kingdom
- Paul J Barr
- The Dartmouth Institute for Health Policy and Clinical Practice, Geisel School of Medicine at Dartmouth, Hanover, NH, United States
- Jonathan D Hron
- Department of Pediatrics, Division of General Pediatrics, Boston Children’s Hospital, Boston, MA 02115, United States
- Department of Pediatrics, Harvard Medical School, Boston, MA 02115, United States
- Baihan Lin
- Departments of AI, Psychiatry, and Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, United States
- Berkman Klein Center for Internet and Society, Harvard Law School, Cambridge, MA, United States
- Gyana Srivastava
- Division of Clinical Informatics, Department of Medicine, Beth Israel Deaconess Medical Center, Boston, MA 02215, United States
- Harvard School of Public Health, Boston, MA 02115, United States
- Tien Thi Thuy Bui
- Massachusetts College of Pharmacy and Health Sciences, Boston, MA, United States
- Reva Singh
- American Medical Informatics Association, Washington, DC, United States
- Tayler Williams
- American Medical Informatics Association, Washington, DC, United States
- Mark G Weiner
- Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, United States
- Dean F Sittig
- Department of Clinical and Health Informatics, University of Texas Health Science Center, Houston, TX, United States
- Gretchen Purcell Jackson
- Intuitive Surgical, Nashville, TN, United States
- Department of Pediatrics and Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN, United States
- Yuri Quintana
- Division of Clinical Informatics, Department of Medicine, Beth Israel Deaconess Medical Center, Boston, MA 02215, United States
- School of Health Information Science, University of Victoria, Victoria, BC, Canada
- Harvard Medical School, Boston, MA, United States
- Homewood Research Institute, Guelph, ON, Canada
31
Cross S, Bell I, Nicholas J, Valentine L, Mangelsdorf S, Baker S, Titov N, Alvarez-Jimenez M. Use of AI in Mental Health Care: Community and Mental Health Professionals Survey. JMIR Ment Health 2024; 11:e60589. [PMID: 39392869 PMCID: PMC11488652 DOI: 10.2196/60589] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/16/2024] [Accepted: 07/30/2024] [Indexed: 10/13/2024] Open
Abstract
Background Artificial intelligence (AI) has been increasingly recognized as a potential solution to address mental health service challenges by automating tasks and providing new forms of support. Objective This study is the first in a series aiming to estimate the current rates of AI technology use as well as perceived benefits, harms, and risks experienced by community members (CMs) and mental health professionals (MHPs). Methods This study involved 2 web-based surveys conducted in Australia. The surveys collected data on demographics, technology comfort, attitudes toward AI, specific AI use cases, and experiences of benefits and harms from AI use. Descriptive statistics were calculated, and thematic analysis of open-ended responses was conducted. Results The final sample consisted of 107 CMs and 86 MHPs. General attitudes toward AI varied, with CMs reporting neutral and MHPs reporting more positive attitudes. Regarding AI usage, 28% (30/107) of CMs used AI, primarily for quick support (18/30, 60%) and as a personal therapist (14/30, 47%). Among MHPs, 43% (37/86) used AI, mostly for research (24/37, 65%) and report writing (20/37, 54%). While the majority found AI to be generally beneficial (23/30, 77% of CMs and 34/37, 92% of MHPs), specific harms and concerns were experienced by 47% (14/30) of CMs and 51% (19/37) of MHPs. There was an equal mix of positive and negative sentiment toward the future of AI in mental health care in open feedback. Conclusions Commercial AI tools are increasingly being used by CMs and MHPs. Respondents believe AI will offer future advantages for mental health care in terms of accessibility, cost reduction, personalization, and work efficiency. However, they were equally concerned about reducing human connection, ethics, privacy and regulation, medical errors, potential for misuse, and data security. Despite the immense potential, integration into mental health systems must be approached with caution, addressing legal and ethical concerns while developing safeguards to mitigate potential harms. Future surveys are planned to track the use and acceptability of AI and associated issues over time.
Affiliation(s)
- Shane Cross
- Orygen Digital, 35 Poplar Rd, Parkville, Melbourne, 3052, Australia, 61 3 9966 9383
- Centre for Youth Mental Health, University of Melbourne, Melbourne, Australia
- Imogen Bell
- Orygen Digital, 35 Poplar Rd, Parkville, Melbourne, 3052, Australia, 61 3 9966 9383
- Centre for Youth Mental Health, University of Melbourne, Melbourne, Australia
- Jennifer Nicholas
- Orygen Digital, 35 Poplar Rd, Parkville, Melbourne, 3052, Australia, 61 3 9966 9383
- Centre for Youth Mental Health, University of Melbourne, Melbourne, Australia
- Lee Valentine
- Orygen Digital, 35 Poplar Rd, Parkville, Melbourne, 3052, Australia, 61 3 9966 9383
- Centre for Youth Mental Health, University of Melbourne, Melbourne, Australia
- Shaminka Mangelsdorf
- Orygen Digital, 35 Poplar Rd, Parkville, Melbourne, 3052, Australia, 61 3 9966 9383
- Centre for Youth Mental Health, University of Melbourne, Melbourne, Australia
- Simon Baker
- Orygen Digital, 35 Poplar Rd, Parkville, Melbourne, 3052, Australia, 61 3 9966 9383
- Nick Titov
- School of Psychological Sciences, Macquarie University, Sydney, Australia
- MindSpot, Sydney, Australia
- Mario Alvarez-Jimenez
- Orygen Digital, 35 Poplar Rd, Parkville, Melbourne, 3052, Australia, 61 3 9966 9383
- Centre for Youth Mental Health, University of Melbourne, Melbourne, Australia
32
McCoy LG, Ci Ng FY, Sauer CM, Yap Legaspi KE, Jain B, Gallifant J, McClurkin M, Hammond A, Goode D, Gichoya J, Celi LA. Understanding and training for the impact of large language models and artificial intelligence in healthcare practice: a narrative review. BMC MEDICAL EDUCATION 2024; 24:1096. [PMID: 39375721 PMCID: PMC11459854 DOI: 10.1186/s12909-024-06048-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/15/2024] [Accepted: 09/18/2024] [Indexed: 10/09/2024]
Abstract
Reports of Large Language Models (LLMs) passing board examinations have spurred medical enthusiasm for their clinical integration. Through a narrative review, we reflect upon the skill shifts necessary for clinicians to succeed in an LLM-enabled world, achieving benefits while minimizing risks. We suggest how medical education must evolve to prepare clinicians capable of navigating human-AI systems.
Affiliation(s)
- Liam G McCoy
- Faculty of Medicine and Dentistry, University of Alberta, Edmonton, AB, Canada
- Faye Yu Ci Ng
- Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Christopher M Sauer
- Institute for Artificial Intelligence in Medicine, University Hospital Essen, Essen, Germany.
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, MA, USA.
- Katelyn Edelwina Yap Legaspi
- Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA, USA
- University of the Philippines Manila College of Medicine, Ermita Manila, Philippines
- Bhav Jain
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, MA, USA
- Jack Gallifant
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, MA, USA
- Department of Critical Care, Guy's and St Thomas' NHS Foundation Trust, London, UK
- Michael McClurkin
- Department of Psychiatry, Yale School of Medicine, New Haven, CT, USA
- Alessandro Hammond
- Harvard University, Cambridge, MA, USA
- Division of Hematology/Oncology, Department of Pediatric Oncology, Boston Children's Hospital, Boston, MA, USA
- Deirdre Goode
- Department of Emergency Medicine, Beth Israel Deaconess Medical Center, Boston, MA, USA
- Judy Gichoya
- Department of Radiology, Emory School of Medicine, Atlanta, GA, USA
- Leo Anthony Celi
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, MA, USA
33
Denham MW, Rieth KKS, Rameau A. The Ethics of Incorporating Artificial Intelligence Technologies in Prognostic Clinical Decision-Making in Otolaryngology. Otolaryngol Head Neck Surg 2024; 171:1236-1239. [PMID: 38943455 DOI: 10.1002/ohn.886] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2023] [Revised: 05/17/2024] [Accepted: 06/15/2024] [Indexed: 07/01/2024]
Affiliation(s)
- Michael W Denham
- Department of Otolaryngology-Head and Neck Surgery, Columbia University Vagelos College of Physicians and Surgeons, NewYork-Presbyterian/Columbia University Irving Medical Center, New York City, New York, USA
- Katherine K S Rieth
- Department of Otolaryngology, University of Rochester Medical Center, Rochester, New York, USA
- Anaïs Rameau
- Department of Otolaryngology-Head and Neck Surgery, Weill Cornell Medicine, Sean Parker Institute for the Voice, New York City, New York, USA
34
Reading Turchioe M, Kisselev S, Van Bulck L, Bakken S. Increasing Generative Artificial Intelligence Competency among Students Enrolled in Doctoral Nursing Research Coursework. Appl Clin Inform 2024; 15:842-851. [PMID: 39053615 PMCID: PMC11483171 DOI: 10.1055/a-2373-3151] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2024] [Accepted: 07/24/2024] [Indexed: 07/27/2024] Open
Abstract
BACKGROUND Generative artificial intelligence (AI) tools may soon be integrated into health care practice and research. Nurses in leadership roles, many of whom are doctorally prepared, will need to determine whether and how to integrate them in a safe and useful way. OBJECTIVE This study aimed to develop and evaluate a brief intervention to increase PhD nursing students' knowledge of appropriate applications for using generative AI tools in health care. METHODS We created didactic lectures and laboratory-based activities to introduce generative AI to students enrolled in a nursing PhD data science and visualization course. Students were provided with a subscription to Chat Generative Pretrained Transformer (ChatGPT) 4.0, a general-purpose generative AI tool, for use in and outside the class. During the didactic portion, we described generative AI and its current and potential future applications in health care, including examples of appropriate and inappropriate applications. In the laboratory sessions, students were given three tasks representing different use cases of generative AI in health care practice and research (clinical decision support, patient decision support, and scientific communication) and asked to engage with ChatGPT on each. Students (n = 10) independently wrote a brief reflection for each task evaluating safety (accuracy, hallucinations) and usability (ease of use, usefulness, and intention to use in the future). Reflections were analyzed using directed content analysis. RESULTS Students were able to identify the strengths and limitations of ChatGPT in completing all three tasks and developed opinions on whether they would feel comfortable using ChatGPT for similar tasks in the future. All of them reported increasing their self-rated competency in generative AI by one to two points on a five-point rating scale. CONCLUSION This brief educational intervention supported doctoral nursing students in understanding the appropriate uses of ChatGPT, which may support their ability to appraise and use these tools in their future work.
Affiliation(s)
- Sergey Kisselev
- Columbia University School of Nursing, New York, New York, United States
- Liesbet Van Bulck
- Department of Public Health and Primary Care, KU Leuven - University of Leuven, Leuven, Belgium
- Suzanne Bakken
- Columbia University School of Nursing, New York, New York, United States
- Department of Biomedical Informatics, Columbia University, New York, New York, United States
- Data Science Institute, Columbia University, New York, New York, United States
35
Levingston H, Anderson MC, Roni MA. From Theory to Practice: Artificial Intelligence (AI) Literacy Course for First-Year Medical Students. Cureus 2024; 16:e70706. [PMID: 39493023 PMCID: PMC11530082 DOI: 10.7759/cureus.70706] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 10/02/2024] [Indexed: 11/05/2024] Open
Abstract
Artificial intelligence (AI) is rapidly transforming healthcare by enhancing diagnostics, personalized medicine, and clinical decision-making. In medical education, AI chatbots have the potential to be used as virtual tutors or learning assistants. Despite AI's growing impact, its integration into medical education remains limited. AI is not a standard component of medical curricula, which could leave many future healthcare professionals unprepared for an AI-driven workplace. To address this significant gap, this editorial describes the development of a mini-course to integrate AI training for first-year medical students. The course was focused on the fundamentals of AI, prompt engineering, practical applications of chatbots as learning assistants, and ethical use of generative AI.
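A course like this might, for example, teach students to write guardrailed system prompts for a tutor chatbot. The following is a hypothetical illustration of such a prompt and the chat payload it would anchor; nothing here is taken from the course materials.

```python
# Hypothetical "learning assistant" guardrail prompt of the kind a
# prompt-engineering module might teach; wording is illustrative.
TUTOR_SYSTEM_PROMPT = (
    "You are a study tutor for a first-year medical student. Use the "
    "Socratic method: ask one guiding question at a time, withhold full "
    "answers until the student attempts them, and flag anything you are "
    "uncertain about instead of guessing."
)

def build_messages(student_question: str) -> list[dict]:
    """Assemble a chat payload pairing the guardrail prompt with a question."""
    return [
        {"role": "system", "content": TUTOR_SYSTEM_PROMPT},
        {"role": "user", "content": student_question},
    ]

print(build_messages("Why do ACE inhibitors cause a dry cough?"))
```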
Affiliation(s)
- Hunter Levingston
- Health Sciences Education and Pathology, University of Illinois College of Medicine, Peoria, USA
- Max C Anderson
- Curriculum Operations, University of Miami Miller School of Medicine, Miami, USA
- Monzurul A Roni
- Health Sciences Education and Pathology, University of Illinois College of Medicine, Peoria, USA
36
Jeffery AD, Sengstack P. Teaching Data Science through an Interactive, Hands-On Workshop with Clinically Relevant Case Studies. Appl Clin Inform 2024; 15:1074-1079. [PMID: 39214146 PMCID: PMC11634532 DOI: 10.1055/a-2407-1272] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2024] [Accepted: 08/29/2024] [Indexed: 09/04/2024] Open
Abstract
BACKGROUND In this case report, we describe the development of an innovative workshop to bridge the gap in data science education for practicing clinicians (and particularly nurses). In the workshop, we emphasize the core concepts of machine learning and predictive modeling to increase understanding among clinicians. OBJECTIVES Because health care providers have limited exposure to data science and few opportunities to leverage and critique its methods, this interactive workshop aims to provide clinicians with foundational knowledge in data science, enabling them to contribute effectively to teams focused on improving care quality. METHODS The workshop focuses on meaningful topics for clinicians, such as model performance evaluation, and introduces machine learning through hands-on exercises using free, interactive Python notebooks. Clinical case studies on sepsis recognition and opioid overdose death provide relatable contexts for applying data science concepts. RESULTS Positive feedback from over 300 participants across various settings highlights the workshop's effectiveness in making complex topics accessible to clinicians. CONCLUSION Our approach prioritizes engaging content delivery and practical application over extensive programming instruction, aligning with adult learning principles. This initiative underscores the importance of equipping clinicians with data science knowledge to navigate today's data-driven health care landscape, offering a template for integrating data science education into health care informatics programs or continuing professional development.
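The kind of hands-on exercise described, centered on model performance evaluation rather than programming, can be reproduced in a few lines. Below is a self-contained sketch on simulated "sepsis-like" data using scikit-learn; the features and coefficients are invented, not the workshop's actual notebooks.

```python
# Workshop-style exercise: fit a simple classifier on simulated data and
# report the metrics clinicians are taught to critique.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 3))             # e.g., scaled heart rate, lactate, WBC
logit = 0.8 * X[:, 0] + 1.2 * X[:, 1] - 0.5
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)

probs = model.predict_proba(X_te)[:, 1]
preds = (probs >= 0.5).astype(int)
tn, fp, fn, tp = confusion_matrix(y_te, preds).ravel()
print(f"AUROC:       {roc_auc_score(y_te, probs):.3f}")
print(f"Sensitivity: {tp / (tp + fn):.3f}")
print(f"Specificity: {tn / (tn + fp):.3f}")
```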
Affiliation(s)
- Alvin D. Jeffery
- Department of Biomedical Informatics, Vanderbilt University, Nashville, Tennessee, United States
- Patricia Sengstack
- Department of Informatics, Vanderbilt University School of Nursing, Nashville, Tennessee, United States
37
Benda N, Desai P, Reza Z, Zheng A, Kumar S, Harkins S, Hermann A, Zhang Y, Joly R, Kim J, Pathak J, Reading Turchioe M. Patient Perspectives on AI for Mental Health Care: Cross-Sectional Survey Study. JMIR Ment Health 2024; 11:e58462. [PMID: 39293056 PMCID: PMC11447436 DOI: 10.2196/58462] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/15/2024] [Revised: 06/26/2024] [Accepted: 07/14/2024] [Indexed: 09/20/2024] Open
Abstract
BACKGROUND The application of artificial intelligence (AI) to health and health care is rapidly increasing. Several studies have assessed the attitudes of health professionals, but far fewer studies have explored the perspectives of patients or the general public. Studies investigating patient perspectives have focused on somatic issues, including those related to radiology, perinatal health, and general applications. Patient feedback has been elicited in the development of specific mental health care solutions, but broader perspectives toward AI for mental health care have been underexplored. OBJECTIVE This study aims to understand public perceptions regarding potential benefits of AI, concerns about AI, comfort with AI accomplishing various tasks, and values related to AI, all pertaining to mental health care. METHODS We conducted a 1-time cross-sectional survey with a nationally representative sample of 500 US-based adults. Participants provided structured responses on their perceived benefits, concerns, comfort, and values regarding AI for mental health care. They could also add free-text responses to elaborate on their concerns and values. RESULTS A plurality of participants (245/497, 49.3%) believed AI may be beneficial for mental health care, but this perspective differed based on sociodemographic variables (all P<.05). Specifically, Black participants (odds ratio [OR] 1.76, 95% CI 1.03-3.05) and those with lower health literacy (OR 2.16, 95% CI 1.29-3.78) perceived AI to be more beneficial, and women (OR 0.68, 95% CI 0.46-0.99) perceived AI to be less beneficial. Participants endorsed concerns about accuracy, possible unintended consequences such as misdiagnosis, the confidentiality of their information, and the loss of connection with their health professional when AI is used for mental health care. A majority of participants (80.4%, 402/500) valued being able to understand individual factors driving their risk, confidentiality, and autonomy as it pertained to the use of AI for their mental health. When asked who was responsible for the misdiagnosis of mental health conditions using AI, 81.6% (408/500) of participants found the health professional to be responsible. Qualitative results revealed similar concerns related to the accuracy of AI and how its use may impact the confidentiality of patients' information. CONCLUSIONS Future work involving the use of AI for mental health care should investigate strategies for conveying the level of AI's accuracy, factors that drive patients' mental health risks, and how data are used confidentially so that patients can determine with their health professionals when AI may be beneficial. It will also be important in a mental health care context to ensure the patient-health professional relationship is preserved when AI is used.
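Odds ratios with 95% CIs like those reported (e.g., OR 1.76 for Black participants) are typically obtained from a logistic regression by exponentiating coefficients. A sketch on simulated data using statsmodels; the variable names are illustrative and the effect sizes only loosely echo the abstract:

```python
# Sketch of deriving odds ratios and 95% CIs via logistic regression.
# All data are simulated; this is not the authors' analysis.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "black": rng.binomial(1, 0.2, n),
    "low_health_literacy": rng.binomial(1, 0.3, n),
    "woman": rng.binomial(1, 0.5, n),
})
logit = (-0.2 + 0.55 * df["black"]
         + 0.75 * df["low_health_literacy"] - 0.4 * df["woman"])
df["perceives_benefit"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(df[["black", "low_health_literacy", "woman"]])
fit = sm.Logit(df["perceives_benefit"], X).fit(disp=0)
odds = pd.DataFrame({
    "OR": np.exp(fit.params),
    "CI_low": np.exp(fit.conf_int()[0]),   # exponentiated lower bound
    "CI_high": np.exp(fit.conf_int()[1]),  # exponentiated upper bound
})
print(odds.round(2))
```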
Affiliation(s)
- Natalie Benda
- School of Nursing, Columbia University, New York, NY, United States
- Pooja Desai
- Department of Biomedical Informatics, Columbia University, New York, NY, United States
- Zayan Reza
- Mailman School of Public Health, Columbia University, New York, NY, United States
- Anna Zheng
- Stuyvesant High School, New York, NY, United States
- Shiveen Kumar
- College of Agriculture and Life Sciences, Cornell University, Ithaca, NY, United States
- Sarah Harkins
- School of Nursing, Columbia University, New York, NY, United States
- Alison Hermann
- Department of Psychiatry, Weill Cornell Medicine, New York, NY, United States
- Yiye Zhang
- Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, United States
- Rochelle Joly
- Department of Obstetrics and Gynecology, Weill Cornell Medicine, New York, NY, United States
- Jessica Kim
- Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, United States
- Jyotishman Pathak
- Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, United States
38
Rony MKK, Numan SM, Akter K, Tushar H, Debnath M, Johra FT, Akter F, Mondal S, Das M, Uddin MJ, Begum J, Parvin MR. Nurses' perspectives on privacy and ethical concerns regarding artificial intelligence adoption in healthcare. Heliyon 2024; 10:e36702. [PMID: 39281626 PMCID: PMC11400963 DOI: 10.1016/j.heliyon.2024.e36702] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2024] [Revised: 08/08/2024] [Accepted: 08/20/2024] [Indexed: 09/18/2024] Open
Abstract
Background With the increasing integration of artificial intelligence (AI) technologies into healthcare systems, there is a growing emphasis on privacy and ethical considerations. Nurses, as frontline healthcare professionals, are pivotal in patient care and offer valuable insights into the ethical implications of AI adoption. Objectives This study aimed to explore nurses' perspectives on privacy and ethical concerns associated with the implementation of AI in healthcare settings. Methods We employed Van Manen's hermeneutic phenomenology as the qualitative research approach. Data were collected through purposive sampling from December 7, 2023, to January 15, 2024, with interviews conducted in Bengali. Thematic analysis was utilized following member checking and an audit trail. Results Six themes emerged from the research findings: Ethical dimensions of AI integration, highlighting complexities in incorporating AI ethically; Privacy challenges in healthcare AI, revealing concerns about data security and confidentiality; Balancing innovation and ethical practice, indicating a need to reconcile technological advancements with ethical considerations; Human touch vs. technological progress, underscoring tensions between automation and personalized care; Patient-centered care in the AI era, emphasizing the importance of maintaining focus on patients amidst technological advancements; and Ethical preparedness and education, suggesting a need for enhanced training and education on ethical AI use in healthcare. Conclusions The findings underscore the importance of addressing privacy and ethical concerns in AI healthcare development. Nurses advocate for patient-centered approaches and collaboration with policymakers and tech developers to ensure responsible AI adoption. Further research is imperative for mitigating ethical challenges and promoting ethical AI in healthcare practice.
Affiliation(s)
- Sharker Md Numan
- School of Science and Technology, Bangladesh Open University, Gazipur, Bangladesh
- Khadiza Akter
- Master of Public Health, Daffodil International University, Dhaka, Bangladesh
- Hasanuzzaman Tushar
- Department of Business Administration, International University of Business Agriculture and Technology, Dhaka, Bangladesh
- Mitun Debnath
- Master of Public Health, National Institute of Preventive and Social Medicine, Dhaka, Bangladesh
- Fateha Tuj Johra
- Masters in Disaster Management, University of Dhaka, Dhaka, Bangladesh
- Fazila Akter
- Dhaka Nursing College, Affiliated with the University of Dhaka, Bangladesh
- Sujit Mondal
- Master of Science in Nursing, National Institute of Advanced Nursing Education and Research Mugda, Dhaka, Bangladesh
- Mousumi Das
- Master of Public Health, Leading University, Sylhet, Bangladesh
- Muhammad Join Uddin
- Master of Public Health, RTM Al-Kabir Technical University, Sylhet, Bangladesh
- Jeni Begum
- Master of Public Health, Leading University, Sylhet, Bangladesh
- Mst Rina Parvin
- School of Medical Sciences, Shahjalal University of Science and Technology, Bangladesh
- Bangladesh Army (AFNS Officer), Combined Military Hospital, Dhaka, Bangladesh
39. Rowland P, Brydges M, Kulasegaram KM. Sociotechnical imaginaries in academic medicine strategic planning: a document analysis. Adv Health Sci Educ Theory Pract 2024; 29:1435-1451. [PMID: 38801543] [PMCID: PMC11369035] [DOI: 10.1007/s10459-024-10339-x]
Abstract
Purpose Along with other industries, healthcare is becoming increasingly digitized. Our study explores how the field of academic medicine is preparing for this digital future. Method Active strategic plans available in English were collected from faculties of medicine in Canada (n = 14), departments in medical schools (n = 17), academic health science centres (n = 23) and associated research institutes (n = 5). In total, 59 strategic plans were subjected to a practice-oriented form of document analysis, informed by the concept of sociotechnical imaginaries. Results On the one hand, digital health is discursively treated as a continuation of the academic medicine vision, with expansions of physician competencies and of research institutes' contributions. These imaginaries do not necessarily disrupt the field of academic medicine as currently configured. On the other hand, there is a vision of digital health pursuing a robust sociotechnical future with transformative implications for how care is conducted, what forms of knowledge are prioritized, how patients and patienthood will be understood, and how data work will be distributed. This imaginary may destabilize existing distributions of knowledge and power. Conclusions Looking through the lens of sociotechnical imaginaries, this study illuminates strategic plans as framing desirable futures, directing attention towards specific ways of understanding problems of healthcare, and mobilizing the resources to knit together social and technical systems in ways that bring these visions to fruition. There are bound to be tensions as these sociotechnical imaginaries are translated into material realities. Many of those tensions and their attempted resolutions will have direct implications for the expectations of health professional graduates, the nature of clinical learning environments, and future relationships with patients. Sociology of digital health and science and technology studies can provide useful insights to guide leaders in academic medicine shaping these digital futures.
Affiliation(s)
- Paula Rowland
- Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
- Madison Brydges
- Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
40. Suresh S, Misra SM. Large Language Models in Pediatric Education: Current Uses and Future Potential. Pediatrics 2024; 154:e2023064683. [PMID: 39108227] [DOI: 10.1542/peds.2023-064683]
Abstract
Generative artificial intelligence, especially large language models (LLMs), has the potential to affect every level of pediatric education and training. Demonstrating speed and adaptability, LLMs can aid educators, trainees, and practicing pediatricians with tasks such as enhancing curriculum design through the creation of cases, videos, and assessments; creating individualized study plans and providing real-time feedback for trainees; and supporting pediatricians by enhancing information searches, clinic efficiency, and bedside teaching. LLMs can refine patient education materials to address patients' specific needs. The current versions of LLMs sometimes provide "hallucinations" or incorrect information but are likely to improve. There are ethical concerns related to bias in the output of LLMs, the potential for plagiarism, and the possibility of the overuse of an online tool at the expense of in-person learning. The potential benefits of LLMs in pediatric education can outweigh the potential risks if employed judiciously by content experts who conscientiously review the output. All stakeholders must firmly establish rules and policies to provide rigorous guidance and assure the safe and proper use of this transformative tool in the care of the child. In this article, we outline the history, current uses, and challenges with generative artificial intelligence in pediatric education. We provide examples of LLM output, including performance on a pediatrics examination guide and the creation of patient care instructions. We also discuss future directions for establishing a safe and appropriate path for the use of LLMs.
Affiliation(s)
- Srinivasan Suresh
- Divisions of Health Informatics & Emergency Medicine, Department of Pediatrics, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania
- UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania
- Sanghamitra M Misra
- Division of Academic General Pediatrics, Department of Pediatrics, Baylor College of Medicine, Houston, Texas
- Texas Children's Hospital, Houston, Texas
41. Salwei ME, Weinger MB. Artificial Intelligence in Anesthesiology: Field of Dreams or Fire Swamp? Preemptive Strategies for Optimizing Our Inevitable Future. Anesthesiology 2024; 141:217-221. [PMID: 38980165] [PMCID: PMC11831969] [DOI: 10.1097/aln.0000000000005046]
Affiliation(s)
- Megan E Salwei
- Center for Research and Innovation in Systems Safety, Department of Anesthesiology and the Institute for Medicine and Public Health, Vanderbilt University Medical Center, Nashville, Tennessee
- Matthew B Weinger
- Center for Research and Innovation in Systems Safety, Department of Anesthesiology and the Institute for Medicine and Public Health, Vanderbilt University Medical Center, Nashville, Tennessee
42. Sridharan K, Sequeira RP. Evaluation of artificial intelligence-generated drug therapy communication skill competencies in medical education. Br J Clin Pharmacol 2024. [PMID: 38953544] [DOI: 10.1111/bcp.16144]
Abstract
AIMS This study compared three artificial intelligence (AI) platforms' potential to identify drug therapy communication competencies expected of a graduating medical doctor. METHODS We presented three AI platforms, namely, Poe Assistant©, ChatGPT© and Google Bard©, with structured queries to generate communication skill competencies and case scenarios appropriate for graduating medical doctors. These case scenarios comprised 15 prototypical medical conditions that required drug prescriptions. Two authors independently evaluated the AI-enhanced clinical encounters, which integrated a diverse range of information to create patient-centred care plans. Through a consensus-based approach using a checklist, the communication components generated for each scenario were assessed. The instructions and warnings provided for each case scenario were evaluated by referencing the British National Formulary. RESULTS AI platforms demonstrated overlap in the competency domains generated, albeit with variations in wording. The domains of knowledge (basic and clinical pharmacology, prescribing, communication and drug safety) were unanimously recognized by all platforms. A broad consensus between Poe Assistant© and ChatGPT© on drug therapy-related communication issues specific to each case scenario was evident. The consensus primarily encompassed salutation, generic drug prescribed, treatment goals and follow-up schedules. Differences were observed in patient instruction clarity, listed side effects, warnings and patient empowerment. Google Bard© did not provide guidance on patient communication issues. CONCLUSIONS AI platforms recognized competencies with variations in how these were stated. Poe Assistant© and ChatGPT© exhibited alignment of communication issues. However, significant discrepancies were observed in specific skill components, indicating the necessity of human intervention to critically evaluate AI-generated outputs.
Affiliation(s)
- Kannan Sridharan
- Department of Pharmacology & Therapeutics, College of Medicine & Medical Sciences, Arabian Gulf University, Manama, Kingdom of Bahrain
- Reginald P Sequeira
- Department of Pharmacology & Therapeutics, College of Medicine & Medical Sciences, Arabian Gulf University, Manama, Kingdom of Bahrain
43. Lukkahatai N, Han G. Perspectives on Artificial Intelligence in Nursing in Asia. Asian Pac Isl Nurs J 2024; 8:e55321. [PMID: 38896473] [PMCID: PMC11222764] [DOI: 10.2196/55321]
Abstract
Artificial intelligence (AI) is reshaping health care, including nursing, across Asia, presenting opportunities to improve patient care and outcomes. This viewpoint presents our perspective and interpretation of the current AI landscape, acknowledging its evolution driven by enhanced processing capabilities, extensive data sets, and refined algorithms. Notable applications in countries such as Singapore, South Korea, Japan, and China showcase the integration of AI-powered technologies such as chatbots, virtual assistants, data mining, and automated risk assessment systems. This paper further explores the transformative impact of AI on nursing education, emphasizing personalized learning, adaptive approaches, and AI-enriched simulation tools, and discusses the opportunities and challenges of these developments. We argue for the harmonious coexistence of traditional nursing values with AI innovations, marking a significant stride toward a promising health care future in Asia.
Affiliation(s)
- Nada Lukkahatai
- School of Nursing, Johns Hopkins University, Baltimore, MD, United States
- Gyumin Han
- School of Nursing, Johns Hopkins University, Baltimore, MD, United States
- College of Nursing, Research Institute of Nursing Science, Pusan National University, Busan, Republic of Korea
44. Koleilat I, Bongu A, Chang S, Nieman D, Priolo S, Patel NM. Residency Application Selection Committee Discriminatory Ability in Identifying Artificial Intelligence-Generated Personal Statements. J Surg Educ 2024; 81:780-785. [PMID: 38679494] [DOI: 10.1016/j.jsurg.2024.02.009]
Abstract
OBJECTIVE Advances in artificial intelligence (AI) have given rise to sophisticated algorithms capable of generating human-like text. The goal of this study was to evaluate the ability of human reviewers to reliably differentiate personal statements (PS) written by human authors from those generated by AI software. SETTING Four personal statements from the archives of two surgical program directors were de-identified and used as the human samples. Two AI platforms were used to generate nine additional PS. PARTICIPANTS Four surgeons from the residency selection committees of two surgical residency programs of a large multihospital system served as blinded reviewers. AI was also asked to evaluate each PS sample for authorship. DESIGN Sensitivity, specificity and accuracy of the reviewers in identifying the PS author were calculated. The kappa statistic for agreement between the hypothesized author and the true author was calculated. Inter-rater reliability was calculated using the kappa statistic with Light's modification, given more than two reviewers in a fully-crossed design. Logistic regression was performed to model the impact of perceived creativity, writing quality, and authorship on the likelihood of offering an interview. RESULTS Human reviewer sensitivity for identifying an AI-generated PS was 0.87, with specificity of 0.37 and overall accuracy of 0.55. The agreement by kappa statistic between the reviewers' estimate of authorship and the true authorship was 0.19 (slight agreement). The reviewers themselves had an inter-rater reliability of 0.067 (poor), with complete agreement (four out of four reviewers) on only two PS, both authored by humans. The odds of offering an interview (compared with a composite of "backup" status or no interview) to a perceived human author were 7 times those for a perceived AI author (95% confidence interval 1.5276 to 32.0758, p = 0.0144). AI hypothesized human authorship for twelve of the PS, with the last one "unsure." CONCLUSIONS The increasing pervasiveness of AI will have far-reaching effects, including on the resident application and recruitment process. Identifying AI-generated personal statements is exceedingly difficult. With the decreasing availability of objective data to assess applicants, a review and potential restructuring of the approach to resident recruitment may be warranted.
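The reviewer-performance figures above (sensitivity, specificity, accuracy and Cohen's kappa) all derive from a 2x2 table of hypothesized versus true authorship. The following Python sketch shows that arithmetic; the counts are illustrative placeholders, not the study's raw data.

    # Metrics for one set of reviewer calls, where "positive" = judged AI-generated.
    # tp: AI-written judged AI      fn: AI-written judged human
    # fp: human-written judged AI   tn: human-written judged human
    def reviewer_metrics(tp, fn, fp, tn):
        n = tp + fn + fp + tn
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        accuracy = (tp + tn) / n
        # Cohen's kappa: observed agreement corrected for chance agreement
        p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
        kappa = (accuracy - p_chance) / (1 - p_chance)
        return sensitivity, specificity, accuracy, kappa

    # Illustrative counts only; the study pools four blinded reviewers and
    # reports sensitivity 0.87, specificity 0.37 and accuracy 0.55.
    print(reviewer_metrics(tp=31, fn=5, fp=10, tn=6))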
Affiliation(s)
- Issam Koleilat
- Department of Surgery, Community Medical Center, RWJ/Barnabas Health, Toms River, New Jersey
- Advaith Bongu
- Department of Surgery, Robert Wood Johnson Medical School, New Brunswick, New Jersey
- Sumy Chang
- Department of Surgery, Community Medical Center, RWJ/Barnabas Health, Toms River, New Jersey
- Dylan Nieman
- Department of Surgery, Robert Wood Johnson Medical School, New Brunswick, New Jersey
- Steven Priolo
- Department of Surgery, Community Medical Center, RWJ/Barnabas Health, Toms River, New Jersey
- Nell Maloney Patel
- Department of Surgery, Robert Wood Johnson Medical School, New Brunswick, New Jersey
45. Scott IA, van der Vegt A, Lane P, McPhail S, Magrabi F. Achieving large-scale clinician adoption of AI-enabled decision support. BMJ Health Care Inform 2024; 31:e100971. [PMID: 38816209] [PMCID: PMC11141172] [DOI: 10.1136/bmjhci-2023-100971]
Abstract
Computerised decision support (CDS) tools enabled by artificial intelligence (AI) seek to enhance the accuracy and efficiency of clinician decision-making at the point of care. Statistical models developed using machine learning (ML) underpin most current tools. However, despite thousands of models and hundreds of regulator-approved tools internationally, large-scale uptake into routine clinical practice has proved elusive. While immature system readiness and limited investment in AI/ML within Australia, and perhaps other countries, are impediments, clinician ambivalence towards adopting these tools at scale could be a major inhibitor. We propose a set of principles and several strategic enablers for obtaining broad clinician acceptance of AI/ML-enabled CDS tools.
Affiliation(s)
- Ian A Scott
- Internal Medicine and Clinical Epidemiology, Princess Alexandra Hospital, Brisbane, Queensland, Australia
- Centre for Health Services Research, The University of Queensland Faculty of Medicine and Biomedical Sciences, Brisbane, Queensland, Australia
- Anton van der Vegt
- Digital Health Centre, The University of Queensland Faculty of Medicine and Biomedical Sciences, Herston, Queensland, Australia
- Paul Lane
- Safety, Quality and Innovation, The Prince Charles Hospital, Brisbane, Queensland, Australia
- Steven McPhail
- Australian Centre for Health Services Innovation, Queensland University of Technology Faculty of Health, Brisbane, Queensland, Australia
- Farah Magrabi
- Macquarie University, Sydney, New South Wales, Australia
46. Cary MP, De Gagne JC, Kauschinger ED, Carter BM. Advancing Health Equity Through Artificial Intelligence: An Educational Framework for Preparing Nurses in Clinical Practice and Research. Creat Nurs 2024; 30:154-164. [PMID: 38689433] [DOI: 10.1177/10784535241249193]
Abstract
The integration of artificial intelligence (AI) into health care offers the potential to enhance patient care, improve diagnostic precision, and broaden access to health-care services. Nurses, positioned at the forefront of patient care, play a pivotal role in utilizing AI to foster a more efficient and equitable health-care system. However, to fulfil this role, nurses will require education that equips them with the necessary skills and knowledge for the effective and ethical application of AI. This article proposes a framework for nurses that includes AI principles, skills, competencies, and curriculum development focused on the practical use of AI, with an emphasis on care that aims to achieve health equity. By adopting this educational framework, nurses will be prepared to make substantial contributions to reducing health disparities and fostering a health-care system that is more efficient and equitable.
Affiliation(s)
- Michael P Cary
- Duke University School of Nursing, Durham, NC, USA
- Duke University School of Medicine, Durham, NC, USA
- Duke AI Health, Durham, NC, USA
- American Association of Colleges of Nursing, Durham, NC, USA
- Jennie C De Gagne
- Duke University School of Nursing, Durham, NC, USA
- Duke University School of Medicine, Durham, NC, USA
- Duke AI Health, Durham, NC, USA
- American Association of Colleges of Nursing, Durham, NC, USA
- Elaine D Kauschinger
- Duke University School of Nursing, Durham, NC, USA
- Duke University School of Medicine, Durham, NC, USA
- Duke AI Health, Durham, NC, USA
- American Association of Colleges of Nursing, Durham, NC, USA
- Brigit M Carter
- Duke University School of Nursing, Durham, NC, USA
- Duke University School of Medicine, Durham, NC, USA
- Duke AI Health, Durham, NC, USA
- American Association of Colleges of Nursing, Durham, NC, USA
47. Esmaeilzadeh P. Challenges and strategies for wide-scale artificial intelligence (AI) deployment in healthcare practices: A perspective for healthcare organizations. Artif Intell Med 2024; 151:102861. [PMID: 38555850] [DOI: 10.1016/j.artmed.2024.102861]
Abstract
Healthcare organizations have realized that artificial intelligence (AI) can provide a competitive edge through personalized patient experiences, improved patient outcomes, early diagnosis, augmented clinician capabilities, enhanced operational efficiencies, or improved medical service accessibility. However, deploying AI-driven tools in the healthcare ecosystem could be challenging. This paper categorizes AI applications in healthcare and comprehensively examines the challenges associated with deploying AI in medical practices at scale. As AI continues to make strides in healthcare, its integration presents various challenges, including production timelines, trust generation, privacy concerns, algorithmic biases, and data scarcity. The paper highlights that flawed business models and dysfunctional workflows in healthcare practices cannot be rectified merely by deploying AI-driven tools. Healthcare organizations should re-evaluate root problems such as misaligned financial incentives (e.g., fee-for-service models), dysfunctional medical workflows (e.g., high rates of patient readmissions), poor care coordination between different providers, fragmented electronic health records systems, and inadequate patient education and engagement models in tandem with AI adoption. This study also explores the need for a cultural shift that views AI not as a threat but as an enabler that can enhance healthcare delivery and create new employment opportunities, while emphasizing the importance of addressing underlying operational issues. The necessity of investments beyond finance is discussed, emphasizing the importance of human capital, continuous learning, and a supportive environment for AI integration. The paper also highlights the crucial role of clear regulations in building trust, ensuring safety, and guiding the ethical use of AI, calling for coherent frameworks addressing transparency, model accuracy, data quality control, liability, and ethics. Furthermore, this paper underscores the importance of advancing AI literacy within academia to prepare future healthcare professionals for an AI-driven landscape. Through careful navigation and proactive measures addressing these challenges, the healthcare community can harness AI's transformative power responsibly and effectively, revolutionizing healthcare delivery and patient care. The paper concludes with a vision and strategic suggestions for the future of healthcare with AI, emphasizing thoughtful, responsible, and innovative engagement as the pathway to realizing its full potential to unlock immense benefits for healthcare organizations, physicians, nurses, and patients while proactively mitigating risks.
Affiliation(s)
- Pouyan Esmaeilzadeh
- Department of Information Systems and Business Analytics, College of Business, Florida International University (FIU), Modesto A. Maidique Campus, 11200 S.W. 8th St, RB 261B, Miami, FL 33199, United States.
48. Lee YM, Kim S, Lee YH, Kim HS, Seo SW, Kim H, Kim KJ. Defining Medical AI Competencies for Medical School Graduates: Outcomes of a Delphi Survey and Medical Student/Educator Questionnaire of South Korean Medical Schools. Acad Med 2024; 99:524-533. [PMID: 38207056] [DOI: 10.1097/acm.0000000000005618]
Abstract
PURPOSE Given the increasing significance and potential impact of artificial intelligence (AI) technology on health care delivery, there is an increasing demand to integrate AI into medical school curricula. This study aimed to define medical AI competencies and identify the essential competencies for medical graduates in South Korea. METHOD An initial Delphi survey conducted in 2022 involving 4 groups of medical AI experts (n = 28) yielded 42 competency items. Subsequently, an online questionnaire survey was carried out with 1,955 participants (1,174 students and 781 professors) from medical schools across South Korea, utilizing the list of 42 competencies developed from the first Delphi round. A subsequent Delphi survey was conducted with 33 medical educators from 21 medical schools to differentiate the essential AI competencies from the optional ones. RESULTS The study identified 6 domains encompassing 36 AI competencies essential for medical graduates: (1) understanding digital health and changes driven by AI; (2) fundamental knowledge and skills in medical AI; (3) ethics and legal aspects in the use of medical AI; (4) medical AI application in clinical practice; (5) processing, analyzing, and evaluating medical data; and (6) research and development of medical AI, as well as subcompetencies within each domain. While most competencies within the first 4 domains were deemed essential, a higher percentage of experts indicated that competencies in the last 2 domains, data science and medical AI research and development, were optional. CONCLUSIONS This medical AI framework of 6 competency domains and their subcompetencies for medical graduates exhibits promising potential for guiding the integration of AI into medical curricula. Further studies conducted in diverse contexts and countries are necessary to validate and confirm the applicability of these findings. Additional research is imperative for developing specific and feasible educational models to integrate these proposed competencies into pre-existing curricula.
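Delphi classification of this kind reduces to a per-item tally: the share of panelists rating a competency "essential" is compared against a pre-agreed consensus threshold. A minimal Python sketch of that bookkeeping follows; the 70% cut-off and the ratings are assumptions for illustration, not values taken from the study.

    from collections import Counter

    def essential_rate(ratings):
        # Fraction of panelists rating an item "essential"
        return Counter(ratings)["essential"] / len(ratings)

    CONSENSUS = 0.70  # assumed cut-off, for illustration only

    # Hypothetical round-3 ratings from 33 medical educators for one competency
    item_ratings = ["essential"] * 26 + ["optional"] * 7
    rate = essential_rate(item_ratings)
    verdict = "essential" if rate >= CONSENSUS else "optional"
    print(f"{rate:.0%} rated essential -> classified {verdict}")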
49. Hasan HE, Jaber D, Khabour OF, Alzoubi KH. Perspectives of Pharmacy Students on Ethical Issues Related to Artificial Intelligence: A Comprehensive Survey Study. Res Sq 2024 (preprint): rs.3.rs-4302115. [PMID: 38746156] [PMCID: PMC11092854] [DOI: 10.21203/rs.3.rs-4302115/v1]
Abstract
Background The integration of artificial intelligence (AI) into pharmacy education and practice holds the potential to advance learning experiences and prepare future pharmacists for evolving healthcare practice. However, it also raises ethical considerations that need to be addressed carefully. This study aimed to explore pharmacy students' attitudes regarding AI integration into pharmacy education and practice. Methods A cross-sectional design was employed, utilizing a validated online questionnaire administered to 702 pharmacy students from diverse demographic backgrounds. The questionnaire gathered data on participants' attitudes and concerns regarding AI integration, as well as demographic information and factors influencing their attitudes. Results Most participants were female students (72.8%), from public universities (55.6%), and not working (64.2%). Participants expressed a generally negative attitude toward AI integration, citing concerns and barriers such as patient data privacy (62.0%), susceptibility to hacking (56.2%), potential job displacement (69.3%), cost limitations (66.8%), limited access (69.1%), the absence of regulations (48.1%), lack of training (70.4%), physicians' reluctance (65.1%), and patient apprehension (70.8%). Factors including country of residence, academic year, cumulative GPA, work status, technology literacy, and AI understanding significantly influenced participants' attitudes (p < 0.05). Conclusion The study highlights the need for comprehensive AI education in pharmacy curricula, including related ethical concerns. Addressing students' concerns is crucial to ensuring ethical, equitable, and beneficial AI integration in pharmacy education and practice.
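The abstract reports only that demographic factors "significantly influenced" attitudes (p < 0.05) without naming a test; for categorical survey responses like these, a chi-square test of independence is one common choice. A sketch with entirely hypothetical counts:

    from scipy.stats import chi2_contingency

    # Hypothetical 2x2 table: rows = work status, columns = overall attitude
    #          positive  negative
    table = [[60, 192],   # working students
             [85, 365]]   # non-working students
    chi2, p, dof, _ = chi2_contingency(table)
    print(f"chi2(dof={dof}) = {chi2:.2f}, p = {p:.4f}")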
50. Ruksakulpiwat S, Thorngthip S, Niyomyart A, Benjasirisan C, Phianhasin L, Aldossary H, Ahmed BH, Samai T. A Systematic Review of the Application of Artificial Intelligence in Nursing Care: Where are We, and What's Next? J Multidiscip Healthc 2024; 17:1603-1616. [PMID: 38628616] [PMCID: PMC11020344] [DOI: 10.2147/jmdh.s459946]
Abstract
Background Integrating Artificial Intelligence (AI) into healthcare has transformed the landscape of patient care and healthcare delivery. Despite this, there remains a notable gap in the existing literature synthesizing a comprehensive understanding of AI's utilization in nursing care. Objective This systematic review aims to synthesize the available evidence to comprehensively understand the application of AI in nursing care. Methods Studies published between January 2019 and December 2023, identified through CINAHL Plus with Full Text, Web of Science, PubMed, and Medline, were included in this review. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines guided the identification, screening, exclusion, and inclusion of articles. The convergent integrated analysis framework, as proposed by the Joanna Briggs Institute, was employed to synthesize data from the included studies for theme generation. Results A total of 337 records were identified from databases. Among them, 35 duplicates were removed, and 302 records underwent eligibility screening. After applying inclusion and exclusion criteria, eleven studies were deemed eligible and included in this review. Through data synthesis of these studies, six themes pertaining to the use of AI in nursing care were identified: 1) Risk Identification, 2) Health Assessment, 3) Patient Classification, 4) Research Development, 5) Improved Care Delivery and Medical Records, and 6) Developing a Nursing Care Plan. Conclusion This systematic review contributes valuable insights into the multifaceted applications of AI in nursing care. Through the synthesis of data from the included studies, six distinct themes emerged. These findings not only consolidate the current knowledge base but also underscore the diverse ways in which AI is shaping and improving nursing care practices.
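The screening counts above follow the standard PRISMA flow, in which each stage's total is derived from the previous one. A minimal ledger check in Python, using the counts reported in the review:

    # PRISMA-style flow for the counts reported above
    identified = 337
    duplicates_removed = 35
    screened = identified - duplicates_removed
    assert screened == 302            # matches the reported screening pool
    included = 11
    # records excluded across screening and eligibility assessment
    excluded = screened - included
    print(f"screened={screened}, excluded={excluded}, included={included}")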
Affiliation(s)
- Suebsarn Ruksakulpiwat
- Department of Medical Nursing, Faculty of Nursing, Mahidol University, Bangkok, Thailand
- Sutthinee Thorngthip
- Department of Nursing Siriraj Hospital, Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok, Thailand
- Atsadaporn Niyomyart
- Ramathibodi School of Nursing, Faculty of Medicine Ramathibodi Hospital, Mahidol University, Bangkok, Thailand
- Lalipat Phianhasin
- Department of Medical Nursing, Faculty of Nursing, Mahidol University, Bangkok, Thailand
- Heba Aldossary
- Department of Nursing, Prince Sultan Military College of Health Sciences, Dammam, Saudi Arabia
- Bootan Hasan Ahmed
- Frances Payne Bolton School of Nursing, Case Western Reserve University, Cleveland, OH, USA
- Thanistha Samai
- Department of Public Health Nursing, Faculty of Nursing, Mahidol University, Nakhon Pathom, Thailand