1. Morris MX, Fiocco D, Caneva T, Yiapanis P, Orgill DP. Current and future applications of artificial intelligence in surgery: implications for clinical practice and research. Front Surg 2024;11:1393898. PMID: 38783862; PMCID: PMC11111929; DOI: 10.3389/fsurg.2024.1393898. Open access.
Abstract
Surgeons are skilled at making complex decisions about invasive procedures that can save lives, alleviate pain, and prevent complications in patients. The knowledge needed for these decisions is accumulated over years of schooling and practice, and that experience is in turn shared with others, notably through peer-reviewed articles, which are published in ever-growing numbers each year. In this work, we review the literature on the use of Artificial Intelligence (AI) in surgery. We focus on what is currently available and what is likely to come in the near future in both clinical care and research. We show that AI has the potential to be a key tool for elevating the effectiveness of training and decision-making in surgery and for the discovery of relevant and valid scientific knowledge in the surgical domain. We also address concerns about AI technology, including users' inability to interpret algorithms and the risk of incorrect predictions. A better understanding of AI will allow surgeons to use these new tools wisely for the benefit of their patients.
Affiliations
- Miranda X. Morris: Duke University School of Medicine, Duke University Hospital, Durham, NC, United States
- Davide Fiocco: Department of Artificial Intelligence, Frontiers Media SA, Lausanne, Switzerland
- Tommaso Caneva: Department of Artificial Intelligence, Frontiers Media SA, Lausanne, Switzerland
- Paris Yiapanis: Department of Artificial Intelligence, Frontiers Media SA, Lausanne, Switzerland
- Dennis P. Orgill: Harvard Medical School, Brigham and Women’s Hospital, Boston, MA, United States
|
2. Salih SM. Perceptions of Faculty and Students About Use of Artificial Intelligence in Medical Education: A Qualitative Study. Cureus 2024;16:e57605. PMID: 38707183; PMCID: PMC11069392; DOI: 10.7759/cureus.57605. Open access.
Abstract
BACKGROUND Artificial intelligence (AI) implies using a computer to model intelligent behavior with minimal human intervention. With the advances in AI use in healthcare comes the need to reform medical education to produce doctors competent in AI use. This qualitative study was therefore conducted to explore faculty and student perspectives on AI, their use of AI applications, and their views on its value and impact on medical education at a Saudi faculty of medicine. METHODS This qualitative study was conducted at the Faculty of Medicine, Jazan University, in Saudi Arabia. Individual interviews were held with 11 faculty members, and six focus group discussions were conducted with 34 students from the second to sixth year. Data were collected using semi-structured, open-ended interview questions based on relevant literature. FINDINGS Most respondents (91.11%) believed AI systems would positively impact medical education, especially in research, knowledge gain, assessment, and simulation. However, ethical concerns were raised about threats to academic integrity, plagiarism, privacy and confidentiality, and AI's lack of cultural sensitivity. Faculty and students felt a need for training on AI use (80%) and believed the curriculum could be adapted to integrate AI (64.44%), though current resources were seen as insufficient. CONCLUSION AI's potential to enhance medical education is generally viewed positively in this study, but ethical concerns must be addressed. Integrating AI into medical education programs requires adequate resources, training, and curriculum adaptation. Further research is needed in this area to develop comprehensive strategies.
Affiliations
- Sarah M Salih: Department of Community and Family Medicine, Faculty of Medicine, Jazan University, Jazan, SAU
|
3. Gordon M, Daniel M, Ajiboye A, Uraiby H, Xu NY, Bartlett R, Hanson J, Haas M, Spadafore M, Grafton-Clarke C, Gasiea RY, Michie C, Corral J, Kwan B, Dolmans D, Thammasitboon S. A scoping review of artificial intelligence in medical education: BEME Guide No. 84. Medical Teacher 2024;46:446-470. PMID: 38423127; DOI: 10.1080/0142159x.2024.2314198.
Abstract
BACKGROUND Artificial Intelligence (AI) is rapidly transforming healthcare, and there is a critical need for a nuanced understanding of how AI is reshaping teaching, learning, and educational practice in medical education. This review aimed to map the literature on AI applications in medical education, identifying core areas of findings, potential candidates for formal systematic review, and gaps for future research. METHODS This rapid scoping review, conducted over 16 weeks, employed Arksey and O'Malley's framework and adhered to the STORIES and BEME guidelines. A systematic and comprehensive search across PubMed/MEDLINE, EMBASE, and MedEdPublish was conducted without date or language restrictions. Publications included in the review spanned undergraduate, graduate, and continuing medical education, encompassing both original studies and perspective pieces. Data were charted by multiple author pairs and synthesized into thematic maps and charts, ensuring a broad and detailed representation of the current landscape. RESULTS The review synthesized 278 publications, the majority (68%) from North American and European regions. The studies covered diverse AI applications in medical education, such as AI for admissions, teaching, assessment, and clinical reasoning. The review highlighted AI's varied roles, from augmenting traditional educational methods to introducing innovative practices, and underscored the urgent need for ethical guidelines on AI's application in medical education. CONCLUSION The current literature has been charted. The findings underscore the need for ongoing research to explore uncharted areas and to address potential risks associated with AI use in medical education. This work serves as a foundational resource for educators, policymakers, and researchers navigating AI's evolving role in medical education. A framework to support future high-utility reporting, the FACETS framework, is proposed.
Affiliations
- Morris Gordon: School of Medicine and Dentistry, University of Central Lancashire, Preston, UK; Blackpool Hospitals NHS Foundation Trust, Blackpool, UK
- Michelle Daniel: School of Medicine, University of California, San Diego, San Diego, CA, USA
- Aderonke Ajiboye: School of Medicine and Dentistry, University of Central Lancashire, Preston, UK
- Hussein Uraiby: Department of Cellular Pathology, University Hospitals of Leicester NHS Trust, Leicester, UK
- Nicole Y Xu: School of Medicine, University of California, San Diego, San Diego, CA, USA
- Rangana Bartlett: Department of Cognitive Science, University of California, San Diego, CA, USA
- Janice Hanson: Department of Medicine and Office of Education, School of Medicine, Washington University in Saint Louis, Saint Louis, MO, USA
- Mary Haas: Department of Emergency Medicine, University of Michigan Medical School, Ann Arbor, MI, USA
- Maxwell Spadafore: Department of Emergency Medicine, University of Michigan Medical School, Ann Arbor, MI, USA
- Colin Michie: School of Medicine and Dentistry, University of Central Lancashire, Preston, UK
- Janet Corral: Department of Medicine, University of Nevada Reno, School of Medicine, Reno, NV, USA
- Brian Kwan: School of Medicine, University of California, San Diego, San Diego, CA, USA
- Diana Dolmans: School of Health Professions Education, Faculty of Health, Maastricht University, Maastricht, the Netherlands
- Satid Thammasitboon: Center for Research, Innovation and Scholarship in Health Professions Education, Baylor College of Medicine, Houston, TX, USA
Collapse
|
4. Weidener L, Fischer M. Proposing a Principle-Based Approach for Teaching AI Ethics in Medical Education. JMIR Medical Education 2024;10:e55368. PMID: 38285931; PMCID: PMC10891487; DOI: 10.2196/55368.
Abstract
The use of artificial intelligence (AI) in medicine, potentially leading to substantial advancements such as improved diagnostics, has been of increasing scientific and societal interest in recent years. However, the use of AI raises new ethical challenges, such as an increased risk of bias and potential discrimination against patients, as well as misdiagnoses potentially leading to over- or underdiagnosis with substantial consequences for patients. Recognizing these challenges, current research underscores the importance of integrating AI ethics into medical education. This viewpoint paper aims to introduce a comprehensive set of ethical principles for teaching AI ethics in medical education. This dynamic and principle-based approach is designed to be adaptive and comprehensive, addressing not only the current but also emerging ethical challenges associated with the use of AI in medicine. This study conducts a theoretical analysis of the current academic discourse on AI ethics in medical education, identifying potential gaps and limitations. The inherent interconnectivity and interdisciplinary nature of these anticipated challenges are illustrated through a focused discussion on "informed consent" in the context of AI in medicine and medical education. This paper proposes a principle-based approach to AI ethics education, building on the 4 principles of medical ethics (autonomy, beneficence, nonmaleficence, and justice) and extending them by integrating 3 public health ethics principles (efficiency, common good orientation, and proportionality). The principle-based approach to teaching AI ethics in medical education proposed in this study offers a foundational framework for addressing the anticipated ethical challenges of using AI in medicine, as recommended in the current academic discourse. By incorporating the 3 principles of public health ethics, this approach ensures that medical ethics education remains relevant and responsive to the dynamic landscape of AI integration in medicine. As the advancement of AI technologies in medicine is expected to increase, medical ethics education must adapt and evolve accordingly. The proposed principle-based approach provides an important foundation to ensure that future medical professionals are not only aware of the ethical dimensions of AI in medicine but also equipped to make informed ethical decisions in their practice. Future research is required to develop problem-based and competency-oriented learning objectives and educational content for this approach.
Affiliations
- Lukas Weidener: UMIT TIROL - Private University for Health Sciences and Health Technology, Hall in Tirol, Austria
- Michael Fischer: UMIT TIROL - Private University for Health Sciences and Health Technology, Hall in Tirol, Austria
|
5. Weidener L, Fischer M. Role of Ethics in Developing AI-Based Applications in Medicine: Insights From Expert Interviews and Discussion of Implications. JMIR AI 2024;3:e51204. PMID: 38875585; PMCID: PMC11041491; DOI: 10.2196/51204.
Abstract
BACKGROUND The integration of artificial intelligence (AI)-based applications in the medical field has increased significantly, offering potential improvements in patient care and diagnostics. However, alongside these advancements, there is growing concern about ethical considerations, such as bias, informed consent, and trust, in the development of these technologies. OBJECTIVE This study aims to assess the role of ethics in the development of AI-based applications in medicine. Furthermore, it focuses on the potential consequences of neglecting ethical considerations in AI development, particularly their impact on patients and physicians. METHODS Qualitative content analysis was used to analyze responses from expert interviews. Experts were selected based on at least 5 years of involvement in the research or practical development of AI-based applications in medicine, leading to the inclusion of 7 experts. RESULTS The analysis revealed 3 main categories and 7 subcategories reflecting a wide range of views on the role of ethics in AI development. Although some experts view ethics as fundamental, others prioritize performance and efficiency, with some perceiving ethics as a potential obstacle to technological progress. This dichotomy of perspectives underscores the subjectivity and multifaceted complexity of integrating ethics into the development of AI in medicine. CONCLUSIONS Despite methodological limitations affecting the generalizability of the results, this study underscores the critical importance of consistent and integrated ethical considerations in AI development for medical applications. It advocates further research into effective strategies for ethical AI development, emphasizing the need for transparent and responsible practices, consideration of diverse data sources, physician training, and the establishment of comprehensive ethical and legal frameworks.
Affiliations
- Lukas Weidener: Research Unit for Quality and Ethics in Health Care, UMIT TIROL - Private University for Health Sciences and Health Technology, Hall in Tirol, Austria
- Michael Fischer: Research Unit for Quality and Ethics in Health Care, UMIT TIROL - Private University for Health Sciences and Health Technology, Hall in Tirol, Austria
|
6. Jacobs SM, Lundy NN, Issenberg SB, Chandran L. Reimagining Core Entrustable Professional Activities for Undergraduate Medical Education in the Era of Artificial Intelligence. JMIR Medical Education 2023;9:e50903. PMID: 38052721; PMCID: PMC10762622; DOI: 10.2196/50903.
Abstract
The proliferation of generative artificial intelligence (AI) and its extensive potential for integration into many aspects of health care signal a transformational shift within the health care environment. In this context, medical education must evolve to ensure that medical trainees are adequately prepared to navigate the rapidly changing health care landscape. Medical education has moved toward a competency-based education paradigm, leading the Association of American Medical Colleges (AAMC) to define a set of Entrustable Professional Activities (EPAs) as its practical operational framework in undergraduate medical education. The AAMC's 13 core EPAs for entering residencies have been implemented with varying levels of success across medical schools. In this paper, we critically assess the existing core EPAs in the context of rapid AI integration in medicine. We identify EPAs that require refinement, redefinition, or comprehensive change to align with the emerging trends in health care. Moreover, this perspective proposes a set of "emerging" EPAs, informed by the changing landscape and capabilities presented by generative AI technologies. We provide a practical evaluation of the EPAs, alongside actionable recommendations on how medical education, viewed through the lens of the AAMC EPAs, can adapt and remain relevant amid rapid technological advancements. By leveraging the transformative potential of AI, we can reshape medical education to align with an AI-integrated future of medicine. This approach will help equip future health care professionals with technological competence and adaptive skills to meet the dynamic and evolving demands in health care.
Affiliations
- Sarah Marie Jacobs: Department of Medical Education, University of Miami Miller School of Medicine, Miami, FL, United States
- Neva Nicole Lundy: Department of Medical Education, University of Miami Miller School of Medicine, Miami, FL, United States
- Saul Barry Issenberg: Department of Medical Education, University of Miami Miller School of Medicine, Miami, FL, United States
- Latha Chandran: Department of Medical Education, University of Miami Miller School of Medicine, Miami, FL, United States
|
7. Hummelsberger P, Koch TK, Rauh S, Dorn J, Lermer E, Raue M, Hudecek MFC, Schicho A, Colak E, Ghassemi M, Gaube S. Insights on the Current State and Future Outlook of AI in Health Care: Expert Interview Study. JMIR AI 2023;2:e47353. PMID: 38875571; PMCID: PMC11041415; DOI: 10.2196/47353.
Abstract
BACKGROUND Artificial intelligence (AI) is often promoted as a potential solution for many challenges health care systems face worldwide. However, its implementation in clinical practice lags behind its technological development. OBJECTIVE This study aims to gain insights into the current state and prospects of AI technology from the stakeholders most directly involved in its adoption in the health care sector, whose perspectives have received limited attention in research to date. METHODS The perspectives of AI researchers and health care IT professionals in North America and Western Europe were collected and compared for profession-specific and regional differences. In this preregistered, mixed methods, cross-sectional study, 23 experts were interviewed using a semistructured guide. Interview data were analyzed using deductive and inductive qualitative methods for thematic analysis, along with topic modeling to identify latent topics. RESULTS Four major categories emerged from the thematic analysis: (1) the current state of AI systems in health care, (2) the criteria and requirements for implementing AI systems in health care, (3) the challenges in implementing AI systems in health care, and (4) the prospects of the technology. Experts discussed the capabilities and limitations of current AI systems in health care, as well as their prevalence and regional differences. Several criteria and requirements deemed necessary for successful implementation were identified, including the technology's performance and security, smooth system integration and human-AI interaction, costs, stakeholder involvement, and employee training. However, regulatory, logistical, and technical issues were identified as the most critical barriers to an effective implementation process. Looking ahead, the experts predicted both threats and opportunities related to AI technology in the health care sector. CONCLUSIONS Our work provides new insights into the current state, criteria, challenges, and outlook for implementing AI technology in health care from the perspective of AI researchers and IT professionals in North America and Western Europe. For the full potential of AI-enabled technologies to be exploited and for them to contribute to solving current health care challenges, critical implementation criteria must be met, and all groups involved in the process must work together.
Affiliations
- Pia Hummelsberger: LMU Center for Leadership and People Management, Department of Psychology, LMU Munich, Munich, Germany
- Timo K Koch: LMU Center for Leadership and People Management, Department of Psychology, LMU Munich, Munich, Germany; Department of Psychology, LMU Munich, Munich, Germany
- Sabrina Rauh: LMU Center for Leadership and People Management, Department of Psychology, LMU Munich, Munich, Germany
- Julia Dorn: LMU Center for Leadership and People Management, Department of Psychology, LMU Munich, Munich, Germany
- Eva Lermer: LMU Center for Leadership and People Management, Department of Psychology, LMU Munich, Munich, Germany; Department of Business Psychology, Technical University of Applied Sciences Augsburg, Augsburg, Germany
- Martina Raue: MIT AgeLab, Massachusetts Institute of Technology, Cambridge, MA, United States
- Matthias F C Hudecek: Department of Experimental Psychology, University of Regensburg, Regensburg, Germany
- Andreas Schicho: Department of Radiology, University Hospital Regensburg, Regensburg, Germany
- Errol Colak: Li Ka Shing Knowledge Institute, St. Michael's Hospital, Unity Health Toronto, Toronto, ON, Canada; Department of Medical Imaging, St. Michael's Hospital, Unity Health Toronto, Toronto, ON, Canada; Department of Medical Imaging, Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Marzyeh Ghassemi: Electrical Engineering and Computer Science, Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, MA, United States; Vector Institute, Toronto, ON, Canada
- Susanne Gaube: UCL Global Business School for Health, University College London, London, United Kingdom