1
Gundlack J, Negash S, Thiel C, Buch C, Schildmann J, Unverzagt S, Mikolajczyk R, Frese T. Artificial Intelligence in Medical Care - Patients' Perceptions on Caregiving Relationships and Ethics: A Qualitative Study. Health Expect 2025; 28:e70216. PMID: 40094179; PMCID: PMC11911933; DOI: 10.1111/hex.70216.
Abstract
INTRODUCTION Artificial intelligence (AI) offers several opportunities to enhance medical care, but practical application is limited. Consideration of patient needs is essential for the successful implementation of AI-based systems. Few studies have explored patients' perceptions, especially in Germany, resulting in insufficient exploration of perspectives of outpatients, older patients and patients with chronic diseases. We aimed to explore how patients perceive AI in medical care, focusing on relationships to physicians and ethical aspects. METHODS We conducted a qualitative study with six semi-structured focus groups from June 2022 to March 2023. We analysed data using a content analysis approach by systemising the textual material via a coding system. Participants were mostly recruited from outpatient settings in the regions of Halle and Erlangen, Germany. They were enrolled primarily through convenience sampling supplemented by purposive sampling. RESULTS Patients (N = 35; 13 females, 22 males) with a median age of 50 years participated. Participants were mixed in socioeconomic status and affinity for new technology. Most had chronic diseases. Perceived main advantages of AI were its efficient and flawless functioning, its ability to process and provide large data volume, and increased patient safety. Major perceived disadvantages were impersonality, potential data security issues, and fear of errors based on medical staff relying too much on AI. A dominant theme was that human interaction, personal conversation, and understanding of emotions cannot be replaced by AI. Participants emphasised the need to involve everyone in the informing process about AI. Most considered physicians as responsible for decisions resulting from AI applications. Transparency of data use and data protection were other important points. 
CONCLUSIONS Patients could generally imagine AI as support in medical care if its usage is focused on patient well-being and the human relationship is maintained. Including patients' needs in the development of AI and adequate communication about AI systems are essential for successful implementation in practice. PATIENT OR PUBLIC CONTRIBUTION Patients' perceptions as participants in this study were crucial. Further, patients assessed the presentation and comprehensibility of the research material during a pretest, and recommended adaptations were implemented. After each focus group, space was provided for requesting modifications and discussion.
Affiliation(s)
- Jana Gundlack
- Institute of General Practice & Family Medicine, Interdisciplinary Center of Health Sciences, Medical Faculty of the Martin Luther University Halle-Wittenberg, Halle (Saale), Germany
- Sarah Negash
- Institute for Medical Epidemiology, Biometrics and Informatics, Interdisciplinary Center for Health Sciences, Medical Faculty of the Martin Luther University Halle-Wittenberg, Halle (Saale), Germany
- Carolin Thiel
- Institute of General Practice & Family Medicine, Interdisciplinary Center of Health Sciences, Medical Faculty of the Martin Luther University Halle-Wittenberg, Halle (Saale), Germany
- SRH University of Applied Health Sciences, Heidelberg, Germany
- Charlotte Buch
- Institute for History and Ethics of Medicine, Interdisciplinary Center for Health Sciences, Medical Faculty of the Martin Luther University Halle-Wittenberg, Halle (Saale), Germany
- Jan Schildmann
- Institute for History and Ethics of Medicine, Interdisciplinary Center for Health Sciences, Medical Faculty of the Martin Luther University Halle-Wittenberg, Halle (Saale), Germany
- Susanne Unverzagt
- Institute of General Practice & Family Medicine, Interdisciplinary Center of Health Sciences, Medical Faculty of the Martin Luther University Halle-Wittenberg, Halle (Saale), Germany
- Rafael Mikolajczyk
- Institute for Medical Epidemiology, Biometrics and Informatics, Interdisciplinary Center for Health Sciences, Medical Faculty of the Martin Luther University Halle-Wittenberg, Halle (Saale), Germany
- Thomas Frese
- Institute of General Practice & Family Medicine, Interdisciplinary Center of Health Sciences, Medical Faculty of the Martin Luther University Halle-Wittenberg, Halle (Saale), Germany
2
Penner SB, Mercado NR, Bernstein S, Erickson E, DuBois MA, Dreisbach C. Fostering Informed Consent and Shared Decision-Making in Maternity Nursing With the Advancement of Artificial Intelligence. MCN Am J Matern Child Nurs 2025; 50:78-85. PMID: 39724549; DOI: 10.1097/nmc.0000000000001083.
Abstract
ABSTRACT Artificial intelligence (AI), defined as algorithms built to reproduce human behavior, has various applications in health care such as risk prediction, medical image classification, text analysis, and complex disease diagnosis. Due to the increasing availability and volume of data, especially from electronic health records, AI technology is expanding into all fields of nursing and medicine. As the health care system moves toward automation and computationally driven clinical decision-making, nurses play a vital role in bridging the gap between the technological output, the patient, and the health care team. We explore the nurses' role in translating AI-generated output to patients and identify considerations for ensuring informed consent and shared decision-making throughout the process. A brief review of AI technology and informed consent, an identification of power dynamics that underlie informed consent, and descriptions of the role of the nurse in various relationships such as nurse-AI, nurse-patient, and patient-AI are covered. Ultimately, nurses and physicians bear the responsibility of upholding and safeguarding the right to informed choice, as it is a fundamental aspect of safe and ethical patient-centered health care.
3
Jha D, Durak G, Sharma V, Keles E, Cicek V, Zhang Z, Srivastava A, Rauniyar A, Hagos DH, Tomar NK, Miller FH, Topcu A, Yazidi A, Håkegård JE, Bagci U. A Conceptual Framework for Applying Ethical Principles of AI to Medical Practice. Bioengineering (Basel) 2025; 12:180. PMID: 40001699; PMCID: PMC11851997; DOI: 10.3390/bioengineering12020180.
Abstract
Artificial Intelligence (AI) is reshaping healthcare through advancements in clinical decision support and diagnostic capabilities. While human expertise remains foundational to medical practice, AI-powered tools are increasingly matching or exceeding specialist-level performance across multiple domains, paving the way for a new era of democratized healthcare access. These systems promise to reduce disparities in care delivery across demographic, racial, and socioeconomic boundaries by providing high-quality diagnostic support at scale. As a result, advanced healthcare services can be affordable to all populations, irrespective of demographics, race, or socioeconomic background. The democratization of such AI tools can reduce the cost of care, optimize resource allocation, and improve the quality of care. In contrast to humans, AI can potentially uncover complex relationships in the data from a large set of inputs and generate new evidence-based knowledge in medicine. However, integrating AI into healthcare raises several ethical and philosophical concerns, such as bias, transparency, autonomy, responsibility, and accountability. In this study, we examine recent advances in AI-enabled medical image analysis, current regulatory frameworks, and emerging best practices for clinical integration. We analyze both technical and ethical challenges inherent in deploying AI systems across healthcare institutions, with particular attention to data privacy, algorithmic fairness, and system transparency. Furthermore, we propose practical solutions to address key challenges, including data scarcity, racial bias in training datasets, limited model interpretability, and systematic algorithmic biases. Finally, we outline a conceptual algorithm for responsible AI implementations and identify promising future research and development directions.
Affiliation(s)
- Debesh Jha
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, IL 60611, USA
- Gorkem Durak
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, IL 60611, USA
- Vanshali Sharma
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, IL 60611, USA
- Elif Keles
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, IL 60611, USA
- Vedat Cicek
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, IL 60611, USA
- Zheyuan Zhang
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, IL 60611, USA
- Abhishek Srivastava
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, IL 60611, USA
- Ashish Rauniyar
- Sustainable Communication Technologies, SINTEF Digital, 7034 Trondheim, Norway
- Desta Haileselassie Hagos
- Department of Electrical Engineering and Computer Science, Howard University, Washington, DC 20059, USA
- Nikhil Kumar Tomar
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, IL 60611, USA
- Frank H. Miller
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, IL 60611, USA
- Ahmet Topcu
- Department of General Surgery, Tokat State Hospital, Tokat 60100, Türkiye
- Anis Yazidi
- OsloMet Artificial Intelligence (AI) Lab, Oslo Metropolitan University, 0130 Oslo, Norway
- Jan Erik Håkegård
- Sustainable Communication Technologies, SINTEF Digital, 7034 Trondheim, Norway
- Ulas Bagci
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, IL 60611, USA
4
Ayoub NF, Rameau A, Brenner MJ, Bur AM, Ator GA, Briggs SE, Takashima M, Stankovic KM. American Academy of Otolaryngology-Head and Neck Surgery (AAO-HNS) Report on Artificial Intelligence. Otolaryngol Head Neck Surg 2025; 172:734-743. PMID: 39666770; DOI: 10.1002/ohn.1080.
Abstract
This report synthesizes the American Academy of Otolaryngology-Head and Neck Surgery (AAO-HNS) Task Force's guidance on the integration of artificial intelligence (AI) in otolaryngology-head and neck surgery (OHNS). A comprehensive literature review was conducted, focusing on the applications, benefits, and challenges of AI in OHNS, alongside ethical, legal, and social implications. The Task Force, formulated by otolaryngologist experts in AI, used an iterative approach, adapted from the Delphi method, to prioritize topics for inclusion and to reach a consensus on guiding principles. The Task Force's findings highlight AI's transformative potential for OHNS, offering potential advancements in precision medicine, clinical decision support, operational efficiency, research, and education. However, challenges such as data quality, health equity, privacy concerns, transparency, regulatory gaps, and ethical dilemmas necessitate careful navigation. Incorporating AI into otolaryngology practice in a safe, equitable, and patient-centered manner requires clinician judgment, transparent AI systems, and adherence to ethical and legal standards. The Task Force principles underscore the importance of otolaryngologists' involvement in AI's ethical development, implementation, and regulation to harness benefits while mitigating risks. The proposed principles inform the integration of AI in otolaryngology, aiming to enhance patient outcomes, clinician well-being, and efficiency of health care delivery.
Affiliation(s)
- Noel F Ayoub
- Department of Otolaryngology-Head and Neck Surgery, Mass Eye & Ear, Boston, Massachusetts, USA
- Department of Otolaryngology-Head and Neck Surgery, Stanford University, Palo Alto, California, USA
- Anaïs Rameau
- Department of Otolaryngology-Head and Neck Surgery, Weill Cornell Medical College, Ithaca, New York, USA
- Michael J Brenner
- Department of Otolaryngology-Head and Neck Surgery, University of Michigan Medical School, Ann Arbor, Michigan, USA
- Andrés M Bur
- Department of Otolaryngology-Head and Neck Surgery, University of Kansas Medical Center, Kansas City, Kansas, USA
- Gregory A Ator
- Department of Otolaryngology-Head and Neck Surgery, University of Kansas Medical Center, Kansas City, Kansas, USA
- Selena E Briggs
- Department of Otolaryngology-Head and Neck Surgery, MedStar Georgetown University Hospital, Washington, District of Columbia, USA
- Masayoshi Takashima
- Department of Otolaryngology-Head and Neck Surgery, Houston Methodist, Houston, Texas, USA
- Konstantina M Stankovic
- Department of Otolaryngology-Head and Neck Surgery, Stanford University, Palo Alto, California, USA
5
Sobaih AEE, Chaibi A, Brini R, Abdelghani Ibrahim TM. Unlocking Patient Resistance to AI in Healthcare: A Psychological Exploration. Eur J Investig Health Psychol Educ 2025; 15:6. PMID: 39852189; PMCID: PMC11765336; DOI: 10.3390/ejihpe15010006.
Abstract
Artificial intelligence (AI) has transformed healthcare, yet patients' acceptance of AI-driven medical services remains constrained. Despite its significant potential, patients exhibit reluctance towards this technology, and comprehensive research examining the variables driving patients' resistance to AI is notably lacking. This study explores the variables influencing patients' resistance to adopting AI technology in healthcare by applying an extended Ram and Sheth model. More specifically, this research examines the roles of the need for personal contact (NPC), perceived technological dependence (PTD), and general skepticism toward AI (GSAI) in shaping patient resistance to AI integration. A sequential mixed-method approach was employed, beginning with semi-structured interviews to identify adaptable factors in healthcare, followed by a survey to validate the qualitative findings through Structural Equation Modeling (SEM) via AMOS (version 24). The findings confirm that NPC, PTD, and GSAI significantly contribute to patient resistance to AI in healthcare. Specifically, patients who prefer personal interaction, feel dependent on AI, or are skeptical of AI's promises are more likely to resist its adoption. The findings highlight the psychological factors driving patient reluctance toward AI in healthcare, offering valuable insights for healthcare administrators. Strategies to balance AI's efficiency with human interaction, mitigate technological dependence, and foster trust are recommended for the successful implementation of AI. This research adds to the theoretical understanding of Innovation Resistance Theory, providing both conceptual insights and practical implications for the effective incorporation of AI in healthcare.
Affiliation(s)
- Abu Elnasr E. Sobaih
- Management Department, College of Business Administration, King Faisal University, Al-Ahsaa 31982, Saudi Arabia
- Asma Chaibi
- Management Department, Mediterranean School of Business (MSB), South Mediterranean University, Tunis 1053, Tunisia
- Riadh Brini
- Department of Business Administration, College of Business Administration, Majmaah University, Al Majma’ah 11952, Saudi Arabia
6
Panadés R, Yuguero O. Cyber-bioethics: the new ethical discipline for digital health. Front Digit Health 2025; 6:1523180. PMID: 39839847; PMCID: PMC11747591; DOI: 10.3389/fdgth.2024.1523180.
Affiliation(s)
- Robert Panadés
- Intelligence for Primary Care Research Group, Fundació Institut Universitari per a la recerca a l'Atenció Primària de Salut Jordi Gol i Gurina, Manresa, Spain
- Equip d'Atenció Primària d'Anoia Rural, Gerència d'Atenció Primària i a la comunitat Penedès, Institut Català de la Salut, Barcelona, Spain
- Digital Health Group, Family Medicine Society of Catalunya (CAMFIC), Barcelona, Spain
- Oriol Yuguero
- Digital Health Group, Family Medicine Society of Catalunya (CAMFIC), Barcelona, Spain
- E-RLab, Health Center, Universitat Oberta de Catalunya (UOC), Barcelona, Spain
7
Taymour N, Fouda SM, Abdelrahaman HH, Hassan MG. Performance of the ChatGPT-3.5, ChatGPT-4, and Google Gemini large language models in responding to dental implantology inquiries. J Prosthet Dent 2025:S0022-3913(24)00833-3. PMID: 39757053; DOI: 10.1016/j.prosdent.2024.12.016.
Abstract
STATEMENT OF PROBLEM Artificial intelligence (AI) chatbots have been proposed as promising resources for oral health information. However, the quality and readability of existing online health-related information is often inconsistent and challenging. PURPOSE This study aimed to compare the reliability and usefulness of dental implantology-related information provided by the ChatGPT-3.5, ChatGPT-4, and Google Gemini large language models (LLMs). MATERIAL AND METHODS A total of 75 questions were developed covering various dental implant domains. These questions were then presented to 3 different LLMs: ChatGPT-3.5, ChatGPT-4, and Google Gemini. The responses generated were recorded and independently assessed by 2 specialists who were blinded to the source of the responses. The evaluation focused on the accuracy of the generated answers using a modified 5-point Likert scale to measure the reliability and usefulness of the information provided. Additionally, the ability of the AI chatbots to offer definitive responses to closed questions, provide reference citations, and advise scheduling consultations with a dental specialist was also analyzed. The Friedman, Mann-Whitney U, and Spearman correlation tests were used for data analysis (α=.05). RESULTS Google Gemini exhibited higher reliability and usefulness scores compared with ChatGPT-3.5 and ChatGPT-4 (P<.001). Google Gemini also demonstrated superior proficiency in identifying closed questions (25 questions, 41%) and recommended specialist consultations for 74 questions (98.7%), significantly outperforming ChatGPT-4 (30 questions, 40.0%) and ChatGPT-3.5 (28 questions, 37.3%) (P<.001). A positive correlation was found between reliability and usefulness scores, with Google Gemini showing the strongest correlation (ρ=.702). CONCLUSIONS The 3 AI chatbots showed acceptable levels of reliability and usefulness in addressing dental implant-related queries. Google Gemini distinguished itself by providing responses consistent with specialist consultations.
Affiliation(s)
- Noha Taymour
- Lecturer, Department of Substitutive Dental Sciences, College of Dentistry, Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia
- Shaimaa M Fouda
- Lecturer, Department of Substitutive Dental Sciences, College of Dentistry, Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia
- Hams H Abdelrahaman
- Assistant Lecturer, Department of Pediatric Dentistry and Dental Public Health, Faculty of Dentistry, Alexandria University, Alexandria, Egypt
- Mohamed G Hassan
- Postdoctoral Research Associate, Division of Bone and Mineral Diseases, Department of Internal Medicine, School of Medicine, Washington University in St. Louis, St. Louis, MO; and Lecturer, Department of Orthodontics, Faculty of Dentistry, Assiut University, Assiut, Egypt
8
Branstetter R, Piedy E, Rajendra R, Bronstone A, Dasa V. Navigating the Intersection of Technology and Surgical Education: Advancements, Challenges, and Ethical Considerations in Orthopedic Training. Orthop Clin North Am 2025; 56:21-28. PMID: 39581642; DOI: 10.1016/j.ocl.2024.07.003.
Abstract
The emergence of technological advancements such as artificial intelligence, virtual reality, and robotics may offer new solutions to address crucial deficiencies in surgical residency training. However, these technologies also introduce ethical dilemmas and practical complexities. Achieving a balance between embracing innovation and refining traditional surgical techniques is essential in molding well-rounded, proficient surgeons. Addressing concerns such as disparities in access to technology and the risk of excessive automated system dependence demands thorough deliberation and the establishment of universal guidelines. By approaching these challenges with care and insight, surgeons can utilize new technology to elevate both surgical training and outcomes.
Affiliation(s)
- Robert Branstetter
- Department of Orthopedic Surgery, Louisiana State University Health Sciences Center School of Medicine, 2020 Gravier Street, New Orleans, LA 70112, USA
- Erik Piedy
- Department of Orthopedic Surgery, Louisiana State University Health Sciences Center School of Medicine, 2020 Gravier Street, New Orleans, LA 70112, USA
- Ravi Rajendra
- Department of Orthopedic Surgery, Louisiana State University Health Sciences Center, 2021 Perdido Street, 7th Floor, New Orleans, LA 70112, USA
- Amy Bronstone
- Department of Orthopedic Surgery, Louisiana State University Health Sciences Center, 2021 Perdido Street, 7th Floor, New Orleans, LA 70112, USA
- Vinod Dasa
- Department of Orthopedic Surgery, Louisiana State University Health Sciences Center, 2021 Perdido Street, 7th Floor, New Orleans, LA 70112, USA
9
Gozum IEA, Flake CCD. Human Dignity and Artificial Intelligence in Healthcare: A Basis for a Catholic Ethics on AI. J Relig Health 2024. PMID: 39730882; DOI: 10.1007/s10943-024-02206-1.
Abstract
The rise of artificial intelligence (AI) has caught the attention of the world as it challenges the status quo of human operations. As AI has dramatically impacted education, healthcare, industry, and economics, this article presents a Catholic ethical study of human dignity in the context of AI in healthcare. Debates over whether AI will uphold or undermine the dignity of humankind are occasioned by increasing developments of technology in patient care and medical decision-making. This paper draws from Catholic ethics, especially the concepts of inherent human dignity, the sanctity of human life, and morality in the medical field. It discusses using AI to improve healthcare outcomes without losing the essential humanity of medical practice, and it addresses the most likely ethical issues: the morality of AI-related decisions and the depersonalization of health care. Finally, it provides a framework that brings AI development in line with a Catholic vision of human dignity and supports a healthcare system that serves the common good while respecting the irreplaceable value of the human person, highlighting moral responsibility.
Affiliation(s)
- Ivan Efreaim A Gozum
- Institute of Religion, University of Santo Tomas, 1008, Sampaloc, Manila, Philippines
- The Graduate School, University of Santo Tomas, 1008, Sampaloc, Manila, Philippines
10
Sajdeya R, Narouze S. Harnessing artificial intelligence for predicting and managing postoperative pain: a narrative literature review. Curr Opin Anaesthesiol 2024; 37:604-615. PMID: 39011674; DOI: 10.1097/aco.0000000000001408.
Abstract
PURPOSE OF REVIEW This review examines recent research on artificial intelligence focusing on machine learning (ML) models for predicting postoperative pain outcomes. We also identify technical, ethical, and practical hurdles that demand continued investigation and research. RECENT FINDINGS Current ML models leverage diverse datasets, algorithmic techniques, and validation methods to identify predictive biomarkers, risk factors, and phenotypic signatures associated with increased acute and chronic postoperative pain and persistent opioid use. ML models demonstrate satisfactory performance to predict pain outcomes and their prognostic trajectories, identify modifiable risk factors and at-risk patients who benefit from targeted pain management strategies, and show promise in pain prevention applications. However, further evidence is needed to evaluate the reliability, generalizability, effectiveness, and safety of ML-driven approaches before their integration into perioperative pain management practices. SUMMARY Artificial intelligence (AI) has the potential to enhance perioperative pain management by providing more accurate predictive models and personalized interventions. By leveraging ML algorithms, clinicians can better identify at-risk patients and tailor treatment strategies accordingly. However, successful implementation needs to address challenges in data quality, algorithmic complexity, and ethical and practical considerations. Future research should focus on validating AI-driven interventions in clinical practice and fostering interdisciplinary collaboration to advance perioperative care.
Affiliation(s)
- Ruba Sajdeya
- Department of Anesthesiology, Duke University School of Medicine, Durham, North Carolina, USA
- Samer Narouze
- Division of Pain Medicine, University Hospitals Medical Center, Cleveland, Ohio, USA
11
Abujaber AA, Nashwan AJ. Ethical framework for artificial intelligence in healthcare research: A path to integrity. World J Methodol 2024; 14:94071. PMID: 39310239; PMCID: PMC11230076; DOI: 10.5662/wjm.v14.i3.94071.
Abstract
The integration of Artificial Intelligence (AI) into healthcare research promises unprecedented advancements in medical diagnostics, treatment personalization, and patient care management. However, these innovations also bring forth significant ethical challenges that must be addressed to maintain public trust, ensure patient safety, and uphold data integrity. This article sets out to introduce a detailed framework designed to steer governance and offer a systematic method for assuring that AI applications in healthcare research are developed and executed with integrity and adherence to medical research ethics.
Affiliation(s)
- Ahmad A Abujaber
- Department of Nursing, Hazm Mebaireek General Hospital (HMGH), Doha 3050, Qatar
12
Rose SL, Shapiro D. An Ethically Supported Framework for Determining Patient Notification and Informed Consent Practices When Using Artificial Intelligence in Health Care. Chest 2024; 166:572-578. PMID: 38788895; PMCID: PMC11443239; DOI: 10.1016/j.chest.2024.04.014.
Abstract
Artificial intelligence (AI) is increasingly being used in health care. Without an ethically supportable, standard approach to knowing when patients should be informed about AI, hospital systems and clinicians run the risk of fostering mistrust among their patients and the public. Therefore, hospital leaders need guidance on when to tell patients about the use of AI in their care. In this article, we provide such guidance. To determine which AI technologies fall into each of the identified categories (no notification or no informed consent [IC], notification only, and formal IC), we propose that AI use-cases should be evaluated using the following criteria: (1) AI model autonomy, (2) departure from standards of practice, (3) whether the AI model is patient facing, (4) clinical risk introduced by the model, and (5) administrative burdens. We take each of these in turn, using a case example of AI in health care to illustrate our proposed framework. As AI becomes more commonplace in health care, our proposal may serve as a starting point for creating consensus on standards for notification and IC for the use of AI in patient care.
Affiliation(s)
- Susannah L Rose
- Center for Bioethics and Society, Department of Bioinformatics, Vanderbilt University Medical Center, Vanderbilt University, Nashville, TN
- Devora Shapiro
- Department of Social Medicine, Ohio University-Heritage College of Osteopathic Medicine, Athens, OH
Collapse
|
13
|
De Micco F, Tambone V, Frati P, Cingolani M, Scendoni R. Disability 4.0: bioethical considerations on the use of embodied artificial intelligence. Front Med (Lausanne) 2024; 11:1437280. [PMID: 39219800] [PMCID: PMC11362069] [DOI: 10.3389/fmed.2024.1437280]
Abstract
Robotics and artificial intelligence have marked the beginning of a new era in the care and integration of people with disabilities, helping to promote their independence, autonomy and social participation. In this area, bioethical reflection assumes a key role at anthropological, ethical, legal and socio-political levels. However, there is currently a substantial diversity of opinions and ethical arguments, as well as a lack of consensus on the use of assistive robots, while the focus remains predominantly on the usability of products. The article presents a bioethical analysis that highlights the risk arising from using embodied artificial intelligence according to a functionalist model. Failure to recognize disability as the result of a complex interplay between health, personal and situational factors could result in potential damage to the intrinsic dignity of the person and human relations with healthcare workers. Furthermore, the danger of discrimination in accessing these new technologies is highlighted, emphasizing the need for an ethical approach that considers the social and moral implications of implementing embodied AI in the field of rehabilitation.
Affiliation(s)
- Francesco De Micco
- Research Unit of Bioethics and Humanities, Department of Medicine and Surgery, University Campus Bio-Medico of Rome, Rome, Italy
- Operative Research Unit of Clinical Affairs, Healthcare Bioethics Center, Fondazione Policlinico Universitario Campus Bio-Medico, Rome, Italy
- Vittoradolfo Tambone
- Research Unit of Bioethics and Humanities, Department of Medicine and Surgery, University Campus Bio-Medico of Rome, Rome, Italy
- Operative Research Unit of Clinical Affairs, Healthcare Bioethics Center, Fondazione Policlinico Universitario Campus Bio-Medico, Rome, Italy
- Paola Frati
- Department of Anatomical, Histological, Forensic and Orthopedic Sciences, Sapienza University, Rome, Italy
- Mariano Cingolani
- Department of Law, Institute of Legal Medicine, University of Macerata, Macerata, Italy
- Roberto Scendoni
- Department of Law, Institute of Legal Medicine, University of Macerata, Macerata, Italy
14
Isavand P, Aghamiri SS, Amin R. Applications of Multimodal Artificial Intelligence in Non-Hodgkin Lymphoma B Cells. Biomedicines 2024; 12:1753. [PMID: 39200217] [PMCID: PMC11351272] [DOI: 10.3390/biomedicines12081753]
Abstract
Given advancements in large-scale data and AI, integrating multimodal artificial intelligence into cancer research can enhance our understanding of tumor behavior by simultaneously processing diverse biomedical data types. In this review, we explore the potential of multimodal AI in comprehending B-cell non-Hodgkin lymphomas (B-NHLs), which pose a particular challenge in oncology due to tumor heterogeneity and the intricate ecosystem in which tumors develop. These complexities complicate diagnosis, prognosis, and therapy response, emphasizing the need for sophisticated approaches to enhance personalized treatment strategies and improve patient outcomes. Multimodal AI can therefore be leveraged to synthesize critical information from available biomedical data, such as clinical records, imaging, pathology, and omics data, to build a picture of the whole tumor. We first define various types of modalities, multimodal AI frameworks, and several applications in precision medicine. We then provide several examples of its use in B-NHLs: analyzing the complexity of the tumor ecosystem, identifying immune biomarkers, optimizing therapy strategies, and supporting clinical applications. Lastly, we address the limitations and future directions of multimodal AI, highlighting the need to overcome these challenges for better clinical practice and application in healthcare.
Affiliation(s)
- Pouria Isavand
- Department of Radiology, School of Medicine, Zanjan University of Medical Sciences, Zanjan 4513956184, Iran
- Rada Amin
- Department of Biochemistry, University of Nebraska, Lincoln, NE 68503, USA
15
Teasdale A, Mills L, Costello R. Artificial Intelligence-Powered Surgical Consent: Patient Insights. Cureus 2024; 16:e68134. [PMID: 39347259] [PMCID: PMC11438496] [DOI: 10.7759/cureus.68134]
Abstract
Introduction The integration of artificial intelligence (AI) in healthcare has revolutionized patient interactions and service delivery. AI's role extends from supporting clinical diagnostics and enhancing operational efficiencies to potentially improving informed consent processes in surgical settings. This study investigates the application of AI, particularly large language models like OpenAI's ChatGPT, in facilitating surgical consent, focusing on patient understanding, satisfaction, and trust. Methods We employed a mixed-methods approach involving 86 participants, including laypeople and medical staff, who engaged in a simulated AI-driven consent process for a tonsillectomy. Participants interacted with ChatGPT-4, which provided detailed procedure explanations, risks, and benefits. Post-interaction, participants completed a survey assessing their experience through quantitative and qualitative measures. Results Participants had a cautiously optimistic response to AI in the surgical consent process. Notably, 71% felt adequately informed, 86% found the information clear, and 71% felt they could make informed decisions. Overall, 71% were satisfied, 57% felt respected and confident, and 57% would recommend it, indicating areas needing refinement. However, concerns about data privacy and the lack of personal interaction were significant, with only 42% reassured about the security of their data. The standardization of information provided by AI was appreciated for potentially reducing human error, but the absence of empathetic human interaction was noted as a drawback. Discussion While AI shows promise in enhancing the consistency and comprehensiveness of information delivered during the consent process, significant challenges remain. These include addressing data privacy concerns and bridging the gap in personal interaction. The potential for AI to misinform due to system "hallucinations" or inherent biases also needs consideration. 
Future research should focus on refining AI interactions to support more nuanced and empathetic engagements, ensuring that AI supplements rather than replaces human elements in healthcare. Conclusion The integration of AI into surgical consent processes could standardize and potentially improve the delivery of information but must be balanced with efforts to maintain the critical human elements of care. Collaborative efforts between developers, clinicians, and ethicists are essential to optimize AI use, ensuring it complements the traditional consent process while enhancing patient satisfaction and trust.
Affiliation(s)
- Laura Mills
- General Practice, Dyfed Road Surgery, Swansea, GBR
16
Iserson KV. Reflexive control in emergency medicine. Am J Emerg Med 2024; 81:75-81. [PMID: 38677197] [DOI: 10.1016/j.ajem.2024.04.037]
Abstract
Emergency physicians (EPs) navigate high-pressure environments, making rapid decisions amidst ambiguity. Their choices are informed by a complex interplay of experience, information, and external forces. While cognitive shortcuts (heuristics) expedite assessments, there are multiple ways they can be subtly manipulated, potentially leading to reflexive control: external actors steering EPs' decisions for their own benefit. Pharmaceutical companies, device manufacturers, and media narratives are among the numerous factors that influence the EPs' information landscape. Using tactics such as selective data dissemination, framing, and financial incentives, these actors can exploit pre-existing cognitive biases like anchoring, confirmation, and availability. This creates fertile ground for reflexive control, where EPs may believe they are acting independently while unknowingly serving the goals of external influencers. The consequences of manipulated decision making can be severe: misdiagnoses, inappropriate treatments, and increased healthcare costs. Ethical dilemmas arise when external pressures conflict with patient well-being. Recognizing these dangers empowers EPs to resist reflexive control through (1) critical thinking: examining information for potential biases and prioritizing evidence-based practices, (2) continuous education: learning about cognitive biases and mitigation strategies, and (3) institutional policies: implementing regulations to reduce external influence and to promote transparency. This vulnerability of emergency medicine decision making highlights the need for awareness, education, and robust ethical frameworks. Understanding reflexive control techniques is crucial for safeguarding patient care and promoting independent, ethical decision making in emergency medicine.
Affiliation(s)
- Kenneth V Iserson
- Department of Emergency Medicine, The University of Arizona, Tucson, AZ, United States of America
17
Maleki Varnosfaderani S, Forouzanfar M. The Role of AI in Hospitals and Clinics: Transforming Healthcare in the 21st Century. Bioengineering (Basel) 2024; 11:337. [PMID: 38671759] [PMCID: PMC11047988] [DOI: 10.3390/bioengineering11040337]
Abstract
As healthcare systems around the world face challenges such as escalating costs, limited access, and growing demand for personalized care, artificial intelligence (AI) is emerging as a key force for transformation. This review is motivated by the urgent need to harness AI's potential to mitigate these issues and aims to critically assess AI's integration in different healthcare domains. We explore how AI empowers clinical decision-making, optimizes hospital operation and management, refines medical image analysis, and revolutionizes patient care and monitoring through AI-powered wearables. Through several case studies, we review how AI has transformed specific healthcare domains and discuss the remaining challenges and possible solutions. Additionally, we discuss methodologies for assessing AI healthcare solutions, ethical challenges of AI deployment, and the importance of data privacy and bias mitigation for responsible technology use. By presenting a critical assessment of AI's transformative potential, this review equips researchers with a deeper understanding of AI's current and future impact on healthcare. It encourages an interdisciplinary dialogue between researchers, clinicians, and technologists to navigate the complexities of AI implementation, fostering the development of AI-driven solutions that prioritize ethical standards, equity, and a patient-centered approach.
Affiliation(s)
- Mohamad Forouzanfar
- Département de Génie des Systèmes, École de Technologie Supérieure (ÉTS), Université du Québec, Montréal, QC H3C 1K3, Canada
- Centre de Recherche de L’institut Universitaire de Gériatrie de Montréal (CRIUGM), Montréal, QC H3W 1W5, Canada