1. Shoval DH, Gigi K, Haber Y, Itzhaki A, Asraf K, Piterman D, Elyoseph Z. A controlled trial examining large language model conformity in psychiatric assessment using the Asch paradigm. BMC Psychiatry 2025; 25:478. PMID: 40355854; PMCID: PMC12070653; DOI: 10.1186/s12888-025-06912-2.
Abstract
BACKGROUND: Despite significant advances in AI-driven medical diagnostics, the integration of large language models (LLMs) into psychiatric practice presents unique challenges. While LLMs demonstrate high accuracy in controlled settings, their performance in collaborative clinical environments remains unclear. This study examined whether LLMs exhibit conformity behavior under social pressure across different levels of diagnostic certainty, with a particular focus on psychiatric assessment.

METHODS: Using an adapted Asch paradigm, we conducted a controlled trial examining GPT-4o's performance across three domains representing increasing diagnostic uncertainty: circle similarity judgments (high certainty), brain tumor identification (intermediate certainty), and psychiatric assessment using children's drawings (high uncertainty). The study employed a 3 × 3 factorial design with three pressure conditions: no pressure, full pressure (five consecutive incorrect peer responses), and partial pressure (mixed correct and incorrect peer responses). We conducted 10 trials per condition combination (90 observations in total), using standardized prompts and multiple-choice responses. Binomial tests and chi-square analyses assessed performance differences across conditions.

RESULTS: Under no pressure, GPT-4o achieved 100% accuracy in all domains. Under full pressure, accuracy declined systematically with increasing diagnostic uncertainty: 50% in circle recognition, 40% in tumor identification, and 0% in psychiatric assessment. Partial pressure showed a similar pattern, with maintained accuracy in basic tasks (80% in circle recognition, 100% in tumor identification) but complete failure in psychiatric assessment (0%). All differences between the no-pressure and pressure conditions were statistically significant (P < .05), with the most severe effects in psychiatric assessment (χ²₁ = 16.20, P < .001).

CONCLUSIONS: This study reveals that LLMs exhibit conformity patterns that intensify with diagnostic uncertainty, culminating in complete performance failure in psychiatric assessment under social pressure. These findings suggest that successful implementation of AI in psychiatry requires careful consideration of social dynamics and of the inherent uncertainty in psychiatric diagnosis. Future research should validate these findings across different AI systems and diagnostic tools while developing strategies to maintain AI independence in clinical settings.

TRIAL REGISTRATION: Not applicable.
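The reported statistic for psychiatric assessment (χ²₁ = 16.20, P < .001) is consistent with a Yates-corrected chi-square test on a 2 × 2 table of 10/10 correct trials under no pressure versus 0/10 under full pressure. A minimal check, assuming that table (which the abstract implies but does not state explicitly):

```python
# Reproduce the reported chi-square for psychiatric assessment, assuming a
# 2x2 table: 10/10 correct under no pressure vs. 0/10 under full pressure.
from scipy.stats import chi2_contingency

table = [[10, 0],   # no pressure: correct, incorrect
         [0, 10]]   # full pressure: correct, incorrect
chi2, p, dof, _ = chi2_contingency(table)  # Yates correction is the default for 2x2 tables
print(f"chi2({dof}) = {chi2:.2f}, P = {p:.1e}")  # chi2(1) = 16.20, P = 5.7e-05
```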
Affiliation(s)
- Dorit Hadar Shoval
  - The Center for Psychobiological Research, Department of Psychology and Educational Counseling, Max Stern Yezreel Valley College, Yezreel Valley, Israel
  - The Institute for Research and Development, The Artificial Third, Tel Aviv, Israel
- Karny Gigi
  - The Institute for Research and Development, The Artificial Third, Tel Aviv, Israel
- Yuval Haber
  - The Institute for Research and Development, The Artificial Third, Tel Aviv, Israel
  - The PhD Program of Hermeneutics & Cultural Studies, Interdisciplinary Unit, Bar-Ilan University, Ramat Gan, Israel
- Amir Itzhaki
  - The Institute for Research and Development, The Artificial Third, Tel Aviv, Israel
  - At time of research: senior at Hakfar Hayarok High School, Ramat HaSharon, Israel
- Kfir Asraf
  - The Center for Psychobiological Research, Department of Psychology and Educational Counseling, Max Stern Yezreel Valley College, Yezreel Valley, Israel
- David Piterman
  - Faculty of Education, School of Therapy, Counseling, and Human Development, University of Haifa, Haifa, Israel
- Zohar Elyoseph
  - The Institute for Research and Development, The Artificial Third, Tel Aviv, Israel
  - Faculty of Education, School of Therapy, Counseling, and Human Development, University of Haifa, Haifa, Israel
2. Manz R, Bäcker J, Cramer S, Meyer P, Müller D, Muzalyova A, Rentschler L, Wengenmayr C, Hinske LC, Huss R, Raffler J, Soto-Rey I. Do explainable AI (XAI) methods improve the acceptance of AI in clinical practice? An evaluation of XAI methods on Gleason grading. J Pathol Clin Res 2025; 11:e70023. PMID: 40079401; PMCID: PMC11904816; DOI: 10.1002/2056-4538.70023.
Abstract
This work evaluated the usefulness and user acceptance of five gradient-based explainable artificial intelligence (XAI) methods in the context of a clinical decision support system for prostate carcinoma. In addition, we aimed to determine whether XAI helps to increase the acceptance of artificial intelligence (AI) and to recommend a particular method for this use case. The evaluation was conducted on a tool developed in-house that displays the AI-generated Gleason grade with different visualization approaches and overlays the corresponding XAI explanations on the original slide. The study was a heuristic evaluation of the five XAI methods. The participants were 15 pathologists from the University Hospital of Augsburg with a wide range of experience in Gleason grading and AI. The evaluation consisted of a user information form, short questionnaires on each XAI method, a ranking of the methods, and a general questionnaire evaluating the performance and usefulness of the AI. Ratings differed significantly between the methods, with Grad-CAM++ performing best. Both the AI decision support and the XAI explanations were seen as helpful by the majority of participants. In conclusion, this pilot study suggests that the evaluated XAI methods can indeed improve the usefulness and acceptance of AI. The results are a good indicator, but studies with larger sample sizes are warranted to draw more definitive conclusions.
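Grad-CAM++, the top-rated method here, derives a class-specific heatmap from the gradients flowing into a CNN's last convolutional layer. The study's in-house viewer is not public; purely as an illustration of the technique, a sketch using the open-source pytorch-grad-cam package, with a stock ResNet-50 and a random tensor standing in for the authors' Gleason model and slide patches:

```python
# Illustrative Grad-CAM++ overlay (generic sketch; not the study's in-house tool).
import numpy as np
import torch
from torchvision.models import resnet50, ResNet50_Weights
from pytorch_grad_cam import GradCAMPlusPlus
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget
from pytorch_grad_cam.utils.image import show_cam_on_image

model = resnet50(weights=ResNet50_Weights.DEFAULT).eval()  # stand-in for a Gleason classifier
target_layers = [model.layer4[-1]]                         # last convolutional block

patch = torch.randn(1, 3, 224, 224)                        # stand-in for a preprocessed slide patch
cam = GradCAMPlusPlus(model=model, target_layers=target_layers)
heatmap = cam(input_tensor=patch,
              targets=[ClassifierOutputTarget(0)])[0]      # (224, 224) saliency map for class 0

rgb = np.zeros((224, 224, 3), dtype=np.float32)            # the patch as float RGB in [0, 1]
overlay = show_cam_on_image(rgb, heatmap, use_rgb=True)    # heatmap blended over the slide image
```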
Affiliation(s)
- Robin Manz
- Digital MedicineUniversity Hospital of AugsburgAugsburgGermany
| | - Jonas Bäcker
- Digital MedicineUniversity Hospital of AugsburgAugsburgGermany
| | - Samantha Cramer
- Digital MedicineUniversity Hospital of AugsburgAugsburgGermany
| | - Philip Meyer
- Digital MedicineUniversity Hospital of AugsburgAugsburgGermany
| | - Dominik Müller
- Digital MedicineUniversity Hospital of AugsburgAugsburgGermany
- IT‐Infrastructure for Translational Medical ResearchUniversity of AugsburgAugsburgGermany
| | - Anna Muzalyova
- Digital MedicineUniversity Hospital of AugsburgAugsburgGermany
| | - Lukas Rentschler
- Institute for Pathology and Molecular DiagnosticsUniversity Hospital of AugsburgAugsburgGermany
| | | | | | - Ralf Huss
- Institute for Pathology and Molecular DiagnosticsUniversity Hospital of AugsburgAugsburgGermany
- BioM Biotech Cluster Development GmbHPlaneggGermany
| | - Johannes Raffler
- Digital MedicineUniversity Hospital of AugsburgAugsburgGermany
- Bavarian Cancer Research Center (BZKF)AugsburgGermany
| | - Iñaki Soto‐Rey
- Digital MedicineUniversity Hospital of AugsburgAugsburgGermany
| |
3. Artsi Y, Sorin V, Glicksberg BS, Nadkarni GN, Klang E. Advancing Clinical Practice: The Potential of Multimodal Technology in Modern Medicine. J Clin Med 2024; 13:6246. PMID: 39458196; PMCID: PMC11508674; DOI: 10.3390/jcm13206246.
Abstract
Multimodal technology is poised to revolutionize clinical practice by integrating artificial intelligence with traditional diagnostic modalities. This evolution traces its roots from Hippocrates' humoral theory to sophisticated AI-driven platforms that synthesize data across multiple sensory channels. The interplay between historical medical practices and modern technology challenges conventional patient-clinician interactions and redefines diagnostic accuracy. Applications ranging from neurology to radiology highlight the potential of multimodal technology and suggest a future in which AI not only supports but enhances human sensory inputs in medical diagnostics. This shift invites the medical community to navigate the ethical, practical, and technological changes reshaping the landscape of clinical medicine.
Affiliation(s)
- Yaara Artsi
  - Azrieli Faculty of Medicine, Bar-Ilan University, Zefat 1311502, Israel
- Vera Sorin
  - Department of Radiology, Mayo Clinic, Rochester, MN 55905, USA
- Benjamin S. Glicksberg
  - Division of Data-Driven and Digital Medicine (D3M), Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
  - The Charles Bronfman Institute of Personalized Medicine, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- Girish N. Nadkarni
  - Division of Data-Driven and Digital Medicine (D3M), Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
  - The Charles Bronfman Institute of Personalized Medicine, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- Eyal Klang
  - Division of Data-Driven and Digital Medicine (D3M), Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
  - The Charles Bronfman Institute of Personalized Medicine, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
4. Funer F, Tinnemeyer S, Liedtke W, Salloch S. Clinicians' roles and necessary levels of understanding in the use of artificial intelligence: a qualitative interview study with German medical students. BMC Med Ethics 2024; 25:107. PMID: 39375660; PMCID: PMC11457475; DOI: 10.1186/s12910-024-01109-w.
Abstract
BACKGROUND: Artificial intelligence-driven clinical decision support systems (AI-CDSS) are increasingly being introduced into various domains of health care for diagnostic, prognostic, therapeutic and other purposes. A significant part of the discourse on ethically appropriate conditions relates to the levels of understanding and explicability needed to ensure responsible clinical decision-making when using AI-CDSS. Empirical evidence on stakeholders' viewpoints on these issues is scarce so far. The present study complements the empirical-ethical body of research by investigating the requirements for understanding and explicability in depth, including the rationale behind them, and by surveying medical students at the end of their studies, a stakeholder group on which little data is available so far but for whom AI-CDSS will be an important part of medical practice.

METHODS: Fifteen semi-structured qualitative interviews (averaging 56 minutes each) were conducted with German medical students to investigate their perspectives on and attitudes towards the use of AI-CDSS. The problem-centred interviews drew on two hypothetical case vignettes of AI-CDSS employed in nephrology and surgery. Interviewees' perceptions and convictions regarding their own clinical role and responsibilities in dealing with AI-CDSS were elicited, as were their viewpoints on explicability and on the level of understanding and the competencies needed on the clinicians' side. The qualitative data were analysed according to key principles of qualitative content analysis (Kuckartz).

RESULTS: In response to the central question about the necessary understanding of AI-CDSS tools and the emergence of their outputs, as well as the reasons for the requirements placed on them, two types of argumentation could be differentiated inductively from the interviewees' statements. The first type, the clinician as a systemic trustee (or "the one relying"), holds that there must be empirical evidence and adequate approval processes that guarantee minimised harm and a clinical benefit from the employment of an AI-CDSS. Once these requirements are met, the use of an AI-CDSS is appropriate, because according to "the one relying", clinicians should choose those measures that statistically cause the least harm. The second type, the clinician as an individual expert (or "the one controlling"), sets higher prerequisites that go beyond empirical evidence and approval processes: clinicians need a sufficient level of competence and understanding of how a specific AI-CDSS works and how to use it properly in order to evaluate its outputs and to mitigate potential risks for the individual patient. Both types share a high esteem of evidence-based clinical practice and the need to communicate with the patient about the use of medical AI. However, the interviewees' different conceptions of the clinician's role and responsibilities lead to different requirements regarding the clinician's understanding and the explicability of an AI-CDSS beyond the proof of benefit.

CONCLUSIONS: The study results highlight two different types among (future) clinicians regarding their view of the necessary levels of understanding and competence. These findings should inform the debate on appropriate training programmes and professional standards (e.g. clinical practice guidelines) that enable the safe and effective clinical employment of AI-CDSS in various clinical fields. While current approaches search for appropriate minimum requirements of understanding and competence, the differences between (future) clinicians in their information and understanding needs described here can lead to more differentiated solutions.
Affiliation(s)
- F Funer
  - Institute for Ethics, History and Philosophy of Medicine, Hannover Medical School (MHH), Carl-Neuberg-Str. 1, 30625 Hannover, Germany
  - Institute for Ethics and History of Medicine, Eberhard Karls University Tübingen, Gartenstr. 47, 72074 Tübingen, Germany
- S Tinnemeyer
  - Institute for Ethics, History and Philosophy of Medicine, Hannover Medical School (MHH), Carl-Neuberg-Str. 1, 30625 Hannover, Germany
- W Liedtke
  - Faculty of Theology, University of Greifswald, Am Rubenowplatz 2/3, 17489 Greifswald, Germany
- S Salloch
  - Institute for Ethics, History and Philosophy of Medicine, Hannover Medical School (MHH), Carl-Neuberg-Str. 1, 30625 Hannover, Germany
5. Wabro A, Herrmann M, Winkler EC. When time is of the essence: ethical reconsideration of XAI in time-sensitive environments. J Med Ethics 2024:jme-2024-110046. PMID: 39299730; DOI: 10.1136/jme-2024-110046.
Abstract
Explainable artificial intelligence systems designed for clinical decision support (XAI-CDSS) aim to enhance physicians' diagnostic performance, confidence and trust through interpretable methods, thereby providing a superior epistemic position, a robust foundation for critical reflection, and trustworthiness in times of heightened technological dependence. However, recent studies have revealed shortcomings in achieving these goals, calling into question the widespread endorsement of XAI by medical professionals, ethicists and policy-makers alike. Based on a surgical use case, this article challenges generalising calls for XAI-CDSS and emphasises the significance of time-sensitive clinical environments, which frequently preclude adequate consideration of system explanations. XAI-CDSS may therefore be unable to meet expectations of augmenting clinical decision-making in circumstances where time is of the essence. Employing a principled ethical balancing methodology, the article highlights several fallacies associated with XAI deployment in time-sensitive clinical situations and recommends XAI endorsement only where scientific evidence or stakeholder assessments do not contradict such deployment in specific target settings.
Affiliation(s)
- Andreas Wabro
  - National Center for Tumor Diseases (NCT) Heidelberg, a partnership between DKFZ and Heidelberg University Hospital; Section Translational Medical Ethics, Department of Medical Oncology, Heidelberg University Hospital, Medical Faculty Heidelberg, Heidelberg University, Heidelberg, Germany
- Markus Herrmann
  - National Center for Tumor Diseases (NCT) Heidelberg, a partnership between DKFZ and Heidelberg University Hospital; Section Translational Medical Ethics, Department of Medical Oncology, Heidelberg University Hospital, Medical Faculty Heidelberg, Heidelberg University, Heidelberg, Germany
- Eva C Winkler
  - National Center for Tumor Diseases (NCT) Heidelberg, a partnership between DKFZ and Heidelberg University Hospital; Section Translational Medical Ethics, Department of Medical Oncology, Heidelberg University Hospital, Medical Faculty Heidelberg, Heidelberg University, Heidelberg, Germany
6. Funer F, Schneider D, Heyen NB, Aichinger H, Klausen AD, Tinnemeyer S, Liedtke W, Salloch S, Bratan T. Impacts of Clinical Decision Support Systems on the Relationship, Communication, and Shared Decision-Making Between Health Care Professionals and Patients: Multistakeholder Interview Study. J Med Internet Res 2024; 26:e55717. PMID: 39178023; PMCID: PMC11380058; DOI: 10.2196/55717.
Abstract
BACKGROUND: Clinical decision support systems (CDSSs) are increasingly being introduced into various domains of health care. Little is known so far about the impact of such systems on the health care professional-patient relationship, and there is a lack of agreement about whether and how patients should be informed about the use of CDSSs.

OBJECTIVE: This study aims to explore, in an empirically informed manner, the potential implications for the health care professional-patient relationship and to underline the importance of this relationship when using CDSSs, for both patients and future professionals.

METHODS: Using methodological triangulation, 15 medical students and 12 trainee nurses were interviewed in semistructured interviews, and 18 patients were involved in focus groups, between April 2021 and April 2022. All participants came from Germany. Three examples of CDSSs covering different areas of health care (ie, surgery, nephrology, and intensive home care) were used as stimuli to identify similarities and differences in the use of CDSSs across fields of application. The interview and focus group transcripts were analyzed using a structured qualitative content analysis.

RESULTS: From the interviews and focus groups analyzed, three topics were identified that interdependently address the interactions between patients and health care professionals: (1) CDSSs and their impact on the roles of and requirements for health care professionals, (2) CDSSs and their impact on the relationship between health care professionals and patients (including communication requirements for shared decision-making), and (3) stakeholders' expectations for patient education and information about CDSSs and their use.

CONCLUSIONS: The results indicate that using CDSSs could restructure established power and decision-making relationships between (future) health care professionals and patients. In addition, respondents expected the use of CDSSs to involve more communication, and therefore anticipated an increased time commitment. The results shed new light on the existing discourse by demonstrating that the anticipated impact of CDSSs on the health care professional-patient relationship appears to stem less from the function of a CDSS than from its integration into the relationship. The anticipated effects on the relationship between health care professionals and patients could therefore be specifically addressed in patient information about the use of CDSSs.
Affiliation(s)
- Florian Funer
  - Institute for Ethics and History of Medicine, Eberhard Karls University Tuebingen, Tuebingen, Germany
  - Institute for Ethics, History and Philosophy of Medicine, Hannover Medical School, Hannover, Germany
- Diana Schneider
  - Competence Center Emerging Technologies, Fraunhofer Institute for Systems and Innovation Research ISI, Karlsruhe, Germany
- Nils B Heyen
  - Competence Center Emerging Technologies, Fraunhofer Institute for Systems and Innovation Research ISI, Karlsruhe, Germany
- Heike Aichinger
  - Competence Center Emerging Technologies, Fraunhofer Institute for Systems and Innovation Research ISI, Karlsruhe, Germany
- Andrea Diana Klausen
  - Institute for Medical Informatics, University Medical Center - RWTH Aachen, Aachen, Germany
- Sara Tinnemeyer
  - Institute for Ethics, History and Philosophy of Medicine, Hannover Medical School, Hannover, Germany
- Wenke Liedtke
  - Department of Social Work, Protestant University of Applied Sciences Rhineland-Westphalia-Lippe, Bochum, Germany
  - Ethics and its Didactics, Faculty of Theology, University of Greifswald, Greifswald, Germany
- Sabine Salloch
  - Institute for Ethics, History and Philosophy of Medicine, Hannover Medical School, Hannover, Germany
- Tanja Bratan
  - Competence Center Emerging Technologies, Fraunhofer Institute for Systems and Innovation Research ISI, Karlsruhe, Germany
7. Funer F, Wiesing U. Physician's autonomy in the face of AI support: walking the ethical tightrope. Front Med (Lausanne) 2024; 11:1324963. PMID: 38606162; PMCID: PMC11007068; DOI: 10.3389/fmed.2024.1324963.
Abstract
The introduction of AI support tools raises questions about the normative orientation of medical practice and the need to rethink its basic concepts. One concept central to this discussion is the physician's autonomy and its appropriateness in the face of high-powered AI applications. In this essay, the physician's autonomy is differentiated on the basis of a conceptual analysis. It is argued that the physician's decision-making autonomy is a purposeful autonomy: it is anchored in the medical ethos for the purpose of promoting the patient's health and well-being and protecting him or her from harm. It follows from this purposefulness that the physician's autonomy is not to be protected for its own sake, but only insofar as it serves this end better than alternative means. We argue that today, given the existing limitations of AI support tools, physicians still need decision-making autonomy. For physicians to be able to exercise decision-making autonomy in the face of AI support, we elaborate three conditions: (1) sufficient information about the AI support and its statements, (2) sufficient competencies to integrate AI statements into clinical decision-making, and (3) a context of voluntariness that allows, in justified cases, deviation from AI support. If physicians are to fulfill their moral obligation to promote the health and well-being of the patient, then the use of AI should be designed in such a way that it promotes, or at least maintains, the physician's decision-making autonomy.
Affiliation(s)
- Florian Funer
  - Institute for Ethics and History of Medicine, University Hospital and Faculty of Medicine, University of Tübingen, Tübingen, Germany
8. Alotaibi AA, Alotaibi KA, Almutairi AN, Alsaab A. Physicians' Perspectives on the Impact of Insurance Status on Clinical Decision-Making in Saudi Arabia. Cureus 2024; 16:e53756. PMID: 38465027; PMCID: PMC10921445; DOI: 10.7759/cureus.53756.
Abstract
Background: The decision-making process in clinical practice depends heavily on collaboration and information sharing. Physicians' decision-making is profoundly influenced by patients' insurance status, which warrants focused investigation. This study therefore aimed to investigate how physicians perceive the influence of insurance status on treatment options and medical interventions, and to explore the extent to which physicians discuss insurance-related considerations with patients during shared decision-making.

Methodology: This was a cross-sectional exploratory study conducted in healthcare facilities across Saudi Arabia. An electronic questionnaire was the primary tool for data collection. Data were coded, entered, and analyzed using both descriptive and inferential statistical methods.

Results: The study involved 430 physicians, primarily male (n = 230, 53.5%), aged 31-40 years (n = 215, 50%), and mostly non-Saudi (n = 285, 66.3%). Medical officers constituted the majority of the study population (n = 258, 60%), with one to five years of experience (n = 187, 43.5%), and engaged in private practice (n = 230, 70%). Concerning insurance, 287 (66.7%) physicians considered patients' insurance when discussing treatment options, while 318 (74%) discussed the financial implications of different treatment options with patients. Regarding outcomes, 373 (86.7%) physicians believed that insurance status affected patient outcomes and treatment modalities. Factors such as age between 31 and 40 years (P < 0.001), over 10 years of clinical experience (P = 0.002), engagement in both governmental and private practice (P = 0.012), and being a medical officer (P = 0.005) were significantly associated with insurance status influencing clinical decision-making. Overall, recognizing the influence of insurance on decision-making is crucial for equitable healthcare.

Conclusions: More than half of the physicians showed high scores indicating an impact of insurance status on clinical decision-making. This impact was influenced by physician parameters such as age, experience, specialty, and type of practice. Moreover, patients' financial situation and insurance status significantly affected treatment and patient outcomes.
Affiliation(s)
- Ahmad N Almutairi
  - Family Medicine, Medical Services in Saudi Royal Land Forces, Riyadh, SAU
- Anas Alsaab
  - Family Medicine, King Saud Medical City, Riyadh, SAU
9. Čartolovni A, Malešević A, Poslon L. Critical analysis of the AI impact on the patient-physician relationship: a multi-stakeholder qualitative study. Digit Health 2023; 9:20552076231220833. PMID: 38130798; PMCID: PMC10734361; DOI: 10.1177/20552076231220833.
Abstract
Objective: This qualitative study presents the aspirations and expectations of multiple stakeholders, together with a critical analysis of the potential for artificial intelligence (AI) to transform the patient-physician relationship.

Methods: The study was conducted from June to December 2021, using an anticipatory ethics approach and the sociology of expectations as theoretical frameworks. It focused on three groups of stakeholders directly involved in the adoption of AI in medicine (N = 38): physicians (n = 12), patients (n = 15) and healthcare managers (n = 11).

Results: Patients accounted for 40% of the interviewees (15/38), physicians for 31% (12/38) and healthcare managers for 29% (11/38). The findings highlight (1) the impact of AI on fundamental aspects of the patient-physician relationship and the underlying importance of a synergistic relationship between the physician and AI; (2) the potential for AI to alleviate workload and reduce administrative burden, saving time and putting the patient at the centre of the caring process; and (3) the potential risk to the holistic approach posed by neglecting humanness in healthcare.

Conclusions: This multi-stakeholder qualitative study, focused on the micro-level of healthcare decision-making, sheds new light on the impact of AI on healthcare and the potential transformation of the patient-physician relationship. The results highlight the need for a critically aware approach to the implementation of AI in healthcare, applying critical thinking and reasoning. It is important not to rely solely on the recommendations of AI while neglecting clinical reasoning and physicians' knowledge of best clinical practices. Instead, it is vital that the core values of the existing patient-physician relationship, such as trust and honesty conveyed through open and sincere communication, are preserved.
Affiliation(s)
- Anto Čartolovni
  - Digital Healthcare Ethics Laboratory (Digit-HeaL), Catholic University of Croatia, Zagreb, Croatia
  - School of Medicine, Catholic University of Croatia, Zagreb, Croatia
- Anamaria Malešević
  - Digital Healthcare Ethics Laboratory (Digit-HeaL), Catholic University of Croatia, Zagreb, Croatia
- Luka Poslon
  - Digital Healthcare Ethics Laboratory (Digit-HeaL), Catholic University of Croatia, Zagreb, Croatia
10. Ilan Y. Department of Medicine 2040: Implementing a Constrained Disorder Principle-Based Second-Generation Artificial Intelligence System for Improved Patient Outcomes in the Department of Internal Medicine. Inquiry 2023; 60:469580231221285. PMID: 38142419; PMCID: PMC10749528; DOI: 10.1177/00469580231221285.
Abstract
Internal medicine departments must adapt their structures and methods of operation to accommodate changing healthcare systems. This paper discusses some of the challenges departments of medicine face as healthcare providers and consumers continue to change, and describes a co-pilot model for augmenting physicians rather than replacing them, applied to improving diagnosis, treatment, and monitoring. Personalized variability patterns based on the constrained-disorder principle (CDP) are described for assessing the effectiveness of chronic therapies in improving patient outcomes. Building on CDP-enhanced digital twins, the paper presents personalized treatments and follow-up schemes that improve diagnostic accuracy and therapeutic outcomes. While maintaining their professional values, departments of internal medicine must respond proactively to the needs of patients and healthcare systems, striving for medical professionalism while adapting to a dynamic environment.
Affiliation(s)
- Yaron Ilan
  - Hebrew University and Hadassah Medical Center, Jerusalem, Israel