1. Woode ME, De Silva Perera U, Degeling C, Aquino YSJ, Houssami N, Carter SM, Chen G. Preferences for the Use of Artificial Intelligence for Breast Cancer Screening in Australia: A Discrete Choice Experiment. The Patient. 2025. PMID: 40347323. DOI: 10.1007/s40271-025-00742-w.
Abstract
BACKGROUND Breast cancer screening is considered an effective early detection strategy. Artificial intelligence (AI) may offer benefits for breast screening programmes but may also create risks. If AI is to be used in health screening services, the views and expectations of consumers are critical. This study used a discrete choice experiment to examine the preferences of Australian women regarding AI use in breast cancer screening and how information shapes those preferences. METHODS The experiment presented two alternative screening services defined by seven attributes (reading method, screening sensitivity, screening specificity, time between screening and receiving results, supporting evidence, fair representation, and who should be held accountable) to 2063 women aged between 40 and 74 years recruited from an online panel. Participants were randomised into two arms: both received standard information on AI use in breast screening, but one arm received additional information on its potential benefits. Preferences for hypothetical breast cancer screening services were modelled using a random parameter logit model, and relative attribute importance and uptake rates were estimated. RESULTS Participants preferred mixed reading (radiologist + AI system) over the other two reading methods. They showed a strong preference for fewer missed cases, the attribute with the highest relative importance. Fewer false positives and a shorter waiting time for results were also preferred. Preferences for mixed reading over reading by two radiologists were significantly stronger when additional information on AI was provided, highlighting the impact of information. CONCLUSIONS This study revealed the preferences of Australian women for the use of AI-driven breast cancer screening services. The results suggest that women are generally open to their mammograms being read by both a radiologist and an AI-based system under certain conditions.
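The random parameter (mixed) logit model named in this abstract is typically estimated by simulated maximum likelihood. The Python sketch below illustrates the mechanics on synthetic choice data; the number of attributes, sample size, coefficient values, and the choice of which coefficient is random are all illustrative assumptions, not details from the study.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)

# Synthetic choice tasks: each task offers 2 hypothetical screening services
# described by 3 attributes (e.g., a mixed-reading dummy, sensitivity, wait time).
n_obs, n_alt, n_attr, n_draws = 500, 2, 3, 200
X = rng.normal(size=(n_obs, n_alt, n_attr))

# Data-generating process with taste heterogeneity in the first coefficient
true_beta = np.array([0.8, 1.2, -0.5])
beta_i = np.tile(true_beta, (n_obs, 1))
beta_i[:, 0] += 0.6 * rng.normal(size=n_obs)
utility = np.einsum("nak,nk->na", X, beta_i) + rng.gumbel(size=(n_obs, n_alt))
y = utility.argmax(axis=1)  # chosen alternative in each task

draws = rng.normal(size=n_draws)  # simulation draws for the random coefficient

def neg_simulated_loglik(params):
    mean, sd = params[:n_attr], abs(params[n_attr])
    chosen_prob = np.zeros((n_draws, n_obs))
    for r, d in enumerate(draws):
        beta = mean.copy()
        beta[0] = mean[0] + sd * d        # random coefficient on attribute 0
        v = X @ beta                      # systematic utilities, shape (n_obs, n_alt)
        p = np.exp(v - v.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        chosen_prob[r] = p[np.arange(n_obs), y]
    return -np.log(chosen_prob.mean(axis=0)).sum()

res = minimize(neg_simulated_loglik, x0=np.append(np.zeros(n_attr), 0.5))
print("estimated coefficient means and sd of random coefficient:", res.x)
```

A full DCE analysis would use the actual attribute levels and the panel structure of each respondent's repeated choice tasks; relative attribute importance can then be derived from the range of each attribute's utility contribution.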
Affiliation(s)
- Maame Esi Woode
  Centre for Health Economics, Monash Business School, Monash University, 900 Dandenong Road, East Caulfield, VIC, 3145, Australia
  Monash Data Futures Research Institute, Monash University, East Caulfield, Australia
- Udeni De Silva Perera
  Centre for Health Economics, Monash Business School, Monash University, 900 Dandenong Road, East Caulfield, VIC, 3145, Australia
  School of Psychology, Faculty of Health, Deakin University, Burwood, Australia
- Chris Degeling
  Australian Centre for Health Engagement, Evidence and Values, School of Social Sciences, University of Wollongong, Wollongong, Australia
- Yves Saint James Aquino
  Australian Centre for Health Engagement, Evidence and Values, School of Social Sciences, University of Wollongong, Wollongong, Australia
- Nehmat Houssami
  Sydney School of Public Health, Faculty of Medicine and Health, University of Sydney, Sydney, Australia
  The Daffodil Centre, The University of Sydney, a Joint Venture with Cancer Council, Kings Cross, Sydney, Australia
- Stacy M Carter
  Australian Centre for Health Engagement, Evidence and Values, School of Social Sciences, University of Wollongong, Wollongong, Australia
- Gang Chen
  Centre for Health Economics, Monash Business School, Monash University, 900 Dandenong Road, East Caulfield, VIC, 3145, Australia
  Cancer Health Services Research, Collaborative Centre for Genomic Cancer Medicine and Centre for Health Policy, University of Melbourne, Melbourne, Australia
  Peter MacCallum Cancer Centre, Melbourne, VIC, Australia
2. Guo W, Chen Y. Investigating Whether AI Will Replace Human Physicians and Understanding the Interplay of the Source of Consultation, Health-Related Stigma, and Explanations of Diagnoses on Patients' Evaluations of Medical Consultations: Randomized Factorial Experiment. J Med Internet Res. 2025;27:e66760. PMID: 40053785. PMCID: PMC11923482. DOI: 10.2196/66760.
Abstract
BACKGROUND The increasing use of artificial intelligence (AI) in medical diagnosis and consultation promises benefits such as greater accuracy and efficiency. However, little evidence systematically tests whether these technological promises translate into improved evaluations of the medical consultation from the patient's perspective. This perspective is significant because AI as a technological solution does not necessarily improve patient confidence in diagnosis and adherence to treatment at the functional level, create meaningful interactions between the medical agent and the patient at the relational level, or, at the emotional level, evoke positive emotions or reduce the patient's pessimism. OBJECTIVE This study aims to investigate, from a patient-centered perspective, whether AI or human-involved AI can replace the role of human physicians in diagnosis at the functional, relational, and emotional levels, as well as how health-related differences between human-AI and human-human interactions affect patients' evaluations of the medical consultation. METHODS A 3 (consultation source: AI vs human-involved AI vs human) × 2 (health-related stigma: low vs high) × 2 (diagnosis explanation: without vs with explanation) factorial experiment was conducted with 249 participants. The main effects and interaction effects of the variables were examined on individuals' functional, relational, and emotional evaluations of the medical consultation. RESULTS Functionally, people trusted the diagnosis of the human physician (mean 4.78-4.85, SD 0.06-0.07) more than medical AI (mean 4.34-4.55, SD 0.06-0.07) or human-involved AI (mean 4.39-4.56, SD 0.06-0.07; P<.001), but at the relational and emotional levels, there was no significant difference between human-AI and human-human interactions (P>.05). Health-related stigma had no significant effect on how people evaluated the medical consultation, nor did it contribute to preferring AI-powered systems over humans (P>.05); however, providing explanations of the diagnosis significantly improved the functional (P<.001), relational (P<.05), and emotional (P<.05) evaluations of the consultation for all 3 medical agents. CONCLUSIONS The findings imply that at the current stage of AI development, people trust human expertise more than accurate AI, especially for decisions traditionally made by humans, such as medical diagnosis, supporting algorithm aversion theory. Surprisingly, even for highly stigmatized diseases such as AIDS, where we might assume anonymity and privacy are preferred in medical consultations, the dehumanization of AI does not contribute significantly to a preference for AI-powered medical agents over humans, suggesting that the instrumental need for diagnosis overrides patient privacy concerns. Furthermore, explaining the diagnosis effectively improves treatment adherence, strengthens the physician-patient relationship, and fosters positive emotions during the consultation. This provides insights for the design of AI medical agents, which have long been criticized for lacking transparency while making highly consequential decisions. The study concludes by outlining theoretical contributions to research on health communication and human-AI interaction and discusses implications for the design and application of medical AI.
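A 3 × 2 × 2 between-subjects design like this is commonly analysed with a factorial ANOVA of main and interaction effects. A minimal sketch on simulated data follows; the factor levels mirror the abstract, while the effect sizes and noise level are invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
n = 249  # sample size reported in the abstract

# Simulated responses under the 3 x 2 x 2 design
df = pd.DataFrame({
    "source": rng.choice(["AI", "human_AI", "human"], n),
    "stigma": rng.choice(["low", "high"], n),
    "explanation": rng.choice(["without", "with"], n),
})
df["trust"] = (
    4.4
    + 0.4 * (df["source"] == "human")      # assumed advantage of human physicians
    + 0.3 * (df["explanation"] == "with")  # assumed benefit of explanations
    + rng.normal(0, 0.7, n)
)

# Fit the full factorial model and report main and interaction effects
model = ols("trust ~ C(source) * C(stigma) * C(explanation)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```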
Affiliation(s)
- Weiqi Guo
  School of Foreign Languages, Renmin University of China, Beijing, China
- Yang Chen
  School of Journalism and Communication, Renmin University of China, Beijing, China
3. Busch F, Hoffmann L, Rueger C, van Dijk EH, Kader R, Ortiz-Prado E, Makowski MR, Saba L, Hadamitzky M, Kather JN, Truhn D, Cuocolo R, Adams LC, Bressem KK. Current applications and challenges in large language models for patient care: a systematic review. Communications Medicine. 2025;5:26. PMID: 39838160. PMCID: PMC11751060. DOI: 10.1038/s43856-024-00717-2.
Abstract
BACKGROUND The introduction of large language models (LLMs) into clinical practice promises to improve patient education and empowerment, thereby personalizing medical care and broadening access to medical knowledge. Despite the popularity of LLMs, there is a significant gap in systematized information on their use in patient care. Therefore, this systematic review aims to synthesize current applications and limitations of LLMs in patient care. METHODS We systematically searched 5 databases for qualitative, quantitative, and mixed methods articles on LLMs in patient care published between 2022 and 2023. From 4349 initial records, 89 studies across 29 medical specialties were included. Quality assessment was performed using the Mixed Methods Appraisal Tool 2018. A data-driven convergent synthesis approach was applied for thematic syntheses of LLM applications and limitations using free line-by-line coding in Dedoose. RESULTS We show that most studies investigate Generative Pre-trained Transformers (GPT)-3.5 (53.2%, n = 66 of 124 different LLMs examined) and GPT-4 (26.6%, n = 33/124) in answering medical questions, followed by patient information generation, including medical text summarization or translation, and clinical documentation. Our analysis delineates two primary domains of LLM limitations: design and output. Design limitations include 6 second-order and 12 third-order codes, such as lack of medical domain optimization, data transparency, and accessibility issues, while output limitations include 9 second-order and 32 third-order codes, for example, non-reproducibility, non-comprehensiveness, incorrectness, unsafety, and bias. CONCLUSIONS This review systematically maps LLM applications and limitations in patient care, providing a foundational framework and taxonomy for their implementation and evaluation in healthcare settings.
Affiliation(s)
- Felix Busch
  School of Medicine and Health, Department of Diagnostic and Interventional Radiology, Klinikum rechts der Isar, TUM University Hospital, Technical University of Munich, Munich, Germany
- Lena Hoffmann
  Department of Neuroradiology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Berlin, Germany
- Christopher Rueger
  Department of Neuroradiology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Berlin, Germany
- Elon Hc van Dijk
  Department of Ophthalmology, Leiden University Medical Center, Leiden, The Netherlands
  Department of Ophthalmology, Sir Charles Gairdner Hospital, Perth, Australia
- Rawen Kader
  Division of Surgery and Interventional Sciences, University College London, London, United Kingdom
- Esteban Ortiz-Prado
  One Health Research Group, Faculty of Health Science, Universidad de Las Américas, Quito, Ecuador
- Marcus R Makowski
  School of Medicine and Health, Department of Diagnostic and Interventional Radiology, Klinikum rechts der Isar, TUM University Hospital, Technical University of Munich, Munich, Germany
- Luca Saba
  Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.), Cagliari, Italy
- Martin Hadamitzky
  School of Medicine and Health, Institute for Cardiovascular Radiology and Nuclear Medicine, German Heart Center Munich, TUM University Hospital, Technical University of Munich, Munich, Germany
- Jakob Nikolas Kather
  Department of Medical Oncology, National Center for Tumor Diseases (NCT), Heidelberg University Hospital, Heidelberg, Germany
  Else Kroener Fresenius Center for Digital Health, Medical Faculty Carl Gustav Carus, Technical University Dresden, Dresden, Germany
- Daniel Truhn
  Department of Diagnostic and Interventional Radiology, University Hospital Aachen, Aachen, Germany
- Renato Cuocolo
  Department of Medicine, Surgery and Dentistry, University of Salerno, Baronissi, Italy
- Lisa C Adams
  School of Medicine and Health, Department of Diagnostic and Interventional Radiology, Klinikum rechts der Isar, TUM University Hospital, Technical University of Munich, Munich, Germany
- Keno K Bressem
  School of Medicine and Health, Department of Diagnostic and Interventional Radiology, Klinikum rechts der Isar, TUM University Hospital, Technical University of Munich, Munich, Germany
  School of Medicine and Health, Institute for Cardiovascular Radiology and Nuclear Medicine, German Heart Center Munich, TUM University Hospital, Technical University of Munich, Munich, Germany
4. Gozum IEA, Flake CCD. Human Dignity and Artificial Intelligence in Healthcare: A Basis for a Catholic Ethics on AI. Journal of Religion and Health. 2024. PMID: 39730882. DOI: 10.1007/s10943-024-02206-1.
Abstract
The rise of artificial intelligence (AI) has caught the attention of the world as it challenges the status quo of human operations. As AI has dramatically impacted education, healthcare, industry, and economics, this article presents a Catholic ethical study of human dignity in the context of AI in healthcare. Debates about whether AI will uplift or undermine the dignity of humankind have been occasioned by the increasing role of technology in patient care and medical decision-making. The paper draws on Catholic ethics, especially the concepts of inherent human dignity, the sanctity of human life, and morality in the medical field. It discusses how AI can be used to improve healthcare outcomes without losing the essential humanity of medical practice, and it addresses the most likely ethical issues: the morality of AI-related decisions and the depersonalization of healthcare. Finally, it provides a framework that aligns AI development with a Catholic vision of human dignity and supports a healthcare system that serves the common good while respecting the irreplaceable value of the human person and emphasizing moral responsibility.
Affiliation(s)
- Ivan Efreaim A Gozum
  Institute of Religion, University of Santo Tomas, 1008, Sampaloc, Manila, Philippines
  The Graduate School, University of Santo Tomas, 1008, Sampaloc, Manila, Philippines
5. Reis M, Reis F, Kunde W. Influence of believed AI involvement on the perception of digital medical advice. Nat Med. 2024;30:3098-3100. PMID: 39054373. PMCID: PMC11564086. DOI: 10.1038/s41591-024-03180-7.
Abstract
Large language models offer novel opportunities to seek digital medical advice. While previous research primarily addressed the performance of such artificial intelligence (AI)-based tools, public perception of these advancements received little attention. In two preregistered studies (n = 2,280), we presented participants with scenarios of patients obtaining medical advice. All participants received identical information, but we manipulated the putative source of this advice ('AI', 'human physician', 'human + AI'). 'AI'- and 'human + AI'-labeled advice was evaluated as significantly less reliable and less empathetic compared with 'human'-labeled advice. Moreover, participants indicated lower willingness to follow the advice when AI was believed to be involved in advice generation. Our findings point toward an anti-AI bias when receiving digital medical advice, even when AI is supposedly supervised by physicians. Given the tremendous potential of AI for medicine, elucidating ways to counteract this bias should be an important objective of future research.
Affiliation(s)
- Moritz Reis
  Institute of Psychology, Julius-Maximilians-Universität Würzburg, Würzburg, Germany
  Judge Business School, University of Cambridge, Cambridge, UK
- Florian Reis
  Medical Affairs, Pfizer Pharma GmbH, Berlin, Germany
- Wilfried Kunde
  Institute of Psychology, Julius-Maximilians-Universität Würzburg, Würzburg, Germany
6. Jiang P, Niu W, Wang Q, Yuan R, Chen K. Understanding Users' Acceptance of Artificial Intelligence Applications: A Literature Review. Behav Sci (Basel). 2024;14:671. PMID: 39199067. PMCID: PMC11351494. DOI: 10.3390/bs14080671.
Abstract
In recent years, with the continuous expansion of artificial intelligence (AI) application forms and fields, users' acceptance of AI applications has attracted increasing attention from scholars and business practitioners. Although extant studies have extensively explored user acceptance of different AI applications, the roles played by different AI applications in human-AI interaction remain poorly understood, which may limit our ability to interpret inconsistent findings about user acceptance of AI. This study addresses this issue through a systematic literature review of AI acceptance research in leading journals of the Information Systems and Marketing disciplines from 2020 to 2023. Based on a review of 80 papers, this study contributes by (i) providing an overview of the methodologies and theoretical frameworks utilized in AI acceptance research; (ii) summarizing the key factors, potential mechanisms, and theorization of users' acceptance responses to AI service providers and AI task substitutes, respectively; and (iii) identifying the limitations of extant research and providing guidance for future research.
Affiliation(s)
- Pengtao Jiang
  School of Information Science and Engineering, NingboTech University, Ningbo 315100, China
  Nottingham University Business School China, University of Nottingham Ningbo China, Ningbo 315100, China
- Wanshu Niu
  Business School, Ningbo University, Ningbo 315211, China
- Qiaoli Wang
  School of Management, Zhejiang University, Hangzhou 310058, China
- Ruizhi Yuan
  Nottingham University Business School China, University of Nottingham Ningbo China, Ningbo 315100, China
- Keyu Chen
  Business School, Ningbo University, Ningbo 315211, China
7. Movahed M, Bilderback S. Evaluating the readiness of healthcare administration students to utilize AI for sustainable leadership: a survey study. J Health Organ Manag. 2024;ahead-of-print. PMID: 38858220. DOI: 10.1108/jhom-12-2023-0385.
Abstract
PURPOSE This paper explores how healthcare administration students perceive the integration of Artificial Intelligence (AI) in healthcare leadership, mainly focusing on the sustainability aspects involved. It aims to identify gaps in current educational curricula and suggests enhancements to better prepare future healthcare professionals for the evolving demands of AI-driven healthcare environments. DESIGN/METHODOLOGY/APPROACH This study utilized a cross-sectional survey design to understand healthcare administration students' perceptions regarding integrating AI in healthcare leadership. An online questionnaire, developed from an extensive literature review covering fundamental AI knowledge and its role in sustainable leadership, was distributed to students majoring and minoring in healthcare administration. This methodological approach garnered participation from 62 students, providing insights and perspectives crucial for the study's objectives. FINDINGS The research revealed that while a significant majority of healthcare administration students (70%) recognize the potential of AI in fostering sustainable leadership in healthcare, only 30% feel adequately prepared to work in AI-integrated environments. Additionally, students were interested in learning more about AI applications in healthcare and the role of AI in sustainable leadership, underscoring the need for comprehensive AI-focused education in their curriculum. RESEARCH LIMITATIONS/IMPLICATIONS The research is limited by its focus on a single academic institution, which may not fully represent the diversity of perspectives in healthcare administration. PRACTICAL IMPLICATIONS This study highlights the need for healthcare administration curricula to incorporate AI education, aligning theoretical knowledge with practical applications, to effectively prepare future professionals for the evolving demands of AI-integrated healthcare environments. ORIGINALITY/VALUE This research paper presents insights into healthcare administration students' readiness and perspectives toward AI integration in healthcare leadership, filling a critical gap in understanding the educational needs in the evolving landscape of AI-driven healthcare.
Affiliation(s)
- Mohammad Movahed
  Department of Economics, Finance, and Healthcare Administration, Valdosta State University, Valdosta, Georgia, USA
8. Kerstan S, Bienefeld N, Grote G. Choosing human over AI doctors? How comparative trust associations and knowledge relate to risk and benefit perceptions of AI in healthcare. Risk Analysis. 2024;44:939-957. PMID: 37722964. DOI: 10.1111/risa.14216.
Abstract
The development of artificial intelligence (AI) in healthcare is accelerating rapidly. Beyond the urge for technological optimization, public perceptions and preferences regarding the application of such technologies remain poorly understood. Risk and benefit perceptions of novel technologies are key drivers of successful implementation, so it is crucial to understand the factors that condition these perceptions. In this study, we draw on the risk perception and human-AI interaction literature to examine how explicit (i.e., deliberate) and implicit (i.e., automatic) comparative trust associations with AI versus physicians, and knowledge about AI, relate to likelihood perceptions of risks and benefits of AI in healthcare and to preferences for the integration of AI in healthcare. We use survey data (N = 378) to specify a path model. Results reveal that the path from implicit comparative trust associations to relative preferences for AI over physicians is significant only through risk perceptions, not benefit perceptions; this finding is reversed for AI knowledge. Explicit comparative trust associations relate to AI preference through both risk and benefit perceptions. These findings indicate that risk perceptions of AI in healthcare might be driven more strongly by affect-laden factors than benefit perceptions, which in turn might depend more on reflective cognition. Implications of our findings and directions for future research are discussed in light of the conceptualization of trust as a heuristic and dual-process theories of judgment and decision-making. Regarding the design and implementation of AI-based healthcare technologies, our findings suggest that a holistic integration of public viewpoints is warranted.
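The path model reported here decomposes effects into direct and indirect (mediated) components. The sketch below shows a product-of-coefficients computation for one such path on simulated data; the variable names, relationships, and effect sizes are illustrative assumptions, not estimates from the study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 378  # matches the survey sample size reported in the abstract

# Simulated path: implicit comparative trust -> risk perception -> AI preference
df = pd.DataFrame({"implicit_trust": rng.normal(size=n)})
df["risk_perception"] = -0.4 * df["implicit_trust"] + rng.normal(size=n)
df["ai_preference"] = (
    -0.5 * df["risk_perception"] + 0.1 * df["implicit_trust"] + rng.normal(size=n)
)

# Path a: predictor -> mediator
a = smf.ols("risk_perception ~ implicit_trust", df).fit().params["implicit_trust"]
# Paths b and c': mediator and predictor -> outcome
out = smf.ols("ai_preference ~ risk_perception + implicit_trust", df).fit()
b, c_prime = out.params["risk_perception"], out.params["implicit_trust"]

print(f"indirect effect via risk perception (a*b): {a * b:.3f}")
print(f"direct effect (c'): {c_prime:.3f}")
```

A dedicated SEM package would additionally give standard errors for the indirect effect (e.g., via bootstrapping), which this two-regression sketch omits.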
Affiliation(s)
- Sophie Kerstan
  Department of Management, Technology, and Economics, ETH Zurich, Zurich, Switzerland
- Nadine Bienefeld
  Department of Management, Technology, and Economics, ETH Zurich, Zurich, Switzerland
- Gudela Grote
  Department of Management, Technology, and Economics, ETH Zurich, Zurich, Switzerland
9. Rony MKK, Parvin MR, Wahiduzzaman M, Debnath M, Bala SD, Kayesh I. "I Wonder if my Years of Training and Expertise Will be Devalued by Machines": Concerns About the Replacement of Medical Professionals by Artificial Intelligence. SAGE Open Nurs. 2024;10:23779608241245220. PMID: 38596508. PMCID: PMC11003342. DOI: 10.1177/23779608241245220.
Abstract
Background The rapid integration of artificial intelligence (AI) into healthcare has raised concerns among healthcare professionals about the potential displacement of human medical professionals by AI technologies. However, healthcare workers' apprehensions and perspectives regarding their potential replacement by AI remain largely unexplored. Objective This qualitative study investigated healthcare workers' concerns about artificial intelligence replacing medical professionals. Methods A descriptive and exploratory research design was employed, drawing upon the Technology Acceptance Model (TAM), Technology Threat Avoidance Theory, and Sociotechnical Systems Theory as theoretical frameworks. Participants were purposively sampled from various healthcare settings, representing a diverse range of roles and backgrounds. Data were collected through individual interviews and focus group discussions, followed by thematic analysis. Results The analysis revealed seven key themes reflecting healthcare workers' concerns: job security and economic concerns; trust and acceptance of AI; ethical and moral dilemmas; quality of patient care; workforce role redefinition and training; patient-provider relationships; and healthcare policy and regulation. Conclusions This research underscores the multifaceted concerns of healthcare workers regarding the increasing role of AI in healthcare. Addressing job security, fostering trust, resolving ethical dilemmas, and redefining workforce roles are crucial to the successful integration of AI into healthcare. Healthcare policy and regulation must be developed to guide this transformation while maintaining the quality of patient care and preserving patient-provider relationships. The findings offer insights for policymakers and healthcare institutions navigating the evolving landscape of AI in healthcare while addressing the concerns of healthcare professionals.
Affiliation(s)
- Moustaq Karim Khan Rony
  Master of Public Health, Bangladesh Open University, Gazipur, Bangladesh
  Institute of Social Welfare and Research, University of Dhaka, Dhaka, Bangladesh
- Mst. Rina Parvin
  Armed Forces Nursing Service, Major at Bangladesh Army (AFNS Officer), Combined Military Hospital, Dhaka, Bangladesh
- Md. Wahiduzzaman
  School of Medical Sciences, Shahjalal University of Science and Technology, Sylhet, Bangladesh
- Mitun Debnath
  Master of Public Health, National Institute of Preventive and Social Medicine, Dhaka, Bangladesh
- Shuvashish Das Bala
  College of Nursing, International University of Business Agriculture and Technology, Dhaka, Bangladesh
- Ibne Kayesh
  Institute of Social Welfare and Research, University of Dhaka, Dhaka, Bangladesh
  Faculty of Graduate Studies, University of Kelaniya, Colombo, Sri Lanka
10. Drezga-Kleiminger M, Demaree-Cotton J, Koplin J, Savulescu J, Wilkinson D. Should AI allocate livers for transplant? Public attitudes and ethical considerations. BMC Med Ethics. 2023;24:102. PMID: 38012660. PMCID: PMC10683249. DOI: 10.1186/s12910-023-00983-0.
Abstract
BACKGROUND Allocation of scarce organs for transplantation is ethically challenging. Artificial intelligence (AI) has been proposed to assist in liver allocation; however, the ethics of this remains unexplored and the views of the public unknown. The aim of this paper was to assess public attitudes on whether AI should be used in liver allocation and how it should be implemented. METHODS We first introduce some potential ethical issues concerning AI in liver allocation, before analysing a pilot survey of 172 UK laypeople recruited online through Prolific Academic. FINDINGS Most participants found AI in liver allocation acceptable (69.2%) and would not be less likely to donate their organs if AI was used in allocation (72.7%). Respondents thought AI was more likely to be consistent and less biased compared to humans, although they were concerned about the "dehumanisation of healthcare" and whether AI could consider important nuances in allocation decisions. Participants valued accuracy, impartiality, and consistency in a decision-maker more than interpretability and empathy. Respondents were split on whether AI should be trained on previous decisions or programmed with specific objectives. Whether allocation decisions were made by a transplant committee or AI, participants valued consideration of urgency, survival likelihood, life years gained, age, future medication compliance, quality of life, future alcohol use, and past alcohol use. On the other hand, the majority thought the following factors were not relevant to prioritisation: past crime, future crime, future societal contribution, social disadvantage, and gender. CONCLUSIONS There are good reasons to use AI in liver allocation, and our sample of participants appeared to support its use. If confirmed, this support would give democratic legitimacy to the use of AI in this context and reduce the risk that donation rates could be affected negatively. Our findings on specific ethical concerns also identify potential expectations and reservations laypeople have regarding AI in this area, which can inform how AI in liver allocation could be best implemented.
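Proportions from a pilot sample of this size (n = 172) carry non-trivial sampling error, which a Wilson score interval captures better than the naive normal approximation. A minimal sketch, applied to the reported 69.2% acceptability figure under the assumption that it is a simple proportion of the full sample:

```python
from math import sqrt

def wilson_ci(p_hat, n, z=1.96):
    """95% Wilson score interval for a sample proportion."""
    centre = (p_hat + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    return centre - half, centre + half

lo, hi = wilson_ci(0.692, 172)
print(f"69.2% of 172: 95% CI [{lo:.3f}, {hi:.3f}]")
```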
Affiliation(s)
- Max Drezga-Kleiminger
  Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Australia
  Oxford Uehiro Centre for Practical Ethics, Faculty of Philosophy, University of Oxford, Oxford, OX1 2JD, UK
- Joanna Demaree-Cotton
  Oxford Uehiro Centre for Practical Ethics, Faculty of Philosophy, University of Oxford, Oxford, OX1 2JD, UK
- Julian Koplin
  Monash Bioethics Centre, Monash University, Melbourne, Australia
- Julian Savulescu
  Oxford Uehiro Centre for Practical Ethics, Faculty of Philosophy, University of Oxford, Oxford, OX1 2JD, UK
  Murdoch Children's Research Institute, Melbourne, Australia
  Centre for Biomedical Ethics, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Dominic Wilkinson
  Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Australia
  Oxford Uehiro Centre for Practical Ethics, Faculty of Philosophy, University of Oxford, Oxford, OX1 2JD, UK
  Murdoch Children's Research Institute, Melbourne, Australia
  Centre for Biomedical Ethics, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
  John Radcliffe Hospital, Oxford, UK
11. Rojahn J, Palu A, Skiena S, Jones JJ. American public opinion on artificial intelligence in healthcare. PLoS One. 2023;18:e0294028. PMID: 37943752. PMCID: PMC10635466. DOI: 10.1371/journal.pone.0294028.
Abstract
Billions of dollars are being invested into developing medical artificial intelligence (AI) systems, and yet public opinion of AI in the medical field appears mixed. Although the American public holds high expectations for the future of medical AI, anxiety and uncertainty about what it can do and how it works are widespread. Continuing evaluation of public opinion on AI in healthcare is necessary to ensure alignment between patient attitudes and the technologies adopted. We conducted a representative-sample survey (total N = 203) to measure the American public's trust in medical AI. Primarily, we contrasted preferences for AI versus human professionals as medical decision-makers. Additionally, we measured expectations for the impact and use of medical AI in the future. We present four noteworthy results: (1) The general public strongly prefers that human medical professionals make medical decisions, while at the same time believing they are more likely than AI to make culturally biased decisions. (2) The general public is more comfortable with a human reading their medical records than an AI, both now and "100 years from now." (3) The general public is nearly evenly split between those who would trust their own doctor to use AI and those who would not. (4) Respondents expect AI will improve medical treatment, but more so in the distant future than immediately.
Affiliation(s)
- Jessica Rojahn
  Department of Sociology, Stony Brook University, Stony Brook, New York, United States of America
- Andrea Palu
  Department of Sociology, Stony Brook University, Stony Brook, New York, United States of America
- Steven Skiena
  Department of Computer Science, Stony Brook University, Stony Brook, New York, United States of America
- Jason J. Jones
  Department of Sociology, Stony Brook University, Stony Brook, New York, United States of America
  Institute for Advanced Computational Science, Stony Brook University, Stony Brook, New York, United States of America
12. Alanzi T, Alanazi F, Mashhour B, Altalhi R, Alghamdi A, Al Shubbar M, Alamro S, Alshammari M, Almusmili L, Alanazi L, Alzahrani S, Alalouni R, Alanzi N, Alsharifa A. Surveying Hematologists' Perceptions and Readiness to Embrace Artificial Intelligence in Diagnosis and Treatment Decision-Making. Cureus. 2023;15:e49462. PMID: 38152821. PMCID: PMC10751460. DOI: 10.7759/cureus.49462.
Abstract
AIM This study explores the perceptions and readiness of hematologists to embrace artificial intelligence (AI) technologies in their diagnostic and treatment decision-making processes. METHODS This cross-sectional study collected data on hematologists' perceptions and readiness using a validated online questionnaire-based survey. Both hematologists (MD) and postgraduate MD students in hematology were included. A total of 188 participants, comprising 35 hematologists (MD) and 153 MD hematology students, completed the survey. RESULTS Major challenges include "AI's level of autonomy" and "the complexity in the field of medicine." Major barriers and risks identified include "lack of trust," "management's level of understanding," "dehumanization of healthcare," and "reduction in physicians' skills." Statistically significant gender differences were observed in perceptions of benefits relating to resources (p=0.0326) and knowledge (p=0.0262). Older physicians were more concerned about the use of AI than younger physicians (p<0.05). CONCLUSION While AI use in hematology diagnosis and treatment decision-making is positively perceived, a lack of trust, transparency, and regulation, as well as poor AI awareness, can affect the adoption of AI.
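Gender comparisons of perception scores like those above are often tested nonparametrically, since Likert-style responses are ordinal. A brief Mann-Whitney U sketch on simulated scores; the group sizes and distributions are invented for illustration and do not reproduce the study's data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(7)

# Hypothetical 1-5 Likert scores for perceived benefits, split by gender
group_a = rng.integers(1, 6, size=90)   # e.g., male respondents
group_b = rng.integers(2, 6, size=98)   # e.g., female respondents

stat, p = mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")
```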
Affiliation(s)
- Turki Alanzi
  Department of Health Information Management and Technology, College of Public Health, Imam Abdulrahman Bin Faisal University, Dammam, SAU
- Fehaid Alanazi
  Department of Clinical Laboratory Sciences, College of Applied Medical Sciences, Jouf University, Sakakah, SAU
- Saud Alamro
  College of Medicine, Imam Abdulrahman Bin Faisal University, Dammam, SAU
- Lena Alanazi
  Department of Clinical Laboratory Sciences, College of Applied Medical Sciences, Jouf University, Sakakah, SAU
- Raneem Alalouni
  College of Public Health, Imam Abdulrahman Bin Faisal University, Dammam, SAU
- Nouf Alanzi
  Department of Clinical Laboratory Sciences, College of Applied Medical Sciences, Jouf University, Sakakah, SAU
13. Viana JN, Pilbeam C, Howard M, Scholz B, Ge Z, Fisser C, Mitchell I, Raman S, Leach J. Maintaining High-Touch in High-Tech Digital Health Monitoring and Multi-Omics Prognostication: Ethical, Equity, and Societal Considerations in Precision Health for Palliative Care. OMICS. 2023;27:461-473. DOI: 10.1089/omi.2023.0120.
Abstract
Advances in digital health, systems biology, environmental monitoring, and artificial intelligence (AI) continue to revolutionize health care, ushering a precision health future. More than disease treatment and prevention, precision health aims at maintaining good health throughout the lifespan. However, how can precision health impact care for people with a terminal or life-limiting condition? We examine here the ethical, equity, and societal/relational implications of two precision health modalities, (1) integrated systems biology/multi-omics analysis for disease prognostication and (2) digital health technologies for health status monitoring and communication. We focus on three main ethical and societal considerations: benefits and risks associated with integration of these modalities into the palliative care system; inclusion of underrepresented and marginalized groups in technology development and deployment; and the impact of high-tech modalities on palliative care's highly personalized and "high-touch" practice. We conclude with 10 recommendations for ensuring that precision health technologies, such as multi-omics prognostication and digital health monitoring, for palliative care are developed, tested, and implemented ethically, inclusively, and equitably.
Affiliation(s)
- John Noel Viana
  Australian National Centre for the Public Awareness of Science, College of Science, The Australian National University, Canberra, Australia
  Responsible Innovation Future Science Platform, Commonwealth Scientific and Industrial Research Organisation, Brisbane, Australia
- Caitlin Pilbeam
  School of Medicine and Psychology, College of Health and Medicine, The Australian National University, Canberra, Australia
- Mark Howard
  Monash Data Futures Institute, Monash University, Clayton, Australia
  Department of Philosophy, School of Philosophical, Historical and International Studies, Monash University, Clayton, Australia
- Brett Scholz
  School of Medicine and Psychology, College of Health and Medicine, The Australian National University, Canberra, Australia
- Zongyuan Ge
  Monash Data Futures Institute, Monash University, Clayton, Australia
  Department of Data Science & AI, Monash University, Clayton, Australia
- Carys Fisser
  Australian National Centre for the Public Awareness of Science, College of Science, The Australian National University, Canberra, Australia
  School of Medicine and Psychology, College of Health and Medicine, The Australian National University, Canberra, Australia
- Imogen Mitchell
  School of Medicine and Psychology, College of Health and Medicine, The Australian National University, Canberra, Australia
  Intensive Care Unit, Canberra Hospital, Canberra, Australia
- Sujatha Raman
  Australian National Centre for the Public Awareness of Science, College of Science, The Australian National University, Canberra, Australia
- Joan Leach
  Australian National Centre for the Public Awareness of Science, College of Science, The Australian National University, Canberra, Australia
14. McDonnell KJ. Leveraging the Academic Artificial Intelligence Silecosystem to Advance the Community Oncology Enterprise. J Clin Med. 2023;12:4830. PMID: 37510945. PMCID: PMC10381436. DOI: 10.3390/jcm12144830.
Abstract
Over the last 75 years, artificial intelligence has evolved from a theoretical concept and novel paradigm describing the role that computers might play in our society to a tool with which we engage daily. In this review, we describe AI in terms of its constituent elements, the synthesis of which we refer to as the AI Silecosystem. Herein, we provide a historical perspective on the evolution of the AI Silecosystem, conceptualized and summarized as a Kuhnian paradigm. This manuscript focuses on the role that the AI Silecosystem plays in oncology and its emerging importance in the care of the community oncology patient. We observe that this important role arises out of a unique alliance between the academic oncology enterprise and community oncology practices. We provide evidence of this alliance by illustrating the practical establishment of the AI Silecosystem at the City of Hope Comprehensive Cancer Center and its utilization by community oncology provider teams.
Affiliation(s)
- Kevin J McDonnell
  Center for Precision Medicine, Department of Medical Oncology & Therapeutics Research, City of Hope Comprehensive Cancer Center, Duarte, CA 91010, USA
15. Zahid A, Sharma R. Personalized Health Care in a Data-Driven Era: A Post-COVID-19 Retrospective. Mayo Clinic Proceedings: Digital Health. 2023;1:162-171. PMID: 38013945. PMCID: PMC10178356. DOI: 10.1016/j.mcpdig.2023.04.002.
Affiliation(s)
- Arnob Zahid
  Waikato Management School, University of Waikato, Hamilton, New Zealand
- Ravishankar Sharma
  College of Technological Innovation, Zayed University, Abu Dhabi, United Arab Emirates
16. Aquino YSJ, Rogers WA, Braunack-Mayer A, Frazer H, Win KT, Houssami N, Degeling C, Semsarian C, Carter SM. Utopia versus dystopia: Professional perspectives on the impact of healthcare artificial intelligence on clinical roles and skills. Int J Med Inform. 2023;169:104903. DOI: 10.1016/j.ijmedinf.2022.104903.
Abstract
BACKGROUND Alongside the promise of improving clinical work, advances in healthcare artificial intelligence (AI) raise concerns about the risk of deskilling clinicians. The purpose of this study is to examine the issue of deskilling from the perspective of a diverse group of professional stakeholders with knowledge and/or experience in the development, deployment, and regulation of healthcare AI. METHODS We conducted qualitative, semi-structured interviews with 72 professionals with AI expertise and/or professional or clinical expertise who were involved in the development, deployment, and/or regulation of healthcare AI. Data analysis, combining a constructivist grounded theory and framework approach, was performed concurrently with data collection. FINDINGS Our analysis showed participants held diverse views on three contentious issues regarding AI and deskilling. The first involved competing views about the proper extent of AI-enabled automation in healthcare work, and which clinical tasks should or should not be automated; we identified a cluster of characteristics of tasks considered more suitable for automation. The second involved expectations about the impact of AI on clinical skills, and whether AI-enabled automation would lead to worse or better quality of healthcare. The third implicitly contrasted two models of healthcare work, a human-centric model and a technology-centric model, which assume different values and priorities for healthcare work and its relationship to AI-enabled automation. CONCLUSION Our study shows that a diverse group of professional stakeholders involved in healthcare AI development, acquisition, deployment, and regulation are attentive to the potential impact of healthcare AI on clinical skills, but hold different views about the nature and valence (positive or negative) of this impact. Detailed engagement with different types of professional stakeholders allowed us to identify relevant concepts and values that could guide decisions about AI algorithm development and deployment.
Affiliation(s)
- Yves Saint James Aquino
  Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, University of Wollongong, NSW, Australia
- Wendy A Rogers
  Department of Philosophy and School of Medicine, Macquarie University, NSW, Australia
- Annette Braunack-Mayer
  Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, University of Wollongong, NSW, Australia
- Helen Frazer
  St Vincent's Hospital, Melbourne, VIC, Australia
- Khin Than Win
  Centre for Persuasive Technology and Society, School of Computing and Information Technology, University of Wollongong, NSW, Australia
- Nehmat Houssami
  School of Public Health, Faculty of Medicine and Health, University of Sydney, NSW, Australia
  The Daffodil Centre, The University of Sydney, Joint Venture with Cancer Council NSW, Australia
- Christopher Degeling
  Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, University of Wollongong, NSW, Australia
- Christopher Semsarian
  Agnes Ginges Centre for Molecular Cardiology at Centenary Institute, The University of Sydney, Australia
  Faculty of Medicine and Health, The University of Sydney, Australia
- Stacy M Carter
  Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, University of Wollongong, NSW, Australia