1
Evans RP, Bryant LD, Russell G, Absolom K. Trust and acceptability of data-driven clinical recommendations in everyday practice: A scoping review. Int J Med Inform 2024; 183:105342. [PMID: 38266426] [DOI: 10.1016/j.ijmedinf.2024.105342]
Abstract
BACKGROUND Increasing attention is being given to the analysis of large health datasets to derive new clinical decision support systems (CDSS). However, few data-driven CDSS are being adopted into clinical practice. Trust in these tools is believed to be fundamental for acceptance and uptake, but to date, little attention has been given to defining or evaluating trust in clinical settings. OBJECTIVES A scoping review was conducted to explore how and where the acceptability and trustworthiness of data-driven CDSS have been assessed from the health professional's perspective. METHODS Medline, Embase, PsycInfo, Web of Science, Scopus, ACM Digital Library, IEEE Xplore and Google Scholar were searched in March 2022 using terms expanded from: "data-driven" AND "clinical decision support" AND "acceptability". Included studies focused on healthcare practitioner-facing, data-driven CDSS relating directly to clinical care, and included trust, or a proxy for it, as an outcome or in the discussion. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) guidelines were followed in reporting this review. RESULTS 3291 papers were screened, and 85 primary research studies were eligible for inclusion. Studies covered a diverse range of clinical specialisms and intended contexts, but hypothetical systems (24) outnumbered those in clinical use (18). Twenty-five studies measured trust, via a wide variety of quantitative, qualitative and mixed methods. A further 24 discussed themes of trust without evaluating it explicitly; from these, transparency, explainability, and supporting evidence were identified as factors influencing healthcare practitioner trust in data-driven CDSS. CONCLUSION There is a growing body of research on data-driven CDSS, but few studies have explored stakeholder perceptions in depth, and focused research on trustworthiness remains limited.
Further research on healthcare practitioner acceptance, including requirements for transparency and explainability, should inform clinical implementation.
Affiliation(s)
- Ruth P Evans
- University of Leeds, Woodhouse Lane, Leeds LS2 9JT, UK.
- Gregor Russell
- Bradford District Care Trust, Bradford, New Mill, Victoria Rd, BD18 3LD, UK.
- Kate Absolom
- University of Leeds, Woodhouse Lane, Leeds LS2 9JT, UK.
2
Schnoor K, Versluis A, Chavannes NH, Talboom-Kamp EPWA. Digital Triage Tools for Sexually Transmitted Infection Testing Compared With General Practitioners' Advice: Vignette-Based Qualitative Study With Interviews Among General Practitioners. JMIR Hum Factors 2024; 11:e49221. [PMID: 38252474] [PMCID: PMC10845018] [DOI: 10.2196/49221]
Abstract
BACKGROUND Digital triage tools for sexually transmitted infection (STI) testing can potentially be used as a substitute for the triage that general practitioners (GPs) perform, to reduce their workload. The studied tool is based on medical guidelines. The same guidelines support GPs' decision-making process. However, research has shown that GPs make decisions from a holistic perspective and, therefore, do not always adhere to those guidelines. To build a high-quality digital triage tool that results in an efficient care process, it is important to learn more about GPs' decision-making process. OBJECTIVE The first objective was to identify whether the advice of the studied digital triage tool aligned with GPs' daily medical practice. The second objective was to learn which factors influence GPs' decisions regarding referral for diagnostic testing. In addition, this study provides insights into GPs' decision-making process. METHODS A qualitative vignette-based study using semistructured interviews was conducted. In total, 6 vignettes representing patient cases were discussed with the participants (GPs). Participants were asked to think aloud about whether they would advise an STI test for the patient and why. A thematic analysis was conducted on the transcripts of the interviews. The vignette patient cases were also passed through the digital triage tool, resulting in advice either to test or not to test for an STI. A comparison was made between the advice of the tool and that of the participants. RESULTS In total, 10 interviews were conducted. Participants (GPs) had a mean age of 48.30 (SD 11.88) years. For 3 vignettes, the advice of the digital triage tool and of all participants was the same. In those vignettes, the patients' risk factors were sufficiently clear for the participants to advise the same as the digital tool. For 3 vignettes, the advice of the digital tool differed from that of the participants.
Patient-related factors that influenced the participants' decision-making process were the patient's anxiety, young age, and willingness to be tested. Participants would test at a lower threshold than the triage tool because of those factors. Sometimes, participants wanted more information than was provided in the vignette or wished to conduct a physical examination. These elements were not part of the digital triage tool. CONCLUSIONS The advice to conduct a diagnostic STI test differed between a digital triage tool and GPs. The digital triage tool considered only medical guidelines, whereas GPs reasoned from a holistic perspective and were open to discussion. The GPs' decision-making process was influenced by patients' anxiety, willingness to be tested, and age. On the basis of these results, we believe that the digital triage tool for STI testing could support GPs and even replace consultations in the future. Further research must substantiate how this can be done safely.
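The guideline-only logic this abstract contrasts with GP judgment can be sketched as a simple rule chain. The fields, thresholds, and rules below are illustrative assumptions, not the studied tool's actual criteria:

```python
from dataclasses import dataclass

@dataclass
class TriageCase:
    """Simplified patient vignette for STI triage (hypothetical fields)."""
    has_symptoms: bool
    partner_notified_of_sti: bool
    unprotected_contact: bool
    partner_count_6mo: int

def advise_sti_test(case: TriageCase) -> bool:
    """Guideline-style rules: advise a test when any risk criterion is met.

    Unlike a GP, the rules cannot weigh anxiety, age, or a patient's
    wish to be tested, which is where the study found divergence.
    """
    if case.has_symptoms:
        return True
    if case.partner_notified_of_sti:
        return True
    if case.unprotected_contact and case.partner_count_6mo >= 3:
        return True
    return False

# Example: asymptomatic patient whose partner tested positive
print(advise_sti_test(TriageCase(False, True, False, 1)))  # True
```

A GP-style decision would add further, softer inputs (anxiety, age, willingness to test) that lower the testing threshold, which is exactly what this kind of rule chain omits.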
Affiliation(s)
- Kyma Schnoor
- Public Health and Primary Care, Leiden University Medical Center, Leiden, Netherlands
- National eHealth Living Lab, Leiden University Medical Center, Leiden, Netherlands
- Anke Versluis
- Public Health and Primary Care, Leiden University Medical Center, Leiden, Netherlands
- National eHealth Living Lab, Leiden University Medical Center, Leiden, Netherlands
- Niels H Chavannes
- Public Health and Primary Care, Leiden University Medical Center, Leiden, Netherlands
- National eHealth Living Lab, Leiden University Medical Center, Leiden, Netherlands
- Esther P W A Talboom-Kamp
- Public Health and Primary Care, Leiden University Medical Center, Leiden, Netherlands
- National eHealth Living Lab, Leiden University Medical Center, Leiden, Netherlands
- Zuyderland, Sittard-Geleen, Netherlands
3
Nagendran M, Festor P, Komorowski M, Gordon AC, Faisal AA. Quantifying the impact of AI recommendations with explanations on prescription decision making. NPJ Digit Med 2023; 6:206. [PMID: 37935953] [PMCID: PMC10630476] [DOI: 10.1038/s41746-023-00955-z]
Abstract
The influence of AI recommendations on physician behaviour remains poorly characterised. We assess how clinicians' decisions may be influenced by additional information more broadly, and how this influence can be modified by the source of the information (human peers or AI) and by the presence or absence of an AI explanation (XAI, here using simple feature importance). We used a modified between-subjects design in which intensive care doctors (N = 86) were presented, on a computer, with a patient case in each of 16 trials and prompted to prescribe continuous values for two drugs. We used a multi-factorial experimental design with four arms, where each clinician experienced all four arms on different subsets of our 24 patients. The four arms were (i) baseline (control), (ii) a peer human clinician scenario showing what doses other doctors had prescribed, (iii) an AI suggestion and (iv) an XAI suggestion. We found that additional information (peer, AI or XAI) had a strong influence on prescriptions (significant for AI, but not for peers), but simple XAI did not have a higher influence than AI alone. Neither attitudes to AI nor clinical experience correlated with the AI-supported decisions, and doctors' self-reports of how useful they found the XAI did not correlate with whether the XAI actually influenced their prescriptions. Our findings suggest that the marginal impact of simple XAI was low in this setting, and they cast doubt on the utility of self-reports as a valid metric for assessing XAI in clinical experts.
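The "simple feature importance" mentioned for the XAI arm can be illustrated with a toy linear dose model: each feature's share of importance is its absolute contribution to the predicted dose, normalised to sum to 1. The weights, feature names, and normalisation here are assumptions for illustration, not the study's actual model:

```python
# Hypothetical linear dose model: dose = sum(w_i * x_i) + b.
# "Simple feature importance" here = each feature's absolute
# contribution |w_i * x_i|, normalised across features.

def feature_importance(weights, features):
    """Return each feature's normalised share of the prediction."""
    contribs = {name: abs(w * features[name]) for name, w in weights.items()}
    total = sum(contribs.values()) or 1.0
    return {name: c / total for name, c in contribs.items()}

# Illustrative weights and patient values (assumed, not from the paper)
weights = {"mean_bp": -0.8, "lactate": 1.5, "urine_output": -0.4}
patient = {"mean_bp": 65.0, "lactate": 3.2, "urine_output": 20.0}

ranked = sorted(feature_importance(weights, patient).items(),
                key=lambda kv: kv[1], reverse=True)
for name, share in ranked:
    print(f"{name}: {share:.2f}")
```

Presenting such a ranked list alongside the suggested dose is one minimal form an XAI display can take; the study's finding is that this kind of simple explanation added little influence beyond the bare AI suggestion.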
Affiliation(s)
- Myura Nagendran
- UKRI Centre for Doctoral Training in AI for Healthcare, Imperial College London, London, UK
- Division of Anaesthetics, Pain Medicine, and Intensive Care, Imperial College London, London, UK
- Brain and Behaviour Lab, Imperial College London, London, UK
- Paul Festor
- UKRI Centre for Doctoral Training in AI for Healthcare, Imperial College London, London, UK
- Brain and Behaviour Lab, Imperial College London, London, UK
- Department of Computing, Imperial College London, London, UK
- Matthieu Komorowski
- Division of Anaesthetics, Pain Medicine, and Intensive Care, Imperial College London, London, UK
- Anthony C Gordon
- Division of Anaesthetics, Pain Medicine, and Intensive Care, Imperial College London, London, UK
- Aldo A Faisal
- UKRI Centre for Doctoral Training in AI for Healthcare, Imperial College London, London, UK.
- Brain and Behaviour Lab, Imperial College London, London, UK.
- Department of Computing, Imperial College London, London, UK.
- Institute of Artificial & Human Intelligence, University of Bayreuth, Bayreuth, Germany.
4
Liaw WR, Ramos Silva Y, Soltero EG, Krist A, Stotts AL. An Assessment of How Clinicians and Staff Members Use a Diabetes Artificial Intelligence Prediction Tool: Mixed Methods Study. JMIR AI 2023; 2:e45032. [PMID: 38875578] [DOI: 10.2196/45032]
Abstract
BACKGROUND Nearly one-third of patients with diabetes are poorly controlled (hemoglobin A1c ≥9%). Identifying at-risk individuals and providing them with effective treatment is an important strategy for preventing poor control. OBJECTIVE This study aims to assess how clinicians and staff members would use a clinical decision support tool based on artificial intelligence (AI) and identify factors that affect adoption. METHODS This was a mixed methods study that combined semistructured interviews and surveys to assess the perceived usefulness and ease of use, intent to use, and factors affecting tool adoption. We recruited clinicians and staff members from practices that manage diabetes. During the interviews, participants reviewed a sample electronic health record alert and were informed that the tool uses AI to identify those at high risk for poor control. Participants discussed how they would use the tool, whether it would contribute to care, and the factors affecting its implementation. In a survey, participants reported their demographics; rank-ordered factors influencing the adoption of the tool; and reported their perception of the tool's usefulness as well as their intent to use, ease of use, and organizational support for use. Qualitative data were analyzed using a thematic content analysis approach. We used descriptive statistics to report demographics and analyze the findings of the survey. RESULTS In total, 22 individuals participated in the study. Nearly two-thirds (14/22, 64%) of respondents were physicians. Overall, 36% (8/22) of respondents worked in academic health centers, whereas 27% (6/22) worked in federally qualified health centers.
The interviews identified several themes: this tool has the potential to be useful because it provides information that is not currently available and can make care more efficient and effective; clinicians and staff members were concerned about how the tool affects patient-oriented outcomes and clinical workflows; adoption of the tool is dependent on its validation, transparency, actionability, and design and could be increased with changes to the interface and usability; and implementation would require buy-in and need to be tailored to the demands and resources of clinics and communities. Survey findings supported these themes, as 77% (17/22) of participants somewhat, moderately, or strongly agreed that they would use the tool, whereas these figures were 82% (18/22) for usefulness, 82% (18/22) for ease of use, and 68% (15/22) for clinic support. The 2 highest ranked factors affecting adoption were whether the tool improves health and the accuracy of the tool. CONCLUSIONS Most participants found the tool to be easy to use and useful, although they had concerns about alert fatigue, bias, and transparency. These data will be used to enhance the design of an AI tool.
Affiliation(s)
- Winston R Liaw
- Department of Health Systems and Population Health Sciences, Tilman J Fertitta Family College of Medicine, University of Houston, Houston, TX, United States
- Erica G Soltero
- USDA/ARS Children's Nutrition Research Center, Department of Pediatrics, Baylor College of Medicine, Houston, TX, United States
- Alex Krist
- Department of Family Medicine & Population Health, Virginia Commonwealth University School of Medicine, Richmond, VA, United States
- Angela L Stotts
- Department of Family & Community Medicine, McGovern Medical School, University of Texas Health Science Center at Houston, Houston, TX, United States
5
Terry AL, Kueper JK, Beleno R, Brown JB, Cejic S, Dang J, Leger D, McKay S, Meredith L, Pinto AD, Ryan BL, Stewart M, Zwarenstein M, Lizotte DJ. Is primary health care ready for artificial intelligence? What do primary health care stakeholders say? BMC Med Inform Decis Mak 2022; 22:237. [PMID: 36085203] [PMCID: PMC9461192] [DOI: 10.1186/s12911-022-01984-6]
Abstract
Background
Effective deployment of AI tools in primary health care requires the engagement of practitioners in the development and testing of these tools, and a match between the resulting AI tools and clinical/system needs in primary health care. To set the stage for these developments, we must gain a more in-depth understanding of the views of practitioners and decision-makers about the use of AI in primary health care. The objective of this study was to identify key issues regarding the use of AI tools in primary health care by exploring the views of primary health care and digital health stakeholders.
Methods
This study utilized a descriptive qualitative approach, including thematic data analysis. Fourteen in-depth interviews were conducted with primary health care and digital health stakeholders in Ontario. NVivo software was utilized in the coding of the interviews.
Results
Five main interconnected themes emerged: (1) Mismatch Between Envisioned Uses and Current Reality—denoting the importance of potential applications of AI in primary health care practice, alongside a recognition of the current reality, characterized by a lack of available tools; (2) Mechanics of AI Don't Matter: Just Another Tool in the Toolbox—reflecting an interest in what value AI tools could bring to practice, rather than concern with the mechanics of the AI tools themselves; (3) AI in Practice: A Double-Edged Sword—the possible benefits of AI use in primary health care contrasted with fundamental concern about the possible threats posed by AI in terms of clinical skills and capacity, mistakes, and loss of control; (4) The Non-Starters: A Guarded Stance Regarding AI Adoption in Primary Health Care—broader concerns centred on the ethical, legal, and social implications of AI use in primary health care; and (5) Necessary Elements: Facilitators of AI in Primary Health Care—elements required to support the uptake of AI tools, including co-creation, availability and use of high-quality data, and the need for evaluation.
Conclusion
The use of AI in primary health care may have a positive impact, but many factors need to be considered regarding its implementation. This study may help to inform the development and deployment of AI tools in primary health care.
6
Borsci S, Lehtola VV, Nex F, Yang MY, Augustijn EW, Bagheriye L, Brune C, Kounadi O, Li J, Moreira J, Van Der Nagel J, Veldkamp B, Le DV, Wang M, Wijnhoven F, Wolterink JM, Zurita-Milla R. Embedding artificial intelligence in society: looking beyond the EU AI master plan using the culture cycle. AI & SOCIETY 2022. [DOI: 10.1007/s00146-021-01383-x]
Abstract
The European Union (EU) Commission's whitepaper on Artificial Intelligence (AI) proposes shaping the emerging AI market so that it better reflects common European values. It is a master plan that builds upon the EU AI High-Level Expert Group guidelines. This article reviews the master plan from a culture cycle perspective to reflect on its potential clashes with current societal, technical, and methodological constraints. We identify two main obstacles to the implementation of this plan: (i) the lack of a coherent EU vision to drive future decision-making processes at state and local levels and (ii) the lack of methods to support a sustainable diffusion of AI in our society. The lack of a coherent vision stems from not considering societal differences across the EU member states. We suggest that these differences may lead to a fractured market and an AI crisis in which different members of the EU adopt nation-centric strategies to exploit AI, thus preventing the development of the frictionless market envisaged by the EU. Moreover, the Commission aims to change the AI development culture by proposing a human-centred and safety-first perspective that is not supported by methodological advancements, thus risking unforeseen social and societal impacts of AI. We discuss potential societal, technical, and methodological gaps that should be filled to avoid the risk of developing AI systems at the expense of society. Our analysis results in the recommendation that EU regulators and policymakers consider how to complement the EC programme with rules and compensatory mechanisms to avoid market fragmentation due to local and global ambitions. Moreover, regulators should go beyond the human-centred approach, establishing a research agenda that seeks answers to the open technical and methodological questions regarding the development and assessment of human-AI co-action, aiming for a sustainable diffusion of AI in society.