1
Kostick-Quenet KM, Lang B, Dorfman N, Estep J, Mehra MR, Bhimaraj A, Civitello A, Jorde U, Trachtenberg B, Uriel N, Kaplan H, Gilmore-Szott E, Volk R, Kassi M, Blumenthal-Barby JS. Patients' and physicians' beliefs and attitudes towards integrating personalized risk estimates into patient education about left ventricular assist device therapy. Patient Educ Couns 2024;122:108157. PMID: 38290171. DOI: 10.1016/j.pec.2024.108157.
Abstract
BACKGROUND: Personalized risk (PR) estimates may enhance clinical decision making and risk communication by providing individualized estimates of patient outcomes. We explored stakeholder attitudes toward the utility, acceptability, and best practices for integrating PR estimates into patient education and decision making about left ventricular assist device (LVAD) therapy.
METHODS AND RESULTS: As part of a 5-year multi-institutional AHRQ project, we conducted 40 interviews with stakeholders (physicians, nurse coordinators, patients, and caregivers), analyzed using thematic content analysis. All stakeholder groups voiced positive views towards integrating PR into decision making. Patients, caregivers, and coordinators emphasized that PR can help them better understand a patient's condition and risks, prepare mentally and logistically for likely outcomes, and meaningfully engage in decision making. Physicians felt it can improve their decision making by enhancing insight into outcomes, enabling tailored pre-emptive care, increasing confidence in decisions, and reducing bias and subjectivity. All stakeholder groups also raised concerns about the accuracy, representativeness, and relevance of algorithms; predictive uncertainty; utility relative to physician expertise; potential negative reactions among patients; and overreliance.
CONCLUSION: Stakeholders are optimistic about integrating PR into clinical decision making, but acceptability depends on prospectively demonstrating accuracy and relevance, and on evidence that the benefits of PR outweigh potential negative impacts on decision-making quality.
Affiliation(s)
- Benjamin Lang: Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, TX, USA
- Natalie Dorfman: Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, TX, USA
- Nir Uriel: Columbia University Irving Medical Center, New York, NY, USA
- Holland Kaplan: Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, TX, USA
- Eleanor Gilmore-Szott: Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, TX, USA
- Robert Volk: University of Texas M.D. Anderson Cancer Center, Houston, TX, USA
- J S Blumenthal-Barby: Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, TX, USA
2
Abstract
As the use of artificial intelligence and machine learning (AI/ML) continues to expand in healthcare, much attention has been given to mitigating bias in algorithms to ensure they are employed fairly and transparently. Less attention has been paid to addressing potential bias among AI/ML's human users, or to the factors that influence user reliance. We argue for a systematic approach to identifying the existence and impacts of user biases in the use of AI/ML tools, and call for the development of embedded interface design features, drawing on insights from decision science and behavioral economics, to nudge users towards more critical and reflective decision making with AI/ML.
Affiliation(s)
- Sara Gerke: Penn State Dickinson Law, Carlisle, PA, USA
3
Kostick-Quenet KM, Cohen IG, Gerke S, Lo B, Antaki J, Movahedi F, Njah H, Schoen L, Estep JE, Blumenthal-Barby JS. Mitigating Racial Bias in Machine Learning. J Law Med Ethics 2022;50:92-100. PMID: 35243993. DOI: 10.1017/jme.2022.13.
Abstract
When applied in the health sector, AI-based applications raise not only ethical but also legal and safety concerns: algorithms trained on data from majority populations can generate less accurate or reliable results for minorities and other disadvantaged groups.
4
Kostick-Quenet KM, Lang B, Dorfman N, Blumenthal-Barby JS. A Call for Behavioral Science in Embedded Bioethics. Perspect Biol Med 2022;65:672-679. PMID: 36468396. PMCID: PMC10203975. DOI: 10.1353/pbm.2022.0059.
Abstract
Bioethicists today are taking a greater role in the design and implementation of emerging technologies by "embedding" within development teams and providing direct guidance and recommendations. Ideally, these collaborations allow ethical considerations to be addressed in an active, iterative, and ongoing process through regular exchanges between ethicists and members of the technological development team. This article discusses a challenge to this embedded ethics approach: namely, that bioethical guidance, even if embraced by the development team in theory, is not easily actionable in situ. Many of the ethical problems at issue in emerging technologies are associated with preexisting structural, socioeconomic, and political factors, making compliance with ethical recommendations sometimes less a matter of choice and more a matter of feasibility. Moreover, incentive structures within these systemic factors maintain them against reform efforts. The authors recommend that embedded bioethicists utilize principles from behavioral science (such as behavioral economics) to better understand and account for these incentive structures, so as to encourage the ethically responsible uptake of technological innovations.