1
Hassan M, Kushniruk A, Borycki E. Barriers to and Facilitators of Artificial Intelligence Adoption in Health Care: Scoping Review. JMIR Hum Factors 2024; 11:e48633. PMID: 39207831; PMCID: PMC11393514; DOI: 10.2196/48633.
Abstract
BACKGROUND Artificial intelligence (AI) use cases in health care are on the rise, with the potential to improve operational efficiency and care outcomes. However, the translation of AI into practical, everyday use has been limited, as its effectiveness relies on successful implementation and adoption by clinicians, patients, and other health care stakeholders. OBJECTIVE As adoption is a key factor in the successful proliferation of an innovation, this scoping review aimed at presenting an overview of the barriers to and facilitators of AI adoption in health care. METHODS A scoping review was conducted using the guidance provided by the Joanna Briggs Institute and the framework proposed by Arksey and O'Malley. MEDLINE, IEEE Xplore, and ScienceDirect databases were searched to identify publications in English that reported on the barriers to or facilitators of AI adoption in health care. This review focused on articles published between January 2011 and December 2023. The review did not have any limitations regarding the health care setting (hospital or community) or the population (patients, clinicians, physicians, or health care administrators). A thematic analysis was conducted on the selected articles to map factors associated with the barriers to and facilitators of AI adoption in health care. RESULTS A total of 2514 articles were identified in the initial search. After title and abstract reviews, 50 (1.99%) articles were included in the final analysis. These articles were reviewed for the barriers to and facilitators of AI adoption in health care. Most articles were empirical studies, literature reviews, reports, and thought articles. Approximately 18 categories of barriers and facilitators were identified. These were organized sequentially to provide considerations for AI development, implementation, and the overall structure needed to facilitate adoption. 
CONCLUSIONS The literature review revealed that trust is a significant catalyst of adoption, and it was found to be impacted by several barriers identified in this review. A governance structure can be a key facilitator, among others, in ensuring all the elements identified as barriers are addressed appropriately. The findings demonstrate that the implementation of AI in health care is still, in many ways, dependent on the establishment of regulatory and legal frameworks. Further research into a combination of governance and implementation frameworks, models, or theories to enhance trust that would specifically enable adoption is needed to provide the necessary guidance to those translating AI research into practice. Future research could also be expanded to include attempts at understanding patients' perspectives on complex, high-risk AI use cases and how the use of AI applications affects clinical practice and patient care, including sociotechnical considerations, as more algorithms are implemented in actual clinical environments.
Affiliation(s)
- Masooma Hassan
- Department of Health Information Science, University of Victoria, Victoria, BC, Canada
- Andre Kushniruk
- Department of Health Information Science, University of Victoria, Victoria, BC, Canada
- Elizabeth Borycki
- Department of Health Information Science, University of Victoria, Victoria, BC, Canada
2
Sriharan A, Sekercioglu N, Mitchell C, Senkaiahliyan S, Hertelendy A, Porter T, Banaszak-Holl J. Leadership for AI Transformation in Health Care Organization: Scoping Review. J Med Internet Res 2024; 26:e54556. PMID: 39009038; PMCID: PMC11358667; DOI: 10.2196/54556.
Abstract
BACKGROUND The leaders of health care organizations are grappling with rising expenses and surging demands for health services. In response, they are increasingly embracing artificial intelligence (AI) technologies to improve patient care delivery, alleviate operational burdens, and efficiently improve health care safety and quality. OBJECTIVE In this paper, we map the current literature and synthesize insights on the role of leadership in driving AI transformation within health care organizations. METHODS We conducted a comprehensive search across several databases, including MEDLINE (via Ovid), PsycINFO (via Ovid), CINAHL (via EBSCO), Business Source Premier (via EBSCO), and Canadian Business & Current Affairs (via ProQuest), spanning articles published from 2015 to June 2023 discussing AI transformation within the health care sector. Specifically, we focused on empirical studies with a particular emphasis on leadership. We used an inductive, thematic analysis approach to qualitatively map the evidence. The findings were reported in accordance with the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analysis extension for Scoping Reviews) guidelines. RESULTS A comprehensive review of 2813 unique abstracts led to the retrieval of 97 full-text articles, with 22 included for detailed assessment. Our literature mapping reveals that successful AI integration within healthcare organizations requires leadership engagement across technological, strategic, operational, and organizational domains. Leaders must demonstrate a blend of technical expertise, adaptive strategies, and strong interpersonal skills to navigate the dynamic healthcare landscape shaped by complex regulatory, technological, and organizational factors. CONCLUSIONS Leading AI transformation in healthcare requires a multidimensional approach, with leadership across technological, strategic, operational, and organizational domains.
Organizations should implement a comprehensive leadership development strategy, including targeted training and cross-functional collaboration, to equip leaders with the skills needed for AI integration. Additionally, when upskilling or recruiting AI talent, priority should be given to individuals with a strong mix of technical expertise, adaptive capacity, and interpersonal acumen, enabling them to navigate the unique complexities of the healthcare environment.
Affiliation(s)
- Abi Sriharan
- Krembil Centre for Health Management and Leadership, Schulich School of Business, York University, Toronto, ON, Canada
- Institute for Health Policy, Management and Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada
- Nigar Sekercioglu
- Institute for Health Policy, Management and Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada
- Cheryl Mitchell
- Gustavson School of Business, University of Victoria, Victoria, BC, Canada
- Senthujan Senkaiahliyan
- Institute for Health Policy, Management and Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada
- Attila Hertelendy
- College of Business, Florida International University, Miami, FL, United States
- Tracy Porter
- Department of Management, Cleveland State University, Cleveland, OH, United States
- Jane Banaszak-Holl
- Department of Health Services Administration, School of Health Professions, University of Alabama at Birmingham, Birmingham, AL, United States
3
Ursin F, Müller R, Funer F, Liedtke W, Renz D, Wiertz S, Ranisch R. Non-empirical methods for ethics research on digital technologies in medicine, health care and public health: a systematic journal review. Med Health Care Philos 2024. PMID: 39120780; DOI: 10.1007/s11019-024-10222-x.
Abstract
Bioethics has developed approaches to address ethical issues in health care, similar to how technology ethics provides guidelines for ethical research on artificial intelligence, big data, and robotic applications. As these digital technologies are increasingly used in medicine, health care and public health, it is plausible that the approaches of technology ethics have influenced bioethical research. Similar to the "empirical turn" in bioethics, which led to intense debates about appropriate moral theories, ethical frameworks and meta-ethics due to the increased use of empirical methodologies from social sciences, the proliferation of health-related subtypes of technology ethics might have a comparable impact on current bioethical research. This systematic journal review analyses the reporting of ethical frameworks and non-empirical methods in argument-based research articles on digital technologies in medicine, health care and public health that have been published in high-impact bioethics journals. We focus on articles reporting non-empirical research in original contributions. Our aim is to describe currently used methods for the ethical analysis of issues arising from the application of digital technologies in medicine, health care and public health. We confine our analysis to non-empirical methods because empirical methods have been well-researched elsewhere. Finally, we discuss our findings against the background of established methods for health technology assessment, the lack of a typology for non-empirical methods, as well as conceptual and methodical change in bioethics. Our descriptive results may serve as a starting point for reflecting on whether current ethical frameworks and non-empirical methods are appropriate for researching ethical issues deriving from the application of digital technologies in medicine, health care and public health.
Affiliation(s)
- Frank Ursin
- Institute for Ethics, History and Philosophy of Medicine, Hannover Medical School, Carl-Neuberg-Strasse 1, 30625 Hannover, Germany
- Regina Müller
- Institute of Philosophy, University of Bremen, Enrique-Schmidt-Straße 7, 28359 Bremen, Germany
- Florian Funer
- Institute for Ethics and History of Medicine, Eberhard Karls University, Gartenstrasse 47, 72074 Tübingen, Germany
- Wenke Liedtke
- Faculty of Theology, University of Greifswald, Am Rubenowplatz 2-3, 17489 Greifswald, Germany
- David Renz
- Faculty of Protestant Theology, University of Bonn, Am Hofgarten 8, 53113 Bonn, Germany
- Svenja Wiertz
- Department of Medical Ethics and the History of Medicine, University of Freiburg, Stefan-Meier-Str. 26, 79104 Freiburg, Germany
- Robert Ranisch
- Junior Professorship for Medical Ethics with a Focus on Digitization, Faculty of Health Sciences Brandenburg, University of Potsdam, Am Mühlenberg 9, 14476 Potsdam-Golm, Germany
4
Khan SD, Hoodbhoy Z, Raja MHR, Kim JY, Hogg HDJ, Manji AAA, Gulamali F, Hasan A, Shaikh A, Tajuddin S, Khan NS, Patel MR, Balu S, Samad Z, Sendak MP. Frameworks for procurement, integration, monitoring, and evaluation of artificial intelligence tools in clinical settings: A systematic review. PLOS Digit Health 2024; 3:e0000514. PMID: 38809946; PMCID: PMC11135672; DOI: 10.1371/journal.pdig.0000514.
Abstract
Research on the applications of artificial intelligence (AI) tools in medicine has increased exponentially over the last few years, but its implementation in clinical practice has not seen a commensurate increase, with a lack of consensus on implementing and maintaining such tools. This systematic review aims to summarize frameworks focusing on procuring, implementing, monitoring, and evaluating AI tools in clinical practice. A comprehensive literature search, following PRISMA guidelines, was performed on MEDLINE, Wiley Cochrane, Scopus, and EBSCO databases to identify and include articles recommending practices, frameworks or guidelines for AI procurement, integration, monitoring, and evaluation. From the included articles, data regarding study aim, use of a framework, rationale of the framework, and details regarding AI implementation involving procurement, integration, monitoring, and evaluation were extracted. The extracted details were then mapped on to the Donabedian Plan, Do, Study, Act cycle domains. The search yielded 17,537 unique articles, out of which 47 were evaluated for inclusion based on their full texts and 25 articles were included in the review. Common themes extracted included transparency, feasibility of operation within existing workflows, integrating into existing workflows, validation of the tool using predefined performance indicators, and improving the algorithm and/or adjusting the tool to improve performance. Among the four domains (Plan, Do, Study, Act), the most common domain was Plan (84%, n = 21), followed by Study (60%, n = 15), Do (52%, n = 13), and Act (24%, n = 6). Among 172 authors, only 1 (0.6%) was from a low-income country (LIC) and 2 (1.2%) were from lower-middle-income countries (LMICs). Healthcare professionals cite the implementation of AI tools within clinical settings as challenging owing to low levels of evidence focusing on integration in the Do and Act domains.
The current healthcare AI landscape calls for increased data sharing and knowledge translation to facilitate common goals and reap maximum clinical benefit.
Affiliation(s)
- Sarim Dawar Khan
- CITRIC Health Data Science Centre, Department of Medicine, Aga Khan University, Karachi, Pakistan
- Zahra Hoodbhoy
- CITRIC Health Data Science Centre, Department of Medicine, Aga Khan University, Karachi, Pakistan
- Department of Paediatrics and Child Health, Aga Khan University, Karachi, Pakistan
- Jee Young Kim
- Duke Institute for Health Innovation, Duke University School of Medicine, Durham, North Carolina, United States
- Henry David Jeffry Hogg
- Population Health Science Institute, Newcastle University, Newcastle upon Tyne, United Kingdom
- Newcastle upon Tyne Hospitals NHS Foundation Trust, Newcastle upon Tyne, United Kingdom
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Afshan Anwar Ali Manji
- CITRIC Health Data Science Centre, Department of Medicine, Aga Khan University, Karachi, Pakistan
- Freya Gulamali
- Duke Institute for Health Innovation, Duke University School of Medicine, Durham, North Carolina, United States
- Alifia Hasan
- Duke Institute for Health Innovation, Duke University School of Medicine, Durham, North Carolina, United States
- Asim Shaikh
- CITRIC Health Data Science Centre, Department of Medicine, Aga Khan University, Karachi, Pakistan
- Salma Tajuddin
- CITRIC Health Data Science Centre, Department of Medicine, Aga Khan University, Karachi, Pakistan
- Nida Saddaf Khan
- CITRIC Health Data Science Centre, Department of Medicine, Aga Khan University, Karachi, Pakistan
- Manesh R. Patel
- Duke Clinical Research Institute, Duke University School of Medicine, Durham, North Carolina, United States
- Division of Cardiology, Duke University School of Medicine, Durham, North Carolina, United States
- Suresh Balu
- Duke Institute for Health Innovation, Duke University School of Medicine, Durham, North Carolina, United States
- Zainab Samad
- CITRIC Health Data Science Centre, Department of Medicine, Aga Khan University, Karachi, Pakistan
- Department of Medicine, Aga Khan University, Karachi, Pakistan
- Mark P. Sendak
- Duke Institute for Health Innovation, Duke University School of Medicine, Durham, North Carolina, United States
5
Maccaro A, Stokes K, Statham L, He L, Williams A, Pecchia L, Piaggio D. Clearing the Fog: A Scoping Literature Review on the Ethical Issues Surrounding Artificial Intelligence-Based Medical Devices. J Pers Med 2024; 14:443. PMID: 38793025; PMCID: PMC11121798; DOI: 10.3390/jpm14050443.
Abstract
The use of AI in healthcare has sparked much debate among philosophers, ethicists, regulators and policymakers who raised concerns about the implications of such technologies. The presented scoping review captures the progression of the ethical and legal debate and the proposed ethical frameworks available concerning the use of AI-based medical technologies, capturing key themes across a wide range of medical contexts. The ethical dimensions are synthesised in order to produce a coherent ethical framework for AI-based medical technologies, highlighting how transparency, accountability, confidentiality, autonomy, trust and fairness are the top six recurrent ethical issues. The literature also highlighted how it is essential to increase ethical awareness through interdisciplinary research, such that researchers, AI developers and regulators have the necessary education/competence or networks and tools to ensure proper consideration of ethical matters in the conception and design of new AI technologies and their norms. Interdisciplinarity throughout research, regulation and implementation will help ensure AI-based medical devices are ethical, clinically effective and safe. Achieving these goals will facilitate successful translation of AI into healthcare systems, which currently is lagging behind other sectors, to ensure timely achievement of health benefits to patients and the public.
Affiliation(s)
- Alessia Maccaro
- Applied Biomedical Signal Processing Intelligent eHealth Lab, School of Engineering, University of Warwick, Coventry CV4 7AL, UK
- Katy Stokes
- Applied Biomedical Signal Processing Intelligent eHealth Lab, School of Engineering, University of Warwick, Coventry CV4 7AL, UK
- Laura Statham
- Applied Biomedical Signal Processing Intelligent eHealth Lab, School of Engineering, University of Warwick, Coventry CV4 7AL, UK
- Warwick Medical School, University of Warwick, Coventry CV4 7AL, UK
- Lucas He
- Applied Biomedical Signal Processing Intelligent eHealth Lab, School of Engineering, University of Warwick, Coventry CV4 7AL, UK
- Faculty of Engineering, Imperial College, London SW7 1AY, UK
- Arthur Williams
- Applied Biomedical Signal Processing Intelligent eHealth Lab, School of Engineering, University of Warwick, Coventry CV4 7AL, UK
- Leandro Pecchia
- Applied Biomedical Signal Processing Intelligent eHealth Lab, School of Engineering, University of Warwick, Coventry CV4 7AL, UK
- Intelligent Technologies for Health and Well-Being: Sustainable Design, Management and Evaluation, Faculty of Engineering, Università Campus Bio-Medico Roma, Via Alvaro del Portillo, 21, 00128 Rome, Italy
- Davide Piaggio
- Applied Biomedical Signal Processing Intelligent eHealth Lab, School of Engineering, University of Warwick, Coventry CV4 7AL, UK
6
Coghlan S, Gyngell C, Vears DF. Ethics of artificial intelligence in prenatal and pediatric genomic medicine. J Community Genet 2024; 15:13-24. PMID: 37796364; PMCID: PMC10857992; DOI: 10.1007/s12687-023-00678-4.
Abstract
This paper examines the ethics of introducing emerging forms of artificial intelligence (AI) into prenatal and pediatric genomic medicine. Application of genomic AI to these early life settings has not received much attention in the ethics literature. We focus on three contexts: (1) prenatal genomic sequencing for possible fetal abnormalities, (2) rapid genomic sequencing for critically ill children, and (3) reanalysis of genomic data obtained from children for diagnostic purposes. The paper identifies and discusses various ethical issues in the possible application of genomic AI in these settings, especially as they relate to concepts of beneficence, nonmaleficence, respect for autonomy, justice, transparency, accountability, privacy, and trust. The examination will inform the ethically sound introduction of genomic AI in early human life.
Affiliation(s)
- Simon Coghlan
- School of Computing and Information Systems (CIS), Centre for AI and Digital Ethics (CAIDE), The University of Melbourne, Grattan St, Melbourne, Victoria, 3010, Australia
- Australian Research Council Centre of Excellence for Automated Decision Making and Society (ADM+S), Melbourne, Victoria, Australia
- Christopher Gyngell
- Biomedical Ethics Research Group, Murdoch Children's Research Institute, The Royal Children's Hospital, 50 Flemington Rd, Parkville, Victoria, 3052, Australia
- University of Melbourne, Parkville, Victoria, 3052, Australia
- Danya F Vears
- Biomedical Ethics Research Group, Murdoch Children's Research Institute, The Royal Children's Hospital, 50 Flemington Rd, Parkville, Victoria, 3052, Australia
- University of Melbourne, Parkville, Victoria, 3052, Australia
- Centre for Biomedical Ethics and Law, KU Leuven, Kapucijnenvoer 35, 3000, Leuven, Belgium
7
Enríquez T, Alonso-Stuyck P, Martínez-Villaseñor L. The Language of Nature and Artificial Intelligence in Patient Care. Int J Environ Res Public Health 2023; 20:6499. PMID: 37569039; PMCID: PMC10419222; DOI: 10.3390/ijerph20156499.
Abstract
Given the development of artificial intelligence (AI) and the conditions of vulnerability of large sectors of the population, the question emerges: what are the ethical limits of technologies in patient care? This paper examines this question in the light of the "language of nature" and of Aristotelian causal analysis, in particular the concept of means and ends. Thus, it is possible to point out the root of the distinction between the identity of the person and the entity of any technology. Nature indicates that the person is always an end in itself. Technology, on the contrary, should only be a means to serve the person. The diversity of their respective natures also explains why their respective agencies enjoy diverse scopes. Technological operations (artificial agency, artificial intelligence) find their meaning in the results obtained through them (poiesis). Moreover, the person is capable of actions whose purpose is precisely the action itself (praxis), in which personal agency and, ultimately, the person themselves are irreplaceable. Forgetting the distinction between what, by nature, is an end and what can only be a means is equivalent to losing sight of the instrumental nature of AI and, therefore, its specific meaning: the greatest good of the patient. It is concluded that the language of nature serves as a filter that supports the effective subordination of the use of AI to its specific purpose, the human good. The greatest contribution of this work is to draw attention to the nature of the person and of technology, and to their respective agencies: in other words, to listening to the language of nature and attending to the diverse natures of the person and technology, personal agency, and artificial agency.
Affiliation(s)
- Teresa Enríquez
- Instituto de Humanidades, Universidad Panamericana, Josemaría Escrivá de Balaguer 101, Aguascalientes 20296, Mexico
- Paloma Alonso-Stuyck
- Facultad de Psicología, Universitat Abat Oliba CEU, Bellesguard 30, 08022 Barcelona, Spain
8
Busch F, Adams LC, Bressem KK. Biomedical Ethical Aspects Towards the Implementation of Artificial Intelligence in Medical Education. Med Sci Educ 2023; 33:1007-1012. PMID: 37546190; PMCID: PMC10403458; DOI: 10.1007/s40670-023-01815-x.
Abstract
The increasing use of artificial intelligence (AI) in medicine is associated with new ethical challenges and responsibilities. However, special considerations and concerns should be addressed when integrating AI applications into medical education, where healthcare, AI, and education ethics collide. This commentary explores the biomedical ethical responsibilities of medical institutions in incorporating AI applications into medical education by identifying potential concerns and limitations, with the goal of implementing applicable recommendations. The recommendations presented are intended to assist in developing institutional guidelines for the ethical use of AI for medical educators and students.
Affiliation(s)
- Felix Busch
- Department of Radiology, Charité – Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Berlin, Germany
- Department of Anesthesiology, Division of Operative Intensive Care Medicine, Charité – Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Berlin, Germany
- Lisa C. Adams
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Keno K. Bressem
- Department of Radiology, Charité – Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Berlin, Germany
- Berlin Institute of Health at Charité – Universitätsmedizin Berlin, Berlin, Germany
9
Fardeau E, Senghor AS, Racine E. The Impact of Socially Assistive Robots on Human Flourishing in the Context of Dementia: A Scoping Review. Int J Soc Robot 2023; 15:1-51. PMID: 37359430; PMCID: PMC10115607; DOI: 10.1007/s12369-023-00980-8.
Abstract
Socially assistive robots are being developed and tested to support social interactions and assist with healthcare needs, including in the context of dementia. These technologies bring their share of situations where moral values and principles can be profoundly questioned. Several aspects of these robots affect human relationships and social behavior, i.e., fundamental aspects of human existence and human flourishing. However, the impact of socially assistive robots on human flourishing is not yet well understood in the current state of the literature. We undertook a scoping review to study the literature on human flourishing as it relates to health uses of socially assistive robots. Searches were conducted between March and July 2021 on the following databases: Ovid MEDLINE, PubMed and PsycINFO. Twenty-eight articles were found and analyzed. Results show that no formal evaluation of the impact of socially assistive robots on human flourishing in the context of dementia was reported in any of the articles retained for the literature review, although several articles touched on at least one dimension of human flourishing and other related concepts. We submit that participatory methods to evaluate the impact of socially assistive robots on human flourishing could open research to other values at stake, particularly those prioritized by people with dementia, which we have less evidence about. Such participatory approaches to human flourishing are congruent with empowerment theory.
Affiliation(s)
- Erika Fardeau
- Pragmatic Health Ethics Research Unit, Institut de recherches cliniques de Montréal, 110 Avenue Des Pins Ouest, Montréal, QC H2W 1R7, Canada
- Abdou Simon Senghor
- Pragmatic Health Ethics Research Unit, Institut de recherches cliniques de Montréal, 110 Avenue Des Pins Ouest, Montréal, QC H2W 1R7, Canada
- Division of Experimental Medicine, McGill University, Montréal, QC, Canada
- Eric Racine
- Pragmatic Health Ethics Research Unit, Institut de recherches cliniques de Montréal, 110 Avenue Des Pins Ouest, Montréal, QC H2W 1R7, Canada
- Division of Experimental Medicine, McGill University, Montréal, QC, Canada
- Department of Neurology and Neurosurgery, McGill University, Montréal, QC, Canada
- Department of Medicine and Department of Social and Preventive Medicine, Université de Montréal, Montréal, QC, Canada
10
Bignami EG, Vittori A, Lanza R, Compagnone C, Cascella M, Bellini V. The Clinical Researcher Journey in the Artificial Intelligence Era: The PAC-MAN's Challenge. Healthcare (Basel) 2023; 11:975. PMID: 37046900; PMCID: PMC10093965; DOI: 10.3390/healthcare11070975.
Abstract
Artificial intelligence (AI) is a powerful tool that can assist researchers and clinicians in various settings. However, like any technology, it must be used with caution and awareness as there are numerous potential pitfalls. To provide a creative analogy, we have likened research to the PAC-MAN classic arcade video game. Just as the protagonist of the game is constantly seeking data, researchers are constantly seeking information that must be acquired and managed within the constraints of the research rules. In our analogy, the obstacles that researchers face are represented by “ghosts”, which symbolize major ethical concerns, low-quality data, legal issues, and educational challenges. In short, clinical researchers need to meticulously collect and analyze data from various sources, often navigating through intricate and nuanced challenges to ensure that the data they obtain are both precise and pertinent to their research inquiry. Reflecting on this analogy can foster a deeper comprehension of the significance of employing AI and other powerful technologies with heightened awareness and attentiveness.
Affiliation(s)
- Elena Giovanna Bignami
- Anesthesiology, Critical Care and Pain Medicine Division, Department of Medicine and Surgery, University of Parma, Viale Gramsci 14, 43126 Parma, Italy
- Alessandro Vittori
- Department of Anesthesia and Critical Care, ARCO ROMA, Ospedale Pediatrico Bambino Gesù IRCCS, Piazza S. Onofrio 4, 00165 Rome, Italy
- Correspondence: Tel.: +39-0668592397
- Roberto Lanza
- Anesthesiology, Critical Care and Pain Medicine Division, Department of Medicine and Surgery, University of Parma, Viale Gramsci 14, 43126 Parma, Italy
- Christian Compagnone
- Anesthesiology, Critical Care and Pain Medicine Division, Department of Medicine and Surgery, University of Parma, Viale Gramsci 14, 43126 Parma, Italy
- Marco Cascella
- Department of Anesthesia and Critical Care, Istituto Nazionale Tumori—IRCCS, Fondazione Pascale, 80131 Naples, Italy
- Valentina Bellini
- Anesthesiology, Critical Care and Pain Medicine Division, Department of Medicine and Surgery, University of Parma, Viale Gramsci 14, 43126 Parma, Italy
11
Taber P, Armin JS, Orozco G, Del Fiol G, Erdrich J, Kawamoto K, Israni ST. Artificial Intelligence and Cancer Control: Toward Prioritizing Justice, Equity, Diversity, and Inclusion (JEDI) in Emerging Decision Support Technologies. Curr Oncol Rep 2023; 25:387-424. [PMID: 36811808 DOI: 10.1007/s11912-023-01376-7] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 12/06/2022] [Indexed: 02/24/2023]
Abstract
PURPOSE OF REVIEW This perspective piece has two goals: first, to describe issues related to artificial intelligence-based applications for cancer control as they may impact health inequities or disparities; and second, to report on a review of systematic reviews and meta-analyses of artificial intelligence-based tools for cancer control to ascertain the extent to which discussions of justice, equity, diversity, inclusion, or health disparities manifest in syntheses of the field's best evidence. RECENT FINDINGS We found that, while a significant proportion of existing syntheses of research on AI-based tools in cancer control use formal bias assessment tools, the fairness or equitability of models is not yet systematically analyzable across studies. Issues related to real-world use of AI-based tools for cancer control, such as workflow considerations, measures of usability and acceptance, or tool architecture, are more visible in the literature, but still addressed only in a minority of reviews. Artificial intelligence is poised to bring significant benefits to a wide range of applications in cancer control, but more thorough and standardized evaluations and reporting of model fairness are required to build the evidence base for AI-based tool design for cancer and to ensure that these emerging technologies promote equitable healthcare.
Affiliation(s)
- Peter Taber
- Department of Biomedical Informatics, University of Utah School of Medicine, 421 Wakara Way, Salt Lake City, UT, 84108, USA
- Julie S Armin
- Department of Family and Community Medicine, University of Arizona College of Medicine, Tucson, AZ, USA
- Guilherme Del Fiol
- Department of Biomedical Informatics, University of Utah School of Medicine, 421 Wakara Way, Salt Lake City, UT, 84108, USA
- Jennifer Erdrich
- Division of Surgical Oncology, University of Arizona College of Medicine, Tucson, AZ, USA
- Kensaku Kawamoto
- Department of Biomedical Informatics, University of Utah School of Medicine, 421 Wakara Way, Salt Lake City, UT, 84108, USA
12
Medical AI and human dignity: Contrasting perceptions of human and artificially intelligent (AI) decision making in diagnostic and medical resource allocation contexts. Comput Human Behav 2022. [DOI: 10.1016/j.chb.2022.107296] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/21/2023]
13
Solomonides AE, Koski E, Atabaki SM, Weinberg S, McGreevey JD, Kannry JL, Petersen C, Lehmann CU. Defining AMIA's artificial intelligence principles. J Am Med Inform Assoc 2022; 29:585-591. [PMID: 35190824 PMCID: PMC8922174 DOI: 10.1093/jamia/ocac006] [Citation(s) in RCA: 41] [Impact Index Per Article: 20.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2022] [Accepted: 01/14/2022] [Indexed: 08/08/2023] Open
Abstract
Recent advances in the science and technology of artificial intelligence (AI) and growing numbers of deployed AI systems in healthcare and other services have called attention to the need for ethical principles and governance. We define and provide a rationale for principles that should guide the commission, creation, implementation, maintenance, and retirement of AI systems as a foundation for governance throughout the lifecycle. Some principles are derived from the familiar requirements of practice and research in medicine and healthcare: beneficence, nonmaleficence, autonomy, and justice come first. A set of principles follows from the creation and engineering of AI systems: explainability of the technology in plain terms; interpretability, that is, plausible reasoning for decisions; fairness and absence of bias; dependability, including "safe failure"; provision of an audit trail for decisions; and active management of the knowledge base to remain up to date and sensitive to any changes in the environment. In organizational terms, the principles require benevolence, aiming to do good through the use of AI; transparency, ensuring that all assumptions and potential conflicts of interest are declared; and accountability, including active oversight of AI systems and management of any risks that may arise. Particular attention is drawn to the case of vulnerable populations, where extreme care must be exercised. Finally, the principles emphasize the need for user education at all levels of engagement with AI and for continuing research into AI and its biomedical and healthcare applications.
Affiliation(s)
- Eileen Koski
- Center for Computational Health, IBM T. J. Watson Research Center, Yorktown Heights, New York, USA
- Shireen M Atabaki
- Pediatrics; Emergency Medicine, The George Washington University School of Medicine, Children's National Hospital, Washington, District of Columbia, USA
- Scott Weinberg
- Public Policy, American Medical Informatics Association, Rockville, Maryland, USA
- John D McGreevey
- Center for Applied Health Informatics and Office of the Chief Medical Information Officer, University of Pennsylvania Health System, Philadelphia, Pennsylvania, USA
- Joseph L Kannry
- Department of Medicine, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Carolyn Petersen
- Health Education & Content Services, Mayo Clinic, Rochester, Minnesota, USA
- Christoph U Lehmann
- Clinical Informatics Center, University of Texas Southwestern Medical Center, Dallas, Texas, USA
14
Kodera S, Akazawa H, Morita H, Komuro I. Prospects for cardiovascular medicine using artificial intelligence. J Cardiol 2021; 79:319-325. [PMID: 34772574 DOI: 10.1016/j.jjcc.2021.10.016] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/07/2021] [Accepted: 10/19/2021] [Indexed: 12/19/2022]
Abstract
As the importance of artificial intelligence (AI) in the clinical setting increases, the need for clinicians to understand AI is also increasing. This review focuses on the fundamental principles of AI and the current state of cardiovascular AI. Various types of cardiovascular AI have been developed for evaluating examinations such as X-rays, electrocardiography, echocardiography, computed tomography, and magnetic resonance imaging. Cardiovascular AI achieves high accuracy in diagnostic support and prognosis prediction. Furthermore, it can even detect abnormalities that were previously difficult for cardiologists to detect. Randomized controlled trials verifying the usefulness of cardiovascular AI are beginning to be reported. The day is approaching when cardiovascular AI will be commonly used in clinical practice. Various types of medical AI will be used for cardiovascular care; however, they will not replace medical doctors. We need to understand the strengths and weaknesses of medical AI so that cardiologists can use it effectively to improve the medical care of patients.
Affiliation(s)
- Satoshi Kodera
- Department of Cardiovascular Medicine, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Hiroshi Akazawa
- Department of Cardiovascular Medicine, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Hiroyuki Morita
- Department of Cardiovascular Medicine, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Issei Komuro
- Department of Cardiovascular Medicine, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
15
Formosa P. Robot Autonomy vs. Human Autonomy: Social Robots, Artificial Intelligence (AI), and the Nature of Autonomy. Minds Mach (Dordr) 2021. [DOI: 10.1007/s11023-021-09579-2] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
Abstract
Social robots are robots that can interact socially with humans. As social robots and the artificial intelligence (AI) that powers them become more advanced, they will likely take on more social and work roles. This has many important ethical implications. In this paper, we focus on one of the most central of these, the impacts that social robots can have on human autonomy. We argue that, due to their physical presence and social capacities, there is a strong potential for social robots to enhance human autonomy, but also several ways in which they can inhibit and disrespect it. We argue that social robots could improve human autonomy by helping us to achieve more valuable ends, make more authentic choices, and improve our autonomy competencies. We also argue that social robots have the potential to harm human autonomy by instead leading us to achieve fewer valuable ends ourselves, make less authentic choices, decrease our autonomy competencies, make our autonomy more vulnerable, and disrespect our autonomy. Whether the impacts of social robots on human autonomy are positive or negative overall will depend on the design, regulation, and use we make of social robots in the future.