51
Iqbal J, Cortés Jaimes DC, Makineni P, Subramani S, Hemaida S, Thugu TR, Butt AN, Sikto JT, Kaur P, Lak MA, Augustine M, Shahzad R, Arain M. Reimagining Healthcare: Unleashing the Power of Artificial Intelligence in Medicine. Cureus 2023; 15:e44658. [PMID: 37799217; PMCID: PMC10549955; DOI: 10.7759/cureus.44658]
Abstract
Artificial intelligence (AI) has opened new medical avenues and revolutionized diagnostic and therapeutic practices, allowing healthcare providers to overcome significant challenges associated with cost, disease management, accessibility, and treatment optimization. Prominent AI technologies such as machine learning (ML) and deep learning (DL) have immensely influenced diagnostics, patient monitoring, novel pharmaceutical discoveries, drug development, and telemedicine. Significant innovations and improvements in disease identification and early intervention have been made using AI-generated algorithms for clinical decision support systems and disease prediction models. AI has remarkably impacted clinical drug trials by amplifying research into drug efficacy, adverse events, and candidate molecular design. AI's precision in analyzing patients' genetic, environmental, and lifestyle factors has led to individualized treatment strategies. During the COVID-19 pandemic, AI-assisted telemedicine set a precedent for remote healthcare delivery and patient follow-up. Moreover, AI-generated applications and wearable devices have allowed ambulatory monitoring of vital signs. However, for all its transformative potential, AI's contribution to healthcare is subject to ethical and regulatory concerns. Data protection and algorithm transparency in AI systems should strictly adhere to ethical principles, and rigorous governance frameworks should be in place before incorporating AI into mental health interventions through AI-operated chatbots, medical education enhancements, and virtual reality-based training. The role of AI in medical decision-making has certain limitations, underscoring the continued importance of hands-on clinical experience. Striking an optimal balance between AI's capabilities and ethical considerations is therefore crucial to ensure impartial and unbiased performance in healthcare applications.
This narrative review focuses on AI's impact on healthcare and the importance of ethical and balanced incorporation to make use of its full potential.
Affiliation(s)
- Diana Carolina Cortés Jaimes
- Epidemiology, Universidad Autónoma de Bucaramanga, Bucaramanga, COL
- Medicine, Pontificia Universidad Javeriana, Bogotá, COL
- Pallavi Makineni
- Medicine, All India Institute of Medical Sciences, Bhubaneswar, IND
- Sachin Subramani
- Medicine and Surgery, Employees' State Insurance Corporation (ESIC) Medical College, Gulbarga, IND
- Sarah Hemaida
- Internal Medicine, Istanbul Okan University, Istanbul, TUR
- Thanmai Reddy Thugu
- Internal Medicine, Sri Padmavathi Medical College for Women, Sri Venkateswara Institute of Medical Sciences (SVIMS), Tirupati, IND
- Amna Naveed Butt
- Medicine/Internal Medicine, Allama Iqbal Medical College, Lahore, PAK
- Pareena Kaur
- Medicine, Punjab Institute of Medical Sciences, Jalandhar, IND
- Roheen Shahzad
- Medicine, Combined Military Hospital (CMH) Lahore Medical College and Institute of Dentistry, Lahore, PAK
- Mustafa Arain
- Internal Medicine, Civil Hospital Karachi, Karachi, PAK
52
van der Vegt AH, Scott IA, Dermawan K, Schnetler RJ, Kalke VR, Lane PJ. Implementation frameworks for end-to-end clinical AI: derivation of the SALIENT framework. J Am Med Inform Assoc 2023; 30:1503-1515. [PMID: 37208863; PMCID: PMC10436156; DOI: 10.1093/jamia/ocad088]
Abstract
OBJECTIVE To derive a comprehensive implementation framework for clinical AI models within hospitals, informed by existing AI frameworks and integrated with reporting standards for clinical AI research. MATERIALS AND METHODS (1) Derive a provisional implementation framework based on the taxonomy of Stead et al and integrated with current reporting standards for AI research: TRIPOD, DECIDE-AI, and CONSORT-AI. (2) Undertake a scoping review of published clinical AI implementation frameworks and identify key themes and stages. (3) Perform a gap analysis and refine the framework by incorporating missing items. RESULTS The provisional AI implementation framework, called SALIENT, was mapped to 5 stages common to both the taxonomy and the reporting standards. A scoping review retrieved 20 studies, from which 247 themes, stages, and subelements were identified. A gap analysis identified 5 new cross-stage themes and 16 new tasks. The final framework comprised 5 stages, 7 elements, and 4 components, including the AI system, data pipeline, human-computer interface, and clinical workflow. DISCUSSION This pragmatic framework resolves gaps in existing stage- and theme-based clinical AI implementation guidance by comprehensively addressing the what (components), when (stages), and how (tasks) of AI implementation, as well as the who (organization) and why (policy domains). By integrating research reporting standards into SALIENT, the framework is grounded in rigorous evaluation methodologies. The framework still requires validation of its applicability to real-world studies of deployed AI models. CONCLUSIONS A novel end-to-end framework has been developed for implementing AI within hospital clinical practice that builds on previous AI implementation frameworks and research reporting standards.
Affiliation(s)
- Anton H van der Vegt
- Centre for Health Services Research, The University of Queensland, Brisbane, Australia
- Ian A Scott
- Department of Internal Medicine and Clinical Epidemiology, Princess Alexandra Hospital, Brisbane, Australia
- Krishna Dermawan
- Centre for Information Resilience, The University of Queensland, St Lucia, Australia
- Rudolf J Schnetler
- School of Information Technology and Electrical Engineering, The University of Queensland, St Lucia, Australia
- Vikrant R Kalke
- Patient Safety and Quality, Clinical Excellence Queensland, Queensland Health, Brisbane, Australia
- Paul J Lane
- Safety Quality & Innovation, The Prince Charles Hospital, Queensland Health, Brisbane, Australia
53
Polevikov S. Advancing AI in healthcare: A comprehensive review of best practices. Clin Chim Acta 2023; 548:117519. [PMID: 37595864; DOI: 10.1016/j.cca.2023.117519]
Abstract
Artificial Intelligence (AI) and Machine Learning (ML) are powerful tools shaping the healthcare sector. This review considers twelve key aspects of AI in clinical practice: 1) Ethical AI; 2) Explainable AI; 3) Health Equity and Bias in AI; 4) Sponsorship Bias; 5) Data Privacy; 6) Genomics and Privacy; 7) Insufficient Sample Size and Self-Serving Bias; 8) Bridging the Gap Between Training Datasets and Real-World Scenarios; 9) Open Source and Collaborative Development; 10) Dataset Bias and Synthetic Data; 11) Measurement Bias; 12) Reproducibility in AI Research. These categories represent both the challenges and opportunities of AI implementation in healthcare. While AI holds significant potential for improving patient care, it also presents risks and challenges, such as ensuring privacy, combating bias, and maintaining transparency and ethics. The review underscores the necessity of developing comprehensive best practices for healthcare organizations and fostering a diverse dialogue involving data scientists, clinicians, patient advocates, ethicists, economists, and policymakers. We are on the cusp of a significant transformation in healthcare powered by AI. By continuing to reassess and refine our approach, we can ensure that AI is implemented responsibly and ethically, maximizing its benefit to patient care and public health.
54
Greenberg ZF, Graim KS, He M. Towards artificial intelligence-enabled extracellular vesicle precision drug delivery. Adv Drug Deliv Rev 2023:114974. [PMID: 37356623; DOI: 10.1016/j.addr.2023.114974]
Abstract
Extracellular vesicles (EVs), particularly exosomes, have recently emerged in nanomedicine as a promising drug delivery approach due to their superior biocompatibility, circulating stability, and bioavailability in vivo. However, EV heterogeneity makes precise molecular targeting a critical challenge, and deciphering the key molecular drivers that control EV tissue-targeting specificity is greatly needed. Artificial intelligence (AI) brings powerful predictive ability for guiding the rational design of engineered EVs with precise control for drug delivery. This review focuses on cutting-edge nano-delivery via the integration of large-scale EV data with AI to develop AI-directed EV therapies, and illuminates their clinical translation potential. We briefly review the current status of EVs in drug delivery, including the current frontier, limitations, and considerations to advance the field. Subsequently, we detail the future of AI in drug delivery and its impact on precision EV delivery. We discuss the current universal challenge of standardization and critical considerations when combining AI with EVs for precision drug delivery. Finally, we conclude with a perspective on future clinical translation driven by the combined efforts of AI and EV research.
Affiliation(s)
- Zachary F Greenberg
- Department of Pharmaceutics, College of Pharmacy, University of Florida, Gainesville, Florida, 32610, USA
- Kiley S Graim
- Department of Computer & Information Science & Engineering, Herbert Wertheim College of Engineering, University of Florida, Gainesville, Florida, 32610, USA
- Mei He
- Department of Pharmaceutics, College of Pharmacy, University of Florida, Gainesville, Florida, 32610, USA
55
Cohen RY, Kovacheva VP. A Methodology for a Scalable, Collaborative, and Resource-Efficient Platform, MERLIN, to Facilitate Healthcare AI Research. IEEE J Biomed Health Inform 2023; 27:3014-3025. [PMID: 37030761; PMCID: PMC10275625; DOI: 10.1109/jbhi.2023.3259395]
Abstract
Healthcare artificial intelligence (AI) holds the potential to increase patient safety, augment efficiency, and improve patient outcomes, yet research is often limited by data access, cohort curation, and tools for analysis. Collection and translation of electronic health record data, live data, and real-time high-resolution device data can be challenging and time-consuming. The development of clinically relevant AI tools requires overcoming challenges in data acquisition, scarce hospital resources, and requirements for data governance. These bottlenecks can impose heavy resource demands and long delays on the research and development of AI systems. We present a system and methodology to accelerate data acquisition, dataset development and analysis, and AI model development. We created an interactive platform that relies on a scalable microservice architecture. This system can ingest 15,000 patient records per hour, where each record represents thousands of multimodal measurements, text notes, and high-resolution data. Collectively, these records can approach a terabyte of data. The platform can further perform cohort generation and preliminary dataset analysis in 2-5 minutes. As a result, multiple users can collaborate simultaneously to iterate on datasets and models in real time. We anticipate that this approach will accelerate clinical AI model development, and, in the long run, meaningfully improve healthcare delivery.
56
Venkatesh KP, Brito G. Lessons on regulation and implementation from the first FDA-cleared autonomous AI - Interview with Chairman and Founder of Digital Diagnostics Michael Abramoff. Healthc (Amst) 2023; 11:100692. [PMID: 37201476; DOI: 10.1016/j.hjdsi.2023.100692]
57
Bleher H, Braun M. Reflections on Putting AI Ethics into Practice: How Three AI Ethics Approaches Conceptualize Theory and Practice. Sci Eng Ethics 2023; 29:21. [PMID: 37237246; PMCID: PMC10220094; DOI: 10.1007/s11948-023-00443-3]
Abstract
Critics currently argue that applied ethics approaches to artificial intelligence (AI) are too principles-oriented and entail a theory-practice gap. Several applied ethical approaches try to prevent such a gap by conceptually translating ethical theory into practice. In this article, we explore how the currently most prominent approaches of AI ethics translate ethics into practice. To this end, we examine three approaches to applied AI ethics: the embedded ethics approach, the ethically aligned approach, and the Value Sensitive Design (VSD) approach. We analyze each of these three approaches by asking how they understand and conceptualize theory and practice. We outline their conceptual strengths as well as their shortcomings: an embedded ethics approach is context-oriented but risks being biased by that context; ethically aligned approaches are principles-oriented but lack justification theories to deal with trade-offs between competing principles; and the interdisciplinary Value Sensitive Design approach is based on stakeholder values but needs linkage to political, legal, or social governance aspects. Against this background, we develop a meta-framework for applied AI ethics conceptions with three dimensions. Based on critical theory, we suggest these dimensions as starting points to critically reflect on the conceptualization of theory and practice. We claim, first, that including the dimension of affects and emotions in the ethical decision-making process stimulates reflection on vulnerabilities, experiences of disregard, and marginalization already within the AI development process. Second, we derive from our analysis that considering the dimension of justifying normative background theories provides standards and criteria as well as guidance for prioritizing or evaluating competing principles in cases of conflict. Third, we argue that reflecting on the governance dimension in ethical decision-making is important for revealing power structures and for realizing ethical AI and its application, because this dimension seeks to combine social, legal, technical, and political concerns. This meta-framework can thus serve as a reflective tool for understanding, mapping, and assessing the theory-practice conceptualizations within AI ethics approaches to address and overcome their blind spots.
Affiliation(s)
- Hannah Bleher
- Chair of Social Ethics and Ethics of Technology, University of Bonn, Rabinstraße 8, 53111 Bonn, Germany
- Matthias Braun
- Chair of Social Ethics and Ethics of Technology, University of Bonn, Rabinstraße 8, 53111 Bonn, Germany
58
Goldstein J, Weitzman D, Lemerond M, Jones A. Determinants for scalable adoption of autonomous AI in the detection of diabetic eye disease in diverse practice types: key best practices learned through collection of real-world data. Front Digit Health 2023; 5:1004130. [PMID: 37274764; PMCID: PMC10232822; DOI: 10.3389/fdgth.2023.1004130]
Abstract
Autonomous artificial intelligence (AI) has the potential to reduce disparities, improve quality of care, and reduce cost by improving access to specialty diagnoses at the point of care. Diabetes and related complications represent a significant source of health disparities. Vision loss is a complication of diabetes, and there is extensive evidence supporting annual eye exams for prevention. Prior to the use of autonomous AI, store-and-forward imaging approaches using remote reading centers (asynchronous telemedicine) attempted to increase diabetes-related eye exams with limited success. In 2018, after rigorous clinical validation, the first fully autonomous AI system [LumineticsCore™ (formerly IDx-DR), Digital Diagnostics Inc., Coralville, IA, United States] received U.S. Food and Drug Administration (FDA) De Novo authorization. The system diagnoses diabetic retinopathy (including macular edema) without specialist physician overread at the point of care. In addition to regulatory clearance, reimbursement, and quality measure updates, successful adoption requires local optimization of the clinical workflow. The general challenges of frontline-care clinical workflows have been well documented in the literature. Because healthcare AI is so new, there remains a gap in the literature about challenges and opportunities to embed diagnostic AI into the clinical workflow. The goal of this review is to identify common workflow themes leading to successful adoption, measured as the number of exams per month performed with the autonomous AI system relative to targets set for each health center. We characterized the workflow in four different US health centers over a 12-month period. The health centers were geographically dispersed across the Midwest, Southwest, Northeast, and West Coast and varied distinctly in size, staffing, resources, financing, and patient demographics.
After 1 year, the aggregated number of diabetes-related exams per month increased from 89 after the first month of initial deployment to 174 across all sites. Across the diverse practice types, three primary determinants underscored sustainable adoption: (1) inclusion of executive and clinical champions; (2) underlying health center resources; and (3) clinical workflows that contemplate patient identification (pre-visit), LumineticsCore exam capture and provider consult (patient visit), and timely referral triage (post-visit). In addition to regulatory clearance, reimbursement, and quality measures, our review shows that addressing these core determinants of workflow optimization is an essential part of large-scale adoption of innovation. These best practices can be generalizable to other autonomous AI systems in frontline care settings, thereby increasing patient access, improving quality of care, and addressing health disparities.
59
Youssef A, Abramoff M, Char D. Is the Algorithm Good in a Bad World, or Has It Learned to be Bad? The Ethical Challenges of "Locked" Versus "Continuously Learning" and "Autonomous" Versus "Assistive" AI Tools in Healthcare. Am J Bioeth 2023; 23:43-45. [PMID: 37130390; DOI: 10.1080/15265161.2023.2191052]
|
60
Ciccarelli M, Giallauria F, Carrizzo A, Visco V, Silverio A, Cesaro A, Calabrò P, De Luca N, Mancusi C, Masarone D, Pacileo G, Tourkmani N, Vigorito C, Vecchione C. Artificial intelligence in cardiovascular prevention: new ways will open new doors. J Cardiovasc Med (Hagerstown) 2023; 24:e106-e115. [PMID: 37186561; DOI: 10.2459/jcm.0000000000001431]
Abstract
Prevention and effective treatment of cardiovascular disease are challenges that grow in tandem with the rising average age of the world population. Over recent decades, the potential role of artificial intelligence in cardiovascular medicine has been increasingly recognized because of the incredible amount of real-world data (RWD) regarding patient health status and healthcare delivery that can be collated from a variety of sources wherein patient information is routinely collected, including patient registries, clinical case reports, reimbursement claims and billing reports, medical devices, and electronic health records. Like any other health data, RWD can be analysed in accordance with high-quality research methods, and its analysis can deliver valuable patient-centric insights complementing the information obtained from conventional clinical trials. Applying artificial intelligence to RWD has the potential to trace a patient's health trajectory, enabling personalized medicine and tailored treatment. This article reviews the benefits of artificial intelligence in cardiovascular prevention and management, focusing on diagnostic and therapeutic improvements without neglecting the limitations of this new scientific approach.
Affiliation(s)
- Michele Ciccarelli
- Department of Medicine, Surgery and Dentistry, University of Salerno, Baronissi, Italy
- Francesco Giallauria
- Department of Translational Medical Sciences, Federico II University, Naples, Italy
- Albino Carrizzo
- Department of Medicine, Surgery and Dentistry, University of Salerno, Baronissi, Italy
- Vascular Physiopathology Unit, IRCCS Neuromed, Pozzilli, Italy
- Valeria Visco
- Department of Medicine, Surgery and Dentistry, University of Salerno, Baronissi, Italy
- Angelo Silverio
- Department of Medicine, Surgery and Dentistry, University of Salerno, Baronissi, Italy
- Arturo Cesaro
- Department of Translational Medical Sciences, University of Campania 'Luigi Vanvitelli', Naples, Italy
- Paolo Calabrò
- Department of Translational Medical Sciences, University of Campania 'Luigi Vanvitelli', Naples, Italy
- Nicola De Luca
- Department of Advanced Biomedical Sciences, Federico II University, Naples, Italy
- Costantino Mancusi
- Department of Advanced Biomedical Sciences, Federico II University, Naples, Italy
- Daniele Masarone
- Heart Failure Unit, Department of Cardiology, AORN dei Colli-Monaldi Hospital, Naples, Italy
- Giuseppe Pacileo
- Heart Failure Unit, Department of Cardiology, AORN dei Colli-Monaldi Hospital, Naples, Italy
- Nidal Tourkmani
- Cardiology and Cardiac Rehabilitation Unit, 'Mons. Giosuè Calaciura Clinic', Catania, Italy
- ABL, Guangzhou, China
- Carlo Vigorito
- Department of Translational Medical Sciences, Federico II University, Naples, Italy
- Carmine Vecchione
- Department of Medicine, Surgery and Dentistry, University of Salerno, Baronissi, Italy
- Vascular Physiopathology Unit, IRCCS Neuromed, Pozzilli, Italy
61
Rubinger L, Gazendam A, Ekhtiari S, Bhandari M. Machine learning and artificial intelligence in research and healthcare. Injury 2023; 54 Suppl 3:S69-S73. [PMID: 35135685; DOI: 10.1016/j.injury.2022.01.046]
Abstract
Artificial intelligence (AI) is a broad term referring to the application of computational algorithms that can analyze large data sets to classify, predict, or draw useful conclusions. Under the umbrella of AI is machine learning (ML). ML is the process of building or learning statistical models from previously observed real-world data to predict outcomes or categorize observations, based on 'training' provided by humans. These predictions are then applied to future data, while the new data are folded into a perpetually improving and recalibrated statistical model. The future of AI and ML in healthcare research is exciting and expansive. AI and ML are becoming cornerstones of the medical and healthcare-research domains and are integral to our continued processing and capitalization of robust patient electronic medical record (EMR) data. Considerations for the use and application of ML in healthcare settings include assessing the quality of the data inputs and decision-making that serve as the foundations of the ML model, ensuring the end product is interpretable and transparent, and considering ethical concerns throughout the development process. Current and future applications of ML include improving the quality and quantity of data collected from EMRs to improve registry data, using these robust datasets to improve and standardize research protocols and outcomes, clinical decision-making applications, natural language processing, and improving the fundamentals of value-based care, to name only a few.
Affiliation(s)
- Luc Rubinger
- Division of Orthopaedics, Department of Surgery, McMaster University, Hamilton, ON, Canada; Centre for Evidence-Based Orthopaedics, 293 Wellington St. N, Suite 110, Hamilton, ON L8L 8E7, Canada
- Aaron Gazendam
- Division of Orthopaedics, Department of Surgery, McMaster University, Hamilton, ON, Canada; Centre for Evidence-Based Orthopaedics, 293 Wellington St. N, Suite 110, Hamilton, ON L8L 8E7, Canada
- Seper Ekhtiari
- Division of Orthopaedics, Department of Surgery, McMaster University, Hamilton, ON, Canada; Centre for Evidence-Based Orthopaedics, 293 Wellington St. N, Suite 110, Hamilton, ON L8L 8E7, Canada
- Mohit Bhandari
- Division of Orthopaedics, Department of Surgery, McMaster University, Hamilton, ON, Canada; Centre for Evidence-Based Orthopaedics, 293 Wellington St. N, Suite 110, Hamilton, ON L8L 8E7, Canada
62
Cagliero D, Deuitch N, Shah N, Feudtner C, Char D. A framework to identify ethical concerns with ML-guided care workflows: a case study of mortality prediction to guide advance care planning. J Am Med Inform Assoc 2023; 30:819-827. [PMID: 36826400; PMCID: PMC10114055; DOI: 10.1093/jamia/ocad022]
Abstract
OBJECTIVE Identifying ethical concerns with ML applications to healthcare (ML-HCA) before problems arise is now a stated goal of ML design oversight groups and regulatory agencies. The lack of an accepted standard methodology for ethical analysis, however, presents challenges. In this case study, we evaluate the use of a stakeholder "values-collision" approach to identify consequential ethical challenges associated with an ML-HCA for advance care planning (ACP). Identification of ethical challenges could guide revision and improvement of the ML-HCA. MATERIALS AND METHODS We conducted semistructured interviews of the designers, clinician-users, affiliated administrators, and patients, and inductive qualitative analysis of transcribed interviews using modified grounded theory. RESULTS Seventeen stakeholders were interviewed. Five "values-collisions"-where stakeholders disagreed about decisions with ethical implications-were identified: (1) end-of-life workflow and how model output is introduced; (2) which stakeholders receive predictions; (3) benefit-harm trade-offs; (4) whether the ML design team has a fiduciary relationship to patients and clinicians; and (5) whether and how to protect early-deployment research from external pressures, such as news scrutiny, before the research is completed. DISCUSSION From these findings, the ML design team prioritized: (1) alternative workflow implementation strategies; (2) clarification that prediction was only evaluated for ACP need, not other mortality-related ends; and (3) shielding research from scrutiny until endpoint-driven studies were completed. CONCLUSION In this case study, our ethical analysis of this ML-HCA for ACP identified multiple sites of intrastakeholder disagreement that mark areas of ethical and value tension. These findings provided a useful initial ethical screening.
Affiliation(s)
- Diana Cagliero
- Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Natalie Deuitch
- Department of Genetics, Stanford University School of Medicine, Stanford, California, USA
- National Institutes of Health, National Human Genome Research Institute, Bethesda, Maryland, USA
- Nigam Shah
- Center for Biomedical Informatics Research, Stanford University School of Medicine, Palo Alto, California, USA
- Chris Feudtner
- The Department of Medical Ethics, The Children’s Hospital of Philadelphia, Philadelphia, Pennsylvania, USA
- Departments of Pediatrics, Medical Ethics and Healthcare Policy, The Perelman School of Medicine, The University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Danton Char
- Division of Pediatric Cardiac Anesthesia, Department of Anesthesiology, Stanford University School of Medicine, Stanford, California, USA
- Center for Biomedical Ethics, Stanford University School of Medicine, Stanford, California, USA
63
Ursin F, Lindner F, Ropinski T, Salloch S, Timmermann C. Ebenen der Explizierbarkeit für medizinische künstliche Intelligenz: Was brauchen wir normativ und was können wir technisch erreichen? [Levels of explicability for medical artificial intelligence: What do we need normatively and what can we achieve technically?]. Ethik Med 2023. [DOI: 10.1007/s00481-023-00761-x]
Abstract
Definition of the problem
The umbrella term “explicability” refers to the reduction of opacity of artificial intelligence (AI) systems. These efforts are challenging for medical AI applications because higher accuracy often comes at the cost of increased opacity. This entails ethical tensions because physicians and patients desire to trace how results are produced without compromising the performance of AI systems. The centrality of explicability within the informed consent process for medical AI systems compels an ethical reflection on the trade-offs. Which levels of explicability are needed to obtain informed consent when utilizing medical AI?
Arguments
We proceed in five steps: First, we map the terms commonly associated with explicability as described in the ethics and computer science literature, i.e., disclosure, intelligibility, interpretability, and explainability. Second, we conduct a conceptual analysis of the ethical requirements for explicability when it comes to informed consent. Third, we distinguish hurdles for explicability in terms of epistemic and explanatory opacity. Fourth, this allows us to conclude which level of explicability physicians must reach and what patients can expect. In a final step, we show how the identified levels of explicability can technically be met from the perspective of computer science. Throughout our work, we take diagnostic AI systems in radiology as an example.
Conclusion
We determined four levels of explicability that need to be distinguished for ethically defensible informed consent processes and showed how developers of medical AI can technically meet these requirements.
|
64
|
Abstract
Artificial intelligence (AI) applications are an area of active investigation in clinical chemistry. Numerous publications have demonstrated the promise of AI across all phases of testing including preanalytic, analytic, and postanalytic phases; this includes novel methods for detecting common specimen collection errors, predicting laboratory results and diagnoses, and enhancing autoverification workflows. Although AI applications pose several ethical and operational challenges, these technologies are expected to transform the practice of the clinical chemistry laboratory in the near future.
Affiliation(s)
- Dustin R Bunch
- Department of Pathology and Laboratory Medicine, Nationwide Children's Hospital, 700 Children's Drive, C1923, Columbus, OH 43205-2644, USA; Department of Pathology, College of Medicine, The Ohio State University, Columbus, OH 43210, USA
| | - Thomas Js Durant
- Department of Laboratory Medicine, Yale School of Medicine, 55 Park Street, Room PS 502A, New Haven, CT 06510, USA
| | - Joseph W Rudolf
- Department of Pathology, University of Utah School of Medicine, Salt Lake City, UT 84112, USA; ARUP Laboratories, 500 Chipeta Way, MC 115, Salt Lake City, UT 84108, USA.
| |
|
65
|
Gómez-Carrillo A, Paquin V, Dumas G, Kirmayer LJ. Restoring the missing person to personalized medicine and precision psychiatry. Front Neurosci 2023; 17:1041433. [PMID: 36845417 PMCID: PMC9947537 DOI: 10.3389/fnins.2023.1041433] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2022] [Accepted: 01/09/2023] [Indexed: 02/11/2023] Open
Abstract
Precision psychiatry has emerged as part of the shift to personalized medicine and builds on frameworks such as the U.S. National Institute of Mental Health Research Domain Criteria (RDoC), multilevel biological "omics" data and, most recently, computational psychiatry. The shift is prompted by the realization that a one-size-fits-all approach is inadequate to guide clinical care because people differ in ways that are not captured by broad diagnostic categories. One of the first steps in developing this personalized approach to treatment was the use of genetic markers to guide pharmacotherapeutics based on predictions of pharmacological response or non-response, and the potential risk of adverse drug reactions. Advances in technology have made a greater degree of specificity or precision potentially more attainable. To date, however, the search for precision has largely focused on biological parameters. Psychiatric disorders involve multi-level dynamics that require measures of phenomenological, psychological, behavioral, social structural, and cultural dimensions. This points to the need to develop more fine-grained analyses of experience, self-construal, illness narratives, interpersonal interactional dynamics, and social contexts and determinants of health. In this paper, we review the limitations of precision psychiatry, arguing that it cannot reach its goal if it does not include core elements of the processes that give rise to psychopathological states, which include the agency and experience of the person. Drawing from contemporary systems biology, social epidemiology, developmental psychology, and cognitive science, we propose a cultural-ecosocial approach to integrating precision psychiatry with person-centered care.
Affiliation(s)
- Ana Gómez-Carrillo
- Culture, Mind, and Brain Program, Division of Social and Transcultural Psychiatry, Department of Psychiatry, McGill University, Montreal, QC, Canada
- Culture and Mental Health Research Unit, Lady Davis Institute, Jewish General Hospital, Montreal, QC, Canada
| | - Vincent Paquin
- Culture, Mind, and Brain Program, Division of Social and Transcultural Psychiatry, Department of Psychiatry, McGill University, Montreal, QC, Canada
| | - Guillaume Dumas
- Culture, Mind, and Brain Program, Division of Social and Transcultural Psychiatry, Department of Psychiatry, McGill University, Montreal, QC, Canada
- Precision Psychiatry and Social Physiology Laboratory at the CHU Sainte-Justine Research Center, Université de Montréal, Montreal, QC, Canada
- Mila-Quebec Artificial Intelligence Institute, Montreal, QC, Canada
| | - Laurence J Kirmayer
- Culture, Mind, and Brain Program, Division of Social and Transcultural Psychiatry, Department of Psychiatry, McGill University, Montreal, QC, Canada
- Culture and Mental Health Research Unit, Lady Davis Institute, Jewish General Hospital, Montreal, QC, Canada
| |
|
66
|
Dunn T, Cosgun E. A cloud-based pipeline for analysis of FHIR and long-read data. BIOINFORMATICS ADVANCES 2023; 3:vbac095. [PMID: 36726729 PMCID: PMC9872570 DOI: 10.1093/bioadv/vbac095] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/21/2022] [Revised: 11/02/2022] [Accepted: 01/19/2023] [Indexed: 01/22/2023]
Abstract
Motivation As genome sequencing becomes cheaper and more accurate, it is becoming increasingly viable to merge this data with electronic health information to inform clinical decisions. Results In this work, we demonstrate a full pipeline for working with both PacBio sequencing data and clinical FHIR® data, from initial data to tertiary analysis. The electronic health records are stored in FHIR® (Fast Healthcare Interoperability Resources) format, the current leading standard for healthcare data exchange. For the genomic data, we perform variant calling on long-read PacBio HiFi data using Cromwell on Azure. Both data formats are parsed, processed and merged in a single scalable pipeline which securely performs tertiary analyses using cloud-based Jupyter notebooks. We include three example applications: exporting patient information to a database, clustering patients and performing a simple pharmacogenomic study. Availability and implementation https://github.com/microsoft/genomicsnotebook/tree/main/fhirgenomics. Supplementary information Supplementary data are available at Bioinformatics Advances online.
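The core of the tertiary-analysis step described in this abstract is joining tabularized FHIR patient records with per-patient genomic results. The following is a minimal sketch of that merging step, not the authors' actual pipeline: the Bundle contents, patient identifiers, and the `cyp2c19_star2` pharmacogenomic marker are all invented for illustration.

```python
import pandas as pd

# Hypothetical FHIR Bundle holding two Patient resources, standing in for
# an export from a FHIR server; all identifiers are invented.
fhir_bundle = {
    "resourceType": "Bundle",
    "entry": [
        {"resource": {"resourceType": "Patient", "id": "p1",
                      "gender": "female", "birthDate": "1980-04-02"}},
        {"resource": {"resourceType": "Patient", "id": "p2",
                      "gender": "male", "birthDate": "1975-11-20"}},
    ],
}

# Flatten the Patient resources into a table.
patients = pd.DataFrame([
    {"patient_id": e["resource"]["id"],
     "gender": e["resource"].get("gender"),
     "birth_date": e["resource"].get("birthDate")}
    for e in fhir_bundle["entry"]
    if e["resource"]["resourceType"] == "Patient"
])

# Made-up per-patient variant calls, as might come out of variant calling.
variants = pd.DataFrame({
    "patient_id": ["p1", "p2"],
    "cyp2c19_star2": [True, False],  # illustrative pharmacogenomic marker
})

# Join the clinical and genomic views on the shared patient identifier,
# the step that enables tertiary analyses such as a pharmacogenomic study.
merged = patients.merge(variants, on="patient_id", how="inner")
print(merged)
```

Once merged, the combined table can feed analyses like the clustering and pharmacogenomic examples the paper lists.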
Affiliation(s)
- Tim Dunn
- To whom correspondence should be addressed.
| | - Erdal Cosgun
- Biomedical Platforms and Genomics, Microsoft Research, Redmond, WA 98052, USA
| |
|
67
|
Rueda J, Rodríguez JD, Jounou IP, Hortal-Carmona J, Ausín T, Rodríguez-Arias D. "Just" accuracy? Procedural fairness demands explainability in AI-based medical resource allocations. AI & SOCIETY 2022:1-12. [PMID: 36573157 PMCID: PMC9769482 DOI: 10.1007/s00146-022-01614-9] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2022] [Accepted: 12/05/2022] [Indexed: 12/24/2022]
Abstract
The increasing application of artificial intelligence (AI) to healthcare raises both hope and ethical concerns. Some advanced machine learning methods provide accurate clinical predictions at the expense of a significant lack of explainability. Alex John London has argued that accuracy is a more important value than explainability in AI medicine. In this article, we locate the trade-off between accurate performance and explainable algorithms in the context of distributive justice. We acknowledge that accuracy is cardinal from the standpoint of outcome-oriented justice because it helps to maximize patients' benefits and optimizes limited resources. However, we claim that the opaqueness of the algorithmic black box and its lack of explainability threaten core commitments of procedural fairness such as accountability, avoidance of bias, and transparency. To illustrate this, we discuss liver transplantation as a case of critical medical resources in which the lack of explainability in AI-based allocation algorithms is procedurally unfair. Finally, we provide a number of ethical recommendations to consider when using unexplainable algorithms in the distribution of health-related resources.
Affiliation(s)
- Jon Rueda
- Department of Philosophy 1, University of Granada, Granada, Spain
- FiloLab Scientific Unit of Excellence, University of Granada, Granada, Spain
| | | | | | | | - Txetxu Ausín
- Institute of Philosophy, Spanish National Research Council, Madrid, Spain
| | - David Rodríguez-Arias
- Department of Philosophy 1, University of Granada, Granada, Spain
- FiloLab Scientific Unit of Excellence, University of Granada, Granada, Spain
| |
|
68
|
Melekoglu E, Kocabicak U, Uçar MK, Bilgin C, Bozkurt MR, Cunkas M. A new diagnostic method for chronic obstructive pulmonary disease using the photoplethysmography signal and hybrid artificial intelligence. PeerJ Comput Sci 2022; 8:e1188. [PMID: 37346306 PMCID: PMC10280226 DOI: 10.7717/peerj-cs.1188] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2022] [Accepted: 11/22/2022] [Indexed: 06/23/2023]
Abstract
Background and Purpose Chronic obstructive pulmonary disease (COPD) is a major public health problem, globally and in our country, whose burden continues to grow owing to poor awareness of the disease and a lack of necessary preventive measures. COPD results from obstruction of the air sacs (alveoli) within the lungs; it is a persistent illness that causes difficulty in breathing, cough, and shortness of breath. It is characterized by respiratory signs and symptoms and airflow limitation due to abnormalities of the airways and alveoli that develop after significant exposure to harmful particles and gases. The spirometry (breath measurement) test used for diagnosing COPD requires a hospital visit, which is difficult for patients with disabilities or advanced disease and for children. To ease diagnosis and avoid these problems, this study evaluated the use of the photoplethysmography (PPG) signal for diagnosing COPD, with the aim of simplifying and speeding up the diagnostic process and making monitoring more convenient. A PPG signal comprises several components: volumetric changes in arterial blood related to heart activity, fluctuations in venous blood volume that modulate the signal, a direct current (DC) component reflecting the optical properties of the tissues, and small energy changes in the body. PPG is typically acquired with a pulse oximeter, which illuminates the skin and measures changes in light absorption; the waveform produced by each heartbeat is easy to measure. In this study, the PPG signal was modeled with machine learning to predict COPD. Methods The PPG signal was first cleaned of noise, and three additional signals corresponding to its low-frequency bands were derived. Twenty-five features were extracted from each of the four signals, yielding 100 features in total. Weight, height, and age were also used as features. The Fisher method was employed for feature selection with the aim of improving performance. Results The resulting PPG-based prediction models achieved an accuracy of 0.95 for all individuals; combining the classification algorithms with the feature selection algorithm contributed to the performance increase. Conclusion According to these findings, PPG-based COPD prediction models are suitable for use in practice.
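The Fisher feature-selection step named in this abstract can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' implementation; it uses the standard two-class Fisher score, which ranks each feature by its between-class mean separation relative to its summed within-class variance.

```python
import numpy as np

def fisher_scores(X, y):
    """Per-feature Fisher score for a binary label vector y (0/1).

    score_j = (mean1_j - mean0_j)^2 / (var1_j + var0_j):
    features whose class means are far apart relative to their
    within-class spread rank highest.
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    X0, X1 = X[y == 0], X[y == 1]
    num = (X1.mean(axis=0) - X0.mean(axis=0)) ** 2
    den = X1.var(axis=0) + X0.var(axis=0)
    return num / den

# Toy data: feature 0 separates the classes, feature 1 is pure noise.
rng = np.random.default_rng(0)
X = np.column_stack([
    np.concatenate([rng.normal(0.0, 1.0, 50), rng.normal(5.0, 1.0, 50)]),
    rng.normal(0.0, 1.0, 100),
])
y = np.array([0] * 50 + [1] * 50)

scores = fisher_scores(X, y)
top = np.argsort(scores)[::-1]  # feature indices, best first
print(top[0])  # the separating feature ranks first
```

In a pipeline like the one described, the top-ranked features (here drawn from the 100 signal features plus weight, height, and age) would then be passed to the classifier.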
Affiliation(s)
| | - Umit Kocabicak
- Computer Engineering, Sakarya University, Sakarya, Turkey
| | | | - Cahit Bilgin
- Faculty of Medicine, Sakarya University, Sakarya, Turkey
| | | | - Mehmet Cunkas
- Electrical and Electronics Engineering, Selcuk University, Konya, Turkey
| |
|
69
|
Koehle H, Kronk C, Lee YJ. Digital Health Equity: Addressing Power, Usability, and Trust to Strengthen Health Systems. Yearb Med Inform 2022; 31:20-32. [PMID: 36463865 PMCID: PMC9719765 DOI: 10.1055/s-0042-1742512] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/05/2022] Open
Abstract
BACKGROUND Without specific attention to health equity considerations in design, implementation, and evaluation, the rapid expansion of digital health approaches threatens to exacerbate rather than ameliorate existing health disparities. METHODS We explored known factors that increase digital health inequity to contextualize the need for equity-centered informatics. This work used a narrative review method to summarize issues about inequities in digital health and to discuss future directions for researchers and clinicians. We searched literature using a combination of relevant keywords (e.g., "digital health", "health equity", etc.) using PubMed and Google Scholar. RESULTS We have highlighted strategies for addressing medical marginalization in informatics according to vectors of power such as race and ethnicity, gender identity and modality, sexuality, disability, housing status, citizenship status, and criminalization status. CONCLUSIONS We have emphasized collaboration with user and patient groups to define priorities, ensure accessibility and localization, and consider risks in development and utilization of digital health tools. Additionally, we encourage consideration of potential pitfalls in adopting these diversity, equity, and inclusion (DEI)-related strategies.
Affiliation(s)
- Han Koehle
- Student Affairs Health Equity Initiative, University of California Santa Barbara, Santa Barbara, California, USA
| | - Clair Kronk
- Center for Medical Informatics, Yale University School of Medicine, New Haven, Connecticut, USA. Correspondence to: Clair Kronk, Center for Medical Informatics, Yale School of Medicine, 300 George Street, PO Box 208009, New Haven, CT 06520, USA
| | - Young Ji Lee
- School of Nursing, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
| |
|
70
|
Using model explanations to guide deep learning models towards consistent explanations for EHR data. Sci Rep 2022; 12:19899. [PMID: 36400825 PMCID: PMC9674624 DOI: 10.1038/s41598-022-24356-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2022] [Accepted: 11/14/2022] [Indexed: 11/19/2022] Open
Abstract
It has been shown that identical deep learning (DL) architectures will produce distinct explanations when trained with different hyperparameters that are orthogonal to the task (e.g. random seed, training set order). In domains such as healthcare and finance, where transparency and explainability are paramount, this can be a significant barrier to DL adoption. In this study we present a further analysis of explanation (in)consistency on 6 tabular datasets/tasks, with a focus on Electronic Health Records data. We propose a novel deep learning ensemble architecture that trains its sub-models to produce consistent explanations, improving explanation consistency by as much as 315% (e.g. from 0.02433 to 0.1011 on MIMIC-IV), and on average by 124% (e.g. from 0.12282 to 0.4450 on the BCW dataset). We evaluate the effectiveness of our proposed technique and discuss the implications our results have both for industrial applications of DL and explainability and for future methodological work.
|
71
|
Prakash S, Balaji JN, Joshi A, Surapaneni KM. Ethical Conundrums in the Application of Artificial Intelligence (AI) in Healthcare-A Scoping Review of Reviews. J Pers Med 2022; 12:1914. [PMID: 36422090 PMCID: PMC9698424 DOI: 10.3390/jpm12111914] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2022] [Revised: 11/05/2022] [Accepted: 11/14/2022] [Indexed: 09/12/2023] Open
Abstract
BACKGROUND With the availability of extensive health data, artificial intelligence has an inordinate capability to expedite medical explorations and revamp healthcare. Artificial intelligence is set to reform the practice of medicine soon. Despite the mammoth advantages of artificial intelligence in the medical field, there exists inconsistency in the ethical and legal framework for the application of AI in healthcare. Although research has been conducted by various medical disciplines investigating the ethical implications of artificial intelligence in the healthcare setting, the literature lacks a holistic approach. OBJECTIVE The purpose of this review is to ascertain the ethical concerns of AI applications in healthcare, to identify the knowledge gaps and provide recommendations for an ethical and legal framework. METHODOLOGY The electronic databases PubMed and Google Scholar were extensively searched based on the search strategy pertaining to the purpose of this review. Further screening of the included articles was done on the grounds of the inclusion and exclusion criteria. RESULTS The search yielded a total of 1238 articles, out of which 16 articles were identified to be eligible for this review. The selection was strictly based on the inclusion and exclusion criteria mentioned in the manuscript. CONCLUSION Artificial intelligence (AI) is an exceedingly puissant technology, with the prospect of advancing medical practice in the years to come. Nevertheless, AI brings with it a colossally abundant number of ethical and legal problems associated with its application in healthcare. There are manifold stakeholders in the legal and ethical issues revolving around AI and medicine. Thus, a multifaceted approach involving policymakers, developers, healthcare providers and patients is crucial to arrive at a feasible solution for mitigating the legal and ethical problems pertaining to AI in healthcare.
Affiliation(s)
- Sreenidhi Prakash
- Panimalar Medical College Hospital & Research Institute, Varadharajapuram, Poonamallee, Chennai 600 123, Tamil Nadu, India
| | - Jyotsna Needamangalam Balaji
- Panimalar Medical College Hospital & Research Institute, Varadharajapuram, Poonamallee, Chennai 600 123, Tamil Nadu, India
| | - Ashish Joshi
- School of Public Health, The University of Memphis, Memphis, TN 38152, USA
- SMAART Population Health Informatics Intervention Center, Foundation of Healthcare Technologies Society, Panimalar Medical College Hospital & Research Institute, Varadharajapuram, Poonamallee, Chennai 600 123, Tamil Nadu, India
| | - Krishna Mohan Surapaneni
- SMAART Population Health Informatics Intervention Center, Foundation of Healthcare Technologies Society, Panimalar Medical College Hospital & Research Institute, Varadharajapuram, Poonamallee, Chennai 600 123, Tamil Nadu, India
- Bioethics Unit, Panimalar Medical College Hospital & Research Institute, Varadharajapuram, Poonamallee, Chennai 600 123, Tamil Nadu, India
- Departments of Biochemistry, Medical Education, Molecular Virology, Research, Clinical Skills & Simulation, Panimalar Medical College Hospital & Research Institute, Varadharajapuram, Poonamallee, Chennai 600 123, Tamil Nadu, India
| |
|
72
|
Grote T. Randomised controlled trials in medical AI: ethical considerations. JOURNAL OF MEDICAL ETHICS 2022; 48:899-906. [PMID: 33990429 DOI: 10.1136/medethics-2020-107166] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/14/2020] [Revised: 03/30/2021] [Accepted: 04/08/2021] [Indexed: 06/12/2023]
Abstract
In recent years, there has been a surge of high-profile publications on applications of artificial intelligence (AI) systems for medical diagnosis and prognosis. While AI provides various opportunities for medical practice, there is an emerging consensus that the existing studies show considerable deficits and are unable to establish the clinical benefit of AI systems. Hence, the view that the clinical benefit of AI systems needs to be studied in clinical trials-particularly randomised controlled trials (RCTs)-is gaining ground. However, an issue that has been overlooked so far in the debate is that, compared with drug RCTs, AI RCTs require methodological adjustments, which entail ethical challenges. This paper sets out to develop a systematic account of the ethics of AI RCTs by focusing on the moral principles of clinical equipoise, informed consent and fairness. This way, the objective is to animate further debate on the (research) ethics of medical AI.
Affiliation(s)
- Thomas Grote
- Ethics and Philosophy Lab, Cluster of Excellence "Machine Learning: New Perspectives for Science", University of Tübingen, Tübingen D-72076, Germany
| |
|
73
|
Chen H, Gomez C, Huang CM, Unberath M. Explainable medical imaging AI needs human-centered design: guidelines and evidence from a systematic review. NPJ Digit Med 2022; 5:156. [PMID: 36261476 PMCID: PMC9581990 DOI: 10.1038/s41746-022-00699-2] [Citation(s) in RCA: 69] [Impact Index Per Article: 23.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2022] [Accepted: 09/29/2022] [Indexed: 11/16/2022] Open
Abstract
Transparency in Machine Learning (ML), often also referred to as interpretability or explainability, attempts to reveal the working mechanisms of complex models. From a human-centered design perspective, transparency is not a property of the ML model but an affordance, i.e., a relationship between algorithm and users. Thus, prototyping and user evaluations are critical to attaining solutions that afford transparency. Following human-centered design principles in highly specialized and high stakes domains, such as medical image analysis, is challenging due to the limited access to end users and the knowledge imbalance between those users and ML designers. To investigate the state of transparent ML in medical image analysis, we conducted a systematic review of the literature from 2012 to 2021 in PubMed, EMBASE, and Compendex databases. We identified 2508 records, of which 68 articles met the inclusion criteria. Current techniques in transparent ML are dominated by computational feasibility and barely consider end users, e.g., clinical stakeholders. Despite the different roles and knowledge of ML developers and end users, no study reported formative user research to inform the design and development of transparent ML models. Only a few studies validated transparency claims through empirical user evaluations. These shortcomings put contemporary research on transparent ML at risk of being incomprehensible to users, and thus, clinically irrelevant. To alleviate these shortcomings in forthcoming research, we introduce the INTRPRT guideline, a design directive for transparent ML systems in medical image analysis. The INTRPRT guideline suggests human-centered design principles, recommending formative user research as the first step to understand user needs and domain requirements. Following these guidelines increases the likelihood that the algorithms afford transparency and enable stakeholders to capitalize on the benefits of transparent ML.
Affiliation(s)
- Haomin Chen
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
| | - Catalina Gomez
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
| | - Chien-Ming Huang
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
| | - Mathias Unberath
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA.
| |
|
74
|
Rasheed K, Qayyum A, Ghaly M, Al-Fuqaha A, Razi A, Qadir J. Explainable, trustworthy, and ethical machine learning for healthcare: A survey. Comput Biol Med 2022; 149:106043. [PMID: 36115302 DOI: 10.1016/j.compbiomed.2022.106043] [Citation(s) in RCA: 62] [Impact Index Per Article: 20.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2022] [Revised: 08/15/2022] [Accepted: 08/20/2022] [Indexed: 12/18/2022]
Abstract
With the advent of machine learning (ML) and deep learning (DL) empowered applications for critical domains like healthcare, questions about liability, trust, and the interpretability of their outputs are being raised. The black-box nature of various DL models is a roadblock to clinical utilization. Therefore, to gain the trust of clinicians and patients, we need to provide explanations for the decisions of models. With the promise of enhancing the trust and transparency of black-box models, researchers are working to mature the field of eXplainable ML (XML). In this paper, we provide a comprehensive review of explainable and interpretable ML techniques for various healthcare applications. Along with highlighting the security, safety, and robustness challenges that hinder the trustworthiness of ML, we also discuss the ethical issues arising from the use of ML/DL in healthcare and describe how explainable and trustworthy ML can help resolve them. Finally, we elaborate on the limitations of existing approaches and highlight various open research problems that require further development.
Affiliation(s)
- Khansa Rasheed
- IHSAN Lab, Information Technology University of the Punjab (ITU), Lahore, Pakistan.
| | - Adnan Qayyum
- IHSAN Lab, Information Technology University of the Punjab (ITU), Lahore, Pakistan.
| | - Mohammed Ghaly
- Research Center for Islamic Legislation and Ethics (CILE), College of Islamic Studies, Hamad Bin Khalifa University (HBKU), Doha, Qatar.
| | - Ala Al-Fuqaha
- Information and Computing Technology Division, College of Science and Engineering, Hamad Bin Khalifa University (HBKU), Doha, Qatar.
| | - Adeel Razi
- Turner Institute for Brain and Mental Health, Monash University, Clayton, Australia; Monash Biomedical Imaging, Monash University, Clayton, Australia; Wellcome Centre for Human Neuroimaging, UCL, London, United Kingdom; CIFAR Azrieli Global Scholars program, CIFAR, Toronto, Canada.
| | - Junaid Qadir
- Department of Computer Science and Engineering, College of Engineering, Qatar University, Doha, Qatar.
| |
|
75
|
Drabiak K. Leveraging law and ethics to promote safe and reliable AI/ML in healthcare. FRONTIERS IN NUCLEAR MEDICINE (LAUSANNE, SWITZERLAND) 2022; 2:983340. [PMID: 39354991 PMCID: PMC11440832 DOI: 10.3389/fnume.2022.983340] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/30/2022] [Accepted: 09/12/2022] [Indexed: 10/03/2024]
Abstract
Artificial intelligence and machine learning (AI/ML) is poised to disrupt the structure and delivery of healthcare, promising to optimize clinical care delivery and information management. AI/ML offers potential benefits in healthcare, such as creating novel clinical decision support tools, pattern recognition software, and predictive modeling systems. This raises questions about how AI/ML will impact the physician-patient relationship and the practice of medicine. Effective utilization and reliance on AI/ML also requires that these technologies are safe and reliable. Potential errors could not only pose serious risks to patient safety, but also expose physicians, hospitals, and AI/ML manufacturers to liability. This review describes how the law provides a mechanism to promote safety and reliability of AI/ML systems. On the front end, the Food and Drug Administration (FDA) intends to regulate many AI/ML as medical devices, which corresponds to a set of regulatory requirements prior to product marketing and use. Post-development, a variety of mechanisms in the law provide guardrails for careful deployment into clinical practice that can also incentivize product improvement. This review provides an overview of potential areas of liability arising from AI/ML including malpractice, informed consent, corporate liability, and products liability. Finally, this review summarizes strategies to minimize risk and promote safe and reliable AI/ML.
Affiliation(s)
- Katherine Drabiak
- College of Public Health, University of South Florida, Tampa, FL, United States
| |
|
76
|
González-Gonzalo C, Thee EF, Klaver CCW, Lee AY, Schlingemann RO, Tufail A, Verbraak F, Sánchez CI. Trustworthy AI: Closing the gap between development and integration of AI systems in ophthalmic practice. Prog Retin Eye Res 2022; 90:101034. [PMID: 34902546 PMCID: PMC11696120 DOI: 10.1016/j.preteyeres.2021.101034] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2021] [Revised: 12/03/2021] [Accepted: 12/06/2021] [Indexed: 01/14/2023]
Abstract
An increasing number of artificial intelligence (AI) systems are being proposed in ophthalmology, motivated by the variety and amount of clinical and imaging data, as well as their potential benefits at the different stages of patient care. Despite achieving close or even superior performance to that of experts, there is a critical gap between development and integration of AI systems in ophthalmic practice. This work focuses on the importance of trustworthy AI to close that gap. We identify the main aspects or challenges that need to be considered along the AI design pipeline so as to generate systems that meet the requirements to be deemed trustworthy, including those concerning accuracy, resiliency, reliability, safety, and accountability. We elaborate on mechanisms and considerations to address those aspects or challenges, and define the roles and responsibilities of the different stakeholders involved in AI for ophthalmic care, i.e., AI developers, reading centers, healthcare providers, healthcare institutions, ophthalmological societies and working groups or committees, patients, regulatory bodies, and payers. Generating trustworthy AI is not a responsibility of a sole stakeholder. There is an impending necessity for a collaborative approach where the different stakeholders are represented along the AI design pipeline, from the definition of the intended use to post-market surveillance after regulatory approval. This work contributes to establish such multi-stakeholder interaction and the main action points to be taken so that the potential benefits of AI reach real-world ophthalmic settings.
Affiliation(s)
- Cristina González-Gonzalo
- Eye Lab, qurAI Group, Informatics Institute, University of Amsterdam, Amsterdam, the Netherlands; Diagnostic Image Analysis Group, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, the Netherlands
- Eric F Thee
- Department of Ophthalmology, Erasmus Medical Center, Rotterdam, the Netherlands; Department of Epidemiology, Erasmus Medical Center, Rotterdam, the Netherlands
- Caroline C W Klaver
- Department of Ophthalmology, Erasmus Medical Center, Rotterdam, the Netherlands; Department of Epidemiology, Erasmus Medical Center, Rotterdam, the Netherlands; Department of Ophthalmology, Radboud University Medical Center, Nijmegen, the Netherlands; Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
- Aaron Y Lee
- Department of Ophthalmology, School of Medicine, University of Washington, Seattle, WA, USA
- Reinier O Schlingemann
- Department of Ophthalmology, Amsterdam University Medical Center, Amsterdam, the Netherlands; Department of Ophthalmology, University of Lausanne, Jules Gonin Eye Hospital, Fondation Asile des Aveugles, Lausanne, Switzerland
- Adnan Tufail
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom; Institute of Ophthalmology, University College London, London, United Kingdom
- Frank Verbraak
- Department of Ophthalmology, Amsterdam University Medical Center, Amsterdam, the Netherlands
- Clara I Sánchez
- Eye Lab, qurAI Group, Informatics Institute, University of Amsterdam, Amsterdam, the Netherlands; Department of Biomedical Engineering and Physics, Amsterdam University Medical Center, Amsterdam, the Netherlands
77
Lam K, Abràmoff MD, Balibrea JM, Bishop SM, Brady RR, Callcut RA, Chand M, Collins JW, Diener MK, Eisenmann M, Fermont K, Neto MG, Hager GD, Hinchliffe RJ, Horgan A, Jannin P, Langerman A, Logishetty K, Mahadik A, Maier-Hein L, Antona EM, Mascagni P, Mathew RK, Müller-Stich BP, Neumuth T, Nickel F, Park A, Pellino G, Rudzicz F, Shah S, Slack M, Smith MJ, Soomro N, Speidel S, Stoyanov D, Tilney HS, Wagner M, Darzi A, Kinross JM, Purkayastha S. A Delphi consensus statement for digital surgery. NPJ Digit Med 2022; 5:100. [PMID: 35854145 PMCID: PMC9296639 DOI: 10.1038/s41746-022-00641-6]
Abstract
The use of digital technology is increasing rapidly across surgical specialities, yet there is no consensus for the term 'digital surgery'. This is critical as digital health technologies present technical, governance, and legal challenges which are unique to the surgeon and surgical patient. We aim to define the term digital surgery and the ethical issues surrounding its clinical application, and to identify barriers and research goals for future practice. Thirty-eight international experts across the fields of surgery, AI, industry, law, ethics, and policy participated in a four-round Delphi exercise. Issues were generated by an expert panel and public panel through a scoping questionnaire around key themes identified from the literature and voted upon in two subsequent questionnaire rounds. Consensus was defined as >70% of the panel rating a statement important and <30% rating it unimportant. A final online meeting was held to discuss consensus statements. The definition of digital surgery as the use of technology for the enhancement of preoperative planning, surgical performance, therapeutic support, or training, to improve outcomes and reduce harm achieved 100% consensus agreement. We highlight key ethical issues concerning data, privacy, confidentiality and public trust, consent, law, litigation and liability, and commercial partnerships within digital surgery and identify barriers and research goals for future practice. Developers and users of digital surgery must not only have an awareness of the ethical issues surrounding digital applications in healthcare, but also the ethical considerations unique to digital surgery. Future research into these issues must involve all digital surgery stakeholders including patients.
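The quantitative consensus rule described in this abstract (>70% of the panel rating a statement important, <30% rating it unimportant) can be sketched as a small check. This is illustrative only; the function name and vote encoding are not from the paper.

```python
def reaches_consensus(votes, important_threshold=0.70, unimportant_ceiling=0.30):
    """Apply the Delphi consensus rule: a statement passes when more than
    70% of panellists rate it important AND fewer than 30% rate it
    unimportant. `votes` maps each rating label to its count."""
    total = sum(votes.values())
    frac_important = votes.get("important", 0) / total
    frac_unimportant = votes.get("unimportant", 0) / total
    return frac_important > important_threshold and frac_unimportant < unimportant_ceiling

# 38 panellists: 35 important, 2 neutral, 1 unimportant
print(reaches_consensus({"important": 35, "neutral": 2, "unimportant": 1}))  # True
```

Note that both conditions must hold: a statement with many "important" votes still fails if 30% or more of the panel rates it unimportant.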
Affiliation(s)
- Kyle Lam
- Department of Surgery and Cancer, Imperial College, London, UK
- Institute of Global Health Innovation, Imperial College London, London, UK
- Michael D Abràmoff
- Department of Ophthalmology and Visual Sciences, University of Iowa, Iowa City, IA, USA
- Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA, USA
- José M Balibrea
- Department of Gastrointestinal Surgery, Hospital Clínic de Barcelona, Barcelona, Spain
- Universitat de Barcelona, Barcelona, Spain
- Richard R Brady
- Newcastle Centre for Bowel Disease Research Hub, Newcastle University, Newcastle, UK
- Department of Colorectal Surgery, Newcastle Hospitals, Newcastle, UK
- Manish Chand
- Department of Surgery and Interventional Sciences, University College London, London, UK
- Justin W Collins
- CMR Surgical Limited, Cambridge, UK
- Department of Surgery and Interventional Sciences, University College London, London, UK
- Markus K Diener
- Department of General and Visceral Surgery, University of Freiburg, Freiburg im Breisgau, Germany
- Faculty of Medicine, University of Freiburg, Freiburg im Breisgau, Germany
- Matthias Eisenmann
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Kelly Fermont
- Solicitor of the Senior Courts of England and Wales, Independent Researcher, Bristol, UK
- Manoel Galvao Neto
- Endovitta Institute, Sao Paulo, Brazil
- FMABC Medical School, Santo Andre, Brazil
- Gregory D Hager
- The Malone Center for Engineering in Healthcare, The Johns Hopkins University, Baltimore, MD, USA
- Department of Computer Science, The Johns Hopkins University, Baltimore, MD, USA
- Alan Horgan
- Department of Colorectal Surgery, Newcastle Hospitals, Newcastle, UK
- Pierre Jannin
- LTSI, Inserm UMR 1099, University of Rennes 1, Rennes, France
- Alexander Langerman
- Otolaryngology, Head & Neck Surgery and Radiology & Radiological Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- International Centre for Surgical Safety, Li Ka Shing Knowledge Institute, St. Michael's Hospital, University of Toronto, Toronto, ON, Canada
- Lena Maier-Hein
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany
- Medical Faculty, Heidelberg University, Heidelberg, Germany
- LKSK Institute of St. Michael's Hospital, Toronto, ON, Canada
- Pietro Mascagni
- Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- IHU-Strasbourg, Institute of Image-Guided Surgery, Strasbourg, France
- ICube, University of Strasbourg, Strasbourg, France
- Ryan K Mathew
- School of Medicine, University of Leeds, Leeds, UK
- Department of Neurosurgery, Leeds Teaching Hospitals NHS Trust, Leeds, UK
- Beat P Müller-Stich
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- National Center for Tumor Diseases, Heidelberg, Germany
- Thomas Neumuth
- Innovation Center Computer Assisted Surgery (ICCAS), Universität Leipzig, Leipzig, Germany
- Felix Nickel
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Adrian Park
- Department of Surgery, Anne Arundel Medical Center, School of Medicine, Johns Hopkins University, Annapolis, MD, USA
- Gianluca Pellino
- Department of Advanced Medical and Surgical Sciences, Università degli Studi della Campania "Luigi Vanvitelli", Naples, Italy
- Colorectal Surgery, Vall d'Hebron University Hospital, Barcelona, Spain
- Frank Rudzicz
- Department of Computer Science, University of Toronto, Toronto, ON, Canada
- Vector Institute for Artificial Intelligence, Toronto, ON, Canada
- Unity Health Toronto, Toronto, ON, Canada
- Surgical Safety Technologies Inc, Toronto, ON, Canada
- Sam Shah
- Faculty of Future Health, College of Medicine and Dentistry, Ulster University, Birmingham, UK
- Mark Slack
- CMR Surgical Limited, Cambridge, UK
- Department of Urogynaecology, Addenbrooke's Hospital, Cambridge, UK
- University of Cambridge, Cambridge, UK
- Myles J Smith
- The Royal Marsden Hospital, London, UK
- Institute of Cancer Research, London, UK
- Naeem Soomro
- Department of Urology, Newcastle Upon Tyne Hospitals NHS Foundation Trust, Newcastle upon Tyne, UK
- Stefanie Speidel
- Division of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC), Dresden, Germany
- Centre for Tactile Internet with Human-in-the-Loop (CeTI), TU Dresden, Dresden, Germany
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Henry S Tilney
- Department of Surgery and Cancer, Imperial College, London, UK
- Department of Colorectal Surgery, Frimley Health NHS Foundation Trust, Frimley, UK
- Martin Wagner
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- National Center for Tumor Diseases, Heidelberg, Germany
- Ara Darzi
- Department of Surgery and Cancer, Imperial College, London, UK
- Institute of Global Health Innovation, Imperial College London, London, UK
- James M Kinross
- Department of Surgery and Cancer, Imperial College, London, UK
78
Char D. Important Design Questions for Algorithmic Ethics Consultation. Am J Bioeth 2022; 22:38-40. [PMID: 35737487 DOI: 10.1080/15265161.2022.2075054]
79
Meier LJ, Hein A, Diepold K, Buyx A. Algorithms for Ethical Decision-Making in the Clinic: A Proof of Concept. Am J Bioeth 2022; 22:4-20. [PMID: 35293841 DOI: 10.1080/15265161.2022.2040647]
Abstract
Machine intelligence already helps medical staff with a number of tasks. Ethical decision-making, however, has not been handed over to computers. In this proof-of-concept study, we show how an algorithm based on Beauchamp and Childress' prima-facie principles could be employed to advise on a range of moral dilemma situations that occur in medical institutions. We explain why we chose fuzzy cognitive maps to set up the advisory system and how we utilized machine learning to train it. We report on the difficult task of operationalizing the principles of beneficence, non-maleficence and patient autonomy, and describe how we selected suitable input parameters that we extracted from a training dataset of clinical cases. The first performance results are promising, but an algorithmic approach to ethics also comes with several weaknesses and limitations. Should one really entrust the sensitive domain of clinical ethics to machine intelligence?
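The fuzzy cognitive maps the authors build on can be hinted at with a minimal synchronous update step: each concept's next activation is a squashed weighted sum of all current activations. The concept labels and weights below are illustrative assumptions, not the paper's trained model.

```python
import math

def fcm_step(activations, weights):
    """One synchronous update of a fuzzy cognitive map.
    weights[i][j] is the causal influence of concept i on concept j;
    the sigmoid keeps activations in (0, 1)."""
    n = len(activations)
    sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
    return [
        sigmoid(sum(weights[i][j] * activations[i] for i in range(n)))
        for j in range(n)
    ]

# Toy map (hypothetical): "patient autonomy" reinforces "respect treatment
# refusal" while "beneficence" inhibits it.
a = [1.0, 0.8, 0.0]
w = [[0.0, 0.0, 0.9],    # autonomy  -> refusal
     [0.0, 0.0, -0.4],   # beneficence -> refusal
     [0.0, 0.0, 0.0]]
print([round(v, 3) for v in fcm_step(a, w)])
```

In the paper's setting, the map would be iterated to a fixed point and the weights learned from a training set of clinical cases rather than hand-set as here.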
Affiliation(s)
- Lukas J Meier
- Technical University of Munich
- University of Cambridge
80
Levy JJ, Lima JF, Miller MW, Freed GL, O'Malley AJ, Emeny RT. Machine Learning Approaches for Hospital Acquired Pressure Injuries: A Retrospective Study of Electronic Medical Records. Front Med Technol 2022; 4:926667. [PMID: 35782577 PMCID: PMC9243224 DOI: 10.3389/fmedt.2022.926667]
Abstract
Background Many machine learning heuristics integrate well with Electronic Medical Record (EMR) systems yet often fail to surpass traditional statistical models for biomedical applications. Objective We sought to compare predictive performances of 12 machine learning and traditional statistical techniques to predict the occurrence of Hospital Acquired Pressure Injuries (HAPI). Methods EMR information was collected from 57,227 hospitalizations acquired from Dartmouth Hitchcock Medical Center (April 2011 to December 2016). Twelve classification algorithms, chosen based upon classic regression and recent machine learning techniques, were trained to predict HAPI incidence and performance was assessed using the Area Under the Receiver Operating Characteristic Curve (AUC). Results Logistic regression achieved a performance (AUC = 0.91 ± 0.034) comparable to the other machine learning approaches. We report discordance between machine learning derived predictors compared to the traditional statistical model. We visually assessed important patient-specific factors through Shapley Additive Explanations. Conclusions Machine learning models will continue to inform clinical decision-making processes but should be compared to traditional modeling approaches to ensure proper utilization. Disagreements between important predictors found by traditional and machine learning modeling approaches can potentially confuse clinicians and need to be reconciled. These developments represent important steps forward in developing real-time predictive models that can be integrated into EMR systems to reduce unnecessary harm.
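The AUC metric the study uses to compare its twelve classifiers can be computed directly from scores and labels, without any library, via the Mann-Whitney interpretation: the probability that a randomly chosen positive case is scored above a randomly chosen negative one. The data below are toy values, not the study's EMR records.

```python
def auc_from_scores(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    fraction of (positive, negative) pairs where the positive case
    receives the higher score (ties count one half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos for n in neg
    )
    return wins / (len(pos) * len(neg))

# Toy risk scores for four hospitalizations (1 = pressure injury occurred)
print(auc_from_scores([0.9, 0.8, 0.3, 0.1], [1, 0, 1, 0]))  # 0.75
```

The same quantity underlies the study's reported AUC = 0.91 for logistic regression; comparing models on this single scalar is what allows the "comparable performance" claim across classical and ML approaches.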
Affiliation(s)
- Joshua J. Levy
- Department of Epidemiology, Geisel School of Medicine at Dartmouth, Hanover, NH, United States
- Department of Pathology, Dartmouth Hitchcock Medical Center, Lebanon, NH, United States
- Quantitative Biomedical Sciences, Geisel School of Medicine at Dartmouth, Hanover, NH, United States
- Jorge F. Lima
- Quantitative Biomedical Sciences, Geisel School of Medicine at Dartmouth, Hanover, NH, United States
- Megan W. Miller
- Department of Wound Care Services, Dartmouth Hitchcock Medical Center, Lebanon, NH, United States
- Gary L. Freed
- Department of Wound Care Services, Dartmouth Hitchcock Medical Center, Lebanon, NH, United States
- Department of Plastic Surgery, Dartmouth Hitchcock Medical Center, Lebanon, NH, United States
- A. James O'Malley
- Department of Biomedical Data Science, Geisel School of Medicine at Dartmouth, Hanover, NH, United States
- The Dartmouth Institute for Health Policy and Clinical Practice, Geisel School of Medicine at Dartmouth, Hanover, NH, United States
- Rebecca T. Emeny
- The Dartmouth Institute for Health Policy and Clinical Practice, Geisel School of Medicine at Dartmouth, Hanover, NH, United States
81
Abràmoff MD, Roehrenbeck C, Trujillo S, Goldstein J, Graves AS, Repka MX, Silva EZ III. A reimbursement framework for artificial intelligence in healthcare. NPJ Digit Med 2022; 5:72. [PMID: 35681002 PMCID: PMC9184542 DOI: 10.1038/s41746-022-00621-w]
Affiliation(s)
- Michael D Abràmoff
- Department of Ophthalmology and Visual Sciences, University of Iowa, Iowa City, IA, USA
- AI Healthcare Coalition, Washington, DC, USA
- Digital Diagnostics, Coralville, IA, USA
- Cybil Roehrenbeck
- AI Healthcare Coalition, Washington, DC, USA
- Hogan Lovells LLP, Washington, DC, USA
- Michael X Repka
- Wilmer Eye Institute, Johns Hopkins University, Baltimore, MD, USA
- Ezequiel Zeke Silva III
- South Texas Radiology, San Antonio, TX, USA
- University of Texas Health, Long School of Medicine, San Antonio, TX, USA
82
Willem T, Krammer S, Böhm A, French LE, Hartmann D, Lasser T, Buyx A. Risks and benefits of dermatological machine learning healthcare applications – an overview and ethical analysis. J Eur Acad Dermatol Venereol 2022; 36:1660-1668. [DOI: 10.1111/jdv.18192]
Affiliation(s)
- Theresa Willem
- Institute of History and Ethics in Medicine, Technical University of Munich School of Medicine, Munich, Germany
- Department of Science, Technology and Society (STS), Technical University of Munich School of Social Sciences and Technology, Munich, Germany
- Sebastian Krammer
- Department of Dermatology and Allergology, Ludwig Maximilian University of Munich, Munich, Germany
- Anne‐Sophie Böhm
- Department of Dermatology and Allergology, Ludwig Maximilian University of Munich, Munich, Germany
- Lars E. French
- Department of Dermatology and Allergology, Ludwig Maximilian University of Munich, Munich, Germany
- Dr. Philip Frost Department of Dermatology and Cutaneous Surgery, University of Miami Miller School of Medicine, Miami, FL, USA
- Daniela Hartmann
- Department of Dermatology and Allergology, Ludwig Maximilian University of Munich, Munich, Germany
- Tobias Lasser
- Department of Informatics, Technical University of Munich School of Computation, Information and Technology, Munich, Germany
- Institute of Biomedical Engineering, Technical University of Munich, Munich, Germany
- Alena Buyx
- Institute of History and Ethics in Medicine, Technical University of Munich School of Medicine, Munich, Germany
83
Char D. Challenges of Local Ethics Review in a Global Healthcare AI Market. Am J Bioeth 2022; 22:39-41. [PMID: 35475961 DOI: 10.1080/15265161.2022.2055214]
84
Gundersen T, Bærøe K. The Future Ethics of Artificial Intelligence in Medicine: Making Sense of Collaborative Models. Sci Eng Ethics 2022; 28:17. [PMID: 35362822 PMCID: PMC8975759 DOI: 10.1007/s11948-022-00369-2]
Abstract
This article examines the role of medical doctors, AI designers, and other stakeholders in making applied AI and machine learning ethically acceptable on the general premises of shared decision-making in medicine. Recent policy documents such as the EU strategy on trustworthy AI and the research literature have often suggested that AI could be made ethically acceptable by increased collaboration between developers and other stakeholders. The article articulates and examines four central alternative models of how AI can be designed and applied in patient care, which we call the ordinary evidence model, the ethical design model, the collaborative model, and the public deliberation model. We argue that the collaborative model is the most promising for covering most AI technology, while the public deliberation model is called for when the technology is recognized as fundamentally transforming the conditions for ethical shared decision-making.
Affiliation(s)
- Torbjørn Gundersen
- Centre for the Study of Professions, Oslo Metropolitan University, Oslo, Norway
- Kristine Bærøe
- Department of Global Public Health and Primary Care, University of Bergen, Bergen, Norway
85
Vearrier L, Derse AR, Basford JB, Larkin GL, Moskop JC. Artificial Intelligence in Emergency Medicine: Benefits, Risks, and Recommendations. J Emerg Med 2022; 62:492-499. [PMID: 35164977 DOI: 10.1016/j.jemermed.2022.01.001]
Abstract
BACKGROUND Artificial intelligence (AI) can be described as the use of computers to perform tasks that formerly required human cognition. The American Medical Association prefers the term 'augmented intelligence' over 'artificial intelligence' to emphasize the assistive role of computers in enhancing physician skills as opposed to replacing them. The integration of AI into emergency medicine, and clinical practice at large, has increased in recent years, and that trend is likely to continue. DISCUSSION AI has demonstrated substantial potential benefit for physicians and patients. These benefits are transforming the therapeutic relationship from the traditional physician-patient dyad into a triadic doctor-patient-machine relationship. New AI technologies, however, require careful vetting, legal standards, patient safeguards, and provider education. Emergency physicians (EPs) should recognize the limits and risks of AI as well as its potential benefits. CONCLUSIONS EPs must learn to partner with, not capitulate to, AI. AI has proven to be superior to, or on a par with, certain physician skills, such as interpreting radiographs and making diagnoses based on visual cues, such as skin cancer. AI can provide cognitive assistance, but EPs must interpret AI results within the clinical context of individual patients. They must also advocate for patient confidentiality, professional liability coverage, and the essential role of specialty-trained EPs.
Affiliation(s)
- Laura Vearrier
- Department of Emergency Medicine, University of Mississippi Medical Center, Jackson, Mississippi
- Arthur R Derse
- Center for Bioethics, Medical Humanities, and Department of Emergency Medicine, Medical College of Wisconsin, Wauwatosa, Wisconsin
- Jesse B Basford
- Departments of Family and Emergency Medicine, Alabama College of Osteopathic Medicine, Dothan, Alabama
- Gregory Luke Larkin
- Department of Emergency Medicine, Northeast Ohio Medical University, Rootstown, Ohio
- John C Moskop
- Department of Internal Medicine, Wake Forest School of Medicine, Winston-Salem, North Carolina
86
Naik N, Hameed BMZ, Shetty DK, Swain D, Shah M, Paul R, Aggarwal K, Ibrahim S, Patil V, Smriti K, Shetty S, Rai BP, Chlosta P, Somani BK. Legal and Ethical Consideration in Artificial Intelligence in Healthcare: Who Takes Responsibility? Front Surg 2022; 9:862322. [PMID: 35360424 PMCID: PMC8963864 DOI: 10.3389/fsurg.2022.862322]
Abstract
The legal and ethical issues that artificial intelligence (AI) raises for society include privacy and surveillance, bias or discrimination, and, perhaps the deepest philosophical challenge, the role of human judgment. Its use has raised concerns that newer digital technologies may become a new source of inaccuracy and data breaches. In healthcare, mistakes in a procedure or protocol can have devastating consequences for the patient who is the victim of the error; it is crucial to remember that patients come into contact with physicians at the most vulnerable moments of their lives. Currently, there are no well-defined regulations in place to address the legal and ethical issues that may arise from the use of artificial intelligence in healthcare settings. This review addresses these pertinent issues, highlighting the need for algorithmic transparency, privacy, protection of all the beneficiaries involved, and cybersecurity of associated vulnerabilities.
Affiliation(s)
- Nithesh Naik
- Department of Mechanical and Manufacturing Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, India
- International Training and Research in Uro-Oncology and Endourology Group, Manipal, India
- B. M. Zeeshan Hameed
- International Training and Research in Uro-Oncology and Endourology Group, Manipal, India
- Department of Urology, Father Muller Medical College, Mangalore, India
- Dasharathraj K. Shetty
- Department of Humanities and Management, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, India
- Dishant Swain
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, India
- Milap Shah
- International Training and Research in Uro-Oncology and Endourology Group, Manipal, India
- Robotics and Urooncology, Max Hospital and Max Institute of Cancer Care, New Delhi, India
- Rahul Paul
- Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States
- Kaivalya Aggarwal
- Department of Computer Science and Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, India
- Sufyan Ibrahim
- International Training and Research in Uro-Oncology and Endourology Group, Manipal, India
- Kasturba Medical College, Manipal Academy of Higher Education, Manipal, India
- Vathsala Patil
- Department of Oral Medicine and Radiology, Manipal College of Dental Sciences, Manipal, Manipal Academy of Higher Education, Manipal, India
- Komal Smriti
- Department of Oral Medicine and Radiology, Manipal College of Dental Sciences, Manipal, Manipal Academy of Higher Education, Manipal, India
- Suyog Shetty
- Department of Urology, Father Muller Medical College, Mangalore, India
- Bhavan Prasad Rai
- International Training and Research in Uro-Oncology and Endourology Group, Manipal, India
- Department of Urology, Freeman Hospital, Newcastle upon Tyne, United Kingdom
- Piotr Chlosta
- Department of Urology, Jagiellonian University in Krakow, Kraków, Poland
- Bhaskar K. Somani
- International Training and Research in Uro-Oncology and Endourology Group, Manipal, India
- Department of Urology, University Hospital Southampton National Health Service (NHS) Trust, Southampton, United Kingdom
87
Singh D, Nagaraj S, Mashouri P, Drysdale E, Fischer J, Goldenberg A, Brudno M. Assessment of Machine Learning-Based Medical Directives to Expedite Care in Pediatric Emergency Medicine. JAMA Netw Open 2022; 5:e222599. [PMID: 35294539 PMCID: PMC8928004 DOI: 10.1001/jamanetworkopen.2022.2599]
Abstract
Importance Increased wait times and long lengths of stay in emergency departments (EDs) are associated with poor patient outcomes. Systems to improve ED efficiency would be useful. Specifically, minimizing the time to diagnosis by developing novel workflows that expedite test ordering can help accelerate clinical decision-making. Objective To explore the use of machine learning-based medical directives (MLMDs) to automate diagnostic testing at triage for patients with common pediatric ED diagnoses. Design, Setting, and Participants Machine learning models trained on retrospective electronic health record data were evaluated in a decision analytical model study conducted at the ED of the Hospital for Sick Children Toronto, Canada. Data were collected on all patients aged 0 to 18 years presenting to the ED from July 1, 2018, to June 30, 2019 (77 219 total patient visits). Exposure Machine learning models were trained to predict the need for urinary dipstick testing, electrocardiogram, abdominal ultrasonography, testicular ultrasonography, bilirubin level testing, and forearm radiographs. Main Outcomes and Measures Models were evaluated using area under the receiver operator curve, true-positive rate, false-positive rate, and positive predictive values. Model decision thresholds were determined to limit the total number of false-positive results and achieve high positive predictive values. The time difference between patient triage completion and test ordering was assessed for each use of MLMD. Error rates were analyzed to assess model bias. In addition, model explainability was determined using Shapley Additive Explanations values. Results There was a total of 42 238 boys (54.7%) included in model development; mean (SD) age of the children was 5.4 (4.8) years. Models obtained high area under the receiver operator curve (0.89-0.99) and positive predictive values (0.77-0.94) across each of the use cases. 
The proposed implementation of MLMDs would streamline care for 22.3% of all patient visits and make test results available earlier by 165 minutes (weighted mean) per affected patient. Model explainability for each MLMD demonstrated clinically relevant features having the most influence on model predictions. Models also performed with minimal to no sex bias. Conclusions and Relevance The findings of this study suggest the potential for clinical automation using MLMDs. When integrated into clinical workflows, MLMDs may have the potential to autonomously order common ED tests early in a patient's visit with explainability provided to patients and clinicians.
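The study's threshold-setting step, choosing a model decision threshold that limits false positives and achieves a high positive predictive value, can be sketched as a simple search over candidate thresholds. The function name and toy data below are illustrative, not the paper's implementation.

```python
def threshold_for_ppv(scores, labels, target_ppv):
    """Return the lowest decision threshold at which the positive
    predictive value among flagged cases (score >= threshold) meets
    `target_ppv`, or None if no threshold achieves it."""
    for t in sorted(set(scores)):
        flagged = [(s, y) for s, y in zip(scores, labels) if s >= t]
        if not flagged:
            break
        ppv = sum(y for _, y in flagged) / len(flagged)
        if ppv >= target_ppv:
            return t
    return None

# Toy model scores for five triage visits (1 = test was truly needed)
print(threshold_for_ppv([0.2, 0.4, 0.6, 0.8, 0.9], [0, 0, 1, 1, 1], target_ppv=0.9))  # 0.6
```

Raising the threshold trades coverage for precision: fewer visits are auto-ordered, but a larger share of the flagged orders are warranted, which is the behavior an autonomous medical directive needs.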
Affiliation(s)
- Devin Singh
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- The Hospital for Sick Children, Toronto, Ontario, Canada
- Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Sujay Nagaraj
- The Hospital for Sick Children, Toronto, Ontario, Canada
- Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Pouria Mashouri
- DATA Team, Techna Institute, University Health Network, Toronto, Ontario, Canada
- Erik Drysdale
- The Hospital for Sick Children, Toronto, Ontario, Canada
- Jason Fischer
- The Hospital for Sick Children, Toronto, Ontario, Canada
- Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Anna Goldenberg
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- The Hospital for Sick Children, Toronto, Ontario, Canada
- Vector Institute, Toronto, Ontario, Canada
- Canadian Institute for Advanced Research, Toronto, Ontario, Canada
- Michael Brudno
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- DATA Team, Techna Institute, University Health Network, Toronto, Ontario, Canada
- Vector Institute, Toronto, Ontario, Canada
- Canadian Institute for Advanced Research, Toronto, Ontario, Canada
88
SHIFTing artificial intelligence to be responsible in healthcare: A systematic review. Soc Sci Med 2022; 296:114782. [DOI: 10.1016/j.socscimed.2022.114782]
89
Abràmoff MD, Cunningham B, Patel B, Eydelman MB, Leng T, Sakamoto T, Blodi B, Grenon SM, Wolf RM, Manrai AK, Ko JM, Chiang MF, Char D. Foundational Considerations for Artificial Intelligence Using Ophthalmic Images. Ophthalmology 2022; 129:e14-e32. [PMID: 34478784 PMCID: PMC9175066 DOI: 10.1016/j.ophtha.2021.08.023]
Abstract
IMPORTANCE The development of artificial intelligence (AI) and other machine diagnostic systems, also known as software as a medical device, and their recent introduction into clinical practice require a deeply rooted foundation in bioethics for consideration by regulatory agencies and other stakeholders around the globe. OBJECTIVES To initiate a dialogue on the issues to consider when developing a bioethically sound foundation for AI in medicine based on images of eye structures, for discussion with all stakeholders. EVIDENCE REVIEW The scope of the issues and summaries of the discussions under consideration by the Foundational Principles of Ophthalmic Imaging and Algorithmic Interpretation Working Group, as first presented during the Collaborative Community on Ophthalmic Imaging inaugural meeting on September 7, 2020, and afterward in the working group. FINDINGS Artificial intelligence has the potential to fundamentally improve health care access and patient outcomes while decreasing disparities, lowering cost, and enhancing the care team. Nevertheless, substantial concerns exist. To attain this potential, bioethicists, AI algorithm experts, the Food and Drug Administration and other regulatory agencies, industry, patient advocacy groups, clinicians and their professional societies, other provider groups, and payors (i.e., stakeholders) must work together in collaborative communities to resolve the fundamental ethical issues of nonmaleficence, autonomy, and equity. Resolution of these issues affects all levels of the design, validation, and implementation of AI in medicine, each of which warrants meticulous attention. CONCLUSIONS AND RELEVANCE A bioethically sound foundation may be possible if it is based on the fundamental ethical principles of nonmaleficence, autonomy, and equity throughout the design, validation, and implementation of AI systems. Such a foundation would support the continued successful introduction of AI into medicine ahead of consideration by regulatory agencies, and could yield important improvements in the accessibility and quality of health care, reductions in health disparities, and lower costs. These considerations should be discussed with all stakeholders and expanded on as a useful initiation of this dialogue.
Affiliation(s)
- Michael D. Abràmoff
- Department of Ophthalmology and Visual Sciences, University of Iowa, Iowa City, Iowa; Department of Electrical and Computer Engineering, University of Iowa, Iowa City, Iowa; Department of Biomedical Engineering, University of Iowa, Iowa City, Iowa
- Brad Cunningham
- Center for Devices and Radiological Health, Office of Health Technology 1, United States Food and Drug Administration, Silver Spring, Maryland
- Bakul Patel
- Center for Devices and Radiological Health, Digital Health Center of Excellence, United States Food and Drug Administration, Silver Spring, Maryland
- Malvina B. Eydelman
- Center for Devices and Radiological Health, Office of Health Technology 1, United States Food and Drug Administration, Silver Spring, Maryland
- Theodore Leng
- Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California
- Taiji Sakamoto
- Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima, Japan; Japanese Vitreous Retina Society, Osaka, Japan
- Barbara Blodi
- Department of Ophthalmology, University of Wisconsin, Madison, Wisconsin
- S. Marlene Grenon
- Innovation Ventures, University of California, San Francisco, San Francisco, California; Division of Vascular and Endovascular Surgery, University of California San Francisco, California
- Risa M. Wolf
- Department of Pediatric Endocrinology, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Arjun K. Manrai
- Computational Health Informatics Program, Boston Children’s Hospital, Boston, Massachusetts; Department of Biomedical Informatics, Harvard Medical School, Boston, Massachusetts
- Justin M. Ko
- Department of Dermatology, Stanford University School of Medicine, Stanford, California
- Danton Char
- Division of Pediatric Cardiac Anesthesia, Department of Anesthesiology, Stanford University School of Medicine, San Francisco, California; Center for Biomedical Ethics, Stanford University School of Medicine, San Francisco, California
90
Crossnohere NL, Elsaid M, Paskett J, Bose-Brill S, Bridges JFP. Guidelines for artificial intelligence in medicine: A literature review and content analysis of frameworks. J Med Internet Res 2022; 24:e36823. [PMID: 36006692] [PMCID: PMC9459836] [DOI: 10.2196/36823]
Abstract
Background Artificial intelligence (AI) is rapidly expanding in medicine despite a lack of consensus on its application and evaluation. Objective We sought to identify current frameworks guiding the application and evaluation of AI for predictive analytics in medicine and to describe their content. We also assessed at which stages along the AI translational spectrum (ie, AI development, reporting, evaluation, implementation, and surveillance) each framework's content applies. Methods We performed a literature review of frameworks regarding the oversight of AI in medicine. The search included key topics such as “artificial intelligence,” “machine learning,” “guidance as topic,” and “translational science,” and spanned the period 2014-2022. Documents were included if they provided generalizable guidance regarding the use or evaluation of AI in medicine. Included frameworks are summarized descriptively and were subjected to content analysis. A novel evaluation matrix was developed and applied to appraise the frameworks’ coverage of content areas across translational stages. Results Fourteen frameworks are featured in the review: six provide descriptive guidance and eight provide reporting checklists for medical applications of AI. Content analysis revealed five considerations related to the oversight of AI in medicine across frameworks: transparency, reproducibility, ethics, effectiveness, and engagement. All frameworks discuss transparency, reproducibility, ethics, and effectiveness, while only half discuss engagement. The evaluation matrix revealed that frameworks were most likely to report AI considerations for the translational stage of development and least likely to report considerations for the translational stage of surveillance.
Conclusions Existing frameworks for the application and evaluation of AI in medicine offer notably little input on the role of engagement in oversight and on the translational stage of surveillance. Identifying and optimizing strategies for engagement are essential to ensure that AI can meaningfully benefit patients and other end users.
Affiliation(s)
- Norah L Crossnohere
- Department of Biomedical Informatics, The Ohio State University College of Medicine, Columbus, OH, United States
- Division of General Internal Medicine, Department of Internal Medicine, The Ohio State University College of Medicine, Columbus, OH, United States
- Mohamed Elsaid
- Department of Biomedical Informatics, The Ohio State University College of Medicine, Columbus, OH, United States
- Jonathan Paskett
- Department of Biomedical Informatics, The Ohio State University College of Medicine, Columbus, OH, United States
- Seuli Bose-Brill
- Division of General Internal Medicine, Department of Internal Medicine, The Ohio State University College of Medicine, Columbus, OH, United States
- John F P Bridges
- Department of Biomedical Informatics, The Ohio State University College of Medicine, Columbus, OH, United States
91
Su J, Zhang Y, Ke QQ, Su JK, Yang QH. Mobilizing artificial intelligence to cardiac telerehabilitation. Rev Cardiovasc Med 2022; 23:45. [PMID: 35229536] [DOI: 10.31083/j.rcm2302045]
Abstract
Cardiac telerehabilitation is a method that uses digital technologies to deliver cardiac rehabilitation from a distance. It has been shown to improve patients' disease outcomes and quality of life and to reduce readmissions and adverse cardiac events. The outbreak of the coronavirus pandemic has brought considerable new challenges to cardiac rehabilitation, which have pushed cardiac telerehabilitation into broad application. This transformation brings difficulties that urgently call for innovation. Artificial intelligence, with its strengths in data mining and interpretation, may provide a potential solution. This review evaluates the current applications and limitations of artificial intelligence in cardiac telerehabilitation and offers prospects.
Affiliation(s)
- Jin Su
- School of Nursing, Jinan University, 510632 Guangzhou, Guangdong, China
- Ye Zhang
- School of Nursing, Jinan University, 510632 Guangzhou, Guangdong, China
- Qi-Qi Ke
- School of Nursing, Jinan University, 510632 Guangzhou, Guangdong, China
- Ju-Kun Su
- School of Nursing, Jinan University, 510632 Guangzhou, Guangdong, China
- Qiao-Hong Yang
- School of Nursing, Jinan University, 510632 Guangzhou, Guangdong, China
92
Sikstrom L, Maslej MM, Hui K, Findlay Z, Buchman DZ, Hill SL. Conceptualising fairness: three pillars for medical algorithms and health equity. BMJ Health Care Inform 2022; 29:e100459. [PMID: 35012941] [PMCID: PMC8753410] [DOI: 10.1136/bmjhci-2021-100459]
Abstract
OBJECTIVES Fairness is a core concept meant to grapple with the different forms of discrimination and bias that emerge with advances in Artificial Intelligence (eg, machine learning, ML). Yet, claims to fairness in ML discourses are often vague and contradictory. The response to these issues within the scientific community has been technocratic: studies either measure (mathematically) competing definitions of fairness and/or recommend a range of governance tools (eg, fairness checklists or guiding principles). To advance efforts to operationalise fairness in medicine, we synthesised a broad range of literature. METHODS We conducted an environmental scan of English-language literature on fairness from 1960 to July 31, 2021. The electronic databases Medline, PubMed and Google Scholar were searched, supplemented by additional hand searches. Data from 213 selected publications were analysed using rapid framework analysis. Search and analysis were completed in two rounds: to explore previously identified issues (a priori), as well as those emerging from the analysis (de novo). RESULTS Our synthesis identified 'Three Pillars for Fairness': transparency, impartiality and inclusion. We draw on these insights to propose a multidimensional conceptual framework to guide empirical research on the operationalisation of fairness in healthcare. DISCUSSION We apply the conceptual framework generated by our synthesis to risk assessment in psychiatry as a case study. We argue that any claim to fairness must reflect critical assessment and ongoing social and political deliberation around these three pillars with a range of stakeholders, including patients. CONCLUSION We conclude by outlining areas for further research that would bolster ongoing commitments to fairness and health equity in healthcare.
Affiliation(s)
- Laura Sikstrom
- Centre for Addiction and Mental Health, Toronto, Ontario, Canada
- Department of Anthropology, University of Toronto, Toronto, Ontario, Canada
- Marta M Maslej
- Centre for Addiction and Mental Health, Toronto, Ontario, Canada
- Katrina Hui
- Centre for Addiction and Mental Health, Toronto, Ontario, Canada
- Department of Psychiatry, University of Toronto, Toronto, Ontario, Canada
- Zoe Findlay
- Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Ontario, Canada
- Daniel Z Buchman
- Centre for Addiction and Mental Health, Toronto, Ontario, Canada
- Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada
- Sean L Hill
- Centre for Addiction and Mental Health, Toronto, Ontario, Canada
- Department of Psychiatry, University of Toronto, Toronto, Ontario, Canada
93
Abstract
Pneumonia of unknown source emerged in December 2019 in the city of Wuhan, China. The World Health Organization (WHO) named this condition coronavirus disease 2019 (COVID-19) on February 11, 2020, and declared it a pandemic on March 11, 2020. The virus that causes COVID-19 was found to share a similar genome (80% similarity) with the previously known severe acute respiratory syndrome coronavirus (SARS-CoV), and was later named severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). SARS-CoV-2 belongs to the family Coronaviridae, within the order Nidovirales, and to the subfamily Orthocoronavirinae. The Orthocoronavirinae comprise four genera of coronaviruses: alpha, beta, gamma and delta coronaviruses, denoted α-CoV, β-CoV, γ-CoV and δ-CoV, respectively. The α-CoVs and β-CoVs mainly infect mammals, whereas γ-CoVs and δ-CoVs are generally found in birds. The β-CoVs include SARS-CoV, the Middle East respiratory syndrome coronavirus (MERS-CoV), and SARS-CoV-2, the cause of the current pandemic. These viruses initially cause pneumonia, which can progress to acute respiratory distress syndrome (ARDS) and other related complications that can be fatal.
94
Diniz P, Abreu M, Lacerda D, Martins A, Pereira H, Ferreira FC, Kerkhoffs GMMJ, Fred A. Pre-injury performance is most important for predicting the level of match participation after Achilles tendon ruptures in elite soccer players: a study using a machine learning classifier. Knee Surg Sports Traumatol Arthrosc 2022; 30:4225-4237. [PMID: 35941323] [PMCID: PMC9360634] [DOI: 10.1007/s00167-022-07082-4]
Abstract
PURPOSE Achilles tendon ruptures (ATR) are career-threatening injuries in elite soccer players due to the decreased sports performance they commonly inflict. This study presents an exploratory data analysis of match participation before and after ATRs and an evaluation of the performance of a machine learning (ML) model based on pre-injury features to predict whether a player will return to a previous level of match participation. METHODS The website transfermarkt.com was mined, between January and March of 2021, for relevant entries regarding soccer players who suffered an ATR while playing in first or second leagues. The difference between average minutes played per match (MPM) 1 year before injury and between 1 and 2 years after the injury was used to identify patterns in match participation after injury. Clustering analysis was performed using k-means clustering. Predictions of post-injury match participation were made using the XGBoost classification algorithm. The performance of this model was evaluated using the area under the receiver operating characteristic curve (AUROC) and the Brier score loss (BSL). RESULTS Two hundred and nine players were included in the study. Data from 32,853 matches were analysed. Exploratory data analysis revealed that forwards, midfielders and defenders increased match participation during the first year after injury, with goalkeepers still improving at 2 years. Players were grouped into four clusters according to the difference between MPMs 1 year before injury and between 1 and 2 years after the injury. These groups ranged from a severe decrease (n = 34; -59 ± 13 MPM) through a moderate decrease (n = 75; -25 ± 8 MPM) and maintenance (n = 70; 0 ± 8 MPM) to an increase (n = 30; 32 ± 13 MPM). Regarding the predictive model, the average AUROC after cross-validation was 0.81 ± 0.10, and the BSL was 0.12, with the most important features relating to pre-injury match participation.
CONCLUSION Most players take 1 year to reach peak match participation after an ATR. Good performance was attained using an ML classifier to predict the level of match participation following an ATR, with features related to pre-injury match participation displaying the highest importance. LEVEL OF EVIDENCE I.
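The clustering step this abstract describes (grouping players by the difference in minutes played per match before and after injury) can be sketched with a minimal, standard-library-only example. This is not the authors' code: the data below are synthetic, generated to loosely mimic the four reported groups, and a plain-Python 1-D k-means stands in for the library implementation the study would have used.

```python
import random

def kmeans_1d(values, k, iters=100, seed=42):
    """Plain 1-D k-means: assign each value to its nearest centroid,
    recompute centroids as cluster means, repeat until assignments stabilise."""
    rng = random.Random(seed)
    centroids = rng.sample(values, k)  # initialise from k random data points
    assignment = None
    for _ in range(iters):
        new_assignment = [min(range(k), key=lambda j: abs(v - centroids[j]))
                          for v in values]
        if new_assignment == assignment:
            break  # converged: no point changed cluster
        assignment = new_assignment
        for j in range(k):
            members = [v for v, a in zip(values, assignment) if a == j]
            if members:  # keep old centroid if a cluster emptied out
                centroids[j] = sum(members) / len(members)
    return centroids, assignment

# Synthetic MPM differences (average minutes per match 1 y before injury vs
# 1-2 y after), loosely mimicking the four groups reported in the study.
data_rng = random.Random(0)
mpm_diff = ([data_rng.gauss(-59, 13) for _ in range(34)]    # severe decrease
            + [data_rng.gauss(-25, 8) for _ in range(75)]   # moderate decrease
            + [data_rng.gauss(0, 8) for _ in range(70)]     # maintenance
            + [data_rng.gauss(32, 13) for _ in range(30)])  # increase

centroids, labels = kmeans_1d(mpm_diff, k=4)
print(sorted(round(c) for c in centroids))
```

With real data, the cluster labels (or the raw pre-injury participation features) would then feed the downstream classifier; the sketch only illustrates how the four participation groups can emerge from a single MPM-difference feature.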
Affiliation(s)
- Pedro Diniz
- Department of Orthopaedic Surgery, Hospital de Sant'Ana, Rua de Benguela, 501, 2775-028 Parede, Portugal; Department of Bioengineering and iBB, Institute for Bioengineering and Biosciences, Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal; Associate Laboratory i4HB, Institute for Health and Bioeconomy, Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal; Fisiogaspar, Lisbon, Portugal
- Mariana Abreu
- Department of Bioengineering and iBB, Institute for Bioengineering and Biosciences, Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal; Instituto de Telecomunicações, Lisbon, Portugal
- Diogo Lacerda
- Department of Orthopaedic Surgery, Hospital de Sant’Ana, Rua de Benguela, 501, 2775-028 Parede, Portugal
- António Martins
- Department of Orthopaedic Surgery, Hospital de Sant’Ana, Rua de Benguela, 501, 2775-028 Parede, Portugal; Fisiogaspar, Lisbon, Portugal
- Hélder Pereira
- Orthopaedic Department, Centro Hospitalar Póvoa de Varzim, Vila do Conde, Portugal; Ripoll y De Prado Sports Clinic: FIFA Medical Centre of Excellence, Murcia-Madrid, Spain; University of Minho ICVS/3B’s-PT Government Associate Laboratory, Braga/Guimarães, Portugal
- Frederico Castelo Ferreira
- Department of Bioengineering and iBB, Institute for Bioengineering and Biosciences, Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal; Associate Laboratory i4HB, Institute for Health and Bioeconomy, Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal
- Gino MMJ Kerkhoffs
- Department of Orthopaedic Surgery, Amsterdam Movement Sciences, Amsterdam University Medical Centers, Amsterdam, The Netherlands; Academic Center for Evidence Based Sports Medicine (ACES), Amsterdam, The Netherlands; Amsterdam Collaboration for Health and Safety in Sports (ACHSS), Amsterdam, The Netherlands
- Ana Fred
- Department of Bioengineering and iBB, Institute for Bioengineering and Biosciences, Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal; Instituto de Telecomunicações, Lisbon, Portugal
95
Data-driven machine learning: A new approach to process and utilize biomedical data. Predictive Modeling in Biomedical Data Mining and Analysis 2022. [PMCID: PMC9464259] [DOI: 10.1016/b978-0-323-99864-2.00017-2]
96
Nwanosike EM, Conway BR, Merchant HA, Hasan SS. Potential applications and performance of machine learning techniques and algorithms in clinical practice: A systematic review. Int J Med Inform 2021; 159:104679. [PMID: 34990939] [DOI: 10.1016/j.ijmedinf.2021.104679]
Abstract
PURPOSE The advent of clinically adapted machine learning algorithms can solve numerous problems ranging from disease diagnosis and prognosis to therapy recommendations. This systematic review examines the performance of machine learning (ML) algorithms and evaluates the progress made to date towards their implementation in clinical practice. METHODS We systematically searched databases (PubMed, MEDLINE, Scopus, Google Scholar, Cochrane Library and the WHO COVID-19 database) to identify original articles published between January 2011 and October 2021. Studies reporting ML techniques applied in clinical practice involving humans, with a reported performance metric, were considered. RESULTS Of 873 unique articles identified, 36 studies were eligible for inclusion. The XGBoost (extreme gradient boosting) algorithm showed the highest potential for clinical applications (n = 7 studies), followed jointly by the random forest algorithm, logistic regression, and the support vector machine (n = 5 studies each). Prediction of outcomes (n = 33), in particular for inflammatory diseases (n = 7), received the most attention, followed by cancer and neuropsychiatric disorders (n = 5 each) and COVID-19 (n = 4). Thirty-three of the thirty-six included studies passed more than 50% of the selected quality assessment criteria in the TRIPOD checklist. In contrast, none of the studies achieved an ideal overall bias rating of 'low' based on the PROBAST checklist, and only three showed evidence of the deployment of ML algorithm(s) in clinical practice. CONCLUSIONS ML is potentially a reliable tool for clinical decision support. Although advocated widely in clinical practice, work is still in progress to validate clinically adapted ML algorithms. Improving the quality standards, transparency, and interpretability of ML models will further lower the barriers to acceptability.
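Many of the performance metrics this review refers to are reported as AUROC. As a small, hedged illustration (not code from the review), the metric can be computed directly from its Mann-Whitney interpretation: the probability that a randomly chosen positive case is scored above a randomly chosen negative one, with ties counted as half.

```python
def auroc(y_true, scores):
    """AUROC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs where the positive is scored higher,
    counting ties as 0.5."""
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative example")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: two negatives scored 0.1 and 0.4, two positives 0.35 and 0.8.
print(auroc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

A value of 0.5 corresponds to random ranking and 1.0 to a perfect one, which is why figures such as the 0.81 reported in the Achilles-rupture study above indicate good discrimination.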
Affiliation(s)
- Ezekwesiri Michael Nwanosike
- Department of Pharmacy, School of Applied Sciences, University of Huddersfield, Queensgate, Huddersfield HD1 3DH, West Yorkshire, United Kingdom
- Barbara R Conway
- Department of Pharmacy, School of Applied Sciences, University of Huddersfield, Queensgate, Huddersfield HD1 3DH, West Yorkshire, United Kingdom
- Hamid A Merchant
- Department of Pharmacy, School of Applied Sciences, University of Huddersfield, Queensgate, Huddersfield HD1 3DH, West Yorkshire, United Kingdom
- Syed Shahzad Hasan
- Department of Pharmacy, School of Applied Sciences, University of Huddersfield, Queensgate, Huddersfield HD1 3DH, West Yorkshire, United Kingdom; School of Biomedical Sciences & Pharmacy, University of Newcastle, Callaghan, Australia
97
Abramoff MD, Mortensen Z, Tava C. Commentary: Diagnosing Diabetic Retinopathy With Artificial Intelligence: What Information Should Be Included to Ensure Ethical Informed Consent? Front Med (Lausanne) 2021; 8:765936. [PMID: 34901082] [PMCID: PMC8651697] [DOI: 10.3389/fmed.2021.765936]
Affiliation(s)
- Michael D Abramoff
- Department of Ophthalmology and Visual Sciences, University of Iowa, Iowa City, IA, United States; Digital Diagnostics Inc., Coralville, IA, United States
- Zachary Mortensen
- Department of Ophthalmology and Visual Sciences, University of Iowa, Iowa City, IA, United States
- Chris Tava
- Digital Diagnostics Inc., Coralville, IA, United States
98
Abstract
Trust in artificial intelligence (AI) by society and the development of trustworthy AI systems and ecosystems are critical for the progress and implementation of AI technology in medicine. With the growing use of AI in a variety of medical and imaging applications, it is more vital than ever to make these systems dependable and trustworthy. This article considers fourteen core principles aimed at moving the needle closer to systems that are accurate, resilient, fair, explainable, safe, and transparent: toward trustworthy AI.
99
McCoy LG, Brenna CTA, Chen S, Vold K, Das S. Believing in Black Boxes: Machine Learning for Healthcare Does Not Need Explainability to be Evidence-Based. J Clin Epidemiol 2021; 142:252-257. [PMID: 34748907] [DOI: 10.1016/j.jclinepi.2021.11.001]
Abstract
OBJECTIVE To examine the role of explainability in machine learning for healthcare (MLHC), and its necessity and significance with respect to effective and ethical MLHC application. STUDY DESIGN AND SETTING This commentary engages with the growing and dynamic corpus of literature on the use of MLHC and artificial intelligence (AI) in medicine, which provides the context for a focused narrative review of arguments presented in favour of, and in opposition to, explainability in MLHC. RESULTS We find that concerns regarding explainability are not limited to MLHC, but extend to numerous well-validated treatment interventions as well as to human clinical judgment itself. We examine the role of evidence-based medicine in evaluating inexplicable treatments and technologies, and highlight the analogy between the concept of explainability in MLHC and the related concept of mechanistic reasoning in evidence-based medicine. CONCLUSION Ultimately, we conclude that the value of explainability in MLHC is not intrinsic, but instrumental to achieving greater imperatives such as performance and trust. We caution against the uncompromising pursuit of explainability, and advocate instead for the development of robust empirical methods to successfully evaluate increasingly inexplicable algorithmic systems.
Affiliation(s)
- Liam G McCoy
- Temerty Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Connor T A Brenna
- Department of Anesthesiology & Pain Medicine, University of Toronto, Toronto, Ontario, Canada; Department of Philosophy, University of Toronto, Toronto, Ontario, Canada
- Stacy Chen
- Joint Centre for Bioethics, University of Toronto, Toronto, Ontario, Canada
- Karina Vold
- Institute for the History and Philosophy of Science and Technology, University of Toronto, Toronto, Ontario, Canada; Schwartz Reisman Institute for Technology and Society, University of Toronto, Toronto, Ontario, Canada; Centre for Ethics, University of Toronto, Toronto, Ontario, Canada; Leverhulme Centre for the Future of Intelligence, University of Cambridge, Cambridge, United Kingdom
- Sunit Das
- Centre for Ethics, University of Toronto, Toronto, Ontario, Canada; Division of Neurosurgery, University of Toronto, Toronto, Ontario, Canada
100
Bélisle-Pipon JC, Couture V, Roy MC, Ganache I, Goetghebeur M, Cohen IG. What Makes Artificial Intelligence Exceptional in Health Technology Assessment? Front Artif Intell 2021; 4:736697. [PMID: 34796318] [PMCID: PMC8594317] [DOI: 10.3389/frai.2021.736697]
Abstract
The application of artificial intelligence (AI) may revolutionize the healthcare system: enhancing efficiency by automating routine tasks and decreasing health-related costs, broadening access to healthcare delivery, targeting patient needs more precisely, and assisting clinicians in their decision-making. For these benefits to materialize, governments and health authorities must regulate AI and conduct appropriate health technology assessment (HTA). Many authors have highlighted that AI health technologies (AIHTs) challenge traditional evaluation and regulatory processes. To inform and support HTA organizations and regulators in adapting their processes to AIHTs, we conducted a systematic review of the literature on the challenges posed by AIHTs in HTA and health regulation. Our research question was: what makes artificial intelligence exceptional in HTA? The current body of literature appears to portray AIHTs as exceptional to HTA. This exceptionalism is expressed along five dimensions: 1) AIHTs' distinctive features; 2) their systemic impacts on health care and the health sector; 3) the increased expectations towards AI in health; 4) the new ethical, social and legal challenges that arise from deploying AI in the health sector; and 5) the new evaluative constraints that AI poses to HTA. Thus, AIHTs are perceived as exceptional because of their technological characteristics and their potential impacts on society at large. As AI implementation by governments and health organizations carries risks of generating new challenges and amplifying existing ones, there are strong arguments for taking the exceptional aspects of AIHTs into consideration, especially as their impacts on the healthcare system will be far greater than those of drugs and medical devices.
As AIHTs are increasingly introduced into the health care sector, there is a window of opportunity for HTA agencies and scholars to consider AIHTs' exceptionalism and to work towards deploying only clinically, economically, and socially acceptable AIHTs in the health care system.
Affiliation(s)
- Isabelle Ganache
- Institut National D’Excellence en Santé et en Services Sociaux (INESSS), Montréal, QC, Canada
- Mireille Goetghebeur
- Institut National D’Excellence en Santé et en Services Sociaux (INESSS), Montréal, QC, Canada