1. Derksen ME, van Beek M, de Bruijn T, Stuit F, Blankers M, Goudriaan AE. Ethical aspects and user preferences in applying machine learning to adjust eHealth addressing substance use: A mixed-methods study. Int J Med Inform 2025;199:105897. PMID: 40157245. DOI: 10.1016/j.ijmedinf.2025.105897.
Abstract
BACKGROUND Digital health interventions targeting substance use disorders are increasingly being implemented. Data science methodology has the potential to enhance the involvement and efficacy of these interventions, though its application may raise ethical considerations. This study aimed to explore ethical aspects and preferences among users of an online digital intervention for substance use and gambling disorder regarding the application of supervised machine learning (ML) methodology. METHODS We recruited participants from a widely used, evidence-based online substance use and gambling intervention from the Netherlands (Jellinek Digital Self-help). Initially, we conducted two online focus groups (n = 5 each) to explore topics related to ethical considerations and user preferences regarding the application of ML for adapting unguided digital interventions. Subsequently, the findings from these focus groups informed the development of an online, quantitative, self-reported questionnaire study on this topic (n = 157). Data collection and analyses were guided by the principles of biomedical ethics of Beauchamp and Childress. RESULTS Our qualitative and quantitative results revealed that digital intervention users found the application of machine learning analyses ethically acceptable, although they had difficulty conceptualizing ML applications. Participants believed that ML could benefit the intervention and, in turn, their well-being. Both qualitative and quantitative results emphasized the importance of preserving user autonomy when applying supervised ML to adjust digital interventions. In addition, both data sources indicated that digital intervention users trusted Jellinek's integrity to apply ML. Ethical concerns identified in the qualitative data (e.g., data security, human control) were not confirmed in our quantitative findings. CONCLUSIONS This mixed-methods study revealed that users of a digital intervention demonstrated limited concern about ethical aspects of applying ML to adapt digital interventions; their ethical considerations primarily pertained to needs for autonomy and privacy.
Affiliation(s)
- Marloes E Derksen: Arkin Mental Health Care and Amsterdam Institute for Addiction Research, Amsterdam, Netherlands; Amsterdam UMC, location University of Amsterdam, Department of Medical Informatics, eHealth Living & Learning Lab Amsterdam, Meibergdreef 9, Amsterdam, Netherlands; Amsterdam Public Health, Digital Health & Mental Health, Amsterdam, Netherlands
- Max van Beek: Arkin Mental Health Care and Amsterdam Institute for Addiction Research, Amsterdam, Netherlands; Amsterdam Public Health, Digital Health & Mental Health, Amsterdam, Netherlands; Amsterdam UMC, location University of Amsterdam, Department of Psychiatry, Meibergdreef 9, Amsterdam, Netherlands
- Tamara de Bruijn: Arkin Mental Health Care and Amsterdam Institute for Addiction Research, Amsterdam, Netherlands; Jellinek Prevention, Amsterdam, Netherlands
- Floor Stuit: Arkin Mental Health Care and Amsterdam Institute for Addiction Research, Amsterdam, Netherlands
- Matthijs Blankers: Arkin Mental Health Care and Amsterdam Institute for Addiction Research, Amsterdam, Netherlands; Amsterdam Public Health, Digital Health & Mental Health, Amsterdam, Netherlands; Trimbos Institute, The Netherlands Institute of Mental Health and Addiction, Utrecht, Netherlands
- Anneke E Goudriaan: Arkin Mental Health Care and Amsterdam Institute for Addiction Research, Amsterdam, Netherlands; Amsterdam Public Health, Digital Health & Mental Health, Amsterdam, Netherlands; Amsterdam UMC, location University of Amsterdam, Department of Psychiatry, Meibergdreef 9, Amsterdam, Netherlands

2. Wu Y, Liu Y, Yang Y, Yao MS, Yang W, Shi X, Yang L, Li D, Liu Y, Yin S, Lei C, Zhang M, Gee JC, Yang X, Wei W, Gu S. A concept-based interpretable model for the diagnosis of choroid neoplasias using multimodal data. Nat Commun 2025;16:3504. PMID: 40223097. PMCID: PMC11994757. DOI: 10.1038/s41467-025-58801-7.
Abstract
Diagnosing rare diseases remains a critical challenge in clinical practice, often requiring specialist expertise. Despite the promising potential of machine learning, the scarcity of data on rare diseases and the need for interpretable, reliable artificial intelligence (AI) models complicate development. This study introduces a multimodal concept-based interpretable model, tailored to clinical practice, that distinguishes uveal melanoma (0.4-0.6 per million in Asians) from hemangioma and metastatic carcinoma. We collected a comprehensive dataset of choroid neoplasm imaging with radiological reports in Asian patients, the most extensive such dataset to date, encompassing over 750 patients from 2013 to 2019. Our model integrates domain expert insights from radiological reports and differentiates between three types of choroidal tumors, achieving an F1 score of 0.91. This performance not only matches that of senior ophthalmologists but also improves the diagnostic accuracy of less experienced clinicians by 42%. The results underscore the potential of interpretable AI to enhance rare disease diagnosis and pave the way for future advancements in medical AI.
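The reported F1 score summarizes the balance of precision and recall across the three tumor classes. As a point of reference for readers less familiar with the metric, the following minimal sketch shows how one common variant, the macro-averaged F1, can be computed for a three-class classifier; the labels and predictions are invented for illustration and are not the study's data or code.

```python
# Illustrative sketch (not the authors' code): macro-averaged F1 for a
# three-class choroidal tumor classifier, computed from predicted labels.
import numpy as np

def macro_f1(y_true, y_pred, classes):
    """Per-class F1 scores averaged without class weighting (macro F1)."""
    f1s = []
    for c in classes:
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        f1s.append(2 * precision * recall / (precision + recall) if (precision + recall) else 0.0)
    return float(np.mean(f1s))

# Hypothetical labels: 0 = melanoma, 1 = hemangioma, 2 = metastatic carcinoma
y_true = np.array([0, 0, 1, 1, 2, 2, 0, 1, 2, 0])
y_pred = np.array([0, 0, 1, 2, 2, 2, 0, 1, 2, 1])
print(f"macro F1 = {macro_f1(y_true, y_pred, classes=[0, 1, 2]):.2f}")
```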
Affiliation(s)
- Yifan Wu: University of Pennsylvania, Philadelphia, PA, USA
- Yang Liu: University of Electronic Science and Technology of China, Chengdu, China
- Yue Yang: University of Pennsylvania, Philadelphia, PA, USA
- Wenli Yang: Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China; Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Tongren Hospital, Capital Medical University, Beijing, China; Beijing Ophthalmology and Visual Sciences Key Lab, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Xuehui Shi: Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China; Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Tongren Hospital, Capital Medical University, Beijing, China; Beijing Ophthalmology and Visual Sciences Key Lab, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Lihong Yang: Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China; Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Tongren Hospital, Capital Medical University, Beijing, China; Beijing Ophthalmology and Visual Sciences Key Lab, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Dongjun Li: Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China; Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Tongren Hospital, Capital Medical University, Beijing, China; Beijing Ophthalmology and Visual Sciences Key Lab, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Yueming Liu: Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China; Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Tongren Hospital, Capital Medical University, Beijing, China; Beijing Ophthalmology and Visual Sciences Key Lab, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Shiyi Yin: Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China; Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Tongren Hospital, Capital Medical University, Beijing, China; Beijing Ophthalmology and Visual Sciences Key Lab, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Chunyan Lei: Department of Ophthalmology and Research Laboratory of Macular Disease, West China Hospital, Sichuan University, Chengdu, China
- Meixia Zhang: Department of Ophthalmology and Research Laboratory of Macular Disease, West China Hospital, Sichuan University, Chengdu, China
- James C Gee: University of Pennsylvania, Philadelphia, PA, USA
- Xuan Yang: Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China; Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Tongren Hospital, Capital Medical University, Beijing, China; Beijing Ophthalmology and Visual Sciences Key Lab, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Wenbin Wei: Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China; Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Tongren Hospital, Capital Medical University, Beijing, China; Beijing Ophthalmology and Visual Sciences Key Lab, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Shi Gu: University of Electronic Science and Technology of China, Chengdu, China; College of Computer Science and Technology, Zhejiang University, Hangzhou, China; State Key Laboratory of Brain Machine Intelligence, Zhejiang University, Hangzhou, China

3. El Zoghbi M, Malhotra A, Bilal M, Shaukat A. Impact of Artificial Intelligence on Clinical Research. Gastrointest Endosc Clin N Am 2025;35:445-455. PMID: 40021240. DOI: 10.1016/j.giec.2024.10.002.
Abstract
Artificial intelligence (AI) has the potential to significantly impact clinical research, particularly in research preparation and data interpretation. The development of AI tools that can help perform literature searches, synthesize and streamline data collection and analysis, and format studies could make the clinical research process more efficient. Several of these tools have been developed and trialed, and many more are being rapidly developed. This article highlights AI applications in clinical research in gastroenterology, including their impact on drug discovery, and explores areas where further guidance is needed to supplement current understanding and enhance their use.
Affiliation(s)
- Maysaa El Zoghbi: Department of Medicine, NYU Grossman School of Medicine, New York, NY, USA
- Ashish Malhotra: Department of Medicine, NYU Grossman School of Medicine, New York, NY, USA
- Mohammad Bilal: University of Minnesota, Minneapolis VA Medical Center, Minneapolis, MN, USA
- Aasma Shaukat: Department of Medicine, NYU Grossman School of Medicine, New York, NY, USA

4. Sofi JI, Nabi FN, Nabi J. Informed Agency: Reimagining Patient Autonomy in the Age of Machine-Assisted Healthcare. Am J Bioeth 2025;25:156-158. PMID: 39992817. DOI: 10.1080/15265161.2025.2457716.
Affiliation(s)
- Junaid Nabi: Pardee RAND Graduate School; Global Innovators Group, The Aspen Institute

5. Starke G, Gille F, Termine A, Aquino YSJ, Chavarriaga R, Ferrario A, Hastings J, Jongsma K, Kellmeyer P, Kulynych B, Postan E, Racine E, Sahin D, Tomaszewska P, Vold K, Webb J, Facchini A, Ienca M. Finding Consensus on Trust in AI in Health Care: Recommendations From a Panel of International Experts. J Med Internet Res 2025;27:e56306. PMID: 39969962. PMCID: PMC11888049. DOI: 10.2196/56306.
Abstract
BACKGROUND The integration of artificial intelligence (AI) into health care has become a crucial element in the digital transformation of health systems worldwide. Despite the potential benefits across diverse medical domains, a significant barrier to the successful adoption of AI systems in health care applications remains the prevailing low user trust in these technologies. Crucially, this challenge is exacerbated by the lack of consensus among experts from different disciplines on the definition of trust in AI within the health care sector. OBJECTIVE We aimed to provide the first consensus-based analysis of trust in AI in health care based on an interdisciplinary panel of experts from different domains. Our findings can be used to address the problem of defining trust in AI in health care applications, fostering the discussion of concrete real-world health care scenarios in which humans interact with AI systems explicitly. METHODS We used a combination of framework analysis and a 3-step consensus process involving 18 international experts from the fields of computer science, medicine, philosophy of technology, ethics, and social sciences. Our process consisted of a synchronous phase during an expert workshop where we discussed the notion of trust in AI in health care applications, defined an initial framework of important elements of trust to guide our analysis, and agreed on 5 case studies. This was followed by a 2-step iterative, asynchronous process in which the authors further developed, discussed, and refined notions of trust with respect to these specific cases. RESULTS Our consensus process identified key contextual factors of trust, namely, an AI system's environment, the actors involved, and framing factors, and analyzed causes and effects of trust in AI in health care. Our findings revealed that certain factors were applicable across all discussed cases yet also pointed to the need for a fine-grained, multidisciplinary analysis bridging human-centered and technology-centered approaches. While regulatory boundaries and technological design features are critical to successful AI implementation in health care, ultimately, communication and positive lived experiences with AI systems will be at the forefront of user trust. Our expert consensus allowed us to formulate concrete recommendations for future research on trust in AI in health care applications. CONCLUSIONS This paper advocates for a more refined and nuanced conceptual understanding of trust in the context of AI in health care. By synthesizing insights into commonalities and differences among specific case studies, this paper establishes a foundational basis for future debates and discussions on trusting AI in health care.
Affiliation(s)
- Georg Starke: Institute for History and Ethics of Medicine, Technical University of Munich, Munich, Germany; College of Humanities, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Felix Gille: Digital Society Initiative, University of Zurich, Zurich, Switzerland; Institute for Implementation Science in Health Care, Faculty of Medicine, University of Zurich, Zurich, Switzerland
- Alberto Termine: Institute for History and Ethics of Medicine, Technical University of Munich, Munich, Germany; College of Humanities, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland; Dalle Molle Institute for Artificial Intelligence (IDSIA), The University of Applied Sciences and Arts of Southern Switzerland (SUPSI), Lugano, Switzerland
- Yves Saint James Aquino: Australian Centre for Health Engagement, Evidence and Values, University of Wollongong, Wollongong, Australia
- Ricardo Chavarriaga: Centre for Artificial Intelligence, Zurich University of Applied Sciences (ZHAW), Zurich, Switzerland
- Andrea Ferrario: Institute of Biomedical Ethics and History of Medicine, University of Zurich, Zurich, Switzerland
- Janna Hastings: Institute for Implementation Science in Health Care, Faculty of Medicine, University of Zurich, Zurich, Switzerland; School of Medicine, University of St. Gallen, St. Gallen, Switzerland
- Karin Jongsma: Bioethics & Health Humanities, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands
- Philipp Kellmeyer: Data and Web Science Group, School of Business Informatics and Mathematics, University of Mannheim, Mannheim, Germany; Department of Neurosurgery, University of Freiburg - Medical Center, Freiburg im Breisgau, Germany
- Emily Postan: Edinburgh Law School, University of Edinburgh, Edinburgh, United Kingdom
- Elise Racine: The Ethox Centre and Wellcome Centre for Ethics and Humanities, Nuffield Department of Population Health, University of Oxford, Oxford, United Kingdom; The Institute for Ethics in AI, Faculty of Philosophy, University of Oxford, Oxford, United Kingdom
- Derya Sahin: Development Economics (DEC), World Bank Group, Washington, DC, United States
- Paulina Tomaszewska: Faculty of Mathematics and Information Science, Warsaw University of Technology, Warsaw, Poland
- Karina Vold: Institute for the History and Philosophy of Science and Technology, University of Toronto, Toronto, ON, Canada; Schwartz Reisman Institute for Technology and Society, University of Toronto, Toronto, ON, Canada
- Jamie Webb: The Centre for Technomoral Futures, University of Edinburgh, Edinburgh, United Kingdom
- Alessandro Facchini: Dalle Molle Institute for Artificial Intelligence (IDSIA), The University of Applied Sciences and Arts of Southern Switzerland (SUPSI), Lugano, Switzerland
- Marcello Ienca: Institute for History and Ethics of Medicine, Technical University of Munich, Munich, Germany; College of Humanities, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland

6. Rijken L, Zwetsloot S, Smorenburg S, Wolterink J, Išgum I, Marquering H, van Duivenvoorde J, Ploem C, Jessen R, Catarinella F, Lee R, Bera K, Buisan J, Zhang P, Dias-Neto M, Raffort J, Lareyre F, Muller C, Koncar I, Tomic I, Živković M, Djuric T, Stankovic A, Venermo M, Tulamo R, Behrendt CA, Smit N, Schijven M, van den Born BJ, Delewi R, Jongkind V, Ayyalasomayajula V, Yeung KK. Developing Trustworthy Artificial Intelligence Models to Predict Vascular Disease Progression: the VASCUL-AID-RETRO Study Protocol. J Endovasc Ther 2025:15266028251313963. PMID: 39921236. DOI: 10.1177/15266028251313963.
Abstract
INTRODUCTION Abdominal aortic aneurysms (AAAs) and peripheral artery disease (PAD) are two vascular diseases with a significant risk of major adverse cardiovascular events and mortality. A challenge in current disease management is the unpredictable disease progression in individual patients. The VASCUL-AID-RETRO study aims to develop trustworthy multimodal predictive artificial intelligence (AI) models for multiple tasks, including risk stratification of disease progression and cardiovascular events in patients with AAA and PAD. METHODS The VASCUL-AID-RETRO study will collect data from 5000 AAA and 6000 PAD patients across multiple European centers of the VASCUL-AID consortium using electronic health records from 2015 to 2024. These retrospectively collected data will be enriched with additional data from existing biobanks and registries. Multimodal data, including clinical records, radiological imaging, proteomics, and genomics, will be collected to develop AI models predicting disease progression and cardiovascular risks. This will be done while integrating international ethics guidelines and legal standards for trustworthy AI, to ensure socially responsible data integration and analysis. PROPOSED ANALYSES A consensus-based variable list of clinical parameters and a core outcome set for both diseases will be developed through meetings with key opinion leaders. Blood, plasma, and tissue samples from existing biobanks will be analyzed for proteomic and genomic variations. AI models will be trained on segmented AAA and PAD artery geometries for estimation of hemodynamic parameters to quantify disease progression. Initially, risk prediction models will be developed for each modality separately, and subsequently, all data will be combined to be used as input to multimodal prediction models. During all processes, data security, data quality, and ethical guidelines and legal standards will be carefully considered. As a next step, the developed models will be further adjusted with prospective data and internally validated in a prospective cohort (VASCUL-AID-PRO study). CONCLUSION The VASCUL-AID-RETRO study will utilize advanced AI techniques and integrate clinical, imaging, and multi-omics data to predict AAA and PAD progression and cardiovascular events. CLINICAL TRIAL REGISTRATION The VASCUL-AID-RETRO study is registered at www.clinicaltrials.gov under the identification number NCT06206369. CLINICAL IMPACT The VASCUL-AID-RETRO study aims to improve the clinical practice of vascular surgery by developing artificial intelligence-driven multimodal predictive models for patients with abdominal aortic aneurysms or peripheral artery disease, enhancing personalized medicine. By integrating comprehensive data sets including clinical, imaging, and multi-omics data, these models have the potential to provide accurate risk stratification for disease progression and cardiovascular events. An innovation lies in the extensive European data set in combination with multimodal analysis approaches, which enables the development of advanced models to facilitate a better understanding of disease mechanisms and progression. For clinicians, this means that more precise, individualized treatment plans can be established, ultimately aiming to improve patient outcomes.
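The protocol combines clinical, imaging, and omics features into multimodal prediction models but does not specify their architecture. Purely as an illustration of one common pattern (early fusion by feature concatenation), the sketch below trains a simple classifier on synthetic data; all dimensions, variable names, and the outcome are assumptions for the example and do not reflect the actual VASCUL-AID models.

```python
# Minimal sketch of early-fusion multimodal risk prediction: feature vectors from
# several modalities are concatenated and fed to one classifier.
# Synthetic data only; not the VASCUL-AID consortium's pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500
clinical = rng.normal(size=(n, 10))   # e.g., age, blood pressure, lab values (placeholders)
imaging = rng.normal(size=(n, 32))    # e.g., geometry/hemodynamic descriptors (placeholders)
omics = rng.normal(size=(n, 64))      # e.g., selected proteomic markers (placeholders)
y = (clinical[:, 0] + imaging[:, 0] + rng.normal(size=n) > 0).astype(int)  # synthetic outcome

X = np.hstack([clinical, imaging, omics])   # early fusion by concatenation
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC on held-out data:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```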
Affiliation(s)
- Lotte Rijken: Department of Surgery, Amsterdam University Medical Center, Location Vrije Universiteit, Amsterdam, The Netherlands; Atherosclerosis and Ischemic Syndromes, Amsterdam Cardiovascular Sciences, Amsterdam University Medical Center, Amsterdam, The Netherlands; Digital Health Amsterdam Public Health, Amsterdam University Medical Center, Amsterdam, The Netherlands
- Sabrina Zwetsloot: Department of Surgery, Amsterdam University Medical Center, Location Vrije Universiteit, Amsterdam, The Netherlands; Atherosclerosis and Ischemic Syndromes, Amsterdam Cardiovascular Sciences, Amsterdam University Medical Center, Amsterdam, The Netherlands
- Stefan Smorenburg: Department of Surgery, Amsterdam University Medical Center, Location Vrije Universiteit, Amsterdam, The Netherlands; Atherosclerosis and Ischemic Syndromes, Amsterdam Cardiovascular Sciences, Amsterdam University Medical Center, Amsterdam, The Netherlands
- Jelmer Wolterink: Department of Applied Mathematics, Technical Medical Centre, University of Twente, Enschede, The Netherlands
- Ivana Išgum: Department of Biomedical Engineering and Physics, Amsterdam University Medical Center, Location University of Amsterdam, Amsterdam, The Netherlands; Department of Radiology and Nuclear Medicine, Amsterdam University Medical Center, Location University of Amsterdam, Amsterdam, The Netherlands; Informatics Institute, Faculty of Science, University of Amsterdam, Amsterdam, The Netherlands
- Henk Marquering: Department of Biomedical Engineering and Physics, Amsterdam University Medical Center, Location University of Amsterdam, Amsterdam, The Netherlands; Department of Radiology and Nuclear Medicine, Amsterdam University Medical Center, Location University of Amsterdam, Amsterdam, The Netherlands
- Jan van Duivenvoorde: Department of Surgery, Amsterdam University Medical Center, Location Vrije Universiteit, Amsterdam, The Netherlands; Atherosclerosis and Ischemic Syndromes, Amsterdam Cardiovascular Sciences, Amsterdam University Medical Center, Amsterdam, The Netherlands
- Corrette Ploem: Department of Ethics, Law and Humanities, Amsterdam University Medical Center, Location University of Amsterdam, Amsterdam, The Netherlands
- Roosmarie Jessen: Department of Ethics, Law and Humanities, Amsterdam University Medical Center, Location University of Amsterdam, Amsterdam, The Netherlands
- Regent Lee: Nuffield Department of Surgical Sciences, University of Oxford, Oxford, UK
- Katarzyna Bera: Nuffield Department of Surgical Sciences, University of Oxford, Oxford, UK
- Jenny Buisan: Nuffield Department of Surgical Sciences, University of Oxford, Oxford, UK
- Ping Zhang: Nuffield Department of Surgical Sciences, University of Oxford, Oxford, UK
- Marina Dias-Neto: Department of Angiology and Vascular Surgery, Centro Hospitalar Universitário de São João, Porto, Portugal; UnIC@RISE, Department of Surgery and Physiology, Faculty of Medicine, University of Porto, Porto, Portugal
- Juliette Raffort: Clinical Chemistry Laboratory, University Hospital of Nice, Nice, France; Institute 3IA Côte d'Azur, Université Côte d'Azur, Nice, France; CNRS, UMR7370, LP2M, Université Côte d'Azur, Nice, France
- Fabien Lareyre: CNRS, UMR7370, LP2M, Université Côte d'Azur, Nice, France; Department of Vascular Surgery, Hospital of Antibes Juan-les-Pins, Antibes, France
- Igor Koncar: Faculty of Medicine, University of Belgrade, Belgrade, Serbia; Clinic for Vascular and Endovascular Surgery, Clinical Center of Serbia, Belgrade, Serbia
- Ivan Tomic: Faculty of Medicine, University of Belgrade, Belgrade, Serbia; Clinic for Vascular and Endovascular Surgery, Clinical Center of Serbia, Belgrade, Serbia
- Maja Živković: Laboratory for Radiobiology and Molecular Genetics, VINCA Institute of Nuclear Sciences-National Institute of the Republic of Serbia, University of Belgrade, Belgrade, Serbia
- Tamara Djuric: Laboratory for Radiobiology and Molecular Genetics, VINCA Institute of Nuclear Sciences-National Institute of the Republic of Serbia, University of Belgrade, Belgrade, Serbia
- Aleksandra Stankovic: Laboratory for Radiobiology and Molecular Genetics, VINCA Institute of Nuclear Sciences-National Institute of the Republic of Serbia, University of Belgrade, Belgrade, Serbia
- Maarit Venermo: Department of Vascular Surgery, Helsinki University Hospital, Helsinki, Finland; Department of Vascular Surgery, University of Helsinki, Helsinki, Finland
- Riikka Tulamo: Department of Vascular Surgery, Helsinki University Hospital, Helsinki, Finland; Department of Vascular Surgery, University of Helsinki, Helsinki, Finland
- Christian-Alexander Behrendt: Department of Vascular and Endovascular Surgery, Asklepios Clinic Wandsbek, Asklepios Medical School, Hamburg, Germany
- Noeska Smit: Department of Informatics, University of Bergen, Bergen, Norway; Department of Radiology, Mohn Medical Imaging and Visualization Centre, Haukeland University Hospital, Bergen, Norway
- Marlies Schijven: Digital Health Amsterdam Public Health, Amsterdam University Medical Center, Amsterdam, The Netherlands; Department of Surgery, Amsterdam University Medical Center, Location University of Amsterdam, Amsterdam, The Netherlands; Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Center, Amsterdam, The Netherlands
- Bert-Jan van den Born: Department of Public and Occupational Health, Amsterdam University Medical Center, Location University of Amsterdam, Amsterdam, The Netherlands; Department of Vascular Medicine, Amsterdam University Medical Center, Location University of Amsterdam, Amsterdam, The Netherlands
- Ronak Delewi: Department of Cardiology, Amsterdam University Medical Center, Location University of Amsterdam, Amsterdam, The Netherlands; Amsterdam Cardiovascular Sciences, Amsterdam, The Netherlands
- Vincent Jongkind: Department of Surgery, Amsterdam University Medical Center, Location Vrije Universiteit, Amsterdam, The Netherlands; Microcirculation, Amsterdam Cardiovascular Sciences, Amsterdam, The Netherlands
- Venkat Ayyalasomayajula: Department of Surgery, Amsterdam University Medical Center, Location Vrije Universiteit, Amsterdam, The Netherlands; Atherosclerosis and Ischemic Syndromes, Amsterdam Cardiovascular Sciences, Amsterdam University Medical Center, Amsterdam, The Netherlands
- Kak Khee Yeung: Department of Surgery, Amsterdam University Medical Center, Location Vrije Universiteit, Amsterdam, The Netherlands; Atherosclerosis and Ischemic Syndromes, Amsterdam Cardiovascular Sciences, Amsterdam University Medical Center, Amsterdam, The Netherlands

7. Pal R, Le J, Rudas A, Chiang JN, Williams T, Alexander B, Joosten A, Cannesson M. A review of machine learning methods for non-invasive blood pressure estimation. J Clin Monit Comput 2025;39:95-106. PMID: 39305449. DOI: 10.1007/s10877-024-01221-7.
Abstract
Blood pressure is a very important clinical measurement, offering valuable insights into the hemodynamic status of patients. Regular monitoring is crucial for early detection, prevention, and treatment of conditions like hypotension and hypertension, both of which increase morbidity for a wide variety of reasons. This monitoring can be done either invasively or non-invasively, and either intermittently or continuously. Invasive measurement is considered the gold standard and provides continuous readings, but it carries higher risks of complications such as infection, bleeding, and thrombosis. Non-invasive techniques, in contrast, reduce these risks and can provide intermittent or continuous blood pressure readings. This review explores modern machine learning-based non-invasive methods for blood pressure estimation, discussing their advantages, limitations, and clinical relevance.
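Most of the ML methods reviewed map waveform-derived features, for example from photoplethysmography, to blood pressure values via a learned regression. The sketch below illustrates that general pattern only; the feature names (pulse transit time, heart rate, amplitude ratio), the synthetic data, and the random-forest regressor are assumptions made for demonstration, not one of the reviewed or clinically validated methods.

```python
# Illustrative pattern only: regress blood pressure on waveform-derived features.
# Synthetic data and placeholder features; not a validated clinical method.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)
n = 1000
ptt = rng.normal(0.25, 0.03, n)        # hypothetical pulse transit time (s)
hr = rng.normal(75, 10, n)             # hypothetical heart rate (bpm)
amp_ratio = rng.normal(0.6, 0.1, n)    # hypothetical PPG amplitude ratio
X = np.column_stack([ptt, hr, amp_ratio])
sbp = 120 - 200 * (ptt - 0.25) + 0.2 * (hr - 75) + rng.normal(0, 5, n)  # synthetic systolic BP

X_tr, X_te, y_tr, y_te = train_test_split(X, sbp, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("MAE (mmHg):", round(mean_absolute_error(y_te, model.predict(X_te)), 1))
```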
Affiliation(s)
- Ravi Pal: Department of Anesthesiology & Perioperative Medicine, David Geffen School of Medicine, University of California Los Angeles, Ronald Reagan UCLA Medical Center, 757 Westwood Plaza, Los Angeles, CA, 90095, USA
- Joshua Le: Larner College of Medicine, University of Vermont, Burlington, USA
- Akos Rudas: Department of Anesthesiology & Perioperative Medicine, David Geffen School of Medicine, University of California Los Angeles, Ronald Reagan UCLA Medical Center, 757 Westwood Plaza, Los Angeles, CA, 90095, USA
- Jeffrey N Chiang: Department of Computational Medicine, University of California Los Angeles, Los Angeles, CA, USA
- Tiffany Williams: Department of Anesthesiology & Perioperative Medicine, David Geffen School of Medicine, University of California Los Angeles, Ronald Reagan UCLA Medical Center, 757 Westwood Plaza, Los Angeles, CA, 90095, USA
- Brenton Alexander: Department of Anesthesiology & Perioperative Medicine, University of California San Diego, San Diego, CA, USA
- Alexandre Joosten: Department of Anesthesiology & Perioperative Medicine, David Geffen School of Medicine, University of California Los Angeles, Ronald Reagan UCLA Medical Center, 757 Westwood Plaza, Los Angeles, CA, 90095, USA
- Maxime Cannesson: Department of Anesthesiology & Perioperative Medicine, David Geffen School of Medicine, University of California Los Angeles, Ronald Reagan UCLA Medical Center, 757 Westwood Plaza, Los Angeles, CA, 90095, USA

8. Hassan SU, Abdulkadir SJ, Zahid MSM, Al-Selwi SM. Local interpretable model-agnostic explanation approach for medical imaging analysis: A systematic literature review. Comput Biol Med 2025;185:109569. PMID: 39705792. DOI: 10.1016/j.compbiomed.2024.109569.
Abstract
BACKGROUND The interpretability and explainability of machine learning (ML) and artificial intelligence systems are critical for generating trust in their outcomes in fields such as medicine and healthcare. Errors generated by these systems, such as inaccurate diagnoses or treatments, can have serious and even life-threatening effects on patients. Explainable Artificial Intelligence (XAI) is emerging as an increasingly significant area of research, focusing on the black-box aspect of sophisticated and difficult-to-interpret ML algorithms. XAI techniques such as Local Interpretable Model-Agnostic Explanations (LIME) can provide explanations for these models, raising confidence in the systems and improving trust in their predictions. Numerous works have been published that address medical problems through the use of ML models in conjunction with XAI algorithms to provide interpretability and explainability. The primary objective of this study is to evaluate the performance of the newly emerging LIME techniques within healthcare domains that require more attention in the realm of XAI research. METHOD A systematic search was conducted in numerous databases (Scopus, Web of Science, IEEE Xplore, ScienceDirect, MDPI, and PubMed) that identified 1614 peer-reviewed articles published between 2019 and 2023. RESULTS A total of 52 articles were selected for detailed analysis, which showed a growing trend in the application of LIME techniques in healthcare, with significant improvements in the interpretability of ML models used for diagnostic and prognostic purposes. CONCLUSION The findings suggest that the integration of XAI techniques, particularly LIME, enhances the transparency and trustworthiness of AI systems in healthcare, thereby potentially improving patient outcomes and fostering greater acceptance of AI-driven solutions among medical professionals.
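LIME explains a single prediction by perturbing the input around the instance of interest, querying the black-box model on the perturbed samples, weighting those samples by their proximity to the original instance, and fitting a simple weighted linear surrogate whose coefficients act as local feature importances. The following sketch implements that idea from scratch for tabular data as a teaching aid; it is a simplified approximation with synthetic data, not the reference lime library used in the reviewed studies.

```python
# Simplified LIME-style local explanation for a tabular model (illustration only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] - 0.5 * X[:, 2] > 0).astype(int)          # synthetic labels
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def lime_explain(x, predict_proba, n_samples=2000, kernel_width=0.75):
    """Fit a locally weighted linear surrogate around instance x."""
    perturbed = x + rng.normal(scale=X.std(axis=0), size=(n_samples, x.size))
    distances = np.linalg.norm(perturbed - x, axis=1)
    weights = np.exp(-(distances ** 2) / kernel_width ** 2)   # proximity kernel
    target = predict_proba(perturbed)[:, 1]                   # black-box outputs
    surrogate = Ridge(alpha=1.0).fit(perturbed, target, sample_weight=weights)
    return surrogate.coef_                                    # local feature importances

print(lime_explain(X[0], black_box.predict_proba))
```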
Affiliation(s)
- Shahab Ul Hassan: Department of Computer and Information Sciences, Universiti Teknologi PETRONAS, Seri Iskandar, 32610, Perak, Malaysia; Centre for Intelligent Signal & Imaging Research (CISIR), Universiti Teknologi PETRONAS, Seri Iskandar, 32610, Perak, Malaysia
- Said Jadid Abdulkadir: Department of Computer and Information Sciences, Universiti Teknologi PETRONAS, Seri Iskandar, 32610, Perak, Malaysia; Center for Research in Data Science (CeRDaS), Universiti Teknologi PETRONAS, Seri Iskandar, 32610, Perak, Malaysia
- M Soperi Mohd Zahid: Department of Computer and Information Sciences, Universiti Teknologi PETRONAS, Seri Iskandar, 32610, Perak, Malaysia; Centre for Intelligent Signal & Imaging Research (CISIR), Universiti Teknologi PETRONAS, Seri Iskandar, 32610, Perak, Malaysia
- Safwan Mahmood Al-Selwi: Department of Computer and Information Sciences, Universiti Teknologi PETRONAS, Seri Iskandar, 32610, Perak, Malaysia; Center for Research in Data Science (CeRDaS), Universiti Teknologi PETRONAS, Seri Iskandar, 32610, Perak, Malaysia

9. Li X, Zhao L, Zhang L, Wu Z, Liu Z, Jiang H, Cao C, Xu S, Li Y, Dai H, Yuan Y, Liu J, Li G, Zhu D, Yan P, Li Q, Liu W, Liu T, Shen D. Artificial General Intelligence for Medical Imaging Analysis. IEEE Rev Biomed Eng 2025;18:113-129. PMID: 39509310. DOI: 10.1109/rbme.2024.3493775.
Abstract
Large-scale Artificial General Intelligence (AGI) models, including Large Language Models (LLMs) such as ChatGPT/GPT-4, have achieved unprecedented success in a variety of general domain tasks. Yet, when applied directly to specialized domains like medical imaging, which require in-depth expertise, these models face notable challenges arising from the medical field's inherent complexities and unique characteristics. In this review, we delve into the potential applications of AGI models in medical imaging and healthcare, with a primary focus on LLMs, Large Vision Models, and Large Multimodal Models. We provide a thorough overview of the key features and enabling techniques of LLMs and AGI, and further examine the roadmaps guiding the evolution and implementation of AGI models in the medical sector, summarizing their present applications, potentialities, and associated challenges. In addition, we highlight potential future research directions, offering a holistic view on upcoming ventures. This comprehensive review aims to offer insights into the future implications of AGI in medical imaging, healthcare, and beyond.

10. Sperling J, Welsh W, Haseley E, Quenstedt S, Muhigaba PB, Brown A, Ephraim P, Shafi T, Waitzkin M, Casarett D, Goldstein BA. Machine learning-based prediction models in medical decision-making in kidney disease: patient, caregiver, and clinician perspectives on trust and appropriate use. J Am Med Inform Assoc 2025;32:51-62. PMID: 39545362. DOI: 10.1093/jamia/ocae255.
Abstract
OBJECTIVES This study aims to improve the ethical use of machine learning (ML)-based clinical prediction models (CPMs) in shared decision-making for patients with kidney failure on dialysis. We explore factors that inform acceptability, interpretability, and implementation of ML-based CPMs among multiple constituent groups. MATERIALS AND METHODS We collected and analyzed qualitative data from focus groups with varied end users, including dialysis support providers (clinical providers and additional support staff such as dialysis clinic staff and social workers), patients, and patients' caregivers (n = 52). RESULTS Participants were broadly accepting of ML-based CPMs, but with concerns about data sources, the factors included in the model, and accuracy. Use was desired in conjunction with providers' views and explanations. Differences among respondent types were minimal overall but were most prevalent in discussions of CPM presentation and model use. DISCUSSION AND CONCLUSION Evidence of the acceptability of ML-based CPM usage provides support for ethical use, but numerous specific factors relating to acceptability, model construction, and model use for shared clinical decision-making must be addressed. There are specific steps that could be taken by data scientists and health systems to engender use that is accepted by end users and facilitates trust, but there are also ongoing barriers and challenges in addressing desires for use. This study contributes to emerging literature on interpretability, mechanisms for sharing complexities, including uncertainty regarding model results, and implications for decision-making. It examines numerous stakeholder groups, including providers, patients, and caregivers, to provide specific considerations that can influence health system use and provide a basis for future research.
Affiliation(s)
- Jessica Sperling: Social Science Research Institute, Duke University, Durham, NC 27708, United States; Clinical and Translational Science Institute, Duke University School of Medicine, Durham, NC 27701, United States; Department of Medicine, Duke University School of Medicine, Durham, NC 27708, United States
- Whitney Welsh: Social Science Research Institute, Duke University, Durham, NC 27708, United States
- Erin Haseley: Social Science Research Institute, Duke University, Durham, NC 27708, United States
- Stella Quenstedt: Clinical and Translational Science Institute, Duke University School of Medicine, Durham, NC 27701, United States
- Perusi B Muhigaba: Clinical and Translational Science Institute, Duke University School of Medicine, Durham, NC 27701, United States
- Adrian Brown: Social Science Research Institute, Duke University, Durham, NC 27708, United States
- Patti Ephraim: Institute of Health System Science, Feinstein Institutes for Medical Research, Northwell Health, Manhasset, NY 11030, United States
- Tariq Shafi: Department of Medicine, Houston Methodist, Houston, TX 77030, United States
- Michael Waitzkin: Science & Society, Duke University, Durham, NC 27708, United States
- David Casarett: Department of Medicine, Duke University School of Medicine, Durham, NC 27708, United States
- Benjamin A Goldstein: Department of Biostatistics and Bioinformatics, Duke University School of Medicine, Durham, NC 27708, United States

11. Willem T, Fritzsche MC, Zimmermann BM, Sierawska A, Breuer S, Braun M, Ruess AK, Bak M, Schönweitz FB, Meier LJ, Fiske A, Tigard D, Müller R, McLennan S, Buyx A. Embedded Ethics in Practice: A Toolbox for Integrating the Analysis of Ethical and Social Issues into Healthcare AI Research. Sci Eng Ethics 2024;31:3. PMID: 39718728. PMCID: PMC11668859. DOI: 10.1007/s11948-024-00523-y.
Abstract
Integrating artificial intelligence (AI) into critical domains such as healthcare holds immense promise. Nevertheless, significant challenges must be addressed to avoid harm, promote the well-being of individuals and societies, and ensure ethically sound and socially just technology development. Innovative approaches like Embedded Ethics, which refers to integrating ethics and social science into technology development based on interdisciplinary collaboration, are emerging to address issues of bias, transparency, misrepresentation, and more. This paper aims to develop this approach further to enable future projects to effectively deploy it. Based on the practical experience of using ethics and social science methodology in interdisciplinary AI-related healthcare consortia, this paper presents several methods that have proven helpful for embedding ethical and social science analysis and inquiry. They include (1) stakeholder analyses, (2) literature reviews, (3) ethnographic approaches, (4) peer-to-peer interviews, (5) focus groups, (6) interviews with affected groups and external stakeholders, (7) bias analyses, (8) workshops, and (9) interdisciplinary results dissemination. We believe that applying Embedded Ethics offers a pathway to stimulate reflexivity, proactively anticipate social and ethical concerns, and foster interdisciplinary inquiry into such concerns at every stage of technology development. This approach can help shape responsible, inclusive, and ethically aware technology innovation in healthcare and beyond.
Affiliation(s)
- Theresa Willem: Institute of History and Ethics in Medicine, Department of Preclinical Medicine, TUM School of Medicine and Health, Technical University of Munich, Ismaninger Straße 22, 81675, Munich, Germany; Department of Science, Technology and Society (STS), School of Social Science and Technology, Technical University of Munich, Munich, Germany
- Marie-Christine Fritzsche: Institute of History and Ethics in Medicine, Department of Preclinical Medicine, TUM School of Medicine and Health, Technical University of Munich, Ismaninger Straße 22, 81675, Munich, Germany; Department of Science, Technology and Society (STS), School of Social Science and Technology, Technical University of Munich, Munich, Germany
- Bettina M Zimmermann: Institute of History and Ethics in Medicine, Department of Preclinical Medicine, TUM School of Medicine and Health, Technical University of Munich, Ismaninger Straße 22, 81675, Munich, Germany; Department of Science, Technology and Society (STS), School of Social Science and Technology, Technical University of Munich, Munich, Germany; Institute of Philosophy & Multidisciplinary Center for Infectious Diseases, University of Bern, Bern, Switzerland
- Anna Sierawska: Institute of History and Ethics in Medicine, Department of Preclinical Medicine, TUM School of Medicine and Health, Technical University of Munich, Ismaninger Straße 22, 81675, Munich, Germany; Department of Science, Technology and Society (STS), School of Social Science and Technology, Technical University of Munich, Munich, Germany; TUD Dresden University of Technology, Dresden, Germany
- Svenja Breuer: Department of Science, Technology and Society (STS), School of Social Science and Technology, Technical University of Munich, Munich, Germany; Department of Economics and Policy, School of Management, Technical University of Munich, Munich, Germany; Center for Responsible AI Technologies, Technical University of Munich & University of Augsburg & Munich School of Philosophy, Munich, Germany
- Maximilian Braun: Department of Science, Technology and Society (STS), School of Social Science and Technology, Technical University of Munich, Munich, Germany; Department of Economics and Policy, School of Management, Technical University of Munich, Munich, Germany
- Anja K Ruess: Department of Science, Technology and Society (STS), School of Social Science and Technology, Technical University of Munich, Munich, Germany; Department of Economics and Policy, School of Management, Technical University of Munich, Munich, Germany
- Marieke Bak: Institute of History and Ethics in Medicine, Department of Preclinical Medicine, TUM School of Medicine and Health, Technical University of Munich, Ismaninger Straße 22, 81675, Munich, Germany; Department of Science, Technology and Society (STS), School of Social Science and Technology, Technical University of Munich, Munich, Germany; Amsterdam UMC, Department of Ethics, Law and Humanities, University of Amsterdam, Amsterdam, The Netherlands
- Franziska B Schönweitz: Institute of History and Ethics in Medicine, Department of Preclinical Medicine, TUM School of Medicine and Health, Technical University of Munich, Ismaninger Straße 22, 81675, Munich, Germany; Department of Science, Technology and Society (STS), School of Social Science and Technology, Technical University of Munich, Munich, Germany
- Lukas J Meier: Institute of History and Ethics in Medicine, Department of Preclinical Medicine, TUM School of Medicine and Health, Technical University of Munich, Ismaninger Straße 22, 81675, Munich, Germany; Churchill College, University of Cambridge, Cambridge, UK; Edmond & Lily Safra Center for Ethics, Harvard University, Cambridge, USA
- Amelia Fiske: Institute of History and Ethics in Medicine, Department of Preclinical Medicine, TUM School of Medicine and Health, Technical University of Munich, Ismaninger Straße 22, 81675, Munich, Germany; Department of Science, Technology and Society (STS), School of Social Science and Technology, Technical University of Munich, Munich, Germany
- Daniel Tigard: Department of Philosophy, University of San Diego, San Diego, USA; Institute for Experiential AI, Northeastern University, Boston, USA
- Ruth Müller: Department of Science, Technology and Society (STS), School of Social Science and Technology, Technical University of Munich, Munich, Germany; Department of Economics and Policy, School of Management, Technical University of Munich, Munich, Germany; Center for Responsible AI Technologies, Technical University of Munich & University of Augsburg & Munich School of Philosophy, Munich, Germany
- Stuart McLennan: Institute of History and Ethics in Medicine, Department of Preclinical Medicine, TUM School of Medicine and Health, Technical University of Munich, Ismaninger Straße 22, 81675, Munich, Germany; Institute for Biomedical Ethics, University of Basel, Basel, Switzerland
- Alena Buyx: Institute of History and Ethics in Medicine, Department of Preclinical Medicine, TUM School of Medicine and Health, Technical University of Munich, Ismaninger Straße 22, 81675, Munich, Germany; Department of Science, Technology and Society (STS), School of Social Science and Technology, Technical University of Munich, Munich, Germany; Center for Responsible AI Technologies, Technical University of Munich & University of Augsburg & Munich School of Philosophy, Munich, Germany

12. Abràmoff MD, Lavin PT, Jakubowski JR, Blodi BA, Keeys M, Joyce C, Folk JC. Mitigation of AI adoption bias through an improved autonomous AI system for diabetic retinal disease. NPJ Digit Med 2024;7:369. PMID: 39702673. DOI: 10.1038/s41746-024-01389-x.
Abstract
Where adopted, autonomous artificial intelligence (AI) for diabetic retinal disease (DRD) resolves longstanding racial, ethnic, and socioeconomic disparities, but AI adoption bias persists. This preregistered trial determined the sensitivity and specificity of a previously FDA-authorized AI system, improved to compensate for the lower contrast and smaller imaged area of a widely adopted, lower-cost handheld fundus camera (RetinaVue700, Baxter Healthcare, Deerfield, IL), in identifying DRD in primary care among participants with diabetes and no known DRD. In 626 participants (1252 eyes; 50.8% male, 45.7% Hispanic, 17.3% Black), DRD prevalence was 29.0%. All prespecified non-inferiority endpoints were met against a Wisconsin Reading Center level I prognostic standard using widefield stereoscopic photography and macular optical coherence tomography, and no racial, ethnic, or sex bias was identified. Results suggest this improved autonomous AI system can mitigate AI adoption bias while preserving safety and efficacy, potentially contributing to rapid scaling of health access equity. ClinicalTrials.gov NCT05808699 (3/29/2023).
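Sensitivity and specificity are the trial's headline metrics, each tested against a prespecified non-inferiority margin. The sketch below shows how such a comparison is typically made, using a Wilson score lower confidence bound; the counts and the 0.80 margins are invented for illustration and are not the trial's data or its actual endpoints.

```python
# Illustrative only: sensitivity/specificity with Wilson lower bounds compared
# to an assumed non-inferiority margin. Counts below are invented, not trial data.
import math

def wilson_lower_bound(successes, n, z=1.96):
    """Lower bound of the Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = p + z**2 / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (centre - margin) / denom

tp, fn = 170, 12          # diseased eyes: correctly vs. incorrectly flagged (invented)
tn, fp = 400, 44          # non-diseased eyes (invented)
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
margin_sens, margin_spec = 0.80, 0.80   # assumed non-inferiority margins

print(f"sensitivity {sensitivity:.3f}, lower bound {wilson_lower_bound(tp, tp + fn):.3f}, "
      f"non-inferior: {wilson_lower_bound(tp, tp + fn) > margin_sens}")
print(f"specificity {specificity:.3f}, lower bound {wilson_lower_bound(tn, tn + fp):.3f}, "
      f"non-inferior: {wilson_lower_bound(tn, tn + fp) > margin_spec}")
```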
Affiliation(s)
- Michael D Abràmoff: Department of Ophthalmology and Visual Sciences, University of Iowa, Iowa City, IA, USA; Veterans Administration Medical Center, Iowa City, IA, USA; Digital Diagnostics, Inc., Coralville, IA, USA
- Philip T Lavin: Boston Biostatistics Research Foundation, Inc., Framingham, MA, USA
- Barbara A Blodi: Department of Ophthalmology and Visual Sciences, Wisconsin Reading Center, University of Wisconsin, Madison, WI, USA
- Mia Keeys: Department of Public Health, George Washington University, Washington, DC, USA; Womens' Commissioner, Washington, DC, USA
- Cara Joyce: Department of Medicine, Stritch School of Medicine, Loyola University Chicago, Chicago, IL, USA
- James C Folk: Department of Ophthalmology and Visual Sciences, University of Iowa, Iowa City, IA, USA; Veterans Administration Medical Center, Iowa City, IA, USA

13. Arriagada-Bruneau G, López C, Davidoff A. A Bias Network Approach (BNA) to Encourage Ethical Reflection Among AI Developers. Sci Eng Ethics 2024;31:1. PMID: 39688772. PMCID: PMC11652403. DOI: 10.1007/s11948-024-00526-9.
Abstract
We introduce the Bias Network Approach (BNA) as a sociotechnical method for AI developers to identify, map, and relate biases across the AI development process. This approach addresses the limitations of what we call the "isolationist approach to AI bias," a trend in the AI literature in which biases are seen as separate occurrences linked to specific stages of an AI pipeline. Dealing with these multiple biases can trigger a sense of excessive overload in managing each potential bias individually, or promote the adoption of an uncritical approach to understanding the influence of biases on developers' decision-making. The BNA fosters dialogue and a critical stance among developers, guided by external experts, using graphical representations to depict biased connections. To test the BNA, we conducted a pilot case study on the "waiting list" project, involving a small AI developer team creating a healthcare waiting-list NLP model in Chile. The analysis showed promising findings: (i) the BNA aids in visualizing interconnected biases and their impacts, facilitating ethical reflection in a more accessible way; (ii) it promotes transparency in decision-making throughout AI development; and (iii) more focus is necessary on professional biases and material limitations as sources of bias in AI development.
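The BNA's core artifact is a directed graph in which nodes are biases and edges indicate that one bias can propagate into another, so a team can trace downstream effects. A toy version of that representation is sketched below; the bias names and edges are invented for illustration and are not taken from the waiting-list case study.

```python
# Toy bias network: nodes are biases, directed edges mean "can propagate into".
# Names and edges are invented for illustration; not the BNA case-study network.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("historical underrepresentation in records", "sampling bias in training data"),
    ("sampling bias in training data", "label bias in outcome definition"),
    ("professional bias of annotators", "label bias in outcome definition"),
    ("label bias in outcome definition", "disparate error rates in deployment"),
    ("material limitations (compute, budget)", "insufficient validation across subgroups"),
    ("insufficient validation across subgroups", "disparate error rates in deployment"),
])

# For each root bias, list everything it can ultimately influence downstream.
for node in G.nodes:
    if G.in_degree(node) == 0:
        print(node, "->", sorted(nx.descendants(G, node)))
```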
Affiliation(s)
- Gabriela Arriagada-Bruneau: Instituto de Éticas Aplicadas, Instituto de Ingeniería Matemática y Computacional, Pontificia Universidad Católica de Chile, Avenida Vicuña Mackenna, 4860, Santiago, Chile; Centro Nacional de Inteligencia Artificial (CENIA), Santiago, Chile
- Claudia López: Departamento de Informática, Universidad Técnica Federico Santa María, Avenida España, 1680, Valparaíso, Chile
- Alexandra Davidoff: Sociology of Childhood and Children's Rights, Social Research Institute, UCL, 20 Bedford Way, London, UK; Nucleo Futures of Artificial Intelligence Research (FAIR), Santiago, Chile

14. Ursin F, Müller R, Funer F, Liedtke W, Renz D, Wiertz S, Ranisch R. Non-empirical methods for ethics research on digital technologies in medicine, health care and public health: a systematic journal review. Med Health Care Philos 2024;27:513-528. PMID: 39120780. PMCID: PMC11519279. DOI: 10.1007/s11019-024-10222-x.
Abstract
Bioethics has developed approaches to address ethical issues in health care, similar to how technology ethics provides guidelines for ethical research on artificial intelligence, big data, and robotic applications. As these digital technologies are increasingly used in medicine, health care and public health, it is plausible that the approaches of technology ethics have influenced bioethical research. Similar to the "empirical turn" in bioethics, which led to intense debates about appropriate moral theories, ethical frameworks and meta-ethics due to the increased use of empirical methodologies from the social sciences, the proliferation of health-related subtypes of technology ethics might have a comparable impact on current bioethical research. This systematic journal review analyses the reporting of ethical frameworks and non-empirical methods in argument-based research articles on digital technologies in medicine, health care and public health that have been published in high-impact bioethics journals. We focus on articles reporting non-empirical research in original contributions. Our aim is to describe the methods currently used for the analysis of ethical issues arising from the application of digital technologies in medicine, health care and public health. We confine our analysis to non-empirical methods because empirical methods have been well researched elsewhere. Finally, we discuss our findings against the background of established methods for health technology assessment, the lack of a typology for non-empirical methods, as well as conceptual and methodical change in bioethics. Our descriptive results may serve as a starting point for reflecting on whether current ethical frameworks and non-empirical methods are appropriate for researching the ethical issues deriving from the application of digital technologies in medicine, health care and public health.
Affiliation(s)
- Frank Ursin: Institute for Ethics, History and Philosophy of Medicine, Hannover Medical School, Carl-Neuberg-Strasse 1, 30625, Hannover, Germany.
- Regina Müller: Institute of Philosophy, University of Bremen, Enrique-Schmidt-Straße 7, 28359, Bremen, Germany
- Florian Funer: Institute for Ethics and History of Medicine, Eberhard Karls University, Gartenstrasse 47, 72074, Tübingen, Germany
- Wenke Liedtke: Faculty of Theology, University of Greifswald, Am Rubenowplatz 2-3, 17489, Greifswald, Germany
- David Renz: Faculty of Protestant Theology, University of Bonn, Am Hofgarten 8, 53113, Bonn, Germany
- Svenja Wiertz: Department of Medical Ethics and the History of Medicine, University of Freiburg, Stefan-Meier-Str. 26, 79104, Freiburg, Germany
- Robert Ranisch: Junior Professorship for Medical Ethics with a Focus on Digitization, Faculty of Health Sciences Brandenburg, University of Potsdam, Am Mühlenberg 9, 14476, Potsdam, Golm, Germany
15
Haga SB. Artificial intelligence, medications, pharmacogenomics, and ethics. Pharmacogenomics 2024; 25:611-622. [PMID: 39545629 DOI: 10.1080/14622416.2024.2428587] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2024] [Accepted: 11/08/2024] [Indexed: 11/17/2024] Open
Abstract
Artificial Intelligence (AI) and Machine Learning (ML) are revolutionizing various scientific and clinical disciplines including pharmacogenomics (PGx) by enabling the analysis of complex datasets and the development of predictive models. The integration of AI and ML with PGx has the potential to provide more precise, data-driven insights into new drug targets, drug efficacy, drug selection, and risk of adverse events. While significant effort to develop and validate these tools remains, ongoing advancements in AI technologies, coupled with improvements in data quality and depth, are anticipated to drive the transition of these tools into clinical practice and the delivery of individualized treatments and improved patient outcomes. The successful development and integration of AI-assisted PGx tools will require careful consideration of ethical, legal, and social issues (ELSI) in research and clinical practice. This paper explores the intersection of PGx with AI, highlighting current research and potential clinical applications, and ELSI including privacy, oversight, patient and provider knowledge and acceptance, and the impact on the patient-provider relationship and new roles.
Affiliation(s)
- Susanne B Haga: Department of Medicine, Division of General Internal Medicine, Duke University School of Medicine, Durham, NC, USA
16
Bélisle-Pipon JC, Victor G. Ethics dumping in artificial intelligence. Front Artif Intell 2024; 7:1426761. [PMID: 39582547 PMCID: PMC11582056 DOI: 10.3389/frai.2024.1426761] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2024] [Accepted: 09/16/2024] [Indexed: 11/26/2024] Open
Abstract
Artificial Intelligence (AI) systems encode not just statistical models and complex algorithms designed to process and analyze data, but also significant normative baggage. This ethical dimension, derived from the underlying code and training data, shapes the recommendations given, behaviors exhibited, and perceptions had by AI. These factors influence how AI is regulated, used, misused, and impacts end-users. The multifaceted nature of AI's influence has sparked extensive discussions across disciplines like Science and Technology Studies (STS), Ethical, Legal and Social Implications (ELSI) studies, public policy analysis, and responsible innovation-underscoring the need to examine AI's ethical ramifications. While the initial wave of AI ethics focused on articulating principles and guidelines, recent scholarship increasingly emphasizes the practical implementation of ethical principles, regulatory oversight, and mitigating unforeseen negative consequences. Drawing from the concept of "ethics dumping" in research ethics, this paper argues that practices surrounding AI development and deployment can, unduly and in a very concerning way, offload ethical responsibilities from developers and regulators to ill-equipped users and host environments. Four key trends illustrating such ethics dumping are identified: (1) AI developers embedding ethics through coded value assumptions, (2) AI ethics guidelines promoting broad or unactionable principles disconnected from local contexts, (3) institutions implementing AI systems without evaluating ethical implications, and (4) decision-makers enacting ethical governance frameworks disconnected from practice. Mitigating AI ethics dumping requires empowering users, fostering stakeholder engagement in norm-setting, harmonizing ethical guidelines while allowing flexibility for local variation, and establishing clear accountability mechanisms across the AI ecosystem.
Affiliation(s)
- Gavin Victor: Philosophy Department, Simon Fraser University, Burnaby, BC, Canada
17
Reiter GS, Mai J, Riedl S, Birner K, Frank S, Bogunovic H, Schmidt-Erfurth U. AI in the clinical management of GA: A novel therapeutic universe requires novel tools. Prog Retin Eye Res 2024; 103:101305. [PMID: 39343193 DOI: 10.1016/j.preteyeres.2024.101305] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2024] [Revised: 09/25/2024] [Accepted: 09/26/2024] [Indexed: 10/01/2024]
Abstract
Regulatory approval of the first two therapeutic substances for the management of geographic atrophy (GA) secondary to age-related macular degeneration (AMD) is a major breakthrough following failure of numerous previous trials. However, in the absence of therapeutic standards, diagnostic tools are a key challenge as functional parameters in GA are hard to provide. The majority of anatomical biomarkers are subclinical, necessitating advanced and sensitive image analyses. In contrast to fundus autofluorescence (FAF), optical coherence tomography (OCT) provides high-resolution visualization of neurosensory layers, including photoreceptors, and other features that are beyond the scope of human expert assessment. Artificial intelligence (AI)-based methodology strongly enhances identification and quantification of clinically relevant GA-related sub-phenotypes. Introduction of OCT-based biomarker analysis provides novel insight into the pathomechanisms of disease progression and therapeutic response, moving beyond the limitations of conventional descriptive assessment. Accordingly, the Food and Drug Administration (FDA) has provided a paradigm shift in recognizing ellipsoid zone (EZ) attenuation as a primary outcome measure in GA clinical trials. In this review, the transition from previous to future GA classification and management is described. With the advent of AI tools, diagnostic and therapeutic concepts have changed substantially in monitoring and screening of GA disease. Novel technology, combined with pathophysiological knowledge and understanding of the therapeutic response to GA treatments, is currently opening the path for automated, efficient and individualized patient care with great potential to improve access to timely treatment and reduce health disparities.
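Quantifying ellipsoid zone (EZ) attenuation is one place where the OCT-based biomarker analysis described above becomes very concrete. The sketch below is a simplified, hypothetical illustration rather than the authors' pipeline: it thresholds a synthetic en-face EZ-thickness map and converts the attenuated pixels into an area; the threshold value and scan geometry are assumed.

# Hypothetical illustration: quantify EZ attenuation area from an en-face
# EZ-thickness map (values in micrometres). Not the authors' method.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 512 x 512 en-face map of EZ thickness (um) with a central patch of loss.
ez_thickness = rng.normal(loc=22.0, scale=3.0, size=(512, 512))
ez_thickness[200:320, 180:300] = rng.normal(loc=2.0, scale=1.0, size=(120, 120))

PIXEL_SPACING_MM = 6.0 / 512          # assumed 6 mm x 6 mm scan field
ATTENUATION_THRESHOLD_UM = 10.0       # assumed cut-off for "attenuated" EZ

attenuated = ez_thickness < ATTENUATION_THRESHOLD_UM
area_mm2 = attenuated.sum() * PIXEL_SPACING_MM ** 2

print(f"EZ attenuation area: {area_mm2:.2f} mm^2 "
      f"({100 * attenuated.mean():.1f}% of the scanned field)")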
Affiliation(s)
- Gregor S Reiter, Julia Mai, Sophie Riedl, Klaudia Birner, Sophie Frank, Hrvoje Bogunovic, Ursula Schmidt-Erfurth: Department of Ophthalmology and Optometry, Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria.
18
Benito GV, Goldberg X, Brachowicz N, Castaño-Vinyals G, Blay N, Espinosa A, Davidhi F, Torres D, Kogevinas M, de Cid R, Petrone P. Machine learning for anxiety and depression profiling and risk assessment in the aftermath of an emergency. Artif Intell Med 2024; 157:102991. [PMID: 39383706 DOI: 10.1016/j.artmed.2024.102991] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/02/2024] [Revised: 09/23/2024] [Accepted: 09/26/2024] [Indexed: 10/11/2024]
Abstract
BACKGROUND & OBJECTIVES Mental health disorders pose an increasing public health challenge worsened by the COVID-19 pandemic. The pandemic highlighted gaps in preparedness, emphasizing the need for early identification of at-risk groups and targeted interventions. This study aims to develop a risk assessment tool for anxiety, depression, and self-perceived stress using machine learning (ML) and explainable AI to identify key risk factors and stratify the population into meaningful risk profiles. METHODS We utilized a cohort of 9291 individuals from Northern Spain, with extensive post-COVID-19 mental health surveys. ML classification algorithms predicted depression, anxiety, and self-reported stress in three classes: healthy, mild, and severe outcomes. A novel combination of SHAP (SHapley Additive exPlanations) and UMAP (Uniform Manifold Approximation and Projection) was employed to interpret model predictions and facilitate the identification of high-risk phenotypic clusters. RESULTS The mean macro-averaged one-vs-one AUROC was 0.77 (± 0.01) for depression, 0.72 (± 0.01) for anxiety, and 0.73 (± 0.02) for self-perceived stress. Key risk factors included poor self-reported health, chronic mental health conditions, and poor social support. High-risk profiles, such as women with reduced sleep hours, were identified for self-perceived stress. Binary classification of healthy vs. at-risk classes yielded F1-Scores over 0.70. CONCLUSIONS Combining SHAP with UMAP for risk profile stratification offers valuable insights for developing effective interventions and shaping public health policies. This data-driven approach to mental health preparedness, when validated in real-world scenarios, can significantly address the mental health impact of public health crises like COVID-19.
Affiliation(s)
- Guillermo Villanueva Benito: Barcelona Institute for Global Health (ISGlobal), C/ del Dr. Aiguader, 88, Barcelona 08003, Catalonia, Spain; Universitat Pompeu Fabra (UPF), Spain
- Ximena Goldberg, Nicolai Brachowicz: Barcelona Institute for Global Health (ISGlobal), C/ del Dr. Aiguader, 88, Barcelona 08003, Catalonia, Spain
- Gemma Castaño-Vinyals: Barcelona Institute for Global Health (ISGlobal), C/ del Dr. Aiguader, 88, Barcelona 08003, Catalonia, Spain; Universitat Pompeu Fabra (UPF), Spain; CIBER de Epidemiología y Salud Pública (CIBERESP), Spain
- Natalia Blay: Genomes for Life-GCAT lab. CORE program. Germans Trias I Pujol Research Institute (IGTP), Camí de les Escoles, s/n, Badalona 08916, Catalonia, Spain
- Ana Espinosa, Flavia Davidhi, Diego Torres, Manolis Kogevinas: Barcelona Institute for Global Health (ISGlobal), C/ del Dr. Aiguader, 88, Barcelona 08003, Catalonia, Spain
- Rafael de Cid: Genomes for Life-GCAT lab. CORE program. Germans Trias I Pujol Research Institute (IGTP), Camí de les Escoles, s/n, Badalona 08916, Catalonia, Spain
- Paula Petrone: Barcelona Institute for Global Health (ISGlobal), C/ del Dr. Aiguader, 88, Barcelona 08003, Catalonia, Spain.
19
Rhim J, Gallois H, Ravitsky V, Bélisle-Pipon JC. Beyond Consent: The MAMLS in the Room. THE AMERICAN JOURNAL OF BIOETHICS : AJOB 2024; 24:85-88. [PMID: 39283388 DOI: 10.1080/15265161.2024.2388737] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/20/2024]
20
Chen S, Yu J, Chamouni S, Wang Y, Li Y. Integrating machine learning and artificial intelligence in life-course epidemiology: pathways to innovative public health solutions. BMC Med 2024; 22:354. [PMID: 39218895 PMCID: PMC11367811 DOI: 10.1186/s12916-024-03566-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/22/2024] [Accepted: 08/19/2024] [Indexed: 09/04/2024] Open
Abstract
The integration of machine learning (ML) and artificial intelligence (AI) techniques in life-course epidemiology offers remarkable opportunities to advance our understanding of the complex interplay between biological, social, and environmental factors that shape health trajectories across the lifespan. This perspective summarizes the current applications, discusses future potential and challenges, and provides recommendations for harnessing ML and AI technologies to develop innovative public health solutions. ML and AI have been increasingly applied in epidemiological studies, demonstrating their ability to handle large, complex datasets, identify intricate patterns and associations, integrate multiple and multimodal data types, improve predictive accuracy, and enhance causal inference methods. In life-course epidemiology, these techniques can help identify sensitive periods and critical windows for intervention, model complex interactions between risk factors, predict individual and population-level disease risk trajectories, and strengthen causal inference in observational studies. By leveraging the five principles of life-course research proposed by Elder and Shanahan (lifespan development, agency, time and place, timing, and linked lives), we discuss a framework for applying ML and AI to uncover novel insights and inform targeted interventions. However, the successful integration of these technologies faces challenges related to data quality, model interpretability, bias, privacy, and equity. To fully realize the potential of ML and AI in life-course epidemiology, fostering interdisciplinary collaborations, developing standardized guidelines, advocating for their integration in public health decision-making, prioritizing fairness, and investing in training and capacity building are essential. By responsibly harnessing the power of ML and AI, we can take significant steps towards creating healthier and more equitable futures across the life course.
Affiliation(s)
- Shanquan Chen: Faculty of Epidemiology and Population Health, London School of Hygiene & Tropical Medicine, Keppel Street, London, WC1E 7HT, UK.
- Jiazhou Yu: Department of Medicine and Therapeutics, The Chinese University of Hong Kong, Hong Kong SAR, China
- Sarah Chamouni: Faculty of Epidemiology and Population Health, London School of Hygiene & Tropical Medicine, Keppel Street, London, WC1E 7HT, UK
- Yuqi Wang: Department of Computer Science, University College London, London, WC1E 6BT, UK
- Yunfei Li: Department of Neurobiology, Care Sciences and Society, Karolinska Institutet, Stockholm, 171 64, Sweden.
21
Ciaparrone C, Maffei E, L'Imperio V, Pisapia P, Eloy C, Fraggetta F, Zeppa P, Caputo A. Computer-assisted urine cytology: Faster, cheaper, better? Cytopathology 2024; 35:634-641. [PMID: 38894608 DOI: 10.1111/cyt.13412] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2024] [Revised: 06/05/2024] [Accepted: 06/07/2024] [Indexed: 06/21/2024]
Abstract
Recent advancements in computer-assisted diagnosis (CAD) have catalysed significant progress in pathology, particularly in the realm of urine cytopathology. This review synthesizes the latest developments and challenges in CAD for diagnosing urothelial carcinomas, addressing the limitations of traditional urinary cytology. Through a literature review, we identify and analyse CAD models and algorithms developed for urine cytopathology, highlighting their methodologies and performance metrics. We discuss the potential of CAD to improve diagnostic accuracy, efficiency and patient outcomes, emphasizing its role in streamlining workflow and reducing errors. Furthermore, CAD tools have shown potential in exploring pathological conditions, uncovering novel biomarkers and prognostic/predictive features previously unknown or unseen. Finally, we examine the practical issues surrounding the integration of CAD into clinical practice, including regulatory approval, validation and training for pathologists. Despite the promising results, challenges remain, necessitating further research and validation efforts. Overall, CAD presents a transformative opportunity to revolutionize diagnostic practices in urine cytopathology, paving the way for enhanced patient care and outcomes.
Affiliation(s)
- Chiara Ciaparrone, Elisabetta Maffei: Department of Pathology, University Hospital of Salerno, Salerno, Italy
- Vincenzo L'Imperio: Department of Medicine and Surgery, Pathology, IRCCS Fondazione San Gerardo dei Tintori, University of Milano-Bicocca, Milan, Italy
- Pasquale Pisapia: Department of Public Health, University of Naples "Federico II", Naples, Italy
- Catarina Eloy: Pathology Laboratory, Institute of Molecular Pathology and Immunology of University of Porto (IPATIMUP), Porto, Portugal
- Pio Zeppa, Alessandro Caputo: Department of Pathology, University Hospital of Salerno, Salerno, Italy; Department of Medicine and Surgery, University of Salerno, Baronissi, Italy
22
Patino GA, Roberts LW. The Need for Greater Transparency in Journal Submissions That Report Novel Machine Learning Models in Health Professions Education. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2024; 99:935-937. [PMID: 38924500 DOI: 10.1097/acm.0000000000005793] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/28/2024]
23
Abramoff MD, Char D. What Do We Do with Physicians When Autonomous AI-Enabled Workflow is Better for Patient Outcomes? THE AMERICAN JOURNAL OF BIOETHICS : AJOB 2024; 24:93-96. [PMID: 39225989 DOI: 10.1080/15265161.2024.2377111] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/04/2024]
24
Lobato LC, Paul S, Cordioli JA. Stochastic modeling of the human middle ear dynamics under pathological conditions. Comput Biol Med 2024; 179:108802. [PMID: 38959526 DOI: 10.1016/j.compbiomed.2024.108802] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2024] [Revised: 06/24/2024] [Accepted: 06/24/2024] [Indexed: 07/05/2024]
Abstract
BACKGROUND Although the dynamics of the middle ear (ME) have been modeled since the mid-twentieth century, only recently have stochastic approaches started to be applied. In this study, a stochastic model of the ME was utilized to predict the ME dynamics under both healthy and pathological conditions. METHODS The deterministic ME model is based on a lumped-parameter representation, while the stochastic model was developed using a probabilistic non-parametric approach that randomizes the deterministic model. Subsequently, the ME model was modified to represent the ME under pathological conditions. Furthermore, the simulated data were used to develop a classifier model of the ME condition based on a machine learning algorithm. RESULTS The ME model under healthy conditions exhibited good agreement with statistical experimental results. The ranges of probabilities from models under pathological conditions were qualitatively compared to individual experimental data, revealing similarities. Moreover, the classifier model presented promising results. DISCUSSION The results help elucidate how the ME dynamics, under different conditions, can overlap across various frequency ranges. Despite the promising results, improvements in the stochastic and classifier models are necessary. Nevertheless, this study serves as a starting point that can yield valuable tools for researchers and clinicians.
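The idea of randomizing a deterministic lumped-parameter model can be illustrated with a toy Monte Carlo example. The sketch below is not the authors' model: it uses a single mass-spring-damper analogue with simple lognormal parameter scatter (the paper itself uses a probabilistic non-parametric approach) and reports percentile bands of the magnitude response.

# Toy Monte Carlo version of a lumped-parameter middle-ear analogue:
# a single mass-spring-damper whose parameters are randomized, giving
# probability bands for the frequency response. Illustrative only.
import numpy as np

rng = np.random.default_rng(42)

# Nominal (deterministic) parameters of the 1-DOF analogue -- assumed values.
m0, c0, k0 = 2.5e-6, 3.0e-3, 1.2e3      # kg, N*s/m, N/m
freqs = np.logspace(2, 4, 200)           # 100 Hz to 10 kHz
omega = 2 * np.pi * freqs

n_samples = 2000
spread = 0.15                             # ~15% lognormal scatter (assumption)
m = m0 * rng.lognormal(0.0, spread, n_samples)
c = c0 * rng.lognormal(0.0, spread, n_samples)
k = k0 * rng.lognormal(0.0, spread, n_samples)

# Displacement-per-unit-force magnitude |H(w)| for every sampled parameter set.
H = 1.0 / np.abs(-np.outer(m, omega**2) + 1j * np.outer(c, omega) + k[:, None])

p5, p50, p95 = np.percentile(H, [5, 50, 95], axis=0)
i_1k = np.argmin(np.abs(freqs - 1000))
print("median |H| at 1 kHz:", p50[i_1k])
print("90% band width at 1 kHz:", p95[i_1k] - p5[i_1k])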
Affiliation(s)
- Lucas C Lobato, Stephan Paul, Júlio A Cordioli: Acoustic and Vibration Laboratory, Federal University of Santa Catarina, Florianopolis, 88040-900, Brazil
25
Rajagopal A, Ayanian S, Ryu AJ, Qian R, Legler SR, Peeler EA, Issa M, Coons TJ, Kawamoto K. Machine Learning Operations in Health Care: A Scoping Review. MAYO CLINIC PROCEEDINGS. DIGITAL HEALTH 2024; 2:421-437. [PMID: 40206123 PMCID: PMC11975983 DOI: 10.1016/j.mcpdig.2024.06.009] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 04/11/2025]
Abstract
The use of machine learning tools in health care is rapidly expanding. However, the processes that support these tools in deployment, that is, machine learning operations, are still emerging. The purpose of this work was not only to provide a comprehensive synthesis of existing literature in the field but also to identify gaps and offer insights for adoption in clinical practice. A scoping review was conducted using the MEDLINE, PubMed, Google Scholar, Embase, and Scopus databases. We used MeSH and non-MeSH search terms to identify pertinent articles, with the authors performing 2 screening phases and assigning relevance scores: 148 English language articles most salient to the review were eligible for inclusion; 98 offered the most unique information and these were supplemented by 50 additional sources, yielding 148 references. From the 148 references, we distilled 7 key topic areas, based on a synthesis of the available literature and how that aligned with practitioner needs. The 7 topic areas were machine learning model monitoring; automated retraining systems; ethics, equity, and bias; clinical workflow integration; infrastructure, human resources, and technology stack; regulatory considerations; and financial considerations. This review provides an overview of best practices and knowledge gaps of this domain in health care and identifies the strengths and weaknesses of the literature, which may be useful to health care machine learning practitioners and consumers.
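Of the seven topic areas, model monitoring is the most readily reduced to code. The sketch below is a generic illustration, not drawn from any of the reviewed articles: it computes a population stability index (PSI) between a training-time reference distribution and a production batch of one feature, and flags the model for retraining review when drift exceeds an assumed threshold.

# Generic drift-monitoring sketch: population stability index (PSI) between
# a training-time reference sample and a production batch of one feature.
import numpy as np

def psi(reference, production, n_bins=10, eps=1e-6):
    """PSI over quantile bins of the reference distribution."""
    edges = np.quantile(reference, np.linspace(0, 1, n_bins + 1))
    ref_counts, _ = np.histogram(reference, bins=edges)
    prod_counts, _ = np.histogram(np.clip(production, edges[0], edges[-1]), bins=edges)
    ref_frac = ref_counts / ref_counts.sum() + eps
    prod_frac = prod_counts / prod_counts.sum() + eps
    return float(np.sum((ref_frac - prod_frac) * np.log(ref_frac / prod_frac)))

rng = np.random.default_rng(1)
reference_batch = rng.normal(0.0, 1.0, 10_000)    # e.g. a lab value at training time
production_batch = rng.normal(0.4, 1.2, 2_000)    # shifted distribution in deployment

DRIFT_THRESHOLD = 0.2   # assumed action threshold; 0.1-0.25 is a common heuristic
score = psi(reference_batch, production_batch)
print(f"PSI = {score:.3f};",
      "trigger retraining review" if score > DRIFT_THRESHOLD else "within tolerance")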
Affiliation(s)
- Anjali Rajagopal: Department of Medicine, Artificial Intelligence and Innovation, Mayo Clinic, Rochester, MN
- Shant Ayanian, Alexander J. Ryu, Ray Qian, Sean R. Legler, Eric A. Peeler, Meltiady Issa: Division of Hospital Internal Medicine, Department of Medicine, Mayo Clinic, Rochester, MN
- Trevor J. Coons: Heart, Vascular and Thoracic Institute, Cleveland Clinic Abu Dhabi, United Arab Emirates
- Kensaku Kawamoto: Department of Biomedical Informatics, University of Utah, Salt Lake City, UT
26
Federico CA, Trotsyuk AA. Biomedical Data Science, Artificial Intelligence, and Ethics: Navigating Challenges in the Face of Explosive Growth. Annu Rev Biomed Data Sci 2024; 7:1-14. [PMID: 38598860 DOI: 10.1146/annurev-biodatasci-102623-104553] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/12/2024]
Abstract
Advances in biomedical data science and artificial intelligence (AI) are profoundly changing the landscape of healthcare. This article reviews the ethical issues that arise with the development of AI technologies, including threats to privacy, data security, consent, and justice, as they relate to donors of tissue and data. It also considers broader societal obligations, including the importance of assessing the unintended consequences of AI research in biomedicine. In addition, this article highlights the challenge of rapid AI development against the backdrop of disparate regulatory frameworks, calling for a global approach to address concerns around data misuse, unintended surveillance, and the equitable distribution of AI's benefits and burdens. Finally, a number of potential solutions to these ethical quandaries are offered. Namely, the merits of advocating for a collaborative, informed, and flexible regulatory approach that balances innovation with individual rights and public welfare, fostering a trustworthy AI-driven healthcare ecosystem, are discussed.
Affiliation(s)
- Carole A Federico, Artem A Trotsyuk: Center for Biomedical Ethics, Stanford University School of Medicine, Stanford, California, USA
27
Makarov V, Chabbert C, Koletou E, Psomopoulos F, Kurbatova N, Ramirez S, Nelson C, Natarajan P, Neupane B. Good machine learning practices: Learnings from the modern pharmaceutical discovery enterprise. Comput Biol Med 2024; 177:108632. [PMID: 38788373 DOI: 10.1016/j.compbiomed.2024.108632] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2024] [Revised: 05/07/2024] [Accepted: 05/18/2024] [Indexed: 05/26/2024]
Abstract
Machine Learning (ML) and Artificial Intelligence (AI) have become an integral part of the drug discovery and development value chain. Many teams in the pharmaceutical industry nevertheless report the challenges associated with the timely, cost effective and meaningful delivery of ML and AI powered solutions for their scientists. We sought to better understand what these challenges were and how to overcome them by performing an industry wide assessment of the practices in AI and Machine Learning. Here we report results of the systematic business analysis of the personas in the modern pharmaceutical discovery enterprise in relation to their work with the AI and ML technologies. We identify 23 common business problems that individuals in these roles face when they encounter AI and ML technologies at work, and describe best practices (Good Machine Learning Practices) that address these issues.
Affiliation(s)
- Vladimir Makarov: The Pistoia Alliance, 401 Edgewater Place, Suite 600, Wakefield, MA, 01880, USA.
28
Bouhouita-Guermech S, Haidar H. Scoping Review Shows the Dynamics and Complexities Inherent to the Notion of "Responsibility" in Artificial Intelligence within the Healthcare Context. Asian Bioeth Rev 2024; 16:315-344. [PMID: 39022380 PMCID: PMC11250714 DOI: 10.1007/s41649-024-00292-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/21/2023] [Revised: 03/06/2024] [Accepted: 03/07/2024] [Indexed: 07/20/2024] Open
Abstract
The increasing integration of artificial intelligence (AI) in healthcare presents a host of ethical, legal, social, and political challenges involving various stakeholders. These challenges prompt various studies proposing frameworks and guidelines to tackle these issues, emphasizing distinct phases of AI development, deployment, and oversight. As a result, the notion of responsible AI has become widespread, incorporating ethical principles such as transparency, fairness, responsibility, and privacy. This paper explores the existing literature on AI use in healthcare to examine how it addresses, defines, and discusses the concept of responsibility. We conducted a scoping review of literature related to AI responsibility in healthcare, searching databases and reference lists between January 2017 and January 2022 for terms related to "responsibility" and "AI in healthcare", and their derivatives. Following screening, 136 articles were included. Data were grouped into four thematic categories: (1) the variety of terminology used to describe and address responsibility; (2) principles and concepts associated with responsibility; (3) stakeholders' responsibilities in AI clinical development, use, and deployment; and (4) recommendations for addressing responsibility concerns. The results show the lack of a clear definition of AI responsibility in healthcare and highlight the importance of ensuring responsible development and implementation of AI in healthcare. Further research is necessary to clarify this notion to contribute to developing frameworks regarding the type of responsibility (ethical/moral/professional, legal, and causal) of various stakeholders involved in the AI lifecycle.
Affiliation(s)
- Hazar Haidar: Ethics Programs, Department of Letters and Humanities, University of Quebec at Rimouski, Rimouski, Québec, Canada
29
Taveekitworachai P, Chanmas G, Paliyawan P, Thawonmas R, Nukoolkit C, Dajpratham P, Thawonmas R. A systematic review of major evaluation metrics for simulator-based automatic assessment of driving after stroke. Heliyon 2024; 10:e32930. [PMID: 39021930 PMCID: PMC11252877 DOI: 10.1016/j.heliyon.2024.e32930] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2024] [Revised: 06/12/2024] [Accepted: 06/12/2024] [Indexed: 07/20/2024] Open
Abstract
Background: Simulator-based driving assessments (SA) have recently been used and studied for various purposes, particularly for post-stroke patients. Automating such assessment has potential benefits especially on reducing financial cost and time. Nevertheless, there currently exists no clear guideline on assessment techniques and metrics available for SA for post-stroke patients. Therefore, this systematic review is conducted to explore such techniques and establish guidelines for evaluation metrics. Objective: This review aims to find: (a) major evaluation metrics for automatic SA in post-stroke patients and (b) assessment inputs and techniques for such evaluation metrics. Methods: The study follows the PRISMA guideline. Systematic searches were performed on PubMed, Web of Science, ScienceDirect, ACM Digital Library, and IEEE Xplore Digital Library for articles published from January 1, 2010, to December 31, 2023. This review targeted journal articles written in English about automatic performance assessment of simulator-based driving by post-stroke patients. A narrative synthesis was provided for the included studies. Results: The review included six articles with a total of 239 participants. Across all of the included studies, we discovered 49 distinct assessment inputs. Threshold-based, machine-learning-based, and driving simulator calculation approaches are three primary types of assessment techniques and evaluation metrics identified in the review. Discussion: Most studies incorporated more than one type of input, indicating the importance of a comprehensive evaluation of driving abilities. Threshold-based techniques and metrics were the most commonly used in all studies, likely due to their simplicity. An existing relevant review also highlighted the limited number of studies in this area, underscoring the need for further research to establish the validity and effectiveness of simulator-based automatic assessment of driving (SAAD). Conclusions: More studies should be conducted on various aspects of SAAD to explore and validate this type of assessment.
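Threshold-based techniques, the most common category identified in the review, are straightforward to express in code. The sketch below is a hypothetical example rather than a metric set taken from the included studies: simulator outputs are compared against per-metric cut-offs (all names and thresholds are invented for illustration) to yield per-metric results and an overall pass/fail decision.

# Hypothetical threshold-based scoring of one simulator session.
# Metric names and cut-offs are illustrative, not taken from the review.

THRESHOLDS = {
    "mean_lane_deviation_m": ("max", 0.50),   # pass if value <= 0.50 m
    "reaction_time_s":       ("max", 1.20),   # pass if value <= 1.20 s
    "collisions":            ("max", 0),      # pass if no collisions
    "speed_compliance_pct":  ("min", 85.0),   # pass if value >= 85%
}

def assess(session: dict) -> dict:
    results = {}
    for metric, (direction, cutoff) in THRESHOLDS.items():
        value = session[metric]
        passed = value <= cutoff if direction == "max" else value >= cutoff
        results[metric] = {"value": value, "cutoff": cutoff, "pass": passed}
    results["overall_pass"] = all(r["pass"] for r in results.values())
    return results

example_session = {"mean_lane_deviation_m": 0.42, "reaction_time_s": 1.35,
                   "collisions": 0, "speed_compliance_pct": 91.0}

for metric, outcome in assess(example_session).items():
    print(metric, outcome)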
Affiliation(s)
- Pittawat Taveekitworachai, Gunt Chanmas: Graduate School of Information Science and Engineering, Ritsumeikan University, 2-150 Iwakura-cho, Ibaraki, 567-8570, Osaka, Japan
- Pujana Paliyawan: Ritsumeikan Center for Game Studies, Ritsumeikan University, 56-1 Toji-in Kitamachi, Kita, 603-8577, Kyoto, Japan
- Ramita Thawonmas: School of Tropical Medicine and Global Health, Nagasaki University, 1-12-4 Sakamoto, Nagasaki City, 852-8523, Nagasaki, Japan
- Chakarida Nukoolkit: School of Information Technology, King Mongkut's University of Technology Thonburi, 126 Pracha Uthit Road, Bang Mod, Thung Khru, 10140, Bangkok, Thailand
- Piyapat Dajpratham: Department of Rehabilitation Medicine, Faculty of Medicine Siriraj Hospital, Mahidol University, 2 Wanglang Road, Siriraj, Bangkok Noi, 10700, Bangkok, Thailand
- Ruck Thawonmas: Department of Information Science and Engineering, College School of Information Science and Engineering, Ritsumeikan University, 2-150 Iwakura-cho, Ibaraki, 567-8570, Osaka, Japan
30
Zhou K, Gattinger G. The Evolving Regulatory Paradigm of AI in MedTech: A Review of Perspectives and Where We Are Today. Ther Innov Regul Sci 2024; 58:456-464. [PMID: 38528278 PMCID: PMC11043174 DOI: 10.1007/s43441-024-00628-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2023] [Accepted: 02/04/2024] [Indexed: 03/27/2024]
Abstract
Artificial intelligence (AI)-enabled technologies in the MedTech sector hold the promise to transform healthcare delivery by improving access, quality, and outcomes. As the regulatory contours of these technologies are being defined, there is a notable lack of literature on the key stakeholders such as the organizations and interest groups that have a significant input in shaping the regulatory framework. This article explores the perspectives and contributions of these stakeholders in shaping the regulatory paradigm of AI-enabled medical technologies. The formation of an AI regulatory framework requires the convergence of ethical, regulatory, technical, societal, and practical considerations. These multiple perspectives contribute to the various dimensions of an evolving regulatory paradigm. From the global governance guidelines set by the World Health Organization (WHO) to national regulations, the article sheds light not just on these multiple perspectives but also on their interconnectedness in shaping the regulatory landscape of AI.
Affiliation(s)
- Karen Zhou: Northeastern University, Toronto, ON, Canada.
31
Khan SD, Hoodbhoy Z, Raja MHR, Kim JY, Hogg HDJ, Manji AAA, Gulamali F, Hasan A, Shaikh A, Tajuddin S, Khan NS, Patel MR, Balu S, Samad Z, Sendak MP. Frameworks for procurement, integration, monitoring, and evaluation of artificial intelligence tools in clinical settings: A systematic review. PLOS DIGITAL HEALTH 2024; 3:e0000514. [PMID: 38809946 PMCID: PMC11135672 DOI: 10.1371/journal.pdig.0000514] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/04/2023] [Accepted: 04/18/2024] [Indexed: 05/31/2024]
Abstract
Research on the applications of artificial intelligence (AI) tools in medicine has increased exponentially over the last few years, but its implementation in clinical practice has not seen a commensurate increase, with a lack of consensus on implementing and maintaining such tools. This systematic review aims to summarize frameworks focusing on procuring, implementing, monitoring, and evaluating AI tools in clinical practice. A comprehensive literature search, following PRISMA guidelines, was performed on MEDLINE, Wiley Cochrane, Scopus, and EBSCO databases, to identify and include articles recommending practices, frameworks or guidelines for AI procurement, integration, monitoring, and evaluation. From the included articles, data regarding study aim, use of a framework, rationale of the framework, details regarding AI implementation involving procurement, integration, monitoring, and evaluation were extracted. The extracted details were then mapped onto the Donabedian Plan, Do, Study, Act cycle domains. The search yielded 17,537 unique articles, out of which 47 were evaluated for inclusion based on their full texts and 25 articles were included in the review. Common themes extracted included transparency, feasibility of operation within existing workflows, integrating into existing workflows, validation of the tool using predefined performance indicators and improving the algorithm and/or adjusting the tool to improve performance. Among the four domains (Plan, Do, Study, Act) the most common domain was Plan (84%, n = 21), followed by Study (60%, n = 15), Do (52%, n = 13), and Act (24%, n = 6). Among 172 authors, only 1 (0.6%) was from a low-income country (LIC) and 2 (1.2%) were from lower-middle-income countries (LMICs). Healthcare professionals cite the implementation of AI tools within clinical settings as challenging owing to low levels of evidence focusing on integration in the Do and Act domains. The current healthcare AI landscape calls for increased data sharing and knowledge translation to facilitate common goals and reap maximum clinical benefit.
Affiliation(s)
- Sarim Dawar Khan: CITRIC Health Data Science Centre, Department of Medicine, Aga Khan University, Karachi, Pakistan
- Zahra Hoodbhoy: CITRIC Health Data Science Centre, Department of Medicine, Aga Khan University, Karachi, Pakistan; Department of Paediatrics and Child Health, Aga Khan University, Karachi, Pakistan
- Jee Young Kim: Duke Institute for Health Innovation, Duke University School of Medicine, Durham, North Carolina, United States
- Henry David Jeffry Hogg: Population Health Science Institute, Newcastle University, Newcastle upon Tyne, United Kingdom; Newcastle upon Tyne Hospitals NHS Foundation Trust, Newcastle upon Tyne, United Kingdom; Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Afshan Anwar Ali Manji: CITRIC Health Data Science Centre, Department of Medicine, Aga Khan University, Karachi, Pakistan
- Freya Gulamali: Duke Institute for Health Innovation, Duke University School of Medicine, Durham, North Carolina, United States
- Alifia Hasan: Duke Institute for Health Innovation, Duke University School of Medicine, Durham, North Carolina, United States
- Asim Shaikh: CITRIC Health Data Science Centre, Department of Medicine, Aga Khan University, Karachi, Pakistan
- Salma Tajuddin: CITRIC Health Data Science Centre, Department of Medicine, Aga Khan University, Karachi, Pakistan
- Nida Saddaf Khan: CITRIC Health Data Science Centre, Department of Medicine, Aga Khan University, Karachi, Pakistan
- Manesh R. Patel: Duke Clinical Research Institute, Duke University School of Medicine, Durham, North Carolina, United States; Division of Cardiology, Duke University School of Medicine, Durham, North Carolina, United States
- Suresh Balu: Duke Institute for Health Innovation, Duke University School of Medicine, Durham, North Carolina, United States
- Zainab Samad: CITRIC Health Data Science Centre, Department of Medicine, Aga Khan University, Karachi, Pakistan; Department of Medicine, Aga Khan University, Karachi, Pakistan
- Mark P. Sendak: Duke Institute for Health Innovation, Duke University School of Medicine, Durham, North Carolina, United States
32
Huguet N, Chen J, Parikh RB, Marino M, Flocke SA, Likumahuwa-Ackman S, Bekelman J, DeVoe JE. Applying Machine Learning Techniques to Implementation Science. Online J Public Health Inform 2024; 16:e50201. [PMID: 38648094 DOI: 10.2196/50201] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2023] [Revised: 11/15/2023] [Accepted: 03/14/2024] [Indexed: 04/25/2024] Open
Abstract
Machine learning (ML) approaches could expand the usefulness and application of implementation science methods in clinical medicine and public health settings. The aim of this viewpoint is to introduce a roadmap for applying ML techniques to address implementation science questions, such as predicting what will work best, for whom, under what circumstances, and with what predicted level of support, and what and when adaptation or deimplementation are needed. We describe how ML approaches could be used and discuss challenges that implementation scientists and methodologists will need to consider when using ML throughout the stages of implementation.
Affiliation(s)
- Nathalie Huguet: Department of Family Medicine, Oregon Health & Science University, Portland, OR, United States; BRIDGE-C2 Implementation Science Center for Cancer Control, Oregon Health & Science University, Portland, OR, United States
- Jinying Chen: Section of Preventive Medicine and Epidemiology, Department of Medicine, Boston University Chobanian & Avedisian School of Medicine, Boston, MA, United States; Data Science Core, Boston University Chobanian & Avedisian School of Medicine, Boston, MA, United States; iDAPT Implementation Science Center for Cancer Control, Wake Forest School of Medicine, Winston-Salem, NC, United States
- Ravi B Parikh: Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Miguel Marino, Susan A Flocke, Sonja Likumahuwa-Ackman: Department of Family Medicine, Oregon Health & Science University, Portland, OR, United States; BRIDGE-C2 Implementation Science Center for Cancer Control, Oregon Health & Science University, Portland, OR, United States
- Justin Bekelman: Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States; Penn Center for Cancer Care Innovation, Abramson Cancer Center, Penn Medicine, Philadelphia, PA, United States
- Jennifer E DeVoe: Department of Family Medicine, Oregon Health & Science University, Portland, OR, United States; BRIDGE-C2 Implementation Science Center for Cancer Control, Oregon Health & Science University, Portland, OR, United States
33
Gomez-Cabello CA, Borna S, Pressman S, Haider SA, Haider CR, Forte AJ. Artificial-Intelligence-Based Clinical Decision Support Systems in Primary Care: A Scoping Review of Current Clinical Implementations. Eur J Investig Health Psychol Educ 2024; 14:685-698. [PMID: 38534906 PMCID: PMC10969561 DOI: 10.3390/ejihpe14030045] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2024] [Revised: 03/10/2024] [Accepted: 03/11/2024] [Indexed: 11/11/2024] Open
Abstract
Primary Care Physicians (PCPs) are the first point of contact in healthcare. Because PCPs face the challenge of managing diverse patient populations while maintaining up-to-date medical knowledge and updated health records, this study explores the current outcomes and effectiveness of implementing Artificial Intelligence-based Clinical Decision Support Systems (AI-CDSSs) in Primary Healthcare (PHC). Following the PRISMA-ScR guidelines, we systematically searched five databases, PubMed, Scopus, CINAHL, IEEE, and Google Scholar, and manually searched related articles. Only CDSSs powered by AI targeted to physicians and tested in real clinical PHC settings were included. From a total of 421 articles, 6 met our criteria. We found AI-CDSSs from the US, Netherlands, Spain, and China whose primary tasks included diagnosis support, management and treatment recommendations, and complication prediction. Secondary objectives included lessening physician work burden and reducing healthcare costs. While promising, the outcomes were hindered by physicians' perceptions and cultural settings. This study underscores the potential of AI-CDSSs in improving clinical management, patient satisfaction, and safety while reducing physician workload. However, further work is needed to explore the broad spectrum of applications that the new AI-CDSSs have in several PHC real clinical settings and measure their clinical outcomes.
Affiliation(s)
- Sahar Borna, Sophia Pressman, Syed Ali Haider: Division of Plastic Surgery, Mayo Clinic, Jacksonville, FL 32224, USA
- Clifton R. Haider: Department of Physiology and Biomedical Engineering, Mayo Clinic, Rochester, MN 55902, USA
- Antonio J. Forte: Division of Plastic Surgery, Mayo Clinic, Jacksonville, FL 32224, USA
34
Chen J, Yuan D, Dong R, Cai J, Ai Z, Zhou S. Artificial intelligence significantly facilitates development in the mental health of college students: a bibliometric analysis. Front Psychol 2024; 15:1375294. [PMID: 38515973 PMCID: PMC10955080 DOI: 10.3389/fpsyg.2024.1375294] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2024] [Accepted: 02/26/2024] [Indexed: 03/23/2024] Open
Abstract
Objective College students are currently grappling with severe mental health challenges, and research on artificial intelligence (AI) related to college students' mental health, as a crucial catalyst for promoting psychological well-being, is rapidly advancing. Employing bibliometric methods, this study aims to analyze and discuss the research on AI in college student mental health. Methods Publications pertaining to AI and college student mental health were retrieved from the Web of Science core database. The distribution of publications was analyzed to gauge the predominant productivity. Data on countries, authors, journals, and keywords were analyzed using VOSviewer, exploring collaboration patterns, disciplinary composition, research hotspots and trends. Results Spanning 2003 to 2023, the study encompassed 1722 publications, revealing notable insights: (1) a gradual rise in annual publications, reaching its zenith in 2022; (2) Journal of Affective Disorders and Psychiatry Research emerged as the most productive and influential sources in this field, with significant contributions from China, the United States, and their affiliated higher education institutions; (3) the primary mental health issues were depression and anxiety, with machine learning and AI having the widest range of applications; (4) an imperative for enhanced international and interdisciplinary collaboration; (5) research hotspots exploring factors influencing college student mental health and AI applications. Conclusion This study provides a succinct yet comprehensive overview of this field, facilitating a nuanced understanding of prospective applications of AI in college student mental health. Professionals can leverage this research to discern the advantages, risks, and potential impacts of AI in this critical field.
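The keyword co-occurrence mapping that tools such as VOSviewer perform can be approximated with a few lines of standard-library Python. The sketch below is a generic illustration on invented records, not the study's dataset: it counts how often pairs of author keywords appear together across publications, which is the raw input for a co-occurrence network.

# Minimal keyword co-occurrence count over a toy set of publication records.
# The records are invented; real input would come from Web of Science exports.
from collections import Counter
from itertools import combinations

records = [
    ["artificial intelligence", "depression", "college students"],
    ["machine learning", "anxiety", "college students"],
    ["artificial intelligence", "machine learning", "mental health"],
    ["depression", "anxiety", "mental health", "college students"],
]

pair_counts = Counter()
for keywords in records:
    # Sort within each record so that (a, b) and (b, a) collapse to one pair.
    pair_counts.update(combinations(sorted(set(keywords)), 2))

for (kw1, kw2), n in pair_counts.most_common(5):
    print(f"{kw1} <-> {kw2}: {n}")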
Affiliation(s)
- Jing Chen: Wuhan University China Institute of Boundary and Ocean Studies, Wuhan, China
- Dongfeng Yuan, Ruotong Dong, Jingyi Cai: Faculty of Pharmacy, Hubei University of Chinese Medicine, Wuhan, China
- Zhongzhu Ai: Faculty of Pharmacy, Hubei University of Chinese Medicine, Wuhan, China; Hubei Shizhen Laboratory, Wuhan, China
- Shanshan Zhou: Hubei Shizhen Laboratory, Wuhan, China; The First Clinical Medical School, Hubei University of Chinese Medicine, Wuhan, China
35
Zafar F, Fakhare Alam L, Vivas RR, Wang J, Whei SJ, Mehmood S, Sadeghzadegan A, Lakkimsetti M, Nazir Z. The Role of Artificial Intelligence in Identifying Depression and Anxiety: A Comprehensive Literature Review. Cureus 2024; 16:e56472. [PMID: 38638735 PMCID: PMC11025697 DOI: 10.7759/cureus.56472] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/18/2024] [Indexed: 04/20/2024] Open
Abstract
This narrative literature review undertakes a comprehensive examination of the burgeoning field, tracing the development of artificial intelligence (AI)-powered tools for depression and anxiety detection from the level of intricate algorithms to practical applications. Delivering essential mental health care services is now a significant public health priority. In recent years, AI has become a game-changer in the early identification and intervention of these pervasive mental health disorders. AI tools can potentially empower behavioral healthcare services by helping psychiatrists collect objective data on patients' progress and tasks. This study emphasizes the current understanding of AI, the different types of AI, its current use in multiple mental health disorders, advantages, disadvantages, and future potentials. As technology develops and the digitalization of the modern era increases, there will be a rise in the application of artificial intelligence in psychiatry; therefore, a comprehensive understanding will be needed. We searched PubMed, Google Scholar, and Science Direct using keywords for this. In a recent review of studies using electronic health records (EHR) with AI and machine learning techniques for diagnosing all clinical conditions, roughly 99 publications have been found. Out of these, 35 studies were identified for mental health disorders in all age groups, and among them, six studies utilized EHR data sources. By critically analyzing prominent scholarly works, we aim to illuminate the current state of this technology, exploring its successes, limitations, and future directions. In doing so, we hope to contribute to a nuanced understanding of AI's potential to revolutionize mental health diagnostics and pave the way for further research and development in this critically important domain.
Affiliation(s)
- Fabeha Zafar: Internal Medicine, Dow University of Health Sciences (DUHS), Karachi, PAK
- Rafael R Vivas: Nutrition, Food and Exercise Sciences, Florida State University College of Human Sciences, Tallahassee, USA
- Jada Wang: Medicine, St. George's University, Brooklyn, USA
- See Jia Whei: Internal Medicine, Sriwijaya University, Palembang, IDN
- Zahra Nazir: Internal Medicine, Combined Military Hospital, Quetta, Quetta, PAK
36
Kasun M, Ryan K, Paik J, Lane-McKinley K, Dunn LB, Roberts LW, Kim JP. Academic machine learning researchers' ethical perspectives on algorithm development for health care: a qualitative study. J Am Med Inform Assoc 2024; 31:563-573. [PMID: 38069455 PMCID: PMC10873830 DOI: 10.1093/jamia/ocad238] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2023] [Revised: 10/20/2023] [Accepted: 12/05/2023] [Indexed: 02/18/2024] Open
Abstract
OBJECTIVES We set out to describe academic machine learning (ML) researchers' ethical considerations regarding the development of ML tools intended for use in clinical care. MATERIALS AND METHODS We conducted in-depth, semistructured interviews with a sample of ML researchers in medicine (N = 10) as part of a larger study investigating stakeholders' ethical considerations in the translation of ML tools in medicine. We used a qualitative descriptive design, applying conventional qualitative content analysis in order to allow participant perspectives to emerge directly from the data. RESULTS Every participant viewed their algorithm development work as holding ethical significance. While participants shared positive attitudes toward continued ML innovation, they described concerns related to data sampling and labeling (eg, limitations to mitigating bias; ensuring the validity and integrity of data), and algorithm training and testing (eg, selecting quantitative targets; assessing reproducibility). Participants perceived a need to increase interdisciplinary training across stakeholders and to envision more coordinated and embedded approaches to addressing ethics issues. DISCUSSION AND CONCLUSION Participants described key areas where increased support for ethics may be needed; technical challenges affecting clinical acceptability; and standards related to scientific integrity, beneficence, and justice that may be higher in medicine compared to other industries engaged in ML innovation. Our results help shed light on the perspectives of ML researchers in medicine regarding the range of ethical issues they encounter or anticipate in their work, including areas where more attention may be needed to support the successful development and integration of medical ML tools.
Affiliation(s)
- Max Kasun, Katie Ryan, Jodi Paik, Kyle Lane-McKinley: Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Stanford, CA 94305, United States
- Laura Bodin Dunn: Department of Psychiatry, University of Arkansas for Medical Sciences, Little Rock, AR 72205, United States
- Laura Weiss Roberts, Jane Paik Kim: Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Stanford, CA 94305, United States
37
Hassan J, Saeed SM, Deka L, Uddin MJ, Das DB. Applications of Machine Learning (ML) and Mathematical Modeling (MM) in Healthcare with Special Focus on Cancer Prognosis and Anticancer Therapy: Current Status and Challenges. Pharmaceutics 2024; 16:260. [PMID: 38399314 PMCID: PMC10892549 DOI: 10.3390/pharmaceutics16020260] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2023] [Revised: 01/29/2024] [Accepted: 02/07/2024] [Indexed: 02/25/2024] Open
Abstract
The use of data-driven high-throughput analytical techniques, which has given rise to computational oncology, is undisputed. The widespread use of machine learning (ML) and mathematical modeling (MM)-based techniques is widely acknowledged. These two approaches have fueled the advancement in cancer research and eventually led to the uptake of telemedicine in cancer care. For diagnostic, prognostic, and treatment purposes concerning different types of cancer research, vast databases of varied information with manifold dimensions are required, and indeed, all this information can only be managed by an automated system developed utilizing ML and MM. In addition, MM is being used to probe the relationship between the pharmacokinetics and pharmacodynamics (PK/PD interactions) of anti-cancer substances to improve cancer treatment, and also to refine the quality of existing treatment models by being incorporated at all steps of research and development related to cancer and in routine patient care. This review will serve as a consolidation of the advancement and benefits of ML and MM techniques with a special focus on the area of cancer prognosis and anticancer therapy, leading to the identification of challenges (data quantity, ethical consideration, and data privacy) which are yet to be fully addressed in current studies.
Affiliation(s)
- Jasmin Hassan
- Drug Delivery & Therapeutics Lab, Dhaka 1212, Bangladesh; (J.H.); (S.M.S.)
- Lipika Deka
- Faculty of Computing, Engineering and Media, De Montfort University, Leicester LE1 9BH, UK
- Md Jasim Uddin
- Department of Pharmaceutical Technology, Faculty of Pharmacy, Universiti Malaya, Kuala Lumpur 50603, Malaysia
- Diganta B. Das
- Department of Chemical Engineering, Loughborough University, Loughborough LE11 3TU, UK
38
Sen SK, Green ED, Hutter CM, Craven M, Ideker T, Di Francesco V. Opportunities for basic, clinical, and bioethics research at the intersection of machine learning and genomics. CELL GENOMICS 2024; 4:100466. [PMID: 38190108 PMCID: PMC10794834 DOI: 10.1016/j.xgen.2023.100466] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/04/2022] [Revised: 07/14/2023] [Accepted: 11/20/2023] [Indexed: 01/09/2024]
Abstract
The data-intensive fields of genomics and machine learning (ML) are in an early stage of convergence. Genomics researchers increasingly seek to harness the power of ML methods to extract knowledge from their data; conversely, ML scientists recognize that genomics offers a wealth of large, complex, and well-annotated datasets that can be used as a substrate for developing biologically relevant algorithms and applications. The National Human Genome Research Institute (NHGRI) inquired with researchers working in these two fields to identify common challenges and receive recommendations to better support genomic research efforts using ML approaches. Those included increasing the amount and variety of training datasets by integrating genomic with multiomics, context-specific (e.g., by cell type), and social determinants of health datasets; reducing the inherent biases of training datasets; prioritizing transparency and interpretability of ML methods; and developing privacy-preserving technologies for research participants' data.
Affiliation(s)
- Shurjo K Sen
- National Human Genome Research Institute, National Institutes of Health, Bethesda, MD 20892, USA.
- Eric D Green
- National Human Genome Research Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Carolyn M Hutter
- National Human Genome Research Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Mark Craven
- Department of Computer Sciences, University of Wisconsin-Madison, Madison, WI 53792, USA; Department of Biostatistics and Medical Informatics, University of Wisconsin-Madison, Madison, WI 53792, USA
- Trey Ideker
- Division of Genetics, Department of Medicine, University of California San Diego, La Jolla, CA 92093, USA
- Valentina Di Francesco
- National Human Genome Research Institute, National Institutes of Health, Bethesda, MD 20892, USA
39
Alì M, Fantesini A, Morcella MT, Ibba S, D'Anna G, Fazzini D, Papa S. Adoption of AI in Oncological Imaging: Ethical, Regulatory, and Medical-Legal Challenges. Crit Rev Oncog 2024; 29:29-35. [PMID: 38505879 DOI: 10.1615/critrevoncog.2023050584] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/21/2024]
Abstract
Artificial Intelligence (AI) algorithms have shown great promise in oncological imaging, outperforming or matching radiologists in retrospective studies, signifying their potential for advanced screening capabilities. These AI tools offer valuable support to radiologists, assisting them in critical tasks such as prioritizing reporting, early cancer detection, and precise measurements, thereby bolstering clinical decision-making. With the healthcare landscape witnessing a surge in imaging requests and a decline in available radiologists, the integration of AI has become increasingly appealing. By streamlining workflow efficiency and enhancing patient care, AI presents a transformative solution to the challenges faced by oncological imaging practices. Nevertheless, successful AI integration necessitates navigating various ethical, regulatory, and medical-legal challenges. This review endeavors to provide a comprehensive overview of these obstacles, aiming to foster a responsible and effective implementation of AI in oncological imaging.
Affiliation(s)
- Marco Alì
- Radiology Unit, CDI, Centro Diagnostico Italiano, Via Simone Saint Bon, 20, 20147 Milan, Italy
- Arianna Fantesini
- Suor Orsola Benincasa University, Corso Vittorio Emanuele 292, Naples, Italy; RE:LAB s.r.l., Via Tamburini, 5, 42122 Reggio Emilia, Italy
- Simona Ibba
- CDI Centro Diagnostico Italiano, Via Saint Bon 20, Milan, Italy
- Gennaro D'Anna
- Neuroimaging Unit, ASST Ovest Milanese, Via Papa Giovanni Paolo II, Legnano (Milan), Italy
- Deborah Fazzini
- CDI Centro Diagnostico Italiano, Via Saint Bon 20, Milan, Italy
- Sergio Papa
- Radiology Unit, CDI, Centro Diagnostico Italiano, Via Simone Saint Bon, 20, 20147 Milan, Italy
40
Gupta N, Kasula V, Sanmugananthan P, Panico N, Dubin AH, Sykes DAW, D'Amico RS. SmartWear body sensors for neurological and neurosurgical patients: A review of current and future technologies. World Neurosurg X 2024; 21:100247. [PMID: 38033718 PMCID: PMC10682285 DOI: 10.1016/j.wnsx.2023.100247] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2023] [Accepted: 10/24/2023] [Indexed: 12/02/2023] Open
Abstract
Background/objective Recent technological advances have allowed for the development of smart wearable devices (SmartWear) which can be used to monitor various aspects of patient healthcare. These devices provide clinicians with continuous biometric data collection for patients in both inpatient and outpatient settings. Although these devices have been widely used in fields such as cardiology and orthopedics, their use in the field of neurosurgery and neurology remains in its infancy. Methods A comprehensive literature search for the current and future applications of SmartWear devices in the above conditions was conducted, focusing on outpatient monitoring. Findings Through the integration of sensors which measure parameters such as physical activity, hemodynamic variables, and electrical conductivity - these devices have been applied to patient populations such as those at risk for stroke, suffering from epilepsy, with neurodegenerative disease, with spinal cord injury and/or recovering from neurosurgical procedures. Further, these devices are being tested in various clinical trials and there is a demonstrated interest in the development of new technologies. Conclusion This review provides an in-depth evaluation of the use of SmartWear in selected neurological diseases and neurosurgical applications. It is clear that these devices have demonstrated efficacy in a variety of neurological and neurosurgical applications, however challenges such as data privacy and management must be addressed.
Affiliation(s)
- Nithin Gupta
- Campbell University School of Osteopathic Medicine, Lillington, NC, USA
- Varun Kasula
- Campbell University School of Osteopathic Medicine, Lillington, NC, USA
- Aimee H. Dubin
- Campbell University School of Osteopathic Medicine, Lillington, NC, USA
- David A.W. Sykes
- Department of Neurosurgery, Duke University Medical School, Durham, NC, USA
- Randy S. D'Amico
- Lenox Hill Hospital, Department of Neurosurgery, New York, NY, USA
41
Zheng Y, Rowell B, Chen Q, Kim JY, Kontar RA, Yang XJ, Lester CA. Designing Human-Centered AI to Prevent Medication Dispensing Errors: Focus Group Study With Pharmacists. JMIR Form Res 2023; 7:e51921. [PMID: 38145475 PMCID: PMC10775023 DOI: 10.2196/51921] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2023] [Revised: 11/17/2023] [Accepted: 11/22/2023] [Indexed: 12/26/2023] Open
Abstract
BACKGROUND Medication errors, including dispensing errors, represent a substantial worldwide health risk with significant implications in terms of morbidity, mortality, and financial costs. Although pharmacists use methods like barcode scanning and double-checking for dispensing verification, these measures exhibit limitations. The application of artificial intelligence (AI) in pharmacy verification emerges as a potential solution, offering precision, rapid data analysis, and the ability to recognize medications through computer vision. For AI to be embraced, it must be designed with the end user in mind, fostering trust, clear communication, and seamless collaboration between AI and pharmacists. OBJECTIVE This study aimed to gather pharmacists' feedback in a focus group setting to help inform the initial design of the user interface and iterative designs of the AI prototype. METHODS A multidisciplinary research team engaged pharmacists in a 3-stage process to develop a human-centered AI system for medication dispensing verification. To design the AI model, we used a Bayesian neural network that predicts the dispensed pills' National Drug Code (NDC). Discussion scripts regarding how to design the system and feedback in focus groups were collected through audio recordings and professionally transcribed, followed by a content analysis guided by the Systems Engineering Initiative for Patient Safety and Human-Machine Teaming theoretical frameworks. RESULTS A total of 8 pharmacists participated in 3 rounds of focus groups to identify current challenges in medication dispensing verification, brainstorm solutions, and provide feedback on our AI prototype. Participants considered several teaming scenarios, generally favoring a hybrid teaming model where the AI assists in the verification process and a pharmacist intervenes based on medication risk level and the AI's confidence level. Pharmacists highlighted the need for improving the interpretability of AI systems, such as adding stepwise checkmarks, probability scores, and details about drugs the AI model frequently confuses with the target drug. Pharmacists emphasized the need for simplicity and accessibility. They favored displaying only essential information to prevent overwhelming users with excessive data. Specific design features, such as juxtaposing pill images with their packaging for quick comparisons, were requested. Pharmacists preferred accept, reject, or unsure options. The final prototype interface included (1) checkmarks to compare pill characteristics between the AI-predicted NDC and the prescription's expected NDC, (2) a histogram showing predicted probabilities for the AI-identified NDC, (3) an image of an AI-provided "confused" pill, and (4) an NDC match status (ie, match, unmatched, or unsure). CONCLUSIONS In partnership with pharmacists, we developed a human-centered AI prototype designed to enhance AI interpretability and foster trust. This initiative emphasized human-machine collaboration and positioned AI as an augmentative tool rather than a replacement. This study highlights the process of designing a human-centered AI for dispensing verification, emphasizing its interpretability, confidence visualization, and collaborative human-machine teaming styles.
Affiliation(s)
- Yifan Zheng
- Department of Clinical Pharmacy, College of Pharmacy, University of Michigan, Ann Arbor, MI, United States
- Brigid Rowell
- Department of Clinical Pharmacy, College of Pharmacy, University of Michigan, Ann Arbor, MI, United States
- Qiyuan Chen
- Department of Industrial and Operations Engineering, College of Engineering, University of Michigan, Ann Arbor, MI, United States
- Jin Yong Kim
- Department of Industrial and Operations Engineering, College of Engineering, University of Michigan, Ann Arbor, MI, United States
- Raed Al Kontar
- Department of Industrial and Operations Engineering, College of Engineering, University of Michigan, Ann Arbor, MI, United States
- X Jessie Yang
- Department of Industrial and Operations Engineering, College of Engineering, University of Michigan, Ann Arbor, MI, United States
- Corey A Lester
- Department of Clinical Pharmacy, College of Pharmacy, University of Michigan, Ann Arbor, MI, United States
42
Tan TF, Thirunavukarasu AJ, Campbell JP, Keane PA, Pasquale LR, Abramoff MD, Kalpathy-Cramer J, Lum F, Kim JE, Baxter SL, Ting DSW. Generative Artificial Intelligence Through ChatGPT and Other Large Language Models in Ophthalmology: Clinical Applications and Challenges. OPHTHALMOLOGY SCIENCE 2023; 3:100394. [PMID: 37885755 PMCID: PMC10598525 DOI: 10.1016/j.xops.2023.100394] [Citation(s) in RCA: 21] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/26/2023] [Revised: 08/07/2023] [Accepted: 08/30/2023] [Indexed: 10/28/2023]
Abstract
The rapid progress of large language models (LLMs) driving generative artificial intelligence applications heralds the potential of opportunities in health care. We conducted a review up to April 2023 on Google Scholar, Embase, MEDLINE, and Scopus using the following terms: "large language models," "generative artificial intelligence," "ophthalmology," "ChatGPT," and "eye," based on relevance to this review. From a clinical viewpoint specific to ophthalmologists, we explore from the different stakeholders' perspectives-including patients, physicians, and policymakers-the potential LLM applications in education, research, and clinical domains specific to ophthalmology. We also highlight the foreseeable challenges of LLM implementation into clinical practice, including the concerns of accuracy, interpretability, perpetuating bias, and data security. As LLMs continue to mature, it is essential for stakeholders to jointly establish standards for best practices to safeguard patient safety. Financial Disclosures Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
Affiliation(s)
- Ting Fang Tan
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Arun James Thirunavukarasu
- University of Cambridge School of Clinical Medicine, Cambridge, United Kingdom
- Corpus Christi College, University of Cambridge, Cambridge, United Kingdom
- J. Peter Campbell
- Department of Ophthalmology, Casey Eye Institute, Oregon Health and Science University, Portland, Oregon
- Pearse A. Keane
- Moorfields Eye Hospital, University College London, London, United Kingdom
- Louis R. Pasquale
- Department of Ophthalmology, Icahn School of Medicine at Mount Sinai, New York City, New York
- Michael D. Abramoff
- American Medical Association's Digital Medicine Payment Advisory Group (DMPAG) Artificial Intelligence Workgroup, American Medical Association, Chicago, Illinois
- Department of Ophthalmology, University of Iowa, Iowa City, Iowa
- Digital Diagnostics, Inc, Coralville, Iowa
- Flora Lum
- American Academy of Ophthalmology, San Francisco, California
- Judy E. Kim
- Department of Ophthalmology, Medical College of Wisconsin, Milwaukee, Wisconsin
- Sally L. Baxter
- Division of Ophthalmology Informatics and Data Science, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, La Jolla, California
- Health Department of Biomedical Informatics, University of California San Diego, La Jolla, California
- Daniel Shu Wei Ting
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Byers Eye Institute, Stanford University, Stanford, California
43
Arora A, Alderman JE, Palmer J, Ganapathi S, Laws E, McCradden MD, Oakden-Rayner L, Pfohl SR, Ghassemi M, McKay F, Treanor D, Rostamzadeh N, Mateen B, Gath J, Adebajo AO, Kuku S, Matin R, Heller K, Sapey E, Sebire NJ, Cole-Lewis H, Calvert M, Denniston A, Liu X. The value of standards for health datasets in artificial intelligence-based applications. Nat Med 2023; 29:2929-2938. [PMID: 37884627 PMCID: PMC10667100 DOI: 10.1038/s41591-023-02608-w] [Citation(s) in RCA: 22] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2023] [Accepted: 09/22/2023] [Indexed: 10/28/2023]
Abstract
Artificial intelligence as a medical device is increasingly being applied to healthcare for diagnosis, risk stratification and resource allocation. However, a growing body of evidence has highlighted the risk of algorithmic bias, which may perpetuate existing health inequity. This problem arises in part because of systemic inequalities in dataset curation, unequal opportunity to participate in research and inequalities of access. This study aims to explore existing standards, frameworks and best practices for ensuring adequate data diversity in health datasets. Exploring the body of existing literature and expert views is an important step towards the development of consensus-based guidelines. The study comprises two parts: a systematic review of existing standards, frameworks and best practices for healthcare datasets; and a survey and thematic analysis of stakeholder views of bias, health equity and best practices for artificial intelligence as a medical device. We found that the need for dataset diversity was well described in literature, and experts generally favored the development of a robust set of guidelines, but there were mixed views about how these could be implemented practically. The outputs of this study will be used to inform the development of standards for transparency of data diversity in health datasets (the STANDING Together initiative).
Affiliation(s)
- Anmol Arora
- School of Clinical Medicine, University of Cambridge, Cambridge, UK
- Joseph E Alderman
- Institute of Inflammation and Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
- University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
- National Institute for Health and Care Research Birmingham Biomedical Research Centre, University of Birmingham, Birmingham, UK
- Joanne Palmer
- University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
- National Institute for Health and Care Research Birmingham Biomedical Research Centre, University of Birmingham, Birmingham, UK
- Elinor Laws
- Institute of Inflammation and Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
- University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
- National Institute for Health and Care Research Birmingham Biomedical Research Centre, University of Birmingham, Birmingham, UK
- Melissa D McCradden
- Department of Bioethics, The Hospital for Sick Children, Toronto, Ontario, Canada
- Genetics and Genome Biology, Peter Gilgan Centre for Research and Learning, Toronto, Ontario, Canada
- Dalla Lana School of Public Health, Toronto, Ontario, Canada
- Lauren Oakden-Rayner
- The Australian Institute for Machine Learning, University of Adelaide, Adelaide, South Australia, Australia
- Marzyeh Ghassemi
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA
- Institute for Medical Engineering & Science, Massachusetts Institute of Technology, Cambridge, MA, USA
- Vector Institute, Toronto, Ontario, Canada
- Francis McKay
- The Ethox Centre and the Wellcome Centre for Ethics and Humanities, Nuffield Department of Population Health, University of Oxford, Oxford, UK
- Darren Treanor
- Leeds Teaching Hospitals NHS Trust, Leeds, UK
- University of Leeds, Leeds, UK
- Department of Clinical Pathology and Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
- Center for Medical Image Science and Visualization, Linköping University, Linköping, Sweden
- Bilal Mateen
- Institute for Health Informatics, University College London, London, UK
- Wellcome Trust, London, UK
- Jacqui Gath
- Patient and Public Involvement and Engagement (PPIE) Group, STANDING Together, Birmingham, UK
- Adewole O Adebajo
- Patient and Public Involvement and Engagement (PPIE) Group, STANDING Together, Birmingham, UK
- Rubeta Matin
- Oxford University Hospitals NHS Foundation Trust, Oxford, UK
- Elizabeth Sapey
- Institute of Inflammation and Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
- University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
- National Institute for Health and Care Research Birmingham Biomedical Research Centre, University of Birmingham, Birmingham, UK
- PIONEER, HDR UK Hub in Acute Care, Institute of Inflammation and Ageing, University of Birmingham, Birmingham, UK
- Neil J Sebire
- National Institute for Health and Care Research, Great Ormond Street Hospital Biomedical Research Centre, London, UK
- Great Ormond Street Institute of Child Health, University Hospital London, London, UK
- Melanie Calvert
- National Institute for Health and Care Research Birmingham Biomedical Research Centre, University of Birmingham, Birmingham, UK
- Birmingham Health Partners Centre for Regulatory Science and Innovation, University of Birmingham, Birmingham, UK
- Centre for Patient Reported Outcomes Research, Institute of Applied Health Research, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
- National Institute for Health and Care Research Applied Research Collaboration West Midlands, University of Birmingham, Birmingham, UK
- National Institute for Health and Care Research Birmingham-Oxford Blood and Transplant Research Unit in Precision Transplant and Cellular Therapeutics, University of Birmingham, Birmingham, UK
- DEMAND Hub, University of Birmingham, Birmingham, UK
- UK SPINE, University of Birmingham, Birmingham, UK
- Alastair Denniston
- Institute of Inflammation and Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
- University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
- National Institute for Health and Care Research Birmingham Biomedical Research Centre, University of Birmingham, Birmingham, UK
- Birmingham Health Partners Centre for Regulatory Science and Innovation, University of Birmingham, Birmingham, UK
- National Institute for Health and Care Research Biomedical Research Centre, Moorfields Eye Hospital/University College London, London, UK
- Xiaoxuan Liu
- Institute of Inflammation and Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK.
- University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK.
- National Institute for Health and Care Research Birmingham Biomedical Research Centre, University of Birmingham, Birmingham, UK.
44
Abramoff MD, Whitestone N, Patnaik JL, Rich E, Ahmed M, Husain L, Hassan MY, Tanjil MSH, Weitzman D, Dai T, Wagner BD, Cherwek DH, Congdon N, Islam K. Autonomous artificial intelligence increases real-world specialist clinic productivity in a cluster-randomized trial. NPJ Digit Med 2023; 6:184. [PMID: 37794054 PMCID: PMC10550906 DOI: 10.1038/s41746-023-00931-7] [Citation(s) in RCA: 18] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2023] [Accepted: 09/20/2023] [Indexed: 10/06/2023] Open
Abstract
Autonomous artificial intelligence (AI) promises to increase healthcare productivity, but real-world evidence is lacking. We developed a clinic productivity model to generate testable hypotheses and study design for a preregistered cluster-randomized clinical trial, in which we tested the hypothesis that a previously validated US FDA-authorized AI for diabetic eye exams increases clinic productivity (number of completed care encounters per hour per specialist physician) among patients with diabetes. Here we report that 105 clinic days are cluster randomized to either intervention (using AI diagnosis; 51 days; 494 patients) or control (not using AI diagnosis; 54 days; 499 patients). The prespecified primary endpoint is met: AI leads to 40% higher productivity (1.59 encounters/hour, 95% confidence interval [CI]: 1.37-1.80) than control (1.14 encounters/hour, 95% CI: 1.02-1.25), p < 0.00; the secondary endpoint (productivity in all patients) is also met. Autonomous AI increases healthcare system productivity, which could potentially increase access and reduce health disparities. ClinicalTrials.gov NCT05182580.
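As a quick consistency check on the headline effect reported above, the relative productivity gain implied by the two point estimates is (our own back-of-the-envelope arithmetic from the quoted figures, not an additional number taken from the trial report):

\[
\frac{1.59 - 1.14}{1.14} \approx \frac{0.45}{1.14} \approx 0.39,
\]

i.e., roughly the 40% increase stated in the abstract.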
Affiliation(s)
- Michael D Abramoff
- University of Iowa, Iowa City, Iowa, USA.
- Digital Diagnostics Inc, Coralville, Iowa, USA.
- Iowa City Veterans Affairs Medical Center, Iowa City, Iowa, USA.
- Department of Biomedical Engineering, The University of Iowa, Iowa City, USA.
- Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, Iowa, USA.
- Jennifer L Patnaik
- Orbis International, New York, New York, USA
- Department of Ophthalmology, University of Colorado School of Medicine, Aurora, Colorado, USA
- Emily Rich
- Orbis International, New York, New York, USA
- Centre for Public Health, Queen's University Belfast, Belfast, UK
- Tinglong Dai
- Carey Business School, Johns Hopkins University, Baltimore, Maryland, USA
- Hopkins Business of Health Initiative, Johns Hopkins University, Baltimore, Maryland, USA
- School of Nursing, Johns Hopkins University, Baltimore, Maryland, USA
- Brandie D Wagner
- Department of Ophthalmology, University of Colorado School of Medicine, Aurora, Colorado, USA
- Department of Biostatistics and Informatics, Colorado School of Public Health, Aurora, Colorado, USA
- Nathan Congdon
- Orbis International, New York, New York, USA
- Centre for Public Health, Queen's University Belfast, Belfast, UK
- Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
45
Mohan A, Asghar Z, Abid R, Subedi R, Kumari K, Kumar S, Majumder K, Bhurgri AI, Tejwaney U, Kumar S. Revolutionizing healthcare by use of artificial intelligence in esophageal carcinoma - a narrative review. Ann Med Surg (Lond) 2023; 85:4920-4927. [PMID: 37811030 PMCID: PMC10553069 DOI: 10.1097/ms9.0000000000001175] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2023] [Accepted: 08/05/2023] [Indexed: 10/10/2023] Open
Abstract
Esophageal cancer is a major cause of cancer-related mortality worldwide, with significant regional disparities. Early detection of precursor lesions is essential to improve patient outcomes. Artificial intelligence (AI) techniques, including deep learning and machine learning, have proved to be of assistance to both gastroenterologists and pathologists in the diagnosis and characterization of upper gastrointestinal malignancies by correlating with the histopathology. The primary diagnostic method in gastroenterology is white light endoscopic evaluation, but conventional endoscopy is partially inefficient in detecting esophageal cancer. However, other endoscopic modalities, such as narrow-band imaging, endocytoscopy, and endomicroscopy, have shown improved visualization of mucosal structures and vasculature, which provides a set of baseline data to develop efficient AI-assisted predictive models for quick interpretation. The main challenges in managing esophageal cancer are identifying high-risk patients and the disease's poor prognosis. Thus, AI techniques can play a vital role in improving the early detection and diagnosis of precursor lesions, assisting gastroenterologists in performing targeted biopsies and real-time decisions of endoscopic mucosal resection or endoscopic submucosal dissection. Combining AI techniques and endoscopic modalities can enhance the diagnosis and management of esophageal cancer, improving patient outcomes and reducing cancer-related mortality rates. The aim of this review is to grasp a better understanding of the application of AI in the diagnosis, treatment, and prognosis of esophageal cancer and how computer-aided diagnosis and computer-aided detection can act as vital tools for clinicians in the long run.
Affiliation(s)
- Rabia Abid
- Liaquat College of Medicine and Dentistry
- Rasish Subedi
- Universal College of Medical Sciences, Siddharthanagar, Nepal
- Aqsa I. Bhurgri
- Shaheed Muhtarma Benazir Bhutto Medical University, Larkana, Pakistan
- Sarwan Kumar
- Department of Medicine, Chittagong Medical College, Chittagong, Bangladesh
- Wayne State University, Michigan, USA
46
Drabiak K, Kyzer S, Nemov V, El Naqa I. AI and machine learning ethics, law, diversity, and global impact. Br J Radiol 2023; 96:20220934. [PMID: 37191072 PMCID: PMC10546451 DOI: 10.1259/bjr.20220934] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2022] [Revised: 03/20/2023] [Accepted: 03/29/2023] [Indexed: 05/17/2023] Open
Abstract
Artificial intelligence (AI) and its machine learning (ML) algorithms are offering new promise for personalized biomedicine and more cost-effective healthcare with impressive technical capability to mimic human cognitive capabilities. However, widespread application of this promising technology has been limited in the medical domain and expectations have been tempered by ethical challenges and concerns regarding patient privacy, legal responsibility, trustworthiness, and fairness. To balance technical innovation with ethical applications of AI/ML, developers must demonstrate the AI functions as intended and adopt strategies to minimize the risks for failure or bias. This review describes the new ethical challenges created by AI/ML for clinical care and identifies specific considerations for its practice in medicine. We provide an overview of regulatory and legal issues applicable in Europe and the United States, a description of technical aspects to consider, and present recommendations for trustworthy AI/ML that promote transparency, minimize risks of bias or error, and protect the patient well-being.
Affiliation(s)
- Katherine Drabiak
- Colleges of Public Health and Medicine, University of South Florida, Tampa, FL, USA
- Skylar Kyzer
- Colleges of Public Health and Medicine, University of South Florida, Tampa, FL, USA
- Valerie Nemov
- Colleges of Public Health and Medicine, University of South Florida, Tampa, FL, USA
- Issam El Naqa
- Department of Machine Learning, Moffitt Cancer Center, Tampa, FL, USA
47
Levy JJ, Chan N, Marotti JD, Kerr DA, Gutmann EJ, Glass RE, Dodge CP, Suriawinata AA, Christensen B, Liu X, Vaickus LJ. Large-scale validation study of an improved semiautonomous urine cytology assessment tool: AutoParis-X. Cancer Cytopathol 2023; 131:637-654. [PMID: 37377320 PMCID: PMC11251731 DOI: 10.1002/cncy.22732] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2023] [Revised: 05/11/2023] [Accepted: 05/12/2023] [Indexed: 06/29/2023]
Abstract
BACKGROUND Adopting a computational approach for the assessment of urine cytology specimens has the potential to improve the efficiency, accuracy, and reliability of bladder cancer screening, which has heretofore relied on semisubjective manual assessment methods. As rigorous, quantitative criteria and guidelines have been introduced for improving screening practices (e.g., The Paris System for Reporting Urinary Cytology), algorithms to emulate semiautonomous diagnostic decision-making have lagged behind, in part because of the complex and nuanced nature of urine cytology reporting. METHODS In this study, the authors report on the development and large-scale validation of a deep-learning tool, AutoParis-X, which can facilitate rapid, semiautonomous examination of urine cytology specimens. RESULTS The results of this large-scale, retrospective validation study indicate that AutoParis-X can accurately determine urothelial cell atypia and aggregate a wide variety of cell-related and cluster-related information across a slide to yield an atypia burden score, which correlates closely with overall specimen atypia and is predictive of Paris system diagnostic categories. Importantly, this approach accounts for challenges associated with the assessment of overlapping cell cluster borders, which improve the ability to predict specimen atypia and accurately estimate the nuclear-to-cytoplasm ratio for cells in these clusters. CONCLUSIONS The authors developed a publicly available, open-source, interactive web application that features a simple, easy-to-use display for examining urine cytology whole-slide images and determining the level of atypia in specific cells, flagging the most abnormal cells for pathologist review. The accuracy of AutoParis-X (and other semiautomated digital pathology systems) indicates that these technologies are approaching clinical readiness and necessitates full evaluation of these algorithms in head-to-head clinical trials.
Affiliation(s)
- Joshua J. Levy
- Emerging Diagnostic and Investigative Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Hitchcock Medical Center, Lebanon, NH, 03766
- Department of Dermatology, Dartmouth Hitchcock Medical Center, Lebanon, NH, 03766
- Department of Epidemiology, Dartmouth College Geisel School of Medicine, Hanover, NH, 03756
- Program in Quantitative Biomedical Sciences, Dartmouth College Geisel School of Medicine, Hanover, NH, 03756
- Natt Chan
- Program in Quantitative Biomedical Sciences, Dartmouth College Geisel School of Medicine, Hanover, NH, 03756
- Jonathan D. Marotti
- Emerging Diagnostic and Investigative Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Hitchcock Medical Center, Lebanon, NH, 03766
- Dartmouth College Geisel School of Medicine, Hanover, NH, 03756
- Darcy A. Kerr
- Emerging Diagnostic and Investigative Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Hitchcock Medical Center, Lebanon, NH, 03766
- Dartmouth College Geisel School of Medicine, Hanover, NH, 03756
- Edward J. Gutmann
- Emerging Diagnostic and Investigative Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Hitchcock Medical Center, Lebanon, NH, 03766
- Dartmouth College Geisel School of Medicine, Hanover, NH, 03756
- Arief A. Suriawinata
- Emerging Diagnostic and Investigative Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Hitchcock Medical Center, Lebanon, NH, 03766
- Dartmouth College Geisel School of Medicine, Hanover, NH, 03756
- Brock Christensen
- Department of Epidemiology, Dartmouth College Geisel School of Medicine, Hanover, NH, 03756
- Department of Molecular and Systems Biology, Dartmouth College Geisel School of Medicine, Hanover, NH, 03756
- Department of Community and Family Medicine, Dartmouth College Geisel School of Medicine, Hanover, NH, 03756
- Xiaoying Liu
- Emerging Diagnostic and Investigative Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Hitchcock Medical Center, Lebanon, NH, 03766
- Dartmouth College Geisel School of Medicine, Hanover, NH, 03756
- Louis J. Vaickus
- Emerging Diagnostic and Investigative Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Hitchcock Medical Center, Lebanon, NH, 03766
- Dartmouth College Geisel School of Medicine, Hanover, NH, 03756
48
Wang Y, Song Y, Ma Z, Han X. Multidisciplinary considerations of fairness in medical AI: A scoping review. Int J Med Inform 2023; 178:105175. [PMID: 37595374 DOI: 10.1016/j.ijmedinf.2023.105175] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2023] [Revised: 08/02/2023] [Accepted: 08/04/2023] [Indexed: 08/20/2023]
Abstract
INTRODUCTION Artificial Intelligence (AI) technology has been developed significantly in recent years. The fairness of medical AI is of great concern due to its direct relation to human life and health. This review aims to analyze the existing research literature on fairness in medical AI from the perspectives of computer science, medical science, and social science (including law and ethics). The objective of the review is to examine the similarities and differences in the understanding of fairness, explore influencing factors, and investigate potential measures to implement fairness in medical AI across English and Chinese literature. METHODS This study employed a scoping review methodology and selected the following databases: Web of Science, MEDLINE, Pubmed, OVID, CNKI, WANFANG Data, etc., for the fairness issues in medical AI through February 2023. The search was conducted using various keywords such as "artificial intelligence," "machine learning," "medical," "algorithm," "fairness," "decision-making," and "bias." The collected data were charted, synthesized, and subjected to descriptive and thematic analysis. RESULTS After reviewing 468 English papers and 356 Chinese papers, 53 and 42 were included in the final analysis. Our results show the three different disciplines all show significant differences in the research on the core issues. Data is the foundation that affects medical AI fairness in addition to algorithmic bias and human bias. Legal, ethical, and technological measures all promote the implementation of medical AI fairness. CONCLUSIONS Our review indicates a consensus regarding the importance of data fairness as the foundation for achieving fairness in medical AI across multidisciplinary perspectives. However, there are substantial discrepancies in core aspects such as the concept, influencing factors, and implementation measures of fairness in medical AI. Consequently, future research should facilitate interdisciplinary discussions to bridge the cognitive gaps between different fields and enhance the practical implementation of fairness in medical AI.
Affiliation(s)
- Yue Wang
- School of Law, Xi'an Jiaotong University, No.28, Xianning West Road, Xi'an, Shaanxi, 710049, PR China.
- Yaxin Song
- School of Law, Xi'an Jiaotong University, No.28, Xianning West Road, Xi'an, Shaanxi, 710049, PR China.
- Zhuo Ma
- School of Law, Xi'an Jiaotong University, No.28, Xianning West Road, Xi'an, Shaanxi, 710049, PR China.
- Xiaoxue Han
- Xi'an Jiaotong University Library, No.28, Xianning West Road, Xi'an, Shaanxi, 710049, PR China.
49
Abràmoff MD, Tarver ME, Loyo-Berrios N, Trujillo S, Char D, Obermeyer Z, Eydelman MB, Maisel WH. Considerations for addressing bias in artificial intelligence for health equity. NPJ Digit Med 2023; 6:170. [PMID: 37700029 PMCID: PMC10497548 DOI: 10.1038/s41746-023-00913-9] [Citation(s) in RCA: 75] [Impact Index Per Article: 37.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2023] [Accepted: 08/21/2023] [Indexed: 09/14/2023] Open
Abstract
Health equity is a primary goal of healthcare stakeholders: patients and their advocacy groups, clinicians, other providers and their professional societies, bioethicists, payors and value based care organizations, regulatory agencies, legislators, and creators of artificial intelligence/machine learning (AI/ML)-enabled medical devices. Lack of equitable access to diagnosis and treatment may be improved through new digital health technologies, especially AI/ML, but these may also exacerbate disparities, depending on how bias is addressed. We propose an expanded Total Product Lifecycle (TPLC) framework for healthcare AI/ML, describing the sources and impacts of undesirable bias in AI/ML systems in each phase, how these can be analyzed using appropriate metrics, and how they can be potentially mitigated. The goal of these "Considerations" is to educate stakeholders on how potential AI/ML bias may impact healthcare outcomes and how to identify and mitigate inequities; to initiate a discussion between stakeholders on these issues, in order to ensure health equity along the expanded AI/ML TPLC framework, and ultimately, better health outcomes for all.
Affiliation(s)
- Michael D Abràmoff
- Departments of Ophthalmology and Visual Sciences, and Electrical and Computer Engineering, University of Iowa, Iowa City, IA, USA.
- Michelle E Tarver
- Center for Devices and Radiological Health, US Food and Drug Administration, Silver Spring, MD, USA
- Nilsa Loyo-Berrios
- Center for Devices and Radiological Health, US Food and Drug Administration, Silver Spring, MD, USA
- Danton Char
- Center for Biomedical Ethics, Stanford University School of Medicine, San Francisco, CA, USA
- Department of Anesthesiology, Stanford University School of Medicine, Division of Pediatric Cardiac Anesthesia, San Francisco, CA, USA
- Ziad Obermeyer
- School of Public Health, University of California, Berkeley, CA, USA
- Malvina B Eydelman
- Center for Devices and Radiological Health, US Food and Drug Administration, Silver Spring, MD, USA
- William H Maisel
- Center for Devices and Radiological Health, US Food and Drug Administration, Silver Spring, MD, USA
50
Iqbal J, Cortés Jaimes DC, Makineni P, Subramani S, Hemaida S, Thugu TR, Butt AN, Sikto JT, Kaur P, Lak MA, Augustine M, Shahzad R, Arain M. Reimagining Healthcare: Unleashing the Power of Artificial Intelligence in Medicine. Cureus 2023; 15:e44658. [PMID: 37799217 PMCID: PMC10549955 DOI: 10.7759/cureus.44658] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 09/04/2023] [Indexed: 10/07/2023] Open
Abstract
Artificial intelligence (AI) has opened new medical avenues and revolutionized diagnostic and therapeutic practices, allowing healthcare providers to overcome significant challenges associated with cost, disease management, accessibility, and treatment optimization. Prominent AI technologies such as machine learning (ML) and deep learning (DL) have immensely influenced diagnostics, patient monitoring, novel pharmaceutical discoveries, drug development, and telemedicine. Significant innovations and improvements in disease identification and early intervention have been made using AI-generated algorithms for clinical decision support systems and disease prediction models. AI has remarkably impacted clinical drug trials by amplifying research into drug efficacy, adverse events, and candidate molecular design. AI's precision and analysis regarding patients' genetic, environmental, and lifestyle factors have led to individualized treatment strategies. During the COVID-19 pandemic, AI-assisted telemedicine set a precedent for remote healthcare delivery and patient follow-up. Moreover, AI-generated applications and wearable devices have allowed ambulatory monitoring of vital signs. However, apart from being immensely transformative, AI's contribution to healthcare is subject to ethical and regulatory concerns. AI-backed data protection and algorithm transparency should be strictly adherent to ethical principles. Vigorous governance frameworks should be in place before incorporating AI in mental health interventions through AI-operated chatbots, medical education enhancements, and virtual reality-based training. The role of AI in medical decision-making has certain limitations, necessitating the importance of hands-on experience. Therefore, reaching an optimal balance between AI's capabilities and ethical considerations to ensure impartial and neutral performance in healthcare applications is crucial. This narrative review focuses on AI's impact on healthcare and the importance of ethical and balanced incorporation to make use of its full potential.
Affiliation(s)
- Diana Carolina Cortés Jaimes
- Epidemiology, Universidad Autónoma de Bucaramanga, Bucaramanga, COL
- Medicine, Pontificia Universidad Javeriana, Bogotá, COL
- Pallavi Makineni
- Medicine, All India Institute of Medical Sciences, Bhubaneswar, Bhubaneswar, IND
- Sachin Subramani
- Medicine and Surgery, Employees' State Insurance Corporation (ESIC) Medical College, Gulbarga, IND
- Sarah Hemaida
- Internal Medicine, Istanbul Okan University, Istanbul, TUR
- Thanmai Reddy Thugu
- Internal Medicine, Sri Padmavathi Medical College for Women, Sri Venkateswara Institute of Medical Sciences (SVIMS), Tirupati, IND
- Amna Naveed Butt
- Medicine/Internal Medicine, Allama Iqbal Medical College, Lahore, PAK
- Pareena Kaur
- Medicine, Punjab Institute of Medical Sciences, Jalandhar, IND
- Roheen Shahzad
- Medicine, Combined Military Hospital (CMH) Lahore Medical College and Institute of Dentistry, Lahore, PAK
- Mustafa Arain
- Internal Medicine, Civil Hospital Karachi, Karachi, PAK