1. Ramoni D, Scuricini A, Carbone F, Liberale L, Montecucco F. Artificial intelligence in gastroenterology: Ethical and diagnostic challenges in clinical practice. World J Gastroenterol 2025; 31:102725. PMID: 40093670; PMCID: PMC11886536; DOI: 10.3748/wjg.v31.i10.102725.
Abstract
This article discusses a manuscript recently published in the World Journal of Gastroenterology, which explores the application of deep learning models to decision-making in wireless capsule endoscopy. Integrating artificial intelligence (AI) into gastrointestinal disease diagnosis represents a transformative step toward precision medicine, enhancing real-time accuracy in detecting multi-category lesions at earlier stages, including small bowel lesions and precancerous polyps, and ultimately improving patient outcomes. However, the use of AI in clinical settings raises ethical considerations that extend beyond technological potential. Issues of patient privacy, data security, and potential diagnostic biases require careful attention. AI models must prioritize diverse and representative datasets to mitigate inequities and ensure diagnostic accuracy across populations. Furthermore, balancing AI with clinical expertise is crucial, positioning AI as a supportive tool rather than a replacement for physician judgment. Addressing these ethical challenges will support the responsible deployment of AI and its equitable contribution to patient-centered care.
Affiliation(s)
- Davide Ramoni
- Department of Internal Medicine, University of Genoa, Genoa 16132, Italy
- Federico Carbone
- Department of Internal Medicine, University of Genoa, Genoa 16132, Italy
- First Clinic of Internal Medicine, Department of Internal Medicine, Italian Cardiovascular Network, IRCCS Ospedale Policlinico San Martino, Genoa 16132, Italy
- Luca Liberale
- Department of Internal Medicine, University of Genoa, Genoa 16132, Italy
- First Clinic of Internal Medicine, Department of Internal Medicine, Italian Cardiovascular Network, IRCCS Ospedale Policlinico San Martino, Genoa 16132, Italy
- Fabrizio Montecucco
- Department of Internal Medicine, University of Genoa, Genoa 16132, Italy
- First Clinic of Internal Medicine, Department of Internal Medicine, Italian Cardiovascular Network, IRCCS Ospedale Policlinico San Martino, Genoa 16132, Italy
2. Fehr J, Citro B, Malpani R, Lippert C, Madai VI. A trustworthy AI reality-check: the lack of transparency of artificial intelligence products in healthcare. Front Digit Health 2024; 6:1267290. PMID: 38455991; PMCID: PMC10919164; DOI: 10.3389/fdgth.2024.1267290.
Abstract
Trustworthy medical AI requires transparency about the development and testing of underlying algorithms to identify biases and communicate potential risks of harm. Abundant guidance exists on how to achieve transparency for medical AI products, but it is unclear whether publicly available information adequately informs about their risks. To assess this, we retrieved public documentation on the 14 available CE-certified AI-based radiology products in EU risk class IIb from vendor websites, scientific publications, and the European EUDAMED database. Using a self-designed survey, we reported on their development, validation, ethical considerations, and deployment caveats, according to trustworthy AI guidelines. We scored each question with either 0, 0.5, or 1, to rate whether the required information was "unavailable", "partially available", or "fully available". The transparency of each product was calculated relative to all 55 questions. Transparency scores ranged from 6.4% to 60.9%, with a median of 29.1%. Major transparency gaps included missing documentation on training data, ethical considerations, and limitations for deployment. Ethical aspects like consent, safety monitoring, and GDPR compliance were rarely documented. Furthermore, deployment caveats for different demographics and medical settings were scarce. In conclusion, public documentation of authorized medical AI products in Europe lacks sufficient transparency to inform about safety and risks. We call on lawmakers and regulators to establish legally mandated requirements for public and substantive transparency to fulfill the promise of trustworthy AI for health.
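The scoring arithmetic described above is simple enough to show concretely: each of the 55 survey questions is rated 0, 0.5, or 1, and a product's transparency score is the sum of those ratings relative to all 55 questions. A minimal sketch of that calculation, using hypothetical ratings for one product rather than the authors' actual survey data:

```python
# Hypothetical per-question ratings for one product:
# 0 = unavailable, 0.5 = partially available, 1 = fully available.
ratings = [1, 0.5, 0, 1, 0.5] + [0] * 50  # assumed values, 55 questions total

assert len(ratings) == 55
assert all(r in (0, 0.5, 1) for r in ratings)

# Transparency score relative to all 55 questions, as a percentage.
transparency = 100 * sum(ratings) / len(ratings)
print(f"Transparency: {transparency:.1f}%")  # 5.5% for these ratings
```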
Affiliation(s)
- Jana Fehr
- Digital Health & Machine Learning, Hasso Plattner Institute, Potsdam, Germany
- Digital Engineering Faculty, University of Potsdam, Potsdam, Germany
- QUEST Center for Responsible Research, Berlin Institute of Health (BIH), Charité Universitätsmedizin Berlin, Berlin, Germany
- Brian Citro
- Independent Researcher, Chicago, IL, United States
- Christoph Lippert
- Digital Health & Machine Learning, Hasso Plattner Institute, Potsdam, Germany
- Digital Engineering Faculty, University of Potsdam, Potsdam, Germany
- Hasso Plattner Institute for Digital Health at Mount Sinai, Icahn School of Medicine at Mount Sinai, New York, NY, United States
- Vince I. Madai
- QUEST Center for Responsible Research, Berlin Institute of Health (BIH), Charité Universitätsmedizin Berlin, Berlin, Germany
- Faculty of Computing, Engineering and the Built Environment, School of Computing and Digital Technology, Birmingham City University, Birmingham, United Kingdom
3. Tamburis O, Benis A. Leveraging Data and Technology to Enhance Interdisciplinary Collaboration and Health Outcomes. Yearb Med Inform 2023; 32:84-88. PMID: 38147852; PMCID: PMC10751125; DOI: 10.1055/s-0043-1768753.
Abstract
OBJECTIVE To give an overview of recent research and propose a selection of best papers published in 2022 in Informatics for One Health. METHODS An extensive search using PubMed and Web of Science was conducted to identify peer-reviewed articles published between December 2021 and December 2022, in order to find relevant publications in the 'Informatics for One Health' field. The selection process comprised three steps: (i) eight candidate best papers were first selected by the two section editors; (ii) external reviewers from internationally renowned research teams reviewed each candidate best paper; and (iii) the editorial committee of the Yearbook conducted the final best paper selection. RESULTS The candidate best papers represent studies that characterized significant challenges facing Informatics for One Health. Other trends of interest related to the deployment of medical artificial intelligence tools and the implementation of the FAIR principles within the broader One Health scenario. In general, papers identified in the search fell into one of the following categories: 1) Health improvement via digital technology; 2) Climate change/Environment/Biodiversity; and 3) Maturity of healthcare services. CONCLUSION This topic will become extremely important in the near future, given the need to understand complex interactions in order to safeguard the health of populations and ecosystems.
Affiliation(s)
- Oscar Tamburis
- Institute of Biostructures and Bioimaging, National Research Council, Naples, Italy
- Arriel Benis
- Department of Digital Medical Technologies, Holon Institute of Technology, Israel
4. Jeyaraman M, Balaji S, Jeyaraman N, Yadav S. Unraveling the Ethical Enigma: Artificial Intelligence in Healthcare. Cureus 2023; 15:e43262. PMID: 37692617; PMCID: PMC10492220; DOI: 10.7759/cureus.43262.
Abstract
The integration of artificial intelligence (AI) into healthcare promises groundbreaking advancements in patient care, revolutionizing clinical diagnosis, predictive medicine, and decision-making. This transformative technology uses machine learning, natural language processing, and large language models (LLMs) to process and reason like human intelligence. OpenAI's ChatGPT, a sophisticated LLM, holds immense potential in medical practice, research, and education. However, as AI in healthcare gains momentum, it brings forth profound ethical challenges that demand careful consideration. This comprehensive review explores key ethical concerns in the domain, including privacy, transparency, trust, responsibility, bias, and data quality. Protecting patient privacy in data-driven healthcare is crucial, with potential implications for psychological well-being and data sharing. Strategies like homomorphic encryption (HE) and secure multiparty computation (SMPC) are vital to preserving confidentiality. Transparency and trustworthiness of AI systems are essential, particularly in high-risk decision-making scenarios. Explainable AI (XAI) emerges as a critical aspect, ensuring a clear understanding of AI-generated predictions. Cybersecurity becomes a pressing concern as AI's complexity creates vulnerabilities for potential breaches. Determining responsibility in AI-driven outcomes raises important questions, with debates on AI's moral agency and human accountability. Shifting from data ownership to data stewardship enables responsible data management in compliance with regulations. Addressing bias in healthcare data is crucial to avoid AI-driven inequities. Biases present in data collection and algorithm development can perpetuate healthcare disparities. A public-health approach is advocated to address inequalities and promote diversity in AI research and the workforce. Maintaining data quality is imperative in AI applications, with convolutional neural networks showing promise in multi-input/mixed data models, offering a comprehensive patient perspective. In this ever-evolving landscape, it is imperative to adopt a multidimensional approach involving policymakers, developers, healthcare practitioners, and patients to mitigate ethical concerns. By understanding and addressing these challenges, we can harness the full potential of AI in healthcare while ensuring ethical and equitable outcomes.
Affiliation(s)
- Madhan Jeyaraman
- Orthopedics, ACS Medical College and Hospital, Dr. MGR Educational and Research Institute, Chennai, IND
- Sangeetha Balaji
- Orthopedics, Government Medical College, Omandurar Government Estate, Chennai, IND
- Naveen Jeyaraman
- Orthopedics, ACS Medical College and Hospital, Dr. MGR Educational and Research Institute, Chennai, IND
- Sankalp Yadav
- Medicine, Shri Madan Lal Khurana Chest Clinic, New Delhi, IND
5. González-Nóvoa JA, Campanioni S, Busto L, Fariña J, Rodríguez-Andina JJ, Vila D, Íñiguez A, Veiga C. Improving Intensive Care Unit Early Readmission Prediction Using Optimized and Explainable Machine Learning. Int J Environ Res Public Health 2023; 20:3455. PMID: 36834150; PMCID: PMC9960143; DOI: 10.3390/ijerph20043455.
Abstract
It is of great interest to develop and introduce new techniques to automatically and efficiently analyze the enormous amount of data generated in today's hospitals, using state-of-the-art artificial intelligence methods. Patients readmitted to the ICU during the same hospital stay have a higher risk of mortality and morbidity, a longer length of stay, and increased cost, so a methodology that predicts ICU readmission could improve patient care. The objective of this work is to explore and evaluate the potential improvement of existing models for predicting early ICU patient readmission by using optimized artificial intelligence algorithms and explainability techniques. In this work, XGBoost is used as the predictor model, combined with Bayesian techniques to optimize it. The resulting model predicts early ICU readmission with an AUROC of 0.92 ± 0.03, improving on the state-of-the-art works consulted (whose AUROCs range between 0.66 and 0.78). Moreover, we explain the internal functioning of the model using Shapley Additive Explanation-based techniques, allowing us to understand its internal behavior and to obtain useful information, such as patient-specific explanations, the thresholds at which a feature becomes critical for a certain group of patients, and the feature importance ranking.
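For readers who want to see the shape of such a pipeline, here is a minimal sketch of the approach the abstract describes: an XGBoost classifier tuned by Bayesian optimization, then explained with Shapley values. It uses optuna for the Bayesian search and shap for the explanations; the synthetic data, hyperparameter search space, and trial budget are placeholder assumptions, not the authors' configuration.

```python
import optuna
import shap
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; the paper derives features from ICU stays and
# labels early readmission within the same hospital stay.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def objective(trial):
    # Bayesian (TPE) search over a small, assumed hyperparameter space.
    params = {
        "max_depth": trial.suggest_int("max_depth", 2, 8),
        "learning_rate": trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
        "n_estimators": trial.suggest_int("n_estimators", 50, 400),
    }
    model = xgb.XGBClassifier(**params, eval_metric="logloss")
    model.fit(X_train, y_train)
    return roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=30)

# Refit with the best hyperparameters and explain predictions with SHAP,
# yielding per-patient attributions and a global feature-importance ranking.
best = xgb.XGBClassifier(**study.best_params, eval_metric="logloss")
best.fit(X_train, y_train)
shap_values = shap.TreeExplainer(best).shap_values(X_test)
print("Best AUROC:", round(study.best_value, 3))
```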
Affiliation(s)
- José A. González-Nóvoa
- Galicia Sur Health Research Institute (IIS Galicia Sur), Álvaro Cunqueiro Hospital, 36310 Vigo, Spain
- Silvia Campanioni
- Galicia Sur Health Research Institute (IIS Galicia Sur), Álvaro Cunqueiro Hospital, 36310 Vigo, Spain
- Laura Busto
- Galicia Sur Health Research Institute (IIS Galicia Sur), Álvaro Cunqueiro Hospital, 36310 Vigo, Spain
- José Fariña
- Department of Electronic Technology, University of Vigo, 36310 Vigo, Spain
- Dolores Vila
- Intensive Care Unit Department, Complexo Hospitalario Universitario de Vigo (SERGAS), Álvaro Cunqueiro Hospital, 36213 Vigo, Spain
- Andrés Íñiguez
- Cardiology Department, Complexo Hospitalario Universitario de Vigo (SERGAS), Álvaro Cunqueiro Hospital, 36213 Vigo, Spain
- César Veiga
- Galicia Sur Health Research Institute (IIS Galicia Sur), Álvaro Cunqueiro Hospital, 36310 Vigo, Spain