1
Li YH, Li YL, Wei MY, Li GY. Innovation and challenges of artificial intelligence technology in personalized healthcare. Sci Rep 2024; 14:18994. PMID: 39152194; PMCID: PMC11329630; DOI: 10.1038/s41598-024-70073-7.
Abstract
As the burgeoning field of Artificial Intelligence (AI) continues to permeate the fabric of healthcare, particularly in the realms of patient surveillance and telemedicine, a transformative era beckons. This manuscript endeavors to unravel the intricacies of recent AI advancements and their profound implications for reconceptualizing the delivery of medical care. Through the introduction of innovative instruments such as virtual assistant chatbots, wearable monitoring devices, predictive analytic models, personalized treatment regimens, and automated appointment systems, AI is not only amplifying the quality of care but also empowering patients and fostering a more interactive dynamic between the patient and the healthcare provider. Yet, this progressive infiltration of AI into the healthcare sphere grapples with a plethora of challenges hitherto unseen. The exigent issues of data security and privacy, the specter of algorithmic bias, the requisite adaptability of regulatory frameworks, and the matter of patient acceptance and trust in AI solutions demand immediate and thoughtful resolution. The importance of establishing stringent and far-reaching policies, ensuring technological impartiality, and cultivating patient confidence is paramount to ensure that AI-driven enhancements in healthcare service provision remain both ethically sound and efficient. In conclusion, we advocate for an expansion of research efforts aimed at navigating the ethical complexities inherent to a technology-evolving landscape, catalyzing policy innovation, and devising AI applications that are not only clinically effective but also earn the trust of the patient populace. By melding expertise across disciplines, we stand at the threshold of an era wherein AI's role in healthcare is both ethically unimpeachable and conducive to elevating the global health quotient.
Affiliation(s)
- Yu-Hao Li
- International School, Beijing University of Posts and Telecommunications, Beijing, 100876, China
- Yu-Lin Li
- Department of Ophthalmology, The Second Norman Bethune Hospital of Jilin University, Changchun, 130000, China
- Mu-Yang Wei
- Department of Ophthalmology, The Second Norman Bethune Hospital of Jilin University, Changchun, 130000, China
- Guang-Yu Li
- Department of Ophthalmology, The Second Norman Bethune Hospital of Jilin University, Changchun, 130000, China
2
Adams DR, van Karnebeek CDM, Agulló SB, Faùndes V, Jamuar SS, Lynch SA, Pintos-Morell G, Puri RD, Shai R, Steward CA, Tumiene B, Verloes A. Addressing diagnostic gaps and priorities of the global rare diseases community: Recommendations from the IRDiRC diagnostics scientific committee. Eur J Med Genet 2024; 70:104951. PMID: 38848991; DOI: 10.1016/j.ejmg.2024.104951.
Abstract
The International Rare Diseases Research Consortium (IRDiRC) Diagnostic Scientific Committee (DSC) is charged with discussion and contribution to progress on diagnostic aspects of the IRDiRC core mission. Specifically, IRDiRC goals include timely diagnosis, use of globally coordinated diagnostic pipelines, and assessing the impact of rare diseases on affected individuals. As part of this mission, the DSC endeavored to create a list of research priorities to achieve these goals. We present a discussion of those priorities along with aspects of current, global rare disease needs and opportunities that support our prioritization. In support of this discussion, we also provide clinical vignettes illustrating real-world examples of diagnostic challenges.
Affiliation(s)
- David R Adams
- National Human Genome Research Institute, National Institutes of Health, USA
- Clara D M van Karnebeek
- Departments of Pediatrics and Human Genetics, Emma Center for Personalized Medicine, Amsterdam Gastro-enterology Endocrinology Metabolism, Amsterdam University Medical Centers, the Netherlands
- Sergi Beltran Agulló
- Centre Nacional d'Anàlisi Genòmica (CNAG), Spain; Departament de Genètica, Microbiologia i Estadística, Facultat de Biologia, Universitat de Barcelona (UB), Spain
- Víctor Faùndes
- Laboratorio de Genética y Enfermedades Metabólicas, Instituto de Nutrición y Tecnología de los Alimentos, Universidad de Chile, Chile
- Saumya Shekhar Jamuar
- Genetics Service, KK Women's and Children's Hospital and Paediatrics ACP, Duke-NUS Medical School, Singapore; Singhealth Duke-NUS Institute of Precision Medicine, Singapore
- Guillem Pintos-Morell
- Vall d'Hebron Research Institute (VHIR), Vall d'Hebron Barcelona Hospital, Spain; MPS-Spain Patient Advocacy Organization, Spain
- Ratna Dua Puri
- Institute of Medical Genetics and Genomics, Sir Ganga Ram Hospital, India
- Ruty Shai
- Pediatric Cancer Molecular Lab, Sheba Medical Center, Israel
- Biruté Tumiene
- Vilnius University, Faculty of Medicine, Institute of Biomedical Sciences, Lithuania
- Alain Verloes
- Département de Génétique, CHU Paris - Hôpital Robert Debré, France
3
Tjhin Y, Kewlani B, Singh HKSI, Pawa N. Artificial intelligence in colorectal multidisciplinary team meetings. What are the medicolegal implications? Colorectal Dis 2024. PMID: 39011560; DOI: 10.1111/codi.17091.
Abstract
AIM To give insight into areas for future development and to offer suggestions on the complexities of incorporating AI into human colorectal cancer (CRC) care, while bringing into focus the importance of clinicians' roles in patient care. METHODS The existing literature on AI use in CRC care is reviewed, and potential regulatory issues and medicolegal implications of its implementation in CRC multidisciplinary team meetings (MDTs) are identified. RESULTS Challenges with patient privacy and confidentiality, patient consent, inequity and bias, patient autonomy, and AI system transparency, as well as the liability and accountability issues raised by complications of AI-aided clinical decisions, are the key concerns associated with the use of AI in CRC MDTs. CONCLUSION Consideration of the various medicolegal aspects of AI use in CRC MDTs is warranted to ensure its safe and smooth incorporation. AI functions as a clinical decision support system and does not replace professional expertise.
Affiliation(s)
- Yovita Tjhin
- Chelsea and Westminster Hospital NHS Foundation Trust, London, UK
- Bharti Kewlani
- Chelsea and Westminster Hospital NHS Foundation Trust, London, UK
- Nikhil Pawa
- Chelsea and Westminster Hospital NHS Foundation Trust, London, UK
4
Contaldo MT, Pasceri G, Vignati G, Bracchi L, Triggiani S, Carrafiello G. AI in Radiology: Navigating Medical Responsibility. Diagnostics (Basel) 2024; 14:1506. PMID: 39061643; PMCID: PMC11276428; DOI: 10.3390/diagnostics14141506.
Abstract
The application of Artificial Intelligence (AI) facilitates medical activities by automating routine tasks for healthcare professionals. AI augments but does not replace human decision-making, thus complicating the process of addressing legal responsibility. This study investigates the legal challenges associated with the medical use of AI in radiology, analyzing relevant case law and literature, with a specific focus on professional liability attribution. In the case of an error, the primary responsibility remains with the physician, with possible shared liability with developers according to the framework of medical device liability. If there is disagreement with the AI's findings, the physician must not only make but also justify their choices according to prevailing professional standards. Regulations must balance the autonomy of AI systems with the need for responsible clinical practice. Effective use of AI-generated evaluations requires knowledge of data dynamics and metrics like sensitivity and specificity, even without a clear understanding of the underlying algorithms: the opacity (referred to as the "black box phenomenon") of certain systems raises concerns about the interpretation and actual usability of results for both physicians and patients. AI is redefining healthcare, underscoring the imperative for robust liability frameworks, meticulous updates of systems, and transparent patient communication regarding AI involvement.
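The operating metrics named here, sensitivity and specificity, can be computed from a model's outputs without any knowledge of its internals, which is the point about usability despite the "black box". A minimal sketch (the labels below are invented for illustration, not drawn from the cited study):

```python
# Sensitivity (true-positive rate) and specificity (true-negative rate)
# for a binary classifier, computed directly from label/prediction pairs.

def sensitivity_specificity(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn)  # how often true disease is flagged
    specificity = tn / (tn + fp)  # how often healthy cases are cleared
    return sensitivity, specificity

y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]  # invented ground truth
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 1]  # invented AI outputs
sens, spec = sensitivity_specificity(y_true, y_pred)
print(f"sensitivity={sens:.2f} specificity={spec:.2f}")
# prints sensitivity=0.75 specificity=0.67
```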
Affiliation(s)
- Maria Teresa Contaldo
- Postgraduation School in Radiodiagnostics, University of Milan, 20122 Milan, Italy
- Giovanni Pasceri
- Information Society Law Center, Department “Cesare Beccaria”, University of Milan, 20122 Milan, Italy
- Giacomo Vignati
- Postgraduation School in Radiodiagnostics, University of Milan, 20122 Milan, Italy
- Sonia Triggiani
- Postgraduation School in Radiodiagnostics, University of Milan, 20122 Milan, Italy
- Gianpaolo Carrafiello
- Postgraduation School in Radiodiagnostics, University of Milan, 20122 Milan, Italy
- Radiology and Interventional Radiology Department, Fondazione IRCCS Ca' Granda, Policlinico di Milano Ospedale Maggiore, 20122 Milan, Italy
5
Shlobin NA, Ward M, Shah HA, Brown EDL, Sciubba DM, Langer D, D'Amico RS. Ethical Incorporation of Artificial Intelligence into Neurosurgery: A Generative Pretrained Transformer Chatbot-Based, Human-Modified Approach. World Neurosurg 2024; 187:e769-e791. PMID: 38723944; DOI: 10.1016/j.wneu.2024.04.165.
Abstract
INTRODUCTION Artificial intelligence (AI) has become increasingly used in neurosurgery. Generative pretrained transformers (GPTs) have been of particular interest. However, ethical concerns regarding the incorporation of AI into the field remain underexplored. We delineate key ethical considerations using a novel GPT-based, human-modified approach, synthesize the most common considerations, and present an ethical framework for the involvement of AI in neurosurgery. METHODS GPT-4, ChatGPT, Bing Chat/Copilot, You, Perplexity.ai, and Google Bard were queried with the prompt "How can artificial intelligence be ethically incorporated into neurosurgery?". Then, a layered GPT-based thematic analysis was performed. The authors synthesized the results into considerations for the ethical incorporation of AI into neurosurgery. Separate Pareto analyses with 20% and 10% thresholds were conducted to determine salient themes. The authors refined these salient themes. RESULTS Twelve key ethical considerations focusing on stakeholders, clinical implementation, and governance were identified. Refinement of the Pareto analysis of the top 20% most salient themes in the aggregated GPT outputs yielded 10 key considerations. Additionally, from the top 10% most salient themes, 5 considerations were retrieved. An ethical framework for the use of AI in neurosurgery was developed. CONCLUSIONS It is critical to address the ethical considerations associated with the use of AI in neurosurgery. The framework described in this manuscript may facilitate the integration of AI into neurosurgery, benefitting both patients and neurosurgeons alike. We urge neurosurgeons to use AI only for validated purposes and caution against automatic adoption of its outputs without neurosurgeon interpretation.
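The Pareto-style thresholding described can be sketched generically. In this hedged illustration the theme names and counts are invented, and the 20%/10% rule is interpreted as keeping the top fraction of themes ranked by frequency; the study's exact procedure may differ:

```python
import math

# Hypothetical theme counts aggregated across chatbot outputs.
# Names and numbers are invented for illustration, not taken from the study.
theme_counts = {
    "patient privacy": 14, "informed consent": 11, "algorithmic bias": 10,
    "surgeon oversight": 8, "validation": 6, "transparency": 5,
    "accountability": 3, "cost and access": 2, "education": 1, "data sharing": 1,
}

def top_fraction(counts, fraction):
    """Keep the most frequent themes making up the top `fraction` of the theme list."""
    ranked = sorted(counts, key=counts.get, reverse=True)
    k = max(1, math.ceil(len(ranked) * fraction))
    return ranked[:k]

print(top_fraction(theme_counts, 0.20))  # top 2 of 10 themes
print(top_fraction(theme_counts, 0.10))  # top 1 of 10 themes
```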
Affiliation(s)
- Nathan A Shlobin
- Department of Neurological Surgery, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- Max Ward
- Department of Neurological Surgery, Lenox Hill Hospital/Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, New York, New York, USA
- Harshal A Shah
- Department of Neurological Surgery, Lenox Hill Hospital/Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, New York, New York, USA
- Ethan D L Brown
- Department of Neurological Surgery, Lenox Hill Hospital/Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, New York, New York, USA
- Daniel M Sciubba
- Department of Neurological Surgery, Lenox Hill Hospital/Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, New York, New York, USA
- David Langer
- Department of Neurological Surgery, Lenox Hill Hospital/Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, New York, New York, USA
- Randy S D'Amico
- Department of Neurological Surgery, Lenox Hill Hospital/Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, New York, New York, USA
6
Sharma H, Ruikar M. Artificial intelligence at the pen's edge: Exploring the ethical quagmires in using artificial intelligence models like ChatGPT for assisted writing in biomedical research. Perspect Clin Res 2024; 15:108-115. PMID: 39140014; PMCID: PMC11318783; DOI: 10.4103/picr.picr_196_23.
Abstract
Chat generative pretrained transformer (ChatGPT) is a conversational language model powered by artificial intelligence (AI). It is a sophisticated language model that employs deep learning methods to generate human-like text outputs to inputs in the natural language. This narrative review aims to shed light on ethical concerns about using AI models like ChatGPT for writing assistance in the healthcare and medical domains. Currently, AI models like ChatGPT are in their infancy; risks include inaccuracy of the generated content, lack of contextual understanding, dynamic knowledge gaps, limited discernment, lack of responsibility and accountability, issues of privacy, data security, transparency, and bias, and a lack of nuance and originality. Other issues such as authorship, unintentional plagiarism, falsified and fabricated content, and the threat of being red-flagged as AI-generated content highlight the need for regulatory compliance, transparency, and disclosure. If the legitimate issues are proactively considered and addressed, the potential applications of AI models as writing assistance could be rewarding.
Affiliation(s)
- Hunny Sharma
- Department of Community and Family Medicine, All India Institute of Medical Sciences, Raipur, Chhattisgarh, India
- Manisha Ruikar
- Department of Community and Family Medicine, All India Institute of Medical Sciences, Raipur, Chhattisgarh, India
7
Giacobbe DR, Marelli C, Guastavino S, Mora S, Rosso N, Signori A, Campi C, Giacomini M, Bassetti M. Explainable and Interpretable Machine Learning for Antimicrobial Stewardship: Opportunities and Challenges. Clin Ther 2024; 46:474-480. PMID: 38519371; DOI: 10.1016/j.clinthera.2024.02.010.
Abstract
There is growing interest in exploiting the advances in artificial intelligence and machine learning (ML) for improving and monitoring antimicrobial prescriptions in line with antimicrobial stewardship principles. Against this background, the concepts of interpretability and explainability are becoming increasingly essential to understanding how ML algorithms could predict antimicrobial resistance or recommend specific therapeutic agents, to avoid unintended biases related to the "black box" nature of complex models. In this commentary, we review and discuss some relevant topics on the use of ML algorithms for antimicrobial stewardship interventions, highlighting opportunities and challenges, with particular attention paid to interpretability and explainability of employed models. As in other fields of medicine, the exponential growth of artificial intelligence and ML indicates the potential for improving the efficacy of antimicrobial stewardship interventions, at least in part by reducing time-consuming tasks for overwhelmed health care personnel. Improving our knowledge about how complex ML models work could help to achieve crucial advances in promoting the appropriate use of antimicrobials, as well as in preventing antimicrobial resistance selection and dissemination.
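The interpretability contrast drawn here can be illustrated with a toy, fully transparent "model": a 2x2 odds-ratio summary of one risk factor for resistance, which a clinician can audit directly, unlike a black-box predictor. All counts below are invented for illustration, not from the cited work:

```python
import math

# 2x2 table (counts invented):
#                resistant  susceptible
# exposed            30          20
# not exposed        10          40

def odds_ratio(a, b, c, d):
    """Odds ratio for table [[a, b], [c, d]] with a Woolf 95% CI on the log scale."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio(30, 20, 10, 40)
print(f"OR={or_:.1f} (95% CI {lo:.1f}-{hi:.1f})")
# prints OR=6.0 (95% CI 2.5-14.7)
```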
Affiliation(s)
- Daniele Roberto Giacobbe
- Department of Health Sciences, University of Genoa, Genoa, Italy; UO Clinica Malattie Infettive, Istituto di Ricovero e Cura a Carattere Scientifico Ospedale Policlinico San Martino, Genoa, Italy
- Cristina Marelli
- UO Clinica Malattie Infettive, Istituto di Ricovero e Cura a Carattere Scientifico Ospedale Policlinico San Martino, Genoa, Italy
- Sara Mora
- UO Information and Communication Technologies, Istituto di Ricovero e Cura a Carattere Scientifico Ospedale Policlinico San Martino, Genoa, Italy
- Nicola Rosso
- UO Information and Communication Technologies, Istituto di Ricovero e Cura a Carattere Scientifico Ospedale Policlinico San Martino, Genoa, Italy
- Alessio Signori
- Section of Biostatistics, Department of Health Sciences, University of Genoa, Genoa, Italy
- Cristina Campi
- Department of Mathematics, University of Genoa, Genoa, Italy; Life Science Computational Laboratory, Istituto di Ricovero e Cura a Carattere Scientifico Ospedale Policlinico San Martino, Genoa, Italy
- Mauro Giacomini
- Department of Informatics, Bioengineering, Robotics and System Engineering, University of Genoa, Genoa, Italy
- Matteo Bassetti
- Department of Health Sciences, University of Genoa, Genoa, Italy; UO Clinica Malattie Infettive, Istituto di Ricovero e Cura a Carattere Scientifico Ospedale Policlinico San Martino, Genoa, Italy
8
Assis de Souza A, Stubbs AP, Hesselink DA, Baan CC, Boer K. Cherry on Top or Real Need? A Review of Explainable Machine Learning in Kidney Transplantation. Transplantation 2024:00007890-990000000-00768. PMID: 38773859; DOI: 10.1097/tp.0000000000005063.
Abstract
Research on solid organ transplantation has taken advantage of the substantial acquisition of medical data and the use of artificial intelligence (AI) and machine learning (ML) to answer diagnostic, prognostic, and therapeutic questions for many years. Nevertheless, despite the question of whether AI models add value to traditional modeling approaches, such as regression models, their "black box" nature is one of the factors that have hindered the translation from research to clinical practice. Several techniques that make such models understandable to humans were developed with the promise of increasing transparency in the support of medical decision-making. These techniques should help AI to close the gap between theory and practice by yielding trust in the model by doctors and patients, allowing model auditing, and facilitating compliance with emergent AI regulations. But is this also happening in the field of kidney transplantation? This review reports the use and explanation of "black box" models to diagnose and predict kidney allograft rejection, delayed graft function, graft failure, and other related outcomes after kidney transplantation. In particular, we emphasize the discussion on the need (or not) to explain ML models for biological discovery and clinical implementation in kidney transplantation. We also discuss promising future research paths for these computational tools.
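Post-hoc explanation techniques of the kind this review surveys can be sketched generically. Below is a minimal permutation-importance sketch: a feature's importance is the drop in accuracy when its column is shuffled. The "black box" and data are invented stand-ins; a real use case would substitute a trained graft-outcome model:

```python
import random

def black_box(row):
    # Stand-in predictor: relies heavily on feature 0, weakly on feature 1,
    # and ignores feature 2 entirely. Invented for illustration.
    return 1 if row[0] + 0.3 * row[1] > 0.5 else 0

def accuracy(model, X, y):
    return sum(model(r) == t for r, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature, seed=0):
    """Accuracy drop when one feature column is shuffled (X is left unmodified)."""
    rng = random.Random(seed)
    col = [row[feature] for row in X]
    rng.shuffle(col)
    X_perm = [row[:feature] + [v] + row[feature + 1:] for row, v in zip(X, col)]
    return accuracy(model, X, y) - accuracy(model, X_perm, y)

rng = random.Random(42)
X = [[rng.random(), rng.random(), rng.random()] for _ in range(200)]
y = [black_box(row) for row in X]  # labels the stand-in model fits perfectly

for f in range(3):
    print(f"feature {f}: importance {permutation_importance(black_box, X, y, f):.2f}")
# feature 2 is ignored by the model, so its importance is exactly 0.00
```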
Affiliation(s)
- Alvaro Assis de Souza
- Department of Internal Medicine, Erasmus MC Transplant Institute, University Medical Center Rotterdam, Rotterdam, the Netherlands
- Andrew P Stubbs
- Department of Pathology and Clinical Bioinformatics, Erasmus MC Stubbs Group, University Medical Center Rotterdam, Rotterdam, the Netherlands
- Dennis A Hesselink
- Department of Internal Medicine, Erasmus MC Transplant Institute, University Medical Center Rotterdam, Rotterdam, the Netherlands
- Carla C Baan
- Department of Internal Medicine, Erasmus MC Transplant Institute, University Medical Center Rotterdam, Rotterdam, the Netherlands
- Karin Boer
- Department of Internal Medicine, Erasmus MC Transplant Institute, University Medical Center Rotterdam, Rotterdam, the Netherlands
9
Khan SD, Hoodbhoy Z, Raja MHR, Kim JY, Hogg HDJ, Manji AAA, Gulamali F, Hasan A, Shaikh A, Tajuddin S, Khan NS, Patel MR, Balu S, Samad Z, Sendak MP. Frameworks for procurement, integration, monitoring, and evaluation of artificial intelligence tools in clinical settings: A systematic review. PLOS Digit Health 2024; 3:e0000514. PMID: 38809946; PMCID: PMC11135672; DOI: 10.1371/journal.pdig.0000514.
Abstract
Research on the applications of artificial intelligence (AI) tools in medicine has increased exponentially over the last few years, but its implementation in clinical practice has not seen a commensurate increase, with a lack of consensus on implementing and maintaining such tools. This systematic review aims to summarize frameworks focusing on procuring, implementing, monitoring, and evaluating AI tools in clinical practice. A comprehensive literature search following PRISMA guidelines was performed on MEDLINE, Wiley Cochrane, Scopus, and EBSCO databases to identify and include articles recommending practices, frameworks, or guidelines for AI procurement, integration, monitoring, and evaluation. From the included articles, data regarding study aim, use of a framework, rationale of the framework, and details of AI implementation involving procurement, integration, monitoring, and evaluation were extracted. The extracted details were then mapped onto the Donabedian Plan, Do, Study, Act cycle domains. The search yielded 17,537 unique articles, out of which 47 were evaluated for inclusion based on their full texts and 25 articles were included in the review. Common themes extracted included transparency, feasibility of operation within existing workflows, integrating into existing workflows, validation of the tool using predefined performance indicators, and improving the algorithm and/or adjusting the tool to improve performance. Among the four domains (Plan, Do, Study, Act) the most common domain was Plan (84%, n = 21), followed by Study (60%, n = 15), Do (52%, n = 13), and Act (24%, n = 6). Among 172 authors, only 1 (0.6%) was from a low-income country (LIC) and 2 (1.2%) were from lower-middle-income countries (LMICs). Healthcare professionals cite the implementation of AI tools within clinical settings as challenging owing to low levels of evidence focusing on integration in the Do and Act domains. The current healthcare AI landscape calls for increased data sharing and knowledge translation to facilitate common goals and reap maximum clinical benefit.
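The reported domain percentages follow directly from the counts given in the abstract; a quick arithmetic check:

```python
# Domain tallies reported in the review (n = 25 included articles).
# Articles may map to several domains, so counts sum to more than 25.
domain_counts = {"Plan": 21, "Study": 15, "Do": 13, "Act": 6}
n_articles = 25

for domain, count in domain_counts.items():
    print(f"{domain}: {count}/{n_articles} = {count / n_articles:.0%}")
# prints Plan: 21/25 = 84%, Study: 15/25 = 60%, Do: 13/25 = 52%, Act: 6/25 = 24%
```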
Affiliation(s)
- Sarim Dawar Khan
- CITRIC Health Data Science Centre, Department of Medicine, Aga Khan University, Karachi, Pakistan
- Zahra Hoodbhoy
- CITRIC Health Data Science Centre, Department of Medicine, Aga Khan University, Karachi, Pakistan
- Department of Paediatrics and Child Health, Aga Khan University, Karachi, Pakistan
- Jee Young Kim
- Duke Institute for Health Innovation, Duke University School of Medicine, Durham, North Carolina, United States
- Henry David Jeffry Hogg
- Population Health Science Institute, Newcastle University, Newcastle upon Tyne, United Kingdom
- Newcastle upon Tyne Hospitals NHS Foundation Trust, Newcastle upon Tyne, United Kingdom
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Afshan Anwar Ali Manji
- CITRIC Health Data Science Centre, Department of Medicine, Aga Khan University, Karachi, Pakistan
- Freya Gulamali
- Duke Institute for Health Innovation, Duke University School of Medicine, Durham, North Carolina, United States
- Alifia Hasan
- Duke Institute for Health Innovation, Duke University School of Medicine, Durham, North Carolina, United States
- Asim Shaikh
- CITRIC Health Data Science Centre, Department of Medicine, Aga Khan University, Karachi, Pakistan
- Salma Tajuddin
- CITRIC Health Data Science Centre, Department of Medicine, Aga Khan University, Karachi, Pakistan
- Nida Saddaf Khan
- CITRIC Health Data Science Centre, Department of Medicine, Aga Khan University, Karachi, Pakistan
- Manesh R. Patel
- Duke Clinical Research Institute, Duke University School of Medicine, Durham, North Carolina, United States
- Division of Cardiology, Duke University School of Medicine, Durham, North Carolina, United States
- Suresh Balu
- Duke Institute for Health Innovation, Duke University School of Medicine, Durham, North Carolina, United States
- Zainab Samad
- CITRIC Health Data Science Centre, Department of Medicine, Aga Khan University, Karachi, Pakistan
- Department of Medicine, Aga Khan University, Karachi, Pakistan
- Mark P. Sendak
- Duke Institute for Health Innovation, Duke University School of Medicine, Durham, North Carolina, United States
10
Stoel BC, Staring M, Reijnierse M, van der Helm-van Mil AHM. Deep learning in rheumatological image interpretation. Nat Rev Rheumatol 2024; 20:182-195. PMID: 38332242; DOI: 10.1038/s41584-023-01074-5.
Abstract
Artificial intelligence techniques, specifically deep learning, have already affected daily life in a wide range of areas. Likewise, initial applications have been explored in rheumatology. Deep learning might not easily surpass the accuracy of classic techniques when performing classification or regression on low-dimensional numerical data. With images as input, however, deep learning has become so successful that it has already outperformed the majority of conventional image-processing techniques developed during the past 50 years. As with any new imaging technology, rheumatologists and radiologists need to consider adapting their arsenal of diagnostic, prognostic and monitoring tools, and even their clinical role and collaborations. This adaptation requires a basic understanding of the technical background of deep learning, to efficiently utilize its benefits but also to recognize its drawbacks and pitfalls, as blindly relying on deep learning might be at odds with its capabilities. To facilitate such an understanding, it is necessary to provide an overview of deep-learning techniques for automatic image analysis in detecting, quantifying, predicting and monitoring rheumatic diseases, and of currently published deep-learning applications in radiological imaging for rheumatology, with critical assessment of possible limitations, errors and confounders, and conceivable consequences for rheumatologists and radiologists in clinical practice.
Affiliation(s)
- Berend C Stoel
- Division of Image Processing, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands
- Marius Staring
- Division of Image Processing, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands
- Monique Reijnierse
- Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands
11
Rubinic I, Kurtov M, Rubinic I, Likic R, Dargan PI, Wood DM. Artificial intelligence in clinical pharmacology: A case study and scoping review of large language models and bioweapon potential. Br J Clin Pharmacol 2024; 90:620-628. PMID: 37658550; DOI: 10.1111/bcp.15899.
Abstract
This paper aims to explore the possibility of employing large language models (LLMs) - a type of artificial intelligence (AI) - in clinical pharmacology, with a focus on its possible misuse in bioweapon development. Additionally, ethical considerations, legislation and potential risk reduction measures are analysed. The existing literature is reviewed to investigate the potential misuse of AI and LLMs in bioweapon creation. The search includes articles from PubMed, Scopus and Web of Science Core Collection that were identified using a specific protocol. To explore the regulatory landscape, the OECD.ai platform was used. The review highlights the dual-use vulnerability of AI and LLMs, with a focus on bioweapon development. Subsequently, a case study is used to illustrate the potential of AI manipulation resulting in harmful substance synthesis. Existing regulations inadequately address the ethical concerns tied to AI and LLMs. Mitigation measures are proposed, including technical solutions (explainable AI), establishing ethical guidelines through collaborative efforts, and implementing policy changes to create a comprehensive regulatory framework. The integration of AI and LLMs into clinical pharmacology presents invaluable opportunities, while also introducing significant ethical and safety considerations. Addressing the dual-use nature of AI requires robust regulations, as well as adopting a strategic approach grounded in technical solutions and ethical values following the principles of transparency, accountability and safety. Additionally, AI's potential role in developing countermeasures against novel hazardous substances is underscored. By adopting a proactive approach, the potential benefits of AI and LLMs can be fully harnessed while minimizing the associated risks.
Affiliation(s)
- Igor Rubinic
- University of Rijeka School of Medicine, Rijeka, Croatia
- Clinical Hospital Centre Rijeka, Rijeka, Croatia
- Ivan Rubinic
- School of Engineering, University of Rijeka, Rijeka, Croatia
- Robert Likic
- University of Zagreb School of Medicine, Zagreb, Croatia
- Clinical Hospital Centre Zagreb, Zagreb, Croatia
- Paul I Dargan
- Faculty of Life Sciences and Medicine, King's College London, London, UK
- Clinical Toxicology, Guy's and St Thomas' NHS Foundation Trust, London, UK
- David M Wood
- Faculty of Life Sciences and Medicine, King's College London, London, UK
- Clinical Toxicology, Guy's and St Thomas' NHS Foundation Trust, London, UK
12
Palaniappan K, Lin EYT, Vogel S. Global Regulatory Frameworks for the Use of Artificial Intelligence (AI) in the Healthcare Services Sector. Healthcare (Basel) 2024; 12:562. [PMID: 38470673] [PMCID: PMC10930608] [DOI: 10.3390/healthcare12050562]
Abstract
The healthcare sector is faced with challenges due to a shrinking healthcare workforce and a rise in chronic diseases that are worsening with demographic and epidemiological shifts. Digital health interventions that include artificial intelligence (AI) are being identified as some of the potential solutions to these challenges. The ultimate aim of these AI systems is to improve the patient's health outcomes and satisfaction, the overall population's health, and the well-being of healthcare professionals. The applications of AI in healthcare services are vast and are expected to assist, automate, and augment several healthcare services. Like any other emerging innovation, AI in healthcare also comes with its own risks and requires regulatory controls. A review of the literature was undertaken to study the existing regulatory landscape for AI in the healthcare services sector in developed nations. In the global regulatory landscape, most AI regulation centres on Software as a Medical Device (SaMD), which is regulated under digital health products. However, it is necessary to note that the current regulations may not suffice, as AI-based technologies are capable of working autonomously, adapting their algorithms, and improving their performance over time based on the new real-world data that they have encountered. Hence, a global regulatory convergence for AI in healthcare, similar to the voluntary AI code of conduct that is being developed by the US-EU Trade and Technology Council, would be beneficial to all nations, be it developing or developed.
Affiliation(s)
- Kavitha Palaniappan
- Centre of Regulatory Excellence, Duke-NUS Medical School, Singapore 169857, Singapore
13
Affiliation(s)
- Ahmad Z Al Meslamani
- College of Pharmacy, Al Ain University, Abu Dhabi, UAE
- AAU Health and Biomedical Research Center, Al Ain University, Abu Dhabi, UAE
14
Cacciamani GE, Chen A, Gill IS, Hung AJ. Artificial intelligence and urology: ethical considerations for urologists and patients. Nat Rev Urol 2024; 21:50-59. [PMID: 37524914] [DOI: 10.1038/s41585-023-00796-1]
Abstract
The use of artificial intelligence (AI) in medicine and in urology specifically has increased over the past few years, during which time it has enabled optimization of patient workflow, increased diagnostic accuracy and enhanced computer analysis of radiological and pathological images. However, before further use of AI is undertaken, possible ethical issues need to be evaluated to improve understanding of this technology and to protect patients and providers. Possible ethical issues that require consideration when applying AI in clinical practice include patient safety, cybersecurity, transparency and interpretability of the data, inclusivity and equity, fostering responsibility and accountability, and the preservation of providers' decision-making and autonomy. Ethical principles for the application of AI to health care and in urology are proposed to guide urologists, patients and regulators to improve use of AI technologies and guide policy-making.
Affiliation(s)
- Giovanni E Cacciamani
- The Catherine and Joseph Aresty Department of Urology, USC Institute of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- AI Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA
- Department of Radiology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Andrew Chen
- The Catherine and Joseph Aresty Department of Urology, USC Institute of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- AI Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA
- Inderbir S Gill
- The Catherine and Joseph Aresty Department of Urology, USC Institute of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- AI Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA
- Andrew J Hung
- The Catherine and Joseph Aresty Department of Urology, USC Institute of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- AI Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA
15
Moynihan A, Hardy N, Dalli J, Aigner F, Arezzo A, Hompes R, Knol J, Tuynman J, Cucek J, Rojc J, Rodríguez-Luna MR, Cahill R. CLASSICA: Validating artificial intelligence in classifying cancer in real time during surgery. Colorectal Dis 2023; 25:2392-2402. [PMID: 37932915] [DOI: 10.1111/codi.16769]
Abstract
AIM: Treatment pathways for significant rectal polyps differ depending on the underlying pathology, but pre-excision profiling is imperfect. It has been demonstrated that differences in fluorescence perfusion signals following injection of indocyanine green (ICG) can be analysed mathematically and, with the assistance of artificial intelligence (AI), used to classify tumours endoscopically as benign or malignant. This study aims to validate this method of characterization across multiple clinical sites, assessing its generalizability, usability and accuracy, while developing clinical-grade software to make it a practical clinical tool.
METHODS: The CLASSICA study is a prospective, unblinded, multicentre European observational study that aims to validate the use of AI analysis of ICG fluorescence for intra-operative tissue characterization. Six hundred patients undergoing transanal endoscopic evaluation of significant rectal polyps and tumours will be enrolled at at least five clinical sites across the European Union over a 4-year period. Video recordings will be analysed centrally for dynamic fluorescence patterns while software is developed to enable automatic classification locally. AI-based classification, and subsequently guided intervention, will be compared with the current standard of care, including biopsies, final specimen pathology and patient outcomes.
DISCUSSION: CLASSICA will validate the use of AI in the analysis of ICG fluorescence for the purposes of classifying significant rectal polyps and tumours endoscopically. Follow-on studies will compare AI-guided targeted biopsy, or indeed AI characterization alone, with traditional biopsy, and AI-guided local excision versus traditional excision with regard to marginal clearance and recurrence.
Affiliation(s)
- A Moynihan
- University College Dublin, Dublin, Ireland
- N Hardy
- University College Dublin, Dublin, Ireland
- J Dalli
- University College Dublin, Dublin, Ireland
- F Aigner
- Krankenhaus der Barmherzigen Brüder Graz, Graz, Austria
- A Arezzo
- Department of Surgical Sciences, University of Torino, Torino, Italy
- European Association for Endoscopic Surgery, Eindhoven, The Netherlands
- R Hompes
- Ziekenhuis Oost-Limburg Autonome Verzorgingsinstelling, Genk, Belgium
- J Knol
- Ziekenhuis Oost-Limburg Autonome Verzorgingsinstelling, Genk, Belgium
- J Tuynman
- Stichting VUMC, Amsterdam, The Netherlands
- J Cucek
- Arctur, Nova Gorica, Slovenia
- J Rojc
- Arctur, Nova Gorica, Slovenia
- R Cahill
- University College Dublin, Dublin, Ireland
- Mater Misericordiae University Hospital, Dublin, Ireland
16
Giacobbe DR, Zhang Y, de la Fuente J. Explainable artificial intelligence and machine learning: novel approaches to face infectious diseases challenges. Ann Med 2023; 55:2286336. [PMID: 38010090] [PMCID: PMC10836268] [DOI: 10.1080/07853890.2023.2286336]
Abstract
Artificial intelligence (AI) and machine learning (ML) are revolutionizing human activities in various fields, with medicine and infectious diseases being not exempt from their rapid and exponential growth. Furthermore, the field of explainable AI and ML has gained particular relevance and is attracting increasing interest. Infectious diseases have already started to benefit from explainable AI/ML models. For example, they have been employed or proposed to better understand complex models aimed at improving the diagnosis and management of coronavirus disease 2019, in the field of antimicrobial resistance prediction and in quantum vaccine algorithms. Although some issues concerning the dichotomy between explainability and interpretability still require careful attention, an in-depth understanding of how complex AI/ML models arrive at their predictions or recommendations is becoming increasingly essential to properly face the growing challenges of infectious diseases in the present century.
Affiliation(s)
- Daniele Roberto Giacobbe
- Department of Health Sciences (DISSAL), University of Genoa, Genoa, Italy
- Clinica Malattie Infettive, IRCCS Ospedale Policlinico San Martino, Italy
- Yudong Zhang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester, UK
- School of Computer Science and Engineering, Southeast University, Nanjing, China
- José de la Fuente
- SaBio (Health and Biotechnology), Instituto de Investigación en Recursos Cinegéticos IREC-CSIC-UCLM-JCCM, Ciudad Real, Spain
- Department of Veterinary Pathobiology, Center for Veterinary Health Sciences, Oklahoma State University, Stillwater, OK, USA
17
Wong RSY, Ming LC, Raja Ali RA. The Intersection of ChatGPT, Clinical Medicine, and Medical Education. JMIR Med Educ 2023; 9:e47274. [PMID: 37988149] [DOI: 10.2196/47274]
Abstract
As we progress deeper into the digital age, the robust development and application of advanced artificial intelligence (AI) technology, specifically generative language models like ChatGPT (OpenAI), have potential implications in all sectors including medicine. This viewpoint article aims to present the authors' perspective on the integration of AI models such as ChatGPT in clinical medicine and medical education. The unprecedented capacity of ChatGPT to generate human-like responses, refined through Reinforcement Learning with Human Feedback, could significantly reshape the pedagogical methodologies within medical education. Through a comprehensive review and the authors' personal experiences, this viewpoint article elucidates the pros, cons, and ethical considerations of using ChatGPT within clinical medicine and notably, its implications for medical education. This exploration is crucial in a transformative era where AI could potentially augment human capability in the process of knowledge creation and dissemination, potentially revolutionizing medical education and clinical practice. The importance of maintaining academic integrity and professional standards is highlighted. The relevance of establishing clear guidelines for the responsible and ethical use of AI technologies in clinical medicine and medical education is also emphasized.
Affiliation(s)
- Rebecca Shin-Yee Wong
- Department of Medical Education, School of Medical and Life Sciences, Sunway University, Selangor, Malaysia
- Faculty of Medicine, Nursing and Health Sciences, SEGi University, Petaling Jaya, Malaysia
- Long Chiau Ming
- School of Medical and Life Sciences, Sunway University, Selangor, Malaysia
- Raja Affendi Raja Ali
- School of Medical and Life Sciences, Sunway University, Selangor, Malaysia
- GUT Research Group, Faculty of Medicine, Universiti Kebangsaan Malaysia, Kuala Lumpur, Malaysia
18
Elragal R, Elragal A, Habibipour A. Healthcare analytics-A literature review and proposed research agenda. Front Big Data 2023; 6:1277976. [PMID: 37869248] [PMCID: PMC10585099] [DOI: 10.3389/fdata.2023.1277976]
Abstract
This research addresses the pressing need for research in healthcare analytics by explaining how previous studies have used big data, AI, and machine learning to identify, address, or solve healthcare problems. Healthcare science methods are combined with contemporary data science techniques to examine the literature, identify research gaps, and propose a research agenda for researchers, academic institutions, and governmental healthcare organizations. The study contributes to the body of literature by providing a state-of-the-art review of healthcare analytics as well as proposing a research agenda to advance the knowledge in this area. The results of this research can be beneficial for both healthcare science and data science researchers as well as practitioners in the field.
Affiliation(s)
- Ahmed Elragal
- Department of Computer Science, Electrical, and Space Engineering, Luleå University of Technology, Luleå, Sweden
19
Raz A, Minari J. AI-driven risk scores: should social scoring and polygenic scores based on ethnicity be equally prohibited? Front Genet 2023; 14:1169580. [PMID: 37323663] [PMCID: PMC10267818] [DOI: 10.3389/fgene.2023.1169580]
Affiliation(s)
- Aviad Raz
- Department of Sociology and Anthropology, Ben-Gurion University of the Negev, Beer-Sheba, Israel
- Jusaku Minari
- Uehiro Research Division for iPS Cell Ethics, Center for iPS Cell Research and Application (CiRA), Kyoto University, Kyoto, Japan