1
Awuah WA, Adebusoye FT, Wellington J, David L, Salam A, Weng Yee AL, Lansiaux E, Yarlagadda R, Garg T, Abdul-Rahman T, Kalmanovich J, Miteu GD, Kundu M, Mykolaivna NI. Recent Outcomes and Challenges of Artificial Intelligence, Machine Learning, and Deep Learning in Neurosurgery. World Neurosurg X 2024; 23:100301. [PMID: 38577317 PMCID: PMC10992893 DOI: 10.1016/j.wnsx.2024.100301]
Abstract
Neurosurgeons receive extensive technical training, which equips them with the knowledge and skills to specialise in various fields and manage the massive amounts of information and decision-making required throughout the various stages of neurosurgery, including preoperative, intraoperative, and postoperative care and recovery. Over the past few years, artificial intelligence (AI) has become more useful in neurosurgery. AI has the potential to improve patient outcomes by augmenting the capabilities of neurosurgeons and ultimately improving diagnostic and prognostic outcomes as well as decision-making during surgical procedures. By incorporating AI into both interventional and non-interventional therapies, neurosurgeons may provide the best care for their patients. AI, machine learning (ML), and deep learning (DL) have made significant progress in the field of neurosurgery. These cutting-edge methods have enhanced patient outcomes, reduced complications, and improved surgical planning.
Affiliation(s)
- Jack Wellington
- Cardiff University School of Medicine, Cardiff University, Wales, United Kingdom
- Lian David
- Norwich Medical School, University of East Anglia, United Kingdom
- Abdus Salam
- Department of Surgery, Khyber Teaching Hospital, Peshawar, Pakistan
- Rohan Yarlagadda
- Rowan University School of Osteopathic Medicine, Stratford, NJ, USA
- Tulika Garg
- Government Medical College and Hospital, Chandigarh, India
- Mrinmoy Kundu
- Institute of Medical Sciences and SUM Hospital, Bhubaneswar, India
2
Graham Y, Spencer AE, Velez GE, Herbell K. Engaging Youth Voice and Family Partnerships to Improve Children's Mental Health Outcomes. Child Adolesc Psychiatr Clin N Am 2024; 33:343-354. [PMID: 38823808 DOI: 10.1016/j.chc.2024.02.004]
Abstract
Promoting active participation of families and youth in mental health systems of care is the cornerstone of creating a more inclusive, effective, and responsive care network. This article focuses on the inclusion of parent and youth voice in transforming our mental health care system to promote increased engagement at all levels of service delivery. Youth and parent peer support delivery models, digital innovation, and technology not only empower the individuals involved, but also have the potential to enhance the overall efficacy of the mental health care system.
Affiliation(s)
- Yolanda Graham
- Morehouse School of Medicine, Devereux Advanced Behavioral Health, 444 Devereux Drive, Villanova, PA 19085, USA
- Andrea E Spencer
- Ann & Robert H. Lurie Children's Hospital of Chicago, Northwestern University Feinberg School of Medicine, 225 East Chicago Avenue, Chicago, IL 60611, USA
- German E Velez
- New York-Presbyterian Hospital, Weill Cornell Medical College/Columbia University College of Physicians and Surgeons, 525 E. 68th Street, Box 140, New York, NY 10065, USA
- Kayla Herbell
- Martha S. Pitzer Center for Women, Children, and Youth, The Ohio State University, 1577 Neil Avenue, Columbus, OH 43210, USA
3
Olver IN. Ethics of artificial intelligence in supportive care in cancer. Med J Aust 2024; 220:499-501. [PMID: 38714360 DOI: 10.5694/mja2.52297]
4
Salloch S, Eriksen A. What Are Humans Doing in the Loop? Co-Reasoning and Practical Judgment When Using Machine Learning-Driven Decision Aids. Am J Bioeth 2024:1-12. [PMID: 38767971 DOI: 10.1080/15265161.2024.2353800]
Abstract
Within the ethical debate on Machine Learning-driven decision support systems (ML_CDSS), notions such as "human in the loop" or "meaningful human control" are often cited as being necessary for ethical legitimacy. In addition, ethical principles usually serve as the major point of reference in ethical guidance documents, stating that conflicts between principles need to be weighed and balanced against each other. Starting from a neo-Kantian viewpoint inspired by Onora O'Neill, this article makes a concrete suggestion of how to interpret the role of the "human in the loop" and to overcome the perspective of rivaling ethical principles in the evaluation of AI in health care. We argue that patients should be perceived as "fellow workers" and epistemic partners in the interpretation of ML_CDSS outputs. We further highlight that a meaningful process of integrating (rather than weighing and balancing) ethical principles is most appropriate in the evaluation of medical AI.
5
Shlobin NA, Ward M, Shah HA, Brown EDL, Sciubba DM, Langer D, D'Amico RS. Ethical Incorporation of Artificial Intelligence into Neurosurgery: A Generative Pretrained Transformer Chatbot-Based, Human-Modified Approach. World Neurosurg 2024:S1878-8750(24)00738-1. [PMID: 38723944 DOI: 10.1016/j.wneu.2024.04.165]
Abstract
INTRODUCTION Artificial intelligence (AI) has become increasingly used in neurosurgery. Generative pretrained transformers (GPTs) have been of particular interest. However, ethical concerns regarding the incorporation of AI into the field remain underexplored. We delineate key ethical considerations using a novel GPT-based, human-modified approach, synthesize the most common considerations, and present an ethical framework for the involvement of AI in neurosurgery. METHODS GPT-4, ChatGPT, Bing Chat/Copilot, You, Perplexity.ai, and Google Bard were queried with the prompt "How can artificial intelligence be ethically incorporated into neurosurgery?". Then, a layered GPT-based thematic analysis was performed. The authors synthesized the results into considerations for the ethical incorporation of AI into neurosurgery. Separate Pareto analyses with 20% threshold and 10% threshold were conducted to determine salient themes. The authors refined these salient themes. RESULTS Twelve key ethical considerations focusing on stakeholders, clinical implementation, and governance were identified. Refinement of the Pareto analysis of the top 20% most salient themes in the aggregated GPT outputs yielded 10 key considerations. Additionally, from the top 10% most salient themes, 5 considerations were retrieved. An ethical framework for the use of AI in neurosurgery was developed. CONCLUSIONS It is critical to address the ethical considerations associated with the use of AI in neurosurgery. The framework described in this manuscript may facilitate the integration of AI into neurosurgery, benefitting both patients and neurosurgeons alike. We urge neurosurgeons to use AI only for validated purposes and caution against automatic adoption of its outputs without neurosurgeon interpretation.
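The Pareto-style theme selection described in this abstract can be illustrated with a small sketch. The snippet below treats the 20% and 10% thresholds as top-fraction cut-offs over ranked theme counts; the theme names, counts, and that interpretation of the thresholds are illustrative assumptions, not the authors' data or exact procedure.

```python
# Minimal sketch of a threshold-based Pareto selection of salient themes.
# Theme names and mention counts are invented for illustration only.
from collections import Counter
from math import ceil

theme_mentions = Counter({
    "informed consent": 14, "data privacy": 12, "algorithmic bias": 11,
    "accountability": 9, "transparency": 8, "surgeon oversight": 6,
    "validation": 4, "equity of access": 3, "cost": 2, "liability": 1,
})

def top_fraction(counts: Counter, fraction: float) -> list[str]:
    """Return the themes in the top `fraction` of themes ranked by mention count."""
    ranked = [theme for theme, _ in counts.most_common()]
    keep = max(1, ceil(fraction * len(ranked)))
    return ranked[:keep]

print("Top 20% most salient themes:", top_fraction(theme_mentions, 0.20))
print("Top 10% most salient themes:", top_fraction(theme_mentions, 0.10))
```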
Affiliation(s)
- Nathan A Shlobin
- Department of Neurological Surgery, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- Max Ward
- Department of Neurological Surgery, Lenox Hill Hospital/Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, New York, New York, USA
- Harshal A Shah
- Department of Neurological Surgery, Lenox Hill Hospital/Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, New York, New York, USA
- Ethan D L Brown
- Department of Neurological Surgery, Lenox Hill Hospital/Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, New York, New York, USA
- Daniel M Sciubba
- Department of Neurological Surgery, Lenox Hill Hospital/Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, New York, New York, USA
- David Langer
- Department of Neurological Surgery, Lenox Hill Hospital/Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, New York, New York, USA
- Randy S D'Amico
- Department of Neurological Surgery, Lenox Hill Hospital/Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, New York, New York, USA
6
Esmaeilzadeh P. Challenges and strategies for wide-scale artificial intelligence (AI) deployment in healthcare practices: A perspective for healthcare organizations. Artif Intell Med 2024; 151:102861. [PMID: 38555850 DOI: 10.1016/j.artmed.2024.102861]
Abstract
Healthcare organizations have realized that Artificial intelligence (AI) can provide a competitive edge through personalized patient experiences, improved patient outcomes, early diagnosis, augmented clinician capabilities, enhanced operational efficiencies, or improved medical service accessibility. However, deploying AI-driven tools in the healthcare ecosystem could be challenging. This paper categorizes AI applications in healthcare and comprehensively examines the challenges associated with deploying AI in medical practices at scale. As AI continues to make strides in healthcare, its integration presents various challenges, including production timelines, trust generation, privacy concerns, algorithmic biases, and data scarcity. The paper highlights that flawed business models and wrong workflows in healthcare practices cannot be rectified merely by deploying AI-driven tools. Healthcare organizations should re-evaluate root problems such as misaligned financial incentives (e.g., fee-for-service models), dysfunctional medical workflows (e.g., high rates of patient readmissions), poor care coordination between different providers, fragmented electronic health records systems, and inadequate patient education and engagement models in tandem with AI adoption. This study also explores the need for a cultural shift in viewing AI not as a threat but as an enabler that can enhance healthcare delivery and create new employment opportunities while emphasizing the importance of addressing underlying operational issues. The necessity of investments beyond finance is discussed, emphasizing the importance of human capital, continuous learning, and a supportive environment for AI integration. The paper also highlights the crucial role of clear regulations in building trust, ensuring safety, and guiding the ethical use of AI, calling for coherent frameworks addressing transparency, model accuracy, data quality control, liability, and ethics. Furthermore, this paper underscores the importance of advancing AI literacy within academia to prepare future healthcare professionals for an AI-driven landscape. Through careful navigation and proactive measures addressing these challenges, the healthcare community can harness AI's transformative power responsibly and effectively, revolutionizing healthcare delivery and patient care. The paper concludes with a vision and strategic suggestions for the future of healthcare with AI, emphasizing thoughtful, responsible, and innovative engagement as the pathway to realizing its full potential to unlock immense benefits for healthcare organizations, physicians, nurses, and patients while proactively mitigating risks.
Affiliation(s)
- Pouyan Esmaeilzadeh
- Department of Information Systems and Business Analytics, College of Business, Florida International University (FIU), Modesto A. Maidique Campus, 11200 S.W. 8th St, RB 261B, Miami, FL 33199, United States
7
Khalil H, Campbell F, Danial K, Pollock D, Munn Z, Welsh V, Saran A, Hoppe D, Tricco AC. Advancing the methodology of mapping reviews: A scoping review. Res Synth Methods 2024; 15:384-397. [PMID: 38169156 DOI: 10.1002/jrsm.1694]
Abstract
This scoping review aims to identify and systematically review published mapping reviews to assess their commonality and heterogeneity and determine whether additional efforts should be made to standardise methodology and reporting. The following databases were searched: Ovid MEDLINE, Embase, CINAHL, PsycINFO, Campbell Collaboration database, Social Science Abstracts, and Library and Information Science Abstracts (LISA). Following a pilot test on a random sample of 20 citations, two team members independently completed all title and abstract screening. Ten articles were piloted at full-text screening, and then each citation was reviewed independently by two team members. Discrepancies at both stages were resolved through discussion. Following a pilot test on a random sample of five relevant full-text articles, one team member abstracted all the relevant data. Uncertainties in the data abstraction were resolved by another team member. A total of 335 articles were eligible for this scoping review and subsequently included. The number of published mapping reviews grew over the years, from 5 in 2010 to 73 in 2021. Moreover, there was significant variability in the reporting of the included mapping reviews, including their research question, a priori protocol, methodology, data synthesis, and reporting. This work has further highlighted the gaps in evidence synthesis methodologies. Further guidance developed by evidence synthesis organisations, such as JBI and Campbell, has the potential to clarify challenges experienced by researchers, given the magnitude of mapping reviews published every year.
Affiliation(s)
- Hanan Khalil
- La Trobe University, School of Psychology and Public Health, Department of Public Health, Melbourne, Australia
- Fiona Campbell
- Population Health Sciences Institute, Newcastle University, Newcastle upon Tyne, UK
- Katrina Danial
- Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Australia
- Danielle Pollock
- Health Evidence Synthesis Recommendations and Impact, School of Public Health, Faculty of Health and Medical Sciences, University of Adelaide, Adelaide, Australia
- Zachary Munn
- Health Evidence Synthesis Recommendations and Impact, School of Public Health, Faculty of Health and Medical Sciences, University of Adelaide, Adelaide, Australia
- Vivian Welsh
- Bruyère Research Institute, Ottawa, Ontario, Canada
- School of Epidemiology and Public Health, University of Ottawa, Ottawa, Ontario, Canada
- Dimi Hoppe
- La Trobe University, School of Psychology and Public Health, Department of Public Health, Melbourne, Australia
- Andrea C Tricco
- Li Ka Shing Knowledge Institute, St. Michael's Hospital, Unity Health Toronto, Toronto, Canada
- Epidemiology Division and Institute for Health Policy, Management, and Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, Canada
- Queen's Collaboration for Health Care Quality Joanna Briggs Institute Centre of Excellence, Queen's University, Kingston, Canada
8
Privitera AJ, Ng SHS, Kong APH, Weekes BS. AI and Aphasia in the Digital Age: A Critical Review. Brain Sci 2024; 14:383. [PMID: 38672032 PMCID: PMC11047933 DOI: 10.3390/brainsci14040383]
Abstract
Aphasiology has a long and rich tradition of contributing to understanding how culture, language, and social environment contribute to brain development and function. Recent breakthroughs in AI can transform the role of aphasiology in the digital age by leveraging speech data in all languages to model how damage to specific brain regions impacts linguistic universals such as grammar. These tools, including generative AI (ChatGPT) and natural language processing (NLP) models, could also inform practitioners working with clinical populations in the assessment and treatment of aphasia using AI-based interventions such as personalized therapy and adaptive platforms. Although these possibilities have generated enthusiasm in aphasiology, a rigorous interrogation of their limitations is necessary before AI is integrated into practice. We explain the history and first principles of reciprocity between AI and aphasiology, highlighting how lesioning neural networks opened the black box of cognitive neurolinguistic processing. We then argue that when more data from aphasia across languages become digitized and available online, deep learning will reveal hitherto unreported patterns of language processing of theoretical interest for aphasiologists. We also anticipate some problems using AI, including language biases, cultural, ethical, and scientific limitations, a misrepresentation of marginalized languages, and a lack of rigorous validation of tools. However, as these challenges are met with better governance, AI could have an equitable impact.
Affiliation(s)
- Adam John Privitera
- Centre for Research and Development in Learning, Nanyang Technological University, Singapore 637335, Singapore
- Siew Hiang Sally Ng
- Centre for Research and Development in Learning, Nanyang Technological University, Singapore 637335, Singapore
- Institute for Pedagogical Innovation, Research, and Excellence, Nanyang Technological University, Singapore 637335, Singapore
- Anthony Pak-Hin Kong
- Academic Unit of Human Communication, Learning, and Development, The University of Hong Kong, Pokfulam, Hong Kong
- Aphasia Research and Therapy (ART) Laboratory, The University of Hong Kong, Pokfulam, Hong Kong
- Brendan Stuart Weekes
- Faculty of Education, The University of Hong Kong, Pokfulam, Hong Kong
- Melbourne School of Psychological Sciences, University of Melbourne, Parkville 3010, Australia
9
Perets O, Stagno E, Yehuda EB, McNichol M, Anthony Celi L, Rappoport N, Dorotic M. Inherent Bias in Electronic Health Records: A Scoping Review of Sources of Bias. medRxiv [Preprint] 2024:2024.04.09.24305594. [PMID: 38680842 PMCID: PMC11046491 DOI: 10.1101/2024.04.09.24305594]
Abstract
Objectives: Biases inherent in electronic health records (EHRs), and therefore in medical artificial intelligence (AI) models, may significantly exacerbate health inequities and challenge the adoption of ethical and responsible AI in healthcare. Biases arise from multiple sources, some of which are not well documented in the literature. Biases are encoded in how the data have been collected and labeled, by implicit and unconscious biases of clinicians, or by the tools used for data processing. These biases and their encoding in healthcare records undermine the reliability of such data and bias clinical judgments and medical outcomes. Moreover, when healthcare records are used to build data-driven solutions, the biases are further exacerbated, resulting in systems that perpetuate biases and induce healthcare disparities. This literature scoping review aims to categorize the main sources of biases inherent in EHRs. Methods: We queried PubMed and Web of Science on January 19th, 2023, for peer-reviewed sources in English, published between 2016 and 2023, using the PRISMA approach to stepwise scoping of the literature. To select the papers that empirically analyze bias in EHRs, from the initial yield of 430 papers, 27 duplicates were removed and 403 studies were screened for eligibility; 196 articles were removed after title and abstract screening, and 96 articles were excluded after the full-text review, resulting in a final selection of 116 articles. Results: Systematic categorizations of diverse sources of bias are scarce in the literature, while the effects of separate studies are often convoluted and methodologically contestable. Our categorization of published empirical evidence identified six main sources of bias: (a) bias arising from past clinical trials; (b) data-related biases arising from missing or incomplete information or poor labeling of data; human-related bias induced by (c) implicit clinician bias, (d) referral and admission bias, and (e) diagnosis or risk disparities bias; and finally, (f) biases in machinery and algorithms. Conclusions: Machine learning and data-driven solutions can potentially transform healthcare delivery, but not without limitations. The core inputs in the systems (data and human factors) currently contain several sources of bias that are poorly documented and analyzed for remedies. The current evidence heavily focuses on data-related biases, while other sources are less often analyzed or anecdotal. However, these different sources of biases add to one another exponentially. Therefore, to understand the issues holistically, we need to explore these diverse sources of bias. While racial biases in EHRs have often been documented, other sources of bias have been less frequently investigated and documented (e.g., gender-related biases, sexual orientation discrimination, socially induced biases, and implicit, often unconscious, human-related cognitive biases). Moreover, some existing studies lack causal evidence, illustrating only different prevalences of disease across groups, which does not per se prove causality. Our review shows that data-, human-, and machine-related biases are prevalent in healthcare, that they significantly impact healthcare outcomes and judgments, and that they exacerbate disparities and differential treatment. Understanding how diverse biases affect AI systems and recommendations is critical. We suggest that researchers and medical personnel develop safeguards and adopt data-driven solutions with a "bias-in-mind" approach. More empirical evidence is needed to tease out the effects of different sources of bias on health outcomes.
10
Wang W, Wang Y, Chen L, Ma R, Zhang M. Justice at the Forefront: Cultivating felt accountability towards Artificial Intelligence among healthcare professionals. Soc Sci Med 2024; 347:116717. [PMID: 38518481 DOI: 10.1016/j.socscimed.2024.116717]
Abstract
The advent of AI has ushered in a new era of patient care, but with it emerges a contentious debate surrounding accountability for algorithmic medical decisions. Within this discourse, a spectrum of views prevails, ranging from placing accountability on AI solution providers to laying it squarely on the shoulders of healthcare professionals. In response to this debate, this study, grounded in the mutualistic partner choice (MPC) model of the evolution of morality, seeks to establish a configurational framework for cultivating felt accountability towards AI among healthcare professionals. This framework underscores two pivotal conditions, AI ethics enactment and trusting belief in AI, and considers the influence of organizational complexity on its implementation. Drawing on a fuzzy-set qualitative comparative analysis (fsQCA) of a sample of 401 healthcare professionals, this study reveals that (a) focusing on justice and autonomy in AI ethics enactment, along with building trusting belief in AI reliability and functionality, reinforces healthcare professionals' sense of felt accountability towards AI; (b) fostering felt accountability towards AI necessitates establishing trust in its functionality in high-complexity hospitals; and (c) prioritizing justice in AI ethics enactment and trust in AI reliability is essential for low-complexity hospitals.
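For readers unfamiliar with fsQCA, the following minimal sketch computes the standard fuzzy-set consistency and coverage measures for a single sufficiency claim. The membership scores are invented, and this is a generic illustration of the method's core arithmetic rather than the authors' analysis or software.

```python
# Minimal sketch of fuzzy-set consistency and coverage for a sufficiency claim
# "condition X is sufficient for outcome Y". Membership scores are invented.
x = [0.9, 0.7, 0.6, 0.3, 0.8, 0.2]  # membership in the condition (e.g., justice-focused ethics enactment)
y = [0.8, 0.9, 0.7, 0.4, 0.9, 0.5]  # membership in the outcome (felt accountability towards AI)

def consistency(cond, outcome):
    """Sufficiency consistency: sum(min(x_i, y_i)) / sum(x_i)."""
    return sum(min(a, b) for a, b in zip(cond, outcome)) / sum(cond)

def coverage(cond, outcome):
    """Coverage: sum(min(x_i, y_i)) / sum(y_i)."""
    return sum(min(a, b) for a, b in zip(cond, outcome)) / sum(outcome)

print(f"consistency = {consistency(x, y):.2f}, coverage = {coverage(x, y):.2f}")
```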
Affiliation(s)
- Weisha Wang
- Research Center for Smarter Supply Chain, Business School, Soochow University, 50 Donghuan Road, Suzhou, 215006, China
- Yichuan Wang
- Sheffield University Management School, University of Sheffield, Conduit Rd, Sheffield, S10 1FL, United Kingdom
- Long Chen
- Brunel University London, United Kingdom
- Rui Ma
- Greenwich Business School, University of Greenwich, United Kingdom
- Minhao Zhang
- University of Bristol School of Management, University of Bristol, United Kingdom
11
Pesapane F, Giambersio E, Capetti B, Monzani D, Grasso R, Nicosia L, Rotili A, Sorce A, Meneghetti L, Carriero S, Santicchia S, Carrafiello G, Pravettoni G, Cassano E. Patients' Perceptions and Attitudes to the Use of Artificial Intelligence in Breast Cancer Diagnosis: A Narrative Review. Life (Basel) 2024; 14:454. [PMID: 38672725 PMCID: PMC11051490 DOI: 10.3390/life14040454]
Abstract
Breast cancer remains the most prevalent cancer among women worldwide, necessitating advancements in diagnostic methods. The integration of artificial intelligence (AI) into mammography has shown promise in enhancing diagnostic accuracy. However, understanding patient perspectives, particularly considering the psychological impact of breast cancer diagnoses, is crucial. This narrative review synthesizes literature from 2000 to 2023 to examine breast cancer patients' attitudes towards AI in breast imaging, focusing on trust, acceptance, and demographic influences on these views. Methodologically, we employed a systematic literature search across databases such as PubMed, Embase, Medline, and Scopus, selecting studies that provided insights into patients' perceptions of AI in diagnostics. Our review included a sample of seven key studies after rigorous screening, reflecting varied patient trust and acceptance levels towards AI. Overall, we found a clear preference among patients for AI to augment rather than replace the diagnostic process, emphasizing the necessity of radiologists' expertise in conjunction with AI to enhance decision-making accuracy. This paper highlights the importance of aligning AI implementation in clinical settings with patient needs and expectations, emphasizing the need for human interaction in healthcare. Our findings advocate for a model where AI augments the diagnostic process, underlining the necessity for educational efforts to mitigate concerns and enhance patient trust in AI-enhanced diagnostics.
Affiliation(s)
- Filippo Pesapane
- Breast Imaging Division, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy
- Emilia Giambersio
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, 20122 Milan, Italy
- Benedetta Capetti
- Applied Research Division for Cognitive and Psychological Science, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy
- Dario Monzani
- Applied Research Division for Cognitive and Psychological Science, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy
- Department of Psychology, Educational Science and Human Movement (SPPEFF), University of Palermo, 90133 Palermo, Italy
- Roberto Grasso
- Applied Research Division for Cognitive and Psychological Science, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy
- Department of Oncology and Hemato-Oncology, University of Milan, 20122 Milan, Italy
- Luca Nicosia
- Breast Imaging Division, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy
- Anna Rotili
- Breast Imaging Division, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy
- Adriana Sorce
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, 20122 Milan, Italy
- Lorenza Meneghetti
- Breast Imaging Division, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy
- Serena Carriero
- Foundation IRCCS Cà Granda-Ospedale Maggiore Policlinico, 20122 Milan, Italy
- Sonia Santicchia
- Foundation IRCCS Cà Granda-Ospedale Maggiore Policlinico, 20122 Milan, Italy
- Gianpaolo Carrafiello
- Department of Oncology and Hemato-Oncology, University of Milan, 20122 Milan, Italy
- Foundation IRCCS Cà Granda-Ospedale Maggiore Policlinico, 20122 Milan, Italy
- Gabriella Pravettoni
- Applied Research Division for Cognitive and Psychological Science, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy
- Department of Oncology and Hemato-Oncology, University of Milan, 20122 Milan, Italy
- Enrico Cassano
- Breast Imaging Division, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy
12
Rivard L, Lehoux P, Rocha de Oliveira R, Alami H. Thematic analysis of tools for health innovators and organisation leaders to develop digital health solutions fit for climate change. BMJ Leader 2024; 8:32-38. [PMID: 37407065 DOI: 10.1136/leader-2022-000697]
Abstract
OBJECTIVES While ethicists have largely underscored the risks raised by digital health solutions that operate with or without artificial intelligence (AI), limited research has addressed the need to also mitigate their environmental footprint and equip health innovators as well as organisation leaders to meet responsibility requirements that go beyond clinical safety, efficacy and ethics. Drawing on the Responsible Innovation in Health framework, this qualitative study asks: (1) what are the practice-oriented tools available for innovators to develop environmentally sustainable digital solutions and (2) how are organisation leaders supposed to support them in this endeavour? METHODS Focusing on a subset of 34 tools identified through a comprehensive scoping review (health sciences, computer sciences, engineering and social sciences), our qualitative thematic analysis identifies and illustrates how two responsibility principles-environmental sustainability and organisational responsibility-are meant to be put in practice. RESULTS Guidance to make environmentally sustainable digital solutions is found in 11 tools whereas organisational responsibility is described in 33 tools. The former tools focus on reducing energy and materials consumption as well as pollution and waste production. The latter tools highlight executive roles for data risk management, data ethics and AI ethics. Only four tools translate environmental sustainability issues into tangible organisational responsibilities. CONCLUSIONS Recognising that key design and development decisions in the digital health industry are largely shaped by market considerations, this study indicates that significant work lies ahead for medical and organisation leaders to support the development of solutions fit for climate change.
Affiliation(s)
- Lysanne Rivard
- Center for Public Health Research, Université de Montréal, Montreal, Quebec, Canada
- Pascale Lehoux
- Center for Public Health Research, Université de Montréal, Montreal, Quebec, Canada
- Department of Health Management, Evaluation and Policy, Université de Montréal, Montreal, Quebec, Canada
- Hassane Alami
- Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford, UK
13
Armitage RC. Digital health technologies: Compounding the existing ethical challenges of the 'right' not to know. J Eval Clin Pract 2024. [PMID: 38493485 DOI: 10.1111/jep.13980]
Abstract
INTRODUCTION Doctors hold a prima facie duty to respect the autonomy of their patients. This manifests as the patient's 'right' not to know when patients wish to remain unaware of medical information regarding their health, and poses ethical challenges for good medical practice. This paper explores how the emergence of digital health technologies might impact upon the patient's 'right' not to know. METHOD The capabilities of digital health technologies are surveyed and ethical implications of their effects on the 'right' not to know are explored. FINDINGS Digital health technologies are increasingly collecting, processing and presenting medical data as clinically useful information, which simultaneously presents large opportunities for improved health outcomes while compounding the existing ethical challenges generated by the patient's 'right' not to know. CONCLUSION These digital tools should be designed to include functionality that mitigates these ethical challenges, and allows the preservation of their user's autonomy with regard to the medical information they wish to learn and not learn about.
Affiliation(s)
- Richard C Armitage
- Academic Unit of Population and Lifespan Sciences, School of Medicine, University of Nottingham, Nottingham, UK
14
Cai G, Huang F, Gao Y, Li X, Chi J, Xie J, Zhou L, Feng Y, Huang H, Deng T, Zhou Y, Zhang C, Luo X, Xie X, Gao Q, Zhen X, Liu J. Artificial intelligence-based models enabling accurate diagnosis of ovarian cancer using laboratory tests in China: a multicentre, retrospective cohort study. Lancet Digit Health 2024; 6:e176-e186. [PMID: 38212232 DOI: 10.1016/s2589-7500(23)00245-5]
Abstract
BACKGROUND Ovarian cancer is the most lethal gynecological malignancy. Timely diagnosis of ovarian cancer is difficult due to the lack of effective biomarkers. Laboratory tests are widely applied in clinical practice, and some have shown diagnostic and prognostic relevance to ovarian cancer. We aimed to systematically evaluate the value of routine laboratory tests for the prediction of ovarian cancer, and to develop a robust and generalisable ensemble artificial intelligence (AI) model to assist in identifying patients with ovarian cancer. METHODS In this multicentre, retrospective cohort study, we collected 98 laboratory tests and clinical features of women with or without ovarian cancer admitted to three hospitals in China between Jan 1, 2012, and April 4, 2021. A multi-criteria decision making-based classification fusion (MCF) risk prediction framework was used to combine estimations from 20 AI classification models into an integrated prediction tool for ovarian cancer diagnosis. It was evaluated on an internal validation set (3007 individuals) and two external validation sets (5641 and 2344 individuals). The primary outcome was the prediction accuracy of the model in identifying ovarian cancer. FINDINGS Based on 52 features (51 laboratory tests and age), the MCF achieved an area under the receiver-operating characteristic curve (AUC) of 0·949 (95% CI 0·948-0·950) in the internal validation set, and AUCs of 0·882 (0·880-0·885) and 0·884 (0·882-0·887) in the two external validation sets. The model showed higher AUC and sensitivity than CA125 and HE4 in identifying ovarian cancer, especially in patients with early-stage ovarian cancer. The MCF also yielded acceptable prediction accuracy when highly ranked laboratory tests indicative of ovarian cancer, such as CA125 and other tumour markers, were excluded, and it outperformed state-of-the-art models in ovarian cancer prediction. The MCF was wrapped as an ovarian cancer prediction tool and made publicly available to provide an estimated probability of ovarian cancer from input laboratory test values. INTERPRETATION The MCF model consistently achieved satisfactory performance in ovarian cancer prediction when using laboratory tests from the three validation sets. This model offers a low-cost, easily accessible, and accurate diagnostic tool for ovarian cancer. The included laboratory tests, not only CA125 (the highest-ranked laboratory test in importance for diagnostic assistance), contributed to the characterisation of patients with ovarian cancer. FUNDING Ministry of Science and Technology of China; National Natural Science Foundation of China; Natural Science Foundation of Guangdong Province, China; and Science and Technology Project of Guangzhou, China. TRANSLATION For the Chinese translation of the abstract see Supplementary Materials section.
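The abstract does not specify the MCF fusion rule; as a loose illustration of the general idea of combining estimates from multiple classifiers and scoring the result by AUC, here is a generic probability-averaging sketch on synthetic data using scikit-learn. The feature set, base models, and averaging rule are assumptions, not the published method.

```python
# Generic sketch: average predicted probabilities from several classifiers and report AUC.
# Synthetic data stands in for laboratory-test features; this is not the published MCF model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=52, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = [
    LogisticRegression(max_iter=1000),
    RandomForestClassifier(n_estimators=200, random_state=0),
    GradientBoostingClassifier(random_state=0),
]

# Fit each base model and average their positive-class probabilities on the test set.
probas = []
for model in models:
    model.fit(X_tr, y_tr)
    probas.append(model.predict_proba(X_te)[:, 1])
fused = np.mean(probas, axis=0)

print(f"Fused AUC: {roc_auc_score(y_te, fused):.3f}")
```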
Affiliation(s)
- Guangyao Cai
- Department of Gynecologic Oncology, State Key Laboratory of Oncology in South China, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, China
- Fangjun Huang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Yue Gao
- Cancer Biology Research Centre (Key Laboratory of the Ministry of Education), Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Xiao Li
- Department of Gynecologic Oncology, Women's Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- Jianhua Chi
- Cancer Biology Research Centre (Key Laboratory of the Ministry of Education), Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Jincheng Xie
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Linghong Zhou
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Yanling Feng
- Department of Gynecologic Oncology, State Key Laboratory of Oncology in South China, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, China
- He Huang
- Department of Gynecologic Oncology, State Key Laboratory of Oncology in South China, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, China
- Ting Deng
- Department of Gynecologic Oncology, State Key Laboratory of Oncology in South China, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, China
- Yun Zhou
- Department of Gynecologic Oncology, State Key Laboratory of Oncology in South China, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, China
- Chuyao Zhang
- Department of Gynecologic Oncology, State Key Laboratory of Oncology in South China, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, China
- Xiaolin Luo
- Department of Gynecologic Oncology, State Key Laboratory of Oncology in South China, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, China
- Xing Xie
- Department of Gynecologic Oncology, Women's Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- Qinglei Gao
- Cancer Biology Research Centre (Key Laboratory of the Ministry of Education), Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Xin Zhen
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Jihong Liu
- Department of Gynecologic Oncology, State Key Laboratory of Oncology in South China, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, China
15
Gianola S, Bargeri S, Castellini G, Cook C, Palese A, Pillastrini P, Salvalaggio S, Turolla A, Rossettini G. Performance of ChatGPT Compared to Clinical Practice Guidelines in Making Informed Decisions for Lumbosacral Radicular Pain: A Cross-sectional Study. J Orthop Sports Phys Ther 2024; 54:1-7. [PMID: 38284363 DOI: 10.2519/jospt.2024.12151]
Abstract
OBJECTIVE: To compare the accuracy of an artificial intelligence chatbot to clinical practice guideline (CPG) recommendations for providing answers to complex clinical questions on lumbosacral radicular pain. DESIGN: Cross-sectional study. METHODS: We extracted recommendations from recent CPGs for diagnosing and treating lumbosacral radicular pain. Corresponding clinical questions were developed and queried to OpenAI's ChatGPT (GPT-3.5). We compared ChatGPT answers to CPG recommendations by assessing (1) the internal consistency of ChatGPT answers, measured as the percentage of text wording similarity when a clinical question was posed 3 times, (2) the reliability between 2 independent reviewers in grading ChatGPT answers, and (3) the accuracy of ChatGPT answers compared to CPG recommendations. Reliability was estimated using Fleiss' kappa (κ) coefficients, and accuracy by interobserver agreement as the frequency of agreements among all judgments. RESULTS: We tested 9 clinical questions. The internal consistency of ChatGPT answers was unacceptable across all 3 trials in all clinical questions (mean percentage of 49%, standard deviation of 15). Intrarater reliability (reviewer 1: κ = 0.90, standard error [SE] = 0.09; reviewer 2: κ = 0.90, SE = 0.10) and inter-rater reliability (κ = 0.85, SE = 0.15) between the 2 reviewers were "almost perfect." Accuracy between ChatGPT answers and CPG recommendations was slight, with agreement in 33% of recommendations. CONCLUSION: ChatGPT performed poorly in the internal consistency and accuracy of the indications it generated compared with clinical practice guideline recommendations for lumbosacral radicular pain.
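A minimal sketch of the two checks described above, internal consistency as text similarity across repeated answers and accuracy as percentage agreement with guideline recommendations, is given below. The example answers, the grade labels, and the use of difflib's similarity ratio are illustrative assumptions, not the study's instrument.

```python
# Sketch: pairwise text similarity across three repeated chatbot answers,
# plus simple percentage agreement between graded answers and guideline recommendations.
from difflib import SequenceMatcher
from itertools import combinations

answers = [
    "Imaging is not routinely recommended for lumbosacral radicular pain.",
    "Routine imaging is generally not recommended for radicular pain.",
    "MRI should be ordered for every patient with radicular symptoms.",
]

def mean_pairwise_similarity(texts):
    """Average SequenceMatcher ratio over all pairs of answers (0 to 1 scale)."""
    ratios = [SequenceMatcher(None, a, b).ratio() for a, b in combinations(texts, 2)]
    return sum(ratios) / len(ratios)

# Invented grades of chatbot answers versus guideline recommendations.
chatgpt_grades = ["agree", "disagree", "agree", "unclear", "agree"]
guideline_grades = ["agree", "agree", "agree", "disagree", "agree"]
accuracy = sum(a == b for a, b in zip(chatgpt_grades, guideline_grades)) / len(chatgpt_grades)

print(f"Mean pairwise similarity: {mean_pairwise_similarity(answers):.0%}")
print(f"Agreement with guidelines: {accuracy:.0%}")
```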
16
Kusta O, Bearman M, Gorur R, Risør T, Brodersen JB, Hoeyer K. Speed, accuracy, and efficiency: The promises and practices of digitization in pathology. Soc Sci Med 2024; 345:116650. [PMID: 38364720 DOI: 10.1016/j.socscimed.2024.116650]
Abstract
Digitization is often presented in policy discourse as a panacea to a multitude of contemporary problems, not least in healthcare. How can policy promises relating to digitization be assessed and potentially countered in particular local contexts? Based on a study in Denmark, we suggest scrutinizing the politics of digitization by comparing policy promises about the future with practitioners' experience in the present. While Denmark is one of the most digitalized countries in the world, digitization of pathology has only recently been given full policy attention. As pathology departments are faced with an increased demand for pathology analysis and a shortage of pathologists, Danish policymakers have put forward digitization as a way to address these challenges. Who is it that wants to digitize pathology, why, and how does digitization unfold in routine work practices? Using online search and document analysis, we identify actors and analyze the policy promises describing expectations associated with digitization. We then use interviews and observations to juxtapose these expectations with observations of everyday pathology practices as experienced by pathologists. We show that policymakers expect digitization to improve speed, patient safety, and diagnostic accuracy, as well as efficiency. In everyday practice, however, digitization does not deliver on these expectations. Fulfillment of policy expectations instead hinges on the types of artificial intelligence (AI) applications that are still to be developed and implemented. Some pathologists remark that AI might work in the easy cases, but this would leave them with only the difficult cases, which they consider too burdensome. Our particular mode of juxtaposing policy and practice throws new light on the political work done by policy promises and helps to explain why the discipline of pathology does not seem to easily lend itself to the digital embrace.
Affiliation(s)
- Olsi Kusta
- Department of Public Health, University of Copenhagen, Denmark; Centre for Research in Assessment and Digital Learning (CRADLE), Deakin University, Melbourne, Australia; Øster Farimagsgade 5 opg. B, Building: 15-0-11, 1014, Copenhagen, Denmark
- Margaret Bearman
- Centre for Research in Assessment and Digital Learning (CRADLE), Deakin University, Level 12, Tower 2, 727 Collins St, Docklands, Melbourne, VIC, 3008, Australia
- Radhika Gorur
- School of Education, Deakin University, 221 Burwood Hwy, Burwood, VIC, 3125, Australia
- Torsten Risør
- Centre for General Practice, Department of Public Health, University of Copenhagen, Denmark; Norwegian Centre for E-health Research, UiT The Arctic University of Norway, Tromsø, Norway; Øster Farimagsgade 5 opg. Q, Building: 24-1, 1014, Copenhagen, Denmark
- John Brandt Brodersen
- Centre for General Practice, Department of Public Health, University of Copenhagen, Denmark; Primary Health Care Research Unit, Region Zealand, Denmark; Øster Farimagsgade 5 opg. Q, Building: 24-1-21, 1014, Copenhagen, Denmark
- Klaus Hoeyer
- Section for Health Services Research, Department of Public Health, University of Copenhagen, Denmark; Øster Farimagsgade 5 opg. B, 1353, København K, Copenhagen, Denmark
17
Hassan J, Saeed SM, Deka L, Uddin MJ, Das DB. Applications of Machine Learning (ML) and Mathematical Modeling (MM) in Healthcare with Special Focus on Cancer Prognosis and Anticancer Therapy: Current Status and Challenges. Pharmaceutics 2024; 16:260. [PMID: 38399314 PMCID: PMC10892549 DOI: 10.3390/pharmaceutics16020260]
Abstract
The use of data-driven high-throughput analytical techniques, which has given rise to computational oncology, is undisputed. The widespread use of machine learning (ML) and mathematical modeling (MM)-based techniques is widely acknowledged. These two approaches have fueled the advancement in cancer research and eventually led to the uptake of telemedicine in cancer care. For diagnostic, prognostic, and treatment purposes concerning different types of cancer research, vast databases of varied information with manifold dimensions are required, and indeed, all this information can only be managed by an automated system developed utilizing ML and MM. In addition, MM is being used to probe the relationship between the pharmacokinetics and pharmacodynamics (PK/PD interactions) of anti-cancer substances to improve cancer treatment, and also to refine the quality of existing treatment models by being incorporated at all steps of research and development related to cancer and in routine patient care. This review will serve as a consolidation of the advancement and benefits of ML and MM techniques with a special focus on the area of cancer prognosis and anticancer therapy, leading to the identification of challenges (data quantity, ethical consideration, and data privacy) which are yet to be fully addressed in current studies.
Affiliation(s)
- Jasmin Hassan
- Drug Delivery & Therapeutics Lab, Dhaka 1212, Bangladesh
- Lipika Deka
- Faculty of Computing, Engineering and Media, De Montfort University, Leicester LE1 9BH, UK
- Md Jasim Uddin
- Department of Pharmaceutical Technology, Faculty of Pharmacy, Universiti Malaya, Kuala Lumpur 50603, Malaysia
- Diganta B. Das
- Department of Chemical Engineering, Loughborough University, Loughborough LE11 3TU, UK
18
Coghlan S, Gyngell C, Vears DF. Ethics of artificial intelligence in prenatal and pediatric genomic medicine. J Community Genet 2024; 15:13-24. [PMID: 37796364 PMCID: PMC10857992 DOI: 10.1007/s12687-023-00678-4]
Abstract
This paper examines the ethics of introducing emerging forms of artificial intelligence (AI) into prenatal and pediatric genomic medicine. Application of genomic AI to these early life settings has not received much attention in the ethics literature. We focus on three contexts: (1) prenatal genomic sequencing for possible fetal abnormalities, (2) rapid genomic sequencing for critically ill children, and (3) reanalysis of genomic data obtained from children for diagnostic purposes. The paper identifies and discusses various ethical issues in the possible application of genomic AI in these settings, especially as they relate to concepts of beneficence, nonmaleficence, respect for autonomy, justice, transparency, accountability, privacy, and trust. The examination will inform the ethically sound introduction of genomic AI in early human life.
Affiliation(s)
- Simon Coghlan
- School of Computing and Information Systems (CIS), Centre for AI and Digital Ethics (CAIDE), The University of Melbourne, Grattan St, Melbourne, Victoria, 3010, Australia
- Australian Research Council Centre of Excellence for Automated Decision Making and Society (ADM+S), Melbourne, Victoria, Australia
- Christopher Gyngell
- Biomedical Ethics Research Group, Murdoch Children's Research Institute, The Royal Children's Hospital, 50 Flemington Rd, Parkville, Victoria, 3052, Australia
- University of Melbourne, Parkville, Victoria, 3052, Australia
- Danya F Vears
- Biomedical Ethics Research Group, Murdoch Children's Research Institute, The Royal Children's Hospital, 50 Flemington Rd, Parkville, Victoria, 3052, Australia
- University of Melbourne, Parkville, Victoria, 3052, Australia
- Centre for Biomedical Ethics and Law, KU Leuven, Kapucijnenvoer 35, 3000, Leuven, Belgium
19
Castonguay A, Wagner G, Motulsky A, Paré G. AI maturity in health care: An overview of 10 OECD countries. Health Policy 2024; 140:104938. [PMID: 38157771 DOI: 10.1016/j.healthpol.2023.104938]
Abstract
BACKGROUND Artificial Intelligence (AI) and its applications in health care are on the agenda of policymakers around the world, but a major challenge remains, namely, to set policies that will ensure wide acceptance and capture the value of AI while mitigating associated risks. OBJECTIVE This study aims to provide an overview of how OECD countries strategize about how to integrate AI into health care and to determine their actual level of AI maturity. METHODS A scan of government-based AI strategies and initiatives adopted in 10 proactive OECD countries was conducted. Available documentation was analyzed, using the Broadband Commission for Sustainable Development's roadmap to AI maturity as a conceptual framework. RESULTS The findings reveal that most selected OECD countries are at the Emerging stage (Level 2) of AI in health maturity. Despite considerable funding and a variety of approaches to the development of an AI in health supporting ecosystem, only the United Kingdom and United States have reached the highest level of maturity, an integrated and collaborative AI in health ecosystem (Level 3). CONCLUSION Despite policymakers looking for opportunities to expedite efforts related to AI, there is no one-size-fits-all approach to ensure the sustainable development and safe use of AI in health. The principles of equifinality and mindfulness must thus guide policymaking in the development of AI in health care.
Affiliation(s)
- Alexandre Castonguay
- Faculté des sciences infirmières, Pavillon Marguerite-d'Youville, C.P. 6128 succ. Centre-ville, Montréal, Québec, H3C 3J7, Canada
- Gerit Wagner
- Faculty Information Systems and Applied Computer Sciences, University of Bamberg, Kapuzinerstraße 16, D-96047, Bamberg, Germany
- Aude Motulsky
- École de Santé Publique de l'Université de Montréal, C.P. 6128 succursale centre-ville, Montréal, Québec, H3C 3J7, Canada
- Guy Paré
- Département de technologies de l'information, HEC Montréal, 3000, chemin de la Côte-Sainte-Catherine, Montréal, Québec, H3T 2A7, Canada
20
Appel JM. Artificial intelligence in medicine and the negative outcome penalty paradox. J Med Ethics 2024:jme-2023-109848. [PMID: 38290853 DOI: 10.1136/jme-2023-109848]
Abstract
Artificial intelligence (AI) holds considerable promise for transforming clinical diagnostics. While much has been written both about public attitudes toward the use of AI tools in medicine and about uncertainty regarding legal liability that may be delaying its adoption, the interface of these two issues has so far drawn less attention. However, understanding this interface is essential to determining how jury behaviour is likely to influence adoption of AI by physicians. One distinctive concern identified in this paper is a 'negative outcome penalty paradox' (NOPP) in which physicians risk being penalised by juries in cases with negative outcomes, whether they overrule AI determinations or accept them. The paper notes three reasons why AI in medicine is uniquely susceptible to the NOPP and urges serious further consideration of this complex dilemma.
Affiliation(s)
- Jacob M Appel
- Psychiatry, Icahn School of Medicine at Mount Sinai, New York, New York, USA
21
Balas M, Wadden JJ, Hébert PC, Mathison E, Warren MD, Seavilleklein V, Wyzynski D, Callahan A, Crawford SA, Arjmand P, Ing EB. Exploring the potential utility of AI large language models for medical ethics: an expert panel evaluation of GPT-4. Journal of Medical Ethics 2024; 50:90-96. [PMID: 37945336 DOI: 10.1136/jme-2023-109549]
Abstract
Integrating large language models (LLMs) like GPT-4 into medical ethics is a novel concept, and understanding the effectiveness of these models in aiding ethicists with decision-making can have significant implications for the healthcare sector. Thus, the objective of this study was to evaluate the performance of GPT-4 in responding to complex medical ethical vignettes and to gauge its utility and limitations for aiding medical ethicists. Using a mixed-methods, cross-sectional survey approach, a panel of six ethicists assessed LLM-generated responses to eight ethical vignettes. The main outcomes measured were relevance, reasoning, depth, technical and non-technical clarity, as well as acceptability of GPT-4's responses. The readability of the responses was also assessed. Of the six metrics evaluating the effectiveness of GPT-4's responses, the overall mean score was 4.1/5. GPT-4 was rated highest in providing technical (4.7/5) and non-technical clarity (4.4/5), whereas the lowest-rated metrics were depth (3.8/5) and acceptability (3.8/5). There was poor-to-moderate inter-rater reliability characterised by an intraclass correlation coefficient of 0.54 (95% CI: 0.30 to 0.71). Based on panellist feedback, GPT-4 was able to identify and articulate key ethical issues but struggled to appreciate the nuanced aspects of ethical dilemmas and misapplied certain moral principles. This study reveals limitations in the ability of GPT-4 to appreciate the depth and nuanced acceptability of real-world ethical dilemmas, particularly those that require a thorough understanding of relational complexities and context-specific values. Ongoing evaluation of LLM capabilities within medical ethics remains paramount, and further refinement is needed before it can be used effectively in clinical settings.
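The inter-rater reliability figure reported above corresponds to a standard two-way random-effects intraclass correlation. The following minimal Python sketch is illustrative only: the ratings matrix is hypothetical (random values, not data from the cited study) and the function implements the Shrout–Fleiss ICC(2,1) with plain NumPy.

```python
import numpy as np

def icc2_1(ratings: np.ndarray) -> float:
    """Shrout-Fleiss ICC(2,1): two-way random effects, absolute agreement,
    single rater. `ratings` has shape (n_targets, k_raters)."""
    n, k = ratings.shape
    grand = ratings.mean()
    ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()  # between vignettes
    ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()  # between raters
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols
    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = ss_err / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Hypothetical example: 8 vignettes rated 1-5 by a 6-member panel
rng = np.random.default_rng(42)
ratings = rng.integers(3, 6, size=(8, 6)).astype(float)
print(f"ICC(2,1) = {icc2_1(ratings):.2f}")
```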
Affiliation(s)
- Michael Balas
- Temerty Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
| | - Jordan Joseph Wadden
- Centre for Clinical Ethics, Unity Health Toronto, Toronto, Ontario, Canada
- Clinical Ethics, Scarborough Health Network, Scarborough, Ontario, Canada
| | - Philip C Hébert
- Temerty Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Department of Family and Community Medicine, University of Toronto, Toronto, Ontario, Canada
| | - Eric Mathison
- Philosophy, University of Toronto, Toronto, Ontario, Canada
| | - Marika D Warren
- Bioethics, Dalhousie University, Halifax, Nova Scotia, Canada
| | | | - Daniel Wyzynski
- Office of Health Ethics, London Health Sciences Centre, London, Ontario, Canada
| | - Alison Callahan
- Ethics Department, Ontario Shores Centre for Mental Health Sciences, Whitby, Ontario, Canada
| | - Sean A Crawford
- Division of Vascular Surgery, Department of Surgery, University Health Network, Toronto, Ontario, Canada
| | | | - Edsel B Ing
- Ophthalmology, University of Alberta, Edmonton, Alberta, Canada
22
Terranova C, Cestonaro C, Fava L, Cinquetti A. AI and professional liability assessment in healthcare. A revolution in legal medicine? Front Med (Lausanne) 2024; 10:1337335. [PMID: 38259835 PMCID: PMC10800912 DOI: 10.3389/fmed.2023.1337335]
Abstract
The adoption of advanced artificial intelligence (AI) systems in healthcare is transforming the healthcare-delivery landscape. Artificial intelligence may enhance patient safety and improve healthcare outcomes, but it presents notable ethical and legal dilemmas. Moreover, as AI streamlines the analysis of the multitude of factors relevant to malpractice claims, including informed consent, adherence to standards of care, and causation, the evaluation of professional liability might also benefit from its use. Beginning with an analysis of the basic steps in assessing professional liability, this article examines the potential new medical-legal issues that an expert witness may encounter when analyzing malpractice cases and the potential integration of AI in this context. These changes related to the use of integrated AI will necessitate efforts on the part of judges, experts, and clinicians, and may require new legislative regulations. A new expert witness will likely be necessary in the evaluation of professional liability cases. On the one hand, artificial intelligence will support the expert witness; on the other, it will introduce specific elements into the activities of healthcare workers. These elements will necessitate an expert witness with a specialized cultural background. Examining the steps of professional liability assessment indicates that the likely path for AI in legal medicine involves its role as a collaborative and integrated tool. The combination of AI with human judgment in these assessments can enhance comprehensiveness and fairness. However, it is imperative to adopt a cautious and balanced approach to prevent complete automation in this field.
Affiliation(s)
- Claudio Terranova
- Legal Medicine and Toxicology, Department of Cardiac, Thoracic, Vascular Sciences and Public Health, University of Padua, Padua, Italy
23
Park HJ. Patient perspectives on informed consent for medical AI: A web-based experiment. Digit Health 2024; 10:20552076241247938. [PMID: 38698829 PMCID: PMC11064747 DOI: 10.1177/20552076241247938]
Abstract
Objective Despite the increasing use of AI applications as a clinical decision support tool in healthcare, patients are often unaware of their use in the physician's decision-making process. This study aims to determine whether doctors should disclose the use of AI tools in diagnosis and what kind of information should be provided. Methods A survey experiment with 1000 respondents in South Korea was conducted to estimate the perceived importance that patients attach to information regarding the use of an AI tool in diagnosis when deciding whether to receive treatment. Results The study found that the use of an AI tool increases the perceived importance of information related to its use, compared with when a physician consults with a human radiologist. Participants perceived information regarding the AI tool, when AI was used, as either more important than or similar to the regularly disclosed information regarding short-term effects when AI was not used. Further analysis revealed that gender, age, and income have a statistically significant effect on the perceived importance of every piece of AI information. Conclusions This study supports the disclosure of AI use in diagnosis during the informed consent process. However, the disclosure should be tailored to the individual patient's needs, as patient preferences for information regarding AI use vary across gender, age, and income levels. It is recommended that ethical guidelines that go beyond mere legal requirements be developed for informed consent when AI is used in diagnosis.
Affiliation(s)
- Hai Jin Park
- Center for AI and Law, Hanyang University Law School, Seoul, South Korea
24
Hussain W, Mabrok M, Gao H, Rabhi FA, Rashed EA. Revolutionising healthcare with artificial intelligence: A bibliometric analysis of 40 years of progress in health systems. Digit Health 2024; 10:20552076241258757. [PMID: 38817839 PMCID: PMC11138196 DOI: 10.1177/20552076241258757]
Abstract
The development of artificial intelligence (AI) has revolutionised the medical system, empowering healthcare professionals to analyse complex nonlinear big data and identify hidden patterns, facilitating well-informed decisions. Over the last decade, there has been a notable trend of research in AI, machine learning (ML), and their associated algorithms in health and medical systems. These approaches have transformed the healthcare system, enhancing efficiency, accuracy, personalised treatment, and decision-making. Recognising the importance and growing trend of research in the topic area, this paper presents a bibliometric analysis of AI in health and medical systems. The paper utilises the Web of Science (WoS) Core Collection database, considering documents published in the topic area over the last four decades. A total of 64,063 papers were identified from 1983 to 2022. The paper evaluates the bibliometric data from various perspectives, such as annual papers published, annual citations, highly cited papers, and the most productive institutions and countries. The paper visualises the relationships among various scientific actors by presenting bibliographic coupling and co-occurrences of author keywords. The analysis indicates that the field began to grow in the late 1970s and early 1980s, with markedly accelerated growth since 2019. The most influential institutions are in the USA and China. The study also reveals that the scientific community's top keywords include 'ML', 'Deep Learning', and 'Artificial Intelligence'.
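Keyword co-occurrence maps of the kind described above are typically built by counting how often pairs of author keywords appear together across records. The sketch below is illustrative only; the three records and keyword lists are hypothetical and not drawn from the Web of Science export used in the cited study.

```python
from collections import Counter
from itertools import combinations

# Hypothetical author-keyword lists, one list per bibliographic record
records = [
    ["artificial intelligence", "machine learning", "radiology"],
    ["machine learning", "deep learning", "electronic health records"],
    ["deep learning", "artificial intelligence", "diagnosis"],
]

# Frequency of individual keywords across the corpus
keyword_freq = Counter(kw for kws in records for kw in kws)

# Co-occurrence counts: every unordered keyword pair within one record
cooccurrence = Counter(
    pair
    for kws in records
    for pair in combinations(sorted(set(kws)), 2)
)

print(keyword_freq.most_common(3))
print(cooccurrence.most_common(3))
```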
Affiliation(s)
- Walayat Hussain
- Peter Faber Business School, Australian Catholic University, North Sydney, Australia
| | - Mohamed Mabrok
- Department of Mathematics and Statistics, Qatar University, Doha, Qatar
| | - Honghao Gao
- School of Computer Engineering and Science, Shanghai University, Shanghai, China
| | - Fethi A. Rabhi
- School of Computer Science and Engineering, University of New South Wales (UNSW), Sydney, Australia
| | - Essam A. Rashed
- Graduate School of Information Science, University of Hyogo, Kobe, Japan
25
Ueda D, Kakinuma T, Fujita S, Kamagata K, Fushimi Y, Ito R, Matsui Y, Nozaki T, Nakaura T, Fujima N, Tatsugami F, Yanagawa M, Hirata K, Yamada A, Tsuboyama T, Kawamura M, Fujioka T, Naganawa S. Fairness of artificial intelligence in healthcare: review and recommendations. Jpn J Radiol 2024; 42:3-15. [PMID: 37540463 PMCID: PMC10764412 DOI: 10.1007/s11604-023-01474-3]
Abstract
In this review, we address the issue of fairness in the clinical integration of artificial intelligence (AI) in the medical field. As the clinical adoption of deep learning algorithms, a subfield of AI, progresses, concerns have arisen regarding the impact of AI biases and discrimination on patient health. This review aims to provide a comprehensive overview of concerns associated with AI fairness; discuss strategies to mitigate AI biases; and emphasize the need for cooperation among physicians, AI researchers, AI developers, policymakers, and patients to ensure equitable AI integration. First, we define and introduce the concept of fairness in AI applications in healthcare and radiology, emphasizing the benefits and challenges of incorporating AI into clinical practice. Next, we delve into concerns regarding fairness in healthcare, addressing the various causes of biases in AI and potential concerns such as misdiagnosis, unequal access to treatment, and ethical considerations. We then outline strategies for addressing fairness, such as the importance of diverse and representative data and algorithm audits. Additionally, we discuss ethical and legal considerations such as data privacy, responsibility, accountability, transparency, and explainability in AI. Finally, we present the Fairness of Artificial Intelligence Recommendations in healthcare (FAIR) statement to offer best practices. Through these efforts, we aim to provide a foundation for discussing the responsible and equitable implementation and deployment of AI in healthcare.
Affiliation(s)
- Daiju Ueda
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, 1-4-3 Asahi-Machi, Abeno-ku, Osaka, 545-8585, Japan.
| | | | - Shohei Fujita
- Department of Radiology, University of Tokyo, Bunkyo-ku, Tokyo, Japan
| | - Koji Kamagata
- Department of Radiology, Juntendo University Graduate School of Medicine, Bunkyo-ku, Tokyo, Japan
| | - Yasutaka Fushimi
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, Sakyo-ku, Kyoto, Japan
| | - Rintaro Ito
- Department of Radiology, Nagoya University Graduate School of Medicine, Nagoya, Aichi, Japan
| | - Yusuke Matsui
- Department of Radiology, Faculty of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, Kita-ku, Okayama, Japan
| | - Taiki Nozaki
- Department of Radiology, Keio University School of Medicine, Shinjuku-ku, Tokyo, Japan
| | - Takeshi Nakaura
- Department of Diagnostic Radiology, Kumamoto University Graduate School of Medicine, Chuo-ku, Kumamoto, Japan
| | - Noriyuki Fujima
- Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, Sapporo, Japan
| | - Fuminari Tatsugami
- Department of Diagnostic Radiology, Hiroshima University, Minami-ku, Hiroshima, Japan
| | - Masahiro Yanagawa
- Department of Radiology, Osaka University Graduate School of Medicine, Suita City, Osaka, Japan
| | - Kenji Hirata
- Department of Diagnostic Imaging, Graduate School of Medicine, Hokkaido University, Kita-ku, Sapporo, Hokkaido, Japan
| | - Akira Yamada
- Department of Radiology, Shinshu University School of Medicine, Matsumoto, Nagano, Japan
| | - Takahiro Tsuboyama
- Department of Radiology, Osaka University Graduate School of Medicine, Suita City, Osaka, Japan
| | - Mariko Kawamura
- Department of Radiology, Nagoya University Graduate School of Medicine, Nagoya, Aichi, Japan
| | - Tomoyuki Fujioka
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Bunkyo-ku, Tokyo, Japan
| | - Shinji Naganawa
- Department of Radiology, Nagoya University Graduate School of Medicine, Nagoya, Aichi, Japan
26
Funer F, Liedtke W, Tinnemeyer S, Klausen AD, Schneider D, Zacharias HU, Langanke M, Salloch S. Responsibility and decision-making authority in using clinical decision support systems: an empirical-ethical exploration of German prospective professionals' preferences and concerns. Journal of Medical Ethics 2023; 50:6-11. [PMID: 37217277 PMCID: PMC10803986 DOI: 10.1136/jme-2022-108814]
Abstract
Machine learning-driven clinical decision support systems (ML-CDSSs) seem impressively promising for future routine and emergency care. However, reflection on their clinical implementation reveals a wide array of ethical challenges. The preferences, concerns and expectations of professional stakeholders remain largely unexplored. Empirical research, however, may help to clarify the conceptual debate and its aspects in terms of their relevance for clinical practice. This study explores, from an ethical point of view, future healthcare professionals' attitudes to potential changes of responsibility and decision-making authority when using ML-CDSS. Twenty-seven semistructured interviews were conducted with German medical students and nursing trainees. The data were analysed based on qualitative content analysis according to Kuckartz. Interviewees' reflections are presented under three themes that the interviewees describe as closely related: (self-)attribution of responsibility, decision-making authority and need for (professional) experience. The results illustrate the conceptual interconnectedness of professional responsibility and the structural and epistemic preconditions that must be met for clinicians to fulfil that responsibility in a meaningful manner. The study also sheds light on the four relata of responsibility understood as a relational concept. The article closes with concrete suggestions for the ethically sound clinical implementation of ML-CDSS.
Affiliation(s)
- Florian Funer
- Institute of Ethics, History and Philosophy of Medicine, Hannover Medical School, Hannover, Germany
- Institute of Ethics and History of Medicine, Eberhard Karls University Tübingen, Tübingen, Germany
| | - Wenke Liedtke
- Department of Social Work, Protestant University of Applied Sciences RWL, Bochum, Germany
| | - Sara Tinnemeyer
- Institute of Ethics, History and Philosophy of Medicine, Hannover Medical School, Hannover, Germany
| | | | - Diana Schneider
- Competence Center Emerging Technologies, Fraunhofer Institute for Systems and Innovation Research ISI, Karlsruhe, Germany
| | - Helena U Zacharias
- Peter L. Reichertz Institute for Medical Informatics of TU Braunschweig and Hannover Medical School, Hannover Medical School, Hannover, Germany
| | - Martin Langanke
- Department of Social Work, Protestant University of Applied Sciences RWL, Bochum, Germany
| | - Sabine Salloch
- Institute of Ethics, History and Philosophy of Medicine, Hannover Medical School, Hannover, Germany
27
Pagallo U, O’Sullivan S, Nevejans N, Holzinger A, Friebe M, Jeanquartier F, Jean-Quartier C, Miernik A. The underuse of AI in the health sector: Opportunity costs, success stories, risks and recommendations. Health and Technology 2023; 14:1-14. [PMID: 38229886 PMCID: PMC10788319 DOI: 10.1007/s12553-023-00806-7]
Abstract
Purpose This contribution explores the underuse of artificial intelligence (AI) in the health sector, what this means for practice, and how much the underuse can cost. Attention is drawn to the relevance of an issue that the European Parliament has outlined as a "major threat" in 2020. At its heart is the risk that research and development on trusted AI systems for medicine and digital health will pile up in lab centers without generating further practical relevance. Our analysis highlights why researchers, practitioners, and especially policymakers should pay attention to this phenomenon. Methods The paper examines the ways in which governments and public agencies are addressing the underuse of AI. As governments and international organizations often acknowledge the limitations of their own initiatives, the contribution explores the causes of the current issues and suggests ways to improve initiatives for digital health. Results Recommendations address the development of standards, models of regulatory governance, assessment of the opportunity costs of underuse of technology, and the urgency of the problem. Conclusions The exponential pace of AI advances and innovations makes the risks of underuse of AI increasingly threatening.
Affiliation(s)
- Ugo Pagallo
- Law School, University of Turin, Turin, Italy
| | - Shane O’Sullivan
- Department of Urology, Faculty of Medicine, University of Freiburg - Medical Centre, Freiburg im Breisgau, Germany
| | - Nathalie Nevejans
- Ethics and Procedures Center (CDEP), Faculty of Law of Douai, University of Artois, Arras, France
| | - Andreas Holzinger
- Human-Centered AI Lab, Medical University of Graz, Graz, Austria
- University of Natural Resources and Life Sciences Vienna, Human-Centered AI Lab, Vienna, Austria
| | - Michael Friebe
- Department of Measurements and Electronics, AGH University of Science and Technology, Kraków, Poland
- Faculty of Medicine, Otto-von-Guericke-University, Magdeburg, Germany
- Center for Innovation and Business Development, FOM University of Applied Sciences, Essen, Germany
| | | | | | - Arkadiusz Miernik
- Department of Urology, Faculty of Medicine, University of Freiburg - Medical Centre, Freiburg im Breisgau, Germany
28
Haze T, Kawano R, Takase H, Suzuki S, Hirawa N, Tamura K. Influence on the accuracy in ChatGPT: Differences in the amount of information per medical field. Int J Med Inform 2023; 180:105283. [PMID: 37931432 DOI: 10.1016/j.ijmedinf.2023.105283]
Abstract
OBJECTIVES Although ChatGPT was not developed for medical use, there is growing interest in its use in medical fields. Understanding its capabilities and precautions for its use in the medical field is an urgent matter. We hypothesized that differences in the amounts of information published in different medical fields would be proportionate to the amounts of training ChatGPT receives in those fields, and hence its accuracy in providing answers. STUDY DESIGN A non-clinical experimental study. METHODS We administered the Japanese National Medical Examination to GPT-3.5 and GPT-4 to examine the rates of accuracy and consistency in their responses. We counted the total number of documents in the Web of Science Core Collection per medical field and assessed the relationship with ChatGPT's accuracy. We also performed multivariate-adjusted models to investigate the risk factors for incorrect answers. RESULTS For GPT-4, we confirmed an accuracy rate of 81.0 % and a consistency rate of 88.8 % on the exam; both showed improvement compared to those for GPT-3.5. A positive correlation was observed between the accuracy rate and consistency rate (R = 0.51, P < 0.001). The number of documents per medical field was significantly correlated with the accuracy rate in that medical field (R = 0.44, P < 0.05), with relatively few publications being an independent risk factor for incorrect answers. CONCLUSIONS Checking consistency may help identify incorrect answers when using ChatGPT. Users should be aware that the accuracy of the answers by ChatGPT may decrease when it is asked about topics with limited published information, such as new drugs and diseases.
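The core analysis described above, relating per-field publication volume to per-field answer accuracy, reduces to a simple correlation. The sketch below is a hedged illustration: the document counts and accuracy values are made-up numbers, not the study's data, and it uses SciPy's Pearson correlation to show the basic computation.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-field data: indexed document counts and exam accuracy (%)
documents = np.array([25000, 12000, 9000, 4500, 1500, 800, 600, 300])
accuracy = np.array([91.0, 88.0, 85.0, 82.0, 74.0, 70.0, 69.0, 65.0])

r, p = pearsonr(documents, accuracy)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")

# Fields with comparatively little published information warrant extra caution
low_coverage = documents < np.median(documents)
print("Fields below the median document count:", int(low_coverage.sum()))
```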
Affiliation(s)
- Tatsuya Haze
- Department of Medical Science and Cardiorenal Medicine, Yokohama City University Graduate School of Medicine, Yokohama, Japan; Department of Nephrology and Hypertension, Yokohama City University Medical Center, Yokohama, Japan; YCU Center for Novel and Exploratory Clinical Trials (Y-NEXT), Yokohama City University Hospital, Yokohama, Japan.
| | - Rina Kawano
- Department of Medical Science and Cardiorenal Medicine, Yokohama City University Graduate School of Medicine, Yokohama, Japan; Department of Nephrology and Hypertension, Yokohama City University Medical Center, Yokohama, Japan
| | - Hajime Takase
- YCU Center for Novel and Exploratory Clinical Trials (Y-NEXT), Yokohama City University Hospital, Yokohama, Japan; Department of Neurosurgery, Yokohama City University Graduate School of Medicine, Yokohama, Japan
| | - Shota Suzuki
- Department of Medical Science and Cardiorenal Medicine, Yokohama City University Graduate School of Medicine, Yokohama, Japan; Department of Nephrology and Hypertension, Yokohama City University Medical Center, Yokohama, Japan
| | - Nobuhito Hirawa
- Department of Medical Science and Cardiorenal Medicine, Yokohama City University Graduate School of Medicine, Yokohama, Japan; Department of Nephrology and Hypertension, Yokohama City University Medical Center, Yokohama, Japan; Clinical Education and Training Center, Yokohama City University Medical Center, Yokohama, Japan
| | - Kouichi Tamura
- Department of Medical Science and Cardiorenal Medicine, Yokohama City University Graduate School of Medicine, Yokohama, Japan
29
Nichol AA, Sankar PL, Halley MC, Federico CA, Cho MK. Developer Perspectives on Potential Harms of Machine Learning Predictive Analytics in Health Care: Qualitative Analysis. J Med Internet Res 2023; 25:e47609. [PMID: 37971798 PMCID: PMC10690528 DOI: 10.2196/47609]
Abstract
BACKGROUND Machine learning predictive analytics (MLPA) is increasingly used in health care to reduce costs and improve efficacy; it also has the potential to harm patients and trust in health care. Academic and regulatory leaders have proposed a variety of principles and guidelines to address the challenges of evaluating the safety of machine learning-based software in the health care context, but accepted practices do not yet exist. However, there appears to be a shift toward process-based regulatory paradigms that rely heavily on self-regulation. At the same time, little research has examined the perspectives of MLPA developers themselves about these harms, although their role will be essential in overcoming the "principles-to-practice" gap. OBJECTIVE The objective of this study was to understand how MLPA developers of health care products perceived the potential harms of those products and their responses to recognized harms. METHODS We interviewed 40 individuals who were developing MLPA tools for health care at 15 US-based organizations, including data scientists, software engineers, and those with mid- and high-level management roles. These 15 organizations were selected to represent a range of organizational types and sizes from the 106 that we previously identified. We asked developers about their perspectives on the potential harms of their work, factors that influence these harms, and their role in mitigation. We used standard qualitative analysis of transcribed interviews to identify themes in the data. RESULTS We found that MLPA developers recognized a range of potential harms of MLPA to individuals, social groups, and the health care system, such as issues of privacy, bias, and system disruption. They also identified drivers of these harms related to the characteristics of machine learning and specific to the health care and commercial contexts in which the products are developed. MLPA developers also described strategies to respond to these drivers and potentially mitigate the harms. Opportunities included balancing algorithm performance goals with potential harms, emphasizing iterative integration of health care expertise, and fostering shared company values. However, their recognition of their own responsibility to address potential harms varied widely. CONCLUSIONS Even though MLPA developers recognized that their products can harm patients, the public, and even health systems, robust procedures to assess the potential for harms and the need for mitigation do not exist. Our findings suggest that, to the extent that new oversight paradigms rely on self-regulation, they will face serious challenges if harms are driven by features that developers consider inescapable in health care and business environments. Furthermore, effective self-regulation will require MLPA developers to accept responsibility for safety and efficacy and know how to act accordingly. Our results suggest that, at the very least, substantial education will be necessary to fill the "principles-to-practice" gap.
Affiliation(s)
- Ariadne A Nichol
- Center for Biomedical Ethics, Stanford University School of Medicine, Stanford, CA, United States
| | - Pamela L Sankar
- Department of Medical Ethics & Health Policy, University of Pennsylvania, Philadelphia, PA, United States
| | - Meghan C Halley
- Center for Biomedical Ethics, Stanford University School of Medicine, Stanford, CA, United States
| | - Carole A Federico
- Center for Biomedical Ethics, Stanford University School of Medicine, Stanford, CA, United States
| | - Mildred K Cho
- Center for Biomedical Ethics, Stanford University School of Medicine, Stanford, CA, United States
30
Alanzi T, Alhajri A, Almulhim S, Alharbi S, Alfaifi S, Almarhoun E, Mulla R, Alasafra ZO, Alalwan Z, Alnasser F, Almukhtar F, Al Ghadeer F, Amro S, Alodhayb I, Alanzi N. Artificial Intelligence and Patient Autonomy in Obesity Treatment Decisions: An Empirical Study of the Challenges. Cureus 2023; 15:e49725. [PMID: 38161816 PMCID: PMC10757560 DOI: 10.7759/cureus.49725]
Abstract
Background This study aims to explore the factors associated with artificial intelligence (AI) and patient autonomy in obesity treatment decision-making. Methodology A cross-sectional, online, descriptive survey design was adopted in this study. The survey instrument incorporated the Ideal Patient Autonomy Scale (IPAS) and other factors affecting patient autonomy in the AI-patient relationship. The study participants included 74 physicians, 55 dieticians, and 273 obese patients. Results Different views were expressed on the "AI knows the best" (μ = 2.95-3.15) and "the patient should decide" (μ = 2.95-3.16) scales. Ethical concerns (μ = 3.24) and perceived privacy risks (μ = 3.58) were identified as having a more negative influence on patient autonomy compared to personal innovativeness (μ = 2.41) and trust (μ = 2.85). Physicians and dieticians expressed significantly higher trust in AI compared to patients (p < 0.05). Conclusions Patient autonomy in the AI-patient relationship is significantly affected by privacy, trust, and ethical issues. As trust is a multifaceted factor and AI is a novel technology in healthcare, it is essential to fully explore the various factors influencing trust and patient autonomy.
Affiliation(s)
- Turki Alanzi
- Department of Health Information Management and Technology, College of Public Health, Imam Abdulrahman Bin Faisal University, Dammam, SAU
| | - Ahlam Alhajri
- College of Agricultural and Food Sciences, King Faisal University, Al Hofuf, SAU
| | - Sara Almulhim
- College of Medicine, King Faisal University, Al Hofuf, SAU
| | - Sara Alharbi
- College of Applied Medical Sciences, Umm Al-Qura University, Makkah, SAU
| | - Samya Alfaifi
- College of Pharmacy, Umm Al-Qura University, Makkah, SAU
| | - Eslam Almarhoun
- Family Medicine, South Khobar Primary Healthcare Center, Khobar, SAU
| | - Raghad Mulla
- College of Medicine, King Abdulaziz University, Jeddah, SAU
| | | | - Zainab Alalwan
- College of Medicine, Imam Abdulrahman Bin Faisal University, Dammam, SAU
| | - Fatima Alnasser
- College of Medicine, Imam Abdulrahman Bin Faisal University, Dammam, SAU
| | - Fatima Almukhtar
- Medicine and Surgery, King Fahad University Hospital, Khobar, SAU
| | | | - Sara Amro
- Family Medicine, King Fahad Armed Forces Hospital, Jeddah, SAU
| | - Ibrahim Alodhayb
- College of Agriculture and Veterinary Medicine, Qassim University, Buraydah, SAU
| | - Nouf Alanzi
- Department of Clinical Laboratory Sciences, College of Applied Medical Sciences, Jouf University, Sakakah, SAU
31
Olawade DB, Wada OJ, David-Olawade AC, Kunonga E, Abaire O, Ling J. Using artificial intelligence to improve public health: a narrative review. Front Public Health 2023; 11:1196397. [PMID: 37954052 PMCID: PMC10637620 DOI: 10.3389/fpubh.2023.1196397]
Abstract
Artificial intelligence (AI) is a rapidly evolving tool revolutionizing many aspects of healthcare. AI has been predominantly employed in medicine and healthcare administration. However, in public health, the widespread employment of AI only began recently, with the advent of COVID-19. This review examines the advances of AI in public health and the potential challenges that lie ahead. Some of the ways AI has aided public health delivery are via spatial modeling, risk prediction, misinformation control, public health surveillance, disease forecasting, pandemic/epidemic modeling, and health diagnosis. However, the implementation of AI in public health is not universal due to factors including limited infrastructure, lack of technical understanding, data paucity, and ethical/privacy issues.
Affiliation(s)
- David B. Olawade
- Department of Allied and Public Health, School of Health, Sport and Bioscience, University of East London, London, United Kingdom
| | - Ojima J. Wada
- Division of Sustainable Development, Qatar Foundation, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
| | | | - Edward Kunonga
- School of Health and Life Sciences, Teesside University, Middlesbrough, United Kingdom
| | - Olawale Abaire
- Department of Biochemistry, Adekunle Ajasin University, Akungba-Akoko, Nigeria
| | - Jonathan Ling
- Independent Researcher, Stockton-on-Tees, United Kingdom
32
Ahun E, Demir A, Yiğit Y, Tulgar YK, Doğan M, Thomas DT, Tulgar S. Perceptions and concerns of emergency medicine practitioners about artificial intelligence in emergency triage management during the pandemic: a national survey-based study. Front Public Health 2023; 11:1285390. [PMID: 37965502 PMCID: PMC10640989 DOI: 10.3389/fpubh.2023.1285390]
Abstract
Objective There have been continuous discussions over the ethics of using AI in healthcare. We sought to identify the ethical issues and viewpoints of Turkish emergency care doctors about the use of AI during epidemic triage. Materials and methods Ten emergency specialists were initially enlisted for this project, and their responses to open-ended questions about the ethical issues surrounding AI in the emergency room provided valuable information. A 15-question survey was created based on their input and was refined through a pilot test with 15 emergency specialty doctors. Following that, the updated survey was sent to emergency specialists via email, social media, and private email distribution. Results 167 emergency medicine specialists participated in the study, with an average age of 38.22 years and 6.79 years of professional experience. The majority agreed that AI could benefit patients (54.50%) and healthcare professionals (70.06%) in emergency department triage during pandemics. Regarding responsibility, 63.47% believed in shared responsibility between emergency medicine specialists and AI manufacturers/programmers for complications. Additionally, 79.04% of participants agreed that the responsibility for complications in AI applications varies depending on the nature of the complication. Concerns about privacy were expressed by 20.36% regarding deep learning-based applications, while 61.68% believed that anonymity protected privacy. Additionally, 70.66% of participants believed that AI systems would be as sensitive as humans in terms of non-discrimination. Conclusion The potential advantages of deploying AI programs in emergency department triage during pandemics for patients and healthcare providers were acknowledged by emergency medicine doctors in Turkey. Nevertheless, they expressed notable ethical concerns related to the responsibility and accountability aspects of utilizing AI systems in this context.
Affiliation(s)
- Erhan Ahun
- Department of Emergency Medicine, Sabuncuoglu Serefeddin Training and Research Hospital, Amasya, Türkiye
| | - Ahmet Demir
- Department of Emergency Medicine, Faculty of Medicine, Mugla Sitki Kocman University, Mugla, Türkiye
| | - Yavuz Yiğit
- Department of Emergency Medicine, Hamad Medical Corporation, Doha, Qatar
- Blizard Institute, Barts and The London School of Medicine and Dentistry, Queen Mary University of London, London, United Kingdom
| | - Yasemin Koçer Tulgar
- Department of Medical History and Ethics, Samsun University Faculty of Medicine, Samsun, Türkiye
- Department of Medical History and Ethics, Kocaeli University Faculty of Medicine, Kocaeli, Türkiye
| | - Meltem Doğan
- Department of Medical History and Ethics, Kocaeli University Faculty of Medicine, Kocaeli, Türkiye
| | - David Terence Thomas
- Department of Medical Education, Maltepe University Faculty of Medicine, Istanbul, Türkiye
- Department of Pediatric Surgery, Maltepe University Faculty of Medicine, Istanbul, Türkiye
| | - Serkan Tulgar
- Department of Anesthesiology, Samsun University Faculty of Medicine, Samsun Training and Research Hospital, Samsun, Türkiye
33
Baumgartner R, Arora P, Bath C, Burljaev D, Ciereszko K, Custers B, Ding J, Ernst W, Fosch-Villaronga E, Galanos V, Gremsl T, Hendl T, Kropp C, Lenk C, Martin P, Mbelu S, Morais Dos Santos Bruss S, Napiwodzka K, Nowak E, Roxanne T, Samerski S, Schneeberger D, Tampe-Mai K, Vlantoni K, Wiggert K, Williams R. Fair and equitable AI in biomedical research and healthcare: Social science perspectives. Artif Intell Med 2023; 144:102658. [PMID: 37783540 DOI: 10.1016/j.artmed.2023.102658]
Abstract
Artificial intelligence (AI) offers opportunities but also challenges for biomedical research and healthcare. This position paper shares the results of the international conference "Fair medicine and AI" (online 3-5 March 2021). Scholars from science and technology studies (STS), gender studies, and ethics of science and technology formulated opportunities, challenges, and research and development desiderata for AI in healthcare. AI systems and solutions, which are being rapidly developed and applied, may have undesirable and unintended consequences including the risk of perpetuating health inequalities for marginalized groups. Socially robust development and implications of AI in healthcare require urgent investigation. There is a particular dearth of studies in human-AI interaction and how this may best be configured to dependably deliver safe, effective and equitable healthcare. To address these challenges, we need to establish diverse and interdisciplinary teams equipped to develop and apply medical AI in a fair, accountable and transparent manner. We formulate the importance of including social science perspectives in the development of intersectionally beneficent and equitable AI for biomedical research and healthcare, in part by strengthening AI health evaluation.
Affiliation(s)
- Renate Baumgartner
- Center of Gender- and Diversity Research, University of Tübingen, Wilhelmstrasse 56, 72074 Tübingen, Germany; Athena Institute, Vrije Universiteit Amsterdam, De Boelelaan 1085, 1081 HV Amsterdam, The Netherlands.
| | - Payal Arora
- Erasmus School of Philosophy, Erasmus University Rotterdam, Burgemeester Oudlaan 50, 3062 PA Rotterdam, The Netherlands
| | - Corinna Bath
- Gender, Technology and Mobility, Institute for Flight Guidance, TU Braunschweig, Hermann-Blenk-Str. 27, 38108 Braunschweig, Germany
| | - Darja Burljaev
- Center of Gender- and Diversity Research, University of Tübingen, Wilhelmstrasse 56, 72074 Tübingen, Germany
| | - Kinga Ciereszko
- Department of Philosophy, Adam Mickiewicz University in Poznan, Szamarzewski Street 89C, 60-569 Poznan, Poland
| | - Bart Custers
- eLaw - Center for Law and Digital Technologies, Leiden University, Steenschuur 25, 2311 ES Leiden, Netherlands
| | - Jin Ding
- iHuman and Department of Sociological Studies, University of Sheffield, ICOSS, 219 Portobello, Sheffield S1 4DP, United Kingdom
| | - Waltraud Ernst
- Institute for Women's and Gender Studies, Johannes Kepler University Linz, Altenberger Strasse 69, 4040 Linz, Austria
| | - Eduard Fosch-Villaronga
- eLaw - Center for Law and Digital Technologies, Leiden University, Steenschuur 25, 2311 ES Leiden, Netherlands
| | - Vassilis Galanos
- Science, Technology and Innovation Studies, School of Social and Political Science, University of Edinburgh, Old Surgeons' Hall, High School Yards, Edinburgh EH1 1LZ, United Kingdom
| | - Thomas Gremsl
- Institute of Ethics and Social Teaching, Faculty of Catholic Theology, University of Graz, Heinrichstraße 78b/2, 8010 Graz, Austria
| | - Tereza Hendl
- Professorship for Ethics of Medicine, University of Augsburg, Stenglinstraße 2, 86156 Augsburg, Germany; Institute of Ethics, History and Theory of Medicine, Ludwig-Maximilians-University in Munich, Lessingstr. 2, 80336 Munich, Germany
| | - Cordula Kropp
- Center for Interdisciplinary Risk and Innovation Studies (ZIRIUS), University of Stuttgart, Seidenstraße 36, 70174 Stuttgart, Germany
| | - Christian Lenk
- Institute of the History, Philosophy and Ethics of Medicine, Ulm University, Parkstraße 11, 89073 Ulm, Germany
| | - Paul Martin
- iHuman and Department of Sociological Studies, University of Sheffield, ICOSS, 219 Portobello, Sheffield S1 4DP, United Kingdom
| | - Somto Mbelu
- Erasmus School of Philosophy, Erasmus University Rotterdam, 10A Ademola Close off Remi Fani Kayode Street, GRA Ikeja, Lagos, Nigeria
| | | | - Karolina Napiwodzka
- Department of Philosophy, Adam Mickiewicz University in Poznan, Szamarzewski Street 89C, 60-569 Poznan, Poland
| | - Ewa Nowak
- Department of Philosophy, Adam Mickiewicz University in Poznan, Szamarzewski Street 89C, 60-569 Poznan, Poland
| | - Tiara Roxanne
- Data & Society Institute, 228 Park Ave S PMB 83075, New York, NY 10003-1502, United States of America
| | - Silja Samerski
- Fachbereich Soziale Arbeit und Gesundheit, Hochschule Emden/Leer, Constantiaplatz 4, 26723 Emden, Germany
| | - David Schneeberger
- Institute for Medical Informatics, Statistics and Documentation, Medical University of Graz, Auenbruggerplatz 2, 8036 Graz, Austria
| | - Karolin Tampe-Mai
- Center for Interdisciplinary Risk and Innovation Studies (ZIRIUS), University of Stuttgart, Seidenstraße 36, 70174 Stuttgart, Germany
| | - Katerina Vlantoni
- Department of History and Philosophy of Science, School of Science, National and Kapodistrian University of Athens, Panepistimioupoli, Ilisia, Athens 15771, Greece
| | - Kevin Wiggert
- Institute of Sociology, Department Sociology of Technology and Innovation, Technical University of Berlin, Fraunhoferstraße 33-36, 10623 Berlin, Germany
| | - Robin Williams
- Science, Technology and Innovation Studies, School of Social and Political Science, University of Edinburgh, Old Surgeons' Hall, High School Yards, Edinburgh EH1 1LZ, United Kingdom
34
Wang Y, Song Y, Ma Z, Han X. Multidisciplinary considerations of fairness in medical AI: A scoping review. Int J Med Inform 2023; 178:105175. [PMID: 37595374 DOI: 10.1016/j.ijmedinf.2023.105175]
Abstract
INTRODUCTION Artificial Intelligence (AI) technology has developed significantly in recent years. The fairness of medical AI is of great concern due to its direct relation to human life and health. This review aims to analyze the existing research literature on fairness in medical AI from the perspectives of computer science, medical science, and social science (including law and ethics). The objective of the review is to examine the similarities and differences in the understanding of fairness, explore influencing factors, and investigate potential measures to implement fairness in medical AI across English and Chinese literature. METHODS This study employed a scoping review methodology and searched the following databases for literature on fairness issues in medical AI published through February 2023: Web of Science, MEDLINE, PubMed, OVID, CNKI, and WANFANG Data, among others. The search was conducted using various keywords such as "artificial intelligence," "machine learning," "medical," "algorithm," "fairness," "decision-making," and "bias." The collected data were charted, synthesized, and subjected to descriptive and thematic analysis. RESULTS Of 468 English papers and 356 Chinese papers reviewed, 53 and 42, respectively, were included in the final analysis. Our results show that the three disciplines differ significantly in how they address the core issues. Data, in addition to algorithmic bias and human bias, is a foundational factor affecting fairness in medical AI. Legal, ethical, and technological measures all promote the implementation of medical AI fairness. CONCLUSIONS Our review indicates a consensus regarding the importance of data fairness as the foundation for achieving fairness in medical AI across multidisciplinary perspectives. However, there are substantial discrepancies in core aspects such as the concept, influencing factors, and implementation measures of fairness in medical AI. Consequently, future research should facilitate interdisciplinary discussions to bridge the cognitive gaps between different fields and enhance the practical implementation of fairness in medical AI.
Affiliation(s)
- Yue Wang
- School of Law, Xi'an Jiaotong University, No.28, Xianning West Road, Xi'an, Shaanxi, 710049, PR China.
| | - Yaxin Song
- School of Law, Xi'an Jiaotong University, No.28, Xianning West Road, Xi'an, Shaanxi, 710049, PR China.
| | - Zhuo Ma
- School of Law, Xi'an Jiaotong University, No.28, Xianning West Road, Xi'an, Shaanxi, 710049, PR China.
| | - Xiaoxue Han
- Xi'an Jiaotong University Library, No.28, Xianning West Road, Xi'an, Shaanxi, 710049, PR China.
35
Adebamowo CA, Callier S, Akintola S, Maduka O, Jegede A, Arima C, Ogundiran T, Adebamowo SN. The promise of data science for health research in Africa. Nat Commun 2023; 14:6084. [PMID: 37770478 PMCID: PMC10539491 DOI: 10.1038/s41467-023-41809-2]
Abstract
Data science health research promises tremendous benefits for African populations, but its implementation is fraught with substantial ethical governance risks that could thwart the delivery of these anticipated benefits. We discuss emerging efforts to build ethical governance frameworks for data science health research in Africa and the opportunities to advance these through investments by African governments and institutions, international funding organizations and collaborations for research and capacity development.
Affiliation(s)
- Clement A Adebamowo
- Department of Epidemiology and Public Health, and Greenebaum Comprehensive Cancer Center, University of Maryland School of Medicine, Baltimore, MD, USA.
- Department of Research, Center for Bioethics and Research, Ibadan, Nigeria.
| | - Shawneequa Callier
- Department of Clinical Research and Leadership, School of Medicine and Health Sciences, The George Washington University, Washington DC, USA
- Center for Research on Genomics and Global Health, National Human Genome Research Institute, National Institutes of Health, Bethesda, MD, USA
| | - Simisola Akintola
- Department of Research, Center for Bioethics and Research, Ibadan, Nigeria
- Department of Business Law, Faculty of Law, University of Ibadan, Ibadan, Nigeria
- Department of Bioethics and Medical Humanities, Faculty of Multidisciplinary Studies, University of Ibadan, Ibadan, Nigeria
| | - Oluchi Maduka
- Department of Research, Center for Bioethics and Research, Ibadan, Nigeria
| | - Ayodele Jegede
- Department of Research, Center for Bioethics and Research, Ibadan, Nigeria
- Department of Bioethics and Medical Humanities, Faculty of Multidisciplinary Studies, University of Ibadan, Ibadan, Nigeria
- Department of Sociology, University of Ibadan, Ibadan, Nigeria
| | | | - Temidayo Ogundiran
- Department of Research, Center for Bioethics and Research, Ibadan, Nigeria
- Department of Bioethics and Medical Humanities, Faculty of Multidisciplinary Studies, University of Ibadan, Ibadan, Nigeria
- Department of Surgery, College of Medicine, University of Ibadan, Ibadan, Nigeria
| | - Sally N Adebamowo
- Department of Epidemiology and Public Health, and Greenebaum Comprehensive Cancer Center, University of Maryland School of Medicine, Baltimore, MD, USA
- Department of Research, Center for Bioethics and Research, Ibadan, Nigeria
36
Reddy S. Navigating the AI Revolution: The Case for Precise Regulation in Health Care. J Med Internet Res 2023; 25:e49989. [PMID: 37695650 PMCID: PMC10520760 DOI: 10.2196/49989]
Abstract
Health care is undergoing a profound transformation through the integration of artificial intelligence (AI). However, the rapid integration and expansive growth of AI within health care systems present ethical and legal challenges that warrant careful consideration. In this viewpoint, the author argues that the health care domain, due to its complexity, requires specialized approaches to regulating AI. Precise regulation can provide clear guidelines for addressing these challenges, thereby ensuring ethical and legal AI implementations.
37
Ventura CAI, Denton EE. Artificial Intelligence Chatbots and Emergency Medical Services: Perspectives on the Implications of Generative AI in Prehospital Care. Open Access Emerg Med 2023; 15:289-292. [PMID: 37701881 PMCID: PMC10494922 DOI: 10.2147/oaem.s420764]
Abstract
Emergency Medical Services (EMS) is likely to experience transformative changes due to the rapid advancements in artificial intelligence (AI), such as OpenAI's ChatGPT. In this short commentary, we aim to preliminarily explore some profound implications of AI advancements on EMS systems and practice.
Affiliation(s)
- Christian Angelo I Ventura
- Department of Health, Behavior and Society, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA
| | - Edward E Denton
- Department of Emergency Medicine, College of Medicine, University of Arkansas for Medical Sciences, Little Rock, AR, USA
- Department of Health Policy and Management, Fay W. Boozman College of Public Health, University of Arkansas for Medical Sciences, Little Rock, AR, USA
38
Iqbal J, Cortés Jaimes DC, Makineni P, Subramani S, Hemaida S, Thugu TR, Butt AN, Sikto JT, Kaur P, Lak MA, Augustine M, Shahzad R, Arain M. Reimagining Healthcare: Unleashing the Power of Artificial Intelligence in Medicine. Cureus 2023; 15:e44658. [PMID: 37799217 PMCID: PMC10549955 DOI: 10.7759/cureus.44658]
Abstract
Artificial intelligence (AI) has opened new medical avenues and revolutionized diagnostic and therapeutic practices, allowing healthcare providers to overcome significant challenges associated with cost, disease management, accessibility, and treatment optimization. Prominent AI technologies such as machine learning (ML) and deep learning (DL) have immensely influenced diagnostics, patient monitoring, novel pharmaceutical discoveries, drug development, and telemedicine. Significant innovations and improvements in disease identification and early intervention have been made using AI-generated algorithms for clinical decision support systems and disease prediction models. AI has remarkably impacted clinical drug trials by amplifying research into drug efficacy, adverse events, and candidate molecular design. AI's precision and analysis regarding patients' genetic, environmental, and lifestyle factors have led to individualized treatment strategies. During the COVID-19 pandemic, AI-assisted telemedicine set a precedent for remote healthcare delivery and patient follow-up. Moreover, AI-generated applications and wearable devices have allowed ambulatory monitoring of vital signs. However, apart from being immensely transformative, AI's contribution to healthcare is subject to ethical and regulatory concerns. AI-backed data protection and algorithm transparency should be strictly adherent to ethical principles. Vigorous governance frameworks should be in place before incorporating AI in mental health interventions through AI-operated chatbots, medical education enhancements, and virtual reality-based training. The role of AI in medical decision-making has certain limitations, necessitating the importance of hands-on experience. Therefore, reaching an optimal balance between AI's capabilities and ethical considerations to ensure impartial and neutral performance in healthcare applications is crucial. This narrative review focuses on AI's impact on healthcare and the importance of ethical and balanced incorporation to make use of its full potential.
Affiliation(s)
| | - Diana Carolina Cortés Jaimes
- Epidemiology, Universidad Autónoma de Bucaramanga, Bucaramanga, COL
- Medicine, Pontificia Universidad Javeriana, Bogotá, COL
| | - Pallavi Makineni
- Medicine, All India Institute of Medical Sciences, Bhubaneswar, Bhubaneswar, IND
| | - Sachin Subramani
- Medicine and Surgery, Employees' State Insurance Corporation (ESIC) Medical College, Gulbarga, IND
| | - Sarah Hemaida
- Internal Medicine, Istanbul Okan University, Istanbul, TUR
| | - Thanmai Reddy Thugu
- Internal Medicine, Sri Padmavathi Medical College for Women, Sri Venkateswara Institute of Medical Sciences (SVIMS), Tirupati, IND
| | - Amna Naveed Butt
- Medicine/Internal Medicine, Allama Iqbal Medical College, Lahore, PAK
| | | | - Pareena Kaur
- Medicine, Punjab Institute of Medical Sciences, Jalandhar, IND
| | | | | | - Roheen Shahzad
- Medicine, Combined Military Hospital (CMH) Lahore Medical College and Institute of Dentistry, Lahore, PAK
| | - Mustafa Arain
- Internal Medicine, Civil Hospital Karachi, Karachi, PAK
39
Cho MK, Martinez-Martin N. Epistemic Rights and Responsibilities of Digital Simulacra for Biomedicine. The American Journal of Bioethics: AJOB 2023; 23:43-54. [PMID: 36507873 PMCID: PMC10258225 DOI: 10.1080/15265161.2022.2146785]
Abstract
Big data and AI have enabled digital simulation for prediction of future health states or behaviors of specific individuals, populations, or humans in general. "Digital simulacra" use multimodal datasets to develop computational models that are virtual representations of people or groups, generating predictions of how systems evolve and react to interventions over time. These include digital twins and virtual patients for in silico clinical trials, both of which seek to transform research and health care by speeding innovation and bridging the epistemic gap between population-based research findings and their application to the individual. Nevertheless, digital simulacra mark a major milestone on a trajectory toward embracing the epistemic culture of data science and potentially abandoning medical epistemological concepts of causality and representation. In doing so, "data first" approaches potentially shift moral attention from actual patients and principles, such as equity, to simulated patients and patient data.
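To make the notion of simulation-based prediction concrete, the toy sketch below steps a single invented biomarker forward in time for a "virtual patient" with and without an intervention. The model, parameters, and function names are hypothetical illustrations of the general idea, not methods from the cited article.

```python
# Highly simplified, hypothetical "virtual patient" sketch: a one-compartment
# model whose state drifts over time and responds to a simulated intervention,
# loosely illustrating how digital simulacra generate predictions of how a
# system evolves and reacts to interventions. Parameters and dynamics are
# invented and have no clinical validity.
import numpy as np

def simulate_virtual_patient(baseline: float, drift: float, dose_effect: float,
                             intervene_at: int, steps: int = 50) -> np.ndarray:
    """Return a biomarker trajectory with one intervention applied at `intervene_at`."""
    trajectory = np.empty(steps)
    level = baseline
    for t in range(steps):
        level += drift * level          # untreated progression
        if t == intervene_at:
            level -= dose_effect        # instantaneous effect of the intervention
        trajectory[t] = level
    return trajectory

# A toy "in silico trial": the same virtual patient with and without treatment.
treated = simulate_virtual_patient(baseline=10.0, drift=0.02, dose_effect=4.0, intervene_at=10)
untreated = simulate_virtual_patient(baseline=10.0, drift=0.02, dose_effect=0.0, intervene_at=10)
print(f"Biomarker at final step: treated={treated[-1]:.2f}, untreated={untreated[-1]:.2f}")
```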
Collapse
|
40
|
Reis LO. ChatGPT for medical applications and urological science. Int Braz J Urol 2023; 49:652-656. [PMID: 37338818 PMCID: PMC10482461 DOI: 10.1590/s1677-5538.ibju.2023.0112] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2023] [Accepted: 04/30/2023] [Indexed: 06/21/2023] Open
Affiliation(s)
- Leonardo O. Reis
- UroScience and Department of Urology, Faculdade de Ciências Médicas, Universidade Estadual de Campinas (UNICAMP), Campinas, São Paulo, Brazil
- Department of Immuno-oncology, Faculdade de Ciências da Vida, Pontifícia Universidade Católica de Campinas (PUC-Campinas), Campinas, São Paulo, Brazil
| |
Collapse
|
41
|
Blutt SE, Coarfa C, Neu J, Pammi M. Multiomic Investigations into Lung Health and Disease. Microorganisms 2023; 11:2116. [PMID: 37630676 PMCID: PMC10459661 DOI: 10.3390/microorganisms11082116] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2023] [Revised: 08/08/2023] [Accepted: 08/13/2023] [Indexed: 08/27/2023] Open
Abstract
Diseases of the lung account for more than 5 million deaths worldwide and represent a substantial healthcare burden. Improving clinical outcomes, including mortality and quality of life, requires a holistic understanding of disease, which can be provided by the integration of lung multiomics data. A deeper understanding of comprehensive multiomic datasets creates opportunities to leverage them to inform the treatment and prevention of lung diseases by classifying disease severity, supporting prognostication, and discovering biomarkers. The main objective of this review is to summarize the use of multiomics investigations in lung disease, including multiomics integration and the use of machine learning computational methods. This review also discusses lung disease models, including animal models, organoids, and single-cell lines, used to study multiomics in lung health and disease. We provide examples of lung diseases in which multiomics investigations have provided deeper insight into etiopathogenesis and have resulted in improved preventative and therapeutic interventions.
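As a rough, hypothetical illustration of the "multiomics plus machine learning" strategy this review surveys, the sketch below concatenates two synthetic omics layers (early integration) and cross-validates a classifier; the feature counts, data, and model choice are assumptions for illustration only and are not taken from the review.

```python
# Illustrative sketch only: early (concatenation-based) integration of two
# synthetic omics layers followed by a cross-validated classifier, one common
# computational strategy in multiomics analyses. Data, feature counts, and the
# model choice are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_samples = 100
transcriptomics = rng.normal(size=(n_samples, 500))   # e.g., gene expression features
metabolomics = rng.normal(size=(n_samples, 80))        # e.g., metabolite abundances
labels = rng.integers(0, 2, size=n_samples)            # disease vs. control

# Early integration: concatenate the feature blocks for each sample.
X = np.hstack([transcriptomics, metabolomics])

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(model, X, labels, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f} (chance level expected on random data)")
```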
Collapse
Affiliation(s)
- Sarah E. Blutt
- Department of Molecular Virology and Microbiology, Baylor College of Medicine, Houston, TX 77030, USA;
- Department of Molecular and Cellular Biology, Baylor College of Medicine, Houston, TX 77030, USA;
| | - Cristian Coarfa
- Department of Molecular and Cellular Biology, Baylor College of Medicine, Houston, TX 77030, USA;
- Dan L Duncan Comprehensive Cancer Center, Baylor College of Medicine, Houston, TX 77030, USA
| | - Josef Neu
- Department of Pediatrics, Section of Neonatology, University of Florida, Gainesville, FL 32611, USA;
| | - Mohan Pammi
- Department of Pediatrics, Section of Neonatology, Baylor College of Medicine and Texas Children’s Hospital, Houston, TX 77030, USA
| |
Collapse
|
42
|
Pozzi G. Testimonial injustice in medical machine learning. JOURNAL OF MEDICAL ETHICS 2023; 49:536-540. [PMID: 36635066 DOI: 10.1136/jme-2022-108630] [Citation(s) in RCA: 13] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/05/2022] [Accepted: 01/02/2023] [Indexed: 06/17/2023]
Abstract
Machine learning (ML) systems play an increasingly relevant role in medicine and healthcare. As their applications move ever closer to patient care and cure in clinical settings, ethical concerns about the responsibility of their use come to the fore. I analyse an aspect of responsible ML use that bears not only an ethical but also a significant epistemic dimension. I focus on ML systems' role in mediating patient-physician relations. I thereby consider how ML systems may silence patients' voices and relativise the credibility of their opinions, undermining their overall credibility status without valid moral or epistemic justification. More specifically, I argue that withholding credibility because of how ML systems operate can be particularly harmful to patients and, apart from adverse outcomes, qualifies as a form of testimonial injustice. I make my case for testimonial injustice in medical ML by considering ML systems currently used in the USA to predict patients' risk of misusing opioids (automated Prediction Drug Monitoring Programmes, PDMPs for short). I argue that the locus of testimonial injustice in ML-mediated medical encounters lies in the fact that these systems are treated as markers of trustworthiness against which patients' credibility is assessed. I further show how ML-based PDMPs exacerbate and further propagate social inequalities at the expense of vulnerable social groups.
Collapse
Affiliation(s)
- Giorgia Pozzi
- Technology, Policy and Management, Delft University of Technology, Delft, The Netherlands
| |
Collapse
|
43
|
Ma M, Li Y, Gao L, Xie Y, Zhang Y, Wang Y, Zhao L, Liu X, Jiang D, Fan C, Wang Y, Demuyakor I, Jiao M, Li Y. The need for digital health education among next-generation health workers in China: a cross-sectional survey on digital health education. BMC MEDICAL EDUCATION 2023; 23:541. [PMID: 37525126 PMCID: PMC10388510 DOI: 10.1186/s12909-023-04407-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/23/2022] [Accepted: 05/26/2023] [Indexed: 08/02/2023]
Abstract
BACKGROUND Digital health is important for sustainable health systems and universal health coverage. Since the outbreak of COVID-19, many countries, including China, have promoted the introduction of digital health in their medical services. Developing the next generation of physicians with digital health knowledge and skills is a prerequisite for maximizing the potential of digital health. OBJECTIVE We aimed to understand the perception of digital health among Chinese medical students, the current implementation of digital health education in China, and the urgent needs of medical students. METHODS Our cross-sectional survey was conducted online and anonymously among current medical students in China. We used descriptive statistical analysis to examine participant demographic characteristics and the demand for digital health education. Additional analysis was conducted by grouping responses by current participation in a digital health course. RESULTS A total of 2122 valid responses were received from 467 medical schools. Most medical students had positive expectations that digital health would change the future of medicine. Compared with wearable devices (85.53%), telemedicine (84.16%), and medical big data (86.38%), fewer respondents believed in the benefits of clinical decision support systems (CDSS) (63.81%). Most respondents said they urgently needed digital health knowledge and skills, and practical training and internships (78.02%) were a more popular teaching method than traditional lectures (10.54%). However, only 41.45% wanted to learn about the ethical and legal issues surrounding digital health. CONCLUSIONS Our study shows that the current needs of Chinese medical students for digital health education remain unmet. A national initiative on digital health education is necessary, and attention should be paid to digital health equity and education globally, with a focus on CDSS and artificial intelligence. Ethics knowledge must also be included in the medical curriculum. Students as Partners (SAP) is a promising approach for designing digital health courses.
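For readers unfamiliar with this kind of survey analysis, a minimal sketch of descriptive statistics grouped by course participation might look as follows; the column names and values are invented for illustration and are not the study's data.

```python
# Hypothetical sketch of the kind of descriptive analysis the survey reports:
# agreement proportions overall and grouped by whether respondents currently
# participate in a digital health course. All data and column names are invented.
import pandas as pd

responses = pd.DataFrame({
    "in_digital_health_course": [True, False, False, True, False, True],
    "believes_cdss_beneficial": [1, 0, 1, 1, 0, 1],       # 1 = agrees CDSS is beneficial
    "prefers_practical_training": [1, 1, 0, 1, 1, 1],     # 1 = prefers internships/practical training
})

# Overall agreement rates (descriptive statistics).
print(responses[["believes_cdss_beneficial", "prefers_practical_training"]].mean())

# Agreement rates grouped by current course participation.
print(responses.groupby("in_digital_health_course").mean())
```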
Collapse
Affiliation(s)
- Mingxue Ma
- Harbin Medical University, 157 Baojian Road, Nangang District, Harbin, 150086, Heilongjiang, China
| | - Yuanheng Li
- Harbin Medical University, 157 Baojian Road, Nangang District, Harbin, 150086, Heilongjiang, China
| | - Lei Gao
- Harbin Medical University, 157 Baojian Road, Nangang District, Harbin, 150086, Heilongjiang, China
| | - Yuzhuo Xie
- Harbin Medical University, 157 Baojian Road, Nangang District, Harbin, 150086, Heilongjiang, China
| | - Yuwei Zhang
- Harbin Medical University, 157 Baojian Road, Nangang District, Harbin, 150086, Heilongjiang, China
| | - Yazhou Wang
- Harbin Medical University, 157 Baojian Road, Nangang District, Harbin, 150086, Heilongjiang, China
| | - Lu Zhao
- Harbin Medical University, 157 Baojian Road, Nangang District, Harbin, 150086, Heilongjiang, China
| | - Xinyan Liu
- Harbin Medical University, 157 Baojian Road, Nangang District, Harbin, 150086, Heilongjiang, China
| | - Deyou Jiang
- Heilongjiang University of Traditional Chinese Medicine, 24 Heping Road, Xiangfang District, Harbin, 150006, Heilongjiang, China
| | - Chao Fan
- Harbin Medical University, 157 Baojian Road, Nangang District, Harbin, 150086, Heilongjiang, China
| | - Yushu Wang
- Harbin Medical University, 157 Baojian Road, Nangang District, Harbin, 150086, Heilongjiang, China
| | - Isaac Demuyakor
- Harbin Medical University, 157 Baojian Road, Nangang District, Harbin, 150086, Heilongjiang, China
| | - Mingli Jiao
- Harbin Medical University, 157 Baojian Road, Nangang District, Harbin, 150086, Heilongjiang, China.
| | - Ye Li
- Harbin Medical University, 157 Baojian Road, Nangang District, Harbin, 150086, Heilongjiang, China.
| |
Collapse
|
44
|
Grassini S. Development and validation of the AI attitude scale (AIAS-4): a brief measure of general attitude toward artificial intelligence. Front Psychol 2023; 14:1191628. [PMID: 37554139 PMCID: PMC10406504 DOI: 10.3389/fpsyg.2023.1191628] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2023] [Accepted: 06/16/2023] [Indexed: 08/10/2023] Open
Abstract
The rapid advancement of artificial intelligence (AI) has generated an increasing demand for tools that can assess public attitudes toward AI. This study describes the development and validation of the AI Attitude Scale (AIAS), a concise self-report instrument designed to evaluate public perceptions of AI technology. The first version of the AIAS proposed in this manuscript comprises five items, including one reverse-scored item, and aims to gauge individuals' beliefs about AI's influence on their lives, careers, and humanity overall. The scale is designed to capture attitudes toward AI, focusing on the perceived utility and potential impact of the technology on society and humanity. The psychometric properties of the scale were investigated using diverse samples in two separate studies. An exploratory factor analysis was initially conducted on the preliminary 5-item version of the scale. This exploratory validation study suggested dividing the scale into two factors. While the results demonstrated satisfactory internal consistency for the overall scale and its correlation with related psychometric measures, separate analyses for each factor showed robust internal consistency for Factor 1 but insufficient internal consistency for Factor 2. As a result, a second version of the scale was developed and validated, omitting the item that displayed weak correlation with the remaining items in the questionnaire. The refined, final 1-factor, 4-item AIAS demonstrated superior overall internal consistency compared with the initial 5-item scale and the proposed factors. Further confirmatory factor analyses, performed on a different sample of participants, confirmed that the 1-factor, 4-item model of the AIAS exhibited an adequate fit to the data, providing additional evidence for the scale's structural validity and generalizability across diverse populations. In conclusion, the analyses reported in this article suggest that the validated 4-item AIAS can be a valuable instrument for researchers and professionals working on AI development who seek to understand and study users' general attitudes toward AI.
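The internal-consistency analysis referred to here can be sketched in a few lines; the simulated item scores and the reverse-scoring step below are illustrative assumptions, not the AIAS dataset.

```python
# Minimal sketch of two steps the abstract describes: reverse-scoring an item
# and estimating internal consistency with Cronbach's alpha. The item scores
# are simulated (a 1-10 response scale is assumed); with purely random data,
# alpha will be near zero, whereas correlated real scale items would score higher.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(1)
scale_max = 10
scores = rng.integers(1, scale_max + 1, size=(200, 4)).astype(float)  # 200 respondents, 4 items

# A reverse-scored item (as in the initial 5-item AIAS) would be recoded before
# the reliability analysis, e.g.:
# scores[:, reverse_item_index] = (scale_max + 1) - scores[:, reverse_item_index]

print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")
```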
Collapse
Affiliation(s)
- Simone Grassini
- Department of Psychosocial Science, University of Bergen, Bergen, Norway
- Cognitive and Behavioral Neuroscience Lab, University of Stavanger, Stavanger, Norway
| |
Collapse
|
45
|
McKay F, Williams BJ, Prestwich G, Bansal D, Treanor D, Hallowell N. Artificial intelligence and medical research databases: ethical review by data access committees. BMC Med Ethics 2023; 24:49. [PMID: 37422629 PMCID: PMC10329342 DOI: 10.1186/s12910-023-00927-8] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2022] [Accepted: 06/22/2023] [Indexed: 07/10/2023] Open
Abstract
BACKGROUND It has been argued that ethics review committees (e.g., Research Ethics Committees, Institutional Review Boards) have weaknesses in reviewing big data and artificial intelligence research. For instance, they may, due to the novelty of the area, lack the relevant expertise for judging the collective risks and benefits of such research, or they may exempt it from review in instances involving de-identified data. MAIN BODY Focusing on the example of medical research databases, we highlight ethical issues around de-identified data sharing that motivate the need for review where oversight by ethics committees is weak. Though some argue for ethics committee reform to overcome these weaknesses, it is unclear whether or when that will happen. Hence, we argue that ethical review can be done by data access committees, since they have de facto purview of big data and artificial intelligence projects, relevant technical expertise and governance knowledge, and already take on some functions of ethical review. That said, like ethics committees, they may have functional weaknesses in their review capabilities. To strengthen that function, data access committees must think clearly about the kinds of ethical expertise, both professional and lay, that they draw upon to support their work. CONCLUSION Data access committees can undertake ethical review of medical research databases provided they enhance that review function through professional and lay ethical expertise.
Collapse
Affiliation(s)
- Francis McKay
- Population Health Sciences Institute, University of Newcastle, NE2 4AX Newcastle Upon Tyne, UK
| | - Bethany J. Williams
- National Pathology Imaging Co-operative, Leeds Teaching Hospitals NHS Trust, Leeds, LS9 7TF UK
| | - Graham Prestwich
- Yorkshire and Humber Academic Health Science Network, Unit 1, Calder Close, Calder Park, Wakefield, WF4 3BA UK
| | - Daljeet Bansal
- National Pathology Imaging Co-operative, Leeds Teaching Hospitals NHS Trust, Leeds, LS9 7TF UK
| | - Darren Treanor
- National Pathology Imaging Co-operative, Leeds Teaching Hospitals NHS Trust, Leeds, LS9 7TF UK
- Department of Pathology, University of Leeds, Leeds, UK
- Department of Clinical Pathology, Linköping University, Linköping, Sweden
- Center for Medical Image Science and Visualization (CMIV), Linköping University, Linköping, Sweden
| | - Nina Hallowell
- The Ethox Centre and the Wellcome Centre for Ethics and Humanities, Nuffield Department of Population Health, University of Oxford, Oxford, OX3 7LF UK
| |
Collapse
|
46
|
Gedefaw L, Liu CF, Ip RKL, Tse HF, Yeung MHY, Yip SP, Huang CL. Artificial Intelligence-Assisted Diagnostic Cytology and Genomic Testing for Hematologic Disorders. Cells 2023; 12:1755. [PMID: 37443789 PMCID: PMC10340428 DOI: 10.3390/cells12131755] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2023] [Revised: 06/21/2023] [Accepted: 06/28/2023] [Indexed: 07/15/2023] Open
Abstract
Artificial intelligence (AI) is a rapidly evolving field of computer science that involves the development of computational programs that can mimic human intelligence. In particular, machine learning and deep learning models have enabled the identification and grouping of patterns within data, leading to the development of AI systems that have been applied in various areas of hematology, including digital pathology, alpha thalassemia patient screening, cytogenetics, immunophenotyping, and sequencing. These AI-assisted methods have shown promise in improving diagnostic accuracy and efficiency, identifying novel biomarkers, and predicting treatment outcomes. However, limitations such as limited databases, lack of validation and standardization, systematic errors, and bias prevent AI from completely replacing manual diagnosis in hematology. In addition, the processing of large amounts of patient data and personal information by AI poses potential data privacy issues, necessitating the development of regulations to evaluate AI systems and address ethical concerns in clinical AI systems. Nonetheless, with continued research and development, AI has the potential to revolutionize the field of hematology and improve patient outcomes. To fully realize this potential, however, the challenges facing AI in hematology must be addressed and overcome.
Collapse
Affiliation(s)
- Lealem Gedefaw
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China; (L.G.); (C.-F.L.); (M.H.Y.Y.)
| | - Chia-Fei Liu
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China; (L.G.); (C.-F.L.); (M.H.Y.Y.)
| | - Rosalina Ka Ling Ip
- Department of Pathology, Pamela Youde Nethersole Eastern Hospital, Hong Kong, China; (R.K.L.I.); (H.-F.T.)
| | - Hing-Fung Tse
- Department of Pathology, Pamela Youde Nethersole Eastern Hospital, Hong Kong, China; (R.K.L.I.); (H.-F.T.)
| | - Martin Ho Yin Yeung
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China; (L.G.); (C.-F.L.); (M.H.Y.Y.)
| | - Shea Ping Yip
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China; (L.G.); (C.-F.L.); (M.H.Y.Y.)
| | - Chien-Ling Huang
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China; (L.G.); (C.-F.L.); (M.H.Y.Y.)
| |
Collapse
|
47
|
Walsh G, Stogiannos N, van de Venter R, Rainey C, Tam W, McFadden S, McNulty JP, Mekis N, Lewis S, O'Regan T, Kumar A, Huisman M, Bisdas S, Kotter E, Pinto dos Santos D, Sá dos Reis C, van Ooijen P, Brady AP, Malamateniou C. Responsible AI practice and AI education are central to AI implementation: a rapid review for all medical imaging professionals in Europe. BJR Open 2023; 5:20230033. [PMID: 37953871 PMCID: PMC10636340 DOI: 10.1259/bjro.20230033] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2023] [Revised: 05/27/2023] [Accepted: 05/30/2023] [Indexed: 11/14/2023] Open
Abstract
Artificial intelligence (AI) has transitioned from the lab to the bedside, and it is increasingly being used in healthcare. Radiology and radiography are on the frontline of AI implementation because of the use of big data for medical imaging and diagnosis across different patient groups. Safe and effective AI implementation requires that responsible and ethical practices are upheld by all key stakeholders, that different professional groups collaborate harmoniously, and that educational provisions are customised for all involved. This paper outlines key principles of ethical and responsible AI, highlights recent educational initiatives for clinical practitioners and discusses the synergies between all medical imaging professionals as they prepare for the digital future in Europe. Responsible and ethical AI is vital to enhance a culture of safety and trust for healthcare professionals and patients alike. Educational and training provisions on AI for medical imaging professionals are central to the understanding of basic AI principles and applications, and many offerings currently exist in Europe. Education can facilitate the transparency of AI tools, but more formalised, university-led training is needed to ensure academic scrutiny, appropriate pedagogy, multidisciplinarity and customisation to learners' unique needs. As radiographers and radiologists work together and with other professionals to understand and harness the benefits of AI in medical imaging, it becomes clear that they face the same challenges and have the same needs. The digital future belongs to multidisciplinary teams that work seamlessly together, learn together, manage risk collectively and collaborate for the benefit of the patients they serve.
Collapse
Affiliation(s)
- Gemma Walsh
- Division of Midwifery & Radiography, City University of London, London, United Kingdom
| | | | | | - Clare Rainey
- School of Health Sciences, Ulster University, Derry~Londonderry, Northern Ireland
| | - Winnie Tam
- Division of Midwifery & Radiography, City University of London, London, United Kingdom
| | - Sonyia McFadden
- School of Health Sciences, Ulster University, Coleraine, United Kingdom
| | | | - Nejc Mekis
- Medical Imaging and Radiotherapy Department, University of Ljubljana, Faculty of Health Sciences, Ljubljana, Slovenia
| | - Sarah Lewis
- Discipline of Medical Imaging Science, Sydney School of Health Sciences, Faculty of Medicine and Health, University of Sydney, Sydney, Australia
| | - Tracy O'Regan
- The Society and College of Radiographers, London, United Kingdom
| | - Amrita Kumar
- Frimley Health NHS Foundation Trust, Frimley, United Kingdom
| | - Merel Huisman
- Department of Radiology, University Medical Center Utrecht, Utrecht, Netherlands
| | | | | | | | - Cláudia Sá dos Reis
- School of Health Sciences (HESAV), University of Applied Sciences and Arts Western Switzerland (HES-SO), Lausanne, Switzerland
| | | | | | | |
Collapse
|
48
|
Greenberg ZF, Graim KS, He M. Towards artificial intelligence-enabled extracellular vesicle precision drug delivery. Adv Drug Deliv Rev 2023:114974. [PMID: 37356623 DOI: 10.1016/j.addr.2023.114974] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2023] [Revised: 06/21/2023] [Accepted: 06/22/2023] [Indexed: 06/27/2023]
Abstract
Extracellular vesicles (EVs), particularly exosomes, have recently emerged in nanomedicine as a promising drug delivery approach owing to their superior biocompatibility, circulating stability, and bioavailability in vivo. However, EV heterogeneity makes precise molecular targeting a critical challenge, and deciphering the key molecular drivers that control EV tissue-targeting specificity is greatly needed. Artificial intelligence (AI) brings powerful predictive capability to guide the rational design of engineered EVs for precisely controlled drug delivery. This review focuses on cutting-edge nano-delivery achieved by integrating large-scale EV data with AI to develop AI-directed EV therapies, and it highlights their clinical translation potential. We briefly review the current status of EVs in drug delivery, including the current frontier, limitations, and considerations for advancing the field. Subsequently, we detail the future of AI in drug delivery and its impact on precision EV delivery. We also discuss the universal challenge of standardization and critical considerations when combining AI with EVs for precision drug delivery. Finally, we conclude with a perspective on future clinical translation led by a combined effort of AI and EV research.
Collapse
Affiliation(s)
- Zachary F Greenberg
- Department of Pharmaceutics, College of Pharmacy, University of Florida, Gainesville, Florida, 32610, USA
| | - Kiley S Graim
- Department of Computer & Information Science & Engineering, Herbert Wertheim College of Engineering, University of Florida, Gainesville, Florida, 32610, USA
| | - Mei He
- Department of Pharmaceutics, College of Pharmacy, University of Florida, Gainesville, Florida, 32610, USA.
| |
Collapse
|
49
|
Rockwell HD, Cyphers ED, Makary MS, Keller EJ. Ethical Considerations for Artificial Intelligence in Interventional Radiology: Balancing Innovation and Patient Care. Semin Intervent Radiol 2023; 40:323-326. [PMID: 37484438 PMCID: PMC10359128 DOI: 10.1055/s-0043-1769905] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 07/25/2023]
Affiliation(s)
- Helena D. Rockwell
- School of Medicine, University of California, San Diego, La Jolla, California
| | - Eric D. Cyphers
- Department of Bioethics, Columbia University, New York, New York
- Philadelphia College of Osteopathic Medicine, Philadelphia, Pennsylvania
| | - Mina S. Makary
- Division of Interventional Radiology, Department of Radiology, The Ohio State University, Columbus, Ohio
| | - Eric J. Keller
- Division of Interventional Radiology, Department of Radiology, Stanford University Medical Center, Stanford, California
| |
Collapse
|
50
|
Hallowell N, Badger S, McKay F, Kerasidou A, Nellåker C. Democratising or disrupting diagnosis? Ethical issues raised by the use of AI tools for rare disease diagnosis. SSM. QUALITATIVE RESEARCH IN HEALTH 2023; 3:100240. [PMID: 37426704 PMCID: PMC10323712 DOI: 10.1016/j.ssmqr.2023.100240] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/12/2022] [Revised: 02/13/2023] [Accepted: 02/13/2023] [Indexed: 07/11/2023]
Abstract
Computational phenotyping (CP) technology uses facial recognition algorithms to classify and potentially diagnose rare genetic disorders on the basis of digitised facial images. This AI technology has a number of research as well as clinical applications, such as supporting diagnostic decision-making. Using the example of CP, we examine stakeholders' views of the benefits and costs of using AI as a diagnostic tool within the clinic. Through a series of in-depth interviews (n = 20) with clinicians, clinical researchers, data scientists, and industry and support group representatives, we report stakeholder views regarding the adoption of this technology in a clinical setting. While most interviewees were supportive of employing CP as a diagnostic tool in some capacity, we observed ambivalence about the potential for artificial intelligence to overcome diagnostic uncertainty in a clinical context. There was widespread agreement amongst interviewees concerning the public benefits of AI-assisted diagnosis, namely its potential to increase diagnostic yield and enable faster, more objective and accurate diagnoses by upskilling non-specialists, thereby widening access to diagnosis where it is currently lacking. However, interviewees also raised concerns about ensuring algorithmic reliability, expunging algorithmic bias, and the possibility that the use of AI could deskill the specialist clinical workforce. We conclude that, prior to widespread clinical implementation, ongoing reflection is needed regarding the trade-offs required to determine acceptable levels of bias, and that diagnostic AI tools should only be employed as an assistive technology within the dysmorphology clinic.
Collapse
Affiliation(s)
- Nina Hallowell
- The Ethox Centre and Wellcome Centre for Ethics & Humanities, Nuffield Department of Population Health and Big Data Institute, University of Oxford, UK
| | | | - Francis McKay
- The Ethox Centre and Wellcome Centre for Ethics & Humanities, Nuffield Department of Population Health and Big Data Institute, University of Oxford, UK
| | - Angeliki Kerasidou
- The Ethox Centre and Wellcome Centre for Ethics & Humanities, Nuffield Department of Population Health and Big Data Institute, University of Oxford, UK
| | - Christoffer Nellåker
- Nuffield Department of Women's and Reproductive Health and Big Data Institute, University of Oxford, UK
| |
Collapse
|