1
Pearce A, Carter S, Frazer HML, Houssami N, Macheras‐Magias M, Webb G, Marinovich ML. Implementing artificial intelligence in breast cancer screening: Women's preferences. Cancer 2025; 131:e35859. [PMID: 40262029] [PMCID: PMC12013981] [DOI: 10.1002/cncr.35859] [Received: 10/16/2024] [Revised: 02/20/2025] [Accepted: 03/14/2025] [Indexed: 04/24/2025]
Abstract
BACKGROUND Artificial intelligence (AI) could improve accuracy and efficiency of breast cancer screening. However, many women distrust AI in health care, potentially jeopardizing breast cancer screening participation rates. The aim was to quantify community preferences for models of AI implementation within breast cancer screening. METHODS An online discrete choice experiment survey of people eligible for breast cancer screening aged 40 to 74 years in Australia. Respondents answered 10 questions where they chose between two screening options created by an experimental design. Each screening option described the role of AI (supplementing current practice, replacing one radiologist, replacing both radiologists, or triaging), and the AI accuracy, ownership, representativeness, privacy, and waiting time. Analysis included conditional and latent class models, willingness-to-pay, and predicted screening uptake. RESULTS The 802 participants preferred screening where AI was more accurate, Australian owned, more representative and had shorter waiting time for results (all p < .001). There were strong preferences (p < .001) against AI alone or as triage. Three patterns of preferences emerged: positive about AI if accuracy improves (40% of sample), strongly against AI (42%), and concerned about AI (18%). Participants were willing to accept AI replacing one human reader if their results were available 10 days faster than current practice but would need results 21 days faster for AI as triage. Implementing AI inconsistent with community preferences could reduce participation by up to 22%.
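The trade-offs reported above follow from standard discrete choice analysis: the days of faster results needed to offset an unwanted AI role is the marginal rate of substitution between that attribute's conditional logit coefficient and the waiting-time coefficient. A minimal sketch in Python, using hypothetical coefficients chosen so that the implied trade-offs line up with the 10- and 21-day figures reported in the abstract (they are not the study's estimates):

```python
# Willingness-to-wait as a marginal rate of substitution from a
# conditional logit model. All coefficients are hypothetical.

def days_faster_required(beta_attribute: float, beta_wait: float) -> float:
    """Days of faster results needed to offset the attribute's disutility.

    Utility is linear: U = beta_attribute * x + beta_wait * wait_days,
    so beta_attribute + beta_wait * delta_wait = 0 at indifference,
    and the required speed-up is -delta_wait = beta_attribute / beta_wait.
    """
    return beta_attribute / beta_wait

BETA_WAIT = -0.05  # utility per extra day of waiting (waiting is disliked)
scenarios = {
    "AI replaces one radiologist": -0.50,  # disutility vs. current practice
    "AI as triage": -1.05,                 # stronger disutility
}
for scenario, beta in scenarios.items():
    print(f"{scenario}: {days_faster_required(beta, BETA_WAIT):.0f} days faster")
```

With these illustrative coefficients the sketch prints 10 days for AI replacing one reader and 21 days for AI as triage, mirroring the abstract's reported trade-offs.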
Affiliation(s)
- Alison Pearce
- The Daffodil Centre, The University of Sydney, a Joint Venture with Cancer Council New South Wales, Sydney, New South Wales, Australia
- Sydney School of Public Health, The University of Sydney, Sydney, New South Wales, Australia
- Stacy Carter
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, University of Wollongong, Wollongong, New South Wales, Australia
- Helen ML Frazer
- St Vincent's Hospital Melbourne, Fitzroy, Victoria, Australia
- BreastScreen Victoria, Carlton, Victoria, Australia
- Nehmat Houssami
- The Daffodil Centre, The University of Sydney, a Joint Venture with Cancer Council New South Wales, Sydney, New South Wales, Australia
- Sydney School of Public Health, The University of Sydney, Sydney, New South Wales, Australia
- Mary Macheras‐Magias
- Seat at the Table representative, Breast Cancer Network Australia, Camberwell, Victoria, Australia
- Genevieve Webb
- Health Consumers New South Wales, Sydney, New South Wales, Australia
- M. Luke Marinovich
- The Daffodil Centre, The University of Sydney, a Joint Venture with Cancer Council New South Wales, Sydney, New South Wales, Australia
- Sydney School of Public Health, The University of Sydney, Sydney, New South Wales, Australia
2
Robles-Medranda C, Verpalen I, Schulz D, Spadaccini M. Artificial Intelligence in Biliopancreatic Disorders: Applications in Cross-Imaging and Endoscopy. Gastroenterology 2025:S0016-5085(25)00648-1. [PMID: 40311821] [DOI: 10.1053/j.gastro.2025.04.011] [Received: 11/25/2024] [Revised: 03/15/2025] [Accepted: 04/11/2025] [Indexed: 05/03/2025]
Abstract
This review explores the transformative potential of artificial intelligence (AI) in the diagnosis and management of biliopancreatic disorders. By leveraging cutting-edge techniques, such as deep learning and convolutional neural networks, AI has significantly advanced gastroenterology, particularly in endoscopic procedures such as colonoscopy, upper endoscopy, and capsule endoscopy. These applications enhance adenoma detection rates and improve lesion characterization and diagnostic accuracy. AI's integration in cross-sectional imaging modalities, such as computed tomography and magnetic resonance imaging, has remarkable potential. Models have demonstrated high accuracy in identifying pancreatic ductal adenocarcinoma, pancreatic cystic lesions, and pancreatic neuroendocrine tumors, aiding in early diagnosis, resectability assessment, and personalized treatment planning. In advanced endoscopic procedures, such as digital single-operator cholangioscopy and endoscopic ultrasound, AI enhances anatomic recognition and improves lesion classification, with a potential for reduction in procedural variability, enabling more consistent diagnostic and therapeutic outcomes. Promising applications in biliopancreatic endoscopy include the detection of biliary stenosis, classification of dysplastic precursor lesions, and assessment of pancreatic abnormalities. This review aims to capture the current state of AI application in biliopancreatic disorders, summarizing the results of early studies and paving the path for future directions.
Affiliation(s)
- Inez Verpalen
- Amsterdam University Medical Center, Amsterdam, The Netherlands
- Marco Spadaccini
- Department of Biomedical Sciences, Humanitas University, Pieve Emanuele, Italy; Humanitas Clinical and Research Center, Endoscopy Unit, Scientific Institute for Research, Hospitalization and Healthcare, Rozzano, Italy
3
J BR, Sood A, Pattnaik T, Malhotra R, Nayyar V, Narayan B, Mishra D, Surya V. Medical imaging privacy: A systematic scoping review of key parameters in dataset construction and data protection. J Med Imaging Radiat Sci 2025; 56:101914. [PMID: 40288182] [DOI: 10.1016/j.jmir.2025.101914] [Received: 01/30/2025] [Revised: 04/01/2025] [Accepted: 04/01/2025] [Indexed: 04/29/2025]
Abstract
BACKGROUND With the digitalization of healthcare and the growing use of patient image-based data, there are increasing concerns about the protection of patient privacy. Globally, various legal rules and regulations have been adopted to enforce stringent measures on data privacy. However, despite the growing importance of privacy, there are currently no universally defined protocols outlining the specific parameters for the de-identification/pseudo-anonymization of medical images. OBJECTIVES The study aims to assess current methods for protecting patient privacy in medical image datasets used in research and healthcare technology development. METHODS A comprehensive, systematic search was conducted with a defined search string across databases, including PubMed/Medline, Scopus, Web of Science, Embase, and Google Scholar. Studies were selected based on their focus on the procedures used for anonymization, pseudo-anonymization, and de-identification of medical images during the creation of datasets. RESULTS From an initial pool of 324 potentially relevant articles, 13 studies were ultimately included in the final review after meeting the inclusion criteria. Of these, the majority focused on open-source datasets, which are accessible for use in research and algorithm development. Methods of de-identification of images included removal of burned-in annotations, defacing processes, removal of DICOM tags, and facial de-identification. A medical image protection checklist was created based on the findings of our review. DISCUSSION The review explores techniques such as removal or masking of personal identifiers, DICOM tag removal, and facial de-identification. GOAL: The insights gathered aim to help develop standardized privacy protocols to be adhered to by healthcare professionals, ensuring the responsible use of medical imaging data for healthcare advancements.
CONCLUSION The findings of this review highlight several key considerations for effective pseudo-anonymization and de-identification of medical images. The review emphasizes the need for a careful balance between protecting patient privacy and ensuring that medical datasets retain sufficient quality and richness for research and technological development.
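The DICOM tag removal the review describes amounts to blanking or stripping header attributes that identify the patient. A simplified, self-contained sketch follows: a plain dict stands in for a DICOM header, and the tag list is a small illustrative subset of identifying attributes, not a complete or authoritative de-identification profile. A real pipeline would operate on DICOM files with a dedicated library (e.g., pydicom) and follow the DICOM standard's confidentiality profiles.

```python
# Simplified sketch of DICOM tag removal for de-identification.
# The dict stands in for a DICOM header; the tag list is illustrative only.

# Tags that directly identify the patient: blank them.
IDENTIFYING_TAGS = {
    "PatientName", "PatientID", "PatientBirthDate",
    "PatientAddress", "OtherPatientIDs", "InstitutionName",
}

def deidentify(header: dict) -> dict:
    """Return a copy of the header with identifying tags blanked."""
    return {tag: ("" if tag in IDENTIFYING_TAGS else value)
            for tag, value in header.items()}

header = {
    "PatientName": "DOE^JANE",
    "PatientID": "12345",
    "Modality": "MR",                 # non-identifying metadata is kept
    "StudyDescription": "Brain MRI",  # needed for research utility
}
clean = deidentify(header)
print(clean)
```

Blanking rather than deleting keys preserves the header structure, which matters for the balance the review emphasizes: removing identity while keeping the dataset rich enough for research use.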
Affiliation(s)
- Beryl Rachel J
- Oral Pathology and Microbiology, Centre for Dental Education and Research, All India Institute of Medical Sciences, New Delhi, India
- Anubhuti Sood
- Oral Pathology and Microbiology, Centre for Dental Education and Research, All India Institute of Medical Sciences, New Delhi, India
- Tanurag Pattnaik
- Oral Pathology and Microbiology, Centre for Dental Education and Research, All India Institute of Medical Sciences, New Delhi, India
- Rewa Malhotra
- Oral Pathology and Microbiology, Centre for Dental Education and Research, All India Institute of Medical Sciences, New Delhi, India
- Vivek Nayyar
- Oral Pathology and Microbiology, Centre for Dental Education and Research, All India Institute of Medical Sciences, New Delhi, India
- Bhaskar Narayan
- Oral Pathology and Microbiology, Centre for Dental Education and Research, All India Institute of Medical Sciences, New Delhi, India
- Deepika Mishra
- Oral Pathology and Microbiology, Centre for Dental Education and Research, All India Institute of Medical Sciences, New Delhi, India
- Varun Surya
- Oral Pathology and Microbiology, Centre for Dental Education and Research, All India Institute of Medical Sciences, New Delhi, India
4
Shahin MH, Desai P, Terranova N, Guan Y, Helikar T, Lobentanzer S, Liu Q, Lu J, Madhavan S, Mo G, Musuamba FT, Podichetty JT, Shen J, Xie L, Wiens M, Musante CJ. AI-Driven Applications in Clinical Pharmacology and Translational Science: Insights From the ASCPT 2024 AI Preconference. Clin Transl Sci 2025; 18:e70203. [PMID: 40214191] [PMCID: PMC11987044] [DOI: 10.1111/cts.70203] [Received: 12/19/2024] [Revised: 02/05/2025] [Accepted: 02/12/2025] [Indexed: 04/14/2025]
Abstract
Artificial intelligence (AI) is driving innovation in clinical pharmacology and translational science with tools to advance drug development, clinical trials, and patient care. This review summarizes the key takeaways from the AI preconference at the American Society for Clinical Pharmacology and Therapeutics (ASCPT) 2024 Annual Meeting in Colorado Springs, where experts from academia, industry, and regulatory bodies discussed how AI is streamlining drug discovery, dosing strategies, outcome assessment, and patient care. The theme of the preconference was centered around how AI can empower clinical pharmacologists and translational researchers to make informed decisions and translate research findings into practice. The preconference also looked at the impact of large language models in biomedical research and how these tools are democratizing data analysis and empowering researchers. The application of explainable AI in predicting drug efficacy and safety, and the ethical considerations that should be applied when integrating AI into clinical and biomedical research were also touched upon. By sharing these diverse perspectives and real-world examples, this review shows how AI can be used in clinical pharmacology and translational science to bring efficiency and accelerate drug discovery and development to address patients' unmet clinical needs.
Affiliation(s)
- Prashant Desai
- Drug Metabolism and Pharmacokinetics, Genentech, South San Francisco, California, USA
- Nadia Terranova
- Quantitative Pharmacology, Ares Trading S.A. (an Affiliate of Merck KGaA, Darmstadt, Germany), Lausanne, Switzerland
- Yuanfang Guan
- Gilbert S. Omenn Department of Computational Medicine & Bioinformatics, University of Michigan, Ann Arbor, Michigan, USA
- Tomáš Helikar
- Department of Biochemistry, University of Nebraska‐Lincoln, Lincoln, Nebraska, USA
- Sebastian Lobentanzer
- Faculty of Medicine and Heidelberg University Hospital, Institute for Computational Biomedicine, Heidelberg University, Heidelberg, Germany
- Qi Liu
- Office of Clinical Pharmacology, Office of Translational Sciences, Center for Drug Evaluation and Research, U.S. Food and Drug Administration, Silver Spring, Maryland, USA
- James Lu
- Modeling & Simulation/Clinical Pharmacology, Genentech Research & Early Development, South San Francisco, California, USA
- Gary Mo
- Pfizer Research & Development, Groton, Connecticut, USA
- Flora T. Musuamba
- Federal Agency for Medicines and Health Products, Brussels, Belgium
- Clinical Pharmacology and Toxicology Research Unit, University of Namur, Namur, Belgium
- Jie Shen
- Clinical Sciences, AbbVie, North Chicago, Illinois, USA
- Lei Xie
- Department of Computer Science, Hunter College, The City University of New York, New York, New York, USA
- Ph.D. Program in Computer Science, Biology & Biochemistry, The City University of New York, New York, New York, USA
- Neuroscience, Weill Cornell Medicine, New York, New York, USA
5
Virk A, Alasmari S, Patel D, Allison K. Digital Health Policy and Cybersecurity Regulations Regarding Artificial Intelligence (AI) Implementation in Healthcare. Cureus 2025; 17:e80676. [PMID: 40236368] [PMCID: PMC11999725] [DOI: 10.7759/cureus.80676] [Accepted: 02/26/2025] [Indexed: 04/17/2025]
Abstract
The landscape of healthcare is rapidly changing with the increasing usage of machine and deep learning artificial intelligence and digital tools to assist in various sectors. This study aims to analyze the feasibility of the implementation of artificial intelligence (AI) models into healthcare systems. This review included English-language publications from databases such as SCOPUS, PubMed, and Google Scholar between 2000 and 2024. AI integration in healthcare systems will assist in large-scale dataset analysis, access to healthcare information, surgery data and simulation, and clinical decision-making in addition to many other healthcare services. However, with the reliance on AI, issues regarding medical liability, cybersecurity, and health disparities can arise. This necessitates updates and transparency on health policy, AI training, and cybersecurity measures. To support the implementation of AI in healthcare, transparency regarding AI algorithm training and analytical approaches is key to allowing physicians to trust and make informed decisions about the applicability of AI results. Transparency will also allow healthcare systems to adapt appropriately, provide AI services, and create viable security measures. Furthermore, the increased diversity of data used in AI algorithm training will allow for greater generalizability of AI solutions in patient care. With the growth of AI usage and interaction with patient data, security measures and safeguards, such as system monitoring and cybersecurity training, should take precedence. Stricter digital policy and data protection guidelines will add additional layers of security for patient data. This collaboration will further bolster security measures amongst different regions and healthcare systems in addition to providing more avenues for innovative care.
With the growing digitization of healthcare, advancing cybersecurity will allow effective and safe implementation of AI and other digital systems into healthcare and can improve the safety of patients and their personal health information.
Affiliation(s)
- Abdullah Virk
- Department of Ophthalmology, Flaum Eye Institute, University of Rochester, Rochester, USA
- Safanah Alasmari
- School of Health Sciences and Practice, New York Medical College, New York, USA
- Deepkumar Patel
- Department of Public Health, School of Health Science and Practice, New York Medical College, Valhalla, USA
- Karen Allison
- Department of Ophthalmology, Flaum Eye Institute, University of Rochester, Rochester, USA
6
Crowe B, Shah S, Teng D, Ma SP, DeCamp M, Rosenberg EI, Rodriguez JA, Collins BX, Huber K, Karches K, Zucker S, Kim EJ, Rotenstein L, Rodman A, Jones D, Richman IB, Henry TL, Somlo D, Pitts SI, Chen JH, Mishuris RG. Recommendations for Clinicians, Technologists, and Healthcare Organizations on the Use of Generative Artificial Intelligence in Medicine: A Position Statement from the Society of General Internal Medicine. J Gen Intern Med 2025; 40:694-702. [PMID: 39531100] [PMCID: PMC11861482] [DOI: 10.1007/s11606-024-09102-0] [Received: 04/30/2024] [Accepted: 09/27/2024] [Indexed: 11/16/2024]
Abstract
Generative artificial intelligence (generative AI) is a new technology with potentially broad applications across important domains of healthcare, but serious questions remain about how to balance the promise of generative AI against unintended consequences from adoption of these tools. In this position statement, we provide recommendations on behalf of the Society of General Internal Medicine on how clinicians, technologists, and healthcare organizations can approach the use of these tools. We focus on three major domains of medical practice where clinicians and technology experts believe generative AI will have substantial immediate and long-term impacts: clinical decision-making, health systems optimization, and the patient-physician relationship. Additionally, we highlight our most important generative AI ethics and equity considerations for these stakeholders. For clinicians, we recommend approaching generative AI similarly to other important biomedical advancements, critically appraising its evidence and utility and incorporating it thoughtfully into practice. For technologists developing generative AI for healthcare applications, we recommend a major frameshift in thinking away from the expectation that clinicians will "supervise" generative AI. Rather, these organizations and individuals should hold themselves and their technologies to the same set of high standards expected of the clinical workforce and strive to design high-performing, well-studied tools that improve care and foster the therapeutic relationship, not simply those that improve efficiency or market share. We further recommend deep and ongoing partnerships with clinicians and patients as necessary collaborators in this work. And for healthcare organizations, we recommend pursuing a combination of both incremental and transformative change with generative AI, directing resources toward both endeavors, and avoiding the urge to rapidly displace the human clinical workforce with generative AI. 
We affirm that the practice of medicine remains a fundamentally human endeavor which should be enhanced by technology, not displaced by it.
Affiliation(s)
- Byron Crowe
- Division of General Internal Medicine, Beth Israel Deaconess Medical Center, Boston, MA, USA
- Harvard Medical School, Boston, MA, USA
- Shreya Shah
- Department of Medicine, Stanford University, Palo Alto, CA, USA
- Division of Primary Care and Population Health, Stanford Healthcare AI Applied Research Team, Stanford University School of Medicine, Palo Alto, CA, USA
- Derek Teng
- Division of General Internal Medicine, Beth Israel Deaconess Medical Center, Boston, MA, USA
- Harvard Medical School, Boston, MA, USA
- Stephen P Ma
- Division of Hospital Medicine, Stanford, CA, USA
- Matthew DeCamp
- Department of Medicine, University of Colorado, Aurora, CO, USA
- Eric I Rosenberg
- Division of General Internal Medicine, Department of Medicine, University of Florida College of Medicine, Gainesville, FL, USA
- Jorge A Rodriguez
- Harvard Medical School, Boston, MA, USA
- Division of General Internal Medicine, Brigham and Women's Hospital, Boston, MA, USA
- Benjamin X Collins
- Division of General Internal Medicine and Public Health, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Biomedical Informatics, Vanderbilt University, Nashville, TN, USA
- Kathryn Huber
- Department of Internal Medicine, Kaiser Permanente, Denver, CO, School of Medicine, University of Colorado, Aurora, CO, USA
- Kyle Karches
- Department of Internal Medicine, Saint Louis University, Saint Louis, MO, USA
- Shana Zucker
- Department of Internal Medicine, University of Miami Miller School of Medicine, Jackson Memorial Hospital, Miami, FL, USA
- Eun Ji Kim
- Northwell Health, New Hyde Park, NY, USA
- Lisa Rotenstein
- Divisions of General Internal Medicine and Clinical Informatics, Department of Medicine, University of California at San Francisco, San Francisco, CA, USA
- Adam Rodman
- Division of General Internal Medicine, Beth Israel Deaconess Medical Center, Boston, MA, USA
- Harvard Medical School, Boston, MA, USA
- Danielle Jones
- Division of General Internal Medicine, Emory University School of Medicine, Atlanta, GA, USA
- Ilana B Richman
- Section of General Internal Medicine, Yale School of Medicine, New Haven, CT, USA
- Tracey L Henry
- Division of General Internal Medicine, Emory University School of Medicine, Atlanta, GA, USA
- Diane Somlo
- Harvard Medical School, Boston, MA, USA
- Department of Medicine, Massachusetts General Hospital, Boston, MA, USA
- Samantha I Pitts
- Division of General Internal Medicine, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Jonathan H Chen
- Stanford Center for Biomedical Informatics Research, Stanford, CA, USA
- Division of Hospital Medicine, Stanford, CA, USA
- Clinical Excellence Research Center, Stanford, CA, USA
- Rebecca G Mishuris
- Harvard Medical School, Boston, MA, USA
- Division of General Internal Medicine, Brigham and Women's Hospital, Boston, MA, USA
- Digital, Mass General Brigham, Somerville, MA, USA
7
Arslantaş S. Artificial intelligence and big data from digital health applications: publication trends and analysis. J Health Organ Manag 2024. [PMID: 39565082] [DOI: 10.1108/jhom-06-2024-0241] [Indexed: 11/21/2024]
Abstract
PURPOSE The integration of big data with artificial intelligence in the field of digital health has brought a new dimension to healthcare service delivery. New AI technologies that derive value from the big data generated in health service delivery are being added with each passing day. There are also some problems related to the use of AI technologies in health service delivery. Accordingly, this study aims to understand the use of digital health, AI and big data technologies in healthcare services and to analyze the developments and trends in the sector. DESIGN/METHODOLOGY/APPROACH In this research, 191 studies published between 2016 and 2023 on digital health, AI and its sub-branches and big data were analyzed using VOSviewer and Rstudio Bibliometrix programs for bibliometric analysis. We summarized the type, year, countries, journals and categories of publications; matched the most cited publications and authors; explored scientific collaborative relationships between authors and determined the evolution of research over the years through keyword analysis and factor analysis of publications. The content of the publications is briefly summarized. FINDINGS The data obtained showed that significant progress has been made in studies on the use of AI technologies and big data in the field of health, but research in the field is still ongoing and has not yet reached saturation. RESEARCH LIMITATIONS/IMPLICATIONS Although the bibliometric analysis comprehensively covered the literature, it was limited to a single database and selected keywords in order to reach the most appropriate publications on the subject. PRACTICAL IMPLICATIONS The analysis has addressed important issues regarding the use of developing digital technologies in health services and is thought to form a basis for future researchers.
ORIGINALITY/VALUE In today's world, where significant developments are taking place in the field of health, it is necessary to closely follow the development of digital technologies in the health sector and analyze the current situation in order to guide both stakeholders and those who will work in this field.
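The keyword analysis described above rests on a simple core computation: counting how often pairs of keywords co-occur across publication records, which is what tools like VOSviewer visualize as a keyword map. A minimal sketch, using invented records rather than the study's data:

```python
# Keyword co-occurrence counting, the core computation behind keyword
# maps in bibliometric tools such as VOSviewer. Records are invented.
from collections import Counter
from itertools import combinations

records = [
    {"digital health", "artificial intelligence"},
    {"artificial intelligence", "big data", "digital health"},
    {"big data", "machine learning"},
]

cooccurrence = Counter()
for keywords in records:
    # Count each unordered keyword pair once per record.
    for pair in combinations(sorted(keywords), 2):
        cooccurrence[pair] += 1

print(cooccurrence.most_common(3))
```

Sorting the keywords before pairing keeps each unordered pair under a single key, so counts from different records accumulate correctly.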
Affiliation(s)
- Selma Arslantaş
- Eldivan Vocational School of Health Services, Çankırı Karatekin University, Çankırı, Turkey
8
Brück O, Sanmark E, Ponkilainen V, Bützow A, Reito A, Kauppila JH, Kuitunen I. European health regulations reduce registry-based research. Health Res Policy Syst 2024; 22:135. [PMID: 39350115] [PMCID: PMC11443657] [DOI: 10.1186/s12961-024-01228-1] [Received: 07/19/2024] [Accepted: 09/18/2024] [Indexed: 10/04/2024]
Abstract
BACKGROUND The European Health Data Space (EHDS) regulation has been proposed to harmonize health data processing. Given its parallels with the Act on Secondary Use of Health and Social Data (Secondary Use Act) implemented in Finland in 2020, this study examines the consequences of heightened privacy constraints on registry-based medical research. METHODS We collected study permit counts approved by university hospitals in Finland in 2014-2023 and the data authority Findata in 2020‒2023. The changes in the study permit counts were analysed before and after the implementation of the General Data Protection Regulation (GDPR) and the Secondary Use Act. By fitting a linear regression model, we estimated the deficit in study counts following the Secondary Use Act. RESULTS Between 2020 and 2023, a median of 5.5% fewer data permits were approved annually by Finnish university hospitals. On the basis of linear regression modelling, we estimated a reduction of 46.9% in new data permits nationally in 2023 compared with the expected count. Similar changes were neither observed after the implementation of the GDPR nor in permit counts of other medical research types, confirming that the deficit was caused by the Secondary Use Act. CONCLUSIONS This study highlights concerns related to data privacy laws for registry-based medical research and future patient care.
Affiliation(s)
- Oscar Brück
- Hematoscope Lab, Comprehensive Cancer Center & Department of Clinical Chemistry, Diagnostic Center, Helsinki University Hospital & University of Helsinki, Biomedicum I, Haartmaninkatu 8, P.O. Box 700, 00290, Helsinki, Finland
- Enni Sanmark
- Department of Otorhinolaryngology and Phoniatrics-Head and Neck Surgery, Helsinki University Hospital, Finland and Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Ville Ponkilainen
- Center for Musculoskeletal Diseases, Tampere University Hospital, Tampere, Finland
- Aleksi Reito
- Center for Musculoskeletal Diseases, Tampere University Hospital, Tampere, Finland
- Joonas H Kauppila
- Department of Surgery, Oulu University Hospital, University of Oulu, Oulu, Finland
- Department of Molecular Medicine and Surgery, Karolinska Institutet, Karolinska University Hospital, Stockholm, Sweden
- Ilari Kuitunen
- Department of Pediatrics, Institute of Clinical Medicine, University of Eastern Finland and Kuopio University Hospital, Kuopio, Finland
9
Federico CA, Trotsyuk AA. Biomedical Data Science, Artificial Intelligence, and Ethics: Navigating Challenges in the Face of Explosive Growth. Annu Rev Biomed Data Sci 2024; 7:1-14. [PMID: 38598860] [DOI: 10.1146/annurev-biodatasci-102623-104553] [Indexed: 04/12/2024]
Abstract
Advances in biomedical data science and artificial intelligence (AI) are profoundly changing the landscape of healthcare. This article reviews the ethical issues that arise with the development of AI technologies, including threats to privacy, data security, consent, and justice, as they relate to donors of tissue and data. It also considers broader societal obligations, including the importance of assessing the unintended consequences of AI research in biomedicine. In addition, this article highlights the challenge of rapid AI development against the backdrop of disparate regulatory frameworks, calling for a global approach to address concerns around data misuse, unintended surveillance, and the equitable distribution of AI's benefits and burdens. Finally, a number of potential solutions to these ethical quandaries are offered. Namely, the merits of advocating for a collaborative, informed, and flexible regulatory approach that balances innovation with individual rights and public welfare, fostering a trustworthy AI-driven healthcare ecosystem, are discussed.
Affiliation(s)
- Carole A Federico
- Center for Biomedical Ethics, Stanford University School of Medicine, Stanford, California, USA
- Artem A Trotsyuk
- Center for Biomedical Ethics, Stanford University School of Medicine, Stanford, California, USA
10
Ho A, Bavli I, Mahal R, McKeown MJ. Multi-Level Ethical Considerations of Artificial Intelligence Health Monitoring for People Living with Parkinson's Disease. AJOB Empir Bioeth 2024; 15:178-191. [PMID: 37889210] [DOI: 10.1080/23294515.2023.2274582] [Indexed: 10/28/2023]
Abstract
Artificial intelligence (AI) has garnered tremendous attention in health care, and many hope that AI can enhance our health system's ability to care for people with chronic and degenerative conditions, including Parkinson's Disease (PD). This paper reports the themes and lessons derived from a qualitative study with people living with PD, family caregivers, and health care providers regarding the ethical dimensions of using AI to monitor, assess, and predict PD symptoms and progression. Thematic analysis identified ethical concerns at four intersecting levels: personal, interpersonal, professional/institutional, and societal levels. Reflecting on potential benefits of predictive algorithms that can continuously collect and process longitudinal data, participants expressed a desire for more timely, ongoing, and accurate information that could enhance management of day-to-day fluctuations and facilitate clinical and personal care as their disease progresses. Nonetheless, they voiced concerns about intersecting ethical questions around evolving illness identities, familial and professional care relationships, privacy, and data ownership/governance. The multi-layer analysis provides a helpful way to understand the ethics of using AI in monitoring and managing PD and other chronic/degenerative conditions.
Affiliation(s)
- Anita Ho
- Centre for Applied Ethics, School of Population and Public Health, University of British Columbia, Vancouver, Canada
- Itai Bavli
- Centre for Applied Ethics, School of Population and Public Health, University of British Columbia, Vancouver, Canada
- Ravneet Mahal
- Pacific Parkinson's Research Centre, University of British Columbia, Vancouver, Canada
- Martin J McKeown
- Pacific Parkinson's Research Centre, University of British Columbia, Vancouver, Canada

11
Hamilton A. Artificial Intelligence and Healthcare Simulation: The Shifting Landscape of Medical Education. Cureus 2024; 16:e59747. [PMID: 38840993 PMCID: PMC11152357 DOI: 10.7759/cureus.59747] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 04/20/2024] [Indexed: 06/07/2024] Open
Abstract
The impact of artificial intelligence (AI) will be felt not only in the arena of patient care and deliverable therapies but will also be uniquely disruptive in medical education and healthcare simulation (HCS), in particular. As HCS is intertwined with computer technology, it offers opportunities for rapid scalability with AI and, therefore, will be the most practical place to test new AI applications. This will ensure the acquisition of AI literacy for graduates from the country's various healthcare professional schools. Artificial intelligence has proven to be a useful adjunct in developing interprofessional education and team and leadership skills assessments. Outcome-driven medical simulation has been extensively used to train students in image-centric disciplines such as radiology, ultrasound, echocardiography, and pathology. Allowing students and trainees in healthcare to first apply diagnostic decision support systems (DDSS) under simulated conditions leads to improved diagnostic accuracy, enhanced communication with patients, safer triage decisions, and improved outcomes from rapid response teams. However, the issue of bias, hallucinations, and the uncertainty of emergent properties may undermine the faith of healthcare professionals as they see AI systems deployed in the clinical setting and participating in diagnostic judgments. Also, the demands of ensuring AI literacy in our healthcare professional curricula will place burdens on simulation assets and faculty to adapt to a rapidly changing technological landscape. Nevertheless, the introduction of AI will place increased emphasis on virtual reality platforms, thereby improving the availability of self-directed learning and making it available 24/7, along with uniquely personalized evaluations and customized coaching. 
Yet, caution must be exercised concerning AI, especially as society's earlier, delayed, and muted responses to the inherent dangers of social media raise serious questions about whether the American government and its citizenry can anticipate the security and privacy guardrails that need to be in place to protect our healthcare practitioners, medical students, and patients.
Affiliation(s)
- Allan Hamilton
- Artificial Intelligence Division, Arizona Simulation Technology and Education Center (ASTEC), University of Arizona, Tucson, USA

12
McLennan S, Fiske A, Celi LA. Building a house without foundations? A 24-country qualitative interview study on artificial intelligence in intensive care medicine. BMJ Health Care Inform 2024; 31:e101052. [PMID: 38642921 PMCID: PMC11033632 DOI: 10.1136/bmjhci-2024-101052] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2024] [Accepted: 04/08/2024] [Indexed: 04/22/2024] Open
Abstract
OBJECTIVES To explore the views of intensive care professionals in high-income countries (HICs) and lower-to-middle-income countries (LMICs) regarding the use and implementation of artificial intelligence (AI) technologies in intensive care units (ICUs). METHODS Individual semi-structured qualitative interviews were conducted between December 2021 and August 2022 with 59 intensive care professionals from 24 countries. Transcripts were analysed using conventional content analysis. RESULTS Participants had generally positive views about the potential use of AI in ICUs but also reported some well-known concerns about the use of AI in clinical practice and important technical and non-technical barriers to its implementation. Important differences existed between ICUs regarding their current readiness to implement AI. However, these differences were not primarily between HICs and LMICs, but between a small number of ICUs in large tertiary hospitals in HICs, which were reported to have the necessary digital infrastructure for AI, and nearly all other ICUs in both HICs and LMICs, which were reported to have neither the technical capability to capture the necessary data and use AI, nor the staff with the right knowledge and skills to use the technology. CONCLUSION Pouring massive amounts of resources into developing AI without first building the digital infrastructure foundation that AI requires is unethical. Real-world implementation and routine use of AI in the vast majority of ICUs in both HICs and LMICs included in our study is unlikely to occur any time soon. ICUs should not be using AI until certain preconditions are met.
Affiliation(s)
- Stuart McLennan
- Institute of History and Ethics in Medicine, Department of Preclinical Medicine, TUM School of Medicine and Health, Technical University of Munich, Munich, Bavaria, Germany
- Institute for Biomedical Ethics, University of Basel, Basel, Switzerland
- Amelia Fiske
- Institute of History and Ethics in Medicine, Department of Preclinical Medicine, TUM School of Medicine and Health, Technical University of Munich, Munich, Bavaria, Germany
- Leo Anthony Celi
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Division of Pulmonary, Critical Care and Sleep Medicine, Beth Israel Deaconess Medical Center, Boston, MA 02215, USA
- Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA 02115, USA

13
Ezell JM, Ajayi BP, Parikh T, Miller K, Rains A, Scales D. Drug Use and Artificial Intelligence: Weighing Concerns and Possibilities for Prevention. Am J Prev Med 2024; 66:568-572. [PMID: 38056683 DOI: 10.1016/j.amepre.2023.11.024] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/29/2023] [Revised: 11/30/2023] [Accepted: 11/30/2023] [Indexed: 12/08/2023]
Affiliation(s)
- Jerel M Ezell
- Community Health Sciences, School of Public Health, University of California Berkeley, Berkeley, California; Berkeley Center for Cultural Humility, University of California Berkeley, Berkeley, California
- Babatunde Patrick Ajayi
- Community Health Sciences, School of Public Health, University of California Berkeley, Berkeley, California
- Tapan Parikh
- Information Science, The College of Arts & Sciences, Cornell University, New York, New York
- Kyle Miller
- Department of Medicine, Southern Illinois University, Carbondale, Illinois
- Alex Rains
- Pritzker School of Medicine, The University of Chicago, Chicago, Illinois
- David Scales
- Division of General Internal Medicine, Joan and Sanford I. Weill Department of Medicine, Weill Cornell Medicine, New York, New York

14
Stogiannos N, O'Regan T, Scurr E, Litosseliti L, Pogose M, Harvey H, Kumar A, Malik R, Barnes A, McEntee MF, Malamateniou C. AI implementation in the UK landscape: Knowledge of AI governance, perceived challenges and opportunities, and ways forward for radiographers. Radiography (Lond) 2024; 30:612-621. [PMID: 38325103 DOI: 10.1016/j.radi.2024.01.019] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2024] [Accepted: 01/26/2024] [Indexed: 02/09/2024]
Abstract
INTRODUCTION Despite the rapid increase of AI-enabled applications deployed in clinical practice, many challenges exist around AI implementation, including the clarity of governance frameworks, usability of validation of AI models, and customisation of training for radiographers. This study aimed to explore the perceptions of diagnostic and therapeutic radiographers, with existing theoretical and/or practical knowledge of AI, on issues of relevance to the field, such as AI implementation, including knowledge of AI governance and procurement, perceptions about enablers and challenges and future priorities for AI adoption. METHODS An online survey was designed and distributed to UK-based qualified radiographers who work in medical imaging and/or radiotherapy and have some previous theoretical and/or practical knowledge of working with AI. Participants were recruited through the researchers' professional networks on social media with support from the AI advisory group of the Society and College of Radiographers. Survey questions related to AI training/education, knowledge of AI governance frameworks, data privacy procedures, AI implementation considerations, and priorities for AI adoption. Descriptive statistics were employed to analyse the data, and chi-square tests were used to explore significant relationships between variables. RESULTS In total, 88 valid responses were received. Most radiographers (56.6 %) had not received any AI-related training. Also, although approximately 63 % of them used an evaluation framework to assess AI models' performance before implementation, many (36.9 %) were still unsure about suitable evaluation methods. Radiographers requested clearer guidance on AI governance, ample time to implement AI in their practice safely, adequate funding, effective leadership, and targeted support from AI champions. 
AI training, robust governance frameworks, and patient and public involvement were seen as priorities for the successful implementation of AI by radiographers. CONCLUSION AI implementation is progressing within radiography, but without customised training, clearer governance, key stakeholder engagement and suitable new roles created, it will be hard to harness its benefits and minimise related risks. IMPLICATIONS FOR PRACTICE The results of this study highlight some of the priorities and challenges for radiographers in relation to AI adoption, namely the need for developing robust AI governance frameworks and providing optimal AI training.
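The chi-square tests of independence mentioned in the methods above can be sketched for a 2×2 contingency table. This is a minimal illustration, not the study's analysis: the counts and the variable pairing (AI training received vs. use of an evaluation framework) are hypothetical.

```python
# Pearson chi-square statistic for a 2x2 contingency table.
# Illustrative only: counts below are invented, not the survey's data.

def chi_square_2x2(table):
    """Return the Pearson chi-square statistic for a 2x2 table of counts."""
    (a, b), (c, d) = table
    n = a + b + c + d
    # Expected counts under independence: (row total * column total) / n
    expected = [[(a + b) * (a + c) / n, (a + b) * (b + d) / n],
                [(c + d) * (a + c) / n, (c + d) * (b + d) / n]]
    observed = [[a, b], [c, d]]
    return sum((observed[i][j] - expected[i][j]) ** 2 / expected[i][j]
               for i in range(2) for j in range(2))

# Hypothetical cross-tabulation: rows = AI training (yes/no),
# columns = used an evaluation framework (yes/no).
table = [[10, 20], [30, 40]]
print(round(chi_square_2x2(table), 3))
```

In practice one would compare the statistic against the chi-square distribution with one degree of freedom (e.g. via `scipy.stats.chi2_contingency`) to obtain a p-value.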
Affiliation(s)
- N Stogiannos
- Division of Midwifery & Radiography, City, University of London, UK; Medical Imaging Department, Corfu General Hospital, Greece
- T O'Regan
- The Society and College of Radiographers, London, UK
- E Scurr
- The Royal Marsden NHS Foundation Trust, UK
- L Litosseliti
- School of Health & Psychological Sciences, City, University of London, UK
- M Pogose
- Quality Assurance and Regulatory Affairs, Hardian Health, UK
- A Kumar
- Frimley Health NHS Foundation Trust, UK
- R Malik
- Bolton NHS Foundation Trust, UK
- A Barnes
- King's Technology Evaluation Centre (KiTEC), School of Biomedical Engineering & Imaging Science, King's College London, UK
- M F McEntee
- Discipline of Medical Imaging and Radiation Therapy, University College Cork, Ireland
- C Malamateniou
- Division of Midwifery & Radiography, City, University of London, UK; Society and College of Radiographers AI Advisory Group, London, UK; European Society of Medical Imaging Informatics, Vienna, Austria; European Federation of Radiographer Societies, Cumieira, Portugal

15
Akyüz K, Cano Abadía M, Goisauf M, Mayrhofer MT. Unlocking the potential of big data and AI in medicine: insights from biobanking. Front Med (Lausanne) 2024; 11:1336588. [PMID: 38357641 PMCID: PMC10864616 DOI: 10.3389/fmed.2024.1336588] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2023] [Accepted: 01/19/2024] [Indexed: 02/16/2024] Open
Abstract
Big data and artificial intelligence are key elements in the medical field as they are expected to improve accuracy and efficiency in diagnosis and treatment, particularly in identifying biomedically relevant patterns, facilitating progress towards individually tailored preventative and therapeutic interventions. These applications belong to current research practice that is data-intensive. While the combination of imaging, pathological, genomic, and clinical data is needed to train algorithms to realize the full potential of these technologies, biobanks often serve as crucial infrastructures for data-sharing and data flows. In this paper, we argue that the 'data turn' in the life sciences has increasingly re-structured major infrastructures, which often were created for biological samples and associated data, as predominantly data infrastructures. These have evolved and diversified over time in terms of tackling relevant issues such as harmonization and standardization, but also consent practices and risk assessment. In line with this datafication, increased use of AI-based technologies marks the current developments at the forefront of big data research in life science and medicine, engendering new issues and concerns along with opportunities. At a time when secure health data environments, such as the European Health Data Space, are in the making, we argue that such meta-infrastructures can benefit both from the experience and evolution of biobanking and from the current state of affairs in AI in medicine regarding good governance, social aspects and practices, and critical thinking about data practices, all of which can contribute to the trustworthiness of such meta-infrastructures.
Affiliation(s)
- Kaya Akyüz
- Department of ELSI Services and Research, BBMRI-ERIC, Graz, Austria

16
Aspell N, Goldsteen A, Renwick R. Dicing with data: the risks, benefits, tensions and tech of health data in the iToBoS project. Front Digit Health 2024; 6:1272709. [PMID: 38357640 PMCID: PMC10864635 DOI: 10.3389/fdgth.2024.1272709] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2023] [Accepted: 01/09/2024] [Indexed: 02/16/2024] Open
Abstract
This paper will discuss the European-funded iToBoS project, tasked by the European Commission to develop an AI diagnostic platform for the early detection of skin melanoma. The paper will outline the project, provide an overview of the data being processed, describe the impact assessment processes, and explain the AI privacy risk mitigation methods being deployed. Following this, the paper will offer a brief discussion of some of the more complex aspects: (1) the relatively small clinical trial study cohort, which poses risks associated with data distinguishability and the masking ability of the applied anonymisation tools, (2) the project's ability to obtain informed consent from the study cohort given the complexity of the technologies, (3) the project's commitment to an open research data strategy and the additional privacy risk mitigations required to protect the multi-modal study data, and (4) the ability of the project to adequately explain the outputs of the algorithmic components to a broad range of stakeholders. The paper will discuss how these complexities have caused tensions that reflect wider tensions in the health domain. A project-level solution includes collaboration with a melanoma patient network, as an avenue for fair and representative qualification of risks and benefits with the patient stakeholder group. However, it is unclear how scalable this process is given the relentless pursuit of innovation within the health domain, accentuated by the continued proliferation of artificial intelligence, open data strategies, and the integration of multi-modal data sets inclusive of genomics.
Affiliation(s)
- Niamh Aspell
- Innovation & Research, Trilateral Research Ltd., Waterford, Ireland
- Robin Renwick
- Innovation & Research, Trilateral Research Ltd., Waterford, Ireland

17
Prasanna A, Jing B, Plopper G, Miller KK, Sanjak J, Feng A, Prezek S, Vidyaprakash E, Thovarai V, Maier EJ, Bhattacharya A, Naaman L, Stephens H, Watford S, Boscardin WJ, Johanson E, Lienau A. Synthetic Health Data Can Augment Community Research Efforts to Better Inform the Public During Emerging Pandemics. medRxiv [Preprint] 2023:2023.12.11.23298687. [PMID: 38168217 PMCID: PMC10760275 DOI: 10.1101/2023.12.11.23298687] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/05/2024]
Abstract
The COVID-19 pandemic had disproportionate effects on the Veteran population due to the increased prevalence of medical and environmental risk factors. Synthetic electronic health record (EHR) data can help meet the acute need for Veteran population-specific predictive modeling efforts by avoiding the strict barriers to access currently present within Veteran Health Administration (VHA) datasets. The U.S. Food and Drug Administration (FDA) and the VHA launched the precisionFDA COVID-19 Risk Factor Modeling Challenge to develop COVID-19 diagnostic and prognostic models; identify Veteran population-specific risk factors; and test the usefulness of synthetic data as a substitute for real data. The use of synthetic data boosted challenge participation by providing a dataset that was accessible to all competitors. Models trained on synthetic data showed similar but systematically inflated performance metrics compared with those trained on real data. The important risk factors identified in the synthetic data largely overlapped with those identified from the real data, and both sets of risk factors were validated in the literature. Tradeoffs exist between synthetic data generation approaches based on whether a real EHR dataset is required as input. Synthetic data generated directly from real EHR input will more closely align with the characteristics of the relevant cohort. This work shows that synthetic EHR data will have practical value to the Veterans' health research community for the foreseeable future.
Affiliation(s)
- Bocheng Jing
- Northern California Institute for Research and Education
- San Francisco VA Medical Center
- Sean Watford
- Booz Allen Hamilton
- Currently U.S. Environmental Protection Agency
- W John Boscardin
- University of California, San Francisco, Department of Medicine
- University of California, San Francisco, Department of Epidemiology & Biostatistics

18
Bak M. AI Can Show You the World. Am J Bioeth 2023; 23:107-110. [PMID: 37812112 DOI: 10.1080/15265161.2023.2250312] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/10/2023]
Affiliation(s)
- Marieke Bak
- Amsterdam UMC, University of Amsterdam
- Technical University of Munich

19
Singh S, Kumar R, Payra S, Singh SK. Artificial Intelligence and Machine Learning in Pharmacological Research: Bridging the Gap Between Data and Drug Discovery. Cureus 2023; 15:e44359. [PMID: 37779744 PMCID: PMC10539991 DOI: 10.7759/cureus.44359] [Citation(s) in RCA: 22] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/31/2023] [Indexed: 10/03/2023] Open
Abstract
Artificial intelligence (AI) has transformed pharmacological research through machine learning, deep learning, and natural language processing. These advancements have greatly influenced drug discovery, development, and precision medicine. AI algorithms analyze vast biomedical data identifying potential drug targets, predicting efficacy, and optimizing lead compounds. AI has diverse applications in pharmacological research, including target identification, drug repurposing, virtual screening, de novo drug design, toxicity prediction, and personalized medicine. AI improves patient selection, trial design, and real-time data analysis in clinical trials, leading to enhanced safety and efficacy outcomes. Post-marketing surveillance utilizes AI-based systems to monitor adverse events, detect drug interactions, and support pharmacovigilance efforts. Machine learning models extract patterns from complex datasets, enabling accurate predictions and informed decision-making, thus accelerating drug discovery. Deep learning, specifically convolutional neural networks (CNN), excels in image analysis, aiding biomarker identification and optimizing drug formulation. Natural language processing facilitates the mining and analysis of scientific literature, unlocking valuable insights and information. However, the adoption of AI in pharmacological research raises ethical considerations. Ensuring data privacy and security, addressing algorithm bias and transparency, obtaining informed consent, and maintaining human oversight in decision-making are crucial ethical concerns. The responsible deployment of AI necessitates robust frameworks and regulations. The future of AI in pharmacological research is promising, with integration with emerging technologies like genomics, proteomics, and metabolomics offering the potential for personalized medicine and targeted therapies. 
Collaboration among academia, industry, and regulatory bodies is essential for the ethical implementation of AI in drug discovery and development. Continuous research and development in AI techniques and comprehensive training programs will empower scientists and healthcare professionals to fully exploit AI's potential, leading to improved patient outcomes and innovative pharmacological interventions.
Affiliation(s)
- Shruti Singh
- Department of Pharmacology, All India Institute of Medical Sciences, Patna, IND
- Rajesh Kumar
- Department of Pharmacology, All India Institute of Medical Sciences, Patna, IND
- Shuvasree Payra
- Department of Pharmacology, All India Institute of Medical Sciences, Patna, IND
- Sunil K Singh
- Department of Pharmacology, All India Institute of Medical Sciences, Patna, IND

20
Fehr J, Jaramillo-Gutierrez G, Oala L, Gröschel MI, Bierwirth M, Balachandran P, Werneck-Leite A, Lippert C. Piloting a Survey-Based Assessment of Transparency and Trustworthiness with Three Medical AI Tools. Healthcare (Basel) 2022; 10:1923. [PMID: 36292369 PMCID: PMC9601535 DOI: 10.3390/healthcare10101923] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2022] [Revised: 09/18/2022] [Accepted: 09/21/2022] [Indexed: 11/04/2022] Open
Abstract
Artificial intelligence (AI) offers the potential to support healthcare delivery, but poorly trained or validated algorithms bear risks of harm. Ethical guidelines state transparency about model development and validation as a requirement for trustworthy AI. Abundant guidance exists on providing transparency through reporting, but poorly reported medical AI tools are common. To close this transparency gap, we developed and piloted a framework to quantify the transparency of medical AI tools with three use cases. Our framework comprises a survey to report on the intended use, training and validation data and processes, ethical considerations, and deployment recommendations. The transparency of each response was scored with either 0, 0.5, or 1 to reflect if the requested information was not, partially, or fully provided. Additionally, we assessed on an analogous three-point scale if the provided responses fulfilled the transparency requirement for a set of trustworthiness criteria from ethical guidelines. The degree of transparency and trustworthiness was calculated on a scale from 0% to 100%. Our assessment of the three medical AI use cases pinpointed reporting gaps and resulted in transparency scores of 67% for two use cases and 59% for the third. We report anecdotal evidence that business constraints and limited information from external datasets were major obstacles to providing transparency for the three use cases. The observed transparency gaps also lowered the degree of trustworthiness, indicating compliance gaps with ethical guidelines. All three pilot use cases faced challenges in providing transparency about medical AI tools, but more studies are needed to investigate those in the wider medical AI sector. Applying this framework for an external assessment of transparency may be infeasible if business constraints prevent the disclosure of information. New strategies may be necessary to enable audits of medical AI tools while preserving business secrets.
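The three-point scoring and 0–100% aggregation described in this abstract can be sketched as follows. This is a minimal illustration under assumptions: the example items and their scores are hypothetical, not the framework's actual survey items.

```python
# Minimal sketch of the described three-point transparency scoring:
# each survey item is scored 0 (not provided), 0.5 (partially provided),
# or 1 (fully provided), and the scores are averaged onto a 0-100% scale.

def transparency_score(item_scores):
    """Aggregate per-item scores (0, 0.5, or 1) into a 0-100% score."""
    allowed = {0, 0.5, 1}
    if any(s not in allowed for s in item_scores):
        raise ValueError("each item must be scored 0, 0.5, or 1")
    return 100 * sum(item_scores) / len(item_scores)

# Hypothetical example: intended use fully reported, training data fully
# reported, validation partially reported, deployment guidance omitted.
scores = [1, 1, 0.5, 0]
print(f"{transparency_score(scores):.1f}%")  # → 62.5%
```

The trustworthiness assessment described in the abstract would follow the same shape, with items drawn from ethical-guideline criteria instead of reporting items.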
Affiliation(s)
- Jana Fehr
- Digital Engineering Faculty, University of Potsdam, 14482 Potsdam, Germany
- Digital Health & Machine Learning, Hasso Plattner Institute, 14482 Potsdam, Germany
- Luis Oala
- Department of Artificial Intelligence, Fraunhofer HHI, 10587 Berlin, Germany
- Matthias I. Gröschel
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA 02115, USA
- Manuel Bierwirth
- ITU/WHO Focus Group AI4H, 1211 Geneva, Switzerland
- Alumnus Goethe Frankfurt University, 60323 Frankfurt am Main, Germany
- Pradeep Balachandran
- ITU/WHO Focus Group AI4H, 1211 Geneva, Switzerland
- Technical Consultant (Digital Health), Thiruvananthapuram 695010, India
- Christoph Lippert
- Digital Engineering Faculty, University of Potsdam, 14482 Potsdam, Germany
- Digital Health & Machine Learning, Hasso Plattner Institute, 14482 Potsdam, Germany

21
McLennan S, Rachut S, Lange J, Fiske A, Heckmann D, Buyx A. Practices and attitudes of Bavarian stakeholders regarding the secondary use of health data for research purposes during the COVID-19 pandemic: a qualitative interview study. J Med Internet Res 2022; 24:e38754. [PMID: 35696598 PMCID: PMC9239567 DOI: 10.2196/38754] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2022] [Revised: 05/28/2022] [Accepted: 05/29/2022] [Indexed: 01/14/2023] Open
Abstract
Background The COVID-19 pandemic is a threat to global health and requires collaborative health research efforts across organizations and countries to address it. Although routinely collected digital health data are a valuable source of information for researchers, benefiting from these data requires accessing and sharing the data. Health care organizations focusing on individual risk minimization threaten to undermine COVID-19 research efforts, and it has been argued that there is an ethical obligation to use the European Union’s General Data Protection Regulation (GDPR) scientific research exemption during the COVID-19 pandemic to support collaborative health research. Objective This study aims to explore the practices and attitudes of stakeholders in the German federal state of Bavaria regarding the secondary use of health data for research purposes during the COVID-19 pandemic, with a specific focus on the GDPR scientific research exemption. Methods Individual semistructured qualitative interviews were conducted between December 2020 and January 2021 with a purposive sample of 17 stakeholders from 3 different groups in Bavaria: researchers involved in COVID-19 research (n=5, 29%), data protection officers (n=6, 35%), and research ethics committee representatives (n=6, 35%). The transcripts were analyzed using conventional content analysis. Results Participants identified systemic challenges in conducting collaborative secondary-use health data research in Bavaria; secondary health data research generally only happens when patient consent has been obtained, or the data have been fully anonymized. The GDPR research exemption has not played a significant role during the pandemic and is currently seldom and restrictively used. 
Participants identified 3 key groups of barriers that led to difficulties: the wider ecosystem at many Bavarian health care organizations, legal uncertainty that leads to risk-averse approaches, and ethical positions that patient consent ought to be obtained whenever possible to respect patient autonomy. To improve health data research in Bavaria and across Germany, participants wanted greater legal certainty regarding the use of pseudonymized data for research purposes without the patient’s consent. Conclusions The current balance between enabling the positive goals of health data research and avoiding associated data protection risks is heavily skewed toward avoiding risks; so much so that it makes reaching the goals of health data research extremely difficult. This is important, as it is widely recognized that there is an ethical imperative to use health data to improve care. The current approach also creates a problematic conflict with the ambitions of Germany, and the federal state of Bavaria, to be a leader in artificial intelligence. A recent development in the field of German public administration known as norm screening (Normenscreening) could potentially provide a systematic approach to minimize legal barriers. This approach would likely be beneficial to other countries.
Affiliation(s)
- Stuart McLennan
- Institute of History and Ethics in Medicine, TUM School of Medicine, Technical University of Munich, Munich, Germany
- Sarah Rachut
- TUM Center for Digital Public Services, Department Governance, TUM School of Social Sciences and Technology, Technical University of Munich, Munich, Germany
- Johannes Lange
- Institute of History and Ethics in Medicine, TUM School of Medicine, Technical University of Munich, Munich, Germany
- Amelia Fiske
- Institute of History and Ethics in Medicine, TUM School of Medicine, Technical University of Munich, Munich, Germany
- Dirk Heckmann
- TUM Center for Digital Public Services, Department Governance, TUM School of Social Sciences and Technology, Technical University of Munich, Munich, Germany
- Alena Buyx
- Institute of History and Ethics in Medicine, TUM School of Medicine, Technical University of Munich, Munich, Germany