1
van der Mee FAM, Ottenheijm RPG, Gentry EGS, Nobel JM, Zijta FM, Cals JWL, Jansen J. The impact of different radiology report formats on patient information processing: a systematic review. Eur Radiol 2025;35:2644-2657. PMID: 39545980; PMCID: PMC12021958; DOI: 10.1007/s00330-024-11165-w.
Abstract
BACKGROUND Since radiology reports are primarily written for health professionals, patients may experience difficulties understanding jargon and terminology used, leading to anxiety and confusion. OBJECTIVES This review evaluates the impact of different radiology report formats on outcomes related to patient information processing, including perception, decision (behavioral intention), action (actual health behavior), and memory (recall of information). METHODS PubMed, Web of Science, EMBASE, and PsycInfo were searched for relevant qualitative and quantitative articles describing or comparing ways of presenting diagnostic radiology reports to patients. Two reviewers independently screened for relevant articles and extracted data from those included. The quality of articles was assessed using the Mixed Methods Appraisal Tool. RESULTS Eighteen studies, two qualitative and sixteen quantitative, were included. Sixteen studies compared multiple presentation formats, most frequently traditional unmodified reports (n = 15), or reports with anatomic illustrations (n = 8), lay summaries (n = 6) or glossaries (n = 6). Glossaries, illustrations, lay summaries, lay reports or lay conclusions all significantly improved participants' cognitive perception and perception of communication of radiology reports, compared to traditional reports. Furthermore, these formats increased affective perception (e.g., reduced anxiety and worry), although only significant for lay reports and conclusions. CONCLUSION Modifying traditional radiology reports with glossaries, illustrations or lay language enhances patient information processing. KEY POINTS Question Identifying the impact of different radiology report formats on outcomes related to patient information processing to enhance patient engagement through online access to radiology reports. Findings Lay language summaries, glossaries with patient-oriented definitions, and anatomic illustrations increase patients' satisfaction with and understanding of their radiology reports. Clinical relevance To increase patients' satisfaction, perceived usefulness and understanding with radiology reports, the use of lay language summaries, glossaries with patient-oriented definitions, and anatomic illustrations is recommended. These modifications decrease patients' unnecessary insecurity, confusion, anxiety and physician consultations after viewing reports.
Affiliation(s)
- F A M van der Mee
- Department of Family Medicine, Care and Public Health Research Institute, Maastricht University, Maastricht, The Netherlands
- R P G Ottenheijm
- Department of Family Medicine, Care and Public Health Research Institute, Maastricht University, Maastricht, The Netherlands
- E G S Gentry
- Department of Family Medicine, Care and Public Health Research Institute, Maastricht University, Maastricht, The Netherlands
- J M Nobel
- Department of Radiology and Nuclear Medicine, Maastricht University Medical Center+, Maastricht, The Netherlands
- GROW Research Institute for Oncology and Reproduction, Maastricht University, Maastricht, The Netherlands
- F M Zijta
- Department of Radiology and Nuclear Medicine, Maastricht University Medical Center+, Maastricht, The Netherlands
- J W L Cals
- Department of Family Medicine, Care and Public Health Research Institute, Maastricht University, Maastricht, The Netherlands
- J Jansen
- Department of Family Medicine, Care and Public Health Research Institute, Maastricht University, Maastricht, The Netherlands
2
Tandon M, Chetla N, Mallepally A, Zebari B, Samayamanthula S, Silva J, Vaja S, Chen J, Cullen M, Sukhija K. Can Artificial Intelligence Diagnose Knee Osteoarthritis? JMIR Biomed Eng 2025;10:e67481. PMID: 40266670; DOI: 10.2196/67481.
Abstract
This study analyzed the capability of GPT-4o to identify knee osteoarthritis and found that the model had good sensitivity but poor specificity for the diagnosis; patients and clinicians should exercise caution when using GPT-4o for image analysis in knee osteoarthritis.
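The sensitivity/specificity trade-off reported here is easiest to read off a confusion-matrix calculation. The sketch below is illustrative only; the case labels and counts are hypothetical and are not taken from the study.

```python
# Illustrative sensitivity/specificity calculation (hypothetical counts,
# not the study's data): model predictions vs. radiologist ground truth.
from collections import Counter

ground_truth = ["oa", "oa", "oa", "normal", "normal", "oa", "normal", "normal"]
predictions  = ["oa", "oa", "oa", "oa",     "normal", "oa", "oa",     "normal"]

counts = Counter(zip(ground_truth, predictions))
tp = counts[("oa", "oa")]          # true positives
fn = counts[("oa", "normal")]      # false negatives
tn = counts[("normal", "normal")]  # true negatives
fp = counts[("normal", "oa")]      # false positives

sensitivity = tp / (tp + fn)  # high when few osteoarthritis cases are missed
specificity = tn / (tn + fp)  # low when many normal knees are flagged
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```

A "good sensitivity, poor specificity" pattern corresponds to a high first value and a low second value in this calculation, i.e., many false positives among normal knees.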
Affiliation(s)
- Mihir Tandon
- Albany Medical College, Albany, NY, United States
- Nitin Chetla
- University of Virginia School of Medicine, Charlottesville, VA, United States
- Adarsh Mallepally
- School of Medicine, Virginia Commonwealth University, Richmond, VA, United States
- Botan Zebari
- St. James School of Medicine, Binghamton, NY, United States
- Sai Samayamanthula
- University of Virginia School of Medicine, Charlottesville, VA, United States
- Swapna Vaja
- Rush Medical College, Chicago, IL, United States
- John Chen
- Albany Medical College, Albany, NY, United States
3
Gundlack J, Negash S, Thiel C, Buch C, Schildmann J, Unverzagt S, Mikolajczyk R, Frese T. Artificial Intelligence in Medical Care - Patients' Perceptions on Caregiving Relationships and Ethics: A Qualitative Study. Health Expect 2025;28:e70216. PMID: 40094179; PMCID: PMC11911933; DOI: 10.1111/hex.70216.
Abstract
INTRODUCTION Artificial intelligence (AI) offers several opportunities to enhance medical care, but practical application is limited. Consideration of patient needs is essential for the successful implementation of AI-based systems. Few studies have explored patients' perceptions, especially in Germany, resulting in insufficient exploration of perspectives of outpatients, older patients and patients with chronic diseases. We aimed to explore how patients perceive AI in medical care, focusing on relationships to physicians and ethical aspects. METHODS We conducted a qualitative study with six semi-structured focus groups from June 2022 to March 2023. We analysed data using a content analysis approach by systemising the textual material via a coding system. Participants were mostly recruited from outpatient settings in the regions of Halle and Erlangen, Germany. They were enrolled primarily through convenience sampling supplemented by purposive sampling. RESULTS Patients (N = 35; 13 females, 22 males) with a median age of 50 years participated. Participants were mixed in socioeconomic status and affinity for new technology. Most had chronic diseases. Perceived main advantages of AI were its efficient and flawless functioning, its ability to process and provide large data volume, and increased patient safety. Major perceived disadvantages were impersonality, potential data security issues, and fear of errors based on medical staff relying too much on AI. A dominant theme was that human interaction, personal conversation, and understanding of emotions cannot be replaced by AI. Participants emphasised the need to involve everyone in the informing process about AI. Most considered physicians as responsible for decisions resulting from AI applications. Transparency of data use and data protection were other important points. CONCLUSIONS Patients could generally imagine AI as support in medical care if its usage is focused on patient well-being and the human relationship is maintained. Including patients' needs in the development of AI and adequate communication about AI systems are essential for successful implementation in practice. PATIENT OR PUBLIC CONTRIBUTION Patients' perceptions as participants in this study were crucial. Further, patients assessed the presentation and comprehensibility of the research material during a pretest, and recommended adaptations were implemented. After each FG, space was provided for requesting modifications and discussion.
Affiliation(s)
- Jana Gundlack
- Institute of General Practice & Family Medicine, Interdisciplinary Center of Health Sciences, Medical Faculty of the Martin Luther University Halle-Wittenberg, Halle (Saale), Germany
- Sarah Negash
- Institute for Medical Epidemiology, Biometrics and Informatics, Interdisciplinary Center for Health Sciences, Medical Faculty of the Martin Luther University Halle-Wittenberg, Halle (Saale), Germany
- Carolin Thiel
- Institute of General Practice & Family Medicine, Interdisciplinary Center of Health Sciences, Medical Faculty of the Martin Luther University Halle-Wittenberg, Halle (Saale), Germany
- SRH University of Applied Health Sciences, Heidelberg, Germany
- Charlotte Buch
- Institute for History and Ethics of Medicine, Interdisciplinary Center for Health Sciences, Medical Faculty of the Martin Luther University Halle-Wittenberg, Halle (Saale), Germany
- Jan Schildmann
- Institute for History and Ethics of Medicine, Interdisciplinary Center for Health Sciences, Medical Faculty of the Martin Luther University Halle-Wittenberg, Halle (Saale), Germany
- Susanne Unverzagt
- Institute of General Practice & Family Medicine, Interdisciplinary Center of Health Sciences, Medical Faculty of the Martin Luther University Halle-Wittenberg, Halle (Saale), Germany
- Rafael Mikolajczyk
- Institute for Medical Epidemiology, Biometrics and Informatics, Interdisciplinary Center for Health Sciences, Medical Faculty of the Martin Luther University Halle-Wittenberg, Halle (Saale), Germany
- Thomas Frese
- Institute of General Practice & Family Medicine, Interdisciplinary Center of Health Sciences, Medical Faculty of the Martin Luther University Halle-Wittenberg, Halle (Saale), Germany
4
Frost EK, Aquino YSJ, Braunack-Mayer A, Carter SM. Understanding Public Judgements on Artificial Intelligence in Healthcare: Dialogue Group Findings From Australia. Health Expect 2025;28:e70185. PMID: 40150867; PMCID: PMC11949843; DOI: 10.1111/hex.70185.
Abstract
INTRODUCTION There is a rapidly increasing number of applications of healthcare artificial intelligence (HCAI). Alongside this, a new field of research is investigating public support for HCAI. We conducted a study to identify the conditions on Australians' support for HCAI, with an emphasis on identifying the instances where using AI in healthcare systems was seen as acceptable or unacceptable. METHODS We conducted eight dialogue groups with 47 Australians, aiming for diversity in age, gender, working status, and experience with information and communication technologies. The moderators encouraged participants to discuss the reasons and conditions for their support for AI in health care. RESULTS Most participants were conditionally supportive of HCAI. The participants felt strongly that AI should be developed, implemented and controlled with patient interests in mind. They supported HCAI principally as an informational tool and hoped that it would empower people by enabling greater access to personalised information about their health. They were opposed to HCAI as a decision-making tool or as a replacement for physician-patient interaction. CONCLUSION Our findings indicate that Australians support HCAI as a tool that enhances rather than replaces human decision-making in health care. Australians value HCAI as an epistemic tool that can expand access to personalised health information but remain cautious about its use in clinical decision-making. Developers of HCAI tools should consider Australians' preferences for AI tools that provide epistemic resources, and their aversion to tools which make decisions autonomously, or replace interactions with their physicians. PATIENT OR PUBLIC CONTRIBUTION Members of the public were participants in this study. The participants made contributions by sharing their views and judgements.
Affiliation(s)
- Emma K. Frost
- Australian Centre for Health Engagement, Evidence and Values, School of Social Science, Faculty of the Arts, Social Science and Humanities, University of Wollongong, Gwynneville, New South Wales, Australia
- Yves Saint James Aquino
- Australian Centre for Health Engagement, Evidence and Values, School of Social Science, Faculty of the Arts, Social Science and Humanities, University of Wollongong, Gwynneville, New South Wales, Australia
- Annette Braunack-Mayer
- Australian Centre for Health Engagement, Evidence and Values, School of Social Science, Faculty of the Arts, Social Science and Humanities, University of Wollongong, Gwynneville, New South Wales, Australia
- Stacy M. Carter
- Australian Centre for Health Engagement, Evidence and Values, School of Social Science, Faculty of the Arts, Social Science and Humanities, University of Wollongong, Gwynneville, New South Wales, Australia
5
Varol Arısoy M, Arısoy A, Uysal İ. A vision attention driven Language framework for medical report generation. Sci Rep 2025;15:10704. PMID: 40155699; PMCID: PMC11953376; DOI: 10.1038/s41598-025-95666-8.
Abstract
This study introduces the Medical Vision Attention Generation (MedVAG) model, a novel framework designed to facilitate the automated generation of medical reports. MedVAG integrates Vision Transformer (ViT)-based visual feature extraction and GPT-2 language modeling, enhanced by graph-based feature fusion and multiple attention mechanisms (co-attention, cross-attention, memory-guided attention), to ensure semantic coherence and diagnostic accuracy. Evaluated on IU X-Ray and COV-CTR datasets, the model achieved state-of-the-art performance across natural language generation metrics (BLEU, METEOR, ROUGE, CIDEr) and clinical effectiveness measures. Ablation studies highlighted the critical role of attention mechanisms and feature fusion in aligning visual and textual features. MedVAG demonstrates strong potential as an assistive technology, aiming to support radiologists by reducing workload and enhancing diagnostic accuracy.
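The pairing of a ViT image encoder with a GPT-2 text decoder described in this abstract can be sketched generically with the Hugging Face transformers library. The sketch below is a minimal vision-encoder/text-decoder skeleton under assumed public checkpoints ("google/vit-base-patch16-224-in21k", "gpt2") and a hypothetical input file name; it is not the authors' MedVAG implementation, which additionally uses graph-based feature fusion and co-, cross-, and memory-guided attention.

```python
# Minimal ViT-encoder + GPT-2-decoder skeleton for image-to-text generation
# (generic sketch with assumed public checkpoints; not the MedVAG model itself).
from PIL import Image
from transformers import (VisionEncoderDecoderModel, ViTImageProcessor,
                          GPT2TokenizerFast)

model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "google/vit-base-patch16-224-in21k", "gpt2")
processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

# GPT-2 has no pad token by default; reuse EOS so generation can stop cleanly.
tokenizer.pad_token = tokenizer.eos_token
model.config.decoder_start_token_id = tokenizer.bos_token_id
model.config.pad_token_id = tokenizer.pad_token_id

image = Image.open("chest_xray.png").convert("RGB")  # hypothetical input file
pixel_values = processor(images=image, return_tensors="pt").pixel_values

# Without task-specific fine-tuning the output is not a meaningful report;
# training on paired image-report data (e.g., IU X-Ray) is what a model
# such as MedVAG adds on top of this skeleton.
generated_ids = model.generate(pixel_values, max_new_tokens=60)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```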
Affiliation(s)
- Merve Varol Arısoy
- Bucak Faculty of Computer and Informatics, Information Systems Engineering Department, Burdur Mehmet Akif Ersoy University, Burdur, Turkey
- Ayhan Arısoy
- Bucak Faculty of Computer and Informatics, Information Systems Engineering Department, Burdur Mehmet Akif Ersoy University, Burdur, Turkey
- İlhan Uysal
- Department of Information Systems and Technologies, Burdur Mehmet Akif Ersoy University, Bucak Zeliha Tolunay School of Applied Technology and Business, Burdur, Turkey
6
Kim SH, Schramm S, Adams LC, Braren R, Bressem KK, Keicher M, Platzek PS, Paprottka KJ, Zimmer C, Hedderich DM, Wiestler B. Benchmarking the diagnostic performance of open source LLMs in 1933 Eurorad case reports. NPJ Digit Med 2025;8:97. PMID: 39934372; DOI: 10.1038/s41746-025-01488-3.
Abstract
Recent advancements in large language models (LLMs) have created new ways to support radiological diagnostics. While both open-source and proprietary LLMs can address privacy concerns through local or cloud deployment, open-source models provide advantages in continuity of access, and potentially lower costs. This study evaluated the diagnostic performance of fifteen open-source LLMs and one closed-source LLM (GPT-4o) in 1,933 cases from the Eurorad library. LLMs provided differential diagnoses based on clinical history and imaging findings. Responses were considered correct if the true diagnosis appeared in the top three suggestions. Models were further tested on 60 non-public brain MRI cases from a tertiary hospital to assess generalizability. In both datasets, GPT-4o demonstrated superior performance, closely followed by Llama-3-70B, revealing how open-source LLMs are rapidly closing the gap to proprietary models. Our findings highlight the potential of open-source LLMs as decision support tools for radiological differential diagnosis in challenging, real-world cases.
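The scoring rule described here (a case counts as correct if the true diagnosis appears among the model's top three suggestions) amounts to top-3 accuracy over the case library. Below is a minimal sketch of that scoring logic with hypothetical cases; the naive substring match stands in for whatever matching procedure the benchmark actually used.

```python
# Top-3 accuracy scoring sketch (hypothetical cases; naive substring matching
# stands in for the benchmark's actual adjudication of correct diagnoses).
from typing import Dict, List

def top3_correct(reference: str, suggestions: List[str]) -> bool:
    """True if the reference diagnosis appears in the first three suggestions."""
    ref = reference.lower()
    return any(ref in s.lower() or s.lower() in ref for s in suggestions[:3])

cases: List[Dict] = [
    {"reference": "Multiple sclerosis",
     "suggestions": ["ADEM", "Multiple sclerosis", "CNS lymphoma", "Vasculitis"]},
    {"reference": "Glioblastoma",
     "suggestions": ["Brain metastasis", "Abscess", "Lymphoma"]},
]

accuracy = sum(top3_correct(c["reference"], c["suggestions"]) for c in cases) / len(cases)
print(f"top-3 accuracy: {accuracy:.2%}")  # 50.00% for this toy example
```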
Affiliation(s)
- Su Hwan Kim
- Department of Diagnostic and Interventional Neuroradiology, Klinikum rechts der Isar, School of Medicine and Health, Technical University of Munich, Munich, Germany
- Severin Schramm
- Department of Diagnostic and Interventional Neuroradiology, Klinikum rechts der Isar, School of Medicine and Health, Technical University of Munich, Munich, Germany
- Lisa C Adams
- Department of Diagnostic and Interventional Radiology, Klinikum rechts der Isar, School of Medicine and Health, Technical University of Munich, Munich, Germany
- Rickmer Braren
- Department of Diagnostic and Interventional Radiology, Klinikum rechts der Isar, School of Medicine and Health, Technical University of Munich, Munich, Germany
- Keno K Bressem
- Department of Cardiovascular Radiology and Nuclear Medicine, German Heart Center Munich, School of Medicine and Health, Technical University of Munich, Munich, Germany
- Matthias Keicher
- Computer Aided Medical Procedures, Technical University of Munich, Munich, Germany
- Paul-Sören Platzek
- Department of Diagnostic and Interventional Neuroradiology, Klinikum rechts der Isar, School of Medicine and Health, Technical University of Munich, Munich, Germany
- Karolin Johanna Paprottka
- Department of Diagnostic and Interventional Neuroradiology, Klinikum rechts der Isar, School of Medicine and Health, Technical University of Munich, Munich, Germany
- Claus Zimmer
- Department of Diagnostic and Interventional Neuroradiology, Klinikum rechts der Isar, School of Medicine and Health, Technical University of Munich, Munich, Germany
- Dennis M Hedderich
- Department of Diagnostic and Interventional Neuroradiology, Klinikum rechts der Isar, School of Medicine and Health, Technical University of Munich, Munich, Germany
- Benedikt Wiestler
- Department of Diagnostic and Interventional Neuroradiology, Klinikum rechts der Isar, School of Medicine and Health, Technical University of Munich, Munich, Germany
- AI for Image-Guided Diagnosis and Therapy, School of Medicine and Health, Technical University of Munich, Munich, Germany
7
Zheng A, Long L, Govathson C, Chetty-Makkan C, Morris S, Rech D, Fox MP, Pascoe S. Designing AI-powered healthcare assistants to effectively reach vulnerable populations with health care services: A discrete choice experiment among South African university students. medRxiv [Preprint] 2025:2025.01.30.25321409. PMID: 39974107; PMCID: PMC11838649; DOI: 10.1101/2025.01.30.25321409.
Abstract
Introduction South African young adults are at increased risk for HIV acquisition as well as non-communicable diseases and face significant barriers to accessing healthcare services. The rapid development of artificial intelligence (AI), in particular AI-powered healthcare assistants (AIPHA), presents a unique opportunity to increase access to health information and linkage to healthcare services and providers. While successful implementation and uptake of such tools require understanding user preferences, limited understanding of these preferences exists. We sought to understand what preferences are important to university students in South Africa when engaging with a hypothetical AIPHA to access health information using a discrete choice experiment. Methods We conducted an unlabeled, forced choice discrete choice experiment among adult South African university students through Prolific Academic, an online research platform, in 2024. Each choice option described a hypothetical AIPHA using eight attribute characteristics (cost, confidentiality, security, healthcare topics, language, persona, access, services). Participants were presented with ten choice sets, each comprising two choice options, and asked to choose between the two. A conditional logit model was used to evaluate preferences. Results A total of 300 participants were recruited and enrolled. Most participants were Black, born in South Africa, heterosexual, and working for a wage, with a mean age of 26.5 years (SD: 6.0). Results from the discrete choice experiment identified that language, security, and receiving personally tailored advice were the most important attributes for AIPHA. Participants strongly preferred the ability to communicate with the AIPHA in any South African language of their choosing instead of only English and to receive information about health topics specific to their context, including information on clinics geographically near them. Results were consistent when stratified by sex and socioeconomic status. Conclusions Participants had strong preferences for security and language, which is in line with previous studies where successful uptake and implementation of such health interventions clearly addressed these concerns. These results build the evidence base for how we might engage young adults in healthcare through technology effectively.
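For a forced choice between two profiles per task, the conditional logit model mentioned in the methods reduces to a logistic regression on attribute differences: the probability of choosing option A over B is logistic(beta' (x_A - x_B)) with no intercept, where x_A and x_B are the attribute vectors of the two options and beta the preference weights. The sketch below illustrates that equivalence on fabricated choice data; it is not the study's dataset or its exact estimation code, and the attribute names and weights are assumptions.

```python
# Conditional logit for a two-alternative forced-choice DCE, estimated as a
# no-intercept logistic regression on attribute differences (fabricated data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_tasks = 1000
# Dummy-coded attributes for options A and B
# (e.g., any-language support, high security, tailored advice).
x_a = rng.integers(0, 2, size=(n_tasks, 3)).astype(float)
x_b = rng.integers(0, 2, size=(n_tasks, 3)).astype(float)

true_beta = np.array([1.2, 0.8, 0.5])          # assumed preference weights
utility_diff = (x_a - x_b) @ true_beta
p_choose_a = 1.0 / (1.0 + np.exp(-utility_diff))
y = rng.binomial(1, p_choose_a)                 # 1 = option A chosen

# No intercept: only differences in attributes drive the choice.
model = LogisticRegression(fit_intercept=False, C=1e6)  # ~unpenalized MLE
model.fit(x_a - x_b, y)
print("estimated preference weights:", model.coef_.round(2))
```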
8
Fathi M, Vakili K, Hajibeygi R, Bahrami A, Behzad S, Tafazolimoghadam A, Aghabozorgi H, Eshraghi R, Bhatt V, Gholamrezanezhad A. Cultivating diagnostic clarity: The importance of reporting artificial intelligence confidence levels in radiologic diagnoses. Clin Imaging 2025;117:110356. PMID: 39566394; DOI: 10.1016/j.clinimag.2024.110356.
Abstract
Accurate image interpretation in radiology is essential for the healthcare team to provide optimal patient care. This article discusses the use of artificial intelligence (AI) confidence levels to enhance the accuracy and dependability of radiological diagnoses. Current advances in AI technologies have changed how radiologists and clinicians diagnose pathological conditions such as aneurysms, hemorrhages, pneumothorax, pneumoperitoneum, and particularly fractures. To enhance the utility of these AI models, radiologists need a more comprehensive understanding of the levels of confidence and certainty behind the results these models produce. This allows radiologists to make more informed decisions that have the potential to drastically change a patient's clinical management. Several AI models, especially those utilizing deep learning (DL) with convolutional neural networks (CNNs), have demonstrated significant potential in identifying subtle findings in medical imaging that are often missed by radiologists. Standardized confidence metrics are necessary for AI systems to be relevant and reliable in the clinical setting. Incorporating AI into clinical practice faces certain obstacles, such as the need for clinical validation, concerns regarding the interpretability of AI system results, and confusion and misunderstandings within the medical community. This study emphasizes the importance of AI systems clearly conveying their level of confidence in radiological diagnoses, and it highlights the importance of research to establish AI confidence level metrics that are specific to an anatomical region or lesion type. KEY POINT OF THE VIEW: Accurate fracture diagnosis relies on radiologic certainty, where artificial intelligence (AI), especially convolutional neural networks (CNNs) and deep learning (DL), shows promise in enhancing X-ray interpretation amidst a shortage of radiologists. Overcoming integration challenges through improved AI interpretability and education is crucial for widespread acceptance and better patient outcomes.
Affiliation(s)
- Mobina Fathi
- Advanced Diagnostic and Interventional Radiology Research Center (ADIR), Tehran University of Medical Science, Tehran, Iran; School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Kimia Vakili
- School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Ramtin Hajibeygi
- Advanced Diagnostic and Interventional Radiology Research Center (ADIR), Tehran University of Medical Science, Tehran, Iran; Tehran University of Medical Science (TUMS), School of Medicine, Tehran, Iran
- Ashkan Bahrami
- Faculty of Medicine, Kashan University of Medical Science, Kashan, Iran
- Shima Behzad
- Advanced Diagnostic and Interventional Radiology Research Center (ADIR), Tehran University of Medical Science, Tehran, Iran
- Hadiseh Aghabozorgi
- Student Research Committee, Shahrekord University of Medical Sciences, Shahrekord, Iran
- Reza Eshraghi
- Faculty of Medicine, Kashan University of Medical Science, Kashan, Iran
- Vivek Bhatt
- University of California, Riverside, School of Medicine, Riverside, CA, United States of America
- Ali Gholamrezanezhad
- Keck School of Medicine of University of Southern California, Los Angeles, CA, United States of America; Department of Radiology, Cedars Sinai Hospital, Los Angeles, CA, United States of America
9
Stogiannos N, O'Regan T, Scurr E, Litosseliti L, Pogose M, Harvey H, Kumar A, Malik R, Barnes A, McEntee MF, Malamateniou C. Lessons on AI implementation from senior clinical practitioners: An exploratory qualitative study in medical imaging and radiotherapy in the UK. J Med Imaging Radiat Sci 2025;56:101797. PMID: 39579457; DOI: 10.1016/j.jmir.2024.101797.
Abstract
INTRODUCTION Artificial Intelligence (AI) has the potential to transform medical imaging and radiotherapy; both fields where radiographers' use of AI tools is increasing. This study aimed to explore the views of those professionals who are now using AI tools. METHODS A small-scale exploratory research process was employed, where qualitative data was obtained from five UK-based participants; all professionals working in medical imaging and radiotherapy who use AI in clinical practice. Five semi-structured interviews were conducted online. Verbatim transcription was performed using an open-source automatic speech recognition model. Conceptual content analysis was performed to analyse the data and identify common themes. RESULTS Participants spoke about the possibility of AI deskilling staff and changing their roles, they discussed issues around data protection and data sharing strategies, the important role of effective leadership of AI teams, and the seamless integration into workflows. Participants thought that the benefits of adopting AI were smoother clinical workflows, support for the workforce in decision-making, and enhanced patient safety/care. They also highlighted the need for tailored AI education/training, multidisciplinary teamwork and support. CONCLUSION Participants who are now using AI tools felt that clinical staff should be empowered to support AI implementation by adopting new and clearly defined roles and responsibilities. They suggest that attention to patient care and safety is a key to successful AI adoption. Despite the increasing adoption of AI, participants in the UK described a gap in knowledge with professionals still needing clear guidance, education and training regarding AI in preparation for more widespread adoption.
Affiliation(s)
- Nikolaos Stogiannos
- Department of Midwifery & Radiography, City St George's, University of London, UK; Magnitiki Tomografia Kerkiras, Corfu, Greece
- Tracy O'Regan
- The Society and College of Radiographers, London, UK
- Lia Litosseliti
- School of Health & Psychological Sciences, City St George's, University of London, UK
- Anna Barnes
- King's Technology Evaluation Centre (KiTEC), School of Biomedical Engineering & Imaging Science, King's College London, UK
- Mark F McEntee
- Discipline of Medical Imaging and Radiation Therapy, University College Cork, Ireland
- Christina Malamateniou
- Department of Midwifery & Radiography, City St George's, University of London, UK; The Society and College of Radiographers, London, UK; European Society of Medical Imaging Informatics, Vienna, Austria; European Federation of Radiographer Societies, Cumieira, Portugal
10
Kim B, Ryan K, Kim JP. Assessing the impact of information on patient attitudes toward artificial intelligence-based clinical decision support (AI/CDS): a pilot web-based SMART vignette study. J Med Ethics 2024:jme-2024-110080. PMID: 39667845; DOI: 10.1136/jme-2024-110080.
Abstract
BACKGROUND It is increasingly recognised that the success of artificial intelligence-based clinical decision support (AI/CDS) tools will depend on physician and patient trust, but factors impacting patients' views on clinical care reliant on AI have been less explored. OBJECTIVE This pilot study explores whether, and in what contexts, detail of explanation provided about AI/CDS tools impacts patients' attitudes toward the tools and their clinical care. METHODS We designed a Sequential Multiple Assignment Randomized Trial vignette web-based survey. Participants recruited through Amazon Mechanical Turk were presented with hypothetical vignettes describing health concerns and were sequentially randomised along three factors: (1) the level of detail of explanation regarding an AI/CDS tool; (2) the AI/CDS result; and (3) the physician's level of agreement with the AI/CDS result. We compared mean ratings of comfort and confidence by the level of detail of explanation using t-tests. Regression models were fit to confirm conditional effects of detail of explanation. RESULTS The detail of explanation provided regarding the AI/CDS tools was positively related to respondents' comfort and confidence in the usage of the tools and their perception of the physician's final decision. The effects of detail of explanation on their perception of the physician's final decision were different given the AI/CDS result and the physician's agreement or disagreement with the result. CONCLUSIONS More information provided by physicians regarding the use of AI/CDS tools may improve patient attitudes toward healthcare involving AI/CDS tools in general and in certain contexts of the AI/CDS result and physician agreement.
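The core comparison described in the methods (mean comfort and confidence ratings under less versus more detailed explanation of the AI/CDS tool) is a two-sample t-test. The sketch below uses made-up rating vectors on a 1-5 scale as an assumption-laden illustration of that comparison, not a reproduction of the study's analysis or data.

```python
# Two-sample t-test comparing mean comfort ratings by detail of explanation
# (made-up rating data on a 1-5 scale; illustrative only).
import numpy as np
from scipy import stats

low_detail  = np.array([3, 2, 4, 3, 3, 2, 4, 3, 2, 3])
high_detail = np.array([4, 4, 5, 3, 4, 5, 4, 3, 4, 5])

t_stat, p_value = stats.ttest_ind(high_detail, low_detail, equal_var=False)
print(f"mean(high)={high_detail.mean():.2f}, mean(low)={low_detail.mean():.2f}")
print(f"Welch t={t_stat:.2f}, p={p_value:.3f}")
```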
Affiliation(s)
- Bohye Kim
- Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Stanford, California, USA
- Katie Ryan
- Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Stanford, California, USA
- Jane Paik Kim
- Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Stanford, California, USA
11
Lee L, Salami RK, Martin H, Shantharam L, Thomas K, Ashworth E, Allan E, Yung KW, Pauling C, Leyden D, Arthurs OJ, Shelmerdine SC. "How I would like AI used for my imaging": children and young persons' perspectives. Eur Radiol 2024;34:7751-7764. PMID: 38900281; PMCID: PMC11557655; DOI: 10.1007/s00330-024-10839-9.
Abstract
OBJECTIVES Artificial intelligence (AI) tools are becoming more available in modern healthcare, particularly in radiology, although less attention has been paid to applications for children and young people. In the development of these, it is critical their views are heard. MATERIALS AND METHODS A national, online survey was publicised to UK schools, universities and charity partners encouraging any child or young adult to participate. The survey was "live" for one year (June 2022 to 2023). Questions about views of AI in general, and in specific circumstances (e.g. bone fractures) were asked. RESULTS One hundred and seventy-one eligible responses were received, with a mean age of 19 years (6-23 years) with representation across all 4 UK nations. Most respondents agreed or strongly agreed they wanted to know the accuracy of an AI tool that was being used (122/171, 71.3%), that accuracy was more important than speed (113/171, 66.1%), and that AI should be used with human oversight (110/171, 64.3%). Many respondents (73/171, 42.7%) felt AI would be more accurate at finding problems on bone X-rays than humans, with almost all respondents who had sustained a missed fracture strongly agreeing with that sentiment (12/14, 85.7%). CONCLUSIONS Children and young people in our survey had positive views regarding AI, and felt it should be integrated into modern healthcare, but expressed a preference for a "medical professional in the loop" and accuracy of findings over speed. Key themes regarding information on AI performance and governance were raised and should be considered prior to future AI implementation for paediatric healthcare. CLINICAL RELEVANCE STATEMENT Artificial intelligence (AI) integration into clinical practice must consider all stakeholders, especially paediatric patients who have largely been ignored. Children and young people favour AI involvement with human oversight, seek assurances for safety, accuracy, and clear accountability in case of failures. KEY POINTS Paediatric patient's needs and voices are often overlooked in AI tool design and deployment. Children and young people approved of AI, if paired with human oversight and reliability. Children and young people are stakeholders for developing and deploying AI tools in paediatrics.
Affiliation(s)
- Lauren Lee
- Young Persons Advisory Group (YPAG), Great Ormond Street Hospital for Children, London, WC1H 3JH, UK
- Helena Martin
- Guy's and St Thomas' NHS Foundation Trust, London, UK
- Kate Thomas
- Royal Hospital for Children & Young People, Edinburgh, Scotland, UK
- Emily Ashworth
- St George's Hospital, Blackshaw Road, Tooting, London, UK
- Emma Allan
- Department of Clinical Radiology, Great Ormond Street Hospital for Children, London, WC1H 3JH, UK
- Ka-Wai Yung
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, Charles Bell House, 43-45 Foley Street, London, W1W 7TY, UK
- Cato Pauling
- University College London, Gower Street, London, WC1E 6BT, UK
- Deirdre Leyden
- Young Persons Advisory Group (YPAG), Great Ormond Street Hospital for Children, London, WC1H 3JH, UK
- Owen J Arthurs
- Department of Clinical Radiology, Great Ormond Street Hospital for Children, London, WC1H 3JH, UK
- UCL Great Ormond Street Institute of Child Health, Great Ormond Street Hospital for Children, London, WC1N 1EH, UK
- NIHR Great Ormond Street Hospital Biomedical Research Centre, 30 Guilford Street, Bloomsbury, London, WC1N 1EH, UK
- Susan Cheng Shelmerdine
- Department of Clinical Radiology, Great Ormond Street Hospital for Children, London, WC1H 3JH, UK
- UCL Great Ormond Street Institute of Child Health, Great Ormond Street Hospital for Children, London, WC1N 1EH, UK
- NIHR Great Ormond Street Hospital Biomedical Research Centre, 30 Guilford Street, Bloomsbury, London, WC1N 1EH, UK
12
Tepe M, Emekli E. Decoding medical jargon: The use of AI language models (ChatGPT-4, BARD, Microsoft Copilot) in radiology reports. Patient Educ Couns 2024;126:108307. PMID: 38743965; DOI: 10.1016/j.pec.2024.108307.
Abstract
OBJECTIVE Evaluate Artificial Intelligence (AI) language models (ChatGPT-4, BARD, Microsoft Copilot) in simplifying radiology reports, assessing readability, understandability, actionability, and urgency classification. METHODS This study evaluated the effectiveness of these AI models in translating radiology reports into patient-friendly language and providing understandable and actionable suggestions and urgency classifications. Thirty radiology reports were processed using AI tools, and their outputs were assessed for readability (Flesch Reading Ease, Flesch-Kincaid Grade Level), understandability (PEMAT), and the accuracy of urgency classification. ANOVA and Chi-Square tests were performed to compare the models' performances. RESULTS All three AI models successfully transformed medical jargon into more accessible language, with BARD showing superior readability scores. In terms of understandability, all models achieved scores above 70%, with ChatGPT-4 and BARD leading (p < 0.001, both). However, the AI models varied in accuracy of urgency recommendations, with no significant statistical difference (p = 0.284). CONCLUSION AI language models have proven effective in simplifying radiology reports, thereby potentially improving patient comprehension and engagement in their health decisions. However, their accuracy in assessing the urgency of medical conditions based on radiology reports suggests a need for further refinement. PRACTICE IMPLICATIONS Incorporating AI in radiology communication can empower patients, but further development is crucial for comprehensive and actionable patient support.
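The readability metrics named in the methods follow standard formulas: Flesch Reading Ease = 206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words), and Flesch-Kincaid Grade Level = 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59. The sketch below applies them with a crude vowel-group syllable counter and invented example sentences; it is an approximation for illustration, not the study's scoring tool.

```python
# Flesch Reading Ease and Flesch-Kincaid Grade Level with a crude
# vowel-group syllable estimate (approximation; real tools count syllables better).
import re

def count_syllables(word: str) -> int:
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> tuple:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences           # words per sentence
    spw = syllables / len(words)           # syllables per word
    fre = 206.835 - 1.015 * wps - 84.6 * spw
    fkgl = 0.39 * wps + 11.8 * spw - 15.59
    return fre, fkgl

original = "There is a nonspecific hypodense hepatic lesion without arterial enhancement."
simplified = "There is a small spot on the liver. It does not light up with dye."
for label, text in [("original", original), ("simplified", simplified)]:
    fre, fkgl = readability(text)
    print(f"{label}: Flesch Reading Ease={fre:.1f}, Grade Level={fkgl:.1f}")
```

Higher Reading Ease and lower Grade Level for the simplified text is the pattern the study looks for in AI-rewritten reports.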
Affiliation(s)
- Murat Tepe
- Department of Radiology, King's College Hospital London, Dubai, United Arab Emirates
- Emre Emekli
- Department of Radiology, Eskişehir Osmangazi University, Eskişehir, Turkiye; Department of Medical Education, Gazi University, Ankara, Turkiye
13
Frost EK, Bosward R, Aquino YSJ, Braunack-Mayer A, Carter SM. Facilitating public involvement in research about healthcare AI: A scoping review of empirical methods. Int J Med Inform 2024;186:105417. PMID: 38564959; DOI: 10.1016/j.ijmedinf.2024.105417.
Abstract
OBJECTIVE With the recent increase in research into public views on healthcare artificial intelligence (HCAI), the objective of this review is to examine the methods of empirical studies on public views on HCAI. We map how studies provided participants with information about HCAI, and we examine the extent to which studies framed publics as active contributors to HCAI governance. MATERIALS AND METHODS We searched 5 academic databases and Google Advanced for empirical studies investigating public views on HCAI. We extracted information including study aims, research instruments, and recommendations. RESULTS Sixty-two studies were included. Most were quantitative (N = 42). Most (N = 47) reported providing participants with background information about HCAI. Despite this, studies often reported participants' lack of prior knowledge about HCAI as a limitation. Over three quarters (N = 48) of the studies made recommendations that envisaged public views being used to guide governance of AI. DISCUSSION Provision of background information is an important component of facilitating research with publics on HCAI. The high proportion of studies reporting participants' lack of knowledge about HCAI as a limitation reflects the need for more guidance on how information should be presented. A minority of studies adopted technocratic positions that construed publics as passive beneficiaries of AI, rather than as active stakeholders in HCAI design and implementation. CONCLUSION This review draws attention to how public roles in HCAI governance are constructed in empirical studies. To facilitate active participation, we recommend that research with publics on HCAI consider methodological designs that expose participants to diverse information sources.
Affiliation(s)
- Emma Kellie Frost
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Australia
- Rebecca Bosward
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Australia
- Yves Saint James Aquino
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Australia
- Annette Braunack-Mayer
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Australia
- Stacy M Carter
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Australia
14
Godoy Junior CA, Miele F, Mäkitie L, Fiorenzato E, Koivu M, Bakker LJ, Groot CUD, Redekop WK, van Deen WK. Attitudes Toward the Adoption of Remote Patient Monitoring and Artificial Intelligence in Parkinson's Disease Management: Perspectives of Patients and Neurologists. Patient 2024;17:275-285. PMID: 38182935; DOI: 10.1007/s40271-023-00669-0.
Abstract
OBJECTIVE Early detection of Parkinson's Disease (PD) progression remains a challenge. As remote patient monitoring solutions (RMS) and artificial intelligence (AI) technologies emerge as potential aids for PD management, there's a gap in understanding how end users view these technologies. This research explores patient and neurologist perspectives on AI-assisted RMS. METHODS Qualitative interviews and focus-groups were conducted with 27 persons with PD (PwPD) and six neurologists from Finland and Italy. The discussions covered traditional disease progression detection and the prospects of integrating AI and RMS. Sessions were recorded, transcribed, and underwent thematic analysis. RESULTS The study involved five individual interviews (four Italian participants and one Finnish) and six focus-groups (four Finnish and two Italian) with PwPD. Additionally, six neurologists (three from each country) were interviewed. Both cohorts voiced frustration with current monitoring methods due to their limited real-time detection capabilities. However, there was enthusiasm for AI-assisted RMS, contingent upon its value addition, user-friendliness, and preservation of the doctor-patient bond. While some PwPD had privacy and trust concerns, the anticipated advantages in symptom regulation seemed to outweigh these apprehensions. DISCUSSION The study reveals a willingness among PwPD and neurologists to integrate RMS and AI into PD management. Widespread adoption requires these technologies to provide tangible clinical benefits, remain user-friendly, and uphold trust within the physician-patient relationship. CONCLUSION This study offers insights into the potential drivers and barriers for adopting AI-assisted RMS in PD care. Recognizing these factors is pivotal for the successful integration of these digital health tools in PD management.
Affiliation(s)
- Carlos Antonio Godoy Junior
- Erasmus School of Health Policy and Management, Erasmus University Rotterdam, Burgemeester Oudlaan 50, 3062 PA, Rotterdam, Netherlands
- Francesco Miele
- Department of Political and Social Sciences, University of Trieste, Trieste, Italy
- Laura Mäkitie
- Department of Neurology, Brain Center, Helsinki University Hospital, Helsinki, Finland
- Department of Clinical Neurosciences, University of Helsinki, Helsinki, Finland
- Maija Koivu
- Department of Neurology, Brain Center, Helsinki University Hospital, Helsinki, Finland
- Department of Clinical Neurosciences, University of Helsinki, Helsinki, Finland
- Lytske Jantien Bakker
- Erasmus School of Health Policy and Management, Erasmus University Rotterdam, Burgemeester Oudlaan 50, 3062 PA, Rotterdam, Netherlands
- Carin Uyl-de Groot
- Erasmus School of Health Policy and Management, Erasmus University Rotterdam, Burgemeester Oudlaan 50, 3062 PA, Rotterdam, Netherlands
- William Ken Redekop
- Erasmus School of Health Policy and Management, Erasmus University Rotterdam, Burgemeester Oudlaan 50, 3062 PA, Rotterdam, Netherlands
- Welmoed Kirsten van Deen
- Erasmus School of Health Policy and Management, Erasmus University Rotterdam, Burgemeester Oudlaan 50, 3062 PA, Rotterdam, Netherlands
15
Lastrucci A, Wandael Y, Ricci R, Maccioni G, Giansanti D. The Integration of Deep Learning in Radiotherapy: Exploring Challenges, Opportunities, and Future Directions through an Umbrella Review. Diagnostics (Basel) 2024;14:939. PMID: 38732351; PMCID: PMC11083654; DOI: 10.3390/diagnostics14090939.
Abstract
This study investigates, through a narrative review, the transformative impact of deep learning (DL) in the field of radiotherapy, particularly in light of the accelerated developments prompted by the COVID-19 pandemic. The proposed approach was based on an umbrella review following a standard narrative checklist and a qualification process. The selection process identified 19 systematic review studies. Through an analysis of current research, the study highlights the revolutionary potential of DL algorithms in optimizing treatment planning, image analysis, and patient outcome prediction in radiotherapy. It underscores the necessity of further exploration into specific research areas to unlock the full capabilities of DL technology. Moreover, the study emphasizes the intricate interplay between digital radiology and radiotherapy, revealing how advancements in one field can significantly influence the other. This interdependence is crucial for addressing complex challenges and advancing the integration of cutting-edge technologies into clinical practice. Collaborative efforts among researchers, clinicians, and regulatory bodies are deemed essential to effectively navigate the evolving landscape of DL in radiotherapy. By fostering interdisciplinary collaborations and conducting thorough investigations, stakeholders can fully leverage the transformative power of DL to enhance patient care and refine therapeutic strategies. Ultimately, this promises to usher in a new era of personalized and optimized radiotherapy treatment for improved patient outcomes.
Affiliation(s)
- Andrea Lastrucci
- Department of Allied Health Professions, Azienda Ospedaliero-Universitaria Careggi, 50134 Florence, Italy
- Yannick Wandael
- Department of Allied Health Professions, Azienda Ospedaliero-Universitaria Careggi, 50134 Florence, Italy
- Renzo Ricci
- Department of Allied Health Professions, Azienda Ospedaliero-Universitaria Careggi, 50134 Florence, Italy
16
Lepri G, Oddi F, Gulino RA, Giansanti D. Beyond the Clinic Walls: Examining Radiology Technicians' Experiences in Home-Based Radiography. Healthcare (Basel) 2024;12:732. PMID: 38610154; PMCID: PMC11011261; DOI: 10.3390/healthcare12070732.
Abstract
In recent years, the landscape of diagnostic imaging has undergone a significant transformation with the emergence of home radiology, challenging the traditional paradigm. This shift, bringing diagnostic imaging directly to patients, has gained momentum and has been further accelerated by the global COVID-19 pandemic, highlighting the increasing importance and convenience of decentralized healthcare services. This study aims to offer a nuanced understanding of the attitudes and experiences influencing the integration of in-home radiography into contemporary healthcare practices. The research methodology involves a survey administered through Computer-Aided Web Interviewing (CAWI) tools, enabling real-time engagement with a diverse cohort of medical radiology technicians in the health domain. A second CAWI tool is submitted to experts to assess their feedback on the methodology. The survey explores key themes, including perceived advantages and challenges associated with domiciliary imaging, its impact on patient care, and the technological intricacies specific to conducting radiologic procedures outside the conventional clinical environment. Findings from a sample of 26 medical radiology technicians (drawn from a larger pool of 186 respondents) highlight a spectrum of opinions and constructive feedback. Enthusiasm is evident for the potential of domiciliary imaging to enhance patient convenience and provide a more patient-centric approach to healthcare. Simultaneously, this study suggests areas of intervention to improve the diffusion of home-based radiology. The methodology based on CAWI tools proves instrumental in the efficiency and depth of data collection, as evaluated by 16 experts from diverse professional backgrounds. The dynamic and responsive nature of this approach allows for a more allocated exploration of technicians' opinions, contributing to a comprehensive understanding of the evolving landscape of medical imaging services. Emphasis is placed on the need for national and international initiatives in the field, supported by scientific societies, to further explore the evolving landscape of teleradiology and the integration of artificial intelligence in radiology. This study encourages expansion involving other key figures in this practice, including, naturally, medical radiologists, general practitioners, medical physicists, and other stakeholders.
Affiliation(s)
- Graziano Lepri
- Azienda Unità Sanitaria Locale Umbria 1, Via Guerriero Guerra 21, 06127 Perugia, Italy
- Francesco Oddi
- Facoltà di Ingegneria, Università di Tor Vergata, Via del Politecnico 1, 00133 Rome, Italy
- Rosario Alfio Gulino
- Facoltà di Ingegneria, Università di Tor Vergata, Via del Politecnico 1, 00133 Rome, Italy
- Daniele Giansanti
- Centro Nazionale TISP, Istituto Superiore di Sanità, Viale Regina Elena 299, 00161 Rome, Italy
17
Subramanian HV, Canfield C, Shank DB. Designing explainable AI to improve human-AI team performance: A medical stakeholder-driven scoping review. Artif Intell Med 2024;149:102780. PMID: 38462282; DOI: 10.1016/j.artmed.2024.102780.
Abstract
The rise of complex AI systems in healthcare and other sectors has led to a growing area of research called Explainable AI (XAI) designed to increase transparency. In this area, quantitative and qualitative studies focus on improving user trust and task performance by providing system- and prediction-level XAI features. We analyze stakeholder engagement events (interviews and workshops) on the use of AI for kidney transplantation. From this we identify themes which we use to frame a scoping literature review on current XAI features. The stakeholder engagement process lasted over nine months covering three stakeholder group's workflows, determining where AI could intervene and assessing a mock XAI decision support system. Based on the stakeholder engagement, we identify four major themes relevant to designing XAI systems - 1) use of AI predictions, 2) information included in AI predictions, 3) personalization of AI predictions for individual differences, and 4) customizing AI predictions for specific cases. Using these themes, our scoping literature review finds that providing AI predictions before, during, or after decision-making could be beneficial depending on the complexity of the stakeholder's task. Additionally, expert stakeholders like surgeons prefer minimal to no XAI features, AI prediction, and uncertainty estimates for easy use cases. However, almost all stakeholders prefer to have optional XAI features to review when needed, especially in hard-to-predict cases. The literature also suggests that providing both system- and prediction-level information is necessary to build the user's mental model of the system appropriately. Although XAI features improve users' trust in the system, human-AI team performance is not always enhanced. Overall, stakeholders prefer to have agency over the XAI interface to control the level of information based on their needs and task complexity. We conclude with suggestions for future research, especially on customizing XAI features based on preferences and tasks.
Affiliation(s)
- Harishankar V Subramanian
- Engineering Management & Systems Engineering, Missouri University of Science and Technology, 600 W 14th Street, Rolla, MO 65409, United States of America
- Casey Canfield
- Engineering Management & Systems Engineering, Missouri University of Science and Technology, 600 W 14th Street, Rolla, MO 65409, United States of America
- Daniel B Shank
- Psychological Science, Missouri University of Science and Technology, 500 W 14th Street, Rolla, MO 65409, United States of America
18
Shevtsova D, Ahmed A, Boot IWA, Sanges C, Hudecek M, Jacobs JJL, Hort S, Vrijhoef HJM. Trust in and Acceptance of Artificial Intelligence Applications in Medicine: Mixed Methods Study. JMIR Hum Factors 2024;11:e47031. PMID: 38231544; PMCID: PMC10831593; DOI: 10.2196/47031.
Abstract
BACKGROUND Artificial intelligence (AI)-powered technologies are being increasingly used in almost all fields, including medicine. However, to successfully implement medical AI applications, ensuring trust and acceptance toward such technologies is crucial for their successful spread and timely adoption worldwide. Although AI applications in medicine provide advantages to the current health care system, there are also various associated challenges regarding, for instance, data privacy, accountability, and equity and fairness, which could hinder medical AI application implementation. OBJECTIVE The aim of this study was to identify factors related to trust in and acceptance of novel AI-powered medical technologies and to assess the relevance of those factors among relevant stakeholders. METHODS This study used a mixed methods design. First, a rapid review of the existing literature was conducted, aiming to identify various factors related to trust in and acceptance of novel AI applications in medicine. Next, an electronic survey including the rapid review-derived factors was disseminated among key stakeholder groups. Participants (N=22) were asked to assess on a 5-point Likert scale (1=irrelevant to 5=relevant) to what extent they thought the various factors (N=19) were relevant to trust in and acceptance of novel AI applications in medicine. RESULTS The rapid review (N=32 papers) yielded 110 factors related to trust and 77 factors related to acceptance toward AI technology in medicine. Closely related factors were assigned to 1 of the 19 overarching umbrella factors, which were further grouped into 4 categories: human-related (ie, the type of institution AI professionals originate from), technology-related (ie, the explainability and transparency of AI application processes and outcomes), ethical and legal (ie, data use transparency), and additional factors (ie, AI applications being environment friendly). The categorized 19 umbrella factors were presented as survey statements, which were evaluated by relevant stakeholders. Survey participants (N=22) represented researchers (n=18, 82%), technology providers (n=5, 23%), hospital staff (n=3, 14%), and policy makers (n=3, 14%). Of the 19 factors, 16 (84%) human-related, technology-related, ethical and legal, and additional factors were considered to be of high relevance to trust in and acceptance of novel AI applications in medicine. The patient's gender, age, and education level were found to be of low relevance (3/19, 16%). CONCLUSIONS The results of this study could help the implementers of medical AI applications to understand what drives trust and acceptance toward AI-powered technologies among key stakeholders in medicine. Consequently, this would allow the implementers to identify strategies that facilitate trust in and acceptance of medical AI applications among key stakeholders and potential users.
Collapse
Affiliation(s)
- Daria Shevtsova
- Panaxea bv, Den Bosch, Netherlands
- Vrije Universiteit Amsterdam, Amsterdam, Netherlands
| | - Simon Hort
- Fraunhofer Institute for Production Technology, Aachen, Germany
| |
Collapse
|
19
|
Wicki S, Clark IC, Amann M, Christ SM, Schettle M, Hertler C, Theile G, Blum D. Acceptance of Digital Health Technologies in Palliative Care Patients. Palliat Med Rep 2024; 5:34-42. [PMID: 38249831 PMCID: PMC10797306 DOI: 10.1089/pmr.2023.0062] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/28/2023] [Indexed: 01/23/2024] Open
Abstract
Background Digital health technologies have the potential to transform palliative care (PC) services. The global aging population poses unique challenges for PC, which digital health technologies may help overcome. Evaluation of attitudes and perceptions, combined with quantification of prior use habits, favors an understanding of psychological barriers to PC patients' acceptance of digital health technologies, including artificial intelligence (AI). Objectives We aimed to evaluate the attitudes and perceptions of PC patients regarding a broad range of digital health technologies used in their routine monitoring and treatment and to identify barriers to use. Methods We used a 39-item questionnaire to evaluate acceptance and use of smartphone-based electronic patient-reported outcome measures, wearables, AI, data privacy, and virtual reality (VR) in 29 female and male PC inpatients. Results A majority of patients indicated an interest in (69.0%) and a positive attitude toward (75.9%) digital health technologies. Nearly all (93.1%) patients believed that digital health technologies will become more important in medicine in the future. Most patients would consider using their smartphone (79.3%) or wearable (69.0%) more often for their health. The most feasible technologies were smartphones, wearables, and VR. Barriers to acceptance included unfamiliarity, data security, errors in data interpretation, and loss of personal interaction through AI. Conclusion In this patient survey, acceptance of new technologies in a PC patient population was high, encouraging their use also at the end of life.
Collapse
Affiliation(s)
- Stefan Wicki
- Department of Radiation Oncology, Competence Center Palliative Care, University Hospital and University of Zurich, Zurich, Switzerland
| | - Ian C. Clark
- Department of Radiation Oncology, Competence Center Palliative Care, University Hospital and University of Zurich, Zurich, Switzerland
- Department of Radiation Oncology, University Hospital and University of Zurich, Zurich, Switzerland
| | - Manuel Amann
- Department of Radiation Oncology, Competence Center Palliative Care, University Hospital and University of Zurich, Zurich, Switzerland
| | - Sebastian M. Christ
- Department of Radiation Oncology, Competence Center Palliative Care, University Hospital and University of Zurich, Zurich, Switzerland
- Department of Radiation Oncology, University Hospital and University of Zurich, Zurich, Switzerland
| | - Markus Schettle
- Department of Radiation Oncology, Competence Center Palliative Care, University Hospital and University of Zurich, Zurich, Switzerland
| | - Caroline Hertler
- Department of Radiation Oncology, Competence Center Palliative Care, University Hospital and University of Zurich, Zurich, Switzerland
| | - Gudrun Theile
- Department of Radiation Oncology, Competence Center Palliative Care, University Hospital and University of Zurich, Zurich, Switzerland
| | - David Blum
- Department of Radiation Oncology, Competence Center Palliative Care, University Hospital and University of Zurich, Zurich, Switzerland
| |
Collapse
|
20
|
Chae A, Yao MS, Sagreiya H, Goldberg AD, Chatterjee N, MacLean MT, Duda J, Elahi A, Borthakur A, Ritchie MD, Rader D, Kahn CE, Witschey WR, Gee JC. Strategies for Implementing Machine Learning Algorithms in the Clinical Practice of Radiology. Radiology 2024; 310:e223170. [PMID: 38259208 PMCID: PMC10831483 DOI: 10.1148/radiol.223170] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2022] [Revised: 08/24/2023] [Accepted: 08/29/2023] [Indexed: 01/24/2024]
Abstract
Despite recent advancements in machine learning (ML) applications in health care, there have been few benefits and improvements to clinical medicine in the hospital setting. To facilitate clinical adaptation of methods in ML, this review proposes a standardized framework for the step-by-step implementation of artificial intelligence into the clinical practice of radiology that focuses on three key components: problem identification, stakeholder alignment, and pipeline integration. A review of the recent literature and empirical evidence in radiologic imaging applications justifies this approach and offers a discussion on structuring implementation efforts to help other hospital practices leverage ML to improve patient care. Clinical trial registration no. 04242667 © RSNA, 2024 Supplemental material is available for this article.
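The three implementation components named in this review (problem identification, stakeholder alignment, and pipeline integration) lend themselves to a simple go/no-go tracking sketch; the individual checklist items below are illustrative assumptions, not the authors' framework details.

# Minimal sketch: track readiness of the three components as a go/no-go checklist.
# The items listed under each component are hypothetical examples.
checklist = {
    "problem identification": {"clinical need defined": True, "baseline workflow measured": True},
    "stakeholder alignment": {"radiologists consulted": True, "IT and governance sign-off": False},
    "pipeline integration": {"PACS interface tested": False, "monitoring plan in place": False},
}

for component, items in checklist.items():
    done = sum(items.values())
    print(f"{component}: {done}/{len(items)} items complete")

ready = all(all(items.values()) for items in checklist.values())
print("ready for clinical deployment:", ready)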
Collapse
Affiliation(s)
- Hersh Sagreiya, Ari D. Goldberg, Neil Chatterjee, Matthew T. MacLean, Jeffrey Duda, Ameena Elahi, Arijitt Borthakur, Marylyn D. Ritchie, Daniel Rader, Charles E. Kahn
- From the Departments of Bioengineering (M.S.Y.), Radiology (H.S., N.C., M.T.M., J.D., A.B., C.E.K., W.R.W., J.C.G.), Genetics (M.D.R.), and Medicine (D.R.), Perelman School of Medicine (A.C., M.S.Y., H.S., A.B., C.E.K., W.R.W., J.C.G.), University of Pennsylvania, 3400 Civic Center Blvd, Philadelphia, PA 19104; Department of Radiology, Loyola University Medical Center, Maywood, Ill (A.D.G.); Department of Information Services, University of Pennsylvania, Philadelphia, Pa (A.E.); and Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia, Pa (A.B.)
| |
Collapse
|
21
|
He X, Zheng X, Ding H. Existing Barriers Faced by and Future Design Recommendations for Direct-to-Consumer Health Care Artificial Intelligence Apps: Scoping Review. J Med Internet Res 2023; 25:e50342. [PMID: 38109173 PMCID: PMC10758939 DOI: 10.2196/50342] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2023] [Revised: 09/20/2023] [Accepted: 11/28/2023] [Indexed: 12/19/2023] Open
Abstract
BACKGROUND Direct-to-consumer (DTC) health care artificial intelligence (AI) apps hold the potential to bridge the spatial and temporal disparities in health care resources, but they also come with individual and societal risks due to AI errors. Furthermore, the manner in which consumers interact directly with health care AI is reshaping traditional physician-patient relationships. However, the academic community lacks a systematic comprehension of the research overview for such apps. OBJECTIVE This paper systematically delineated and analyzed the characteristics of included studies, identified existing barriers and design recommendations for DTC health care AI apps mentioned in the literature and also provided a reference for future design and development. METHODS This scoping review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews guidelines and was conducted according to Arksey and O'Malley's 5-stage framework. Peer-reviewed papers on DTC health care AI apps published until March 27, 2023, in Web of Science, Scopus, the ACM Digital Library, IEEE Xplore, PubMed, and Google Scholar were included. The papers were analyzed using Braun and Clarke's reflective thematic analysis approach. RESULTS Of the 2898 papers retrieved, 32 (1.1%) covering this emerging field were included. The included papers were recently published (2018-2023), and most (23/32, 72%) were from developed countries. The medical field was mostly general practice (8/32, 25%). In terms of users and functionalities, some apps were designed solely for single-consumer groups (24/32, 75%), offering disease diagnosis (14/32, 44%), health self-management (8/32, 25%), and health care information inquiry (4/32, 13%). Other apps connected to physicians (5/32, 16%), family members (1/32, 3%), nursing staff (1/32, 3%), and health care departments (2/32, 6%), generally to alert these groups to abnormal conditions of consumer users. In addition, 8 barriers and 6 design recommendations related to DTC health care AI apps were identified. Some more subtle obstacles that are particularly worth noting and corresponding design recommendations in consumer-facing health care AI systems, including enhancing human-centered explainability, establishing calibrated trust and addressing overtrust, demonstrating empathy in AI, improving the specialization of consumer-grade products, and expanding the diversity of the test population, were further discussed. CONCLUSIONS The booming DTC health care AI apps present both risks and opportunities, which highlights the need to explore their current status. This paper systematically summarized and sorted the characteristics of the included studies, identified existing barriers faced by, and made future design recommendations for such apps. To the best of our knowledge, this is the first study to systematically summarize and categorize academic research on these apps. Future studies conducting the design and development of such systems could refer to the results of this study, which is crucial to improve the health care services provided by DTC health care AI apps.
Collapse
Affiliation(s)
- Xin He
- School of Mechanical Science and Engineering, Huazhong University of Science and Technology, Wuhan, China
| | - Xi Zheng
- School of Mechanical Science and Engineering, Huazhong University of Science and Technology, Wuhan, China
| | - Huiyuan Ding
- School of Mechanical Science and Engineering, Huazhong University of Science and Technology, Wuhan, China
| |
Collapse
|
22
|
Ho V, Brown Johnson C, Ghanzouri I, Amal S, Asch S, Ross E. Physician- and Patient-Elicited Barriers and Facilitators to Implementation of a Machine Learning-Based Screening Tool for Peripheral Arterial Disease: Preimplementation Study With Physician and Patient Stakeholders. JMIR Cardio 2023; 7:e44732. [PMID: 37930755 PMCID: PMC10660241 DOI: 10.2196/44732] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2022] [Revised: 07/23/2023] [Accepted: 08/21/2023] [Indexed: 11/07/2023] Open
Abstract
BACKGROUND Peripheral arterial disease (PAD) is underdiagnosed, partially due to a high prevalence of atypical symptoms and a lack of physician and patient awareness. Implementing clinical decision support tools powered by machine learning algorithms may help physicians identify high-risk patients for diagnostic workup. OBJECTIVE This study aims to evaluate barriers and facilitators to the implementation of a novel machine learning-based screening tool for PAD among physician and patient stakeholders using the Consolidated Framework for Implementation Research (CFIR). METHODS We performed semistructured interviews with physicians and patients from the Stanford University Department of Primary Care and Population Health, Division of Cardiology, and Division of Vascular Medicine. Participants answered questions regarding their perceptions toward machine learning and clinical decision support for PAD detection. Rapid thematic analysis was performed using templates incorporating codes from CFIR constructs. RESULTS A total of 12 physicians (6 primary care physicians and 6 cardiovascular specialists) and 14 patients were interviewed. Barriers to implementation arose from 6 CFIR constructs: complexity, evidence strength and quality, relative priority, external policies and incentives, knowledge and beliefs about intervention, and individual identification with the organization. Facilitators arose from 5 CFIR constructs: intervention source, relative advantage, learning climate, patient needs and resources, and knowledge and beliefs about intervention. Physicians felt that a machine learning-powered diagnostic tool for PAD would improve patient care but cited limited time and authority in asking patients to undergo additional screening procedures. Patients were interested in having their physicians use this tool but raised concerns about such technologies replacing human decision-making. CONCLUSIONS Patient- and physician-reported barriers toward the implementation of a machine learning-powered PAD diagnostic tool followed four interdependent themes: (1) low familiarity or urgency in detecting PAD; (2) concerns regarding the reliability of machine learning; (3) differential perceptions of responsibility for PAD care among primary care versus specialty physicians; and (4) patient preference for physicians to remain primary interpreters of health care data. Facilitators followed two interdependent themes: (1) enthusiasm for clinical use of the predictive model and (2) willingness to incorporate machine learning into clinical care. Implementation of machine learning-powered diagnostic tools for PAD should leverage provider support while simultaneously educating stakeholders on the importance of early PAD diagnosis. High predictive validity is necessary for machine learning models but not sufficient for implementation.
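The type of screening workflow discussed in these interviews, where a model score flags high-risk patients for physician review rather than acting autonomously, can be sketched as follows; the threshold, field names, and risk values are illustrative assumptions and do not describe the Stanford tool itself.

from dataclasses import dataclass

@dataclass
class ScreeningResult:
    patient_id: str
    pad_risk: float           # model-estimated probability of peripheral arterial disease
    flagged_for_review: bool  # routed to a physician, who remains the final decision maker

RISK_THRESHOLD = 0.30  # assumed operating point; a real tool would calibrate this clinically

def screen(patient_id: str, pad_risk: float) -> ScreeningResult:
    """Flag high-risk patients for further diagnostic workup (e.g., ankle-brachial index)."""
    return ScreeningResult(patient_id, pad_risk, flagged_for_review=pad_risk >= RISK_THRESHOLD)

print(screen("patient-001", 0.42))  # flagged_for_review=True
print(screen("patient-002", 0.07))  # flagged_for_review=False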
Collapse
Affiliation(s)
- Vy Ho
- Division of Vascular Surgery, Department of Surgery, Stanford University School of Medicine, Stanford, CA, United States
| | - Cati Brown Johnson
- Division of Primary Care and Population Health, Department of Medicine, Stanford University School of Medicine, Stanford, CA, United States
| | - Ilies Ghanzouri
- Division of Vascular Surgery, Department of Surgery, Stanford University School of Medicine, Stanford, CA, United States
| | - Saeed Amal
- College of Engineering, Northeastern University, Boston, MA, United States
| | - Steven Asch
- Division of Primary Care and Population Health, Department of Medicine, Stanford University School of Medicine, Stanford, CA, United States
- Center for Innovation to Implementation, Veterans Affairs Palo Alto Healthcare System, Palo Alto, CA, United States
| | - Elsie Ross
- Division of Vascular Surgery, Department of Surgery, Stanford University School of Medicine, Stanford, CA, United States
| |
Collapse
|
23
|
Cresswell K, Rigby M, Magrabi F, Scott P, Brender J, Craven CK, Wong ZSY, Kukhareva P, Ammenwerth E, Georgiou A, Medlock S, De Keizer NF, Nykänen P, Prgomet M, Williams R. The need to strengthen the evaluation of the impact of Artificial Intelligence-based decision support systems on healthcare provision. Health Policy 2023; 136:104889. [PMID: 37579545 DOI: 10.1016/j.healthpol.2023.104889] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2023] [Accepted: 08/04/2023] [Indexed: 08/16/2023]
Abstract
Despite the renewed interest in Artificial Intelligence-based clinical decision support systems (AI-CDS), there is still a lack of empirical evidence supporting their effectiveness. This underscores the need for rigorous and continuous evaluation and monitoring of processes and outcomes associated with the introduction of health information technology. We illustrate how the emergence of AI-CDS has helped to bring to the fore the critical importance of evaluation principles and action regarding all health information technology applications, as these hitherto have received limited attention. Key aspects include assessment of design, implementation and adoption contexts; ensuring systems support and optimise human performance (which in turn requires understanding clinical and system logics); and ensuring that design of systems prioritises ethics, equity, effectiveness, and outcomes. Going forward, information technology strategy, implementation and assessment need to actively incorporate these dimensions. International policy makers, regulators and strategic decision makers in implementing organisations therefore need to be cognisant of these aspects and incorporate them in decision-making and in prioritising investment. In particular, the emphasis needs to be on stronger and more evidence-based evaluation surrounding system limitations and risks as well as optimisation of outcomes, whilst ensuring learning and contextual review. Otherwise, there is a risk that applications will be sub-optimally embodied in health systems with unintended consequences and without yielding intended benefits.
Collapse
Affiliation(s)
- Kathrin Cresswell
- The University of Edinburgh, Usher Institute, Edinburgh, United Kingdom.
| | - Michael Rigby
- Keele University, School of Social, Political and Global Studies and School of Primary, Community and Social Care, Keele, United Kingdom
| | - Farah Magrabi
- Macquarie University, Australian Institute of Health Innovation, Sydney, Australia
| | - Philip Scott
- University of Wales Trinity Saint David, Swansea, United Kingdom
| | - Jytte Brender
- Department of Health Science and Technology, Aalborg University, Aalborg, Denmark
| | - Catherine K Craven
- University of Texas Health Science Center at San Antonio, San Antonio, TX, United States
| | - Zoie Shui-Yee Wong
- St. Luke's International University, Graduate School of Public Health, Tokyo, Japan
| | - Polina Kukhareva
- Department of Biomedical Informatics, University of Utah, United States of America
| | - Elske Ammenwerth
- UMIT TIROL, Private University for Health Sciences and Health Informatics, Institute of Medical Informatics, Hall in Tirol, Austria
| | - Andrew Georgiou
- Macquarie University, Australian Institute of Health Innovation, Sydney, Australia
| | - Stephanie Medlock
- Amsterdam UMC location University of Amsterdam, Department of Medical Informatics, Meibergdreef 9, Amsterdam, the Netherlands; Amsterdam Public Health research institute, Digital Health and Quality of Care Amsterdam, the Netherlands
| | - Nicolette F De Keizer
- Amsterdam UMC location University of Amsterdam, Department of Medical Informatics, Meibergdreef 9, Amsterdam, the Netherlands; Amsterdam Public Health research institute, Digital Health and Quality of Care Amsterdam, the Netherlands
| | - Pirkko Nykänen
- Tampere University, Faculty for Information Technology and Communication Sciences, Finland
| | - Mirela Prgomet
- Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia
| | - Robin Williams
- The University of Edinburgh, Institute for the Study of Science, Technology and Innovation, Edinburgh, United Kingdom
| |
Collapse
|
24
|
Borondy Kitts A. Patient Perspectives on Artificial Intelligence in Radiology. J Am Coll Radiol 2023; 20:863-867. [PMID: 37453601 DOI: 10.1016/j.jacr.2023.05.017] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2023] [Revised: 04/24/2023] [Accepted: 05/03/2023] [Indexed: 07/18/2023]
Abstract
There are two major areas for patient engagement in radiology artificial intelligence (AI). One is in the sharing of data for AI development; the second is the use of AI in patient care. In general, individuals support sharing deidentified data if used for the common good, to help others with similar health conditions, or for research. However, there is concern with risk to privacy including reidentification and use for other than intended purposes. Lack of trust is mentioned as a barrier for data sharing. Individuals want to be involved in the data-sharing process. In the use of AI in medical care, patients generally support AI as an assist to the radiologist but lack trust in unsupervised AI. Patients worry about liability in case of bad outcomes. Patients are concerned about loss of the human connection and the loss of empathy during a vulnerable time in their lives. Patients expressed concern about risk of discrimination due to bias in AI algorithms. Building trust in AI requires transparency, explainability, security, and privacy protection. Radiologists can take action to prepare their patients to become more trusting of AI. Developing and implementing data-sharing agreements allows patients to voluntarily help in the algorithm development process. Developing AI disclosure guidelines and having AI use disclosure discussions with patients will help them understand the use of AI in their care. As the use of AI increases, there is an opportunity for radiologists to develop and maintain close relationships with their patients and to become more involved in their care.
Collapse
|
25
|
Asan O, Choi E, Wang X. Artificial Intelligence-Based Consumer Health Informatics Application: Scoping Review. J Med Internet Res 2023; 25:e47260. [PMID: 37647122 PMCID: PMC10500367 DOI: 10.2196/47260] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2023] [Revised: 07/02/2023] [Accepted: 07/18/2023] [Indexed: 09/01/2023] Open
Abstract
BACKGROUND There is no doubt that the recent surge in artificial intelligence (AI) research will change the trajectory of next-generation health care, making it more approachable and accessible to patients. Therefore, it is critical to research patient perceptions and outcomes because this trend will allow patients to be the primary consumers of health technology and decision makers for their own health. OBJECTIVE This study aimed to review and analyze papers on AI-based consumer health informatics (CHI) for successful future patient-centered care. METHODS We searched for all peer-reviewed papers in PubMed published in English before July 2022. Research on an AI-based CHI tool or system that reports patient outcomes or perceptions was identified for the scoping review. RESULTS We identified 20 papers that met our inclusion criteria. The eligible studies were summarized and discussed with respect to the role of the AI-based CHI system, patient outcomes, and patient perceptions. The AI-based CHI systems identified included systems in mobile health (13/20, 65%), robotics (5/20, 25%), and telemedicine (2/20, 10%). All the systems aimed to provide patients with personalized health care. Patient outcomes and perceptions across various clinical disciplines were discussed, demonstrating the potential of an AI-based CHI system to benefit patients. CONCLUSIONS This scoping review showed the trend in AI-based CHI systems and their impact on patient outcomes as well as patients' perceptions of these systems. Future studies should also explore how clinicians and health care professionals perceive these consumer-based systems and integrate them into the overall workflow.
Collapse
Affiliation(s)
- Onur Asan
- School of Systems and Enterprises, Stevens Institute of Technology, Hoboken, NJ, United States
| | - Euiji Choi
- Department of Computer Science, Stevens Institute of Technology, Hoboken, NJ, United States
| | - Xiaomei Wang
- Department of Industrial Engineering, University of Louisville, Louisville, KY, United States
| |
Collapse
|
26
|
Rockwell HD, Cyphers ED, Makary MS, Keller EJ. Ethical Considerations for Artificial Intelligence in Interventional Radiology: Balancing Innovation and Patient Care. Semin Intervent Radiol 2023; 40:323-326. [PMID: 37484438 PMCID: PMC10359128 DOI: 10.1055/s-0043-1769905] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 07/25/2023]
Affiliation(s)
- Helena D. Rockwell
- School of Medicine, University of California, San Diego, La Jolla, California
| | - Eric D. Cyphers
- Department of Bioethics, Columbia University, New York, New York
- Philadelphia College of Osteopathic Medicine, Philadelphia, Pennsylvania
| | - Mina S. Makary
- Division of Interventional Radiology, Department of Radiology, The Ohio State University, Columbus, Ohio
| | - Eric J. Keller
- Division of Interventional Radiology, Department of Radiology, Stanford University Medical Center, Stanford, California
| |
Collapse
|
27
|
Bahakeem BH, Alobaidi SF, Alzahrani AS, Alhasawi R, Alzahrani A, Alqahtani W, Alhashmi Alamer L, Bin Laswad BM, Al Shanbari N. The General Population's Perspectives on Implementation of Artificial Intelligence in Radiology in the Western Region of Saudi Arabia. Cureus 2023; 15:e37391. [PMID: 37182053 PMCID: PMC10171828 DOI: 10.7759/cureus.37391] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 04/10/2023] [Indexed: 05/16/2023] Open
Abstract
Background Artificial intelligence (AI) is a broad spectrum of computer-executed operations that mimic the human intellect. It is expected to improve healthcare practice in general, and radiology in particular, by enhancing image acquisition, image analysis, and processing speed. Despite the rapid development of AI systems, successful application in radiology requires analysis of social factors such as the public's perspectives toward the technology. Objectives The current study aims to investigate the general population's perspectives on AI implementation in radiology in the Western region of Saudi Arabia. Methods A cross-sectional study was conducted between November 2022 and July 2023 using a self-administered online survey distributed via social media platforms. A convenience sampling technique was used to recruit the study participants. After obtaining Institutional Review Board approval, data were collected from citizens and residents of the western region of Saudi Arabia aged 18 years or older. Results A total of 1,024 participants were included in the present study, with a mean respondent age of 29.6 ± 11.3 years. Of them, 49.9% (511) were men and 50.1% (513) were women. The overall mean score of the first four domains among our participants was 3.93 out of 5.00; higher mean scores indicate more negative views of AI in radiology, except for the fifth domain. Respondents had limited trust in AI utilization in radiology, as evidenced by their overall distrust and accountability domain mean score of 3.52 out of 5. The majority of respondents agreed that it is essential to understand every step of the diagnostic process, and the mean score for the procedural knowledge domain was 4.34 out of 5. The mean score for the personal interaction domain was 4.31 out of 5, indicating that the participants agreed on the value of direct communication between the patient and the radiologist for discussing test results and asking questions. Our data show that people think AI is more effective than human doctors in making accurate diagnoses and decreasing patient wait times, with an overall efficiency domain mean score of 3.56 out of 5. Finally, the fifth domain, "being informed," had a mean score of 3.91 out of 5. Conclusion The application of AI in radiologic assessment and interpretation is generally viewed negatively. Even though people think AI is more efficient and accurate at diagnosing than humans, they still think that computers will never be able to match a specialist doctor's years of training.
Collapse
Affiliation(s)
- Basem H Bahakeem
- Department of Medical Imaging, College of Medicine, Umm Al-Qura University, Makkah, SAU
| | - Sultan F Alobaidi
- Department of Medicine and Surgery, College of Medicine, Umm Al-Qura University, Makkah, SAU
| | - Amjad S Alzahrani
- Department of Medicine and Surgery, College of Medicine, Umm Al-Qura University, Makkah, SAU
| | - Roudin Alhasawi
- Department of Medicine and Surgery, College of Medicine, Umm Al-Qura University, Makkah, SAU
| | - Abdulkarem Alzahrani
- Department of Medicine and Surgery, College of Medicine, Umm Al-Qura University, Makkah, SAU
| | - Wed Alqahtani
- Department of Medicine and Surgery, College of Medicine, Umm Al-Qura University, Makkah, SAU
| | - Lujain Alhashmi Alamer
- Department of Medicine and Surgery, College of Medicine, Umm Al-Qura University, Makkah, SAU
| | - Bassam M Bin Laswad
- Department of Medicine and Surgery, College of Medicine, Umm Al-Qura University, Makkah, SAU
| | - Nasser Al Shanbari
- Department of Medicine and Surgery, College of Medicine, Umm Al-Qura University, Makkah, SAU
| |
Collapse
|
28
|
Morrow E, Zidaru T, Ross F, Mason C, Patel KD, Ream M, Stockley R. Artificial intelligence technologies and compassion in healthcare: A systematic scoping review. Front Psychol 2023; 13:971044. [PMID: 36733854 PMCID: PMC9887144 DOI: 10.3389/fpsyg.2022.971044] [Citation(s) in RCA: 46] [Impact Index Per Article: 23.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2022] [Accepted: 12/05/2022] [Indexed: 01/18/2023] Open
Abstract
Background Advances in artificial intelligence (AI) technologies, together with the availability of big data in society, creates uncertainties about how these developments will affect healthcare systems worldwide. Compassion is essential for high-quality healthcare and research shows how prosocial caring behaviors benefit human health and societies. However, the possible association between AI technologies and compassion is under conceptualized and underexplored. Objectives The aim of this scoping review is to provide a comprehensive depth and a balanced perspective of the emerging topic of AI technologies and compassion, to inform future research and practice. The review questions were: How is compassion discussed in relation to AI technologies in healthcare? How are AI technologies being used to enhance compassion in healthcare? What are the gaps in current knowledge and unexplored potential? What are the key areas where AI technologies could support compassion in healthcare? Materials and methods A systematic scoping review following five steps of Joanna Briggs Institute methodology. Presentation of the scoping review conforms with PRISMA-ScR (Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews). Eligibility criteria were defined according to 3 concept constructs (AI technologies, compassion, healthcare) developed from the literature and informed by medical subject headings (MeSH) and key words for the electronic searches. Sources of evidence were Web of Science and PubMed databases, articles published in English language 2011-2022. Articles were screened by title/abstract using inclusion/exclusion criteria. Data extracted (author, date of publication, type of article, aim/context of healthcare, key relevant findings, country) was charted using data tables. Thematic analysis used an inductive-deductive approach to generate code categories from the review questions and the data. A multidisciplinary team assessed themes for resonance and relevance to research and practice. Results Searches identified 3,124 articles. A total of 197 were included after screening. The number of articles has increased over 10 years (2011, n = 1 to 2021, n = 47 and from Jan-Aug 2022 n = 35 articles). Overarching themes related to the review questions were: (1) Developments and debates (7 themes) Concerns about AI ethics, healthcare jobs, and loss of empathy; Human-centered design of AI technologies for healthcare; Optimistic speculation AI technologies will address care gaps; Interrogation of what it means to be human and to care; Recognition of future potential for patient monitoring, virtual proximity, and access to healthcare; Calls for curricula development and healthcare professional education; Implementation of AI applications to enhance health and wellbeing of the healthcare workforce. (2) How AI technologies enhance compassion (10 themes) Empathetic awareness; Empathetic response and relational behavior; Communication skills; Health coaching; Therapeutic interventions; Moral development learning; Clinical knowledge and clinical assessment; Healthcare quality assessment; Therapeutic bond and therapeutic alliance; Providing health information and advice. (3) Gaps in knowledge (4 themes) Educational effectiveness of AI-assisted learning; Patient diversity and AI technologies; Implementation of AI technologies in education and practice settings; Safety and clinical effectiveness of AI technologies. 
(4) Key areas for development (3 themes) Enriching education, learning and clinical practice; Extending healing spaces; Enhancing healing relationships. Conclusion There is an association between AI technologies and compassion in healthcare and interest in this association has grown internationally over the last decade. In a range of healthcare contexts, AI technologies are being used to enhance empathetic awareness; empathetic response and relational behavior; communication skills; health coaching; therapeutic interventions; moral development learning; clinical knowledge and clinical assessment; healthcare quality assessment; therapeutic bond and therapeutic alliance; and to provide health information and advice. The findings inform a reconceptualization of compassion as a human-AI system of intelligent caring comprising six elements: (1) Awareness of suffering (e.g., pain, distress, risk, disadvantage); (2) Understanding the suffering (significance, context, rights, responsibilities etc.); (3) Connecting with the suffering (e.g., verbal, physical, signs and symbols); (4) Making a judgment about the suffering (the need to act); (5) Responding with an intention to alleviate the suffering; (6) Attention to the effect and outcomes of the response. These elements can operate at an individual (human or machine) and collective systems level (healthcare organizations or systems) as a cyclical system to alleviate different types of suffering. New and novel approaches to human-AI intelligent caring could enrich education, learning, and clinical practice; extend healing spaces; and enhance healing relationships. Implications In a complex adaptive system such as healthcare, human-AI intelligent caring will need to be implemented, not as an ideology, but through strategic choices, incentives, regulation, professional education, and training, as well as through joined up thinking about human-AI intelligent caring. Research funders can encourage research and development into the topic of AI technologies and compassion as a system of human-AI intelligent caring. Educators, technologists, and health professionals can inform themselves about the system of human-AI intelligent caring.
Collapse
Affiliation(s)
| | - Teodor Zidaru
- Department of Anthropology, London School of Economics and Political Sciences, London, United Kingdom
| | - Fiona Ross
- Faculty of Health, Science, Social Care and Education, Kingston University London, London, United Kingdom
| | - Cindy Mason
- Artificial Intelligence Researcher (Independent), Palo Alto, CA, United States
| | | | - Melissa Ream
- Kent Surrey Sussex Academic Health Science Network (AHSN) and the National AHSN Network Artificial Intelligence (AI) Initiative, Surrey, United Kingdom
| | - Rich Stockley
- Head of Research and Engagement, Surrey Heartlands Health and Care Partnership, Surrey, United Kingdom
| |
Collapse
|
29
|
Wu C, Xu H, Bai D, Chen X, Gao J, Jiang X. Public perceptions on the application of artificial intelligence in healthcare: a qualitative meta-synthesis. BMJ Open 2023; 13:e066322. [PMID: 36599634 PMCID: PMC9815015 DOI: 10.1136/bmjopen-2022-066322] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/07/2022] [Accepted: 12/05/2022] [Indexed: 01/05/2023] Open
Abstract
OBJECTIVES Medical artificial intelligence (AI) has been widely applied in the clinical field due to its convenience and innovation. However, several policy and regulatory issues, such as credibility, sharing of responsibility and ethics, have raised concerns about the use of AI. It is therefore necessary to understand the general public's views on medical AI. Here, a meta-synthesis was conducted to analyse and summarise the public's understanding of the application of AI in the healthcare field, to provide recommendations for future use and management of AI in medical practice. DESIGN This was a meta-synthesis of qualitative studies. METHOD A search was performed on the following databases to identify studies published in English and Chinese: MEDLINE, CINAHL, Web of Science, Cochrane Library, Embase, PsycINFO, CNKI, Wanfang and VIP. The search was conducted from database inception to 25 December 2021. The meta-aggregation approach of JBI was used to summarise findings from qualitative studies, focusing on the public's perception of the application of AI in healthcare. RESULTS Of the 5128 studies screened, 12 met the inclusion criteria and were incorporated into the analysis. Three synthesised findings formed the basis of our conclusions: advantages of medical AI from the public's perspective, ethical and legal concerns about medical AI from the public's perspective, and public suggestions on the application of AI in the medical field. CONCLUSION Results showed that the public acknowledges the unique advantages and convenience of medical AI. Meanwhile, several concerns about the application of medical AI were observed, most of which involve ethical and legal issues. The standard application and reasonable supervision of medical AI are key to ensuring its effective utilisation. Based on the public's perspective, this analysis provides insights and suggestions for health managers on how to implement and apply medical AI smoothly, while ensuring safety in healthcare practice. PROSPERO REGISTRATION NUMBER CRD42022315033.
Collapse
Affiliation(s)
- Chenxi Wu
- West China School of Nursing/West China Hospital, Sichuan University, Chengdu, Sichuan, China
- School of Nursing, Chengdu University of Traditional Chinese Medicine, Chengdu, Sichuan, China
| | - Huiqiong Xu
- West China School of Nursing, Sichuan University / Abdominal Oncology Ward, Cancer Center, West China Hospital, Sichuan University, Chengdu, Sichuan, People's Republic of China
| | - Dingxi Bai
- School of Nursing, Chengdu University of Traditional Chinese Medicine, Chengdu, Sichuan, China
| | - Xinyu Chen
- School of Nursing, Chengdu University of Traditional Chinese Medicine, Chengdu, Sichuan, China
| | - Jing Gao
- School of Nursing, Chengdu University of Traditional Chinese Medicine, Chengdu, Sichuan, China
| | - Xiaolian Jiang
- West China School of Nursing/West China Hospital, Sichuan University, Chengdu, Sichuan, China
| |
Collapse
|
30
|
Lechien JR, Rameau A, De Marrez LG, Le Bosse G, Negro K, Sebestyen A, Baudouin R, Saussez S, Hans S. Usefulness, acceptation and feasibility of electronic medical history tool in reflux disease. Eur Arch Otorhinolaryngol 2023; 280:259-267. [PMID: 35763082 DOI: 10.1007/s00405-022-07520-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2022] [Accepted: 06/19/2022] [Indexed: 01/07/2023]
Abstract
OBJECTIVES To investigate the usefulness, feasibility, and patient satisfaction of an electronic pre-consultation medical history tool (EPMH) in the laryngopharyngeal reflux (LPR) work-up. METHODS Seventy-five patients with LPR were invited to complete an electronic medical history assessment prior to the laryngology consultation. The EPMH collected the following parameters: demographic and epidemiological data, medication, medical and surgical histories, diet habits, stress, and symptom findings. Stress and symptoms were assessed with the perceived stress scale and the reflux symptom score. Duration of consultation, acceptance, and patient satisfaction (feasibility, usefulness, effectiveness, understanding of questions) were evaluated through a 9-item patient-reported outcome questionnaire. RESULTS Seventy patients completed the evaluation (93% participation rate). The mean age of the cohort was 51.2 ± 15.6 years. There were 35 females and 35 males. Patients who refused to participate (N = 5) were > 65 years old. The consultation duration was significantly shorter in patients who used the EPMH (11.3 ± 2.7 min) compared with a control group (18.1 ± 5.1 min; p = 0.001). Ninety percent of patients were satisfied with the EPMH's ease of use and usefulness, while 97.1% thought that the EPMH may improve disease management. Patients would recommend a similar approach for otolaryngological or other specialty consultations in 98.6% and 92.8% of cases, respectively. CONCLUSION The use of the EPMH is associated with adequate usefulness, feasibility, and satisfaction outcomes in patients with LPR. This software is a preliminary step in the development of an AI-based diagnostic decision support tool to help laryngologists in their daily practice. Future randomized controlled studies are needed to investigate the gain of similar approaches over the traditional consultation format.
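The consultation-time comparison above (11.3 ± 2.7 min with the tool versus 18.1 ± 5.1 min in the control group) can be checked approximately from the summary statistics with a Welch t-statistic; the abstract does not report the control-group size, so the value of 70 used here is an assumption.

import math

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Welch's t-statistic and approximate degrees of freedom from summary statistics."""
    se1, se2 = sd1**2 / n1, sd2**2 / n2
    t = (mean1 - mean2) / math.sqrt(se1 + se2)
    df = (se1 + se2)**2 / (se1**2 / (n1 - 1) + se2**2 / (n2 - 1))
    return t, df

# EPMH group values are taken from the abstract; the control-group n of 70 is an assumption.
t, df = welch_t(11.3, 2.7, 70, 18.1, 5.1, 70)
print(f"t = {t:.2f}, df = {df:.1f}")  # strongly negative t, consistent with shorter consultations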
Collapse
Affiliation(s)
- Jerome R Lechien
- Department of Otolaryngology, Elsan Hospital, Paris, France.
- Department of Otolaryngology-Head and Neck Surgery, Foch Hospital, School of Medicine, University Paris Saclay, Worth street, 40, 92150, Paris, Suresnes, France.
- Department of Otolaryngology-Head and Neck Surgery, CHU Saint-Pierre, Brussels, Belgium.
- Department of Human Anatomy and Experimental Oncology, Faculty of Medicine, UMONS Research Institute for Health Sciences and Technology, University of Mons (UMons), Mons, Belgium.
| | - Anaïs Rameau
- Department of Otolaryngology-Head and Neck Surgery, Sean Parker Institute for the Voice, Weill Cornell Medicine, New York, NY, USA
| | - Lisa G De Marrez
- Department of Otolaryngology-Head and Neck Surgery, Foch Hospital, School of Medicine, University Paris Saclay, Worth street, 40, 92150, Paris, Suresnes, France
| | - Gautier Le Bosse
- Department of Otolaryngology-Head and Neck Surgery, Foch Hospital, School of Medicine, University Paris Saclay, Worth street, 40, 92150, Paris, Suresnes, France
- Department of Artificial Intelligence Applied to Medical Structure, Special School of Mechanic and Electricity (ESME) Sudria, Paris, France
| | - Karina Negro
- Department of Otolaryngology-Head and Neck Surgery, Foch Hospital, School of Medicine, University Paris Saclay, Worth street, 40, 92150, Paris, Suresnes, France
- Department of Artificial Intelligence Applied to Medical Structure, Special School of Mechanic and Electricity (ESME) Sudria, Paris, France
| | - Andra Sebestyen
- Department of Otolaryngology-Head and Neck Surgery, Foch Hospital, School of Medicine, University Paris Saclay, Worth street, 40, 92150, Paris, Suresnes, France
| | - Robin Baudouin
- Department of Otolaryngology-Head and Neck Surgery, Foch Hospital, School of Medicine, University Paris Saclay, Worth street, 40, 92150, Paris, Suresnes, France
| | - Sven Saussez
- Department of Otolaryngology-Head and Neck Surgery, CHU Saint-Pierre, Brussels, Belgium
- Department of Human Anatomy and Experimental Oncology, Faculty of Medicine, UMONS Research Institute for Health Sciences and Technology, University of Mons (UMons), Mons, Belgium
| | - Stéphane Hans
- Department of Otolaryngology-Head and Neck Surgery, Foch Hospital, School of Medicine, University Paris Saclay, Worth street, 40, 92150, Paris, Suresnes, France
| |
Collapse
|
31
|
Cellina M, Cè M, Khenkina N, Sinichich P, Cervelli M, Poggi V, Boemi S, Ierardi AM, Carrafiello G. Artificial Intelligence in the Era of Precision Oncological Imaging. Technol Cancer Res Treat 2022; 21:15330338221141793. [PMID: 36426565 PMCID: PMC9703524 DOI: 10.1177/15330338221141793] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022] Open
Abstract
The rapid-paced development and adaptability of artificial intelligence algorithms have secured their almost ubiquitous presence in the field of oncological imaging. Artificial intelligence models have been created for a variety of tasks, including risk stratification, automated detection and segmentation of lesions, characterization, grading and staging, prediction of prognosis, and treatment response. Soon, artificial intelligence could become an essential part of every step of oncological workup and patient management. The integration of neural networks and deep learning into radiological artificial intelligence algorithms allows imaging features otherwise inaccessible to human operators to be extracted and paves the way to truly personalized management of oncological patients. Although a significant proportion of currently available artificial intelligence solutions belong to basic and translational cancer imaging research, their progressive transfer to clinical routine is imminent, contributing to the development of a personalized approach in oncology. We thereby review the main applications of artificial intelligence in oncological imaging, describe examples of their successful integration into research and clinical practice, and highlight the challenges and future perspectives that will shape the field of oncological radiology.
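One of the tasks listed in this review, automated lesion segmentation, is commonly evaluated with an overlap metric such as the Dice similarity coefficient; the toy masks below are for illustration only and are not data from the review.

import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks (1 = lesion, 0 = background)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

pred  = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])
truth = np.array([[0, 1, 1], [0, 0, 0], [0, 0, 0]])
print(f"Dice = {dice(pred, truth):.2f}")  # 0.80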
Collapse
Affiliation(s)
- Michaela Cellina
- Radiology Department, Fatebenefratelli Hospital, ASST Fatebenefratelli Sacco, Piazza Principessa Clotilde 3, 20121 Milano, Italy.
| | - Maurizio Cè
- Postgraduate School in Radiodiagnostics, Università degli Studi di Milano, Milan, Italy
| | - Natallia Khenkina
- Postgraduate School in Radiodiagnostics, Università degli Studi di Milano, Milan, Italy
| | - Polina Sinichich
- Postgraduate School in Radiodiagnostics, Università degli Studi di Milano, Milan, Italy
| | - Marco Cervelli
- Postgraduate School in Radiodiagnostics, Università degli Studi di Milano, Milan, Italy
| | - Vittoria Poggi
- Postgraduate School in Radiodiagnostics, Università degli Studi di Milano, Milan, Italy
| | - Sara Boemi
- Postgraduate School in Radiodiagnostics, Università degli Studi di Milano, Milan, Italy
| | | | - Gianpaolo Carrafiello
- Postgraduate School in Radiodiagnostics, Università degli Studi di Milano, Milan, Italy
- Radiology Department, Fondazione IRCCS Cà Granda, Milan, Italy
| |
Collapse
|
32
|
Artificial Intelligence Confirming Treatment Success: The Role of Gender- and Age-Specific Scales in Performance Evaluation. Plast Reconstr Surg 2022; 150:34S-40S. [PMID: 36170434 PMCID: PMC9512241 DOI: 10.1097/prs.0000000000009671] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
In plastic surgery and cosmetic dermatology, photographic data are an invaluable element of research and clinical practice. Additionally, the use of before and after images is a standard documentation method for procedures, and these images are particularly useful in consultations for effective communication with the patient. An artificial intelligence (AI)-based approach has been proven to have significant results in medical dermatology, plastic surgery, and antiaging procedures in recent years, with applications ranging from skin cancer screening to 3D face reconstructions, the prediction of biological age and perceived age. The increasing use of AI and computer vision methods is due to their noninvasive nature and their potential to provide remote diagnostics. This is especially helpful in instances where traveling to a physical office is complicated, as we have experienced in recent years with the global coronavirus pandemic. However, one question remains: how should the results of AI-based analysis be presented to enable personalization? In this paper, the author investigates the benefit of using gender- and age-specific scales to present skin parameter scores calculated using AI-based systems when analyzing image data.
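The presentation question raised here, reporting an AI-derived skin-parameter score against a gender- and age-specific reference scale rather than as a raw number, can be sketched as below; the reference means and standard deviations are invented placeholders, not values from the paper.

from statistics import NormalDist

# Hypothetical reference norms: (mean, standard deviation) of a skin-parameter score
# for each (gender, age band). Real norms would come from a reference population.
NORMS = {
    ("female", "30-39"): (62.0, 8.0),
    ("male", "30-39"): (58.0, 9.0),
}

def percentile(score: float, gender: str, age_band: str) -> float:
    """Percentile of a raw score within its gender- and age-matched reference distribution."""
    mu, sigma = NORMS[(gender, age_band)]
    return 100 * NormalDist(mu, sigma).cdf(score)

print(f"{percentile(70.0, 'female', '30-39'):.0f}th percentile among women aged 30-39")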
Collapse
|
33
|
Giansanti D. Comment on Patel, B.; Makaryus, A.N. Artificial Intelligence Advances in the World of Cardiovascular Imaging. Healthcare 2022, 10, 154. Healthcare (Basel) 2022; 10:727. [PMID: 35455904 PMCID: PMC9032641 DOI: 10.3390/healthcare10040727] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2022] [Revised: 04/01/2022] [Accepted: 04/07/2022] [Indexed: 02/05/2023] Open
Abstract
Regarding Dr. Makaryus's interesting review study [...].
Collapse
Affiliation(s)
- Daniele Giansanti
- Centro Nazionale Tecnologie Innovative in Sanità Pubblica, Istituto Superiore di Sanità, 00161 Rome, Italy
| |
Collapse
|
34
|
Giansanti D, Di Basilio F. The Artificial Intelligence in Digital Radiology: Part 1: The Challenges, Acceptance and Consensus. Healthcare (Basel) 2022; 10:509. [PMID: 35326987 PMCID: PMC8949694 DOI: 10.3390/healthcare10030509] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2021] [Revised: 03/03/2022] [Accepted: 03/08/2022] [Indexed: 12/27/2022] Open
Abstract
Artificial intelligence is developing rapidly in the world of digital radiology, also thanks to the boost given to the research sector by the COVID-19 pandemic. In the last two years, a growing number of studies have focused on both the challenges and the acceptance and consensus surrounding artificial intelligence in this field. Challenges and acceptance and consensus are two strategic aspects in the development and integration of technologies in the health domain. This study conducted two parallel narrative reviews to take stock of the ongoing challenges and of the initiatives addressing acceptance and consensus in this area. The methodology of the review was based on (I) a search of PubMed and Scopus and (II) an eligibility assessment using parameters scored on five levels. The results (a) highlight and categorize the important challenges currently in place and (b) illustrate the different types of studies conducted through original questionnaires. For future questionnaire-based research, the study suggests better calibration, inclusion of the current challenges, and validation and administration paths at an international level.
Collapse
|
35
|
Di Basilio F, Esposito G, Monoscalco L, Giansanti D. The Artificial Intelligence in Digital Radiology: Part 2: Towards an Investigation of Acceptance and Consensus on the Insiders. Healthcare (Basel) 2022; 10:153. [PMID: 35052316 PMCID: PMC8775988 DOI: 10.3390/healthcare10010153] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2021] [Revised: 12/19/2021] [Accepted: 01/10/2022] [Indexed: 02/04/2023] Open
Abstract
Background. The study concerns the introduction of artificial intelligence into digital radiology. Scientific interest in acceptance and consensus studies in this area, involving both insiders and the public, is growing, but it is based largely on surveys focused on a single professional group. Purpose. The goal of the study is to investigate simultaneously the acceptance and consensus of the three key professional figures working in this field of application: (1) medical specialists in diagnostic imaging, the medical specialists (MSs); (2) experts in physical imaging processes, the medical physicists (MPs); and (3) AI designers, the specialists in applied sciences (SASs). Methods. Participants (MSs = 92: 48 males/44 females, mean age 37.9; MPs = 91: 43 males/48 females, mean age 36.1; SASs = 90: 47 males/43 females, mean age 37.3) were recruited on the basis of their specific training. An electronic survey was designed and administered to the participants, with questions ranging from training and background to the different applications of AI and the settings in which it is used. Results. Overall, the three professional groups show (a) a high and encouraging degree of agreement on the introduction of AI in both imaging and non-imaging applications, whether standalone or delivered via mHealth/eHealth, and (b) differing degrees of consensus on AI use depending on their training background. Conclusions. The study highlights the value of focusing on all three key professional groups and of investigation schemes that address a wide range of issues. It also suggests the importance of varying the methods of administration to improve participation, and the need to continue these investigations through both federated and targeted initiatives.
Collapse
Affiliation(s)
- Francesco Di Basilio
- Facoltà di Medicina e Psicologia, Sapienza University, Piazzale Aldo Moro, 00185 Rome, Italy; (F.D.B.); (G.E.)
| | - Gianluca Esposito
- Facoltà di Medicina e Psicologia, Sapienza University, Piazzale Aldo Moro, 00185 Rome, Italy; (F.D.B.); (G.E.)
| | - Lisa Monoscalco
- Faculty of Engineering, Tor Vergata University, 00133 Rome, Italy;
| | | |
Collapse
|
36
|
Monoscalco L, Simeoni R, Maccioni G, Giansanti D. Information Security in Medical Robotics: A Survey on the Level of Training, Awareness and Use of the Physiotherapist. Healthcare (Basel) 2022; 10:159. [PMID: 35052322 PMCID: PMC8775601 DOI: 10.3390/healthcare10010159] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2021] [Revised: 01/03/2022] [Accepted: 01/06/2022] [Indexed: 01/27/2023] Open
Abstract
Cybersecurity is becoming an increasingly important aspect to investigate for the adoption and use of care robots, in terms of both patients' safety and the availability, integrity, and privacy of their data. This study focuses on opinions about the relevance of cybersecurity, and the related skills, among physiotherapists involved in robot-assisted rehabilitation and assistance. The goal was to investigate awareness among these insiders of several facets of cybersecurity concerning human-robot interaction. We designed an electronic questionnaire and administered it to a relevant sample of physiotherapists. The questionnaire collected data on: (i) the use of robots and its relationship with cybersecurity in the context of physiotherapy; (ii) the insiders' training in cybersecurity and robotics; (iii) the insiders' self-assessment of cybersecurity and robotics in selected usage scenarios; and (iv) their experiences of cyber-attacks in this area and proposals for improvement. Besides contributing specific statistics, the study highlights the importance both of acculturation processes in this field and of survey-based monitoring initiatives. It offers concrete suggestions for continuing these investigations within the scientific societies operating in rehabilitation and assistance robotics, and it shows the need to stimulate similar initiatives involving insiders in other sectors of medical robotics (robotic surgery, care and socially assistive robots, rehabilitation systems, training for health and care workers).
Collapse
Affiliation(s)
- Lisa Monoscalco
- Faculty of Engineering, Tor Vergata University, Via Cracovia, 00133 Rome, Italy;
| | - Rossella Simeoni
- Facoltà di Medicina e Chirurgia, Università Cattolica del Sacro Cuore, Largo Francesco Vito, 1, 00168 Rome, Italy;
| | | | | |
Collapse
|
37
|
Chew HSJ, Achananuparp P. Perceptions and Needs of Artificial Intelligence in Health Care to Increase Adoption: Scoping Review. J Med Internet Res 2022; 24:e32939. [PMID: 35029538 PMCID: PMC8800095 DOI: 10.2196/32939] [Citation(s) in RCA: 56] [Impact Index Per Article: 18.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2021] [Revised: 11/08/2021] [Accepted: 12/03/2021] [Indexed: 01/20/2023] Open
Abstract
BACKGROUND Artificial intelligence (AI) has the potential to improve the efficiency and effectiveness of health care service delivery. However, the perceptions and needs of such systems remain elusive, hindering efforts to promote AI adoption in health care. OBJECTIVE This study aims to provide an overview of the perceptions and needs of AI to increase its adoption in health care. METHODS A systematic scoping review was conducted according to the 5-stage framework by Arksey and O'Malley. Nine databases (ACM Library, CINAHL, Cochrane Central, Embase, IEEE Xplore, PsycINFO, PubMed, Scopus, and Web of Science) were searched for articles describing the perceptions and needs of AI in health care, published from inception until June 21, 2021. Articles that were not specific to AI, not research studies, and not written in English were omitted. RESULTS Of the 3666 articles retrieved, 26 (0.71%) were eligible and included in this review. The mean age of the participants ranged from 30 to 72.6 years, the proportion of men ranged from 0% to 73.4%, and the sample sizes for primary studies ranged from 11 to 2780. The perceptions and needs of various populations in the use of AI were identified for general, primary, and community health care; chronic disease self-management and self-diagnosis; mental health; and diagnostic procedures. The use of AI was perceived to be positive because of its availability, ease of use, and potential to improve efficiency and reduce the cost of health care service delivery. However, concerns were raised regarding the lack of trust in data privacy, patient safety, technological maturity, and the possibility of full automation. Suggestions for improving the adoption of AI in health care were highlighted: enhancing personalization and customizability; enhancing empathy and personification of AI-enabled chatbots and avatars; enhancing user experience, design, and interconnectedness with other devices; and educating the public on AI capabilities. Several corresponding mitigation strategies were also identified in this study. CONCLUSIONS Perceptions and needs regarding the use of AI in health care are crucial to improving its adoption by various stakeholders. Future studies and implementations should consider the points highlighted in this study to enhance the acceptability and adoption of AI in health care. This would facilitate an increase in the effectiveness and efficiency of health care service delivery to improve patient outcomes and satisfaction.
Collapse
Affiliation(s)
- Han Shi Jocelyn Chew
- Alice Lee Centre for Nursing Studies, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
| | - Palakorn Achananuparp
- Living Analytics Research Centre, Singapore Management University, Singapore, Singapore
| |
Collapse
|
38
|
Fritsch SJ, Blankenheim A, Wahl A, Hetfeld P, Maassen O, Deffge S, Kunze J, Rossaint R, Riedel M, Marx G, Bickenbach J. Attitudes and perception of artificial intelligence in healthcare: A cross-sectional survey among patients. Digit Health 2022; 8:20552076221116772. [PMID: 35983102 PMCID: PMC9380417 DOI: 10.1177/20552076221116772] [Citation(s) in RCA: 52] [Impact Index Per Article: 17.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2022] [Accepted: 07/13/2022] [Indexed: 12/23/2022] Open
Abstract
Objective Attitudes about the use of artificial intelligence in healthcare are controversial. Unlike the perception of healthcare professionals, the attitudes of patients and their companions have received less attention so far. In this study, we aimed to investigate the perception of artificial intelligence in healthcare among this highly relevant group, along with the influence of digital affinity and sociodemographic factors. Methods We conducted a cross-sectional study using a paper-based questionnaire with patients and their companions at a German tertiary referral hospital from December 2019 to February 2020. The questionnaire consisted of three sections examining (a) the respondents' technical affinity, (b) their perception of different aspects of artificial intelligence in healthcare and (c) sociodemographic characteristics. Results Of a total of 452 participants, more than 90% had already read or heard about artificial intelligence, but only 24% reported good or expert knowledge. Asked about their general perception, 53.18% of the respondents rated the use of artificial intelligence in medicine as positive or very positive, while only 4.77% rated it negative or very negative. The respondents denied having concerns about artificial intelligence but strongly agreed that artificial intelligence must be controlled by a physician. Older patients, women, and persons with lower education and lower technical affinity were more cautious about the use of artificial intelligence in healthcare. Conclusions German patients and their companions are open towards the use of artificial intelligence in healthcare. Although showing only mediocre knowledge about artificial intelligence, a majority rated artificial intelligence in healthcare as positive. In particular, patients insist that a physician supervises the artificial intelligence and retains ultimate responsibility for diagnosis and therapy.
Collapse
Affiliation(s)
- Sebastian J Fritsch
- Department of Intensive Care Medicine, University Hospital RWTH Aachen, Germany
- SMITH Consortium of the German Medical Informatics Initiative, Germany
- Juelich Supercomputing Centre, Forschungszentrum Juelich, Germany
| | - Andrea Blankenheim
- Department of Intensive Care Medicine, University Hospital RWTH Aachen, Germany
| | - Alina Wahl
- Department of Intensive Care Medicine, University Hospital RWTH Aachen, Germany
| | - Petra Hetfeld
- Department of Intensive Care Medicine, University Hospital RWTH Aachen, Germany
- SMITH Consortium of the German Medical Informatics Initiative, Germany
| | - Oliver Maassen
- Department of Intensive Care Medicine, University Hospital RWTH Aachen, Germany
- SMITH Consortium of the German Medical Informatics Initiative, Germany
| | - Saskia Deffge
- Department of Intensive Care Medicine, University Hospital RWTH Aachen, Germany
- SMITH Consortium of the German Medical Informatics Initiative, Germany
| | - Julian Kunze
- SMITH Consortium of the German Medical Informatics Initiative, Germany
- Department of Anesthesiology, University Hospital RWTH Aachen, Germany
| | - Rolf Rossaint
- Department of Anesthesiology, University Hospital RWTH Aachen, Germany
| | - Morris Riedel
- SMITH Consortium of the German Medical Informatics Initiative, Germany
- Juelich Supercomputing Centre, Forschungszentrum Juelich, Germany
- Faculty of Industrial Engineering, Mechanical Engineering and Computer Science, University of Iceland, Iceland
| | - Gernot Marx
- Department of Intensive Care Medicine, University Hospital RWTH Aachen, Germany
- SMITH Consortium of the German Medical Informatics Initiative, Germany
| | - Johannes Bickenbach
- Department of Intensive Care Medicine, University Hospital RWTH Aachen, Germany
- SMITH Consortium of the German Medical Informatics Initiative, Germany
| |
Collapse
|