1
Arkoh S, Akudjedu TN, Amedu C, Antwi WK, Elshami W, Ohene-Botwe B. Current Radiology workforce perspective on the integration of artificial intelligence in clinical practice: A systematic review. J Med Imaging Radiat Sci 2025;56:101769. PMID: 39437624. DOI: 10.1016/j.jmir.2024.101769.
Abstract
INTRODUCTION: Artificial intelligence (AI) represents the application of computer systems to tasks traditionally performed by humans. The medical imaging profession has experienced a transformative shift through the integration of AI. While several independent primary studies have described various aspects of AI, the current review employs a systematic approach to describing the perspectives of radiologists and radiographers on the integration of AI in clinical practice. It provides a holistic view from a professional standpoint of how the broad spectrum of AI tools is perceived as a unit in medical imaging practice.
METHODS: The study utilised a systematic review approach to collect data from quantitative, qualitative, and mixed-methods studies. Inclusion criteria encompassed articles concentrating on the viewpoints of either radiographers or radiologists regarding the incorporation of AI in medical imaging practice. A stepwise approach was employed in the systematic search across various databases. The included studies underwent quality assessment using the Quality Assessment Tool for Studies with Diverse Designs (QATSDD) checklist. A parallel-results convergent synthesis approach was employed to synthesise qualitative and quantitative evidence independently and to integrate the findings during the discussion phase.
RESULTS: Forty-one articles were included, all of which employed a cross-sectional study design. The main findings were themed around considerations and perspectives relating to AI education; impact on image quality and radiation dose; ethical and medico-legal implications of AI use; patient considerations and the perceived significance of AI for their care; and factors that influence development, implementation and job security. Despite varying emphasis, these themes collectively provide a global perspective on AI in medical imaging practice.
CONCLUSION: While levels of expertise varied, both radiographers and radiologists were generally optimistic about the incorporation of AI in medical imaging practice. However, low levels of AI education and knowledge remain a critical barrier. Furthermore, equipment errors, cost, data security and operational difficulties, ethical constraints, job displacement concerns and insufficient implementation efforts are integration challenges that merit the attention of stakeholders.
Affiliation(s)
- Samuel Arkoh
- Department of Radiography, Scarborough Hospital, York and Scarborough NHS Foundation Trust, UK.
- Theophilus N Akudjedu
- Institute of Medical Imaging and Visualisation, Department of Medical Science & Public Health, Faculty of Health and Social Sciences, Bournemouth University, UK
- Cletus Amedu
- Diagnostic Radiography, Department of Midwifery & Radiography, School of Health & Psychological Sciences, City St George's, University of London, Northampton Square, London EC1V 0HB, UK
- William K Antwi
- Department of Radiography, School of Biomedical & Allied Health Sciences, College of Health Sciences, University of Ghana, Ghana
- Wiam Elshami
- Faculty, Department of Medical Diagnostic Imaging, College of Health Sciences, University of Sharjah, United Arab Emirates
- Benard Ohene-Botwe
- Diagnostic Radiography, Department of Midwifery & Radiography, School of Health & Psychological Sciences, City St George's, University of London, Northampton Square, London EC1V 0HB, UK
2
Rakers M, Mwale D, de Mare L, Chirambo L, Bierling B, Likumbo A, Langton J, Chavannes N, van Os H, Calis J, Dellimore K, Villalobos-Quesada M. Cautiously optimistic: paediatric critical care nurses' perspectives on data-driven algorithms in low-resource settings - a human-centred design study in Malawi. BMC Global and Public Health 2024;2:80. PMID: 39681976. PMCID: PMC11622966. DOI: 10.1186/s44263-024-00108-8.
Abstract
BACKGROUND: Paediatric critical care nurses face challenges in promptly detecting patient deterioration and delivering high-quality care, especially in low-resource settings (LRS). Patient monitors equipped with data-driven algorithms that track and integrate clinical data can optimise scarce resources (e.g. trained staff), offering solutions to these challenges. Poor algorithm output design and poor workflow integration, however, are important factors hindering successful implementation. This study aims to explore nurses' perspectives to inform the development of a data-driven algorithm and a user-friendly interface for future integration into a continuous vital signs monitoring system for critical care in LRS.
METHODS: Human-centred design methods, including contextual inquiry, semi-structured interviews, prototyping and co-design sessions, were carried out at the high-dependency units of Queen Elizabeth Central Hospital and Zomba Central Hospital in Malawi between March and July 2023. By triangulating these methods, we identified which algorithm could assist nurses and used co-creation methods to design a user interface prototype. Data were analysed using qualitative content analysis.
RESULTS: Workflow observations demonstrated the effects of personnel shortages and limited equipment on vital signs monitoring. Interviews identified four themes: workload and workflow, patient prioritisation, interaction with guardians, and perspectives on data-driven algorithms. The interviews emphasised the advantages of predictive algorithms in anticipating patient deterioration, underlining the need to integrate the algorithm's output, the continuous monitoring data, and the patient's present clinical condition. Nurses preferred a scoring system represented with familiar scales and colour codes. During co-design sessions, trust, usability and context specificity were emphasised as requirements for these algorithms. Four prototype components were examined, with nurses favouring scores represented by colour codes and visual representations of score changes.
CONCLUSIONS: Nurses in the LRS studied perceived that data-driven algorithms, especially for predicting patient deterioration, could improve the provision of critical care. This can be achieved by translating nurses' perspectives into design strategies, as has been done in this study. The lessons learned are summarised as actionable pre-implementation recommendations for the development and implementation of data-driven algorithms in LRS.
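The abstract does not disclose the scoring scheme or thresholds that were co-designed; the minimal sketch below only illustrates the kind of interface logic the nurses favoured, i.e. an early-warning-style deterioration score mapped to familiar colour codes with a visual indication of score change. All score bands, vital-sign thresholds and function names are hypothetical and are not taken from the study.

```python
# Illustrative sketch only: hypothetical score bands and thresholds,
# not the algorithm or interface developed in the Malawi study.

def deterioration_score(heart_rate, resp_rate, spo2):
    """Toy early-warning-style score from three vital signs (0 = stable)."""
    score = 0
    score += 2 if heart_rate > 160 else 1 if heart_rate > 140 else 0
    score += 2 if resp_rate > 60 else 1 if resp_rate > 40 else 0
    score += 2 if spo2 < 88 else 1 if spo2 < 92 else 0
    return score

def colour_band(score):
    """Map a score to a familiar traffic-light colour code."""
    if score >= 4:
        return "red"    # urgent review
    if score >= 2:
        return "amber"  # increased monitoring
    return "green"      # routine care

def trend_arrow(previous, current):
    """Visual representation of score change, as favoured in the co-design sessions."""
    if current > previous:
        return "rising"
    if current < previous:
        return "falling"
    return "stable"

# Example: two consecutive observations for one patient
prev = deterioration_score(heart_rate=150, resp_rate=35, spo2=93)
curr = deterioration_score(heart_rate=165, resp_rate=45, spo2=90)
print(curr, colour_band(curr), trend_arrow(prev, curr))  # 4 red rising
```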
Affiliation(s)
- Margot Rakers
- Department of Public Health and Primary Care, Leiden University Medical Center, Albinusdreef 2, Leiden, 2333 ZA, The Netherlands
- National eHealth Living Lab (NeLL), Leiden University Medical Center, Leiden, The Netherlands
- Daniel Mwale
- Kamuzu University of Health Sciences, Blantyre, Malawi
- Alice Likumbo
- Training Research Unit of Excellence (TRUE), Blantyre, Malawi
- Josephine Langton
- Kamuzu University of Health Sciences, Blantyre, Malawi
- Paediatric Department, College of Medicine, Queen Elizabeth Central Hospital, Blantyre, Malawi
- Niels Chavannes
- Department of Public Health and Primary Care, Leiden University Medical Center, Albinusdreef 2, Leiden, 2333 ZA, The Netherlands
- National eHealth Living Lab (NeLL), Leiden University Medical Center, Leiden, The Netherlands
- Hendrikus van Os
- Department of Public Health and Primary Care, Leiden University Medical Center, Albinusdreef 2, Leiden, 2333 ZA, The Netherlands
- National eHealth Living Lab (NeLL), Leiden University Medical Center, Leiden, The Netherlands
- Job Calis
- Paediatric Department, College of Medicine, Queen Elizabeth Central Hospital, Blantyre, Malawi
- Department of Paediatric Intensive Care, Emma Children's Hospital, Academic Medical Center, Amsterdam, The Netherlands
- María Villalobos-Quesada
- Department of Public Health and Primary Care, Leiden University Medical Center, Albinusdreef 2, Leiden, 2333 ZA, The Netherlands.
- National eHealth Living Lab (NeLL), Leiden University Medical Center, Leiden, The Netherlands.
3
Chinene B, Mudadi LS, Choto TA, Soko ND, Gonde L, Mushosho EY, Mutandiro LC. Insights into GenAI: Perspectives of radiography and pharmacy students at a leading institution in Zimbabwe. Radiography (Lond) 2024;30 Suppl 2:114-119. PMID: 39547045. DOI: 10.1016/j.radi.2024.11.004.
Abstract
INTRODUCTION: This study used a tertiary institution in Zimbabwe as a case study to evaluate radiography and pharmacy students' knowledge of, willingness to use, and concerns regarding generative AI (GenAI) technologies.
METHODS: A cross-sectional survey of 147 students was conducted using a structured questionnaire. Data analysis was performed using STATA 14. Descriptive statistics were presented, and comparisons across demographic variables were made using Student's t-test and ANOVA.
RESULTS: A sizeable proportion of students (74.83%) believed that knowledge of GenAI was essential for healthcare students to stay competitive in the evolving academic landscape. A significant proportion reported that they were willing to use GenAI technologies in their day-to-day activities, noting that tools such as ChatGPT can help them save time (77.56%) on tasks such as assignments. Most students (56.46%) were worried that the use of GenAI technology could cause people, students included, to lose the human touch in their daily lives. They also noted that using AI technologies such as ChatGPT to complete assignments has the potential to undermine the value of university education (44.9%).
CONCLUSION: Addressing students' concerns about the implications of GenAI for human interaction and educational value is essential for a balanced approach to technology integration in education. Future research should concentrate on solutions for addressing these issues while leveraging the benefits of GenAI in healthcare education.
IMPLICATIONS FOR PRACTICE: Understanding students' knowledge, willingness, and concerns regarding the use of GenAI tools can help educators better integrate these technologies into the learning process, ensuring they complement and enhance traditional teaching methods.
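The analysis reported in the abstract was performed in STATA 14; as a purely illustrative sketch with hypothetical data and group labels (not the study's dataset), the equivalent two-group and multi-group comparisons could be run in Python as follows.

```python
# Illustrative sketch: hypothetical Likert-style scores, not the study's data.
from scipy import stats

# Hypothetical mean GenAI-willingness scores by programme and year of study
radiography = [4.1, 3.8, 4.5, 3.9, 4.2, 4.0]
pharmacy = [3.6, 4.0, 3.7, 3.5, 3.9, 3.8]
year1, year2, year3 = [3.9, 4.1, 3.7], [4.0, 4.3, 3.8], [3.5, 3.9, 3.6]

# Student's t-test: two-group comparison (e.g. programme of study)
t_stat, t_p = stats.ttest_ind(radiography, pharmacy)

# One-way ANOVA: comparison across more than two groups (e.g. year of study)
f_stat, f_p = stats.f_oneway(year1, year2, year3)

print(f"t-test: t={t_stat:.2f}, p={t_p:.3f}")
print(f"ANOVA:  F={f_stat:.2f}, p={f_p:.3f}")
```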
Affiliation(s)
- B Chinene
- Department of Radiography, School of Allied Health Sciences, Harare Institute of Technology, Zimbabwe.
- L-S Mudadi
- Royal Papworth Hospital, NHS Foundation Trust, Cambridge, United Kingdom
- T A Choto
- Department of Pharmaceutical Technology, School of Allied Health Sciences, Harare Institute of Technology, Zimbabwe
- N D Soko
- Department of Pharmaceutical Technology, School of Allied Health Sciences, Harare Institute of Technology, Zimbabwe
- L Gonde
- Department of Pharmaceutical Technology, School of Allied Health Sciences, Harare Institute of Technology, Zimbabwe
- E Y Mushosho
- School of Allied Health Sciences, Harare Institute of Technology, Zimbabwe
- L C Mutandiro
- Department of Radiography, School of Allied Health Sciences, Harare Institute of Technology, Zimbabwe
4
Hua D, Petrina N, Young N, Cho JG, Poon SK. Understanding the factors influencing acceptability of AI in medical imaging domains among healthcare professionals: A scoping review. Artif Intell Med 2024;147:102698. PMID: 38184343. DOI: 10.1016/j.artmed.2023.102698.
Abstract
BACKGROUND: Artificial intelligence (AI) technology has the potential to transform medical practice within the medical imaging industry and materially improve productivity and patient outcomes. However, low acceptability of AI as a digital healthcare intervention among medical professionals threatens to undermine user uptake, hinder meaningful and optimal value-added engagement, and ultimately prevent these promising benefits from being realised. Understanding the factors underpinning AI acceptability will be vital for medical institutions seeking to pinpoint areas of deficiency and improvement within their AI implementation strategies. This scoping review surveys the literature to provide a comprehensive summary of the key factors influencing AI acceptability among healthcare professionals in medical imaging domains and of the different approaches that have been taken to investigate them.
METHODS: A systematic literature search was performed across five academic databases (Medline, Cochrane Library, Web of Science, Compendex, and Scopus) from January 2013 to September 2023, in adherence to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR) guidelines. Overall, 31 articles were deemed appropriate for inclusion in the scoping review.
RESULTS: The literature has converged on three overarching categories of factors underpinning AI acceptability: user factors, involving trust, system understanding, AI literacy, and technology receptiveness; system usage factors, entailing value proposition, self-efficacy, burden, and workflow integration; and socio-organisational-cultural factors, encompassing social influence, organisational readiness, ethicality, and perceived threat to professional identity. Yet numerous studies have overlooked a meaningful subset of these factors that are integral to the use of medical AI systems, such as the impact on clinical workflow practices, trust based on perceived risk and safety, and compatibility with the norms of medical professions. This is attributable to reliance on theoretical frameworks or ad hoc approaches that do not explicitly account for healthcare-specific factors, the novelties of AI as software as a medical device (SaMD), and the nuances of human-AI interaction from the perspective of medical professionals rather than lay consumer or business end users.
CONCLUSION: This is the first scoping review to survey the health informatics literature on the key factors influencing the acceptability of AI as a digital healthcare intervention in medical imaging contexts. The factors identified suggest that existing theoretical frameworks used to study AI acceptability need to be modified to better capture the nuances of AI deployment in healthcare contexts, where the user is a healthcare professional influenced by expert knowledge and disciplinary norms. Increasing AI acceptability among medical professionals will critically require designing human-centred AI systems that go beyond high algorithmic performance to consider accessibility for users with varying degrees of AI literacy, clinical workflow practices, the institutional and deployment context, and the cultural, ethical, and safety norms of healthcare professions. As investment in AI for healthcare increases, it would be valuable to conduct a systematic review and meta-analysis of the causal contribution of these factors to achieving high levels of AI acceptability among medical professionals.
Affiliation(s)
- David Hua
- School of Computer Science, The University of Sydney, Australia; Sydney Law School, The University of Sydney, Australia
- Neysa Petrina
- School of Computer Science, The University of Sydney, Australia
- Noel Young
- Sydney Medical School, The University of Sydney, Australia; Lumus Imaging, Australia
- Jin-Gun Cho
- Sydney Medical School, The University of Sydney, Australia; Western Sydney Local Health District, Australia; Lumus Imaging, Australia
- Simon K Poon
- School of Computer Science, The University of Sydney, Australia; Western Sydney Local Health District, Australia.