1
Dingel J, Kleine AK, Cecil J, Sigl AL, Lermer E, Gaube S. Predictors of Health Care Practitioners' Intention to Use AI-Enabled Clinical Decision Support Systems: Meta-Analysis Based on the Unified Theory of Acceptance and Use of Technology. J Med Internet Res 2024; 26:e57224. [PMID: 39102675; PMCID: PMC11333871; DOI: 10.2196/57224]
Abstract
BACKGROUND Artificial intelligence-enabled clinical decision support systems (AI-CDSSs) offer potential for improving health care outcomes, but their adoption among health care practitioners remains limited. OBJECTIVE This meta-analysis identified predictors influencing health care practitioners' intention to use AI-CDSSs based on the Unified Theory of Acceptance and Use of Technology (UTAUT). Additional predictors were examined based on existing empirical evidence. METHODS The literature search using electronic databases, forward searches, conference programs, and personal correspondence yielded 7731 results, of which 17 (0.22%) studies met the inclusion criteria. Random-effects meta-analysis, relative weight analyses, and meta-analytic moderation and mediation analyses were used to examine the relationships between relevant predictor variables and the intention to use AI-CDSSs. RESULTS The meta-analysis results supported the application of the UTAUT to the context of the intention to use AI-CDSSs. The results showed that performance expectancy (r=0.66), effort expectancy (r=0.55), social influence (r=0.66), and facilitating conditions (r=0.66) were positively associated with the intention to use AI-CDSSs, in line with the predictions of the UTAUT. The meta-analysis further identified positive attitude (r=0.63), trust (r=0.73), anxiety (r=-0.41), perceived risk (r=-0.21), and innovativeness (r=0.54) as additional relevant predictors. Trust emerged as the most influential predictor overall. The results of the moderation analyses show that the relationship between social influence and use intention becomes weaker with increasing age. In addition, the relationship between effort expectancy and use intention was stronger for diagnostic AI-CDSSs than for devices that combined diagnostic and treatment recommendations. Finally, the relationship between facilitating conditions and use intention was mediated through performance and effort expectancy. 
CONCLUSIONS This meta-analysis contributes to the understanding of the predictors of intention to use AI-CDSSs based on an extended UTAUT model. More research is needed to substantiate the identified relationships and explain the observed variations in effect sizes by identifying relevant moderating factors. The research findings bear important implications for the design and implementation of training programs for health care practitioners to ease the adoption of AI-CDSSs into their practice.
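The pooled correlations above come from a random-effects meta-analysis of effect sizes. As an illustration only, with made-up correlations and sample sizes (not the 17 included studies), a minimal DerSimonian-Laird pooling sketch on Fisher z-transformed correlations might look like this:

```python
import math

def pool_correlations(rs, ns):
    """Random-effects (DerSimonian-Laird) pooling of correlations via Fisher z."""
    zs = [math.atanh(r) for r in rs]          # Fisher z-transform
    ws = [n - 3 for n in ns]                  # inverse variance of z is n - 3
    z_fixed = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    q = sum(w * (z - z_fixed) ** 2 for w, z in zip(ws, zs))  # heterogeneity Q
    c = sum(ws) - sum(w * w for w in ws) / sum(ws)
    tau2 = max(0.0, (q - (len(rs) - 1)) / c)  # between-study variance
    ws_re = [1.0 / (1.0 / w + tau2) for w in ws]
    z_re = sum(w * z for w, z in zip(ws_re, zs)) / sum(ws_re)
    return math.tanh(z_re)                    # back-transform to r

# Hypothetical study correlations and sample sizes, for illustration
pooled = pool_correlations([0.60, 0.70, 0.66], [120, 200, 150])
```

Here the pooled estimate lands near the individual correlations, weighted toward the larger studies; the between-study variance term is what distinguishes the random-effects model from a fixed-effect analysis.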
Affiliation(s)
- Julius Dingel
- Human-AI-Interaction Group, Center for Leadership and People Management, Ludwig Maximilian University of Munich, Munich, Germany
- Anne-Kathrin Kleine
- Human-AI-Interaction Group, Center for Leadership and People Management, Ludwig Maximilian University of Munich, Munich, Germany
- Julia Cecil
- Human-AI-Interaction Group, Center for Leadership and People Management, Ludwig Maximilian University of Munich, Munich, Germany
- Anna Leonie Sigl
- Department of Liberal Arts and Sciences, Technical University of Applied Sciences Augsburg, Augsburg, Germany
- Eva Lermer
- Human-AI-Interaction Group, Center for Leadership and People Management, Ludwig Maximilian University of Munich, Munich, Germany
- Department of Liberal Arts and Sciences, Technical University of Applied Sciences Augsburg, Augsburg, Germany
- Susanne Gaube
- Human Factors in Healthcare, Global Business School for Health, University College London, London, United Kingdom
2
Alshutayli AAM, Asiri FM, Abutaleb YBA, Alomair BA, Almasaud AK, Almaqhawi A. Assessing Public Knowledge and Acceptance of Using Artificial Intelligence Doctors as a Partial Alternative to Human Doctors in Saudi Arabia: A Cross-Sectional Study. Cureus 2024; 16:e64461. [PMID: 39135842; PMCID: PMC11318498; DOI: 10.7759/cureus.64461]
Abstract
Objective To assess the public acceptance of using artificial intelligence (AI) doctors to diagnose and treat patients as a partial alternative to human physicians in Saudi Arabia. Methodology An observational cross-sectional study was conducted from January to March 2024. A link to an online questionnaire was distributed through social media applications to citizens and residents aged 18 years and older across various regions in Saudi Arabia. The sample size was calculated using the Raosoft online survey size calculator, which estimated that the minimum sample size should be 385. Results Of the 386 participants surveyed, 85.8% reported being aware of AI, and 47.9% reported having some knowledge about different AI fields in daily life. However, almost one-third (32.9%) reported a lack of knowledge about the use of AI in healthcare. In terms of acceptance, 52.3% of respondents indicated they felt comfortable with the use of AI tools as partial alternatives to human doctors, and 30.8% believed AI is useful in the field of health. The most common concern (63.7%) about the use of AI tools accessible to patients was the difficulty of describing symptoms using these tools. Conclusion The findings of this study provide valuable insights into the public's knowledge and acceptance of AI in medicine within the Saudi Arabian context. Overall, this study underscores the importance of proactively addressing the public's concerns and knowledge gaps regarding AI in healthcare. By fostering greater understanding and acceptance, healthcare stakeholders can better harness the potential of AI to improve patient outcomes and enhance the efficiency of medical services in Saudi Arabia.
Affiliation(s)
- Faisal M Asiri
- College of Medicine, Prince Sattam Bin Abdulaziz University, Al-Kharj, SAU
3
Zondag AGM, Rozestraten R, Grimmelikhuijsen SG, Jongsma KR, van Solinge WW, Bots ML, Vernooij RWM, Haitjema S. The Effect of Artificial Intelligence on Patient-Physician Trust: Cross-Sectional Vignette Study. J Med Internet Res 2024; 26:e50853. [PMID: 38805702; PMCID: PMC11167322; DOI: 10.2196/50853]
Abstract
BACKGROUND Clinical decision support systems (CDSSs) based on routine care data, using artificial intelligence (AI), are increasingly being developed. Previous studies focused largely on the technical aspects of using AI, but the acceptability of these technologies by patients remains unclear. OBJECTIVE We aimed to investigate whether patient-physician trust is affected when medical decision-making is supported by a CDSS. METHODS We conducted a vignette study among the patient panel (N=860) of the University Medical Center Utrecht, the Netherlands. Patients were randomly assigned into 4 groups-either the intervention or control groups of the high-risk or low-risk cases. In both the high-risk and low-risk case groups, a physician made a treatment decision with (intervention groups) or without (control groups) the support of a CDSS. Using a questionnaire with a 7-point Likert scale, with 1 indicating "strongly disagree" and 7 indicating "strongly agree," we collected data on patient-physician trust in 3 dimensions: competence, integrity, and benevolence. We assessed differences in patient-physician trust between the control and intervention groups per case using Mann-Whitney U tests and potential effect modification by the participant's sex, age, education level, general trust in health care, and general trust in technology using multivariate analyses of (co)variance. RESULTS In total, 398 patients participated. In the high-risk case, median perceived competence and integrity were lower in the intervention group compared to the control group but not statistically significant (5.8 vs 5.6; P=.16 and 6.3 vs 6.0; P=.06, respectively). However, the effect of a CDSS application on the perceived competence of the physician depended on the participant's sex (P=.03). 
Although no between-group differences were found in men, in women, the perception of the physician's competence and integrity was significantly lower in the intervention compared to the control group (P=.009 and P=.01, respectively). In the low-risk case, no differences in trust between the groups were found. However, increased trust in technology positively influenced the perceived benevolence and integrity in the low-risk case (P=.009 and P=.04, respectively). CONCLUSIONS We found that, in general, patient-physician trust was high. However, our findings indicate a potentially negative effect of AI applications on the patient-physician relationship, especially among women and in high-risk situations. Trust in technology, in general, might increase the likelihood of embracing the use of CDSSs by treating professionals.
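The between-group comparisons above rest on the Mann-Whitney U test, which compares two groups of ordinal (here, 7-point Likert) ratings without assuming normality. A minimal sketch with hypothetical trust ratings (not the study's data):

```python
def mann_whitney_u(group_a, group_b):
    """Mann-Whitney U statistic for group_a vs group_b; ties count 1/2."""
    u = 0.0
    for a in group_a:
        for b in group_b:
            if a > b:
                u += 1.0
            elif a == b:
                u += 0.5
    return u

# Hypothetical 7-point Likert trust ratings, for illustration only
control = [7, 6, 6, 5]       # physician decides without CDSS
intervention = [5, 5, 4, 4]  # physician decides with CDSS support

u = mann_whitney_u(control, intervention)
# Common-language effect size: P(control rating > intervention rating)
cles = u / (len(control) * len(intervention))
```

In practice one would use a library routine (e.g., a tie-corrected normal approximation) for the p-value; the pairwise-count form above shows what the statistic measures.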
Affiliation(s)
- Anna G M Zondag
- Central Diagnostic Laboratory, University Medical Center Utrecht, Utrecht University, Utrecht, Netherlands
- Raoul Rozestraten
- Utrecht University School of Governance, Utrecht University, Utrecht, Netherlands
- Karin R Jongsma
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, Netherlands
- Wouter W van Solinge
- Central Diagnostic Laboratory, University Medical Center Utrecht, Utrecht University, Utrecht, Netherlands
- Michiel L Bots
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, Netherlands
- Robin W M Vernooij
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, Netherlands
- Department of Nephrology and Hypertension, University Medical Center Utrecht, Utrecht, Netherlands
- Saskia Haitjema
- Central Diagnostic Laboratory, University Medical Center Utrecht, Utrecht University, Utrecht, Netherlands
4
Chen H, Li HX, Li L, Zhang XH, Gu JW, Wang Q, Wu CM, Wu YQ. Factors Associated with Intention to Use Telerehabilitation for Children with Special Needs: A Cross-Sectional Study. Telemed J E Health 2024; 30:1425-1435. [PMID: 38346325; DOI: 10.1089/tmj.2023.0325]
Abstract
Background: Children with special health care needs (CSHCN) require long-term and ongoing rehabilitation interventions supporting their development. Telerehabilitation can provide continuous rehabilitation services for CSHCN. However, few studies have explored the intention of CSHCN and their caregivers to use telerehabilitation and its impact on them. Objective: The objective of this study was to identify factors that influence the intention to use telerehabilitation among CSHCN and their caregivers. Methods: This study was a cross-sectional study. Based on the unified theory of acceptance and use of technology, extended with additional predictors (trust and perceived risk [PR]), this study developed a research model and proposed 10 hypotheses. A structured questionnaire was distributed to 176 caregivers. Data were analyzed and research hypotheses were tested using partial least squares structural equation modeling to better understand the factors influencing the use of telerehabilitation. Results: A total of 164 valid questionnaires were collected. CSHCN and their caregivers were overall satisfied with this telerehabilitation medical service. The results of the structural model analysis indicated that social influence (SI), facilitating conditions (FC), and trust had significant effects on behavioral intention (BI) to use telerehabilitation, while the paths between performance expectancy (PE), effort expectancy (EE), and PR and BI were not significant. PE, EE, and SI had a significant effect on trust. Moreover, EE and SI had indirect effects on BI, with trust as the mediator. Conclusions: The results indicated that SI, FC, and trust are significant factors influencing CSHCN and their caregivers' use of telerehabilitation. Trust is also an important mediator for the intention and highly influenced by PE, EE, and SI.
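The study tests its mediation hypotheses with PLS-SEM. As a much simpler illustration of the underlying idea, the indirect effect of a predictor through a mediator can be estimated as the product of two regression coefficients (path a: predictor to mediator; path b: mediator to outcome, controlling for the predictor). A sketch with hypothetical Likert scores, not the study's data or its estimation method:

```python
def ols_slope(x, y):
    """OLS slope of y on a single predictor x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

def ols_two(y, x1, x2):
    """OLS slopes of y on two predictors, via normal equations on centered data."""
    n = len(y)
    m1, m2, my = sum(x1) / n, sum(x2) / n, sum(y) / n
    s11 = sum((a - m1) ** 2 for a in x1)
    s22 = sum((a - m2) ** 2 for a in x2)
    s12 = sum((a - m1) * (b - m2) for a, b in zip(x1, x2))
    s1y = sum((a - m1) * (b - my) for a, b in zip(x1, y))
    s2y = sum((a - m2) * (b - my) for a, b in zip(x2, y))
    det = s11 * s22 - s12 ** 2
    return (s22 * s1y - s12 * s2y) / det, (s11 * s2y - s12 * s1y) / det

# Hypothetical 5-point Likert scores, for illustration only
si    = [1, 2, 2, 3, 3, 4, 4, 5, 5, 5]  # social influence (predictor)
trust = [2, 2, 3, 3, 4, 4, 4, 5, 5, 5]  # trust (mediator)
bi    = [1, 2, 2, 3, 4, 4, 5, 5, 5, 5]  # behavioral intention (outcome)

a = ols_slope(si, trust)          # path a: SI -> trust
_, b = ols_two(bi, si, trust)     # path b: trust -> BI, controlling for SI
indirect = a * b                  # product-of-coefficients indirect effect
```

PLS-SEM additionally models latent constructs and bootstraps the indirect effect's significance; this OLS sketch only shows why a positive path a and path b imply a positive indirect effect.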
Affiliation(s)
- Hong Chen
- Department of Pediatric Rehabilitation, Shaanxi Rehabilitation Hospital, Xi'an, China
- Hong-Xia Li
- Department of Pediatric Rehabilitation, Shaanxi Rehabilitation Hospital, Xi'an, China
- Ling Li
- Department of Pediatric Rehabilitation, Shaanxi Rehabilitation Hospital, Xi'an, China
- Xiao-Hong Zhang
- Department of Pediatric Rehabilitation, Shaanxi Rehabilitation Hospital, Xi'an, China
- Jun-Wang Gu
- School of Public Health and Health Management, Gannan Medical University, Ganzhou, China
- Qi Wang
- School of Public Health and Health Management, Gannan Medical University, Ganzhou, China
- Chun-Mei Wu
- School of Public Health and Health Management, Gannan Medical University, Ganzhou, China
- Yong-Qiang Wu
- Department of Rehabilitation, Xi'an Children's Hospital, Xi'an, China
5
Gordon M, Daniel M, Ajiboye A, Uraiby H, Xu NY, Bartlett R, Hanson J, Haas M, Spadafore M, Grafton-Clarke C, Gasiea RY, Michie C, Corral J, Kwan B, Dolmans D, Thammasitboon S. A scoping review of artificial intelligence in medical education: BEME Guide No. 84. Med Teach 2024; 46:446-470. [PMID: 38423127; DOI: 10.1080/0142159X.2024.2314198]
Abstract
BACKGROUND Artificial Intelligence (AI) is rapidly transforming healthcare, and there is a critical need for a nuanced understanding of how AI is reshaping teaching, learning, and educational practice in medical education. This review aimed to map the literature regarding AI applications in medical education, core areas of findings, potential candidates for formal systematic review and gaps for future research. METHODS This rapid scoping review, conducted over 16 weeks, employed Arksey and O'Malley's framework and adhered to STORIES and BEME guidelines. A systematic and comprehensive search across PubMed/MEDLINE, EMBASE, and MedEdPublish was conducted without date or language restrictions. Publications included in the review spanned undergraduate, graduate, and continuing medical education, encompassing both original studies and perspective pieces. Data were charted by multiple author pairs and synthesized into various thematic maps and charts, ensuring a broad and detailed representation of the current landscape. RESULTS The review synthesized 278 publications, with a majority (68%) from North American and European regions. The studies covered diverse AI applications in medical education, such as AI for admissions, teaching, assessment, and clinical reasoning. The review highlighted AI's varied roles, from augmenting traditional educational methods to introducing innovative practices, and underscores the urgent need for ethical guidelines in AI's application in medical education. CONCLUSION The current literature has been charted. The findings underscore the need for ongoing research to explore uncharted areas and address potential risks associated with AI use in medical education. This work serves as a foundational resource for educators, policymakers, and researchers in navigating AI's evolving role in medical education. A framework to support future high utility reporting is proposed, the FACETS framework.
Affiliation(s)
- Morris Gordon
- School of Medicine and Dentistry, University of Central Lancashire, Preston, UK
- Blackpool Hospitals NHS Foundation Trust, Blackpool, UK
- Michelle Daniel
- School of Medicine, University of California, San Diego, San Diego, CA, USA
- Aderonke Ajiboye
- School of Medicine and Dentistry, University of Central Lancashire, Preston, UK
- Hussein Uraiby
- Department of Cellular Pathology, University Hospitals of Leicester NHS Trust, Leicester, UK
- Nicole Y Xu
- School of Medicine, University of California, San Diego, San Diego, CA, USA
- Rangana Bartlett
- Department of Cognitive Science, University of California, San Diego, CA, USA
- Janice Hanson
- Department of Medicine and Office of Education, School of Medicine, Washington University in Saint Louis, Saint Louis, MO, USA
- Mary Haas
- Department of Emergency Medicine, University of Michigan Medical School, Ann Arbor, MI, USA
- Maxwell Spadafore
- Department of Emergency Medicine, University of Michigan Medical School, Ann Arbor, MI, USA
- Colin Michie
- School of Medicine and Dentistry, University of Central Lancashire, Preston, UK
- Janet Corral
- Department of Medicine, University of Nevada Reno, School of Medicine, Reno, NV, USA
- Brian Kwan
- School of Medicine, University of California, San Diego, San Diego, CA, USA
- Diana Dolmans
- School of Health Professions Education, Faculty of Health, Maastricht University, Maastricht, the Netherlands
- Satid Thammasitboon
- Center for Research, Innovation and Scholarship in Health Professions Education, Baylor College of Medicine, Houston, TX, USA
6
Cortez PM, Ong AKS, Diaz JFT, German JD, Singh Jagdeep SJS. Analyzing preceding factors affecting behavioral intention on communicational artificial intelligence as an educational tool. Heliyon 2024; 10:e25896. [PMID: 38356557; PMCID: PMC10865406; DOI: 10.1016/j.heliyon.2024.e25896]
Abstract
During the pandemic, artificial intelligence was employed and utilized by students around the globe. Students' conduct changed in a variety of ways when schooling returned to regular instruction. This study aimed to analyze the student's behavioral intention and actual academic use of communicational AI (CAI) as an educational tool. This study identified the variables by utilizing an integrated framework based on the Unified Theory of Acceptance and Use of Technology (UTAUT2) and self-determination theory. Through the use of an online survey and Structural Equation Modeling, data from 533 respondents were analyzed. The results showed that perceived relatedness has the most significant effect on the behavioral intention of students in using CAI as an educational tool, followed by perceived autonomy. It showed that students use CAI based on the objective and the possibility of increasing their productivity, rather than any other purpose in the education setting. Among the UTAUT2 domains, only facilitating conditions, habit, and performance expectancy provided a significant direct effect on behavioral intention and an indirect effect on actual academic use. Further implications were presented. Moreover, the methodology and framework of this study could be extended and applied to educational technology-related studies. Lastly, the outcome of this study may be considered in analyzing the behavioral intention of the students as the teaching-learning environment is still continuously expanding and developing.
Affiliation(s)
- Patrick M. Cortez
- School of Industrial Engineering and Engineering Management, Mapúa University, 658 Muralla St., Intramuros, Manila, 1002, Philippines
- Ardvin Kester S. Ong
- School of Industrial Engineering and Engineering Management, Mapúa University, 658 Muralla St., Intramuros, Manila, 1002, Philippines
- E.T. Yuchengco School of Business, Mapúa University, 1191 Pablo Ocampo Sr. Ext., Makati, Metro Manila 1205, Philippines
- John Francis T. Diaz
- Department of Finance and Accounting, Asian Institute of Management, 123 Paseo de Roxas, Legazpi Village, Makati, 1229, Metro Manila, Philippines
- Josephine D. German
- School of Industrial Engineering and Engineering Management, Mapúa University, 658 Muralla St., Intramuros, Manila, 1002, Philippines
7
Schulz PJ, Lwin MO, Kee KM, Goh WWB, Lam TYT, Sung JJY. Modeling the influence of attitudes, trust, and beliefs on endoscopists' acceptance of artificial intelligence applications in medical practice. Front Public Health 2023; 11:1301563. [PMID: 38089040; PMCID: PMC10715310; DOI: 10.3389/fpubh.2023.1301563]
Abstract
Introduction The potential for deployment of Artificial Intelligence (AI) technologies in various fields of medicine is vast, yet acceptance of AI amongst clinicians has been patchy. This research therefore examines the role of antecedents, namely trust, attitude, and beliefs in driving AI acceptance in clinical practice. Methods We utilized online surveys to gather data from clinicians in the field of gastroenterology. Results A total of 164 participants responded to the survey. Participants had a mean age of 44.49 (SD = 9.65). Most participants were male (n = 116, 70.30%) and specialized in gastroenterology (n = 153, 92.73%). Based on the results collected, we proposed and tested a model of AI acceptance in medical practice. Our findings showed that while the proposed drivers had a positive impact on AI tools' acceptance, not all effects were direct. Trust and belief were found to fully mediate the effects of attitude on AI acceptance by clinicians. Discussion The role of trust and beliefs as primary mediators of the acceptance of AI in medical practice suggest that these should be areas of focus in AI education, engagement and training. This has implications for how AI systems can gain greater clinician acceptance to engender greater trust and adoption amongst public health systems and professional networks which in turn would impact how populations interface with AI. Implications for policy and practice, as well as future research in this nascent field, are discussed.
Affiliation(s)
- Peter J. Schulz
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore
- Wee Kim Wee School of Communication and Information, Nanyang Technological University, Singapore, Singapore
- May O. Lwin
- Wee Kim Wee School of Communication and Information, Nanyang Technological University, Singapore, Singapore
- Kalya M. Kee
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore
- Wee Kim Wee School of Communication and Information, Nanyang Technological University, Singapore, Singapore
- Wilson W. B. Goh
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore
- School of Biological Sciences, Nanyang Technological University, Singapore, Singapore
- Center for Biomedical Informatics, Nanyang Technological University, Singapore, Singapore
- Thomas Y. T. Lam
- Faculty of Medicine, Institute of Digestive Diseases, The Chinese University of Hong Kong, Hong Kong, China
- Joseph J. Y. Sung
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore
8
Alanzi T, Alotaibi R, Alajmi R, Bukhamsin Z, Fadaq K, AlGhamdi N, Bu Khamsin N, Alzahrani L, Abdullah R, Alsayer R, Al Muarfaj AM, Alanzi N. Barriers and Facilitators of Artificial Intelligence in Family Medicine: An Empirical Study With Physicians in Saudi Arabia. Cureus 2023; 15:e49419. [PMID: 38149160; PMCID: PMC10750222; DOI: 10.7759/cureus.49419]
Abstract
BACKGROUND Artificial intelligence (AI) is a novel technology widely acknowledged for its potential to improve process efficiency across industries. However, its barriers and facilitators in healthcare are not completely understood because of its novelty. STUDY PURPOSE The purpose of this study is to explore the intricate landscape of AI use in family medicine, aiming to uncover the factors that either hinder or enable its successful adoption. METHODS A cross-sectional survey design was adopted. The questionnaire covered 10 factors (performance expectancy, effort expectancy, social influence, facilitating conditions, behavioral intention, trust, perceived privacy risk, personal innovativeness, ethical concerns, and facilitators) affecting the acceptance of AI. A total of 157 family physicians participated in the online survey. RESULTS Effort expectancy (μ = 3.85) and facilitating conditions (μ = 3.77) were identified as strong influencing factors. Access to data (μ = 4.33), increased computing power (μ = 3.92), and telemedicine (μ = 3.78) were identified as major facilitators; regulatory support (μ = 2.29) and interoperability standards (μ = 2.71) were identified as barriers, along with privacy and ethical concerns. Younger individuals tended to have more positive attitudes and expectations toward AI-enabled assistants than older participants (p < .05). Perceived privacy risk was negatively correlated with all factors. CONCLUSION Although there are various barriers and concerns regarding the use of AI in healthcare, the preference for AI use in healthcare, especially family medicine, is increasing.
Affiliation(s)
- Turki Alanzi
- Department of Health Information Management and Technology, College of Public Health, Imam Abdulrahman Bin Faisal University, Dammam, SAU
- Raghad Alotaibi
- Department of Family Medicine, King Fahad Medical City, Riyadh, SAU
- Rahaf Alajmi
- College of Medicine, Imam Abdulrahman Bin Faisal University, Dammam, SAU
- Zainab Bukhamsin
- College of Clinical Pharmacy, Imam Abdulrahman Bin Faisal University, Dammam, SAU
- Khadija Fadaq
- College of Medicine, Imam Abdulrahman Bin Faisal University, Dammam, SAU
- Nouf AlGhamdi
- College of Medicine, Imam Abdulrahman Bin Faisal University, Dammam, SAU
- Ruya Abdullah
- Faculty of Medicine, Ibn Sina National College, Jeddah, SAU
- Razan Alsayer
- College of Medicine, Northern Border University, Arar, SAU
- Afrah M Al Muarfaj
- Department of Health Affairs, General Directorate of Health Affairs in Assir Region, Ministry of Health, Abha, SAU
- Nouf Alanzi
- Department of Clinical Laboratory Sciences, College of Applied Medical Sciences, Jouf University, Sakakah, SAU
9
Shahzad MF, Xu S, Naveed W, Nusrat S, Zahid I. Investigating the impact of artificial intelligence on human resource functions in the health sector of China: A mediated moderation model. Heliyon 2023; 9:e21818. [PMID: 38034787; PMCID: PMC10685199; DOI: 10.1016/j.heliyon.2023.e21818]
Abstract
Artificial intelligence (AI) is rapidly transforming the way human resources (HR) functions are carried out in the health sector of China. This study aims to scrutinize the impact of artificial intelligence on the human resource functions operating in the healthcare sector through technological awareness, social media influence, and personal innovativeness. Additionally, this study examines the moderating role of perceived risk between technological awareness and human resources functions. An online questionnaire was administered to human resources professionals in the health sector of China to gather data from 363 respondents. Partial least squares structural equation modeling (PLS-SEM), a statistical procedure, is implemented to investigate the hypothesis of the projected model of artificial intelligence and human resource functions. The research findings reveal that artificial intelligence significantly influences human resource functions through technological awareness, social media influence, and personal innovativeness. Furthermore, perceived risk significantly moderates the relationship between technological awareness and human resource functions. The findings of this study have important implications for HR practitioners and policymakers in the health sectors of China, who can leverage artificial intelligence technologies to optimize and improve organizational performance. However, its adoption needs to be carefully planned and managed to reap the full benefits of this transformative technology.
Affiliation(s)
- Shuo Xu
- College of Economics and Management, Beijing University of Technology, Beijing 100124, PR China
- Waliha Naveed
- Institute of Business & Management, University of Engineering and Technology, Lahore 54000, Pakistan
- Shahneela Nusrat
- College of Environment and Life Science, Beijing University of Technology, Beijing 100124, PR China
- Imran Zahid
- Department of Mechanical Engineering and Technology, Government College University Faisalabad, Pakistan
10
Chen Y, Wu Z, Wang P, Xie L, Yan M, Jiang M, Yang Z, Zheng J, Zhang J, Zhu J. Radiology Residents' Perceptions of Artificial Intelligence: Nationwide Cross-Sectional Survey Study. J Med Internet Res 2023; 25:e48249. [PMID: 37856181; PMCID: PMC10623237; DOI: 10.2196/48249]
Abstract
BACKGROUND Artificial intelligence (AI) is transforming various fields, with health care, especially diagnostic specialties such as radiology, being a key but controversial battleground. However, there is limited research systematically examining the response of "human intelligence" to AI. OBJECTIVE This study aims to comprehend radiologists' perceptions regarding AI, including their views on its potential to replace them, its usefulness, and their willingness to accept it. We examine the influence of various factors, encompassing demographic characteristics, working status, psychosocial aspects, personal experience, and contextual factors. METHODS Between December 1, 2020, and April 30, 2021, a cross-sectional survey was completed by 3666 radiology residents in China. We used multivariable logistic regression models to examine factors and associations, reporting odds ratios (ORs) and 95% CIs. RESULTS In summary, radiology residents generally hold a positive attitude toward AI, with 29.90% (1096/3666) agreeing that AI may reduce the demand for radiologists, 72.80% (2669/3666) believing AI improves disease diagnosis, and 78.18% (2866/3666) feeling that radiologists should embrace AI. Several associated factors, including age, gender, education, region, eye strain, working hours, time spent on medical images, resilience, burnout, AI experience, and perceptions of residency support and stress, significantly influence AI attitudes. For instance, burnout symptoms were associated with greater concerns about AI replacement (OR 1.89; P<.001), less favorable views on AI usefulness (OR 0.77; P=.005), and reduced willingness to use AI (OR 0.71; P<.001). Moreover, after adjusting for all other factors, perceived AI replacement (OR 0.81; P<.001) and AI usefulness (OR 5.97; P<.001) were shown to significantly impact the intention to use AI. CONCLUSIONS This study profiles radiology residents who are accepting of AI. 
Our comprehensive findings provide insights for a multidimensional approach to help physicians adapt to AI. Targeted policies, such as digital health care initiatives and medical education, can be developed accordingly.
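The odds ratios above come from multivariable logistic regression, in which each OR is the exponential of a model coefficient. As a rough illustration of how such an OR is read (the function names and the 30% baseline probability are hypothetical, not taken from the study):

```python
import math

def odds_ratio(beta: float) -> float:
    """A logistic-regression coefficient maps to an odds ratio via exp()."""
    return math.exp(beta)

def apply_odds_ratio(p_baseline: float, or_value: float) -> float:
    """Scale the baseline odds by the OR, then convert back to a probability."""
    odds = p_baseline / (1.0 - p_baseline)
    new_odds = odds * or_value
    return new_odds / (1.0 + new_odds)

# Illustrative only: if 30% of residents without burnout were concerned
# about AI replacement, an OR of 1.89 would correspond to roughly 44.8%
# among residents with burnout symptoms.
p_with_burnout = apply_odds_ratio(0.30, 1.89)
print(round(p_with_burnout, 3))  # 0.448
```

Note that an OR of 1.89 does not mean "1.89 times the probability"; it scales the odds, so the implied probability change depends on the baseline rate.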
Affiliation(s)
- Yanhua Chen
  - Vanke School of Public Health, Tsinghua University, Beijing, China
  - School of Medicine, Tsinghua University, Beijing, China
- Ziye Wu
  - Vanke School of Public Health, Tsinghua University, Beijing, China
- Peicheng Wang
  - Vanke School of Public Health, Tsinghua University, Beijing, China
  - School of Medicine, Tsinghua University, Beijing, China
- Linbo Xie
  - Vanke School of Public Health, Tsinghua University, Beijing, China
  - School of Medicine, Tsinghua University, Beijing, China
- Mengsha Yan
  - Vanke School of Public Health, Tsinghua University, Beijing, China
- Maoqing Jiang
  - Department of Radiology, Ningbo No. 2 Hospital, Ningbo, China
- Zhenghan Yang
  - Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Jianjun Zheng
  - Department of Radiology, Ningbo No. 2 Hospital, Ningbo, China
- Jingfeng Zhang
  - Department of Radiology, Ningbo No. 2 Hospital, Ningbo, China
- Jiming Zhu
  - Vanke School of Public Health, Tsinghua University, Beijing, China
  - Institute for Healthy China, Tsinghua University, Beijing, China

11
Shamszare H, Choudhury A. Clinicians' Perceptions of Artificial Intelligence: Focus on Workload, Risk, Trust, Clinical Decision Making, and Clinical Integration. Healthcare (Basel) 2023; 11:2308. [PMID: 37628506] [PMCID: PMC10454426] [DOI: 10.3390/healthcare11162308]
Abstract
Artificial intelligence (AI) offers the potential to revolutionize healthcare, from improving diagnoses to enhancing patient safety. However, many healthcare practitioners are hesitant to adopt AI technologies fully. To understand why, this research explored clinicians' views on AI, especially their level of trust, their concerns about potential risks, and how they believe AI might affect their day-to-day workload. We surveyed 265 healthcare professionals from various specialties in the U.S. The survey aimed to understand their perceptions and any concerns they might have about AI in their clinical practice. We further examined how these perceptions might align with three hypothetical approaches to integrating AI into healthcare: no integration, sequential (step-by-step) integration, and parallel (side-by-side with current practices) integration. The results reveal that clinicians who view AI as a workload reducer are more inclined to trust it and are more likely to use it in clinical decision making. However, those perceiving higher risks with AI are less inclined to adopt it in decision making. While the role of clinical experience was found to be statistically insignificant in influencing trust in AI and AI-driven decision making, further research might explore other potential moderating variables, such as technical aptitude, previous exposure to AI, or the specific medical specialty of the clinician. By evaluating three hypothetical scenarios of AI integration in healthcare, our study elucidates the potential pitfalls of sequential AI integration and the comparative advantages of parallel integration. In conclusion, this study underscores the necessity of strategic AI integration into healthcare. AI should be perceived as a supportive tool rather than an intrusive entity, augmenting the clinicians' skills and facilitating their workflow rather than disrupting it.
As we move towards an increasingly digitized future in healthcare, comprehending the interplay among AI technology, clinician perception, trust, and decision making is fundamental.
Affiliation(s)
- Avishek Choudhury
  - Industrial and Management Systems Engineering, West Virginia University, Morgantown, WV 26506, USA

12
Kleine AK, Kokje E, Lermer E, Gaube S. Attitudes Toward the Adoption of 2 Artificial Intelligence-Enabled Mental Health Tools Among Prospective Psychotherapists: Cross-sectional Study. JMIR Hum Factors 2023; 10:e46859. [PMID: 37436801] [PMCID: PMC10372564] [DOI: 10.2196/46859]
Abstract
BACKGROUND Despite growing efforts to develop user-friendly artificial intelligence (AI) applications for clinical care, their adoption remains limited because of barriers at the individual, organizational, and system levels. There is limited research on the intention to use AI systems in mental health care. OBJECTIVE This study aimed to address this gap by examining the predictors of psychology students' and early practitioners' intention to use 2 specific AI-enabled mental health tools based on the Unified Theory of Acceptance and Use of Technology. METHODS This cross-sectional study included 206 psychology students and psychotherapists in training to examine the predictors of their intention to use 2 AI-enabled mental health care tools. The first tool provides feedback to the psychotherapist on their adherence to motivational interviewing techniques. The second tool uses patient voice samples to derive mood scores that the therapists may use for treatment decisions. Participants were presented with graphic depictions of the tools' functioning mechanisms before measuring the variables of the extended Unified Theory of Acceptance and Use of Technology. In total, 2 structural equation models (1 for each tool) were specified, which included direct and mediated paths for predicting tool use intentions. RESULTS Perceived usefulness and social influence had a positive effect on the intention to use the feedback tool (P<.001) and the treatment recommendation tool (perceived usefulness, P=.01 and social influence, P<.001). However, trust was unrelated to use intentions for both the tools. Moreover, perceived ease of use was unrelated (feedback tool) and even negatively related (treatment recommendation tool) to use intentions when considering all predictors (P=.004).
In addition, a positive relationship between cognitive technology readiness (P=.02) and the intention to use the feedback tool and a negative relationship between AI anxiety and the intention to use the feedback tool (P=.001) and the treatment recommendation tool (P<.001) were observed. CONCLUSIONS The results shed light on the general and tool-dependent drivers of AI technology adoption in mental health care. Future research may explore the technological and user group characteristics that influence the adoption of AI-enabled tools in mental health care.
Affiliation(s)
- Anne-Kathrin Kleine
  - Department of Psychology, Ludwig Maximilian University of Munich, Munich, Germany
- Eesha Kokje
  - Department of Psychology, Ludwig Maximilian University of Munich, Munich, Germany
- Eva Lermer
  - Department of Psychology, Ludwig Maximilian University of Munich, Munich, Germany
  - Technical University of Applied Sciences Augsburg, Augsburg, Germany
- Susanne Gaube
  - Department of Psychology, Ludwig Maximilian University of Munich, Munich, Germany

13
Mesko B. The ChatGPT (Generative Artificial Intelligence) Revolution Has Made Artificial Intelligence Approachable for Medical Professionals. J Med Internet Res 2023; 25:e48392. [PMID: 37347508] [PMCID: PMC10337400] [DOI: 10.2196/48392]
Abstract
In November 2022, OpenAI publicly launched its large language model (LLM), ChatGPT, and reached the milestone of over 100 million users in only 2 months. LLMs have been shown to be useful in a myriad of health care-related tasks and processes. In this paper, I argue that attention to, public access to, and debate about LLMs have initiated a wave of products and services using generative artificial intelligence (AI), a technology that had previously found it hard to attract physicians. This paper describes what AI tools have become available since the beginning of the ChatGPT revolution and contemplates how they might change physicians' perceptions of this breakthrough technology.
Affiliation(s)
- Bertalan Mesko
  - The Medical Futurist Institute, Budapest, Hungary
  - Department of Behavioural Sciences, Semmelweis University, Budapest, Hungary

14
Wagner G, Raymond L, Paré G. Understanding Prospective Physicians' Intention to Use Artificial Intelligence in Their Future Medical Practice: Configurational Analysis. JMIR Med Educ 2023; 9:e45631. [PMID: 36947121] [PMCID: PMC10131981] [DOI: 10.2196/45631]
Abstract
BACKGROUND Prospective physicians are expected to find artificial intelligence (AI) to be a key technology in their future practice. This transformative change has caught the attention of scientists, educators, and policy makers alike, with substantive efforts dedicated to the selection and delivery of AI topics and competencies in the medical curriculum. Less is known about the behavioral perspective or the necessary and sufficient preconditions for medical students' intention to use AI in the first place. OBJECTIVE Our study focused on medical students' knowledge, experience, attitude, and beliefs related to AI and aimed to understand whether they are necessary conditions and form sufficient configurations of conditions associated with behavioral intentions to use AI in their future medical practice. METHODS We administered a 2-staged questionnaire operationalizing the variables of interest (ie, knowledge, experience, attitude, and beliefs related to AI, as well as intention to use AI) and recorded 184 responses at t0 (February 2020, before the COVID-19 pandemic) and 138 responses at t1 (January 2021, during the COVID-19 pandemic). Following established guidelines, we applied necessary condition analysis and fuzzy-set qualitative comparative analysis to analyze the data. RESULTS Findings from the fuzzy-set qualitative comparative analysis show that the intention to use AI is only observed when students have a strong belief in the role of AI (individually necessary condition); certain AI profiles, that is, combinations of knowledge and experience, attitudes and beliefs, and academic level and gender, are always associated with high intentions to use AI (equifinal and sufficient configurations); and profiles associated with nonhigh intentions cannot be inferred from profiles associated with high intentions (causal asymmetry). 
CONCLUSIONS Our work contributes to prior knowledge by showing that a strong belief in the role of AI in the future of medical professions is a necessary condition for behavioral intentions to use AI. Moreover, we suggest that the preparation of medical students should go beyond teaching AI competencies and that educators need to account for the different AI profiles associated with high or nonhigh intentions to adopt AI.
Affiliation(s)
- Gerit Wagner
  - Faculty of Information Systems and Applied Computer Sciences, University of Bamberg, Bamberg, Germany
- Louis Raymond
  - Université du Québec à Trois-Rivières, Trois-Rivières, QC, Canada
- Guy Paré
  - Department of Information Technologies, École des Hautes Études commerciales Montréal, Montréal, QC, Canada

15
Wu A, Xue P, Abulizi G, Tuerxun D, Rezhake R, Qiao Y. Artificial intelligence in colposcopic examination: A promising tool to assist junior colposcopists. Front Med (Lausanne) 2023; 10:1060451. [PMID: 37056736] [PMCID: PMC10088560] [DOI: 10.3389/fmed.2023.1060451]
Abstract
Introduction Well-trained colposcopists are in severe shortage worldwide, especially in low-resource areas. Here, we aimed to evaluate the Colposcopic Artificial Intelligence Auxiliary Diagnostic System (CAIADS) in detecting abnormalities from digital colposcopy images, focusing on its role in helping junior colposcopists correctly identify the lesion areas where biopsy should be performed. Materials and methods This is a hospital-based retrospective study, which recruited women who visited colposcopy clinics between September 2021 and January 2022. A total of 366 of 1,146 women with complete medical information recorded by a senior colposcopist and valid histology results were included. Anonymized colposcopy images were reviewed by CAIADS and a junior colposcopist separately, and the junior colposcopist then reviewed the images together with the CAIADS results (CAIADS-Junior). The diagnostic accuracy and biopsy efficiency of CAIADS and CAIADS-Junior were assessed in detecting cervical intraepithelial neoplasia grade 2 or worse (CIN2+), CIN3+, and cancer, in comparison with the senior and junior colposcopists. The factors influencing the accuracy of CAIADS were explored. Results For CIN2+ and CIN3+ detection, CAIADS showed a sensitivity of ~80%, which was not significantly lower than that of the senior colposcopist (CIN2+: 80.6 vs. 91.3%, p = 0.061; CIN3+: 80.0 vs. 90.0%, p = 0.189). With the assistance of CAIADS, the sensitivity of the junior colposcopist increased significantly (CIN2+: 95.1 vs. 79.6%, p = 0.002; CIN3+: 97.1 vs. 85.7%, p = 0.039) and became comparable to that of the senior colposcopist (CIN2+: 95.1 vs. 91.3%, p = 0.388; CIN3+: 97.1 vs. 90.0%, p = 0.125). In detecting cervical cancer, CAIADS achieved the highest sensitivity, at 100%.
For all endpoints, CAIADS showed the highest specificity (55-64%) and positive predictive values compared with both senior and junior colposcopists. As CIN grades increased, the average number of biopsies decreased for the subspecialists, and CAIADS required the fewest biopsies per detected case (2.2-2.6). Meanwhile, the biopsy sensitivity of the junior colposcopist was the lowest, but with CAIADS assistance the junior colposcopist achieved higher biopsy sensitivity. Conclusion The Colposcopic Artificial Intelligence Auxiliary Diagnostic System could assist junior colposcopists in improving diagnostic accuracy and biopsy efficiency, which might be a promising solution for improving the quality of cervical cancer screening in low-resource settings.
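Paired sensitivity comparisons like those above (e.g., 95.1 vs. 79.6%, p = 0.002, on the same set of cases) are typically made with McNemar's test on the discordant pairs; the paper does not state its exact procedure, so this is a sketch of the common exact version, with hypothetical discordant counts:

```python
from math import comb

def mcnemar_exact_p(b: int, c: int) -> float:
    """Two-sided exact McNemar test.

    b and c are the discordant-pair counts: cases detected by one reader
    but missed by the other, and vice versa. Under the null, discordant
    pairs split 50/50, so the p value is a two-sided binomial tail.
    """
    n = b + c
    k = min(b, c)
    tail = sum(comb(n, i) for i in range(0, k + 1)) / 2 ** n
    return min(1.0, 2.0 * tail)

# Hypothetical: 16 cases caught only with CAIADS assistance, 0 caught
# only without it -> a clearly significant improvement.
p = mcnemar_exact_p(16, 0)
print(f"{p:.2e}")  # 3.05e-05
```

The test uses only the discordant pairs because cases both readers got right (or both missed) carry no information about which reader is more sensitive.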
Affiliation(s)
- Aiyuan Wu
  - The Affiliated Cancer Hospital of Xinjiang Medical University, Urumqi, China
- Peng Xue
  - School of Population Medicine and Public Health, Peking Union Medical College, Chinese Academy of Medical Sciences, Beijing, China
- Guzhalinuer Abulizi
  - The Affiliated Cancer Hospital of Xinjiang Medical University, Urumqi, China
- Dilinuer Tuerxun
  - The Affiliated Cancer Hospital of Xinjiang Medical University, Urumqi, China
- Remila Rezhake
  - The Affiliated Cancer Hospital of Xinjiang Medical University, Urumqi, China
- Youlin Qiao
  - The Affiliated Cancer Hospital of Xinjiang Medical University, Urumqi, China
  - School of Population Medicine and Public Health, Peking Union Medical College, Chinese Academy of Medical Sciences, Beijing, China

16
Mousavi Baigi SF, Sarbaz M, Ghaddaripouri K, Ghaddaripouri M, Mousavi AS, Kimiafar K. Attitudes, knowledge, and skills towards artificial intelligence among healthcare students: A systematic review. Health Sci Rep 2023; 6:e1138. [PMID: 36923372] [PMCID: PMC10009305] [DOI: 10.1002/hsr2.1138]
Abstract
Background and Aims This systematic review examined healthcare students' attitudes, knowledge, and skills in artificial intelligence (AI). Methods On August 3, 2022, studies were retrieved from the PubMed, Embase, Scopus, and Web of Science databases. Preferred Reporting Items for Systematic Reviews and Meta-Analyses recommendations were followed. We included cross-sectional studies that examined healthcare students' knowledge, attitudes, skills, and perceptions of AI. Titles and abstracts were screened against the eligibility criteria; full texts were then retrieved and independently reviewed, and data were collected using a standardized form. Results In 29 (76%) of the 38 included studies, healthcare students had a positive and promising attitude towards AI in the clinical profession and its use in the future; in the remaining nine studies (24%), students considered AI a threat to healthcare fields and had a negative attitude towards it. Furthermore, 26 studies evaluated healthcare students' knowledge of AI. Among these, 18 studies evaluated the level of student knowledge as low (50%); six studies reported high student knowledge of AI, and two reported average general knowledge (almost 50%). Of the six studies that assessed skills, four (67%) found that students had very low skills and stated that they had never worked with AI. Conclusion Evidence from this review shows that healthcare students had a positive and promising attitude towards AI in medicine; however, most students had low knowledge and limited skills in working with AI. Face-to-face instruction, training manuals, and detailed instructions are therefore crucial for implementing AI technology, comprehending how it works, and raising students' knowledge of its advantages.
Affiliation(s)
- Seyyedeh Fatemeh Mousavi Baigi
  - Department of Health Information Technology, School of Paramedical and Rehabilitation Sciences, Mashhad University of Medical Sciences, Mashhad, Iran
  - Student Research Committee, Mashhad University of Medical Sciences, Mashhad, Iran
- Masoumeh Sarbaz
  - Department of Health Information Technology, School of Paramedical and Rehabilitation Sciences, Mashhad University of Medical Sciences, Mashhad, Iran
- Kosar Ghaddaripouri
  - Department of Health Information Technology, Varastegan Institute of Medical Sciences, Mashhad, Iran
- Maryam Ghaddaripouri
  - Department of Laboratory Sciences, School of Paramedical and Rehabilitation Sciences, Mashhad University of Medical Sciences, Mashhad, Iran
- Atefeh Sadat Mousavi
  - Department of Health Information Technology, School of Paramedical and Rehabilitation Sciences, Mashhad University of Medical Sciences, Mashhad, Iran
- Khalil Kimiafar
  - Department of Health Information Technology, School of Paramedical and Rehabilitation Sciences, Mashhad University of Medical Sciences, Mashhad, Iran

17
Qian X, Jingying H, Xian S, Yuqing Z, Lili W, Baorui C, Wei G, Yefeng Z, Qiang Z, Chunyan C, Cheng B, Kai M, Yi Q. The effectiveness of artificial intelligence-based automated grading and training system in education of manual detection of diabetic retinopathy. Front Public Health 2022; 10:1025271. [PMID: 36419999] [PMCID: PMC9678340] [DOI: 10.3389/fpubh.2022.1025271]
Abstract
Background The purpose of this study was to develop an artificial intelligence (AI)-based automated diabetic retinopathy (DR) grading and training system from a real-world diabetic dataset in China and, in particular, to investigate its effectiveness as a learning tool for manual DR grading by medical students. Methods We developed an automated DR grading and training system equipped with an AI-driven diagnosis algorithm that highlights highly prognosis-related regions in the input image. Less experienced prospective physicians took pre- and post-training tests on the AI diagnosis platform, and changes in their diagnostic accuracy were evaluated. Results We randomly selected 8,063 cases diagnosed with DR and 7,925 non-DR fundus images from patients with type 2 diabetes. The automated DR grading system achieved accuracy, sensitivity/specificity, and AUC values of 0.965, 0.965/0.966, and 0.980 (95% CI: 0.976-0.984) for moderate or worse DR. When graders received assistance from the output of the AI system, these metrics improved to varying degrees: the system helped raise the accuracy of human graders, i.e., junior residents and medical students, from 0.947 and 0.915 to 0.978 and 0.954, respectively. Conclusion The AI-based system demonstrated high diagnostic accuracy for the detection of DR on fundus images from real-world diabetic patients and could be used as a training aid for trainees lacking formal instruction on DR management.
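The accuracy, sensitivity, and specificity figures above are plain confusion-matrix ratios (AUC additionally requires ranking the model's scores). A minimal sketch, using hypothetical counts chosen only to land near the reported 0.965, not the study's data:

```python
def screening_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard diagnostic-test metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),               # true-positive rate
        "specificity": tn / (tn + fp),               # true-negative rate
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "ppv": tp / (tp + fp),                       # positive predictive value
    }

# Hypothetical: 200 DR-positive and 200 DR-negative fundus images,
# 7 errors on each side.
m = screening_metrics(tp=193, fp=7, tn=193, fn=7)
print(m["sensitivity"], m["specificity"], m["accuracy"])  # 0.965 0.965 0.965
```

Because sensitivity and specificity are computed within the positive and negative groups separately, they are unaffected by the DR prevalence in the dataset, whereas accuracy and PPV are not.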
Affiliation(s)
- Xu Qian
  - Department of Geriatrics, Qilu Hospital of Shandong University, Jinan, China
  - Key Laboratory of Cardiovascular Proteomics of Shandong Province, Jinan, China
  - Jinan Clinical Research Center for Geriatric Medicine (202132001), Jinan, China
- Han Jingying
  - School of Basic Medical Sciences, Shandong University, Jinan, China
- Song Xian
  - Department of Geriatrics, Qilu Hospital of Shandong University, Jinan, China
- Zhao Yuqing
  - Department of Geriatrics, Qilu Hospital of Shandong University, Jinan, China
- Wu Lili
  - Department of Geriatrics, Qilu Hospital of Shandong University, Jinan, China
- Chu Baorui
  - Department of Geriatrics, Qilu Hospital of Shandong University, Jinan, China
- Guo Wei
  - Lunan Eye Hospital, Linyi, China
- Ma Kai
  - Tencent Healthcare, Shenzhen, China
- Qu Yi
  - Department of Geriatrics, Qilu Hospital of Shandong University, Jinan, China
  - Key Laboratory of Cardiovascular Proteomics of Shandong Province, Jinan, China
  - Jinan Clinical Research Center for Geriatric Medicine (202132001), Jinan, China

18
Chen M, Zhang B, Cai Z, Seery S, Gonzalez MJ, Ali NM, Ren R, Qiao Y, Xue P, Jiang Y. Acceptance of clinical artificial intelligence among physicians and medical students: A systematic review with cross-sectional survey. Front Med (Lausanne) 2022; 9:990604. [PMID: 36117979] [PMCID: PMC9472134] [DOI: 10.3389/fmed.2022.990604]
Abstract
Background Artificial intelligence (AI) needs to be accepted and understood by physicians and medical students, but few studies have systematically assessed their attitudes. We investigated clinical AI acceptance among physicians and medical students around the world to provide implementation guidance. Materials and methods We conducted a two-stage study, involving a foundational systematic review of physician and medical student acceptance of clinical AI. This enabled us to design a suitable web-based questionnaire, which was then distributed among practitioners and trainees around the world. Results Sixty studies were included in the systematic review, and 758 respondents from 39 countries completed the online questionnaire. Five (62.50%) of eight studies reported 65% or higher awareness of the application of clinical AI; however, only 10-30% of respondents had actually used AI, and 26 (74.28%) of 35 studies suggested a lack of AI knowledge. Our questionnaire uncovered a 38% awareness rate and a 20% utility rate of clinical AI, although 53% of respondents lacked basic knowledge of clinical AI. Forty-five studies mentioned attitudes toward clinical AI, and in 38 (84.44%) of them over 60% of respondents were positive about AI, although they were also concerned about the potential for unpredictable or incorrect results. Seventy-seven percent were optimistic about the prospect of clinical AI. Support for the statement that AI could replace physicians ranged from 6 to 78% across the 40 studies that mentioned this topic. Five studies recommended that efforts be made to increase collaboration. Our questionnaire showed that 68% disagreed that AI would become a surrogate physician but believed it should assist in clinical decision-making. Participants with different identities and experience and from different countries held similar but subtly different attitudes.
Conclusion Most physicians and medical students appear aware of the increasing application of clinical AI, but lack practical experience and related knowledge. Overall, participants have positive but reserved attitudes about AI. In spite of the mixed opinions around clinical AI becoming a surrogate physician, there was a consensus that collaborations between the two should be strengthened. Further education should be conducted to alleviate anxieties associated with change and adopting new technologies.
Affiliation(s)
- Mingyang Chen
  - School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Bo Zhang
  - School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Ziting Cai
  - School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Samuel Seery
  - Faculty of Health and Medicine, Division of Health Research, Lancaster University, Lancaster, United Kingdom
- Nasra M. Ali
  - The First Affiliated Hospital, Dalian Medical University, Dalian, China
- Ran Ren
  - Global Health Research Center, Dalian Medical University, Dalian, China
- Youlin Qiao
  - School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Peng Xue
  - School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Yu Jiang
  - School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China