1. Lin HL, Liao LL, Wang YN, Chang LC. Attitude and utilization of ChatGPT among registered nurses: A cross-sectional study. Int Nurs Rev 2025; 72:e13012. PMID: 38979771. DOI: 10.1111/inr.13012.
Abstract
AIM: This study explores the factors influencing attitudes and behaviors toward the use of ChatGPT, based on the Technology Acceptance Model, among registered nurses in Taiwan.
BACKGROUND: The complexity of medical services, together with nursing shortages, increases workloads. ChatGPT swiftly answers medical questions, provides clinical guidelines, and assists with patient information management, thereby improving nursing efficiency.
INTRODUCTION: To facilitate the development of effective ChatGPT training programs, it is essential to examine registered nurses' attitudes toward and utilization of ChatGPT across diverse workplace settings.
METHODS: An anonymous online survey was used to collect data from over 1000 registered nurses recruited through social media platforms between November 2023 and January 2024. Descriptive statistics and multiple linear regression analyses were conducted.
RESULTS: Among respondents, some were unfamiliar with ChatGPT, while others had used it before, with higher usage among males, more highly educated individuals, experienced nurses, and supervisors. Gender and work settings influenced perceived risks, and those familiar with ChatGPT recognized its social impact. Perceived risk and usefulness significantly influenced its adoption.
DISCUSSION: Nurses' attitudes toward ChatGPT vary with gender, education, experience, and role. Positive perceptions emphasize its usefulness, while risk concerns affect adoption. The insignificant role of perceived ease of use highlights ChatGPT's user-friendly nature.
CONCLUSION: Over half of the surveyed nurses had used or were familiar with ChatGPT and showed positive attitudes toward its use. Establishing rigorous guidelines to enhance their interaction with ChatGPT is crucial for future training.
IMPLICATIONS FOR NURSING AND HEALTH POLICY: Nurse managers should understand registered nurses' attitudes toward ChatGPT and integrate it into in-service education with tailored support and training, including appropriate prompt formulation and advanced decision-making, to prevent misuse.
Affiliation(s)
- Hui-Ling Lin: Department of Nursing, Linkou Branch, Chang Gung Memorial Hospital, Taoyuan, Taiwan, ROC; School of Nursing, College of Medicine, Chang Gung University, Taoyuan, Taiwan, ROC; School of Nursing, Chang Gung University of Science and Technology, Gui-Shan Town, Taoyuan, Taiwan, ROC; Taipei Medical University, Taipei, Taiwan
- Li-Ling Liao: Department of Public Health, College of Health Science, Kaohsiung Medical University, Kaohsiung City, Taiwan; Department of Medical Research, Kaohsiung Medical University Hospital, Kaohsiung City, Taiwan
- Ya-Ni Wang: School of Nursing, College of Medicine, Chang Gung University, Taoyuan, Taiwan, ROC
- Li-Chun Chang: Department of Nursing, Linkou Branch, Chang Gung Memorial Hospital, Taoyuan, Taiwan, ROC; School of Nursing, College of Medicine, Chang Gung University, Taoyuan, Taiwan, ROC; School of Nursing, Chang Gung University of Science and Technology, Gui-Shan Town, Taoyuan, Taiwan, ROC

2. Henzler D, Schmidt S, Koçar A, Herdegen S, Lindinger GL, Maris MT, Bak MAR, Willems DL, Tan HL, Lauerer M, Nagel E, Hindricks G, Dagres N, Konopka MJ. Healthcare professionals' perspectives on artificial intelligence in patient care: a systematic review of hindering and facilitating factors on different levels. BMC Health Serv Res 2025; 25:633. PMID: 40312413. PMCID: PMC12046968. DOI: 10.1186/s12913-025-12664-2.
Abstract
BACKGROUND: Artificial intelligence (AI) applications present opportunities to enhance the diagnosis, prognosis, and treatment of various diseases. To successfully integrate and utilize AI in healthcare, it is crucial to understand the perspectives of healthcare professionals and to address the challenges they associate with AI adoption at an early stage. The aim of this review is therefore to provide a comprehensive overview of empirical studies exploring healthcare professionals' perspectives on AI in healthcare.
METHODS: The review was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework. The databases MEDLINE, PsycINFO, and Web of Science were searched for the period 2017 to 2024 using terms related to 'healthcare professionals', 'artificial intelligence', and 'perspectives'. Peer-reviewed articles employing quantitative, qualitative, or mixed-methods approaches were eligible. Extracted facilitating and hindering factors were analyzed according to the dimensions of the socio-ecological model.
RESULTS: The search yielded 4,499 articles published up to February 2024. After title and abstract screening, 150 full texts were assessed for eligibility, and 72 studies were ultimately included in the synthesis. The extracted perspectives on AI were thematically analyzed using the socio-ecological model to identify the various levels of influence and to categorize them into facilitating and hindering factors. In total, 49 facilitating and 43 hindering factors were identified across all levels of the socio-ecological model.
CONCLUSIONS: The findings of this review can serve as a foundation for developing guidelines for AI implementation addressing various stakeholders, from healthcare professionals to policymakers. Future research should focus on the empirical adoption of AI applications and, where possible, further examine the hindering factors associated with different types of AI.
Affiliation(s)
- Dennis Henzler: Institute of Management for Medicine and Health Sciences, University of Bayreuth, Prieserstr. 2, Bayreuth, 95444, Germany
- Sebastian Schmidt: Institute of Management for Medicine and Health Sciences, University of Bayreuth, Prieserstr. 2, Bayreuth, 95444, Germany
- Ayca Koçar: Institute of Management for Medicine and Health Sciences, University of Bayreuth, Prieserstr. 2, Bayreuth, 95444, Germany
- Sophie Herdegen: Institute of Management for Medicine and Health Sciences, University of Bayreuth, Prieserstr. 2, Bayreuth, 95444, Germany
- Georg L Lindinger: Institute of Management for Medicine and Health Sciences, University of Bayreuth, Prieserstr. 2, Bayreuth, 95444, Germany
- Menno T Maris: Department of Ethics, Law and Humanities, Amsterdam UMC, De Boelelaan 1089a, Amsterdam, 1081 HV, The Netherlands
- Marieke A R Bak: Department of Ethics, Law and Humanities, Amsterdam UMC, De Boelelaan 1089a, Amsterdam, 1081 HV, The Netherlands
- Dick L Willems: Department of Ethics, Law and Humanities, Amsterdam UMC, De Boelelaan 1089a, Amsterdam, 1081 HV, The Netherlands
- Hanno L Tan: Department of Clinical and Experimental Cardiology, Heart Center, Amsterdam UMC, Meibergdreef 9, Amsterdam, 1105 AZ, The Netherlands; Netherlands Heart Institute, Moreelsepark 1, Utrecht, 3511 EP, The Netherlands
- Michael Lauerer: Institute of Management for Medicine and Health Sciences, University of Bayreuth, Prieserstr. 2, Bayreuth, 95444, Germany
- Eckhard Nagel: Institute of Management for Medicine and Health Sciences, University of Bayreuth, Prieserstr. 2, Bayreuth, 95444, Germany
- Gerhard Hindricks: German Heart Center of the Charité-University Medicine Berlin, Augustenburger Pl. 1, Berlin, 13353, Germany
- Nikolaos Dagres: German Heart Center of the Charité-University Medicine Berlin, Augustenburger Pl. 1, Berlin, 13353, Germany
- Magdalena J Konopka: Institute of Management for Medicine and Health Sciences, University of Bayreuth, Prieserstr. 2, Bayreuth, 95444, Germany; Department of Epidemiology, Maastricht University, Peter Debyeplein 1, 6229 HA, Maastricht, The Netherlands

3. Zhang J, Wang J, Zhang J, Xia X, Zhou Z, Zhou X, Wu Y. Young Adult Perspectives on Artificial Intelligence-Based Medication Counseling in China: Discrete Choice Experiment. J Med Internet Res 2025; 27:e67744. PMID: 40203305. PMCID: PMC12018864. DOI: 10.2196/67744.
Abstract
BACKGROUND: As artificial intelligence (AI) permeates society, the younger generation is becoming increasingly accustomed to digital solutions. AI-based medication counseling services may help people take medications more accurately and reduce adverse events. However, it is not known which AI-based medication counseling service features young people prefer.
OBJECTIVE: This study aims to assess young people's preferences for AI-based medication counseling services.
METHODS: A discrete choice experiment (DCE) was the main analysis method, involving 6 attributes: granularity, linguistic comprehensibility, symptom-specific results, access platforms, content model, and costs. Participants were screened and recruited through web-based registration and investigator visits, and the questionnaire was completed online on the Questionnaire Star platform. The sample consisted of young adults aged 18-44 years. A mixed logit model was used to estimate attribute preference coefficients, willingness to pay (WTP), and relative importance (RI) scores. Subgroup analyses were conducted to check for heterogeneity in preferences.
RESULTS: In total, 340 participants were included, generating 8160 DCE observations. Participants exhibited a strong preference for receiving 100% symptom-specific results (β=3.18, 95% CI 2.54-3.81; P<.001), consistent with this attribute's relative importance (RI=36.99%). They also preferred the video content model (β=0.86, 95% CI 0.51-1.22; P<.001), easy-to-understand language (β=0.81, 95% CI 0.46-1.16; P<.001), and, for granularity, refined content over general information (β=0.51, 95% CI 0.21-0.80; P<.001). Participants also showed a notable preference for accessing information through WeChat applets rather than websites (β=0.66, 95% CI 0.27-1.05; P<.001). The WTP for AI-based medication counseling services ranked, from highest to lowest: symptom-specific results, easy-to-understand language, video content, WeChat applet platform, and refined medication counseling. The WTP for 100% symptom-specific results was the highest (¥24.01, 95% CI 20.16-28.77; US $1=¥7.09). High-income participants exhibited significantly higher WTP for highly accurate results (¥45.32) than low-income participants (¥20.65). Similarly, participants with higher education levels showed greater preferences for easy-to-understand language (¥5.93) and video content (¥12.53).
CONCLUSIONS: This study provides an in-depth investigation of young people's preferences for AI-based medication counseling services. Service providers should prioritize symptom-specific results, support more convenient access platforms, use easy-to-understand language, enrich content models with multiple digital media interactions, and offer more refined medication counseling when developing AI-based medication counseling services.
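
To make the WTP figures above concrete: in a discrete choice experiment with a linear cost attribute, the willingness to pay for an attribute level is its preference coefficient divided by the absolute value of the cost coefficient. The sketch below (Python) illustrates that calculation using the coefficients reported in the abstract; the cost coefficient and the helper function are illustrative assumptions, not values or code from the study.

    # Illustrative sketch: deriving willingness to pay (WTP) from discrete choice
    # experiment coefficients, assuming a mixed logit with a linear cost term.
    # BETA_COST is a made-up placeholder, not a value reported by the study.

    def willingness_to_pay(beta_attribute: float, beta_cost: float) -> float:
        """WTP is the marginal rate of substitution between an attribute and cost."""
        return beta_attribute / abs(beta_cost)

    # Attribute-level preference coefficients reported in the abstract.
    betas = {
        "100% symptom-specific results": 3.18,
        "video content model": 0.86,
        "easy-to-understand language": 0.81,
        "WeChat applet platform": 0.66,
        "refined granularity": 0.51,
    }

    BETA_COST = -0.13  # hypothetical cost coefficient (negative: higher cost is worse)

    for attribute, beta in betas.items():
        print(f"{attribute}: WTP ~ {willingness_to_pay(beta, BETA_COST):.2f} yuan")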
Affiliation(s)
- Jia Zhang: Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, China
- Jing Wang: School of Public Health, Peking University, Beijing, China
- JingBo Zhang: Evidence Based Medicine Research Center, Beijing University of Chinese Medicine, Beijing, China
- XiaoQian Xia: Department of Public Health, Environments & Society, London School of Hygiene and Tropical Medicine, London, United Kingdom
- ZiYun Zhou: Xiangya School of Nursing, Central South University, Changsha, China
- XiaoMing Zhou: Department of Research, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, China
- YiBo Wu: School of Public Health, Peking University, Beijing, China

4. Alghitran A, AlOsaimi HM, Albuluwi A, Almalki EO, Aldowayan AZ, Alharthi R, Qattan JM, Alghamdi F, AlHalabi M, Almalki NA, Alharthi A, Alshammari A, Kanan M. Integrating ChatGPT as a Tool in Pharmacy Practice: A Cross-Sectional Exploration Among Pharmacists in Saudi Arabia. Integr Pharm Res Pract 2025; 14:31-43. PMID: 40125532. PMCID: PMC11927492. DOI: 10.2147/iprp.s500689.
Abstract
Purpose: Artificial intelligence (AI), especially ChatGPT, is rapidly being assimilated into healthcare, offering significant advantages in pharmacy practice such as improved clinical decision-making, patient counselling, and drug information management. The adoption of AI tools is heavily contingent upon pharmacy practitioners' knowledge, attitudes, and practices (KAP). This study sought to evaluate the knowledge and practices of pharmacists in Saudi Arabia concerning the utilization of ChatGPT in their daily activities.
Patients and Methods: A cross-sectional study was performed from May 2023 to July 2024 among pharmacists in Riyadh, Saudi Arabia. An online pre-validated KAP questionnaire was disseminated, collecting data on demographics, knowledge, attitudes, and practices regarding ChatGPT. Descriptive statistics and regression analyses were conducted using SPSS.
Results: Of 1022 respondents, 78.7% were familiar with AI in pharmacy, and 90.1% correctly identified ChatGPT as an advanced AI chatbot. Positive attitudes toward ChatGPT were reported by 64.1% of pharmacists, although only 24.3% used AI tools regularly. Significant predictors of positive attitudes and practices included academic/research roles (β=0.7, p=0.005) and 6-10 years of experience (β=0.9, p=0.05). Ethical concerns were raised by 64% of respondents, and 92% reported a lack of formal training.
Conclusion: While the majority of pharmacists held positive attitudes toward ChatGPT, practical implementation remains limited due to ethical concerns and inadequate training. Addressing these barriers is essential for successful AI integration in pharmacy, supporting Saudi Arabia's Vision 2030 initiative.
Affiliation(s)
- Abdulrahman Alghitran: General Administration of Pharmaceutical Care, Ministry of Health, Riyadh, Saudi Arabia
- Hind M AlOsaimi: Department of Pharmacy Services Administration, King Fahad Medical City, Riyadh Second Health Cluster, Riyadh, Saudi Arabia
- Ahmad Albuluwi: General Administration of Pharmaceutical Care, Ministry of Health, Riyadh, Saudi Arabia
- Rakan Alharthi: College of Pharmacy, Taif University, Taif, 26513, Saudi Arabia
- Fahd Alghamdi: College of Pharmacy, Taif University, Taif, 26513, Saudi Arabia
- Asma Alshammari: Department of Pharmaceutical Care, King Abdulaziz Medical City, Riyadh, Saudi Arabia
- Muhammad Kanan: Department of Pharmacy Services Administration, Rafha General Hospital, Northern Border Cluster, Saudi Arabia

5. Benaïche A, Billaut-Laden I, Randriamihaja H, Bertocchio JP. Assessment of the Efficiency of a ChatGPT-Based Tool, MyGenAssist, in an Industry Pharmacovigilance Department for Case Documentation: Cross-Over Study. J Med Internet Res 2025; 27:e65651. PMID: 40063946. PMCID: PMC11933758. DOI: 10.2196/65651.
Abstract
BACKGROUND: At the end of 2023, Bayer AG launched its own internal large language model (LLM), MyGenAssist, based on ChatGPT technology, to overcome data privacy concerns. Such a tool may reduce the burden of repetitive and recurrent tasks and free up time that can then be dedicated to activities with higher added value. Although there is an ongoing worldwide reflection on whether artificial intelligence should be integrated into pharmacovigilance, the medical literature does not provide enough data concerning LLMs and their daily applications in such a setting. Here, we studied how this tool could improve the case documentation process, which is a duty for authorization holders under European and French good vigilance practices.
OBJECTIVE: The aim of the study is to test whether the use of an LLM could improve the pharmacovigilance documentation process.
METHODS: MyGenAssist was trained to draft templates for case documentation letters meant to be sent to the reporters. The information provided within the template changes depending on the case; these data come from a table sent to the LLM. We then measured the time spent on each case over a period of 4 months (2 months before using the tool and 2 months after its implementation). A multiple linear regression model was created with the time spent on each case as the explained variable, and all parameters that could influence this time were included as explanatory variables (use of MyGenAssist, type of recipient, number of questions, and user). To test whether the use of the tool affects the process, we compared the recipients' response rates with and without the use of MyGenAssist.
RESULTS: MyGenAssist yielded an average time saving of 23.3% (95% CI 13.8%-32.8%) per case (P<.001; adjusted R2=0.286), which could represent an average of 10.7 (SD 3.6) working days saved each year. The answer rate was not modified by the use of MyGenAssist (20/48, 42% vs 27/74, 36%; P=.57), whether the recipient was a physician or a patient. No significant difference was found in the time recipients took to answer (mean 2.20, SD 3.27 days vs mean 2.65, SD 3.30 days after the last attempt at contact; P=.64). Implementing MyGenAssist for this activity required only a 2-hour training session for the pharmacovigilance team.
CONCLUSIONS: Our study is the first to show that a ChatGPT-based tool can improve the efficiency of a good practice activity without requiring a long training session for the affected workforce. These first encouraging results could be an incentive for the implementation of LLMs in other processes.
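
The regression described in the Methods can be pictured as follows: the sketch below (Python, statsmodels) fits an ordinary least squares model with time per case as the explained variable and tool use, recipient type, number of questions, and user as explanatory variables. The data frame, column names, and values are synthetic placeholders, not the study's data.

    # Illustrative sketch of the multiple linear regression described above:
    # time spent per case explained by MyGenAssist use, recipient type,
    # number of questions, and user. All data below are synthetic placeholders.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 200
    cases = pd.DataFrame({
        "time_minutes": rng.normal(30, 8, n),                  # time spent per case
        "uses_mygenassist": rng.integers(0, 2, n),             # 1 = letter drafted with the tool
        "recipient": rng.choice(["physician", "patient"], n),  # type of recipient
        "n_questions": rng.integers(1, 10, n),                 # questions in the letter
        "user": rng.choice(["A", "B", "C"], n),                # pharmacovigilance team member
    })

    # Explained variable: time per case; explanatory variables as listed in the abstract.
    model = smf.ols(
        "time_minutes ~ uses_mygenassist + C(recipient) + n_questions + C(user)",
        data=cases,
    ).fit()
    print(model.summary())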
Affiliation(s)
- Jean-Philippe Bertocchio: SKEZI, Les Papèteries-Image Factory, Annecy, France; Assistance Publique-Hôpitaux de Paris, Pitié-Salpêtrière Hospital, Paris, France

6. Wang YM, Shen HW, Chen TJ, Chiang SC, Lin TG. Performance of ChatGPT-3.5 and ChatGPT-4 in the Taiwan National Pharmacist Licensing Examination: Comparative Evaluation Study. JMIR Med Educ 2025; 11:e56850. PMID: 39864950. PMCID: PMC11769692. DOI: 10.2196/56850.
Abstract
Background: OpenAI released ChatGPT-3.5 and GPT-4 between 2022 and 2023. GPT-3.5 has demonstrated proficiency in various examinations, particularly the United States Medical Licensing Examination, while GPT-4 has more advanced capabilities.
Objective: This study aims to examine the efficacy of GPT-3.5 and GPT-4 in the Taiwan National Pharmacist Licensing Examination and to ascertain their utility and potential application in clinical pharmacy and education.
Methods: The pharmacist examination in Taiwan consists of 2 stages: basic subjects and clinical subjects. In this study, exam questions were manually fed into the GPT-3.5 and GPT-4 models, and their responses were recorded; graphic-based questions were excluded. The study encompassed three steps: (1) determining the answering accuracy of GPT-3.5 and GPT-4, (2) categorizing question types and observing differences in model performance across these categories, and (3) comparing model performance on calculation and situational questions. Microsoft Excel and R software were used for statistical analyses.
Results: GPT-4 achieved an accuracy rate of 72.9%, compared with 59.1% for GPT-3.5 (P<.001). In the basic subjects category, GPT-4 significantly outperformed GPT-3.5 (73.4% vs 53.2%; P<.001). However, in clinical subjects, only minor differences in accuracy were observed. GPT-4 also outperformed GPT-3.5 on calculation and situational questions.
Conclusions: This study demonstrates that GPT-4 outperforms GPT-3.5 in the Taiwan National Pharmacist Licensing Examination, particularly in basic subjects. While GPT-4 shows potential for use in clinical practice and pharmacy education, its limitations warrant caution. Future research should focus on refining prompts, improving model stability, integrating medical databases, and designing questions that better assess student competence and minimize guessing.
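
The headline accuracy comparison (72.9% vs 59.1%, P<.001) is the kind of result a two-proportion test can reproduce. The sketch below (Python, statsmodels) shows such a test; the number of scored questions per model is a placeholder assumption, since it is not stated in the abstract.

    # Illustrative two-proportion z-test for the GPT-4 vs GPT-3.5 accuracy comparison
    # (72.9% vs 59.1%). The per-model question count is a hypothetical placeholder.
    from statsmodels.stats.proportion import proportions_ztest

    n_items = 400                        # assumed number of scored exam questions per model
    correct = [round(0.729 * n_items),   # GPT-4 correct answers
               round(0.591 * n_items)]   # GPT-3.5 correct answers

    z_stat, p_value = proportions_ztest(count=correct, nobs=[n_items, n_items])
    print(f"z = {z_stat:.2f}, p = {p_value:.4f}")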
Affiliation(s)
- Ying-Mei Wang: Department of Medical Education and Research, Taipei Veterans General Hospital Hsinchu Branch, 81, Section 1, Zhongfeng Road, Zhudong, Hsinchu 310, Taiwan; Department of Pharmacy, Taipei Veterans General Hospital Hsinchu Branch, Hsinchu, Taiwan; School of Medicine, National Tsing Hua University, Hsinchu, Taiwan; Hsinchu County Pharmacists Association, Hsinchu, Taiwan
- Hung-Wei Shen: Department of Medical Education and Research, Taipei Veterans General Hospital Hsinchu Branch, 81, Section 1, Zhongfeng Road, Zhudong, Hsinchu 310, Taiwan; Department of Pharmacy, Taipei Veterans General Hospital Hsinchu Branch, Hsinchu, Taiwan; Hsinchu County Pharmacists Association, Hsinchu, Taiwan
- Tzeng-Ji Chen: Department of Family Medicine, Taipei Veterans General Hospital Hsinchu Branch, Hsinchu, Taiwan; Department of Family Medicine, Taipei Veterans General Hospital, Taipei, Taiwan; Department of Post-Baccalaureate Medicine, National Chung Hsing University, Taichung, Taiwan
- Shu-Chiung Chiang: Department of Medical Education and Research, Taipei Veterans General Hospital Hsinchu Branch, 81, Section 1, Zhongfeng Road, Zhudong, Hsinchu 310, Taiwan; Institute of Hospital and Health Care Administration, School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Ting-Guan Lin: Department of Pharmacy, Taipei Veterans General Hospital Hsinchu Branch, Hsinchu, Taiwan; Hsinchu County Pharmacists Association, Hsinchu, Taiwan

7. Anderson HD, Kwon S, Linnebur LA, Valdez CA, Linnebur SA. Pharmacy student use of ChatGPT: A survey of students at a U.S. School of Pharmacy. Curr Pharm Teach Learn 2024; 16:102156. PMID: 39029382. DOI: 10.1016/j.cptl.2024.102156.
Abstract
OBJECTIVE: To learn how students in an accredited PharmD program in the United States are using ChatGPT for personal, academic, and clinical reasons, and whether students think ChatGPT training should be incorporated into their program's curriculum.
METHODS: In August 2023, an 18-item survey was developed, pilot tested, and sent to all students who were enrolled during the Spring 2023 semester in the entry-level PharmD program at the University of Colorado. E-mail addresses were separated from survey responses to maintain anonymity. Responses were summarized using descriptive statistics.
RESULTS: A total of 206 pharmacy students responded to the survey (49% response rate). Nearly one-half (48.5%) indicated they had used ChatGPT for personal reasons, 30.2% for academic reasons, and 7.5% for clinical reasons. The most common personal use of ChatGPT was answering questions and looking up information (67.0%). The top academic reason for using ChatGPT was summarizing information or a body of text (42.6%), while the top clinical reason was simplifying a complex topic (53.3%). Most respondents (61.8%) indicated they would be interested in learning how ChatGPT could help them in pharmacy school, and 28.1% thought ChatGPT training should be incorporated into their pharmacy curriculum.
CONCLUSION: At the time of the survey, ChatGPT was being used by approximately one-half of the pharmacy student respondents for personal, academic, or clinical reasons. Overall, many students indicated they want to learn how to use ChatGPT to help with their education and think ChatGPT training should be integrated into their curriculum.
Affiliation(s)
- Heather D Anderson: University of Colorado Anschutz Medical Campus, Skaggs School of Pharmacy and Pharmaceutical Sciences, Department of Clinical Pharmacy, 12850 E. Montview Blvd, Mail Stop C238, Aurora, CO 80045, United States of America
- Sue Kwon: University of Colorado Anschutz Medical Campus, Skaggs School of Pharmacy and Pharmaceutical Sciences, Department of Clinical Pharmacy, 12850 E. Montview Blvd, Mail Stop C238, Aurora, CO 80045, United States of America
- Lauren A Linnebur: University of Colorado Anschutz Medical Campus, School of Medicine, Division of Geriatric Medicine, 12631 East 17th Avenue, Suite 8111, Aurora, CO 80045, United States of America
- Connie A Valdez: University of Colorado Anschutz Medical Campus, Skaggs School of Pharmacy and Pharmaceutical Sciences, Department of Clinical Pharmacy, 12850 E. Montview Blvd, Mail Stop C238, Aurora, CO 80045, United States of America
- Sunny A Linnebur: University of Colorado Anschutz Medical Campus, Skaggs School of Pharmacy and Pharmaceutical Sciences, Department of Clinical Pharmacy, 12850 E. Montview Blvd, Mail Stop C238, Aurora, CO 80045, United States of America

8. Bazzari FH, Bazzari AH. Utilizing ChatGPT in Telepharmacy. Cureus 2024; 16:e52365. PMID: 38230387. PMCID: PMC10790595. DOI: 10.7759/cureus.52365.
Abstract
BACKGROUND: ChatGPT is an artificial intelligence-powered chatbot that has demonstrated capabilities in numerous fields, including the medical and healthcare sciences. This study evaluates the potential application of ChatGPT in telepharmacy, the delivery of pharmaceutical care via telecommunications, by assessing its interactions, adherence to instructions, and ability to role-play as a pharmacist while handling a series of life-like scenario questions.
METHODS: Two versions (ChatGPT 3.5 and 4.0, OpenAI) were each assessed in two independent trials. ChatGPT was instructed to act as a pharmacist and answer patient inquiries, followed by a set of 20 assessment questions. ChatGPT was then instructed to stop its act, provide feedback, and list its sources for drug information. Responses to the assessment questions were evaluated for accuracy, precision, and clarity using a 4-point Likert-like scale.
RESULTS: ChatGPT demonstrated the ability to follow detailed instructions, role-play as a pharmacist, and appropriately handle all questions. It was able to understand case details; recognize generic and brand drug names; identify drug side effects, interactions, prescription requirements, and precautions; and provide proper point-by-point instructions regarding administration, dosing, storage, and disposal. The overall means of pooled scores were 3.425 (0.712) and 3.7 (0.61) for ChatGPT 3.5 and 4.0, respectively. The rank distribution of scores was not significantly different (P>0.05). None of the answers could be considered directly harmful or labeled as entirely or mostly incorrect, and most point deductions were due to other factors such as indecisiveness, adding immaterial information, missing certain considerations, or partial unclarity. The answers were similar in length across trials and appropriately concise. ChatGPT 4.0 showed superior performance, higher consistency, better character adherence, and the ability to report various reliable information sources. However, it only allowed an input of 40 questions every three hours and provided inaccurate feedback regarding the number of assessed patients, whereas ChatGPT 3.5 allowed unlimited input but was unable to provide feedback.
CONCLUSIONS: Integrating ChatGPT into telepharmacy holds promising potential; however, a number of drawbacks must be overcome for it to function effectively.
Affiliation(s)
- Amjad H Bazzari: Basic Scientific Sciences, Applied Science Private University, Amman, Jordan

9. Zawiah M, Al-Ashwal FY, Gharaibeh L, Abu Farha R, Alzoubi KH, Abu Hammour K, Qasim QA, Abrah F. ChatGPT and Clinical Training: Perception, Concerns, and Practice of Pharm-D Students. J Multidiscip Healthc 2023; 16:4099-4110. PMID: 38116306. PMCID: PMC10729768. DOI: 10.2147/jmdh.s439223.
Abstract
Background: The emergence of the Chat Generative Pre-trained Transformer (ChatGPT) by OpenAI has revolutionized AI technology, demonstrating significant potential in healthcare and pharmaceutical education, yet its real-world applicability in clinical training warrants further investigation.
Methods: A cross-sectional study was conducted between April and May 2023 to assess PharmD students' perceptions, concerns, and experiences regarding the integration of ChatGPT into clinical pharmacy education. The study used a convenience sampling method through online platforms and a questionnaire with sections on demographics, perceived benefits, concerns, and experience with ChatGPT. Statistical analysis was performed using SPSS, including descriptive and inferential analyses.
Results: Of the 211 PharmD students involved, the majority were male (77.3%) and had prior experience with artificial intelligence (68.2%). Over two-thirds were aware of ChatGPT. Most students (n=139, 65.9%) perceived potential benefits in using ChatGPT for various clinical tasks, with concerns including over-reliance, accuracy, and ethical considerations. Adoption of ChatGPT in clinical training varied: some students did not use it at all, while others utilized it for tasks such as evaluating drug-drug interactions and developing care plans. Previous users tended to report higher perceived benefits and lower concerns, but the differences were not statistically significant.
Conclusion: Utilizing ChatGPT in clinical training offers opportunities, but students' lack of trust in it for clinical decisions highlights the need for collaborative human-ChatGPT decision-making. It should complement healthcare professionals' expertise and be used strategically to compensate for human limitations. Further research is essential to optimize ChatGPT's effective integration.
Affiliation(s)
- Mohammed Zawiah: Department of Clinical Pharmacy, College of Pharmacy, Northern Border University, Rafha, 91911, Saudi Arabia; Department of Pharmacy Practice, College of Clinical Pharmacy, Hodeidah University, Al Hodeidah, Yemen
- Fahmi Y Al-Ashwal: Department of Clinical Pharmacy, College of Pharmacy, Al-Ayen University, Thi-Qar, Iraq
- Lobna Gharaibeh: Pharmacological and Diagnostic Research Center, Faculty of Pharmacy, Al-Ahliyya Amman University, Amman, Jordan
- Rana Abu Farha: Clinical Pharmacy and Therapeutics Department, Faculty of Pharmacy, Applied Science Private University, Amman, Jordan
- Karem H Alzoubi: Department of Pharmacy Practice and Pharmacotherapeutics, University of Sharjah, Sharjah, 27272, United Arab Emirates; Department of Clinical Pharmacy, Faculty of Pharmacy, Jordan University of Science and Technology, Irbid, 22110, Jordan
- Khawla Abu Hammour: Department of Clinical Pharmacy and Biopharmaceutics, Faculty of Pharmacy, University of Jordan, Amman, Jordan
- Qutaiba A Qasim: Department of Clinical Pharmacy, College of Pharmacy, Al-Ayen University, Thi-Qar, Iraq
- Fahd Abrah: Discipline of Social and Administrative Pharmacy, School of Pharmaceutical Sciences, Universiti Sains Malaysia, Penang, Malaysia