1. Munir F, Abdulbaki E, Saiyad Z, Ipema H. Taking the plunge together: A student-led faculty learning seminar series on artificial intelligence. Currents in Pharmacy Teaching & Learning 2025;17:102370. PMID: 40318343; DOI: 10.1016/j.cptl.2025.102370.
Abstract
OBJECTIVE This pilot study explored the effectiveness of a student-led faculty development series by evaluating two key outcomes: the capacity of students to deliver meaningful professional development sessions to faculty and the impact of these sessions on faculty perceptions of generative artificial intelligence (AI). METHODS In a flipped classroom model, two pharmacy students and 12 faculty members engaged in a semester-long learning series on AI. Each week, students presented on a selected topic followed by discussions that facilitated self-directed learning, including decision-making and project management. Faculty perceptions of AI were evaluated before and after the series using an anonymous survey tool (Technology Acceptance Model Edited to Assess ChatGPT Adoption, TAME-ChatGPT). Respondents created a self-chosen code to link their responses. Additionally, students completed a questionnaire to gauge their reflective thinking after the series. RESULTS Faculty participation averaged 7 members per session. Twelve faculty completed the pre-survey, while 8 faculty completed the post-survey. Among those who had used ChatGPT (n = 4 pre [33 %], n = 2 post [25 %]), scores for usefulness increased, while concerns about risks decreased. In contrast, faculty who had not used ChatGPT (n = 8 pre [67 %], n = 6 post [75 %]) reported unchanged or improved scores for ease of use and reduced anxiety. Both students responded positively to the reflective thinking questionnaire. CONCLUSION This pilot study demonstrated that a student-led faculty learning series effectively fostered mutual collaborative learning, benefiting both faculty and students. Pharmacy students, often an underutilized resource, can play a valuable role in faculty development. Colleges of pharmacy may enhance faculty engagement by integrating student-led initiatives into their programs.
Affiliation(s)
- Faria Munir, The University of Illinois Chicago College of Pharmacy, 833 S Wood St, Chicago, IL 60612, United States of America
- Elma Abdulbaki, The University of Illinois Chicago College of Pharmacy, 833 S Wood St, Chicago, IL 60612, United States of America
- Zeba Saiyad, The University of Illinois Chicago College of Pharmacy, 833 S Wood St, Chicago, IL 60612, United States of America
- Heather Ipema, The University of Illinois Chicago College of Pharmacy, 833 S Wood St, Chicago, IL 60612, United States of America
2. Do V, Donohoe KL, Peddi AN, Carr E, Kim C, Mele V, Patel D, Crawford AN. Artificial intelligence (AI) performance on pharmacy skills laboratory course assignments. Currents in Pharmacy Teaching & Learning 2025;17:102367. PMID: 40273883; DOI: 10.1016/j.cptl.2025.102367.
Abstract
OBJECTIVE To compare pharmacy student scores to scores of artificial intelligence (AI)-generated results of three common platforms on pharmacy skills laboratory assignments. METHODS Pharmacy skills laboratory course assignments were completed by four fourth-year pharmacy student investigators with three free AI platforms: ChatGPT, Copilot, and Gemini. Assignments evaluated were calculations, patient case vignettes, in-depth patient cases, drug information questions, and a reflection activity. Course coordinators graded the AI-generated submissions. Descriptive statistics were used to summarize AI scores and compare averages to recent pharmacy student cohorts. Interrater reliability for the four student investigators completing the assignments was assessed. RESULTS Fourteen skills laboratory assignments were completed with three different AI platforms (ChatGPT, Copilot, and Gemini) by four fourth-year student investigators (n = 168 AI-generated submissions). Copilot was unable to complete 12 of the assignments; therefore, 156 AI-generated submissions were graded by the faculty course coordinators for accuracy and scored from 0 to 100 %. Pharmacy student cohort scores were higher than the average AI scores for all of the skills laboratory assignments except for two in-depth patient cases completed with ChatGPT. CONCLUSION Pharmacy students on average performed better on most skills laboratory assignments than three commonly used artificial intelligence platforms. Teaching students the strengths and weaknesses of utilizing AI in the classroom is essential.
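The summary-and-comparison workflow described above (platform-level score averages plus an interrater check across the four student investigators) can be illustrated with a small sketch. All scores below are made up, and the correlation-based agreement check is only a stand-in, since the abstract does not name the reliability statistic used. Illustrative sketch (Python):

import pandas as pd

# Hypothetical long-format data: one row per graded AI-generated submission.
df = pd.DataFrame({
    "platform": ["ChatGPT", "Copilot", "Gemini"] * 4,
    "investigator": ["A", "A", "A", "B", "B", "B", "C", "C", "C", "D", "D", "D"],
    "score": [88, 70, 75, 90, 68, 72, 85, 74, 78, 92, 66, 80],  # graded 0-100 %
})

# Descriptive statistics: average AI score per platform.
platform_means = df.groupby("platform")["score"].agg(["mean", "std"])
print(platform_means)

# Compare against a (hypothetical) recent student cohort average.
student_cohort_mean = 89.0
print(platform_means["mean"] - student_cohort_mean)

# Crude agreement check between investigators: correlation of their
# platform-level scores (a stand-in for a formal reliability statistic).
wide = df.pivot(index="platform", columns="investigator", values="score")
print(wide.corr())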
Affiliation(s)
- Vivian Do, Class of 2025, Virginia Commonwealth University School of Pharmacy, Richmond, VA, United States of America
- Krista L Donohoe, BCPS, BCGP, Virginia Commonwealth University School of Pharmacy, Richmond, VA, United States of America
- Apryl N Peddi, BCACP, Virginia Commonwealth University School of Pharmacy, Richmond, VA, United States of America
- Eleanor Carr, Class of 2025, Virginia Commonwealth University School of Pharmacy, Richmond, VA, United States of America
- Christina Kim, Class of 2025, Virginia Commonwealth University School of Pharmacy, Richmond, VA, United States of America
- Virginia Mele, Class of 2025, Virginia Commonwealth University School of Pharmacy, Richmond, VA, United States of America
- Dhruv Patel, Class of 2025, Virginia Commonwealth University School of Pharmacy, Richmond, VA, United States of America
- Alexis N Crawford, BCCCP, BCPS, Virginia Commonwealth University School of Pharmacy, Richmond, VA, United States of America
3. Edwards CJ, Cornelison B, Erstad BL. Comparison of a generative large language model to pharmacy student performance on therapeutics examinations. Currents in Pharmacy Teaching & Learning 2025;17:102394. PMID: 40409210; DOI: 10.1016/j.cptl.2025.102394.
Abstract
OBJECTIVE To compare the performance of a generative language model (ChatGPT-3.5) to pharmacy students on therapeutics examinations. METHODS Questions were drawn from two pharmacotherapeutics courses in a 4-year PharmD program. Questions were classified as case based or non-case based and as application or recall. Questions were entered into ChatGPT version 3.5 and responses were scored. ChatGPT's score for each exam was calculated by dividing the number of correct responses by the total number of questions. The mean composite score for ChatGPT was calculated by adding the individual scores from each exam and dividing by the number of exams. The mean composite score for the students was calculated by dividing the sum of the mean class performance on each exam by the number of exams. Chi-square tests were used to identify factors associated with incorrect responses from ChatGPT. RESULTS The mean composite score across 6 exams for ChatGPT was 53 (SD = 19.2) compared to 82 (SD = 4) for the pharmacy students (p = 0.0048). ChatGPT answered 51 % of questions correctly. ChatGPT was less likely to answer application-based questions correctly compared to recall-based questions (44 % vs 80 %) and less likely to answer case-based questions correctly compared to non-case-based questions (45 % vs 74 %). CONCLUSION ChatGPT scored lower than the average grade for pharmacy students and was less likely to answer application-based and case-based questions correctly. These findings provide insight into how this technology performs on therapeutics examinations, which can inform best practices for item development and highlight the limitations of the technology.
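The scoring procedure in the methods (per-exam score = correct/total, composite = mean across exams) and the chi-square comparison by question type can be sketched as follows. The counts are invented for illustration and are not the study's data. Illustrative sketch (Python):

import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical per-exam results: correct ChatGPT answers / total questions.
chatgpt_correct = np.array([18, 12, 20, 15, 10, 14])
totals = np.array([40, 35, 38, 30, 25, 32])
chatgpt_scores = 100 * chatgpt_correct / totals        # score on each exam
chatgpt_composite = chatgpt_scores.mean()              # mean composite score

# Student composite: mean of the class average on each exam (hypothetical).
class_means = np.array([84, 80, 79, 85, 83, 81])
student_composite = class_means.mean()
print(round(chatgpt_composite, 1), round(student_composite, 1))

# Chi-square: is question format associated with ChatGPT answering incorrectly?
# Rows = application vs recall; columns = correct vs incorrect (made-up counts).
table = [[44, 56],   # application-based: correct, incorrect
         [80, 20]]   # recall-based: correct, incorrect
chi2, p, dof, expected = chi2_contingency(table)
print(round(chi2, 2), round(p, 4))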
Affiliation(s)
- Christopher J Edwards, Department of Pharmacy Practice & Science, University of Arizona R. Ken Coit College of Pharmacy, Tucson, AZ, United States of America
- Bernadette Cornelison, Department of Pharmacy Practice & Science, University of Arizona R. Ken Coit College of Pharmacy, Tucson, AZ, United States of America
- Brian L Erstad, Department of Pharmacy Practice & Science, University of Arizona R. Ken Coit College of Pharmacy, Tucson, AZ, United States of America
4. Shultz B, DiDomenico RJ, Goliak K, Mucksavage J. Exploratory Assessment of GPT-4's Effectiveness in Generating Valid Exam Items in Pharmacy Education. American Journal of Pharmaceutical Education 2025;89:101405. PMID: 40246172; DOI: 10.1016/j.ajpe.2025.101405.
Abstract
OBJECTIVE To evaluate the effectiveness of GPT-4 in generating valid multiple-choice exam items for assessing therapeutic knowledge in pharmacy education. METHODS A custom GPT application was developed to create 60 case-based items from a pharmacotherapy textbook. Nine subject matter experts reviewed items for content validity, difficulty, and quality. Valid items were compiled into a 38-question exam administered to 46 fourth-year pharmacy students. Classical test theory and Rasch analysis were used to assess psychometric properties. RESULTS Of 60 generated items, 38 met content validity requirements, with only 6 accepted without revisions. The exam demonstrated moderate reliability and correlated well with a prior cumulative therapeutics exam. Classical item analysis revealed that most items had acceptable point biserial correlations, though fewer than half fell within the recommended difficulty range. Rasch analysis indicated potential multidimensionality and suboptimal targeting of item difficulty to student ability. CONCLUSION GPT-4 offers a preliminary step toward generating exam content in pharmacy education but has clear limitations that require further investigation and validation. Substantial human oversight and psychometric evaluation are necessary to ensure clinical realism and appropriate difficulty. Future research with larger samples is needed to further validate the effectiveness of artificial intelligence in item generation for high-stakes assessments in pharmacy education.
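As a rough illustration of the classical item analysis mentioned above (item difficulty and point-biserial correlations for a 38-item exam taken by 46 students), the following sketch uses simulated 0/1 response data; the difficulty window and response matrix are assumptions, and the Rasch analysis itself would require a dedicated package and is not shown. Illustrative sketch (Python):

import numpy as np
from scipy.stats import pointbiserialr

rng = np.random.default_rng(0)

# Hypothetical 0/1 response matrix: 46 students x 38 items.
responses = rng.integers(0, 2, size=(46, 38))
total_scores = responses.sum(axis=1)

# Classical item analysis: difficulty = proportion of students answering correctly.
difficulty = responses.mean(axis=0)

# Point-biserial: correlation of each item with the rest-of-test score.
point_biserials = []
for j in range(responses.shape[1]):
    rest = total_scores - responses[:, j]       # exclude the item itself
    r, _ = pointbiserialr(responses[:, j], rest)
    point_biserials.append(r)

# Flag items outside a commonly recommended difficulty window (e.g., 0.30-0.80).
flagged = [(j, round(d, 2)) for j, d in enumerate(difficulty) if not 0.30 <= d <= 0.80]
print(len(flagged), "items outside the target difficulty range")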
Affiliation(s)
- Benjamin Shultz, University of Illinois Chicago, Retzky College of Pharmacy, Chicago, IL, USA
- Robert J DiDomenico, University of Illinois Chicago, Retzky College of Pharmacy, Chicago, IL, USA
- Kristen Goliak, University of Illinois Chicago, Retzky College of Pharmacy, Chicago, IL, USA
- Jeffrey Mucksavage, University of Illinois Chicago, Retzky College of Pharmacy, Chicago, IL, USA
5. Ritter C. [Digital learning methods in pharmacy]. Bundesgesundheitsblatt Gesundheitsforschung Gesundheitsschutz 2025;68:511-518. PMID: 40167765; PMCID: PMC12075329; DOI: 10.1007/s00103-025-04041-5.
Abstract
With the outbreak of the SARS-CoV-2 pandemic in March 2020 and the associated restrictions on teaching, digital learning methods were increasingly used at many universities. Digital learning methods generally include fully or partially digitized learning elements such as lecture recordings, open learning materials, or e-portfolios. Fully or partially digitized learning formats include game-based learning, the inverted classroom, mobile learning, the use of social media, online peer and collaborative learning, and adaptive learning. Digitized realities are created in the context of simulation-based learning and in augmented and virtual reality. Online-based event formats and online degree programs are characterized by an almost exclusive proportion of internet-based learning phases. The extent to which digital learning methods are used in pharmacy courses in Germany is explained in this article using selected practical examples. The selected examples include the creation of an audio podcast to assess the performance of a clinical chemistry internship as a form of digital learning element, the use of a digital analysis tool to carry out medication analyses as an example of mobile learning, a blended learning concept to teach the basics of clinical pharmacy, an online concept of virtual bedside teaching, and a game-like simulation for dispensing medicines. The inclusion of artificial intelligence can be helpful in the development and implementation of digital learning offerings. However, sufficiently high quality and a critical approach must be ensured.
Affiliation(s)
- Christoph Ritter, Institut für Pharmazie, Klinische Pharmazie, Universität Greifswald, Friedrich-Ludwig-Jahn-Str. 17, 17489 Greifswald, Germany
6. Alexander KM, Johnson M, Farland MZ, Blue A, Bald EK. Exploring Generative Artificial Intelligence to Enhance Reflective Writing in Pharmacy Education. American Journal of Pharmaceutical Education 2025;89:101416. PMID: 40311683; DOI: 10.1016/j.ajpe.2025.101416.
Abstract
The integration of generative artificial intelligence (AI) holds the potential to impact teaching and learning. In this commentary, we explore the opportunity for AI to enhance reflective writing (RW) among student pharmacists. AI-guided RW has the potential to strengthen students' reflective capacity, deepen their autobiographical memory, and develop their self-confidence. This commentary presents examples of how AI can be utilized to enrich RW and includes a sample prompt aimed at facilitating student self-reflection. We explore how integrating AI-facilitated RW assignments into the pharmacy curriculum can help students develop detailed examples for self-reflection and gain exposure to the potential uses of AI in their professional development and career advancement.
Affiliation(s)
- Kaitlin M Alexander, Department of Pharmacy Education and Practice, University of Florida College of Pharmacy, Gainesville, FL, USA
- Margeaux Johnson, UFIT Center for Instructional Technology and Training, University of Florida, Gainesville, FL, USA
- Michelle Z Farland, Department of Pharmacy Education and Practice, University of Florida College of Pharmacy, Gainesville, FL, USA
- Amy Blue, Office of Interprofessional Education, University of Florida Office of the Senior Vice President for Health Affairs, Gainesville, FL, USA
- Emily K Bald, University Writing Program, University of Florida College of Liberal Arts and Sciences, Gainesville, FL, USA
7. Maqbool T, Ishaq H, Shakeel S, Zaib un Nisa A, Rehman H, Kashif S, Sadia H, Naveed S, Mumtaz N, Siddiqui S, Jamshed S. Future pharmacy practitioners' insights towards integration of artificial intelligence in healthcare education: Preliminary findings from Karachi, Pakistan. PLoS One 2025;20:e0314045. PMID: 39937780; PMCID: PMC11819520; DOI: 10.1371/journal.pone.0314045.
Abstract
In an evolutionary era of medical education, "artificial intelligence" (AI) is applied to replicate human intellect, encompassing abilities such as logical reasoning and effective problem-solving. Previous research has explored the attitudes of medical and dental students toward the assimilation of AI in medicine; however, a significant gap exists in appraising the understanding and concerns of pharmacy students. Therefore, the current study was designed to explore undergraduate pharmacy students' perceptions of integrating AI into education and practice. METHODS A cross-sectional study was conducted among final-year pharmacy students from different public and private sector universities in Karachi. The sample size, based on a 60% anticipated response rate and a 99% CI, was calculated to be 390. Data were collected after acquiring ethical approval, using convenience sampling. Frequencies and percentages of the socio-demographic features were analyzed, and then goodness-of-fit and Pearson's chi-squared tests were applied. Results were considered significant when p < 0.05. RESULTS The overall response rate of the study was 67%. More than 80% of the respondents were female. Of the students, 35% (n = 202) strongly agreed and 59% (n = 334) agreed that AI plays an important role in healthcare (χ2 = 505.6, p < 0.001). Around 79% (n = 453, χ2 = 384.3, p < 0.001) of students agreed on the replacement of patient care specialties with AI in the future, whereas 495 students (87%, χ2 = 682.3, p < 0.001) stated that they possess a strong comprehension of the fundamental principles governing the operation of AI. More than 80% of the students were comfortable using AI terminologies (n = 475, χ2 = 598, p < 0.001), and 93% (n = 529, χ2 = 290, p < 0.001) were sure that including AI in pharmacy education would have a positive influence on the pharmacy curriculum (95%, n = 549, χ2 = 566.9, p < 0.001). A high and positive correlation was observed between the perception and willingness of students to adopt AI changes in teaching undergraduate students (ρ = 0.491, p < 0.001). Furthermore, students at private-sector universities stood out in computer literacy compared with those at public-sector universities (χ2 = 6.546, p < 0.05). CONCLUSION The current outcomes revealed a high willingness of pharmacy students towards AI-infused learning. They understood that both formal and informal learning experiences covering the clinical application, technological constraints, and ethical considerations of AI tools are prerequisites for success in this endeavor. Policymakers must take action to ensure that future pharmacists have a strong foundation of AI literacy and take initiatives to foster the interests and abilities of imminent pharmacists who will spearhead innovation in the field.
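The analysis named in the methods (goodness-of-fit tests on item response counts and a correlation between perception and willingness) can be sketched with SciPy. The counts and scores below are hypothetical, and Spearman's rho is assumed for the reported ρ. Illustrative sketch (Python):

import numpy as np
from scipy.stats import chisquare, spearmanr

# Goodness-of-fit: do observed Likert counts depart from a uniform expectation?
# Hypothetical counts for one item (strongly agree ... strongly disagree).
observed = np.array([202, 334, 20, 10, 4])
expected = np.full(5, observed.sum() / 5)
chi2, p = chisquare(observed, expected)
print(round(chi2, 1), p)

# Correlation between perception and willingness scores (hypothetical 5-point data).
rng = np.random.default_rng(1)
perception = rng.integers(1, 6, size=300)
willingness = np.clip(perception + rng.integers(-1, 2, size=300), 1, 5)
rho, p = spearmanr(perception, willingness)
print(round(rho, 3), p)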
Affiliation(s)
- Tahmina Maqbool, Faculty of Pharmacy, Department of Pharmaceutics, Hamdard University, Madinat al-Hikmah, Karachi, Pakistan
- Humera Ishaq, Faculty of Health and Medical Sciences, Hamdard University, Karachi, Pakistan
- Sadia Shakeel, Faculty of Pharmaceutical Sciences, Department of Pharmacy Practice, Dow College of Pharmacy, Dow University of Health Sciences, Karachi, Pakistan
- Ayeshah Zaib un Nisa, Faculty of Pharmacy, Department of Pharmacy Practice, Hamdard University, Madinat al-Hikmah, Karachi, Pakistan
- Hina Rehman, Department of Pharmacy Practice, Institute of Pharmaceutical Sciences, Jinnah Sindh Medical University, Karachi, Pakistan
- Shadab Kashif, Faculty of Pharmacy, Department of Pharmacy Practice, Salim Habib University, Karachi, Pakistan
- Halima Sadia, Faculty of Pharmacy, Department of Pharmacy Practice, Jinnah University for Women, Karachi, Pakistan
- Safila Naveed, Faculty of Pharmacy, Department of Pharmaceutical Chemistry, University of Karachi, Karachi, Pakistan
- Nazish Mumtaz, Faculty of Pharmacy, Benazir Bhutto Shaheed University Lyari, Karachi, Pakistan
- Sidra Siddiqui, Faculty of Pharmacy, Department of Pharmaceutics, Hamdard University, Madinat al-Hikmah, Karachi, Pakistan
- Shazia Jamshed, Faculty of Pharmacy, Department of Pharmacy Practice, Jinnah University for Women, Karachi, Pakistan; Department of Pharmacy Practice, School of Pharmacy, International Medical University, Kuala Lumpur, Malaysia
8. MacDougall C, Jeffres M. Bugs and drugs - what do pharmacists need to know and what's the best way to learn it? American Journal of Health-System Pharmacy 2025;82:235-239. PMID: 39324640; DOI: 10.1093/ajhp/zxae258.
Affiliation(s)
- Conan MacDougall, Department of Clinical Pharmacy, University of California San Francisco School of Pharmacy, San Francisco, CA, USA
- Meghan Jeffres, Department of Clinical Pharmacy, University of Colorado Anschutz School of Pharmacy, Aurora, CO, USA
9. Park SK, Chen AMH, Lebovitz L, Ellington TM, Lahiri M, Weldon D, Behnen E, Sease J, Vellurattil RP, Donohoe H, Bechtol R. A Scoping Review of Calls to Action in Pharmacy Education. American Journal of Pharmaceutical Education 2025;89:101363. PMID: 39828011; DOI: 10.1016/j.ajpe.2025.101363.
Abstract
OBJECTIVES Calls to action in pharmacy education are frequently observed in the literature, with little information about their authors, audience, or focus, especially regarding whether these calls led to any changes. This scoping review aims to (1) quantitatively and qualitatively characterize the calls to action in pharmacy education and (2) examine the traits of published articles typically associated with effective advocacy. FINDINGS A systematic literature search for the scoping review was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Of 3287 articles, 232 were included and extracted for their specific call to action, including topics, audience, and call quality. Two-thirds (66.7%) of the calls were initiated by faculty groups, 49% were commentaries, opinions, or editorials, and 39% were focused on the Doctor of Pharmacy curriculum. More than 90% of the articles were published between 2013 and 2023, with 26% published in 2023 alone. Most calls were directed to colleges/schools of pharmacy (81%). Only 21% of articles had a strong call to action with next steps or recommendations for enacting change. SUMMARY The most frequently published calls to action were related to the pharmacy curriculum, authored by faculty groups, directed to pharmacy programs, and published in the postpandemic years, but were often not sufficiently strong to elicit change. According to this scoping review, to evoke change, calls to action should be written in the active voice, directed to a specific audience, state the problem clearly, and offer actionable solutions that could be implemented.
Affiliation(s)
- Sharon K Park, Notre Dame of Maryland University, School of Pharmacy and Health Professions, Baltimore, MD, USA
- Aleda M H Chen, Cedarville University, School of Pharmacy, Cedarville, OH, USA
- Lisa Lebovitz, University of Maryland School of Pharmacy, Baltimore, MD, USA
- Minakshi Lahiri, Rutgers, The State University of New Jersey, New Brunswick, NJ, USA
- Julie Sease, University of South Carolina, Columbia, SC, USA
10. McLaughlin JE, Kelley K, Mortha SM, Bowen JF. Tools for Assessing Communication in Pharmacy Education: Review and Recommendations. American Journal of Pharmaceutical Education 2024;88:101328. PMID: 39542402; DOI: 10.1016/j.ajpe.2024.101328.
Abstract
OBJECTIVES Well-developed and finely tuned communication skills are foundational for pharmacists and should be at the core of Doctor of Pharmacy curricula. This narrative review aimed to identify and summarize useful instruments for pharmacy educators interested in assessing communication skills. FINDINGS Fifty-seven studies were evaluated. Eighteen studies with communication assessment instruments that were readily available and deemed useful by the research team were included for further review. Most focused on oral communication (n = 15), included pharmacy students as the communicators (n = 14), and utilized instructors as the assessors in the didactic, simulation, objective structured clinical examination, or experiential settings (n = 18). The communication tasks (eg, patient counseling; medication history taking; subjective, objective, assessment, plan notes), contexts (eg, community pharmacy), and scales of measurement varied for each instrument. SUMMARY Although communication is a critical skill for pharmacy students, its assessment is complicated by the potential need for various types of assessors, communication tasks, and contexts. This review describes a set of useful assessment instruments to aid pharmacy educators in selecting an appropriate tool or adapting an existing one to meet their course or program assessment needs.
Affiliation(s)
- Jane F Bowen, Saint Joseph's University, Philadelphia College of Pharmacy, Philadelphia, PA, USA
11. Knobloch J, Cozart K, Halford Z, Hilaire M, Richter LM, Arnoldi J. Students' perception of the use of artificial intelligence (AI) in pharmacy school. Currents in Pharmacy Teaching & Learning 2024;16:102181. PMID: 39236450; DOI: 10.1016/j.cptl.2024.102181.
Abstract
INTRODUCTION The increasing adoption of artificial intelligence (AI) among college students, particularly in pharmacy education, raises ethical concerns and prompts debates on responsible usage. The promise of the potential to reduce workload is met with concerns of accuracy issues, algorithmic bias, and the lack of AI education and training. This study aims to understand pharmacy students' perspectives on the use of AI in pharmacy education. METHODS This study used an anonymous 14-question survey distributed among second, third, and fourth-year pharmacy students at four schools of pharmacy in the United States. RESULTS A total of 171 responses were analyzed. Demographic information included institution, class identification (P2, P3, P4), and age range. Regarding the use of AI, 43% of respondents were unaware of limitations of AI tools. Many respondents (45%) had used AI tools to complete assignments, while 42% considered it academic dishonesty. Fifty-six percent believed AI tools could be used ethically. Student perspectives on AI were varied but many expressed that it will be integral to pharmacy education and future practice. CONCLUSIONS This study highlights the nuances of AI usage among pharmacy students. Despite limited education and training on AI, students utilized tools for various tasks. This survey provides evidence that pharmacy students are exploring the use of AI and would likely benefit from education on using AI as a supplement to critical thinking.
Affiliation(s)
- Joselyn Knobloch, Southern Illinois University Edwardsville School of Pharmacy, 40 Hairpin Drive, Suite 3204, Campus Box 2000, Edwardsville, IL 62026-2000, United States of America
- Kate Cozart, VA Tennessee Valley Healthcare System, 782 Weatherly Dr., Clarksville, TN 37043, United States of America
- Zachery Halford, Union University College of Pharmacy, 1050 Union University Dr., Jackson, TN 38305, United States of America
- Michelle Hilaire, University of Wyoming School of Pharmacy, 1000 E. University Avenue, Laramie, WY 82071, United States of America
- Lisa M Richter, North Dakota State University School of Pharmacy, Sudro 20A/Dept 2660, PO Box 6050, Fargo, ND 58108-6050, United States of America
- Jennifer Arnoldi, Clinical Professor of Pharmacy Practice, Southern Illinois University Edwardsville School of Pharmacy, 40 Hairpin Drive, Suite 3204, Campus Box 2000, Edwardsville, IL 62026-2000, United States of America
12. Anderson HD, Kwon S, Linnebur LA, Valdez CA, Linnebur SA. Pharmacy student use of ChatGPT: A survey of students at a U.S. School of Pharmacy. Currents in Pharmacy Teaching & Learning 2024;16:102156. PMID: 39029382; DOI: 10.1016/j.cptl.2024.102156.
Abstract
OBJECTIVE To learn how students in an accredited PharmD program in the United States are using ChatGPT for personal, academic, and clinical reasons, and whether students think ChatGPT training should be incorporated into their program's curriculum. METHODS In August 2023, an 18-item survey was developed, pilot tested, and sent to all students who were enrolled during the Spring 2023 semester in the entry-level PharmD program at the University of Colorado. E-mail addresses were separated from survey responses to maintain anonymity. Responses were described using descriptive statistics. RESULTS 206 pharmacy students responded to the survey for a 49% response rate. Nearly one-half (48.5%) indicated they had used ChatGPT for personal reasons; 30.2% had used it for academic reasons; and 7.5% had used it for clinical reasons. The most common personal use for ChatGPT was answering questions and looking-up information (67.0%). The top academic reason for using ChatGPT was summarizing information or a body of text (42.6%), while the top clinical reason was simplifying a complex topic (53.3%). Most respondents (61.8%) indicated they would be interested in learning about how ChatGPT could help them in pharmacy school, and 28.1% thought ChatGPT training should be incorporated into their pharmacy curriculum. CONCLUSION At the time of the survey, ChatGPT was being used by approximately one-half of our pharmacy student respondents for personal, academic, or clinical reasons. Overall, many students indicated they want to learn how to use ChatGPT to help them with their education and think ChatGPT training should be integrated into their curriculum.
Affiliation(s)
- Heather D Anderson, University of Colorado Anschutz Medical Campus, Skaggs School of Pharmacy and Pharmaceutical Sciences, Department of Clinical Pharmacy, 12850 E. Montview Blvd, Mail stop C238, Aurora, CO 80045, United States of America
- Sue Kwon, University of Colorado Anschutz Medical Campus, Skaggs School of Pharmacy and Pharmaceutical Sciences, Department of Clinical Pharmacy, 12850 E. Montview Blvd, Mail stop C238, Aurora, CO 80045, United States of America
- Lauren A Linnebur, University of Colorado Anschutz Medical Campus, School of Medicine, Division of Geriatric Medicine, 12631 East 17th Avenue, Suite 8111, Aurora, CO 80045, United States of America
- Connie A Valdez, University of Colorado Anschutz Medical Campus, Skaggs School of Pharmacy and Pharmaceutical Sciences, Department of Clinical Pharmacy, 12850 E. Montview Blvd, Mail stop C238, Aurora, CO 80045, United States of America
- Sunny A Linnebur, University of Colorado Anschutz Medical Campus, Skaggs School of Pharmacy and Pharmaceutical Sciences, Department of Clinical Pharmacy, 12850 E. Montview Blvd, Mail stop C238, Aurora, CO 80045, United States of America
13. Ehlert A, Ehlert B, Cao B, Morbitzer K. Large Language Models and the North American Pharmacist Licensure Examination (NAPLEX) Practice Questions. American Journal of Pharmaceutical Education 2024;88:101294. PMID: 39307190; DOI: 10.1016/j.ajpe.2024.101294.
Abstract
OBJECTIVE This study aims to test the accuracy of large language models (LLMs) in answering standardized pharmacy examination practice questions. METHODS The performance of 3 LLMs (generative pretrained transformer [GPT]-3.5, GPT-4, and Chatsonic) was evaluated on 2 independent North American Pharmacist Licensure Examination practice question sets sourced from McGraw Hill and RxPrep. These question sets were further classified into binary question categories of adverse drug reaction (ADR) questions, scenario questions, treatment questions, and select-all questions. Python was used to run χ2 tests to compare model and question-type accuracy. RESULTS Of the 3 LLMs tested, GPT-4 achieved the highest accuracy, with 87% accuracy on the McGraw Hill question set and 83.5% accuracy on the RxPrep question set. In comparison, GPT-3.5 had 68.0% and 60.0% accuracy on those question sets, respectively, and Chatsonic had 60.5% and 62.5% accuracy, respectively. All models performed worse on select-all questions compared with non-select-all questions (GPT-3.5: 42.3% vs 66.2%; GPT-4: 73.1% vs 87.2%; Chatsonic: 36.5% vs 71.6%). GPT-4 had statistically higher accuracy in answering ADR questions (96.1%) compared with non-ADR questions (83.9%). CONCLUSION Our study found that GPT-4 outperformed GPT-3.5 and Chatsonic in answering North American Pharmacist Licensure Examination practice questions, particularly excelling in questions related to ADRs. These results suggest that advanced LLMs such as GPT-4 could be used for applications in pharmacy education.
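The abstract states that Python χ2 tests were used to compare model and question-type accuracy; a minimal sketch of that comparison is shown below with hypothetical correct/incorrect counts, chosen only to be roughly consistent with the reported percentages and not taken from the study. Illustrative sketch (Python):

from scipy.stats import chi2_contingency

# Correct / incorrect counts per model on a 200-question set (hypothetical).
counts = {
    "GPT-3.5":   (128, 72),
    "GPT-4":     (171, 29),
    "Chatsonic": (123, 77),
}
table = [list(v) for v in counts.values()]
chi2, p, dof, _ = chi2_contingency(table)
print(f"model comparison: chi2={chi2:.2f}, p={p:.4f}")

# Select-all vs non-select-all accuracy for a single model (hypothetical counts).
select_all_table = [[38, 14],    # select-all: correct, incorrect
                    [131, 17]]   # non-select-all: correct, incorrect
chi2, p, dof, _ = chi2_contingency(select_all_table)
print(f"question type: chi2={chi2:.2f}, p={p:.4f}")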
Affiliation(s)
- Alexa Ehlert, University of North Carolina at Chapel Hill, UNC Eshelman School of Pharmacy, Division of Pharmaceutical Outcomes and Policy, Chapel Hill, NC, USA
- Benjamin Ehlert, Stanford University School of Medicine, Department of Biomedical Data Science, Stanford, CA, USA
- Binxin Cao, University of North Carolina at Chapel Hill, UNC Eshelman School of Pharmacy, Division of Pharmaceutical Outcomes and Policy, Chapel Hill, NC, USA
- Kathryn Morbitzer, University of North Carolina at Chapel Hill, UNC Eshelman School of Pharmacy, Chapel Hill, NC, USA; University of North Carolina at Chapel Hill, UNC Eshelman School of Pharmacy, Center for Innovative Pharmacy Education and Research, Chapel Hill, NC, USA
14. Dunjic M, Turini S, Nejkovic L, Sulovic N, Cvetkovic S, Dunjic M, Dunjic K, Dolovac D. Comparative Molecular Docking of Apigenin and Luteolin versus Conventional Ligands for TP-53, pRb, APOBEC3H, and HPV-16 E6: Potential Clinical Applications in Preventing Gynecological Malignancies. Curr Issues Mol Biol 2024;46:11136-11155. PMID: 39451541; PMCID: PMC11505693; DOI: 10.3390/cimb46100661.
Abstract
This study presents a comparative analysis of molecular docking data, focusing on the binding interactions of the natural compounds apigenin and luteolin with the proteins TP-53, pRb, and APOBEC, in comparison to conventional pharmacological ligands. Advanced bioinformatics techniques were employed to evaluate and contrast binding energies, showing that apigenin and luteolin demonstrate significantly higher affinities for TP-53, pRb, and APOBEC, with binding energies of -6.9 kcal/mol and -6.6 kcal/mol, respectively. These values suggest strong potential for therapeutic intervention against HPV-16. Conventional ligands, by comparison, exhibited lower affinities, with energies ranging from -4.5 to -5.5 kcal/mol. Additionally, protein-protein docking simulations were performed to assess the interaction between HPV-16 E6 oncoprotein and tumor suppressors TP-53 and pRb, which revealed high binding energies around -976.7 kcal/mol, indicative of their complex interaction. A conversion formula was applied to translate these protein-protein interaction energies to a comparable scale for non-protein interactions, further underscoring the superior binding potential of apigenin and luteolin. These findings highlight the therapeutic promise of these natural compounds in preventing HPV-16-induced oncogenesis, warranting further experimental validation for clinical applications.
Affiliation(s)
- Momir Dunjic, School of Medicine, University of Pristina, BB Anri Dinana, 38220 Kosovska Mitrovica, Serbia; Faculty of Pharmacy, Heroja Pinkija 4, 21000 Novi Sad, Serbia; Alma Mater Europaea (AMEU-ECM), Slovenska Ulica/Street 17, 2000 Maribor, Slovenia; BDORT Center for Functional Supplementation and Integrative Medicine, Bulevar Oslobodjenja 2, 11000 Belgrade, Serbia
- Stefano Turini, Alma Mater Europaea (AMEU-ECM), Slovenska Ulica/Street 17, 2000 Maribor, Slovenia; BDORT Center for Functional Supplementation and Integrative Medicine, Bulevar Oslobodjenja 2, 11000 Belgrade, Serbia; Guard Plus Doo, Nemanjina 40, 11000 Belgrade, Serbia; Worldwide Consultancy and Services, Division of Advanced Research and Development, Via Andrea Ferrara 45, 00165 Rome, Italy; Capri Campus Forensic and Security, Division of Environmental Medicine and Security, Via G. Orlandi 91 Anacapri, Capri Island, 80071 Naples, Italy
- Lazar Nejkovic, Belgrade University, School of Medicine, dr Subotića Starijeg 8, 11000 Belgrade, Serbia; Clinic for Obstetrics and Gynecology, Kraljice Natalije 62, 11000 Belgrade, Serbia
- Nenad Sulovic, School of Medicine, University of Pristina, BB Anri Dinana, 38220 Kosovska Mitrovica, Serbia
- Sasa Cvetkovic, School of Medicine, University of Pristina, BB Anri Dinana, 38220 Kosovska Mitrovica, Serbia
- Marija Dunjic, Worldwide Consultancy and Services, Division of Advanced Research and Development, Via Andrea Ferrara 45, 00165 Rome, Italy
- Katarina Dunjic, BDORT Center for Functional Supplementation and Integrative Medicine, Bulevar Oslobodjenja 2, 11000 Belgrade, Serbia
- Dina Dolovac, General Hospital, Ul. Generala Zivkovica 1, 36300 Novi Pazar, Serbia
15. Babin JL, Raber H, Mattingly TJ II. Prompt Pattern Engineering for Test Question Mapping Using ChatGPT: A Cross-Sectional Study. American Journal of Pharmaceutical Education 2024;88:101266. PMID: 39153573; DOI: 10.1016/j.ajpe.2024.101266.
Abstract
OBJECTIVE This study aimed to develop a prompt engineering procedure for test question mapping and then determine the effectiveness of test question mapping using Chat Generative Pre-Trained Transformer (ChatGPT) compared to human faculty mapping. METHODS We conducted a cross-sectional study to compare ChatGPT and human mapping using a sample of 139 test questions from modules within the Integrated Pharmacotherapeutics course series. The test questions were mapped by 3 faculty members to both module objectives and the Accreditation Council for Pharmacy Education Standards 2016 (Standards 2016) to create the "correct answer". Prompt engineering procedures were created to facilitate mapping with ChatGPT, and ChatGPT mapping results were compared with human mapping. RESULTS ChatGPT mapped test questions directly to the "correct answer" based on human consensus in 68.0% of cases, and the program matched with at least one individual human response in another 20.1% of cases for a total of 88.1% agreement with human mappers. When humans fully agreed with the mapping decision, ChatGPT was more likely to map correctly. CONCLUSION This study presents a practical use case with prompt engineering tailored for college assessment or curriculum committees to facilitate efficient test questions and educational outcomes mapping.
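The study's exact prompt pattern is not reproduced in the abstract, so the following is only a generic sketch of how a mapping prompt might be sent to a chat model with the OpenAI Python client. The objective list, question text, and model name are placeholders, and an OPENAI_API_KEY is assumed to be configured. Illustrative sketch (Python):

from openai import OpenAI

client = OpenAI()

objectives = [
    "1. Recommend evidence-based therapy for heart failure.",
    "2. Identify monitoring parameters for anticoagulants.",
]
question = "A 68-year-old with HFrEF... Which regimen is most appropriate?"

# A simple persona + output-constraint prompt pattern (illustrative only; the
# study's own prompt engineering procedure is not reproduced here).
prompt = (
    "You are a pharmacy curriculum assessment assistant.\n"
    "Map the exam question below to exactly one module objective from the list.\n"
    "Respond with the objective number only.\n\n"
    f"Objectives:\n{chr(10).join(objectives)}\n\nQuestion:\n{question}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",               # placeholder model name
    messages=[{"role": "user", "content": prompt}],
    temperature=0,                      # deterministic mapping output
)
print(response.choices[0].message.content)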
Affiliation(s)
- Jennifer L Babin, Department of Pharmacotherapy, University of Utah College of Pharmacy, Salt Lake City, UT, USA
- Hanna Raber, Department of Pharmacotherapy, University of Utah College of Pharmacy, Salt Lake City, UT, USA
- T Joseph Mattingly II, Department of Pharmacotherapy, University of Utah College of Pharmacy, Salt Lake City, UT, USA
16. Mortlock R, Lucas C. Generative artificial intelligence (Gen-AI) in pharmacy education: Utilization and implications for academic integrity: A scoping review. Exploratory Research in Clinical and Social Pharmacy 2024;15:100481. PMID: 39184524; PMCID: PMC11341932; DOI: 10.1016/j.rcsop.2024.100481.
Abstract
Introduction Generative artificial intelligence (Gen-AI), exemplified by the widely adopted ChatGPT, has garnered significant attention in recent years. Its application spans various health education domains, including pharmacy, where its potential benefits and drawbacks have become increasingly apparent. Despite the growing adoption of Gen-AI such as ChatGPT in pharmacy education, there remains a critical need to assess and mitigate associated risks. This review explores the literature and potential strategies for mitigating risks associated with the integration of Gen-AI in pharmacy education. Aim To conduct a scoping review to identify implications of Gen-AI in pharmacy education and to identify its use and emerging evidence, with a particular focus on strategies that mitigate potential risks to academic integrity. Methods A scoping review strategy was employed in accordance with the PRISMA-ScR guidelines. Databases searched included PubMed, ERIC (Education Resources Information Center), Scopus, and ProQuest from August 2023 to 20 February 2024, and all relevant records from 1 January 2000 to 20 February 2024 relating specifically to large language model (LLM) use within pharmacy education were included. A grey literature search was also conducted due to the emerging nature of this topic. Policies, procedures, and documents from institutions such as universities and colleges, including standards, guidelines, and policy documents, were hand searched and reviewed in their most updated form. These documents were not published in the scientific literature or indexed in academic search engines. Results Articles (n = 12) were derived from the scientific databases and records (n = 9) from the grey literature. Potential uses and benefits of Gen-AI within pharmacy education were identified in all included published articles; however, there was a paucity of published articles considering the potential risks to academic integrity. Grey literature records held the largest proportion of risk-mitigation strategies, largely focusing on increased academic and student education and training relating to the ethical use of Gen-AI, as well as considerations for redesigning current assessments where Gen-AI use is likely to pose a risk to academic integrity. Conclusion Drawing upon existing literature, this review highlights the importance of evidence-based approaches to address the challenges posed by Gen-AI such as ChatGPT in pharmacy education settings. Additionally, whilst mitigation strategies are suggested, primarily drawn from the grey literature, there is a paucity of traditionally published scientific literature outlining strategies for the practical and ethical implementation of Gen-AI within pharmacy education. Further research related to the responsible and ethical use of Gen-AI in pharmacy curricula, and studies of strategies adopted to mitigate risks to academic integrity, would be beneficial.
Affiliation(s)
- R. Mortlock, Graduate School of Health, Faculty of Health, University of Technology, Sydney, Australia
- C. Lucas, Graduate School of Health, Faculty of Health, University of Technology, Sydney, Australia; School of Population Health, Faculty of Medicine and Health, University of NSW, Sydney, Australia; Connected Intelligence Centre (CIC), University of Technology Sydney, Australia
17. Shultz B, Kopale MS, Benken S, Mucksavage J. Evaluating the Quality of Examination Items From the Pathophysiology, Drug Action, and Therapeutics Course Series. American Journal of Pharmaceutical Education 2024;88:100757. PMID: 38996841; DOI: 10.1016/j.ajpe.2024.100757.
Abstract
OBJECTIVE To determine the impact of item-writing flaws and cognitive level on student performance metrics in 1 course series across 2 semesters at a single institution. METHODS Four investigators reviewed 928 multiple-choice items from an integrated therapeutics course series. Differences in performance metrics were examined between flawed and standard items, flawed stems and flawed answer choices, and cognitive levels. RESULTS Reviewers found that 80% of the items were flawed, with the most common types being implausible distractors and unfocused stems. Flawed items were generally easier than standard ones, but the type of flaw significantly impacted the difficulty. Items with flawed stems had the same difficulty as standard items; however, those with flawed answer choices were significantly easier. Most items tested lower-level skills and had more flaws than higher-level items. There was no significant difference in difficulty between lower- and higher-level cognitive items, and higher-level items were more likely to have answer flaws than item flaws. CONCLUSION Item-writing flaws affect student performance in different ways. Implausible distractors artificially lower the difficulty of questions, even those designed to assess higher-level skills. This effect contributes to a lack of significant difference in difficulty between higher- and lower-level items. Unfocused stems, on the other hand, likely increase confusion and hinder performance, regardless of the question's cognitive complexity.
Affiliation(s)
- Benjamin Shultz, UIC College of Pharmacy, University of Illinois Chicago, Chicago, IL, USA
- Scott Benken, UIC College of Pharmacy, University of Illinois Chicago, Chicago, IL, USA
- Jeffrey Mucksavage, UIC College of Pharmacy, University of Illinois Chicago, Chicago, IL, USA
18. Taesotikul S, Singhan W, Taesotikul T. ChatGPT vs pharmacy students in the pharmacotherapy time-limit test: A comparative study in Thailand. Currents in Pharmacy Teaching & Learning 2024;16:404-410. PMID: 38641483; DOI: 10.1016/j.cptl.2024.04.002.
Abstract
OBJECTIVES ChatGPT is an innovative artificial intelligence designed to enhance human activities and serve as a potent tool for information retrieval. This study aimed to evaluate the performance and limitations of ChatGPT on a fourth-year pharmacy student examination. METHODS This cross-sectional study was conducted in February 2023 at the Faculty of Pharmacy, Chiang Mai University, Thailand. The exam contained 16 multiple-choice questions and 2 short-answer questions, focusing on the classification and medical management of shock and electrolyte disorders. RESULTS Out of the 18 questions, ChatGPT answered 44% (8 of 18) correctly. In contrast, the students achieved a higher accuracy rate of 66% (12 of 18). These findings underscore that while AI exhibits proficiency, it encounters limitations when confronted with specific queries derived from practical scenarios, in contrast to pharmacy students, who are free to explore and collaborate in ways that mirror real-world practice. CONCLUSIONS Users must exercise caution regarding its reliability, and interpretations of AI-generated answers should be approached judiciously due to potential restrictions in multi-step analysis and reliance on outdated data. Future advancements in AI models, with refinements and tailored enhancements, offer the potential for improved performance.
Affiliation(s)
- Suthinee Taesotikul, Department of Pharmaceutical Care, Faculty of Pharmacy, Chiang Mai University, Chiang Mai 50200, Thailand
- Wanchana Singhan, Department of Pharmaceutical Care, Faculty of Pharmacy, Chiang Mai University, Chiang Mai 50200, Thailand
- Theerada Taesotikul, Department of Biomedicine and Health Informatics, Faculty of Pharmacy, Silpakorn University, Nakhon Pathom 73000, Thailand
19. Culp ML, Mahmoud S, Liu D, Haworth IS. An Artificial Intelligence-Supported Medicinal Chemistry Project: An Example for Incorporating Artificial Intelligence Within the Pharmacy Curriculum. American Journal of Pharmaceutical Education 2024;88:100696. PMID: 38574998; DOI: 10.1016/j.ajpe.2024.100696.
Abstract
OBJECTIVE This study aims to integrate and use AI to teach core concepts in a medicinal chemistry course and to increase the familiarity of pharmacy students with AI in pharmacy practice and drug development. Artificial intelligence (AI) is a multidisciplinary science that aims to build software tools that mimic human intelligence. AI is revolutionizing pharmaceutical research and patient care. Hence, it is important to include AI in pharmacy education to prepare a competent workforce of pharmacists with skills in this area. METHODS AI principles were introduced in a required medicinal chemistry course for first-year pharmacy students. An AI software, KNIME, was used to examine structure-activity relationships for 5 drugs. Students completed a data sheet that required comprehension of molecular structures and drug-protein interactions. These data were then used to make predictions for molecules with novel substituents using AI. The familiarity of students with AI was surveyed before and after this activity. RESULTS There was an increase in the number of students indicating familiarity with use of AI in pharmacy (before vs after: 25.3% vs 74.5%). The introduction of AI stimulated interest in the course content (> 60% of students indicated increased interest in medicinal chemistry) without compromising the learning outcomes. Almost 70% of students agreed that more AI should be taught in the PharmD curriculum. CONCLUSION This is a successful and transferable example of integrating AI in pharmacy education without changing the main learning objectives of a course. This approach is likely to stimulate student interest in AI applications in pharmacy.
Affiliation(s)
- Megan L Culp, University of Southern California, Alfred E. Mann School of Pharmacy and Pharmaceutical Sciences, Department of Pharmacology & Pharmaceutical Sciences, Los Angeles, CA, USA
- Sara Mahmoud, University of the Pacific Thomas J. Long School of Pharmacy, Department of Pharmacy Practice, Stockton, CA, USA
- Daniel Liu, University of Southern California, Alfred E. Mann School of Pharmacy and Pharmaceutical Sciences, Department of Pharmacology & Pharmaceutical Sciences, Los Angeles, CA, USA
- Ian S Haworth, University of Southern California, Alfred E. Mann School of Pharmacy and Pharmaceutical Sciences, Department of Pharmacology & Pharmaceutical Sciences, Los Angeles, CA, USA
20. Fuller KA, Morbitzer KA, Zeeman JM, Persky AM, Savage AC, McLaughlin JE. Exploring the use of ChatGPT to analyze student course evaluation comments. BMC Medical Education 2024;24:423. PMID: 38641798; PMCID: PMC11031883; DOI: 10.1186/s12909-024-05316-2.
Abstract
BACKGROUND Since the release of ChatGPT, numerous positive applications for this artificial intelligence (AI) tool in higher education have emerged. Faculty can reduce workload by implementing the use of AI. While course evaluations are a common tool used across higher education, the process of identifying useful information from multiple open-ended comments is often time consuming. The purpose of this study was to explore the use of ChatGPT in analyzing course evaluation comments, including the time required to generate themes and the level of agreement between instructor-identified and AI-identified themes. METHODS Course instructors independently analyzed open-ended student course evaluation comments. Five prompts were provided to guide the coding process. Instructors were asked to note the time required to complete the analysis, the general process they used, and how they felt during their analysis. Student comments were also analyzed through two independent OpenAI ChatGPT user accounts. Thematic analysis was used to analyze the themes generated by instructors and ChatGPT. Percent agreement between the instructor and ChatGPT themes was calculated for each prompt, along with an overall agreement statistic between the instructor and the two ChatGPT themes. RESULTS There was high agreement between the instructor and ChatGPT results. The highest agreement was for course-related topics (range 0.71-0.82) and the lowest agreement was for weaknesses of the course (range 0.53-0.81). For all prompts except themes related to student experience, the two ChatGPT accounts demonstrated higher agreement with one another than with the instructors. On average, instructors took 27.50 ± 15.00 min to analyze their data (range 20-50). The ChatGPT users took 10.50 ± 1.00 min (range 10-12) and 12.50 ± 2.89 min (range 10-15) to analyze the data. In relation to reviewing and analyzing their own open-ended course evaluations, instructors reported feeling anxiety prior to the process, satisfaction during the process, and frustration related to the findings. CONCLUSIONS This study offers valuable insights into the potential of ChatGPT as a tool for analyzing open-ended student course evaluation comments in health professions education. However, it is crucial to ensure ChatGPT is used as a tool to assist with the analysis and to avoid relying solely on its outputs for conclusions.
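A minimal sketch of the percent-agreement idea described above, using hypothetical theme labels and an exact-match comparison after simple normalization; the study matched themes through thematic analysis, which is more nuanced than this string comparison, and the time values are invented. Illustrative sketch (Python):

import statistics

# Hypothetical theme labels produced by an instructor and by ChatGPT.
instructor_themes = {"Workload", "Exam difficulty", "Lecture pacing", "Group work"}
chatgpt_themes = {"workload", "exam difficulty", "pace of lectures"}

def normalize(theme: str) -> str:
    # Lowercase and trim so trivially different labels can still match.
    return theme.lower().strip()

inst = {normalize(t) for t in instructor_themes}
gpt = {normalize(t) for t in chatgpt_themes}

# Percent agreement = share of instructor themes also produced by ChatGPT.
agreement = len(inst & gpt) / len(inst)
print(f"percent agreement: {agreement:.2f}")

# Time-on-task summary (mean and SD), as reported for instructors vs ChatGPT users.
instructor_minutes = [20, 25, 50, 15]
print(statistics.mean(instructor_minutes), round(statistics.stdev(instructor_minutes), 2))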
Affiliation(s)
- Kathryn A Fuller, UNC Eshelman School of Pharmacy, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Kathryn A Morbitzer, UNC Eshelman School of Pharmacy, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA; Center for Innovative Pharmacy Education and Research, UNC Eshelman School of Pharmacy, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Jacqueline M Zeeman, UNC Eshelman School of Pharmacy, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Adam M Persky, UNC Eshelman School of Pharmacy, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA; Center for Innovative Pharmacy Education and Research, UNC Eshelman School of Pharmacy, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Amanda C Savage, UNC Eshelman School of Pharmacy, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Jacqueline E McLaughlin, UNC Eshelman School of Pharmacy, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA; Center for Innovative Pharmacy Education and Research, UNC Eshelman School of Pharmacy, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA