1. Kruse M, Stankeviciute S, Perry S. Clinical pharmacology-how it shapes the drug development journey. Eur J Clin Pharmacol 2025; 81:597-604. PMID: 40000475; PMCID: PMC11922982; DOI: 10.1007/s00228-025-03811-z.
Abstract
Drug development is a long and complex journey, and clinical pharmacology is an essential discipline in modern drug development. Through its applications, computational modelling, and simulation techniques, it contributes significantly to the efficiency of drug development today. In this perspective, we highlight why pharmacokinetics and pharmacodynamics are important, what developers need to consider in their clinical development programme, and how modelling influences the development process, and we discuss recent trends such as artificial intelligence and machine learning that have the potential to reshape future drug development.
2. de Ruiter EJ, Eimermann VM, Rijcken C, Taxis K, Borgsteede SD. The extent and type of use, opportunities and concerns of ChatGPT in community pharmacy: A survey of community pharmacy staff. Explor Res Clin Soc Pharm 2025; 17:100575. PMID: 40026321; PMCID: PMC11872116; DOI: 10.1016/j.rcsop.2025.100575.
Abstract
Background Since the widespread availability of Chat Generative Pre-Trained Transformer (ChatGPT), the public has been confronted with accessible artificial intelligence tools. There is limited knowledge on the use, concerns and opportunities of ChatGPT in pharmacy practice in the Netherlands. Objectives The aims of this study were to explore the extent and type of use of ChatGPT in community pharmacy and to identify concerns and opportunities for pharmacy practice. Methods A questionnaire was developed, tested and distributed to professionals working in community pharmacies. The answers were analysed descriptively using frequency tables. Results Of all participants (n = 106), 50.9% had used ChatGPT, and 38.7% (n = 24) of these users had used it in the pharmacy. Participants saw opportunities for using ChatGPT as a writing assistant or for quickly answering clinical questions. Concerns included not knowing what ChatGPT could be used for in pharmacy and not knowing what ChatGPT's answers are based on. Conclusions This research shows that using ChatGPT as a writing assistant is valuable and can free up time. Although clinical questions seem promising, ChatGPT's answers are currently too unreliable and do not meet the required quality standards for good pharmaceutical care. If ChatGPT is used to answer clinical questions, cross-referencing with reliable sources is recommended.
Affiliation(s)
- Emma Janske de Ruiter
- Health Base Foundation, Department of Clinical Decision Support, Houten, Netherlands
- Vesna Maria Eimermann
- Health Base Foundation, Department of Clinical Decision Support, Houten, Netherlands
- Katja Taxis
- Groningen Research Institute of Pharmacy, Unit of Pharmacotherapy, -Epidemiology & -Economics, University of Groningen, Groningen, Netherlands
3. Wang YM, Shen HW, Chen TJ, Chiang SC, Lin TG. Performance of ChatGPT-3.5 and ChatGPT-4 in the Taiwan National Pharmacist Licensing Examination: Comparative Evaluation Study. JMIR Med Educ 2025; 11:e56850. PMID: 39864950; PMCID: PMC11769692; DOI: 10.2196/56850.
Abstract
Background OpenAI released ChatGPT-3.5 and GPT-4 between 2022 and 2023. GPT-3.5 has demonstrated proficiency in various examinations, particularly the United States Medical Licensing Examination, while GPT-4 offers more advanced capabilities. Objective This study aims to examine the efficacy of GPT-3.5 and GPT-4 in the Taiwan National Pharmacist Licensing Examination and to ascertain their utility and potential application in clinical pharmacy and education. Methods The pharmacist examination in Taiwan consists of 2 stages: basic subjects and clinical subjects. In this study, exam questions were manually fed into the GPT-3.5 and GPT-4 models and their responses were recorded; graphic-based questions were excluded. This study encompassed three steps: (1) determining the answering accuracy of GPT-3.5 and GPT-4, (2) categorizing question types and observing differences in model performance across these categories, and (3) comparing model performance on calculation and situational questions. Microsoft Excel and R software were used for statistical analyses. Results GPT-4 achieved an accuracy rate of 72.9%, significantly higher than the 59.1% achieved by GPT-3.5 (P<.001). In the basic subjects category, GPT-4 significantly outperformed GPT-3.5 (73.4% vs 53.2%; P<.001), whereas only minor differences in accuracy were observed in the clinical subjects. GPT-4 also outperformed GPT-3.5 on calculation and situational questions. Conclusions This study demonstrates that GPT-4 outperforms GPT-3.5 in the Taiwan National Pharmacist Licensing Examination, particularly in basic subjects. While GPT-4 shows potential for use in clinical practice and pharmacy education, its limitations warrant caution. Future research should focus on refining prompts, improving model stability, integrating medical databases, and designing questions that better assess student competence and minimize guessing.
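To make the accuracy comparison described above concrete, the sketch below scores two hypothetical result sets and tests whether the accuracy rates differ. The study itself used Microsoft Excel and R; this Python version with a chi-square test is only an assumed, illustrative reconstruction, and all counts are made up.

```python
# Illustrative sketch only: the counts are hypothetical, not the study's data.
# Compares the exam accuracy of two models with a chi-square test on a 2x2 table,
# analogous to the GPT-4 vs GPT-3.5 comparison reported in the abstract.
from scipy.stats import chi2_contingency

n_questions = 140                          # hypothetical number of scored questions
correct = {"GPT-4": 102, "GPT-3.5": 83}    # hypothetical counts of correct answers

# Rows = models, columns = (correct, incorrect)
table = [
    [correct["GPT-4"], n_questions - correct["GPT-4"]],
    [correct["GPT-3.5"], n_questions - correct["GPT-3.5"]],
]

chi2, p_value, dof, expected = chi2_contingency(table)
for model, k in correct.items():
    print(f"{model}: accuracy = {k / n_questions:.1%}")
print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")
```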
Affiliation(s)
- Ying-Mei Wang
- Department of Medical Education and Research, Taipei Veterans General Hospital Hsinchu Branch, 81, Section 1, Zhongfeng Road, Zhudong, Hsinchu, 310, Taiwan, 886 03-5962134 ext 127
- Department of Pharmacy, Taipei Veterans General Hospital Hsinchu Branch, Hsinchu, Taiwan
- School of Medicine, National Tsing Hua University, Hsinchu, Taiwan
- Hsinchu County Pharmacists Association, Hsinchu, Taiwan
- Hung-Wei Shen
- Department of Medical Education and Research, Taipei Veterans General Hospital Hsinchu Branch, 81, Section 1, Zhongfeng Road, Zhudong, Hsinchu, 310, Taiwan, 886 03-5962134 ext 127
- Department of Pharmacy, Taipei Veterans General Hospital Hsinchu Branch, Hsinchu, Taiwan
- Hsinchu County Pharmacists Association, Hsinchu, Taiwan
- Tzeng-Ji Chen
- Department of Family Medicine, Taipei Veterans General Hospital Hsinchu Branch, Hsinchu, Taiwan
- Department of Family Medicine, Taipei Veterans General Hospital, Taipei, Taiwan
- Department of Post-Baccalaureate Medicine, National Chung Hsing University, Taichung, Taiwan
- Shu-Chiung Chiang
- Department of Medical Education and Research, Taipei Veterans General Hospital Hsinchu Branch, 81, Section 1, Zhongfeng Road, Zhudong, Hsinchu, 310, Taiwan, 886 03-5962134 ext 127
- Institute of Hospital and Health Care Administration, School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Ting-Guan Lin
- Department of Pharmacy, Taipei Veterans General Hospital Hsinchu Branch, Hsinchu, Taiwan
- Hsinchu County Pharmacists Association, Hsinchu, Taiwan
4. Shahin MH, Barth A, Podichetty JT, Liu Q, Goyal N, Jin JY, Ouellet D. Artificial Intelligence: From Buzzword to Useful Tool in Clinical Pharmacology. Clin Pharmacol Ther 2024; 115:698-709. PMID: 37881133; DOI: 10.1002/cpt.3083.
Abstract
The advent of artificial intelligence (AI) in clinical pharmacology and drug development is akin to the dawning of a new era. Previously dismissed as merely technological hype, these approaches have emerged as promising tools in different domains, including health care, demonstrating their potential to empower clinical pharmacology decision making, revolutionize the drug development landscape, and advance patient care. Although challenges remain, the remarkable progress already made signals that the leap from hype to reality is well underway, and the promise that AI will offer clinical pharmacology new tools and possibilities for optimizing patient care is gradually coming to fruition. This review dives into the burgeoning world of AI and machine learning (ML), showcasing different applications of AI in clinical pharmacology and the impact of successful AI/ML implementation on drug development and/or regulatory decisions. This review also highlights recommendations for areas of opportunity in clinical pharmacology, including data analysis (e.g., handling large data sets, screening to identify important covariates, and optimizing patient populations) and efficiencies (e.g., automation, translation, literature curation, and training). Realizing the benefits of AI in drug development and understanding its value will lead to the successful integration of AI tools into our clinical pharmacology and pharmacometrics armamentarium.
Affiliation(s)
- Mohamed H Shahin
- Clinical Pharmacology and Bioanalytics, Pfizer Inc., Groton, Connecticut, USA
- Aline Barth
- Clinical Pharmacology and Bioanalytics, Pfizer Inc., Groton, Connecticut, USA
- Qi Liu
- Office of Clinical Pharmacology, Office of Translational Sciences, Center for Drug Evaluation and Research, US Food and Drug Administration, Silver Spring, Maryland, USA
- Navin Goyal
- Clinical Pharmacology and Pharmacometrics, Janssen Research and Development, LLC., Spring House, Pennsylvania, USA
- Jin Y Jin
- Department of Clinical Pharmacology, Genentech, South San Francisco, California, USA
- Daniele Ouellet
- Clinical Pharmacology and Pharmacometrics, Janssen Research and Development, LLC., Spring House, Pennsylvania, USA
5. Huang X, Estau D, Liu X, Yu Y, Qin J, Li Z. Evaluating the performance of ChatGPT in clinical pharmacy: A comparative study of ChatGPT and clinical pharmacists. Br J Clin Pharmacol 2024; 90:232-238. PMID: 37626010; DOI: 10.1111/bcp.15896.
Abstract
AIMS To evaluate the performance of chat generative pretrained transformer (ChatGPT) in key domains of clinical pharmacy practice, including prescription review, patient medication education, adverse drug reaction (ADR) recognition, ADR causality assessment and drug counselling. METHODS Questions and clinical pharmacists' answers were collected from real clinical cases and a clinical pharmacist competency assessment. ChatGPT's responses were generated by inputting the same questions into the 'New Chat' box of the ChatGPT Mar 23 Version. Five licensed clinical pharmacists independently rated these answers on a scale of 0 (completely incorrect) to 10 (completely correct). The mean scores of ChatGPT and the clinical pharmacists were compared using a paired 2-tailed Student's t-test. The text content of the answers was also summarized descriptively. RESULTS The quantitative results indicated that ChatGPT was excellent in drug counselling (ChatGPT: 8.77 vs. clinical pharmacist: 9.50, P = .0791) and weak in prescription review (5.23 vs. 9.90, P = .0089), patient medication education (6.20 vs. 9.07, P = .0032), ADR recognition (5.07 vs. 9.70, P = .0483) and ADR causality assessment (4.03 vs. 9.73, P = .023). The capabilities and limitations of ChatGPT in clinical pharmacy practice were summarized based on the completeness and accuracy of the answers. ChatGPT showed robust retrieval, information-integration and dialogue capabilities, but lacked medicine-specific datasets as well as the ability to handle advanced reasoning and complex instructions. CONCLUSIONS While ChatGPT holds promise in clinical pharmacy practice as a supplementary tool, its ability to handle complex problems needs further improvement and refinement.
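The rating comparison described in the methods (five raters scoring each answer from 0 to 10, with ChatGPT's and the pharmacists' mean scores compared by a paired two-tailed t-test) can be sketched in a few lines. The snippet below is a minimal illustration using hypothetical scores, not the study's data, and assumes scipy as the analysis tool.

```python
# Illustrative sketch only: the scores are hypothetical, not the study's data.
# Paired two-tailed t-test comparing mean ratings (0-10 scale) of ChatGPT's
# answers vs the clinical pharmacists' answers to the same set of questions.
import numpy as np
from scipy.stats import ttest_rel

# Mean score per question across the five raters (hypothetical values)
chatgpt_scores    = np.array([5.2, 6.0, 4.8, 5.6, 4.5])
pharmacist_scores = np.array([9.8, 9.0, 9.6, 9.9, 9.4])

t_stat, p_value = ttest_rel(chatgpt_scores, pharmacist_scores)
print(f"ChatGPT mean = {chatgpt_scores.mean():.2f}, "
      f"pharmacist mean = {pharmacist_scores.mean():.2f}")
print(f"paired t = {t_stat:.2f}, two-tailed p = {p_value:.4f}")
```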
Affiliation(s)
- Xiaoru Huang
- Department of Pharmacy, Peking University Third Hospital, Beijing, China
- Department of Pharmaceutical Management and Clinical Pharmacy, College of Pharmacy, Peking University, Beijing, China
- Dannya Estau
- Department of Pharmacy, Peking University Third Hospital, Beijing, China
- Department of Pharmaceutical Management and Clinical Pharmacy, College of Pharmacy, Peking University, Beijing, China
- Xuening Liu
- Department of Pharmacy, Peking University Third Hospital, Beijing, China
- Department of Pharmaceutical Management and Clinical Pharmacy, College of Pharmacy, Peking University, Beijing, China
- Yang Yu
- Department of Pharmacy, Peking University Third Hospital, Beijing, China
- Department of Pharmaceutical Management and Clinical Pharmacy, College of Pharmacy, Peking University, Beijing, China
- Jiguang Qin
- Department of Pharmacy, Peking University Third Hospital, Beijing, China
- Department of Pharmaceutical Management and Clinical Pharmacy, College of Pharmacy, Peking University, Beijing, China
- Zijian Li
- Department of Pharmacy, Peking University Third Hospital, Beijing, China
- Department of Pharmaceutical Management and Clinical Pharmacy, College of Pharmacy, Peking University, Beijing, China
- Department of Cardiology and Institute of Vascular Medicine, Peking University Third Hospital, Beijing Key Laboratory of Cardiovascular Receptors Research, Key Laboratory of Cardiovascular Molecular Biology and Regulatory Peptides, Ministry of Health, State Key Laboratory of Vascular Homeostasis and Remodeling, Peking University, Beijing, China
6. Madrid-García A, Rosales-Rosado Z, Freites-Nuñez D, Pérez-Sancristóbal I, Pato-Cour E, Plasencia-Rodríguez C, Cabeza-Osorio L, Abasolo-Alcázar L, León-Mateos L, Fernández-Gutiérrez B, Rodríguez-Rodríguez L. Harnessing ChatGPT and GPT-4 for evaluating the rheumatology questions of the Spanish access exam to specialized medical training. Sci Rep 2023; 13:22129. PMID: 38092821; PMCID: PMC10719375; DOI: 10.1038/s41598-023-49483-6.
Abstract
The emergence of large language models (LLMs) with remarkable performance, such as ChatGPT and GPT-4, has led to unprecedented uptake among the general public. One of their most promising and most studied applications is education, owing to their ability to understand and generate human-like text, which creates a multitude of opportunities for enhancing educational practices and outcomes. The objective of this study was twofold: to assess the accuracy of ChatGPT/GPT-4 in answering rheumatology questions from the access exam to specialized medical training in Spain (MIR), and to evaluate the medical reasoning followed by these LLMs to answer those questions. A dataset of 145 rheumatology-related questions, RheumaMIR, extracted from the exams held between 2010 and 2023, was created for that purpose, used as a prompt for the LLMs, and publicly distributed. Six rheumatologists with clinical and teaching experience evaluated the clinical reasoning of the chatbots using a 5-point Likert scale, and their degree of agreement was analyzed. The association between variables that could influence the models' accuracy (i.e., year of the exam question, disease addressed, type of question and genre) was studied. ChatGPT demonstrated a high level of performance in both accuracy, 66.43%, and clinical reasoning, median (Q1-Q3), 4.5 (2.33-4.67). However, GPT-4 showed better performance, with an accuracy of 93.71% and a median clinical reasoning value of 4.67 (4.5-4.83). These findings suggest that LLMs may serve as valuable tools in rheumatology education, aiding in exam preparation and supplementing traditional teaching methods.
Affiliation(s)
- Alfredo Madrid-García
- Grupo de Patología Musculoesquelética, Hospital Clínico San Carlos, Instituto de Investigación Sanitaria del Hospital Clínico San Carlos (IdISSC), Prof. Martin Lagos S/N, 28040, Madrid, Spain.
- Zulema Rosales-Rosado
- Grupo de Patología Musculoesquelética, Hospital Clínico San Carlos, Instituto de Investigación Sanitaria del Hospital Clínico San Carlos (IdISSC), Prof. Martin Lagos S/N, 28040, Madrid, Spain
- Dalifer Freites-Nuñez
- Grupo de Patología Musculoesquelética, Hospital Clínico San Carlos, Instituto de Investigación Sanitaria del Hospital Clínico San Carlos (IdISSC), Prof. Martin Lagos S/N, 28040, Madrid, Spain
- Inés Pérez-Sancristóbal
- Grupo de Patología Musculoesquelética, Hospital Clínico San Carlos, Instituto de Investigación Sanitaria del Hospital Clínico San Carlos (IdISSC), Prof. Martin Lagos S/N, 28040, Madrid, Spain
- Esperanza Pato-Cour
- Grupo de Patología Musculoesquelética, Hospital Clínico San Carlos, Instituto de Investigación Sanitaria del Hospital Clínico San Carlos (IdISSC), Prof. Martin Lagos S/N, 28040, Madrid, Spain
- Luis Cabeza-Osorio
- Medicina Interna, Hospital Universitario del Henares, Avenida de Marie Curie, 0, 28822, Madrid, Spain
- Facultad de Medicina, Universidad Francisco de Vitoria, Carretera Pozuelo, Km 1800, 28223, Madrid, Spain
- Lydia Abasolo-Alcázar
- Grupo de Patología Musculoesquelética, Hospital Clínico San Carlos, Instituto de Investigación Sanitaria del Hospital Clínico San Carlos (IdISSC), Prof. Martin Lagos S/N, 28040, Madrid, Spain
- Leticia León-Mateos
- Grupo de Patología Musculoesquelética, Hospital Clínico San Carlos, Instituto de Investigación Sanitaria del Hospital Clínico San Carlos (IdISSC), Prof. Martin Lagos S/N, 28040, Madrid, Spain
- Benjamín Fernández-Gutiérrez
- Grupo de Patología Musculoesquelética, Hospital Clínico San Carlos, Instituto de Investigación Sanitaria del Hospital Clínico San Carlos (IdISSC), Prof. Martin Lagos S/N, 28040, Madrid, Spain
- Facultad de Medicina, Universidad Complutense de Madrid, Madrid, Spain
- Luis Rodríguez-Rodríguez
- Grupo de Patología Musculoesquelética, Hospital Clínico San Carlos, Instituto de Investigación Sanitaria del Hospital Clínico San Carlos (IdISSC), Prof. Martin Lagos S/N, 28040, Madrid, Spain
7. Montastruc F, Storck W, de Canecaude C, Victor L, Li J, Cesbron C, Zelmat Y, Barus R. Will artificial intelligence chatbots replace clinical pharmacologists? An exploratory study in clinical practice. Eur J Clin Pharmacol 2023; 79:1375-1384. PMID: 37566133; DOI: 10.1007/s00228-023-03547-8.
Abstract
PURPOSE Recently, there has been growing interest in using ChatGPT for various applications in medicine. We evaluated the usefulness of the OpenAI chatbot (GPT-4.0) for drug information activities at the Toulouse Pharmacovigilance Center. METHODS Based on a series of 50 randomly selected questions sent to our pharmacovigilance center by healthcare professionals or patients, we compared the quality of the responses from the GPT-4.0 chatbot with those provided by specialists in pharmacovigilance. RESULTS Overall, the chatbot's answers were not acceptable. Responses to inquiries regarding the assessment of drug causality were not consistently precise or clinically meaningful. CONCLUSION The usefulness of chatbot assistance needs to be confirmed or refuted through further studies conducted in other pharmacovigilance centers.
Affiliation(s)
- François Montastruc
- Department of Medical and Clinical Pharmacology, Centre of Pharmacovigilance and Pharmacoepidemiology, Faculty of Medicine, Toulouse University Hospital (CHU), Toulouse, France
- Wilhelm Storck
- Department of Medical and Clinical Pharmacology, Centre of Pharmacovigilance and Pharmacoepidemiology, Faculty of Medicine, Toulouse University Hospital (CHU), Toulouse, France
- Claire de Canecaude
- Department of Medical and Clinical Pharmacology, Centre of Pharmacovigilance and Pharmacoepidemiology, Faculty of Medicine, Toulouse University Hospital (CHU), Toulouse, France
- Léa Victor
- Department of Medical and Clinical Pharmacology, Centre of Pharmacovigilance and Pharmacoepidemiology, Faculty of Medicine, Toulouse University Hospital (CHU), Toulouse, France
- Julien Li
- Department of Medical and Clinical Pharmacology, Centre of Pharmacovigilance and Pharmacoepidemiology, Faculty of Medicine, Toulouse University Hospital (CHU), Toulouse, France
- Candice Cesbron
- Department of Medical and Clinical Pharmacology, Centre of Pharmacovigilance and Pharmacoepidemiology, Faculty of Medicine, Toulouse University Hospital (CHU), Toulouse, France
- Yoann Zelmat
- Department of Medical and Clinical Pharmacology, Centre of Pharmacovigilance and Pharmacoepidemiology, Faculty of Medicine, Toulouse University Hospital (CHU), Toulouse, France
- Romain Barus
- Department of Medical and Clinical Pharmacology, Centre of Pharmacovigilance and Pharmacoepidemiology, Faculty of Medicine, Toulouse University Hospital (CHU), Toulouse, France.
8. Elango A, Kannan N, Anandan I, Surapaneni KM. Testing the knowledge and interpretation skills of ChatGPT in pharmacology examination of phase II MBBS. Indian J Pharmacol 2023; 55:266-267. PMID: 37737081; PMCID: PMC10657626; DOI: 10.4103/ijp.ijp_188_23.
Affiliation(s)
- Anitha Elango
- Department of Pharmacology, Medical Education, Research, Panimalar Medical College Hospital and Research Institute, Varadharajapuram, Poonamallee, Chennai, Tamil Nadu, India
- Neevedha Kannan
- Department of Pharmacology, Medical Education, Research, Panimalar Medical College Hospital and Research Institute, Varadharajapuram, Poonamallee, Chennai, Tamil Nadu, India
- Isswariya Anandan
- Department of Pharmacology, Medical Education, Research, Panimalar Medical College Hospital and Research Institute, Varadharajapuram, Poonamallee, Chennai, Tamil Nadu, India
- Krishna Mohan Surapaneni
- Department of Biochemistry, Medical Education, Research, Panimalar Medical College Hospital and Research Institute, Varadharajapuram, Poonamallee, Chennai, Tamil Nadu, India