1
Chen X, Yang Z. Assessing virtual patients for empathy training in healthcare: A scoping review. Patient Education and Counseling 2025;136:108752. PMID: 40112578. DOI: 10.1016/j.pec.2025.108752.
Abstract
OBJECTIVE A growing body of computer-generated virtual patients (VPs) has been incorporated into medical education, including empathy training. We sought to assess the validity and effectiveness of VPs in empathy training. METHOD The authors carried out a comprehensive search of articles published between 1991 and 2023 in seven health science and education databases. In total, 2170 abstracts were reviewed, and the final corpus consisted of 44 articles. RESULTS Guided by the Computers-Are-Social-Actors (CASA) framework, this study identified four types of primary social cues used in the current literature to elicit trainees' social responses. Overall, the social cues used across the included studies were similar. However, the efficacy and effectiveness of VPs varied, and we identified four factors that may influence these outcomes. First, technology matters: VPs in VR systems were found to be effective for clinical empathy training, but limited empirical evidence supported web- or mobile-based VPs. Second, improvement was observed only in the cognitive dimension of empathy. Third, studies with longer interaction durations (over 30 minutes) and, last, studies using self-report measures were more likely to observe significant improvements. Qualitative findings revealed that VPs in VR systems can create an immersive experience that allows users to understand patients' needs and put themselves in patients' shoes, while web- or mobile-based VPs are more convenient for trainees. PRACTICAL IMPLICATIONS This review presents evidence supporting the efficacy and effectiveness of VPs in future medical empathy training. Mechanisms and future research agendas are discussed. CONCLUSION VPs are promising tools for future empathy training.
Affiliation(s)
- Xiaobei Chen
- College of Journalism and Communications, University of Florida, Gainesville, FL, USA; Department of Epidemiology, College of Public Health and Health Professions, University of Florida, Gainesville, FL, USA.
- Zixiao Yang
- School of Communication, University of Miami, Coral Gables, FL, USA.
2
Gray M, Baird A, Sawyer T, James J, DeBroux T, Bartlett M, Krick J, Umoren R. Increasing Realism and Variety of Virtual Patient Dialogues for Prenatal Counseling Education Through a Novel Application of ChatGPT: Exploratory Observational Study. JMIR Medical Education 2024;10:e50705. PMID: 38300696. PMCID: PMC10870212. DOI: 10.2196/50705.
Abstract
BACKGROUND Virtual patients, facilitated by natural language processing, provide a valuable educational experience for learners. Generating a large, varied sample of realistic and appropriate responses for virtual patients is challenging. Artificial intelligence (AI) programs can be a viable source of these responses, but their utility for this purpose has not been explored. OBJECTIVE In this study, we explored the effectiveness of generative AI (ChatGPT) in developing realistic virtual standardized patient dialogues to teach prenatal counseling skills. METHODS ChatGPT was prompted to generate a list of common areas of concern and questions that families expecting preterm delivery at 24 weeks' gestation might ask during prenatal counseling. ChatGPT was then prompted to generate 2 role-plays with dialogues between a parent expecting a potential preterm delivery at 24 weeks and their counseling physician, using each of the example questions. The prompt was repeated for 2 unique role-plays: one parent was characterized as anxious and the other as having low trust in the medical system. Role-play scripts were exported verbatim and independently reviewed by 2 neonatologists with experience in prenatal counseling, using a scale of 1-5 for realism, appropriateness, and utility as virtual standardized patient responses. RESULTS ChatGPT generated 7 areas of concern, with 35 example questions used to generate role-plays. The 35 role-play transcripts contained 176 unique parent responses (median 5, IQR 4-6, per role-play) with 268 unique sentences. Expert review identified 117 (65%) of the 176 responses as indicating an emotion, either directly or indirectly. Approximately half (98/176, 56%) of the responses had 2 or more sentences, and half (88/176, 50%) included at least 1 question. More than half (104/176, 58%) of the responses from role-played parent characters described a feeling, such as being scared, worried, or concerned. The role-plays of parents with low trust in the medical system generated many unique sentences (n=50). Most of the sentences in the responses were found to be reasonably realistic (214/268, 80%), appropriate for variable prenatal counseling conversation paths (233/268, 87%), and usable with no more than minimal modification in a virtual patient program (169/268, 63%). CONCLUSIONS Generative AI programs, such as ChatGPT, may provide a viable source of training materials to expand virtual patient programs, with careful attention to the concerns and questions of patients and families. Given the potential for unrealistic or inappropriate statements and questions, an expert should review AI chat outputs before deploying them in an educational program.
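The two-stage prompting workflow this abstract describes can be reduced to plain prompt-construction helpers. This is an illustrative sketch, not the authors' actual prompts: all wording, persona labels, and function names are assumptions, and a chat-model API call would consume the resulting strings.

```python
# Illustrative sketch (not the authors' prompts): the two-stage ChatGPT
# workflow described above, reduced to prompt-building helpers.

def build_concern_prompt(weeks: int = 24) -> str:
    """Stage 1: elicit common parental areas of concern and example questions."""
    return (
        f"List common areas of concern, with example questions, that families "
        f"expecting preterm delivery at {weeks} weeks' gestation might raise "
        f"during prenatal counseling."
    )

def build_roleplay_prompt(question: str, persona: str) -> str:
    """Stage 2: generate one role-play seeded with an example question,
    repeated per parent persona (e.g. 'anxious', 'has low trust in the
    medical system')."""
    return (
        f"Write a role-play dialogue between a parent expecting a potential "
        f"preterm delivery at 24 weeks and their counseling physician. "
        f"The parent is {persona}. Start from the parent question: \"{question}\""
    )

prompt = build_roleplay_prompt("Will my baby survive?", "anxious")
```

Repeating the stage-2 helper over each stage-1 question and each persona yields the varied transcript corpus the study then had experts review.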
Affiliation(s)
- Megan Gray
- Division of Neonatology, University of Washington, Seattle, WA, United States
- Austin Baird
- Division of Healthcare Simulation Sciences, Department of Surgery, University of Washington, Seattle, WA, United States
- Taylor Sawyer
- Division of Neonatology, University of Washington, Seattle, WA, United States
- Jasmine James
- Department of Family Medicine, Providence St Peter, Olympia, WA, United States
- Thea DeBroux
- Division of Neonatology, University of Washington, Seattle, WA, United States
- Michelle Bartlett
- Department of Pediatrics, Children's Hospital of Philadelphia, Philadelphia, PA, United States
- Jeanne Krick
- Department of Pediatrics, San Antonio Uniformed Services Health Education Consortium, San Antonio, TX, United States
- Rachel Umoren
- Division of Neonatology, University of Washington, Seattle, WA, United States
3
Bartlett MJ, Umoren R, Amory JH, Huynh T, Kim AJH, Stiffler AK, Mastroianni R, Ficco E, French H, Gray M. Measuring antenatal counseling skill with a milestone-based assessment tool: a validation study. BMC Medical Education 2023;23:325. PMID: 37165398. PMCID: PMC10170031. DOI: 10.1186/s12909-023-04282-5.
Abstract
BACKGROUND Antenatal counseling of parents facing expected preterm delivery is an important component of pediatric training. However, healthcare professionals receive formal training of variable amount and quality. This study evaluated the validity of a practical tool to assess antenatal counseling skills and provide evaluative feedback: the Antenatal Counseling Milestones Scale (ACoMS). METHODS Experts in antenatal counseling developed an anchored, milestone-based tool to evaluate observable skills. Study participants with a range of antenatal counseling skills were recruited to take part in simulated counseling sessions, in person or via video, with standardized patient actors presenting with preterm labor at 23 weeks' gestation. Two faculty observers scored each session independently using the ACoMS. Participants completed an ACoMS self-assessment, a demographic survey, and a feedback survey. Validity was measured with weighted kappas for inter-rater agreement, Kruskal-Wallis and Dunn's tests for milestone levels between degrees of counseling expertise, and Cronbach's alpha for item consistency. RESULTS Forty-two participants completed observed counseling sessions. Of the 17 items included in the tool, 15 were statistically significant, with scores scaling with level of training. A majority of elements had fair-to-moderate agreement between raters, and there was high internal consistency among all items. CONCLUSION This study demonstrates that the internal structure of the ACoMS rubric has greater than fair inter-rater reliability and high internal consistency among items. Content validity is supported by the scale's ability to discern level of training. Application of the ACoMS to clinical encounters is needed to determine its utility in clinical practice.
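The two reliability statistics named in this abstract have compact textbook definitions. The sketch below is illustrative only (not the study's analysis code), and the toy rating data are invented for demonstration.

```python
# Illustrative sketch (not the study's analysis code): Cronbach's alpha for
# item consistency and quadratically weighted kappa for inter-rater
# agreement, implemented from their textbook definitions.
from statistics import pvariance

def cronbach_alpha(items):
    """items: one inner list of scores per rubric item (same trainees, same order)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # total score per trainee
    return k / (k - 1) * (1 - sum(pvariance(s) for s in items) / pvariance(totals))

def weighted_kappa(r1, r2):
    """Quadratically weighted kappa between two raters' ordinal scores."""
    n = len(r1)
    obs = sum((a - b) ** 2 for a, b in zip(r1, r2)) / n        # observed disagreement
    exp = sum((a - b) ** 2 for a in r1 for b in r2) / (n * n)  # chance disagreement
    return 1 - obs / exp

item_scores = [[3, 4, 2, 5], [3, 5, 2, 4], [2, 4, 3, 5]]  # 3 items x 4 trainees (toy)
rater1, rater2 = [1, 2, 3, 3, 4], [1, 2, 2, 3, 4]         # toy milestone levels
```

Quadratic weighting penalizes large rater disagreements more than adjacent-level ones, which suits ordinal milestone anchors like the ACoMS levels.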
Affiliation(s)
- Rachel Umoren
- University of Washington School of Medicine, Seattle, 98105, USA
- Trang Huynh
- Oregon Health & Science University, Portland, USA
- Ellie Ficco
- University of Washington School of Medicine, Seattle, 98105, USA
- Megan Gray
- University of Washington School of Medicine, Seattle, 98105, USA
4
Maicher KR, Stiff A, Scholl M, White M, Fosler-Lussier E, Schuler W, Serai P, Sunder V, Forrestal H, Mendella L, Adib M, Bratton C, Lee K, Danforth DR. Artificial intelligence in virtual standardized patients: Combining natural language understanding and rule based dialogue management to improve conversational fidelity. Medical Teacher 2022;45:1-7. PMID: 36346810. DOI: 10.1080/0142159X.2022.2130216.
Abstract
INTRODUCTION Advances in natural language understanding have facilitated the development of Virtual Standardized Patients (VSPs) that may soon rival human patients in conversational ability. We describe herein the development of an artificial intelligence (AI) system for VSPs that enables students to practice their history-taking skills. METHODS Our system consists of (1) automated speech recognition (ASR), (2) a hybrid AI module for question identification, (3) a classifier to choose between the two systems, and (4) automated speech generation. We analyzed the accuracy of the ASR, the two AI systems, and the classifier, along with student feedback, from 620 first-year medical students from 2018 to 2021. RESULTS System accuracy improved from ∼75% in 2018 to ∼90% in 2021 as algorithms were refined and additional training data were incorporated. Student feedback was positive, and most students felt that practicing with the VSPs was a worthwhile experience. CONCLUSION We have developed a novel hybrid dialogue system that enables artificially intelligent VSPs to answer student questions correctly at levels comparable to human SPs. This system allows trainees to practice and refine their history-taking skills before interacting with human patients.
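The hybrid routing idea described in this abstract can be sketched in miniature: a rule-based matcher answers patterns it recognizes, a statistical model handles the rest, and a confidence-based step chooses between them. This is an illustrative sketch only; the patterns, answers, stub model, and threshold are all invented and are not the authors' system.

```python
# Illustrative sketch (not the authors' system): routing a student's
# history-taking question between a rule-based matcher and a (stubbed)
# statistical NLU model, per the hybrid architecture described above.
import re

RULES = [  # (regex over the student's question, virtual patient's answer)
    (r"\ballerg", "I'm allergic to penicillin."),
    (r"\b(smoke|smoking)\b", "I quit smoking about five years ago."),
    (r"\bpain\b", "It's a sharp pain in my right side."),
]

def rule_based(question):
    """Return (answer, confidence): 1.0 on a pattern hit, else 0.0."""
    q = question.lower()
    for pattern, answer in RULES:
        if re.search(pattern, q):
            return answer, 1.0
    return None, 0.0

def statistical_nlu(question):
    """Stub standing in for a trained question classifier."""
    return "Could you rephrase that for me?", 0.3

def answer(question, threshold=0.5):
    """Classifier step: prefer whichever subsystem is more confident."""
    rule_ans, rule_conf = rule_based(question)
    nlu_ans, nlu_conf = statistical_nlu(question)
    if rule_conf >= threshold and rule_conf >= nlu_conf:
        return rule_ans
    return nlu_ans
```

In a real system the stub would be a trained model and the rules would cover a curated case script; the routing logic is what makes the design hybrid.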
Affiliation(s)
- Kellen R Maicher
- The James Cancer Hospital, The Ohio State University Wexner Medical Center, Columbus, OH, USA
- Adam Stiff
- The Department of Computer Science and Engineering, The Ohio State University, Columbus, OH, USA
- Marisa Scholl
- The Department of Obstetrics and Gynecology, College of Medicine, The Ohio State University, Columbus, OH, USA
- Michael White
- The Department of Computer Science and Engineering, The Ohio State University, Columbus, OH, USA
- Eric Fosler-Lussier
- The Department of Computer Science and Engineering, The Ohio State University, Columbus, OH, USA
- The Department of Linguistics, The Ohio State University, Columbus, OH, USA
- William Schuler
- The Department of Linguistics, The Ohio State University, Columbus, OH, USA
- Douglas R Danforth
- The Department of Obstetrics and Gynecology, College of Medicine, The Ohio State University, Columbus, OH, USA
5
Kim AJ, Umoren R, Gray MM. Teaching Antenatal Counseling Skills via Video Conference. Cureus 2021;13:e17030. PMID: 34522511. PMCID: PMC8425487. DOI: 10.7759/cureus.17030.
Abstract
Neonatologists provide counseling to expectant parents to prepare them for the birth and subsequent medical care that their extremely preterm or otherwise medically complex newborn may require. The skills required to conduct these sensitive conversations are often taught to neonatology trainees via direct observation or simulated scenarios before they counsel actual patients. This technical report details how we taught antenatal counseling skills to junior neonatal-perinatal medicine (NPM) fellows via video conferencing during the coronavirus disease 2019 (COVID-19) pandemic. This approach could be used to prepare future trainees to perform antenatal counseling effectively.
Affiliation(s)
- Amanda J Kim
- Pediatrics/Neonatal-Perinatal Medicine, Oregon Health and Science University, Portland, USA
- Rachel Umoren
- Neonatology, Seattle Children's Hospital/University of Washington School of Medicine, Seattle, USA
- Megan M Gray
- Neonatology, Seattle Children's Hospital/University of Washington School of Medicine, Seattle, USA
6
Quail NPA, Boyle JG. Virtual Patients in Health Professions Education. Advances in Experimental Medicine and Biology 2019;1171:25-35. PMID: 31823237. DOI: 10.1007/978-3-030-24281-7_3.
Abstract
Health care professionals must not only have knowledge but also be able to organise, synthesise, and apply it in ways that promote the development of clinical reasoning. Panels of virtual patients (VPs) are widely used in health professions education to facilitate the development of clinical reasoning. VPs can also be used to teach wider educational outcomes such as communication skills, resource utilisation, and longitudinal patient care. This chapter defines virtual patients and examines the evidence behind their use in health professions learning and teaching. It then considers virtual patient design, including gamification. Finally, it discusses where this pedagogical innovation is best integrated into assessment, and potential barriers to implementation in existing curricula.