1
Cragun D, Trepanier A, Krstić N, Racobaldo M, Hunt P, Randall Armel S. Application of the RIME framework in genetic counseling fieldwork training to assess practice-based competencies. J Genet Couns 2025;34:e2007. [PMID: 39665248] [DOI: 10.1002/jgc4.2007]
Abstract
Using educational frameworks for learner assessment in genetic counseling (GC) training may help students and supervisors articulate developmentally appropriate clinical skills-based objectives and tasks that align with various stages of training as students work toward achieving entry-level competency. This professional issues case study describes how two GC programs adapted and implemented the RIME (Reporter-Interpreter-Manager-Educator) learner assessment framework, originally designed for medical education, to support and assess students' acquisition of practice-based competencies (PBCs) during clinical fieldwork placements. Each RIME level describes a different set of expectations for the skills students should be able to demonstrate given the level of training they have achieved to that point. In early training, students work mainly on gathering and reporting clinical information (Reporter level). In early to mid-training, students have learned what information to collect from clients and begin to apply that information to generate differential diagnoses (Interpreter level). When students reach the Manager level (typically by mid- to late-training), they can independently develop and implement case management plans tailored to individual cases. The Educator level, which may not be fully attained until after graduation, involves critically evaluating evidence and educating others about new evidence. The following paper describes our experiences incorporating the RIME framework into two GC graduate programs and explains the development of corresponding RIME-based assessment forms that align with the Accreditation Council for Genetic Counseling's 2023 PBCs. Overall, we find that using the RIME framework fosters a growth mindset by enabling students and supervisors to set developmentally appropriate goals and expectations, thereby facilitating assessment and guidance of trainee progress. Despite these perceived benefits, we acknowledge the need for research evaluating the efficacy of the RIME framework, or other learner assessment models, in supporting student progression toward achieving the GC PBCs.
Affiliation(s)
- Deborah Cragun
- College of Public Health, University of South Florida, Tampa, Florida, USA
- Angela Trepanier
- Center for Molecular Medicine and Genetics, Wayne State University, Detroit, Michigan, USA
- Nevena Krstić
- Department of Obstetrics and Gynecology, University of South Florida Morsani College of Medicine, Tampa, Florida, USA
- Melissa Racobaldo
- Department of Pediatrics, University of South Florida Morsani College of Medicine, Tampa, Florida, USA
- Paige Hunt
- College of Public Health, University of South Florida, Tampa, Florida, USA
- Susan Randall Armel
- Bhalwani Familial Cancer Clinic, Princess Margaret Cancer Center, Toronto, Ontario, Canada
- Department of Molecular Genetics, University of Toronto, Toronto, Ontario, Canada
2
Kulkarni SA, Dhaliwal G, Teherani A, Connor DM. Clerkship Students' Use of Clinical Reasoning Concepts After a Pre-clinical Reasoning Course. J Gen Intern Med 2025. [PMID: 39747771] [DOI: 10.1007/s11606-024-09279-4]
Abstract
BACKGROUND: Many medical schools have incorporated clinical reasoning (CR) courses into their pre-clinical curricula to address the quality and safety issue of diagnostic error. It is unknown how students use concepts and practices from pre-clinical CR courses once in clerkships.
OBJECTIVE: We sought to understand how students utilize CR concepts from a pre-clinical course during clerkships and to identify facilitators and barriers to the use of reasoning concepts.
DESIGN: We used structured interviews to gain insight into medical students' experiences with CR concepts in clerkships.
PARTICIPANTS: We interviewed 16 students who had completed a pre-clinical CR course and subsequently completed a neurology, internal medicine, or pediatrics clerkship.
APPROACH: We used constructivist grounded theory to perform a qualitative analysis and to develop a theoretical model describing the findings.
KEY RESULTS: Insights fell into three main areas: (1) CR concept carryover, representing concepts taught in the CR course, such as problem representation, illness scripts, schema, and prioritized differential diagnosis, which were utilized in clerkships; (2) CR concept reinforcers, which included the clerkship setting and supervising physicians who emphasized and provided feedback on CR; and (3) CR concept diminishers, which included time constraints and supervisors who were unfamiliar with or did not reinforce CR concepts.
CONCLUSIONS: Concepts taught in a pre-clinical CR course influenced how students prepared for and navigated clinical encounters. Contextual factors both enhanced and inhibited the utilization of CR concepts. Our findings align with social learning theories, including social cognitive theory and ecological psychology. This contextual view, taking into account interactions between personal, social, and environmental factors, can help educators integrate CR education from the classroom to the clinical setting.
Affiliation(s)
- Gurpreet Dhaliwal
- Department of Medicine, University of California San Francisco, San Francisco, USA
- Medical Service, San Francisco VA Medical Center, San Francisco, USA
- Arianne Teherani
- Department of Medicine, University of California San Francisco, San Francisco, USA
- Denise M Connor
- Department of Medicine, University of California San Francisco, San Francisco, USA
- Medical Service, San Francisco VA Medical Center, San Francisco, USA
3
Rouse M, Newman JR, Waller C, Fink J. R.I.M.E. and reason: multi-station OSCE enhancement to neutralize grade inflation. Med Educ Online 2024;29:2339040. [PMID: 38603644] [PMCID: PMC11011230] [DOI: 10.1080/10872981.2024.2339040]
Abstract
To offset grade inflation, many clerkships combine faculty evaluations with objective assessments such as the National Board of Medical Examiners Subject Examination (NBME-SE) or an Objective Structured Clinical Examination (OSCE); however, standardized methods are not established. Following a curriculum transition that removed faculty clinical evaluations from summative grading, final clerkship designations of fail (F), pass (P), and pass-with-distinction (PD) were determined by combined NBME-SE and OSCE performance, with overall PD for the clerkship requiring meeting this threshold on both. At the time, 90% of students achieved PD on the Internal Medicine (IM) OSCE, so overall clerkship grades were determined primarily by the NBME-SE. The clerkship sought to enhance the OSCE to provide a more thorough objective clinical skills assessment, offset grade inflation, and reduce the NBME-SE's role as the primary determinant of the final clerkship grade. The single-station 43-point OSCE was expanded to a three-station 75-point OSCE using the Reporter-Interpreter-Manager-Educator (RIME) framework to align patient encounters with targeted assessments of progressive skills and competencies related to the clerkship rotation. Student performance was evaluated pre- and post-OSCE enhancement, and student surveys provided feedback on the clinical realism and difficulty of the OSCE. Pre-intervention OSCE scores were tightly clustered (SD = 5.65%) around a high average and were highly negatively skewed. Post-intervention OSCE scores were more dispersed (SD = 6.88%) around a lower average and were far less skewed, yielding an approximately normal distribution. This lowered the total number of students achieving PD on the OSCE and PD in the clerkship, thus reducing the relative weight of the NBME-SE in the overall clerkship grade. Student response was positive, indicating the examination was fair and reflective of their clinical experiences. Through structured development, an OSCE can provide a realistic and objective measurement of clinical performance as part of the summative evaluation of students.
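As an illustration of the score-distribution comparison described above, the short Python sketch below computes the mean, sample standard deviation, and skewness of pre- and post-enhancement OSCE percentage scores. The score arrays are hypothetical placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import skew

# Hypothetical OSCE percentage scores (illustrative only, not the study's data)
pre_scores = np.array([92.0, 95.0, 90.0, 88.0, 97.0, 93.0, 91.0, 94.0, 96.0, 78.0])
post_scores = np.array([80.0, 85.0, 72.0, 78.0, 88.0, 76.0, 83.0, 90.0, 70.0, 81.0])

for label, scores in [("pre-enhancement", pre_scores), ("post-enhancement", post_scores)]:
    print(f"{label}: mean = {scores.mean():.1f}%, "
          f"SD = {scores.std(ddof=1):.2f}%, "      # sample standard deviation
          f"skewness = {skew(scores):.2f}")        # negative values = left-skewed
```

A larger SD and a skewness closer to zero after the enhancement would correspond to the more dispersed, approximately normal post-intervention distribution the authors report.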
Affiliation(s)
- Michael Rouse
- Internal Medicine, The University of Kansas School of Medicine, Kansas City, USA
- Jessica R. Newman
- Internal Medicine, The University of Kansas School of Medicine, Kansas City, USA
- Charles Waller
- Evaluation Analyst, Office of Medical Education, The University of Kansas School of Medicine, Kansas City, MO, USA
- Jennifer Fink
- Internal Medicine, The University of Kansas School of Medicine, Kansas City, USA
4
Kopelson K, de Peralta S, Pike NA. The 1-minute preceptor to improve diagnostic reasoning in a primary care nurse practitioner residency program. J Am Assoc Nurse Pract 2024;36:491-500. [PMID: 38832876] [DOI: 10.1097/jxx.0000000000001029]
Abstract
BACKGROUND: The One-Minute Preceptor (OMP) model for teaching diagnostic reasoning and the Reporter, Interpreter, Manager, and Educator (RIME) framework for measuring progress are used in physician training. Little is known about the use of these tools in nurse practitioner (NP) training.
LOCAL PROBLEM: Precepting NP trainees at the Veterans Affairs (VA) is not standardized. A standardized approach to precepting NP residency trainees using the OMP model and RIME scoring was evaluated for improvement and helpfulness.
METHODS: A quality-improvement project with two Plan-Do-Study-Act (PDSA) cycles was conducted over a 12-week period. Mean RIME scores, preceptor self-efficacy, and use of teaching skills were measured pre- and post-intervention. Data were analyzed using a paired-sample t-test and descriptive statistics.
INTERVENTIONS: A convenience sample of preceptors and trainees was recruited from a large VA medical center. A 1-hour workshop educated preceptors on OMP techniques and RIME scoring through role playing and return demonstrations. The teachings were applied to standardize precepting and assess diagnostic reasoning. Trainee self-scoring and results triggered conversations to address the identified gaps.
RESULTS: Mean RIME scores improved (1.62 [0.17] vs. 2.23 [0.38], p < .001) after the 12-week intervention. Mean RIME scores also improved between PDSA cycle 1 and cycle 2 (2.07 [0.25] vs. 2.48 [0.39], p < .001). Preceptors (91%) and trainees (100%) found the OMP model and RIME framework helpful.
CONCLUSION: Use of the OMP improved diagnostic reasoning in NP trainees. The OMP and RIME framework standardized precepting and trainee discussions about improvement.
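To make the pre/post comparison concrete, here is a minimal Python sketch of the paired-sample t-test named in the methods, applied to per-trainee mean RIME scores. The values are hypothetical placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical per-trainee mean RIME scores (1 = Reporter ... 4 = Educator);
# values are illustrative only, not the study's data
pre_intervention = np.array([1.5, 1.7, 1.6, 1.8, 1.5, 1.6])
post_intervention = np.array([2.1, 2.4, 2.2, 2.5, 2.0, 2.2])

# Paired-sample t-test: each trainee serves as their own control
t_stat, p_value = ttest_rel(post_intervention, pre_intervention)
print(f"mean pre = {pre_intervention.mean():.2f}, "
      f"mean post = {post_intervention.mean():.2f}, "
      f"t = {t_stat:.2f}, p = {p_value:.4f}")
```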
Affiliation(s)
- Kristin Kopelson
- Department of Medicine, Veterans Affairs, Greater Los Angeles, CA
- School of Nursing, University of California, Los Angeles, CA
- Shelly de Peralta
- Department of Medicine, Veterans Affairs, Greater Los Angeles, CA
- School of Nursing, University of California, Los Angeles, CA
- Nancy A Pike
- School of Nursing, University of California, Los Angeles, CA
- Children's Hospital Los Angeles, CA
- Sue & Bill Gross School of Nursing, University of California, Irvine
5
Mannam SS, Subtirelu R, Chauhan D, Ahmad HS, Matache IM, Bryan K, Chitta SVK, Bathula SC, Turlip R, Wathen C, Ghenbot Y, Ajmera S, Blue R, Chen HI, Ali ZS, Malhotra N, Srinivasan V, Ozturk AK, Yoon JW. Large Language Model-Based Neurosurgical Evaluation Matrix: A Novel Scoring Criteria to Assess the Efficacy of ChatGPT as an Educational Tool for Neurosurgery Board Preparation. World Neurosurg 2023;180:e765-e773. [PMID: 37839567] [DOI: 10.1016/j.wneu.2023.10.043]
Abstract
INTRODUCTION: Technological advancements are reshaping medical education, with digital tools becoming essential at all levels of training. Amid this transformation, this study explores the potential of ChatGPT, an artificial intelligence model by OpenAI, to enhance neurosurgical board education. The focus extends beyond technology adoption to its effective utilization, with ChatGPT's proficiency evaluated against practice questions from the Primary Neurosurgery Written Board Exam.
METHODS: Using the Congress of Neurological Surgeons (CNS) Self-Assessment Neurosurgery (SANS) Exam Board Review Prep questions, we conducted 3 rounds of analysis with ChatGPT. We developed a novel ChatGPT Neurosurgical Evaluation Matrix (CNEM) to assess the output quality, accuracy, concordance, and clarity of ChatGPT's answers.
RESULTS: ChatGPT achieved spot-on accuracy for 66.7% of prompted questions, 59.4% of unprompted questions, and 63.9% of unprompted questions with a leading phrase. Stratified by topic, accuracy ranged from 50.0% (Vascular) to 78.8% (Neuropathology). In comparison with the SANS explanations, ChatGPT output was considered better for 19.1% of questions, equal for 51.6%, and worse for 29.3%. Concordance analysis showed that 95.5% of unprompted ChatGPT outputs and 97.4% of unprompted outputs with a leading phrase were aligned.
CONCLUSIONS: Our study evaluated the performance of ChatGPT in neurosurgical board education by assessing its accuracy, clarity, and concordance. The findings highlight the potential and challenges of integrating AI technologies like ChatGPT into medical and neurosurgical board education. Further research is needed to refine these tools and optimize their performance for enhanced medical education and patient care.
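The topic-stratified accuracy figures above reduce to a simple tally. The Python sketch below shows one way such a tally could be computed; the topics and correctness flags are invented for illustration and are not the study's data.

```python
from collections import defaultdict

# Hypothetical (topic, answered_correctly) pairs for each exam question;
# illustrative only, not the study's data
results = [
    ("Vascular", True), ("Vascular", False), ("Vascular", False),
    ("Neuropathology", True), ("Neuropathology", True), ("Neuropathology", False),
    ("Spine", True), ("Spine", True), ("Spine", False),
]

tally = defaultdict(lambda: [0, 0])  # topic -> [correct, total]
for topic, correct in results:
    tally[topic][1] += 1
    tally[topic][0] += int(correct)

total_correct = sum(c for c, _ in tally.values())
total_questions = sum(t for _, t in tally.values())
print(f"Overall accuracy: {total_correct / total_questions:.1%}")
for topic, (correct, total) in sorted(tally.items()):
    print(f"  {topic}: {correct}/{total} = {correct / total:.1%}")
```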
Affiliation(s)
- Sneha Sai Mannam
- Department of Neurosurgery, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Robert Subtirelu
- Department of Neurosurgery, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Daksh Chauhan
- Department of Neurosurgery, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Hasan S Ahmad
- Department of Neurosurgery, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Irina Mihaela Matache
- Department of Physiology, Faculty of Medicine, Carol Davila University of Medicine and Pharmacy, Bucharest, Romania
- Kevin Bryan
- Department of Neurosurgery, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Siddharth V K Chitta
- Department of Neurosurgery, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Shreya C Bathula
- Department of Neurosurgery, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Ryan Turlip
- Department of Neurosurgery, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Connor Wathen
- Department of Neurosurgery, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Yohannes Ghenbot
- Department of Neurosurgery, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Sonia Ajmera
- Department of Neurosurgery, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Rachel Blue
- Department of Neurosurgery, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania, USA
- H Isaac Chen
- Department of Neurosurgery, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Zarina S Ali
- Department of Neurosurgery, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Neil Malhotra
- Department of Neurosurgery, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Visish Srinivasan
- Department of Neurosurgery, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Ali K Ozturk
- Department of Neurosurgery, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Jang W Yoon
- Department of Neurosurgery, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania, USA
6
Puri A, Memari M, Sottile EM, Snydman LK, Lee WW, Bonnema RA, Jones D, Nandiwada DR. Changing the Assessment Paradigm: Promoting a Growth Mindset Across the Medical Education Continuum. Am J Med 2023;136:207-212. [PMID: 36441037] [DOI: 10.1016/j.amjmed.2022.10.004]
Affiliation(s)
- Aditi Puri
- Department of Internal Medicine, MacNeal Hospital, Loyola University Health System, North Riverside, Ill.
- Milad Memari
- Division of General Internal Medicine, University of Pittsburgh Medical Center, Pa.
- Elisa M Sottile
- Division of General Internal Medicine, University of Florida College of Medicine - Jacksonville
- Laura K Snydman
- Division of General Internal Medicine, Tufts Medical Center, Boston, Mass.
- Wei Wei Lee
- Section of General Internal Medicine, University of Chicago Pritzker School of Medicine, Ill.
- Rachel A Bonnema
- Division of General Internal Medicine, University of Texas Southwestern School of Medicine, Dallas
- Danielle Jones
- Division of General Internal Medicine, Emory University School of Medicine, Atlanta, Ga.
- D Rani Nandiwada
- Division of General Internal Medicine, University of Pennsylvania Perelman School of Medicine, Philadelphia, Pa.
7
Pangaro L, Rodriguez RG, Hemmer PA. Reporter-Interpreter-Manager-Educator: An Observational Framework, Not a Grading Framework. Acad Med 2023;98:9-10. [PMID: 36576762] [DOI: 10.1097/acm.0000000000005017]
Affiliation(s)
- Louis Pangaro
- Professor of medicine, Uniformed Services University of the Health Sciences, Bethesda, Maryland; ORCID: http://orcid.org/0000-0001-5221-3161
- Rechell G Rodriguez
- Clinical professor of medicine, Department of Medicine, University of California, San Diego, California
- Paul A Hemmer
- Professor and vice chair for educational programs, Department of Medicine, Uniformed Services University of the Health Sciences, Bethesda, Maryland
8
Folk D, Ryckeley C, Nguyen M, Essig JJ, Beck Dallaghan GL, Coe C. Evaluating Family Medicine Resident Narrative Comments Using the RIME Scheme. J Med Educ Curric Dev 2022;9:23821205221090162. [PMID: 35356418] [PMCID: PMC8958670] [DOI: 10.1177/23821205221090162]
Abstract
BACKGROUND: In 2013, the Accreditation Council for Graduate Medical Education (ACGME) launched the Next Accreditation System, which required explicit documentation of trainee competence in six domains. To capture narrative comments, the University of North Carolina Family Medicine Residency Program developed a mobile application for documenting real-time observations.
OBJECTIVE: The objective of this work was to assess whether the Reporter, Interpreter, Manager, Expert (RIME) framework could be applied to the narrative comments in order to convey a degree of competency.
METHODS: From August to December 2020, 7 individuals analyzed the narrative comments about four family medicine residents. The narrative comments had been collected from July to December 2019. Each individual applied the RIME framework to the comments, and the team then met to discuss. Comments for which 5 of 7 individuals agreed were not discussed further. All other comments were discussed until consensus was achieved.
RESULTS: 102 unique comments were assessed. Of those comments, 25 (25.5%) met the threshold for assessor agreement after independent review. Group discussion of the discrepancies led to consensus on the appropriate classification for 92 (90.2%). General comments on performance were difficult to fit into the RIME framework.
CONCLUSIONS: Applying the RIME framework to narrative comments may add insight into trainee progress. Further faculty development is needed to ensure comments contain the discrete elements needed to apply the RIME framework and contribute to an overall evaluation of competence.
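The 5-of-7 agreement rule described in the methods is straightforward to express in code. The Python sketch below flags which comments reach that threshold and which would go to group discussion; the comments and labels are invented placeholders, not the study's data, and the "E = Expert" expansion follows the abstract above.

```python
from collections import Counter

# Hypothetical RIME labels assigned by 7 reviewers to each narrative comment
# (R = Reporter, I = Interpreter, M = Manager, E = Expert);
# illustrative only, not the study's data
ratings = {
    "presents the history clearly":  ["R", "R", "R", "R", "R", "I", "R"],
    "built a reasonable assessment": ["I", "I", "M", "I", "I", "R", "I"],
    "great team player":             ["R", "I", "M", "E", "R", "I", "M"],
}

AGREEMENT_THRESHOLD = 5  # at least 5 of 7 reviewers must assign the same label

for comment, labels in ratings.items():
    label, count = Counter(labels).most_common(1)[0]  # modal label and its count
    if count >= AGREEMENT_THRESHOLD:
        print(f"{comment!r}: consensus on {label} ({count}/7)")
    else:
        print(f"{comment!r}: no threshold agreement ({count}/7); discuss as a group")
```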
Affiliation(s)
- Catherine Coe
- University of North Carolina School of Medicine, Chapel Hill, NC
9
Bordage G, Daniels V, Wolpaw TM, Yudkowsky R. O-RI-M: Reporting to Include Data Interpretation. Acad Med 2021;96:1079-1080. [PMID: 36047866] [DOI: 10.1097/acm.0000000000004136]
Affiliation(s)
- Georges Bordage
- Professor emeritus, Department of Medical Education, College of Medicine, University of Illinois at Chicago, Chicago, Illinois
- Vijay Daniels
- Professor and associate chair for education and faculty development, Department of Medicine, and assistant dean for assessment, MD Program, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, Alberta, Canada
- Terry M Wolpaw
- Professor, Division of Rheumatology, Department of Medicine, and vice dean for educational affairs, Penn State College of Medicine, Hershey, Pennsylvania
- Rachel Yudkowsky
- Professor and director of graduate studies, Department of Medical Education, College of Medicine, University of Illinois at Chicago, Chicago, Illinois
10
Roberts LW. Emerging Issues in Assessment in Medical Education: A Collection. Acad Med 2021;96:159-160. [PMID: 33492817] [DOI: 10.1097/acm.0000000000003855]