1. Yeates P, Maluf A, McCray G, Kinston R, Cope N, Cullen K, O'Neill V, Cole A, Chung CW, Goodfellow R, Vallender R, Ensaff S, Goddard-Fuller R, McKinley R. Inter-school variations in the standard of examiners' graduation-level OSCE judgements. Medical Teacher 2025;47:735-743. [PMID: 38976711] [DOI: 10.1080/0142159x.2024.2372087]
Abstract
INTRODUCTION Ensuring equivalence in high-stakes performance exams is important for patient safety and candidate fairness. We compared inter-school examiner differences within a shared OSCE and the resulting impact on students' pass/fail categorisation. METHODS The same six-station formative OSCE ran asynchronously in four medical schools, with two parallel circuits per school. We compared examiners' judgements using Video-based Examiner Score Comparison and Adjustment (VESCA): examiners scored station-specific comparator videos in addition to 'live' student performances, enabling (1) controlled score comparisons by (a) examiner cohorts and (b) schools, and (2) data linkage to adjust for the influence of examiner cohorts. We calculated the score impact and the change in pass/fail categorisation by school. RESULTS On controlled video-based comparisons, inter-school variations in examiners' scoring (16.3%) were nearly double within-school variations (8.8%). Students' scores received a median adjustment of 5.26% (IQR 2.87-7.17%). The impact of adjusting for examiner differences on students' pass/fail categorisation varied by school: adjustment reduced the failure rate from 39.13% to 8.70% (school 2) whilst increasing it from 0.00% to 21.74% (school 4). DISCUSSION Whilst the formative context may partly account for the differences, these findings query whether variations exist between medical schools in examiners' judgements. This may benefit from systematic appraisal to safeguard equivalence. VESCA provided a viable method for such comparisons.
Affiliation(s)
- Peter Yeates
- School of Medicine, Keele University, Keele, United Kingdom
- Gareth McCray
- School of Medicine, Keele University, Keele, United Kingdom
- Ruth Kinston
- School of Medicine, Keele University, Keele, United Kingdom
- Natalie Cope
- School of Medicine, Keele University, Keele, United Kingdom
- Kathy Cullen
- School of Medicine, Dentistry and Biomedical Sciences, Queens University Belfast, Belfast, United Kingdom
- Vikki O'Neill
- School of Medicine, Dentistry and Biomedical Sciences, Queens University Belfast, Belfast, United Kingdom
- Aidan Cole
- School of Medicine, Dentistry and Biomedical Sciences, Queens University Belfast, Belfast, United Kingdom
- Ching-Wa Chung
- School of Medicine, Medical Sciences and Nutrition, University of Aberdeen, Aberdeen, United Kingdom
- Sue Ensaff
- School of Medicine, Cardiff University, Cardiff, United Kingdom
- Rikki Goddard-Fuller
- Christie Education, Christie Hospitals NHS Foundation Trust, Manchester, United Kingdom
2. Kelly WF, Hawks MK, Johnson WR, Maggio LA, Pangaro L, Durning SJ. Assessment Tools for Patient Notes in Medical Education: A Scoping Review. Academic Medicine 2025;100:358-374. [PMID: 39316464] [DOI: 10.1097/acm.0000000000005886]
Abstract
PURPOSE Physician proficiency in clinical encounter documentation is a universal expectation of medical education. However, deficiencies in note writing are frequently identified, with implications for patient safety, health care quality, and cost. This study aimed to create a compendium of tools for educators' practical implementation or future research. METHOD A scoping review was conducted using the Arksey and O'Malley framework. PubMed, Embase, Ovid All EBM Reviews, Web of Science, and MedEdPORTAL were searched for articles published from database inception to November 16, 2023, using the following search terms: documentation, note-writing, patient note, electronic health record note, entrustable professional activity 5, and other terms. For each note-writing assessment tool, information on the setting, the section(s) of the note assessed, tool properties, the numbers and roles of note writers and graders, the weight given to the tool if used in grading, learner performance, and stakeholder satisfaction and feasibility was extracted and summarized. RESULTS A total of 5,257 articles were identified; 32 studies with unique tools were included in the review. Eleven studies (34.4%) were published since 2018. Twenty-two studies (68.8%) outlined creating an original assessment tool, whereas 10 (31.2%) assessed a curriculum intervention using a tool. Tools varied in length and complexity. None provided data on equity or fairness to student or resident note writers or about readability for patients. Note writers often had missing or incomplete documentation (mean [SD] total tool score of 60.3% [19.4%] averaged over 25 studies), often improving after intervention. The selected patient note assessment tool studies have been cited a mean (SD) of 6.3 (9.2) times. Approximately half of the tools (17 [53.1%]) or their accompanying articles were open access. CONCLUSIONS Diverse tools have been published to assess patient notes, often identifying deficiencies. This compendium may assist educators and researchers in improving patient care documentation.
3. Veters MD, May B, Wu CL, Schmit EO, Berger S. Improving Time to Completion of Medical Student Clerkship Evaluations. Hosp Pediatr 2025;15:74-81. [PMID: 39729400] [DOI: 10.1542/hpeds.2023-007629]
Abstract
BACKGROUND AND OBJECTIVES Medical student clinical clerkship evaluations provide feedback for growth and contribute to the clerkship grade and the student's residency application. Their importance is expected to increase further with the recent change of the US Medical Licensing Examination Step 1 to a pass/fail designation. Timely completion of medical student clerkship evaluations is a persistent problem. In this study, the authors aimed to increase the percentage of student evaluations completed by pediatric hospital medicine physicians within 14 days of assignment from 45% to 75% or greater using quality improvement (QI) methodology over a 1-year period (January through December 2021). METHODS The evaluation program generated reports every 2 months. Control charts were used to analyze the data. Plan-Do-Study-Act cycles were performed. Interventions included education, baseline data knowledge, gamification, standardization, quarterly data updates, and weekly announcement of incomplete evaluations. Balancing measures included the word count of narrative comments in the evaluation and the percentage of clinical honors awarded. RESULTS Intervention implementation was associated with special cause variation in the primary outcome, with the average timely completion rate increasing from a baseline of 45% to 79%, which remained consistent during the 8-month sustainment period. Word count of narrative comments and percentage of honors designation showed no special cause variation or shifts. CONCLUSIONS Using QI methods, timely completion of medical student clerkship evaluations was improved through several interventions, including weekly announcements of incomplete evaluations and gamification with quarterly recognition awards. QI methodology is an effective and practical way to improve timely completion of medical student clerkship evaluations.
Affiliation(s)
- Michelle D Veters
- Department of Pediatrics, Division of Hospital Medicine, The University of Alabama at Birmingham Heersink School of Medicine, Birmingham, Alabama
- Brian May
- Department of Medicine and Pediatrics, Divisions of Hospital Medicine, The University of Alabama at Birmingham Heersink School of Medicine, Birmingham, Alabama
- Chang L Wu
- Department of Pediatrics, Division of Hospital Medicine, The University of Alabama at Birmingham Heersink School of Medicine, Birmingham, Alabama
- Erinn O Schmit
- Department of Pediatrics, Division of Hospital Medicine, The University of Alabama at Birmingham Heersink School of Medicine, Birmingham, Alabama
- Stephanie Berger
- Department of Pediatrics, Division of Hospital Medicine, The University of Alabama at Birmingham Heersink School of Medicine, Birmingham, Alabama
4. Russo RA, Raml DM, Kerlek AJ, Klapheke M, Martin KB, Rakofsky JJ. Bias in Medical School Clerkship Grading: Is It Time for a Change? Academic Psychiatry 2023;47:428-431. [PMID: 35974212] [DOI: 10.1007/s40596-022-01696-z]
Affiliation(s)
- Rachel A Russo
- VA North Texas Health Care System and University of Texas - Southwestern Medical Center, Dallas, TX, USA.
- Dana M Raml
- University of Nebraska Medical Center, Lincoln, NE, USA
- Anna J Kerlek
- The Ohio State University College of Medicine, Columbus, OH, USA
- Martin Klapheke
- University of Central Florida College of Medicine, Orlando, FL, USA
- Katherine B Martin
- Lehigh Valley Health Network and University of South Florida Morsani College of Medicine, Allentown, PA, USA
5. Alexander SM, Shenvi CL, Nichols KR, Dent G, Smith KL. Multivariate Modeling of Student Performance on NBME Subject Exams. Cureus 2023;15:e40809. [PMID: 37485212] [PMCID: PMC10362906] [DOI: 10.7759/cureus.40809]
Abstract
Aim This study sought to determine whether statistical models could accurately predict student performance on clinical subject exams from National Board of Medical Examiners (NBME) self-assessment performance and other variables, described below, as such tools are not currently available. Methods Students at a large public medical school were provided fee vouchers for NBME self-assessments before clinical subject exams. Multivariate regression models were then developed based on how self-assessment performance correlated with student success on the subsequent subject exam (Medicine, Surgery, Family Medicine, Obstetrics-Gynecology, Pediatrics, and Psychiatry) while controlling for the proximity of the self-assessment to the exam, USMLE Step 1 score, and the academic quarter. Results The variables analyzed satisfied the requirements of linear regression. The correlation strength of individual variables and overall models varied by discipline and outcome (equated percent correct or percentile; model R² range: 0.1799-0.4915). All models showed statistical significance on the omnibus F-test (p<0.001). Conclusion The correlation coefficients demonstrate that these models have weak to moderate predictive value that varies widely by subject exam. The next step is to use these models to identify struggling students, to determine whether their use reduces failure rates, and to further improve model accuracy by controlling for additional variables.
Affiliation(s)
- Seth M Alexander
- Medicine, University of North Carolina at Chapel Hill School of Medicine, Chapel Hill, USA
- Education, Harvard Graduate School of Education, Cambridge, USA
- Christina L Shenvi
- Emergency Medicine, University of North Carolina at Chapel Hill School of Medicine, Chapel Hill, USA
- Kimberley R Nichols
- Anesthesiology, University of North Carolina at Chapel Hill School of Medicine, Chapel Hill, USA
- Georgette Dent
- Pathology and Laboratory Medicine, University of North Carolina at Chapel Hill School of Medicine, Chapel Hill, USA
- Kelly L Smith
- Family Medicine, University of North Carolina at Chapel Hill School of Medicine, Chapel Hill, USA
6. Rucker L, Rucker G, Nguyen A, Noel M, Marroquin M, Streja E, Hennrikus E. Medical Faculty and Medical Student Opinions on the Utility of Questions to Teach and Evaluate in the Clinical Environment. Medical Science Educator 2023;33:669-678. [PMID: 37501806] [PMCID: PMC10368585] [DOI: 10.1007/s40670-023-01780-5]
Abstract
Objectives We sought to report medical student and faculty perceptions of the purpose and utility of questions on clinical rounds. Methods We developed and administered a survey to third- and fourth-year medical students and teaching physicians. The survey elicited attitudes about using questions to teach on rounds in both benign and malignant learning environments. Results Ninety-seven percent of faculty and 85% of students predicted they would use questions to teach. Nine percent of students described learning-impairing stress during benign bedside teaching. Fifty-nine percent of faculty felt questions were mostly for teaching; 74% of students felt questions were mostly for evaluation. Forty-six percent of students felt questions underestimated their knowledge. Students felt questions were more effective for classroom teaching than bedside teaching. Faculty and students agreed that a malignant environment detrimentally affected learning and performance. Conclusions Students and faculty supported the use of questions to teach and evaluate, especially in benign teaching environments. Many students described stress severe enough to affect their learning and performance, even when questioned in benign teaching environments. Faculty underestimated the degree to which students experience stress-related learning impairment and the degree to which students see questions as evaluation rather than teaching. Nearly half of students felt that questions underestimated their knowledge. Students felt more stress and learned less when questioned with a patient present. Faculty must realize that even in the best learning environment some students experience stress-impaired learning and performance, perhaps because of the conflict between learning and evaluation.
Affiliation(s)
- Lloyd Rucker
- Department of Medicine, Irvine School of Medicine, University of California, 101 the City Drive Orange, Irvine, CA 92868 USA
- Tibor Rubin Long Beach Veterans Administration Hospital, Long Beach, CA USA
- Garrett Rucker
- Department of Medicine, MedStar Washington Hospital Center – Georgetown University, Washington, DC USA
- Angelica Nguyen
- Tibor Rubin Long Beach Veterans Administration Hospital, Long Beach, CA USA
- Irvine School of Medicine, University of California, Irvine, CA USA
- Maria Noel
- Department of Emergency Medicine, Christiana Care, Newark, DE USA
- Elani Streja
- Division of Nephrology, Department of Medicine, University of California, Irvine School of Medicine, Irvine, CA USA
7. Schafer KR, Sood L, King CJ, Alexandraki I, Aronowitz P, Cohen M, Chretien K, Pahwa A, Shen E, Williams D, Hauer KE. The Grade Debate: Evidence, Knowledge Gaps, and Perspectives on Clerkship Assessment Across the UME to GME Continuum. Am J Med 2023;136:394-398. [PMID: 36632923] [DOI: 10.1016/j.amjmed.2023.01.001]
Affiliation(s)
- Katherine R Schafer
- Department of Internal Medicine, Wake Forest University School of Medicine, Winston-Salem, NC.
- Lonika Sood
- Elson S. Floyd College of Medicine, Washington State University, Spokane
- Christopher J King
- Division of Hospital Medicine, Department of Medicine, University of Colorado School of Medicine, Aurora
- Margot Cohen
- Department of Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia
- Amit Pahwa
- Johns Hopkins University School of Medicine, Baltimore, Md
- E Shen
- Department of Internal Medicine, Wake Forest University School of Medicine, Winston-Salem, NC
- Donna Williams
- Department of Internal Medicine, Wake Forest University School of Medicine, Winston-Salem, NC
8. Imm MR, Agarwal G, Zhang C, Deshpande AR, Issenberg B, Chandran L. EPMO: A novel medical student assessment tool that integrates entrustable professional activities, PRIME, and the Modified Ottawa co-activity scale. Medical Teacher 2023;45:419-425. [PMID: 36288734] [DOI: 10.1080/0142159x.2022.2137012]
Abstract
PURPOSE Alignment of workplace-based assessments (WPBA) with core entrustable professional activities (EPAs) for entering residency may provide opportunities to monitor student progress across the continuum of undergraduate medical education. Core EPAs, however, reflect tasks of varying degrees of difficulty, and faculty assessors are not accustomed to rating students based on entrustability. Expectations of student progress should vary depending on the complexity of the tasks associated with the EPAs. An assessment tool that orients evaluators to the developmental progression of specific EPA tasks will be critical to fairly evaluate learners. METHODS The authors developed an EPA assessment tool combining the frameworks of Professionalism, Reporter, Interpreter, Manager, Educator (PRIME) and the Modified Ottawa co-activity scale. Only those EPAs that could be repeatedly observed and assessed across clinical clerkships were included. From July 2019 to March 2020, third-year medical students across multiple clerkships were assessed using this tool. The authors hypothesized that if the tool was applied correctly, ratings of learner independence would be lower for higher-complexity tasks and that such ratings would increase over the course of the year with ongoing clinical learning. RESULTS Assessment data for 247 medical students were similar across clerkships, suggesting that evaluators in diverse clinical contexts were able to use this tool to assign scores reflective of developing entrustability in the workplace. Faculty rated student entrustability highest in skills emphasized in the pre-clerkship curriculum (professionalism and reporter) and progressively lower in more advanced skills (interpreter and manager). Students' ratings increased over time with more clinical exposure. CONCLUSIONS The authors developed a composite WPBA tool that combines the frameworks of EPAs, PRIME, and the Modified Ottawa co-activity scale and demonstrated its usability for learner assessment in clinical settings. Further multicenter studies with cohorts of pre- and post-clerkship students may provide additional validity evidence for the tool.
Affiliation(s)
- Matthew R Imm
- Department of Medicine, The University of Miami Leonard M. Miller School of Medicine, Miami, FL, USA
- Gauri Agarwal
- Department of Medicine, The University of Miami Leonard M. Miller School of Medicine, Miami, FL, USA
- Chi Zhang
- Department of Medical Education, The University of Miami Leonard M. Miller School of Medicine, Miami, FL, USA
- Amar R Deshpande
- Department of Medicine, The University of Miami Leonard M. Miller School of Medicine, Miami, FL, USA
- Barry Issenberg
- Department of Medical Education, The University of Miami Leonard M. Miller School of Medicine, Miami, FL, USA
- Latha Chandran
- Department of Medical Education, The University of Miami Leonard M. Miller School of Medicine, Miami, FL, USA
9. Zavodnick J, Doroshow J, Rosenberg S, Banks J, Leiby BE, Mingioni N. Hawks and Doves: Perceptions and Reality of Faculty Evaluations. Journal of Medical Education and Curricular Development 2023;10:23821205231197079. [PMID: 37692558] [PMCID: PMC10492463] [DOI: 10.1177/23821205231197079]
Abstract
OBJECTIVES Internal medicine clerkship grades are important for residency selection, but inconsistencies between evaluator ratings threaten their ability to accurately represent student performance and their perceived fairness. Clerkship grading committees are recommended as best practice, but the mechanisms by which they promote accuracy and fairness are uncertain. The ability of a committee to reliably assess and account for the grading stringency of individual evaluators has not been previously studied. METHODS This is a retrospective analysis of evaluations completed by faculty considered to be stringent, lenient, or neutral graders by members of a grading committee at a single medical college. Faculty evaluations were assessed for differences in ratings on individual skills and in recommendations for final grade between perceived stringency categories. Logistic regression was used to determine whether actual assigned ratings varied with a faculty member's perceived grading stringency category. RESULTS "Easy graders" consistently had the highest probability of awarding an above-average rating, and "hard graders" consistently had the lowest, though this finding reached statistical significance for only 2 of 8 questions on the evaluation form (P = .033 and P = .001). Odds ratios of assigning a higher final suggested grade followed the expected pattern (higher for "easy" and "neutral" compared with "hard", and higher for "easy" compared with "neutral") but did not reach statistical significance. CONCLUSIONS Perceived differences in faculty grading stringency have a basis in reality for clerkship evaluation elements. However, final grades recommended by faculty perceived as "stringent" or "lenient" did not differ: "hawks" and "doves" are not just lore, but the distinction may not have implications for students' final grades. Continued research on the "hawk and dove effect" will be crucial to enable assessment of local grading variation and to empower local educational leadership to correct, but not overcorrect, for this effect to maintain fairness in student evaluations.
Affiliation(s)
- Jillian Zavodnick
- Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, USA
- Sarah Rosenberg
- Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, USA
- Joshua Banks
- Department of Pharmacology and Experimental Therapeutics, Division of Biostatistics, Thomas Jefferson University, Philadelphia, USA
- Benjamin E Leiby
- Department of Pharmacology and Experimental Therapeutics, Division of Biostatistics, Thomas Jefferson University, Philadelphia, USA
- Nina Mingioni
- Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, USA
10. Babiker ZOE, Gariballa S, Narchi H, Shaban S, Alshamsi F, Bakoush O. Score Gains on the NBME Subject Examinations in Internal Medicine Among Clerkship Students: A Two-Year Longitudinal Study from the United Arab Emirates. Medical Science Educator 2022;32:891-897. [PMID: 36035526] [PMCID: PMC9411407] [DOI: 10.1007/s40670-022-01582-1]
Abstract
Background The impact of clinical proficiency on individual student scores on the National Board of Medical Examiners (NBME) Subject Examinations remains uncertain. We hypothesised that increasing the length of time spent in a clinical environment would augment students' performance. Methods Performance on the NBME Subject Examination in Internal Medicine (NBME-IM) of three student cohorts was observed longitudinally. Scores at the end of two distinct internal medicine clerkships, taken in the third and fourth years, were compared. Score differences between the two administrations were compared using paired t-tests, and effect size was measured using Cohen's d. Linear regression was used to assess the correlation between NBME-IM score gains and performance on a pre-clinical Comprehensive Basic Science Examination (CBSE). A two-tailed p-value <0.05 was considered significant. Results Among the 236 students enrolled during the third year, age, gender, CBSE scores, and NBME-IM scores were similar across all cohorts. The normalised score gain on the NBME-IM in the fourth year was 9.5% (range -38% to +45%) with a Cohen's d of 0.47. A larger effect size (Cohen's d = 0.96) was observed among poorly scoring students. Performance on the CBSE was a significant predictor of score gain on the NBME-IM (R = 0.51, R² = 0.26, p < 0.001). Conclusions Despite the increased length of clinical exposure, only modest improvement in students' performance on the repeated NBME-IM examination was observed. Medical educators need to reconsider how the NBME-IM is used in clerkship assessments.
Affiliation(s)
- Zahir Osman Eltahir Babiker
- Department of Internal Medicine, College of Medicine and Health Sciences, United Arab Emirates University, PO Box 17666, Al Ain, United Arab Emirates
- Division of Infectious Diseases, Sheikh Shakhbout Medical City, Abu Dhabi, United Arab Emirates
- Salah Gariballa
- Department of Internal Medicine, College of Medicine and Health Sciences, United Arab Emirates University, PO Box 17666, Al Ain, United Arab Emirates
- Hassib Narchi
- Department of Paediatrics, College of Medicine and Health Sciences, United Arab Emirates University, Al Ain, United Arab Emirates
- Sami Shaban
- Department of Medical Education, College of Medicine and Health Sciences, United Arab Emirates University, Al Ain, United Arab Emirates
- Fayez Alshamsi
- Department of Internal Medicine, College of Medicine and Health Sciences, United Arab Emirates University, PO Box 17666, Al Ain, United Arab Emirates
- Omran Bakoush
- Department of Internal Medicine, College of Medicine and Health Sciences, United Arab Emirates University, PO Box 17666, Al Ain, United Arab Emirates
11. Roles and Responsibilities of Medicine Subinternship Directors: Medicine Subinternship Director Roles. J Gen Intern Med 2022;37:2698-2702. [PMID: 34545467] [PMCID: PMC9411493] [DOI: 10.1007/s11606-021-07128-2]
Abstract
BACKGROUND The internal medicine (IM) subinternship (also referred to as the acting internship) plays a crucial role in preparing medical students for residency. The roles, responsibilities, and support provided to subinternship directors have not been described. OBJECTIVE We sought to describe the current role of IM subinternship directors with respect to their responsibilities, salary support, and reporting structure. DESIGN Nationally representative, annually recurring thematic survey of IM core clerkship directors with membership in an academic professional association as of September 2017. PARTICIPANTS A total of 129 core clinical medicine clerkship directors at Liaison Committee on Medical Education fully accredited U.S./U.S.-territory-based medical schools. MAIN MEASURES Responsibilities, salary support, and reporting structure of subinternship directors. KEY RESULTS The survey response rate was 83.0% (107/129 medical schools). Fifty-one percent (54/107) of respondents reported overseeing core clerkship inpatient experiences and/or one or more subinternships. For oversight, 49.1% (28/53) of subinternship directors reported that they were also the clerkship director, 26.4% (14/53) that another faculty member directed all medicine subinternships, and 18.9% (10/53) that each subinternship had its own director. The most frequently reported responsibilities of subinternship directors were administration, including scheduling and logistics of student schedules (83.0%, 44/53), course evaluation (81.1%, 43/53), and setting grades (79.2%, 42/53). The modal estimated FTE per course was 10-20%, with 33.3% (16/48) reporting this level of support and 29.2% (14/54) reporting no FTE support. CONCLUSIONS The role of the IM subinternship director has become increasingly complex. Since the IM subinternship is critical to preparing students for residency, IM subinternship directors require standard expectations and adequate support. Future studies are needed to determine the appropriate level of support for subinternship directors and to define their essential roles and responsibilities.
12. Variation in core clerkship grading reported on the Medical Student Performance Evaluation (MSPE) for orthopaedic surgery applicants: a retrospective review. Current Orthopaedic Practice 2022. [DOI: 10.1097/bco.0000000000001152]
13. Schilling D. A Needed Consensus: Do Not Use NBME Medicine Subject Exam Threshold Scores in Clerkship Grading. Academic Medicine 2022;97:167. [PMID: 35084390] [DOI: 10.1097/acm.0000000000004397]
Affiliation(s)
- David Schilling
- Associate Professor, Department of Psychiatry, Loyola University Chicago Stritch School of Medicine, Maywood, Illinois. ORCID: http://orcid/0000-0001-8553-6186
14. Dhaliwal G, Hauer KE. Excellence in medical training: developing talent-not sorting it. Perspectives on Medical Education 2021;10:356-361. [PMID: 34415554] [PMCID: PMC8377327] [DOI: 10.1007/s40037-021-00678-5]
Abstract
Many medical schools have reconsidered or eliminated clerkship grades and honor society memberships. National testing organizations announced plans to eliminate numerical scoring for the United States Medical Licensing Examination Step 1 in favor of pass/fail results. These changes have led some faculty to wonder: "How will we recognize and reward excellence?" Excellence in undergraduate medical education has long been defined by high grades, top test scores, honor society memberships, and publication records. However, this model of learner excellence is misaligned with how students learn or what society values. This accolade-driven view of excellence is perpetuated by assessments that are based on gestalt impressions influenced by similarity between evaluators and students, and assessments that are often restricted to a limited number of traditional skill domains. To achieve a new model of learner excellence that values the trainee's achievement, growth, and responsiveness to feedback across multiple domains, we must envision a new model of teacher excellence. Such teachers would have a growth mindset toward assessing competencies and learning new competencies. Actualizing true learner excellence will require teachers to change from evaluators who conduct assessments of learning to coaches who do assessment for learning. Schools will also need to establish policies and structures that foster a culture that supports this change. In this new paradigm, a teacher's core duty is to develop talent rather than sort it.
Affiliation(s)
- Gurpreet Dhaliwal
- Department of Medicine, University of California San Francisco School of Medicine, San Francisco, CA, USA.
- Medical Service, San Francisco VA Medical Center, San Francisco, CA, USA.
- Karen E Hauer
- Department of Medicine, University of California San Francisco School of Medicine, San Francisco, CA, USA
15
Onumah CM, Lai CJ, Levine D, Ismail N, Pincavage AT, Osman NY. Aiming for Equity in Clerkship Grading: Recommendations for Reducing the Effects of Structural and Individual Bias. Am J Med 2021; 134:1175-1183.e4. [PMID: 34144012 DOI: 10.1016/j.amjmed.2021.06.001] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/21/2021] [Accepted: 06/03/2021] [Indexed: 12/30/2022]
Affiliation(s)
- Chavon M Onumah
- Department of Medicine, George Washington School of Medicine and Health Sciences, Washington, DC
- Cindy J Lai
- Department of Medicine, University of California, San Francisco, School of Medicine
- Diane Levine
- Department of Medicine, Wayne State University School of Medicine, Detroit, Mich
- Nadia Ismail
- Department of Medicine, Baylor College of Medicine, Houston, Texas
- Amber T Pincavage
- Department of Medicine, University of Chicago Pritzker School of Medicine, Ill
- Nora Y Osman
- Department of Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, Mass.
16
Gorth DJ, Magee RG, Rosenberg SE, Mingioni N. Gender Disparity in Evaluation of Internal Medicine Clerkship Performance. JAMA Netw Open 2021; 4:e2115661. [PMID: 34213556 PMCID: PMC8254135 DOI: 10.1001/jamanetworkopen.2021.15661] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/05/2022] Open
Abstract
IMPORTANCE Women studying medicine currently equal men in number, but evidence suggests that men and women might not be evaluated equally throughout their education. OBJECTIVE To examine whether there are differences associated with gender in either objective or subjective evaluations of medical students in an internal medicine clerkship. DESIGN, SETTING, AND PARTICIPANTS This single-center retrospective cohort study evaluated data from 277 third-year medical students completing internal medicine clerkships in the 2017 to 2018 academic year at an academic hospital and its affiliates in Pennsylvania. Data were analyzed from September to November 2020. EXPOSURE Gender, presumed based on pronouns used in evaluations. MAIN OUTCOMES AND MEASURES Likert scale evaluations of clinical skills, standardized examination scores, and written evaluations were analyzed. Univariate and multivariate linear regression were used to observe trends in measures. Word embeddings were analyzed for narrative evaluations. RESULTS Analyses of 277 third-year medical students completing an internal medicine clerkship (140 women [51%] with a mean [SD] age of 25.5 [2.3] years and 137 [49%] presumed men with a mean [SD] age of 25.9 [2.7] years) detected no difference in final grade distribution. However, women outperformed men in 5 of 8 domains of clinical performance, including patient interaction (difference, 0.07 [95% CI, 0.04-0.13]), growth mindset (difference, 0.08 [95% CI, 0.01-0.11]), communication (difference, 0.05 [95% CI, 0-0.12]), compassion (difference, 0.125 [95% CI, 0.03-0.11]), and professionalism (difference, 0.07 [95% CI, 0-0.11]). With no difference in examination scores or subjective knowledge evaluation, there was a positive correlation between these variables for both genders (women: r = 0.35; men: r = 0.26) but different elevations for the line of best fit (P < .001). 
Multivariate regression analyses revealed associations between final grade and patient interaction (women: coefficient, 6.64 [95% CI, 2.16-11.12]; P = .004; men: coefficient, 7.11 [95% CI, 2.94-11.28]; P < .001), subjective knowledge evaluation (women: coefficient, 6.66 [95% CI, 3.87-9.45]; P < .001; men: coefficient, 5.45 [95% CI, 2.43-8.43]; P < .001), reported time spent with the student (women: coefficient, 5.35 [95% CI, 2.62-8.08]; P < .001; men: coefficient, 3.65 [95% CI, 0.83-6.47]; P = .01), and communication (women: coefficient, 6.32 [95% CI, 3.12-9.51]; P < .001; men: coefficient, 4.21 [95% CI, 0.92-7.49]; P = .01). The model based on the men's data also included growth mindset as a significant variable (coefficient, 4.09 [95% CI, 0.67-7.50]; P = .02). For narrative evaluations, words in context with "he or him" and "she or her" differed, with agentic terms used in descriptions of men and personality descriptors used more often for women. CONCLUSIONS AND RELEVANCE Despite no difference in final grade, women scored higher than men on various domains of clinical performance, and performance in these domains was associated with evaluators' suggested final grade. The content of narrative evaluations significantly differed by student gender. This work supports the hypothesis that how students are evaluated in clinical clerkships is associated with gender.
Affiliation(s)
- Deborah J. Gorth
- Sidney Kimmel Medical College at Thomas Jefferson University, Philadelphia, Pennsylvania
- Rogan G. Magee
- Sidney Kimmel Medical College at Thomas Jefferson University, Philadelphia, Pennsylvania
- Sarah E. Rosenberg
- Sidney Kimmel Medical College at Thomas Jefferson University, Philadelphia, Pennsylvania
- Department of Medicine, Sidney Kimmel Medical College at Thomas Jefferson University, Philadelphia, Pennsylvania
- Nina Mingioni
- Sidney Kimmel Medical College at Thomas Jefferson University, Philadelphia, Pennsylvania
- Department of Medicine, Sidney Kimmel Medical College at Thomas Jefferson University, Philadelphia, Pennsylvania
17
Roberts LW. Emerging Issues in Assessment in Medical Education: A Collection. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2021; 96:159-160. [PMID: 33492817 DOI: 10.1097/acm.0000000000003855] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]