1. Miller B, Nowalk A, Ward C, Walker L, Dewar S. Pediatric residency milestone performance is not predicted by the United States Medical Licensing Examination Step 2 Clinical Knowledge. MedEdPublish 2024; 13:308. PMID: 39185002; PMCID: PMC11344197; DOI: 10.12688/mep.19873.2.
Abstract
Objectives This study aims to determine whether a correlation exists between pediatric residency applicants' quantitative scores on the United States Medical Licensing Examination (USMLE) Step 2 Clinical Knowledge examination and their subsequent performance in residency training based on the Accreditation Council for Graduate Medical Education (ACGME) Milestones, which are competency-based assessments that aim to determine residents' ability to work unsupervised after postgraduate training. No previous literature has correlated Step 2 Clinical Knowledge scores with pediatric residency performance assessed by Milestones. Methods In this retrospective cohort study, USMLE Step 2 Clinical Knowledge scores and Milestones data were collected from all 188 residents enrolled in a single categorical pediatric residency program from 2012-2017. Pearson correlation coefficients were calculated among available test and Milestone data points to determine the correlation between test scores and clinical performance. Results No significant correlation was found between quantitative scores on the Step 2 Clinical Knowledge exam and average Milestones ratings (r = -0.1 for postgraduate year 1 residents and r = 0.25 for postgraduate year 3 residents). Conclusions These results suggest that Step 2 scores do not correlate with success in residency training as measured by progression along competency-based Milestones. This finding should limit the weight residency programs place on quantitative Step 2 scores when ranking residency applicants. Future studies should include multiple residency programs across multiple specialties to make these findings more generalizable.
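For readers unfamiliar with the statistic, the Pearson coefficient reported above is straightforward to compute: the covariance of the two variables divided by the product of their standard deviations. The sketch below is purely illustrative (not the study's code), and the Step 2 CK and Milestone values are hypothetical:

```python
import math

def pearson_r(x, y):
    # Pearson correlation: covariance of x and y divided by
    # the product of their standard deviations.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical Step 2 CK scores and average Milestone ratings
step2 = [230, 245, 250, 238, 260]
milestones = [3.1, 2.9, 3.4, 3.0, 3.2]
r = pearson_r(step2, milestones)
```

A value near 0, as the study reports for PGY-1 residents, means the linear association between the two measures is negligible.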
Affiliation(s)
- Andrew Nowalk: University of Pittsburgh, Pittsburgh, Pennsylvania, USA
- Caroline Ward: University of Pittsburgh, Pittsburgh, Pennsylvania, USA
2. Halagur AS, Balakrishnan K, Ayoub N. Large Language Models in Otolaryngology Residency Admissions: A Random Sampling Analysis. Laryngoscope 2024. PMID: 39157995; DOI: 10.1002/lary.31705.
Abstract
OBJECTIVES To investigate potential demographic bias in artificial intelligence (AI)-based simulations of otolaryngology residency selection committee (RSC) members tasked with selecting one applicant among candidates with varied racial, gender, and sexual orientation characteristics. METHODS This study employed random sampling of simulated RSC member decisions using a novel Application Programming Interface (API) connection to OpenAI's Generative Pre-Trained Transformers (GPT-4 and GPT-4o). Simulated RSC members with diverse demographics were tasked with selecting 1 applicant to match among 10 with varied racial, gender, and sexual orientation characteristics. All applicants had identical qualifications; only the demographics of the applicants and RSC members were varied for each simulation. Each RSC simulation ran 1000 times. Chi-square tests analyzed differences across categorical variables. GPT-4o simulations additionally requested a rationale for each decision. RESULTS Simulated RSCs consistently showed racial, gender, and sexual orientation bias. Most applicant pairwise comparisons were statistically significant (p < 0.05). White and Black RSCs exhibited the greatest preference for applicants sharing their own demographic characteristics, favoring White and Black female applicants, respectively, over others (all pairwise p < 0.001). Asian male applicants consistently received the lowest selection rates. Male RSCs favored White male and female applicants, while female RSCs preferred LGBTQIA+, White, and Black female applicants (all p < 0.05). High socioeconomic status (SES) RSCs favored White female and LGBTQIA+ applicants, while low SES RSCs favored Black female and LGBTQIA+ applicants over others (all p < 0.001). Results from the newest iteration of the large language model (LLM), ChatGPT-4o, indicated evolved selection preferences favoring Black female and LGBTQIA+ applicants across all RSCs, with a rationale of prioritizing inclusivity given in >95% of such decisions.
CONCLUSION Using publicly available LLMs to aid in otolaryngology residency selection may introduce significant racial, gender, and sexual orientation bias. The potential for significant and evolving LLM bias should be appreciated and minimized to promote a diverse and representative field of future otolaryngologists in alignment with current workforce data. LEVEL OF EVIDENCE N/A.
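The chi-square comparisons described in the methods reduce to the Pearson chi-square statistic on a contingency table of selection counts. Below is a minimal pure-Python sketch; the grouping and counts are invented for illustration, not taken from the study:

```python
def chi_square_stat(table):
    # Pearson chi-square statistic for a 2D contingency table:
    # sum over cells of (observed - expected)^2 / expected,
    # where expected = row_total * col_total / grand_total.
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_totals[i] * col_totals[j] / grand
            stat += (obs - exp) ** 2 / exp
    return stat

# Hypothetical selection counts: rows = two simulated RSC profiles,
# columns = three applicant demographic groups (1,000 runs each)
observed = [[420, 310, 270],
            [250, 400, 350]]
stat = chi_square_stat(observed)
```

The statistic is then compared against a chi-square distribution with (rows-1)(cols-1) degrees of freedom to obtain the p-values the abstract reports.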
Affiliation(s)
- Akash S Halagur: Department of Otolaryngology-Head & Neck Surgery, Stanford University School of Medicine, Stanford, California, USA; Department of Otolaryngology-Head & Neck Surgery, Geisel School of Medicine at Dartmouth, Hanover, New Hampshire, USA
- Karthik Balakrishnan: Division of Pediatric Otolaryngology, Department of Otolaryngology-Head & Neck Surgery, Stanford University School of Medicine, Palo Alto, California, USA
- Noel Ayoub: Division of Rhinology and Skull Base Surgery, Department of Otolaryngology-Head & Neck Surgery, Mass Eye and Ear, Boston, Massachusetts, USA
3. Gardner AK, Costa P. Predicting Surgical Resident Performance With Situational Judgment Tests. Academic Medicine 2024; 99:884-888. PMID: 38412475; DOI: 10.1097/acm.0000000000005680.
Abstract
PURPOSE Situational judgment tests (SJTs) have been proposed as an efficient, effective, and equitable approach to residency program applicant selection. This study examined how SJTs can predict milestone performance during early residency. METHOD General surgery residency program applicants during 3 selection cycles (2018-2019, 2019-2020, 2020-2021) completed SJTs. Accreditation Council for Graduate Medical Education milestone performance data from selected applicants were collected in March and April 2019, 2020, and 2021 and from residents in March 2020, August 2020, March 2021, September 2021, and March 2022. Descriptive statistics and correlations were computed, and analysis of variance tests were performed to examine differences among 4 SJT performance groups: green, top 10% to 25%; yellow, next 25% to 50%; red, bottom 50%; and unknown, did not complete the SJT. RESULTS Data were collected for 70 residents from 7 surgery residency programs. Differences were found for patient care (F(3,189) = 3.19, P = .03), medical knowledge (F(3,176) = 3.22, P = .02), practice-based learning and improvement (F(3,189) = 3.18, P = .04), professionalism (F(3,189) = 3.82, P = .01), interpersonal and communication skills (F(3,190) = 3.35, P = .02), and overall milestone score (F(3,189) = 3.44, P = .02). The green group performed better on patient care, medical knowledge, practice-based learning and improvement, professionalism, and overall milestone score. The yellow group performed better than the red group on professionalism and overall milestone score, better than the green group on interpersonal and communication skills, and better than the unknown group on all but practice-based learning and improvement. The red group outperformed the unknown group on all but professionalism and outperformed the green group on medical knowledge.
CONCLUSIONS SJTs demonstrate promise for assessing important noncognitive attributes in residency applicants and align with national efforts to review candidates more holistically and minimize potential biases.
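The F values reported above come from one-way analysis of variance across the four SJT groups: the ratio of between-group to within-group mean squares. A minimal pure-Python sketch of that computation, using invented milestone ratings (not the study's data):

```python
def anova_f(groups):
    # One-way ANOVA F statistic: between-group mean square
    # divided by within-group mean square.
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    return ms_between / ms_within

# Hypothetical average milestone ratings by SJT group
green = [7.2, 7.5, 7.1]
yellow = [6.9, 7.0, 7.3]
red = [6.5, 6.8, 6.6]
unknown = [6.4, 6.7, 6.2]
f_stat = anova_f([green, yellow, red, unknown])
```

With k groups and n observations total, the statistic is referred to an F distribution with (k-1, n-k) degrees of freedom, which is where the reported F(3,189)-style pairs come from.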
4. Uner OE, Choi D, Hwang TS, Faridi A. Bias Reduction Practices in Underrepresented Groups in Ophthalmology Resident Recruitment. JAMA Ophthalmology 2024; 142:429-435. PMID: 38546576; PMCID: PMC10979357; DOI: 10.1001/jamaophthalmol.2024.0394.
Abstract
Importance Best recruitment practices for increasing diversity are well established, but the adoption and impact of these practices in ophthalmology residency recruitment are unknown. Objective To describe the adoption of bias reduction practices in recruitment of groups underrepresented in ophthalmology (URiO) and determine which practices are effective for increasing URiO residents. Design, Setting, and Participants This cross-sectional survey study used an 18-item questionnaire included in the online survey of the Association of University Professors in Ophthalmology (AUPO) Residency Program Directors. Data collection occurred from July 2022 to December 2022. The data were initially analyzed on January 16, 2023. Participants included residency program directors (PDs) in the AUPO PD listserv database. Main Outcomes and Measures Descriptive analysis of resident selection committee approaches, evaluation of applicant traits, and use of bias reduction tools. The primary outcome was diversity, assessed by the presence of at least 1 resident in the last 5 classes who identified as URiO, including those underrepresented in medicine (URiM); lesbian, gay, bisexual, transgender, queer, intersex, and asexual plus; or another disadvantaged background (eg, low socioeconomic status). Multivariate analyses of recruitment practices were conducted to determine which practices were associated with increased URiO and URiM. Results Among 106 PDs, 65 completed the survey (61.3%). Thirty-nine PDs used an interview rubric (60.0%), 28 used interview standardization (43.0%), 56 provided at least 1 bias reduction tool to their selection committee (86.2%), and 44 used postinterview metrics to assess diversity, equity, and inclusion efforts (67.7%). Application filters, interview standardization, and postinterview metrics were not associated with increased URiO.
Multivariate logistic regression analysis showed that larger residency class (odds ratio [OR], 1.34; 95% CI, 1.09-1.65; P = .01) and use of multiple selection committee bias reduction tools (OR, 1.47; 95% CI, 1.13-1.92; P = .01) were positively associated with increased URiO, whereas use of interview rubrics (OR, 0.72; 95% CI, 0.59-0.87; P = .001) and placing higher importance on applicant interest in a program (OR, 0.83; 95% CI, 0.75-0.92; P = .02) were negatively associated. URiM analyses showed similar associations. Conclusions and Relevance Ophthalmology residency interviews are variably standardized. In this study, providing multiple bias reduction tools to selection committees was associated with increased URiO and URiM residents. Prioritizing applicant interest in a program may reduce resident diversity. Interview rubrics, while intended to reduce bias, may inadvertently increase inequity.
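The odds ratios above are reported with 95% confidence intervals. For a simple 2x2 summary, an OR and its Wald interval on the log-odds scale can be computed directly; this sketch uses invented counts and omits the covariate adjustment the study's multivariate regression performs:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    # 2x2 table: a,b = outcome present/absent with the practice;
    # c,d = outcome present/absent without it.
    # Wald 95% CI computed on the log-odds scale.
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: programs with vs without multiple bias
# reduction tools, by whether they had at least one URiO resident
or_, lo, hi = odds_ratio_ci(30, 10, 15, 10)
```

An OR above 1 with a CI excluding 1 (as for bias reduction tools, OR 1.47, CI 1.13-1.92) indicates a positive association; an OR below 1 (as for interview rubrics, OR 0.72) indicates a negative one.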
Affiliation(s)
- Ogul E. Uner: Casey Eye Institute, Department of Ophthalmology, Oregon Health & Science University, Portland
- Dongseok Choi: Casey Eye Institute, Department of Ophthalmology, Oregon Health & Science University, Portland; Oregon Health & Science University-Portland State University School of Public Health, Portland
- Thomas S. Hwang: Casey Eye Institute, Department of Ophthalmology, Oregon Health & Science University, Portland
- Ambar Faridi: Casey Eye Institute, Department of Ophthalmology, Oregon Health & Science University, Portland; Veterans Affairs Portland Health Care System, Oregon
5. De Rosa P, Takacs EB, Wendt L, Tracy CR. Effect of Holistic Review, Interview Blinding, and Structured Questions in Resident Selection: Can We Predict Who Will Do Well in a Residency Interview? Urology 2023; 173:41-47. PMID: 36603653; DOI: 10.1016/j.urology.2022.11.047.
Abstract
OBJECTIVE To examine the urology residency application process, particularly the interview. Historically, the residency interview has been vulnerable to bias and has not been shown to predict future residency performance. Our goal was to determine the relationship between pre-interview (PI) metrics and post-interview ranking using best practices for urology resident selection, including holistic review, blinded interviews, and structured behaviorally anchored questions. METHODS Applications were assessed on cognitive attributes (Alpha Omega Alpha, class rank, junior year clinical clerkship grades) and non-cognitive attributes (letters of recommendation [LOR], personal statement [PS], demographics, research, personal characteristics) by reviewers blinded to USMLE scores and photograph. Interviewers were blinded to the application other than the PS and LORs. Interviews consisted of a structured behaviorally anchored interview (SBI) question and an unstructured interview (UI). Odds ratios were calculated comparing pre-interview and interview impressions. RESULTS Fifty-one applicants were included in the analysis. USMLE Step 1 score (average 245) was associated with Alpha Omega Alpha, class rank, junior year clinical clerkship grades, and PS. The UI score was associated with the LOR (P = .04), whereas SBI scores were not (P = .5). Faculty rank was associated with SBI, UI, and overall interview (OI) scores (P < .001). Faculty rank was also associated with LOR. Resident impressions of interviewees were associated with faculty interview scores (P = .001) and faculty rank (P < .001). CONCLUSION Traditional interviews may be biased toward application materials and may be balanced with behavioral questions. While Step 1 score does not offer additional information over other PI metrics, blinded interviews may offer discriminant validity over a PI rubric.
Affiliation(s)
- Paige De Rosa: Department of Urology, University of Iowa Hospitals & Clinics, Iowa City, Iowa
- Elizabeth B Takacs: Department of Urology, University of Iowa Hospitals & Clinics, Iowa City, Iowa
- Linder Wendt: Department of Statistics, University of Iowa, Iowa City, Iowa
- Chad R Tracy: Department of Urology, University of Iowa Hospitals & Clinics, Iowa City, Iowa
6. Dacre M, Branzetti J, Hopson LR, Regan L, Gisondi MA. Rejecting Reforms, Yet Calling for Change: A Qualitative Analysis of Proposed Reforms to the Residency Application Process. Academic Medicine 2023; 98:219-227. PMID: 36512846; DOI: 10.1097/acm.0000000000005100.
Abstract
PURPOSE Annual increases in the number of residency applications burden students and challenge programs. Several reforms to the application process have been proposed; however, stakeholder input is often overlooked. The authors examined key stakeholders' opinions about several proposed reforms to the residency application process and identified important factors to guide future reforms. METHOD Using semistructured interviews, the authors asked educational administrators and trainees to consider 5 commonly proposed reforms to the residency application process: a Match to obtain residency interviews, preference signaling, application limits, geographic preference disclosure, and abolishing the Match. The authors conducted a modified content analysis of interview transcripts using qualitative and quantitative analytic techniques. Frequency analysis regarding the acceptability of the 5 proposed reforms and thematic analysis of important factors to guide reform were performed. Fifteen-minute interviews were conducted between July and October 2019, with data analysis completed during a 6-month period in 2020 and 2021. RESULTS Participants included 30 stakeholders from 9 medical specialties and 15 institutions. Most participants wanted to keep the Match process intact; however, they noted several important flaws in the system that disadvantage students and warrant change. Participants did not broadly support any of the 5 proposed reforms. Two themes were identified: principles to guide reform (fairness, transparency, equity, reducing costs to students, reducing total applications, reducing work for program directors, and avoiding unintended consequences) and unpopular reform proposals (concern that application limits threaten less competitive students and that signaling adds bias to the system). CONCLUSIONS Key stakeholders in the residency application process believe the system has important flaws that demand reform. Despite this, the most commonly proposed reforms are unacceptable to these stakeholders because they threaten fairness to students and add to program workload. These findings call for a larger investigation of proposed reforms with a more nationally representative stakeholder cohort.
Affiliation(s)
- Michael Dacre: fourth-year medical student, Stanford School of Medicine, Stanford, California; ORCID: https://orcid.org/0000-0002-5561-1656
- Jeremy Branzetti: assistant professor, Department of Emergency Medicine, New York University Langone School of Medicine, New York, New York, at the time of this work; now affiliated with Geisinger Health System, Danville, Pennsylvania; ORCID: https://orcid.org/0000-0002-2397-0566
- Laura R Hopson: professor and associate chair for education, Department of Emergency Medicine, University of Michigan, Ann Arbor, Michigan; ORCID: https://orcid.org/0000-0002-1183-4751
- Linda Regan: associate professor and vice chair of education, Department of Emergency Medicine, Johns Hopkins Medical Institutes, Baltimore, Maryland; ORCID: https://orcid.org/0000-0003-0390-4243
- Michael A Gisondi: associate professor and vice chair of education, Department of Emergency Medicine, Stanford School of Medicine, Stanford, California; ORCID: https://orcid.org/0000-0002-6800-3932
7. Parker AS, Mwachiro MM, Kirui JR, Many HR, Mwachiro EB, Parker RK. A Semistructured Interview for Surgical Residency Targeting Nontechnical Skills. Journal of Surgical Education 2022; 79:e213-e219. PMID: 36030183; DOI: 10.1016/j.jsurg.2022.07.019.
Abstract
OBJECTIVE We review the development, implementation, and initial outcomes of a semistructured interview process to assess the nontechnical skills of surgical residency applicants. DESIGN In 2018, we restructured our residency selection interview process. Through semistructured faculty interviews, we sought to evaluate candidates along seven nontechnical skills (grit, ownership, rigor, teamwork, presence, impact, and organizational alignment). We plotted each candidate's scores on a radar plot for graphical representation and calculated the plot area for each candidate. We retrospectively evaluated 3 years of data, comparing the nontechnical skill scores of matriculants into the training program to those of nonmatriculants. SETTING Tenwek Hospital is a 361-bed tertiary teaching and referral hospital in rural western Kenya with a 5-year general surgery residency program. PARTICIPANTS Thirty-one applicants were interviewed over 3 years. Thirteen matriculated into the program. RESULTS Scores for grit (4.8 vs 3.9; p = 0.0004), impact (4.2 vs 3.5; p = 0.014), ownership (4.2 vs 3.6; p = 0.01), and organizational alignment (4.3 vs 3.8; p = 0.008) were significantly higher in matriculants. CONCLUSIONS This semistructured interview process provides a robust and beneficial mechanism for assessing applicants' nontechnical skills, which may allow for the matriculation of more well-rounded candidates into surgical residency and, ultimately, surgical practice.
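The "plot area" summary described in the design has a closed form: for k scores placed at equal angles around the origin of a radar plot, the shoelace formula collapses to 0.5·sin(2π/k) times the sum of products of adjacent radii. A sketch with hypothetical ratings (the authors' exact scaling is not specified, so this is an assumption about the construction):

```python
import math

def radar_area(scores):
    # Area of the polygon formed by plotting k scores at equal
    # angles around the origin. The shoelace formula simplifies to
    # 0.5 * sin(2*pi/k) * sum of products of adjacent radii.
    k = len(scores)
    wedge = math.sin(2 * math.pi / k)
    return 0.5 * wedge * sum(scores[i] * scores[(i + 1) % k] for i in range(k))

# Hypothetical candidate rated 1-5 on the seven nontechnical skills
scores = [4.8, 4.2, 3.9, 4.0, 4.5, 4.2, 4.3]
area = radar_area(scores)
```

One design caveat of this summary: because adjacent radii are multiplied, the area depends on the ordering of the skills around the plot, not just the scores themselves.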
Affiliation(s)
- Andrea S Parker: Department of Surgery, Tenwek Hospital, Bomet, Kenya; Department of Surgery, Alpert Medical School of Brown University, Providence, Rhode Island
- Heath R Many: Department of Surgery, Tenwek Hospital, Bomet, Kenya; Department of Surgery, University of Tennessee Medical Center, Knoxville, Tennessee
- Robert K Parker: Department of Surgery, Tenwek Hospital, Bomet, Kenya; Department of Surgery, Alpert Medical School of Brown University, Providence, Rhode Island
8. Lund S, D'Angelo JD, Baloul M, Yeh VJH, Stulak J, Rivera M. Simulation as Soothsayer: Simulated Surgical Skills MMIs During Residency Interviews Are Associated With First Year Residency Performance. Journal of Surgical Education 2022; 79:e235-e241. PMID: 35725725; DOI: 10.1016/j.jsurg.2022.06.002.
Abstract
OBJECTIVE The main consideration during residency recruitment is identifying applicants who will succeed during residency. However, few studies have identified applicant characteristics that are associated with competency development during residency, such as the Accreditation Council for Graduate Medical Education (ACGME) milestones. As mini multiple interviews (MMIs) can be used to assess various competencies, we aimed to determine whether simulated surgical skills MMI scores during a general surgery residency interview were associated with ACGME milestone ratings at the conclusion of intern year. DESIGN Retrospective cohort study. Interns' Step 1 and Step 2 clinical knowledge (CK) scores, interview day simulated surgical skills MMI overall scores, traditional faculty interview scores, average overall milestone ratings in the spring of residency, and intern American Board of Surgery In-Training Examination (ABSITE) percentile scores were gathered. Two multiple linear regressions were performed analyzing the association between Step 1, Step 2 CK, MMI, and traditional faculty interview scores with (1) average overall milestone rating and (2) ABSITE percentile score, controlling for categorical/preliminary intern classification. SETTING One academic medical center. PARTICIPANTS General surgery interns matriculating in 2020-2021. RESULTS Nineteen interns were included. Multiple linear regression revealed that a higher overall simulated surgical skills MMI score was associated with higher average milestone ratings (β = .45, p = 0.03) and a higher ABSITE score (β = .43, p = 0.02), while neither Step 1, Step 2 CK, nor faculty interview scores were significantly associated with average milestone ratings. CONCLUSIONS Surgical residency programs invest a tremendous amount of effort into training residents; thus, metrics for predicting which applicants will succeed are needed. Higher scores on simulated surgical skills MMIs are associated with higher milestone ratings 1 year into residency and higher intern ABSITE percentiles. These results highlight simulated surgical skills MMIs as an additional metric that may help select residents who will have early success in residency.
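The analysis above fits a multiple linear regression with several predictors at once. As an illustration of the underlying computation (not the study's code), ordinary least squares coefficients can be obtained from the normal equations X'X b = X'y; the rows and outcomes below are invented:

```python
def ols(X, y):
    # Ordinary least squares via the normal equations X'X b = X'y,
    # solved by Gaussian elimination with partial pivoting.
    # Each row of X starts with a 1 for the intercept.
    p = len(X[0])
    xtx = [[sum(r[i] * r[j] for r in X) for j in range(p)] for i in range(p)]
    xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(p)]
    A = [row[:] + [rhs] for row, rhs in zip(xtx, xty)]
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p + 1):
                A[r][c] -= f * A[col][c]
    beta = [0.0] * p
    for r in range(p - 1, -1, -1):
        beta[r] = (A[r][p] - sum(A[r][c] * beta[c] for c in range(r + 1, p))) / A[r][r]
    return beta

# Invented rows: [intercept, MMI score, Step 1 score]; outcome = milestone rating
X = [[1, 3.0, 230], [1, 4.0, 245], [1, 4.5, 240], [1, 2.5, 250], [1, 5.0, 235]]
y = [6.8, 7.4, 7.6, 6.5, 7.9]
beta = ols(X, y)
```

The standardized β coefficients the abstract reports correspond to fitting the same model after z-scoring each variable, which puts predictors on a common scale.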
Affiliation(s)
- Sarah Lund: Mayo Clinic Department of Surgery, Rochester, Minnesota
- Vicky J-H Yeh: Mayo Clinic Department of Surgery, Rochester, Minnesota
- John Stulak: Mayo Clinic Department of Cardiovascular Surgery, Rochester, Minnesota
- Mariela Rivera: Mayo Clinic Division of Trauma, Critical Care, and General Surgery, Rochester, Minnesota
9.
Abstract
PURPOSE OF REVIEW Objective measures of residency applicants do not correlate with success within residency. While industry and business utilize standardized interviews with blinding and structured questions, residency programs have yet to uniformly incorporate these techniques. This review focuses on an in-depth evaluation of these practices and how they impact interview formatting and resident selection. RECENT FINDINGS Structured interviews use standardized questions that are behaviorally or situationally anchored. This requires careful creation of a scoring rubric and interviewer training, ultimately leading to improved interrater agreement and reduced bias compared with traditional interviews. Blinded interviews further reduce biases such as halo, horn, and affinity bias, as does the use of multiple interviewers, for example in the multiple mini-interview format, which also contributes to increased diversity in programs. These structured formats can be adapted to virtual interviews as well. There is growing literature showing that structured interviews reduce bias, increase diversity, and recruit successful residents. Further research will be needed to measure the extent to which this method can be incorporated into residency interviews.
10. Lee SH, Phan PH, Desai SV. Evaluation of house staff candidates for program fit: a cohort-based controlled study. BMC Medical Education 2022; 22:754. PMID: 36320029; PMCID: PMC9628087; DOI: 10.1186/s12909-022-03801-0.
Abstract
BACKGROUND Medical school academic achievements do not necessarily predict house staff job performance. This study explores a selection mechanism that improves house staff-program fit and thereby enhances Accreditation Council for Graduate Medical Education (ACGME) Milestones performance ratings. OBJECTIVE Traditionally, house staff were selected primarily on medical school academic performance. To improve residency performance outcomes, the Program designed a theory-driven selection tool to assess house staff candidates on the fit of their personal values and goals with Program values and goals. It was hypothesized that cohort performance ratings would improve because of the intervention. METHODS Prospective quasi-experimental cohort design with data from two house staff cohorts at a university-based categorical Internal Medicine Residency Program. The intervention cohort, comprising 45 house staff from 2016 to 2017, was selected using a Behaviorally Anchored Rating Scales (BARS) tool for program fit. The control cohort, comprising 44 house staff from the prior year, was selected using medical school academic achievement scores. House staff performance was evaluated using ACGME Milestones indicators. The mean scores for each category were compared between the intervention and control cohorts using Student's t-tests with Bonferroni correction and Cohen's d for effect size. RESULTS The cohorts did not differ in academic performance scores at the time of Program entry. The intervention cohort outperformed the control cohort on all 6 dimensions of the Milestones by end-PGY1 and on 3 of 6 dimensions by mid-PGY3. CONCLUSION Selecting house staff based on compatibility with Residency Program values and objectives may yield higher job performance because trainees benefit more from a better fit with the training program.
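The effect-size and multiple-comparison machinery named in the methods is simple to state: Cohen's d divides the mean difference by the pooled standard deviation, and the Bonferroni correction judges each of m comparisons against α/m. An illustrative sketch with invented rating vectors (not the study's data):

```python
import math

def cohens_d(a, b):
    # Cohen's d: difference in means divided by the pooled sample SD.
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd

# Hypothetical average Milestone ratings, intervention vs control
intervention = [7.1, 7.4, 6.9, 7.6, 7.2]
control = [6.8, 7.0, 6.7, 7.1, 6.9]
d = cohens_d(intervention, control)

# Bonferroni: comparing 6 Milestone dimensions, each t-test is
# judged against alpha / 6 rather than alpha.
alpha, m = 0.05, 6
corrected_threshold = alpha / m
```

By convention, d around 0.2 is a small effect, 0.5 medium, and 0.8 large.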
Affiliation(s)
- Soo-Hoon Lee: Strome College of Business, Old Dominion University, Norfolk, VA, USA
- Phillip H Phan: Johns Hopkins Carey Business School, Johns Hopkins University, Baltimore, MD, USA; Johns Hopkins School of Medicine, Johns Hopkins University, Baltimore, MD, USA
- Sanjay V Desai: Johns Hopkins School of Medicine, Johns Hopkins University, Baltimore, MD, USA
11. Mokhtech M, Jagsi R, Vega RM, Brown DW, Golden DW, Juang T, Mattes MD, Pinnix CC, Evans SB. Mitigating Bias in Recruitment: Attracting a Diverse, Dynamic Workforce to Sustain the Future of Radiation Oncology. Advances in Radiation Oncology 2022; 7:100977. PMID: 36060636; PMCID: PMC9436705; DOI: 10.1016/j.adro.2022.100977.
12. Lund S, D'Angelo J, D'Angelo AL, Heller S, Stulak J, Rivera M. New Heuristics to Stratify Applicants: Predictors of General Surgery Residency Applicant Step 1 Scores. Journal of Surgical Education 2022; 79:349-354. PMID: 34776371; DOI: 10.1016/j.jsurg.2021.10.007.
Abstract
OBJECTIVE In 2022, United States Medical Licensing Examination (USMLE) Step 1 scores will become pass/fail. This may be problematic, as residency programs rely heavily on USMLE Step 1 scores as a metric when determining interview invitations. This study aimed to assess candidate application metrics associated with USMLE Step 1 scores to offer programs new cues for stratifying applicants. DESIGN Retrospective cohort study analyzing interviewed applicants to one general surgery residency program in 2019 and 2020. Applicant data analyzed included USMLE Step 1 scores, number of publications, clerkship scores, letter of recommendation scores (out of 2, in 0.25-point increments), interview overall scores (out of 5, in integer increments), and standardized question scores (out of 10). Each year, applicants' answers to one standardized behavioral question during their interview were scored by interviewers. SETTING Tertiary medical center, academic general surgery residency program. PARTICIPANTS Interviewed applicants at one general surgery residency program whose applications contained complete demographic information (203 out of 247). RESULTS Multiple linear regression revealed that higher surgical clerkship (β = 0.19, p = 0.006) and higher standardized interview question (β = 0.32, p < 0.001) scores were positively associated with applicant USMLE Step 1 score (F(7,195) = 6.61, p < 0.001, R² = 0.19). Letter of recommendation score, number of peer-reviewed publications, gender, race, and applicant type (preliminary/categorical) were not associated with USMLE Step 1 scores. CONCLUSIONS With USMLE Step 1 scores transitioning to pass/fail, surgical residency programs need new selection heuristics. Surgery clerkship scores and standardized behavioral questions answered by applicants prior to the interview could provide a holistic view of applicants and help programs better stratify candidates without USMLE Step 1 scores.
Affiliation(s)
- Sarah Lund: Mayo Clinic Department of Surgery, Rochester, Minnesota
- Stephanie Heller: Mayo Clinic Division of Trauma, Critical Care, and General Surgery, Rochester, Minnesota
- John Stulak: Mayo Clinic Department of Cardiovascular Surgery, Rochester, Minnesota
- Mariela Rivera: Mayo Clinic Division of Trauma, Critical Care, and General Surgery, Rochester, Minnesota
13
|
Evaluating the Whole Applicant: Use of Situational Judgment Testing and Personality Testing to Address Disparities in Resident Selection. Curr Urol Rep 2022; 23:309-318. [PMID: 36255650 PMCID: PMC9579621 DOI: 10.1007/s11934-022-01115-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/18/2022] [Indexed: 12/14/2022]
Abstract
PURPOSE OF REVIEW Urology program directors are faced with increasing numbers of applications annually, making holistic review of each candidate progressively more difficult. Efforts to streamline evaluation using traditional cognitive metrics have fallen short, as these do not predict overall resident performance. Situational judgment tests (SJTs) and personality assessment tools (PATs) have been used in business and industry for decades to evaluate candidates and measure non-cognitive attributes that better predict subsequent performance. The purpose of this review is to describe what these assessments are and the current literature on the use of these metrics in medical education. RECENT FINDINGS SJTs have more original research behind them than PATs. Data suggest that SJTs decrease bias, increase diversity, and may be predictive of performance in residency. Emerging data also support the use of PATs, which can assess fit to a program, identify certain traits found more consistently among high-performing residents, and correlate with performance on ACGME Milestones. PATs may be more coachable than SJTs. SJTs and PATs are emerging as techniques to supplement the current resident application review process. Early evidence supports their use in undergraduate medical education, as do some preliminary results in graduate medical education.
14
Al-Sheikh M, Albaker W, Ayub M. Do mock medical licensure exams improve performance of graduates? Experience from a Saudi medical college. SAUDI JOURNAL OF MEDICINE AND MEDICAL SCIENCES 2022; 10:157-161. [PMID: 35602392 PMCID: PMC9121705 DOI: 10.4103/sjmms.sjmms_173_21] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2021] [Revised: 06/10/2021] [Accepted: 04/04/2022] [Indexed: 11/04/2022] Open
15
Obeso V, Grbic D, Emery M, Parekh K, Phillipi C, Swails J, Jayas A, Andriole DA. Core Entrustable Professional Activities (EPAs) and the Transition from Medical School to Residency: the Postgraduate Year One Resident Perspective. MEDICAL SCIENCE EDUCATOR 2021; 31:1813-1822. [PMID: 34956699 PMCID: PMC8651854 DOI: 10.1007/s40670-021-01370-3] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 08/14/2021] [Indexed: 06/14/2023]
Abstract
INTRODUCTION The Association of American Medical Colleges (AAMC) proposed thirteen core Entrustable Professional Activities (EPAs) that all graduates should be able to perform under indirect supervision upon entering residency. As an underlying premise is that graduates ready to do so will be better prepared to transition to the responsibilities of residency, we explored the relationship between postgraduate year (PGY)-1 residents' self-assessed preparedness to perform core EPAs under indirect supervision at the start of residency with their ease of transition to residency. METHODS Using response data to a questionnaire administered in September 2019 to PGY-1 residents who graduated from AAMC core EPA pilot schools, we examined between-group differences and independent associations for each of PGY-1 position type, specialty, and "EPA-preparedness" score (proportion of EPAs the resident reported as prepared to perform under indirect supervision at the start of residency) and ease of transition to residency (from 1 = much harder to 5 = much easier than expected). RESULTS Of 274 questionnaire respondents (19% of 1438 graduates), 241 (88% of 274) had entered PGY-1 training and completed all questionnaire items of interest. EPA-preparedness score (mean 0.71 [standard deviation 0.26]) correlated with ease of transition (3.1 [0.9]; correlation = .291, p < .001). In linear regression controlling for specialty (among other variables), EPA-preparedness score (β-coefficient 1.08; 95% confidence interval .64-1.52; p < .001) predicted ease of transition to residency. CONCLUSION Graduates who felt prepared to perform many of the core EPAs under indirect supervision at the start of PGY-1 training reported an easier-than-expected transition to residency. SUPPLEMENTARY INFORMATION The online version contains supplementary material available at 10.1007/s40670-021-01370-3.
Affiliation(s)
- Vivian Obeso
- Florida International University Herbert Wertheim College of Medicine, Miami, FL USA
- Douglas Grbic
- Association of American Medical Colleges, 655 K Street, N.W., Washington, DC 20001-2399 USA
- Matthew Emery
- Michigan State University College of Human Medicine, Grand Rapids, MI USA
- Kendra Parekh
- Vanderbilt University School of Medicine, Nashville, TN USA
- Carrie Phillipi
- Oregon Health & Science University School of Medicine, Portland, OR USA
- Jennifer Swails
- McGovern Medical School at the University of Texas Health Science Center, Houston, TX USA
- Amy Jayas
- Association of American Medical Colleges, 655 K Street, N.W., Washington, DC 20001-2399 USA
- Dorothy A. Andriole
- Association of American Medical Colleges, 655 K Street, N.W., Washington, DC 20001-2399 USA

16
Bethel EC, Marchetti KA, Hecklinski TM, Daignault-Newton S, Kraft KH, Hamilton BD, Faerber GJ, Ambani SN. The LEGO™ Exercise: An Assessment of Core Competencies in Urology Residency Interviews. JOURNAL OF SURGICAL EDUCATION 2021; 78:2063-2069. [PMID: 34172410 DOI: 10.1016/j.jsurg.2021.05.011] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/18/2021] [Revised: 05/01/2021] [Accepted: 05/28/2021] [Indexed: 06/13/2023]
Abstract
BACKGROUND In competitive residency specialties such as urology, it has become increasingly challenging to differentiate similarly qualified applicants. Residency interviews are utilized to rank applicants, yet they are often biased and do not explicitly address ACGME core competencies. OBJECTIVE We hypothesized that a team-based LEGO™-building exercise in the urology residency interview assesses core competencies. DESIGN From 2014-2017, students interviewing for urology residency at two institutions participated in a LEGO™ building activity. Applicants were assigned the role of "architect" (describing how to construct a structure) or "builder" (assembling the same structure from its pieces), using only verbal cues. Participants were graded using a rubric assessing competencies of interpersonal communication, problem-based learning, professionalism, and manual dexterity (an indicator of procedural skill). The minimum total score was 16 and the maximum was 80. SETTING The study took place at two tertiary referral centers: University of Michigan Medical School in Ann Arbor, MI, and University of Utah School of Medicine in Salt Lake City, UT. PARTICIPANTS A total of 176 applicants participated, comprising those interviewing for urology residency at the two institutions during the study timeframe. RESULTS Architects and builders both reached the maximum score of 80, with minimum scores of 34 and 32, respectively. Both distributions were right-shifted, with mean scores of 64.3 and 65.9 and median scores of 69 and 65.5. Successful pairs excelled through consistent nomenclature and clear directionality. Ineffective pairs miscommunicated with false affirmations, inconsistent nomenclature, and lack of patience. CONCLUSIONS The LEGO™ exercise allowed for standardized assessment of applicants based on ACGME core competencies. The rubric identified poor performers who did not rise to the challenge of a team-based task.
Affiliation(s)
- Emma C Bethel
- University of Michigan Medical School, Ann Arbor, Michigan.
- Kathryn A Marchetti
- Department of Urology, University of Michigan Medical School, Ann Arbor, Michigan
- Kate H Kraft
- Department of Urology, University of Michigan Medical School, Ann Arbor, Michigan
- Blake D Hamilton
- Department of Urology, University of Utah School of Medicine, Salt Lake City, Utah
- Gary J Faerber
- Division of Urology, Duke University School of Medicine, Durham, North Carolina
- Sapan N Ambani
- Department of Urology, University of Michigan Medical School, Ann Arbor, Michigan

17
Top Three Learning Platforms for Orthopaedic In-Training Knowledge Produce Different Results. J Am Acad Orthop Surg Glob Res Rev 2021. [PMCID: PMC8337059 DOI: 10.5435/jaaosglobal-d-21-00148] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
18
Rahil A, Hamamyh T, Al-Mohammed A, Kamel A, Abubeker I, Abu-Raddad L, Dargham S, Suliman S, Al Mohanadi D, Al Khal A. Do the selection criteria of internal medicine residency program predict resident performance? Qatar Med J 2021; 2021:20. [PMID: 34189112 PMCID: PMC8216212 DOI: 10.5339/qmj.2021.20] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2020] [Accepted: 03/18/2021] [Indexed: 01/07/2023] Open
Abstract
BACKGROUND A well-performing physician reflects the success of a residency program in selecting the best candidates for training. This study aimed to evaluate the selection criteria, mainly United States Medical Licensing Examination (USMLE) Step 2 Clinical Knowledge (CK) results and applicants' status as international or locally trained, used by the medical education department and the internal medicine residency program at Hamad Medical Corporation in Qatar to predict residents' performance during training. METHODS A retrospective chart review was performed for three cohorts of graduates who started residency training in 2011, 2012, and 2013. Each group completed 4 years of training. The USMLE Step 2 CK status of the applicant, in-training exam (ITE) scores, formative evaluation scores, Arab Board written and clinical exam pass rates, and other indicators were analyzed. Statistical analysis included chi-square tests and independent t-tests to identify associations. Multivariable analyses were conducted using logistic and linear regressions to test for adjusted associations. RESULTS The study included 118 internal medicine residents (81 international/37 locally trained applicants). The ITE score correlated positively with the USMLE Step 2 CK score over the 4 years of training (r = 0.621, 0.587, 0.576, and 0.571; p < 0.001) and among international compared with locally trained applicants (p < 0.001). The rate of passing parts 1 and 2 of the Arab Board written exam was higher in international than in local applicants, whereas the clinical Arab Board exam and formative evaluations were not associated with any criteria. CONCLUSIONS A higher USMLE Step 2 CK score correlated with better performance on the ITE but not with other performance indicators, whereas international applicants did better on both the ITE and the Arab Board written exam than local applicants. These variables may provide reasonable predictors of well-performing physicians.
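The r values reported above are Pearson correlation coefficients. A minimal pure-Python sketch of the computation, using made-up score pairs rather than the study's data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paired scores (not the study's data): USMLE Step 2 CK score
# and in-training exam (ITE) percent correct for six residents.
step2_ck = [215, 230, 238, 245, 250, 260]
ite_pct = [55, 60, 63, 68, 72, 75]
r = pearson_r(step2_ck, ite_pct)  # positive r means the two scores rise together
```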
Affiliation(s)
- Ali Rahil
- Hamad General Hospital, Doha, Qatar
- Laith Abu-Raddad
- Biomathematics Research Core, Weill Cornell Medical College, Qatar
- Soha Dargham
- Biomathematics Research Core, Weill Cornell Medical College, Qatar

19
Ware AD, Flax LW, White MJ. Strategies to Enhance Diversity, Equity, and Inclusion in Pathology Training Programs: A Comprehensive Review of the Literature. Arch Pathol Lab Med 2021; 145:1071-1080. [PMID: 34015822 DOI: 10.5858/arpa.2020-0595-ra] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 02/03/2021] [Indexed: 11/06/2022]
Abstract
CONTEXT.— Like many medical specialties, pathology faces the ongoing challenge of effectively enriching diversity, equity, and inclusion within training programs and the field as a whole. This issue is furthered by a decline in US medical student interest in the field of pathology, possibly attributable to increasingly limited pathology exposure during medical school and medical student perceptions about careers in pathology. OBJECTIVE.— To review the literature to identify the challenges to diversity, equity, and inclusion in pathology, with an emphasis on the pathology trainee pipeline. To evaluate the medical education literature from other medical specialties for diversity and inclusion-focused studies and initiatives, and determine the outcomes and/or approaches relevant for pathology training programs. DATA SOURCES.— A literature review was completed by a search of the PubMed database, as well as a similar general Google search. Additional resources, including the Web sites of the Association of American Medical Colleges, the Electronic Residency Application Service, and the National Resident Matching Program, were used. CONCLUSIONS.— Many strategies exist to increase diversity and encourage an inclusive and equitable training environment, and many of these strategies may be applied to the field of pathology. Interventions such as increasing exposure to the field, using a holistic application review process, and addressing implicit biases have been shown to promote diversity, equity, and inclusion in many medical specialties. In addition, increasing access to elective and pipeline programs may help to bolster medical student interest in careers in pathology.
Affiliation(s)
- Alisha D Ware
- From the Department of Pathology, Johns Hopkins University School of Medicine, Baltimore, Maryland (Ware, White)
- Marissa J White
- From the Department of Pathology, Johns Hopkins University School of Medicine, Baltimore, Maryland (Ware, White)

20
Horita S, Park YS, Son D, Eto M. Computer-based test (CBT) and OSCE scores predict residency matching and National Board assessment results in Japan. BMC MEDICAL EDUCATION 2021; 21:85. [PMID: 33531010 PMCID: PMC7856777 DOI: 10.1186/s12909-021-02520-2] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/10/2020] [Accepted: 01/18/2021] [Indexed: 06/12/2023]
Abstract
CONTEXT The Japan Residency Matching Program (JRMP) launched in 2003 and is now a significant event for graduating medical students and postgraduate residency hospitals. The environment surrounding the JRMP changed due to Japanese health policy, resulting in an increase in the number of unsuccessfully matched students. Beyond policy issues, we suspected there were also common characteristics among students who do not match with residency hospitals. METHODS In total, 237 of 321 graduates of The University of Tokyo Faculty of Medicine from 2018 to 2020 participated in the study. The students answered a questionnaire and gave written consent for the use of their personal information, including JRMP placement, scores on the pre-clinical clerkship (CC) Objective Structured Clinical Examination (OSCE), the Computer-Based Test (CBT), the National Board Examination (NBE), and domestic scores. The collected data were statistically analyzed. RESULTS JRMP placements correlated with some of the pre-CC OSCE factors/stations and/or total scores/global scores. Above all, the result of the neurological examination station showed the most significant correlation with JRMP placement. In contrast, the CBT result had no correlation with JRMP results, although CBT results correlated significantly with NBE results. CONCLUSIONS Our data suggest that the pre-clinical clerkship OSCE score and the CBT score, both obtained before the clinical clerkship, predict important outcomes including the JRMP and the NBE. These results also suggest that educational resources should be concentrated on students who score poorly on the pre-clinical clerkship OSCE and the CBT, to help them avoid failure in the JRMP and the NBE.
Affiliation(s)
- Shoko Horita
- Office for Clinical Practice and Medical Education, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan.
- Yoon-Soo Park
- Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Daisuke Son
- International Research Center for Medical Education, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Department of Community-based Family Medicine, School of Medicine, Tottori University Faculty of Medicine, Yonago, Japan
- Masato Eto
- International Research Center for Medical Education, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan

21
Burk-Rafel J, Standiford TC. A Novel Ticket System for Capping Residency Interview Numbers: Reimagining Interviews in the COVID-19 Era. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2021; 96:50-55. [PMID: 32910007 DOI: 10.1097/acm.0000000000003745] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
The 2019 novel coronavirus (COVID-19) pandemic has led to dramatic changes in the 2020 residency application cycle, including halting away rotations and delaying the application timeline. These stressors are laid on top of a resident selection process already under duress with exploding application and interview numbers-the latter likely to be exacerbated with the widespread shift to virtual interviewing. Leveraging their trainee perspective, the authors propose enforcing a cap on the number of interviews that applicants may attend through a novel interview ticket system (ITS). Specialties electing to participate in the ITS would select an evidence-based, specialty-specific interview cap. Applicants would then receive unique electronic tickets-equal in number to the cap-that would be given to participating programs at the time of an interview, when the tickets would be marked as used. The system would be self-enforcing and would ensure each interview represents genuine interest between applicant and program, while potentially increasing the number of interviews-and thus match rate-for less competitive applicants. Limitations of the ITS and alternative approaches for interview capping, including an honor code system, are also discussed. Finally, in the context of capped interview numbers, the authors emphasize the need for transparent preinterview data from programs to inform applicants and their advisors on which interviews to attend, learning from prior experiences and studies on virtual interviewing, adherence to best practices for interviewing, and careful consideration of how virtual interviews may shift inequities in the resident selection process.
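The ticket mechanism described above can be sketched as a small data structure: each applicant receives a fixed number of electronic tickets equal to the specialty's cap and spends one per interview. This is an illustrative model only; the class and method names are hypothetical, not part of the authors' proposal:

```python
class InterviewTicketSystem:
    """Minimal sketch of the proposed interview ticket system (ITS):
    applicants receive tickets equal to the specialty-specific cap and
    surrender one at each interview, where it is marked as used."""

    def __init__(self, cap):
        self.cap = cap
        self.tickets = {}  # applicant id -> count of unused tickets

    def register_applicant(self, applicant_id):
        self.tickets[applicant_id] = self.cap

    def attend_interview(self, applicant_id, program):
        """Spend one ticket at interview time; refuse once the cap is reached."""
        if self.tickets.get(applicant_id, 0) == 0:
            raise ValueError("interview cap reached: no unused tickets")
        self.tickets[applicant_id] -= 1
        return {"applicant": applicant_id, "program": program, "ticket_used": True}

its = InterviewTicketSystem(cap=2)
its.register_applicant("A1")
its.attend_interview("A1", "Program X")
its.attend_interview("A1", "Program Y")
# a third interview would raise ValueError: the cap is self-enforcing
```

Because a spent ticket cannot be reused, the cap is enforced by construction rather than by after-the-fact auditing, which matches the "self-enforcing" property the authors describe.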
Affiliation(s)
- Jesse Burk-Rafel
- J. Burk-Rafel is assistant professor of internal medicine and assistant director of UME-GME innovation, Institute for Innovations in Medical Education, NYU Grossman School of Medicine, New York, New York. At the time this article was written, he was an internal medicine resident, NYU Langone Health, New York, New York
- Taylor C Standiford
- T.C. Standiford is a fourth-year medical student, University of Michigan Medical School, Ann Arbor, Michigan

22
Lucey CR, Hauer KE, Boatright D, Fernandez A. Medical Education's Wicked Problem: Achieving Equity in Assessment for Medical Learners. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2020; 95:S98-S108. [PMID: 32889943 DOI: 10.1097/acm.0000000000003717] [Citation(s) in RCA: 66] [Impact Index Per Article: 16.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Despite a lack of intent to discriminate, physicians educated in U.S. medical schools and residency programs often take actions that systematically disadvantage minority patients. The approach to assessment of learner performance in medical education can similarly disadvantage minority learners. The adoption of holistic admissions strategies to increase the diversity of medical training programs has not been accompanied by increases in diversity in honor societies, selective residency programs, medical specialties, and medical school faculty. These observations prompt justified concerns about structural and interpersonal bias in assessment. This manuscript characterizes equity in assessment as a "wicked problem" with inherent conflicts, uncertainty, dynamic tensions, and susceptibility to contextual influences. The authors review the underlying individual and structural causes of inequity in assessment. Using an organizational model, they propose strategies to achieve equity in assessment and drive institutional and systemic improvement based on clearly articulated principles. This model addresses the culture, systems, and assessment tools necessary to achieve equitable results that reflect stated principles. Three components of equity in assessment that can be measured and evaluated to confirm success include intrinsic equity (selection and design of assessment tools), contextual equity (the learning environment in which assessment occurs), and instrumental equity (uses of assessment data for learner advancement and selection and program evaluation). A research agenda to address these challenges and controversies and demonstrate reduction in bias and discrimination in medical education is presented.
Affiliation(s)
- Catherine R Lucey
- C.R. Lucey is executive vice dean/vice dean for education and professor of medicine, University of California, San Francisco, School of Medicine, San Francisco, California
- Karen E Hauer
- K.E. Hauer is professor of medicine, University of California, San Francisco, School of Medicine, San Francisco, California
- Dowin Boatright
- D. Boatright is assistant professor of emergency medicine, Yale University School of Medicine, New Haven, Connecticut
- Alicia Fernandez
- A. Fernandez is professor of medicine, University of California, San Francisco, School of Medicine, San Francisco, California

23
Rashid H, Coppola KM, Lebeau R. Three Decades Later: A Scoping Review of the Literature Related to the United States Medical Licensing Examination. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2020; 95:S114-S121. [PMID: 33105189 DOI: 10.1097/acm.0000000000003639] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/28/2023]
Abstract
PURPOSE To conduct a scoping review of the timing, scope, and purpose of literature related to the United States Medical Licensing Examination (USMLE) given the recent impetus to revise USMLE scoring. METHOD The authors searched PubMed, PsycInfo, and ERIC for relevant articles published from 1990 to 2019. Articles selected for review were labeled as research or commentaries and coded by USMLE Step level, sample characteristics (e.g., year(s), single/multiple institutions), how scores were used (e.g., predictor/outcome/descriptor), and purpose (e.g., clarification/justification/description). RESULTS Of the 741 articles meeting inclusion criteria, 636 were research and 105 were commentaries. Publication totals in the past 5 years exceeded those of the first 20 years. Step 1 was the sole focus of 38%, and included in 84%, of all publications. Approximately half of all research articles used scores as a predictor or outcome measure related to other curricular/assessment efforts, with a marked increase in the use of scores as predictors in the past 10 years. The overwhelming majority of studies were classified as descriptive in purpose. CONCLUSIONS Nearly 30 years after the inception of the USMLE, aspirations for its predictive utility are rising faster than evidence supporting the manner in which the scores are used. A closer look is warranted to systematically review and analyze the contexts and purposes for which USMLE scores can productively be used. Future research should explore cognitive and noncognitive factors that can be used in conjunction with constrained use of USMLE results to inform evaluation of medical students and schools and to support the residency selection process.
Affiliation(s)
- Hanin Rashid
- H. Rashid is associate director, Office for Advancing Learning, Teaching, and Assessment, and assistant professor, Cognitive Skills Program, Rutgers Robert Wood Johnson Medical School, Piscataway, New Jersey
- Kristen M Coppola
- K.M. Coppola is assistant professor, Cognitive Skills Program, Rutgers Robert Wood Johnson Medical School, Piscataway, New Jersey
- Robert Lebeau
- R. Lebeau is director, Office for Advancing Learning, Teaching, and Assessment, and Cognitive Skills Program, Rutgers Robert Wood Johnson Medical School, Piscataway, New Jersey

24
Cullen MJ, Zhang C, Marcus-Blank B, Braman JP, Tiryaki E, Konia M, Hunt MA, Lee MS, Van Heest A, Englander R, Sackett PR, Andrews JS. Improving Our Ability to Predict Resident Applicant Performance: Validity Evidence for a Situational Judgment Test. TEACHING AND LEARNING IN MEDICINE 2020; 32:508-521. [PMID: 32427496 DOI: 10.1080/10401334.2020.1760104] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Construct: We investigated whether a situational judgment test (SJT) designed to measure professionalism in physicians predicts residents' performance on (a) Accreditation Council for Graduate Medical Education (ACGME) competencies and (b) a multisource professionalism assessment (MPA). Background: There is a consensus regarding the importance of assessing professionalism and interpersonal and communication skills in medical students, residents, and practicing physicians. Nonetheless, these noncognitive competencies are not well measured during medical education selection processes. One promising method for measuring these noncognitive competencies is the SJT. In a typical SJT, respondents are presented with written or video-based scenarios and asked to make choices from a set of alternative courses of action. Interpersonally oriented SJTs are commonly used for selection to medical schools in the United Kingdom and Belgium and for postgraduate selection of trainees to medical practice in Belgium, Singapore, Canada, and Australia. However, despite international evidence suggesting that SJTs are useful predictors of in-training performance, end-of-training performance, supervisory ratings of performance, and clinical skills licensing objective structured clinical examinations, the use of interpersonally oriented SJTs in residency settings in the United States has been infrequently investigated. The purpose of this study was to investigate whether residents' performance on an SJT designed to measure professionalism-related competencies-conscientiousness, integrity, accountability, aspiring to excellence, teamwork, stress tolerance, and patient-centered care-predicts both their current and future performance as residents on two important but conceptually distinct criteria: ACGME competencies and the MPA. Approach: We developed an SJT to measure seven dimensions of professionalism. 
During calendar year 2017, 21 residency programs from 2 institutions administered the SJT. We conducted analyses to determine the validity of SJT and USMLE scores in predicting milestone performance in ACGME core competency domains and the MPA in June 2017 and 3 months later in September 2017 for the MPA and 1 year later, in June 2018, for ACGME domains. Results: At both periods, the SJT score predicted overall ACGME milestone performance (r = .13 and .17, respectively; p < .05) and MPA performance (r = .19 and .21, respectively; p < .05). In addition, the SJT predicted ACGME patient care, systems-based practice, practice-based learning and improvement, interpersonal and communication skills, and professionalism competencies (r = .16, .15, .15, .17, and .16, respectively; p < .05) 1 year later. The SJT score contributed incremental validity over USMLE scores in predicting overall ACGME milestone performance (ΔR = .07) 1 year later and MPA performance (ΔR = .05) 3 months later. Conclusions: SJTs show promise as a method for assessing noncognitive attributes in residency program applicants. The SJT's incremental validity to the USMLE series in this study underscores the importance of moving beyond these standardized tests to a more holistic review of candidates that includes both cognitive and noncognitive measures.
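Incremental validity of the kind reported above (ΔR of the SJT over USMLE scores) can be illustrated with the standard two-predictor formula for the multiple correlation. The correlation values below are hypothetical, chosen only to show the arithmetic, not taken from the study:

```python
import math

def multiple_R(r_y1, r_y2, r_12):
    """Multiple correlation R of an outcome with two predictors,
    computed from the three pairwise Pearson correlations."""
    R2 = (r_y1 ** 2 + r_y2 ** 2 - 2.0 * r_y1 * r_y2 * r_12) / (1.0 - r_12 ** 2)
    return math.sqrt(R2)

# Hypothetical correlations (not the study's data): outcome with USMLE,
# outcome with the SJT, and USMLE with the SJT.
r_usmle, r_sjt, r_between = 0.30, 0.17, 0.10
R_both = multiple_R(r_usmle, r_sjt, r_between)
delta_R = R_both - r_usmle  # increment contributed by the SJT over USMLE alone
```

A weakly intercorrelated second predictor adds validity even when its own zero-order correlation is modest, which is why an SJT can be incrementally useful alongside USMLE scores.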
Affiliation(s)
- Michael J Cullen
- Department of Graduate Medical Education, University of Minnesota Medical School, Minneapolis, Minnesota, USA
- Charlene Zhang
- Department of Psychology, University of Minnesota, Minnesota, USA
- Jonathan P Braman
- Department of Orthopedic Surgery, University of Minnesota Medical School, Minneapolis, Minnesota, USA
- Ezgi Tiryaki
- Department of Neurology, University of Minnesota Medical School, Minneapolis, Minnesota, USA
- Mojca Konia
- Department of Anesthesiology, University of Minnesota Medical School, Minneapolis, Minnesota, USA
- Matthew A Hunt
- Department of Neurosurgery, University of Minnesota Medical School, Minneapolis, Minnesota, USA
- Michael S Lee
- Departments of Ophthalmology and Visual Neurosciences, Neurology and Neurosurgery, University of Minnesota Medical School, Minneapolis, Minnesota, USA
- Ann Van Heest
- Department of Orthopedic Surgery, University of Minnesota Medical School, Minneapolis, Minnesota, USA
- Robert Englander
- Department of Pediatrics, University of Minnesota Medical School, Minneapolis, Minnesota, USA
- Paul R Sackett
- Department of Psychology, University of Minnesota, Minnesota, USA
- John S Andrews
- GME Innovations, American Medical Association, Chicago, Illinois, USA

25
Application Factors Associated With Clinical Performance During Pediatric Internship. Acad Pediatr 2020; 20:1007-1012. [PMID: 32268217 DOI: 10.1016/j.acap.2020.03.010] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/07/2019] [Revised: 03/25/2020] [Accepted: 03/26/2020] [Indexed: 11/22/2022]
Abstract
OBJECTIVE Our goal was to identify aspects of residency applications predictive of subsequent performance during pediatric internship. METHODS We conducted a retrospective cohort study of graduates of US medical schools who began pediatric internship in a large pediatric residency program in the summers of 2013 to 2017. The primary outcome was the weighted average of subjects' Accreditation Council for Graduate Medical Education pediatric Milestone scores at the end of pediatric internship. To determine factors independently associated with performance, we fit multivariate linear mixed-effects models controlling for match year and Milestone grading committee as random effects and the following application factors as fixed effects: letter of recommendation strength, clerkship grades, medical school reputation, master's or PhD degrees, gender, US Medical Licensing Examination Step 1 score, Alpha Omega Alpha membership, private medical school, and interview score. RESULTS Our study population included 195 interns. In multivariate analyses, the aspects of applications significantly associated with composite Milestone scores at the end of internship were letter of recommendation strength (estimate 0.09, 95% confidence interval [CI]: 0.04, 0.15), number of clerkship honors (est. 0.05, 95% CI: 0.01-0.09), medical school ranking (est. −0.04, 95% CI: −0.08 to −0.01), having a master's degree (est. 0.19, 95% CI: 0.03-0.36), and not having a PhD (est. 0.14, 95% CI: 0.02-0.26). Overall, the final model explained 18% of the variance in milestone scoring. CONCLUSIONS Letter of recommendation strength, clerkship grades, medical school ranking, and having obtained a master's degree were significantly associated with higher clinical performance during pediatric internship.
26
McDonald FS, Jurich D, Duhigg LM, Paniagua M, Chick D, Wells M, Williams A, Alguire P. Correlations Between the USMLE Step Examinations, American College of Physicians In-Training Examination, and ABIM Internal Medicine Certification Examination. Acad Med 2020; 95:1388-1395. [PMID: 32271224 DOI: 10.1097/acm.0000000000003382]
Abstract
PURPOSE To assess the correlations between United States Medical Licensing Examination (USMLE) performance, American College of Physicians Internal Medicine In-Training Examination (IM-ITE) performance, American Board of Internal Medicine Internal Medicine Certification Exam (IM-CE) performance, and other medical knowledge and demographic variables. METHOD The study included 9,676 postgraduate year (PGY)-1, 11,424 PGY-2, and 10,239 PGY-3 internal medicine (IM) residents from any Accreditation Council for Graduate Medical Education-accredited IM residency program who took the IM-ITE (2014 or 2015) and the IM-CE (2015-2018). USMLE scores, IM-ITE percent correct scores, and IM-CE scores were analyzed using multiple linear regression, and IM-CE pass/fail status was analyzed using multiple logistic regression, controlling for USMLE Step 1, Step 2 Clinical Knowledge, and Step 3 scores; averaged medical knowledge milestones; age at IM-ITE; gender; and medical school location (United States or Canada vs international). RESULTS All variables were significant predictors of passing the IM-CE with IM-ITE scores having the strongest association and USMLE Step scores being the next strongest predictors. Prediction curves for the probability of passing the IM-CE based solely on IM-ITE score for each PGY show that residents must score higher on the IM-ITE with each subsequent administration to maintain the same estimated probability of passing the IM-CE. CONCLUSIONS The findings from this study should support residents and program directors in their efforts to more precisely identify and evaluate knowledge gaps for both personal learning and program improvement. While no individual USMLE Step score was as strongly predictive of IM-CE score as IM-ITE score, the combined relative contribution of all 3 USMLE Step scores was of a magnitude similar to that of IM-ITE score.
Affiliation(s)
- Furman S McDonald
- F.S. McDonald is senior vice president for academic and medical affairs, American Board of Internal Medicine, Philadelphia, Pennsylvania, adjunct professor of medicine, Mayo Clinic College of Medicine and Science, Rochester, Minnesota, adjunct professor of medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, and clinical associate, J. Edwin Wood Clinic, Pennsylvania Hospital, Philadelphia, Pennsylvania; ORCID: https://orcid.org/0000-0001-7952-3776
- Daniel Jurich
- D. Jurich is senior psychometrician, National Board of Medical Examiners, Philadelphia, Pennsylvania; ORCID: https://orcid.org/0000-0002-1870-2436
- Lauren M Duhigg
- L.M. Duhigg is senior research associate, American Board of Internal Medicine, Philadelphia, Pennsylvania
- Miguel Paniagua
- M. Paniagua is medical advisor, National Board of Medical Examiners, and adjunct professor of medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania; ORCID: https://orcid.org/0000-0003-2307-4873
- Davoren Chick
- D. Chick is senior vice president of medical education, American College of Physicians, and adjunct professor of medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania; ORCID: http://orcid.org/0000-0003-4477-1272
- Margaret Wells
- M. Wells is director of assessment and education programs, American College of Physicians, Philadelphia, Pennsylvania
- Amber Williams
- A. Williams is manager, Relationship Development, National Board of Medical Examiners, Philadelphia, Pennsylvania
- Patrick Alguire
- P. Alguire is senior vice president emeritus, medical education, American College of Physicians, Philadelphia, Pennsylvania
27
Yang A, Gilani C, Saadat S, Murphy L, Toohey S, Boysen‐Osborn M. Which Applicant Factors Predict Success in Emergency Medicine Training Programs? A Scoping Review. AEM Educ Train 2020; 4:191-201. [PMID: 32704588 PMCID: PMC7369487 DOI: 10.1002/aet2.10411]
Abstract
BACKGROUND Program directors (PDs) in emergency medicine (EM) receive an abundance of applications for very few residency training spots. It is unclear which selection strategies will yield the most successful residents. Many authors have attempted to determine which items in an applicant's file predict future performance in EM. OBJECTIVES The purpose of this scoping review is to examine the breadth of evidence related to the predictive value of selection factors for performance in EM residency. METHODS The authors systematically searched four databases and websites for peer-reviewed and gray literature related to EM admissions published between 1992 and February 2019. Two reviewers screened titles and abstracts for articles that met the inclusion criteria, according to the scoping review study protocol. The authors included studies if they specifically examined selection factors and whether those factors predicted performance in EM residency training in the United States. RESULTS After screening 23,243 records, the authors selected 60 for full review. From these, the authors selected 15 published manuscripts, one unpublished manuscript, and 11 abstracts for inclusion in the review. These studies examined the United States Medical Licensing Examination (USMLE), Standardized Letters of Evaluation, Medical Student Performance Evaluation, medical school attended, clerkship grades, membership in honor societies, and other less common factors and their association with future EM residency training performance. CONCLUSIONS The USMLE was the most common factor studied. It unreliably predicts clinical performance, but more reliably predicts performance on licensing examinations. All other factors were less commonly studied and, similar to the USMLE, yielded mixed results.
Affiliation(s)
- Allen Yang
- Department of Emergency Medicine, University of California, Irvine, Irvine, CA
- Chris Gilani
- Department of Emergency Medicine, University of California, Irvine, Irvine, CA
- Soheil Saadat
- Department of Emergency Medicine, University of California, Irvine, Irvine, CA
- Linda Murphy
- Health Science Library, Orange, University of California, Irvine, CA
- Shannon Toohey
- Department of Emergency Medicine, University of California, Irvine, Irvine, CA
- Megan Boysen‐Osborn
- Department of Emergency Medicine, University of California, Irvine, Irvine, CA
- School of Medicine, University of California, Irvine, Irvine, CA
28
Hafferty FW, O'Brien BC, Tilburt JC. Beyond High-Stakes Testing: Learner Trust, Educational Commodification, and the Loss of Medical School Professionalism. Acad Med 2020; 95:833-837. [PMID: 32079955 DOI: 10.1097/acm.0000000000003193]
Abstract
With ever-growing emphasis on high-stakes testing in medical education, such as the Medical College Admission Test and the United States Medical Licensing Examination Step 1, there has been a recent surge of concern about the rise of a "Step 1 climate" within U.S. medical schools. The authors propose an alternative source of the "climate problem" in current institutions of medical education. Drawing on the intertwined concepts of trust and professionalism as organizational constructs, the authors propose that the core problem is not hijacking-by-exam but rather a hijackable learning environment weakened by a pernicious and under-recognized tide of commodification within the U.S. medical education system. The authors discuss several factors contributing to this weakening of medicine's control over its learning environments, including erosion of trust in medical school curricula as adequate preparation for entry into the profession, increasing reliance on external profit-driven sources of medical education, and the emergence of an internal medical education marketplace. They call attention to breaches in the core tenets of a profession, namely a logic that differentiates its work from market and managerial forces, along with related slippages in discretionary decision making. The authors suggest reducing reliance on external performance metrics (high-stakes exams and corporate rankings), identifying and investing in alternative metrics that matter, abandoning the marketization of medical education "products," and attending to the language of educational praxis and its potential corruption by market and managerial lexicons. These steps might salvage some self-governing independence implied in the term "profession" and make possible (if not probable) a recovery of a public trust becoming of the term and its training institutions.
Affiliation(s)
- Frederic W Hafferty
- F.W. Hafferty is professor of medical education, Division of General Internal Medicine and Program in Professionalism and Values, Mayo Clinic, Rochester, Minnesota; ORCID: https://orcid.org/0000-0002-5604-7268. B.C. O'Brien is professor of medicine, Department of Medicine, and education scientist, Center for Faculty Educators, University of California, San Francisco, School of Medicine, San Francisco, California. J.C. Tilburt is professor of medicine, Division of General Internal Medicine, Mayo Clinic, Rochester, Minnesota
29
Samade R, Balch Samora J, Scharschmidt TJ, Goyal KS. Use of Standardized Letters of Recommendation for Orthopaedic Surgery Residency Applications: A Single-Institution Retrospective Review. J Bone Joint Surg Am 2020; 102:e14. [PMID: 31596798 DOI: 10.2106/jbjs.19.00130]
Abstract
BACKGROUND Standardized letters of recommendation (SLORs) were introduced to provide a more objective method of evaluating applicants for orthopaedic surgery residency positions. We sought to establish whether there exists an association between the SLOR summative rank statement (SRS), which is a question that asks the letter-writing authors where they would rank a student relative to other applicants, and success in matching into orthopaedic surgery residency. METHODS We reviewed 858 applications to an orthopaedic surgery residency program from 2017 to 2018. Data on 9 assessment categories, SRSs, and written comments in the SLORs were extracted. The match success of applicants was determined by an internet search algorithm. Logistic regression was used to evaluate the association between the SRSs and match success. The Spearman correlation was performed between the SRSs and other variables. RESULTS Only 60% of all letters of recommendation were SLORs. A supplemental letter accompanied 24% of the SLORs. Median percentile rank ranged from 90% to 100% for the 9 categories in the SLORs. Recommendations of "high rank" or higher were found in 88% of the SRSs. The mean of the SLOR SRSs was associated with match success. CONCLUSIONS The mean of the SLOR SRSs was associated with match success. However, the SLOR is not uniformly used. Future efforts should be devoted to improving question design and validity in order to better discriminate among applicants, increase adherence to the rating scale, and quantify the strength of the written comments in the SLOR.
Affiliation(s)
- Richard Samade
- Department of Orthopaedics, The Ohio State University Wexner Medical Center, Columbus, Ohio
- Julie Balch Samora
- Department of Orthopaedics, The Ohio State University Wexner Medical Center, Columbus, Ohio; Department of Orthopedic Surgery, Nationwide Children's Hospital, Columbus, Ohio
- Thomas J Scharschmidt
- Department of Orthopaedics, The Ohio State University Wexner Medical Center, Columbus, Ohio
- Kanu S Goyal
- Department of Orthopaedics, The Ohio State University Wexner Medical Center, Columbus, Ohio
30
Bajwa NM, Nendaz MR, Galetto-Lacour A, Posfay-Barbe K, Yudkowsky R, Park YS. Can Professionalism Mini-Evaluation Exercise Scores Predict Medical Residency Performance? Validity Evidence Across Five Longitudinal Cohorts. Acad Med 2019; 94:S57-S63. [PMID: 31365408 DOI: 10.1097/acm.0000000000002895]
Abstract
PURPOSE The residency admissions process is a high-stakes assessment system with the purpose of identifying applicants who best meet standards of the residency program and the medical specialty. Prior studies have found that professionalism issues contribute significantly to residents in difficulty during training. This study examines the reliability (internal structure) and predictive (relations to other variables) validity evidence for a standardized patient (SP)-based Professionalism Mini-Evaluation Exercise (P-MEX) using longitudinal data from pediatrics candidates from admission to the end of the first year of postgraduate training. METHOD Data from 5 cohorts from 2012 to 2016 (195 invited applicants) were analyzed from the University of Geneva (Switzerland) Pediatrics Residency Program. Generalizability theory was used to examine the reliability and variance components of the P-MEX scores, gathered across 3 cases. Correlations and mixed-effects regression analyses were used to examine the predictive utility of SP-based P-MEX scores (gathered as part of the admissions process) with rotation evaluation scores (obtained during the first year of residency). RESULTS Generalizability was moderate (G coefficient = 0.52). Regression analyses relating P-MEX scores to first-year rotation evaluations indicated significant standardized effect sizes for attitude and personality (β = 0.36, P = .02), global evaluation (β = 0.27, P = .048), and total evaluation scores (β = 0.34, P = .04). CONCLUSIONS Validity evidence supports the use of P-MEX scores as part of the admissions process to assess professionalism. P-MEX scores provide a snapshot of an applicant's level of professionalism and may predict performance during the first year of residency.
Affiliation(s)
- Nadia M Bajwa
- N.M. Bajwa is residency program director, Department of General Pediatrics, Children's Hospital, Geneva University Hospitals, and faculty member, Unit of Development and Research in Medical Education, Faculty of Medicine, University of Geneva, Geneva, Switzerland; ORCID: http://orcid.org/0000-0002-1445-4594. M.R. Nendaz is professor and director, Unit of Development and Research in Medical Education, Faculty of Medicine, University of Geneva, and attending physician, Division of General Internal Medicine, Geneva University Hospitals, Geneva, Switzerland; ORCID: http://orcid.org/0000-0003-3795-3254. A. Galetto-Lacour is professor and pediatric clerkship director, Department of Pediatric Emergency Medicine, Children's Hospital, Geneva University Hospitals, Geneva, Switzerland; ORCID: https://orcid.org/0000-0002-7901-1647. K. Posfay-Barbe is professor and chairperson, Department of General Pediatrics, Children's Hospital, Geneva University Hospitals, Geneva, Switzerland; ORCID: https://orcid.org/0000-0001-9464-5704. R. Yudkowsky is professor, Department of Medical Education, College of Medicine, University of Illinois at Chicago, Chicago, Illinois; ORCID: https://orcid.org/0000-0002-2145-7582. Y.S. Park is associate professor, Department of Medical Education, College of Medicine, University of Illinois at Chicago, Chicago, Illinois; ORCID: http://orcid.org/0000-0001-8583-4335
31
Lyons J, Bingmer K, Ammori J, Marks J. Utilization of a Novel Program-Specific Evaluation Tool Results in a Decidedly Different Interview Pool Than Traditional Application Review. J Surg Educ 2019; 76:e110-e117. [PMID: 31668694 DOI: 10.1016/j.jsurg.2019.10.007]
Abstract
BACKGROUND There are almost twice as many applicants as there are general surgery internships, all utilizing a common application with standard components. These elements are frequently not useful in determining affinity for a program or overall ability, and resultant poor fit may be partially responsible for program attrition. Alternative evaluation instruments would be beneficial to both programs and applicants. METHODS An application review committee composed of resident representatives, faculty representing all program-affiliated institutions, and program leadership completed a written evaluation developed by a third party (SurgWise Consulting) that specializes in industrial and organizational psychology. The responses were compiled to create a standardized assessment tool. This assessment was sent to applicants, who were subsequently ranked according to fit with our program. The pool of applicants was separately evaluated using our traditional application review. Two residents independently graded each applicant on a 5-point Likert scale to evaluate common application elements; applicants were subsequently assigned an overall score. RESULTS The assessment was completed by 507 (99%) of 512 qualifying applicants. Separately, 378 applications were reviewed by the traditional method for a total of 756 reviews. Of the 96 applicants identified by the assessment tool to invite for interviews, 22 (23%) qualified for interview invitations according to the traditional review method. The assessment identified 74 applicants who otherwise would not have been interviewed. CONCLUSION Traditional application review strategies have many shortcomings. A competency-based assessment tool in the residency application selection process identifies a pool of applicants not identified by traditional review methods.
Affiliation(s)
- Joshua Lyons
- University Hospitals Cleveland Medical Center, Cleveland, Ohio
- John Ammori
- University Hospitals Cleveland Medical Center, Cleveland, Ohio
- Jeffrey Marks
- University Hospitals Cleveland Medical Center, Cleveland, Ohio
32
Andolsek KM. One Small Step for Step 1. Acad Med 2019; 94:309-313. [PMID: 30570496 DOI: 10.1097/acm.0000000000002560]
Abstract
Step 1 of the United States Medical Licensing Examination (USMLE) is a multiple-choice exam primarily measuring knowledge about foundational sciences and organ systems. The test was psychometrically designed as pass/fail for licensing boards to decide whether physician candidates meet the minimum standards deemed necessary for the medical licensure required to practice. With an increasing number of applicants to review, Step 1 scores are commonly used by residency program directors to screen applicants, even though the exam was not intended for this purpose. Elsewhere in this issue, Chen and colleagues describe the "Step 1 climate" that has evolved in undergraduate medical education, affecting learning, diversity, and well-being. Addressing issues related to Step 1 is a challenge. Various stakeholders frequently spend more time demonizing one another rather than listening, addressing what lies under their respective control, and working collaboratively toward better long-term solutions. In this Invited Commentary, the author suggests how different constituencies can act now to improve this situation while aspirational future solutions are developed. One suggestion is to report Step 1 and Step 2 Clinical Knowledge scores as pass/fail and Step 2 Clinical Skills scores numerically. Any changes must be carefully implemented in a way that is mindful of the kind of unintended consequences that have befallen Step 1. The upcoming invitational conference on USMLE scoring (InCUS) will bring together representatives from all stakeholders. Until there is large-scale reform, all stakeholders should commit to taking (at least) one small step toward fixing Step 1 today.
Affiliation(s)
- Kathryn M Andolsek
- K.M. Andolsek is professor, Department of Community and Family Medicine, and assistant dean for premedical education, Duke University School of Medicine, Durham, North Carolina; ORCID: https://orcid.org/0000-0001-7994-3869
33
Stein JL. Residency Interviews: The Ethics of Asking Ethical Questions. J Grad Med Educ 2019; 11:10-11. [PMID: 30805089 PMCID: PMC6375327 DOI: 10.4300/jgme-d-18-00419.1]