1. Alomari S, Lubelski D, Feghali J, Brem H, Witham T, Huang J. Impact of virtual vs. in-person interviews among neurosurgery residency applicants. J Clin Neurosci 2022;101:63-66. PMID: 35561432. DOI: 10.1016/j.jocn.2022.05.005.
Abstract
BACKGROUND The interview is considered a key factor in selecting residents in various medical and surgical specialties. However, the reliability of the interview process in selecting neurosurgery training program applicants remains largely under-investigated. OBJECTIVE To investigate the reliability of the interview process for neurosurgery residency applicants and to evaluate the impact of virtual interviews on this process. METHODS We analyzed the records of neurosurgery residency applicant interviews at our institution between 2016 and 2021. An average of 20 neurosurgery faculty members (clinical and research) interviewed each applicant and graded them from 1 (best) to 4 (worst). The intraclass correlation coefficient (ICC) and Levene's test were used to assess inter-rater and intra-rater reliability, respectively. RESULTS 214 neurosurgery residency applicants were interviewed at a single institution between 2016 and 2021. The mean applicant rating each year ranged from 1.77 to 1.92. Inter-rater agreement was relatively poor in each year (ICC < 0.5, P < 0.05). For 60% of the raters, the variability of scores changed significantly from year to year (p < 0.05). When the scores submitted during the virtual interview process (2021) were compared with those submitted in previous years (2016-2020), 2 interviewers (10%) showed less variability with the virtual process. CONCLUSION Our analysis found that the current interview process for selecting neurosurgery residency applicants suffers from poor inter- and intra-rater reliability. Virtual interviews may be part of a cost-effective strategy to improve the reliability of the interview process. Further validation is needed, as well as identification of novel strategies to maximize the reliability of the selection process.
Affiliations:
- All authors: Department of Neurosurgery, Johns Hopkins University School of Medicine, Baltimore, MD, USA
2. Can Behavior-Based Interviews Reduce Bias in Fellowship Applicant Assessment? Acad Pediatr 2022;22:478-485. PMID: 34929389. DOI: 10.1016/j.acap.2021.12.017.
Abstract
PURPOSE Components of trainee applications may introduce bias based on race or gender. Behavior-based interviews (BBIs) rely on structured questions that elicit applicants' past experiences to predict future behavior. Our objective was to implement BBIs in one fellowship program and compare applicant assessment by race and gender when using a standardized assessment tool versus a BBI-based tool. METHODS In 2019 and 2020, we developed BBIs and BBI-specific assessments; 6 of 15 faculty were trained in this interview method. Applicants completed 6 interviews with either a BBI or unstructured format. All faculty completed a standardized assessment on applicants. BBI faculty also completed a BBI-specific assessment. Normalized average scores were calculated and used to rank applicants into quartiles. Race was categorized as White, underrepresented minorities (URMs; Black and Hispanic applicants), and non-URMs (all others). Faculty and applicants were surveyed about BBIs. RESULTS Seventy-five applicants were interviewed. Significant differences were found in standardized assessment scores (White 1.01 ± 0.09, non-URM 1.02 ± 0.08, URM 0.94 ± 0.07; P = .02) and quartiles by race (P = .05), but not for BBI scores (White 0.98 ± 0.09, non-URM 1.03 ± 0.09, URM 1.02 ± 0.1; P = .18) or quartiles by race (P = .17). There were no significant differences in score or quartile by gender for either tool. The majority of faculty and applicant survey respondents commented positively about BBIs. CONCLUSION BBIs were successfully implemented and generally reviewed positively by faculty and applicants. BBIs reduced racial differences in applicant assessments. Applicant assessment may benefit from structured tools that mitigate potential biases.
3. An Overview of the GI Fellowship Interview: Part II-Tips for Selection Committees and Interviewers. Dig Dis Sci 2022;67:1712-1717. PMID: 35122593. PMCID: PMC8817657. DOI: 10.1007/s10620-022-07409-8.
4. Hughes RH, Kleinschmidt S, Sheng AY. Using structured interviews to reduce bias in emergency medicine residency recruitment: Worth a second look. AEM Educ Train 2021;5:S130-S134. PMID: 34616987. PMCID: PMC8480396. DOI: 10.1002/aet2.10562.
Affiliations:
- Alexander Y. Sheng: Department of Emergency Medicine, Boston Medical Center, Boston, MA, USA; Boston University School of Medicine, Boston, MA, USA
5. Bohrer-Clancy J, Lukowski L, Turner L, Staff I, London S. Emergency Medicine Residency Applicant Characteristics Associated with Measured Adverse Outcomes During Residency. West J Emerg Med 2017;19:106-111. PMID: 29383064. PMCID: PMC5785175. DOI: 10.5811/westjem.2017.11.35007.
Abstract
Introduction Residents who experience negative outcomes in emergency medicine (EM) programs consume a disproportionate amount of educational resources, to the detriment of other residents. We sought to determine whether any applicant characteristics identifiable during the selection process are associated with negative outcomes during residency. Methods The primary analysis examined the association of each descriptor, including resident characteristics and events during residency, with a composite measure of negative outcomes. Components of the negative outcome composite were any formal remediation, failure to complete residency, or extension of residency. Results From a dataset of 260 residents who completed their residency over a 19-year period, 26 (10%) were osteopaths and 33 (13%) were international medical school graduates. A leave of absence during medical school (p < .001), failure to send a thank-you note (p = .008), a failing score on United States Medical Licensing Examination Step I (p = .002), and a prior career in health (p = .034) were factors associated with greater likelihood of a negative outcome. All four residents with a "red flag" during their medicine clerkships experienced a negative outcome (p < .001). Conclusion "Red flags" during EM clerkships, a leave of absence during medical school for any reason, and failure to send post-interview thank-you notes may be associated with negative outcomes during an EM residency.
Affiliations:
- Jesse Bohrer-Clancy: University of Connecticut, Department of Emergency Medicine, Farmington, Connecticut
- Leslie Lukowski: Hartford Hospital, Department of Emergency Medicine, Hartford, Connecticut
- Lisa Turner: University of Connecticut, Department of Emergency Medicine, Farmington, Connecticut
- Ilene Staff: Hartford Hospital, Proposal Design and Statistical Analysis, Hartford, Connecticut
- Shawn London: University of Connecticut, Department of Emergency Medicine, Farmington, Connecticut; Hartford Hospital, Department of Emergency Medicine, Hartford, Connecticut
6. Schenker ML, Baldwin KD, Israelite CL, Levin LS, Mehta S, Ahn J. Selecting the Best and Brightest: A Structured Approach to Orthopedic Resident Selection. J Surg Educ 2016;73:879-885. PMID: 27230568. DOI: 10.1016/j.jsurg.2016.04.004.
Abstract
BACKGROUND Resident selection is integral to the graduate medical educational process and the future of our profession. There is no consensus among residency directors on how to systematically and consistently screen and select applicants who will perform well as residents. The purpose of this study was to introduce and assess a high-volume application screening tool and a semistructured interview process. METHODS This study took place in an academic orthopedic surgery department over 2 years (2013-2014). Overall, 1382 applications were screened in 7 categories, with a maximum score of 100. A total of 14 faculty reviewed applications; 218 interviews were offered, and 165 applicants accepted the interview. Four interview domains (cognitive, affective, activities, and theme) and an impression score were each rated from 1 (Exceptional) to 6 (Concern). Each room had an assigned "theme" (ethics, affective, cognitive, research, and "fit") with standardized questions. A summary of all scores was generated to determine the preliminary rank list; the final rank list was determined after group discussion. Correlations among preliminary rank, final rank, and screening scores were assessed. RESULTS The average screening score was 62.5 (range: 0-100, median = 64). The average interview score was 69.5 (range: 32.24-95.0). The final rank list correlated most highly with initial rank (0.912, p < 0.001), impression (0.847, p < 0.001), and the affective domain (0.834, p < 0.001). The cognitive domain (0.628, p < 0.001) and screening scores (0.264, p < 0.001) correlated less strongly with final rank position. CONCLUSIONS A systematic approach was used to screen and evaluate a large number of orthopedic surgery applicants. Our system demonstrated excellent feasibility, reliability, and predictability for the final rank list.
Affiliations:
- Mara L Schenker: Department of Orthopaedic Surgery, Emory University, Atlanta, Georgia
- Keith D Baldwin: Department of Orthopaedic Surgery, University of Pennsylvania, Philadelphia, Pennsylvania; Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
- Craig L Israelite, L Scott Levin, Samir Mehta: Department of Orthopaedic Surgery, University of Pennsylvania, Philadelphia, Pennsylvania
- Jaimo Ahn: Department of Orthopaedic Surgery, Emory University, Atlanta, Georgia
7. Lubelski D, Healy AT, Friedman A, Ferraris D, Benzel EC, Schlenk R. Correlation of personality assessments with standard selection criteria for neurosurgical residency applicants. J Neurosurg 2016;125:986-994. PMID: 26848920. DOI: 10.3171/2015.7.jns15880.
Abstract
OBJECTIVE Neurosurgery is among the most competitive residencies, as evidenced by the high number of applicants for relatively few positions. Although it is important to recruit candidates who have the intellectual capacity and drive to succeed, traditional objective selection criteria, such as US Medical Licensing Examination (USMLE) Step 1 score, number of publications, and class ranking, have not been shown to consistently predict clinical and academic success. Furthermore, these traditional objective parameters have not been associated with specific personality traits. METHODS The authors sought to determine the efficacy of a personality assessment in the selection of neurosurgery residents. Specifically, the aim was to determine the correlation between traditional measures used to evaluate an applicant (e.g., USMLE score, number of publications, MD/PhD status) and corresponding validated personality traits. RESULTS Fifty-four neurosurgery residency applicants were interviewed at the Cleveland Clinic during the 2014-2015 application cycle. No differences in validated personality scores were identified between the 46 MD applicants and 8 MD/PhD applicants. The mean USMLE score (± SD) was 252.3 ± 11.9, and applicants in the high-USMLE-score category (USMLE score ≥ 260) had a significantly lower "imaginative" score (a stress measure of eccentric thinking and impatience with those who think more slowly). The average number of publications per applicant was 8.6 ± 7.9, and there was a significant positive correlation (r = 0.339, p = 0.016) between a greater number of publications and a higher "adjustment" score (a measure of being even-tempered and composed under pressure). Significant negative correlations existed between the total number of publications and the "excitable" score (a measure of being emotionally volatile) (r = -0.299, p = 0.035) as well as the "skeptical" score (a measure of being sensitive to criticism) (r = -0.325, p = 0.021). The average medical school rank was 25.8, and medical school rankings were positively correlated with the "imaginative" score (r = 0.287, p = 0.044). CONCLUSIONS This is the first study to investigate the use of personality scores in the selection of neurosurgical residents. Personality assessments have the potential to provide insight into an applicant's future behavior as a resident and beyond. This information may be useful in selecting neurosurgical residents and can further be used to customize resident teaching and to enable residents to recognize their own strengths and weaknesses for self-improvement.
Affiliations:
- Daniel Lubelski: Center for Spine Health and Lerner College of Medicine, Cleveland Clinic, Cleveland, Ohio; Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, Maryland
- Alan Friedman, Dyan Ferraris: J3Personica Research and Development, Eatontown, New Jersey
- Edward C Benzel, Richard Schlenk: Center for Spine Health and Lerner College of Medicine, Cleveland Clinic, Cleveland, Ohio
8. Bandiera G, Abrahams C, Ruetalo M, Hanson MD, Nickell L, Spadafora S. Identifying and Promoting Best Practices in Residency Application and Selection in a Complex Academic Health Network. Acad Med 2015;90:1594-1601. PMID: 26488571. DOI: 10.1097/acm.0000000000000954.
Abstract
Medical education institutions have a social mandate to produce a diverse physician workforce that meets the public's needs. Recent reports have framed the admission process outcome of undergraduate and postgraduate medical education (UGME and PGME) programs as a key determinant of the collective contributions graduating cohorts will make to society, creating a sense of urgency around the issue of who gets accepted. The need for evidence-informed residency application and selection processes is growing because of the increasing size and diversity of the applicant pool and the need for equity, fairness, social accountability, and health human resource planning. The selection literature, however, is dominated by a UGME focus and emphasizes determination of desirable qualities of future physicians and selection instrument reliability and validity. Gaps remain regarding PGME selection, particularly the creation of specialty-specific selection criteria, suitable outcome measures, and reliable selection systems. In this Perspective, the authors describe the University of Toronto's centralized approach to defining system-level best practices for residency application and selection. Over the 2012-2013 academic year, the Best Practices in Application and Selection working group reviewed relevant literature and reports, consulted content experts, surveyed local practices, and conducted iterative stakeholder consultations on draft recommendations. Strong agreement arose around the resulting 13 principles and 24 best practices, which had either empirical support or face validity. These recommendations, which are shared in this article, have been adopted by the university's PGME advisory committee and will inform a national initiative to improve trainees' transition from UGME to PGME in Canada.
Affiliations:
- G. Bandiera is associate dean, Postgraduate Medical Education, University of Toronto, and chief of emergency medicine, St. Michael's Hospital, Toronto, Ontario, Canada. C. Abrahams is director of policy and analysis, Postgraduate Medical Education Office, University of Toronto. M. Ruetalo is a research officer, Postgraduate Medical Education Office, University of Toronto. M.D. Hanson is associate dean, Undergraduate Medical Education Admissions and Student Finances, University of Toronto. L. Nickell is associate dean, Undergraduate Health Professions Students Affairs, University of Toronto. S. Spadafora is vice dean, Postgraduate Medical Education, University of Toronto, Toronto, Ontario, Canada.
9. Multiple Mini-Interviews (MMI) and Semistructured Interviews for the Selection of Family Medicine Residents: A Comparative Analysis. Int Sch Res Notices 2014;2014:747168. PMID: 27433527. PMCID: PMC4897381. DOI: 10.1155/2014/747168.
Abstract
Background. The Family Medicine Residency Program at the Aga Khan University receives more applicants than there are residency positions, which has led to the formulation of selection criteria. The objective of this study was to compare the MMI versus semistructured interviews for assessing noncognitive domains in the selection of residents. The secondary objectives were to determine the perceptions of interviewers and candidates regarding the acceptability and feasibility of the MMI as a selection tool. Methods. The candidates underwent semistructured interviews along with the MMI, and identical attributes were tested in both. The attributes tested were safe doctor, communication skills, professionalism, problem solving, team approach, ethical issues, reasons for selecting family medicine, and commitment to the program. Descriptive statistics were calculated, and comparison between ratings for the MMI and the interview was performed with the Wilcoxon signed-rank test. Results. The total number of candidates was 14. On comparison between the interview and the MMI, scores were not statistically different for all attributes except ethics (mean interview score: 3.04, mean MMI score: 2.5, P value 0.046). Conclusion. The study showed no difference between the MMI and semistructured interviews. However, it needs to be replicated in order to determine the predictive validity and feasibility of the MMI over time.
10. A Survey of Academic Emergency Medicine Department Chairs on Hiring New Attending Physicians. J Emerg Med 2014;47:92-98. DOI: 10.1016/j.jemermed.2013.08.105.
11. Gardner KM. Developing a Customized Multiple Interview for Dental School Admissions. J Dent Educ 2014. DOI: 10.1002/j.0022-0337.2014.78.4.tb05711.x.
12. Dumont TM, Horgan MA. The surgical skills laboratory residency interview: an enjoyable alternative. J Surg Educ 2012;69:407-410. PMID: 22483145. DOI: 10.1016/j.jsurg.2011.09.011.
Abstract
PURPOSE The authors aimed to trial an alternative interviewing strategy by inviting residency candidates to our surgical anatomy laboratory, where interviews coincided with surgical dissection. The authors hypothesized that residency candidates hoping to match into a surgical subspecialty might enjoy this unconventional interviewing strategy, which would mimic an operating room experience. METHODS On scheduled residency interview dates, formal, unstructured interviews were held with half of the neurosurgical faculty, and unstructured surgical skills laboratory-based interviews were held with the other half. Interviews in the skills laboratory featured cases and corresponding surgical dissection guided by faculty. After the interview, the residency candidates were encouraged to complete an optional survey about the interview process, and the survey results were pooled for analysis. RESULTS Of 28 candidates interviewed, 19 responded to the survey. The respondents reviewed all aspects of the interview process favorably. When asked to report the most enjoyable part of the interview, all respondents listed the surgical skills laboratory. The average respondent scores for the importance of the surgical skills laboratory interview (9.5 ± 1.1), the conventional interview with faculty (9.2 ± 1.0), and the interview with residents (9.1 ± 1.0) were not significantly different (p = 0.50, analysis of variance). CONCLUSIONS The surgical skills laboratory interviews were reviewed favorably by the survey respondents, and nearly all listed the surgical skills interview as the most enjoyable part of the interview experience. The authors advocate this residency interview strategy for surgical subspecialty residencies.
Affiliations:
- Travis M Dumont: Division of Neurosurgery, University of Vermont, Burlington, Vermont 05401, USA
13. Blouin D, Day AG, Pavlov A. Comparative reliability of structured versus unstructured interviews in the admission process of a residency program. J Grad Med Educ 2011. PMID: 23205201. PMCID: PMC3244318. DOI: 10.4300/jgme-d-10-00248.1.
Abstract
BACKGROUND Although the two formats have never been directly compared, structured interviews are reported to be more reliable than unstructured interviews. This study compared the reliability of both types of interview when applied to a common pool of applicants for positions in an emergency medicine residency program. METHODS In 2008, one structured interview was added to the two unstructured interviews traditionally used in our resident selection process. A formal job analysis using the critical incident technique guided the development of the structured interview tool. This tool consisted of 7 scenarios assessing 4 of the domains deemed essential for success as a resident in this program. The traditional interview tool assessed 5 general criteria. In addition to these criteria, the unstructured panel members were asked to rate each candidate on the same 4 essential domains rated by the structured panel members. All 3 panels interviewed all candidates. Main outcomes were the overall, interitem, and interrater reliabilities, the correlations between interview panels, and the dimensionality of each interview tool. RESULTS Thirty candidates were interviewed. The overall reliability reached 0.43 for the structured interview, and 0.81 and 0.71 for the unstructured interviews. Analyses of the variance components showed high interrater but low interitem reliability for the structured interview, and high interrater and high interitem reliability for the unstructured interviews. The summary measures from the 2 unstructured interviews were significantly correlated, but neither was correlated with the structured interview. Only the structured interview was multidimensional. CONCLUSIONS The structured interview did not yield a higher overall reliability than either unstructured interview. Its lower reliability is explained by lower interitem reliability, which in turn is due to the multidimensionality of the interview tool. Both unstructured panels consistently rated a single dimension, even when prompted to assess the 4 specific domains established as essential to succeed in this residency program.