1
Kwan BYM, Mussari B, Moore P, Meilleur L, Islam O, Menard A, Soboleski D, Cofie N. A Pilot Study on Diagnostic Radiology Residency Case Volumes From a Canadian Perspective: A Marker of Resident Knowledge. Can Assoc Radiol J 2020;71:490-494. PMID: 32037849. DOI: 10.1177/0846537119899227.
Abstract
Purpose: New guidelines from the Accreditation Council for Graduate Medical Education (ACGME) have proposed minimum case volumes to be obtained during residency. While radiology residency programs in Canada are accredited by the Royal College of Physicians and Surgeons of Canada, there are currently no minimum case volume standards for radiology residency training in Canada. Changes to residency training throughout Canada are coming in the form of competency-based medical education. Using data from a pilot study, this article examines radiology resident case volumes among recently graduated cohorts of residents and determines whether case volumes correlate with measures of resident success. Materials and Methods: Resident case volumes for 3 cohorts of graduated residents (2016-2018) were extracted from the institutional database. Achievement of the ACGME minimum case volumes was assessed for each resident. Pearson correlation analysis (n = 9) was performed to examine the relationships between resident case volumes and markers of resident success, including residents' relative knowledge ranking and their American College of Radiology (ACR) in-training examination scores. Results: A statistically significant positive correlation was observed between residents' case volume and their relative knowledge ranking (r = 0.682, P < .05). Residents' relative knowledge ranking was also significantly and positively correlated with their ACR in-training percentile score (r = 0.715, P < .05). Conclusions: This study suggests that residents who interpret more cases are more likely to demonstrate higher knowledge, highlighting the utility of case volumes as a prognostic marker of resident success. The results also underscore the potential use of the ACGME minimum case volumes as a prognostic marker. These findings can inform future curriculum planning and development in radiology residency training programs.
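As a rough illustration of the kind of analysis this study reports, the sketch below computes a Pearson correlation on a small sample, mirroring the n = 9 design; the case-volume and ranking values are hypothetical, not the study's data.

```python
# Minimal sketch of a Pearson correlation analysis on a small cohort
# (n = 9), as in the study. All numbers below are hypothetical.
from scipy.stats import pearsonr

case_volumes = [9500, 11200, 8700, 13400, 10100, 12800, 9900, 14100, 11600]
knowledge_rank = [3, 5, 2, 8, 4, 7, 1, 9, 6]  # relative ranking, 1 = lowest

r, p = pearsonr(case_volumes, knowledge_rank)
print(f"r = {r:.3f}, P = {p:.3f}")  # significant at P < .05 if p < 0.05
```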
Affiliation(s)
- Benjamin Y. M. Kwan, Benedetto Mussari, Pam Moore, Lynne Meilleur, Omar Islam, Alexandre Menard, Don Soboleski: Department of Radiology, School of Medicine, Queen's University, Kingston, Ontario, Canada
- Nicholas Cofie: Faculty of Health Sciences, Queen's University, Kingston, Ontario, Canada
2
Dubosh NM, Fisher J, Lewis J, Ullman EA. Faculty Evaluations Correlate Poorly with Medical Student Examination Performance in a Fourth-Year Emergency Medicine Clerkship. J Emerg Med 2017;52:850-855. PMID: 28341085. DOI: 10.1016/j.jemermed.2016.09.018.
Abstract
BACKGROUND Clerkship directors routinely evaluate medical students using multiple modalities, including faculty assessment of clinical performance and written examinations. Both forms of evaluation often play a prominent role in the final clerkship grade. The degree to which these modalities correlate in an emergency medicine (EM) clerkship is unclear. OBJECTIVE We sought to correlate faculty clinical evaluations with medical student performance on a written, standardized EM examination of medical knowledge. METHODS This is a retrospective study of fourth-year medical students in a 4-week EM elective at one academic medical center. EM faculty performed end-of-shift evaluations of students via a blinded online system using a 5-point Likert scale for 8 domains: data acquisition, data interpretation, medical knowledge base, professionalism, patient care and communication, initiative/reliability/dependability, procedural skills, and overall evaluation. All students completed the National EM M4 Examination. Means, medians, and standard deviations for end-of-shift evaluation scores were calculated, and correlations with examination scores were assessed using Spearman's rank correlation coefficient. RESULTS Thirty-nine medical students with 224 discrete faculty evaluations were included. The median number of evaluations completed per student was 6. The mean examination score (±SD) was 78.6% ± 6.1%. The examination score correlated poorly with faculty evaluations across all 8 domains (ρ = 0.074-0.316). CONCLUSION Faculty evaluations of medical students across multiple domains of competency correlate poorly with written examination performance during an EM clerkship. Educators need to consider the limitations of examination scores in assessing students' ability to provide quality clinical care.
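For readers who want to reproduce this kind of rank-based analysis, here is a minimal sketch using Spearman's rank correlation, the statistic this study reports; the rating and examination values below are hypothetical.

```python
# Minimal sketch of a Spearman rank correlation between mean faculty
# end-of-shift ratings and written examination scores (hypothetical data).
from scipy.stats import spearmanr

mean_shift_rating = [4.2, 3.8, 4.5, 4.0, 3.6, 4.4, 4.1, 3.9]  # 1-5 Likert
exam_score_pct = [81.0, 74.5, 78.0, 85.5, 72.0, 79.5, 88.0, 76.5]

rho, p = spearmanr(mean_shift_rating, exam_score_pct)
print(f"rho = {rho:.3f}, P = {p:.3f}")  # rho near 0 indicates poor correlation
```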
Affiliation(s)
- Nicole M Dubosh, Jason Lewis, Edward A Ullman: Department of Emergency Medicine, Harvard Medical School, Boston, Massachusetts; Department of Emergency Medicine, Beth Israel Deaconess Medical Center, Boston, Massachusetts
- Jonathan Fisher: Department of Emergency Medicine, Maricopa Medical Center, Phoenix, Arizona
3
Faculty and resident evaluations of medical students on a surgery clerkship correlate poorly with standardized exam scores. Am J Surg 2014;207:231-5. DOI: 10.1016/j.amjsurg.2013.10.008.
4
Ryan JG, Barlas D, Pollack S. The relationship between faculty performance assessment and results on the in-training examination for residents in an emergency medicine training program. J Grad Med Educ 2013;5:582-6. PMID: 24455005. PMCID: PMC3886455. DOI: 10.4300/jgme-d-12-00240.1.
Abstract
BACKGROUND Medical knowledge (MK) in residents is commonly assessed by the in-training examination (ITE) and faculty evaluations of resident performance. OBJECTIVE We assessed the reliability of clinical evaluations of residents by faculty and the relationship between faculty assessments of resident performance and ITE scores. METHODS We conducted a cross-sectional, observational study at an academic emergency department with a postgraduate year (PGY)-1 to PGY-3 emergency medicine residency program, comparing summative quarterly faculty evaluation data for MK and overall clinical competency (OC) with annual ITE scores, accounting for PGY level. We also assessed the reliability of faculty evaluations using a random-effects intraclass correlation analysis. RESULTS We analyzed data for 59 emergency medicine residents during a 6-year period. Faculty evaluations of MK and OC were highly reliable (κ = 0.99) and remained reliable after stratification by year of training (mean κ = 0.68-0.84). Assessments of resident performance (MK and OC) and ITE scores increased with PGY level. The MK and OC results correlated highly with PGY level, and ITE scores correlated moderately with PGY level. The OC and MK results correlated moderately with ITE score. When residents were grouped by PGY level, there was no significant correlation between faculty-assessed MK and ITE score. CONCLUSIONS Resident clinical performance and ITE scores both increase with PGY level, but ITE scores do not predict resident clinical performance relative to peers at the same PGY level.
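This study's reliability analysis rests on a random-effects intraclass correlation. As a rough sketch of the idea, the code below computes a one-way random-effects ICC(1) from a residents-by-raters matrix of hypothetical scores; the study's exact model and data are not reproduced here.

```python
# Minimal sketch of a one-way random-effects intraclass correlation,
# ICC(1) = (MSB - MSW) / (MSB + (k - 1) * MSW), on hypothetical ratings.
import numpy as np

# Rows = residents (targets), columns = faculty raters; 1-5 scale.
ratings = np.array([
    [4, 4, 5],
    [3, 3, 3],
    [5, 4, 5],
    [2, 3, 2],
    [4, 5, 4],
], dtype=float)

n, k = ratings.shape                 # number of targets, raters per target
grand_mean = ratings.mean()
row_means = ratings.mean(axis=1)

msb = k * ((row_means - grand_mean) ** 2).sum() / (n - 1)          # between-target mean square
msw = ((ratings - row_means[:, None]) ** 2).sum() / (n * (k - 1))  # within-target mean square

icc1 = (msb - msw) / (msb + (k - 1) * msw)
print(f"ICC(1) = {icc1:.3f}")  # values near 1 indicate high reliability
```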
5
Dudas RA, Colbert JM, Goldstein S, Barone MA. Validity of faculty and resident global assessment of medical students' clinical knowledge during their pediatrics clerkship. Acad Pediatr 2012;12:138-41. PMID: 22056224. DOI: 10.1016/j.acap.2011.09.002.
Abstract
OBJECTIVE Medical knowledge is one of six core competencies in medicine, and medical student assessments should be valid and reliable. We assessed the relationship between faculty and resident global assessment of pediatric medical student knowledge and performance on a standardized test of medical knowledge. METHODS This was a retrospective cross-sectional study of medical students on a pediatric clerkship in academic year 2008-2009 at one academic health center. Faculty and residents rated students' clinical knowledge on a 5-point Likert scale. The inter-rater reliability of clinical knowledge ratings was assessed by calculating the intraclass correlation coefficient (ICC) for residents' ratings, faculty ratings, and both rating types combined. Convergent validity between clinical knowledge ratings and scores on the National Board of Medical Examiners (NBME) clinical subject examination in pediatrics was assessed with the Pearson product-moment correlation coefficient and the coefficient of determination. RESULTS There was moderate agreement for global clinical knowledge ratings by faculty and moderate agreement for ratings by residents. Agreement was also moderate when faculty and resident ratings were combined. Global ratings of clinical knowledge had high convergent validity with pediatric examination scores when students were rated by both residents and faculty. CONCLUSIONS Our findings provide evidence for convergent validity of global assessment of medical students' clinical knowledge with NBME subject examination scores in pediatrics.
Affiliation(s)
- Robert A Dudas
- Department of Pediatrics, Johns Hopkins University School of Medicine, Baltimore, MD, USA.
6
Guffey RC, Rusin K, Chidiac EJ, Marsh HM. The utility of pre-residency standardized tests for anesthesiology resident selection: the place of United States Medical Licensing Examination scores. Anesth Analg 2011;112:201-6. PMID: 21048098. DOI: 10.1213/ane.0b013e3181fcfacd.
Abstract
BACKGROUND The resident selection process could be improved if United States Medical Licensing Examination (USMLE) scores obtained during residency application were found to predict success on the American Board of Anesthesiology (ABA) written examination (part 1). In this study, we compared USMLE performance during medical school with anesthesiology residency standardized examination performance. METHODS Sixty-nine anesthesiology residents' USMLE, ABA/American Society of Anesthesiologists (ASA) In-Training Examination, and ABA written board examination (part 1) scores were compared. Linear regression, adjusted Pearson partial correlation, multiple regression, and analysis of variance were used to cross-correlate pre-residency and intra-residency scores. Residents' school of medicine location and year of graduation were noted. RESULTS Both the USMLE Step 1 and Step 2 Clinical Knowledge examinations correlated significantly with all intra-residency standardized tests. The averaged Step 1 and Step 2 USMLE score correlated with the ABA written examination (part 1) score with a slope of 0.72 and an r of 0.48 (P = 0.001). CONCLUSIONS The USMLE is a significant predictor of ABA/ASA In-Training Examination and ABA written examination performance in anesthesiology residency. Our program has significantly increased its average written board examination performance while increasing the relative weight of USMLE scores in resident selection.
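To illustrate the regression this study reports (slope 0.72, r = 0.48 between averaged USMLE score and ABA part 1 score), here is a minimal sketch with hypothetical score pairs:

```python
# Minimal sketch of a simple linear regression of ABA written examination
# scores on averaged USMLE Step 1/Step 2 scores (hypothetical values).
from scipy.stats import linregress

avg_usmle = [215, 230, 224, 241, 208, 236, 219, 247, 227, 233]
aba_part1 = [370, 395, 380, 410, 355, 400, 375, 420, 385, 390]

fit = linregress(avg_usmle, aba_part1)
print(f"slope = {fit.slope:.2f}, r = {fit.rvalue:.2f}, P = {fit.pvalue:.4f}")
```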
Affiliation(s)
- Ryan C Guffey
- Department of Anesthesiology, Wayne State University School of Medicine, Detroit, MI 48201, USA
7
Ringdahl EN, Delzell JE, Kruse RL. Evaluation of interns by senior residents and faculty: is there any difference? Med Educ 2004;38:646-651. PMID: 15189261. DOI: 10.1111/j.1365-2929.2004.01832.x.
Abstract
INTRODUCTION Both senior residents and faculty members evaluate family practice interns (PGY-1) on the inpatient family medicine service at the University of Missouri-Columbia. The purpose of this study was to investigate the content and nature of narrative comments on a clinical evaluation sheet. METHODS Objective 1. The authors placed the subjective comments made by faculty and senior residents in their evaluations of PGY-1 residents into 12 distinctive categories. Objective 2. Comments were coded with a positive or negative valence. Objective 3. The genders of the evaluator and learner were recorded. RESULTS All evaluations made between 1996 and 1999 were analysed. A total of 1341 individual comments were reviewed. Objective 1. Categories used most often were generic comments (20.2%), personal attributes (18%), and clinical competence (14.1%). There was no difference in category use based on the experience level of the evaluator (P = 0.17). Objective 2. The majority of the comments (81.9%) were positive in nature. Senior faculty members were significantly less likely to make negative comments than were junior faculty members or senior residents (P = 0.004). Objective 3. There were no differences in category use based on the gender of the evaluator (P = 0.13). CONCLUSIONS Objective 1. Narrative evaluation comments may be placed into 12 distinctive categories. Most comments are generic and do not help to inform learning. Objective 2. A total of 82% of comments were positive. Residents were more likely to make negative comments than senior faculty members. Objective 3. There was no demonstrable gender bias in writing negative comments.
Affiliation(s)
- Erika N Ringdahl
- Department of Family and Community Medicine, University of Missouri-Columbia, Columbia, Missouri, USA
8
Affiliation(s)
- Jannette Collins
- Department of Radiology, University of Wisconsin Hospital and Clinics, E3/311 Clinical Science Center, 600 Highland Ave, Madison, WI 53792-3252, USA
9
Abstract
In the past 20 years, there has been increasing recognition of the need to consider cost in medical decision making, and this period has seen an explosion in the number of economic evaluations appearing in the medical literature. Cost-effectiveness analysis is an objective, systematic technique for comparing alternative health care strategies on both cost and effectiveness simultaneously. It can be used to inform medical decision makers in the establishment of clinical practice guidelines and in the setting of health policy. Cost-effectiveness analysis is a state-of-the-art research tool with its own terminology and methods, and it is critical that radiologists become familiar with its concepts and procedures so they can properly evaluate cost-effectiveness studies and participate more knowledgeably in the health care decision-making process. This article explains the rationale, terminology, and methods of cost-effectiveness analysis as applied to radiology.
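The core comparison in a cost-effectiveness analysis is the incremental cost-effectiveness ratio (ICER): the extra cost per extra unit of effect when moving from one strategy to another. A minimal sketch with hypothetical costs and effectiveness values for two imaging strategies:

```python
# Minimal sketch of an incremental cost-effectiveness ratio (ICER):
# ICER = (cost_new - cost_old) / (effect_new - effect_old).
# All figures are hypothetical, for illustration only.

cost_standard, qaly_standard = 1_200.0, 8.10   # cost in dollars, QALYs
cost_new_test, qaly_new_test = 2_900.0, 8.35

icer = (cost_new_test - cost_standard) / (qaly_new_test - qaly_standard)
print(f"ICER = ${icer:,.0f} per QALY gained")  # 1700 / 0.25 = $6,800 per QALY
```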
Affiliation(s)
- M E Singer
- Department of Epidemiology and Biostatistics, Metro Health Medical Center, Case Western Reserve University School of Medicine, 10900 Euclid Ave, Cleveland, OH 44106, USA.
10
Mullins ME, Mehta A, Patel H, McLoud TC, Novelline RA. Impact of PACS on the education of radiology residents: the residents' perspective. Acad Radiol 2001;8:67-73. PMID: 11201459. DOI: 10.1016/s1076-6332(03)80745-6.
Abstract
RATIONALE AND OBJECTIVES Because digital imaging and the picture archiving and communication system (PACS) are replacing radiographic film, the effect of PACS on residents' perceptions and their educational experience was investigated. MATERIALS AND METHODS Residents taking part in large diagnostic radiology training programs at two hospitals were surveyed. Approximately 75% of radiographic studies were reviewed with the use of PACS at both hospitals. Survey topics included technical and didactic issues based on direct and indirect comparison with analog (conventional film) images. RESULTS Fifty residents were polled (20 respondents). The majority had been using PACS for more than 1 year (14 of 20, 70%) to interpret 75%-100% of cases (11 of 20, 55%). The majority believed that PACS improved patient care (15 of 20, 75%) and their educational experience (15 of 20, 75%). A minority believed that increased patient throughput was harmful to the educational experience (5 of 20, 25%) because it permitted attending radiologists to review cases too quickly (4 of 20, 20%). Residents favored PACS over hard-copy images for ease of manipulation, resolution, and the ability to see pathologic conditions and normal anatomic characteristics. CONCLUSION Residents believe that PACS has positively affected their learning experience and does not negatively affect the quality of resident education.
Affiliation(s)
- M E Mullins
- Department of Radiology, Founders House, Massachusetts General Hospital, Boston, MA 02115, USA
11
Wise SW, Mauger DT, Matthews AE, Hartman DS. Impact of the Armed Forces Institute of Pathology Radiologic Pathology Course on radiology resident performance on the ACR In-Training and ABR Written Examinations. American College of Radiology. American Board of Radiology. Acad Radiol 2000;7:693-9. PMID: 10987330. DOI: 10.1016/s1076-6332(00)80525-5.
Abstract
RATIONALE AND OBJECTIVES The purpose of this study was to assess resident scores on the American College of Radiology (ACR) In-Training Examination and on the written American Board of Radiology (ABR) examination relative to attendance at, and timing of, the Armed Forces Institute of Pathology (AFIP) Radiologic Pathology Course. MATERIALS AND METHODS A survey of 200 radiology residency program directors requested the type of residency program, whether the program sent residents to the AFIP course, dates of AFIP attendance for individual residents, percentile scores of residents on the ACR examination from 1995 through 1998, and ABR examination scores for 1997. Scores were analyzed before and after AFIP attendance, and also temporally for examinations taken during or after AFIP attendance. Improvement in percentile scores for residents taking the ACR examination while attending the AFIP was compared with that of matched residents from their programs who had not attended. RESULTS Thirty-six (18%) program directors responded, providing data on 619 residents who took the ACR examination, the ABR examination, or both. No significant improvement was found between pre- and post-AFIP ACR examination scores for residents at university or military programs. Scores improved significantly for residents at community programs (mean percentile improvement, 8.1 points; P = .0064). Residents who took the ACR examination during the AFIP course improved their scores by 10.7 percentile points compared with matched residents who had not attended the course (P = .041). CONCLUSION Residents taking the ACR examination while attending the AFIP improve their percentile scores more than residents who have not attended.
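A comparison like the one this study reports (score improvement for attendees versus matched non-attendees) can be sketched as a two-sample test; the improvement values below are hypothetical, and a plain t-test stands in for the study's exact matching procedure, which is not reproduced here.

```python
# Minimal sketch of comparing percentile-score improvements between
# residents who attended the AFIP course and matched non-attendees.
# Hypothetical data; a two-sample t-test stands in for the study's method.
from scipy.stats import ttest_ind

improvement_afip = [12, 9, 15, 8, 11, 13, 7, 10]   # percentile points
improvement_matched = [2, -1, 4, 0, 3, 1, 2, -2]

t, p = ttest_ind(improvement_afip, improvement_matched)
print(f"t = {t:.2f}, P = {p:.4f}")
```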
Affiliation(s)
- S W Wise
- Department of Radiology, Penn State University College of Medicine, Milton S. Hershey Medical Center, PA 17033, USA