1. Tanaka P, Goldmacher J, Park YS, Vinagre R, Macario A. Factors Identified by Faculty That Contribute to Straight-Line Scoring in End-of-Rotation Evaluations of Anesthesiology Residents. A A Pract 2025; 19:e01947. PMID: 40257125; DOI: 10.1213/xaa.0000000000001947.
Abstract
This study examined the prevalence of "straight-line scoring" (SLS), in which all subcompetencies are given the same rating, in end-of-rotation evaluations of anesthesiology residents. Nearly half of the evaluations showed SLS, particularly at higher training levels. Faculty interviews identified contributing factors: lack of specific information about resident performance, reluctance to give extreme scores, mismatch between evaluation items and the work actually observed, limited answer options, low priority given to completing assessments, and perceived lack of usefulness. The authors suggest improving the evaluation tool, training faculty, completing evaluations promptly, collecting more performance data, and shifting the culture in order to reduce SLS and increase the value of feedback.
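As a purely illustrative sketch (not code from the study; the column names, rating scale, and data below are hypothetical), an evaluation exhibits straight-line scoring when every subcompetency receives the identical rating, which is straightforward to flag programmatically:

import pandas as pd

# Hypothetical end-of-rotation evaluations: one row per evaluation,
# one column per subcompetency rating.
evals = pd.DataFrame({
    "resident": ["A", "B", "C"],
    "pc1": [3.0, 2.5, 4.0],
    "mk1": [3.0, 3.0, 4.0],
    "sbp1": [3.0, 2.0, 4.0],
})

rating_cols = ["pc1", "mk1", "sbp1"]
# An evaluation is straight-line scored if all subcompetency ratings are identical.
evals["straight_line"] = evals[rating_cols].nunique(axis=1) == 1

prevalence = evals["straight_line"].mean()
print(evals[["resident", "straight_line"]])
print(f"SLS prevalence: {prevalence:.0%}")  # 2 of the 3 hypothetical evaluations here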
Affiliation(s)
- Pedro Tanaka, Jesse Goldmacher, Rafael Vinagre, Alex Macario: Department of Anesthesiology, Perioperative and Pain Medicine, Stanford University School of Medicine, Stanford, California
- Yoon Soo Park: Department of Medical Education, University of Illinois at Chicago, Chicago, Illinois
2. Holmboe ES. From Chrysalis to Taking Flight, the Metamorphosis of the ACGME During Dr Thomas Nasca's Tenure as CEO. J Grad Med Educ 2024; 16:652-661. PMID: 39677319; PMCID: PMC11641890; DOI: 10.4300/jgme-d-24-00937.1.
Abstract
Thomas J. Nasca, MD, MACP, served as President and Chief Executive Officer (CEO) of the Accreditation Council for Graduate Medical Education (ACGME) for 17 years, with his tenure ending in December 2024. During this time, he led and supported significant changes in accreditation and medical education. This article examines the changes of this period through the lens of key themes, including the redesign of the graduate medical education (GME) accreditation model and the new and expanded roles the ACGME assumed during 3 phases between 2007 and 2024: (1) the development years leading to the Next Accreditation System (NAS), (2) implementation of the NAS, and (3) the COVID-19 pandemic. Launched in 2012, the NAS redesigned accreditation as a balanced combination of assurance- and improvement-focused policies and activities. The NAS served as the foundation for harmonizing GME training through the creation of the single accreditation system. The ACGME also took on new roles within the professional self-regulatory system by tackling difficult issues such as wellness and physician suicide, as well as diversity, equity, and inclusion in medical education. In addition, the ACGME substantially expanded its role as facilitator and educator by introducing multiple resources to support GME. However, the medical education landscape remains complex and faces continued uncertainty, especially as it emerges from the effects of the COVID-19 pandemic. The next ACGME President and CEO faces critical issues in GME.
Affiliation(s)
- Eric S. Holmboe: Chief Executive Officer, Intealth, Philadelphia, Pennsylvania, USA
3. Santen SA, Yingling S, Hogan SO, Vitto CM, Traba CM, Strano-Paul L, Robinson AN, Reboli AC, Leong SL, Jones BG, Gonzalez-Flores A, Grinnell ME, Dodson LG, Coe CL, Cangiarella J, Bruce EL, Richardson J, Hunsaker ML, Holmboe ES, Park YS. Are They Prepared? Comparing Intern Milestone Performance of Accelerated 3-Year and 4-Year Medical Graduates. Acad Med 2024; 99:1267-1277. PMID: 39178363; DOI: 10.1097/acm.0000000000005855.
Abstract
PURPOSE Accelerated 3-year programs (A3YPs) at medical schools were developed to address student debt and mitigate workforce shortages. This study investigated whether medical school length (3 vs 4 years) was associated with early residency performance. The primary research question was: Are the Accreditation Council for Graduate Medical Education Milestones (MS) attained by A3YP graduates at 6 and 12 months into internship comparable to those attained by graduates of traditional 4-year programs (T4YPs)? METHOD MS data from residents entering training in 2021 and 2022 in the 6 largest specialties were used: emergency medicine, family medicine, internal medicine, general surgery, psychiatry, and pediatrics. Three-year and 4-year graduates were matched for analysis (2,899 matched learners: 182 in A3YPs and 2,717 in T4YPs). The study used a noninferiority design to compare the study cohort (A3YP) with the control cohort (T4YP). To account for medical school and residency program effects, the authors used cross-classified random-effects regression to adjust for clustering and estimate group differences. RESULTS Mean Harmonized MS ratings for the midyear and end-year reporting periods showed no significant differences between the A3YP and T4YP groups (mean [SE] cross-classified coefficient = 0.01 [0.02], P = .77). Mean ratings across internal medicine MS for the midyear and end-year reporting periods likewise showed no significant differences between the A3YP and T4YP groups (mean [SE] cross-classified coefficient = -0.03 [0.03], P = .31). Similarly, for family medicine, there were no statistically significant differences between the A3YP and T4YP groups (mean [SE] cross-classified coefficient = 0.01 [0.02], P = .96). CONCLUSIONS For the specialties studied, there were no significant differences in MS performance between 3-year and 4-year graduates at 6 and 12 months into internship. These results support the comparable efficacy of A3YPs in preparing medical students for residency.
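To make the noninferiority logic concrete, here is a deliberately simplified Python sketch (hypothetical data; it ignores the medical school and residency program clustering the authors handled with cross-classified random effects, and the margin shown is invented):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical mean Milestone ratings per intern for the two cohorts.
a3yp = rng.normal(loc=2.5, scale=0.4, size=182)   # accelerated 3-year graduates
t4yp = rng.normal(loc=2.5, scale=0.4, size=2717)  # traditional 4-year graduates

diff = a3yp.mean() - t4yp.mean()
se = np.sqrt(a3yp.var(ddof=1) / len(a3yp) + t4yp.var(ddof=1) / len(t4yp))
z = stats.norm.ppf(0.975)
ci_low, ci_high = diff - z * se, diff + z * se

margin = -0.2  # hypothetical noninferiority margin on the rating scale
print(f"difference (A3YP - T4YP) = {diff:.3f}, 95% CI ({ci_low:.3f}, {ci_high:.3f})")
print("noninferiority supported" if ci_low > margin else "noninferiority not demonstrated")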
4. Ruan X, Xu X, Pei L, Yi J, Yu C, Yu X, Zhu B, Quan X, Li X, Jv H, Zhang Y, Huang Y. Chinese Anesthesiology Milestones in Resident Evaluation: Reliability, Validity, and Correlation with Objective Examination Scores: A Cross-Sectional Study. Anesth Analg 2024. PMID: 39418193; DOI: 10.1213/ane.0000000000007279.
Abstract
BACKGROUND Evaluating competency acquisition during residency training is crucial. The Anesthesiology Milestones have been implemented in the United States, and the China Consortium of Elite Teaching Hospitals for Residency Education has developed the Chinese Resident Core Competency Milestone Evaluation System. Despite this, Milestones tailored to anesthesiology have yet to be implemented in China. To address this gap, we developed the Chinese Anesthesiology Milestones. This study aims to assess the reliability and validity of the Chinese Anesthesiology Milestones and their correlation with objective examinations. METHODS In this single-center cross-sectional study, we included anesthesia residents enrolled in the standardized residency training program at our hospital during the 2021 to 2022 academic year. The Chinese Anesthesiology Milestones were developed from the American Anesthesiology Milestones 2.0 and the Chinese Resident Core Competency Milestone Evaluation System using the Delphi method. The Delphi panel comprised a diverse group, including education administrators, faculty from teaching hospitals, and anesthesia residents. Five attending anesthesiologists independently assessed the level achieved by each anesthesia resident on the Chinese Anesthesiology Milestones and then discussed the ratings for each resident until consensus was reached. Interrater reliability, internal consistency, and construct validity were assessed using Kendall's coefficient, Cronbach's α coefficient/composite reliability, and average variance extracted, respectively; higher values indicate better reliability or validity. Correlations between Milestone ratings and objective examination scores, including written examinations and Objective Structured Clinical Examinations, were analyzed using Pearson correlation. RESULTS The Chinese Anesthesiology Milestones encompassed 6 competencies: professionalism, medical knowledge and technical skills, patient care, interpersonal and communication skills, teaching ability, and life-long learning. Milestone evaluation data were available and analyzed for 66 residents. Kendall's coefficient of concordance among raters ranged from 0.799 (95% confidence interval [CI], 0.793-0.918) to 0.942 (95% CI, 0.934-0.982). The average variance extracted, composite reliability, and Cronbach's α coefficient ranged from 0.782 to 0.920, 0.935 to 0.980, and 0.916 to 0.978, respectively. Correlations between objective examination scores and the related Milestone subcompetencies were as follows: written examinations, r = 0.52 (95% CI, 0.22-0.71); technical skills stations, r = 0.51 (95% CI, 0.21-0.71); the oral test station, r = 0.66 (95% CI, 0.45-0.79); and the standardized patient station, r = 0.61 (95% CI, 0.36-0.76). CONCLUSIONS The Chinese Anesthesiology Milestones demonstrated satisfactory interrater reliability, internal consistency, construct validity, and correlation with objective examination scores within our hospital.
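For orientation only, a minimal Python sketch (simulated data, not the study's analysis) of two of the quantities reported above, Cronbach's α across subcompetency ratings and the Pearson correlation with an examination score:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_residents, n_items = 66, 6
# Hypothetical Milestone subcompetency ratings (rows = residents, columns = items),
# built from a shared person effect plus item-level noise.
ratings = rng.normal(3.0, 0.5, size=(n_residents, 1)) + rng.normal(0, 0.3, size=(n_residents, n_items))

def cronbach_alpha(x):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = x.shape[1]
    item_var = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical written examination scores, constructed to track the mean Milestone rating.
exam = 60 + 10 * ratings.mean(axis=1) + rng.normal(0, 3, size=n_residents)
r, p = stats.pearsonr(ratings.mean(axis=1), exam)

print(f"Cronbach's alpha = {cronbach_alpha(ratings):.2f}")
print(f"Pearson r = {r:.2f} (p = {p:.3g})")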
Affiliation(s)
- Xia Ruan, Xiaohan Xu, Lijian Pei, Jie Yi, Chunhua Yu, Xuerong Yu, Bo Zhu, Xiang Quan, Xu Li, Yuguang Huang: Department of Anesthesiology, Chinese Academy of Medical Sciences & Peking Union Medical College Hospital, Beijing, China
- Hui Jv: Department of Education, Peking University People's Hospital, Beijing, China
- Yuelun Zhang: Center for Prevention and Early Intervention, National Infrastructures for Translational Medicine, Institute of Clinical Medicine, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
5. Gorgas DL, Joldersma KB, Ankel FK, Carter WA, Barton MA, Reisdorff EJ. Emergency Medicine Milestones Final Ratings Are Often Subpar. West J Emerg Med 2024; 25:735-738. PMID: 39319804; PMCID: PMC11418869; DOI: 10.5811/westjem.18703.
Abstract
Background The emergency medicine (EM) milestones are objective behaviors grouped into thematic domains called "subcompetencies" (eg, emergency stabilization). The milestone rating scale is predicated on the assumption that a rating (level) of 1.0 corresponds to an incoming EM-1 resident and that 4.0 is the "target rating" (albeit not an expectation) for a graduating resident. Our aim in this study was to determine how frequently graduating residents received the target milestone ratings. Methods This retrospective, cross-sectional study was a secondary analysis of a dataset used in a prior study but not previously reported. We analyzed milestone subcompetency ratings from April 25 to June 24, 2022 for categorical EM residents in their final year of training. Ratings were dichotomized as meeting the expected level at the time of program completion (ratings ≥3.5) or not meeting it (ratings ≤3.0). We calculated the number of residents who did not achieve the target rating for each subcompetency. Results Of the 2,637 residents in the spring of their final year of training in 2022, 1,613 (61.2%) achieved a rating of ≥3.5 on every subcompetency and 1,024 (38.8%) failed to achieve that rating on at least one subcompetency. There were 250 residents (9.5%) who failed to achieve the expected rating on half of the subcompetencies and 105 (4.0%) who failed to achieve the expected rating (ie, rating ≤3.0) on every subcompetency. Conclusion Using an EM milestone rating threshold of 3.5, only 61.2% of physicians achieved the target ratings for program graduation; 4.0% of physicians failed to achieve the target rating on any milestone subcompetency; and 9.5% failed to achieve the target rating on half of the subcompetencies.
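A small sketch (hypothetical ratings and column names, not the study dataset) of how the 3.5 dichotomization and the resident-level counts described above could be computed:

import pandas as pd

# Hypothetical final-year milestone ratings: one row per resident, one column per subcompetency.
ratings = pd.DataFrame(
    [[4.0, 3.5, 4.0], [3.0, 3.5, 4.0], [3.0, 2.5, 3.0]],
    columns=["pc1", "mk1", "sbp1"],
    index=["res1", "res2", "res3"],
)

met = ratings >= 3.5                       # True where the target rating for graduation is met
all_met = met.all(axis=1)                  # achieved >=3.5 on every subcompetency
none_met = (~met).all(axis=1)              # below target on every subcompetency
half_or_more_missed = (~met).mean(axis=1) >= 0.5

print(f"met target on every subcompetency: {all_met.mean():.1%}")
print(f"missed target on every subcompetency: {none_met.mean():.1%}")
print(f"missed target on at least half of subcompetencies: {half_or_more_missed.mean():.1%}")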
Affiliation(s)
- Diane L. Gorgas: Ohio State University Wexner Medical Center, Department of Emergency Medicine, Columbus, Ohio
- Felix K. Ankel: Regions Hospital, Department of Emergency Medicine, St. Paul, Minnesota
- Wallace A. Carter: Weill Cornell Medicine, Department of Emergency Medicine, New York, New York
6. Miller B, Nowalk A, Ward C, Walker L, Dewar S. Pediatric residency milestone performance is not predicted by the United States Medical Licensing Examination Step 2 Clinical Knowledge. MedEdPublish 2024; 13:308. PMID: 39185002; PMCID: PMC11344197; DOI: 10.12688/mep.19873.2.
Abstract
Objectives This study aims to determine whether a correlation exists between pediatric residency applicants' quantitative scores on the United States Medical Licensing Examination (USMLE) Step 2 Clinical Knowledge examination and their subsequent performance in residency training as measured by the Accreditation Council for Graduate Medical Education Milestones, which are competency-based assessments intended to determine residents' readiness to work unsupervised after postgraduate training. No previous literature has correlated Step 2 Clinical Knowledge scores with pediatric residency performance assessed by Milestones. Methods In this retrospective cohort study, USMLE Step 2 Clinical Knowledge scores and Milestones data were collected from all 188 residents enrolled in a single categorical pediatric residency program from 2012 to 2017. Pearson correlation coefficients were calculated among the available test and Milestone data points to determine the correlation between test scores and clinical performance. Results No significant correlation was found between quantitative Step 2 Clinical Knowledge scores and average Milestones ratings (r = -0.1 for postgraduate year 1 residents and r = 0.25 for postgraduate year 3 residents). Conclusions These results demonstrate that Step 2 scores have no correlation with success in residency training as measured by progression along competency-based Milestones. This finding should limit the weight residency programs place on quantitative Step 2 scores when ranking applicants. Future studies should include multiple residency programs across multiple specialties to make these findings more generalizable.
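For illustration, the study's core statistic is a Pearson correlation; a minimal sketch with simulated data follows (the score ranges and the independence of the two variables are invented, not taken from the program's data):

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
step2 = rng.normal(245, 12, size=188)        # hypothetical Step 2 CK scores
milestones = rng.normal(3.0, 0.4, size=188)  # hypothetical average Milestone ratings, simulated independently

r, p = stats.pearsonr(step2, milestones)
print(f"r = {r:.2f}, p = {p:.3f}")  # near zero here because the simulated variables are unrelated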
Affiliation(s)
- Andrew Nowalk: University of Pittsburgh, Pittsburgh, Pennsylvania, USA
- Caroline Ward: University of Pittsburgh, Pittsburgh, Pennsylvania, USA
7. Norcini J, Grabovsky I, Barone MA, Anderson MB, Pandian RS, Mechaber AJ. The Associations Between United States Medical Licensing Examination Performance and Outcomes of Patient Care. Acad Med 2024; 99:325-330. PMID: 37816217; DOI: 10.1097/acm.0000000000005480.
Abstract
PURPOSE The United States Medical Licensing Examination (USMLE) comprises a series of assessments required for the licensure of U.S. MD-trained graduates as well as those who are trained internationally. Demonstration of a relationship between these examinations and outcomes of care is desirable for a process seeking to provide patients with safe and effective health care. METHOD This was a retrospective cohort study of 196,881 hospitalizations in Pennsylvania over a 3-year period (January 1, 2017 to December 31, 2019) for 5 primary diagnoses: heart failure, acute myocardial infarction, stroke, pneumonia, or chronic obstructive pulmonary disease. The 1,765 attending physicians for these hospitalizations self-identified as family physicians or general internists. A converted score based on USMLE Step 1, Step 2 Clinical Knowledge, and Step 3 scores was available, and the outcome measures were in-hospital mortality and log length of stay (LOS). The research team controlled for characteristics of patients, hospitals, and physicians. RESULTS For in-hospital mortality, the adjusted odds ratio was 0.94 (95% confidence interval [CI] = 0.90, 0.99; P < .02). Each standard deviation increase in the converted score was associated with a 5.51% reduction in the odds of in-hospital mortality. For log LOS, the adjusted estimate was 0.99 (95% CI = 0.98, 0.99; P < .001). Each standard deviation increase in the converted score was associated with a 1.34% reduction in log LOS. CONCLUSIONS Better provider USMLE performance was associated with lower in-hospital mortality and shorter log LOS for patients, although the magnitude of the latter is unlikely to be of practical significance. These findings add to the body of evidence that examines the validity of the USMLE licensure program.
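As a back-of-the-envelope check of the arithmetic linking the reported estimates to the quoted percentages (the extra decimal places below are assumed so the rounded values reproduce 5.51% and 1.34%; this is not the authors' model), a per-standard-deviation ratio R implies a (1 - R) x 100% reduction:

# Per-SD adjusted odds ratio for in-hospital mortality, reported as 0.94 above;
# extra precision is assumed here purely to recover the quoted percentage.
or_mortality = 0.9449
print(f"{(1 - or_mortality) * 100:.2f}% reduction in the odds per SD")  # ~5.51%

# Same logic for the length-of-stay estimate, reported as 0.99 above,
# if the adjusted estimate is interpreted multiplicatively (assumed precision again).
ratio_los = 0.9866
print(f"{(1 - ratio_los) * 100:.2f}% reduction per SD")  # ~1.34%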
8. Park YS, Ryan MS, Hogan SO, Berg K, Eickmeyer A, Fancher TL, Farnan J, Lawson L, Turner L, Westervelt M, Holmboe E, Santen SA. Transition to Residency: National Study of Factors Contributing to Variability in Learner Milestones Ratings in Emergency Medicine and Family Medicine. Acad Med 2023; 98:S123-S132. PMID: 37983405; DOI: 10.1097/acm.0000000000005366.
Abstract
PURPOSE The developmental trajectory of learning during residency may be attributed to multiple factors, including variation in individual trainee performance, program-level factors, graduating medical school effects, and the learning environment. Understanding the relationship between medical school and learner performance during residency is important in prioritizing undergraduate curricular strategies and educational approaches for an effective transition to residency and postgraduate training. This study explores factors contributing to longitudinal and developmental variability in resident Milestones ratings, focusing on variability due to graduating medical school, training program, and learners, using national cohort data from emergency medicine (EM) and family medicine (FM). METHOD Data from programs with residents entering training in July 2016 were used (EM: n=1,645 residents, 178 residency programs; FM: n=3,997 residents, 487 residency programs). Descriptive statistics were used to examine data trends. Cross-classified mixed-effects regression was used to decompose variance components in Milestones ratings. RESULTS During postgraduate year (PGY)-1, graduating medical school accounted for 5% and 6% of the variability in Milestones ratings, decreasing to 2% and 5% by PGY-3 for EM and FM, respectively. Residency program accounted for substantial variability during PGY-1 (EM=70%, FM=53%) that decreased by PGY-3 (EM=62%, FM=44%), with greater variability across the training period in patient care (PC), medical knowledge (MK), and systems-based practice (SBP). Learner variance increased significantly between PGY-1 (EM=23%, FM=34%) and PGY-3 (EM=34%, FM=44%), with greater variability in practice-based learning and improvement (PBLI), professionalism (PROF), and interpersonal communication skills (ICS). CONCLUSIONS The greatest variance in Milestones ratings was attributable to the residency program and, to a lesser degree, to learners and the graduating medical school. The dynamic impact of program-level factors on learners shifts during the first year and across the duration of residency training, highlighting the influence of curricular, instructional, and programmatic factors on resident performance throughout residency.
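To make the variance-decomposition idea concrete, a toy sketch follows (the component values are illustrative stand-ins loosely based on the EM PGY-1 figures above, not output from the authors' cross-classified model); each percentage is simply that component's share of the total variance:

# Hypothetical variance components for crossed random effects of residency program,
# learner, and graduating medical school, plus residual error.
components = {"program": 0.70, "learner": 0.23, "school": 0.05, "residual": 0.02}

total = sum(components.values())
for name, var in components.items():
    print(f"{name:>8}: {var / total:.0%} of total variance")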
Affiliation(s)
- Yoon Soo Park: head, Department of Medical Education, and The Ilene B. Harris Endowed Professor, University of Illinois College of Medicine, Chicago, Illinois; ORCID: https://orcid.org/0000-0001-8583-4335
- Michael S. Ryan: associate dean for assessment, evaluation, research, and innovation, and professor of pediatrics, University of Virginia School of Medicine, Charlottesville, Virginia; ORCID: https://orcid.org/0000-0003-3266-9289
- Sean O. Hogan: director of outcomes research and evaluation, Accreditation Council for Graduate Medical Education, Chicago, Illinois; ORCID: https://orcid.org/0009-0008-9006-1857
- Katherine Berg: associate dean of assessment, director, Rector Clinical Skills and Simulation Center, and professor of medicine, Sidney Kimmel Medical College, Philadelphia, Pennsylvania
- Adam Eickmeyer: director of medical school education, University of Chicago Pritzker School of Medicine, Chicago, Illinois, and PhD candidate, Maastricht University School of Health Professions Education, Maastricht, the Netherlands
- Tonya L. Fancher: associate dean for workforce innovation and education quality improvement and professor of medicine, University of California, Davis, School of Medicine, Sacramento, California
- Jeanne Farnan: associate dean for undergraduate medical education and professor of medicine, University of Chicago Pritzker School of Medicine, Chicago, Illinois
- Luan Lawson: senior associate dean of medical education and student affairs and professor of emergency medicine, Virginia Commonwealth University, Richmond, Virginia
- Laurah Turner: assistant dean for evaluation and assessment and assistant professor of medical education, University of Cincinnati College of Medicine, Cincinnati, Ohio
- Marjorie Westervelt: director of educational assessment, scholarship, improvement, and innovation, Office of Medical Education, University of California, Davis, School of Medicine, Sacramento, California
- Eric Holmboe: chief, research, milestones development and evaluation officer, Accreditation Council for Graduate Medical Education, Chicago, Illinois
- Sally A. Santen: senior associate dean, Virginia Commonwealth University, Richmond, Virginia, and professor of emergency medicine and medical education, University of Cincinnati, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0002-8327-8002
9. Santen SA, Hemphill RR. Embracing our responsibility to ensure trainee competency. AEM Educ Train 2023; 7:e10863. PMID: 37013132; PMCID: PMC10066499; DOI: 10.1002/aet2.10863.
Affiliation(s)
- Sally A. Santen: Department of Emergency Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio, USA; Virginia Commonwealth University School of Medicine, Richmond, Virginia, USA
10. Kinnear B, Schumacher DJ, Driessen EW, Varpio L. How argumentation theory can inform assessment validity: A critical review. Med Educ 2022; 56:1064-1075. PMID: 35851965; PMCID: PMC9796688; DOI: 10.1111/medu.14882.
Abstract
INTRODUCTION Many health professions education (HPE) scholars frame assessment validity as a form of argumentation in which interpretations and uses of assessment scores must be supported by evidence. However, what are purported to be validity arguments are often merely clusters of evidence without a guiding framework to evaluate, prioritise, or debate their merits. Argumentation theory is a field of study dedicated to understanding the production, analysis, and evaluation of arguments (spoken or written). The aim of this study is to describe argumentation theory, articulate the unique insights it can offer to HPE assessment, and present how different argumentation orientations can help reconceptualise the nature of validity in generative ways. METHODS The authors followed a five-step critical review process consisting of iterative cycles of focusing, searching, appraising, sampling, and analysing the argumentation theory literature. The authors generated and synthesised a corpus of manuscripts on the argumentation orientations deemed most applicable to HPE. RESULTS We selected two argumentation orientations that we considered particularly constructive for informing HPE assessment validity: new rhetoric and informal logic. In new rhetoric, the goal of argumentation is to persuade, with a focus on an audience's values and standards. Informal logic centres on identifying, structuring, and evaluating arguments in real-world settings, with a variety of normative standards used to evaluate argument validity. DISCUSSION Both new rhetoric and informal logic provide philosophical, theoretical, or practical groundings that can advance HPE validity argumentation. New rhetoric's foregrounding of the audience aligns with HPE's social imperative to be accountable to specific stakeholders such as the public and learners. Informal logic provides tools for identifying and structuring validity arguments for analysis and evaluation.
Affiliation(s)
- Benjamin Kinnear: Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, Ohio, USA; School of Health Professions Education (SHE), Maastricht University, Maastricht, The Netherlands
- Daniel J. Schumacher: Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, Ohio, USA
- Erik W. Driessen: School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
- Lara Varpio: Uniformed Services University of the Health Sciences, Bethesda, Maryland, USA
11. Read EK, Maxey C, Hecker KG. Longitudinal assessment of competency development at The Ohio State University using the competency-based veterinary education (CBVE) model. Front Vet Sci 2022; 9:1019305. PMID: 36387400; PMCID: PMC9642912; DOI: 10.3389/fvets.2022.1019305.
Abstract
With the development of the American Association of Veterinary Medical Colleges' Competency-Based Veterinary Education (CBVE) model, veterinary schools are reorganizing curricula and assessment guidelines, especially within the clinical rotation training elements. Specifically, programs are using both competencies and entrustable professional activities (EPAs) as opportunities for gathering information about student development within and across clinical rotations. However, what evidence exists that use of the central tenets of the CBVE model (competency framework, milestones, and EPAs) improves assessment practices and captures reliable and valid data to track the competency development of students as they progress through their clinical year? Here, we report validity evidence to support the use of scores from in-training evaluation report forms (ITERs) and workplace-based assessments of EPAs to evaluate competency progression within and across the domains described in the CBVE during the final-year clinical training period of The Ohio State University College of Veterinary Medicine (OSU-CVM) program. The ITER, used at the conclusion of each rotation, was modified to include the CBVE competencies, which were assessed by identifying each student's stage of development on a series of descriptive milestones (from pre-novice to competent). Workplace-based assessments containing entrustment scales were used to assess EPAs from the CBVE model within each clinical rotation. Competency progression and entrustment scores were evaluated on each of the 31 rotations offered, and high-stakes decisions regarding student performance were made through a collective review of all ITERs and EPAs recorded for each learner across each semester and the entire year. Results from the class of 2021, collected on approximately 190 students across 31 rotations, are reported, comprising more than 55,299 competency assessments with milestone placements and 2,799 completed EPAs. Approximately 10% of the class was identified for remediation and received additional coaching support. Data collected longitudinally through the ITER on milestones provide initial validity evidence to support using the scores in higher-stakes contexts, such as identifying students for remediation and determining whether students have met the requirements to successfully complete the program. Data collected on entrustment scores did not, however, support such decision making. Implications are discussed.
Affiliation(s)
- Emma K. Read: College of Veterinary Medicine, The Ohio State University, Columbus, OH, United States
- Connor Maxey: Faculty of Veterinary Medicine, University of Calgary, Calgary, AB, Canada
- Kent G. Hecker: Faculty of Veterinary Medicine, University of Calgary, Calgary, AB, Canada; International Council for Veterinary Assessment, Bismarck, ND, United States
12. Nagasaki K, Nishizaki Y, Nojima M, Shimizu T, Konishi R, Okubo T, Yamamoto Y, Morishima R, Kobayashi H, Tokuda Y. Validation of the General Medicine In-Training Examination Using the Professional and Linguistic Assessments Board Examination Among Postgraduate Residents in Japan. Int J Gen Med 2021; 14:6487-6495. PMID: 34675616; PMCID: PMC8504475; DOI: 10.2147/ijgm.s331173.
Abstract
Purpose In Japan, the General Medicine In-Training Examination (GM-ITE) was developed by a non-profit organization in 2012. The GM-ITE aims to assess residents' general clinical knowledge and to improve training programs; however, it has not been sufficiently validated and is not used for high-stakes decision-making. This study examined the association between the GM-ITE and another test measure, the Professional and Linguistic Assessments Board (PLAB) 1 examination. Methods Ninety-seven residents who completed the GM-ITE in fiscal year 2019 were recruited and took the PLAB 1 examination in Japanese. The association between the two tests was assessed using Pearson product-moment correlation, and a discrimination index was also calculated for each question. Results A total of 91 residents at 17 teaching hospitals were included in the analysis, of whom 69 (75.8%) were women and 59 (64.8%) were postgraduate second-year residents. All participants were affiliated with community hospitals. A positive correlation was demonstrated between the GM-ITE and PLAB scores (r = 0.58, p < 0.001). The correlations between the PLAB score and scores in the GM-ITE categories were as follows: symptomatology/clinical reasoning (r = 0.54, p < 0.001), physical examination/procedure (r = 0.38, p < 0.001), medical interview/professionalism (r = 0.25, p < 0.001), and disease knowledge (r = 0.36, p < 0.001). The mean discrimination index per question was higher for the GM-ITE (mean ± SD, 0.23 ± 0.15) than for the PLAB (0.16 ± 0.16; p = 0.004). Conclusion This study provides incremental validity evidence for the GM-ITE as an assessment of clinical knowledge acquisition. The results indicate that the GM-ITE can be widely used to improve resident education in Japan.
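For readers unfamiliar with item discrimination indexes, a brief sketch using the classic upper/lower 27% method follows (simulated responses; the GM-ITE's exact formula may differ from this illustration):

import numpy as np

rng = np.random.default_rng(3)
n_examinees, n_items = 91, 60
ability = rng.normal(size=n_examinees)
# Hypothetical 0/1 item responses: probability of a correct answer rises with examinee ability.
responses = (rng.random((n_examinees, n_items)) < 1 / (1 + np.exp(-ability[:, None]))).astype(int)

total = responses.sum(axis=1)
cut = n_examinees * 27 // 100              # size of the classic upper and lower 27% groups
order = np.argsort(total)
lower, upper = order[:cut], order[-cut:]

# Discrimination index per item: proportion correct in the upper group minus the lower group.
discrimination = responses[upper].mean(axis=0) - responses[lower].mean(axis=0)
print(f"mean discrimination index = {discrimination.mean():.2f}")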
Affiliation(s)
- Kazuya Nagasaki, Hiroyuki Kobayashi: Department of Internal Medicine, Mito Kyodo General Hospital, University of Tsukuba, Ibaraki, Japan
- Yuji Nishizaki: Medical Technology Innovation Center, Juntendo University, Tokyo, Japan; Division of Medical Education, Juntendo University School of Medicine, Tokyo, Japan
- Masanori Nojima: Center for Translational Research, The Institute of Medical Science, The University of Tokyo, Tokyo, Japan
- Taro Shimizu: Department of Diagnostic and Generalist Medicine, Dokkyo Medical University Hospital, Tochigi, Japan
- Ryota Konishi: Education Adviser, Japan Organization of Occupational Health and Safety, Kanagawa, Japan
- Tomoya Okubo: Research Division, The National Center for University Entrance Examinations, Tokyo, Japan
- Yu Yamamoto: Division of General Medicine, Center for Community Medicine, Jichi Medical University School of Medicine, Tochigi, Japan
- Ryo Morishima: Department of Neurology, Tokyo Metropolitan Neurological Hospital, Tokyo, Japan
13. Tamakuwala S, Dean J, Kramer KJ, Shafi A, Ottum S, George J, Kaur S, Chao CR, Recanati MA. Potential Impact of Pass/Fail Scores on USMLE Step 1: Predictors of Excellence in Obstetrics and Gynecology Residency Training. J Med Educ Curric Dev 2021; 8:23821205211037444. PMID: 34805529; PMCID: PMC8597065; DOI: 10.1177/23821205211037444.
Abstract
AIM The study aims to determine which resident applicant metrics are most predictive of academic and clinical performance, as measured by Council on Resident Education in Obstetrics and Gynecology (CREOG) examination scores and Accreditation Council for Graduate Medical Education (ACGME) clinical performance (Milestones), in the aftermath of the United States Medical Licensing Examination (USMLE) Step 1 becoming a pass/fail examination. METHODS In this retrospective study, electronic and paper documents were collected for Wayne State University Obstetrics and Gynecology residents matriculating over a 5-year period ending July 2018. USMLE scores, clerkship grades, and the wording of letters of recommendation and the Medical Student Performance Evaluation (MSPE) were extracted from the Electronic Residency Application Service (ERAS) and scored numerically. Semiannual Milestone evaluations and yearly CREOG scores were used as markers of resident performance. Statistical analysis of residents (n = 75) was performed using R and SPSS, and significance was set at P < .05. RESULTS Mean USMLE score correlated with CREOG performance, and of the 3 Steps, Step 1 had the tightest association. MSPE and class percentile also correlated with CREOG scores. Clerkship grades and letters of recommendation had no correlation with resident performance. Of all the metrics provided by ERAS, none taken alone was as useful as the Step 1 score for predicting performance in residency. Regression modeling demonstrated that combining Step 2 scores with MSPE wording restored the predictive ability lost when Step 1 scores are not available. CONCLUSIONS The change of USMLE Step 1 to pass/fail may alter resident selection strategies. Other objective markers are needed to evaluate an applicant's future performance in residency.
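A minimal sketch (entirely hypothetical data and coding scheme, and a simplification of the modeling the authors performed in R/SPSS) of regressing CREOG scores on Step 2 scores plus a numerically scored MSPE wording variable:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 75
step2 = rng.normal(240, 15, size=n)               # hypothetical USMLE Step 2 scores
mspe = rng.integers(1, 6, size=n).astype(float)   # hypothetical 1-5 numeric coding of MSPE wording
# Simulated CREOG scores built from both predictors plus noise.
creog = 180 + 0.3 * step2 + 4 * mspe + rng.normal(0, 10, size=n)

X = sm.add_constant(np.column_stack([step2, mspe]))
model = sm.OLS(creog, X).fit()
print(model.params)     # intercept, Step 2 coefficient, MSPE coefficient
print(model.rsquared)   # share of CREOG variance explained by the combination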
Affiliation(s)
- Adib Shafi: Wayne State University, Detroit, MI, USA