1. Anderson LM, Rowland K, Edberg D, Wright KM, Park YS, Tekian A. An Analysis of Written and Numeric Scores in End-of-Rotation Forms from Three Residency Programs. Perspectives on Medical Education 2023; 12:497-506. [PMID: 37929204] [PMCID: PMC10624145] [DOI: 10.5334/pme.41]
Abstract
Introduction End-of-Rotation Forms (EORFs) assess resident progress in graduate medical education and are a major component of Clinical Competency Committee (CCC) discussions. Single-institution studies suggest EORFs can detect deficiencies, but both grades and comments skew positive. In this study, we sought to determine whether the EORFs from three programs, spanning multiple specialties and institutions, produced useful information for residents, program directors, and CCCs. Methods Evaluations from three programs were included (Program 1, Institution A, Internal Medicine: n = 38; Program 2, Institution A, Anesthesia: n = 9; Program 3, Institution B, Anesthesia: n = 11). Two independent researchers coded each written comment for relevance (specificity and actionability) and orientation (praising or critical) using a standardized rubric. Numeric scores were analyzed using descriptive statistics. Results 4869 evaluations were collected from the three programs. Of the 77,434 discrete numeric scores, 691 (0.89%) were considered "below expected level." Of the 3767 written comments, 2683 (71.2%) were scored as irrelevant, while 3217 (85.4%) were scored as positive and 550 (14.6%) as critical. When the codes were combined, 63.2% (n = 2379) of comments were both positive and irrelevant, while 6.5% (n = 246) were both critical and relevant. Discussion Fewer than 1% of numeric scores indicated below-expected performance, and more than 70% of comments were scored as irrelevant. Critical, relevant comments were the least frequently observed, a pattern consistent across all three programs. The low rate of constructive feedback and the high rate of irrelevant comments are inadequate for a CCC to make informed decisions. The consistency of these findings across programs, specialties, and institutions suggests both local and systemic changes should be considered.
Affiliation(s)
- Lauren M. Anderson, Kathleen Rowland, Deborah Edberg
- Department of Family and Preventive Medicine, Rush University, Chicago, Illinois, US
- Katherine M. Wright
- Department of Family & Community Medicine, Northwestern University Feinberg School of Medicine, Chicago, Illinois, US
- Yoon Soo Park, Ara Tekian
- Department of Medical Education, University of Illinois Chicago, Chicago, Illinois, US
2. Cheong CWS, Quah ELY, Chua KZY, Lim WQ, Toh RQE, Chiang CLL, Ng CWH, Lim EG, Teo YH, Kow CS, Vijayprasanth R, Liang ZJ, Tan YKI, Tan JRM, Chiam M, Lee ASI, Ong YT, Chin AMC, Wijaya L, Fong W, Mason S, Krishna LKR. Post graduate remediation programs in medicine: a scoping review. BMC Medical Education 2022; 22:294. [PMID: 35443679] [PMCID: PMC9020048] [DOI: 10.1186/s12909-022-03278-x]
Abstract
BACKGROUND Recognizing that physicians may struggle to achieve the expected knowledge, skills, attitudes and/or conduct at one or more stages during their training has highlighted the importance of the 'deliberate practice of improving performance through practising beyond one's comfort level under guidance'. However, variations in physician, program, contextual, and healthcare and educational systems complicate efforts to create a consistent approach to remediation. Balancing the inevitable disparities in approaches and settings with the need for continuity and effective oversight of the remediation process, as well as the context- and population-specific nature of remediation, this review will scrutinise the remediation of physicians in training to better guide the design, structuring and oversight of new remediation programs. METHODS Krishna's Systematic Evidence Based Approach is adopted to guide this Systematic Scoping Review (SSR in SEBA) to enhance the transparency and reproducibility of this review. A structured search was carried out for articles, published between 1st January 1990 and 31st December 2021 in the PubMed, Scopus, ERIC, Google Scholar, PsycINFO, ASSIA, HMIC, DARE and Web of Science databases, on remediation programs for licenced physicians who have completed their pre-registration postings and who are in training positions. The included articles were concurrently thematically and content analysed using SEBA's Split Approach. Similarities in the identified themes and categories were combined in the Jigsaw Perspective and compared with the tabulated summaries of included articles in the Funnelling Process to create the domains that guide the discussion. RESULTS The research team retrieved 5512 abstracts, reviewed 304 full-text articles and included 101 articles. The domains identified were characteristics, indications, frameworks, domains, enablers and barriers, and unique features of remediation in licenced physicians in training programs.
CONCLUSION Building upon our findings and guided by Hauer et al.'s approach to remediation and Taylor and Hamdy's Multi-theories Model, we proffer a theoretically grounded 7-stage evidence-based remediation framework to enhance understanding of remediation in licenced physicians in training programs. We believe this framework can guide program design and reframe remediation's role as an integral part of training programs and a source of support and professional, academic, research, interprofessional and personal development.
Affiliation(s)
- Clarissa Wei Shuen Cheong, Elaine Li Ying Quah, Keith Zi Yuan Chua, Wei Qiang Lim, Rachelle Qi En Toh, Christine Li Ling Chiang, Caleb Wei Hao Ng, Elijah Gin Lim, Yao Hao Teo, Cheryl Shumin Kow, Raveendran Vijayprasanth, Zhen Jonathan Liang, Yih Kiat Isac Tan, Javier Rui Ming Tan, Yun Ting Ong
- Yong Loo Lin School of Medicine, National University of Singapore, NUHS Tower Block Level 11, 1E Kent Ridge Road, Singapore 119228, Singapore
- Division of Supportive and Palliative Care, National Cancer Centre Singapore, 11 Hospital Crescent, Singapore 169610, Singapore
- Min Chiam, Alexia Sze Inn Lee
- Division of Cancer Education, National Cancer Centre Singapore, 11 Hospital Crescent, Singapore 169610, Singapore
- Annelissa Mien Chew Chin
- Medical Library, National University of Singapore Libraries, Blk MD6, Centre for Translational Medicine, 14 Medical Dr, #05-01, Singapore 117599, Singapore
- Limin Wijaya
- Duke-NUS Medical School, 8 College Road, Singapore 169857, Singapore
- Department of Infectious Diseases, Singapore General Hospital, Outram Road, Singapore 169608, Singapore
- Warren Fong
- Yong Loo Lin School of Medicine, National University of Singapore, NUHS Tower Block Level 11, 1E Kent Ridge Road, Singapore 119228, Singapore
- Duke-NUS Medical School, 8 College Road, Singapore 169857, Singapore
- Department of Rheumatology and Immunology, Singapore General Hospital, 16 College Road, Block 6 Level 9, Singapore 169854, Singapore
- Stephen Mason
- Palliative Care Institute Liverpool, Academic Palliative & End of Life Care Centre, Cancer Research Centre, University of Liverpool, 200 London Road, Liverpool L3 9TA, UK
- Lalit Kumar Radha Krishna
- Yong Loo Lin School of Medicine, National University of Singapore, NUHS Tower Block Level 11, 1E Kent Ridge Road, Singapore 119228, Singapore
- Division of Supportive and Palliative Care, National Cancer Centre Singapore, 11 Hospital Crescent, Singapore 169610, Singapore
- Division of Cancer Education, National Cancer Centre Singapore, 11 Hospital Crescent, Singapore 169610, Singapore
- Duke-NUS Medical School, 8 College Road, Singapore 169857, Singapore
- Palliative Care Institute Liverpool, Academic Palliative & End of Life Care Centre, Cancer Research Centre, University of Liverpool, 200 London Road, Liverpool L3 9TA, UK
- Centre for Biomedical Ethics, National University of Singapore, Blk MD11, 10 Medical Drive, #02-03, Singapore 117597, Singapore
- PalC, The Palliative Care Centre for Excellence in Research and Education, c/o Dover Park Hospice, 10 Jalan Tan Tock Seng, Singapore 308436, Singapore
3. Folk D, Ryckeley C, Nguyen M, Essig JJ, Beck Dallaghan GL, Coe C. Evaluating Family Medicine Resident Narrative Comments Using the RIME Scheme. Journal of Medical Education and Curricular Development 2022; 9:23821205221090162. [PMID: 35356418] [PMCID: PMC8958670] [DOI: 10.1177/23821205221090162]
Abstract
BACKGROUND In 2013, the Accreditation Council for Graduate Medical Education (ACGME) launched the Next Accreditation System, which required explicit documentation of trainee competence in six domains. To capture narrative comments, the University of North Carolina Family Medicine Residency Program developed a mobile application to document real-time observations. OBJECTIVE The objective of this work was to assess whether the Reporter, Interpreter, Manager, Expert (RIME) framework could be applied to the narrative comments in order to convey a degree of competency. METHODS From August to December 2020, seven individuals analyzed narrative comments of four family medicine residents. The narrative comments were collected from July to December 2019. Each individual applied the RIME framework to the comments, and the team met to discuss. Comments on which at least 5 of the 7 individuals agreed were not discussed further. All other comments were discussed until consensus was achieved. RESULTS 102 unique comments were assessed. Of those comments, 25 (25.5%) met the threshold for assessor agreement after independent review. Group discussion about discrepancies led to consensus about the appropriate classification for 92 (90.2%). General comments on performance were difficult to fit into the RIME framework. CONCLUSIONS Application of the RIME framework to narrative comments may add insight into trainee progress. Further faculty development is needed to ensure comments have the discrete elements needed to apply the RIME framework and contribute to an overall evaluation of competence.
Affiliation(s)
- Catherine Coe
- University of North Carolina School of Medicine, Chapel Hill, NC
4. Kelleher M, Kinnear B, Sall DR, Weber DE, DeCoursey B, Nelson J, Klein M, Warm EJ, Schumacher DJ. Warnings in early narrative assessment that might predict performance in residency: signal from an internal medicine residency program. Perspectives on Medical Education 2021; 10:334-340. [PMID: 34476730] [PMCID: PMC8633188] [DOI: 10.1007/s40037-021-00681-w]
Abstract
INTRODUCTION Narrative assessment data are valuable in understanding struggles in resident performance. However, it remains unknown which themes in narrative data occurring early in training may indicate a higher likelihood of struggles later in training, which would allow programs to intervene sooner. METHODS Using learning analytics, we identified 26 internal medicine residents across three cohorts who were below expected entrustment during training. We compiled all narrative data from the first 6 months of training for these residents, as well as for 13 typically performing residents for comparison. Narrative data for all 39 residents were blinded during the initial coding phase of an inductive thematic analysis. RESULTS Many similarities were identified between the two cohorts. Codes that differed between typical and lower entrusted residents were grouped into six themes: three explicit/manifest and three implicit/latent. The explicit/manifest themes focused on specific aspects of resident performance, with assessors describing 1) gaps in attention to detail, 2) communication deficits with patients, and 3) difficulty recognizing the "big picture" in patient care. The three implicit/latent themes focused on how the narrative data were written: 1) feedback described as a deficiency rather than an opportunity to improve, 2) normative comparisons identifying a resident as being behind their peers, and 3) warnings of possible risk to patient care. DISCUSSION Clinical competency committees (CCCs) usually rely on accumulated data and trends. Using these themes while reviewing narrative comments may help CCCs recognize struggling residents earlier and better allocate resources to support residents' development.
Affiliation(s)
- Matthew Kelleher, Benjamin Kinnear, Danielle E. Weber, Bailey DeCoursey, Jennifer Nelson, Melissa Klein, Daniel J. Schumacher
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Dana R. Sall
- HonorHealth Internal Medicine Residency Program, Scottsdale, Arizona, and University of Arizona College of Medicine, Phoenix, AZ, USA
- Eric J. Warm
- Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, OH, USA
5. Hartman ND, Manthey DE, Strowd LC, Potisek NM, Vallevand A, Tooze J, Goforth J, McDonough K, Askew KL. Effect of Perceived Level of Interaction on Faculty Evaluations of 3rd Year Medical Students. Medical Science Educator 2021; 31:1327-1332. [PMID: 34457975] [PMCID: PMC8368453] [DOI: 10.1007/s40670-021-01307-w]
Abstract
INTRODUCTION Several factors are known to affect the way clinical performance evaluations (CPEs) of medical students are completed by supervising physicians. We sought to explore the effect of faculty-perceived "level of interaction" (LOI) on these evaluations. METHODS Our third-year CPE requires evaluators to identify their perceived LOI with each student as low, moderate, or high. We examined CPEs completed during the 2018-2019 academic year for differences in (1) clinical and professionalism ratings, (2) quality of narrative comments, (3) quantity of narrative comments, and (4) percentage of evaluation questions left unrated. RESULTS A total of 3682 CPEs were included in the analysis. ANOVA revealed statistically significant differences in clinical ratings by LOI (p ≤ .001), with mean ratings from faculty with a high LOI significantly higher than those from faculty with a moderate or low LOI (p ≤ .001). Chi-squared analyses demonstrated differences by faculty LOI in whether questions were left unrated (p ≤ .001), the quantity of narrative comments (p ≤ .001), and the specificity of narrative comments (p ≤ .001). CONCLUSIONS Faculty who perceived a higher LOI were more likely to assign that student higher ratings, to complete more of the clinical evaluation, and to provide more specific, higher-quality narrative comments. SUPPLEMENTARY INFORMATION The online version contains supplementary material available at 10.1007/s40670-021-01307-w.
Affiliation(s)
- Nicholas D. Hartman, David E. Manthey, Lindsay C. Strowd, Nicholas M. Potisek, Andrea Vallevand, Janet Tooze, Jon Goforth, Kimberly McDonough, Kim L. Askew
- Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC 27157, USA
6. Ginsburg S, Watling CJ, Schumacher DJ, Gingerich A, Hatala R. Numbers Encapsulate, Words Elaborate: Toward the Best Use of Comments for Assessment and Feedback on Entrustment Ratings. Academic Medicine: Journal of the Association of American Medical Colleges 2021; 96:S81-S86. [PMID: 34183607] [DOI: 10.1097/acm.0000000000004089]
Abstract
The adoption of entrustment ratings in medical education is based on a seemingly simple premise: to align workplace-based supervision with resident assessment. Yet it has been difficult to operationalize this concept. Entrustment rating forms combine numeric scales with comments and are embedded in a programmatic assessment framework, which encourages the collection of a large quantity of data. The implicit assumption that more is better has led to an untamable volume of data that competency committees must grapple with. In this article, the authors explore the roles of numbers and words on entrustment rating forms, examining the intended and optimal use(s) of each, with particular attention to the words. They also unpack the problematic issue of dual-purposing words for both assessment and feedback. Words have enormous potential to elaborate, to contextualize, and to instruct; to realize this potential, educators must be crystal clear about their use. The authors set forth a number of possible ways to reconcile these tensions by more explicitly aligning words to purpose. For example, educators could focus written comments solely on assessment; create assessment encounters distinct from feedback encounters; or use different words collected from the same encounter to serve distinct feedback and assessment purposes. Finally, the authors address the tyranny of documentation created by programmatic assessment and urge caution in yielding to the temptation to reduce words to numbers to make them manageable. Instead, they encourage educators to preserve some educational encounters purely for feedback, and to consider that not all words need to become data.
Affiliation(s)
- Shiphra Ginsburg
- S. Ginsburg is professor of medicine, Department of Medicine, Sinai Health System and Faculty of Medicine, University of Toronto, scientist, Wilson Centre for Research in Education, University of Toronto, Toronto, Ontario, Canada, and Canada Research Chair in Health Professions Education; ORCID: http://orcid.org/0000-0002-4595-6650
- Christopher J Watling
- C.J. Watling is professor and director, Centre for Education Research and Innovation, Schulich School of Medicine & Dentistry, Western University, London, Ontario, Canada; ORCID: https://orcid.org/0000-0001-9686-795X
- Daniel J Schumacher
- D.J. Schumacher is associate professor of pediatrics, Cincinnati Children's Hospital Medical Center and University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0001-5507-8452
- Andrea Gingerich
- A. Gingerich is assistant professor, Northern Medical Program, University of Northern British Columbia, Prince George, British Columbia, Canada; ORCID: https://orcid.org/0000-0001-5765-3975
- Rose Hatala
- R. Hatala is professor, Department of Medicine, and director, Clinical Educator Fellowship, Center for Health Education Scholarship, University of British Columbia, Vancouver, British Columbia, Canada; ORCID: https://orcid.org/0000-0003-0521-2590
7. To H, Cargill A, Tobin S, Nestel D. Remediation for surgical trainees: recommendations from a narrative review. ANZ J Surg 2021; 91:1117-1124. [PMID: 33538072] [DOI: 10.1111/ans.16637]
Abstract
BACKGROUND Remediation involves formalized support for surgical trainees with significant underperformance to help them return to expected standards. There is a need to understand the current evidence on remediation for surgical trainees to inform practice and justify the investment of resources. METHODS Following the principles of a systematic review, we conducted a narrative analysis to make recommendations for the remediation of underperforming surgical trainees. RESULTS From a review of 55 articles on the remediation of trainees in medical and surgical sub-specialities, we have identified system- and process-level recommendations. Remediation is reported as long-term, complex and resource-intensive. Establishing a defined and standardized remediation framework enables the co-ordination of multi-modal interventions. System-level recommendations aim to consolidate protocols via developing better assessment, intervention and re-evaluation modalities whilst also strengthening support for supervisors conducting the remediation. Process-level recommendations should be tailored to the specific needs of each trainee, aiming to be proactive with interventions within a programmatic framework. Regular reassessment is required, and long-term follow-up shows that remediation efforts are often successful. CONCLUSION While remediation within a programmatic framework is complex, it is often a successful approach to returning surgical trainees to their expected standard. Future directions involve applying learning theories, encouraging research, and developing integrated, collaborative protocols and support to synergize efforts.
Affiliation(s)
- Henry To
- The University of Melbourne, Melbourne, Victoria, Australia
- Ashleigh Cargill
- Department of Surgery, St Vincent's Hospital, Melbourne, Victoria, Australia
- Stephen Tobin
- School of Medicine, Western Sydney University, Sydney, New South Wales, Australia
- Debra Nestel
- Department of Surgery (Austin), The University of Melbourne, Melbourne, Victoria, Australia
- Monash Institute for Health and Clinical Education, Monash University, Melbourne, Victoria, Australia
8. Ginsburg S, Kogan JR, Gingerich A, Lynch M, Watling CJ. Taken Out of Context: Hazards in the Interpretation of Written Assessment Comments. Academic Medicine: Journal of the Association of American Medical Colleges 2020; 95:1082-1088. [PMID: 31651432] [DOI: 10.1097/acm.0000000000003047]
Abstract
PURPOSE Written comments are increasingly valued for assessment; however, a culture of politeness and the conflation of assessment with feedback lead to ambiguity. Interpretation requires reading between the lines, which is untenable with large volumes of qualitative data. For computer analytics to help with interpreting comments, the factors influencing interpretation must be understood. METHOD Using constructivist grounded theory, the authors interviewed 17 experienced internal medicine faculty at 4 institutions between March and July 2017, asking them to interpret and comment on 2 sets of words: those that might be viewed as "red flags" (e.g., good, improving) and those that might be viewed as signaling feedback (e.g., should, try). Analysis focused on how participants ascribed meaning to words. RESULTS Participants struggled to attach meaning to words presented acontextually. Four aspects of context were deemed necessary for interpretation: (1) the writer; (2) the intended and potential audiences; (3) the intended purpose(s) for the comments, including assessment, feedback, and the creation of a permanent record; and (4) the culture, including norms around assessment language. These contextual factors are not always apparent; readers must balance the inevitable need to interpret others' language with the potential hazards of second-guessing intent. CONCLUSIONS Comments are written for a variety of intended purposes and audiences, sometimes simultaneously; this reality creates dilemmas for faculty attempting to interpret these comments, with or without computer assistance. Attention to context is essential to reduce interpretive uncertainty and ensure that written comments can achieve their potential to enhance both assessment and feedback.
Affiliation(s)
- Shiphra Ginsburg
- S. Ginsburg is professor of medicine, Department of Medicine, Faculty of Medicine, University of Toronto, scientist, Wilson Centre for Research in Education, University Health Network, University of Toronto, Toronto, Ontario, Canada, and Canada Research Chair in Health Professions Education; ORCID: http://orcid.org/0000-0002-4595-6650
- J.R. Kogan
- J.R. Kogan is professor of medicine, Department of Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania
- Andrea Gingerich
- A. Gingerich is assistant professor, Northern Medical Program, University of Northern British Columbia, Prince George, British Columbia, Canada; ORCID: http://orcid.org/0000-0001-5765-3975
- M. Lynch
- M. Lynch is postdoctoral fellow, Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada
- Christopher J Watling
- C.J. Watling is professor, Department of Clinical Neurological Sciences, scientist, Centre for Education Research and Innovation, and associate dean of postgraduate medical education, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada; ORCID: http://orcid.org/0000-0001-9686-795X
9. Diller D, Cooper S, Jain A, Lam CN, Riddell J. Which Emergency Medicine Milestone Sub-competencies are Identified Through Narrative Assessments? West J Emerg Med 2019; 21:173-179. [PMID: 31913841] [PMCID: PMC6948702] [DOI: 10.5811/westjem.2019.12.44468]
Abstract
Introduction Evaluators use assessment data to make judgments on resident performance within the Accreditation Council for Graduate Medical Education (ACGME) milestones framework. While workplace-based narrative assessments (WBNA) offer advantages over rating scales, validity evidence for their use in assessing the milestone sub-competencies is lacking. This study aimed to determine the frequency of sub-competencies assessed through WBNAs in an emergency medicine (EM) residency program. Methods We performed a retrospective analysis of WBNAs of postgraduate year (PGY) 2-4 residents. We established a shared mental model by reading and discussing the milestones framework, and created a guide for coding WBNAs to the milestone sub-competencies in an iterative process. Once inter-rater reliability was satisfactory, raters coded each WBNA to the 23 EM milestone sub-competencies. Results We analyzed 2517 WBNAs. On average, 2.04 sub-competencies were assessed per WBNA. The sub-competencies most frequently identified were multitasking, medical knowledge, practice-based performance improvement, patient-centered communication, and team management. The sub-competencies least frequently identified were pharmacotherapy, airway management, anesthesia and acute pain management, goal-directed focused ultrasound, wound management, and vascular access. Overall, the frequency with which WBNAs assessed individual sub-competencies was low, with 14 of the 23 sub-competencies assessed in fewer than 5% of WBNAs. Conclusion WBNAs identify few milestone sub-competencies. Faculty assessed similar sub-competencies related to interpersonal and communication skills, practice-based learning and improvement, and medical knowledge, while neglecting sub-competencies related to patient care and procedural skills. These findings can help shape faculty development programs designed to improve assessments of specific workplace behaviors and provide more robust data for the summative assessment of residents.
Affiliation(s)
- David Diller: LAC+USC Medical Center, Keck School of Medicine of the University of Southern California, Department of Emergency Medicine, Los Angeles, California
- Shannon Cooper: Henry Ford Allegiance Health, Department of Emergency Medicine, Jackson, Michigan
- Aarti Jain: LAC+USC Medical Center, Keck School of Medicine of the University of Southern California, Department of Emergency Medicine, Los Angeles, California
- Chun Nok Lam: LAC+USC Medical Center, Keck School of Medicine of the University of Southern California, Department of Emergency Medicine, Los Angeles, California
- Jeff Riddell: LAC+USC Medical Center, Keck School of Medicine of the University of Southern California, Department of Emergency Medicine, Los Angeles, California

10
Tekian A, Park YS, Tilton S, Prunty PF, Abasolo E, Zar F, Cook DA. Competencies and Feedback on Internal Medicine Residents' End-of-Rotation Assessments Over Time: Qualitative and Quantitative Analyses. Academic Medicine 2019; 94:1961-1969. [PMID: 31169541] [PMCID: PMC6882536] [DOI: 10.1097/acm.0000000000002821]
Abstract
PURPOSE To examine how qualitative narrative comments and quantitative ratings from end-of-rotation assessments change for a cohort of residents from entry to graduation, and explore associations between comments and ratings. METHOD The authors obtained end-of-rotation quantitative ratings and narrative comments for 1 cohort of internal medicine residents at the University of Illinois at Chicago College of Medicine from July 2013-June 2016. They inductively identified themes in comments, coded orientation (praising/critical) and relevance (specificity and actionability) of feedback, examined associations between codes and ratings, and evaluated changes in themes and ratings across years. RESULTS Data comprised 1,869 assessments (828 comments) on 33 residents. Five themes aligned with ACGME competencies (interpersonal and communication skills, professionalism, medical knowledge, patient care, and systems-based practice), and 3 did not (personal attributes, summative judgment, and comparison to training level). Work ethic was the most frequent subtheme. Comments emphasized medical knowledge more in year 1 and focused more on autonomy, leadership, and teaching in later years. Most comments (714/828 [86%]) contained high praise, and 412/828 (50%) were very relevant. Average ratings correlated positively with orientation (β = 0.46, P < .001) and negatively with relevance (β = -0.09, P = .01). Ratings increased significantly with each training year (year 1, mean [standard deviation]: 5.31 [0.59]; year 2: 5.58 [0.47]; year 3: 5.86 [0.43]; P < .001). CONCLUSIONS Narrative comments address resident attributes beyond the ACGME competencies and change as residents progress. Lower quantitative ratings are associated with more specific and actionable feedback.
Affiliation(s)
- Ara Tekian: professor and associate dean for international affairs, Department of Medical Education, University of Illinois at Chicago College of Medicine, Chicago, Illinois; ORCID: https://orcid.org/0000-0002-9252-1588
- Yoon Soo Park: associate professor, Department of Medical Education, University of Illinois at Chicago College of Medicine, Chicago, Illinois; ORCID: http://orcid.org/0000-0001-8583-4335
- Sarette Tilton: PharmD candidate, University of Illinois at Chicago College of Pharmacy, Chicago, Illinois
- Patrick F. Prunty: PharmD candidate, University of Illinois at Chicago College of Pharmacy, Chicago, Illinois
- Eric Abasolo: PharmD candidate, University of Illinois at Chicago College of Pharmacy, Chicago, Illinois
- Fred Zar: professor and program director, Department of Medicine, University of Illinois at Chicago College of Medicine, Chicago, Illinois
- David A. Cook: professor of medicine and medical education and associate director, Office of Applied Scholarship and Education Science, and consultant, Division of General Internal Medicine, Mayo Clinic College of Medicine, Rochester, Minnesota; ORCID: https://orcid.org/0000-0003-2383-4633

11
Chou CL, Kalet A, Costa MJ, Cleland J, Winston K. Guidelines: The dos, don'ts and don't knows of remediation in medical education. Perspectives on Medical Education 2019; 8:322-338. [PMID: 31696439] [PMCID: PMC6904411] [DOI: 10.1007/s40037-019-00544-5]
Abstract
INTRODUCTION Two developing forces have achieved prominence in medical education: the advent of competency-based assessments and a growing commitment to expand access to medicine for a broader range of learners with a wider array of preparation. Remediation is intended to support all learners to achieve sufficient competence. Therefore, it is timely to provide practical guidelines for remediation in medical education that clarify best practices, practices to avoid, and areas requiring further research, in order to guide work with both individual struggling learners and development of training program policies. METHODS Collectively, we generated an initial list of Do's, Don'ts, and Don't Knows for remediation in medical education, which was then iteratively refined through discussions and additional evidence-gathering. The final guidelines were then graded for the strength of the evidence by consensus. RESULTS We present 26 guidelines: two groupings of Do's (systems-level interventions and recommendations for individual learners), along with short lists of Don'ts and Don't Knows, and our interpretation of the strength of current evidence for each guideline. CONCLUSIONS Remediation is a high-stakes, highly complex process involving learners, faculty, systems, and societal factors. Our synthesis resulted in a list of guidelines that summarize the current state of educational theory and empirical evidence that can improve remediation processes at individual and institutional levels. Important unanswered questions remain; ongoing research can further improve remediation practices to ensure the appropriate support for learners, institutions, and society.
Affiliation(s)
- Calvin L Chou: Department of Medicine, University of California and Veterans Affairs Healthcare System, San Francisco, CA, USA
- Adina Kalet: Department of Medicine, New York University School of Medicine, New York, NY, USA
- Manuel Joao Costa: Life and Health Sciences Research Institute, School of Medicine, University of Minho, Minho, Portugal
- Jennifer Cleland: Centre for Healthcare Education Research and Innovation (CHERI), University of Aberdeen, Aberdeen, UK
- Kalman Winston: Department of Public Health and Primary Care, Cambridge University, Cambridge, UK

12
Tremblay G, Carmichael PH, Maziade J, Grégoire M. Detection of Residents With Progress Issues Using a Keyword-Specific Algorithm. J Grad Med Educ 2019; 11:656-662. [PMID: 31871565] [PMCID: PMC6919172] [DOI: 10.4300/jgme-d-19-00386.1]
Abstract
BACKGROUND The literature suggests that specific keywords included in summative rotation assessments might be an early indicator of abnormal progress or failure. OBJECTIVE This study aims to determine the possible relationship between specific keywords on in-training evaluation reports (ITERs) and subsequent abnormal progress or failure. The goal is to create a functional algorithm to identify residents at risk of failure. METHODS A database of all ITERs from all residents training in accredited programs at Université Laval between 2001 and 2013 was created. An instructional designer reviewed all ITERs and proposed terms associated with reinforcing and underperformance feedback. An algorithm based on these keywords was constructed by recursive partitioning using classification and regression tree methods. The developed algorithm was tuned to achieve 100% sensitivity while maximizing specificity. RESULTS There were 41,618 ITERs for 3,292 registered residents. Residents with failure to progress were detected for family medicine (6%, 67 of 1,129) and 36 other specialties (4%, 78 of 2,163), while the positive predictive values were 23.3% and 23.4%, respectively. The low positive predictive value may reflect residents improving their performance after receiving feedback, or a reluctance by supervisors to ascribe a "fail" or "in difficulty" score on the ITERs. CONCLUSIONS Classification and regression trees may be helpful to identify pertinent keywords and create an algorithm, which may be implemented in an electronic assessment system to detect residents at risk of poor performance.
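The flagging-and-tuning idea this abstract describes can be illustrated with a toy sketch. Everything below (the keyword list, the sample comments, and the labels) is invented for illustration; the authors' actual algorithm was built by recursive partitioning over keywords mined from 41,618 real ITERs, not a fixed hand-written list.

```python
# Hypothetical sketch (not the authors' algorithm): flag ITER narratives
# containing underperformance keywords, then score the detector's
# sensitivity, specificity, and positive predictive value (PPV).

UNDERPERFORMANCE_KEYWORDS = {"struggles", "below expectations", "unsafe",
                             "needs close supervision", "marginal"}

def flag_iter(comment: str) -> bool:
    """Return True if any underperformance keyword appears in the comment."""
    text = comment.lower()
    return any(kw in text for kw in UNDERPERFORMANCE_KEYWORDS)

def detector_metrics(comments, labels):
    """labels[i] is True when resident i truly had progress issues."""
    tp = fp = fn = tn = 0
    for comment, truth in zip(comments, labels):
        flagged = flag_iter(comment)
        if flagged and truth:
            tp += 1
        elif flagged:
            fp += 1
        elif truth:
            fn += 1
        else:
            tn += 1
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    ppv = tp / (tp + fp) if tp + fp else 0.0
    return sensitivity, specificity, ppv

comments = ["Excellent clinical reasoning.",
            "Struggles with time management; needs close supervision.",
            "Performance marginal this block.",
            "Solid presentation skills, reads around cases."]
labels = [False, True, False, False]  # only resident 2 truly failed to progress

sens, spec, ppv = detector_metrics(comments, labels)
print(sens, spec, ppv)
```

Even this toy reproduces the qualitative pattern reported above: tuning for full sensitivity (every true case flagged) drags the PPV down, because benign comments that happen to contain a trigger word are also flagged.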
13
Lessing JN, Bryan S, Johnson C, Keating J, Guerrasio J. Junior doctor remediation: an international reflection. Med J Aust 2019; 211:507-508.e1. [DOI: 10.5694/mja2.50422]
Affiliation(s)
- Sheila Bryan: Postgraduate Medical Council of Victoria, Melbourne, VIC; Monash Health, Melbourne, VIC

14
Abstract
Early identification and successful remediation of unachieved emergency medicine (EM) milestones are challenging for program directors. Residents who fail to achieve milestones in the expected time frame will have varied educational needs to correct course, depending on the year of training as well as the specific deficiencies to resolve. Experts from the Council of Residency Directors in Emergency Medicine (CORD-EM) Remediation Task Force (RTF) collaborated to create tools for identifying and remediating residents with deficiencies in patient care milestones (PCMs).
15
Wilbur K. Does faculty development influence the quality of in-training evaluation reports in pharmacy? BMC Medical Education 2017; 17:222. [PMID: 29157239] [PMCID: PMC5697106] [DOI: 10.1186/s12909-017-1054-5]
Abstract
BACKGROUND In-training evaluation reports (ITERs) of student workplace-based learning are completed by clinical supervisors across various health disciplines. However, outside of medicine, the quality of submitted workplace-based assessments is largely uninvestigated. This study assessed the quality of ITERs in pharmacy and whether clinical supervisors could be trained to complete higher-quality reports. METHODS A random sample of ITERs submitted in a pharmacy program during 2013-2014 was evaluated. These ITERs served as a historical control (control group 1) for comparison with ITERs submitted in 2015-2016 by clinical supervisors who participated in an interactive faculty development workshop (intervention group) and those who did not (control group 2). Two trained independent raters scored the ITERs using a previously validated nine-item scale assessing report quality, the Completed Clinical Evaluation Report Rating (CCERR). The scoring scale for each item is anchored at 1 ("not at all") and 5 ("exemplary"), with 3 categorized as "acceptable". RESULTS The mean CCERR score for reports completed after the workshop (22.9 ± 3.39) did not significantly improve when compared to prospective control group 2 (22.7 ± 3.63, p = 0.84) and was worse than historical control group 1 (37.9 ± 8.21, p = 0.001). Mean item scores for individual CCERR items were below acceptable thresholds for 5 of the 9 domains in control group 1, including supervisor-documented evidence of specific examples to clearly explain weaknesses and concrete recommendations for student improvement. Mean item scores were below acceptable thresholds for 6 and 7 of the 9 domains in control group 2 and the intervention group, respectively. CONCLUSIONS This study is the first using CCERR to evaluate ITER quality outside of medicine.
Findings demonstrate low baseline CCERR scores in a pharmacy program not demonstrably changed by a faculty development workshop, but strategies are identified to augment future rater training.
Affiliation(s)
- Kerry Wilbur: College of Pharmacy, Qatar University, PO Box 2713, Doha, Qatar

16
Bartels J, Mooney CJ, Stone RT. Numerical versus narrative: A comparison between methods to measure medical student performance during clinical clerkships. Medical Teacher 2017; 39:1154-1158. [PMID: 28845738] [DOI: 10.1080/0142159x.2017.1368467]
Abstract
BACKGROUND Medical school evaluations typically rely on both language-based narrative descriptions and psychometrically converted numeric scores to convey performance to the grading committee. We evaluated the inter-rater reliability and correlation of numeric versus narrative evaluations for students on their Neurology Clerkship. DESIGN/METHODS 50 Neurology Clerkship in-training evaluation reports completed by residents and faculty members at the University of Rochester School of Medicine were dissected into narrative and numeric components. Five clerkship grading committee members retrospectively assigned new narrative scores (NNS) while blinded to the original numeric scores (ONS). We calculated intra-class correlation coefficients (ICC) and their associated confidence intervals for the ONS and the NNS, as well as the correlation between ONS and NNS. RESULTS The ICC was greater for the NNS (ICC = .88; 95% CI = .70-.94) than for the ONS (ICC = .62; 95% CI = .40-.77), and the Pearson correlation coefficient showed that the ONS and NNS were highly correlated (r = .81). CONCLUSIONS Narrative evaluations converted by a small group of experienced graders are at least as reliable as numeric scoring by individual evaluators. This could allow evaluators to focus their efforts on creating richer narratives of greater value to trainees.
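As a note on the statistics this abstract names, the Pearson correlation between paired numeric and narrative-derived scores can be computed as follows. The score pairs below are hypothetical, invented purely to show the calculation; they are not the study's data.

```python
# Illustrative only: Pearson correlation between original numeric scores
# (ONS) and committee-derived narrative scores (NNS). Values are invented.
from math import sqrt

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

ons = [3.0, 3.5, 4.0, 4.5, 5.0]   # hypothetical original numeric scores
nns = [2.8, 3.6, 3.9, 4.4, 4.9]   # hypothetical new narrative scores
print(round(pearson_r(ons, nns), 3))
```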
Affiliation(s)
- Josef Bartels: Family Medicine, WWAMI Region Practice & Research Network, Boise, ID, USA
- Christopher John Mooney: Office of Medical Education, University of Rochester School of Medicine and Dentistry, Rochester, NY, USA
- Robert Thompson Stone: Neurology, University of Rochester School of Medicine and Dentistry, Rochester, NY, USA

17
Ginsburg S, van der Vleuten CPM, Eva KW. The Hidden Value of Narrative Comments for Assessment: A Quantitative Reliability Analysis of Qualitative Data. Academic Medicine 2017; 92:1617-1621. [PMID: 28403004] [DOI: 10.1097/acm.0000000000001669]
Abstract
PURPOSE In-training evaluation reports (ITERs) are ubiquitous in internal medicine (IM) residency. Written comments can provide a rich data source, yet are often overlooked. This study determined the reliability of using variable amounts of commentary to discriminate between residents. METHOD ITER comments from two cohorts of PGY-1s in IM at the University of Toronto (graduating 2010 and 2011; n = 46-48) were put into sets containing 15 to 16 residents. Parallel sets were created: one with comments from the full year and one with comments from only the first three assessments. Each set was rank-ordered by four internists external to the program between April 2014 and May 2015 (n = 24). Generalizability analyses and a decision study were performed. RESULTS For the full year of comments, reliability coefficients averaged across four rankers were G = 0.85 and G = 0.91 for the two cohorts. For a single ranker, G = 0.60 and G = 0.73. Using only the first three assessments, reliabilities remained high at G = 0.66 and G = 0.60 for a single ranker. In a decision study, if two internists ranked the first three assessments, reliability would be G = 0.80 and G = 0.75 for the two cohorts. CONCLUSIONS Using written comments to discriminate between residents can be extremely reliable even after only several reports are collected. This suggests a way to identify residents early on who may require attention. These findings contribute evidence to support the validity argument for using qualitative data for assessment.
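The decision-study projection quoted in this abstract is consistent with the Spearman-Brown prophecy formula for averaging over rankers. The abstract does not state the formula explicitly, so treating it as the projection method is an assumption, but a minimal sketch under that assumption reproduces the two-ranker figures from the single-ranker coefficients:

```python
# A decision ("D") study projects reliability for k raters from a
# single-rater coefficient; for averaging over one rater facet this is
# the Spearman-Brown prophecy formula.

def spearman_brown(g_single: float, k: int) -> float:
    """Project the reliability of k averaged raters from one rater's G."""
    return k * g_single / (1 + (k - 1) * g_single)

for g1 in (0.66, 0.60):  # single-ranker G for the two cohorts (first 3 assessments)
    print(round(spearman_brown(g1, 2), 2))
```

With G = 0.66 and 0.60 for a single ranker, two rankers project to roughly 0.80 and 0.75, matching the decision-study values reported above.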
Affiliation(s)
- Shiphra Ginsburg: professor, Department of Medicine, and scientist, Wilson Centre for Research in Education, Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- C.P.M. van der Vleuten: professor of education, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, the Netherlands
- K.W. Eva: associate director and senior scientist, Centre for Health Education Scholarship, and professor and director of educational research and scholarship, Faculty of Medicine, University of British Columbia, Vancouver, British Columbia, Canada

18
Hauer KE, Nishimura H, Dubon D, Teherani A, Boscardin C. Competency assessment form to improve feedback. Clinical Teacher 2017; 15:472-477. [PMID: 29045060] [DOI: 10.1111/tct.12726]
Abstract
BACKGROUND In-training evaluation reports are a commonly used assessment method for clinical learners that can characterise the development of competence in essential domains of practice. Strategies to increase the usefulness and specificity of written narrative comments about learner performance in these reports are needed to guide their learning. Soliciting narrative comments by competency domain from supervising doctors on in-training evaluation reports could improve the quality of written feedback to students. METHODS This is a pre-post study examining narrative comments derived from assessments of core clerkship students by faculty members and resident supervisors in seven clerkships using two assessment forms in academic years 2013/14 (pre; two comment fields: summative and constructive) and 2014/15 (post; seven comment fields: six competency domains plus constructive comments). Using a purposive sample of 60 students based on overall clerkship performance, we conducted content analysis of written comments to compare comment quality based on word count, competencies addressed, and reinforcing or constructive content. Differences between the two forms across these three components of quality were compared using Student's t-tests. RESULTS The revised form elicited more narrative comments in all seven clerkships, with more competencies addressed, but led to a decrease in the proportion of constructive comments about the students' performances. DISCUSSION Structural changes to a medical student assessment form to elicit narrative comments by competency improved some measures of the quality of narrative comments provided by faculty members and residents. Additional study is needed to determine how learners use this information to improve their clinical practice.
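The comparison method this abstract names, Student's t-test, can be sketched as follows. The word counts below are invented for illustration (they are not the study's data), and the pooled-variance form shown is one standard variant of the test; the abstract does not specify which variant was used.

```python
# Illustrative sketch: pooled-variance two-sample Student's t statistic,
# e.g. comparing words per comment under the old vs. revised form.
# Sample values are invented, not study data.
from math import sqrt

def students_t(a, b):
    """Pooled-variance two-sample t statistic (equal variances assumed)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    ssa = sum((x - ma) ** 2 for x in a)
    ssb = sum((x - mb) ** 2 for x in b)
    sp2 = (ssa + ssb) / (na + nb - 2)          # pooled variance
    return (ma - mb) / sqrt(sp2 * (1 / na + 1 / nb))

old_form_words = [12, 18, 15, 10, 20]   # hypothetical words per comment (pre)
new_form_words = [25, 30, 22, 28, 35]   # hypothetical words per comment (post)
t = students_t(new_form_words, old_form_words)
print(round(t, 2))
```

A large positive t here would support the abstract's finding that the revised form elicited longer, more numerous comments; the p-value would then come from the t distribution with na + nb - 2 degrees of freedom.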
Affiliation(s)
- Karen E Hauer: University of California at San Francisco, San Francisco, California, USA
- Holly Nishimura: University of California at San Francisco, San Francisco, California, USA
- Diego Dubon: University of California at Berkeley, Berkeley, California, USA
- Arianne Teherani: University of California at San Francisco, San Francisco, California, USA
- Christy Boscardin: University of California at San Francisco, San Francisco, California, USA

19
Academic Remediation: Why Early Identification and Intervention Matters. Acad Radiol 2017; 24:730-733. [PMID: 28343750] [DOI: 10.1016/j.acra.2016.12.022]
Abstract
At our institution, we have developed a remediation team of strong, focused experts who help make the diagnosis for struggling learners and then coach them on their milestone deficits. It is key for all program directors to recognize struggling residents, because early recognition and intervention give the resident the best chance of success.
20
Hatala R, Sawatsky AP, Dudek N, Ginsburg S, Cook DA. Using In-Training Evaluation Report (ITER) Qualitative Comments to Assess Medical Students and Residents: A Systematic Review. Academic Medicine 2017; 92:868-879. [PMID: 28557953] [DOI: 10.1097/acm.0000000000001506]
Abstract
PURPOSE In-training evaluation reports (ITERs) constitute an integral component of medical student and postgraduate physician trainee (resident) assessment. ITER narrative comments have received less attention than the numeric scores. The authors sought both to determine what validity evidence informs the use of narrative comments from ITERs for assessing medical students and residents and to identify evidence gaps. METHOD Reviewers searched for relevant English-language studies in MEDLINE, EMBASE, Scopus, and ERIC (last search June 5, 2015), and in reference lists and author files. They included all original studies that evaluated ITERs for qualitative assessment of medical students and residents. Working in duplicate, they selected articles for inclusion, evaluated quality, and abstracted information on validity evidence using Kane's framework (inferences of scoring, generalization, extrapolation, and implications). RESULTS Of 777 potential articles, 22 met inclusion criteria. The scoring inference is supported by studies showing that rich narratives are possible, that changing the prompt can stimulate more robust narratives, and that comments vary by context. Generalization is supported by studies showing that narratives reach thematic saturation and that analysts make consistent judgments. Extrapolation is supported by favorable relationships between ITER narratives and numeric scores from ITERs and non-ITER performance measures, and by studies confirming that narratives reflect constructs deemed important in clinical work. Evidence supporting implications is scant. CONCLUSIONS The use of ITER narratives for trainee assessment is generally supported, except that evidence is lacking for implications and decisions. Future research should seek to confirm implicit assumptions and evaluate the impact of decisions.
Affiliation(s)
- Rose Hatala: associate professor of medicine, Faculty of Medicine, and director, Clinical Educator Fellowship, Centre for Health Education Scholarship, University of British Columbia, Vancouver, British Columbia, Canada
- A.P. Sawatsky: assistant professor of medicine and senior associate consultant, Division of General Internal Medicine, Mayo Clinic College of Medicine, Rochester, Minnesota
- N. Dudek: associate professor, Faculty of Medicine, University of Ottawa, Ottawa, Ontario, Canada
- S. Ginsburg: professor, Department of Medicine, Faculty of Medicine, University of Toronto, scientist, Wilson Centre for Research in Education, University Health Network/University of Toronto, and staff physician, Mount Sinai Hospital, Toronto, Ontario, Canada
- D.A. Cook: professor of medicine and medical education, associate director, Mayo Clinic Online Learning, and consultant, Division of General Internal Medicine, Mayo Clinic College of Medicine, Rochester, Minnesota

21
Ginsburg S, van der Vleuten C, Eva KW, Lingard L. Hedging to save face: a linguistic analysis of written comments on in-training evaluation reports. Advances in Health Sciences Education 2016; 21:175-88. [PMID: 26184115] [DOI: 10.1007/s10459-015-9622-0]
Abstract
Written comments on residents' evaluations can be useful, yet the literature suggests that the language used by assessors is often vague and indirect. The branch of linguistics called pragmatics argues that much of our day-to-day language is not meant to be interpreted literally. Within pragmatics, the theory of 'politeness' suggests that non-literal language and other strategies are employed in order to 'save face'. We conducted a rigorous, in-depth analysis of a set of written in-training evaluation report (ITER) comments using Brown and Levinson's influential theory of 'politeness' to shed light on the phenomenon of vague language use in assessment. We coded text from 637 comment boxes from first-year residents in internal medicine at one institution according to politeness theory. Non-literal language use was common, and 'hedging', a key politeness strategy, was pervasive in comments about both high- and low-rated residents, suggesting that faculty may be working to 'save face' for themselves and their residents. Hedging and other politeness strategies are considered essential to smooth social functioning; their prevalence in our ITERs may reflect the difficult social context in which written assessments occur. This research raises questions regarding the 'optimal' construction of written comments by faculty.
Affiliation(s)
- Shiphra Ginsburg: Department of Medicine and Wilson Centre for Research in Education, University of Toronto, Toronto, ON, Canada; Mount Sinai Hospital, 600 University Ave, Ste. 433, Toronto, ON, M5G1X5, Canada
- Cees van der Vleuten: School for Health Professions Education, Maastricht University, Maastricht, Netherlands
- Kevin W Eva: Faculty of Medicine, Centre for Health Education Scholarship, University of British Columbia, Vancouver, BC, Canada
- Lorelei Lingard: Centre for Education Research and Innovation, Schulich School of Medicine and Dentistry, Western University, London, ON, Canada

22
Affiliation(s)
- Nicholas D. Lawson, MD (corresponding author): University of Kansas School of Medicine–Wichita, 1010 N Kansas, Wichita, KS 67214,

23
Cook DA, Brydges R, Ginsburg S, Hatala R. A contemporary approach to validity arguments: a practical guide to Kane's framework. Medical Education 2015; 49:560-75. [PMID: 25989405] [DOI: 10.1111/medu.12678]
Abstract
CONTEXT Assessment is central to medical education and the validation of assessments is vital to their use. Earlier validity frameworks suffer from a multiplicity of types of validity or failure to prioritise among sources of validity evidence. Kane's framework addresses both concerns by emphasising key inferences as the assessment progresses from a single observation to a final decision. Evidence evaluating these inferences is planned and presented as a validity argument. OBJECTIVES We aim to offer a practical introduction to the key concepts of Kane's framework that educators will find accessible and applicable to a wide range of assessment tools and activities. RESULTS All assessments are ultimately intended to facilitate a defensible decision about the person being assessed. Validation is the process of collecting and interpreting evidence to support that decision. Rigorous validation involves articulating the claims and assumptions associated with the proposed decision (the interpretation/use argument), empirically testing these assumptions, and organising evidence into a coherent validity argument. Kane identifies four inferences in the validity argument: Scoring (translating an observation into one or more scores); Generalisation (using the score[s] as a reflection of performance in a test setting); Extrapolation (using the score[s] as a reflection of real-world performance), and Implications (applying the score[s] to inform a decision or action). Evidence should be collected to support each of these inferences and should focus on the most questionable assumptions in the chain of inference. Key assumptions (and needed evidence) vary depending on the assessment's intended use or associated decision. Kane's framework applies to quantitative and qualitative assessments, and to individual tests and programmes of assessment. 
CONCLUSIONS Validation focuses on evaluating the key claims, assumptions and inferences that link assessment scores with their intended interpretations and uses. The Implications and associated decisions are the most important inferences in the validity argument.
Affiliation(s)
- David A Cook: Mayo Clinic Online Learning, Mayo Clinic College of Medicine, Rochester, Minnesota, USA; Division of General Internal Medicine, Mayo Clinic, Rochester, Minnesota, USA
- Ryan Brydges: Department of Medicine, University of Toronto, Toronto, Ontario, Canada; Wilson Centre, University Health Network, Toronto, Ontario, Canada
- Shiphra Ginsburg: Department of Medicine, University of Toronto, Toronto, Ontario, Canada; Wilson Centre, University Health Network, Toronto, Ontario, Canada
- Rose Hatala: Department of Medicine, University of British Columbia, Vancouver, British Columbia, Canada

24
Ginsburg S, Regehr G, Lingard L, Eva KW. Reading between the lines: faculty interpretations of narrative evaluation comments. Medical Education 2015; 49:296-306. [PMID: 25693989] [DOI: 10.1111/medu.12637]
Abstract
OBJECTIVES Narrative comments are used routinely in many forms of rater-based assessment. Interpretation can be difficult as a result of idiosyncratic writing styles and disconnects between literal and intended meanings. Our purpose was to explore how faculty attendings interpret and make sense of the narrative comments on residents' in-training evaluation reports (ITERs) and to determine the language cues that appear to be influential in generating and justifying their interpretations.
METHODS A group of 24 internal medicine (IM) faculty attendings each categorised a subgroup of postgraduate year 1 (PGY1) and PGY2 IM residents based solely on ITER comments. They were then interviewed to determine how they had made their judgements. Constant comparative techniques from constructivist grounded theory were used to analyse the interviews and develop a framework to help in understanding how ITER language was interpreted.
RESULTS The overarching theme of 'reading between the lines' explained how participants read and interpreted ITER comments. Scanning for 'flags' was part of this strategy. Participants also described specific factors that shaped their judgements, including: consistency of comments; competency domain; specificity; quantity, and context (evaluator identity, rotation type and timing). There were several perceived purposes of ITER comments, including feedback to the resident, summative assessment and other more socially complex objectives.
CONCLUSIONS Participants made inferences based on what they thought evaluators intended by their comments and seemed to share an understanding of a 'hidden code'. Participants' ability to 'read between the lines' explains how comments can be effectively used to categorise and rank-order residents. However, it also suggests a mechanism whereby variable interpretations can arise. Our findings suggest that current assumptions about the purpose, value and effectiveness of ITER comments may be incomplete. Linguistic pragmatics and politeness theories may shed light on why such an implicit code might evolve and be maintained in clinical evaluation.
Affiliation(s)
- Shiphra Ginsburg
- Department of Medicine, University of Toronto, Toronto, Ontario, Canada

25
Hanson JL, Rosenberg AA, Lane JL. Narrative descriptions should replace grades and numerical ratings for clinical performance in medical education in the United States. Front Psychol 2013; 4:668. [PMID: 24348433 PMCID: PMC3836691 DOI: 10.3389/fpsyg.2013.00668] [Citation(s) in RCA: 64] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2013] [Accepted: 09/05/2013] [Indexed: 11/13/2022] Open
Abstract
Background: In medical education, evaluation of clinical performance is based almost universally on rating scales for defined aspects of performance and scores on examinations and checklists. Unfortunately, scores and grades do not capture progress and competence among learners in the complex tasks and roles required to practice medicine. While the literature suggests serious problems with the validity and reliability of ratings of clinical performance based on numerical scores, the critical issue is not that judgments about what is observed vary from rater to rater but that these judgments are lost when translated into numbers on a scale. As the Next Accreditation System of the Accreditation Council for Graduate Medical Education (ACGME) takes effect, medical educators have an opportunity to create new processes of evaluation to document and facilitate progress of medical learners in the required areas of competence.
Proposal and initial experience: Narrative descriptions of learner performance in the clinical environment, gathered using a framework for observation that builds a shared understanding of competence among the faculty, promise to provide meaningful qualitative data closely linked to the work of physicians. With descriptions grouped in categories and matched to milestones, core faculty can place each learner along the milestones' continua of progress. This provides the foundation for meaningful feedback to facilitate the progress of each learner as well as documentation of progress toward competence.
Implications: This narrative evaluation system addresses educational needs as well as the goals of the Next Accreditation System for explicitly documented progress. Educators at other levels of education and in other professions experience similar needs for authentic assessment and, with meaningful frameworks that describe roles and tasks, may also find useful a system built on descriptions of learner performance in actual work settings.
Conclusions: We must place medical learning and assessment in the contexts and domains in which learners do clinical work. The approach proposed here for gathering qualitative performance data in different contexts and domains is one step along the road to moving learners toward competence and mastery.
Affiliation(s)
- Janice L Hanson
- Department of Pediatrics, University of Colorado School of Medicine Aurora, CO, USA
- Adam A Rosenberg
- Department of Pediatrics, University of Colorado School of Medicine Aurora, CO, USA
- J Lindsey Lane
- Department of Pediatrics, University of Colorado School of Medicine Aurora, CO, USA

26
Guerrasio J, Weissberg M. Unsigned: why anonymous evaluations in clinical settings are counterproductive. MEDICAL EDUCATION 2012; 46:928-930. [PMID: 22989124 DOI: 10.1111/j.1365-2923.2012.04323.x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
Affiliation(s)
- Jeannette Guerrasio
- Department of General Internal Medicine, University of Colorado Denver, Aurora, Colorado, USA.