1. Shavoun AH, Mirzazadeh A, Kashani H, Raeeskarami SR, Gandomkar R. Translation and psychometric evaluation of composite feedback-seeking behavior questionnaire among Iranian medical residents. BMC Medical Education 2024; 24:594. PMID: 38811982; PMCID: PMC11137997; DOI: 10.1186/s12909-024-05586-w
Abstract
BACKGROUND: Proactively seeking feedback from clinical supervisors, peers or other healthcare professionals is a valuable mechanism for residents to obtain useful information about, and improve, their performance in clinical settings. Given the scarcity of studies investigating the psychometric properties of feedback-seeking instruments in medical education, this study aimed to translate the feedback-seeking behavior scales (frequency of feedback-seeking, motives of feedback-seeking, and promotion of feedback-seeking by supervisors) into Persian and to evaluate the psychometric properties of the composite questionnaire among medical residents at Tehran University of Medical Sciences in Iran.

METHODS: In this cross-sectional study, the feedback-seeking behavior scales were translated using the forward-backward method, and their face and content validity were assessed by 10 medical residents and 18 experts. Test-retest reliability was evaluated by administering the questionnaire to 20 medical residents on two occasions. A convenience sample of 548 residents completed the questionnaire. Construct validity was examined by exploratory and confirmatory factor analysis, and concurrent validity was determined by Pearson's correlation coefficient.

RESULTS: Content validity assessment showed that the CVR (0.66 to 0.99) and CVI (0.82 to 0.99) values for items and the S-CVI values (0.88 to 0.99) for scales were satisfactory. Exploratory and confirmatory factor analysis confirmed models with eight items and two factors (explaining 70.98% of the total variance) for the frequency of feedback-seeking scale, 16 items and four factors (explaining 73.22% of the total variance) for the motives of feedback-seeking scale, and four items and one factor (explaining 69.46% of the total variance) for promotion of feedback-seeking by supervisors. AVE values greater than 0.5 and discriminant validity correlations significantly less than 1.0 demonstrated that the composite feedback-seeking behavior questionnaire had a favorable fit, that the items fit their respective factors, and that the latent variables were distinct. We found positive and significant correlations between the three scales and their subscales.

CONCLUSION: The results of the present study supported the validity and reliability of the Persian composite feedback-seeking behavior questionnaire for assessing feedback-seeking behaviors in medical residents. Applying the questionnaire in residency programs may enhance the quality of clinical education.
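The CVR and CVI figures reported above follow standard content-validity formulas: Lawshe's content validity ratio and the proportion-based content validity index. A minimal sketch of those computations, using illustrative panel sizes rather than the study's raw ratings:

```python
def content_validity_ratio(n_essential: int, n_experts: int) -> float:
    """Lawshe's CVR: (n_e - N/2) / (N/2), where n_e is the number of
    experts rating the item 'essential'. Ranges from -1 to 1."""
    half = n_experts / 2
    return (n_essential - half) / half

def item_cvi(n_relevant: int, n_experts: int) -> float:
    """Item-level CVI: proportion of experts rating the item relevant
    (typically 3 or 4 on a 4-point relevance scale)."""
    return n_relevant / n_experts

def scale_cvi_avg(item_cvis: list[float]) -> float:
    """S-CVI/Ave: mean of the item-level CVIs for a scale."""
    return sum(item_cvis) / len(item_cvis)

# Illustrative: a panel of 18 experts (as in the study), 17 of whom
# rate a given item essential and 16 of whom rate it relevant.
print(round(content_validity_ratio(17, 18), 2))  # → 0.89
print(round(item_cvi(16, 18), 2))                # → 0.89
print(round(scale_cvi_avg([0.89, 0.94, 1.0]), 2))
```

A common rule of thumb is to retain items with I-CVI ≥ 0.78 and CVR above Lawshe's critical value for the panel size, which is consistent with the 0.66–0.99 range the study reports as satisfactory.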
Affiliation(s)
- Amin Hoseini Shavoun: Department of Medical Education, School of Medicine, Tehran University of Medical Sciences, No. 57, Hojatdoust St., Keshavarz Blvd, Tehran, Iran
- Azim Mirzazadeh: Department of Internal Medicine, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
- Homa Kashani: Department of Research Methodology and Data Analysis, Institute for Environmental Research, Tehran University of Medical Sciences, Tehran, Iran
- Seyed Reza Raeeskarami: Department of Pediatrics, Imam Khomeini Hospital Complex, Tehran University of Medical Sciences, Tehran, Iran
- Roghayeh Gandomkar: Department of Medical Education, School of Medicine, Tehran University of Medical Sciences, No. 57, Hojatdoust St., Keshavarz Blvd, Tehran, Iran; Health Professions Education Research Center, Tehran University of Medical Sciences, Tehran, Iran
2. Tischendorf JS, Krecko LK, Filipiak R, Osman F, Zelenski AB. Gender influences resident physicians' perception of an employee-to-employee recognition program: a mixed methods study. BMC Medical Education 2024; 24:109. PMID: 38302913; PMCID: PMC10835820; DOI: 10.1186/s12909-024-05083-0
Abstract
BACKGROUND: Burnout is prevalent in medical training. While some institutions have implemented employee-to-employee recognition programs to promote wellness, it is not known how such programs are perceived by resident physicians, or whether the experience differs among residents of different genders.

METHODS: We used convergent mixed methods to characterize how residents in internal medicine (IM), pediatrics, and general surgery programs experience our employee-to-employee recognition ("Hi-5") program. We collected Hi-5s received by residents in these programs from January 1, 2021 to December 31, 2021 and coded them for recipient discipline, sex, and PGY level, and for sender discipline and professional role. We conducted virtual focus groups with residents in each training program.

MAIN MEASURES AND APPROACH: We compared Hi-5 receipt between male and female residents, both overall and by individual profession. We submitted focus group transcripts to content analysis, with codes generated iteratively and emergent themes identified through consensus coding.

RESULTS: Over a 12-month period, residents received 382 Hi-5s. There was no significant difference in receipt of Hi-5s between male and female residents. Five IM, 3 surgery, and 12 pediatric residents participated in focus groups. Residents felt Hi-5s were useful for interprofessional feedback and for mitigating burnout. Residents who identified as women shared concerns about differing expectations of professional behavior and communication based on gender, a fear of backlash when behavior does not align with gender stereotypes, and professional misidentification.

CONCLUSIONS: The "Hi-5" program is valuable for interprofessional feedback and promotion of well-being but is experienced differently by men and women residents. This limitation of employee-to-employee recognition should be considered when designing equitable programming to promote well-being and recognition.
Affiliation(s)
- Jessica S Tischendorf: Division of Infectious Disease, Department of Medicine, University of Wisconsin School of Medicine and Public Health, Medical Foundation Centennial Building Room 5263, 1685 Highland Avenue, Madison, WI, 53705, USA
- Laura K Krecko: Department of Surgery, University of Wisconsin School of Medicine and Public Health, 600 Highland Avenue, Madison, WI, 53792, USA
- Rachel Filipiak: Division of Infectious Disease, Department of Medicine, University of Wisconsin School of Medicine and Public Health, Medical Foundation Centennial Building Room 5263, 1685 Highland Avenue, Madison, WI, 53705, USA
- Fauzia Osman: Department of Medicine, University of Wisconsin School of Medicine and Public Health, 1685 Highland Avenue, Madison, WI, 53705, USA
- Amy B Zelenski: Department of Medicine, University of Wisconsin School of Medicine and Public Health, 1685 Highland Avenue, Madison, WI, 53705, USA
3. Van Ostaeyen S, De Langhe L, De Clercq O, Embo M, Schellens T, Valcke M. Automating the Identification of Feedback Quality Criteria and the CanMEDS Roles in Written Feedback Comments Using Natural Language Processing. Perspectives on Medical Education 2023; 12:540-549. PMID: 38144670; PMCID: PMC10742245; DOI: 10.5334/pme.1056
Abstract
Introduction: Manually analysing the quality of large amounts of written feedback comments is time-consuming and demands extensive resources and human effort. Therefore, this study explored whether a state-of-the-art large language model (LLM) could be fine-tuned to identify the presence of four literature-derived feedback quality criteria (performance, judgment, elaboration and improvement) and the seven CanMEDS roles (Medical Expert, Communicator, Collaborator, Leader, Health Advocate, Scholar and Professional) in written feedback comments.

Methods: A set of 2,349 labelled feedback comments from five healthcare educational programs in Flanders, Belgium (specialistic medicine, general practice, midwifery, speech therapy and occupational therapy) was split into 12,452 sentences to create two datasets for the machine learning analysis. The Dutch BERT models BERTje and RobBERT were used to train four multiclass multilabel classification models: two to identify the four feedback quality criteria and two to identify the seven CanMEDS roles.

Results: The classification models trained with BERTje and RobBERT to predict the presence of the four feedback quality criteria attained macro-average F1-scores of 0.73 and 0.76, respectively. The models predicting the presence of the CanMEDS roles attained F1-scores of 0.71 with BERTje and 0.72 with RobBERT.

Discussion: The results showed that a state-of-the-art LLM is able to identify the presence of the four feedback quality criteria and the CanMEDS roles in written feedback comments. This implies that the quality analysis of written feedback comments can be automated using an LLM, saving time and resources.
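The macro-average F1-scores reported above are obtained by scoring each label independently and then averaging, which weights rare and frequent labels equally. A minimal pure-Python sketch for multilabel sentence classification; the label names and data below are illustrative, not the study's annotation scheme or results:

```python
def macro_f1(y_true, y_pred, labels):
    """Macro-average F1 for multilabel data.

    y_true, y_pred: lists of label sets, one set per sentence.
    Computes precision/recall/F1 per label, then averages the F1s.
    """
    f1s = []
    for label in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if label in t and label in p)
        fp = sum(1 for t, p in zip(y_true, y_pred) if label not in t and label in p)
        fn = sum(1 for t, p in zip(y_true, y_pred) if label in t and label not in p)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)

# Illustrative: two feedback-quality labels applied to three sentences.
quality = ["performance", "improvement"]
truth = [{"performance"}, {"performance", "improvement"}, set()]
preds = [{"performance"}, {"improvement"}, {"improvement"}]
print(round(macro_f1(truth, preds, quality), 2))  # → 0.67
```

In practice the study's pipeline would use a library implementation (e.g. scikit-learn's `f1_score` with `average="macro"`) on the fine-tuned BERT models' predictions; the arithmetic is the same.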
Affiliation(s)
- Loic De Langhe: Language and Translation Technology Team at Ghent University, Belgium
- Orphée De Clercq: Language and Translation Technology Team at Ghent University, Belgium
- Mieke Embo: Department of Educational Sciences at Ghent University and the Expertise Network Health and Care at the Artevelde University of Applied Sciences, Belgium
- Tammy Schellens: Department of Educational Sciences at Ghent University, Belgium
- Martin Valcke: Department of Educational Sciences at Ghent University, Belgium
4. Anderson LM, Rowland K, Edberg D, Wright KM, Park YS, Tekian A. An Analysis of Written and Numeric Scores in End-of-Rotation Forms from Three Residency Programs. Perspectives on Medical Education 2023; 12:497-506. PMID: 37929204; PMCID: PMC10624145; DOI: 10.5334/pme.41
Abstract
Introduction: End-of-Rotation Forms (EORFs) assess resident progress in graduate medical education and are a major component of Clinical Competency Committee (CCC) discussions. Single-institution studies suggest EORFs can detect deficiencies, but both grades and comments skew positive. In this study, we sought to determine whether the EORFs from three programs, spanning multiple specialties and institutions, produced useful information for residents, program directors, and CCCs.

Methods: Evaluations from three programs were included (Program 1, Institution A, Internal Medicine: n = 38; Program 2, Institution A, Anesthesia: n = 9; Program 3, Institution B, Anesthesia: n = 11). Two independent researchers coded each written comment for relevance (specificity and actionability) and orientation (praise or critical) using a standardized rubric. Numeric scores were analyzed using descriptive statistics.

Results: A total of 4,869 evaluations were collected from the programs. Of the 77,434 discrete numeric scores, 691 (0.89%) were considered "below expected level." Of the written comments, 71.2% (2,683/3,767) were scored as irrelevant, while 3,217 (85.4%) were scored positive and 550 (14.6%) critical. When combined, 63.2% (n = 2,379) of comments were scored positive and irrelevant, while 6.5% (n = 246) were scored critical and relevant.

Discussion: Fewer than 1% of numeric scores indicated below-expected performance, and more than 70% of comments were scored as irrelevant. Critical, relevant comments were the least frequently observed, consistently across all three programs. The low rate of constructive feedback and the high rate of irrelevant comments are inadequate for a CCC to make informed decisions. The consistency of these findings across programs, specialties, and institutions suggests both local and systemic changes should be considered.
Affiliation(s)
- Lauren M. Anderson: Department of Family and Preventive Medicine, Rush University, Chicago, Illinois, US
- Kathleen Rowland: Department of Family and Preventive Medicine, Rush University, Chicago, Illinois, US
- Deborah Edberg: Department of Family and Preventive Medicine, Rush University, Chicago, Illinois, US
- Katherine M. Wright: Department of Family & Community Medicine, Northwestern University Feinberg School of Medicine, Chicago, Illinois, US
- Yoon Soo Park: Department of Medical Education, University of Illinois Chicago, Chicago, Illinois, US
- Ara Tekian: Department of Medical Education, University of Illinois Chicago, Chicago, Illinois, US
5. Renting N, Jaarsma D, Borleffs JC, Slaets JPJ, Cohen-Schotanus J, Gans ROB. Effectiveness of a supervisor training on quality of feedback to internal medicine residents: a controlled longitudinal multicentre study. BMJ Open 2023; 13:e076946. PMID: 37770280; PMCID: PMC10546104; DOI: 10.1136/bmjopen-2023-076946
Abstract
OBJECTIVES: High-quality feedback on different dimensions of competence is important for resident learning, and supervisors may need additional training and information to fulfil this demanding task. This study aimed to evaluate whether a short, simple training improves the quality of feedback residents receive from their clinical supervisors in daily practice.

DESIGN: Longitudinal quasi-experimental controlled study with a pretest/post-test design. We collected multiple premeasurements and postmeasurements for each supervisor over 2 years. A repeated measurements ANOVA was performed on the data.

SETTING: Internal medicine departments of seven Dutch teaching hospitals.

PARTICIPANTS: Internal medicine supervisors (n=181) and residents (n=192).

INTERVENTION: Half of the supervisors attended a short, 2.5-hour training session during which they practised giving feedback in a simulated setting using video fragments. Highly experienced internal medicine educators guided the group discussions about the feedback. The other half of the supervisors formed the control group and received no feedback training.

OUTCOME MEASURES: Residents rated the quality of supervisors' oral feedback with a previously validated questionnaire. Furthermore, the completeness of the supervisors' written feedback on evaluation forms was analysed.

RESULTS: The data showed a significant increase in the quality of feedback after the training, F(1, 87) = 6.76, p = 0.04. This effect remained significant up to 6 months after the training session.

CONCLUSIONS: A short training session in which supervisors practise giving feedback in a simulated setting increases the quality of their feedback. This is a promising outcome, since it is a feasible approach to faculty development.
Affiliation(s)
- Nienke Renting: Faculty of Behavioral & Social Sciences, GION, University of Groningen, Groningen, The Netherlands
- Debbie Jaarsma: Faculty of Veterinary Medicine, Utrecht University, Utrecht, The Netherlands
- Jan C.C. Borleffs: Center for Education Development and Research in Health Professions, University Medical Center Groningen, Groningen, The Netherlands
- Joris P.J. Slaets: Geriatric Medicine, Leyden Academy on Vitality and Ageing, Leiden, The Netherlands
- Janke Cohen-Schotanus: Center for Education Development and Research in Health Professions, University Medical Center Groningen, Groningen, The Netherlands
- Rob O.B. Gans: Internal Medicine, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
6. Gore KM, Schiebout J, Peksa GD, Hock S, Patwari R, Gottlieb M. The integrative feedback tool: assessing a novel feedback tool among emergency medicine residents. Clin Exp Emerg Med 2023; 10:306-314. PMID: 36796780; PMCID: PMC10579731; DOI: 10.15441/ceem.22.395
Abstract
OBJECTIVE: Feedback is critical to the growth of learners, yet feedback quality can be variable in practice, and most feedback tools are generic, with few targeting emergency medicine. We created a feedback tool designed for emergency medicine residents, and this study aimed to evaluate its effectiveness.

METHODS: This was a single-center, prospective cohort study comparing feedback quality before and after introducing a novel feedback tool. Residents and faculty completed a survey after each shift assessing feedback quality, feedback time, and the number of feedback episodes. Feedback quality was assessed using a composite score from seven questions, each scored 1 to 5 points (minimum total score, 7 points; maximum, 35 points). Preintervention and postintervention data were analyzed using a mixed-effects model that accounted for the correlation of random effects between study participants.

RESULTS: Residents completed 182 surveys and faculty members completed 158 surveys. Use of the tool was associated with improved consistency in the summative score of effective feedback attributes as assessed by residents (P=0.040) but not by faculty (P=0.259). However, most of the individual scores for attributes of good feedback did not reach statistical significance. With the tool, residents perceived that faculty spent more time providing feedback (P=0.040) and that the delivery of feedback was more ongoing throughout the shift (P=0.020). Faculty felt that the tool allowed for more ongoing feedback (P=0.002), with no perceived increase in the time spent delivering feedback (P=0.833).

CONCLUSION: The use of a dedicated tool may help educators provide more meaningful and frequent feedback without increasing the perceived time needed to provide it.
Affiliation(s)
- Katarzyna M. Gore: Department of Emergency Medicine, Rush University Medical Center, Chicago, IL, USA
- Jessen Schiebout: Department of Emergency Medicine, Rush University Medical Center, Chicago, IL, USA
- Gary D. Peksa: Department of Emergency Medicine, Rush University Medical Center, Chicago, IL, USA
- Sara Hock: Department of Emergency Medicine, Rush University Medical Center, Chicago, IL, USA
- Rahul Patwari: Department of Emergency Medicine, Rush University Medical Center, Chicago, IL, USA
- Michael Gottlieb: Department of Emergency Medicine, Rush University Medical Center, Chicago, IL, USA
7. Quinn JK, Mongelluzzo J, Nip A, Graterol J, Chen EH. Perception of Quiet Students in Emergency Medicine: An Exploration of Narratives in the Standardized Letter of Evaluation. West J Emerg Med 2023; 24:728-731. PMID: 37527382; PMCID: PMC10393445; DOI: 10.5811/westjem.57756
Abstract
INTRODUCTION: The Standardized Letter of Evaluation (SLOE) is designed to assist emergency medicine (EM) residency programs in differentiating applicants and selecting those to interview. The SLOE narrative component summarizes the student's clinical skills as well as their non-cognitive attributes. The purpose of this qualitative investigation was to explore how students described in the SLOE as quiet are perceived by faculty and to better understand how this may affect their residency candidacy.

METHODS: This retrospective cohort study included all SLOEs submitted to one EM residency program during one application cycle. We analyzed sentences in the SLOE narratives describing students as "quiet," "shy," and/or "reserved." Using grounded theory and thematic content analysis with a constructivist approach, we identified five mutually exclusive themes that best characterized the usage of these target words.

RESULTS: We identified five themes: 1) quiet traits portrayed as implied-negative attributes (62.4%); 2) quiet students portrayed as overshadowed by more extraverted peers (10.3%); 3) quiet students portrayed as unfit for fast-paced clinical settings (3.4%); 4) "quiet" portrayed as a positive attribute (10.3%); and 5) "quiet" comments deemed difficult to assess due to lack of context (15.6%).

CONCLUSION: We found that quiet personality traits were often portrayed as negative attributes. Further, comments often lacked clinical context, leaving them vulnerable to misunderstanding or bias. More research is needed to determine how quiet students perform compared with their non-quiet peers and what changes to instructional practices may support quiet students and help create a more inclusive learning environment.
Affiliation(s)
- John K Quinn: University of California, San Francisco, Department of Emergency Medicine, San Francisco, California
- Jillian Mongelluzzo: University of California, San Francisco, Department of Emergency Medicine, San Francisco, California
- Alyssa Nip: University of California, San Francisco, Department of Emergency Medicine, San Francisco, California
- Joseph Graterol: University of California, San Francisco, Department of Emergency Medicine, San Francisco, California
- Esther H Chen: University of California, San Francisco, Department of Emergency Medicine, San Francisco, California
8. Alsahafi A, Ling DLX, Newell M, Kropmans T. A systematic review of effective quality feedback measurement tools used in clinical skills assessment. MedEdPublish 2023; 12:11. PMID: 37435429; PMCID: PMC10331851; DOI: 10.12688/mep.18940.2
Abstract
BACKGROUND: The Objective Structured Clinical Examination (OSCE) is a valid tool to assess the clinical skills of medical students. Feedback after an OSCE is essential for student improvement and safe clinical practice, yet many examiners do not provide helpful or insightful feedback in the text space provided after OSCE stations, which may adversely affect learning outcomes. The aim of this systematic review was to identify the best determinants of quality written feedback in the field of medicine.

METHODS: PubMed, Medline, Embase, CINAHL, Scopus, and Web of Science were searched for relevant literature up to February 2021. We included studies that described the qualities of good/effective feedback in clinical skills assessment in the field of medicine. Four independent reviewers extracted the determinants used to assess the quality of written feedback. Percentage agreement and kappa coefficients were calculated for each determinant. The ROBINS-I (Risk Of Bias In Non-randomized Studies of Interventions) tool was used to assess the risk of bias.

RESULTS: Fourteen studies were included in this systematic review, and 10 determinants were identified for assessing feedback. The determinants with the highest agreement among reviewers were specific, described gap, balanced, constructive and behavioural, with kappa values of 0.79, 0.45, 0.33, 0.33 and 0.26, respectively. All other determinants had low agreement (kappa values below 0.22), indicating that even though they have been used in the literature, they might not be applicable to good-quality feedback. The risk of bias was low or moderate overall.

CONCLUSIONS: This work suggests that good-quality written feedback should be specific, balanced, and constructive in nature, and should describe the gap in student learning as well as observed behavioural actions in the exam. Integrating these determinants into OSCE assessment will help guide and support educators in providing effective feedback to learners.
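The kappa coefficients used above to quantify reviewer agreement beyond chance follow Cohen's formula, kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and p_e is the agreement expected by chance from each rater's marginal rates. A minimal sketch for two raters making binary present/absent judgments; the ratings below are illustrative, not the review's data:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same items:
    (p_o - p_e) / (1 - p_e). Returns 1.0 for perfect agreement,
    ~0 for chance-level agreement."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    # Observed proportion of items on which the raters agree.
    p_o = sum(1 for x, y in zip(rater_a, rater_b) if x == y) / n
    # Chance agreement from the product of each rater's marginal rates.
    p_e = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
              for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Illustrative: two reviewers judging whether 10 feedback
# comments contain the determinant "specific" (1 = present).
a = [1, 1, 1, 0, 0, 1, 0, 1, 1, 0]
b = [1, 1, 0, 0, 0, 1, 0, 1, 1, 1]
print(round(cohens_kappa(a, b), 2))  # → 0.58
```

Note that kappa can be low even when raw percentage agreement is high if one category dominates, which is one reason the review reports both statistics.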
Affiliation(s)
- Akram Alsahafi: School of Medicine, College of Medicine, Nursing and Health Sciences, National University of Ireland Galway, Galway, Co. Galway, H91 V4AY, Ireland; Department of Medical Education, College of Medicine, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
- Davina Li Xin Ling: School of Medicine, College of Medicine, Nursing and Health Sciences, National University of Ireland Galway, Galway, Co. Galway, H91 V4AY, Ireland
- Micheál Newell: School of Medicine, College of Medicine, Nursing and Health Sciences, National University of Ireland Galway, Galway, Co. Galway, H91 V4AY, Ireland
- Thomas Kropmans: School of Medicine, College of Medicine, Nursing and Health Sciences, National University of Ireland Galway, Galway, Co. Galway, H91 V4AY, Ireland
9. Singh S, Cheung WJ, Dewhirst S, Wood TJ, Landreville JM. The influence of clinical coaching teams on quality of entrustable professional activity assessments. AEM Education and Training 2023; 7:e10879. PMID: 37361186; PMCID: PMC10290210; DOI: 10.1002/aet2.10879
Abstract
Background: Coaching is an important component of workplace-based assessment in competency-based medical education, and longitudinal coaching relationships have been proposed to enhance the trainee-supervisor relationship and promote high-quality assessment.

Objective: The objective of this study was to determine the influence of longitudinal coaching relationships on the quality of entrustable professional activity (EPA) assessments.

Methods: EPAs (n = 174) completed by emergency medicine (EM) supervisors between July 2020 and June 2021 were extracted and divided into two groups: one consisted of EPAs completed by supervisors when a longitudinal coaching relationship existed (n = 87), and the other of EPAs completed by the same supervisors when no coaching relationship existed (n = 87). Three physicians were recruited to rate the EPAs using the Quality of Assessment and Learning (QuAL) score, a previously published measure of EPA quality. An analysis of variance was performed to compare mean QuAL scores between the groups, and linear regression analysis was conducted to examine the relationship between trainee performance (EPA rating) and EPA assessment quality (QuAL score).

Results: All raters completed the survey. The mean ± SD QuAL score in the coaching relationship group (3.63 ± 0.91) was higher than in the no-coaching-relationship group (3.51 ± 1.10), but the difference was not statistically significant (p = 0.40). Supervisor was a significant predictor of QuAL score (p = 0.012), and supervisor alone accounted for 26% of the variability in QuAL scores (R2 = 0.26). There was no significant relationship between trainee performance and EPA assessment quality.

Conclusions: The presence of a longitudinal coaching relationship did not influence the quality of EPA assessments.
Affiliation(s)
- Warren J. Cheung: Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada; Royal College of Physicians and Surgeons of Canada, Ottawa, Ontario, Canada
- Timothy J. Wood: Department of Innovation in Medical Education, University of Ottawa, Ottawa, Ontario, Canada
10. Chakroun M, Dion VR, Ouellet K, Graillon A, Désilets V, Xhignesse M, St-Onge C. Quality of Narratives in Assessment: Piloting a List of Evidence-Based Quality Indicators. Perspectives on Medical Education 2023; 12:XX. PMID: 37252269; PMCID: PMC10215990; DOI: 10.5334/pme.925
Abstract
Background and Need for Innovation: Appraising the quality of narratives used in assessment is challenging for educators and administrators. Although some quality indicators for writing narratives exist in the literature, they remain context-specific and are not always sufficiently operational to be used easily. Creating a tool that gathers applicable quality indicators, and ensuring its standardized use, would equip assessors to appraise the quality of narratives.

Steps Taken for Development and Implementation of the Innovation: We used DeVellis' framework to develop a checklist of evidence-informed indicators for quality narratives. Two team members independently piloted the checklist on four series of narratives drawn from three different sources. After each series, team members documented their agreement and reached a consensus. We calculated the frequency of occurrence of each quality indicator as well as interrater agreement to assess the standardized application of the checklist.

Outcomes of the Innovation: We identified seven quality indicators and applied them to the narratives. Frequencies of the quality indicators ranged from 0% to 100%, and interrater agreement ranged from 88.7% to 100% across the four series.

Critical Reflection: Although we achieved a standardized application of a list of quality indicators for narratives used in health sciences education, users would still need training to write good-quality narratives. We also noted that some quality indicators occurred less frequently than others, and we offer some reflections on why.
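The indicator frequencies and interrater agreement figures reported above reduce to simple proportions over present/absent judgments. A minimal sketch of both computations; the checklist ratings below are illustrative, not the study's data:

```python
def percent_agreement(rater_a, rater_b):
    """Share (in %) of narratives on which two raters give the
    same present/absent judgment for a quality indicator."""
    matches = sum(1 for x, y in zip(rater_a, rater_b) if x == y)
    return 100 * matches / len(rater_a)

def indicator_frequency(judgments):
    """Share (in %) of narratives in which an indicator was
    marked present (judgments are 1 = present, 0 = absent)."""
    return 100 * sum(judgments) / len(judgments)

# Illustrative: two raters applying one quality indicator
# to a series of 8 narratives.
a = [1, 1, 0, 1, 0, 1, 1, 0]
b = [1, 1, 0, 1, 1, 1, 1, 0]
print(percent_agreement(a, b))   # → 87.5
print(indicator_frequency(a))    # → 62.5
```

Percent agreement is the statistic the authors report; unlike kappa, it does not correct for chance agreement, which matters for indicators that are almost always (or almost never) present.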
Affiliation(s)
- Molk Chakroun: Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec, Canada
- Vincent R. Dion: Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec, Canada
- Kathleen Ouellet: Paul Grand'Maison de la Société des médecins de l'Université de Sherbrooke Research Chair in Medical Education, Sherbrooke, Québec, Canada
- Ann Graillon: Centre de pédagogie et des sciences de la santé, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec, Canada
- Valérie Désilets: Department of Pediatrics, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec, Canada
- Marianne Xhignesse: Department of Family and Emergency Medicine, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec, Canada
- Christina St-Onge: Department of Medicine, Faculty of Medicine and Health Sciences, Université de Sherbrooke, and Paul Grand'Maison de la Société des médecins de l'Université de Sherbrooke Research Chair in Medical Education, Sherbrooke, Québec, Canada
11. Haas MRC, Davis MG, Harvey CE, Huang R, Scott KW, George BC, Wnuk GM, Burkhardt J. Implementation of the SIMPL (Society for Improving Medical Professional Learning) performance assessment tool in the emergency department: A pilot study. AEM Education and Training 2023; 7:e10842. PMID: 36777102; PMCID: PMC9899600; DOI: 10.1002/aet2.10842
Abstract
Background Feedback and assessment are difficult to provide in the emergency department (ED) setting despite their critical importance for competency-based education, and traditional end-of-shift evaluations (ESEs) alone may be inadequate. The SIMPL (Society for Improving Medical Professional Learning) mobile application has been successfully implemented and studied in the operative setting for surgical training programs as a point-of-care tool that incorporates three assessment scales in addition to dictated feedback. SIMPL may represent a viable tool for enhancing workplace-based feedback and assessment in emergency medicine (EM). Methods We implemented SIMPL at a 4-year EM residency program during a pilot study from March to June 2021 for observable activities such as medical resuscitations and related procedures. Faculty and residents underwent formal rater training prior to launch and were asked to complete surveys regarding the SIMPL app's content, usability, and future directions at the end of the pilot. Results A total of 36/58 (62%) faculty completed at least one evaluation, for a total of 190 evaluations and an average of three evaluations per faculty member. Faculty initiated 130/190 (68%) evaluations and residents initiated 60/190 (32%). Ninety-one percent included dictated feedback. A total of 45/54 (83%) residents received at least one evaluation, with an average of 3.5 evaluations per resident. Residents generally agreed that SIMPL increased the quality of feedback received and reported valuing dictated feedback, but they generally did not value the numerical feedback SIMPL provided. Relative to the residents, faculty overall responded more positively toward SIMPL. The pilot generated several suggestions to inform the optimization of the next version of SIMPL for EM training programs.
Conclusions The SIMPL app, originally developed for use in surgical training programs, can be implemented for use in EM residency programs, has positive support from faculty, and may provide important adjunct information beyond current ESEs.
Affiliation(s)
- Mary R. C. Haas
- Department of Emergency Medicine, University of Michigan Medical School, Ann Arbor, Michigan, USA
- Mallory G. Davis
- Department of Emergency Medicine, University of Michigan Medical School, Ann Arbor, Michigan, USA
- Carrie E. Harvey
- Department of Emergency Medicine, University of Michigan Medical School, Ann Arbor, Michigan, USA
- Rob Huang
- Department of Emergency Medicine, University of Michigan Medical School, Ann Arbor, Michigan, USA
- Kirstin W. Scott
- University of Michigan Emergency Medicine Residency Program, Ann Arbor, Michigan, USA
- Brian C. George
- Center for Surgical Training and Research, Department of Surgery, University of Michigan Medical School, Ann Arbor, Michigan, USA
- Gregory M. Wnuk
- Center for Surgical Training and Research, Department of Surgery, University of Michigan Medical School, Ann Arbor, Michigan, USA
- John Burkhardt
- Departments of Emergency Medicine and Learning Health Sciences, University of Michigan Medical School, Ann Arbor, Michigan, USA
12
Kogan JR, Dine CJ, Conforti LN, Holmboe ES. Can Rater Training Improve the Quality and Accuracy of Workplace-Based Assessment Narrative Comments and Entrustment Ratings? A Randomized Controlled Trial. Academic Medicine 2023; 98:237-247. [PMID: 35857396] [DOI: 10.1097/acm.0000000000004819]
Abstract
PURPOSE Prior research evaluating workplace-based assessment (WBA) rater training effectiveness has not measured improvement in narrative comment quality and accuracy, nor accuracy of prospective entrustment-supervision ratings. The purpose of this study was to determine whether rater training, using performance dimension and frame of reference training, could improve WBA narrative comment quality and accuracy. A secondary aim was to assess impact on entrustment rating accuracy. METHOD This single-blind, multi-institution, randomized controlled trial of a multifaceted, longitudinal rater training intervention consisted of in-person training followed by asynchronous online spaced learning. In 2018, investigators randomized 94 internal medicine and family medicine physicians involved with resident education. Participants assessed 10 scripted standardized resident-patient videos at baseline and follow-up. Differences in holistic assessment of narrative comment accuracy and specificity, accuracy of individual scenario observations, and entrustment rating accuracy were evaluated with t tests. Linear regression assessed impact of participant demographics and baseline performance. RESULTS Seventy-seven participants completed the study. At follow-up, the intervention group (n = 41), compared with the control group (n = 36), had higher scores for narrative holistic specificity (2.76 vs 2.31, P < .001, Cohen V = .25), accuracy (2.37 vs 2.06, P < .001, Cohen V = .20) and mean quantity of accurate (6.14 vs 4.33, P < .001), inaccurate (3.53 vs 2.41, P < .001), and overall observations (2.61 vs 1.92, P = .002, Cohen V = .47). In aggregate, the intervention group had more accurate entrustment ratings (58.1% vs 49.7%, P = .006, Phi = .30). Baseline performance was significantly associated with performance on final assessments. CONCLUSIONS Quality and specificity of narrative comments improved with rater training; the effect was mitigated by inappropriate stringency. 
Training improved accuracy of prospective entrustment-supervision ratings, but the effect was more limited. Participants with lower baseline rating skill may benefit most from training.
Affiliation(s)
- Jennifer R Kogan
- J.R. Kogan is associate dean, Student Success and Professional Development, and professor of medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania; ORCID: https://orcid.org/0000-0001-8426-9506
- C Jessica Dine
- C.J. Dine is associate dean, Evaluation and Assessment, and associate professor of medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania; ORCID: https://orcid.org/0000-0001-5894-0861
- Lisa N Conforti
- L.N. Conforti is research associate for milestones evaluation, Accreditation Council for Graduate Medical Education, Chicago, Illinois; ORCID: https://orcid.org/0000-0002-7317-6221
- Eric S Holmboe
- E.S. Holmboe is chief, research, milestones development and evaluation, Accreditation Council for Graduate Medical Education, Chicago, Illinois; ORCID: https://orcid.org/0000-0003-0108-6021
13
Zavodnick J, Doroshow J, Rosenberg S, Banks J, Leiby BE, Mingioni N. Hawks and Doves: Perceptions and Reality of Faculty Evaluations. Journal of Medical Education and Curricular Development 2023; 10:23821205231197079. [PMID: 37692558] [PMCID: PMC10492463] [DOI: 10.1177/23821205231197079]
Abstract
OBJECTIVES Internal medicine clerkship grades are important for residency selection, but inconsistencies between evaluator ratings threaten their ability to accurately represent student performance and their perceived fairness. Clerkship grading committees are recommended as best practice, but the mechanisms by which they promote accuracy and fairness are not certain. The ability of a committee to reliably assess and account for the grading stringency of individual evaluators has not been previously studied. METHODS This is a retrospective analysis of evaluations completed by faculty considered to be stringent, lenient, or neutral graders by members of a grading committee at a single medical college. Faculty evaluations were assessed for differences in ratings on individual skills and recommendations for final grade between perceived stringency categories. Logistic regression was used to determine whether actual assigned ratings varied based on a faculty member's perceived grading stringency category. RESULTS "Easy graders" consistently had the highest probability of awarding an above-average rating, and "hard graders" consistently had the lowest, though this finding reached statistical significance for only 2 of 8 questions on the evaluation form (P = .033 and P = .001). Odds ratios of assigning a higher final suggested grade followed the expected pattern (higher for "easy" and "neutral" compared to "hard," higher for "easy" compared to "neutral") but did not reach statistical significance. CONCLUSIONS Perceived differences in faculty grading stringency have a basis in reality for clerkship evaluation elements. However, final grades recommended by faculty perceived as "stringent" or "lenient" did not differ. Perceptions of "hawks" and "doves" are not just lore, but they may not have implications for students' final grades.
Continued research to describe the "hawk and dove effect" will be crucial to enable assessment of local grading variation and empower local educational leadership to correct, but not overcorrect, for this effect to maintain fairness in student evaluations.
Affiliation(s)
- Jillian Zavodnick
- Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, USA
- Sarah Rosenberg
- Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, USA
- Joshua Banks
- Department of Pharmacology and Experimental Therapeutics, Division of Biostatistics, Thomas Jefferson University, Philadelphia, USA
- Benjamin E Leiby
- Department of Pharmacology and Experimental Therapeutics, Division of Biostatistics, Thomas Jefferson University, Philadelphia, USA
- Nina Mingioni
- Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, USA
14
Chakroun M, Dion VR, Ouellet K, Graillon A, Désilets V, Xhignesse M, St-Onge C. Narrative Assessments in Higher Education: A Scoping Review to Identify Evidence-Based Quality Indicators. Academic Medicine 2022; 97:1699-1706. [PMID: 35612917] [DOI: 10.1097/acm.0000000000004755]
Abstract
PURPOSE Narrative comments are increasingly used in assessment to document trainees' performance and to make important decisions about academic progress. However, little is known about how to document the quality of narrative comments, since traditional psychometric analysis cannot be applied. The authors aimed to generate a list of quality indicators for narrative comments, to identify recommendations for writing high-quality narrative comments, and to document factors that influence the quality of narrative comments used in assessments in higher education. METHOD The authors conducted a scoping review according to Arksey & O'Malley's framework. The search strategy yielded 690 articles from 6 databases. Team members screened abstracts for inclusion and exclusion, then extracted numerical and qualitative data based on predetermined categories. Numerical data were used for descriptive analysis. The authors completed the thematic analysis of qualitative data with iterative discussions until they achieved consensus for the interpretation of the results. RESULTS After the full-text review of 213 selected articles, 47 were included. Through the thematic analysis, the authors identified 7 quality indicators, 12 recommendations for writing quality narratives, and 3 factors that influence the quality of narrative comments used in assessment. The 7 quality indicators are (1) describes performance with a focus on particular elements (attitudes, knowledge, skills); (2) provides a balanced message between positive elements and elements needing improvement; (3) provides recommendations to learners on how to improve their performance; (4) compares the observed performance with an expected standard of performance; (5) provides justification for the mark/score given; (6) uses language that is clear and easily understood; and (7) uses a nonjudgmental style. 
CONCLUSIONS Assessors can use these quality indicators and recommendations to write high-quality narrative comments, thus reinforcing the appropriate documentation of trainees' performance, facilitating solid decision making about trainees' progression, and enhancing the impact of narrative feedback for both learners and programs.
Affiliation(s)
- Molk Chakroun
- M. Chakroun is a PhD student, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec, Canada; ORCID: https://orcid.org/0000-0002-0518-1782
- Vincent R Dion
- V.R. Dion was research assistant, Paul Grand'Maison de la Société des médecins de l'Université de Sherbrooke Research Chair in Medical Education, at the time of this work, and is now a first-year medical student, Université de Sherbrooke, Sherbrooke, Québec, Canada
- Kathleen Ouellet
- K. Ouellet is research coordinator, Centre de pédagogie et des sciences de la santé, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec, Canada; ORCID: https://orcid.org/0000-0001-9829-151X
- Ann Graillon
- A. Graillon is associate professor, Department of Pediatrics, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec, Canada; ORCID: https://orcid.org/0000-0003-3677-7113
- Valérie Désilets
- V. Désilets is associate professor, Department of Pediatrics, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec, Canada; ORCID: https://orcid.org/0000-0001-7399-119X
- Marianne Xhignesse
- M. Xhignesse is full professor, Department of Family and Emergency Medicine, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec, Canada; ORCID: https://orcid.org/0000-0002-3257-5912
- Christina St-Onge
- C. St-Onge is full professor, Department of Medicine, Faculty of Medicine and Health Sciences, Université de Sherbrooke, and holds the Paul Grand'Maison de la Société des médecins de l'Université de Sherbrooke Research Chair in Medical Education, Sherbrooke, Québec, Canada; ORCID: https://orcid.org/0000-0001-5313-0456
15
Gordon LB, Zelaya-Floyd M, White P, Hallen S, Varaklis K, Tavakolikashi M. Interprofessional bedside rounding improves quality of feedback to resident physicians. Medical Teacher 2022; 44:907-913. [PMID: 35373712] [DOI: 10.1080/0142159x.2022.2049735]
Abstract
PURPOSE Obtaining high quality feedback in residency education is challenging, in part due to limited opportunities for faculty observation of authentic clinical work. This study reviewed the impact of interprofessional bedside rounds ('iPACE™') on the length and quality of faculty narrative evaluations of residents as compared to usual inpatient teaching rounds. METHODS Narrative comments from faculty evaluations of Internal Medicine (IM) residents both on usual teaching service as well as the iPACE™ service (spanning 2017-2020) were reviewed and coded using a deductive content analysis approach. RESULTS Six hundred ninety-two narrative evaluations by 63 attendings of 103 residents were included. Evaluations of iPACE™ residents were significantly longer than those of residents on usual teams (109 vs. 69 words, p < 0.001). iPACE™ evaluations contained a higher average occurrence of direct observations of patient/family interactions (0.72 vs. 0.32, p < 0.001), references to interprofessionalism (0.17 vs. 0.05, p < 0.001), as well as specific (3.21 vs. 2.26, p < 0.001), actionable (1.01 vs. 0.69, p < 0.001), and corrective feedback (1.2 vs. 0.88, p = 0.001) per evaluation. CONCLUSIONS This study suggests that the iPACE™ model, which prioritizes interprofessional bedside rounds, had a positive impact on the quantity and quality of feedback, as measured via narrative comments on weekly evaluations.
Affiliation(s)
- Lesley B Gordon
- Tufts University School of Medicine, Boston, MA, USA
- Department of Medicine, Maine Medical Center, Portland, ME, USA
- Patricia White
- Department of Medical Education, Maine Medical Center, Portland, ME, USA
- Sarah Hallen
- Tufts University School of Medicine, Boston, MA, USA
- Division of Geriatrics, Maine Medical Center, Portland, ME, USA
- Kalli Varaklis
- Tufts University School of Medicine, Boston, MA, USA
- Department of Medical Education, Maine Medical Center, Portland, ME, USA
- Department of Obstetrics and Gynecology, Maine Medical Center, Portland, ME, USA
- Motahareh Tavakolikashi
- Department of Medical Education, Maine Medical Center, Portland, ME, USA
- Department of System Science and Industrial Engineering, Binghamton University, Binghamton, NY, USA
16
Alsahafi A, Ling DLX, Newell M, Kropmans T. A systematic review of effective quality feedback measurement tools used in clinical skills assessment. MedEdPublish 2022. [DOI: 10.12688/mep.18940.1]
Abstract
Background: The Objective Structured Clinical Examination (OSCE) is a valid tool to assess the clinical skills of medical students. Feedback after an OSCE is essential for student improvement and safe clinical practice. Many examiners do not provide helpful or insightful feedback in the text space provided after OSCE stations, which may adversely affect learning outcomes. The aim of this systematic review was to identify the best determinants of quality written feedback in the field of medicine. Methods: PubMed, Medline, Embase, CINAHL, Scopus, and Web of Science were searched for relevant literature up to February 2021. We included studies that described the qualities of good/effective feedback in clinical skills assessment in the field of medicine. Four independent reviewers extracted determinants used to assess the quality of written feedback. The percentage agreement and kappa coefficients were calculated for each determinant. The ROBINS-I (Risk Of Bias In Non-randomized Studies of Interventions) tool was used to assess the risk of bias. Results: Fourteen studies were included in this systematic review, and 10 determinants were identified for assessing feedback. The determinants with the highest agreement among reviewers were specific, described gap, balanced, constructive, and behavioural, with kappa values of 0.79, 0.45, 0.33, 0.33, and 0.26, respectively. All other determinants had low agreement (kappa values below 0.22), indicating that even though they have been used in the literature, they might not be applicable to good quality feedback. The risk of bias was low or moderate overall. Conclusions: This work suggests that good quality written feedback should be specific, balanced, and constructive in nature, and should describe the gap in student learning as well as observed behavioural actions in the exams. Integrating these determinants into OSCE assessment will help guide and support educators in providing effective feedback to learners.
17
French JC, Pien LC. A Document Analysis of Nationally Available Faculty Assessment Forms of Resident Performance. J Grad Med Educ 2021; 13:833-840. [PMID: 35070096] [PMCID: PMC8672836] [DOI: 10.4300/jgme-d-21-00289.1]
Abstract
BACKGROUND Written feedback by faculty on resident performance is valuable when it includes components based on assessment for learning. However, it is not clear how often assessment forms include these components for summative and formative feedback. OBJECTIVE To analyze prompts used in forms for faculty assessment of resident performance, guided by best practices in survey research methodology, self-regulation theory, and competency-based assessment. METHODS A document analysis, which is a qualitative approach used to analyze the content and structure of texts, was completed on assessment forms nationally available in MedHub. Due to the number of forms available, only internal medicine and surgery specialties were included. A document summary form was created to guide the researchers through the analysis of the assessments. RESULTS Forty-eight forms were reviewed, each from a unique residency program. All forms provided a textbox for comments, and 54% made this textbox required for assessment completion. Eighty-three percent of assessments placed the open textbox at the end of the form. One-third of forms contained a simple prompt, "Comments," for the narrative section. Fifteen percent of forms included a box to check if the information on the form had been discussed with the resident. Fifty percent of the assessments were unclear about whether they were meant to be formative or summative in nature. CONCLUSIONS Our document analysis of assessment forms revealed they do not always follow best practices in survey design for narrative sections, nor do they universally address elements deemed important for promotion of self-regulation and competency-based assessment.
Affiliation(s)
- Judith C. French
- Judith C. French, PhD, is Surgical Educator, General Surgery Residency Program, Department of General Surgery, Cleveland Clinic, and Assistant Professor of Surgery, Cleveland Clinic Lerner College of Medicine of Case Western Reserve University
- Lily C. Pien
- Lily C. Pien, MD, MHPE, is Core Faculty, Allergy and Immunology Fellowship Program, Cleveland Clinic, and Associate Professor of Medicine, Cleveland Clinic Lerner College of Medicine of Case Western Reserve University
18
Ginsburg S, Watling CJ, Schumacher DJ, Gingerich A, Hatala R. Numbers Encapsulate, Words Elaborate: Toward the Best Use of Comments for Assessment and Feedback on Entrustment Ratings. Academic Medicine 2021; 96:S81-S86. [PMID: 34183607] [DOI: 10.1097/acm.0000000000004089]
Abstract
The adoption of entrustment ratings in medical education is based on a seemingly simple premise: to align workplace-based supervision with resident assessment. Yet it has been difficult to operationalize this concept. Entrustment rating forms combine numeric scales with comments and are embedded in a programmatic assessment framework, which encourages the collection of a large quantity of data. The implicit assumption that more is better has led to an untamable volume of data that competency committees must grapple with. In this article, the authors explore the roles of numbers and words on entrustment rating forms, focusing on the intended and optimal use(s) of each, with a focus on the words. They also unpack the problematic issue of dual-purposing words for both assessment and feedback. Words have enormous potential to elaborate, to contextualize, and to instruct; to realize this potential, educators must be crystal clear about their use. The authors set forth a number of possible ways to reconcile these tensions by more explicitly aligning words to purpose. For example, educators could focus written comments solely on assessment; create assessment encounters distinct from feedback encounters; or use different words collected from the same encounter to serve distinct feedback and assessment purposes. Finally, the authors address the tyranny of documentation created by programmatic assessment and urge caution in yielding to the temptation to reduce words to numbers to make them manageable. Instead, they encourage educators to preserve some educational encounters purely for feedback, and to consider that not all words need to become data.
Affiliation(s)
- Shiphra Ginsburg
- S. Ginsburg is professor of medicine, Department of Medicine, Sinai Health System and Faculty of Medicine, University of Toronto, scientist, Wilson Centre for Research in Education, University of Toronto, Toronto, Ontario, Canada, and Canada Research Chair in Health Professions Education; ORCID: http://orcid.org/0000-0002-4595-6650
- Christopher J Watling
- C.J. Watling is professor and director, Centre for Education Research and Innovation, Schulich School of Medicine & Dentistry, Western University, London, Ontario, Canada; ORCID: https://orcid.org/0000-0001-9686-795X
- Daniel J Schumacher
- D.J. Schumacher is associate professor of pediatrics, Cincinnati Children's Hospital Medical Center and University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0001-5507-8452
- Andrea Gingerich
- A. Gingerich is assistant professor, Northern Medical Program, University of Northern British Columbia, Prince George, British Columbia, Canada; ORCID: https://orcid.org/0000-0001-5765-3975
- Rose Hatala
- R. Hatala is professor, Department of Medicine, and director, Clinical Educator Fellowship, Center for Health Education Scholarship, University of British Columbia, Vancouver, British Columbia, Canada; ORCID: https://orcid.org/0000-0003-0521-2590
19
Lin D. Hospitalist Readiness to Assess and Evaluate Resident Progress. South Med J 2021; 114:215-217. [PMID: 33787934] [DOI: 10.14423/smj.0000000000001227]
Affiliation(s)
- Doris Lin
- Department of Medicine, Baylor College of Medicine, Houston, Texas
20
Dory V, Danoff D, Plotnick LH, Cummings BA, Gomez-Garibello C, Pal NE, Gumuchian ST, Young M. Does Educational Handover Influence Subsequent Assessment? Academic Medicine 2021; 96:118-125. [PMID: 32496286] [DOI: 10.1097/acm.0000000000003528]
Abstract
PURPOSE Educational handover (i.e., providing information about learners' past performance) is controversial. Proponents argue handover could help tailor learning opportunities. Opponents fear it could bias subsequent assessments and lead to self-fulfilling prophecies. This study examined whether raters provided with reports describing learners' minor weaknesses would generate different assessment scores or narrative comments than those who did not receive such reports. METHOD In this 2018 mixed-methods, randomized, controlled, experimental study, clinical supervisors from 5 postgraduate (residency) programs were randomized into 3 groups receiving no educational handover (control), educational handover describing weaknesses in medical expertise, and educational handover describing weaknesses in communication. All participants watched the same videos of 2 simulated resident-patient encounters and assessed performance using a shortened mini-clinical evaluation exercise form. The authors compared mean scores, percentages of negative comments, comments focusing on medical expertise, and comments focusing on communication across experimental groups using analyses of variance. They examined potential moderating effects of supervisor experience, gender, and mindsets (fixed vs growth). RESULTS Seventy-two supervisors participated. There was no effect of handover report on assessment scores (F(2, 69) = 0.31, P = .74) or percentage of negative comments (F(2, 60) = 0.33, P = .72). Participants who received a report indicating weaknesses in communication generated a higher percentage of comments on communication than the control group (63% vs 50%, P = .03). Participants who received a report indicating weaknesses in medical expertise generated a similar percentage of comments on expertise compared to the controls (46% vs 47%, P = .98). 
CONCLUSIONS This study provides initial empirical data about the effects of educational handover and suggests it can, in some circumstances, lead to more targeted feedback without influencing scores. Further studies are required to examine the influence of reports for a variety of performance levels, areas of weakness, and learners.
Affiliation(s)
- Valérie Dory
- V. Dory was, when this study occurred, assistant professor, Department of Medicine, assessment specialist for undergraduate medical education, and core member, Centre for Medical Education, Faculty of Medicine, McGill University, Montreal, Quebec, Canada, and then assistant professor, General Practice, Institut de Recherche Santé et Société and Centre académique de médecine générale, Université catholique de Louvain, Brussels, Belgium. She is currently an educationalist, Department of General Practice, Université de Liège, Liège, Belgium; ORCID: https://orcid.org/0000-0002-5814-5654
- Deborah Danoff
- D. Danoff is affiliate member, Institute of Health Sciences Education, Faculty of Medicine, McGill University, Montreal, Quebec, Canada
- Laurie H Plotnick
- L.H. Plotnick is associate professor and associate chair, Department of Pediatrics, Faculty of Medicine, McGill University, and director, Division of Pediatric Emergency Medicine, Montreal Children's Hospital, McGill University Health Centre, Montreal, Quebec, Canada. She is also associate member, Institute of Health Sciences Education, Faculty of Medicine, McGill University, Montreal, Quebec, Canada
- Beth-Ann Cummings
- B.-A. Cummings is associate professor, Department of Medicine, and associate member, Institute of Health Sciences Education, Faculty of Medicine, McGill University, Montreal, Quebec, Canada; ORCID: https://orcid.org/0000-0001-6565-6930
- Carlos Gomez-Garibello
- C. Gomez-Garibello is assistant professor, Department of Medicine and Institute of Health Sciences Education, and assessment specialist, Postgraduate Medical Education, Faculty of Medicine, McGill University, Montreal, Quebec, Canada; ORCID: http://orcid.org/0000-0003-0288-3081
- Nicole E Pal
- N.E. Pal is research assistant, Institute of Health Sciences Education, Faculty of Medicine, McGill University, Montreal, Quebec, Canada
- Stephanie T Gumuchian
- S.T. Gumuchian is research assistant, Institute of Health Sciences Education, Faculty of Medicine, McGill University, Montreal, Quebec, Canada
- Meredith Young
- M. Young is associate professor, Institute of Health Sciences Education and Department of Medicine, Faculty of Medicine, McGill University, Montreal, Quebec, Canada; ORCID: http://orcid.org/0000-0002-2036-2119
21
Ginsburg S, Gingerich A, Kogan JR, Watling CJ, Eva KW. Idiosyncrasy in Assessment Comments: Do Faculty Have Distinct Writing Styles When Completing In-Training Evaluation Reports? Academic Medicine 2020; 95:S81-S88. [PMID: 32769454] [DOI: 10.1097/acm.0000000000003643]
Abstract
PURPOSE Written comments are gaining traction as robust sources of assessment data. Compared with the structure of numeric scales, what faculty choose to write is ad hoc, leading to idiosyncratic differences in what is recorded. This study explores which aspects of writing style are determined by the faculty member offering comment and which by the trainee being commented upon. METHOD The authors compiled in-training evaluation report comment data, generated from 2012 to 2015 by 4 large North American Internal Medicine training programs. The Linguistic Inquiry and Word Count (LIWC) tool was used to categorize and quantify the language contained. Generalizability theory was used to determine whether faculty could be reliably discriminated from one another based on writing style. Correlations and ANOVAs were used to determine what styles were related to faculty or trainee demographics. RESULTS Datasets contained 23-142 faculty who provided 549-2,666 assessments on 161-989 trainees. Faculty could easily be discriminated from one another using a variety of LIWC metrics including word count, words per sentence, and the use of "clout" words. These patterns appeared person specific and did not reflect demographic factors such as gender or rank. These metrics were similarly not consistently associated with trainee factors such as postgraduate year or gender. CONCLUSIONS Faculty seem to have detectable writing styles that are relatively stable across the trainees they assess, which may represent an under-recognized source of construct irrelevance. If written comments are to meaningfully contribute to decision making, we need to understand and account for idiosyncratic writing styles.
Affiliation(s)
- Shiphra Ginsburg
- S. Ginsburg is professor of medicine, Department of Medicine, Faculty of Medicine, University of Toronto, scientist, Wilson Centre for Research in Education, University Health Network, University of Toronto, Toronto, Ontario, Canada, and Canada Research Chair in Health Professions Education; ORCID: http://orcid.org/0000-0002-4595-6650
- Andrea Gingerich
- A. Gingerich is assistant professor, Northern Medical Program, University of Northern British Columbia, Prince George, British Columbia, Canada; ORCID: https://orcid.org/0000-0001-5765-3975
- Jennifer R Kogan
- J.R. Kogan is professor and associate dean for student success and professional development, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania; ORCID: https://orcid.org/0000-0001-8426-9506
- Christopher J Watling
- C.J. Watling is professor and director, Centre for Education Research and Innovation, Schulich School of Medicine & Dentistry, Western University, London, Ontario, Canada; ORCID: https://orcid.org/0000-0001-9686-795X
- Kevin W Eva
- K.W. Eva is professor and director of education research and scholarship, Department of Medicine, and associate director and senior scientist, Centre for Health Education Scholarship, University of British Columbia, Vancouver, British Columbia, Canada; ORCID: http://orcid.org/0000-0002-8672-2500
22
Hahn B, Waring ED, Chacko J, Trovato G, Tice A, Greenstein J. Assessment of Written Feedback for Emergency Medicine Residents. South Med J 2020; 113:451-456. [PMID: 32885265] [DOI: 10.14423/smj.0000000000001142]
Abstract
OBJECTIVES An essential component of resident growth is a learning environment with high-quality feedback. Jackson et al developed criteria for characterizing and assessing the quality of written feedback for internal medicine residents. Our primary goal was to describe feedback characteristics and assess the quality of written feedback for emergency medicine (EM) residents. Our secondary goals were to evaluate the relation between feedback quality and objective outcome measures. METHODS This retrospective study was conducted between July 1, 2016 and July 1, 2018. EM residents with an Accreditation Council for Graduate Medical Education composite score (ACS), an in-service score, and written evaluations completed by an attending physician or EM resident in each of the 2 years of the study period were included. RESULTS Overall, most of the evaluations contained 1 (21%), 2 (23%), or 3 (17%) feedback items. Feedback tended to be positive (82%) and the feedback quality of the evaluations was more likely to be high (44%). There was an association between feedback quality and ACS change (P < 0.0001), but not in-service score change (P = 0.63). Resident evaluations were more likely than attending evaluations to correlate with ACS change (P < 0.00001). CONCLUSIONS The written evaluations contained few individual feedback items. Evaluations generally focused on the feedback characteristics of professionalism and interpersonal communication. The general feedback quality of evaluations tended to be high and correlated with an increase in ACSs.
Affiliation(s)
- Barry Hahn
- From the Department of Emergency Medicine, Staten Island University Hospital, Northwell Health, Staten Island, New York
- Elizabeth D Waring
- From the Department of Emergency Medicine, Staten Island University Hospital, Northwell Health, Staten Island, New York
- Jerel Chacko
- From the Department of Emergency Medicine, Staten Island University Hospital, Northwell Health, Staten Island, New York
- Gabriella Trovato
- From the Department of Emergency Medicine, Staten Island University Hospital, Northwell Health, Staten Island, New York
- Amanda Tice
- From the Department of Emergency Medicine, Staten Island University Hospital, Northwell Health, Staten Island, New York
- Josh Greenstein
- From the Department of Emergency Medicine, Staten Island University Hospital, Northwell Health, Staten Island, New York
23
Ginsburg S, Kogan JR, Gingerich A, Lynch M, Watling CJ. Taken Out of Context: Hazards in the Interpretation of Written Assessment Comments. Acad Med 2020; 95:1082-1088. [PMID: 31651432] [DOI: 10.1097/acm.0000000000003047]
Abstract
PURPOSE Written comments are increasingly valued for assessment; however, a culture of politeness and the conflation of assessment with feedback lead to ambiguity. Interpretation requires reading between the lines, which is untenable with large volumes of qualitative data. For computer analytics to help with interpreting comments, the factors influencing interpretation must be understood. METHOD Using constructivist grounded theory, the authors interviewed 17 experienced internal medicine faculty at 4 institutions between March and July, 2017, asking them to interpret and comment on 2 sets of words: those that might be viewed as "red flags" (e.g., good, improving) and those that might be viewed as signaling feedback (e.g., should, try). Analysis focused on how participants ascribed meaning to words. RESULTS Participants struggled to attach meaning to words presented acontextually. Four aspects of context were deemed necessary for interpretation: (1) the writer; (2) the intended and potential audiences; (3) the intended purpose(s) for the comments, including assessment, feedback, and the creation of a permanent record; and (4) the culture, including norms around assessment language. These contextual factors are not always apparent; readers must balance the inevitable need to interpret others' language with the potential hazards of second-guessing intent. CONCLUSIONS Comments are written for a variety of intended purposes and audiences, sometimes simultaneously; this reality creates dilemmas for faculty attempting to interpret these comments, with or without computer assistance. Attention to context is essential to reduce interpretive uncertainty and ensure that written comments can achieve their potential to enhance both assessment and feedback.
Affiliation(s)
- Shiphra Ginsburg
- S. Ginsburg is professor of medicine, Department of Medicine, Faculty of Medicine, University of Toronto, scientist, Wilson Centre for Research in Education, University Health Network, University of Toronto, Toronto, Ontario, Canada, and Canada Research Chair in Health Professions Education; ORCID: http://orcid.org/0000-0002-4595-6650
- J.R. Kogan is professor of medicine, Department of Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania
- A. Gingerich is assistant professor, Northern Medical Program, University of Northern British Columbia, Prince George, British Columbia, Canada; ORCID: http://orcid.org/0000-0001-5765-3975
- M. Lynch is postdoctoral fellow, Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada
- C.J. Watling is professor, Department of Clinical Neurological Sciences, scientist, Centre for Education Research and Innovation, and associate dean of postgraduate medical education, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada; ORCID: http://orcid.org/0000-0001-9686-795X
24
Ledford R, Burger A, LaRochelle J, Klocksieben F, DeWaay D, O’Brien KE. Exploring Perspectives from Internal Medicine Clerkship Directors in the USA on Effective Narrative Evaluation: Results from the CDIM National Survey. Med Sci Educ 2020; 30:155-161. [PMID: 34457654] [PMCID: PMC8368638] [DOI: 10.1007/s40670-019-00825-y]
Abstract
PURPOSE Clinical performance evaluations play a critical role in determining medical school clerkship grades. This study aimed to provide clarification from clerkship directors in internal medicine on what constitutes an effective and informative narrative description of student performance. METHODS In September 2016, the Clerkship Directors in Internal Medicine (CDIM) electronically administered its annual, voluntary, and confidential cross-sectional survey of its US membership. One section of the survey asked six questions regarding the helpful components of an effective narrative evaluation. Respondents were asked to rate the effectiveness of elements contained within narrative evaluations of students. RESULTS Ninety-five CDIM members responded to the survey with an overall response rate of 74.2%. Descriptions of skills and behaviors were felt to be the most important, followed by a description of the overall synthetic or global assessment level of the student. Descriptions of personality and attitude were the next highest rated feature followed by adjectives describing performance. Length was felt to be the least important component. In free-text comments, several respondents indicated that direct observation of performance and specific examples of skills and behaviors are also desirable. CONCLUSIONS Narrative evaluations of students that explicitly comment on skills, behaviors, and an overarching performance level of the learner are strongly preferred by clerkship directors. Direct observation of clinical performance and giving specific examples of such behaviors give evaluations even more importance. Faculty development on evaluation and assessment should include instruction on these narrative assessment characteristics.
Affiliation(s)
- Robert Ledford
- Department of Internal Medicine, Division of Hospital Medicine, University of South Florida Morsani College of Medicine, 12901 Bruce B. Downs Boulevard, MDC 80, Tampa, FL 33612 USA
- Alfred Burger
- Icahn School of Medicine at Mount Sinai, New York, NY USA
- Jeff LaRochelle
- Department of Medical Education, University of Central Florida College of Medicine, Orlando, FL USA
- Farina Klocksieben
- Department of Internal Medicine, Division of Hospital Medicine, University of South Florida Morsani College of Medicine, 12901 Bruce B. Downs Boulevard, MDC 80, Tampa, FL 33612 USA
- Deborah DeWaay
- Department of Internal Medicine, Division of Hospital Medicine, University of South Florida Morsani College of Medicine, 12901 Bruce B. Downs Boulevard, MDC 80, Tampa, FL 33612 USA
- Kevin E. O’Brien
- Department of Internal Medicine, Division of Hospital Medicine, University of South Florida Morsani College of Medicine, 12901 Bruce B. Downs Boulevard, MDC 80, Tampa, FL 33612 USA
25
Vu JV, Harbaugh CM, De Roo AC, Biesterveld BE, Gauger PG, Dimick JB, Sandhu G. Leadership-Specific Feedback Practices in Surgical Residency: A Qualitative Study. J Surg Educ 2020; 77:45-53. [PMID: 31492642] [PMCID: PMC6944744] [DOI: 10.1016/j.jsurg.2019.08.020]
Abstract
OBJECTIVE The importance of feedback is well recognized in surgical training. Although there is increased focus on leadership as an essential competency in surgical training, it is unclear whether surgical residents receive effective feedback on leadership performance. We performed an exploratory qualitative study with surgical residents to understand current leadership-specific feedback practices in one surgical training program. DESIGN We conducted semistructured interviews with surgical residents. Using line-by-line coding in an iterative process, we focused on feedback on leadership performance to capture both semantic and conceptual data. SETTING The general surgery residency program at the University of Michigan, a tertiary care, academic institution. PARTICIPANTS Residents were purposively selected to include key informants and comprise a balanced sample with respect to postgraduate year, gender, and race. RESULTS Four major themes were identified during the thematic analysis: (1) the importance of feedback for leadership development in residency; (2) inadequacy of current feedback mechanisms; (3) barriers to giving and receiving leadership-specific feedback; and (4) resident-driven recommendations for better leadership feedback. CONCLUSIONS Many surgical residents do not receive effective leadership feedback, although they express strong desire for formal evaluation of leadership skills. Establishing avenues for feedback on leadership performance will help bridge this gap. Additionally, training to give and receive leadership-specific feedback may improve the quality and incorporation of delivered feedback for developing surgeon-leaders.
Affiliation(s)
- Joceline V Vu
- Department of Surgery, University of Michigan, Ann Arbor, Michigan; Center for Healthcare Outcomes and Policy, Ann Arbor, Michigan.
- Calista M Harbaugh
- Department of Surgery, University of Michigan, Ann Arbor, Michigan; Center for Healthcare Outcomes and Policy, Ann Arbor, Michigan
- Ana C De Roo
- Department of Surgery, University of Michigan, Ann Arbor, Michigan; Center for Healthcare Outcomes and Policy, Ann Arbor, Michigan
- Paul G Gauger
- Department of Surgery, University of Michigan, Ann Arbor, Michigan
- Justin B Dimick
- Department of Surgery, University of Michigan, Ann Arbor, Michigan; Center for Healthcare Outcomes and Policy, Ann Arbor, Michigan
- Gurjit Sandhu
- Department of Surgery, University of Michigan, Ann Arbor, Michigan
26
Tekian A, Park YS, Tilton S, Prunty PF, Abasolo E, Zar F, Cook DA. Competencies and Feedback on Internal Medicine Residents' End-of-Rotation Assessments Over Time: Qualitative and Quantitative Analyses. Acad Med 2019; 94:1961-1969. [PMID: 31169541] [PMCID: PMC6882536] [DOI: 10.1097/acm.0000000000002821]
Abstract
PURPOSE To examine how qualitative narrative comments and quantitative ratings from end-of-rotation assessments change for a cohort of residents from entry to graduation, and explore associations between comments and ratings. METHOD The authors obtained end-of-rotation quantitative ratings and narrative comments for 1 cohort of internal medicine residents at the University of Illinois at Chicago College of Medicine from July 2013-June 2016. They inductively identified themes in comments, coded orientation (praising/critical) and relevance (specificity and actionability) of feedback, examined associations between codes and ratings, and evaluated changes in themes and ratings across years. RESULTS Data comprised 1,869 assessments (828 comments) on 33 residents. Five themes aligned with ACGME competencies (interpersonal and communication skills, professionalism, medical knowledge, patient care, and systems-based practice), and 3 did not (personal attributes, summative judgment, and comparison to training level). Work ethic was the most frequent subtheme. Comments emphasized medical knowledge more in year 1 and focused more on autonomy, leadership, and teaching in later years. Most comments (714/828 [86%]) contained high praise, and 412/828 (50%) were very relevant. Average ratings correlated positively with orientation (β = 0.46, P < .001) and negatively with relevance (β = -0.09, P = .01). Ratings increased significantly with each training year (year 1, mean [standard deviation]: 5.31 [0.59]; year 2: 5.58 [0.47]; year 3: 5.86 [0.43]; P < .001). CONCLUSIONS Narrative comments address resident attributes beyond the ACGME competencies and change as residents progress. Lower quantitative ratings are associated with more specific and actionable feedback.
Affiliation(s)
- Ara Tekian
- A. Tekian is professor and associate dean for international affairs, Department of Medical Education, University of Illinois at Chicago College of Medicine, Chicago, Illinois; ORCID: https://orcid.org/0000-0002-9252-1588
- Yoon Soo Park
- Y.S. Park is associate professor, Department of Medical Education, University of Illinois at Chicago College of Medicine, Chicago, Illinois; ORCID: http://orcid.org/0000-0001-8583-4335
- Sarette Tilton
- S. Tilton is a PharmD candidate, University of Illinois at Chicago College of Pharmacy, Chicago, Illinois
- Patrick F. Prunty
- P.F. Prunty is a PharmD candidate, University of Illinois at Chicago College of Pharmacy, Chicago, Illinois
- Eric Abasolo
- E. Abasolo is a PharmD candidate, University of Illinois at Chicago College of Pharmacy, Chicago, Illinois
- Fred Zar
- F. Zar is professor and program director, Department of Medicine, University of Illinois at Chicago College of Medicine, Chicago, Illinois
- David A. Cook
- D.A. Cook is professor of medicine and medical education and associate director, Office of Applied Scholarship and Education Science, and consultant, Division of General Internal Medicine, Mayo Clinic College of Medicine, Rochester, Minnesota; ORCID: https://orcid.org/0000-0003-2383-4633
27
Tran A, Vertes J, Kulai T, Connors L. The role of video-assisted feedback sessions in resident teaching: A pre-post intervention. MedEdPublish 2019; 8:219. [PMID: 38089340] [PMCID: PMC10712475] [DOI: 10.15694/mep.2019.000219.1]
Abstract
Purpose: Despite providing a large component of teaching to trainees, internal medicine residents receive little feedback on their teaching ability. Methods: This was a single-center, mixed methods study of 19 senior internal medicine residents in Canada. Classroom-based teaching sessions delivered by the participants were individually video recorded. The individual recording was then watched by the participant and by two feedback facilitators, who then met for face-to-face feedback. Participants completed a self-reflective exercise after this intervention. Audience members of the recorded session and a post-feedback teaching session completed an evaluation form. Scores from the evaluation forms from each phase were analyzed with the Wilcoxon Signed-Rank Test. Inductive coding was performed for qualitative data from the feedback sessions and reflective exercises. Results: 19 residents participated. There was no statistical difference in the evaluation form scores between the pre-intervention and post-intervention teaching sessions. Mean scores varied from 4.6 to 5.0 out of 5.0 on combined pre- and post-intervention evaluations. 89% of participants found viewing their recorded session useful. 94% of residents stated the intervention was worth continuing. Common themes of feedback and self-evaluation included "time-management," "organization," "communication," and "environment." Conclusion: Video-assisted feedback of teaching improved residents' self-perception of their teaching ability.
28
Wolpaw J, Saddawi-Konefka D, Dwivedi P, Toy S. Faculty Underestimate Resident Desire for Constructive Feedback and Overestimate Retaliation. J Educ Perioper Med 2019; 21:E634. [PMID: 32123699] [PMCID: PMC7039676]
Abstract
BACKGROUND Constructive feedback from faculty to trainees is essential to promoting trainees' learning yet is rarely provided. Resident physicians want more feedback than they receive but it is unclear whether faculty know this. We explored faculty and resident impressions of constructive feedback and the barriers to giving more. We hypothesized that residents want more constructive feedback; however, faculty believe that residents do not want constructive feedback and would retaliate against faculty who give it. METHODS Between January and March 2019, we performed a cross-sectional survey study of anesthesiology residents and teaching faculty at two large academic centers. All residents and faculty were eligible to participate. The survey assessed satisfaction with written and in-person feedback and predicted responses to specific examples, in addition to perceived barriers. RESULTS The survey was distributed to 156 residents and 260 faculty across the two institutions: 116 residents (74% response rate) and 127 faculty (49% response rate) responded. Eighty-eight percent of residents would want to receive feedback similar to the examples, whereas only 60% of faculty responded that they thought residents would want feedback. Ninety-eight percent of residents said they would not retaliate. Barriers to providing feedback included time constraints, insufficient confidence/training, fear of retaliation, and feelings of futility. CONCLUSIONS Residents were significantly more likely to want to receive constructive feedback than the faculty members had predicted. Further, residents are unlikely to retaliate against faculty who provide feedback. Addressing barriers may help increase the amount of constructive feedback that faculty provide and resident satisfaction with feedback received.
29
Milestone Implementation's Impact on Narrative Comments and Perception of Feedback for Internal Medicine Residents: a Mixed Methods Study. J Gen Intern Med 2019; 34:929-935. [PMID: 30891692] [PMCID: PMC6544770] [DOI: 10.1007/s11606-019-04946-3]
Abstract
BACKGROUND Feedback is a critical element of graduate medical education. Narrative comments on evaluation forms are a source of feedback for residents. As a shared mental model for performance, milestone-based evaluations may impact narrative comments and resident perception of feedback. OBJECTIVE To determine if milestone-based evaluations impacted the quality of faculty members' narrative comments on evaluations and, as an extension, residents' perception of feedback. DESIGN Concurrent mixed methods study, including qualitative analysis of narrative comments and survey of resident perception of feedback. PARTICIPANTS Seventy internal medicine residents and their faculty evaluators at the University of Utah. APPROACH Faculty narrative comments from 248 evaluations pre- and post-milestone implementation were analyzed for quality and Accreditation Council for Graduate Medical Education competency by area of strength and area for improvement. Seventy residents were surveyed regarding quality of feedback pre- and post-milestone implementation. KEY RESULTS Qualitative analysis of narrative comments revealed nearly all evaluations pre- and post-milestone implementation included comments about areas of strength but were frequently vague and not related to competencies. Few evaluations included narrative comments on areas for improvement, but these were of higher quality compared to areas of strength (p < 0.001). Overall resident perception of quality of narrative comments was low and did not change following milestone implementation (p = 0.562) for the 86% of residents (N = 60/70) who completed the pre- and post-surveys. CONCLUSIONS The quality of narrative comments was poor, and there was no evidence of improved quality following introduction of milestone-based evaluations. Comments on areas for improvement were of higher quality than areas of strength, suggesting an area for targeted intervention. Residents' perception of feedback quality did not change following implementation of milestone-based evaluations, suggesting that in the post-milestone era, internal medicine educators need to utilize additional interventions to improve quality of feedback.
30
Albano S, Quadri SA, Farooqui M, Arangua L, Clark T, Fischberg GM, Tayag EC, Siddiqi J. Resident Perspective on Feedback and Barriers for Use as an Educational Tool. Cureus 2019; 11:e4633. [PMID: 31312559] [PMCID: PMC6623994] [DOI: 10.7759/cureus.4633]
Abstract
Background Feedback in physician graduate medical education is not clearly defined. Some parties may view questioning as a form of feedback, others the conversations over lunch, some the comments in the operating room (OR), and still others the written evaluation at planned meetings. The lack of clarity in defining what constitutes feedback is concerning when this is considered a fundamental means of education to enhance practices and care for patients. If residents do not recognize they are receiving feedback, or the response to feedback is met with opposition, then feedback as an educational device can be limited. For this manuscript, feedback is defined as written or verbal comments regarding medical knowledge, performance, technique, or patient care. Objective This study attempts to identify barriers to feedback by identifying attitudes toward feedback processes through a questionnaire. Methods Ten questions were provided to residents at a single institution representing emergency medicine, family medicine, internal medicine, neurology, and neurosurgery during the 2017-2018 academic year. Response was voluntary, and the study was granted exemption by the local institutional review board since no identifying information was collected to link responses to specific residents. Questions were formulated to identify how positive or negative a resident felt toward specific aspects of feedback. Results Of the possible 84 resident respondents, 40 residents participated, reflecting a response rate of approximately 48%. Questionnaires revealed that 22.5% of respondents found feedback to be a stressful event. Sixty-seven point five percent (67.5%) of resident respondents found the prompt that they were about to receive feedback concerning. Only 2.5% of residents identified a meeting with the program director as a sign that the resident may be doing well. Appointments for feedback were viewed as a positive event in 12.5% of respondents. Ninety-five percent (95%) of residents do not feel that all feedback will affect their permanent record. Ten percent (10%) of residents identified receiving feedback as a positive event. Ninety-five percent (95%) of residents indicated that they have actively tried to change behavior or practices based on feedback. Forty percent (40%) of residents found themselves censoring "negative" feedback. Conclusions Barriers to feedback include the inability to present sensitive subjects in a constructive manner and superficial relationships between the evaluator and resident physician. Research directed at addressing these barriers could lead to improved use of feedback as an educational tool.
Affiliation(s)
- Stephen Albano
- Neurosurgery, Desert Regional Medical Center, Palm Springs, USA
- Syed A Quadri
- Neurosurgery, California Institute of Neurosciences, Thousand Oaks, USA
- Luis Arangua
- Neurology, Desert Regional Medical Center, Palm Springs, USA
- Thomas Clark
- Neurology, Desert Regional Medical Center, Palm Springs, USA
- Glenn M Fischberg
- Neurology and Neurosurgery, Desert Regional Medical Center, Palm Springs, USA
- Emilio C Tayag
- Neurology and Neurosurgery, Desert Regional Medical Center, Palm Springs, USA
- Javed Siddiqi
- Neurosurgery, Desert Regional Medical Center, Palm Springs, USA
31
Abraham RM, Singaram VS. Using deliberate practice framework to assess the quality of feedback in undergraduate clinical skills training. BMC Med Educ 2019; 19:105. [PMID: 30975213] [PMCID: PMC6460682] [DOI: 10.1186/s12909-019-1547-5]
Abstract
BACKGROUND In this research paper we report on the quality of feedback provided in the logbooks of pre-clinical undergraduate students based on a model of 'actionable feedback'. Feedback to clinical learners about their performance is crucial to their learning, which ultimately impacts on their development into competent clinicians. Due to students' concerns regarding the inconsistency and quality of feedback provided by clinicians, a structured feedback improvement strategy to move feedback forward was added to the clinical skills logbook. The instrument was also extended for peer assessment. This study aims to assess the quality of feedback using the deliberate practice framework. METHODS A feedback scoring system was used to retrospectively assess the quality of tutor and peer logbook feedback provided to second and third year medical students to identify deliberate practice components i.e. task, performance gap and action plan. The sample consisted of 425 second year and 600 third year feedback responses over a year. RESULTS All three deliberate practice components were observed in the majority of the written feedback for both classes. The frequency was higher in peer (83%, 89%) than tutor logbook assessments (51%, 67%) in both classes respectively. Average tutor and peer task, gap and action feedback scores ranged from 1.84-2.07 and 1.93-2.21 respectively. The overall quality of feedback provided by the tutor and peer was moderate and less specific (average score < or = 2). The absence of the three components was noted in only 1% of the feedback responses in both 2nd and 3rd year. CONCLUSION This study found that adding in a feed-forward strategy to the logbooks increased the overall quality of tutor and peer feedback as the task, gap and action plans were described. Deliberate practice framework provides an objective assessment of tutor and peer feedback quality and can be used for faculty development and training. 
The findings also suggest that ratings from the tool can serve as guidelines for giving feedback providers feedback on the quality of the feedback they provide: specifically describing a task and performance gap, and offering a learning plan as feed-forward.
Affiliation(s)
- Reina M Abraham
- Clinical and Professional Practice, School of Clinical Medicine, College of Health Sciences, University of KwaZulu-Natal, Durban, 4000, South Africa.
- Veena S Singaram
- Clinical and Professional Practice, School of Clinical Medicine, College of Health Sciences, University of KwaZulu-Natal, Durban, 4000, South Africa
32
Bing-You R, Varaklis K, Hayes V, Trowbridge R, Kemp H, McKelvy D. The Feedback Tango: An Integrative Review and Analysis of the Content of the Teacher-Learner Feedback Exchange. Academic Medicine 2018; 93:657-663. PMID: 28991848; DOI: 10.1097/acm.0000000000001927.
Abstract
PURPOSE To conduct an integrative review and analysis of the literature on the content of feedback to learners in medical education. METHOD Following completion of a scoping review in 2016, the authors analyzed a subset of articles published through 2015 describing the analysis of feedback exchange content in various contexts: audiotapes, clinical examination, feedback cards, multisource feedback, videotapes, and written feedback. Two reviewers extracted data from these articles and identified common themes. RESULTS Of the 51 included articles, about half (49%) were published since 2011. Most involved medical students (43%) or residents (43%). A leniency bias was noted in many (37%), as there was frequently reluctance to provide constructive feedback. More than one-quarter (29%) indicated the feedback was low in quality (e.g., too general, limited amount, no action plans). Some (16%) indicated faculty dominated conversations, did not use feedback forms appropriately, or provided inadequate feedback, even after training. Multiple feedback tools were used, with some articles (14%) describing varying degrees of use, completion, or legibility. Some articles (14%) noted the impact of the gender of the feedback provider or learner. CONCLUSIONS The findings reveal that the exchange of feedback is troubled by low-quality feedback, leniency bias, faculty deficient in feedback competencies, challenges with multiple feedback tools, and gender impacts. Using the tango dance form as a metaphor for this dynamic partnership, the authors recommend ways to improve feedback for teachers and learners willing to partner with each other and engage in the complexities of the feedback exchange.
Affiliation(s)
- Robert Bing-You
- R. Bing-You is professor, Tufts University School of Medicine, Boston, Massachusetts, and vice president for medical education, Maine Medical Center, Portland, Maine. K. Varaklis is clinical associate professor, Tufts University School of Medicine, Boston, Massachusetts, and designated institutional official, Maine Medical Center, Portland, Maine. V. Hayes is clinical assistant professor, Tufts University School of Medicine, Boston, Massachusetts, and faculty member, Department of Family Medicine, Maine Medical Center, Portland, Maine. R. Trowbridge is associate professor, Tufts University School of Medicine, Boston, Massachusetts, and director of undergraduate medical education, Department of Medicine, Maine Medical Center, Portland, Maine. H. Kemp is medical librarian, Maine Medical Center, Portland, Maine. D. McKelvy is manager of library and knowledge services, Maine Medical Center, Portland, Maine
33
Tiwari V, Kumar AB. A Novel Method of Evaluating Key Factors for Success in a Multifaceted Critical Care Fellowship Using Data Envelopment Analysis. Anesth Analg 2018; 126:260-269. PMID: 28742779; DOI: 10.1213/ane.0000000000002260.
Abstract
BACKGROUND The current system of summative multi-rater evaluations and standardized tests to determine readiness to graduate from critical care fellowships has limitations. We sought to pilot the use of data envelopment analysis (DEA) to assess which aspects of the fellowship program contribute the most to an individual fellow's success. DEA is a nonparametric operations research technique that uses linear programming to determine the technical efficiency of an entity based on its relative usage of resources in producing an outcome. DESIGN Retrospective cohort study. SUBJECTS AND SETTING Critical care fellows (n = 15) in an Accreditation Council for Graduate Medical Education (ACGME)-accredited fellowship at a major academic medical center in the United States. METHODS After obtaining institutional review board approval for this retrospective study, we analyzed the data of 15 anesthesiology critical care fellows from academic years 2013-2015. The input-oriented DEA model develops a composite score for each fellow based on multiple inputs and outputs. The inputs were the didactic sessions attended and the ratio of clinical duty work hours to procedures performed (work intensity index); the outputs were the Multidisciplinary Critical Care Knowledge Assessment Program (MCCKAP) score and summative evaluations of fellows. RESULTS A DEA efficiency score ranging from 0 to 1 was generated for each fellow. Five fellows were rated as DEA efficient, and 10 fellows fell into the DEA-inefficient group. The model was able to forecast the level of effort needed for each inefficient fellow to achieve outputs similar to those of their best-performing peers. The model also identified the work intensity index as the key element characterizing the best performers in our fellowship.
CONCLUSIONS DEA is a feasible method of objectively evaluating peer performance in a critical care fellowship beyond summative evaluations alone and can potentially be a powerful tool to guide individual performance during the fellowship.
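As a hedged illustration of the technique named in this abstract (not the authors' model or their fellowship data), an input-oriented, constant-returns DEA efficiency score can be computed as a small linear program: minimize the input-shrinkage factor theta subject to a composite peer, built from non-negative weights on all units, matching the evaluated unit's outputs while using at most theta times its inputs. The units and input/output values below are toy data.

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of unit o (0 < score <= 1).

    X: (n_units, n_inputs) resource usage; Y: (n_units, n_outputs) results.
    Decision variables are [theta, lambda_1 .. lambda_n]."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(n + 1)
    c[0] = 1.0                        # minimize theta
    A_ub = np.zeros((m + s, n + 1))
    b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -X[o]               # sum_j lambda_j * x_ij - theta * x_io <= 0
    A_ub[:m, 1:] = X.T
    A_ub[m:, 1:] = -Y.T               # -sum_j lambda_j * y_rj <= -y_ro
    b_ub[m:] = -Y[o]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

# hypothetical fellows: 2 inputs (e.g. duty hours, sessions), 1 output (e.g. score)
X = np.array([[2.0, 4.0], [4.0, 2.0], [4.0, 4.0], [6.0, 6.0]])
Y = np.array([[1.0], [1.0], [1.0], [1.0]])
scores = [dea_efficiency(X, Y, o) for o in range(len(X))]
# units 0 and 1 lie on the efficient frontier; units 2 and 3 do not
```

A score of 1 marks a "DEA efficient" unit; an inefficient unit's score gives the proportional input reduction needed to match its best-performing peers, mirroring the "level of effort" forecast the abstract describes.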
Affiliation(s)
- Vikram Tiwari
- From the Departments of Anesthesiology and Biomedical Informatics
- Avinash B Kumar
- Division of Critical Care, Department of Anesthesiology, Vanderbilt University, Nashville, Tennessee
34
Cheung WJ, Dudek NL, Wood TJ, Frank JR. Supervisor-trainee continuity and the quality of work-based assessments. Medical Education 2017; 51:1260-1268. PMID: 28971502; DOI: 10.1111/medu.13415.
Abstract
CONTEXT Work-based assessments (WBAs) represent an increasingly important means of reporting expert judgements of trainee competence in clinical practice. However, the quality of WBAs completed by clinical supervisors is of concern. The episodic and fragmented interaction that often occurs between supervisors and trainees has been proposed as a barrier to the completion of high-quality WBAs. OBJECTIVES The primary purpose of this study was to determine the effect of supervisor-trainee continuity on the quality of assessments documented on daily encounter cards (DECs), a common form of WBA. The relationship between trainee performance and DEC quality was also examined. METHODS Daily encounter cards representing three differing degrees of supervisor-trainee continuity (low, intermediate, high) were scored by two raters using the Completed Clinical Evaluation Report Rating (CCERR), a previously published nine-item quantitative measure of DEC quality. An analysis of variance (ANOVA) was performed to compare mean CCERR scores among the three groups. Linear regression analysis was conducted to examine the relationship between resident performance and DEC quality. RESULTS Differences in mean CCERR scores were observed between the three continuity groups (p = 0.02); however, the magnitude of the absolute differences was small (partial eta-squared = 0.03) and not educationally meaningful. Linear regression analysis demonstrated a significant inverse relationship between resident performance and CCERR score (p < 0.001, r2 = 0.18). This inverse relationship was observed in both groups representing on-service residents (p = 0.001, r2 = 0.25; p = 0.04, r2 = 0.19), but not in the off-service group (p = 0.62, r2 = 0.05). CONCLUSIONS Supervisor-trainee continuity did not have an educationally meaningful influence on the quality of assessments documented on DECs.
However, resident performance was found to affect assessor behaviours in the on-service groups, whereas DEC quality remained poor regardless of performance in the off-service group. The findings suggest that greater attention should be given to determining ways of improving the quality of assessments reported for off-service residents, as well as for those residents demonstrating appropriate progression of clinical competence.
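The statistics this abstract reports, a one-way ANOVA across group means with a partial eta-squared effect size, can be sketched as follows; the group scores here are invented for illustration and are not the study's CCERR data (for a single-factor design, partial eta-squared reduces to SS_between / SS_total).

```python
from scipy import stats

# hypothetical CCERR scores for three continuity groups (not the study data)
low  = [18.2, 19.1, 17.5, 20.0, 18.8]
mid  = [19.0, 20.3, 18.9, 21.1, 19.6]
high = [20.1, 19.8, 21.0, 20.5, 19.9]

# F test for any difference among the three group means
f_stat, p_value = stats.f_oneway(low, mid, high)

# effect size: eta-squared = between-group sum of squares / total sum of squares
groups = (low, mid, high)
all_scores = [x for g in groups for x in g]
grand_mean = sum(all_scores) / len(all_scores)
ss_total = sum((x - grand_mean) ** 2 for x in all_scores)
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
eta_squared = ss_between / ss_total
```

A significant p-value paired with a small eta-squared (the abstract reports 0.03) is exactly the pattern of a statistically detectable but not educationally meaningful difference.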
Affiliation(s)
- Warren J Cheung
- Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Nancy L Dudek
- Division of Physical Medicine and Rehabilitation, Department of Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Timothy J Wood
- Department of Innovation in Medical Education, University of Ottawa, Ottawa, Ontario, Canada
- Jason R Frank
- Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Royal College of Physicians and Surgeons of Canada, Ottawa, Ontario, Canada
35
Loeppky C, Babenko O, Ross S. Examining gender bias in the feedback shared with family medicine residents. Education for Primary Care 2017; 28:319-324. DOI: 10.1080/14739879.2017.1362665.
Affiliation(s)
- Chantal Loeppky
- Department of Family Medicine, University of Alberta, Edmonton, Canada
- Oksana Babenko
- Department of Family Medicine, University of Alberta, Edmonton, Canada
- Shelley Ross
- Department of Family Medicine, University of Alberta, Edmonton, Canada
36
Hatala R, Sawatsky AP, Dudek N, Ginsburg S, Cook DA. Using In-Training Evaluation Report (ITER) Qualitative Comments to Assess Medical Students and Residents: A Systematic Review. Academic Medicine 2017; 92:868-879. PMID: 28557953; DOI: 10.1097/acm.0000000000001506.
Abstract
PURPOSE In-training evaluation reports (ITERs) constitute an integral component of medical student and postgraduate physician trainee (resident) assessment. ITER narrative comments have received less attention than the numeric scores. The authors sought both to determine what validity evidence informs the use of narrative comments from ITERs for assessing medical students and residents and to identify evidence gaps. METHOD Reviewers searched for relevant English-language studies in MEDLINE, EMBASE, Scopus, and ERIC (last search June 5, 2015), and in reference lists and author files. They included all original studies that evaluated ITERs for qualitative assessment of medical students and residents. Working in duplicate, they selected articles for inclusion, evaluated quality, and abstracted information on validity evidence using Kane's framework (inferences of scoring, generalization, extrapolation, and implications). RESULTS Of 777 potential articles, 22 met inclusion criteria. The scoring inference is supported by studies showing that rich narratives are possible, that changing the prompt can stimulate more robust narratives, and that comments vary by context. Generalization is supported by studies showing that narratives reach thematic saturation and that analysts make consistent judgments. Extrapolation is supported by favorable relationships between ITER narratives and numeric scores from ITERs and non-ITER performance measures, and by studies confirming that narratives reflect constructs deemed important in clinical work. Evidence supporting implications is scant. CONCLUSIONS The use of ITER narratives for trainee assessment is generally supported, except that evidence is lacking for implications and decisions. Future research should seek to confirm implicit assumptions and evaluate the impact of decisions.
Affiliation(s)
- Rose Hatala
- R. Hatala is associate professor of medicine, Faculty of Medicine, and director, Clinical Educator Fellowship, Centre for Health Education Scholarship, University of British Columbia, Vancouver, British Columbia, Canada. A.P. Sawatsky is assistant professor of medicine and senior associate consultant, Division of General Internal Medicine, Mayo Clinic College of Medicine, Rochester, Minnesota. N. Dudek is associate professor, Faculty of Medicine, University of Ottawa, Ottawa, Ontario, Canada. S. Ginsburg is professor, Department of Medicine, Faculty of Medicine, University of Toronto, scientist, Wilson Centre for Research in Education, University Health Network/University of Toronto, and staff physician, Mount Sinai Hospital, Toronto, Ontario, Canada. D.A. Cook is professor of medicine and medical education, associate director, Mayo Clinic Online Learning, and consultant, Division of General Internal Medicine, Mayo Clinic College of Medicine, Rochester, Minnesota
37
Blankush JM, Shah BJ, Barnett SH, Badran G, Mercado A, Karani R, Muller D, Leitman IM. What are the associations between the quantity of faculty evaluations and residents' perception of quality feedback? Ann Med Surg (Lond) 2017; 16:40-43. PMID: 28386393; PMCID: PMC5369264; DOI: 10.1016/j.amsu.2017.03.001.
Abstract
Objectives To determine whether there is a correlation between the number of evaluations submitted by faculty and the perception of the quality of feedback reported by trainees on a yearly survey. Method 147 ACGME-accredited training programs sponsored by a single medical school were included in the analysis. Eighty-seven programs (49 core residency programs and 38 advanced training programs) with 4 or more trainees received ACGME survey summary data for academic year 2013-2014. Resident ratings of satisfaction with feedback were analyzed against the number of evaluations completed per resident during the same period. Correlation (R-squared) was assessed using Pearson's correlation coefficient. Results 177,096 evaluations were distributed to the 87 programs, of which 117,452 were completed (66%). On average, faculty submitted 33.9 evaluations per resident. Core residency programs had a greater number of evaluations per resident than fellowship programs (39.2 vs. 27.1, respectively; p = 0.15). The average score for the "satisfied with feedback after assignment" survey question was 4.2 (range 2.2-5.0). There was no overall correlation between the number of evaluations per resident and the residents' perception of feedback from faculty across medical, surgical or hospital-based programs. Conclusions Resident perception of feedback is not correlated with the number of faculty evaluations. An emphasis on faculty summative evaluation of resident performance is important but appears to miss the mark as a replacement for ongoing, data-driven, structured resident feedback. Understanding the difference between evaluation and feedback is a global concept that is important for all medical educators and learners. Residents and fellows do not perceive regular evaluations as feedback, and the quantity of faculty evaluations does not correlate with residents' perception of quality feedback.
Greater emphasis should be placed on instructing faculty to provide regular, timely, data-driven feedback to residents and fellows with specific comments on performance. Faculty summative evaluation of resident performance is important, but it is not a replacement for structured feedback.
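The correlation analysis this abstract describes can be reproduced with a plain Pearson coefficient; the program-level numbers below are hypothetical stand-ins, not the Mount Sinai data.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient of two equal-length lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# hypothetical programs: evaluations completed per resident vs. mean
# "satisfied with feedback" rating on a 1-5 scale
evals_per_resident = [12, 25, 33, 41, 58, 70]
satisfaction = [4.1, 3.8, 4.4, 4.0, 4.3, 3.9]
r = pearson_r(evals_per_resident, satisfaction)
r_squared = r ** 2  # near zero here: quantity does not track perceived quality
```

An R-squared close to zero, as in the study's finding of no overall correlation, means the number of evaluations explains almost none of the variance in residents' satisfaction with feedback.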
Affiliation(s)
- Joseph M Blankush
- Department of Medical Education, Icahn School of Medicine at Mount Sinai, USA
- Brijen J Shah
- Department of Medical Education, Icahn School of Medicine at Mount Sinai, USA
- Scott H Barnett
- Department of Medical Education, Icahn School of Medicine at Mount Sinai, USA
- Gaber Badran
- Department of Medical Education, Icahn School of Medicine at Mount Sinai, USA
- Amanda Mercado
- Department of Medical Education, Icahn School of Medicine at Mount Sinai, USA
- Reena Karani
- Department of Medical Education, Icahn School of Medicine at Mount Sinai, USA
- David Muller
- Department of Medical Education, Icahn School of Medicine at Mount Sinai, USA
- I Michael Leitman
- Department of Medical Education, Icahn School of Medicine at Mount Sinai, USA
38
Andolsek KM. Improving the Medical Student Performance Evaluation to Facilitate Resident Selection. Academic Medicine 2016; 91:1475-1479. PMID: 27603040; DOI: 10.1097/acm.0000000000001386.
Abstract
The Medical Student Performance Evaluation (MSPE) was introduced as a refinement of the prior "dean's letter" to provide residency program directors with a standardized comprehensive assessment of a medical student's performance throughout medical school. The author argues that, although the MSPE was created with good intentions, many have questioned its efficacy in predicting performance during residency. The author asserts that, despite decades of use and some acknowledged improvement, the MSPE remains a suboptimal tool for informing program directors' decisions about which applicants to interview and rank. In the current approach to MSPEs, there may even be some inherent conflicts of interest that cannot be overcome. In January 2015, an MSPE Task Force was created to review the MSPE over three years and recommend changes to its next iteration. The author believes, however, that expanding this collaborative effort between undergraduate and graduate medical education and other stakeholders could optimize the MSPE's standardization and transparency. The author offers six recommendations for achieving this goal: developing a truly standardized MSPE template; improving faculty accountability in student assessment; enhancing transparency in the MSPE; reconsidering the authorship responsibility of the MSPE; including assessment of compliance with administrative tasks and peer assessments in student evaluations; and embracing milestones for evaluation of medical student performance.
Affiliation(s)
- Kathryn M Andolsek
- K.M. Andolsek is professor of community and family medicine and assistant dean for premedical education, Duke University School of Medicine, Durham, North Carolina
39
Gulbas L, Guerin W, Ryder HF. Does what we write matter? Determining the features of high- and low-quality summative written comments of students on the internal medicine clerkship using pile-sort and consensus analysis: a mixed-methods study. BMC Medical Education 2016; 16:145. PMID: 27177917; PMCID: PMC4866272; DOI: 10.1186/s12909-016-0660-y.
Abstract
BACKGROUND Written comments by medical student supervisors provide the written foundation for grade narratives and deans' letters and play an important role in students' professional development. Written comments are widely used, but little has been published about their quality. We hypothesized that medical students share an understanding of the qualities inherent to high-quality and low-quality narrative comments, and we aimed to determine the features that define high- and low-quality comments. METHODS Using the well-established anthropological pile-sort method, medical students sorted written comments into 'helpful' and 'unhelpful' piles, then were interviewed to determine how they evaluated comments. We used multidimensional scaling and cluster analysis to analyze the data, revealing how written comments were sorted across student participants. We calculated the degree of shared knowledge to determine the level of internal validity in the data. We transcribed and coded data elicited during the structured interview to contextualize the students' answers. Length of comment was compared using one-way analysis of variance; comment valence and the frequency with which comments were considered helpful were analyzed by chi-square. RESULTS Analysis of written comments revealed four distinct clusters. Cluster A comments reinforced good behaviors or gave constructive criticism for how changes could be made. Cluster B comments exhorted students to continue non-specific behaviors already exhibited. Cluster C comments used grading rubric terms without giving student-specific examples. Cluster D comments used sentence fragments lacking verbs and punctuation. Student data exhibited a strong fit to the consensus model, demonstrating that medical students share a robust model of the attributes of helpful and unhelpful comments. There was no correlation between the valence of a comment and its perceived helpfulness.
CONCLUSIONS Students find helpful those comments that demonstrate knowledge of the student and provide specific examples of appropriate behavior to be reinforced or inappropriate behavior to be eliminated; they find non-actionable, non-specific comments least helpful. Our research and analysis allow us to make recommendations for faculty development around written feedback.
Affiliation(s)
- Lauren Gulbas
- School of Social Work, The University of Texas, Austin, TX, USA
- William Guerin
- Geisel School of Medicine at Dartmouth, Hanover, NH, USA
- Hilary F Ryder
- Geisel School of Medicine at Dartmouth, Hanover, NH, USA
- Department of Medicine, Dartmouth-Hitchcock Medical Center, One Medical Center Drive, Lebanon, NH, 03784, USA
40
The Art and Science of Learning, Teaching, and Delivering Feedback in Psychosomatic Medicine. Psychosomatics 2016; 57:31-40. DOI: 10.1016/j.psym.2015.09.006.
41
Abstract
PURPOSE We examined the evaluations given by nurses to obstetrics and gynecology residents to estimate whether gender bias was evident. BACKGROUND Women receive more negative feedback and evaluations than men, from raters of both sexes. Some suggest that, to be successful in traditionally male roles such as surgeon, women must manifest a warmth-related (communal) rather than competence-related (agentic) demeanor. Compared with male residents, female residents experience more interpersonal difficulties and less help from female nurses. We examined feedback provided to residents by female nurses. METHODS We examined Professional Associate Questionnaires (2006-2014) using a mixed-methods design. We compared scores per training year by gender using the Mann-Whitney test and linear regression adjusting for resident and nurse cohorts. Using grounded theory analysis, we developed a coding system for blinded comments based on principles of effective feedback, evaluation of medical learners, and impression management. Chi-square tests examined the proportions of negative and positive, and communal and agentic, comments between genders. RESULTS We examined 2,202 evaluations: 397 (18%) for 10 men and 1,805 (82%) for 34 women. Twenty-three compliments (eg, "Great resident!") were excluded. Evaluations per training year varied: men n=77-134; women n=384-482. Postgraduate year (PGY)-1, PGY-2, and PGY-4 women had lower mean ratings (P<.035); when adjusted, the difference remained significant in PGY-2 (mean for women 1.5±0.6 compared with 1.7±0.5 for men; P=.001). PGY-1 women received disproportionately fewer positive and more negative agentic comments than PGY-1 men (positive = 17.3% compared with 40%; negative = 17.3% compared with 3.3%, respectively; P=.041). CONCLUSION Evidence of gender bias in evaluations emerged; albeit subtle, women received harsher feedback as lower-level residents than men did. Training in effective evaluation and gender bias management is warranted.