1
The effect of gender dyads on the quality of narrative assessments of general surgery trainees. Am J Surg 2021; 224:179-184. PMID: 34911639. DOI: 10.1016/j.amjsurg.2021.12.001.
Abstract
BACKGROUND: Prior studies have shown that gender can influence how learners are assessed and the feedback they receive. We investigated the quality of faculty narrative comments in general surgery trainee evaluations using trainee-assessor gender dyads.
METHODS: Narrative assessments of surgical trainees at the University of British Columbia were collected and rated using the McMaster Narrative Comment Rating Scale (MNCRS). Variables from the MNCRS were entered into a generalized linear mixed model to explore the impact of gender dyads on the quality of narrative feedback.
RESULTS: 2,469 assessments were collected. Women assessors tended to give higher-quality comments than men assessors (p's < 0.05). Comments from men assessors to women trainees were significantly more positive than comments from men assessors to men trainees (p = 0.02). Men assessors also gave women trainees a higher ratio of reinforcing to corrective comments than they gave men trainees (p < 0.01).
CONCLUSIONS: There are significant differences in the quality of faculty feedback to trainees by gender dyad. A range of solutions to improve feedback quality and reduce these differences is discussed.
2
Roshan A, Wagner N, Acai A, Emmerton-Coughlin H, Sonnadara RR, Scott TM, Karimuddin AA. Comparing the Quality of Narrative Comments by Rotation Setting. J Surg Educ 2021; 78:2070-2077. PMID: 34301523. DOI: 10.1016/j.jsurg.2021.06.012.
Abstract
OBJECTIVE: To investigate the effect of rotation setting on trainee-directed narrative comments within a Canadian general surgery residency program. The primary outcome was to use the McMaster Narrative Comment Rating Scale (MNCRS) to evaluate the quality of narrative comments across five domains: valence of language, degree of correction versus reinforcement, specificity, actionability, and overall usefulness. As distributed medical education becomes more prevalent in the postgraduate training context, delineating differences in feedback between sites will be imperative, as such differences may affect how narrative comments are interpreted by clinical competency committee (CCC) members.
DESIGN, SETTING, AND PARTICIPANTS: A retrospective analysis of 2,469 assessments obtained between July 1, 2014 and May 5, 2019 from the General Surgery Residency Program at the University of British Columbia (UBC) was conducted. Narrative comments were rated using the MNCRS, a validated instrument for evaluating the quality of narrative comments. A repeated-measures analysis of variance (ANOVA) was conducted to explore the impact of rotation setting (academic, urban tertiary, distributed urban, and distributed rural) on the quality of narrative feedback.
RESULTS: Overall, the quality of the narrative comments varied substantially between and within rotation settings. Academic sites tended to provide more actionable comments (p = 0.01) and more corrective versus reinforcing comments, compared with other sites (p's < 0.01). Comments produced in the urban tertiary rotation setting were consistently lower in quality across all scale categories compared with other settings (p's < 0.01).
CONCLUSION: The type of rotation setting has a significant effect on the quality of faculty feedback for trainees. Faculty development on the provision of feedback is necessary regardless of rotation setting, and should combine rotation-specific needs with overarching program goals to ensure that trainees and clinical competence committees receive high-quality narrative comments.
Affiliation(s)
- Aishwarya Roshan
- University of British Columbia, Vancouver, British Columbia, Canada.
- Natalie Wagner
- Office of Professional Development & Educational Scholarship, Queen's University, Kingston, Ontario, Canada
- Anita Acai
- Department of Psychology, Neuroscience & Behavior, McMaster University, Hamilton, Ontario, Canada; Department of Psychiatry and Behavioural Neurosciences, McMaster University, Hamilton, Ontario, Canada; Office of Education Science, Department of Surgery, McMaster University, Hamilton, Ontario, Canada
- Heather Emmerton-Coughlin
- Department of Surgery, University of British Columbia, Vancouver, British Columbia, Canada; Department of Surgery, Royal Jubilee Hospital, Victoria, British Columbia, Canada
- Ranil R Sonnadara
- Office of Education Science, Department of Surgery, McMaster University, Hamilton, Ontario, Canada; Department of Surgery, University of Toronto, Toronto, Ontario, Canada
- Tracy M Scott
- Department of Surgery, University of British Columbia, Vancouver, British Columbia, Canada; Department of Surgery, St. Paul's Hospital, Vancouver, British Columbia, Canada
- Ahmer A Karimuddin
- Department of Surgery, University of British Columbia, Vancouver, British Columbia, Canada; Department of Surgery, St. Paul's Hospital, Vancouver, British Columbia, Canada
3
Kazevman G, Ng JCY, Marshall JL, Slater M, Leung FH, Guiang CB. Challenges for Family Medicine Residents in Attaining the CanMEDS Professional Role: A Thematic Analysis of Preceptor Field Notes. Acad Med 2021; 96:1598-1602. PMID: 34039855. DOI: 10.1097/acm.0000000000004184.
Abstract
PURPOSE: According to the Canadian Medical Education Directives for Specialists (CanMEDS) framework, which describes the abilities physicians require to effectively meet the health care needs of the people they serve, one of the roles of the competent physician is that of a professional. Through examination of preceptor field notes on resident performance, the authors identified aspects of this role with which family medicine residents struggle.
METHOD: The authors used a structured thematic analysis in this qualitative study to explore the written feedback postgraduate medical learners receive at the University of Toronto Department of Family and Community Medicine. Seventy field notes written between 2015 and 2017 by clinical educators for residents who scored "below expectation" in the CanMEDS professional role were analyzed. From free-text comments, the authors derived inductive codes, amalgamated the codes into themes, and measured the frequency of occurrence of the codes. The authors then mapped the themes to the key competencies of the CanMEDS professional role.
RESULTS: Seven themes emerged from the field notes that described reasons for poor performance. Lack of collegiality, failure to adhere to standards of practice or legal guidelines, and lack of reflection or self-learning were identified as major issues. Other themes were failure to maintain boundaries, taking actions that could have a negative impact on patient care, failure to maintain patient confidentiality, and failure to engage in self-care. When the themes were mapped to the key competencies of the CanMEDS professional role, most related to the competency "commitment to the profession."
CONCLUSIONS: This study highlights aspects of professional conduct with which residents struggle and suggests that the way professionalism is taught in residency programs, and at all levels of medical training, should be reassessed. Educational interventions that emphasize learners' commitment to the profession could help develop more practitioners who are consummate professionals.
Affiliation(s)
- Gill Kazevman
- G. Kazevman is a third-year medical student, Temerty Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Jessica C Y Ng
- J.C.Y. Ng is a graduate of the University of Toronto, Scarborough, Ontario, Canada
- Jessica L Marshall
- J.L. Marshall is a graduate of the University of Toronto, Scarborough, Ontario, Canada
- Morgan Slater
- M. Slater is a postdoctoral fellow, Department of Family Medicine, Queen's University School of Medicine, Kingston, Ontario, Canada
- Fok-Han Leung
- F.-H. Leung is associate professor, Department of Family and Community Medicine, Temerty Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Charlie B Guiang
- C.B. Guiang is assistant professor and resident academic project coordinator, Department of Family and Community Medicine, Temerty Faculty of Medicine, University of Toronto, and physician co-lead, Wellesley-St. James Town Health Centre, Unity Health Toronto, Toronto, Ontario, Canada
4
Hartman ND, Manthey DE, Strowd LC, Potisek NM, Vallevand A, Tooze J, Goforth J, McDonough K, Askew KL. Effect of Perceived Level of Interaction on Faculty Evaluations of 3rd Year Medical Students. Med Sci Educ 2021; 31:1327-1332. PMID: 34457975. PMCID: PMC8368453. DOI: 10.1007/s40670-021-01307-w.
Abstract
INTRODUCTION: Several factors are known to affect the way clinical performance evaluations (CPEs) of medical students are completed by supervising physicians. We sought to explore the effect of faculty-perceived "level of interaction" (LOI) on these evaluations.
METHODS: Our third-year CPE requires evaluators to identify their perceived LOI with each student as low, moderate, or high. We examined CPEs completed during the 2018-2019 academic year for differences in (1) clinical and professionalism ratings, (2) quality of narrative comments, (3) quantity of narrative comments, and (4) percentage of evaluation questions left unrated.
RESULTS: A total of 3,682 CPEs were included in the analysis. ANOVA revealed statistically significant differences between LOI and clinical ratings (p ≤ .001), with mean ratings from faculty with a high LOI significantly higher than those from faculty with a moderate or low LOI (p ≤ .001). Chi-squared analysis demonstrated differences by faculty LOI in whether questions were left unrated (p ≤ .001), quantity of narrative comments (p ≤ .001), and specificity of narrative comments (p ≤ .001).
CONCLUSIONS: Faculty who perceived a higher LOI were more likely to assign higher ratings, to complete more of the clinical evaluation, and to provide more specific, higher-quality narrative feedback.
SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s40670-021-01307-w.
Affiliation(s)
- Nicholas D. Hartman
- Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC 27157, USA
- David E. Manthey
- Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC 27157, USA
- Lindsay C. Strowd
- Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC 27157, USA
- Nicholas M. Potisek
- Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC 27157, USA
- Andrea Vallevand
- Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC 27157, USA
- Janet Tooze
- Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC 27157, USA
- Jon Goforth
- Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC 27157, USA
- Kimberly McDonough
- Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC 27157, USA
- Kim L. Askew
- Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC 27157, USA
5
Ryan MS, Lee B, Richards A, Perera RA, Haley K, Rigby FB, Park YS, Santen SA. Evaluating the Reliability and Validity Evidence of the RIME (Reporter-Interpreter-Manager-Educator) Framework for Summative Assessments Across Clerkships. Acad Med 2021; 96:256-262. PMID: 33116058. DOI: 10.1097/acm.0000000000003811.
Abstract
PURPOSE: The ability of medical schools to accurately and reliably assess medical student clinical performance is paramount. The RIME (reporter-interpreter-manager-educator) schema was originally developed as a synthetic and intuitive assessment framework for internal medicine clerkships, but validity evidence for the framework has not been rigorously evaluated outside of internal medicine. This study examined factors contributing to variability in RIME assessment scores using generalizability theory and decision studies across multiple clerkships, thereby contributing to its internal structure validity evidence.
METHOD: Data were collected from RIME-based summative clerkship assessments during 2018-2019 at Virginia Commonwealth University. Generalizability theory was used to explore the variance attributed to different facets through a series of unbalanced random-effects models by clerkship. For all analyses, decision (D-) studies were conducted to estimate the effects of increasing the number of assessments.
RESULTS: From 231 students, 6,915 observations were analyzed. Interpreter was the most common RIME designation (44.5%-46.8%) across all clerkships. Variability attributable to students ranged from 16.7% in neurology to 25.4% in surgery. D-studies showed that the number of assessments needed to achieve acceptable reliability (0.7) ranged from 7 in pediatrics and surgery to 11 in internal medicine and 12 in neurology; however, depending on the clerkship, each student received between 3 and 8 assessments.
CONCLUSIONS: This study used generalizability and D-studies to examine the internal structure validity evidence of RIME clinical performance assessments across clinical clerkships. A substantial proportion of the variance in RIME assessment scores was attributable to the rater, with less attributed to the student; however, the proportion of variance attributed to the student was greater than has been demonstrated in other generalizability studies of summative clinical assessments. Overall, these findings support the use of RIME as a framework for assessment across clerkships and indicate the number of assessments required to obtain sufficient reliability.
Affiliation(s)
- Michael S Ryan
- M.S. Ryan is assistant dean for clinical medical education and associate professor of pediatrics, Virginia Commonwealth University School of Medicine, Richmond, Virginia; ORCID: https://orcid.org/0000-0003-3266-9289
- Bennett Lee
- B. Lee is associate professor of internal medicine, Virginia Commonwealth University School of Medicine, Richmond, Virginia
- Alicia Richards
- A. Richards is a doctoral student in the department of biostatistics, Virginia Commonwealth University School of Medicine, Richmond, Virginia
- Robert A Perera
- R.A. Perera is associate professor of biostatistics, Virginia Commonwealth University School of Medicine, Richmond, Virginia
- Kellen Haley
- K. Haley is a resident in neurology at the University of Michigan School of Medicine, Ann Arbor, Michigan. At the time of initial drafting of this manuscript, Dr. Haley was a fourth-year medical student at Virginia Commonwealth University School of Medicine, Richmond, Virginia
- Fidelma B Rigby
- F.B. Rigby is associate professor and clerkship director of obstetrics and gynecology, Virginia Commonwealth University School of Medicine, Richmond, Virginia
- Yoon Soo Park
- Y.S. Park is associate professor and associate head, department of medical education, and director of research, office of educational affairs, University of Illinois at Chicago College of Medicine, Chicago, Illinois; ORCID: http://orcid.org/0000-0001-8583-4335
- Sally A Santen
- S.A. Santen is senior associate dean for evaluation, assessment and scholarship, and professor of emergency medicine, Virginia Commonwealth University School of Medicine, Richmond, Virginia; ORCID: https://orcid.org/0000-0002-8327-8002
6
Odorizzi S, Cheung WJ, Sherbino J, Lee AC, Thurgur L, Frank JR. A Signal Through the Noise: Do Professionalism Concerns Impact the Decision Making of Competence Committees? Acad Med 2020; 95:896-901. PMID: 31577582. DOI: 10.1097/acm.0000000000003005.
Abstract
PURPOSE: To characterize how professionalism concerns influence individual reviewers' decisions about resident progression using simulated competence committee (CC) reviews.
METHOD: In April 2017, the authors surveyed 25 Royal College of Physicians and Surgeons of Canada emergency medicine residency program directors and senior faculty who were likely to function as members of a CC (or equivalent) at their institution. Participants completed a survey containing 12 resident portfolios, each composed of hypothetical formative and summative assessments. Six portfolios represented residents progressing as expected (PAE) and 6 represented residents not progressing as expected (NPAE). A professionalism variable (PV) was developed for each portfolio. Two counterbalanced surveys were developed in which 6 portfolios contained a PV and 6 portfolios did not (for each PV condition, 3 portfolios represented residents PAE and 3 represented residents NPAE). Participants were asked to make progression decisions based on each portfolio.
RESULTS: Without PVs, the consistency with which participants gave scores of 1 or 2 (i.e., little or no need for educational intervention) to residents PAE and to those NPAE was 92% and 10%, respectively. When a PV was added, consistency decreased by 34% for residents PAE and increased by 4% for those NPAE (P = .01).
CONCLUSIONS: When reviewing a simulated resident portfolio, individual reviewers' scores for residents PAE were responsive to the addition of professionalism concerns. Considering this, educators using a CC should have a system to report, collect, and document professionalism issues.
Affiliation(s)
- Scott Odorizzi
- S. Odorizzi is postgraduate year 5 resident physician, Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada. W.J. Cheung is assistant professor and staff physician, Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada. J. Sherbino is professor, Division of Emergency Medicine, Department of Medicine, and assistant dean, health professions education research, McMaster University, Hamilton, Ontario, Canada. A.C. Lee is conjoint associate professor, School of Medicine and Public Health, The University of Newcastle Australia, Callaghan, New South Wales, Australia, and psychometrician, Royal Australasian College of Physicians, Sydney, New South Wales, Australia. L. Thurgur is assistant professor and staff physician, Department of Emergency Medicine, and program director, Royal College Emergency Medicine Residency Program, University of Ottawa, Ottawa, Ontario, Canada. J.R. Frank is associate professor and staff physician, Department of Emergency Medicine, University of Ottawa, and director, Specialty Education, Strategy and Standards, Office of Specialty Education, Royal College of Physicians and Surgeons of Canada, Ottawa, Ontario, Canada
7
Impact of an Immersive Virtual Reality Curriculum on Medical Students' Clinical Assessment of Infants With Respiratory Distress. Pediatr Crit Care Med 2020; 21:477-485. PMID: 32106189. DOI: 10.1097/pcc.0000000000002249.
Abstract
OBJECTIVE: To determine whether exposure to an immersive virtual reality curriculum on pediatric respiratory distress improves medical students' recognition of impending respiratory failure.
DESIGN: Randomized, controlled, prospective study conducted from July 2017 to June 2018. Evaluators were blinded to student groupings.
SETTING: Academic, free-standing children's hospital.
PARTICIPANTS: All third-year medical students (n = 168) were eligible. The standard curriculum was delivered to all students during their pediatric rotation, with optional inclusion of research data per Institutional Review Board review. A randomized selection of students was exposed to the virtual reality curriculum.
INTERVENTION: All students received standard training on respiratory distress through didactics and high-fidelity mannequin simulation. Intervention students underwent an additional 30-minute immersive virtual reality curriculum, experienced through an Oculus Rift headset, with three simulations of an infant with 1) no distress, 2) respiratory distress, and 3) impending respiratory failure.
MEASUREMENTS AND MAIN RESULTS: The impact of the virtual reality curriculum on recognition and interpretation of key examination findings, assignment of an appropriate respiratory status assessment, and recognition of the need for escalation of care for patients in impending respiratory failure was assessed via a free-response clinical assessment of video vignettes at the end of the pediatric rotation. Responses were scored on standardized rubrics by physician experts. All eligible students participated (78 intervention and 90 control). Significant differences between intervention and control were demonstrated for consideration and interpretation of mental status (p < 0.01), assignment of the appropriate respiratory status assessment (p < 0.01), and recognition of a need for escalation of care (p = 0.0004).
CONCLUSIONS: Exposure to an immersive virtual reality curriculum led to improvement in objective competence at the assessment of respiratory distress and recognition of the need for escalation of care for patients with signs of impending respiratory failure. This study represents a novel application of immersive virtual reality and suggests that it may be effective for clinical assessment training.
8
Scarff CE. Towards a greater understanding of narrative data on trainee performance. Med Educ 2019; 53:962-964. PMID: 31402480. DOI: 10.1111/medu.13940.
Affiliation(s)
- Catherine Elizabeth Scarff
- Department of Medical Education, Melbourne Medical School, University of Melbourne, Parkville, Victoria, Australia
9
Meyer EG, Cozza KL, Konara RMR, Hamaoka D, West JC. Inflated Clinical Evaluations: A Comparison of Faculty-Selected and Mathematically Calculated Overall Evaluations Based on Behaviorally Anchored Assessment Data. Acad Psychiatry 2019; 43:151-156. PMID: 30091071. DOI: 10.1007/s40596-018-0957-8.
Abstract
OBJECTIVE: This retrospective study compared faculty-selected evaluation scores with those mathematically calculated from behaviorally anchored assessments.
METHODS: Data from 1,036 psychiatry clerkship clinical evaluations (2012-2015) were reviewed. These clinical evaluations required faculty to assess clinical performance using 14 behaviorally anchored questions followed by a faculty-selected overall evaluation. An explicit rubric was included in the overall evaluation to assist faculty in interpreting their 14 assessment responses. Using the same rubric, mathematically calculated evaluations of the same assessment responses were generated and compared to the faculty-selected evaluations.
RESULTS: While the two methods were reliably correlated (Cohen's kappa = 0.314, Pearson's coefficient = 0.658, p < 0.001), there was a notable difference in the results (t = 24.5, p < 0.0001). The average faculty-selected evaluation was 1.58 (SD = 0.61) with a mode of "1" ("outstanding"), while the average mathematically calculated evaluation was 2.10 (SD = 0.90) with a mode of "3" ("satisfactory"). Of the faculty-selected evaluations, 51.0% matched the mathematically calculated results; 46.1% were higher and 2.9% were lower.
CONCLUSIONS: Clerkship clinical evaluation forms that require faculty to make an overall evaluation generate results that are significantly higher than what would have been assigned solely using behaviorally anchored assessment questions. Focusing faculty attention on assessing specific behaviors rather than overall evaluations may reduce this inflation and improve validity. Clerkships may want to consider removing overall evaluation questions from their clinical evaluation tools.
Affiliation(s)
- Eric G Meyer
- Uniformed Services University of the Health Sciences, Bethesda, MD, USA
- Kelly L Cozza
- Uniformed Services University of the Health Sciences, Bethesda, MD, USA
- Derrick Hamaoka
- Uniformed Services University of the Health Sciences, Bethesda, MD, USA
- James C West
- Uniformed Services University of the Health Sciences, Bethesda, MD, USA
10
What do quantitative ratings and qualitative comments tell us about general surgery residents' progress toward independent practice? Evidence from a 5-year longitudinal cohort. Am J Surg 2018; 217:288-295. PMID: 30309619. DOI: 10.1016/j.amjsurg.2018.09.031.
Abstract
BACKGROUND: This study examines the alignment of quantitative and qualitative assessment data in end-of-rotation evaluations using longitudinal cohorts of residents progressing through the five-year general surgery residency.
METHODS: Rotation evaluation data were extracted for 171 residents who trained between July 2011 and July 2016. Data included 6,069 rotation evaluation forms completed by 38 faculty members and 164 peer residents. Qualitative comments mapped to general surgery milestones were coded for positive/negative feedback and relevance.
RESULTS: Quantitative evaluation scores were significantly correlated with positive/negative feedback (r = 0.52) and relevance (r = -0.20), p < .001. Themes included feedback on leadership, teaching contribution, medical knowledge, work ethic, patient care, and ability to work in a team-based setting. Faculty comments focused on technical and clinical abilities; comments from peers focused on professionalism and interpersonal relationships.
CONCLUSIONS: We found differences in the themes emphasized as residents progressed. These findings underscore the need to better understand how faculty synthesize assessment data.
11
Bartels J, Mooney CJ, Stone RT. Numerical versus narrative: A comparison between methods to measure medical student performance during clinical clerkships. Med Teach 2017; 39:1154-1158. PMID: 28845738. DOI: 10.1080/0142159x.2017.1368467.
Abstract
BACKGROUND: Medical school evaluations typically rely on both language-based narrative descriptions and psychometrically converted numeric scores to convey performance to the grading committee. We evaluated the inter-rater reliability and correlation of numeric versus narrative evaluations for students on their neurology clerkship.
DESIGN/METHODS: Fifty neurology clerkship in-training evaluation reports completed by residents and faculty members at the University of Rochester School of Medicine were dissected into narrative and numeric components. Five clerkship grading committee members retrospectively gave new narrative scores (NNS) while blinded to the original numeric scores (ONS). We calculated intra-class correlation coefficients (ICC) and their associated confidence intervals for the ONS and the NNS, as well as the correlation between ONS and NNS.
RESULTS: The ICC was greater for the NNS (ICC = .88, 95% CI = .70-.94) than for the ONS (ICC = .62, 95% CI = .40-.77). The Pearson correlation coefficient showed that the ONS and NNS were highly correlated (r = .81).
CONCLUSIONS: Narrative evaluations converted by a small group of experienced graders are at least as reliable as numeric scoring by individual evaluators. This could allow evaluators to focus their efforts on creating richer narratives of greater value to trainees.
Affiliation(s)
- Josef Bartels
- Family Medicine, WWAMI Region Practice & Research Network, Boise, ID, USA
- Christopher John Mooney
- Office of Medical Education, University of Rochester School of Medicine and Dentistry, Rochester, NY, USA
- Robert Thompson Stone
- Neurology, University of Rochester School of Medicine and Dentistry, Rochester, NY, USA
12
Fazio SB, Torre DM, DeFer TM. Grading Practices and Distributions Across Internal Medicine Clerkships. Teach Learn Med 2016; 28:286-292. PMID: 27143310. DOI: 10.1080/10401334.2016.1164605.
Abstract
THEORY: Clerkship evaluation and grading practices vary widely among U.S. medical schools. Grade inflation persists, and grade distributions likely differ among schools.
HYPOTHESES: Increasing the number of available grades curtails "grade inflation."
METHOD: A national survey of all Clerkship Directors in Internal Medicine members was administered in 2011. The authors assessed key aspects of grading.
RESULTS: The response rate was 76%. Among clerkship directors (CDs), 61% of respondents agreed that grade inflation existed in the internal medicine clerkship at their school, and 43% believed that it helped students obtain better residency positions. With respect to grading practices, 79% of CDs define specific behaviors needed to achieve each grade, and 36% specify an ideal grade distribution. In addition, 44% have a trained core faculty responsible for evaluating students, 35% describe formal grading meetings, and 39% use the Reporter-Interpreter-Manager-Educator (RIME) scheme. Grading scales were distributed as follows: 4% use a pass/fail system, 13% a 3-tier system (e.g., Honors/Pass/Fail), 45% a 4-tier system, 35% a 5-tier system, and 4% a 6+-tier system. There was a trend toward higher grades when more tiers were available.
CONCLUSIONS: Grade inflation continues in the internal medicine clerkship, and almost half of CDs feel that this practice helps students obtain better residency positions. A minority of programs have a trained core faculty responsible for evaluation; about one third hold formal grading meetings and use the RIME scheme, both of which have been associated with more robust and balanced grading practices. In particular, there is wide variation between schools in the percentage of students awarded the highest grade, which has implications for residency applications. Downstream users of clinical clerkship grades must be fully aware of these variations in grading in order to appropriately judge medical student performance.
Affiliation(s)
- Sara B Fazio
- Department of Medicine, Beth Israel Deaconess Medical Center, Boston, Massachusetts, USA
- Dario M Torre
- Department of Medicine, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania, USA
- Thomas M DeFer
- Department of Internal Medicine, Washington University School of Medicine, St. Louis, Missouri, USA
13
Hemmer PA, Dadekian GA, Terndrup C, Pangaro LN, Weisbrod AB, Corriere MD, Rodriguez R, Short P, Kelly WF. Regular Formal Evaluation Sessions are Effective as Frame-of-Reference Training for Faculty Evaluators of Clerkship Medical Students. J Gen Intern Med 2015; 30:1313-8. [PMID: 26173519] [PMCID: PMC4539339] [DOI: 10.1007/s11606-015-3294-6]
Abstract
BACKGROUND Face-to-face formal evaluation sessions between clerkship directors and faculty can facilitate the collection of trainee performance data and provide frame-of-reference training for faculty. OBJECTIVE We hypothesized that ambulatory faculty who attended evaluation sessions at least once in an academic year (attendees) would use the Reporter-Interpreter-Manager/Educator (RIME) terminology more appropriately than faculty who did not attend evaluation sessions (non-attendees). DESIGN Investigators conducted a retrospective cohort study using the narrative assessments of ambulatory internal medicine clerkship students during the 2008-2009 academic year. PARTICIPANTS The study included assessments of 49 clerkship medical students, which comprised 293 individual teacher narratives. MAIN MEASURES Single-teacher written and transcribed verbal comments about student performance were masked and reviewed by a panel of experts who, by consensus, (1) determined whether RIME was used, (2) counted the number of RIME utterances, and (3) assigned a grade based on the comments. Analysis included descriptive statistics and Pearson correlation coefficients. KEY RESULTS The authors reviewed 293 individual teacher narratives regarding the performance of 49 students. Attendees explicitly used RIME more frequently than non-attendees (69.8% vs. 40.4%; p < 0.0001). Grades recommended by attendees correlated more strongly with grades assigned by experts than grades recommended by non-attendees (r = 0.72; 95% CI (0.65, 0.78) vs. 0.47; 95% CI (0.26, 0.64); p = 0.005). Grade recommendations from individual attendees and non-attendees each correlated significantly with overall student clerkship clinical performance [r = 0.63; 95% CI (0.54, 0.71) vs. 0.52 (0.36, 0.66), respectively], although the difference between the groups was not statistically significant (p = 0.21).
CONCLUSIONS On an ambulatory clerkship, teachers who attended evaluation sessions used RIME terminology more frequently and provided more accurate grade recommendations than teachers who did not attend. Formal evaluation sessions may provide frame-of-reference training for the RIME framework, a method that improves the validity and reliability of workplace assessment.
Affiliation(s)
- Paul A Hemmer
- F. Edward Hébert School of Medicine, Uniformed Services University of the Health Sciences, Bethesda, MD, USA
14
Fazio SB, Papp KK, Torre DM, DeFer TM. Grade inflation in the internal medicine clerkship: a national survey. Teach Learn Med 2013; 25:71-76. [PMID: 23330898] [DOI: 10.1080/10401334.2012.741541]
Abstract
BACKGROUND Grade inflation is a growing concern, but the degree to which it continues to exist in 3rd-year internal medicine (IM) clerkships is unknown. PURPOSE The authors sought to determine the degree to which grade inflation is perceived to exist in IM clerkships in North American medical schools. METHODS A national survey of all Clerkship Directors in Internal Medicine members was administered in 2009. The authors assessed key aspects of grading. RESULTS Response rate was 64%. Fifty-five percent of respondents agreed that grade inflation exists in the Internal Medicine clerkship at their school. Seventy-eight percent reported it as a serious/somewhat serious problem, and 38% noted students have passed the IM clerkship at their school who should have failed. CONCLUSIONS A majority of clerkship directors report that grade inflation still exists. In addition, many note students who passed despite the clerkship director believing they should have failed. Interventions should be developed to address both of these problems.
Affiliation(s)
- Sara B Fazio
- Department of Medicine, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA 02215, USA
15
Donaldson JH, Gray M. Systematic review of grading practice: is there evidence of grade inflation? Nurse Educ Pract 2012; 12:101-14. [DOI: 10.1016/j.nepr.2011.10.007]
16
Ginsburg S, Gold W, Cavalcanti RB, Kurabi B, McDonald-Blumer H. Competencies "plus": the nature of written comments on internal medicine residents' evaluation forms. Acad Med 2011; 86:S30-4. [PMID: 21955764] [DOI: 10.1097/acm.0b013e31822a6d92]
Abstract
BACKGROUND Comments on residents' in-training evaluation reports (ITERs) may be more useful than scores in identifying trainees in difficulty. However, little is known about the nature of comments written by internal medicine faculty on residents' ITERs. METHOD Comments on 1,770 ITERs (from 180 residents in postgraduate years 1-3) were analyzed using constructivist grounded theory beginning with an existing framework. RESULTS Ninety-three percent of ITERs contained comments, which were frequently easy to map onto traditional competencies, such as knowledge base (n = 1,075 comments) to the CanMEDS Medical Expert role. Many comments, however, could be linked to several overlapping competencies. Also common were comments completely unrelated to competencies, for instance, the resident's impact on staff (813) or personality issues (450). Residents' "trajectory" was a major theme (performance in relation to expected norms [494], improvement seen [286], or future predictions [286]). CONCLUSIONS Faculty's assessments of residents are underpinned by factors both related and unrelated to traditional competencies. Future evaluations should attempt to capture these holistic, integrated impressions.
17
Ander DS, Wallenstein J, Abramson JL, Click L, Shayne P. Reporter-Interpreter-Manager-Educator (RIME) descriptive ratings as an evaluation tool in an emergency medicine clerkship. J Emerg Med 2011; 43:720-7. [PMID: 21945508] [DOI: 10.1016/j.jemermed.2011.05.069]
Abstract
BACKGROUND Emergency Medicine (EM) clerkships traditionally assess students using numerical ratings of clinical performance. The descriptive ratings of the Reporter, Interpreter, Manager, and Educator (RIME) method have been shown to be valuable in other specialties. OBJECTIVES We hypothesized that the RIME descriptive ratings would correlate with clinical performance and examination scores in an EM clerkship, indicating that the RIME ratings are a valid measure of performance. METHODS This was a prospective cohort study of an evaluation instrument for 4th-year medical students completing an EM rotation. This study received exempt Institutional Review Board status. EM faculty and residents completed shift evaluation forms including both numerical and RIME ratings. Students completed a final examination. Mean scores for RIME and clinical evaluations were calculated. Linear regression models were used to determine whether RIME ratings predicted clinical evaluation scores or final examination scores. RESULTS Four hundred thirty-nine students who completed the EM clerkship were enrolled in the study. After excluding items with missing data, there were 2,086 evaluation forms (based on 289 students) available for analysis. There was a clear positive relationship between RIME category and clinical evaluation score (r² = 0.40, p < 0.01). RIME ratings correlated most strongly with patient management skills and least strongly with humanistic qualities. A very weak correlation was seen between RIME and the final examination. CONCLUSION We found a positive association between RIME and clinical evaluation scores, suggesting that RIME is a valid clinical evaluation instrument. RIME descriptive ratings can be incorporated into EM evaluation instruments and provide useful data related to patient management skills.
Affiliation(s)
- Douglas S Ander
- Department of Emergency Medicine, Emory University School of Medicine, Atlanta, Georgia 30303, USA
18
Holmboe ES, Sherbino J, Long DM, Swing SR, Frank JR. The role of assessment in competency-based medical education. Med Teach 2010; 32:676-82. [PMID: 20662580] [DOI: 10.3109/0142159x.2010.500704]
Abstract
Competency-based medical education (CBME), by definition, necessitates a robust and multifaceted assessment system. Assessment and the judgments or evaluations that arise from it are important at the level of the trainee, the program, and the public. When designing an assessment system for CBME, medical education leaders must attend to the context of the multiple settings where clinical training occurs. CBME further requires assessment processes that are more continuous and frequent, criterion-based, developmental, and work-based where possible; that use assessment methods and tools meeting minimum requirements for quality; that use both quantitative and qualitative measures and methods; and that involve the wisdom of group process in making judgments about trainee progress. Like all changes in medical education, CBME is a work in progress. Given the importance of assessment and evaluation for CBME, the medical education community will need more collaborative research to address several major challenges in assessment, including "best practices" in the context of systems and institutional culture and how best to train faculty to be better evaluators. Finally, we must remember that expertise, not competence, is the ultimate goal. CBME does not end with graduation from a training program, but should represent a career that includes ongoing assessment.
19
Donato AA, Pangaro L, Smith C, Rencic J, Diaz Y, Mensinger J, Holmboe E. Evaluation of a novel assessment form for observing medical residents: a randomised, controlled trial. Med Educ 2008; 42:1234-1242. [PMID: 19120955] [DOI: 10.1111/j.1365-2923.2008.03230.x]
Abstract
CONTEXT Teaching faculty cannot reliably distinguish between satisfactory and unsatisfactory resident performances, and they give non-specific feedback. OBJECTIVES This study aimed to test whether a novel rating form can improve faculty accuracy in detecting unsatisfactory performances, generate more rater observations and improve feedback quality. METHODS Participants included two groups of 40 internal medicine residency faculty staff. Both groups received 1-hour training on how to rate trainees in the mini-clinical evaluation exercise (mini-CEX) format. The intervention group was given a new rating form structured with prompts, space for free-text comments, behavioural anchors and fewer scoring levels, whereas the control group used the current American Board of Internal Medicine Mini-CEX form. Participants watched and scored six scripted videotapes of resident performances 2-3 weeks after the training session. RESULTS Intervention group participants were more accurate in discriminating satisfactory from unsatisfactory performances (85% versus 73% correct; odds ratio [OR] 2.13, 95% confidence interval [CI] 1.16-3.14, P = 0.02) and yielded more correctly identified unsatisfactory performances (96% versus 52% correct; OR 25.35, 95% CI 9.12-70.46), but were less accurate in identifying satisfactory performances (73% versus 95% correct; OR 0.15, 95% CI 0.05-0.39). Intervention group participants averaged one fewer declared intended feedback item (4.7 versus 5.7) and showed no difference in the amount of feedback that was above minimal in quality. Intervention group participants generated more written evaluative observations (10.8 versus 5.7). Inter-rater agreement improved with the new form (Fleiss' kappa, 0.52 versus 0.30). CONCLUSIONS Modifying the currently used direct observation process may produce more recorded observations, increase inter-rater agreement and improve overall rater accuracy, but it may also increase severity error.
Affiliation(s)
- Anthony A Donato
- Department of Internal Medicine, The Reading Hospital and Medical Center, Reading, Pennsylvania, USA
20
Dudek NL, Marks MB, Wood TJ, Lee AC. Assessing the quality of supervisors' completed clinical evaluation reports. Med Educ 2008; 42:816-22. [PMID: 18564093] [DOI: 10.1111/j.1365-2923.2008.03105.x]
Abstract
CONTEXT Although concern has been raised about the value of clinical evaluation reports for discriminating among trainees, there have been few efforts to formalise the dimensions and qualities that distinguish effective versus less useful styles of form completion. METHODS Using brainstorming and a modified Delphi technique, a focus group determined the key features of high-quality completed evaluation reports. These features were used to create a rating scale to evaluate the quality of completed reports. The scale was pilot-tested locally; the results were psychometrically analysed and used to modify the scale. The scale was then tested on a national level. Psychometric analysis and final modification of the scale were completed. RESULTS Sixteen features of high-quality reports were identified and used to develop a rating scale: the Completed Clinical Evaluation Report Rating (CCERR). The reliability of the scale after a national field test with 55 raters assessing 18 in-training evaluation reports (ITERs) was 0.82. Further revisions were made; the final version of the CCERR contains nine items rated on a 5-point scale. With this version, the mean ratings of three groups of 'gold-standard' ITERs (previously judged to be of high, average and poor quality) differed significantly (P < 0.05). DISCUSSION The CCERR is a validated scale that can be used to help train supervisors to complete and assess the quality of evaluation reports.
Affiliation(s)
- Nancy L Dudek
- Department of Medicine, Faculty of Medicine, University of Ottawa, Ottawa, Ontario, Canada
21
Griffith CH, Wilson JF. The association of student examination performance with faculty and resident ratings using a modified RIME process. J Gen Intern Med 2008; 23:1020-3. [PMID: 18612736] [PMCID: PMC2517939] [DOI: 10.1007/s11606-008-0611-3]
Abstract
BACKGROUND RIME is a descriptive framework in which students and their teachers can gauge progress throughout a clerkship from R (reporter) to I (interpreter) to M (manager) to E (educator). RIME, as described in the literature, is complemented by residents and attending physicians meeting with a clerkship director to discuss individual student progress, with group discussion resulting in assignment of a RIME stage. OBJECTIVE 1) To determine whether a student's RIME rating is associated with end-of-clerkship examination performance; and 2) to determine whose independent RIME rating is most predictive of a student's examination performance: attendings, residents, or interns. DESIGN Prospective cohort study. PARTICIPANTS Third-year medical students from academic years 2004-2005 and early 2005-2006 at 1 medical school. MEASUREMENTS AND MAIN RESULTS Each attending, resident, and intern independently assessed the final RIME stage the student attained. For the purpose of analysis, R stage = 1, I = 2, M = 3, and E = 4. Regression analyses were performed with examination scores as dependent variables (National Board of Medical Examiners [NBME] medicine subject examination and a clinical performance examination [CPE]), with independent variables of mean attending RIME score, mean resident score, and mean intern score. For the 122 students, significant predictors of NBME subject exam score were the resident RIME rating (p = .008) and the intern RIME rating (p = .02). The significant predictor of CPE performance was the resident RIME rating (p = .01). CONCLUSION House staff RIME ratings of students are associated with student performance on written and clinical skills examinations.
22
Hemmer PA, Papp KK, Mechaber AJ, Durning SJ. Evaluation, grading, and use of the RIME vocabulary on internal medicine clerkships: results of a national survey and comparison to other clinical clerkships. Teach Learn Med 2008; 20:118-126. [PMID: 18444197] [DOI: 10.1080/10401330801991287]
Abstract
BACKGROUND Evaluation methods within and across clerkships are rapidly evolving, including greater emphasis on frameworks for descriptive evaluation and direct observation of competence. PURPOSE The purpose of this study is to describe current evaluation methods, use of the Reporter-Interpreter-Manager/Educator (RIME) framework, and grade assignment by internal medicine clerkship directors. METHODS In 2005, the Clerkship Directors in Internal Medicine surveyed its 109 institutional members. Topics included evaluation methods and grade contribution, use of evaluation sessions and/or RIME, and grade assignment (criterion-referenced or normative). RESULTS The response rate was 81% (88/109). The evaluation methods were as follows: teachers' evaluations, 93% (64% of grade); National Board of Medical Examiners subject examination, 81% (25% of grade); faculty written exam, 34% (14% of grade); objective structured clinical examinations, 32% (12% of grade); direct observation, 22% (7% of grade). RIME is used by 42% of respondents. Many clerkship directors (43%) meet with teachers to discuss student performance. Criterion-referenced grading is used by 59%, and normative grading by 27%. Unsatisfactory grades are given for examination failures (72%), unprofessional behavior (49%), poor clinical performance (42%), and failure to meet requirements (18%). CONCLUSIONS Internal medicine clerkship directors emphasize description and observation of students. RIME and discussions with teachers are becoming commonplace.
Affiliation(s)
- Paul A Hemmer
- Department of Medicine, Uniformed Services University of the Health Sciences, Bethesda, Maryland 20814, USA
23
Dornan T, Boshuizen H, Cordingley L, Hider S, Hadfield J, Scherpbier A. Evaluation of self-directed clinical education: validation of an instrument. Med Educ 2004; 38:670-678. [PMID: 15189264] [DOI: 10.1111/j.1365-2929.2004.01837.x]
Abstract
AIM To explore the evaluation of self-directed, integrated clinical education. METHODS We delivered a quantitative and qualitative, self-report questionnaire to students through their web-based learning management system. The questionnaire was distributed 4 times over 1 year, each time in 2 parts. A generic part evaluated boundary conditions for learning, teaching activities and "real patient learning". Factor analysis with varimax rotation was used to validate the constructs that made up the scale and to stimulate hypotheses about how they interrelated. A module-specific part evaluated real patient learning of the subject matter in the curriculum. RESULTS A total of 101 students gave 380 of a possible 404 responses (94%). The generic data loaded onto 4 factors, corresponding to: firm quality; hospital-based teaching and learning; community and out-patient learning, and problem-based learning (PBL). A 5-item quality index had content, construct and criterion validity. Quality differed greatly between firms. Self-evaluation of module-specific, real patient learning was also valid. It was strongly influenced by the specialty interests of hospital firms. CONCLUSIONS Quality is a multidimensional construct. Self-report evaluation of real patient learning is feasible, and could be capitalised on to promote reflective self-direction. The social and material context of learning is an important dimension of educational quality.
Affiliation(s)
- T Dornan
- Hope Hospital, University of Manchester School of Medicine, Stott Lane, Salford, Manchester M6 8HD, UK
24
Hemmer PA, Griffith C, Elnicki DM, Fagan M. The internal medicine clerkship in the clinical education of medical students. Am J Med 2003; 115:423-7. [PMID: 14553891] [DOI: 10.1016/s0002-9343(03)00442-x]
25
Appel J, Friedman E, Fazio S, Kimmel J, Whelan A. Educational assessment guidelines: a Clerkship Directors in Internal Medicine commentary. Am J Med 2002; 113:172-9. [PMID: 12133764] [DOI: 10.1016/s0002-9343(02)01211-1]
Affiliation(s)
- Joel Appel
- Department of Internal Medicine, Sinai-Grace Hospital, Wayne State University School of Medicine, Michigan, USA