1. Rassos J, Ginsburg S, Stalmeijer RE, Melvin LJ. The Senior Medical Resident's New Role in Assessment in Internal Medicine. Acad Med 2022; 97:711-717. [PMID: 34879012] [DOI: 10.1097/acm.0000000000004552]
Abstract
PURPOSE With the introduction of competency-based medical education, senior residents have taken on a new, formalized role of completing assessments of their junior colleagues. However, no prior studies have explored the role of near-peer assessment within the context of entrustable professional activities (EPAs) and competency-based medical education. This study explored internal medicine residents' perceptions of near-peer feedback and assessment in the context of EPAs.

METHOD Semistructured interviews were conducted from September 2019 to March 2020 with 16 internal medicine residents (8 first-year residents and 8 second- and third-year residents) at the University of Toronto, Toronto, Ontario, Canada. Interviews were conducted and coded iteratively within a constructivist grounded theory approach until sufficiency was reached.

RESULTS Senior residents noted a tension in their dual roles of coach and assessor when completing EPAs. Senior residents managed the relationship with junior residents so as not to upset the learner and potentially harm the team dynamic, leading to the documentation of often inflated EPA ratings. Junior residents found senior residents to be credible providers of feedback; however, they were reticent to accept senior residents as credible assessors.

CONCLUSIONS Although EPAs have formalized moments of feedback, senior residents struggled to include constructive feedback comments, all while knowing the assessment decisions may inform the overall summative decision about their peers. As a result, EPA ratings were often inflated. The utility of having senior residents serve as assessors needs to be reexamined because of concern that this new role has taken away the benefits of having a senior resident act solely as a coach.
Affiliation(s)
- James Rassos: assistant professor, Department of Medicine, University of Toronto, Toronto, Ontario, Canada
- Shiphra Ginsburg: professor, Department of Medicine, and scientist, Wilson Centre for Education, University of Toronto, Toronto, Ontario, Canada
- Renée E Stalmeijer: assistant professor, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, the Netherlands
- Lindsay J Melvin: assistant professor, Department of Medicine, University of Toronto, Toronto, Ontario, Canada
2. Kistler EA, Chiappa V, Chang Y, Baggett M. Evaluating the SPIKES Model for Improving Peer-to-Peer Feedback Among Internal Medicine Residents: a Randomized Controlled Trial. J Gen Intern Med 2021; 36:3410-3416. [PMID: 33506399] [PMCID: PMC8606477] [DOI: 10.1007/s11606-020-06459-w]
Abstract
BACKGROUND Feedback improves trainee clinical performance, but the optimal way to provide it remains unclear. Peer feedback offers unique advantages but comes with significant challenges, including a lack of rigorously studied methods. The SPIKES framework is a communication tool adapted from the oncology and palliative care literature for teaching trainees how to lead difficult conversations.

OBJECTIVE To determine if a brief educational intervention focused on the SPIKES framework improves peer feedback between internal medicine trainees on inpatient medicine services as compared to usual practice.

DESIGN Randomized, controlled trial at an academic medical center during academic year 2017-2018.

PARTICIPANTS Seventy-five PGY1 and 49 PGY2 internal medicine trainees were enrolled. PGY2s were randomized 1:1 to the intervention or control group.

INTERVENTION The intervention entailed a 30-minute, case-based didactic on the SPIKES framework, followed by a refresher email on SPIKES sent to PGY2s before each inpatient medicine rotation. PGY1s were blinded as to which PGY2s underwent the training.

MAIN MEASURES The primary outcome was PGY1 evaluation of the extent of feedback provided by PGY2s. Secondary outcomes included PGY1 report of feedback quality and PGY2 self-report of feedback quantity and quality. Outcomes were obtained via anonymous online survey and reported on a 4-point Likert scale.

KEY RESULTS PGY1s completed 207 surveys (51% response rate) and PGY2s completed 61 surveys (42% response rate). PGY1s reported a greater extent of feedback (2.5 vs 2.2; p = 0.02; Cohen's d = 0.31), more specific feedback (2.3 vs 2.0; p < 0.01; d = 0.33), and higher satisfaction with feedback (2.6 vs 2.2; p < 0.01; d = 0.47) from intervention PGY2s. There were no significant differences in PGY2 self-reported outcomes.

CONCLUSIONS Despite notable limitations, a brief educational intervention focused on SPIKES, with modest implementation requirements, increased PGY1-perceived extent of, specificity of, and satisfaction with feedback from PGY2s.
Affiliation(s)
- Emmett A Kistler: Department of Medicine, Division of Pulmonary and Critical Care Medicine, Massachusetts General Hospital, Boston, MA, USA
- Victor Chiappa: Department of Medicine, Massachusetts General Hospital, Boston, MA, USA
- Yuchiao Chang: Department of Medicine, Division of General Internal Medicine (Biostatistics), Massachusetts General Hospital, Boston, MA, USA
- Meridale Baggett: Department of Medicine, Massachusetts General Hospital, Boston, MA, USA
3. Najafipour S, Mortaz Hejri S, Nikbakht Nasrabadi A, Yekaninejad MS, Shirazi M, Labaf A, Jalili M. Psychometric properties of the mini peer assessment tool (Mini-PAT) in emergency medicine residents. Med J Islam Repub Iran 2020; 34:126. [PMID: 33437722] [PMCID: PMC7787031] [DOI: 10.34171/mjiri.34.126]
Abstract
Background: Few studies have examined the validity and reliability of the mini-Peer Assessment Tool (mini-PAT) across specialties. This study was conducted to determine the reliability and the content and construct validity of the mini-PAT for assessing the competency of emergency medicine residents.

Methods: This study investigated the psychometric properties of the mini-PAT for evaluating the professional competencies of emergency medicine residents in educational hospitals affiliated with Tehran University of Medical Sciences. The original mini-PAT was translated into Persian. The content validity index and content validity ratio were then determined by consulting 12 professors of emergency medicine. Construct validity was examined with exploratory factor analysis and correlation coefficients on 31 self-assessment and 248 peer-assessment cases. Reliability was assessed through internal consistency (Cronbach's alpha coefficient, with item deletion) and by examining agreement between the self-assessment and peer-assessment versions of the tool using a Bland-Altman plot.

Results: The content validity ratio (CVR) of the items ranged from 0.56 to 0.83, and the content validity index (CVI) of the items ranged from 0.72 to 0.90. The reliability of the self-assessment and peer-assessment tools was 0.83 and 0.95, respectively, and there was relative agreement between the self-assessment and peer-assessment methods. Exploratory factor analysis extracted two factors, 'clinical competencies' and 'human interactions', from the peer-assessment tool; in the self-assessment tool, the factors 'good practice' and 'technical competence' were extracted.

Conclusion: The results of the present study provide evidence of adequate content validity and reliability for the contextually customized mini-PAT in assessing the competencies of emergency medicine residents.
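The CVR and Cronbach's alpha figures this abstract reports follow standard formulas; the sketch below illustrates those metrics only, not the study's own code (the function names and sample data are invented for illustration):

```python
import statistics

def cronbach_alpha(item_scores):
    """Internal-consistency reliability. `item_scores` is a list of
    per-item score lists, each ordered by the same respondents."""
    k = len(item_scores)
    sum_item_var = sum(statistics.variance(scores) for scores in item_scores)
    total_var = statistics.variance([sum(resp) for resp in zip(*item_scores)])
    return k / (k - 1) * (1 - sum_item_var / total_var)

def content_validity_ratio(n_essential, n_experts):
    """Lawshe's CVR: (ne - N/2) / (N/2), ranging from -1 to 1."""
    half = n_experts / 2
    return (n_essential - half) / half

# Example: with 12 experts (as in this study), 10 rating an item
# "essential" gives CVR = (10 - 6) / 6.
print(round(content_validity_ratio(10, 12), 2))
```

Item-level CVR values near 1 indicate near-unanimous expert agreement that an item is essential; alpha above roughly 0.8 is conventionally read as adequate internal consistency for this kind of instrument.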
Affiliation(s)
- Sedigheh Najafipour: Department of Medical Education, Faculty of Medicine, Tehran University of Medical Sciences, Tehran, Iran
- Sara Mortaz Hejri: Department of Medical Education, Faculty of Medicine, Tehran University of Medical Sciences, Tehran, Iran; Institute of Health Science Education, McGill University, Montreal, Canada
- Alireza Nikbakht Nasrabadi: Department of Nursing, School of Nursing and Midwifery, Tehran University of Medical Sciences, Tehran, Iran
- Mir Saeed Yekaninejad: Department of Epidemiology and Biostatistics, School of Public Health, Tehran University of Medical Sciences, Tehran, Iran
- Mandana Shirazi: Department of Medical Education, Faculty of Medicine, Tehran University of Medical Sciences, Tehran, Iran
- Ali Labaf: Department of Emergency Medicine, Faculty of Medicine, Tehran University of Medical Sciences, Tehran, Iran
- Mohammad Jalili (corresponding author): Department of Emergency Medicine, Faculty of Medicine, Tehran University of Medical Sciences, Tehran, Iran
4. Coaching the Debriefer: Peer Coaching to Improve Debriefing Quality in Simulation Programs. Simul Healthc 2018; 12:319-325. [PMID: 28538446] [DOI: 10.1097/sih.0000000000000232]
Abstract
STATEMENT Formal faculty development programs for simulation educators are costly and time-consuming. Peer coaching integrated into the teaching flow can enhance an educator's debriefing skills. We provide a practical guide for the who, what, when, where, why, and how of peer coaching for debriefing in simulation-based education. Peer coaching offers advantages such as psychological safety and team building, and it can benefit both the educator who is receiving feedback and the coach who is providing it. A feedback form for effective peer coaching includes the following: (1) psychological safety, (2) framework, (3) method/strategy, (4) content, (5) learner centeredness, (6) co-facilitation, (7) time management, (8) difficult situations, (9) debriefing adjuncts, and (10) individual style and experience. Institutional backing of peer coaching programs can facilitate implementation and sustainability. Program leaders should communicate the need and benefits, establish program goals, and provide assessment tools, training, structure, and evaluation to optimize chances of success.
5. Fitch C, Malik A, Lelliott P, Bhugra D, Andiappan M. Assessing psychiatric competencies: what does the literature tell us about methods of workplace-based assessment? Adv Psychiatr Treat 2018. [DOI: 10.1192/apt.bp.107.003871]
Abstract
Workplace-based assessment (WPBA) is becoming a key component of post-graduate medical training in several countries. The various methods of WPBA include: the long case; multisource feedback (MSF); Mini-Clinical Examination (mini-CEX); Direct Observation of Procedural Skills (DOPS); case-based discussion (CbD); and journal club presentation. For each assessment method, we define what the approach practically involves, then consider the key messages and research evidence from the literature regarding their reliability, validity and general usefulness.
6. Salmon G, Pugsley L. The mini-PAT as a multi-source feedback tool for trainees in child and adolescent psychiatry: assessing whether it is fit for purpose. BJPsych Bull 2017; 41:115-119. [PMID: 28400971] [PMCID: PMC5376729] [DOI: 10.1192/pb.bp.115.052720]
Abstract
This paper discusses the research supporting the use of multi-source feedback (MSF) for doctors and describes the mini-Peer Assessment Tool (mini-PAT), the MSF instrument currently used to assess trainees in child and adolescent psychiatry. The relevance of issues raised in the literature about MSF tools in general is examined in relation to trainees in child and adolescent psychiatry as well as the appropriateness of the mini-PAT for this group. Suggestions for change including modifications to existing MSF tools or the development of a specialty-specific MSF instrument are offered.
7. Robinson JD, Turner JW, Morris E, Roett M, Liao Y. What Residents Say About Communicating with Patients: A Preliminary Examination of Doctor-to-Doctor Interaction. Health Commun 2016; 31:1405-1411. [PMID: 27050397] [DOI: 10.1080/10410236.2015.1077415]
Abstract
This article describes the implementation and initial assessment of a training blog created within a family medicine department and used as a feedback mechanism for residents. First-year residents (n = 7) at a large private East Coast university hospital each had an interaction with a patient recorded and posted to the training blog. Each resident then watched the recording and posted a reaction to the interaction with the patient, offering self-reflection on the experience and, when desired, soliciting advice from colleagues to improve their communicative strategies and style. Once the reaction was posted, other residents watched the videotaped interaction, read the resident's self-assessment, and responded as part of their communication training. Content analysis of the messages suggests that the residents are socially skilled: they offer each other advice, provide each other with emotional and esteem social support, and use techniques such as self-deprecation in what appears to be a strategic manner. Perhaps most interesting, they tend to identify the problems and difficulties they experience during patient-physician interactions in an apparent effort to deflect responsibility from the practicing physician. Patient challenges raised by residents included talkativeness, noncompliance, health literacy, and situational constraints.
Affiliation(s)
- Jeanine W Turner: Department of Communication, Culture, and Technology, Georgetown University
- Elise Morris: Department of Family Medicine, Georgetown University Medical Center
- Michelle Roett: Department of Family Medicine, Georgetown University Medical Center
- Yuting Liao: Department of Communication, Culture, and Technology, Georgetown University
8.
Abstract
BACKGROUND Peer feedback is increasingly being used by residency programs to provide an added dimension to the assessment process. Studies show that peer feedback is useful, uniquely informative, and reliable compared to other types of assessments. Potential barriers to implementation include insufficient training/preparation, negative consequences for working relationships, and a perceived lack of benefit.

OBJECTIVE We explored the perceptions of residents involved in peer-to-peer feedback, focusing on factors that influence accuracy, usefulness, and application of the information.

METHODS Family medicine residents at the University of Michigan who were piloting an online peer assessment tool completed a brief survey to offer researchers insight into the peer feedback process. Focus groups were conducted to explore residents' perceptions that are most likely to affect giving and receiving peer feedback.

RESULTS Survey responses were provided by 28 of 30 residents (93%). Responses showed that peer feedback provided useful (89%, 25 of 28) and unique (89%, 24 of 27) information, yet only 59% (16 of 27) reported that it benefited their training. Focus group participants included 21 of 29 eligible residents (72%). Approaches to improve residents' ability to give and accept feedback included preparatory training, clearly defined goals, standardization, fewer and more qualitatively oriented encounters, 1-on-1 delivery, immediacy of timing, and cultivation of a feedback culture.

CONCLUSIONS Residents perceived feedback as important and offered actionable suggestions to enhance accuracy, usefulness, and application of the information shared. The findings can be used to inform residency programs that are interested in creating a meaningful peer feedback process.
9. Meeks DW, Meyer AND, Rose B, Walker YN, Singh H. Exploring new avenues to assess the sharp end of patient safety: an analysis of nationally aggregated peer review data. BMJ Qual Saf 2014; 23:1023-1030. [DOI: 10.1136/bmjqs-2014-003239]
10. Grover B, Hayes BD, Watson K. Feedback in clinical pharmacy education. Am J Health Syst Pharm 2014; 71:1592-1596. [DOI: 10.2146/ajhp130701]
Affiliation(s)
- Brian Grover: Emergency Medicine and Toxicology, University of Maryland Medical Center, Baltimore
- Bryan D. Hayes: Emergency Medicine and Toxicology, University of Maryland Medical Center, Baltimore
11. Dawdy K, McGuffin M, Peacock M, Moline K, Di Prospero L. Incorporating Peer Assessments within a Clinical Practicum Course: Insights from a Clinical Faculty Initiative. J Med Imaging Radiat Sci 2014; 45:244-252. [PMID: 31051975] [DOI: 10.1016/j.jmir.2014.01.008]
Abstract
INTRODUCTION Peer assessments have been used within health professional programs to provide some degree of judgment of professional behavior and to facilitate feedback among peers. To further support the clinical learning of our students, the clinical education team at the Odette Cancer Centre initiated a pilot introducing peer assessments as part of the strategies for learning and engagement within laboratory sessions. The aim of our work was to retrospectively review peer assessments completed during these sessions to identify professional behaviors, both positive and negative, and to correlate the assessments with behaviors noted, both formally and anecdotally, within clinical faculty assessments. Our team also explored student perceptions of the impact of peer assessments on their own learning.

METHODS Students in the final year of a 3-year undergraduate medical radiation sciences program were asked to assess their peers during laboratory sessions using a modified version of an assessment tool previously known to the students, the Assessment of Readiness for Clinical tool. Students (N = 14) were required to evaluate each peer who participated in the same session and provide supporting comments for their rating. For each student, responses from peer assessors were anonymized and collated, and comments and numerical ratings on the peer assessments were compared. The student assessments were subsequently compared with similar measures extracted from faculty assessments. Students also participated in a debriefing session to provide feedback on the integration of these assessments within the learning sessions and their potential impact on professional behaviors.

RESULTS The majority of students rated their peers on all criteria at a score of 2 (performed or surpassed expectations). There was some correlation between numerical ratings and comments written in the assessments. Comments on peer assessments were in concordance with observations extracted from previous assessments by clinical faculty and teachers for 71% of the students. Students expressed a favorable attitude toward the use of peer assessments but did not find the numerical ratings useful, instead valuing supporting constructive comments that cited specific examples for improvement.

CONCLUSIONS Peer assessments were of some benefit to the learning of our students, particularly the anecdotal supporting comments that accompanied the ratings. However, their use must be accompanied by formalized training and guidelines for teachers and learners, as well as careful consideration of the tool chosen, to ensure the most purposeful impact on behavior change.
Affiliation(s)
- Krista Dawdy: Department of Radiation Therapy, Odette Cancer Centre, Sunnybrook, Toronto, Ontario, Canada
- Merrylee McGuffin: Department of Radiation Therapy, Odette Cancer Centre, Sunnybrook, Toronto, Ontario, Canada
- Marnie Peacock: Department of Radiation Therapy, Odette Cancer Centre, Sunnybrook, Toronto, Ontario, Canada
- Karen Moline: Department of Radiation Therapy, Odette Cancer Centre, Sunnybrook, Toronto, Ontario, Canada; Department of Radiation Oncology, University of Toronto, Toronto, Ontario, Canada
- Lisa Di Prospero: Department of Radiation Therapy, Odette Cancer Centre, Sunnybrook, Toronto, Ontario, Canada; Department of Radiation Oncology, University of Toronto, Toronto, Ontario, Canada
12. Donnon T, Al Ansari A, Al Alawi S, Violato C. The reliability, validity, and feasibility of multisource feedback physician assessment: a systematic review. Acad Med 2014; 89:511-516. [PMID: 24448051] [DOI: 10.1097/acm.0000000000000147]
Abstract
PURPOSE The use of multisource feedback (MSF), or 360-degree evaluation, has become a recognized method of assessing physician performance in practice. The purpose of the present systematic review was to investigate the reliability, generalizability, validity, and feasibility of MSF for the assessment of physicians.

METHOD The authors searched the EMBASE, PsycINFO, MEDLINE, PubMed, and CINAHL databases for peer-reviewed, English-language articles published from 1975 to January 2013. Studies were included if they met the following inclusion criteria: used one or more MSF instruments to assess physician performance in practice; reported psychometric evidence of the instrument(s) in the form of reliability, generalizability coefficients, and construct or criterion-related validity; and provided information regarding the administration or feasibility of the process of collecting the feedback data.

RESULTS Of the 96 full-text articles assessed for eligibility, 43 were included. MSF has been shown to be an effective method for providing feedback to physicians from a multitude of specialties about their clinical and nonclinical (i.e., professionalism, communication, interpersonal relationships, management) performance. In general, assessment of physician performance was based on completion of the MSF instruments by 8 medical colleagues, 8 coworkers, and 25 patients to achieve adequate reliability and generalizability coefficients of α ≥ 0.90 and Ep ≥ 0.80, respectively.

CONCLUSIONS The use of MSF employing medical colleagues, coworkers, and patients to assess physicians in practice has been shown to have high reliability, validity, and feasibility.
Affiliation(s)
- Tyrone Donnon: associate professor, Medical Education and Research Unit, Department of Community Health Sciences, Faculty of Medicine, University of Calgary, Calgary, Alberta, Canada
- Dr. Al Ansari: director of training and development, Department of Medical Education, Faculty of Medicine, Bahrain Defense Force Hospital, Riffa, Bahrain
- Dr. Al Alawi: faculty member, Department of Family Medicine, Faculty of Medicine, Bahrain Defense Force Hospital, Riffa, Bahrain
- Dr. Violato: professor, Medical Education and Research Unit, Department of Community Health Sciences, Faculty of Medicine, University of Calgary, Calgary, Alberta, Canada
13. Al Ansari A, Donnon T, Al Khalifa K, Darwish A, Violato C. The construct and criterion validity of the multi-source feedback process to assess physician performance: a meta-analysis. Adv Med Educ Pract 2014; 5:39-51. [PMID: 24600300] [PMCID: PMC3942110] [DOI: 10.2147/amep.s57236]
Abstract
BACKGROUND The purpose of this study was to conduct a meta-analysis on the construct and criterion validity of multi-source feedback (MSF) to assess physicians and surgeons in practice.

METHODS In this study, we followed the guidelines for the reporting of observational studies included in a meta-analysis. In addition to PubMed and MEDLINE databases, the CINAHL, EMBASE, and PsycINFO databases were searched from January 1975 to November 2012. All articles listed in the references of the MSF studies were reviewed to ensure that all relevant publications were identified. All 35 articles were independently coded by two authors (AA, TD), and any discrepancies (eg, effect size calculations) were reviewed by the other authors (KA, AD, CV).

RESULTS Physician/surgeon performance measures from 35 studies were identified. A random-effects model of weighted mean effect size differences (d) resulted in the following: construct validity coefficients for the MSF system on physician/surgeon performance across different levels in practice ranged from d=0.14 (95% confidence interval [CI] 0.40-0.69) to d=1.78 (95% CI 1.20-2.30); construct validity coefficients for the MSF on physician/surgeon performance on two different occasions ranged from d=0.23 (95% CI 0.13-0.33) to d=0.90 (95% CI 0.74-1.10); concurrent validity coefficients for the MSF based on differences in assessor group ratings ranged from d=0.50 (95% CI 0.47-0.52) to d=0.57 (95% CI 0.55-0.60); and predictive validity coefficients for the MSF on physician/surgeon performance across different standardized measures ranged from d=1.28 (95% CI 1.16-1.41) to d=1.43 (95% CI 0.87-2.00).

CONCLUSION The construct and criterion validity of the MSF system is supported by small to large effect size differences based on the MSF process and physician/surgeon performance across different clinical and nonclinical domain measures.
Affiliation(s)
- Ahmed Al Ansari: Department of General Surgery, Bahrain Defense Force Hospital, Riffa, Kingdom of Bahrain
- Tyrone Donnon: Medical Education and Research Unit, Department of Community Health Sciences, Faculty of Medicine, University of Calgary, AB, Canada
- Khalid Al Khalifa: Department of General Surgery, Bahrain Defense Force Hospital, Riffa, Kingdom of Bahrain
- Abdulla Darwish: Department of Pathology, Bahrain Defense Force Hospital, Riffa, Kingdom of Bahrain
- Claudio Violato: Department of Medical Education, Faculty of Medicine, University Ambrosiana, Milan, Italy
14. Arora VM, Greenstein EA, Woodruff JN, Staisiunas PG, Farnan JM. Implementing peer evaluation of handoffs: associations with experience and workload. J Hosp Med 2013; 8:132-136. [PMID: 23382137] [DOI: 10.1002/jhm.2002]
Abstract
BACKGROUND Although peer evaluation can be used to evaluate in-hospital handoffs, few studies have described using this strategy.

OBJECTIVE Our objective was to assess the feasibility of an online peer handoff evaluation and characterize performance over time among medical interns.

DESIGN Prospective cohort study.

PATIENTS Medical interns from a residency program rotating at 2 teaching hospitals.

MEASUREMENTS Performance on an end-of-rotation evaluation of giving and receiving handoffs.

RESULTS From July 2009 to March 2010, 31 interns completed 60% (172/288) of peer evaluations. Ratings were high across domains (mean, 8.3-8.6). In multivariate regression controlling for evaluator and evaluatee, statistically significant improvements over time were observed for 4 items compared to the first 3 months of the year: 1) communication skills (season 2, +0.34 [95% confidence interval (CI), 0.08-0.60], P = 0.009); 2) listening behavior (season 2, +0.29 [95% CI, 0.04-0.55], P = 0.025); 3) accepting professional responsibility (season 3, +0.37 [95% CI, 0.08-0.65], P = 0.012); and 4) accessing the system (season 2, +0.21 [95% CI, 0.03-0.39], P = 0.023). Ratings were also significantly lower when interns were postcall for written sign-out quality (8.21 vs 8.39, P = 0.008) and accepting feedback (8.25 vs 8.42, P = 0.006). Ratings from a community hospital rotation, with a lower census than the teaching hospitals, were significantly higher for overall performance and 7 of 12 domains (P < 0.05 for all). Significant evaluator effects were observed.

CONCLUSIONS Although there is evidence of leniency, peer evaluations of handoffs demonstrated improvement over time and associations with workload, such as postcall status. This suggests the importance of examining how workload impacts handoffs in the future.
Affiliation(s)
- Vineet M Arora: Section of General Internal Medicine, Department of Medicine, University of Chicago, Chicago, IL 60637, USA
15. Leung KK, Wang WD, Chen YY. Multi-source evaluation of interpersonal and communication skills of family medicine residents. Adv Health Sci Educ Theory Pract 2012; 17:717-726. [PMID: 22240920] [DOI: 10.1007/s10459-011-9345-9]
Abstract
There is a lack of information on the use of multi-source evaluation to assess trainees' interpersonal and communication skills in Oriental settings. This study was conducted to assess the reliability and applicability of having patients, peer residents, nurses, and teaching staff evaluate the interpersonal and communication skills of family medicine residents, and to compare these ratings with an objective structured clinical examination (OSCE). Our results revealed that the instruments used by staff, peers, nurses, and for self-evaluation had good internal consistency reliability (α > 0.90), except for the behavioral checklist (α = 0.57). Staff, peer, and nurse evaluations were highly correlated with one another (r = 0.722 for staff and peer ratings, r = 0.734 for staff and nurse ratings, r = 0.634 for peer and nurse ratings). However, residents' self-ratings and patient ratings were not correlated with ratings by any other raters. OSCE scores were correlated with peer ratings (r = 0.533) and staff ratings (r = 0.642), but not with self- or patient ratings. The generalizability study revealed that the major sources of variance were the type of rater and the interaction of residents with type of rater. This study found that self-ratings and patient ratings were not consistent with other sources of rating of residents' interpersonal and communication skills. Whether variation among rater types in a multi-source evaluation should be regarded as measurement error or as complementary information warrants further study.
Affiliation: Kai-Kuen Leung, Department of Family Medicine, National Taiwan University College of Medicine, Taipei, ROC.

16
Owen C, Mathews PW, Phillips C, Ramsey W, Corrigan G, Bassett M, Wenzel J. Intern culture, internal resistance: uptake of peer review in two Australian hospital internship programs. AUST HEALTH REV 2012; 35:430-5. [PMID: 22126945 DOI: 10.1071/ah10925] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2010] [Accepted: 01/31/2011] [Indexed: 11/23/2022]
Abstract
OBJECTIVE To compare the uptake of peer review among interns in mandatory and voluntary peer-review programs. POPULATION All first- and second-year graduates (n = 105) in two Australian hospitals. MAIN OUTCOME MEASURES Completion of peer review, and reported responses by doctors to peer review. RESULTS Eight of the 60 interns undertaking the mandated program completed all steps; in the voluntary program, none of the 45 interns did so. Resistance to peer review occurred at all stages of the trial, from the initial briefing sessions to the provision of peer-review reports. DISCUSSION Hospital internship is a critical period for the development of professional identity among doctors. We hypothesise that resistance to peer review among novice doctors reflects a complex tension between the processes underpinning the development of a group professional identity in hospital and a managerial drive for personal reflection and accountability. Peer review may be found threatening by interns because it appears to run counter to collegiality or 'team culture'. In this study, resistance to peer review represented a low-cost strategy by which the interns' will could be asserted against management. CONCLUSION To enhance uptake, peer review should be structured as key to clinical development and modelled as a professional behaviour by higher-status colleagues.
Affiliation: Cathy Owen, Australian National University, ANU Medical School, Canberra, ACT 0200, Australia.

17
Corgnet B. Peer evaluations and team performance: when friends do worse than strangers. ECONOMIC INQUIRY 2012; 50:171-181. [PMID: 22329052 DOI: 10.1111/j.1465-7295.2010.00354.x] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/31/2023]
Abstract
We use peer assessments as a tool to allocate joint profits in a real-effort team experiment. We find that using this incentive mechanism reduces team performance. More specifically, we show that teams composed of acquaintances rather than strangers actually underperform in a context of peer evaluations. We conjecture that peer evaluations undermine the inherently high level of intrinsic motivation that characterizes teams composed of friends and possibly exacerbate negative reciprocity among partners. Finally, we analyze the determinants of peer assessments and stress the crucial importance of equality concerns.
18
Willett LL, Estrada CA, Wall TC, Coley HL, Ngu J, Curry W, Salanitro A, Houston TK. Use of ecological momentary assessment to guide curricular change in graduate medical education. J Grad Med Educ 2011; 3:162-7. [PMID: 22655137 PMCID: PMC3184922 DOI: 10.4300/jgme-d-10-00165.1] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/25/2010] [Revised: 12/28/2010] [Accepted: 01/03/2011] [Indexed: 11/06/2022] Open
Abstract
PURPOSE To assess whether a novel evaluation tool could guide curricular change in an internal medicine residency program. METHOD The authors developed an 8-item Ecological Momentary Assessment tool and collected daily evaluations from residents of the relative educational value of 3 differing ambulatory morning report formats (scale: 8 = best, 0 = worst). From the evaluations, they made a targeted curricular change and used the tool to assess its impact. RESULTS Residents completed 1388 evaluation cards for 223 sessions over 32 months, with a response rate of 75.3%. At baseline, there was a decline in perceived educational value with advancing postgraduate (PGY) year for the overall mean score (PGY-1, 7.4; PGY-2, 7.2; PGY-3, 7.0; P < .01) and for percentage reporting greater than 2 new things learned (PGY-1, 77%; PGY-2, 66%; PGY-3, 50%; P < .001). The authors replaced the format of a lower scoring session with one of higher cognitive content to target upper-level residents. The new session's mean score improved (7.1 to 7.4; P = .03); the adjusted odds ratios before and after the change for percentage answering, "Yes, definitely" to "Area I need to improve" was 2.53 (95% confidence interval [CI], 1.45-4.42; P = .001) and to "Would recommend to others," it was 2.08 (95% CI, 1.12-3.89; P = .05). CONCLUSIONS The Ecological Momentary Assessment tool successfully guided ambulatory morning report curricular changes and confirmed successful curricular impact. Ecological Momentary Assessment concepts of multiple, frequent, timely evaluations can be successfully applied in residency curriculum redesign.
19
Dupras DM, Edson RS. A survey of resident opinions on peer evaluation in a large internal medicine residency program. J Grad Med Educ 2011; 3:138-43. [PMID: 22655133 PMCID: PMC3184905 DOI: 10.4300/jgme-d-10-00099.1] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/08/2010] [Revised: 08/15/2010] [Accepted: 11/15/2010] [Indexed: 11/06/2022] Open
Abstract
BACKGROUND Starting in the 1960s, studies have suggested that peer evaluation could provide unique insights into the performance of residents in training. However, reports of resident resistance to peer evaluation because of confidentiality issues and the possible impact on their working relationships raised concerns about the acceptability and utility of peer evaluation in graduate medical education. The literature suggests that peers are able to reliably assess communication, interpersonal skills, and professionalism and provide input that may differ from faculty evaluations. This study assessed the attitudes of internal medicine residents 1 year after the implementation of a peer-evaluation system. METHODS During the 2005-2006 academic year, we conducted an anonymous survey of the 168 residents in the Internal Medicine Residency Program at the Mayo Clinic, Rochester, Minnesota. Contingency table analysis was used to compare the response patterns of the groups. RESULTS The response rate was 61% (103/168 residents) and it did not differ by year of training. Most residents (74/103; 72%) felt that peers could provide valuable feedback. Eighty percent of residents (82/103) felt the feedback was important for their professional development and 84% (86/102) agreed that peers observe behaviors not seen by attending faculty. CONCLUSIONS The results of this study suggest that internal medicine residents provide unique assessment of their peers and provide feedback they consider important for their professional development. More importantly, the results support the role of peer evaluation in the assessment of the competencies of professionalism and interpersonal and communication skills.
20
Lanning SK, Brickhouse TH, Gunsolley JC, Ranson SL, Willett RM. Communication skills instruction: an analysis of self, peer-group, student instructors and faculty assessment. PATIENT EDUCATION AND COUNSELING 2011; 83:145-51. [PMID: 20638816 DOI: 10.1016/j.pec.2010.06.024] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/03/2009] [Revised: 06/06/2010] [Accepted: 06/16/2010] [Indexed: 05/06/2023]
Abstract
OBJECTIVE To explore the correlation of student and faculty assessments of second-year dental students' (D2s) communicative skills during simulated patient interviews. METHODS Eighty-two D2s, 14 student instructors and 8 faculty used a 5-point scale (1 = poor to 5 = excellent) to assess 12 specific communicative skills of D2s, generating assessment sources of self, peer-group, student instructor, and faculty. Mean scores and comparisons between assessment sources were calculated. Spearman correlations evaluated relationships between specific skills and assessment sources. RESULTS The mean assessment score (± standard error) for peer-group (4.14 ± 0.04) was higher than for self (3.86 ± 0.06, p < 0.05), student instructor (4.07 ± 0.04) and faculty (3.93 ± 0.10). Regarding assessment sources, the degree of correlation from highest to lowest was peer-group and student instructor (ρ = 0.46, p < 0.0001), self and student instructor (ρ = 0.35, p < 0.002), and self and peer-group (ρ = 0.28, p < 0.02). The correlations between student instructor and faculty, faculty and self, and faculty and peer-group were nonsignificant. CONCLUSION Student assessments differed from faculty assessments in both mean score and correlation index. Future studies are needed to determine the nature of the differences found between student and faculty assessments. PRACTICE IMPLICATIONS Peer, student instructor and faculty assessments of dental students' communicative skills are not necessarily interchangeable but may offer uniquely different and valuable feedback to students.
Affiliation: Sharon K Lanning, Department of Periodontics, Virginia Commonwealth University, School of Dentistry, USA.

21
Speyer R, Pilz W, Van Der Kruis J, Brunings JW. Reliability and validity of student peer assessment in medical education: a systematic review. MEDICAL TEACHER 2011; 33:e572-85. [PMID: 22022910 DOI: 10.3109/0142159x.2011.610835] [Citation(s) in RCA: 23] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/08/2023]
Abstract
BACKGROUND Peer assessment has been demonstrated to be an effective educational intervention for health science students. AIMS This study aims to give an overview of all instruments or questionnaires for peer assessments used in medical and allied health professional educational settings and their psychometric characteristics as described in literature. METHODS A systematic literature search was carried out using the electronic databases Pubmed, Embase, ERIC, PsycINFO and Web of Science, including all available inclusion dates up to May 2010. RESULTS Out of 2899 hits, 28 studies were included, describing 22 different instruments for peer assessment in mainly medical educational settings. Although most studies considered professional behaviour as a main subject of assessment and described peer assessment usually as an assessment tool, great diversity was found in educational settings and application of peer assessment, dimensions or constructs as well as number of items and scoring system per questionnaire, and in psychometric characteristics. CONCLUSIONS Although quite a few instruments of peer assessment have been identified, many questionnaires did not provide sufficient psychometric data. Still, the final choice of an instrument for educational purposes can only be justified by its sufficient reliability and validity as well as the discriminative and evaluative purposes of the assessment.
Affiliation: Renée Speyer, Institute of Health Studies, HAN University of Applied Sciences, Nijmegen, The Netherlands.

22
Mackillop LH, Crossley J, Vivekananda-Schmidt P, Wade W, Armitage M. A single generic multi-source feedback tool for revalidation of all UK career-grade doctors: does one size fit all? MEDICAL TEACHER 2011; 33:e75-e83. [PMID: 21275537 DOI: 10.3109/0142159x.2010.535870] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/30/2023]
Abstract
BACKGROUND The UK Department of Health is considering a single, generic multi-source feedback (MSF) questionnaire to inform revalidation. METHOD Evaluation of an implementation pilot, reporting response rates, assessor mix, question redundancy and participants' perceptions. Reliability was estimated using generalisability theory. RESULTS A total of 12,540 responses were received on 977 doctors. The mean time taken to complete an MSF exercise was 68.2 days. The mean number of responses received per doctor was 12.0 (range 1-17), with no significant difference between specialties. Individual question response rates and participants' comments about questions indicate that some questions are less appropriate for some specialties. There was a significant difference in mean score between specialties. Despite guidance, there were significant differences in the mix of assessors across specialties. More favourable scores were given by progressively more junior doctors. Nurses gave the most reliable scores. CONCLUSIONS It is feasible to electronically administer a generic questionnaire to a large population of doctors. Generic content is appropriate for most but not all specialties. The differences in mean scores and in the reliability of the MSF between specialties may be due in part to specialty differences in assessor mix. Therefore, the number and mix of assessors should be standardised at specialty level, and scores should not be compared across specialties.
23
Cook DA, Beckman TJ, Mandrekar JN, Pankratz VS. Internal structure of mini-CEX scores for internal medicine residents: factor analysis and generalizability. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2010; 15:633-45. [PMID: 21120648 DOI: 10.1007/s10459-010-9224-9] [Citation(s) in RCA: 26] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/02/2009] [Accepted: 02/01/2010] [Indexed: 05/24/2023]
Abstract
UNLABELLED The mini-CEX is widely used to rate directly observed resident-patient encounters. Although several studies have explored the reliability of mini-CEX scores, their dimensionality is incompletely understood. OBJECTIVE To explore the dimensionality of mini-CEX scores through factor analysis and generalizability analysis. DESIGN Factor analytic and generalizability study using retrospective data. PARTICIPANTS Eighty-five physician preceptors and 264 internal medicine residents (postgraduate years 1-3). METHODS Preceptors used the six-item mini-CEX to rate directly observed resident-patient encounters in internal medicine resident continuity clinics. We analyzed mini-CEX scores accrued over 4 years using repeated measures analysis of variance to generate a correlation matrix adjusted for multiple observations on individual residents, and then performed factor analysis on this adjusted correlation matrix. We also performed generalizability analyses. RESULTS Eighty-five preceptors rated 264 residents in 1,414 resident-patient encounters. Common factor analysis of these scores after adjustment for repeated measures revealed a single-factor solution. Cronbach's alpha for this single factor (i.e. all six mini-CEX items) was ≥ 0.86. Sensitivity analyses using principal components and other method variations revealed a similar factor structure. Generalizability studies revealed a reproducibility coefficient of 0.23 (0.70 for 10 raters or encounters). CONCLUSIONS The mini-CEX appears to measure a single global dimension of clinical competence. If educators desire to measure discrete clinical skills, alternative assessment methods may be required. Our approach to factor analysis overcomes the limitation of repeated observations on subjects without discarding data, and may be useful to other researchers attempting factor analysis of datasets in which individuals contribute multiple observations.
Affiliation: David A Cook, Division of General Internal Medicine and Office of Education Research, Mayo Clinic College of Medicine, Baldwin 4-A, 200 First Street SW, Rochester, MN 55905, USA.

24
Warm EJ, Schauer D, Revis B, Boex JR. Multisource feedback in the ambulatory setting. J Grad Med Educ 2010; 2:269-77. [PMID: 21975632 PMCID: PMC2941386 DOI: 10.4300/jgme-d-09-00102.1] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/12/2009] [Revised: 01/18/2010] [Accepted: 01/25/2010] [Indexed: 11/06/2022] Open
Abstract
BACKGROUND The Accreditation Council for Graduate Medical Education has mandated multisource feedback (MSF) in the ambulatory setting for internal medicine residents. Few published reports demonstrate actual MSF results for a residency class, and fewer still include clinical quality measures and knowledge-based testing performance in the data set. METHODS Residents participating in a year-long group practice experience called the "long-block" received MSF that included self, peer, staff, attending physician, and patient evaluations, as well as concomitant clinical quality data and knowledge-based testing scores. Residents were given a rank for each data point compared with peers in the class, and these data were reviewed with the chief resident and program director over the course of the long-block. RESULTS Multisource feedback identified residents who performed well on most measures compared with their peers (10%), residents who performed poorly on most measures compared with their peers (10%), and residents who performed well on some measures and poorly on others (80%). Each high-, intermediate-, and low-performing resident had at least one aspect of the MSF that was significantly lower than the others, and this served as the basis of formative feedback during the long-block. CONCLUSION Use of multisource feedback in the ambulatory setting can identify high-, intermediate-, and low-performing residents and suggest specific formative feedback for each. More research needs to be done on the effect of such feedback, as well as the relationships between each of the components in the MSF data set.
Affiliation: Eric J. Warm, MD, Department of Internal Medicine, University of Cincinnati Academic Health Center, 231 Albert Sabin Way, Cincinnati, OH 45267-0557.

25
Implementation of Peer Review into a Physical Medicine and Rehabilitation Program and its Effect on Professionalism. PM R 2010; 2:117-24. [DOI: 10.1016/j.pmrj.2009.11.013] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2009] [Revised: 11/23/2009] [Accepted: 11/26/2009] [Indexed: 11/19/2022]
26
Mathews PW, Owen C, Ramsey W, Corrigan G, Bassett M, Wenzel J. Assessment of a peer review process among interns at an Australian hospital. AUST HEALTH REV 2010; 34:499-505. [DOI: 10.1071/ah09838] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2009] [Accepted: 03/09/2010] [Indexed: 11/23/2022]
Abstract
Purpose. This study considered how a peer review process could work in an Australian public hospital setting. Method. Up to 229 medical personnel completed an online performance assessment of 52 Junior Medical Officers (JMOs) during the last quarter of 2008. Results. Results indicated that the registrar was the most suitable person to assess interns, although other professionals, including interns themselves, were identified as capable of playing a role in a more holistic appraisal system. Significant sex differences were also found, which may be worthy of further study. The affirmative rather than formative nature of the assessment results also suggested that the criteria and questions posed in peer review be re-examined. Conclusion. A peer review process could be readily implemented in a large institution, and respondents generally regarded peer review as a valuable tool in the development of junior medical staff. What is known about the topic? The literature generally concurs that peer review is a useful tool in professional development and can provide a rounded view from diverse sources about a peer's professional performance. It has been implemented in at least one Canadian medical facility as a mandatory process. What does this paper add? Our study identifies who is considered the most suitable peer(s) to assess interns, raises various substantive issues about peer review and about the process itself, and raises questions about the voluntary v. mandatory nature of peer review. It is the first study to trial peer review amongst interns in an Australian hospital. What are the implications for practitioners? Peer review is a suitable tool in professional development and was generally supported in our study, suggesting that it could be implemented into Australian healthcare practice. However, education about the nature and value of peer review would be required amongst healthcare professionals, and the use of peer review could imply greater managerial engagement in medical practice. Peer review is a more effective assessment tool than that currently employed in many Australian hospitals.
27
van Mook WNKA, Gorter SL, O'Sullivan H, Wass V, Schuwirth LW, van der Vleuten CPM. Approaches to professional behaviour assessment: tools in the professionalism toolbox. Eur J Intern Med 2009; 20:e153-7. [PMID: 19892295 DOI: 10.1016/j.ejim.2009.07.012] [Citation(s) in RCA: 31] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/23/2008] [Revised: 07/22/2009] [Accepted: 07/25/2009] [Indexed: 11/26/2022]
Abstract
There is general agreement that professionalism and professional behaviour should be (formatively and summatively) assessed, but consensus on how this should be done is still lacking. After discussing some of the remaining issues and questions regarding professionalism assessment, this article discusses the importance of qualitative comments to the assessment of professional behaviour, focuses on the currently most frequently used tools, as well as stresses the need for triangulation (combining) of these tools.
Affiliation: Walther N K A van Mook, Department of Intensive Care and Internal Medicine, Maastricht University Medical Centre, Maastricht, The Netherlands.

28
Richards SH, Campbell JL, Walshaw E, Dickens A, Greco M. A multi-method analysis of free-text comments from the UK General Medical Council Colleague Questionnaires. MEDICAL EDUCATION 2009; 43:757-66. [PMID: 19659489 DOI: 10.1111/j.1365-2923.2009.03416.x] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
CONTEXT Colleague surveys are important sources of information on a doctor's professional performance in UK revalidation plans. Colleague surveys are analysed by deriving quantitative measures from rating scales. As free-text comments are also recorded, we explored the utility of a mixed-methods approach to their analysis. METHODS A volunteer sample of practising UK doctors (from acute, primary and other care settings) undertook a General Medical Council (GMC) colleague survey. Up to 20 colleagues per doctor completed an online Colleague Questionnaire (CQ), which included 18 performance evaluation items and an optional comment box. The polarity of each comment was noted and a qualitative content analysis undertaken. Emerging themes were mapped onto existing items to identify areas not previously captured. We then quantitatively analysed the associations between the polarity of comments (positive/adverse) and their related item scale scores. RESULTS A total of 1636 of 4269 (38.3%) colleagues recorded free-text comments (median = 14 per doctor) and most were unequivocally positive; only 127 of 1636 (7.8%) recorded negative statements, and these were clustered on a subset comprising 80 of 302 (26.5%) doctors. Doctors' overall mean CQ performance scores were significantly correlated with the numbers of colleagues recording positive (r = 0.35; P < 0.0001) and adverse (r = -0.40; P = 0.0003) comments. In total, 1224 of 1636 (74.8%) comments included statements that mapped onto CQ items, and statistically significant associations (P < 0.05) were observed for 14 of 15 items. Five global themes (innovativeness, interpersonal skills, popularity, professionalism, respect) were identified in 904 of the 1224 (73.9%) mapped comments. CONCLUSIONS There is an inevitable trade-off between capturing indicators of problematic performance (i.e. adverse statements that contradict a positive scale rating) and the ease with which such statements can be identified. Our data suggest there is little benefit in routinely analysing narrative comments for the purposes of revalidation.
29
Gross M, Pelz J. [Change in the job description of physicians. Consequences for medical education]. Bundesgesundheitsblatt Gesundheitsforschung Gesundheitsschutz 2009; 52:831-40. [PMID: 19626280 DOI: 10.1007/s00103-009-0906-5] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
After receiving their final degree at about 25 years of age, physicians go on to practice for a minimum of 40 years. One can therefore assume that after graduation physicians are confronted with many occupational challenges that were not, and could not be, covered during their studies. This implies that medical education must provide intensive knowledge not only of established methods but above all of potential future techniques. Throughout the educational period, and continuing during professional life, physicians must first learn, and then remain able, to seek information and to conduct critical appraisal - systematically examining research evidence and assessing its validity and the relevance of its results. The increasing velocity of innovation in medicine requires students to be prepared for life-long learning and continuous, autonomous professional development.
Affiliation: M Gross, Charité - Universitätsmedizin Berlin, 10117 Berlin.

30
O'Brien CE, Franks AM, Stowe CD. Multiple rubric-based assessments of student case presentations. AMERICAN JOURNAL OF PHARMACEUTICAL EDUCATION 2008; 72:58. [PMID: 18698367 PMCID: PMC2508736 DOI: 10.5688/aj720358] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/10/2007] [Accepted: 12/09/2007] [Indexed: 05/20/2023]
Abstract
OBJECTIVES To evaluate a rubric-based method of assessing pharmacy students' case presentations in the recitation component of a therapeutics course. METHODS A rubric was developed to assess knowledge, skills, and professional behavior. The rubric was used for instructor, student peer, and student self-assessment of case presentations. Rubric-based composite scores were compared to the previous dichotomous checklist-based scores. RESULTS Rubric-based instructor scores were significantly lower and had a broader score distribution than those resulting from the checklist method. Spring 2007 rubric-based composite scores from instructors and peers were significantly lower than those from the pilot study results, but self-assessment composite scores were not significantly different. CONCLUSIONS Successful development and implementation of a grading rubric facilitated evaluation of knowledge, skills, and professional behavior from the viewpoints of instructor, peer, and self in a didactic course.
Affiliation: Catherine E O'Brien, College of Pharmacy, University of Arkansas for Medical Sciences, 4301 West Markham Street, Little Rock, AR 72205, USA.

31
Overeem K, Faber MJ, Arah OA, Elwyn G, Lombarts KMJMH, Wollersheim HC, Grol RPTM. Doctor performance assessment in daily practise: does it help doctors or not? A systematic review. MEDICAL EDUCATION 2007; 41:1039-49. [PMID: 17973764 DOI: 10.1111/j.1365-2923.2007.02897.x] [Citation(s) in RCA: 43] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/16/2023]
Abstract
CONTEXT Continuous assessment of the individual performance of doctors is crucial for life-long learning and quality of care. Policy-makers and health educators should have good insight into the strengths and weaknesses of the methods available. The aim of this study was to systematically evaluate the feasibility of methods, the psychometric properties of instruments that are especially important for summative assessments, and the effectiveness of methods serving formative assessments used in routine practice to assess the performance of individual doctors. METHODS We searched the MEDLINE (1966-January 2006), PsycINFO (1972-January 2006), CINAHL (1982-January 2006), EMBASE (1980-January 2006) and Cochrane (1966-2006) databases for English-language articles, and supplemented this with a hand-search of reference lists of relevant studies and bibliographies of review articles. Studies that aimed to assess the performance of individual doctors in routine practice were included. Two reviewers independently abstracted data regarding study design, setting and findings related to reliability, validity, feasibility and effectiveness using a standard data abstraction form. RESULTS A total of 64 articles met our inclusion criteria. We observed 6 different methods of evaluating performance: simulated patients; video observation; direct observation; peer assessment; audit of medical records; and portfolio or appraisal. Peer assessment is the most feasible method in terms of costs and time. Little psychometric assessment of the instruments has been undertaken so far. The effectiveness of formative assessments is poorly studied. All systems but 2 rely on a single method to assess performance. DISCUSSION There is substantial potential to assess the performance of doctors in routine practice. The long-term impact and effectiveness of formative performance assessments on education and quality of care remain largely unknown. Future research designs need to pay special attention to unmasking effectiveness in terms of performance improvement.
Affiliation: Karlijn Overeem, Centre for Quality of Care Research, Radboud University Nijmegen Medical Centre, Nijmegen, The Netherlands.

32
McCormack WT, Lazarus C, Stern D, Small PA. Peer nomination: a tool for identifying medical student exemplars in clinical competence and caring, evaluated at three medical schools. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2007; 82:1033-1039. [PMID: 17971688 DOI: 10.1097/01.acm.0000285345.75528.ee] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
PURPOSE Peer evaluation is underused in medical education. The goals of this study were to validate, in a multi-institutional study, a peer nomination form that identifies outstanding students in clinical competence and interpersonal skills; to test the hypothesis that, with additional survey items, humanism could be identified as a separate factor; and to find the simplest method of analysis. METHOD In 2003, a 12-item peer nomination form was administered to junior or senior medical students at three institutions. Factor analysis was used to identify the major latent variables and the items related to those characteristics. On the basis of those results, a simpler, six-item form was developed and administered in 2004. Student rankings based on factor analysis and on nomination counts were compared. RESULTS Factor analysis of peer nomination data from both surveys identified three factors: clinical competence, caring, and community service. The new survey items designed to address humanism all loaded with the interpersonal skills items; thus, the second major factor is characterized as caring. Rankings based on peer nomination results analyzed either by factor analysis or by simply counting nominations distinguish at least the top 15% of students for each characteristic. CONCLUSIONS Counting peer nominations using a simple, six-item form identifies medical student exemplars for three characteristics: clinical competence, caring, and community service. Factor analysis of peer nomination data did not identify humanism as a separate factor. Peer nomination rankings provide medical schools with a reliable tool to identify exemplars for recognition in medical student performance evaluations and for selection for honors (e.g., the Gold Humanism Honor Society).
Affiliation(s)
- Wayne T McCormack
- Department of Pathology, Immunology & Laboratory Medicine, University of Florida College of Medicine, Gainesville, Florida 32610-0215, USA
33
Davidson ML. The 360-degree evaluation. Clin Podiatr Med Surg 2007; 24:65-94, vii. [PMID: 17127162 DOI: 10.1016/j.cpm.2006.09.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
The 360-degree evaluation refers to resident assessment by all persons in the resident's sphere of influence. Although most think of the 360-degree evaluation as a single tool used by all nonfaculty personnel, it is actually an assessment system. Therefore, all evaluators, including faculty, and all tools used in the assessment of residents comprise the 360-degree evaluation. The goal of the 360-degree evaluation is to accurately assess resident performance using tools that are reliable and valid so that competence can be demonstrated and programmatic needs identified. This article provides insights into the evaluation process through guidelines, practical advice, and examples from the field.
Affiliation(s)
- Melissa L Davidson
- Department of Anesthesiology, University of Medicine and Dentistry of New Jersey, New Jersey Medical School, MSB E-550, 185 South Orange Avenue, Newark, NJ 07103, USA.
34
Lurie SJ, Nofziger AC, Meldrum S, Mooney C, Epstein RM. Temporal and group-related trends in peer assessment amongst medical students. MEDICAL EDUCATION 2006; 40:840-847. [PMID: 16925633 DOI: 10.1111/j.1365-2929.2006.02540.x] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/11/2023]
Abstract
Peer assessment has been increasingly recommended as a way to evaluate the professional competencies of medical trainees. Prior studies have assessed only single groups measured at a single timepoint. Thus, neither the longitudinal stability of such ratings nor differences between groups using the same peer-assessment instrument have been reported previously. Participants were all members of 2 consecutive classes of medical students (n = 77 and n = 85) at the University of Rochester School of Medicine and Dentistry who completed Years 2 and 3 of medical school consecutively. All participants were evaluated by 6-12 classmates near the end of both Years 2 and 3. Main outcome measures were mean numerical ratings on peer-assessed scales of professional work habits (WH) and interpersonal attributes (IA). Both scales had high internal consistencies in both years (Cronbach's alpha 0.84-0.94). The IA and WH scales were moderately correlated with one another (r = 0.36 in Year 2, r = 0.28 in Year 3). Year 2 scores were predictive of Year 3 scores for both scales (WH: r = 0.64; IA: r = 0.62). Generalisability and decision analyses revealed that 1 class was consistently more discriminating with the WH scale, while the other was more discriminating with the IA scale. Depending on the class, year and scale, the number of raters needed to achieve reasonable reliability ranged between 7 and 28. Although Year 3 peer ratings were consistently higher than Year 2 peer ratings for both WH and IA, individual scores were highly correlated across the 2 years, despite the fact that different individuals were chosen as peer raters. Peer-assessed abilities appear to be stable between Years 2 and 3 of medical school. Groups may differ in their ability to discriminate different kinds of skills. Generalisability analysis can be used to discover these patterns within groups.
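The "7 to 28 raters" figure comes from projecting single-rater reliability up to the reliability of a multi-rater mean. As an illustration only (the coefficients below are made up, not taken from the study), the standard Spearman-Brown projection can be sketched as:

```python
def spearman_brown(r_single: float, k: int) -> float:
    """Projected reliability of the mean of k raters, given the
    reliability (or generalisability coefficient) of a single rater."""
    return k * r_single / (1 + (k - 1) * r_single)

def raters_needed(r_single: float, target: float) -> int:
    """Smallest number of raters whose mean rating reaches the target reliability."""
    k = 1
    while spearman_brown(r_single, k) < target:
        k += 1
    return k

# With a hypothetical single-rater coefficient of 0.25, seven raters
# are needed to reach a reliability of 0.70.
k = raters_needed(0.25, 0.70)  # -> 7
```

A lower single-rater coefficient pushes the required panel size up quickly, which is one way two classes using the same instrument can need very different numbers of peer raters.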
Affiliation(s)
- Stephen J Lurie
- School of Medicine and Dentistry, University of Rochester, Rochester, New York 14642, USA.
35
Abstract
Recent developments in assessing professionalism and remediating unprofessional behavior can curtail the inaction that often follows observations of negative as well as positive professionalism of learners and faculty. Developments include: longitudinal assessment models promoting professional behavior, not just penalizing lapses; clarity about the assessment's purpose; methods separating formative from summative assessment; conceptual and behavioral definitions of professionalism; techniques increasing the reliability and validity of quantitative and qualitative approaches to assessment such as 360-degree assessments, performance-based assessments, portfolios, and humanism connoisseurs; and systems-design providing infrastructure support for assessment. Models for remediation have been crafted, including: due process, a warning period and, if necessary, confrontation to initiate remediation of the physician who has acted unprofessionally. Principles for appropriate remediation stress matching the intervention to the cause of the professional lapse. Cognitive behavioral therapy, motivational interviewing, and continuous monitoring linked to behavioral contracts are effective remediation techniques. Mounting and maintaining robust systems for professionalism and remediating professional lapses are not easy tasks. They require a sea change in the fundamental goal of academic health care institutions: medical education must not only be a technical undertaking but also a moral process designed to build and sustain character in all its professional citizens.
Affiliation(s)
- Louise Arnold
- University of Missouri-Kansas City School of Medicine, Kansas City, MO 64108, USA.
36
Abstract
Medical education has traditionally focused on imparting medical knowledge, delivering quality patient care, and teaching research methodology. Various measures of success, including standardized testing, have been developed to assess the achievement of those goals. These measures then served as documentation of the effectiveness of individual training programs. However, in 1999, the Accreditation Council for Graduate Medical Education (ACGME) changed the way we measure the success of medical education. They developed six core competencies for medical education and assigned the task of enforcing them to the individual Residency Review Committees. By July 2006, all accredited programs, including dermatopathology fellowships, must use measurable, competency-based objectives, and assess achievement of those objectives. Programs should also be documenting ways they are improving the evaluation process. They must be in full compliance with implementation, measurement, and assessment of the six core competencies for accreditation. The next phase required by the ACGME involves developing curriculum based on competencies as well as using resident, fellow, or graduate competency performance to assess success in preparing trainees for the practice of medicine. This manuscript discusses measurable objectives to address the core competencies for dermatopathology fellowship training as well as dermatopathology rotations in dermatology and pathology residency training.
Affiliation(s)
- Molly A Hinshaw
- Department of Dermatology, University of Wisconsin, Madison, USA.
37
Shue CK, Arnold L, Stern DT. Maximizing participation in peer assessment of professionalism: the students speak. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2005; 80:S1-5. [PMID: 16199444 DOI: 10.1097/00001888-200510001-00004] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
BACKGROUND Medical students have unique information about peers' professionalism but are reluctant to share it through peer assessment. METHOD Students (231 of 375; 62%) in one school replied to a survey about whether various characteristics of peer assessment (e.g., who receives the assessment, its anonymity, implications for the classmate) would prevent or encourage their participation. RESULTS Sixty-six percent of the students agreed that there should be peer assessment of professionalism as long as the assessment reflected their preferences for how the assessment should take place. Some of their preferences included reporting unprofessional behavior to an impartial counselor, a 100% anonymous process, and having the classmate receive corrective instruction. Students across year levels generally agreed about the characteristics of peer assessment. Men and women disagreed about some characteristics. CONCLUSION Most students are willing to participate in peer assessment as long as their preferences are taken into consideration.
Affiliation(s)
- Carolyn K Shue
- University of Missouri-Kansas City School of Medicine, 2411 Holmes, Kansas City, MO 64108, USA
38
Abstract
BACKGROUND Although peer assessment holds promise for assessing professionalism, reluctance and refusal to participate have been noted among learners and practicing physicians. Understanding the perspectives of potential participants may therefore be important in designing and implementing effective peer assessment. OBJECTIVE To identify factors that, according to students themselves, will encourage or discourage participation in peer assessment. DESIGN A qualitative study using grounded theory to interpret views shared during 16 focus groups that were conducted by leaders using a semi-structured guide. PARTICIPANTS Sixty-one students in Years 1, 3, and 4 in 2 mid-western public medical schools. RESULTS Three themes emerged in what students say would promote or discourage peer assessment: personal struggles with peer assessment, characteristics of the assessment system itself, and the environment in which the system operates. Students struggle with reporting an unprofessional peer lest they bring harm to the peer, themselves, or their clinical team or work group. Important system characteristics include who receives the assessment and gives the peer feedback, and whether the assessment is formative or summative and anonymous, signed, or confidential. Students' views of the characteristics promoting peer assessment were not unanimous. According to students, an environment conducive to peer assessment is marked by receptivity to peer reports and by close, positive relationships among students and between students and faculty. CONCLUSIONS The study lays a foundation for creating peer assessment systems acceptable to students by soliciting their views. Merely introducing an assessment tool will not result in students' willingness to assess each other.
Affiliation(s)
- Louise Arnold
- University of Missouri-Kansas City School of Medicine, Kansas City, MO 64108, USA.
39
Dannefer EF, Henson LC, Bierer SB, Grady-Weliky TA, Meldrum S, Nofziger AC, Barclay C, Epstein RM. Peer assessment of professional competence. MEDICAL EDUCATION 2005; 39:713-722. [PMID: 15960792 DOI: 10.1111/j.1365-2929.2005.02193.x] [Citation(s) in RCA: 44] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/03/2023]
Abstract
BACKGROUND Current assessment formats for medical students reliably test core knowledge and basic skills. Methods for assessing other important domains of competence, such as interpersonal skills, humanism and teamwork skills, are less well developed. This study describes the development, implementation and results of peer assessment as a measure of professional competence of medical students to be used for formative purposes. METHODS Year 2 medical students assessed the professional competence of their peers using an online assessment instrument. Fifteen randomly selected classmates were assigned to assess each student. The responses were analysed to determine the reliability and validity of the scores and to explore relationships between peer assessments and other assessment measures. RESULTS Factor analyses suggest a 2-dimensional conceptualisation of professional competence: 1 factor represents Work Habits, such as preparedness and initiative, and the other factor represents Interpersonal Habits, including respect and trustworthiness. The Work Habits factor had moderate, yet statistically significant correlations ranging from 0.21 to 0.53 with all other performance measures that were part of a comprehensive assessment of professional competence. Approximately 6 peer raters were needed to achieve a generalisability coefficient of 0.70. CONCLUSIONS Our findings suggest that it is possible to introduce peer assessment for formative purposes in an undergraduate medical school programme that provides multiple opportunities to interact with and observe peers.
Affiliation(s)
- Elaine F Dannefer
- Cleveland Clinic Lerner College of Medicine, Case Western Reserve University, Cleveland, Ohio 44195, USA.
40
Abstract
OBJECTIVE To determine whether a multisource feedback questionnaire, SPRAT (Sheffield peer review assessment tool), is a feasible and reliable assessment method to inform the record of in-training assessment for paediatric senior house officers and specialist registrars. DESIGN Trainees' clinical performance was evaluated using SPRAT sent to clinical colleagues of their choosing. Responses were analysed to determine variables that affected ratings and their measurement characteristics. SETTING Three tertiary hospitals and five secondary hospitals across a UK deanery. PARTICIPANTS 112 paediatric senior house officers and middle grades. MAIN OUTCOME MEASURES 95% confidence intervals for mean ratings; linear and hierarchical regression to explore potential biasing factors; time needed for the process per doctor. RESULTS 20 middle grades and 92 senior house officers were assessed using SPRAT to inform their record of in-training assessment; 921/1120 (82%) of their proposed raters completed a SPRAT form. As a group, specialist registrars (mean 5.22, SD 0.34) scored significantly higher (t = -4.765) than did senior house officers (mean 4.81, SD 0.35) (P < 0.001). The grade of the doctor accounted for 7.6% of the variation in the mean ratings. The hierarchical regression showed that only 3.4% of the variation in the means could be additionally attributed to three main factors (occupation of rater, length of working relationship, and environment in which the relationship took place) when the doctor's grade was controlled for (significant F change < 0.001). 93 (83%) of the doctors in this study would have needed only four raters to achieve a reliable score if the intent was to determine whether they were satisfactory. The mean time taken by a rater to complete the questionnaire was six minutes. Just over an hour of administrative time is needed for each doctor.
CONCLUSIONS SPRAT seems to be a valid way of assessing large numbers of doctors to support quality assurance procedures for training programmes. The feedback from SPRAT can also be used to inform personal development planning and focus quality improvements.
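The per-doctor 95% confidence intervals used as a main outcome measure above can be illustrated with a simple normal approximation. This sketch and its numbers are illustrative only, not the study's actual analysis:

```python
import math
import statistics

def mean_ci(ratings, z=1.96):
    """Normal-approximation confidence interval for a doctor's mean
    rating (z = 1.96 gives roughly 95% coverage)."""
    m = statistics.mean(ratings)
    se = statistics.stdev(ratings) / math.sqrt(len(ratings))
    return (m - z * se, m + z * se)

# Hypothetical ratings for one doctor from 8 raters on a 1-6 scale.
low, high = mean_ci([5, 5, 4, 6, 5, 4, 5, 5])
```

Narrow intervals from as few as four raters, as reported above, imply low between-rater variance for most doctors.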
Affiliation(s)
- Julian C Archer
- Academic Unit of Child Health, Sheffield Children's Hospital, Sheffield S10 2HT
41
Duffy FD, Gordon GH, Whelan G, Cole-Kelly K, Frankel R, Buffone N, Lofton S, Wallace M, Goode L, Langdon L. Assessing competence in communication and interpersonal skills: the Kalamazoo II report. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2004; 79:495-507. [PMID: 15165967 DOI: 10.1097/00001888-200406000-00002] [Citation(s) in RCA: 353] [Impact Index Per Article: 17.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/11/2023]
Abstract
Accreditation of residency programs and certification of physicians requires assessment of competence in communication and interpersonal skills. Residency and continuing medical education program directors seek ways to teach and evaluate these competencies. This report summarizes the methods and tools used by educators, evaluators, and researchers in the field of physician-patient communication as determined by the participants in the "Kalamazoo II" conference held in April 2002. Communication and interpersonal skills form an integrated competence with two distinct parts. Communication skills are the performance of specific tasks and behaviors such as obtaining a medical history, explaining a diagnosis and prognosis, giving therapeutic instructions, and counseling. Interpersonal skills are inherently relational and process oriented; they are the effect communication has on another person such as relieving anxiety or establishing a trusting relationship. This report reviews three methods for assessment of communication and interpersonal skills: (1) checklists of observed behaviors during interactions with real or simulated patients; (2) surveys of patients' experience in clinical interactions; and (3) examinations using oral, essay, or multiple-choice response questions. These methods are incorporated into educational programs to assess learning needs, create learning opportunities, or guide feedback for learning. The same assessment tools, when administered in a standardized way, rated by an evaluator other than the teacher, and using a predetermined passing score, become a summative evaluation. The report summarizes the experience of using these methods in a variety of educational and evaluation programs and presents an extensive bibliography of literature on the topic. Professional conversation between patients and doctors shapes diagnosis, initiates therapy, and establishes a caring relationship. 
The degree to which these activities are successful depends, in large part, on the communication and interpersonal skills of the physician. This report focuses on how the physician's competence in professional conversation with patients might be measured. Valid, reliable, and practical measures can guide professional formation, determine readiness for independent practice, and deepen understanding of the communication itself.
Affiliation(s)
- F Daniel Duffy
- American Board of Internal Medicine, Philadelphia, PA 19106, USA.
42
Abstract
OBJECTIVES To identify existing instruments for rating peers (professional colleagues) in medical practice and to evaluate them in terms of how they have been developed, their validity and reliability, and their appropriateness for use in clinical settings, including primary care. DESIGN Systematic literature review. DATA SOURCES Electronic search techniques, snowball sampling, and correspondence with specialists. STUDY SELECTION The peer assessment instruments identified were evaluated in terms of how they were developed and to what extent, if relevant, their psychometric properties had been determined. RESULTS A search of six electronic databases identified 4566 possible articles. After appraisal of the abstracts and in depth assessment of 42 articles, three rating scales fulfilled the inclusion criteria and were fully appraised. The three instruments did not meet established standards of instrument development, as no reference was made to a theoretical framework and the published psychometric data omitted essential work on construct and criterion validity. Rater training was absent, and guidance consisted of short written instructions. Two instruments were developed for a hospital setting in the United States and one for a primary care setting in Canada. CONCLUSIONS The instruments developed to date for physicians to evaluate characteristics of colleagues need further assessment of validity before their widespread use is merited.
Affiliation(s)
- Richard Evans
- Primary Care Group, Swansea Clinical School, University of Wales Swansea, Swansea SA2 8PP.
43
Abstract
OBJECTIVE This instalment in the series on professional assessment summarises how peers are used in the evaluation process and whether their judgements are reliable and valid. METHOD The nature of the judgements peers can make, the aspects of competence they can assess and the factors limiting the quality of the results are described with reference to the literature. The steps in implementation are also provided. RESULTS Peers are asked to make judgements about structured tasks or to provide their global impressions of colleagues. Judgements are gathered on whether certain actions were performed, the quality of those actions and/or their suitability for a particular purpose. Peers are used to assess virtually all aspects of professional competence, including technical and non-technical aspects of proficiency. Factors influencing the quality of those assessments are reliability, relationships, stakes and equivalence. CONCLUSION Given the broad range of ways peer evaluators can be used and the sizeable number of competencies they can be asked to judge, generalisations are difficult to derive and this form of assessment can be good or bad depending on how it is carried out.
Affiliation(s)
- John J Norcini
- Foundation for Advancement of International Medical Education and Research, Philadelphia, Pennsylvania 19104, USA.
44
Musick DW, McDowell SM, Clark N, Salcido R. Pilot study of a 360-degree assessment instrument for physical medicine & rehabilitation residency programs. Am J Phys Med Rehabil 2003; 82:394-402. [PMID: 12704281 DOI: 10.1097/01.phm.0000064737.97937.45] [Citation(s) in RCA: 35] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
OBJECTIVE To pilot test a new format for multidisciplinary assessment of resident physicians' professionalism and clinical performance in acute inpatient rehabilitation settings. DESIGN In this pilot study, a 26-item ratings instrument was developed for use by therapists, nurses, social workers, case managers, and psychologists to rate inpatient residents. RESULTS A total of 421 ratings forms were returned over four academic years. The alpha reliability coefficient for the instrumentation sample was 0.99. Chi-square and analysis-of-variance procedures examined item mean differences. Significant differences (P ≤ 0.05) were found based on resident sex (17 items) and rotation setting (20 items). No significant differences were found based on rater profession; mean ratings by profession ranged from 6.67 (physical therapists) to 7.46 (case managers). CONCLUSIONS The psychometric properties of this new ratings format are encouraging. The tool was a useful way to provide formative feedback to residents regarding professionalism and performance. Residency program directors can use this approach to fulfill Accreditation Council for Graduate Medical Education mandates to use a variety of assessment methods in resident education. However, potential sex bias and other issues affecting performance ratings should be considered in interpreting results and warrant further study.
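The alpha reliability coefficient of 0.99 reported here is Cronbach's alpha across the instrument's 26 items. A minimal sketch of how alpha is computed, on made-up data rather than the study's:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha from per-item score columns.

    items: one list of scores per item, aligned across the same respondents.
    """
    k = len(items)
    item_var_sum = sum(variance(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return k / (k - 1) * (1 - item_var_sum / variance(totals))

# Two perfectly consistent items across three respondents -> alpha = 1.0
alpha = cronbach_alpha([[1, 2, 3], [1, 2, 3]])
```

An alpha as high as 0.99 on a 26-item form can reflect item redundancy as well as precision, which is worth keeping in mind when interpreting the figure.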
Affiliation(s)
- David W Musick
- Department of Rehabilitation Medicine, University of Pennsylvania School of Medicine, Philadelphia, Pennsylvania 19104, USA
45
Arnold L. Assessing professional behavior: yesterday, today, and tomorrow. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2002; 77:502-515. [PMID: 12063194 DOI: 10.1097/00001888-200206000-00006] [Citation(s) in RCA: 182] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
PURPOSE The author interprets the state of the art of assessing professional behavior. She defines the concept of professionalism, reviews the psychometric properties of key approaches to assessing professionalism, conveys major findings that these approaches produced, and discusses recommendations to improve the assessment of professionalism. METHOD The author reviewed professionalism literature from the last 30 years that had been identified through database searches; included in conference proceedings, bibliographies, and reference lists; and suggested by experts. The cited literature largely came from peer-reviewed journals, represented themes or novel approaches, reported qualitative or quantitative data about measurement instruments, or described pragmatic or theoretical approaches to assessing professionalism. RESULTS A circumscribed concept of professionalism is available to serve as a foundation for next steps in assessing professional behavior. The current array of assessment tools is rich. However, their measurement properties should be strengthened. Accordingly, future research should explore rigorous qualitative techniques; refine quantitative assessments of competence, for example, through OSCEs; and evaluate separate elements of professionalism. It should test the hypothesis that assessment tools will be better if they define professionalism as behaviors expressive of value conflicts, investigate the resolution of these conflicts, and recognize the contextual nature of professional behaviors. Whether measurement tools should be tailored to the stage of a medical career and how the environment can support or sabotage the assessment of professional behavior are central issues. FINAL THOUGHT: Without solid assessment tools, questions about the efficacy of approaches to educating learners about professional behavior will not be effectively answered.
Affiliation(s)
- Louise Arnold
- University of Missouri-Kansas City School of Medicine, 64108, USA
46
Rudy DW, Fejfar MC, Griffith CH, Wilson JF. Self- and peer assessment in a first-year communication and interviewing course. Eval Health Prof 2001; 24:436-445. [PMID: 11817201 DOI: 10.1177/016327870102400405] [Citation(s) in RCA: 50] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Peer and self-evaluation are crucial in the professional development of physicians. However, these skills must be learned, and there are barriers to their acceptance and successful use. To overcome these obstacles, it has been suggested that these concepts be addressed longitudinally throughout medical education. Therefore, first-year medical students were introduced to peer and self-assessment as part of a videotape review during an interviewing course, completing written peer and self-assessments of the interviews. Students' self-assessments were compared with the assessments of peers and faculty. Written evaluations showed that peers were more lenient than faculty and that students were most critical of their own performances. Students could provide balanced assessments of their peers but were predominantly negative about their own performances. It appears that first-year students are capable of evaluating their peers but have difficulty accurately assessing their own performance. Further interventions are needed to foster self-assessment skills in first-year students.
47
Parikh A, McReelis K, Hodges B. Student feedback in problem based learning: a survey of 103 final year students across five Ontario medical schools. MEDICAL EDUCATION 2001; 35:632-636. [PMID: 11437964 DOI: 10.1046/j.1365-2923.2001.00994.x] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
CONTEXT Problem based learning (PBL) has become an integral component of medical curricula around the world. In Ontario, Canada, PBL has been implemented in all five Ontario medical schools for several years. Although proper and timely feedback is an essential component of medical education, the types of feedback that students receive in PBL have not been systematically investigated. OBJECTIVES In the first multischool study of PBL in Canada, we sought to determine the types of feedback (grades, written comments, group feedback from tutor, individual feedback from tutor, peer feedback, self-assessment, no feedback) that students receive, as well as their satisfaction with these different feedback modalities. SUBJECTS AND METHODS We surveyed a sample of 103 final-year medical students at the five Ontario schools (University of Toronto, McMaster University, Queen's University, University of Ottawa and University of Western Ontario). Subjects were recruited via e-mail and asked to fill out a questionnaire. RESULTS Many students felt that the most helpful type of feedback in PBL was individual feedback from the tutor, and indeed, individual feedback was one of the more common types of feedback provided. However, although students also indicated a strong preference for peer and group feedback, these forms of feedback were not widely reported. There were significant differences between schools in the use of grades, written comments, self-assessment and peer feedback, as well as in the immediacy of the feedback given. CONCLUSIONS Across Ontario, students do receive frequent feedback in PBL. However, significant differences exist in the types of feedback students receive, as well as in its timing. Although rated highly by students at all schools, the use of peer feedback and self-assessment is limited at most, but not all, medical schools.
Affiliation(s)
- A Parikh
- Centre for Research in Education, Faculty of Medicine, University of Toronto, Toronto, Canada
48
Ginsburg S, Regehr G, Hatala R, McNaughton N, Frohna A, Hodges B, Lingard L, Stern D. Context, conflict, and resolution: a new conceptual framework for evaluating professionalism. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2000; 75:S6-S11. [PMID: 11031159 DOI: 10.1097/00001888-200010001-00003] [Citation(s) in RCA: 97] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/15/2023]
Affiliation(s)
- S Ginsburg
- Mt. Sinai Hospital, Toronto, Ontario, Canada
49