1. Garibaldi BT, Russell SW. Strategies to Improve Bedside Clinical Skills Teaching. Chest 2021;160:2187-2195. PMID: 34242633. DOI: 10.1016/j.chest.2021.06.055.
Abstract
The bedside encounter between a patient and physician remains the cornerstone of the practice of medicine. However, physicians and trainees spend less time in direct contact with patients and families in the modern healthcare system. The current pandemic has further threatened time spent with patients. This lack of time has led to a decline in clinical skills, and a decrease in the number of faculty who are confident in teaching at the bedside. In this review we offer several strategies to get physicians and trainees back to the bedside to engage in clinical skills teaching and assessment. We recommend that providers pause before bedside encounters to be present with patients and learners and develop clear goals for a bedside teaching session. We suggest that clinical teachers practice an evidence-based approach, including a hypothesis-driven physical examination. We encourage the use of point-of-care technology to assist in diagnosis and allow learners to calibrate traditional physical exam skills with real-time visualization of pathology. Tools like point-of-care ultrasound can be powerful levers to get learners excited about bedside teaching, and to engage patients in their clinical care. We value telemedicine visits as unique opportunities to engage with patients in their home environment and to participate in patient-directed physical exam maneuvers. Finally, we recommend that educators provide feedback to learners on specific clinical exam skills, whether in the clinic, the wards, or during dedicated clinical skills assessments.
Affiliation(s)
- Brian T Garibaldi
- Department of Medicine, Division of Pulmonary and Critical Care, Johns Hopkins University School of Medicine, Baltimore, MD.
- Stephen W Russell
- University of Alabama at Birmingham School of Medicine, Birmingham, AL.
2. Monteiro S, Xenodemetropoulos T. Resident Practice Audit in Gastroenterology (RPAGE): an innovative approach to trainee evaluation and professional development in medicine. Canadian Medical Education Journal 2019;10:e72-e77. PMID: 31388379. PMCID: PMC6681922.
Abstract
BACKGROUND The Resident Practice Audit in Gastroenterology (RPAGE) captures assessments of knowledge, professionalism, and technical skills in real time. This brief report describes this innovative instrument and aspects of its utility. METHODS Assessment data on colonoscopy, endoscopy, and sigmoidoscopy procedures in 2016 were submitted to a repeated-measures ANOVA with six within-subjects assessments and one between-subjects factor (year of specialization) to evaluate construct validity. The validity hypothesis tested was that more experienced residents would be rated higher than less experienced residents. Reliability was assessed using Cronbach's alpha. RESULTS The proportion of completed assessments was relatively low (9% to 22%). Overall reliability was high (α > 0.8). There was evidence of validity, as global ratings indicated higher competence for senior residents at colonoscopy (1.6) and upper endoscopy (1.4) than for more junior residents (1.9 and 2.1, respectively). These differences were significant for both colonoscopy (F(1, 282) = 14.8, p < 0.001) and endoscopy (F(1, 136) = 56.9, p < 0.001). CONCLUSION These findings suggest RPAGE is an acceptable electronic log of practice data, but it may not be acceptable for workplace-based assessment. A key next step will be to evaluate how information collected through RPAGE can help inform resident competency committees.
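The Cronbach's alpha figure reported above can be reproduced from any respondents-by-items score matrix. A minimal sketch in Python, using made-up illustrative ratings (not the RPAGE data):

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # per-item sample variance
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of each respondent's total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative ratings: 5 trainees assessed on 4 items (1-5 scale).
ratings = np.array([
    [4, 4, 5, 4],
    [3, 3, 3, 4],
    [5, 5, 4, 5],
    [2, 3, 2, 2],
    [4, 5, 4, 4],
])
print(round(cronbach_alpha(ratings), 2))  # → 0.93
```

Values above roughly 0.8, as in the RPAGE report, are conventionally read as high internal consistency.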
3. What do quantitative ratings and qualitative comments tell us about general surgery residents' progress toward independent practice? Evidence from a 5-year longitudinal cohort. Am J Surg 2018;217:288-295. PMID: 30309619. DOI: 10.1016/j.amjsurg.2018.09.031.
Abstract
BACKGROUND This study examines the alignment of quantitative and qualitative assessment data in end-of-rotation evaluations using longitudinal cohorts of residents progressing through the five-year general surgery residency. METHODS Rotation evaluation data were extracted for 171 residents who trained between July 2011 and July 2016. The data included 6,069 rotation evaluation forms completed by 38 faculty members and 164 peer residents. Qualitative comments mapped to general surgery milestones were coded for positive/negative feedback and relevance. RESULTS Quantitative evaluation scores were significantly correlated with positive/negative feedback (r = 0.52) and relevance (r = -0.20), both p < .001. Themes included feedback on leadership, teaching contribution, medical knowledge, work ethic, patient care, and ability to work in a team-based setting. Faculty comments focused on technical and clinical abilities; comments from peers focused on professionalism and interpersonal relationships. CONCLUSIONS We found differences in the themes emphasized as residents progressed. These findings underscore the need to better understand how faculty synthesize assessment data.
4. Halman S, Rekman J, Wood T, Baird A, Gofton W, Dudek N. Avoid reinventing the wheel: implementation of the Ottawa Clinic Assessment Tool (OCAT) in Internal Medicine. BMC Medical Education 2018;18:218. PMID: 30236097. PMCID: PMC6148769. DOI: 10.1186/s12909-018-1327-7.
Abstract
BACKGROUND Workplace-based assessment (WBA) is crucial to competency-based education. The majority of healthcare is delivered in the ambulatory setting, making the ability to run an entire clinic a crucial core competency for Internal Medicine (IM) trainees. Current WBA tools used in IM do not allow a thorough assessment of this skill. Further, most tools are not aligned with the way clinical assessors conceptualize performances. To address this, many tools aligned with entrustment decisions have recently been published. The Ottawa Clinic Assessment Tool (OCAT) is an entrustment-aligned tool that allows for such an assessment, but it was developed in the surgical setting, and it is not known whether it can perform well in an entirely different context. The aim of this study was to implement the OCAT in an IM program and collect psychometric data in this different setting. Using one tool across multiple contexts may reduce the need for tool development and ensure that the tools used have proper psychometric data to support them. METHODS Psychometric characteristics were determined. Descriptive statistics and effect sizes were calculated. Scores were compared between levels of training (juniors (PGY1s), seniors (PGY2s and PGY3s), and fellows (PGY4s and PGY5s)) using a one-way ANOVA. Safety for independent practice was analyzed with a dichotomous score. Variance components were generated and used to estimate the reliability of the OCAT. RESULTS Three hundred ninety OCATs were completed over 52 weeks by 86 physicians assessing 44 residents. The range of ratings varied from 2 ("I had to talk them through") to 5 ("I did not need to be there") for most items. Mean scores differed significantly by training level (p < .001), with juniors rated lower (M = 3.80 out of 5, SD = 0.49) than seniors (M = 4.22, SD = 0.47), who in turn were rated lower than fellows (M = 4.70, SD = 0.36). Trainees deemed safe to run the clinic independently had significantly higher mean scores than those deemed not safe (p < .001). The generalizability coefficient corresponding to internal consistency was 0.92. CONCLUSIONS This study's psychometric data demonstrate that the OCAT can be used reliably in IM. We support assessing existing tools within different contexts rather than continuously developing discipline-specific instruments.
Affiliation(s)
- Samantha Halman
- Department of Medicine, the University of Ottawa, The Ottawa Hospital General Campus, 501 Smyth Road, Box 209, Ottawa, Ontario K1H 8L6 Canada
- Janelle Rekman
- Department of Surgical Education, the University of Ottawa, The Ottawa Hospital Civic Campus, Loeb Research Building - Main Floor WM150b, 725 Parkdale Avenue, C/O Isabel Menard, Ottawa, Ontario K1Y 4E9 Canada
- Timothy Wood
- Department of Innovation in Medical Education, Faculty of Medicine, the University of Ottawa, 850 Peter Morand Crescent (Room 102), Ottawa, Ontario K1G 5Z3 Canada
- Andrew Baird
- Department of Medicine, the University of Ottawa, The Ottawa Hospital Parkdale Campus, Room 162, 1053 Carling Avenue, C/O Odile Kaufmann, Ottawa, Ontario K1Y 4E9 Canada
- Wade Gofton
- Department of Surgical Education, the University of Ottawa, Ottawa Hospital - Civic Campus, Suite J15, 1053 Carling Avenue, Ottawa, Ontario K1Y 4E9 Canada
- Nancy Dudek
- Department of Medicine, the University of Ottawa, The Rehabilitation Centre, 505 Smyth Road, Ottawa, Ontario K1H 8M2 Canada
5.
Abstract
Clinical skills remain fundamental to the practice of medicine and form a core component of the professional identity of the physician. However, evidence exists to suggest that the practice of some clinical skills is declining, particularly in the United States. A decline in practice of any skill can lead to a decline in its teaching and assessment, with further decline in practice as a result. Consequently, assessment not only drives learning of clinical skills, but their practice. This article summarizes contemporary approaches to clinical skills assessment that, if more widely adopted, could support the maintenance and reinvigoration of bedside clinical skills.
Affiliation(s)
- Andrew Elder
- Department of Acute Medicine for Older People, Edinburgh Medical School, Western General Hospital, Crewe Road, Edinburgh EH4 2XU, UK.
6. Cadieux DC, Goldszmidt M. It's not just what you know: junior trainees' approach to follow-up and documentation. Medical Education 2017;51:812-825. PMID: 28418205. PMCID: PMC5518220. DOI: 10.1111/medu.13286.
Abstract
CONTEXT In teaching hospitals, junior trainees (first-year residents and third-year medical students) are responsible for patient follow-up and documentation under the supervision of senior team members. In order to support trainees in this role, supervisors need to understand how trainees approach these tasks and how they can be coached to develop best practices. OBJECTIVES The purpose of our study was to explore the range of practices used by junior trainees in clinical settings. METHODS Constructivist grounded theory was used to guide the collection and analysis of data on follow-up and documentation during 34 observation periods with 17 junior trainees. Data sources included field notes, field interviews, and de-identified copies of patient charts. We also held two focus groups with four attending physicians in each. RESULTS We describe three interrelated characteristics that influenced a trainee's approach to, and ability to perform, the tasks of patient follow-up and documentation: (i) diligence; (ii) relationship to the team (dependent, independent, or collaborative); and (iii) level of performance (Data Gatherer, Sensemaker, or Manager). Diligence and relationship to the team appeared to influence the quality and focus of a trainee's approach at all levels of performance. The focus group attending physicians felt that level of performance reflected a developmental progression of knowledge and skills. CONCLUSIONS Our findings contribute to the existing literature in three ways. First, they extend our understanding of how junior trainees approach the task of in-patient follow-up and clinical documentation and of the value of those activities. Second, they provide new insights to support formative and summative assessment. Finally, they contribute to a growing body of literature exploring the factors that shape trainees' roles and interactions with the team. Future research should focus on validating our findings and exploring their utility in the development of novel assessment strategies.
Affiliation(s)
- Dani C Cadieux
- Centre for Education Research and Innovation, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
- Mark Goldszmidt
- Centre for Education Research and Innovation, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
7. Srisarajivakul N, Lucero C, Wang XJ, Poles M, Gillespie C, Zabar S, Weinshel E, Malter L. Disruptive behavior in the workplace: Challenges for gastroenterology fellows. World J Gastroenterol 2017;23:3315-3321. PMID: 28566892. PMCID: PMC5434438. DOI: 10.3748/wjg.v23.i18.3315.
Abstract
AIM To assess first-year gastroenterology fellows’ ability to address difficult interpersonal situations in the workplace using objective structured clinical examinations (OSCE).
METHODS Two OSCEs (“distracted care team” and “frazzled intern”) were created to assess response to disruptive behavior. In case 1, a fellow used a colonoscopy simulator while interacting with a standardized patient (SP), nurse, and attending physician all played by actors. The nurse and attending were instructed to display specific disruptive behavior and disregard the fellow unless requested to stop the disruptive behavior and focus on the patient and procedure. In case 2, the fellow was to calm an intern managing a patient with massive gastrointestinal bleeding. The objective in both scenarios was to assess the fellows’ ability to perform their duties while managing the disruptive behavior displayed by the actor. The SPs used checklists to rate fellows’ performances. The fellows completed a self-assessment survey.
RESULTS Twelve fellows from four gastroenterology fellowship training programs participated in the OSCE. In the “distracted care team” case, one-third of the fellows interrupted the conflict and refocused attention on the patient. Half of the fellows were able to display professionalism despite the heated discussion nearby. Fellows scored lowest on the interprofessionalism portion of the post-OSCE surveys, which measured their ability to handle the conflict. In the “frazzled intern” case, 68% of fellows were able to establish a calm and professional relationship with the SP. Despite this success, only half of the fellows successfully communicated a plan to the SP, and only a third scored “well done” in a domain that focused on allowing the intern to think through the case with the fellow's guidance.
CONCLUSION Fellows must receive training on how to approach disruptive behavior. OSCEs are a tool that can assess fellow skills and set a culture for open discussion.
8. Park YS, Zar FA, Norcini JJ, Tekian A. Competency Evaluations in the Next Accreditation System: Contributing to Guidelines and Implications. Teaching and Learning in Medicine 2016;28:135-145. PMID: 26849397. DOI: 10.1080/10401334.2016.1146607.
Abstract
CONSTRUCT: This study examines validity evidence of end-of-rotation evaluation scores used to measure competencies and milestones as part of the Next Accreditation System (NAS) of the Accreditation Council for Graduate Medical Education (ACGME). BACKGROUND Since the implementation of the milestones, end-of-rotation evaluations have surfaced as a potentially useful assessment method. However, validity evidence on the use of rotation evaluation scores as part of the NAS has not been studied. This article examines validity evidence for end-of-rotation evaluations that can contribute to developing guidelines that support the NAS. APPROACH Data from 2,701 end-of-rotation evaluations measuring 21 of 22 Internal Medicine milestones for 142 residents were analyzed (July 2013-June 2014). Descriptive statistics were used to measure the distribution of ratings by evaluator type (faculty, n = 116; fellows, n = 59; peer residents, n = 131) and by postgraduate year. Generalizability analysis and higher-order confirmatory factor analysis were used to examine the internal structure of the ratings. Psychometric implications of combining evaluation scores were examined using composite score reliability. RESULTS Milestone ratings were significantly higher for each subsequent year of training (15/21 milestones). Faculty evaluators had greater variability in ratings across milestones than fellows and residents; faculty ratings were generally correlated with milestone ratings from fellows (r = .45) and residents (r = .25), but lower correlations were found for Professionalism and Interpersonal and Communication Skills. The Φ-coefficient was .71, indicating good reliability. Internal structure supported a 6-factor solution, corresponding to the hierarchical relationship between the milestones and the 6 core competencies. Evaluation scores corresponding to Patient Care, Medical Knowledge, and Practice-Based Learning and Improvement had higher correlations with milestones reported to the ACGME. Mean evaluation ratings predicted problem residents (odds ratio = 5.82, p < .001). CONCLUSIONS The guidelines for rotation evaluations proposed in this study provide useful solutions that can help program directors make decisions on resident progress and contribute to assessment systems in graduate medical education.
Affiliation(s)
- Yoon Soo Park
- Department of Medical Education, University of Illinois at Chicago College of Medicine, Chicago, Illinois, USA
- Fred A Zar
- Department of Medicine, University of Illinois at Chicago College of Medicine, Chicago, Illinois, USA
- John J Norcini
- Foundation for Advancement of International Medical Education and Research, Philadelphia, Pennsylvania, USA
- Ara Tekian
- Department of Medical Education, University of Illinois at Chicago College of Medicine, Chicago, Illinois, USA
9.
Affiliation(s)
- Shanu Gupta
- Corresponding author: Shanu Gupta, MD, Director of Education, Rush University Hospitalists, 10 Kellogg, 1717 West Congress Parkway, Chicago, IL 60612, 312.942.4200
10. Jackson JL, Kay C, Frank M. The validity and reliability of attending evaluations of medicine residents. SAGE Open Med 2015;3:2050312115589648. PMID: 26770788. PMCID: PMC4679281. DOI: 10.1177/2050312115589648.
Abstract
OBJECTIVES To assess the reliability and validity of faculty evaluations of medicine residents. METHODS We conducted a retrospective study (2004-2012) involving 228 internal medicine residency graduates at the Medical College of Wisconsin who were evaluated by 334 attendings. Measures included attendings' evaluations of residents on six competencies, along with residents' performance on the American Board of Internal Medicine (ABIM) certification examination and the annual in-service training examination. All residents had at least one in-service training examination result, and 80% allowed the ABIM to release their scores. RESULTS Attending evaluations had good internal consistency (Cronbach's α = 0.96). There was poor construct validity, with modest inter-rater reliability and evidence that attendings were rating residents on a single factor rather than the six competencies intended to be measured. There was poor predictive validity, as attending ratings correlated weakly with performance on the in-service training examination or the ABIM certification examination. CONCLUSION We conclude that attending evaluations are poor measures for assessing progress toward competency. It may be time to move beyond evaluations that rely on global, end-of-rotation appraisals.
Affiliation(s)
- Jeffrey L Jackson
- GIM Section, Zablocki VAMC, Department of Medicine, Medical College of Wisconsin, Milwaukee, WI, USA
- Cynthia Kay
- Department of Medicine, Medical College of Wisconsin, Milwaukee, WI, USA
- Michael Frank
- Program Director, Internal Medicine Residency, Department of Medicine, Medical College of Wisconsin, Milwaukee, WI, USA
11. Wilbur K. Summative assessment in a doctor of pharmacy program: a critical insight. Advances in Medical Education and Practice 2015;6:119-126. PMID: 25733948. PMCID: PMC4337416. DOI: 10.2147/amep.s77198.
Abstract
BACKGROUND The Canadian-accredited post-baccalaureate Doctor of Pharmacy program at Qatar University trains pharmacists to deliver advanced patient care. Emphasis on acquisition and development of the necessary knowledge, skills, and attitudes lies in the curriculum's extensive experiential component. A campus-based oral comprehensive examination (OCE) was devised to emulate a clinical viva voce and complement the extensive formative assessments conducted at experiential practice sites throughout the curriculum. We describe an evaluation of this final exit summative assessment for the graduate program. METHODS OCE results since the inception of the graduate program (3 years ago) were retrieved and recorded in a blinded database. Examination scores for each paired faculty examiner team were analyzed for inter-rater reliability and linearity of agreement using intraclass correlation and Spearman's correlation coefficient, respectively. Graduate student rankings derived from individual examiners' OCE scores were compared with rankings based on other measures of relative academic performance. RESULTS Sixty-one OCEs were administered to 30 graduate students over 3 years by eleven different pairs of faculty examiners. Examiner team reliability was low: only one examiner team in each academic year showed statistically significant inter-rater reliability, and linearity of agreement was inconsistent in all years. No association was found between examination performance rankings and other academic parameters. CONCLUSION Critical review of our final summative assessment suggests that it lacks robustness and defensibility. Measures are in place to continue the quality improvement process and to develop and implement an alternative means of evaluation within a more authentic context.
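The intraclass correlation used here for paired-examiner reliability can be sketched from its ANOVA decomposition. A minimal illustration of ICC(2,1) (two-way random effects, absolute agreement, single rater); the function name and example scores are ours, not the study's:

```python
import numpy as np

def icc_2_1(y: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    y is a (subjects x raters) matrix of scores.
    """
    y = np.asarray(y, dtype=float)
    n, k = y.shape
    grand = y.mean()
    msr = k * ((y.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # between-subjects mean square
    msc = n * ((y.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # between-raters mean square
    sse = ((y - grand) ** 2).sum() - msr * (n - 1) - msc * (k - 1)
    mse = sse / ((n - 1) * (k - 1))                             # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Illustrative scores only: examiner B rates each student one point above examiner A,
# so absolute agreement is penalized even though the rank ordering is identical.
scores = np.array([[4, 5], [3, 4], [5, 6], [2, 3]])
print(round(icc_2_1(scores), 2))  # → 0.77
```

Because ICC(2,1) measures absolute agreement, a systematic offset between examiners lowers it below 1 even when their rankings match, which is one reason rank-based linearity checks (e.g., Spearman's correlation) are reported alongside it.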
Affiliation(s)
- Kerry Wilbur
- College of Pharmacy, Qatar University, Doha, Qatar
12. Mount CA, Short PA, Mount GR, Schofield CM. An End-of-Year Oral Examination for Internal Medicine Residents: An Assessment Tool for the Clinical Competency Committee. J Grad Med Educ 2014;6:551-554. PMID: 25210583. PMCID: PMC4160061. DOI: 10.4300/jgme-d-13-00365.1.
Abstract
BACKGROUND Comprehensive evaluations of clinical competency consume a large amount of time and resources. An oral examination is a unique evaluation tool that can augment a global performance assessment by the Clinical Competency Committee (CCC). OBJECTIVE We developed an oral examination to aid our CCC in evaluating resident performance. METHODS We reviewed tools used in our internal medicine residency program and other training programs in our institution. A literature search failed to identify reports of a similar evaluation tool used in internal medicine programs. We developed and administered an internal medicine oral examination (IMOE) to our postgraduate year-1 and postgraduate year-2 internal medicine residents annually over a 3-year period. The results were used to enhance our CCC's discussion of overall resident performance. We estimated the costs in terms of faculty time away from patient care activities. RESULTS Of the 54 residents, 46 (86%) passed the IMOE on their first attempt. Of the 8 residents (14%) who failed, all but 1 passed after a mentored study period and a retest. Most faculty involved committed less than 0.1 annual full-time equivalent, and the time spent on the IMOE replaced regular resident daily conference activities. CONCLUSIONS The results of the IMOE were added to other assessment tools and used by the CCC for a global assessment of resident performance. An oral examination is feasible in terms of cost and can be easily modified to fit the needs of various competency committees.
13. Sehgal R, Hardman J, Haney E. Observing trainee encounters using a one-way mirror. Clinical Teacher 2014;11:247-250. DOI: 10.1111/tct.12140.
Affiliation(s)
- Raj Sehgal
- South Texas Veterans Health Care System and University of Texas Health Science Center at San Antonio, Texas, USA
14. Ginsburg S, Eva K, Regehr G. Do in-training evaluation reports deserve their bad reputations? A study of the reliability and predictive ability of ITER scores and narrative comments. Academic Medicine 2013;88:1539-1544. PMID: 23969371. DOI: 10.1097/acm.0b013e3182a36c3d.
Abstract
PURPOSE Although scores on in-training evaluation reports (ITERs) are often criticized for poor reliability and validity, ITER comments may yield valuable information. The authors assessed the across-rotation reliability of ITER scores in one internal medicine program, the ability of ITER scores and comments to predict postgraduate year three (PGY3) performance, and the reliability and incremental predictive validity of attendings' analysis of the written comments. METHOD Numeric and narrative data from the first two years of ITERs for one cohort of residents at the University of Toronto Faculty of Medicine (2009-2011) were assessed for reliability and for predictive validity against third-year performance. Twenty-four faculty attendings rank-ordered comments (without scores) such that each resident was ranked by three faculty. Mean ITER scores and comment rankings were submitted to regression analyses; the dependent variables were PGY3 ITER scores and program directors' rankings. RESULTS Reliabilities of ITER scores across nine rotations for 63 residents were 0.53 for both postgraduate year one (PGY1) and postgraduate year two (PGY2). Interrater reliabilities across the three attendings' rankings were 0.83 for PGY1 and 0.79 for PGY2. There were strong correlations between ITER scores and comment rankings within each year (0.72 and 0.70). Regressions revealed that PGY1 and PGY2 ITER scores collectively explained 25% of the variance in PGY3 scores and 46% of the variance in PGY3 rankings. Comment rankings did not improve the predictions. CONCLUSIONS ITER scores across multiple rotations showed decent reliability and predictive validity. Comment ranks did not add to the predictive ability, but the correlation analyses suggest that trainee performance can be measured through these comments.
Affiliation(s)
- Shiphra Ginsburg
- Dr. Ginsburg is professor, Department of Medicine, and scientist, Wilson Centre for Research in Education, Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada. Dr. Eva is professor, Department of Medicine, and senior scientist, Centre for Health Education Scholarship, Faculty of Medicine, University of British Columbia, Vancouver, British Columbia, Canada. Dr. Regehr is professor, Department of Surgery, and associate director, Centre for Health Education Scholarship, Faculty of Medicine, University of British Columbia, Vancouver, British Columbia, Canada
15. Grover M, Drossman DA, Oxentenko AS. Direct trainee observation: opportunities for enhanced physician-patient communication and faculty development. Gastroenterology 2013;144:1330-1334. PMID: 23623965. DOI: 10.1053/j.gastro.2013.04.039.
Affiliation(s)
- Madhusudan Grover
- Division of Gastroenterology and Hepatology, Mayo Clinic, Rochester, Minnesota 55905, USA.
16. Shanmugam VK, Tsagaris K, Schilling A, McNish S, Desale S, Mete M, Adams M. Impact of subspecialty elective exposures on outcomes on the American Board of Internal Medicine certification examination. BMC Medical Education 2012;12:94. PMID: 23057635. PMCID: PMC3480921. DOI: 10.1186/1472-6920-12-94.
Abstract
BACKGROUND The American Board of Internal Medicine Certification Examination (ABIM-CE) is one of several methods used to assess medical knowledge, an Accreditation Council for Graduate Medical Education (ACGME) core competency for graduating internal medicine residents. With recent changes in graduate medical education, program directors and internal medicine residents are seeking evidence to guide decisions regarding residency elective choices. Prior studies have shown that formalized elective curricula improve subspecialty ABIM-CE scores. The primary aim of this study was to evaluate whether the number of subspecialty elective exposures, or the specific subspecialties in which residents complete electives, impacts ABIM-CE scores. METHODS ABIM-CE scores, elective exposures, and demographic characteristics were collected for MedStar Georgetown University Hospital internal medicine residents who were first-time takers of the ABIM-CE in 2006-2010 (n=152). An elective exposure was defined as a two-week period assigned to the respective subspecialty. ABIM-CE scores were analyzed using the difference between the ABIM-CE score and the standardized passing score (delta-SPS). Subspecialty scores were analyzed using the percentage of correct responses. Data were analyzed using GraphPad Prism version 5.00 for Windows. RESULTS Paired elective exposure and ABIM-CE scores were available for 131 residents. There was no linear correlation between ABIM-CE mean delta-SPS and either the total number of electives or the number of unique elective exposures. Residents with ≤14 elective exposures had a higher ABIM-CE mean delta-SPS than those with ≥15 elective exposures (143.4 compared with 129.7, p=0.051). Repeated electives in individual subspecialties were not associated with a significant difference in mean ABIM-CE delta-SPS. CONCLUSIONS This study did not demonstrate significant positive associations between individual subspecialty elective exposures and ABIM-CE mean delta-SPS score. Residents with ≤14 elective exposures had a higher mean delta-SPS than those with ≥15 exposures, suggesting there may be an "ideal" number of elective exposures that supports improved ABIM-CE performance. Repeated elective exposures in an individual subspecialty did not correlate with overall or subspecialty ABIM-CE performance.
Affiliation(s)
- Victoria K Shanmugam
- Division of Rheumatology, Immunology and Allergy, MedStar Georgetown University Hospital, 3800 Reservoir Road, NW, Washington, DC 20007, USA
- Katina Tsagaris
- Department of Medicine, MedStar Georgetown University Hospital, 3800 Reservoir Road, NW, Washington, DC 20007, USA
- Amber Schilling
- Division of Rheumatology, Immunology and Allergy, MedStar Georgetown University Hospital, 3800 Reservoir Road, NW, Washington, DC 20007, USA
- Sean McNish
- Division of Rheumatology, Immunology and Allergy, MedStar Georgetown University Hospital, 3800 Reservoir Road, NW, Washington, DC 20007, USA
- Sameer Desale
- Division of Biostatistics and Epidemiology, MedStar Health Research Institute, 6525 Belcrest Road, Suite 700, Hyattsville, MD 20782, USA
- Mihriye Mete
- Division of Biostatistics and Epidemiology, MedStar Health Research Institute, 6525 Belcrest Road, Suite 700, Hyattsville, MD 20782, USA
- Michael Adams
- Department of Medicine, MedStar Georgetown University Hospital, 3800 Reservoir Road, NW, Washington, DC 20007, USA
17
Meade LB, Borden SH, McArdle P, Rosenblum MJ, Picchioni MS, Hinchey KT. From theory to actual practice: creation and application of milestones in an internal medicine residency program, 2004-2010. MEDICAL TEACHER 2012; 34:717-723. [PMID: 22646298 DOI: 10.3109/0142159x.2012.689441] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
Abstract
BACKGROUND In the USA, the Accreditation Council for Graduate Medical Education's Educational Innovations Project is a partner in reshaping residency training to meet increasingly complex systems of health care delivery. AIM We describe the creation and implementation of milestones as a vehicle for translating educational theory into practice in preparing residents to provide safe, autonomous patient care. METHOD Six program faculty leaders, all with advanced medical education training, met in an iterative process of developing, implementing, and modifying milestones until a final set was vetted. RESULTS We first formed the profile of a Master Internist, then translated it into milestone language and integrated it across the program. Thirty-seven milestones were applied in all settings and rotations to reach explicit educational outcomes. We created three types of milestones: progressive, which build one on top of another toward mastery; additive, which combine multiple behaviors that culminate in mastery; and descriptive, which follow a prescribed set of complex, predetermined steps toward mastery. CONCLUSIONS Using milestones, our program has translated an educational model into explicit end-of-training goals. Milestone implementation has yielded positive results toward competency-based training, and other programs may adapt our strategies in a similar effort.
Affiliation(s)
- Lauren B Meade
- Baystate Medical Center, Tufts University, Springfield, MA, USA.
18
Regehr G, Ginsburg S, Herold J, Hatala R, Eva K, Oulanova O. Using "standardized narratives" to explore new ways to represent faculty opinions of resident performance. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2012; 87:419-27. [PMID: 22361788 DOI: 10.1097/acm.0b013e31824858a9] [Citation(s) in RCA: 23] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/14/2023]
Abstract
PURPOSE Most efforts to develop reliable evaluations of clinical competence have been oriented toward deconstructing the requisite competencies into separate scales. However, many are questioning the value of this approach on theoretical and empirical grounds. This study uses "standardized narratives" to explore a different approach to assessing resident performance. METHOD In 2009, based on interviews with 19 experienced clinical faculty from two institutions, 16 narrative profiles were created to represent the range of resident competence that clinical faculty might encounter during supervision. Fourteen clinicians from three institutions independently grouped the profiles into as many categories as necessary to reflect various levels of performance, described their categories, then ranked the individual profiles within each category. Then, in groups of three or four, participants negotiated a final ranking and grouping of the 16 profiles. RESULTS Despite interesting idiosyncrasies in the factors some participants identified as guiding their rankings, there was strong consistency across the 14 clinicians regarding the rankings (single-rater intraclass correlation [ICC] = 0.86) and groupings (single-rater ICC = 0.81) of the profiles. Similarly, across institutions, the four groups were highly consistent in their final negotiated rankings (single-group ICC = 0.91) and groupings (single-group ICC = 0.87) of the profiles. CONCLUSIONS Faculty showed more consistency in their judgments of what constitutes excellent, competent, and problematic performance in residents than is implied by current assessment techniques that require deconstruction of resident competencies. This use of standardized narratives points to interesting opportunities for more authentically codifying faculty opinions of residents.
Affiliation(s)
- Glenn Regehr
- Department of Surgery, Faculty of Medicine, University of British Columbia, Vancouver, British Columbia, Canada
19
Beran MC, Awan H, Rowley D, Samora JB, Griesser MJ, Bishop JY. Assessment of musculoskeletal physical examination skills and attitudes of orthopaedic residents. J Bone Joint Surg Am 2012; 94:e36. [PMID: 22438009 DOI: 10.2106/jbjs.k.00518] [Citation(s) in RCA: 30] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Abstract
BACKGROUND Although the musculoskeletal physical examination is an essential part of patient encounters, we believe that it is underemphasized in residency education and that residents' physical examination skills may be lacking. We sought to assess attitudes regarding teaching of the physical examination in orthopaedic residencies, to assess physical examination knowledge and skills among residents, and to develop a method to track the skill level of residents in order to improve our physical examination curriculum. METHODS We created a thirty-question multiple-choice musculoskeletal physical examination test and administered it to our residents. We created a five-question survey assessing attitudes toward physical examination teaching in orthopaedic residencies and distributed it to U.S. orthopaedic department chairs. We developed an Objective Structured Clinical Examination (OSCE), in which standardized patients enact four clinical scenarios, to observe and assess physical examination skills. RESULTS The mean score on the multiple-choice physical examination test was 76%, despite the fact that our residents consistently scored above 90% on the Orthopaedic In-Training Examination. Department chairs and residents agreed that, although learning to perform the physical examination is important, there is not enough time in the clinical setting to observe and critique a resident's patient examination. The overall score of our residents on the OSCE was 66%. CONCLUSIONS We have exposed a deficiency in the physical examination knowledge and skills of our residents. Although the musculoskeletal physical examination is a vital practice component, our data indicate that it is likely underemphasized in training. Clinic time alone is likely insufficient for the teaching and learning of the musculoskeletal physical examination.
Affiliation(s)
- Matthew C Beran
- Department of Orthopaedics, The Ohio State University, 2050 Kenny Road, Suite 3100, Columbus, OH 43221, USA
20
Griesser MJ, Beran MC, Flanigan DC, Quackenbush M, Van Hoff C, Bishop JY. Implementation of an objective structured clinical exam (OSCE) into orthopedic surgery residency training. JOURNAL OF SURGICAL EDUCATION 2012; 69:180-189. [PMID: 22365863 DOI: 10.1016/j.jsurg.2011.07.015] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/20/2011] [Revised: 07/23/2011] [Accepted: 07/31/2011] [Indexed: 05/31/2023]
Abstract
OBJECTIVE While the musculoskeletal (MSK) physical examination (PE) is an essential part of a patient encounter, we believe it is an underemphasized component of orthopedic residency education and that resident PE skills may be lacking. The purpose of this investigation was to (1) assess attitudes regarding PE teaching in orthopedic residencies today and (2) develop an MSK objective structured clinical examination (OSCE) to assess the MSK PE knowledge and skills of our orthopedic residents. DESIGN Prospective, uncontrolled, observational. SETTING A major Midwestern tertiary referral center and academic medical center. PARTICIPANTS The orthopedic surgery residents in our program; 22 of 24 completed the OSCE. RESULTS Surveys showed that residents agreed that although learning the PE is important, there is not enough time in clinic to actually observe and critique a resident examining a patient. For the 22 residents (postgraduate year [PGY] 2-5) who participated in the OSCE, the overall score was 66%. Scores were significantly better for the trauma scenario (78%; p < 0.05) than for the shoulder (67%), spine (64%), and knee (59%) encounters. The overall scores for each component of the OSCE were: (1) history, 53%; (2) PE, 60%; (3) 5-question posttest, 64%; and (4) communication skills, 90%. CONCLUSIONS We have exposed a deficiency in the PE knowledge and skills of our residents. Clinic time alone may be insufficient to both teach and learn the MSK PE. The use of an MSK OSCE, while novel in orthopedics, will allow more direct observation of our residents' MSK PE skills and also allow us to follow resident skills longitudinally through their training. We hope that our efforts will encourage other programs to assess their PE curriculum and perhaps prompt change.
Affiliation(s)
- Michael J Griesser
- Department of Orthopedics, Ohio State University, Columbus, OH 43221, USA
21
Ginsburg S, Gold W, Cavalcanti RB, Kurabi B, McDonald-Blumer H. Competencies "plus": the nature of written comments on internal medicine residents' evaluation forms. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2011; 86:S30-4. [PMID: 21955764 DOI: 10.1097/acm.0b013e31822a6d92] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/16/2023]
Abstract
BACKGROUND Comments on residents' in-training evaluation reports (ITERs) may be more useful than scores in identifying trainees in difficulty. However, little is known about the nature of comments written by internal medicine faculty on residents' ITERs. METHOD Comments on 1,770 ITERs (from 180 residents in postgraduate years 1-3) were analyzed using constructivist grounded theory, beginning with an existing framework. RESULTS Ninety-three percent of ITERs contained comments, many of which were easy to map onto traditional competencies; for example, comments on knowledge base (n = 1,075) mapped to the CanMEDS Medical Expert role. Many comments, however, could be linked to several overlapping competencies. Also common were comments entirely unrelated to competencies, for instance on the resident's impact on staff (813) or on personality issues (450). Residents' "trajectory" was a major theme (performance in relation to expected norms [494], improvement seen [286], or future predictions [286]). CONCLUSIONS Faculty's assessments of residents are underpinned by factors both related and unrelated to traditional competencies. Future evaluations should attempt to capture these holistic, integrated impressions.
22
Thomas MR, Beckman TJ, Mauck KF, Cha SS, Thomas KG. Group assessments of resident physicians improve reliability and decrease halo error. J Gen Intern Med 2011; 26:759-64. [PMID: 21369769 PMCID: PMC3138588 DOI: 10.1007/s11606-011-1670-4] [Citation(s) in RCA: 24] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/06/2010] [Revised: 01/19/2011] [Accepted: 02/11/2011] [Indexed: 11/29/2022]
Abstract
BACKGROUND Individual faculty assessments of resident competency are complicated by inconsistent application of standards, lack of reliability, and the "halo" effect. OBJECTIVE We determined whether adding faculty group assessments of residents in an ambulatory clinic to individual faculty-on-resident assessments yields better reliability and reduced halo effects. DESIGN This prospective, longitudinal study was performed in the outpatient continuity clinics of a large internal medicine residency program. MAIN MEASURES Individual faculty-on-resident and group faculty-on-resident assessment scores were used for comparison. KEY RESULTS Overall mean scores were significantly higher for group than for individual assessments (3.92 ± 0.51 vs. 3.83 ± 0.38, p = 0.0001). Overall inter-rater reliability increased when group and individual assessments were combined compared with individual assessments alone (intraclass correlation coefficient [95% CI] = 0.828 [0.785-0.866] vs. 0.749 [0.686-0.804]). Inter-item correlations were lower for group (0.49) than for individual (0.68) assessments. CONCLUSIONS This study demonstrates improved inter-rater reliability and reduced range restriction (halo effect) in resident assessment across multiple performance domains when the group assessment method is added to traditional individual faculty-on-resident assessment. This feasible model could help graduate medical education programs achieve more reliable and discriminating resident assessments.
Affiliation(s)
- Matthew R Thomas
- Division of Primary Care Internal Medicine, Mayo Clinic, 200 First Street SW, Rochester, MN 55905, USA.
23
Dupras DM, Edson RS. A survey of resident opinions on peer evaluation in a large internal medicine residency program. J Grad Med Educ 2011; 3:138-43. [PMID: 22655133 PMCID: PMC3184905 DOI: 10.4300/jgme-d-10-00099.1] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/08/2010] [Revised: 08/15/2010] [Accepted: 11/15/2010] [Indexed: 11/06/2022] Open
Abstract
BACKGROUND Starting in the 1960s, studies have suggested that peer evaluation could provide unique insights into the performance of residents in training. However, reports of resident resistance to peer evaluation because of confidentiality issues and the possible impact on their working relationships raised concerns about the acceptability and utility of peer evaluation in graduate medical education. The literature suggests that peers are able to reliably assess communication, interpersonal skills, and professionalism and provide input that may differ from faculty evaluations. This study assessed the attitudes of internal medicine residents 1 year after the implementation of a peer-evaluation system. METHODS During the 2005-2006 academic year, we conducted an anonymous survey of the 168 residents in the Internal Medicine Residency Program at the Mayo Clinic, Rochester, Minnesota. Contingency table analysis was used to compare the response patterns of the groups. RESULTS The response rate was 61% (103/168 residents) and it did not differ by year of training. Most residents (74/103; 72%) felt that peers could provide valuable feedback. Eighty percent of residents (82/103) felt the feedback was important for their professional development and 84% (86/102) agreed that peers observe behaviors not seen by attending faculty. CONCLUSIONS The results of this study suggest that internal medicine residents provide unique assessment of their peers and provide feedback they consider important for their professional development. More importantly, the results support the role of peer evaluation in the assessment of the competencies of professionalism and interpersonal and communication skills.
24
Ogunyemi D, Eno M, Rad S, Fong A, Alexander C, Azziz R. Evaluating professionalism, practice-based learning and improvement, and systems-based practice: utilization of a compliance form and correlation with conflict styles. J Grad Med Educ 2010; 2:423-9. [PMID: 21976093 PMCID: PMC2951784 DOI: 10.4300/jgme-d-10-00048.1] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/17/2010] [Revised: 04/26/2010] [Accepted: 05/16/2010] [Indexed: 11/06/2022] Open
Abstract
OBJECTIVE The purpose of this article was to develop and determine the utility of a compliance form in evaluating and teaching the Accreditation Council for Graduate Medical Education (ACGME) competencies of professionalism, practice-based learning and improvement, and systems-based practice. METHODS In 2006, we introduced a 17-item compliance form in an obstetrics and gynecology residency program. The form prospectively monitored residents on attendance at required activities (5 items), accountability for required obligations (9 items), and completion of assigned projects (3 items). Scores were compared with faculty evaluations of residents, with resident status as a contributing or a concerning resident, and with residents' conflict styles, using the Thomas-Kilmann Conflict MODE Instrument. RESULTS Our analysis of 18 residents for academic year 2007-2008 showed a mean (standard error of the mean) of 577 (65.3) for postgraduate year (PGY)-1, 692 (42.4) for PGY-2, 535 (23.3) for PGY-3, and 651.6 (37.4) for PGY-4. Non-Hispanic white residents had significantly higher scores on compliance, on faculty evaluations of interpersonal and communication skills, and on competence in systems-based practice. Contributing residents had significantly higher compliance scores than concerning residents. Senior residents had significantly higher accountability scores than junior residents, and junior residents had higher project completion scores. Attendance scores increased and accountability scores decreased significantly between the first and second 6 months of the academic year. There were significant positive correlations between compliance scores and the competing and collaborating conflict styles, and significant negative correlations between compliance and the avoiding and accommodating conflict styles. CONCLUSIONS Maintaining a compliance form allows residents and residency programs to focus on issues that affect performance and facilitates assessment of the ACGME competencies. Postgraduate year, behavior, and conflict styles appear to be associated with compliance. A lack of association with faculty evaluations suggests that the two methods measure different perceptions of residents' behavior.
Affiliation(s)
- Dotun Ogunyemi
- Corresponding author: Dotun Ogunyemi, MD, Residency Program Director, Cedars-Sinai Medical Center, Department of Obstetrics and Gynecology, 8700 Beverly Boulevard, South Tower, Suite 3620, Los Angeles, CA 90048, 310.423.3394,
25
Geraci SA, Babbott SF, Hollander H, Buranosky R, Devine DR, Kovach RA, Berkowitz L. AAIM Report on Master Teachers and Clinician Educators Part 1: needs and skills. Am J Med 2010; 123:769-73. [PMID: 20670734 DOI: 10.1016/j.amjmed.2010.05.001] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/06/2010] [Accepted: 05/06/2010] [Indexed: 10/19/2022]
Affiliation(s)
- Stephen A Geraci
- Division of Pulmonary, Critical Care and Sleep Medicine, Department of Medicine, University of Mississippi School of Medicine, Jackson, USA.
26
Motycka CA, Rose RL, Ried LD, Brazeau G. Self-assessment in pharmacy and health science education and professional practice. AMERICAN JOURNAL OF PHARMACEUTICAL EDUCATION 2010; 74:85. [PMID: 20798800 PMCID: PMC2907850 DOI: 10.5688/aj740585] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/29/2009] [Accepted: 12/13/2009] [Indexed: 05/13/2023]
Abstract
Self-assessment is an important skill for the continued development of a health care professional, from student pharmacist throughout the professional career. This paper reviews the literature on student and practitioner self-assessment and on whether this skill can be improved. Although self-assessment appears to be a skill that can be improved, both students and professionals continue to have difficulty with accurate self-assessment. Experts' external assessment of students should remain the primary method of testing skills and knowledge until self-assessment strategies improve. While self-assessment is important to lifelong learning, external assessment is also important for practitioners' continuing professional development.
Affiliation(s)
- Carol A Motycka
- College of Pharmacy, University of Florida, Jacksonville, FL 32209, USA.
27
Warm EJ, Schauer D, Revis B, Boex JR. Multisource feedback in the ambulatory setting. J Grad Med Educ 2010; 2:269-77. [PMID: 21975632 PMCID: PMC2941386 DOI: 10.4300/jgme-d-09-00102.1] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/12/2009] [Revised: 01/18/2010] [Accepted: 01/25/2010] [Indexed: 11/06/2022] Open
Abstract
BACKGROUND The Accreditation Council for Graduate Medical Education has mandated multisource feedback (MSF) in the ambulatory setting for internal medicine residents. Few published reports demonstrate actual MSF results for a residency class, and fewer still include clinical quality measures and knowledge-based testing performance in the data set. METHODS Residents participating in a year-long group practice experience called the "long-block" received MSF that included self, peer, staff, attending physician, and patient evaluations, as well as concomitant clinical quality data and knowledge-based testing scores. Residents were given a rank for each data point compared with peers in the class, and these data were reviewed with the chief resident and program director over the course of the long-block. RESULTS Multisource feedback identified residents who performed well on most measures compared with their peers (10%), residents who performed poorly on most measures compared with their peers (10%), and residents who performed well on some measures and poorly on others (80%). Each high-, intermediate-, and low-performing resident had at least one aspect of the MSF that was significantly lower than the others, and this served as the basis of formative feedback during the long-block. CONCLUSION Use of multisource feedback in the ambulatory setting can identify high-, intermediate-, and low-performing residents and suggest specific formative feedback for each. More research needs to be done on the effects of such feedback, as well as on the relationships between the components of the MSF data set.
Affiliation(s)
- Eric J. Warm
- Corresponding author: Eric J. Warm, MD, Department of Internal Medicine, University of Cincinnati Academic Health Center, 231 Albert Sabin Way, Cincinnati, OH 45267-0557, 513.558.2590,
28
Ginsburg S, McIlroy J, Oulanova O, Eva K, Regehr G. Toward authentic clinical evaluation: pitfalls in the pursuit of competency. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2010; 85:780-6. [PMID: 20520025 DOI: 10.1097/acm.0b013e3181d73fb6] [Citation(s) in RCA: 151] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/09/2023]
Abstract
PURPOSE The drive toward competency-based education frameworks has created a tension between competing desires: for quantified, standardized measures on one hand, and for an authentic representation of what it means to be a good doctor on the other. The purpose of this study was to better understand the tensions that exist between competency frameworks and faculty's real-life experiences in evaluating residents. METHOD Interviews were conducted with 19 experienced internal medicine attendings at two Canadian universities in 2007. Attendings each discussed a specific outstanding, average, and problematic resident they had supervised. Interviews were analyzed using grounded theory. RESULTS Eight major themes emerged reflecting how faculty conceptualize residents' performance: knowledge, professionalism, patient interactions, team interactions, systems, disposition, trust, and impact on staff. Attendings' impressions of residents did not seem to result from a linear sum of dimensions; rather, domains idiosyncratically took on variable degrees of importance depending on the resident. Relative deficiencies in outstanding residents could be overlooked, whereas strengths in problematic residents could be discounted. Some constructs (e.g., impact on staff) were not competencies at all; rather, they seem to act as explanations or evidence of attendings' opinions. Standardized evaluation forms might constrain authentic depictions of residents' performance. CONCLUSIONS Despite concerted efforts to create standardized, objective, competency-based evaluations, the assessment of residents' clinical performance still has a strong subjective influence. Attendings' holistic impressions should not be considered invalid simply because they are subjective. Instead, assessment methods should consider novel ways of accommodating these impressions to improve evaluation.
Affiliation(s)
- Shiphra Ginsburg
- Wilson Centre for Research in Education, University Health Network, Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada.
29
McOwen KS, Shea JA, Bellini LM, Kogan JR. Are successful resident clinicians good teachers? Multidimensional resident assessment. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2009; 84:S46-S49. [PMID: 19907384 DOI: 10.1097/acm.0b013e3181b37c12] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/28/2023]
Abstract
BACKGROUND GME training programs must demonstrate that residents are prepared as both teachers and clinicians. We examined the relationship between faculty evaluations of residents' clinical performance and student evaluations of residents' teaching performance. METHOD Concordance tables of mean ratings and qualitative analysis of comments among 95 residents evaluated by 267 faculty and 106 students. RESULTS A total of 88% of residents received concordant ratings from faculty and students. When ratings were discordant, faculty gave the higher ratings (P = .003). Ninety percent of faculty comments and 91% of student comments exhibited a positive tone. CONCLUSIONS Faculty and students tended to agree on which residents were successful and unsuccessful, supporting the validity of rating-scale evaluations. Future research might focus on how to tailor resident remediation when discordant ratings or comments occur.
30
Abstract
CONTEXT The ways hospitalists interact with and contribute to internal medicine residencies in the United States have been described locally, but have not been documented on a national level. OBJECTIVES To describe the penetration of hospitalists into medicine residency faculty nationally and to document their contributions to teaching activities. DESIGN, SETTING, AND PARTICIPANTS Survey of all 386 internal medicine residency directors in the United States in 2005 (272 respondents) and 2007 (236 respondents). MEASUREMENTS Number of teaching hospitals utilizing hospitalists, number of programs utilizing hospitalists to teach, hospitalist teaching duties, and number with hospitalist tracks. RESULTS In 2005, program directors recalled that 54% of teaching hospitals employed hospitalists before and 73% after implementation of work-hour limitations. Of those employing hospitalists, 92% of programs in the Northeast and West used them to teach. Two years later, the Midwest (78%) and South (76%) continued to lag behind in the proportion of teaching hospitalists. Specific teaching activities of hospitalists included: attending on teaching services (92%), conducting rounds (81%), observation of clinical skills (67%), lectures (68%), and morning report (52%). Seven percent of program directors reported other hospitalist duties, including supervising procedures, reviewing night-float patients, serving as associate program directors, and writing curricula. Eleven percent of training programs had hospitalist tracks. CONCLUSIONS As hospitalists have become prevalent and efficient clinicians in community and university hospitals, the majority of internal medicine residencies have enlisted them to provide rounds, lectures, and bedside teaching. A small number of residencies are beginning to develop tracks to facilitate this new career option for graduates.
Affiliation(s)
- Brent W Beasley
- Internal Medicine, University of Missouri-Kansas City, Saint Luke's Hospital, Kansas City, Missouri 64111, USA.
31
Fromme HB, Karani R, Downing SM. Direct Observation in Medical Education: A Review of the Literature and Evidence for Validity. ACTA ACUST UNITED AC 2009; 76:365-71. [DOI: 10.1002/msj.20123] [Citation(s) in RCA: 95] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
32
Bowen JL, Cook DA, Gerrity M, Kalet AL, Kogan JR, Spickard A, Wayne DB. Navigating the JGIM Special Issue on Medical Education. J Gen Intern Med 2008; 23:899-902. [PMID: 18612714 PMCID: PMC2517909 DOI: 10.1007/s11606-008-0675-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
Affiliation(s)
- Judith L Bowen
- Division of General Internal Medicine & Geriatrics, Department of Medicine, Oregon Health & Science University, Portland, OR, USA.