1. Ryan JF, Malpani A, Naz H, Boahene KD, Papel ID, Kontis TC, Maxwell JH, Creighton FX, Byrne PJ, Wanamaker JR, Hager GD, Vedula SS, Malekzadeh S, Ishii LE, Ishii M. Do Attending and Trainee Surgeons Agree on What Happens in the Operating Room During Septoplasty? Facial Plast Surg Aesthet Med 2022; 24:472-477. [PMID: 35255228; PMCID: PMC9700360; DOI: 10.1089/fpsam.2021.0327]
Abstract
Background: Surgeons must select cases whose complexity aligns with their skill set. Objectives: To determine how accurately trainees report involvement in procedures, judge case complexity, and assess their own skills. Methods: We recruited attendings and trainees from two otolaryngology departments. After performing septoplasty, they completed identical surveys regarding case complexity, achievement of goals, who performed which steps, and trainee skill using the septoplasty global assessment tool (SGAT) and visual analog scale (VAS). Agreement regarding which steps were performed by the trainee was assessed with Cohen's kappa coefficients (κ). Correlations between trainee and attending responses were measured with Spearman's correlation coefficients (rho). Results: Seven attendings and 42 trainees completed 181 paired surveys. Trainees and attendings sometimes disagreed about which steps were performed by trainees (range of κ = 0.743-0.846). Correlation between attending and trainee responses was low for VAS skill ratings (range of rho = 0.12-0.34), SGAT questions (range of rho = 0.03-0.53), and evaluation of case complexity (range of rho = 0.24-0.48). Conclusion: Trainees sometimes disagree with attendings about which septoplasty steps they perform and are limited in their ability to judge complexity, goals, and their skill.
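The two agreement statistics used in this study, Cohen's κ for whether attending and trainee report the same steps and Spearman's ρ for paired ratings, can be sketched in a few lines. The paired responses below are invented for illustration and are not study data.

```python
from scipy.stats import spearmanr

def cohens_kappa(a, b):
    """Cohen's kappa for two raters labeling the same items (nominal labels)."""
    assert len(a) == len(b)
    n = len(a)
    labels = sorted(set(a) | set(b))
    p_obs = sum(x == y for x, y in zip(a, b)) / n              # observed agreement
    p_exp = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)  # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical paired survey data: did the trainee perform each step? (1 = yes)
attending = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
trainee   = [1, 1, 0, 1, 1, 1, 1, 0, 1, 0]
kappa = cohens_kappa(attending, trainee)

# Hypothetical paired VAS skill ratings on a 0-10 scale
att_vas = [6.0, 7.5, 5.0, 8.0, 6.5]
trn_vas = [7.0, 8.0, 7.5, 8.5, 6.0]
rho, p = spearmanr(att_vas, trn_vas)
```

With these toy inputs, κ corrects the 80% raw agreement down for the agreement expected by chance, which is why a κ in the 0.74-0.85 range reported above still leaves room for step-level disagreement.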
Affiliation(s)
- John F. Ryan
- Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Anand Malpani
- Malone Center for Engineering in Healthcare, Whiting School of Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Hajira Naz
- Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Kofi D.O. Boahene
- Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Division of Facial Plastic and Reconstructive Surgery, Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Ira D. Papel
- Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Division of Facial Plastic and Reconstructive Surgery, Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Theda C. Kontis
- Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Division of Facial Plastic and Reconstructive Surgery, Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Jessica H. Maxwell
- Department of Otolaryngology-Head and Neck Surgery, MedStar Georgetown University Hospital, Washington, District of Columbia, USA
- ENT Section, Veterans Affairs Medical Center, Washington, District of Columbia, USA
- Francis X. Creighton
- Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- John R. Wanamaker
- Department of Otolaryngology-Head and Neck Surgery, MedStar Georgetown University Hospital, Washington, District of Columbia, USA
- ENT Section, Veterans Affairs Medical Center, Washington, District of Columbia, USA
- Gregory D. Hager
- Malone Center for Engineering in Healthcare, Whiting School of Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- S. Swaroop Vedula
- Malone Center for Engineering in Healthcare, Whiting School of Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Sonya Malekzadeh
- Department of Otolaryngology-Head and Neck Surgery, MedStar Georgetown University Hospital, Washington, District of Columbia, USA
- ENT Section, Veterans Affairs Medical Center, Washington, District of Columbia, USA
- Lisa E. Ishii
- Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Division of Facial Plastic and Reconstructive Surgery, Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Masaru Ishii
- Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
2. Esposito AC, Coppersmith NA, White EM, Yoo PS. Video Coaching in Surgical Education: Utility, Opportunities, and Barriers to Implementation. J Surg Educ 2022; 79:717-724. [PMID: 34972670; DOI: 10.1016/j.jsurg.2021.12.004]
Abstract
OBJECTIVE This review discusses the literature on Video-Based Coaching (VBC) and explores the barriers to widespread implementation. DESIGN A search was performed on Scopus and PubMed on July 27, 2021, for the English-language terms "operation," "operating room," "surgery," "resident," "house staff," "graduate medical education," "teaching," "coaching," "assessment," "reflection," "camera," and "video." This yielded 828 results. A single author reviewed the titles and abstracts and eliminated any results that did not pertain to operative VBC or assessment. All bibliographies were reviewed, and appropriate manuscripts were included, for a total of 52 manuscripts in this review. SETTING/PARTICIPANTS Original, peer-reviewed studies focused on VBC or assessment. RESULTS VBC has been found, both subjectively and objectively, to be a valuable educational tool. Nearly every study of video recording in the operating room found that subjects, surgical residents and seasoned surgeons alike, overwhelmingly considered it a useful, non-redundant adjunct to their training. Most studies that evaluated skill acquisition via standardized assessment tools found that surgical residents who underwent a VBC program improved significantly compared with counterparts who did not undergo video review. Despite this evidence of effectiveness, fewer than 5% of residency programs employ video recording in the operating room. Barriers to implementation include the significant time commitment of proposed coaching curricula and difficulty integrating video cameras into the operating room. CONCLUSIONS VBC has significant educational benefits, but a scalable curriculum has not been developed. An optimal solution would ensure technical ease and expediency, provide simple, high-quality cameras, allow immediate review, and overcome entrenched surgical norms and culture.
Affiliation(s)
- Andrew C Esposito
- Yale School of Medicine, Department of Surgery, New Haven, Connecticut
- Erin M White
- Yale School of Medicine, Department of Surgery, New Haven, Connecticut
- Peter S Yoo
- Yale School of Medicine, Department of Surgery, New Haven, Connecticut
3. Sidhu NS, Pearce GC, Cavadino A. Interviewer bias in selection of anaesthesia Fellows: A single-institution quality assessment study. Anaesth Intensive Care 2020; 48:358-365. [PMID: 33017184; DOI: 10.1177/0310057x20945326]
Abstract
Fellowships are competitive training posts, often in a subspecialty area. We performed a quality assessment of potential interviewer bias on anaesthesia Fellow selection. After research locality approval, we analysed interview scores for all Fellowship applications to our department over six years. Panel interviewers participated in a structured interview process, asking a series of standardised questions to rate applicants. A mixed model analysis of total applicant rating with crossed effects of applicants and interviewers was used. A total of 94 applicants were interviewed by 27 panel members, with between two and four panel members per interview, giving a total of 329 applicant ratings. The random effect of applicants accounted for 45.8% of total variance in ratings (95% confidence intervals (CI) for intraclass correlation (ICC) 35.8%-57.2%) while interviewer effects accounted for 13.4% of total variance (95% CI for ICC 5.3%-30.0%). We found no evidence of bias for most potential sources after analysing multiple applicant and interviewer factors. After adjusting for interviewer training programme, applicants from other training programmes were rated a mean of 1.87 points lower than Australian and New Zealand College of Anaesthetists (ANZCA) applicants (95% CI 0.62-3.12, P = 0.003) and 1.84 points lower than Royal College of Anaesthetists (RCoA) applicants (95% CI 0.37-3.32, P = 0.014). After adjusting for applicant gender, female clinicians rated applicants 1.12 points higher (95% CI 0.19-2.06, P = 0.019) on average than male clinicians. The observed differences in interview scores amongst male and female clinicians and lower scores in applicants from programmes other than ANZCA/RCoA were small, and require confirmation in independent studies.
Affiliation(s)
- Navdeep S Sidhu
- Department of Anaesthesia and Perioperative Medicine, North Shore Hospital, Takapuna, New Zealand
- Department of Anaesthesiology, Faculty of Medical and Health Sciences, University of Auckland, Auckland, New Zealand
- Greta C Pearce
- Department of Anaesthesia and Perioperative Medicine, North Shore Hospital, Takapuna, New Zealand
- Alana Cavadino
- Department of Epidemiology and Biostatistics, Faculty of Medical and Health Sciences, University of Auckland, Auckland, New Zealand
4. Rajan S, Chen HY, Chen JJ, Chin-You S, Chee S, Chrun R, Byun J, Abuzar M. Final year dental students' self-assessed confidence in general dentistry. Eur J Dent Educ 2020; 24:233-242. [PMID: 31845456; DOI: 10.1111/eje.12489]
Abstract
BACKGROUND Self-assessment is an important introspective skill that dental professionals utilise throughout their professional careers. Its value lies in its ability to help individuals identify areas of strength and weakness, and subsequently seek further development of professional skills where needed. The aim of this study was to investigate the correlation between self-assessed confidence and the assessment grade of final year dental students, based on the professional attributes and competencies of newly qualified dentists outlined by the Australian Dental Council (ADC). METHODS Ethical approval was obtained prior to distribution of a questionnaire with 45 statements to final year dental students. The survey was created based on the learning outcomes of the ADC guidelines in the domains of "scientific and clinical knowledge" and "patient care." Participants indicated their level of self-assessed confidence by marking "X" on a visual analogue scale (VAS) from zero ("No Confidence") to 10 cm ("Very Confident"). The assessment grade was based on OSCE, viva voce, case report and written paper. RESULTS A total of 58 (71.6%) dental students participated in the survey. Mean self-assessed confidence under "patient care" was: clinical information gathering 8.92 ± 1.07 cm (range = 3.94-10.0 cm; n = 58, 100%), clinical diagnosis and management planning 8.26 ± 1.34 cm (range = 0.50-9.95 cm; n = 55, 94.8%), and clinical treatment and evaluation 6.07 ± 1.69 cm (range = 0-10.00 cm; n = 55, 94.8%); under "scientific and clinical knowledge" it was 6.98 ± 1.58 cm (range = 0-10.00 cm; n = 58, 100.0%). Within these categories, high confidence was reported for routine dental care (caries management and preventive care), whilst lower confidence was reported for the management of oral medicine and pathologies, dental emergencies, trauma, paediatric dentistry and prosthodontics. Correlation between the assessment grade and the overall self-assessed confidence score was low positive (Spearman's ρ = .225) and not statistically significant (n = 46; P = .132). CONCLUSIONS Final year dental students appear to have good overall self-assessed confidence in core areas of general dentistry. However, confidence seems to be over-estimated when compared with summative assessment.
Affiliation(s)
- Sadna Rajan
- Melbourne Dental School, University of Melbourne, Melbourne, VIC, Australia
- Hong Yang Chen
- Melbourne Dental School, University of Melbourne, Melbourne, VIC, Australia
- Jess Jinxuan Chen
- Melbourne Dental School, University of Melbourne, Melbourne, VIC, Australia
- Samantha Chin-You
- Melbourne Dental School, University of Melbourne, Melbourne, VIC, Australia
- Sandra Chee
- Melbourne Dental School, University of Melbourne, Melbourne, VIC, Australia
- Rina Chrun
- Melbourne Dental School, University of Melbourne, Melbourne, VIC, Australia
- Jasper Byun
- Melbourne Dental School, University of Melbourne, Melbourne, VIC, Australia
- Menaka Abuzar
- Melbourne Dental School, University of Melbourne, Melbourne, VIC, Australia
- School of Dentistry and Oral Health, Griffith University, Gold Coast, QLD, Australia
5. Jong M, Elliott N, Nguyen M, Goyke T, Johnson S, Cook M, Lindauer L, Best K, Gernerd D, Morolla L, Matuzsan Z, Kane B. Assessment of Emergency Medicine Resident Performance in an Adult Simulation Using a Multisource Feedback Approach. West J Emerg Med 2018; 20:64-70. [PMID: 30643603; PMCID: PMC6324708; DOI: 10.5811/westjem.2018.12.39844]
Abstract
Introduction The Accreditation Council for Graduate Medical Education (ACGME) specifically notes multisource feedback (MSF) as a recommended means of resident assessment in the emergency medicine (EM) Milestones. High-fidelity simulation is an environment wherein residents can receive MSF from various types of healthcare professionals. Previously, the Queen's Simulation Assessment Tool (QSAT) has been validated for faculty to assess residents in five categories: assessment, diagnostic actions, therapeutic actions, interpersonal communication, and overall assessment. We sought to determine whether the QSAT could be used to provide MSF using a standardized simulation case. Methods After institutional review board approval, residents from a dual ACGME/osteopathic-approved postgraduate year (PGY) 1–4 EM residency were prospectively consented for participation. We developed a standardized resuscitation-after-overdose case with specific 1–5 Likert anchors used by the QSAT. A PGY 2–4 resident participated in the role of team leader and completed a QSAT as self-assessment. The team consisted of a PGY-1 peer, an emergency medical services (EMS) provider, and a nurse. Two core faculty were present to administer the simulation case and assess. Demographics were gathered from all participants completing QSATs. We analyzed QSATs by category and by cumulative score. Hypothesis testing was performed using intraclass correlation coefficients (ICC) with 95% confidence intervals; interpretation of ICC results was based on previously published definitions. Results We enrolled 34 team leader residents along with 34 nurses. A single PGY-1, a single EMS provider, and two faculty were also enrolled. Faculty provided higher cumulative QSAT scores than the other sources of MSF. QSAT scores did not increase with team leader PGY level. The ICC for inter-rater reliability across all sources of MSF was 0.754 (0.572–0.867). Removing the self-evaluation scores increased inter-rater reliability to 0.838 (0.733–0.910). There was less agreement between faculty and nurse evaluations than between faculty and the EMS or peer evaluations. Conclusion In this single-site cohort using an internally developed simulation case, the QSAT provided MSF with excellent reliability. Self-assessment decreased the reliability of the MSF, and our data suggest self-assessment should not be a component of MSF. Use of the QSAT for MSF may be considered as a source of data for clinical competency committees.
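The inter-rater reliability statistic reported above, the intraclass correlation coefficient, can be sketched as a one-way random-effects ICC(1,1). The rating matrix below is hypothetical, not study data, and the study's exact ICC model is not specified here.

```python
import numpy as np

def icc_oneway(scores):
    """One-way random-effects ICC(1,1): rows = subjects, cols = raters."""
    x = np.asarray(scores, dtype=float)
    n, k = x.shape
    grand = x.mean()
    # Between-subject mean square
    msb = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    # Within-subject mean square
    msw = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical cumulative QSAT scores: 4 residents rated by 3 feedback sources
ratings = [[18, 17, 19],
           [22, 21, 23],
           [15, 14, 16],
           [20, 19, 21]]
icc = icc_oneway(ratings)
```

Dropping a discordant rater column (as the study did with self-assessments) raises the ICC because the within-subject mean square shrinks relative to the between-subject spread.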
Affiliation(s)
- Michael Jong
- Lehigh Valley Health Network, Department of Emergency and Hospital Medicine, Bethlehem, Pennsylvania
- Nicole Elliott
- Lehigh Valley Health Network, Department of Emergency and Hospital Medicine, Bethlehem, Pennsylvania
- University of South Florida Morsani College of Medicine, Tampa, Florida
- Michael Nguyen
- Lehigh Valley Health Network, Department of Emergency and Hospital Medicine, Bethlehem, Pennsylvania
- University of South Florida Morsani College of Medicine, Tampa, Florida
- Terrence Goyke
- Lehigh Valley Health Network, Department of Emergency and Hospital Medicine, Bethlehem, Pennsylvania
- University of South Florida Morsani College of Medicine, Tampa, Florida
- Steven Johnson
- Lehigh Valley Health Network, Department of Emergency and Hospital Medicine, Bethlehem, Pennsylvania
- University of South Florida Morsani College of Medicine, Tampa, Florida
- Matthew Cook
- Lehigh Valley Health Network, Department of Emergency and Hospital Medicine, Bethlehem, Pennsylvania
- University of South Florida Morsani College of Medicine, Tampa, Florida
- Lisa Lindauer
- Lehigh Valley Health Network, Department of Emergency and Hospital Medicine, Bethlehem, Pennsylvania
- Katie Best
- Lehigh Valley Health Network, Department of Emergency and Hospital Medicine, Bethlehem, Pennsylvania
- Douglas Gernerd
- Lehigh Valley Health Network, Department of Emergency and Hospital Medicine, Bethlehem, Pennsylvania
- Louis Morolla
- Lehigh Valley Health Network, Department of Emergency and Hospital Medicine, Bethlehem, Pennsylvania
- Zachary Matuzsan
- Lehigh Valley Health Network, Department of Emergency and Hospital Medicine, Bethlehem, Pennsylvania
- Bryan Kane
- Lehigh Valley Health Network, Department of Emergency and Hospital Medicine, Bethlehem, Pennsylvania
- University of South Florida Morsani College of Medicine, Tampa, Florida
6. Isaak R, Stiegler M, Hobbs G, Martinelli SM, Zvara D, Arora H, Chen F. Comparing Real-time Versus Delayed Video Assessments for Evaluating ACGME Sub-competency Milestones in Simulated Patient Care Environments. Cureus 2018; 10:e2267. [PMID: 29736352; PMCID: PMC5935426; DOI: 10.7759/cureus.2267]
Abstract
Background Simulation is an effective method for creating objective summative assessments of resident trainees. Real-time assessment (RTA) in simulated patient care environments is logistically challenging, especially when evaluating a large group of residents in multiple simulation scenarios. To date, there is very little data comparing RTA with delayed (hours, days, or weeks later) video-based assessment (DA) for simulation-based assessments of Accreditation Council for Graduate Medical Education (ACGME) sub-competency milestones. We hypothesized that sub-competency milestone evaluation scores obtained from DA, via audio-video recordings, are equivalent to the scores obtained from RTA. Methods Forty-one anesthesiology residents were evaluated in three separate simulated scenarios, representing different ACGME sub-competency milestones. All scenarios had one faculty member perform RTA and two additional faculty members perform DA. Subsequently, the scores generated by RTA were compared with the average scores generated by DA. Variance component analysis was conducted to assess the amount of variation in scores attributable to residents and raters. Results Paired t-tests showed no significant difference in scores between RTA and averaged DA for all cases. Cases 1, 2, and 3 showed an intraclass correlation coefficient (ICC) of 0.67, 0.85, and 0.50 for agreement between RTA scores and averaged DA scores, respectively. Analysis of variance of the scores assigned by the three raters showed a small proportion of variance attributable to raters (4% to 15%). Conclusions The results demonstrate that video-based delayed assessment is as reliable as real-time assessment, as both assessment methods yielded comparable scores. Based on a department’s needs or logistical constraints, our findings support the use of either real-time or delayed video evaluation for assessing milestones in a simulated patient care environment.
Affiliation(s)
- Robert Isaak
- Department of Anesthesiology, University of North Carolina School of Medicine
- Marjorie Stiegler
- Department of Anesthesiology, University of North Carolina School of Medicine
- Gene Hobbs
- Department of Neurosurgery, University of North Carolina School of Medicine
- Susan M Martinelli
- Department of Anesthesiology, University of North Carolina School of Medicine
- David Zvara
- Department of Anesthesiology, University of North Carolina School of Medicine
- Harendra Arora
- Department of Anesthesiology, University of North Carolina School of Medicine
- Fei Chen
- Department of Anesthesiology, University of North Carolina School of Medicine
7. Salmon G, Pugsley L. The mini-PAT as a multi-source feedback tool for trainees in child and adolescent psychiatry: assessing whether it is fit for purpose. BJPsych Bull 2017; 41:115-119. [PMID: 28400971; PMCID: PMC5376729; DOI: 10.1192/pb.bp.115.052720]
Abstract
This paper discusses the research supporting the use of multi-source feedback (MSF) for doctors and describes the mini-Peer Assessment Tool (mini-PAT), the MSF instrument currently used to assess trainees in child and adolescent psychiatry. The relevance of issues raised in the literature about MSF tools in general is examined in relation to trainees in child and adolescent psychiatry as well as the appropriateness of the mini-PAT for this group. Suggestions for change including modifications to existing MSF tools or the development of a specialty-specific MSF instrument are offered.
8. Luna JM, Yip N, Pivovarov R, Vawdrey DK. Representativeness comparisons of nurse and computer charting of heart rate across nursing-intensity protocols. Annu Int Conf IEEE Eng Med Biol Soc 2017; 2016:2550-2553. [PMID: 28268842; DOI: 10.1109/embc.2016.7591250]
Abstract
Clinical teams in acute inpatient settings can benefit greatly from automated charting technologies that continuously monitor patient vital status. NewYork-Presbyterian has designed and developed a real-time patient monitoring system that integrates vital-sign sensors, networking, and electronic health records to allow automatic charting of patient status. We evaluated the representativeness (a combination of agreement, safety and timing) of a core vital sign across nursing-intensity care protocols as a preliminary feasibility assessment. Our findings suggest that an automated way of summarizing heart rate represents true heart rate status and can facilitate alternative approaches to burdensome manual nurse charting of physiological parameters.
9. Marshall JK, Cooper LA, Green AR, Bertram A, Wright L, Matusko N, McCullough W, Sisson SD. Residents' Attitude, Knowledge, and Perceived Preparedness Toward Caring for Patients from Diverse Sociocultural Backgrounds. Health Equity 2017; 1:43-49. [PMID: 28905046; PMCID: PMC5586003; DOI: 10.1089/heq.2016.0010]
Abstract
Purpose: Training residents to deliver care to increasingly diverse patients in the United States is an important strategy to help alleviate racial and ethnic disparities in health outcomes. Cross-cultural care training of residents continues to present challenges. This study sought to explore the associations among residents' cross-cultural attitudes, preparedness, and knowledge about disparities to better elucidate possible training needs. Methods: This cross-sectional study used web-based questionnaires from 2013 to 2014. Eighty-four internal medicine residency programs with 954 residents across the United States participated. The main outcome was perceived preparedness to care for socioculturally diverse patients. Key Results: Regression analysis showed that attitude toward cross-cultural care (beta coefficient [β]=0.57, 95% confidence interval [CI]: 0.49-0.64, p<0.001), report of serving a large number of racial/ethnic minority patients (β=0.90, 95% CI: 0.56-1.24, p<0.001), and report of serving a large number of low-socioeconomic-status patients (β=0.74, 95% CI: 0.37-1.10, p<0.001) were positively associated with preparedness. Knowledge of disparities was poor and did not differ significantly across postgraduate year (PGY)-1, PGY-2, and PGY-3 residents (mean scores: 56%, 58%, and 55%, respectively; p=0.08). Conclusion: Residents' knowledge of health and healthcare disparities is poor and does not improve during training. Residents' preparedness to provide cross-cultural care is directly associated with their attitude toward cross-cultural care and their level of exposure to patients from diverse sociocultural backgrounds. Future studies should examine the role of residents' cross-cultural care-related attitudes on their ability to care for diverse patients.
Affiliation(s)
- Lisa A Cooper
- Department of Medicine, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Alexander R Green
- Department of Medicine, Massachusetts General Hospital, Boston, Massachusetts
- Amanda Bertram
- Department of Medicine, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Letitia Wright
- Department of Medicine, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Niki Matusko
- Office of Health Equity and Inclusion, University of Michigan Health System, Ann Arbor, Michigan
- Wayne McCullough
- Division of Public Health, College of Human Medicine, Michigan State University, East Lansing, Michigan
- Stephen D Sisson
- Department of Medicine, Johns Hopkins University School of Medicine, Baltimore, Maryland
10. Murphy KR, McManigle JE, Wildman-Tobriner BM, Little Jones A, Dekker TJ, Little BA, Doty JP, Taylor DC. Design, implementation, and demographic differences of HEAL: a self-report health care leadership instrument. J Healthc Leadersh 2016; 8:51-59. [PMID: 29355186; PMCID: PMC5741008; DOI: 10.2147/jhl.s114360]
Abstract
The medical community has recognized the importance of leadership skills among its members. While numerous leadership assessment tools exist at present, few are specifically tailored to the unique health care environment. The study team designed a 24-item survey (Healthcare Evaluation & Assessment of Leadership [HEAL]) to measure leadership competency based on the core competencies and core principles of the Duke Healthcare Leadership Model. A novel digital platform was created for use on handheld devices to facilitate its distribution and completion. This pilot phase involved 126 health care professionals self-assessing their leadership abilities. The study aimed to determine both the content validity of the survey and the feasibility of its implementation and use. The digital platform for survey implementation was easy to complete, and there were no technical problems with survey use or data collection. With regard to reliability, initial survey results revealed that each core leadership tenet met or exceeded the reliability cutoff of 0.7. In self-assessment of leadership, women scored themselves higher than men in questions related to patient centeredness (P=0.016). When stratified by age, younger providers rated themselves lower with regard to emotional intelligence and integrity. There were no differences in self-assessment when stratified by medical specialty. While only a pilot study, initial data suggest that HEAL is a reliable and easy-to-administer survey for health care leadership assessment. Differences in responses by sex and age with respect to patient centeredness, integrity, and emotional intelligence raise questions about how providers view themselves amid complex medical teams. As the survey is refined and further administered, HEAL will be used not only as a self-assessment tool but also in “360” evaluation formats.
Affiliation(s)
- Kelly R Murphy
- Duke Healthcare Leadership Program, Duke University School of Medicine, Durham, NC, USA
- John E McManigle
- Duke Healthcare Leadership Program, Duke University School of Medicine, Durham, NC, USA
- Amy Little Jones
- Duke Healthcare Leadership Program, Duke University School of Medicine, Durham, NC, USA
- Travis J Dekker
- Duke Healthcare Leadership Program, Duke University School of Medicine, Durham, NC, USA
- Barrett A Little
- Duke Healthcare Leadership Program, Duke University School of Medicine, Durham, NC, USA
- Joseph P Doty
- Duke Healthcare Leadership Program, Duke University School of Medicine, Durham, NC, USA
- Dean C Taylor
- Duke Healthcare Leadership Program, Duke University School of Medicine, Durham, NC, USA
11. Utility of factor analysis in optimization of resident assessment and faculty evaluation. Am J Surg 2016; 211:1158-1163. [DOI: 10.1016/j.amjsurg.2015.04.011]
12. Hayward MF, Curran V, Curtis B, Schulz H, Murphy S. Reliability of the interprofessional collaborator assessment rubric (ICAR) in multi source feedback (MSF) with post-graduate medical residents. BMC Med Educ 2014; 14:1049. [PMID: 25551678; PMCID: PMC4318203; DOI: 10.1186/s12909-014-0279-9]
Abstract
BACKGROUND Increased attention on collaboration and teamwork competency development in medical education has raised the need for valid and reliable approaches to the assessment of collaboration competencies in post-graduate medical education. The purpose of this study was to evaluate the reliability of a modified Interprofessional Collaborator Assessment Rubric (ICAR) in a multi-source feedback (MSF) process for assessing post-graduate medical residents' collaborator competencies. METHODS Post-graduate medical residents (n = 16) received ICAR assessments from three different rater groups (physicians, nurses and allied health professionals) over a four-week rotation. Internal consistency, inter-rater reliability, inter-group differences, and the relationship between rater characteristics and ICAR scores were analyzed using Cronbach's alpha, one-way and two-way repeated measures ANOVA, and logistic regression. RESULTS Missing data decreased from 13.1% using daily assessments to 8.8% using an MSF process, p = .032. High internal consistency was demonstrated for overall ICAR scores (α = .981) and for individual assessment domains within the ICAR (α = .881 to .963). There were no significant differences between the scores of physician, nurse, and allied health raters on collaborator competencies (F(2,5) = 1.225, p = .297, η2 = .016). Rater gender was the only significant factor influencing scores, with female raters scoring residents significantly lower than male raters (6.12 v. 6.82; F(1,5) = 7.184, p = .008, η2 = .045). CONCLUSION The study findings suggest that use of the modified ICAR in an MSF assessment process could be a feasible and reliable approach to providing formative feedback to post-graduate medical residents on collaborator competencies.
Affiliation(s)
- Mark F Hayward, Vernon Curran, Bryan Curtis, Henry Schulz, Sean Murphy: Patient Research Center, Faculty of Medicine, Memorial University, St. John’s, NL A1B 3V6, Canada
|
13
|
Nichols BG, Stadler ME, Poetker DM. Attitudes toward professionalism education in Otolaryngology-Head and Neck Surgery residency programs. Laryngoscope 2014; 125:348-53. [DOI: 10.1002/lary.24824] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2014] [Revised: 06/09/2014] [Accepted: 06/16/2014] [Indexed: 11/11/2022]
Affiliation(s)
- Brent G. Nichols, Michael E. Stadler, David M. Poetker: Department of Otolaryngology and Communication Sciences, Medical College of Wisconsin, Milwaukee, Wisconsin, U.S.A.
|
14
|
Practicing emergency physicians report performing well on most emergency medicine milestones. J Emerg Med 2014; 47:432-40. [PMID: 25012279 DOI: 10.1016/j.jemermed.2014.04.032] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2013] [Revised: 01/30/2014] [Accepted: 04/28/2014] [Indexed: 11/22/2022]
Abstract
BACKGROUND The Accreditation Council for Graduate Medical Education's Next Accreditation System endorsed specialty-specific milestones as the foundation of an outcomes-based resident evaluation process. These milestones represent five competency levels (entry level to expert), and graduating residents will be expected to meet Level 4 on all 23 milestones. Limited validation data on these milestones exist. It is unclear if higher levels represent true competencies of practicing emergency medicine (EM) attendings. OBJECTIVE Our aim was to examine how practicing EM attendings in academic and community settings self-evaluate on the new EM milestones. METHODS An electronic self-evaluation survey outlining 9 of the 23 EM milestones was sent to a sample of practicing EM attendings in academic and community settings. Attendings were asked to identify which level was appropriate for them. RESULTS Seventy-nine attendings were surveyed, with an 89% response rate. Sixty-one percent were academic. Twenty-three percent (95% confidence interval [CI] 20%-27%) of all responses were Levels 1, 2, or 3; 38% (95% CI 34%-42%) were Level 4; and 39% (95% CI 35%-43%) were Level 5. Seventy-seven percent of attendings found themselves to be Level 4 or 5 in eight of nine milestones. Only 47% found themselves to be Level 4 or 5 in ultrasound skills (p = 0.0001). CONCLUSIONS Although a majority of EM attendings reported meeting Level 4 milestones, many felt they did not meet Level 4 criteria. Attendings report less perceived competence in ultrasound skills than other milestones. It is unclear if self-assessments reflect the true competency of practicing attendings. The study design can be useful to define the accuracy, precision, and validity of milestones for any medical field.
|
15
|
Nichols BG, Nichols LM, Poetker DM, Stadler ME. Operationalizing professionalism: A meaningful and practical integration for resident education. Laryngoscope 2013; 124:110-5. [DOI: 10.1002/lary.24184] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Revised: 03/26/2013] [Accepted: 04/15/2013] [Indexed: 11/05/2022]
Affiliation(s)
- Brent G. Nichols, David M. Poetker, Michael E. Stadler: Department of Otolaryngology and Communication Sciences, Medical College of Wisconsin, Milwaukee, Wisconsin, U.S.A.
- Laura M. Nichols: Department of Internal Medicine, Medical College of Wisconsin, Milwaukee, Wisconsin, U.S.A.
|
16
|
[Evaluation of advanced medical simulation courses for the training of paediatric residents in emergency situations]. An Pediatr (Barc) 2013; 78:241-7. [DOI: 10.1016/j.anpedi.2012.07.003] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2012] [Revised: 06/11/2012] [Accepted: 07/02/2012] [Indexed: 11/17/2022] Open
|
17
|
Mitchell C, Bhat S, Herbert A, Baker P. Workplace-based assessments in Foundation Programme training: do trainees in difficulty use them differently? MEDICAL EDUCATION 2013; 47:292-300. [PMID: 23398015 DOI: 10.1111/medu.12113] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
Abstract
CONTEXT Trainee-led workplace-based assessment (WPBA) is increasingly used in postgraduate medical training. Trainees in difficulty are known to behave differently from their peers; these differences may be reflected in their use of WPBAs and may give new insights into the behaviour and assessment of struggling trainees. METHODS Data were extracted for 76 115 assessments, completed by 1900 UK Foundation Programme (FP) trainees. Of these 1900 trainees, 95 (5%) were FP trainees in difficulty (FTiDs). We analysed aspects of the use of WPBAs, using multiple logistic regressions, to compare the behaviours of FTiDs with those of their peers. RESULTS Of 48 possible comparisons, only two (i.e. the rate expected to occur by chance) showed statistically significant differences: relative to their peers, FTiDs were more likely to choose nurse assessors in direct observations of procedural skills (odds ratio [OR] 7.05, 95% confidence interval [CI] 1.23-40.43) and more likely to choose non-clinical assessors for assessments using the mini-peer assessment tool (OR 30.44, 95% CI 1.34-689.29). CONCLUSIONS Key features of assessor choice for FTiDs are familiarity and likelihood of receiving a positive assessment. This analysis has not demonstrated that FTiDs use WPBAs any differently from their peers who are not in difficulty, although it does suggest associations and trends that require further exploration. These null results are interesting and raise hypotheses for prospective confirmation or disproof, and for further qualitative work investigating how struggling trainees use WPBAs in order to guide the future implementation of WPBAs in postgraduate training.
Affiliation(s)
- Colin Mitchell: Department of Medicine for the Elderly, St Mary's Hospital, Imperial College Healthcare NHS Trust, London, UK.
|
18
|
Nikels SM, Guiton G, Loeb D, Brandenburg S. Evaluating nonphysician staff members' self-perceived ability to provide multisource evaluations of residents. J Grad Med Educ 2013; 5:64-9. [PMID: 24404229 PMCID: PMC3613321 DOI: 10.4300/jgme-d-11-00315.1] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/15/2011] [Revised: 06/05/2012] [Accepted: 06/06/2012] [Indexed: 11/06/2022] Open
Abstract
BACKGROUND Multisource evaluations of residents offer valuable feedback, yet there is little evidence on the best way to collect these data from a range of health care professionals. OBJECTIVE This study evaluated nonphysician staff members' ability to assess internal medicine residents' performance and behavior, and explored whether staff members differed in their perceived ability to participate in resident evaluations. METHODS We distributed an anonymous survey to nurses, medical assistants, and administrative staff at 6 internal medicine residency continuity clinics. Differences between nurses and other staff members' perceived ability to evaluate resident behavior were examined using independent t tests. RESULTS The survey response rate was 82% (61 of 74). A total of 55 respondents (90%) reported that it was important for them to evaluate residents. Participants reported being able to evaluate professional behaviors very well (62% [36 of 58] on the domain of respect to staff; 61% [36 of 59] on attire; and 54% [32 of 59] on communication). Individuals without a clinical background reported being uncomfortable evaluating medical knowledge (60%; 24 of 40) and judgment (55%; 22 of 40), whereas nurses reported being more comfortable evaluating these competencies. Respondents reported that the biggest barrier to evaluation was limited contact (86%; 48 of 56), and a significant amount of feedback was given verbally rather than on written evaluations. CONCLUSIONS Nonphysician staff members agree it is important to evaluate residents, and they are most comfortable providing feedback on professional behaviors. A significant amount of feedback is provided verbally but not necessarily captured in a formal written evaluation process.
|
19
|
White JS, Sharma N. "Who writes what?" Using written comments in team-based assessment to better understand medical student performance: a mixed-methods study. BMC MEDICAL EDUCATION 2012; 12:123. [PMID: 23249445 PMCID: PMC3558404 DOI: 10.1186/1472-6920-12-123] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/09/2012] [Accepted: 12/04/2012] [Indexed: 05/16/2023]
Abstract
BACKGROUND Observation of the performance of medical students in the clinical environment is a key part of assessment and learning. To date, few authors have examined written comments provided to students and considered what aspects of observed performance they represent. The aim of this study was to examine the quantity and quality of written comments provided to medical students by different assessors using a team-based model of assessment, and to determine the aspects of medical student performance on which different assessors provide comments. METHODS Medical students on a 7-week General Surgery & Anesthesiology clerkship received written comments on 'Areas of Excellence' and 'Areas for Improvement' from physicians, residents, nurses, patients, peers and administrators. Mixed-methods were used to analyze the quality and quantity of comments provided and to generate a conceptual framework of observed student performance. RESULTS 1,068 assessors and 127 peers provided 2,988 written comments for 127 students, a median of 188 words per student divided into 26 "Areas of Excellence" and 5 "Areas for Improvement". Physicians provided the most comments (918), followed by patients (692) and peers (586); administrators provided the fewest (91). The conceptual framework generated contained four major domains: 'Student as Physician-in-Training', 'Student as Learner', 'Student as Team Member', and 'Student as Person.' CONCLUSIONS A wide range of observed medical student performance is recorded in written comments provided by members of the surgical healthcare team. Different groups of assessors provide comments on different aspects of student performance, suggesting that comments provided from a single viewpoint may potentially under-represent or overlook some areas of student performance. We hope that the framework presented here can serve as a basis to better understand what medical students do every day, and how they are perceived by those with whom they work.
Affiliation(s)
- Jonathan Samuel White, Nishan Sharma: Department of Surgery, Faculty of Medicine & Dentistry, University of Alberta, 10240 Kingsway Avenue, Edmonton, AB T5H 3V9, Canada
|
20
|
Dickson RP, Engelberg RA, Back AL, Ford DW, Curtis JR. Internal medicine trainee self-assessments of end-of-life communication skills do not predict assessments of patients, families, or clinician-evaluators. J Palliat Med 2012; 15:418-26. [PMID: 22475195 DOI: 10.1089/jpm.2011.0386] [Citation(s) in RCA: 58] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
PURPOSE To investigate the strength of association between trainees' self-assessments of the quality of their end-of-life communication skills and the assessments of their patients, patients' families, and clinician-evaluators. METHODS As part of a randomized trial, pre-intervention survey data were collected at two sites from internal medicine trainees and their patients, patients' families, and clinician-evaluators. In this observational analysis, comparisons using regression analysis were made between (1) trainees' scores on a scale of perceived competence at communication about end-of-life care and (2) patients', families', and clinician-evaluators' scores on a questionnaire on the quality of end-of-life communication (QOC). Secondary analyses were performed using topic-focused subscales of these measures. RESULTS Internal medicine trainees (143) were studied with both self-assessment and external assessments. No significant associations were found between trainee perceived competence scores and primary outcome measures (p>0.05). Of the 12 secondary subscale analyses, trainees' self-ratings were significantly associated with external assessments for only one comparison, but the association was in the opposite direction with increased trainee ratings being significantly associated with decreased family ratings on "treatment discussions." We also examined the correlation between ratings by patients, family, and clinician-evaluators, which showed significant correlations (p<0.05) for 7 of 18 comparisons (38.9%). CONCLUSIONS Trainee self-evaluations do not predict assessments by their patients, patients' families, or their clinician-evaluators regarding the quality of end-of-life communication. Although these results should be confirmed using the same measures across all raters, in the meantime efforts to improve communication about end-of-life care should consider outcomes other than physician self-assessment to determine intervention success.
Affiliation(s)
- Robert P Dickson: Department of Medicine, University of Washington, Seattle, WA, USA
|
21
|
Mudumbai SC, Gaba DM, Boulet J, Howard SK, Davies MF. Feasibility of an internet-based global ranking instrument. J Grad Med Educ 2011; 3:67-74. [PMID: 22379525 PMCID: PMC3186268 DOI: 10.4300/jgme-d-10-00162.1] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/24/2010] [Revised: 10/10/2010] [Accepted: 10/26/2010] [Indexed: 11/06/2022] Open
Abstract
BACKGROUND Single-item global ratings are commonly used at the end of undergraduate clerkships and residency rotations to measure specific competencies and/or to compare the performances of individuals against their peers. We hypothesized that an Internet-based instrument would be feasible to adequately distinguish high- and low-ability residents. MATERIALS AND METHODS After receiving Institutional Review Board approval, we developed an Internet-based global ranking instrument to rank 42 third-year residents (21 in 2008 and 21 in 2009) in a major university teaching hospital's department of anesthesiology. Evaluators were anesthesia attendings and nonphysicians in 3 tertiary-referral hospitals. Evaluators were asked this ranking question: "When it comes to overall clinical ability, how does this individual compare to all their peers?" RESULTS For 2008, 111 evaluators completed the ranking exercise; for 2009, 79 completed it. Residents were rank-ordered using the median of evaluator categorizations and the frequency of ratings per assigned relative performance quintile. Across evaluator groups and study years, the summary evaluation data consistently distinguished the top and bottom resident cohorts. DISCUSSION An Internet-based instrument, using a single-item global ranking, demonstrated feasibility and can be used to differentiate top- and bottom-performing cohorts. Although ranking individuals yields norm-referenced measures of ability, successfully identifying poorly performing residents using online technologies is efficient and will be useful in developing and administering targeted evaluation and remediation programs.
Affiliation(s)
- Corresponding author: Seshadri C. Mudumbai, MD, Stanford University/VA Palo Alto HCS Anesthesiology, 3801 Miranda Ave (112A), Palo Alto, CA 94304-9891
|
22
|
Root Kustritz MV, Molgaard LK, Rendahl A. Comparison of student self-assessment with faculty assessment of clinical competence. JOURNAL OF VETERINARY MEDICAL EDUCATION 2011; 38:163-170. [PMID: 22023925 DOI: 10.3138/jvme.38.2.163] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/31/2023]
Abstract
At the University of Minnesota, fourth-year veterinary students assessed their clinical competence after completion of a small-animal, internal-medicine clinical rotation using the same rotation assessment form used by supervising faculty. Grades were compared between the two groups. Students identified by faculty as low-performing were more likely to overestimate their competence in the areas of knowledge, clinical skill, and professionalism than were students identified by faculty as higher performing. This finding mirrors research results in human health professional training. Self-assessment should not be used as the primary or sole measure of clinical competence in veterinary medical training without the introduction of measures to ensure the accuracy of student self-assessment, measures that include active faculty mentoring of student self-assessment, student goal-setting and reflection, and availability of subsequent opportunities to practice additional self-assessment.
Affiliation(s)
- Margaret V Root Kustritz: Department of Veterinary Clinical Sciences, University of Minnesota College of Veterinary Medicine, St. Paul, MN 55108, USA.
|
23
|
Mencía Bartolomé S, López-Herce Cid J, Carrillo Alvarez A, Bustinza Arriortúa A, Moral Torrero R, Sancho Pérez L, Seriñá Ramirez C, Alcaraz Romero A, Sánchez Galindo A. [Evaluation of a paediatric critical care training program for residents in paediatrics]. An Pediatr (Barc) 2010; 73:5-11. [PMID: 20605754 DOI: 10.1016/j.anpedi.2010.03.011] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2009] [Revised: 03/20/2010] [Accepted: 03/22/2010] [Indexed: 11/26/2022] Open
Abstract
OBJECTIVE To evaluate a training program in paediatric critical care for residents in paediatrics. METHODS Description of a paediatric critical care training program for residents in paediatrics. To evaluate the results of the program, an initial and a final written test, an evaluation by the physician responsible for the program, a self-evaluation by the residents, and a written survey on the quality of the training program were performed. RESULTS From April 1998 to August 2009, 156 residents were included in the training program. All residents showed an improvement between the initial and final written tests: initial score 5.6+/-1.2, final score 8.6+/-0.7 (P<0.001). Only 14.1% of the residents answered at least 70% of the questions correctly in the initial test, compared with 96.6% in the final test (P<0.001). The score in the final test was significantly higher than the self-evaluation by the residents (6.7+/-1.2) and the evaluation by the tutor (6.9+/-0.9) (P<0.001). There were no differences between the practical self-evaluation by the residents (6.2+/-1.0) and the practical evaluation by the tutor (6.7+/-0.9). Residents considered the training program adequate: theoretical education 8.5+/-0.8, resident handbook 9+/-0.9, practical training 8.3+/-1.0, investigation 7.6+/-2.0, and human relationship 9.2+/-0.9. CONCLUSIONS This training program is a useful educational method for training residents in paediatric intensive care. Evaluation of the training program is essential to improve the education of paediatric residents.
|
24
|
Berk RA. Using the 360° multisource feedback model to evaluate teaching and professionalism. MEDICAL TEACHER 2009; 31:1073-1080. [PMID: 19995170 DOI: 10.3109/01421590802572775] [Citation(s) in RCA: 25] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/28/2023]
Abstract
BACKGROUND Student ratings have dominated as the primary and, frequently, only measure of teaching performance at colleges and universities for the past 50 years. Recently, there has been a trend toward augmenting those ratings with other data sources to broaden and deepen the evidence base. The 360° multisource feedback (MSF) model, used in management and industry for half a century and in clinical medicine for the last decade, seemed like a best fit to evaluate teaching performance and professionalism. AIM To adapt the 360° MSF model to the assessment of teaching performance and professionalism of medical school faculty. METHODS The salient characteristics of the MSF models in industry and medicine were extracted from the literature. These characteristics, along with 14 sources of evidence from eight possible raters (students, self, peers, outside experts, mentors, alumni, employers, and administrators) based on the research in higher education, were adapted to formative and summative decisions. RESULTS Three 360° MSF models were generated for three different decisions: (1) formative decisions and feedback about teaching improvement; (2) summative decisions and feedback for merit pay and contract renewal; and (3) formative decisions and feedback about professional behaviors in the academic setting. The characteristics of each model were listed. Finally, a top-10 list of the most persistent and, perhaps, intractable psychometric issues in executing these models was suggested to guide future research. CONCLUSIONS The 360° MSF model appears to be a useful framework for implementing a multisource evaluation of faculty teaching performance and professionalism in medical schools. This model can provide more accurate, reliable, fair, and equitable decisions than one based on just a single source.
|
25
|
Lopez L, Vranceanu AM, Cohen AP, Betancourt J, Weissman JS. Personal characteristics associated with resident physicians' self perceptions of preparedness to deliver cross-cultural care. J Gen Intern Med 2008; 23:1953-8. [PMID: 18807099 PMCID: PMC2596517 DOI: 10.1007/s11606-008-0782-y] [Citation(s) in RCA: 24] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/25/2007] [Revised: 01/09/2008] [Accepted: 08/22/2008] [Indexed: 12/16/2022]
Abstract
BACKGROUND Recent reports from the Institute of Medicine emphasize patient-centered care and cross-cultural training as a means of improving the quality of medical care and eliminating racial and ethnic disparities. OBJECTIVE To determine whether, controlling for training received in medical school or during residency, resident physician socio-cultural characteristics influence self-perceived preparedness and skill in delivering cross-cultural care. DESIGN National survey of resident physicians. PARTICIPANTS A probability sample of residents in seven specialties in their final year of training at US academic health centers. MEASUREMENT Nine resident characteristics were analyzed. Differences in preparedness and skill were assessed using the χ² statistic and multivariate logistic regression. RESULTS Fifty-eight percent (2047/3500) of residents responded. The most important factor associated with improved perceived skill level in performing selected tasks or services believed to be useful in treating culturally diverse patients was having received cross-cultural skills training during residency (OR range 1.71-4.22). Compared with white residents, African American physicians felt more prepared to deal with patients with distrust in the US healthcare system (OR 1.63) and with racial or ethnic minorities (OR 1.61), Latinos reported feeling more prepared to deal with new immigrants (OR 1.88), and Asians reported feeling more prepared to deal with patients with health beliefs at odds with Western medicine (OR 1.43). CONCLUSIONS Cross-cultural care skills training is associated with increased self-perceived preparedness to care for diverse patient populations, providing support for the importance of such training in graduate medical education. In addition, selected resident characteristics are associated with being more or less prepared for different aspects of cross-cultural care. This underscores the need both to include medical residents from diverse backgrounds in all training programs and to tailor such programs to individual resident needs in order to maximize the chances that such training will have an impact on the quality of care.
Affiliation(s)
- Lenny Lopez: Department of Medicine, Institute for Health Policy, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02114, USA.
|
26
|
Thammasitboon S, Mariscalco MM, Yudkowsky R, Hetland MD, Noronha PA, Mrtek RG. Exploring individual opinions of potential evaluators in a 360-degree assessment: four distinct viewpoints of a competent resident. TEACHING AND LEARNING IN MEDICINE 2008; 20:314-322. [PMID: 18855235 DOI: 10.1080/10401330802384680] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/26/2023]
Abstract
BACKGROUND Despite the highly acclaimed psychometric features of 360-degree assessment in the fields of economics, the military, and education, interest in developing 360-degree instruments to assess competencies in graduate medical education has grown only in recent years. Most of the effort to date, however, has focused on developing instruments and testing their reliability and feasibility. Insufficient attention has gone into issues of construct validity, particularly understanding the underlying constructs on which the instruments are based as well as the phenomena that affect ratings. PURPOSE In preparation for developing a 360-degree assessment instrument, we explored variations in evaluators' opinion types of a competent resident and offer observations about evaluators' professional backgrounds and opinions. METHOD Evaluators from two residency programs ranked 36 opinion statements, using a relative-ranking model, based on their opinion of a competent resident. By-person factor analysis was used to structure opinion types. RESULTS Factor analysis of 156 responses identified four factors interpreted as four different opinion types of a competent resident: (a) altruistic, compassionate healer (n = 42 evaluators), (b) scientifically grounded clinician (n = 30), (c) holistic, humanistic clinician (n = 62), and (d) patient-focused health manager (n = 31). Although 72% of nurse/respiratory therapist evaluators expressed type C, the remaining 28% expressed the other types about equally. Only 14% of physician evaluators expressed type D, and the remainder were evenly split among the other types. CONCLUSIONS Our evaluators in a 360-degree system expressed four opinion types of a competent resident. Individual opinion, not professional background, influences the characteristics an evaluator values in a competent resident. We propose that these values have an impact on competency assessment and should be taken into account in a 360-degree assessment.
Affiliation(s)
- Satid Thammasitboon: Department of Pediatrics, Robert C Byrd Health Sciences Center, Morgantown, West Virginia 26506-9214, USA.
|
27
|
Video assessment of basic surgical trainees' operative skills. Am J Surg 2008; 196:265-72. [DOI: 10.1016/j.amjsurg.2007.09.044] [Citation(s) in RCA: 46] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2007] [Revised: 09/07/2007] [Accepted: 09/11/2007] [Indexed: 01/22/2023]
|
28
|
Abstract
The structured evaluation of doctors' performance through peer review is a relatively new phenomenon brought about by public demand for accountability to patients. Medical knowledge (as assessed by examination score) is no longer a good predictor of individual performance, humanistic qualities, and communication skills. The process of peer review (or multi-source assessment) was developed over the last two decades in the USA and has started to gain momentum in the UK through the introduction of Modernising Medical Careers. However, the concept is not new. Driven by market forces, it was initially developed by industrial organizations to improve leadership qualities with a view to increasing productivity through positive behaviour change and self-awareness. Multi-source feedback is not without its problems and may not always produce its desired outcomes. In this article we review the evidence for peer review and critically discuss the mini peer assessment tool (mini-PAT), the instrument currently employed for peer review in the UK.
Affiliation(s)
- Aza Abdulla
- Consultant Physician, Princess Royal University Hospital, Bromley Hospitals NHS Trust, Farnborough Common, Orpington, Kent BR6 8ND, UK.
29
Colthart I, Bagnall G, Evans A, Allbutt H, Haig A, Illing J, McKinstry B. The effectiveness of self-assessment on the identification of learner needs, learner activity, and impact on clinical practice: BEME Guide no. 10. MEDICAL TEACHER 2008; 30:124-45. [PMID: 18464136 DOI: 10.1080/01421590701881699]
Abstract
BACKGROUND Health professionals are increasingly expected to identify their own learning needs through a process of ongoing self-assessment. Self-assessment is integral to many appraisal systems and has been espoused as an important aspect of personal professional behaviour by several regulatory bodies and those developing learning outcomes for clinical students. In this review we considered the evidence base on self-assessment since Gordon's comprehensive review in 1991. The overall aim of the present review was to determine whether specific methods of self-assessment lead to change in learning behaviour or clinical practice. Specific objectives sought evidence for effectiveness of self-assessment interventions to: a. improve perception of learning needs; b. promote change in learning activity; c. improve clinical practice; d. improve patient outcomes. METHODS The methods for this review were developed and refined in a series of workshops with input from an expert BEME systematic reviewer, and followed BEME guidance. Databases searched included Medline, CINAHL, BNI, Embase, EBM Collection, Psychlit, HMIC, ERIC, BEI, TIMElit and RDRB. Papers addressing self-assessment in all professions in clinical practice were included, covering under- and post-graduate education, with outcomes classified using an extended version of Kirkpatrick's hierarchy. In addition we included outcome measures of accuracy of self-assessment and factors influencing it. 5,798 papers were retrieved, 194 abstracts were identified as potentially relevant and 103 papers coded independently by pairs using an electronic coding sheet adapted from the standard BEME form. This total included 12 papers identified by hand-searches, grey literature, cited references and updating. The identification of a further 12 papers during the writing-up process resulted in a total of 77 papers for final analysis. 
RESULTS Although a large number of papers resulted from our original search only a small proportion of these were of sufficient academic rigour to be included in our review. The majority of these focused on judging the accuracy of self-assessment against some external standard, which raises questions about assumed reliability and validity of this 'gold standard'. No papers were found which satisfied Kirkpatrick's hierarchy above level 2, or which looked at the association between self-assessment and resulting changes in either clinical practice or patient outcomes. Thus our review was largely unable to answer the specific research questions and provide a solid evidence base for effective self-assessment. Despite this, there was some evidence that the accuracy of self-assessment can be enhanced by feedback, particularly video and verbal, and by providing explicit assessment criteria and benchmarking guidance. There was also some evidence that the least competent are also the least able to self-assess accurately. Our review recommends that these areas merit future systematic research to further our understanding of self-assessment. CONCLUSION As in other BEME reviews, the methodological issues emerging from this review indicate a need for more rigorous study designs. In addition, it highlights the need to consider the potential for combining qualitative and quantitative data to further our understanding of how self-assessment can improve learning and professional clinical practice.
30
Clay AS, Que L, Petrusa ER, Sebastian M, Govert J. Debriefing in the intensive care unit: a feedback tool to facilitate bedside teaching. Crit Care Med 2007; 35:738-54. [PMID: 17255866 DOI: 10.1097/01.ccm.0000257329.22025.18]
Abstract
OBJECTIVE To develop an assessment tool for bedside teaching in the intensive care unit (ICU) that provides feedback to residents about their performance compared with clinical best practices. METHOD We reviewed the literature on the assessment of resident clinical performance in critical care medicine and summarized the strengths and weaknesses of these assessments. Using debriefing after simulation as a model, we created five checklists for different situations encountered in the ICU--areas that encompass different Accreditation Council for Graduate Medical Education core competencies. Checklists were designed to incorporate clinical best practices as defined by the literature and institutional practices as defined by the critical care professionals working in our ICUs. Checklists were used at the beginning of the rotation to explicitly define our expectations to residents and were used during the rotation after a clinical encounter by the resident and supervising physician to review a resident's performance and to provide feedback to the resident on the accuracy of the resident's self-assessment of his or her performance. RESULTS Five "best practice" checklists were developed: central catheter placement, consultation, family discussions, resuscitation of hemorrhagic shock, and resuscitation of septic shock. On average, residents completed 2.6 checklists per rotation. Use of the cards was fairly evenly distributed, with the exception of resuscitation of hemorrhagic shock, which occurs less frequently than the other encounters in the medical ICU. Those who used more debriefing cards had higher fellow and faculty evaluations. Residents felt that debriefing cards were a useful learning tool in the ICU. CONCLUSIONS Debriefing sessions using checklists can be successfully implemented in ICU rotations. Checklists can be used to assess both resident performance and consistency of practice with respect to published standards of care in critical care medicine.
Affiliation(s)
- Alison S Clay
- Critical Care Medicine, Department of Surgery, Duke University Medical Center, Durham, NC, USA
31
Kramer AWM, Zuithoff P, Jansen JJM, Tan LHC, Grol RPTM, Van der Vleuten CPM. Growth of self-perceived clinical competence in postgraduate training for general practice and its relation to potentially influencing factors. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2007; 12:135-45. [PMID: 16847736 DOI: 10.1007/s10459-006-9001-y]
Abstract
OBJECTIVE To examine the increase in self-perceived clinical competence during a three-year postgraduate training in general practice and to explore the relation between the growth of self-perceived competence and several background variables. DESIGN Cohort, 1995-1998. SETTING Three-year postgraduate training for general practice in the Netherlands. PARTICIPANTS All Dutch trainees who followed postgraduate training from September 1995 to September 1998 (N = 191). INTERVENTION We asked the trainees at the start and at the end of their postgraduate training to complete a questionnaire assessing their self-perceived knowledge, clinical skills, and consultation skills. We collected information about potentially influencing background variables, such as age, gender, prior medical experience, the effort spent on one's education, insight into weak and strong areas of clinical competence, and knowledge and skills levels. MAIN OUTCOME MEASURE Self-perceived competence. RESULTS A total of 127 trainees completed both questionnaires (190 at the first administration and 128 at the second). We found statistically significant growth of self-perceived clinical competence. Self-perceived consultation skills increased more than self-perceived knowledge and clinical skills. The aforementioned background variables were not related to the growth of self-perceived clinical competence. CONCLUSION This study shows that a three-year postgraduate training in general practice enhances self-perceived clinical competence. However, we still do not know how to explain this improvement. Further study into the theoretical concept of self-assessment in medical education and into the factors contributing to the feeling of being competent is required.
Affiliation(s)
- A W M Kramer
- Centre for Postgraduate Training in General Practice, University Medical Centre, Nijmegen, The Netherlands.
32
Bond WF, Lammers RL, Spillane LL, Smith-Coggins R, Fernandez R, Reznek MA, Vozenilek JA, Gordon JA. The use of simulation in emergency medicine: a research agenda. Acad Emerg Med 2007; 14:353-63. [PMID: 17303646 DOI: 10.1197/j.aem.2006.11.021]
Abstract
Medical simulation is a rapidly expanding area within medical education. In 2005, the Society for Academic Emergency Medicine Simulation Task Force was created to ensure that the Society and its members had adequate access to information and resources regarding this new and important topic. One of the objectives of the task force was to create a research agenda for the use of simulation in emergency medical education. The authors present here the consensus document from the task force regarding suggested areas for research. These include opportunities to study reflective experiential learning, behavioral and team training, procedural simulation, computer screen-based simulation, the use of simulation for evaluation and testing, and special topics in emergency medicine. The challenges of research in the field of simulation are discussed, including the impact of simulation on patient safety. Outcomes-based research and multicenter efforts will serve to advance simulation techniques and encourage their adoption.
Affiliation(s)
- William F Bond
- Department of Emergency Medicine, Lehigh Valley Hospital and Health Network Affiliated with Pennsylvania State University School of Medicine, Allentown, PA, USA.
33
Davidson ML. The 360 degrees evaluation. Clin Podiatr Med Surg 2007; 24:65-94, vii. [PMID: 17127162 DOI: 10.1016/j.cpm.2006.09.003]
Abstract
The 360 degrees evaluation refers to resident assessment by all persons in the resident's sphere of influence. Although most think of the 360 degrees evaluation as a single tool used by all nonfaculty personnel, it is actually an assessment system. Therefore, all evaluators, including faculty, and all tools used in the assessment of residents comprise the 360 degrees evaluation. The goal of the 360 degrees evaluation is to accurately assess resident performance using tools that are reliable and valid so that competence can be demonstrated and programmatic needs identified. This article provides insights into the evaluation process through guidelines, practical advice, and examples from the field.
Affiliation(s)
- Melissa L Davidson
- Department of Anesthesiology, University of Medicine and Dentistry of New Jersey, New Jersey Medical School, MSB E-550, 185 South Orange Avenue, Newark, NJ 07103, USA.
34
Sargeant J, Mann K, Ferrier S. Exploring family physicians' reactions to multisource feedback: perceptions of credibility and usefulness. MEDICAL EDUCATION 2005; 39:497-504. [PMID: 15842684 DOI: 10.1111/j.1365-2929.2005.02124.x]
Abstract
PURPOSE Physician performance comprises several domains of professional competence. Multisource feedback (MSF), or 360-degree feedback, is an approach used to assess these, particularly the humanistic and relational competencies. Research on responses to performance assessment shows that reactions vary and can influence how performance feedback is used. Improvement does not always result, especially when feedback is perceived as negative. This small qualitative study undertook a preliminary exploration of physicians' reactions to MSF and of the perceptions influencing those reactions and the acceptance and use of feedback. METHODS We held focus groups with 15 family physicians participating in an MSF pilot study. Qualitative analyses included content and constant comparative analyses. RESULTS Participants agreed that the purpose of MSF assessment should be to enhance practice, and they generally agreed with their patients' feedback. However, responses to medical colleague and co-worker feedback ranged from positive to negative. Several participants who responded negatively did not agree with their feedback, nor were they inclined to use it for practice improvement. Reactions were influenced by perceptions of the accuracy, credibility, and usefulness of feedback. Factors shaping these perceptions included recruiting credible reviewers, the ability of reviewers to make objective assessments, use of the assessment tool, and the specificity of the feedback. CONCLUSION Physicians' perceptions of the MSF process and feedback can influence how, and whether, they use the feedback for practice improvement. These findings raise the concern that feedback perceived as negative and not useful will have no effect or negative effects, and they highlight questions for further study.
Affiliation(s)
- Joan Sargeant
- Continuing Medical Education, Dalhousie University, Clinical Research Centre, Halifax, Canada.
35
Vallis J, Hesketh A, Macpherson S. Pre-registration house officer training: a role for nurses in the new Foundation Programme? MEDICAL EDUCATION 2004; 38:708-716. [PMID: 15200395 DOI: 10.1111/j.1365-2929.2004.01845.x]
Abstract
PURPOSE To explore senior nurses' views of pre-registration house officer (PRHO) training, including the scope for their contribution to the new Foundation Programme. DESIGN Data reported here are drawn from a larger, national project, which aimed to identify a curriculum for the PRHO year. The project was based in the Education Development Unit, Scottish Council for Postgraduate Medical and Dental Education (SCPMDE), Dundee. As part of the project, 40 semistructured interviews, each lasting about 1 hour, were held with senior nurses. Interviews were fully transcribed and coded in the qualitative software NVivo for further analysis. Codes were studied for emergent themes and categories. PARTICIPANTS Senior nurses (10 from each of the 4 postgraduate regions of Scotland), from diverse specialties. RESULTS Data suggest considerable cross-regional/specialty consistency. Key emergent themes concerned the process of training as much as the educational outcomes. The nurses focused on the development of outcomes such as communication and teamworking in addition to clinical and practical skills. They guided the PRHOs informally, but were concerned that their own extended roles were detracting from this. DISCUSSION Nurses are gaining increasingly advanced professional, clinical and practical skills. Traditionally, experienced nurses guide and support PRHOs, at least informally. Data collected suggested there may be scope for capitalising on their expertise, including formalising aspects of their contribution to the proposed PRHO Foundation Programme. However, this is a potentially sensitive area and more interprofessional dialogue is needed.
Affiliation(s)
- Jo Vallis
- NHS Education for Scotland (NES), South East Region, Edinburgh, UK.
36
Nørgaard K, Ringsted C, Dolmans D. Validation of a checklist to assess ward round performance in internal medicine. MEDICAL EDUCATION 2004; 38:700-707. [PMID: 15200394 DOI: 10.1111/j.1365-2929.2004.01840.x]
Abstract
BACKGROUND Ward rounds are an essential responsibility for doctors in hospital settings. Tools for guiding and assessing trainees' performance of ward rounds are needed. A checklist was developed for that purpose for use with trainees in internal medicine. OBJECTIVE To assess the content and construct validity of the task-specific checklist. METHODS To determine content validity, a questionnaire was mailed to 295 internists. They were requested to give their opinion on the relevance of each item included on the checklist and to indicate the comprehensiveness of the checklist. To determine construct validity, an observer assessed 4 groups of doctors during performance of a complete ward round (n = 32). The nurse who accompanied the doctor on rounds made a global assessment of the performance. RESULTS The response rate to the questionnaire was 80.7%. The respondents found that all 10 items on the checklist were relevant to ward round performance and that the item collection was comprehensive. Checklist mean-item scores, reported as median (range), differed between levels of expertise: junior house officers, 1.4 (1.0-1.9); senior house officers, 2.0 (1.5-2.9); specialist trainees, 2.5 (1.8-2.8); and specialists, 2.7 (2.3-3.5) (P < 0.001). A significant correlation was found between global observer scores and nurse scores (r = 0.56, P < 0.001). CONCLUSION The checklist, developed for assessing trainees' performance of ward rounds in internal medicine, showed high content validity. Construct validity was supported by the higher scores of experienced doctors compared to those with less experience and the significant correlation between the observer's and nurses' global scores. The developed checklist should be valuable in guiding and assessing trainees on ward round performance.
Affiliation(s)
- Kirsten Nørgaard
- Department of Endocrinology, Hvidovre Hospital, Copenhagen Hospital Corporation, Denmark.
37
Miller DC, Montie JE, Faerber GJ. Evaluating the Accreditation Council on Graduate Medical Education Core Clinical Competencies: Techniques and Feasibility in a Urology Training Program. J Urol 2003; 170:1312-7. [PMID: 14501755 DOI: 10.1097/01.ju.0000086703.21386.ae]
Abstract
PURPOSE We describe several traditional and novel techniques for teaching and evaluating the Accreditation Council on Graduate Medical Education (ACGME) core clinical competencies in a urology residency training program. MATERIALS AND METHODS The evolution and underpinnings of the ACGME Outcome Project were reviewed. Several publications related to the evaluation of clinical competencies as well as current assessment techniques at our institution were also analyzed. RESULTS Several tools for the assessment of clinical competencies have been developed and refined in response to the ACGME Outcome project. Standardized patient encounters and expanded patient satisfaction surveys may prove useful with regard to assessing resident professionalism, patient care and communication skills. A feasible and possibly undervalued technique for evaluating a number of core competencies is the implementation of formal written appraisals of the nature and quality of resident performance at departmental conferences. The assessment of competency in practice based learning and systems based practice may be achieved through innovative exercises, such as practice guideline development, that assess the evidence for various urologic interventions as well as the financial and administrative aspects of such care. CONCLUSIONS We describe several contemporary methods for teaching and evaluating the core clinical competencies in a urology training program. While the techniques described are neither comprehensive nor feasible for every program, they nevertheless provide an important starting point for a meaningful exchange of ideas in the urological graduate medical education community.
Affiliation(s)
- David C Miller
- Department of Urology, University of Michigan Medical School, Ann Arbor, USA
38
Ward M, MacRae H, Schlachta C, Mamazza J, Poulin E, Reznick R, Regehr G. Resident self-assessment of operative performance. Am J Surg 2003; 185:521-4. [PMID: 12781878 DOI: 10.1016/s0002-9610(03)00069-2]
Abstract
BACKGROUND In medicine, the development of expertise requires the recognition of one's capabilities and limitations. This study aimed to verify the accuracy of self-assessment for the performance of a surgical task, and to determine whether self-assessment may be improved through self-observation or exposure to relevant standards of performance. METHODS Twenty-six senior surgical residents were videotaped performing a laparoscopic Nissen fundoplication in a pig. Experts rated the videos using two scoring systems. Subjects evaluated their performances after performing the Nissen, after self-observation of their videotaped performance, and after review of four videotaped "benchmark" performances. RESULTS Expert interrater reliability was 0.66 (intraclass correlation coefficient). The correlation between experts' ratings and residents' self-evaluations was initially moderate (r = 0.50, P < 0.01) and increased significantly after the residents reviewed their own videotaped performance (r = 0.63; Δr = 0.13, P < 0.01), but did not change after review of the benchmarks. CONCLUSIONS Self-observation of videotaped performance improved the residents' ability to self-evaluate.
Affiliation(s)
- Mylène Ward
- Centre for Research in Education, University Health Network, and Department of Surgery, University of Toronto, Toronto, Ontario, Canada
39
Claridge JA, Calland JF, Chandrasekhara V, Young JS, Sanfey H, Schirmer BD. Comparing resident measurements to attending surgeon self-perceptions of surgical educators. Am J Surg 2003; 185:323-7. [PMID: 12657383 DOI: 10.1016/s0002-9610(02)01421-6]
Abstract
OBJECTIVE The purpose of this study was to evaluate the initiation and utility of having resident trainees evaluate attending surgeons as educators. Additionally, we were interested in comparing resident measurements to attending self-perceptions. METHODS A written evaluation form (using five-point ordinal scales) queried respondents about the performance of surgical attendings in the operating room and other clinical settings. A similar form was distributed to the faculty members, which they used to evaluate themselves. Mean scores were determined, as were comparisons between self-perceptions and resident assessments. Differences in scores with p values less than 0.05 were considered statistically significant. RESULTS Thirty-six residents evaluated 23 attendings. Mean ratings by residents of performance in the operating room, other clinical settings, and overall for all faculty members as a group were 4.22 +/- 0.04, 4.11 +/- 0.03, and 4.16 +/- 0.03, respectively, with a score of five generally corresponding to the most favorable rating. When overall scores were analyzed, 10 attendings received scores that differed significantly from those of their peers, with half above and half below the 95% confidence interval. Eighteen (78%) of the attendings completed the self-evaluation forms, and of these, 11 (61%) had self-perceptions that differed significantly from the overall scores reported by the residents. CONCLUSIONS Our evaluation process delineated significant differences among attending faculty members and identified individual strengths and weaknesses. Many educators' self-perceptions differed significantly from resident assessments, and attendings who did not evaluate themselves scored lower than their peers.
Affiliation(s)
- Jeffrey A Claridge
- Department of Surgery, University of Virginia, 1640 Stoney Creek Dr., Charlottesville, VA 22902, USA.
40
Arnold L. Assessing professional behavior: yesterday, today, and tomorrow. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2002; 77:502-515. [PMID: 12063194 DOI: 10.1097/00001888-200206000-00006]
Abstract
PURPOSE The author interprets the state of the art of assessing professional behavior. She defines the concept of professionalism, reviews the psychometric properties of key approaches to assessing professionalism, conveys major findings that these approaches produced, and discusses recommendations to improve the assessment of professionalism. METHOD The author reviewed professionalism literature from the last 30 years that had been identified through database searches; included in conference proceedings, bibliographies, and reference lists; and suggested by experts. The cited literature largely came from peer-reviewed journals, represented themes or novel approaches, reported qualitative or quantitative data about measurement instruments, or described pragmatic or theoretical approaches to assessing professionalism. RESULTS A circumscribed concept of professionalism is available to serve as a foundation for next steps in assessing professional behavior. The current array of assessment tools is rich. However, their measurement properties should be strengthened. Accordingly, future research should explore rigorous qualitative techniques; refine quantitative assessments of competence, for example, through OSCEs; and evaluate separate elements of professionalism. It should test the hypothesis that assessment tools will be better if they define professionalism as behaviors expressive of value conflicts, investigate the resolution of these conflicts, and recognize the contextual nature of professional behaviors. Whether measurement tools should be tailored to the stage of a medical career and how the environment can support or sabotage the assessment of professional behavior are central issues. FINAL THOUGHT: Without solid assessment tools, questions about the efficacy of approaches to educating learners about professional behavior will not be effectively answered.
Affiliation(s)
- Louise Arnold
- University of Missouri-Kansas City School of Medicine, 64108, USA
41
42
Nendaz MR, Perrier A, Simonet ML, Huber P, Junod A, Vu NV. Appraisal of clinical competence during clerkships: how knowledgeable in curriculum and assessment development should a physician-examiner be? ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2001; 76:S99-S101. [PMID: 11597887 DOI: 10.1097/00001888-200110001-00033]
Affiliation(s)
- M R Nendaz
- Department of Internal Medicine, University Hospital, Geneva, Switzerland
43
Ginsburg S, Regehr G, Hatala R, McNaughton N, Frohna A, Hodges B, Lingard L, Stern D. Context, conflict, and resolution: a new conceptual framework for evaluating professionalism. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2000; 75:S6-S11. [PMID: 11031159 DOI: 10.1097/00001888-200010001-00003]
Affiliation(s)
- S Ginsburg
- Mt. Sinai Hospital, Toronto, Ontario, Canada
44