1. Lewis JM, Yared K, Heidel RE, Kirkpatrick B, Freeman MB, Daley BJ, Shatzer J, McCallum RS. Emotional Intelligence and Burnout Related to Resident-Assessed Faculty Teaching Scores. Journal of Surgical Education 2021;78:e100-e111. PMID: 34750078. DOI: 10.1016/j.jsurg.2021.09.023.
Abstract
OBJECTIVE Emotional intelligence (EI) is associated with job success in multiple fields, in part because EI may mitigate stress and burnout. Research suggests these relationships may extend to teaching. Our purpose was to further explore the relationships between EI, burnout, and teaching for faculty surgeons. DESIGN With IRB approval, surgical faculty were offered the opportunity to complete a personal demographics form, the Maslach Burnout Inventory (MBI), the SETQ-SMART assessment of teaching ability, and the SEF:MED self-assessment of emotional intelligence. Surgical residents rated faculty teaching ability using the SETQ-SMART. SETTING A medium-sized academic medical center in the Southeast approved to graduate 6 residents per year. PARTICIPANTS ACGME surgical faculty and general surgery residents (PGY1 to PGY5, including preliminary residents) were given the opportunity to participate. RESULTS Faculty self-assessed teaching scores differed significantly from resident scores for nine (60%) faculty; 3 (33%) overrated and 6 (67%) underrated their overall teaching ability relative to resident ratings. The 3 SEF:MED scales correlated low-moderately to strongly with the SETQ-OTS: IS (r = 0.41, p = 0.13), EM (r = 0.67, p < 0.01), and EA (r = 0.43, p = 0.11). Overall, 8 (53%) faculty scored moderate to high on at least 1 of the 3 MBI subscales. Overall self-rated faculty teaching scores correlated negatively with higher EE and DP and positively with PA (r = -0.08, -0.21, and 0.52, respectively; p = 0.047). EI correlated negatively with MBI EE and DP and positively with PA (r = -0.31, -0.18, and 0.45, respectively), though due to the small sample none reached statistical significance with alpha set to 0.05. CONCLUSIONS In this pilot study, EI was positively correlated with surgical faculty members' teaching ability. Burnout was less strongly correlated with resident-assessed faculty teaching scores, but showed similar trends. Finally, EI was correlated with MBI EE, DP, and PA, as expected given the literature in other fields. Expanded study is warranted.
Affiliation(s)
- James M Lewis, Department of Surgery, University of Tennessee Graduate School of Medicine, Knoxville, Tennessee
- Katherine Yared, Department of Surgery, University of Tennessee Graduate School of Medicine, Knoxville, Tennessee
- Robert E Heidel, Department of Surgery, University of Tennessee Graduate School of Medicine, Knoxville, Tennessee
- Baileigh Kirkpatrick, Department of Education Psychology, University of Tennessee, Knoxville, Tennessee
- Michael B Freeman, Department of Surgery, University of Tennessee Graduate School of Medicine, Knoxville, Tennessee
- Brian J Daley, Department of Surgery, University of Tennessee Graduate School of Medicine, Knoxville, Tennessee
- John Shatzer, Department of Interprofessional Studies, Johns Hopkins School of Education, Baltimore, Maryland
- R Steve McCallum, Department of Education Psychology, University of Tennessee, Knoxville, Tennessee
2. Sam AH, Fung CY, Barth J, Raupach T. A Weighted Evaluation Study of Clinical Teacher Performance at Five Hospitals in the UK. Advances in Medical Education and Practice 2021;12:957-963. PMID: 34471397. PMCID: PMC8405096. DOI: 10.2147/amep.s322105.
Abstract
INTRODUCTION Evaluation of individual teachers in undergraduate medical education helps clinical teaching fellows identify their own strengths and weaknesses. In addition, evaluation data can be used to guide career decisions. In order for evaluation results to adequately reflect true teaching performance, a range of parameters should be considered when designing data collection tools. METHODS Clinical teaching fellows at five London teaching hospitals were evaluated by third-year students they had supervised during a ten-week clinical attachment. The questionnaire addressed (a) general teaching skills and (b) student learning outcome measured via comparative self-assessments. Teachers were ranked using different algorithms with various weights assigned to these two factors. RESULTS A total of 133 students evaluated 14 teaching fellows. Overall, ratings on teaching skills were largely favourable, while the perceived increase in student performance was modest. Considerable variability across teachers was observed for both factors. Teacher rankings were strongly influenced by the weighting algorithm used. Depending on the algorithm, one teacher was assigned any rank between #2 and #10. CONCLUSION Both parts of the questionnaire address different outcomes and thus highlight specific strengths and weaknesses of individual teachers. Programme directors need to carefully consider the weight assigned to individual components of teacher evaluations in order to ensure a fair appraisal of teacher performance.
Affiliation(s)
- Amir H Sam, Medical Education Research Unit, Imperial College School of Medicine, Imperial College London, London, UK
- Chee Yeen Fung, Medical Education Research Unit, Imperial College School of Medicine, Imperial College London, London, UK
- Janina Barth, Division of Medical Education Research and Curriculum Development, University Medical Centre Göttingen, Göttingen, Germany
- Tobias Raupach, Institute for Medical Education, University Hospital Bonn, Bonn, Germany
3. Pedram K, Brooks MN, Marcelo C, Kurbanova N, Paletta-Hobbs L, Garber AM, Wong A, Qayyum R. Peer Observations: Enhancing Bedside Clinical Teaching Behaviors. Cureus 2020;12:e7076. PMID: 32226677. PMCID: PMC7093940. DOI: 10.7759/cureus.7076.
Abstract
Background Medical training relies on direct observation and formative feedback. After residency graduation, opportunities to receive feedback on clinical teaching diminish. Although feedback through learner evaluations is common, these evaluations can be untimely, non-specific, and potentially biased. Peer feedback in a small-group or lecture setting has been shown to benefit teaching behaviors; however, little is known about whether peer observation using a standardized tool, followed by feedback, improves teaching behaviors. The objective of this study was therefore to examine whether feedback after peer observation improves inpatient teaching behaviors. Methods This study was conducted at a tertiary care hospital. Academic hospitalists in the Division of Hospital Medicine developed a standardized 28-item peer observation tool, based on the Stanford Faculty Development Program, to observe their peers during bedside teaching rounds and provide timely feedback after observation. The tool focused on five teaching domains relevant to the inpatient teaching environment: learning climate, control of session, promotion of understanding and retention, evaluation, and feedback. Teaching hospitalists were observed at the beginning of a two-week teaching rotation, given feedback, and then observed again at the end of the rotation. A post-observation survey assessed the teaching and observing hospitalists' comfort with observation and the usefulness of the feedback. We used mixed linear models with a crossed design to account for correlations between observations; models were adjusted for gender, age, and years of experience. We tested the internal consistency of the instrument with Cronbach's alpha. Results Seventy observations (range: one to four per faculty member) were performed, involving 27 teaching attendings. A high proportion of teachers were comfortable with the observation (79%) and found the feedback helpful (92%) and useful for their own teaching (88%). Mean scores in the teaching behavior domains ranged from 2.1 to 2.7. In both unadjusted and adjusted analyses, each teaching observation was followed by higher scores in learning climate (adjusted improvement = 0.09; 95% CI = 0.02-0.15; p = 0.007) and promotion of understanding and retention (adjusted improvement = 0.09; 95% CI = 0.02-0.17; p = 0.01). The standardized observation tool had a Cronbach's alpha of 0.81, indicating high internal consistency. Conclusions Peer observation of bedside teaching followed by feedback using a standardized tool is feasible and results in measurable improvements in desirable teaching behaviors. The success of this approach led to the expansion of peer observation to other divisions within the Department of Internal Medicine at our institution.
Affiliation(s)
- Kimberly Pedram, Internal Medicine, Division of Hospital Medicine, Virginia Commonwealth University School of Medicine, Richmond, USA
- Michelle N Brooks, Internal Medicine, Division of Hospital Medicine, Virginia Commonwealth University School of Medicine, Richmond, USA
- Carolyn Marcelo, Internal Medicine, Division of Hospital Medicine, Virginia Commonwealth University School of Medicine, Richmond, USA
- Nargiza Kurbanova, Internal Medicine, Division of Hospital Medicine, Virginia Commonwealth University School of Medicine, Richmond, USA
- Laura Paletta-Hobbs, Internal Medicine, Division of Hospital Medicine, Virginia Commonwealth University School of Medicine, Richmond, USA
- Adam M Garber, Internal Medicine, Division of Hospital Medicine, Virginia Commonwealth University School of Medicine, Richmond, USA
- Alice Wong, Internal Medicine, Division of Hospital Medicine, Virginia Commonwealth University School of Medicine, Richmond, USA
- Rehan Qayyum, Internal Medicine, Virginia Commonwealth University School of Medicine, Richmond, USA
4. van der Meulen MW, Smirnova A, Heeneman S, Oude Egbrink MGA, van der Vleuten CPM, Lombarts KMJMH. Exploring Validity Evidence Associated With Questionnaire-Based Tools for Assessing the Professional Performance of Physicians: A Systematic Review. Academic Medicine 2019;94:1384-1397. PMID: 31460937. DOI: 10.1097/acm.0000000000002767.
Abstract
PURPOSE To collect and examine, using an argument-based validity approach, validity evidence for questionnaire-based tools used to assess physicians' clinical, teaching, and research performance. METHOD In October 2016, the authors conducted a systematic search of the literature for articles about questionnaire-based tools for assessing physicians' professional performance, published from inception to October 2016. They included studies reporting validity evidence for tools used to assess physicians' clinical, teaching, and research performance. Using Kane's validity framework, they extracted data on four inferences in the validity argument: scoring, generalization, extrapolation, and implications. RESULTS They included 46 articles on 15 tools assessing clinical performance and 72 articles on 38 tools assessing teaching performance; they found no studies on research performance tools. Only 12 of the tools (23%) gathered evidence on all four components of Kane's validity argument. Validity evidence focused mostly on the generalization and extrapolation inferences. Scoring evidence showed mixed results, and evidence on implications was generally missing. CONCLUSIONS Based on the argument-based approach to validity, not all questionnaire-based tools seem to support their intended use. Evidence concerning the implications of questionnaire-based tools is mostly lacking, weakening the argument for using these tools for formative and, especially, summative assessments of physicians' clinical and teaching performance. More research on implications is needed to strengthen the argument and to support decisions based on these tools, particularly high-stakes, summative decisions. To meaningfully assess academic physicians in their tripartite role as doctor, teacher, and researcher, additional assessment tools are needed.
Affiliation(s)
- Mirja W. van der Meulen, PhD candidate, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands, and member, Professional Performance Research Group, Medical Psychology, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands; ORCID: https://orcid.org/0000-0003-3636-5469
- A. Smirnova, PhD graduate and researcher, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands, and member, Professional Performance Research Group, Medical Psychology, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands; ORCID: https://orcid.org/0000-0003-4491-3007
- S. Heeneman, professor, Department of Pathology, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands; ORCID: https://orcid.org/0000-0002-6103-8075
- M.G.A. oude Egbrink, professor, Department of Physiology, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands; ORCID: https://orcid.org/0000-0002-5530-6598
- C.P.M. van der Vleuten, professor, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands; ORCID: https://orcid.org/0000-0001-6802-3119
- K.M.J.M.H. Lombarts, professor, Professional Performance Research Group, Medical Psychology, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands; ORCID: https://orcid.org/0000-0001-6167-0620
5. Maillot C, Martellotto S, Boukerrou M, Winer A. Correlation between students' and trainers' evaluations while learning delegated surgical procedures: A prospective cohort study. International Journal of Surgery 2019;68:157-162. PMID: 31319231. DOI: 10.1016/j.ijsu.2019.07.009.
Abstract
BACKGROUND Delegating procedures within medical competence to nurses can increase the effectiveness of healthcare delivery. The objectives of this study were (1) to assess the quality of a training course in delegated surgical procedures implemented for graduate scrub nurses ("students") and (2) to evaluate the correlation between the students' evaluation of this training and the self-assessment conducted by the faculty ("trainers"). MATERIALS AND METHODS We set up a 49-hour training course for five groups of 10 students from July 2016 to July 2017 in our tertiary academic hospital. The course consisted mostly of simulations based on the "Zwisch" model and focused on acquiring control of the technical gesture as well as on developing critical reasoning. Students' evaluations of the training and trainers' self-assessments were prospectively collected using the SFDP26 questionnaire. RESULTS 52 active scrub nursing students and 21 trainers were included. 96% of students and 86% of trainers rated the training from "good" to "very good". Progress was observed for 41 (79%) of the students and by 18 (86%) of the trainers, and 98% of students felt able to put their new skills into clinical practice after training. There was no difference between the total scores of students and teachers (p = 0.153). A statistically significant difference between student evaluations and trainer self-evaluations was observed for 8 of the 26 assessment items; where scores diverged, the trainers' scores were always lower than those of the students. CONCLUSIONS Training in delegated surgical procedures through mixed cognitive and motor-skills learning, based on the development of critical thinking and on simulation, appears effective, with a significant improvement in students' knowledge and skills. Expectations of students and trainers were well correlated.
Affiliation(s)
- Cédric Maillot, Department of Orthopedic Surgery, University Hospital of South Reunion Island, BP 350, 97448, Saint-Pierre Cedex, Reunion
- Sophie Martellotto, Department of Digestive and Oncological Surgery, Gabriel Martin Hospital Center, 38 Rue Labourdonnais, 97960, Saint-Paul, Reunion
- Malik Boukerrou, Department of Gynecology and Obstetrics, University Hospital of South Reunion Island, BP 350, 97448, Saint-Pierre Cedex, Reunion; CEPOI, Perinatal Center of Study of the Indian Ocean, Faculty of Medicine, University Hospital of South Reunion Island, 97448, St-Pierre, Reunion; CSSOI, Center for Simulation in Health of the Indian Ocean, Faculty of Medicine, University Hospital of South Reunion Island, 97448, St-Pierre, Reunion
- Arnaud Winer, CEPOI, Perinatal Center of Study of the Indian Ocean, Faculty of Medicine, University Hospital of South Reunion Island, 97448, St-Pierre, Reunion; Intensive Care, University Hospital of South Reunion Island, BP 350, 97448, Saint-Pierre Cedex, Reunion; CSSOI, Center for Simulation in Health of the Indian Ocean, Faculty of Medicine, University Hospital of South Reunion Island, 97448, St-Pierre, Reunion
6. Meverden RA, Szostek JH, Mahapatra S, Schleck CD, Mandrekar JN, Beckman TJ, Wittich CM. Validation of a clinical rotation evaluation for physician assistant students. BMC Medical Education 2018;18:123. PMID: 29866089. PMCID: PMC5987424. DOI: 10.1186/s12909-018-1242-y.
Abstract
BACKGROUND We conducted a prospective validation study to develop a physician assistant (PA) clinical rotation evaluation (PACRE) instrument. The specific aims of this study were to 1) develop a tool to evaluate PA clinical rotations, and 2) explore associations between validated rotation evaluation scores and characteristics of the students and rotations. METHODS The PACRE was administered to rotating PA students at our institution in 2016. Factor analysis, internal consistency reliability, and associations between PACRE scores and student or rotation characteristics were determined. RESULTS Of 206 PACRE instruments sent, 124 were returned (60.2% response). Factor analysis supported a unidimensional model with a mean (SD) score of 4.31 (0.57) on a 5-point scale. Internal consistency reliability was excellent (Cronbach α=0.95). PACRE scores were associated with students' gender (P = .01) and rotation specialty (P = .006) and correlated with students' perception of being prepared (r = 0.32; P < .001) and value of the rotation (r = 0.57; P < .001). CONCLUSIONS This is the first validated instrument to evaluate PA rotation experiences. Application of the PACRE questionnaire could inform rotation directors about ways to improve clinical experiences. The findings of this study suggest that PA students must be adequately prepared to have a successful experience on their rotations. PA programs should consider offering transition courses like those offered in many medical schools to prepare their students for clinical experiences. Future research should explore whether additional rotation characteristics and educational outcomes are associated with PACRE scores.
Affiliation(s)
- Ryan A. Meverden, Mayo Clinic Gonda Vascular Center, Mayo Clinic, Rochester, MN, USA
- Jason H. Szostek, Division of General Internal Medicine, Mayo Clinic, 200 First St. SW, Rochester, MN 55905, USA
- Saswati Mahapatra, Division of General Internal Medicine, Mayo Clinic, 200 First St. SW, Rochester, MN 55905, USA
- Cathy D. Schleck, Division of Biomedical Statistics and Informatics, Mayo Clinic, Rochester, MN, USA
- Thomas J. Beckman, Division of General Internal Medicine, Mayo Clinic, 200 First St. SW, Rochester, MN 55905, USA
- Christopher M. Wittich, Division of General Internal Medicine, Mayo Clinic, 200 First St. SW, Rochester, MN 55905, USA
7. Kassis K, Wallihan R, Hurtubise L, Goode S, Chase M, Mahan JD. Milestone-Based Tool for Learner Evaluation of Faculty Clinical Teaching. MedEdPORTAL 2017;13:10626. PMID: 30800827. PMCID: PMC6374742. DOI: 10.15766/mep_2374-8265.10626.
Abstract
INTRODUCTION Traditional normative Likert-type evaluations of faculty teaching have several drawbacks, including lack of granular feedback, potential for inflation, and the halo effect. To provide more meaningful data to faculty on their teaching skills and encourage educator self-reflection and skill development, we designed and implemented a milestone-based faculty clinical teaching evaluation tool. METHODS The evaluation tool contains 10 questions that assess clinical teaching skills with descriptive milestone behavior anchors. Nine of these items are based on the Stanford Faculty Development Clinical Teaching Model and annual Accreditation Council for Graduate Medical Education (ACGME) resident survey questions; the tenth was developed to address professionalism at our institution. The tool was developed with input from residency program leaders, residents, and the faculty development committee and piloted with graduate medical education learners before implementation. RESULTS More than 7,200 faculty evaluations by learners and 550 faculty self-evaluations have been collected. Learners found the form easy to use and preferred it to previous Likert-based evaluations. Over the 2 years that faculty self-evaluations have been collected, their scores have been similar to the learner evaluation scores. The feedback provided faculty with more meaningful data on teaching skills and opportunities for reflection and skill improvement and was used in constructing faculty teaching skills programs at the institutional level. DISCUSSION This innovation provides an opportunity to give faculty members more meaningful teaching evaluations and feedback. It should be easy for other institutions and programs to implement. It leverages a familiar milestone construct and incorporates important ACGME annual resident survey information.
Affiliation(s)
- Karyn Kassis, Assistant Professor of Pediatrics and Director, Center for Faculty Development, Nationwide Children's Hospital and The Ohio State University College of Medicine
- Rebecca Wallihan, Assistant Professor of Pediatrics; Associate Program Director, Pediatric Residency Program; and Vice Chair for Education, Nationwide Children's Hospital and The Ohio State University College of Medicine
- Larry Hurtubise, Associate Director, Center for Faculty Development, Nationwide Children's Hospital; Adjunct Associate Professor of Biomedical Education and Anatomy, The Ohio State University College of Medicine
- Sara Goode, Program Administrator, Office of Graduate Medical Education, Nationwide Children's Hospital
- Margaret Chase, Associate Professor of Pediatrics and Program Director, Internal Medicine/Pediatrics Residency, Nationwide Children's Hospital and The Ohio State University College of Medicine
- John D. Mahan, Professor of Pediatrics and Program Director, Pediatric Residency Program and Pediatric Nephrology Fellowship Program, Nationwide Children's Hospital and The Ohio State University College of Medicine
8. O'Sullivan TA, Lau C, Patel M, Mac C, Krueger J, Danielson J, Weber SS. Student-Valued Measurable Teaching Behaviors of Award-Winning Pharmacy Preceptors. American Journal of Pharmaceutical Education 2015;79:151. PMID: 26889063. PMCID: PMC4749899. DOI: 10.5688/ajpe7910151.
Abstract
OBJECTIVE To identify specific preceptor teaching-coaching, role modeling, and facilitating behaviors valued by pharmacy students, and to develop measures of those behaviors for use in an experiential education quality assurance program. METHODS Using a qualitative research approach, we conducted a thematic analysis of student comments about excellent preceptors to identify behaviors exhibited by those preceptors. Identified behaviors were sorted according to the preceptor's role as role model, teacher/coach, or learning facilitator; measurable descriptors for each behavior were then developed. RESULTS Data analysis identified 15 measurable behavior themes, the most frequent being: having an interest in student learning and success, making time for students, and displaying a positive preceptor attitude. Measurable descriptors were developed for 5 role-modeling behaviors, 6 teaching-coaching behaviors, and 4 facilitating behaviors. CONCLUSION Preceptors may need to be evaluated separately in their roles as teacher-coach, role model, and learning facilitator. The measures developed in this report could be used in site quality evaluation.
Affiliation(s)
- Carmen Lau, University of Washington Medical Center, Seattle, Washington
- Mitul Patel, Palomar Health, Palomar Medical Center, Escondido, California
- Chi Mac, University of Washington Health Center, Rubenstein Memorial Pharmacy, Seattle, Washington
- Stanley S. Weber, University of Washington School of Pharmacy, Seattle, Washington
9. Mintz M, Southern DA, Ghali WA, Ma IWY. Validation of the 25-Item Stanford Faculty Development Program Tool on Clinical Teaching Effectiveness. Teaching and Learning in Medicine 2015;27:174-181. PMID: 25893939. DOI: 10.1080/10401334.2015.1011645.
Abstract
CONSTRUCT: The 25-item Stanford Faculty Development Program Tool on Clinical Teaching Effectiveness assesses clinical teaching effectiveness. BACKGROUND Valid and reliable rating of teaching effectiveness is helpful for providing faculty with feedback. The 25-item tool was intended to evaluate seven dimensions of clinical teaching, but its factor structure had not previously been confirmed. APPROACH This study sought to validate the tool using confirmatory factor analysis, testing a 7-factor model and comparing its goodness of fit with that of a modified model. Acceptability of the tool was assessed using a 6-item survey completed by final-year medical students (N = 119 of 156 students; 76%). RESULTS Goodness-of-fit testing indicated that the 7-factor model performed poorly, χ2(254) = 457.4, p < .001 (root mean square error of approximation [RMSEA] = 0.08, comparative fit index [CFI] = 0.91, non-normed fit index [NNFI] = 0.89); only the standardized root mean square residual (SRMR) indicated acceptable fit (0.06). Further exploratory analysis identified 10 items that cross-loaded on 2 factors; the remaining items loaded on factors as originally intended. After removing these 10 items, repeat confirmatory factor analysis of the modified 15-item, 5-factor model demonstrated a better fit than the original model: SRMR = 0.075, NNFI = 0.91, χ2(80) = 150.1, p < .001, RMSEA = 0.09, CFI = 0.93. Although 75% of participants stated they were willing to complete the tool for their preceptors on a biweekly basis, only 25% were willing to do so weekly. CONCLUSIONS Our study failed to confirm the factor structure of the 25-item tool. A modified tool with fewer, more conceptually distinct items was best fit by a 5-factor model. Acceptability of the 25-item tool may be poor for rotations with a new preceptor each week; the abbreviated tool may be preferable in that setting.
Affiliation(s)
- Marcy Mintz, Department of Medicine, University of Calgary, Calgary, Alberta, Canada
10. Huff NG, Roy B, Estrada CA, Centor RM, Castiglioni A, Willett LL, Shewchuk RM, Cohen S. Teaching behaviors that define highest rated attending physicians: a study of the resident perspective. Medical Teacher 2014;36:991-996. PMID: 25072844. DOI: 10.3109/0142159X.2014.920952.
Abstract
BACKGROUND A better understanding of the teaching behaviors of highly rated clinical teachers could improve training for teaching. We examined teaching behaviors demonstrated by higher-rated attending physicians. METHODS We used qualitative and quantitative group consensus via the nominal group technique (NGT) among internal medicine residents and students on hospital services (2004-2005); participants voted on the three most important teaching behaviors (weight of 3 = top rated, 1 = lowest rated). Teaching behaviors were organized into domains of successful rounding characteristics. We used teaching evaluations to sort attending physicians into tertiles of overall teaching effectiveness. RESULTS Participants evaluated 23 faculty in 17 NGT sessions and identified 66 distinct teaching behaviors (total sum of weights [sw] = 502). Nineteen items had sw ≥ 10; these were categorized into the following domains: Teaching Process (n = 8; sw = 215, 42.8%), Learning Atmosphere (n = 5; sw = 145, 28.9%), Role Modeling (n = 3; sw = 74, 14.7%), and Team Management (n = 3; sw = 65, 12.9%). Attendings in the highest tertile received a larger share of votes for characteristics within the Teaching Process domain (56% compared with 39% in the lowest tertile). CONCLUSIONS The most effective teaching behaviors fell into two broad domains: Teaching Process and Learning Atmosphere. The highest rated attending physicians are most recognized for characteristics in the Teaching Process domain.
11
Boerebach BCM, Lombarts KMJMH, Arah OA. Confirmatory Factor Analysis of the System for Evaluation of Teaching Qualities (SETQ) in Graduate Medical Training. Eval Health Prof 2014; 39:21-32. [DOI: 10.1177/0163278714552520]
Abstract
The System for Evaluation of Teaching Qualities (SETQ) was developed as a formative system for the continuous evaluation and development of physicians’ teaching performance in graduate medical training. It has been seven years since the introduction and initial exploratory psychometric analysis of the SETQ questionnaires. This study investigates the validity and reliability of the SETQ questionnaires across hospitals and medical specialties using confirmatory factor analyses (CFAs), reliability analysis, and generalizability analysis. The SETQ questionnaires were tested in a sample of 3,025 physicians and 2,848 trainees in 46 hospitals. The CFA revealed acceptable fit of the data to the previously identified five-factor model. The high internal consistency estimates suggest satisfactory reliability of the subscales. These results provide robust evidence for the validity and reliability of the SETQ questionnaires for evaluating physicians’ teaching performance.
Affiliation(s)
- Benjamin C. M. Boerebach
- Professional Performance research group, Center for Evidence-Based Education, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands
- Kiki M. J. M. H. Lombarts
- Professional Performance research group, Center for Evidence-Based Education, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands
- Onyebuchi A. Arah
- Department of Epidemiology, University of California, Los Angeles (UCLA), School of Public Health, Los Angeles, CA, USA
- UCLA Center for Health Policy Research, Los Angeles, CA, USA
12
Mookherjee S, Monash B, Wentworth KL, Sharpe BA. Faculty development for hospitalists: structured peer observation of teaching. J Hosp Med 2014; 9:244-50. [PMID: 24446215] [DOI: 10.1002/jhm.2151]
Abstract
BACKGROUND Hospitalists provide much of the clinical teaching in internal medicine, yet formative feedback to improve their teaching is rare. METHODS We developed a peer observation, assessment, and feedback program to improve attending hospitalist teaching. Participants were trained to identify 10 optimal teaching behaviors using a structured observation tool that was developed from the validated Stanford Faculty Development Program clinical teaching framework. Participants joined year-long feedback dyads and engaged in peer observation and feedback on teaching. Pre- and post-program surveys assessed confidence in teaching, performance of teaching behaviors, confidence in giving and receiving feedback, attitudes toward peer observation, and overall satisfaction with the program. RESULTS Twenty-two attending hospitalists participated, averaging 2.2 years (± 2.1 years standard deviation [SD]) experience; 15 (68%) completed pre- and post-program surveys. Confidence in giving feedback, receiving feedback, and teaching efficacy increased (1 = strongly disagree, 5 = strongly agree, mean ± SD): "I can accurately assess my colleagues' teaching skills," (pre = 3.2 ± 0.9 vs post = 4.1 ± 0.6, P < 0.01), "I can give accurate feedback to my colleagues" (pre = 3.4 ± 0.6 vs post = 4.2 ± 0.6, P < 0.01), and "I am confident in my ability to teach students and residents" (pre = 3.2 ± 0.9 vs post = 3.7 ± 0.8, P = 0.026). CONCLUSIONS Peer observation and feedback of teaching increases hospitalist confidence in several domains that are essential for optimizing teaching. Further studies are needed to examine if educational outcomes are improved by this program.
Affiliation(s)
- Somnath Mookherjee
- Department of Medicine, Division of General Internal Medicine, University of Washington, Seattle, Seattle, Washington
13
Fann JI, Sullivan ME, Skeff KM, Stratos GA, Walker JD, Grossi EA, Verrier ED, Hicks GL, Feins RH. Teaching behaviors in the cardiac surgery simulation environment. J Thorac Cardiovasc Surg 2013; 145:45-53. [DOI: 10.1016/j.jtcvs.2012.07.111]
14
Boerebach BCM, Arah OA, Busch ORC, Lombarts KMJMH. Reliable and valid tools for measuring surgeons' teaching performance: residents' vs. self evaluation. J Surg Educ 2012; 69:511-520. [PMID: 22677591] [DOI: 10.1016/j.jsurg.2012.04.003]
Abstract
BACKGROUND In surgical education, there is a need for educational performance evaluation tools that yield reliable and valid data. This paper describes the development and validation of robust evaluation tools that provide surgeons with insight into their clinical teaching performance. We investigated (1) the reliability and validity of 2 tools for evaluating the teaching performance of attending surgeons in residency training programs, and (2) whether surgeons' self evaluation correlated with the residents' evaluation of those surgeons. MATERIALS AND METHODS We surveyed 343 surgeons and 320 residents as part of a multicenter prospective cohort study of faculty teaching performance in residency training programs. The reliability and validity of the SETQ (System for Evaluation of Teaching Qualities) tools were studied using standard psychometric techniques. We then estimated the correlations between residents' and surgeons' evaluations. RESULTS The response rate was 87% among surgeons and 84% among residents, yielding 2625 residents' evaluations and 302 self evaluations. The SETQ tools yielded reliable and valid data on 5 domains of surgical teaching performance, namely, learning climate, professional attitude towards residents, communication of goals, evaluation of residents, and feedback. The correlations between surgeons' self and residents' evaluations were low, with coefficients ranging from 0.03 for evaluation of residents to 0.18 for communication of goals. CONCLUSIONS The SETQ tools for the evaluation of surgeons' teaching performance appear to yield reliable and valid data. The lack of strong correlations between surgeons' self and residents' evaluations suggests the need for using external feedback sources in informed self evaluation of surgeons.
Affiliation(s)
- Benjamin C M Boerebach
- Department of Quality Management and Process Innovation, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands.
15
Boerebach BCM, Lombarts KMJMH, Keijzer C, Heineman MJ, Arah OA. The teacher, the physician and the person: how faculty's teaching performance influences their role modelling. PLoS One 2012; 7:e32089. [PMID: 22427818] [PMCID: PMC3299651] [DOI: 10.1371/journal.pone.0032089]
Abstract
Objective Previous studies identified different typologies of role models (as teacher/supervisor, physician and person) and explored which of faculty's characteristics could distinguish good role models. The aim of this study was to explore how and to what extent clinical faculty's teaching performance influences residents' evaluations of faculty's different role modelling statuses, especially across different specialties. Methods In a prospective multicenter multispecialty study of faculty's teaching performance, we used web-based questionnaires to gather empirical data from residents. The main outcome measures were the different typologies of role modelling. The predictors were faculty's overall teaching performance and faculty's teaching performance on specific domains of teaching. The data were analyzed using multilevel regression equations. Results In total 219 (69% response rate) residents filled out 2111 questionnaires about 423 (96% response rate) faculty. Faculty's overall teaching performance influenced all role model typologies (OR: from 8.0 to 166.2). For the specific domains of teaching, overall, all three role model typologies were strongly associated with "professional attitude towards residents" (OR: 3.28 for the teacher/supervisor role, 2.72 for the physician role and 7.20 for the person role). Further, the teacher/supervisor role was strongly associated with "feedback" and "learning climate" (OR: 3.23 and 2.70). However, the associations of the specific domains of teaching with faculty's role modelling varied widely across specialties. Conclusion This study suggests that faculty can substantially enhance their role modelling by improving their teaching performance. The amount of influence that the specific domains of teaching have on role modelling differs across specialties.
Affiliation(s)
- Benjamin C M Boerebach
- Department of Quality Management and Process Innovation, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands.
16
Wong JG, Fang Y. Improving clinical teaching in China: initial report of a multihospital pilot faculty development effort. Teach Learn Med 2012; 24:355-360. [PMID: 23036004] [DOI: 10.1080/10401334.2012.719801]
Abstract
BACKGROUND The study's purpose was to investigate whether or not a US-based faculty development program could be successfully used to improve the teaching skills of Chinese medical faculty. DESCRIPTION The program, based on the Stanford Faculty Development Program (SFDP) model, was presented to 28 faculty teachers affiliated with Zhejiang University School of Medicine, Hangzhou, China. Outcomes included the attendees' satisfaction of the seminars and their ratings of self-reported teaching ability using a previously studied retrospective pre-post questionnaire. Paired mean scores of the retrospective pre-test were statistically compared to the means of the retrospective post-test for all respondents. EVALUATION Twenty-eight teachers completed the survey. The seminars were rated highly and summative ratings of both global teaching performance and use of specific teaching behaviors were significantly improved between the retrospective pre- and post-test scores. CONCLUSION We were able to demonstrate a positive effect of a Western-based faculty development course on the teaching skills of Chinese clinical medical teachers.
Affiliation(s)
- Jeffrey G Wong
- Department of Medicine, Medical University of South Carolina, Charleston, SC, USA.
17
Arah OA, Hoekstra JBL, Bos AP, Lombarts KMJMH. New tools for systematic evaluation of teaching qualities of medical faculty: results of an ongoing multi-center survey. PLoS One 2011; 6:e25983. [PMID: 22022486] [PMCID: PMC3193529] [DOI: 10.1371/journal.pone.0025983]
Abstract
Background Tools for the evaluation, improvement and promotion of the teaching excellence of faculty remain elusive in residency settings. This study investigates (i) the reliability and validity of the data yielded by using two new instruments for evaluating the teaching qualities of medical faculty, (ii) the instruments' potential for differentiating between faculty, and (iii) the number of residents' evaluations needed per faculty to reliably use the instruments. Methods and Materials Multicenter cross-sectional survey among 546 residents and 629 medical faculty representing 29 medical (non-surgical) specialty training programs in the Netherlands. Two instruments—one completed by residents and one by faculty—for measuring teaching qualities of faculty were developed. Statistical analyses included factor analysis, reliability and validity exploration using standard psychometric methods, calculation of the numbers of residents' evaluations needed per faculty to achieve reliable assessments and variance components and threshold analyses. Results A total of 403 (73.8%) residents completed 3575 evaluations of 570 medical faculty while 494 (78.5%) faculty self-evaluated. In both instruments five composite scales of faculty teaching qualities were detected with high internal consistency and reliability: learning climate (Cronbach's alpha of 0.85 for residents' instrument, 0.71 for self-evaluation instrument), professional attitude and behavior (0.84/0.75), communication of goals (0.90/0.84), evaluation of residents (0.91/0.81), and feedback (0.91/0.85). Faculty tended to evaluate themselves higher than did the residents. Up to a third of the total variance in various teaching qualities can be attributed to between-faculty differences. Some seven residents' evaluations per faculty are needed for assessments to attain a reliability level of 0.90. Conclusions The instruments for evaluating teaching qualities of medical faculty appear to yield reliable and valid data.
They are feasible for use in medical residencies, can detect between-faculty differences and supply potentially useful information for improving graduate medical education.
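The reliability arithmetic behind findings like these — an internal-consistency alpha per scale, and a Spearman-Brown projection of how many raters must be averaged to hit a target reliability — can be sketched briefly. The rating matrix and the single-rater reliability of 0.57 below are hypothetical illustrations, not values from the study:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def raters_needed(single_rater_reliability, target_reliability):
    """Spearman-Brown prophecy: raters to average for a target reliability."""
    r, t = single_rater_reliability, target_reliability
    return int(np.ceil((t * (1 - r)) / (r * (1 - t))))

# Hypothetical 5-point ratings: 4 residents x 3 items on one faculty member.
ratings = [[4, 5, 4], [3, 4, 4], [5, 5, 5], [4, 4, 3]]
print(round(cronbach_alpha(ratings), 2))
print(raters_needed(0.57, 0.90))  # -> 7, consistent with "some seven evaluations"
```

A single-rater reliability near 0.57 reproduces the paper's "some seven evaluations per faculty" figure for a 0.90 target; higher single-rater reliability drops the count quickly.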
Affiliation(s)
- Onyebuchi A. Arah
- Department of Epidemiology, UCLA School of Public Health, University of California Los Angeles, Los Angeles, California, United States of America
- UCLA Center for Health Policy Research, Los Angeles, California, United States of America
- Department of Quality Management and Process Innovation, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands
- Joost B. L. Hoekstra
- Department of Internal Medicine, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands
- Albert P. Bos
- Department of Pediatrics, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands
- Kiki M. J. M. H. Lombarts
- Department of Quality Management and Process Innovation, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands
18
Iblher P, Zupanic M, Härtel C, Heinze H, Schmucker P, Fischer MR. The Questionnaire "SFDP26-German": a reliable tool for evaluation of clinical teaching? GMS Z Med Ausbild 2011; 28:Doc30. [PMID: 21818240] [PMCID: PMC3149471] [DOI: 10.3205/zma000742]
Abstract
Aims: Evaluation of the effectiveness of clinical teaching is an important contribution for the quality control of medical teaching. This should be evaluated using a reliable instrument in order to be able to both gauge the status quo and the effects of instruction. In the Stanford Faculty Development Program (SFDP), seven categories have proven to be appropriate: Establishing the Learning Climate, Controlling a Teaching Session, Communication of Goals, Encouraging Understanding and Retention, Evaluation, Feedback and Self-directed Learning.
Since 1998, the SFDP26 questionnaire has established itself as an evaluation tool in English-speaking countries. To date, no equivalent German-language questionnaire is available that evaluates the overall effectiveness of teaching. Question: Development and theoretical testing of a German-language version of SFDP26 (SFDP26-German), and examination of the correlation of the SFDP26-German subscales with the overall effectiveness of teaching.
Methods: 19 anaesthetists (7 female, 12 male) from the University of Lübeck were evaluated at the end of a teaching seminar on emergency medical care using SFDP26-German. The sample consisted of 173 medical students (119 female (68.8%) and 54 male (31.2%)), mostly from the fifth semester (6.6%) and sixth semester (80.3%). The mean age of the students was 23±3 years. Results: The discriminatory power of all items ranged between good and excellent (rit=0.48-0.75). All subscales displayed good internal consistency (α=0.69-0.92) and significant positive inter-scale correlations (r=0.40-0.70). The subscales and "overall effectiveness of teaching" showed significant correlation, with the highest correlation for the subscale "communication of goals" (p < 0.001; r = 0.61). Conclusion: The analysis of SFDP26-German confirms high internal consistency. Future research should investigate the effectiveness of the individual categories on the overall effectiveness of teaching and validate according to external criteria.
Affiliation(s)
- Peter Iblher
- Universität zu Lübeck, Klinik für Anästhesiologie, Lübeck, Deutschland
19
van der Leeuw R, Lombarts K, Heineman MJ, Arah O. Systematic evaluation of the teaching qualities of Obstetrics and Gynecology faculty: reliability and validity of the SETQ tools. PLoS One 2011; 6:e19142. [PMID: 21559275] [PMCID: PMC3086887] [DOI: 10.1371/journal.pone.0019142]
Abstract
Background The importance of effective clinical teaching for the quality of future patient care is globally understood. Due to recent changes in graduate medical education, new tools are needed to provide faculty with reliable and individualized feedback on their teaching qualities. This study validates two instruments underlying the System for Evaluation of Teaching Qualities (SETQ) aimed at measuring and improving the teaching qualities of obstetrics and gynecology faculty. Methods and Findings This cross-sectional multi-center questionnaire study was set in seven general teaching hospitals and two academic medical centers in the Netherlands. Seventy-seven residents and 114 faculty were invited to complete the SETQ instruments in the duration of one month from September 2008 to September 2009. To assess reliability and validity of the instruments, we used exploratory factor analysis, inter-item correlation, reliability coefficient alpha and inter-scale correlations. We also compared composite scales from factor analysis to global ratings. Finally, the number of residents' evaluations needed per faculty for reliable assessments was calculated. A total of 613 evaluations were completed by 66 residents (85.7% response rate). 99 faculty (86.8% response rate) participated in self-evaluation. Factor analysis yielded five scales with high reliability (Cronbach's alpha for residents' and faculty): learning climate (0.86 and 0.75), professional attitude (0.89 and 0.81), communication of learning goals (0.89 and 0.82), evaluation of residents (0.87 and 0.79) and feedback (0.87 and 0.86). Item-total, inter-scale and scale-global rating correlation coefficients were significant (P<0.01). Four to six residents' evaluations are needed per faculty (reliability coefficient 0.60–0.80). Conclusions Both SETQ instruments were found reliable and valid for evaluating teaching qualities of obstetrics and gynecology faculty. 
Future research should examine improvement of teaching qualities when using SETQ.
Affiliation(s)
- Renée van der Leeuw
- Department of Quality and Process Innovation, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands.
20
Nation JG, Carmichael E, Fidler H, Violato C. The development of an instrument to assess clinical teaching with linkage to CanMEDS roles: A psychometric analysis. Med Teach 2011; 33:e290-6. [PMID: 21609164] [DOI: 10.3109/0142159x.2011.565825]
Abstract
BACKGROUND Assessment of clinical teaching by learners is of value to teachers, department heads, and program directors, and must be comprehensive and feasible. AIMS To review published evaluation instruments with psychometric evaluations and to develop and psychometrically evaluate an instrument for assessing clinical teaching with linkages to the CanMEDS roles. METHOD We developed a 19-item questionnaire to reflect 10 domains relevant to teaching and the CanMEDS roles. A total of 317 medical learners assessed 170 instructors: 14 (4.4%) clinical clerks, 229 (72.3%) residents, and 53 (16.7%) fellows; 21 (6.6%) did not specify their position. RESULTS A mean of eight raters assessed each instructor. The internal consistency reliability of the 19-item instrument was Cronbach's α = 0.95. The generalizability coefficient (Ep(2)) analysis indicated that the raters achieved Ep(2) of 0.95. The factor analysis showed three factors that accounted for 67.97% of the total variance. The three factors, with the variance accounted for and their internal consistency reliability, are teaching skills (variance = 53.25%; Cronbach's α = 0.92), patient interaction (variance = 8.56%; Cronbach's α = 0.91), and professionalism (variance = 6.16%; Cronbach's α = 0.86). The three factors are intercorrelated (correlations = 0.48, 0.58, 0.46; p < 0.01). CONCLUSION It is feasible to assess clinical teaching with the 19-item instrument, which has demonstrated evidence of both validity and reliability.
Affiliation(s)
- Jill G Nation
- Department of Obstetrics and Gynecology, Faculty of Medicine, University of Calgary, Calgary, AB T2N 4N2, Canada.
21
Fluit CRMG, Bolhuis S, Grol R, Laan R, Wensing M. Assessing the quality of clinical teachers: a systematic review of content and quality of questionnaires for assessing clinical teachers. J Gen Intern Med 2010; 25:1337-45. [PMID: 20703952] [PMCID: PMC2988147] [DOI: 10.1007/s11606-010-1458-y]
Abstract
BACKGROUND Learning in a clinical environment differs from formal educational settings and provides specific challenges for clinicians who are teachers. Instruments that reflect these challenges are needed to identify the strengths and weaknesses of clinical teachers. OBJECTIVE To systematically review the content, validity, and aims of questionnaires used to assess clinical teachers. DATA SOURCES MEDLINE, EMBASE, PsycINFO and ERIC from 1976 up to March 2010. REVIEW METHODS The searches revealed 54 papers on 32 instruments. Data from these papers were documented by independent researchers, using a structured format that included content of the instrument, validation methods, aims of the instrument, and its setting. RESULTS Aspects covered by the instruments predominantly concerned the use of teaching strategies (included in 30 instruments), supporter role (29), role modeling (27), and feedback (26). Providing opportunities for clinical learning activities was included in 13 instruments. Most studies referred to literature on good clinical teaching, although they failed to provide a clear description of what constitutes a good clinical teacher. Instrument length varied from 1 to 58 items. Except for two instruments, all had to be completed by clerks/residents. Instruments served to provide formative feedback ( instruments) but were also used for resource allocation, promotion, and annual performance review (14 instruments). All but two studies reported on internal consistency and/or reliability; other aspects of validity were examined less frequently. CONCLUSIONS No instrument covered all relevant aspects of clinical teaching comprehensively. Validation of the instruments was often limited to assessment of internal consistency and reliability. Available instruments for assessing clinical teachers should be used carefully, especially for consequential decisions. There is a need for more valid comprehensive instruments.
Affiliation(s)
- Cornelia R M G Fluit
- Department for Evaluation, Quality and Development of Medical Education, Radboud University Nijmegen Medical Centre, Nijmegen, The Netherlands.
22
Bing-You RG, Lee R, Trowbridge RL, Varaklis K, Hafler JP. Commentary: principle-based teaching competencies. J Grad Med Educ 2009; 1:100-3. [PMID: 21975714] [PMCID: PMC2931196] [DOI: 10.4300/01.01.0016]
Affiliation(s)
- Robert G. Bing-You
- Corresponding author: Robert Bing-You, MD, Maine Medical Center, 22 Bramhall Street, Portland, ME 04102, 207.662.7060
23
Smith MEB, Desai SS, Allen ES, Saha S, Hunter AJ. Impact of shorter inpatient faculty rotations on resident learning experience. Am J Med 2009; 122:96-100. [PMID: 19114177] [DOI: 10.1016/j.amjmed.2008.09.031]
Affiliation(s)
- M E Beth Smith
- Division of General Medicine and Geriatrics, Oregon Health and Science University, Portland, OR 97239, USA.
24
Moser EM, Kothari N, Stagnaro-Green A. Chief residents as educators: an effective method of resident development. Teach Learn Med 2008; 20:323-328. [PMID: 18855236] [DOI: 10.1080/10401330802384722]
Abstract
BACKGROUND The importance of teaching residents how to instruct medical students is recognized, but time and logistics challenge the implementation of teaching skills programs. No study has described a dissemination model with chief residents as trainers and managers of a teaching skills program. DESCRIPTION All chief residents in three departments (n = 16), participated in an 8-hr train-the-trainer teaching skills program and then trained 178 residents through seven 1-hr sessions. Outcome was measured through student surveys using a validated instrument with seven teaching domains and overall assessment of teaching effectiveness. EVALUATION Survey results revealed a significant improvement in the vast majority of teaching domains 9 months after implementation of the program in all three departments. Student perceptions of overall teaching effectiveness improved in two departments and trended upwards in the third. CONCLUSION A resident teaching skills program utilizing chief residents as trainers resulted in improved 3rd-year medical student ratings of resident teaching.
Affiliation(s)
- Eileen M Moser
- Department of Medicine, New Jersey Medical School, Newark, New Jersey, USA.
25
Beckman TJ, Cook DA, Mandrekar JN. Factor instability of clinical teaching assessment scores among general internists and cardiologists. Med Educ 2006; 40:1209-16. [PMID: 17118115] [DOI: 10.1111/j.1365-2929.2006.02632.x]
Abstract
CONTEXT We are unaware of studies examining the stability of teaching assessment scores across different medical specialties. A recent study showed that clinical teaching assessments of general internists reduced to interpersonal, clinical teaching and efficiency domains. We sought to determine the factor stability of this 3-dimensional model among cardiologists and to compare domain-specific scores between general internists and cardiologists. METHODS A total of 2000 general internal medicine and cardiology hospital teaching assessments carried out from January 2000 to March 2004 were analysed using principal factor analysis. Internal consistency and inter-rater reliability were calculated. Mean item scores were compared between general internists and cardiologists. RESULTS The interpersonal and clinical teaching domains previously demonstrated among general internists collapsed into 1 domain among cardiologists, whereas the efficiency domain remained stable. Internal consistency of domains (Cronbach's alpha range 0.89-0.93) and inter-rater reliability of items (range 0.65-0.87) were good to excellent for both specialties. General internists scored significantly higher (P<0.05) than cardiologists on most items except for 4 items that more accurately assessed the cardiology teaching environment. CONCLUSIONS We observed factor instability of clinical teaching assessment scores from the same instrument administered to general internists and cardiologists. This finding was attributed to salient differences between these specialties' educational environments and highlights the importance of validating assessments for the specific contexts in which they are to be used. Future research should determine whether interpersonal domain scores identify superior teachers and study the reasons why interpersonal and clinical teaching domains are unstable across different educational settings.
Affiliation(s)
- Thomas J Beckman
- Division of General Internal Medicine, Department of Internal Medicine, Mayo Clinic and Mayo Foundation, Rochester, Minnesota 55905, USA.
26
Abstract
BACKGROUND Measuring outcomes of faculty development programs is difficult and infrequently attempted beyond measuring participant satisfaction with the program. Few studies have validated evaluation tools to assess the effectiveness of faculty development programs, and learners have rarely participated in assessing improvement of faculty who participate in such programs. OBJECTIVE To develop a questionnaire to measure the effectiveness of an enhanced one-minute preceptor (OMP) faculty development workshop via faculty self-assessment and resident assessment of faculty, and to use the questionnaire to assess an OMP faculty development workshop. DESIGN AND MEASUREMENTS We developed and tested a questionnaire to assess the 5 "microskills" of an OMP faculty development program, and performed faculty self-assessment and resident assessment using the questionnaire 6 to 18 months before and 6 to 18 months after our experiential skills improvement workshop. PARTICIPANTS Sixty-eight internal medicine continuity clinic preceptors (44 control and 24 intervention faculty) at a university, a veteran's affairs hospital, and 2 community internal medicine training sites. RESULTS Twenty-two participants (92%) completed pre- and postintervention questionnaires. Residents completed 94 preintervention questionnaires and 58 postintervention questionnaires on participant faculty. Faculty reported improvement in behavior following the intervention. Residents reported no significant improvements in faculty teaching behaviors following the intervention. CONCLUSION We attempted to rigorously evaluate a faculty development program based on the OMP. Although the intervention did not show statistically significant changes in teaching behavior, we believe that this study is an important step in extending assessment of faculty development to include resident evaluation of participating faculty.
27
Abstract
BACKGROUND Although a variety of validity evidence should be utilized when evaluating assessment tools, a review of teaching assessments suggested that authors pursue a limited range of validity evidence. OBJECTIVES To develop a method for rating validity evidence and to quantify the evidence supporting scores from existing clinical teaching assessment instruments. DESIGN A comprehensive search yielded 22 articles on clinical teaching assessments. Using standards outlined by the American Psychological Association and the American Educational Research Association, we developed a method for rating the 5 categories of validity evidence reported in each article. We then quantified the validity evidence by summing the ratings for each category. We also calculated weighted kappa coefficients to determine interrater reliabilities for each category of validity evidence. MAIN RESULTS Content and Internal Structure evidence received the highest ratings (27 and 32, respectively, of 44 possible). Relation to Other Variables, Consequences, and Response Process received the lowest ratings (9, 2, and 2, respectively). Interrater reliability was good for Content, Internal Structure, and Relation to Other Variables (kappa range 0.52 to 0.96, all P values < .01), but poor for Consequences and Response Process. CONCLUSIONS Content and Internal Structure evidence is well represented among published assessments of clinical teaching. Evidence for Relation to Other Variables, Consequences, and Response Process receives little attention, and future research should emphasize these categories. The low interrater reliability for Response Process and Consequences likely reflects the scarcity of reported evidence. With further development, our method for rating validity evidence should prove useful in various settings.
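The weighted kappa coefficients used above to gauge interrater reliability can be sketched in a few lines. This is an illustrative implementation with invented ratings, not the authors' analysis code; linear weights penalize disagreement in proportion to the distance between ordered categories:

```python
def weighted_kappa(rater_a, rater_b, categories, weights="linear"):
    """Weighted Cohen's kappa for two raters over ordered categories."""
    n = len(rater_a)
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    # observed joint proportions
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(rater_a, rater_b):
        obs[idx[a]][idx[b]] += 1.0 / n
    pa = [sum(row) for row in obs]                             # rater A marginals
    pb = [sum(obs[i][j] for i in range(k)) for j in range(k)]  # rater B marginals

    def w(i, j):  # disagreement weight grows with category distance
        d = abs(i - j) / (k - 1)
        return d if weights == "linear" else d * d

    d_obs = sum(w(i, j) * obs[i][j] for i in range(k) for j in range(k))
    d_exp = sum(w(i, j) * pa[i] * pb[j] for i in range(k) for j in range(k))
    return 1.0 - d_obs / d_exp

# Identical ratings give perfect agreement
print(weighted_kappa([0, 1, 2, 2], [0, 1, 2, 2], [0, 1, 2]))  # → 1.0
```

With quadratic weights (`weights="quadratic"`), large disagreements are penalized more heavily, which is the other common convention for ordered rating scales.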
Affiliation(s)
- Thomas J Beckman
- Division of General Internal Medicine, Department of Internal Medicine, Mayo Clinic College of Medicine, Mayo Clinic and Mayo Foundation, Rochester, MN, USA.
28
Beckman TJ, Mandrekar JN. The interpersonal, cognitive and efficiency domains of clinical teaching: construct validity of a multi-dimensional scale. MEDICAL EDUCATION 2005; 39:1221-9. [PMID: 16313581 DOI: 10.1111/j.1365-2929.2005.02336.x] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/05/2023]
Abstract
BACKGROUND We are unaware of any hypothesis-driven studies showing that teaching assessments comprise solely interpersonal and cognitive domains. Moreover, previous teaching assessments have been biased by heterogeneous samples of evaluators. Consequently, we investigated the construct validity of faculty assessments comprising interpersonal and cognitive domains, utilising evaluations obtained from resident doctors on an internal medicine hospital service. METHODS A total of 1000 inpatient evaluations were completed on 60 general internal medicine faculty members. Education theory supported a 2-dimensional, 14-item scale. Principal factor analysis was used to explore the scale's dimensionality. Internal reliability and interobserver agreement were determined. Relationships between domains and instructor characteristics were also examined. RESULTS Principal factor analysis revealed interpersonal, clinical teaching and efficiency domains. Internal reliabilities of all domains were high (alpha > 0.90) and interobserver agreement was good (range 0.64-0.83). In the interpersonal domain there was a trend towards higher scores for lower-ranking faculty. Significant findings were higher overall scores in the interpersonal domain (P < 0.001), higher scores for assistant professors in the interpersonal domain (P = 0.008), and higher scores for male than female faculty in the interpersonal (P = 0.041) and clinical teaching (P = 0.008) domains. CONCLUSIONS Clinical teaching evaluations are reducible to interpersonal, clinical teaching and efficiency domains. Evidence for construct validity includes the predicted domains and high internal and interobserver reliabilities. Utilising a homogeneous sample of evaluators minimised variance. Interestingly, lower-ranking faculty scored higher in the interpersonal domain, suggesting that they may focus more attention on teaching activities than full professors do.
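The internal reliability (Cronbach's alpha) reported above can be computed directly from item-level scores. A minimal sketch with invented data, not the study's 14-item scale:

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha; item_scores[i][j] = respondent j's score on item i."""
    k = len(item_scores)            # number of items on the scale
    n = len(item_scores[0])         # number of respondents

    def pvar(xs):                   # population variance, used consistently
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_var = sum(pvar(item) for item in item_scores)
    totals = [sum(item[j] for item in item_scores) for j in range(n)]
    return (k / (k - 1)) * (1 - item_var / pvar(totals))

# Three perfectly correlated items yield alpha close to 1
scores = [[1, 2, 3, 4, 5], [1, 2, 3, 4, 5], [2, 3, 4, 5, 6]]
print(cronbach_alpha(scores))
```

Values above roughly 0.9, as in the abstract's alpha > 0.90, are conventionally read as high internal consistency, though very high values can also signal redundant items.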
Affiliation(s)
- Thomas J Beckman
- Division of General Internal Medicine, Department of Internal Medicine, Mayo Clinic and Mayo Foundation, Rochester, Minnesota 55905, USA.
29
Houston TK, Clark JM, Levine RB, Ferenchick GS, Bowen JL, Branch WT, Boulware DW, Alguire P, Esham RH, Clayton CP, Kern DE. Outcomes of a national faculty development program in teaching skills: prospective follow-up of 110 medicine faculty development teams. J Gen Intern Med 2004; 19:1220-7. [PMID: 15610333 PMCID: PMC1492589 DOI: 10.1111/j.1525-1497.2004.40130.x] [Citation(s) in RCA: 23] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
Abstract
BACKGROUND Awareness of the need for ambulatory care teaching skills training for clinician-educators is increasing. A recent Health Resources and Services Administration (HRSA)-funded national initiative trained 110 teams from U.S. teaching hospitals to implement local faculty development (FD) in teaching skills. OBJECTIVE To assess the rate of successful implementation of local FD initiatives by these teams. METHODS A prospective observational study followed the 110 teams for up to 24 months. Self-reported implementation, our outcome, was defined as the time from the training conference until the team reported that implementation of their FD project was completely accomplished. Factors associated with success were assessed using Kaplan-Meier analysis. RESULTS The median follow-up was 18 months. Fifty-nine of the teams (54%) implemented their local FD project and subsequently trained over 1,400 faculty, of whom over 500 were community based. Teams that implemented their FD projects were more likely than those that did not to have the following attributes: met more frequently (P=.001), had less turnover (P=.01), had protected time (P=.01), rated their likelihood of success high (P=.03), had some project or institutional funding for FD (P=.03), and came from institutions with more than 75 department of medicine faculty (P=.03). The cost to the HRSA was $22,033 per successful team and $533 per faculty member trained. CONCLUSIONS This national initiative was able to disseminate teaching skills training to large numbers of faculty at modest cost. Smaller teaching hospitals may have limited success without additional support or targeted funding.
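The Kaplan-Meier analysis of time-to-implementation used above can be sketched as a product-limit estimator. Times and censoring flags below are invented for illustration, not the study's data:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival steps.
    times[i]: follow-up time; events[i]: 1 = event observed, 0 = censored."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = sum(e for tt, e in data if tt == t)   # events at time t
        m = sum(1 for tt, _ in data if tt == t)   # subjects leaving the risk set at t
        if d:
            surv *= 1.0 - d / n_at_risk           # product-limit update
            curve.append((t, surv))
        n_at_risk -= m
        i += m
    return curve

# Hypothetical teams: implemented at months 6, 12, 24; one lost to follow-up at 18
print(kaplan_meier([6, 12, 18, 24], [1, 1, 0, 1]))
```

Here "survival" is the probability of not yet having implemented by time t, so the curve dropping toward zero reflects teams completing their projects; censored teams reduce the risk set without producing a step.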
Affiliation(s)
- Thomas K Houston
- University of Alabama at Birmingham School of Medicine, 35294, USA.
30
Beckman TJ, Ghosh AK, Cook DA, Erwin PJ, Mandrekar JN. How reliable are assessments of clinical teaching? A review of the published instruments. J Gen Intern Med 2004; 19:971-7. [PMID: 15333063 PMCID: PMC1492515 DOI: 10.1111/j.1525-1497.2004.40066.x] [Citation(s) in RCA: 107] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
Abstract
BACKGROUND Learner feedback is the primary method for evaluating clinical faculty, despite few existing standards for measuring learner assessments. OBJECTIVE To review the published literature on instruments for evaluating clinical teachers and to summarize themes that will aid in developing universally appealing tools. DESIGN Searching 5 electronic databases revealed over 330 articles. Reviews, editorials, and qualitative studies were excluded, leaving 21 articles describing instruments designed for learner evaluation of clinical faculty. Three investigators studied these papers and tabulated characteristics of the learning environments and validation methods. Salient themes among the evaluation studies were determined. MAIN RESULTS Many studies combined evaluations from both outpatient and inpatient settings, and some authors combined evaluations from different learner levels. Wide ranges in numbers of teachers, evaluators, evaluations, and scale items were observed. The most frequently encountered statistical methods were factor analysis and determination of internal consistency reliability with Cronbach's alpha. Less common methods were test-retest reliability, interrater reliability, and convergent validity between validated instruments. Fourteen domains of teaching were identified; the most frequently studied were interpersonal and clinical-teaching skills. CONCLUSIONS Characteristics of teacher evaluations vary between educational settings and between learner levels, indicating that future studies should utilize more narrowly defined study populations. A variety of validation methods, including temporal stability, interrater reliability, and convergent validity, should be considered. Finally, existing data support the validation of instruments composed solely of interpersonal and clinical-teaching domains.
Affiliation(s)
- Thomas J Beckman
- Department of Internal Medicine, Department of Medicine, Mayo Clinic College of Medicine, Mayo Clinic, Mayo Foundation, Rochester, MN, USA.
31
Serwint JR, Feigelman S, Dumont-Driscoll M, Collins R, Zhan M, Kittredge D. Factors associated with resident satisfaction with their continuity experience. AMBULATORY PEDIATRICS 2004; 4:4-10. [PMID: 14731100 DOI: 10.1367/1539-4409(2004)004<0004:fawrsw>2.0.co;2] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
OBJECTIVE To identify factors associated with resident satisfaction concerning residents' continuity experience. DESIGN AND METHODS Continuity directors distributed questionnaires to residents at their respective institutions. Resident satisfaction was defined as satisfied or very satisfied on a Likert scale. The independent variables included 60 characteristics of the continuity experience from 7 domains: 1) patient attributes, 2) continuity and longitudinal issues, 3) responsibility as primary care provider, 4) preceptor characteristics, 5) educational opportunities, 6) exposure to practice management, and 7) interaction with other clinic and practice staff. A stepwise logistic regression model and the Generalized Estimating Equations approach were used. RESULTS Thirty-six programs participated. Of 1155 residents (71%) who provided complete data, 67% (n = 775) stated satisfaction with their continuity experience. The following characteristics (adjusted odds ratio [OR] and 95% confidence interval [CI]) were found to be most significant: preceptor as good role model, OR = 7.28 (CI = 4.2, 12.5); appropriate amount of teaching, OR = 3.25 (CI = 2.1, 5.1); involvement during hospitalization, OR = 2.61 (CI = 1.3, 5.2); exposure to practice management, OR = 2.39 (CI = 1.5, 3.8); good balance of general pediatric patients, OR = 2.34 (CI = 1.5, 3.6); resident as patient advocate, OR = 1.74 (CI = 1.2, 2.4); and appropriate amount of nursing support, OR = 1.65 (CI = 1.1, 2.6). Future career choice, type of continuity site, and level of training were not statistically significant. CONCLUSIONS Pediatric resident satisfaction was significantly associated with 7 variables, the most important of which was the ability of the preceptor to serve as a role model and teacher. The type of continuity site was not significant. Residency programs may use these data to develop interventions to enhance resident satisfaction, which may lead to enhanced work performance and patient satisfaction.
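The adjusted odds ratios above come from multivariable logistic regression with GEE; as a simpler illustration of where an odds ratio and its Wald confidence interval come from, here is the unadjusted 2x2-table version with hypothetical counts, not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with a Wald 95% CI from a 2x2 table:
        a: satisfied, exposed      b: unsatisfied, exposed
        c: satisfied, unexposed    d: unsatisfied, unexposed"""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical: 90/110 residents satisfied with a good role model vs 40/60 without
print(odds_ratio_ci(90, 20, 40, 20))
```

An interval that excludes 1, like those reported for all 7 significant characteristics, indicates an association beyond chance at the 5% level; the study's adjusted ORs additionally control for the other covariates.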
Affiliation(s)
- Janet R Serwint
- Department of Pediatrics, Johns Hopkins Hospital, Baltimore, MD 21287, USA.