1. Yong RL, Cheung W, Shrivastava RK, Bederson JB. Teaching quality in neurosurgery: quantitating outcomes over time. J Neurosurg 2021;136:1147-1156. [PMID: 34479202] [DOI: 10.3171/2021.2.jns203900]
Abstract
OBJECTIVE High-quality neurosurgery resident training is essential to developing competent neurosurgeons. Validated formative tools to assess faculty teaching performance exist, but are not widely used among Accreditation Council for Graduate Medical Education (ACGME) residency programs in the United States. Furthermore, their longer-term impact on teaching performance improvement and educational outcomes remains unclear. The goal of this study was to assess the impact of implementing an evaluation system to provide faculty with feedback on teaching performance in a neurosurgery residency training program over a 4-year period. METHODS The authors performed a prospective cohort study in which a modified version of the System for Evaluation of Teaching Qualities (SETQ) instrument was administered to neurosurgical trainees in their department every 6 months. The authors analyzed subscale score dynamics to identify the strongest correlates of faculty teaching performance improvement. ACGME program survey results and trainee performance on written board examinations were compared for the 3 years before and after SETQ implementation. RESULTS The overall response rate among trainees was 91.8%, with 1044 surveys completed for 41 faculty. Performance scores improved progressively from cycle 1 to cycle 6. The strongest correlate of overall performance was providing positive feedback to trainees. In the 3 years following SETQ implementation, written board examination scores and ACGME resident survey scores increased significantly relative to the national mean, compared with the 3 years prior. CONCLUSIONS Implementation of SETQ was associated with significant improvements in faculty teaching performance as judged by trainees over a 4-year period, and guided curricular changes in the authors' training program that resulted in improved educational outcomes.
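The subscale analysis this abstract describes lends itself to a short illustration. Below is a minimal sketch, in Python, of correlating candidate subscales with an overall teaching score and picking the strongest correlate; the subscale names and all values are simulated assumptions, not the study's data or the authors' actual method.

```python
# A minimal sketch (not the authors' analysis): correlate simulated SETQ
# subscale scores with an overall teaching score and report the strongest
# correlate. Subscale names and all values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
n_surveys = 1044  # matches the survey count reported in the abstract

subscales = {
    "positive_feedback": rng.normal(4.0, 0.5, n_surveys),
    "learning_climate": rng.normal(4.1, 0.5, n_surveys),
    "role_modeling": rng.normal(4.2, 0.5, n_surveys),
}
# Simulated overall score driven mostly by positive feedback.
overall = 0.6 * subscales["positive_feedback"] + rng.normal(1.6, 0.3, n_surveys)

correlations = {
    name: np.corrcoef(scores, overall)[0, 1] for name, scores in subscales.items()
}
strongest = max(correlations, key=correlations.get)
print(f"strongest correlate: {strongest} (r = {correlations[strongest]:.2f})")
```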
2. van der Meulen MW, Smirnova A, Heeneman S, Oude Egbrink MGA, van der Vleuten CPM, Lombarts KMJMH. Exploring validity evidence associated with questionnaire-based tools for assessing the professional performance of physicians: a systematic review. Acad Med 2019;94:1384-1397. [PMID: 31460937] [DOI: 10.1097/acm.0000000000002767]
Abstract
PURPOSE To collect and examine, using an argument-based validity approach, validity evidence of questionnaire-based tools used to assess physicians' clinical, teaching, and research performance. METHOD In October 2016, the authors conducted a systematic search of the literature for articles, published from inception to October 2016, on questionnaire-based tools for assessing physicians' professional performance. They included studies reporting on the validity evidence of tools used to assess physicians' clinical, teaching, and research performance. Using Kane's validity framework, they conducted data extraction based on four inferences in the validity argument: scoring, generalization, extrapolation, and implications. RESULTS They included 46 articles on 15 tools assessing clinical performance and 72 articles on 38 tools assessing teaching performance. They found no studies on research performance tools. Only 12 of the tools (23%) gathered evidence on all four components of Kane's validity argument. Validity evidence focused mostly on generalization and extrapolation inferences. Scoring evidence showed mixed results. Evidence on implications was generally missing. CONCLUSIONS Based on the argument-based approach to validity, not all questionnaire-based tools seem to support their intended use. Evidence concerning the implications of questionnaire-based tools is mostly lacking, thus weakening the argument to use these tools for formative and, especially, for summative assessments of physicians' clinical and teaching performance. More research on implications is needed to strengthen the argument and to provide support for decisions based on these tools, particularly for high-stakes, summative decisions. To meaningfully assess academic physicians in their tripartite role as doctor, teacher, and researcher, additional assessment tools are needed.
Affiliation(s)
- Mirja W van der Meulen
- M.W. van der Meulen is PhD candidate, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands, and member, Professional Performance Research Group, Medical Psychology, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands; ORCID: https://orcid.org/0000-0003-3636-5469.
- A. Smirnova is PhD graduate and researcher, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands, and member, Professional Performance Research Group, Medical Psychology, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands; ORCID: https://orcid.org/0000-0003-4491-3007.
- S. Heeneman is professor, Department of Pathology, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands; ORCID: https://orcid.org/0000-0002-6103-8075.
- M.G.A. oude Egbrink is professor, Department of Physiology, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands; ORCID: https://orcid.org/0000-0002-5530-6598.
- C.P.M. van der Vleuten is professor, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands; ORCID: https://orcid.org/0000-0001-6802-3119.
- K.M.J.M.H. Lombarts is professor, Professional Performance Research Group, Medical Psychology, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands; ORCID: https://orcid.org/0000-0001-6167-0620.
3. Evaluation of teaching in a student-led clinic environment: assessing the reliability of a questionnaire. Int J Osteopath Med 2019. [DOI: 10.1016/j.ijosm.2018.11.001]
4. Moon JY, Schullo-Feulner AM, Kolar C, Lepp G, Reidt S, Undeberg MR, Janke KK. Supporting formative peer review of clinical teaching through a focus on process. Curr Pharm Teach Learn 2018;10:771-778. [PMID: 30025779] [DOI: 10.1016/j.cptl.2018.03.011]
Abstract
BACKGROUND The professional need for development of clinical faculty is clear. Previous scholarship provides insight into the formative potential of peer review in both didactic and experiential settings. Less information exists on a comprehensive peer review process (PRP) designed to support faculty change. EDUCATIONAL ACTIVITY AND SETTING A clinical faculty PRP was developed and implemented based on input from the literature, stakeholders, and field experts. The process included: 1) self-reflective pre-work, 2) a peer-observation component, 3) self-reflective post-work, and 4) creation of a specific action plan via a meeting with an educational expert. The process was assessed by collecting evaluative data from peer reviewer and clinical faculty participants. FINDINGS Eight of 26 faculty members participated in a pilot of the PRP, forming four clinical faculty-peer dyads. When surveyed, participants unanimously reported that they would participate in the PRP again. Aspects perceived as most helpful to clinical teaching included peer observation, self-reflection, and meeting with an educational expert. Challenges related to the process included anxiety about peer observation, the burden of pre-work, and the logistics of scheduling meetings. DISCUSSION While instruments are important in guiding and documenting the evaluation of clinical teaching during an observation period, this initiative focused on the process supporting the observation and evaluation, in order to optimize the formative feedback received by participating faculty and to encourage professional development actions. SUMMARY A PRP that incorporates preparation, reflective practice, and a meeting with an educational expert may support meaningful faculty development in the area of clinical teaching.
Affiliation(s)
- Jean Y Moon
- University of Minnesota College of Pharmacy-Twin Cities, 7-103 Weaver Densford Hall, 308 Harvard St SE, Minneapolis, MN 55455, United States.
- Anne M Schullo-Feulner
- University of Minnesota College of Pharmacy-Twin Cities, 7-103 Weaver Densford Hall, 308 Harvard St SE, Minneapolis, MN 55455, United States.
- Claire Kolar
- University of Minnesota College of Pharmacy-Twin Cities, 7-159 Weaver Densford Hall, 308 Harvard St SE, Minneapolis, MN 55455, United States.
- Gardner Lepp
- University of Minnesota College of Pharmacy, 232 Life Science, 1110 Kirby Drive, Duluth, MN 55812-3003, United States.
- Shannon Reidt
- Optum, 12700 Whitewater Dr, Minnetonka, MN 55343, United States.
- Megan R Undeberg
- University of Minnesota College of Pharmacy-Duluth, 107 Life Science, 1110 Kirby Drive, Duluth, MN 55812-3003, United States.
- Kristin K Janke
- University of Minnesota College of Pharmacy-Twin Cities, 7-159 Weaver Densford Hall, 308 Harvard St SE, Minneapolis, MN 55455, United States.
5. Aoun Bahous S, Salameh P, Salloum A, Salameh W, Park YS, Tekian A. Voluntary vs. compulsory student evaluation of clerkships: effect on validity and potential bias. BMC Med Educ 2018;18:9. [PMID: 29304800] [PMCID: PMC5756350] [DOI: 10.1186/s12909-017-1116-8]
Abstract
BACKGROUND Students' evaluations of their learning experiences can provide a useful source of information about clerkship effectiveness in undergraduate medical education. However, low response rates in clerkship evaluation surveys remain an important limitation. This study examined the impact on validity evidence of increasing response rates through a compulsory approach. METHODS Data included 192 responses obtained voluntarily from 49 third-year students in 2014-2015, and 171 responses obtained compulsorily from 49 students in the first six months of the following year, at one medical school in Lebanon. Evidence supporting internal structure and response process validity was compared between the two administration modalities. The authors also tested for potential bias introduced by the compulsory approach by examining students' responses to a sham item added to the last survey administration. RESULTS Response rates increased from 56% in the voluntary group to 100% in the compulsory group (P < 0.001). Students in both groups provided comparable clerkship ratings, except for one clerkship that received higher ratings in the voluntary group (P = 0.02). Respondents in the voluntary group had higher academic performance than the compulsory group, but this difference diminished when whole-class grades were compared. Reliability of ratings was adequately high and comparable between the two consecutive years. Testing for non-response bias in the voluntary group showed that females were more frequent responders in two clerkships. Testing for authority-induced bias revealed that students might complete the evaluation randomly, without attention to content. CONCLUSIONS While increasing response rates is often a policy requirement aimed at improving the credibility of ratings, using authority to enforce responses may not increase reliability and can raise concerns over the meaningfulness of the evaluation. Administrators are urged to consider not only response rates, but also the representativeness and quality of responses, in administering evaluation surveys.
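As an illustration of the response-rate comparison reported above, here is a minimal sketch of a chi-squared test on a 2x2 table of responded/not-responded counts per cohort. The counts are hypothetical, chosen only to mirror the reported 56% vs. 100% rates; the paper's actual analysis may differ.

```python
# A minimal sketch of a two-cohort response-rate comparison using a
# chi-squared test on a 2x2 contingency table. Counts are hypothetical.
import numpy as np
from scipy.stats import chi2_contingency

# rows: voluntary vs. compulsory cohort; columns: responded, did not respond
table = np.array([
    [192, 151],  # voluntary: 192 of a hypothetical 343 expected surveys (56%)
    [171, 0],    # compulsory: every expected survey returned (100%)
])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4g}")
```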
Affiliation(s)
- Sola Aoun Bahous
- Lebanese American University School of Medicine, Byblos, Lebanon
- Lebanese American University Medical Center – Rizk Hospital, May Zahhar Street, Ashrafieh, P.O. Box: 11-3288, Beirut, Lebanon
- Pascale Salameh
- Lebanese American University School of Pharmacy, Byblos, Lebanon
- Wael Salameh
- Lebanese American University School of Medicine, Byblos, Lebanon
- Yoon Soo Park
- Department of Medical Education, College of Medicine, University of Illinois at Chicago, Chicago, IL, USA
- Ara Tekian
- Department of Medical Education, College of Medicine, University of Illinois at Chicago, Chicago, IL, USA
6. Brown T, Williams B, Lynch M. Relationship between clinical fieldwork educator performance and health professional students' perceptions of their practice education learning environments. Nurs Health Sci 2013;15:510-517. [DOI: 10.1111/nhs.12065]
Affiliation(s)
- Ted Brown
- Department of Occupational Therapy, Monash University, Melbourne, Victoria, Australia
- Brett Williams
- Department of Community Emergency Health and Paramedic Practice, Monash University, Melbourne, Victoria, Australia
- Marty Lynch
- Department of Occupational Therapy, Monash University, Melbourne, Victoria, Australia
7. Backeris ME, Patel RM, Metro DG, Sakai T. Impact of a productivity-based compensation system on faculty clinical teaching scores, as evaluated by anesthesiology residents. J Clin Anesth 2013;25:209-213. [DOI: 10.1016/j.jclinane.2012.11.008]
8. Schönrock-Adema J, Boendermaker PM, Remmelts P. Opportunities for the CTEI: disentangling frequency and quality in evaluating teaching behaviours. Perspect Med Educ 2012;1:172-179. [PMID: 23205342] [PMCID: PMC3508268] [DOI: 10.1007/s40037-012-0023-2]
Abstract
Students' perceptions of teaching quality are vital for quality assurance purposes. An increasingly used, department-independent instrument is the (Cleveland) Clinical Teaching Effectiveness Instrument (CTEI). Although the CTEI was developed carefully and its validity and reliability confirmed, we noted an opportunity for improvement given an intermingling in its rating scales: the labels of the response scales refer to both the frequency and the quality of teaching behaviours. Our aim was to investigate whether frequency and quality scores on the CTEI items differed. A sample of 112 residents anonymously completed the CTEI with separate 5-point rating scales for frequency and quality. Differences between frequency and quality scores were analyzed using paired t-tests. Quality was, on average, rated higher than frequency, with significant differences for 10 of the 15 items. The mean scores differed significantly in favour of quality, and the large effect size indicates that this difference was substantial. Since quality was generally rated higher than frequency, the authors recommend distinguishing frequency from quality. This distinction helps to obtain unambiguous outcomes, which may be conducive to providing concrete and accurate feedback, improving faculty development, and making fair decisions concerning promotion, tenure, or salary.
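The item-level comparison described above can be illustrated with a short sketch: a paired t-test on frequency vs. quality ratings, plus Cohen's d for paired data as the effect size. The rating vectors below are simulated stand-ins for one CTEI item, not the study's data.

```python
# A minimal sketch of an item-level frequency-vs-quality comparison:
# a paired t-test plus Cohen's d for paired data. Ratings are simulated.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
n_residents = 112  # sample size reported in the abstract

frequency = rng.integers(2, 6, size=n_residents).astype(float)  # 5-point scale
# Simulate quality rated somewhat higher than frequency, clipped to the scale.
quality = np.clip(frequency + rng.normal(0.4, 0.5, n_residents), 1, 5)

t, p = ttest_rel(quality, frequency)
diff = quality - frequency
cohens_d = diff.mean() / diff.std(ddof=1)  # effect size for paired data
print(f"t = {t:.2f}, p = {p:.4g}, d = {cohens_d:.2f}")
```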
Affiliation(s)
- Johanna Schönrock-Adema
- Center for Research and Innovation in Medical Education, University of Groningen and University Medical Center Groningen, Antonius Deusinglaan 1, 9713 AV, Groningen, the Netherlands.
- Peter M Boendermaker
- Wenckebach Institute, University of Groningen and University Medical Center Groningen, Groningen, the Netherlands
- Pine Remmelts
- Wenckebach Institute, University of Groningen and University Medical Center Groningen, Groningen, the Netherlands
9. Laos CM, DiStefano MC, Cruz AT, Caviness AC, Hsu DC, Patel B. Mobile pediatric emergency response team: patient satisfaction during the novel H1N1 influenza outbreak. Acad Emerg Med 2012;19:274-279. [PMID: 22435859] [DOI: 10.1111/j.1553-2712.2012.01289.x]
Abstract
OBJECTIVES The objective was to determine child caregiver satisfaction with a nontraditional pediatric emergency department (ED) venue during the 2009 novel H1N1 influenza outbreak. METHODS Between May 1 and 7, 2009, the Texas Children's Hospital (TCH) ED used a six-bed outdoor facility, the Mobile Pediatric Emergency Response Team (MPERT), to evaluate patients with suspected novel H1N1 influenza. Parents and caregivers of patients evaluated in the MPERT were surveyed by telephone using a validated questionnaire to evaluate satisfaction with the facility. RESULTS Of 353 patients, 155 caregivers (44%) completed questionnaires; 127 had wrong numbers, 71 did not answer, and 15 were on a no-call list. Survey responders felt that nurses and doctors explained concepts well (nurses 92%, doctors 94%), 91% felt TCH prepared them well for taking care of their children at home, 94% were satisfied with the medical care received, and 88% were not bothered by the outdoor setting. When asked to rate their MPERT experience on a scale of 0 (worst possible) to 10 (best possible), the median score was 9 (range 1 to 10). CONCLUSIONS The MPERT facility alleviated the patient volume surge and potentially prevented transmission during the H1N1 outbreak. While these were health care provider goals, caregiver expectations were also met. Caregivers perceived the MPERT as an acceptable alternative to receiving care in the regular ED, felt that physicians and nurses communicated well, and felt that the medical care was good to excellent. Use of the MPERT did not negatively affect overall caregiver satisfaction with TCH. These findings suggest that families of pediatric patients are amenable to nontraditional ED venues during periods of ED crowding.
Affiliation(s)
- Carla M Laos
- Dell Children's Medical Center, Pediatric Emergency Medicine, Austin, TX, USA
10. Nation JG, Carmichael E, Fidler H, Violato C. The development of an instrument to assess clinical teaching with linkage to CanMEDS roles: a psychometric analysis. Med Teach 2011;33:e290-e296. [PMID: 21609164] [DOI: 10.3109/0142159x.2011.565825]
Abstract
BACKGROUND Assessment of clinical teaching by learners is of value to teachers, department heads, and program directors, and must be comprehensive and feasible. AIMS To review published evaluation instruments with psychometric evaluations, and to develop and psychometrically evaluate an instrument for assessing clinical teaching with linkages to the CanMEDS roles. METHOD We developed a 19-item questionnaire to reflect 10 domains relevant to teaching and the CanMEDS roles. A total of 317 medical learners assessed 170 instructors: 14 (4.4%) clinical clerks, 229 (72.3%) residents, and 53 (16.7%) fellows; 21 (6.6%) did not specify their position. RESULTS A mean of eight raters assessed each instructor. The internal consistency reliability of the 19-item instrument was Cronbach's α = 0.95, and the raters achieved a generalizability coefficient (Ep²) of 0.95. Factor analysis yielded three factors that together accounted for 67.97% of the total variance: teaching skills (variance = 53.25%; Cronbach's α = 0.92), patient interaction (variance = 8.56%; Cronbach's α = 0.91), and professionalism (variance = 6.16%; Cronbach's α = 0.86). The three factors are intercorrelated (correlations = 0.48, 0.58, 0.46; p < 0.01). CONCLUSION It is feasible to assess clinical teaching with the 19-item instrument, which has demonstrated evidence of both validity and reliability.
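For readers unfamiliar with the internal-consistency statistic reported above, here is a minimal sketch of computing Cronbach's α for a raters-by-items rating matrix. The matrix is simulated (317 learners by 19 items, mirroring the abstract's dimensions), not the study's data.

```python
# A minimal sketch of Cronbach's alpha for a raters-by-items rating matrix.
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """ratings: 2-D array with rows = raters and columns = items."""
    k = ratings.shape[1]
    item_variances = ratings.var(axis=0, ddof=1)
    total_variance = ratings.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(1)
trait = rng.normal(4.0, 0.5, size=(317, 1))  # shared signal per rater
ratings = np.clip(trait + rng.normal(0, 0.4, size=(317, 19)), 1, 5)
print(f"alpha = {cronbach_alpha(ratings):.2f}")
```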
Affiliation(s)
- Jill G Nation
- Department of Obstetrics and Gynecology, Faculty of Medicine, University of Calgary, Calgary, AB T2N 4N2, Canada.
11. Fluit CRMG, Bolhuis S, Grol R, Laan R, Wensing M. Assessing the quality of clinical teachers: a systematic review of content and quality of questionnaires for assessing clinical teachers. J Gen Intern Med 2010;25:1337-1345. [PMID: 20703952] [PMCID: PMC2988147] [DOI: 10.1007/s11606-010-1458-y]
Abstract
BACKGROUND Learning in a clinical environment differs from formal educational settings and provides specific challenges for clinicians who are teachers. Instruments that reflect these challenges are needed to identify the strengths and weaknesses of clinical teachers. OBJECTIVE To systematically review the content, validity, and aims of questionnaires used to assess clinical teachers. DATA SOURCES MEDLINE, EMBASE, PsycINFO, and ERIC from 1976 up to March 2010. REVIEW METHODS The searches revealed 54 papers on 32 instruments. Data from these papers were documented by independent researchers, using a structured format that included the content of the instrument, validation methods, aims of the instrument, and its setting. RESULTS Aspects covered by the instruments predominantly concerned the use of teaching strategies (included in 30 instruments), the supporter role (29), role modeling (27), and feedback (26). Providing opportunities for clinical learning activities was included in 13 instruments. Most studies referred to literature on good clinical teaching, although they failed to provide a clear description of what constitutes a good clinical teacher. Instrument length varied from 1 to 58 items. Except for two instruments, all had to be completed by clerks/residents. Instruments served to provide formative feedback, but 14 instruments were also used for resource allocation, promotion, and annual performance review. All but two studies reported on internal consistency and/or reliability; other aspects of validity were examined less frequently. CONCLUSIONS No instrument covered all relevant aspects of clinical teaching comprehensively. Validation of the instruments was often limited to assessment of internal consistency and reliability. Available instruments for assessing clinical teachers should be used carefully, especially for consequential decisions. There is a need for more valid, comprehensive instruments.
Affiliation(s)
- Cornelia R M G Fluit
- Department for Evaluation, Quality and Development of Medical Education, Radboud University Nijmegen Medical Centre, Nijmegen, The Netherlands.
12. Hsu DC, Macias CG. Rubric evaluation of pediatric emergency medicine fellows. J Grad Med Educ 2010;2:523-529. [PMID: 22132272] [PMCID: PMC3010934] [DOI: 10.4300/jgme-d-10-00083.1]
Abstract
OBJECTIVES To develop and validate a rubric assessment instrument for use by pediatric emergency medicine (PEM) faculty to evaluate PEM fellows, and for fellows to use for self-assessment. METHODS This was a prospective study at a PEM fellowship program. The assessment instrument was developed through a multistep process: (1) development of rubric-format items, scaled on the modified Dreyfus model proficiency levels, corresponding to the 6 Accreditation Council for Graduate Medical Education core competencies; (2) determination of content and construct validity of the items through structured input, item refinement by subject matter experts, and focus group review; (3) collection of data using a 61-item form; (4) evaluation of psychometrics; and (5) selection of items for the final instrument. RESULTS A total of 261 evaluations were collected from 2006 to 2007. Exploratory factor analysis yielded 5 factors with eigenvalues >1.0; each contained ≥4 items with factor loadings >0.4, corresponding to the following competencies: (1) medical knowledge and practice-based learning and improvement, (2) patient care and systems-based practice, (3) interpersonal skills, (4) communication skills, and (5) professionalism. Cronbach's α for the final 53-item instrument was 0.989. The tool was also significantly responsive to year of training. CONCLUSION A substantively and statistically validated rubric evaluation of PEM fellows is a reliable tool for formative and summative evaluation.
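The factor-retention step described above (keep factors whose eigenvalues exceed 1.0, the Kaiser criterion) can be sketched as follows. The evaluations-by-items matrix is simulated, and the study's actual extraction and rotation method may have differed.

```python
# A minimal sketch of the Kaiser criterion: retain factors with eigenvalues
# above 1.0, computed from the correlation matrix of a simulated
# evaluations-by-items matrix.
import numpy as np

rng = np.random.default_rng(2)
n_evals, n_items, n_factors = 261, 53, 5  # sizes mirror the abstract

latent = rng.normal(size=(n_evals, n_factors))           # underlying factors
loadings = rng.normal(0, 0.6, size=(n_factors, n_items))
items = latent @ loadings + rng.normal(0, 1.0, size=(n_evals, n_items))

corr = np.corrcoef(items, rowvar=False)
eigenvalues = np.linalg.eigvalsh(corr)[::-1]             # descending order
print(f"factors with eigenvalue > 1.0: {(eigenvalues > 1.0).sum()}")
```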
Affiliation(s)
- Deborah C. Hsu
- Corresponding author: Deborah Hsu, MD, MEd, Pediatric Emergency Medicine, Texas Children's Hospital, 6621 Fannin St Ste A 210 MC1-1481, Houston, TX 77030, 832.824.5487,
13. McNulty JA, Gruener G, Chandrasekhar A, Espiritu B, Hoyt A, Ensminger D. Are online student evaluations of faculty influenced by the timing of evaluations? Adv Physiol Educ 2010;34:213-216. [PMID: 21098389] [DOI: 10.1152/advan.00079.2010]
Abstract
Student evaluations of faculty are important components of the medical curriculum and faculty development. To improve the effectiveness and timeliness of student evaluations of faculty in the physiology course, we investigated whether evaluations submitted during the course differed from those submitted after completion of the course. A secure web-based system was developed to collect student evaluations that included numerical rankings (1-5) of faculty performance and a section for comments. The grades that students received in the course were added to the data, which were sorted according to the time of submission of the evaluations and analyzed by Pearson's correlation and Student's t-test. Only 26% of students elected to submit evaluations before completion of the course, and the average faculty ratings from these evaluations were highly correlated [r(14) = 0.91] with those submitted after completion of the course. Faculty evaluations were also significantly correlated with those of the previous year [r(14) = 0.88]. Concurrent evaluators provided more comments, and their comments were significantly longer and subjectively scored as more "substantive." Students who submitted their evaluations during the course and who included comments had significantly higher final grades in the course. In conclusion, the numeric ratings that faculty received were not influenced by the timing of student evaluations. However, students who submitted early evaluations tended to be more engaged, as evidenced by their more substantive comments and their better performance on exams. The consistency of faculty evaluations, from year to year and between concurrent and end-of-course submissions, suggests that faculty tend not to make significant adjustments in response to student evaluations.
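A minimal sketch of the timing analysis described above: correlate each faculty member's mean rating from concurrent submissions with the mean from end-of-course submissions. The vectors are simulated; r(14) in the abstract implies 16 faculty, which this sketch assumes.

```python
# A minimal sketch of correlating concurrent vs. end-of-course mean faculty
# ratings. Vectors are simulated, not the study's data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
n_faculty = 16  # implied by the reported degrees of freedom, r(14)

concurrent = rng.uniform(3.0, 5.0, n_faculty)  # mean rating, 1-5 scale
end_of_course = np.clip(concurrent + rng.normal(0, 0.15, n_faculty), 1, 5)

r, p = pearsonr(concurrent, end_of_course)
print(f"r({n_faculty - 2}) = {r:.2f}, p = {p:.4g}")
```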
Affiliation(s)
- John A McNulty
- Department of Cell and Molecular Physiology, Stritch School of Medicine, Loyola University, Maywood, IL 60153, USA.
14. Schubert A. Faculty teaching scores: validating evaluations, evaluating validation. Anesth Analg 2008;107:1098-1099. [DOI: 10.1213/ane.0b013e318182fbf1]