1
Sadrian S, Moosavi M, Ostovarfar J, Amini M, Ghaderpanah R, Mokhtarpour S. Scan of the postgraduate educational environment domains questionnaire: a reliable and valid tool for the evaluation of educational environment in postgraduate medical education. BMC MEDICAL EDUCATION 2024; 24:1130. [PMID: 39394584 PMCID: PMC11468087 DOI: 10.1186/s12909-024-06125-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/27/2023] [Accepted: 10/03/2024] [Indexed: 10/13/2024]
Abstract
BACKGROUND The educational environment plays a critical role in shaping learners' perceptions and experiences in medical education. Evaluating and enhancing the quality of this environment is essential for the continuous improvement of medical training programs. The Scan of the Postgraduate Educational Environment Domains (SPEED) is a concise instrument that assesses three domains of the educational environment. This study aimed to translate the SPEED questionnaire into Persian and evaluate its validity and reliability in the context of postgraduate medical education. METHODS A cross-sectional study was conducted with 200 first- and second-year medical residents. The Persian translation of the SPEED questionnaire was assessed for content validity, and confirmatory factor analysis was performed to evaluate its structural validity. Cronbach's alpha coefficient was calculated to assess internal consistency reliability. RESULTS The Persian-translated SPEED questionnaire demonstrated satisfactory content validity, with all items exceeding the minimum acceptable values for the content validity ratio and index. Confirmatory factor analysis indicated an acceptable fit for the 3-dimensional structure of the SPEED instrument. Internal consistency reliability analysis showed high reliability for the content, atmosphere, and organization domains. CONCLUSION The Persian-translated version of the SPEED questionnaire is a valid and reliable tool for assessing the domains of the educational environment in postgraduate medical education.
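The reliability and content-validity statistics reported above (content validity ratio and Cronbach's alpha) are standard computations. A minimal sketch in Python, using simulated item responses and a hypothetical expert panel rather than the SPEED data themselves:

```python
import numpy as np
import pandas as pd

def content_validity_ratio(n_essential: int, n_experts: int) -> float:
    """Lawshe's CVR: (n_e - N/2) / (N/2), where n_e = experts rating the item 'essential'."""
    return (n_essential - n_experts / 2) / (n_experts / 2)

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a respondents x items table of numeric scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 200 respondents answering 5 correlated Likert-type items (1-5).
rng = np.random.default_rng(0)
latent = rng.normal(0, 1, size=(200, 1))
responses = pd.DataFrame(
    np.clip(np.rint(3 + latent + rng.normal(0, 0.8, size=(200, 5))), 1, 5),
    columns=[f"item{i}" for i in range(1, 6)],
)

print(f"CVR if 8 of 10 experts rate an item essential: {content_validity_ratio(8, 10):.2f}")
print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
```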
Affiliation(s)
- Sadrian Seyedhassan
- Students Research Committee, School of Medicine, Shiraz University of Medical Sciences, Shiraz, Iran
| | - Moosavi Mahsa
- Clinical Education Research Center, Shiraz University of Medical Sciences, Shiraz, Iran
| | - Ostovarfar Jeyran
- MPH Department, Medical School, Shiraz University of Medical Sciences, Shiraz, Iran
| | - Amini Mitra
- Clinical Education Research Center, Shiraz University of Medical Sciences, Shiraz, Iran.
| | - Ghaderpanah Rezvan
- Students Research Committee, School of Medicine, Shiraz University of Medical Sciences, Shiraz, Iran
| | - Mokhtarpour Sedigheh
- Clinical Education Research Center, Shiraz University of Medical Sciences, Shiraz, Iran
2
Nilsson T, Masiello I, Broberger E, Lindström V. Assessment during clinical education among nursing students using two different assessment instruments. BMC MEDICAL EDUCATION 2024; 24:852. [PMID: 39112978 PMCID: PMC11308620 DOI: 10.1186/s12909-024-05771-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/02/2024] [Accepted: 07/11/2024] [Indexed: 08/10/2024]
Abstract
BACKGROUND Assessment of undergraduate students using assessment instruments in the clinical setting is known to be complex. The aim of this study was therefore to examine whether two different assessment instruments, containing learning objectives (LOs) with similar content, result in similar assessments by clinical supervisors, and to explore clinical supervisors' experiences of assessment with the two different instruments. METHODS A mixed-methods approach was used. Four simulated care encounter scenarios were evaluated by 50 supervisors using two different assessment instruments, and 28 follow-up interviews were conducted. Descriptive statistics and binary logistic regression were used for quantitative data analysis, along with qualitative thematic analysis of interview data. RESULTS While significant differences were observed within the assessment instruments, the differences were consistent between the two instruments, indicating that the quality of the assessment instruments was considered equivalent. Supervisors noted that the relationship between students and supervisors could introduce subjectivity into the assessments and that working in groups of supervisors could be advantageous. In terms of formative assessment, the Likert scale was considered a useful tool for evaluating learning objectives. However, supervisors had different views on grading scales and the need for clear definitions. The supervisors concluded that a complicated assessment instrument led to limited everyday usage and did not facilitate formative feedback. Furthermore, supervisors discussed how their experiences influenced the use of the assessment instruments, which resulted in different descriptions of the experience. These differences led to a discussion of the need for supervisor teams to enhance the validity of assessments. CONCLUSION The findings showed that there were no significant differences in pass/fail gradings using the two different assessment instruments. The qualitative data suggest that supervisors struggled with subjectivity, phrasing, and definitions of the LOs and the scales used in both instruments. This resulted in arbitrary assessments that were time-consuming, which limited usage in day-to-day assessment. To mitigate the subjectivity, supervisors suggested working in teams and conducting multiple assessments over time to increase assessment validity.
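The quantitative arm described above pairs descriptive statistics with binary logistic regression on pass/fail decisions. A hedged sketch of such a model with statsmodels, using simulated supervisor gradings and assumed variable names (passed, instrument, scenario), not the study's data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: 50 supervisors each grade 4 scenarios with 2 instruments (pass = 1, fail = 0).
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "passed": rng.integers(0, 2, size=400),
    "instrument": np.tile(["A", "B"], 200),
    "scenario": np.repeat([1, 2, 3, 4], 100),
})

# Binary logistic regression: does the instrument used predict a pass/fail decision?
result = smf.logit("passed ~ C(instrument) + C(scenario)", data=df).fit(disp=False)
print(result.summary())
```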
Affiliation(s)
- Nilsson Tomas
- Department of Clinical Science and Education, Södersjukhuset, Karolinska Institutet, 11883, Stockholm, Sweden.
| | - Masiello Italo
- Department of Pedagogy, Linnaeus University, Växjö, Sweden
- Department for Neurobiology, Care Sciences, and Society, Division of Nursing, Karolinska Institutet, Huddinge, Sweden
- Department of Computer Science and Media Technology, Linnaeus University, Växjö, Sweden
| | - Broberger Eva
- Department for Neurobiology, Care Sciences, and Society, Division of Nursing, Karolinska Institutet, Huddinge, Sweden
| | - Lindström Veronica
- Department of Nursing, Division of Ambulance Service, Region Västerbotten, Umeå University, Umeå, Sweden
- Department of Health Promotion Science, Sophiahemmet University, Stockholm, Sweden
3
Azim SR, Azfar SM, Baig M. Students' perceptions of and satisfaction with their Orthopaedic posting learning environment by using the Healthcare Education Micro-Learning Environment Measure (HEMLEM) questionnaire. PLoS One 2024; 19:e0306971. [PMID: 38990914 PMCID: PMC11238969 DOI: 10.1371/journal.pone.0306971] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2023] [Accepted: 06/26/2024] [Indexed: 07/13/2024] Open
Abstract
BACKGROUND The learning environment in medical education is crucial for student development, encompassing social, psychological, and physical aspects that significantly affect learning. This study aimed to assess undergraduate medical students' perception of the orthopaedic ward's learning environment and examine the factors influencing their overall satisfaction during the clinical rotation. METHODS This cross-sectional quantitative study was conducted in a private medical college in Pakistan. Data were collected through a pre-validated questionnaire, "The Healthcare Education Micro-Learning Environment Measure (HEMLEM)." Data analysis was done using SPSS version 23 software. RESULTS A total of 205/300 students (response rate 68.33%) [103 (50.2%) males and 102 (49.8%) females] participated in this survey. Notably, 116 (56.6%) appreciated the ward's welcoming, friendly, and open atmosphere, and 114 (55.6%) of the respondents appreciated the ward culture in which they felt free to ask questions or comment. Additionally, 111 (54.7%) appreciated the faculty's enthusiasm for teaching. A comparison between male and female students showed significantly higher satisfaction among males regarding staff attitudes and behaviours (p < .019). CONCLUSION Undergraduate students held a predominantly positive view of the orthopaedic ward's learning environment, with differences observed based on gender and year of study. The study highlights the importance of both staff attitude and teaching quality in shaping the educational experience. It suggests that medical institutions should focus on enhancing teaching skills among clinicians to improve learning experiences and ultimately benefit patient care and the healthcare system.
Affiliation(s)
- Syeda Rubaba Azim
- Department of Medical Education, Dow University of Health Sciences, Karachi, Pakistan
| | - Syed Muhammad Azfar
- Department of Orthopedic, Liaquat College of Medicine and Dentistry, Karachi, Pakistan
| | - Mukhtiar Baig
- Department of Clinical Biochemistry, Faculty of Medicine, Rabigh, King Abdulaziz University, Jeddah, Saudi Arabia
4
Mitzkat A, Mink J, Arnold C, Mahler C, Mihaljevic AL, Möltner A, Trierweiler-Hauke B, Ullrich C, Wensing M, Kiesewetter J. Development of individual competencies and team performance in interprofessional ward rounds: results of a study with multimodal observations at the Heidelberg Interprofessional Training Ward. Front Med (Lausanne) 2023; 10:1241557. [PMID: 37828945 PMCID: PMC10566636 DOI: 10.3389/fmed.2023.1241557] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2023] [Accepted: 08/07/2023] [Indexed: 10/14/2023] Open
Abstract
Introduction Interprofessional training wards (IPTW) aim to improve undergraduates' interprofessional collaborative practice of care. Little is known about the effects of the different team tasks on IPTW as measured by external assessment. In Heidelberg, Germany, four nursing and four medical undergraduates (= one cohort) care for up to six patients undergoing general surgery during a four-week placement. They learn both professionally and interprofessionally, working largely on their own responsibility under the supervision of the medical and nursing learning facilitators. Interprofessional ward rounds are a central component of developing individual competencies and team performance. The aim of this study was to evaluate individual competencies and team performance shown in ward rounds. Methods Observations took place in four cohorts of four nursing and four medical undergraduates each. Undergraduates in one cohort were divided into two teams, which rotated in morning and afternoon shifts. Team 1 was on morning shift during the first (t0) and third (t1) weeks of the IPTW placement, and Team 2 was on morning shift during the second (t0) and fourth (t1) weeks. Within each team, a tandem of one nursing and one medical undergraduate cared for a patient room with three patients. Ward round observations took place with each team and tandem at t0 and t1 using the IP-VITA instrument for individual competencies (16 items) and team performance (11 items). Four hypotheses were formulated for statistical testing with linear mixed models and correlations. Results A total of 16 nursing and 16 medical undergraduates were included. There were significant changes in mean values between t0 and t1 in individual competencies (Hypothesis 1). They were statistically significant for all three sum scores: "Roles and Responsibilities", "Patient-Centeredness", and "Leadership". In terms of team performance (Hypothesis 2), there was a statistically significant change in mean values in the sum score "Roles and Responsibilities" and positive trends in the sum scores "Patient-Centeredness" and "Decision-Making/Collaborative Clinical Reasoning". Analysis of differences in the development of individual competencies in the groups of nursing and medical undergraduates (Hypothesis 3) showed more significant differences in the mean values of the two groups at t0 than at t1. There were significant correlations between individual competencies and team performance at both t0 and t1 (Hypothesis 4). Discussion The study has limitations due to the small sample and some sources of bias related to the external assessment by means of observation. Nevertheless, this study offers insights into interprofessional tasks on the IPTW from an external assessment. Results from quantitative and qualitative analysis of learners' self-assessment are confirmed in terms of roles and responsibilities and patient-centeredness. It was observed that medical undergraduates acquired and applied skills in collaborative clinical reasoning and decision-making, whereas nursing undergraduates acquired leadership skills. Within the study sample, only a small group of tandems remained constant over time. In team performance, the group of constant tandems tended to perform better than the group of random tandems. The aim of IPTW should be to prepare healthcare team members for the challenge of changing teams.
Therefore, implications for IPTW implementation could be to develop learning support approaches that allow medical and nursing undergraduates to bring interprofessional competencies to team performance, independent of the tandem partner or team.
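The hypothesis tests described above rely on linear mixed models over repeated observations of the same undergraduates. A minimal, hypothetical sketch with statsmodels, using invented IP-VITA-style sum scores and variable names (student, time, profession); it is not the study's actual model specification:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical sum scores: 16 undergraduates, each observed at t0 and t1.
rng = np.random.default_rng(7)
df = pd.DataFrame({
    "student": np.repeat(np.arange(16), 2),
    "time": np.tile(["t0", "t1"], 16),
    "profession": np.repeat(rng.choice(["nursing", "medical"], size=16), 2),
})
df["roles_responsibilities"] = (3.0 + 0.5 * (df["time"] == "t1")
                                + rng.normal(0, 0.4, size=len(df)))

# Linear mixed model with a random intercept per student, testing change from t0 to t1.
model = smf.mixedlm("roles_responsibilities ~ time + profession",
                    data=df, groups=df["student"]).fit()
print(model.summary())
```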
Affiliation(s)
- Anika Mitzkat
- Department of General Practice and Health Services Research, University Hospital Heidelberg, Heidelberg, Germany
| | - Johanna Mink
- Department of General Practice and Health Services Research, University Hospital Heidelberg, Heidelberg, Germany
| | - Christine Arnold
- Division of Neonatology, Department of Paediatrics, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
| | - Cornelia Mahler
- Department of Nursing Science, University Hospital Tübingen, Tübingen, Germany
| | - André L. Mihaljevic
- Department of General Visceral and Transplantation Surgery, University Hospital Ulm, Ulm, Germany
| | - Andreas Möltner
- Department of Medical Examinations, Medical Faculty Heidelberg, Heidelberg, Germany
| | - Birgit Trierweiler-Hauke
- Department of General, Visceral and Transplantation Surgery, University Hospital Heidelberg, Heidelberg, Germany
| | - Charlotte Ullrich
- Department of General Practice and Health Services Research, University Hospital Heidelberg, Heidelberg, Germany
| | - Michel Wensing
- Department of General Practice and Health Services Research, University Hospital Heidelberg, Heidelberg, Germany
| | - Jan Kiesewetter
- Institute of Medical Education, LMU University Hospital, LMU München, München, Germany
5
Kim KW, Choe WJ, Kim JH. Application of competency-based education in the Korean anesthesiology residency program and survey analysis. Korean J Anesthesiol 2023; 76:135-142. [PMID: 35922894 PMCID: PMC10079002 DOI: 10.4097/kja.22383] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2022] [Revised: 07/27/2022] [Accepted: 08/01/2022] [Indexed: 04/03/2023] Open
Abstract
BACKGROUND Although competency-based education (CBE) is becoming a popular form of medical education, it has not been used to train residents. Recently, the Korean Society of Anesthesiologists completed a pilot implementation and evaluation of a CBE program. This study aims to outline that experience. METHODS The chief training faculty from each hospital took a one-hour online course about CBE. Emails on the seven core competencies and their evaluation were sent to residents and faculty ahead of a pilot core competency evaluation (CCE). The pilot CCE took place in late 2021, followed by a survey. RESULTS A total of 68 out of 84 hospitals participated in the pilot CCE. The survey response rate was 55.9% (38/68) for chief training faculty, 10.2% (91/888) for training faculty, and 30.2% (206/683) for residents. More than half of the training faculty thought that the CCE was necessary for the education of residents. Residents' and training faculty's responses about the CCE were generally positive, although their understanding of the CCE criteria was low. More than 80% of the hospitals had a defibrillator and a cardiopulmonary resuscitation manikin, while the rarest piece of equipment was an ultrasound vessel model. Only defibrillators were used in more than half of the hospitals. Thoughts about the CCE were related to various factors, such as length of employment, location of hospitals, and the number of residents per grade. CONCLUSIONS This study's results may be helpful in improving the quality of resident education to meet the expectations of both teaching faculty and residents while establishing CBE.
Affiliation(s)
- Kyung Woo Kim
- Department of Anesthesiology and Pain Medicine, Inje University Ilsan Paik Hospital, Goyang, Korea
| | - Won Joo Choe
- Department of Anesthesiology and Pain Medicine, Inje University Ilsan Paik Hospital, Goyang, Korea
| | - Jun Hyun Kim
- Department of Anesthesiology and Pain Medicine, Inje University Ilsan Paik Hospital, Goyang, Korea
6
Wyatt TR, Wood EA, Waller JL, Egan SC, Stepleman LM. Patient care ownership in medical students: a validation study. BMC MEDICAL EDUCATION 2023; 23:127. [PMID: 36814275 PMCID: PMC9948326 DOI: 10.1186/s12909-023-04106-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/02/2021] [Accepted: 02/13/2023] [Indexed: 06/18/2023]
Abstract
BACKGROUND Psychological Ownership is the cognitive-affective state individuals experience when they come to feel they own something. The construct is context dependent, reliant on what is being owned and by whom. In medical education, this feeling translates to what has been described as "Patient Care Ownership," which includes the feelings of responsibility that physicians have for patient care. In this study, we adapted for a medical student population an instrument on Psychological Ownership that was originally developed for business employees. The aim of this study was to collect validity evidence for its fit with this population. METHODS A revised version of the Psychological Ownership survey was created and administered to 182 medical students rotating on their clerkships in 2018-2019, along with two other measures, the Teamwork Assessment Scale (TSA) and the Maslach Burnout Inventory (MBI) Survey. A confirmatory factor analysis (CFA) was conducted, which indicated a poor fit between the original and revised version. As a result, an exploratory factor analysis (EFA) was conducted and validity evidence was gathered to assess the new instrument's fit with medical students. RESULTS The results show that the initial subscales proposed by Avey et al. (i.e., Territoriality, Accountability, Belongingness, Self-efficacy, and Self-identification) did not account for item responses in the revised instrument when administered to medical students. Instead, four subscales (Team Inclusion, Accountability, Territoriality, and Self-Confidence) better described patient care ownership for medical students, and the internal reliability of these subscales was found to be good. Using Cronbach's alpha, the internal consistency among the items of each subscale was: Team Inclusion (0.91), Accountability (0.78), Territoriality (0.78), and Self-Confidence (0.82). The subscales of Territoriality, Team Inclusion, and Self-Confidence were negatively correlated with the 1-item Burnout measure (P = 0.01). The Team Inclusion subscale strongly correlated with the Teamwork Assessment Scale (TSA), while the Accountability subscale correlated weakly, and the Self-Confidence and Territoriality subscales correlated moderately. CONCLUSION Our study provides preliminary validity evidence for an adapted version of Avey et al.'s Psychological Ownership survey, specifically designed to measure patient care ownership in a medical student population. We expect this revised instrument to be a valuable tool for medical educators evaluating and monitoring students as they learn how to engage in patient care ownership.
Affiliation(s)
- Tasha R Wyatt
- Center for Health Professions Education, Department of Medicine, Uniformed Services University of the Health Sciences, 4301 Jones Bridge Road, Bethesda, MD, 20814-4712, USA.
| | - Elena A Wood
- Office of Academic Affairs, Medical College of Georgia at Augusta University, Augusta, GA, USA
| | - Jennifer L Waller
- Department of Population Health Science, Division of Biostatistics & Data Science, Medical College of Georgia at Augusta University, Augusta, GA, USA
| | - Sarah C Egan
- Office of Academic Affairs, Medical College of Georgia at Augusta University, Augusta, GA, USA
| | - Lara M Stepleman
- Department of Psychiatry and Health Behavior, Medical College of Georgia at Augusta University, Augusta, GA, USA
7
Lägervik M, Thörne K, Fristedt S, Henricson M, Hedberg B. Residents' and supervisors' experiences when using a feedback-model in post-graduate medical education. BMC MEDICAL EDUCATION 2022; 22:891. [PMID: 36564770 PMCID: PMC9789576 DOI: 10.1186/s12909-022-03969-5] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/03/2022] [Accepted: 12/16/2022] [Indexed: 06/17/2023]
Abstract
BACKGROUND Supervisors play a key part as role models and in supporting learning during residents' post-graduate medical education, but they sometimes lack sufficient pedagogic training and are challenged by the high demands of today's healthcare. The aim of this study was to describe the strengths and areas for improvement identified in the supervision process by residents and supervisors in post-graduate medical education. METHODS This study included supervisors and residents working at departments and health centres who had used a web-based questionnaire, as part of the Evaluation and Feedback For Effective Clinical Teaching (EFFECT) model, during the period 2016-2019. Descriptive statistics and content analysis were used to analyse ratings and comments to describe strengths and areas for improvement in the supervision process. RESULTS The study included 287 resident evaluations of supervisors and 78 self-evaluations by supervisors. The supervisor acting as a role model, being available, and giving personal support were the three most important strengths identified by the residents and supervisors. Residents in primary care also identified the role modelling of general practice competence as a strength, whereas residents and supervisors in hospital departments described supervisors as energetic and showing that work was fun. The area most in need of improvement was giving and receiving feedback. CONCLUSIONS To be able to give feedback, residents and supervisors needed to see each other at work, and the learning environment had to offer time and space for pedagogical processes, such as feedback.
Affiliation(s)
- Martin Lägervik
- Jönköping Academy, Jönköping University, 551 11, Jönköping, Sweden.
- Futurum - the Academy for Health and Care, Region Jönköping County, 551 11, Jönköping, Sweden.
| | - Karin Thörne
- Jönköping Academy, Jönköping University, 551 11, Jönköping, Sweden
- Futurum - the Academy for Health and Care, Region Jönköping County, 551 11, Jönköping, Sweden
| | - Sofi Fristedt
- Jönköping Academy, Jönköping University, 551 11, Jönköping, Sweden
- Department of Health Sciences, Faculty of Medicine, Lund University, 221 00, Lund, Sweden
| | - Maria Henricson
- Jönköping Academy, Jönköping University, 551 11, Jönköping, Sweden
- Department of Caring Sciences, University of Borås, 501 90, Borås, Sweden
| | - Berith Hedberg
- Jönköping Academy, Jönköping University, 551 11, Jönköping, Sweden
8
Rechtien L, Gradel M, Fischer MR, Graupe T, Dimitriadis K. A Mixed Methods Assessment of the Management Role of Physicians. ADVANCES IN MEDICAL EDUCATION AND PRACTICE 2022; 13:1003-1017. [PMID: 36105767 PMCID: PMC9467689 DOI: 10.2147/amep.s370245] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/24/2022] [Accepted: 08/04/2022] [Indexed: 06/15/2023]
Abstract
Introduction Physicians are increasingly confronted with new requirements in their daily job, which go beyond the mere treatment of patients. The aim of this mixed-methods study is to better understand management as it relates to physicians' daily work, to clarify physicians' perception of their management role, and to examine physicians' self-assessed competence in these functions. Methods We used three different instruments: semi-structured interviews, a self-assessment survey, and direct observations to evaluate managerial activities performed by residents. The latter two were based on instruments established in management research. Results Interviewed residents were familiar with the term "Management" but had difficulty defining it. Concerning managerial functions in the context of their daily work, we identified three main categories: self-management, patient management, and management of the ward. In this context, physicians named numerous examples of management tasks for which they felt ill prepared. Eighty-eight residents participated in the self-assessment survey and rated the majority of the management tasks as necessary for residents' work. Although physicians estimated the proportion of managerial work to comprise only 40.6%, a much higher number of purely managerial tasks was identified through direct observation (n = 12). Activities related to management were observed more often than genuine physician tasks. Discussion This study illustrates the prominent role of management activities in the context of residents' work, while at the same time showing that residents do not feel sufficiently educated, prepared, or competent in management tasks.
Affiliation(s)
- Laura Rechtien
- Institute of Medical Education University Hospital, Ludwig Maximilian University of Munich, Munich, Germany
- Department of Dermatology, Friedrich-Alexander-University of Erlangen-Nuremberg, Erlangen, Germany
| | - Maximilian Gradel
- Institute of Medical Education University Hospital, Ludwig Maximilian University of Munich, Munich, Germany
| | - Martin R Fischer
- Institute of Medical Education University Hospital, Ludwig Maximilian University of Munich, Munich, Germany
| | - Tanja Graupe
- Institute of Medical Education University Hospital, Ludwig Maximilian University of Munich, Munich, Germany
| | - Konstantinos Dimitriadis
- Institute of Medical Education University Hospital, Ludwig Maximilian University of Munich, Munich, Germany
- Department of Neurology, Ludwig Maximilian University of Munich, Munich, Germany
9
Bilan N, Negahdari R, Moghaddam SF. The Competency-based Evaluation of Educational Crew of Dental Faculty’s Obstacles in Institutionalizing Performance Assessments. Open Dent J 2022. [DOI: 10.2174/18742106-v16-e2206201] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Aims:
This study evaluates the lived experiences of the dental faculty's educational crew regarding the obstacles and requirements involved in institutionalizing performance assessments to implement a professional competency-based evaluation system.
Background:
The competency-based evaluation of learning and teaching processes has been adopted as a key policy in the developed world, which indicates the achievement rate of educational goals and the quality of education.
Objective:
The main objective of this study was to evaluate obstacles in institutionalizing performance assessments for the educational crew.
Methods:
This qualitative study used semi-structured interviews in focus group discussions. The experience of the educational crew regarding the obstacles to using performance assessments and their approaches to conducting a professional competency-based evaluation was assessed. The recruited participants were educational supervisors, professors of orthodontics and prosthodontics, and the medical education department and evaluation committee members of the faculty of dentistry at the University of Tabriz. Purposive sampling was used and continued until saturation was reached. Five focus group discussions were conducted with fourteen educational crew members and three medical education department members. The data were analysed using thematic analysis.
Results:
The interview analysis results yielded 450 codes in three general categories, including “current condition of clinical education,” “obstacles of implementing new evaluation methods,” and “requirements for effective evaluation of clinical skills.” According to the results, changes in evaluation methods are necessary to respond to community needs. There are also many cultural problems with applying western models in developing countries.
Conclusion:
The medical community should be directed towards a competency-based curriculum, especially in procedure-based fields, such as dentistry.
Other:
They are moving towards altering traditional evaluation methods (the traditional classroom-based lectures). This paradigm change requires support from the department and the provision of infrastructure.
10
Cordovani L, Cordovani D, Wong A. Characteristics of good clinical teachers in anesthesiology from medical students' perspective: a qualitative descriptive study. Can J Anaesth 2022; 69:841-848. [PMID: 35314995 DOI: 10.1007/s12630-022-02234-z] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2021] [Revised: 01/21/2022] [Accepted: 01/23/2022] [Indexed: 11/30/2022] Open
Abstract
PURPOSE Learning needs are influenced by the stage of learning and medical specialty. We sought to investigate the characteristics of a good clinical teacher in anesthesiology from the medical students' perspective. METHODS We conducted a qualitative descriptive study to analyze written comments of medical students about their clinical teachers' performances. Our analysis strategy was the inductive content analysis method. The results are reported as a descriptive summary with major themes as the final product. RESULTS Our study identified four themes. The first theme, teachers' individual characteristics, includes characteristics that are usually more related to students' subjective experiences and feelings. The second theme, teachers' characteristics that advance student learning, seems to be one of the most important contributions to learning because it increases the practice of procedural skills. The third theme, teachers' characteristics that prepare students for success, shows characteristics that facilitate students' learning by promoting a healthy and safe environment. Lastly, the fourth theme, characteristics related to teaching approaches, includes characteristics that can guide clinical teachers more objectively. CONCLUSION Our analysis of the written comments of medical students identified many characteristics of a good clinical teacher that were organized in four different themes. These themes contribute to expand on existing understandings of clinical teaching in the anesthesiology clerkship environment, and add new interpretations that can be reflected upon and explored by other clinical educators.
Affiliation(s)
- Ligia Cordovani
- Department of Health Research Methods, Evidence, and Impact, McMaster University, 1280 Main St W, 2C Area, Hamilton, ON, L8S 4K1, Canada.
| | - Daniel Cordovani
- Department of Anesthesia, McMaster University, Hamilton, ON, Canada
| | - Anne Wong
- Department of Anesthesia, McMaster University, Hamilton, ON, Canada
11
Jenq CC, Ou LS, Tseng HM, Chao YP, Lin JR, Monrouxe LV. Evaluating Clinical Educators' Competence in an East Asian Context: Who Values What? Front Med (Lausanne) 2022; 9:896822. [PMID: 35836950 PMCID: PMC9273768 DOI: 10.3389/fmed.2022.896822] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2022] [Accepted: 06/07/2022] [Indexed: 11/13/2022] Open
Abstract
Background: How to evaluate clinical educators is an important question in faculty development. The issue of who is best placed to evaluate their performance is also critical. However, the whos and the hows of clinical educator evaluation may differ culturally. This study aims to understand what comprises suitable evaluation criteria, alongside who is best placed to undertake the evaluation of clinical educators in medicine within an East Asian culture, specifically Taiwan. Methods: An 84-item web-based questionnaire was created based on a literature review and medical education experts' opinions, focusing on potential raters (i.e., who) and domains (i.e., what) for evaluating clinical educators. Using purposive sampling, we sent 500 questionnaires to clinical educators, residents, Post-Graduate Year Trainees (PGYs), Year-4~6/Year-7 medical students (M4~6/M7) and nurses. Results: We received 258 responses, a 52% response rate. All groups, except nurses, chose "teaching ability" as the most important domain. This contrasts with research from Western contexts that highlights role modeling, leadership and enthusiasm. The clinical educators and nurses made the same choices for the top five items in the "personal qualities" domain, but different choices in the "assessment ability" and "curriculum planning" domains. The best-fit rater groups for evaluating clinical educators were educators themselves and PGYs. Conclusions: There may well be specific suitable domains and populations for evaluating clinical educators' competence in East Asian cultural contexts. Further research in these contexts is required to examine the reach of these findings.
Affiliation(s)
- Chang-Chyi Jenq
- Department of Nephrology, Linkou Chang Gung Memorial Hospital, Taoyuan, Taiwan
- Chang Gung University College of Medicine, Taoyuan, Taiwan
- Chang Gung Medical Education Research Center, Taoyuan, Taiwan
| | - Liang-Shiou Ou
- Chang Gung University College of Medicine, Taoyuan, Taiwan
- Chang Gung Medical Education Research Center, Taoyuan, Taiwan
- Division of Allergy, Asthma and Rheumatology, Department of Pediatrics, Chang Gung Memorial Hospital, Taoyuan, Taiwan
| | - Hsu-Min Tseng
- Chang Gung Medical Education Research Center, Taoyuan, Taiwan
- Department of Health Care Management, College of Management, Chang Gung University, Taoyuan, Taiwan
| | - Ya-Ping Chao
- Chang Gung Medical Education Research Center, Taoyuan, Taiwan
| | - Jiun-Ren Lin
- Chang Gung Medical Education Research Center, Taoyuan, Taiwan
| | - Lynn V. Monrouxe
- Faculty of Medicine and Health, The University of Sydney, Sydney, NSW, Australia
- *Correspondence: Lynn V. Monrouxe
12
Hamilton SJ, Briggs L, Peterson EE, Slattery M, O'Donovan A. Supporting conscious competency: Validation of the Generic Supervision Assessment Tool (GSAT). Psychol Psychother 2022; 95:113-136. [PMID: 34708921 DOI: 10.1111/papt.12369] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/05/2021] [Revised: 09/12/2021] [Indexed: 12/01/2022]
Abstract
OBJECTIVES Clinical supervision is essential for ensuring effective service delivery. International imperatives to demonstrate professional competence have increased attention on the role of supervision in enhancing client outcomes. Although supervisor competency tools are recognised as important components in effective supervision, there remains a shortage of tools that are evidence-based, applicable across workforces, and freely accessible. DESIGN An expert multidisciplinary group developed the Generic Supervision Assessment Tool (GSAT) to assess supervisor competencies across a range of professions. Initially, the GSAT consisted of 32 items responded to by either a supervisor (GSAT-SR) or supervisee (GSAT-SE). The current study, using surveys, employed a cross-sectional design to test the reliability and construct validity of the GSAT. METHODS The study consisted of two phases and included 12 professional groups across Australasia. In 2018, exploratory factor analysis (EFA) was undertaken with survey data from 479 supervisors and 447 supervisees. In 2019, survey data from 182 supervisors and 186 supervisees were used to conduct confirmatory factor analysis (CFA). The results were used to refine and validate the GSAT. RESULTS The final GSAT-SR has four factors with 26 competency items. The final GSAT-SE has two factors with 21 competency items. The EFA and CFA confirmed that the GSAT-SR and the GSAT-SE are psychometrically valid tools that supervisors and supervisees can utilise to assess competencies. CONCLUSION As a non-discipline-specific supervision tool, the GSAT is a validated, freely available tool for benchmarking the competencies of clinical supervisors across professions, potentially optimising supervisory evaluation processes and strengthening supervision effectiveness. PRACTITIONER POINTS Supervisor competency tools are recognised as important components of safe and effective supervision provision, yet there is a dearth of valid, reliable, and effective measures. The Generic Supervision Assessment Tool versions (GSAT-SR and GSAT-SE) are unique, psychometrically valid, and reliable measures of supervisor competence. The GSAT-SR and the GSAT-SE can enhance the translation of evidence-based supervision competency skills into regular practice. Validated with a broad cross-section of professionals in diverse practice settings, the GSAT provides a comprehensive conceptualisation of supervisor competence.
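The two-phase EFA-then-CFA workflow above is a common scale-refinement pattern. A minimal, hypothetical EFA sketch with scikit-learn, using random data standing in for the 32-item supervisor survey (the rotation argument assumes scikit-learn 0.24 or later):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical survey matrix: 479 supervisors x 32 Likert-type items, as in the EFA phase.
rng = np.random.default_rng(2)
X = rng.normal(size=(479, 32))

# Exploratory factor analysis with a 4-factor solution and varimax rotation.
efa = FactorAnalysis(n_components=4, rotation="varimax", random_state=0)
efa.fit(X)

# Item loadings (items x factors); items loading strongly on a factor are typically retained.
loadings = efa.components_.T
print(np.round(loadings[:5], 2))
```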
Affiliation(s)
- Sarah J Hamilton
- Queensland Health, Addiction and Mental Health Services, Metro South Health, Upper Mount Gravatt, Queensland, Australia; School of Health Sciences and Social Work, Griffith University, Logan, Queensland, Australia
| | - Lynne Briggs
- School of Health Sciences and Social Work, Griffith University, Logan, Queensland, Australia; Menzies Health Institute Queensland, Griffith University, Brisbane, Queensland, Australia
| | | | - Maddy Slattery
- School of Health Sciences and Social Work, Griffith University, Logan, Queensland, Australia; Menzies Health Institute Queensland, Griffith University, Brisbane, Queensland, Australia
| | - Analise O'Donovan
- Health Group, Griffith University, Gold Coast, Queensland, Australia
13
Kikukawa M, Stalmeijer R, Matsuguchi T, Oike M, Sei E, Schuwirth LWT, Scherpbier AJJA. How culture affects validity: understanding Japanese residents' sense-making of evaluating clinical teachers. BMJ Open 2021; 11:e047602. [PMID: 34408039 PMCID: PMC8375773 DOI: 10.1136/bmjopen-2020-047602] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/29/2022] Open
Abstract
OBJECTIVES Traditionally, evaluation is considered a measurement process that can be performed independently of the cultural context. However, more recently the importance of considering raters' sense-making, that is, the process by which raters assign meaning to their collective experiences, is being recognised. Thus far, the majority of the discussion on this topic has originated from Western perspectives. Little is known about the potential influence of an Asian culture on raters' sense-making. This study explored residents' sense-making associated with evaluating their clinical teachers within an Asian setting to better understand contextual dependency of validity. DESIGN A qualitative study using constructivist grounded theory. SETTING The Japanese Ministry of Health, Labour and Welfare has implemented a system to monitor the quality of clinical teaching within its 2-year postgraduate training programme. An evaluation instrument was developed specifically for the Japanese setting through which residents can evaluate their clinical teachers. PARTICIPANTS 30 residents from 10 Japanese teaching hospitals with experience in evaluating their clinical teachers were sampled purposively and theoretically. METHODS We conducted in-depth semistructured individual interviews. Sensitising concepts derived from Confucianism and principles of response process informed open, axial and selective coding. RESULTS Two themes and four subthemes were constructed. Japanese residents emphasised the awareness of their relationship with their clinical teachers (1). This awareness was fuelled by their sense of hierarchy (1a) and being part of the collective society (1b). Residents described how the meaning of evaluation (2) was coloured by their perceived role as senior (2a) and their experienced responsibility for future generations (2b). CONCLUSIONS Japanese residents' sense-making while evaluating their clinical teachers appears to be situated and affected by Japanese cultural values. These findings contribute to a better understanding of a culture's influence on residents' sense-making of evaluation instruments and the validity argument of evaluation.
Affiliation(s)
- Makoto Kikukawa
- Department of Medical Education, Kyushu University, Fukuoka, Japan
| | - Renée Stalmeijer
- Department of Educational Development and Research, Maastricht University Faculty of Health, Medicine and Life Sciences, Maastricht, The Netherlands
| | - Takahiro Matsuguchi
- Department of Gastroenterology, Kitakyushu Municipal Medical Center, Fukuoka, Japan
| | - Miyako Oike
- Department of Nursing, Fukuoka International University of Health and Welfare, Fukuoka, Japan
| | - Emura Sei
- Saga Medical Career Support Center, Saga University Hospital, Saga, Japan
| | - Lambert W T Schuwirth
- Department of Educational Development and Research, Maastricht University Faculty of Health, Medicine and Life Sciences, Maastricht, The Netherlands
- Prideaux Centre for Research in Health Professions Education, Flinders University, Adelaide, South Australia, Australia
| | - Albert J J A Scherpbier
- Institute for Medical Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
14
Esteves A, McConnell M, Ferretti E, Garber A, Fung-Kee-Fung K. "When in Doubt, Ask the Patient": A Quantitative, Patient-Oriented Approach to Formative Assessment of CanMEDS Roles. MEDEDPORTAL : THE JOURNAL OF TEACHING AND LEARNING RESOURCES 2021; 17:11169. [PMID: 34368437 PMCID: PMC8292435 DOI: 10.15766/mep_2374-8265.11169] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/16/2020] [Accepted: 05/07/2021] [Indexed: 06/13/2023]
Abstract
INTRODUCTION Since the introduction of competency-based frameworks into postgraduate medical curricula, educators have struggled to implement robust assessment tools that document the progression of necessary skills. The global movement towards competency-based medical education demands validated assessment tools. Our objective was to provide validity evidence for the Ottawa CanMEDS Competency Assessment Tool (OCCAT), designed to assess clinical performance in the communicator, professional, and health advocate CanMEDS roles. METHODS We developed the OCCAT, a 29-item questionnaire informed by specialty-specific Entrustable Professional Activities and consultation with stakeholders, including patients. Our sample included nine neonatal-perinatal medicine and maternal fetal medicine fellows rotating through antenatal high-risk clinics at the Ottawa Hospital. Following 70 unique encounters, the OCCAT was completed by patients and learners. Generalizability theory was used to determine overall reliability of scores. Differences in self and patient ratings were assessed using analyses of variance. RESULTS Generalizability analysis demonstrated that both questionnaires produced reliable scores (G-coefficient > 0.9). Self-scores were significantly lower than patient scores across all competencies, F(1, 6) = 13.9, p = .007. Variability analysis demonstrated that trainee scores varied across all competencies, suggesting both groups were able to recognize competencies as distinct and discriminate favorable behaviors belonging to each. DISCUSSION Our findings lend support to the movement to integrate self-assessment and patient feedback in formal evaluations for the purpose of enriched learner experiences and improved patient outcomes. We anticipate that the OCCAT will facilitate bridging to competency-based medical education.
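Generalizability analysis of the kind reported above estimates how much score variance is attributable to the object of measurement versus error. A simplified, single-facet sketch (persons crossed with items, one rating per cell) with hypothetical ratings; the study's actual design likely involved additional facets such as rater type:

```python
import numpy as np

def g_coefficient(scores: np.ndarray) -> float:
    """Relative G-coefficient for a fully crossed persons x items design (one rating per cell)."""
    n_p, n_i = scores.shape
    grand = scores.mean()
    ss_p = n_i * ((scores.mean(axis=1) - grand) ** 2).sum()
    ss_i = n_p * ((scores.mean(axis=0) - grand) ** 2).sum()
    ss_res = ((scores - grand) ** 2).sum() - ss_p - ss_i
    ms_p = ss_p / (n_p - 1)
    ms_res = ss_res / ((n_p - 1) * (n_i - 1))
    var_p = max((ms_p - ms_res) / n_i, 0.0)   # person (true-score) variance component
    return var_p / (var_p + ms_res / n_i)     # relative error uses the residual component only

# Hypothetical ratings: 70 encounters rated on 29 items (5-point scale).
rng = np.random.default_rng(3)
true_score = rng.normal(4, 0.5, size=(70, 1))
ratings = np.clip(true_score + rng.normal(0, 0.3, size=(70, 29)), 1, 5)
print(f"G-coefficient: {g_coefficient(ratings):.2f}")
```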
Affiliation(s)
- Ashley Esteves
- Senior Medical Student, University of Ottawa Faculty of Medicine
| | - Meghan McConnell
- Associate Professor, Department of Innovation in Medical Education and Department of Anesthesiology and Pain Medicine, University of Ottawa Faculty of Medicine
| | - Emanuela Ferretti
- Neonatologist and Associate Professor, Division of Neonatology, Department of Pediatrics, Children's Hospital of Eastern Ontario and University of Ottawa Faculty of Medicine
| | - Adam Garber
- Associate Program Director and Associate Professor, Department of Obstetrics and Gynecology, University of Ottawa Faculty of Medicine
| | - Karen Fung-Kee-Fung
- Professor, Division of Maternal Fetal Medicine, Department of Obstetrics and Gynecology, University of Ottawa Faculty of Medicine
15
Sadka N, Lee V, Ryan A. Purpose, Pleasure, Pace and Contrasting Perspectives: Teaching and Learning in the Emergency Department. AEM EDUCATION AND TRAINING 2021; 5:e10468. [PMID: 33796807 PMCID: PMC7995923 DOI: 10.1002/aet2.10468] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/12/2019] [Revised: 05/02/2020] [Accepted: 05/04/2020] [Indexed: 06/12/2023]
Abstract
OBJECTIVES Teaching and learning in the clinical setting are vital for the training and development of emergency physicians. Increasing service provision and time pressures in the emergency department (ED) have led to junior trainees' perceptions of a lack of teaching and a lack of support during clinical shifts. We sought to explore the perceptions of learners and supervisors in our ED regarding teaching within this diverse and challenging context. METHODS Nine ED physicians and eight ED trainees were interviewed to explore perceptions of teaching in the ED. Clinical teaching was described as "on-the-floor" teaching during work shifts. We used a validated clinical teaching assessment instrument to help pilot and develop some of our interview questions, and data were analyzed using qualitative thematic analysis. RESULTS We identified three major themes in our study: 1) the strong sense of purpose and the pleasure gained through teaching and learning interactions, despite both groups being unsure of each other's engagement and enthusiasm; 2) contrasting perspectives on teaching, with registrars holding a traditional knowledge-transmission view, yet a shared perception of ED consultants as the teachers; and 3) the effect of patient acuity and volume, which both facilitated learning until a critical point of busyness beyond which service provision pressures and staffing limitations were perceived to negatively impact learning. CONCLUSIONS The ED is a complex and fluid working and learning environment. We need to develop a shared understanding of teaching and learning opportunities in the ED, one that helps all stakeholders move beyond learning as knowledge acquisition and see the potential for learning from teachers from a multitude of professional backgrounds.
Affiliation(s)
- Nancy Sadka
- Emergency Medicine Training, Austin Health, Heidelberg, Victoria, Australia
| | - Victor Lee
- Emergency Medicine Training, Austin Health, Heidelberg, Victoria, Australia
| | - Anna Ryan
- Melbourne Medical School, University of Melbourne, Melbourne, Victoria, Australia
16
Brand PLP, Rosingh HJ, Meijssen MAC, Nijholt IM, Dünnwald S, Prins J, Schönrock-Adema J. Reliability of residents' assessments of their postgraduate medical education learning environment: an observational study. BMC MEDICAL EDUCATION 2019; 19:450. [PMID: 31796005 PMCID: PMC6892206 DOI: 10.1186/s12909-019-1874-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/23/2019] [Accepted: 11/15/2019] [Indexed: 06/10/2023]
Abstract
BACKGROUND Even in anonymous evaluations of a postgraduate medical education (PGME) program, residents may be reluctant to provide an honest evaluation of their PGME program, because they fear embarrassment or repercussions from their supervisors if their anonymity as a respondent is endangered. This study was set up to test the hypothesis that current residents in a PGME program provide more positive evaluations of their PGME program than residents having completed it. We therefore compared PGME learning environment evaluations of current residents in the program to leaving residents having completed it. METHODS This observational study used data gathered routinely in the quality cycle of PGME programs at two Dutch teaching hospitals to test our hypothesis. At both hospitals, all current PGME residents are requested to complete the Scan of Postgraduate Education Environment Domains (SPEED) annually. Residents leaving the hospital after completion of the PGME program are also asked to complete the SPEED after an exit interview with the hospital's independent residency coordinator. All SPEED evaluations are collected and analysed anonymously. We compared the residents' grades (on a continuous scale ranging from 0 (poor) to 10 (excellent)) on the three SPEED domains (content, atmosphere, and organization of the program) and their mean (overall department grade) between current and leaving residents. RESULTS Mean (SD) overall SPEED department grades were 8.00 (0.52) for 287 current residents in 39 PGME programs and 8.07 (0.48) for 170 leaving residents in 39 programs. Neither the overall SPEED department grades (t test, p = 0.53, 95% CI for difference - 0.16 to 0.31) nor the department SPEED domain grades (MANOVA, F(3, 62) = 0.79, p = 0.51) were significantly different between current and leaving residents. CONCLUSIONS Residents leaving the program did not provide more critical evaluations of their PGME learning environment than current residents in the program. This suggests that current residents' evaluations of their postgraduate learning environment were not affected by social desirability bias or fear of repercussions from faculty.
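The comparison above boils down to an independent-samples t-test on overall SPEED grades (with a MANOVA for the three domain grades). A quick sketch with SciPy, simulating grades from the reported means and standard deviations rather than using the actual data:

```python
import numpy as np
from scipy import stats

# Hypothetical overall SPEED grades (0-10) for current vs. leaving residents,
# simulated from the means and SDs reported in the abstract.
rng = np.random.default_rng(4)
current = rng.normal(8.00, 0.52, size=287)
leaving = rng.normal(8.07, 0.48, size=170)

# Independent-samples t-test on the overall department grade.
t_stat, p_value = stats.ttest_ind(current, leaving)
print(f"current mean = {current.mean():.2f}, leaving mean = {leaving.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```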
Affiliation(s)
- Paul L P Brand
- Isala Academy, Department of Medical Education and Faculty Development, Isala Hospital, Zwolle, the Netherlands.
- Center for Education Development and Research in Health Professions, University of Groningen and University Medical Centre, Groningen, the Netherlands.
| | - H Jeroen Rosingh
- Department of Ear, Nose and Throat Surgery, Isala Hospital, Zwolle, the Netherlands
| | | | - Ingrid M Nijholt
- Isala Academy, Department of Medical Education and Faculty Development, Isala Hospital, Zwolle, the Netherlands
| | - Saskia Dünnwald
- MCL Academy, Medical Center Leeuwarden, Leeuwarden, the Netherlands
| | - Jelle Prins
- Center for Education Development and Research in Health Professions, University of Groningen and University Medical Centre, Groningen, the Netherlands
- MCL Academy, Medical Center Leeuwarden, Leeuwarden, the Netherlands
| | - Johanna Schönrock-Adema
- Center for Education Development and Research in Health Professions, University of Groningen and University Medical Centre, Groningen, the Netherlands
17
van der Meulen MW, Smirnova A, Heeneman S, Oude Egbrink MGA, van der Vleuten CPM, Lombarts KMJMH. Exploring Validity Evidence Associated With Questionnaire-Based Tools for Assessing the Professional Performance of Physicians: A Systematic Review. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2019; 94:1384-1397. [PMID: 31460937 DOI: 10.1097/acm.0000000000002767] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
PURPOSE To collect and examine-using an argument-based validity approach-validity evidence of questionnaire-based tools used to assess physicians' clinical, teaching, and research performance. METHOD In October 2016, the authors conducted a systematic search of the literature seeking articles about questionnaire-based tools for assessing physicians' professional performance published from inception to October 2016. They included studies reporting on the validity evidence of tools used to assess physicians' clinical, teaching, and research performance. Using Kane's validity framework, they conducted data extraction based on four inferences in the validity argument: scoring, generalization, extrapolation, and implications. RESULTS They included 46 articles on 15 tools assessing clinical performance and 72 articles on 38 tools assessing teaching performance. They found no studies on research performance tools. Only 12 of the tools (23%) gathered evidence on all four components of Kane's validity argument. Validity evidence focused mostly on generalization and extrapolation inferences. Scoring evidence showed mixed results. Evidence on implications was generally missing. CONCLUSIONS Based on the argument-based approach to validity, not all questionnaire-based tools seem to support their intended use. Evidence concerning implications of questionnaire-based tools is mostly lacking, thus weakening the argument to use these tools for formative and, especially, for summative assessments of physicians' clinical and teaching performance. More research on implications is needed to strengthen the argument and to provide support for decisions based on these tools, particularly for high-stakes, summative decisions. To meaningfully assess academic physicians in their tripartite role as doctor, teacher, and researcher, additional assessment tools are needed.
Affiliation(s)
- Mirja W van der Meulen
- M.W. van der Meulen is PhD candidate, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands, and member, Professional Performance Research Group, Medical Psychology, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands; ORCID: https://orcid.org/0000-0003-3636-5469. A. Smirnova is PhD graduate and researcher, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands, and member, Professional Performance Research Group, Medical Psychology, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands; ORCID: https://orcid.org/0000-0003-4491-3007. S. Heeneman is professor, Department of Pathology, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands; ORCID: https://orcid.org/0000-0002-6103-8075. M.G.A. oude Egbrink is professor, Department of Physiology, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands; ORCID: https://orcid.org/0000-0002-5530-6598. C.P.M. van der Vleuten is professor, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands; ORCID: https://orcid.org/0000-0001-6802-3119. K.M.J.M.H. Lombarts is professor, Professional Performance Research Group, Medical Psychology, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands; ORCID: https://orcid.org/0000-0001-6167-0620
18
Eldridge-Smith ED, Loew M, Stepleman LM. The adaptation and validation of a stigma measure for individuals with multiple sclerosis. Disabil Rehabil 2019; 43:262-269. [PMID: 31130021 DOI: 10.1080/09638288.2019.1617793] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Abstract
Purpose: Stigma negatively impacts quality of life for individuals with multiple sclerosis. The availability of instruments to assess levels of stigma is crucial for monitoring and targeted intervention. The study aims to adapt the Reece Stigma Scale for use with this specific population and examine its reliability and validity. Methods: The scale was administered to the 137 participants included in a larger study on identity and multiple sclerosis. Validity was evaluated utilizing the Downing model, as well as by assessing potentially related constructs, including adherence, depression, anxiety, quality of life, self-efficacy, and post-traumatic growth. Results: Principal component analysis revealed a one-factor solution with excellent internal consistency. Additional construct support offered evidence that higher levels of stigma are related to lower adherence and self-management efficacy, higher levels of anxiety and depressive symptoms, as well as more dissatisfaction with quality of life. Conclusions: This study provides preliminary support for an adapted version of the Reece Stigma Scale, specific to the multiple sclerosis population. The validation data suggest strong psychometric properties. Our findings underscore the clinical importance of measuring and addressing stigma among these patients, with the potential to improve medical (i.e., adherence), psychological (i.e., depression and anxiety), and quality of life outcomes. Implications for rehabilitation: Understanding stigma-related experiences is crucial to enhance psychosocial factors related to multiple sclerosis. Stigma-related experiences also impact disease treatment outcomes for individuals with multiple sclerosis. The Reece Stigma Scale is a valid and reliable measure of felt stigma originally created for use in HIV populations; this study adapted and validated the scale among individuals with multiple sclerosis. Clinicians and researchers working within the rehabilitation and treatment of multiple sclerosis may benefit from using the adapted Reece Stigma Scale to measure and address stigma experiences.
Affiliation(s)
| | - Megan Loew
- Department of Psychiatry and Health Behavior, Augusta University - Medical College of Georgia, Augusta, GA, USA
| | - Lara M Stepleman
- Department of Psychiatry and Health Behavior, Augusta University - Medical College of Georgia, Augusta, GA, USA
19
Cherney AR, Smith AB, Worrilow CC, Weaver KR, Yenser D, Macfarlan JE, Burket GA, Koons AL, Melder RJ, Greenberg MR, Kane BG. Emergency Medicine Resident Self-assessment of Clinical Teaching Compared to Student Evaluation Using a Previously Validated Rubric. Clin Ther 2018; 40:1375-1383. [PMID: 30064897 DOI: 10.1016/j.clinthera.2018.06.013] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2018] [Revised: 06/12/2018] [Accepted: 06/18/2018] [Indexed: 11/26/2022]
Abstract
PURPOSE The quality of clinical teaching in the emergency department from the students' perspective has not been previously described in the literature. Our goals were to assess senior residents' teaching ability from the resident/teacher and student/learner viewpoints for any correlation, and to explore any gender association. The secondary goal was to evaluate the possible impact of gender on the resident/student dyad, an interaction that has previously been studied only in the faculty/student pairing. METHODS After approval by an institutional review board, a 1-year, grant-funded, single-site, prospective study was implemented at a regional medical campus that sponsors a 4-year dually approved emergency medicine (EM) residency. The residency hosts both medical school students (MSs) and physician assistant students (PAs). Each student and senior resident working concurrently completed a previously validated ER Scale, which measured residents' teaching performance in 4 categories: Didactic, Clinical, Approachable, and Helpful. Students evaluated residents' teaching, while residents self-assessed their performance. The participants' demographic characteristics gathered included prior knowledge of or exposure to clinical teaching models. Gender was self-reported by participants. The analysis accounted for multiple observations by comparing participants' mean scores. FINDINGS Ninety-nine subjects were enrolled; none withdrew consent. Thirty-seven residents (11 women) and 62 students (39 women) from 25 medical schools and 6 PA schools were enrolled, completing 517 teaching assessments. Students evaluated residents more favorably in all ER Scale categories than did residents on self-assessments (P < 0.0001). This difference was significant in all subgroup comparisons (types of school versus postgraduate years [PGYs]). Residents' evaluations by type of student (MS vs PA) did not show a significant difference. PGY 3 residents assessed themselves higher in all categories than did PGY 4 residents, with Approachability reaching significance (P = 0.0105). Male residents self-assessed their teaching consistently higher than did female residents, significantly so on Clinical (P = 0.0300). Students' evaluations of the residents' teaching skills by residents' gender did not reveal gender differences. IMPLICATIONS MS and PA students evaluated teaching by EM senior residents statistically significantly higher than did EM residents on self-evaluation when using the ER Scale. Students did not evaluate residents' teaching with any difference by gender, although male residents routinely self-assessed their teaching abilities more positively than did female residents. These findings suggest that, if residency programs utilize resident self-evaluation for programmatic evaluation, the gender of the resident may impact self-scoring. This cohort may inform future study of resident teaching in the emergency department, such as the design of future resident-as-teacher curricula.
Affiliation(s)
- Alan R Cherney
- Department of Emergency and Hospital Medicine, Lehigh Valley Health Network
| | - Amy B Smith
- Department of Education, Lehigh Valley Health Network, Allentown, Pennsylvania; Faculty at University of South Florida, Morsani College of Medicine, Tampa, Florida
| | - Charles C Worrilow
- Department of Emergency and Hospital Medicine, Lehigh Valley Health Network; Faculty at University of South Florida, Morsani College of Medicine, Tampa, Florida
| | - Kevin R Weaver
- Department of Emergency and Hospital Medicine, Lehigh Valley Health Network; Faculty at University of South Florida, Morsani College of Medicine, Tampa, Florida
| | - Dawn Yenser
- Department of Emergency and Hospital Medicine, Lehigh Valley Health Network
| | - Jennifer E Macfarlan
- Network Office of Research and Innovation, Lehigh Valley Health Network, Allentown, Pennsylvania
| | - Glenn A Burket
- Department of Emergency and Hospital Medicine, Lehigh Valley Health Network
| | - Andrew L Koons
- Department of Emergency and Hospital Medicine, Lehigh Valley Health Network
| | - Raymond J Melder
- Department of Emergency and Hospital Medicine, Lehigh Valley Health Network
| | - Marna R Greenberg
- Department of Emergency and Hospital Medicine, Lehigh Valley Health Network; Faculty at University of South Florida, Morsani College of Medicine, Tampa, Florida
| | - Bryan G Kane
- Department of Emergency and Hospital Medicine, Lehigh Valley Health Network; Faculty at University of South Florida, Morsani College of Medicine, Tampa, Florida.
20
Vaughan B. Exploring the measurement properties of the osteopathy clinical teaching questionnaire using Rasch analysis. Chiropr Man Therap 2018; 26:13. [PMID: 29744031 PMCID: PMC5932865 DOI: 10.1186/s12998-018-0182-2] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2017] [Accepted: 03/23/2018] [Indexed: 11/10/2022] Open
Abstract
Background Clinical teaching evaluations are common in health profession education programs to ensure students are receiving a quality clinical education experience. Questionnaires students use to evaluate their clinical teachers have been developed in professions such as medicine and nursing. The development of a questionnaire that is specifically for the osteopathy on-campus, student-led clinic environment is warranted. Previous work developed the 30-item Osteopathy Clinical Teaching Questionnaire. The current study utilised Rasch analysis to investigate the construct validity of the Osteopathy Clinical Teaching Questionnaire and provide evidence for the validity argument through fit to the Rasch model. Methods Senior osteopathy students at four institutions in Australia, New Zealand and the United Kingdom rated their clinical teachers using the Osteopathy Clinical Teaching Questionnaire. Three hundred and ninety-nine valid responses were received and the data were evaluated for fit to the Rasch model. Reliability estimations (Cronbach's alpha and McDonald's omega) were also evaluated for the final model. Results The initial analysis demonstrated the data did not fit the Rasch model. Accordingly, modifications to the questionnaire were made including removing items, removing person responses, and rescoring one item. The final model contained 12 items and fit to the Rasch model was adequate. Support for unidimensionality was demonstrated through both the Principal Components Analysis/t-test, and the Cronbach's alpha and McDonald's omega reliability estimates. Analysis of the questionnaire using McDonald's omega hierarchical supported a general factor (quality of clinical teaching in osteopathy). Conclusion The evidence for unidimensionality and the presence of a general factor support the calculation of a total score for the questionnaire as a sufficient statistic. Further work is now required to investigate the reliability of the 12-item Osteopathy Clinical Teaching Questionnaire to provide evidence for the validity argument.
Affiliation(s)
- Brett Vaughan
- College of Health & Biomedicine, Victoria University, Melbourne, Australia
21
Vaižgėlienė E, Padaiga Ž, Rastenytė D, Tamelis A, Petrikonis K, Fluit C. Evaluation of clinical teaching quality in competency-based residency training in Lithuania. MEDICINA-LITHUANIA 2017; 53:339-347. [PMID: 29074340 DOI: 10.1016/j.medici.2017.08.002] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/31/2017] [Revised: 08/17/2017] [Accepted: 08/28/2017] [Indexed: 11/17/2022]
Abstract
BACKGROUND AND AIM In 2013, all residency programs at the Lithuanian University of Health Sciences were revised into a competency-based medical education (CBME) curriculum. In 2015, we implemented the validated EFFECT questionnaire together with the EFFECT-System for quality assessment of clinical teaching in residency training. The aim of this study was to investigate the influence of characteristics of the resident (year of training) and clinical teacher (gender, age, and type of academic position) on teaching quality, as well as to identify areas for teaching quality improvement. MATERIALS AND METHODS Residents from 7 different residency study programs filled out 333 EFFECT questionnaires evaluating 146 clinical teachers. We received 143 self-evaluations of clinical teachers using the same questionnaire. Items were scored on a 6-point Likert scale. Main outcome measures were residents' mean overall scores (MOS), mean subdomain scores (MSS), and clinical teachers' self-evaluation scores. Overall comparisons of MOS and MSS across study groups and subgroups were done using Student's t test and ANOVA for trend. The intraclass correlation coefficient (ICC) was calculated to determine how residents' evaluations matched the self-evaluation of each teacher. To indicate areas for quality improvement, items were analyzed by subtracting their mean score from the respective (sub)domain score. RESULTS MOS for the domains of "role modeling", "task allocation", "feedback", "teaching methodology" and "assessment" as rated by residents were significantly higher than those as rated by teachers (P<0.01). Teachers who filled out self-evaluation questionnaires were rated significantly higher by residents in the role modeling subdomains (P<0.05). Male teachers were rated significantly higher than female teachers in the (sub)domains "role modeling: CanMEDS roles and reflection", "task allocation", "planning" and "personal support" (P<0.05). Teachers aged 40 years or younger were rated higher (P<0.01). Residents' ratings differed significantly by type of teachers' academic position in almost all (sub)domains (P<0.05). No correlation was observed between a teacher's self-evaluation MOS and the MOS as rated by residents (ICC=0.055, P=0.399). The main areas for improvement were "feedback" and "assessment". CONCLUSIONS Resident evaluations of clinical teachers are influenced by teachers' age, gender, year of residency training, type of teachers' academic position and whether or not a clinical teacher performed a self-evaluation. Development of CBME should focus on the continuous evaluation of quality, educational support for clinical teachers, and the implementation of e-portfolios.
Affiliation(s)
- Eglė Vaižgėlienė
- Department of Preventive Medicine, Faculty of Public Health, Medical Academy, Lithuanian University of Health Sciences, Kaunas, Lithuania.
| | - Žilvinas Padaiga
- Department of Preventive Medicine, Faculty of Public Health, Medical Academy, Lithuanian University of Health Sciences, Kaunas, Lithuania
| | - Daiva Rastenytė
- Department of Neurology, Medical Academy, Lithuanian University of Health Sciences, Kaunas, Lithuania
| | - Algimantas Tamelis
- Department of Surgery, Medical Academy, Lithuanian University of Health Sciences, Kaunas, Lithuania
| | - Kęstutis Petrikonis
- Department of Neurology, Medical Academy, Lithuanian University of Health Sciences, Kaunas, Lithuania
22
Bossema ER, Meijs THJM, Peters JWB. Early predictors of study success in a Dutch advanced nurse practitioner education program: A retrospective cohort study. NURSE EDUCATION TODAY 2017; 57:68-73. [PMID: 28738236 DOI: 10.1016/j.nedt.2017.07.005] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/05/2017] [Revised: 06/15/2017] [Accepted: 07/10/2017] [Indexed: 06/07/2023]
Abstract
BACKGROUND Study delay and attrition are major concerns in higher education. They cost time and effort, and threaten the availability of highly qualified professionals. Knowing early what factors contribute to delay and attrition may help prevent this. OBJECTIVE The aim of this study was to examine whether student characteristics, including a literature study report grade as a proxy of cognitive abilities, predicted study success in a dual advanced nurse practitioner education program. METHODS Retrospective cohort study, including all 214 students who between September 2009 and September 2015 started the two-year program at the HAN University of Applied Sciences in Nijmegen, the Netherlands. Study success was defined as having completed the program within the envisaged period. Variables examined included: age, gender, previous education (bachelor's degree or in-service training in nursing), work setting (general health, mental health, public health, or nursing home care), and literature study report grade (from 1 to 10). A hierarchical logistic regression analysis was performed. RESULTS Most students were female (80%), had a bachelor's degree in nursing (67%), and were employed in a general healthcare setting (58%). Mean age was 40.5 years (SD 9.4). One hundred thirty-seven students (64%) had study success. Being employed in a general healthcare setting (p≤0.004) and a higher literature study report grade (p=0.001) were associated with a higher study success rate. CONCLUSION In advanced nurse practitioner education, the study success rate seems to be associated with students' cognitive abilities and field of work. It might be worthwhile to identify students 'at risk of failure' before the start of the program and offer them extra support.
Affiliation(s)
- Ercolie R Bossema
- Education program Master Advanced Nursing Practice (MANP), HAN University of Applied Sciences, PO Box 9029, 6500 JK Nijmegen, The Netherlands.
| | - Tineke H J M Meijs
- Education program Master Advanced Nursing Practice (MANP), HAN University of Applied Sciences, PO Box 9029, 6500 JK Nijmegen, The Netherlands
| | - Jeroen W B Peters
- Education program Master Advanced Nursing Practice (MANP), HAN University of Applied Sciences, PO Box 9029, 6500 JK Nijmegen, The Netherlands
23
Vaižgėlienė E, Padaiga Ž, Rastenytė D, Tamelis A, Petrikonis K, Kregždytė R, Fluit C. Validation of the EFFECT questionnaire for competence-based clinical teaching in residency training in Lithuania. MEDICINA-LITHUANIA 2017; 53:173-178. [PMID: 28596069 DOI: 10.1016/j.medici.2017.05.001] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/20/2016] [Revised: 02/06/2017] [Accepted: 05/08/2017] [Indexed: 10/19/2022]
Abstract
BACKGROUND AND AIM In 2013, all residency programs at the Lithuanian University of Health Sciences were revised into a competency-based medical education curriculum. To assess the quality of clinical teaching in residency training, we chose the EFFECT (evaluation and feedback for effective clinical teaching) questionnaire, designed and validated at the Radboud University Medical Centre in the Netherlands. The aim of this study was to validate the EFFECT questionnaire for quality assessment of clinical teaching in residency training. MATERIALS AND METHODS The research was conducted as an online survey using the questionnaire containing 58 items in 7 domains. The questionnaire was double-translated into Lithuanian. It was sent to 182 residents of 7 residency programs (anesthesiology and reanimatology, cardiology, dermatovenerology, emergency medicine, neurology, obstetrics and gynecology, physical medicine and rehabilitation). Overall, 333 questionnaires about 146 clinical teachers were filled in. To determine item characteristics and internal consistency (Cronbach's α), item and reliability analyses were performed. Furthermore, confirmatory factor analysis (CFA) was performed using maximum-likelihood estimation. RESULTS Cronbach's α within different domains ranged between 0.91 and 0.97 and was comparable with the original version of the questionnaire. Confirmatory factor analysis demonstrated satisfactory model fit, with a comparative fit index (CFI) of 0.841 and an absolute fit index (RMSEA) of 0.098. CONCLUSIONS The results suggest that the Lithuanian version of the EFFECT maintains its original validity and may serve as a valid instrument for quality assessment of clinical teaching in competency-based residency training in Lithuania.
Affiliation(s)
- Eglė Vaižgėlienė
- Department of Preventive Medicine, Faculty of Public Health, Medical Academy, Lithuanian University of Health Sciences, Kaunas, Lithuania.
| | - Žilvinas Padaiga
- Department of Preventive Medicine, Faculty of Public Health, Medical Academy, Lithuanian University of Health Sciences, Kaunas, Lithuania
| | - Daiva Rastenytė
- Department of Neurology, Medical Academy, Lithuanian University of Health Sciences, Kaunas, Lithuania
| | - Algimantas Tamelis
- Department of Surgery, Medical Academy, Lithuanian University of Health Sciences, Kaunas, Lithuania
| | - Kęstutis Petrikonis
- Department of Neurology, Medical Academy, Lithuanian University of Health Sciences, Kaunas, Lithuania
| | - Rima Kregždytė
- Department of Preventive Medicine, Faculty of Public Health, Medical Academy, Lithuanian University of Health Sciences, Kaunas, Lithuania
24
Sommer J, Lanier C, Perron NJ, Nendaz M, Clavet D, Audétat MC. A teaching skills assessment tool inspired by the Calgary-Cambridge model and the patient-centered approach. PATIENT EDUCATION AND COUNSELING 2016; 99:600-609. [PMID: 26680755 DOI: 10.1016/j.pec.2015.11.024] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/14/2015] [Revised: 11/14/2015] [Accepted: 11/24/2015] [Indexed: 06/05/2023]
Abstract
OBJECTIVE The aim of this study was to develop a descriptive tool for peer review of clinical teaching skills. Two analogies framed our research: (1) between the patient-centered and the learner-centered approach; (2) between the structures of clinical encounters (Calgary-Cambridge communication model) and teaching sessions. METHOD During the course of one year, each step of the action research was carried out in collaboration with twelve clinical teachers from an outpatient general internal medicine clinic and with three experts in medical education. The content validation consisted of a literature review, expert opinion and the participatory research process. Interrater reliability was evaluated by three clinical teachers coding thirty audiotaped standardized learner-teacher interactions. RESULTS This tool contains sixteen items covering the process and content of clinical supervisions. Descriptors define the expected teaching behaviors for three levels of competence. Interrater reliability was significant for eleven items (Kendall's coefficient p<0.05). CONCLUSION This peer assessment tool has high reliability and can be used to facilitate the acquisition of teaching skills.
Affiliation(s)
- Johanna Sommer
- Primary care unit, University of Geneva, Geneva, Switzerland.
| | - Cédric Lanier
- Primary care unit, University of Geneva, Geneva, Switzerland; Department of community medicine, primary care and emergencies, Geneva University Hospitals, Geneva, Switzerland.
| | - Noelle Junod Perron
- Department of community medicine, primary care and emergencies, Geneva University Hospitals, Geneva, Switzerland; Unit of development and research in medical education, University of Geneva, Geneva, Switzerland.
| | - Mathieu Nendaz
- Unit of development and research in medical education, University of Geneva, Geneva, Switzerland; Service of General Internal Medicine, Geneva University Hospitals, Geneva, Switzerland.
| | - Diane Clavet
- Center for health sciences education, Université de Sherbrooke, Sherbrooke, Canada.
| | - Marie-Claude Audétat
- Primary care unit, University of Geneva, Geneva, Switzerland; Family medicine and Emergency Department, Université de Montréal, Montréal, Canada.
25
Johnson CE, Keating JL, Boud DJ, Dalton M, Kiegaldie D, Hay M, McGrath B, McKenzie WA, Nair KBR, Nestel D, Palermo C, Molloy EK. Identifying educator behaviours for high quality verbal feedback in health professions education: literature review and expert refinement. BMC MEDICAL EDUCATION 2016; 16:96. [PMID: 27000623 PMCID: PMC4802720 DOI: 10.1186/s12909-016-0613-5] [Citation(s) in RCA: 54] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/07/2015] [Accepted: 03/09/2016] [Indexed: 05/09/2023]
Abstract
BACKGROUND Health professions education is characterised by work-based learning and relies on effective verbal feedback. However the literature reports problems in feedback practice, including lack of both learner engagement and explicit strategies for improving performance. It is not clear what constitutes high quality, learner-centred feedback or how educators can promote it. We hoped to enhance feedback in clinical practice by distinguishing the elements of an educator's role in feedback considered to influence learner outcomes, then develop descriptions of observable educator behaviours that exemplify them. METHODS An extensive literature review was conducted to identify i) information substantiating specific components of an educator's role in feedback asserted to have an important influence on learner outcomes and ii) verbal feedback instruments in health professions education, that may describe important educator activities in effective feedback. This information was used to construct a list of elements thought to be important in effective feedback. Based on these elements, descriptions of observable educator behaviours that represent effective feedback were developed and refined during three rounds of a Delphi process and a face-to-face meeting with experts across the health professions and education. RESULTS The review identified more than 170 relevant articles (involving health professions, education, psychology and business literature) and ten verbal feedback instruments in health professions education (plus modified versions). Eighteen distinct elements of an educator's role in effective feedback were delineated. Twenty five descriptions of educator behaviours that align with the elements were ratified by the expert panel. CONCLUSIONS This research clarifies the distinct elements of an educator's role in feedback considered to enhance learner outcomes. The corresponding set of observable educator behaviours aim to describe how an educator could engage, motivate and enable a learner to improve. This creates the foundation for developing a method to systematically evaluate the impact of verbal feedback on learner performance.
Affiliation(s)
- Christina E. Johnson
- Health Professions Education and Educational Research (HealthPEER), Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Australia
- Monash Doctors Education, Monash Health, Monash Medical Centre, Clayton, Melbourne, Australia
| | - Jennifer L. Keating
- Department of Physiotherapy, Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Australia
| | - David J. Boud
- Centre for Research on Assessment and Digital Learning, Deakin University, Geelong, Australia
- Faculty of Arts and Social Sciences, University of Technology Sydney, Ultimo, Australia
- Institute of Work-Based Learning, Middlesex University, London, UK
| | - Megan Dalton
- School of Human, Health and Social Science, Central Queensland University, Rockhampton, Australia
| | - Debra Kiegaldie
- Faculty of Health Science, Youth and Community Studies, Holmesglen Institute and Healthscope Hospitals, Holmesglen, Melbourne, Australia
| | - Margaret Hay
- Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Australia
| | - Barry McGrath
- Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Australia
| | | | - Kichu Balakrishnan R. Nair
- School of Medicine and Public Health, Faculty of Health and Medicine, University of Newcastle, Newcastle, Australia
| | - Debra Nestel
- Health Professions Education and Educational Research (HealthPEER), Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Australia
| | - Claire Palermo
- Department of Nutrition and Dietetics, Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Australia
| | - Elizabeth K. Molloy
- Health Professions Education and Educational Research (HealthPEER), Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Australia
26
Schwartz CE, Ayandeh A, Finkelstein JA. When patients and surgeons disagree about surgical outcome: investigating patient factors and chart note communication. Health Qual Life Outcomes 2015; 13:161. [PMID: 26416031 PMCID: PMC4587581 DOI: 10.1186/s12955-015-0343-0] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2015] [Accepted: 09/10/2015] [Indexed: 11/10/2022] Open
Abstract
OBJECTIVE Effective physician-patient communication is a critical component of clinical practice and is necessary to achieve optimal patient outcomes. We aimed to investigate indirect effects of physician-patient communication by examining the relationship between a physician-patient mismatch in perceived outcomes and the content of the clinical note in the medical record. We compared the records of patients whose subjective assessment of their surgical outcome agreed or disagreed with the surgeon's perception of that outcome (Subjective Disagreement). METHODS This study included 172 spine surgery patients at a teaching hospital. Patient-reported outcomes included the Oswestry Disability Index; the Short-Form 36; and Visual Analogue Scale items for leg and back pain. We content-analyzed the clinical note in the medical record, and used logistic regression to evaluate predictors of Subjective Disagreement (n = 41 disagreed vs. 131 agreed). RESULTS Patient and surgeon agreed in 76% of cases and disagreed in 24% of cases. Patients who assessed their outcome as worse than their surgeons did tended to be less educated and involved in litigation. They also tended to report worsened mental health and leg pain. Content analysis revealed group differences in surgeon communication patterns in the chart notes related to how symptom change was emphasized, how follow-up was described, and a specific word reference. Specifically, disagreement was predicted by using "much" to emphasize the findings and noting long-term prognosis. Agreement was predicted by use of positive emphasis terms, having an "as-needed" follow-up plan, and using "happy" in the chart note. CONCLUSION Measuring outcomes of surgery is inherently based on patient perception. In surgeon-patient perspective mismatches, patient factors may serve as barriers to improvement. Worsened patient-reported mental health may be an independent factor that colors the patient's general perceptions. This aspect of treatment may be missed by the spine surgeon. Chart note communication styles reflect the subjective disagreement. Investigating and/or treating mental health deterioration may be valuable in resolving this mismatch and improving overall outcome.
Affiliation(s)
- Carolyn E Schwartz
- DeltaQuest Foundation, Inc., 31 Mitchell Road, Concord, MA, 01742, USA.
- Departments of Medicine and Orthopaedic Surgery, Tufts University Medical School, Boston, MA, USA.
| | - Armon Ayandeh
- DeltaQuest Foundation, Inc., 31 Mitchell Road, Concord, MA, 01742, USA
| | - Joel A Finkelstein
- Division of Orthopaedics, Sunnybrook Health Sciences Center and the University of Toronto, 2075 Bayview Avenue, Room MG361, Toronto, ON, Canada
27
Schönrock-Adema J, Visscher M, Raat ANJ, Brand PLP. Development and Validation of the Scan of Postgraduate Educational Environment Domains (SPEED): A Brief Instrument to Assess the Educational Environment in Postgraduate Medical Education. PLoS One 2015; 10:e0137872. [PMID: 26413836 PMCID: PMC4587553 DOI: 10.1371/journal.pone.0137872] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2015] [Accepted: 08/22/2015] [Indexed: 11/27/2022] Open
Abstract
Introduction Current instruments to evaluate the postgraduate medical educational environment lack theoretical frameworks and are relatively long, which may reduce response rates. We aimed to develop and validate a brief instrument that, based on a solid theoretical framework for educational environments, solicits resident feedback to screen the quality of the postgraduate medical educational environment. Methods Stepwise, we developed a screening instrument, using existing instruments to assess educational environment quality and adopting a theoretical framework that defines three educational environment domains: content, atmosphere and organization. First, items from relevant existing instruments were collected and, after deleting duplicates and items not specifically addressing educational environment, grouped into the three domains. In a Delphi procedure, the item list was reduced to a set of items considered most important and comprehensively covering the three domains. These items were triangulated against the results of semi-structured interviews with 26 residents from three teaching hospitals to achieve face validity. This draft version of the Scan of Postgraduate Educational Environment Domains (SPEED) was administered to residents in a general and university hospital and further reduced and validated based on the data collected. Results Two hundred twenty-three residents completed the 43-item draft SPEED. We used half of the dataset for item reduction, and the other half for validating the resulting SPEED (15 items, 5 per domain). Internal consistencies were high. Correlations between domain scores in the draft and brief versions of SPEED were high (>0.85) and highly significant (p<0.001). ≥80% of the domain score variance of the draft instrument was explained by the items representing the domains in the final SPEED. Conclusions The SPEED comprehensively covers the three educational environment domains defined in the theoretical framework. Because of its validity and brevity, the SPEED is promising as a useful and easily applicable tool to regularly screen educational environment quality in postgraduate medical education.
Affiliation(s)
- Johanna Schönrock-Adema
- Center for Educational Development and Research in health sciences, Institute for Medical Education, University of Groningen and University Medical Center Groningen, Groningen, The Netherlands
- * E-mail: (JSA); (PLPB)
| | - Maartje Visscher
- UMCG Postgraduate School of Medicine, University of Groningen and University Medical Center Groningen, Groningen, The Netherlands
| | - A. N. Janet Raat
- Center for Educational Development and Research in health sciences, Institute for Medical Education, University of Groningen and University Medical Center Groningen, Groningen, The Netherlands
| | - Paul L. P. Brand
- Center for Educational Development and Research in health sciences, Institute for Medical Education, University of Groningen and University Medical Center Groningen, Groningen, The Netherlands
- UMCG Postgraduate School of Medicine, University of Groningen and University Medical Center Groningen, Groningen, The Netherlands
- Princess Amalia Children’s Centre, Isala Hospital, Zwolle, The Netherlands
- * E-mail: (JSA); (PLPB)
28
Fluit CRMG, Feskens R, Bolhuis S, Grol R, Wensing M, Laan R. Understanding resident ratings of teaching in the workplace: a multi-centre study. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2015; 20:691-707. [PMID: 25314933 DOI: 10.1007/s10459-014-9559-8] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/19/2014] [Accepted: 10/03/2014] [Indexed: 06/04/2023]
Abstract
Providing clinical teachers with feedback about their teaching skills is a powerful tool to improve teaching. Evaluations are mostly based on questionnaires completed by residents. We investigated to what extent characteristics of residents, clinical teachers, and the clinical environment influenced these evaluations, and the relation between residents' scores and their teachers' self-scores. The evaluation and feedback for effective clinical teaching (EFFECT) questionnaire was used to (self)assess clinical teachers from 12 disciplines (15 departments, four hospitals). Items were scored on a five-point Likert scale. Main outcome measures were residents' mean overall scores (MOSs), mean specific scale scores (MSSs), and clinical teachers' self-evaluation scores. Multilevel regression analysis was used to identify predictors. Residents' scores and self-evaluations were compared. Residents filled in 1,013 questionnaires, evaluating 230 clinical teachers. We received 160 self-evaluations. 'Planning Teaching' and 'Personal Support' (4.52, SD .61 and 4.53, SD .59) were rated highest; 'Feedback Content' (CanMEDS related) (4.12, SD .71) was rated lowest. Teachers in affiliated hospitals showed the highest MOSs and MSSs. Medical specialty did not influence the MOS. Female clinical teachers were rated higher for most MSSs, reaching statistical significance. Residents in years 1-2 were the most positive about their teachers. Residents' gender did not affect the mean scores, except for role modeling. At group level, self-evaluations and residents' ratings correlated highly (Kendall's τ 0.859). Resident evaluations of clinical teachers are influenced by year of residency training, type of hospital, and teachers' gender, and to a lesser extent by residents' gender. Clinical teachers and residents agree on the strong and weak points of clinical teaching.
Affiliation(s)
- Cornelia R M G Fluit
- Academic Educational Institute, Radboud University Medical Center Nijmegen, 306 IWOO, PO Box 9101, 6500 HB, Nijmegen, The Netherlands,
29
Puddester D, MacDonald CJ, Clements D, Gaffney J, Wiesenfeld L. Designing faculty development to support the evaluation of resident competency in the intrinsic CanMEDS roles: practical outcomes of an assessment of program director needs. BMC MEDICAL EDUCATION 2015; 15:100. [PMID: 26043731 PMCID: PMC4472249 DOI: 10.1186/s12909-015-0375-5] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/22/2014] [Accepted: 05/12/2015] [Indexed: 05/10/2023]
Abstract
BACKGROUND The Royal College of Physicians and Surgeons of Canada and the College of Family Physicians of Canada mandate that faculty members demonstrate they are evaluating residents on all CanMEDS (Canadian Medical Education Directions for Specialists) roles as part of the accreditation process. Postgraduate Medical Education at the University of Ottawa initiated a 5-year project to develop and implement a comprehensive system to assess the full spectrum of CanMEDS roles. This paper presents the findings from a needs assessment with Program Directors, in order to determine how postgraduate medical faculty can be motivated and supported to evaluate residents on the intrinsic CanMEDS roles. METHODS Semi-structured individual interviews were conducted with 60 Postgraduate Program Directors in the Faculty of Medicine. Transcribed interviews were analyzed using qualitative analysis. Once the researchers were satisfied the identified themes reflected the views of the participants, the data was assigned to categories to provide rich, detailed, and comprehensive information that would indicate what faculty need in order to effectively evaluate their residents on the intrinsic roles. RESULTS Findings indicated faculty members need faculty development and shared point of care resources to support them with how to not only evaluate, but also teach, the intrinsic roles. Program Directors expressed the need to collaborate and share resources across departments and national specialty programs. Based on our findings, we designed and delivered workshops with companion eBooks to teach and evaluate residents at the point of care (Developing the Professional, Health Advocate and Scholar). CONCLUSIONS Identifying stakeholder needs is essential for designing effective faculty development. By sharing resources, faculties can prevent 'reinventing the wheel' and collaborate to meet the Colleges' accreditation requirements more efficiently.
Affiliation(s)
- Derek Puddester
- Faculty of Medicine, University of Ottawa, 451 Smyth Road, Ottawa, ON, K1H 8M5, Canada.
| | - Colla J MacDonald
- Faculty of Education, University of Ottawa, 145 Jean Jacques Lussier, Ottawa, ON, K1N 6N5, Canada.
| | | | | | - Lorne Wiesenfeld
- Faculty of Medicine, University of Ottawa, 451 Smyth Road, Ottawa, ON, K1H 8M5, Canada.
30
Williams B, Olaussen A, Peterson EL. Peer-assisted teaching: An interventional study. Nurse Educ Pract 2015; 15:293-8. [PMID: 25866358 DOI: 10.1016/j.nepr.2015.03.008] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2014] [Revised: 12/05/2014] [Accepted: 03/22/2015] [Indexed: 10/23/2022]
Abstract
Peer-assisted learning (PAL) as an educational philosophy benefits both the peer-teacher and the peer-learner. The changing role of paramedicine towards autonomous and professional practice demands that future paramedics be effective educators. Yet PAL is not formally integrated into undergraduate paramedic programs. We aimed to examine the effects of an educational intervention on students' PAL experiences as peer-teachers. Two one-hour workshops, comprising small-group activities, individual reflections, role-plays and material notes, were provided prior to the PAL teaching sessions. Peer-teachers completed the Teaching Style Survey, which uses a five-point Likert scale to measure participants' perceptions and confidence before and after PAL involvement. Thirty-eight students were involved in an average of 3.7 PAL sessions. The cohort was predominantly male (68.4%) and aged ≤25 years (73.7%). Following PAL, students reported feeling more confident in facilitating tutorial groups (p = 0.02). After the PAL project, peer-teachers were also more likely to set high standards for their learners (p = 0.009). This PAL project yielded important information for the continuing development of paramedic education. Although PAL increases students' confidence, the full role of PAL in education remains unexplored, and the role of the university in supporting it must also be clarified.
Affiliation(s)
- Brett Williams
- Monash University, Department of Community Emergency Health & Paramedic Practice, Vic, Australia.
| | - Alexander Olaussen
- Monash University, Department of Community Emergency Health & Paramedic Practice, Vic, Australia
| | - Evan L Peterson
- Monash University, Department of Community Emergency Health & Paramedic Practice, Vic, Australia
31
Da Dalt L, Anselmi P, Furlan S, Carraro S, Baraldi E, Robusto E, Perilongo G. Validating a set of tools designed to assess the perceived quality of training of pediatric residency programs. Ital J Pediatr 2015; 41:2. [PMID: 25599713 PMCID: PMC4339004 DOI: 10.1186/s13052-014-0106-2] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/26/2014] [Accepted: 12/18/2014] [Indexed: 12/02/2022] Open
Abstract
Background The Paediatric Residency Program (PRP) of Padua, Italy, developed a set of questionnaires to assess the quality of the training provided by each faculty member, the quality of the professional experience residents gained during the various rotations, and the functioning of the Resident Affair Committee (RAC), named respectively: “Tutor Assessment Questionnaire” (TAQ), “Rotation Assessment Questionnaire” (RAQ), and “RAC Assessment Questionnaire”. The process that led to their validation is presented herein. Method Between July 2012 and July 2013, 51 residents evaluated 26 tutors through the TAQ, and 25 rotations through the RAQ. Forty-eight residents completed the RAC Assessment Questionnaire. The three questionnaires were validated through a many-facet Rasch measurement analysis. Results In their final form, the questionnaires produced measures that were valid, reliable, unidimensional, and free from gender biases. The TAQ and RAQ distinguished 5–6 levels of quality and effectiveness among tutors and rotations. The three questionnaires allowed the identification of strengths and weaknesses of tutors, rotations, and the RAC. The agreement observed among judges was consistent with the predicted values, suggesting that no particular training is required for developing a shared interpretation of the items. Conclusions The work presented herein enriches the armamentarium of tools that medical residency programs can use to monitor their functioning. Wider application of these tools will serve to consolidate and further refine the results presented. Electronic supplementary material The online version of this article (doi:10.1186/s13052-014-0106-2) contains supplementary material, which is available to authorized users.
Affiliation(s)
- Liviana Da Dalt
- Paediatric Residency Program, Department of Woman's and Child's Health, University of Padua, Via Giustiniani 3 - 35128, Padua, Italy.
| | | | - Sara Furlan
- Department FISPPA, University of Padua, Padua, Italy.
| | - Silvia Carraro
- Paediatric Residency Program, Department of Woman's and Child's Health, University of Padua, Via Giustiniani 3 - 35128, Padua, Italy.
| | - Eugenio Baraldi
- Paediatric Residency Program, Department of Woman's and Child's Health, University of Padua, Via Giustiniani 3 - 35128, Padua, Italy.
| | | | - Giorgio Perilongo
- Paediatric Residency Program, Department of Woman's and Child's Health, University of Padua, Via Giustiniani 3 - 35128, Padua, Italy.
32
Jochemsen-van der Leeuw HGAR, van Dijk N, Wieringa-de Waard M. Assessment of the clinical trainer as a role model: a Role Model Apperception Tool (RoMAT). ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2014; 89:671-7. [PMID: 24556764 PMCID: PMC4885572 DOI: 10.1097/acm.0000000000000169] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/16/2023]
Abstract
PURPOSE Positive role modeling by clinical trainers is important for helping trainees learn professional and competent behavior. The authors developed and validated an instrument to assess clinical trainers as role models: the Role Model Apperception Tool (RoMAT). METHOD On the basis of a 2011 systematic review of the literature and through consultation with medical education experts and with clinical trainers and trainees, the authors developed 17 attributes characterizing a role model, to be assessed using a Likert scale. In 2012, general practice (GP) trainees, in their first or third year of postgraduate training, who attended a curriculum day at four institutes in different parts of the Netherlands, completed the RoMAT. The authors performed a principal component analysis on the data that were generated, and they tested the instrument's validity and reliability. RESULTS Of 328 potential GP trainees, 279 (85%) participated. Of these, 202 (72%) were female, and 154 (55%) were first-year trainees. The RoMAT demonstrated both content and convergent validity. Two components were extracted: "Caring Attitude" and "Effectiveness." Both components had high reliability scores (0.92 and 0.84, respectively). Less experienced trainees scored their trainers significantly higher on the Caring Attitude component. CONCLUSIONS The RoMAT proved to be a valid, reliable instrument for assessing clinical trainers' role-modeling behavior. Both components include an equal number of items addressing personal (Heart), teaching (Head), and clinical (Hands-on) qualities, thus demonstrating that competence in the "3Hs" is a condition for positive role modeling. Educational managers (residency directors) and trainees alike can use the RoMAT.
Affiliation(s)
- H G A Ria Jochemsen-van der Leeuw
- Dr. Jochemsen-van der Leeuw is general practitioner and PhD student, Department of General Practice/Family Medicine, Academic Medical Center-University of Amsterdam, Amsterdam, the Netherlands. Dr. van Dijk is assistant professor, Department of General Practice/Family Medicine, Academic Medical Center-University of Amsterdam, Amsterdam, the Netherlands. Dr. Wieringa-de Waard is professor, Department of General Practice/Family Medicine, Academic Medical Center-University of Amsterdam, Amsterdam, the Netherlands
33
Owolabi MO. Development and psychometric characteristics of a new domain of the stanford faculty development program instrument. THE JOURNAL OF CONTINUING EDUCATION IN THE HEALTH PROFESSIONS 2014; 34:13-24. [PMID: 24648360 DOI: 10.1002/chp.21213] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
INTRODUCTION Teacher's attitude domain, a pivotal aspect of clinical teaching, is missing in the Stanford Faculty Development Program Questionnaire (SFDPQ), the most widely used student-based assessment method of clinical teaching skills. This study was conducted to develop and validate the teacher's attitude domain and evaluate the validity and internal consistency reliability of the augmented SFDPQ. METHODS Items generated for the new domain included teacher's enthusiasm, sobriety, humility, thoroughness, empathy, and accessibility. The study involved 20 resident doctors assessed once by 64 medical students using the augmented SFDPQ. Construct validity was explored using correlation among the different domains and a global rating scale. Factor analysis was performed. RESULTS The response rate was 94%. The new domain had a Cronbach's alpha of 0.89, with 1-factor solution explaining 57.1% of its variance. It showed the strongest correlation to the global rating scale (rho = 0.71). The augmented SFDPQ, which had a Cronbach's alpha of 0.93, correlated better (rho = 0.72, p < 0.00001) to the global rating scale than the original SFDPQ (rho = 0.67, p < 0.00001). DISCUSSION The new teacher's attitude domain exhibited good internal consistency and construct and factorial validity. It enhanced the content and construct validity of the SFDPQ. The validated construct of the augmented SFDPQ is recommended for design and evaluation of basic and continuing clinical teaching programs.
34
Fluit CV, Bolhuis S, Klaassen T, DE Visser M, Grol R, Laan R, Wensing M. Residents provide feedback to their clinical teachers: reflection through dialogue. MEDICAL TEACHER 2013; 35:e1485-92. [PMID: 23968325 DOI: 10.3109/0142159x.2013.785631] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/15/2023]
Abstract
BACKGROUND Physicians play a crucial role in teaching residents in clinical practice. Feedback on their teaching performance to support this role needs to be provided in a carefully designed and constructive way. AIMS We investigated a system for evaluating supervisors and providing them with formative feedback. METHOD In a design-based research approach, the 'Evaluation and Feedback For Effective Clinical Teaching System' (EFFECT-S) was examined by conducting semi-structured interviews with residents and supervisors of five departments in five different hospitals about feedback conditions, acceptance and effects. Interviews were analysed by three researchers, using qualitative research software (ATLAS-Ti). RESULTS The evaluation supported the principles and characteristics of the design. All steps of EFFECT-S appear necessary; a new step, team evaluation, was added. Supervisors perceived the feedback as instructive; residents felt capable of providing feedback. Creating safety and honesty requires different actions from residents and supervisors. Outcomes include awareness of clinical teaching, residents learning feedback skills, reduced hierarchy and an improved learning climate. CONCLUSIONS EFFECT-S appeared useful for evaluating supervisors. The key mechanism was creating a safe environment in which residents could provide honest and constructive feedback. Residents learned to provide feedback, a skill that is part of the CanMEDS and ACGME competencies of medical education programmes.
35
Fluit CRMG, Feskens R, Bolhuis S, Grol R, Wensing M, Laan R. Repeated evaluations of the quality of clinical teaching by residents. PERSPECTIVES ON MEDICAL EDUCATION 2013; 2:87-94. [PMID: 23670697 PMCID: PMC3656177 DOI: 10.1007/s40037-013-0060-5] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
Many studies report on the validation of instruments for facilitating feedback to clinical supervisors. There is mixed evidence on whether evaluations lead to more effective teaching and higher ratings. We assessed changes in resident ratings after an evaluation and feedback session with their supervisors. Supervisors of three medical specialities were evaluated using a validated instrument (EFFECT). Mean overall scores (MOS) and mean scale scores were calculated and compared using paired t-tests. Twenty-four supervisors from three departments were evaluated in two consecutive years. The MOS increased from 4.36 to 4.49. Two scale scores showed an increase >0.2: 'teaching methodology' (4.34-4.55) and 'assessment' (4.11-4.39). Supervisors with an MOS <4.0 in year 1 (n = 5) all demonstrated a strong increase in the MOS (mean overall increase 0.50, range 0.34-0.64). Four of the supervisors with an MOS between 4.0 and 4.5 (n = 6) demonstrated an increase >0.2 in their MOS (mean overall increase 0.21, range -0.15 to 0.53). One of the supervisors with an MOS >4.5 (n = 13) demonstrated an increase >0.2 in the MOS, and two demonstrated a decrease >0.2 (mean overall increase -0.06, range -0.42 to 0.42). EFFECT-S was associated with a positive change in residents' ratings of their supervisors, predominantly in supervisors with relatively low initial scores.
Affiliation(s)
- Cornelia R M G Fluit
- Academic Educational Institute, Radboud University Nijmegen Medical Centre, 306 IWOO, PO Box 9101, 6500 HB, Nijmegen, the Netherlands.
| | - Remco Feskens
- Department of Methods and Statistics, Utrecht University, Utrecht, the Netherlands
| | - Sanneke Bolhuis
- Academic Educational Institute, Radboud University Nijmegen Medical Centre, 306 IWOO, PO Box 9101, 6500 HB, Nijmegen, the Netherlands
| | - Richard Grol
- Scientific Institute for Quality of Healthcare, Radboud University Nijmegen Medical Centre, 306 IWOO, PO Box 9101, 6500 HB, Nijmegen, the Netherlands
| | - Michel Wensing
- Scientific Institute for Quality of Healthcare, Radboud University Nijmegen Medical Centre, 306 IWOO, PO Box 9101, 6500 HB, Nijmegen, the Netherlands
| | - Roland Laan
- Academic Educational Institute, Radboud University Nijmegen Medical Centre, 306 IWOO, PO Box 9101, 6500 HB, Nijmegen, the Netherlands
- Department of Rheumatology, Radboud University Nijmegen Medical Centre, 306 IWOO, PO Box 9101, 6500 HB, Nijmegen, the Netherlands