1
"Rater training" re-imagined for work-based assessment in medical education. Adv Health Sci Educ Theory Pract 2023; 28:1697-1709. [PMID: 37140661] [DOI: 10.1007/s10459-023-10237-8]
Abstract
In this perspective, the authors critically examine "rater training" as it has been conceptualized and used in medical education. By "rater training," they mean the educational events intended to improve rater performance and contributions during assessment events. Historically, rater training programs have focused on modifying faculty behaviours to achieve psychometric ideals (e.g., reliability, inter-rater reliability, accuracy). The authors argue these ideals may now be poorly aligned with contemporary research informing work-based assessment, introducing a compatibility threat, with no clear direction on how to proceed. To address this issue, the authors provide a brief historical review of "rater training" and an analysis of the literature examining the effectiveness of rater training programs. They focus mainly on what has served to define effectiveness or improvements. They then draw on philosophical and conceptual shifts in assessment to demonstrate why the function, effectiveness aims, and structure of rater training require reimagining. These include shifting competencies for assessors, viewing assessment as a complex cognitive task enacted in a social context, evolving views on biases, and reprioritizing which validity evidence should be most sought in medical education. The authors aim to advance the discussion on rater training by challenging implicit incompatibility issues and stimulating ways to overcome them. They propose that "rater training" (a moniker they suggest be reserved for strong psychometric aims) be augmented with "assessor readiness" programs that link to contemporary assessment science and enact the principle of compatibility between that science and ways of engaging with advances in real-world faculty-learner contexts.
2
Observation of behavioural skills by medical simulation facilitators: a cross-sectional analysis of self-reported importance, difficulties, observation strategies and expertise development. Adv Simul (Lond) 2023; 8:28. [PMID: 38031197] [PMCID: PMC10685611] [DOI: 10.1186/s41077-023-00268-x]
Abstract
BACKGROUND The association between team performance and patient care was an immense boost for team-based education in health care. Behavioural skills are an important focus in these sessions, often provided via a manikin-based immersive simulation experience in a (near) authentic setting. Observation of these skills by the facilitator(s) is paramount for facilitated feedback with the team. Despite the acknowledgement that trained facilitators are important for optimal learning, insight into this observation process by facilitators is limited. OBJECTIVES What are the self-reported current practices and difficulties regarding the observation of behavioural skills amongst facilitators during team training, and how have they been trained to observe behavioural skills? METHODS This cross-sectional study used a pilot-tested, content-validated, multilingual online survey within Europe, distributed through a non-discriminative snowball sampling method. Inclusion was limited to facilitators observing behavioural skills within a medical team setting. RESULTS A total of 175 persons completed the questionnaire. All aspects of behavioural skill were perceived as very important to observe. The self-perceived difficulty of the behavioural skill aspects ranged from slightly to moderately difficult. Qualitative analysis revealed three major themes elaborating on this perceived difficulty: (1) not everything can be observed, (2) not everything is observed and (3) interpretation of observed behavioural skills is difficult. Additionally, the number of team members that health care facilitators have to observe outnumbers their self-reported maximum. Strategies and tools used to facilitate their observation were a blank notepad, co-observers and predefined learning goals. The majority of facilitators acquired observational skills through self-study and personal experience and/or observing peers.
Co-observation with either peers or experts was regarded as the most instructive for their expertise development. Overall, participants perceived themselves as moderately competent in the observation of behavioural skills during team training. CONCLUSIONS Observation of behavioural skills by facilitators in health care remains a complex and challenging task. Facilitators' limitations with respect to attention, focus and (in)ability to perform concomitant tasks need to be acknowledged. Although strategies and tools can help to facilitate the observation process, they all have their limitations and are used in different ways.
3
Successful implementation of a rater training program for medical students to evaluate simulated pediatric emergencies. GMS J Med Educ 2023; 40:Doc47. [PMID: 37560048] [PMCID: PMC10407587] [DOI: 10.3205/zma001629]
Abstract
Introduction Simulation-based training is increasingly used in pediatrics to teach technical skills, teamwork, and team communication, and to improve potential deficiencies in pediatric emergency care. Team performance must be observed, analyzed, and evaluated by trained raters. The structured training of medical students for the assessment of simulated pediatric emergencies has not yet been investigated. Methods We developed a rater training program for medical students to assess guideline adherence, teamwork, and team communication in simulated pediatric emergencies. Interrater reliability was measured at each training stage using Kendall tau coefficients. Results In 10 out of 15 pairs of raters, interrater reliability was moderate to high (tau > 0.4), whereas it was low in the remaining 5 pairs. Discussion The interrater reliability showed good agreement between medical students and expert raters at the end of the rater training program. Medical students can be successfully involved in the assessment of guideline adherence as well as teamwork and team communication in simulated pediatric emergencies.
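As a rough illustration of the pairwise analysis this abstract describes, a Kendall tau coefficient can be computed for each student-expert rater pair; the sketch below uses `scipy.stats.kendalltau` with illustrative scores (not data from the study) and the study's tau > 0.4 cutoff for moderate-to-high agreement.

```python
# Sketch: interrater reliability for one student-expert rater pair via
# Kendall's tau. Scores are illustrative placeholders for 8 scenarios.
from scipy.stats import kendalltau

student_scores = [3, 4, 2, 5, 4, 3, 1, 5]  # hypothetical student ratings
expert_scores  = [3, 5, 2, 4, 4, 3, 2, 5]  # hypothetical paired expert ratings

tau, p_value = kendalltau(student_scores, expert_scores)
agreement = "moderate to high" if tau > 0.4 else "low"  # cutoff from the study
print(f"tau = {tau:.2f} ({agreement})")
```

In the study's design this computation would be repeated for each of the 15 rater pairs at every training stage.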
4
The influence of the simulation environment on teamwork and cognitive load in novice trauma professionals at the emergency department: Piloting a randomized controlled trial. Int Emerg Nurs 2023; 67:101261. [PMID: 36804137] [DOI: 10.1016/j.ienj.2022.101261]
Abstract
INTRODUCTION This pilot study aimed to test the feasibility of conducting a randomized controlled trial to examine how simulation environments (in situ versus laboratory) influence teamwork skills development and cognitive load among novice healthcare trauma professionals in the emergency department. METHOD Twenty-four novice trauma professionals (nurses, medical residents, respiratory therapists) were assigned to in situ or laboratory simulations. They participated in two 15-minute simulations separated by a 45-minute debriefing on teamwork. After each simulation, they completed validated teamwork and cognitive load questionnaires. All simulations were video recorded to assess teamwork performance by trained external observers. Feasibility measures (e.g., recruitment rate, randomization procedure and intervention implementation) were recorded. Mixed ANOVAs were used to calculate effect sizes. RESULTS Regarding feasibility, several difficulties were encountered, such as a low recruitment rate and the inability to perform randomization. Outcome results suggest that the simulation environment does not affect novice trauma professionals' teamwork performance and cognitive load (small effect sizes), but a large effect size was observed for perceived learning. CONCLUSION This study highlights several barriers to conducting a randomized study in the context of interprofessional simulation-based education in the emergency department. Suggestions are made to guide future research in the field.
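The effect sizes reported above are standardized between-group differences; a minimal sketch of one such measure (Cohen's d with a pooled standard deviation) is shown below, with entirely illustrative teamwork scores rather than the pilot's data.

```python
# Sketch: a between-group effect size (Cohen's d) of the kind summarized
# for the in situ vs. laboratory comparison. Scores are illustrative.
import numpy as np

in_situ = np.array([72.0, 68.0, 75.0, 70.0, 69.0, 74.0])  # teamwork scores
lab     = np.array([71.0, 69.0, 73.0, 70.0, 68.0, 72.0])

def cohens_d(a, b):
    # Standardized mean difference using the pooled sample variance
    n1, n2 = len(a), len(b)
    pooled_var = ((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1)) / (n1 + n2 - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

d = cohens_d(in_situ, lab)
print(f"d = {d:.2f}")  # by convention, |d| < 0.2 is considered small
```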
5
Intraoperative Code Blue: Improving Teamwork and Code Response Through Interprofessional, In Situ Simulation. Jt Comm J Qual Patient Saf 2022; 48:665-673. [PMID: 36192311] [DOI: 10.1016/j.jcjq.2022.08.011]
Abstract
INTRODUCTION An intraoperative cardiac arrest requires perioperative teams to be equipped with the technical skills, nontechnical skills, and confidence to provide the best resuscitative measures for the patient. In situ simulation (simulation conducted in health professionals' work environment, such as a patient care unit, and not in an off-site location) has the potential to improve team performance. The research team assessed the effects of in situ simulation on code response, teamwork, communication, and comfort in intraoperative resuscitations. METHODS This study included seven interprofessional teams consisting of RNs, anesthesiologists, surgical technologists, and patient care technicians working in the operating room of a community hospital in New Jersey. The hour-long interdisciplinary simulation training sessions consisted of a code blue scenario run twice; both runs were video recorded, retrospectively reviewed, and compared with each other. Technical skills were measured by "time-to-tasks"; nontechnical skills were assessed using the Team Emergency Assessment Measure (TEAM) instrument. Self-reported comfort in skills was collected before the simulation program and after completion of the training. RESULTS A total of 21 perioperative nurses, 7 anesthesiologists, 7 surgical technologists, and 4 patient care technicians participated from January to April 2021. There was a significant (p < 0.05) decrease in time to compressions (by 14 seconds, 53.5% improvement) and in time to defibrillation (by 49 seconds) between the two simulations. Significant improvements were noted in confidence levels of certain CPR-related technical skills. There were statistically significant improvements in TEAM scores in the two teams that performed lowest in the pre-debrief simulation (p < 0.05).
CONCLUSION In the operative setting, where time and space for training are limited, in situ simulation training was associated with improvement in technical skills of individuals and teams, with significantly improved teamwork in teams that required the most training. The long-term effects of such training and its effects on patient outcomes require additional research.
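The "time-to-task" comparison described above amounts to a paired pre/post analysis with a percent-improvement summary; the sketch below shows one such computation with invented times (seconds), not the study's measurements.

```python
# Sketch: paired comparison of time-to-compressions between the first and
# second run of the code scenario. All times are illustrative.
from scipy.stats import ttest_rel

run1_times = [30, 25, 28, 22, 27, 24, 26]  # 7 hypothetical teams, first run
run2_times = [12, 13, 11, 10, 14, 12, 13]  # same teams, second run

t_stat, p_value = ttest_rel(run1_times, run2_times)

mean1 = sum(run1_times) / len(run1_times)
mean2 = sum(run2_times) / len(run2_times)
pct_improvement = 100 * (mean1 - mean2) / mean1
print(f"p = {p_value:.4f}, improvement = {pct_improvement:.1f}%")
```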
6
Tools for Assessing the Performance of Pediatric Perioperative Teams During Simulated Crises: A Psychometric Analysis of Clinician Raters' Scores. Simul Healthc 2021; 16:20-28. [PMID: 33956763] [DOI: 10.1097/sih.0000000000000467]
Abstract
INTRODUCTION The pediatric perioperative setting is a dynamic clinical environment where multidisciplinary interprofessional teams interact to deliver complex care to patients. This environment requires clinical teams to possess high levels of complex technical and nontechnical skills. For perioperative teams to identify and maintain clinical competency, well-developed and easy-to-use measures of competency are needed. METHODS Tools for measuring the technical and nontechnical performance of perioperative teams were developed and/or identified, and a group of raters were trained to use the instruments. The trained raters used the tools to assess pediatric teams managing simulated emergencies. A psychometric analysis of the trained raters' scores using the different instruments was performed, and the agreement between the trained raters' scores and a reference score was determined. RESULTS Five raters were trained and scored 96 recordings of perioperative teams managing simulated emergencies. Scores from both technical skills assessment tools demonstrated significant reliability within and between ratings, with the scenario-specific performance checklist tool demonstrating greater interrater agreement than scores from the global rating scale. Scores from both technical skills assessment tools correlated well with each other and with the reference standard scores. Scores from the Team Emergency Assessment Measure nontechnical assessment tool were more reliable within and between raters and correlated better with the reference standard than scores from the behaviorally anchored rating scale (BARS) tool. CONCLUSIONS The clinicians trained in this study were able to use the technical performance assessment tools with reliable results that correlated well with reference scores. There was more variability between the raters' scores and less correlation with the reference standard when the raters used the nontechnical assessment tools.
The global rating scale used in this study was able to measure the performance of teams across a variety of scenarios and may be generalizable for assessing teams in other clinical scenarios. The Team Emergency Assessment Measure tool demonstrated reliable measures when used to assess interprofessional perioperative teams in this study.
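One element of the analysis above is correlating each trained rater's scores with the reference standard; a minimal sketch of that check, using `scipy.stats.pearsonr` and illustrative checklist scores, follows.

```python
# Sketch: agreement of one trained rater with a reference score across
# recorded scenarios, via Pearson correlation. Scores are illustrative.
from scipy.stats import pearsonr

reference = [14, 10, 12, 8, 15, 9, 11, 13]  # hypothetical reference scores
rater     = [13, 11, 12, 7, 15, 10, 10, 14]  # hypothetical trained rater

r, p_value = pearsonr(reference, rater)
print(f"r = {r:.2f}")
```

In practice this would be repeated per rater and per instrument, alongside the within- and between-rater reliability analyses.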
7
Implementation of the ACS/APDS Resident Skills Curriculum reveals a need for rater training: An analysis using generalizability theory. Am J Surg 2021; 222:541-548. [PMID: 33516415] [DOI: 10.1016/j.amjsurg.2021.01.018]
Abstract
BACKGROUND The American College of Surgeons (ACS)/Association of Program Directors in Surgery (APDS) Resident Skills Curriculum includes validated task-specific checklists and global rating scales (GRS) for Objective Structured Assessment of Technical Skills (OSATS). However, it does not include instructions on use of these assessment tools. Since consistency of ratings is a key feature of assessment, we explored rater reliability for two skills. METHODS Surgical faculty assessed hand-sewn bowel and vascular anastomoses in real time using the OSATS GRS. OSATS were videotaped and independently evaluated by a research resident and surgical attending. Rating consistency was estimated using intraclass correlation coefficients (ICC) and generalizability analysis. RESULTS Three-rater ICC coefficients across 24 videos ranged from 0.12 to 0.75. Generalizability reliability coefficients ranged from 0.55 to 0.8. Percent variance attributable to raters ranged from 2.7% to 32.1%. Pairwise agreement showed considerable inconsistency for both tasks. CONCLUSIONS Variability of ratings for these two skills indicates the need for rater training to increase scoring agreement and decrease rater variability for technical skill assessments.
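The "percent variance attributable to raters" figure comes from a variance-components decomposition of the kind generalizability theory uses; the sketch below shows a minimal two-way (video x rater) version with expected-mean-square estimators, on illustrative scores rather than the study's data.

```python
# Sketch: two-way (video x rater) variance decomposition estimating the
# percent of score variance attributable to raters. Scores are illustrative.
import numpy as np

# rows = videos, columns = raters (6 videos, 3 raters)
scores = np.array([
    [4, 5, 3],
    [2, 3, 2],
    [5, 5, 4],
    [3, 4, 2],
    [4, 4, 3],
    [1, 2, 1],
], dtype=float)

n_videos, n_raters = scores.shape
grand = scores.mean()
video_means = scores.mean(axis=1)
rater_means = scores.mean(axis=0)

ss_video = n_raters * ((video_means - grand) ** 2).sum()
ss_rater = n_videos * ((rater_means - grand) ** 2).sum()
ss_total = ((scores - grand) ** 2).sum()
ss_resid = ss_total - ss_video - ss_rater

ms_video = ss_video / (n_videos - 1)
ms_rater = ss_rater / (n_raters - 1)
ms_resid = ss_resid / ((n_videos - 1) * (n_raters - 1))

# Expected-mean-square estimates of the variance components
var_video = max((ms_video - ms_resid) / n_raters, 0.0)
var_rater = max((ms_rater - ms_resid) / n_videos, 0.0)
var_resid = ms_resid

pct_rater = 100 * var_rater / (var_video + var_rater + var_resid)
print(f"% variance attributable to raters: {pct_rater:.1f}%")
```

A large rater component relative to the video (true-score) component is exactly the pattern that motivates the paper's call for rater training.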
8
Rapid cycle system improvement for COVID-19 readiness: integrating deliberate practice, psychological safety and vicarious learning. BMJ Simul Technol Enhanc Learn 2020. [PMID: 37534688] [PMCID: PMC7441440] [DOI: 10.1136/bmjstel-2020-000635]
Abstract
Introduction In the face of a rapidly advancing pandemic with uncertain pathophysiology, pop-up healthcare units, ad hoc teams and unpredictable personal protective equipment supply, it is difficult for healthcare institutions and front-line teams to invent and test robust and safe clinical care pathways for patients and clinicians. Conventional simulation-based education was not designed for the time-pressured and emergent needs of readiness in a pandemic. We used ‘rapid cycle system improvement’ to create a psychologically safe learning oasis in the midst of a pandemic. This oasis provided a context to build staff technical and teamwork capacity and improve clinical workflows simultaneously. Methods At the Department of Anaesthesia and Intensive Care in Prince of Wales Hospital, a tertiary institution, in situ simulations were carried out in the operating theatres and intensive care unit (ICU). The translational simulation design leveraged principles of psychological safety, rapid cycle deliberate practice, direct and vicarious learning to ready over 200 staff with 51 sessions and achieve iterative system improvement all within 7 days. Staff evaluations and system improvements were documented postsimulation. Results/Findings Staff in both operating theatres and ICU were significantly more comfortable and confident in managing patients with COVID-19 postsimulation. Teamwork, communication and collective ability to manage infectious cases were enhanced. Key system issues were also identified and improved. Discussion To develop readiness in the rapidly progressing COVID-19 pandemic, we demonstrated that ‘rapid cycle system improvement’ can efficiently help achieve three intertwined goals: (1) ready staff for new clinical processes, (2) build team competence and confidence and (3) improve workflows and procedures.
9
The Effect of Evaluator Training on Inter- and Intrarater Reliability in High-Stakes Assessment in Simulation. Nurs Educ Perspect 2020; 41:222-228. [PMID: 32569112] [DOI: 10.1097/01.nep.0000000000000619]
Abstract
AIM The aim of this study was to evaluate the effectiveness of a training intervention in achieving inter- and intrarater reliability among faculty raters conducting high-stakes assessment of clinical performance in simulation. BACKGROUND High-stakes assessment of simulation performance is being adopted in nursing education. However, limited research exists to guide best practices in training raters, which is essential to ensure fair and defensible assessment. METHOD A nationwide sample of 75 prelicensure RN program faculty participated in an experimental, randomized, controlled study. RESULTS Participants completing a training intervention achieved higher inter- and intrarater reliability than control group participants when using a checklist evaluation tool. Mixed results were achieved by participants when completing a global competency assessment. CONCLUSION The training intervention was effective in helping participants to achieve a shared mental model for use of a checklist, but more time may be necessary to achieve consistent global competence decisions.
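For checklist-based rater agreement of the kind this study measures, a common chance-corrected statistic is Cohen's kappa; the sketch below computes it by hand for two hypothetical raters scoring a pass/fail checklist item.

```python
# Sketch: chance-corrected agreement (Cohen's kappa) between two raters on
# one pass/fail checklist item. Ratings are illustrative, not study data.
from collections import Counter

rater_a = ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail", "pass", "pass"]
rater_b = ["pass", "pass", "fail", "fail", "fail", "pass", "pass", "fail", "pass", "pass"]

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Expected chance agreement from each rater's marginal frequencies
freq_a, freq_b = Counter(rater_a), Counter(rater_b)
expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2

kappa = (observed - expected) / (1 - expected)
print(f"kappa = {kappa:.2f}")
```

Intrarater reliability can be checked the same way by pairing one rater's scores from two viewings of the same recording.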
10
In Situ Simulation to Assess Pediatric Tracheostomy Care Safety: A Novel Multicenter Quality Improvement Program. Otolaryngol Head Neck Surg 2020; 163:250-258. [PMID: 32450759] [DOI: 10.1177/0194599820923659]
Abstract
OBJECTIVES Our objectives were (1) to use in situ simulation to assess the clinical environment and identify latent safety threats (LSTs) related to the management of pediatric tracheostomy patients and (2) to analyze the effects of systems interventions and team factors on LSTs and simulation performance. METHODS A multicenter, prospective study to assess LSTs related to pediatric tracheostomy care management was conducted in emergency departments (EDs) and intensive care units (ICUs). LSTs were identified through equipment checklists and in situ simulations via structured debriefs and blinded ratings of team performance. The research team and unit champions developed action plans with interventions to address each LST. Reassessment by equipment checklists and in situ simulations was repeated after 6 to 9 months. RESULTS Forty-one LSTs were identified over 21 simulations, 24 in the preintervention group and 17 in the postintervention group. These included LSTs in access to equipment (ie, availability of suction catheters, lack of awareness of the location of tracheostomy tubes) and clinical knowledge gaps. Mean equipment checklist scores improved from 76% to 87%. Twenty-one unique teams (65 participants) participated in the simulations. The average simulation score was 6.19 out of 16 points. DISCUSSION In situ simulation is feasible and effective as an assessment tool to identify latent safety threats and thus measure the system-level performance of a clinical care environment. IMPLICATIONS FOR PRACTICE In situ simulation can be used to identify and reassess latent safety threats related to pediatric tracheostomy management and thereby support quality improvement and educational initiatives.
11
Comparison of peer assessment and faculty assessment in an interprofessional simulation-based team training program. Nurse Educ Pract 2020; 42:102666. [PMID: 31734516] [DOI: 10.1016/j.nepr.2019.102666]
Abstract
Challenges related to limited clinical sites and a shortage of clinical instructors may reduce the quality of clinical experiences, leading to increased demand for the establishment of simulation-based training programs in the curricula of educational institutions. However, simulation-based training programs in health education place great demands on faculty resources. It is therefore of interest to investigate peers' contributions to formal assessment, and how these compare to faculty assessment. This paper reports the results from the comparison of direct observation by peer observers who had received short rater training and post-hoc video-based assessment by trained facilitators. An observation form with six learning outcomes was used to rate team performance. Altogether 262 postgraduate nursing students, bachelor of nursing students and medical students participated, organized into 44 interprofessional teams. A total of 84 peers and two facilitators rated team performance. The sum score of all six learning outcomes showed that facilitators were more lenient than peer observers (p = .014). The inter-rater reliability varied considerably when comparing scores from peer observers from the three different professions with those of the facilitators. The results indicate that peer assessment may support, but not replace, faculty assessment.
12
Does effectiveness in performance appraisal improve with rater training? PLoS One 2019; 14:e0222694. [PMID: 31536562] [PMCID: PMC6752840] [DOI: 10.1371/journal.pone.0222694]
Abstract
Performance appraisal is a complex process by which an organization can determine the extent to which employees are performing their work effectively. However, this appraisal may not be accurate if there is no reduction in the impact of problems caused by possibly subjective rater judgements. The main objective of this work is to check the effectiveness, separately and jointly, of the following four training programmes in the extant literature aimed at improving the accuracy of performance assessment: 1) Performance Dimension Training, 2) Frame-of-Reference, 3) Rater Error Training, and 4) Behavioural Observation Training. Based on these training strategies, three programmes were designed and applied separately. A fourth programme was a combination of the other three. In two studies with different samples (85 students and 42 employees), we analyzed differences in the levels of knowledge of performance and its dimensions, rater errors, observational accuracy, and accuracy of task and citizenship performance appraisal, according to the type of training raters receive. First, the main results show that training based on performance dimensions and the creation of a common framework, in addition to the training that includes the four programmes (Training_4_programmes), increases the level of knowledge of performance and its dimensions. Second, groups that receive training in rater error score higher in knowledge of biases than the other groups, whether or not they have received training. Third, participants' observational accuracy improves at each new measurement point (post-training and follow-up), though not as a function of the type of training received. Fourth, participants who received training through the combined programme gave a task performance appraisal closer to that of the expert judges than the other groups.
Finally, students' citizenship performance appraisal did not vary according to the type of training or measurement point, whereas the employees who received the combined programme gave a more accurate citizenship performance assessment.
13
Rating the quality of teamwork - a comparison of novice and expert ratings using the Team Emergency Assessment Measure (TEAM) in simulated emergencies. Scand J Trauma Resusc Emerg Med 2019; 27:12. [PMID: 30736821] [PMCID: PMC6368771] [DOI: 10.1186/s13049-019-0591-9]
Abstract
Background Training in teamwork behaviour improves technical resuscitation performance. However, its effect on patient outcome is less clear, partly because teamwork behaviour is difficult to measure. Furthermore, it is unknown who should evaluate it. In clinical practice, experts are obliged to participate in resuscitation efforts and are thus unavailable to assess teamwork quality. Consequently, we sought to determine if raters with little clinical experience and experts provide comparable evaluations of teamwork behaviour. Methods Novice and expert raters judged teamwork behaviour during 6 emergency medicine simulations using the Team Emergency Assessment Measure (TEAM). Ratings of both groups were analysed descriptively and compared with U and t tests. We used a mixed effects model to identify the proportion of variance in TEAM scores attributable to rater status and other sources. Results Twelve raters evaluated 7 teams rotating through 6 cases, for a total of 84 observations. We found no significant difference between expert and novice ratings for 7 of the 11 items of the TEAM or in the sums of all item scores. Novices rated teamwork behaviour higher on 4 items and overall. Rater status accounted for 11.1% of the total variance in scores. Conclusions Experts' and novices' ratings were similarly distributed, implying that raters with limited experience can provide reliable data on teamwork behaviour. Novices show a consistent, but slightly more lenient rating behaviour. Clinical studies and real-life teams may thus employ novices using a structured observational tool such as TEAM to inform their performance review and improvement.
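The per-item novice-versus-expert comparisons above use U and t tests; a minimal sketch of both, applied to illustrative ratings of a single TEAM item, is shown below.

```python
# Sketch: comparing novice and expert ratings of one TEAM item with the
# Mann-Whitney U test and the t test. Ratings are illustrative.
from scipy.stats import mannwhitneyu, ttest_ind

novice_ratings = [3, 4, 4, 3, 4, 3, 4, 4]  # hypothetical, slightly lenient
expert_ratings = [3, 3, 4, 3, 3, 3, 3, 4]  # hypothetical expert ratings

u_stat, u_p = mannwhitneyu(novice_ratings, expert_ratings, alternative="two-sided")
t_stat, t_p = ttest_ind(novice_ratings, expert_ratings)
print(f"U p = {u_p:.3f}, t p = {t_p:.3f}")
```

The study's mixed effects model then partitions score variance across rater status, team, and case, beyond what these item-level tests show.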
14
Assessing Communication Skills in Real Medical Encounters in Oncology: Development and Validation of the ComOn-Coaching Rating Scales. J Cancer Educ 2019; 34:73-81. [PMID: 28815515] [DOI: 10.1007/s13187-017-1269-5]
Abstract
One of the challenges in research on teaching physician-patient communication is how to assess communication, necessary for evaluating training, the learning process, and for feedback. Few instruments have been validated for real physician-patient consultations. Real consultations involve unique contexts, different persons, and topics, and are difficult to compare. The aim of this study is to develop and validate a rating scale for assessment of such consultations. For the evaluation study of a communication skills training for physicians in oncology, real consultations were recorded at three assessment points. Based on earlier work and on current studies, a new instrument was developed for assessment of these consultations. Two psychologists were trained in using the instrument and assessed 42 consultations. For inter-rater reliability, the intraclass correlation (ICC) was calculated. The final version of the rating scales consists of 13 items evaluated on a 5-point scale. The items are grouped in seven areas: "Start of conversation," "assessment of the patient's perspective," "structure of conversation," "emotional issues," "end of conversation," "general communication skills," and "overall evaluation." ICC coefficients for the domains ranged from .44 to .77. An overall coefficient of all items resulted in an ICC of .66. The ComOn-Coaching Rating Scales are a short, reliable, and applicable instrument for the assessment of real physician-patient consultations in oncology. If adapted, they could be used in other areas. They were developed for research and teaching purposes and meet the required methodological criteria. Rater training should be considered more deeply in further research.
15
The development and implementation of a 12-month simulation-based learning curriculum for pediatric emergency medicine fellows utilizing debriefing with good judgment and rapid cycle deliberate practice. BMC Med Educ 2019; 19:22. [PMID: 30646903] [PMCID: PMC6334393] [DOI: 10.1186/s12909-018-1417-6]
Abstract
BACKGROUND There are currently training gaps, primarily in procedural and teamwork skills, for pediatric emergency medicine (PEM) fellows. Simulation-based learning (SBL) has been suggested as an educational modality to help fill those gaps. However, there is little evidence suggesting how to do so. The objective of this project is to develop and implement an SBL curriculum for PEM fellows with established curriculum development processes and instructional design strategies to improve PEM fellowship training. METHODS We developed a 12-month longitudinal SBL curriculum focused on needs assessment, instructional strategies, and evaluation. The curriculum development process led us to combine the instructional strategies of debriefing with good judgment, rapid cycle deliberate practice, and task-training to improve core PEM skills such as procedural competence, crisis resource management, and managing complex medical and traumatic emergencies. Using multiple approaches, we measured outcomes related to learners (attendance, performance, critical procedure opportunities), instructor performance, and program structure. RESULTS Eight of eight (100%) PEM fellows participated in this curriculum from July 2015 to June 2017, with an overall attendance rate of 68%. Learners self-reported high satisfaction (4.4/5, SD = 0.5) and perceived educational value (4.9/5, SD = 0.38) with the curriculum and overall program structure. Learners had numerous opportunities to practice critical procedures such as airway management (20 opportunities), defibrillator use (10 opportunities), and others (10 opportunities). Mean learner Debriefing Assessment for Simulation in Healthcare (short version) scores were greater than 5.8/7 (SD = 0.89) across all six elements.
CONCLUSIONS This longitudinal SBL curriculum combining debriefing with good judgment and rapid cycle deliberate practice can be a feasible method of reducing current training gaps (specifically with critical procedure opportunities) in PEM fellowship training. More work is needed to quantify the training gap reduction and to refine the curriculum.
|
16
|
Consistency in grading clinical skills. Nurse Educ Pract 2018; 31:136-142. [DOI: 10.1016/j.nepr.2018.05.013] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2017] [Revised: 05/07/2018] [Accepted: 05/22/2018] [Indexed: 11/19/2022]
|
17
|
Changing Systems Through Effective Teams: A Role for Simulation. Acad Emerg Med 2018; 25:128-143. [PMID: 28727258 DOI: 10.1111/acem.13260] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2017] [Accepted: 07/16/2017] [Indexed: 01/25/2023]
Abstract
Teams are the building blocks of the healthcare system, with growing evidence linking the quality of healthcare to team effectiveness, and team effectiveness to team training. Simulation has been identified as an effective modality for team training and assessment. Despite this, there are gaps in methodology, measurement, and implementation that prevent maximizing the impact of simulation modalities on team performance. As part of the 2017 Academic Emergency Medicine Consensus Conference "Catalyzing System Change Through Health Care Simulation: Systems, Competency, and Outcomes," we explored the impact of simulation on various aspects of team effectiveness. The consensus process included an extensive literature review, group discussions, and the conference "workshop" involving emergency medicine physicians, medical educators, and team science experts. The objectives of this work were to: 1) explore the antecedents and processes that support team effectiveness, 2) summarize the current role of simulation in developing and understanding team effectiveness, and 3) identify research targets to further improve team-based training and assessment, with the ultimate goal of improving healthcare systems.
|
18
|
Conducting multicenter research in healthcare simulation: Lessons learned from the INSPIRE network. Adv Simul (Lond) 2017; 2:6. [PMID: 29450007 PMCID: PMC5806260 DOI: 10.1186/s41077-017-0039-0] [Citation(s) in RCA: 31] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2016] [Accepted: 02/08/2017] [Indexed: 01/29/2023] Open
Abstract
Simulation-based research has grown substantially over the past two decades; however, relatively few published simulation studies are multicenter in nature. Multicenter research confers many distinct advantages over single-center studies, including larger sample sizes for more generalizable findings, sharing resources amongst collaborative sites, and promoting networking. Well-executed multicenter studies are more likely to improve provider performance and/or have a positive impact on patient outcomes. In this manuscript, we offer a step-by-step guide to conducting multicenter, simulation-based research based upon our collective experience with the International Network for Simulation-based Pediatric Innovation, Research and Education (INSPIRE). Like multicenter clinical research, simulation-based multicenter research can be divided into four distinct phases. Each phase has specific differences when applied to simulation research: (1) Planning phase, to define the research question, systematically review the literature, identify outcome measures, and conduct pilot studies to ensure feasibility and estimate power; (2) Project Development phase, when the primary investigator identifies collaborators, develops the protocol and research operations manual, prepares grant applications, obtains ethical approval and executes subsite contracts, registers the study in a clinical trial registry, forms a manuscript oversight committee, and conducts feasibility testing and data validation at each site; (3) Study Execution phase, involving recruitment and enrollment of subjects, clear communication and decision-making, quality assurance measures, and data abstraction, validation, and analysis; and (4) Dissemination phase, where the research team shares results via conference presentations, publications, traditional media, and social media, and implements strategies for translating results to practice. With this manuscript, we provide a guide to conducting quantitative multicenter research with a focus on simulation-specific issues.
|
19
|
When less is more: validating a brief scale to rate interprofessional team competencies. MEDICAL EDUCATION ONLINE 2017; 22:1314751. [PMID: 28475438 PMCID: PMC5508637 DOI: 10.1080/10872981.2017.1314751] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/22/2016] [Accepted: 03/29/2017] [Indexed: 05/22/2023]
Abstract
BACKGROUND There is a need for validated and easy-to-apply behavior-based tools for assessing interprofessional team competencies in clinical settings. The seven-item observer-based Modified McMaster-Ottawa scale was developed for the Team Objective Structured Clinical Encounter (TOSCE) to assess individual and team performance in interprofessional patient encounters. OBJECTIVE We aimed to improve scale usability for clinical settings by reducing the number of items while maintaining generalizability, and to explore the minimum number of observed cases required to achieve modest generalizability for giving feedback. DESIGN We administered a two-station TOSCE in April 2016 to 63 students split into 16 newly formed teams, each consisting of four professions. The stations were of similar difficulty. We trained sixteen faculty to rate two teams each. We examined individual and team performance scores using generalizability (G) theory and principal component analysis (PCA). RESULTS The seven-item scale shows modest generalizability (.75) with individual scores. PCA revealed multicollinearity and singularity among scale items, and we identified three potential items for removal. Reducing items for individual scores from seven to four (measuring Collaboration, Roles, Patient/Family-centeredness, and Conflict Management) changed scale generalizability from .75 to .73. Performance assessment with two cases is associated with reasonable generalizability (.73). Students in newly formed interprofessional teams show a learning curve after one patient encounter. Team scores from a two-station TOSCE demonstrate low generalizability whether the scale consisted of four (.53) or seven items (.55). CONCLUSION The four-item Modified McMaster-Ottawa scale for assessing individual performance in interprofessional teams retains the generalizability and validity of the seven-item scale. Observation of students in teams interacting with two different patients provides reasonably reliable ratings for giving feedback. The four-item scale has potential for assessing individual student skills and the impact of IPE curricula in clinical practice settings. ABBREVIATIONS IPE: Interprofessional education; SP: Standardized patient; TOSCE: Team objective structured clinical encounter.
|
20
|
Summative Simulated-Based Assessment in Nursing Programs. J Nurs Educ 2016; 55:323-8. [DOI: 10.3928/01484834-20160516-04] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2015] [Accepted: 04/05/2016] [Indexed: 11/20/2022]
|
21
|
Structuring feedback and debriefing to achieve mastery learning goals. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2015; 90:1501-8. [PMID: 26375272 DOI: 10.1097/acm.0000000000000934] [Citation(s) in RCA: 106] [Impact Index Per Article: 11.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/16/2023]
Abstract
Mastery learning is a powerful educational strategy in which learners gain knowledge and skills that are rigorously measured against predetermined mastery standards, with different learners needing variable time to reach uniform outcomes. Central to mastery learning are repetitive deliberate practice and robust feedback that promote performance improvement. Traditional health care simulation involves a simulation exercise followed by a facilitated postevent debriefing in which learners discuss what went well and what they should do differently next time, usually without additional opportunities to apply the specific new knowledge. Mastery learning approaches enable learners to "try again" until they master the skill in question. Despite the growing body of health care simulation literature documenting the efficacy of mastery learning models, to date insufficient details have been reported on how to design and implement the feedback and debriefing components of deliberate-practice-based educational interventions. Using simulation-based training for adult and pediatric advanced life support as case studies, this article focuses on how to prepare learners for feedback and debriefing by establishing a supportive yet challenging learning environment; how to implement educational interventions that maximize opportunities for deliberate practice with feedback and reflection during debriefing; the role of within-event debriefing or "microdebriefing" (i.e., during a pause in the simulation scenario or during ongoing case management without interruption) as a strategy to promote performance improvement; and directions for future research in feedback and debriefing for mastery learning.
|