1
Buléon C, Mattatia L, Minehart RD, Rudolph JW, Lois FJ, Guillouet E, Philippon AL, Brissaud O, Lefevre-Scelles A, Benhamou D, Lecomte F, the SoFraSimS Assessment with Simulation group, Bellot A, Crublé I, Philippot G, Vanderlinden T, Batrancourt S, Boithias-Guerot C, Bréaud J, de Vries P, Sibert L, Sécheresse T, Boulant V, Delamarre L, Grillet L, Jund M, Mathurin C, Berthod J, Debien B, Gacia O, Der Sahakian G, Boet S, Oriot D, Chabot JM. Simulation-based summative assessment in healthcare: an overview of key principles for practice. Adv Simul (Lond) 2022; 7:42. PMID: 36578052. PMCID: PMC9795938. DOI: 10.1186/s41077-022-00238-9.
Abstract
BACKGROUND Healthcare curricula need summative assessments that are relevant to and representative of clinical situations in order to best select and train learners. Simulation provides multiple benefits, with a growing literature base proving its utility for training in a formative context. Advancing to the next step, the use of simulation for summative assessment, requires rigorous and evidence-based development, because any summative assessment is high stakes for participants, trainers, and programs. The first step of this process is to identify the baseline from which we can start. METHODS First, using a modified nominal group technique, a task force of 34 panelists defined topics to clarify the why, how, what, when, and who of simulation-based summative assessment (SBSA). Second, each topic was explored by a group of panelists through state-of-the-art literature reviews, with a snowball method used to identify further references. Our goal was to identify current knowledge and potential recommendations for future directions. Results were cross-checked among groups and reviewed by an independent expert committee. RESULTS Seven topics were selected by the task force: "What can be assessed in simulation?", "Assessment tools for SBSA", "Consequences of undergoing the SBSA process", "Scenarios for SBSA", "Debriefing, video, and research for SBSA", "Trainers for SBSA", and "Implementation of SBSA in healthcare". Together, these seven explorations provide an overview of what is known and can be done with relative certainty, and of what is unknown and probably needs further investigation. Based on this work, we highlight the trustworthiness of different summative assessment-related conclusions, the remaining important problems and questions, and their consequences for participants and institutions in how SBSA is conducted.
CONCLUSION Our results identified, among the seven topics, one area with robust evidence in the literature ("What can be assessed in simulation?"), three areas with evidence that require guidance by expert opinion ("Assessment tools for SBSA", "Scenarios for SBSA", "Implementation of SBSA in healthcare"), and three areas with weak or emerging evidence ("Consequences of undergoing the SBSA process", "Debriefing for SBSA", "Trainers for SBSA"). Using SBSA holds much promise, and demand for this application is increasing. Because of the important stakes involved, it must be rigorously conducted and supervised. Guidelines for good practice should be formalized to help with its conduct and implementation. We believe this baseline can direct future investigation and the development of guidelines.
Affiliation(s)
- Clément Buléon
- Department of Anesthesiology, Intensive Care and Perioperative Medicine, Caen Normandy University Hospital, Caen, France; Medical School, University of Caen Normandy, Caen, France; Center for Medical Simulation, Boston, MA, USA
- Laurent Mattatia
- Department of Anesthesiology, Intensive Care and Perioperative Medicine, Nîmes University Hospital, Nîmes, France
- Rebecca D. Minehart
- Center for Medical Simulation, Boston, MA, USA; Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA
- Jenny W. Rudolph
- Center for Medical Simulation, Boston, MA, USA; Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA
- Fernande J. Lois
- Department of Anesthesiology, Intensive Care and Perioperative Medicine, Liège University Hospital, Liège, Belgium
- Erwan Guillouet
- Department of Anesthesiology, Intensive Care and Perioperative Medicine, Caen Normandy University Hospital, Caen, France; Medical School, University of Caen Normandy, Caen, France
- Anne-Laure Philippon
- Department of Emergency Medicine, Pitié Salpêtrière University Hospital, APHP, Paris, France
- Olivier Brissaud
- Department of Pediatric Intensive Care, Pellegrin University Hospital, Bordeaux, France
- Antoine Lefevre-Scelles
- Department of Emergency Medicine, Rouen University Hospital, Rouen, France
- Dan Benhamou
- Department of Anesthesiology, Intensive Care and Perioperative Medicine, Kremlin Bicêtre University Hospital, APHP, Paris, France
- François Lecomte
- Department of Emergency Medicine, Cochin University Hospital, APHP, Paris, France
2
Exploring Validation and Verification: How They Are Different and What They Mean to Healthcare Simulation. Simul Healthc 2018; 13:356-362. PMID: 29771813. DOI: 10.1097/sih.0000000000000298.
Abstract
STATEMENT The healthcare simulation (HCS) community recognizes the importance of quality management, because many novel simulation devices and techniques include some description of how their quality was tested and assured. Verification and validation play a key role in quality management; however, the HCS literature contains many different interpretations of what these terms mean and how to accomplish them. This varied usage leads to varied interpretations of how the verification process differs from the validation process. In this article, we explore the concepts of verification and validation by reviewing how current psychometric science describes them and how other communities relevant to HCS use the terms, including medical device manufacturing, aviation simulation, and the fields of software and engineering, which are building blocks of technology-enhanced HCS, with a focus on clarifying the process of verification. We also review the current HCS literature on verification as compared with validation and, finally, offer a working definition and concept for each term, in the hope of facilitating improved communication within the HCS community and with colleagues outside it.
3
Oyebode F, George S, Math V, Haque S. Inter-examiner reliability of the clinical parts of MRCPsych part II examinations. Psychiatric Bulletin 2018. DOI: 10.1192/pb.bp.106.012906.
Abstract
Aims and Method: The aim of the study was to investigate the inter-rater reliability of the clinical component of the MRCPsych part II examinations, namely the individual patient assessment and the patient management problems. In the study period, there were 1546 candidates and 773 pairs of examiners. Kappa scores for pairs of examiners in both these assessments were calculated.
Results: The kappa scores for exact numerical agreement between the pairs of examiners in both individual patient assessment and patient management problems were only moderate (0.4-0.5). However, the kappa scores for agreement between pairs of examiners on the reclassified pass and fail categories were very good (0.8).
Clinical Implications: The poor reliability of the traditional long case and oral examinations in general is one of the most potent arguments against their use. Our finding suggests that the College clinical examinations are at least not problematic from this point of view, particularly if global pass or fail judgements rather than discrete scores are applied.
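The gap between moderate exact-score agreement and very good pass/fail agreement reported above can be illustrated with a short Cohen's kappa sketch. This is a minimal sketch on invented data: the paired marks, the 0-10 scale, and the pass mark of 5 are assumptions for illustration, not the study's data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters: (observed - chance) / (1 - chance)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal category frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    chance = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - chance) / (1 - chance)

# Hypothetical paired marks for 10 candidates, scored 0-10 (pass mark: 5).
examiner_1 = [4, 5, 6, 7, 4, 8, 5, 3, 6, 7]
examiner_2 = [4, 6, 6, 7, 5, 8, 5, 4, 6, 6]

exact = cohens_kappa(examiner_1, examiner_2)            # exact numerical agreement
pass_fail = cohens_kappa([m >= 5 for m in examiner_1],
                         [m >= 5 for m in examiner_2])  # reclassified pass/fail
print(f"exact kappa: {exact:.2f}, pass/fail kappa: {pass_fail:.2f}")
```

With these invented marks, kappa for exact numerical agreement comes out around 0.51, while kappa for the dichotomized pass/fail categories rises to about 0.74 — the same direction of effect as reported above, because collapsing the scale removes near-miss disagreements.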
4
Noveanu J, Amsler F, Ummenhofer W, von Wyl T, Zuercher M. Assessment of Simulated Emergency Scenarios: Are Trained Observers Necessary? Prehosp Emerg Care 2017; 21:511-524. DOI: 10.1080/10903127.2017.1302528.
5
Lake CL. Simulation in Cardiothoracic and Vascular Anesthesia Education: Tool or Toy? Semin Cardiothorac Vasc Anesth 2016; 9:265-73. PMID: 16322876. DOI: 10.1177/108925320500900401.
Abstract
The use of simulators in cardiothoracic and vascular anesthesia runs the gamut from standardized patients and part-task trainers to full-scale, high-fidelity human patient simulators. The use of simulation to teach medical students, anesthesiology residents, board-certified anesthesiologists with subspecialty interests, hospital administrators, attorneys, and the lay public is still evolving as educational research evaluates the use of simulation and health professional educators struggle to define its role and value. This article provides a general overview of the field and attempts to critically evaluate what is and what is not scientifically established about simulation and simulators.
Affiliation(s)
- Carol L Lake
- Verefi Technologies, Inc, Elizabethtown, PA 17022, USA.
6
Henning MA, Malpas P, Ram S, Rajput V, Krstić V, Boyd M, Hawken SJ. Students' responses to scenarios depicting ethical dilemmas: a study of pharmacy and medical students in New Zealand. J Med Ethics 2016; 42:466-473. PMID: 27154898. DOI: 10.1136/medethics-2015-103253.
Abstract
One of the key learning objectives in any health professional course is to develop ethical and judicious practice. It is therefore important to address how medical and pharmacy students respond to, and deal with, ethical dilemmas in their clinical environments. In this paper, we examined how students communicated their resolution of ethical dilemmas, and the alignment between these communications and the four principles developed by Beauchamp and Childress. Three hundred and fifty-seven pharmacy and medical students (overall response rate 63%) completed a questionnaire containing four clinical case scenarios, each with an ethical dilemma. Data were analysed using multiple methods. The findings revealed that 73% of the qualitative responses could be exclusively coded to one of the 'four principles' of the Beauchamp and Childress framework. Additionally, 14% of responses overlapped between principles (multiple codes) and 13% of responses could not be coded using the framework. A subsequent subgroup analysis revealed different response patterns depending on the case being reviewed. The findings showed that when students are faced with challenging ethical dilemmas, their responses can be aligned with the Beauchamp and Childress framework, although more contentious dilemmas involving issues of law are less easily categorised. The differences between year and discipline groups show that students are developing ethical frames of reference that may be linked with their teaching environments and their levels of understanding. Analysis of these response patterns provides insight into the way students will likely respond in 'real' settings, and this information may help educators prepare students for these clinical ethical dilemmas.
Affiliation(s)
- Marcus A Henning
- Centre for Medical and Health Sciences Education, University of Auckland, Auckland, New Zealand
- Phillipa Malpas
- Department of Psychological Medicine, Faculty of Medical and Health Sciences, University of Auckland, Auckland, New Zealand
- Sanya Ram
- School of Pharmacy, University of Auckland, Auckland, New Zealand
- Vijay Rajput
- Ross University School of Medicine, Miramar, Florida, USA
- Vladimir Krstić
- Department of Philosophy, University of Auckland, Auckland, New Zealand
- Matt Boyd
- Independent researcher, formerly University of Auckland, Auckland, New Zealand
- Susan J Hawken
- Department of General Practice and Primary Healthcare, Faculty of Medical and Health Sciences, University of Auckland, Auckland, New Zealand
7
Improving Patient Safety through Simulation Training in Anesthesiology: Where Are We? Anesthesiol Res Pract 2016; 2016:4237523. PMID: 26949389. PMCID: PMC4753320. DOI: 10.1155/2016/4237523.
Abstract
There have been colossal technological advances in the use of simulation in anesthesiology in the past two decades. Over the years, simulation has progressed from low-fidelity models to high-fidelity models that mimic human responses in a startlingly realistic manner: extremely life-like mannequins that breathe, generate an ECG, and have pulses, heart sounds, and an airway that can be programmed for different degrees of obstruction. Simulation in anesthesiology is no longer a research fascination but an integral part of resident education and one of the ACGME requirements for resident graduation. Simulation training has been objectively shown to increase the skill set of anesthesiologists. Anesthesiology is leading the movement in patient safety, so it is rational to assume a relationship between simulation training and patient safety. Nevertheless, there has not yet been a demonstrable improvement in patient outcomes with simulation training. Larger prospective studies that evaluate improvement in patient outcomes are needed to justify the integration of simulation training in resident education, although an ample number of studies in the past five years do show a definite benefit of using simulation in anesthesiology training. This paper gives a brief overview of the history and evolution of the use of simulation in anesthesiology and highlights some of the more recent studies that have advanced simulation-based training.
8
Abstract
Maintenance of certification (MOC) is a process through which practitioners can demonstrate continuing competence in their areas of expertise. Simulation plays an increasingly important role in the assessment of students and residents, as well as in initial practice certification for health care professionals. The use of simulation as an assessment tool in MOC has been slow to gain universal acceptance. This article discusses the role of simulation in health care education, how simulation might be effectively applied in the MOC process, and the future role of simulation in MOC.
Affiliation(s)
- Brian K Ross
- Department of Anesthesiology and Pain Medicine, University of Washington, Box 356540, Seattle, WA 98195, USA.
- Julia Metzner
- Department of Anesthesiology and Pain Medicine, University of Washington, Box 356540, Seattle, WA 98195, USA
9
Kundra P, Cherian A. Simulation for "Evaluation" and teaching "Standard operating procedures". J Anaesthesiol Clin Pharmacol 2015; 31:270-1. PMID: 25948922. PMCID: PMC4411855. DOI: 10.4103/0970-9185.155208.
Affiliation(s)
- Pankaj Kundra
- Department of Anaesthesiology and Critical Care, JIPMER, Puducherry, India
- Anusha Cherian
- Department of Anaesthesiology and Critical Care, JIPMER, Puducherry, India
10
Lau N, Jamieson GA, Skraaning G. Inter-rater reliability of query/probe-based techniques for measuring situation awareness. Ergonomics 2014; 57:959-972. PMID: 24800794. DOI: 10.1080/00140139.2014.910612.
Abstract
Query- or probe-based situation awareness (SA) measures sometimes rely on process experts to evaluate operator actions and system states when used in representative settings. This introduces the variability of human judgement into the measurements, which therefore require inter-rater reliability assessment. However, the literature neglects the inter-rater reliability of query/probe-based SA measures. We recruited process experts to provide reference keys to SA queries in trials of a full-scope nuclear power plant simulator experiment, in order to investigate the inter-rater reliability of a query-based SA measure. The query-based SA measure demonstrated only 'moderate' inter-rater reliability, even though the queries were seemingly direct. The level of agreement differed significantly across pairs of experts who had different levels of exposure to the experiment. The results caution that the inter-rater reliability of query/probe-based techniques for measuring SA cannot be assumed in representative settings: knowledge about the experiment, as well as the domain, is critical to forming reliable expert judgements.
Practitioner Summary: When the responses of domain experts are treated as the correct answers to the queries or probes of SA measures used in representative or industrial settings, practitioners should take caution in assuming (or should otherwise assess) the inter-rater reliability of the SA measures.
Affiliation(s)
- Nathan Lau
- Department of Systems and Information Engineering, University of Virginia, Charlottesville, VA, USA
11
12
Correlation of rater training and reliability in performance assessment: Experience in a school of dentistry. J Dent Sci 2013. DOI: 10.1016/j.jds.2013.01.002.
13
Hsiao YL, Drury C, Wu C, Paquet V. Predictive models of safety based on audit findings: Part 1: Model development and reliability. Appl Ergon 2013; 44:261-273. PMID: 22939287. DOI: 10.1016/j.apergo.2012.07.010.
Abstract
This two-part study was aimed at the quantitative validation of safety audit tools as predictors of safety performance, as we were unable to find prior studies that tested audit validity against safety outcomes. An aviation maintenance domain was chosen for this work because both audits and safety outcomes are currently prescribed and regulated there. In Part 1, we developed a Human Factors/Ergonomics classification framework, based on the HFACS model (Shappell and Wiegmann, 2001a,b), for the human errors detected by audits, because merely counting audit findings did not predict future safety. The framework was tested for measurement reliability using four participants, two of whom classified errors on 1238 audit reports. Kappa values leveled out after about 200 audits at between 0.5 and 0.8 for different tiers of error categories. This showed sufficient reliability to proceed with prediction validity testing in Part 2.
Affiliation(s)
- Yu-Lin Hsiao
- Department of Industrial and Systems Engineering, Chung Yuan Christian University, Chung Li 32023, Taiwan.
14
Levine AI, Schwartz AD, Bryson EO, Demaria S. Role of simulation in U.S. physician licensure and certification. Mt Sinai J Med 2012; 79:140-53. PMID: 22238047. DOI: 10.1002/msj.21291.
Abstract
The evolution of simulation from an educational tool to an emerging evaluative tool has been rapid. Physician certification has a long history and serves an important role in assuring that practicing physicians are competent and capable of providing a high level of safe care to patients. Traditional assessment methods have relied mostly on multiple-choice exams or continuing medical education exercises. These methods may not be adequate to assess all competencies necessary for excellence in medical practice. Simulation enables assessment of physician competencies in real time and represents the next step in physician certification in the modern age of healthcare.
15
The effect of simulation in improving students' performance in laparoscopic surgery: a meta-analysis. Surg Endosc 2012; 26:3215-24. DOI: 10.1007/s00464-012-2327-z.
16
Liaw SY, Scherpbier A, Klainin-Yobas P, Rethans JJ. Rescuing A Patient In Deteriorating Situations (RAPIDS): An evaluation tool for assessing simulation performance on clinical deterioration. Resuscitation 2011; 82:1434-9. DOI: 10.1016/j.resuscitation.2011.06.008.
17
Clinicians can accurately assign Apgar scores to video recordings of simulated neonatal resuscitations. Simul Healthc 2011; 5:204-12. PMID: 21330798. DOI: 10.1097/sih.0b013e3181dcfb22.
Abstract
INTRODUCTION The Apgar score is used to describe the clinical condition of newborns. However, clinicians show low reliability when assigning Apgar scores to video recordings of actual neonatal resuscitations. Simulators provide a controlled environment for recreating and recording resuscitations. Clinicians assigned Apgar scores to such recordings to test the representativeness of the simulator and the recordings. The study design was guided by Brunswik's probabilistic functionalism. METHOD Judgment analysis methods were used to design 51 recordings of neonatal resuscitation scenarios, simulated with SimNewB (Laerdal, Stavanger, Norway). A step-by-step explanation of the design, preparation, and testing of the recordings is provided. ANALYSIS Recorded Apgar scores, calculated from the presentation of clinical signs, were compared against the designed scores. Working independently and without feedback, three experts assigned Apgar scores to confirm that the recordings could be interpreted as intended. Seventeen neonatal resuscitation clinicians scored the recordings in a separate experiment. RESULTS Correlations between the Apgar scores assigned by the 20 viewers (experts plus clinicians) and the recorded Apgar scores were high (0.78-0.91) and significant (P < 0.01). Fourteen of the 20 viewers scored the recordings without significant bias. Correlations between viewers' scores and the scores of individualized linear models calculated for each viewer were high (0.79-0.97) and significant (P < 0.01), indicating systematic judgments. CONCLUSIONS SimNewB provided a realistic presentation of clinical conditions that was preserved in the recordings. Clinicians could interpret clinical conditions systematically and accurately without feedback or detailed instructions. These methods are applicable to future research on the accuracy of clinical assessments in actual and simulated environments.
18
Abstract
Simulation, a strategy for improving the quality and safety of patient care, is used for the training of technical and nontechnical skills and for training in teamwork and communication. This article reviews simulation-based research, with a focus on anesthesiology, at 3 different levels of outcome: (1) as measured in the simulation laboratory, (2) as measured in clinical performance, and (3) as measured in patient outcomes. It concludes with a discussion of some current uses of simulation, which include the identification of latent failures and the role of simulation in continuing professional practice assessment for anesthesiologists.
Affiliation(s)
- Christine S Park
- Department of Anesthesiology, Northwestern University Feinberg School of Medicine, 251 East Huron Street, F5-704, Chicago, IL 60611, USA.
19
Decker S, Utterback VA, Thomas MB, Mitchell M, Sportsman S. Assessing continued competency through simulation: A call for stringent action. Nurs Educ Perspect 2011; 32:120-125. PMID: 21667795. DOI: 10.5480/1536-5026-32.2.120.
Abstract
This article proposes that simulation has potential as a method for validating the critical and reflective thinking skills and continued competency of registered nurses. The authors recognize the challenges and benefits of using simulation in assessing competency. Furthermore, the authors stress that the potential use of simulation in competency testing cannot be realized until educators and researchers acquire the specific knowledge and skills needed to make informed decisions and recommend policy.
Affiliation(s)
- Sharon Decker
- Texas Tech University Health Sciences Center School of Nursing, Lubbock, USA.
20
Nunnink L, Venkatesh B, Krishnan A, Vidhani K, Udy A. A prospective comparison between written examination and either simulation-based or oral viva examination of intensive care trainees' procedural skills. Anaesth Intensive Care 2010; 38:876-82. PMID: 20865872. DOI: 10.1177/0310057x1003800511.
Abstract
We compared the results of written assessment of intensive care trainees' procedural skills with results obtained from one of two live assessment formats, for the purpose of assessing the concurrent validity of the different test methods. Forty-five Australasian senior trainees in intensive care medicine completed a written test relating to a procedural skill, as well as either a simulation-format or oral viva assessment of the same procedural skill. We analysed the correlation between written exam results and results obtained from the simulation-format or oral viva assessment. For those who completed the simulation-format examination, we also maintained a narrative of actions and identified critical errors. There was limited correlation between written exam results and live (simulation or viva) procedure station results (r = 0.31). Correlation with written exam results was very low for simulation-format assessments (r = 0.08) but moderate for oral viva assessment (r = 0.58). Participants who passed a written exam based on management of a blocked tracheostomy scenario made a number of dangerous errors when managing a simulated patient in that scenario. The lack of correlation between exam formats supports multi-modal assessment, as it is currently not known which format best represents workplace performance. Correlation between written and oral viva results may indicate redundancy between those test formats, whereas the limited correlation between simulation and written exams may support the use of both formats as part of an integrated assessment strategy. We hypothesise that the identification of critical candidate errors in a simulation-format exam that were not exposed by a written exam may indicate better predictive validity for simulation-format examination of procedural skills.
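The concurrent-validity analysis above reduces to Pearson correlations between paired score lists, which can be sketched in a few lines. The scores below are invented for illustration (not the study's data), chosen so that the written/simulation correlation is very low and the written/viva correlation is high, mirroring the direction of the reported results.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / sqrt(sum((x - mx) ** 2 for x in xs) *
                      sum((y - my) ** 2 for y in ys))

# Hypothetical percentage scores for eight trainees on each assessment format.
written    = [72, 65, 80, 58, 90, 70, 62, 85]
simulation = [55, 60, 75, 70, 65, 50, 72, 68]
oral_viva  = [70, 62, 78, 60, 85, 72, 65, 80]

r_sim = pearson_r(written, simulation)   # ≈ 0.07: almost no correlation
r_viva = pearson_r(written, oral_viva)   # ≈ 0.98: strong correlation
print(f"written vs simulation r = {r_sim:.2f}; written vs viva r = {r_viva:.2f}")
```

A low r between written and simulation formats suggests the two formats capture different aspects of competence (supporting multi-modal assessment), while a high r between written and viva formats hints at redundancy between those two.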
Affiliation(s)
- L Nunnink
- Intensive Care Unit, Princess Alexandra Hospital, Brisbane, Queensland, Australia.
21
The use of multi-modality simulation in the retraining of the physician for medical licensure. J Clin Anesth 2010; 22:294-9. DOI: 10.1016/j.jclinane.2008.12.031.
22
Gallagher CJ, Tan JM. The Current Status of Simulation in the Maintenance of Certification in Anesthesia. Int Anesthesiol Clin 2010; 48:83-99. DOI: 10.1097/aia.0b013e3181eace5e.
23
Edler AA, Fanning RG, Chen MI, Claure R, Almazan D, Struyk B, Seiden SC. Patient simulation: a literary synthesis of assessment tools in anesthesiology. J Educ Eval Health Prof 2009; 6:3. PMID: 20046456. PMCID: PMC2796725. DOI: 10.3352/jeehp.2009.6.3.
Abstract
High-fidelity patient simulation (HFPS) has been hypothesized as a modality for assessing competency in knowledge and skill, but uniform methods for HFPS performance assessment (PA) have not yet been fully established. Anesthesiology founded the HFPS discipline and also leads in its PA. This project reviews the types, quality, and designated purpose of HFPS PA tools in anesthesiology. We systematically reviewed the anesthesiology literature referenced in PubMed to assess the quality and reliability of available PA tools in HFPS. Of 412 articles identified, 50 met our inclusion criteria. Seventy-seven percent of the studies have been published since 2000, and more recent studies demonstrated higher quality. Investigators reported a variety of test construction and validation methods. The most commonly reported test construction methods included modified Delphi techniques for item selection, reliability measurement using inter-rater agreement, and intra-class correlations between test items or subtests. Modern test theory, in particular generalizability theory, was used in nine (18%) of the studies. Test score validity has been addressed in multiple investigations, with a significant improvement in reporting accuracy, but the assessment of predictive validity has been low across the majority of studies. The usability and practicality of testing occasions and tools were only anecdotally reported. To comply more completely with the gold standards for PA design, both the shared experience of experts and the recognition of test construction standards are required, including reliability and validity measurements, instrument piloting, rater training, and explicit identification of the purpose and proposed use of the assessment tool.
Affiliation(s)
- Alice A Edler
- Department of Graduate Medical Education, Stanford Hospitals and Clinics, Stanford, CA
24
Lammers RL, Byrwa MJ, Fales WD, Hale RA. Simulation-based assessment of paramedic pediatric resuscitation skills. PREHOSP EMERG CARE 2009; 13:345-56. [PMID: 19499472 DOI: 10.1080/10903120802706161] [Citation(s) in RCA: 74] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
BACKGROUND Emergency medical services (EMS) providers infrequently encounter seriously ill and injured pediatric patients. Clinical simulations are useful for assessing skill level, especially for low-frequency, high-risk problems. OBJECTIVE To identify the most common performance deficiencies in paramedics' management of three simulated pediatric emergencies. METHODS Paramedics from five EMS agencies in Michigan were eligible subjects for this prospective, observational study. Three clinical assessment modules (CAMs) were designed and validated using pediatric simulators with varying technologic complexity. Scenarios included an infant cardiopulmonary arrest, sepsis/seizure, and child asthma/respiratory arrest. Each scenario required paramedics to perform an assessment and provide appropriate pediatric patient care within a 12-minute time limit. Trained instructors conducted the simulations by following strict guidelines for sequences of events and responses. Videos of CAMs were reviewed by an independent evaluator to verify scoring accuracy. Percentage of steps completed for each of the three scenarios and specific performance deficiencies were recorded. RESULTS Two hundred twelve paramedics completed the CAMs. The average percentages of steps completed were as follows: arrest CAM, 45.3%; asthma CAM, 51.6%; and sepsis CAM, 47.1%. Performance deficiencies included lack of airway support or protection; lack of support of ventilations or cardiac function; inappropriate use of length-based treatment tapes; and inaccurate calculation and administration of medications and fluids. CONCLUSION Multiple deficiencies in paramedics' performance of pediatric resuscitation skills were objectively identified using three manikin-based simulations. EMS educators and EMS medical directors should target these specific skill deficiencies when developing continuing education in prehospital pediatric patient care.
Affiliation(s)
- Richard Lee Lammers
- Department of Emergency Medicine, Michigan State University/Kalamazoo Center for Medical Studies, Kalamazoo, Michigan, USA.
25
A comparison of global rating scale and checklist scores in the validation of an evaluation tool to assess performance in the resuscitation of critically ill patients during simulated emergencies (abbreviated as "CRM simulator study IB"). Simul Healthc 2009; 4:6-16. [PMID: 19212245 DOI: 10.1097/sih.0b013e3181880472] [Citation(s) in RCA: 102] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/18/2023]
Abstract
BACKGROUND Crisis resource management (CRM) skills are a set of nonmedical skills required to manage medical emergencies. There is currently no gold standard for evaluation of CRM performance. A prior study examined the use of a global rating scale (GRS) to evaluate CRM performance. The current study compared the use of a GRS and a checklist as formal rating instruments to evaluate CRM performance during simulated emergencies. METHODS First-year and third-year residents participated in two simulator scenarios each. Three raters then evaluated resident CRM performance from edited video recordings, using both a GRS and a checklist. The Ottawa GRS provides a seven-point anchored ordinal scale for performance in five categories of CRM, and an overall performance score. The Ottawa CRM checklist provides 12 items in the five categories of CRM, with a maximum cumulative score of 30 points. Construct validity was measured on the basis of content validity, response process, internal structure, and response to other variables. T-test analysis of Ottawa GRS scores was conducted to examine response to the variable of level of training. Intraclass correlation coefficient (ICC) scores were used to measure inter-rater reliability for both scenarios. RESULTS Thirty-two first-year and 28 third-year residents participated in the study. Third-year residents produced higher mean scores for overall CRM performance than first-year residents (P < 0.05), and in all individual categories within the Ottawa GRS (P < 0.05) and the Ottawa CRM checklist (P < 0.05). This difference was noted for both scenarios and for each individual rater (P < 0.05). No statistically significant difference in resident scores was observed between scenarios for either instrument. ICC scores of 0.59 and 0.61 were obtained for Scenarios 1 and 2 with the Ottawa GRS, whereas ICC scores of 0.63 and 0.55 were obtained with the Ottawa CRM checklist.
Users indicated a strong preference for the Ottawa GRS given ease of scoring, presence of an overall score, and the potential for formative evaluation. CONCLUSION Construct validity seems to be present when using both the Ottawa GRS and CRM checklist to evaluate CRM performance during simulated emergencies. Data also indicate the presence of moderate inter-rater reliability when using both the Ottawa GRS and CRM checklist.
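The ICC values reported above (roughly 0.55-0.63) quantify inter-rater reliability. As an illustrative sketch only, using hypothetical scores rather than data from this study, a two-way random-effects, single-rater ICC, often written ICC(2,1), can be computed from a subjects-by-raters score matrix:

```python
# Illustrative ICC(2,1): two-way random effects, absolute agreement,
# single rater. Scores: one row per examinee, one column per rater.

def icc_2_1(scores):
    """Compute ICC(2,1) from a two-way ANOVA decomposition."""
    n = len(scores)          # number of subjects
    k = len(scores[0])       # number of raters
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(scores[i][j] for i in range(n)) / n for j in range(k)]

    # Mean squares for subjects (rows), raters (columns), and residual error.
    ms_rows = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    ms_cols = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)
    ms_err = sum(
        (scores[i][j] - row_means[i] - col_means[j] + grand) ** 2
        for i in range(n) for j in range(k)
    ) / ((n - 1) * (k - 1))

    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Hypothetical global-rating scores for 5 examinees rated by 3 raters.
ratings = [[4, 5, 4], [2, 2, 3], [5, 5, 5], [3, 4, 3], [1, 2, 2]]
print(round(icc_2_1(ratings), 2))
```

With identical ratings from every rater the statistic equals 1; values near 0.6, as reported above, are conventionally read as moderate reliability.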
26
Lee KHK, Grantham H, Boyd R. Comparison of high- and low-fidelity mannequins for clinical performance assessment. Emerg Med Australas 2008; 20:508-14. [DOI: 10.1111/j.1742-6723.2008.01137.x] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
27
Abstract
The historical roots of simulation might be described with the broadest definition of medical simulation: "an imitation of some real thing, state of affairs, or process" for the practice of skills, problem solving, and judgment. From the first "blue box" flight simulator to the military's impetus in the transfer of modeling and simulation technology to medicine, worldwide acceptance of simulation training is growing. Large collaborative simulation centers support the expectation of increases in multidisciplinary, interprofessional, and multimodal simulation training. Virtual worlds, both immersive and Web-based, are at the frontier of innovation in medical education.
28
The use of multimodality simulation in the evaluation of physicians with suspected lapsed competence. J Crit Care 2008; 23:197-202. [PMID: 18538212 DOI: 10.1016/j.jcrc.2007.12.002] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/21/2007] [Accepted: 12/05/2007] [Indexed: 11/23/2022]
Abstract
A simulator-based educational program has been incorporated into many anesthesia residency training programs. The effectiveness of this method of teaching has been validated by several studies, and it is generally accepted as an effective method of resident education. The effectiveness of simulator-based education has been attributed to performance evaluation and constructive critical feedback through debriefing. Perhaps this process can also be used to evaluate the competence of practicing physicians. We report our experience using multimodality simulator technology to assess physicians who may have allowed their competence to lapse. We discuss our simulator-based assessment process and the strengths and limitations of our program. We also discuss the legal ramifications of participating in such assessments. Because of confidentiality agreements signed by all parties involved in this process, cases are discussed in general terms to assure anonymity.
29
Lui PW. Things we should know when designing simulator-based teaching in difficult airway management. J Chin Med Assoc 2008; 71:163-5. [PMID: 18436497 DOI: 10.1016/s1726-4901(08)70098-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/21/2022] Open
30
Morgan PJ, Lam-McCulloch J, Herold-McIlroy J, Tarshis J. Simulation performance checklist generation using the Delphi technique. Can J Anaesth 2008; 54:992-7. [PMID: 18056208 DOI: 10.1007/bf03016633] [Citation(s) in RCA: 48] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022] Open
Abstract
PURPOSE Performance assessment using high fidelity simulation is problematic, owing to the difficulty of developing valid and reliable evaluation tools. The Delphi technique is a consensus-based content generation method used for multiple purposes such as policy development, best-evidence practice guidelines, and competency assessments. The purpose of this study was to develop checklists using a modified Delphi technique to evaluate the performance of practicing anesthesiologists managing two simulated scenarios. METHODS The templates for two simulation scenarios were emailed to five anesthesiologists, who were asked to generate performance items. Data were collated anonymously and returned. An a priori decision was made to delete items endorsed by ≤ 20% of participants. This process of collection, collation, and re-evaluation was repeated until consensus was reached. Four independent raters used the checklist to assess three subjects managing the two simulation scenarios. Interrater reliability was assessed using average-measures intraclass correlation (ICC), and repeated-measures analysis of variance (ANOVA) was used to assess differences in difficulty between scenarios. RESULTS The final checklists included 131 items for scenario 1 and 126 items for scenario 2. The mean inter-rater reliability was 0.921 for scenario 1 and 0.903 for scenario 2. Repeated-measures ANOVA revealed no statistically significant difference in difficulty between scenarios. DISCUSSION The Delphi technique can be very useful for generating consensus-based evaluation tools with high content and face validity compared with subjective evaluative tools. Since there was no difference in scenario difficulty, these scenarios can be used to determine the effect of educational interventions on performance.
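The item-retention step described in this abstract, dropping items endorsed by 20% or fewer of panelists between Delphi rounds, can be sketched as follows; the item texts and vote counts are hypothetical, not drawn from the study:

```python
# Sketch of the Delphi collation step: between rounds, keep only items
# endorsed by more than the cutoff proportion of panelists.

def retain_items(endorsements, n_panelists, cutoff=0.20):
    """Keep items whose endorsement proportion exceeds the cutoff.

    endorsements: dict mapping item text -> number of endorsing panelists.
    """
    return [
        item for item, votes in endorsements.items()
        if votes / n_panelists > cutoff
    ]

# Hypothetical round-one votes from a 5-member panel.
votes = {
    "Checks monitor alarms": 5,
    "Calls for help early": 4,
    "Documents drug doses": 1,   # 1/5 = 20%, so this item is dropped
}
print(retain_items(votes, n_panelists=5))  # prints ['Checks monitor alarms', 'Calls for help early']
```

The surviving items would then be recirculated to the panel until consensus, as the abstract describes.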
Affiliation(s)
- Pamela J Morgan
- Department of Anesthesia, Women's College Hospital, University of Toronto, Toronto, Ontario, Canada.
31
Brett-Fleegler MB, Vinci RJ, Weiner DL, Harris SK, Shih MC, Kleinman ME. A simulator-based tool that assesses pediatric resident resuscitation competency. Pediatrics 2008; 121:e597-603. [PMID: 18283069 DOI: 10.1542/peds.2005-1259] [Citation(s) in RCA: 80] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
BACKGROUND Competency in pediatric resuscitation is an essential goal of pediatric residency training. Both the exigencies of patient care and the Accreditation Council for Graduate Medical Education require assessment of this competency. Although there are standard courses in pediatric resuscitation, no published, validated assessment tool exists for pediatric resuscitation competency. OBJECTIVE The purpose of this work was to develop a simulation-based tool for the assessment of pediatric residents' resuscitation competency and to evaluate the tool's reliability and, preliminarily, its validity in a pilot study. METHODS We developed a 72-question yes-or-no questionnaire, the Tool for Resuscitation Assessment Using Computerized Simulation, representing 4 domains of resuscitation competency: basic resuscitation, airway support, circulation and arrhythmia management, and leadership behavior. We enrolled 25 subjects at each of 5 different training levels, all of whom participated in 3 standardized code scenarios using the Laerdal SimMan universal patient simulator. Performances were videotaped and then reviewed by 2 independent expert raters. RESULTS The final version of the tool is presented. The intraclass correlation coefficient between the 2 raters ranged from 0.70 to 0.76 for the 4 domain scores and was 0.80 for the overall summary score. Between the 2 raters, the mean percent exact agreement across items in each domain ranged from 81.0% to 85.1% and averaged 82.1% across all of the items in the tool. Across subject groups, there was a trend toward increasing scores with increased training, which was statistically significant for the airway and summary scores. CONCLUSIONS In this pilot study, the Tool for Resuscitation Assessment Using Computerized Simulation demonstrated good interrater reliability within each domain and for summary scores. Performance analysis showed trends toward improvement with increasing years of training, providing preliminary construct validity.
Affiliation(s)
- Marisa B Brett-Fleegler
- Division of Emergency Medicine, Main South Basement, Room CB0120, Children's Hospital Boston, 300 Longwood Ave, Boston, MA 02115, USA.
32
Trauma Training in Simulation: Translating Skills From SIM Time to Real Time. J Trauma 2008; 64:255-63; discussion 263-4. [DOI: 10.1097/ta.0b013e31816275b0] [Citation(s) in RCA: 90] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
33
Sundar E, Sundar S, Pawlowski J, Blum R, Feinstein D, Pratt S. Crew resource management and team training. Anesthesiol Clin 2007; 25:283-300. [PMID: 17574191 DOI: 10.1016/j.anclin.2007.03.011] [Citation(s) in RCA: 52] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/15/2023]
Abstract
This article reviews medical team training using the principles of crew resource management (CRM). It also briefly discusses crisis resource management, a subset of CRM, as applied to high-acuity medical situations. Guidelines on setting up medical team training programs are presented. Team training programs are classified and examples of simulation-based and classroom-based programs are offered and their merits discussed. Finally, a brief look at the future of team training concludes this review article.
Affiliation(s)
- Eswar Sundar
- Department of Anesthesiology, Harvard Medical School, Beth Israel Deaconess Medical Center, CC-539, 1 Deaconess Road, Boston, MA 02215, USA.
34
Kim J, Neilipovitz D, Cardinal P, Chiu M, Clinch J. A pilot study using high-fidelity simulation to formally evaluate performance in the resuscitation of critically ill patients: The University of Ottawa Critical Care Medicine, High-Fidelity Simulation, and Crisis Resource Management I Study. Crit Care Med 2006; 34:2167-74. [PMID: 16775567 DOI: 10.1097/01.ccm.0000229877.45125.cc] [Citation(s) in RCA: 207] [Impact Index Per Article: 11.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVE Resuscitation of critically ill patients requires medical knowledge, clinical skills, and nonmedical skills, or crisis resource management (CRM) skills. There is currently no gold standard for evaluation of CRM performance. The primary objective was to examine the use of high-fidelity simulation as a medium to evaluate CRM performance. Since no gold standard for measuring performance exists, the secondary objective was the validation of a measuring instrument for CRM performance: the Ottawa Crisis Resource Management Global Rating Scale (or Ottawa GRS). DESIGN First- and third-year residents participated in two simulator scenarios, recreating emergencies seen in acute care settings. Three raters then evaluated resident performance using edited video recordings of simulator performance. SETTING A Canadian university tertiary hospital. INTERVENTIONS The Ottawa GRS was used, which provides a 7-point Likert scale for performance in five categories of CRM and an overall performance score. MEASUREMENTS AND MAIN RESULTS Construct validity was measured on the basis of content validity, response process, internal structure, and response to other variables. One variable measured in this study was the level of training. A t-test analysis of Ottawa GRS scores was conducted to examine response to the variable of level of training. Intraclass correlation coefficient scores were used to measure interrater reliability for both scenarios. Thirty-two first-year and 28 third-year residents participated in the study. Third-year residents produced higher mean scores for overall CRM performance than first-year residents (p < .0001) and in all individual categories within the Ottawa GRS (p = .0019 to p < .0001). This difference was noted for both scenarios and for each individual rater (p = .0061 to p < .0001). No statistically significant difference in resident scores was observed between scenarios.
Intraclass correlation coefficient scores of .59 and .61 were obtained for scenarios 1 and 2, respectively. CONCLUSIONS Data obtained using the Ottawa GRS in measuring CRM performance during high-fidelity simulation scenarios support evidence of construct validity. Data also indicate the presence of acceptable interrater reliability when using the Ottawa GRS.
Affiliation(s)
- John Kim
- Division of Critical Care Medicine and Department of Anesthesiology at the University of Ottawa, The Ottawa Hospital, USA
35
Weller JM, Robinson BJ, Jolly B, Watterson LM, Joseph M, Bajenov S, Haughton AJ, Larsen PD. Psychometric characteristics of simulation-based assessment in anaesthesia and accuracy of self-assessed scores. Anaesthesia 2005; 60:245-50. [PMID: 15710009 DOI: 10.1111/j.1365-2044.2004.04073.x] [Citation(s) in RCA: 45] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
Abstract
The purpose of this study was to define the psychometric properties of a simulation-based assessment of anaesthetists. Twenty-one anaesthetic trainees took part in three highly standardised simulations of anaesthetic emergencies. Scenarios were videotaped and rated independently by four judges. Trainees also assessed their own performance in the simulations. Results were analysed using generalisability theory to determine the influence of subject, case and judge on the variance in judges' scores and to determine the number of cases and judges required to produce a reliable result. Self-assessed scores were compared to the mean score of the judges. The results suggest that 12-15 cases are required to rank trainees reliably on their ability to manage simulated crises. Greater reliability is gained by increasing the number of cases than by increasing the number of judges. There was modest but significant correlation between self-assessed scores and external assessors' scores (rho = 0.321; p = 0.01). At the lower levels of performance, trainees consistently overrated their performance compared to those performing at higher levels (p = 0.0001).
Affiliation(s)
- J M Weller
- Faculty Education Unit, University of Auckland, Private Bag 92019, Auckland, New Zealand.
36
Abstract
PURPOSE With the advent of competency-based curricula, technologies such as full-scale computer simulators have acquired an increasingly important role in anesthesia, both in training and in evaluation. This article reviews the current role of full-scale computer simulators in teaching and evaluation in anesthesia. SOURCE This review draws on the existing anesthesia and medical education literature to examine and assess the current role of full-scale computer simulators in anesthesia education today. PRINCIPAL FINDINGS The last decade has witnessed a major increase in the use of full-scale computer simulators in anesthesia, with applications including teaching and training, evaluation, and research. Despite this increasing use, definitive studies evaluating the simulators' cost-effectiveness, their efficacy compared with traditional training methods, and their impact on patient outcome are still pending. Although there is some preliminary evidence of reliability and validity in using the simulator to evaluate clinical competence, development in this area has not progressed enough to justify its use in formal, summative evaluation of competence in anesthesia at this time. CONCLUSIONS As technology acquires an increasingly important role in medical education, full-scale computer simulators represent an exciting potential for anesthesia. However, their full potential and role are still in development and will require a dovetailing of clinical theory and practice with current research in medical education.
Affiliation(s)
- Anne K Wong
- Department of Anaesthesia, McMaster University, St. Joseph's Healthcare, Hamilton, Ontario, Canada.
37
Morgan PJ, Cleave-Hogg D, DeSousa S, Tarshis J. High-fidelity patient simulation: validation of performance checklists. Br J Anaesth 2004; 92:388-92. [PMID: 14742327 DOI: 10.1093/bja/aeh081] [Citation(s) in RCA: 56] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
BACKGROUND Standardized scenarios can be used for performance assessments geared to the level of the learner. The purpose of this study was to validate checklists used for the assessments of medical students' performance using high-fidelity patient simulation. METHODS Our undergraduate committee designed 10 scenarios based on curriculum objectives. Fifteen faculty members with undergraduate educational experience identified items considered appropriate for medical students' performance level and identified items that, if omitted, would negatively affect grades. Items endorsed by less than 20% of faculty were omitted. For remaining items, weighting was calculated according to faculty responses. Students managed at least one scenario during which their performance was videotaped. Two raters independently completed the checklists for three consecutive sessions to determine inter-rater reliability. Validity was determined using Cronbach's alpha, with an alpha ≥ 0.6 and ≤ 0.9 considered acceptable internal consistency. Item analysis was performed by recalculating Cronbach's alpha with each item deleted to determine if that item contributed to a low internal consistency. RESULTS 135 students participated in the study. Inter-rater reliability of the two raters determined on the third session was 0.97, and therefore one rater completed the remaining performance assessments. Cronbach's alpha for the 10 scenarios ranged from 0.16 to 0.93, with two scenarios demonstrating acceptable internal consistency with all items. Three scenarios demonstrated acceptable internal consistency with one item deleted. CONCLUSIONS Five scenarios developed for this study were shown to be valid when using the faculty criteria for expected performance level.
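Cronbach's alpha, used above as the measure of internal consistency, compares the sum of per-item score variances with the variance of examinees' total scores. A minimal sketch with hypothetical checklist data (not the study's data):

```python
# Illustrative Cronbach's alpha for a performance checklist.
# items: one list of examinee scores per checklist item.

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(items)           # number of items
    n = len(items[0])        # number of examinees

    def var(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(item) for item in items) / var(totals))

# Hypothetical 4-item pass/fail checklist scored for 6 examinees.
item_scores = [
    [1, 0, 1, 1, 0, 1],
    [1, 0, 1, 1, 1, 1],
    [1, 1, 1, 0, 0, 1],
    [1, 0, 1, 1, 0, 0],
]
print(round(cronbach_alpha(item_scores), 2))  # prints 0.62
```

Values in the 0.6-0.9 band, as the abstract's criterion states, are conventionally treated as acceptable internal consistency.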
Affiliation(s)
- P J Morgan
- Department of Anesthesia, Sunnybrook and Women's College Health Sciences Centre, Women's College Campus, 76 Grenville Street, Toronto, Ontario, Canada M5S 1B2.
38
Tsai TC, Harasym PH, Nijssen-Jordan C, Jennett P, Powell G. The quality of a simulation examination using a high-fidelity child manikin. MEDICAL EDUCATION 2003; 37 Suppl 1:72-78. [PMID: 14641642 DOI: 10.1046/j.1365-2923.37.s1.3.x] [Citation(s) in RCA: 29] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/24/2023]
Abstract
PURPOSE Developing quality examinations that measure physicians' clinical performance in simulations is difficult. The goal of this study was to develop a quality simulation examination, using a high-fidelity child manikin, for evaluating paediatric residents' competence in managing critical cases in a simulated emergency room. Quality was determined by evidence of the reliability, validity and feasibility of the examination. In addition, the participants' responses regarding its realism, effectiveness and value are presented. METHOD Scenario scripts and rating instruments were carefully developed in this study. Experts were used to validate the case scenarios and provide evidence of construct validity. Eighteen paediatric residents, 'working' in pairs, participated in a manikin-based simulation pre-test, a training session and a post-test. Three independent raters rated the participants' performance on task-specific technical skills, medications used and behaviours displayed. At the end of the simulation, the participants completed an evaluation questionnaire. RESULTS The manikin-based simulation examination was found to be a realistic, valid and reliable tool. Validity (i.e. face, content and construct) of the test instrument was evident. The level of inter-rater concordance on participants' clinical performance was good to excellent. The item analysis showed good to excellent internal consistency for all the performance scores except the post-test technical score. CONCLUSIONS With a carefully designed rating instrument and simulation operation, the manikin-based simulation examination was shown to be reliable and valid. However, further refinement of the test instrument will be required for higher-stakes examinations.
Affiliation(s)
- T-C Tsai
- Department of Pediatrics, Mackay Memorial Hospital, Taipei, Taiwan.
39
40
Olympio MA, Whelan R, Ford RPA, Saunders ICM. Failure of simulation training to change residents' management of oesophageal intubation. Br J Anaesth 2003; 91:312-8. [PMID: 12925467 DOI: 10.1093/bja/aeg183] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022] Open
Abstract
BACKGROUND There are few scientific reports documenting the effects of simulation training on learning. Issues of scientific validity challenge investigators who measure such outcomes. We perceived a failure of residents to change their technical management of oesophageal intubation after simulation training and sought clarification of this observation. METHODS Twenty-one residents were randomly exposed to two deliberate oesophageal intubation scenarios, first as a junior assistant (JS group) or as a senior managing resident (SS group), and secondly as a senior managing resident. After the first episode, residents were given an explanation and demonstration of the suggested technical management strategy, including: (i) confirmation of oesophageal intubation with a second direct laryngoscopy; and (ii) concurrent insertion of a second tube into the trachea. After the second episode, we retrospectively sought to confirm improvement in technical management within the SS group by measuring videotaped performances. Questionnaires were sent to the residents before and after their performance results were reported. RESULTS There were 14 SS and seven JS subjects. Within the SS group, there was no improvement in "confirmation of oesophageal intubation with direct laryngoscopy" (8/14 vs 9/14), nor in "concurrent insertion of a second (tracheal) tube" (1/14 vs 2/14). Questionnaire responses offered considerable insight into these negative results. CONCLUSIONS This failure to change may have been secondary to a lack of criterion validity, a lack of repetition, or the long interval between episodes. The expected management strategies were not regarded as advantageous in simulation, but they were successfully adopted in actual clinical emergencies.
Affiliation(s)
- M A Olympio
- Department of Anesthesiology, Wake Forest University School of Medicine, Winston-Salem, NC 27157-1009, USA.
41
Weller J, Bloch M, Young S, Maze M, Oyesola S, Wyner J, Dob D, Haire K, Durbridge J, Walker T, Newble D. Evaluation of high fidelity patient simulator in assessment of performance of anaesthetists. Br J Anaesth 2003. [DOI: 10.1093/bja/aeg002] [Citation(s) in RCA: 80] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
42
The Use of a Human Patient Simulator in the Evaluation of and Development of a Remedial Prescription for an Anesthesiologist with Lapsed Medical Skills. Anesth Analg 2002. [DOI: 10.1213/00000539-200201000-00028] [Citation(s) in RCA: 32] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
43
Wong TKS, Chung JWY. Diagnostic reasoning processes using patient simulation in different learning environments. J Clin Nurs 2002; 11:65-72. [PMID: 11845757 DOI: 10.1046/j.1365-2702.2002.00580.x] [Citation(s) in RCA: 15] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
Abstract
The purpose of the study was to explore the diagnostic reasoning process among nursing students from different learning environments. A case-study design was adopted. Twenty subjects were randomly drawn from the last year of a pre-registration nursing programme in two institutions, 10 from a university and 10 from a nursing school. They were asked to complete the Biggs Study Process Questionnaire and identify the differential diagnosis for the three simulated scenarios. The results showed no significant difference in study approaches between the two groups. Two subjects from the university made an incorrect differential diagnosis, as did one from the nursing school. Subjects from the university showed a mix of horizontal (66.6%) and vertical reasoning patterns (33.4%), while those from the nursing school used horizontal (100%) reasoning patterns. The results indicated that all subjects from the nursing school adopted backward chaining strategies (horizontal) for decision-making, i.e. hypothesis-driven. About a third of the subjects from the university adopted forward chaining strategies (vertical), i.e. data-driven. The study did not show any particular advantages from either of the two learning environments in terms of study approach. However, it highlighted the variations in decision strategies among students in the university setting.
Affiliation(s)
- Thomas K S Wong
- Department of Nursing and Health Sciences, Hong Kong Polytechnic University, Hunghom, Hong Kong
44
Rosenblatt MA, Abrams KJ. The use of a human patient simulator in the evaluation of and development of a remedial prescription for an anesthesiologist with lapsed medical skills. Anesth Analg 2002; 94:149-53, table of contents. [PMID: 11772818 DOI: 10.1097/00000539-200201000-00028] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
Abstract
The New York State Society of Anesthesiologists' Committee on Continuing Medical Education and Remediation has been charged by the Office of Professional Medical Conduct of the New York State Department of Health with developing a remediation program for individuals ordered into retraining. We describe the development of an anesthesiology-specific evaluation that identifies areas of deficiency, both to determine a candidate's suitability and to facilitate the creation of an appropriate prescription for retraining. A human patient simulator was used to aid in gathering information during the evaluation process. Specifically, simulation allowed exploration of a candidate's preparation, approach to clinical situations, technical abilities, response to clinical problems, problem-solving ability, and accuracy of medical record keeping. Human patient simulation should be considered a valuable tool in the process of evaluating physicians with lapsed medical skills.
Affiliation(s)
- Meg A Rosenblatt
- Department of Anesthesiology, The Mount Sinai School of Medicine, New York, New York 10029-6574, USA.
45
Abstract
The number of short 'life support' and emergency care courses available is increasing. Variability in examiner assessments has been reported previously in more traditional types of examinations, but there are few data on the reliability of the assessments used on these newer courses. This study evaluated the reliability and consistency of instructor marking for the Resuscitation Council UK Advanced Life Support Course. Twenty-five instructors from 15 centres throughout the UK were shown four staged, video-recorded defibrillation tests (one repeated) and three cardiac arrest simulation tests in order to assess inter-observer and intra-observer variability. These tests form part of the final assessment of competence on an Advanced Life Support course. Significant variability was demonstrated between instructors, with poor levels of agreement of 52-80% for defibrillation tests and 52-100% for cardiac arrest simulation tests. There was evidence of differences in the observation/recognition of errors and in the rating tendencies of instructors. Four instructors made a different pass/fail decision when shown defibrillation test 2 a second time, leading to only moderate intra-observer agreement (kappa=0.43). In conclusion, there is significant variability between instructors in the assessment of advanced life support skills, which may undermine the present assessment mechanisms for the advanced life support course. Validation of the assessment tools for the rapidly growing number of life support courses is needed, with urgent steps to improve reliability where necessary.
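The intra-observer figure above is Cohen's kappa, which corrects raw percent agreement for the agreement expected by chance from each rater's marginal frequencies. A minimal sketch of the computation; the pass/fail data below are invented for illustration and do not reproduce the study's ratings:

```python
def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two sets of categorical
    ratings of the same items: (p_o - p_e) / (1 - p_e)."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    labels = set(ratings_a) | set(ratings_b)
    # observed proportion of exact agreement
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # agreement expected by chance, from each rater's label frequencies
    p_e = sum((ratings_a.count(l) / n) * (ratings_b.count(l) / n)
              for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical pass/fail decisions by one instructor on two viewings
first  = ["pass", "pass", "fail", "pass", "fail", "fail", "pass", "fail"]
second = ["pass", "fail", "fail", "pass", "pass", "fail", "pass", "fail"]
print(round(cohens_kappa(first, second), 2))  # → 0.5
```

On the common Landis-Koch convention, values around 0.41-0.60 (such as the study's 0.43) are read as "moderate" agreement.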
Affiliation(s)
- G D Perkins
- Department of Intensive Care Medicine, Birmingham Heartlands Hospital, Bordesley Green East, Birmingham B9 5SS, UK.
46
Morgan PJ, Cleave-Hogg DM, Guest CB, Herold J. Validity and reliability of undergraduate performance assessments in an anesthesia simulator. Can J Anaesth 2001; 48:225-33. [PMID: 11305821 DOI: 10.1007/bf03019750] [Citation(s) in RCA: 46] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022] Open
Abstract
PURPOSE To examine the validity and reliability of performance assessment of undergraduate students using the anesthesia simulator as an evaluation tool. METHODS After ethics approval and informed consent, 135 final year medical students and 5 elective students participated in a videotaped simulator scenario with a Link-Med Patient Simulator (CAE-Link Corporation). Scenarios were based on published educational objectives of the undergraduate curriculum in anesthesia at the University of Toronto. During the simulator sessions, faculty followed a script guiding student interaction with the mannequin. Two faculty independently viewed and evaluated each videotaped performance with a 25-point criterion-based checklist. Means and standard deviations of simulator-based marks were determined and compared with clinical and written evaluations received during the rotation. Internal consistency of the evaluation protocol was determined using inter-item and item-total correlations and correlations of specific simulator items to existing methods of evaluation. RESULTS Mean reliability estimates for single and average paired assessments were 0.77 and 0.86 respectively. Means of simulator scores were low and there was minimal correlation between the checklist and clinical marks (r = 0.13), checklist and written marks (r = 0.19) and clinical and written marks (r = 0.23). Inter-item and item-total correlations varied widely and correlation between simulator items and existing evaluation tools was low. CONCLUSIONS Simulator checklist scoring demonstrated acceptable reliability. Low correlation between different methods of evaluation may reflect reliability problems with the written and clinical marks, or that different aspects are being tested. The performance assessment demonstrated low internal consistency and further work is required.
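The jump from 0.77 (a single assessor) to 0.86 (the average of paired assessors) is consistent with the Spearman-Brown prophecy formula, which predicts the reliability of the mean of k parallel raters from the single-rater reliability. A quick check; the function name is ours, not the study's:

```python
def spearman_brown(r_single, k=2):
    """Predicted reliability of the mean of k parallel raters,
    given single-rater reliability r_single."""
    return k * r_single / (1 + (k - 1) * r_single)

print(round(spearman_brown(0.77, k=2), 2))  # → 0.87, close to the reported 0.86
```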
Affiliation(s)
- P J Morgan
- Department of Anesthesia, Sunnybrook and Women's College Health Sciences Centre, University of Toronto, Ontario, Canada.
47
Byrne AJ, Greaves JD. Assessment instruments used during anaesthetic simulation: review of published studies. Br J Anaesth 2001; 86:445-50. [PMID: 11573541 DOI: 10.1093/bja/86.3.445] [Citation(s) in RCA: 87] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022] Open
Abstract
This review was undertaken to discover what assessment instruments have been used as measures of performance during anaesthesia simulation and whether their validity and reliability have been established. The literature describing the assessment of performance during simulated anaesthesia amounted to 13 reports published between 1980 and 2000. Only four of these were designed to investigate the validity or reliability of the assessment systems. We conclude that the efficacy of methodologies for assessment of performance during simulation is largely undetermined. The introduction of simulator-based tests for certification or re-certification of anaesthetists would be premature.
48
Ali J, Gana TJ, Howard M. Trauma mannequin assessment of management skills of surgical residents after advanced trauma life support training. J Surg Res 2000; 93:197-200. [PMID: 10945963 DOI: 10.1006/jsre.2000.5968] [Citation(s) in RCA: 38] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
Abstract
BACKGROUND We tested the effectiveness of Advanced Trauma Life Support (ATLS) training among surgical residents using a specially designed mannequin. MATERIALS AND METHODS Thirty-two Postgraduate Year I surgical residents were randomly assigned to two groups of 16 each. By use of a trauma mannequin, the 32 residents' performances were scored using four trauma scenarios before 16 residents (ATLS group) completed a standard ATLS course. Performances were also scored after the ATLS course on another four trauma scenarios. The scores were standardized to a maximum of 20 for each scenario. Organized Approach scores with a range of 1 to 5, Priority scores ranging from 1 to 7, and global ratings of Honors, Pass, Borderline, or Fail were assigned for each clinical scenario. RESULTS The pre-ATLS assessment scores were similar for both groups, ranging between 9.4 +/- 3.5 and 11.4 +/- 2.9 for the ATLS group and between 10.2 +/- 3.8 and 11.4 +/- 3.9 for the non-ATLS group. The ATLS group scores ranged from 16.0 +/- 1.3 to 17.4 +/- 3.1 after the course and the non-ATLS group scores ranged from 11.4 +/- 4.2 to 12.9 +/- 4.0 (P < 0.05). Pre-ATLS Organized Approach scores were 2.9 +/- 1.0 and 2.7 +/- 1.1 (NS) for the ATLS and non-ATLS groups, respectively, with post-ATLS scores being significantly higher in the ATLS group (4.9 +/- 1.2 compared with 2.8 +/- 1.2 for the non-ATLS group, P < 0.05). Initial Priority scores were also similar for both groups (3.2 +/- 1.4 for the ATLS group and 3.3 +/- 2.0 for the non-ATLS group). Post-ATLS Priority scores were significantly higher (6.4 +/- 1.4) in the ATLS group compared with 4.2 +/- 1.9 for the non-ATLS group (P < 0.05). The pre-ATLS global ratings were similar for both groups, and post-ATLS there were 10 Honors ratings in the ATLS group and none in the control group.
CONCLUSIONS Using a trauma mannequin for assessment, surgical residents completing the ATLS course demonstrated superior resuscitation skills compared with the non-ATLS group.
Affiliation(s)
- J Ali
- Department of Surgery, University of Toronto at St. Michael's Hospital, 30 Bond Street, Toronto, Ontario, M5B 1W8, Canada
49
Ellis C, Hughes G. Use of human patient simulation to teach emergency medicine trainees advanced airway skills. J Accid Emerg Med 1999; 16:395-9. [PMID: 10572807 PMCID: PMC1343398 DOI: 10.1136/emj.16.6.395] [Citation(s) in RCA: 27] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Affiliation(s)
- C Ellis
- Emergency Department, Wellington Hospital, New Zealand
50
Gouvitsos F, Vallet B, Scherpereel P. [Anesthesia simulators: benefits and limits of experience gained at several European university hospitals]. ANNALES FRANCAISES D'ANESTHESIE ET DE REANIMATION 1999; 18:787-95. [PMID: 10486633 DOI: 10.1016/s0750-7658(00)88459-2] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
Simulation has become essential in all situations where reality is too risky, too expensive, difficult to manage, or inaccessible. In anaesthesia, the low rate of accidents and incidents, as well as the necessity of ensuring patient safety, limits education and training in crisis management. Advances in computing have allowed the development of realistic anaesthesia simulators that reproduce the usual environment of an operating room, making it possible to simulate a wide range of events. Most clinical incidents, mishaps, and manipulation errors can be simulated, and video recording allows attention to be focused on human factors. We assessed simulators in three European university hospitals. In Brussels, as in Leiden, simulation was mainly used for training in crisis management. In Basel, the complete operating room staff participated in sessions that also included surgical simulation, and improving communication within the team was one of the main goals. Simulation is valuable for residents' training, as well as for continuing medical education, in crisis management and in gaining a better understanding of human factors. It poses no risk to the patient, and video facilities make it easier to review selected cases repeatedly. However, its use for evaluation seems premature, given the absence of studies demonstrating the validity and reproducibility of results obtained with simulation. Beyond technical limitations, which are continually being addressed, the development of simulation is hindered by the very high cost of equipment and instructors.
Affiliation(s)
- F Gouvitsos
- Département d'anesthésie-réanimation, hôpital Sainte-Marguerite, Marseille, France