1. Butler-O'Hara M, Goldman M, Aspenleiter T, Vanini C, Dadiz R. Comprehensive Program Redesign: Procedural Education, Quality Improvement, and Credentialing Needs for Advanced Practice Providers. Neonatal Netw 2024;43:343-355. [PMID: 39753353] [DOI: 10.1891/nn-2024-0020]
Abstract
Advanced practice providers (APPs) experience limited clinical opportunities to perform neonatal procedures to maintain competency and hospital credentialing, especially high-acuity procedures that are extremely rare but crucial during patient emergencies. Incorporating simulation as part of continuing professional education can help APPs maintain clinical procedural competency and learn new procedural techniques, improving the quality and safety of procedures performed in the clinical setting. In 2013, we successfully developed and implemented an annual didactic and simulation-based neonatal procedural skills program. Since then, our APP group has experienced significant growth, which introduced challenges in sustaining a high-quality program that participants would value. These challenges presented the opportunity for a major program redesign addressing education, competency, credentialing, safety, and quality improvement. In this article, we describe the challenges uncovered by a comprehensive needs assessment that informed the program redesign. We also present an evaluation of the redesigned program, which includes learner, patient care, and systems-based outcomes.
2. Sawyer T, Gray MM. Competency-based assessment in neonatal simulation-based training. Semin Perinatol 2023;47:151823. [PMID: 37748942] [DOI: 10.1016/j.semperi.2023.151823]
Abstract
Simulation is a cornerstone of training in neonatal clinical care, allowing learners to practice skills in a safe and controlled environment. Competency-based assessment provides a systematic approach to evaluating technical and behavioral skills observed in the simulation environment to ensure the learner is prepared to safely perform the skill in a clinical setting. Accurate assessment of competency requires tools with evidence of validity and reliability. There has been considerable work on the use of competency-based assessment in the field of neonatology. In this chapter, we review neonatal simulation-based training, examine competency-based assessment tools, explore methods to gather evidence of validity and reliability, and review an evidence-based approach to competency-based assessment using simulation.
Affiliation(s)
- Taylor Sawyer: Division of Neonatology, Department of Pediatrics, University of Washington School of Medicine, Seattle Children's Hospital, Seattle, Washington, United States; Neonatal Education and Simulation-based Training (NEST) Program, Division of Neonatology, Department of Pediatrics, University of Washington School of Medicine, Seattle, Washington, United States
- Megan M Gray: Division of Neonatology, Department of Pediatrics, University of Washington School of Medicine, Seattle Children's Hospital, Seattle, Washington, United States; Neonatal Education and Simulation-based Training (NEST) Program, Division of Neonatology, Department of Pediatrics, University of Washington School of Medicine, Seattle, Washington, United States
3. Liu A, Duffy M, Tse S, Zucker M, McMillan H, Weldon P, Quet J, Long M. Concurrent versus terminal feedback: The effect of feedback delivery on lumbar puncture skills in simulation training. Med Teach 2023;45:906-912. [PMID: 36931315] [DOI: 10.1080/0142159x.2023.2189540]
Abstract
INTRODUCTION Simulation-based medical education (SBME) is widely used to teach bedside procedural skills. Feedback is crucial to SBME, but research on the optimal timing of feedback to support novice learners' skill development has produced conflicting results. METHODS We randomly assigned 32 novice medical students to receive feedback either during (concurrent) or after (terminal) lumbar puncture (LP) practice trials. Participants completed pre- and post-acquisition tests, as well as retention and transfer tests, graded on an LP checklist by two blinded expert raters. Cognitive load and anxiety were also assessed, as were learners' perceptions of feedback. RESULTS After controlling for baseline levels, participants who received concurrent feedback demonstrated significantly higher LP checklist scores (M = 91.54, SE = 1.90) than those who received terminal feedback (M = 85.64, SE = 1.90), collapsed across post-acquisition, retention, and transfer tests. There was no difference in cognitive load or anxiety between groups. In open-ended responses, participants who received concurrent feedback expressed satisfaction with their learning experience more often than those who received terminal feedback. DISCUSSION AND CONCLUSIONS Concurrent feedback may be superior to terminal feedback when teaching novice learners complex procedures and has the potential to improve learning if incorporated into SBME and clinical teaching. Further research is needed to elucidate the underlying cognitive processes that explain this finding.
Affiliation(s)
- Anna Liu: School of Medicine and Dentistry, University of Western Ontario, London, Canada
- Melissa Duffy: Department of Educational Studies, University of South Carolina, Columbia, SC, USA
- Sandy Tse: Department of Pediatrics, University of Ottawa, Ottawa, Canada; Children's Hospital of Eastern Ontario, Ottawa, Canada
- Marc Zucker: Department of Pediatrics, University of Ottawa, Ottawa, Canada; Children's Hospital of Eastern Ontario, Ottawa, Canada
- Hugh McMillan: Department of Pediatrics, University of Ottawa, Ottawa, Canada; Children's Hospital of Eastern Ontario, Ottawa, Canada
- Patrick Weldon: Department of Pediatrics, University of Ottawa, Ottawa, Canada; Children's Hospital of Eastern Ontario, Ottawa, Canada
- Julie Quet: Department of Pediatrics, University of Ottawa, Ottawa, Canada; Children's Hospital of Eastern Ontario, Ottawa, Canada
- Michelle Long: Department of Pediatrics, University of Ottawa, Ottawa, Canada; Children's Hospital of Eastern Ontario, Ottawa, Canada
4. Mallory LA, Doughty CB, Davis KI, Cheng A, Calhoun AW, Auerbach MA, Duff JP, Kessler DO. A Decade Later-Progress and Next Steps for Pediatric Simulation Research. Simul Healthc 2022;17:366-376. [PMID: 34570084] [DOI: 10.1097/sih.0000000000000611]
Abstract
SUMMARY STATEMENT A decade ago, at the time of formation of the International Network for Pediatric Simulation-based Innovation, Research, and Education, the group embarked on a consensus-building exercise. The goal was to forecast the facilitators of, and barriers to, growth and maturity of science in the field of pediatric simulation-based research. This exercise produced 6 domains critical to progress in the field: (1) prioritization, (2) research methodology and outcomes, (3) academic collaboration, (4) integration/implementation/sustainability, (5) technology, and (6) resources/support/advocacy. This article reflects on and summarizes a decade of progress in the field of pediatric simulation research and suggests next steps in each domain as we look forward, including lessons learned by our collaborative grassroots network that can be used to accelerate research efforts in other domains within healthcare simulation science.
Affiliation(s)
- Leah A Mallory: Tufts University School of Medicine (L.A.M.), Boston, MA; Department of Medical Education (L.A.M.), The Hannaford Center for Simulation, Innovation and Education; Section of Hospital Medicine (L.A.M.), Department of Pediatrics, The Barbara Bush Children's Hospital at Maine Medical Center, Portland, ME; Section of Emergency Medicine (C.B.D.), Department of Pediatrics, Baylor College of Medicine; Simulation Center (C.B.D.), Texas Children's Hospital, Pediatric Emergency Medicine, Baylor College of Medicine; Section of Critical Care Medicine (K.I.D.), Department of Pediatrics, Baylor College of Medicine, Texas Children's Hospital, Houston, TX; Departments of Pediatrics and Emergency Medicine (A.C.), University of Calgary, Calgary, Canada; Division of Pediatric Critical Care (A.W.C.), University of Louisville School of Medicine and Norton Children's Hospital, Louisville, KY; Section of Emergency Medicine (M.A.A.), Yale University School of Medicine, New Haven, CT; Division of Critical Care (J.P.D.), University of Alberta, Alberta, Canada; and Columbia University Vagelos College of Physicians and Surgeons (D.O.K.), New York, NY
5. De Mol L, Desender L, Van Herzeele I, Van de Voorde P, Konge L, Willaert W. Assessing competence in Chest Tube Insertion with the ACTION-tool: A Delphi study. Int J Surg 2022;104:106791. [DOI: 10.1016/j.ijsu.2022.106791]
6. Babalola O, Goudge J, Levin J, Brown C, Griffiths F. Assessing the Utility of a Quality-of-Care Assessment Tool Used in Assessing Comprehensive Care Services Provided by Community Health Workers in South Africa. Front Public Health 2022;10:868252. [PMID: 35651863] [PMCID: PMC9149253] [DOI: 10.3389/fpubh.2022.868252]
Abstract
Background Few studies exist on tools for assessing the quality of care provided by community health workers (CHWs) who deliver comprehensive care, and for the available tools, evidence on their utility is scant. We aimed to assess the utility components of a previously reported quality-of-care assessment tool developed for summative assessment in South Africa. Methods In two provinces, we used ratings by 21 CHWs and three team leaders in two primary health care facilities per province regarding whether the tool covered everything that happens during their household visits and whether they were happy to be assessed using the tool (acceptability and face validity) to derive an agreement index (≥85%, otherwise the tool had to be revised). A panel of six experts quantitatively validated the 11 items of the tool (content validity). The content validity index (CVI), whether of individual items (I-CVI) or the entire scale (S-CVI), should be >80% (excellent). For inter-rater reliability (IRR), we determined agreement between paired observers' assigned quality-of-care messages and communication scores during 18 CHW household visits (nine households per site). Bland-Altman plots and multilevel model analysis, for clustered data, were used to assess IRR. Results In all four CHW and team leader sites, the agreement index was ≥85%, except for whether they were happy to be assessed using the tool, where it was <85% in one facility. The I-CVI of the 11 items in the tool ranged between 0.83 and 1.00. For the S-CVI, all six experts agreed on relevancy (universal agreement) in eight of 11 items (0.72), whereas the average of the I-CVIs was 0.95. The Bland-Altman limits of agreement between paired observers were −0.18 to 0.44 and −0.30 to 0.44 (messages score), and −0.22 to 0.45 and −0.28 to 0.40 (communication score). Multilevel modeling revealed an estimated reliability of 0.77 (messages score) and 0.14 (communication score). Conclusion The quality-of-care assessment tool has high face and content validity. IRR was substantial for quality-of-care messages but not for the communication score. This suggests that the tool may only be useful in the formative assessment of CHWs. Such assessment can provide the basis for reflection and discussion on CHW performance and lead to change.
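For readers unfamiliar with the content validity index figures above, the sketch below walks through the standard I-CVI and S-CVI arithmetic. It is illustrative only: the six-expert, 11-item panel shape mirrors the study, but the ratings themselves are invented.

```python
# Content validity index (CVI) arithmetic, illustrated with made-up ratings
# from a hypothetical 6-expert panel rating 11 items on a 1-4 relevance scale.
import numpy as np

rng = np.random.default_rng(0)
ratings = rng.integers(3, 5, size=(11, 6))  # hypothetical: items x experts, mostly "relevant"
ratings[3, 0] = 2                           # one dissenting rating for illustration

relevant = ratings >= 3                     # ratings of 3 or 4 count as "relevant"
i_cvi = relevant.mean(axis=1)               # I-CVI: share of experts rating each item relevant
s_cvi_ua = (i_cvi == 1.0).mean()            # S-CVI/UA: share of items with universal agreement
s_cvi_ave = i_cvi.mean()                    # S-CVI/Ave: mean of the I-CVIs

print(f"I-CVI per item: {np.round(i_cvi, 2)}")
print(f"S-CVI/UA = {s_cvi_ua:.2f}, S-CVI/Ave = {s_cvi_ave:.2f}")  # compare against the 0.80 benchmark
```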
Affiliation(s)
- Olukemi Babalola: Centre for Health Policy, Faculty of Health Sciences, University of the Witwatersrand, Johannesburg, South Africa
- Jane Goudge: Centre for Health Policy, Faculty of Health Sciences, University of the Witwatersrand, Johannesburg, South Africa
- Jonathan Levin: Department of Epidemiology and Biostatistics, Faculty of Health Sciences, University of the Witwatersrand, Johannesburg, South Africa
- Celia Brown: Division of Health Sciences, University of Warwick, Warwick Medical School, Coventry, United Kingdom
- Frances Griffiths: Division of Health Sciences, University of Warwick, Warwick Medical School, Coventry, United Kingdom
7. Whalen AM, Merves MH, Kharayat P, Barry JS, Glass KM, Berg RA, Sawyer T, Nadkarni V, Boyer DL, Nishisaki A. Validity Evidence for a Novel, Comprehensive Bag-Mask Ventilation Assessment Tool. J Pediatr 2022;245:165-171.e13. [PMID: 35181294] [DOI: 10.1016/j.jpeds.2022.02.017]
Abstract
OBJECTIVE To develop a comprehensive competency assessment tool for pediatric bag-mask ventilation (pBMV) and demonstrate multidimensional validity evidence for this tool. STUDY DESIGN A novel pBMV assessment tool was developed consisting of 3 components: a 22-item checklist (trichotomized response), a global rating scale (GRS, 5-point), and an entrustment assessment (4-point). Participants' performance in a realistic simulation scenario was video-recorded and assessed by blinded raters. Multidimensional validity evidence for procedural assessment, including evidence for content, response process, internal structure, and relation to other variables, was assessed. The scores on each scale were compared with training level. Item-based checklist scores were also correlated with GRS and entrustment scores. RESULTS Fifty-eight participants (9 medical students, 10 pediatric residents, 18 critical care/neonatology fellows, 21 critical care/neonatology attendings) were evaluated. The pBMV tool was supported by high internal consistency (Cronbach α = 0.867). Inter-rater reliability for the item-based checklist component was acceptable (r = 0.65, P < .0001). The item-based checklist scores differentiated between medical students and other providers (P < .0001) but not between other training levels. GRS and entrustment scores significantly differentiated between training levels (P < .001). The correlation between the item-based checklist and the GRS was r = 0.489 (P = .0001), and between the item-based checklist and the entrustment score was r = 0.52 (P < .001); this moderate correlation suggests each component measures pBMV skills differently. The GRS and entrustment scores demonstrated moderate inter-rater reliability (0.42 and 0.46). CONCLUSIONS We established multidimensional validity evidence for a novel entrustment-based pBMV competence assessment tool incorporating global and entrustment-based assessments. This comprehensive tool can provide learner feedback and aid in entrustment decisions as learners progress through training.
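The internal-consistency figure above (Cronbach α = 0.867) follows from the standard item-variance formula. Below is a minimal Python illustration of that computation on an invented 58 x 22 score matrix; the data are hypothetical, not the study's.

```python
# Cronbach's alpha for a checklist: rows = participants, columns = items.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item across participants
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of participants' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
ability = rng.normal(size=(58, 1))  # hypothetical underlying participant skill
# 22 binary "done / not done" items driven by that skill plus item noise:
scores = (ability + rng.normal(scale=0.8, size=(58, 22)) > 0).astype(float)
print(f"alpha = {cronbach_alpha(scores):.3f}")
```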
Affiliation(s)
- Allison M Whalen: Division of Pediatric Critical Care Medicine, Department of Pediatrics, Medical University of South Carolina, Charleston, SC
- Matthew H Merves: Division of Neonatology, Department of Pediatrics, University of Arkansas for Medical Sciences and Arkansas Children's Hospital, Little Rock, AR
- Priyanka Kharayat: Department of Pediatrics, Albert Einstein Medical Center, Philadelphia, PA
- James S Barry: Section of Neonatology, Department of Pediatrics, University of Colorado School of Medicine, Aurora, CO
- Kristen M Glass: Division of Neonatal-Perinatal Medicine, Department of Pediatrics, Penn State College of Medicine, Milton S. Hershey Medical Center, Hershey, PA
- Robert A Berg: Division of Critical Care Medicine, Children's Hospital of Philadelphia, Philadelphia, PA; Department of Anesthesiology & Critical Care, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, PA
- Taylor Sawyer: Division of Neonatology, Department of Pediatrics, University of Washington School of Medicine, Seattle Children's Hospital, Seattle, WA
- Vinay Nadkarni: Division of Critical Care Medicine, Children's Hospital of Philadelphia, Philadelphia, PA; Department of Anesthesiology & Critical Care, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, PA
- Donald L Boyer: Division of Critical Care Medicine, Children's Hospital of Philadelphia, Philadelphia, PA; Department of Anesthesiology & Critical Care, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, PA
- Akira Nishisaki: Division of Critical Care Medicine, Children's Hospital of Philadelphia, Philadelphia, PA; Department of Anesthesiology & Critical Care, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, PA
8. Brydges R, Fiume A, Grierson L. Mastery versus invention learning: impacts on future learning of simulated procedural skills. Adv Health Sci Educ Theory Pract 2022;27:441-456. [PMID: 35320441] [DOI: 10.1007/s10459-022-10094-x]
Abstract
BACKGROUND Invention and mastery learning approaches differ in their foundational educational paradigms, proposed mechanisms of learning, and potential impacts on learning outcomes. They also differ in their resource requirements. We explored the relative effects of 'invent and problem-solve, followed by instruction' (PS-I) learning compared to mastery learning (i.e., standards-based training) on immediate post-test and Preparation for Future Learning (PFL) assessments. PFL assessments measure learners' capacity to use their existing knowledge and strategies to learn about and solve novel problems. METHODS In this non-inferiority trial, pre-clerkship medical students were randomized to either PS-I, mastery learning (ML), or instruction then practice (CON) during simulation-based training of infant lumbar puncture (LP). After a 2-week delay, participants returned to learn and complete a PFL assessment of simulated knee arthrocentesis. Two independent raters assessed performances with a 5-point global rating scale. RESULTS Based on our non-inferiority margin, analyses showed that for both the immediate post-test and the PFL assessment, the PS-I condition resulted in non-inferior outcomes relative to the ML condition. Results for the CON condition were mixed with respect to non-inferiority compared to either PS-I or ML. CONCLUSIONS We cautiously suggest that the PS-I approach was not inferior to the ML approach, based on skill acquisition and PFL assessment outcomes. With ML anecdotally and empirically requiring more time, greater faculty involvement, and higher costs, our findings question the preference ML has received relative to other instructional designs, especially in the healthcare simulation community. We encourage researchers to study the educational and resource impacts of instructional designs using non-inferiority designs.
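To make the non-inferiority logic concrete: a design like this declares PS-I non-inferior to ML when the confidence interval for the group difference stays above a pre-specified margin. The sketch below shows that comparison on invented 5-point rating data; the margin, group sizes, and scores are assumptions, not values from the study.

```python
# Non-inferiority check: PS-I is non-inferior to ML if the lower bound of the
# 95% CI for (mean_PSI - mean_ML) exceeds the pre-specified margin -delta.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
psi = rng.normal(3.6, 0.6, size=30)  # hypothetical PS-I global ratings (1-5 scale)
ml = rng.normal(3.5, 0.6, size=30)   # hypothetical mastery-learning ratings
delta = 0.5                          # assumed non-inferiority margin

diff = psi.mean() - ml.mean()
se = np.sqrt(psi.var(ddof=1) / len(psi) + ml.var(ddof=1) / len(ml))
lo = diff - stats.t.ppf(0.975, df=len(psi) + len(ml) - 2) * se
print(f"diff = {diff:.2f}, 95% CI lower bound = {lo:.2f}")
print("non-inferior" if lo > -delta else "inconclusive")
```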
Affiliation(s)
- Ryan Brydges: Allan Waters Family Simulation Centre, St. Michael's Hospital, Unity Health Toronto, Toronto, Canada; Department of Medicine, University of Toronto, Toronto, Canada; The Wilson Centre, University of Toronto, Toronto, Canada; Professorship in Technology-Enabled Education, St. Michael's Hospital & Li Ka Shing Knowledge Institute, 209 Victoria St, Toronto, ON M5B 1T8, Canada
- Andrea Fiume: The Wilson Centre, University of Toronto, Toronto, Canada; Department of Pediatrics, McMaster University, Hamilton, Canada
- Lawrence Grierson: Department of Family Medicine, McMaster University, Hamilton, Canada; McMaster Education Research, Innovation, and Theory (MERIT) Program, McMaster University, Hamilton, Canada
9. Davitadze M, Ooi E, Ng CY, Zhou D, Thomas L, Hanania T, Blaggan P, Evans N, Chen W, Melson E, Arlt W, Kempegowda P. SIMBA: using Kolb's learning theory in simulation-based learning to improve participants' confidence. BMC Med Educ 2022;22:116. [PMID: 35193557] [PMCID: PMC8861259] [DOI: 10.1186/s12909-022-03176-2]
Abstract
BACKGROUND Simulation via Instant Messaging - Birmingham Advance (SIMBA) delivers simulation-based learning (SBL) through WhatsApp® and Zoom® based on Kolb's experiential learning theory. This study describes how Kolb's theory was implemented in practice during a SIMBA adrenal session. METHODS A SIMBA adrenal session was conducted for healthcare professionals and replicated Kolb's 4-stage cycle: (a) concrete experience - online simulation of real-life clinical scenarios; (b) reflective observation - discussion and Q&A following simulation; (c) abstract conceptualisation - post-session MCQs; and (d) active experimentation - intentions to implement the acquired knowledge in future practice. Participants' self-reported confidence levels for simulated and non-simulated cases pre- and post-SIMBA were analysed using the Wilcoxon signed-rank test. Key takeaways and feedback were assessed quantitatively and qualitatively via thematic analysis. RESULTS Thirty-three participants were included in the analysis. A Wilcoxon signed-rank test showed that the SIMBA session elicited a statistically significant change in participants' self-reported confidence in their approach to Cushing's syndrome (Z = 3.873, p = 0.0001) and adrenocortical carcinoma (Z = 3.970, p < 0.0001). 93.9% (n = 31/33) and 84.8% (n = 28/33) strongly agreed or agreed that the topics were applicable to their clinical practice and accommodated their personal learning style, respectively. 81.8% (n = 27/33) reported an increase in knowledge of patient management, and 75.8% (n = 25/33) anticipated implementing the learning points in their practice. CONCLUSIONS SIMBA effectively adopts Kolb's theory to provide the best possible experience to learners, highlighting the advantages of utilising social media platforms for SBL in medical education. The ability to conduct SIMBA sessions at modest cost internationally paves the way to engaging more healthcare professionals worldwide.
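The pre/post confidence comparison above uses the Wilcoxon signed-rank test for paired ordinal data. A minimal sketch follows, with invented 1-10 confidence ratings (only the n = 33 sample size is taken from the study).

```python
# Paired pre/post self-reported confidence compared with the Wilcoxon signed-rank test.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(2)
pre = rng.integers(3, 8, size=33)                          # hypothetical pre-session confidence (1-10)
post = np.clip(pre + rng.integers(0, 4, size=33), 1, 10)   # most participants improve a little

stat, p = wilcoxon(post, pre)  # paired, non-parametric; zero differences are dropped by default
print(f"W = {stat:.1f}, p = {p:.4f}")
```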
Affiliation(s)
- Meri Davitadze: Georgian-American Family Medicine Clinic "Medical House", Tbilisi, Georgia
- Emma Ooi: RCSI & UCD Malaysia Campus, Penang, Malaysia
- Cai Ying Ng: RCSI & UCD Malaysia Campus, Penang, Malaysia
- Dengyi Zhou: University of Birmingham Medical School, Birmingham, United Kingdom
- Lucretia Thomas: University of Birmingham Medical School, Birmingham, United Kingdom
- Thia Hanania: University of Birmingham Medical School, Birmingham, United Kingdom
- Parisha Blaggan: University of Birmingham Medical School, Birmingham, United Kingdom
- Nia Evans: Royal Glamorgan Hospital, Cwm Taf Morgannwg University Health Board, Ynysmaerdy, Pontypridd, UK
- Wentin Chen: University of Birmingham Medical School, Birmingham, United Kingdom
- Eka Melson: Ninewells Hospital, NHS Tayside, Dundee, UK; Wellcome Trust Clinical Research Fellow, Institute of Metabolism and Systems Research, College of Medical and Dental Sciences, University of Birmingham, Edgbaston, Birmingham, B15 2TT, UK
- Wiebke Arlt: Institute of Metabolism and Systems Research, College of Medical and Dental Sciences, University of Birmingham, Edgbaston, Birmingham, B15 2TT, UK; Department of Endocrinology, Queen Elizabeth Hospital, University Hospitals Birmingham NHS Foundation Trust, Birmingham, United Kingdom
- Punith Kempegowda: Institute of Metabolism and Systems Research, College of Medical and Dental Sciences, University of Birmingham, Edgbaston, Birmingham, B15 2TT, UK; Department of Endocrinology, Queen Elizabeth Hospital, University Hospitals Birmingham NHS Foundation Trust, Birmingham, United Kingdom
10. Goldman MP, Palladino LE, Malik RN, Powers EM, Rudd AV, Aronson PL, Auerbach MA. A Workplace Procedure Training Cart to Augment Pediatric Resident Procedural Learning. Pediatr Emerg Care 2022;38:e816-e820. [PMID: 35100781] [DOI: 10.1097/pec.0000000000002397]
Abstract
OBJECTIVE Our primary aim was to describe pediatric residents' use of a workplace procedural training cart. An exploratory aim was to examine whether the cart was associated with increased resident procedural experience with real patients. METHODS Guided by the procedural training construct of "Learn, See, Practice, Prove, Do, Maintain," we created a novel workplace procedural training cart with videos (learn and see) and simulation equipment (practice and prove). An electronic logbook recorded resident use data, and a brief survey solicited residents' perceptions of the cart's educational impact. We queried our electronic medical record to compare the proportion of real procedures completed by residents before and after the intervention. RESULTS From August 1 to December 31, 2019, 24 pediatric residents (10 interns and 14 seniors) rotated in the pediatric emergency department. Twenty-one cart encounters were logged, mostly by interns (67% [14/21]). The 21 cart encounters yielded 32 learning activities (8 videos watched and 24 procedures practiced), reflecting the residents' interest in laceration repair (50% [4/8], 54% [13/24]) and lumbar puncture (38% [3/8], 33% [8/24]). All users agreed (29% [6/21]) or strongly agreed (71% [15/21]) that the cart encouraged practice and improved confidence in independently performing procedures. No changes were observed in the proportion of actual procedures completed by residents. CONCLUSIONS A workplace procedural training cart was used mostly by pediatric interns. The cart cultivated residents' perceived confidence in performing real procedures, but it was not used by all residents and did not influence residents' procedural behaviors in the pediatric emergency department.
Affiliation(s)
- Michael P Goldman: Section of Emergency Medicine, Departments of Pediatrics and Emergency Medicine, Yale University School of Medicine, New Haven, CT
- Lauren E Palladino: Department of Emergency Medicine, Children's Hospital of Philadelphia, Philadelphia, PA
- Rabia N Malik: Section of Emergency Medicine, Departments of Pediatrics and Emergency Medicine, Yale University School of Medicine, New Haven, CT
- Emily M Powers: Section of Emergency Medicine, Departments of Pediatrics and Emergency Medicine, Yale University School of Medicine, New Haven, CT
- Alexis V Rudd: Section of Emergency Medicine, Departments of Pediatrics and Emergency Medicine, Yale University School of Medicine, New Haven, CT
- Paul L Aronson: Section of Emergency Medicine, Departments of Pediatrics and Emergency Medicine, Yale University School of Medicine, New Haven, CT
- Marc A Auerbach: Section of Emergency Medicine, Departments of Pediatrics and Emergency Medicine, Yale University School of Medicine, New Haven, CT
11. Oriot D, Trigolet M, Kessler DO, Auerbach MA, Ghazali DA. Stress: A Factor Explaining the Gap Between Simulated and Clinical Procedure Success. Pediatr Emerg Care 2021;37:e1192-e1196. [PMID: 31977780] [DOI: 10.1097/pec.0000000000001962]
Abstract
BACKGROUND Stress may impair the success of procedures in emergency medicine. The aims were to assess residents' stress during simulated and clinical lumbar punctures (LPs) and to explore the correlation between stress and performance. METHODS A prospective study (2013-2016) was carried out in a pediatric emergency department. Mastery training and, subsequently, just-in-time training were conducted immediately preceding each clinical LP. Stress was self-assessed using the Stress-O-Meter scale (0-10). Performance (checklist, 0-6 points) and success rate (cerebrospinal fluid with <1000 red blood cells/mm3) were recorded by a trained supervisor. A survey explored self-confidence and potential causes of stress. RESULTS Thirty-three residents performed 35 LPs. Residents reported no stress during the simulated procedure. Stress levels significantly increased for the clinical procedure (P < 0.0001). Performance was similar in simulation and in the clinical setting (5.50 ± 0.93 vs 5.42 ± 0.83, respectively; P = 0.75). Success significantly decreased during clinical LP (P < 0.0001). The 2 most reported stress-related factors were fear of technical errors and personal fatigue. CONCLUSIONS Performance scores and success rates in simulation are insufficient to predict success in clinical situations. Stress levels and stress-related factors (fear of technical errors and personal fatigue) may differ between simulated and real conditions and consequently impact the success of a technical procedure even when a high performance score is recorded.
Affiliation(s)
- Marine Trigolet: Pediatric and Neonatal Intensive Care Unit, University Hospital of Limoges, Limoges, France
- Marc A Auerbach: Department of Emergency Medicine, School of Medicine, Yale University, New Haven, CT
12. Wing R, Baird J, Duffy S, Brown L, Overly F, Kelley MN, Merritt C. Pediatric Airway Assessment Tool (PAAT): A Rating Tool to Assess Resident Proficiency in Simulated Pediatric Airway Skills Performance. MedEdPORTAL 2020;16:10997. [PMID: 33117887] [PMCID: PMC7586756] [DOI: 10.15766/mep_2374-8265.10997]
Abstract
INTRODUCTION The Accreditation Council for Graduate Medical Education has identified the need for assessment of core skills for pediatric and emergency medicine residents, including pediatric airway management. Although there are standard courses for pediatric airway management, there is no validated tool to assess basic and advanced pediatric airway skills performance. Our objective was to develop a simulation-based tool for the formative assessment of resident pediatric airway skills performance that was concise yet comprehensive, and to evaluate the evidence supporting the tool's validity. METHODS We developed the Pediatric Airway Assessment Tool (PAAT) to assess six major domains of pediatric airway skills performance: basic airway maneuvers, airway adjuncts, bag-valve-mask ventilation, advanced airway equipment preparation, direct laryngoscopy, and video laryngoscopy. The tool consists of a 72-item pediatric airway skills assessment checklist to be used in simulation. We enrolled 12 subjects at four different training levels, and performances were scored by two independent expert raters. RESULTS Interrater agreement was high, ranging from 0.92 (adult bagging rate) to 1 (basic airway maneuvers). There was a significant trend of increasing scores with increasing training level. DISCUSSION The PAAT demonstrated excellent interrater reliability and provided evidence of construct validity. Although further validation of this assessment tool is needed, these results suggest that the PAAT may eventually be useful for assessing resident proficiency in pediatric airway skills performance.
Affiliation(s)
- Robyn Wing: Assistant Professor, Departments of Emergency Medicine & Pediatrics, Division of Pediatric Emergency Medicine, Alpert Medical School of Brown University and Rhode Island Hospital/Hasbro Children's Hospital; Director of Pediatric Simulation, Lifespan Medical Simulation Center
- Janette Baird: Associate Professor, Department of Emergency Medicine and Injury Prevention Center, Alpert Medical School of Brown University
- Susan Duffy: Professor, Departments of Emergency Medicine & Pediatrics, Division of Pediatric Emergency Medicine, Alpert Medical School of Brown University and Rhode Island Hospital/Hasbro Children's Hospital
- Linda Brown: Associate Professor, Departments of Emergency Medicine & Pediatrics, Division of Pediatric Emergency Medicine, Alpert Medical School of Brown University and Rhode Island Hospital/Hasbro Children's Hospital; Vice Chair of Pediatric Emergency Medicine; Director of the Lifespan Medical Simulation Center
- Frank Overly: Professor, Departments of Emergency Medicine & Pediatrics, Division of Pediatric Emergency Medicine, Alpert Medical School of Brown University and Rhode Island Hospital/Hasbro Children's Hospital; Medical Director of Hasbro Emergency Department
- Mariann Nocera Kelley: Assistant Professor, Departments of Pediatrics and Emergency Medicine/Traumatology, Division of Pediatric Emergency Medicine, University of Connecticut School of Medicine, Connecticut Children's Medical Center; Director of Simulation Education, University of Connecticut School of Medicine
- Chris Merritt: Associate Professor, Departments of Emergency Medicine & Pediatrics, Division of Pediatric Emergency Medicine, Alpert Medical School of Brown University and Rhode Island Hospital/Hasbro Children's Hospital; Director, Brown Emergency Medicine Medical Education Research Fellowship
13. Park YS, Chun KH, Lee KS, Lee YH. A study on evaluator factors affecting physician-patient interaction scores in clinical performance examinations: a single medical school experience. Yeungnam Univ J Med 2020;38:118-126. [PMID: 32759629] [PMCID: PMC8016627] [DOI: 10.12701/yujm.2020.00423]
Abstract
Background This study analyzed evaluator factors affecting physician-patient interaction (PPI) scores in clinical performance examinations (CPX). The purpose was to investigate possible ways to increase the reliability of CPX evaluation. Methods The six-item Yeungnam University Scale (YUS), a four-item analytic global rating scale (AGRS), and a one-item holistic rating scale (HRS) were used to evaluate student performance in PPI. A total of 72 fourth-year students from Yeungnam University College of Medicine in Korea participated in the evaluation, with 32 faculty and 16 standardized patient (SP) raters. The study then examined differences in scores by type of scale, rater (SP vs. faculty), faculty specialty, evaluation experience, and level of fatigue over time. Results There were significant differences between faculty and SP scores on all three scales and a significant correlation among raters' scores. Scores given by raters on items related to their specialty were lower than scores given by raters on items outside their specialty. On the YUS and AGRS, there were significant differences based on the faculty's evaluation experience; scores from raters who had three to ten previous evaluation experiences were lower than others' scores. There were also significant differences among SP raters on all scales. The correlation between the YUS and AGRS/HRS declined significantly as evaluation time lengthened. Conclusion In CPX, PPI score reliability was significantly affected by evaluator factors as well as by the type of scale.
Affiliation(s)
- Young Soon Park: Department of Medical Education, Konyang University College of Medicine, Daejeon, Korea
- Kyung Hee Chun: Department of Medical Education, Konyang University College of Medicine, Daejeon, Korea
- Kyeong Soo Lee: Department of Preventive Medicine and Public Health, Yeungnam University College of Medicine, Daegu, Korea
- Young Hwan Lee: Department of Medical Humanities, Yeungnam University College of Medicine, Daegu, Korea
14.
Abstract
OBJECTIVES When obtaining informed permission from parents for invasive procedures, trainees and supervisors often do not disclose information about the trainee's level of experience. The objectives of this study were 3-fold: (1) to assess parents' understanding of both academic medical training and the role of the trainee and the supervisor, (2) to explore parents' preferences about transparency related to a trainee's experience, and (3) to examine parents' willingness to allow trainees to perform invasive procedures. METHODS This qualitative study involved 23 one-on-one interviews with parents of infants younger than 30 days who had undergone a lumbar puncture. In line with grounded theory, researchers independently coded transcripts and then collectively refined codes and created themes. Data collection and analysis continued until thematic saturation was achieved. In addition, to triangulate the findings, a focus group was conducted with Yale School of Medicine's Community Bioethics Forum. RESULTS Our analysis revealed 4 primary themes: (1) the invasive nature of a lumbar puncture and the vulnerability of the newborn creates fear in parents, which may be mitigated by improved communication; (2) parents have varying degrees of awareness of the medical training system; (3) most parents expect transparency about provider experience level and trust that a qualified provider will be performing the procedure; and (4) parents prefer an experienced provider to perform a procedure, but supervisor presence may be a qualifying factor for inexperienced providers. CONCLUSIONS Physicians must find a way to improve transparency when caring for pediatric patients while still developing critical procedural skills.
15. Neonatal Intubation Competency Assessment Tool: Development and Validation. Acad Pediatr 2019;19:157-164. [PMID: 30103050] [DOI: 10.1016/j.acap.2018.07.008]
Abstract
BACKGROUND Neonatal tracheal intubation (NTI) is an important clinical skill. Suboptimal performance is associated with patient harm. Simulation training can improve NTI performance. Improving performance requires an objective assessment of competency, and competency assessment tools need strong evidence of validity. We hypothesized that an NTI competency assessment tool with multisource validity evidence could be developed and used for formative and summative assessment during simulation-based training. METHODS An NTI assessment tool was developed based on a literature review. The tool was refined through 2 rounds of a modified Delphi process involving 12 subject-matter experts. The final tool included a 22-item checklist, a global skills assessment, and an entrustable professional activity (EPA) level. The validity of the checklist was assessed by having 4 blinded reviewers score 23 videos of health care providers intubating a neonatal simulator. RESULTS The checklist items had good internal consistency (overall α = 0.79). Checklist scores were greater for providers at higher training levels and with more NTI experience. Checklist scores correlated with the global skills assessment (ρ = 0.85; P < .05), EPA levels (ρ = 0.87; P < .05), percent glottic exposure (r = 0.59; P < .05), and Cormack-Lehane scores (ρ = 0.95; P < .05). Checklist scores reliably predicted EPA levels. CONCLUSIONS We developed an NTI competency assessment tool with multisource validity evidence. The tool was able to discriminate NTI performance based on experience. The tool can be used during simulation-based NTI training to provide formative and summative assessment and can aid with entrustment decisions.
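The checklist-to-GRS and checklist-to-EPA relationships above (ρ = 0.85 and 0.87) are Spearman rank correlations. The sketch below reproduces that computation on invented paired scores for 23 videos, purely to illustrate the method.

```python
# Spearman rank correlation between checklist totals and global skill ratings.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
checklist = rng.integers(10, 23, size=23)  # hypothetical 22-item checklist totals, one per video
# Hypothetical 1-5 global ratings that loosely track the checklist totals:
global_rating = np.clip((checklist - 8) // 3 + rng.integers(-1, 2, size=23), 1, 5)

rho, p = spearmanr(checklist, global_rating)
print(f"rho = {rho:.2f}, p = {p:.4f}")
```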
16. Seo S, Thomas A, Uspal NG. A Global Rating Scale and Checklist Instrument for Pediatric Laceration Repair. MedEdPORTAL 2019;15:10806. [PMID: 30931385] [PMCID: PMC6415009] [DOI: 10.15766/mep_2374-8265.10806]
Abstract
INTRODUCTION Laceration repair is a core procedural skill in which pediatric residents are expected to attain proficiency per the Accreditation Council for Graduate Medical Education. Restricted trainee work hours have decreased clinical opportunities for laceration repair, and simulation may be a modality to fill that clinical gap. There is therefore a need for objective measures of pediatric resident competence in laceration repair. METHODS We created a global rating scale and checklist to assess laceration repair in the pediatric emergency department. We adapted the global rating scale from the Objective Structured Assessment of Technical Skills tool used to evaluate surgical residents' technical skills and adapted the checklist from a mastery training checklist for infant lumbar puncture. We tested both tools in the pediatric emergency department, where eight supervising physicians used them to evaluate 30 residents' technical skills in laceration repair. We also performed validation testing of both tools in the simulation environment. Based on formal evaluation, we developed a video to train future evaluators on the use of the global rating scale. RESULTS The global rating scale and checklist showed fair concordance across reviewers. Both tools received positive feedback from the supervising physicians who used them. DISCUSSION We found that the global rating scale and checklist are more applicable to formative, rather than summative, assessment of resident laceration repair. We recommend using these educational tools with trainees in the simulation environment before trainees perform laceration repairs on actual patients.
Affiliation(s)
- Suzanne Seo: Pediatric Emergency Medicine Fellow, Seattle Children's Hospital; Pediatric Emergency Medicine Fellow, University of Washington School of Medicine
- Anita Thomas: Assistant Professor, Department of Pediatrics, Division of Emergency Medicine, University of Washington School of Medicine; Assistant Professor, Department of Pediatrics, Division of Emergency Medicine, Seattle Children's Hospital
- Neil G. Uspal: Associate Professor, Department of Pediatrics, Division of Emergency Medicine, University of Washington School of Medicine; Associate Professor, Department of Pediatrics, Division of Emergency Medicine, Seattle Children's Hospital
17. Festekjian A, Mody AP, Chang TP, Ziv N, Nager AL. Novel Transfer of Care Sign-out Assessment Tool in a Pediatric Emergency Department. Acad Pediatr 2018;18:86-93. [PMID: 28843485] [DOI: 10.1016/j.acap.2017.08.009]
Abstract
OBJECTIVE Transfer of care sign-outs (TOCS) for admissions from a pediatric emergency department pose unique challenges, and standardized, reliable assessment tools for TOCS remain elusive. We describe the development, reliability, and validity of a TOCS assessment tool. METHODS Video recordings of resident TOCS were assessed to capture 4 domains: completeness, synopsis, foresight, and professionalism. In phase 1, 56 TOCS were used to modify the tool and improve reliability. In phase 2, 91 TOCS were used to examine validity. Analyses included Cronbach's alpha for internal structure, intraclass correlation and Cohen's kappa for interrater reliability, Pearson's correlation for relationships between variables, and 95% confidence intervals of the mean for resident group comparisons. RESULTS Cronbach's alpha was 0.52 for the internal structure of the tool's subjective rating scale. Intraclass correlations for the subjective rating scale items ranged from 0.70 to 0.80. Cohen's kappa for most objective checklist items ranged from 0.43 to 1. Content completeness was significantly correlated with synopsis, foresight, and professionalism (Pearson's r ranged from 0.36 to 0.62; P values were <.001). House staff senior residents scored higher on average than interns and rotating senior residents in synopsis and foresight, house staff interns scored higher on average than rotating senior residents in professionalism, and house staff senior residents scored higher on average than rotating senior residents in content completeness. CONCLUSIONS We provide validity evidence to support using scores from the TOCS tool to assess higher-level transfer of care comprehension and communication by pediatric emergency department residents and to test interventions to improve TOCS.
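For the kappa values reported above, the sketch below shows the standard Cohen's kappa computation for two raters making the same binary checklist judgments; the ratings are simulated, not the study's data.

```python
# Cohen's kappa for two raters scoring the same binary checklist items.
import numpy as np

def cohens_kappa(r1: np.ndarray, r2: np.ndarray) -> float:
    """kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    categories = np.union1d(r1, r2)
    p_obs = np.mean(r1 == r2)                                              # observed agreement
    p_exp = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in categories)   # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

rng = np.random.default_rng(5)
rater1 = rng.integers(0, 2, size=91)                           # hypothetical done/not-done judgments
rater2 = np.where(rng.random(91) < 0.85, rater1, 1 - rater1)   # second rater agrees ~85% of the time
print(f"kappa = {cohens_kappa(rater1, rater2):.2f}")
```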
Affiliation(s)
- Ara Festekjian: Department of Pediatrics, Division of Emergency and Transport Medicine, Children's Hospital Los Angeles, Los Angeles, Calif; Keck School of Medicine, University of Southern California, Los Angeles, Calif
- Ameer P Mody: Department of Pediatrics, Division of Emergency and Transport Medicine, Children's Hospital Los Angeles, Los Angeles, Calif; Keck School of Medicine, University of Southern California, Los Angeles, Calif
- Todd P Chang: Department of Pediatrics, Division of Emergency and Transport Medicine, Children's Hospital Los Angeles, Los Angeles, Calif; Keck School of Medicine, University of Southern California, Los Angeles, Calif
- Nurit Ziv: Department of Pediatrics, Division of Emergency and Transport Medicine, Children's Hospital Los Angeles, Los Angeles, Calif
- Alan L Nager: Department of Pediatrics, Division of Emergency and Transport Medicine, Children's Hospital Los Angeles, Los Angeles, Calif; Keck School of Medicine, University of Southern California, Los Angeles, Calif
18. Whalen AM, Boyer DL, Nishisaki A. Checklist-Based Assessment of Procedural Skills: A Missing Piece in the Link between Medical Education Interventions and Patient Outcomes. J Pediatr 2017;188:11-13. [PMID: 28595763] [DOI: 10.1016/j.jpeds.2017.05.040]
Affiliation(s)
- Allison M Whalen: Pediatric Critical Care Medicine, Department of Anesthesiology and Critical Care Medicine, The Children's Hospital of Philadelphia
- Donald L Boyer: Pediatric Critical Care Medicine Fellowship Program, The Children's Hospital of Philadelphia; Clinical Anesthesiology, Critical Care, and Pediatrics, Perelman School of Medicine, University of Pennsylvania
- Akira Nishisaki: Department of Anesthesiology and Critical Care Medicine, The Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
19. Quality Improvement in Pediatric Endoscopy: A Clinical Report From the NASPGHAN Endoscopy Committee. J Pediatr Gastroenterol Nutr 2017. [PMID: 28644360] [DOI: 10.1097/mpg.0000000000001592]
Abstract
The current era of healthcare reform emphasizes the provision of effective, safe, equitable, high-quality, and cost-effective care. Within the realm of gastrointestinal endoscopy in adults, renewed efforts are in place to accurately define and measure quality indicators across the spectrum of endoscopic care. In pediatrics, however, this movement has been less defined and lacks much of the evidence base that supports these initiatives in adult care. A need therefore exists to help define quality metrics tailored to pediatric practice and to provide a toolbox for the development of robust quality improvement (QI) programs within pediatric endoscopy units. Use of uniform standards of quality reporting across centers will ensure that data can be compared and compiled on an international level to help guide QI initiatives and inform patients and their caregivers of the true risks and benefits of endoscopy. This report is intended to provide pediatric gastroenterologists with a framework for the development and implementation of endoscopy QI programs within their own centers, based on available evidence and expert opinion from the members of the NASPGHAN Endoscopy Committee. This clinical report will require expansion as further research pertaining to endoscopic quality in pediatrics is published.
20. Henriksen MJV, Wienecke T, Thagesen H, Jacobsen RVB, Subhi Y, Ringsted C, Konge L. Assessment of Residents' Readiness to Perform Lumbar Puncture: A Validation Study. J Gen Intern Med 2017;32:610-618. [PMID: 28168539] [PMCID: PMC5442009] [DOI: 10.1007/s11606-016-3981-y]
Abstract
BACKGROUND Lumbar puncture is a common procedure in many specialties. The procedure serves to diagnose life-threatening conditions, often requiring rapid performance. However, junior doctors are often uncertain about performing the procedure and frequently perform below expectations. Hence, proper training and assessment of performance are crucial before entering clinical practice. OBJECTIVE To develop and collect validity evidence for an assessment tool for lumbar puncture performance, including a standard to determine when trainees are ready for clinical practice. DESIGN Development of a new tool, based on clinician interviews and a literature review, was followed by an explorative study to gather validity evidence. PARTICIPANTS AND MAIN MEASURES We interviewed 12 clinicians from different specialties. The assessment tool was used to assess 11 doctors at the advanced beginner level and 18 novices performing the procedure in a simulated, ward-like setting with a standardized patient. Procedural performance was assessed by three content experts. We used generalizability theory to explore reliability. The discriminative ability of the tool was explored by comparing performance scores between the two groups. The contrasting groups method was used to set a pass/fail standard, and its consequences were explored. KEY RESULTS The interviews identified that, in addition to the technical aspects of the procedure, non-technical elements involving planning and conducting the procedure are important. Cronbach's alpha was 0.92, the generalizability coefficient was 0.88, and a decision study found that one rater was sufficient for low-stakes assessments (G-coefficient 0.71). The discriminative ability was confirmed by a significant difference between the mean scores of novices, 40.9 (SD 6.1), and advanced beginners, 47.8 (SD 4.0), p = 0.004. A standard of 44.0 was established, which was consistent with the raters' global judgments of pass/fail. CONCLUSION We developed and demonstrated strong validity evidence for the lumbar puncture assessment tool. The tool can be used to assess readiness for practice.
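The contrasting groups method above conventionally places the pass/fail cut where the two groups' score distributions intersect. The sketch below finds that point for normal densities fitted to the reported means and SDs; the normality assumption and the grid search are mine, and the result lands near the reported standard of 44.0.

```python
# Contrasting-groups standard setting: the cut score is where the fitted
# novice and advanced-beginner normal densities intersect between the means.
import numpy as np
from scipy.stats import norm

novice = norm(loc=40.9, scale=6.1)      # reported novice mean (SD)
advanced = norm(loc=47.8, scale=4.0)    # reported advanced-beginner mean (SD)

grid = np.linspace(40.9, 47.8, 10_000)  # search between the two group means
cut = grid[np.argmin(np.abs(novice.pdf(grid) - advanced.pdf(grid)))]
print(f"pass/fail standard ≈ {cut:.1f}")  # in the neighborhood of the reported 44.0
```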
Affiliation(s)
- Mikael Johannes Vuokko Henriksen: Copenhagen Academy for Medical Education and Simulation, The Capital Region of Denmark, Rigshospitalet section 5404, Blegdamsvej 9, 2100 Copenhagen, Denmark; Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
- Troels Wienecke: Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark; Department of Neurology, Zealand University Hospital, Roskilde, Denmark
- Helle Thagesen: Department of Neurology, Zealand University Hospital, Roskilde, Denmark
- Rikke Vita Borre Jacobsen: Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark; Center for Head and Orthopedic/UFU 4231 Anesthesiology, Rigshospitalet, Copenhagen, Denmark
- Yousif Subhi: Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark; Department of Ophthalmology, Zealand University Hospital, Roskilde, Denmark
- Charlotte Ringsted: Centre for Health Science Education, Faculty of Health, Aarhus University, Aarhus, Denmark
- Lars Konge: Copenhagen Academy for Medical Education and Simulation, The Capital Region of Denmark, Rigshospitalet section 5404, Blegdamsvej 9, 2100 Copenhagen, Denmark; Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
21. Reporting Guidelines for Health Care Simulation Research: Extensions to the CONSORT and STROBE Statements. Simul Healthc 2017;11:238-248. [PMID: 27465839] [DOI: 10.1097/sih.0000000000000150]
Abstract
INTRODUCTION Simulation-based research (SBR) is rapidly expanding but the quality of reporting needs improvement. For a reader to critically assess a study, the elements of the study need to be clearly reported. Our objective was to develop reporting guidelines for SBR by creating extensions to the Consolidated Standards of Reporting Trials (CONSORT) and Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Statements. METHODS An iterative multistep consensus-building process was used on the basis of the recommended steps for developing reporting guidelines. The consensus process involved the following: (1) developing a steering committee, (2) defining the scope of the reporting guidelines, (3) identifying a consensus panel, (4) generating a list of items for discussion via online premeeting survey, (5) conducting a consensus meeting, and (6) drafting reporting guidelines with an explanation and elaboration document. RESULTS The following 11 extensions were recommended for CONSORT item 1 (title/abstract), item 2 (background), item 5 (interventions), item 6 (outcomes), item 11 (blinding), item 12 (statistical methods), item 15 (baseline data), item 17 (outcomes/estimation), item 20 (limitations), item 21 (generalizability), and item 25 (funding). The following 10 extensions were recommended for STROBE: item 1 (title/abstract), item 2 (background/rationale), item 7 (variables), item 8 (data sources/measurement), item 12 (statistical methods), item 14 (descriptive data), item 16 (main results), item 19 (limitations), item 21 (generalizability), and item 22 (funding). An elaboration document was created to provide examples and explanation for each extension. CONCLUSIONS We have developed extensions for the CONSORT and STROBE Statements that can help improve the quality of reporting for SBR.
22
Cheng A, Kessler D, Mackinnon R, Chang TP, Nadkarni VM, Hunt EA, Duval-Arnould J, Lin Y, Pusic M, Auerbach M. Conducting multicenter research in healthcare simulation: Lessons learned from the INSPIRE network. Adv Simul (Lond) 2017; 2:6. [PMID: 29450007] [PMCID: PMC5806260] [DOI: 10.1186/s41077-017-0039-0] [Citation(s) in RCA: 44]
Abstract
Simulation-based research has grown substantially over the past two decades; however, relatively few published simulation studies are multicenter in nature. Multicenter research confers many distinct advantages over single-center studies, including larger sample sizes for more generalizable findings, sharing resources amongst collaborative sites, and promoting networking. Well-executed multicenter studies are more likely to improve provider performance and/or have a positive impact on patient outcomes. In this manuscript, we offer a step-by-step guide to conducting multicenter, simulation-based research based upon our collective experience with the International Network for Simulation-based Pediatric Innovation, Research and Education (INSPIRE). Like multicenter clinical research, simulation-based multicenter research can be divided into four distinct phases. Each phase has specific differences when applied to simulation research: (1) Planning phase, to define the research question, systematically review the literature, identify outcome measures, and conduct pilot studies to ensure feasibility and estimate power; (2) Project Development phase, when the primary investigator identifies collaborators, develops the protocol and research operations manual, prepares grant applications, obtains ethical approval and executes subsite contracts, registers the study in a clinical trial registry, forms a manuscript oversight committee, and conducts feasibility testing and data validation at each site; (3) Study Execution phase, involving recruitment and enrollment of subjects, clear communication and decision-making, quality assurance measures and data abstraction, validation, and analysis; and (4) Dissemination phase, where the research team shares results via conference presentations, publications, traditional media, social media, and implements strategies for translating results to practice. With this manuscript, we provide a guide to conducting quantitative multicenter research with a focus on simulation-specific issues.
Affiliations:
Adam Cheng: Department of Pediatrics, Alberta Children’s Hospital, KidSim-ASPIRE Research Program, Section of Emergency Medicine, University of Calgary, 2888 Shaganappi Trail NW, Calgary, AB Canada T3B 6A8.
David Kessler: Division of Pediatric Emergency Medicine, Columbia University Medical School, 3959 Broadway, CHN-1-116, New York, NY 10032 USA.
Ralph Mackinnon: Department of Paediatric Anaesthesia and NWTS, First Floor Theatres, Royal Manchester Children’s Hospital, Hathersage Road, Manchester, UK M13 9WL.
Todd P. Chang: Children’s Hospital Los Angeles, 4650 Sunset Blvd, Mailstop 113, Los Angeles, CA 90027 USA.
Vinay M. Nadkarni: The Children’s Hospital of Philadelphia, University of Pennsylvania Perelman School of Medicine, 3401 Civic Center Blvd, Philadelphia, PA 19104 USA.
Elizabeth A. Hunt: Charlotte R. Bloomberg Children’s Center, Johns Hopkins University School of Medicine, 1800 Orleans St, Room 6321, Baltimore, MD 21287 USA.
Jordan Duval-Arnould: Charlotte R. Bloomberg Children’s Center, Johns Hopkins University School of Medicine, 1800 Orleans St, Room 6321, Baltimore, MD 21287 USA.
Yiqun Lin: Alberta Children’s Hospital, Cumming School of Medicine, University of Calgary, 2888 Shaganappi Trail NW, Calgary, AB Canada T3B 6A8.
Martin Pusic: Institute for Innovations in Medical Education, 550 First Ave, MSB G109, New York, NY 10016 USA.
Marc Auerbach: Section of Pediatric Emergency Medicine, 100 York Street, Suite 1F, New Haven, CT 06520 USA.
23
Kessler DO, Chang TP, Auerbach M, Fein DM, Lavoie ME, Trainor J, Lee MO, Gerard JM, Grossman D, Whitfill T, Pusic M. Screening residents for infant lumbar puncture readiness with just-in-time simulation-based assessments. BMJ Simul Technol Enhanc Learn 2017; 3:17-22. [DOI: 10.1136/bmjstel-2016-000130] [Citation(s) in RCA: 2]
Abstract
BACKGROUND Determining when to entrust trainees to perform procedures is fundamental to patient safety and competency development. OBJECTIVE To determine whether simulation-based readiness assessments of first-year residents immediately prior to their first supervised infant lumbar punctures (LPs) are associated with success. METHODS This prospective cohort study enrolled paediatric and other first-year residents who perform LPs at 35 academic hospitals from 2012 to 2014. Within a standardised LP curriculum, a validated 4-point readiness assessment of first-year residents was required immediately prior to their first supervised LP. A score ≥3 was required for residents to perform the LP. The proportion of successful LPs (<1000 red blood cells on first attempt) was determined. Process measures included success on any attempt, number of attempts, analgesia usage and use of the early stylet removal technique. RESULTS We analysed 726 LPs reported from 1722 residents (42%). Of the 432 who underwent readiness assessments, 174 (40%, 95% CI 36% to 45%) successfully performed their first LP. Those who were not assessed succeeded in 103/294 (35%, 95% CI 30% to 41%) LPs. Assessed participants reported more frequent direct attending supervision of the LP (difference 16%; 95% CI 8% to 22%), greater use of topical analgesia (difference 6%; 95% CI 1% to 12%) and greater use of the early stylet removal technique (difference 11%; 95% CI 4% to 19%), but no difference in number of attempts or overall procedural success. CONCLUSIONS Simulation-based readiness assessments performed in a point-of-care fashion were associated with several desirable behaviours but were not associated with greater clinical success with LP.
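The proportion comparison in this abstract is straightforward to reproduce. The sketch below is illustrative only: the paper does not publish its code, and a simple Wald interval is assumed for the difference; the counts 174/432 and 103/294 are taken from the abstract above.

```python
# Illustrative sketch (not the study's code): comparing two success
# proportions with a Wald 95% CI for the difference, e.g. first-LP success
# among assessed (174/432) vs. non-assessed (103/294) residents.
import math

def proportion_diff_ci(x1, n1, x2, n2, z=1.96):
    """Difference in proportions with an approximate Wald 95% CI."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, (diff - z * se, diff + z * se)

diff, (lo, hi) = proportion_diff_ci(174, 432, 103, 294)
print(f"difference = {diff:.1%}, 95% CI ({lo:.1%}, {hi:.1%})")
# ~5% difference with a CI spanning zero, consistent with the authors'
# finding of no significant difference in overall procedural success.
```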
24
Abstract
This review examines the current environment of neonatal procedural learning, describes an updated model of skills training, defines the role of simulation in assessing competency, and discusses potential future directions for simulation-based competency assessment. To maximize impact, simulation-based procedural training programs should follow a standardized and evidence-based approach to designing and evaluating educational activities. Simulation can be used to facilitate the evaluation of competency, but must incorporate validated assessment tools to ensure quality and consistency. True competency evaluation cannot be accomplished with simulation alone: competency assessment must also include evaluations of procedural skill during actual clinical care. Future work in this area is needed to measure and track clinically meaningful patient outcomes resulting from simulation-based training, to examine the use of simulation to assist physicians re-entering practice, and to examine the use of procedural skills simulation as part of maintenance of competency and lifelong learning.
Affiliations:
Taylor Sawyer: Division of Neonatology, Department of Pediatrics, Neonatal Education and Simulation-Based Training (NEST) Program, University of Washington School of Medicine and Seattle Children's Hospital, 1959 NE Pacific St, RR451 HSB, Box 356320, Seattle, WA.
Megan M Gray: Division of Neonatology, Department of Pediatrics, Neonatal Education and Simulation-Based Training (NEST) Program, University of Washington School of Medicine and Seattle Children's Hospital, 1959 NE Pacific St, RR451 HSB, Box 356320, Seattle, WA.
25
Sagalowsky ST, Wynter SA, Auerbach M, Pusic MV, Kessler DO. Simulation-Based Procedural Skills Training in Pediatric Emergency Medicine. Clin Pediatr Emerg Med 2016. [DOI: 10.1016/j.cpem.2016.05.007] [Citation(s) in RCA: 5]
26
Cheng A, Kessler D, Mackinnon R, Chang TP, Nadkarni VM, Hunt EA, Duval-Arnould J, Lin Y, Cook DA, Pusic M, Hui J, Moher D, Egger M, Auerbach M. Reporting Guidelines for Health Care Simulation Research. Clin Simul Nurs 2016. [DOI: 10.1016/j.ecns.2016.04.008] [Citation(s) in RCA: 3]
27
Cheng A, Kessler D, Mackinnon R, Chang TP, Nadkarni VM, Hunt EA, Duval-Arnould J, Lin Y, Cook DA, Pusic M, Hui J, Moher D, Egger M, Auerbach M, for the International Network for Simulation-based Pediatric Innovation, Research, and Education (INSPIRE) Reporting Guidelines Investigators. Reporting guidelines for health care simulation research: extensions to the CONSORT and STROBE statements. Adv Simul (Lond) 2016; 1:25. [PMID: 29449994] [PMCID: PMC5806464] [DOI: 10.1186/s41077-016-0025-y] [Citation(s) in RCA: 165]
Abstract
BACKGROUND Simulation-based research (SBR) is rapidly expanding but the quality of reporting needs improvement. For a reader to critically assess a study, the elements of the study need to be clearly reported. Our objective was to develop reporting guidelines for SBR by creating extensions to the Consolidated Standards of Reporting Trials (CONSORT) and Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Statements. METHODS An iterative multistep consensus-building process was used on the basis of the recommended steps for developing reporting guidelines. The consensus process involved the following: (1) developing a steering committee, (2) defining the scope of the reporting guidelines, (3) identifying a consensus panel, (4) generating a list of items for discussion via online premeeting survey, (5) conducting a consensus meeting, and (6) drafting reporting guidelines with an explanation and elaboration document. RESULTS The following 11 extensions were recommended for CONSORT: item 1 (title/abstract), item 2 (background), item 5 (interventions), item 6 (outcomes), item 11 (blinding), item 12 (statistical methods), item 15 (baseline data), item 17 (outcomes/estimation), item 20 (limitations), item 21 (generalizability), and item 25 (funding). The following 10 extensions were recommended for STROBE: item 1 (title/abstract), item 2 (background/rationale), item 7 (variables), item 8 (data sources/measurement), item 12 (statistical methods), item 14 (descriptive data), item 16 (main results), item 19 (limitations), item 21 (generalizability), and item 22 (funding). An elaboration document was created to provide examples and explanation for each extension. CONCLUSIONS We have developed extensions for the CONSORT and STROBE Statements that can help improve the quality of reporting for SBR (Sim Healthcare 00:00-00, 2016).
Affiliations:
Adam Cheng: Section of Emergency Medicine, Department of Pediatrics, Alberta Children’s Hospital, University of Calgary KidSim-ASPIRE Research Program, 2888 Shaganappi Trail NW, Calgary, Alberta T3B 6A8 Canada.
David Kessler: Columbia University College of Physicians and Surgeons, New York, NY USA.
Ralph Mackinnon: Royal Manchester Children’s Hospital, Central Manchester University Hospitals NHS Foundation Trust, Manchester, UK; Department of Learning, Informatics, Management and Ethics, Karolinska Institute, Stockholm, Sweden.
Todd P. Chang: Children’s Hospital Los Angeles, University of Southern California, Los Angeles, CA USA.
Vinay M. Nadkarni: The Children’s Hospital of Philadelphia, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA USA.
Yiqun Lin: Alberta Children’s Hospital, Cumming School of Medicine, University of Calgary, Calgary, Alberta Canada.
David A. Cook: Multidisciplinary Simulation Center, Mayo Clinic Online Learning, and Division of General Internal Medicine, Mayo Clinic College of Medicine, Rochester, USA.
Martin Pusic: Institute for Innovations in Medical Education, Division of Education Quality and Analytics, NYU School of Medicine, New York, NY USA.
Joshua Hui: Department of Emergency Medicine, David Geffen School of Medicine at UCLA, Los Angeles, CA USA.
David Moher: Ottawa Methods Centre, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Ontario Canada.
Matthias Egger: Institute of Social and Preventive Medicine, University of Bern, Bern, Switzerland.
Marc Auerbach: Department of Pediatrics, Section of Emergency Medicine, Yale University School of Medicine, New Haven, CT USA.
For the International Network for Simulation-based Pediatric Innovation, Research, and Education (INSPIRE) Reporting Guidelines Investigators: the collaborating institutions listed with the individual authors above, plus Johns Hopkins University School of Medicine, Baltimore, MD USA.
|
28
|
Cheng A, Kessler D, Mackinnon R, Chang TP, Nadkarni VM, Hunt EA, Duval-Arnould J, Lin Y, Cook DA, Pusic M, Hui J, Moher D, Egger M, Auerbach M. Reporting guidelines for health care simulation research: Extensions to the CONSORT and STROBE statements. BMJ Simul Technol Enhanc Learn 2016; 2:51-60. [DOI: 10.1136/bmjstel-2016-000124] [Citation(s) in RCA: 16]
29
Development and Testing of Screen-Based and Psychometric Instruments for Assessing Resident Performance in an Operating Room Simulator. Anesthesiol Res Pract 2016; 2016:9348478. [PMID: 27293430] [PMCID: PMC4879220] [DOI: 10.1155/2016/9348478] [Citation(s) in RCA: 1]
Abstract
Introduction. Medical simulators are used for assessing clinical skills and increasingly for testing hypotheses. We developed and tested an approach for assessing performance in anesthesia residents using screen-based simulation that ensures expert raters remain blinded to subject identity and experimental condition. Methods. Twenty anesthesia residents managed emergencies in an operating room simulator by logging actions through a custom graphical user interface. Two expert raters rated performance based on these entries using custom Global Rating Scale (GRS) and Crisis Management Checklist (CMC) instruments. Interrater reliability was measured by calculating intraclass correlation coefficients (ICC), and internal consistency of the instruments was assessed with Cronbach's alpha. Agreement between GRS and CMC was measured using Spearman rank correlation (SRC). Results. Interrater agreement (GRS: ICC = 0.825, CMC: ICC = 0.878) and internal consistency (GRS: alpha = 0.838, CMC: alpha = 0.886) were good for both instruments. Subscale analysis indicated that several instrument items can be discarded. GRS and CMC scores were highly correlated (SRC = 0.948). Conclusions. In this pilot study, we demonstrated that screen-based simulation can allow blinded assessment of performance. GRS and CMC instruments demonstrated good rater agreement and internal consistency. We plan to further test construct validity of our instruments by measuring performance in our simulator as a function of training level.
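As a rough illustration of the internal-consistency statistic reported above, the following sketch computes Cronbach's alpha from a subjects-by-items score matrix. The demo data are hypothetical, not the study's ratings.

```python
# Minimal sketch (illustrative only): Cronbach's alpha for the internal
# consistency of a rating instrument, given a subjects x items score matrix.
# Values near 0.84-0.89, as reported above, indicate good consistency.
import numpy as np

def cronbach_alpha(scores):
    """scores: 2-D array, rows = subjects, columns = instrument items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                         # number of items
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-subject, 4-item Global Rating Scale data:
demo = np.array([[3, 4, 3, 4],
                 [2, 2, 3, 2],
                 [5, 4, 4, 5],
                 [4, 4, 3, 4],
                 [1, 2, 2, 1]])
print(f"alpha = {cronbach_alpha(demo):.3f}")
```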
30
Wani S, Hall M, Wang AY, DiMaio CJ, Muthusamy VR, Keswani RN, Brauer BC, Easler JJ, Yen RD, El Hajj I, Fukami N, Ghassemi KF, Gonzalez S, Hosford L, Hollander TG, Wilson R, Kushnir VM, Ahmad J, Murad F, Prabhu A, Watson RR, Strand DS, Amateau SK, Attwell A, Shah RJ, Early D, Edmundowicz SA, Mullady D. Variation in learning curves and competence for ERCP among advanced endoscopy trainees by using cumulative sum analysis. Gastrointest Endosc 2016; 83:711-9.e11. [PMID: 26515957] [DOI: 10.1016/j.gie.2015.10.022] [Citation(s) in RCA: 61]
Abstract
BACKGROUND AND AIMS There are limited data on learning curves and competence in ERCP. By using a standardized data collection tool, we aimed to prospectively define learning curves and measure competence among advanced endoscopy trainees (AETs) by using cumulative sum (CUSUM) analysis. METHODS AETs were evaluated by attending endoscopists starting with the 26th hands-on ERCP examination and then every ERCP examination during the 12-month training period. A standardized ERCP competency assessment tool (using a 4-point scoring system) was used to grade the examination. CUSUM analysis was applied to produce learning curves for individual technical and cognitive components of ERCP performance (success defined as a score of 1; acceptable and unacceptable failure rates, p0 and p1, of 10% and 20%, respectively). Sensitivity analyses varying p1 and using a less-stringent definition of success were performed. RESULTS Five AETs were included with a total of 1049 graded ERCPs (mean ± SD, 209.8 ± 91.6 per AET). The majority of cases were performed for a biliary indication (80%). The overall and native papilla allowed cannulation times were 3.1 ± 3.6 and 5.7 ± 4 minutes, respectively. Overall learning curves demonstrated substantial variability for individual technical and cognitive endpoints. Although nearly all AETs achieved competence in overall cannulation, none achieved competence for cannulation in cases with a native papilla. Sensitivity analyses increased the proportion of AETs who achieved competence. CONCLUSION This study demonstrates that there is substantial variability in ERCP learning curves among AETs. A specific case volume does not ensure competence, especially for native papilla cannulation.
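For readers unfamiliar with CUSUM learning curves, the sketch below shows one common log-likelihood-ratio formulation using the acceptable and unacceptable failure rates (10% and 20%) named in the abstract; it illustrates the general technique, not the authors' analysis code.

```python
# Illustrative sketch of a binomial CUSUM learning curve with acceptable
# failure rate p0 = 10% and unacceptable failure rate p1 = 20%.
import math

def cusum_curve(outcomes, p0=0.10, p1=0.20):
    """outcomes: iterable of booleans, True = failed attempt.
    Returns the running CUSUM score; a falling curve indicates performance
    near the acceptable rate, a rising curve signals failure rates near p1."""
    s_fail = math.log(p1 / p0)              # positive increment on failure
    s_pass = math.log((1 - p1) / (1 - p0))  # negative increment on success
    score, curve = 0.0, []
    for failed in outcomes:
        score += s_fail if failed else s_pass
        curve.append(score)
    return curve

# Hypothetical trainee: early failures, then a run of successes.
attempts = [True, True, False, True, False, False, False, False, False, False]
print([round(s, 2) for s in cusum_curve(attempts)])
```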
Affiliations:
Sachin Wani: University of Colorado Anschutz Medical Campus, Aurora, Colorado, USA.
Matthew Hall: University of Colorado Anschutz Medical Campus, Aurora, Colorado, USA.
Andrew Y Wang: University of Virginia Health System, Charlottesville, Virginia, USA.
Rajesh N Keswani: Feinberg School of Medicine, Northwestern University, Chicago, Illinois, USA.
Brian C Brauer: University of Colorado Anschutz Medical Campus, Aurora, Colorado, USA.
Jeffrey J Easler: Washington University School of Medicine, St. Louis, Missouri, USA.
Roy D Yen: University of Colorado Anschutz Medical Campus, Aurora, Colorado, USA.
Ihab El Hajj: University of Colorado Anschutz Medical Campus, Aurora, Colorado, USA.
Norio Fukami: University of Colorado Anschutz Medical Campus, Aurora, Colorado, USA.
Susana Gonzalez: Icahn School of Medicine at Mount Sinai, New York, New York, USA.
Lindsay Hosford: University of Colorado Anschutz Medical Campus, Aurora, Colorado, USA.
Robert Wilson: University of Colorado Anschutz Medical Campus, Aurora, Colorado, USA.
Jawad Ahmad: Icahn School of Medicine at Mount Sinai, New York, New York, USA.
Faris Murad: Washington University School of Medicine, St. Louis, Missouri, USA.
Anoop Prabhu: Icahn School of Medicine at Mount Sinai, New York, New York, USA.
Daniel S Strand: University of Virginia Health System, Charlottesville, Virginia, USA.
Stuart K Amateau: University of Colorado Anschutz Medical Campus, Aurora, Colorado, USA.
Augustin Attwell: University of Colorado Anschutz Medical Campus, Aurora, Colorado, USA.
Raj J Shah: University of Colorado Anschutz Medical Campus, Aurora, Colorado, USA.
Dayna Early: Washington University School of Medicine, St. Louis, Missouri, USA.
Daniel Mullady: Washington University School of Medicine, St. Louis, Missouri, USA.
31
The Correlation of Workplace Simulation-Based Assessments With Interns’ Infant Lumbar Puncture Success. Simul Healthc 2016; 11:126-33. [DOI: 10.1097/sih.0000000000000135] [Citation(s) in RCA: 14]
32
Simulation With PARTS (Phase-Augmented Research and Training Scenarios): A Structure Facilitating Research and Assessment in Simulation. Simul Healthc 2016; 10:178-87. [PMID: 25932706] [DOI: 10.1097/sih.0000000000000085] [Citation(s) in RCA: 12]
Abstract
INTRODUCTION Assessment in simulation is gaining importance, as are scenario design methods that increase the opportunity for assessment. We present our approach to improving measurement in complex scenarios using PARTS (Phase-Augmented Research and Training Scenarios), essentially separating cases into clearly delineated phases. METHODS We created 7 PARTS with real-time rating instruments and tested these in 63 cases during 4 weeks of simulation. Reliability was tested by comparing real-time rating with postsimulation video-based rating using the same instrument. Validity was tested by comparing preintervention and postintervention total results, by examining the difference in improvement when focusing on the phase-specific results addressed by the intervention, and further explored by trying to demonstrate the discrete improvement expected from proficiency in the rare occurrence of leader-inclusive behavior. RESULTS Intraclass correlations (ICC[3,1]) between real-time and postsimulation ratings were 0.951 (95% CI, 0.794-0.990), 1.00 (95% CI, --to--), 0.948 (95% CI, 0.783-0.989), and 0.995 (95% CI, 0.977-0.999) for the 3 phase-specific scores and the total scenario score, respectively. Paired t tests of prelecture-postlecture performance showed an improvement of 14.26% (bias-corrected and accelerated bootstrap [BCa] 95% CI, 4.71-23.82; P = 0.009) for total performance but of 28.57% (BCa 95% CI, 13.84-43.30; P = 0.002) for performance in the respective phase. The correlation of total scenario performance with leader inclusiveness was not significant (rs = 0.228; BCa 95% CI, -0.082 to 0.520; P = 0.119) but was significant for specific phase performance (rs = 0.392; BCa 95% CI, 0.118-0.632; P = 0.006). CONCLUSIONS PARTS allowed for improved reliability and validity of measurements in complex scenarios.
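The BCa intervals reported above can be produced with off-the-shelf tools. The sketch below is a minimal example using scipy.stats.bootstrap on hypothetical paired pre/post scores; it illustrates the interval type only and uses none of the study's data.

```python
# Minimal sketch (assumed data): a bias-corrected and accelerated (BCa)
# bootstrap CI for a mean pre/post performance difference, the interval
# type reported in the abstract above.
import numpy as np
from scipy.stats import bootstrap

rng = np.random.default_rng(0)
# Hypothetical paired pre/post scenario scores (percent), one pair per team:
pre = np.array([52, 60, 48, 55, 63, 57, 50, 61, 58, 54], dtype=float)
post = np.array([68, 71, 60, 70, 75, 66, 62, 73, 70, 65], dtype=float)
diffs = post - pre  # paired improvement per team

res = bootstrap((diffs,), np.mean, confidence_level=0.95,
                method="BCa", random_state=rng)
print(f"mean improvement = {diffs.mean():.1f} points, "
      f"BCa 95% CI ({res.confidence_interval.low:.1f}, "
      f"{res.confidence_interval.high:.1f})")
```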
33
Burns R, Adler M, Mangold K, Trainor J. A Brief Boot Camp for 4th-Year Medical Students Entering into Pediatric and Family Medicine Residencies. Cureus 2016; 8:e488. [PMID: 27014522] [PMCID: PMC4786377] [DOI: 10.7759/cureus.488] [Citation(s) in RCA: 14]
Abstract
The transition from medical student to intern is a challenging process characterized by a steep learning curve. Focused courses targeting skills necessary for success as a resident have increased self-perceived preparedness, confidence, and medical knowledge. Our aim was to create a brief educational intervention for 4th-year medical students entering pediatric, family practice, and medicine/pediatric residencies to target skills necessary for an internship. The curriculum used a combination of didactic presentations, small group discussions, role-playing, facilitated debriefing, and simulation-based education. Participants completed an objective structured clinical exam requiring synthesis and application of multiple boot camp elements before and after the elective. Participants completed anonymous surveys assessing self-perceived preparedness for an internship, overall and in regard to specific skills, before and after the course, and were asked to provide feedback about the course. Using checklists to assess performance, students showed an improvement in performing infant lumbar punctures (47.2% vs 77.0%; p < 0.01; 95% CI for the difference, 0.2 to 0.4) and in providing signout (2.5 vs 3.9 on a 5-point scale; p < 0.01; 95% CI for the difference, 0.6 to 2.3). They did not show an improvement in communication with a parent. Participants demonstrated an increase in self-reported preparedness for all targeted skills except obtaining consults and interprofessional communication. There was no increase in reported overall preparedness. All participants agreed with the statements, “The facilitators presented the material in an effective manner,” “I took away ideas I plan to implement in internship,” and “I think all students should participate in a similar experience.” When asked to assess the usefulness of individual modules, all except order writing received a mean Likert score > 4. A focused boot camp addressing key knowledge and skills required for pediatric-related residencies was well received and led to improved performance of targeted skills and increased self-reported preparedness in many targeted domains.
Affiliations:
Rebekah Burns: Pediatrics, Seattle Children's Hospital - University of Washington School of Medicine.
Mark Adler: Pediatrics, Northwestern University Feinberg School of Medicine, Chicago, IL, USA.
Karen Mangold: Pediatrics, Northwestern University Feinberg School of Medicine, Chicago, IL, USA.
Jennifer Trainor: Pediatrics, Northwestern University Feinberg School of Medicine, Chicago, IL, USA.
34
35
Walzak A, Bacchus M, Schaefer JP, Zarnke K, Glow J, Brass C, McLaughlin K, Ma IWY. Diagnosing technical competence in six bedside procedures: comparing checklists and a global rating scale in the assessment of resident performance. Acad Med 2015; 90:1100-8. [PMID: 25881644] [DOI: 10.1097/acm.0000000000000704] [Citation(s) in RCA: 39]
Abstract
PURPOSE To compare procedure-specific checklists and a global rating scale in assessing technical competence. METHOD Two trained raters used procedure-specific checklists and a global rating scale to independently evaluate 218 video-recorded performances of six bedside procedures of varying complexity for technical competence. The procedures were completed by 47 residents participating in a formative simulation-based objective structured clinical examination at the University of Calgary in 2011. Pass/fail (competent/not competent) decisions were based on an overall global assessment item on the global rating scale. Raters provided written comments on performances they deemed not competent. Checklist minimum passing levels were set using traditional standard-setting methods. RESULTS For each procedure, the global rating scale demonstrated higher internal reliability and lower interrater reliability than the checklist. However, interrater reliability was almost perfect for decisions on competence using the overall global assessment (Kappa range: 0.84-1.00). Clinically significant procedural errors were most often cited as reasons for ratings of not competent. Using checklist scores to diagnose competence demonstrated acceptable discrimination: The area under the curve ranged from 0.84 (95% CI 0.72-0.97) to 0.93 (95% CI 0.82-1.00). Checklist minimum passing levels demonstrated high sensitivity but low specificity for diagnosing competence. CONCLUSIONS Assessment using a global rating scale may be superior to assessment using a checklist for evaluation of technical competence. Traditional standard-setting methods may establish checklist cut scores with too-low specificity: High checklist scores did not rule out incompetence. The role of clinically significant errors in determining procedural competence should be further evaluated.
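The AUC analysis described above has a simple rank-based interpretation, sketched below with hypothetical checklist scores (not study data): the AUC equals the probability that a randomly chosen competent performance outscores a randomly chosen non-competent one.

```python
# Illustrative sketch: rank-based AUC for how well checklist scores
# discriminate competent from non-competent performances (ties count half).
from itertools import product

def auc(competent_scores, noncompetent_scores):
    pairs = list(product(competent_scores, noncompetent_scores))
    wins = sum(1.0 if c > n else 0.5 if c == n else 0.0 for c, n in pairs)
    return wins / len(pairs)

# Hypothetical checklist scores (percent of items completed):
competent = [95, 90, 88, 92, 85, 97, 91]
noncompetent = [80, 86, 70, 75, 88, 65]
print(f"AUC = {auc(competent, noncompetent):.2f}")
# High scores in the non-competent group (e.g. 86, 88) illustrate the low
# specificity the authors describe: a high checklist score does not rule
# out a clinically significant error.
```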
Affiliations:
A. Walzak is clinical instructor, Department of Medicine, University of British Columbia, Victoria, British Columbia, Canada. M. Bacchus is associate professor, Department of Medicine, University of Calgary, Calgary, Alberta, Canada. J.P. Schaefer is clinical professor, Department of Medicine, University of Calgary, Calgary, Alberta, Canada. K. Zarnke is associate professor, Department of Medicine, University of Calgary, Calgary, Alberta, Canada. J. Glow is internal medicine residency program administrator, University of Calgary, Calgary, Alberta, Canada. C. Brass is internal medicine residency program assistant, University of Calgary, Calgary, Alberta, Canada. K. McLaughlin is associate professor, Department of Medicine, University of Calgary, Calgary, Alberta, Canada. I.W.Y. Ma is associate professor, Department of Medicine, University of Calgary, Calgary, Alberta, Canada.
36
Kessler D, Pusic M, Chang TP, Fein DM, Grossman D, Mehta R, White M, Jang J, Whitfill T, Auerbach M. Impact of Just-in-Time and Just-in-Place Simulation on Intern Success With Infant Lumbar Puncture. Pediatrics 2015; 135:e1237-46. [PMID: 25869377] [DOI: 10.1542/peds.2014-1911] [Citation(s) in RCA: 65]
Abstract
BACKGROUND AND OBJECTIVE Simulation-based skills training is common; however, optimal instructional designs that improve outcomes are not well specified. We explored the impact of just-in-time and just-in-place training (JIPT) on interns' infant lumbar puncture (LP) success. METHODS This prospective study enrolled pediatric and emergency medicine interns from 2009 to 2012 at 34 centers. Two distinct instructional design strategies were compared. Cohort A (2009-2010) completed simulation-based training at commencement of internship, receiving individually coached practice on the LP simulator until achieving a predefined mastery performance standard. Cohort B (2010-2012) had the same training plus JIPT sessions immediately before their first clinical LP. The main outcome was LP success, defined as obtaining fluid with first needle insertion and <1000 red blood cells per high-power field. Process measures included use of analgesia, early stylet removal, and overall attempts. RESULTS A total of 436 first infant LPs were analyzed. The LP success rate in cohort A was 35% (13/37), compared with 38% (152/399) in cohort B (95% confidence interval for difference [CI diff], -15% to +18%). Cohort B exhibited greater analgesia use (68% vs 19%; 95% CI diff, 33% to 59%), early stylet removal (69% vs 54%; 95% CI diff, 0% to 32%), and lower mean number of attempts (1.4 ± 0.6 vs 2.1 ± 1.6, P < .01) compared with cohort A. CONCLUSIONS Across multiple institutions, intern success rates with infant LP are poor. Despite improving process measures, adding JIPT to training bundles did not improve the success rate. More research is needed on optimal instructional design strategies for infant LP.
Affiliations:
David Kessler: Columbia University Medical Center, New York, New York.
Martin Pusic: New York University Langone Medical Center, New York, New York.
Todd P Chang: Children's Hospital of Los Angeles, Los Angeles, California.
Daniel M Fein: Albert Einstein College of Medicine, Children's Hospital at Montefiore, Bronx, New York.
Renuka Mehta: Medical College of Georgia at Georgia Regents University, Augusta, Georgia.
Marjorie White: University of Alabama at Birmingham, Birmingham, Alabama.
Jaewon Jang: Yale University School of Medicine, New Haven, Connecticut.
Marc Auerbach: Yale University School of Medicine, New Haven, Connecticut.
37
Cheng A, Auerbach M, Hunt EA, Chang TP, Pusic M, Nadkarni V, Kessler D. Designing and conducting simulation-based research. Pediatrics 2014; 133:1091-101. [PMID: 24819576] [DOI: 10.1542/peds.2013-3267] [Citation(s) in RCA: 138]
Abstract
As simulation is increasingly used to study questions pertaining to pediatrics, it is important that investigators use rigorous methods to conduct their research. In this article, we discuss several important aspects of conducting simulation-based research in pediatrics. First, we describe, from a pediatric perspective, the 2 main types of simulation-based research: (1) studies that assess the efficacy of simulation as a training methodology and (2) studies where simulation is used as an investigative methodology. We provide a framework to help structure research questions for each type of research and describe illustrative examples of published research in pediatrics using these 2 frameworks. Second, we highlight the benefits of simulation-based research and how these apply to pediatrics. Third, we describe simulation-specific confounding variables that serve as threats to the internal validity of simulation studies and offer strategies to mitigate these confounders. Finally, we discuss the various types of outcome measures available for simulation research and offer a list of validated pediatric assessment tools that can be used in future simulation-based studies.
Affiliations:
Adam Cheng: University of Calgary, Section of Emergency Medicine, Department of Pediatrics, Alberta Children's Hospital.
Marc Auerbach: Department of Pediatrics, Section of Emergency Medicine, Yale University School of Medicine, New Haven, Connecticut.
Elizabeth A Hunt: Departments of Anesthesiology, Critical Care Medicine and Pediatrics, Johns Hopkins University School of Medicine, Baltimore, Maryland.
Todd P Chang: Division of Emergency Medicine, Children's Hospital Los Angeles, Los Angeles, California.
Martin Pusic: Office of Medical Education, Division of Educational Informatics, New York University School of Medicine, New York, New York.
Vinay Nadkarni: Division of Anesthesia and Critical Care Medicine, Children's Hospital of Philadelphia, University of Pennsylvania School of Medicine, Philadelphia, Pennsylvania.
David Kessler: Department of Pediatrics, Division of Pediatric Emergency Medicine, Columbia University College of Physicians and Surgeons, New York, New York.
38
Walsh CM, Ling SC, Khanna N, Cooper MA, Grover SC, May G, Walters TD, Rabeneck L, Reznick R, Carnahan H. Gastrointestinal Endoscopy Competency Assessment Tool: development of a procedure-specific assessment tool for colonoscopy. Gastrointest Endosc 2014; 79:798-807.e5. [PMID: 24321390] [DOI: 10.1016/j.gie.2013.10.035] [Citation(s) in RCA: 50]
Abstract
BACKGROUND Ensuring competence remains a seminal objective of endoscopy training programs, professional organizations, and accreditation bodies; however, no widely accepted measure of endoscopic competence currently exists. OBJECTIVE By using Delphi methodology, we aimed to develop and establish the content validity of the Gastrointestinal Endoscopy Competency Assessment Tool for colonoscopy. DESIGN An international panel of endoscopy experts rated potential checklist and global rating items for their importance as indicators of the competence of trainees learning to perform colonoscopy. After each round, responses were analyzed and sent back to the experts for further ratings until consensus was reached. MAIN OUTCOME MEASUREMENTS Consensus was defined a priori as ≥80% of experts, in a given round, scoring ≥4 of 5 on all remaining items. RESULTS Fifty-five experts agreed to be part of the Delphi panel: 43 gastroenterologists, 10 surgeons, and 2 endoscopy managers. Seventy-three checklist and 34 global rating items were generated through a systematic literature review and survey of committee members. An additional 2 checklist and 4 global rating items were added by Delphi panelists. Five rounds of surveys were completed before consensus was achieved, with response rates ranging from 67% to 100%. Seven global ratings and 19 checklist items reached consensus as good indicators of the competence of clinicians performing colonoscopy. LIMITATIONS Further validation required. CONCLUSION Delphi methodology allowed for the rigorous development and content validation of a new measure of endoscopic competence, reflective of practice across institutions. Although further evaluation is required, it is a promising step toward the objective assessment of competency for use in colonoscopy training, practice, and research.
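The a priori consensus rule in this abstract (≥80% of experts scoring ≥4 of 5 on all remaining items) is easy to operationalize. The sketch below uses made-up ratings and hypothetical item names purely to illustrate the check.

```python
# Toy sketch (hypothetical ratings, not panel data): checking the Delphi
# consensus rule described above, i.e. >=80% of experts scoring each
# remaining item >=4 on a 5-point scale.
def consensus_reached(ratings_by_item, threshold=0.80, min_score=4):
    """ratings_by_item: dict mapping item name -> list of expert ratings.
    Consensus requires every remaining item to meet the threshold."""
    item_ok = {
        item: sum(r >= min_score for r in ratings) / len(ratings) >= threshold
        for item, ratings in ratings_by_item.items()
    }
    return all(item_ok.values()), item_ok

ratings = {
    "checklist: scope insertion": [5, 4, 4, 5, 3, 4, 4, 5, 4, 4],  # 90% >= 4
    "global: independence":       [4, 3, 5, 4, 4, 3, 4, 4, 3, 5],  # 70% >= 4
}
done, per_item = consensus_reached(ratings)
print(done, per_item)  # False -> another Delphi round would be needed
```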
Affiliations:
Catharine M Walsh: Division of Gastroenterology, Hepatology and Nutrition, Hospital for Sick Children, University of Toronto, Toronto, Ontario, Canada; Department of Paediatrics, University of Toronto, Toronto, Ontario, Canada; The Wilson Centre, University of Toronto, Toronto, Ontario, Canada.
Simon C Ling: Division of Gastroenterology, Hepatology and Nutrition, Hospital for Sick Children, University of Toronto, Toronto, Ontario, Canada; Department of Paediatrics, University of Toronto, Toronto, Ontario, Canada.
Nitin Khanna: Division of Gastroenterology, St. Joseph's Health Centre, University of Western Ontario, Toronto, Ontario, Canada.
Mary Anne Cooper: Division of Gastroenterology, Sunnybrook Health Sciences Centre, University of Toronto, Toronto, Ontario, Canada; Department of Medicine, University of Toronto, Toronto, Ontario, Canada.
Samir C Grover: Division of Gastroenterology, St. Michael's Hospital, University of Toronto, Toronto, Ontario, Canada; Department of Medicine, University of Toronto, Toronto, Ontario, Canada.
Gary May: Division of Gastroenterology, St. Michael's Hospital, University of Toronto, Toronto, Ontario, Canada; Department of Medicine, University of Toronto, Toronto, Ontario, Canada.
Thomas D Walters: Division of Gastroenterology, Hepatology and Nutrition, Hospital for Sick Children, University of Toronto, Toronto, Ontario, Canada; Department of Paediatrics, University of Toronto, Toronto, Ontario, Canada.
Linda Rabeneck: Cancer Care Ontario, Queen's University, Toronto, Ontario, Canada; Department of Medicine, University of Toronto, Toronto, Ontario, Canada.
Richard Reznick: Faculty of Health Sciences, Queen's University, Toronto, Ontario, Canada.
Heather Carnahan: Centre for Ambulatory Care Education, Women's College Hospital, University of Toronto, Toronto, Ontario, Canada; The Wilson Centre, University of Toronto, Toronto, Ontario, Canada.