1
Yadav SK, Jose A, Sharma D, Biyani CS. Simulation to Scalpel: A Systematic Review of True Evidence of Skills Transfer as Seen Through the Lens of Patient Outcomes. World J Surg 2025; 49:906-915. [PMID: 40050029] [DOI: 10.1002/wjs.12525]
Abstract
INTRODUCTION Simulation-based training (SBT) has become an essential component of surgical education. However, definitive evidence of direct patient benefit remains inconsistent. This prompted us to conduct a systematic review and meta-analysis of Kirkpatrick Level 4 evidence to evaluate whether SBT translates into clinical benefits and improves patient outcomes. METHODS We designed a search protocol a priori and followed the PRISMA guidelines for systematic reviews. Medline (via PubMed), the Cochrane Library, online clinical trial registers, and relevant websites were searched from inception until 31 October 2024. Included studies were randomized controlled trials of patients undergoing any invasive intervention involving manual skills after SBT, compared with the same intervention without SBT, with outcomes reported as Clavien-Dindo complication grades. The methodological quality of included studies was assessed using the Cochrane revised tool for assessing risk of bias in randomized trials (RoB 2). The Cochrane Collaboration's Review Manager software, version 5.3, was used for data analysis. The Grading of Recommendations, Assessment, Development and Evaluation (GRADE) approach was used to rate the certainty of evidence for studies included in the meta-analysis. RESULTS Ten studies were included in the final meta-analysis; all were rated as low risk of bias. The results favored simulation, but no statistically significant difference was observed between simulation and conventional training. The GRADE assessment reflected moderate certainty. DISCUSSION We evaluated the effectiveness of SBT in improving patient-centric outcomes, classified by Clavien-Dindo complication grades, using Kirkpatrick Level 4 evidence from randomized controlled trials, and found that outcomes were comparable to those of conventional training. Future studies are needed to address this limitation in the current evidence base and to confirm and maximize the patient-centered benefits of SBT.
Affiliation(s)
- Animesh Jose
- Department of Surgery, NSCB Medical College, Jabalpur, India
- Chandra Shekhar Biyani
- Department of Urology, St James's University Hospital, Leeds Teaching Hospitals NHS Trust, Leeds, UK
- CADSIM (Advanced Cadaveric Surgical Simulation Program), Anatomy Department, University of Leeds, Leeds, UK
2
Potter A, Munsch C, Watson E, Hopkins E, Kitromili S, O'Neill IC, Larbie J, Niittymaki E, Ramsay C, Burke J, Ralph N. Identifying Research Priorities in Digital Education for Health Care: Umbrella Review and Modified Delphi Method Study. J Med Internet Res 2025; 27:e66157. [PMID: 39969988] [PMCID: PMC11888089] [DOI: 10.2196/66157]
Abstract
BACKGROUND In recent years, the use of digital technology in the education of health care professionals has surged, partly driven by the COVID-19 pandemic. However, focused research is still needed to establish evidence of its effectiveness. OBJECTIVE This study aimed to define the gaps in the evidence for the efficacy of digital education and to identify priority areas where future research could advance our understanding and use of digital education. METHODS We used a 2-stage approach to identify research priorities. First, an umbrella review of the recent literature (published between 2020 and 2023) was performed to identify and build on existing work. Second, expert consensus on the priority research questions was obtained using a modified Delphi method. RESULTS A total of 8857 potentially relevant papers were identified. Using the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) methodology, we included 217 papers, all systematic reviews or meta-analyses, for full review. A total of 151 research recommendations were extracted from the 217 papers, then analyzed, recategorized, and consolidated into a final list of 63 questions. From these, a modified Delphi process with 42 experts produced the five top-rated research priorities: (1) How do we measure the transfer of learning from digital education into the clinical setting? (2) How can we optimize the use of artificial intelligence, machine learning, and deep learning to facilitate education and training? (3) What are the methodological requirements for high-quality, rigorous studies assessing the outcomes of digital health education? (4) How does the design of digital education interventions (eg, format and modality) in health professionals' education and training curricula affect learning outcomes? and (5) How should learning outcomes in the field of health professions' digital education be defined and standardized?
CONCLUSIONS This review provides a prioritized list of research gaps in digital education in health care, which will be of use to researchers, educators, education providers, and funding agencies. Additional proposals are discussed regarding the next steps needed to advance this agenda, aiming to promote meaningful and practical research on the use of digital technologies and drive excellence in health care education.
Affiliation(s)
- Alison Potter
- Technology Enhanced Learning, NHS England, Southampton, United Kingdom
- Chris Munsch
- Technology Enhanced Learning, NHS England, Leeds, United Kingdom
- Elaine Watson
- Technology Enhanced Learning, NHS England, Oxford, United Kingdom
- Emily Hopkins
- Knowledge Management Service, NHS England, Manchester, United Kingdom
- Sofia Kitromili
- Technology Enhanced Learning, NHS England, Southampton, United Kingdom
- Judy Larbie
- Technology Enhanced Learning, NHS England, London, United Kingdom
- Essi Niittymaki
- Technology Enhanced Learning, NHS England, London, United Kingdom
- Catriona Ramsay
- Technology Enhanced Learning, NHS England, Newcastle upon Tyne, United Kingdom
- Joshua Burke
- Manchester Foundation Trust, Manchester, United Kingdom
- Neil Ralph
- Technology Enhanced Learning, NHS England, London, United Kingdom
3
Gill P, Levin M, Farhood Z, Asaria J. Surgical Training Simulators for Rhinoplasty: A Systematic Review. Facial Plast Surg 2024; 40:86-92. [PMID: 37172948] [DOI: 10.1055/a-2092-6564]
Abstract
Rhinoplasty training currently follows an apprenticeship model that is largely observational, so trainees have limited hands-on experience in performing the maneuvers of this complex surgery. Rhinoplasty simulators can address this issue by giving trainees surgical simulation experience that could improve technical competence in the operating room. This review amalgamates the collective understanding of rhinoplasty simulators described to date. In accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, the PubMed, OVID Embase, OVID Medline, and Web of Science databases were searched for original research on surgical simulators for rhinoplasty education. Articles underwent title and abstract screening by independent reviewers, and relevant articles then underwent full-text review to extract simulator data. Seventeen studies, published between 1984 and 2021, were included in the final analysis. Study participant numbers ranged from 4 to 24 and included staff surgeons, fellows, residents (postgraduate years 1-6), and medical students. Eight studies used cadaveric surgical simulators (three of them human cadavers), one used a live animal simulator, two used virtual simulators, and six used three-dimensional (3D) models. Both animal- and human-based simulators significantly increased trainee confidence, and implementation of a 3D-printed model in rhinoplasty education significantly improved various aspects of rhinoplasty knowledge. Current simulators are limited by the lack of an automated method of evaluation and a heavy reliance on feedback from experienced rhinoplasty surgeons. Rhinoplasty simulators have the potential to provide trainees with hands-on training to build skill and develop competencies without putting patients in harm's way. The current literature largely focuses on simulator development, with few simulators validated and assessed for utility; wider implementation and acceptance will require further refinement of simulators, validation, and assessment of outcomes.
Affiliation(s)
- P Gill
- Faculty of Medicine, University of Ottawa, Ottawa, Ontario, Canada
- M Levin
- Department of Otolaryngology - Head and Neck Surgery, University of Toronto, Toronto, Ontario, Canada
- Z Farhood
- Division of Facial Plastic and Reconstructive Surgery, Department of Otolaryngology - Head and Neck Surgery, University of Toronto, Toronto, Ontario, Canada
- FACE Cosmetic Surgery, Toronto, Ontario, Canada
- J Asaria
- Division of Facial Plastic and Reconstructive Surgery, Department of Otolaryngology - Head and Neck Surgery, University of Toronto, Toronto, Ontario, Canada
- FACE Cosmetic Surgery, Toronto, Ontario, Canada
4
Lüscher M, Konge L, Tingsgaard P, Barrett TQ, Andersen SAW. Gathering validity evidence for a 3D-printed simulator for training of myringotomy and ventilation tube insertion. Laryngoscope Investig Otolaryngol 2023; 8:1357-1364. [PMID: 37899878] [PMCID: PMC10601587] [DOI: 10.1002/lio2.1123]
Abstract
Objectives This study aimed to gather validity evidence according to Messick's framework for a novel 3D-printed simulator for myringotomy with ventilation tube insertion for use in technical skills training of otorhinolaryngology (ORL) residents. Methods The study included 15 junior ORL residents (trainees) and 13 experienced teaching otolaryngologists (experts). Experts and trainees first received an identically structured introduction to the procedure, simulator, and simulation setup. Five procedures performed by each participant were video-recorded and ordered randomly for blinded rating by two independent raters. The rating tools used were a global rating scale (GBRS) and a task-specific checklist. Validity evidence was collected according to Messick's framework. Differences in time consumption and performance scores were analyzed. Finally, a pass/fail standard was established using the contrasting groups' method. Results Trainees used significantly more time per procedure (109 s, 95% CI: 99-120) than experts (82 s, 95% CI: 71-93; p < .001). Adjusted for repetition and rater leniency, experts achieved an average GBRS score of 18.8 (95% CI: 18.3-19.2) out of 20 points, whereas trainees achieved an average of 17.1 points (95% CI: 16.6-17.5; p < .001). In contrast to the task-specific checklist, the GBRS score discriminated between repetition number and participant experience. The pass/fail standard for the GBRS was established at 18.4 points. Conclusion We established educational validity evidence for a novel 3D-printed model for simulation-based training of ventilation tube insertion and established a reliable pass/fail standard. Level of Evidence 1b.
Affiliation(s)
- Lars Konge
- Copenhagen Academy for Medical Education and Simulation (CAMES), Center for Human Resources & Education, Copenhagen, Denmark
5
DeWalt NC, Stahorsky KA, Sturges S, Bena JF, Morrison SL, Drobnich Sulak L, Szczepinski L, Albert NM. Simulation Versus Written Fall Prevention Education in Older Hospitalized Adults: A Randomized Controlled Study. Clin Nurs Res 2023; 32:278-287. [PMID: 35291853] [DOI: 10.1177/10547738221082192]
Abstract
Using a randomized controlled, non-blinded, two-group design, differences in fall risk assessment scores, post-discharge sustainable fall risk changes, fall events, and re-hospitalization were examined in 77 older adults who received a simulation (n = 36) or written (n = 41) education intervention. Between-group differences and changes in pre- versus post-intervention fall risk assessment scores were examined using Pearson's chi-square, Wilcoxon rank sum, or Fisher's exact tests (categorical variables) and two-sample t-tests (continuous variables). There were no statistically significant between-group differences in demographic characteristics. Patients who received simulation education had higher post-intervention fall risk assessment scores than the written education group (p = .022), and the change in scores (post- vs. pre-intervention) was greater in the simulation group than in the written education group (1.43; 95% CI: 0.37-2.50; p = .009). At each post-discharge assessment, fall events were numerically fewer but not significantly different between the simulation and written education groups. There were no statistically significant between-group differences in re-hospitalization.
Affiliation(s)
- Nancy C DeWalt
- Cleveland Clinic Hillcrest Hospital, Mayfield Heights, OH, USA
- Susan Sturges
- Cleveland Clinic Hillcrest Hospital, Mayfield Heights, OH, USA
- James F Bena
- Cleveland Clinic Hillcrest Hospital, Mayfield Heights, OH, USA
6
Haiser A, Aydin A, Kunduzi B, Ahmed K, Dasgupta P. A Systematic Review of Simulation-Based Training in Vascular Surgery. J Surg Res 2022; 279:409-419. [PMID: 35839575] [PMCID: PMC9483723] [DOI: 10.1016/j.jss.2022.05.009]
Abstract
Introduction Recent advancements in surgical technology, reduced working hours, and training opportunities limited by the COVID-19 pandemic have led to an increase in simulation-based training. Furthermore, a rise in endovascular procedures has created a need for high-fidelity simulators that offer comprehensive feedback. This review aims to identify vascular surgery simulation models and assess the validity and level of effectiveness (LoE) of each model so that they can be successfully implemented into current training curricula. Methods PubMed and EMBASE were searched on January 1, 2021, for full-text English studies on vascular surgery simulators. Eligible articles were given validity ratings based on Messick's modern concept of validity alongside an LoE score according to McGaghie's translational outcomes. Results Overall, 76 eligible articles validated 34 vascular surgery simulators and training courses for open and endovascular procedures. High validity ratings were achieved across studies for content (35), response processes (12), internal structure (5), relations to other variables (57), and consequences (2). Only seven studies achieved an LoE greater than 3/5. Overall, ANGIO Mentor was the most highly validated and effective simulator and the only one to achieve an LoE of 5/5. Conclusions Simulation-based training in vascular surgery is a continuously developing field with exciting prospects, as demonstrated by the vast number of models and training courses. To integrate simulation models effectively into current vascular surgery curricula and assessments, studies need to examine trainee skill retention over longer periods; a more detailed discussion of cost-effectiveness is also needed.
Affiliation(s)
- Alexander Haiser
- Guy's, King's and St Thomas' School of Medical Education, King's College London, London, UK
- Abdullatif Aydin
- MRC Centre for Transplantation, Guy's Hospital, King's College London, London, UK
- Basir Kunduzi
- Department of Transplant Surgery, Guy's and St. Thomas' NHS Foundation Trust, London, UK
- Kamran Ahmed
- MRC Centre for Transplantation, Guy's Hospital, King's College London, London, UK
- Prokar Dasgupta
- MRC Centre for Transplantation, Guy's Hospital, King's College London, London, UK