1
Andrade-Espinoza B, Oviedo-Peñata C, Maldonado-Estrada JG. Use of a Composed Simulator by Veterinarian Non-Experts in Minimally Invasive Surgery for Training and Acquisition of Surgical Skills for Laparoscopic Ovariectomy in Dogs. Animals (Basel) 2023; 13:2698. PMID: 37684962; PMCID: PMC10487008; DOI: 10.3390/ani13172698.
Abstract
This study aims to assess the acquisition of surgical skills for laparoscopic ovariectomy (LOE) in dogs by veterinary surgeons with no experience in minimally invasive surgery (MIS), using the CALMA Veterinary Lap-trainer simulator (CVLTS) in an experimental and analytical setting. Veterinary surgeons with no MIS experience (experimental, n = 5) and MIS experts (experts, n = 3) were evaluated. Participants in both groups watched an instructional video before performing the LOE on uterine tissues and ovaries freshly reconstituted after elective ovariohysterectomy (initial evaluation). The experimental group then completed five training sessions on the composite simulator with continuous feedback and performed the LOE again (final evaluation). Surgical performances in the initial and final evaluations were video recorded and assessed by three external MIS experts using the Global Objective Assessment of Laparoscopic Skills (GOALS) and LOE-specific rating scales (SRSs) in a double-blinded schedule. In addition, a hand movement assessment system (HMAS) attached to the back of the hands quantitatively measured completion time, angularity, and movement smoothness. Data were analyzed with one-factor ANOVA and Tukey's contrast test. No statistically significant differences were found between the novice group's performance after training and the expert group's performance according to the GOALS (p < 0.01) and SRS (p < 0.05) scores. Moreover, the novices significantly improved completion time, number of movements, and angularity at the final evaluation compared with the initial evaluation (p < 0.05), with no significant differences from the expert group (p > 0.05). LOE training using a composed simulator resulted in significantly improved laparoscopic skills and movement time, number, and angularity, providing evidence of the usefulness and reliability of the CVLTS in training LOE.
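The analysis pipeline this abstract describes (one-factor ANOVA followed by Tukey's contrast across novice-initial, novice-final, and expert groups) can be sketched with SciPy; the GOALS-style scores below are illustrative values, not the study's data.

```python
from scipy.stats import f_oneway, tukey_hsd

# Illustrative GOALS-style totals (not the study's data): novices before and
# after simulator training, plus an expert reference group.
novice_initial = [10, 11, 9, 12, 10]
novice_final = [18, 19, 17, 18, 19]
expert = [19, 18, 20, 19]

# One-factor ANOVA: do the three groups differ overall?
f_stat, p_overall = f_oneway(novice_initial, novice_final, expert)

# Tukey's HSD contrasts: res.pvalue[i, j] compares the i-th and j-th samples
# in the order they were passed (0 = initial, 1 = final, 2 = expert).
res = tukey_hsd(novice_initial, novice_final, expert)
p_init_vs_final = res.pvalue[0, 1]    # training effect within novices
p_final_vs_expert = res.pvalue[1, 2]  # trained novices vs. experts
```

With scores patterned like the study's findings, the overall ANOVA and the initial-vs-final contrast come out significant while the final-vs-expert contrast does not.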
Affiliation(s)
- Belén Andrade-Espinoza
- OHVRI-Research Group, College of Veterinary Medicine, Faculty of Agrarian Sciences, University of Antioquia, Medellín 050010, Colombia
- Master of Science Program in Canine and Feline Internal Medicine, University of Cuenca, Cuenca 010107, Ecuador
- Carlos Oviedo-Peñata
- OHVRI-Research Group, College of Veterinary Medicine, Faculty of Agrarian Sciences, University of Antioquia, Medellín 050010, Colombia
- Tropical Animal Production Research Group, Faculty of Veterinary Medicine and Zootechny, University of Cordoba, Monteria 230002, Colombia
- Juan G. Maldonado-Estrada
- OHVRI-Research Group, College of Veterinary Medicine, Faculty of Agrarian Sciences, University of Antioquia, Medellín 050010, Colombia
2
Balvardi S, Kammili A, Hanson M, Mueller C, Vassiliou M, Lee L, Schwartzman K, Fiore JF, Feldman LS. The association between video-based assessment of intraoperative technical performance and patient outcomes: a systematic review. Surg Endosc 2022; 36:7938-7948. PMID: 35556166; DOI: 10.1007/s00464-022-09296-6.
Abstract
BACKGROUND Efforts to improve surgical safety and outcomes have traditionally placed little emphasis on intraoperative performance, partly due to difficulties in measurement. Video-based assessment (VBA) provides an opportunity for blinded and unbiased appraisal of surgeon performance. Therefore, we aimed to systematically review the existing literature on the association between intraoperative technical performance, measured using VBA, and patient outcomes. METHODS Major databases (Medline, Embase, Cochrane Database, and Web of Science) were systematically searched for studies assessing the association of intraoperative technical performance measured by tools supported by validity evidence with short-term (≤ 30 days) and/or long-term postoperative outcomes. Study quality was assessed using the Newcastle-Ottawa Scale. Results were appraised descriptively as study heterogeneity precluded meta-analysis. RESULTS A total of 11 observational studies were identified involving 8 different procedures in foregut/bariatric (n = 4), colorectal (n = 4), urologic (n = 2), and hepatobiliary surgery (n = 1). The number of surgeons assessed ranged from 1 to 34; patient sample size ranged from 47 to 10,242. High risk of bias was present in 5 of 8 studies assessing short-term outcomes and 2 of 6 studies assessing long-term outcomes. Short-term outcomes were reported in 8 studies (i.e., morbidity, mortality, and readmission), while 6 reported long-term outcomes (i.e., cancer outcomes, weight loss, and urinary continence). Better intraoperative performance was associated with fewer postoperative complications (6 of 7 studies), reoperations (3 of 4 studies), and readmissions (1 of 4 studies). Long-term outcomes were less commonly investigated, with mixed results. CONCLUSION Current evidence supports an association between superior intraoperative technical performance measured using surgical videos and improved short-term postoperative outcomes. 
Intraoperative performance analysis using video-based assessment represents a promising approach to surgical quality-improvement.
Affiliation(s)
- Saba Balvardi
- Department of Surgery, McGill University, 1650 Cedar Ave, D6-136, Montreal, QC, H3G 1A4, Canada
- Steinberg-Bernstein Centre for Minimally Invasive Surgery and Innovation, McGill University Health Centre, Montreal, QC, Canada
- Anitha Kammili
- Department of Surgery, McGill University, 1650 Cedar Ave, D6-136, Montreal, QC, H3G 1A4, Canada
- Steinberg-Bernstein Centre for Minimally Invasive Surgery and Innovation, McGill University Health Centre, Montreal, QC, Canada
- Melissa Hanson
- Department of Surgery, McGill University, 1650 Cedar Ave, D6-136, Montreal, QC, H3G 1A4, Canada
- Steinberg-Bernstein Centre for Minimally Invasive Surgery and Innovation, McGill University Health Centre, Montreal, QC, Canada
- Carmen Mueller
- Department of Surgery, McGill University, 1650 Cedar Ave, D6-136, Montreal, QC, H3G 1A4, Canada
- Steinberg-Bernstein Centre for Minimally Invasive Surgery and Innovation, McGill University Health Centre, Montreal, QC, Canada
- Melina Vassiliou
- Department of Surgery, McGill University, 1650 Cedar Ave, D6-136, Montreal, QC, H3G 1A4, Canada
- Steinberg-Bernstein Centre for Minimally Invasive Surgery and Innovation, McGill University Health Centre, Montreal, QC, Canada
- Lawrence Lee
- Department of Surgery, McGill University, 1650 Cedar Ave, D6-136, Montreal, QC, H3G 1A4, Canada
- Steinberg-Bernstein Centre for Minimally Invasive Surgery and Innovation, McGill University Health Centre, Montreal, QC, Canada
- Kevin Schwartzman
- Respiratory Division, Department of Medicine, McGill University, Montreal, QC, Canada
- McGill International Tuberculosis Centre, Research Institute of the McGill University Health Centre, Montreal, QC, Canada
- Julio F Fiore
- Department of Surgery, McGill University, 1650 Cedar Ave, D6-136, Montreal, QC, H3G 1A4, Canada
- Steinberg-Bernstein Centre for Minimally Invasive Surgery and Innovation, McGill University Health Centre, Montreal, QC, Canada
- Liane S Feldman
- Department of Surgery, McGill University, 1650 Cedar Ave, D6-136, Montreal, QC, H3G 1A4, Canada
- Steinberg-Bernstein Centre for Minimally Invasive Surgery and Innovation, McGill University Health Centre, Montreal, QC, Canada
3
Esposito AC, Coppersmith NA, White EM, Yoo PS. Video Coaching in Surgical Education: Utility, Opportunities, and Barriers to Implementation. J Surg Educ 2022; 79:717-724. PMID: 34972670; DOI: 10.1016/j.jsurg.2021.12.004.
Abstract
OBJECTIVE This review discusses the literature on Video-Based Coaching (VBC) and explores the barriers to widespread implementation. DESIGN A search was performed on Scopus and PubMed for the terms "operation," "operating room," "surgery," "resident," "house staff," "graduate medical education," "teaching," "coaching," "assessment," "reflection," "camera," and "video" on July 27, 2021, in English. This yielded 828 results. A single author reviewed the titles and abstracts and eliminated any results that did not pertain to operative VBC or assessment. All bibliographies were reviewed, and appropriate manuscripts were included, for a total of 52 manuscripts in this review. SETTING/PARTICIPANTS Original, peer-reviewed studies focused on VBC or assessment. RESULTS VBC has been both subjectively and objectively found to be a valuable educational tool. Nearly every study of video recording in the operating room found that subjects, surgical residents and seasoned surgeons alike, overwhelmingly considered it a useful, non-redundant adjunct to their training. Most studies that evaluated skill acquisition via standardized assessment tools found that surgical residents who underwent a VBC program had significant improvements compared to their counterparts who did not undergo video review. Despite this evidence of effectiveness, fewer than 5% of residency programs employ video recording in the operating room. Barriers to implementation include significant time commitments for proposed coaching curricula and difficulty with integration of video cameras into the operating room. CONCLUSIONS VBC has significant educational benefits, but a scalable curriculum has not been developed. An optimal solution would ensure technical ease and expediency, use simple, high-quality cameras, allow immediate review, and overcome entrenched surgical norms and culture.
Affiliation(s)
- Andrew C Esposito
- Yale School of Medicine, Department of Surgery, New Haven, Connecticut
- Erin M White
- Yale School of Medicine, Department of Surgery, New Haven, Connecticut
- Peter S Yoo
- Yale School of Medicine, Department of Surgery, New Haven, Connecticut
4
Anderson TN, Lau JN, Shi R, Sapp RW, Aalami LR, Lee EW, Tekian A, Park YS. The Utility of Peers and Trained Raters in Technical Skill-Based Assessments: A Generalizability Theory Study. J Surg Educ 2022; 79:206-215. PMID: 34353764; DOI: 10.1016/j.jsurg.2021.07.002.
Abstract
OBJECTIVE The gold standard for evaluation of resident procedural competence is validated assessment by faculty surgeons. The provision of adequate trainee assessments is challenged by a shortage of faculty due to increased clinical and administrative responsibilities. We hypothesized that with a well-constructed assessment instrument and training, there would be minimal differences in procedural assessments made by near-peer resident raters (RR), faculty raters (FR), and trained raters (TR). DESIGN Deidentified videos of residents performing hand-sewn (HA) and stapled (SA) anastomoses were distributed to blinded reviewers of 3 types. Intra-class correlation (ICC) of RR, FR, and TR assessments was determined for each procedure. A fully crossed design was used to examine internal structure validity in a generalizability study. A decision study (D-study) was performed to project the number of raters needed for a g-coefficient > 0.70. SETTING This study was conducted within a private academic institution, using the creation of intestinal anastomoses as the procedural model. PARTICIPANTS Raters consisted of residents untrained in the assessment tool (UTA), UTA faculty surgeons, and individuals with training. RESULTS Twenty-nine videos were reviewed (15 HA and 14 SA) by a total of 9 video reviewers (4 RR, 2 FR, and 3 TR). HA ICC values were 0.84 (confidence interval [CI]: 0.81-0.87) for RR, 0.89 (CI: 0.86-0.92) for FR, and 0.88 (CI: 0.86-0.90) for TR. SA ICC values were 0.77 (CI: 0.72-0.80) for RR, 0.79 (CI: 0.75-0.83) for FR, and 0.86 (CI: 0.83-0.88) for TR. The g-coefficient was RR = 0.72, FR = 0.85, and TR = 0.77 for HA; and RR = 0.33, FR = 0.38, and TR = 0.4 for SA. The D-study indicated that at least 2 raters of any type were needed for HA and > 11 FR for SA. CONCLUSIONS Faculty without training show high assessment agreement. Peer assessment of surgical skills is an option for formative evaluation without rater training. Training in the assessment tool should be performed for any assessment, formative or summative, for the optimal evaluation of procedural competence.
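The generalizability analysis sketched in this entry (variance components from a fully crossed persons × raters design, then a decision study projecting the panel size needed for a g-coefficient > 0.70) can be written out directly; the helper functions and the small score matrix below are illustrative assumptions, not the study's instrument or data.

```python
import numpy as np

def g_coefficient(scores, n_raters):
    """Relative g-coefficient for a fully crossed persons x raters design."""
    scores = np.asarray(scores, float)
    p, r = scores.shape
    grand = scores.mean()
    ss_p = r * ((scores.mean(axis=1) - grand) ** 2).sum()  # persons (videos)
    ss_r = p * ((scores.mean(axis=0) - grand) ** 2).sum()  # raters
    ss_res = ((scores - grand) ** 2).sum() - ss_p - ss_r   # interaction/error
    ms_p = ss_p / (p - 1)
    ms_res = ss_res / ((p - 1) * (r - 1))
    var_p = max((ms_p - ms_res) / r, 0.0)  # person (true-score) variance
    return var_p / (var_p + ms_res / n_raters)

def raters_needed(scores, target=0.70, max_raters=50):
    """Decision study: smallest rater panel projected to reach the target g."""
    for n in range(1, max_raters + 1):
        if g_coefficient(scores, n) >= target:
            return n
    return None

# Five videos each scored by three raters (illustrative values).
scores = [[3, 4, 3], [5, 5, 4], [2, 2, 3], [4, 5, 5], [1, 2, 1]]
g_single = g_coefficient(scores, 1)  # projected reliability of one rater
panel = raters_needed(scores)        # smallest panel with g >= 0.70
```

The D-study projection works by dividing the rater-linked error variance by the hypothetical panel size, which is why adding raters raises g, as in the paper's "> 11 FR for SA" result.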
Affiliation(s)
- Tiffany N Anderson
- Department of Surgery, Stanford University School of Medicine, Stanford, California
- James N Lau
- Department of Surgery, Stanford University School of Medicine, Stanford, California
- Robert Shi
- Department of Surgery, Stanford University School of Medicine, Stanford, California
- Richard W Sapp
- Department of Surgery, Stanford University School of Medicine, Stanford, California
- Lauren R Aalami
- Department of Surgery, Stanford University School of Medicine, Stanford, California
- Edmund W Lee
- Department of Surgery, Stanford University School of Medicine, Stanford, California
- Ara Tekian
- Department of Medical Education, University of Illinois at Chicago, Chicago, Illinois
- Yoon Soo Park
- Department of Medical Education, University of Illinois at Chicago, Chicago, Illinois
5
Naples R, French JC, Han AY, Lipman JM, Awad MM. The Impact of Simulation Training on Operative Performance in General Surgery: Lessons Learned from a Prospective Randomized Trial. J Surg Res 2021; 270:513-521. PMID: 34801802; DOI: 10.1016/j.jss.2021.10.003.
Abstract
BACKGROUND Practice in the simulated environment can improve surgical skills. However, the transfer of open, complex surgical skills to the operating room is unclear. This study evaluated resident operative performance on a hand-sewn small bowel anastomosis following a simulation experience and determined the impact of proficiency-based training. METHODS Nine categorical interns performed a hand-sewn small bowel anastomosis in the operating room prior to (pre-test) and following (post-test) a 3-h simulation training session with an assessment at the end. Participants were randomly assigned to 1 of 2 simulation training groups: proficiency-based or standard. Operative performance was videotaped, and 2 independent, blinded faculty surgeons assessed performances using a global rating scale. Pre- and post-confidence levels were obtained on a 5-point Likert scale. RESULTS Overall, pre-test and post-test operative performance was similar (3 [IQR, 2.5-3.5] versus 3 [IQR, 3-3], P = 0.59). Furthermore, no difference was observed in post-test performance between proficiency-based and standard training (3 [IQR, 3-3] versus 3 [IQR, 3-3], P = 0.73). Self-reported confidence with the skills, however, significantly improved (median 1 versus 4, P = 0.007). CONCLUSIONS In this prospective, randomized study, we did not observe an improvement in operative performance following simulation instruction and assessment in either training group. Overcoming barriers to skills transfer will be paramount to optimizing simulation training in general surgery. These findings highlight the importance of continued study of the ideal conditions and timing of technical skills training.
Affiliation(s)
- Robert Naples
- Department of General Surgery, Cleveland Clinic, Cleveland, Ohio
- Cleveland Clinic Lerner College of Medicine of Case Western Reserve University, Cleveland, Ohio
- Judith C French
- Department of General Surgery, Cleveland Clinic, Cleveland, Ohio
- Cleveland Clinic Lerner College of Medicine of Case Western Reserve University, Cleveland, Ohio
- Amy Y Han
- Department of General Surgery, Cleveland Clinic, Cleveland, Ohio
- Jeremy M Lipman
- Department of General Surgery, Cleveland Clinic, Cleveland, Ohio
- Cleveland Clinic Lerner College of Medicine of Case Western Reserve University, Cleveland, Ohio
- Michael M Awad
- Department of Surgery, Washington University School of Medicine in St Louis, St Louis, Missouri
6
Ward TM, Fer DM, Ban Y, Rosman G, Meireles OR, Hashimoto DA. Challenges in surgical video annotation. Comput Assist Surg (Abingdon) 2021; 26:58-68. PMID: 34126014; DOI: 10.1080/24699322.2021.1937320.
Abstract
Annotation of surgical video is important for establishing ground truth in surgical data science endeavors that involve computer vision. With the growth of the field over the last decade, several challenges have been identified in annotating spatial, temporal, and clinical elements of surgical video as well as challenges in selecting annotators. In reviewing current challenges, we provide suggestions on opportunities for improvement and possible next steps to enable translation of surgical data science efforts in surgical video analysis to clinical research and practice.
Affiliation(s)
- Thomas M Ward
- Surgical AI & Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, Boston, MA, USA
- Danyal M Fer
- Department of Surgery, University of California San Francisco East Bay, Hayward, CA, USA
- Yutong Ban
- Surgical AI & Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, Boston, MA, USA
- Distributed Robotics Laboratory, Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA
- Guy Rosman
- Surgical AI & Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, Boston, MA, USA
- Distributed Robotics Laboratory, Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA
- Ozanan R Meireles
- Surgical AI & Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, Boston, MA, USA
- Daniel A Hashimoto
- Surgical AI & Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, Boston, MA, USA
7
Ghiasian L, Hadavandkhani A, Abdolalizadeh P, Janani L, Es'haghi A. Comparison of video-based observation and direct observation for assessing the operative performance of residents undergoing phacoemulsification training. Indian J Ophthalmol 2021; 69:574-578. PMID: 33595476; PMCID: PMC7942115; DOI: 10.4103/ijo.ijo_1166_20.
Abstract
Purpose: To compare the video observation of procedural skills (VOPS) method with the direct observation of procedural skills (DOPS) method in the assessment of senior residents' performance utilizing the International Council of Ophthalmology's Ophthalmology Surgical Competency Assessment Rubric for phacoemulsification (ICO-OSCAR; phaco). Methods: This is a prospective comparative study conducted at a university-affiliated hospital. Six ophthalmology residents of postgraduate year 4 participated. Their performance in phacoemulsification was rated via DOPS and later in a masked manner through VOPS by a single faculty assessor. Results: Seventy-one surgeries were evaluated. There were no statistically significant differences between the scores of VOPS and DOPS regarding all ICO-OSCAR indices except “instrument insertion into the eye” in which DOPS had higher scores (P = 0.035). A significant correlation was observed in total scores of “task-specific” (r = 0.64, P < 0.001) and “global” (r = 0.38, P = 0.003) indices between VOPS and DOPS while some subscales did not show a correlation between the two methods of assessment. The Bland-Altman analysis demonstrated that nearly all data points of total “task-specific” and “global” scores fell within the 95% limits of agreement ([-5.84, 6.87] and [-4.78, 4.86], respectively). Conclusion: This study demonstrated that VOPS holds promise for a general rating of residents' performance.
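The Bland-Altman agreement analysis reported here (bias plus 95% limits of agreement at bias ± 1.96 SD of the paired differences) can be sketched as follows; the paired VOPS/DOPS totals are illustrative values, not the study's data.

```python
import numpy as np

def bland_altman_limits(a, b):
    """Bias (mean difference) and 95% limits of agreement for paired scores."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample SD of the paired differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Illustrative paired task-specific totals for the same procedures.
vops = [30, 28, 35, 32, 31]  # video observation scores
dops = [31, 27, 36, 33, 30]  # direct observation scores
bias, (lo, hi) = bland_altman_limits(vops, dops)
```

Agreement between the two rating methods is supported when nearly all paired differences fall inside (lo, hi) and the interval is narrow enough to be educationally acceptable, as the study's [-5.84, 6.87] and [-4.78, 4.86] intervals were judged to be.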
Affiliation(s)
- Leila Ghiasian
- Eye Research Center, The Five Senses Institute, Rassoul Akram Hospital, Iran University of Medical Sciences, Tehran, Iran
- Ali Hadavandkhani
- Eye Research Center, The Five Senses Institute, Rassoul Akram Hospital, Iran University of Medical Sciences, Tehran, Iran
- Parya Abdolalizadeh
- Eye Research Center, The Five Senses Institute, Rassoul Akram Hospital, Iran University of Medical Sciences, Tehran, Iran
- Leila Janani
- Department of Biostatistics, School of Public Health, Iran University of Medical Sciences, Tehran, Iran
- Acieh Es'haghi
- Eye Research Center, The Five Senses Institute, Rassoul Akram Hospital, Iran University of Medical Sciences, Tehran, Iran
8
Chen AB, Liang S, Nguyen JH, Liu Y, Hung AJ. Machine learning analyses of automated performance metrics during granular sub-stitch phases predict surgeon experience. Surgery 2020; 169:1245-1249. PMID: 33160637; DOI: 10.1016/j.surg.2020.09.020.
Abstract
Automated performance metrics objectively measure surgeon performance during robot-assisted radical prostatectomy. Machine learning has demonstrated that automated performance metrics, especially during the vesico-urethral anastomosis of the robot-assisted radical prostatectomy, are predictive of long-term outcomes such as continence recovery time. This study focuses on automated performance metrics during the vesico-urethral anastomosis, specifically at the stitch versus sub-stitch level, to distinguish surgeon experience. During the vesico-urethral anastomosis, automated performance metrics, recorded by a systems data recorder (Intuitive Surgical, Sunnyvale, CA, USA), were reported for each overall stitch (Ctotal) and its individual components: needle handling/targeting (C1), needle driving (C2), and suture cinching (C3) (Fig 1, A). These metrics were organized into three datasets (GlobalSet [whole stitch], RowSet [independent sub-stitches], and ColumnSet [associated sub-stitches]; Fig 1, B) and applied to three machine learning models (AdaBoost, gradient boosting, and random forest) to solve two classification tasks: experts (≥100 cases) versus novices (<100 cases) and ordinary experts (≥100 and <2,000 cases) versus super experts (≥2,000 cases). Classification accuracy was determined using analysis of variance. Input features were evaluated through a Jaccard index. From 68 vesico-urethral anastomoses, we analyzed 1,570 stitches broken down into 4,708 sub-stitches. For both classification tasks, ColumnSet best distinguished experts (n = 8) versus novices (n = 9) and ordinary experts (n = 5) versus super experts (n = 3), at accuracies of 0.774 and 0.844, respectively. Feature ranking highlighted Endowrist articulation and needle handling/targeting as most important in classification. Surgeon performance measured by automated performance metrics at a granular sub-stitch level distinguishes expertise more accurately than summary automated performance metrics over whole stitches.
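The classification setup this entry describes (tree-ensemble models separating experience groups from per-stitch kinematic metrics) can be sketched with scikit-learn; the feature names, distributions, and group sizes below are synthetic assumptions for illustration, not the study's recorder output.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical APM-style features per stitch: [completion time (s), instrument
# travel (cm), wrist articulation (a.u.)]; experts assumed faster and more
# economical than novices.
novices = rng.normal([40.0, 120.0, 2.0], [5.0, 15.0, 0.4], size=(40, 3))
experts = rng.normal([25.0, 80.0, 3.0], [5.0, 15.0, 0.4], size=(40, 3))
X = np.vstack([novices, experts])
y = np.array([0] * 40 + [1] * 40)  # 0 = novice, 1 = expert

clf = RandomForestClassifier(n_estimators=200, random_state=0)
acc = cross_val_score(clf, X, y, cv=5).mean()  # stratified 5-fold accuracy

clf.fit(X, y)
importances = clf.feature_importances_  # which metrics drive the separation
```

Feature importances play the role of the paper's feature ranking: they indicate which kinematic metrics the ensemble leans on to separate the experience groups.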
Affiliation(s)
- Andrew B Chen
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, University of Southern California Institute of Urology, Los Angeles, CA
- Siqi Liang
- Computer Science Department, Viterbi School of Engineering, University of Southern California, Los Angeles, CA
- Jessica H Nguyen
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, University of Southern California Institute of Urology, Los Angeles, CA
- Yan Liu
- Computer Science Department, Viterbi School of Engineering, University of Southern California, Los Angeles, CA
- Andrew J Hung
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, University of Southern California Institute of Urology, Los Angeles, CA
9
Yeates P, Moult A, Lefroy J, Walsh-House J, Clews L, McKinley R, Fuller R. Understanding and developing procedures for video-based assessment in medical education. Med Teach 2020; 42:1250-1260. PMID: 32749915; DOI: 10.1080/0142159x.2020.1801997.
Abstract
INTRODUCTION Novel uses of video aim to enhance assessment in health-professions education. Whilst these uses presume equivalence between video and live scoring, some research suggests that poorly understood variations could challenge validity. We aimed to understand examiners' and students' interaction with video whilst developing procedures to promote its optimal use. METHODS Using design-based research, we developed theory and procedures for video use in assessment, iteratively adapting conditions across simulated OSCE stations. We explored examiners' and students' perceptions using think-aloud, interviews, and a focus group. Data were analysed using constructivist grounded-theory methods. RESULTS Video-based assessment produced detachment and reduced volitional control for examiners. Examiners' ability to make valid video-based judgements was mediated by the interaction of station content and specifically selected filming parameters. Examiners displayed several judgemental tendencies which helped them manage video's limitations but could also bias judgements in some circumstances. Students rarely found carefully placed cameras intrusive and considered filming acceptable if adequately justified. DISCUSSION Successful use of video-based assessment relies on balancing the need to ensure station-specific information adequacy, avoidance of disruptive intrusion, and the degree of justification provided by video's educational purpose. Video has the potential to enhance assessment validity and students' learning when an appropriate balance is achieved.
Affiliation(s)
- Peter Yeates
- School of Medicine, Keele University, Keele, UK
- Department of Acute Medicine, Fairfield General Hospital, Pennine Acute Hospital NHS Trust, Bury, UK
- Alice Moult
- School of Medicine, Keele University, Keele, UK
- Richard Fuller
- School of Medicine, University of Liverpool, Liverpool, UK
10
Affiliation(s)
- Elif Bilgic
- Department of Surgery, Division of Surgical Education, McGill University, McGill University Health Centre, 1650 Cedar Avenue, #D6.136, Montreal, Quebec H3G 1A4, Canada
- Sofia Valanci-Aroesty
- Department of Surgery, Division of Experimental Surgery, McGill University, McGill University Health Centre, 1650 Cedar Avenue, #D6.136, Montreal, Quebec H3G 1A4, Canada
- Gerald M Fried
- Department of Surgery, McGill University, McGill University Health Centre, 1650 Cedar Avenue, #D6.136, Montreal, Quebec H3G 1A4, Canada
11
Pugh CM, Hashimoto DA, Korndorffer JR. The what? How? And Who? Of video based assessment. Am J Surg 2020; 221:13-18. PMID: 32665080; DOI: 10.1016/j.amjsurg.2020.06.027.
Abstract
BACKGROUND Currently, there is significant variability in the development, implementation, and overarching goals of video review for assessment of surgical performance. METHODS This paper evaluates the current ways in which video review is used to evaluate surgical performance and identifies which processes are critical for successful, widespread implementation of video-based assessment. RESULTS Despite advances in video capture technology and growing interest in video-based assessment, there is a notable gap in the implementation and longitudinal use of formative and summative assessment using video. CONCLUSION Validity, scalability, and discoverability are current but removable barriers to video-based assessment.
Affiliation(s)
- Carla M Pugh
- Department of Surgery, Stanford University School of Medicine, 300 Pasteur Drive, Stanford, CA, 94305, USA
- Daniel A Hashimoto
- Department of Surgery, Massachusetts General Hospital, 55 Fruit Street, Boston, MA, 02114, USA
- James R Korndorffer
- Department of Surgery, Stanford University School of Medicine, 300 Pasteur Drive, Stanford, CA, 94305, USA
12
Oviedo-Peñata CA, Tapia-Araya AE, Lemos JD, Riaño-Benavides C, Case JB, Maldonado-Estrada JG. Validation of Training and Acquisition of Surgical Skills in Veterinary Laparoscopic Surgery: A Review. Front Vet Sci 2020; 7:306. PMID: 32582781; PMCID: PMC7283875; DOI: 10.3389/fvets.2020.00306.
Abstract
At present, veterinary laparoscopic surgery training lacks experiences that provide a controlled and safe environment where surgeons can practice specific techniques while receiving expert feedback. Surgical skills acquired using simulators must be certified and transferable to the operating room. Most models for practicing laparoscopic skills in veterinary minimally invasive surgery are general task trainers and consist of boxes (simulators) designed for training in human surgery. These simulators exhibit several limitations, including anatomic species and procedural differences, as well as general psychomotor training rather than in vivo skill recreation. In this paper, we review the existing methods of training, evaluation, and validation of technical skills in veterinary laparoscopic surgery. Content includes global and specific scales, and the conditions a structured curriculum should meet to improve the performance of novice surgeons during and after training. A focus on trainee-specific assessment and tailored technical instruction should influence training programs. We provide a comprehensive analysis of current theories and concepts related to the evaluation and validation of simulators for training laparoscopic surgery in small animal surgery. We also highlight the need to develop new training models and complementary evaluation scales for the validation of training and acquisition of basic and advanced skills in veterinary laparoscopic surgery.
Affiliation(s)
- Carlos A Oviedo-Peñata
- Tropical Animal Production Research Group, Faculty of Veterinary Medicine and Zootechny, University of Cordoba, Monteria, Colombia; Surgery and Theriogenology Branch OHVRI-Group, College of Veterinary Medicine, University of Antioquia, Medellin, Colombia
- Juan D Lemos
- Bioinstrumentation and Clinical Engineering Research Group (GIBIC), Bioengineering Department, Engineering Faculty, Universidad de Antioquia, Medellín, Colombia
- Carlos Riaño-Benavides
- Surgery and Theriogenology Branch OHVRI-Group, College of Veterinary Medicine, University of Antioquia, Medellin, Colombia
- J Brad Case
- Department of Small Animal Clinical Sciences, College of Veterinary Medicine, University of Florida, Gainesville, FL, United States
- Juan G Maldonado-Estrada
- Surgery and Theriogenology Branch OHVRI-Group, College of Veterinary Medicine, University of Antioquia, Medellin, Colombia
13
Takao M, Bilgic E, Kaneva P, Waschke K, Endo S, Nakano Y, Kawara F, Tanaka S, Ishida T, Morita Y, Toyonaga T, Umegaki E, Kodama Y, Fried GM. Development and validation of an endoscopic submucosal dissection video assessment tool. Surg Endosc 2020; 35:2671-2678. [PMID: 32483698 DOI: 10.1007/s00464-020-07688-0]
Abstract
BACKGROUND Despite a need for assessment of endoscopic submucosal dissection (ESD) skills in order to track progress and determine competence, there is no structured measure for assessing competency in ESD performance. The present study aims to develop and examine validity evidence for an assessment tool to evaluate the recorded performance of ESD for gastric neoplasms. METHODS The ESD video assessment tool (EVAT) was systematically developed by experienced ESD endoscopists. The EVAT consists of a 25-item global rating scale and a 3-item checklist to assess competencies required to perform ESD. Five unedited videos were each evaluated by 2 blinded experienced ESD endoscopists to assess inter-rater reliability using intraclass correlation coefficients (ICC). Seventeen unedited videos in total were rated by 3 blinded experienced ESD endoscopists. Validity evidence for relationship to other variables was examined by comparing scores of inexperienced (fellows) and experienced endoscopists (attending staff), and by evaluating the relationship between the EVAT scores and ESD case experience. Internal consistency was evaluated using Cronbach's alpha. RESULTS The inter-rater reliability for the total score was high at 0.87 (95% confidence interval 0.11 to 0.99). The total score [median, interquartile range (IQR)] was significantly different between the inexperienced (71, 63-77) and experienced group (95, 91-97) (P = 0.005). The total scores demonstrated high correlation with the number of ESD cases (Spearman's ρ = 0.79, P < 0.01). The internal consistency was 0.97. CONCLUSIONS This study provides preliminary validity evidence for the assessment of video-recorded ESD performances for gastric neoplasms using EVAT.
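As an illustrative aside (not part of the cited study), the internal-consistency statistic reported above, Cronbach's alpha, can be computed from a subjects-by-items score matrix with the standard formula; the ratings below are hypothetical:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item across subjects
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of the summed (total) scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Perfectly consistent items (identical columns) yield alpha = 1.0
ratings = np.array([[1, 1], [2, 2], [3, 3], [4, 4]])
print(round(cronbach_alpha(ratings), 3))  # → 1.0
```

Values near 1 (such as the 0.97 above) indicate that the scale's items rise and fall together across the rated performances.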
Affiliation(s)
- Madoka Takao
- Department of Surgery and Steinberg-Bernstein Centre for Minimally Invasive Surgery and Innovation, McGill University, Montreal, QC, Canada
- Division of Gastroenterology, Department of Internal Medicine, Graduate School of Medicine, Kobe University, Kobe, Hyogo, Japan
- Elif Bilgic
- Department of Surgery and Steinberg-Bernstein Centre for Minimally Invasive Surgery and Innovation, McGill University, Montreal, QC, Canada
- Pepa Kaneva
- Department of Surgery and Steinberg-Bernstein Centre for Minimally Invasive Surgery and Innovation, McGill University, Montreal, QC, Canada
- Kevin Waschke
- Division of Gastroenterology, McGill University, Montreal, QC, Canada
- Satoshi Endo
- Department of Surgery and Steinberg-Bernstein Centre for Minimally Invasive Surgery and Innovation, McGill University, Montreal, QC, Canada
- Yoshiko Nakano
- Division of Gastroenterology, Department of Internal Medicine, Graduate School of Medicine, Kobe University, Kobe, Hyogo, Japan
- Fumiaki Kawara
- Division of Gastroenterology, Department of Internal Medicine, Graduate School of Medicine, Kobe University, Kobe, Hyogo, Japan
- Shinwa Tanaka
- Division of Gastroenterology, Department of Internal Medicine, Graduate School of Medicine, Kobe University, Kobe, Hyogo, Japan
- Tsukasa Ishida
- Division of Gastroenterology, Department of Internal Medicine, Graduate School of Medicine, Kobe University, Kobe, Hyogo, Japan
- Yoshinori Morita
- Division of Gastroenterology, Department of Internal Medicine, Graduate School of Medicine, Kobe University, Kobe, Hyogo, Japan
- Takashi Toyonaga
- Department of Endoscopy, Kobe University Hospital, Kobe, Hyogo, Japan
- Eiji Umegaki
- Division of Gastroenterology, Department of Internal Medicine, Graduate School of Medicine, Kobe University, Kobe, Hyogo, Japan
- Yuzo Kodama
- Division of Gastroenterology, Department of Internal Medicine, Graduate School of Medicine, Kobe University, Kobe, Hyogo, Japan
- Gerald M Fried
- Department of Surgery and Steinberg-Bernstein Centre for Minimally Invasive Surgery and Innovation, McGill University, Montreal, QC, Canada
- Faculty of Medicine, McGill University, L9.313, 1650 Cedar Avenue, Montreal, QC, H3G 1A4, Canada
14
Raja A, Thomas P, Harrison A, Tompkins M, Braman J. Validation of Assessing Arthroscopic Skill using the ASSET Evaluation. J Surg Educ 2019; 76:1640-1644. [PMID: 31447182 DOI: 10.1016/j.jsurg.2019.05.010]
Abstract
BACKGROUND The Accreditation Council for Graduate Medical Education and the American Board of Orthopaedic Surgery have implemented "milestones" to evaluate residents during their progression in medical education. The purpose of this study was to determine whether a validated evaluation tool correlates with surgical experience, year in training, and progression over time. DESIGN This was a retrospective study of already collected curriculum assessment data in which 2 unbiased, blinded orthopedic surgeons evaluated resident performance on basic diagnostic knee arthroscopy using the Arthroscopic Surgical Skills Evaluation Tool (ASSET) over 3 years. Residents also gained arthroscopy experience through a structured arthroscopy curriculum and clinical experience. SETTING The study was conducted at the TRIA Orthopaedic Center (Bloomington, Minnesota, USA), an institutional site for The University of Minnesota orthopedic surgery residency program. PARTICIPANTS Eleven orthopedic surgery residents at postgraduate years 2 to 5 were evaluated using the ASSET. RESULTS Pearson's correlation coefficient showed that ASSET scores correlated strongly with both the number of arthroscopic procedures performed by residents (r = 0.946) and their level in training (r = 0.89). Residents who were re-evaluated after undergoing the arthroscopy curriculum throughout the year displayed significant increases in total ASSET scores (p < 0.01). CONCLUSION Resident performance on the ASSET correlated with arthroscopic experience based on year in training. More importantly, performance improved with additional years of training, demonstrating validity over time. The data also demonstrate interobserver reliability. Due to these correlations between exposure to surgery and score on the ASSET, we believe the tool could serve as a suitable means for assessing residents' technical proficiency as required by The Accreditation Council for Graduate Medical Education program guidelines.
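Pearson correlations such as those reported above (r = 0.946, r = 0.89) are a single library call; a minimal sketch with hypothetical case-count and score arrays (illustrative only, not the study's data):

```python
import numpy as np

# Hypothetical data: number of arthroscopies performed vs. ASSET total score
cases = np.array([10, 25, 40, 60, 85, 120])
asset = np.array([21, 25, 30, 33, 37, 41])

# np.corrcoef returns the 2x2 correlation matrix; the off-diagonal is Pearson's r
r = np.corrcoef(cases, asset)[0, 1]
print(round(r, 3))
```

With monotonically increasing paired data like this, r falls close to 1, mirroring the strong experience-score relationship the study describes.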
Affiliation(s)
- Avais Raja
- TRIA Orthopaedic Center, Minneapolis, Minnesota
- Phillip Thomas
- University of Minnesota, Department of Orthopaedic Surgery, Minneapolis, Minnesota
- Alicia Harrison
- University of Minnesota, Department of Orthopaedic Surgery, Minneapolis, Minnesota
- Marc Tompkins
- University of Minnesota, Department of Orthopaedic Surgery, Minneapolis, Minnesota
- Jonathan Braman
- University of Minnesota, Department of Orthopaedic Surgery, Minneapolis, Minnesota
15
McQueen S, McKinnon V, VanderBeek L, McCarthy C, Sonnadara R. Video-Based Assessment in Surgical Education: A Scoping Review. J Surg Educ 2019; 76:1645-1654. [PMID: 31175065 DOI: 10.1016/j.jsurg.2019.05.013]
Abstract
BACKGROUND AND OBJECTIVE Video-based assessment of residents' surgical skills may offer several advantages over direct observations of clinical performance in terms of objectivity, time-efficiency, and feasibility. Although video-based assessment is becoming more common in surgical training, a broad understanding of its utility is lacking. This scoping review explores video-based assessment in surgical training and presents the evidence supporting its use. DESIGN A literature search was conducted using the Web of Science database with key words related to video-based assessment and surgical training. Exclusion criteria included articles not published in English and articles on undergraduate medical education, continuing professional development, or non-surgical disciplines. Initially, 702 articles were identified; after title, abstract, and full-text screening by two independent reviewers (SM and VM), 199 articles remained. RESULTS We present the benefits of video-based assessment, including the ability to capture clinical ability in the operating room without decreasing intraoperative efficiency, as well as the potential to improve formative assessment and feedback practices. We describe the validity, reliability, and challenges of video-based assessment, as well as the use of video-based methods in clinical and simulated settings. We conclude by discussing questions that remain to be addressed. CONCLUSIONS Although further research and cost-benefit analyses are required, greater adoption of video-based assessment into surgical training may help meet increased assessment demands in an era of competency-based medical education.
Affiliation(s)
- Sydney McQueen
- Department of Surgery, University of Toronto, Toronto, Ontario, Canada
- Victoria McKinnon
- Department of Surgery, McMaster University, Hamilton, Ontario, Canada
- Laura VanderBeek
- Department of Surgery, McMaster University, Hamilton, Ontario, Canada
- Colm McCarthy
- Department of Surgery, McMaster University, Hamilton, Ontario, Canada
- Ranil Sonnadara
- Department of Surgery, University of Toronto, Toronto, Ontario, Canada; Department of Surgery, McMaster University, Hamilton, Ontario, Canada
16
Abstract
OBJECTIVE To identify current trends in the use of validity frameworks in surgical simulation, to provide an overview of the evidence behind the assessment of technical skills in all surgical specialties, and to present recommendations and guidelines for future validity studies. SUMMARY OF BACKGROUND DATA Validity evidence for assessment tools used in the evaluation of surgical performance is of paramount importance to ensure valid and reliable assessment of skills. METHODS We systematically reviewed the literature by searching 5 databases (PubMed, EMBASE, Web of Science, PsycINFO, and the Cochrane Library) for studies published from January 1, 2008, to July 10, 2017. We included original studies evaluating simulation-based assessments of health professionals in surgical specialties and extracted data on surgical specialty, simulator modality, participant characteristics, and the validity framework used. Data were synthesized qualitatively. RESULTS We identified 498 studies with a total of 18,312 participants. Publications involving validity assessments in surgical simulation more than doubled from 2008 to 2010 (∼30 studies/year) to 2014 to 2016 (∼70 to 90 studies/year). Only 6.6% of the studies used the recommended contemporary validity framework (Messick). The majority of studies used outdated frameworks such as face validity. Significant differences were identified across surgical specialties. The evaluated assessment tools were mostly inanimate or virtual reality simulation models. CONCLUSION An increasing number of studies have gathered validity evidence for simulation-based assessments in surgical specialties, but the use of outdated frameworks remains common. To address the current practice, this paper presents guidelines on how to use the contemporary validity framework when designing validity studies.
17
Dedhia PH, Barrett M, Ives G, Magas CP, Varban OA, Wong SL, Sandhu G. Intraoperative Feedback: A Video-Based Analysis of Faculty and Resident Perceptions. J Surg Educ 2019; 76:906-915. [PMID: 30826263 DOI: 10.1016/j.jsurg.2019.02.003]
Abstract
OBJECTIVE Residents and faculty identify intraoperative feedback as a critical component of surgical education. Studies have demonstrated that residents perceive lower quality and frequency of intraoperative feedback compared to faculty. These differences in perception may be due to dissimilar identification of feedback. The purpose of this study was to determine if residents and faculty differently identify intraoperative interactions as feedback. DESIGN Residents and faculty viewed a segment of a laparoscopic cholecystectomy video and then timestamped the video where they perceived moments of intraoperative feedback. Validated surveys on timing, amount, specificity, and satisfaction with operative feedback were administered. SETTING Viewing of the video and survey administration was conducted at the University of Michigan. PARTICIPANTS A total of 23 of 41 residents (56%) and 29 of 33 faculty (88%) participated in this study. RESULTS Survey analysis demonstrated that residents perceived operative feedback to occur with less immediacy, specificity, and frequency compared to faculty. During the 10-minute video, residents and faculty identified feedback 21 and 29 times, respectively (p = 0.13). Ten-second interval analysis demonstrated 7 statistically significant intervals (p < 0.05) where residents identified feedback less frequently than faculty. Analysis of these 7 intervals revealed that faculty were more likely to identify interactions, especially nonverbal ones, as feedback. Review of free-text comments confirmed these findings and suggested that residents may be more receptive to feedback at the conclusion of the case. CONCLUSIONS Using video review, we show that residents and faculty identify different intraoperative interactions as feedback. This disparity in identification of feedback may limit resident satisfaction and effective intraoperative learning. Timing and labeling of feedback, continued use of video review, and structured teaching models may overcome these differences and improve surgical education.
Affiliation(s)
- Priya H Dedhia
- Department of Surgery, Michigan Medicine, Ann Arbor, Michigan
- Graham Ives
- Department of Surgery, Michigan Medicine, Ann Arbor, Michigan
- Oliver A Varban
- Department of Surgery, Michigan Medicine, Ann Arbor, Michigan
- Sandra L Wong
- Department of Surgery, Dartmouth-Hitchcock, Lebanon, New Hampshire
- Gurjit Sandhu
- Department of Surgery, Michigan Medicine, Ann Arbor, Michigan
18
Irfan W, Sheahan C, Mitchell EL, Sheahan MG. The pathway to a national vascular skills examination and the role of simulation-based training in an increasingly complex specialty. Semin Vasc Surg 2019; 32:48-67. [DOI: 10.1053/j.semvascsurg.2018.12.006]
19
Simulation in Vascular Surgery. Comprehensive Healthcare Simulation: Surgery and Surgical Subspecialties 2019. [DOI: 10.1007/978-3-319-98276-2_26]
20
Julian O, Patrick H, Felix N, Tilman W, Mirco F, Beat-Peter MS, Gerhard S, Tanner MC. Development and validation of an objective assessment scale for chest tube insertion under 'direct' and 'indirect' rating. BMC Med Educ 2018; 18:320. [PMID: 30587187 PMCID: PMC6307220 DOI: 10.1186/s12909-018-1430-9]
Abstract
BACKGROUND There is an increasing need for objective and validated educational concepts. This holds especially true for surgical procedures like chest tube insertion (CTI). Thus, we developed an instrument for the objectification of learning success: an assessment scale based on the Objective Structured Assessment of Technical Skill (OSATS) for chest tube insertion, which is evaluated in this study. The primary endpoint was the evaluation of intermethod reliability (IM). Secondary endpoints were 'indirect' interrater reliability (IR) and construct validity of the scale (CV). METHODS Every participant (N = 59) performed a CTI on a porcine thorax. Participants received three ratings (one 'direct' on site, two 'indirect' via video rating). IM compares 'direct' with 'indirect' ratings. IR was assessed between 'indirect' ratings. CV was investigated by subgroup analysis based on prior experience in CTI for 'direct' and 'indirect' rating. RESULTS We included 59 medical students in our study. IM showed moderate conformity ('direct' vs. 'indirect 1' ICC = 0.735, 95% CI: 0.554-0.843; 'direct' vs. 'indirect 2' ICC = 0.722, 95% CI 0.533-0.835) and good conformity between 'direct' vs. 'average indirect' rating (ICC = 0.764, 95% CI: 0.6-0.86). IR showed good conformity (ICC = 0.84, 95% CI: 0.707-0.91). CV was confirmed between subgroups in 'direct' (p = 0.037) and 'indirect' rating (p = 0.013). CONCLUSION Results for IM suggest equivalence of 'direct' and 'indirect' ratings, and both IR and CV were demonstrated for both rating methods. Thus, the assessment scale appears to be a reliable method for rating trainees' performances 'directly' as well as 'indirectly'. It may help to objectify and facilitate the assessment of chest tube insertion training.
Affiliation(s)
- Ober Julian
- HTRG – Heidelberg Trauma Research Group, Center for Orthopedics, Trauma Surgery and Spinal Cord Injury, Trauma and Reconstructive Surgery, Heidelberg University Hospital, Schlierbacher Landstrasse 200a, D-69118 Heidelberg, Germany
- Haubruck Patrick
- HTRG – Heidelberg Trauma Research Group, Center for Orthopedics, Trauma Surgery and Spinal Cord Injury, Trauma and Reconstructive Surgery, Heidelberg University Hospital, Schlierbacher Landstrasse 200a, D-69118 Heidelberg, Germany
- Nickel Felix
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, D-69120 Heidelberg, Germany
- Walker Tilman
- HTRG – Heidelberg Trauma Research Group, Center for Orthopedics, Trauma Surgery and Spinal Cord Injury, Trauma and Reconstructive Surgery, Heidelberg University Hospital, Schlierbacher Landstrasse 200a, D-69118 Heidelberg, Germany
- Friedrich Mirco
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, D-69120 Heidelberg, Germany
- Müller-Stich Beat-Peter
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, D-69120 Heidelberg, Germany
- Schmidmaier Gerhard
- HTRG – Heidelberg Trauma Research Group, Center for Orthopedics, Trauma Surgery and Spinal Cord Injury, Trauma and Reconstructive Surgery, Heidelberg University Hospital, Schlierbacher Landstrasse 200a, D-69118 Heidelberg, Germany
- Michael C. Tanner
- HTRG – Heidelberg Trauma Research Group, Center for Orthopedics, Trauma Surgery and Spinal Cord Injury, Trauma and Reconstructive Surgery, Heidelberg University Hospital, Schlierbacher Landstrasse 200a, D-69118 Heidelberg, Germany
21
Vergis A, Steigerwald S. Skill Acquisition, Assessment, and Simulation in Minimal Access Surgery: An Evolution of Technical Training in Surgery. Cureus 2018; 10:e2969. [PMID: 30221097 PMCID: PMC6136887 DOI: 10.7759/cureus.2969]
Abstract
Diminishing resources and expanding technologies, such as minimal access surgery, have complicated the acquisition and assessment of technical skills in surgical training programs. However, these challenges have been met with both innovation and an evolution in our understanding of how learners develop technical competence and how to better measure it. As these skills continue to grow in breadth and complexity, so too must the surgical education systems’ ability. This literature review examines and describes the pressures placed on surgical education programs and the development of methods to ameliorate them with a focus on surgical simulation.
22
Direct observation of procedural skills (DOPS) evaluation method: Systematic review of evidence. Med J Islam Repub Iran 2018; 32:45. [PMID: 30159296 PMCID: PMC6108252 DOI: 10.14196/mjiri.32.45]
Abstract
Background: Evaluation is one of the most important aspects of medical education. Thus, new methods of effective evaluation are required in this area, and direct observation of procedural skills (DOPS) is one of these methods. This study was conducted to systematically review the evidence involved in this type of assessment to allow the effective use of this method.
Methods: Data were collected searching such keywords as evaluation, assessment, medical education, and direct observation of procedural skills (DOPS) on Google Scholar, PubMed, Science Direct, SID, Medlib and Google and by searching unpublished sources (Gray literature) and selected references (reference of reference).
Results: Of 236 papers, 28 were studied. Satisfaction with the DOPS method was found to be moderate. The major strengths of this evaluation method are as follows: providing feedback to the participants and promoting independence and practical skills during assessment. However, stressful evaluation, time limitation for participants, and bias between assessors are the main drawbacks of this method. A positive impact of the DOPS method on improving student performance has been noted in most studies. The results showed that the validity and reliability of DOPS are relatively acceptable. Performance of participants using DOPS was relatively satisfactory. However, not providing necessary training on how to take the DOPS test, not providing essential feedback to participants, and insufficient time for the test are the major drawbacks of the DOPS tests.
Conclusion: According to the results of this study, DOPS tests can be applied as a valuable and effective evaluation method in medical education. However, more attention should be paid to the quality of these tests.
23
Time crunch: increasing the efficiency of assessment of technical surgical skill via brief video clips. Surgery 2018; 163:933-937. [DOI: 10.1016/j.surg.2017.11.011]
24
Scaffidi MA, Grover SC, Carnahan H, Yu JJ, Yong E, Nguyen GC, Ling SC, Khanna N, Walsh CM. A prospective comparison of live and video-based assessments of colonoscopy performance. Gastrointest Endosc 2018; 87:766-775. [PMID: 28859953 DOI: 10.1016/j.gie.2017.08.020]
Abstract
BACKGROUND AND AIMS Colonoscopy performance is typically assessed by a supervisor in the clinical setting. There are limitations of this approach, however, because it allows for rater bias and increases supervisor workload demand during the procedure. Video-based assessment of recorded procedures has been proposed as a complementary means by which to assess colonoscopy performance. This study sought to investigate the reliability, validity, and feasibility of video-based assessments of competence in performing colonoscopy compared with live assessment. METHODS Novice (<50 previous colonoscopies), intermediate (50-500), and experienced (>1000) endoscopists from 5 hospitals participated. Two views of each colonoscopy were videotaped: an endoscopic (intraluminal) view and a recording of the endoscopist's hand movements. Recorded procedures were independently assessed by 2 blinded experts using the Gastrointestinal Endoscopy Competency Assessment Tool (GiECAT), a validated procedure-specific assessment tool comprising a global rating scale (GRS) and checklist (CL). Live ratings were conducted by a non-blinded expert endoscopist. Outcomes included agreement between live and blinded video-based ratings of clinical colonoscopies, intra-rater reliability, inter-rater reliability and discriminative validity of video-based assessments, and perceived ease of assessment. RESULTS Forty endoscopists participated (20 novices, 10 intermediates, and 10 experienced). There was good agreement between the live and video-based ratings (total, intra-class correlation [ICC] = 0.847; GRS, ICC = 0.868; CL, ICC = 0.749). Intra-rater reliability was excellent (total, ICC = 0.99; GRS, ICC = 0.99; CL, ICC = 0.98). Inter-rater reliability between the 2 blinded video-based raters was high (total, ICC = 0.91; GRS, ICC = 0.918; CL, ICC = 0.862). GiECAT total, GRS, and CL scores differed significantly among novice, intermediate, and experienced endoscopists (P < .001). Video-based assessments were perceived as "fairly easy," although live assessments were rated as significantly easier (P < .001). CONCLUSIONS Video-based assessments of colonoscopy procedures using the GiECAT have strong evidence of reliability and validity. In addition, assessments using videos were feasible, although live assessments were easier.
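The agreement figures above are intraclass correlation coefficients. As one illustration (the study itself does not specify this exact variant), the one-way random-effects ICC(1,1) can be sketched from its ANOVA mean squares; the rating matrix below is hypothetical:

```python
import numpy as np

def icc_oneway(scores: np.ndarray) -> float:
    """One-way random-effects ICC(1,1) for an (n_subjects, k_raters) matrix.
    Illustrative only; published studies often use two-way ICC variants."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    subj_means = scores.mean(axis=1)
    grand = scores.mean()
    msb = k * ((subj_means - grand) ** 2).sum() / (n - 1)              # between-subject mean square
    msw = ((scores - subj_means[:, None]) ** 2).sum() / (n * (k - 1))  # within-subject mean square
    return (msb - msw) / (msb + (k - 1) * msw)

# Two raters in perfect agreement give ICC = 1.0
ratings = np.array([[3, 3], [5, 5], [7, 7], [9, 9]])
print(icc_oneway(ratings))  # → 1.0
```

ICC values near 1 mean rater disagreement is negligible relative to the spread between the endoscopists being rated, which is why the 0.85-0.99 values above are read as good-to-excellent agreement.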
Affiliation(s)
- Michael A Scaffidi
- Division of Gastroenterology, St. Michael's Hospital, University of Toronto, Toronto, Ontario, Canada
- Samir C Grover
- Division of Gastroenterology, St. Michael's Hospital, University of Toronto, Toronto, Ontario, Canada; Department of Medicine, Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Heather Carnahan
- School of Human Kinetics and Recreation, Memorial University of Newfoundland, St. John's, Newfoundland, Canada
- Jeffrey J Yu
- Wilson Centre, Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Elaine Yong
- Division of Gastroenterology, Sunnybrook Health Sciences Centre, University of Toronto, Toronto, Ontario, Canada; Department of Medicine, Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Geoffrey C Nguyen
- Division of Gastroenterology, Mount Sinai Hospital, University of Toronto, Toronto, Ontario, Canada; Department of Medicine, Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Simon C Ling
- Department of Paediatrics, Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada; Division of Gastroenterology, Hepatology and Nutrition, Hospital for Sick Children, University of Toronto, Toronto, Ontario, Canada
- Nitin Khanna
- Division of Gastroenterology, St. Joseph's Health Centre, University of Western Ontario, London, Ontario, Canada
- Catharine M Walsh
- Department of Paediatrics, Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada; Division of Gastroenterology, Hepatology and Nutrition, Hospital for Sick Children, University of Toronto, Toronto, Ontario, Canada; Wilson Centre, Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
25
Video review program enhances resident training in laparoscopic inguinal hernia: a randomized blinded controlled trial. Surg Endosc 2017; 32:2847-2851. [DOI: 10.1007/s00464-017-5992-0]
26
Chen CY, Ragle CA, Lencioni R, Fransson BA. Comparison of 2 training programs for basic laparoscopic skills and simulated surgery performance in veterinary students. Vet Surg 2017; 46:1187-1197. [DOI: 10.1111/vsu.12729]
Affiliation(s)
- Chi-Ya Chen
- Department of Veterinary Clinical Sciences, Washington State University, Pullman, Washington
- Claude A. Ragle
- Department of Veterinary Clinical Sciences, Washington State University, Pullman, Washington
- Rachael Lencioni
- Department of Veterinary Clinical Sciences, Washington State University, Pullman, Washington
- Boel A. Fransson
- Department of Veterinary Clinical Sciences, Washington State University, Pullman, Washington
27
Vergis A, Steigerwald S. A Preliminary Investigation of General and Technique-specific Assessments for the Evaluation of Laparoscopic Technical Skills. Cureus 2017; 9:e1757. [PMID: 29226047 PMCID: PMC5720594 DOI: 10.7759/cureus.1757]
Abstract
Background Both general and technique-specific assessments of technical skill have been validated in surgical education. The purpose of this study was to assess the correlation between the objective structured assessment of technical skills (OSATS) and the global operative assessment of laparoscopic skills (GOALS) rating scales using a high-fidelity porcine laparoscopic cholecystectomy model. Methods Post-graduate year-one general surgery and urology residents (n=14) performed a live laparoscopic porcine cholecystectomy. Trained surgeons rated their performance using OSATS and GOALS assessment scales. Results Pearson's correlation coefficient between OSATS and GOALS was 0.96 for overall scores. It ranged from 0.78 - 0.89 for domains that overlapped between the two scales. Conclusion There is a very high correlation between OSATS and GOALS. This implies that they likely measure similar constructs and that either may be used for summative-type assessments of trainee skill. However, further investigation is needed to determine if technique-specific assessments may provide more useful feedback in formative evaluation.
28. Yeung C, Carrillo B, Pope V, Hosseinpour S, Gerstle JT, Azzie G. Video assessment of laparoscopic skills by novices and experts: implications for surgical education. Surg Endosc 2017; 31:3883-3889. [DOI: 10.1007/s00464-017-5417-0]
29. Strandbygaard J, Scheele F, Sørensen JL. Twelve tips for assessing surgical performance and use of technical assessment scales. Med Teach 2017; 39:32-37. [PMID: 27678279] [DOI: 10.1080/0142159x.2016.1231911]
Abstract
Using validated assessment scales for technical competence can help structure and standardize assessment and feedback for both the trainee and the supervisor and thereby avoid bias and drive learning. Correct assessment of operative skills can establish learning curves and allow adequate monitoring. However, the assessment of surgical performance is not an easy task, since it includes many proxy parameters, which are hard to measure. Although numerous technical assessment scales exist, both within laparoscopic and open surgery, the validity evidence is often sparse, and this can raise doubts about reliability and educational outcome. Furthermore, the implementation of technical assessment scales varies due to several obstacles and doubts about accurate use. In this 12-tips article, we aim to give the readers a critical and useful appraisal of some of the common questions and misunderstandings regarding the use of surgical assessment scales and provide tips to ease and overcome potential pitfalls.
30. Mellinger JD, Williams RG, Sanfey H, Fryer JP, DaRosa D, George BC, Bohnen JD, Schuller MC, Sandhu G, Minter RM, Gardner AK, Scott DJ. Teaching and assessing operative skills: From theory to practice. Curr Probl Surg 2016; 54:44-81. [PMID: 28212782] [DOI: 10.1067/j.cpsurg.2016.11.007]
31. Poudel S, Kurashima Y, Kawarada Y, Watanabe Y, Murakami Y, Matsumura Y, Kato H, Miyazaki K, Shichinohe T, Hirano S. Development and validation of a checklist for assessing recorded performance of laparoscopic inguinal hernia repair. Am J Surg 2016; 212:468-74. [DOI: 10.1016/j.amjsurg.2015.09.014]
32. Stylopoulos N, Vosburgh KG. Assessing Technical Skill in Surgery and Endoscopy: A Set of Metrics and an Algorithm (C-PASS) to Assess Skills in Surgical and Endoscopic Procedures. Surg Innov 2007; 14:113-21. [PMID: 17558017] [DOI: 10.1177/1553350607302330]
Abstract
Historically, the performance of surgeons has been assessed subjectively by senior surgical staff in both training and operating environments. In this work, the position and motion of surgical instruments are analyzed through an objective process, denoted C-PASS, to measure surgeon performance of laparoscopic, endoscopic, and image-guided procedures. To develop C-PASS, clinically relevant performance characteristics were identified. Then measurement techniques for parameters that represented each characteristic were derived, and analytic techniques were implemented to transform these parameters into explicit, robust metrics. The metrics comprise the C-PASS performance assessment method, which has been validated over the last 3 years in studies of laparoscopy and endoscopy. These studies show that C-PASS is straightforward, reproducible, and accurate. It is sufficiently powerful to assess the efficiency of these complex processes. It is likely that C-PASS and similar approaches will improve skills acquisition and learning and also enable the objective comparison of systems and techniques.
33. Vassiliou MC, Feldman LS, Fraser SA, Charlebois P, Chaudhury P, Stanbridge DD, Fried GM. Evaluating Intraoperative Laparoscopic Skill: Direct Observation Versus Blinded Videotaped Performances. Surg Innov 2007; 14:211-6. [DOI: 10.1177/1553350607308466]
Abstract
The Global Operative Assessment of Laparoscopic Skill (GOALS) has been shown to meet high standards for direct observation. The purpose of this study was to investigate the reliability and validity of GOALS when applied to blinded, videotaped performances. Five novice surgeons and 5 experienced surgeons were each evaluated by 2 observers during a laparoscopic cholecystectomy. Subsequently, 4 laparoscopists (V1 to V4) evaluated the videotaped procedures using GOALS. Two of the raters (V1 and V3) had prior experience using GOALS. The interrater reliabilities between video raters (VRs) and between VRs and direct raters (DRs) were calculated using the intraclass correlation coefficient. Construct validity was assessed using 2-way analysis of variance. Interrater reliability between the 4 VRs and the 2 DRs was 0.72. The intraclass correlation coefficient for the 4 VRs was 0.68 and for each VR compared with the mean DR was 0.86, 0.39, 0.94, and 0.76, respectively. All raters, except V2, differentiated between novice and experienced groups (P values ranged from .01 to .05). These data suggest that GOALS can be used to assess laparoscopic skill based on videotaped performances but that rater training may play an important role in ensuring the reliability and validity of the instrument. Experience with the tool in the operating room may improve the reliability of video rating and could be of value in training evaluators.
34. Hassanpour N, Chen R, Baikpour M, Moghimi S. Video observation of procedural skills for assessment of trabeculectomy performed by residents. J Curr Ophthalmol 2016; 28:61-4. [PMID: 27331148] [PMCID: PMC4909707] [DOI: 10.1016/j.joco.2016.03.003]
Abstract
PURPOSE The efficacy and sufficiency of a healthcare system is directly related to the knowledge and skills of graduates working in the system. In this regard, many different assessment methods have been proposed to evaluate various skills of the learners. Video Observation of Procedural Skills (VOPS) is one newly-proposed method. In this study we aimed to compare the results of the VOPS method with the more commonly used Direct Observation of Procedural Skills (DOPS). METHODS In this prospective study conducted in 2012, all 10 ophthalmology residents of post graduate year 4 were selected for participation. Three months into training in the glaucoma ward, these residents performed trabeculectomy surgery on patients, and their procedural skills were assessed in real time by an expert via the DOPS method. All surgeries were also recorded and later evaluated via the VOPS method by an expert. A Bland-Altman plot was also used to compare the two methods, calculating the mean difference and 95% limits of agreement. RESULTS Residents had performed a mean of 14.9 ± 3.5 (range 10-20) independent trabeculectomies before the assessments. DOPS grade was positively associated with the number of independent trabeculectomies performed during the glaucoma rotation (β = 0.227, p = 0.004). The intra-observer reproducibility of VOPS measurements was 0.847 (95% CI: 0.634, 0.961). The mean VOPS grade was significantly lower than the mean DOPS grade (8.4 vs. 8.9, p = 0.02). However, a good correlation was observed between the grades of VOPS and DOPS (r = 0.89, p = 0.001). Bland-Altman analysis demonstrated that all data points fell within the 95% limits of agreement (-1.46, 0.46). CONCLUSION The present study showed that VOPS might be considered a feasible, valid, and reliable assessment method for procedural skills of medical students and residents that can be used as an alternative to the DOPS method. However, VOPS might underestimate DOPS in evaluating surgical skills of residents.
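The Bland-Altman comparison described above reduces to a small calculation: take the per-case difference between the two methods, then report the mean difference (bias) and the mean ± 1.96 SD limits of agreement. A minimal sketch, using hypothetical paired grades rather than the study's data:

```python
import math

def bland_altman_limits(method_a, method_b):
    """Return (mean difference, lower 95% limit, upper 95% limit)."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    # Sample standard deviation of the differences
    sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
    return mean_d, mean_d - 1.96 * sd_d, mean_d + 1.96 * sd_d

# Hypothetical paired grades for ten performances
vops = [8, 9, 8, 7, 9, 8, 10, 8, 9, 8]
dops = [9, 9, 8, 8, 10, 9, 10, 8, 9, 9]
bias, lower, upper = bland_altman_limits(vops, dops)
```

Agreement is judged by how many paired differences fall inside (lower, upper) and by whether the bias is clinically acceptable, not by a correlation coefficient alone.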
35. Establishing the concurrent validity of general and technique-specific skills assessments in surgical education. Am J Surg 2016; 211:268-73. [DOI: 10.1016/j.amjsurg.2015.04.024]
36. Do orthopaedic fracture skills courses improve resident performance? Injury 2015; 46:547-51. [PMID: 25476015] [DOI: 10.1016/j.injury.2014.10.061]
Abstract
INTRODUCTION We hypothesized that resident participation in a hands-on fracture fixation course leads to significant improvement in their performance as assessed in a simulated fracture fixation model. METHODS Twenty-three junior orthopaedic surgery residents were tasked to treat radial shaft fractures with standard fixation techniques in a sawbones fracture fixation simulation twice during the year. Before the first simulation, 6 of the residents participated in a fracture fixation skills course. The simulation was repeated 6 months later, after all residents had attended the course. Residents also completed a 15-question written examination. Assessment included evaluation of each step of the procedure, a score based on the objective structured assessment of technical skill (OSATS) system, and grade on the examination. Comparisons were made between the two cohorts and the two testing time points. RESULTS Significant improvements were present in the percentage of tasks completed correctly (64.1% vs 84.3%), the overall OSATS score (13.8 vs 17.1), and examination correct answers (8.6 vs 12.5) for the overall cohort between the two testing time points (p<0.001, p<0.03, p<0.04 respectively). Residents who had not participated in the surgical skills course at the time of their initial simulation demonstrated significant improvements in percentage of tasks completed correctly (61.3% vs 81.2%) and OSATS score (12.4 vs 17.0) (p<0.002, p<0.01 respectively). No significant difference was noted in performance for the cohort who had already participated in the course (p=0.87 and p=0.68). The cohort that had taken the course prior to the initial simulation showed significantly higher scores at initial evaluation (88.5% vs 58.5% percentage of tasks completed correctly, 17.3 vs 12.0 OSATS score, 12.5 vs 8.6 correct answers on the examination).
At the second simulation, no significant difference was seen with task completion or examination grade, but a significant difference still existed with respect to the OSATS score (20.0 vs 17.0; p<0.03). CONCLUSION Participation in a formal surgical skills course significantly improved practical operative skills as assessed by the simulation. The benefits of the course were maintained to 6 months with residents who completed the training earlier continuing to demonstrate an advantage in skills. Such courses are a valuable training resource which directly impact resident performance.
37. Psychomotor skills assessment in medical training based on virtual reality using a Weighted Possibilistic approach. Knowl Based Syst 2014. [DOI: 10.1016/j.knosys.2014.05.006]
38. Bowles P, Harries M, Young P, Das P, Saunders N, Fleming J. A validation study on the use of intra-operative video recording as an objective assessment tool for core ENT surgery. Clin Otolaryngol 2014; 39:102-7. [DOI: 10.1111/coa.12240]
39. Moktar J, Popkin CA, Howard A, Murnaghan ML. Development of a cast application simulator and evaluation of objective measures of performance. J Bone Joint Surg Am 2014; 96:e76. [PMID: 24806022] [DOI: 10.2106/jbjs.l.01266]
Abstract
BACKGROUND Surgical simulation offers a low-risk learning environment with repetitive practice opportunities for orthopaedic residents. It is increasingly prevalent in many training programs, as acquisition of technical skills in the face of educational demands and reduced work hours becomes more challenging. In addition to surgical skills, orthopaedic residents must also learn the technique of cast application. Deficiencies in casting skill are risk factors for re-displacement of fractures and cast-specific complications. Formal educational models to instruct or to evaluate casting technique have not been well described or tested. The purposes of this study were to develop a cast application simulator and to validate a novel method of evaluating casting skill. METHODS A module that simulates short arm cast application on a synthetic forearm model was developed. An Objective Structured Assessment of Technical Skill checklist was created with use of Delphi methodology involving nine content experts (five orthopaedic surgeons and four orthopaedic technologists). Nine participants (three medical students, three orthopaedic residents, two orthopaedic fellows, and one orthopaedic technologist) were used to evaluate the reliability and validity of the checklist. Nine de-identified videos of cast application were recorded and were utilized to test the newly developed Objective Structured Assessment of Technical Skill checklist and Modified Global Rating Scale for reliability and validity. Participants were grouped by training level (medical students, orthopaedic residents, and orthopaedic fellows or orthopaedic technologists) and were evaluated twice. RESULTS Reliability was high as shown by intraclass correlation. 
The inter-rater reliability was 0.85 for the Objective Structured Assessment of Technical Skill, 0.81 for the Modified Global Rating Scale performance, and 0.78 for the Modified Global Rating Scale final product; the intra-rater reliability was 0.88 for the Objective Structured Assessment of Technical Skill, 0.85 for the Modified Global Rating Scale performance, and 0.81 for the Modified Global Rating Scale final product. The Objective Structured Assessment of Technical Skill checklist scores were 9.28 points for the medical students, 17.46 points for the orthopaedic residents, and 18.85 points for the orthopaedic fellows or orthopaedic technologists (p < 0.05, F = 6.32). The Modified Global Rating Scale performance and final product scores also reflected the level of training. Post hoc analysis showed a significant difference between the medical students and orthopaedic fellows or orthopaedic technologists for the Objective Structured Assessment of Technical Skill checklist and Modified Global Rating Scale. CONCLUSIONS This casting simulation model and evaluation instrument is a reliable assessment of casting skill in applying a short arm cast. However, given the inability to stratify all three groups on the basis of the level of training, further work is needed to establish construct validity.
40. Gurusamy KS, Nagendran M, Toon CD, Davidson BR. Laparoscopic surgical box model training for surgical trainees with limited prior laparoscopic experience. Cochrane Database Syst Rev 2014; 2014:CD010478. [PMID: 24585169] [PMCID: PMC10875408] [DOI: 10.1002/14651858.cd010478.pub2]
Abstract
BACKGROUND Surgical training has traditionally been one of apprenticeship, where the surgical trainee learns to perform surgery under the supervision of a trained surgeon. This is time consuming, costly, and of variable effectiveness. Training using a box model physical simulator is an option to supplement standard training. However, the value of this modality on trainees with limited prior laparoscopic experience is unknown. OBJECTIVES To compare the benefits and harms of box model training for surgical trainees with limited prior laparoscopic experience versus standard surgical training or supplementary animal model training. SEARCH METHODS We searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, and Science Citation Index Expanded to May 2013. SELECTION CRITERIA We planned to include all randomised clinical trials comparing box model trainers versus other forms of training including standard laparoscopic training and supplementary animal model training in surgical trainees with limited prior laparoscopic experience. We also planned to include trials comparing different methods of box model training. DATA COLLECTION AND ANALYSIS Two authors independently identified trials and collected data. We analysed the data with both the fixed-effect and the random-effects models using Review Manager 5. For each outcome, we calculated the risk ratio (RR), mean difference (MD), or standardised mean difference (SMD) with 95% confidence intervals (CI) based on intention-to-treat analysis whenever possible. MAIN RESULTS We identified eight trials that met the inclusion criteria. One trial including 17 surgical trainees did not contribute to the meta-analysis. We included seven trials (249 surgical trainees belonging to various postgraduate years ranging from year one to four) in which the participants were randomised to supplementary box model training (122 trainees) versus standard training (127 trainees). 
Only one trial (50 trainees) was at low risk of bias. The box trainers used in all the seven trials were video trainers. Six trials were conducted in USA and one trial in Canada. The surgeries in which the final assessments were made included laparoscopic total extraperitoneal hernia repairs, laparoscopic cholecystectomy, laparoscopic tubal ligation, laparoscopic partial salpingectomy, and laparoscopic bilateral mid-segment salpingectomy. The final assessments were made on a single operative procedure. There were no deaths in three trials (0/82 (0%) supplementary box model training versus 0/86 (0%) standard training; RR not estimable; very low quality evidence). The other trials did not report mortality. The estimated effect on serious adverse events was compatible with benefit and harm (three trials; 168 patients; 0/82 (0%) supplementary box model training versus 1/86 (1.1%) standard training; RR 0.36; 95% CI 0.02 to 8.43; very low quality evidence). None of the trials reported patient quality of life. The operating time was significantly shorter in the supplementary box model training group versus the standard training group (1 trial; 50 patients; MD -6.50 minutes; 95% CI -10.85 to -2.15). The proportion of patients who were discharged as day-surgery was significantly higher in the supplementary box model training group versus the standard training group (1 trial; 50 patients; 24/24 (100%) supplementary box model training versus 15/26 (57.7%) standard training; RR 1.71; 95% CI 1.23 to 2.37). None of the trials reported trainee satisfaction. The operating performance was significantly better in the supplementary box model training group versus the standard training group (seven trials; 249 trainees; SMD 0.84; 95% CI 0.57 to 1.10). None of the trials compared box model training versus animal model training or versus different methods of box model training.
AUTHORS' CONCLUSIONS There is insufficient evidence to determine whether laparoscopic box model training reduces mortality or morbidity. There is very low quality evidence that it improves technical skills compared with standard surgical training in trainees with limited previous laparoscopic experience. It may also decrease operating time and increase the proportion of patients who were discharged as day-surgery in the first total extraperitoneal hernia repair after box model training. However, the duration of the benefit of box model training is unknown. Further well-designed trials of low risk of bias and random errors are necessary. Such trials should assess the long-term impact of box model training on clinical outcomes and compare box training with other forms of training.
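The risk-ratio figures quoted in this review follow a standard calculation: the ratio of event proportions in the two groups, with a Wald confidence interval computed on the log scale. A minimal sketch (meta-analysis software additionally applies zero-cell corrections and weighting, so published values may differ slightly):

```python
import math

def risk_ratio_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Risk ratio of group A vs group B with a Wald CI on the log scale."""
    rr = (events_a / n_a) / (events_b / n_b)
    # Standard error of log(RR) for a 2x2 table with nonzero event counts
    se_log = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi

# Day-surgery discharge comparison from the single trial: 24/24 vs 15/26
rr, lo, hi = risk_ratio_ci(24, 24, 15, 26)
```

The very wide interval reported for serious adverse events (0.02 to 8.43) is typical when one arm has zero events: the correction needed to handle the empty cell inflates the standard error on the log scale.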
41. Ma IWY, Zalunardo N, Brindle ME, Hatala R, McLaughlin K. Notes From the Field: Direct Observation Versus Rating by Videos for the Assessment of Central Venous Catheterization Skills. Eval Health Prof 2014; 38:419-22. [PMID: 24419501] [DOI: 10.1177/0163278713518942]
Abstract
Blinded assessments of technical skills using video-recordings may offer more objective assessments than direct observations. This study seeks to compare these two modalities. Two trained assessors independently assessed 18 central venous catheterization performances by direct observation and video-recorded assessments using two tools. Although sound quality was deemed adequate in all videos, portions of the video for wire handling and drape handling were frequently out of view (n = 13, 72% for wire-handling; n = 17, 94% for drape-handling). There were no differences in summary global rating scores, checklist scores, or pass/fail decisions for either modality (p > 0.05). Inter-rater reliability was acceptable for both modalities. Of the 26 discrepancies identified between direct observation and video-recorded assessments, three discrepancies (12%) were due to inattention during video review, while one (4%) discrepancy was due to inattention during direct observation. In conclusion, although scores did not differ between the two assessment modalities, techniques of video-recording may significantly impact individual items of assessments.
42. Koehler RJ, Amsdell S, Arendt EA, Bisson LJ, Braman JP, Butler A, Cosgarea AJ, Harner CD, Garrett WE, Olson T, Warme WJ, Nicandri GT. The Arthroscopic Surgical Skill Evaluation Tool (ASSET). Am J Sports Med 2013; 41:1229-37. [PMID: 23548808] [PMCID: PMC4134966] [DOI: 10.1177/0363546513483535]
Abstract
BACKGROUND Surgeries employing arthroscopic techniques are among the most commonly performed in orthopaedic clinical practice; however, valid and reliable methods of assessing the arthroscopic skill of orthopaedic surgeons are lacking. HYPOTHESIS The Arthroscopic Surgery Skill Evaluation Tool (ASSET) will demonstrate content validity, concurrent criterion-oriented validity, and reliability when used to assess the technical ability of surgeons performing diagnostic knee arthroscopic surgery on cadaveric specimens. STUDY DESIGN Cross-sectional study; Level of evidence, 3. METHODS Content validity was determined by a group of 7 experts using the Delphi method. Intra-articular performance of a right and left diagnostic knee arthroscopic procedure was recorded for 28 residents and 2 sports medicine fellowship-trained attending surgeons. Surgeon performance was assessed by 2 blinded raters using the ASSET. Concurrent criterion-oriented validity, interrater reliability, and test-retest reliability were evaluated. RESULTS Content validity: The content development group identified 8 arthroscopic skill domains to evaluate using the ASSET. Concurrent criterion-oriented validity: Significant differences in the total ASSET score (P < .05) between novice, intermediate, and advanced experience groups were identified. Interrater reliability: The ASSET scores assigned by each rater were strongly correlated (r = 0.91, P < .01), and the intraclass correlation coefficient between raters for the total ASSET score was 0.90. Test-retest reliability: There was a significant correlation between ASSET scores for both procedures attempted by each surgeon (r = 0.79, P < .01). CONCLUSION The ASSET appears to be a useful, valid, and reliable method for assessing surgeon performance of diagnostic knee arthroscopic surgery in cadaveric specimens. Studies are ongoing to determine its generalizability to other procedures as well as to the live operating room and other simulated environments.
Affiliation(s)
- Ryan J Koehler
- School of Medicine, University of Rochester, Rochester, NY 14642, USA
|
43
|
Rolls A, Riga C, Bicknell C, Stoyanov D, Shah C, Van Herzeele I, Hamady M, Cheshire N. A Pilot Study of Video-motion Analysis in Endovascular Surgery: Development of Real-time Discriminatory Skill Metrics. Eur J Vasc Endovasc Surg 2013; 45:509-15. [DOI: 10.1016/j.ejvs.2013.02.004] [Citation(s) in RCA: 32] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2012] [Accepted: 02/05/2013] [Indexed: 11/16/2022]
|
44
|
Weston MK, Stephens JH, Schafer A, Hewett PJ. Warm-up before laparoscopic surgery is not essential. ANZ J Surg 2012; 84:143-7. [DOI: 10.1111/j.1445-2197.2012.06321.x] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 09/24/2012] [Indexed: 11/30/2022]
Affiliation(s)
- Maree K. Weston
- Division of Surgery, The Queen Elizabeth Hospital, Woodville South, South Australia, Australia
- Amy Schafer
- Division of Surgery, The Queen Elizabeth Hospital, Woodville South, South Australia, Australia
- Peter J. Hewett
- Division of Surgery, The Queen Elizabeth Hospital, Woodville South, South Australia, Australia
- Discipline of Surgery, The University of Adelaide, Adelaide, South Australia, Australia
|
45
|
Rooney DM, Hungness ES, Darosa DA, Pugh CM. Can skills coaches be used to assess resident performance in the skills laboratory? Surgery 2012; 151:796-802. [PMID: 22652120 DOI: 10.1016/j.surg.2012.03.016] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2011] [Accepted: 03/15/2012] [Indexed: 11/20/2022]
Abstract
INTRODUCTION The purpose of this study was to compare faculty ratings between live versus video-recorded resident performances and faculty versus skills coaches' ratings of video-recorded resident performances. METHODS PGY1 residents were observed, video-recorded, and rated during a Verification of Proficiency examination on 4 stations (ie, suturing, laparotomy, central line, and cricothyroidotomy). One surgeon and 2 trained skills coaches independently rated each video-recorded performance (N = 25). The chi-square test was used to compare checklist ratings. Analysis of variance was used to compare global ratings. Intraclass correlations were used to evaluate inter-rater agreement. RESULTS There were no statistical differences in faculty checklist ratings for live versus video-recorded performances (P > .05), and we found a nearly perfect inter-rater agreement, intraclass correlation coefficient (ICC) = 0.99 (P < .001). When comparing faculty versus skills coaches' ratings on video-recorded performances, we found no differences for the global or checklist ratings. Inter-rater agreement was moderately high for the global ratings, ICC = 0.71 (P < .01, 95% confidence interval 0.23-0.96), and nearly perfect for the checklist ratings, ICC = 0.99 (P < .001, 95% confidence interval 0.94-1.00). CONCLUSION When assessing residents' performances, use of video-recorded performance ratings and skills coaches may be viable alternatives to live ratings performed by surgical faculty.
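The checklist comparison above rests on a chi-square test of rating frequencies. A hedged sketch of that kind of comparison with scipy, using an invented 2x2 table of pass/fail checklist decisions under live versus video-recorded review (the counts are illustrative only, not the study's data):

```python
from scipy.stats import chi2_contingency

# Rows: rating condition (live, video); columns: checklist item passed / failed.
observed = [[21, 4],
            [20, 5]]

# chi2_contingency applies Yates' continuity correction by default for 2x2 tables.
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}, dof = {dof}")
```

A large p-value from a table like this would mirror the study's finding that live and video-recorded ratings did not differ significantly (P > .05).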
Affiliation(s)
- Deborah M Rooney
- Northwestern Center for Advanced Surgical Education, Simulation Technology and Immersive Learning, Feinberg School of Medicine, Chicago, IL 60611, USA.
|
46
|
Intelligent assessment based on Beta Regression for realistic training on medical simulators. Knowl Based Syst 2012. [DOI: 10.1016/j.knosys.2011.09.010] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
|
47
|
Cristancho S, Moussa F, Dubrowski A. Simulation-augmented training program for off-pump coronary artery bypass surgery: Developing and validating performance assessments. Surgery 2012; 151:785-95. [DOI: 10.1016/j.surg.2012.03.015] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2011] [Accepted: 03/15/2012] [Indexed: 10/28/2022]
|
48
|
Sonnadara R, Rittenhouse N, Khan A, Mihailidis A, Drozdzal G, Safir O, Leung SO. A novel multimodal platform for assessing surgical technical skills. Am J Surg 2012; 203:32-6. [DOI: 10.1016/j.amjsurg.2011.08.008] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2011] [Revised: 08/27/2011] [Accepted: 08/27/2011] [Indexed: 10/15/2022]
|
49
|
Sachdeva AK, Buyske J, Dunnington GL, Sanfey HA, Mellinger JD, Scott DJ, Satava R, Fried GM, Jacobs LM, Burns KJ. A new paradigm for surgical procedural training. Curr Probl Surg 2011; 48:854-968. [PMID: 22078788 DOI: 10.1067/j.cpsurg.2011.08.003] [Citation(s) in RCA: 34] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/22/2023]
Affiliation(s)
- Ajit K Sachdeva
- Division of Education, American College of Surgeons, Chicago, Illinois, USA
|
50
|
Ahmed K, Miskovic D, Darzi A, Athanasiou T, Hanna GB. Observational tools for assessment of procedural skills: a systematic review. Am J Surg 2011; 202:469-480.e6. [PMID: 21798511 DOI: 10.1016/j.amjsurg.2010.10.020] [Citation(s) in RCA: 171] [Impact Index Per Article: 13.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2010] [Revised: 10/11/2010] [Accepted: 10/11/2010] [Indexed: 11/19/2022]
Abstract
BACKGROUND Assessment by direct observation of procedural skills is an important source of constructive feedback. The aim of this study was to identify observational tools for technical skill assessment, to assess characteristics of these tools, and to assess their usefulness for assessment. METHODS Included studies reported tools for observational assessment of technical skills. A total of 106 articles were included. RESULTS Three main categories included global assessment scales evaluating generic skills (n = 29), task-specific methods assessing procedure-specific skills (n = 30), and combinations of tools evaluating both generic and task-specific skills (n = 47). In most studies, content validity was not evaluated using an accepted scientific method. All tools were assessed for inter-rater reliability and construct validity. Data on feasibility, acceptability, and educational impact were sparse. CONCLUSIONS There is evidence of validity and reliability for observational assessment tools at the trainee level. In most studies a comprehensive analysis of the tools was not achieved. Evaluation of technical skill using current observational assessment tools is not reliable and valid at the specialist level. Future research needs to focus on further systematic tool development and analysis, especially at the specialist level.
Affiliation(s)
- Kamran Ahmed
- Department of Surgery and Cancer, Imperial College London, St. Mary's Hospital Campus, UK
|