1
Montgomery KB, Mellinger JD, McLeod MC, Jones A, Zmijewski P, Sarosi GA, Brasel KJ, Klingensmith ME, Minter RM, Buyske J, Lindeman B. Decision-Making Confidence of Clinical Competency Committees for Entrustable Professional Activities. JAMA Surg 2024. [PMID: 38717759; PMCID: PMC11079788; DOI: 10.1001/jamasurg.2024.0809]
Abstract
Importance A competency-based assessment framework using entrustable professional activities (EPAs) was endorsed by the American Board of Surgery following a 2-year feasibility pilot study. Pilot study programs' clinical competency committees (CCCs) rated residents on EPA entrustment semiannually using this newly developed assessment tool, but factors associated with their decision-making are not yet known. Objective To identify factors associated with variation in decision-making confidence of CCCs in EPA summative entrustment decisions. Design, Setting, and Participants This cohort study used deidentified data from the EPA Pilot Study, with participating sites at 28 general surgery residency programs, prospectively collected from July 1, 2018, to June 30, 2020. Data were analyzed from September 27, 2022, to February 15, 2023. Exposure Microassessments of resident entrustment for pilot EPAs (gallbladder disease, inguinal hernia, right lower quadrant pain, trauma, and consultation) collected within the course of routine clinical care across four 6-month study cycles. Summative entrustment ratings were then determined by program CCCs for each study cycle. Main Outcomes and Measures The primary outcome was CCC decision-making confidence rating (high, moderate, slight, or no confidence) for summative entrustment decisions, with a secondary outcome of number of EPA microassessments received per summative entrustment decision. Bivariate tests and mixed-effects regression modeling were used to evaluate factors associated with CCC confidence. Results Among 565 residents receiving at least 1 EPA microassessment, 1765 summative entrustment decisions were reported. Overall, 72.5% (1279 of 1765) of summative entrustment decisions were made with moderate or high confidence. 
Confidence ratings increased with increasing mean number of EPA microassessments, with 1.7 (95% CI, 1.4-2.0) at no confidence, 1.9 (95% CI, 1.7-2.1) at slight confidence, 2.9 (95% CI, 2.6-3.2) at moderate confidence, and 4.1 (95% CI, 3.8-4.4) at high confidence. Increasing number of EPA microassessments was associated with increased likelihood of higher CCC confidence for all except 1 EPA phase after controlling for program effects (odds ratio range: 1.21 [95% CI, 1.07-1.37] for intraoperative EPA-4 to 2.93 [95% CI, 1.64-5.85] for postoperative EPA-2); for preoperative EPA-3, there was no association. Conclusions and Relevance In this cohort study, the CCC confidence in EPA summative entrustment decisions increased as the number of EPA microassessments increased, and CCCs endorsed moderate to high confidence in most entrustment decisions. These findings provide early validity evidence for this novel assessment framework and may inform program practices as EPAs are implemented nationally.
Affiliation(s)
- John D. Mellinger
- American Board of Surgery, Philadelphia, Pennsylvania
- Department of Surgery, Southern Illinois University, Springfield
- Andrew Jones
- American Board of Surgery, Philadelphia, Pennsylvania
- Karen J. Brasel
- Department of Surgery, Oregon Health & Science University, Portland
- Mary E. Klingensmith
- American Board of Surgery, Philadelphia, Pennsylvania
- Accreditation Council for Graduate Medical Education, Chicago, Illinois
- Department of Surgery, Washington University in St Louis, St Louis, Missouri
- Jo Buyske
- American Board of Surgery, Philadelphia, Pennsylvania
- Department of Surgery, University of Pennsylvania, Philadelphia
2
Toale C, Morris M, O'Keeffe D, Boland F, Ryan DM, Nally DM, Kavanagh DO. Assessing operative competence in core surgical training: A reliability analysis. Am J Surg 2023;226:588-595. [PMID: 37481408; DOI: 10.1016/j.amjsurg.2023.06.020]
Abstract
BACKGROUND This study quantifies the number of observations required to reliably assess the operative competence of Core Surgical Trainees (CSTs) in Ireland, using the Supervised Structured Assessment of Operative Performance (SSAOP) tool. METHODS SSAOPs (April 2016-February 2021) were analysed across a mix of undifferentiated procedures, as well as for three commonly performed general surgery procedures in CST: appendicectomy, abdominal wall hernia repair, and skin/subcutaneous lesion excision. Generalizability and Decision studies determined the number of observations required to achieve dependability indices ≥0.8, appropriate for use in high-stakes assessment. RESULTS A total of 2,294 SSAOPs were analysed. Four assessors, each observing 10 cases, can generate scores sufficiently reliable for use in high-stakes assessments. Focusing on a selection of core procedures yields more favourable reliability indices. CONCLUSION Trainers should conduct repeated assessments across a smaller number of procedures to improve reliability. Programs should increase the assessor mix to yield sufficient dependability indices for high-stakes assessment.
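The Generalizability/Decision-study logic above can be sketched numerically. A minimal sketch, assuming hypothetical variance components (the SSAOP components are not reproduced in this abstract); it only illustrates how a dependability index Φ rises with the assessor × case mix:

```python
# Decision (D) study sketch with HYPOTHETICAL variance components.
# The non-trainee variance is lumped into a single residual term for
# simplicity; a full G study would separate assessor, case, and
# interaction components.

def dependability(var_trainee, var_residual, n_assessors, n_cases):
    """Phi-style dependability index: trainee variance over trainee
    variance plus absolute error, which shrinks as observations grow."""
    abs_error = var_residual / (n_assessors * n_cases)
    return var_trainee / (var_trainee + abs_error)

# Find the smallest case count per assessor mix reaching Phi >= 0.80
var_p, var_res = 0.30, 1.50   # hypothetical components
for n_a in range(1, 6):
    for n_c in range(1, 41):
        phi = dependability(var_p, var_res, n_a, n_c)
        if phi >= 0.80:
            print(f"{n_a} assessors x {n_c} cases -> Phi = {phi:.2f}")
            break
```

With these toy components, one assessor needs 20 cases to reach Φ = 0.80, while four assessors need only 5 cases each, mirroring the paper's point that spreading observations across more assessors improves dependability.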
Affiliation(s)
- Conor Toale
- Department of Surgical Affairs, Royal College of Surgeons in Ireland, Ireland
- Marie Morris
- Data Science Centre, University of Medicine and Health Sciences at the Royal College of Surgeons in Ireland, Ireland
- Dara O'Keeffe
- Department of Surgical Affairs, Royal College of Surgeons in Ireland, Ireland
- Fiona Boland
- Data Science Centre, University of Medicine and Health Sciences at the Royal College of Surgeons in Ireland, Ireland
- Donncha M Ryan
- Department of Surgical Affairs, Royal College of Surgeons in Ireland, Ireland
- Deirdre M Nally
- Department of Surgical Affairs, Royal College of Surgeons in Ireland, Ireland
- Dara O Kavanagh
- Department of Surgical Affairs, Royal College of Surgeons in Ireland, Ireland
3
Hackney L, O'Neill S, O'Donnell M, Spence R. A scoping review of assessment methods of competence of general surgical trainees. Surgeon 2023;21:60-69. [PMID: 35300909; DOI: 10.1016/j.surge.2022.01.009]
Abstract
BACKGROUND Only rigorous evaluation of competence will result in the production of safe surgeons who are able to provide the best care for patients. The development of competency-based assessment should ultimately be evidence driven. OBJECTIVES To explore the volume of existing evidence pertaining to the different objective assessment methods reported in the literature. ELIGIBILITY CRITERIA Studies describing objective assessment of postgraduate general surgical trainees within the last 20 years. SOURCES OF EVIDENCE PubMed, Ovid Medline and Web of Science. CHARTING METHODS A data chart proforma was designed and data were extracted into tables; basic numerical analysis of extracted data and narrative synthesis of charted data were performed. RESULTS A total of 343 papers were reviewed, of which 26 were eligible for inclusion. Most articles (92%) were published from 2008 onwards, and half in the last five years. The articles originated from 6 different countries, predominantly the United Kingdom (42%), followed by the United States of America (38%); a small number were published from Canada (8%), Japan (4%), Germany (4%) and Australia (4%). UK publications predominated between 2008 and 2014, while the USA had a later predominance between 2015 and 2018. Quantitative methodology was used in 42% of studies, a qualitative approach in 27%, and mixed analysis in 31%. Sixteen assessment methods were presented. The most common type was Objective Structured Assessments (27%), which included the Objective Structured Assessment of Technical Skill (OSATS) (23%) and the Objective Structured Assessment of Non-Technical Skill (4%). Procedure Based Assessment (PBA) (23%) and Entrustability Scales (23%) were also prevalent. CONCLUSIONS This scoping review identified a range of different assessment methods. Those with a higher volume and level of supporting evidence were OSATS, PBAs and Entrustability Scales; a lower volume and level of supporting evidence was found for the remaining assessment methods.
Affiliation(s)
- Roy Spence
- Queen's University Belfast, United Kingdom
4
Response to the Comment on "A Proposed Blueprint for Operative Performance Training, Assessment, and Certification". Ann Surg 2021;274:e938-e939. [PMID: 34784685; DOI: 10.1097/sla.0000000000005082]
5
Nicolas JD, Huang R, Teitelbaum EN, Bilimoria KY, Hu YY. Constructing Learning Curves to Benchmark Operative Performance of General Surgery Residents Against a National Cohort of Peers. J Surg Educ 2020;77:e94-e102. [PMID: 33109492; DOI: 10.1016/j.jsurg.2020.10.001]
Abstract
OBJECTIVE No method or data exist to allow surgical trainees or their programs to contextualize their technical progress. The objective of this study was to create peer benchmarks for Cumulative Sum (CUSUM) charts based upon operative evaluations from a national cohort of general surgery residents. DESIGN, SETTING, PARTICIPANTS In 2016-2018, faculty from 26 general surgery residency programs nationwide rated 328 residents' operative performance on a case-by-case basis using a validated 5-point Likert scale. An individual case was considered a "misstep" if it scored below the national median for that procedure in that postgraduate year (PGY). We constructed 2-sided observed-expected CUSUM charts to capture each resident's cumulative performance over time relative to the national medians. Upper (failure) and lower (positive outlier) benchmarks were established based on the PGY-specific 75th percentile and median misstep rates; consistent, repeated missteps are reflected by crossing of the upper boundary. Procedures with ≤10 observations and residents evaluated <10 times in a given PGY were excluded. RESULTS A total of 8,161 evaluations on 76 procedure types were analyzed. The individual misstep rate was lowest among PGY-3s at 13.3% and highest among PGY-4s at 28.6%. No interns had curves that crossed the failure boundary, whereas 8.7% of PGY-2s and 8.9% of PGY-3s finished the year past it. PGY-2s had the most positive outliers, with 28.3% crossing the lower (success) boundary at least once. PGY-5s failed most frequently, with 16.7% crossing the upper boundary at some point and 11.1% remaining above it at graduation. CONCLUSIONS CUSUM is a valid statistical approach for benchmarking individual residents' operative performance against national peers in real time as they progress through the year. With further validation, CUSUM could be used to set progression and/or graduation standards and to objectively identify residents who might benefit from remediation.
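An observed-minus-expected CUSUM of the kind described can be sketched as follows. The case outcomes, expected misstep rate, and boundary below are hypothetical, not the paper's PGY-specific values:

```python
# Minimal observed-minus-expected CUSUM sketch (HYPOTHETICAL data and
# boundary; the paper derives PGY-specific limits from national rates).

def oe_cusum(outcomes, expected_rate):
    """Running sum of (observed misstep - expected misstep rate).
    outcomes: iterable of 0/1, where 1 marks a case scored below the
    national median for that procedure/PGY (a 'misstep')."""
    total, path = 0.0, []
    for o in outcomes:
        total += o - expected_rate
        path.append(total)
    return path

cases = [0, 1, 1, 0, 1, 1, 1, 0]   # hypothetical case-by-case ratings
curve = oe_cusum(cases, expected_rate=0.25)
upper_boundary = 2.0                # illustrative failure threshold
flagged = any(v > upper_boundary for v in curve)
```

The curve drifts upward when missteps occur more often than expected; a resident whose curve crosses the upper boundary (here, `flagged` is true) would be the real-time signal for review that the paper describes.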
Affiliation(s)
- Joseph D Nicolas
- Surgical Outcomes and Quality Improvement Center (SOQIC), Department of Surgery, Feinberg School of Medicine, Northwestern University, Chicago, Illinois
- Reiping Huang
- Surgical Outcomes and Quality Improvement Center (SOQIC), Department of Surgery, Feinberg School of Medicine, Northwestern University, Chicago, Illinois
- Ezra N Teitelbaum
- Department of Surgery, Feinberg School of Medicine, Northwestern University, Chicago, Illinois
- Karl Y Bilimoria
- Surgical Outcomes and Quality Improvement Center (SOQIC), Department of Surgery, Feinberg School of Medicine, Northwestern University, Chicago, Illinois
- Yue-Yung Hu
- Surgical Outcomes and Quality Improvement Center (SOQIC), Department of Surgery, Feinberg School of Medicine, Northwestern University, Chicago, Illinois; Division of Pediatric Surgery, Ann and Robert H. Lurie Children's Hospital, Chicago, Illinois
6
Cheung WJ, Wood TJ, Gofton W, Dewhirst S, Dudek N. The Ottawa Emergency Department Shift Observation Tool (O-EDShOT): A New Tool for Assessing Resident Competence in the Emergency Department. AEM Educ Train 2020;4:359-368. [PMID: 33150278; PMCID: PMC7592826; DOI: 10.1002/aet2.10419]
Abstract
OBJECTIVES The outcome of emergency medicine (EM) training is to produce physicians who can competently run an emergency department (ED) shift. However, there are few tools with supporting validity evidence specifically designed to assess multiple key competencies across an entire shift. The investigators developed and gathered validity evidence for a novel entrustment-based tool to assess a resident's ability to safely run an ED shift. METHODS Through a nominal group technique, local and national stakeholders identified dimensions of performance that are reflective of a competent ED physician and are required to safely manage an ED shift. These were included as items in the Ottawa Emergency Department Shift Observation Tool (O-EDShOT), and each item was scored using an entrustment-based rating scale. The tool was implemented in 2018 at the University of Ottawa Department of Emergency Medicine, and quantitative data and qualitative feedback were collected over 6 months. RESULTS A total of 1,141 forms were completed by 78 physicians for 45 residents. An analysis of variance demonstrated an effect of training level, with statistically significant increases in mean O-EDShOT scores with each subsequent postgraduate year (p < 0.001). Scores did not vary by ED treatment area. Residents rated as able to safely run the shift had significantly higher mean ± SD scores (4.8 ± 0.3) than those rated as not able (3.8 ± 0.6; p < 0.001). Faculty and residents reported that the tool was feasible to use and facilitated actionable feedback aimed at progression toward independent practice. CONCLUSIONS The O-EDShOT successfully discriminated between trainees of different levels regardless of ED treatment area. Multiple sources of validity evidence support the O-EDShOT as a tool to assess a resident's ability to safely run an ED shift. It can serve as a stimulus for daily observation and feedback, making it practical to use within an EM residency program.
Affiliation(s)
- Warren J. Cheung
- Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Timothy J. Wood
- Department of Innovation in Medical Education, University of Ottawa, Ottawa, Ontario, Canada
- Wade Gofton
- Department of Surgery, Division of Orthopaedic Surgery, University of Ottawa, Ottawa, Ontario, Canada
- Nancy Dudek
- Department of Medicine, Division of Physical Medicine and Rehabilitation, University of Ottawa, Ottawa, Ontario, Canada
7
Perkins SQ, Dabaja A, Atiemo H. Best Approaches to Evaluation and Feedback in Post-Graduate Medical Education. Curr Urol Rep 2020;21:36. [PMID: 32789759; DOI: 10.1007/s11934-020-00991-2]
Abstract
PURPOSE OF REVIEW The objectives of this literature review are to appraise current approaches and assess new technologies that have been utilized for evaluation and feedback of residents, with a focus on surgical trainees. RECENT FINDINGS In 1999, the Accreditation Council for Graduate Medical Education introduced the Milestone system as a tool for summative evaluation. The organization allows individual program autonomy on how evaluation and feedback are performed. In the past, questionnaire evaluations and informal verbal feedback were employed. However, with the advent of technology, these have taken a different shape in the form of crowdsourcing, mobile platforms, and simulation. Limited data are available on new methods, but studies show promise, citing low cost and positive impact on resident education. No one "best approach" exists for evaluation and feedback. However, it is apparent that a multimodal approach based on the ACGME Milestones can be effective and aid in guiding programs.
Affiliation(s)
- Sara Q Perkins
- Henry Ford Health System, 2799 W Grand Blvd, K9, Detroit, MI, 48202, USA
- Ali Dabaja
- Henry Ford Health System, 2799 W Grand Blvd, K9, Detroit, MI, 48202, USA
- Humphrey Atiemo
- Henry Ford Health System, 2799 W Grand Blvd, K9, Detroit, MI, 48202, USA
8
Resident training experience with robotic assisted transabdominal preperitoneal inguinal hernia repair. Am J Surg 2020;219:278-282. [DOI: 10.1016/j.amjsurg.2019.11.014]
9
Chen JX, Kozin E, Bohnen J, George B, Deschler DG, Emerick K, Gray ST. Assessments of Otolaryngology Resident Operative Experiences Using Mobile Technology: A Pilot Study. Otolaryngol Head Neck Surg 2019;161:939-945. [DOI: 10.1177/0194599819868165]
Abstract
Objectives Surgical education has shifted from the Halstedian model of "see one, do one, teach one" to a competency-based model of training. Otolaryngology residency programs can benefit from a fast and simple system to assess residents' surgical skills. In this quality initiative, we hypothesized that a novel smartphone application called the System for Improving and Measuring Procedural Learning (SIMPL) could be applied in an otolaryngology residency to facilitate the assessment of resident operative experiences. Methods The Plan-Do-Study-Act method of quality improvement was used. After researching tools of surgical assessment and trialing SIMPL in a resident-attending pair, we piloted SIMPL across an otolaryngology residency program. Faculty and residents were trained to use SIMPL to rate resident operative performance and autonomy with the previously validated Zwisch scale. Results Residents (n = 23) and faculty (n = 17) were trained to use SIMPL using a standardized curriculum. A total of 833 assessments were completed from December 1, 2017, to June 30, 2018. Attendings completed a median of 20 assessments, and residents completed a median of 14 self-assessments. All evaluations were resident initiated, and attendings had a median response rate of 78%. Evaluations took residents a median of 22 seconds to complete; 126 unique procedures were logged, representing all 14 key indicator cases for otolaryngology. Discussion This is the first residency-wide application of a mobile platform to track the operative experiences of otolaryngology residents. Implications for Practice We adapted and implemented a novel assessment tool in a large otolaryngology program. Future multicenter studies will benchmark resident operative experiences nationwide.
Affiliation(s)
- Jenny X. Chen
- Department of Otolaryngology–Head and Neck Surgery, Massachusetts Eye and Ear, Boston, Massachusetts, USA
- Department of Otolaryngology–Head and Neck Surgery, Harvard Medical School, Boston, Massachusetts, USA
- Elliott Kozin
- Department of Otolaryngology–Head and Neck Surgery, Massachusetts Eye and Ear, Boston, Massachusetts, USA
- Department of Otolaryngology–Head and Neck Surgery, Harvard Medical School, Boston, Massachusetts, USA
- Jordan Bohnen
- Department of General Surgery, Massachusetts General Hospital, Boston, Massachusetts, USA
- Brian George
- Center for Surgical Training and Research, Department of Surgery, University of Michigan, Ann Arbor, Michigan, USA
- Daniel G. Deschler
- Department of Otolaryngology–Head and Neck Surgery, Massachusetts Eye and Ear, Boston, Massachusetts, USA
- Department of Otolaryngology–Head and Neck Surgery, Harvard Medical School, Boston, Massachusetts, USA
- Kevin Emerick
- Department of Otolaryngology–Head and Neck Surgery, Massachusetts Eye and Ear, Boston, Massachusetts, USA
- Department of Otolaryngology–Head and Neck Surgery, Harvard Medical School, Boston, Massachusetts, USA
- Stacey T. Gray
- Department of Otolaryngology–Head and Neck Surgery, Massachusetts Eye and Ear, Boston, Massachusetts, USA
- Department of Otolaryngology–Head and Neck Surgery, Harvard Medical School, Boston, Massachusetts, USA
10
Saliken D, Dudek N, Wood TJ, MacEwan M, Gofton WT. Comparison of the Ottawa Surgical Competency Operating Room Evaluation (O-SCORE) to a Single-Item Performance Score. Teach Learn Med 2019;31:146-153. [PMID: 30514128; DOI: 10.1080/10401334.2018.1503961]
Abstract
UNLABELLED Construct: We compared a single-item performance score with the Ottawa Surgical Competency Operating Room Evaluation (O-SCORE) for their ability to assess surgical competency. BACKGROUND Surgical programs are adopting competency-based frameworks, and the use of these frameworks for assessment requires tools that produce accurate and valid assessments of knowledge and technical performance. An assessment tool that is quick to complete could improve feasibility, reduce delays, and result in a higher volume of assessments of learners. Previous work demonstrated that the 9-item O-SCORE can produce valid results; the goal of this study was to determine whether a single-item performance rating (Is the candidate competent to independently complete the procedure: yes or no) completed at a separate viewing would correlate with the O-SCORE, thus increasing the feasibility of procedural competence assessment. APPROACH Nineteen residents and 2 staff orthopedic surgeons from the University of Ottawa volunteered for a 2-part OSCE-style station comprising a written questionnaire and a videotaped simulated open reduction and internal fixation of a midshaft radius fracture. Each performance was rated independently by 3 orthopedic surgeons using the single-item performance score (Time 1). The performances were assessed again 6 weeks later by the same 3 raters using the O-SCORE (Time 2). Correlation between the single-item performance score and the O-SCORE was evaluated. RESULTS The 3 orthopedic surgeons completed 21 ratings each, resulting in 63 ratings. There was a high level of correlation and agreement between the single-item performance scores at Time 1 and Time 2 (κ = 0.72-1.00; p < .001; percentage agreement = 90%-100%). The reliability of the O-SCORE at Time 2 with three raters was 0.83, and its internal consistency was 0.89. There was a tendency for each rater to assign more yes responses to more senior trainees. CONCLUSIONS A single-item performance score correlated highly with the O-SCORE in an orthopedic setting and could be used to supplement a multi-item score with similar results. There remains benefit in completing multi-item scores such as the O-SCORE to guide specific areas of improvement and direct feedback.
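The κ statistic reported above is an agreement coefficient of the Cohen's kappa family; a quick sketch with toy yes/no ratings (not the study's data) shows how it corrects observed agreement for chance:

```python
# Cohen's kappa for two sets of yes/no competence calls.
# Ratings below are TOY data, not the study's.
from collections import Counter

def cohens_kappa(r1, r2):
    """Chance-corrected agreement between two raters over the same cases."""
    n = len(r1)
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n      # raw agreement
    c1, c2 = Counter(r1), Counter(r2)
    # Expected agreement if both raters assigned labels independently
    p_exp = sum(c1[k] * c2[k] for k in set(r1) | set(r2)) / n**2
    return (p_obs - p_exp) / (1 - p_exp)

time1 = ["yes", "yes", "no", "yes", "no", "no"]
time2 = ["yes", "yes", "no", "yes", "yes", "no"]
print(round(cohens_kappa(time1, time2), 2))  # → 0.67
```

Here 5 of 6 calls agree (raw agreement 0.83), but chance alone would produce 0.50, so κ lands at 0.67, inside the 0.72-1.00 range the study reports only by coincidence of the toy data.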
Affiliation(s)
- David Saliken
- Department of Surgery, RebalanceMD, Victoria, British Columbia, Canada
- Nancy Dudek
- Department of Surgery, University of Ottawa, Ottawa, Ontario, Canada
- Timothy J Wood
- Department of Innovation in Medical Education, University of Ottawa, Ottawa, Ontario, Canada
- Matthew MacEwan
- Department of Orthopedic Surgery, University of Ottawa, Ottawa, Ontario, Canada
- Departments of Surgery and Innovation in Medical Education, University of Ottawa, Ottawa, Ontario, Canada
- Wade T Gofton
- Department of Innovation in Medical Education, University of Ottawa, Ottawa, Ontario, Canada
11
Bello RJ, Meyer ML, Cooney DS, Rosson GD, Lifchez SD, Cooney CM. The reliability of operative rating tool evaluations: How late is too late to provide operative performance feedback? Am J Surg 2018;216:1052-1055. [DOI: 10.1016/j.amjsurg.2018.04.005]
12
Anton NE, Sawyer JM, Korndorffer JR, DuCoin CG, McRary G, Timsina LR, Stefanidis D. Developing a robust suturing assessment: validity evidence for the intracorporeal suturing assessment tool. Surgery 2018;163:560-564. [DOI: 10.1016/j.surg.2017.10.029]
14
Köhler TS. Assessing Competence in Surgical Training and Becoming a Better Educator. J Sex Med 2017;14:761-764. [PMID: 28583336; DOI: 10.1016/j.jsxm.2017.04.663]
Affiliation(s)
- Tobias S Köhler
- Division of Urology, Southern Illinois University School of Medicine, Springfield, IL, USA.
15
Nathwani JN, Glarner CE, Law KE, McDonald RJ, Zelenski AB, Greenberg JA, Foley EF. Integrating Postoperative Feedback Into Workflow: Perceived Practices and Barriers. J Surg Educ 2017;74:406-414. [PMID: 27894938; PMCID: PMC5485837; DOI: 10.1016/j.jsurg.2016.11.001]
Abstract
OBJECTIVE Previous studies have found that both resident and staff surgeons highly value postoperative feedback and that such feedback has high educational value. However, little is known about how to consistently deliver this feedback. Our aim was to understand how often surgical residents should receive feedback and what barriers prevent this from occurring. DESIGN Surveys were distributed to resident and attending surgeons. Questions focused on the current frequency of postoperative feedback, desired frequency and methods of feedback, and perceived barriers. Quantitative data were analyzed with descriptive statistics, and text responses were examined using coding. SETTING University-based general surgery department at a Midwestern institution. PARTICIPANTS General surgery residents (n = 23) and attending surgeons (n = 22) participated in this study. RESULTS Residents reported receiving, and staff reported giving, feedback on procedure-specific performance after 25% versus 34% of cases, general technical feedback after 36% versus 32%, and nontechnical feedback after 17% versus 18%. Both groups perceived that procedure-specific and general technical feedback should be given more than 80% of the time and that nontechnical feedback should be given for nearly 60% of cases. Verbal feedback immediately after the operation was rated as best practice. Both parties identified time, conflicting responsibilities, lack of privacy, and discomfort with giving and receiving meaningful feedback as barriers. CONCLUSIONS Both resident and staff surgeons agree that postoperative feedback is given far less often than it should be. Future work should study intraoperative and postoperative feedback to validate resident and attending surgeons' perceptions so that interventions to improve and facilitate this process can be developed.
Affiliation(s)
- Jay N Nathwani
- Department of Surgery, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin
- Carly E Glarner
- Department of Surgery, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin
- Katherine E Law
- Department of Industrial and Systems Engineering, University of Wisconsin School of Engineering, Madison, Wisconsin
- Robert J McDonald
- Department of Surgery, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin
- Amy B Zelenski
- Department of Surgery, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin
- Jacob A Greenberg
- Department of Surgery, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin
- Eugene F Foley
- Department of Surgery, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin
16
Byram JN, Seifert MF, Brooks WS, Fraser-Cotlin L, Thorp LE, Williams JM, Wilson AB. Using generalizability analysis to estimate parameters for anatomy assessments: A multi-institutional study. Anat Sci Educ 2017;10:109-119. [PMID: 27458988; DOI: 10.1002/ase.1631]
Abstract
With integrated curricula and multidisciplinary assessments becoming more prevalent in medical education, there is a continued need for educational research to explore the advantages, consequences, and challenges of integration practices. This retrospective analysis investigated the number of items needed to reliably assess anatomical knowledge in the context of gross anatomy and histology. A generalizability analysis was conducted on gross anatomy and histology written and practical examination items that were administered in a discipline-based format at Indiana University School of Medicine and in an integrated fashion at the University of Alabama School of Medicine and Rush University Medical College. Examination items were analyzed using a partially nested design s×(i:o) in which items were nested within occasions (i:o) and crossed with students (s). A reliability standard of 0.80 was used to determine the minimum number of items needed across examinations (occasions) to make reliable and informed decisions about students' competence in anatomical knowledge. Decision study plots are presented to demonstrate how the number of items per examination influences the reliability of each administered assessment. Using the example of a curriculum that assesses gross anatomy knowledge over five summative written and practical examinations, the results of the decision study estimated that 30 and 25 items would be needed on each written and practical examination to reach a reliability of 0.80, respectively. This study is particularly relevant to educators who may question whether the amount of anatomy content assessed in multidisciplinary evaluations is sufficient for making judgments about the anatomical aptitude of students.
Affiliation(s)
- Jessica N Byram - Department of Anatomy and Cell Biology, Indiana University School of Medicine, Indianapolis, Indiana
- Mark F Seifert - Department of Anatomy and Cell Biology, Indiana University School of Medicine, Indianapolis, Indiana
- William S Brooks - Department of Cell, Developmental, and Integrative Biology, University of Alabama at Birmingham School of Medicine, Birmingham, Alabama
- Laura Fraser-Cotlin - Department of Cell, Developmental, and Integrative Biology, University of Alabama at Birmingham School of Medicine, Birmingham, Alabama
- Laura E Thorp - Department of Physical Therapy, University of Illinois at Chicago, Chicago, Illinois
- James M Williams - Department of Anatomy and Cell Biology, Rush University, Chicago, Illinois
- Adam B Wilson - Department of Anatomy and Cell Biology, Rush University, Chicago, Illinois
17
Bello RJ, Major MR, Cooney DS, Rosson GD, Lifchez SD, Cooney CM. Empirical validation of the Operative Entrustability Assessment using resident performance in autologous breast reconstruction and hand surgery. Am J Surg 2017;213:227-232. [DOI: 10.1016/j.amjsurg.2016.09.054]
18
Mellinger JD, Williams RG, Sanfey H, Fryer JP, DaRosa D, George BC, Bohnen JD, Schuller MC, Sandhu G, Minter RM, Gardner AK, Scott DJ. Teaching and assessing operative skills: From theory to practice. Curr Probl Surg 2016;54:44-81. [PMID: 28212782] [DOI: 10.1067/j.cpsurg.2016.11.007]
Affiliation(s)
- John D Mellinger - Department of Surgery, Southern Illinois University School of Medicine, Springfield, IL
- Reed G Williams - Department of Surgery, Southern Illinois University School of Medicine, Springfield, IL; Department of Surgery, Indiana University School of Medicine, Indianapolis, IN
- Hilary Sanfey - Department of Surgery, Southern Illinois University School of Medicine, Springfield, IL; American College of Surgeons, Chicago, IL
- Jonathan P Fryer - Department of Surgery, Feinberg School of Medicine, Northwestern University, Chicago, IL
- Debra DaRosa - Department of Surgery, Feinberg School of Medicine, Northwestern University, Chicago, IL
- Brian C George - Department of Surgery, University of Michigan, Ann Arbor, MI
- Jordan D Bohnen - Department of General Surgery, Massachusetts General Hospital and Harvard University, Boston, MA
- Mary C Schuller - Department of Surgery, Feinberg School of Medicine, Northwestern University, Chicago, IL
- Gurjit Sandhu - Department of Surgery, University of Michigan, Ann Arbor, MI; Department of Learning Health Sciences, University of Michigan, Ann Arbor, MI
- Rebecca M Minter - Department of Surgery, University of Texas Southwestern Medical Center, Dallas, TX
- Aimee K Gardner - Department of Surgery, University of Texas Southwestern Medical Center, Dallas, TX; UT Southwestern Simulation Center, University of Texas Southwestern Medical Center, Dallas, TX
- Daniel J Scott - Department of Surgery, University of Texas Southwestern Medical Center, Dallas, TX; UT Southwestern Simulation Center, University of Texas Southwestern Medical Center, Dallas, TX
19
Williams RG, Kim MJ, Dunnington GL. Practice Guidelines for Operative Performance Assessments. Ann Surg 2016;264:934-948. [DOI: 10.1097/sla.0000000000001685]
20
Empirical Validation of the Operative Entrustability Assessment Using Resident Performance in Autologous Breast Reconstruction and Hand Surgery. Plast Reconstr Surg Glob Open 2016. [PMCID: PMC4956894] [DOI: 10.1097/gox.0000000000000775]