1. Optimization of a standardized letter of recommendation for faculty who wish to support candidates applying to surgical training programs. Am J Surg 2024:S0002-9610(24)00173-9. [PMID: 38575444] [DOI: 10.1016/j.amjsurg.2024.03.012]
Abstract
Letters of recommendation (LORs) play an important role in applicant selection for graduate medical education programs. LORs may be of increasing importance in the evaluation of applicants given the recent change of the USMLE Step 1 to pass/fail scoring and the relative lack of other objective measures by which to differentiate and stratify applicants. Narrative letters of recommendation (NLORs), although widely used, have certain limitations, namely variability in interpretation, introduction of gender/race bias, and performance inflation. Standardized letters of recommendation (SLORs) have been proposed as a potential corrective to these limitations. We conducted a series of semi-structured interviews and focus groups to gather perspectives from letter writers and readers and thereby inform methods for improving the information elicited by SLORs; the resulting data were analyzed using the constant comparative method and a process of iterative coding. We applied our findings to the development of a novel SLOR for use in surgical residency program applications and were subsequently invited to help revise existing SLORs for a surgical post-graduate training program.
2. Evaluating the Construct Validity of Competencies: A Retrospective Analysis. Medical Science Educator 2023;33:729-736. [PMID: 37501811] [PMCID: PMC10368597] [DOI: 10.1007/s40670-023-01794-z]
Abstract
Background A competency-based framework focuses on alignment between professional standards and assessment design. This alignment implies improved measurement validity, yet it has not been established that competence in one context predicts performance in another context. High-stakes competence assessments offer insights into the relationship between assessment design and competencies. Methods/Analyses The internationally educated nurses competency assessment program (IENCAP) was developed at Touchstone Institute in collaboration with the College of Nurses of Ontario (CNO) and includes a 12-station OSCE in which each station evaluates the same 10 competencies. We submitted competency scores to a multi-trait multi-method matrix analysis to evaluate the convergent and discriminant validity of competencies. Results/Observations All correlations were significant and positive; however, we did not find evidence of convergent or discriminant validity. Correlations were higher between different competencies evaluated within the same station (mean correlation = 0.60) than between identical competencies evaluated across different stations (mean correlation = 0.19). Discussion The results do not provide evidence of construct validity for competencies. While competency-based approaches emphasize various generalized knowledge, skills, and attitudes, these findings indicate that the clinical context is a major determinant of performance. Conclusion The context-dependent nature of competencies requires multiple assessments in varied contexts; performance on a competency cannot be determined on a single occasion. Supplementary Information The online version contains supplementary material available at 10.1007/s40670-023-01794-z.
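A minimal sketch of this study's key computation: comparing correlations between different competencies within the same station (same method) against the same competency across stations (same trait). This is illustrative only, not the authors' pipeline; the sample size, the simulated scores, and the dominant station effect are assumptions chosen to mirror the reported pattern.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_examinees, n_stations, n_competencies = 200, 12, 10

# Simulate scores dominated by a station (context) effect: each examinee has a
# station-specific performance level plus smaller competency-specific noise.
station_effect = rng.normal(size=(n_examinees, n_stations, 1))
scores = station_effect + 0.6 * rng.normal(size=(n_examinees, n_stations, n_competencies))

# One column per (station, competency) pair, then the full correlation matrix.
cols = pd.MultiIndex.from_product([range(n_stations), range(n_competencies)],
                                  names=["station", "competency"])
df = pd.DataFrame(scores.reshape(n_examinees, -1), columns=cols)
corr = df.corr().to_numpy()

within_station, cross_station = [], []
pairs = list(df.columns)
for i in range(len(pairs)):
    for j in range(i + 1, len(pairs)):
        (s1, c1), (s2, c2) = pairs[i], pairs[j]
        if s1 == s2:            # different competencies, same station (method)
            within_station.append(corr[i, j])
        elif c1 == c2:          # same competency (trait), different stations
            cross_station.append(corr[i, j])

# Convergent validity would require the same-trait (cross-station) mean to
# exceed the same-method (within-station) mean; here the context dominates.
print(f"mean within-station r = {np.mean(within_station):.2f}")
print(f"mean cross-station  r = {np.mean(cross_station):.2f}")
```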
3. Development and preliminary testing of a virtual reality measurement for assessing intake assessment skills. International Journal of Psychology 2023;58:237-246. [PMID: 36720650] [DOI: 10.1002/ijop.12898]
Abstract
Objective structured clinical examinations (OSCEs) have been widely used in health care education to simultaneously assess knowledge, skill and attitude. Because of the high cost of running an OSCE, its application in professional psychology remains limited. Virtual standardised patient (VSP) implementations offer a cost-effective way to administer psychology OSCEs regularly. This study aimed to develop and examine the psychometric properties of the VSP version of the Intake OSCE (VSP-Intake OSCE) for measuring psychologists' psychological assessment competencies (PACs) from entry to early practice. The initial VSP-Intake OSCE contains a VSP station and a follow-up written station to measure PACs when conducting an intake assessment. To administer the VSP station, we built a new VSP system that allows psychologists to interact with a VSP verbally. A sample of 36 participants, including 27 graduate students and nine psychologists, was recruited to examine the psychometric properties of the VSP-Intake OSCE. As a newly developed instrument, the VSP-Intake OSCE showed good inter-rater reliability and construct validity. We believe that using VSP implementations to develop psychology OSCEs will be essential in promoting OSCE applications in professional psychology.
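The abstract reports good inter-rater reliability without naming the statistic used. One plausible choice for two raters scoring ordinal checklist items is a quadratically weighted Cohen's kappa, sketched below with a self-contained implementation; the 0-4 scale and the ratings are hypothetical.

```python
import numpy as np

def quadratic_weighted_kappa(rater_a, rater_b, n_categories):
    """Cohen's kappa with quadratic weights for two raters on an ordinal scale."""
    observed = np.zeros((n_categories, n_categories))
    for a, b in zip(rater_a, rater_b):
        observed[a, b] += 1
    observed /= observed.sum()

    # Expected agreement if the two raters' marginals were independent.
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))

    # Quadratic disagreement weights: zero on the diagonal, growing with distance.
    idx = np.arange(n_categories)
    weights = (idx[:, None] - idx[None, :]) ** 2 / (n_categories - 1) ** 2

    return 1 - (weights * observed).sum() / (weights * expected).sum()

# Hypothetical 0-4 ratings from two raters scoring the same ten performances.
rater_a = [4, 3, 3, 2, 4, 1, 0, 3, 2, 4]
rater_b = [4, 3, 2, 2, 4, 1, 1, 3, 2, 3]
print(f"weighted kappa = {quadratic_weighted_kappa(rater_a, rater_b, 5):.2f}")
```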
4. Teacher, Gatekeeper, or Team Member: supervisor positioning in programmatic assessment. Advances in Health Sciences Education: Theory and Practice 2022. [PMID: 36469231] [DOI: 10.1007/s10459-022-10193-9]
Abstract
Competency-based assessment is undergoing an evolution with the popularisation of programmatic assessment. Fundamental to programmatic assessment are the attributes and buy-in of the people participating in the system. Our previous research revealed unspoken, yet influential, cultural and relationship dynamics that interact with programmatic assessment to influence success. Pulling at this thread, we conducted a secondary analysis of focus groups and interviews (n = 44 supervisors) using the critical lens of Positioning Theory to explore how workplace supervisors experienced and perceived their positioning within programmatic assessment. We found that supervisors positioned themselves in two of three ways. First, supervisors universally positioned themselves as a Teacher, describing an inherent duty to educate students. Enactment of this position was dichotomous, with some supervisors ascribing a passive and disempowered position onto students while others empowered students by cultivating an egalitarian teaching relationship. Second, two mutually exclusive positions were described: either Gatekeeper or Team Member. Supervisors positioning themselves as Gatekeepers had a duty to protect the community and were vigilant in detecting inadequate student performance. Programmatic assessment challenged this positioning by reorientating supervisor rights and duties, which diminished their perceived authority and led to frustration and resistance. In contrast, Team Members enacted a right to make a valuable contribution to programmatic assessment and felt liberated from the burden of assessment, enabling them to assent to power shifts towards students and the university. Identifying supervisor positions revealed how programmatic assessment challenged traditional structures and ideologies, impeding success, and provided insights into supporting supervisors in programmatic assessment.
5. Design guidelines for assessing students' interprofessional competencies in healthcare education: a consensus study. Perspectives on Medical Education 2022;11:316-324. [PMID: 36223031] [PMCID: PMC9743853] [DOI: 10.1007/s40037-022-00728-6]
Abstract
INTRODUCTION Healthcare systems require healthcare professionals and students educated in an interprofessional (IP) context. Well-designed assessments are needed to evaluate whether students have developed IP competencies, but we currently lack evidence-informed guidelines to create them. This study aims to provide guidelines for the assessment of IP competencies in healthcare education. METHODS A qualitative consensus study was conducted to establish guidelines for the design of IP assessments using the nominal group technique. First, five expert groups (IP experts, patients, educational scientists, teachers, and students) were asked to discuss design guidelines for IP assessment and reach intra-group consensus. Second, one heterogeneous inter-group meeting was organized to reach a consensus among the expert groups on IP assessment guidelines. RESULTS This study yielded a comprehensive set of 26 guidelines to help design performance assessments for IP education: ten guidelines each for the IP assessment tasks and the IP assessors, and six guidelines for the IP assessment procedures. DISCUSSION The results showed that IP assessment is complex and that, compared to mono-professional assessment, high-quality IP assessments require additional elements such as multiple IP products and processes to be assessed, an IP pool of assessors, and assessment procedures in which standards are included for the IP collaboration process as well as individual contributions. The guidelines are based on expert knowledge and experience, but an important next step is to test these design guidelines in educational practice.
6. A Literature Review: Entrustable Professional Activities, an assessment tool for postgraduate dental training? J Dent 2022;120:104099. [PMID: 35337899] [DOI: 10.1016/j.jdent.2022.104099]
Abstract
Assessing when dental trainees are ready to independently undertake clinical procedures at specialist level is critical for dental postgraduate programmes, both to determine when a trainee is 'work ready' and to ensure patient safety. Entrustable professional activities (EPAs) are a novel method of competency-based assessment. An EPA is a unit of professional practice or critical clinical activity identified within dental training programmes that should be assessed during training to establish whether trainees are ready for independent practice, with a progressive decrease in supervision based on supervisors' entrustment decisions. This article describes EPAs and entrustment decisions, including entrustment supervision scales, and the process recommended to develop EPAs within dental curricula. EPAs have not been formally introduced for assessment within dental education programmes in the United Kingdom, but recent developments have been described in undergraduate dental education globally. Clinical significance: Competency-based assessments need to be continually developed to adapt to rapidly changing population health care and dental needs, and to determine when dental trainees are ready for independent clinical practice. Early development of entrustable professional activities for assessment in undergraduate dental programmes has been well received by both trainees and supervisors. Further investigation is required to consider formal development of EPAs within postgraduate dental programmes.
7. The ASK-SEAT: a competency-based assessment scale for students majoring in clinical medicine. BMC Medical Education 2022;22:76. [PMID: 35114990] [PMCID: PMC8815145] [DOI: 10.1186/s12909-022-03140-0]
Abstract
BACKGROUND To validate a competency-based assessment scale for students majoring in clinical medicine, the ASK-SEAT. Students' competency growth across grade years was also examined for trends and gaps. METHODS Questionnaires were distributed online from May through August 2018 to Year-2 to Year-6 students majoring in clinical medicine at the Shantou University Medical College (China). Cronbach's alpha values were calculated for the reliability of the scale, and exploratory factor analysis was employed for structural validity. Predictive validity was explored by correlating Year-4 students' self-assessed competency ratings with their licensing examination scores (based on Kendall's tau-b values). All students' competency development over time was examined using the Mann-Whitney U test. RESULTS A total of 760 questionnaires meeting the inclusion criteria were analyzed. The overall Cronbach's alpha value was 0.964, and the item-total correlations were all greater than 0.520. The overall KMO measure was 0.966 and the KMO measure for each item was greater than 0.930 (P < 0.001). The eigenvalues of the top 3 components extracted were all greater than 1, explaining 55.351%, 7.382%, and 5.316% of the data variance respectively, and 68.048% cumulatively. These components were aligned with the competency dimensions of skills (S), knowledge (K), and attitude (A). Significant and positive correlations (0.135 < Kendall's tau-b < 0.276, p < 0.05) were found between Year-4 students' self-rated competency levels and their scores on the licensing examination. Steady competency growth was associated with almost all indicators, with the most pronounced growth in the domain of skills. A lack of steady growth was seen in the indicators of "applying the English language" and "conducting scientific research & innovating". CONCLUSIONS The ASK-SEAT, a competency-based assessment scale developed to measure medical students' competency development, shows good reliability and structural validity. For predictive validity, weak-to-moderate correlations are found between Year-4 students' self-assessment and their performance on the national licensing examination (Year-4 students start their clinical clerkship during the 2nd semester of their 4th year of study). Year-2 to Year-6 students demonstrate steady improvement in the great majority of clinical competency indicators, except in "applying the English language" and "conducting scientific research & innovating".
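The two reliability statistics reported first, Cronbach's alpha and corrected item-total correlations, can be computed directly from the response matrix. A minimal sketch follows; the ratings are invented for illustration and are not the ASK-SEAT data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

def corrected_item_total(items):
    """Correlation of each item with the total of the remaining items."""
    items = np.asarray(items, dtype=float)
    total = items.sum(axis=1)
    return np.array([np.corrcoef(items[:, i], total - items[:, i])[0, 1]
                     for i in range(items.shape[1])])

# Hypothetical 5-point self-ratings: 8 respondents x 4 competency items.
ratings = np.array([[4, 5, 4, 4], [3, 3, 4, 3], [5, 5, 5, 4], [2, 3, 2, 2],
                    [4, 4, 5, 4], [3, 2, 3, 3], [5, 4, 4, 5], [2, 2, 3, 2]])
print(f"alpha = {cronbach_alpha(ratings):.3f}")
print("item-total r:", np.round(corrected_item_total(ratings), 3))
```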
8. Establishing competency-based measures for Department of Veterans Affairs post-graduate nurse practitioner residencies. J Prof Nurs 2021;37:962-970. [PMID: 34742529] [DOI: 10.1016/j.profnurs.2021.08.001]
Abstract
BACKGROUND In the past decade, numerous nurse residency models have been created and implemented nationwide; however, validated specialty-specific competency standards have not been established to evaluate Nurse Practitioner (NP) resident core competencies. PURPOSE To report the specialty-specific competency assessment tool devised to assess Department of Veterans Affairs (VA) NP residents' competencies and discuss the VA NP residency program's effectiveness in expanding new graduate NP knowledge and skills in the veteran-centric care setting. METHODS The VA Nursing Academic Partnership NP residency faculty established and piloted a web-based Nurse Practitioner Resident Competency Assessment (NPRCA) instrument for the comprehensive, specialty-specific assessment of individual NP resident's skill competencies across 24 areas. RESULTS The VA specialty-specific competency assessment instrument demonstrates strong internal consistency. The robust VA NP residency program enhances new graduate NP competencies. CONCLUSIONS The VA NP residency model can further the goal of standardizing clinical competencies in NP residency programs.
9. Evaluating chief resident readiness for the teaching assistant role: the Teaching Evaluation and Assessment of the Chief Resident (TEACh-R) instrument. Am J Surg 2021;222:1112-1119. [PMID: 34600735] [DOI: 10.1016/j.amjsurg.2021.09.026]
Abstract
BACKGROUND The American Board of Surgery has mandated that chief residents complete 25 cases in the teaching assistant (TA) role. We developed a structured instrument, the Teaching Evaluation and Assessment of the Chief Resident (TEACh-R), to determine readiness and provide feedback for residents in this role. METHODS Senior (PGY3-5) residents were scored on technical and teaching performance by faculty observers using the TEACh-R instrument in the simulation lab. Residents were provided with their TEACh-R scores and surveyed on their experience. RESULTS Scores in the technical (p < 0.01) and teaching (p < 0.01) domains increased with PGY. Higher technical, but not teaching, scores correlated with attending-rated readiness for operative independence (p = 0.02). Autonomy mismatch was inversely correlated with teaching competence (p < 0.01). Residents reported satisfaction with TEACh-R feedback and a desire for use of this instrument in operating room settings. CONCLUSION Our TEACh-R instrument is an effective way to assess technical and teaching performance in the TA role.
10. "We're called upon to be nonjudgmental": a qualitative exploration of United States medical students' discussions of abortion as a reflection of their professionalism. Contraception 2021;106:57-63. [PMID: 34529953] [DOI: 10.1016/j.contraception.2021.09.004]
Abstract
OBJECTIVES Medical educators may assess learners' professionalism through clinical scenarios eliciting value conflicts (situations in which an individual's values differ from others' perceived values). We examined the extent to which United States (US) medical students' discussion of abortion highlights their professionalism according to the 6 Association of American Medical Colleges (AAMC) professionalism competencies. STUDY DESIGN We conducted anonymous, semistructured qualitative interviews with 74 US medical students applying to OB/GYN residency. Interviews explored attitudes toward abortion and abortion case vignettes. We analyzed interview transcripts using directed content analysis for alignment with the AAMC professionalism competencies: humanism, patient needs superseding self-interest, patient autonomy, physician accountability, sensitivity to diverse populations, and commitment to ethical principles. RESULTS Students' genders, races, religions, and geographic regions were diverse. Attitudes toward abortion varied, but all students commented on themes related to at least 1 AAMC professionalism competency when discussing abortion care. Statements demonstrating students' humanism, prioritization of patient autonomy, and sense of physician accountability were common. Most comments reflected positive professionalism practices, regardless of personal views on abortion or provision intentions; very few students made statements that were not aligned with the AAMC professionalism competencies. CONCLUSIONS All students in this study exhibited professionalism when discussing abortion, regardless of personal views on abortion or intention to provide this care. Case-based discussions involving abortion could be used to explore professionalism competencies among medical learners. IMPLICATIONS Discussing abortion has the potential to elicit value conflicts, which enables learners to exhibit professionalism. Case-based abortion education should be included in medical school curricula to measure medical professionalism in future physicians, and to serve as a tool for teaching professionalism in medical school.
11. Development of an International Standardized Curriculum for Laparoscopic Sleeve Gastrectomy Teaching Utilizing Modified Delphi Methodology. Obes Surg 2021;31:4257-4263. [PMID: 34296371] [DOI: 10.1007/s11695-021-05572-x]
Abstract
BACKGROUND The performance of laparoscopic sleeve gastrectomy has increased markedly, making it the most commonly performed bariatric surgical procedure globally. To date, a means of standardized trainee teaching has not been developed. The aim of this study was to design a laparoscopic curriculum for trainees of bariatric surgery utilizing modified Delphi consensus methodology. METHODS A panel of surgeons was assembled to devise an academic framework of technical, non-technical and cognitive skills utilized in the performance of laparoscopic sleeve gastrectomy. The panel invited 18 bariatric surgeons experienced in laparoscopic gastrectomy from 11 countries to rate the items for inclusion in the curriculum to a predefined level of agreement. RESULTS A consensus of experts was achieved for 24 of the 30 proposed elements within the first round of the curriculum Delphi panel. All components pertaining to anatomical knowledge, peri-operative considerations and non-technical items were accepted. A second round further examined six statements, of which three were accepted. Agreement of the panel was reached for 27 of the cognitive, technical and non-technical components after two rounds; three statements found no consensus. CONCLUSIONS Utilizing modified Delphi methodology, a curriculum outlining the most important components of teaching laparoscopic sleeve gastrectomy has been determined by a consensus of international experts in bariatric surgery. The curriculum is suggested as a standard in proficiency-based training of this procedure. It forms a generic template that facilitates individual jurisdictions performing content validation, adapting the curriculum to local requirements in teaching the next generation of bariatric surgeons.
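The round-by-round logic of a modified Delphi panel (rate candidate items, accept those reaching a predefined level of agreement, carry the rest into the next round) reduces to a few lines of code. In this sketch the 80% threshold, the 1-5 rating scale, and the candidate items are all assumptions; the abstract does not state the panel's actual criteria.

```python
THRESHOLD = 0.80      # proportion of panellists who must rate an item 4 or 5
AGREE_RATING = 4      # ratings at or above this value count as agreement

def run_round(ratings_by_item):
    """Split items into accepted and carried-forward sets for the next round."""
    accepted, carried = [], []
    for item, ratings in ratings_by_item.items():
        agreement = sum(r >= AGREE_RATING for r in ratings) / len(ratings)
        (accepted if agreement >= THRESHOLD else carried).append(item)
    return accepted, carried

# Hypothetical 1-5 ratings from 18 panellists for three candidate elements.
round_one = {
    "port placement":     [5, 5, 4, 5, 4, 4, 5, 5, 4, 5, 4, 4, 5, 5, 4, 5, 5, 4],
    "stapler selection":  [3, 4, 2, 5, 3, 4, 2, 3, 4, 3, 2, 4, 3, 3, 4, 2, 3, 4],
    "team communication": [5, 4, 5, 4, 5, 5, 4, 4, 5, 4, 5, 5, 4, 5, 4, 5, 4, 5],
}
accepted, carried = run_round(round_one)
print("accepted in round 1:", accepted)
print("re-rated in round 2:", carried)
```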
12. Case item creation and video case presentation as summative assessment tools for distance learning in the pandemic era. Med J Armed Forces India 2021;77:S466-S474. [PMID: 34393331] [PMCID: PMC8346809] [DOI: 10.1016/j.mjafi.2021.05.018]
Abstract
BACKGROUND There is an urgent need for more diverse methods of student evaluation, given the sudden shift to online learning necessitated by the coronavirus disease 2019 (COVID-19) pandemic. Innovative assessment tools will need to cover the required competencies and should be used to drive self-learning. Self-assessments and peer assessments may be added to traditional classroom-based evaluations to identify individual insecurities or overconfidence. Identification of these factors is essential to medical education and is a focus of current research. METHODS A modified operational assessment was introduced for the evaluation of third-year medical students. This intervention has facilitated sustained education and has promoted interactive student learning. Members of the entering class of 2017 participated in an integrated, team-based and competency-based online project that involved innovative item creation and case presentation methods. RESULTS The new assessment process has been implemented successfully with positive feedback from all participants; a usable product has been generated. CONCLUSIONS We created new assessment tools in response to the COVID-19 pandemic that have been used successfully at our institution. These tools have provided a framework for integrated and interactive evaluations that can be used to facilitate the modification of traditional assessment methods.
13.
Abstract
Medical education programs are failing to meet the health needs of patients and communities. Misalignments exist on multiple levels, including content (what trainees learn), pedagogy (how trainees learn), and culture (why trainees learn). To address these challenges effectively, competency-based assessment (CBA) for psychiatric medical education must simultaneously produce life-long learners who can self-regulate their own growth and trustworthy processes that determine and accelerate readiness for independent practice. The key to doing so effectively is situating assessment within a carefully designed system with several critical, interacting components: workplace-based assessment, ongoing faculty development, learning analytics, longitudinal coaching, and fit-for-purpose clinical competency committees.
14.
Abstract
With the adoption of competency-based medical education, assessment has shifted from the traditional classroom domains of knows and knows how to the workplace domain of does. This workplace-based assessment has two purposes: assessment of learning (summative feedback) and assessment for learning (formative feedback). What the trainee does becomes the basis for identifying growth edges and determining readiness for advancement and, ultimately, independent practice. High-quality workplace-based assessment programs require thoughtful choices about the framework of assessment, the tools themselves, the platforms used, and the contexts in which the assessments take place, with an emphasis on direct observation.
15. Trainee Autonomy in Minimally Invasive General Surgery in the United States: Establishing a National Benchmark. Journal of Surgical Education 2020;77:e52-e62. [PMID: 33250116] [DOI: 10.1016/j.jsurg.2020.07.033]
Abstract
OBJECTIVE Minimally invasive surgery (MIS) is an integral component of General Surgery training and practice. Yet, little is known about how much autonomy General Surgery residents achieve in MIS procedures, and whether that amount is sufficient. This study aims to establish a contemporary benchmark for trainee autonomy in MIS procedures. We hypothesize that trainees achieve progressive autonomy, but fail to achieve meaningful autonomy in a substantial percentage of MIS procedures prior to graduation. SETTING/PARTICIPANTS Fifty General Surgery residency programs in the United States, from September 1, 2015 to March 19, 2020. All Categorical General Surgery Residents and Attending Surgeons within these programs were eligible. DESIGN Data were collected prospectively from attending surgeons and categorical General Surgery residents. Trainee autonomy was assessed using the 4-level Zwisch scale (Show and Tell, Active Help, Passive Help, and Supervision Only) on a smartphone application (SIMPL). MIS procedures included all laparoscopic, thoracoscopic, endoscopic, and endovascular/percutaneous procedures performed by residents during the study. Primary outcomes of interest were "meaningful autonomy" rates (i.e., scores in the top 2 categories of the Zwisch scale) by postgraduate year (PGY), and "progressive autonomy" (i.e., differences in autonomy between PGYs) in MIS procedures, as rated by attending surgeons. Primary outcomes were determined with descriptive statistics, one-way analysis of variance (ANOVA) and Z-tests. Secondary analyses compared (i) progressive autonomy between common MIS procedures, and (ii) progressive autonomy in MIS vs. non-MIS procedures. RESULTS A total of 106,054 evaluations were performed across 50 General Surgery residency programs, of which 38,985 (37%) were for MIS procedures. Attendings performed 44,842 (42%) of all evaluations, including 16,840 (43%) of MIS evaluations, while residents performed the rest. Overall, meaningful autonomy in MIS procedures increased from 14.1% (PGY1s) to 75.9% (PGY5s), with significant (p < 0.001) increases between each PGY level. Meaningful autonomy rates were higher in the MIS vs. non-MIS group (57.2% vs 48.0%, p < 0.001), and progressed more rapidly in MIS vs. non-MIS (p < 0.05). The 7 most common MIS procedures accounted for 83.5% (n = 14,058) of all MIS evaluations. Among PGY5s performing these procedures, meaningful autonomy rates (%) were: laparoscopic appendectomy (95%); laparoscopic cholecystectomy (93%); diagnostic laparoscopy (87%); upper/lower endoscopy (85%); laparoscopic hernia repair (72%); laparoscopic partial colectomy (58%); and laparoscopic sleeve gastrectomy (45%). CONCLUSIONS US General Surgery residents receive progressive autonomy in MIS procedures, and appear to progress more rapidly in MIS versus non-MIS procedures. However, residents fail to achieve meaningful autonomy in nearly 25% of MIS cases in their final year of residency, with higher rates of meaningful autonomy only achieved in a small subset of basic MIS procedures.
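The primary outcome here, the meaningful autonomy rate, is the proportion of Zwisch ratings in the top two categories, and differences between PGY levels can be screened with a two-proportion Z-test of the kind the authors list among their methods. The sketch below uses invented counts, not the SIMPL dataset.

```python
import numpy as np
from scipy.stats import norm

MEANINGFUL = {"Passive Help", "Supervision Only"}   # top 2 Zwisch categories

def meaningful_rate(ratings):
    """Proportion of ratings that fall in the meaningful-autonomy categories."""
    return sum(r in MEANINGFUL for r in ratings) / len(ratings)

def two_proportion_z(p1, n1, p2, n2):
    """Z-test for a difference between two meaningful-autonomy proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = np.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    return z, 2 * norm.sf(abs(z))   # two-sided p-value

# Hypothetical attending ratings for two PGY levels.
pgy4 = ["Passive Help"] * 550 + ["Active Help"] * 350 + ["Show and Tell"] * 100
pgy5 = ["Supervision Only"] * 400 + ["Passive Help"] * 360 + ["Active Help"] * 240

r4, r5 = meaningful_rate(pgy4), meaningful_rate(pgy5)
z, p = two_proportion_z(r4, len(pgy4), r5, len(pgy5))
print(f"PGY4 {r4:.1%}, PGY5 {r5:.1%}, z = {z:.2f}, p = {p:.2g}")
```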
16. A mobile app to capture EPA assessment data: utilizing the Consolidated Framework for Implementation Research to identify enablers and barriers to engagement. Perspectives on Medical Education 2020;9:210-219. [PMID: 32504446] [PMCID: PMC7459074] [DOI: 10.1007/s40037-020-00587-z]
Abstract
INTRODUCTION Mobile apps that utilize the framework of entrustable professional activities (EPAs) to capture and deliver feedback are being implemented. If EPA apps are to be successfully incorporated into programmatic assessment, a better understanding of how they are experienced by the end-users will be necessary. The authors conducted a qualitative study using the Consolidated Framework for Implementation Research (CFIR) to identify enablers and barriers to engagement with an EPA app. METHODS Structured interviews of faculty and residents were conducted with an interview guide based on the CFIR. Transcripts were independently coded by two study authors using directed content analysis. Differences were resolved via consensus. The study team then organized codes into themes relevant to the domains of the CFIR. RESULTS Eight faculty and 10 residents chose to participate in the study. Both faculty and residents found the app easy to use and effective in facilitating feedback immediately after the observed patient encounter. Faculty appreciated how the EPA app forced brief, distilled feedback. Both faculty and residents expressed positive attitudes and perceived the app as aligned with the department's philosophy. Barriers to engagement included faculty not understanding the EPA framework and scale, competing clinical demands, residents preferring more detailed feedback and both faculty and residents noting that the app's feedback should be complemented by a tool that generates more systematic, nuanced, and comprehensive feedback. Residents rarely if ever returned to the feedback after initial receipt. DISCUSSION This study identified key enablers and barriers to engagement with the EPA app. The findings provide guidance for future research and implementation efforts focused on the use of mobile platforms to capture direct observation feedback.
17. Building a core competency assessment program for all stakeholders: the design and building of sailing ships can inform core competency frameworks. Advances in Health Sciences Education: Theory and Practice 2020;25:189-193. [PMID: 32030572] [DOI: 10.1007/s10459-020-09962-1]
Abstract
When educators are developing an effective and workable assessment program in graduate medical education by employing action research and stakeholder mapping to identify core competency domains and directives, the multi-stage process can be guided and informed by utilizing the story of designing, building and sea-testing sailing ships as a metaphor. However, the current challenge of physician burnout demands additional attention when formulating medical training frameworks, assessment guidelines and mentoring programs in 2020. The possibility of job-crafting is raised for consideration by designers of core competency frameworks in the health professions.
18. Considerations that will determine if competency-based assessment is a sustainable innovation. Advances in Health Sciences Education: Theory and Practice 2019;24:413-421. [PMID: 29777463] [DOI: 10.1007/s10459-018-9833-2]
Abstract
Educational assessment for the health professions has seen a major attempt to introduce competency-based frameworks. As high-level policy developments, the changes were intended to improve outcomes by supporting learning and skills development. However, we argue that previous experiences with major innovations in assessment offer an important road map for developing and refining assessment innovations, including careful piloting and analyses of their measurement qualities and impacts. Based on the literature, numerous assessment workshops, personal interactions with potential users, and our 40 years of experience in implementing assessment change, we lament the lack of a coordinated approach to clarify and improve the measurement qualities and functionality of competency-based assessment (CBA). To address this worrisome situation, we offer two roadmaps to guide CBA's further development. First, reframe and address CBA as a measurement development opportunity. Second, using a roadmap adapted from the management literature on sustainable innovation, the medical assessment community needs to initiate an integrated plan to implement CBA as a sustainable innovation within existing educational programs and self-regulatory enterprises. Further examples of down-stream opportunities to refocus CBA at the implementation level within faculties and within the regulatory framework of the profession are offered. In closing, we challenge the broader assessment community in medicine to step forward and own the challenge and opportunities of reframing CBA as an innovation to improve the quality of the clinical educational experience. The goal is to optimize assessment in health education and ultimately improve the public's health.
19. [Competence-based assessment in the national licensing examination in Germany]. Bundesgesundheitsblatt Gesundheitsforschung Gesundheitsschutz 2019;61:171-177. [PMID: 29230515] [DOI: 10.1007/s00103-017-2668-9]
Abstract
In Germany, future physicians have to pass a national licensing examination at the end of their medical studies. Passing this examination is a requirement for the license to practice medicine. The Masterplan Medizinstudium 2020, with its 41 measures, aims to shift the paradigm in medical education and medical licensing examinations. The main goals of the Masterplan include the development towards competency-based and practical medical education and examination, as well as the strengthening of general medicine. The healthcare policy takes into account social developments that are very important for medical education and licensing examinations. Seven measures of the Masterplan relate to the realignment of the licensing examinations. Their function as a driver of learning should better support students in achieving the study goal defined in the German Medical Licensure Act: to educate physicians, scientifically and practically, who are qualified for autonomous and independent professional practice, postgraduate education and continuous training.
20. Programmatic Assessment of Professionalism in Psychiatry Education: A Literature Review and Implementation Guide. Advances in Experimental Medicine and Biology 2017. [PMID: 28971430] [DOI: 10.1007/978-3-319-57348-9_19]
Abstract
Programmatic assessment is being adopted as a preferred method of assessment in postgraduate medical education in Australia. Programmatic assessment of professionalism is likely to receive increasing attention. This paper reviews the literature regarding the assessment of professionalism in psychiatry. A search using the terms 'professionalism AND psychiatry' was conducted in the ERIC database. Only original articles relevant to professionalism education and assessment in psychiatry were selected, rather than theoretical or review papers that applied research from other fields of medicine to psychiatry. Articles regarding the need for professionalism education in psychiatry were included as they provided a rationale for curriculum development in this field as a precursor to assessment. Key findings from the literature were summarised in light of the author's own experience as an educator and assessor of both medical students and trainees in psychiatry, and incorporated into a guide to implementing programmatic assessment of professionalism in psychiatry. Within psychiatry, the specific evidence base for use of particular tools in assessing professionalism is limited. However, used in conjunction with psychiatrists' views about what is important in professionalism education, as well as knowledge from other medical disciplines regarding professionalism assessment tools, this evidence can inform implementation of programmatic assessment of professionalism in undergraduate, postgraduate and continuing professional development settings. Given the emergent nature of such assessment initiatives, they should be subjected to rigorous evaluation.
21. A Delphi study to validate competency-based criteria to assess undergraduate midwifery students' competencies in the maternity ward. Midwifery 2017;53:1-8. [PMID: 28708987] [DOI: 10.1016/j.midw.2017.07.005]
Abstract
BACKGROUND Workplace learning plays a crucial role in midwifery education. Twelve midwifery schools in Flanders (Belgium) aimed to implement a standardised and evidence-based method to learn and assess competencies in practice. This study focuses on the validation of competency-based criteria to guide and assess undergraduate midwifery students' postnatal care competencies in the maternity ward. METHOD An online Delphi study was carried out. During three consecutive sessions, experts from workplaces and schools were invited to score the assessment criteria as to their relevance and feasibility, and to comment on the content and their formulation. A descriptive quantitative analysis and a qualitative thematic content analysis of the comments were carried out. A Mann-Whitney U test was used to investigate differences between expert groups. FINDINGS Eleven competencies and fifty-six assessment criteria were found appropriate to assess midwifery students' competencies in the maternity ward. Overall median scores were high and consensus was obtained for all criteria, except for one during the first round. Although all initial assessment criteria (N = 89) were scored as relevant, some of them appeared not feasible in practice. Little difference was found between the expert groups. Comments mainly included remarks about concreteness and measurability. CONCLUSION This study resulted in validated criteria to assess postnatal care competencies in the maternity ward.
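The between-group comparison reported here, a Mann-Whitney U test on ordinal ratings from two expert groups, looks like the following in code; the ratings and group labels are hypothetical.

```python
from scipy.stats import mannwhitneyu

# Hypothetical 1-5 relevance ratings of one assessment criterion from the two
# Delphi expert groups (workplace experts vs. school experts).
workplace = [5, 4, 5, 4, 4, 5, 3, 5, 4, 4]
school    = [4, 4, 5, 5, 4, 3, 4, 5, 5, 4]

# Two-sided Mann-Whitney U test for a rating difference between the groups.
stat, p = mannwhitneyu(workplace, school, alternative="two-sided")
print(f"U = {stat}, p = {p:.3f}")   # a large p suggests little group difference
```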
22. Strategies for increasing the feasibility of performance assessments during competency-based education: subjective and objective evaluations correlate in the operating room. Am J Surg 2016;214:365-372. [PMID: 27634423] [DOI: 10.1016/j.amjsurg.2016.07.017]
Abstract
BACKGROUND Competency-based education necessitates assessments that determine whether trainees have acquired specific competencies. The evidence on the ability of internal raters (staff surgeons) to provide accurate assessments is mixed; however, this has not yet been directly explored in the operating room. This study's objective is to compare the ratings given by internal raters vs an expert external rater (independent of the training process) in the operating room. METHODS Raters assessed general surgery residents during a laparoscopic cholecystectomy for their technical and nontechnical performance. RESULTS Fifteen cases were observed. There was a moderately positive correlation (rs = .618, P = .014) for technical performance and a strong positive correlation (rs = .731, P = .002) for nontechnical performance. The internal raters were less stringent for technical (mean rank 3.33 vs 8.64, P = .007) and nontechnical (mean rank 3.83 vs 8.50, P = .01) performance. CONCLUSIONS This study provides evidence to help operationalize competency-based assessments.
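Both reported analyses, a Spearman rank correlation between internal and external raters and a rank-based check of rater stringency, are short scipy calls. The scores below are invented; only the number of cases (15) matches the study.

```python
from scipy.stats import spearmanr, mannwhitneyu

# Hypothetical technical-performance scores for 15 cases, one score per case
# from an internal rater (staff surgeon) and an external expert rater.
internal = [3.2, 4.1, 3.8, 2.9, 4.5, 3.6, 4.0, 3.3, 4.2, 3.7, 2.8, 4.4, 3.9, 3.1, 4.3]
external = [2.9, 3.8, 3.4, 2.5, 4.3, 3.1, 3.9, 2.8, 4.0, 3.2, 2.4, 4.1, 3.5, 2.7, 4.2]

# Spearman rank correlation: do the two raters order the residents similarly?
rs, p = spearmanr(internal, external)
print(f"rs = {rs:.3f}, p = {p:.3f}")

# Rater stringency: a rank-based test of whether one rater scores higher overall.
stat, p_str = mannwhitneyu(internal, external, alternative="two-sided")
print(f"U = {stat}, p = {p_str:.3f}")
```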
23. Measuring nursing competencies in the operating theatre: instrument development and psychometric analysis using Item Response Theory. Nurse Education Today 2013;33:1088-1093. [PMID: 22608826] [DOI: 10.1016/j.nedt.2012.04.008]
Abstract
BACKGROUND The process of identifying the underlying competencies that contribute to effective nursing performance has been debated, with a lack of consensus surrounding an approved measurement instrument for assessing clinical performance. Although a number of methodologies are noted in the development of competency-based assessment measures, these studies are not without criticism. RESEARCH AIM The primary aim of the study was to develop and validate a Performance Based Scoring Rubric, which included both analytical and holistic scales, and to examine the validity and reliability of the rubric, which was designed to measure clinical competencies in the operating theatre. RESEARCH METHOD The fieldwork observations of 32 nurse educators and preceptors assessing the performance of 95 instrument nurses in the operating theatre were used in the calibration of the rubric. The Rasch model, a particular model among Item Response Models, was used to calibrate each item in the rubric in an attempt to improve the measurement properties of the scale; this was done by establishing the 'fit' of the data to the conditions demanded by the Rasch model. RESULTS Acceptable reliability estimates, specifically a high Cronbach's alpha reliability coefficient (0.940), as well as empirical support for construct and criterion validity of the rubric, were achieved. Calibration of the Performance Based Scoring Rubric using the Rasch model revealed that the fit statistics for most items were acceptable. CONCLUSION The use of the Rasch model offers a number of features for developing and refining healthcare competency-based assessments, improving confidence in measuring clinical performance. The Rasch model was shown to be useful in developing and validating a competency-based assessment for measuring the competence of the instrument nurse in the operating theatre, with implications for use in other areas of nursing practice.
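Rasch calibration can be sketched with joint maximum-likelihood estimation plus an outfit mean-square fit statistic. This is a simplification: the study calibrated a polytomous performance rubric, which needs a partial-credit or rating-scale extension, and the simulated persons, items, and difficulties below are assumptions.

```python
import numpy as np

def rasch_jml(X, n_iter=50):
    """Joint maximum-likelihood calibration of a dichotomous Rasch model.
    X is an (n_persons, n_items) 0/1 matrix; returns person abilities (theta)
    and item difficulties (beta), with difficulties centred at zero."""
    theta, beta = np.zeros(X.shape[0]), np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1 / (1 + np.exp(beta[None, :] - theta[:, None]))
        theta += (X - p).sum(axis=1) / (p * (1 - p)).sum(axis=1)  # Newton step
        p = 1 / (1 + np.exp(beta[None, :] - theta[:, None]))
        beta -= (X - p).sum(axis=0) / (p * (1 - p)).sum(axis=0)   # Newton step
        beta -= beta.mean()   # anchor mean difficulty at zero to identify the model
    return theta, beta

def outfit(X, theta, beta):
    """Outfit mean-square per item (mean standardized squared residual).
    Values near 1 indicate acceptable fit to the Rasch model."""
    p = 1 / (1 + np.exp(beta[None, :] - theta[:, None]))
    return ((X - p) ** 2 / (p * (1 - p))).mean(axis=0)

# Simulate 95 nurses on 10 dichotomously scored criteria (the person count
# echoes the study; the generating parameters are invented).
rng = np.random.default_rng(1)
true_theta = rng.normal(size=95)
true_beta = np.linspace(-1.5, 1.5, 10)
p_true = 1 / (1 + np.exp(true_beta[None, :] - true_theta[:, None]))
X = (rng.random(p_true.shape) < p_true).astype(float)

# Persons with perfect or zero raw scores have no finite ability estimate.
raw = X.sum(axis=1)
X = X[(raw > 0) & (raw < X.shape[1])]

theta, beta = rasch_jml(X)
print("item difficulties:", np.round(beta, 2))
print("item outfit MnSq: ", np.round(outfit(X, theta, beta), 2))
```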