1.
Leep Hunderfund AN, Santilli AR, Rubin DI, Laughlin RS, Sorenson EJ, Park YS. Assessing electrodiagnostic skills among residents and fellows: Relationships between workplace-based assessments using the Electromyography Direct Observation Tool and other measures of trainee performance. Muscle Nerve 2022;66:671-678. PMID: 35470901. DOI: 10.1002/mus.27566.
Abstract
INTRODUCTION/AIMS: Graduate medical education programs must ensure that residents and fellows acquire the skills needed for independent practice. Workplace-based observational assessments are informative but can be time- and resource-intensive. In this study we sought to gather "relations-to-other-variables" validity evidence for scores generated by the Electromyography Direct Observation Tool (EMG-DOT) to inform its use as a measure of electrodiagnostic skill acquisition.
METHODS: Scores on multiple assessments were compiled for trainees during Clinical Neurophysiology and Electromyography rotations at a large US academic medical center. Relationships between workplace-based EMG-DOT scores (n = 298) and scores on a prerequisite simulated patient exercise, patient experience surveys (n = 199), end-of-rotation evaluations (n = 301), and an American Association of Neuromuscular & Electrodiagnostic Medicine (AANEM) self-assessment examination were assessed using Pearson correlations.
RESULTS: Among 23 trainees, EMG-DOT scores assigned by physician raters correlated positively with end-of-rotation evaluations (r = 0.63, P = .001), but EMG-DOT scores assigned by technician raters did not (r = 0.10, P = .663). When physician and technician ratings were combined, higher EMG-DOT scores correlated with better patient experience survey scores (r = 0.42, P = .047), but not with simulated patient or AANEM self-assessment examination scores.
DISCUSSION: End-of-rotation evaluations can provide valid assessments of trainee performance when completed by individuals with ample opportunities to directly observe trainees. Inclusion of observational assessments by technicians and patients provides a more comprehensive view of trainee performance. Workplace- and classroom-based assessments provide complementary information about trainee performance, reflecting underlying differences in the types of skills measured.
Affiliation(s)
- Ashley R Santilli: Department of Neurology at Mayo Clinic College of Medicine, Mayo Clinic, Rochester, Minnesota
- Devon I Rubin: Department of Neurology at Mayo Clinic College of Medicine, Jacksonville, Florida
- Ruple S Laughlin: Department of Neurology at Mayo Clinic College of Medicine, Mayo Clinic, Rochester, Minnesota
- Eric J Sorenson: Department of Neurology at Mayo Clinic College of Medicine, Mayo Clinic, Rochester, Minnesota
- Yoon S Park: Department of Medical Education, University of Illinois College of Medicine, Chicago, Illinois; Health Professions Education Research at Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts
2.
Becker B, London ZN. Assessment of clinical skills in electrodiagnostic medicine. Muscle Nerve 2022;66:647-649. PMID: 36213950. DOI: 10.1002/mus.27733.
Affiliation(s)
- Benjamin Becker: Department of Neurology, Division of Neuromuscular Medicine, University of Michigan School of Medicine, Ann Arbor, Michigan
- Zachary N London: Department of Neurology, Division of Neuromuscular Medicine, University of Michigan School of Medicine, Ann Arbor, Michigan
3.
Young JQ, Sugarman R, Schwartz J, O'Sullivan PS. Faculty and Resident Engagement With a Workplace-Based Assessment Tool: Use of Implementation Science to Explore Enablers and Barriers. Acad Med 2020;95:1937-1944. PMID: 32568853. DOI: 10.1097/acm.0000000000003543.
Abstract
PURPOSE: Implementation of workplace-based assessment programs has encountered significant challenges. Faculty and residents alike often have a negative view of these programs as "tick-box" or "jump through the hoops" exercises. A number of recommendations have been made to address these challenges. To understand the experience with a workplace-based assessment tool that follows many of these recommendations, the authors conducted a qualitative study using the Consolidated Framework for Implementation Research (CFIR) to identify enablers and barriers to engagement with the tool.
METHOD: The Psychopharmacotherapy-Structured Clinical Observation (P-SCO) is a direct observation tool designed to assess resident performance during a psychiatric medication management visit. From August 2017 to February 2018, the P-SCO was implemented in the outpatient continuity clinics for second- and third-year residents at Zucker Hillside Hospital/Northwell Health. In February and March 2019, the authors conducted semistructured interviews of participating faculty and residents. Interview guides based on the CFIR were used to capture the enablers and barriers to engagement. Interview transcripts were independently coded. Codes were then organized into themes relevant to the domains of the CFIR.
RESULTS: Ten faculty and 10 residents were interviewed. Overall, participants had a positive experience with the P-SCO. Enabling factors for faculty and residents included the ongoing training, design features of the P-SCO, predisposing beliefs, dedicated faculty time, and the perception that the P-SCO improved verbal feedback quality. Barriers for faculty included checklist length and discomfort with feedback that threatens identity; barriers for residents included faculty variability in timeliness and quality of feedback and minimal review of the feedback after initial receipt.
CONCLUSIONS: This study demonstrates that the negative experience of faculty and residents with workplace-based assessment tools shown in prior studies can be overcome, at least in part, when specific implementation strategies are pursued. The findings provide guidance for future research and implementation efforts.
Affiliation(s)
- John Q Young: professor of psychiatry and vice chair for education, Department of Psychiatry, Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, New York, and Zucker Hillside Hospital at Northwell Health, Glen Oaks, New York
- Rebekah Sugarman: research assistant, Department of Psychiatry, Zucker Hillside Hospital at Northwell Health, Glen Oaks, New York
- Jessica Schwartz: resident, Department of Psychiatry, Zucker Hillside Hospital at Northwell Health, Glen Oaks, New York
- Patricia S O'Sullivan: professor, Department of Medicine, and director of research and development in medical education, University of California, San Francisco, School of Medicine, San Francisco, California
4.
Young JQ, Sugarman R, Schwartz J, McClure M, O'Sullivan PS. A mobile app to capture EPA assessment data: Utilizing the consolidated framework for implementation research to identify enablers and barriers to engagement. Perspect Med Educ 2020;9:210-219. PMID: 32504446. PMCID: PMC7459074. DOI: 10.1007/s40037-020-00587-z.
Abstract
INTRODUCTION: Mobile apps that utilize the framework of entrustable professional activities (EPAs) to capture and deliver feedback are being implemented. If EPA apps are to be successfully incorporated into programmatic assessment, a better understanding of how they are experienced by the end-users will be necessary. The authors conducted a qualitative study using the Consolidated Framework for Implementation Research (CFIR) to identify enablers and barriers to engagement with an EPA app.
METHODS: Structured interviews of faculty and residents were conducted with an interview guide based on the CFIR. Transcripts were independently coded by two study authors using directed content analysis. Differences were resolved via consensus. The study team then organized codes into themes relevant to the domains of the CFIR.
RESULTS: Eight faculty and 10 residents chose to participate in the study. Both faculty and residents found the app easy to use and effective in facilitating feedback immediately after the observed patient encounter. Faculty appreciated how the EPA app forced brief, distilled feedback. Both faculty and residents expressed positive attitudes and perceived the app as aligned with the department's philosophy. Barriers to engagement included faculty not understanding the EPA framework and scale, competing clinical demands, residents preferring more detailed feedback, and both faculty and residents noting that the app's feedback should be complemented by a tool that generates more systematic, nuanced, and comprehensive feedback. Residents rarely, if ever, returned to the feedback after initial receipt.
DISCUSSION: This study identified key enablers and barriers to engagement with the EPA app. The findings provide guidance for future research and implementation efforts focused on the use of mobile platforms to capture direct observation feedback.
Affiliation(s)
- John Q Young: Donald and Barbara Zucker School of Medicine at Hofstra/Northwell and the Zucker Hillside Hospital at Northwell Health, Hempstead, NY, USA
- Rebekah Sugarman: Donald and Barbara Zucker School of Medicine at Hofstra/Northwell and the Zucker Hillside Hospital at Northwell Health, Hempstead, NY, USA
- Jessica Schwartz: Donald and Barbara Zucker School of Medicine at Hofstra/Northwell and the Zucker Hillside Hospital at Northwell Health, Hempstead, NY, USA
- Matthew McClure: Donald and Barbara Zucker School of Medicine at Hofstra/Northwell and the Zucker Hillside Hospital at Northwell Health, Hempstead, NY, USA
- Patricia S O'Sullivan: Department of Medicine, University of California at San Francisco School of Medicine, San Francisco, USA
5.
Young JQ. Advancing Our Understanding of Narrative Comments Generated by Direct Observation Tools: Lessons From the Psychopharmacotherapy-Structured Clinical Observation. J Grad Med Educ 2019;11:570-579. PMID: 31636828. PMCID: PMC6795331. DOI: 10.4300/jgme-d-19-00207.1.
Abstract
BACKGROUND: While prior research has focused on the validity of quantitative ratings generated by direct observation tools, much less is known about the written comments.
OBJECTIVE: This study examines the quality of written comments and their relationship with checklist scores generated by a direct observation tool, the Psychopharmacotherapy-Structured Clinical Observation (P-SCO).
METHODS: From 2008 to 2012, faculty in a postgraduate year 3 psychiatry outpatient clinic completed 601 P-SCOs. Twenty-five percent were randomly selected from each year; the sample included 8 faculty and 57 residents. To assess quality, comments were coded for valence (reinforcing or corrective), behavioral specificity, and content. To assess the relationship between comments and scores, the authors calculated the correlation between comment and checklist score valence and examined the degree to which comments and checklist scores addressed the same content.
RESULTS: Ninety-one percent of the comments were behaviorally specific. Sixty percent were reinforcing, and 40% were corrective. Eight themes were identified, including 2 constructs not adequately represented by the checklist. Comment and checklist score valence was moderately correlated (Spearman's rho = 0.57, P < .001). Sixty-seven percent of high and low checklist scores were associated with a comment of the same valence and content. Only 50% of overall comments were associated with a checklist score of the same valence and content.
CONCLUSIONS: A direct observation tool such as the P-SCO can generate high-quality written comments. Narrative comments both explain checklist scores and convey unique content. Thematic coding of comments can improve the content validity of a checklist.
6.
Implementation of a Multifaceted Interactive Electrodiagnostic Medicine Workshop in a Physical Medicine and Rehabilitation Residency Program. Am J Phys Med Rehabil 2019;97:134-140. PMID: 28953032. DOI: 10.1097/phm.0000000000000828.
Abstract
Electrodiagnostic medicine is a required component of Physical Medicine and Rehabilitation residency education, but limited resources exist to guide curriculum development. Our objective was to create a focused workshop to enhance our residency program's electrodiagnostic curriculum. We created two separate 1.5-day workshops, one basic and one advanced, for all residents. Each workshop included didactic sessions, case discussion, question and answer sessions, demonstrations, and hands-on participation with direct supervision and feedback. Presurveys and postsurveys were administered to evaluate the value of the workshops. We also assessed trends in electrodiagnostic self-assessment examination scores. Residents reported clinical electrodiagnostic rotations to be more valuable to their education than previous didactic sessions and independent learning. Self-reported knowledge of electrodiagnostic concepts, resident comfort level in planning, performing, and interpreting studies, and perceived value in independent learning of electrodiagnostic medicine improved after implementation of the workshops. There was a 7% improvement in the American Association of Neuromuscular and Electrodiagnostic Medicine electrodiagnostic self-assessment examination score compared with the previous year and a 15% improvement in the Physical Medicine and Rehabilitation self-assessment examination electrodiagnostic subscore compared with the previous 5 years. All participants recommended a similar educational experience for other residents. This successful workshop may serve as a resource for other training programs.
7.
Young JQ, Rasul R, O'Sullivan PS. Evidence for the Validity of the Psychopharmacotherapy-Structured Clinical Observation Tool: Results of a Factor and Time Series Analysis. Acad Psychiatry 2018;42:759-764. PMID: 29951950. DOI: 10.1007/s40596-018-0928-0.
Abstract
OBJECTIVE: The Psychopharmacotherapy-Structured Clinical Observation (P-SCO) is a direct observation tool designed to assess resident performance of a medication visit. This study examines two dimensions of validity for the P-SCO: internal structure and how scores correlate with another variable associated with competence (experience).
METHODS: The faculty completed 601 P-SCOs over 4 years. Multilevel exploratory factor analysis was performed with minimum thresholds for eigenvalue (≥ 1.0) and proportion of variance explained (≥ 5.0%). Internal reliability was assessed with Cronbach alpha. To examine how scores changed with experience, mean ratings (1-4 scale) were calculated for each factor by quarter of the academic year. Separate linear mixed models were also performed.
RESULTS: The analysis yielded three factors that explained 50% of the variance and demonstrated high internal reliability: affective tasks (alpha = 0.90), cognitive tasks (alpha = 0.84), and hard tasks (alpha = 0.74). Items within "hard tasks" were assessment of substance use, violence risk, and adherence, and inquiry about interactions with other providers. Monitoring adverse effects did not load on the hard task factor but also had overall low mean ratings. Compared to the first quarter, fourth quarter scores for affective tasks (b = 0.54, p < 0.01) and hard tasks (b = 0.46, p = 0.02) were significantly improved, while cognitive tasks had a non-significant increase. For the hard tasks, the proportion of residents with a low mean rating improved but was still over 30% during the fourth quarter.
CONCLUSIONS: The results provide evidence for the validity of the P-SCO with respect to its internal structure and how scores correlate with experience. Curricular implications are explored, especially for the tasks that were hard to learn.
Affiliation(s)
- John Q Young: Zucker School of Medicine at Hofstra/Northwell, Hempstead, NY, USA
8.
Leep Hunderfund AN, Reed DA, Starr SR, Havyer RD, Lang TR, Norby SM. Ways to Write a Milestone: Approaches to Operationalizing the Development of Competence in Graduate Medical Education. Acad Med 2017;92:1328-1334. PMID: 28353504. DOI: 10.1097/acm.0000000000001660.
Abstract
PURPOSE: To identify approaches to operationalizing the development of competence in Accreditation Council for Graduate Medical Education (ACGME) milestones.
METHOD: The authors reviewed all 25 "Milestone Project" documents available on the ACGME Web site on September 11, 2013, using an iterative process to identify approaches to operationalizing the development of competence in the milestones associated with each of 601 subcompetencies.
RESULTS: Fifteen approaches were identified. Ten focused on attributes and activities of the learner, such as their ability to perform different, increasingly difficult tasks (304/601; 51%), perform a task better and faster (171/601; 28%), or perform a task more consistently (123/601; 20%). Two approaches focused on context, inferring competence from performing a task in increasingly difficult situations (236/601; 39%) or an expanding scope of engagement (169/601; 28%). Two used socially defined indicators of competence such as progression from "learning" to "teaching," "leading," or "role modeling" (271/601; 45%). One approach focused on the supervisor's role, inferring competence from a decreasing need for supervision or assistance (151/601; 25%). Multiple approaches were often combined within a single set of milestones (mean 3.9, SD 1.6).
CONCLUSIONS: Initial ACGME milestones operationalize the development of competence in many ways. These findings offer insights into how physicians understand and assess the developmental progression of competence and an opportunity to consider how different approaches may affect the validity of milestone-based assessments. The results of this analysis can inform the work of educators developing or revising milestones, interpreting milestone data, or creating assessment tools to inform milestone-based performance measures.
Affiliation(s)
- Andrea N Leep Hunderfund: A.N. Leep Hunderfund is assistant professor of neurology, Mayo Clinic, Rochester, Minnesota. D.A. Reed is associate professor of medical education and medicine and senior associate dean of academic affairs, Mayo Medical School, Mayo Clinic, Rochester, Minnesota. S.R. Starr is assistant professor of pediatric and adolescent medicine and director of science of health care delivery education, Mayo Medical School, Mayo Clinic, Rochester, Minnesota. R.D. Havyer is assistant professor of medicine, Mayo Clinic, Rochester, Minnesota. T.R. Lang is assistant professor of pediatric and adolescent medicine, Mayo Clinic, Rochester, Minnesota (now at Gundersen Health System, LaCrosse, Wisconsin). S.M. Norby is associate professor of medicine, Mayo Clinic, Rochester, Minnesota