1. Mullikin DR, Flanagan RP, Merkebu J, Durning SJ, Soh M. Physiologic measurements of cognitive load in clinical reasoning. Diagnosis (Berl) 2024; 11:125-131. [PMID: 38282337] [DOI: 10.1515/dx-2023-0143]
Abstract
OBJECTIVES Cognitive load is postulated to be a significant factor in clinical reasoning performance, and physiologic measures such as heart rate variability (HRV) may offer a way to monitor changes in cognitive load. The pathophysiology linking HRV to cognitive load is unclear, but it may be related to blood pressure changes that occur in response to mental stress. METHODS Fourteen residents and ten attendings from Internal Medicine wore Holter monitors and watched a video depicting a medical encounter before completing a post-encounter form used to evaluate their clinical reasoning, along with standard psychometric measures of cognitive load. Blood pressure was obtained before and after the encounter. Correlation analysis was used to investigate the relationships between HRV, blood pressure, self-reported cognitive load measures, clinical reasoning performance scores, and experience level. RESULTS A strong positive correlation was found between increasing HRV and increasing mean arterial pressure (MAP) (p=0.01, Cohen's d=1.41). There was a strong positive correlation between increasing MAP and increasing cognitive load (Pearson correlation 0.763; 95 % CI [-0.364, 0.983]). Clinical reasoning performance was negatively correlated with increasing MAP (Pearson correlation -0.446; 95 % CI [-0.720, -0.052]). Subjects with increased HRV, MAP, and cognitive load were more likely to be residents (Pearson correlation -0.845; 95 % CI [-0.990, 0.147]). CONCLUSIONS Evaluating HRV and MAP can help us understand cognitive load and its implications for trainee and physician clinical reasoning performance, with the intent of using this information to improve patient care.
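The correlation-with-confidence-interval figures reported in this abstract can be reproduced in form (not in data) with a small stdlib-Python sketch; the sample values below are invented for illustration, and the interval uses the standard Fisher z-transformation:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def r_confidence_interval(r, n, z_crit=1.96):
    """Approximate 95% CI for r via the Fisher z-transformation."""
    z = math.atanh(r)                    # Fisher transform of r
    se = 1 / math.sqrt(n - 3)            # standard error on the z scale
    lo_z, hi_z = z - z_crit * se, z + z_crit * se
    return math.tanh(lo_z), math.tanh(hi_z)  # back-transform to the r scale

# Invented MAP values and cognitive-load ratings for 10 subjects
map_vals = [85, 88, 90, 92, 95, 97, 100, 103, 105, 110]
load     = [3, 4, 4, 5, 5, 6, 6, 7, 8, 8]
r = pearson_r(map_vals, load)
lo, hi = r_confidence_interval(r, len(load))
print(f"r = {r:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

Very wide intervals such as the [-0.364, 0.983] reported above are typical of small samples: the Fisher-z standard error 1/sqrt(n-3) is large when n is near 10.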
Affiliation(s)
- Dolores R Mullikin
- Department of Pediatrics, Uniformed Services University of Health Sciences, Bethesda, USA
- Ryan P Flanagan
- Department of Pediatric Cardiology, Landstuhl Regional Medical Center, Landstuhl, Germany
- Jerusalem Merkebu
- Department of Medicine, Center for Health Professions Education, Uniformed Services University of Health Sciences, USA
- Steven J Durning
- Department of Medicine, Center for Health Professions Education, Uniformed Services University of Health Sciences, USA
- Michael Soh
- Department of Medicine, Center for Health Professions Education, Uniformed Services University of Health Sciences, USA
2. Bond WF, Zhou J, Bhat S, Park YS, Ebert-Allen RA, Ruger RL, Yudkowsky R. Automated Patient Note Grading: Examining Scoring Reliability and Feasibility. Acad Med 2023; 98:S90-S97. [PMID: 37983401] [DOI: 10.1097/acm.0000000000005357]
Abstract
PURPOSE Scoring postencounter patient notes (PNs) yields significant insights into student performance, but the resource intensity of scoring limits its use. Recent advances in natural language processing (NLP) and machine learning allow application of automated short answer grading (ASAG) to this task. This retrospective study evaluated the psychometric characteristics and reliability of an ASAG system for PNs and factors contributing to implementation, including feasibility and the case-specific phrase annotation required to tune the system for a new case. METHOD PNs from standardized patient (SP) cases within a graduation competency exam were used to train the ASAG system, applying a feed-forward neural network algorithm for scoring. Using faculty phrase-level annotation, 10 PNs per case were required to tune the ASAG system. After tuning, ASAG item-level ratings for 20 notes were compared across ASAG-faculty (4 cases, 80 pairings) and ASAG-nonfaculty (2 cases, 40 pairings) pairings. Psychometric characteristics were examined using item analysis and Cronbach's alpha. Inter-rater reliability (IRR) was examined using kappa. RESULTS ASAG scores demonstrated sufficient variability in differentiating learner PN performance and high IRR between machine and human ratings. Across all items, the mean ASAG-faculty scoring kappa was .83 (SE ± .02); the ASAG-nonfaculty pairings kappa was also .83 (SE ± .02). The ASAG scoring demonstrated high item discrimination. Internal consistency reliability at the case level ranged from a Cronbach's alpha of .65 to .77. The faculty time cost to train and supervise nonfaculty raters for 4 cases was approximately $1,856; the faculty cost to tune the ASAG system was approximately $928. CONCLUSIONS NLP-based automated scoring of PNs demonstrated a high degree of reliability and psychometric confidence for use as learner feedback. The small number of phrase-level annotations required to tune the system to a new case enhances feasibility. ASAG-enabled PN scoring has broad implications for improving feedback in case-based learning contexts in medical education.
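The kappa reported above for machine-human agreement is a chance-corrected agreement statistic; Cohen's kappa for two raters illustrates the idea. A minimal stdlib-Python sketch with invented present/absent item ratings (not the study's data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of exact agreements
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each rater's marginal counts
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in categories) / n ** 2
    return (observed - expected) / (1 - expected)

# Invented item-level ratings (1 = phrase present, 0 = absent)
machine = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0]
faculty = [1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1]
print(f"kappa = {cohens_kappa(machine, faculty):.2f}")
```

Here 18 of 20 items agree (90%), but chance agreement is 52%, so kappa lands near .79 rather than .90, which is why kappa is preferred over raw percent agreement for IRR reporting.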
Affiliation(s)
- William F Bond
- W.F. Bond is professor, Department of Emergency Medicine, University of Illinois College of Medicine, Peoria, Illinois, and is affiliated with Jump Simulation, an OSF HealthCare and University of Illinois College of Medicine at Peoria Collaboration; ORCID: http://orcid.org/0000-0001-6714-7152
- Jianing Zhou
- J. Zhou is a PhD student, Department of Computer Science, University of Illinois, Urbana-Champaign, Champaign, Illinois
- Suma Bhat
- S. Bhat is assistant professor, Department of Electrical and Computer Engineering, University of Illinois, Urbana-Champaign, Champaign, Illinois; ORCID: http://orcid.org/0000-0003-0324-5890
- Yoon Soo Park
- Y.S. Park is professor, Department of Medical Education, University of Illinois College of Medicine, Chicago, Illinois
- Rebecca A Ebert-Allen
- R.A. Ebert-Allen is a research project manager, Jump Simulation, an OSF HealthCare and University of Illinois College of Medicine at Peoria Collaboration, Peoria, Illinois; ORCID: http://orcid.org/0000-0001-6607-0229
- Rebecca L Ruger
- R.L. Ruger was a research assistant, Jump Simulation, and is now a graduate student, Department of Psychology, Penn State University, University Park, Pennsylvania; ORCID: http://orcid.org/0009-0005-8739-3226
- Rachel Yudkowsky
- R. Yudkowsky is professor, Department of Medical Education, University of Illinois College of Medicine, Chicago, Illinois; ORCID: https://orcid.org/0000-0002-2145-7582
3. Régent A, Thampy H, Singh M. Assessing clinical reasoning in the OSCE: pilot-testing a novel oral debrief exercise. BMC Med Educ 2023; 23:718. [PMID: 37789308] [PMCID: PMC10548592] [DOI: 10.1186/s12909-023-04668-5]
Abstract
INTRODUCTION Clinical reasoning (CR) is a complex skill enabling the transition from clinical novice to expert decision maker. The Objective Structured Clinical Examination (OSCE) is widely used to evaluate clinical competency, though there is limited literature exploring how this assessment is best used to assess CR skills. This proof-of-concept study explored the creation and pilot testing of a post-station CR assessment, named Oral Debrief (OD), in the context of undergraduate medical education. METHODS A modified-Delphi technique was used to create a standardised domain-based OD marking rubric encapsulating the key skills of CR, drawing upon existing literature and our existing placement-based CR tool. Sixteen OSCE examiners were recruited to score three simulated OD recordings scripted to portray differing levels of competency. Adopting a think-aloud approach, examiners vocalised their thought processes while utilising the rubric to assess each video. Thereafter, semi-structured interviews explored examiners' views on the OD approach. Recordings were transcribed, anonymised and analysed deductively and inductively for recurring themes. Additionally, inter-rater agreement of examiners' scoring was determined using the Fleiss kappa statistic, both within group and in comparison to a reference examiner group. RESULTS The rubric achieved fair to good inter-rater reliability across its constituent domains and overall global judgement scales. Think-aloud scoring revealed that participating examiners considered several factors when scoring students' CR abilities, including the adoption of a confident, structured approach, discrimination between relevant and less-relevant information, and the ability to prioritise and justify decision making. Furthermore, students' CR skills were judged in light of potential risks to patient safety and examiners' own illness scripts. Feedback from examiners indicated that, whilst additional training in rubric usage would be beneficial, OD offered a positive approach for examining CR ability. CONCLUSION This pilot study has demonstrated promising results for the use of a novel post-station OD task to evaluate medical students' CR ability in the OSCE setting. Further work is now planned to evaluate how the OD approach can most effectively be implemented into routine assessment practice.
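The Fleiss kappa statistic used above generalises chance-corrected agreement from two raters to any fixed number of raters per subject. A stdlib-Python sketch with invented grading counts (four hypothetical examiners grading five recordings as fail/borderline/pass):

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for a subjects-by-categories count matrix.

    ratings[i][j] = number of raters assigning subject i to category j;
    every row must sum to the same number of raters."""
    n_subjects = len(ratings)
    n_raters = sum(ratings[0])
    # Per-subject agreement P_i: proportion of agreeing rater pairs
    p_i = [
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in ratings
    ]
    p_bar = sum(p_i) / n_subjects
    # Marginal proportion of ratings falling in each category
    total = n_subjects * n_raters
    n_categories = len(ratings[0])
    p_j = [sum(row[j] for row in ratings) / total for j in range(n_categories)]
    p_e = sum(p * p for p in p_j)           # chance agreement
    return (p_bar - p_e) / (1 - p_e)

# Invented counts: rows = recordings, columns = fail / borderline / pass
counts = [
    [4, 0, 0],
    [0, 1, 3],
    [1, 3, 0],
    [0, 0, 4],
    [3, 1, 0],
]
print(f"Fleiss kappa = {fleiss_kappa(counts):.2f}")
```

Values around .4-.6 are conventionally read as "fair to good" agreement, which matches the qualitative description in the abstract.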
Affiliation(s)
- Alexis Régent
- Service de médecine interne, Centre de référence maladies auto-immunes et systémiques rares d'ile de France, APHP-CUP, Hôpital Cochin, F-75014, Paris, France.
- Université de Paris, 15 rue de l'école de médecine, F-75006, Paris, France.
- Harish Thampy
- Division of Medical Education, Faculty of Medicine, Biology and Health, University of Manchester, Manchester, UK
- Mini Singh
- Division of Medical Education, Faculty of Medicine, Biology and Health, University of Manchester, Manchester, UK
4. Diogo PG, Pereira VH, Papa F, van der Vleuten C, Durning SJ, Sousa N. Semantic competence and prototypical verbalizations are associated with higher OSCE and global medical degree scores: a multi-theory pilot study on year 6 medical student verbalizations. Diagnosis (Berl) 2023; 10:249-256. [PMID: 36916145] [DOI: 10.1515/dx-2021-0048]
Abstract
OBJECTIVES The organization of medical knowledge is reflected in language and can be studied from the viewpoints of semantics and prototype theory. The purpose of this study is to analyze student verbalizations during an Objective Structured Clinical Examination (OSCE) and correlate them with test scores and final medical degree (MD) scores. We hypothesize that students whose verbalizations are semantically richer and closer to the disease prototype will show better academic performance. METHODS We conducted a single-center study during a year 6 (Y6) high-stakes OSCE where one probing intervention was included at the end of the exam to capture students' reasoning about one of the clinical cases. Verbalizations were transcribed and coded. An assessment panel categorized verbalizations regarding their semantic value (Weak, Good, Strong). Semantic categories and prototypical elements were compared with OSCE, case-based exam and global MD scores. RESULTS Students with Semantic 'Strong' verbalizations displayed higher OSCE, case-based exam and MD scores, while the use of prototypical elements was associated with higher OSCE and MD scores. CONCLUSIONS Semantic competence and verbalizations matching the disease prototype may identify students with better organization of medical knowledge. This work provides empirical groundwork for future research on language analysis to support assessment decisions.
Affiliation(s)
- Frank Papa
- University of North Texas Health Science Center, Fort Worth, TX, USA
- Steven J Durning
- Center for Health Professions Education, Uniformed Services University, Bethesda, MD, USA
- Nuno Sousa
- Escola de Medicina da Universidade do Minho, Braga, Portugal
5. Burt L, Olson A. Development and psychometric testing of the Diagnostic Competency During Simulation-based (DCDS) learning tool. J Prof Nurs 2023; 45:51-59. [PMID: 36889893] [DOI: 10.1016/j.profnurs.2023.01.008]
Abstract
BACKGROUND Despite diagnostic errors impacting an estimated 12 million people yearly in the United States, educational strategies that foster diagnostic performance among nurse practitioner (NP) students remain elusive. One possible solution is to focus explicitly on competencies fundamental to diagnostic excellence. No existing educational tools were found that comprehensively address individual diagnostic reasoning competencies during simulation-based learning experiences. PURPOSE Our research team developed and explored the psychometric properties of the "Diagnostic Competency During Simulation-based (DCDS) Learning Tool." METHOD Items and domains were developed based on existing frameworks. Content validity was determined by a convenience sample of eight experts. Inter-rater reliability was determined by four faculty rating eight simulation scenarios. RESULTS Final individual competency domain scale content validity index (CVI) scores ranged between 0.9175 and 1.0; the total scale CVI score was 0.98. The intra-class correlation coefficient (ICC) for the tool was 0.548 (p < 0.0001; 95 % CI [0.482-0.612]). CONCLUSIONS Results suggest that the DCDS Learning Tool is relevant to diagnostic reasoning competencies and may be implemented with moderate reliability across varied simulation scenarios and performance levels. The DCDS tool expands the landscape of diagnostic reasoning assessment by providing NP educators with granular, actionable, competency-specific assessment measures to foster improvement.
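The content validity index (CVI) reported above is simple arithmetic: the item-level CVI is the proportion of experts rating an item "relevant" (3 or 4 on the usual 4-point relevance scale), and the scale-level score averages these across items. A stdlib-Python sketch with invented expert ratings:

```python
def item_cvi(ratings, relevant=(3, 4)):
    """Item-level CVI: share of experts rating the item relevant (3 or 4)."""
    return sum(r in relevant for r in ratings) / len(ratings)

def scale_cvi_average(item_ratings):
    """Scale-level CVI by the averaging method: mean of the item CVIs."""
    cvis = [item_cvi(r) for r in item_ratings]
    return sum(cvis) / len(cvis)

# Invented relevance ratings from 8 experts for 4 items
ratings = [
    [4, 4, 3, 4, 4, 3, 4, 4],   # all 8 experts rate relevant
    [4, 3, 4, 4, 2, 4, 3, 4],   # one expert rates '2' (not relevant)
    [3, 4, 4, 4, 4, 4, 4, 4],
    [4, 4, 4, 2, 4, 4, 4, 3],
]
print(f"S-CVI/Ave = {scale_cvi_average(ratings):.3f}")
```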
Affiliation(s)
- Leah Burt
- University of Illinois Chicago College of Nursing, Department of Biobehavioral Nursing Science, United States of America.
- Andrew Olson
- Division of Hospital Medicine, Department of Medicine and Division of Pediatric Hospital Medicine, Department of Pediatrics, University of Minnesota Medical School, United States of America
6. Buléon C, Mattatia L, Minehart RD, Rudolph JW, Lois FJ, Guillouet E, Philippon AL, Brissaud O, Lefevre-Scelles A, Benhamou D, Lecomte F, group TSAWS, Bellot A, Crublé I, Philippot G, Vanderlinden T, Batrancourt S, Boithias-Guerot C, Bréaud J, de Vries P, Sibert L, Sécheresse T, Boulant V, Delamarre L, Grillet L, Jund M, Mathurin C, Berthod J, Debien B, Gacia O, Der Sahakian G, Boet S, Oriot D, Chabot JM. Simulation-based summative assessment in healthcare: an overview of key principles for practice. Adv Simul (Lond) 2022; 7:42. [PMID: 36578052] [PMCID: PMC9795938] [DOI: 10.1186/s41077-022-00238-9]
Abstract
BACKGROUND Healthcare curricula need summative assessments relevant to and representative of clinical situations to best select and train learners. Simulation provides multiple benefits, with a growing literature base proving its utility for training in a formative context. Advancing to the next step, the use of simulation for summative assessment, requires rigorous and evidence-based development because any summative assessment is high stakes for participants, trainers, and programs. The first step of this process is to identify the baseline from which we can start. METHODS First, using a modified nominal group technique, a task force of 34 panelists defined topics to clarify the why, how, what, when, and who for using simulation-based summative assessment (SBSA). Second, each topic was explored by a group of panelists using state-of-the-art literature review techniques, with a snowball method to identify further references. Our goal was to identify current knowledge and potential recommendations for future directions. Results were cross-checked among groups and reviewed by an independent expert committee. RESULTS Seven topics were selected by the task force: "What can be assessed in simulation?", "Assessment tools for SBSA", "Consequences of undergoing the SBSA process", "Scenarios for SBSA", "Debriefing, video, and research for SBSA", "Trainers for SBSA", and "Implementation of SBSA in healthcare". Together, these seven explorations provide an overview of what is known and can be done with relative certainty, and what is unknown and probably needs further investigation. Based on this work, we highlighted the trustworthiness of different summative assessment-related conclusions, the remaining important problems and questions, and their consequences for participants and institutions regarding how SBSA is conducted. CONCLUSION Among the seven topics, our results identified one area with robust evidence in the literature ("What can be assessed in simulation?"), three areas with evidence that require guidance by expert opinion ("Assessment tools for SBSA", "Scenarios for SBSA", "Implementation of SBSA in healthcare"), and three areas with weak or emerging evidence ("Consequences of undergoing the SBSA process", "Debriefing for SBSA", "Trainers for SBSA"). Using SBSA holds much promise, with increasing demand for this application. Due to the important stakes involved, it must be rigorously conducted and supervised. Guidelines for good practice should be formalized to help with conduct and implementation. We believe this baseline can direct future investigation and the development of guidelines.
Affiliation(s)
- Clément Buléon
- Department of Anesthesiology, Intensive Care and Perioperative Medicine, Caen Normandy University Hospital, 6th Floor, Caen, France; Medical School, University of Caen Normandy, Caen, France; Center for Medical Simulation, Boston, MA, USA
- Laurent Mattatia
- Department of Anesthesiology, Intensive Care and Perioperative Medicine, Nîmes University Hospital, Nîmes, France
- Rebecca D. Minehart
- Center for Medical Simulation, Boston, MA, USA; Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA
- Jenny W. Rudolph
- Center for Medical Simulation, Boston, MA, USA; Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA
- Fernande J. Lois
- Department of Anesthesiology, Intensive Care and Perioperative Medicine, Liège University Hospital, Liège, Belgium
- Erwan Guillouet
- Department of Anesthesiology, Intensive Care and Perioperative Medicine, Caen Normandy University Hospital, Caen, France; Medical School, University of Caen Normandy, Caen, France
- Anne-Laure Philippon
- Department of Emergency Medicine, Pitié Salpêtrière University Hospital, APHP, Paris, France
- Olivier Brissaud
- Department of Pediatric Intensive Care, Pellegrin University Hospital, Bordeaux, France
- Antoine Lefevre-Scelles
- Department of Emergency Medicine, Rouen University Hospital, Rouen, France
- Dan Benhamou
- Department of Anesthesiology, Intensive Care and Perioperative Medicine, Kremlin Bicêtre University Hospital, APHP, Paris, France
- François Lecomte
- Department of Emergency Medicine, Cochin University Hospital, APHP, Paris, France
7. Smith KJ, Childs-Kean LM, Smith MD. Developing Clinical Reasoning: An Introduction for Pharmacy Preceptors. J Am Coll Clin Pharm 2022. [DOI: 10.1002/jac5.1624]
Affiliation(s)
- Kathryn J. Smith
- University of Oklahoma Health Sciences Center College of Pharmacy, 1110 N. Stonewall Ave, CPB 229, Oklahoma City, Oklahoma
8. Berge M, Soh M, Christopher F, McKinnon R, Wetstein B, Anderson A, Konopasky A, Durning S. Semantic competency as a marker of clinical reasoning performance. MedEdPublish 2022. [DOI: 10.12688/mep.17438.1]
Abstract
Background: This study sought to explore the relationship between the semantic competence (or dyscompetence) displayed during "think-alouds" performed by resident and attending physicians and their clinical reasoning performance. Methods: Internal medicine resident physicians and practicing internists participated in think-alouds performed after watching videos of typical presentations of common diseases in internal medicine. The think-alouds were evaluated for the presence of semantic competence and dyscompetence, and these results were correlated with clinical reasoning performance. Results: We found that the length of the think-aloud was negatively correlated with clinical reasoning performance. Beyond this finding, however, we did not find any other significant correlations between semantic competence or dyscompetence and clinical reasoning performance. Conclusions: While this study did not confirm the hypothesized correlation between semantic competence and clinical reasoning performance, we discuss the possible implications and areas of future study regarding this relationship.
9. Battista A, Konopasky A, Durning SJ. The importance of theory and method: A brief reflection on an innovative program of research examining how situational factors influence physicians' clinical reasoning. FASEB Bioadv 2021; 3:490-496. [PMID: 34258518] [PMCID: PMC8255829] [DOI: 10.1096/fba.2020-00109]
Abstract
Clinical reasoning, a complex process that involves gathering and synthesizing information to make diagnostic and treatment decisions, is a topic researchers frequently study to mitigate errors. Scientific reasoning has several similarities with clinical reasoning, including the need to generate hypotheses; observe, gather, and interpret evidence; engage in the process of elimination; draw conclusions; and refine and test new hypotheses. However, researchers have only recently begun to take into consideration the role that situational factors (also known as contextual factors), such as language barriers or the lack of diagnostic test results, can play in diagnostic error. Additionally, questions remain about the best ways to teach these complex processes.
Affiliation(s)
- Alexis Battista
- Henry M Jackson Foundation for the Advancement of Military Medicine, Bethesda, MD, USA
- Abigail Konopasky
- Henry M Jackson Foundation for the Advancement of Military Medicine, Bethesda, MD, USA
- Steven J Durning
- Uniformed Services University of the Health Sciences, Bethesda, MD, USA
10. Torre DM, Hemmer PA, Durning SJ, Dong T, Swygert K, Schreiber-Gregory D, Kelly WF, Pangaro LN. Gathering Validity Evidence on an Internal Medicine Clerkship Multistep Exam to Assess Medical Student Analytic Ability. Teach Learn Med 2021; 33:28-35. [PMID: 32281406] [DOI: 10.1080/10401334.2020.1749635]
Abstract
Construct: The definition of clinical reasoning may vary among health professions educators; for the purpose of this paper, clinical reasoning is defined as the cognitive processes involved in information gathering, problem representation, generating a differential diagnosis, providing a diagnostic justification to arrive at a leading diagnosis, and formulating diagnostic and management plans. Background: Expert performance in clinical reasoning is essential for success as a physician, yet it has been difficult for clerkship directors to observe and quantify in a way that fosters the instruction and assessment of clinical reasoning. The purpose of this study was to gather validity evidence for the Multistep exam (MSX) format used by our medicine clerkship to assess analytical clinical reasoning abilities; we did this by examining the relationship between scores on the MSX and other external measures of clinical reasoning. This analysis used dual process theory as the main theoretical framework of clinical reasoning, as well as aspects of Kane's validity framework, to guide the selection of validity evidence. We hypothesized that there would be an association between the MSX (a three-step clinical reasoning tool developed locally) and the USMLE Step 2 CS, as they share similar concepts in assessing students' clinical reasoning. We examined the relationship between overall scores on the MSX and the Step 2 CS Integrated Clinical Encounter (ICE) score, in which the student articulates their reasoning for simulated patient cases, while controlling for internal medicine clerkship performance measures such as the NBME subject exam score and the Medicine clerkship OSCE score. Approach: A total of 477 of 487 (97.9%) medical students, representing the graduating classes of 2015, 2016, and 2017, who took the MSX at the end of each medicine clerkship (2012-2016) and Step 2 CS (2013-2017) were included in this study. Correlation analysis and multiple linear regression were used to examine the impact of the primary explanatory variable of interest (MSX) on the outcome variable (ICE score) while controlling for baseline variables (Medicine OSCE and NBME Medicine subject exam). Findings: The overall MSX score had a significant, positive correlation with the Step 2 CS ICE score (r = .26, P < .01). The overall MSX score was a significant predictor of the Step 2 CS ICE score (β = .19, P < .001), explaining an additional 4% of the variance of ICE beyond the NBME Medicine subject score and the Medicine OSCE score (adjusted R2 = 13%). Conclusion: The stepwise format of the MSX provides a tool to observe clinical reasoning performance, which can be used in an assessment system to provide feedback to students on their analytical clinical reasoning. Future studies should focus on gaining additional validity evidence across different learners and multiple medical schools.
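The incremental-variance analysis described above (how much R² the MSX adds beyond baseline predictors) can be sketched with ordinary least squares solved via the normal equations; all scores below are invented, and this is a sketch of the general technique rather than the study's actual model:

```python
def ols_r2(X, y):
    """R^2 of an OLS fit (with intercept) via the normal equations."""
    rows = [[1.0] + list(x) for x in X]   # prepend an intercept column
    k = len(rows[0])
    # Build X'X and X'y
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    # Gaussian elimination with partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[piv] = xtx[piv], xtx[col]
        xty[col], xty[piv] = xty[piv], xty[col]
        for r in range(col + 1, k):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, k):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (xty[r] - sum(xtx[r][c] * beta[c] for c in range(r + 1, k))) / xtx[r][r]
    y_hat = [sum(b * v for b, v in zip(beta, row)) for row in rows]
    y_bar = sum(y) / len(y)
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, y_hat))
    ss_tot = sum((yi - y_bar) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

# Invented scores: baseline predictors [subject exam, OSCE], then + MSX
base = [[70, 60], [75, 65], [80, 70], [85, 68], [90, 75], [72, 62], [88, 73], [78, 69]]
msx  = [55, 62, 58, 71, 80, 50, 77, 64]
ice  = [61, 66, 64, 74, 83, 57, 80, 68]
full = [b + [m] for b, m in zip(base, msx)]
delta_r2 = ols_r2(full, ice) - ols_r2(base, ice)
print(f"incremental R^2 from adding MSX = {delta_r2:.3f}")
```

Adding a predictor can never decrease R² in OLS, so the meaningful question is whether the increment (here, the 4% reported in the abstract) is statistically and practically significant.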
Affiliation(s)
- Dario M Torre
- Department of Medicine, Uniformed Services University of Health Sciences, Bethesda, Maryland, USA
- Paul A Hemmer
- Educational Programs, Uniformed Services University of Health Sciences, Bethesda, Maryland, USA
- Steven J Durning
- Departments of Medicine and Pathology, Uniformed Services University of Health Sciences, Bethesda, Maryland, USA
- Ting Dong
- Department of Medicine, Uniformed Services University of Health Sciences, Bethesda, Maryland, USA
- Kimberly Swygert
- Professional Services, National Board of Medical Examiners, Philadelphia, Pennsylvania, USA
- Deanna Schreiber-Gregory
- Department of Medicine, Uniformed Services University of Health Sciences, Bethesda, Maryland, USA
- William F Kelly
- Department of Medicine, Uniformed Services University of Health Sciences, Bethesda, Maryland, USA
- Louis N Pangaro
- Department of Medicine, Uniformed Services University of Health Sciences, Bethesda, Maryland, USA
11. Chen W, Reeves TC. Twelve tips for conducting educational design research in medical education. Med Teach 2020; 42:980-986. [PMID: 31498719] [DOI: 10.1080/0142159x.2019.1657231]
Abstract
Despite a steady growth in educational innovations and studies investigating the acceptance and effectiveness of these innovations, medical education has not realized sufficient improvement in practice and outcomes from these investments. In light of this lack of impact, there has been a growing call for studies that more effectively bridge the gap between research and practice. This paper introduces Educational Design Research (EDR) as a promising approach to address this challenge. Twelve tips are provided to inspire and guide medical educators to conduct EDR to achieve the dual goals of tackling a significant educational problem in a specific context while at the same time advancing the theoretical knowledge that may be used to improve practice elsewhere.
Affiliation(s)
- Weichao Chen
- Office of Medical Education, School of Medicine, University of Virginia, Charlottesville, VA, USA
- Thomas C Reeves
- College of Education, The University of Georgia, Athens, GA, USA
12. Konopasky A, Durning SJ, Battista A, Artino AR, Ramani D, Haynes ZA, Woodard C, Torre D. Challenges in mitigating context specificity in clinical reasoning: a report and reflection. Diagnosis (Berl) 2020; 7:291-297. [PMID: 32651977] [DOI: 10.1515/dx-2020-0018]
Abstract
Objectives Diagnostic error is a growing concern in U.S. healthcare. There is mounting evidence that errors may not always be due to knowledge gaps, but also to context specificity: a physician seeing two identical patient presentations from a content perspective (e.g., history, labs) yet arriving at two distinct diagnoses. This study used the lens of situated cognition theory - which views clinical reasoning as interconnected with surrounding contextual factors - to design and test an instructional module to mitigate the negative effects of context specificity. We hypothesized that experimental participants would perform better on the outcome measure than those in the control group. Methods This study divided 39 resident and attending physicians into an experimental group receiving an interactive computer training and "think-aloud" exercise and a control group, comparing their clinical reasoning. Clinical reasoning performance in a simulated unstable angina case with contextual factors (i.e., diagnostic suggestion) was determined using performance on a post-encounter form (PEF) as the outcome measure. The participants who received the training and did the reflection were compared to those who did not using descriptive statistics and a multivariate analysis of covariance (MANCOVA). Results Descriptive statistics suggested slightly better performance for the experimental group, but MANCOVA results revealed no statistically significant differences (Pillai's Trace=0.20, F=1.9, df=[4, 29], p=0.15). Conclusions While differences were not statistically significant, this study suggests the potential utility of strategies that provide education and awareness of contextual factors and space for reflective practice.
Affiliation(s)
- Abigail Konopasky, Uniformed Services University of the Health Sciences and The Henry M Jackson Foundation for the Advancement of Military Medicine, Bethesda, MD, USA
- Steven J Durning, Uniformed Services University of the Health Sciences and The Henry M Jackson Foundation for the Advancement of Military Medicine, Bethesda, MD, USA
- Alexis Battista, Uniformed Services University of the Health Sciences and The Henry M Jackson Foundation for the Advancement of Military Medicine, Bethesda, MD, USA
- Anthony R Artino, The George Washington University School of Medicine and Health Sciences, Health, Human Function, and Rehabilitation Sciences, Washington, DC, USA
- Divya Ramani, Uniformed Services University of the Health Sciences and The Henry M Jackson Foundation for the Advancement of Military Medicine, Bethesda, MD, USA
- Zachary A Haynes, Uniformed Services University of the Health Sciences and The Henry M Jackson Foundation for the Advancement of Military Medicine, Bethesda, MD, USA
- Catherine Woodard, Uniformed Services University of the Health Sciences and The Henry M Jackson Foundation for the Advancement of Military Medicine, Bethesda, MD, USA
- Dario Torre, Department of Medicine, Uniformed Services University, Bethesda, MD, USA
13
Rencic J, Schuwirth LWT, Gruppen LD, Durning SJ. A situated cognition model for clinical reasoning performance assessment: a narrative review. Diagnosis (Berl) 2020; 7:227-240. [PMID: 32352400 DOI: 10.1515/dx-2019-0106] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2019] [Accepted: 04/04/2020] [Indexed: 02/17/2024]
Abstract
Background Clinical reasoning performance assessment is challenging because it is a complex, multi-dimensional construct. In addition, clinical reasoning performance can be impacted by contextual factors, leading to significant variation in performance. This phenomenon, called context specificity, has been described by social cognitive theories. Situated cognition theory, one of the social cognitive theories, posits that cognition emerges from the complex interplay of human beings with each other and the environment. It has been used as a valuable conceptual framework to explore context specificity in clinical reasoning and its assessment. We developed a conceptual model of clinical reasoning performance assessment based on situated cognition theory. In this paper, we use situated cognition theory and the conceptual model to explore how this lens alters the interpretation of articles or provides additional insights into the interactions between the assessee, patient, rater, environment, assessment method, and task. Methods We culled 17 articles from a systematic literature search of clinical reasoning performance assessment that explicitly or implicitly demonstrated a situated cognition perspective to provide an "enriched" sample with which to explore how contextual factors impact clinical reasoning performance assessment. Results We found evidence for dyadic, triadic, and quadratic interactions between different contextual factors, some of which led to dramatic changes in the assessment of clinical reasoning performance, even when knowledge requirements were not significantly different. Conclusions The analysis of the selected articles highlighted the value of a situated cognition perspective in understanding variations in clinical reasoning performance assessment. Prospective studies that evaluate the impact of modifying various contextual factors, while holding others constant, can provide deeper insights into the mechanisms by which context impacts clinical reasoning performance assessment.
Affiliation(s)
- Joseph Rencic, Department of Medicine, Boston University School of Medicine, 72 East Concord Street, Boston, MA 02118, USA
- Lambert W T Schuwirth, Prideaux Centre for Research in Health Professions Education, Flinders University, Flinders, Australia
- Larry D Gruppen, Department of Medical Education, University of Michigan, Ann Arbor, MI, USA
- Steven J Durning, Department of Medicine, Uniformed Services University of the Health Sciences, Bethesda, MD, USA
14
Konopasky A, Artino AR, Battista A, Ohmer M, Hemmer PA, Torre D, Ramani D, van Merrienboer J, Teunissen PW, McBee E, Ratcliffe T, Durning SJ. Understanding context specificity: the effect of contextual factors on clinical reasoning. Diagnosis (Berl) 2020; 7:257-264. [PMID: 32364516 DOI: 10.1515/dx-2020-0016] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2020] [Accepted: 03/11/2020] [Indexed: 02/17/2024]
Abstract
Background Situated cognition theory argues that thinking is inextricably situated in a context. In clinical reasoning, this can lead to context specificity: a physician arriving at two different diagnoses for two patients with the same symptoms, findings, and diagnosis but different contextual factors (something beyond case content potentially influencing reasoning). This paper experimentally investigates the presence of and mechanisms behind context specificity by measuring differences in clinical reasoning performance in cases with and without contextual factors. Methods An experimental study was conducted in 2018-2019 with 39 resident and attending physicians in internal medicine. Participants viewed two outpatient clinic video cases (unstable angina and diabetes mellitus), one with distracting contextual factors and one without. After viewing each case, participants responded to six open-ended diagnostic items (e.g. problem list, leading diagnosis) and rated their cognitive load. Results Multivariate analysis of covariance (MANCOVA) results revealed significant differences in angina case performance with and without contextual factors [Pillai's trace = 0.72, F = 12.4, df = (6, 29), p < 0.001, ηp² = 0.72], with follow-up univariate analyses indicating that participants performed statistically significantly worse in cases with contextual factors on five of six items. There were no significant differences in diabetes cases between conditions. There was no statistically significant difference in cognitive load between conditions. Conclusions Using typical presentations of common diagnoses, and contextual factors typical for clinical practice, we provide ecologically valid evidence for the theoretically predicted negative effects of context specificity (i.e. for the angina case), with large effect sizes, offering insight into the persistence of diagnostic error.
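The Results above report a MANCOVA via Pillai's trace. For a one-way design, Pillai's trace is V = tr(H(H + E)⁻¹), where H and E are the between-group and within-group sum-of-squares-and-cross-products (SSCP) matrices. The sketch below is illustrative only, computed on toy data rather than the study's analysis; the function name is an assumption.

```python
import numpy as np

def pillais_trace(X, groups):
    """Pillai's trace V = tr(H (H + E)^-1) for a one-way MANOVA.
    X: (n_subjects, n_outcomes); groups: group label per subject.
    H = between-group SSCP, E = within-group SSCP."""
    X = np.asarray(X, dtype=float)
    groups = np.asarray(groups)
    grand = X.mean(axis=0)
    p = X.shape[1]
    H = np.zeros((p, p))
    E = np.zeros((p, p))
    for g in np.unique(groups):
        Xg = X[groups == g]
        d = (Xg.mean(axis=0) - grand)[:, None]   # group-mean deviation
        H += len(Xg) * d @ d.T                    # between-group SSCP
        E += (Xg - Xg.mean(axis=0)).T @ (Xg - Xg.mean(axis=0))  # within
    return np.trace(H @ np.linalg.inv(H + E))
```

With two groups, V ranges from 0 (no separation of group mean vectors) to 1 (complete separation).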
Affiliation(s)
- Abigail Konopasky, Assistant Professor of Medicine, The Henry M. Jackson Foundation for the Advancement of Military Medicine, Uniformed Services University of the Health Sciences, 4301 Jones Bridge Rd, Bethesda, MD 20814, USA
- Anthony R Artino, Human Function, and Rehabilitation Sciences, School of Medicine and Health Sciences, The George Washington University, Washington, DC, USA
- Alexis Battista, Department of Medicine, The Henry M. Jackson Foundation for the Advancement of Military Medicine, Uniformed Services University of the Health Sciences, Bethesda, USA
- Paul A Hemmer, Department of Medicine, Uniformed Services University of the Health Sciences, Bethesda, MD, USA
- Dario Torre, Department of Medicine, Uniformed Services University of the Health Sciences, Bethesda, MD, USA
- Divya Ramani, The Henry M. Jackson Foundation for the Advancement of Military Medicine, Uniformed Services University of the Health Sciences, Bethesda, MD, USA
- Jeroen van Merrienboer, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands
- Pim W Teunissen, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands
- Elexis McBee, Department of Medicine, Uniformed Services University of the Health Sciences, Bethesda, MD, USA
- Temple Ratcliffe, Department of Medicine, University of Texas Health Science Center at San Antonio, San Antonio, TX, USA
- Steven J Durning, Department of Medicine, Uniformed Services University of the Health Sciences, Bethesda, MD, USA
15
Cleary TJ, Battista A, Konopasky A, Ramani D, Durning SJ, Artino AR. Effects of live and video simulation on clinical reasoning performance and reflection. Adv Simul (Lond) 2020; 5:17. [PMID: 32760598 PMCID: PMC7393892 DOI: 10.1186/s41077-020-00133-1] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2020] [Accepted: 07/17/2020] [Indexed: 02/04/2023] Open
Abstract
Introduction In recent years, researchers have recognized the need to examine the relative effectiveness of different simulation approaches and the experiences of physicians operating within such environments. The current study experimentally examined the reflective judgments, cognitive processing, and clinical reasoning performance of physicians across live and video simulation environments. Methods Thirty-eight physicians were randomly assigned to a live scenario or video case condition. Both conditions encompassed two components: (a) patient encounter and (b) video reflection activity. Following the condition-specific patient encounter (i.e., live scenario or video), the participants completed a Post Encounter Form (PEF), microanalytic questions, and a mental effort question. Participants were then instructed to re-watch the video (i.e., video condition) or a video recording of their live patient encounter (i.e., live scenario) while thinking aloud about how they came to the diagnosis and management plan. Results Although significant differences did not emerge across all measures, physicians in the live scenario condition exhibited superior performance in clinical reasoning (i.e., PEF) and a distinct profile of reflective judgments and cognitive processing. Generally, the live condition participants focused more attention on aspects of the clinical reasoning process and demonstrated higher level cognitive processing than the video group. Conclusions The current study sheds light on the differential effects of live scenario and video simulation approaches. Physicians who engaged in live scenario simulations outperformed and showed a distinct pattern of cognitive reactions and judgments compared to physicians who practiced their clinical reasoning via video simulation. Additionally, the current study points to the potential advantages of video self-reflection following live scenarios while also shedding some light on the debate regarding whether video-guided reflection, specifically, is advantageous. The utility of context-specific, micro-level assessments that incorporate multiple methods as physicians complete different parts of clinical tasks is also discussed.
Affiliation(s)
- Alexis Battista, Center for Health Professions Education, F. Edward Hebert School of Medicine, Uniformed Services University of the Health Sciences, 4301 Jones Bridge Road, Bethesda, MD 20814-4712, USA
- Abigail Konopasky, Center for Health Professions Education, F. Edward Hebert School of Medicine, Uniformed Services University of the Health Sciences, 4301 Jones Bridge Road, Bethesda, MD 20814-4712, USA
- Divya Ramani, Center for Health Professions Education, F. Edward Hebert School of Medicine, Uniformed Services University of the Health Sciences, 4301 Jones Bridge Road, Bethesda, MD 20814-4712, USA
- Steven J Durning, Center for Health Professions Education, F. Edward Hebert School of Medicine, Uniformed Services University of the Health Sciences, 4301 Jones Bridge Road, Bethesda, MD 20814-4712, USA
- Anthony R Artino, Department of Health, Human Function, and Rehabilitation Sciences, The George Washington University School of Medicine and Health Sciences, Washington, USA
16
Haring CM, Klaarwater CCR, Bouwmans GA, Cools BM, van Gurp PJM, van der Meer JWM, Postma CT. Validity, reliability and feasibility of a new observation rating tool and a post encounter rating tool for the assessment of clinical reasoning skills of medical students during their internal medicine clerkship: a pilot study. BMC Med Educ 2020; 20:198. [PMID: 32560648 PMCID: PMC7304120 DOI: 10.1186/s12909-020-02110-8] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/23/2019] [Accepted: 06/11/2020] [Indexed: 06/11/2023]
Abstract
BACKGROUND Systematic assessment of clinical reasoning skills of medical students in clinical practice is very difficult. This is partly caused by the lack of understanding of the fundamental mechanisms underlying the process of clinical reasoning. METHODS We previously developed an observation tool to assess the clinical reasoning skills of medical students during clinical practice. This observation tool consists of an 11-item observation rating form (ORT). In the present study we verified the validity, reliability and feasibility of this tool and of an already existing post-encounter rating tool (PERT) in clinical practice among medical students during the internal medicine clerkship. RESULTS Six raters each assessed the same 15 student-patient encounters. The internal consistency (Cronbach's alpha) for the ORT was 0.87 (0.71-0.84) and for the 5-item PERT was 0.81 (0.71-0.87). The intraclass correlation coefficient for single measurements was poor for both the ORT, 0.32 (p < 0.001), and the PERT, 0.36 (p < 0.001). The generalizability study (G-study) and decision study (D-study) showed that 6 raters are required to achieve a G-coefficient of > 0.7 for the ORT and 7 raters for the PERT. The largest source of variance was the interaction between raters and students. There was a consistent correlation between the ORT and PERT of 0.53 (p = 0.04). CONCLUSIONS The ORT and PERT are both feasible, valid and reliable instruments to assess students' clinical reasoning skills in clinical practice.
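The internal-consistency statistic used in the abstract above, Cronbach's alpha, has a simple closed form: α = k/(k−1) · (1 − Σ item variances / variance of total scores) for k items. A minimal sketch on made-up rating-form scores (the function name and data are illustrative, not from the study):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_subjects, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()      # per-item sample variances
    total_var = scores.sum(axis=1).var(ddof=1)       # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)
```

When all items move together perfectly, alpha equals 1; weakly related items pull it toward 0.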
17
Soh M, Konopasky A, Durning SJ, Ramani D, McBee E, Ratcliffe T, Merkebu J. Sequence matters: patterns in task-based clinical reasoning. Diagnosis (Berl) 2020; 7:281-289. [DOI: 10.1515/dx-2019-0095] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2019] [Accepted: 03/23/2020] [Indexed: 11/15/2022]
Abstract
Background
The cognitive pathways that lead to an accurate diagnosis and efficient management plan can touch on various clinical reasoning tasks (1). These tasks can be employed at any point during the clinical reasoning process and though the four distinct categories of framing, diagnosis, management, and reflection provide some insight into how these tasks map onto clinical reasoning, much is still unknown about the task-based clinical reasoning process. For example, when and how are these tasks typically used? And more importantly, do these clinical reasoning task processes evolve when patient encounters become complex and/or challenging (i.e. with contextual factors)?
Methods
We examine these questions through the lens of situated cognition, context specificity, and cognitive load theory. Sixty think-aloud transcripts from 30 physicians who participated in two separate cases – one with a contextual factor and one without – were coded for 26 clinical reasoning tasks (1). These tasks were organized temporally, i.e. when they emerged in their think-aloud process. Frequencies of each of the 26 tasks were aggregated, categorized, and visualized in order to analyze task category sequences.
Results
We found that (a) as expected, clinical tasks follow a general sequence, (b) contextual factors can distort this emerging sequence, and (c) the presence of contextual factors prompts more experienced physicians to reason in ways similar to less experienced physicians.
Conclusions
These findings add to the existing literature on context specificity in clinical reasoning and can be used to strengthen teaching and assessment of clinical reasoning.
Affiliation(s)
- Michael Soh, Uniformed Services University of the Health Sciences, Medicine, Bethesda, MD, USA
- Abigail Konopasky, Uniformed Services University of the Health Sciences, Medicine, Bethesda, MD, USA
- Steven J. Durning, Uniformed Services University of the Health Sciences, Medicine, Bethesda, MD, USA
- Divya Ramani, Uniformed Services University of the Health Sciences, Medicine, Bethesda, MD, USA
- Elexis McBee, Uniformed Services University of the Health Sciences, Medicine, Bethesda, MD, USA
- Temple Ratcliffe, Uniformed Services University of the Health Sciences, Medicine, Bethesda, MD, USA
- Jerusalem Merkebu, Uniformed Services University of the Health Sciences, Medicine, Bethesda, MD, USA
18
Konopasky A, Durning SJ, Artino AR, Ramani D, Battista A. The Linguistic Effects of Context Specificity: Exploring Affect, Cognitive Processing, and Agency in Physicians' Think-Aloud Reflections. Diagnosis (Berl) 2020; 7:273-280. [PMID: 32304298 DOI: 10.1515/dx-2019-0103] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2019] [Accepted: 03/09/2020] [Indexed: 11/15/2022]
Abstract
Background The literature suggests that affect, higher-level cognitive processes (e.g. decision-making), and agency (the capacity to produce an effect) are important for reasoning; however, we do not know how these factors respond to context. Using situated cognition theory as a framework, and linguistic tools as a method, we explored the effects of context specificity [a physician seeing two patients with identical presentations (symptoms and findings), but coming to two different diagnoses], hypothesizing more linguistic markers of cognitive load in the presence of contextual factors (e.g. incorrect diagnostic suggestion). Methods In this comparative and exploratory study, 64 physicians each completed one case with contextual factors and one without. Transcribed think-aloud reflections were coded by Linguistic Inquiry and Word Count (LIWC) software for markers of affect, cognitive processes, and first-person pronouns. A repeated-measures multivariate analysis of variance was used to inferentially compare these LIWC categories between cases with and without contextual factors. This was followed by exploratory descriptive analysis of subcategories. Results As hypothesized, participants used more affective and cognitive process markers in cases with contextual factors and more I/me pronouns in cases without. These differences were statistically significant for cognitive processing words but not affective and pronominal words. Exploratory analysis revealed more negative emotions, cognitive processes of insight, and third-person pronouns in cases with contextual factors. Conclusions This study exposes linguistic differences arising from context specificity. These results demonstrate the value of a situated cognition view of patient encounters and reveal the utility of linguistic tools for examining clinical reasoning.
Affiliation(s)
- Abigail Konopasky, Assistant Professor of Medicine, The Henry M Jackson Foundation for the Advancement of Military Medicine, Uniformed Services University of the Health Sciences, Bethesda, MD, USA
- Steven J Durning, Professor of Medicine and Pathology, Uniformed Services University of the Health Sciences, Bethesda, MD, USA
- Anthony R Artino, Professor of Health, Human Function and Rehabilitation Sciences, School of Medicine and Health Sciences, The George Washington University, Washington, DC, USA
- Divya Ramani, Research Associate, The Henry M Jackson Foundation for the Advancement of Military Medicine, Uniformed Services University of the Health Sciences, Bethesda, MD, USA
- Alexis Battista, Assistant Professor of Medicine, The Henry M Jackson Foundation for the Advancement of Military Medicine, Uniformed Services University of the Health Sciences, Bethesda, MD, USA
19
(En)trust me: Validating an assessment rubric for documenting clinical encounters during a surgery clerkship clinical skills exam. Am J Surg 2020; 219:258-262. [DOI: 10.1016/j.amjsurg.2018.12.055] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2018] [Revised: 12/21/2018] [Accepted: 12/21/2018] [Indexed: 11/19/2022]
20
Prediger S, Schick K, Fincke F, Fürstenberg S, Oubaid V, Kadmon M, Berberat PO, Harendza S. Validation of a competence-based assessment of medical students' performance in the physician's role. BMC Med Educ 2020; 20:6. [PMID: 31910843 PMCID: PMC6947905 DOI: 10.1186/s12909-019-1919-x] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/23/2019] [Accepted: 12/22/2019] [Indexed: 05/04/2023]
Abstract
BACKGROUND Assessing competence of advanced undergraduate medical students based on performance in the clinical context is the ultimate, yet challenging goal for medical educators to provide constructive alignment between undergraduate medical training and professional work of physicians. Therefore, we designed and validated a performance-based 360-degree assessment for competences of advanced undergraduate medical students. METHODS This study was conducted in three steps: 1) Ten facets of competence considered to be most important for beginning residents were determined by a ranking study with 102 internists and 100 surgeons. 2) Based on these facets of competence we developed a 360-degree assessment simulating a first day of residency. Advanced undergraduate medical students (year 5 and 6) participated in the physician's role. Additionally, knowledge was assessed by a multiple-choice test. The assessment was performed twice (t1 and t2) and included three phases: a consultation hour, a patient management phase, and a patient handover. Sixty-seven (t1) and eighty-nine (t2) undergraduate medical students participated. 3) The participants completed the Group Assessment of Performance (GAP)-test for flight school applicants to assess medical students' facets of competence in a non-medical context for validation purposes. We aimed to provide a validity argument for our newly designed assessment based on Messick's six aspects of validation: (1) content validity, (2) substantive/cognitive validity, (3) structural validity, (4) generalizability, (5) external validity, and (6) consequential validity. RESULTS Our assessment proved to be well operationalised to enable undergraduate medical students to show their competences in performance on the higher levels of Bloom's taxonomy. Its generalisability was underscored by its authenticity in respect of workplace reality and its underlying facets of competence relevant for beginning residents. The moderate concordance with facets of competence of the validated GAP-test provides arguments of convergent validity for our assessment. Since five aspects of Messick's validation approach could be defended, our competence-based 360-degree assessment format shows good arguments for its validity. CONCLUSION According to these validation arguments, our assessment instrument seems to be a good option to assess competence in advanced undergraduate medical students in a summative or formative way. Developments towards assessment of postgraduate medical trainees should be explored.
Affiliation(s)
- Sarah Prediger, III. Department of Internal Medicine, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Kristina Schick, TUM Medical Education Center, School of Medicine, Technical University of Munich, Munich, Germany
- Fabian Fincke, Department of Medical Education and Educational Research, Faculty of Medicine and Health Science, University of Oldenburg, Oldenburg, Germany
- Sophie Fürstenberg, III. Department of Internal Medicine, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Martina Kadmon, Faculty of Medicine, University of Augsburg, Deanery, Augsburg, Germany
- Pascal O. Berberat, TUM Medical Education Center, School of Medicine, Technical University of Munich, Munich, Germany
- Sigrid Harendza, III. Department of Internal Medicine, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
21
Yudkowsky R, Hyderi A, Holden J, Kiser R, Stringham R, Gangopadhyaya A, Khan A, Park YS. Can Nonclinician Raters Be Trained to Assess Clinical Reasoning in Postencounter Patient Notes? Acad Med 2019; 94:S21-S27. [PMID: 31663941 DOI: 10.1097/acm.0000000000002904] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
PURPOSE Clinical reasoning is often assessed through patient notes (PNs) following standardized patient (SP) encounters. While nonclinicians can score PNs using analytic tools such as checklists, these do not sufficiently encompass the holistic judgments of clinician faculty. To better model faculty judgments, the authors developed checklists with faculty-specified scoring formulas embedded in spreadsheets and studied the resulting interrater reliability (IRR) of nonclinician raters (SPs and medics) and student pass/fail status. METHOD In Study 1, nonclinician and faculty raters rescored PNs of 55 third-year medical students across 5 cases of the 2017 Graduation Competency Examination (GCE) to determine IRR. In Study 2, nonclinician raters scored all notes of the 5-case 2018 GCE (178 students). Faculty rescored all notes of failing students and could modify formula-derived scores if faculty felt appropriate. Faculty also rescored and corrected scores of additional notes for a total of 90 notes (3 cases, including failing notes). RESULTS Mean overall percent exact agreement between nonclinician and faculty ratings was 87% (weighted kappa, 0.86) and 83% (weighted kappa, 0.88) for Study 1 and Study 2, respectively. SP and medic IRRs did not differ significantly. Four students failed the note section in 2018; 3 passed after faculty corrections. Few corrections were made to nonfailing students' notes. CONCLUSIONS Nonclinician PN raters using checklists and scoring rules may provide a feasible alternative to faculty raters for low-stakes assessments and for the bulk of well-performing students. Faculty effort can be targeted strategically at rescoring notes of low-performing students and providing more detailed feedback.
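The agreement statistics this abstract reports, percent exact agreement and weighted kappa, can both be computed directly from paired ratings. The sketch below is an illustrative implementation of Cohen's weighted kappa on synthetic ratings (function names and the quadratic/linear weighting choice are assumptions; it is not the authors' scoring code):

```python
import numpy as np

def weighted_kappa(r1, r2, n_cats, weights="quadratic"):
    """Cohen's weighted kappa for two raters on an ordinal scale 0..n_cats-1."""
    O = np.zeros((n_cats, n_cats))
    for a, b in zip(r1, r2):
        O[a, b] += 1
    O /= O.sum()                                  # observed proportion table
    E = np.outer(O.sum(axis=1), O.sum(axis=0))    # chance-expected table
    i, j = np.indices((n_cats, n_cats))
    if weights == "quadratic":
        W = ((i - j) / (n_cats - 1)) ** 2         # disagreement penalty matrix
    else:
        W = np.abs(i - j) / (n_cats - 1)          # linear weights
    return 1 - (W * O).sum() / (W * E).sum()

def exact_agreement(r1, r2):
    """Proportion of items where both raters gave the identical score."""
    return np.mean(np.array(r1) == np.array(r2))
```

Kappa is 1 for perfect agreement and negative when raters disagree more than chance would predict.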
Affiliation(s)
- Rachel Yudkowsky
- R. Yudkowsky is professor, Department of Medical Education, University of Illinois at Chicago College of Medicine, Chicago, Illinois; ORCID: https://orcid.org/0000-0002-2145-7582. A. Hyderi is professor, Department of Clinical Science, and founding senior associate dean for medical education, Kaiser Permanente School of Medicine, Pasadena, California; ORCID: https://orcid.org/0000-0002-8521-7510. J. Holden is research assistant, Department of Medical Education, University of Illinois at Chicago College of Medicine, and PharmD candidate, University of Illinois at Chicago College of Pharmacy, Chicago, Illinois. R. Kiser is associate director, Dr. Allan L. and Mary L. Graham Clinical Performance Center of the Department of Medical Education, University of Illinois at Chicago College of Medicine, Chicago, Illinois. R. Stringham is associate professor of clinical medicine, Department of Family Medicine, and assistant dean for curriculum, University of Illinois at Chicago College of Medicine, Chicago, Illinois. A. Gangopadhyaya is assistant professor, Division of General Internal Medicine, Department of Medicine, associate clerkship director, M3 and M4 internal medicine, and associate course director, Doctoring and Clinical Skills, University of Illinois at Chicago College of Medicine, Chicago, Illinois. A. Khan is associate professor of clinical medicine, Division of General Internal Medicine, Department of Medicine, clerkship director, M3 and M4 internal medicine, and course director, Doctoring and Clinical Skills, University of Illinois at Chicago College of Medicine, Chicago, Illinois. Y.S. Park is associate professor, Department of Medical Education, University of Illinois at Chicago College of Medicine, Chicago, Illinois; ORCID: http://orcid.org/0000-0001-8583-4335
22
Solhjoo S, Haigney MC, McBee E, van Merrienboer JJG, Schuwirth L, Artino AR, Battista A, Ratcliffe TA, Lee HD, Durning SJ. Heart Rate and Heart Rate Variability Correlate with Clinical Reasoning Performance and Self-Reported Measures of Cognitive Load. Sci Rep 2019; 9:14668. [PMID: 31604964 PMCID: PMC6789096 DOI: 10.1038/s41598-019-50280-3] [Citation(s) in RCA: 25] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2019] [Accepted: 09/05/2019] [Indexed: 01/05/2023] Open
Abstract
Cognitive load is a key mediator of cognitive processing that may impact clinical reasoning performance. The purpose of this study was to gather biologic validity evidence for correlates of different types of self-reported cognitive load, and to explore the association of self-reported cognitive load and physiologic measures with clinical reasoning performance. We hypothesized that increased cognitive load would manifest evidence of elevated sympathetic tone and would be associated with lower clinical reasoning performance scores. Fifteen medical students wore Holter monitors and watched three videos depicting medical encounters before completing a post-encounter form and standard measures of cognitive load. Correlation analysis was used to investigate the relationship between cardiac measures (mean heart rate, heart rate variability and QT interval variability) and self-reported measures of cognitive load, and their association with clinical reasoning performance scores. Despite the low number of participants, strong positive correlations were found between measures of intrinsic cognitive load and heart rate variability. Performance was negatively correlated with mean heart rate, as well as single-item cognitive load measures. Our data signify a possible role for using physiologic monitoring for identifying individuals experiencing high cognitive load and those at risk for performing poorly during clinical reasoning tasks.
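Heart rate variability, the Holter-derived measure correlated with cognitive load above, is conventionally summarized by time-domain statistics such as SDNN (standard deviation of normal-to-normal intervals) and RMSSD (root mean square of successive differences); the abstract does not specify which metric was used, so the following is a generic sketch on made-up RR intervals, paired with a plain Pearson correlation of the kind used in the analysis:

```python
import numpy as np

def sdnn(rr_ms):
    """SDNN: sample standard deviation of RR intervals (milliseconds)."""
    return np.std(rr_ms, ddof=1)

def rmssd(rr_ms):
    """RMSSD: root mean square of successive RR-interval differences (ms)."""
    d = np.diff(rr_ms)
    return np.sqrt(np.mean(d ** 2))

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xd, yd = x - x.mean(), y - y.mean()
    return (xd * yd).sum() / np.sqrt((xd ** 2).sum() * (yd ** 2).sum())
```

Higher SDNN/RMSSD values indicate greater beat-to-beat variability; the study correlates such measures against self-reported cognitive load and performance scores.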
Affiliation(s)
- Soroosh Solhjoo, Division of Cardiovascular Pathology, Johns Hopkins University School of Medicine, Baltimore, USA
- Mark C Haigney, Department of Medicine, F. Edward Hébert School of Medicine, Uniformed Services University of The Health Sciences, Bethesda, USA
- Elexis McBee, Department of Medicine, F. Edward Hébert School of Medicine, Uniformed Services University of The Health Sciences, Naval Medical Center, San Diego, USA
- Lambert Schuwirth, Prideaux Centre for Research in Health Professions Education, Flinders University, Bedford Park, Australia
- Anthony R Artino, Department of Medicine, F. Edward Hébert School of Medicine, Uniformed Services University of The Health Sciences, Bethesda, USA
- Alexis Battista, Department of Medicine, F. Edward Hébert School of Medicine, Uniformed Services University of The Health Sciences, Bethesda, USA
- Temple A Ratcliffe, Department of Medicine, University of Texas Health Science Center, San Antonio, USA
- Howard D Lee, San Antonio Uniformed Services Health Education Consortium, San Antonio, USA
- Steven J Durning, Department of Medicine, F. Edward Hébert School of Medicine, Uniformed Services University of The Health Sciences, Bethesda, USA
Collapse
|
23
|
Ward K, Kinney K, Patania R, Savage L, Motley J, Smith M. Development of a student grading rubric and testing for interrater agreement in a doctor of chiropractic competency program. The Journal of Chiropractic Education 2019; 33:140-144. PMID: 30916993; PMCID: PMC6759009; DOI: 10.7899/jce-18-9.
Abstract
OBJECTIVE Clinical competency is integral to the doctor of chiropractic program and is dictated by the Council on Chiropractic Education accreditation standards. These meta-competencies are assessed through open-ended tasks, which can make interrater agreement among multiple graders challenging. We developed and tested interrater agreement of a newly created analytic rubric for a clinical case-based education program. METHODS Clinical educators and research staff collaborated on rubric development and testing over four phases. Phase 1 tailored existing institutional rubrics to the new clinical case-based program using a 4-level scale of proficiency. Phase 2 tested the performance of the pilot rubric using 16 senior intern assessments graded by four instructors using pre-established grading keys. Phases 3 and 4 refined and retested rubric versions 1 and 2 on 16 and 14 assessments, respectively. RESULTS Exact, adjacent, and pass/fail agreements between six pairs of graders were reported. The pilot rubric achieved 46% average exact, 80% average adjacent, and 63% pass/fail agreement. Rubric version 1 yielded 49% average exact, 86% average adjacent, and 70% pass/fail agreement. Rubric version 2 yielded 60% average exact, 93% average adjacent, and 81% pass/fail agreement. CONCLUSION Our results are similar to those of other rubric interrater reliability studies. Interrater reliability improved with later versions of the rubric, likely attributable to rater learning and rubric refinement. Future studies should focus on concurrent validity and comparison of student performance with grade point average and national board scores.
24
Ohmer M, Durning SJ, Kucera W, Nealeigh M, Ordway S, Mellor T, Mikita J, Howle A, Krajnik S, Konopasky A, Ramani D, Battista A. Clinical Reasoning in the Ward Setting: A Rapid Response Scenario for Residents and Attendings. MedEdPORTAL: The Journal of Teaching and Learning Resources 2019; 15:10834. PMID: 31773062; PMCID: PMC6869982; DOI: 10.15766/mep_2374-8265.10834.
Abstract
INTRODUCTION There is a need for educational resources supporting the practice and assessment of the complex processes of clinical reasoning in the inpatient setting along a continuum of physician experience levels. METHODS Using participatory design, we created a scenario-based simulation integrating diagnostic ambiguity, contextual factors, and rising patient acuity to increase complexity. Resources include an open-ended written exercise and think-aloud reflection protocol to elicit diagnostic and management reasoning and reflection on that reasoning. Descriptive statistics were used to analyze the initial implementation evaluation results. RESULTS Twenty physicians from multiple training stages and specialties (interns, residents, attendings, family physicians, internists, surgeons) underwent the simulated scenario. Participants engaged in clinical reasoning processes consistent with the design, considering a total of 19 differential diagnoses. Ten participants provided the correct leading diagnosis, tension pneumothorax, with an additional eight providing pneumothorax and all participants offering relevant supporting evidence. There was also good evidence of management reasoning, with all participants either performing an intervention or calling for assistance and reflecting on management plans in the think-aloud. The scenario was a reasonable approximation of clinical practice, with a mean authenticity rating of 4.15 out of 5. Finally, the scenario presented adequate challenge, with interns and residents rating it as only slightly more challenging (means of 7.83 and 7.17, respectively) than attendings (mean of 6.63 out of 10). DISCUSSION Despite the challenges of scenario complexity, evaluation results indicate that this resource supports the observation and analysis of diagnostic and management reasoning of diverse specialties from interns through attendings.
Affiliation(s)
- Megan Ohmer
- Research Assistant, Department of Medicine, Graduate Programs in Health Professions Education, Uniformed Services University of the Health Sciences
- Steven J. Durning
- Professor, Department of Medicine and Pathology, Uniformed Services University of the Health Sciences
- Director, Graduate Programs in Health Professions Education, Uniformed Services University of the Health Sciences
- Walter Kucera
- Resident, Department of Surgery, Walter Reed National Military Medical Center
- Matthew Nealeigh
- Resident, Department of Surgery, Walter Reed National Military Medical Center
- Sarah Ordway
- Fellow, Department of Internal Medicine, Division of Gastroenterology, Walter Reed National Military Medical Center
- Thomas Mellor
- Fellow, Department of Internal Medicine, Division of Gastroenterology, Naval Medical Center San Diego
- Jeffery Mikita
- Chief, Department of Simulation, Walter Reed National Military Medical Center
- Program Director, Department of Internal Medicine, Division of Pulmonology and Critical Care Medicine, Walter Reed National Military Medical Center
- Associate Professor, Department of Medicine, Uniformed Services University of the Health Sciences
- Anna Howle
- Simulation Educator, Department of Simulation, Walter Reed National Military Medical Center
- Sarah Krajnik
- Simulation Educator, Department of Simulation, Walter Reed National Military Medical Center
- Abigail Konopasky
- Assistant Professor, Department of Medicine, Graduate Programs in Health Professions Education, Uniformed Services University of the Health Sciences
- Divya Ramani
- Research Assistant, Department of Medicine, Graduate Programs in Health Professions Education, Uniformed Services University of the Health Sciences
- Alexis Battista
- Assistant Professor, Department of Medicine, Graduate Programs in Health Professions Education, Uniformed Services University of the Health Sciences

25
Abstract
Clinical reasoning is a core component of clinical competency that is used in all patient encounters, from simple to complex presentations. It involves synthesis of myriad clinical and investigative data to generate and prioritize an appropriate differential diagnosis and inform safe and targeted management plans. The literature is rich with proposed methods to teach this critical skill to trainees of all levels. Yet ensuring that reasoning ability is appropriately assessed across the spectrum of knowledge acquisition to workplace-based clinical performance can be challenging. In this perspective, we first introduce the concepts of illness scripts and dual-process theory that describe the roles of non-analytic system 1 and analytic system 2 reasoning in clinical decision making. Thereafter, we draw upon existing evidence and expert opinion to review a range of methods that allow for effective assessment of clinical reasoning, contextualized within Miller's pyramid of learner assessment. Key assessment strategies that allow teachers to evaluate their learners' clinical reasoning ability are described, from the level of knowledge acquisition through to real-world demonstration in the clinical workplace.
Affiliation(s)
- Harish Thampy
- Division of Medical Education, School of Medical Sciences, Faculty of Biology, Medicine & Health, University of Manchester, Manchester, UK
- Emma Willert
- Division of Medical Education, School of Medical Sciences, Faculty of Biology, Medicine & Health, University of Manchester, Manchester, UK
- Subha Ramani
- Harvard Medical School, Brigham and Women's Hospital, General Internal Medicine, Department of Medicine, Boston, MA, USA

26
McBee E, Blum C, Ratcliffe T, Schuwirth L, Polston E, Artino AR, Durning SJ. Use of clinical reasoning tasks by medical students. Diagnosis (Berl) 2019; 6:127-135. PMID: 30851156; DOI: 10.1515/dx-2018-0077.
Abstract
Background A framework of clinical reasoning tasks used by physicians during clinical encounters was previously developed, proposing that clinical reasoning is a complex process composed of 26 possible tasks. The aim of this paper was to analyze the verbalized clinical reasoning processes of medical students using commonly encountered internal medicine cases. Methods In this mixed-methods study, participants viewed three video-recorded clinical encounters. After each encounter, participants completed a think-aloud protocol. The qualitative data from the think-aloud transcripts were analyzed by two investigators using a constant comparative approach. The type, frequency, and pattern of codes used were analyzed. Results Seventeen third- and fourth-year medical students participated. They used 15 reasoning tasks across all cases. The average number of tasks used in cases 1, 2, and 3 was, respectively, 5.6 (range 3-8), 5.9 (range 4-8), and 5.3 (range 3-10). The order in which medical students verbalized reasoning tasks varied and appeared purposeful but non-sequential. Conclusions Consistent with prior research in residents, participants progressed through the encounter in a purposeful but non-sequential fashion. Reasoning tasks related to framing the encounter and diagnosis were not used in succession but interchangeably. This suggests that teaching successful clinical reasoning may involve encouraging or demonstrating multiple pathways through a problem. Further research exploring the association between use of clinical reasoning tasks and clinical reasoning accuracy could enhance the medical community's understanding of variance in clinical reasoning.
Affiliation(s)
- Elexis McBee
- Department of Medicine, F. Edward Hébert School of Medicine, Uniformed Services University, Naval Medical Center San Diego, 34800 Bob Wilson Drive, San Diego, CA 92134, USA
- Christina Blum
- Department of Medicine, Naval Hospital Camp Pendleton, Oceanside, CA, USA
- Temple Ratcliffe
- Department of Medicine, University of Texas Health Science Center at San Antonio, San Antonio, TX, USA
- Lambert Schuwirth
- Flinders University, School of Medicine, Adelaide, South Australia, Australia
- Elizabeth Polston
- Department of Medicine, F. Edward Hébert School of Medicine, Uniformed Services University, Bethesda, MD, USA
- Anthony R Artino
- Department of Medicine, F. Edward Hébert School of Medicine, Uniformed Services University, Bethesda, MD, USA
- Steven J Durning
- Department of Medicine, F. Edward Hébert School of Medicine, Uniformed Services University, Bethesda, MD, USA

27
Daniel M, Rencic J, Durning SJ, Holmboe E, Santen SA, Lang V, Ratcliffe T, Gordon D, Heist B, Lubarsky S, Estrada CA, Ballard T, Artino AR, Sergio Da Silva A, Cleary T, Stojan J, Gruppen LD. Clinical Reasoning Assessment Methods: A Scoping Review and Practical Guidance. Academic Medicine 2019; 94:902-912. PMID: 30720527; DOI: 10.1097/acm.0000000000002618.
Abstract
PURPOSE An evidence-based approach to assessment is critical for ensuring the development of clinical reasoning (CR) competence. The wide array of CR assessment methods creates challenges for selecting assessments fit for the purpose; thus, a synthesis of the current evidence is needed to guide practice. A scoping review was performed to explore the existing menu of CR assessments. METHOD Multiple databases were searched from their inception to 2016 following PRISMA guidelines. Articles of all study design types were included if they studied a CR assessment method. The articles were sorted by assessment methods and reviewed by pairs of authors. Extracted data were used to construct descriptive appendixes, summarizing each method, including common stimuli, response formats, scoring, typical uses, validity considerations, feasibility issues, advantages, and disadvantages. RESULTS A total of 377 articles were included in the final synthesis. The articles broadly fell into three categories: non-workplace-based assessments (e.g., multiple-choice questions, extended matching questions, key feature examinations, script concordance tests); assessments in simulated clinical environments (objective structured clinical examinations and technology-enhanced simulation); and workplace-based assessments (e.g., direct observations, global assessments, oral case presentations, written notes). Validity considerations, feasibility issues, advantages, and disadvantages differed by method. CONCLUSIONS There are numerous assessment methods that align with different components of the complex construct of CR. Ensuring competency requires the development of programs of assessment that address all components of CR. Such programs are ideally constructed of complementary assessment methods to account for each method's validity and feasibility issues, advantages, and disadvantages.
Affiliation(s)
- M. Daniel is assistant dean for curriculum and associate professor of emergency medicine and learning health sciences, University of Michigan Medical School, Ann Arbor, Michigan; ORCID: http://orcid.org/0000-0001-8961-7119.
- J. Rencic is associate program director of the internal medicine residency program and associate professor of medicine, Tufts University School of Medicine, Boston, Massachusetts; ORCID: http://orcid.org/0000-0002-2598-3299.
- S.J. Durning is director of graduate programs in health professions education and professor of medicine and pathology, Uniformed Services University of the Health Sciences, Bethesda, Maryland.
- E. Holmboe is senior vice president of milestone development and evaluation, Accreditation Council for Graduate Medical Education, and adjunct professor of medicine, Northwestern Feinberg School of Medicine, Chicago, Illinois; ORCID: http://orcid.org/0000-0003-0108-6021.
- S.A. Santen is senior associate dean and professor of emergency medicine, Virginia Commonwealth University, Richmond, Virginia; ORCID: http://orcid.org/0000-0002-8327-8002.
- V. Lang is associate professor of medicine, University of Rochester School of Medicine and Dentistry, Rochester, New York; ORCID: http://orcid.org/0000-0002-2157-7613.
- T. Ratcliffe is associate professor of medicine, University of Texas Long School of Medicine at San Antonio, San Antonio, Texas.
- D. Gordon is medical undergraduate education director, associate residency program director of emergency medicine, and associate professor of surgery, Duke University School of Medicine, Durham, North Carolina.
- B. Heist is clerkship codirector and assistant professor of medicine, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania.
- S. Lubarsky is assistant professor of neurology, McGill University, and faculty of medicine and core member, McGill Center for Medical Education, Montreal, Quebec, Canada; ORCID: http://orcid.org/0000-0001-5692-1771.
- C.A. Estrada is staff physician, Birmingham Veterans Affairs Medical Center, and director, Division of General Internal Medicine, and professor of medicine, University of Alabama, Birmingham, Alabama; ORCID: https://orcid.org/0000-0001-6262-7421.
- T. Ballard is plastic surgeon, Ann Arbor Plastic Surgery, Ann Arbor, Michigan.
- A.R. Artino Jr is deputy director for graduate programs in health professions education and professor of medicine, preventive medicine, and biometrics pathology, Uniformed Services University of the Health Sciences, Bethesda, Maryland; ORCID: http://orcid.org/0000-0003-2661-7853.
- A. Sergio Da Silva is senior lecturer in medical education and director of the masters in medical education program, Swansea University Medical School, Swansea, United Kingdom; ORCID: http://orcid.org/0000-0001-7262-0215.
- T. Cleary is chair, Applied Psychology Department, CUNY Graduate School and University Center, New York, New York, and associate professor of applied and professional psychology, Rutgers University, New Brunswick, New Jersey.
- J. Stojan is associate professor of internal medicine and pediatrics, University of Michigan Medical School, Ann Arbor, Michigan.
- L.D. Gruppen is director of the master of health professions education program and professor of learning health sciences, University of Michigan Medical School, Ann Arbor, Michigan; ORCID: http://orcid.org/0000-0002-2107-0126.

28
Boileau E, Audétat MC, St-Onge C. Just-in-time faculty development: a mobile application helps clinical teachers verify and describe clinical reasoning difficulties. BMC Medical Education 2019; 19:120. PMID: 31039779; PMCID: PMC6492340; DOI: 10.1186/s12909-019-1558-2.
Abstract
BACKGROUND Although clinical teachers can often identify struggling learners readily and reliably, they can be reluctant to act upon their impressions, resulting in failure to fail. In the absence of a clear process for identifying and remediating struggling learners, clinical teachers can be put off by the prospect of navigating the politically and personally charged waters of remediation and potential failing of students. METHODS To address this gap, we developed a problem-solving algorithm to support clinical teachers from the identification through the remediation of learners with clinical reasoning difficulties, which have significant implications for patient care. Based on this algorithm, a mobile application (Pdx) was developed and assessed in two emergency departments at a Canadian university, from 2015 to 2016, using interpretive description as our research design. Semi-structured interviews were conducted before and after a three-month trial with the application. Interviews were analysed both deductively, using pre-determined categories, and inductively, using emerging categories. RESULTS Twelve clinical teachers were interviewed. Their experience with the application revealed their need to first validate their impressions of difficulties in learners and to find the right words to describe them before difficulties could be addressed. The application was unanimously considered helpful regarding both these aspects, while the mobile format appeared instrumental in allowing clinical teachers to quickly access targeted information during clinical supervision. CONCLUSIONS The value placed on verifying impressions and finding the right words to pinpoint difficulties should be further explored in endeavours that aim to address the failure to fail phenomenon. Moreover, just-in-time mobile solutions, which mirror habitual clinical practices, may be used profitably for knowledge transfer in medical education, as an alternative form of faculty development.
Affiliation(s)
- Elisabeth Boileau
- Department of Family and Emergency Medicine, Université de Sherbrooke, Sherbrooke, Canada
- Marie-Claude Audétat
- Faculty of Medicine, Université de Genève, Geneva, Switzerland
- Faculty of Medicine, Université de Montréal, Montreal, Canada
- Christina St-Onge
- Department of Medicine, Université de Sherbrooke, Sherbrooke, Canada
- Paul Grand’Maison de la SMUS, Université de Sherbrooke, 3001, 12e avenue N, Sherbrooke, QC J1H 5N4, Canada

29
Duca NS, Glod S. Bridging the Gap Between the Classroom and the Clerkship: A Clinical Reasoning Curriculum for Third-Year Medical Students. MedEdPORTAL: The Journal of Teaching and Learning Resources 2019; 15:10800. PMID: 31139730; PMCID: PMC6507921; DOI: 10.15766/mep_2374-8265.10800.
Abstract
INTRODUCTION Clinical reasoning is the complex cognitive process that drives the diagnosis of disease and treatment of patients. There is a national call for medical educators to develop clinical reasoning curricula in undergraduate medical education. To address this need, we developed a longitudinal clinical reasoning curriculum for internal medicine clerkship students. METHODS We delivered six 1-hour sessions to approximately 40 students over the 15-week combined medicine-surgery clerkship at Penn State College of Medicine. We developed the content using previous work in clinical reasoning, including the American College of Physicians' Teaching Medicine Series book Teaching Clinical Reasoning. Students applied a clinical reasoning diagnostic framework to written cases during each workshop. Each session followed a scaffold approach and built upon previously learned clinical reasoning skills. We administered a pre- and postsurvey to assess students' baseline knowledge of clinical reasoning concepts and perceived confidence in performing clinical reasoning skills. Students also provided open-ended responses regarding the effectiveness of the curriculum. RESULTS The curriculum was well received by students and led to increased perceived knowledge of clinical reasoning concepts and increased confidence in applying clinical reasoning skills. Students commented on the usefulness of practicing clinical reasoning in a controlled environment while utilizing a framework that could be deliberately applied to patient care. DISCUSSION The longitudinal clinical reasoning curriculum was effective in reinforcing key concepts of clinical reasoning and allowed for deliberate practice in a controlled environment. The curriculum is generalizable to students in both the preclinical and clinical years.
Affiliation(s)
- Nicholas S. Duca
- Assistant Professor, Division of General Internal Medicine, Penn State Health Milton S. Hershey Medical Center
- Susan Glod
- Associate Professor, Division of General Internal Medicine, Penn State Health Milton S. Hershey Medical Center

30
Fürstenberg S, Oubaid V, Berberat PO, Kadmon M, Harendza S. Medical knowledge and teamwork predict the quality of case summary statements as an indicator of clinical reasoning in undergraduate medical students. GMS Journal for Medical Education 2019; 36:Doc83. PMID: 31844655; PMCID: PMC6905359; DOI: 10.3205/zma001291.
Abstract
Background: Clinical reasoning refers to a thinking process that includes medical problem-solving and medical decision-making skills. Several studies have shown that the clinical reasoning process can be influenced by a number of factors, e.g. context or personality traits, and the results of this thinking process are expressed in case presentation. The aim of this study was to identify factors which predict the quality of case summary statements, as an indicator of clinical reasoning of undergraduate medical students, in an assessment simulating the first day of residency. Methods: To investigate factors predicting aspects of clinical reasoning, 67 advanced undergraduate medical students participated in the role of a beginning resident in our competence-based assessment, which included a consultation hour, a patient management phase, and a handover. Participants filled out a Post Encounter Form (PEF) to document their case summary statements and other aspects of clinical reasoning. After each phase, they filled out the Strain Perception Questionnaire (STRAIPER) to measure their situation-dependent mental strain. To assess medical knowledge, the participants completed a 100-question multiple-choice test. To measure stress resistance, adherence to procedures, and teamwork, students took part in the Group Assessment of Performance (GAP) test for flight school applicants. These factors were included in a multiple linear regression analysis. Results: Medical knowledge and teamwork predicted the quality of case summary statements as an indicator of clinical reasoning of undergraduate medical students and explained approximately 20.3% of the variance. Neither age, gender, undergraduate curriculum, academic advancement, nor high school grade point average of the medical students in our sample had an effect on their clinical reasoning skills. Conclusion: The quality of case summary statements as an indicator of clinical reasoning can be predicted in undergraduate medical students by their medical knowledge and teamwork. Students should be supported in developing the ability to work in a team and in acquiring the long-term knowledge needed for good case summary statements as an important aspect of clinical reasoning.
Affiliation(s)
- Sophie Fürstenberg
- University Medical Center Hamburg-Eppendorf, III. Department of Internal Medicine, Hamburg, Germany
- Pascal O. Berberat
- Technical University of Munich, TUM Medical Education Center, School of Medicine, Munich, Germany
- Martina Kadmon
- University of Augsburg, Faculty of Medicine, Deanery, Augsburg, Germany
- Sigrid Harendza
- University Medical Center Hamburg-Eppendorf, III. Department of Internal Medicine, Hamburg, Germany
- *To whom correspondence should be addressed: Sigrid Harendza, University Medical Center Hamburg-Eppendorf, III. Department of Internal Medicine, Martinistr. 52, D-20246 Hamburg, Germany, Phone: +49 (0)40/7410-53908, Fax: +49 (0)40/7410-40218, E-mail:

31
Battista A, Konopasky A, Ramani D, Ohmer M, Mikita J, Howle A, Krajnik S, Torre D, Durning SJ. Clinical Reasoning in the Primary Care Setting: Two Scenario-Based Simulations for Residents and Attendings. MedEdPORTAL: The Journal of Teaching and Learning Resources 2018; 14:10773. PMID: 30800973; PMCID: PMC6346281; DOI: 10.15766/mep_2374-8265.10773.
Abstract
Introduction We describe the development and implementation of tools that medical educators or researchers can use to develop or analyze the clinical reasoning of physicians, from residents through attendings, in an outpatient clinic setting. The resource includes two scenario-based simulations (i.e., diabetes, angina), implementation support materials, an open-ended postencounter form, and a think-aloud reflection protocol. Method We designed two scenarios with potential case ambiguity and contextual factors to add complexity for studying clinical reasoning. The scenarios are designed to be used prior to an open-ended written exercise and a think-aloud reflection to elicit reasoning and reflection. We report on their implementation in a research context but developed them to be used in both educational and research settings. Results Twelve physicians (five interns, three residents, and four attendings) considered between three and six differential diagnoses (M = 4.0) for the diabetes scenario and between three and nine differentials (M = 4.3) for angina. In think-aloud reflections, participants reconsidered their thinking between zero and 14 times (M = 3.5) for diabetes and between zero and 11 times (M = 3.3) for angina. Cognitive load scores ranged from 4 to 8 (out of 10; M = 6.2) for diabetes and 5 to 8 (M = 6.6) for angina. Participants rated scenario authenticity between 4 and 5 (out of 5). Discussion The potential case content ambiguity, along with the contextual factors (e.g., the patient suggesting alternative diagnoses), provides a complex environment in which to explore or teach clinical reasoning.
Affiliation(s)
- Alexis Battista
- Assistant Professor, Graduate Programs in Health Professions Education, Uniformed Services University of the Health Sciences
- Abigail Konopasky
- Assistant Professor, Graduate Programs in Health Professions Education, Uniformed Services University of the Health Sciences
- Divya Ramani
- Research Associate, Graduate Programs in Health Professions Education, Uniformed Services University of the Health Sciences
- Megan Ohmer
- Research Associate, Graduate Programs in Health Professions Education, Uniformed Services University of the Health Sciences
- Jeffrey Mikita
- Chief, Department of Simulation, Walter Reed National Military Medical Center
- Anna Howle
- Simulation Educator, Department of Medical Simulation, Walter Reed National Military Medical Center
- Sarah Krajnik
- Nurse Educator, Department of Simulation, Walter Reed National Military Medical Center
- Dario Torre
- Associate Director, Graduate Programs in Health Professions Education, Uniformed Services University of the Health Sciences
- Steven J. Durning
- Director, Graduate Programs in Health Professions Education, Uniformed Services University of the Health Sciences

32
McBee E, Ratcliffe T, Schuwirth L, O'Neill D, Meyer H, Madden SJ, Durning SJ. Context and clinical reasoning: Understanding the medical student perspective. Perspectives on Medical Education 2018; 7:256-263. PMID: 29704167; PMCID: PMC6086813; DOI: 10.1007/s40037-018-0417-x.
Abstract
INTRODUCTION Studies have shown that a physician's clinical reasoning performance can be influenced by contextual factors. We explored how the clinical reasoning performance of medical students was impacted by contextual factors in order to expand upon previous findings in resident and board certified physicians. Using situated cognition as the theoretical framework, our aim was to evaluate the verbalized clinical reasoning processes of medical students in order to describe what impact the presence of contextual factors has on their reasoning performance. METHODS Seventeen medical student participants viewed three video recordings of clinical encounters portraying straightforward diagnostic cases in internal medicine with explicit contextual factors inserted. Participants completed a computerized post-encounter form as well as a think-aloud protocol. Three authors analyzed verbatim transcripts from the think-aloud protocols using a constant comparative approach. After iterative coding, utterances were analyzed and grouped into categories and themes. RESULTS Six categories and ten associated themes emerged, which demonstrated overlap with findings from previous studies in resident and attending physicians. Four overlapping categories included emotional disturbances, behavioural inferences about the patient, doctor-patient relationship, and difficulty with closure. Two new categories emerged to include anchoring and misinterpretation of data. DISCUSSION The presence of contextual factors appeared to impact clinical reasoning performance in medical students. The data suggest that a contextual factor can be innate to the clinical scenario, consistent with situated cognition theory. These findings build upon our understanding of clinical reasoning performance from both a theoretical and practical perspective.
Affiliation(s)
- Elexis McBee
- Department of Medicine, F. Edward Hébert School of Medicine, Uniformed Services University, at Naval Medical Center San Diego, San Diego, CA, USA
- Temple Ratcliffe
- Department of Medicine, University of Texas Health Science Center at San Antonio, San Antonio, TX, USA
- Lambert Schuwirth
- School of Medicine, Flinders University, Adelaide, South Australia, Australia
- Daniel O'Neill
- Department of Medicine, F. Edward Hébert School of Medicine, Uniformed Services University, Bethesda, MD, USA
- Holly Meyer
- Department of Medicine, F. Edward Hébert School of Medicine, Uniformed Services University, Bethesda, MD, USA
- Shelby J Madden
- Department of Medicine, F. Edward Hébert School of Medicine, Uniformed Services University, Bethesda, MD, USA
- Steven J Durning
- Department of Medicine, F. Edward Hébert School of Medicine, Uniformed Services University, Bethesda, MD, USA

33
Escudier MP, Woolford MJ, Tricio JA. Assessing the application of knowledge in clinical problem-solving: the structured professional reasoning exercise. European Journal of Dental Education 2018; 22:e269-e277. [PMID: 28804939] [DOI: 10.1111/eje.12286]
Abstract
INTRODUCTION Clinical reasoning is a fundamental and core clinical competence of healthcare professionals. The study aimed to investigate the utility of the Structured Professional Reasoning Exercise (SPRE), a new competence assessment method designed to measure dental students' clinical reasoning in simulated scenarios covering the clinical areas of Oral Disease, Primary Dental Care and Restorative Dentistry, Child Dental Health, and Dental Practice and Clinical Governance. MATERIALS AND METHODS A total of 313 year-5 students sat the assessment. Students spent 45 minutes assimilating the scenarios before rotating through four stations, at each of which a pair of examiners (drawn from a pool of 39 trained examiners) independently assessed a single scenario over a ten-minute period using a structured marking sheet. After the assessment, all students and examiners were invited to complete an anonymous questionnaire on their perceptions of the exercise. These questionnaires and the examination scores were statistically analysed. RESULTS AND DISCUSSION Oral Disease showed the lowest scores; Dental Practice and Governance the highest. The overall intraclass correlation coefficient (ICC) was 0.770, whilst examiner training helped to increase the ICC from 0.716 in 2013 to 0.835 in 2014. Exploratory factor analysis revealed one major factor with an eigenvalue of 2.75 (68.8% of total variance). The generalizability coefficient was consistent at 0.806. A total of 295 students and 32 examiners completed the perception questionnaire. Students' lowest-rated perceptions of the examination were that it was "Unpleasant" and "Unenjoyable", whilst the highest-rated were that it was "Interesting", "Valuable" and "Important". The majority of students and examiners reported the assessment as acceptable, fair and valid. CONCLUSION The SPRE offers a reliable, valid and acceptable assessment method, provided it comprises at least four scenarios with two trained, independently marking assessors.
Affiliation(s)
- M P Escudier
- King's College London Dental Institute, London, UK
- M J Woolford
- King's College London Dental Institute, London, UK
- J A Tricio
- King's College London Dental Institute, London, UK
- Faculty of Dentistry, University of the Andes, Santiago, Chile

34
Fischer MA, Kennedy KM, Durning S, Schijven MP, Ker J, O’Connor P, Doherty E, Kropmans TJB. Situational awareness within objective structured clinical examination stations in undergraduate medical training - a literature search. BMC Medical Education 2017; 17:262. [PMID: 29268744] [PMCID: PMC5740962] [DOI: 10.1186/s12909-017-1105-y]
Abstract
BACKGROUND Medical students may not be able to identify the essential elements of situational awareness (SA) necessary for clinical reasoning. Recent studies suggest that students have little insight into cognitive processing and SA in clinical scenarios. Objective Structured Clinical Examinations (OSCEs) could be used to assess certain elements of situational awareness. The purpose of this paper is to review the literature with a view to identifying whether levels of SA based on Endsley's model can be assessed utilising OSCEs during undergraduate medical training. METHODS A systematic search pertaining to SA and OSCEs was performed to identify studies published between January 1975 (the first paper describing an OSCE) and February 2017 in peer-reviewed international journals published in English. PUBMED, EMBASE, PsycINFO Ovid and SCOPUS were searched for papers that described the assessment of SA using OSCEs among undergraduate medical students. Key search terms included "objective structured clinical examination", "objective structured clinical assessment" or "OSCE", combined with "non-technical skills", "sense-making", "clinical reasoning", "perception", "comprehension", "projection", "situation awareness", "situational awareness" and "situation assessment". Boolean operators (AND, OR) were used to narrow the search to papers relevant to the research question. Areas of interest were the elements of SA that can be assessed by these examinations. RESULTS The initial search of the literature retrieved 1127 publications. Upon removal of duplicates and of papers relating to nursing, paramedical disciplines, pharmacy and veterinary education by title, abstract or full text, 11 articles were eligible for inclusion as relating to the assessment of elements of SA in undergraduate medical students. DISCUSSION Review of the literature suggests that whole-task OSCEs enable the evaluation of SA associated with clinical reasoning skills. If they address the levels of SA, these OSCEs can provide supportive feedback and strengthen educational measures associated with higher diagnostic accuracy and reasoning abilities. CONCLUSION Based on the findings, the early exposure of medical students to SA is recommended, utilising OSCEs to evaluate and facilitate SA in dynamic environments.
Affiliation(s)
- Markus A. Fischer
- School of Medicine, National University of Ireland Galway, University Road, Galway, H91 TK33, Ireland
- Kieran M. Kennedy
- School of Medicine, National University of Ireland Galway, University Road, Galway, H91 TK33, Ireland
- Steven Durning
- Department of Internal Medicine, Uniformed Services University of the Health Sciences, 4301 Jones Bridge Road, Bethesda, MD 20814, USA
- Marlies P. Schijven
- Department of Surgery, Academic Medical Center Amsterdam, Meibergdreef 9, 1105 AZ Amsterdam-Zuidoost, The Netherlands
- Jean Ker
- Clinical Skills Centre, Level 6, Ninewells Hospital & Medical School, University of Dundee, Dundee, UK
- Paul O’Connor
- Discipline of General Practice, National University of Ireland Galway, Distillery Road, Galway, H91 TK33, Ireland
- Eva Doherty
- Royal College of Surgeons in Ireland, 123 St Stephen’s Green, Dublin 2, Ireland
- Thomas J. B. Kropmans
- School of Medicine, National University of Ireland Galway, University Road, Galway, H91 TK33, Ireland

35
McBee E, Ratcliffe T, Picho K, Schuwirth L, Artino AR, Yepes-Rios AM, Masel J, van der Vleuten C, Durning SJ. Contextual factors and clinical reasoning: differences in diagnostic and therapeutic reasoning in board certified versus resident physicians. BMC Medical Education 2017; 17:211. [PMID: 29141616] [PMCID: PMC5688653] [DOI: 10.1186/s12909-017-1041-x]
Abstract
BACKGROUND The impact of context on the complex process of clinical reasoning is not well understood. Using situated cognition as the theoretical framework and videos to provide the same contextual "stimulus" to all participants, we examined the relationship between specific contextual factors and diagnostic and therapeutic reasoning accuracy in board certified internists versus resident physicians. METHODS Each participant viewed three videotaped clinical encounters portraying common diagnoses in internal medicine. We explicitly modified the context to assess its impact on performance (patient and physician contextual factors). Patient contextual factors, including English as a second language and emotional volatility, were portrayed in the videos. Physician participant contextual factors were self-rated sleepiness and burnout. The accuracy of diagnostic and therapeutic reasoning was compared across covariates using Fisher's exact tests, Mann-Whitney U tests and Spearman's rho correlations as appropriate. RESULTS Fifteen board certified internists and 10 resident physicians participated from 2013 to 2014. Accuracy of diagnostic and therapeutic reasoning did not differ between groups despite residents reporting significantly higher rates of sleepiness (mean rank 20.45 vs 8.03, U = 0.5, p < .001) and burnout (mean rank 20.50 vs 8.00, U = 0.0, p < .001). Accuracy of diagnosis and treatment were uncorrelated (r = 0.17, p = .65). In both groups, the proportion scoring correct responses for treatment was higher than the proportion scoring correct responses for diagnosis. CONCLUSIONS This study underscores that specific contextual factors appear to impact clinical reasoning performance. Further, the processes of diagnostic and therapeutic reasoning, although related, may not be interchangeable. This raises important questions about the impact that contextual factors have on clinical reasoning and provides insight into how clinical reasoning processes in more authentic settings may be explained by situated cognition theory.
Affiliation(s)
- Elexis McBee
- Department of Medicine, Naval Medical Center San Diego, 34800 Bob Wilson Drive, San Diego, CA 92134, USA
- Temple Ratcliffe
- Department of Medicine, University of Texas Health Science Center at San Antonio, 7703 Floyd Curl Drive, San Antonio, TX 78229, USA
- Katherine Picho
- Department of Medicine, F. Edward Hébert School of Medicine, Uniformed Services University, 4301 Jones Bridge Rd, Bethesda, MD 20814, USA
- Lambert Schuwirth
- School of Medicine, Flinders University, GPO Box 2100, Adelaide, SA 5001, Australia
- Anthony R. Artino
- Department of Medicine, F. Edward Hébert School of Medicine, Uniformed Services University, 4301 Jones Bridge Rd, Bethesda, MD 20814, USA
- Ana Monica Yepes-Rios
- Department of Medicine, F. Edward Hébert School of Medicine, Uniformed Services University, 4301 Jones Bridge Rd, Bethesda, MD 20814, USA
- Jennifer Masel
- Department of Medicine, Walter Reed National Military Medical Center, 8901 Wisconsin Ave, Bethesda, MD 20889, USA
- Cees van der Vleuten
- Department of Educational Development and Research, Maastricht University, 6200 MD Maastricht, The Netherlands
- Steven J. Durning
- Department of Medicine, F. Edward Hébert School of Medicine, Uniformed Services University, 4301 Jones Bridge Rd, Bethesda, MD 20814, USA

36
Hege I, Kononowicz AA, Adler M. A Clinical Reasoning Tool for Virtual Patients: Design-Based Research Study. JMIR Medical Education 2017; 3:e21. [PMID: 29097355] [PMCID: PMC5691243] [DOI: 10.2196/mededu.8100]
Abstract
BACKGROUND Clinical reasoning is a fundamental process medical students have to learn during and after medical school. Virtual patients (VPs) are a technology-enhanced learning method for teaching clinical reasoning. However, VP systems do not exploit their full potential concerning the clinical reasoning process; for example, most systems focus on the outcome rather than the process of clinical reasoning. OBJECTIVES Grounding our concept in an earlier qualitative study, we aimed to design and implement a tool to enhance VPs with activities and feedback that specifically foster the acquisition of clinical reasoning skills. METHODS We designed the tool by translating elements of a conceptual clinical reasoning learning framework into software requirements. The resulting clinical reasoning tool enables learners to build their patient's illness script as a concept map while they are working on a VP scenario. The student's map is compared with an expert's reasoning at each stage of the VP, which is technically enabled by the use of Medical Subject Headings, a comprehensive controlled vocabulary published by the US National Library of Medicine. The tool is implemented using Web technologies, has an open architecture that enables its integration into various systems through an open application program interface, and is available under a Massachusetts Institute of Technology license. RESULTS We conducted usability tests following a think-aloud protocol and a pilot field study with maps created by 64 medical students. The results show that learners interact with the tool but create fewer nodes and connections in the concept map than an expert. Further research and usability tests are required to analyze the reasons. CONCLUSIONS The presented tool is a versatile, systematically developed software component that specifically supports the acquisition of clinical reasoning skills. It can be plugged into VP systems or used as stand-alone software in other teaching scenarios. The modular design allows extension with new feedback mechanisms and learning analytics algorithms.
Affiliation(s)
- Inga Hege
- Institute for Medical Education, University Hospital of LMU Munich, Munich, Germany
- Andrzej A Kononowicz
- Department of Bioinformatics and Telemedicine, Jagiellonian University Medical College, Krakow, Poland

37
King MA, Phillipi CA, Buchanan PM, Lewin LO. Developing Validity Evidence for the Written Pediatric History and Physical Exam Evaluation Rubric. Acad Pediatr 2017; 17:68-73. [PMID: 27521461] [DOI: 10.1016/j.acap.2016.08.001]
Abstract
OBJECTIVE The written history and physical examination (H&P) is an underutilized source of medical trainee assessment. The authors describe the development of, and validity evidence for, the Pediatric History and Physical Exam Evaluation (P-HAPEE) rubric: a novel tool for evaluating written H&Ps. METHODS Using an iterative process, the authors drafted, revised, and implemented the 10-item rubric at 3 academic institutions in 2014. Eighteen attending physicians and 5 senior residents each scored 10 third-year medical student H&Ps. Inter-rater reliability (IRR) was determined using intraclass correlation coefficients. Cronbach α was used to report consistency and Spearman rank-order correlations to determine relationships between rubric items. Raters provided a global assessment, recorded the time to review and score each H&P, and completed a rubric utility survey. RESULTS The overall intraclass correlation was 0.85, indicating adequate IRR. Global assessment IRR was 0.89. IRR for low- and high-quality H&Ps was significantly greater than for medium-quality ones but did not differ on the basis of rater category (attending physician vs senior resident), note format (electronic health record vs nonelectronic), or student diagnostic accuracy. Cronbach α was 0.93. The highest correlation between an individual item and the total score was for assessment (0.84); the highest interitem correlation was between assessment and differential diagnosis (0.78). Mean time to review and score an H&P was 16.3 minutes; residents took significantly longer than attending physicians. All raters described rubric utility as "good" or "very good" and endorsed continued use. CONCLUSIONS The P-HAPEE rubric offers a novel, practical, reliable, and valid method for supervising physicians to assess pediatric written H&Ps.
Affiliation(s)
- Marta A King
- Division of General Academic Pediatrics, Saint Louis University School of Medicine, St Louis, MO
- Carrie A Phillipi
- Department of Pediatrics, Oregon Health & Science University, Portland, OR
- Paula M Buchanan
- Center for Outcomes Research, Saint Louis University, St Louis, MO
- Linda O Lewin
- Department of Pediatrics, University of Maryland School of Medicine, Baltimore, MD

38
Dory V, Gagnon R, Charlin B, Vanpee D, Leconte S, Duyver C, Young M, Loye N. In Brief: Validity of Case Summaries in Written Examinations of Clinical Reasoning. Teaching and Learning in Medicine 2016; 28:375-384. [PMID: 27294400] [DOI: 10.1080/10401334.2016.1190730]
Abstract
UNLABELLED Construct: The purpose of this study was to provide initial evidence of the validity of written case summaries as assessments of clinical problem representation in a classroom setting. BACKGROUND To solve clinical problems, clinicians must gain a clear representation of the issues. In the clinical setting, oral case presentations (or summaries) are used to assess learners' ability to gather, synthesize, and "translate" pertinent case information. This ability can be assessed in Objective Structured Clinical Examination and virtual patient settings using oral or written case summaries. Evidence of their validity in these settings includes adequate interrater agreement and moderate correlation with other assessments of clinical reasoning. We examined the use of written case summaries in a classroom setting as part of an examination designed to assess clinical reasoning. APPROACH We developed and implemented written examinations for 2 preclerkship general practice courses in Years 4 and 5 of a 7-year curriculum. Examinations included 8 case summary questions in Year 4 and 5 questions in Year 5. Seven hundred students participated. Cases were scored using 3 criteria: extraction of pertinent findings, semantic quality, and global ratings. We examined the item parameters (using classical test theory) and generalizability of case summary items. We computed correlations between case summary scores and scores on other questions within the examination. RESULTS Item parameters were acceptable (average item difficulty = 0.49-0.73 and 0.59-0.68 in Years 4 and 5, respectively; average point-biserials = 0.21-0.24 and 0.18-0.21). Scores were moderately generalizable (G coefficients = 0.40-0.50), with case specificity a substantial source of measurement error (10.2%-19.5% of variance). Scoring and rater had small effects. Correlations with related constructs were low to moderate. CONCLUSIONS There is good evidence regarding the scoring and generalizability of written case summaries for the assessment of clinical problem representation. Further evidence regarding the extrapolation and implications of these assessments is warranted.
Affiliation(s)
- Valérie Dory
- Department of Medicine and Centre for Medical Education, McGill University, Montreal, Quebec, Canada
- Fonds de la Recherche Scientifique, Brussels, Belgium
- Institut de Recherche Santé et Société, Université catholique de Louvain, Brussels, Belgium
- Robert Gagnon
- Centre of Pedagogy Applied to Health Sciences, Université de Montréal, Montreal, Quebec, Canada
- Bernard Charlin
- Centre of Pedagogy Applied to Health Sciences, Université de Montréal, Montreal, Quebec, Canada
- Dominique Vanpee
- Institut de Recherche Santé et Société, Université catholique de Louvain, Brussels, Belgium
- Sophie Leconte
- Institut de Recherche Santé et Société, Université catholique de Louvain, Brussels, Belgium
- Corentin Duyver
- Institut de Recherche Santé et Société, Université catholique de Louvain, Brussels, Belgium
- Meredith Young
- Department of Medicine and Centre for Medical Education, McGill University, Montreal, Quebec, Canada
- Nathalie Loye
- Département d'administration et fondements de l'éducation, Université de Montréal, Montreal, Quebec, Canada

39
McBee E, Ratcliffe T, Goldszmidt M, Schuwirth L, Picho K, Artino AR, Masel J, Durning SJ. Clinical Reasoning Tasks and Resident Physicians: What Do They Reason About? Academic Medicine 2016; 91:1022-1028. [PMID: 26650677] [DOI: 10.1097/acm.0000000000001024]
Abstract
PURPOSE A framework of clinical reasoning tasks thought to occur in a clinical encounter was recently developed. It proposes that diagnostic and therapeutic reasoning comprise 24 tasks. The authors of this current study used this framework to investigate what internal medicine residents reason about when they approach straightforward clinical cases. METHOD Participants viewed three video-recorded clinical encounters portraying common diagnoses. After each video, participants completed a post encounter form and think-aloud protocol. Two authors analyzed transcripts from the think-aloud protocols using a constant comparative approach. They conducted iterative coding of the utterances, classifying each according to the framework of clinical reasoning tasks. They evaluated the type, number, and sequence of tasks the residents used. RESULTS Ten residents participated in the study in 2013-2014. Across all three cases, the residents employed 14 clinical reasoning tasks. Nearly all coded tasks were associated with framing the encounter or diagnosis. The order in which residents used specific tasks varied. The average number of tasks used per case was as follows: Case 1, 4.4 (range 1-10); Case 2, 4.6 (range 1-6); and Case 3, 4.7 (range 1-7). The residents used some tasks repeatedly; the average number of task utterances was 11.6, 13.2, and 14.7 for Cases 1, 2, and 3, respectively. CONCLUSIONS Results suggest that the use of clinical reasoning tasks occurs in a varied, not sequential, process. The authors provide suggestions for strengthening the framework to more fully encompass the spectrum of reasoning tasks that occur in residents' clinical encounters.
Affiliation(s)
- Elexis McBee
- E. McBee is assistant professor of medicine, Uniformed Services University of the Health Sciences, based at Naval Medical Center San Diego, San Diego, California. T. Ratcliffe is assistant professor of medicine, University of Texas Health Science Center, San Antonio, Texas. M. Goldszmidt is associate professor of medicine, Schulich School of Medicine & Dentistry, Western University, London, Ontario, Canada. L. Schuwirth is professor of medicine, Flinders University, Adelaide, Australia. K. Picho is assistant professor of medicine, Uniformed Services University of the Health Sciences, Bethesda, Maryland. A.R. Artino Jr is associate professor of preventive medicine and biometrics, Uniformed Services University of the Health Sciences, Bethesda, Maryland. J. Masel is a third-year resident, Walter Reed National Military Medical Center, Bethesda, Maryland. S.J. Durning is professor of medicine and pathology, Uniformed Services University of the Health Sciences, Bethesda, Maryland.

40
Bagnasco A, Tolotti A, Pagnucci N, Torre G, Timmins F, Aleo G, Sasso L. How to maintain equity and objectivity in assessing the communication skills, in a large group of student nurses during a long examination session, using the Objective Structured Clinical Examination (OSCE). Nurse Education Today 2016; 38:54-60. [PMID: 26803712] [DOI: 10.1016/j.nedt.2015.11.034]
Abstract
BACKGROUND While development, testing, and innovation of the Objective Structured Clinical Examination (OSCE) are common in the international literature, studies from the United States of America (USA), Australia, and the United Kingdom (UK) predominate. Little is known about OSCE use in European countries such as Italy, where, other than cost analysis, there is little reporting of OSCE use or validation. OBJECTIVES This paper reports on one Italian initiative, which evaluated the equity and objectivity of the OSCE method of assessing communication skills. DESIGN An OSCE method was used to assess the communication skills of first-year students of the Degree Course in Nursing. A method of simulation was implemented through role-playing with standardized patients. An observational method was used to collect data. PARTICIPANTS AND SETTINGS Four hundred and twenty-one first-year undergraduate nursing students at one university site in Italy took part. METHODS Ten examination sessions were carried out. The students' performances were assessed by two examiners who used a structured observation grid and conducted their assessments separately. A scenario simulated by four nurses with acting experience was used as the topic of the students' examination. RESULTS Calculation of the daily rate of students who passed the examination revealed a random distribution over time. The nonparametric correlation indices for the assessments and the scores assigned by the two examiners proved statistically significant (P≤0.001). CONCLUSIONS The study confirmed the validity of the OSCE method in ensuring the equity and objectivity of communication skills assessment, for certification purposes, in a large population of nursing students throughout the duration of the examination. This has important implications for nurse education and practice, as it is not clear to what extent OSCE approaches, while deemed objective, are culturally sensitive, or valid and reliable across cultures. This is something that requires further research and examination in this field.
Affiliation(s)
- Annamaria Bagnasco
- Department of Health Sciences, University of Genoa, Via Pastore 1, I-16132 Genoa, Italy
- Angela Tolotti
- Department of Health Sciences, University of Genoa, Via Pastore 1, I-16132 Genoa, Italy
- Nicola Pagnucci
- Department of Clinical and Experimental Medicine, University of Pisa, Via Savi 10, I-56100 Pisa, Italy
- Giancarlo Torre
- School of Medical and Pharmaceutical Sciences, University of Genoa, Via Pastore 1, I-16132 Genoa, Italy
- Fiona Timmins
- School of Nursing, Trinity College, College Green, Dublin 2, Ireland
- Giuseppe Aleo
- Department of Health Sciences, University of Genoa, Via Pastore 1, I-16132 Genoa, Italy
- Loredana Sasso
- Department of Health Sciences, University of Genoa, Via Pastore 1, I-16132 Genoa, Italy

41
Smith S, Kogan JR, Berman NB, Dell MS, Brock DM, Robins LS. The Development and Preliminary Validation of a Rubric to Assess Medical Students' Written Summary Statements in Virtual Patient Cases. Academic Medicine 2016; 91:94-100. [PMID: 26726864] [DOI: 10.1097/acm.0000000000000800]
Abstract
PURPOSE The ability to create a concise summary statement can be assessed as a marker for clinical reasoning. The authors describe the development and preliminary validation of a rubric to assess such summary statements. METHOD Between November 2011 and June 2014, four researchers independently coded 50 summary statements randomly selected from a large database of medical students' summary statements in virtual patient cases to each create an assessment rubric. Through an iterative process, they created a consensus assessment rubric and applied it to 60 additional summary statements. Cronbach alpha calculations determined the internal consistency of the rubric components, intraclass correlation coefficient (ICC) calculations determined the interrater agreement, and Spearman rank-order correlations determined the correlations between rubric components. Researchers' comments describing their individual rating approaches were analyzed using content analysis. RESULTS The final rubric included five components: factual accuracy, appropriate narrowing of the differential diagnosis, transformation of information, use of semantic qualifiers, and a global rating. Internal consistency was acceptable (Cronbach alpha 0.771). Interrater reliability for the entire rubric was acceptable (ICC 0.891; 95% confidence interval 0.859-0.917). Spearman calculations revealed a range of correlations across cases. Content analysis of the researchers' comments indicated differences in their application of the assessment rubric. CONCLUSIONS This rubric has potential as a tool for feedback and assessment. Opportunities for future study include establishing interrater reliability with other raters and on different cases, designing training for raters to use the tool, and assessing how feedback using this rubric affects students' clinical reasoning skills.
Affiliation(s)
- Sherilyn Smith
- S. Smith is professor, Department of Pediatrics, University of Washington School of Medicine, Seattle, Washington. J.R. Kogan is associate professor, Department of Medicine, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania. N.B. Berman is professor, Department of Pediatrics, Geisel School of Medicine at Dartmouth, Hanover, New Hampshire, and executive medical director, MedU, Lebanon, New Hampshire. M.S. Dell is professor, Department of Pediatrics, Case Western Reserve University School of Medicine, and director of undergraduate medical education, Rainbow Babies and Children's Hospital, Cleveland, Ohio. D.M. Brock is associate professor, Department of Family Medicine and MEDEX Northwest, University of Washington School of Medicine, Seattle, Washington. L.S. Robins is professor, Departments of Biomedical Informatics and Medical Education, University of Washington School of Medicine, Seattle, Washington.

42
Wijnen-Meijer M, Ten Cate O, van der Schaaf M, Burgers C, Borleffs J, Harendza S. Vertically integrated medical education and the readiness for practice of graduates. BMC Medical Education 2015; 15:229. [PMID: 26689282] [PMCID: PMC4687104] [DOI: 10.1186/s12909-015-0514-z]
Abstract
BACKGROUND Medical curricula are becoming more and more vertically integrated (VI) to better prepare graduates for clinical practice. VI curricula feature early clinical education, integration of biomedical sciences, and increasing levels of clinical responsibility for trainees. Results of earlier questionnaire-based studies indicate that the type of curriculum can affect preparedness for work as perceived by students or supervisors. The aim of the present study was to determine differences in the actual performance of graduates from VI and non-VI curricula. METHODS We developed and implemented an authentic performance assessment, based on different facets of competence, for medical near-graduates in the role of beginning residents on a very busy day. Fifty-nine candidates participated: 30 VI (Utrecht, The Netherlands) and 29 non-VI (Hamburg, Germany). Two physicians, one nurse and five standardized patients independently assessed each candidate on different facets of competence. Afterwards, the physicians indicated how much supervision they estimated each candidate would require on nine so-called "Entrustable Professional Activities" (EPAs) unrelated to the observed scenarios. RESULTS Graduates from a VI curriculum received significantly higher scores from the physicians for the facet of competence "active professional development", with features such as 'reflection' and 'asking for feedback'. In addition, VI graduates scored better on the EPA "solving a management problem", while non-VI graduates received higher scores for the EPA "breaking bad news". CONCLUSIONS This study gives an impression of the actual performance of medical graduates from VI and non-VI curricula. Although few differences were found, VI graduates received higher scores for features of professional development, which is important for postgraduate training and continuing education.
Affiliation(s)
- Marjo Wijnen-Meijer
- Department of Education and Training, Leiden University Medical Center, Leiden, The Netherlands.
- Center for Research and Development of Education, University Medical Center Utrecht, Utrecht, The Netherlands.
- Olle Ten Cate
- Center for Research and Development of Education, University Medical Center Utrecht, Utrecht, The Netherlands.
- Department of Medicine, University of California, San Francisco, USA.
- Chantalle Burgers
- Center for Research and Development of Education, University Medical Center Utrecht, Utrecht, The Netherlands.
- Jan Borleffs
- Center for Innovation and Research in Medical Education, University of Groningen and University Medical Center Groningen, Groningen, The Netherlands.
- Sigrid Harendza
- III. Department of Internal Medicine, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20246, Hamburg, Germany.
43
McBee E, Ratcliffe T, Picho K, Artino AR, Schuwirth L, Kelly W, Masel J, van der Vleuten C, Durning SJ. Consequences of contextual factors on clinical reasoning in resident physicians. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2015; 20:1225-36. [PMID: 25753295 DOI: 10.1007/s10459-015-9597-x] [Citation(s) in RCA: 34] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/28/2014] [Accepted: 02/19/2015] [Indexed: 05/26/2023]
Abstract
Context specificity and the impact that contextual factors have on the complex process of clinical reasoning are poorly understood. Using situated cognition as the theoretical framework, our aim was to evaluate the verbalized clinical reasoning processes of resident physicians in order to describe what impact the presence of contextual factors has on their clinical reasoning. Participants viewed three video-recorded clinical encounters portraying straightforward diagnoses in internal medicine with select patient contextual factors modified. After watching each video recording, participants completed a think-aloud protocol. Transcripts from the think-aloud protocols were analyzed using a constant comparative approach. After iterative coding, utterances were analyzed for emergent themes and grouped into categories, themes and subthemes. Ten residents participated in the study, with saturation reached during analysis. Participants universally acknowledged the presence of contextual factors in the video recordings. Four categories emerged as a consequence of the contextual factors: (1) emotional reactions, (2) behavioral inferences, (3) optimizing the doctor-patient relationship, and (4) difficulty with closure of the clinical encounter. The presence of contextual factors may impact clinical reasoning performance in resident physicians. When confronted with contextual factors in a clinical scenario, residents experienced difficulty with closure of the encounter, exhibited as diagnostic uncertainty. This finding raises important questions about the relationship between contextual factors and clinical reasoning activities and how this relationship might influence the cost-effectiveness of care. This study also provides insight into how the phenomenon of context specificity may be explained using situated cognition theory.
Affiliation(s)
- Elexis McBee
- Department of Medicine, Naval Medical Center San Diego, 34800 Bob Wilson Drive, San Diego, CA, 92134, USA.
- Temple Ratcliffe
- Department of Medicine, University of Texas Health Science Center at San Antonio, 7703 Floyd Curl Drive, San Antonio, TX, 78229, USA
- Katherine Picho
- Department of Medicine, F. Edward Hébert School of Medicine, Uniformed Services University, 4301 Jones Bridge Rd., Bethesda, MD, 20814, USA
- Anthony R Artino
- Department of Medicine, F. Edward Hébert School of Medicine, Uniformed Services University, 4301 Jones Bridge Rd., Bethesda, MD, 20814, USA
- Lambert Schuwirth
- Flinders University, School of Medicine, GPO Box 2100, Adelaide, 5001, South Australia
- William Kelly
- Department of Medicine, F. Edward Hébert School of Medicine, Uniformed Services University, 4301 Jones Bridge Rd., Bethesda, MD, 20814, USA
- Jennifer Masel
- Department of Medicine, Walter Reed National Military Medical Center, 8901 Wisconsin Ave., Bethesda, MD, 20889, USA
- Cees van der Vleuten
- Department of Educational Development and Research, Maastricht University, 6200 MD, Maastricht, The Netherlands
- Steven J Durning
- Department of Medicine, F. Edward Hébert School of Medicine, Uniformed Services University, 4301 Jones Bridge Rd., Bethesda, MD, 20814, USA
44
Yudkowsky R, Park YS, Hyderi A, Bordage G. Characteristics and Implications of Diagnostic Justification Scores Based on the New Patient Note Format of the USMLE Step 2 CS Exam. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2015; 90:S56-S62. [PMID: 26505103 DOI: 10.1097/acm.0000000000000900] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
BACKGROUND To determine the psychometric characteristics of diagnostic justification scores based on the patient note format of the United States Medical Licensing Examination Step 2 Clinical Skills exam, which requires students to document history and physical findings, differential diagnoses, diagnostic justification, and a plan for immediate workup. METHOD End-of-third-year medical students at one institution wrote notes for five standardized patient cases in May 2013 (n = 180) and 2014 (n = 177). Each case was scored using a four-point rubric to rate each of the four note components. Descriptive statistics and item analyses were computed, and a generalizability study was done. RESULTS Across cases, 10% to 48% of students provided no diagnostic justification or had several missing or incorrect links between history and physical findings and diagnoses. The average intercase correlation for justification scores ranged from 0.06 to 0.16; the internal consistency reliability of justification scores (coefficient alpha across cases) was 0.38. Overall, justification scores had the highest mean item discrimination across cases. The generalizability study showed that the person-case interaction (12%) and task-case interaction (13%) had the largest variance components, indicating substantial case specificity. CONCLUSIONS The diagnostic justification task provides unique information about student achievement and curricular gaps. Students struggled to correctly justify their diagnoses, and performance was highly case specific. Diagnostic justification was the most discriminating element of the patient note and had the greatest variability in student performance across cases. The curriculum should provide a wide range of clinical cases and emphasize recognition and interpretation of clinically discriminating findings to promote the development of clinical reasoning skills.
45
Does Objective Structured Clinical Examinations Score Reflect the Clinical Reasoning Ability of Medical Students? Am J Med Sci 2015; 350:64-7. [PMID: 25647834 PMCID: PMC4495861 DOI: 10.1097/maj.0000000000000420] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/03/2022]
Abstract
Background: Clinical reasoning ability is an important factor in a physician's competence and thus should be taught and tested in medical schools. Medical schools generally use objective structured clinical examinations (OSCE) to measure the clinical competency of medical students. However, it is unknown whether OSCE can also evaluate clinical reasoning ability. In this study, the authors investigated whether OSCE scores reflected students' clinical reasoning abilities. Methods: Sixty-five fourth-year medical students participated in this study. Medical students completed the OSCE with 4 cases using standardized patients. For assessment of clinical reasoning, students were asked to list differential diagnoses and the findings that were compatible or not compatible with each diagnosis. The OSCE score (score of patient encounter), diagnostic accuracy score, clinical reasoning score, clinical knowledge score and grade point average (GPA) were obtained for each student, and correlation analysis was performed. Results: Clinical reasoning score was significantly correlated with diagnostic accuracy and GPA (correlation coefficient = 0.258 and 0.380; P = 0.038 and 0.002, respectively) but not with OSCE score or clinical knowledge score (correlation coefficient = 0.137 and 0.242; P = 0.276 and 0.052, respectively). Total OSCE score was not significantly correlated with clinical knowledge test score, clinical reasoning score, diagnostic accuracy score or GPA. Conclusions: OSCE score from patient encounters did not reflect the clinical reasoning abilities of the medical students in this study. The evaluation of medical students' clinical reasoning abilities through OSCE should be strengthened.
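The analysis above is plain pairwise correlation between score types. A minimal Pearson correlation sketch (the per-student score values below are made up for illustration, not the study's data):

```python
import numpy as np

def pearson_r(x, y) -> float:
    """Pearson product-moment correlation between two score lists."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xd, yd = x - x.mean(), y - y.mean()
    return float((xd @ yd) / np.sqrt((xd @ xd) * (yd @ yd)))

# Hypothetical per-student scores
clinical_reasoning = [60, 72, 68, 80, 75]
gpa = [3.1, 3.6, 3.4, 3.9, 3.7]
print(round(pearson_r(clinical_reasoning, gpa), 3))
```

The reported coefficients (e.g. 0.258 between clinical reasoning and diagnostic accuracy) would come from applying exactly this computation to each pair of score columns.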
46
Melderis S, Gutowski JP, Harendza S. Overspecialized and undertrained? Patient diversity encountered by medical students during their internal medicine clerkship at a university hospital. BMC MEDICAL EDUCATION 2015; 15:62. [PMID: 25880036 PMCID: PMC4384319 DOI: 10.1186/s12909-015-0353-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/23/2014] [Accepted: 03/26/2015] [Indexed: 06/04/2023]
Abstract
BACKGROUND During the four-month internal medicine clerkship in their final year, undergraduate medical students are closely involved in patient care. Little is known about what constitutes their typical learning experiences with respect to patient diversity within the different subspecialties of internal medicine and during on-call hours. METHODS Twenty-five final-year medical students (16 female, 9 male) on their internal medicine clerkship participated in this observational single-center study. To detail the patient diversity encountered by medical students at a university hospital during their 16-week internal medicine clerkship, all participants self-reported their patient contacts in the different subspecialties and during on-call hours on patient encounter cards (PECs). Patients' chief complaint, suspected main diagnosis, planned diagnostic investigations, and therapy were documented in seven different internal medicine subspecialties and the on-call medicine service. RESULTS In total, 496 PECs were analysed. The greatest diversity of chief complaints (CCs) and suspected main diagnoses (SMDs) was observed in patients encountered on call, where the combined frequencies of the three most common CCs or SMDs accounted for only 23% and 25%, respectively. Combined, the three most commonly encountered CCs/SMDs accounted for high percentages (82%/63%), i.e. less diversity, in oncology and low percentages (37%/32%), i.e. high diversity, in nephrology. The percentage of all diagnostic investigations and therapies classified as "basic" differed between the subspecialties, from 82%/94% (on call) to 37%/50% (pulmonology/oncology). For diagnostic investigations, the only subspecialty with no significant difference compared with on call was nephrology. With respect to therapy, nephrology and infectious diseases showed no significant differences compared with on call.
CONCLUSIONS Internal medicine clerkships at a university hospital expose students to a very limited patient diversity in most internal medicine subspecialties. Shadowing the on-call resident or shorter rotations could provide greater patient diversity.
Affiliation(s)
- Simon Melderis
- III. Medizinische Klinik, Universitätsklinikum Hamburg-Eppendorf, Martinistraße 52, 20246, Hamburg, Germany.
- Jan-Philipp Gutowski
- III. Medizinische Klinik, Universitätsklinikum Hamburg-Eppendorf, Martinistraße 52, 20246, Hamburg, Germany.
- Sigrid Harendza
- III. Medizinische Klinik, Universitätsklinikum Hamburg-Eppendorf, Martinistraße 52, 20246, Hamburg, Germany.
47
Bouwmans GAM, Denessen E, Hettinga AM, Michels C, Postma CT. Reliability and validity of an extended clinical examination. MEDICAL TEACHER 2015; 37:1072-1077. [PMID: 25683172 DOI: 10.3109/0142159x.2015.1009423] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
INTRODUCTION An extended clinical examination (ECE) was administered to 85 final-year medical students at the Radboud University Medical Centre in the Netherlands. The aim of the study was to determine the psychometric quality and suitability of the ECE as a measurement tool to assess clinical proficiency in eight separate clinical skills. METHODS Generalizability studies were conducted to determine the generalizability coefficient and the sources of variance of the ECE. An additional D-study was performed to estimate the generalizability coefficients for varying numbers of stations. RESULTS The largest sources of variance were skill difficulties (36.18%), the general error term (26.76%) and the rank ordering of skill difficulties across the stations (21.89%). The generalizability coefficient of the entire ECE was above the 0.70 lower bound (G = 0.74). D-studies showed that seven of the eight separate skills could yield sufficient G coefficients if the ECE were lengthened from 8 to 14 stations. DISCUSSION The ECE proved to be a reliable clinical assessment that enables examinees to compose a clinical reasoning path from self-obtained data. The ECE can also be used as an assessment tool for separate clinical skills.
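The D-study step above projects how the generalizability coefficient changes as stations are added: under the usual person-by-station design, the relative error variance is divided by the number of stations. A sketch of that projection (the variance components below are illustrative, not those estimated in the study):

```python
def projected_g(var_person: float, var_rel_error: float, n_stations: int) -> float:
    """G coefficient when the relative error variance is averaged over stations."""
    return var_person / (var_person + var_rel_error / n_stations)

# Illustrative variance components: person variance 1.0, single-station error 3.0
for n_stations in (8, 14):
    print(n_stations, round(projected_g(1.0, 3.0, n_stations), 2))
```

Lengthening the test always raises the projected coefficient, which is why some skills only cross a 0.70 threshold once the ECE grows from 8 to 14 stations.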
Affiliation(s)
- E Denessen
- Radboud University Nijmegen, The Netherlands
- A M Hettinga
- Radboud University Medical Centre, The Netherlands
- C Michels
- Radboud University Nijmegen, The Netherlands
- C T Postma
- Radboud University Medical Centre, The Netherlands
48
Baker EA, Ledford CH, Fogg L, Way DP, Park YS. The IDEA Assessment Tool: Assessing the Reporting, Diagnostic Reasoning, and Decision-Making Skills Demonstrated in Medical Students' Hospital Admission Notes. TEACHING AND LEARNING IN MEDICINE 2015; 27:163-173. [PMID: 25893938 DOI: 10.1080/10401334.2015.1011654] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
UNLABELLED Construct: Clinical skills are used in the care of patients, including reporting, diagnostic reasoning, and decision-making skills. Written comprehensive new patient admission notes (H&Ps) are a ubiquitous part of student education but are underutilized in the assessment of clinical skills. The interpretive summary, differential diagnosis, explanation of reasoning, and alternatives (IDEA) assessment tool was developed to assess students' clinical skills using written comprehensive new patient admission notes. BACKGROUND The validity evidence for assessment of clinical skills using clinical documentation following authentic patient encounters has not been well documented. Diagnostic justification tools and postencounter notes are described in the literature (1,2) but are based on standardized patient encounters. To our knowledge, the IDEA assessment tool is the first published tool that uses medical students' H&Ps to rate students' clinical skills. APPROACH The IDEA assessment tool is a 15-item instrument that asks evaluators to rate students' reporting, diagnostic reasoning, and decision-making skills based on medical students' new patient admission notes. This study presents validity evidence in support of the IDEA assessment tool using Messick's unified framework, including content (theoretical framework), response process (interrater reliability), internal structure (factor analysis and internal-consistency reliability), and relationship to other variables. RESULTS Validity evidence is based on results from four studies conducted between 2010 and 2013. First, the factor analysis (2010, n = 216) yielded a three-factor solution, measuring patient story, IDEA, and completeness, with reliabilities of .79, .88, and .79, respectively. Second, an initial interrater reliability study (2010) involving two raters demonstrated fair to moderate consensus (κ = .21-.56, ρ = .42-.79).
Third, a second interrater reliability study (2011) with 22 trained raters also demonstrated fair to moderate agreement (intraclass correlations [ICCs] = .29-.67). There was moderate reliability for all three skill domains, including reporting skills (ICC = .53), diagnostic reasoning skills (ICC = .64), and decision-making skills (ICC = .63). Fourth, there was a significant correlation between IDEA rating scores (2010-2013) and final Internal Medicine clerkship grades (r = .24), 95% confidence interval (CI) [.15, .33]. CONCLUSIONS The IDEA assessment tool is a novel tool with validity evidence to support its use in the assessment of students' reporting, diagnostic reasoning, and decision-making skills. The moderate reliability achieved supports formative or lower stakes summative uses rather than high-stakes summative judgments.
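The intraclass correlations above derive from a two-way ANOVA decomposition of a subjects-by-raters rating matrix. A minimal sketch of ICC(2,1) (two-way random effects, absolute agreement, single rater); the ratings are invented for illustration:

```python
import numpy as np

def icc2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater."""
    n, k = ratings.shape                        # subjects x raters
    grand = ratings.mean()
    ss_total = ((ratings - grand) ** 2).sum()
    ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()  # subjects
    ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()  # raters
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                     # mean square, subjects
    msc = ss_cols / (k - 1)                     # mean square, raters
    mse = ss_err / ((n - 1) * (k - 1))          # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Invented ratings: 3 notes scored by 2 raters, rater 2 consistently one point higher
notes = np.array([[1.0, 2.0],
                  [2.0, 3.0],
                  [3.0, 4.0]])
print(round(icc2_1(notes), 3))
```

Because ICC(2,1) measures absolute agreement, a constant offset between raters lowers the coefficient even when their rank orderings match perfectly.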
Affiliation(s)
- Elizabeth A Baker
- Department of Internal Medicine, Rush University, Chicago, Illinois, USA
49
Artino AR, Cleary TJ, Dong T, Hemmer PA, Durning SJ. Exploring clinical reasoning in novices: a self-regulated learning microanalytic assessment approach. MEDICAL EDUCATION 2014; 48:280-91. [PMID: 24528463 PMCID: PMC4235424 DOI: 10.1111/medu.12303] [Citation(s) in RCA: 45] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/31/2013] [Revised: 03/12/2013] [Accepted: 07/04/2013] [Indexed: 05/09/2023]
Abstract
OBJECTIVES The primary objectives of this study were to examine the regulatory processes of medical students as they completed a diagnostic reasoning task and to examine whether the strategic quality of these regulatory processes was related to short-term and longer-term medical education outcomes. METHODS A self-regulated learning (SRL) microanalytic assessment was administered to 71 second-year medical students while they read a clinical case and worked to formulate the most probable diagnosis. Verbal responses to open-ended questions targeting forethought and performance phase processes of a cyclical model of SRL were recorded verbatim and subsequently coded using a framework from prior research. Descriptive statistics and hierarchical linear regression models were used to examine the relationships between the SRL processes and several outcomes. RESULTS Most participants (90%) reported focusing on specific diagnostic reasoning strategies during the task (metacognitive monitoring), but only about one-third of students referenced these strategies (e.g. identifying symptoms, integration) in relation to their task goals and plans for completing the task. After accounting for prior undergraduate achievement and verbal reasoning ability, strategic planning explained significant additional variance in course grade (ΔR² = 0.15, p < 0.01), second-year grade point average (ΔR² = 0.14, p < 0.01), United States Medical Licensing Examination Step 1 score (ΔR² = 0.08, p < 0.05) and National Board of Medical Examiners subject examination score in internal medicine (ΔR² = 0.10, p < 0.05). CONCLUSIONS These findings suggest that most students in the formative stages of learning diagnostic reasoning skills are aware of and think about at least one key diagnostic reasoning process or strategy while solving a clinical case, but a substantially smaller percentage set goals or develop plans that incorporate such strategies.
Given that students who developed more strategic plans achieved better outcomes, the potential importance of forethought regulatory processes is underscored.
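The ΔR² statistics above come from hierarchical regression: fit a base model with the covariates, add the strategic-planning predictor, and take the difference in R². A self-contained sketch with toy data (the variables and values are fabricated for illustration, not the study's):

```python
import numpy as np

def r_squared(X: np.ndarray, y: np.ndarray) -> float:
    """R^2 of an OLS fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1 - (resid @ resid) / ss_tot

# Hypothetical covariate (prior achievement) and added predictor (strategic planning)
prior = np.array([1.0, 2.0, 3.0, 4.0])
planning = np.array([1.0, 0.0, 1.0, 0.0])
outcome = planning.copy()   # toy outcome driven entirely by planning

r2_base = r_squared(prior[:, None], outcome)
r2_full = r_squared(np.column_stack([prior, planning]), outcome)
print(round(r2_full - r2_base, 3))   # delta R^2 from adding the predictor
```

The reported values (e.g. ΔR² = 0.15 for course grade) are this same difference computed after the covariate block of prior achievement and verbal reasoning ability.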
Affiliation(s)
- Anthony R Artino
- Department of Preventive Medicine and Biometrics, Uniformed Services University of the Health Sciences, Bethesda, Maryland, USA
- Timothy J Cleary
- Graduate School of Applied and Professional Psychology, Rutgers University, New Brunswick, New Jersey, USA
- Ting Dong
- Department of Preventive Medicine and Biometrics, Uniformed Services University of the Health Sciences, Bethesda, Maryland, USA
- Paul A Hemmer
- Department of Medicine, Uniformed Services University of the Health Sciences, Bethesda, Maryland, USA
- Steven J Durning
- Department of Medicine, Uniformed Services University of the Health Sciences, Bethesda, Maryland, USA
50
Wijnen-Meijer M, Van der Schaaf M, Booij E, Harendza S, Boscardin C, Van Wijngaarden J, Ten Cate TJ. An argument-based approach to the validation of UHTRUST: can we measure how recent graduates can be trusted with unfamiliar tasks? ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2013; 18:1009-27. [PMID: 23400369 DOI: 10.1007/s10459-013-9444-x] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/10/2012] [Accepted: 01/16/2013] [Indexed: 05/21/2023]
Abstract
There is a need for valid methods to assess the readiness for clinical practice of medical graduates. This study evaluates the validity of the Utrecht Hamburg Trainee Responsibility for Unfamiliar Situations Test (UHTRUST), an authentic simulation procedure to assess whether medical trainees are ready to be entrusted with unfamiliar clinical tasks near the highest level of Miller's pyramid. This assessment, in which candidates were judged by clinicians, nurses and standardized patients, addresses the question: can this trainee be trusted with unfamiliar clinical tasks? The aim of this paper is to provide a validity argument for this assessment procedure. We collected data from various sources during the preparation and administration of a UHTRUST assessment. In total, 60 candidates (30 from the Netherlands and 30 from Germany) participated. To provide a validity argument for the UHTRUST assessment, we followed Kane's argument-based approach to validation. All available data were used to design a coherent and plausible argument. Considerable data were collected during the development of the assessment procedure. In addition, a generalizability study was conducted to evaluate the reliability of the scores given by assessors and to determine the proportion of variance accounted for by candidates and assessors. Most of Kane's validity assumptions were found to be defensible, with accurate and often parallel lines of backing. UHTRUST can be used to compare the readiness for clinical practice of medical graduates. Further exploration of the procedures for entrustment decisions is recommended.
Affiliation(s)
- M Wijnen-Meijer
- Center for Research and Development of Education, University Medical Center Utrecht, P.O. Box 85500, 3508 GA, Utrecht, The Netherlands.