1
New Training Options for Minimally Invasive Surgery Skills. Vet Clin North Am Small Anim Pract 2024; 54:603-613. [PMID: 38485606] [DOI: 10.1016/j.cvsm.2024.02.001]
Abstract
Veterinary minimally invasive surgery (MIS) training options are becoming more widely available. This article reviews new developments in this area and the current evidence for manual-skills and cognitive training in MIS.
2
Should we add patients in concordance of judgment learning tool panels? - An analysis between patients and primary care physicians. MEDICAL TEACHER 2024; 46:697-704. [PMID: 37917989] [DOI: 10.1080/0142159x.2023.2274285]
Abstract
INTRODUCTION The Concordance of Judgment Learning Tool (CJLT) was developed for distance asynchronous learning of professionalism in health sciences education. Learning is induced by students comparing their own responses with those of the panel members. While CJLT programs typically include same-profession experts in their panels, we believe they could also include patients. Accordingly, we conducted a study comparing CJLT response patterns between two groups: primary care physicians (PCPs) and patients. METHODS We conducted a mixed prospective study of responses to a CJLT program from a group of PCPs and a group of patients: an analysis of the response patterns of the two groups and a qualitative analysis of their justifications. RESULTS A total of 110 participants were included in the study: 70 patients and 40 PCPs. We found a significant difference in response patterns between the PCP and patient groups for nine of the fifteen questions (60%). The qualitative analysis of justifications allowed us to understand patients' views on the professionalism of PCPs. CONCLUSIONS Including patients in CJLT panels can enrich the feedback offered to students in these online training programs.
3
Using ChatGPT in Psychiatry to Design Script Concordance Tests in Undergraduate Medical Education: Mixed Methods Study. JMIR MEDICAL EDUCATION 2024; 10:e54067. [PMID: 38596832] [PMCID: PMC11007379] [DOI: 10.2196/54067]
Abstract
Background Undergraduate medical studies offer a wide range of learning opportunities delivered through various teaching-learning modalities. A clinical scenario is frequently used as a modality, followed by multiple-choice and open-ended questions, among other learning and teaching methods. Script concordance tests (SCTs) can be used in this way to promote a higher level of clinical reasoning. Recent technological developments have made generative artificial intelligence (AI)-based systems such as ChatGPT (OpenAI) available to assist clinician-educators in creating instructional materials. Objective The main objective of this project was to explore how SCTs generated by ChatGPT compared with SCTs produced by clinical experts on 3 major elements: the scenario (stem), clinical questions, and expert opinion. Methods This mixed methods study evaluated 3 ChatGPT-generated SCTs against 3 expert-created SCTs using a predefined framework. Clinician-educators and resident doctors in psychiatry involved in undergraduate medical education in Quebec, Canada, evaluated the 6 SCTs via a web-based survey on 3 criteria: the scenario, clinical questions, and expert opinion. They were also asked to describe the strengths and weaknesses of the SCTs. Results A total of 102 respondents assessed the SCTs. There were no significant differences between the 2 types of SCTs for the scenario (P=.84), clinical questions (P=.99), or expert opinion (P=.07), as rated by the respondents. Indeed, respondents struggled to differentiate between ChatGPT- and expert-generated SCTs. ChatGPT showed promise in expediting SCT design, aligning well with Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition criteria, albeit with a tendency toward caricatured scenarios and simplistic content.
Conclusions This study is the first to focus on AI-supported SCT design, at a time when medicine is changing swiftly and AI-derived technologies are expanding even faster. It suggests that ChatGPT can be a valuable tool for creating educational materials, although further validation is essential to ensure educational efficacy and accuracy.
4
Reliability, validity and acceptability of an online clinical reasoning simulator for medical students: An international pilot. MEDICAL TEACHER 2024:1-8. [PMID: 38489473] [DOI: 10.1080/0142159x.2024.2308082]
Abstract
INTRODUCTION Clinical reasoning skills are essential for decision-making. Current assessment methods are limited when testing clinical reasoning and management of uncertainty. This study evaluates the reliability, validity and acceptability of Practicum Script, an online simulation-based programme, for developing medical students' clinical reasoning skills using real-life cases. METHODS In 2020, we conducted an international, multicentre pilot study using 20 clinical cases with 2457 final-year medical students from 21 schools worldwide. Psychometric analysis was performed (n = 1502 students completing at least 80% of cases). Classical estimates of reliability for three test domains (hypothesis generation, hypothesis argumentation and knowledge application) were calculated using Cronbach's alpha and McDonald's omega coefficients. Validity evidence was obtained by confirmatory factor analysis (CFA) and measurement alignment (MA). Items from the knowledge application domain were analysed using cognitive diagnostic modelling (CDM). Acceptability was evaluated by an anonymous student survey. RESULTS Reliability estimates were high with narrow confidence intervals. CFA revealed acceptable goodness-of-fit indices for the proposed three-factor model. CDM analysis demonstrated good absolute test fit and high classification accuracy estimates. Student survey responses showed high levels of acceptability. CONCLUSION Our findings suggest that Practicum Script is a useful resource for strengthening students' clinical reasoning skills and ability to manage uncertainty.
5
Impact of reference panel composition on scores of script concordance test assessing basic nephrology knowledge in undergraduate medical education. MEDICAL TEACHER 2024; 46:110-116. [PMID: 37544894] [DOI: 10.1080/0142159x.2023.2239441]
Abstract
PURPOSE In the assessment of basic medical knowledge, the composition of the reference panel between specialists and primary care (PC) physicians is a contentious issue. We assessed the effect of panel composition on the scores of undergraduate medical students in a script concordance test (SCT). METHODS The scoring scale of an SCT on basic nephrology knowledge was set by a panel of nephrologists or by a mixed panel of nephrologists and PC physicians. SCT results were compared with repeated-measures ANOVA, and concordance was assessed with Bland-Altman plots. RESULTS Forty-five students completed the SCT. Their scores differed according to panel composition: 65.6 ± 9.73/100 points with the nephrologist panel versus 70.27 ± 8.82 with the mixed panel, p < 0.001. Concordance between the scores was low, with a bias of -4.27 ± 2.19 and 95% limits of agreement of -8.96 to -0.38. Panel composition changed the ranking of 71% of students (mean 3.6 ± 2.6 places). CONCLUSION The composition of the reference panel, whether specialist or mixed, for SCT assessment of basic knowledge has an impact on test results and student rankings.
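The concordance statistics reported in this abstract (a bias with 95% limits of agreement) come from a Bland-Altman analysis, which can be computed directly from paired scores. A minimal sketch; the paired panel scores below are invented for illustration and are not the study's data.

```python
import statistics

def bland_altman(scores_a, scores_b):
    """Bias (mean of the paired differences) and 95% limits of
    agreement (bias +/- 1.96 * SD of the differences)."""
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired SCT scores: specialist-only panel vs mixed panel
specialist = [65.0, 70.0, 60.0, 72.0, 68.0]
mixed = [70.0, 73.0, 66.0, 75.0, 71.0]
bias, (lo, hi) = bland_altman(specialist, mixed)  # bias = -4.0
```

A negative bias, as in the study, simply means the second panel systematically awards higher scores than the first.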
6
Script concordance testing in genetic counseling training: A pilot study. J Genet Couns 2023; 32:1121-1130. [PMID: 37443441] [DOI: 10.1002/jgc4.1752]
Abstract
Clinical reasoning is a complex skill representing a trainee's ability to use their professional knowledge and skills to assess and solve the problems that arise in clinical practice. As an integral tenet of the genetic counseling process, clinical reasoning skills underlie many of the Practice-Based Competencies (2019) across a variety of domains. Despite long-standing recognition of the importance of this complex skill in the training of genetic counselors, clinical reasoning has traditionally been difficult to assess in a standardized way in healthcare education. Script concordance testing is a standardized method of assessing clinical reasoning skills in ambiguous clinical situations. The tool has been used successfully to measure the clinical reasoning skills of trainees in various healthcare training programs but has never been used in a genetic counseling training program. We conducted a pilot study to assess the utility of script concordance testing in the field of genetic counseling as an objective measure of clinical reasoning in trainees. A script concordance test was constructed for the field of genetic counseling and administered to 22 second-year genetic counseling students in the Joan H. Marks Graduate Program in Human Genetics at Sarah Lawrence College. Twelve genetic counselors served on a panel to provide expert clinical reasoning responses, and a scoring grid was developed using the aggregate scoring method. Reliability was measured using Cronbach's alpha coefficient, and scores of students and the panel were compared using Hedges' g. Results revealed statistically significant differences between the scores of panelists and students, and good reliability.
This study shows that script concordance testing can be used to measure clinical reasoning skills in genetic counseling trainees in a way that is reliable, standardized, and easy to use, thereby allowing programs to better assess trainees' clinical reasoning skills prior to graduation.
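The aggregate scoring method mentioned in this abstract (and in several other entries in this list) awards full credit for the modal panel answer and partial credit in proportion to panel support for the other answers. A minimal sketch in Python; the panel responses, response scale and item count below are invented for illustration.

```python
from collections import Counter

def sct_item_credit(panel_responses):
    """Map each response option to its partial credit: the modal
    panel answer earns 1.0; any other chosen answer earns
    (its votes / modal votes)."""
    counts = Counter(panel_responses)
    modal = max(counts.values())
    return {option: votes / modal for option, votes in counts.items()}

def sct_score(student_answers, panel_answers_per_item):
    """Total aggregate score: sum of per-item credits, with 0 credit
    for an option no panelist chose."""
    total = 0.0
    for answer, panel in zip(student_answers, panel_answers_per_item):
        total += sct_item_credit(panel).get(answer, 0.0)
    return total

# Illustrative: 12 panelists answering 3 items on a -2..+2 Likert scale
panel = [
    [-1, -1, 0, -1, -1, 0, -1, 1, -1, -1, 0, -1],  # modal answer: -1
    [0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0],          # modal answer: 0
    [2, 1, 2, 2, 2, 1, 2, 2, 1, 2, 2, 2],          # modal answer: 2
]
student = [-1, 1, 2]
print(round(sct_score(student, panel), 3))  # → 2.5
```

Totals are often rescaled to a percentage of the maximum attainable score before reporting; the partial-credit structure is what lets the test reward defensible minority judgments in ambiguous cases.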
7
Validation of a French questionnaire assessing knowledge of suicide. L'ENCEPHALE 2023:S0013-7006(23)00180-X. [PMID: 38040504] [DOI: 10.1016/j.encep.2023.08.016]
Abstract
OBJECTIVES The objective of this study was to develop and validate the Knowledge of Suicide Scale (KSS), designed to assess adherence to myths about suicide. METHODS The KSS is a self-report questionnaire comprising 22 statements about suicide myths, for which respondents rate their degree of adherence on a scale from 0 ("strongly disagree") to 10 ("completely agree"). Using the script concordance test scoring method, respondents' scores were compared with those of experts to obtain, for each item, a score between 0 (maximum deviation from the experts) and 1 (minimum deviation from the experts). One thousand and thirty-five individuals (222 psychiatric interns, 332 first-semester medical interns excluding psychiatry, and 481 journalism students) were included. RESULTS According to the exploratory factor analysis, the KSS is a two-dimensional scale: the first subscale includes 15 items and the second seven. The tool showed excellent face validity, acceptable convergent and divergent validity (multitrait-multimethod analyses), and good internal consistency (Cronbach's alpha between 0.66 and 0.83 for the scale and subscales). The KSS is moderately and negatively correlated with the Stigma of Suicide Scale (r=-0.3). It significantly discriminates between groups with different expected levels of knowledge regarding suicide (P<0.001). CONCLUSIONS The KSS demonstrated good psychometric properties for measuring adherence to myths about suicide. This tool could be useful in assessing the effectiveness of suicide prevention literacy programs.
8
[Impact of the suicidal crisis intervention training program on the confidence and skills of hospital professionals in the Hauts-de-France region]. L'ENCEPHALE 2023; 49:504-509. [PMID: 35985851] [DOI: 10.1016/j.encep.2022.05.005]
Abstract
INTRODUCTION Suicide is a major public health issue given its huge human and economic consequences. Symptoms prior to suicide are often nonspecific. Nevertheless, the majority of suicidal people express suicidal thoughts, and nearly one in two sees a health professional in the period preceding the act. Being able to recognize the warning signs and intervene during the suicidal crisis, defined as a mental crisis in which the major risk is suicide, is to seize the opportunity to postpone the suicidal plan and gain time to put in place lasting strategies against suffering. Training in suicidal crisis intervention is therefore a major axis of the suicide prevention strategy. Recently, crisis intervention training programs have been updated with knowledge accumulated since the early 2000s. France is one of the countries most concerned by suicide, and the Hauts-de-France region is among the most impacted. In this context, the Regional Health Agency of Hauts-de-France included in its 2018-2023 Regional Health Program the training of healthcare workers who work with patients at high suicidal risk. The suicidal crisis intervention training program (SCIT) has been introduced to hospital staff in Hauts-de-France. The purpose of this study was to evaluate this program.
METHODS Eight training sessions with 15 to 21 participants each were carried out from November 2019 to January 2021 in the Hauts-de-France region. Participants were volunteer healthcare professionals in direct contact with patients in suicidal crisis. The training included three modules. The first covered suicidal crisis intervention: definition of the suicidal crisis, typology of the crisis, development of vulnerability, crisis evaluation and crisis intervention practice. The second covered evaluation with the RED scale (Risk-Emergency-Danger) and appropriate patient orientation to a psychiatric unit. The third was dedicated to gatekeeper training, with the constitution of a gatekeeper network to enhance the capacity to detect suicidal risk and to orient the person concerned towards adequate evaluation or care. We evaluated the first two levels of Kirkpatrick's model: level 1, participant satisfaction (rated out of 10); and level 2, the degree of confidence in their professional abilities (rated out of 10) and their skills in responding to a person in suicidal crisis (using the SIRI-2-VF, the French version of the Suicide Intervention Response Inventory-2). Participants were interviewed before training (T0), just after (T1) and at one month (T2).
RESULTS Among the 141 health professionals who followed the training, 139 answered the questionnaire at least once (13 psychologists, 22 doctors, 97 nurses and 7 head nurses). Participation rates were 99.3% at T0, 96.4% at T1 and 46.0% at T2. Most participants were nurses (69.8%), and 33.1% of respondents declared they had already followed suicidal crisis training. Satisfaction with the training was rated 8.6 (±1.3) out of 10, with no significant difference among professions or between those with and without previous training. Self-perceived capacity to manage a suicidal crisis was rated 6.8 (±1.8) out of 10 at T0; there was a significant increase just after the training (8.1±1.2 vs 6.8±1.8, P<0.001), which persisted at 1 month (8.1±1.1 vs 6.8±1.8, P<0.001). The SIRI-2-VF score was 15.0 (±4.2) out of 30 at T0; there was a significant increase just after the training (17.5±3.5 vs 15.0±4.2, P<0.001), which persisted at 1 month (17.0±4.0 vs 15.0±4.2, P<0.001). DISCUSSION This is the first evaluation of the suicidal crisis intervention training program. The program increased and homogenized participants' competency in managing suicidal ideation and behaviors. Those who had followed previous training maintained higher scores than the others, which shows the importance of repeated training to maintain a satisfactory level of knowledge over the long term. One of the strengths of this training is the use of role play, which enhances learning and the ability to interact with people at suicidal risk. It seems important to integrate suicidal crisis intervention training into the curriculum of health students to prevent suicide and its dramatic consequences for relatives and for the health professionals confronted with it. CONCLUSION The SCIT program showed encouraging results in terms of the confidence and capacity of healthcare professionals to intervene in suicidal crisis.
9
Diagnosing virtual patients: the interplay between knowledge and diagnostic activities. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2023; 28:1245-1264. [PMID: 37052740] [PMCID: PMC10099021] [DOI: 10.1007/s10459-023-10211-4]
Abstract
Clinical reasoning theories agree that knowledge and the diagnostic process are associated with diagnostic success. However, the exact contributions of these components of clinical reasoning to diagnostic success remain unclear. This is particularly the case when operationalizing the diagnostic process with diagnostic activities (i.e., teachable practices that generate knowledge). We therefore conducted a study investigating to what extent knowledge and diagnostic activities uniquely explain variance in diagnostic success with virtual patients among medical students. The sample consisted of N = 106 medical students in their third to fifth year of university studies in Germany (6-year curriculum). Participants completed professional knowledge tests before diagnosing virtual patients. Diagnostic success was assessed with diagnostic accuracy as well as a comprehensive diagnostic score, answering the call for more extensive measurement of clinical reasoning outcomes. Three diagnostic activities were tracked: hypothesis generation, evidence generation, and evidence evaluation. Professional knowledge predicted performance in terms of the comprehensive diagnostic score and displayed a small association with diagnostic accuracy. Diagnostic activities predicted both the comprehensive diagnostic score and diagnostic accuracy. Hierarchical regressions showed that the diagnostic activities made a unique contribution to diagnostic success even when knowledge was taken into account. Our results support the argument that the diagnostic process is more than an embodiment of knowledge and explains variance in diagnostic success over and above knowledge. We discuss possible mechanisms explaining this finding.
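The hierarchical regressions described in this abstract quantify the unique contribution of diagnostic activities as the R² increment when they are added to a model that already contains knowledge. A sketch with synthetic data; the sample size matches the study, but the generating coefficients and variables are arbitrary assumptions, not the study's estimates.

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary-least-squares fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    return 1 - ss_res / ss_tot

rng = np.random.default_rng(0)
n = 106  # sample size from the study; the data themselves are synthetic
knowledge = rng.normal(size=n)
activities = rng.normal(size=n)
success = 0.4 * knowledge + 0.5 * activities + rng.normal(size=n)

# Step 1: knowledge only; step 2: knowledge + diagnostic activities
r2_step1 = r_squared(knowledge[:, None], success)
r2_step2 = r_squared(np.column_stack([knowledge, activities]), success)
delta_r2 = r2_step2 - r2_step1  # unique contribution of the activities
```

Because the step-2 model nests the step-1 model, the increment is never negative; the inferential question is whether it is larger than chance (an F test in practice).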
10
Healthcare pathways and practitioners' knowledge about ADHD in children. L'ENCEPHALE 2023:S0013-7006(23)00144-6. [PMID: 37718197] [DOI: 10.1016/j.encep.2023.07.005]
Abstract
INTRODUCTION Access to care for children and adolescents affected by ADHD in France remains below the levels attained in most industrialised countries. To contribute to improving ADHD care in France, we assessed ADHD knowledge among medical doctors (MDs) and described associated care pathways in two large French regions in 2021. We produced tools to evaluate the regional impact of implementing a stepped-care pathway for ADHD. METHODS A SurveyMonkey® survey was sent to professionals from two regions of France accounting for 14 million inhabitants, asking them to describe their role in child/adolescent ADHD care as well as their representations and knowledge of the disorder. RESULTS Around 9.4% of all MDs potentially involved with children took part in the study; 34.9% considered themselves untrained, 40.5% were involved in ADHD care at a first-tier level, and 19.6% at a second-tier level. Access to a second- or third-tier service for ADHD was associated with mean waiting times of 5.7 and 8.5 months, respectively. Initiation of stimulant therapy remained mainly restricted to second- or third-tier MDs, and adjustment of dosage or change of formulation was rarely performed by first-tier MDs (27.2% and 18%, respectively). Training in neurodevelopmental disorders and tier level were the strongest determinants of knowledge, attitudes and self-assessed expertise regarding ADHD. CONCLUSIONS This study provides insight into MDs' training needs regarding healthcare pathways in ADHD and should support the implementation of health policies such as stepped healthcare access for ADHD. The study design and dissemination have been validated and will be available in France and in other countries facing similar obstacles in ADHD care pathways. Official recommendations on ADHD in children and adults are being updated in France, and our data and survey design will be a starting point for their implementation.
11
The Role of Large Language Models in Medical Education: Applications and Implications. JMIR MEDICAL EDUCATION 2023; 9:e50945. [PMID: 37578830] [PMCID: PMC10463084] [DOI: 10.2196/50945]
Abstract
Large language models (LLMs) such as ChatGPT have sparked extensive discourse within the medical education community, spurring both excitement and apprehension. Written from the perspective of medical students, this editorial offers insights gleaned through immersive interactions with ChatGPT, contextualized by ongoing research into the imminent role of LLMs in health care. Three distinct positive use cases for ChatGPT were identified: facilitating differential diagnosis brainstorming, providing interactive practice cases, and aiding in multiple-choice question review. These use cases can effectively help students learn foundational medical knowledge during the preclinical curriculum while reinforcing the learning of core Entrustable Professional Activities. Simultaneously, we highlight key limitations of LLMs in medical education, including their insufficient ability to teach the integration of contextual and external information, comprehend sensory and nonverbal cues, cultivate rapport and interpersonal interaction, and align with overarching medical education and patient care goals. Through interacting with LLMs to augment learning during medical school, students can gain an understanding of their strengths and weaknesses. This understanding will be pivotal as we navigate a health care landscape increasingly intertwined with LLMs and artificial intelligence.
12
Computerised clinical decision support system for the diagnosis of pulmonary thromboembolism: a preclinical pilot study. BMJ Open Qual 2023; 12:bmjoq-2022-001984. [PMID: 36927628] [PMCID: PMC10030901] [DOI: 10.1136/bmjoq-2022-001984]
Abstract
BACKGROUND Recommendations for the diagnosis of pulmonary embolism are available to healthcare providers, yet real-practice data show persistent gaps in the translation of evidence-based recommendations. This study assessed the effect of a computerised decision support system (CDSS) with an enhanced design, based on best practices in content and reasoning representation, on the diagnosis of pulmonary embolism. DESIGN Randomised preclinical pilot study of paper-based clinical scenarios in the diagnosis of pulmonary embolism. Participants were clinicians (n=30) from three levels of experience: medical students, residents and physicians. Participants were randomised to one of two interventions: a didactic lecture versus a decision tree delivered via a CDSS. The primary outcome, diagnostic pathway concordance (the number of correct diagnostic decision steps divided by the ideal number of steps in the diagnostic algorithm), was measured at baseline (five clinical scenarios) and after either intervention, for a total of 10 clinical scenarios. RESULTS Mean diagnostic pathway concordance improved in both study groups: from a baseline mean of 0.73 to 0.90 post-intervention in the CDSS group (p<0.001, 95% CI 0.10-0.24), and from 0.71 to 0.85 in the didactic lecture group (p<0.001, 95% CI 0.07-0.20). There was no statistically significant difference between the two study groups or among the three levels of experience. INTERPRETATION A computerised decision support system designed for both content and reasoning visualisation can improve clinicians' diagnostic decision-making.
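The primary outcome in this abstract is a simple ratio: correct diagnostic decision steps taken, divided by the ideal number of steps in the diagnostic algorithm. A minimal sketch; the step names below are illustrative placeholders, not the study's actual algorithm.

```python
def pathway_concordance(taken_steps, ideal_steps):
    """Ratio of the ideal pathway's steps that the clinician actually
    took, over the total number of ideal steps."""
    ideal = set(ideal_steps)
    correct = sum(1 for step in taken_steps if step in ideal)
    return correct / len(ideal_steps)

# Hypothetical pulmonary-embolism work-up (step names are illustrative)
ideal = ["pretest_probability", "d_dimer", "ctpa", "anticoagulate"]
taken = ["pretest_probability", "d_dimer", "chest_xray", "ctpa"]
print(pathway_concordance(taken, ideal))  # → 0.75 (3 of 4 ideal steps)
```

Extra, non-ideal steps (here, the chest X-ray) are simply not credited under this definition; they do not lower the score.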
13
Clinical reasoning in undergraduate paramedicine: utilisation of a script concordance test. BMC MEDICAL EDUCATION 2023; 23:39. [PMID: 36658560] [PMCID: PMC9849838] [DOI: 10.1186/s12909-023-04020-x]
Abstract
INTRODUCTION Clinical reasoning is a complex cognitive and metacognitive process paramount to patient care in paramedic practice. While universally recognised as an essential component of practice, clinical reasoning has historically been difficult to assess in the health care professions. Is the Script Concordance Test (SCT) an achievable and reliable option for testing clinical reasoning in undergraduate paramedic students? METHODS This was a single-institution observational cohort study designed to use the SCT to measure clinical reasoning in paramedic students. Clinical vignettes were constructed across a range of concepts with varying shades of clinical ambiguity. The mean test scores of a reference panel were compared with those of students. Test responses were graded with the aggregate scoring method, with credit awarded for both partially and fully concordant responses. RESULTS Eighty-three student paramedic participants (mean age: 21.8 (3.5) years; 54 (65%) female, 27 (33%) male and 2 (2%) non-binary) completed the SCT. The difference between the reference group mean score of 80 (5) and the student mean score of 65.6 (8.4) was statistically significant (p < 0.001). DISCUSSION Clinical reasoning skills are not easily acquired: they are a culmination of education, experience and the ability to apply both in the context of a specific patient. The SCT has been shown to be reliable and effective in measuring clinical reasoning in undergraduate paramedics, as it has in other health professions such as nursing and medicine. More investigation is required to establish effective pedagogical techniques to optimise clinical reasoning in student and novice paramedics who lack experience.
14
Prueba de concordancia de guiones para entrenar el razonamiento clínico en estudiantes de fonoaudiología [Script concordance test for training clinical reasoning in speech-language pathology students]. REVISTA DE INVESTIGACIÓN EN LOGOPEDIA 2023. [DOI: 10.5209/rlog.80748]
Abstract
The script concordance test (SCT) has been used to train and assess clinical reasoning (CR) as an innovative strategy in professional education. However, there is no evidence of its application in undergraduate speech-language pathology. The objective of this research was to analyse the performance and perceptions of speech-language pathology students regarding the use of scripts. A pre-experimental, multicentre pilot was designed, complemented by three focus groups. Continuous quantitative variables were summarised as means and standard deviations. Between-group comparisons were performed with one-way ANOVA and the Bonferroni post hoc test, at a significance level of p < .05. The qualitative phase incorporated content analysis through open coding of texts and the identification and interpretation of emerging families of meaning. Mean student performance was 4.03 (SD = 0.35), with an increase in CR performance over the semester (p = 0.03). Student perceptions were positive, and four families of meaning were identified, relating to clinical reasoning, opportunities for improvement, implementation of the strategy, and teacher feedback. In conclusion, incorporating scripts for undergraduate speech-language pathology students is feasible, improves performance and supports the development of CR.
15
Development and validation of a script concordance test to assess biosciences clinical reasoning skills: A cross-sectional study of 1st year undergraduate nursing students. NURSE EDUCATION TODAY 2022; 119:105615. [PMID: 36334475] [DOI: 10.1016/j.nedt.2022.105615]
Abstract
BACKGROUND Developing evaluative measures that assess clinical reasoning remains a major challenge for nursing education. A thorough understanding of the biosciences underpins much of nursing practice and is essential for nurses to reason effectively; a gap in clinical reasoning can lead to unintended harm. The script concordance test holds promise as a measure of clinical reasoning under uncertainty, a situation common in nursing practice. The aim of this study was to develop and validate a test for first-year undergraduate nursing students that evaluates how bioscience knowledge is used in clinical reasoning. METHODS An international team teaching biosciences to undergraduate nurses constructed a test integrating common clinical cases with a series of related test items: diagnostic, investigative and treatment. An expert panel (n = 10) took the test and commented on authenticity, ambiguities and omissions; this step is crucial for validity and for scoring the student test. The test was then administered to 47 first-year undergraduate nursing students from the author sites. Students rated educational aspects of the tool both quantitatively and qualitatively. Statistical and content analyses inform the findings. FINDINGS Results indicate that the test is reliable and valid, differentiating between experts and students. Students demonstrated an ability to identify relevant data, link it to their bioscience content and predict outcomes (mean score = 50.78 ± 8.89); however, they lacked confidence in their answers when scenarios appeared incomplete. CONCLUSION Nursing practice depends on a thorough understanding of the biosciences and the ability to reason clinically, and script concordance tests can be used to promote both competencies. This method of evaluation goes beyond probing factual knowledge: it also explores capacities for data interpretation, critical analysis and clinical reasoning. Evaluating bioscience knowledge against real-world situations encountered in practice is a unique strength of this test.
Collapse
|
16
|
The Use of a Modified Script Concordance Test in Clinical Rounds to Foster and Assess Clinical Reasoning Skills. JOURNAL OF VETERINARY MEDICAL EDUCATION 2022; 49:556-559. [PMID: 34784257 DOI: 10.3138/jvme-2021-0090] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
The development of clinical reasoning skills is a high priority during clinical service, but an unpredictable case load and limited time for formal instruction make it challenging for faculty to foster and assess students' individual clinical reasoning skills. We developed an assessment-for-learning activity that helps students build their clinical reasoning skills, based on a modified version of the script concordance test (SCT). To modify the standard SCT, we simplified it by limiting students to a 3-point Likert scale instead of a 5-point scale and added a free-text box for students to justify their answers. Students completed the modified SCT during clinical rounds to prompt a group discussion with the instructor. Student feedback was positive, and the instructor gained valuable insight into the students' thought processes. A modified SCT can be adopted as part of a multimodal approach to teaching on the clinic floor. The purpose of this article is to describe our modifications to the standard SCT and our findings from implementation in a clinical rounds setting as a method of formative assessment for learning and developing clinical reasoning skills.
Collapse
|
17
|
Development of a script concordance test to assess clinical reasoning in a pharmacy curriculum. CURRENTS IN PHARMACY TEACHING & LEARNING 2022; 14:1135-1142. [PMID: 36154958 DOI: 10.1016/j.cptl.2022.07.028] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/15/2021] [Revised: 07/01/2022] [Accepted: 07/20/2022] [Indexed: 06/16/2023]
Abstract
INTRODUCTION Clinical reasoning is a vital skill for student pharmacists in the provision of patient-centered care, but it is often difficult to assess in the didactic curriculum. A script concordance test (SCT) is an innovative assessment method that can be used to assess clinical reasoning skills. The objective of this study was to develop and refine an SCT to assess the clinical reasoning skills of third-year student pharmacists (P3s). METHODS An SCT was written and administered to P3s. Pharmacy practice faculty members served as the expert group. The SCT was scored and Rasch analysis was performed. RESULTS The SCT included 20 case vignettes and 60 questions. Test reliability was 0.34, with mean square values for all items between 0.7 and 1.3. Forty-two questions had a difficulty score between 0 and −1 logits, indicating that multiple questions had similar difficulty levels. Two case vignettes and 43.3% of questions (n = 26) were revised to enhance clarity and decrease ambiguity. CONCLUSIONS The SCT is a tool to assess clinical reasoning in the didactic curriculum. Faculty can create an SCT and use statistical methods such as Rasch analysis to assess its validity and reliability.
Collapse
|
18
|
Assessing clinical reasoning ability in fourth-year medical students via an integrative group history-taking with an individual reasoning activity. BMC MEDICAL EDUCATION 2022; 22:573. [PMID: 35883069 PMCID: PMC9316809 DOI: 10.1186/s12909-022-03649-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/06/2021] [Accepted: 07/18/2022] [Indexed: 06/15/2023]
Abstract
BACKGROUND The most important factor in evaluating a physician's competence is strong clinical reasoning ability, leading to correct principal diagnoses. The process of clinical reasoning includes history taking, physical examination, validating medical records, and determining a final diagnosis. In this study, we designed a teaching activity to evaluate the clinical reasoning competence of fourth-year medical students. METHODS We created five patient scenarios for our standardized patients: hemoptysis, abdominal pain, fever, anemia, and chest pain. A group history-taking with individual reasoning activity was implemented to teach and evaluate students' abilities to take histories, document key information, and arrive at the most likely diagnosis. Residents were trained to act as teachers, and a post-study questionnaire was used to evaluate the students' satisfaction with the training activity. RESULTS A total of 76 students, five teachers, and five standardized patients participated in this clinical reasoning training activity. The average history-taking score was 64%, the average number of key information items was 7, the average number of diagnoses was 1.1, and the average correct diagnosis rate was 38%. Standardized patients presenting with abdominal pain (8.3%) and anemia (18.2%) had the lowest diagnosis rates. The anemia scenario presented the most difficult challenge for students in history taking (3.5/5) and clinical reasoning (3.5/5); the abdominal pain scenario yielded even worse results (history taking: 2.9/5; clinical reasoning: 2.7/5). We found significant differences in the clinical reasoning process between the groups with correct and incorrect most-likely diagnoses (group history-taking score, p = 0.045; key information number, p = 0.009; diagnosis number, p = 0.004). The post-study questionnaire results indicated significant satisfaction with the teaching program (4.7/5) and the quality of teacher feedback (4.9/5).
CONCLUSIONS We conclude that the clinical reasoning skills of fourth-year medical students benefited from this training course. The lower rates of correct most-likely diagnoses for abdominal pain, anemia, and fever might be due to the system-based teaching modules used in the fourth year; supplementary cross-system reasoning training is recommended for fourth-year medical students in the future.
Collapse
|
19
|
The effectiveness of using virtual patient educational tools to improve medical students' clinical reasoning skills: a systematic review. BMC MEDICAL EDUCATION 2022; 22:365. [PMID: 35550085 PMCID: PMC9098350 DOI: 10.1186/s12909-022-03410-x] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/26/2021] [Accepted: 04/19/2022] [Indexed: 06/10/2023]
Abstract
BACKGROUND Use of virtual patient educational tools could fill the current gap in the teaching of clinical reasoning skills. However, there is limited understanding of their effectiveness. The aim of this study was to synthesise the evidence on the effectiveness of virtual patient tools aimed at improving undergraduate medical students' clinical reasoning skills. METHODS We searched MEDLINE, EMBASE, CINAHL, ERIC, Scopus, Web of Science and PsycINFO from 1990 to January 2022 to identify all experimental articles testing the effectiveness of virtual patient educational tools on medical students' clinical reasoning skills. Quality of the articles was assessed using an adapted form of the MERSQI and the Newcastle-Ottawa Scale. A narrative synthesis summarised intervention features, how virtual patient tools were evaluated, and reported effectiveness. RESULTS The search revealed 8,186 articles, with 19 meeting the inclusion criteria. Average study quality was moderate (M = 6.5, SD = 2.7), with nearly half not reporting any measurement of validity or reliability for their clinical reasoning outcome measure (8/19, 42%). Eleven articles found a positive effect of virtual patient tools on reasoning (11/19, 58%); four reported no significant effect and four reported mixed effects (4/19, 21% each). Several domains of clinical reasoning were evaluated. Data gathering, ideas about diagnosis and patient management were more often found to improve after virtual patient use (34/47 analyses, 72%) than application of knowledge, flexibility in thinking and problem-solving (3/7 analyses, 43%). CONCLUSIONS Virtual patient tools could effectively complement current teaching, especially where opportunities for face-to-face teaching or other methods are limited, as there was some evidence that they can improve undergraduate medical students' clinical reasoning skills.
Evaluations that measured more case-specific clinical reasoning domains, such as data gathering, showed more consistent improvement than general measures like problem-solving. Case-specific measures might be more sensitive to change given the context-dependent nature of clinical reasoning. Consistent use of validated clinical reasoning measures is needed to enable a meta-analysis estimating effectiveness.
Collapse
|
20
|
Script concordance test acceptability and utility for assessing medical students' clinical reasoning: a user's survey and an institutional prospective evaluation of students' scores. BMC MEDICAL EDUCATION 2022; 22:277. [PMID: 35418078 PMCID: PMC9008989 DOI: 10.1186/s12909-022-03339-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 05/28/2021] [Accepted: 04/01/2022] [Indexed: 06/14/2023]
Abstract
Script Concordance Testing (SCT) is a method for assessing clinical reasoning in health-care training. Our aim was to assess SCT acceptability and utility with a user survey and an institutional prospective evaluation of students' scores. With an online user survey, we collected opinions and satisfaction data from all graduate students and teachers involved in the SCT setting. We performed a prospective analysis comparing the scores obtained with SCT to those obtained with the national standard evaluation modality. General opinions about SCT were mostly negative. Students tended to express more negative opinions and perceptions. The teachers' satisfaction survey showed a lower proportion of negative responses, a higher proportion of neutral responses, and a higher proportion of positive positions on all questions. PCC scores significantly increased each year, but SCT scores increased only between the first and second tests. PCC scores were significantly higher than SCT scores for the second and third tests. Medical students' and teachers' global opinion of SCT was negative. Initially, SCT scores were quite similar to PCC scores, but PCC scores progressed more over time.
Collapse
|
21
|
Learning Analytics Applied to Clinical Diagnostic Reasoning Using a Natural Language Processing-Based Virtual Patient Simulator: Case Study. JMIR MEDICAL EDUCATION 2022; 8:e24372. [PMID: 35238786 PMCID: PMC8931645 DOI: 10.2196/24372] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/16/2020] [Revised: 02/28/2021] [Accepted: 11/23/2021] [Indexed: 06/14/2023]
Abstract
BACKGROUND Virtual patient simulators (VPSs) log all users' actions, thereby enabling the creation of a multidimensional representation of students' medical knowledge. This representation can be used to create metrics providing teachers with valuable learning information. OBJECTIVE The aim of this study is to describe the metrics we developed to analyze the clinical diagnostic reasoning of medical students, provide examples of their application, and preliminarily validate these metrics on a class of undergraduate medical students. The metrics are computed from data obtained through a novel VPS embedding natural language processing techniques. METHODS Two clinical case simulations (tests) were created to test our metrics. During each simulation, the students' step-by-step actions were logged into the program database for offline analysis. The students' performance was divided into seven dimensions: identification of relevant information in the given clinical scenario, history taking, physical examination, medical test ordering, diagnostic hypothesis setting, binary analysis fulfillment, and final diagnosis setting. Sensitivity (percentage of relevant information found) and precision (percentage of correct actions performed) metrics were computed for each dimension and combined into a harmonic mean (F1), thereby obtaining a single score evaluating the students' performance. The 7 metrics were further grouped to reflect the students' capability to collect and to analyze information, yielding an overall performance score. A methodological score was computed based on the discordance between the diagnostic pathway followed by students and the reference pathway previously defined by the teacher. In total, 25 students attending the fifth year of the School of Medicine at Humanitas University underwent test 1, which simulated a patient with dyspnea. Test 2 dealt with abdominal pain and was attended by 36 students on a different day.
For validation, we assessed the Spearman rank correlation between these scores and the score obtained by each student in the hematology curricular examination. RESULTS The mean overall scores were consistent between test 1 (mean 0.59, SD 0.05) and test 2 (mean 0.54, SD 0.12). For each student, the overall performance was achieved through different contributions from collecting and analyzing information. Methodological scores highlighted discordances between the reference diagnostic pattern previously set by the teacher and the one pursued by the student. No significant correlation was found between the VPS scores and hematology examination scores. CONCLUSIONS Different components of the students' diagnostic process may be disentangled and quantified by appropriate metrics applied to students' actions recorded while addressing a virtual case. Such an approach may help teachers provide students with individualized feedback aimed at addressing competence gaps and methodological inconsistencies. There was no correlation between the hematology curricular examination score and any of the proposed scores, as these scores address different aspects of students' medical knowledge.
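The per-dimension scoring this abstract describes — sensitivity, precision, and their harmonic mean (F1) — can be sketched as follows. The action sets and dimension are hypothetical illustrations, not data from the study:

```python
def f1_metrics(performed, relevant):
    """Compute sensitivity, precision and their harmonic mean (F1)
    for one performance dimension.

    performed -- set of actions the student actually took
    relevant  -- set of actions the teacher marked as relevant
    """
    correct = performed & relevant
    sensitivity = len(correct) / len(relevant)   # relevant information found
    precision = len(correct) / len(performed)    # correct actions performed
    if sensitivity + precision == 0:
        return sensitivity, precision, 0.0       # avoid division by zero
    f1 = 2 * sensitivity * precision / (sensitivity + precision)
    return sensitivity, precision, f1

# Hypothetical history-taking dimension: the student asked 4 questions,
# 3 of which were among the 6 the teacher deemed relevant.
student = {"onset", "smoking", "fever", "allergies"}
reference = {"onset", "smoking", "fever", "exertion", "orthopnea", "weight"}

sens, prec, f1 = f1_metrics(student, reference)
print(sens, prec, f1)  # 0.5 0.75 0.6
```

Averaging such F1 values across the seven dimensions would give a single per-student performance score of the kind the authors report.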
Collapse
|
22
|
The Student-Generated Reasoning Tool (SGRT): Linking medical knowledge and clinical reasoning in preclinical education. MEDICAL TEACHER 2022; 44:158-166. [PMID: 34459337 DOI: 10.1080/0142159x.2021.1967904] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
INTRODUCTION The simultaneous integration of knowledge acquisition and development of clinical reasoning in preclinical medical education remains a challenge. To help address this challenge, the authors developed and implemented the Student-Generated Reasoning Tool (SGRT), a tool asking students to propose and justify pathophysiological hypotheses, generate findings, and critically appraise information. METHODS In 2019, students in a first-year preclinical course (n = 171; SGRT group) were assigned to one of 20 teams. Students used the SGRT individually, then in teams, and faculty provided feedback. The control group (n = 168) consisted of students from 2018 who did not use the SGRT. Outcomes included academic performance, the effectiveness of collaborative environments using the SGRT, and student feedback. RESULTS Students in the SGRT group were five times more likely to answer questions correctly than those in the control group. The accuracy of pathophysiological hypotheses was significantly lower for individuals than for teams. Qualitative analysis indicated that students benefited from generating their own data, justifying their reasoning, and working both individually and in teams. CONCLUSIONS This study introduces the SGRT as a potentially engaging, case-based, and collaborative learning method that may help preclinical medical students become aware of their knowledge gaps and integrate their basic and clinical science knowledge in the context of clinical reasoning.
Collapse
|
23
|
An Ontology-Driven Learning Assessment Using the Script Concordance Test. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12031472] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Assessing the level of domain-specific reasoning acquired by students is one of the major challenges in education, particularly medical education. Considering the importance of clinical reasoning in preclinical and clinical practice, it is necessary to evaluate students’ learning achievements accordingly. The traditional ways of assessing clinical reasoning include long-case exams, oral exams, and objective structured clinical examinations. However, these traditional assessment techniques are not enough to answer emerging requirements in the new reality, due to limited scalability and difficulty of adoption in online education. In recent decades, the script concordance test (SCT) has emerged as a promising assessment tool, particularly in medical education. The question is whether the usability of SCT could be raised to a level high enough to match current education requirements by exploiting opportunities that new technologies provide, particularly semantic knowledge graphs (SKGs) and ontologies. In this paper, an ontology-driven learning assessment is proposed, using a novel automated SCT generation platform. The SCTonto ontology is adopted for knowledge representation in SCT question generation, with a focus on using electronic health record data for medical education. Direct and indirect strategies for generating Likert-type scores for SCT are also described in detail. The proposed automatic question generation was evaluated against traditional manually created SCTs, and the results showed that the time required for test creation was significantly reduced, confirming significant scalability improvements over traditional approaches.
Collapse
|
24
|
Evaluating the Clinical Reasoning of Student Health Professionals in Placement and Simulation Settings: A Systematic Review. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2022; 19:ijerph19020936. [PMID: 35055758 PMCID: PMC8775520 DOI: 10.3390/ijerph19020936] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/30/2021] [Revised: 12/21/2021] [Accepted: 12/22/2021] [Indexed: 11/16/2022]
Abstract
(1) Background: Clinical reasoning is essential to the effective practice of autonomous health professionals and is, therefore, an essential capability to develop as students. This review aimed to systematically identify the tools available to health professional educators to evaluate students' attainment of clinical reasoning capabilities in clinical placement and simulation settings. (2) Methods: A systematic review of seven databases was undertaken. Peer-reviewed, English-language publications reporting studies that developed or tested relevant tools were included. Searches included multiple terms related to clinical reasoning and health disciplines. Data regarding each tool's conceptual basis and evaluated constructs were systematically extracted and analysed. (3) Results: Most of the 61 included papers evaluated students in the medical and nursing disciplines, and over half reported on the Script Concordance Test or the Lasater Clinical Judgement Rubric. A number of conceptual frameworks were referenced, though many papers did not reference any framework. (4) Conclusions: Overall, key outcomes highlighted an emphasis on diagnostic reasoning as opposed to management reasoning. Tools were predominantly aligned with individual health disciplines, with limited cross-referencing within the field. Future research into clinical reasoning evaluation tools should build on and refer to existing approaches and consider contributions across professional disciplinary divides.
Collapse
|
25
|
Script concordance tests: A call for action in dental education. EUROPEAN JOURNAL OF DENTAL EDUCATION : OFFICIAL JOURNAL OF THE ASSOCIATION FOR DENTAL EDUCATION IN EUROPE 2021; 25:705-710. [PMID: 33486880 DOI: 10.1111/eje.12649] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/23/2020] [Revised: 11/23/2020] [Accepted: 12/13/2020] [Indexed: 06/12/2023]
Abstract
The Script Concordance Test (SCT) is an educational tool that aims to assess the ability to interpret medical information under conditions of uncertainty. It is widely used and validated in health education, but almost unknown in dentistry. Based on authentic clinical problem-solving situations, it allows assessment of the clinical reasoning that experienced health workers develop over the years. A specific scoring system, dedicated to the SCT, accounts for the variability of practitioners' responses in the same clinical situations. Finally, the scores generated by the SCT reflect respondents' ability to interpret clinical data compared with experienced clinicians. This article aims to familiarise the dental education community with SCT construction, optimisation and possible applications.
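The SCT scoring system mentioned in this abstract is commonly implemented as "aggregate scoring": each response option earns partial credit proportional to the number of reference panelists who chose it, normalised by the modal answer. A minimal sketch under that assumption (the panel data and item are hypothetical, and aggregate scoring is one published variant, not necessarily the exact system discussed in the article):

```python
from collections import Counter

def scoring_key(panel_answers):
    """Build a partial-credit key from a reference panel's answers.

    Each option's credit = (# panelists choosing it) / (# choosing the
    modal option): the most popular answer earns 1.0 and other answers
    earn proportional partial credit, capturing panel variability.
    """
    counts = Counter(panel_answers)
    modal = max(counts.values())
    return {option: n / modal for option, n in counts.items()}

def score_item(key, student_answer):
    """Score one SCT item; options no panelist chose earn 0."""
    return key.get(student_answer, 0.0)

# Hypothetical item: 10 panelists rate a hypothesis on a 5-point scale
# (-2 = much less likely ... +2 = much more likely).
panel = [+1, +1, +1, +1, +1, 0, 0, +2, +2, -1]
key = scoring_key(panel)

print(score_item(key, +1))  # modal answer -> full credit: 1.0
print(score_item(key, +2))  # 2 of 10 panelists -> 0.4
print(score_item(key, -2))  # no panelist chose it -> 0.0
```

A student's total is then the sum of item credits, usually rescaled to a percentage of the maximum obtainable score.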
Collapse
|
26
|
Teaching emergency situations during a psychiatry residency programme using a blended learning approach: a pilot study. BMC MEDICAL EDUCATION 2021; 21:473. [PMID: 34488745 PMCID: PMC8419928 DOI: 10.1186/s12909-021-02887-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/21/2020] [Accepted: 08/19/2021] [Indexed: 06/13/2023]
Abstract
BACKGROUND Emergency psychiatry is an essential component in the training of psychiatry residents, who are required to make patient-centred orientation decisions. This training calls for specific knowledge as well as skills and attitudes requiring experience. Kolb introduced a theory of experiential learning which suggests that effective learners need four types of abilities: concrete experience, reflective observation, abstract conceptualisation and active experimentation. We aimed to evaluate a resident training programme that we designed for use in an emergency psychiatry setting based on experiential learning theory. METHODS We designed a four-step training programme for all first-year psychiatry residents: (i) theoretical teaching of psychiatric emergency knowledge; (ii) concrete experience teaching, involving an initial simulation session based on three scenarios corresponding to clinical situations frequently encountered in emergency psychiatry (suicidal crisis, hypomania and depressive episodes); (iii) reflective observation and abstract conceptualisation teaching, based on videos and clinical interview commentary by a senior psychiatrist for the same three scenarios; (iv) active experimentation teaching during a second simulation session based on the same three frequently encountered clinical situations but with different scenarios. Training-related knowledge acquisition was assessed after the second simulation session using a multiple-choice quiz (MCQ), short-answer questions and a script concordance test (SCT). A satisfaction questionnaire was administered after each resident completed the initial session in order to evaluate the relevance of the teaching to clinical practice. Descriptive analyses are reported as means (± standard deviation). Comparative analyses were conducted with Wilcoxon or Student's t tests depending on data distribution.
RESULTS The residents' mean MCQ, short-answer question and SCT scores were 7.25/10 (SD = 1.2), 8.33/10 (SD = 1.4) and 77.5/100 (SD = 15.8), respectively. The satisfaction questionnaire revealed that 67% of residents found the teaching consistent. CONCLUSION We designed a blended learning programme that combined classical theoretical learning to acquire basic concepts, simulation training to practise clinical situations, and video support to improve interview skills and memory recall. The residents indicated that this training adequately prepared them for being on duty. However, despite this encouraging result, further studies are needed to attest to the programme's efficacy.
Collapse
|
28
|
Teaching clinical reasoning to undergraduate medical students by illness script method: a randomized controlled trial. BMC MEDICAL EDUCATION 2021; 21:87. [PMID: 33531017 PMCID: PMC7856771 DOI: 10.1186/s12909-021-02522-0] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/11/2020] [Accepted: 01/27/2021] [Indexed: 05/29/2023]
Abstract
BACKGROUND The illness script method employs a theoretical outline (e.g., epidemiology, pathophysiology, signs and symptoms, diagnostic tests, interventions) to clarify how clinicians organize medical knowledge for clinical reasoning in the diagnosis domain. We hypothesized that an educational intervention based on the illness script method would improve medical students' clinical reasoning skills in the diagnosis domain. METHODS This study was a randomized controlled trial involving 100 fourth-year medical students at Shiraz Medical School, Iran. Fifty students were randomized to the intervention group, who were taught clinical reasoning skills based on the illness script method for three diseases within one clinical scenario. Another 50 students were randomized to the control group, who were taught the clinical presentation, based on signs and symptoms, of the same three diseases. The outcomes of interest were learner satisfaction with the intervention and posttest scores on both an internally developed knowledge test and a Script Concordance Test (SCT). RESULTS Of the hundred participating fourth-year medical students, 47 (47%) were male and 53 (53%) were female. On the knowledge test, there was no difference in pretest scores between the intervention and control groups, suggesting similar baseline knowledge; however, posttest scores in the intervention group (15.74 ± 2.47 out of 20) were statistically significantly higher than in the control group (14.38 ± 2.59 out of 20, P = 0.009). On the SCT, the mean score for the intervention group (6.12 ± 1.95 out of 10) was significantly higher than for the control group (4.54 ± 1.56 out of 10; P = 0.0001). Learner satisfaction data indicated that the intervention was well received by students.
CONCLUSION Teaching with the illness script method was an effective way to improve students' clinical reasoning skills in the diagnosis domain, as suggested by posttest and SCT scores for specific clinical scenarios. Whether this approach translates to improved generalized clinical reasoning skills in real clinical settings merits further study.
Collapse
|
29
|
Does burnout affect clinical reasoning? An observational study among residents in general practice. BMC MEDICAL EDUCATION 2021; 21:35. [PMID: 33413369 PMCID: PMC7792007 DOI: 10.1186/s12909-020-02457-y] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/11/2020] [Accepted: 12/11/2020] [Indexed: 06/12/2023]
Abstract
BACKGROUND Burnout results from excessive demands at work. Caregivers suffering from burnout show a state of emotional exhaustion, leading them to distance themselves from their patients and become less efficient in their work. While some studies have shown a negative impact of burnout on physicians' clinical reasoning, others have failed to demonstrate any such impact. To better understand the link between clinical reasoning and burnout, we carried out a study looking for an association between burnout and clinical reasoning in a population of general practice residents. METHODS We conducted a cross-sectional observational study among residents in general practice in 2017 and 2019. Clinical reasoning performance was assessed using a script concordance test (SCT). The Maslach Burnout Inventory for Human Services Survey (MBI-HSS) was used to determine burnout status using both the original standards of Maslach's burnout inventory manual (conventional approach) and an approach in which individuals reported high emotional exhaustion in combination with high depersonalization or low personal accomplishment compared to a norm group ("emotional exhaustion + 1" approach). RESULTS One hundred ninety-nine residents were included. The participants' mean SCT score was 76.44% (95% CI: 75.77-77.10). In the conventional approach, 126 residents (63.31%) had no burnout, 37 (18.59%) had mild burnout, 23 (11.56%) had moderate burnout, and 13 (6.53%) had severe burnout. In the "exhaustion + 1" approach, 38 residents (19.10%) had burnout status. We found no significant correlation between burnout status and SCT scores for either the conventional or the "exhaustion + 1" approach. CONCLUSIONS Our data seem to indicate that burnout status has no significant impact on clinical reasoning. However, one speculation is that the SCT mostly examines the analytical dimension of the clinical reasoning process, whereas emotions are conventionally associated with the intuitive dimension.
We think future research might aim to explore the impact of burnout on intuitive clinical reasoning processes.
Collapse
|
30
|
Impact of panelists' experience on script concordance test scores of medical students. BMC MEDICAL EDUCATION 2020; 20:313. [PMID: 32943030 PMCID: PMC7499961 DOI: 10.1186/s12909-020-02243-w] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/29/2020] [Accepted: 09/10/2020] [Indexed: 06/11/2023]
Abstract
BACKGROUND The evaluation process for French medical students will evolve in the next few years in order to improve assessment validity. Script concordance testing (SCT) offers the possibility of assessing medical knowledge alongside clinical reasoning under conditions of uncertainty. In this study, we aimed to compare the SCT scores of a large cohort of undergraduate medical students according to the experience level of the reference panel. METHODS In 2019, the authors developed a 30-item SCT and sent it to experts with varying levels of experience. Data analysis included score comparisons with paired Wilcoxon rank sum tests and concordance analysis with Bland & Altman plots. RESULTS A panel of 75 experts was divided into three groups: 31 residents, 21 non-experienced physicians (NEP) and 23 experienced physicians (EP). Within each group, random samples of N = 20, 15 and 10 were selected. A total of 985 students from nine medical schools took the SCT examination. Whatever the size of the panel (N = 20, 15 or 10), students' SCT scores were lower with the NEP group than with the resident panel (median score 67.1 vs 69.1, p < 0.0001 for N = 20; 67.2 vs 70.1, p < 0.0001 for N = 15; and 67.7 vs 68.4, p < 0.0001 for N = 10) and lower with the EP group than with the NEP group (65.4 vs 67.1, p < 0.0001 for N = 20; 66.0 vs 67.2, p < 0.0001 for N = 15; and 62.5 vs 67.7, p < 0.0001 for N = 10). Bland & Altman plots showed good concordance between students' SCT scores, whatever the experience level of the expert panel. CONCLUSIONS Even though students' SCT scores differed statistically according to the expert panel, the differences were rather small. These results open up the possibility of including less-experienced experts in panels for the evaluation of medical students.
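The study design described above — rescoring the same student answers against scoring keys built from random subsamples of different sizes drawn from a reference panel — can be sketched as follows. The panel answers here are simulated, and the key uses standard SCT aggregate scoring, which may differ in detail from the authors' exact method:

```python
import random
from collections import Counter

def aggregate_key(panel_answers):
    """Partial-credit key: credit = votes / votes for the modal option."""
    counts = Counter(panel_answers)
    modal = max(counts.values())
    return {opt: n / modal for opt, n in counts.items()}

def score_student(key, answers):
    """Mean credit over all items, expressed as a percentage."""
    return 100 * sum(key[i].get(a, 0.0) for i, a in enumerate(answers)) / len(answers)

random.seed(0)  # reproducible simulation
n_items = 30
# Simulated full panel: 31 experts answer each of 30 items on a 5-point scale.
full_panel = [[random.choice([-2, -1, 0, 1, 2]) for _ in range(31)]
              for _ in range(n_items)]
student_answers = [random.choice([-2, -1, 0, 1, 2]) for _ in range(n_items)]

for n in (20, 15, 10):
    # Draw one random subpanel of size n per item and rebuild the scoring key,
    # mirroring the study's N = 20, 15 and 10 panel samples.
    sub = [random.sample(item_answers, n) for item_answers in full_panel]
    key = [aggregate_key(item) for item in sub]
    print(f"N={n}: student score {score_student(key, student_answers):.1f}%")
```

Running this repeatedly over many subsamples would show how much a student's score shifts with panel size and composition, which is the question the study addresses empirically.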
Collapse
|
31
|
Development and piloting of a Situational Judgement Test for emotion-handling skills using the Verona Coding Definitions of Emotional Sequences (VR-CoDES). PATIENT EDUCATION AND COUNSELING 2020; 103:1839-1845. [PMID: 32423834 DOI: 10.1016/j.pec.2020.04.001] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/05/2019] [Revised: 03/31/2020] [Accepted: 04/02/2020] [Indexed: 06/11/2023]
Abstract
OBJECTIVE Emotion-handling skills are key components for interpersonal communication by medical professionals. The Verona Coding Definitions of Emotional Sequences (VR-CoDES) appears useful to develop a Situational Judgment Test (SJT) for assessing emotion-handling skills. METHODS In phase 1 we used a multi-stage process with expert panels (panel 1: n = 16; panel 2: n = 8; panel 3: n = 20) to develop 12 case vignettes. Each vignette includes (1) a video representing a critical incident containing concern(s) and/or cue(s), (2) a standardized lead-in question, (3) five response alternatives. In phase 2 we piloted the SJT to assess validity via an experimental study with medical students (n = 88). RESULTS Experts and students rated most of the 'Reduce space' responses as inappropriate and preferred 'Explicit' responses. Women scored higher than men, and there was no decline of empathy according to students' year of study. There were medium correlations with self-assessment instruments. The students' acceptance of the SJT was high. CONCLUSION The use of VR-CoDES, authentic vignettes, videos and expert panels contributed to the development and validity of the SJT. PRACTICE IMPLICATIONS Development costs were high but could be made up over time. Agreement on a proper score and the implementation of an adequate feedback structure seem to be useful.
Collapse
|
32
|
Assessment of clinical reasoning: three evolutions of thought. Diagnosis (Berl) 2020; 7:191-196. [PMID: 32182208 DOI: 10.1515/dx-2019-0096] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2019] [Accepted: 02/12/2020] [Indexed: 02/17/2024]
Abstract
Although assessing clinical reasoning is almost universally considered central to medical education it is not a straightforward issue. In the past decades, our insights into clinical reasoning as a phenomenon, and consequently the best ways to assess it, have undergone significant changes. In this article, we describe how the interplay between fundamental research, practical applications, and evaluative research has pushed the evolution of our thinking and our practices in assessing clinical reasoning.
Collapse
|
33
|
Empirical comparison of three assessment instruments of clinical reasoning capability in 230 medical students. BMC MEDICAL EDUCATION 2020; 20:264. [PMID: 32787953 PMCID: PMC7425135 DOI: 10.1186/s12909-020-02185-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/07/2020] [Accepted: 08/04/2020] [Indexed: 06/11/2023]
Abstract
BACKGROUND Several instruments intend to measure clinical reasoning capability, yet we lack evidence contextualizing their scores. The authors compared three clinical reasoning instruments [Clinical Reasoning Task (CRT), Patient Note Scoring rubric (PNS), and Summary Statement Assessment Rubric (SSAR)] using Messick's convergent validity framework in pre-clinical medical students. Scores were compared to a validated clinical reasoning instrument, Clinical Data Interpretation (CDI). METHOD Authors administered CDI and the first clinical case to 235 students. Sixteen randomly selected students (four from each CDI quartile) wrote a note on a second clinical case. Each note was scored with CRT, PNS, and SSAR. Final scores were compared to CDI. RESULTS CDI scores did not significantly correlate with any other instrument. A large, significant correlation between PNS and CRT was seen (r = 0.71; p = 0.002). CONCLUSIONS None of the tested instruments outperformed the others when using CDI as a standard measure of clinical reasoning. Differing strengths of association between clinical reasoning instruments suggest they each measure different components of the clinical reasoning construct. The large correlation between CRT and PNS scoring suggests areas of novice clinical reasoning capability, which may not be yet captured in CDI or SSAR, which are weighted toward knowledge synthesis and hypothesis testing.
Collapse
|
34
|
Examining response process validity of script concordance testing: a think-aloud approach. INTERNATIONAL JOURNAL OF MEDICAL EDUCATION 2020; 11:127-135. [PMID: 32581143 PMCID: PMC7870454 DOI: 10.5116/ijme.5eb6.7be2] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/16/2019] [Accepted: 05/09/2020] [Indexed: 06/11/2023]
Abstract
OBJECTIVES This study investigated whether medical student responses to Script Concordance Testing (SCT) items represent valid clinical reasoning. Using a think-aloud approach, students provided written explanations of the reasoning that underpinned their responses, and these were reviewed for concordance with an expert reference panel. METHODS Sets of 12, 11 and 15 SCT items were administered online to Year 3 (2018), Year 4 (2018) and Year 3 (2019) medical students respectively. Students' free-text descriptions of the reasoning supporting each item response were analysed and compared with those of the expert panel. Response process validity was quantified as the rate of true positives (percentage of full and partial credit responses derived through correct clinical reasoning) and true negatives (percentage of responses with no credit derived through faulty clinical reasoning). RESULTS Two hundred and nine students completed the online tests (response rate = 68.3%). The majority of students who had chosen the response which attracted full or partial credit also provided justifications which were concordant with the experts' (true positive rate of 99.6% for full credit; 99.4% for partial credit responses). Most responses that attracted no credit were based on faulty clinical reasoning (true negative rate of 99.0%). CONCLUSIONS The findings support the response process validity of SCT scores in the setting of undergraduate medicine. The additional written think-aloud component, used to assess clinical reasoning, provided useful information to inform student learning. However, SCT scores should be validated on each testing occasion, and in other contexts.
Collapse
|
35
|
Assessment of Emergency Medicine Residents' Clinical Reasoning: Validation of a Script Concordance Test. West J Emerg Med 2020; 21:978-984. [PMID: 32726273 PMCID: PMC7390545 DOI: 10.5811/westjem.2020.3.46035] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2019] [Accepted: 03/23/2020] [Indexed: 11/11/2022] Open
Abstract
INTRODUCTION A primary aim of residency training is to develop competence in clinical reasoning. However, there are few instruments that can accurately, reliably, and efficiently assess residents' clinical decision-making ability. This study aimed to externally validate the script concordance test in emergency medicine (SCT-EM), an assessment tool designed for this purpose. METHODS Using established methodology for the SCT-EM, we compared EM residents' performance on the SCT-EM to an expert panel of emergency physicians at three urban academic centers. We performed adjusted pairwise t-tests to compare differences between all residents and attending physicians, as well as among resident postgraduate year (PGY) levels. We tested correlation between SCT-EM and Accreditation Council for Graduate Medical Education Milestone scores using Pearson's correlation coefficients. Inter-item covariances for SCT items were calculated using Cronbach's alpha statistic. RESULTS The SCT-EM was administered to 68 residents and 13 attendings. There was a significant difference in mean scores among all groups (mean ± standard deviation: PGY-1 59 ± 7; PGY-2 62 ± 6; PGY-3 60 ± 8; PGY-4 61 ± 8; attendings 73 ± 8; p < 0.01). Post hoc pairwise comparisons demonstrated that significant differences in mean scores occurred only between each PGY level and the attendings (p < 0.01 for PGY-1 to PGY-4 vs attending group). Performance on the SCT-EM and EM Milestones was not significantly correlated (r = 0.12, p = 0.35). Internal reliability of the exam was determined using Cronbach's alpha, which was 0.67 for all examinees and 0.89 in the expert-only group. CONCLUSION The SCT-EM has limited utility in reliably assessing clinical reasoning among EM residents. Although the SCT-EM was able to differentiate clinical reasoning ability between residents and expert faculty, it did not do so between PGY levels, nor did it correlate with Milestone scores.
Furthermore, several limitations threaten the validity of the SCT-EM, suggesting further study is needed in more diverse settings.
Collapse
|
36
|
The cognitive process of test takers when using the script concordance test rating scale. MEDICAL EDUCATION 2020; 54:337-347. [PMID: 31912562 DOI: 10.1111/medu.14056] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/13/2019] [Revised: 12/24/2019] [Accepted: 01/02/2020] [Indexed: 06/10/2023]
Abstract
CONTEXT Clinical decision making (CDM) skills are important to learn and assess in order to establish competence in trainees. A common tool for assessing CDM is the script concordance test (SCT), which asks test takers to indicate how a new clinical finding influences a proposed plan using a Likert-type scale. Most criticisms of the SCT relate to its rating scale but are largely theoretical. The cognitive process of test takers when selecting their responses using the SCT rating scale remains understudied, but is essential to gathering validity evidence for use of the SCT in CDM assessment. METHODS Cases from an SCT used in a national validation study were administered to 29 residents and 14 staff surgeons. Semi-structured cognitive interviews were then conducted with 10 residents and five staff surgeons based on the SCT results. Cognitive interview data were independently coded by two data analysts, who specifically sought to elucidate how participants mapped their internally generated responses to any of the rating scale options. RESULTS Five major issues were identified with the response matching cognitive process: (a) the meaning of the '0' response option; (b) which response corresponds to agreement with the planned management; (c) the rationale for picking '±1' versus '±2'; (d) which response indicates the desire to undertake the planned management plus an additional procedure, and (e) the influence of time on response selection. CONCLUSIONS Studying how test takers (experts and trainees) interpret the SCT rating scale has revealed several issues related to inconsistent and unintended use. Revising the scale to address the variety of interpretations could help to improve the response process validity of the SCT and therefore improve the SCT's ability to be used in CDM skills assessments.
Collapse
|
37
|
Electronic Fetal Monitoring Credentialing Examination: The First 4000. AJP Rep 2020; 10:e93-e100. [PMID: 32190412 PMCID: PMC7075713 DOI: 10.1055/s-0040-1705141] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/18/2019] [Accepted: 11/15/2019] [Indexed: 11/15/2022] Open
Abstract
Objective Recognized variability in fetal heart rate interpretation led the Perinatal Quality Foundation (PQF) to develop a credentialing exam. We report an evaluation of the first 4,000-plus PQF Fetal Monitoring Credentialing (FMC) exams. Study Design The PQF FMC exam is an online assessment for obstetric providers and nurses. The exam contains two question types: traditional multiple-choice evaluating knowledge and Script Concordance Theory (SCT) evaluating judgment. Reliability was measured through McDonald's Total Omega and Cronbach's Alpha. Pearson's correlations between knowledge and judgment were measured. Results From February 2014 through September 2018, 4,330 different individuals took the exam. A total of 4,057 records were suitable for reliability analysis: 2,105 (52%) physicians, 1,756 (43%) nurses, and 196 (5%) certified nurse midwives (CNMs). As a measure of test reliability, total Omega was 0.80 for obstetric providers and 0.77 for nurses. There was only moderate correlation between the knowledge scores and judgment scores for obstetric providers (0.38) and for nurses (0.43). Conclusion The PQF FMC exam is a reliable, valid assessment of both Electronic Fetal Monitoring (EFM) knowledge and judgment. It evaluates essential EFM skills for the establishment of practical credentialing. It also shows only a modest correlation between knowledge and judgment scores, suggesting that knowledge alone does not assure clinical competency.
Collapse
|
38
|
A developmental assessment of clinical reasoning in preclinical medical education. MEDICAL EDUCATION ONLINE 2019; 24:1591257. [PMID: 30935299 PMCID: PMC6450466 DOI: 10.1080/10872981.2019.1591257] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/10/2018] [Revised: 02/26/2019] [Accepted: 02/28/2019] [Indexed: 06/09/2023]
Abstract
BACKGROUND Clinical reasoning is an essential skill to be learned during medical education. A developmental framework for the assessment and measurement of this skill has not yet been described in the literature. OBJECTIVE The authors describe the creation and pilot implementation of a rubric designed to assess the development of clinical reasoning skills in pre-clinical medical education. DESIGN The multi-disciplinary course team used Backwards Design to develop course goals, objectives, and assessments for a new Clinical Reasoning Course. The team focused on behaviors that students were expected to demonstrate, identifying each as a 'desired result' element and aligning these with three levels of performance: emerging, acquiring, and mastering. RESULTS The first draft of the rubric was reviewed and piloted by faculty using sample student entries; this provided feedback on ease of use and appropriateness. After the first semester, the course team evaluated whether the rubric distinguished between different levels of student performance in each competency. A systematic approach based on descriptive analysis of mid- and end-of-semester assessments of student performance revealed that over half the students received higher competency scores at semester end. CONCLUSION The assessment rubric allowed students in the early stages of clinical reasoning development to understand their trajectory and provided faculty a framework from which to give meaningful feedback. The multi-disciplinary background of the course team supported a systematic and robust course and assessment design process. The authors strongly encourage other colleges to support the use of collaborative and multi-disciplinary course teams.
Collapse
|
39
|
Assessment methods in medical specialist assessments in the DACH region - overview, critical examination and recommendations for further development. GMS JOURNAL FOR MEDICAL EDUCATION 2019; 36:Doc78. [PMID: 31844650 PMCID: PMC6905366 DOI: 10.3205/zma001286] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Subscribe] [Scholar Register] [Received: 07/29/2018] [Revised: 07/29/2019] [Accepted: 09/04/2019] [Indexed: 06/01/2023]
Abstract
Introduction: Specialist medical assessments fulfil the task of ensuring that physicians have the clinical competence to independently represent their field and provide the best possible care to patients, taking into account the current state of knowledge. To date, there are no comprehensive reports on the status of specialist assessments in the German-speaking countries (DACH). For that reason, the assessment methods used in the DACH region are compiled and critically evaluated in this article, and recommendations for further development are described. Methods: The websites of the following institutions were searched for information regarding testing methods used and the organisation of specialist examinations: Homepage of the Swiss Institute for Medical Continuing Education (SIWF), Homepage of the Academy of Physicians (Austria) and Homepage of the German Federal Medical Association (BAEK). Further links were considered and the results were presented in tabular form. The assessment methods used in the specialist assessments are critically examined with regard to established quality criteria and recommendations for the further development of the specialist assessments are derived from these. Results: The following assessment methods are already used in Switzerland and Austria: written examinations with multiple choice and short answer questions, structured oral examinations, the Script Concordance Test (SCT) and the Objective Structured Clinical Examination (OSCE). In some cases, these assessment methods are combined (triangulation). In Germany, on the other hand, the oral examination has so far been conducted in an unstructured manner in the form of a 'collegial content discussion'. In order to test knowledge, practical and communicative competences equally, it is recommended to implement a triangulation of methods and follow the further recommendations described in this article. 
Conclusion: While there are already accepted approaches for quality-assured and competence-based specialist assessments in Switzerland and Austria, there is still a long way to go in Germany. Following the recommendations presented in this article could contribute to improving specialist assessments across the DACH region in line with the objectives of these assessments.
Collapse
|
40
|
Designing concordance-based training to develop professional reasoning: what steps are required? ACTA ACUST UNITED AC 2019. [DOI: 10.1051/pmed/2019019] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Context: Developing reasoning is a necessity in the training of health professionals. Concordance-based training (formation par concordance, FpC) is a pedagogical approach that places learners in authentic situations. The questions posed are those that professionals ask themselves in practice, and the answers are compared with those given by the members of a reference panel. Aim: To describe the variables that must be taken into account when designing concordance-based training. Methods: There are six such variables: (1) the goals of the educational activity; (2) the nature of the task; (3) the content and level of complexity; (4) the reference panel; (5) the feedback; (6) the digital learning environment. Results: The examples illustrated in this article demonstrate the versatility of this approach for emphasizing the elements critical to reasoning in various professions. Conclusion: The examples presented illustrate how concordance-based training tools can be developed and improved with each iteration. It is now conceivable that this training approach will one day become an important part of all professional training.
Collapse
|
41
|
Construct validity of script concordance testing: progression of scores from novices to experienced clinicians. INTERNATIONAL JOURNAL OF MEDICAL EDUCATION 2019; 10:174-179. [PMID: 31562807 PMCID: PMC6766395 DOI: 10.5116/ijme.5d76.1eee] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/23/2019] [Accepted: 09/09/2019] [Indexed: 06/10/2023]
Abstract
OBJECTIVES To investigate the construct validity of Script Concordance Testing (SCT) scores as a measure of the clinical reasoning ability of medical students and practising General Practitioners with different levels of clinical experience. METHODS Part I involved a cross-sectional study, where 105 medical students, 19 junior registrars and 13 experienced General Practitioners completed the same set of SCT questions, and their mean scores were compared using one-way ANOVA. In Part II, pooled and matched SCT scores for 5 cohorts of students (2012 to 2017) in Year 3 (N=584) and Year 4 (N=598) were retrospectively analysed for evidence of significant progression. RESULTS A significant main effect of clinical experience was observed [F(2, 136)=6.215, p=0.003]. The mean SCT score for General Practitioners (M=70.39, SD=4.41, N=13) was significantly higher (p=0.011) than that of students (M = 64.90, SD = 6.30, N=105). Year 4 students (M=68.90, SD= 7.79, N=584) scored a significantly higher mean score [t(552)=12.78, p<0.001] than Year 3 students (M = 64.03, SD=7.98, N=598). CONCLUSIONS The findings that candidate scores increased with increasing level of clinical experience add to current evidence in the international literature in support of the construct validity of Script Concordance Testing. Prospective longitudinal studies with larger sample sizes are recommended to further test and build confidence in the construct validity of SCT scores.
Collapse
|
42
|
Evaluation of the use of script concordance test in a multicampus psychiatric pharmacy elective course. Ment Health Clin 2019; 9:304-308. [PMID: 31534871 PMCID: PMC6728120 DOI: 10.9740/mhc.2019.09.304] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
Introduction Evaluating a student's ability to accept complexity, uncertainty, and ambiguity as part of clinical practice is difficult in a classroom setting using written tests. This study was conducted to explore the feasibility and validation of using a script concordance test (SCT) to evaluate pharmacy student knowledge and clinical competence in a psychiatry elective course. Methods This study involved prospective validation of psychiatry-focused SCT questions using a panel of practicing psychiatric pharmacists and retrospective review of student performance on the same SCT questions. The reliability of the SCT was also evaluated using the Cronbach alpha coefficient. Results A total of 13 practicing psychiatric pharmacists participated in the validation phase of the study, which comprised 75 questions. Pharmacy student scores (n = 17) averaged 39.79 (±5.02) points, and psychiatric pharmacist scores averaged 50.11 (±4.51) points, representing mean percentages of 61.2% and 77.1%, respectively, on the adjusted exam. The Cronbach alpha was 0.94. Discussion The development of a valid and reliable SCT to test student psychiatric pharmacy knowledge and clinical competence after taking a psychiatry elective course was feasible.
Collapse
|
43
|
Guidelines for Creating Written Clinical Reasoning Exams: Insight from a Delphi Study. HEALTH PROFESSIONS EDUCATION 2019. [DOI: 10.1016/j.hpe.2018.09.001] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022] Open
|
44
|
Evaluating Medical Students' Clinical Reasoning in Psychiatry Using Clinical and Basic Science Concepts Presented in Session-level Integration Sessions. MEDICAL SCIENCE EDUCATOR 2019; 29:819-824. [PMID: 34457546 PMCID: PMC8368579 DOI: 10.1007/s40670-019-00761-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
OBJECTIVE The objective of this study was to evaluate improvement in clinical reasoning by preclinical medical students following participation in a clinical presentation curriculum that included both course- and session-level integration of psychiatric and basic science concepts. A Script Concordance Test (SCT) for psychiatry was developed to assess differences in clinical reasoning in the students. METHODS Pre- and post-integration session tests were used to evaluate clinical reasoning among second-year medical students (MSII) who attended three integration sessions. Scores were compared between experts and medical students, and the validity and reliability of the SCT for psychiatry were assessed. RESULTS MSII scores improved 11% between the pre- and post-test (p < .001). There was no significant difference in scores between experts and MSII after attending the integration sessions. The SCT for psychiatry that was developed and used in this study provides reliable and valid results. CONCLUSION The concepts included in the integration sessions for this study highlighted possibilities for helping novice learners elaborate causal networks with the intention of cultivating illness script formation and clinical reasoning. Additional studies in this area should be considered to further enhance understanding of the possible benefits of this curriculum model.
Collapse
|
45
|
Abstract
Clinical reasoning is a core component of clinical competency that is used in all patient encounters, from simple to complex presentations. It involves synthesis of myriad clinical and investigative data to generate and prioritize an appropriate differential diagnosis and inform safe and targeted management plans. The literature is rich with proposed methods to teach this critical skill to trainees of all levels. Yet ensuring that reasoning ability is appropriately assessed across the spectrum, from knowledge acquisition to workplace-based clinical performance, can be challenging. In this perspective, we first introduce the concepts of illness scripts and dual-process theory, which describe the roles of non-analytic system 1 and analytic system 2 reasoning in clinical decision making. Thereafter, we draw upon existing evidence and expert opinion to review a range of methods that allow for effective assessment of clinical reasoning, contextualized within Miller's pyramid of learner assessment. Key assessment strategies that allow teachers to evaluate their learners' clinical reasoning ability are described from the level of knowledge acquisition through to real-world demonstration in the clinical workplace.
Collapse
|
46
|
Predictors of Clinical Reasoning Using the Reasoning 4 Change Instrument With Physical Therapist Students. Phys Ther 2019; 99:964-976. [PMID: 30869789 PMCID: PMC6665874 DOI: 10.1093/ptj/pzz044] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/18/2018] [Accepted: 02/03/2019] [Indexed: 11/13/2022]
Abstract
BACKGROUND Although physical therapist students must be well prepared to integrate biopsychosocial and behavioral perspectives into their clinical reasoning, there is a lack of knowledge regarding factors that influence such competence. OBJECTIVE This study explored the associations among the independent variables (knowledge, cognition, metacognition, psychological factors, contextual factors, and curriculum orientation vis-à-vis behavioral medicine competencies) and the dependent variables (outcomes of input from client (IC), functional behavioral analysis (FBA), and strategies for behavior change (SBC)) as levels in physical therapist students' clinical reasoning processes. DESIGN This study used an exploratory cross-sectional design. METHODS The Reasoning 4 Change instrument was completed by 151 final-semester physical therapist students. Hierarchical multiple regression analyses for IC, FBA, and SBC were conducted. In the first step, curriculum orientation was inserted into the model; in the second step, self-rated knowledge, cognition, and metacognition; and in the third step, psychological factors. RESULTS All independent variables except contextual factors together explained 37% of the variance in the outcome of IC. Curriculum orientation explained 3%, cognitive and metacognitive factors an additional 22%, and attitudes another 15%. Variance in the outcomes of FBA and SBC was explained by curriculum orientation only (FBA change in R2 = 0.04; SBC change in R2 = 0.05). Higher scores on the dependent variables were associated with a curriculum that included behavioral medicine competencies. LIMITATIONS The main limitation of this study is its cross-sectional design. CONCLUSIONS Cognitive and metacognitive capabilities and skills and positive attitudes are important predictors of physical therapist students' clinical reasoning focused on behavior change at the IC level.
Curricula with behavioral medicine competencies are associated with positive outcomes at all clinical reasoning levels.
Collapse
|
47
|
Experts' responses in script concordance tests: a response process validity investigation. MEDICAL EDUCATION 2019; 53:710-722. [PMID: 30779204 DOI: 10.1111/medu.13814] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/04/2018] [Revised: 06/25/2018] [Accepted: 12/28/2018] [Indexed: 06/09/2023]
Abstract
CONTEXT The script concordance test (SCT), designed to measure clinical reasoning in complex cases, has recently been the subject of several critical research studies. Amongst other issues, response process validity evidence remains lacking. We explored the response processes of experts on an SCT scoring panel to better understand their seemingly divergent beliefs about how new clinical data alter the suitability of proposed actions within simulated patient cases. METHODS A total of 10 Argentine gastroenterologists who served as the expert panel on an existing SCT re-answered 15 cases 9 months after their original panel participation. They then answered questions probing their reasoning and reactions to other experts' perspectives. RESULTS The experts sometimes noted they would not ordinarily consider the actions proposed for the cases at all (30/150 instances [20%]) or would collect additional data first (54/150 instances [36%]). Even when groups of experts agreed about how new clinical data in a case affected the suitability of a proposed action, there was often disagreement (118/133 instances [89%]) about the suitability of the proposed action before the new clinical data had been introduced. Experts reported confidence in their responses, but showed limited consistency with the responses they had given 9 months earlier (linear weighted kappa = 0.33). Qualitative analyses showed nuanced and complex reasons behind experts' responses, revealing, for example, that experts often considered the unique affordances and constraints of their varying local practice environments when responding. Experts generally found other experts' alternative responses moderately compelling (mean ± standard deviation 2.93 ± 0.80 on a 5-point scale, where 3 = moderately compelling). Experts switched their own preferred responses after seeing others' reasoning in 30 of 150 (20%) instances. 
CONCLUSIONS Expert response processes were not consistent with the classical interpretation and use of SCT scores. However, several fruitful and justifiable alternatives for the use of SCT-like methods are proposed, such as to guide assessments for learning.
|
48
|
Commentary: expert responses in script concordance tests: a response process validity investigation. MEDICAL EDUCATION 2019; 53:644-646. [PMID: 30989693 DOI: 10.1111/medu.13889] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
|
49
|
The Power of Subjectivity in the Assessment of Medical Trainees. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2019; 94:333-337. [PMID: 30334840 DOI: 10.1097/acm.0000000000002495] [Citation(s) in RCA: 88] [Impact Index Per Article: 17.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Objectivity in the assessment of students and trainees has been a hallmark of quality since the introduction of multiple-choice items in the 1960s. In medical education, this has extended to the structured examination of clinical skills and workplace-based assessment. Competency-based medical education, a pervasive movement that started roughly around the turn of the century, similarly calls for rigorous, objective assessment to ensure that all medical trainees meet standards to assure quality of health care. At the same time, measures of objectivity, such as reliability, have consistently shown disappointing results. This raises questions about the extent to which objectivity in such assessments can be ensured. In fact, the legitimacy of "objective" assessment of individual trainees, particularly in the clinical workplace, may be questioned. Workplaces are highly dynamic, and ratings by observers are inherently subjective, as they are based on expert judgment, and experts do not always agree, for good, idiosyncratic reasons. Thus, efforts to "objectify" these assessments may problematically distort the assessment process itself. In addition, "competence" must meet standards, but it is also context dependent. Educators are now arriving at the insight that subjective expert judgments by medical professionals are not only unavoidable but should actually be embraced as the core of assessment of medical trainees. This paper elaborates on the case for subjectivity in assessment.
|
50
|
Bridging the Gap Between the Classroom and the Clerkship: A Clinical Reasoning Curriculum for Third-Year Medical Students. MEDEDPORTAL : THE JOURNAL OF TEACHING AND LEARNING RESOURCES 2019; 15:10800. [PMID: 31139730 PMCID: PMC6507921 DOI: 10.15766/mep_2374-8265.10800] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/01/2018] [Accepted: 12/23/2018] [Indexed: 05/31/2023]
Abstract
INTRODUCTION Clinical reasoning is the complex cognitive process that drives the diagnosis of disease and the treatment of patients. There is a national call for medical educators to develop clinical reasoning curricula in undergraduate medical education. To address this need, we developed a longitudinal clinical reasoning curriculum for internal medicine clerkship students. METHODS We delivered six 1-hour sessions to approximately 40 students over the 15-week combined medicine-surgery clerkship at Penn State College of Medicine. We developed the content using previous work in clinical reasoning, including the American College of Physicians' Teaching Medicine Series book Teaching Clinical Reasoning. Students applied a clinical reasoning diagnostic framework to written cases during each workshop. Each session followed a scaffolded approach and built upon previously learned clinical reasoning skills. We administered pre- and post-surveys to assess students' baseline knowledge of clinical reasoning concepts and their perceived confidence in performing clinical reasoning skills. Students also provided open-ended responses regarding the effectiveness of the curriculum. RESULTS The curriculum was well received by students and led to increased perceived knowledge of clinical reasoning concepts and increased confidence in applying clinical reasoning skills. Students commented on the usefulness of practicing clinical reasoning in a controlled environment while utilizing a framework that could be deliberately applied to patient care. DISCUSSION The longitudinal clinical reasoning curriculum was effective in reinforcing key concepts of clinical reasoning and allowed for deliberate practice in a controlled environment. The curriculum is generalizable to students in both the preclinical and clinical years.
|