1
Enani G, Vassiliou M, Kaneva P, Watanabe Y, Munshi A. A Video-Based Assessment Tool to Measure Intraoperative Laparoscopic Suturing Using a Modified Script Concordance Methodology. Journal of Surgical Education 2023;80:1005-1011. PMID: 37263853. DOI: 10.1016/j.jsurg.2023.04.015.
Abstract
OBJECTIVES Laparoscopic suturing (LS) is a challenging laparoscopic skill to teach. Its complexity and nuances are not modeled or measured in current simulation and assessment platforms. The script concordance test (SCT) is used to assess clinical reasoning. The purpose of this study was to provide validity evidence for a novel SCT-based online assessment of LS skills. DESIGN We designed a video-based online SCT for LS using a cognitive task analysis (CTA) and expert panelists. The CTA yielded 4 LS domains: needle handling (NH), tissue handling (TH), knot-tying techniques (KT), and operative ergonomics (OE). Five-point scales with anchoring descriptors from -2 to +2 were used, and scoring was based on a modified SCT methodology. SETTING AND PARTICIPANTS The test was administered to 37 subjects (18 experts and 19 novices) with no time limit. An expert group distinct from the minimally invasive surgery (MIS) panelists was recruited; experts were defined as surgeons and fellows with LS experience of >25 cases annually. Validity was assessed by comparing the SCT scores of experienced and inexperienced surgeons, and Cronbach's alpha was used to assess the internal consistency of the test. RESULTS The test initially comprised 47 questions across the four domains: 13 NH, 4 TH, 20 KT, and 10 OE. Thirty-seven surgeons (18 experts and 19 inexperienced) completed it. Questions with a large discrepancy between experts and panelists (weighted score difference greater than 40) were discarded (n = 20), and one question was discarded because it received a 100% score from all participants, leaving 26 questions: 8 NH, 2 TH, 11 KT, and 5 OE. Test reliability (Cronbach's alpha) was 0.80. Mean scores were 72 ± 9% for experts and 63 ± 15% for inexperienced surgeons (p = 0.02), and the mean time to complete the test was 21 minutes. CONCLUSION This study provides validity evidence for a novel intraoperative LS assessment. The variability of responses between experts and panelists suggests that the SCT may capture clinical differences and surgeon preferences in performing LS intraoperatively.
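As a rough illustration of the panel-based scoring that script concordance tests such as this one typically use: each response option earns credit in proportion to the number of expert panelists who chose it, with the modal (most popular) choice earning full credit. The function names and panel data below are hypothetical, and the study's exact modified weighting is not specified in the abstract; this is only a sketch of the standard aggregate method.

```python
# Sketch of aggregate Script Concordance Test (SCT) scoring: an examinee's
# response to an item earns credit equal to the number of experts who chose
# that response, normalized by the count of the modal expert choice.
# Options span the five-point scale from -2 to +2 used in the study.
from collections import Counter

def item_credit(expert_choices, examinee_choice):
    """Credit in [0, 1] for one item, given the expert panel's choices."""
    counts = Counter(expert_choices)
    modal = max(counts.values())
    return counts.get(examinee_choice, 0) / modal

def sct_score(expert_panel, examinee_answers):
    """Percentage score over all items; expert_panel is a list of per-item choice lists."""
    credits = [item_credit(experts, answer)
               for experts, answer in zip(expert_panel, examinee_answers)]
    return 100 * sum(credits) / len(credits)

# Hypothetical example: 10 experts per item, examinee answers two items.
panel = [[+1, +1, +1, 0, 0, +2, +1, 0, +1, -1],   # modal choice +1 (5 experts)
         [0, 0, -1, 0, 0, 0, +1, 0, -1, -1]]      # modal choice 0 (6 experts)
print(sct_score(panel, [+1, -1]))  # full credit + 3/6 credit -> 75.0
```

Discarding items on which the experts themselves disagree widely, as done above, is the usual safeguard against items whose scoring key is dominated by panel noise rather than shared clinical judgment.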
Affiliation(s)
- Ghada Enani
- Department of Surgery, Faculty of Medicine, King Abdulaziz University, Jeddah, Saudi Arabia
- Melina Vassiliou
- Department of Surgery, McGill University, Montreal, QC, Canada; Steinberg-Bernstein Centre for Minimally Invasive Surgery and Innovation, McGill University Health Centre, Montreal, QC, Canada
- Pepa Kaneva
- Steinberg-Bernstein Centre for Minimally Invasive Surgery and Innovation, McGill University Health Centre, Montreal, QC, Canada
- Yusuke Watanabe
- Institute of Health Science Innovation for Medical Care, Hokkaido University Hospital, Sapporo, Hokkaido, Japan
- Amani Munshi
- Surgical Specialists of Colorado, Denver, Colorado
2
Ganesan S, Bhandary S, Thulasingam M, Chacko TV, Zayapragassarazan Z, Ravichandran S, Raja K, Ramasamy K, Alexander A, Penubarthi LK. Developing Script Concordance Test Items in Otolaryngology to Improve Clinical Reasoning Skills: Validation using Consensus Analysis and Psychometrics. Int J Appl Basic Med Res 2023;13:64-69. PMID: 37614842. PMCID: PMC10443453. DOI: 10.4103/ijabmr.ijabmr_604_22.
Abstract
Background Script concordance testing is widely used to foster and assess clinical reasoning. Our study aimed to develop a script concordance test (SCT) in the specialty of otolaryngology and to assess its validity using the panel response pattern and a consensus index. Materials and Methods We iteratively constructed SCT items, administered them to the panel members, and optimized the panel using response patterns and the consensus index. The final SCT items were then administered to students. Results We developed 98 SCT items and administered them to 20 panel members. The panel members' mean score on these 98 items was 79.5 (standard deviation [SD] = 4.4), and the consensus index ranged from 25.81 to 100. Sixteen items had bimodal or uniform response patterns; the consensus index improved when they were eliminated. We administered the remaining 82 items to 30 undergraduate and 10 postgraduate students. The mean score of undergraduate students was 61.1 (SD = 7.5) and that of postgraduate students was 67.7 (SD = 6.3). Cronbach's alpha for the 82-item SCT was 0.74; after excluding the 22 poorly performing items, the final 60-item instrument had a Cronbach's alpha of 0.82. Conclusion Our study revealed that a consensus index above 60 had good item-total correlation and could be used to optimize items against panel responses in SCT, although further studies on this aspect are needed. It also revealed that the panel response clustering pattern could be used to categorize items, though bimodal and uniform distribution patterns need further differentiation.
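Cronbach's alpha, the internal-consistency statistic reported in this and several other studies here (values such as 0.74 and 0.82), compares the summed per-item variance against the variance of examinees' total scores. A minimal sketch, with made-up data and a hypothetical function name:

```python
# Cronbach's alpha: (k / (k-1)) * (1 - sum of item variances / variance of totals),
# where k is the number of items. Higher alpha means item scores co-vary more,
# i.e. the test is more internally consistent.
import numpy as np

def cronbach_alpha(scores):
    """scores: 2-D array-like, rows = examinees, columns = test items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                         # number of items
    item_vars = scores.var(axis=0, ddof=1)      # per-item sample variance
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of examinee totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy example: 4 examinees x 3 dichotomously scored items.
data = [[1, 1, 1],
        [1, 0, 1],
        [0, 0, 0],
        [1, 1, 0]]
print(round(cronbach_alpha(data), 3))  # -> 0.632
```

In practice alpha is computed over the full item-by-examinee score matrix (82 or 60 items here), and dropping items that correlate poorly with the total, as the authors did, typically raises it.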
Affiliation(s)
- Sivaraman Ganesan
- Department of ENT, Jawaharlal Institute of Postgraduate Medical Education and Research, Puducherry, India
- Shital Bhandary
- Department of Public Health and Medical Education, Patan Academy of Health Sciences, Lalitpur, Nepal
- Mahalakshmy Thulasingam
- Department of Preventive and Social Medicine, Jawaharlal Institute of Postgraduate Medical Education and Research, Puducherry, India
- Thomas Vengail Chacko
- Department of Community Medicine, Believers Church Medical College Hospital, Thiruvalla, Kerala, India
- Z. Zayapragassarazan
- Department of Medical Education, Jawaharlal Institute of Postgraduate Medical Education and Research, Puducherry, India
- Surya Ravichandran
- Department of ENT, Jawaharlal Institute of Postgraduate Medical Education and Research, Puducherry, India
- Kalaiarasi Raja
- Department of ENT, Jawaharlal Institute of Postgraduate Medical Education and Research, Puducherry, India
- Karthikeyan Ramasamy
- Department of ENT, Jawaharlal Institute of Postgraduate Medical Education and Research, Puducherry, India
- Arun Alexander
- Department of ENT, Jawaharlal Institute of Postgraduate Medical Education and Research, Puducherry, India
- Lokesh Kumar Penubarthi
- Department of ENT, Jawaharlal Institute of Postgraduate Medical Education and Research, Puducherry, India
3
Mamakli S, Alimoğlu MK, Daloğlu M. Scenario-based learning: preliminary evaluation of the method in terms of students' academic achievement, in-class engagement, and learner/teacher satisfaction. Advances in Physiology Education 2023;47:144-157. PMID: 36656963. DOI: 10.1152/advan.00122.2022.
Abstract
We sought to evaluate the effectiveness of a newly developed scenario-based learning (SBL) module in terms of students' academic achievement, in-class engagement, and learner/teacher satisfaction. Third-year students in a 6-year medical education program, who had prior experience with problem-based learning (PBL), studied in small groups with facilitators throughout a week allocated to the SBL module. SBL processes, student/facilitator roles, and expectations were explained to students and facilitators in online training before implementation. Three online discussion sessions were scheduled, but the groups were allowed to organize extra online meetings. The students, provided with learning objectives, were asked to create a PBL scenario with a facilitator's guide including answers to scenario questions, evidence-based information, and tips for facilitators. Evaluated outcomes were learner/teacher satisfaction, students' academic achievement, and engagement. Satisfaction was determined using semistructured feedback forms, generated scenarios were assessed using a checklist, a written exam assessed students' knowledge and reasoning skills, and student engagement during the sessions was evaluated using forms completed by facilitators and students. SBL module outcomes were compared with students' grade point averages (GPAs) and former PBL outcomes. Mean scenario evaluation, student engagement, and satisfaction scores were around 90%; mean scores for facilitator satisfaction and whole-module success were around 80% and 77%, respectively. Student satisfaction and academic achievement were higher in online SBL than in GPA and previous PBL modules, with no differences in in-class engagement or facilitator satisfaction between SBL and PBL.
NEW & NOTEWORTHY A newly developed scenario-based learning (SBL) module was implemented by assigning third-year medical students, studying in small groups with a facilitator, to create (the highest cognitive level) a problem-based learning facilitator scenario. The 1-wk online SBL module comprised three scheduled sessions and an unlimited number of nonscheduled sessions. Students and facilitators received SBL positively, with some recommendations for improvement. Preliminary evaluation suggests SBL can be implemented without compromising (and possibly improving) students' academic achievement, satisfaction, and engagement levels.
Affiliation(s)
- Sümer Mamakli
- Department of Medical Education, Faculty of Medicine, Akdeniz University, Antalya, Türkiye
- Mustafa Kemal Alimoğlu
- Department of Medical Education, Faculty of Medicine, Akdeniz University, Antalya, Türkiye
- Mustafa Daloğlu
- Department of Medical Education, Faculty of Medicine, Akdeniz University, Antalya, Türkiye
4
Ross L, Semaan E, Gosling CM, Fisk B, Shannon B. Clinical reasoning in undergraduate paramedicine: utilisation of a script concordance test. BMC Medical Education 2023;23:39. PMID: 36658560. PMCID: PMC9849838. DOI: 10.1186/s12909-023-04020-x.
Abstract
INTRODUCTION Clinical reasoning is a complex cognitive and metacognitive process paramount to patient care in paramedic practice. While universally recognised as an essential component of practice, clinical reasoning has historically been difficult to assess in the health care professions. Is the Script Concordance Test (SCT) an achievable and reliable option for testing clinical reasoning in undergraduate paramedic students? METHODS This was a single-institution observational cohort study designed to use the SCT to measure clinical reasoning in paramedic students. Clinical vignettes were constructed across a range of concepts with varying shades of clinical ambiguity. The reference panel's mean score on the test was compared with that of students. Test responses were graded with the aggregate scoring method, with scores awarded for both partially and fully correct responses. RESULTS Eighty-three student paramedic participants (mean age: 21.8 (3.5) years; 54 (65%) female, 27 (33%) male, and 2 (2%) non-binary) completed the SCT. The difference between the reference group mean score of 80 (5) and the student mean score of 65.6 (8.4) was statistically significant (p < 0.001). DISCUSSION Clinical reasoning skills are not easily acquired, as they are a culmination of education, experience, and the ability to apply both in the context of a specific patient. The SCT has been shown to be reliable and effective in measuring clinical reasoning in undergraduate paramedics, as it has in other health professions such as nursing and medicine. More investigation is required to establish effective pedagogical techniques for optimising clinical reasoning in student and novice paramedics who lack experience.
Affiliation(s)
- Linda Ross
- Department of Paramedicine, School of Primary and Allied Health Care, Faculty of Medicine, Nursing and Health Science, Monash University, PO Box 527, Peninsula Campus, McMahons Road, Frankston, Melbourne, Victoria, 3199, Australia
- Eli Semaan
- Ambulance Victoria, Melbourne, Australia
- Cameron M Gosling
- Department of Paramedicine, School of Primary and Allied Health Care, Faculty of Medicine, Nursing and Health Science, Monash University, PO Box 527, Peninsula Campus, McMahons Road, Frankston, Melbourne, Victoria, 3199, Australia
- Benjamin Fisk
- Department of Paramedicine, School of Primary and Allied Health Care, Faculty of Medicine, Nursing and Health Science, Monash University, PO Box 527, Peninsula Campus, McMahons Road, Frankston, Melbourne, Victoria, 3199, Australia; Ambulance Victoria, Melbourne, Australia
- Brendan Shannon
- Department of Paramedicine, School of Primary and Allied Health Care, Faculty of Medicine, Nursing and Health Science, Monash University, PO Box 527, Peninsula Campus, McMahons Road, Frankston, Melbourne, Victoria, 3199, Australia; Ambulance Victoria, Melbourne, Australia
5
Omega A, Wijaya Ramlan AA, Soenarto RF, Heriwardito A, Sugiarto A. Assessing clinical reasoning in airway related cases among anesthesiology fellow residents using Script Concordance Test (SCT). Medical Education Online 2022;27:2135421. PMID: 36258663. PMCID: PMC9586607. DOI: 10.1080/10872981.2022.2135421.
Abstract
INTRODUCTION Clinical reasoning is a core competency for physicians. In anesthesia, many situations, such as emergency airway cases, require residents to use their clinical reasoning to make quick and appropriate decisions. The Script Concordance Test (SCT) is a recently developed and validated test that objectively assesses clinical reasoning ability. However, studies using the SCT to assess clinical reasoning in airway management are scarce. AIM To evaluate the SCT for assessing clinical reasoning in airway management among anesthesiology residents. METHOD A cross-sectional study was conducted in which residents and anesthesiology consultants from the Department of Anesthesiology and Intensive Care, Faculty of Medicine Universitas Indonesia completed the SCT. A panel of five anesthesiology consultants with more than 15 years of work experience constructed 20 SCT vignettes based on prevalent airway cases in our center from the past 10 years. Each vignette has three nested questions, for a total of 60 questions, to be answered within 120 min. RESULTS The 20 case vignettes with three nested questions each were tested on 99 junior, intermediate, and senior residents, whose answers were compared with those of an expert group of ten anesthesiology consultants with more than 5 years of experience. There were significant differences in mean SCT scores across the junior, intermediate, senior, and expert groups: 59.3 (46.1-72.8), 64.7 (39.9-74.9), 67.5 (50.6-78.3), and 79.6 (78.4-84.8), respectively; p < 0.001. A Cronbach's alpha of 0.69 was obtained, indicating good reliability. CONCLUSION Our SCT proved to be a valid and reliable instrument for assessing clinical reasoning in airway management among anesthesiology residents. The SCT was able to discriminate between groups with different levels of clinical experience and should be included in the evaluation of airway competencies in anesthesiology residents.
Affiliation(s)
- Andy Omega
- Department of Anesthesiology and Intensive Care, Cipto Mangunkusumo General Hospital, Faculty of Medicine Universitas Indonesia, DKI Jakarta, Indonesia
- Andi Ade Wijaya Ramlan
- Department of Anesthesiology and Intensive Care, Cipto Mangunkusumo General Hospital, Faculty of Medicine Universitas Indonesia, DKI Jakarta, Indonesia
- Ratna Farida Soenarto
- Department of Anesthesiology and Intensive Care, Cipto Mangunkusumo General Hospital, Faculty of Medicine Universitas Indonesia, DKI Jakarta, Indonesia
- Aldy Heriwardito
- Department of Anesthesiology and Intensive Care, Cipto Mangunkusumo General Hospital, Faculty of Medicine Universitas Indonesia, DKI Jakarta, Indonesia
- Adhrie Sugiarto
- Department of Anesthesiology and Intensive Care, Cipto Mangunkusumo General Hospital, Faculty of Medicine Universitas Indonesia, DKI Jakarta, Indonesia
6
Redmond C, Jayanth A, Beresford S, Carroll L, Johnston ANB. Development and validation of a script concordance test to assess biosciences clinical reasoning skills: A cross-sectional study of 1st year undergraduate nursing students. Nurse Education Today 2022;119:105615. PMID: 36334475. DOI: 10.1016/j.nedt.2022.105615.
Abstract
BACKGROUND Developing evaluative measures that assess clinical reasoning remains a major challenge for nursing education. A thorough understanding of the biosciences underpins much of nursing practice and is essential for nurses to reason effectively; a gap in clinical reasoning can lead to unintended harm. The Script Concordance Test holds promise as a measure of clinical reasoning in the context of uncertainty, a situation common in nursing practice. The aim of this study was to develop and validate a test for first-year undergraduate nursing students that evaluates how bioscience knowledge is used in clinical reasoning. METHODS An international team teaching biosciences to undergraduate nurses constructed a test integrating common clinical cases with a series of related test items: diagnostic, investigative, and treatment. An expert panel (n = 10) took the test and commented on authenticity, ambiguities, omissions, and the like; this step is crucial for validity and for scoring the student test. The test was then administered to 47 first-year undergraduate nursing students from the author sites, who rated educational aspects of the tool both quantitatively and qualitatively. Statistical and content analyses inform the findings. FINDINGS Results indicate that the test is reliable and valid, differentiating between experts and students. Students demonstrated an ability to identify relevant data, link it to their bioscience content, and predict outcomes (mean score = 50.78 ± 8.89), but they lacked confidence in their answers when the scenarios appeared incomplete to them. CONCLUSION Nursing practice depends on a thorough understanding of the biosciences and the ability to reason clinically, and script concordance tests can be used to promote both competencies. This method of evaluation goes further than probing factual knowledge: it also explores capacities for data interpretation, critical analysis, and clinical reasoning. Evaluating bioscience knowledge against real-world situations encountered in practice is a unique strength of this test.
Affiliation(s)
- Catherine Redmond
- School of Nursing, Midwifery & Health Systems, University College Dublin, Dublin 4, Ireland
- Aiden Jayanth
- Brighton & Sussex Medical School, University of Sussex, UK
- S Beresford
- Lorraine Carroll
- School of Nursing, Midwifery & Health Systems, University College Dublin, Dublin 4, Ireland
- Amy N B Johnston
- School of Nursing, Midwifery & Social Work, The University of Queensland, Australia; Department of Emergency Medicine, Metro South Health, Woolloongabba, QLD 4102, Australia
7
Newsom LC, Augustine J, Momary K. Development of a script concordance test to assess clinical reasoning in a pharmacy curriculum. Currents in Pharmacy Teaching & Learning 2022;14:1135-1142. PMID: 36154958. DOI: 10.1016/j.cptl.2022.07.028.
Abstract
INTRODUCTION Clinical reasoning is a vital skill for student pharmacists in the provision of patient-centered care, but these skills are often difficult to assess in the didactic curriculum. A script concordance test (SCT) is an innovative assessment method that can be used to assess clinical reasoning skills. The objective of this study was to develop and refine an SCT to assess the clinical reasoning skills of third-year student pharmacists (P3s). METHODS An SCT was written and administered to P3s, with pharmacy practice faculty members serving as the expert group. The SCT was scored and a Rasch analysis was performed. RESULTS The SCT included 20 case vignettes and 60 questions. Test reliability was 0.34, with mean square values for all items between 0.7 and 1.3. Forty-two questions had a difficulty score between 0 and -1 logits, indicating multiple questions with similar difficulty levels. Two case vignettes and 43.3% of the questions (n = 26) were revised to enhance clarity and decrease ambiguity. CONCLUSIONS The SCT is a tool for assessing clinical reasoning in the didactic curriculum. Faculty can create an SCT and use statistical methods such as Rasch analysis to assess its validity and reliability.
Affiliation(s)
- Lydia C Newsom
- Department of Pharmacy Practice, Mercer University College of Pharmacy, 3001 Mercer University Drive, Atlanta, GA 30341-4115, United States
- Jill Augustine
- Department of Pharmacy Practice and the Department of Pharmaceutical Sciences, Mercer University College of Pharmacy, 3001 Mercer University Drive, Atlanta, GA 30341-4115, United States
- Kathryn Momary
- Department of Pharmacy Practice, Mercer University College of Pharmacy, 3001 Mercer University Drive, Atlanta, GA 30341-4115, United States
8
Kün-Darbois JD, Annweiler C, Lerolle N, Lebdai S. Script concordance test acceptability and utility for assessing medical students' clinical reasoning: a user's survey and an institutional prospective evaluation of students' scores. BMC Medical Education 2022;22:277. PMID: 35418078. PMCID: PMC9008989. DOI: 10.1186/s12909-022-03339-1.
Abstract
Script Concordance Testing (SCT) is a method for assessing clinical reasoning in health-care training. Our aim was to assess SCT acceptability and utility with a user survey and an institutional prospective evaluation of students' scores. With an online user survey, we collected the opinions and satisfaction data of all graduate students and teachers involved in the SCT setting, and we performed a prospective analysis comparing the scores obtained with SCT to those obtained with the national standard evaluation modality (PCC). General opinions about SCT were mostly negative, with students expressing more negative opinions and perceptions than teachers; the teachers' satisfaction survey showed a lower proportion of negative responses, a higher proportion of neutral responses, and a higher proportion of positive positions on all questions. PCC scores increased significantly each year, whereas SCT scores increased only between the first and second tests, and PCC scores were significantly higher than SCT scores on the second and third tests. Medical students' and teachers' global opinion of SCT was negative. SCT scores were initially quite similar to PCC scores, but PCC scores progressed more over time.
Affiliation(s)
- Jean-Daniel Kün-Darbois
- Maxillofacial Surgery Department, University Hospital of Angers, 49933 Angers Cedex, France; Faculty for Health Sciences and Medicine, University of Angers, Angers, France
- Cédric Annweiler
- Faculty for Health Sciences and Medicine, University of Angers, Angers, France; Geriatric Department, University Hospital of Angers, Angers, France
- Nicolas Lerolle
- Faculty for Health Sciences and Medicine, University of Angers, Angers, France; Intensive Care Department, University Hospital of Angers, Angers, France
- Souhil Lebdai
- Faculty for Health Sciences and Medicine, University of Angers, Angers, France; Urology Department, University Hospital of Angers, Angers, France
9
Gordon D, Rencic JJ, Lang VJ, Thomas A, Young M, Durning SJ. Advancing the assessment of clinical reasoning across the health professions: Definitional and methodologic recommendations. Perspectives on Medical Education 2022;11:108-114. PMID: 35254653. PMCID: PMC8940991. DOI: 10.1007/s40037-022-00701-3.
Abstract
The importance of clinical reasoning in patient care is well-recognized across all health professions. Validity evidence supporting high quality clinical reasoning assessment is essential to ensure health professional schools are graduating learners competent in this domain. However, through the course of a large scoping review, we encountered inconsistent terminology for clinical reasoning and inconsistent reporting of methodology, reflecting a somewhat fractured body of literature on clinical reasoning assessment. These inconsistencies impeded our ability to synthesize across studies and appropriately compare assessment tools. More specifically, we encountered: 1) a wide array of clinical reasoning-like terms that were rarely defined or informed by a conceptual framework, 2) limited details of assessment methodology, and 3) inconsistent reporting of the steps taken to establish validity evidence for clinical reasoning assessments. Consolidating our experience in conducting this review, we provide recommendations on key definitional and methodologic elements to better support the development, description, study, and reporting of clinical reasoning assessments.
Affiliation(s)
- David Gordon
- Division of Emergency Medicine, Duke University, Durham, NC, USA
- Joseph J Rencic
- Department of Medicine, Boston University School of Medicine, Boston, MA, USA
- Valerie J Lang
- Division of Hospital Medicine, University of Rochester School of Medicine and Dentistry, Rochester, NY, USA
- Aliki Thomas
- School of Physical and Occupational Therapy, Institute of Health Sciences Education, McGill University, Montreal, QC, Canada
- Meredith Young
- Department of Medicine and Institute of Health Sciences Education, McGill University, Montreal, QC, Canada
- Steven J Durning
- Department of Medicine, Uniformed Services University of the Health Sciences, Bethesda, MD, USA
10
Bryant GA, Dy-Boarman EA, Herring MS, Witry MJ. Use of a script concordance test to evaluate the impact of a targeted educational strategy on clinical reasoning in advanced pharmacy practice experiential students. Currents in Pharmacy Teaching & Learning 2021;13:1024-1031. PMID: 34294243. DOI: 10.1016/j.cptl.2021.06.015.
Abstract
BACKGROUND AND PURPOSE It is unclear how clinical reasoning is impacted by a single advanced pharmacy practice experience (APPE) and how preceptors can further develop these skills. EDUCATIONAL ACTIVITY AND SETTING Students completing an APPE at one of four sites were invited to participate. To assess clinical reasoning skills, students completed a 30-item script concordance test (SCT) during week 1 and week 5 of the rotation. Students were divided into control and intervention groups; the intervention group participated in a clinical reasoning discussion, during which students presented a case and led a discussion on how to reason through treatment options. FINDINGS The changes in mean SCT scores between week 1 and week 5 were 0.84 (2.8%) in the control group (n = 15) and 1.23 (4.1%) in the intervention group (n = 28). The change in scores was not significant in the control group (P = .07, CI -0.34, 2.01) but was statistically significant in the intervention group (P = .02, CI 0.23, 2.23). An independent-samples t-test comparing the score changes of the control and intervention groups showed no significant difference (P = .62, CI -1.18, 1.96). SUMMARY This study demonstrated the feasibility of implementing an SCT in experiential education. SCT scores did not improve significantly beyond the standard APPE in response to the focused educational intervention, but the investigators found that the discussion facilitated rich conversations about patient cases and was valuable for assessing a student's thinking pattern.
Affiliation(s)
- Ginelle A Bryant
- Department of Clinical Sciences, Drake University College of Pharmacy and Health Sciences, 2507 University Avenue, Des Moines, IA 50311-4505, United States
- Eliza A Dy-Boarman
- Department of Clinical Sciences, Drake University College of Pharmacy and Health Sciences, 2507 University Avenue, Des Moines, IA 50311-4505, United States
- Morgan S Herring
- Department of Pharmacy Practice and Science, Division of Applied Clinical Sciences, University of Iowa College of Pharmacy, 180 South Grand Avenue, Iowa City, Iowa 52242, United States
- Matthew J Witry
- Department of Pharmacy Practice and Science, Division of Health Services Research, University of Iowa College of Pharmacy, 180 South Grand Avenue 342 CPB, Iowa City, Iowa 52242, United States
11
Gawad N, Wood TJ, Malvea A, Cowley L, Raiche I. The Impact of Surgeon Experience on Script Concordance Test Scoring. J Surg Res 2021;265:265-271. PMID: 33964636. DOI: 10.1016/j.jss.2021.03.057.
Abstract
OBJECTIVE The Script Concordance Test (SCT) is a test of clinical decision-making that relies on an expert panel to create its scoring key. Existing literature demonstrates the value of specialty-specific experts, but the effect of experience within the expert panel is unknown. The purpose of this study was to explore the role of surgeon experience in SCT scoring. DESIGN An SCT was administered to 29 general surgery residents and 14 staff surgeons. Staff surgeons were stratified as junior or senior experts based on years since completing residency training (<15 versus >25 years). The SCT was scored using the full expert panel, the senior panel, the junior panel, and a subgroup of the junior panel in practice <5 years. A one-way ANOVA was used to compare the scores of first-year (R1) and fifth-year (R5) residents under each scoring scheme, and cognitive interviews were analyzed for differences between junior and senior expert panelist responses. RESULTS There was no statistically significant difference between the mean scores of six R1s and five R5s using the full expert panel (R1 69.08 versus R5 67.06, F1,9 = 0.10, P = 0.76), the junior panel (R1 66.73 versus R5 62.50, F1,9 = 0.35, P = 0.57), or the subgroup panel in practice <5 years (R1 61.07 versus R5 58.79, F1,9 = 0.18, P = 0.75). However, the average score of R1s was significantly lower than that of R5s when using the senior faculty panel (R1 52.04 versus R5 63.26, F1,9 = 26.90, P = 0.001). Cognitive interview data suggest that some responses of junior experts demonstrate less confidence than those of senior experts. CONCLUSIONS SCT scores are significantly affected by the responses of the expert panel: differences between first- and fifth-year residents emerged only with an expert panel consisting of senior faculty members. Confidence may play a role in the response selections of junior experts. When constructing an SCT expert panel, consideration must be given to the experience of panel members.
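The one-way ANOVA used above partitions score variance into between-group and within-group components and reports their ratio as an F statistic. A minimal sketch, with entirely hypothetical scores (not the study's data) and a function name of our own choosing:

```python
# One-way ANOVA F statistic:
#   F = (between-group mean square) / (within-group mean square)
# Large F means group means differ more than within-group noise would explain.
def one_way_anova_F(groups):
    """groups: list of lists of scores, one list per group."""
    all_scores = [x for g in groups for x in g]
    grand_mean = sum(all_scores) / len(all_scores)
    # Between-group sum of squares, df = k - 1
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    df_between = len(groups) - 1
    # Within-group sum of squares, df = N - k
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_within = len(all_scores) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical example: six R1 scores vs five R5 scores under one scoring panel.
r1 = [52, 50, 54, 51, 53, 52]
r5 = [63, 62, 65, 64, 62]
print(round(one_way_anova_F([r1, r5]), 1))
```

With two groups of six and five, the degrees of freedom are 1 and 9, matching the F1,9 values quoted in the abstract; the associated p-value would normally come from the F distribution (e.g. `scipy.stats.f_oneway`).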
Affiliation(s)
- Nada Gawad: Division of General Surgery, Department of Surgery, Faculty of Medicine, University of Ottawa, Ottawa, Ontario, Canada; Department of Innovation in Medical Education (DIME), University of Ottawa, Ottawa, Ontario, Canada
- Timothy J Wood: Department of Innovation in Medical Education (DIME), University of Ottawa, Ottawa, Ontario, Canada
- Anahita Malvea: Division of General Surgery, Department of Surgery, Faculty of Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Lindsay Cowley: Department of Innovation in Medical Education (DIME), University of Ottawa, Ottawa, Ontario, Canada
- Isabelle Raiche: Division of General Surgery, Department of Surgery, Faculty of Medicine, University of Ottawa, Ottawa, Ontario, Canada; Department of Innovation in Medical Education (DIME), University of Ottawa, Ottawa, Ontario, Canada
12
Ottolini MC, Chua I, Campbell J, Ottolini M, Goldman E. Pediatric Hospitalists' Performance and Perceptions of Script Concordance Testing for Self-Assessment. Acad Pediatr 2021; 21:252-258. [PMID: 33065290 DOI: 10.1016/j.acap.2020.10.003] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/13/2020] [Revised: 09/25/2020] [Accepted: 10/10/2020] [Indexed: 12/25/2022]
Abstract
OBJECTIVES The cognitive expertise of Pediatric Hospitalists (PH) lies not in standard knowledge but in making decisions under conditions of uncertainty. To maintain expertise, PH should engage in deliberate practice via self-assessments that promote the higher-level cognitive processes necessary to address problems with missing or ambiguous information. Script Concordance Test (SCT) questions are purported to elicit higher levels of cognition than Multiple Choice Questions (MCQ). Our objectives were to determine whether PH use higher levels of cognition when answering SCT versus MCQ questions, and to analyze participants' perceptions of the utility of SCT self-assessment for deliberate practice in addressing clinical problems encountered in daily practice. METHODS This is a mixed methods study comparing the cognitive level, classified according to Bloom's Taxonomy, expressed by PH answering MCQ versus SCT questions in a "think aloud" (TA) exercise, followed by qualitative analysis of interviews conducted afterward. RESULTS A significantly greater percentage of comments were coded as higher-order cognitive processes (apply, analyze, evaluate, and create) for SCT versus MCQ (74% vs 19%), compared with lower-order processes (remember, understand); chi-square P < .00001. Analysis of interviews revealed 6 themes. CONCLUSION SCT questions elicited higher-level cognition essential to clinical reasoning compared with MCQ questions. PH indicated that MCQ questions measure standard knowledge, while SCT questions better measure decision-making under conditions of uncertainty. PH perceived that SCT could be useful for deliberate practice in Pediatric Hospital Medicine decision-making if they could compare their rationale in answering questions with that of experts.
Affiliation(s)
- Mary C Ottolini: Department of Pediatrics, The Barbara Bush Children's Hospital at Maine Medical Center, Portland, Maine
- Ian Chua: Department of Pediatrics, Children's National Medical Center, George Washington University School of Medicine and Health Sciences, Washington, DC
- Joyce Campbell: Department of Pediatrics, Children's National Medical Center, George Washington University School of Medicine and Health Sciences, Washington, DC
- Martin Ottolini: Department of Pediatrics, Uniformed Services University of the Health Sciences, Bethesda, Md
- Ellen Goldman: George Washington University Graduate School of Education and Human Development, George Washington University School of Medicine and Health Sciences, Washington, DC
13
Gawad N, Wood TJ, Cowley L, Raiche I. How do cognitive processes influence script concordance test responses? MEDICAL EDUCATION 2021; 55:354-364. [PMID: 33185303 DOI: 10.1111/medu.14416] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/13/2020] [Revised: 10/15/2020] [Accepted: 11/09/2020] [Indexed: 06/11/2023]
Abstract
INTRODUCTION The script concordance test (SCT) is a test of clinical decision-making (CDM) that compares the thought process of learners to that of experts to determine to what extent their cognitive 'scripts' align. Without understanding test-takers' cognitive process, however, it is unclear what influences their responses. The objective of this study was to gather response process validity evidence by studying the cognitive process of test-takers to determine whether the SCT tests CDM and what cognitive processes may influence SCT responses. METHODS Cases from an SCT used in a national validation study were administered and semi-structured cognitive interviews were conducted with ten residents and five staff surgeons. A retrospective verbal probing technique was used. Data was independently analysed and coded by two analysts. Themes were identified as factors that influence SCT responses during the cognitive interview. RESULTS Cognitive interviews demonstrated variability in CDM among test-takers. Consistent with dual process theory, test-takers relied on scripts formed through past experiences, when available, to make decisions and used conscious deliberation in the absence of experience. However, test-takers' response process was also influenced by their comprehension of specific terms, desire for additional information, disagreement with the planned management, underlying knowledge gaps and desire to demonstrate confidence or humility. CONCLUSION The rationale behind SCT answers may be influenced by comprehension, underlying knowledge and social desirability in addition to formed scripts and/or conscious deliberation. Having test-takers verbalise their rationale for responses provides a depth of assessment that is otherwise lost in the SCT's current format. 
With the improved ability to standardise CDM assessment using the SCT, test developers may improve its use for CDM assessment by refining the SCT construction process and combining the SCT question format with verbal responses.
Affiliation(s)
- Nada Gawad: Division of General Surgery, Department of Surgery, Faculty of Medicine, University of Ottawa, Ottawa, ON, Canada; Department of Innovation in Medical Education (DIME), University of Ottawa, Ottawa, ON, Canada
- Timothy J Wood: Department of Innovation in Medical Education (DIME), University of Ottawa, Ottawa, ON, Canada
- Lindsay Cowley: Department of Innovation in Medical Education (DIME), University of Ottawa, Ottawa, ON, Canada
- Isabelle Raiche: Division of General Surgery, Department of Surgery, Faculty of Medicine, University of Ottawa, Ottawa, ON, Canada; Department of Innovation in Medical Education (DIME), University of Ottawa, Ottawa, ON, Canada
14
Wan MSH, Tor E, Hudson JN. Examining response process validity of script concordance testing: a think-aloud approach. INTERNATIONAL JOURNAL OF MEDICAL EDUCATION 2020; 11:127-135. [PMID: 32581143 PMCID: PMC7870454 DOI: 10.5116/ijme.5eb6.7be2] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/16/2019] [Accepted: 05/09/2020] [Indexed: 06/11/2023]
Abstract
OBJECTIVES This study investigated whether medical student responses to Script Concordance Testing (SCT) items represent valid clinical reasoning. Using a think-aloud approach, students provided written explanations of the reasoning that underpinned their responses, and these were reviewed for concordance with an expert reference panel. METHODS Sets of 12, 11 and 15 SCT items were administered online to Year 3 (2018), Year 4 (2018) and Year 3 (2019) medical students, respectively. Students' free-text descriptions of the reasoning supporting each item response were analysed and compared with those of the expert panel. Response process validity was quantified as the rate of true positives (percentage of full and partial credit responses derived through correct clinical reasoning) and true negatives (percentage of responses with no credit derived through faulty clinical reasoning). RESULTS Two hundred and nine students completed the online tests (response rate = 68.3%). The majority of students who had chosen the response which attracted full or partial credit also provided justifications concordant with the experts' (true positive rate of 99.6% for full credit; 99.4% for partial credit responses). Most responses that attracted no credit were based on faulty clinical reasoning (true negative rate of 99.0%). CONCLUSIONS The findings provide support for the response process validity of SCT scores in the setting of undergraduate medicine. The additional written think-aloud component, to assess clinical reasoning, provided useful information to inform student learning. However, SCT scores should be validated on each testing occasion, and in other contexts.
Affiliation(s)
- Elina Tor: School of Medicine, The University of Notre Dame Australia, Australia
- Judith N. Hudson: Faculty of Health and Medical Sciences, University of Adelaide, Australia
15
Steinberg E, Cowan E, Lin MP, Sielicki A, Warrington S. Assessment of Emergency Medicine Residents' Clinical Reasoning: Validation of a Script Concordance Test. West J Emerg Med 2020; 21:978-984. [PMID: 32726273 PMCID: PMC7390545 DOI: 10.5811/westjem.2020.3.46035] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2019] [Accepted: 03/23/2020] [Indexed: 11/11/2022] Open
Abstract
INTRODUCTION A primary aim of residency training is to develop competence in clinical reasoning. However, there are few instruments that can accurately, reliably, and efficiently assess residents' clinical decision-making ability. This study aimed to externally validate the script concordance test in emergency medicine (SCT-EM), an assessment tool designed for this purpose. METHODS Using established methodology for the SCT-EM, we compared EM residents' performance on the SCT-EM to that of an expert panel of emergency physicians at three urban academic centers. We performed adjusted pairwise t-tests to compare differences between all residents and attending physicians, as well as among resident postgraduate year (PGY) levels. We tested the correlation between SCT-EM and Accreditation Council for Graduate Medical Education Milestone scores using Pearson's correlation coefficients. Inter-item covariances for SCT items were calculated using Cronbach's alpha statistic. RESULTS The SCT-EM was administered to 68 residents and 13 attendings. There was a significant difference in mean scores among all groups (mean ± standard deviation: PGY-1 59 ± 7; PGY-2 62 ± 6; PGY-3 60 ± 8; PGY-4 61 ± 8; attendings 73 ± 8; p < 0.01). Post hoc pairwise comparisons demonstrated that significant differences in mean scores occurred only between each PGY level and the attendings (p < 0.01 for PGY-1 to PGY-4 vs the attending group). Performance on the SCT-EM and EM Milestones was not significantly correlated (r = 0.12, p = 0.35). Internal reliability of the exam, determined using Cronbach's alpha, was 0.67 for all examinees and 0.89 in the expert-only group. CONCLUSION The SCT-EM has limited utility in reliably assessing clinical reasoning among EM residents. Although the SCT-EM was able to differentiate clinical reasoning ability between residents and expert faculty, it did not differentiate between PGY levels, nor did it correlate with Milestone scores. Furthermore, several limitations threaten the validity of the SCT-EM, suggesting further study is needed in more diverse settings.
Affiliation(s)
- Eric Steinberg: St. Joseph's University Medical Center, Department of Emergency Medicine, Paterson, New Jersey
- Ethan Cowan: Mount Sinai Beth Israel, Icahn School of Medicine at Mount Sinai, Department of Emergency Medicine, New York, New York
- Michelle P Lin: Mount Sinai Beth Israel, Icahn School of Medicine at Mount Sinai, Department of Emergency Medicine, New York, New York
- Anthony Sielicki: Mount Sinai Beth Israel, Icahn School of Medicine at Mount Sinai, Department of Emergency Medicine, New York, New York
- Steven Warrington: Orange Park Medical Center, Department of Emergency Medicine, Orange Park, Florida
16
Merkebu J, Battistone M, McMains K, McOwen K, Witkop C, Konopasky A, Torre D, Holmboe E, Durning SJ. Situativity: a family of social cognitive theories for understanding clinical reasoning and diagnostic error. Diagnosis (Berl) 2020; 7:169-176. [DOI: 10.1515/dx-2019-0100] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2019] [Accepted: 03/11/2020] [Indexed: 11/15/2022]
Abstract
The diagnostic error crisis suggests the need for a shift in how we view clinical reasoning, one that may be vital for transforming how we view clinical encounters. Building upon the literature, we propose that clinical reasoning and error are context-specific, and we advance a family of theories that together outline the complex interplay of physician, patient, and environmental factors driving clinical reasoning and error. These contemporary social cognitive theories (i.e. embedded cognition, ecological psychology, situated cognition, and distributed cognition) emphasize the dynamic interactions occurring amongst participants in particular settings. The situational determinants that contribute to diagnostic error are also explored.
Affiliation(s)
- Jerusalem Merkebu: Uniformed Services University of the Health Sciences (USUHS), Bethesda, USA
- Michael Battistone: George E. Wahlen Veterans Affairs Salt Lake City Health Care System, USA; Department of Medicine, Division of Rheumatology, University of Utah Health Sciences Center, Salt Lake City, USA
- Kevin McMains: Uniformed Services University of the Health Sciences (USUHS), Bethesda, USA
- Kathrine McOwen: Association of American Medical Colleges (AAMC), Uniformed Services University of the Health Sciences (USUHS), Bethesda, USA
- Catherine Witkop: Obstetrics/Gynecology and Preventive Medicine, Uniformed Services University of the Health Sciences (USUHS), Bethesda, USA
- Abigail Konopasky: Henry M. Jackson Foundation for the Advancement of Military Medicine, Uniformed Services University of the Health Sciences (USUHS), Bethesda, USA
- Dario Torre: Uniformed Services University of the Health Sciences (USUHS), Bethesda, USA
- Eric Holmboe: Accreditation Council for Graduate Medical Education (ACGME), Uniformed Services University of the Health Sciences (USUHS), Chicago, IL, USA
- Steven J. Durning: Uniformed Services University of the Health Sciences (USUHS), Bethesda, USA
17
Gawad N, Wood TJ, Cowley L, Raiche I. The cognitive process of test takers when using the script concordance test rating scale. MEDICAL EDUCATION 2020; 54:337-347. [PMID: 31912562 DOI: 10.1111/medu.14056] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/13/2019] [Revised: 12/24/2019] [Accepted: 01/02/2020] [Indexed: 06/10/2023]
Abstract
CONTEXT Clinical decision making (CDM) skills are important to learn and assess in order to establish competence in trainees. A common tool for assessing CDM is the script concordance test (SCT), which asks test takers to indicate how a new clinical finding influences a proposed plan using a Likert-type scale. Most criticisms of the SCT relate to its rating scale but are largely theoretical. The cognitive process of test takers when selecting their responses using the SCT rating scale remains understudied, but is essential to gathering validity evidence for use of the SCT in CDM assessment. METHODS Cases from an SCT used in a national validation study were administered to 29 residents and 14 staff surgeons. Semi-structured cognitive interviews were then conducted with 10 residents and five staff surgeons based on the SCT results. Cognitive interview data were independently coded by two data analysts, who specifically sought to elucidate how participants mapped their internally generated responses to any of the rating scale options. RESULTS Five major issues were identified with the response matching cognitive process: (a) the meaning of the '0' response option; (b) which response corresponds to agreement with the planned management; (c) the rationale for picking '±1' versus '±2'; (d) which response indicates the desire to undertake the planned management plus an additional procedure, and (e) the influence of time on response selection. CONCLUSIONS Studying how test takers (experts and trainees) interpret the SCT rating scale has revealed several issues related to inconsistent and unintended use. Revising the scale to address the variety of interpretations could help to improve the response process validity of the SCT and therefore improve the SCT's ability to be used in CDM skills assessments.
Affiliation(s)
- Nada Gawad: Division of General Surgery, Department of Surgery, Faculty of Medicine, University of Ottawa, Ottawa, Ontario, Canada; Department of Innovation in Medical Education, University of Ottawa, Ottawa, Ontario, Canada
- Timothy J Wood: Department of Innovation in Medical Education, University of Ottawa, Ottawa, Ontario, Canada
- Lindsay Cowley: Department of Innovation in Medical Education, University of Ottawa, Ottawa, Ontario, Canada
- Isabelle Raiche: Division of General Surgery, Department of Surgery, Faculty of Medicine, University of Ottawa, Ottawa, Ontario, Canada; Department of Innovation in Medical Education, University of Ottawa, Ottawa, Ontario, Canada
18
Wan MSH, Tor E, Hudson JN. Construct validity of script concordance testing: progression of scores from novices to experienced clinicians. INTERNATIONAL JOURNAL OF MEDICAL EDUCATION 2019; 10:174-179. [PMID: 31562807 PMCID: PMC6766395 DOI: 10.5116/ijme.5d76.1eee] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/23/2019] [Accepted: 09/09/2019] [Indexed: 06/10/2023]
Abstract
OBJECTIVES To investigate the construct validity of Script Concordance Testing (SCT) scores as a measure of the clinical reasoning ability of medical students and practising General Practitioners with different levels of clinical experience. METHODS Part I involved a cross-sectional study, in which 105 medical students, 19 junior registrars and 13 experienced General Practitioners completed the same set of SCT questions, and their mean scores were compared using one-way ANOVA. In Part II, pooled and matched SCT scores for 5 cohorts of students (2012 to 2017) in Year 3 (N=584) and Year 4 (N=598) were retrospectively analysed for evidence of significant progression. RESULTS A significant main effect of clinical experience was observed [F(2, 136)=6.215, p=0.003]. The mean SCT score for General Practitioners (M=70.39, SD=4.41, N=13) was significantly higher (p=0.011) than that of students (M=64.90, SD=6.30, N=105). Year 4 students (M=68.90, SD=7.79, N=584) had a significantly higher mean score [t(552)=12.78, p<0.001] than Year 3 students (M=64.03, SD=7.98, N=598). CONCLUSIONS The findings that candidate scores increased with increasing level of clinical experience add to current evidence in the international literature in support of the construct validity of Script Concordance Testing. Prospective longitudinal studies with larger sample sizes are recommended to further test and build confidence in the construct validity of SCT scores.
Affiliation(s)
- Elina Tor: School of Medicine, University of Notre Dame, Australia
- Judith N. Hudson: Faculty of Health and Medical Sciences, University of Adelaide, Australia
19
Background noise lowers the performance of anaesthesiology residents' clinical reasoning when measured by script concordance: A randomised crossover volunteer study. Eur J Anaesthesiol 2018; 34:464-470. [PMID: 28394819 DOI: 10.1097/eja.0000000000000624] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
Abstract
BACKGROUND Noise, which is omnipresent in operating rooms and ICUs, may have a negative impact not only on patients but also on the concentration of, and communication between, clinical staff. OBJECTIVE The present study evaluated the impact of noise on the performance of anaesthesiology residents' clinical reasoning. Changes in clinical reasoning were measured by script concordance tests (SCTs). DESIGN This was a randomised crossover study. SETTING Single centre at Rouen University Hospital in April 2014. POPULATION All year 1 to 4 residents enrolled in the anaesthesiology training programme were included. INTERVENTION Performance was assessed using a 56-item SCT. Two resident groups were formed, and each was exposed to both quiet and noisy atmospheres during SCT assessment. Group A did the first part of the assessment (28 SCTs) in a quiet atmosphere and the second part (28 SCTs) in a noisy atmosphere; Group B did the same in reverse order. MAIN OUTCOME MEASURES The primary outcome was residents' performance as measured by SCT, with and without noise (mean score out of 100 points, with 95% confidence interval). RESULTS Forty-two residents were included. Residents' performance, measured by SCT, was weaker in a noisy environment than in a quiet environment [59.0 (56.0 to 62.0) vs 62.8 (60.8 to 64.9), P = 0.04]. This difference lessened as medical training advanced: it was not observed in year 3 and 4 residents [62.9 (59.2 to 66.5) vs 64.0 (61.9 to 66.1), P = 0.60], whereas it was present for year 1 and 2 residents [54.8 (50.6 to 59.1) vs 61.5 (57.9 to 65.1), P = 0.02]. CONCLUSION Our study suggests that noise affects the clinical reasoning of anaesthesiology residents, especially junior residents, when measured by SCT. This observation supports the hypothesis that noise should be prevented in operating rooms, especially when junior residents are providing care.
20
Development and psychometrics of script concordance test (SCT) in midwifery. Med J Islam Repub Iran 2018; 32:75. [PMID: 30643750 PMCID: PMC6325274 DOI: 10.14196/mjiri.32.75] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2017] [Indexed: 11/18/2022] Open
Abstract
Background: Clinical reasoning plays an important role in the accurate diagnosis and treatment of diseases. The Script Concordance Test (SCT) is one of the tools that assess clinical reasoning skill. This study was conducted to determine the reliability and the concurrent and predictive validity of the SCT in assessing the final lessons and gynecology exams of undergraduate midwifery students.
Methods: At first, 20 clinical scenarios, each followed by 3 questions, were designed by 2 experienced midwives. After examining content validity, 15 scenarios were selected. The test was administered to 55 midwifery students. The correlation of SCT results with grade point average (GPA) was measured. To evaluate the concurrent validity of the SCT, the correlation between SCT scores and the final exam of the gynecology course was measured. To measure predictive validity, the correlation of SCT scores with the comprehensive midwifery exam was calculated. Data were analyzed using SPSS software; descriptive statistics, Pearson correlation, and the Cronbach's alpha coefficient were used. The test's item difficulty level (IDL) and item discriminative index (IDI) were determined using Whitney and Sabers' method.
Results: The internal reliability of the test (calculated using the Cronbach's alpha coefficient) was 0.74. All questions were positively correlated with the total score. The highest correlation coefficient (0.91) was between GPA and the comprehensive test score. The correlation coefficient between the SCT and the final test (concurrent validity) was 0.654, and between the SCT and the comprehensive test (predictive validity) was 0.721. The item discriminative index ranged from 0.39 to 0.59, and the item difficulty level from 0.32 to 0.66.
Conclusion: The SCT shows relatively high internal reliability and can predict students' success in the comprehensive midwifery exam. It also showed high concurrent validity with the final test of the gynecology course. This test could be a good alternative for formative and summative assessment in clinical courses.
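The internal-reliability figure reported above (0.74), like those in several other studies in this list, is a Cronbach's alpha coefficient. A minimal self-contained sketch of the computation, on illustrative data rather than the study's data:

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / variance of totals).

    item_scores: one list per item, each holding that item's score for
    every examinee, all in the same examinee order.
    """
    k = len(item_scores)
    n = len(item_scores[0])

    def pvar(xs):
        # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[j] for item in item_scores) for j in range(n)]
    return (k / (k - 1)) * (1 - sum(pvar(item) for item in item_scores) / pvar(totals))

# Perfectly consistent items (identical score patterns) give alpha = 1.0.
items = [[1.0, 0.0, 0.5, 1.0], [1.0, 0.0, 0.5, 1.0], [1.0, 0.0, 0.5, 1.0]]
print(cronbach_alpha(items))  # -> 1.0
```

Less consistent items lower the ratio of total-score variance to summed item variances, pulling alpha below 1 (and possibly negative).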
21
Subra J, Chicoulaa B, Stillmunkès A, Mesthé P, Oustric S, Rougé Bugat ME. Reliability and validity of the script concordance test for postgraduate students of general practice. Eur J Gen Pract 2018; 23:208-213. [PMID: 28819998 PMCID: PMC5806088 DOI: 10.1080/13814788.2017.1358709] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022] Open
Abstract
Background: The script concordance test (SCT) is a validated method of examining students' clinical reasoning. Medical students' professional skills are assessed during their postgraduate years as they study for a specialist qualification in general practice. However, no specific provision is made for assessing their clinical reasoning during their postgraduate study. Objective: The aim was to demonstrate the reliability and validity of the SCT in general practice and to determine whether this tool could be used to assess medical students' progress in acquiring clinical reasoning. Methods: A 135-question SCT was administered to postgraduate medical students at the beginning of their first year of specialized training in general practice, and then every six months throughout their three-year training, as well as to a reference panel of 20 expert general practitioners. Scores were calculated with the combined scoring method, using the calculator made available by the University of Montreal's School of Medicine in Canada. For validity, students' scores were compared with the experts'; p < .05 was considered statistically significant. Results: Ninety students completed all six assessments. The experts' mean score (76.7/100) was significantly higher than the students' score across all assessments (p < .001), with a Cronbach's alpha value of over 0.65 for all assessments. Conclusion: The SCT was found to be reliable and capable of discriminating between students and experts, demonstrating that this test is a valid tool for assessing clinical reasoning skills in general practice.
Affiliation(s)
- Julie Subra: University Department of General Practice, Toulouse-Rangueil Faculty of Medicine, Toulouse, France
- Bruno Chicoulaa: University Department of General Practice, Toulouse-Rangueil Faculty of Medicine, Toulouse, France
- André Stillmunkès: University Department of General Practice, Toulouse-Rangueil Faculty of Medicine, Toulouse, France
- Pierre Mesthé: University Department of General Practice, Toulouse-Rangueil Faculty of Medicine, Toulouse, France
- Stéphane Oustric: University Department of General Practice, Toulouse-Rangueil Faculty of Medicine, Toulouse, France; Inserm U1027, Faculty of Medicine, Toulouse, France
- Marie-Eve Rougé Bugat: University Department of General Practice, Toulouse-Rangueil Faculty of Medicine, Toulouse, France; Inserm U1027, Faculty of Medicine, Toulouse, France
22
Elvén M, Hochwälder J, Dean E, Hällman O, Söderlund A. Criterion scores, construct validity and reliability of a web-based instrument to assess physiotherapists' clinical reasoning focused on behaviour change: 'Reasoning 4 Change'. AIMS Public Health 2018; 5:235-259. [PMID: 30280115 PMCID: PMC6141557 DOI: 10.3934/publichealth.2018.3.235] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2018] [Accepted: 06/29/2018] [Indexed: 01/22/2023] Open
Abstract
Background and aim: 'Reasoning 4 Change' (R4C) is a newly developed instrument, including four domains (D1-D4), to assess clinical practitioners' and students' clinical reasoning with a focus on clients' behaviour change in a physiotherapy context. To establish its use in education and research, its psychometric properties needed to be evaluated. The aim of the study was to generate criterion scores and evaluate the reliability and construct validity of a web-based version of the R4C instrument. Methods: Fourteen physiotherapy experts and 39 final-year physiotherapy students completed the R4C instrument and the Pain Attitudes and Beliefs Scale for Physiotherapists (PABS-PT). Twelve experts and 17 students completed the R4C instrument on a second occasion. The R4C instrument was evaluated with regard to: internal consistency (five subscales of D1); test-retest reliability (D1-D4); inter-rater reliability (D2-D4); and construct validity in terms of convergent validity (D1.4, D2, D4). Criterion scores were generated based on the experts' responses to identify the scores of qualified practitioners' clinical reasoning abilities. Results: For the expert and student samples, the analyses demonstrated satisfactory internal consistency (α range: 0.67-0.91), satisfactory test-retest reliability (ICC range: 0.46-0.94) except for D3 for the experts and D4 for the students. The inter-rater reliability demonstrated excellent agreement within the expert group (ICC range: 0.94-1.0). The correlations between the R4C instrument and PABS-PT (r range: 0.06-0.76) supported acceptable construct validity. Conclusions: The web-based R4C instrument shows satisfactory psychometric properties and could be useful in education and research. The use of the instrument may contribute to a deeper understanding of physiotherapists' and students' clinical reasoning, valuable for curriculum development and improvements of competencies in clinical reasoning related to clients' behavioural change.
Affiliation(s)
- Maria Elvén: Division of Physiotherapy, School of Health, Care and Social Welfare, Mälardalen University, Västerås, Sweden
- Jacek Hochwälder: Division of Psychology, School of Health, Care and Social Welfare, Mälardalen University, Eskilstuna, Sweden
- Elizabeth Dean: Division of Physiotherapy, School of Health, Care and Social Welfare, Mälardalen University, Västerås, Sweden; Department of Physical Therapy, Faculty of Medicine, University of British Columbia, Vancouver, BC, Canada
- Olle Hällman: Department of Information Technology, Uppsala University, Uppsala, Sweden
- Anne Söderlund: Division of Physiotherapy, School of Health, Care and Social Welfare, Mälardalen University, Västerås, Sweden
Collapse
|
23
|
Le test de concordance de script : un outil pédagogique multimodal [The script concordance test: a multimodal teaching tool]. Rev Med Interne 2018; 39:566-573. DOI: 10.1016/j.revmed.2017.12.011.
24
Phan SV. Cases in Psychiatry: A description of a multi-campus elective course for pharmacy students. Ment Health Clin 2018; 8:18-23. PMID: 29955540; PMCID: PMC6007521; DOI: 10.9740/mhc.2018.01.018.
Abstract
Cases in Psychiatry was a multi-campus elective course aimed at expanding psychiatry knowledge beyond the required curriculum. The class format included didactic coursework, small-group discussion of patient cases and article evaluation, submission of written notes, debates, and script concordance test questions delivered via a live online platform. Based on student assessment and feedback at the end of the course, the elective was determined to have met the prespecified course objectives.
Affiliation(s)
- Stephanie V Phan
- Clinical Assistant Faculty, Associate Department Head, University of Georgia College of Pharmacy, Southwest Clinical Campus, Albany, Georgia
25
Lubarsky S, Dory V, Meterissian S, Lambert C, Gagnon R. Examining the effects of gaming and guessing on script concordance test scores. Perspect Med Educ 2018; 7:174-181. PMID: 29904900; PMCID: PMC6002294; DOI: 10.1007/s40037-018-0435-8.
Abstract
INTRODUCTION In a script concordance test (SCT), examinees are asked to judge the effect of a new piece of clinical information on a proposed hypothesis. Answers are collected using a Likert-type scale (ranging from -2 to +2, with '0' indicating no effect) and compared with those of a reference panel of 'experts'. It has been argued, however, that the SCT may be susceptible to the influences of gaming and guesswork. This study aims to address some of the mounting concern over the response-process validity of SCT scores. METHOD Using published datasets from three independent SCTs, we investigated examinee response patterns and computed the score a hypothetical examinee would obtain on each test by 1) guessing random answers and 2) deliberately answering '0' on all test items. RESULTS A simulated random-guessing strategy led to scores 2 SDs below the mean scores of actual respondents (Z-scores -3.6 to -2.1). A simulated 'all-0' strategy led to scores at least 1 SD above those obtained by random guessing (Z-scores -2.2 to -0.7). In one dataset, stepwise exclusion of items whose modal panel response was '0', until such items made up fewer than 10% of the total number of test items, brought hypothetical 'all-0' scores down to 2 SDs below the mean scores of actual respondents. DISCUSSION Random guessing was not an advantageous response strategy. An 'all-0' response strategy, however, demonstrated evidence of artificial score inflation. Our findings pose a significant threat to the SCT's validity argument. 'Testwiseness' is a potential hazard in all testing formats, and appropriate countermeasures must be established. We propose an approach that might be used to mitigate a potentially real and troubling phenomenon in script concordance testing. The impact of this approach on the content validity of SCTs merits further discussion.
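Several of the abstracts in this listing (the head entry, Lubarsky et al., Goos et al.) refer to aggregate SCT scoring, in which an examinee earns credit in proportion to how many panelists chose the same Likert option, with the modal panel answer worth full credit. A minimal sketch of that general scheme follows; the panel data and names are hypothetical, and this is an illustration of the common aggregate method, not any cited study's actual implementation.

```python
from collections import Counter

def sct_item_credit(panel_responses):
    """Credit for each Likert option on one SCT item: the number of
    panelists who chose that option divided by the count of the modal
    option, so the modal answer earns 1.0 and rarer answers earn less."""
    counts = Counter(panel_responses)
    modal_count = max(counts.values())
    return {option: n / modal_count for option, n in counts.items()}

def score_test(examinee_answers, panel):
    """Sum the per-item credit an examinee earns across all items;
    options that no panelist chose earn 0."""
    return sum(
        sct_item_credit(panel_responses).get(answer, 0.0)
        for answer, panel_responses in zip(examinee_answers, panel)
    )

# Hypothetical 3-item test: each inner list is one panel's answers (-2..+2).
panel = [[0, 0, 1, 0, -1], [1, 1, 2, 0, 1], [-1, -2, -1, -1, 0]]
print(score_test([0, 1, -1], panel))  # matches the modal answer on every item: 3.0
print(score_test([0, 0, 0], panel))   # the 'all-0' gaming strategy: ~1.67
```

On this toy panel, the 'all-0' strategy still collects partial credit on items where '0' was a minority panel answer, which is exactly the score-inflation effect the Lubarsky study probes.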
Affiliation(s)
- Stuart Lubarsky
- Centre for Medical Education, McGill University, Montreal, Canada
- Valérie Dory
- Centre for Medical Education, McGill University, Montreal, Canada
- Carole Lambert
- Centre de pédagogie appliquée aux sciences de la santé (CPASS), Université de Montréal, Montreal, Canada
- Robert Gagnon
- Centre de pédagogie appliquée aux sciences de la santé (CPASS), Université de Montréal, Montreal, Canada
26
Atayee RS, Lockman K, Brock C, Abazia DT, Brooks TL, Pawasauskas J, Edmonds KP, Herndon CM. Multicentered Study Evaluating Pharmacy Students’ Perception of Palliative Care and Clinical Reasoning Using Script Concordance Testing. Am J Hosp Palliat Care 2018; 35:1394-1401. DOI: 10.1177/1049909118772845.
Abstract
Introduction: As the role of the pharmacist on the transdisciplinary palliative care team grows, so does the need for adequate instruction in palliative care and clinical reasoning skills in schools of pharmacy. Methods: This study evaluated second- and third-year pharmacy students from 6 accredited schools of pharmacy who completed surveys before and after the delivery of a didactic palliative care elective. The survey collected student demographics and perceptions of the importance of, and self-rated skill in, palliative care topics. The script concordance test (SCT) was used to assess clinical decision-making skills on patient cases, and student SCT scores were compared with those of a reference panel of experts. Results: A total of 89 students completed the pre-/post-surveys and were included in the data analysis. There was no statistically significant difference in students' perceived importance of palliative care skills before and after the elective. Students from all 6 institutions showed a significant increase in confidence in their palliative care skills at the end of the course, and there was a significant improvement across all institutions in clinical reasoning on most of the SCT questions used to assess these skills. Conclusions: Students choosing an elective in palliative care likely do so because they already understand the importance of these topics to their future practice settings. Delivery of a palliative care elective in the pharmacy curriculum significantly increases both student confidence in palliative care skills and clinical reasoning skills in these areas.
Affiliation(s)
- Rabia S. Atayee
- UC San Diego Skaggs School of Pharmacy and Pharmaceutical Sciences, La Jolla, CA, USA
- Department of Medicine, Palliative Care Team, UC San Diego Health, CA, USA
- Cara Brock
- College of Pharmacy, Roosevelt University, Schaumburg, IL, USA
- Daniel T. Abazia
- Ernest Mario School of Pharmacy, Rutgers University, Piscataway, NJ, USA
- Tracy L. Brooks
- Manchester University College of Pharmacy, Fort Wayne, IN, USA
- Kyle P. Edmonds
- UC San Diego Skaggs School of Pharmacy and Pharmaceutical Sciences, La Jolla, CA, USA
- Department of Medicine, Palliative Care Team, UC San Diego Health, CA, USA
27
27
|
Measurement of critical thinking, clinical reasoning, and clinical judgment in culturally diverse nursing students - A literature review. Nurse Educ Pract 2018; 30:91-100. PMID: 29669305; DOI: 10.1016/j.nepr.2018.04.002.
28
Wan MS, Tor E, Hudson JN. Improving the validity of script concordance testing by optimising and balancing items. Med Educ 2018; 52:336-346. PMID: 29318646; DOI: 10.1111/medu.13495.
Abstract
BACKGROUND A script concordance test (SCT) is a modality for assessing clinical reasoning. Concerns had been raised about the plausible validity threat to SCT scores if students deliberately avoided the extreme answer options to obtain higher scores. The aims of the study were firstly to investigate whether students' avoidance of the extreme answer options could result in higher scores, and secondly to determine whether a 'balanced approach' by careful construction of SCT items (to include extreme as well as median options as model responses) would improve the validity of an SCT. METHODS Using the paired sample t-test, the actual average student scores for 10 SCT papers from 2012-2016 were compared with simulated scores. The latter were generated by recoding all '-2' responses to '-1' and '+2' responses to '+1' for the whole and bottom 10% of the cohort (simulation 1), and scoring as if all students had chosen '0' for their responses (simulation 2). The actual average and simulated average scores in 2012 (before the 'balanced approach') were compared with those from 2013-2016, when papers had a good balance of modal responses from the expert reference panel. RESULTS In 2012, a score increase was seen in simulation 1 in the third-year cohort, from 50.2 to 55.6% (t [10] = 4.818; p = 0.001). Since 2013, with the 'balanced approach', the actual SCT scores (57.4%) were significantly higher than scores in both simulation 1 and simulation 2 (46.7% and 23.9% respectively). CONCLUSIONS When constructing SCT examinations, apart from the rigorous pre-examination optimisation, it is desirable to achieve a balance between items that attract extreme responses and those that attract median response options. This could mitigate the validity threat to SCT scores, especially for the low-performing students who have previously been shown to only select median responses and avoid the extreme responses.
Affiliation(s)
- Michael Sh Wan
- School of Medicine, University of Notre Dame, Sydney, New South Wales, Australia
- Elina Tor
- School of Medicine, University of Notre Dame, Sydney, New South Wales, Australia
- Judith Nicky Hudson
- Adelaide Medical School, University of Adelaide, Adelaide, South Australia, Australia
29
Abstract
OBJECTIVES Script concordance testing (SCT) is used to assess clinical decision-making. We explored the use of SCT to (1) quantify practice variation in infant lumbar puncture (LP) and (2) analyze physician characteristics affecting LP decision-making. METHODS Using standard SCT processes, a panel of pediatric subspecialty physicians constructed 15 infant LP case vignettes, each with 2 to 4 SCT questions (47 in total). The vignettes were distributed to pediatric attending physicians and fellows at 10 hospitals within the INSPIRE Network. We determined both raw scores (tendency to perform LP) and SCT scores (agreement with the reference panel), as well as their variation with participant factors. RESULTS Two hundred twenty-six respondents completed all 47 SCT questions. Pediatric emergency medicine physicians tended to select LP more frequently than general pediatricians, with significantly higher raw scores (20.2 ± 10.2 vs 13 ± 15; 95% confidence interval for the difference, 1 to 13). Concordance with the reference panel varied among subspecialties and with the frequency with which practitioners perform LPs in their practice. CONCLUSION Script concordance testing questions can be used as a tool to detect subspecialty practice variation. We detected significant variation in the self-reported use of LP for infants among different pediatric subspecialties.
30
Madani A, Gornitsky J, Watanabe Y, Benay C, Altieri MS, Pucher PH, Tabah R, Mitmaker EJ. Measuring Decision-Making During Thyroidectomy: Validity Evidence for a Web-Based Assessment Tool. World J Surg 2017; 42:376-383. PMID: 29110159; DOI: 10.1007/s00268-017-4322-y.
Abstract
BACKGROUND Errors in judgment during thyroidectomy can lead to recurrent laryngeal nerve injury and other complications. Despite the strong link between patient outcomes and intraoperative decision-making, methods to evaluate these complex skills are lacking. The purpose of this study was to develop objective metrics to evaluate advanced cognitive skills during thyroidectomy and to obtain validity evidence for them. METHODS An interactive online learning platform was developed ( www.thinklikeasurgeon.com ). Trainees and surgeons from four institutions completed a 33-item assessment, developed based on a cognitive task analysis and expert Delphi consensus. Sixteen items required subjects to make annotations on still frames of thyroidectomy videos, and accuracy scores were calculated based on an algorithm derived from experts' responses ("visual concordance test," VCT). Seven items were short answer (SA), requiring users to type their answers, and scores were automatically calculated based on their similarity to a pre-populated repertoire of correct responses. Test-retest reliability, internal consistency, and correlation of scores with self-reported experience and training level (novice, intermediate, expert) were calculated. RESULTS Twenty-eight subjects (10 endocrine surgeons and otolaryngologists, 18 trainees) participated. There was high test-retest reliability (intraclass correlation coefficient = 0.96; n = 10) and internal consistency (Cronbach's α = 0.93). The assessment demonstrated significant differences between novices, intermediates, and experts in total score (p < 0.01), VCT score (p < 0.01) and SA score (p < 0.01). There was high correlation between total case number and total score (ρ = 0.95, p < 0.01), between total case number and VCT score (ρ = 0.93, p < 0.01), and between total case number and SA score (ρ = 0.83, p < 0.01). 
CONCLUSION This study describes the development of novel metrics and provides validity evidence for an interactive Web-based platform to objectively assess decision-making during thyroidectomy.
Affiliation(s)
- Amin Madani
- Department of Surgery, McGill University Health Centre, 1650 Cedar Avenue, Rm D6-257, Montreal, QC, H3G 1A4, Canada
- Jordan Gornitsky
- Department of Surgery, McGill University Health Centre, 1650 Cedar Avenue, Rm D6-257, Montreal, QC, H3G 1A4, Canada
- Yusuke Watanabe
- Department of Gastroenterological Surgery II, Hokkaido University, Sapporo, Japan
- Cassandre Benay
- Department of Surgery, McGill University Health Centre, 1650 Cedar Avenue, Rm D6-257, Montreal, QC, H3G 1A4, Canada
- Maria S Altieri
- Department of Surgery, Stony Brook University Medical Center, Stony Brook, NY, USA
- Philip H Pucher
- Department of Surgery and Cancer, Imperial College London, London, UK
- Roger Tabah
- Department of Surgery, McGill University Health Centre, 1650 Cedar Avenue, Rm D6-257, Montreal, QC, H3G 1A4, Canada
- Elliot J Mitmaker
- Department of Surgery, McGill University Health Centre, 1650 Cedar Avenue, Rm D6-257, Montreal, QC, H3G 1A4, Canada
31
Carvalho ECD, Oliveira-Kumakura ARDS, Morais SCRV. Clinical reasoning in nursing: teaching strategies and assessment tools. Rev Bras Enferm 2017; 70:662-668. DOI: 10.1590/0034-7167-2016-0509.
Abstract
ABSTRACT Objective: To present the concept and development of teaching strategies and assessment tools for clinical reasoning in accurate practice. Method: This is a theoretical reflection based on scientific studies. Results: Understanding the essential concepts of the thought process, and how they articulate with different teaching strategies and assessment tools, allowed us to present ways of improving diagnostic and therapeutic clinical reasoning. Conclusion: The use of new strategies and assessment tools should be encouraged in order to contribute to the development of skills that lead to safe and effective decision-making.
32
Tsai TC. Twelve tips for the construction of ethical dilemma case-based assessment. Med Teach 2017; 39:341-346. PMID: 28379082; DOI: 10.1080/0142159x.2017.1288862.
Abstract
The ethical dilemma case-based examination (ethics Script Concordance Test, eSCT) is a written examination that can be delivered to a large group of examinees for the purpose of measuring high-level thinking. Because it accommodates diverse responses from experts, the ethics SCT allows partial credit. The framework of the ethics SCT includes a vignette containing an ethical dilemma and a leading question that asks the examinee to "agree" or "disagree", plus shifts of the prior decision as new information is added. In this article, the following tips for constructing this type of examination are provided: use "true" dilemmas, select an appropriate ethical issue, target high-level cognitive tasks, list key components, keep a single central theme, devise a quality scoring system, be important and plausible, be clear, select quality experts, validate, know the limitations, and be familiar with the test materials. The use of the eSCT to measure ethical reasoning ability appears to be both viable and desirable.
Affiliation(s)
- Tsuen-Chiuan Tsai
- Department of Pediatrics, Kaohsiung Medical University Hospital, Kaohsiung Medical University, Kaohsiung, Taiwan
33
Goos M, Schubach F, Seifert G, Boeker M. Validation of undergraduate medical student script concordance test (SCT) scores on the clinical assessment of the acute abdomen. BMC Surg 2016; 16:57. PMID: 27535826; PMCID: PMC4989333; DOI: 10.1186/s12893-016-0173-y.
Abstract
Background Health professionals often manage medical problems in critical situations under time pressure and on the basis of vague information. In recent years, dual process theory has provided a framework of cognitive processes to assist students in developing clinical reasoning skills, which are especially critical in surgery because of the high workload and elevated stress levels. However, clinical reasoning can be observed only indirectly, and the corresponding constructs are difficult to measure when assessing student performance. The script concordance test has become established in this field, and a number of studies suggest that it delivers a valid assessment of clinical reasoning. However, different scoring methods have been suggested, reflecting different interpretations of the underlying construct. In this work we shed light on the theoretical framework of script theory and give an idea of script concordance testing. We constructed a script concordance test in the clinical context of the “acute abdomen” and compared previously proposed scores with regard to their validity. Methods A test comprising 52 items in 18 clinical scenarios was developed, revised in line with published guidelines, and administered to 56 fourth- and fifth-year medical students at the end of a blended-learning seminar. We scored the answers using five different scoring methods (two distance, two aggregate, and single best answer) and compared the scoring keys, the resulting final scores, and Cronbach’s α after normalization of the raw scores. Results All scores except the single-best-answer calculation achieved acceptable reliability (≥ 0.75), as measured by Cronbach’s α. Students were clearly distinguishable from the experts, whose results were set to a mean of 80 and an SD of 5 by the normalization process.
With the two aggregate scoring methods, the students’ mean values were between 62.5 (AGGPEN) and 63.9 (AGG), equivalent to about three expert SDs below the experts’ mean (Cronbach’s α: 0.76 (AGGPEN) and 0.75 (AGG)). With the two distance scoring methods, the students’ mean was between 62.8 (DMODE) and 66.8 (DMEAN), equivalent to about two expert SDs below the experts’ mean (Cronbach’s α: 0.77 (DMODE) and 0.79 (DMEAN)). In this study the single-best-answer (SBA) scoring key yielded the worst psychometric results (Cronbach’s α: 0.68). Conclusion Assuming the psychometric properties of the script concordance test scores are valid, clinical reasoning skills can be measured reliably with the different scoring keys in the SCT presented here. Psychometrically, the distance methods seem superior, although inherent statistical properties of the scales might play a significant role. For methodological reasons, the aggregate methods can also be used. Despite the limitations and complexity of the underlying scoring process and the calculation of reliability, we advocate for the SCT because it allows a new perspective on the measurement and teaching of cognitive skills.
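Nearly every study in this listing reports internal consistency as Cronbach's α (0.64 to 0.93 across the entries). For reference, α is computed from a score matrix as k/(k−1) · (1 − Σ item variances / variance of total scores). A self-contained sketch with made-up data (the function name and examples are illustrative, not taken from any cited study):

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a score matrix (rows = examinees, columns = items):
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)),
    using population variances."""
    k = len(scores[0])  # number of items

    def variance(xs):
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / len(xs)

    item_variances = [variance([row[i] for row in scores]) for i in range(k)]
    total_variance = variance([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_variances) / total_variance)

# Made-up data: two items answered identically by four examinees
# (perfect internal consistency), then two uncorrelated items.
print(cronbach_alpha([[1, 1], [0, 0], [1, 1], [0, 0]]))  # 1.0
print(cronbach_alpha([[1, 1], [0, 0], [1, 0], [0, 1]]))  # 0.0
```

The 0.75 threshold the Goos study applies is a conventional cut-off for acceptable reliability of this coefficient, not a property of the formula itself.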
Affiliation(s)
- Matthias Goos
- Department of General and Visceral Surgery, University Medical Center Freiburg, Hugstetter Straße 55, 79106, Freiburg, Germany
- Fabian Schubach
- Department of General and Visceral Surgery, University Medical Center Freiburg, Hugstetter Straße 55, 79106, Freiburg, Germany
- Center for Medical Biometry and Medical Informatics, University of Freiburg, Stefan-Meier-Str. 26, 79104, Freiburg, Germany
- Gabriel Seifert
- Department of General and Visceral Surgery, University Medical Center Freiburg, Hugstetter Straße 55, 79106, Freiburg, Germany
- Martin Boeker
- Center for Medical Biometry and Medical Informatics, University of Freiburg, Stefan-Meier-Str. 26, 79104, Freiburg, Germany
34
Madani A, Watanabe Y, Bilgic E, Pucher PH, Vassiliou MC, Aggarwal R, Fried GM, Mitmaker EJ, Feldman LS. Measuring intra-operative decision-making during laparoscopic cholecystectomy: validity evidence for a novel interactive Web-based assessment tool. Surg Endosc 2016; 31:1203-1212. PMID: 27412125; DOI: 10.1007/s00464-016-5091-7.
Abstract
BACKGROUND Errors in judgment during laparoscopic cholecystectomy can lead to bile duct injuries and other complications. Despite correlations between outcomes, expertise and advanced cognitive skills, current methods to evaluate these skills remain subjective, rater- and situation-dependent and non-systematic. The purpose of this study was to develop objective metrics using a Web-based platform and to obtain validity evidence for their assessment of decision-making during laparoscopic cholecystectomy. METHODS An interactive online learning platform was developed ( www.thinklikeasurgeon.com ). Trainees and surgeons from six institutions completed a 12-item assessment, developed based on a cognitive task analysis. Five items required subjects to draw their answer on the surgical field, and accuracy scores were calculated based on an algorithm derived from experts' responses ("visual concordance test", VCT). Test-retest reliability, internal consistency, and correlation with self-reported experience, Global Operative Assessment of Laparoscopic Skills (GOALS) score and Objective Performance Rating Scale (OPRS) score were calculated. Questionnaires were administered to evaluate the platform's usability, feasibility and educational value. RESULTS Thirty-nine subjects (17 surgeons, 22 trainees) participated. There was high test-retest reliability (intraclass correlation coefficient = 0.95; n = 10) and internal consistency (Cronbach's α = 0.87). The assessment demonstrated significant differences between novices, intermediates and experts in total score (p < 0.01) and VCT score (p < 0.01). There was high correlation between total case number and total score (ρ = 0.83, p < 0.01) and between total case number and VCT (ρ = 0.82, p < 0.01), and moderate to high correlations between total score and GOALS (ρ = 0.66, p = 0.05), VCT and GOALS (ρ = 0.83, p < 0.01), total score and OPRS (ρ = 0.67, p = 0.04), and VCT and OPRS (ρ = 0.78, p = 0.01). 
Most subjects agreed or strongly agreed that the platform and assessment were easy to use [n = 29 (78%)], facilitate learning of intra-operative decision-making [n = 28 (81%)], and should be integrated into surgical training [n = 28 (76%)]. CONCLUSION This study provides preliminary validity evidence for a novel interactive platform to objectively assess decision-making during laparoscopic cholecystectomy.
Affiliation(s)
- Amin Madani
- Department of Surgery, McGill University, Montreal, Canada
- Steinberg-Bernstein Centre for Minimally Invasive Surgery and Innovation, McGill University Health Centre, 1650 Cedar Avenue, Rm D6-257, Montreal, QC, H3G 1A4, Canada
- Yusuke Watanabe
- Department of Gastroenterological Surgery II, Hokkaido University, Sapporo, Japan
- Elif Bilgic
- Steinberg-Bernstein Centre for Minimally Invasive Surgery and Innovation, McGill University Health Centre, 1650 Cedar Avenue, Rm D6-257, Montreal, QC, H3G 1A4, Canada
- Philip H Pucher
- Department of Surgery and Cancer, Imperial College London, London, UK
- Melina C Vassiliou
- Department of Surgery, McGill University, Montreal, Canada
- Steinberg-Bernstein Centre for Minimally Invasive Surgery and Innovation, McGill University Health Centre, 1650 Cedar Avenue, Rm D6-257, Montreal, QC, H3G 1A4, Canada
- Rajesh Aggarwal
- Department of Surgery, McGill University, Montreal, Canada
- Steinberg-Bernstein Centre for Minimally Invasive Surgery and Innovation, McGill University Health Centre, 1650 Cedar Avenue, Rm D6-257, Montreal, QC, H3G 1A4, Canada
- Faculty of Medicine, Steinberg Centre for Simulation and Interactive Learning, McGill University, Montreal, Canada
- Gerald M Fried
- Department of Surgery, McGill University, Montreal, Canada
- Steinberg-Bernstein Centre for Minimally Invasive Surgery and Innovation, McGill University Health Centre, 1650 Cedar Avenue, Rm D6-257, Montreal, QC, H3G 1A4, Canada
- Faculty of Medicine, Steinberg Centre for Simulation and Interactive Learning, McGill University, Montreal, Canada
- Liane S Feldman
- Department of Surgery, McGill University, Montreal, Canada
- Steinberg-Bernstein Centre for Minimally Invasive Surgery and Innovation, McGill University Health Centre, 1650 Cedar Avenue, Rm D6-257, Montreal, QC, H3G 1A4, Canada
35
36
Compère V, Abily J, Moriceau J, Gouin A, Veber B, Dupont H, Lorne E, Fellahi JL, Hanouz JL, Gerard JL, Sibert L, Dureuil B. Residents in tutored practice exchange groups have better medical reasoning as measured by script concordance test: a controlled, nonrandomized study. J Clin Anesth 2016; 32:236-241. PMID: 27290981; DOI: 10.1016/j.jclinane.2016.03.012.
Abstract
STUDY OBJECTIVE Clinical reasoning by anesthesiology residents in emergency situations, where optimal management is uncertain, could be improved by setting up a tutored practice exchange group. This study evaluated the impact of a practice exchange group (PEG), tutored by a senior anesthesiologist, on anesthesiology residents in emergency situations; changes in clinical reasoning were measured by script concordance tests (SCT). DESIGN We conducted a controlled, nonrandomized study. SETTING AND PARTICIPANTS Participants were anesthesiology residents at the Rouen, Caen, and Amiens University Hospitals. INTERVENTIONS Two resident groups were formed without randomization. The control group consisted of residents from Amiens University Hospital and Caen University Hospital. The study group (PEG group) consisted of residents from Rouen University Hospital, who attended weekly PEG sessions. The two groups had the same learning objectives, apart from the PEG. MEASUREMENTS In both the control group and the study group, each resident's clinical reasoning was assessed in the same formal manner by SCT. The primary outcome was the comparison of SCT results in the study group (PEG group) with those in the control group. MAIN RESULTS Performance on the SCT, expressed as degree of concordance with the expert panel (95% CI), was better in the PEG group (64% [62.1%-66%]) than in the control group (60% [57.5%-62.8%]) (P = .004). CONCLUSION Our study strongly suggests that an expert-directed, peer-conducted educational training program may improve the clinical reasoning of anesthesiology residents as measured by SCT.
Affiliation(s)
- V Compère
- Department of Anesthesiology and Intensive Care, Rouen University Hospital, France; Laboratory of Neuronal and Neuroendocrine Communication and Differentiation, DC2N, EA4310, U982 Inserm, Federative Institute of Multidisciplinary Research on Neuropeptides 23 (IFRMP 23), Place Emile Blondel, University of Rouen, Mont-Saint-Aignan, France
- J Abily
- Department of Anesthesiology and Intensive Care, Rouen University Hospital, France
- J Moriceau
- Department of Anesthesiology and Intensive Care, Rouen University Hospital, France
- A Gouin
- Department of Anesthesiology and Intensive Care, Rouen University Hospital, France
- B Veber
- Department of Anesthesiology and Intensive Care, Rouen University Hospital, France
- H Dupont
- Department of Anesthesiology and Intensive Care, Amiens University Hospital, France
- E Lorne
- Department of Anesthesiology and Intensive Care, Amiens University Hospital, France
- J L Fellahi
- Department of Anesthesiology and Intensive Care, EA4650, Caen University Hospital, France
- J L Hanouz
- Department of Anesthesiology and Intensive Care, EA4650, Caen University Hospital, France
- J L Gerard
- Department of Anesthesiology and Intensive Care, EA4650, Caen University Hospital, France
- L Sibert
- Department of Medical Pedagogy, University of Rouen, Rouen, France
- B Dureuil
- Department of Anesthesiology and Intensive Care, Rouen University Hospital, France
37
Faucher C, Dufour-Guindon MP, Lapointe G, Gagnon R, Charlin B. Assessing clinical reasoning in optometry using the script concordance test. Clin Exp Optom 2016; 99:280-286. PMID: 27087346; DOI: 10.1111/cxo.12354.
Abstract
BACKGROUND Clinical reasoning is central to any health profession but its development among learners is difficult to assess. Over the last few decades, the script concordance test (SCT) has been developed to solve this dilemma and has been used in many health professions; however, no study has been published on the use of the script concordance test in optometry. The purpose of this study was to develop and validate a script concordance test for the field of optometry. METHODS A 101-question script concordance test (27 short clinical scenarios) was developed and administered online to a convenience sample of 23 second-year and 19 fourth-year students of optometry. It was also administered to a reference panel of 12 experienced optometrists to develop the scoring key. An item-total correlation was calculated for each question. Cronbach's alpha coefficient was used to evaluate the script concordance test reliability and a t-test compared the two groups. RESULTS A final 77-question script concordance test was created by eliminating questions with low item-total correlation. Cronbach's alpha for this optimised 77-question script concordance test was 0.80. A group comparison revealed that the second-year students' scores (n = 23; mean score = 66.4 ± 7.87 per cent) were statistically lower (t = -4.141; p < 0.001) than those of the fourth-year students (n = 19; mean score = 75.5 ± 5.97 per cent). CONCLUSION The online script concordance test developed for this study was found to be both reliable and capable of discriminating between second- and fourth-year optometric students. These results demonstrate that the script concordance test may be considered as a new tool in the optometric educators' assessment arsenal. Further studies will be needed to cover additional levels of professional development.
Affiliation(s)
- Robert Gagnon
- Centre de pédagogie appliquée aux sciences de la santé, Faculté de médecine, Université de Montréal, Montréal, Canada
- Bernard Charlin
- Centre de pédagogie appliquée aux sciences de la santé, Faculté de médecine, Université de Montréal, Montréal, Canada
38
Roberti A, Roberti MDRF, Pereira ERS, Costa NMDSC. Script concordance test in medical schools in Brazil: possibilities and limitations. Sao Paulo Med J 2016; 134:116-20. [PMID: 26786613 PMCID: PMC10496543 DOI: 10.1590/1516-3180.2015.00100108] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1]
Abstract
CONTEXT AND OBJECTIVE Routine use of the script concordance test (SCT) is not common in Brazilian universities. This study aimed to analyze application of the SCT in the medical school of a Brazilian university. DESIGN AND SETTING Quantitative, analytical and descriptive study in the medical school of a Brazilian university. METHODS A total of 159/550 students participated. The test comprised ten clinical cases within internal medicine, with five items per case, rated on a five-point Likert scale. The test was scored in accordance with a marking key that had been validated by a reference panel. RESULTS In the pre-clinical and clinical phases, the mean scores were 51.6% and 63.4% of the maximum possible scores, respectively. Comparison of the means of the responses among all the years showed that there were significant differences in 40% of the items. The panel marked all the possible answers in five items, while in one item, all the panelists marked a single answer. Cronbach's alpha was 0.64. The results indicated that the more senior students performed better. Construction of an SCT with discriminative questions was not easy. The low reliability index may have occurred due to: a) problems with the construction of the questions; b) limitations of the reference panel; and/or c) the scoring key. CONCLUSION This instrument is very difficult to construct, apply and correct. These difficulties may make application of an SCT as an assessment method unfeasible in units with limited resources.
Affiliation(s)
- Alexandre Roberti
- MD, MSc. Assistant Professor, School of Medicine, Universidade Federal de Goiás (UFG), Goiânia, Goiás, Brazil.
- Edna Regina Silva Pereira
- MD, PhD. Associate Professor, School of Medicine, Universidade Federal de Goiás (UFG), Goiânia, Goiás, Brazil.
39
Roberti A, Roberti MDRF, Pereira ERS, Porto CC, Costa NMDSC. Development of clinical reasoning in an undergraduate medical program at a Brazilian university. Sao Paulo Med J 2016; 134:110-5. [PMID: 26648281 PMCID: PMC10496542 DOI: 10.1590/1516-3180.2015.00080108] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5]
Abstract
CONTEXT AND OBJECTIVE The cognitive processes relating to the development of clinical reasoning are only partially understood, which explains the difficulties in teaching this skill in medical courses. This study aimed to understand how clinical reasoning develops among undergraduate medical students. DESIGN AND SETTING Quantitative and qualitative exploratory descriptive study conducted at the medical school of Universidade Federal de Goiás. METHODS The focus group technique was used among 40 students who participated in five focus groups, with eight students from each year, from the first to fifth year of the medical school program. The material was subjected to content analysis in categories, and was subsequently quantified and subjected to descriptive statistical analysis and chi-square test for inferential statistics. RESULTS The content of the students' statements was divided into two categories: clinical reasoning - in the preclinical phase, clinical reasoning was based on knowledge of basic medical science and in the clinical phase, there was a change to pattern recognition; knowledge of basic medical science - 80.6% of the students recognized its use, but they stated that they only used it in difficult cases. CONCLUSION In the preclinical phase, in a medical school with a traditional curriculum, clinical reasoning depends on the knowledge acquired from basic medical science, while in the clinical phase, it becomes based on pattern recognition.
Affiliation(s)
- Alexandre Roberti
- MD, MSc. Assistant Professor, Medical School, Universidade Federal de Goiás (UFG), Goiânia, Goiás, Brazil.
- Edna Regina Silva Pereira
- MD, PhD. Adjunct Professor, Medical School, Universidade Federal de Goiás (UFG), Goiânia, Goiás, Brazil.
- Celmo Celeno Porto
- MD, PhD. Emeritus Professor, Medical School, Universidade Federal de Goiás (UFG), Goiânia, Goiás, Brazil.
40
Dumont K, Loye N, Goudreau J. Le potentiel diagnostique des questions d'un test de concordance de scripts pour évaluer le raisonnement clinique infirmier [The diagnostic potential of script concordance test questions for assessing clinical reasoning in nursing]. Pédagogie Médicale 2015. [DOI: 10.1051/pmed/2015012] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2]
41
Drolet P. Assessing clinical reasoning in anesthesiology: Making the case for the Script Concordance Test. Anaesth Crit Care Pain Med 2015; 34:5-7. [DOI: 10.1016/j.accpm.2015.01.003] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6]
42
See KC, Tan KL, Lim TK. The script concordance test for clinical reasoning: re-examining its utility and potential weakness. Med Educ 2014; 48:1069-77. [PMID: 25307634 DOI: 10.1111/medu.12514] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.1]
Abstract
CONTEXT The script concordance test (SCT) assesses clinical reasoning under conditions of uncertainty. Relatively little information exists on Z-score (standard deviation [SD]) cut-offs for distinguishing more experienced from less experienced trainees, and whether scores depend on factual knowledge. Additionally, a recent review highlighted the finding that the SCT is potentially weakened by the fact that the mere avoidance of extreme responses may greatly increase test scores. OBJECTIVES This study was conducted in order to elucidate the best cut-off Z-scores, to correlate SCT scores with scores on a separate medical knowledge examination (MKE), and to investigate potential solutions to the weakness of the SCT. METHODS An analysis of scores on pulmonary and critical care medicine tests undertaken during July and August 2013 was performed. Clinical reasoning was tested using 1-hour SCTs (Question Sets 1 or 2). Medical knowledge was tested using a 3-hour, computer-adapted, multiple-choice question examination. RESULTS The expert panel was composed of 16 attending physicians. The SCTs were completed by 16 fellows and 10 residents. Fourteen fellows completed the MKE. Test reliability was acceptable for both Question Sets 1 and 2 (Cronbach's alphas of 0.79 and 0.89, respectively). Z-scores of -2.91 and -1.76 best separated the scores of residents from those of fellows, and the scores of fellows from those of attending physicians, respectively. Scores on the SCT and MKE were poorly correlated. Simply avoiding extreme answers boosted the Z-scores of the lowest 10 scorers on both Question Sets 1 and 2 by ≥ 1 SD. Increasing the proportion of questions with extreme modal answers to 50%, and using hypothetical question sets created from Question Set 1, overcame this problem, but consensus scoring did not. CONCLUSIONS The SCT was able to differentiate between test subjects of varying levels of competence, and results were not associated with medical knowledge. However, the test was vulnerable to responses that intentionally avoided extreme values. Increasing the proportion of questions with extreme modal answers may attenuate the effect of candidates exploiting the test weakness related to extreme responses.
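The aggregate scoring scheme that SCTs typically use — and that makes avoiding extreme responses profitable — can be sketched as follows. The panel votes here are invented for illustration, not taken from the study:

```python
from collections import Counter

def sct_credit(candidate_answer, panel_answers):
    """Aggregate SCT scoring: credit for an answer equals the number of
    panelists who chose it, divided by the votes for the modal answer."""
    votes = Counter(panel_answers)
    return votes.get(candidate_answer, 0) / max(votes.values())

# Hypothetical panel split on a five-point (-2..+2) scale:
panel = [1, 1, 0, 0, -1]
# Answering 0 earns full credit, -1 half credit, -2 nothing --
# so a candidate who simply never strays from the midpoint
# collects partial credit on most items without reasoning.
```

Because panel responses rarely concentrate on -2 or +2, always answering near the midpoint inflates scores; raising the share of questions whose modal panel answer is extreme, as the abstract proposes, makes that strategy costly.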
Affiliation(s)
- Kay C See
- Division of Respiratory and Critical Care Medicine, University Medicine Cluster, National University Hospital, Singapore
43
Ahmadi SF, Khoshkish S, Soltani-Arabshahi K, Hafezi-Moghadam P, Zahmatkesh G, Heidari P, Baba-Beigloo D, Baradaran HR, Lotfipour S. Challenging script concordance test reference standard by evidence: do judgments by emergency medicine consultants agree with likelihood ratios? Int J Emerg Med 2014; 7:34. [PMID: 25635194 PMCID: PMC4306062 DOI: 10.1186/s12245-014-0034-3] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.9]
Abstract
Background We aimed to compare the clinical judgments of a reference panel of emergency medicine academic physicians against evidence-based likelihood ratios (LRs) regarding the diagnostic value of selected clinical and paraclinical findings in the context of a script concordance test (SCT). Findings An SCT with six scenarios and five questions per scenario was developed. Subsequently, 15 emergency medicine attending physicians (the reference panel) took the test, and their judgments regarding the diagnostic value of those findings for given diseases were recorded. The LRs of the same findings for the same diseases were extracted from a series of published systematic reviews. The reference panel judgments were then compared to the evidence-based LRs. To investigate test-retest reliability, five participants took the test one month later, and the correlation of their first and second judgments was quantified using the Spearman rank-order coefficient. In 22 out of 30 (73.3%) findings, the expert judgments were significantly different from the LRs. The differences included overestimation (30%), underestimation (30%), and judging the diagnostic value in the opposite direction (13.3%). Moreover, the score of a hypothetical test-taker was calculated to be 21.73 out of 30 if his/her answers were based on the evidence-based LRs. The test showed an acceptable test-retest reliability coefficient (Spearman coefficient: 0.83). Conclusions Although the SCT is an interesting test for evaluating clinical decision-making in emergency medicine, our results raise concerns regarding whether the judgments of an expert panel are sufficiently valid as the reference standard for this test.
Affiliation(s)
- Seyed-Foad Ahmadi
- Center for Educational Research in Medical Sciences, Iran University of Medical Sciences, Tehran 14496, Iran; Program in Public Health, Department of Population Health and Disease Prevention, University of California Irvine, 653 E. Peltason Dr., Irvine 92697, CA, USA
- Shahin Khoshkish
- Center for Educational Research in Medical Sciences, Iran University of Medical Sciences, Tehran 14496, Iran; Klinik für Innere Medizin III, Kardiologie, Angiologie und Internistische Intensivmedizin, Universitätsklinikum des Saarlandes, Homburg/Saar 66421, Germany
- Kamran Soltani-Arabshahi
- Center for Educational Research in Medical Sciences, Iran University of Medical Sciences, Tehran 14496, Iran
- Peyman Hafezi-Moghadam
- Department of Emergency Medicine, Iran University of Medical Sciences, Tehran 14496, Iran
- Golara Zahmatkesh
- Center for Educational Research in Medical Sciences, Iran University of Medical Sciences, Tehran 14496, Iran
- Parisa Heidari
- Center for Educational Research in Medical Sciences, Iran University of Medical Sciences, Tehran 14496, Iran; Department of Neurology, Saarland University Medical Center, Homburg/Saar 66421, Germany
- Hamid R Baradaran
- Center for Educational Research in Medical Sciences, Iran University of Medical Sciences, Tehran 14496, Iran
- Shahram Lotfipour
- Department of Emergency Medicine, School of Medicine, University of California Irvine, Irvine 92697, CA, USA
44
Humbert AJ, Miech EJ. Measuring gains in the clinical reasoning of medical students: longitudinal results from a school-wide script concordance test. Acad Med 2014; 89:1046-1050. [PMID: 24979174 DOI: 10.1097/acm.0000000000000267] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5]
Abstract
PURPOSE Medical students develop clinical reasoning skills throughout their training. The Script Concordance Test (SCT) is a standardized instrument that assesses clinical reasoning; test takers with more clinical experience consistently outperform those with less experience. SCT studies to date have been cross-sectional, with no studies examining same-student longitudinal performance gains. METHOD This four-year observational study took place between 2008 and 2011 at the Indiana University School of Medicine. Students in two different cohorts took the same SCT as second-year medical students and then again as fourth-year medical students. The authors matched and analyzed same-student data from the two SCT administrations for the classes of 2011 and 2012. They used descriptive statistics, correlation coefficients, and paired t tests. RESULTS Matched data were available for 260 students in the class of 2011 (of 303, 86%) and 264 students in the class of 2012 (of 289, 91%). The mean same-student gain for the class of 2011 was 8.6 (t[259] = 15.9; P < .0001) and for the class of 2012 was 11.3 (t[263] = 21.4; P < .0001). Each cohort gained more than one standard deviation. CONCLUSIONS Medical students made statistically significant gains in their performance on an SCT over a two-year period. These findings demonstrate same-student gains in clinical reasoning over time as measured by the SCT and suggest that the SCT as a standardized instrument can help to evaluate growth in clinical reasoning skills.
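The paired analysis reported here — each student's year-4 score matched against their own year-2 score — reduces to a paired t statistic on the per-student differences. A minimal sketch, with invented scores rather than the study's data:

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(year2, year4):
    """Paired t statistic and degrees of freedom for matched
    same-student scores; positive t indicates a gain over time."""
    diffs = [b - a for a, b in zip(year2, year4)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / sqrt(n))
    return t, n - 1

# Hypothetical matched cohort of four students:
t, df = paired_t([50, 60, 70, 80], [58, 71, 79, 92])
```

Matching students to themselves removes between-student variability from the error term, which is why same-student longitudinal designs detect gains that cross-sectional comparisons can blur.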
Affiliation(s)
- Aloysius J Humbert
- Dr. Humbert is associate professor of clinical emergency medicine, Department of Emergency Medicine, Indiana University School of Medicine, Indianapolis, Indiana. Dr. Miech is a research scientist in health services research, Regenstrief Institute, Indianapolis, Indiana.
45
Fouilloux V, Doguet F, Kotsakis A, Dubrowski A, Berdah S. A model of cardiopulmonary bypass staged training integrating technical and non-technical skills dedicated to cardiac trainees. Perfusion 2014; 30:132-9. [PMID: 24843115 DOI: 10.1177/0267659114534287] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.9]
Abstract
OBJECTIVES To develop a standardized simulation-based curriculum teaching junior cardiac trainees (CTs) in France the medical knowledge and the technical, communication and critical thinking skills necessary to initiate and wean from cardiopulmonary bypass (CPB). Performance on post-curricular tests was compared between CTs who participated in the new curriculum and those who did not. METHODS The simulation-based curriculum was developed by content and education experts. Simulations sequentially taught the skills necessary for initiating and weaning from CPB, as well as managing crises, by adding fidelity and complexity to scenarios. Nine CTs were randomly assigned to the new curriculum (n=5) or the traditional curriculum (n=4). Skills were assessed using tests of medical knowledge and technical, communication (GRS) and critical thinking (SCT) skills. A two-sample Wilcoxon rank-sum test compared average scores between the two groups. An alpha of 0.05 was set to indicate statistically significant differences. RESULTS The results revealed that CTs in the new curriculum significantly outperformed CTs in the traditional curriculum on technical (18.2 vs 14.8, p=0.05) and communication (3.5 vs 2.2, p=0.013) skills. There was no significant difference between the two groups on the script concordance test (16.5 vs 14.8, p=0.141) or the knowledge test (26.9 vs 24.6, p=0.14). CONCLUSION Our new curriculum teaches the communication and technical skills necessary for CPB. The results of this pilot study are encouraging and relevant, and they give grounds for future research with a larger panel of trainees. Based on the current distribution of scores, a sample size of 12 CTs per group should yield significant results for all tests.
Affiliation(s)
- V Fouilloux
- Aix-Marseille Université, LBA-UMRT24, 13916, Marseille, France; Hôpital d'Enfants de la Timone, Service de Chirurgie Thoracique et Cardio-vasculaire, 13385, Marseille, France; Department of Cardiovascular Surgery, University of Toronto, The Hospital for Sick Children, Toronto, Ontario, Canada
- F Doguet
- Department of Cardiac Surgery, Hôpital Charles Nicolle, Rouen, France
- A Kotsakis
- Department of Critical Care Medicine and Division of Cardiology, University of Toronto, The Hospital for Sick Children, Toronto, Ontario, Canada
- A Dubrowski
- The Learning Institute, Department of Paediatrics, University of Toronto, The Hospital for Sick Children, Toronto, Ontario, Canada
- S Berdah
- Aix-Marseille Université, LBA-UMRT24, 13916, Marseille, France
46
Chang TP, Kessler D, McAninch B, Fein DM, Scherzer DJ, Seelbach E, Zaveri P, Jackson JM, Auerbach M, Mehta R, Van Ittersum W, Pusic MV. Script concordance testing: assessing residents' clinical decision-making skills for infant lumbar punctures. Acad Med 2014; 89:128-35. [PMID: 24280838 DOI: 10.1097/acm.0000000000000059] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.8]
Abstract
PURPOSE Residents must learn which infants require a lumbar puncture (LP), a clinical decision-making skill (CDMS) difficult to evaluate because of considerable practice variation. The authors created an assessment model of the CDMS to determine when an LP is indicated, taking practice variation into account. The objective was to detect whether script concordance testing (SCT) could measure CDMS competency among residents for performing infant LPs. METHOD In 2011, using a modified Delphi technique, an expert panel of 14 attending physicians constructed 15 case vignettes (each with 2 to 4 SCT questions) that represented various infant LP scenarios. The authors distributed the vignettes to residents at 10 academic pediatric centers within the International Simulation in Pediatric Innovation, Research, and Education Network. They compared SCT scores among residents of different postgraduate years (PGYs), specialties, training in adult medicine, LP experience, and practice within an endemic Lyme disease area. RESULTS Of 730 eligible residents, 102 completed 47 SCT questions. They could earn a maximum score of 47. Median SCT scores were significantly higher in PGY-3s compared with PGY-1s (difference: 3.0; 95% confidence interval [CI] 1.0-4.9; effect size d = 0.87). Scores also increased with increasing LP experience (difference: 3.3; 95% CI 1.1-5.5) and with adult medicine training (difference: 2.9; 95% CI 0.6-5.0). Residents in Lyme-endemic areas tended to perform more LPs than those in nonendemic areas. CONCLUSIONS SCT questions may be useful as an assessment tool to determine CDMS competency among residents for performing infant LPs.
Affiliation(s)
- Todd P Chang
- Dr. Chang is assistant professor of pediatrics, Division of Emergency Medicine and Transport, Children's Hospital Los Angeles and University of Southern California Keck School of Medicine, Los Angeles, California. Dr. Kessler is assistant professor of pediatrics, Department of Pediatrics, Columbia University College of Physicians and Surgeons, New York, New York. Dr. McAninch is assistant professor, Division of Pediatric Emergency Medicine at Children's Hospital of Pittsburgh of the University of Pittsburgh Medical Center and University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania. Dr. Fein is assistant professor of pediatrics, Division of Pediatric Emergency Medicine, Children's Hospital at Montefiore affiliated with Albert Einstein College of Medicine, Bronx, New York. Dr. Scherzer is clinical associate professor of pediatrics, Division of Emergency Medicine, Nationwide Children's Hospital and Ohio State University, Columbus, Ohio. Dr. Seelbach is assistant professor, Department of Pediatrics, University of Kentucky, Lexington, Kentucky. Dr. Zaveri is assistant professor of pediatrics and emergency medicine, Division of Emergency Medicine, Children's National Medical Center and George Washington University, Washington, DC. Dr. Jackson is assistant professor of pediatrics, Wake Forest School of Medicine, Winston-Salem, North Carolina. Dr. Auerbach is assistant professor of pediatrics, Department of Pediatrics, Yale University School of Medicine, New Haven, Connecticut. Dr. Mehta is associate professor of pediatrics, Section of Critical Care, Georgia Regents University, Augusta, Georgia. Dr. Van Ittersum is assistant professor of pediatrics, Cleveland Clinic Lerner College of Medicine of Case Western Reserve University, Cleveland, Ohio. Dr. Pusic is assistant professor of emergency medicine, New York University School of Medicine, New York, New York.
47
Mathieu S, Couderc M, Glace B, Tournadre A, Malochet-Guinamand S, Pereira B, Dubost JJ, Soubrier M. Construction and utilization of a script concordance test as an assessment tool for DCEM3 (5th year) medical students in rheumatology. BMC Med Educ 2013; 13:166. [PMID: 24330600 PMCID: PMC3878954 DOI: 10.1186/1472-6920-13-166] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.5]
Abstract
BACKGROUND The script concordance test (SCT) is a method for assessing the clinical reasoning of medical students by placing them in a context of uncertainty such as they will encounter in their future daily practice. Script concordance testing is going to be included as part of the computer-based national ranking examination (iNRE). This study was designed to create a script concordance test in rheumatology and administer it to DCEM3 (fifth year) medical students via the online platform of the Clermont-Ferrand medical school. METHODS Our SCT for rheumatology teaching was constructed by a panel of 19 experts in rheumatology (6 hospital-based and 13 community-based). One hundred seventy-nine DCEM3 (fifth year) medical students were invited to take the test. Scores were computed using the scoring key available on the University of Montreal website. Reliability of the test was estimated by the Cronbach alpha coefficient for internal consistency. RESULTS The test comprised 60 questions. Among the 26 students who took the test (26/179: 14.5%), 15 completed it in its entirety. The reference panel of rheumatologists obtained a mean score of 76.6 and the 15 students had a mean score of 61.5 (p = 0.001). The Cronbach alpha value was 0.82. CONCLUSIONS An online SCT can be used as an assessment tool for medical students in rheumatology. This study also highlights the active participation of community-based rheumatologists, who accounted for the majority of the 19 experts in the reference panel.
Affiliation(s)
- Sylvain Mathieu
- Service de Rhumatologie, Centre hospitalier Jacques Lacarin, Vichy, France
- Rheumatology Department, Gabriel Montpied Teaching Hospital, Place H Dunant, Clermont-Ferrand 63000, France
- Marion Couderc
- Service de Rhumatologie, Centre hospitalier Jacques Lacarin, Vichy, France
- Baptiste Glace
- Service de Rhumatologie, Centre hospitalier Jacques Lacarin, Vichy, France
- Anne Tournadre
- Service de Rhumatologie, Centre hospitalier Jacques Lacarin, Vichy, France
- Bruno Pereira
- DRCI, CHU G Montpied, F-63003 Clermont-Ferrand, France; Univ Clermont 1, Fac Médecine, Clermont-Ferrand F-63003, France
- Martin Soubrier
- Service de Rhumatologie, Centre hospitalier Jacques Lacarin, Vichy, France
48
Petrucci AM, Nouh T, Boutros M, Gagnon R, Meterissian SH. Assessing clinical judgment using the Script Concordance test: the importance of using specialty-specific experts to develop the scoring key. Am J Surg 2013; 205:137-40. [DOI: 10.1016/j.amjsurg.2012.09.002] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.2]