1. Zhang J, Chen H, Wang X, Huang X, Xie D. Application of flipped classroom teaching method based on ADDIE concept in clinical teaching for neurology residents. BMC Medical Education 2024; 24:366. PMID: 38570778; PMCID: PMC10988803; DOI: 10.1186/s12909-024-05343-z.
Abstract
BACKGROUND Standardized residency training is an important component of medical personnel training in China: it enriches residents' clinical experience and improves both their ability to communicate with patients and their clinical expertise. Neurology is difficult to teach because it covers many disease types, complicated conditions, and a high degree of specialization, which places higher demands on residents' independent learning ability, critical thinking, and learning outcomes. Based on the ADDIE concept (Analysis-Design-Development-Implementation-Evaluation), this study combined the flipped classroom teaching method with clinical practice and evaluated its teaching effect, to provide a basis and reference for implementing the flipped classroom in future neurology residency training. METHODS The participants were 90 neurology residents undergoing standardized training in our hospital in the classes of 2019 and 2020. Using the random number table method, the 90 residents were divided into a control group and an observation group of 45 each. The control group used traditional teaching methods, including problem-based learning (PBL), case-based learning (CBL), and lecture-based learning (LBL). The observation group adopted the flipped classroom teaching method based on the ADDIE concept. A unified assessment of learning outcomes was conducted before the residents left the department in the fourth week, covering theoretical and skill knowledge, independent learning ability, critical thinking ability, and clinical practice ability. Finally, the overall quality of teaching was assessed.
RESULTS The theoretical and clinical skills assessment scores of the observation group were significantly higher than those of the control group (P < 0.001). The observation group also scored better on independent learning ability and critical thinking ability (P < 0.001) and outperformed the control group on all mini-CEX indicators (P < 0.05). In addition, teaching quality was rated higher in the observation group than in the control group (P < 0.001). CONCLUSION The flipped classroom teaching method based on the ADDIE concept can effectively improve the teaching effect of standardized training for neurology residents and had a positive effect on residents' autonomous learning ability, critical thinking ability, theoretical knowledge, and comprehensive clinical ability.
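The two-arm allocation described in the methods above (a random number table splitting 90 residents into equal groups) is equivalent, in code, to shuffling the cohort and halving it. A generic sketch of that procedure, not the study's actual implementation:

```python
import random

def allocate_two_groups(participants, seed=42):
    """Randomly split participants into two equal-sized study arms."""
    rng = random.Random(seed)      # fixed seed only to make the split reproducible
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"control": shuffled[:half], "observation": shuffled[half:]}

# Hypothetical labels for the 90 residents in the study.
groups = allocate_two_groups([f"resident_{i:02d}" for i in range(1, 91)])
print(len(groups["control"]), len(groups["observation"]))  # 45 45
```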
Affiliation(s)
- Juan Zhang
- Department of Neurology, The First Affiliated Hospital of Anhui University of Traditional Chinese Medicine, 117 Meishan Road, Hefei, Anhui, China
- Hong Chen
- The First Clinical Medical College of Anhui University of Chinese Medicine, Hefei, China
- Xie Wang
- The First Clinical Medical College of Anhui University of Chinese Medicine, Hefei, China
- Xiaofeng Huang
- Department of Neurology, The First Affiliated Hospital of Anhui University of Traditional Chinese Medicine, 117 Meishan Road, Hefei, Anhui, China
- Daojun Xie
- Department of Neurology, The First Affiliated Hospital of Anhui University of Traditional Chinese Medicine, 117 Meishan Road, Hefei, Anhui, China
2. Rayyan MR. The use of objective structured clinical examination in dental education: a narrative review. Frontiers in Oral Health 2024; 5:1336677. PMID: 38370877; PMCID: PMC10869490; DOI: 10.3389/froh.2024.1336677.
Abstract
The Objective Structured Clinical Examination (OSCE) is a performance-based assessment intended to evaluate medical students' clinical competency in a simulated, standardized environment. Because it measures a student's ability to apply clinical knowledge, diagnostic skill, and decision-making, the OSCE is considered more objective than traditional tests. OSCE examinations have been increasingly employed in dental schools, particularly in the last decade, and it is crucial to investigate instructors' and dental students' experiences with this evaluation approach.
Affiliation(s)
- Mohammad Ramadan Rayyan
- Prosthodontic Department, College of Medicine and Dentistry, Riyadh Elm University, Riyadh, Saudi Arabia
3. Niu L, Mei Y, Xu X, Guo Y, Li Z, Dong S, Liu R. A novel strategy combining mini-CEX and OSCE to assess standardized training of professional postgraduates in the department of prosthodontics. BMC Medical Education 2022; 22:888. PMID: 36550519; PMCID: PMC9773511; DOI: 10.1186/s12909-022-03956-w.
Abstract
OBJECTIVE The mini clinical evaluation exercise (mini-CEX) and the objective structured clinical examination (OSCE) are widely acknowledged as effective measures in resident standardized training (RST) in European and American countries. In China, however, mini-CEX and OSCE formats are mainly limited to undergraduate clinical examinations, and little is known about the validity and proper use of a mini-CEX/OSCE evaluation system in advanced dental clinical education. This study aimed to explore whether the combination of mini-CEX and OSCE provides a global, multi-dimensional assessment of postgraduate clinical competence in RST. METHODS Postgraduates who received RST from June 2017 to June 2019 were selected and evaluated with modified mini-CEX/OSCE scales. Each student was evaluated at least twice, in the initial and final stages of training (tested every 4 months). A questionnaire was administered to investigate satisfaction with the arrangement of RST. RESULTS Mini-CEX/OSCE test results indicated that postgraduates significantly improved their comprehensive competence in RST projects in the department of prosthodontics (P < 0.05). Compared with other Master of Stomatology students, postgraduates pursuing a prosthodontics master's degree made more progress over a training period of up to 1 year with four sessions of face-to-face feedback tutoring (P < 0.05). Survey results revealed a high level of satisfaction with the clinical practice evaluation. CONCLUSION The modified mini-CEX/OSCE combined evaluation system is an effective and reliable tool for assessing comprehensive clinical ability in the RST of professional graduates and can fully exploit the respective advantages of the two formats in improving students' clinical competency, especially after several rounds of assessment.
Affiliation(s)
- Lin Niu
- Key laboratory of Shaanxi Province for Craniofacial Precision Medicine Research, College of Stomatology, Xi'an Jiaotong University, Xi'an, 710004, Shaanxi, China
- Clinical Research Center of Shaanxi Province for Dental and Maxillofacial Diseases, Xi'an, 710004, Shaanxi, China
- Department of Prosthodontics, College of Stomatology, Xi'an Jiaotong University, Xi'an, 710004, Shaanxi, China
| | - Yukun Mei
- Key laboratory of Shaanxi Province for Craniofacial Precision Medicine Research, College of Stomatology, Xi'an Jiaotong University, Xi'an, 710004, Shaanxi, China
- Clinical Research Center of Shaanxi Province for Dental and Maxillofacial Diseases, Xi'an, 710004, Shaanxi, China
- Department of Prosthodontics, College of Stomatology, Xi'an Jiaotong University, Xi'an, 710004, Shaanxi, China
| | - Xiaoqiao Xu
- Key laboratory of Shaanxi Province for Craniofacial Precision Medicine Research, College of Stomatology, Xi'an Jiaotong University, Xi'an, 710004, Shaanxi, China
- Clinical Research Center of Shaanxi Province for Dental and Maxillofacial Diseases, Xi'an, 710004, Shaanxi, China
- Department of Prosthodontics, College of Stomatology, Xi'an Jiaotong University, Xi'an, 710004, Shaanxi, China
| | - Yi Guo
- Key laboratory of Shaanxi Province for Craniofacial Precision Medicine Research, College of Stomatology, Xi'an Jiaotong University, Xi'an, 710004, Shaanxi, China
- Clinical Research Center of Shaanxi Province for Dental and Maxillofacial Diseases, Xi'an, 710004, Shaanxi, China
- Department of Prosthodontics, College of Stomatology, Xi'an Jiaotong University, Xi'an, 710004, Shaanxi, China
| | - Zhen Li
- Key laboratory of Shaanxi Province for Craniofacial Precision Medicine Research, College of Stomatology, Xi'an Jiaotong University, Xi'an, 710004, Shaanxi, China
- Clinical Research Center of Shaanxi Province for Dental and Maxillofacial Diseases, Xi'an, 710004, Shaanxi, China
- Department of Prosthodontics, College of Stomatology, Xi'an Jiaotong University, Xi'an, 710004, Shaanxi, China
| | - Shaojie Dong
- Key laboratory of Shaanxi Province for Craniofacial Precision Medicine Research, College of Stomatology, Xi'an Jiaotong University, Xi'an, 710004, Shaanxi, China.
- Clinical Research Center of Shaanxi Province for Dental and Maxillofacial Diseases, Xi'an, 710004, Shaanxi, China.
- Department of Prosthodontics, College of Stomatology, Xi'an Jiaotong University, Xi'an, 710004, Shaanxi, China.
| | - Ruirui Liu
- Key laboratory of Shaanxi Province for Craniofacial Precision Medicine Research, College of Stomatology, Xi'an Jiaotong University, Xi'an, 710004, Shaanxi, China.
- Clinical Research Center of Shaanxi Province for Dental and Maxillofacial Diseases, Xi'an, 710004, Shaanxi, China.
- Department of Prosthodontics, College of Stomatology, Xi'an Jiaotong University, Xi'an, 710004, Shaanxi, China.
| |
4. Brydges R, Law M, Ma IWY, Gavarkovs A. On embedding assessments of self-regulated learning into licensure activities in the health professions: a call to action. Canadian Medical Education Journal 2022; 13:100-109. PMID: 36091729; PMCID: PMC9441114; DOI: 10.36834/cmej.73855.
Abstract
How well have healthcare professionals and trainees been prepared for the inevitable demands for new learning that will arise in their future? Given the rapidity with which 'core healthcare knowledge' changes, medical educators have a responsibility to audit whether trainees have developed the capacity to effectively self-regulate their learning. Trainees who engage in effective self-regulated learning (SRL) skillfully monitor and control their cognition, motivation, behaviour, and environment to adaptively meet demands for new learning. However, medical curricula rarely assess trainees' capacity to engage in these strategic processes. In this position paper, we argue for a paradigm shift toward assessing SRL more deliberately in undergraduate and postgraduate programs, as well as in associated licensing activities. Specifically, we explore evidence supporting an innovative blend of principles from the science on SRL, and on preparation for future learning (PFL) assessments. We propose recommendations for how program designers, curriculum developers, and assessment leads in undergraduate and postgraduate training programs, and in licensing bodies can work together to develop integrated assessments that measure how and how well trainees engage in SRL. Claims about lifelong learning in health professions education have gone unmatched by responsive curricular changes for far too long. Further neglecting these important competencies represents a disservice to medical trainees and a potential risk to the future patients they will care for.
Affiliation(s)
- Ryan Brydges
- Allan Waters Family Simulation Centre, St. Michael’s Hospital, Unity Health Toronto, Ontario, Canada
- Marcus Law
- MD Program, Temerty Faculty of Medicine, University of Toronto, Ontario, Canada
- Irene WY Ma
- Division of General Internal Medicine, Department of Medicine, Cumming School of Medicine, University of Calgary, Alberta, Canada
- Adam Gavarkovs
- Institute of Health Policy, Management and Evaluation, University of Toronto, Ontario, Canada
5. Shrivastava S, Shrivastava P. Understanding the significance of high stakes and low stakes assessments in medical undergraduate training. Medical Journal of Babylon 2022. DOI: 10.4103/mjbl.mjbl_13_22.
6. Castro-Yuste C, Rodríguez-Cornejo MJ, García-Cabanillas MJ, Paublete-Herrera MDC, Paramio-Cuevas JC, Moreno-Corral LJ. Design of a nursing objective structured clinical examination of a first-year clinical practice program. Revista da Escola de Enfermagem da USP 2020; 54:e03616. PMID: 33175019; DOI: 10.1590/s1980-220x2018054203616.
Abstract
OBJECTIVE To design a content-valid nursing objective structured clinical examination for a first-year clinical nursing practice program. METHOD The examination was designed following an expert-consensus procedure comprising three phases: selection of the activities in which students should be competent according to the learning outcomes of the course, clinical case design, and integration of the designed clinical cases into the stations of the test. RESULTS Of the 44 surveys submitted for the design of the stations, 37 were answered, and 31 respondents met the inclusion criteria for the panel of experts. The activities on which the experts reached the highest degree of consensus were: basic physical assessment and monitoring of vital signs, assessment of hygiene and skin status, ability to develop care plans, management of safety principles in medication administration, and administration of oral medication. Based on the selected activities, the experts developed 20 clinical cases, from which a four-station nursing objective structured clinical examination was designed. CONCLUSION The structured, expert-based methodology enabled the design of a content-valid objective structured clinical examination appropriate for evaluating the learning outcomes of students attending a clinical practice program.
7. Tanaka P, Park YS, Liu L, Varner C, Kumar AH, Sandhu C, Yumul R, McCartney KT, Spilka J, Macario A. Assessment scores of a mock objective structured clinical examination administered to 99 anesthesiology residents at 8 institutions. Anesthesia & Analgesia 2020; 131:613-621. PMID: 32149757; DOI: 10.1213/ane.0000000000004705.
Abstract
BACKGROUND Objective Structured Clinical Examinations (OSCEs) are used in a variety of high-stakes examinations. The primary goal of this study was to examine factors influencing the variability of assessment scores for mock OSCEs administered to senior anesthesiology residents. METHODS Using the American Board of Anesthesiology (ABA) OSCE Content Outline as a blueprint, scenarios were developed for 4 of the ABA skill types: (1) informed consent, (2) treatment options, (3) interpretation of echocardiograms, and (4) application of ultrasonography. Eight residency programs administered these 4 OSCEs to CA3 residents during a 1-day formative session. A global score and checklist items were used for scoring by faculty raters. We used a statistical framework called generalizability theory, or G-theory, to estimate the sources of variation (or facets), and to estimate the reliability (ie, reproducibility) of the OSCE performance scores. Reliability provides a metric on the consistency or reproducibility of learner performance as measured through the assessment. RESULTS Of the 115 total eligible senior residents, 99 participated in the OSCE because the other residents were unavailable. Overall, residents correctly performed 84% (standard deviation [SD] 16%, range 38%-100%) of the 36 total checklist items for the 4 OSCEs. On global scoring, the pass rate for the informed consent station was 71%, for treatment options was 97%, for interpretation of echocardiograms was 66%, and for application of ultrasound was 72%. The estimate of reliability expressing the reproducibility of examinee rankings equaled 0.56 (95% confidence interval [CI], 0.49-0.63), which is reasonable for normative assessments that aim to compare a resident's performance relative to other residents because over half of the observed variation in total scores is due to variation in examinee ability. 
Phi coefficient reliability of 0.42 (95% CI, 0.35-0.50) indicates that criterion-based judgments (eg, pass-fail status) cannot be made. Phi expresses the absolute consistency of a score and reflects how closely the assessment is likely to reproduce an examinee's final score. Overall, the greatest (14.6%) variance was due to the person by item by station interaction (3-way interaction) indicating that specific residents did well on some items but poorly on other items. The variance (11.2%) due to residency programs across case items was high suggesting moderate variability in performance from residents during the OSCEs among residency programs. CONCLUSIONS Since many residency programs aim to develop their own mock OSCEs, this study provides evidence that it is possible for programs to create a meaningful mock OSCE experience that is statistically reliable for separating resident performance.
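The contrast drawn above between the relative (G) coefficient and the absolute Phi coefficient comes down to which error terms enter the denominator. A minimal sketch, with variance components chosen purely for illustration so that they reproduce the reported 0.56 and 0.42 (they are not the study's published estimates):

```python
def reliability(var_person, var_relative_error, var_absolute_error):
    """Relative (G) and absolute (Phi) reliability from variance components.

    var_relative_error: error affecting examinee rankings (person-by-item,
        person-by-station interactions).
    var_absolute_error: all error, additionally including main effects of
        items and stations, which matter for pass/fail (criterion) decisions.
    """
    g = var_person / (var_person + var_relative_error)
    phi = var_person / (var_person + var_absolute_error)
    return g, phi

# Illustrative components only, scaled to mirror the coefficients quoted above.
g, phi = reliability(var_person=0.56, var_relative_error=0.44, var_absolute_error=0.78)
print(round(g, 2), round(phi, 2))  # 0.56 0.42
```

Phi can never exceed G, because the absolute error term contains everything the relative term does plus the station and item main effects.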
Affiliation(s)
- Pedro Tanaka
- Department of Anesthesiology, Perioperative and Pain Medicine, Stanford University School of Medicine, Stanford, California
- Yoon Soo Park
- Department of Medical Education, University of Illinois at Chicago College of Medicine, Chicago, Illinois
- Linda Liu
- Department of Anesthesia and Perioperative Care, University of California San Francisco, San Francisco, California
- Chelsia Varner
- Department of Anesthesiology, University of Southern California, Los Angeles, California
- Amanda H Kumar
- Department of Anesthesiology, Duke University School of Medicine, Durham, North Carolina
- Charandip Sandhu
- Department of Anesthesiology, University of California Davis, Davis, California
- Roya Yumul
- Department of Anesthesiology, Cedars Sinai Medical Center, Los Angeles, California
- Kate Tobin McCartney
- Department of Anesthesiology, University of California Irvine, Irvine, California
- Jared Spilka
- Naval Medical Center San Diego, San Diego, California
- Alex Macario
- Department of Anesthesiology, Perioperative and Pain Medicine, Stanford University School of Medicine, Stanford, California
8. Spanke J, Raus C, Haase A, Angelow A, Ludwig F, Weckmann G, Schmidt CO, Chenot JF. Fairness and objectivity of a multiple scenario objective structured clinical examination. GMS Journal for Medical Education 2019; 36:Doc26. PMID: 31211221; PMCID: PMC6545613; DOI: 10.3205/zma001234.
Abstract
Introduction: The aim of the Objective Structured Clinical Examination (OSCE) is a standardized and fair assessment of clinical skills. While observing second-clinical-year medical students during a summative OSCE assessing a General Practice clerkship, we noticed that information exchange with peers led to progressively faster and overly focused management of the simulations. We therefore established a Multiple Scenario OSCE (MS-OSCE) in which all students had to manage the same chief complaint at a station, but its underlying scenario was randomly changed as students rotated through the circuit. We wanted to ensure that students fully explore the differential diagnosis rather than manage their task on the basis of shared information, and to assess whether an MS-OSCE violates the assumptions of objectivity and fairness given that students are not tested with the same scenarios. Methods: We developed and piloted five OSCE stations (chest pain, abdominal pain, back pain, fatigue, and acute cough), each with two or three different underlying scenarios. At each station, the scenario changed randomly from student to student. Performance was assessed with a checklist and a global rating. The effect of scenarios and raters on students' grades was assessed by calculating the intraclass correlation coefficient with a fixed-effect two-level linear model. Results: A total of 169 students and 23 raters participated in the MS-OSCE. Internal consistency over all stations was 0.65 by Cronbach's alpha. The difference in mean grades between the scenarios of a given chief complaint ranged from 0.03 to 0.4 on a 1-to-5 grading scale. Scenarios accounted for 4% to 9% of the variance in final grades at each station, and raters for 20% to 50% when adjusted for students' skills. Conclusions: The effect of different scenarios on grades was relevant but small compared with the effect of raters. Improving rater training is more important for ensuring the objectivity and fairness of an MS-OSCE than providing the same scenario to all students.
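The internal-consistency figure quoted in the results (Cronbach's alpha over stations) is computed from a students-by-stations score matrix. A generic sketch on toy data, not the study's data or code:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_students, n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                               # number of items (stations)
    item_variances = scores.var(axis=0, ddof=1).sum() # sum of per-station variances
    total_variance = scores.sum(axis=1).var(ddof=1)   # variance of students' totals
    return k / (k - 1) * (1 - item_variances / total_variance)

# Toy data: 4 students x 3 stations, graded on a 1-to-5 scale.
scores = [[2, 3, 2], [3, 3, 4], [4, 5, 4], [5, 4, 5]]
print(cronbach_alpha(scores))
```

Alpha rises when stations rank students consistently; a value like the 0.65 reported above is common for OSCEs, where stations deliberately sample different skills.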
Affiliation(s)
- Johannes Spanke
- University Medicine Greifswald, Institute for Community Medicine, Department of General Practice and Family Medicine, Greifswald, Germany
- Christina Raus
- University Medicine Greifswald, Institute for Community Medicine, Department of General Practice and Family Medicine, Greifswald, Germany
- *To whom correspondence should be addressed: Christina Raus, University Medicine Greifswald, Institute for Community Medicine, Department of General Practice and Family Medicine, Fleischmannstr. 6, D-17475 Greifswald, Germany
- Annekathrin Haase
- University Medicine Greifswald, Institute for Community Medicine, Department of General Practice and Family Medicine, Greifswald, Germany
- Aniela Angelow
- University Medicine Greifswald, Institute for Community Medicine, Department of General Practice and Family Medicine, Greifswald, Germany
- Fabian Ludwig
- University Medicine Greifswald, Institute for Community Medicine, Department of General Practice and Family Medicine, Greifswald, Germany
- Gesine Weckmann
- University Medicine Greifswald, Institute for Community Medicine, Department of General Practice and Family Medicine, Greifswald, Germany
- European University of Applied Sciences, Faculty of Applied Health Sciences, Rostock, Germany
- Carsten Oliver Schmidt
- University Medicine Greifswald, Institute for Community Medicine, SHIP-KEF, Greifswald, Germany
- Jean-Francois Chenot
- University Medicine Greifswald, Institute for Community Medicine, Department of General Practice and Family Medicine, Greifswald, Germany
9. Tanaka P, Adriano A, Ngai L, Park YS, Marty A, Wakatsuki S, Brun C, Harrison K, Bushell E, Thomsen JLD, Wen L, Painter C, Chen M, Macario A. Development of an objective structured clinical examination using the American Board of Anesthesiology content outline for the objective structured clinical examination component of the APPLIED certification examination. A&A Practice 2018; 11:193-197. DOI: 10.1213/xaa.0000000000000779.
10. Eva KW, Bordage G, Campbell C, Galbraith R, Ginsburg S, Holmboe E, Regehr G. Towards a program of assessment for health professionals: from training into practice. Advances in Health Sciences Education: Theory and Practice 2016; 21:897-913. PMID: 26590984; DOI: 10.1007/s10459-015-9653-6.
Abstract
Despite multifaceted attempts to "protect the public," including the implementation of various assessment practices designed to identify individuals at all stages of training and practice who underperform, profound deficiencies in quality and safety continue to plague the healthcare system. The purpose of this reflections paper is to cast a critical lens on current assessment practices and to offer insights into ways in which they might be adapted to ensure alignment with modern conceptions of health professional education for the ultimate goal of improved healthcare. Three dominant themes will be addressed: (1) The need to redress unintended consequences of competency-based assessment; (2) The potential to design assessment systems that facilitate performance improvement; and (3) The importance of ensuring authentic linkage between assessment and practice. Several principles cut across each of these themes and represent the foundational goals we would put forward as signposts for decision making about the continued evolution of assessment practices in the health professions: (1) Increasing opportunities to promote learning rather than simply measuring performance; (2) Enabling integration across stages of training and practice; and (3) Reinforcing point-in-time assessments with continuous professional development in a way that enhances shared responsibility and accountability between practitioners, educational programs, and testing organizations. Many of the ideas generated represent suggestions for strategies to pilot test, for infrastructure to build, and for harmonization across groups to be enabled. These include novel strategies for OSCE station development, formative (diagnostic) assessment protocols tailored to shed light on the practices of individual clinicians, the use of continuous workplace-based assessment, and broadening the focus of high-stakes decision making beyond determining who passes and who fails. 
We conclude with reflections on systemic (i.e., cultural) barriers that may need to be overcome to move towards a more integrated, efficient, and effective system of assessment.
Affiliation(s)
- Kevin W Eva
- Centre for Health Education Scholarship, University of British Columbia, JPPN 3324, 910 West 10th Avenue, Vancouver, BC, V5Z 1M9, Canada
- Craig Campbell
- Royal College of Physicians and Surgeons of Canada, Ottawa, ON, Canada
- Eric Holmboe
- Accreditation Council for Graduate Medical Education, Chicago, IL, USA
- Glenn Regehr
- Centre for Health Education Scholarship, University of British Columbia, JPPN 3324, 910 West 10th Avenue, Vancouver, BC, V5Z 1M9, Canada
11. Trejo-Mejía JA, Sánchez-Mendiola M, Méndez-Ramírez I, Martínez-González A. Reliability analysis of the objective structured clinical examination using generalizability theory. Medical Education Online 2016; 21:31650. PMID: 27543188; PMCID: PMC4991996; DOI: 10.3402/meo.v21.31650.
Abstract
BACKGROUND The objective structured clinical examination (OSCE) is a widely used method for assessing clinical competence in health sciences education, and studies have shown evidence of its validity and reliability. No studies of OSCE reliability measurement with generalizability theory (G-theory) have been published in Latin America. The aims of this study were to assess the reliability of an OSCE for medical students using G-theory and to explore its usefulness for quality improvement. METHODS An observational cross-sectional study was conducted at the National Autonomous University of Mexico (UNAM) Faculty of Medicine in Mexico City. A total of 278 fifth-year medical students were assessed with an 18-station OSCE in a summative end-of-career final examination, administered in four versions. G-theory with a crossover random-effects design was used to identify the main sources of variance; examiners, standardized patients, and cases were considered a single facet of analysis. RESULTS The OSCE had a generalizability coefficient of 0.93. The major components of variance were stations, students, and residual error; the sites and versions of the test contributed minimal variance. CONCLUSIONS Our study achieved a G coefficient similar to those in other reports, which is acceptable for summative tests. G-theory allows the magnitude of multiple sources of error to be estimated and helps decision makers determine the number of stations, test versions, and examiners needed to obtain reliable measurements.
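The closing point of the abstract, that G-theory helps decision makers choose the number of stations, is usually made through a decision (D) study: station-linked error variance shrinks in proportion to the number of stations sampled, so projected reliability rises as stations are added. A sketch with illustrative variance components, not the study's estimates:

```python
def projected_g(var_student, var_error_per_station, n_stations):
    """Projected generalizability coefficient when averaging over n_stations.

    In a D study, error variance attributable to stations is divided by
    the number of stations sampled; more stations mean less error.
    """
    return var_student / (var_student + var_error_per_station / n_stations)

# Made-up components: reliability climbs as the exam adds stations.
for n in (6, 12, 18):
    print(n, round(projected_g(0.40, 0.54, n), 2))
```

With these hypothetical components, an 18-station exam projects to about 0.93, mirroring the coefficient reported above, while shorter exams fall off quickly.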
12. Tavares W, Boet S. On the assessment of paramedic competence: a narrative review with practice implications. Prehospital and Disaster Medicine 2015; 31:64-73. DOI: 10.1017/s1049023x15005166.
Abstract
INTRODUCTION Paramedicine is experiencing significant growth in scope of practice, autonomy, and role in the health care system. Despite clinical governance models, the degree to which paramedicine can ultimately be safe and effective depends on the individuals the profession deems suited to practice. This creates an imperative for those responsible for these decisions to ensure that assessments of paramedic competence are accurate, trustworthy, and defensible. PURPOSE The purpose of this study was to explore and synthesize the relevant theoretical foundations and literature informing best practices in performance-based assessment (PBA) of competence, as they might be applied in paramedicine, for the design or evaluation of assessment programs. METHODS A narrative review methodology was applied to focus intentionally, but broadly, on purpose-relevant, theoretically derived research that could inform assessment protocols in paramedicine. Primary and secondary studies from a number of health professions that contributed to and informed best practices related to the assessment of paramedic clinical competence were included and synthesized. RESULTS Multiple conceptual frameworks, psychometric requirements, and emerging lines of research are forwarded. Seventeen practice implications are derived to promote understanding, best practices, and evaluation criteria for educators, employers, and licensing/certifying bodies when considering the assessment of paramedic competence. CONCLUSIONS The assessment of paramedic competence is a complex process requiring an understanding, appreciation for, and integration of conceptual and psychometric principles. The field of PBA is advancing rapidly, with numerous opportunities for research.
|
13
|
Schlegel C, Bonvin R, Rethans JJ, van der Vleuten C. The use of video in standardized patient training to improve portrayal accuracy: A randomized post-test control group study. MEDICAL TEACHER 2015; 37:730-737. [PMID: 25314143 DOI: 10.3109/0142159x.2014.970989] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
INTRODUCTION High-stakes objective structured clinical examinations (OSCEs) with standardized patients (SPs) should offer the same conditions to all candidates throughout the exam. SP performance should therefore remain as close to the original role script as possible during all encounters. In this study, we examined the impact of video in SP training on SPs' role accuracy, investigating how the use of different types of video during SP training improves the accuracy of SP portrayal. METHODS In a randomized post-test control group design, three groups of 12 SPs each trained with a different type of video were compared with one control group of 12 SPs trained without video. The three intervention groups used a role-modeling video, a performance-feedback video, or a combination of both. Each SP in each group had four student encounters. Two blinded faculty members rated the 192 video-recorded encounters, using a case-specific rating instrument to assess SPs' role accuracy. RESULTS SPs trained by video showed significantly (p < 0.001) better role accuracy over the four sequential portrayals than SPs trained without video. There was no difference between the three types of video training. DISCUSSION The use of video during SP training enhances the accuracy of SP portrayal compared with no video, regardless of the type of video intervention used.
|
14
|
Isa A, Bernstein I, Trivedi M, Mayes T, Kennard B, Emslie G. Childhood depression subscales using repeated sessions on Children's Depression Rating Scale - revised (CDRS-R) scores. J Child Adolesc Psychopharmacol 2014; 24:318-24. [PMID: 25137188 PMCID: PMC4137336 DOI: 10.1089/cap.2013.0127] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
BACKGROUND Although acute treatments have been shown to be effective in treating early-onset depression, only about one-third of patients reach remission within 3 months. Unfortunately, delayed time to remission in early-onset depression leads to poorer therapeutic outcomes. Clearly, there is a need to identify, diagnose, and effectively treat depressed patients quickly. A sophisticated understanding of depression subscales and their change over time with treatment could enhance pathways to individualized treatment approaches for childhood depression. OBJECTIVE Previous studies have found that the clinician-rated Children's Depression Rating Scale-Revised (CDRS-R) measures multiple subscales (or components) of depression. The aim of this study was to see how these subscales change over the course of a 12-week study. This knowledge will help determine whether dimensions/subscales of childhood depression (paralleling the adult literature), derived through factor analysis, are useful. METHODS We examined two clinical trials in which youth (n=234) with major depressive disorder (MDD) were treated openly with fluoxetine for eight sessions spread over 12 weeks. The CDRS-R was completed based on clinician interviews with parent and child at each session. Classical test theory and component analysis with associated parallel analysis (oblique rotation) were conducted on each week's scores. RESULTS Although more factors were needed for the baseline and first two therapy sessions, a two-factor solution sufficed thereafter. Depressed facial affect, listless speech, and hypoactivity best defined Factor I, whereas sleep problems, appetite disturbance, physical symptoms, irritability, guilt, and weeping best defined Factor II. All other symptoms cross-loaded almost equally on the two factors. The scale's reliability (internal consistency) improved from baseline to exit sessions (α=0.65-0.91). As a result, the clinicians' assessments of the various symptoms became more highly related to one another, causing the first eigenvalue to increase from 3.24 to 7.38 and the variance explained to increase from 19% to 43% over sessions. These two factors may reflect 1) clinician-observed signs and 2) reported symptoms of depression. CONCLUSIONS Factor analysis of CDRS-R data from a single session consistently generates a complex, difficult-to-interpret structure of at least three factors, making it hard to understand what these factors measure. However, when data are gathered over additional sessions, the CDRS-R structure tends to simplify to two factors. The reasons for this simplification are as yet unclear and in need of further study.
Affiliation(s)
- Ameena Isa
- Department of Psychiatry, University of Texas Southwestern Medical Center, Dallas, Texas
- Ira Bernstein
- Department of Clinical Sciences, University of Texas Southwestern Medical Center, Dallas, Texas
- Madhukar Trivedi
- Department of Psychiatry, University of Texas Southwestern Medical Center, Dallas, Texas
- Taryn Mayes
- Department of Psychiatry, University of Texas Southwestern Medical Center, Dallas, Texas; Division of Child and Adolescent Psychiatry, Children's Medical Center, Dallas, Texas
- Betsy Kennard
- Department of Psychiatry, University of Texas Southwestern Medical Center, Dallas, Texas; Division of Child and Adolescent Psychiatry, Children's Medical Center, Dallas, Texas
- Graham Emslie
- Department of Psychiatry, University of Texas Southwestern Medical Center, Dallas, Texas; Division of Child and Adolescent Psychiatry, Children's Medical Center, Dallas, Texas
|
15
|
Schoenmakers B, Wens J. The objective structured clinical examination revisited for postgraduate trainees in general practice. INTERNATIONAL JOURNAL OF MEDICAL EDUCATION 2014; 5:45-50. [PMID: 25341211 PMCID: PMC4224044 DOI: 10.5116/ijme.52eb.f882] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/02/2013] [Accepted: 01/31/2014] [Indexed: 05/27/2023]
Abstract
OBJECTIVE To investigate whether the psychometric qualities of an OSCE consisting of more complex simulated patient encounters remain valid and reliable in the assessment of postgraduate trainees in general practice. METHODS In this intervention study without a control group, the traditional OSCE was formally replaced by the new, complex version. The study population comprised all postgraduate trainees (second and third phase) in general practice during the ongoing academic year. Data were collected and handled as part of the formal assessment program. Univariate analyses, analyses of score variance, and multivariate analyses were performed to assess the test qualities. RESULTS A total of 340 students participated. Average final scores were slightly higher for third-phase students (t-test, p = 0.05). Overall test scores were equally distributed at the station, circuit, and phase levels. A multiple regression analysis revealed that test scores depended on the stations and circuits, but not on the master phase. CONCLUSIONS In a changing learning environment, assessment and evaluation strategies require reorientation. The reliability and validity of the OSCE remain subject to discussion. In particular, when it comes to content and design, the traditional OSCE might underestimate the performance level of postgraduate trainees in general practice. Reshaping this OSCE into a more sophisticated design with more complex patient encounters appears to restore the validity of the test results.
Affiliation(s)
- Birgitte Schoenmakers
- Academic Centre of General Practice, Department of Public Health and Primary Care, University of Leuven, Belgium
- Johan Wens
- Department of Public Health and Primary Care, Campus Drie Eiken, University of Antwerp, Belgium
|
16
|