1.
Beckett RD, Gratz MA, Marwitz KK, Hanson KM, Isch J, Robison HD. Development, Validation, and Reliability of a P1 Objective Structured Clinical Examination Assessing the National EPAs. American Journal of Pharmaceutical Education 2023;87:100054. [PMID: 37316140] [DOI: 10.1016/j.ajpe.2023.100054]
Abstract
OBJECTIVE To document the performance of first-year pharmacy students on a revised objective structured clinical examination (OSCE) based on national entrustable professional activities, identify risk factors for poor performance, and assess its validity and reliability. METHODS A working group developed the OSCE to verify students' progress toward readiness for advanced pharmacy practice experiences at the L1 level of entrustment (ready for thoughtful observation) on the national entrustable professional activities, with stations cross-mapped to the Accreditation Council for Pharmacy Education educational outcomes. Baseline characteristics and academic performance were used to investigate risk factors for poor performance and validity, respectively, by comparing students who were successful on the first attempt with those who were not. Reliability was evaluated using re-grading by a blinded, independent grader, and analyzed using Cohen's kappa. RESULTS A total of 65 students completed the OSCE. Of these, 33 (50.8%) successfully completed all stations on first attempt, and 32 (49.2%) had to re-attempt at least 1 station. Successful students had higher Health Sciences Reasoning Test scores (mean difference 5, 95% CI 2-9). First professional year grade point average was higher for students who passed all stations on first attempt (mean difference 0.4 on a 4-point scale, 95% CI 0.1-0.7). When evaluated in a multiple logistic regression, no differences were statistically significant between groups. Most kappa values were above 0.4 (range 0.404-0.708), suggesting moderate to substantial reliability. CONCLUSION Though predictors of poor performance were not identified when accounting for covariates, the OSCE was found to have good validity and reliability.
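The inter-grader agreement statistic reported above (Cohen's kappa) is straightforward to compute by hand. The sketch below is purely illustrative: the function, the two grader lists, and the pass/fail labels are hypothetical, not the study's data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' labels: (p_o - p_e) / (1 - p_e)."""
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labelled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[k] * freq_b.get(k, 0) for k in freq_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical pass/fail grades from two independent graders over 8 stations.
grader_1 = ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail"]
grader_2 = ["pass", "pass", "fail", "fail", "fail", "pass", "pass", "pass"]
print(round(cohens_kappa(grader_1, grader_2), 3))  # → 0.467
```

A value of about 0.47 falls in the same "moderate" band (above 0.4) that the abstract describes for most of its stations; 1.0 indicates perfect agreement and 0 indicates agreement no better than chance.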
Affiliation(s)
- Melissa A Gratz
- Manchester University College of Health Sciences and Pharmacy, Fort Wayne, IN, USA; Lutheran Hospital, Fort Wayne, IN, USA
- Kathryn K Marwitz
- Manchester University College of Health Sciences and Pharmacy, Fort Wayne, IN, USA
- Kierstan M Hanson
- Manchester University College of Health Sciences and Pharmacy, Fort Wayne, IN, USA
- Jason Isch
- Manchester University College of Health Sciences and Pharmacy, Fort Wayne, IN, USA; Saint Joseph Health System, Mishawaka, IN, USA
2.
Al-Haqan A, Al-Taweel D, Koshy S, Alghanem S. Evolving to Objective Structured Clinical Exams (OSCE): Transitional experience in an undergraduate pharmacy program in Kuwait. Saudi Pharm J 2021;29:104-113. [PMID: 33603545] [PMCID: PMC7873743] [DOI: 10.1016/j.jsps.2020.12.013]
Abstract
BACKGROUND Objective Structured Clinical Exams (OSCEs) can assess professional competencies in a structured manner and facilitate objective evaluation of clinical performance. With limited data from the Eastern Mediterranean region, this study aims to describe the development, implementation, and evaluation of OSCEs for final-year pharmacy students in Kuwait. The study also aims to compare students' performance across two academic years (2015-2016 and 2016-2017). METHODS The design, implementation, and evaluation of the competency-based OSCE followed a 3-phase systematic, evidence-based approach. The development phase involved establishing an OSCE working group to develop a blueprint and scoring rubrics and to organise assessors and standardised patient/physician training. The implementation phase involved conducting formative and summative OSCEs. The evaluation phase involved undertaking student and staff perception surveys. RESULTS The overall OSCE scores (median (interquartile range)) for the academic years 2015-2016 and 2016-2017 were 71.6% (32.2) and 60.0% (30.7), respectively (p < 0.0001). The average student performance score was high in stations covering the 'patient consultation and diagnosis' competency (71.4% (95% CI: 66.7-73.3)) and lower in stations covering the 'monitoring of medicine therapy' competency (50.0% (95% CI: 33.3-66.7)). Students perceived stations covering 'monitoring medicines therapy' and 'assessment of medicine' as difficult, whereas staff perceived stations related to the 'patient consultation and diagnosis' competency as the easiest. Students reported that the OSCE was a positive experience, as it provided them an opportunity to practise real-life scenarios in a safe learning environment. CONCLUSION The OSCE helped to identify students' level of competency prior to graduation and areas of the curriculum to improve.
Affiliation(s)
- Asmaa Al-Haqan
- Kuwait University, Faculty of Pharmacy, Pharmacy Practice Department, Safat 13110, Kuwait
- Dalal Al-Taweel
- Kuwait University, Faculty of Pharmacy, Pharmacy Practice Department, Safat 13110, Kuwait
- Samuel Koshy
- Kuwait University, Faculty of Pharmacy, Pharmacy Practice Department, Safat 13110, Kuwait
- Sarah Alghanem
- Kuwait University, Faculty of Pharmacy, Pharmacy Practice Department, Safat 13110, Kuwait
3.
Wilbur K, Driessen EW, Scheele F, Teunissen PW. Workplace-Based Assessment in Cross-Border Health Professional Education. Teaching and Learning in Medicine 2020;32:91-103. [PMID: 31339363] [DOI: 10.1080/10401334.2019.1637742]
Abstract
Construct: The globalization of healthcare has been accentuated by the export of health professional curricula overseas. Yet intact translation of pedagogies and practices devised in one cultural setting may not be possible or necessarily appropriate for alternate environments. Purposeful examination of workplace learning is necessary to understand how the source or "home" program may need adapting in the distributed or "host" setting. Background: Strategies to optimize cross-border medical education partnerships have been largely focused on elements of campus-based learning. Determining how host clinical supervisors approach assessment in experiential settings within a different culture and uphold the standards of home programs is relevant given the influence of context on trainees' demonstrated competencies. In this mixed-methods study, we sought to explore assessor judgments of student workplace-based performance made by preceptors sharing a pharmacy curriculum in Canada and Qatar. Approach: Using modified Delphi consensus technique, we asked clinical supervisors in Canada (n = 18) and in Qatar (n = 14) to categorize trainee performance as described in 16 student vignettes. The proportion of ratings for three levels of expectation (exceeds, meets, or below) was calculated and within-country group consensus achieved if the level of agreement reached 80%. Between-country group comparisons were measured using a chi-square statistic. We then conducted follow-up semi-structured interviews to gain further perspectives and clarify assessor rationale. Transcripts were analyzed using thematic content analysis. Results: The threshold for between-country group differences in assessor impressions was met for only two of the 16 student vignettes. Compared to Canadian clinical supervisors, relatively more preceptors in Qatar judged one described student as meets rather than exceeds expectations and one as meets rather than falls below expectations. 
Analysis of follow-up interviews exploring how culture may inform variations in assessor judgments identified themes associated with the profession, organization, learner, and supervisor performance theories but not their particular geographic context. Clinical supervisors in both countries were largely aligned in expectations of student knowledge, skills, and behaviors demonstrated in patient care and multidisciplinary team interactions. Conclusions: Our study demonstrated that variation in student assessment was more frequent among clinical supervisors within the same national context than any differences identified between the two countries. In these program settings, national sociocultural norms did not predict global assessor impressions or competency-specific judgments; instead, professional and organizational cultures were more likely to inform student characterizations of performance in workplace-based settings. Further study situated within the specific experiential learning contexts of cross-border health professional curricula is assuredly warranted.
Affiliation(s)
- Kerry Wilbur
- Faculty of Pharmaceutical Sciences, The University of British Columbia, Vancouver, British Columbia, Canada
- Erik W Driessen
- School of Health Professions Education (SHE), Department of Educational Research and Development, Maastricht University, Maastricht, The Netherlands
- Fedde Scheele
- School of Health Professions Education (SHE), Department of Educational Research and Development, Maastricht University, Maastricht, The Netherlands
- Athena Institute, VU School of Medical Sciences, Amsterdam UMC, Amsterdam, The Netherlands
- Pim W Teunissen
- School of Health Professions Education (SHE), Department of Educational Research and Development, Maastricht University, Maastricht, The Netherlands
- Department of Obstetrics & Gynecology, VU University Medical Center, Amsterdam, The Netherlands
4.
Wilby KJ, Govaerts MJB, Dolmans DHJM, Austin Z, van der Vleuten C. Reliability of narrative assessment data on communication skills in a summative OSCE. Patient Education and Counseling 2019;102:1164-1169. [PMID: 30711383] [DOI: 10.1016/j.pec.2019.01.018]
Abstract
OBJECTIVE To quantitatively estimate the reliability of narrative assessment data regarding student communication skills obtained from a summative OSCE and to compare reliability to that of communication scores obtained from direct observation. METHODS Narrative comments and communication scores (scale 1-5) were obtained for 14 graduating pharmacy students across 6 summative OSCE stations with 2 assessors per station who directly observed student performance. Two assessors who had not observed the OSCE reviewed narratives and independently scored communication skills according to the same 5-point scale. Generalizability theory was used to estimate reliability. Correlation was used to evaluate the relationship between scores from each assessment method. RESULTS A total of 168 narratives and communication scores were obtained. The G-coefficients were 0.571 for scores provided by assessors present during the OSCE and 0.612 for scores from assessors who provided scores based on narratives only. Correlation between the two sets of scores was 0.5. CONCLUSION Reliability of communication scores is not dependent on whether assessors directly observe student performance or assess written narratives, yet both conditions appear to measure communication skills somewhat differently. PRACTICE IMPLICATIONS Narratives may be useful for summative decision-making and help overcome the current limitations of using solely quantitative scores.
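The G-coefficients reported above come from generalizability theory. For the simplest case — a fully crossed student × assessor design with one score per cell — a relative G-coefficient can be estimated from ANOVA mean squares. The function and toy score matrix below are an illustrative sketch under that simplifying assumption, not the study's analysis code (the study's design included stations as an additional facet).

```python
def g_coefficient(scores):
    """Relative G-coefficient for a fully crossed person x rater design with
    one score per cell: G = var_p / (var_p + var_residual / n_raters)."""
    n_p, n_r = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n_p * n_r)
    person_means = [sum(row) / n_r for row in scores]
    rater_means = [sum(row[j] for row in scores) / n_p for j in range(n_r)]
    # Two-way ANOVA sums of squares (no replication).
    ss_p = n_r * sum((m - grand) ** 2 for m in person_means)
    ss_r = n_p * sum((m - grand) ** 2 for m in rater_means)
    ss_tot = sum((x - grand) ** 2 for row in scores for x in row)
    ms_p = ss_p / (n_p - 1)
    ms_res = (ss_tot - ss_p - ss_r) / ((n_p - 1) * (n_r - 1))
    # Estimated person variance component (truncated at zero).
    var_p = max((ms_p - ms_res) / n_r, 0.0)
    return var_p / (var_p + ms_res / n_r)

# Hypothetical communication scores: 4 students x 2 assessors (1-5 scale).
scores = [[2, 3], [3, 3], [4, 5], [5, 5]]
print(round(g_coefficient(scores), 3))  # → 0.941
```

Because the coefficient is *relative*, a constant leniency difference between raters (absorbed into `ss_r`) does not lower G; only rater-by-student inconsistency does.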
Affiliation(s)
- Kyle John Wilby
- College of Pharmacy, Qatar University, PO Box 2713, Doha, Qatar.
- Marjan J B Govaerts
- School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, Universiteitssingel 60, 6229 ER Maastricht, Netherlands
- Diana H J M Dolmans
- School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, Universiteitssingel 60, 6229 ER Maastricht, Netherlands
- Zubin Austin
- Leslie Dan Faculty of Pharmacy, University of Toronto, 144 College St., Toronto ON, M5S 3M2, Canada
- Cees van der Vleuten
- School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, Universiteitssingel 60, 6229 ER Maastricht, Netherlands
5.
Ali M, Pawluk SA, Rainkie DC, Wilby KJ. Pass-Fail Decisions for Borderline Performers After a Summative Objective Structured Clinical Examination. American Journal of Pharmaceutical Education 2019;83:6849. [PMID: 30962642] [PMCID: PMC6448521] [DOI: 10.5688/ajpe6849]
Abstract
Objective. To determine what expert assessors value when making pass-fail decisions regarding pharmacy students based on summative data from objective structured clinical examinations (OSCEs), and to determine the reliability of these judgments across multiple assessors. Methods. All assessment data from 10 exit-from-degree OSCE stations for seven borderline pharmacy students (identified by standard-setting methods) and one control were given to three of eight assessors for review. Assessors made an overall pass-fail decision based on their perception of graduate competency and were interviewed to determine their decision-making rationale. Intraclass correlation coefficients were used to calculate the reliability of assessor judgments. Results. Expert consensus was achieved for only three of the eight students; moreover, the assessors' decisions did not align with the standard-setting results, and the reliability of their decisions was poor. Assessors focused on the ability to make correct recommendations rather than on gathering information or providing follow-up advice. Global evaluations (including a student's communication skills) rarely influenced the assessors' decision-making. Conclusion. When making pass-fail decisions for borderline students, assessors focused on evaluating the same competencies but differed in the performance levels they expected for those competencies. Pass-fail decisions were based primarily on task-focused components rather than global components (eg, communication skills), even though global components were weighted equally for scoring purposes.
Affiliation(s)
- Mayar Ali
- College of Pharmacy, Qatar University, Doha, Qatar
- Kyle John Wilby
- College of Pharmacy, Qatar University, Doha, Qatar
- School of Pharmacy, University of Otago, New Zealand
6.
Exploring the Influence of Language on Assessment Given a Mismatch Between Language of Instruction and Language of Practice. Simul Healthc 2019;14:271-275. [PMID: 30730468] [DOI: 10.1097/sih.0000000000000358]
Abstract
STATEMENT A phenomenon is occurring in international settings where the language of program delivery and assessment does not match the primary language of practice. It is unknown whether determining competence in English disadvantages students for practice in non-English settings. We therefore conducted a pilot study of student performance and perceptions after completion of two Objective Structured Clinical Examinations (OSCEs), one conducted in English and one conducted in Arabic, within an Arabic-speaking Middle Eastern setting. Twenty-two students completed both OSCEs. Overall scores were similar, but student rankings differed. Students were more confident performing in Arabic, felt that the Arabic examination was more reflective of practice, and believed that use of Arabic OSCEs can promote better patient care. These findings support the notion that student success may be influenced by the language of assessment and that we may need to rethink how we determine assessment validity in these emerging international education settings.
7.
Sobh AH, Austin Z, Izham M I M, Diab MI, Wilby KJ. Application of a systematic approach to evaluating psychometric properties of a cumulative exit-from-degree objective structured clinical examination (OSCE). Currents in Pharmacy Teaching & Learning 2017;9:1091-1098. [PMID: 29233377] [DOI: 10.1016/j.cptl.2017.07.011]
Abstract
BACKGROUND AND PURPOSE Objective structured clinical examinations (OSCEs) are considered gold standard performance-based assessments yet comprehensive evaluation data is currently lacking. The objective of this study was to critically evaluate the psychometric properties of a cumulative OSCE for graduating pharmacy students in Qatar for which policies and procedures were adapted from a Canadian context. EDUCATIONAL ACTIVITY AND SETTING A 10-station OSCE was conducted for graduating students in Qatar. Evaluation included assessment of pass rates, predictive validity, concurrent validity, internal validity, content validity, interrater reliability, and internal consistency. FINDINGS Twenty-six students completed the OSCE. Three stations achieved pass rates < 80%. Scores from professional skills and case-based learning courses, formative OSCEs, and cumulative grade point averages correlated with OSCE scores (p < 0.05). Average correlation between assessors' analytical and global scoring was moderate (r = 0.52). Average interrater reliability was excellent for analytical scoring (ICC = 0.88) and moderate for global scoring (ICC = 0.61). Excellent internal consistency was demonstrated for overall performance (α = 0.927). Students generally agreed stations represented real practice scenarios (range per station, 30-100%). DISCUSSION AND SUMMARY The evaluation model identified strengths and weaknesses in assessment and curricular considerations. The OSCE demonstrated acceptable validity and reliability as an adapted assessment.
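The internal consistency figure reported above (α = 0.927) is Cronbach's alpha, which compares the summed variance of individual station scores with the variance of students' total scores. The sketch below is illustrative only; the function name and the 4-student, 3-station score matrix are invented for demonstration, not taken from the study.

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha: (k/(k-1)) * (1 - sum of item variances / total variance).
    `item_scores` is a list of per-item score lists, one list per station."""
    k = len(item_scores)
    students = list(zip(*item_scores))        # one tuple of scores per student
    totals = [sum(row) for row in students]   # each student's total score
    item_var = sum(pvariance(item) for item in item_scores)
    return (k / (k - 1)) * (1 - item_var / pvariance(totals))

# Hypothetical station scores for 4 students across 3 OSCE stations.
station_scores = [
    [2, 4, 6, 8],    # station 1
    [4, 6, 8, 10],   # station 2
    [6, 8, 10, 12],  # station 3
]
print(round(cronbach_alpha(station_scores), 3))  # → 1.0
```

Here the three stations rank students identically, so alpha is 1.0; stations that rank students inconsistently shrink the total-score variance relative to the item variances and pull alpha down.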
Affiliation(s)
- Zubin Austin
- Leslie Dan Faculty of Pharmacy, University of Toronto, Toronto, Ontario, Canada
- Mohammad I Diab
- College of Pharmacy, Qatar University, PO Box 2713, Doha, Qatar
- Kyle John Wilby
- College of Pharmacy, Qatar University, PO Box 2713, Doha, Qatar