1
Tseng LP, Hou TH, Huang LP, Ou YK. Effectiveness of applying clinical simulation scenarios and integrating information technology in medical-surgical nursing and critical nursing courses. BMC Nurs 2021;20:229. PMID: 34781931; PMCID: PMC8591873; DOI: 10.1186/s12912-021-00744-7.
Abstract
BACKGROUND To determine the impact of combining clinical simulation scenario training with Information Technology Integrated Instruction (ITII) on the teaching of nursing skills. METHODS Participants were 120 fourth-year students in a nursing program enrolled in medical and surgical nursing courses; 61 received innovative instruction (experimental group) and 59 received conventional instruction (control group). The ADDIE model, a systematic method of course development comprising analysis, design, development, implementation, and evaluation, was used to build simulation teaching and clinical scenarios and to create and modify objective structured clinical examination (OSCE) scenario checklists for acute myocardial infarction (AMI) care, basic life support and operation of an automated external defibrillator (BLS), and subdural hemorrhage (SDH) care. The modified OSCE checklists were assessed for reliability, consistency, and validity. The innovative training included flipped classrooms, clinical simulation scenarios, ITII, and blended learning formats. RESULTS The reliability and validity of the OSCE checklists developed in this study were acceptable, comparable to or higher than those of checklists in past studies, and suitable for use as an OSCE performance tool. Students receiving innovative instruction obtained significantly better OSCE performance, lab scores, and improvements over the previous year's grades. Significant differences were found in situational awareness (SA). No strong correlations were found between OSCE scores and clinical internship scores, and no significant differences were found between the groups in overall clinical internship performance. CONCLUSIONS Innovative instruction outperformed conventional methods in summative evaluation of knowledge components, OSCE formative evaluation, and clinical nursing internship scores, and improved situational awareness in nursing students.
Affiliation(s)
- Li-Ping Tseng
- Department of Management Center, Sisters of Our Lady of China Catholic Medical Foundation, St. Martin De Porres Hospital, Chiayi City, 60069, Taiwan
- Department of Industrial Engineering and Management, National Yunlin University of Science and Technology, Yunlin, 640301, Taiwan
- Tung-Hsu Hou
- Department of Industrial Engineering and Management, National Yunlin University of Science and Technology, Yunlin, 640301, Taiwan
- Li-Ping Huang
- Department of Nursing, Chung-Jen Junior College of Nursing, Health Sciences and Management, Chiayi, 60077, Taiwan
- Yang-Kun Ou
- Department of Creative Product Design, Southern Taiwan University of Science and Technology, No. 1, Nan-Tai Street, Yungkang Dist., Tainan City, 71005, Taiwan
2
Kolivand M, Esfandyari M, Heydarpour S. Examining validity and reliability of objective structured clinical examination for evaluation of clinical skills of midwifery undergraduate students: a descriptive study. BMC Med Educ 2020;20:96. PMID: 32234047; PMCID: PMC7110752; DOI: 10.1186/s12909-020-02017-4.
Abstract
BACKGROUND Clinical evaluation is one of the main pillars of medical education. The objective structured clinical examination (OSCE) is one of the most commonly adopted practical tools for evaluating the clinical and practical skills of medical students. The purpose of this study was to determine the validity and reliability of an OSCE for evaluating the clinical skills of midwifery undergraduate students. METHODS Seven clinical skills were evaluated in this descriptive correlational study using a performance checklist. Census sampling was used. Thirty-two midwifery students performed the skills at seven stations, each monitored by an observer using an evaluation checklist. Criterion validity was assessed by determining the correlation between clinical and theoretical course scores and the OSCE score. Data were analyzed in SPSS (v.20) using logistic regression. RESULTS The OSCE score was significantly correlated with the mean score of the clinical course "Normal and Abnormal Delivery I" (0.399, p = 0.024) and the mean score of the clinical course "Gynaecology" (0.419, p = 0.017). There was no significant correlation between OSCE scores and the mean score of theoretical courses (0.23, p = 0.200). Correlations between the total score and mean station scores showed that, of the seven stations, stations three (communication and collecting medical history) and four (childbirth) did not correlate significantly. CONCLUSION Although the OSCE appeared to be an effective and efficient way to evaluate students' clinical competencies and practical skills, the tool could not evaluate all aspects.
Affiliation(s)
- Mitra Kolivand
- Department of Reproductive Health, Faculty of Nursing and Midwifery, Kermanshah University of Medical Sciences, Kermanshah, Iran
- Marzie Esfandyari
- Department of Midwifery, Faculty of Nursing and Midwifery, Kermanshah University of Medical Sciences, Kermanshah, Iran
- Sousan Heydarpour
- Department of Midwifery, Faculty of Nursing and Midwifery, Kermanshah University of Medical Sciences, Kermanshah, Iran
3
Müller S, Koch I, Settmacher U, Dahmen U. How the introduction of OSCEs has affected the time students spend studying: results of a nationwide study. BMC Med Educ 2019;19:146. PMID: 31092236; PMCID: PMC6521539; DOI: 10.1186/s12909-019-1570-6.
Abstract
BACKGROUND Medical schools globally now use objective structured clinical examinations (OSCEs) to assess students' clinical performance. In Germany, almost all of the 36 medical schools have incorporated at least one summative OSCE into their clinical curriculum. This nationwide study aimed to examine whether the introduction of OSCEs shifted studying time. The authors explored which resources were important when studying for OSCEs, how much time students spent studying, and how they performed, each compared with traditionally used multiple-choice question (MCQ) tests. METHODS The authors constructed a questionnaire comprising two identical sections, one for each assessment method. Each section contained a list of 12 study resources with preferences rated on a 5-point scale, plus two open-ended questions about average studying time and average grades achieved. In spring 2015, medical schools in Germany were asked to administer the web-based questionnaire to their students in years 3-6. Responses to the open-ended questions were compared between the OSCE and MCQ tests using a paired t-test. RESULTS The sample included 1131 students from 32 German medical schools. Physical examination courses were most important in preparation for OSCEs, followed by class notes/logs and the skills lab. Other activities in clinical settings (e.g., medical clerkships) and collaborative strategies ranked next. Conversely, resources for gathering knowledge (e.g., lectures or textbooks) were of minor importance when studying for OSCEs. Reported studying time was lower for OSCEs than for MCQ tests; the reported average grade, however, was better on OSCEs. CONCLUSIONS The findings suggest that the introduction of OSCEs shifted studying time. When preparing for OSCEs, students focus on the acquisition of clinical skills and need less studying time to achieve the expected level of competence/performance than for MCQ tests.
Affiliation(s)
- Stefan Müller
- Department of General, Visceral and Vascular Surgery, Universitätsklinikum Jena, Am Klinikum 1, 07747 Jena, Germany
- Ines Koch
- Department of Gynaecology and Reproductive Medicine, Universitätsklinikum Jena, Am Klinikum 1, 07747 Jena, Germany
- Utz Settmacher
- Department of General, Visceral and Vascular Surgery, Universitätsklinikum Jena, Am Klinikum 1, 07747 Jena, Germany
- Uta Dahmen
- Department of General, Visceral and Vascular Surgery, Experimental Transplantation Surgery, Universitätsklinikum Jena, Drackendorfer Str. 1, 07747 Jena, Germany
4
Duijn CCMA, Dijk EJV, Mandoki M, Bok HGJ, Cate OTJT. Assessment Tools for Feedback and Entrustment Decisions in the Clinical Workplace: A Systematic Review. J Vet Med Educ 2019;46:340-352. PMID: 31460844; DOI: 10.3138/jvme.0917-123r.
Abstract
BACKGROUND: Entrustable professional activities (EPAs) combine feedback and evaluation with permission to act under a specified level of supervision and the possibility of scheduling learners for clinical service. This literature review aims to identify workplace-based assessment tools that indicate progression toward unsupervised practice and are suitable for entrustment decisions and feedback to learners. METHODS: A systematic search was performed in the PubMed, Embase, ERIC, and PsycINFO databases. Articles were selected on title/abstract and full text using predetermined inclusion and exclusion criteria. Information on workplace-based assessment tools was extracted using data coding sheets. The methodological quality of studies was assessed using the Medical Education Research Study Quality Instrument (MERSQI). RESULTS: The search yielded 6,371 articles (180 were evaluated in full text). In total, 80 articles were included, identifying 67 assessment tools. Only a few studies explicitly mentioned assessment tools used as a resource for entrustment decisions. Validity evidence was frequently reported, and the mean MERSQI score was 10.0. CONCLUSIONS: Many workplace-based assessment tools were identified that can potentially support learners with feedback on their development and support supervisors in providing it. As expected, only a few articles referred to entrustment decisions. Nevertheless, the existing tools or their principles could be used for decisions about entrustment, supervision level, or autonomy.
6
Sabzi Z, Modanloo M, Yazdi K, Kolagari S, Aryaie M. The Validity and Reliability of the Objective Structured Clinical Examination (OSCE) in Pre-internship Nursing Students. J Res Dev Nurs Midwifery 2018. DOI: 10.29252/jgbfnm.15.1.1.
7
Trejo-Mejía JA, Sánchez-Mendiola M, Méndez-Ramírez I, Martínez-González A. Reliability analysis of the objective structured clinical examination using generalizability theory. Med Educ Online 2016;21:31650. PMID: 27543188; PMCID: PMC4991996; DOI: 10.3402/meo.v21.31650.
Abstract
BACKGROUND The objective structured clinical examination (OSCE) is a widely used method for assessing clinical competence in health sciences education, with published evidence of validity and reliability. There are no published studies of OSCE reliability measurement with generalizability theory (G-theory) in Latin America. The aims of this study were to assess the reliability of an OSCE in medical students using G-theory and to explore its usefulness for quality improvement. METHODS An observational cross-sectional study was conducted at the National Autonomous University of Mexico (UNAM) Faculty of Medicine in Mexico City. A total of 278 fifth-year medical students were assessed with an 18-station OSCE in a summative end-of-career final examination. There were four exam versions. G-theory with a crossed random-effects design was used to identify the main sources of variance. Examiners, standardized patients, and cases were considered a single facet of analysis. RESULTS The OSCE had a generalizability coefficient of 0.93. The major components of variance were stations, students, and residual error; the sites and test versions had minimal variance. CONCLUSIONS Our study achieved a G coefficient similar to those found in other reports, which is acceptable for summative tests. G-theory allows estimation of the magnitude of multiple sources of error and helps decision makers determine the number of stations, test versions, and examiners needed to obtain reliable measurements.
8
Pedersoli CE, Pedersoli TAM, Faro ACME, Dalri MCB. [Teaching laryngeal mask airway management: a randomized controlled study]. Rev Bras Enferm 2016;69:368-74. DOI: 10.1590/0034-7167.2016690221i.
Abstract
Objective: to teach laryngeal mask airway management to nursing students through a dialogued lecture with laboratory practice, or exclusively through a simulated class. Method: randomized controlled clinical trial. Population: eighth-semester undergraduate students. Sample: 17 students randomized into an intervention group (IG: simulated class) or a control group (CG: dialogued lecture plus laboratory practice). Instruments were developed and validated: a written test, a simulation scenario, and an objective structured clinical evaluation checklist. Data were collected during a workshop. The written test and the structured clinical evaluation were applied in a filmed simulation scenario assessed by three experts. Results: mean age 24.4±4.2 years. Correct answers, CG: pre-test 66±10%, post-test 84±8%; IG: pre-test 65±5%, post-test 86±11%. Scenario: CG 78±5.2%; IG 84±8.9%. Conclusion: both strategies supported the acquisition of the knowledge, skills, and decision-making needed to meet the scenario objectives. Knowledge of laryngeal mask airway management was incorporated, as evidenced by increased scores on the written test and in the scenario.
9
Rajiah K, Veettil SK, Kumar S. Standard setting in OSCEs: a borderline approach. Clin Teach 2015;11:551-6. PMID: 25417986; DOI: 10.1111/tct.12213.
Abstract
BACKGROUND The evaluation of clinical skills and competencies is a high-stakes process carrying significant consequences for the candidate. Hence, a robust method of justifying the pass score is mandatory to maintain a valid and reliable objective structured clinical examination (OSCE). The aim was to trial the borderline approach using a two-domain global rating scale for standard setting in the OSCE. METHODS For each domain, a six-point scale (from 5 to 0) was used to reflect high and low divisions within the 'pass', 'borderline' and 'fail' categories. Scores on the two individual global scales were summed to create a 'summed global rating'. Similarly, task-based checklist scores for individual stations were summed to obtain a total score. RESULTS The Pearson correlations between task-based checklist scores and the two-domain global rating scale were moderate and significant. The highest R² coefficient, 0.479, was obtained for station 7; the lowest, 0.241, for station 14. DISCUSSION There was a significant positive correlation between the two scales; however, the R² value was satisfactory only for station 7. The pass mark for the OSCE according to the borderline method was 64 per cent, higher than the arbitrarily set pass mark of 50 per cent. CONCLUSIONS This study confirms that the two-domain global rating scale is appropriate for assessing students' abilities within the framework of an OSCE. The strong relationship between the two-domain global rating scale and task-based checklists provides evidence that the scale can be used to genuinely assess students' proficiencies.
Affiliation(s)
- Kingston Rajiah
- Department of Pharmacy Practice, International Medical University, Kuala Lumpur, Malaysia
10
Brannick MT, Erol-Korkmaz HT, Prewett M. A systematic review of the reliability of objective structured clinical examination scores. Med Educ 2011;45:1181-9. PMID: 21988659; DOI: 10.1111/j.1365-2923.2011.04075.x.
Abstract
CONTEXT The objective structured clinical examination (OSCE) comprises a series of simulations used to assess the skill of medical practitioners in the diagnosis and treatment of patients. It is often used in high-stakes examinations, so it is important to assess its reliability and validity. METHODS The published literature was searched (PsycINFO, PubMed) for OSCE reliability estimates (coefficient alpha and generalisability coefficients) computed either across stations or across items within stations. Coders independently recorded information about each study. A meta-analysis of the available literature was computed and sources of systematic variance in estimates were examined. RESULTS A total of 188 alpha values from 39 studies were coded. The overall (summary) alpha across stations was 0.66 (95% confidence interval [CI] 0.62-0.70); the overall alpha within stations across items was 0.78 (95% CI 0.73-0.82). Better-than-average reliability was associated with a greater number of stations and more examiners per station. Interpersonal skills were evaluated less reliably across stations and more reliably within stations than clinical skills. CONCLUSIONS Overall scores on the OSCE are often not very reliable. It is more difficult to reliably assess communication skills than clinical skills when considering both as general traits that should apply across situations. It is generally helpful to use two examiners and large numbers of stations, but some OSCEs appear more reliable than others for reasons that are not yet fully understood.
Affiliation(s)
- Michael T Brannick
- Department of Psychology, College of Arts and Sciences, University of South Florida, Tampa, Florida 33620-7200, USA
11
Wilkinson TJ, Tweed MJ, Egan TG, Ali AN, McKenzie JM, Moore M, Rudland JR. Joining the dots: conditional pass and programmatic assessment enhances recognition of problems with professionalism and factors hampering student progress. BMC Med Educ 2011;11:29. PMID: 21649925; PMCID: PMC3121726; DOI: 10.1186/1472-6920-11-29.
Abstract
BACKGROUND Programmatic assessment that looks across a whole year may support better decisions than isolated assessments alone. The aim of this study is to describe and evaluate a programmatic system for handling student assessment results that is aligned not only with learning and remediation but also with defensibility. The key components are standards-based assessments, use of "Conditional Pass", and regular progress meetings. METHODS The new assessment system is described. The evaluation is based on years 4-6 of a 6-year medical course. The types of concerns staff had about students were clustered into themes alongside any interventions and outcomes for the students concerned. The likelihood of passing the year according to type of problem was compared before and after the phasing in of the new assessment system. RESULTS The new system was phased in over four years. In the fourth year of implementation, 701 students had 3539 assessment results, of which 4.1% were Conditional Pass. More in-depth analysis of 1516 results available from 447 students revealed that the odds ratio (95% confidence interval) for failure was highest for students with problems identified in more than one part of the course (18.8 (7.7-46.2), p < 0.0001) or with problems with professionalism (17.2 (9.1-33.3), p < 0.0001). The odds ratio for failure was lowest for problems with assignments (0.7 (0.1-5.2), NS). Compared with the previous system, more students failed the year on the basis of performance during the year (20, or 4.5%, compared with four, or 1.1%, under the previous system; p < 0.01). CONCLUSIONS The new system detects more students in difficulty and has resulted in less "failure to fail". The requirement to state the conditions required to pass has contributed to a paper trail that should improve defensibility. Most importantly, it has helped to detect and act on some of the more difficult areas to assess, such as professionalism.
Affiliation(s)
- Tim J Wilkinson
- University of Otago, Christchurch, C/- The Princess Margaret Hospital, PO Box 800, Christchurch, New Zealand
- Mike J Tweed
- Medical Education Unit, University of Otago, Wellington, PO Box 7343, Wellington 6242, New Zealand
- Tony G Egan
- Faculty Education Unit, Faculty of Medicine, University of Otago, PO Box 56, Dunedin 9054, New Zealand
- Anthony N Ali
- Medical Education Unit, University of Otago, Christchurch, PO Box 4345, Christchurch 8140, New Zealand
- Jan M McKenzie
- Medical Education Unit, University of Otago, Christchurch, PO Box 4345, Christchurch 8140, New Zealand
- MaryLeigh Moore
- Medical Education Unit, University of Otago, Christchurch, PO Box 4345, Christchurch 8140, New Zealand
- Joy R Rudland
- Faculty Education Unit, Faculty of Medicine, University of Otago, PO Box 56, Dunedin 9054, New Zealand
12
de Sousa Eskenazi E, de Arruda Martins M, Ferreira M. Oral Health Promotion Through an Online Training Program for Medical Students. J Dent Educ 2011. DOI: 10.1002/j.0022-0337.2011.75.5.tb05093.x.
Affiliation(s)
- Mario Ferreira
- Center for Health Promotion, Department of Medicine, Faculty of Medicine, University of São Paulo
13
Walsh M, Bailey PH, Mossey S, Koren I. The novice objective structured clinical evaluation tool: psychometric testing. J Adv Nurs 2010;66:2807-18. DOI: 10.1111/j.1365-2648.2010.05421.x.
14
Mitchell ML, Henderson A, Groves M, Dalton M, Nulty D. The objective structured clinical examination (OSCE): optimising its value in the undergraduate nursing curriculum. Nurse Educ Today 2009;29:398-404. PMID: 19056152; DOI: 10.1016/j.nedt.2008.10.007.
Abstract
This article explores the use of the objective structured clinical examination (OSCE) in undergraduate nursing education. The advantages and limitations of this assessment approach are discussed and various applications of the OSCE are described. Attention is given to the complexities of evaluating some psychosocial competency components. These issues are considered in an endeavour to delineate the competency components, or skill sets, that best lend themselves to assessment by the OSCE. We conclude that OSCEs can be used most effectively in undergraduate nursing curricula to assess safe practice in terms of the performance of psychomotor skills, as well as the declarative and schematic knowledge associated with their application. OSCEs should be integrated within a curriculum in conjunction with other relevant student evaluation methods.
Affiliation(s)
- Marion L Mitchell
- School of Nursing & Midwifery, Logan campus, Griffith University, Meadowbrook, Queensland 4131, Australia
15
Wilkinson TJ, Smith JD, Margolis SA, Sen Gupta T, Prideaux DJ. Structured assessment using multiple patient scenarios by videoconference in rural settings. Med Educ 2008;42:480-487. PMID: 18363659; DOI: 10.1111/j.1365-2923.2008.03011.x.
Abstract
CONTEXT The assessment blueprint of the Australian College of Rural and Remote Medicine postgraduate curriculum highlighted a need to assess clinical reasoning. We describe the development, reliability, feasibility, validity and educational impact of StAMPS (structured assessment using multiple patient scenarios), an 8-station assessment tool conducted by videoconference. METHODS In StAMPS, each candidate is examined at each of 8 stations on issues relating to patient diagnosis or management. Each candidate remains at a rural site but is examined in turn by 8 examiners located at a central site. Examiners were rotated through the candidates either by walking between videoconference rooms or by connecting and disconnecting the links. Reliability was evaluated using generalisability theory; validity and educational impact were evaluated with qualitative interviews. RESULTS Fourteen candidates were assessed on 82 scenarios with a reliability of G = 0.76. There was a reasonable correlation with level of candidate expertise (rho = 0.57). The videoconference links were acceptable to candidates and examiners, but the walking rotation system was more reliable. Qualitative comments confirmed the relevance and acceptability of the assessment tool and suggest it is likely to have a desirable educational impact. CONCLUSIONS StAMPS reflects not only the content of rural and remote practice but also the process of that work, in that it is delivered from a distance and assesses resourcefulness and flexibility in thinking. The reliability and feasibility of this type of assessment have implications for anyone running a distance-based course, but the assessment could also be used face to face.
Affiliation(s)
- Tim J Wilkinson
- Department of Medicine, Christchurch School of Medicine and Health Sciences, University of Otago, Christchurch, New Zealand
16
Kronfly Rubiano E, Ricarte Díez JI, Juncosa Font S, Martínez Carretero JM. [Evaluation of the clinical competence of Catalonian medical schools, 1994-2006: evolution of examination formats up to the objective structured clinical examination (ECOE)]. Med Clin (Barc) 2008;129:777-84. PMID: 18093480; DOI: 10.1157/13113768.
17
Larsen T, Jeppe-Jensen D. The introduction and perception of an OSCE with an element of self- and peer-assessment. Eur J Dent Educ 2008;12:2-7. PMID: 18257758; DOI: 10.1111/j.1600-0579.2007.00449.x.
Abstract
The purpose of the present study was to encourage reflection in dental students by conducting an educational objective structured clinical examination (OSCE) with an element of self- and peer-assessment. An interdisciplinary OSCE comprising cariology, endodontics and microbiology was set up for all third-year students. A blueprint ensured representation of the skills to be tested: knowledge, interdisciplinary knowledge, communication, clinical reasoning and practical procedures. At each station, positive and constructive feedback was given to the students based on predefined criteria, and the students received written marks after completion of the OSCE. At one station, the feedback and marks were replaced by self- and peer-assessment performed by the students in groups after the OSCE. Afterwards, the 68 students and 8 teachers participating in the OSCE answered a questionnaire on their opinion and perception of the examination. The results showed good correlation between the marks given and the students' perception of task difficulty. Generally, there were no systematic variations in the marks given during the week or by individual assessors at the same station (with one exception), and the marks agreed with those of the ordinary clinical assessment. The marks given during self- and peer-assessment differed widely, indicating a need for training in this aspect. The questionnaires revealed a very positive perception of the OSCE among both students and teachers: the majority found the examination relevant and of educational benefit, capable of improving student learning and useful for assessment purposes. The students also found the self- and peer-assessment useful. In conclusion, this interdisciplinary OSCE stressing constructive feedback was perceived very positively by students and teachers and recognised for its beneficial possibilities in education and assessment.
Affiliation(s)
- T Larsen
- School of Dentistry, University of Copenhagen, Copenhagen, Denmark
18
Rudland J, Wilkinson T, Smith-Han K, Thompson-Fawcett M. "You can do it late at night or in the morning. You can do it at home, I did it with my flatmate." The educational impact of an OSCE. Med Teach 2008;30:206-11. PMID: 18464148; DOI: 10.1080/01421590701851312.
Abstract
BACKGROUND The use of an objective structured clinical examination (OSCE) has been a powerful influence on doctor training, but assessments do not always drive study behaviour in predictable ways. AIMS To investigate the impact an OSCE has on study behaviour by exploring how fifth-year medical students identify what to learn for a summative OSCE and the role of the clinical environment in their preparation. METHODS A semi-structured questionnaire survey asked about the strategies students used to prepare for the OSCE. Focus group interviews explored successful methods of preparation, and themes were identified and classified. RESULTS The questionnaire response rate was 84%. Topics were usually identified from the list of examinable problems, past OSCE papers, and a booklet prepared by a previous student containing a series of OSCE checklists. Students preparing for the OSCE predominantly practised on each other and rehearsed routines; strategic and efficient study habits were favoured over conscious use of the clinical environment. CONCLUSION The expectation that an OSCE drives learning into the clinical workplace was not supported by this study. This suggests that the role of clinical experience in helping students prepare for the exam may be more subliminal, or that an OSCE is more a test of psychomotor skills than a marker of clinical experience. An unexpected benefit may be to drive more collaborative learning.
19
Auewarakul C, Downing SM, Jaturatamrong U, Praditsuwan R. Sources of validity evidence for an internal medicine student evaluation system: an evaluative study of assessment methods. Medical Education 2005; 39:276-283. [PMID: 15733163] [DOI: 10.1111/j.1365-2929.2005.02090.x]
Abstract
BACKGROUND Medical students' final clinical grades in internal medicine are based on the results of multiple assessments that reflect not only the students' knowledge, but also their skills and attitudes. OBJECTIVE To examine the sources of validity evidence for internal medicine final assessment results comprising scores from 3 evaluations and 2 examinations. METHODS The final assessment scores of 8 cohorts of Year 4 medical students in a 6-year undergraduate programme were analysed. The final assessment scores consisted of scores in ward evaluations (WEs), preceptor evaluations (PREs), outpatient clinic evaluations (OPCs), general knowledge and problem-solving multiple-choice questions (MCQs), and objective structured clinical examinations (OSCEs). Sources of validity evidence examined were content, response process, internal structure, relationship to other variables, and consequences. RESULTS The median generalisability coefficient of the OSCEs was 0.62. The internal consistency reliability of the MCQs was 0.84. Scores for OSCEs correlated well with WE, PRE and MCQ scores with observed (disattenuated) correlation of 0.36 (0.77), 0.33 (0.71) and 0.48 (0.69), respectively. Scores for WEs and PREs correlated better with OSCE than MCQ scores. Sources of validity evidence including content, response process, internal structure and relationship to other variables were shown for most components. CONCLUSION There is sufficient validity evidence to support the utilisation of various types of assessment scores for final clinical grades at the end of an internal medicine rotation. Validity evidence should be examined for any final student evaluation system in order to establish the meaningfulness of the student assessment scores.
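The abstract above reports correlations both as observed and as disattenuated, i.e. corrected for the unreliability of the two measures (Spearman's correction for attenuation). As a minimal illustrative sketch (the function name and the numeric inputs below are examples, not values traceable to the paper's full data):

```python
import math

def disattenuated_correlation(r_obs, rel_x, rel_y):
    """Spearman's correction for attenuation: estimate the correlation
    between two true scores from an observed correlation (r_obs) and the
    reliabilities of the two measures (rel_x, rel_y)."""
    return r_obs / math.sqrt(rel_x * rel_y)

# With perfectly reliable measures, no correction is applied.
print(disattenuated_correlation(0.5, 1.0, 1.0))   # 0.5
# Lower reliabilities inflate the estimated true-score correlation.
print(disattenuated_correlation(0.48, 0.62, 0.84))
```

This is why the disattenuated values in the abstract (0.77, 0.71, 0.69) exceed the observed ones (0.36, 0.33, 0.48): dividing by a square root of reliabilities below 1 always increases the estimate.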
Affiliation(s)
- Chirayu Auewarakul
- Department of Medicine, Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok, Thailand.
20
Wilkinson TJ, Frampton CM. Comprehensive undergraduate medical assessments improve prediction of clinical performance. Medical Education 2004; 38:1111-6. [PMID: 15461657] [DOI: 10.1111/j.1365-2929.2004.01962.x]
Abstract
OBJECTIVES This study aimed to compare an essay-style undergraduate medical assessment with modified essay, multiple-choice question (MCQ) and objective structured clinical examination (OSCE) undergraduate medical assessments in predicting students' clinical performance (predictive validity), and to determine the relative contributions of the written (modified essay and MCQ) assessment and OSCE to predictive validity. DESIGN Before and after cohort study. SETTING One medical school running a 6-year undergraduate course. PARTICIPANTS Study participants included 137 Year 5 medical students followed into their trainee intern year. MAIN OUTCOME MEASURES Aggregated global ratings by senior doctors, junior doctors and nurses as well as comprehensive structured assessments of performance in the trainee intern year. RESULTS Students' scores in the new examinations predicted performance significantly better than scores in the old examinations, with correlation coefficients increasing from 0.05-0.44 to 0.41-0.81. The OSCE was a stronger predictor of subsequent performance than the written assessments but combining assessments had the strongest predictive validity. CONCLUSION Using more comprehensive, more reliable and more authentic undergraduate assessment methods substantially increases predictive validity.
Affiliation(s)
- Tim J Wilkinson
- Christchurch School of Medicine and Health Sciences, University of Otago, Christchurch, New Zealand.
21
Wilkinson TJ, Frampton CM, Thompson-Fawcett M, Egan T. Objectivity in objective structured clinical examinations: checklists are no substitute for examiner commitment. Academic Medicine 2003; 78:219-223. [PMID: 12584104] [DOI: 10.1097/00001888-200302000-00021]
Abstract
PURPOSE This study explored factors that contribute to objectivity in objective structured clinical examinations (OSCEs). The authors quantified the effect of examiners on interrater reliability and separated this effect from that of station construction, determined the effect of objectification on station reliability and validity, and explored examiner factors that may contribute to interrater reliability. METHOD Data came from examiners' mark sheets from four annual OSCEs (1997-2000). The OSCEs were conducted identically and simultaneously at three sites, within the University of Otago medical school in New Zealand, with two examiners at each station. The contribution to interrater correlations of station construction and mark sheet compared with examiners' contribution was partitioned out using a random-effects analysis of variance. For one OSCE, a multiple linear regression was used to determine the independent contributions to interrater reliability of the number of checklist items per mark sheet, examiner experience, and examiner involvement in station construction. RESULTS Station construction and mark sheets contributed 10.1% and examiners contributed 89.9% to the variation in interrater reliability. Following multivariate analysis, the number of items per mark sheet was negatively associated, and examiner involvement in station construction was positively associated, with interrater reliability. Examiner experience in examining or in clinical medicine was not associated with interrater reliability. There was a negative, but nonsignificant, correlation between number of items per mark sheet and that station's correlation with the aggregate OSCE mark. CONCLUSIONS The contribution of objective mark sheets to objectivity is relatively minor compared with examiners' contribution. Increasing the number of checklist items per mark sheet decreased both reliability and validity. Achieving objectivity requires diligent examiners who are involved in the whole assessment.
Affiliation(s)
- Tim J Wilkinson
- Christchurch School of Medicine and Health Sciences, University of Otago, Christchurch, New Zealand.
22
Wilkinson TJ, Fontaine S. Patients' global ratings of student competence. Unreliable contamination or gold standard? Medical Education 2002; 36:1117-1121. [PMID: 12472737] [DOI: 10.1046/j.1365-2923.2002.01379.x]
Abstract
PURPOSE To determine whether global ratings by patients are valid and reliable enough to be used within a major summative assessment of medical students' clinical skills. METHOD In 11 stations of an 18-station objective structured clinical examination (OSCE), where a student was asked to educate or take a history from a patient, the patient was asked, 'How likely would you be to come back and discuss your concerns with this student again?' These 11 opinions were aggregated into a single patient opinion mark and correlated with other measures of student competence. The patients were not experienced in student assessment. RESULTS A total of 204 students undertook the OSCE. Reliability of patient opinion across all 11 stations revealed a Cronbach alpha of 0.65. The correlation coefficient between the patient ratings and the total OSCE score was good (r = 0.74; P < 0.001) and was better than the correlation between any single OSCE station and the total OSCE score. It was also better than the correlation between the aggregated patient opinion and tests of student knowledge (r = 0.47). CONCLUSION It is known that patients can reliably complete checklists of clinical skills and that doctors can reliably provide global ratings of students. We have now shown that, by controlling the context, asking the right question and aggregating several opinions, untrained patients can provide a reliable and valid global opinion that contributes to the assessment of a student's clinical skills.
Affiliation(s)
- Tim J Wilkinson
- Christchurch School of Medicine and Health Sciences, University of Otago, New Zealand.
23
Appel J, Friedman E, Fazio S, Kimmel J, Whelan A. Educational assessment guidelines: a Clerkship Directors in Internal Medicine commentary. Am J Med 2002; 113:172-9. [PMID: 12133764] [DOI: 10.1016/s0002-9343(02)01211-1]
Affiliation(s)
- Joel Appel
- Department of Internal Medicine, Sinai-Grace Hospital, Wayne State University School of Medicine, Michigan, USA
24
Hamann C, Volkan K, Fishman MB, Silvestri RC, Simon SR, Fletcher SW. How well do second-year students learn physical diagnosis? Observational study of an Objective Structured Clinical Examination (OSCE). BMC Medical Education 2002; 2:1. [PMID: 11888484] [PMCID: PMC80153] [DOI: 10.1186/1472-6920-2-1]
Abstract
BACKGROUND Little is known about using the Objective Structured Clinical Examination (OSCE) in physical diagnosis courses. The purpose of this study was to describe student performance on an OSCE in a physical diagnosis course. METHODS Cross-sectional study at Harvard Medical School, 1997-1999, for 489 second-year students. RESULTS Average total OSCE score was 57% (range 39-75%). Among clinical skills, students scored highest on patient interaction (72%), followed by examination technique (65%), abnormality identification (62%), history-taking (60%), patient presentation (60%), physical examination knowledge (47%), and differential diagnosis (40%) (p <.0001). Among 16 OSCE stations, scores ranged from 70% for arthritis to 29% for calf pain (p <.0001). Teaching sites accounted for larger adjusted differences in station scores, up to 28%, than in skill scores (9%) (p <.0001). CONCLUSIONS Students scored higher on interpersonal and technical skills than on interpretive or integrative skills. Station scores identified specific content that needs improved teaching.
Affiliation(s)
- Claus Hamann
- Geriatric Medicine Unit, Massachusetts General Hospital, 100 Charles River Plaza, Fifth Floor, Boston MA, USA
- Kevin Volkan
- Program in Psychology, California State University Channel Islands, Professional Building, University Drive, Camarillo, CA 93012, USA
- Mary B Fishman
- Division of General Internal Medicine, Georgetown University Medical Center, 3800 Reservoir Rd. NW, Washington DC 20007, USA
- Ronald C Silvestri
- Department of Medicine, Beth Israel Deaconess Medical Center, 330 Brookline Ave, Boston, MA 02215, USA
- Steven R Simon
- Department of Ambulatory Care and Prevention, Harvard Medical School and Harvard Pilgrim Health Care, 133 Brookline Avenue, Sixth Floor, Boston, MA, 02215, USA
- Suzanne W Fletcher
- Department of Ambulatory Care and Prevention, Harvard Medical School and Harvard Pilgrim Health Care, 133 Brookline Avenue, Sixth Floor, Boston, MA, 02215, USA
25
Wilkinson TJ, Newble DI, Frampton CM. Standard setting in an objective structured clinical examination: use of global ratings of borderline performance to determine the passing score. Medical Education 2001; 35:1043-1049. [PMID: 11703640] [DOI: 10.1046/j.1365-2923.2001.01041.x]
Abstract
BACKGROUND Objective structured clinical examination (OSCE) standard-setting procedures are not well developed and are often time-consuming and complex. We report an evaluation of a simple 'contrasting groups' method, applied to an OSCE conducted simultaneously in three separate schools. SUBJECTS Medical students undertaking an end-of-fifth year multidisciplinary OSCE. METHODS Using structured marking sheets, pairs of examiners independently scored student performance at each OSCE station. Examiners also provided a global rating of overall performance. The actual scores of any borderline candidates at each station were averaged to provide a passing score for each station. The passing scores for all stations were combined to become the passing score for the whole exam. Validity was determined by making comparisons with performance on other fifth-year assessments. Reliability measures comprised interschool agreement, interexaminer agreement and interstation variability. RESULTS The approach was simple and had face validity. There was a stronger association between the performance of borderline candidates on the OSCE and their in-course assessments than with their performance on the written exam, giving a weak measure of construct validity in the absence of a better 'gold standard'. There was good agreement between examiners in identifying borderline candidates. There were significant differences between schools in the borderline score for some stations, which disappeared when more than three stations were aggregated. CONCLUSION This practical method provided a valid and reliable competence-based pass mark. Combining marks from all stations before determining the pass mark was more reliable than making decisions based on individual stations.
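The standard-setting procedure this abstract describes is mechanically simple: per station, average the checklist scores of candidates the examiners globally rated as borderline, then sum those station pass scores into a whole-exam pass mark. A minimal sketch under those assumptions (the data structure and function name are illustrative, not from the paper):

```python
def borderline_pass_mark(stations):
    """Contrasting-groups / borderline-group pass mark.

    stations: list of dicts, each with parallel lists
      'scores'  - candidates' checklist scores at that station
      'ratings' - examiners' global ratings ('fail'/'borderline'/'pass'...)
    Station pass score = mean score of the 'borderline' candidates;
    exam pass mark = sum of station pass scores."""
    pass_mark = 0.0
    for st in stations:
        borderline = [s for s, r in zip(st["scores"], st["ratings"])
                      if r == "borderline"]
        pass_mark += sum(borderline) / len(borderline)
    return pass_mark

# Toy example: station 1 borderline mean is 11, station 2 is 8 -> pass mark 19
exam = [
    {"scores": [10, 12, 20], "ratings": ["borderline", "borderline", "pass"]},
    {"scores": [8, 14], "ratings": ["borderline", "pass"]},
]
print(borderline_pass_mark(exam))  # 19.0
```

Summing before deciding, rather than passing or failing per station, is what the authors found more reliable: station-level borderline scores varied between schools, but the aggregate over three or more stations did not.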
Affiliation(s)
- T J Wilkinson
- Christchurch School of Medicine, University of Otago, Christchurch, New Zealand