1
Schaper E, van Haeften T, Wandall J, Iivanainen A, Penell J, Press CM, Lekeux P, Holm P. Development of a shared item repository for progress testing in veterinary education. Front Vet Sci 2023; 10:1296514. PMID: 38026654; PMCID: PMC10652386; DOI: 10.3389/fvets.2023.1296514. Open access.
Abstract
Introduction Progress testing in education is an assessment principle for measuring students' progress over time, e.g., from entry to graduation. It offers a valid longitudinal, formative measurement of the growth of individual students' cognitive skills within the subjects of the test, as well as a tool for educators to monitor potential educational gaps and mismatches within the curriculum in relation to the basic veterinary learning outcomes. Methods Six veterinary educational establishments in Denmark, Finland, Germany (Hannover), the Netherlands, Norway, and Sweden, in cooperation with the European Association of Establishments for Veterinary Education (EAEVE), established a common veterinary item repository that can be used for progress testing, linear as well as computer-adaptive, in European Veterinary Education Establishments (VEEs), covering the EAEVE veterinary subjects and theoretical "Day One Competencies." First, a blueprint was created, suitable item formats were identified, and a quality assurance process for reviewing and approving items was established. The items were trialed to create a database of validated and calibrated items, and the responses were psychometrically analyzed according to Modern Test Theory. Results In total, 1,836 items were submitted, of which 1,342 were approved by the reviewers for trial testing. 1,119 students from all study years and all partner VEEs participated in one or more of six item trials, and 1,948 responses were collected. Responses were analyzed using Rasch modeling (analysis of item fit, differential item functioning, and item-response characteristics). A total of 821 calibrated items of various difficulty levels, matching veterinary students' abilities and covering the veterinary knowledge domains, have been banked. Discussion The item bank is now ready to be used for formative progress testing in European veterinary education. This paper presents and discusses possible pitfalls, problems, and solutions when establishing an international veterinary progress test.
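The Rasch calibration reported in the Results can be illustrated with a minimal sketch. This is not the authors' analysis pipeline: the ability/difficulty values and the proportion-correct shortcut below are purely illustrative assumptions.

```python
import math

def rasch_probability(theta: float, b: float) -> float:
    """Rasch model: probability that a student of ability theta answers
    an item of difficulty b correctly (both on the logit scale)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def logit_difficulty(p_correct: float) -> float:
    """Crude difficulty estimate from the proportion of correct responses.
    Real calibration, as in the study, fits the Rasch model by maximum
    likelihood over the full response matrix; this shortcut is illustrative."""
    return math.log((1.0 - p_correct) / p_correct)

# An average student (theta = 0) on an average item (b = 0) has a 50% chance:
p_even = rasch_probability(0.0, 0.0)

# An item answered correctly by 75% of trial participants gets a negative
# (i.e., easy) difficulty estimate on the logit scale:
b_easy = logit_difficulty(0.75)
```

Banking "calibrated items of various difficulty levels" then amounts to storing each item's estimated `b` so that tests of known difficulty can be assembled or adaptively selected.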
Affiliation(s)
- Elisabeth Schaper
- Centre for E-learning, Didactics and Educational Research, University of Veterinary Medicine Hannover, Foundation, Hannover, Germany
- Theo van Haeften
- Department of Biomolecular Health Sciences, Faculty of Veterinary Medicine & Centre for Academic Teaching and Learning, Utrecht University, Utrecht, Netherlands
- Jakob Wandall
- NordicMetrics Aps, Copenhagen, Denmark
- Department of Veterinary and Animal Sciences, Faculty of Health and Medical Sciences, University of Copenhagen, Frederiksberg, Denmark
- Antti Iivanainen
- Department of Veterinary Biosciences, Faculty of Veterinary Medicine, University of Helsinki, Helsinki, Finland
- Johanna Penell
- Department of Clinical Sciences, Faculty of Veterinary Medicine and Animal Science, Swedish University of Agricultural Sciences, Uppsala, Sweden
- Charles McLean Press
- Department of Preclinical Sciences and Pathology, Faculty of Veterinary Medicine, Norwegian University of Life Sciences, Aas, Norway
- Pierre Lekeux
- European Association of Establishments for Veterinary Education (EAEVE), Vienna, Austria
- Peter Holm
- Department of Veterinary and Animal Sciences, Faculty of Health and Medical Sciences, University of Copenhagen, Frederiksberg, Denmark
2
Simões RL, Bicudo AM, Passeri SMRR, Calderan TRA, Rizoli S, Fraga GP. Can trauma leagues contribute to better cognitive performance and technical skills of medical students? The experience of the Unicamp trauma league. Eur J Trauma Emerg Surg 2023; 49:1909-1916. PMID: 37264152; DOI: 10.1007/s00068-023-02283-z.
Abstract
PURPOSE Trauma leagues (TLs) are extracurricular programs that offer medical students supervised exposure to trauma and acute care surgery, mentorship, and participation in other academic activities. TLs are fully approved by medical schools, and currently over 100 TLs exist in Brazil. We hypothesized that the performance/competence of medical students who participated in TLs was superior to that of non-participants. This study evaluated and compared the cognitive performance and technical skills of the two groups. METHODS This retrospective cohort study compared the performance of TL medical students with that of non-TL alumni from 2005 to 2017, using the students' academic performance coefficient, Clinical Competence Assessment, and Progress Test results. Statistical analyses were performed in SigmaPlot 12.0, including Mann-Whitney comparison tests and the Kruskal-Wallis test. RESULTS Of the 1366 medical students who graduated from a Brazilian university, 966 were included, 17.9% of whom had participated in a TL. Compared to non-TL participants, TL students demonstrated better cognitive performance according to the performance coefficient (p = 0.017) and Progress Test results (p < 0.001), and higher achievement in the Clinical Competence Assessment (p < 0.001). CONCLUSION The academic performance of TL students was superior to that of non-TL students at the University of Campinas (Unicamp), suggesting a positive impact of TLs in the preparation of future doctors. These findings suggest that TL participation at Unicamp was beneficial in preparing better doctors and should be considered by medical schools worldwide. EVIDENCE LEVEL II (Retrospective cohort).
Affiliation(s)
- Romeo Lages Simões
- Medical School of the Vale do Rio Doce University (Univale), Governador Valadares, MG, Brazil
- Federal University of Juiz de Fora, Governador Valadares Campus (UFJF-GV), Governador Valadares, MG, Brazil
- Angélica Maria Bicudo
- Department of Pediatrics, School of Medical Sciences, University of Campinas, R. Tessália Vieira de Camargo, 126 - Cidade Universitária, PO BOX 13083-887, Campinas, SP, Brazil
- Sílvia Maria Riceto Ronchin Passeri
- School of Medical Sciences, University of Campinas, R. Tessália Vieira de Camargo, 126 - Cidade Universitária, PO BOX 13083-887, Campinas, SP, Brazil
- Thiago Rodrigues Araújo Calderan
- Division of Trauma Surgery, School of Medical Sciences, University of Campinas, R. Tessália Vieira de Camargo, 126 - Cidade Universitária, PO BOX 13083-887, Campinas, SP, Brazil
- Sandro Rizoli
- Medical Director for Trauma, Hamad General Hospital, PO BOX 3050, Doha, Qatar
- Gustavo Pereira Fraga
- Division of Trauma Surgery, School of Medical Sciences, University of Campinas, R. Tessália Vieira de Camargo, 126 - Cidade Universitária, PO BOX 13083-887, Campinas, SP, Brazil
3
Dion V, St-Onge C, Bartman I, Touchie C, Pugh D. Written-Based Progress Testing: A Scoping Review. Academic Medicine 2022; 97:747-757. PMID: 34753858; DOI: 10.1097/acm.0000000000004507.
Abstract
PURPOSE Progress testing is an increasingly popular form of assessment in which a comprehensive test is administered to learners repeatedly over time. To inform potential users, this scoping review aimed to document barriers, facilitators, and potential outcomes of the use of written progress tests in higher education. METHOD The authors followed Arksey and O'Malley's scoping review methodology to identify and summarize the literature on progress testing. They searched 6 databases (Academic Search Complete, CINAHL, ERIC, Education Source, MEDLINE, and PsycINFO) on 2 occasions (May 22, 2018, and April 21, 2020) and included articles written in English or French and pertaining to written progress tests in higher education. Two authors screened articles against the inclusion criteria (90% agreement), and data extraction was then performed by pairs of authors. Using a snowball approach, the authors also screened additional articles identified from the reference lists of included articles. They completed a thematic analysis through an iterative process. RESULTS A total of 104 articles were included. The majority of progress tests used a multiple-choice and/or true-or-false question format (95, 91.3%) and were administered 4 times a year (38, 36.5%). The most documented source of validity evidence was internal consistency (38, 36.5%). Four major themes were identified: (1) barriers and challenges to the implementation of progress testing (e.g., need for additional resources); (2) established collaboration as a facilitator of progress testing implementation; (3) factors that increase the acceptance of progress testing (e.g., formative use); and (4) outcomes and consequences of progress test use (e.g., progress testing contributes to an increase in knowledge). CONCLUSIONS Progress testing appears to have a positive impact on learning, and there is significant validity evidence to support its use. Although progress testing is resource- and time-intensive, strategies such as collaboration with other institutions may facilitate its use.
Affiliation(s)
- Vincent Dion
- V. Dion is an undergraduate medical education student, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec, Canada. He was a research assistant to the Paul Grand'Maison de la Société des médecins de l'Université de Sherbrooke research chair in medical education, Sherbrooke, Québec, Canada, at the time this work was completed
- Christina St-Onge
- C. St-Onge is professor, Department of Medicine, Faculty of Medicine and Health Sciences, Université de Sherbrooke, and the Paul Grand'Maison de la Société des médecins de l'Université de Sherbrooke research chair in medical education, Sherbrooke, Québec, Canada; ORCID: https://orcid.org/0000-0001-5313-0456
- Ilona Bartman
- I. Bartman is medical education research associate, Medical Council of Canada, Ottawa, Ontario, Canada; ORCID: https://orcid.org/0000-0002-2056-479X
- Claire Touchie
- C. Touchie is professor of medicine, University of Ottawa, Ottawa, Ontario, Canada. She was chief medical education officer, Medical Council of Canada, Ottawa, Ontario, Canada, at the time this work was completed; ORCID: https://orcid.org/0000-0001-7926-9720
- Debra Pugh
- D. Pugh is medical education advisor, Medical Council of Canada, and associate professor, Department of Medicine, University of Ottawa and The Ottawa Hospital, Ottawa, Ontario, Canada; ORCID: https://orcid.org/0000-0003-4076-9669
4
Iyeyasu JN, Cecilio-Fernandes D, de Carvalho KM. Longitudinal evaluation of the Ophthalmology residents in Brazil: an observational prospective study. Sao Paulo Med J 2022; 141:e202292. PMID: 36197351; PMCID: PMC10065116; DOI: 10.1590/1516-3180.2022.0092.r1.01072022. Open access.
Abstract
BACKGROUND Longitudinal evaluation of students appears to assess their knowledge better than traditional methods of evaluation, such as modular and final tests. Currently, progress testing is the most consolidated type of longitudinal testing. However, despite being well consolidated as an assessment tool in medical education, this type of test is rarely used in residency programs. OBJECTIVES This study aimed to investigate residents' knowledge growth over residency training and to describe the implementation of a longitudinal evaluation test in ophthalmology residency training across several medical schools in Brazil. Finally, the study aimed to check whether test performance can be used as a predictor of the result of the specialist title test. DESIGN AND SETTING This was a prospective observational study conducted using an online platform. METHODS Online tests were developed following the same pattern as the Brazilian Ophthalmology Council specialist tests. All residents took the test simultaneously, once a year at the end of the school year. RESULTS A progress test was conducted across 13 services with 259 residents. Resident scores improved over the years (P < 0.0001) and showed a moderate correlation with the Brazilian Ophthalmology Council specialist test (P = 0.0156). CONCLUSION The progress test can be considered a valuable tool for assessing residents' knowledge, which increased over the course of residency training. In addition, it can be used as a predictor of the result of the specialist title test.
Affiliation(s)
- Josie Naomi Iyeyasu
- MD. Ophthalmologist and Assistant Doctor, Low Vision and Strabismus Sector, Hospital de Clínicas, Faculdade de Ciências Médicas da Universidade Estadual de Campinas (HC/FCM/UNICAMP), Campinas (SP), Brazil
- Dario Cecilio-Fernandes
- Psy, MSc, PhD. Psychologist and Researcher, Department of Psychology and Psychiatry, Faculdade de Ciências Médicas da Universidade Estadual de Campinas (FCM/UNICAMP), Campinas (SP), Brazil
- Keila Monteiro de Carvalho
- MD. Ophthalmologist and Full Professor, Faculty of Medical Sciences, Universidade Estadual de Campinas (UNICAMP); and Chief, Department of Ophthalmo-Otorhinolaryngology, Faculdade de Ciências Médicas da Universidade Estadual de Campinas (FCM/UNICAMP), Campinas (SP), Brazil
5
Cecilio-Fernandes D, Bremers A, Collares CF, Nieuwland W, van der Vleuten C, Tio RA. Investigating possible causes of bias in a progress test translation: an one-edged sword. Korean Journal of Medical Education 2019; 31:193-204. PMID: 31455049; PMCID: PMC6715902; DOI: 10.3946/kjme.2019.130.
Abstract
PURPOSE Assessment in different languages should measure the same construct. However, item characteristics, such as item flaws and content, may favor one test-taker group over another. This is known as item bias. Although some studies have focused on item bias, little is known about its association with item characteristics. Therefore, this study investigated the association between item characteristics and bias. METHODS The University of Groningen offers both an international and a national bachelor's program in medicine. Students in both programs take the same progress test, but the international progress test is literally translated from Dutch into English. Differential item functioning was calculated to analyze item bias in four subsequent progress tests. Items were also classified by category, number of alternatives, item flaw, item length, and whether they were case-based questions. RESULTS The proportion of items with bias ranged from 34% to 36% across the tests. The number of biased items and the size of their bias were very similar in both programmes. More complex items with more alternatives favored the national students, whereas shorter items with fewer alternatives favored the international students. CONCLUSION Although nearly 35% of all items contained bias, the distribution and size of the bias were similar for both groups. The findings of this paper may be used to improve the item-writing process by avoiding characteristics that benefit one group while disadvantaging another.
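The differential item functioning (DIF) analysis described above can be sketched as follows. The response data, the proportion-correct difficulty shortcut, and the 0.5-logit flagging threshold are illustrative assumptions, not values or methods taken from the study.

```python
import math

def item_difficulty(responses: list[int]) -> float:
    """Logit item difficulty from a list of 0/1 responses
    (proportion-correct logit); a stand-in for full Rasch calibration."""
    p = sum(responses) / len(responses)
    return math.log((1.0 - p) / p)

def dif_contrast(group_a: list[int], group_b: list[int]) -> float:
    """Difference in estimated difficulty of the same item between two
    groups; a large absolute contrast suggests the item functions
    differently across groups (i.e., possible item bias)."""
    return item_difficulty(group_a) - item_difficulty(group_b)

# Hypothetical responses of national vs. international students to one item:
national = [1, 1, 1, 0, 1, 1, 0, 1]        # 6/8 correct
international = [1, 0, 0, 1, 0, 1, 0, 0]   # 3/8 correct

contrast = dif_contrast(national, international)
flagged = abs(contrast) > 0.5  # illustrative DIF threshold in logits
```

A negative contrast here means the item was easier for the national group, the pattern the study associated with longer, more complex items.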
Affiliation(s)
- Dario Cecilio-Fernandes
- School of Medical Sciences, University of Campinas, Campinas, Brazil
- Center for Education Development and Research in Health Professions (CEDAR), University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- André Bremers
- Department of Surgery, Radboud University Nijmegen Medical Center, Nijmegen, The Netherlands
- Carlos Fernando Collares
- Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
- Wybe Nieuwland
- Department of Cardiology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Cees van der Vleuten
- Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
- René A. Tio
- Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
- Department of Cardiology, Catharina Hospital, Eindhoven, The Netherlands
6
Kirnbauer B, Avian A, Jakse N, Rugani P, Ithaler D, Egger R. First reported implementation of a German-language progress test in an undergraduate dental curriculum: A prospective study. European Journal of Dental Education 2018; 22:e698-e705. PMID: 29961963; PMCID: PMC6220869; DOI: 10.1111/eje.12381.
Abstract
INTRODUCTION Progress testing is a special form of longitudinal and feedback-oriented assessment. Even though well established in human medical curricula, this is not the case in dental education. The aim was the prospective development and implementation of the first reported German-language Dental Progress Test (DPT) for the undergraduate dental curriculum at the Medical University of Graz, Austria. MATERIAL AND METHODS Participation in DPT was compulsory for all dental students in terms 7-12 (years 4-6). Three tests, each consisting of 100 items out of a pool of 375, were administered within 3 consecutive terms in 2016 and 2017. Rasch analyses were used to evaluate the questionnaire and identify misfitting items. RESULTS In the item responses, 59.7% were "correct," 27.0% were "false" and 13.3% were answered with "don't know," with similar results at all 3 time points. The assumption of parallel ICC was met (T1: χ2 = 51.071, df = 74, P = .981; T2: χ2 = 57.044, df = 67, P = .802; T3: χ2 = 58.443, df = 72, P = .876) and item difficulties for the thematic fields were similarly distributed across the latent dimensions. CONCLUSION The newly introduced DPT is appropriate for testing dental students and is well balanced for the tested target group.
Affiliation(s)
- B. Kirnbauer
- Division of Oral Surgery and Orthodontics, Medical University of Graz, Graz, Austria
- A. Avian
- Institute for Medical Informatics, Statistics and Documentation, Medical University of Graz, Graz, Austria
- N. Jakse
- Division of Oral Surgery and Orthodontics, Medical University of Graz, Graz, Austria
- P. Rugani
- Division of Oral Surgery and Orthodontics, Medical University of Graz, Graz, Austria
- D. Ithaler
- Organizational Unit for Teaching and Studies, Medical University of Graz, Graz, Austria
- R. Egger
- Institute for Educational Science, Karl-Franzens University Graz, Graz, Austria
7
Dabaliz AA, Kaadan S, Dabbagh MM, Barakat A, Shareef MA, Al-Tannir M, Obeidat A, Mohamed A. Predictive validity of pre-admission assessments on medical student performance. International Journal of Medical Education 2017; 8:408-413. PMID: 29176032; PMCID: PMC5768436; DOI: 10.5116/ijme.5a10.04e1.
Abstract
OBJECTIVES To examine the predictive validity of pre-admission variables on students' performance in a medical school in Saudi Arabia. METHODS In this retrospective study, we collected admission and college performance data for 737 students in preclinical and clinical years. Data included high school scores and other standardized test scores, such as those of the National Achievement Test and the General Aptitude Test. Additionally, we included the scores of the Test of English as a Foreign Language (TOEFL) and the International English Language Testing System (IELTS) exams. Those datasets were then compared with college performance indicators, namely the cumulative Grade Point Average (cGPA) and progress test, using multivariate linear regression analysis. RESULTS In preclinical years, both the National Achievement Test (p=0.04, B=0.08) and TOEFL (p=0.017, B=0.01) scores were positive predictors of cGPA, whereas the General Aptitude Test (p=0.048, B=-0.05) negatively predicted cGPA. Moreover, none of the pre-admission variables were predictive of progress test performance in the same group. On the other hand, none of the pre-admission variables were predictive of cGPA in clinical years. Overall, cGPA strongly predicted students' progress test performance (p<0.001 and B=19.02). CONCLUSIONS Only the National Achievement Test and TOEFL significantly predicted performance in preclinical years. However, these variables do not predict progress test performance, meaning that they do not predict the functional knowledge reflected in the progress test. We report various strengths and deficiencies in the current medical college admission criteria, and call for employing more sensitive and valid ones that predict student performance and functional knowledge, especially in the clinical years.
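The kind of linear prediction reported above (cGPA as a predictor of progress test performance) can be sketched with an ordinary least-squares fit. The data points below are invented for illustration, and this univariate fit simplifies the multivariate regression used in the study.

```python
def fit_line(x: list[float], y: list[float]) -> tuple[float, float]:
    """Ordinary least-squares fit of y ~ a + b*x, returning (intercept, slope).
    Minimal stand-in for the study's multivariate linear regression."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sxy / sxx
    return my - b * mx, b

# Hypothetical cGPA values and progress test scores for five students:
cgpa = [2.5, 3.0, 3.5, 4.0, 4.5]
score = [48, 57, 67, 76, 86]
intercept, slope = fit_line(cgpa, score)
# A positive slope would mirror the positive cGPA effect the abstract reports.
```

In the published analysis the coefficient B is the analogue of `slope`: the expected change in progress test score per unit change in cGPA, holding the other predictors constant.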
Affiliation(s)
- Al-Awwab Dabaliz
- College of Medicine, Alfaisal University, Riyadh, Kingdom of Saudi Arabia
- Samy Kaadan
- College of Medicine, Alfaisal University, Riyadh, Kingdom of Saudi Arabia
- M. Marwan Dabbagh
- College of Medicine, Alfaisal University, Riyadh, Kingdom of Saudi Arabia
- Abdulaziz Barakat
- College of Medicine, Alfaisal University, Riyadh, Kingdom of Saudi Arabia
- Akef Obeidat
- College of Medicine, Alfaisal University, Riyadh, Kingdom of Saudi Arabia
- Ayman Mohamed
- College of Medicine, Alfaisal University, Riyadh, Kingdom of Saudi Arabia
8
Katajavuori N, Salminen O, Vuorensola K, Huhtala H, Vuorela P, Hirvonen J. Competence-Based Pharmacy Education in the University of Helsinki. Pharmacy 2017; 5:pharmacy5020029. PMID: 28970441; PMCID: PMC5597154; DOI: 10.3390/pharmacy5020029. Open access.
Abstract
In order to meet the expectations to act as an expert in the health care profession, it is of utmost importance that pharmacy education creates the knowledge and skills needed in today's working life. Thus, the planning of the curriculum should be based on relevant and up-to-date learning outcomes. At the University of Helsinki, a university-wide curriculum reform called 'the Big Wheel' was launched in 2015. After the reform, the basic degrees of the university are two-cycle (Bachelor-Master) and competence-based, with the learning outcomes forming a solid basis for the curriculum goals and implementation. In the Faculty of Pharmacy, this curriculum reform was conducted in two phases during 2012-2016. The construction of the curriculum was based on the most relevant learning outcomes for working life, delivered via high-quality first (Bachelor of Science in Pharmacy) and second (Master of Science in Pharmacy) cycle degree programs. The reform was kicked off by interviewing all the relevant stakeholders: students, teachers, and pharmacists/experts in all the working life sectors of pharmacy. Based on these interviews, the intended learning outcomes of the Pharmacy degree programs were defined, including both subject/content-related and generic skills. The curriculum design was based on the principles of constructive alignment, and new structures and methods were applied in order to foster the implementation of the learning outcomes. During the process, it became evident that a competence-based curriculum can be created only in close co-operation with the stakeholders, including teachers and students. Well-structured and facilitated co-operation amongst the teachers enabled the development of many new and innovative teaching practices. The European Union funded PHAR-QA project provided, at the same time, a highly relevant framework to compare the curriculum development in Helsinki against Europe-wide definitions of competences and learning outcomes in pharmacy education.
Affiliation(s)
- Nina Katajavuori
- Faculty of Pharmacy, University of Helsinki, P.O. Box 56, 00014 Helsinki, Finland
- Outi Salminen
- Faculty of Pharmacy, University of Helsinki, P.O. Box 56, 00014 Helsinki, Finland
- Katariina Vuorensola
- Faculty of Pharmacy, University of Helsinki, P.O. Box 56, 00014 Helsinki, Finland
- Helena Huhtala
- Faculty of Pharmacy, University of Helsinki, P.O. Box 56, 00014 Helsinki, Finland
- Pia Vuorela
- Faculty of Pharmacy, University of Helsinki, P.O. Box 56, 00014 Helsinki, Finland
- Jouni Hirvonen
- Faculty of Pharmacy, University of Helsinki, P.O. Box 56, 00014 Helsinki, Finland
9
Albanese M, Case SM. Progress testing: critical analysis and suggested practices. Advances in Health Sciences Education 2016; 21:221-234. PMID: 25662873; DOI: 10.1007/s10459-015-9587-z.
Abstract
Educators have long lamented the tendency of students to engage in rote memorization in preparation for tests rather than engaging in deep learning, where they attempt to gain meaning from their studies. Rote memorization driven by objective exams has been termed a steering effect. Progress testing (PT), in which a comprehensive examination sampling all of medicine is administered repeatedly throughout the entire curriculum, was developed with the stated aim of breaking the steering effect of examinations and of promoting deep learning. PT is an approach historically linked to problem-based learning (PBL), although there is growing recognition of its broader applicability. The purpose of this article is to summarize the salient features of PT drawn from the literature, provide a critical review of these features based on that literature and on psychometric considerations drawn from the Standards for Educational and Psychological Testing, and offer considerations for best practices in applying PT from an evidence-based and psychometric perspective.
Affiliation(s)
- Mark Albanese
- University of Wisconsin-Madison Medical School, National Conference of Bar Examiners, 302 South Bedford Street, Madison, WI, 53705, USA.
10
Schauber SK, Hecht M, Nouns ZM, Kuhlmey A, Dettmer S. The role of environmental and individual characteristics in the development of student achievement: a comparison between a traditional and a problem-based-learning curriculum. Advances in Health Sciences Education 2015; 20:1033-1052. PMID: 25616720; DOI: 10.1007/s10459-015-9584-2.
Abstract
In medical education, the effect of the educational environment on student achievement has primarily been investigated in comparisons between traditional and problem-based learning (PBL) curricula. As many of these studies have reached no clear conclusions on the superiority of the PBL approach, the effect of curricular reform on student performance remains an open issue. We employed a theoretical framework that integrates antecedents of student achievement from various psychosocial domains to examine how students interact with their curricular environment. In a longitudinal study with N = 1,646 participants, we assessed students in a traditional and a PBL-centered curriculum. The measures administered included students' perception of the learning environment, self-efficacy beliefs, positive study-related affect, social support, indicators of self-regulated learning, and academic achievement assessed through progress tests. We compared the relations between these characteristics in the two curricular environments. The results are two-fold. First, substantial relations of various psychosocial domains and their associations with achievement were identified. Second, our analyses indicated that there are no substantial differences between traditional and PBL-based curricula concerning the relational structure of psychosocial variables and achievement. Drawing definite conclusions on the role of curricular-level interventions in the development of students' academic achievement is constrained by the quasi-experimental design as well as the selection of variables included. However, in the specific context described here, our results may still support the view of student activity as the key ingredient in the acquisition of achievement and performance.
Affiliation(s)
- Stefan K Schauber
- Institute of Medical Sociology and Rehabilitation Science and Department for Assessment, Charité - Universitätsmedizin Berlin, Luisenstraße 57, 10117, Berlin, Germany
- Martin Hecht
- Institute for Educational Quality Improvement, Humboldt-Universität zu Berlin, Unter den Linden 6, 10099, Berlin, Germany
- Zineb M Nouns
- Institute of Medical Education, University of Bern, Konsumstrasse 13, 3010, Bern, Switzerland
- Adelheid Kuhlmey
- Institute of Medical Sociology and Rehabilitation Science, Charité - Universitätsmedizin Berlin, Luisenstraße 57, 10117, Berlin, Germany
- Susanne Dettmer
- Institute of Medical Sociology and Rehabilitation Science, Charité - Universitätsmedizin Berlin, Luisenstraße 57, 10117, Berlin, Germany
11
Schauber SK, Hecht M, Nouns ZM, Dettmer S. On the role of biomedical knowledge in the acquisition of clinical knowledge. Medical Education 2013; 47:1223-1235. PMID: 24206156; DOI: 10.1111/medu.12229.
Abstract
CONTEXT Basic science teaching in undergraduate medical education faces several challenges. One prominent discussion is focused on the relevance of biomedical knowledge to the development and integration of clinical knowledge. Although the value of basic science knowledge is generally emphasised, theoretical positions on the relative role of this knowledge and the optimal approach to its instruction differ. The present paper addresses whether and to what extent biomedical knowledge is related to the development of clinical knowledge.
METHODS We analysed repeated-measures data for performances on basic science and clinical knowledge assessments. A sample of 598 medical students on a traditional curriculum participated in the study. The entire study covered a developmental phase of 2 years of medical education. Structural equation modelling was used to analyse the temporal relationship between biomedical knowledge and the acquisition of clinical knowledge.
RESULTS At the point at which formal basic science education ends and clinical training begins, students show the highest levels of biomedical knowledge. The present data suggest a decline in basic science knowledge that is complemented by a growth in clinical knowledge. Statistical comparison of several structural equation models revealed that the model that best explained the data specified unidirectional relationships between earlier states of biomedical knowledge and subsequent changes in clinical knowledge. However, the parameter estimates indicate that this association is negative.
DISCUSSION Our analysis suggests a negative relationship between earlier levels of basic science knowledge and subsequent gains in clinical knowledge. We discuss the limitations of the present study, such as the educational context in which it was conducted and its non-experimental nature. Although the present results do not necessarily contradict the relevance of basic sciences, we speculate on mechanisms that might be related to our findings. We conclude that our results hint at possibly critical issues in basic science education that have rarely been addressed thus far.
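The study's core finding (earlier biomedical knowledge relating negatively to subsequent change in clinical knowledge) was obtained with structural equation modelling; a much simpler observed-score analogue can be sketched as follows. All data, numbers, and variable names below are synthetic and illustrative, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 598  # sample size reported in the study

# Synthetic percent-correct scores; the negative coupling built in below
# merely mirrors the *direction* of the association the authors report.
basic_t1 = rng.normal(70, 10, n)                 # biomedical knowledge at t1
clinical_gain = 20 - 0.3 * (basic_t1 - 70) + rng.normal(0, 5, n)

# Correlation between earlier basic-science level and later clinical gain
r = np.corrcoef(basic_t1, clinical_gain)[0, 1]
print(f"r = {r:.2f}")  # negative by construction
```

A cross-lagged structural equation model additionally separates measurement error from true change, which a raw correlation like this cannot do.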
Affiliation(s)
- Stefan K Schauber
- Institute of Medical Sociology, Charité - Universitätsmedizin Berlin, Berlin, Germany; Dieter Scheffner Center for Medical Teaching and Educational Research, Charité - Universitätsmedizin Berlin, Berlin, Germany
12
Schuwirth LWT, van der Vleuten CPM. The use of progress testing. Perspectives on Medical Education 2012; 1:24-30. [PMID: 23316456 PMCID: PMC3540387 DOI: 10.1007/s40037-012-0007-2]
Abstract
Progress testing is gaining ground rapidly after having been used almost exclusively in Maastricht and Kansas City. This increased popularity is understandable considering the intuitive appeal longitudinal testing has as a way to predict future competence and performance. Yet there are also important practicalities. Progress testing is longitudinal assessment in that it is based on subsequent equivalent, yet different, tests. The results of these are combined to determine the growth of functional medical knowledge for each student, enabling more reliable and valid decision making about promotion to the next study phase. The longitudinal integrated assessment approach has a demonstrable positive effect on student learning behaviour by discouraging binge learning. Furthermore, it leads to more reliable decisions as well as good predictive validity for future competence and retention of knowledge. Also, because of its integration and independence from local curricula, it can be used in a multi-centre collaborative production and administration framework, reducing costs, increasing efficiency and allowing for constant benchmarking. Practicalities include the relative unfamiliarity of faculty with the concept, the fact that remediation for students with a series of poor results is time consuming, the need to embed the instrument carefully into the existing assessment programme and the importance of equating subsequent tests to minimize test-to-test variability in difficulty. Where it has been implemented collaboratively, progress testing has led to satisfaction, provided the practicalities are heeded well.
Affiliation(s)
- Lambert W. T. Schuwirth
- Flinders Innovation in Clinical Education, Flinders University, Adelaide, Australia
- Department of Educational Development and Research, Maastricht University, Maastricht, the Netherlands
- Cees P. M. van der Vleuten
- Chair, Department of Educational Development and Research, Maastricht University, Maastricht, the Netherlands
13
Wrigley W, van der Vleuten CPM, Freeman A, Muijtjens A. A systemic framework for the progress test: strengths, constraints and issues: AMEE Guide No. 71. Medical Teacher 2012; 34:683-97. [PMID: 22905655 DOI: 10.3109/0142159x.2012.704437]
Abstract
There has been increasing use and significance of progress testing in medical education. It is used in many ways and with several formats to reflect the variety of curricula and assessment purposes. These developments have occurred alongside a recognised sensitivity to the error variance inherent in multiple-choice tests, from which challenges to the validity and reliability of progress testing have arisen. This Guide presents a generic, systemic framework to help identify and explore improvements in the quality and defensibility of progress test data. The framework draws on the combined experience of the Dutch consortium, an individual medical school in the United Kingdom, and the bulk of the progress test literature to date. It embeds progress testing as a quality-controlled assessment tool for improving learning, teaching and the demonstration of educational standards. The paper describes strengths, highlights constraints and explores issues for improvement. These may assist in establishing new progress testing in medical education programmes. They can also guide the evaluation and improvement of existing programmes.
Affiliation(s)
- William Wrigley
- Department of Educational Development and Research, Maastricht University, The Netherlands
14
Aarts R, Steidel K, Manuel BAF, Driessen EW. Progress testing in resource-poor countries: a case from Mozambique. Medical Teacher 2010; 32:461-3. [PMID: 20515372 DOI: 10.3109/0142159x.2010.486059]
Abstract
A wealth of evidence for the effectiveness of progress testing in problem-based learning curricula has been collected in the Western academic world, but whether progress testing can be equally effective in problem-based medical schools in resource-poor countries remains an open question. To provide an initial answer, we describe our experiences with progress testing in a medical school in Mozambique since its establishment in 2001, focusing specifically on test acceptability, formative educational impact, test validity and test reliability. After 7 years of experience, we conclude that progress testing can be a feasible and effective assessment instrument even in a resource-poor setting. Institutional collaboration is important to guarantee test quality and sustainability.
15
Freeman A, Van Der Vleuten C, Nouns Z, Ricketts C. Progress testing internationally. Medical Teacher 2010; 32:451-5. [PMID: 20515370 DOI: 10.3109/0142159x.2010.485231]
Affiliation(s)
- Adrian Freeman
- Peninsula College of Medicine and Dentistry, Exeter, UK.
16
Swanson DB, Holtzman KZ, Butler A, Langer MM, Nelson MV, Chow JWM, Fuller R, Patterson JA, Boohan M. Collaboration across the pond: the multi-school progress testing project. Medical Teacher 2010; 32:480-5. [PMID: 20515377 DOI: 10.3109/0142159x.2010.485655]
Abstract
This collaborative project between the National Board of Medical Examiners and four schools in the UK is investigating the feasibility and utility of a cross-school progress testing program drawing on test material recently retired from the United States Medical Licensing Examination (USMLE) Step 2 Clinical Knowledge (CK) examination. This article describes the design of the progress test; the process used to build, translate (localize), review, and finalize test forms; the approach taken to (web-based) test administration; and the procedure used to calculate and report scores. Results to date have demonstrated that it is feasible to use test items written for the US licensing examination as a base for developing progress test forms for use in the UK. Some content areas can be localized more readily than others, and care is clearly needed in review and revision of test materials to ensure that they are clinically appropriate and suitably phrased for use in the UK. Involvement of content experts in review and vetting of the test material is essential, and it is clearly desirable to supplement expert review with quality control procedures based on item statistics as a final check on the appropriateness of individual test items.
Affiliation(s)
- D B Swanson
- National Board of Medical Examiners, Philadelphia, PA 19104-3120, USA.
17
Freeman A, Nicholls A, Ricketts C, Coombes L. Can we share questions? Performance of questions from different question banks in a single medical school. Medical Teacher 2010; 32:464-6. [PMID: 20515373 DOI: 10.3109/0142159x.2010.486056]
Abstract
BACKGROUND To use progress testing, a large bank of questions is required, particularly when tests will be delivered over a long period of time. The questions need not only to be of good quality but also balanced in subject coverage across the curriculum to allow appropriate sampling. Hence, as well as creating its own questions, an institution can draw on shared questions. Both methods allow ownership and structuring of the test appropriate to the educational requirements of the institution.
METHOD Peninsula Medical School (PMS) has developed a mechanism to validate questions written in house. That mechanism can be adapted to use questions from an international question bank, the International Digital Electronic Access Library (IDEAL), and a UK-based question bank, the Universities Medical Assessment Partnership (UMAP). These questions have been used in our progress tests and analysed for relative performance.
RESULTS Data are presented to show that questions from differing sources can perform comparably in a progress testing format.
CONCLUSION There are difficulties in transferring questions from one institution to another, including differences in curricula and culture. Nevertheless, our experience suggests that only a relatively small amount of work is needed to adapt questions from external question banks for effective use. The longitudinal aspect of progress testing (albeit used summatively) may allow more flexibility in question usage than single high-stakes exams.
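The relative performance of items from different banks is typically compared with classical item statistics such as item difficulty (proportion of correct responses). A minimal sketch of that comparison, using made-up 0/1 response data (the paper does not publish its analysis code, so everything below is illustrative):

```python
import numpy as np

# Illustrative 0/1 response matrices (students x items) for items drawn
# from two sources; the data are synthetic, not from the study.
rng = np.random.default_rng(1)
local = rng.binomial(1, 0.65, size=(200, 30))    # in-house items
shared = rng.binomial(1, 0.63, size=(200, 30))   # imported items

# Classical item difficulty = proportion of correct responses per item.
p_local = local.mean(axis=0)
p_shared = shared.mean(axis=0)
print(f"mean difficulty: local {p_local.mean():.2f}, shared {p_shared.mean():.2f}")
```

Comparable distributions of item difficulty (and discrimination) across sources are one practical indicator that imported items behave like in-house ones.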
Affiliation(s)
- Adrian Freeman
- Peninsula College of Medicine and Dentistry, Exeter, UK.
18
Van der Veken J, Valcke M, De Maeseneer J, Schuwirth L, Derese A. Impact on knowledge acquisition of the transition from a conventional to an integrated contextual medical curriculum. Medical Education 2009; 43:704-713. [PMID: 19573195 DOI: 10.1111/j.1365-2923.2009.03397.x]
Abstract
CONTEXT This study set out to test the hypotheses that after the implementation of an integrated contextual medical curriculum (ICMC), ICMC students would attain higher levels of knowledge in both the basic and clinical sciences at an earlier stage than conventional medical curriculum (CMC) students, that ICMC students would perform significantly better on knowledge tests at the end of their education and, finally, that ICMC students would show a more linear acquisition of knowledge in the basic and clinical sciences.
METHODS We drew upon the Dutch Inter-University Progress Test (PT) to measure impact on knowledge acquisition and compared PT scores of 393 CMC students with scores of 1028 ICMC students (Years 2-6) in a cross-sectional design. We also compared the scores of 112 CMC students with those of 197 ICMC students in Years 3-6 in a longitudinal design.
RESULTS As expected, ICMC students showed a steeper learning curve in both the basic and clinical sciences: at the end of their training students had attained higher levels of knowledge in both domains. The learning curve pertaining to the clinical sciences was almost linear, whereas that for the basic sciences showed a sharper rise, indicating a continuing growth of knowledge.
CONCLUSIONS The differential impact on knowledge acquisition of conventional and innovative curricula has seldom been studied in a longitudinal and cross-sectional design. This study confirmed our assumptions about the potential of an integrated contextual curriculum. The differences observed in ICMC students were attributed to the stronger emphasis on clinically relevant basic sciences in the early years of the ICMC and to the stronger integration of basic and clinical sciences in the ICMC.
Affiliation(s)
- Jos Van der Veken
- Office of Educational Quality Assurance, Department of Educational Affairs, Ghent University, Ghent, Belgium.
19
Löwe B, Hartmann M, Wild B, Nikendei C, Kroenke K, Niehoff D, Henningsen P, Zipfel S, Herzog W. Effectiveness of a 1-year resident training program in clinical research: a controlled before-and-after study. J Gen Intern Med 2008; 23:122-8. [PMID: 17922168 PMCID: PMC2359160 DOI: 10.1007/s11606-007-0397-8]
Abstract
BACKGROUND To increase the number of clinician scientists and to improve research skills, a number of clinical research training programs have been recently established. However, controlled studies assessing their effectiveness are lacking.
OBJECTIVE To investigate the effectiveness of a 1-year resident training program in clinical research.
DESIGN Controlled before-and-after study. The training program included a weekly class in clinical research methods, completion of a research project, and mentorship.
PARTICIPANTS Intervention subjects were 15 residents participating in the 1-year training program in clinical research. Control subjects were 22 residents not participating in the training program.
MEASUREMENTS AND MAIN RESULTS Assessments were performed at the beginning and end of the program. Outcomes included methodological research knowledge (multiple-choice progress test), self-assessed research competence, progress on publications and grant applications, and evaluation of the program using quantitative and qualitative methods. Intervention subjects and controls were well matched with respect to research experience (5.1 +/- 2.2 vs 5.6 +/- 5.8 years; p = .69). Methodological knowledge improved significantly more in the intervention group than in the control group (effect size = 2.5; p < .001). Similarly, self-assessed research competence increased significantly more in the intervention group (effect size = 1.1; p = .01). At the end of the program, significantly more intervention subjects than controls were currently writing journal articles (87% vs 36%; p = .003). The intervention subjects evaluated the training program as highly valuable for becoming independent researchers.
CONCLUSIONS A 1-year training program in clinical research can substantially increase research knowledge and productivity. The program design makes it feasible to implement in other academic settings.
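In a controlled before-and-after design, the between-group difference in pre-to-post gains is often summarised as an effect size. A minimal sketch of one common variant (Cohen's d on gain scores with a pooled standard deviation); the paper does not state its exact formula, and all numbers below are hypothetical:

```python
import numpy as np

def gain_effect_size(pre_i, post_i, pre_c, post_c):
    """Cohen's d for the difference in pre-to-post gains between an
    intervention and a control group, using the pooled SD of the gains.
    Illustrative only; not necessarily the formula used in the study."""
    gain_i = np.asarray(post_i, float) - np.asarray(pre_i, float)
    gain_c = np.asarray(post_c, float) - np.asarray(pre_c, float)
    n_i, n_c = len(gain_i), len(gain_c)
    pooled_sd = np.sqrt(((n_i - 1) * gain_i.var(ddof=1)
                         + (n_c - 1) * gain_c.var(ddof=1))
                        / (n_i + n_c - 2))
    return (gain_i.mean() - gain_c.mean()) / pooled_sd

# Hypothetical knowledge-test scores (percent correct)
d = gain_effect_size(pre_i=[50, 55, 48], post_i=[70, 78, 69],
                     pre_c=[51, 54, 49], post_c=[56, 58, 55])
print(f"d = {d:.2f}")
```

A positive d here indicates larger knowledge gains in the intervention group, in the same spirit as the effect sizes the abstract reports.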
Affiliation(s)
- Bernd Löwe
- Department of Psychosomatic Medicine and Psychotherapy, University Medical Center Hamburg-Eppendorf and Hamburg-Eilbek (Schön Clinics), Hamburg, Germany.
20
Muijtjens AMM, Schuwirth LWT, Cohen-Schotanus J, Thoben AJNM, van der Vleuten CPM. Benchmarking by cross-institutional comparison of student achievement in a progress test. Medical Education 2008; 42:82-8. [PMID: 18181848 DOI: 10.1111/j.1365-2923.2007.02896.x]
Abstract
OBJECTIVE To determine the effectiveness of single-point benchmarking and longitudinal benchmarking for inter-school educational evaluation.
METHODS We carried out a mixed, longitudinal, cross-sectional study using data from 24 annual measurement moments (4 tests x 6 year groups) over 4 years for 4 annual progress tests assessing the graduation-level knowledge of all students from 3 co-operating medical schools. Participants included undergraduate medical students (about 5000) from 3 medical schools. The main outcome measures involved between-school comparisons of progress test results based on different benchmarking methods.
RESULTS Variations in relative school performance across different tests and year groups indicate instability and low reliability of single-point benchmarking, which is subject to distortions as a result of school-test and year group-test interaction effects. Deviations of school means from the overall mean follow an irregular, noisy pattern obscuring systematic between-school differences. The longitudinal benchmarking method results in suppression of noise and revelation of systematic differences. The pattern of a school's cumulative deviations per year group gives a credible reflection of the relative performance of year groups.
CONCLUSIONS Even with highly comparable curricula, single-point benchmarking can result in distortion of the results of comparisons. If longitudinal data are available, the information contained in a school's cumulative deviations from the overall mean can be used. In such a case, the mean test score across schools is a useful benchmark for cross-institutional comparison.
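The longitudinal benchmarking idea (accumulating a school's deviations from the cross-school mean over successive measurement moments) can be sketched in a few lines. The scores below are invented for illustration and do not come from the study:

```python
import numpy as np

# Illustrative school-mean scores at successive measurement moments
# (rows: schools, columns: moments); all numbers are made up.
scores = np.array([
    [62.0, 64.5, 63.0, 66.0],   # school A
    [60.5, 63.0, 61.5, 64.0],   # school B
    [61.0, 62.0, 62.5, 64.5],   # school C
])

# Deviation of each school from the cross-school mean at each moment ...
dev = scores - scores.mean(axis=0)

# ... and the running (cumulative) deviation, which suppresses
# moment-to-moment noise and reveals systematic between-school differences.
cum_dev = dev.cumsum(axis=1)
print(cum_dev)
```

A school whose cumulative deviation drifts steadily upward is consistently above the cross-school benchmark, whereas single-moment deviations may fluctuate around zero purely through test-specific noise.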
Affiliation(s)
- Arno M M Muijtjens
- Department of Educational Development and Research, Faculty of Medicine, Maastricht University, Maastricht, The Netherlands.