1. Sieg M, Roselló Atanet I, Tomova MT, Schoeneberg U, Sehy V, Mäder P, März M. Discovering unknown response patterns in progress test data to improve the estimation of student performance. BMC Medical Education 2023; 23:193. PMID: 36978145; PMCID: PMC10053036; DOI: 10.1186/s12909-023-04172-w.
Abstract
BACKGROUND The Progress Test Medizin (PTM) is a 200-question formative test administered each term to approximately 11,000 students at medical universities in Germany, Austria and Switzerland. Students receive feedback on their knowledge (and its development), mostly in comparison to their own cohort. In this study, we use PTM data to find groups of students with similar response patterns. METHODS We performed k-means clustering on a dataset of 5,444 students, selected cluster number k = 5, and used the answers as features. Subsequently, the data were passed to XGBoost with the cluster assignment as target, enabling the identification of cluster-relevant questions for each cluster with SHAP. Clusters were examined by total scores, response patterns, and confidence level. Relevant questions were evaluated for difficulty index, discriminatory index, and competence levels. RESULTS Three of the five clusters can be seen as "performance" clusters: cluster 0 (n = 761) consisted predominantly of students close to graduation; its relevant questions tended to be difficult, but students answered them confidently and correctly. Students in cluster 1 (n = 1,357) were advanced; cluster 3 (n = 1,453) consisted mainly of beginners. Relevant questions for these clusters were rather easy, and the number of guessed answers increased. There were two "drop-out" clusters: students in cluster 2 (n = 384) dropped out of the test about halfway through after initially performing well; cluster 4 (n = 1,489) included students from the first semesters as well as "non-serious" students, both with mostly incorrect guesses or no answers. CONCLUSION Clusters placed performance in the context of the participating universities. Relevant questions served as good cluster separators and further supported our "performance" cluster groupings.
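The clustering step described above can be sketched in miniature. The following stdlib-only k-means on binary response vectors (1 = correct, 0 = incorrect or blank) is an illustrative stand-in, not the study's implementation; the full pipeline additionally trained XGBoost on the resulting cluster labels and ranked cluster-relevant questions with SHAP. All data and names here are hypothetical.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Toy k-means: cluster binary response vectors (1 = correct, 0 = wrong/blank)."""
    rng = random.Random(seed)
    centroids = [list(p) for p in rng.sample(points, k)]
    assign = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each student goes to the nearest centroid
        # (squared Euclidean distance over the answer vector).
        for i, p in enumerate(points):
            assign[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])),
            )
        # Update step: each centroid becomes the mean answer pattern of its members.
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return assign, centroids

# Four hypothetical students, four questions: two strong profiles, two weak ones.
answers = [[1, 1, 1, 1], [1, 1, 1, 0], [0, 0, 0, 0], [0, 0, 1, 0]]
labels, centroids = kmeans(answers, k=2)
```

On real PTM data the vectors would be 200-dimensional and k = 5; the per-cluster "relevant questions" would then come from SHAP values of a classifier fit to `labels`, as the authors describe.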
Affiliation(s)
- Miriam Sieg
  - Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, AG Progress Test Medizin, Charitéplatz 1, 10117 Berlin, Germany
  - Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Institute of Biometry and Clinical Epidemiology, Charitéplatz 1, 10117 Berlin, Germany
- Iván Roselló Atanet
  - Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, AG Progress Test Medizin, Charitéplatz 1, 10117 Berlin, Germany
- Mihaela Todorova Tomova
  - Fakultät für Informatik und Automatisierung, Data-Intensive Systems and Visualization Group (dAI.SY), Technische Universität Ilmenau, Ehrenbergstraße 29, 98693 Ilmenau, Germany
- Uwe Schoeneberg
  - Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Institute of Biometry and Clinical Epidemiology, Charitéplatz 1, 10117 Berlin, Germany
- Victoria Sehy
  - Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, AG Progress Test Medizin, Charitéplatz 1, 10117 Berlin, Germany
- Patrick Mäder
  - Fakultät für Informatik und Automatisierung, Data-Intensive Systems and Visualization Group (dAI.SY), Technische Universität Ilmenau, Ehrenbergstraße 29, 98693 Ilmenau, Germany
  - Fakultät für Biowissenschaften, Friedrich Schiller Universität Jena, Schloßgasse 10, 07743 Jena, Germany
- Maren März
  - Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, AG Progress Test Medizin, Charitéplatz 1, 10117 Berlin, Germany
2. Dion V, St-Onge C, Bartman I, Touchie C, Pugh D. Written-Based Progress Testing: A Scoping Review. Academic Medicine 2022; 97:747-757. PMID: 34753858; DOI: 10.1097/acm.0000000000004507.
Abstract
PURPOSE Progress testing is an increasingly popular form of assessment in which a comprehensive test is administered to learners repeatedly over time. To inform potential users, this scoping review aimed to document barriers, facilitators, and potential outcomes of the use of written progress tests in higher education. METHOD The authors followed Arksey and O'Malley's scoping review methodology to identify and summarize the literature on progress testing. They searched 6 databases (Academic Search Complete, CINAHL, ERIC, Education Source, MEDLINE, and PsycINFO) on 2 occasions (May 22, 2018, and April 21, 2020) and included articles written in English or French and pertaining to written progress tests in higher education. Two authors screened articles for the inclusion criteria (90% agreement), then data extraction was performed by pairs of authors. Using a snowball approach, the authors also screened additional articles identified from the included reference lists. They completed a thematic analysis through an iterative process. RESULTS A total of 104 articles were included. The majority of progress tests used a multiple-choice and/or true-or-false question format (95, 91.3%) and were administered 4 times a year (38, 36.5%). The most documented source of validity evidence was internal consistency (38, 36.5%). Four major themes were identified: (1) barriers and challenges to the implementation of progress testing (e.g., need for additional resources); (2) established collaboration as a facilitator of progress testing implementation; (3) factors that increase the acceptance of progress testing (e.g., formative use); and (4) outcomes and consequences of progress test use (e.g., progress testing contributes to an increase in knowledge). CONCLUSIONS Progress testing appears to have a positive impact on learning, and there is significant validity evidence to support its use. Although progress testing is resource- and time-intensive, strategies such as collaboration with other institutions may facilitate its use.
Affiliation(s)
- Vincent Dion
  - V. Dion is an undergraduate medical education student, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec, Canada. He was a research assistant to the Paul Grand'Maison de la Société des médecins de l'Université de Sherbrooke research chair in medical education, Sherbrooke, Québec, Canada, at the time this work was completed.
- Christina St-Onge
  - C. St-Onge is professor, Department of Medicine, Faculty of Medicine and Health Sciences, Université de Sherbrooke, and the Paul Grand'Maison de la Société des médecins de l'Université de Sherbrooke research chair in medical education, Sherbrooke, Québec, Canada; ORCID: https://orcid.org/0000-0001-5313-0456
- Ilona Bartman
  - I. Bartman is medical education research associate, Medical Council of Canada, Ottawa, Ontario, Canada; ORCID: https://orcid.org/0000-0002-2056-479X
- Claire Touchie
  - C. Touchie is professor of medicine, University of Ottawa, Ottawa, Ontario, Canada. She was chief medical education officer, Medical Council of Canada, Ottawa, Ontario, Canada, at the time this work was completed; ORCID: https://orcid.org/0000-0001-7926-9720
- Debra Pugh
  - D. Pugh is medical education advisor, Medical Council of Canada, and associate professor, Department of Medicine, University of Ottawa and The Ottawa Hospital, Ottawa, Ontario, Canada; ORCID: https://orcid.org/0000-0003-4076-9669
3. Halman S, Fu AYN, Pugh D. Entrustment within an objective structured clinical examination (OSCE) progress test: Bridging the gap towards competency-based medical education. Medical Teacher 2020; 42:1283-1288. PMID: 32805146; DOI: 10.1080/0142159x.2020.1803251.
Abstract
PURPOSE Progress testing aligns well with competency-based medical education (CBME) frameworks, which stress the importance of continuous improvement. Entrustment is a useful assessment concept in CBME models. The purpose of this study was to explore the use of an entrustability rating scale within the context of an objective structured clinical examination (OSCE) Progress Test. METHODS A 9-case OSCE Progress Test was administered to Internal Medicine residents (PGYs 1-4). Residents were assessed using a checklist (CL), global rating scale (GRS), training level rating scale (TLRS), and entrustability scale (ENT). Reliability was calculated using Cronbach's alpha. Differences in performance by training year were explored using ANOVA, and effect sizes were calculated using partial eta-squared. Examiners completed a post-examination survey. RESULTS Ninety-one residents and forty-two examiners participated in the OSCE. Inter-station reliability was high for all instruments. There was an overall effect of training level for all instruments (p < 0.001), and effect sizes were large. Of the examiners, 88% completed the survey; most (62%) indicated feeling comfortable making entrustment decisions during the OSCE. CONCLUSIONS An entrustability scale can be used in an OSCE Progress Test to generate highly reliable ratings that discriminate between learners at different levels of training.
Affiliation(s)
- Samantha Halman
  - Department of Medicine, The Ottawa Hospital, Ottawa, Ontario, Canada
  - Faculty of Medicine, The University of Ottawa, Ottawa, Ontario, Canada
- Angel Yi Nam Fu
  - Faculty of Medicine, The University of Ottawa, Ottawa, Ontario, Canada
- Debra Pugh
  - Department of Medicine, The Ottawa Hospital, Ottawa, Ontario, Canada
  - Faculty of Medicine, The University of Ottawa, Ottawa, Ontario, Canada
  - Medical Council of Canada, Ottawa, Ontario, Canada
4. Schüttpelz-Brauns K, Hecht M, Hardt K, Karay Y, Zupanic M, Kämmer JE. Institutional strategies related to test-taking behavior in low stakes assessment. Advances in Health Sciences Education 2020; 25:321-335. PMID: 31641942; PMCID: PMC7210238; DOI: 10.1007/s10459-019-09928-y.
Abstract
Low stakes assessment, in which students' performance is not graded, has received increasing attention in recent years. It is used in formative assessments to guide the learning process as well as in large-scale assessments to monitor educational programs. Yet such assessments suffer from high variation in students' test-taking effort. We aimed to identify institutional strategies related to serious test-taking behavior in low stakes assessment, to provide medical schools with practical recommendations on how test-taking effort might be increased. First, we identified strategies already used by medical schools to increase serious test-taking behavior on the low stakes Berlin Progress Test (BPT). Strategies that could be assigned to Ryan and Deci's self-determination theory were chosen for analysis. We conducted the study at nine medical schools in Germany and Austria with a total of 108,140 observations in an established low stakes assessment. A generalized linear mixed-effects model was used to assess the association between institutional strategies and the odds that students take the BPT seriously. Overall, two institutional strategies were found to be positively related to more serious test-taking behavior: discussing low test performance with the mentor and consequences for not participating. Giving choice was negatively related to serious test-taking behavior; at medical schools that presented the BPT as an evaluation, this effect was larger than at medical schools that presented it as an assessment.
Affiliation(s)
- Katrin Schüttpelz-Brauns
  - Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany
  - Institute of Cognitive and Clinical Neuroscience, Central Institute for Mental Health in Mannheim, J5, 68159 Mannheim, Germany
- Martin Hecht
  - Faculty of Life Sciences, Humboldt-Universität zu Berlin, Unter den Linden 6, 10099 Berlin, Germany
- Katinka Hardt
  - Faculty of Life Sciences, Humboldt-Universität zu Berlin, Unter den Linden 6, 10099 Berlin, Germany
- Yassin Karay
  - Medical Faculty, University of Cologne, Joseph-Stelzmann-Straße 20 (Building 42), 50931 Cologne, Germany
- Michaela Zupanic
  - Faculty of Health, Witten/Herdecke University, Alfred-Herrhausen-Straße 50, 58448 Witten, Germany
- Juliane E. Kämmer
  - AG Progress Test Medizin, Charité Universitätsmedizin Berlin, Hannoversche Straße 19, 10115 Berlin, Germany
  - Center for Adaptive Rationality, Max Planck Institute for Human Development, Lentzeallee 94, 14195 Berlin, Germany
5. The European Hematology Exam: The Next Step toward the Harmonization of Hematology Training in Europe. HemaSphere 2020; 3:e291. PMID: 31942544; PMCID: PMC6919465; DOI: 10.1097/hs9.0000000000000291.
6. Ali K, Cockerill J, Zahra D, Tredwin C, Ferguson C. Impact of progress testing on the learning experiences of students in medicine, dentistry and dental therapy. BMC Medical Education 2018; 18:253. PMID: 30413204; PMCID: PMC6230280; DOI: 10.1186/s12909-018-1357-1.
Abstract
AIMS To investigate the impact of progress testing on the learning experiences of undergraduate students in three programs, namely medicine, dentistry and dental therapy. METHODS Participants were invited to respond to an online questionnaire to share their perceptions and experiences of progress testing. Responses were recorded anonymously, but data on program, year of study, age, gender, and ethnicity were also captured on a voluntary basis. RESULTS A total of 167 participants completed the questionnaire, a response rate of 27.2%. These included 96 BMBS students (27.4%), 56 BDS students (24.7%), and 15 BScDTH students (39.5%). A 3-Program (BMBS, BDS, BScDTH) by 8-Topic (A-H) mixed analysis of variance (ANOVA) was conducted on the questionnaire responses. This revealed statistically significant main effects of Program and Topic, as well as a statistically significant interaction between the two (i.e. the pattern of topic differences differed across programs). CONCLUSIONS Undergraduate students in medicine, dentistry, and dental therapy and hygiene regarded progress testing as a useful assessment to support their learning needs. However, compared with students in dentistry and dental therapy and hygiene, medical students' perceptions were less positive in several respects.
Affiliation(s)
- Kamran Ali
  - Faculty of Medicine and Dentistry, University of Plymouth, Plymouth, UK
  - Peninsula Dental School, University of Plymouth, C523 Portland Square, Drake Circus, Plymouth PL4 8AA, UK
- Daniel Zahra
  - Faculty of Medicine and Dentistry, University of Plymouth, Plymouth, UK
- Christopher Tredwin
  - Peninsula Dental School, University of Plymouth, C523 Portland Square, Drake Circus, Plymouth PL4 8AA, UK
- Colin Ferguson
  - Faculty of Medicine and Dentistry, University of Plymouth, Plymouth, UK
7. Karay Y, Schauber SK. A validity argument for progress testing: Examining the relation between growth trajectories obtained by progress tests and national licensing examinations using a latent growth curve approach. Medical Teacher 2018; 40:1123-1129. PMID: 29950124; DOI: 10.1080/0142159x.2018.1472370.
Abstract
Background: Progress testing is a longitudinal assessment that aims to track students' development of knowledge, and the approach is used in many medical schools internationally. Although progress tests are longitudinal in nature, and their developmental focus is a key advantage, individual students' learning trajectories have to date played only a minor role in the use of the information obtained through progress testing. Methods: We investigate the extent to which between-person differences in initial levels of performance and within-person rates of growth can be regarded as distinct components of students' development, and analyze the extent to which these two components are related to performance on national licensing examinations, using a latent growth curve model. Results: Both higher initial levels of performance and steeper growth are positively related to long-term outcomes as measured by performance on national licensing examinations. We interpret these findings as evidence of progress tests' suitability for monitoring students' growth of knowledge across the course of medical training. Conclusions: This study indicates that individual development as captured by formative progress tests is related to performance in high-stakes assessments. Future studies may put more focus on the use of between-person differences in growth of knowledge.
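A latent growth curve model jointly estimates each student's initial level (intercept) and rate of growth (slope) as latent variables across the whole sample. As a deliberately crude stand-in, the two components can be illustrated by fitting an ordinary least squares line to one student's scores across test occasions; the data below are invented for illustration.

```python
def fit_line(xs, ys):
    """Ordinary least squares fit of y = intercept + slope * x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    slope = sxy / sxx                     # within-person rate of growth
    intercept = mean_y - slope * mean_x   # initial level at occasion 0
    return intercept, slope

# One invented student: percentage scores on four consecutive progress tests.
occasions = [0, 1, 2, 3]
scores = [40, 50, 62, 70]
intercept, slope = fit_line(occasions, scores)  # about 40.2 and 10.2
```

In the actual study, the latent growth curve model pools all students, estimating the variances of intercepts and slopes and their relation to licensing-exam scores, rather than fitting each trajectory separately as done here.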
Affiliation(s)
- Yassin Karay
  - Dean's Office Student Affairs, Medical Faculty of the University of Cologne, Cologne, Germany
- Stefan K Schauber
  - Centre for Educational Measurement (CEMO), Faculty of Educational Sciences, University of Oslo, Oslo, Norway
  - Centre for Health Sciences Education, Faculty of Medicine, University of Oslo, Oslo, Norway
8. Rutgers DR, van Schaik JPJ, van Lankeren W, van Raamt F, ten Cate TJ. Resident and Faculty Attitudes Toward the Dutch Radiology Progress Test as It Transitions from a Formative to a Summative Measure of Licensure Eligibility. Medical Science Educator 2018; 28:639-647. PMID: 30931160; PMCID: PMC6404798; DOI: 10.1007/s40670-018-0605-7.
Abstract
BACKGROUND Progress testing, a regularly administered comprehensive test of a complete knowledge domain, usually serves to provide learners feedback and has a formative nature. OBJECTIVE Our study aimed to investigate the acceptability of introducing a summative component in the postgraduate Dutch Radiology Progress Test (DRPT) among residents and program directors in a competency-based training program. METHODS A 15-item questionnaire with 3 items on acceptability of summative postgraduate knowledge testing, 7 on acceptability of the summative DRPT regulations, 4 on self-reported educational effects, and 1 open comment item was distributed nationally among 349 residents and 81 radiology program directors. RESULTS The questionnaire was filled out by 330 residents (95%) and 48 program directors (59%). Summative postgraduate knowledge testing was regarded as acceptable by both groups, but more so by program directors than residents. The transition toward summative assessment in the DRPT was received neutrally to slightly positively by residents, while program directors regarded it as an improvement and estimated the summative criteria to be lighter and less stressful than did residents. The residents' self-reported educational effects of summative assessment in the DRPT were limited, whereas program directors expected a greater end-of-training knowledge improvement than residents. CONCLUSIONS Both residents and program directors support summative postgraduate knowledge testing, although it is more accepted by program directors. Residents receive summative radiological progress testing neutrally to slightly positively, while program directors generally value it more positively than residents. Program directors should be aware of these different perspectives when introducing or developing summative progress testing in residency programs.
Affiliation(s)
- D. R. Rutgers
  - Department of Radiology, University Medical Center, Utrecht, The Netherlands
- J. P. J. van Schaik
  - Department of Radiology, University Medical Center, Utrecht, The Netherlands
- W. van Lankeren
  - Department of Radiology, Erasmus University, Rotterdam, The Netherlands
- F. van Raamt
  - Department of Radiology, Gelre Hospital, Apeldoorn, The Netherlands
- Th. J. ten Cate
  - Center for Research and Development of Education, University Medical Center, Utrecht, The Netherlands
9.
Abstract
This paper discusses the advantages of progress testing. A utopia is described in which medical schools would work together to develop and administer progress testing. This would lead to a significant reduction in cost, an increase in the quality of measurement, and phenomenal feedback to learner and school. Progress testing would also provide more freedom and resources for more creative in-school assessment, and it would be an educationally attractive alternative for the creation of cognitive licensing exams. A utopia is always far away in the future, but by formulating a vision for that future we may engage in discussions on how to get there.
10. Heeneman S, Schut S, Donkers J, van der Vleuten C, Muijtjens A. Embedding of the progress test in an assessment program designed according to the principles of programmatic assessment. Medical Teacher 2017; 39:44-52. PMID: 27646870; DOI: 10.1080/0142159x.2016.1230183.
Abstract
BACKGROUND Progress tests (PTs) are used to assess students on topics from all medical disciplines, and progress testing is usually one of the assessment methods for the cognitive domain. There is limited knowledge of how the positioning of the PT in a program of assessment (PoA) influences students' PT scores, use of PT feedback, and perceived learning value. METHODS We compared PT total scores and use of a PT feedback system (ProF) in two medical courses, where the PT is either used as a summative assessment or embedded in a comprehensive PoA and used formatively. In addition, an interview study was used to explore students' perceptions of the use of PT feedback and its perceived learning value. RESULTS PT total scores were higher, with considerable effect sizes (ESs), and students made more use of ProF when the PT was embedded in a comprehensive PoA. Analysis of feedback in the portfolio stimulated students to look for patterns in PT results, link the PT to other assessment results, follow up on learning objectives, and integrate the PT into their learning for the entire PoA. CONCLUSIONS Embedding the PT in an assessment program designed according to the principles of programmatic assessment positively affects PT total scores, use of PT feedback, and perceived learning value.
Affiliation(s)
- Sylvia Heeneman
  - Department of Pathology, Maastricht University, Maastricht, The Netherlands
  - School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
- Suzanne Schut
  - Department of Educational Development and Research, Maastricht University, Maastricht, The Netherlands
  - School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
- Jeroen Donkers
  - Department of Educational Development and Research, Maastricht University, Maastricht, The Netherlands
  - School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
- Cees van der Vleuten
  - Department of Educational Development and Research, Maastricht University, Maastricht, The Netherlands
  - School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
- Arno Muijtjens
  - Department of Educational Development and Research, Maastricht University, Maastricht, The Netherlands
  - School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
11. van der Vleuten CPM, Heeneman S. On the issue of costs in programmatic assessment. Perspectives on Medical Education 2016; 5:303-307. PMID: 27638392; PMCID: PMC5035281; DOI: 10.1007/s40037-016-0295-z.
Abstract
Programmatic assessment requires labour- and cost-intensive activities such as feedback in quantitative and qualitative form, a system of learner support in guiding feedback uptake and self-directed learning, and a decision-making arrangement in which committees of experts make a holistic professional judgment while using due-process measures to achieve trustworthy decisions. This can only be afforded if we redistribute the resources of assessment in a curriculum. Several strategies are suggested. One is to introduce progress testing as a replacement for costly cognitive assessment formats in modules. In addition, all assessments should be replaced by formats that are maximally aligned with the learning tasks. For performance-based assessment, OSCEs should be used sparingly, while education- and work-embedded assessment should be maximized as part of the routine of ongoing instruction and assessment. Information technology may support affordable feedback strategies, as well as the creation of a paper trail on performance. By making more dramatic choices in the way we allocate resources to assessment, the cost-intensive activities of programmatic assessment may be realized.
Affiliation(s)
- Cees P M van der Vleuten
  - Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
- Sylvia Heeneman
  - Department of Pathology, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
12. Given K, Hannigan A, McGrath D. Red, yellow and green: What does it mean? How the progress test informs and supports student progress. Medical Teacher 2016; 38:1025-1032. PMID: 27007618; DOI: 10.3109/0142159x.2016.1147533.
Abstract
OBJECTIVES Most medical schools using progress tests (PTs) provide feedback through a traffic-light system of green (satisfactory), yellow (borderline) and red (unsatisfactory) categories. There is little research assessing students' perceptions or usage of this feedback, so this study set out to determine the effectiveness of formative PTs at informing and supporting student progress. METHODS A mixed methods study was performed, involving a retrospective analysis of a results database to establish the predictive validity of PT categories and 11 semi-structured interviews to explore students' perceptions of PT feedback in a graduate entry medical programme. RESULTS Quantitative analysis revealed that students who always scored green performed better in their summative exams and graduated with a higher final degree than those who received a yellow or red category at least once. Qualitative analysis revealed that just over half of the interviewed students perceived the PT as having informed their progress. Most participants agreed that the current feedback is insufficient and does not guide their ongoing learning. CONCLUSION While this study demonstrated that the PT is a useful predictive tool for informing student progress, in its current format it is not fulfilling a truly formative role and does not sufficiently support student progress.
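A traffic-light scheme of this kind is, at its core, a banding of each score relative to the cohort. The paper does not give the schools' actual cut-offs, so the z-score thresholds in this sketch are invented purely for illustration:

```python
def traffic_light(score, cohort_mean, cohort_sd):
    """Band a progress-test score relative to the cohort.

    The z-score thresholds below are illustrative assumptions,
    not the cut-offs used by any particular school.
    """
    z = (score - cohort_mean) / cohort_sd
    if z < -1.0:
        return "red"      # unsatisfactory
    if z < -0.5:
        return "yellow"   # borderline
    return "green"        # satisfactory

# E.g. with a cohort mean of 60 and SD of 10, a score of 53 falls in the
# borderline band under these assumed thresholds.
band = traffic_light(53, cohort_mean=60, cohort_sd=10)
```

The study's quantitative finding can be read in these terms: students whose band was "green" on every administration went on to better summative results than students who ever fell into "yellow" or "red".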
Affiliation(s)
- Karen Given
  - Graduate-Entry Medical School, University of Limerick, Limerick, Ireland
- Ailish Hannigan
  - Graduate-Entry Medical School, University of Limerick, Limerick, Ireland
- Deirdre McGrath
  - Graduate-Entry Medical School, University of Limerick, Limerick, Ireland
13. Tio RA, Schutte B, Meiboom AA, Greidanus J, Dubois EA, Bremers AJA. The progress test of medicine: the Dutch experience. Perspectives on Medical Education 2016; 5:51-55. PMID: 26754310; PMCID: PMC4754221; DOI: 10.1007/s40037-015-0237-1.
Abstract
Progress testing in the Netherlands has a long history. It was first introduced at one medical school, which had a problem-based learning (PBL) curriculum from the start; later, other schools with and without PBL curricula joined. At present, approximately 10,000 students sit a test every three months. The annual progress exam is not a single test but a series of four tests per annum, which are summative in the end. The current situation is discussed with emphasis on the formative and summative aspects, giving the reader insight into the way progress testing can be used as feedback for students and schools.
Affiliation(s)
- René A Tio
  - Department of Cardiology, University Medical Center Groningen, University of Groningen, PO Box 30.001, 9700 RB Groningen, The Netherlands
- Bert Schutte
  - Maastricht University, Maastricht, The Netherlands
- Janke Greidanus
  - University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Eline A Dubois
  - Leiden University Medical Center, Leiden, The Netherlands
- Andre J A Bremers
  - Department of Surgery, Radboud University Medical Center, Nijmegen, The Netherlands
14. Van Der Vleuten CPM, Schuwirth LWT, Driessen EW, Govaerts MJB, Heeneman S. Twelve Tips for programmatic assessment. Medical Teacher 2015; 37:641-646. PMID: 25410481; DOI: 10.3109/0142159x.2014.973388.
Abstract
Programmatic assessment is an integral approach to the design of an assessment program with the intent to optimise its learning function, its decision-making function and its curriculum quality-assurance function. Individual methods of assessment, purposefully chosen for their alignment with the curriculum outcomes and their information value for the learner, the teacher and the organisation, are seen as individual data points. The information value of these individual data points is maximised by giving feedback to the learner. There is a decoupling of assessment moment and decision moment. Intermediate and high-stakes decisions are based on multiple data points after a meaningful aggregation of information and supported by rigorous organisational procedures to ensure their dependability. Self-regulation of learning, through analysis of the assessment information and the attainment of the ensuing learning goals, is scaffolded by a mentoring system. Programmatic assessment-for-learning can be applied to any part of the training continuum, provided that the underlying learning conception is constructivist. This paper provides concrete recommendations for implementation of programmatic assessment.
Affiliation(s)
- E W Driessen
- Maastricht University, Maastricht, The Netherlands
- S Heeneman
- Maastricht University, Maastricht, The Netherlands
15. Designing Dialogic E-Learning in Pharmacy Professionalism Using Calibrated Feedback Loops (CFLs). PHARMACY 2013. [DOI: 10.3390/pharmacy1010053] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3]
16. Wade L, Harrison C, Hollands J, Mattick K, Ricketts C, Wass V. Student perceptions of the progress test in two settings and the implications for test deployment. ADVANCES IN HEALTH SCIENCES EDUCATION: THEORY AND PRACTICE 2012; 17:573-583. [PMID: 22041871] [DOI: 10.1007/s10459-011-9334-z] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.5]
Abstract
BACKGROUND The Progress Test (PT) was developed to assess student learning within integrated curricula. Whilst it is effective in promoting and rewarding deep approaches to learning in some settings, we hypothesised that implementation of the curriculum (design and assessment) may impact on students' preparation for the PT and their learning. AIM To compare students' perceptions of and preparations for the PT at two medical schools. METHOD Focus groups were used to generate items for a questionnaire. This was piloted, refined, and then delivered at both schools. Exploratory factor analysis identified the main factors underpinning response patterns. ANOVA was used to compare differences in response by school, year group and gender. RESULTS Response rates were 640 (57%) and 414 (47%) at Schools A and B, respectively. Three major factors were identified: (1) the PT's ability to assess academic learning; (2) its ability to support clinical learning; and (3) its impact on exam preparation. Significant differences were found between settings. In the school with early clinical contact, more frequent PTs and no end-of-unit tests, students were more likely to appreciate the PT as a support for learning, perceive it as fair and valid, and use a deeper approach to learning, but they also spent longer preparing for the test. CONCLUSION Different approaches to the delivery of the PT can have a significant impact on student study patterns. The learning environment has an important impact on student perceptions of assessment and approach to learning. Careful decisions about PT deployment must be taken to ensure its optimal impact.
17. Karay Y, Schauber SK, Stosch C, Schuettpelz-Brauns K. Can computer-based assessment enhance the acceptance of formative multiple choice exams? A utility analysis. MEDICAL TEACHER 2012; 34:292-296. [PMID: 22404878] [DOI: 10.3109/0142159x.2012.652707] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3]
Abstract
BACKGROUND Students' motivation to participate is one of the main challenges in formative assessment. The utility framework identifies potential points of intervention for improving the acceptance of formative assessment [Van Der Vleuten C. 1996. The assessment of professional competence: Developments, research and practical implications. Adv Health Sci Educ 1(1):41-67]. At the Medical Faculty of the University of Cologne, the paper-based version of the Berlin Progress Test has been transformed into a computer-based version providing immediate feedback. AIM To investigate whether the introduction of computer-based assessment (CBA) enhances the acceptance of formative assessment relative to paper-based assessment (PBA). METHODS In a retrospective cohort study (PBA: N = 2597, CBA: N = 2712), we surveyed students' overall acceptance of the two forms of assessment, analyzed their comments, and analyzed their test behavior, categorizing students as "serious" or "non-serious" test takers. RESULTS In the preclinical phase of medical education, no differences were found in overall acceptance of the two forms of assessment (p > 0.05). In the clinical phase, differences in favor of CBA were found in overall acceptance (p < 0.05), the proportion of positive comments (p < 0.001), and the proportion of serious participants (p < 0.001). CONCLUSIONS Introduction of immediate feedback via CBA can enhance the acceptance and therefore the utility of formative assessment.
Affiliation(s)
- Yassin Karay
- Dean's Office for Student Affairs, Medical Faculty of the University of Cologne, Germany.
18. Schuwirth LWT, van der Vleuten CPM. The use of progress testing. PERSPECTIVES ON MEDICAL EDUCATION 2012; 1:24-30. [PMID: 23316456] [PMCID: PMC3540387] [DOI: 10.1007/s40037-012-0007-2] [Citation(s) in RCA: 73] [Impact Index Per Article: 6.1]
Abstract
Progress testing is gaining ground rapidly after having been used almost exclusively in Maastricht and Kansas City. This increased popularity is understandable considering the intuitive appeal longitudinal testing has as a way to predict future competence and performance. Yet there are also important practicalities. Progress testing is longitudinal assessment in that it is based on subsequent equivalent, yet different, tests. The results of these are combined to determine the growth of functional medical knowledge for each student, enabling more reliable and valid decision making about promotion to a next study phase. The longitudinal integrated assessment approach has a demonstrable positive effect on student learning behaviour by discouraging binge learning. Furthermore, it leads to more reliable decisions as well as good predictive validity for future competence or retention of knowledge. Also, because of its integration and independence of local curricula, it can be used in a multi-centre collaborative production and administration framework, reducing costs, increasing efficiency and allowing for constant benchmarking. Practicalities include the relative unfamiliarity of faculty with the concept, the fact that remediation for students with a series of poor results is time consuming, the need to embed the instrument carefully into the existing assessment programme and the importance of equating subsequent tests to minimize test-to-test variability in difficulty. Where it has been implemented collaboratively, progress testing has led to satisfaction, provided the practicalities are heeded well.
Affiliation(s)
- Lambert W. T. Schuwirth
- Flinders Innovation in Clinical Education, Flinders University, Adelaide, Australia
- Department of Educational Development and Research, Maastricht University, Maastricht, the Netherlands
- Cees P. M. van der Vleuten
- Chair, Department of Educational Development and Research, Maastricht University, Maastricht, the Netherlands
19. Wrigley W, van der Vleuten CPM, Freeman A, Muijtjens A. A systemic framework for the progress test: strengths, constraints and issues: AMEE Guide No. 71. MEDICAL TEACHER 2012; 34:683-697. [PMID: 22905655] [DOI: 10.3109/0142159x.2012.704437] [Citation(s) in RCA: 72] [Impact Index Per Article: 6.0]
Abstract
There has been increasing use and significance of progress testing in medical education. It is used in many ways and with several formats to reflect the variety of curricula and assessment purposes. These developments have occurred alongside a recognised sensitivity to the error variance inherent in multiple-choice tests, from which challenges to the validity and reliability of progress testing have arisen. This Guide presents a generic, systemic framework to help identify and explore improvements in the quality and defensibility of progress test data. The framework draws on the combined experience of the Dutch consortium, an individual medical school in the United Kingdom, and the bulk of the progress test literature to date. It embeds progress testing as a quality-controlled assessment tool for improving learning, teaching and the demonstration of educational standards. The paper describes strengths, highlights constraints and explores issues for improvement. These may assist in the establishment of new progress testing in medical education programmes. They can also guide the evaluation and improvement of existing programmes.
Affiliation(s)
- William Wrigley
- Department of Educational Development and Research, Maastricht University, The Netherlands
20. van der Vleuten CPM, Schuwirth LWT, Driessen EW, Dijkstra J, Tigelaar D, Baartman LKJ, van Tartwijk J. A model for programmatic assessment fit for purpose. MEDICAL TEACHER 2012; 34:205-214. [PMID: 22364452] [DOI: 10.3109/0142159x.2012.652239] [Citation(s) in RCA: 413] [Impact Index Per Article: 34.4]
Abstract
We propose a model for programmatic assessment in action, which simultaneously optimises assessment for learning and assessment for decision making about learner progress. This model is based on a set of assessment principles that are interpreted from empirical research. It specifies cycles of training, assessment and learner support activities that are complemented by intermediate and final moments of evaluation on aggregated assessment data points. A key principle is that individual data points are maximised for learning and feedback value, whereas high-stakes decisions are based on the aggregation of many data points. Expert judgement plays an important role in the programme. Fundamental is the notion of sampling and bias reduction to deal with the inevitable subjectivity of this type of judgement. Bias reduction is further sought in procedural assessment strategies derived from criteria for qualitative research. We discuss a number of challenges and opportunities around the proposed model. One of its prime virtues is that it enables assessment to move beyond the dominant psychometric discourse, with its focus on individual instruments, towards a systems approach to assessment design underpinned by empirically grounded theory.
Affiliation(s)
- C P M van der Vleuten
- Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, The Netherlands.
21.
Affiliation(s)
- Steven Burr
- School of Biomedical Sciences, Faculty of Medicine and Health Sciences, Queen's Medical Centre, University of Nottingham, Nottingham NG7 2UH
- Elizabeth Brodier
- School of Biomedical Sciences, Faculty of Medicine and Health Sciences, Queen's Medical Centre, University of Nottingham, Nottingham NG7 2UH