1. Watling C, Shaw J, Field E, Ginsburg S. 'For the most part it works': Exploring how authors navigate peer review feedback. Medical Education 2023;57:151-160. PMID: 36031758. DOI: 10.1111/medu.14932.
Abstract
BACKGROUND: Peer review aims to provide meaningful feedback to research authors so that they may improve their work, and yet it constitutes a particularly challenging context for the exchange of feedback. We explore how research authors navigate the process of interpreting and responding to peer review feedback, in order to elaborate how feedback functions when some of the conditions thought to be necessary for it to be effective are not met.

METHODS: Using constructivist grounded theory methodology, we interviewed 17 recently published health professions education researchers about their experiences with the peer review process. Data collection and analysis were concurrent and iterative. We used constant comparison to identify themes and to develop a conceptual model of how feedback functions in this setting.

RESULTS: Although participants expressed faith in peer review, they acknowledged that the process was emotionally trying and raised concerns about its consistency and credibility. These potential threats were mitigated by factors including time, team support, experience and the exercise of autonomy. Additionally, the perceived engagement of reviewers and the cultural norms and expectations surrounding the process strengthened authors' willingness and capacity to respond productively. Our analysis suggests a model of feedback within which its perceived usefulness turns on the balance of threats and countermeasures.

CONCLUSIONS: Feedback is a balancing act. Although threats to the productive uptake of peer review feedback abound, these threats may be neutralised by a range of countermeasures. Among these, opportunities for autonomy and cultural normalisation of both the professional responsibility to engage with feedback and the challenge of doing so may be especially influential and may have implications beyond the peer review setting.
Affiliation(s)
- Christopher Watling
- Centre for Education Research and Innovation, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
- Jennifer Shaw
- Centre for Education Research and Innovation, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
- Emily Field
- Centre for Education Research and Innovation, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
- Shiphra Ginsburg
- Department of Medicine, University of Toronto, Toronto, Ontario, Canada
2. Pippitt KA, Moore KB, Lindsley JE, Cariello PF, Smith AG, Formosa T, Moser K, Morton DA, Colbert-Getz JM, Chow CJ. Assessment for Learning with Ungraded and Graded Assessments. Medical Science Educator 2022;32:1045-1054. PMID: 36276764. PMCID: PMC9584017. DOI: 10.1007/s40670-022-01612-y.
Abstract
Introduction: Assessment for learning has many benefits, but learners will still encounter high-stakes decisions about their performance throughout training. It is unknown if assessment for learning can be promoted with a combination model where scores from some assessments are factored into course grades and scores from other assessments are not used for course grading.

Methods: At the University of Utah School of Medicine, year 1-2 medical students (MS) completed multiple-choice question quiz assessments and final examinations in six systems-based science courses. Quiz and final examination performance counted toward course grades for MS2017-MS2018. Starting with the MS2020 cohort, quizzes no longer counted toward course grades. Quiz, final examination, and Step 1 scores were compared between ungraded-quiz and graded-quiz cohorts with independent samples t-tests. Student and faculty feedback was collected.

Results: Quiz performance was not different for the ungraded and graded cohorts (p = 0.173). Ungraded cohorts scored 4% higher on final examinations than graded cohorts (p ≤ 0.001, d = 0.88). Ungraded cohorts scored above the national average and 11 points higher on Step 1 compared to graded cohorts, who had scored below the national average (p ≤ 0.001, d = 0.64). During the study period, Step 1 scores increased by 2 points nationally. Student feedback was positive, and faculty felt the change improved their relationship with students.

Discussion: The change to ungraded quizzes did not negatively affect final examination or Step 1 performance, suggesting a combination of ungraded and graded assessments can effectively promote assessment for learning.
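The cohort comparison above rests on an independent samples t-test paired with a Cohen's d effect size. As a minimal sketch of that computation (the score arrays below are invented placeholders, not the study's data):

```python
import numpy as np
from scipy import stats

# Hypothetical final-examination scores for two cohorts (not the study's data).
graded = np.array([74.0, 78.0, 71.0, 80.0, 76.0, 73.0, 77.0, 75.0])
ungraded = np.array([79.0, 82.0, 77.0, 85.0, 81.0, 78.0, 83.0, 80.0])

# Independent samples t-test (assumes equal variances, as the classic test does).
t_stat, p_value = stats.ttest_ind(ungraded, graded, equal_var=True)

# Cohen's d using the pooled standard deviation.
n1, n2 = len(ungraded), len(graded)
pooled_sd = np.sqrt(((n1 - 1) * ungraded.std(ddof=1) ** 2 +
                     (n2 - 1) * graded.std(ddof=1) ** 2) / (n1 + n2 - 2))
d = (ungraded.mean() - graded.mean()) / pooled_sd

print(f"t = {t_stat:.2f}, p = {p_value:.4f}, d = {d:.2f}")
```

A d of 0.88, as reported for the final examinations, would conventionally be read as a large effect.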
Affiliation(s)
- Karly A. Pippitt
- Department of Family and Preventive Medicine, University of Utah School of Medicine, 317 Chipeta Way, Suite A, Salt Lake City, UT 84108 USA
- Community Faculty, University of Utah School of Medicine, Salt Lake City, UT USA
- Kathryn B. Moore
- Department of Neurobiology, University of Utah School of Medicine, Salt Lake City, UT USA
- Janet E. Lindsley
- Department of Biochemistry, University of Utah School of Medicine, Salt Lake City, UT USA
- Curriculum, University of Utah School of Medicine, Salt Lake City, UT USA
- Paloma F. Cariello
- Department of Internal Medicine, University of Utah School of Medicine, Salt Lake City, UT USA
- Health Equity, Diversity, and Inclusion, University of Utah School of Medicine, Salt Lake City, UT USA
- Andrew G. Smith
- Department of Pediatrics, University of Utah School of Medicine, Salt Lake City, UT USA
- Tim Formosa
- Department of Biochemistry, University of Utah School of Medicine, Salt Lake City, UT USA
- Karen Moser
- Department of Pathology, University of Utah School of Medicine, Salt Lake City, UT USA
- David A. Morton
- Department of Neurobiology, University of Utah School of Medicine, Salt Lake City, UT USA
- Jorie M. Colbert-Getz
- Department of Internal Medicine, University of Utah School of Medicine, Salt Lake City, UT USA
- Education Quality Improvement, University of Utah School of Medicine, Salt Lake City, UT USA
- Candace J. Chow
- Department of Internal Medicine, University of Utah School of Medicine, Salt Lake City, UT USA
- Education Research, University of Utah School of Medicine, Salt Lake City, UT USA
3. Clemett VJ, Raleigh M. The validity and reliability of clinical judgement and decision-making skills assessment in nursing: A systematic literature review. Nurse Education Today 2021;102:104885. PMID: 33894591. DOI: 10.1016/j.nedt.2021.104885.
Abstract
OBJECTIVES: To appraise the validity and reliability of approaches to assessing the clinical decision-making skills of nurses, and to use the findings to inform the assessment of students as they transition to newly qualified nurses.

DESIGN: The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were used to conduct the review.

DATA SOURCES: Medline, CINAHL and the British Nursing Index were searched from inception to November 2019.

REVIEW METHODS: Studies were grouped according to their assessment approach following a competency framework, with findings presented as a narrative synthesis.

RESULTS: 38 articles were included in the review; they assessed clinical decision-making in a variety of settings: clinical practice, simulation, written examinations and self-assessment. Multi-level rubric and checklist approaches demonstrated good validity and reliability in practice and simulation settings, and the former was effective at differentiating between students at different stages of their training. Written, case study examinations were also effective at assessing clinical decision-making, although an optimum structure for their presentation was not possible to discern. Students tended to score themselves more highly than faculty staff when undertaking rubric-based self-assessments.

CONCLUSIONS: Findings suggest that the best approach to assessing clinical decision-making for final year students is to use several low-stakes, snap-shot summative assessments in practice environments, marked using a multi-level observational rubric. To assure reliability, it is recommended that a small team of expert practice assessors undergo regular training and peer review, have protected time to complete their assessor role and are appropriately supported.
Affiliation(s)
- Victoria J Clemett
- King's College London, Florence Nightingale Faculty of Nursing and Midwifery, James Clerk Maxwell Building, 57 Waterloo Road, London SE1 8WA, United Kingdom
- Mary Raleigh
- King's College London, Florence Nightingale Faculty of Nursing and Midwifery, James Clerk Maxwell Building, 57 Waterloo Road, London SE1 8WA, United Kingdom
4. Kinnear B, Warm EJ, Caretta-Weyer H, Holmboe ES, Turner DA, van der Vleuten C, Schumacher DJ. Entrustment Unpacked: Aligning Purposes, Stakes, and Processes to Enhance Learner Assessment. Academic Medicine 2021;96:S56-S63. PMID: 34183603. DOI: 10.1097/acm.0000000000004108.
Abstract
Educators use entrustment, a common framework in competency-based medical education, in multiple ways, including frontline assessment instruments, learner feedback tools, and group decision making within promotions or competence committees. Within these multiple contexts, entrustment decisions can vary in purpose (i.e., intended use), stakes (i.e., perceived risk or consequences), and process (i.e., how entrustment is rendered). Each of these characteristics can be conceptualized as having 2 distinct poles: (1) purpose has formative and summative, (2) stakes has low and high, and (3) process has ad hoc and structured. For each characteristic, entrustment decisions often do not fall squarely at one pole or the other, but rather lie somewhere along a spectrum. While distinct, these continua can, and sometimes should, influence one another, and can be manipulated to optimally integrate entrustment within a program of assessment. In this article, the authors describe each of these continua and depict how key alignments between them can help optimize value when using entrustment in programmatic assessment within competency-based medical education. As they think through these continua, the authors will begin and end with a case study to demonstrate the practical application as it might occur in the clinical learning environment.
Affiliation(s)
- Benjamin Kinnear
- B. Kinnear is associate professor of internal medicine and pediatrics, Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0003-0052-4130
- Eric J Warm
- E.J. Warm is professor of internal medicine and program director, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0002-6088-2434
- Holly Caretta-Weyer
- H. Caretta-Weyer is assistant professor of emergency medicine, Department of Emergency Medicine, Stanford University School of Medicine, Palo Alto, California; ORCID: https://orcid.org/0000-0002-9783-5797
- Eric S Holmboe
- E.S. Holmboe is chief, research, milestones development and evaluation officer, Accreditation Council for Graduate Medical Education, Chicago, Illinois; ORCID: https://orcid.org/0000-0003-0108-6021
- David A Turner
- D.A. Turner is vice president, Competency-Based Medical Education, American Board of Pediatrics, Chapel Hill, North Carolina
- Cees van der Vleuten
- C. van der Vleuten is professor of education, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands; ORCID: https://orcid.org/0000-0001-6802-3119
- Daniel J Schumacher
- D.J. Schumacher is associate professor of pediatrics, Cincinnati Children's Hospital Medical Center/University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0001-5507-8452
5. Young JQ, Frank JR, Holmboe ES. Advancing Workplace-Based Assessment in Psychiatric Education: Key Design and Implementation Issues. Psychiatric Clinics of North America 2021;44:317-332. PMID: 34049652. DOI: 10.1016/j.psc.2021.03.005.
Abstract
With the adoption of competency-based medical education, assessment has shifted from the traditional classroom domains of "knows" and "knows how" to the workplace domain of "does". This workplace-based assessment has two purposes: assessment of learning (summative feedback) and assessment for learning (formative feedback). What the trainee does becomes the basis for identifying growth edges and determining readiness for advancement and, ultimately, independent practice. High-quality workplace-based assessment programs require thoughtful choices about the framework of assessment, the tools themselves, the platforms used, and the contexts in which the assessments take place, with an emphasis on direct observation.
Affiliation(s)
- John Q Young
- Department of Psychiatry, Donald and Barbara Zucker School of Medicine at Hofstra/Northwell and Zucker Hillside Hospital at Northwell Health, 75-59 263rd Street, Kaufman Building, Glen Oaks, NY 11004, USA
- Jason R Frank
- Department of Emergency Medicine, University of Ottawa, Royal College of Physicians and Surgeons of Canada, 774 Echo Drive, Ottawa, Ontario K1S 5N8, Canada
- Eric S Holmboe
- Accreditation Council for Graduate Medical Education (ACGME), 401 North Michigan Avenue, Chicago, IL 60611, USA
6. Ament Giuliani Franco C, Franco RS, Cecilio-Fernandes D, Severo M, Ferreira MA, de Carvalho-Filho MA. Added value of assessing medical students' reflective writings in communication skills training: a longitudinal study in four academic centres. BMJ Open 2020;10:e038898. PMID: 33158823. PMCID: PMC7651724. DOI: 10.1136/bmjopen-2020-038898.
Abstract
OBJECTIVES: This study describes the development and implementation of a model to assess students' communication skills, highlighting the use of reflective writing. We aimed to evaluate the usefulness of the students' reflections in the assessment of communication skills.

DESIGN: Third-year and fourth-year medical students enrolled in an elective course on clinical communication skills development were assessed using different assessment methods.

SETTING AND PARTICIPANTS: The communication skills course was offered at four universities (three in Brazil and one in Portugal) and included 69 students.

OUTCOME MEASURES: The students were assessed by a Multiple-Choice Questionnaire (MCQ), an objective structured clinical examination (OSCE) and reflective writing narratives. Cronbach's alpha, dimensionality and Pearson's correlation were applied to evaluate the reliability of the assessment methods and their correlations. Reflective writing was assessed by applying the Reflection Evaluation for Enhanced Competencies Tool Rubric (Reflect Score (RS)) to measure the reflections' depth, and the Thematic Score (TS) to map and grade the reflections' themes.

RESULTS: Cronbach's alpha for the MCQ, OSCE global score, TS and RS was, respectively, 0.697, 0.633, 0.784 and 0.850. The interobserver correlations for the TS and RS were, respectively, 0.907 and 0.816. The assessment of reflection using the TS was significantly correlated with the MCQ (r=0.412; p=0.019), OSCE (0.439; p=0.012) and RS (0.410; p=0.020). The RS did not correlate with the MCQ and OSCE.

CONCLUSIONS: Assessing reflection through mapping the themes and analysing the depth of reflective writing expands the assessment of communication skills. While the assessment of reflective themes is related to the cognitive and behavioural domains of learning, reflective depth seems to be a specific competence, not correlated with other assessment methods (possibly a metacognitive domain).
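Several of the reliability figures above are Cronbach's alpha values. A minimal sketch of that computation, assuming a students-by-items score matrix (the scores below are invented for illustration, not the study's data):

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = scores.shape[1]                         # number of items
    item_vars = scores.var(axis=0, ddof=1)      # per-item sample variances
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of each student's total score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Rows = students, columns = items (e.g. rubric criteria scored 1-5).
scores = np.array([
    [3, 4, 3, 5],
    [2, 3, 3, 4],
    [4, 5, 4, 5],
    [1, 2, 2, 3],
    [3, 3, 4, 4],
], dtype=float)

print(f"alpha = {cronbach_alpha(scores):.3f}")
```

Values near the 0.7-0.85 range reported above are conventionally read as acceptable-to-good internal consistency.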
Affiliation(s)
- Renato Soleiman Franco
- Medicine School and Post-Graduate Program in Bioethics, Pontifical Catholic University of Paraná, Curitiba, Brazil
- Dario Cecilio-Fernandes
- Department of Medical Psychology and Psychiatry, School of Medical Sciences, University of Campinas, Campinas, Brazil
- Milton Severo
- Department of Clinical Epidemiology, Predictive Medicine and Public Health and Public Health and Forensic Sciences, and Medical Education Department, University of Porto Medical School, Porto, Portugal
- Maria Amélia Ferreira
- Public Health and Forensic Sciences, and Medical Education Department, University of Porto Faculty of Medicine, Porto, Portugal
- Marco Antonio de Carvalho-Filho
- Internal Medicine, University of Minho School of Medicine, Braga, Portugal
- CEDAR - Center for Educational Development and Research in Health Sciences, University Medical Center Groningen, Groningen, The Netherlands
7. Young JQ, Sugarman R, Schwartz J, O'Sullivan PS. Overcoming the Challenges of Direct Observation and Feedback Programs: A Qualitative Exploration of Resident and Faculty Experiences. Teaching and Learning in Medicine 2020;32:541-551. PMID: 32529844. DOI: 10.1080/10401334.2020.1767107.
Abstract
Problem: Prior studies have reported significant negative attitudes amongst both faculty and residents toward direct observation and feedback. Numerous contributing factors have been identified, including insufficient time for direct observation and feedback, poorly understood purpose, inadequate training, disbelief in the formative intent, inauthentic resident-patient clinical interactions, undermining of resident autonomy, lack of trust between the faculty-resident dyad, and low-quality feedback information that lacks credibility. Strategies are urgently needed to overcome these challenges and more effectively engage faculty and residents in direct observation and feedback. Otherwise, the primary goals of supporting both formative and summative assessment will not be realized and the viability of competency-based medical education will be threatened.

Intervention: Toward this end, recent studies have recommended numerous strategies to overcome these barriers: protected time for direct observation and feedback; ongoing faculty and resident training on goals and bidirectional, co-constructed feedback; repeated direct observations and feedback within a longitudinal resident-supervisor relationship; utilization of assessment tools with evidence for validity; and monitoring for engagement. Given the complexity of the problem, it is likely that bundling multiple strategies together will be necessary to overcome the challenges. The Direct Observation Structured Feedback Program (DOSFP) incorporated many of the recommended features, including protected time for direct observation and feedback within longitudinal faculty-resident relationships. Using a qualitative thematic approach, the authors conducted semi-structured interviews, during February and March 2019, with 10 supervisors and 10 residents. Participants were asked to reflect on their experiences. Interview guide questions explored key themes from the literature on direct observation and feedback. Transcripts were anonymized. Two authors independently and iteratively coded the transcripts. Coding was theory-driven, and differences were discussed until consensus was reached. The authors then explored the relationships between the codes and used a semantic approach to construct themes.

Context: The DOSFP was implemented in a psychiatry continuity clinic for second- and third-year residents.

Impact: Faculty and residents were aligned around the goals. They both perceived the DOSFP as focused on growth rather than judgment, even though residents understood that the feedback had both formative and summative purposes. The DOSFP facilitated educational alliances characterized by trust and respect. With repeated practice within a longitudinal relationship, trainees dropped the performance orientation and described their interactions with patients as authentic. Residents generally perceived the feedback as credible, described feedback quality as high, and valued the two-way conversation. However, when receiving feedback with which they did not agree, residents demurred or, at most, would ask a clarifying question, but then internally discounted the feedback.

Lessons Learned: Direct observation and structured feedback programs that bundle recent recommendations may overcome many of the challenges identified by previous research. Yet residents discounted disagreeable feedback, illustrating a significant limitation and the need for other strategies that help residents reconcile conflict between external data and one's self-appraisal.
Affiliation(s)
- John Q Young
- Department of Psychiatry, Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, New York, USA
- Rebekah Sugarman
- Department of Psychiatry, The Zucker Hillside Hospital at Northwell Health, Glen Oaks, New York, USA
- Jessica Schwartz
- Department of Psychiatry, The Zucker Hillside Hospital at Northwell Health, Glen Oaks, New York, USA
- Patricia S O'Sullivan
- Office of Medical Education, University of California San Francisco, San Francisco, California, USA
8. Schut S, Heeneman S, Bierer B, Driessen E, van Tartwijk J, van der Vleuten C. Between trust and control: Teachers' assessment conceptualisations within programmatic assessment. Medical Education 2020;54:528-537. PMID: 31998987. PMCID: PMC7318263. DOI: 10.1111/medu.14075.
Abstract
OBJECTIVES: Programmatic assessment attempts to facilitate learning through individual assessments that are designed to be low-stakes and are used for high-stakes decisions only when aggregated. In practice, low-stakes assessments have yet to reach their potential as catalysts for learning. We explored how teachers conceptualise assessments within programmatic assessment and how they engage with learners in assessment relationships.

METHODS: We used a constructivist grounded theory approach to explore teachers' assessment conceptualisations and assessment relationships in the context of programmatic assessment. We conducted 23 semi-structured interviews at two different graduate-entry medical training programmes following a purposeful sampling approach. Data collection and analysis were conducted iteratively until we reached theoretical sufficiency. We identified themes using a process of constant comparison.

RESULTS: Teachers conceptualised low-stakes assessments in three different ways: to stimulate and facilitate learning; to prepare learners for the next step; and to use as feedback to gauge the teacher's own effectiveness. Teachers intended to engage in and preserve safe, yet professional and productive, working relationships with learners to enable assessment for learning while securing high-quality performance and achievement of standards. When teachers' assessment conceptualisations were more focused on accounting conceptions, this risked creating tension in the teacher-learner assessment relationship. Teachers struggled between taking control and allowing learners' independence.

CONCLUSIONS: Teachers believe programmatic assessment can have a positive impact on both teaching and student learning. However, teachers' conceptualisations of low-stakes assessments are not focused solely on learning and also involve stakes for teachers. Sampling across different assessments and the introduction of progress committees were identified as important design features to support teachers and preserve the benefits of prolonged engagement in assessment relationships. These insights contribute to the design of effective implementations of programmatic assessment within the medical education context.
Affiliation(s)
- Suzanne Schut
- Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands
- Sylvia Heeneman
- Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands
- Department of Pathology, Cardiovascular Research Institute Maastricht, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
- Beth Bierer
- Education Institute, Cleveland Clinic Lerner College of Medicine, Case Western Reserve University, Cleveland, Ohio, USA
- Erik Driessen
- Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands
- Cees van der Vleuten
- Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands
9. Rutgers DR, van Schaik JPJ, Kruitwagen CLJJ, Haaring C, van Lankeren W, van Raamt AF, ten Cate O. Introducing Summative Progress Testing in Radiology Residency: Little Change in Residents' Test Results After Transitioning from Formative Progress Testing. Medical Science Educator 2020;30:943-953. PMID: 34457753. PMCID: PMC8368876. DOI: 10.1007/s40670-020-00977-2.
Abstract
INTRODUCTION: The educational effects of transitioning from formative to summative progress testing are unclear. Our purpose was to investigate whether such transitioning in radiology residency is associated with a change in progress test results.

METHODS: We investigated a national cohort of radiology residents (N > 300) who were semi-annually assessed through a mandatory progress test. Until 2014, this test was purely formative for all residents, but in 2014/2015 it was transitioned (as part of a national radiology residency program revision) to include a summative pass requirement for new residents. In 7 post-transitioning tests in 2015-2019, including summatively and formatively tested residents who followed the revised and pre-transitioning residency programs, respectively, we assessed residents' relative test scores and the percentage of residents that reached pass standards.

RESULTS: Due to our educational setting, most post-transitioning tests had no residents in the summative condition in postgraduate year 4-5, nor residents in the formative condition in year 0.5-2. Across the 7 tests, relative test scores in postgraduate year 1-3 of the summative resident group and year 3.5-4.5 of the formative group differed significantly (p < 0.01 and p < 0.05, respectively, Kruskal-Wallis test). However, scores fluctuated without consistent time trends and without consistent differences between the two resident groups. The percentage of residents reaching the pass standard did not differ significantly across tests or between groups.

DISCUSSION: Transitioning from formative to summative progress testing was associated with overall steady test results for the whole resident group across 4 post-transitioning years. We do not exclude that the transition may have had positive educational effects for resident subgroups.
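The score comparison above uses a Kruskal-Wallis test, a rank-based alternative to one-way ANOVA for comparing more than two groups without assuming normality. A hedged sketch of such a comparison (the score groups below are invented placeholders, not the cohort's data):

```python
from scipy import stats

# Hypothetical relative test scores from three successive progress tests.
test_1 = [62, 58, 65, 70, 61]
test_2 = [55, 59, 54, 60, 57]
test_3 = [66, 63, 68, 64, 67]

# Kruskal-Wallis H-test: ranks all observations jointly, then asks whether
# the rank sums differ more across groups than chance would allow.
h_stat, p_value = stats.kruskal(test_1, test_2, test_3)
print(f"H = {h_stat:.2f}, p = {p_value:.4f}")
```

As in the study, a significant H only says that at least one group's distribution differs; it does not by itself establish a consistent time trend.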
Affiliation(s)
- D. R. Rutgers
- Department of Radiology, University Medical Center, Utrecht University, Heidelberglaan 100, 3584 CX Utrecht, The Netherlands
- Examination Committee of the Radiological Society of the Netherlands, Utrecht, The Netherlands
- J. P. J. van Schaik
- Department of Radiology, University Medical Center, Utrecht University, Heidelberglaan 100, 3584 CX Utrecht, The Netherlands
- C. L. J. J. Kruitwagen
- Julius Center, Department of Biostatistics, University Medical Center, Utrecht University, Utrecht, The Netherlands
- C. Haaring
- Department of Radiology, University Medical Center, Utrecht University, Heidelberglaan 100, 3584 CX Utrecht, The Netherlands
- W. van Lankeren
- Department of Radiology, Erasmus MC, Rotterdam, The Netherlands
- Radiological Society of the Netherlands, Utrecht, The Netherlands
- A. F. van Raamt
- Examination Committee of the Radiological Society of the Netherlands, Utrecht, The Netherlands
- Department of Radiology, Gelre Hospital, Apeldoorn, The Netherlands
- O. ten Cate
- Center for Research and Development of Education, University Medical Center, Utrecht University, Utrecht, The Netherlands
10. Schut S, van Tartwijk J, Driessen E, van der Vleuten C, Heeneman S. Understanding the influence of teacher-learner relationships on learners' assessment perception. Advances in Health Sciences Education 2020;25:441-456. PMID: 31664546. PMCID: PMC7210223. DOI: 10.1007/s10459-019-09935-z.
Abstract
Low-stakes assessments are theorised to stimulate and support self-regulated learning. They are feedback-oriented rather than decision-oriented and should carry few consequences for a learner based on their performance. The use of low-stakes assessment as a learning opportunity requires an environment in which continuous improvement is encouraged. This may be hindered by learners' perceptions of assessment as high-stakes. Teachers play a key role in learners' assessment perceptions. By investigating assessment perceptions through an interpersonal theory-based perspective of teacher-learner relationships, we aim to better understand the mechanisms explaining the relationship between assessment and learning within medical education. First, twenty-six purposefully selected learners, ranging from undergraduates to postgraduates in five different settings of programmatic assessment, were interviewed about their assessment task perception. Next, we conducted a focussed analysis using sensitising concepts from interpersonal theory to elucidate the influence of the teacher-learner relationship on learners' assessment perceptions.

The study showed a strong relation between learners' perceptions of the teacher-learner relationship and their assessment task perception. Two important sources for the perception of teachers' agency emerged from the data: positional agency and expert agency. Together with teachers' communion level, both types of teachers' agency are important for understanding learners' assessment perceptions. High levels of teacher communion had a positive impact on the perception of assessment for learning, in particular in relations in which teachers' agency was less dominantly exercised. When teachers exercised these sources of agency dominantly, learners felt inferior to their teachers, which could hinder the learning opportunity. To utilise the learning potential of low-stakes assessment, teachers are required to stimulate learner agency in safe and trusting assessment relationships, while carefully considering the influence of their own agency on learners' assessment perceptions. Interpersonal theory offers a useful lens for understanding assessment relationships. The Interpersonal Circumplex provides opportunities for faculty development that help teachers develop positive and productive relationships with learners in which the potential of low-stakes assessments for self-regulated learning is realised.
Affiliation(s)
- Suzanne Schut
- Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands.
- Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Universiteitssingel 60, 6229 ER, Maastricht, The Netherlands.
- Jan van Tartwijk
- Department of Education, Utrecht University, Utrecht, The Netherlands
- Erik Driessen
- Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands
- Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Universiteitssingel 60, 6229 ER, Maastricht, The Netherlands
- Cees van der Vleuten
- Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands
- Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Universiteitssingel 60, 6229 ER, Maastricht, The Netherlands
- Sylvia Heeneman
- Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands
- Department of Pathology, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands

11
Eastridge JA. A new take on testing. NURSE EDUCATION TODAY 2019; 80:9-11. [PMID: 31200200 DOI: 10.1016/j.nedt.2019.06.001] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/03/2019] [Accepted: 06/03/2019] [Indexed: 06/09/2023]
Affiliation(s)
- June A Eastridge
- 94 Tower Mustard Ct., Henderson, NV 89002, United States of America; Nevada State College, United States of America.

12
Watling CJ, Ginsburg S. Assessment, feedback and the alchemy of learning. MEDICAL EDUCATION 2019; 53:76-85. [PMID: 30073692 DOI: 10.1111/medu.13645] [Citation(s) in RCA: 175] [Impact Index Per Article: 35.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/10/2018] [Revised: 04/04/2018] [Accepted: 05/24/2018] [Indexed: 05/25/2023]
Abstract
CONTEXT Models of sound assessment practices increasingly emphasise assessment's formative role. As a result, assessment must not only support sound judgements about learner competence, but also generate meaningful feedback to guide learning. Reconciling the tension between assessment's focus on judgement and decision making and feedback's focus on growth and development represents a critical challenge for researchers and educators. METHODS We synthesise the literature related to this tension, framed around four trends in education research: (i) shifting perspectives on assessment; (ii) shifting perspectives on feedback; (iii) increasing attention to learners' perceptions of assessment and feedback; and (iv) increasing attention to the influence of culture on assessment and feedback. We describe factors that produce and sustain this tension. RESULTS The lines between assessment and feedback frequently blur in medical education. Models of programmatic assessment deliberately use the same data for both purposes: low-stakes individual data points are used formatively, but then are added together to support summative judgements. However, the translation of theory to practice is not straightforward. Efforts to embed meaningful feedback in programmes of learning face a multitude of threats. Learners may perceive assessment with formative intent as summative, restricting their engagement with it as feedback, and thus diminishing its learning value. A learning culture focused on assessment may limit learners' sense of safety to explore, to experiment, and sometimes to fail. CONCLUSIONS Successfully blending assessment and feedback demands clarity of purpose, support for learners, and a system and organisational commitment to a culture of improvement rather than a culture of performance.
Affiliation(s)
- Christopher J Watling
- Department of Clinical Neurological Sciences, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
- Shiphra Ginsburg
- Department of Medicine, University of Toronto, Toronto, Ontario, Canada

13
Jorm C, Bleasel J, Haq I. Time to establish comprehensive long-term monitoring of Australian medical graduates? AUST HEALTH REV 2018; 42:635-639. [DOI: 10.1071/ah16292] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2016] [Accepted: 06/06/2017] [Indexed: 11/23/2022]
Abstract
We believe that the well-being of our medical students (and medical staff throughout the continuum of practice) matters too much not to ask, 'How do they feel?' Society, and students themselves, have invested too much in their education not to query, 'How well are they performing in the workplace?' Our accountability to the community demands we ask, 'How are their patients going?' This article presents a schema for building long-term monitoring in Australia, using linked and reliable data, that will enable these questions to be answered. Although the answers will be of interest to many, medical schools will then be well placed to alter their programs and processes based on these three domains of graduate well-being, workplace performance and patient outcomes.
14
Peeters MJ. Targeting Assessment for Learning within Pharmacy Education. AMERICAN JOURNAL OF PHARMACEUTICAL EDUCATION 2017; 81:6243. [PMID: 29200453 PMCID: PMC5701328 DOI: 10.5688/ajpe6243] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/24/2016] [Accepted: 04/11/2017] [Indexed: 05/30/2023]
Abstract
Formative assessment is critical for deliberate improvement, development and growth. Although not entirely synonymous with formative assessment, assessment for learning (AFL) is an approach that uses formative assessment specifically to improve students' learning. While using formative assessments, AFL can also have summative programmatic-assessment implications. For each learning assessment, summative and formative uses can be leveraged: it can scaffold learning (formative), foster students' growth (formative), and document students' development in a competency/standard (summative). For example, using a developmental portfolio with iterative reflective writings (formative), PharmD students showed qualitative development in the "professionalism" competency (summative; ACPE Standard 4.4). (In parallel, this development in professionalism was confirmed quantitatively.) An AFL approach can complement other assessments; it can be integrated with other summative assessments into a multi-method assessment program, wherein developmental portfolio sections could be used for a few specific competencies. While AFL is not a one-size-fits-all silver-bullet approach to programmatic assessment, it is one notably robust tool to employ.
Affiliation(s)
- Michael J Peeters
- University of Toledo College of Pharmacy and Pharmaceutical Sciences, Toledo, Ohio