1
Hamoen EC, van Blankenstein FM, de Jong PGM, Langeveld K, Reinders MEJ. Internal medicine clerks' motivation in an online course: a mixed-methods study. Med Educ Online 2025;30:2445915. [PMID: 39743776] [DOI: 10.1080/10872981.2024.2445915]
Abstract
PURPOSE At LUMC, a Small Private Online Course (SPOC) was developed to complement the clinical learning environment of the internal medicine clerkship. The developers used self-determination theory in the design of the SPOC's assignments, aiming to improve learners' intrinsic motivation. This study investigates the impact of the SPOC and its specific assignments on student motivation.
METHODS The study uses a mixed-methods approach: a quantitative analysis of students' responses on an intrinsic motivation inventory (IMI) and a qualitative thematic analysis of semi-structured group interviews.
RESULTS Seventy-eight students (response rate 42%) filled out the IMI. Their scores were (7-point Likert scale): interest/enjoyment 3.76, competence 4.02, choice 3.53, value/usefulness 4.20, relatedness 3.85. Thematic analysis of the interviews (14 students) revealed the following themes: collaboration with peers, usefulness, SPOC-related factors, workload, motivation, and performance.
CONCLUSIONS Motivation could be optimized by creating useful, authentic cases that train skills directly transferable to clinical practice. Challenging, interesting and student-generated assignments positively influenced students' autonomy and motivation. Lack of awareness of online performance negatively affected the feeling of competence. Perceptions of online collaboration were suboptimal. These findings can help other teachers enhance motivation when developing online courses.
Affiliation(s)
- Esther C Hamoen: Department of Internal Medicine, Leiden University Medical Center, Leiden, The Netherlands
- Floris M van Blankenstein: Center for Innovation in Medical Education, Leiden University Medical Center, Leiden, The Netherlands
- Peter G M de Jong: Center for Innovation in Medical Education, Leiden University Medical Center, Leiden, The Netherlands
- Kirsten Langeveld: Public Health and Primary Care, Leiden University Medical Center, Leiden, The Netherlands
- Marlies E J Reinders: Department of Internal Medicine, Leiden University Medical Center, Leiden, The Netherlands
2
Torre D, Schuwirth L. Programmatic assessment for learning: A programmatically designed assessment for the purpose of learning: AMEE Guide No. 174. Med Teach 2025;47:918-933. [PMID: 39368061] [DOI: 10.1080/0142159x.2024.2409936]
Abstract
Programmatic assessment for learning (PAL) involves a programmatically structured collection of assessment data for the purpose of learning. In this guide, we examine and provide recommendations on several aspects. First, we review the evolution that led to the development of programmatic assessment and clarify some of its terminology. Second, we outline the learning processes that guide the design of PAL, including distributed learning, interleaving, overlearning, and test-enhanced learning. Third, we review the evolving nature of validity and provide insights into validity from a program perspective. Finally, we examine opportunities, challenges, and future directions of assessment in the context of artificial intelligence.
Affiliation(s)
- Dario Torre: University of Central Florida College of Medicine, Orlando, FL, USA
- Lambert Schuwirth: College of Medicine and Public Health, Flinders University, Adelaide, Australia
3
Collares CF. Decalogue for ensuring mediocrity of health care professions students and patient unsafety. Med Educ 2025. [PMID: 40390647] [DOI: 10.1111/medu.15677]
Abstract
This satirical commentary outlines 10 ironic commandments designed to highlight flaws in current health professions education practices. Through exaggerated emphasis on rote memorization, neglect of clinical reasoning and prioritization of grades over true understanding, the piece critiques practices that foster superficial knowledge and professional stagnation. Additional commandments target the culture of burnout, disregard for interprofessional collaboration, resistance to change and neglect of the social determinants of health. The commentary underscores the detrimental effects of focusing solely on short-term performance, arbitrary standards and cross-sectional assessments. By adhering to these commandments, educational institutions ensure a generation of professionals ill-prepared for the complexities of modern health care, ultimately compromising patient safety. The piece calls for comprehensive reform, advocating for a more holistic, collaborative and adaptive approach to health professions education.
Affiliation(s)
- Carlos Fernando Collares: Inspirali Educação, São Paulo, Brazil; European Board of Medical Assessors, Cardiff, UK; Medical Education Unit, Faculty of Medicine and Biomedical Sciences, University of Algarve, Faro, Portugal; Faculdades Pequeno Príncipe, Curitiba, Brazil; Life and Health Sciences Research Institute (ICVS), School of Medicine, University of Minho, Largo do Paço, Braga, Portugal
4
Torre D, Hemmer PA, Durning SJ, Pangaro LN. Theoretical considerations of the Reporter-Interpreter-Manager-Educator Assessment Framework. J Gen Intern Med 2025. [PMID: 40360870] [DOI: 10.1007/s11606-025-09580-w]
Abstract
ISSUE The Reporter-Interpreter-Manager-Educator (RIME) assessment framework provides a concise approach to observing learner progress in clinical settings. Despite its widespread use and study, its theoretical underpinnings remain unexplored.
EVIDENCE We identify two key principles and concepts of the RIME framework and the theoretical explanations underlying them. (1) Categorization and prototype theory: RIME allows categorization of the learner using previously internalized prototypes belonging to a "category membership" (Reporter-Interpreter-Manager/Educator; Story-Observations-Assessment-Plan), which depends on pattern recognition. (2) Theories of cognitive development enabling its "developmental" perspective: the progression of roles in RIME is a series of zone of proximal development (ZPD) challenges in which the teacher and learner identify the boundaries of the ZPD, recognize the gap between actual and potential development, and provide specific recommendations to foster the learner's advancement. Similarly, in a community of practice, learners progress from peripheral to full participation within their community and engage in a meaningful learning process.
IMPLICATIONS RIME is a theory-based framework facilitating the assessment of learners in the clinical setting. Its theoretical tenets support its use in navigating current tensions in assessment, while offering insights into its integration with other assessment models and theoretical frameworks.
Affiliation(s)
- Dario Torre: University of Central Florida College of Medicine, Orlando, FL, USA
- Paul A Hemmer: Department of Medicine, Uniformed Services University, Bethesda, MD, USA
- Steven J Durning: Department of Medicine, Uniformed Services University, Bethesda, MD, USA
- Louis N Pangaro: Department of Medicine, Uniformed Services University, Bethesda, MD, USA
5
Parsons AS, Wijesekera TP, Olson APJ, Torre D, Durning SJ, Daniel M. Beyond thinking fast and slow: Implications of a transtheoretical model of clinical reasoning and error on teaching, assessment, and research. Med Teach 2025;47:665-676. [PMID: 38835283] [DOI: 10.1080/0142159x.2024.2359963]
Abstract
From dual process theory to a family of theories known collectively as situativity, both micro and macro theories of cognition inform our current understanding of clinical reasoning (CR) and error. CR is a complex process that occurs in a complex environment, and a nuanced, expansive, integrated model of these theories is necessary to fully understand how CR is performed in the present day and in the future. In this perspective, we present these individual theories along with figures and descriptive cases for purposes of comparison, before exploring the implications of a transtheoretical model of these theories for teaching, assessment, and research in CR and error.
Affiliation(s)
- Andrew S Parsons: Medicine and Public Health, University of Virginia School of Medicine, Charlottesville, VA, USA
- Andrew P J Olson: Medicine and Pediatrics, Medical Education Outcomes Center, University of Minnesota Medical School, Minneapolis, MN, USA
- Dario Torre: Medicine, University of Central Florida College of Medicine, Orlando, FL, USA
- Steven J Durning: Medicine and Pathology, Uniformed Services University of the Health Sciences, Bethesda, MD, USA
- Michelle Daniel: Emergency Medicine, University of California San Diego School of Medicine, San Diego, CA, USA
6
Dawson LJ, Fox K, Harris M, Jellicoe M, Youngson CC. The safe practitioner framework: an imperative to incorporate a psychosocial sub-curriculum into dental education. Br Dent J 2025;238:403-407. [PMID: 40148639] [PMCID: PMC11949829] [DOI: 10.1038/s41415-024-8231-9]
Abstract
A primary aim of dental schools is to produce competent and caring independent professionals, capable of developing themselves and serving the needs of their patients through reflective practice and self-regulated continuous learning. The General Dental Council has also explicitly recognised the importance of self-regulated learning, and other associated behaviours, in the new Safe Practitioner framework. However, traditional learning designs focus on the development of academic and clinical skills, and assume that psychosocial skills, which support self-regulated learning and enable the management of personally challenging circumstances, are already present. Unfortunately, data suggest that the psychosocial skills of many students currently entering healthcare programmes are relatively underdeveloped, affecting their approaches to learning, their mental health and, potentially, patient safety. Therefore, there is a need to support students in their psychosocial development. This development starts with teachers understanding the societal, academic and environmental circumstances that their current students have experienced, followed by consideration of the importance of psychosocial skills within dental education. This paper discusses these matters and proposes a psychosocial sub-curriculum, along with a framework for its implementation.
Affiliation(s)
- Luke J Dawson: Professor of Dental Education, School of Dentistry, Faculty of Health and Life Sciences, University of Liverpool, Liverpool, UK
- Kathryn Fox: Honorary Senior Lecturer, School of Dentistry, Faculty of Health and Life Sciences, University of Liverpool, Liverpool, UK
- Marina Harris: Associate Professor of Dental Education and Wellbeing, School of Dental, Health and Care Professions, Faculty of Science, University of Portsmouth, Portsmouth, UK
- Mark Jellicoe: Senior Lecturer, The University of Law, Science School (Psychology), Leeds, UK
- Callum C Youngson: Emeritus Professor, School of Dentistry, Faculty of Health and Life Sciences, University of Liverpool, Liverpool, UK
7
Donker EM, van Rosse F, Janssen BJA, Knol W, Dumont G, van Smeden J, Atiqi R, Hessel M, Richir MC, van Agtmael MA, Kramers C, Tichelaar J. The impact of summative, formative or programmatic assessment on the Dutch National Pharmacotherapy Assessment: A retrospective multicentre study. Eur J Pharmacol 2025;989:177267. [PMID: 39798914] [DOI: 10.1016/j.ejphar.2025.177267]
Abstract
BACKGROUND The Dutch National Pharmacotherapy Assessment (DNPA), which focuses on assessing medication safety and essential drug knowledge, was introduced to improve clinical pharmacology and therapeutics education in the Netherlands. This study investigated how the performance of final-year medical students on the DNPA was affected by the assessment programme (traditional with summative or formative assessment, and programmatic assessment).
METHODS This multicentre retrospective longitudinal observational study (2019-2023) involved final-year medical students from four medical schools in the Netherlands. The DNPA was used in different ways: either as a summative or formative assessment in a traditional assessment programme, or as a non-high-stakes assessment in a programmatic assessment programme. Three medical schools changed their assessment programme over time.
RESULTS This study involved 1894 students. Summative assessment resulted in significantly higher scores and pass rates than formative assessment in a traditional assessment programme (mean score of 84.3% vs. 67.5%, and pass rate of 60.4% vs. 5.9%). In contrast, slightly lower scores were obtained when the assessment was non-high-stakes as part of a programmatic assessment programme rather than a summative assessment in a traditional assessment programme (mean score of 81.0% vs. 84.3%, pass rate of 51.8% vs. 60.4%). In curricula where the assessment became summative instead of formative, scores and pass rates significantly improved (mean increase of 14.4% and 42.3%, respectively); when the assessment programme changed from traditional with summative assessment to programmatic with non-high-stakes assessment, scores and pass rates modestly decreased (decrease of 3.3% and 14.2%, respectively).
CONCLUSION Integrating the DNPA within a traditional assessment programme is most effective when assessed summatively, as this results in significantly higher scores than formative assessment. In the context of a programmatic assessment programme, scores may be slightly lower. Changing assessment programmes within a medical school influences DNPA scores: scores increase when the assessment becomes summative rather than formative within a traditional assessment programme, and mildly decrease when the programme shifts from traditional with summative assessment to programmatic with non-high-stakes assessment.
Affiliation(s)
- Erik M Donker: Amsterdam UMC, Location VUmc, Department of Internal Medicine, Unit Pharmacotherapy, Amsterdam, the Netherlands; Research and Expertise Centre in Pharmacotherapy Education (RECIPE), Amsterdam, the Netherlands
- Floor van Rosse: Erasmus MC, University Medical Center Rotterdam, Department of Hospital Pharmacy, Rotterdam, the Netherlands
- Ben J A Janssen: Maastricht University, Department of Pharmacology and Toxicology, Maastricht, the Netherlands
- Wilma Knol: University Medical Center Utrecht, Department of Geriatric Medicine, Utrecht University, Utrecht, the Netherlands
- Glenn Dumont: Amsterdam UMC, Location AMC, Department of Hospital Pharmacy and Clinical Pharmacology, Amsterdam, the Netherlands
- Jeroen van Smeden: Centre for Human Drug Research, Department of Education, Leiden, the Netherlands; Leiden University Medical Center, Department of Clinical Pharmacy and Toxicology, Leiden, the Netherlands
- Roya Atiqi: University Medical Center Groningen, Department of Clinical Pharmacy and Pharmacology, Groningen, the Netherlands
- Marleen Hessel: Leiden University Medical Center, Department of Clinical Pharmacy and Toxicology, Leiden, the Netherlands
- Milan C Richir: Amsterdam UMC, Location VUmc, Department of Internal Medicine, Unit Pharmacotherapy, Amsterdam, the Netherlands; Research and Expertise Centre in Pharmacotherapy Education (RECIPE), Amsterdam, the Netherlands; University Medical Center Utrecht, Department of Surgery, Utrecht, the Netherlands
- Michiel A van Agtmael: Amsterdam UMC, Location VUmc, Department of Internal Medicine, Unit Pharmacotherapy, Amsterdam, the Netherlands; Research and Expertise Centre in Pharmacotherapy Education (RECIPE), Amsterdam, the Netherlands
- Cornelis Kramers: Radboud University Medical Center, Pharmacology-Toxicology and Internal Medicine, Nijmegen, the Netherlands; Canisius Wilhelmina Ziekenhuis, Department of Clinical Pharmacy, Nijmegen, the Netherlands
- Jelle Tichelaar: Amsterdam UMC, Location VUmc, Department of Internal Medicine, Unit Pharmacotherapy, Amsterdam, the Netherlands; Research and Expertise Centre in Pharmacotherapy Education (RECIPE), Amsterdam, the Netherlands; Interprofessional Collaboration and Medication Safety, Faculty of Health, Sports and Social Work, Inholland University of Applied Sciences, Amsterdam, the Netherlands; Amsterdam Public Health, Quality of Care, Amsterdam, the Netherlands
8
Donker EM, van Rosse F, Janssen BJA, Knol W, Dumont G, van Smeden J, Atiqi R, Hessel M, Richir MC, van Agtmael MA, Kramers C, Tichelaar J. Students' perspective on the Dutch National Pharmacotherapy Assessment, a national survey study among final-year medical students. Eur J Pharmacol 2025;989:177266. [PMID: 39798917] [DOI: 10.1016/j.ejphar.2025.177266]
Abstract
INTRODUCTION The Dutch National Pharmacotherapy Assessment (DNPA) was introduced in 2013 to improve clinical pharmacology and therapeutics (CPT) education. This study investigated final-year medical students' perceived motivation and level of preparation for the DNPA in different scenarios: mandatory vs. non-mandatory, and a traditional high-stakes assessment programme vs. a programmatic assessment programme.
METHODS In this survey study, students from four Dutch medical schools participated. In two medical schools the DNPA is a mandatory assessment in a programmatic assessment programme, and in two schools it is a mandatory, high-stakes assessment in a traditional assessment programme. The questionnaire included six 5-point Likert-type questions and one open-ended question.
RESULTS A total of 142 final-year medical students completed the survey. Their overall satisfaction with, and current preparation for, the DNPA was good (both median scores were 4 out of 5), without differences between students in a traditional or programmatic assessment programme. The majority of the students said they would be more (or much more) motivated (62.7%) and prepared (59.2%) if the DNPA were a high-stakes assessment rather than a programmatic assessment; whether the assessment was mandatory or non-mandatory would only modestly affect their motivation and preparation (62.7% of the students would be similarly or less motivated, and 74.6% similarly or less prepared). Students opined that the DNPA should be given earlier in the curriculum, together with more dedicated CPT education.
CONCLUSION While students expressed greater motivation and preparation for a high-stakes assessment, and almost similar motivation and preparation for mandatory and non-mandatory assessments, there were no notable differences in their current perceived motivation and preparation across medical schools with different assessment programmes. This suggests that students appreciate the importance of the DNPA, being almost equally motivated to prepare for the assessment regardless of whether it is mandatory or non-mandatory, or programmatic or high-stakes.
Affiliation(s)
- Erik M Donker: Amsterdam UMC, location VUmc, Department of Internal Medicine, section Pharmacotherapy, Amsterdam, Netherlands; Research and Expertise Centre in Pharmacotherapy Education (RECIPE), Amsterdam, Netherlands
- Floor van Rosse: Erasmus MC, University Medical Center Rotterdam, Department of Hospital Pharmacy, Rotterdam, Netherlands
- Ben J A Janssen: Maastricht University, Department of Pharmacology and Toxicology, Maastricht, Netherlands
- Wilma Knol: University Medical Center Utrecht, Department of Geriatric Medicine, Utrecht University, Utrecht, Netherlands
- Glenn Dumont: Amsterdam UMC, location AMC, Department of Hospital Pharmacy and Clinical Pharmacology, Amsterdam, Netherlands
- Jeroen van Smeden: Centre for Human Drug Research, Department of Education, Leiden, Netherlands; Leiden University Medical Center, Department of Clinical Pharmacy and Toxicology, Leiden, Netherlands
- Roya Atiqi: University Medical Center Groningen, Department of Clinical Pharmacy and Pharmacology, Groningen, Netherlands
- Marleen Hessel: Leiden University Medical Center, Department of Clinical Pharmacy and Toxicology, Leiden, Netherlands
- Milan C Richir: Amsterdam UMC, location VUmc, Department of Internal Medicine, section Pharmacotherapy, Amsterdam, Netherlands; Research and Expertise Centre in Pharmacotherapy Education (RECIPE), Amsterdam, Netherlands; University Medical Center Utrecht, Department of Surgery, Utrecht, Netherlands
- Michiel A van Agtmael: Amsterdam UMC, location VUmc, Department of Internal Medicine, section Pharmacotherapy, Amsterdam, Netherlands; Research and Expertise Centre in Pharmacotherapy Education (RECIPE), Amsterdam, Netherlands
- Cornelis Kramers: Radboud University Medical Center, Department of Pharmacy and Internal Medicine, Nijmegen, Netherlands; Canisius Wilhelmina Ziekenhuis, Department of Clinical Pharmacy, Nijmegen, Netherlands
- Jelle Tichelaar: Amsterdam UMC, location VUmc, Department of Internal Medicine, section Pharmacotherapy, Amsterdam, Netherlands; Research and Expertise Centre in Pharmacotherapy Education (RECIPE), Amsterdam, Netherlands; Interprofessional Collaboration and Medication Safety, Faculty of Health, Sports and Social Work, Inholland University of Applied Sciences, Amsterdam, Netherlands; Amsterdam Public Health, Quality of Care, Amsterdam, Netherlands
9
Bracken K, Saleh A, Sandor J, Sibbald M, Lee-Poy M, Ngo Q. The Predictive Power of Short Answer Questions in Undergraduate Medical Education Progress Difficulty. Med Sci Educ 2025;35:351-358. [PMID: 40144084] [PMCID: PMC11933568] [DOI: 10.1007/s40670-024-02197-4]
Abstract
Background To better understand the link between formative assessments and progress difficulty, we conducted an analysis in the undergraduate MD program of the Michael G. DeGroote School of Medicine, comparing formative assessment scores on short answer questions (SAQ), called Concept Application Exercises (CAE), with subsequent progress difficulty. CAE scores are designed to formatively assess knowledge translation. These scores are not formally incorporated into the progress decision at the end of each curricular unit, which is holistic in nature. Students are referred to a student progress remediation committee if they fail to meet the curricular objectives. We sought to investigate the following research question: do short answer questions, in the form of CAEs, predict subsequent learner progress difficulty?
Methods Data from the last four student cohorts (2022-2025) were included. To assess the predictive power of CAE score characteristics, a binary logistic regression model was constructed with remediation committee referral as the dependent variable and CAE score characteristics as the independent variables.
Results This study found that the average CAE score is the most powerful predictor of later progress difficulty, with each point drop in average score associated with a 37% increase in the odds of referral to the remediation committee.
Conclusion These findings illustrate the predictive value of the SAQ in identifying later progress difficulty.
Affiliation(s)
- Keyna Bracken: Michael G. DeGroote School of Medicine, McMaster University, Hamilton, Canada
- Amr Saleh: Michael G. DeGroote School of Medicine, McMaster University, Hamilton, Canada
- Jeremy Sandor: Michael G. DeGroote School of Medicine, McMaster University, Hamilton, Canada
- Matthew Sibbald: Michael G. DeGroote School of Medicine, McMaster University, Hamilton, Canada
- Micheal Lee-Poy: Michael G. DeGroote School of Medicine, McMaster University, Hamilton, Canada
- Quang Ngo: Michael G. DeGroote School of Medicine, McMaster University, Hamilton, Canada
10
van Wijk EV, van Blankenstein FM, Donkers J, Janse RJ, Bustraan J, Adelmeijer LGM, Dubois EA, Dekker FW, Langers AMJ. Does 'summative' count? The influence of the awarding of study credits on feedback use and test-taking motivation in medical progress testing. Adv Health Sci Educ 2024;29:1665-1688. [PMID: 38502460] [PMCID: PMC11549188] [DOI: 10.1007/s10459-024-10324-4]
Abstract
Despite the increasing implementation of formative assessment in medical education, its effect on learning behaviour remains questionable. This effect may depend on how students value formative and summative assessments differently. Informed by Expectancy Value Theory, we compared the test preparation, feedback use, and test-taking motivation of medical students who either took a purely formative progress test (formative PT-group) or a progress test that yielded study credits (summative PT-group). In a mixed-methods study design, we triangulated quantitative questionnaire data (n = 264), logging data of an online PT feedback system (n = 618), and qualitative interview data (n = 21) to compare feedback use and test-taking motivation between the formative PT-group (n = 316) and the summative PT-group (n = 302). Self-reported and actual feedback consultation were higher in the summative PT-group. Test preparation and active feedback use were relatively low and similar in both groups. Both quantitative and qualitative results showed that the motivation to prepare for the test and to consult feedback relates to how students value the assessment. In the interview data, a link could be made with goal orientation theory, as performance-oriented students perceived the formative PT as unimportant due to the lack of study credits. This led to low test-taking effort and little feedback consultation after the formative PT. In contrast, learning-oriented students valued the formative PT and used it for self-study or self-assessment to gain feedback. Our results indicate that most students are less motivated to put effort into the test and to use feedback when there are no direct consequences. A supportive assessment environment that emphasizes recognition of the value of formative testing is required to motivate students to use feedback for learning.
Affiliation(s)
- Elise V van Wijk: Center for Innovation in Medical Education, Leiden University Medical Center, Leiden, The Netherlands
- Floris M van Blankenstein: Center for Innovation in Medical Education, Leiden University Medical Center, Leiden, The Netherlands
- Jeroen Donkers: School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
- Roemer J Janse: Department of Clinical Epidemiology, Leiden University Medical Center, Leiden, The Netherlands
- Jacqueline Bustraan: Center for Innovation in Medical Education, Leiden University Medical Center, Leiden, The Netherlands
- Liesbeth G M Adelmeijer: Center for Innovation in Medical Education, Leiden University Medical Center, Leiden, The Netherlands
- Eline A Dubois: Center for Innovation in Medical Education, Leiden University Medical Center, Leiden, The Netherlands
- Friedo W Dekker: Department of Clinical Epidemiology, Leiden University Medical Center, Leiden, The Netherlands
- Alexandra M J Langers: Department of Gastroenterology and Hepatology, Leiden University Medical Center, Albinusdreef 2, 2333 ZA, Leiden, The Netherlands
11
Asoodar M, Janesarvatan F, Yu H, de Jong N. Theoretical foundations and implications of augmented reality, virtual reality, and mixed reality for immersive learning in health professions education. Adv Simul (Lond) 2024;9:36. [PMID: 39252139] [PMCID: PMC11382381] [DOI: 10.1186/s41077-024-00311-5]
Abstract
BACKGROUND Augmented reality (AR), virtual reality (VR) and mixed reality (MR) are emerging technologies that can create immersive learning environments for health professions education. However, there is a lack of systematic reviews on how these technologies are used, what benefits they offer, and what instructional design models or theories guide their use.
AIM This scoping review aims to provide a global overview of the usage and potential benefits of AR/VR/MR tools for the education and training of students and professionals in the healthcare domain, and to investigate whether any instructional design models or theories have been applied when using these tools.
METHODOLOGY A systematic search was conducted in several electronic databases to identify peer-reviewed studies published between and including 2015 and 2020 that reported on the use of AR/VR/MR in health professions education. The selected studies were coded and analyzed according to various criteria, such as domains of healthcare, types of participants, types of study design and methodologies, rationales behind the use of AR/VR/MR, types of learning and behavioral outcomes, and findings of the studies. The Morrison et al. (2010) model was used as a reference to map the instructional design aspects of the studies.
RESULTS A total of 184 studies were included in the review. The majority of studies focused on the use of VR, followed by AR and MR. The predominant domains of healthcare using these technologies were surgery and anatomy, and the most common types of participants were medical and nursing students. The most frequent types of study design and methodologies were usability studies and randomized controlled trials. The most typical rationales behind the use of AR/VR/MR were to overcome limitations of traditional methods, to provide immersive and realistic training, and to improve students' motivation and engagement. The most common types of learning and behavioral outcomes were cognitive and psychomotor skills. The majority of studies reported positive or partially positive effects of AR/VR/MR on learning outcomes. Only a few studies explicitly mentioned the use of instructional design models or theories to guide the design and implementation of AR/VR/MR interventions.
DISCUSSION AND CONCLUSION The review revealed that AR/VR/MR are promising tools for enhancing health professions education, especially for training surgical and anatomical skills. However, there is a need for more rigorous and theory-based research to investigate the optimal design and integration of these technologies in the curriculum, and to explore their impact on other domains of healthcare and other types of learning outcomes, such as affective and collaborative skills. The review also suggested that the Morrison et al. (2010) model can be a useful framework to inform the instructional design of AR/VR/MR interventions, as it covers various elements and factors that need to be considered in the design process.
Affiliation(s)
- Maryam Asoodar: School of Health Professions Education, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Universiteitssingel 60, Maastricht, 6229 MD, The Netherlands
- Fatemeh Janesarvatan: School of Health Professions Education, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Universiteitssingel 60, Maastricht, 6229 MD, The Netherlands; School of Business and Economics, Educational Research and Development, Maastricht University, Maastricht, The Netherlands
- Hao Yu: School of Health Professions Education, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Universiteitssingel 60, Maastricht, 6229 MD, The Netherlands
- Nynke de Jong: School of Health Professions Education, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Universiteitssingel 60, Maastricht, 6229 MD, The Netherlands; Department of Health Services Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
12
Scheepers RA, Hilverda F, Vollmann M. Study demands and resources affect academic well-being and life satisfaction of undergraduate medical students in the Netherlands. Med Educ 2024;58:1097-1106. [PMID: 38863256] [DOI: 10.1111/medu.15456]
Abstract
INTRODUCTION Medical students report poor academic well-being in a context of high study demands. Study Demands-Resources theories have outlined mediating processes through which high study demands and low resources diminish academic well-being, which is subsequently associated with diminished overall well-being (i.e. life satisfaction). Furthermore, academic well-being and life satisfaction are also affected by interactions between study demands and resources (referred to as moderating processes). However, these mediating and moderating processes clarifying medical students' well-being still need to be investigated. Therefore, this study investigated the mediating role of academic well-being in the associations of study demands and resources with life satisfaction, and the moderating role of study demands and resources in the relation between academic well-being and life satisfaction, among undergraduate medical students.
METHODS In this cross-sectional survey study, 372 undergraduates from Dutch medical schools participated. The survey included the Study Demands-Resources Scale (workload, growth opportunities and peer support) as well as questionnaires on academic well-being (Utrecht Burnout Scale for students and Utrecht Work Engagement Scale-Student Form) and overall well-being (single item on life satisfaction). Based on Study Demands-Resources theories, (moderated) mediation analyses were performed.
RESULTS Mediating processes were found, as growth opportunities were indirectly associated with higher life satisfaction through lower academic burnout and higher academic engagement. Furthermore, workload was indirectly associated with lower life satisfaction through higher academic burnout. This association was moderated, as it became weaker with more perceived peer support.
DISCUSSION A high workload and limited growth opportunities are associated with suboptimal academic well-being and life satisfaction. Perceiving support from peer students slightly buffers the unfavourable effect of workload on academic burnout and, subsequently, life satisfaction. To promote academic well-being and life satisfaction in medical students, universities can consider reducing the workload, creating a supportive learning environment and offering development opportunities.
Affiliation(s)
- Renée A Scheepers: Department of Socio-Medical Sciences, Erasmus School of Health Policy & Management, Erasmus University Rotterdam, Rotterdam, The Netherlands
- Femke Hilverda: Department of Socio-Medical Sciences, Erasmus School of Health Policy & Management, Erasmus University Rotterdam, Rotterdam, The Netherlands
- Manja Vollmann: Department of Socio-Medical Sciences, Erasmus School of Health Policy & Management, Erasmus University Rotterdam, Rotterdam, The Netherlands
13
van Wijk EV, van Blankenstein FM, Janse RJ, Dubois EA, Langers AMJ. Understanding students' feedback use in medical progress testing: A qualitative interview study. Med Educ 2024;58:980-988. [PMID: 38462812] [DOI: 10.1111/medu.15378]
Abstract
BACKGROUND Active engagement with feedback is crucial for feedback to be effective and to improve students' learning and achievement. Medical students are provided feedback on their development in the progress test (PT), which has been implemented in various medical curricula, although its format, integration and feedback differ across institutions. Existing research on engagement with feedback in the context of the PT is not sufficient to make a definitive judgement on what works and which barriers exist. Therefore, we conducted an interview study to explore students' feedback use in medical progress testing.
METHODS All Dutch medical students participate in a national, curriculum-independent PT four times a year. This mandatory test, composed of multiple-choice questions, provides students with written feedback on their scores. Furthermore, an answer key is available for reviewing their answers. Semi-structured interviews were conducted with 21 preclinical and clinical medical students who participated in the PT. Template analysis was performed on the qualitative data using a priori themes based on previous research on feedback use.
RESULTS Template analysis revealed that students faced challenges in crucial internal psychological processes that impact feedback use, including 'awareness', 'cognizance', 'agency' and 'volition'. Factors such as stakes, available time, feedback timing and feedback presentation contributed to these difficulties, ultimately hindering feedback use. Notably, feedback engagement was higher during clinical rotations, and students were interested in the feedback when seeking insight into their performance level and career perspectives.
CONCLUSION Our study enhances the understanding of students' feedback utilisation in medical progress testing by identifying key processes and factors that impact feedback use. By recognising and addressing barriers to feedback use, we can improve both student and teacher feedback literacy, thereby transforming the PT into a more valuable learning tool.
Affiliation(s)
- Elise V van Wijk: Center for Innovation in Medical Education, Leiden University Medical Center, Leiden, The Netherlands
- Floris M van Blankenstein: Center for Innovation in Medical Education, Leiden University Medical Center, Leiden, The Netherlands
- Roemer J Janse: Department of Clinical Epidemiology, Leiden University Medical Center, Leiden, The Netherlands
- Eline A Dubois: Center for Innovation in Medical Education, Leiden University Medical Center, Leiden, The Netherlands
- Alexandra M J Langers: Center for Innovation in Medical Education, Leiden University Medical Center, Leiden, The Netherlands; Department of Gastroenterology and Hepatology, Leiden University Medical Center, Leiden, The Netherlands
14
Su JM, Hsu SY, Fang TY, Wang PC. Developing and validating a knowledge-based AI assessment system for learning clinical core medical knowledge in otolaryngology. Comput Biol Med 2024;178:108765. [PMID: 38897143] [DOI: 10.1016/j.compbiomed.2024.108765]
Abstract
BACKGROUND Clinical core medical knowledge (CCMK) learning is essential for medical trainees. Adaptive assessment systems can facilitate self-learning, but extracting experts' CCMK is challenging, especially using modern data-driven artificial intelligence (AI) approaches (e.g., deep learning).
OBJECTIVES This study aims to develop a multi-expert knowledge-aggregated adaptive assessment scheme (MEKAS) using knowledge-based AI approaches to facilitate the learning of CCMK in otolaryngology (CCMK-OTO), and to validate its effectiveness through a one-month training program for CCMK-OTO education at a tertiary referral hospital.
METHODS The MEKAS utilized the repertory grid technique and case-based reasoning to aggregate experts' knowledge into a representative CCMK base, thereby enabling adaptive assessment for CCMK-OTO training. The effects of longitudinal training were compared between the experimental group (EG) and the control group (CG). Both groups received a normal training program (routine meetings, outpatient/operating room teaching, and classroom teaching), while the EG additionally received MEKAS for self-learning. The EG comprised 22 UPGY trainees (6 postgraduate [PGY] and 16 undergraduate [UGY] trainees) and 8 otolaryngology residents (ENT-R); the CG comprised 24 UPGY trainees (8 PGY and 16 UGY trainees). Training effectiveness was compared through pre- and post-test CCMK-OTO scores, and user experiences were evaluated using a technology acceptance model-based questionnaire.
RESULTS Both the UPGY (z = -3.976, P < 0.001) and ENT-R (z = -2.038, P = 0.042) groups in the EG exhibited significant improvements in their CCMK-OTO scores, while the UPGY group in the CG did not (z = -1.204, P = 0.228). The UPGY group in the EG also demonstrated a substantial improvement compared to the UPGY group in the CG (z = -4.943, P < 0.001). The EG participants were highly satisfied with the MEKAS system regarding self-learning assistance, adaptive testing, perceived satisfaction, intention to use, perceived usefulness, perceived ease of use, and perceived enjoyment, with overall averages between 3.8 and 4.1 out of 5.0 on all scales.
CONCLUSIONS The MEKAS system facilitates CCMK-OTO learning and provides an efficient knowledge aggregation scheme that can be applied to other medical subjects to build adaptive assessment systems for CCMK learning. Larger-scale validation across diverse institutions and settings is warranted to further assess MEKAS's scalability, generalizability, and long-term impact.
Affiliation(s)
- Jun-Ming Su: Department of Information and Learning Technology, National University of Tainan, Tainan, Taiwan
- Su-Yi Hsu: Department of Otolaryngology, Cathay General Hospital, Taipei, Taiwan; School of Medicine, Fu Jen Catholic University, New Taipei City, Taiwan; School of Medicine, National Tsing Hua University, Hsinchu, Taiwan
- Te-Yung Fang: Department of Otolaryngology, Cathay General Hospital, Taipei, Taiwan; School of Medicine, Fu Jen Catholic University, New Taipei City, Taiwan; Department of Otolaryngology, Sijhih Cathay General Hospital, New Taipei City, Taiwan
- Pa-Chun Wang: Department of Otolaryngology, Cathay General Hospital, Taipei, Taiwan; School of Medicine, Fu Jen Catholic University, New Taipei City, Taiwan; Department of Medical Research, China Medical University Hospital, China Medical University, Taichung, Taiwan
15
Gauthier S, Braund H, Dalgarno N, Taylor D. Assessment-Seeking Strategies: Navigating the Decision to Initiate Workplace-Based Assessment. Teach Learn Med 2024;36:478-487. [PMID: 37384570] [DOI: 10.1080/10401334.2023.2229803]
Abstract
Phenomenon: Competency-based medical education (CBME) relies on workplace-based assessment (WBA) to generate formative feedback (assessment for learning, AfL) and to make inferences about competence (assessment of learning, AoL). When approaches to CBME rely on residents to initiate WBA, learners experience tension between seeking WBA for learning and for establishing competence. How learners resolve this tension may lead to unintended consequences for both AfL and AoL. We sought to explore the factors that impact both decisions to seek and not to seek WBA, and used the findings to build a model of the assessment-seeking strategies used by residents. In building this model, we consider how the link between WBA and promotion or progression within a program impacts an individual's assessment-seeking strategy.
Approach: We conducted 20 semi-structured interviews with internal medicine residents at Queen's University about the factors that influence their decision to seek or avoid WBA. Using grounded theory methodology, we applied constant comparative analysis to collect data iteratively and identify themes. A conceptual model was developed to describe the interaction of factors impacting the decision to seek and initiate WBA.
Findings: Participants identified two main motivations when deciding to seek assessments: the need to fulfill program requirements and the desire to receive feedback for learning. Analysis suggested that these motivations are often at odds with each other. Participants also described several moderating factors that impact the decision to initiate assessments, irrespective of the primary underlying motivation: resident performance, assessor factors, training program expectations, and clinical context. A conceptual framework was developed to describe the factors that lead to strategic assessment-seeking behaviors.
Insights: Faced with the dual purpose of WBA in CBME, resident behavior in initiating assessment is guided by specific assessment-seeking strategies. These strategies reflect individual underlying motivations, influenced by four moderating factors. The findings have broad implications for programmatic assessment in a CBME context, including validity considerations for assessment data used in summative decision-making, such as readiness for unsupervised practice.
Affiliation(s)
- Stephen Gauthier: Department of Medicine, Queen's University, Kingston, Ontario, Canada
- Heather Braund: Office of Professional Development and Educational Scholarship, Faculty of Health Sciences, Queen's University, Kingston, Ontario, Canada
- Nancy Dalgarno: Office of Professional Development and Educational Scholarship, Faculty of Health Sciences, Queen's University, Kingston, Ontario, Canada
- David Taylor: Department of Medicine, Queen's University, Kingston, Ontario, Canada
16
Ruczynski LI, Schouwenberg BJ, Custers E, Fluit CR, van de Pol MH. The influence of a digital clinical reasoning test on medical student learning behavior during clinical clerkships. Adv Health Sci Educ 2024;29:935-947. [PMID: 37851160] [PMCID: PMC11208212] [DOI: 10.1007/s10459-023-10288-x]
Abstract
Recently, a new digital clinical reasoning test (DCRT) was developed to evaluate students' clinical-reasoning skills. Although an assessment tool may be soundly constructed, it may still prove inadequate in practice by failing to function as intended. Therefore, more insight is needed into the effects of the DCRT in practice. Individual semi-structured interviews and template analysis were used to collect and process qualitative data. The template, based on the interview guide, contained six themes: (1) DCRT itself, (2) test debriefing, (3) reflection, (4) practice/workplace, (5) DCRT versus practice and (6) 'other'. Thirteen students were interviewed. The DCRT encourages students to engage more in formal education, self-study and workplace learning during their clerkships, particularly those who received insufficient results. Although the faculty emphasizes the different purposes of the DCRT (assessment of/as/for learning), most students perceive the DCRT as an assessment of learning. This affects their motivation and the role they assign to it in their learning process. Although students appreciate the debriefing and reflection report for improvement, they struggle to fill the identified knowledge gaps due to the timing of receiving their results. Some students are supported by the DCRT in exhibiting lifelong learning behavior. This study has identified several ways in which the DCRT influences students' learning practices that can benefit their clinical-reasoning skills. Additionally, it stresses the importance of ensuring the alignment of theoretical principles with real-world practice, both in the development and utilization of assessment tools and their content. Further research is needed to investigate the long-term impact of the DCRT on young physicians' working practice.
Affiliation(s)
- Larissa IA Ruczynski: Research on Learning and Education, Radboudumc Health Academy, Radboud University Medical Center, Gerard van Swietenlaan 2 (route 51), 6525 GB, Nijmegen, Netherlands
- Bas JJW Schouwenberg: Department of Pharmacology and Toxicology, Radboud University Medical Center, Nijmegen, the Netherlands; Department of Internal Medicine, Radboud University Medical Center, Nijmegen, the Netherlands
- Eugène Custers: Department of Online Learning and Instruction, Faculty of Educational Sciences, Open Universiteit, Heerlen, Netherlands
- Cornelia RMG Fluit: Research on Learning and Education, Radboudumc Health Academy, Radboud University Medical Center, Nijmegen, Netherlands
- Marjolein HJ van de Pol: Department of Primary and Community Care, Radboud University Medical Center, Nijmegen, Netherlands
17
Dohms MC, Rocha A, Rasenberg E, Dielissen P, Thoonen B. Peer assessment in medical communication skills training in programmatic assessment: A qualitative study examining faculty and student perceptions. Med Teach 2024;46:823-831. [PMID: 38157436] [DOI: 10.1080/0142159x.2023.2285248]
Abstract
INTRODUCTION Current literature recommends that assessment of communication skills in medical education combine different settings and multiple observers. There is still a gap in understanding about whether and how peer assessment facilitates learning in communication skills training.
METHODS We designed a qualitative study using focus group interviews and thematic analysis in a medical course in the Netherlands. We aimed to explore medical students' and teachers' experiences, perceptions, and perspectives on challenges and facilitating factors in peer assessment in medical communication skills training (PACST).
RESULTS Most of the participants reported that peer feedback was a valuable experience when learning communication skills. The major challenges for the quality and credibility of PACST reported by the participants were whether peer feedback is sufficiently critical to support learning and the difficulty of actually engaging students in the assessment process.
CONCLUSION Teachers reviewing students' peer assessments may improve their quality and credibility, and the reviewed assessments are best used for learning purposes. We suggest paying sufficient attention to teachers' roles in PACST, ensuring a safe and trustworthy environment and helping students internalize the value of being vulnerable during the evaluation process.
Affiliation(s)
- M C Dohms: Clinique Bouchard, Marseille, France
- A Rocha: DASA (Diagnósticos da América S/A), São Paulo, Brazil
- P Dielissen: Medisch Centrum Onder de Linde, Nijmegen, Netherlands
- B Thoonen: Radboud University, Nijmegen, Netherlands
18
Torre D, Daniel M, Ratcliffe T, Durning SJ, Holmboe E, Schuwirth L. Programmatic Assessment of Clinical Reasoning: New Opportunities to Meet an Ongoing Challenge. Teach Learn Med 2024:1-9. [PMID: 38794865] [DOI: 10.1080/10401334.2024.2333921]
Abstract
Issue: Clinical reasoning is essential to physicians' competence, yet its assessment remains a significant challenge. Clinical reasoning is a complex, evolving, non-linear, context-driven, and content-specific construct which arguably cannot be assessed at one point in time or with a single method. This has posed challenges for educators for many decades, despite significant development of individual assessment methods.
Evidence: Programmatic assessment is a systematic assessment approach that is gaining momentum across health professions education. Programmatic assessment, and in particular assessment for learning, is well-suited to address the challenges of clinical reasoning assessment. Several key principles of programmatic assessment are particularly well-aligned with developing a system to assess clinical reasoning: longitudinality, triangulation, use of a mix of assessment methods, proportionality, implementation of intermediate evaluations/reviews with faculty coaches, use of assessment for feedback, and increase in learners' agency. Repeated exposure and measurement are critical to develop a clinical reasoning assessment narrative; thus the assessment approach should optimally be longitudinal, providing multiple opportunities for growth and development. Triangulation provides a lens to assess the multidimensionality and contextuality of clinical reasoning and of its different, yet related, components, using a mix of different assessment methods. Proportionality ensures that the richness of information on which to draw conclusions is commensurate with the stakes of the decision. Coaching facilitates the development of a feedback culture and allows assessment of growth over time, while enhancing learners' agency.
Implications: A programmatic assessment model of clinical reasoning that is developmentally oriented, optimizes learning through feedback and coaching, uses multiple assessment methods, and provides opportunity for meaningful triangulation of data can help address some of the challenges of clinical reasoning assessment.
Affiliation(s)
- Dario Torre: Department of Medical Education, University of Central Florida, Orlando, FL, USA
- Michelle Daniel: Department of Emergency Medicine, University of California, San Diego, CA, USA
- Temple Ratcliffe: Department of Medicine, The Joe R and Teresa Lozano Long School of Medicine at University of Texas Health, Texas, USA
- Steven J Durning: Center for Health Professions Education, Uniformed Services University Center for Neuroscience and Regenerative Medicine, Bethesda, Maryland, USA
- Eric Holmboe: Milestones Development and Evaluation, Accreditation Council for Graduate Medical Education, Chicago, IL, USA

19
Barbagallo C, Osborne K, Dempsey C. Implementation of a programmatic assessment model in radiation oncology medical physics training. J Appl Clin Med Phys 2024; 25:e14354. [PMID: 38620004 PMCID: PMC11087179 DOI: 10.1002/acm2.14354] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2023] [Revised: 01/03/2024] [Accepted: 03/14/2024] [Indexed: 04/17/2024] Open
Abstract
PURPOSE In 2019, a formal review and update of the current training program for medical physics residents/registrars in Australasia was conducted. The purpose was to ensure the program met current local clinical and technological requirements, to improve standardization of training across Australia and New Zealand, and to generate a dynamic curriculum and programmatic assessment model. METHODS A four-phase project was initiated, including a consultant desktop review of the current program and stakeholder consultation. Overarching program outcomes on which to base the training model were developed, with content experts used to update the scientific content. Finally, assessment specialists reviewed a range of assessment models to determine appropriate assessment methods for each learning outcome, creating a model of programmatic assessment. RESULTS The first phase identified a need for increased standardized assessment incorporating programmatic assessment. Seven clear program outcome statements were generated and used to guide and underpin the new curriculum framework. The curriculum was expanded from the previous version to include emerging technologies, while removing previous duplication. Finally, a range of proposed assessments for learning outcomes in the curriculum were generated into the programmatic assessment model. These new assessment methods were structured to incorporate rubric scoring to provide meaningful feedback. CONCLUSIONS An updated training program for Radiation Oncology Medical Physics registrars/residents was released in Australasia. Scientific content from the previous program was used as a foundation and revised for currency, with the ability to accommodate a dynamic curriculum model. A programmatic model of assessment was created after comprehensive review and consultation. This new model provides more structured, ongoing assessment throughout the training period. It contains allowances for local bespoke assessment and provides guidance for supervisors through marking templates and rubrics.
Affiliation(s)
- Cathy Barbagallo: Australasian College of Physical Scientists and Engineers in Medicine (ACPSEM), Sydney, New South Wales, Australia; Department of Radiation Oncology, Alfred Health, Prahran, Victoria, Australia
- Kristy Osborne: Australian Council for Educational Research, Education Research, Policy and Development Division, Camberwell, Victoria, Australia
- Claire Dempsey: Australasian College of Physical Scientists and Engineers in Medicine (ACPSEM), Sydney, New South Wales, Australia; Department of Radiation Oncology, Calvary Mater Newcastle, Waratah, New South Wales, Australia; Department of Radiation Oncology, University of Washington, Seattle, Washington, USA; School of Health Sciences, University of Newcastle, Callaghan, New South Wales, Australia

20
Fuentes-Cimma J, Sluijsmans D, Riquelme A, Villagran I, Isbej L, Olivares-Labbe MT, Heeneman S. Designing feedback processes in the workplace-based learning of undergraduate health professions education: a scoping review. BMC MEDICAL EDUCATION 2024; 24:440. [PMID: 38654360 PMCID: PMC11036781 DOI: 10.1186/s12909-024-05439-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/25/2023] [Accepted: 04/17/2024] [Indexed: 04/25/2024]
Abstract
BACKGROUND Feedback processes are crucial for learning, guiding improvement, and enhancing performance. In workplace-based learning settings, diverse teaching and assessment activities are advocated to be designed and implemented, generating feedback that students use, with proper guidance, to close the gap between current and desired performance levels. Since productive feedback processes rely on observed information regarding a student's performance, it is imperative to establish structured feedback activities within undergraduate workplace-based learning settings. However, these settings are characterized by their unpredictable nature, which can either promote learning or present challenges in offering structured learning opportunities for students. This scoping review maps the literature on how feedback processes are organised in undergraduate clinical workplace-based learning settings, providing insight into the design and use of feedback. METHODS A scoping review was conducted. Studies were identified from seven databases and ten relevant journals in medical education. The screening process was performed independently in duplicate with the support of the StArt program. Data were organized in a data chart and analyzed using thematic analysis. The feedback loop with a sociocultural perspective was used as a theoretical framework. RESULTS The search yielded 4,877 papers, and 61 were included in the review. Two themes were identified in the qualitative analysis: (1) the organization of feedback processes in workplace-based learning settings, and (2) sociocultural factors influencing the organization of feedback processes. The literature describes multiple teaching and assessment activities that generate feedback information. Most papers described experiences and perceptions of diverse teaching and assessment feedback activities. Few studies described how feedback processes improve performance. Sociocultural factors such as establishing a feedback culture, enabling stable and trustworthy relationships, and enhancing student feedback agency are crucial for productive feedback processes. CONCLUSIONS This review identified concrete ideas regarding how feedback could be organized within the clinical workplace to promote feedback processes. The feedback encounter should be organized to allow follow-up of the feedback, i.e., working on required learning and performance goals on the next occasion. Educational programs should design feedback processes by appropriately planning subsequent tasks and activities. More insight is needed into designing a full-loop feedback process, with specific attention to effective feedforward practices.
Affiliation(s)
- Javiera Fuentes-Cimma: Department of Health Sciences, Faculty of Medicine, Pontificia Universidad Católica de Chile, Avenida Vicuña Mackenna 4860, Macul, Santiago, Chile; School of Health Professions Education, Maastricht University, Maastricht, Netherlands
- Arnoldo Riquelme: Centre for Medical and Health Profession Education, Department of Gastroenterology, Faculty of Medicine, Pontificia Universidad Católica de Chile, Santiago, Chile
- Ignacio Villagran: Department of Health Sciences, Faculty of Medicine, Pontificia Universidad Católica de Chile, Avenida Vicuña Mackenna 4860, Macul, Santiago, Chile
- Lorena Isbej: School of Health Professions Education, Maastricht University, Maastricht, Netherlands; School of Dentistry, Faculty of Medicine, Pontificia Universidad Católica de Chile, Santiago, Chile
- Sylvia Heeneman: Department of Pathology, Faculty of Health, Medicine and Health Sciences, Maastricht University, Maastricht, Netherlands

21
Janssens O, Andreou V, Embo M, Valcke M, De Ruyck O, Robbrecht M, Haerens L. The identification of requirements for competency development during work-integrated learning in healthcare education. BMC MEDICAL EDUCATION 2024; 24:427. [PMID: 38649850 PMCID: PMC11034030 DOI: 10.1186/s12909-024-05428-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/30/2024] [Accepted: 04/15/2024] [Indexed: 04/25/2024]
Abstract
BACKGROUND Work-integrated learning (WIL) is widely accepted and necessary for healthcare students to attain the competencies essential to their future workplaces. Yet competency-based education (CBE) remains complex. During WIL, the focus often lies on daily practice, which puts continuous competency development at stake; the fact that competencies need to develop continuously is often neglected. OBJECTIVES To ultimately contribute to the optimization of CBE in healthcare education, this study aimed to examine how competency development during WIL in healthcare education could be optimized, before and after graduation. METHODS Fourteen semi-structured interviews with 16 experts in competency development and WIL were carried out. Eight healthcare disciplines were included, namely associate degree nursing, audiology, family medicine, nursing (bachelor), occupational therapy, podiatry, pediatrics, and speech therapy. Moreover, two independent experts outside the healthcare domain were included to broaden the perspectives on competency development. A qualitative research approach was used, based on an inductive thematic analysis using NVivo 12© in which 'in vivo' codes were clustered into sub-themes and themes. RESULTS The analysis revealed eight types of requirements for effective and continuous competency development, namely requirements in the context of (1) competency frameworks, (2) reflection and feedback, (3) assessment, (4) the continuity of competency development, (5) mentor involvement, (6) ePortfolios, (7) competency development visualizations, and (8) competency development after graduation. Notably, certain requirements were fulfilled in one educational program but absent in another, underscoring the large differences in how competency-based education takes shape across educational programs and internship contexts. Nevertheless, all educational programs seemed to recognize the importance of ongoing competency development. CONCLUSION The results of this study indicate that identifying and meeting the requirements for effective and continuous competency development is essential to optimize competency development during practice in healthcare education.
Affiliation(s)
- Oona Janssens: Department of Educational Studies, Faculty of Psychology and Educational Sciences, Ghent University, H. Dunantlaan 2, Ghent, 9000, Belgium; Department of Movement and Sports Sciences, Faculty of Medicine and Health Sciences, Ghent University, Ghent, 9000, Belgium
- Vasiliki Andreou: Department of Public Health and Primary Care, Academic Center for General Practice, KU Leuven, Kapucijnenvoer 7, Leuven, 3000, Belgium
- Mieke Embo: Department of Educational Studies, Faculty of Psychology and Educational Sciences, Ghent University, H. Dunantlaan 2, Ghent, 9000, Belgium; Expertise Network Health and Care, Artevelde University of Applied Sciences, Voetweg 66, Ghent, 9000, Belgium
- Martin Valcke: Department of Educational Studies, Faculty of Psychology and Educational Sciences, Ghent University, H. Dunantlaan 2, Ghent, 9000, Belgium
- Olivia De Ruyck: Imec-mict-UGent, Miriam Makebaplein 1, Ghent, 9000, Belgium; Department of Industrial Systems Engineering and Product Design, Faculty of Engineering and Architecture, Ghent University, Campus Kortrijk, Graaf Karel de Goedelaan 5, Kortrijk, 8500, Belgium; Department of Communication Sciences, Ghent University, Campus Ufo, Vakgroep Communicatiewetenschappen, Technicum T1, Sint-Pietersnieuwstraat 41, Ghent, 9000, Belgium
- Marieke Robbrecht: Department of Internal Medicine and Pediatrics, Faculty of Medicine and Health Sciences, Ghent University, C. Heymanslaan 10, Ghent, 9000, Belgium
- Leen Haerens: Department of Movement and Sports Sciences, Faculty of Medicine and Health Sciences, Ghent University, Ghent, 9000, Belgium

22
Dory V, Wagner M, Cruess R, Cruess S, Young M. If we assess, will they learn? Students' perspectives on the complexities of assessment-for-learning. CANADIAN MEDICAL EDUCATION JOURNAL 2023; 14:94-104. [PMID: 37719398 PMCID: PMC10500400 DOI: 10.36834/cmej.73875] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 09/19/2023]
Abstract
Introduction Assessment can positively influence learning; however, designing effective assessment-for-learning interventions has proved challenging. We implemented a mandatory assessment-for-learning system comprising a workplace-based assessment of non-medical expert competencies and a progress test in undergraduate medical education, and evaluated its impact. Methods We conducted semi-structured interviews with year-3 and year-4 medical students at McGill University to explore how the assessment system had influenced their learning in year 3. We conducted theory-informed thematic analysis of the data. Results Eleven students participated, revealing that the assessment influenced learning through several mechanisms. Some required little student engagement (e.g., feed-up, test-enhanced learning, looking things up after an exam). Others required substantial engagement (e.g., studying for tests, selecting raters for quality feedback, using feedback). Student engagement was moderated by the perceived credibility of the system and of the costs and benefits of engagement. Credibility was shaped by students' goals-in-context: becoming a good doctor, contributing to the healthcare team, and succeeding in assessments. Discussion Our assessment system failed to engage students enough to leverage its full potential. We discuss the inherent flaws and external factors that hindered student engagement. Assessment designers should leverage easy-to-control mechanisms to support assessment-for-learning and anticipate significant collaborative work to modify learning cultures.
Affiliation(s)
- Valérie Dory: Department of General Practice, Faculty of Medicine, Université de Liège, Liège, Belgium; Department of Medicine and Centre for Medical Education, Faculty of Medicine, McGill University, Quebec, Canada; Institute of Health Sciences Education and Academic Centre of General Practice, Université catholique de Louvain, Brussels, Belgium
- Maryam Wagner: Institute of Health Sciences Education, Faculty of Medicine and Health Sciences, McGill University, Quebec, Canada
- Richard Cruess: Institute of Health Sciences Education, Faculty of Medicine and Health Sciences, McGill University, Quebec, Canada
- Sylvia Cruess: Institute of Health Sciences Education, Faculty of Medicine and Health Sciences, McGill University, Quebec, Canada
- Meredith Young: Institute of Health Sciences Education, Faculty of Medicine and Health Sciences, McGill University, Quebec, Canada

23
Zhao C, Xu T, Yao Y, Song Q, Xu B. Comparison of case-based learning using Watson for oncology and traditional method in teaching undergraduate medical students. Int J Med Inform 2023; 177:105117. [PMID: 37301132 DOI: 10.1016/j.ijmedinf.2023.105117] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2023] [Revised: 05/16/2023] [Accepted: 05/30/2023] [Indexed: 06/12/2023]
Abstract
BACKGROUND Watson for Oncology (WFO) is an artificial intelligence (AI)-based decision-support system that has been widely used to generate treatment recommendations for cancer patients. However, the application of WFO in clinical teaching among medical students has not been reported. OBJECTIVE To establish a novel teaching and learning method with WFO for undergraduate medical students and to evaluate its efficiency and students' satisfaction compared with a traditional case-based learning model. METHODS Seventy-two undergraduates majoring in clinical medicine at Wuhan University were enrolled and randomly divided into a WFO-based group and a control group. The 36 students in the WFO-based group studied clinical oncology cases via the WFO platform, while the 36 students in the control group used traditional teaching methods. After the course, a final examination and a questionnaire survey of teaching assessment were administered to both groups. RESULTS According to the questionnaire survey, the WFO-based group scored significantly higher than the control group in cultivating the ability of independent learning (17.67 ± 1.39 vs. 15.17 ± 2.02, P = 0.018), increasing knowledge mastery (17.75 ± 1.10 vs. 16.25 ± 1.18, P = 0.001), enhancing learning interest (18.41 ± 1.42 vs. 17.00 ± 1.37, P = 0.002), increasing course participation (18.33 ± 1.67 vs. 15.75 ± 1.67, P = 0.001), and overall course satisfaction (89.25 ± 5.92 vs. 80.75 ± 3.42, P = 0.001). CONCLUSION Our practice established a novel clinical case-based teaching pattern with WFO, providing undergraduate students with convenient and scientific training and guidance. It gives students improved learning experiences and equips them with essential tools for clinical practice.
Affiliation(s)
- Chen Zhao: Cancer Center, Renmin Hospital of Wuhan University, Wuhan 430060, Hubei, China
- Tangpeng Xu: Cancer Center, Renmin Hospital of Wuhan University, Wuhan 430060, Hubei, China
- Yi Yao: Cancer Center, Renmin Hospital of Wuhan University, Wuhan 430060, Hubei, China
- Qibin Song: Cancer Center, Renmin Hospital of Wuhan University, Wuhan 430060, Hubei, China
- Bin Xu: Cancer Center, Renmin Hospital of Wuhan University, Wuhan 430060, Hubei, China

24
Greenfield J, Qua K, Prayson RA, Bierer SB. "It Changed How I Think"-Impact of Programmatic Assessment Upon Practicing Physicians: A Qualitative Study. MEDICAL SCIENCE EDUCATOR 2023; 33:963-974. [PMID: 37546195 PMCID: PMC10403454 DOI: 10.1007/s40670-023-01829-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 06/23/2023] [Indexed: 08/08/2023]
Abstract
Programmatic assessment is a systematic approach used to document and assess learner performance. It offers learners frequent formative feedback from a variety of contexts and uses both high- and low-stakes assessments to determine student progress. Existing research has explored learner and faculty perceptions of programmatic assessment, reporting a favorable impact on faculty understanding of the importance of assessment stakes and feedback to learners, while students report the ability to establish and navigate towards goals and to reflect on their performance. The Cleveland Clinic Lerner College of Medicine (CCLCM) of Case Western Reserve University adopted programmatic assessment methods at its inception. With more than 18 years' experience with programmatic assessment and a portfolio-based assessment system, CCLCM is well positioned to explore its graduates' perceptions of their programmatic assessment experiences during and after medical school. In 2020, the investigators interviewed 26 of the 339 physician graduates. Participants were purposefully sampled to represent multiple class cohorts (2009-2019), clinical specialties, and practice locations. The investigators analyzed interview transcripts using thematic analysis informed by the frameworks of self-determination theory and professional identity formation. The authors identified themes and supported each with participant quotes from the interviews. Based on the findings, the investigators compiled a series of recommendations for other institutions that have incorporated, or plan to incorporate, elements of programmatic assessment into their curricula. The authors concluded by discussing future directions for research and additional avenues of inquiry.
Affiliation(s)
- Jessica Greenfield: University of Virginia School of Medicine, Room 2008A Pinn Hall, Box 800866, Charlottesville, VA 22908-0366, USA
- Kelli Qua: Case Western Reserve University School of Medicine, Cleveland, OH, USA
- Richard A. Prayson: Department of Anatomic Pathology, Cleveland Clinic Lerner College of Medicine of Case Western Reserve University, Cleveland Clinic, Cleveland, OH, USA
- S. Beth Bierer: Cleveland Clinic Lerner College of Medicine of Case Western Reserve University, Cleveland, OH, USA

25
Loosveld LM, Driessen EW, Theys M, Van Gerven PWM, Vanassche E. Combining Support and Assessment in Health Professions Education: Mentors' and Mentees' Experiences in a Programmatic Assessment Context. PERSPECTIVES ON MEDICAL EDUCATION 2023; 12:271-281. [PMID: 37426357 PMCID: PMC10327863 DOI: 10.5334/pme.1004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/06/2023] [Accepted: 06/23/2023] [Indexed: 07/11/2023]
Abstract
Introduction Mentors in programmatic assessment support mentees with low-stakes feedback, which often also serves as input for high-stakes decision making. That process potentially causes tensions in the mentor-mentee relationship. This study explored how undergraduate mentors and mentees in health professions education experience combining developmental support and assessment, and what this means for their relationship. Methods The authors chose a pragmatic qualitative research approach and conducted semi-structured vignette-based interviews with 24 mentors and 11 mentees, including learners from medicine and the biomedical sciences. Data were analyzed thematically. Results How participants combined developmental support and assessment varied: in some mentor-mentee relationships it worked well, in others it caused tensions. Tensions were also created by unintended consequences of design decisions at the program level. The dimensions affected by experienced tensions were relationship quality, dependence, trust, and the nature and focus of mentoring conversations. Mentors and mentees mentioned applying various strategies to alleviate tensions: transparency and expectation management, distinguishing between developmental support and assessment, and justifying assessment responsibility. Discussion Combining the responsibility for developmental support and assessment within an individual worked well in some mentor-mentee relationships, but caused tensions in others. At the program level, clear decisions should be made regarding the design of programmatic assessment: what is the program of assessment and how are responsibilities divided among all involved? If tensions arise, mentors and mentees can try to alleviate them, but continuous mutual calibration of expectations between mentors and mentees remains of key importance.
Affiliation(s)
- Lianne M. Loosveld: Department of Educational Development & Research, School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, Universiteitssingel 60, 6229 ER Maastricht, the Netherlands
- Erik W. Driessen: Department of Educational Development & Research, School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, Universiteitssingel 60, 6229 ER Maastricht, the Netherlands
- Mattias Theys: Department of Educational Development & Research, School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, Universiteitssingel 60, 6229 ER Maastricht, the Netherlands
- Pascal W. M. Van Gerven: Department of Educational Development & Research, School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, Universiteitssingel 60, 6229 ER Maastricht, the Netherlands
- Eline Vanassche: Faculty of Psychology and Educational Sciences, KU Leuven Kulak, Etienne Sabbelaan 51, P.O. Box 7654, 8500 Kortrijk, Belgium

26
Ganesan I, Cham B, Teunissen PW, Busari JO. Stakes of Assessments in Residency: Influence on Previous and Current Self-Regulated Learning and Co-Regulated Learning in Early Career Specialists. PERSPECTIVES ON MEDICAL EDUCATION 2023; 12:237-246. [PMID: 37334108 PMCID: PMC10275342 DOI: 10.5334/pme.860] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/22/2022] [Accepted: 05/26/2023] [Indexed: 06/20/2023]
Abstract
Introduction Assessments drive learning, but the influence of the stakes of assessments on self-regulated learning (SRL) during and after residency is unknown. Because early career specialists (ECS) must continue learning independently, the answer is important: it may inform future assessments with the potential to promote life-long learning after graduation. Methods We utilized constructivist grounded theory to explore the perspectives of eighteen ECS on the influence of the stakes of assessments within residency on their SRL during training and in current practice. We conducted semi-structured interviews. Results We initially set out to examine the influence of the stakes of assessments on SRL during residency and after graduation. However, it was apparent that learners increasingly engaged with others in co-regulated learning (CRL) as the perceived stakes of the assessments increased. The individual learner's SRL was embedded in CRL in preparation for the various assessments in residency. For low-stakes assessments, the learner engaged in less CRL, taking fewer cues from others. As stakes increased, the learner engaged in more CRL with peers at a similar intellectual level and with supervisors to prepare for these assessments. SRL and CRL influenced by assessments in residency had a knock-on effect in the clinical practice of ECS in: 1) developing clinical reasoning, 2) improving doctor-patient communication and negotiation skills, and 3) self-reflection and seeking feedback to deal with expectations of self or others. Discussion Our study supported the view that the stakes of assessments within residency reinforced SRL and CRL during residency, with a continued effect on learning as ECS.
Affiliation(s)
- Indra Ganesan: Department of Pediatrics, Kandang Kerbau Women’s and Children’s Hospital, Singapore
- Breana Cham: Department of Genetics, Kandang Kerbau Women’s and Children’s Hospital, Singapore
- Pim W. Teunissen: School of Health Professions Education (SHE), Faculty of Health, Medicine and Life Sciences, Maastricht University and Department of Obstetrics & Gynecology, Maastricht University Medical Center, Maastricht, the Netherlands
- Jamiu O. Busari: Department of Educational Development & Research, Faculty of Health, Medicine & Life Sciences (FHML), Maastricht University, Maastricht, The Netherlands; Department of Pediatrics and HOH Academy, Horacio Oduber Hospital, Dr. Horacio E. Oduber Boulevard #1, Oranjestad, Aruba

27
Martin L, Blissett S, Johnston B, Tsang M, Gauthier S, Ahmed Z, Sibbald M. How workplace-based assessments guide learning in postgraduate education: A scoping review. MEDICAL EDUCATION 2023; 57:394-405. [PMID: 36286100 DOI: 10.1111/medu.14960] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/30/2022] [Revised: 09/16/2022] [Accepted: 10/21/2022] [Indexed: 06/16/2023]
Abstract
INTRODUCTION Competency-based medical education (CBME) led to the widespread adoption of workplace-based assessment (WBA) with the promise of achieving assessment for learning. Despite this, studies have illustrated tensions between the summative and formative roles of WBA which undermine learning goals. Models of workplace-based learning (WBL) provide insight; however, these models excluded WBA. This scoping review synthesizes the primary literature addressing the role of WBA in guiding learning in postgraduate medical education, with the goal of identifying gaps to address in future studies. METHODS The search was applied to OVID Medline, Web of Science, ERIC and CINAHL databases; articles up to September 2020 were included. Titles and abstracts were screened by two reviewers, followed by a full-text review. Two members independently extracted and analysed quantitative and qualitative data using a descriptive-analytic technique rooted in Billett's four premises of WBL. Themes were synthesized and discussed until consensus was reached. RESULTS All 33 papers focused on the perception of learning through WBA. The majority applied qualitative methodology (70%), and 12 studies (36%) made explicit reference to theory. Aligning with Billett's first premise, results reinforce that learning always occurs in the workplace. WBA helped guide learning goals and enhanced feedback frequency and specificity. Billett's remaining premises provided an important lens for understanding how tensions that existed in WBL have been exacerbated by frequent WBA. As individuals engage in both work and WBA, they are slowly transforming the workplace. Culture and context frame individual experiences and the perceived authenticity of WBA. Finally, individuals will have different goals, and learn different things, from the same experience. CONCLUSION Analysing WBA literature through the lens of WBL theory allows us to reframe previously described tensions. We propose that future studies attend to learning theory, and demonstrate alignment with philosophical position, to advance our understanding of assessment-for-learning in the workplace.
Affiliation(s)
- Leslie Martin: Department of Medicine, McMaster University, Hamilton, Ontario, Canada
- Sarah Blissett: Department of Medicine, Western University, London, Ontario, Canada
- Bronte Johnston: McMaster Education Research, Innovation, and Theory Program, McMaster University, Hamilton, Ontario, Canada
- Michael Tsang: Department of Medicine, McMaster University, Hamilton, Ontario, Canada
- Stephen Gauthier: Department of Medicine, Queens University, Kingston, Ontario, Canada
- Zeeshan Ahmed: Department of Medicine, Ottawa University, Ottawa, Ontario, Canada
- Matthew Sibbald: Department of Medicine, McMaster University, Hamilton, Ontario, Canada

28
Chin M, Pack R, Cristancho S. "A whole other competence story": exploring faculty perspectives on the process of workplace-based assessment of entrustable professional activities. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2023; 28:369-385. [PMID: 35997910 DOI: 10.1007/s10459-022-10156-0] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/24/2022] [Accepted: 08/07/2022] [Indexed: 05/11/2023]
Abstract
The centrality of entrustable professional activities (EPAs) in competency-based medical education (CBME) is predicated on the assumption that low-stakes, high-frequency workplace-based assessments used in a programmatic approach will result in accurate and defensible judgments of competence. While there have been conversations in the literature regarding the potential of this approach, only recently has the conversation begun to explore the actual experiences of clinical faculty in this process. The purpose of this qualitative study was to explore the process of EPA assessment for faculty in everyday practice. We conducted 18 semi-structured interviews with Anesthesia faculty at a Canadian academic center. Participants were asked to describe how they engage in EPA assessment in daily practice and the factors they considered. Interviews were audio-recorded, transcribed, and analysed using the constant comparative method of grounded theory. Participants in this study perceived two sources of tension in the EPA assessment process that influenced their scoring on official forms: the potential constraints of the assessment forms and the potential consequences of their assessment outcome. This was particularly salient in circumstances of uncertainty regarding the learner's level of competence. Ultimately, EPA assessment in CBME may be experienced as higher-stakes by faculty than officially recognized due to these tensions, suggesting a layer of discomfort and burden in the process that may potentially interfere with the goal of assessment for learning. Acknowledging and understanding the nature of this burden and identifying strategies to mitigate it are critical to achieving the assessment goals of CBME.
Affiliation(s)
- Melissa Chin: Department of Anesthesia and Perioperative Medicine, London Health Sciences Centre, Schulich School of Medicine and Dentistry, University of Western Ontario, London, ON, Canada
- Rachael Pack: Center for Education Research and Innovation, University of Western Ontario, London, ON, Canada
- Sayra Cristancho: Center for Education Research and Innovation, University of Western Ontario, London, ON, Canada

29
Alkalash SH, Farag NA. Effect of Workplace-Based Assessment Utilization as a Formative Assessment for Learning Among Family Medicine Postgraduates at the Faculty of Medicine, Menoufia University: A Prospective Study. Cureus 2023; 15:e35246. [PMID: 36968896 PMCID: PMC10034738 DOI: 10.7759/cureus.35246] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 02/21/2023] [Indexed: 02/25/2023] Open
Abstract
Background Workplace-based assessment (WBA) is a group of assessment approaches that evaluate trainees' performance through observation and monitoring in real clinical settings and then provide them with constructive, relevant feedback. Many WBA tools are available, including the mini-clinical evaluation exercise (mini-CEX), direct observation of procedural skills (DOPS), case-based discussions, and multisource feedback (peers, seniors, and patients). WBA can help medical students improve their clinical competencies and ensure that qualified physicians graduate. Methods This prospective study was conducted in the family medicine department at the Menoufia Faculty of Medicine in Egypt and proceeded in two phases. Phase I involved an orientation lecture on WBA for family medicine staff and a convenience sample of 21 family medicine postgraduates. Phase II involved conducting a monthly mini-CEX and DOPS for the postgraduates. Finally, students' satisfaction with the WBA was assessed, and all collected data were analyzed with Statistical Package for Social Science (SPSS) version 23 (IBM Corp., Armonk, NY). Results A total of 105 feedback sheets were obtained: 63 mini-CEX sheets (21 from each of three sessions) and 42 DOPS sheets (21 from each of two sessions), all of which were collected and analyzed. A significant improvement was detected in the postgraduates' mini-CEX and DOPS feedback scores across consecutive sessions: 9.5 ± 2.7, 24.9 ± 2.5, and 27.29 ± 1.5 (P < 0.001) for mini-CEX, and 6.1 ± 1.8 versus 9.0 ± 1.2 (P < 0.001) for DOPS. About 93% of the postgraduates recommended the application of WBA for their peers, and 86% requested to undergo it again with other clinical cases and procedures. Conclusion Workplace-based assessment in the form of mini-CEX and DOPS showed its ability to improve clinical knowledge and skills among family medicine postgraduates, who became motivated to undergo it again to improve their clinical performance and reduce stress related to final summative and objective structured clinical examinations (OSCEs).
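The abstract does not name the statistical test behind these P values; for repeated scores from the same 21 trainees, a paired comparison is the natural choice. Below is a minimal sketch under that assumption, using a paired t-test on two DOPS sessions; all values are invented for illustration and are not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical per-trainee DOPS feedback scores for two consecutive
# sessions (n = 21 postgraduates, matching the study's sample size;
# the values themselves are simulated, not the study's data).
rng = np.random.default_rng(0)
session_1 = rng.normal(loc=6.1, scale=1.8, size=21)
session_2 = rng.normal(loc=9.0, scale=1.2, size=21)

# Paired t-test: each trainee serves as their own control across sessions.
t_stat, p_value = stats.ttest_rel(session_1, session_2)
print(f"session 1: {session_1.mean():.1f} +/- {session_1.std(ddof=1):.1f}")
print(f"session 2: {session_2.mean():.1f} +/- {session_2.std(ddof=1):.1f}")
print(f"paired t = {t_stat:.2f}, p = {p_value:.4g}")
```

For the three mini-CEX sessions, the same idea extends to a repeated-measures ANOVA or pairwise paired tests with a multiple-comparison correction.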
30
Watling C, Shaw J, Field E, Ginsburg S. 'For the most part it works': Exploring how authors navigate peer review feedback. MEDICAL EDUCATION 2023; 57:151-160. [PMID: 36031758 DOI: 10.1111/medu.14932] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/23/2022] [Revised: 08/15/2022] [Accepted: 08/25/2022] [Indexed: 06/15/2023]
Abstract
BACKGROUND Peer review aims to provide meaningful feedback to research authors so that they may improve their work, and yet it constitutes a particularly challenging context for the exchange of feedback. We explore how research authors navigate the process of interpreting and responding to peer review feedback, in order to elaborate how feedback functions when some of the conditions thought to be necessary for it to be effective are not met. METHODS Using constructivist grounded theory methodology, we interviewed 17 recently published health professions education researchers about their experiences with the peer review process. Data collection and analysis were concurrent and iterative. We used constant comparison to identify themes and to develop a conceptual model of how feedback functions in this setting. RESULTS Although participants expressed faith in peer review, they acknowledged that the process was emotionally trying and raised concerns about its consistency and credibility. These potential threats were mitigated by factors including time, team support, experience and the exercise of autonomy. Additionally, the perceived engagement of reviewers and the cultural norms and expectations surrounding the process strengthened authors' willingness and capacity to respond productively. Our analysis suggests a model of feedback within which its perceived usefulness turns on the balance of threats and countermeasures. CONCLUSIONS Feedback is a balancing act. Although threats to the productive uptake of peer review feedback abound, these threats may be neutralised by a range of countermeasures. Among these, opportunities for autonomy and cultural normalisation of both the professional responsibility to engage with feedback and the challenge of doing so may be especially influential and may have implications beyond the peer review setting.
Affiliation(s)
- Christopher Watling: Centre for Education Research and Innovation, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
- Jennifer Shaw: Centre for Education Research and Innovation, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
- Emily Field: Centre for Education Research and Innovation, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
- Shiphra Ginsburg: Department of Medicine, University of Toronto, Toronto, Ontario, Canada

31
Zubiaurre Bitzer LA, Dathatri S, Fine JB, Swan Sein A. Building a student learning-focused assessment and grading system in dental school: One school's experience. J Dent Educ 2023; 87:614-624. [PMID: 36607618 DOI: 10.1002/jdd.13158] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2022] [Revised: 11/21/2022] [Accepted: 12/06/2022] [Indexed: 01/07/2023]
Abstract
PURPOSE/OBJECTIVES As health professions education moves toward competency-based education, there has been increased focus on the structure of assessment systems that support student competency development and learning. This has been buoyed by a growing body of research supporting assessment for learning processes to promote student growth and learning, rather than relying on assessment systems primarily to measure performance. This paper presents the rationale and evidence for moving to an assessment for learning system and the results of a quasi-experimental interrupted time series study using data from 2015 to 2022 to evaluate the impacts of these changes. METHODS Columbia University College of Dental Medicine faculty voted to implement assessment for learning system changes beginning in 2017 with the graduating class of 2021. These changes included moving didactic courses from a grading system with Honors, Pass, and Fail as available grades to one with only Pass and Fail, as well as creating synthesis and assessment weeks, weekly problem sets, post-exam review sessions, exam remediation opportunities, and formative progress exams throughout the curriculum. The revised assessment and grading system changes were communicated to residency program directors, and programmatic competency data about student performance across the curriculum were shared with programs in Dean's Letters. RESULTS Once the assessment system changes were implemented, student exam failure rates were lower, course exam scores were the same or higher, and performance on board exams improved compared to the national average. Students reported positive perceptions of well-being and learning climate that they associated with the adoption of Pass/Fail grading. Match outcomes, including student satisfaction and program director ratings, have remained consistently positive. CONCLUSION As dental educators, our goal is to nurture students to become life-long learners. Adopting a Pass/Fail grading structure and an assessment system that fosters learning allows students to shape learning practices that favor long-term retention and application of information, while also enhancing the learning environment and student well-being. These system changes may also facilitate the inclusion and support of students whose backgrounds are underrepresented in dentistry.
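The abstract labels the design a quasi-experimental interrupted time series but does not specify the model; segmented regression is one common way to analyse such data. A minimal sketch under that assumption follows; the yearly scores and the 2017 change point below are illustrative placeholders, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Invented yearly outcome series, 2015-2022, with the assessment-system
# change assumed to take effect in 2017 (placeholders for illustration).
df = pd.DataFrame({"year": np.arange(2015, 2023),
                   "score": [78.0, 79.0, 80.5, 82.0, 83.0, 84.5, 85.0, 86.0]})
df["time"] = df["year"] - df["year"].min()      # underlying secular trend
df["post"] = (df["year"] >= 2017).astype(int)   # level change after 2017
df["time_post"] = df["time"] * df["post"]       # trend change after 2017

# Segmented regression: intercept, pre-change trend, level shift, slope shift.
model = smf.ols("score ~ time + post + time_post", data=df).fit()
print(model.params)
```

The `post` coefficient estimates the immediate level shift at the change point, and `time_post` the change in trend afterwards.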
Affiliation(s)
- Shubha Dathatri: Vagelos College of Physicians and Surgeons, Columbia University, New York, New York, USA
- James B Fine: College of Dental Medicine, Columbia University, New York, New York, USA
- Aubrie Swan Sein: Vagelos College of Physicians and Surgeons, Columbia University, New York, New York, USA

32
Costa A, Lamas S, Correia MR, Gomes MS, Costa MJ, Olsson IAS. An Objective Structured Laboratory Animal Science Examination (OSLASE) to ensure researchers' professional competence in laboratory animal science. Lab Anim 2022; 57:149-159. [PMID: 36510479 DOI: 10.1177/00236772221135671] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
Abstract
The evaluation of the competence of personnel working with laboratory animals is currently a challenge. Directive 2010/63/EU establishes that staff must have demonstrated competence before they perform unsupervised work with living animals. Nevertheless, there is a lack of research into education and training in laboratory animal science, and the establishment of assessment strategies to confirm researchers' competence remains largely unaddressed. In this study, we analysed the implementation of a practical assessment strategy over three consecutive years (2018-2021) using the Objective Structured Laboratory Animal Science Exam (OSLASE), developed previously by us to assess professional competence. The interrater reliability (IRR) was determined from the assessors' ratings of candidates' performance at different OSLASE stations using weighted kappa (Kw) and percentage of agreement. Focus group interviews were conducted to assess trainees' acceptance of the OSLASE. There was a moderate-to-good Kw for the majority of the scales' items (0.45 ± 0.13 ≤ Kw ≤ 0.79 ± 0.20). The percentages of agreement were also acceptable (≥75%) for all scale items but one. Trainees reported that the OSLASE had a positive impact on their engagement during practical training, and that it clarified the standards established for their performance and the skills that required improvement. These preliminary results illustrate how assessment strategies such as the OSLASE can be implemented in a manner that is useful for both assessors and trainees.
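Weighted kappa of the kind reported here can be computed per scale item from two assessors' ordinal ratings. A minimal sketch follows, assuming linearly weighted Cohen's kappa; the paper does not state its weighting scheme, and the ratings below are invented for illustration.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Invented ordinal ratings (0 = below standard, 1 = borderline,
# 2 = meets standard) from two assessors scoring the same candidates
# on one OSLASE station item.
rater_a = np.array([2, 2, 1, 0, 2, 1, 1, 2, 0, 2])
rater_b = np.array([2, 1, 1, 0, 2, 1, 2, 2, 1, 2])

# Linearly weighted kappa penalises near-misses less than large gaps.
kw = cohen_kappa_score(rater_a, rater_b, weights="linear")

# Raw percentage of agreement, as also reported per scale item.
agreement = (rater_a == rater_b).mean() * 100
print(f"weighted kappa = {kw:.2f}, agreement = {agreement:.0f}%")
```

Linear weights suit ordinal performance scales because adjacent-category disagreements count less against reliability than distant ones.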
Affiliation(s)
- Andreia Costa: i3S, Instituto de Investigação e Inovação em Saúde, University of Porto, Portugal; IBMC, Institute for Molecular and Cell Biology, University of Porto, Portugal; ICBAS, Abel Salazar Institute of Biomedical Sciences, University of Porto, Portugal
- Sofia Lamas: i3S, Instituto de Investigação e Inovação em Saúde, University of Porto, Portugal; IBMC, Institute for Molecular and Cell Biology, University of Porto, Portugal
- Maria Rui Correia: i3S, Instituto de Investigação e Inovação em Saúde, University of Porto, Portugal; IBMC, Institute for Molecular and Cell Biology, University of Porto, Portugal
- Maria S Gomes: i3S, Instituto de Investigação e Inovação em Saúde, University of Porto, Portugal; IBMC, Institute for Molecular and Cell Biology, University of Porto, Portugal; ICBAS, Abel Salazar Institute of Biomedical Sciences, University of Porto, Portugal
- Manuel J Costa: ICVS, Life and Health Sciences Research Institute, School of Medicine, University of Minho, Portugal
- I Anna S Olsson: i3S, Instituto de Investigação e Inovação em Saúde, University of Porto, Portugal; IBMC, Institute for Molecular and Cell Biology, University of Porto, Portugal

33
Mowchun JJ, Davila CH. How Am I Doing in Small Group? Student Perceptions of Feedback in Case-Based Learning Sessions. MEDICAL SCIENCE EDUCATOR 2022; 32:1487-1493. [PMID: 36532402 PMCID: PMC9755430 DOI: 10.1007/s40670-022-01677-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 10/28/2022] [Indexed: 06/17/2023]
Abstract
Introduction Small group case-based learning (CBL) facilitators are content experts who may provide feedback to students on cognitive reasoning skills and knowledge acquisition. However, students' feedback-seeking behavior and responses to faculty feedback in CBL sessions are not well understood, and it is essential to maximize feedback in this setting, where observing student performance can be challenging and groups may place varied emphasis on individual versus team performance. We explored student perceptions of the effectiveness of faculty feedback processes during CBL sessions. Methods This qualitative study used semi-structured interviews with ten second-year medical students enrolled in the Geisel School of Medicine preclinical neurology course. Investigator triangulation was used, with interpretation comparisons that included independent content analysis. The constructed themes were discussed and final theme consensus was reached. Results Three major themes arose: (1) students value frequent feedback on their understanding of key clinical case concepts; (2) the CBL learning environment is not conducive to individual feedback; and (3) student feedback-seeking behavior and response are influenced by self-perceived level of preparedness for the sessions and overall comfort with the CBL facilitator and learning environment. Conclusions Students value content-based feedback from CBL sessions and need more individualized feedback. The style of the facilitator and the overall learning environment can vary widely in the small group setting and have a direct impact on feedback opportunities and student feedback-seeking behavior.
Affiliation(s)
- Justin J. Mowchun: Departments of Neurology and Medical Education, Geisel School of Medicine at Dartmouth, Dartmouth-Hitchcock Medical Center, Lebanon, NH, USA
- Claire Hogue Davila: Departments of Neurology and Medical Education, Geisel School of Medicine at Dartmouth, Dartmouth-Hitchcock Medical Center, Lebanon, NH, USA

34
Pippitt KA, Moore KB, Lindsley JE, Cariello PF, Smith AG, Formosa T, Moser K, Morton DA, Colbert-Getz JM, Chow CJ. Assessment for Learning with Ungraded and Graded Assessments. MEDICAL SCIENCE EDUCATOR 2022; 32:1045-1054. [PMID: 36276764 PMCID: PMC9584017 DOI: 10.1007/s40670-022-01612-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 08/25/2022] [Indexed: 06/16/2023]
Abstract
Introduction Assessment for learning has many benefits, but learners will still encounter high-stakes decisions about their performance throughout training. It is unknown if assessment for learning can be promoted with a combination model where scores from some assessments are factored into course grades and scores from other assessments are not used for course grading. Methods At the University of Utah School of Medicine, year 1-2 medical students (MS) completed multiple-choice question quiz assessments and final examinations in six systems-based science courses. Quiz and final examination performance counted toward course grades for MS2017-MS2018. Starting with the MS2020 cohort, quizzes no longer counted toward course grades. Quiz, final examination, and Step 1 scores were compared between ungraded quiz and graded quiz cohorts with independent samples t-tests. Student and faculty feedback was collected. Results Quiz performance was not different for the ungraded and graded cohorts (p = 0.173). Ungraded cohorts scored 4% higher on final examinations than graded cohorts (p ≤ 0.001, d = 0.88). Ungraded cohorts scored above the national average and 11 points higher on Step 1 compared to graded cohorts, who had scored below the national average (p ≤ 0.001, d = 0.64). During the study period, Step 1 scores increased by 2 points nationally. Student feedback was positive, and faculty felt it improved their relationship with students. Discussion The change to ungraded quizzes did not negatively affect final examination or Step 1 performance, suggesting a combination of ungraded and graded assessments can effectively promote assessment for learning.
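The cohort comparisons reported here pair independent-samples t-tests with Cohen's d effect sizes. A minimal sketch of that computation follows, with invented score vectors standing in for the ungraded and graded cohorts; the study's actual distributions are not reproduced.

```python
import numpy as np
from scipy import stats

# Invented final-examination percentages for two cohorts; sample sizes
# and values are placeholders, not the study's data.
rng = np.random.default_rng(1)
ungraded = rng.normal(loc=84.0, scale=4.5, size=120)
graded = rng.normal(loc=80.0, scale=4.5, size=120)

# Independent-samples t-test between the two cohorts.
t_stat, p_value = stats.ttest_ind(ungraded, graded)

# Cohen's d from the pooled standard deviation.
n1, n2 = len(ungraded), len(graded)
pooled_sd = np.sqrt(((n1 - 1) * ungraded.var(ddof=1) +
                     (n2 - 1) * graded.var(ddof=1)) / (n1 + n2 - 2))
cohens_d = (ungraded.mean() - graded.mean()) / pooled_sd
print(f"t = {t_stat:.2f}, p = {p_value:.3g}, d = {cohens_d:.2f}")
```

By the usual conventions (d of roughly 0.2 small, 0.5 medium, 0.8 large), the reported d = 0.88 for final examinations is a large effect.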
Affiliation(s)
- Karly A. Pippitt: Department of Family and Preventive Medicine, University of Utah School of Medicine, 317 Chipeta Way, Suite A, Salt Lake City, UT 84108, USA; Community Faculty, University of Utah School of Medicine, Salt Lake City, UT, USA
- Kathryn B. Moore: Department of Neurobiology, University of Utah School of Medicine, Salt Lake City, UT, USA
- Janet E. Lindsley: Department of Biochemistry, University of Utah School of Medicine, Salt Lake City, UT, USA; Curriculum, University of Utah School of Medicine, Salt Lake City, UT, USA
- Paloma F. Cariello: Department of Internal Medicine, University of Utah School of Medicine, Salt Lake City, UT, USA; Health Equity, Diversity, and Inclusion, University of Utah School of Medicine, Salt Lake City, UT, USA
- Andrew G. Smith: Department of Pediatrics, University of Utah School of Medicine, Salt Lake City, UT, USA
- Tim Formosa: Department of Biochemistry, University of Utah School of Medicine, Salt Lake City, UT, USA
- Karen Moser: Department of Pathology, University of Utah School of Medicine, Salt Lake City, UT, USA
- David A. Morton: Department of Neurobiology, University of Utah School of Medicine, Salt Lake City, UT, USA
- Jorie M. Colbert-Getz: Department of Internal Medicine, University of Utah School of Medicine, Salt Lake City, UT, USA; Education Quality Improvement, University of Utah School of Medicine, Salt Lake City, UT, USA
- Candace J. Chow: Department of Internal Medicine, University of Utah School of Medicine, Salt Lake City, UT, USA; Education Research, University of Utah School of Medicine, Salt Lake City, UT, USA

35
Phinney LB, Fluet A, O'Brien BC, Seligman L, Hauer KE. Beyond Checking Boxes: Exploring Tensions With Use of a Workplace-Based Assessment Tool for Formative Assessment in Clerkships. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2022; 97:1511-1520. [PMID: 35703235 DOI: 10.1097/acm.0000000000004774] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
PURPOSE To understand the role of a workplace-based assessment (WBA) tool in facilitating feedback for medical students, this study explored changes and tensions in a clerkship feedback activity system through the lens of cultural historical activity theory (CHAT) over 2 years of tool implementation. METHOD This qualitative study uses CHAT to explore WBA use in core clerkships by identifying feedback activity system elements (e.g., community, tools, rules, objects) and tensions among these elements. University of California, San Francisco core clerkship students were invited to participate in semistructured interviews eliciting experience with a WBA tool intended to enhance direct observation and feedback in year 1 (2019) and year 2 (2020) of implementation. In year 1, the WBA tool required supervisor completion in the school's evaluation system on a computer. In year 2, both students and supervisors had WBA completion abilities and could access the form via a smartphone separate from the school's evaluation system. RESULTS Thirty-five students participated in interviews. The authors identified tensions that shifted with time and tool iterations. Year 1 students described tensions related to cumbersome tool design, fear of burdening supervisors, confusion over WBA purpose, WBA as checking boxes, and WBA usefulness depending on clerkship context and culture. Students perceived dissatisfaction with the year 1 tool version among peers and supervisors. The year 2 mobile-based tool and student completion capabilities helped to reduce many of the tensions noted in year 1. Students expressed wider WBA acceptance among peers and supervisors in year 2 and reported understanding WBA to be for low-stakes feedback, thereby supporting formative assessment for learning. CONCLUSIONS Using CHAT to explore changes in a feedback activity system with WBA tool iterations revealed elements important to WBA implementation, including designing technology for tool efficiency and affording students autonomy to document feedback with WBAs.
Affiliation(s)
- Lauren B Phinney: first-year internal medicine resident, Department of Medicine, University of California, San Francisco School of Medicine, San Francisco, California
- Angelina Fluet: fourth-year medical student, University of California, San Francisco School of Medicine, San Francisco, California
- Bridget C O'Brien: professor of medicine and education scientist, Department of Medicine and Center for Faculty Educators, University of California, San Francisco School of Medicine, San Francisco, California
- Lee Seligman: second-year internal medicine resident, Department of Medicine, New York-Presbyterian Hospital, Columbia University Irving Medical Center, New York, New York
- Karen E Hauer: associate dean for competency assessment and professional standards and professor, Department of Medicine, University of California, San Francisco School of Medicine, San Francisco, California

36
Pearce J. What do student experiences of programmatic assessment tell us about scoring programmatic assessment data? MEDICAL EDUCATION 2022; 56:872-875. [PMID: 35698736 DOI: 10.1111/medu.14852] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/31/2022] [Accepted: 06/06/2022] [Indexed: 06/15/2023]
Affiliation(s)
- Jacob Pearce
- Australian Council for Educational Research - Tertiary Education (Assessment), Camberwell, Victoria, Australia
| |
Collapse
|
37
|
Roberts C, Khanna P, Bleasel J, Lane S, Burgess A, Charles K, Howard R, O'Mara D, Haq I, Rutzou T. Student perspectives on programmatic assessment in a large medical programme: A critical realist analysis. MEDICAL EDUCATION 2022; 56:901-914. [PMID: 35393668 PMCID: PMC9542097 DOI: 10.1111/medu.14807] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/15/2021] [Revised: 03/28/2022] [Accepted: 04/05/2022] [Indexed: 06/14/2023]
Abstract
BACKGROUND Fundamental challenges exist in researching complex changes of assessment practice from traditional, objective-focused 'assessments of learning' towards programmatic 'assessment for learning'. The latter emphasises both the subjective and the social in collective judgements of student progress. Our context was a purposively designed programmatic assessment system implemented in the first year of a new graduate-entry curriculum. We applied critical realist perspectives to unpack the underlying causes (mechanisms) that explained student experiences of programmatic assessment, in order to optimise assessment practice for future iterations. METHODS Data came from 14 in-depth focus groups (N = 112/261 students). We applied a critical realist lens drawn from Bhaskar's three domains of reality (the actual, the empirical and the real) and Archer's concept of structure and agency to understand the student experience of programmatic assessment. Analysis involved induction (pattern identification), abduction (theoretical interpretation) and retroduction (causal explanation). RESULTS As a complex educational and social change, the assessment structures and cultural systems within programmatic assessment provided conditions (constraints and enablements) and conditioning (acceptance or rejection of new 'non-traditional' assessment processes) for the actions of agents (students) exercising their learning choices. The emergent underlying mechanism that most influenced students' experience of programmatic assessment was one of balancing the complex relationships between learner agency, assessment structures and the cultural system. CONCLUSIONS Our study adds to debates on programmatic assessment by emphasising how achieving balance between learner agency, structure and culture suggests strategies to underpin sustained changes (elaboration) in assessment practice. These include: faculty and student learning development to promote collective reflexivity and agency; optimising assessment structures by enhancing the integration of theory with practice; and changing the learning culture by both enhancing existing and developing new social structures between faculty and the student body, to gain acceptance of and trust in the new norms, beliefs and behaviours involved in assessment for and of learning.
Collapse
Affiliation(s)
- Chris Roberts
- Faculty of Medicine and Health, Sydney Medical School, Education Office, The University of Sydney, Sydney, New South Wales, Australia
| | - Priya Khanna
- Faculty of Medicine and Health, Sydney Medical School, Education Office, The University of Sydney, Sydney, New South Wales, Australia
| | - Jane Bleasel
- Faculty of Medicine and Health, Sydney Medical School, Education Office, The University of Sydney, Sydney, New South Wales, Australia
| | - Stuart Lane
- Faculty of Medicine and Health, Sydney Medical School, Education Office, The University of Sydney, Sydney, New South Wales, Australia
| | - Annette Burgess
- Faculty of Medicine and Health, Sydney Medical School, Education Office, The University of Sydney, Sydney, New South Wales, Australia
| | - Kellie Charles
- Faculty of Medicine and Health, Sydney Medical School, Education Office, The University of Sydney, Sydney, New South Wales, Australia
- Faculty of Medicine and Health, Sydney Pharmacy School, Discipline of Pharmacology, The University of Sydney, Sydney, New South Wales, Australia
| | - Rosa Howard
- Faculty of Medicine and Health, Sydney Medical School, Education Office, The University of Sydney, Sydney, New South Wales, Australia
| | - Deborah O'Mara
- Faculty of Medicine and Health, Sydney Medical School, Education Office, The University of Sydney, Sydney, New South Wales, Australia
| | - Inam Haq
- Faculty of Medicine and Health, The University of Sydney, Sydney, New South Wales, Australia
| | - Timothy Rutzou
- School of Medicine, The University of Notre Dame, Chippendale, New South Wales, Australia
| |
Collapse
|
38
|
Looi JC, Pring W, Allison S, Bastiampillai T. Clinical update on psychiatric outcome measurement: what is the purpose, what is known and what should be done about it? Australas Psychiatry 2022; 30:494-497. [PMID: 35500242 DOI: 10.1177/10398562221092306] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
OBJECTIVE To provide a clinical update on the purposes, evidence base and recommendations for both clinician-rated and patient-rated outcome measures in psychiatric practice. CONCLUSIONS Private and public sector funders have implemented outcome measurement systems in Australian mental healthcare in order to improve cost-effectiveness. It is important to consider the ultimate aims of outcome measurement from various perspectives when evaluating the evidence base, as there are a number of measurement dimensions to address. For individual clinicians, the purpose may be to guide treatment planning, as well as to assess treatment and clinician efficacy. For patients, the purpose is to assess outcomes in terms of their goals for recovery, and to evaluate their satisfaction with the care provided and with their healthcare providers. The other, orthogonal dimensions of measurement comprise the proximal-to-illness measures of symptomatic severity and the distal measures of disability, both of which apply to clinician and patient outcomes. In turn, these measures may be used by healthcare funders in the public or private sectors as proxy measures of the cost-effectiveness of the psychiatric care provided. Clinical registries linked to service mapping would provide better data for patients, providers and funders to assess the availability and effectiveness of psychiatric care in Australia.
Collapse
Affiliation(s)
- Jeffrey Cl Looi
- Academic Unit of Psychiatry and Addiction Medicine, The Australian National University Medical School, Canberra Hospital, Canberra, ACT, Australia; Consortium of Australian-Academic Psychiatrists for Independent Policy and Research Analysis (CAPIPRA), Canberra, ACT, Australia
| | - William Pring
- Consortium of Australian-Academic Psychiatrists for Independent Policy and Research Analysis (CAPIPRA), Canberra, ACT, Australia; Delmont Private Hospital and Department of Psychiatry, Monash University, Clayton, VIC, Australia
| | - Stephen Allison
- Consortium of Australian-Academic Psychiatrists for Independent Policy and Research Analysis (CAPIPRA), Canberra, ACT, Australia; College of Medicine and Public Health, Flinders University, Adelaide, SA, Australia
| | - Tarun Bastiampillai
- Consortium of Australian-Academic Psychiatrists for Independent Policy and Research Analysis (CAPIPRA), Canberra, ACT, Australia; College of Medicine and Public Health, Flinders University, Adelaide, SA, Australia; Department of Psychiatry, Monash University, Clayton, VIC, Australia
| |
Collapse
|
39
|
Torre D, Schuwirth L, Van der Vleuten C, Heeneman S. An international study on the implementation of programmatic assessment: Understanding challenges and exploring solutions. MEDICAL TEACHER 2022; 44:928-937. [PMID: 35701165 DOI: 10.1080/0142159x.2022.2083487] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
INTRODUCTION Programmatic assessment is an approach to assessment aimed at optimizing the learning and decision functions of assessment. It involves a set of key principles and ground rules that are important for its design and implementation. However, despite its intuitive appeal, its implementation remains a challenge. The purpose of this paper is to gain a better understanding of the factors that affect the implementation process of programmatic assessment and of how specific implementation challenges are managed across different programs. METHODS An explanatory multiple case (collective) approach was used for this study. We identified six medical programs that had implemented programmatic assessment, varying in health professions discipline, level of education, and geographic location. We conducted interviews with a key faculty member from each of the programs and analyzed the data using inductive thematic analysis. RESULTS We identified two major factors in managing the challenges and complexity of the implementation process: knowledge brokers and a strategic opportunistic approach. Knowledge brokers were the people who drove and designed the implementation process, translating evidence into practice and allowing for real-time management of the complex processes of implementation. These knowledge brokers used a 'strategic opportunistic', or agile, approach to recognize new opportunities, secure leadership support, adapt to the context, and take advantage of the unexpected. Engaging in an overall curriculum reform process was a critical factor for successful implementation of programmatic assessment. DISCUSSION The study contributes to the understanding of the intricacies of the implementation process of programmatic assessment across different institutions. Managing opportunities, adaptive planning, and awareness of context were all critical aspects of thinking strategically and opportunistically in the implementation of programmatic assessment. Future research is needed to provide a more in-depth understanding of the values and beliefs that underpin the assessment culture of an organization, and of how such values may affect implementation.
Collapse
Affiliation(s)
- Dario Torre
- Director of Assessment and Professor of Medicine, University of Central Florida College of Medicine, Orlando, FL, USA
| | - Lambert Schuwirth
- College of Medicine and Public Health, Flinders University, Adelaide, Australia
| | - Cees Van der Vleuten
- Department of Educational Development and Research, School of Health Profession Education, Maastricht University, Maastricht, The Netherlands
| | - Sylvia Heeneman
- Department of Pathology, School Health Profession Education, Cardiovascular Research Institute Maastricht, Maastricht University, Maastricht, The Netherlands
| |
Collapse
|
40
|
Landoll RR, Bennion LD, Maranich AM, Hemmer PA, Torre D, Schreiber-Gregory DN, Durning SJ, Dong T. Extending growth curves: a trajectory monitoring approach to identification and interventions in struggling medical student learners. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2022; 27:645-658. [PMID: 35467305 DOI: 10.1007/s10459-022-10109-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/11/2020] [Accepted: 03/06/2022] [Indexed: 06/14/2023]
Abstract
Given gaps in both identifying struggling learners and providing them with targeted interventions, the purpose of this study is to improve both rapid identification and individualized academic advising using a visual representation of learner performance. Across three graduating classes, individual growth curves were calculated for each student on National Board of Medical Examiners customized assessments during the pre-clerkship period, using their deviation from the class average at each assessment point. These deviation scores were cumulatively summed over time and regressed onto the sequence of exams. We analyzed the difference between the regression slopes of students placed on Academic Probation (AP) versus those who were not, as well as differences in slopes based on the timing of AP placement, to explore learner trajectory after identification. Students on AP had an average growth slope of -6.06, compared with +0.89 for those not on AP. Findings also suggested that students placed on AP early in the pre-clerkship period showed significant improvement (positive changes in trajectory) compared with students identified later in the curriculum. Our findings suggest that earlier academic probation and intervention with struggling learners may have a positive effect on academic trajectory. Future research can better explore how academic trajectory monitoring and performance review can be regularly used in advising sessions with students. (A minimal sketch of the slope computation follows this entry.)
Collapse
Affiliation(s)
- Ryan R Landoll
- Department of Family Medicine, Uniformed Services University of the Health Sciences, 4301 Jones Bridge Rd, Bethesda, MD, 20814, USA.
| | - Layne D Bennion
- Department of Medical and Clinical Psychology, Uniformed Services University of the Health Sciences, Bethesda, USA
| | - Ashley M Maranich
- Department of Pediatrics, Uniformed Services University of the Health Sciences, Bethesda, USA
| | - Paul A Hemmer
- Department of Medicine, Uniformed Services University of the Health Sciences, Bethesda, USA
| | - Dario Torre
- Department of Medicine, Uniformed Services University of the Health Sciences, Bethesda, USA
| | | | - Steven J Durning
- Department of Medicine, Uniformed Services University of the Health Sciences, Bethesda, USA
| | - Ting Dong
- Department of Medicine, Uniformed Services University of the Health Sciences, Bethesda, USA
| |
Collapse
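The trajectory-monitoring computation this abstract describes (per-exam deviation from the class mean, cumulatively summed, then regressed on exam sequence) is straightforward to reproduce. Below is a minimal sketch in Python; the function name and the toy scores are illustrative assumptions, not material from the study.

```python
# Minimal sketch of the trajectory-monitoring idea: deviation from the
# class mean at each assessment, cumulatively summed, then regressed on
# the exam sequence to yield a per-student growth slope.
import numpy as np

def growth_slope(student_scores: np.ndarray, class_means: np.ndarray) -> float:
    """Slope of the cumulative deviation-from-class-average curve."""
    deviations = student_scores - class_means        # deviation at each exam
    cumulative = np.cumsum(deviations)               # running sum over time
    exam_index = np.arange(1, len(cumulative) + 1)   # exam sequence 1..n
    slope, _intercept = np.polyfit(exam_index, cumulative, deg=1)
    return slope

# Toy example: a student drifting below the class average trends negative.
class_means = np.array([70.0, 72.0, 71.0, 73.0, 74.0])
struggling = np.array([68.0, 69.0, 66.0, 67.0, 66.0])
print(f"growth slope: {growth_slope(struggling, class_means):+.2f}")
```

A persistently negative slope flags a trajectory diverging from the cohort, which is consistent with the -6.06 versus +0.89 contrast the authors report.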
|
41
|
From Traditional to Programmatic Assessment in Three (Not So) Easy Steps. EDUCATION SCIENCES 2022. [DOI: 10.3390/educsci12070487] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Programmatic assessment (PA) has strong theoretical and pedagogical underpinnings, but its practical implementation brings a number of challenges, particularly in traditional university settings involving large cohort sizes. This paper presents a detailed case report of an in-progress programmatic assessment implementation spanning a decade of assessment innovation carried out in three significant and transformative steps. The starting position and the subsequent changes represented in each step are reflected against the framework of established principles and implementation themes of PA. This case report emphasises the importance of ongoing innovation and evaluative research, the advantage of a dedicated team with a cohesive plan, and the fundamental necessity of electronic data collection. It also highlights the challenge of traditional university cultures, the potential advantage of a major pandemic disruption, and the necessity of curriculum renewal to support significant assessment change. Our PA implementation began with a plan to improve the learning potential of individual assessments and, over the subsequent decade, expanded to encompass a cohesive, course-wide assessment program involving meaningful aggregation of assessment data. In our context (large cohort sizes and a university-wide assessment policy), regular progress review meetings and progress decisions based on aggregated qualitative and quantitative data (rather than on assessment format) remain local challenges.
Collapse
|
42
|
Goldschmidt SL, Root Kustritz MV. Pilot Study Evaluating the Use of Typodonts (Dental Models) for Teaching Veterinary Dentistry as Part of the Core Veterinary Curriculum. JOURNAL OF VETERINARY MEDICAL EDUCATION 2022; 49:340-345. [PMID: 33970838 DOI: 10.3138/jvme-2020-0113] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Periodontal disease is one of the most common disease pathologies in small animal medicine, yet new graduates report that they feel unprepared to perform dentistry in general practice. Novel methodologies to close the knowledge gap in veterinary dentistry need to be identified. Typodonts (dental models) are commonly used in human dental schools to teach basic principles prior to practice on clinical patients and have been shown to be an effective teaching tool. The study aimed to determine whether independent study and self-guided practice on a veterinary typodont prior to a structured, in-person cadaver laboratory with feedback increases students' perceived clinical skills in performing periodontal techniques. We calculated the knowledge gap before and after the cadaver laboratory by comparing the students' perceived and desired skill levels in performing periodontal charting, ultrasonic cleaning, hand scaling, and root planing (a brief sketch of this computation follows this entry). Ninety-six percent of students reported that practice with the dental typodont prior to the cadaver laboratory increased their comfort level in performing periodontal skills. However, practice did not result in a significant decrease in the knowledge gap compared with participation in the cadaver laboratory alone. Although students perceived a benefit to practicing with the typodont, self-guided practice was not effective in decreasing the knowledge gap, most likely due to a lack of structured feedback during typodont use. Further investigation into the use of typodonts with direct feedback prior to a structured laboratory or, alternatively, as an additional practice tool following a structured laboratory would help define whether typodont practice benefits training in veterinary dentistry.
Collapse
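As a rough illustration of the knowledge-gap measure described above (desired minus perceived skill, compared before and after the laboratory), here is a hypothetical sketch. The Likert-style ratings and the choice of a Wilcoxon signed-rank test are assumptions; the abstract does not name the statistical procedure used.

```python
# Hypothetical sketch: knowledge gap = desired minus perceived skill
# rating per student, compared before vs. after the cadaver laboratory.
# The data and the paired Wilcoxon test are assumptions for illustration.
import numpy as np
from scipy.stats import wilcoxon

desired = np.array([5, 5, 4, 5, 5, 4, 5, 5])     # desired skill (1-5 scale)
perc_pre = np.array([2, 3, 2, 2, 3, 2, 1, 2])    # perceived skill before lab
perc_post = np.array([4, 4, 3, 4, 4, 3, 3, 4])   # perceived skill after lab

gap_pre = desired - perc_pre
gap_post = desired - perc_post
stat, p = wilcoxon(gap_pre, gap_post)            # paired comparison of gaps
print(f"mean gap before: {gap_pre.mean():.2f}, after: {gap_post.mean():.2f}, p = {p:.3f}")
```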
|
43
|
Rehman DES, Memon I, Mahmood N, Alruwaili N, Alhazzaa R, Alkushi A, Jawdat D. Impacts of Changing the Curriculum Design on the Examination Results of Anatomy and Physiology Course. Cureus 2022; 14:e24405. [PMID: 35619849 PMCID: PMC9126479 DOI: 10.7759/cureus.24405] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 04/22/2022] [Indexed: 11/05/2022] Open
|
44
|
Do Resident Archetypes Influence the Functioning of Programs of Assessment? EDUCATION SCIENCES 2022. [DOI: 10.3390/educsci12050293] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Abstract
While most case studies consider how programs of assessment may influence residents' achievement, we engaged in a qualitative, multiple case study to model how resident engagement and performance can reciprocally influence the program of assessment. We conducted virtual focus groups with program leaders from four residency training programs from different disciplines (internal medicine, emergency medicine, neurology, and rheumatology) and institutions. We facilitated discussion with live screen-sharing to (1) improve upon a previously derived model of programmatic assessment and (2) explore how different resident archetypes (sample profiles) may influence their program of assessment. Participants agreed that differences in resident engagement and performance can influence their programs of assessment in some (mal)adaptive ways. For residents who are disengaged and weakly performing (of whom there are a few), significantly more time is spent making sense of problematic evidence, arriving at a decision, and generating recommendations; for residents who are engaged and performing strongly (the vast majority), significantly less effort is thought to be spent on discussion and formalized recommendations. These findings motivate us to fulfill the potential of programmatic assessment by more intentionally and strategically challenging those who are engaged and strongly performing, and by anticipating ways that weakly performing residents may strain existing processes.
Collapse
|
45
|
Fabry G. Wie lassen sich professionelle Kompetenzen im Medizinstudium vermitteln? [How can professional competencies be taught in medical school?] Ethik Med 2022. [DOI: 10.1007/s00481-022-00695-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
|
46
|
Hamoen EC, De Jong PGM, Van Blankenstein FM, Reinders MEJ. Design and First Impressions of a Small Private Online Course in Clinical Workplace Learning: Questionnaire and Interview Study. JMIR MEDICAL EDUCATION 2022; 8:e29624. [PMID: 35389362 PMCID: PMC9030912 DOI: 10.2196/29624] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/14/2021] [Revised: 08/19/2021] [Accepted: 02/22/2022] [Indexed: 06/14/2023]
Abstract
BACKGROUND Clinical workplace learning takes place in a dynamic and complex learning environment that serves as a site for both patient care and education. Challenges in clinical training can be addressed by implementing blended learning, which offers flexible learning programs suitable for student-centered learning, web-based collaboration, and peer learning. OBJECTIVE The aim of this study is to evaluate the Small Private Online Course (SPOC) through interns' (N=20) first impressions of, and satisfaction with, the course. The study also describes the design process of the SPOC from a theoretical and practical perspective and how it was integrated into a clinical internship in internal medicine. METHODS The design of the SPOC was based on the general theoretical principles that learning should be constructive, contextual, collaborative, and self-regulated, and on self-determination theory to stimulate intrinsic motivation. Interns' impressions and level of satisfaction were evaluated with a web-based questionnaire and a group interview. RESULTS Interns found the web-based learning environment a useful and accessible alternative for improving knowledge and skills. Peer learning and web-based collaboration through peer interaction were perceived as less effective, as student feedback was felt to be inferior to teacher feedback. The interns would prefer more flexibility within the course, which could improve self-regulated learning and autonomy. CONCLUSIONS The evaluation shows that the SPOC is a useful and accessible addition to the clinical learning environment, providing an alternative opportunity to improve knowledge and skills. Further research is needed to improve web-based collaboration and interaction in our course.
Collapse
Affiliation(s)
- Esther C Hamoen
- Department of Internal Medicine, Leiden University Medical Center, Leiden, Netherlands
| | - Peter G M De Jong
- Center for Innovation in Medical Education, Leiden University Medical Center, Leiden, Netherlands
| | | | - Marlies E J Reinders
- Department of Internal Medicine, Leiden University Medical Center, Leiden, Netherlands
- Nephrology and Transplantation, Internal Medicine, Erasmus Medical Center Transplantation Institute, Erasmus Medical Center, Rotterdam, Netherlands
| |
Collapse
|
47
|
Bullock JL, Seligman L, Lai CJ, O'Sullivan PS, Hauer KE. Moving toward Mastery: Changes in Student Perceptions of Clerkship Assessment with Pass/Fail Grading and Enhanced Feedback. TEACHING AND LEARNING IN MEDICINE 2022; 34:198-208. [PMID: 34014793 DOI: 10.1080/10401334.2021.1922285] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/17/2020] [Revised: 03/08/2021] [Accepted: 03/20/2021] [Indexed: 06/12/2023]
Abstract
Problem: Clerkship grades contribute to a summative assessment culture in clerkships and can therefore interfere with students' learning. For example, by focusing on summative, tiered clerkship grades, students often discount accompanying feedback that could inform future learning. This case report explores whether an assessment system intervention that eliminated tiered grades and enhanced feedback was associated with changes in student perceptions of clerkship assessment and of the clinical learning environment. Intervention: In January 2019, our institution eliminated tiered clerkship grading (honors/pass/fail) for medical students during the core clerkship year and implemented pass/fail clerkship grading along with required twice-weekly, work-based assessments for formative feedback. Context: In this single-institution, cross-sectional survey study, we collected data from fourth-year medical students one year after the assessment system intervention. The intervention entailed changing from honors/pass/fail to pass/fail grading in all eight core clerkships and implementing an electronic system to record twice-weekly, real-time formative work-based assessments. The survey queried student perceptions of the fairness and accuracy of grading and of the clinical learning environment, including whether clerkships were mastery- or performance-oriented. We compared responses from students one year after the assessment intervention to those from the class one year before the intervention. Comparisons were made using unpaired, two-tailed t-tests or chi-squared tests, as appropriate, with Cohen's d used to estimate effect sizes for score differences (a brief computational sketch follows this entry). Content analysis was used to analyze responses to two open-ended questions about feedback and grading. Impact: Survey response rates were similar before and after the intervention (76% (127/168) vs. 72% (118/163), respectively), with no between-group differences in demographics. The after-intervention group showed statistically significant increases in the following factors: "grades are transparent and fair" (Cohen's d = 0.80), "students receive useful feedback" (d = 0.51), and "resident evaluation procedures are fair" (d = 0.40). After-intervention respondents perceived the clerkship learning environment to be more mastery-oriented (d = 0.52), less performance-approach-oriented (d = 0.63), and less performance-avoid-oriented (d = 0.49). There were no statistical differences in the factors "attending evaluation procedures are fair," "evaluations are accurate," "evaluations are biased," or "perception of stereotype threat." Open-ended questions revealed student recommendations to improve clerkship summary narratives, reduce the burden of work-based assessment, and strengthen in-person feedback. Lessons Learned: After an assessment system change to pass/fail grading with work-based assessments, we observed moderate to large improvements in student perceptions of clerkship grading and of the mastery orientation of the learning environment. Our intervention did not improve perceptions of bias in clerkship assessment. Other medical schools may consider similar interventions to begin to address student concerns with clerkship assessment and promote a more adaptive learning environment.
Collapse
Affiliation(s)
- Justin L Bullock
- Department of Medicine, University of California, San Francisco School of Medicine, San Francisco, California, USA
| | - Lee Seligman
- Department of Medicine, Columbia University Irving Medical Center, New York-Presbyterian Hospital, New York, New York, USA
| | - Cindy J Lai
- Department of Medicine, University of California, San Francisco School of Medicine, San Francisco, California, USA
| | - Patricia S O'Sullivan
- Department of Medicine, University of California, San Francisco School of Medicine, San Francisco, California, USA
- Department of Surgery, University of California, San Francisco School of Medicine, San Francisco, California, USA
| | - Karen E Hauer
- Department of Medicine, University of California, San Francisco School of Medicine, San Francisco, California, USA
| |
Collapse
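The between-cohort comparisons this abstract reports (unpaired two-tailed t-tests with Cohen's d for effect size) follow a standard pattern. Here is an illustrative sketch using the pooled-standard-deviation form of Cohen's d; the simulated factor scores are invented, and only the method mirrors the abstract.

```python
# Illustrative computation of Cohen's d (pooled-SD form) and an unpaired
# two-tailed t-test for a survey factor score across two cohorts.
# The simulated scores are invented; only the method mirrors the abstract.
import numpy as np
from scipy.stats import ttest_ind

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    """Standardized mean difference using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Cohort sizes echo the reported response counts (127 before, 118 after).
before = np.random.default_rng(0).normal(3.4, 0.8, 127)  # pre-intervention
after = np.random.default_rng(1).normal(3.9, 0.8, 118)   # post-intervention
t, p = ttest_ind(after, before)                          # unpaired, two-tailed
print(f"d = {cohens_d(after, before):.2f}, t = {t:.2f}, p = {p:.3g}")
```

By convention, d around 0.5 marks a moderate effect and 0.8 a large one, which matches how the authors characterize their factor-level changes.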
|
48
|
Assessment for Learning: The University of Toronto Temerty Faculty of Medicine M.D. Program Experience. EDUCATION SCIENCES 2022. [DOI: 10.3390/educsci12040249] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/04/2022]
Abstract
(1) Background: Programmatic assessment optimizes the coaching, learning, and decision-making functions of assessment. It utilizes multiple data points, fit for purpose, which on their own guide learning but taken together form the basis of holistic decision making. While its principles are broadly agreed upon, implementation varies according to context. (2) Context: The University of Toronto MD program implemented programmatic assessment as part of a major curriculum renewal. (3) Design and implementation: This paper, structured around best practices in programmatic assessment, describes the implementation of programmatic assessment in the University of Toronto MD program, one of Canada's largest. The case study illustrates the components of the programmatic assessment framework, the tracking and sense-making of data, how academic decisions are made, and how data guide coaching and tailored support and learning plans for learners. (4) Lessons learned: Key implementation lessons are discussed, including the role of context, resources, alignment with curriculum renewal, and the role of faculty development and program evaluation. (5) Conclusions: Large-scale programmatic assessment implementation is resource intensive and requires commitment both initially and on a sustained basis, demanding ongoing improvement and steadfast championing of the cause of optimally leveraging the learning function of assessment.
Collapse
|
49
|
A Perspective Review on Integrating VR/AR with Haptics into STEM Education for Multi-Sensory Learning. ROBOTICS 2022. [DOI: 10.3390/robotics11020041] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
As a result of several governments closing educational facilities in reaction to the COVID-19 pandemic in 2020, almost 80% of the world's students were out of school for several weeks. Schools and universities have thus increased their efforts to leverage educational resources and provide possibilities for remote learning. A variety of educational programs, platforms, and technologies are now accessible to support student learning; while these tools are important for society, they are primarily concerned with the dissemination of theoretical material. There is a lack of support for hands-on laboratory work and practical experience. This is particularly important for all disciplines related to science, technology, engineering, and mathematics (STEM), where labs and pedagogical assets must be continuously enhanced in order to provide effective study programs. In this study, we describe a unique perspective on achieving multi-sensory learning through the integration of virtual and augmented reality (VR/AR) with haptic wearables in STEM education. We address the implications of this novel viewpoint for established pedagogical notions. We want to encourage worldwide efforts to make fully immersive, open, and remote laboratory learning a reality.
Collapse
|
50
|
The Importance of Professional Development in a Programmatic Assessment System: One Medical School’s Experience. EDUCATION SCIENCES 2022. [DOI: 10.3390/educsci12030220] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
The Cleveland Clinic Lerner College of Medicine of Case Western Reserve University (CCLCM) was created in 2004 as a 5-year undergraduate medical education program with a mission to produce future physician-investigators. CCLCM's assessment system aligns with the principles of programmatic assessment. The curriculum is organized around nine competencies; each competency has milestones that students use to self-assess their progress and performance. Throughout the program, students receive low-stakes feedback from a myriad of assessors across courses and contexts. With the support of advisors, students construct portfolios to document their progress and performance. A separate promotion committee makes high-stakes promotion decisions after reviewing students' portfolios. This case study describes a systematic approach to providing the student and faculty professional development essential for programmatic assessment. Facilitators, barriers, lessons learned, and future directions are discussed.
Collapse
|