1. Fuentes-Cimma J, Sluijsmans D, Riquelme A, Villagran I, Isbej L, Olivares-Labbe MT, Heeneman S. Designing feedback processes in the workplace-based learning of undergraduate health professions education: a scoping review. BMC Medical Education 2024;24:440. PMID: 38654360; PMCID: PMC11036781; DOI: 10.1186/s12909-024-05439-6.
Abstract
BACKGROUND Feedback processes are crucial for learning, guiding improvement, and enhancing performance. In workplace-based learning settings, educators are encouraged to design and implement diverse teaching and assessment activities that generate feedback, which students use, with proper guidance, to close the gap between current and desired performance levels. Since productive feedback processes rely on observed information about a student's performance, it is imperative to establish structured feedback activities within undergraduate workplace-based learning settings. However, these settings are characterized by their unpredictable nature, which can either promote learning or present challenges in offering structured learning opportunities for students. This scoping review maps the literature on how feedback processes are organized in undergraduate clinical workplace-based learning settings, providing insight into the design and use of feedback. METHODS A scoping review was conducted. Studies were identified from seven databases and ten relevant journals in medical education. The screening process was performed independently in duplicate with the support of the StArt program. Data were organized in a data chart and analyzed using thematic analysis. The feedback loop with a sociocultural perspective was used as a theoretical framework. RESULTS The search yielded 4,877 papers, of which 61 were included in the review. Two themes were identified in the qualitative analysis: (1) the organization of feedback processes in workplace-based learning settings, and (2) sociocultural factors influencing the organization of feedback processes. The literature describes multiple teaching and assessment activities that generate feedback information. Most papers described experiences and perceptions of diverse teaching and assessment feedback activities; few described how feedback processes improve performance. Sociocultural factors such as establishing a feedback culture, enabling stable and trustworthy relationships, and enhancing student feedback agency are crucial for productive feedback processes. CONCLUSIONS This review identified concrete ideas about how feedback could be organized within the clinical workplace to promote productive feedback processes. The feedback encounter should be organized to allow follow-up of the feedback, i.e., working on the required learning and performance goals on the next occasion. Educational programs should design feedback processes by appropriately planning subsequent tasks and activities. More insight is needed into designing a full-loop feedback process, with specific attention to effective feedforward practices.
Affiliation(s)
- Javiera Fuentes-Cimma
- Department of Health Sciences, Faculty of Medicine, Pontificia Universidad Católica de Chile, Avenida Vicuña Mackenna 4860, Macul, Santiago, Chile
- School of Health Professions Education, Maastricht University, Maastricht, Netherlands
- Arnoldo Riquelme
- Centre for Medical and Health Profession Education, Department of Gastroenterology, Faculty of Medicine, Pontificia Universidad Católica de Chile, Santiago, Chile
- Ignacio Villagran
- Department of Health Sciences, Faculty of Medicine, Pontificia Universidad Católica de Chile, Avenida Vicuña Mackenna 4860, Macul, Santiago, Chile
- Lorena Isbej
- School of Health Professions Education, Maastricht University, Maastricht, Netherlands
- School of Dentistry, Faculty of Medicine, Pontificia Universidad Católica de Chile, Santiago, Chile
- Sylvia Heeneman
- Department of Pathology, Faculty of Health, Medicine and Health Sciences, Maastricht University, Maastricht, Netherlands
2. Wawrykow TMJ, McColl T, Velji A, Chan M. Emergency Medicine Oral Case Presentations: Evaluation of a Novel Curriculum. AEM Education and Training 2020;4:379-386. PMID: 33150280; PMCID: PMC7592827; DOI: 10.1002/aet2.10408.
Abstract
OBJECTIVES Oral case presentation (OCP) is recognized as a central educational and patient care activity, yet it has not been well studied in the emergency medicine (EM) setting. The purpose of this study was to evaluate the effect of a novel curriculum on medical students' EM-OCP skills. METHODS An EM-OCP assessment tool and novel blended curriculum were developed based on results from a Canadian survey of emergency physicians and focus groups with key stakeholders. We conducted a randomized controlled trial of 96 clerkship students between 2017 and 2018. Students were randomly assigned to an intervention group, which completed the novel EM-OCP curriculum, or a control group, which did not. A pretest baseline assessment of students' OCP skills was performed using a standardized patient case at the beginning of their EM rotation, and all students completed a posttest assessment with a different standardized patient case at the end of their 6-week EM rotation. Audio recordings of pre- and posttests were scored with the EM-OCP assessment tool by two blinded assessors. RESULTS Kruskal-Wallis tests showed that all students improved their EM-OCP skills between pretest and posttest; however, those who received the curriculum (intervention group) showed significantly greater improvement in "synthesis of information," "management," and "overall entrustment decision" scores. CONCLUSIONS Implementation of a novel EM-OCP curriculum resulted in improved clinical reasoning and higher entrustment scores. This curriculum could improve OCP performance not only in EM settings but also across specialties where medical students and residents manage critically ill patients.
Affiliation(s)
- Teresa M. J. Wawrykow
- Department of Emergency Medicine, Max Rady College of Medicine, Rady Faculty of Health Sciences, University of Manitoba, Winnipeg, Manitoba, Canada
- Tamara McColl
- Department of Emergency Medicine, Max Rady College of Medicine, Rady Faculty of Health Sciences, University of Manitoba, Winnipeg, Manitoba, Canada
- Alkarim Velji
- Department of Pediatrics and Child Health, Max Rady College of Medicine, Rady Faculty of Health Sciences, University of Manitoba, Winnipeg, Manitoba, Canada
- Ming-Ka Chan
- FRCPC Emergency Medicine Program, University of Manitoba, Winnipeg, Manitoba, Canada
3. Taylor TA, Swanberg SM. A comparison of peer and faculty narrative feedback on medical student oral research presentations. International Journal of Medical Education 2020;11:222-229. PMID: 33006959; PMCID: PMC7882133; DOI: 10.5116/ijme.5f64.690b.
Abstract
OBJECTIVES The purpose of this project was to evaluate and improve the oral presentation assessment component of a required research training curriculum at an undergraduate medical school by analyzing the quantity, quality, and variety of peer and faculty feedback on medical student oral research presentations. METHODS We conducted a program evaluation of oral presentation assessments during the 2016 and 2017 academic years. Second-year medical students (n=225) presented their research orally and received narrative feedback from peers and faculty. All comments were inductively coded for themes, and chi-square tests compared faculty and peer feedback for differences in quantity, quality, and variety, as well as changes in feedback between the initial and final presentations. A comparative analysis of student PowerPoint presentation files before and after receiving feedback was also conducted. RESULTS Over two years, 2,617 peer and 498 faculty comments were collected and categorized into ten themes, with the top three being presentation skills, visual presentation, and content. Both peers and faculty judges favored positive comments over improvement-oriented ones; peers tended to give richer feedback, whereas judges gave more diverse feedback. Nearly all presenters made some change from the initial to the final presentation based on feedback. CONCLUSIONS Data from this analysis were used to restructure the oral presentation requirement for the students. Both peer and faculty formative feedback can contribute to developing medical student competence in providing feedback and delivering oral presentations. Future studies could assess student perceptions of this assessment to determine its value in developing communication skills.
Affiliation(s)
- Tracey A.H. Taylor
- Department of Foundational Medical Studies, Oakland University William Beaumont School of Medicine, Rochester, Michigan, USA
- Stephanie M. Swanberg
- Department of Foundational Medical Studies, Oakland University William Beaumont School of Medicine, Rochester, Michigan, USA
4. Johnson CE, Weerasuria MP, Keating JL. Effect of face-to-face verbal feedback compared with no or alternative feedback on the objective workplace task performance of health professionals: a systematic review and meta-analysis. BMJ Open 2020;10:e030672. PMID: 32213515; PMCID: PMC7170595; DOI: 10.1136/bmjopen-2019-030672.
Abstract
OBJECTIVE Verbal face-to-face feedback on clinical task performance is a fundamental component of health professions education. Experts argue that feedback is critical for performance improvement, but the evidence is limited. The aim of this systematic review was to investigate the effect of face-to-face verbal feedback from a health professional, compared with alternative or no feedback, on the objective workplace task performance of another health professional. DESIGN Systematic review and meta-analysis. METHODS We searched the full holdings of Ovid MEDLINE, CENTRAL, Embase, CINAHL and PsycINFO up to 1 February 2019 and searched references of included studies. Two authors independently undertook study selection, data extraction and quality appraisal. Studies were included if they were randomised controlled trials investigating the effect of feedback, in which health professionals were randomised to individual verbal face-to-face feedback compared with no feedback or alternative feedback and available as full-text publications in English. The certainty of evidence was assessed using the Grading of Recommendations, Assessment, Development and Evaluations approach. For feedback compared with no feedback, outcome data from included studies were pooled using a random effects model. RESULTS In total, 26 trials met the inclusion criteria, involving 2307 participants. For the effect of verbal face-to-face feedback on performance compared with no feedback, when studies at high risk of bias were excluded, eight studies involving 392 health professionals were included in a meta-analysis: the standardised mean difference (SMD) was 0.7 (95% CI 0.37 to 1.03; p<0.001) in favour of feedback. The calculated SMD prediction interval was -0.06 to 1.46. For feedback compared with alternative feedback, studies could not be pooled due to substantial design and intervention heterogeneity. 
All included studies were summarised, and key factors likely to influence performance were identified, including components within feedback interventions, instruction and practice opportunities. CONCLUSIONS Verbal face-to-face feedback in the health professions may result in a moderate to large improvement in workplace task performance, compared with no feedback. However, the quality of evidence was low, primarily due to risk of bias and publication bias. Further research is needed; in particular, we found a lack of high-quality trials that clearly reported key components likely to influence performance. TRIAL REGISTRATION NUMBER CRD42017081796.
Affiliation(s)
- Christina Elizabeth Johnson
- Monash Doctors Education, Monash Health; Faculty of Medicine, Nursing and Health Sciences, Monash University; Department of Medical Education, Melbourne Medical School, University of Melbourne, Melbourne, Victoria, Australia
- Jennifer L Keating
- Department of Physiotherapy, Monash University, Clayton, Victoria, Australia
5. Lancaster I, Basson MD. What Skills Do Clinical Evaluators Value Most in Oral Case Presentations? Teaching and Learning in Medicine 2019;31:129-135. PMID: 30551724; DOI: 10.1080/10401334.2018.1512861.
Abstract
Phenomenon: Trainees and practicing physicians are judged by the way that they present patients. We therefore invest heavily in teaching students how to do oral case presentations (OCPs), but the relative weights by which different aspects of these complex tasks contribute to an overall evaluation are poorly understood. Approach: We sought to contrast how clinical evaluators assess students' OCPs with how medical students expect OCPs to be evaluated. Multiple linear regression and correlation matrices assessed how individual components of third-year medical students' OCPs affect overall faculty assessment of students' performance using the previously validated Patient Presentation Rating tool. Preclinical medical students were surveyed to determine how they expect their OCPs to be evaluated. Findings: Faculty evaluations of students' overall organization and descriptions of patients' situations and vital signs were strongly associated with their overall OCP evaluation. Students believed that describing the patient's situation, chief complaint, and history of present illness would be highly valued, but not organization or vital sign descriptions. Insights: Students and faculty differ about what is important in OCPs. Aligning these perceptions through intentional redesign of curricula and feedback instruments may facilitate teaching clinical communication to students.
Affiliation(s)
- Ian Lancaster
- Dean's Office, University of North Dakota School of Medicine and Health Sciences, Grand Forks, North Dakota, USA
- Marc D Basson
- Department of Surgery, University of North Dakota School of Medicine and Health Sciences, Grand Forks, North Dakota, USA
6. Sox CM, Tenney-Soeiro R, Lewin LO, Ronan J, Brown M, King M, Thompson R, Noelck M, Sutherell JS, Silverstein M, Cabral HJ, Dell M. Efficacy of a Web-Based Oral Case Presentation Instruction Module: Multicenter Randomized Controlled Trial. Academic Pediatrics 2018;18:535-541. PMID: 29325913; DOI: 10.1016/j.acap.2017.12.010.
Abstract
OBJECTIVE Effective self-directed educational tools are invaluable. Our objective was to determine whether a self-directed, web-based oral case presentation module would improve medical students' oral case presentations compared with the usual curriculum, and whether it would do so with efficacy similar to that of structured faculty feedback sessions. METHODS We conducted a pragmatic multicenter cluster randomized controlled trial among medical students rotating through pediatric clerkships at 7 US medical schools. In the clerkship's first 14 days, subjects were instructed to complete an online Computer-Assisted Learning in Pediatrics Program (CLIPP) oral case presentation module, attend an in-person faculty-led case presentation feedback session, or do neither (control). At the clerkship's end, evaluators blinded to intervention status rated the quality of students' oral case presentations on a 10-point scale. We conducted intention-to-treat multivariable analyses clustered on clerkship block. RESULTS Study participants included 256 CLIPP (32.5%), 263 feedback (33.3%), and 270 control (34.2%) subjects. Only 51.1% of CLIPP subjects completed the assigned presentation module, while 98.5% of feedback subjects participated in presentation feedback sessions. Compared with controls, oral presentation quality was significantly higher in the feedback group (adjusted difference in mean quality, 0.28; 95% confidence interval, 0.08 to 0.49) and trended toward being higher in the CLIPP group (0.19; 95% confidence interval, -0.006 to 0.38). The quality of presentations in the CLIPP and feedback groups did not differ significantly (-0.10; 95% confidence interval, -0.31 to 0.11). CONCLUSIONS The quality of oral case presentations delivered by students randomized to complete the CLIPP module did not differ from that of students in faculty-led feedback sessions and was not statistically superior to control.
Affiliation(s)
- Colin M Sox
- Department of Pediatrics, Boston Medical Center & Boston University School of Medicine, Boston, Mass.
- Rebecca Tenney-Soeiro
- Department of Pediatrics, Perelman School of Medicine at University of Pennsylvania & The Children's Hospital of Philadelphia, Philadelphia, Pa.
- Linda O Lewin
- Department of Pediatrics, University of Maryland School of Medicine, Baltimore, Md.
- Jeanine Ronan
- Department of Pediatrics, Perelman School of Medicine at University of Pennsylvania & The Children's Hospital of Philadelphia, Philadelphia, Pa.
- Mary Brown
- Department of Pediatrics, Tufts University School of Medicine at the Floating Hospital for Children, Boston, Mass.
- Marta King
- Department of Pediatrics, Saint Louis University School of Medicine, St Louis, Mo.
- Rachel Thompson
- Department of Pediatrics, Boston Medical Center & Boston University School of Medicine, Boston, Mass.
- Michelle Noelck
- Department of Pediatrics, Oregon Health & Science University, Portland, Ore.
- Jamie S Sutherell
- Department of Pediatrics, Saint Louis University School of Medicine, St Louis, Mo.
- Michael Silverstein
- Department of Pediatrics, Boston Medical Center & Boston University School of Medicine, Boston, Mass.
- Howard J Cabral
- Department of Biostatistics, Boston University School of Public Health, Boston, Mass.
- Michael Dell
- Department of Pediatrics, Case Western Reserve University School of Medicine, Cleveland, Ohio
7. King MA, Phillipi CA, Buchanan PM, Lewin LO. Developing Validity Evidence for the Written Pediatric History and Physical Exam Evaluation Rubric. Academic Pediatrics 2017;17:68-73. PMID: 27521461; DOI: 10.1016/j.acap.2016.08.001.
Abstract
OBJECTIVE The written history and physical examination (H&P) is an underutilized source of medical trainee assessment. The authors describe the development of, and validity evidence for, the Pediatric History and Physical Exam Evaluation (P-HAPEE) rubric: a novel tool for evaluating written H&Ps. METHODS Using an iterative process, the authors drafted, revised, and implemented the 10-item rubric at 3 academic institutions in 2014. Eighteen attending physicians and 5 senior residents each scored 10 third-year medical student H&Ps. Inter-rater reliability (IRR) was determined using intraclass correlation coefficients, Cronbach α was used to report consistency, and Spearman rank-order correlations were used to determine relationships between rubric items. Raters provided a global assessment, recorded the time to review and score each H&P, and completed a rubric utility survey. RESULTS The overall intraclass correlation was 0.85, indicating adequate IRR. Global assessment IRR was 0.89. IRR for low- and high-quality H&Ps was significantly greater than for medium-quality ones but did not differ on the basis of rater category (attending physician vs senior resident), note format (electronic health record vs nonelectronic), or student diagnostic accuracy. Cronbach α was 0.93. The highest correlation between an individual item and the total score was for assessment (0.84); the highest interitem correlation was between assessment and differential diagnosis (0.78). Mean time to review and score an H&P was 16.3 minutes; residents took significantly longer than attending physicians. All raters described rubric utility as "good" or "very good" and endorsed continued use. CONCLUSIONS The P-HAPEE rubric offers a novel, practical, reliable, and valid method for supervising physicians to assess pediatric written H&Ps.
Affiliation(s)
- Marta A King
- Division of General Academic Pediatrics, Saint Louis University School of Medicine, St Louis, Mo.
- Carrie A Phillipi
- Department of Pediatrics, Oregon Health & Science University, Portland, Ore.
- Paula M Buchanan
- Center for Outcomes Research, Saint Louis University, St Louis, Mo.
- Linda O Lewin
- Department of Pediatrics, University of Maryland School of Medicine, Baltimore, Md.
8. Lefroy J, Watling C, Teunissen PW, Brand P. Guidelines: the do's, don'ts and don't knows of feedback for clinical education. Perspectives on Medical Education 2015;4:284-299. PMID: 26621488; PMCID: PMC4673072; DOI: 10.1007/s40037-015-0231-7.
Abstract
INTRODUCTION The guidelines offered in this paper aim to amalgamate the literature on formative feedback into practical Do's, Don'ts and Don't Knows for individual clinical supervisors and for the institutions that support clinical learning. METHODS The authors built consensus by an iterative process. Do's and Don'ts were proposed based on the authors' individual teaching experience and awareness of the literature; the amalgamated set of guidelines was then refined by all authors, and the evidence was summarized for each guideline. Don't Knows were identified as questions important to this international group of educators which, if answered, would change practice. The criteria for inclusion of evidence for these guidelines were not those of a systematic review, so indicators of the strength of these recommendations were developed which combine the evidence with the authors' consensus. RESULTS A set of 32 Do and Don't guidelines, with the important Don't Knows, was compiled along with a summary of the evidence for each. These are divided into guidelines for the individual clinical supervisor giving feedback to their trainee (recommendations about both the process and the content of feedback) and guidelines for the learning culture (what elements of learning culture support the exchange of meaningful feedback, and what elements constrain it?). CONCLUSION Feedback is not easy to get right, but it is essential to learning in medicine, and there is a wealth of evidence supporting the Do's and warning against the Don'ts. Further research into the critical Don't Knows of feedback is required. A new definition is offered: helpful feedback is a supportive conversation that clarifies the trainee's awareness of their developing competencies, enhances their self-efficacy for making progress, challenges them to set objectives for improvement, and facilitates their development of strategies to enable that improvement to occur.
Affiliation(s)
- Janet Lefroy
- Keele University School of Medicine, Clinical Education Centre RSUH, ST4 6QG, Staffordshire, UK
- Chris Watling
- Schulich School of Medicine and Dentistry, Western University, Ontario, Canada
- Pim W Teunissen
- Maastricht University and VU University Medical Center, Amsterdam, The Netherlands
- Paul Brand
- Isala Klinieken, Zwolle, The Netherlands
9. Mayorga E, Golnik K, Palis G. One-Year Progress in Ophthalmic Education: Annual Review. Asia-Pacific Journal of Ophthalmology 2015;4:388-398. PMID: 26716435; DOI: 10.1097/apo.0000000000000162.
Abstract
PURPOSE The aim of this study was to update the practicing ophthalmologist on the English-language literature about medical education from the prior year. DESIGN A search of English-language literature was performed on PubMed from January 1, 2014, to December 31, 2014. METHODS Because a search on the main topic of the review, "medical education," returned 7,394 citations, the authors narrowed the search to three topics of interest: (1) the current state of competency-based education and methods of teaching competencies, including ophthalmic/ophthalmology education, core competencies, competency-based education, and teaching strategies, tools, and methods in medical education; (2) e-learning, including online learning, online teaching, Web-based teaching, Web-based learning, and the flipped classroom; and (3) assessments, including assessment of medical students, residents, fellows, faculty, attending physicians, and medical teachers, and assessment of medical student ophthalmology programs, ophthalmology residency programs, residency programs, and fellowship programs. RESULTS The authors reviewed and summarized articles published in 2014 examining or describing these three areas. CONCLUSIONS This review updates the comprehensive ophthalmologist on advances in ophthalmic medical education. Ophthalmic educators can apply the ideas presented here as their own settings and programs allow.
Affiliation(s)
- Eduardo Mayorga
- International Council of Ophthalmology, San Francisco, CA; School of Medicine and Eye Department, Hospital Italiano de Buenos Aires, Buenos Aires, Argentina; University of Cincinnati and Cincinnati Eye Institute, Cincinnati, OH