1. Dohms MC, Rocha A, Rasenberg E, Dielissen P, Thoonen B. Peer assessment in medical communication skills training in programmatic assessment: A qualitative study examining faculty and student perceptions. Medical Teacher 2024;46:823-831. [PMID: 38157436] [DOI: 10.1080/0142159x.2023.2285248] [Received: 05/06/2023] [Accepted: 11/15/2023] [Indexed: 01/03/2024]
Abstract
INTRODUCTION Current literature recommends that assessment of communication skills in medical education combine different settings and multiple observers. There is still a gap in understanding whether and how peer assessment facilitates learning in communication skills training. METHODS We designed a qualitative study using focus group interviews and thematic analysis in a medical course in the Netherlands. We aimed to explore medical students' and teachers' experiences, perceptions, and perspectives on the challenges and facilitating factors in peer assessment in medical communication skills training (PACST). RESULTS Most participants reported that peer feedback was a valuable experience when learning communication skills. The major challenges to the quality and credibility of PACST reported by participants were whether peer feedback is critical enough to support learning and the difficulty of genuinely engaging students in the assessment process. CONCLUSION Having teachers review students' peer assessments may improve the quality and credibility of those assessments, and the reviewed assessments are best used for learning purposes. We suggest paying sufficient attention to teachers' roles in PACST, ensuring a safe and trustworthy environment, and helping students internalize the value of being vulnerable during the evaluation process.
Affiliation(s)
- M C Dohms: Clinique Bouchard, Marseille, France
- A Rocha: DASA (Diagnósticos da América S/A), São Paulo, Brazil
- P Dielissen: Medisch Centrum Onder de Linde, Nijmegen, Netherlands
- B Thoonen: Radboud University, Nijmegen, Netherlands
2. Mishra SD, Rojewski J, Rebitch CB. Peer feedback as a medium to facilitate reflective practice among pharmacy students in a case-based learning environment. Currents in Pharmacy Teaching & Learning 2022;14:1387-1396. [PMID: 36137887] [DOI: 10.1016/j.cptl.2022.09.029] [Received: 06/14/2021] [Revised: 07/22/2022] [Accepted: 09/07/2022] [Indexed: 06/16/2023]
Abstract
INTRODUCTION The ability to reflect is a key element in preparing pharmacy professionals to meet the challenges of a dynamic health care environment. This mixed-methods study explored the pedagogical benefits of peer feedback by designing, developing, and implementing a peer feedback activity to facilitate reflective practice among pharmacy students. METHODS Twenty second-year doctor of pharmacy (PharmD) students in a required pharmacotherapy course participated in a systematic peer feedback activity, and five of these students volunteered for semi-structured interviews. RESULTS No significant correlation was found between perceived effectiveness of peer feedback and students' reflective thinking skills. Qualitative interview data revealed three major themes regarding PharmD students' perception of peer feedback as an instructional strategy to promote reflective practice: (1) the cognitive process of providing feedback, (2) the cognitive process after receiving peer feedback, and (3) perceptions of peer feedback as a tool to exercise reflective practice. CONCLUSIONS Although the sample size was limited, the study yielded important lessons on how to design, develop, and implement a peer feedback activity.
Affiliation(s)
- Supriya D Mishra: 221 River's Crossing, 850 College Station Road, University of Georgia, Athens, GA 30605, United States; Georgia Department of Education, 1562 Twin Towers, 205 Jesse Hill Jr. Dr. SE, Atlanta, GA 30334, United States
- Jay Rojewski: 221 River's Crossing, 850 College Station Road, University of Georgia, Athens, GA 30605, United States
- Catherine B Rebitch: University of Pittsburgh School of Pharmacy, Salk Hall Room 5429, 3501 Terrace Street, Pittsburgh, PA 15261, United States
3. Van Meenen F, Coertjens L, Van Nes MC, Verschuren F. Peer overmarking and insufficient diagnosticity: the impact of the rating method for peer assessment. Advances in Health Sciences Education: Theory and Practice 2022;27:1049-1066. [PMID: 35871407] [DOI: 10.1007/s10459-022-10130-w] [Received: 03/02/2021] [Accepted: 05/29/2022] [Indexed: 06/15/2023]
Abstract
The present study explores two rating methods for peer assessment (analytical rating using criteria and comparative judgement) in light of concurrent validity, reliability and insufficient diagnosticity (i.e. the degree to which substandard work is recognised by the peer raters). During a second-year undergraduate course, students wrote a one-page essay on an air pollutant. A first cohort (N = 260) relied on analytical rating using criteria to assess their peers' essays. A total of 1297 evaluations were made, and each essay received at least four peer ratings. Results indicate a small correlation between peer and teacher marks, and three essays of substandard quality were not recognised by the group of peer raters. A second cohort (N = 230) used comparative judgement. They completed 1289 comparisons, from which a rank order was calculated. Results suggest a large correlation between the university teacher marks and the peer scores and acceptable reliability of the rank order. In addition, the three essays of substandard quality were discerned as such by the group of peer raters. Although replication research is warranted, the results provide the first evidence that, when peer raters overmark and fail to identify substandard work using analytical rating with criteria, university teachers may consider changing the rating method of the peer assessment to comparative judgement.
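For readers unfamiliar with comparative judgement, the rank order described above is typically derived by fitting a Bradley-Terry-style model to the pairwise comparisons. The paper does not publish its fitting procedure; the sketch below is a minimal, hypothetical illustration using a standard minorization-maximization update on made-up comparison data.

```python
def bradley_terry(n_items, comparisons, iters=200):
    """Estimate Bradley-Terry strengths from pairwise comparisons.

    comparisons: list of (winner, loser) index pairs.
    Returns a list of strengths; higher means judged better.
    Uses the standard MM (minorization-maximization) update;
    an item that never wins converges to strength 0.
    """
    wins = [0] * n_items
    met = [[0] * n_items for _ in range(n_items)]  # how often i met j
    for w, l in comparisons:
        wins[w] += 1
        met[w][l] += 1
        met[l][w] += 1
    p = [1.0] * n_items
    for _ in range(iters):
        new = []
        for i in range(n_items):
            denom = sum(met[i][j] / (p[i] + p[j])
                        for j in range(n_items) if j != i and met[i][j])
            new.append(wins[i] / denom if denom > 0 else p[i])
        s = sum(new)
        p = [x * n_items / s for x in new]  # normalize to sum n_items
    return p

# Made-up comparisons of 3 essays: essay 2 beats all, essay 0 loses all.
results = [(2, 0), (2, 1), (1, 0), (2, 0), (1, 0), (2, 1)]
strengths = bradley_terry(3, results)
ranking = sorted(range(3), key=lambda i: -strengths[i])
```

In practice, dedicated comparative-judgement platforms fit a model of this kind and additionally report a reliability coefficient for the resulting rank order.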
Affiliation(s)
- Florence Van Meenen: Psychological Sciences Research Institute, Université catholique de Louvain, 10 Place Cardinal Mercier, 1348 Louvain-la-Neuve, Belgium
- Liesje Coertjens: Psychological Sciences Research Institute, Université catholique de Louvain, 10 Place Cardinal Mercier, 1348 Louvain-la-Neuve, Belgium
- Marie-Claire Van Nes: Emergency Department, Cliniques Universitaires Saint-Luc, Institute of Experimental and Clinical Research (IREC), Université catholique de Louvain, Brussels, Belgium
- Franck Verschuren: Institute of Experimental and Clinical Research, Acute Medicine Department, Université catholique de Louvain, Brussels, Belgium
4. Linn Z, Tashiro Y, Morio K, Hori H. Peer evaluations of group work in different years of medical school and academic achievement: how are they related? BMC Medical Education 2022;22:102. [PMID: 35172797] [PMCID: PMC8851726] [DOI: 10.1186/s12909-022-03165-5] [Received: 08/10/2021] [Accepted: 02/08/2022] [Indexed: 06/14/2023]
Abstract
BACKGROUND To develop the skills needed in health care teams, training in communication and teamwork is important in medical education. Small-group collaborative learning is one method used in such training, and peer evaluation is suggested to reinforce the effectiveness of group learning activities. At Mie University Faculty of Medicine, group work consisting of book review sessions in first-year liberal arts education and problem-based learning (PBL) sessions in the preclinical years was conducted using the same peer evaluation system, which covered three domains: degree of prior learning, contribution to group discussion, and cooperative attitude. This study was conducted to determine the relationships between behaviors during group work and the academic achievement of medical students. METHODS Using data from a cohort of medical students across three consecutive academic years (n = 340), peer evaluation scores from the book review sessions, peer evaluation scores from the PBL sessions, and paper test scores from the preclinical years were analyzed. Correlations were assessed with Spearman's correlation coefficient, and the respective scores were compared using the Wilcoxon signed-rank test. RESULTS Significant correlations were observed among the evaluation scores of the respective group-work domains and the paper test scores. The degree of prior learning showed the strongest relationships among the three domains (rs = 0.355, p < 0.001 between book review sessions and PBL; rs = 0.338, p < 0.001 between book review sessions and paper test scores; rs = 0.551, p < 0.001 between PBL and paper test scores). Peer evaluation scores in the respective domains were significantly higher in PBL. CONCLUSION Medical students maintained their group-work behaviors to some extent from early school through the preclinical years, and those behaviors were positively related to academic achievement in the later years of the curriculum. Our study highlights the importance of introducing group work early, and the results may help motivate medical students to put more effort into group work.
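The statistics reported above (Spearman's rs and the Wilcoxon signed-rank test) are straightforward to reproduce with SciPy. The sketch below uses invented scores for eight hypothetical students, not the study's data (n = 340), purely to show the two calls.

```python
from scipy import stats

# Invented scores for 8 hypothetical students (NOT the study's data):
# peer-evaluation scores from PBL sessions and preclinical paper test scores.
pbl_scores = [3.2, 4.1, 2.8, 4.5, 3.9, 3.0, 4.8, 3.6]
test_scores = [62, 78, 55, 85, 74, 60, 90, 70]

# Spearman's rank correlation, as used for the rs values in the abstract
rs, p_value = stats.spearmanr(pbl_scores, test_scores)

# Wilcoxon signed-rank test on paired scores from the same students,
# e.g. book-review-session scores versus PBL scores.
book_review = [3.1, 3.9, 2.5, 4.1, 3.4, 2.4, 4.1, 2.8]
w_stat, w_p = stats.wilcoxon(book_review, pbl_scores)
```

Spearman's rs is insensitive to the different scales of the two measures because it works on ranks, which is why the study can correlate rubric scores directly with test marks.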
Affiliation(s)
- Zayar Linn: Department of Medical Education, Mie University Graduate School of Medicine, 2-174 Edobashi, Tsu City, Mie Prefecture, 514-8507, Japan
- Yasura Tashiro: Center for Medical and Nursing Education, Faculty of Medicine, Mie University, 2-174 Edobashi, Tsu City, Mie Prefecture, 514-8507, Japan; College of Liberal Arts, Mie University, 2-174 Edobashi, Tsu City, Mie Prefecture, 514-8507, Japan
- Kunimasa Morio: Center for Medical and Nursing Education, Faculty of Medicine, Mie University, 2-174 Edobashi, Tsu City, Mie Prefecture, 514-8507, Japan
- Hiroki Hori: Department of Medical Education, Mie University Graduate School of Medicine, 2-174 Edobashi, Tsu City, Mie Prefecture, 514-8507, Japan; Center for Medical and Nursing Education, Faculty of Medicine, Mie University, 2-174 Edobashi, Tsu City, Mie Prefecture, 514-8507, Japan
5. Vincent A, Urben T, Becker C, Beck K, Daetwyler C, Wilde M, Gaab J, Langewitz W, Hunziker S. Breaking bad news: A randomized controlled trial to test a novel interactive course for medical students using blended learning. Patient Education and Counseling 2022;105:105-113. [PMID: 33994021] [DOI: 10.1016/j.pec.2021.05.002] [Received: 11/16/2020] [Revised: 04/07/2021] [Accepted: 05/03/2021] [Indexed: 06/12/2023]
Abstract
OBJECTIVE Breaking bad news (BBN) is challenging for physicians and patients, and specific communication strategies aim to improve these situations. This study evaluated whether an E-learning assignment could improve medical students' accurate recognition of BBN communication techniques. METHODS This randomized controlled trial was conducted at the University of Basel. After a lecture on BBN, 4th-year medical students were randomized to an intervention group receiving an E-learning assignment on BBN or to a control group. Both groups then worked on an examination video and identified previously taught BBN elements shown in a physician-patient interaction. The numbers of correctly identified, misclassified, and incorrectly identified BBN communication elements, as well as missed opportunities, were assessed in the examination video. RESULTS We included 160 medical students (55% female). The number of correctly identified BBN elements did not differ between the control and intervention groups (mean [SD] 3.51 [2.50] versus 3.72 [2.34], p = 0.58). However, the mean number of inappropriately identified BBN elements was significantly lower in the intervention group than in the control group (2.33 [2.57] versus 3.33 [3.39], p = 0.037). CONCLUSIONS Use of an E-learning tool reduced inappropriate annotations of BBN communication techniques. PRACTICE IMPLICATIONS This E-learning tool might help to further advance communication skills in medical students.
Affiliation(s)
- Alessia Vincent: Medical Communication and Psychosomatic Medicine, University Hospital Basel, Basel, Switzerland; Division of Clinical Psychology and Psychotherapy, Faculty of Psychology, University of Basel, Basel, Switzerland
- Tabita Urben: Medical Communication and Psychosomatic Medicine, University Hospital Basel, Basel, Switzerland
- Christoph Becker: Medical Communication and Psychosomatic Medicine, University Hospital Basel, Basel, Switzerland; Emergency Department, University Hospital Basel, Basel, Switzerland
- Katharina Beck: Medical Communication and Psychosomatic Medicine, University Hospital Basel, Basel, Switzerland
- Michael Wilde: Faculty of Medicine, University of Basel, Basel, Switzerland
- Jens Gaab: Division of Clinical Psychology and Psychotherapy, Faculty of Psychology, University of Basel, Basel, Switzerland
- Wolf Langewitz: Medical Communication and Psychosomatic Medicine, University Hospital Basel, Basel, Switzerland; Faculty of Medicine, University of Basel, Basel, Switzerland
- Sabina Hunziker: Medical Communication and Psychosomatic Medicine, University Hospital Basel, Basel, Switzerland; Faculty of Medicine, University of Basel, Basel, Switzerland
6. Yu JH, Lee MJ, Kim SS, Yang MJ, Cho HJ, Noh CK, Lee GH, Lee SK, Song MR, Lee JH, Kim M, Jung YJ. Assessment of medical students' clinical performance using high-fidelity simulation: comparison of peer and instructor assessment. BMC Medical Education 2021;21:506. [PMID: 34563180] [PMCID: PMC8467013] [DOI: 10.1186/s12909-021-02952-w] [Received: 05/12/2021] [Accepted: 09/16/2021] [Indexed: 06/13/2023]
Abstract
BACKGROUND High-fidelity simulators are highly useful in assessing clinical competency; they enable reliable and valid evaluation. Recently, the importance of peer assessment has been highlighted in healthcare education, and studies in medicine, nursing, dentistry, and pharmacy have examined its value. This study aimed to analyze inter-rater reliability between peers and instructors and to examine differences in their scores when assessing medical students' high-fidelity-simulation-based clinical performance. METHODS This study analyzed the results of two clinical performance assessments of 34 groups of fifth-year students at Ajou University School of Medicine in 2020, using a modified Queen's Simulation Assessment Tool to measure four categories: primary assessment, diagnostic actions, therapeutic actions, and communication. To estimate inter-rater reliability, the intraclass correlation coefficient was calculated and the Bland-Altman method was used to analyze agreement between raters. Differences in assessment scores between peers and instructors were analyzed using the independent t-test. RESULTS Overall inter-rater reliability of the clinical performance assessments was high. In addition, there were no significant differences in assessment scores between peers and instructors in any of the four categories. CONCLUSIONS The results indicate that peer assessment can be as reliable as instructor assessment when evaluating clinical competency using high-fidelity simulators. Efforts should be made to enable medical students to participate actively as fellow assessors in high-fidelity-simulation-based assessments of clinical performance in situations resembling real clinical settings.
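The Bland-Altman method mentioned above quantifies agreement as the mean difference (bias) between two raters together with 95% limits of agreement. The study's data are not available; the sketch below applies the standard formula to invented checklist totals for ten hypothetical scenarios.

```python
import statistics

def bland_altman_limits(rater_a, rater_b):
    """Return the mean bias between two raters and the 95% limits
    of agreement (bias +/- 1.96 * SD of the paired differences)."""
    diffs = [a - b for a, b in zip(rater_a, rater_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Invented checklist totals for 10 hypothetical scenarios (NOT study data),
# scored independently by a peer and an instructor.
peer = [18, 22, 15, 20, 24, 17, 21, 19, 23, 16]
instructor = [17, 23, 15, 21, 23, 18, 20, 19, 24, 16]

bias, lower, upper = bland_altman_limits(peer, instructor)
```

A bias near zero with narrow limits of agreement, as in this toy example, is the pattern that supports treating peer and instructor scores as interchangeable.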
Affiliation(s)
- Ji Hye Yu: Office of Medical Education, Ajou University School of Medicine, Suwon, South Korea
- Mi Jin Lee: Department of Medical Humanities and Social Medicine, Ajou University School of Medicine, Suwon, South Korea
- Soon Sun Kim: Department of Gastroenterology, Ajou University School of Medicine, Suwon, South Korea
- Min Jae Yang: Department of Gastroenterology, Ajou University School of Medicine, Suwon, South Korea
- Hyo Jung Cho: Department of Gastroenterology, Ajou University School of Medicine, Suwon, South Korea
- Choong Kyun Noh: Department of Gastroenterology, Ajou University School of Medicine, Suwon, South Korea
- Gil Ho Lee: Department of Gastroenterology, Ajou University School of Medicine, Suwon, South Korea
- Su Kyung Lee: Ajou Center for Clinical Excellence, Ajou University School of Medicine, Suwon, South Korea
- Mi Ryoung Song: Office of Medical Education, Ajou University School of Medicine, Suwon, South Korea
- Jang Hoon Lee: Department of Pediatrics, Ajou University School of Medicine, Suwon, South Korea
- Miran Kim: Department of Obstetrics & Gynecology, Ajou University School of Medicine, Suwon, South Korea
- Yun Jung Jung: Department of Pulmonary and Critical Care Medicine, Ajou University School of Medicine, Suwon, South Korea
7. Tzeng A, Bruno B, Cooperrider J, Dinardo PB, Baird R, Swetlik C, Goldstein BN, Rastogi R, Roth AJ, Gilligan TD, Rish JM. A Structured Peer Assessment Method with Regular Reinforcement Promotes Longitudinal Self-Perceived Development of Medical Students' Feedback Skills. Medical Science Educator 2021;31:655-663. [PMID: 34457918] [PMCID: PMC8368272] [DOI: 10.1007/s40670-021-01242-w] [Accepted: 02/05/2021] [Indexed: 06/13/2023]
Abstract
BACKGROUND Given that training is integral to providing constructive peer feedback, we examined the impact of a regularly reinforced, structured peer assessment method on student-reported feedback abilities throughout a two-year preclinical Communication Skills course. METHODS Three consecutive 32-student medical school classes were introduced to the Observation-Reaction-Feedback method for providing verbal assessment during Year 1 Communication Skills orientation. In biweekly small-group sessions, students received worksheets reiterating the method and practiced giving verbal feedback to peers. Periodic questionnaires evaluated student perceptions of feedback delivery and the Observation-Reaction-Feedback method. RESULTS Biweekly reinforcement of the Observation-Reaction-Feedback method encouraged its uptake, which correlated with reports of more constructive, specific feedback. Compared to non-users, students who used the method noted greater improvement in comfort with assessing peers in Year 1 and continued growth of feedback abilities in Year 2. Comfort with providing modifying feedback and verbal feedback increased over the two-year course, while comfort with providing reinforcing feedback and written feedback remained similarly high. Concurrently, student preference for feedback anonymity decreased. CONCLUSIONS Regular reinforcement of a peer assessment framework can increase student usage of the method, which promotes the expansion of self-reported peer feedback skills over time. These findings support investigation of analogous strategies in other medical education settings. SUPPLEMENTARY INFORMATION The online version contains supplementary material available at 10.1007/s40670-021-01242-w.
Affiliation(s)
- Alice Tzeng: Cleveland Clinic Lerner College of Medicine of Case Western Reserve University, Cleveland, OH, USA
- Bethany Bruno: Cleveland Clinic Lerner College of Medicine of Case Western Reserve University, Cleveland, OH, USA
- Jessica Cooperrider: Cleveland Clinic Lerner College of Medicine of Case Western Reserve University, Cleveland, OH, USA
- Perry B. Dinardo: Cleveland Clinic Lerner College of Medicine of Case Western Reserve University, Cleveland, OH, USA
- Rachael Baird: Cleveland Clinic Lerner College of Medicine of Case Western Reserve University, Cleveland, OH, USA; Women’s Health Institute, Cleveland Clinic, Cleveland, OH, USA
- Carol Swetlik: Cleveland Clinic Lerner College of Medicine of Case Western Reserve University, Cleveland, OH, USA; Neurological Institute, Cleveland Clinic, Cleveland, OH, USA
- Brittany N. Goldstein: Cleveland Clinic Lerner College of Medicine of Case Western Reserve University, Cleveland, OH, USA; Department of Psychiatry & Behavioral Sciences, McGaw Medical Center of Northwestern University, Chicago, IL, USA
- Radhika Rastogi: Cleveland Clinic Lerner College of Medicine of Case Western Reserve University, Cleveland, OH, USA; Department of Pediatrics, Children’s Hospital of Philadelphia, Philadelphia, PA, USA
- Alicia J. Roth: Cleveland Clinic Lerner College of Medicine of Case Western Reserve University, Cleveland, OH, USA; Sleep Disorders Center, Cleveland Clinic, Cleveland, OH, USA
- Timothy D. Gilligan: Cleveland Clinic Lerner College of Medicine of Case Western Reserve University, Cleveland, OH, USA; Taussig Cancer Institute, Cleveland Clinic, Cleveland, OH, USA
- Julie M. Rish: Cleveland Clinic Lerner College of Medicine of Case Western Reserve University, Cleveland, OH, USA; Center for Behavioral Health, Cleveland Clinic, Cleveland, OH, USA; Office of Patient Experience, Cleveland Clinic, Cleveland, OH, USA
8. Curran VR, Fairbridge NA, Deacon D. Peer assessment of professionalism in undergraduate medical education. BMC Medical Education 2020;20:504. [PMID: 33308207] [PMCID: PMC7731547] [DOI: 10.1186/s12909-020-02412-x] [Received: 07/17/2020] [Accepted: 12/02/2020] [Indexed: 05/16/2023]
Abstract
BACKGROUND Fostering professional behaviour has become increasingly important in medical education, and non-traditional approaches to assessing professionalism may offer a more holistic representation of students' professional behaviour development. Emerging evidence suggests that peer assessment has potential as an alternative method of professionalism assessment. We introduced peer assessment of professionalism in the pre-clerkship phases of the undergraduate medical curriculum at our institution and evaluated the suitability of adopting a professional behaviour scale for longitudinal tracking of student development, as well as student comfort with and acceptance of peer assessment. METHODS Peer assessment was introduced using a validated professional behaviours scale. Students conducted repeated, longitudinal assessments of their peers from small-group, clinical skills learning activities. An electronic assessment system was used to collect and collate peer assessments and to provide reports to students. Student opinions of peer assessment were surveyed before introducing the process, confirmatory analyses of the adopted scale were conducted, and students were surveyed afterwards to explore satisfaction with the peer assessment process. RESULTS Students across all phases of the curriculum were initially supportive of anonymous peer assessment in small-group learning sessions. Peer scores improved over time; however, the magnitude of the increase was limited by ceiling effects attributed to the adopted scale. Students agreed that the professional behaviours scale was easy to use and understand, but a majority disagreed that peer assessment improved their understanding of professionalism or was a useful learning experience. CONCLUSIONS Peer assessment of professional behaviours does expose students to the process of assessing one's peers; however, the value of such processes at early stages of medical education may not be fully recognized or appreciated by students. Electronic administration of peer assessment is feasible for collecting and reporting peer feedback. Improvement in peer-assessed scores was observed over time, but student opinions of the educational value were mixed and indeterminate.
Affiliation(s)
- Vernon R Curran: Office of Professional and Educational Development, Faculty of Medicine, Memorial University, St. John's, Newfoundland, A1B 3V6, Canada
- Nicholas A Fairbridge: Office of Professional and Educational Development, Faculty of Medicine, Memorial University, St. John's, Newfoundland, A1B 3V6, Canada
- Diana Deacon: Office of Professional and Educational Development, Faculty of Medicine, Memorial University, St. John's, Newfoundland, A1B 3V6, Canada
9. Prediger S, Schick K, Fincke F, Fürstenberg S, Oubaid V, Kadmon M, Berberat PO, Harendza S. Validation of a competence-based assessment of medical students' performance in the physician's role. BMC Medical Education 2020;20:6. [PMID: 31910843] [PMCID: PMC6947905] [DOI: 10.1186/s12909-019-1919-x] [Received: 09/23/2019] [Accepted: 12/22/2019] [Indexed: 05/04/2023]
Abstract
BACKGROUND Assessing the competence of advanced undergraduate medical students based on performance in the clinical context is the ultimate, yet challenging, goal for medical educators seeking constructive alignment between undergraduate medical training and the professional work of physicians. We therefore designed and validated a performance-based 360-degree assessment of competences for advanced undergraduate medical students. METHODS This study was conducted in three steps: (1) Ten facets of competence considered most important for beginning residents were identified in a ranking study with 102 internists and 100 surgeons. (2) Based on these facets of competence, we developed a 360-degree assessment simulating a first day of residency, in which advanced undergraduate medical students (years 5 and 6) took the physician's role; knowledge was additionally assessed by a multiple-choice test. The assessment was performed twice and included three phases: a consultation hour, a patient management phase, and a patient handover. Sixty-seven (t1) and eighty-nine (t2) undergraduate medical students participated. (3) The participants completed the Group Assessment of Performance (GAP) test for flight school applicants to assess the facets of competence in a non-medical context for validation purposes. We aimed to provide a validity argument for the newly designed assessment based on Messick's six aspects of validation: (1) content validity, (2) substantive/cognitive validity, (3) structural validity, (4) generalizability, (5) external validity, and (6) consequential validity. RESULTS The assessment proved to be well operationalized, enabling undergraduate medical students to demonstrate their competences at the higher levels of Bloom's taxonomy. Its generalizability was underscored by its authenticity with respect to workplace reality and by its underlying facets of competence relevant for beginning residents. The moderate concordance with facets of competence from the validated GAP test provides an argument for convergent validity. Since five of Messick's six aspects of validation could be defended, our competence-based 360-degree assessment format shows good arguments for its validity. CONCLUSION According to these validation arguments, the assessment instrument appears to be a good option for assessing the competence of advanced undergraduate medical students in a summative or formative way. Adaptations for assessing postgraduate medical trainees should be explored.
Affiliation(s)
- Sarah Prediger: III. Department of Internal Medicine, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Kristina Schick: TUM Medical Education Center, School of Medicine, Technical University of Munich, Munich, Germany
- Fabian Fincke: Department of Medical Education and Educational Research, Faculty of Medicine and Health Science, University of Oldenburg, Oldenburg, Germany
- Sophie Fürstenberg: III. Department of Internal Medicine, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Martina Kadmon: Faculty of Medicine, University of Augsburg, Deanery, Augsburg, Germany
- Pascal O. Berberat: TUM Medical Education Center, School of Medicine, Technical University of Munich, Munich, Germany
- Sigrid Harendza: III. Department of Internal Medicine, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
10. Biesma R, Kennedy MC, Pawlikowska T, Brugha R, Conroy R, Doyle F. Peer assessment to improve medical student's contributions to team-based projects: randomised controlled trial and qualitative follow-up. BMC Medical Education 2019;19:371. [PMID: 31615489] [PMCID: PMC6794794] [DOI: 10.1186/s12909-019-1783-8] [Received: 09/12/2018] [Accepted: 08/30/2019] [Indexed: 06/03/2023]
Abstract
BACKGROUND Medical schools increasingly incorporate teamwork in their curricula, but medical students often have a negative perception of team projects, particularly when participation is unequal. The purpose of this study was to evaluate whether a novel peer evaluation system improves teamwork contributions and reduces the risk of students "freeloading". METHODS A cluster randomised controlled trial (RCT) with qualitative follow-up enrolled 37 teams (n = 223 students). Participating teams were randomised to an intervention group (19 teams) or a control group (18 teams). The validated Comprehensive Assessment of Team Member Effectiveness (CATME) tool was used as the outcome measure and was completed at baseline (week 2) and at the end of the project (week 10). The team contribution subscale was the primary outcome, with the other subscales as secondary outcomes. Six focus group discussions were held with students to capture the teams' experiences and perceptions of peer assessment and its effects on teamwork. RESULTS The RCT showed no difference in team contribution, or in other forms of team effectiveness, between intervention and control teams. The focus group discussions highlighted students' negative attitudes toward, and failure to implement, this transparent, points-based peer assessment system, out of fear of future consequences for relationships with peers. The need to assess peers transparently to stimulate open discussion was perceived as threatening by participants. Teams suggested that other peer assessment systems could work, such as awarding additional or floating marks to high-performing team members. CONCLUSIONS Other models of peer assessment need to be developed and tested that are non-threatening and that facilitate early acceptance of this mode of assessment.
Affiliation(s)
- Regien Biesma, Department of Epidemiology and Public Health Medicine, Royal College of Surgeons in Ireland (RCSI), Dublin, Ireland
- Mary-Claire Kennedy, School of Health Care, Faculty of Medicine and Health, Leeds University, Leeds, UK
- Teresa Pawlikowska, Health Professions Education Centre, Royal College of Surgeons in Ireland (RCSI), Dublin, Ireland
- Ruairi Brugha, Department of Epidemiology and Public Health Medicine, Royal College of Surgeons in Ireland (RCSI), Dublin, Ireland
- Ronan Conroy, Centre for Data Management, Royal College of Surgeons in Ireland (RCSI), Dublin, Ireland
- Frank Doyle, Department of Health Psychology, Royal College of Surgeons in Ireland (RCSI), Dublin, Ireland
11
Ficzere CH, Clauson AS, Lee PH. Reliability of peer assessment of patient education simulations. CURRENTS IN PHARMACY TEACHING & LEARNING 2019; 11:580-584. [PMID: 31213313] [DOI: 10.1016/j.cptl.2019.02.021] [Received: 09/06/2018] [Revised: 12/05/2018] [Accepted: 02/18/2019] [Indexed: 06/09/2023]
Abstract
BACKGROUND At Belmont University College of Pharmacy, the final introductory pharmacy practice experience (IPPE) course in the IPPE series, IPPE V, is designed to assess readiness for advanced pharmacy practice experiences and includes three patient counseling simulations. These simulations have required considerable resources. The objective of our study was to determine whether student performance on patient counseling simulations can be accurately assessed by peers. EDUCATIONAL ACTIVITY Students were required to participate in patient counseling simulations throughout the semester. For each simulation, students were assigned one role: pharmacist, patient, or peer-evaluator. Each pharmacist counseled the patient on a specific product while the peer-evaluator assessed the accuracy and completeness of the counseling using a detailed checklist. The patient used a checklist to assess the pharmacist's communication skills. Faculty assessed the student evaluators and the patients by counting the number of discrepancies between the student evaluators' checklists and those completed by faculty. Students were surveyed at the end of the semester regarding their beliefs and perceptions of peer assessment for the communication simulations. CRITICAL ANALYSIS OF THE EDUCATIONAL ACTIVITY Of 65 students enrolled in the spring 2018 course, complete recordings and checklists were available for 54 simulations (83.1%). Interrater reliability was high, with all correlation coefficients exceeding 0.86. Students agreed that they were comfortable assessing patient education content (82.14%) and communication skills (82.14%). Our results indicate that peer evaluation during patient education simulations is reliable and acceptable to students.
Affiliation(s)
- Cathy H Ficzere, Department of Pharmacy Practice, Belmont University College of Pharmacy, 1900 Belmont Blvd, Nashville, TN 37212, United States
- Angela S Clauson, Department of Pharmacy Practice, Belmont University College of Pharmacy, 1900 Belmont Blvd, Nashville, TN 37212, United States
- Phillip H Lee, Department of Pharmacy Practice, Belmont University College of Pharmacy, 1900 Belmont Blvd, Nashville, TN 37212, United States
12
Kiessling C, Tsimtsiou Z, Essers G, van Nuland M, Anvik T, Bujnowska-Fedak MM, Hovey R, Joakimsen R, Perron NJ, Rosenbaum M, Silverman J. General principles to consider when designing a clinical communication assessment program. PATIENT EDUCATION AND COUNSELING 2017; 100:1762-1768. [PMID: 28396057] [DOI: 10.1016/j.pec.2017.03.027] [Received: 06/11/2016] [Revised: 02/26/2017] [Accepted: 03/25/2017] [Indexed: 06/07/2023]
Abstract
OBJECTIVES Assessment of clinical communication helps teachers in healthcare education determine whether their learners have acquired sufficient skills to meet the demands of clinical practice. The aim of this paper is to support educators in planning how to incorporate assessment into clinical communication teaching, building on the authors' experience and the current literature. METHODS A summary of the relevant literature within healthcare education is discussed, focusing on what and where to assess, how to implement assessment, and how to choose an appropriate methodology. RESULTS Establishing a coherent approach to teaching, training, and assessment, including assessing communication in the clinical context, is discussed. Key features of how to implement assessment are presented, including establishing a system with both formative and summative approaches, providing feedback that enhances learning, and establishing a multi-source, longitudinal assessment program. CONCLUSIONS The implementation of a reliable, valid, credible, and feasible assessment method with specific educational relevance is essential for clinical communication teaching. PRACTICE IMPLICATIONS All assessment methods have strengths and limitations. Since assessment drives learning, assessment should be aligned with the purpose of the teaching program. Combining different assessment formats, multiple observations, and independent measurements in different settings is advised.
Affiliation(s)
- Claudia Kiessling, Department Assessment, Brandenburg Medical School, Neuruppin, Germany
- Zoi Tsimtsiou, Department of Hygiene, School of Medicine, Aristotle University of Thessaloniki, University Campus, 54124, Thessaloniki, Greece
- Geurt Essers, Department of Public Health and Primary Care, Leiden University Medical Centre, Leiden, The Netherlands
- Marc van Nuland, Department of Public Health and Primary Care, University of Leuven, Leuven, Belgium
- Tor Anvik, UiT The Arctic University of Norway, Tromsø, Norway
- Noëlle Junod Perron, Unit of Development and Research in Medical Education, Geneva Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Marcy Rosenbaum, Department of Family Medicine and Office of Consultation and Research in Medical Education, University of Iowa Carver College of Medicine, Iowa City, USA
13
Inayah AT, Anwer LA, Shareef MA, Nurhussen A, Alkabbani HM, Alzahrani AA, Obad AS, Zafar M, Afsar NA. Objectivity in subjectivity: do students' self and peer assessments correlate with examiners' subjective and objective assessment in clinical skills? A prospective study. BMJ Open 2017; 7:e012289. [PMID: 28487454] [PMCID: PMC5623435] [DOI: 10.1136/bmjopen-2016-012289] [Received: 04/14/2016] [Revised: 03/15/2017] [Accepted: 03/16/2017] [Indexed: 11/04/2022]
Abstract
OBJECTIVES Qualitative subjective assessment has been exercised either through self-reflection (self-assessment (SA)) or by an observer (peer assessment (PA)) and is considered to play an important role in students' development. The objectivity of PA and SA by students, as well as of subjective assessments by faculty examiners, remains debated. This matters most in high-stakes examinations. We explored the degree of objectivity in PA, SA, and the global rating by examiners (Examiners' Subjective Assessment, ESA) compared with Objective Structured Clinical Examinations (OSCEs). DESIGN Prospective cohort study. SETTING Undergraduate medical students at Alfaisal University, Riyadh. PARTICIPANTS All second-year medical students (n=164) of both genders, taking a course on clinical history taking and general physical examination. MAIN OUTCOME MEASURES A Likert scale questionnaire was distributed among the participants during selected clinical skills sessions. Each student was evaluated randomly by peers (PA) as well as by himself/herself (SA). Two OSCEs were conducted in which students were assessed by an examiner objectively as well as subjectively (ESA) for a global rating of confidence and well-preparedness. OSCE-1 had fewer topics and stations, whereas OSCE-2 was terminal and full scale. RESULTS OSCE-1 (B=0.10) and ESA (B=8.16) predicted OSCE-2 scores. 'No nervousness' in PA (r=0.185, p=0.018) and 'confidence' in SA (r=0.207, p=0.008) correlated with 'confidence' in ESA. In 'well-preparedness', SA correlated with ESA (r=0.234, p=0.003). CONCLUSIONS OSCE-1 and ESA predicted students' performance in OSCE-2, a high-stakes evaluation, indicating practical 'objectivity' in ESA, whereas SA and PA had a minimal predictive role. Certain components of SA and PA correlated with ESA, suggesting partial objectivity given the limited objectiveness of ESA. Such differences in 'qualitative' objectivity probably reflect experience. Thus, subjective assessment can be used with some degree of objectivity for continuous assessment.
Affiliation(s)
- Lucman A Anwer, College of Medicine, Alfaisal University, Riyadh, Saudi Arabia; Mayo Clinic, Rochester, USA
- Mohammad Abrar Shareef, College of Medicine, Alfaisal University, Riyadh, Saudi Arabia; Mercy St. Vincent Medical Center, Toledo, USA
- Akram Nurhussen, College of Medicine, Alfaisal University, Riyadh, Saudi Arabia
- Muhammad Zafar, College of Medicine, Alfaisal University, Riyadh, Saudi Arabia
- Nasir Ali Afsar, College of Medicine, Alfaisal University, Riyadh, Saudi Arabia
14
Jang HW, Park SW. Effects of personality traits on collaborative performance in problem-based learning tutorials. Saudi Med J 2016; 37:1365-1371. [PMID: 27874153] [PMCID: PMC5303776] [DOI: 10.15537/smj.2016.12.15708] [Indexed: 11/16/2022]
Abstract
OBJECTIVES To examine the relationship between students' collaborative performance in a problem-based learning (PBL) environment and their personality traits. METHODS This retrospective, cross-sectional study was conducted using student data from a PBL program between 2013 and 2014 at Sungkyunkwan University School of Medicine, Seoul, South Korea. Eighty students were included in the study. Student data from the Temperament and Character Inventory were used as a measure of their personality traits. Peer evaluation scores during PBL were used as a measure of students' collaborative performance. RESULTS Simple regression analyses indicated that participation was negatively related to harm avoidance and positively related to persistence, whereas preparedness for group work was negatively related to reward dependence. In multiple regression analyses, low reward dependence remained a significant predictor of preparedness. Grade-point average (GPA) was negatively associated with novelty seeking and cooperativeness and positively associated with persistence. CONCLUSION Medical students who are less dependent on social reward are more likely to complete the independent work assigned to prepare for PBL tutorials. The findings of this study can help educators better understand and support medical students who are at risk of struggling in collaborative learning environments.
Affiliation(s)
- Hye Won Jang, Department of Medical Education, School of Medicine, Sungkyunkwan University, Seoul, South Korea
15
Carney PA, Palmer RT, Fuqua Miller M, Thayer EK, Estroff SE, Litzelman DK, Biagioli FE, Teal CR, Lambros A, Hatt WJ, Satterfield JM. Tools to Assess Behavioral and Social Science Competencies in Medical Education: A Systematic Review. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2016; 91:730-42. [PMID: 26796091] [PMCID: PMC4846480] [DOI: 10.1097/acm.0000000000001090] [Indexed: 05/23/2023]
Abstract
PURPOSE Behavioral and social science (BSS) competencies are needed to provide quality health care, but psychometrically validated measures to assess these competencies are difficult to find. Moreover, they have not been mapped to existing frameworks, like those from the Liaison Committee on Medical Education (LCME) and Accreditation Council for Graduate Medical Education (ACGME). This systematic review aimed to identify and evaluate the quality of assessment tools used to measure BSS competencies. METHOD The authors searched the literature published between January 2002 and March 2014 for articles reporting psychometric or other validity/reliability testing, using OVID, CINAHL, PubMed, ERIC, Research and Development Resource Base, SOCIOFILE, and PsycINFO. They reviewed 5,104 potentially relevant titles and abstracts. To guide their review, they mapped BSS competencies to existing LCME and ACGME frameworks. The final included articles fell into three categories: instrument development, which were of the highest quality; educational research, which were of the second highest quality; and curriculum evaluation, which were of lower quality. RESULTS Of the 114 included articles, 33 (29%) yielded strong evidence supporting tools to assess communication skills, cultural competence, empathy/compassion, behavioral health counseling, professionalism, and teamwork. Sixty-two (54%) articles yielded moderate evidence and 19 (17%) weak evidence. Articles mapped to all LCME standards and ACGME core competencies; the most common was communication skills. CONCLUSIONS These findings serve as a valuable resource for medical educators and researchers. More rigorous measurement validation and testing and more robust study designs are needed to understand how educational strategies contribute to BSS competency development.
Affiliation(s)
- Patricia A Carney
- P.A. Carney is professor of family medicine and of public health and preventive medicine, Oregon Health & Science University School of Medicine, Portland, Oregon. R.T. Palmer is assistant professor of family medicine, Oregon Health & Science University School of Medicine, Portland, Oregon. M.F. Miller is senior research assistant, Department of Family Medicine, Oregon Health & Science University School of Medicine, Portland, Oregon. E.K. Thayer is research assistant, Department of Family Medicine, Oregon Health & Science University School of Medicine, Portland, Oregon. S.E. Estroff is professor, Department of Social Medicine, University of North Carolina at Chapel Hill School of Medicine, Chapel Hill, North Carolina. D.K. Litzelman is D. Craig Brater Professor of Medicine and senior director for research in health professions education and practice, Indiana University School of Medicine, Indianapolis, Indiana. F.E. Biagioli is professor of family medicine, Oregon Health & Science University School of Medicine, Portland, Oregon. C.R. Teal is assistant professor, Department of Medicine, and director, Educational Evaluation and Research, Office of Undergraduate Medical Education, Baylor College of Medicine, Houston, Texas. A. Lambros is active emeritus associate professor, Social Sciences & Health Policy, Wake Forest School of Medicine, Winston-Salem, North Carolina. W.J. Hatt is programmer analyst, Department of Family Medicine, Oregon Health & Science University School of Medicine, Portland, Oregon. J.M. Satterfield is professor of clinical medicine, University of California, San Francisco, School of Medicine, San Francisco, California
16
Hulsman RL, van der Vloodt J. Self-evaluation and peer-feedback of medical students' communication skills using a web-based video annotation system. Exploring content and specificity. PATIENT EDUCATION AND COUNSELING 2015; 98:356-63. [PMID: 25433967] [DOI: 10.1016/j.pec.2014.11.007] [Received: 10/21/2013] [Revised: 10/16/2014] [Accepted: 11/11/2014] [Indexed: 05/21/2023]
Abstract
OBJECTIVE Self-evaluation and peer-feedback are important strategies within the reflective practice paradigm for the development and maintenance of professional competencies like medical communication. Characteristics of the self-evaluation and peer-feedback annotations of medical students' video-recorded communication skills were analyzed. METHOD Twenty-five year 4 medical students recorded history-taking consultations with a simulated patient, uploaded the video to a web-based platform, and marked and annotated positive and negative events. Peers reviewed the video and self-evaluations and provided feedback. The number of positive and negative annotations and the amount of text entered were analyzed. Topics and specificity of the annotations were coded and analyzed qualitatively. RESULTS Students annotated on average more negative than positive events. Additional peer-feedback was more often positive. Topics most often related to structuring the consultation. Students were most critical about biomedical topics. Negative annotations were more specific than positive annotations. Self-evaluations were more specific than peer-feedback, and the two were significantly correlated. Four response patterns were detected that negatively bias specificity assessment ratings. CONCLUSION Teaching students to be more specific in their self-evaluations may be effective for eliciting more specific peer-feedback. PRACTICE IMPLICATIONS Video fragment rating is a convenient tool for bringing reflective practice activities like self-evaluation and peer-feedback into the classroom in the teaching of clinical skills.
Affiliation(s)
- Robert L Hulsman, Academic Medical Centre, Department of Medical Psychology, Amsterdam, The Netherlands
- Jane van der Vloodt, Academic Medical Centre, Department of Medical Psychology, Amsterdam, The Netherlands
17
Humphris G, Entwistle V, Eide H, Visser A. The science of health communication: impressions from the International Conference on Communication in Healthcare in St Andrews, Scotland, UK. PATIENT EDUCATION AND COUNSELING 2013; 92:283-285. [PMID: 23962541] [DOI: 10.1016/j.pec.2013.08.001] [Indexed: 06/02/2023]