1
Buléon C, Mattatia L, Minehart RD, Rudolph JW, Lois FJ, Guillouet E, Philippon AL, Brissaud O, Lefevre-Scelles A, Benhamou D, Lecomte F, group TSAWS, Bellot A, Crublé I, Philippot G, Vanderlinden T, Batrancourt S, Boithias-Guerot C, Bréaud J, de Vries P, Sibert L, Sécheresse T, Boulant V, Delamarre L, Grillet L, Jund M, Mathurin C, Berthod J, Debien B, Gacia O, Der Sahakian G, Boet S, Oriot D, Chabot JM. Simulation-based summative assessment in healthcare: an overview of key principles for practice. Advances in Simulation (London, England) 2022; 7:42. [PMID: 36578052] [PMCID: PMC9795938] [DOI: 10.1186/s41077-022-00238-9]
Abstract
BACKGROUND Healthcare curricula need summative assessments relevant to and representative of clinical situations to best select and train learners. Simulation provides multiple benefits, with a growing literature base proving its utility for training in a formative context. Advancing to the next step, "the use of simulation for summative assessment" requires rigorous and evidence-based development, because any summative assessment is high stakes for participants, trainers, and programs. The first step of this process is to identify the baseline from which we can start. METHODS First, using a modified nominal group technique, a task force of 34 panelists defined topics to clarify the why, how, what, when, and who for using simulation-based summative assessment (SBSA). Second, each topic was explored by a group of panelists through state-of-the-art literature reviews, with a snowball method to identify further references. Our goal was to identify current knowledge and potential recommendations for future directions. Results were cross-checked among groups and reviewed by an independent expert committee. RESULTS Seven topics were selected by the task force: "What can be assessed in simulation?", "Assessment tools for SBSA", "Consequences of undergoing the SBSA process", "Scenarios for SBSA", "Debriefing, video, and research for SBSA", "Trainers for SBSA", and "Implementation of SBSA in healthcare". Together, these seven explorations provide an overview of what is known and can be done with relative certainty, and what is unknown and probably needs further investigation. Based on this work, we highlighted the trustworthiness of different summative assessment-related conclusions, the important problems and questions that remain, and the consequences that how SBSA is conducted has for participants and institutions. CONCLUSION Our results identified among the seven topics one area with robust evidence in the literature ("What can be assessed in simulation?"), three areas with evidence that require guidance by expert opinion ("Assessment tools for SBSA", "Scenarios for SBSA", "Implementation of SBSA in healthcare"), and three areas with weak or emerging evidence ("Consequences of undergoing the SBSA process", "Debriefing for SBSA", "Trainers for SBSA"). Using SBSA holds much promise, with increasing demand for this application. Given the high stakes involved, it must be rigorously conducted and supervised. Guidelines for good practice should be formalized to help with conduct and implementation. We believe this baseline can direct future investigation and the development of guidelines.
Affiliation(s)
- Clément Buléon
- Department of Anesthesiology, Intensive Care and Perioperative Medicine, Caen Normandy University Hospital, Caen, France; Medical School, University of Caen Normandy, Caen, France; Center for Medical Simulation, Boston, MA, USA
- Laurent Mattatia
- Department of Anesthesiology, Intensive Care and Perioperative Medicine, Nîmes University Hospital, Nîmes, France
- Rebecca D. Minehart
- Center for Medical Simulation, Boston, MA, USA; Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA
- Jenny W. Rudolph
- Center for Medical Simulation, Boston, MA, USA; Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA
- Fernande J. Lois
- Department of Anesthesiology, Intensive Care and Perioperative Medicine, Liège University Hospital, Liège, Belgium
- Erwan Guillouet
- Department of Anesthesiology, Intensive Care and Perioperative Medicine, Caen Normandy University Hospital, Caen, France; Medical School, University of Caen Normandy, Caen, France
- Anne-Laure Philippon
- Department of Emergency Medicine, Pitié Salpêtrière University Hospital, APHP, Paris, France
- Olivier Brissaud
- Department of Pediatric Intensive Care, Pellegrin University Hospital, Bordeaux, France
- Antoine Lefevre-Scelles
- Department of Emergency Medicine, Rouen University Hospital, Rouen, France
- Dan Benhamou
- Department of Anesthesiology, Intensive Care and Perioperative Medicine, Kremlin Bicêtre University Hospital, APHP, Paris, France
- François Lecomte
- Department of Emergency Medicine, Cochin University Hospital, APHP, Paris, France
2
Koo JH, Ong KY, Yap YT, Tham KY. The role of training in student examiner rating performance in a student-led mock OSCE. Perspectives on Medical Education 2021; 10:293-298. [PMID: 33351173] [PMCID: PMC8505586] [DOI: 10.1007/s40037-020-00643-8]
Abstract
INTRODUCTION Peer assessments are increasingly prevalent in medical education, including student-led mock Objective Structured Clinical Examinations (OSCEs). While there is some evidence to suggest that examiner training may improve OSCE assessments, few students undergo training before becoming examiners. We sought to evaluate an examiner training programme in the setting of a student-led mock OSCE. METHODS A year-2 mock OSCE comprising history taking (Hx) and physical examination (PE) stations was conducted involving 35 year-3 (Y3) student examiners and 21 year-5 (Y5) student examiners who acted as reference examiners. Twelve Y3 student examiners attended an OSCE examiner training programme conducted by senior faculty. During the OSCE, Y3 and Y5 student examiners were randomly paired to grade the same candidates, and their scores were compared. Scores for checklist rating (CR) and global rating (GR) domains were assigned for both Hx and PE stations. RESULTS There was moderate to excellent correlation between Y3 and Y5 student examiners for both Hx (ICC 0.71-0.96) and PE stations (ICC 0.71-0.88) across all domains. For both Hx and PE stations, the GR domain had poorer correlation than the CR domain. Examiner training resulted in better correlations for PE but not Hx stations. Effect sizes were lower than the minimum detectable effect (MDE) sizes for all comparisons made. DISCUSSION Y3 student examiners are effective substitutes for Y5 student examiners in a Y2 mock OSCE. Our findings suggest that examiner training may further improve marking behaviour, especially for PE stations. Further studies with larger sample sizes are required to evaluate the effects of dedicated examiner training.
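The agreement figures reported here are intraclass correlation coefficients (ICCs) between paired examiners. A minimal sketch of that computation on hypothetical paired scores, assuming a two-way random-effects, absolute-agreement model (the abstract does not state which ICC form was used):

```python
# Sketch: inter-rater agreement between two examiners who scored the
# same candidates, expressed as an ICC. Scores below are hypothetical.
import pandas as pd
import pingouin as pg  # pip install pingouin

# One row per (candidate, examiner) observation; scores out of 20.
scores = pd.DataFrame({
    "candidate": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5],
    "examiner":  ["Y3", "Y5"] * 5,
    "score":     [14, 15, 17, 18, 10, 11, 19, 18, 12, 14],
})

# pingouin reports all six ICC forms; ICC2 is the two-way
# random-effects, absolute-agreement, single-rater coefficient.
icc = pg.intraclass_corr(data=scores, targets="candidate",
                         raters="examiner", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])
```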
Affiliation(s)
- Jian Hui Koo
- Singapore General Hospital, Singapore, Singapore
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore
- Kim Yao Ong
- Tan Tock Seng Hospital, Singapore, Singapore
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore
- Yun Ting Yap
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore
- Kum Ying Tham
- Tan Tock Seng Hospital, Singapore, Singapore
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore
3
Louridas M, de Montbrun S. Competency-Based Education in Minimally Invasive and Robotic Colorectal Surgery. Clin Colon Rectal Surg 2021; 34:155-162. [PMID: 33814997] [DOI: 10.1055/s-0040-1718683]
Abstract
Minimally invasive and robotic techniques have become increasingly implemented in surgical practice and are now an essential part of the foundational skills of training colorectal surgeons. Over the past 5 years there has been a shift in the surgical educational paradigm toward competency-based education (CBE). CBE recognizes that trainees learn at different rates but, regardless, are required to meet a threshold of competent performance prior to independent practice. Thus, CBE attempts to replace the traditional "time" endpoint of training with "performance." Although conceptually sensible, implementing CBE has proven challenging. This article defines competence, outlines appropriate tools for assessing technical skill, and reviews the literature on the number of cases required to achieve competence in colorectal procedures, while outlining the barriers to implementing CBE.
Affiliation(s)
- Marisa Louridas
- Department of Surgery, University of Toronto, Toronto, Ontario, Canada
4
Baugh RF, Baugh AD. Cultural influences and the Objective Structured Clinical Examination. International Journal of Medical Education 2021; 12:22-24. [PMID: 33507878] [PMCID: PMC7883802] [DOI: 10.5116/ijme.5ff9.b817]
Affiliation(s)
- Reginald F. Baugh
- Department of Surgery, University of Toledo College of Medicine and Life Sciences, Toledo, OH, USA
- Aaron D. Baugh
- Pulmonary, Critical Care, Allergy, Sleep Medicine, Department of Internal Medicine, University of California San Francisco Medical School, University of California San Francisco Medical Center, San Francisco, CA, USA
5
Moreno-López R, Sinclair S. Evaluation of a new e-learning resource for calibrating OSCE examiners on the use of rating scales. European Journal of Dental Education 2020; 24:276-281. [PMID: 31925850] [DOI: 10.1111/eje.12495]
Abstract
INTRODUCTION Rating scales have been described as better at assessing behaviours such as professionalism during Objective Structured Clinical Examinations (OSCEs). However, there is an increased need to train and calibrate staff in their use prior to student assessment. MATERIAL AND METHODS An online e-learning package was developed and made available to all examiners at the Institute of Dentistry at the University of Aberdeen. The package included videos of three OSCE stations (medical emergency, rubber dam placement and handling a complaint), each recorded in two different scenarios (an excellent and an unsatisfactory candidate). These videos were recorded to meet a pre-defined marking score. The examiners were required to mark the six videos using pre-set marking criteria (checklist and rating scales). The rating scales covered professionalism, general clinical ability and/or communication skills. For each video, examiners were given four possible options (unsatisfactory, borderline, satisfactory or excellent), and they were provided with a description for each domain. They were also required to complete a questionnaire to gather their views on the use of this e-learning environment. RESULTS Fifteen examiners completed the task. The total scores given were very similar to the expected scores for the medical emergency and complaint stations; however, this was not the case for the rubber dam station (P = .017 and .036). This could be attributed to some aspects of the placement of the rubber dam being unclear, as commented on in the examiners' questionnaires. There was consistency in the selection of marks on the rating scales (inter-examiner correlation ranged between 0.916 and 0.979). CONCLUSION Further studies are required in the field of e-learning to calibrate examiners for practical assessment; however, this study provides preliminary evidence to support the use of videos as part of an online training package to calibrate OSCE examiners on the use of rating scales.
Affiliation(s)
- Serena Sinclair
- Institute of Dentistry, University of Aberdeen, Aberdeen, UK
6
Easdown LJ. A Checklist to Help Faculty Assess ACGME Milestones in a Video-Recorded OSCE. J Grad Med Educ 2017; 9:605-610. [PMID: 29075381] [PMCID: PMC5646919] [DOI: 10.4300/jgme-d-17-00112.1]
Abstract
BACKGROUND Faculty members need to assess resident performance using the Accreditation Council for Graduate Medical Education Milestones. OBJECTIVE In this randomized study we used an objective structured clinical examination (OSCE) around the disclosure of an adverse event to determine whether use of a checklist improved the quality of milestone assessments by faculty. METHODS In 2013, a total of 20 anesthesiology faculty members from 3 institutions were randomized to 2 groups to assess 5 videos of trainees demonstrating advancing levels of competency on the OSCE. One group used milestones alone, and the other used milestones plus a 13-item checklist with behavioral anchors based on ideal performance. We classified faculty ratings as either correct or incorrect with regard to the competency level demonstrated in each video, and then used logistic regression analysis to assess the effect of checklist use on the odds of correct classification. RESULTS Thirteen of 20 faculty members rated assessing performance using milestones alone as difficult or very difficult. Checklist use was associated with significantly greater odds of correct classification at entry level (odds ratio [OR] = 9.2, 95% confidence interval [CI] 4.0-21.2) and at junior level (OR = 2.7, 95% CI 1.3-5.7) performance. For performance at other competency levels checklist use did not affect the odds of correct classification. CONCLUSIONS A majority of anesthesiology faculty members reported difficulty with assessing a videotaped OSCE of error disclosure using milestones as primary assessment tools. Use of the checklist assisted in correct assessments at the entry and junior levels.
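The checklist effect above is expressed as odds ratios from logistic regression. A minimal sketch of that style of analysis on simulated ratings (invented data and a deliberately simple model, not the study's exact specification):

```python
# Sketch: odds of a correct milestone classification as a function of
# checklist use, via logistic regression on simulated ratings.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
checklist = rng.integers(0, 2, n)            # 1 = rated with checklist
# Simulate higher odds of a correct rating when the checklist is used.
p_correct = 1 / (1 + np.exp(-(-0.5 + 1.2 * checklist)))
correct = rng.binomial(1, p_correct)

X = sm.add_constant(checklist)
fit = sm.Logit(correct, X).fit(disp=False)
or_est = np.exp(fit.params[1])               # odds ratio for checklist use
ci_low, ci_high = np.exp(fit.conf_int()[1])  # 95% CI on the OR scale
print(f"OR = {or_est:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}")
```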
7
Khan R, Payne MWC, Chahine S. Peer assessment in the objective structured clinical examination: A scoping review. Medical Teacher 2017; 39:745-756. [PMID: 28399690] [DOI: 10.1080/0142159x.2017.1309375]
Abstract
BACKGROUND The objective structured clinical examination (OSCE), originally designed with experts assessing trainees' competence, is more frequently employed with an element of peer assessment and feedback. Although peer assessment in higher education has been studied, its role in OSCEs has not been reviewed. AIMS The aim of this study is to conduct a scoping review and explore the role of peer assessment and feedback in the OSCE. METHODS Electronic database and hand searching yielded 507 articles. Twenty-one full records were screened, of which 13 were included in the review. Two independent reviewers completed each step of the review. RESULTS Peer-based OSCEs are used to assess students' accuracy in assessing OSCE performance and to promote learning. Peer examiners (PE) tend to award higher global ratings and variable checklist ratings compared with faculty, and provide high-quality feedback. Participating in these OSCEs is perceived as beneficial for learning. CONCLUSIONS Peer assessment and feedback can be used to gauge PE reliability and promote learning. Teachers using these OSCEs must choose a methodology that fits their purpose. Competency-based education calls for diversification of assessment practices and asks how assessment impacts learning; the peer-based OSCE responds to these demands and will become an important practice in health professions education.
Affiliation(s)
- Rishad Khan
- Centre for Education Research and Innovation, Schulich School of Medicine and Dentistry, University of Western Ontario, London, Canada
- Michael W C Payne
- Department of Physical Medicine and Rehabilitation, Schulich School of Medicine and Dentistry, University of Western Ontario, London, Canada
- Saad Chahine
- Centre for Education Research and Innovation, Schulich School of Medicine and Dentistry, University of Western Ontario, London, Canada
- Department of Medicine, Schulich School of Medicine and Dentistry, University of Western Ontario, London, Canada
8
Phillips AW, Matthan J, Bookless LR, Whitehead IJ, Madhavan A, Rodham P, Porter ALR, Nesbitt CI, Stansby G. Individualised Expert Feedback is Not Essential for Improving Basic Clinical Skills Performance in Novice Learners: A Randomized Trial. Journal of Surgical Education 2017; 74:612-620. [PMID: 28041770] [DOI: 10.1016/j.jsurg.2016.12.003]
Abstract
OBJECTIVE To determine whether unsupervised video feedback (UVF) is as effective as direct expert feedback (DEF) in improving clinical skills performance for medical students learning basic surgical skills: intravenous cannulation, catheterization, and suturing. BACKGROUND Feedback is a vital component of the learning process, yet great variation persists in its quality, quantity, and methods of delivery. The use of video technology to assist in the provision of feedback has been adopted increasingly. METHODS A prospective, blinded randomized trial was carried out comparing DEF (an expert reviewing students' performances and offering suggestions for improvement) with UVF (students reviewing their own performance alongside an expert teaching video). Medical students received an initial teaching lecture on intravenous cannulation, catheterization, and suturing and were then recorded performing each task. They subsequently received either DEF or UVF before performing the task again. Students' recordings were additionally scored by 2 blinded experts using a validated proforma. RESULTS A total of 71 medical students were recruited. Cannulation scores improved 4.3% with DEF and 9.5% with UVF (p = 0.044), catheterization scores improved 8.7% with DEF and 8.9% with UVF (p = 0.96), and suturing improved 15.6% with DEF and 13.2% with UVF (p = 0.54). Improvement from baseline scores was significant in all cases (p < 0.05). CONCLUSION Video-assisted feedback allows a significant improvement in clinical skills for novices. No significant additional benefit was demonstrated from DEF, and a similar improvement can be obtained using a generic expert video and allowing students to review their own performance. This could have significant implications for the design and delivery of such training.
Affiliation(s)
- Alexander W Phillips
- Northern Oesophagogastric Cancer Unit, Royal Victoria Infirmary, Newcastle upon Tyne, United Kingdom
- Joanna Matthan
- Anatomy and Clinical Skills Department, School of Medical Education, Faculty of Medical Sciences, Newcastle University, Newcastle upon Tyne, United Kingdom
- Lucy R Bookless
- Department of General Surgery, James Cook University Hospital, Middlesbrough, United Kingdom
- Ian J Whitehead
- Department of General Surgery, St Helen's and Knowsley Hospitals NHS Trust, St Helens, United Kingdom
- Anantha Madhavan
- Northern Oesophagogastric Cancer Unit, Royal Victoria Infirmary, Newcastle upon Tyne, United Kingdom
- Paul Rodham
- Royal Victoria Infirmary, The Newcastle upon Tyne NHS Foundation Trust, Newcastle upon Tyne, United Kingdom
- Anna L R Porter
- Royal Victoria Infirmary, The Newcastle upon Tyne NHS Foundation Trust, Newcastle upon Tyne, United Kingdom
- Craig I Nesbitt
- Department of Vascular Surgery, James Cook University Hospital, Middlesbrough, United Kingdom
- Gerard Stansby
- Department of Vascular Surgery, Freeman Hospital, Newcastle upon Tyne, United Kingdom
9
Martineau B, Mamede S, St-Onge C, Bergeron L. The Influence of Peer Feedback on the Acquisition of Physical-Examination Skills. Health Professions Education 2016. [DOI: 10.1016/j.hpe.2016.07.002]
10
Abstract
The assessment of clinical competence is becoming increasingly complex, patient centered, and student driven. Traditionally, clinical evaluation methods consisted primarily of faculty observations, oral examinations, and multiple-choice tests. Increased faculty workload, discontent with traditional methods of clinical skill assessment, and developments in the fields of psychology and education have led to the formation of new modalities, namely performance assessments. The literature pertaining to performance assessment with standardized patients is reviewed. Based on this literature, several areas for the future direction of performance assessment are proposed, including (a) toward evidence-based locally developed assessments, (b) toward an understanding of educational outcomes and noncognitive assessment factors, and (c) toward more student-driven assessments.
11
Ellman MS, Putnam A, Green M, Pfeiffer C, Bia M. Demonstrating Medical Student Competency in Palliative Care: Development and Evaluation of a New Objective Structured Clinical Examination Station. J Palliat Med 2016; 19:706-11. [PMID: 27249323] [DOI: 10.1089/jpm.2015.0462]
Abstract
BACKGROUND The objective structured clinical examination (OSCE) is an important tool to assess clinical competencies; however, there are no reported palliative care OSCEs for medical student assessment. OBJECTIVE We aimed to develop, implement, and evaluate the characteristics of a palliative care OSCE for fourth-year medical students. METHODS We created a representative case and a checklist of 14 history items from three core palliative care competency domains. Subjects were fourth-year medical students who had completed our school's longitudinal palliative care curriculum. Measurements were students' scores compiled from the standardized patient's (SP) tally of the checklist results. We determined inter-rater reliability between the SP and a remote observer. Measurements included the difficulty and discrimination indices, internal consistency reliability, factor analysis, and relationships between palliative care scores and composite seven-station OSCE scores. RESULTS In the implementation year, 95 students scored an average of 74% (standard deviation [SD] = 13%) on the 14 history items. There was 95% agreement in ratings on items between the SP and the remote observer. The Cronbach's alpha was 0.53, demonstrating moderate internal consistency. The palliative care scores correlated with overall OSCE communication scores (R = 0.29, p = 0.01) and history scores (R = 0.61, p = 0.01). CONCLUSIONS A new OSCE to assess palliative care competencies was feasible to implement, with high inter-rater reliability, evidence supporting validity, and moderate internal consistency. We believe this OSCE would prove useful to assess students' primary palliative care competency and to evaluate curricula in palliative care.
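The internal-consistency figure above is Cronbach's alpha, which can be computed directly from its definition. A sketch on stand-in binary checklist data whose dimensions (95 students, 14 items) merely mirror the abstract; the random items here are uncorrelated, so the printed alpha will be low:

```python
# Sketch: Cronbach's alpha for a 14-item checklist, from its definition
# (k items; sum of item variances vs variance of the total score).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_students, k_items) matrix of 0/1 checklist scores."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(1)
# Stand-in data: 95 students x 14 items, ~74% item pass rate.
checklist = (rng.random((95, 14)) < 0.74).astype(float)
print(f"alpha = {cronbach_alpha(checklist):.2f}")
```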
Affiliation(s)
- Carol Pfeiffer
- University of Connecticut School of Medicine, Farmington, Connecticut
- Margaret Bia
- Yale School of Medicine, New Haven, Connecticut
12
Martineau B, Mamede S, St-Onge C, Rikers RMJP, Schmidt HG. To observe or not to observe peers when learning physical examination skills; that is the question. BMC Medical Education 2013; 13:55. [PMID: 23594455] [PMCID: PMC3637796] [DOI: 10.1186/1472-6920-13-55]
Abstract
BACKGROUND Learning physical examination skills is an essential element of medical education. Teaching strategies include practicing the skills either alone or in a group. It is unclear whether students benefit more from training these skills individually or in a group, as the latter allows them to observe their peers. The present study, conducted in a naturalistic setting, investigated the effects of peer observation on mastering the psychomotor skills necessary for physical examination. METHODS The study included 185 2nd-year medical students participating in a regular head-to-toe physical examination learning activity. Students were assigned either to a single-student condition (n = 65), in which participants practiced alone with a patient instructor, or to a multiple-student condition (n = 120), in which participants practiced in triads under patient instructor supervision. The students subsequently carried out a complete examination that was videotaped and later evaluated. Students' performance was used as a measure of learning. RESULTS Students in the multiple-student condition learned more than those who practiced alone (81% vs 76%, p < 0.004). This result possibly derived from a positive effect of observing peers; students who had the opportunity to observe a peer (the second and third students in the groups) performed better than students who did not (84% vs 76%, p < .001). There was no advantage to observing more than one peer (83.7% vs 84.1%, p > .05). CONCLUSIONS The opportunity to observe a peer during practice seemed to improve the acquisition of physical examination skills. By using small groups instead of individual training to teach physical examination skills, health sciences educational programs may provide students with opportunities to improve their performance by learning from their peers through modelling.
Affiliation(s)
- Bernard Martineau
- Université de Sherbrooke, Faculté de médecine et des sciences de la santé, 3001 12e Avenue Nord, Sherbrooke, Québec J1H 5N4, Canada
- Sílvia Mamede
- Institute of Psychology, Erasmus University Rotterdam, P.O. Box 1738, 3000 DR Rotterdam, The Netherlands
- Christina St-Onge
- Université de Sherbrooke, Faculté de médecine et des sciences de la santé, 3001 12e Avenue Nord, Sherbrooke, Québec J1H 5N4, Canada
- Remy MJP Rikers
- Institute of Psychology, Erasmus University Rotterdam, P.O. Box 1738, 3000 DR Rotterdam, The Netherlands
- Henk G Schmidt
- Institute of Psychology, Erasmus University Rotterdam, P.O. Box 1738, 3000 DR Rotterdam, The Netherlands
13
Supe A, Prabhu R, Harris I, Downing S, Tekian A. Structured training on box trainers for first year surgical residents: does it improve retention of laparoscopic skills? A randomized controlled study. Journal of Surgical Education 2012; 69:624-632. [PMID: 22910161] [DOI: 10.1016/j.jsurg.2012.05.002]
Abstract
BACKGROUND AND AIM Structured training on box trainers in laparoscopic skills in the initial years of residency has been used and found to be effective. Although there are studies that confirm immediate improvement after training, there is a lack of well-designed trials addressing the crucial issue of retention of these skills over time. The purpose of this study is to assess improvement in the laparoscopic skills of surgical trainees after structured training on box trainers, compared with traditional training (observing and assisting laparoscopic procedures in the operating room), both immediately and after 5 months. METHODS Forty surgical residents in their first 2 months of residency training were randomized to either structured training on box trainers, in addition to traditional training, or to traditional training alone. Groups were equivalent with regard to demographics, previous operative experience, and baseline skills. Structured training consisted of 4 sessions with 6 tasks on box trainers under supervision, plus self-practice. Task-based objective structured practical examinations (OSPEs) were completed before and after each task. At the end of the training, residents were assessed by a blinded faculty member with the global operative assessment of laparoscopic skills (GOALS) rating scale. Residents also completed a satisfaction questionnaire. Focus group discussions were conducted for both groups. The GOALS assessment was repeated for both groups at the end of 5 months to assess retention of skills. RESULTS The mean GOALS score was significantly higher for the structured training group (mean ± SD 20.35 ± 0.74) than for the traditional training group (16.35 ± 1.75, p < 0.01) at the end of 5 months. The mean global rating scale (GRS) score was significantly higher at the end of the course for the structured training group (pre 7.55 ± 0.99 vs. post 16.4 ± 0.68, p < 0.01). Residents in the structured training group had significantly improved skills immediately after the training and better retention of skills at the end of 5 months. CONCLUSIONS Structured training on box trainers, in addition to traditional training, compared with traditional training alone, leads to better skills and improved confidence of residents. There is significant retention of skills at the end of 5 months. These results provide support for the incorporation of structured box-trainer training for laparoscopic skills into surgical training programs.
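The 5-month group comparison can be re-derived from the reported summary statistics alone. A sketch assuming equal arms of 20 residents (the abstract reports 40 randomized but not the exact split) and a Welch t-test (the abstract does not name the test used):

```python
# Sketch: two-group comparison of GOALS scores from summary statistics.
from scipy import stats

t, p = stats.ttest_ind_from_stats(
    mean1=20.35, std1=0.74, nobs1=20,  # structured training group (assumed n)
    mean2=16.35, std2=1.75, nobs2=20,  # traditional training group (assumed n)
    equal_var=False,                   # Welch's t-test
)
print(f"t = {t:.2f}, p = {p:.2g}")
```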
Affiliation(s)
- Avinash Supe
- Department of Surgical Gastroenterology, Seth G S Medical College and K E M Hospital, Mumbai, India
14
Abstract
BACKGROUND/OBJECTIVES Pain-related misbeliefs among health care professionals (HCPs) are common and contribute to ineffective postoperative pain assessment. While standardized patients (SPs) have been effectively used to improve HCPs' assessment skills, not all centres have SP programs. The present equivalence randomized controlled pilot trial examined the efficacy of an alternative simulation method - deteriorating patient-based simulation (DPS) - versus SPs for improving HCPs' pain knowledge and assessment skills. METHODS Seventy-two HCPs were randomly assigned to a 3 h SP or DPS simulation intervention. Measures were recorded at baseline, immediately postintervention and two months postintervention. The primary outcome was HCPs' pain assessment performance as measured by the postoperative Pain Assessment Skills Tool (PAST). Secondary outcomes included HCPs' knowledge of pain-related misbeliefs, and perceived satisfaction and quality of the simulation. These outcomes were measured by the Pain Beliefs Scale (PBS), the Satisfaction with Simulated Learning Scale (SSLS) and the Simulation Design Scale (SDS), respectively. Student's t tests were used to test for overall group differences in postintervention PAST, SSLS and SDS scores. One-way analysis of covariance tested for overall group differences in PBS scores. RESULTS The DPS and SP groups did not differ on post-test PAST, SSLS or SDS scores. Knowledge of pain-related misbeliefs was also similar between groups. CONCLUSIONS These pilot data suggest that DPS is an effective simulation alternative for HCPs' education on postoperative pain assessment, with improvements in performance and knowledge comparable with SP-based simulation. An equivalence trial to examine the effectiveness of deteriorating patient-based simulation versus standardized patients is warranted.
15
Malau-Aduli BS, Mulcahy S, Warnecke E, Otahal P, Teague PA, Turner R, Vleuten CVD. Inter-Rater Reliability: Comparison of Checklist and Global Scoring for OSCEs. Creative Education 2012. [DOI: 10.4236/ce.2012.326142]
16
Review article: Assessment in anesthesiology education. Can J Anaesth 2011; 59:182-92. [DOI: 10.1007/s12630-011-9637-9]
17
Brown RS, Graham CL, Richeson N, Wu J, McDermott S. Evaluation of medical student performance on objective structured clinical exams with standardized patients with and without disabilities. Academic Medicine 2010; 85:1766-1771. [PMID: 20881817] [DOI: 10.1097/acm.0b013e3181f849dc]
Abstract
PURPOSE To investigate whether medical students' performance on a family medicine clerkship objective structured clinical exam (OSCE) differed when the standardized patient (SP) had a disability versus when the SP did not. METHOD SPs with spinal cord injury (SP-SCI), SPs with intellectual disability (SP-ID), and SPs without a disability participated separately in two OSCE scenarios administered by the University of South Carolina School of Medicine's Department of Family and Preventive Medicine from 2007 to 2009. OSCE scores were determined based on the number of critical actions completed by the student, and scores were analyzed to determine differences among scenarios. RESULTS Students scored lower in history, physical exam, lab tests, and interpersonal skills with an SP-SCI, and lower in history, physical exam, and lab tests with an SP-ID, than did students interacting with SPs without a disability. In one scenario, the odds of a student ordering a hemoglobin A1c were 4.16 times higher when the SP did not have a disability (odds ratio [OR] = 4.16, 95% confidence interval [CI] 1.78-9.17, P = .001). In the second scenario, the odds were 3.08 times higher for ordering a urinalysis (95% CI 1.34-7.08, P = .006) and 2.15 times higher for providing lifestyle counseling (95% CI 1.04-4.44, P = .038) when the SP did not have a disability. CONCLUSIONS Students performed better when the SP did not have a disability. This suggests that greater emphasis should be placed on teaching appropriate care of patients with a disability.
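ORs like these come from 2 × 2 tables of SP type against action completed (or from an equivalent logistic model). A minimal sketch of the contingency-table version with invented cell counts, showing the usual Wald 95% interval:

```python
# Sketch: odds ratio with a Wald 95% CI from a 2x2 table.
# Cell counts are invented, not the study's data.
import math

# rows: SP without disability / SP with disability
# cols: test ordered / not ordered
a, b = 60, 40   # SP without disability: ordered / not ordered
c, d = 27, 73   # SP with disability:    ordered / not ordered

or_est = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
lo = math.exp(math.log(or_est) - 1.96 * se_log_or)
hi = math.exp(math.log(or_est) + 1.96 * se_log_or)
print(f"OR = {or_est:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```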
Affiliation(s)
- Rachel S Brown
- Department of Family and Preventive Medicine, University of South Carolina School of Medicine, Columbia, South Carolina, USA
18
19
Lambert L, Pattison DJ, De Looy AE. Dietetic students' performance of activities in an objective structured clinical examination. J Hum Nutr Diet 2010; 23:224-9. [DOI: 10.1111/j.1365-277x.2010.01076.x]
20
Dastjerdie EV, Saboury A, Mahdian M, Fard MJK. Assessment of Iranian Dental Lecturers Attitude and Perspectives Toward Objective Structured Clinical Examination (OSCE). Research Journal of Biological Sciences 2010. [DOI: 10.3923/rjbsci.2010.241.245]
21
Paul F. An exploration of student nurses' thoughts and experiences of using a video-recording to assess their performance of cardiopulmonary resuscitation (CPR) during a mock objective structured clinical examination (OSCE). Nurse Educ Pract 2010; 10:285-90. [PMID: 20149746] [DOI: 10.1016/j.nepr.2010.01.004]
Abstract
Cardiopulmonary resuscitation (CPR) is an essential skill taught within undergraduate nursing programmes. At the author's institution, students must pass the CPR objective structured clinical examination (OSCE) before progressing to second year. However, some students have difficulties developing competence in CPR and evidence suggests that resuscitation skills may only be retained for several months. This has implications for practice as nurses are required to be competent in CPR. Therefore, further opportunities for students to develop these skills are necessary. An action research project was conducted with six students who were assessed by an examiner at a video-recorded mock OSCE. Students self-assessed their skills using the video and a checklist. Semi-structured interviews were conducted to compare checklist scores, and explore students' thoughts and experiences of the OSCE. The findings indicate that students may need to repeat this exercise by comparing their previous and current performances to develop both their self-assessment and CPR skills. Although there were some differences between the examiner's and student's checklist scores, all students reported the benefits of participating in this project, e.g. discussion and identification of knowledge and skills deficits, thus emphasising the benefits of formative assessments to prepare students for summative assessments and ultimately clinical practice.
Affiliation(s)
- Fiona Paul
- School of Nursing and Midwifery, University of Dundee, 11 Airlie Place, Dundee, United Kingdom
22
Delzell JE, Chumley H, Webb R, Chakrabarti S, Relan A. Information-gathering patterns associated with higher rates of diagnostic error. Advances in Health Sciences Education 2009; 14:697-711. [PMID: 19219606] [DOI: 10.1007/s10459-009-9152-8]
Abstract
Diagnostic errors are an important source of medical errors. Problematic information-gathering is a common cause of diagnostic errors among physicians and medical students. The objectives of this study were to (1) determine whether medical students' information-gathering patterns formed clusters of similar strategies, and if so (2) calculate the percentage of incorrect diagnoses in each cluster. A total of 141 2nd-year medical students completed a computer case simulation. Each student's information-gathering pattern comprised the sequence of history, physical examination, and ancillary testing items chosen from a predefined list. We analyzed the patterns using an artificial neural network and compared percentages of incorrect diagnoses among clusters of information-gathering patterns. We input the patterns into a 35 × 35 self-organizing map, and the network trained for 10,000 epochs. The number of students at each neuron formed a surface that was statistically smoothed into clusters. Each student was assigned to one cluster: the cluster that contributed the largest value to the smoothed function at the student's location in the grid. Seven clusters were identified. The percentage of incorrect diagnoses differed significantly among clusters (range 0-42%, χ² = 13.62, P = .034). The distance of each cluster from the worst-performing cluster was used to rank clusters, and this rank was compared with the rank determined by percentage incorrect. We found a high positive correlation (Spearman correlation = .893, P = .007). Clusters closest to the worst-performing cluster had the highest percentages of incorrect diagnoses. Patterns of information-gathering were distinct and had different rates of diagnostic error.
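A sketch of the self-organizing-map step described above, using the third-party minisom package (an assumption; the authors' software is not named) on invented binary information-gathering vectors:

```python
# Sketch: mapping students' information-gathering patterns onto a
# 35x35 self-organizing map, as the abstract describes.
import numpy as np
from minisom import MiniSom  # pip install minisom

rng = np.random.default_rng(2)
# One row per student: a binary vector over a predefined list of
# history/exam/test items (50 items here, invented for illustration).
patterns = (rng.random((141, 50)) < 0.3).astype(float)

som = MiniSom(35, 35, input_len=50, sigma=3.0, learning_rate=0.5,
              random_seed=2)
som.random_weights_init(patterns)
som.train_random(patterns, num_iteration=10000)  # 10,000 iterations

# Map each student to their best-matching unit on the 35x35 grid;
# the study's smoothing and cluster assignment would follow from the
# density of students over these grid locations.
locations = np.array([som.winner(p) for p in patterns])
print(locations[:5])
```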
Affiliation(s)
- John E Delzell
- University of Kansas School of Medicine, 3901 Rainbow Blvd, Mailstop 4010, Kansas City, KS 66160, USA
23
Walsh M, Bailey PH, Koren I. Objective structured clinical evaluation of clinical competence: an integrative review. J Adv Nurs 2009; 65:1584-95. [DOI: 10.1111/j.1365-2648.2009.05054.x]
24
Serpell JW. Evolution of the OSCA-OSCE-Clinical Examination of the Royal Australasian College of Surgeons. ANZ J Surg 2009; 79:161-8. [DOI: 10.1111/j.1445-2197.2008.04834.x]
25
Spillane L, Hayden E, Fernandez R, Adler M, Beeson M, Goyal D, Smith-Coggins R, Boulet J. The assessment of individual cognitive expertise and clinical competency: a research agenda. Acad Emerg Med 2008; 15:1071-8. [PMID: 19032553] [DOI: 10.1111/j.1553-2712.2008.00271.x]
Abstract
There is a large push to utilize evidence-based practices in medical education. At the same time, credentialing bodies are evaluating the use of simulation technologies to assess the competency and safety of their practitioners. At the 2008 Academic Emergency Medicine Consensus Conference on "The Science of Simulation in Healthcare," our breakout session critically evaluated several issues important to the use of simulation in emergency physician (EP) assessment. In this article, we discuss the five topics felt to be most critical to simulation-based assessment (SBA). We then offer more specific research questions that would help to define and implement an SBA program in emergency medicine (EM).
Affiliation(s)
- Linda Spillane
- Department of Emergency Medicine, University of Rochester School of Medicine, Rochester, NY, USA
26
Peeraer G, Muijtjens AMM, De Winter BY, Remmen R, Hendrickx K, Bossaert L, Scherpbier AJJA. Unintentional failure to assess for experience in senior undergraduate OSCE scoring. Medical Education 2008; 42:669-675. [PMID: 18588647] [DOI: 10.1111/j.1365-2923.2008.03043.x]
Abstract
CONTEXT One goal of undergraduate assessment is to test students' (future) performance. In the area of skills testing, the objective structured clinical examination (OSCE) has been of great value as a tool with which to test a number of skills in a limited time, with bias reduction and improved reliability. But can OSCEs measure undergraduate internship expertise in basic clinical skills? METHODS Undergraduate students (n = 32) were given a questionnaire listing 182 basic clinical skills and asked to record the number of times they had performed each skill during their internships (a 12-month period in Year 6). We assessed the students at the end of Year 5 (before the start of their internships) and again at the start of Year 7 (undergraduate training takes 7 years in Belgium, with internships during Year 6), using a 14-station OSCE assessing basic clinical skills. Global ratings were used to score performance. The relationship between internship experience and the Year 7 OSCE score was analysed using a linear regression model, controlling for variation in Year 5 OSCE scores. A multi-level analysis was performed, considering students as level-1 units and stations as level-2 units. RESULTS Year 7 OSCE scores (post-internship) were not affected by the number of times students had practised basic medical skills during their internships. DISCUSSION Scores on OSCEs do not seem to reflect clinical expertise acquired during internships. Other, more integrated assessment methods may prove more valid for testing final undergraduate skills levels.
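A sketch of one way such a multi-level regression might look, with simulated data and assumed variable names (the abstract does not give the exact model specification, and a simple random intercept per student stands in for the study's two-level structure):

```python
# Sketch: station-level Year 7 OSCE scores modeled with a random
# intercept per student, controlling for the Year 5 baseline and the
# amount of internship practice. All data below are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
students, stations = 32, 14
df = pd.DataFrame({
    "student": np.repeat(np.arange(students), stations),
    "station": np.tile(np.arange(stations), students),
})
df["score_y5"] = rng.normal(60, 8, len(df))                    # baseline OSCE
df["practice"] = np.repeat(rng.poisson(20, students), stations)
df["score_y7"] = 0.4 * df["score_y5"] + rng.normal(40, 6, len(df))

# Station enters as a fixed effect here for simplicity; a crossed
# random effect would be closer to the abstract's description.
model = smf.mixedlm("score_y7 ~ score_y5 + practice + C(station)",
                    data=df, groups=df["student"])
print(model.fit().summary())
```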
Affiliation(s)
- Griet Peeraer
- Dean's Office, Faculty of Medicine, University of Antwerp, Antwerp, Belgium
27
Maran NJ, Glavin RJ. Low- to high-fidelity simulation - a continuum of medical education? Medical Education 2003; 37 Suppl 1:22-8. [PMID: 14641635] [DOI: 10.1046/j.1365-2923.37.s1.9.x]
Abstract
CONTEXT Changes in medical training and culture have reduced the acceptability of traditional apprenticeship-style training in medicine and influenced the growth of clinical skills training. Simulation is an educational technique that allows interactive, and at times immersive, activity by recreating all or part of a clinical experience without exposing patients to the associated risks. The number and range of commercially available technologies used in simulation for the education of health care professionals is growing exponentially. These range from simple part-task training models to highly sophisticated computer-driven models. AIM This paper will review the range of currently available simulators and the educational processes that underpin simulation training. The use of different levels of simulation in a continuum of training will be discussed. Although simulation is relatively new to medicine, simulators have been used extensively for training and assessment in many other domains, most notably the aviation industry. Some parallels and differences will be highlighted.
Affiliation(s)
- N J Maran
- Scottish Clinical Simulation Centre, Stirling Royal Infirmary, Livilands Gate, Stirling, UK
28
Norcini J, Boulet J. Methodological issues in the use of standardized patients for assessment. Teaching and Learning in Medicine 2003; 15:293-297. [PMID: 14612263] [DOI: 10.1207/s15328015tlm1504_12]
Affiliation(s)
- John Norcini
- Foundation for the Advancement of International Medical Education and Research (FAIMER), 3624 Market Street, 4th Floor, Philadelphia, PA 19104, USA