1
Andreou V, Peters S, Eggermont J, Schoenmakers B. A needs assessment for enhancing workplace-based assessment: a grounded theory study. BMC Med Educ 2024; 24:659. [PMID: 38872142] [DOI: 10.1186/s12909-024-05636-3]
Abstract
OBJECTIVES Workplace-based assessment (WBA) has been vigorously criticized by medical educators for not fulfilling its educational purpose. A comprehensive exploration of stakeholders' needs regarding WBA is essential to optimize its implementation in clinical practice. METHOD Three homogeneous focus groups were conducted with three groups of stakeholders: General Practitioner (GP) trainees, GP trainers, and GP tutors. Due to COVID-19 measures, we opted for an online asynchronous format to enable participation. A constructivist grounded theory approach was used to conduct this study and to identify stakeholders' needs for using WBA. RESULTS Three core needs for WBA were identified in the analysis. Within GP training, stakeholders found WBA essential primarily for establishing learning goals, secondarily for assessment purposes, and lastly for providing or receiving feedback. CONCLUSION All stakeholders perceive WBA as valuable when it fosters learning. The identified needs were notably shaped by four facilitating factors: agency, trust, availability, and mutual understanding. Embracing these insights can illuminate the workplace learning culture for clinical educators and guide a successful implementation of WBA.
Affiliation(s)
- Vasiliki Andreou
- Academic Centre for General Practice, Department of Public Health and Primary Care, KU Leuven, Leuven, Belgium
- Sanne Peters
- Academic Centre for General Practice, Department of Public Health and Primary Care, KU Leuven, Leuven, Belgium
- School of Health Sciences, Faculty of Medicine, Dentistry and Health Sciences, The University of Melbourne, Melbourne, Australia
- Jan Eggermont
- Department of Cellular and Molecular Medicine, KU Leuven, Leuven, Belgium
- Birgitte Schoenmakers
- Academic Centre for General Practice, Department of Public Health and Primary Care, KU Leuven, Leuven, Belgium
2
Pereira Júnior GA, Hamamoto-Filho PT, Rasslan R, Benevenuto DS, Silva EN, Oliveira AF, Portari Filho PE. Results of the Last 5 Years (2018-2022) of the Specialist Title Exam of The Brazilian College of Surgeons. Rev Col Bras Cir 2024; 51:e20243749. [PMID: 38747884] [DOI: 10.1590/0100-6991e-20243749-en]
Abstract
The article discusses the evolution of the Brazilian College of Surgeons (CBC) specialist title exam, highlighting the importance of evaluating not only theoretical knowledge but also the practical skills and ethical behavior of candidates. The exam was instituted in 1971, initially with only a written phase; the oral practical test was added with the 13th edition in 1988. In 2022, the assessment process was improved by including simulated stations in the practical test, with the aim of assessing practical and communication skills as well as clinical reasoning, in order to guarantee excellence in the assessment of surgical training. The aim of this study is to describe candidates' performance in the last five years of the Specialist Title Exam and to compare results between candidates' different surgical training backgrounds. The results obtained by candidates from the various categories enrolled in the 2018 to 2022 editions were analyzed. There was a clear and statistically significant difference between doctors who had completed three years of residency recognized by the Ministry of Education and the other categories of candidates for the Specialist Title.
Affiliation(s)
- Pedro Tadao Hamamoto-Filho
- Universidade do Estado de São Paulo (UNESP), Faculdade de Medicina de Botucatu, Botucatu, SP, Brasil
- Roberto Rasslan
- Hospital das Clínicas da FMUSP, Divisão de Clínica Cirúrgica III, São Paulo, SP, Brasil
- Dyego Sá Benevenuto
- Hospital Copa Star, Cirurgia do Aparelho Digestivo, Rio de Janeiro, RJ, Brasil
- Eduardo Nacur Silva
- Santa Casa de Belo Horizonte, III Clínica Cirúrgica, Belo Horizonte, MG, Brasil
- Pedro Eder Portari Filho
- Universidade Federal do Estado do Rio de Janeiro (UNIRIO), Escola de Medicina e Cirurgia, Rio de Janeiro, RJ, Brasil
- President of the Colégio Brasileiro de Cirurgiões, Rio de Janeiro, RJ, Brasil
3
Yeates P, Maluf A, Kinston R, Cope N, Cullen K, Cole A, O'Neill V, Chung CW, Goodfellow R, Vallender R, Ensaff S, Goddard-Fuller R, McKinley R, Wong G. A realist evaluation of how, why and when objective structured clinical exams (OSCEs) are experienced as an authentic assessment of clinical preparedness. Med Teach 2024:1-9. [PMID: 38635469] [DOI: 10.1080/0142159x.2024.2339413]
Abstract
INTRODUCTION Whilst rarely researched, the authenticity with which Objective Structured Clinical Exams (OSCEs) simulate practice is arguably critical to making valid judgements about candidates' preparedness to progress in their training. We studied how and why an OSCE gave rise to different experiences of authenticity for different participants under different circumstances. METHODS We used realist evaluation, collecting data through interviews and focus groups with participants across four UK medical schools who took part in an OSCE designed to enhance authenticity. RESULTS Several features of OSCE stations (realistic, complex, complete cases; sufficient time; autonomy; props; guidelines; limited examiner interaction) combined to enable students to project into their future roles, judge and integrate information, consider their actions and act naturally. When this occurred, their performances felt like an authentic representation of their clinical practice. This did not always work: focusing on unavoidable differences from practice, incongruous features, anxiety and preoccupation with examiners' expectations sometimes disrupted immersion, producing inauthenticity. CONCLUSIONS The perception of authenticity in OSCEs appears to originate from an interaction of station design with individual preferences and contextual expectations. Whilst tentatively suggesting ways to promote authenticity, more understanding is needed of candidates' interaction with simulation and scenario immersion in summative assessment.
Affiliation(s)
- Peter Yeates
- School of Medicine, Keele University, Keele, England
- Adriano Maluf
- Faculty of Health and Life Sciences, De Montfort University, Leicester, England
- Ruth Kinston
- School of Medicine, Keele University, Keele, England
- Natalie Cope
- School of Medicine, Keele University, Keele, England
- Kathy Cullen
- School of Medicine, Dentistry and Biomedical Sciences, Queen's University Belfast, Belfast, Northern Ireland
- Aidan Cole
- School of Medicine, Dentistry and Biomedical Sciences, Queen's University Belfast, Belfast, Northern Ireland
- Vikki O'Neill
- School of Medicine, Dentistry and Biomedical Sciences, Queen's University Belfast, Belfast, Northern Ireland
- Ching-Wa Chung
- School of Medicine, Medical Sciences and Nutrition, University of Aberdeen, Aberdeen, Scotland
- Sue Ensaff
- School of Medicine, Cardiff University, Cardiff, Wales
- Rikki Goddard-Fuller
- Christie Education, Christie Hospitals NHS Foundation Trust, Manchester, England
- Geoff Wong
- Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford, England
4
van Keulen SG, de Raad T, Raymakers-Janssen P, Ten Cate O, Hennus MP. Exploring Interprofessional Development of Entrustable Professional Activities For Pediatric Intensive Care Fellows: A Proof-of-Concept Study. Teach Learn Med 2024; 36:154-162. [PMID: 37071751] [DOI: 10.1080/10401334.2023.2200760]
Abstract
Phenomenon: Entrustable professional activities (EPAs) delineate the major professional activities that an individual in a given specialty must be "entrusted" to perform, ultimately without supervision, to provide quality patient care. Until now, most EPA frameworks have been developed by professionals within the same specialty. As safe, effective, and sustainable health care ultimately depends on interprofessional collaboration, we hypothesized that members of interprofessional teams might have clear, and possibly additional, insight into which activities are essential to the professional work of a medical specialist. Approach: We recently employed a national modified Delphi study to develop and validate a set of EPAs for Dutch pediatric intensive care fellows. In this proof-of-concept study, we explored what the non-physician team members (physician assistants, nurse practitioners, and nurses) of pediatric intensive care unit (PICU) physicians consider essential professional activities for PICU physicians and how they regarded the newly developed set of nine EPAs. We compared their judgments with the PICU physicians' opinions. Findings: This study shows that non-physician team members share a mental model with physicians about which EPAs are indispensable for pediatric intensive care physicians. Despite this agreement, however, descriptions of EPAs are not always clear to the non-physician team members who have to work with them on a daily basis. Insights: Ambiguity about what an EPA entails when qualifying a trainee can have implications for patient safety and for trainees themselves. Input from non-physician team members may add to the clarity of EPA descriptions. This finding supports the involvement of non-physician team members in the development of EPAs for (sub)specialty training programs.
Affiliation(s)
- Sabrina G van Keulen
- Wilhelmina Children's Hospital, University Medical Center Utrecht, Utrecht, the Netherlands
- Timo de Raad
- Pediatric Intensive Care, Wilhelmina Children's Hospital, University Medical Center Utrecht, Utrecht, the Netherlands
- Paulien Raymakers-Janssen
- Pediatric Intensive Care, Wilhelmina Children's Hospital, University Medical Center Utrecht, Utrecht, the Netherlands
- Olle Ten Cate
- Utrecht Center for Research and Development of Health Professions Education, University Medical Center Utrecht, Utrecht, the Netherlands
- Marije P Hennus
- Pediatric Intensive Care, Wilhelmina Children's Hospital, University Medical Center Utrecht, Utrecht, the Netherlands
5
Paternotte E, Dijksterhuis M, Goverde A, Ezzat H, Scheele F. Comparison of OBGYN postgraduate curricula and assessment methods between Canada and the Netherlands: an auto-ethnographic study. Front Med (Lausanne) 2024; 11:1363222. [PMID: 38601119] [PMCID: PMC11004340] [DOI: 10.3389/fmed.2024.1363222]
Abstract
Introduction Although the Dutch and Canadian postgraduate Obstetrics and Gynecology (OBGYN) medical education systems are similar in their foundations [programmatic assessment, competency-based design, CanMEDS roles and entrustable professional activities (EPAs)] and comparable in healthcare outcomes, their program structures and assessment methods differ considerably. Materials and methods We compared both countries' postgraduate educational blueprints and used an auto-ethnographic method to gain insight into the effects of training program structure and assessment methods on how trainees work. The research questions for this study were: what are the differences in program structure and assessment programs in OBGYN postgraduate medical education in the Netherlands and Canada? And how do these differences affect trainees' advancement to higher competence? Results We found four main differences. The first two are the duration of training and the number of EPAs defined in the curricula. The most significant difference, however, is the way EPAs are entrusted. In Canada, supervision is given regardless of EPA competence, whereas in the Netherlands, being competent means being entrusted, resulting in meaningful and practical independence in the workplace. Another difference is that Canadian OBGYN trainees have to pass a summative written and oral exit examination. This difference in the assessment program is largely explained by cultural and legal aspects of postgraduate training, leading to differences in licensing practice. Discussion Although programmatic assessment is the foundation for assessment in medical education in both Canada and the Netherlands, the significance of entrustment differs. Trainees struggle to differentiate between formative and summative assessments, experiencing both as a judgement of their competence and progress. Based on this auto-ethnographic study, the potential for further harmonization of OBGYN postgraduate medical education in Canada and the Netherlands remains limited.
Affiliation(s)
- Emma Paternotte
- Department of Obstetrics and Gynaecology, Gelre Hospitals, Apeldoorn, Netherlands
- Marja Dijksterhuis
- Department of Obstetrics and Gynaecology, Amphia Ziekenhuis, Breda, Netherlands
- Angelique Goverde
- Department of Obstetrics and Gynaecology, University Medical Center Utrecht, Utrecht, Netherlands
- Hanna Ezzat
- Division of General Gynaecology and Obstetrics, University of British Columbia, Vancouver, BC, Canada
- Fedde Scheele
- Department of Obstetrics and Gynaecology, Onze Lieve Vrouwe Gasthuis (OLVG), Amsterdam, Netherlands
6
Bogaty C, Frambach J. The CanMEDS Competency Framework in laboratory medicine: a phenomenographic study exploring how professional roles are applied outside the clinical environment. Can Med Educ J 2024; 15:26-36. [PMID: 38528898] [PMCID: PMC10961121] [DOI: 10.36834/cmej.77140]
Abstract
Background The CanMEDS Competency Framework is an internationally recognized model used to outline the proficiencies of a physician. It has predominantly been studied in clinical environments, yet not all medical specialties involve direct patient contact. In laboratory medicine, the role of the physician is to promote and enhance patient diagnostics by managing and overseeing the functions of a diagnostic laboratory. Methods This phenomenographic study explores the lived experiences of biochemistry, microbiology, and pathology residency program directors to better understand how they use the CanMEDS competencies. Eight laboratory medicine program directors from across Canada were interviewed individually using a semi-structured format, and the data were analysed using inductive thematic analysis. Results The findings show that the current framework is disconnected from the unique context of laboratory medicine, with some competencies appearing unrelatable under the current standardized definitions and expectations. Nevertheless, participants considered the framework an appropriate blueprint of the competencies necessary for their professional environment, but to make it accessible, more autonomy is required to adapt the framework to their needs. Conclusion Newer renditions of the CanMEDS Competency Framework should better consider the realities of non-clinical disciplines.
Affiliation(s)
- Chloe Bogaty
- Service de microbiologie et d'infectiologie, Centre hospitalier affilié universitaire Hôtel-Dieu de Lévis, Quebec, Canada
- School of Health Professions Education (SHE), Maastricht University, Maastricht, The Netherlands
- Janneke Frambach
- School of Health Professions Education (SHE), Maastricht University, Maastricht, The Netherlands
7
Liao KC, Ajjawi R, Peng CH, Jenq CC, Monrouxe LV. Striving to thrive or striving to survive: Professional identity constructions of medical trainees in clinical assessment activities. Med Educ 2023; 57:1102-1116. [PMID: 37394612] [DOI: 10.1111/medu.15152]
Abstract
CONTEXT Assessment plays a key role in competence development and the shaping of future professionals. Despite its presumed positive impacts on learning, the unintended consequences of assessment have drawn increasing attention in the literature. Because professional identities can be dynamically constructed through social interactions, as in assessment contexts, our study sought to understand how assessment influences the construction of professional identities in medical trainees. METHODS Within social constructionism, we adopted a discursive, narrative approach to investigate the different positions trainees narrate for themselves and their assessors in clinical assessment contexts and the impact of these positions on their constructed identities. We purposively recruited 28 medical trainees (23 students and five postgraduate trainees), who took part in entry, follow-up and exit interviews and submitted longitudinal audio/written diaries across nine months of their training programs. Thematic framework and positioning analyses (focusing on how characters are linguistically positioned in narratives) were applied using an interdisciplinary team approach. RESULTS We identified two key narrative plotlines, striving to thrive and striving to survive, across trainees' assessment narratives from 60 interviews and 133 diaries. Elements of growth, development and improvement were identified when trainees narrated striving to thrive in assessment. Narratives of neglect, oppression and perfunctory engagement were elaborated when trainees narrated striving to survive assessment. Nine main character tropes adopted by trainees, together with six key assessor character tropes, were identified. Bringing these together, we present our analysis of two exemplary narratives with elaboration of their wider social implications. CONCLUSION Adopting a discursive approach enabled us to better understand not only what identities are constructed by trainees in assessment contexts but also how they are constructed in relation to broader medical education discourses. The findings can help educators reflect on, rectify and reconstruct assessment practices to better facilitate trainee identity construction.
Affiliation(s)
- Kuo-Chen Liao
- Division of Geriatrics and General Internal Medicine, Department of Internal Medicine, Chang Gung Memorial Hospital (CGMH), Linkou, Taiwan (ROC)
- Chang Gung Medical Education Research Centre, CGMH, Linkou, Taiwan (ROC)
- School of Medicine, College of Medicine, Chang Gung University, Taoyuan City, Taiwan (ROC)
- Rola Ajjawi
- Centre for Research in Assessment and Digital Learning, Deakin University, Melbourne, Victoria, Australia
- Chang-Hsuan Peng
- Chang Gung Medical Education Research Centre, CGMH, Linkou, Taiwan (ROC)
- School of Medicine, College of Medicine, Chang Gung University, Taoyuan City, Taiwan (ROC)
- Chang-Chyi Jenq
- Chang Gung Medical Education Research Centre, CGMH, Linkou, Taiwan (ROC)
- Department of Nephrology, CGMH, Linkou, Taiwan (ROC)
- Medical Humanities Center, CGMH, Linkou, Taiwan (ROC)
- Department of Medical Humanities and Social Sciences, School of Medicine, College of Medicine, Chang Gung University, Taoyuan City, Taiwan (ROC)
- Lynn V Monrouxe
- Faculty of Medicine and Health, The University of Sydney, Sydney, New South Wales, Australia
8
Edwards C, Perry R, Chester D, Childs J. Entrustable professional activities of graduate accredited General Medical Sonographers in Australia - Industry perceptions. J Med Radiat Sci 2023; 70:229-238. [PMID: 37029950] [PMCID: PMC10500106] [DOI: 10.1002/jmrs.676]
Abstract
INTRODUCTION Linking individual competencies to entrustable professional tasks provides a holistic view of sonography graduates' work readiness. The Australian Sonographers Accreditation Registry (ASAR) publishes a set of entrustable professional activities (EPAs) as part of its Standards for Accreditation of Sonography Courses. EPAs are distinct ultrasound examinations grouped within six critical practice units. This study reports on industry perspectives of the current EPAs and their classification for graduates completing general sonography courses in Australia. The article also examines the value of EPAs and links their function to the assessment of graduate competency. METHODS An online survey tool elicited stakeholder feedback on graduate EPAs across the six critical practice units and on the potential inclusion of a new Paediatric unit. From an original sample of 655, 309 respondents answered questions about general sonography courses. RESULTS A majority (55.3%) recommended no changes to the existing EPA list, and 44.7% recommended amending it. Among respondents who recommended changes (138/309), all current EPAs received >80% agreement to be retained; in addition, nine new examinations received >70% agreement for inclusion at the graduate level. Whilst 42.7% (132/309) supported the current ASAR model requiring competency in five of the six critical practice units, 45.6% (141/309) recommended increasing it to all six; there was limited support (11.7%, 36/309) for reducing this number. On the potential addition of a new Paediatric-specific critical practice unit, 61.8% (181/293) recommended its inclusion. CONCLUSIONS The findings demonstrate that the current list of EPAs aligns with industry expectations. In contrast, there are divergent views on the modelling and grouping of critical practice units. The article's critical analysis of the results and their implications provides stakeholders with a practical approach to clinical teaching and EPA assessment, and helps to inform any review of accreditation standards.
Affiliation(s)
- Christopher Edwards
- School of Clinical Sciences, Faculty of Health, Queensland University of Technology, Brisbane, Queensland, Australia
- Rebecca Perry
- Allied Health and Human Performance, University of South Australia, Adelaide, South Australia, Australia
- Deanne Chester
- School of Health, Medical and Applied Sciences, Central Queensland University, Brisbane, Queensland, Australia
- Jessie Childs
- Allied Health and Human Performance, University of South Australia, Adelaide, South Australia, Australia
9
Thompson J, Bujalka H, McKeever S, Lipscomb A, Moore S, Hill N, Kinney S, Cham KM, Martin J, Bowers P, Gerdtz M. Educational strategies in the health professions to mitigate cognitive and implicit bias impact on decision making: a scoping review. BMC Med Educ 2023; 23:455. [PMID: 37340395] [DOI: 10.1186/s12909-023-04371-5]
Abstract
BACKGROUND Cognitive and implicit biases negatively impact clinicians' decision-making capacity and can have devastating consequences for safe, effective, and equitable healthcare provision. Internationally, healthcare clinicians play a critical role in identifying and overcoming these biases. To be workforce ready, it is important that educators proactively prepare all pre-registration healthcare students for real-world practice. However, it is unknown how, and to what extent, health professional educators incorporate bias training into curricula. To address this gap, this scoping review explores which approaches to teaching cognitive and implicit bias to entry-to-practice students have been studied, and which evidence gaps remain. METHODS This scoping review was guided by the Joanna Briggs Institute (JBI) methodology. Databases searched in May 2022 included CINAHL, Cochrane, JBI, Medline, ERIC, Embase, and PsycINFO. The Population, Concept and Context framework guided the keywords and index terms used for the search criteria and the data extraction by two independent reviewers. Quantitative and qualitative studies published in English that explored pedagogical approaches and/or educational techniques, strategies, or teaching tools to reduce the influence of bias on health clinicians' decision making were eligible for inclusion. Results are presented numerically and thematically in a table accompanied by a narrative summary. RESULTS Of the 732 articles identified, 13 were included. Most publications originated from the United States (n=9). Educational practice in medicine accounted for most studies (n=8), followed by nursing and midwifery (n=2). A guiding philosophy or conceptual framework for content development was not indicated in most papers. Educational content was mainly delivered face-to-face (lecture/tutorial) (n=10). Reflection was the most common strategy used for assessment of learning (n=6). Cognitive biases were mainly taught in a single session (n=5); implicit biases were taught via a mix of single (n=4) and multiple sessions (n=4). CONCLUSIONS A range of pedagogical strategies were employed; most commonly, these were face-to-face, class-based activities such as lectures and tutorials. Assessments of student learning were primarily based on tests and personal reflection. There was limited use of real-world settings to educate students about biases and their mitigation or to build the corresponding skills. There may be a valuable opportunity in exploring approaches to building these skills in the real-world settings that will be the workplaces of our future healthcare workers.
Affiliation(s)
- John Thompson
- Department of Nursing, Faculty of Medicine, Dentistry and Health Sciences, The University of Melbourne, Level 6, Alan Gilbert Building, 161 Barry Street, Victoria, 3010, Australia
- Helena Bujalka
- Department of Nursing, Faculty of Medicine, Dentistry and Health Sciences, The University of Melbourne, Level 6, Alan Gilbert Building, 161 Barry Street, Victoria, 3010, Australia
- Stephen McKeever
- Department of Nursing, Faculty of Medicine, Dentistry and Health Sciences, The University of Melbourne, Level 6, Alan Gilbert Building, 161 Barry Street, Victoria, 3010, Australia
- Royal Children's Hospital, Parkville, Australia
- Adrienne Lipscomb
- Department of Nursing, Faculty of Medicine, Dentistry and Health Sciences, The University of Melbourne, Level 6, Alan Gilbert Building, 161 Barry Street, Victoria, 3010, Australia
- Sonya Moore
- Department of Physiotherapy, Melbourne School of Health Sciences, University of Melbourne, Melbourne, Australia
- Nicole Hill
- Department of Social Work, Melbourne School of Health Sciences, University of Melbourne, Melbourne, Australia
- Sharon Kinney
- Department of Nursing, Faculty of Medicine, Dentistry and Health Sciences, The University of Melbourne, Level 6, Alan Gilbert Building, 161 Barry Street, Victoria, 3010, Australia
- Royal Children's Hospital, Parkville, Australia
- Kwang Meng Cham
- Department of Optometry and Vision Sciences, Melbourne School of Health Sciences, University of Melbourne, Melbourne, Australia
- Joanne Martin
- Department of Nursing, Faculty of Medicine, Dentistry and Health Sciences, The University of Melbourne, Level 6, Alan Gilbert Building, 161 Barry Street, Victoria, 3010, Australia
- Patrick Bowers
- Department of Audiology and Speech Pathology, School of Health Sciences, University of Melbourne, Melbourne, Australia
- Marie Gerdtz
- Department of Nursing, Faculty of Medicine, Dentistry and Health Sciences, The University of Melbourne, Level 6, Alan Gilbert Building, 161 Barry Street, Victoria, 3010, Australia
10
Stephan A, Cheung G, van der Vleuten C. Entrustable Professional Activities and Learning: The Postgraduate Trainee Perspective. Acad Psychiatry 2023; 47:134-142. [PMID: 36224504] [PMCID: PMC10060374] [DOI: 10.1007/s40596-022-01712-2]
Abstract
OBJECTIVE Entrustable professional activities (EPAs) are used as clinical activities in postgraduate psychiatry training in Australasia. This study aimed to explore psychiatry trainees' perceptions of the impact of EPAs on their motivation and learning. METHODS A constructivist grounded theory approach was used to conceptualize the impact of EPAs on trainees' motivation and learning. A purposive sample of trainees was recruited from across New Zealand. Semi-structured individual interviews were used for data collection and continued until theoretical saturation was reached. RESULTS The impact of EPAs on learning was mediated by the trainee's appraisals of subjective control, value, and the costs of engaging with EPAs. When appraisals were positive, EPAs encouraged a focus on particular learning needs and structured learning with the supervisor. However, when appraisals were negative, EPAs encouraged a superficial approach to learning. Trainee appraisals and their subsequent impact on motivation and learning were most affected by EPA granularity, alignment of EPAs with clinical practice, and the supervisor's conscientiousness in their approach to EPAs. CONCLUSIONS To stimulate learning, EPAs must be valued by both trainees and supervisors as constituting a coherent work-based curriculum that encompasses the key fellowship competencies. If EPAs are to be effective as clinical tasks for learning, ongoing faculty development must be the leading priority.
Affiliation(s)
- Alice Stephan
- Mental Health and Addictions Service, Waikato District Health Board, Hamilton, New Zealand
- Gary Cheung
- School of Medicine, Faculty of Medical and Health Sciences, The University of Auckland, Auckland, New Zealand
- Cees van der Vleuten
- School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, Netherlands
Collapse
|
11
|
Boulais I, Ouellet K, Lachiver EV, Marceau M, Bergeron L, Bernier F, St-Onge C. Considering the Structured Oral Examinations Beyond Its Psychometrics Properties. MEDICAL SCIENCE EDUCATOR 2023; 33:345-351. [PMID: 37261009 PMCID: PMC10226970 DOI: 10.1007/s40670-023-01729-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 01/09/2023] [Indexed: 06/02/2023]
Abstract
Decisions to set aside Structured Oral Examinations (SOEs) are, almost invariably, based on their poor psychometric properties. However, considering the perspectives of stakeholders might help us to understand their potential contribution. To explore this, we conducted focus groups and individual interviews with stakeholders: students, assessors, and administrators. Students and assessors perceived the SOE as a window on students' clinical reasoning and as an authentic assessment, but also as a subjective and stressful method. Administrators emphasized organizational consequences such as logistical challenges. Such consequences must be considered when making decisions about the SOE, and our results support important positive consequences.
Collapse
Affiliation(s)
- Isabelle Boulais
- Department of Medicine, Faculty of Medicine and Health Sciences, Université de Sherbrooke, 3001 12th Avenue North, Sherbrooke, Québec J1H 5N4 Canada
| | - Kathleen Ouellet
- Health Sciences Education Center, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec Canada
| | - Elise Vachon Lachiver
- Faculty of Medicine and Health Sciences, Research in Health Sciences Program, Université de Sherbrooke, Sherbrooke, Québec Canada
| | - Mélanie Marceau
- School of Nursing, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec Canada
| | - Linda Bergeron
- Health Sciences Education Center, Faculty of Medicine and Health Sciences, University of Sherbrooke, Sherbrooke, Québec Canada
| | - Frédéric Bernier
- Department of Medicine, Faculty of Medicine and Health Sciences, Université de Sherbrooke, 3001 12th Avenue North, Sherbrooke, Québec J1H 5N4 Canada
- Centre de Recherche Clinique du CHUS, Sherbrooke, Québec Canada
| | - Christina St-Onge
- Department of Medicine, Faculty of Medicine and Health Sciences, Université de Sherbrooke, 3001 12th Avenue North, Sherbrooke, Québec J1H 5N4 Canada
- Paul Grand’Maison de La Société Des Médecins de L, Université de Sherbrooke Research Chair in Medical Education, Sherbrooke, Québec Canada
| |
Collapse
|
12
|
Renes J, van der Vleuten CPM, Collares CF. Utility of a multimodal computer-based assessment format for assessment with a higher degree of reliability and validity. MEDICAL TEACHER 2023; 45:433-441. [PMID: 36306368 DOI: 10.1080/0142159x.2022.2137011] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
Multiple-choice questions (MCQs) suffer from cueing, variable item quality, and an emphasis on factual knowledge testing. This study presents a novel multimodal test containing alternative item types in a computer-based assessment (CBA) format, designated the Proxy-CBA. The Proxy-CBA was compared to a standard MCQ-CBA regarding validity, reliability, standard error of measurement (SEM), and cognitive load, using a quasi-experimental crossover design. Biomedical students were randomized into two groups to sit a 65-item formative exam starting with the MCQ-CBA followed by the Proxy-CBA (group 1, n = 38), or the reverse (group 2, n = 35). Subsequently, a questionnaire on perceived cognitive load was administered and answered by 71 participants. Both CBA formats were analyzed according to parameters of Classical Test Theory and the Rasch model. Compared to the MCQ-CBA, the Proxy-CBA had lower raw scores (p < 0.001, η2 = 0.276), higher reliability estimates (p < 0.001, η2 = 0.498), lower SEM estimates (p < 0.001, η2 = 0.807), and lower theta ability scores (p < 0.001, η2 = 0.288). The questionnaire revealed no significant differences between the two CBA tests regarding perceived cognitive load. Compared to the MCQ-CBA, the Proxy-CBA showed increased reliability and a higher degree of validity with similar cognitive load, suggesting its utility as an alternative assessment format.
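The reliability and SEM estimates compared in this abstract are linked by a standard psychometric identity, SEM = SD × sqrt(1 − reliability), so higher reliability at the same score spread implies a smaller SEM. A minimal sketch with illustrative (hypothetical) values, not data from the study:

```python
import math

def sem_from_reliability(score_sd: float, reliability: float) -> float:
    """Standard error of measurement: SEM = SD * sqrt(1 - reliability).

    At a fixed score spread, a more reliable test yields a smaller SEM,
    which is the pattern the Proxy-CBA comparison describes.
    """
    if not 0.0 <= reliability <= 1.0:
        raise ValueError("reliability must lie in [0, 1]")
    return score_sd * math.sqrt(1.0 - reliability)

# Hypothetical values for illustration: same score SD, two reliability estimates.
sem_mcq = sem_from_reliability(score_sd=8.0, reliability=0.80)
sem_proxy = sem_from_reliability(score_sd=8.0, reliability=0.90)
assert sem_proxy < sem_mcq  # more reliable format -> smaller SEM
```

Under Classical Test Theory this identity is generic; the study's Rasch-based SEM estimates are model-specific, so the sketch only illustrates the direction of the relationship.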
Collapse
Affiliation(s)
- Johan Renes
- Department of Human Biology, Maastricht University, The Netherlands
| | - Cees P M van der Vleuten
- Department of Educational Research and Development, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
| | - Carlos F Collares
- Department of Educational Research and Development, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
- European Board of Medical Assessors, Edinburgh, UK
- Stichting Aphasia.help, Maastricht, The Netherlands
| |
Collapse
|
13
|
Kiessling C, Perron NJ, van Nuland M, Bujnowska-Fedak MM, Essers G, Joakimsen RM, Pype P, Tsimtsiou Z. Does it make sense to use written instruments to assess communication skills? Systematic review on the concurrent and predictive value of written assessment for performance. PATIENT EDUCATION AND COUNSELING 2023; 108:107612. [PMID: 36603470 DOI: 10.1016/j.pec.2022.107612] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/01/2022] [Revised: 12/18/2022] [Accepted: 12/20/2022] [Indexed: 06/17/2023]
Abstract
OBJECTIVES To evaluate possible associations between learners' results in written and performance-based assessments of communication skills (CS), in either concurrent or predictive study designs. METHODS The search included four databases for peer-reviewed studies containing both written and performance-based CS assessment. Eleven studies met the inclusion criteria. RESULTS Included studies predominantly assessed undergraduate medical students. Studies reported mainly low to medium correlations between written and performance-based assessment results (Objective Structured Clinical Examinations or encounters with simulated patients), with correlation coefficients ranging from 0.13 to 0.53 (p < 0.05). Higher correlations were reported when specific CS, such as motivational interviewing, were assessed. Only a few studies gave sufficient reliability indicators for both assessment formats. CONCLUSIONS Written assessment scores seem to predict performance-based assessment results to a limited extent but cannot replace them entirely. Reporting of assessment instruments' psychometric properties is essential to improve the interpretation of future findings and could possibly affect their predictive validity for performance. PRACTICE IMPLICATIONS Within longitudinal CS assessment programs, triangulation of assessment methods including written assessment is recommended, taking into consideration possible limitations. Written assessments with feedback can help students and trainers elaborate procedural knowledge as a strong support for the acquisition and transfer of CS to different contexts.
Collapse
Affiliation(s)
- Claudia Kiessling
- Chair for the Education of Personal and Interpersonal Competencies in Health Care, Witten/Herdecke University, Witten, Germany.
| | - Noelle Junod Perron
- Unit of Development and Research in Medical Education and Department of community health and medicine, Geneva Faculty of Medicine and Medical Directorate, Geneva University Hospitals, Geneva, Switzerland
| | - Marc van Nuland
- Academic Center for General Practice, Leuven University, Leuven, Belgium
| | | | - Geurt Essers
- Network of GP Training Programs in the Netherlands, the Netherlands
| | - Ragnar M Joakimsen
- Department of Clinical Medicine, Faculty of Health Sciences, UIT The Arctic University of Norway and Department of Internal Medicine, University Hospital of North Norway, Tromsø, Norway
| | - Peter Pype
- Department of Public Health and Primary Care, Ghent University, Ghent, Belgium
| | - Zoi Tsimtsiou
- Department of Hygiene, Social-Preventive Medicine and Medical Statistics, School of Medicine, Aristotle University of Thessaloniki, Thessaloniki, Greece
| |
Collapse
|
14
|
Adnan S, Sarfaraz S, Nisar MK, Jouhar R. Faculty perceptions on one-best MCQ development. CLINICAL TEACHER 2023; 20:e13529. [PMID: 36151738 DOI: 10.1111/tct.13529] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2022] [Accepted: 09/07/2022] [Indexed: 01/21/2023]
Abstract
OBJECTIVE The aim of this study was to determine the perceptions of faculty of undergraduate medical and dental programmes in various private and public sector institutes regarding their Readiness, Attitude, and Institutional support for developing high-quality one-best MCQs. METHODS A validated questionnaire was designed to record demographic data and responses related to Readiness, Attitude, and Institutional support, based on a 5-point Likert scale and multiple options. Scores for Likert-scale items were categorised (Readiness: poor 0-12, good 13-24; Attitude: negative 0-12, positive 13-24; Institutional support: no support 0-12, highly supportive 13-24). The individual and overall scores for Readiness, Attitude, and Institutional support were compared across demographic characteristics using independent-samples and paired-samples t-tests as appropriate. Data were analysed using SPSS version 25.0; a two-sided P-value of <0.05 was considered significant. RESULTS With a response rate of 87.5%, the mean score for Institutional support was higher (14.45 ± 4.73) than those for Readiness (13.39 ± 4.51) and Attitude (12.54 ± 4.59). Responses to multiple-choice items revealed that faculty considered MCQ writing workshops effective while facing the most difficulty in formulating scenarios and homogeneous options. Most faculty reported no commitment issues but desired protected on-the-job time for item development. No significant association was found between the scores and age group, gender, qualification, institute type, department, or designation of participants. CONCLUSION Overall, the faculty were found to be motivated and committed to developing high-quality one-best MCQs. With continued institutional support, faculty can be expected to engage further in writing such items.
Collapse
Affiliation(s)
- Samira Adnan
- Department of Operative Dentistry, Sindh Institute of Oral Health Science, Jinnah Sindh Medical University, Karachi, Pakistan
| | - Shaur Sarfaraz
- Institute of Medical Education, Jinnah Sindh Medical University, Karachi, Pakistan
| | - Muhammad Kashif Nisar
- Department of Biochemistry, Liaquat National Hospital and Medical College, Karachi, Pakistan
| | - Rizwan Jouhar
- Department of Restorative Dentistry and Endodontics, College of Dentistry, King Faisal University, Al-Ahsa, Saudi Arabia
| |
Collapse
|
15
|
Ong AML, Hum C. Entrust Me: Embedding Entrustable Professional Activities in a Gastroenterology Residency Program. Dig Dis Sci 2023; 68:352-356. [PMID: 36609940 DOI: 10.1007/s10620-022-07815-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 11/07/2022] [Indexed: 01/09/2023]
Abstract
Entrustable Professional Activities (EPAs) are defined as key tasks that are gradually entrusted to specialty fellows during their training. EPAs are an important component of competency-based medical education; the concept of entrustment is also familiar and intuitive to clinical faculty, even to inexperienced evaluators, even if they do not use the term. In this paper, we describe how we adopted an established EPA framework for gastroenterology training, using EPAs to guide curriculum design, faculty development, and assessment in ten steps: (1) adopting an established framework, (2) mapping EPAs to relevant competencies, (3) specifying expected behaviors for competency in each EPA, (4) training faculty and fellows to share a mental model, (5) designing the training curriculum and educational strategies based on EPAs, (6) determining the assessment strategy, (7) designing the assessment tool, (8) ensuring clarity in how assessment data are used to make summative decisions, (9) changing the feedback culture among fellows, and (10) using a longitudinal coaching system to improve EPA performance.
Collapse
Affiliation(s)
- Andrew Ming-Liang Ong
- Singhealth Gastroenterology Residency Program, Singapore, Singapore.
- Department of Gastroenterology and Hepatology, Singapore General Hospital, 20 College Road, Level 3, Academia Building, Singapore, 169856, Singapore.
| | - Clasandra Hum
- Singhealth Gastroenterology Residency Program, Singapore, Singapore
| |
Collapse
|
16
|
Reliability and validity testing of the medicines related - consultation assessment tool for assessing pharmacists' consultations. Int J Clin Pharm 2023; 45:201-209. [PMID: 36394786 PMCID: PMC9938801 DOI: 10.1007/s11096-022-01489-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2022] [Accepted: 09/14/2022] [Indexed: 11/18/2022]
Abstract
BACKGROUND Demonstrating a person-centred approach in a consultation is a key component of delivering high-quality healthcare. Supporting development of such an approach requires training underpinned by valid assessment tools. Given the lack of a suitable pharmacy-specific tool, a new global consultation skills assessment tool, the medicines related-consultation assessment tool (MR-CAT), was designed and tested. AIM This study aimed to test the validity and reliability of the MR-CAT using psychometric methods. METHOD Psychometric testing involved analysis of participants' (n = 13) assessments of fifteen pre-recorded simulated consultations using the MR-CAT. Analysis included discriminant validity testing, and intrarater and interrater reliability testing, for each of the five sections of the MR-CAT and for the overall global assessment of the consultation, as well as internal consistency testing for the whole tool. RESULTS Internal consistency for the overall global assessment of the consultation was good (Cronbach's alpha = 0.97). The MR-CAT discriminated well for the overall global assessment of the consultation (p < 0.001). Moderate to high intrarater reliability was observed for the overall global assessment of the consultation and for all five sections of the MR-CAT (rho = 0.64-0.84) in the test-retest analysis. Moderate to good interrater reliability (Kendall's W = 0.68-0.90) was observed for the overall global assessment of the consultation and for all five sections of the MR-CAT. CONCLUSION The MR-CAT is a valid and reliable tool for assessing person-centred pharmacists' consultations. Moreover, its unique design means that the MR-CAT can be used in both formative and summative assessment.
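The internal consistency and interrater statistics named in this abstract (Cronbach's alpha, Kendall's W) can both be computed from a respondents-by-items or raters-by-subjects score matrix. A minimal sketch with synthetic ratings, not the MR-CAT data (tie handling in the rank step is omitted for brevity):

```python
from statistics import variance

def cronbach_alpha(scores):
    """Cronbach's alpha for rows of item scores (respondents x items):
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = len(scores[0])
    items = list(zip(*scores))  # columns: one tuple of scores per item
    item_var_sum = sum(variance(col) for col in items)
    total_var = variance([sum(row) for row in scores])
    return k / (k - 1) * (1 - item_var_sum / total_var)

def kendalls_w(ratings):
    """Kendall's W for rows of rater scores (raters x subjects). Scores are
    converted to within-rater ranks; W = 12*S / (m^2 * (n^3 - n)), where S is
    the sum of squared deviations of the rank sums (ties not handled)."""
    m, n = len(ratings), len(ratings[0])
    def ranks(row):
        order = sorted(range(n), key=lambda j: row[j])
        r = [0] * n
        for rank, j in enumerate(order, start=1):
            r[j] = rank
        return r
    rank_rows = [ranks(row) for row in ratings]
    rank_sums = [sum(rr[j] for rr in rank_rows) for j in range(n)]
    mean = sum(rank_sums) / n
    s = sum((t - mean) ** 2 for t in rank_sums)
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Synthetic data for illustration only:
raters = [[4, 2, 5, 3, 1], [5, 1, 4, 3, 2], [4, 2, 5, 3, 1]]  # 3 raters x 5 consultations
w = kendalls_w(raters)                                        # agreement across raters

sections = [[3, 4, 3], [4, 5, 4], [2, 3, 2], [5, 5, 5]]       # 4 consultations x 3 sections
alpha = cronbach_alpha(sections)                              # internal consistency
```

Perfect agreement among raters yields W = 1; W near 0 indicates no agreement. Production analyses would add a tie correction and confidence intervals, which this sketch omits.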
Collapse
|
17
|
Kogan JR, Dine CJ, Conforti LN, Holmboe ES. Can Rater Training Improve the Quality and Accuracy of Workplace-Based Assessment Narrative Comments and Entrustment Ratings? A Randomized Controlled Trial. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2023; 98:237-247. [PMID: 35857396 DOI: 10.1097/acm.0000000000004819] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
PURPOSE Prior research evaluating workplace-based assessment (WBA) rater training effectiveness has not measured improvement in narrative comment quality and accuracy, nor accuracy of prospective entrustment-supervision ratings. The purpose of this study was to determine whether rater training, using performance dimension and frame of reference training, could improve WBA narrative comment quality and accuracy. A secondary aim was to assess impact on entrustment rating accuracy. METHOD This single-blind, multi-institution, randomized controlled trial of a multifaceted, longitudinal rater training intervention consisted of in-person training followed by asynchronous online spaced learning. In 2018, investigators randomized 94 internal medicine and family medicine physicians involved with resident education. Participants assessed 10 scripted standardized resident-patient videos at baseline and follow-up. Differences in holistic assessment of narrative comment accuracy and specificity, accuracy of individual scenario observations, and entrustment rating accuracy were evaluated with t tests. Linear regression assessed impact of participant demographics and baseline performance. RESULTS Seventy-seven participants completed the study. At follow-up, the intervention group (n = 41), compared with the control group (n = 36), had higher scores for narrative holistic specificity (2.76 vs 2.31, P < .001, Cohen V = .25), accuracy (2.37 vs 2.06, P < .001, Cohen V = .20) and mean quantity of accurate (6.14 vs 4.33, P < .001), inaccurate (3.53 vs 2.41, P < .001), and overall observations (2.61 vs 1.92, P = .002, Cohen V = .47). In aggregate, the intervention group had more accurate entrustment ratings (58.1% vs 49.7%, P = .006, Phi = .30). Baseline performance was significantly associated with performance on final assessments. CONCLUSIONS Quality and specificity of narrative comments improved with rater training; the effect was mitigated by inappropriate stringency. 
Training improved accuracy of prospective entrustment-supervision ratings, but the effect was more limited. Participants with lower baseline rating skill may benefit most from training.
Collapse
Affiliation(s)
- Jennifer R Kogan
- J.R. Kogan is associate dean, Student Success and Professional Development, and professor of medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania; ORCID: https://orcid.org/0000-0001-8426-9506
| | - C Jessica Dine
- C.J. Dine is associate dean, Evaluation and Assessment, and associate professor of medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania; ORCID: https://orcid.org/0000-0001-5894-0861
| | - Lisa N Conforti
- L.N. Conforti is research associate for milestones evaluation, Accreditation Council for Graduate Medical Education, Chicago, Illinois; ORCID: https://orcid.org/0000-0002-7317-6221
| | - Eric S Holmboe
- E.S. Holmboe is chief, research, milestones development and evaluation, Accreditation Council for Graduate Medical Education, Chicago, Illinois; ORCID: https://orcid.org/0000-0003-0108-6021
| |
Collapse
|
18
|
Stefan P, Pfandler M, Kullmann A, Eck U, Koch A, Mehren C, von der Heide A, Weidert S, Fürmetz J, Euler E, Lazarovici M, Navab N, Weigl M. Computer-assisted simulated workplace-based assessment in surgery: application of the universal framework of intraoperative performance within a mixed-reality simulation. BMJ SURGERY, INTERVENTIONS, & HEALTH TECHNOLOGIES 2023; 5:e000135. [PMID: 36687799 PMCID: PMC9853221 DOI: 10.1136/bmjsit-2022-000135] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2022] [Accepted: 08/24/2022] [Indexed: 01/20/2023] Open
Abstract
Objectives Workplace-based assessment (WBA) is a key requirement of competency-based medical education in postgraduate surgical education. Although simulated workplace-based assessment (SWBA) has been proposed to complement WBA, it is insufficiently adopted in surgical education. In particular, approaches to criterion-referenced and automated assessment of intraoperative surgical competency in contextualized SWBA settings are missing. Main objectives were (1) application of the universal framework of intraoperative performance and exemplary adaptation to spine surgery (vertebroplasty; VP); (2) development of computer-assisted assessment based on criterion-referenced metrics; and (3) implementation in contextualized, team-based operating room (OR) simulation, and evaluation of validity. Design Multistage development and assessment study: (1) expert-based definition of performance indicators based on the framework's performance domains; (2) development of respective assessment metrics based on preoperative planning and intraoperative performance data; (3) implementation in mixed-reality OR simulation and assessment of surgeons operating in a confederate team. Statistical analyses included internal consistency and interdomain associations, correlations with experience, and technical and non-technical performances. Setting Surgical simulation center, with a full surgical team set-up within mixed-reality OR simulation. Participants Eleven surgeons were recruited from two teaching hospitals. Eligibility criteria included surgical specialists in orthopedic, trauma, or neurosurgery with prior VP or kyphoplasty experience. Main outcome measures Computer-assisted assessment of surgeons' intraoperative performance. Results Performance scores were associated with surgeons' experience, observational assessment (Objective Structured Assessment of Technical Skill) scores, and overall pass/fail ratings. Results provide strong evidence for the validity of our computer-assisted SWBA approach.
Diverse indicators of surgeons' technical and non-technical performances could be quantified and captured. Conclusions This study is the first to investigate computer-assisted assessment based on a competency framework in authentic, contextualized team-based OR simulation. Our approach discriminates surgical competency across the domains of intraoperative performance. It advances previous automated assessment based on the use of current surgical simulators in decontextualized settings. Our findings inform future use of computer-assisted multidomain competency assessments of surgeons using SWBA approaches.
Collapse
Affiliation(s)
- Philipp Stefan
- Chair for Computer Aided Medical Procedures and Augmented Reality, Department of Informatics, Technical University of Munich, München, Germany
| | - Michael Pfandler
- Institute and Outpatient Clinic for Occupational, Social, and Environmental Medicine, University Hospital, Ludwig Maximilians University Munich, München, Germany
| | - Aljoscha Kullmann
- Chair for Computer Aided Medical Procedures and Augmented Reality, Department of Informatics, Technical University of Munich, München, Germany
| | - Ulrich Eck
- Chair for Computer Aided Medical Procedures and Augmented Reality, Department of Informatics, Technical University of Munich, München, Germany
| | - Amelie Koch
- Institute and Outpatient Clinic for Occupational, Social, and Environmental Medicine, University Hospital, Ludwig Maximilians University Munich, München, Germany
| | - Christoph Mehren
- Spine Center, Schön Klinik München Harlaching, München, Germany
- Academic Teaching Hospital and Spine Research Institute, Paracelsus Medical University, Salzburg, Austria
| | - Anna von der Heide
- Department of General, Trauma and Reconstructive Surgery, University Hospital, Campus Grosshadern, Ludwig Maximilians University Munich, München, Germany
| | - Simon Weidert
- Department of General, Trauma and Reconstructive Surgery, University Hospital, Campus Grosshadern, Ludwig Maximilians University Munich, München, Germany
| | - Julian Fürmetz
- Department of General, Trauma and Reconstructive Surgery, University Hospital, Campus Innenstadt, Ludwig Maximilians University Munich, München, Germany
| | - Ekkehard Euler
- Department of General, Trauma and Reconstructive Surgery, University Hospital, Campus Innenstadt, Ludwig Maximilians University Munich, München, Germany
| | - Marc Lazarovici
- Institute for Emergency Medicine and Management in Medicine (INM), University Hospital, Ludwig Maximilians University Munich, München, Germany
| | - Nassir Navab
- Chair for Computer Aided Medical Procedures and Augmented Reality, Department of Informatics, Technical University of Munich, München, Germany
| | - Matthias Weigl
- Institute and Outpatient Clinic for Occupational, Social, and Environmental Medicine, University Hospital, Ludwig Maximilians University Munich, München, Germany
- Institute for Patient Safety, University of Bonn, Bonn, Germany
| |
Collapse
|
19
|
Leep Hunderfund AN, Santilli AR, Rubin DI, Laughlin RS, Sorenson EJ, Park YS. Assessing electrodiagnostic skills among residents and fellows: Relationships between workplace-based assessments using the Electromyography Direct Observation Tool and other measures of trainee performance. Muscle Nerve 2022; 66:671-678. [PMID: 35470901 DOI: 10.1002/mus.27566] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2021] [Revised: 04/21/2022] [Accepted: 04/23/2022] [Indexed: 12/14/2022]
Abstract
INTRODUCTION/AIMS Graduate medical education programs must ensure residents and fellows acquire skills needed for independent practice. Workplace-based observational assessments are informative but can be time- and resource-intensive. In this study we sought to gather "relations-to-other-variables" validity evidence for scores generated by the Electromyography Direct Observation Tool (EMG-DOT) to inform its use as a measure of electrodiagnostic skill acquisition. METHODS Scores on multiple assessments were compiled by trainees during Clinical Neurophysiology and Electromyography rotations at a large US academic medical center. Relationships between workplace-based EMG-DOT scores (n = 298) and scores on a prerequisite simulated patient exercise, patient experience surveys (n = 199), end-of-rotation evaluations (n = 301), and an American Association of Neuromuscular & Electrodiagnostic Medicine (AANEM) self-assessment examination were assessed using Pearson correlations. RESULTS Among 23 trainees, EMG-DOT scores assigned by physician raters correlated positively with end-of-rotation evaluations (r = 0.63, P = .001), but EMG-DOT scores assigned by technician raters did not (r = 0.10, P = .663). When physician and technician ratings were combined, higher EMG-DOT scores correlated with better patient experience survey scores (r = 0.42, P = .047), but not with simulated patient or AANEM self-assessment examination scores. DISCUSSION End-of-rotation evaluations can provide valid assessments of trainee performance when completed by individuals with ample opportunities to directly observe trainees. Inclusion of observational assessments by technicians and patients provides a more comprehensive view of trainee performance. Workplace- and classroom-based assessments provide complementary information about trainee performance, reflecting underlying differences in types of skills measured.
Collapse
Affiliation(s)
| | - Ashley R Santilli
- Department of Neurology at Mayo Clinic College of Medicine, Mayo Clinic, Rochester, Minnesota
| | - Devon I Rubin
- Department of Neurology at Mayo Clinic College of Medicine, Jacksonville, Florida
| | - Ruple S Laughlin
- Department of Neurology at Mayo Clinic College of Medicine, Mayo Clinic, Rochester, Minnesota
| | - Eric J Sorenson
- Department of Neurology at Mayo Clinic College of Medicine, Mayo Clinic, Rochester, Minnesota
| | - Yoon S Park
- Department of Medical Education, University of Illinois College of Medicine, Chicago, Illinois
- Health Professions Education Research at Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts
| |
Collapse
|
20
|
Favier R. Entrustable professional activities: bridging the gap between veterinary education and clinical practice. Vet Rec 2022; 191:378-380. [DOI: 10.1002/vetr.2414] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Affiliation(s)
- Robert Favier
- IVC Evidensia / IVC Evidensia Academy, Utrecht, Netherlands
| |
Collapse
|
21
|
Franklin PD, Drane D, Wakschlag L, Ackerman R, Kho A, Cella D. Development of a learning health system science competency assessment to guide training and proficiency assessment. Learn Health Syst 2022; 6:e10343. [PMID: 36263257 PMCID: PMC9576243 DOI: 10.1002/lrh2.10343] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2022] [Revised: 08/05/2022] [Accepted: 08/17/2022] [Indexed: 11/05/2022] Open
Abstract
Introduction Learning health systems (LHS) science is fundamentally a transdisciplinary field. To capture the breadth of the competencies of an LHS scientist, AHRQ and national experts defined a series of 42 competencies across seven domains that support success. Clinicians, researchers, and leaders who are new to the LHS field can identify and prioritize proficiency development among these domains. In addition, existing leaders and researchers will assemble teams of experts who together represent the LHS science domains. To serve LHS workforce development and proficiency assessment, the AHRQ-funded ACCELERAT K12 training program recruited domain experts and trainees to define and operationalize items to include in an LHS Competency Assessment to support emerging and existing LHS scientists in prioritizing and monitoring proficiency development. Methods Sequential interviews with 18 experts iteratively defined skills and tasks to illustrate the stage of proficiency, and its progression, for each of the 42 competencies in the seven LHS expertise domains: systems science; research questions and standards of scientific evidence; research methods; informatics; ethics of research and implementation in health systems; improvement and implementation science; and engagement, leadership, and research management. An educational assessment expert and an LHS scientist refined the assessment criteria at each stage to use parallel language across domains. Last, current trainees reviewed and pilot-tested the assessment, and the LHS Competency Assessment was further refined using their feedback.
The assessment framework was informed by Bloom's revised taxonomy of educational objectives (Anderson and Krathwohl, A taxonomy for learning, teaching, and assessing: A revision of Bloom's taxonomy of educational objectives, 2001), in which learning progresses from recalling, defining, understanding, and awareness at the lower levels of the taxonomy, to applying and adopting, and finally to creating, designing, and critiquing at the upper levels. We also developed assessment criteria that could be used for longer-term assessment of direct performance. Van der Vleuten et al. (Best Pract Res Clin Obstet Gynaecol. 2010;24(6):703-719) define longer-term direct assessment methods as assessment that occurs over a period ranging from weeks to years and involves multiple assessment methods and exposure to the learner's work over an extended period. Results This experience report describes the content of the LHS Competency Assessment. For each domain and competency, the assessment lists examples of evidence to support expertise at each level of proficiency: no exposure; foundational (awareness/understanding); emerging (early application); and proficient (application with a high level of skill). Trainees begin with baseline standard assessment tables, where they can indicate no exposure or mark the foundational and emerging skills with which they have competence. For domains where foundational and emerging skills have been achieved, users can move on to assessment tables that list evidence of proficiency. Conclusion The LHS Competency Assessment offers consistent, graded criteria across the seven LHS domains to guide trainees and mentors in evaluating progress from no experience to foundational knowledge, emerging proficiency, and proficiency. The assessment can also be used to design training and mentoring for those newly exposed to LHS science and for those with key expertise who wish to expand their LHS expertise.
Affiliation(s)
- Patricia D. Franklin: Department of Medical Social Sciences, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- Denise Drane: Program Evaluation Core & Searle Center for Advancing Learning and Teaching, Northwestern University, Evanston, Illinois, USA
- Lauren Wakschlag: Department of Medical Social Sciences and Institute for Innovations in Developmental Sciences, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- Ronald Ackerman: Institute for Public Health and Medicine and Department of Medicine, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- Abel Kho: Center for Health Information Partnerships, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- David Cella: Department of Medical Social Sciences, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
22
Graupe T, Giemsa P, Schaefer K, Fischer MR, Strijbos JW, Kiessling C. The role of the emotive, moral, and cognitive components for the prediction of medical students' empathic behavior in an Objective Structured Clinical Examination (OSCE). PATIENT EDUCATION AND COUNSELING 2022; 105:3103-3109. [PMID: 35798614] [DOI: 10.1016/j.pec.2022.06.017]
Abstract
OBJECTIVES To investigate whether medical students' emotive abilities, attitudes, and cognitive empathic professional abilities predict empathic behavior in an Objective Structured Clinical Examination (OSCE). METHODS Linear and multiple regressions were used to test concurrent validity between the Interpersonal Reactivity Index (IRI), Jefferson Scale of Physician Empathy (JSPE-S), Situational Judgement Test (SJT; expert-based score (SJT-ES), theory-based score (SJT-TS)), and empathic behavior in an OSCE measured by the Berlin Global Rating (BGR) and Verona Coding Definitions for Emotion Sequences (VR-CoDES). RESULTS The highest amounts of explained variance in empathic behavior measured by VR-CoDES were found for the SJT-ES (R2 = 0.125) and SJT-TS (R2 = 0.131). JSPE-S (R2 = 0.11) and SJT-ES (R2 = 0.10) explained the highest amount of variance in empathic behavior as measured by BGR. Stepwise multiple regression improved the model for BGR by including SJT-ES and JSPE-S, explaining 16.2% of variance. CONCLUSIONS The instrument measuring the emotive component (IRI) did not significantly predict empathic behavior, whereas instruments measuring moral (JSPE-S) and cognitive components (SJT) did; however, the explained variance was small. PRACTICE IMPLICATIONS In a longitudinal assessment program, triangulation of different instruments assessing empathy offers a rich perspective on learners' empathic abilities. Empathy training should include the acquisition of knowledge, attitudes, and behavior to support learners' empathic behaviors.
Affiliation(s)
- Tanja Graupe: Institute of Medical Education, University Hospital, LMU Munich, Munich, Germany
- Patrick Giemsa: Faculty of Health, Chair for the Education of Personal and Interpersonal Competences in Health Care, Witten/Herdecke University, Witten, Germany
- Katharina Schaefer: Institute of Medical Education, University Hospital, LMU Munich, Munich, Germany
- Martin R Fischer: Institute of Medical Education, University Hospital, LMU Munich, Munich, Germany
- Jan-Willem Strijbos: Faculty of Behavioural and Social Sciences, Department of Educational Sciences, University of Groningen, the Netherlands
- Claudia Kiessling: Faculty of Health, Chair for the Education of Personal and Interpersonal Competences in Health Care, Witten/Herdecke University, Witten, Germany
23
Defining Foundational Competence for Prelicensure and Graduate Nursing Students: A Concept Analysis and Conceptual Model. Nurse Educ Pract 2022; 64:103415. [DOI: 10.1016/j.nepr.2022.103415]
24
Marceau M, St-Onge C, Gallagher F, Young M. Validity as a social imperative: users' and leaders' perceptions. CANADIAN MEDICAL EDUCATION JOURNAL 2022; 13:22-36. [PMID: 35875440] [PMCID: PMC9297243] [DOI: 10.36834/cmej.73518]
Abstract
INTRODUCTION Recently, validity as a social imperative was proposed as an emerging conceptualization of validity in the assessment literature in health professions education (HPE). To further develop our understanding, we explored the perceived acceptability and anticipated feasibility of validity as a social imperative with users and leaders engaged with assessment in HPE in Canada. METHODS We conducted a qualitative interpretive description study. Purposeful and snowball sampling were used to recruit participants for semi-structured individual interviews and focus groups. Each transcript was analyzed by two team members and discussed with the team until consensus was reached. RESULTS We conducted five focus groups and eleven interviews with two different stakeholder groups (users and leaders). Our findings suggest that the participants perceived the concept of validity as a social imperative as acceptable. Regardless of group, participants shared similar considerations regarding: the limits of traditional validity models, the concept's timeliness and relevance, the need to clarify some terms used to characterize the concept, the similarities with modern theories of validity, and the anticipated challenges in applying the concept in practice. In addition, participants discussed some limits of current approaches to validity in the context of workplace-based and programmatic assessment. CONCLUSION Validity as a social imperative can be interwoven throughout existing theories of validity and may represent how HPE is adapting traditional models of validity in order to respond to the complexity of assessment in HPE; however, challenges likely remain in operationalizing the concept prior to its implementation.
Affiliation(s)
- Mélanie Marceau: School of Nursing, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Quebec, Canada
- Christina St-Onge: Department of Medicine, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Quebec, Canada
- Frances Gallagher: School of Nursing, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Quebec, Canada
- Meredith Young: Institute of Health Sciences Education, Faculty of Medicine and Health Sciences, McGill University, Québec, Canada
25
Taking the Big Leap: A Case Study on Implementing Programmatic Assessment in an Undergraduate Medical Program. EDUCATION SCIENCES 2022. [DOI: 10.3390/educsci12070425]
Abstract
The concept of programmatic assessment (PA) is well described in the literature; however, studies on implementing and operationalizing this systemic assessment approach are lacking. The present case study developed a local instantiation of PA, referred to as Assessment System Fribourg (ASF), which was inspired by an existing program. ASF was utilized for a new competency-based undergraduate Master of Medicine program at the State University of Fribourg. ASF relies on the interplay of four key principles and nine main program elements based on concepts of PA, formative assessment, and evaluative judgment. We started our journey in 2019 with the first cohort of 40 students who graduated in 2022. This paper describes our journey implementing ASF, including the enabling factors and hindrances that we encountered, and reflects on our experience and the path that is still in front of us. This case illustrates one possibility for implementing PA.
26
Beckman M, Alfonsson S, Rosendahl I, Berman AH, Lindqvist H. A Behavior-based Coding Tool for Assessing Supervisors' Adherence and Competence: Findings From a Motivational Interviewing Implementation Study. Clin Psychol Psychother 2022; 29:1942-1949. [PMID: 35727807] [DOI: 10.1002/cpp.2763]
Abstract
Supervision seems to be an essential part of therapist training, and thus also of implementing evidence-based practices. However, there is a shortage of valid and reliable instruments for objective assessment of supervision competence that include both global measures and frequency counts of behavior - two essential aspects of supervisory competence. This study tests the internal consistency and inter-rater reliability of an assessment tool that includes both these measures. Additionally, the strategies and techniques used by ten supervisors in 35 Motivational Interviewing supervision sessions are described. Coding was conducted after two separate coding training sessions. The internal consistency across the global measures was acceptable (α = 0.70; 0.71). After the second training, the inter-rater reliabilities for all frequency counts were in the moderate to good range, except for two that were in the poor range; inter-rater reliability for one of the four global measures was in the moderate range, and three were in the poor range. A prerequisite for identifying specific supervisor skills central to the development of therapist skills, teaching these skills to supervisors, and performing quality assurance of supervision is to create instruments that can measure these behaviors. This study is a step in that direction.
Affiliation(s)
- Maria Beckman: Center for Psychiatry Research, Department of Clinical Neuroscience, Karolinska Institutet, & Stockholm Health Care Services, Stockholm County Council, Sweden
- Sven Alfonsson: Center for Psychiatry Research, Department of Clinical Neuroscience, Karolinska Institutet, & Stockholm Health Care Services, Stockholm County Council, Sweden
- Ingvar Rosendahl: Center for Psychiatry Research, Department of Clinical Neuroscience, Karolinska Institutet, & Stockholm Health Care Services, Stockholm County Council, Sweden
- Anne H Berman: Center for Psychiatry Research, Department of Clinical Neuroscience, Karolinska Institutet, & Stockholm Health Care Services, Stockholm County Council, Sweden; Department of Psychology, Clinical Psychology, Uppsala University, Sweden
- Helena Lindqvist: Center for Psychiatry Research, Department of Clinical Neuroscience, Karolinska Institutet, & Stockholm Health Care Services, Stockholm County Council, Sweden
27
Westein MPD, Koster AS, Daelmans HEM, Collares CF, Bouvy ML, Kusurkar RA. Validity evidence for summative performance evaluations in postgraduate community pharmacy education. CURRENTS IN PHARMACY TEACHING & LEARNING 2022; 14:701-711. [PMID: 35809899] [DOI: 10.1016/j.cptl.2022.06.014]
Abstract
INTRODUCTION Workplace-based assessment of competencies is complex. In this study, the validity of summative performance evaluations (SPEs) made by supervisors in a two-year longitudinal supervisor-trainee relationship was investigated in a postgraduate community pharmacy specialization program in the Netherlands. The construct of competence was based on an adapted version of the 2005 Canadian Medical Education Directive for Specialists (CanMEDS) framework. METHODS The study had a case study design. Both quantitative and qualitative data were collected. The year 1 and year 2 SPE scores of 342 trainees were analyzed using confirmatory factor analysis and generalizability theory. Semi-structured interviews were held with 15 supervisors and the program director to analyze the inferences they made and the impact of SPE scores on the decision-making process. RESULTS A good model fit was found for the adapted CanMEDS based seven-factor construct. The reliability/precision of the SPE measurements could not be completely isolated, as every trainee was trained in one pharmacy and evaluated by one supervisor. Qualitative analysis revealed that supervisors varied in their standards for scoring competencies. Some supervisors were reluctant to fail trainees. The competency scores had little impact on the high-stakes decision made by the program director. CONCLUSIONS The adapted CanMEDS competency framework provided a valid structure to measure competence. The reliability/precision of SPE measurements could not be established and the SPE measurements provided limited input for the decision-making process. Indications of a shadow assessment system in the pharmacies need further investigation.
Affiliation(s)
- Marnix P D Westein: Department of Pharmaceutical Sciences, Utrecht University, Royal Dutch Pharmacists Association (KNMP), Research in Education, Faculty of Medicine Vrije Universiteit, Amsterdam, the Netherlands
- Andries S Koster: Department of Pharmaceutical Sciences, Utrecht University, Utrecht, the Netherlands
- Hester E M Daelmans: Master's programme of Medicine, Faculty of Medicine Vrije Universiteit, Amsterdam, the Netherlands
- Carlos F Collares: Maastricht University Faculty of Health Medicine and Life Sciences, Maastricht, the Netherlands
- Marcel L Bouvy: Department of Pharmaceutical Sciences, Utrecht University, Utrecht, the Netherlands
- Rashmi A Kusurkar: Research in Education, Faculty of Medicine Vrije Universiteit, Amsterdam, the Netherlands
28
Perron NJ, Pype P, van Nuland M, Bujnowska-Fedak MM, Dohms M, Essers G, Joakimsen R, Tsimtsiou Z, Kiessling C. What do we know about written assessment of health professionals' communication skills? A scoping review. PATIENT EDUCATION AND COUNSELING 2022; 105:1188-1200. [PMID: 34602334] [DOI: 10.1016/j.pec.2021.09.011]
Abstract
OBJECTIVE The aim of this scoping review was to investigate the published literature on written assessment of communication skills in health professionals' education. METHODS PubMed, Embase, CINAHL and PsycInfo were screened for the period 1/1995-7/2020. Selection was conducted by four pairs of reviewers. Four reviewers extracted and analyzed the data regarding study, instrument, item, and psychometric characteristics. RESULTS From 20,456 assessed abstracts, 74 articles were included which described 70 different instruments. Two thirds of the studies used written assessment to measure training effects; the others focused on the development/validation of the instrument. Instruments were usually developed by the authors, often with little mention of the test development criteria. The type of knowledge assessed was rarely specified. Most instruments included clinical vignettes. Instrument properties and psychometric characteristics were seldom reported. CONCLUSION There are a number of written assessments available in the literature. However, the reporting of the development and psychometric properties of these instruments is often incomplete. PRACTICE IMPLICATIONS Written assessment of communication skills is widely used in health professions education. Improvement in the reporting of instrument development, items and psychometrics may help communication skills teachers better identify when, how and for whom written assessment of communication should be used.
Affiliation(s)
- Noelle Junod Perron: Unit of Development and Research in Medical Education, Geneva Faculty of Medicine and Institute of Primary Care, Geneva University Hospitals, Geneva, Switzerland
- Peter Pype: Department of Public Health and Primary Care, Ghent University, Ghent, Belgium
- Marc van Nuland: Academic Center for General Practice, Leuven University, Leuven, Belgium
- Geurt Essers: Network of GP Training Programs in the Netherlands, Utrecht, The Netherlands
- Ragnar Joakimsen: Department of Clinical Medicine, Faculty of Health Sciences, UiT The Arctic University of Norway and Department of Internal Medicine, University Hospital of North Norway, Tromsø, Norway
- Zoi Tsimtsiou: Department of Hygiene, Social-Preventive Medicine and Medical Statistics, School of Medicine, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Claudia Kiessling: Personal and Interpersonal Development in Health Care Education, Witten/Herdecke University, Witten, Germany
29
Schumacher DJ, Teunissen PW, Kinnear B, Driessen EW. Assessing trainee performance: ensuring learner control, supporting development, and maximizing assessment moments. Eur J Pediatr 2022; 181:435-439. [PMID: 34286373] [DOI: 10.1007/s00431-021-04182-0]
Abstract
In this article, the authors provide practical guidance for frontline supervisors' efforts to assess trainee performance. They focus on three areas. First, they argue the importance of promoting learner control in the assessment process, noting that providing learners agency and control can shift the stakes of assessment from high to low and promote a safe environment that facilitates learning. Second, they posit that assessment should be used to support continued development by promoting a relational partnership between trainees and supervisors. This partnership allows supervisors to reinforce desirable aspects of performance, provide real-time support for deficient areas of performance, and sequence learning with the appropriate amount of scaffolding to push trainees from competence (what they can do alone) to capability (what they are able to do with support). Finally, they advocate the importance of optimizing the use of written comments and direct observation while also recognizing that performance is interdependent in efforts to maximize assessment moments. Conclusion: Using best practices in trainee assessment can help trainees take next steps in their development in a learner-centered partnership with clinical supervisors. What is Known: • Many pediatricians are asked to assess the performance of medical students and residents they work with, but few have received formal training in assessment. What is New: • This article presents evidence-based best practices for assessing trainees, including giving trainees agency in the assessment process and focusing on helping trainees take next steps in their development.
Affiliation(s)
- Daniel J Schumacher: Division of Emergency Medicine, Cincinnati Children's Hospital Medical Center, and Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Pim W Teunissen: School of Health Professions Education (SHE), Faculty of Health Medicine and Life Sciences and Gynecologist, Department of Obstetrics and Gynecology, Maastricht University Medical Center, Maastricht, the Netherlands
- Benjamin Kinnear: Internal Medicine and Pediatrics, Division of Hospital Medicine, Department of Pediatrics, Cincinnati Children's Hospital Medical Center, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Erik W Driessen: School of Health Professions Education (SHE), Faculty of Health Medicine and Life Sciences, Maastricht University, Maastricht, the Netherlands
30
The why, what, when, who and how of assessing CBT competence to support lifelong learning. COGNITIVE BEHAVIOUR THERAPIST 2022. [DOI: 10.1017/s1754470x22000502]
Abstract
Assessment of cognitive behaviour therapy (CBT) competence is a critical component in ensuring optimal clinical care, supporting therapists’ skill acquisition, and facilitating continuing professional development. This article provides a framework to support trainers, assessors, supervisors and therapists when making decisions about selecting and implementing effective strategies for assessing CBT competence. The framework draws on the existing evidence base to address five central questions: Why assess CBT competence?; What is CBT competence?; When should CBT competence be assessed?; Who is best placed to assess CBT competence?; and How should CBT competence be assessed? Various methods of assessing CBT competence are explored and the potential benefits and challenges are outlined. Recommendations are made about which approach to use across different contexts and how to use these effectively to facilitate the acquisition, enhancement and evaluation of CBT knowledge and skills.
Key learning aims
After reading this article you will be able to:
(1) Identify key issues about why, what, when, who and how to assess CBT competence and use this framework to guide decisions about the best strategy to use.
(2) Be aware of the range of methods for assessing CBT competence and consider the main benefits and potential challenges of these.
(3) Consider the most effective ways to implement CBT competence assessment strategies as a tool for evaluation and learning.
31
Baranova K, Goebel EA, Wasserman J, Osmond A. A Survey on Changes to the Canadian Anatomical Pathology Certification Examination Due to Coronavirus Disease 2019 and Implications for Competency-Based Medical Education. Acad Pathol 2021; 8:23742895211060711. [PMID: 34926797] [PMCID: PMC8679023] [DOI: 10.1177/23742895211060711]
Abstract
The coronavirus disease 2019 pandemic resulted in a dramatic change in the Royal College of Physicians and Surgeons of Canada assessment process through elimination of the oral and practical components of the 2020 Anatomical Pathology examination. Our study sought to determine stakeholder opinions and experiences on these changes in the context of the 2019 implementation of competency-based medical education. Surveys were designed for residents and practicing pathologists. In total, 57 residents (estimated response rate 29%) and 185 pathologists (estimated response rate 19%) participated across Canada; 67% of pathologists disagreed with the 2020 Royal College examination changes, compared with 30% of residents (P < .00001). When asked whether the Royal College examination should be eliminated, 95% of pathologists indicated they would be against this, compared to only 34% of residents (P < .00001). Perceptions of changes to, and the importance of, different components of assessment in competency-based medical education were similar between pathologists and residents, with participants perceiving assessment practices to have changed fairly little since its implementation, with the exception of more frequent feedback. Analysis of narrative comments identified several common themes around assessment, including the need for objectivity and standardization and the problem of failure-to-fail. However, residents identified numerous elements of their performance that can be assessed only through longitudinal evaluation. Pathologists, on the other hand, tended to view these aspects of performance as laden with bias. Our results will hopefully help guide future innovation in assessment by characterizing different stakeholder perspectives on key issues in medical education.
Affiliation(s)
- Katherina Baranova: Department of Pathology and Laboratory Medicine, Western University and London Health Sciences Centre, London, Ontario, Canada
- Emily A. Goebel: Department of Pathology and Laboratory Medicine, Western University and London Health Sciences Centre, London, Ontario, Canada
- Jason Wasserman: Department of Pathology and Laboratory Medicine, University of Ottawa, Ontario, Canada
- Allison Osmond: Department of Pathology and Laboratory Medicine, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
32
Isbej L, Cantarutti C, Fuentes-Cimma J, Fuentes-López E, Montenegro U, Ortuño D, Oyarzo N, Véliz C, Riquelme A. The best mirror of the students' longitudinal performance: Portfolio or structured oral exam assessment at clerkship? J Dent Educ 2021; 86:383-392. [PMID: 34811760] [DOI: 10.1002/jdd.12823]
Abstract
OBJECTIVE This study aimed to compare the strength of association (i.e., explained variability) of the cumulative grade point average (GPA) with the grades obtained by dental students in the clerkship portfolio and the final structured oral exam. METHODS A prospective longitudinal study was designed to analyze quantitative data from three cohorts of dental school students. Univariate and multivariate linear regression models were built to evaluate the association between the students' cumulative GPA and the grades obtained in their clerkship portfolio and the final structured oral exam. RESULTS In total, 171 students in the last year of the undergraduate program were included (76% women, mean age 24.8 ± 1.6 years). The dental students' grades on both the portfolio and the structured oral exam were significantly associated with the GPA score, but with different strengths of association. The clerkship portfolio was more strongly associated with cumulative GPA than the structured oral exam (R2 = 19.6% versus R2 = 7.6%); the weaker association of the structured oral exam reflects lower precision in practical terms and thus different concurrent validity. CONCLUSIONS These results tip the balance toward the portfolio, which may be closer to a programmatic assessment model, offering timely feedback, development of metacognition, and measurement of the formative process rather than evidence from a single instance of examination.
Affiliation(s)
- Lorena Isbej: Escuela de Odontología, Facultad de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile; Programa de Farmacología y Toxicología, Facultad de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile
- Cynthia Cantarutti: Escuela de Odontología, Facultad de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile
- Javiera Fuentes-Cimma: Departamento de Ciencias de la Salud, Facultad de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile; School of Health Professions Education (SHE), Maastricht University, Maastricht, The Netherlands
- Eduardo Fuentes-López: Departamento de Ciencias de la Salud, Facultad de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile
- Uriel Montenegro: Escuela de Odontología, Facultad de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile
- Duniel Ortuño: Escuela de Odontología, Facultad de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile
- Natacha Oyarzo: Escuela de Odontología, Facultad de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile; Programa de Farmacología y Toxicología, Facultad de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile
- Claudia Véliz: Escuela de Odontología, Facultad de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile
- Arnoldo Riquelme: Departamento de Ciencias de la Salud, Facultad de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile
33
Heeneman S, de Jong LH, Dawson LJ, Wilkinson TJ, Ryan A, Tait GR, Rice N, Torre D, Freeman A, van der Vleuten CPM. Ottawa 2020 consensus statement for programmatic assessment - 1. Agreement on the principles. MEDICAL TEACHER 2021; 43:1139-1148. [PMID: 34344274] [DOI: 10.1080/0142159x.2021.1957088]
Abstract
INTRODUCTION In the Ottawa 2018 Consensus framework for good assessment, a set of criteria was presented for systems of assessment. Currently, programmatic assessment is being established in an increasing number of programmes. In this Ottawa 2020 consensus statement for programmatic assessment, insights from practice and research are used to define the principles of programmatic assessment. METHODS For fifteen programmes in health professions education affiliated with members of an expert group (n = 20), an inventory was completed of the perceived components, rationale, and importance of a programmatic assessment design. Input from attendees of a programmatic assessment workshop and symposium at the 2020 Ottawa conference was included. The outcome is discussed in concurrence with current theory and research. RESULTS AND DISCUSSION Twelve principles are presented that are considered important and recognisable facets of programmatic assessment. Overall, these principles were used in curriculum and assessment design, albeit with a range of approaches and rigor, suggesting that programmatic assessment is an achievable education and assessment model, embedded both in practice and research. Sharing knowledge on how programmatic assessment is being operationalized may help support educators charting their own implementation journey of programmatic assessment in their respective programmes.
Affiliation(s)
- Sylvia Heeneman: Department of Pathology, School of Health Profession Education, Maastricht University, Maastricht, The Netherlands
- Lubberta H de Jong: Department of Population Health Sciences, Faculty of Veterinary Medicine, Utrecht University, Utrecht, The Netherlands
- Luke J Dawson: School of Dentistry, University of Liverpool, Liverpool, UK
- Tim J Wilkinson: Education Unit, University of Otago, Christchurch, New Zealand
- Anna Ryan: Department of Medical Education, Melbourne Medical School, University of Melbourne, Melbourne, Australia
- Glendon R Tait: MD Program, Department of Psychiatry, and The Wilson Centre, University of Toronto, Toronto, Canada
- Neil Rice: College of Medicine and Health, University of Exeter Medical School, Exeter, UK
- Dario Torre: Department of Medicine, Uniformed Services University of Health Sciences, Bethesda, MD, USA
- Adrian Freeman: College of Medicine and Health, University of Exeter Medical School, Exeter, UK
- Cees P M van der Vleuten: Department of Educational Development and Research, School of Health Profession Education, Maastricht University, Maastricht, The Netherlands
34
Wenghofer EF, Steele RS, Christiansen RG, Carter MH. Evaluation of a High Stakes Physician Competency Assessment: Lessons for Assessor Training, Program Accountability, and Continuous Improvement. THE JOURNAL OF CONTINUING EDUCATION IN THE HEALTH PROFESSIONS 2021; 41:111-118. [PMID: 33929350 DOI: 10.1097/ceh.0000000000000362] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
INTRODUCTION There is a dearth of evidence evaluating postlicensure high-stakes physician competency assessment programs. Our purpose was to contribute to this evidence by evaluating a high-stakes assessment for assessor inter-rater reliability and for the relationship between performance on individual assessment components and overall performance. We did so to determine whether the assessment tools identify specific competency needs of the assessed physicians and to contribute to our understanding of physician dyscompetence more broadly. METHOD Four assessors independently reviewed 102 video-recorded assessments and scored physicians on seven assessment components and on overall performance. Inter-rater reliability was measured with intraclass correlation coefficients using a multiple-rater, consistency, two-way random-effects model. Analysis of variance with least-significant-difference post hoc analyses examined whether mean component scores differed significantly by quartile range of overall performance. Linear regression analysis determined the extent to which each component score was associated with overall performance. RESULTS Intraclass correlation coefficients ranged between 0.756 and 0.876 for all components scored and were highest for overall performance. Regression indicated that individual component scores were positively associated with overall performance. Levels of variation in component scores differed significantly across quartile ranges, with higher variability among poorer performers. DISCUSSION High-stakes assessments can be conducted reliably and can identify performance gaps of potentially dyscompetent physicians. Physicians who performed well tended to do so in all aspects evaluated, whereas those who performed poorly demonstrated areas of both strength and weakness. Understanding that dyscompetence rarely means a complete or catastrophic lapse in competence is vital to understanding how educational needs change throughout a physician's career.
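The consistency-type, multiple-rater ICC from a two-way model described in this abstract reduces to a simple raters-by-subjects ANOVA decomposition. The sketch below is a hedged illustration of that calculation, not the authors' analysis code; the function name and the example score matrix are made up.

```python
import numpy as np

def icc_consistency(scores):
    """Two-way consistency ICC (single-rater and k-rater-average forms).

    scores: (n_subjects, k_raters) matrix of ratings.
    Returns (single, average)."""
    n, k = scores.shape
    grand = scores.mean()
    ss_total = ((scores - grand) ** 2).sum()
    ss_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum()  # between subjects
    ss_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum()  # between raters
    ss_err = ss_total - ss_rows - ss_cols                     # residual
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    single = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
    average = (ms_rows - ms_err) / ms_rows
    return single, average
```

Because the consistency form ignores systematic rater offsets, raters who rank subjects identically but use different parts of the scale still yield an ICC of 1.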
Affiliation(s)
- Elizabeth F Wenghofer: Full Professor, School of Rural and Northern Health, Laurentian University, Sudbury, Ontario, Canada
- R S Steele: Medical Director of Knowledge, Skills, Training, Assessment, and Training (KSTAR) Physician Programs, A&M Rural and Community Health Institute, Texas A&M University Health Science Center, College Station, TX
- R G Christiansen: Professor of Medicine, Department of Medicine, University of Illinois College of Medicine, Rockford, IL
- M H Carter: Clinical Assistant Professor of Primary Care Medicine, Primary Care and Population Health, Texas A&M University Health Science Center, College Station, TX
35
Chandran DS, Muthukrishnan SP, Barman SM, Peltonen LM, Ghosh S, Sharma R, Bhattacharjee M, Rathore BB, Carroll RG, Sengupta J, Chan JYH, Ghosh D. IUPS Physiology Education Workshop series in India: organizational mechanics, outcomes, and lessons. ADVANCES IN PHYSIOLOGY EDUCATION 2020; 44:709-721. [PMID: 33125254 DOI: 10.1152/advan.00128.2020] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Active learning promotes the capacity of problem solving and decision making among learners. Teachers who apply instructional processes toward active participation of learners help their students develop higher order thinking skills. Due to the recent paradigm shift toward adopting competency-based curricula in the education of healthcare professionals in India, there is an emergent need for physiology instructors to be trained in active-learning methodologies and to acquire abilities to promote these curriculum changes. To address these issues, a series of International Union of Physiological Sciences (IUPS) workshops on physiology education techniques in four apex centers in India was organized in November 2018 and November 2019. The "hands-on" workshops presented the methodologies of case-based learning, problem-based learning, and flipped classroom; the participants were teachers of basic sciences and human and veterinary medicine. The workshop series facilitated capacity building and creation of a national network of physiology instructors interested in promoting active-learning techniques. The workshops were followed by a brainstorming meeting held to assess the outcomes. The aim of this report is to provide a model for implementing a coordinated series of workshops to support national curriculum change and to identify the organizational elements essential for conducting an effective Physiology Education workshop. The essential elements include a highly motivated core organizing team, constant dialogue between core organizing and local organizing committees, a sufficient time frame for planning and execution of the event, and opportunities to engage students at host institutions in workshop activities.
Affiliation(s)
- Dinu S Chandran: Department of Physiology, All India Institute of Medical Sciences, New Delhi, India
- Susan M Barman: Department of Pharmacology and Toxicology, Michigan State University, East Lansing, Michigan
- Liisa M Peltonen: Department of Physiology, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Sarmishtha Ghosh: Centre for Education, International Medical University, Kuala Lumpur, Malaysia
- Renuka Sharma: Department of Physiology, Vardhman Mahavir Medical College and Safdarjung Hospital, New Delhi, India
- Manasi Bhattacharjee: Department of Physiology, Vardhman Mahavir Medical College and Safdarjung Hospital, New Delhi, India
- Bharti Bhandari Rathore: Department of Physiology, Government Institute of Medical Sciences, Greater Noida, Gautam Buddha Nagar, Uttar Pradesh, India
- Robert G Carroll: Office of Medical Education, Brody School of Medicine at East Carolina University, Greenville, North Carolina
- Jayasree Sengupta: Department of Physiology, All India Institute of Medical Sciences, New Delhi, India
- Julie Y H Chan: Institute for Translational Research in Biomedicine, Kaohsiung Chang Gung Memorial Hospital, Kaohsiung, Taiwan
- Debabrata Ghosh: Department of Physiology, All India Institute of Medical Sciences, New Delhi, India
36
Virk A, Joshi A, Mahajan R, Singh T. The power of subjectivity in competency-based assessment. J Postgrad Med 2020; 66:200-205. [PMID: 33037168 PMCID: PMC7819378 DOI: 10.4103/jpgm.jpgm_591_20] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
With the introduction of the competency-based undergraduate curriculum in India, a paradigm shift in assessment methods and tools will be the need of the hour. Competencies are complex combinations of various attributes, many of which cannot be assessed by objective methods. Assessment of the affective and communication domains has long been neglected for want of objective methods. Areas like professionalism, ethics, altruism, and communication, so vital to being an Indian Medical Graduate, can be assessed longitudinally only by subjective means. Though subjectivity has often been questioned as being prone to bias, it has been shown time and again that subjective assessment in expert hands gives results comparable to those of any objective assessment. By insisting on objectivity, we may compromise the validity of the assessment and also deprive students of enriched subjective feedback and judgement. This review highlights the importance of subjective assessment in competency-based assessment and ways and means of improving the rigor of subjective assessment, with particular emphasis on the development and use of rubrics.
Affiliation(s)
- A Virk: Adesh Medical College & Hospital, Shahabad (M), Haryana, India
- A Joshi: Pramukhswami Medical College, Karamsad, Gujarat, India
- R Mahajan: Adesh Institute of Medical Sciences & Research, Bathinda, Punjab, India
- T Singh: SGRD Institute of Medical Sciences and Research, Amritsar, Punjab, India
37
Jafri L, Siddiqui I, Khan AH, Tariq M, Effendi MUN, Naseem A, Ahmed S, Ghani F, Alidina S, Shah N, Majid H. Fostering teaching-learning through workplace based assessment in postgraduate chemical pathology residency program using virtual learning environment. BMC MEDICAL EDUCATION 2020; 20:383. [PMID: 33097037 PMCID: PMC7582426 DOI: 10.1186/s12909-020-02299-8] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/26/2020] [Accepted: 10/09/2020] [Indexed: 06/11/2023]
Abstract
BACKGROUND The principle of workplace based assessment (WBA) is to assess trainees at work, with feedback integrated into the program simultaneously. A student-driven WBA model was introduced, and the perception of this teaching method was subsequently evaluated through feedback from the faculty and the postgraduate trainees (PGs) of a residency program. METHODS A descriptive multimethod study was conducted. A WBA program was designed for PGs in Chemical Pathology on Moodle; the forms utilized were case-based discussion (CBD), direct observation of practical skills (DOPS), and evaluation of clinical events (ECE). Consented assessors and PGs were trained on WBA through a workshop. A pretest and posttest were conducted to assess PGs' knowledge before and after WBA. Every time a WBA form was filled, the perceptions of PGs and assessors towards WBA, the time taken to conduct a single WBA, and feedback were recorded. Qualitative feedback from faculty and PGs on their perception of WBA was collected via interviews. WBA tools data and qualitative feedback were used to evaluate the acceptability and feasibility of the new tools. RESULTS Six eligible PGs and seventeen assessors participated in this study. A total of 79 CBDs (assessors n = 7, PGs n = 6), 12 ECEs (assessors n = 6, PGs n = 5), and 20 DOPS (assessors n = 6, PGs n = 6) were documented. The PGs' average pretest score was 55.6%, which improved to 96.4% in the posttest (p < 0.05). Scores of the annual assessment before and after implementation of WBA also showed significant improvement (p = 0.039). The overall mean time taken to evaluate PGs was 12.6 ± 9.9 min, and the mean feedback time was 9.2 ± 7.4 min. Mean WBA process satisfaction of assessors and PGs on a Likert scale of 1 to 10 was 8 ± 1 and 8.3 ± 0.8, respectively. CONCLUSION Both assessors and fellows were satisfied with the introduction and implementation of WBA. It gave the fellows the opportunity to interact with assessors more often and learn from their rich experience. The gain in PGs' knowledge was evident from the statistically significant improvement in their assessment scores after WBA implementation.
Affiliation(s)
- Lena Jafri: Section of Chemical Pathology, Department of Pathology and Laboratory Medicine, Aga Khan University, Stadium Road, Karachi 74800, Pakistan
- Imran Siddiqui: Section of Chemical Pathology, Department of Pathology and Laboratory Medicine, Aga Khan University, Stadium Road, Karachi 74800, Pakistan
- Aysha Habib Khan: Section of Chemical Pathology, Department of Pathology and Laboratory Medicine, Aga Khan University, Stadium Road, Karachi 74800, Pakistan
- Muhammed Tariq: Department of Medicine, Aga Khan University, Karachi, Pakistan
- Muhammad Umer Naeem Effendi: Section of Chemical Pathology, Department of Pathology and Laboratory Medicine, Aga Khan University, Stadium Road, Karachi 74800, Pakistan
- Azra Naseem: Blended & Digital Learning Network, Aga Khan University, Karachi, Pakistan
- Sibtain Ahmed: Section of Chemical Pathology, Department of Pathology and Laboratory Medicine, Aga Khan University, Stadium Road, Karachi 74800, Pakistan
- Farooq Ghani: Section of Chemical Pathology, Department of Pathology and Laboratory Medicine, Aga Khan University, Stadium Road, Karachi 74800, Pakistan
- Shahnila Alidina: Department of Pathology and Laboratory Medicine, Aga Khan University, Karachi, Pakistan
- Nadir Shah: eLearning Developer, Department of I.T. Academics and Computing, Aga Khan University, Karachi, Pakistan
- Hafsa Majid: Section of Chemical Pathology, Department of Pathology and Laboratory Medicine, Aga Khan University, Stadium Road, Karachi 74800, Pakistan
38
Validity of entrustment scales within anesthesiology residency training. Can J Anaesth 2020; 68:53-63. [PMID: 33083924 DOI: 10.1007/s12630-020-01823-0] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2019] [Revised: 07/09/2020] [Accepted: 07/12/2020] [Indexed: 10/23/2022] Open
Abstract
INTRODUCTION Competency-based medical education requires robust assessment in authentic clinical environments. Within work-based assessment, entrustment scales have emerged as a means of describing a trainee's ability to perform competently. Nevertheless, the psychometric properties of entrustment-based assessment are relatively unknown, particularly in anesthesiology. This study assessed the generalizability and extrapolation evidence for entrustment scales within a program of assessment during anesthesiology training. METHODS Entrustment scores were collected during the first seven blocks of training for three resident cohorts. Scores were recorded during daily evaluations using a Clinical Case Assessment Tool (CCAT) in the preoperative, intraoperative, and postoperative settings. The reliability of the entrustment scale was estimated using generalizability theory. Spearman's correlations measured the relationship between median entrustment scores and percentile scores on the Anesthesia Knowledge Test (AKT)-1 and AKT-6, mean Objective Structured Clinical Examination (OSCE) scores, and rankings of performance by the Clinical Competence Committee (CCC). RESULTS Analyses were derived from 2,309 CCATs from 35 residents. The reliability or generalizability (G) coefficient of the entrustment scale was 0.73 (95% confidence interval [CI], 0.70 to 0.76), and the internal consistency was 0.86 (95% CI, 0.84 to 0.88). Intraoperative entrustment scores correlated significantly with the AKT-6 (rho = 0.51, P = 0.01), mean OSCE score (rho = 0.45, P = 0.04), and CCC performance rankings (rho = 0.52, P = 0.006). CONCLUSION As part of an assessment program, entrustment scales used early in anesthesiology training showed evidence of validity. Intraoperative entrustment scores had good reliability and acceptable internal consistency. Entrustment scores interpreted in this setting may be a valuable complement to traditional summative evaluations.
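Spearman's rho, used throughout the correlational analyses in this abstract, is simply a Pearson correlation computed on average ranks, which makes it robust to monotone but nonlinear relationships between, say, entrustment scores and knowledge-test percentiles. A minimal sketch of the calculation (a hypothetical helper, not the study's code):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of average ranks."""
    def avg_ranks(v):
        v = np.asarray(v, dtype=float)
        order = np.argsort(v)
        r = np.empty(len(v))
        r[order] = np.arange(1, len(v) + 1)
        for val in np.unique(v):      # tied values share the mean of their ranks
            mask = v == val
            r[mask] = r[mask].mean()
        return r
    rx, ry = avg_ranks(x), avg_ranks(y)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))
```

Any strictly increasing transformation of either variable leaves rho unchanged, which is why it suits ordinal entrustment scales better than a plain Pearson correlation.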
39
Tariq M, Govaerts M, Afzal A, Ali SA, Zehra T. Ratings of performance in multisource feedback: comparing performance theories of residents and nurses. BMC MEDICAL EDUCATION 2020; 20:355. [PMID: 33046055 PMCID: PMC7549199 DOI: 10.1186/s12909-020-02276-1] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/16/2019] [Accepted: 10/01/2020] [Indexed: 06/11/2023]
Abstract
BACKGROUND Multisource feedback (MSF) is increasingly being used to assess trainee performance, with different assessor groups fulfilling a crucial role in the utility of assessment data. However, in health professions education, research on assessor behaviors in MSF is limited. When assessing trainee performance in work settings, assessors use multidimensional conceptualizations of what constitutes effective performance, also called personal performance theories, to distinguish between various behaviors and subcompetencies. This may not only explain assessor variability in MSF, but also result in differing acceptance (and use) of assessment data for developmental purposes. The purpose of this study was to explore the performance theories of two assessor groups (residents and nurses) when assessing the performance of residents. METHODS A constructivist, inductive qualitative research approach and semi-structured interviews following MSF were used to explore the performance theories of 14 nurses and 15 residents in the department of internal medicine at Aga Khan University (AKU). Inductive thematic content analysis of interview transcripts was used to identify and compare key dimensions in residents' and nurses' performance theories used in the evaluation of resident performance. RESULTS Seven major themes, reflecting key dimensions of assessors' performance theories, emerged from the qualitative data: communication skills, patient care, accessibility, teamwork skills, responsibility, medical knowledge, and professional attitude. There were considerable overlaps, but also meaningful differences, in the performance theories of residents and nurses, especially with respect to accessibility, teamwork, and medical knowledge. CONCLUSION Residents' and nurses' performance theories for assessing resident performance overlap to some extent, yet also show meaningful differences with respect to the performance dimensions they pay attention to or consider most important. In MSF, different assessor groups may therefore hold different performance theories, depending on their role. Our results further our understanding of assessor source effects in MSF. Implications of our findings relate to the implementation of MSF, the design of rating scales, and the interpretation and use of MSF data for selection and performance improvement.
Affiliation(s)
- Muhammad Tariq: Department for Educational Development & Department of Medicine, Aga Khan University, P.O. Box 3500, Stadium Road, Karachi 74800, Pakistan
- Marjan Govaerts: School of Health Professions Education, Maastricht University, Maastricht, Netherlands
- Azam Afzal: Department for Educational Development and Department of Medicine, Aga Khan University, Karachi, Pakistan
- Syed Ahsan Ali: Department of Medicine, Aga Khan University, Karachi, Pakistan
- Tabassum Zehra: Department for Educational Development & Department of Medicine, Aga Khan University, P.O. Box 3500, Stadium Road, Karachi 74800, Pakistan
40
Reece A, Foard L. START - evaluating a novel assessment of consultant readiness in paediatrics: The entry not the exit. MEDICAL TEACHER 2020; 42:1027-1036. [PMID: 32644838 DOI: 10.1080/0142159x.2020.1779918] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
The Royal College of Paediatrics and Child Health (RCPCH) introduced a new end-of-training assessment in 2012, known as START, the Speciality Trainee Assessment of Readiness for Tenure [as a Consultant]. It is a novel, formative, multi-scenario, OSCE-style, out-of-workplace assessment using unseen scenarios with generic, external assessors, undertaken in the trainees' penultimate training year. This study considers whether the assessment assists in preparing senior paediatric trainees for consultant working. A mixed qualitative and quantitative study in the post-positivist paradigm was designed. Subjects were paediatricians who had taken START and completed their paediatric training. Methods were an online questionnaire survey and a key informant interview. The assessment is viewed positively, but some trainees report negative experiences. Trainees find value in the formative feedback, which generally helps direct them towards focussing their training in the final year before completion of training and consultant appointment. For many respondents, the assessment highlighted areas for further development in their subsequent training, was relevant to consultant working, and was useful for consultant interview preparation. Of least value were travelling, cost, assessor performance, feedback quality, the feeling of a summative exam, and sub-speciality involvement. Overall, START supports the transition to consultant working.
Affiliation(s)
- Ashley Reece: Department of Paediatrics, Watford General Hospital, Hertfordshire, UK; Royal College of Paediatrics and Child Health, London, UK
- Lucy Foard: Royal College of Paediatrics and Child Health, London, UK
41
Croft H, Gilligan C, Rasiah R, Levett-Jones T, Schneider J. Developing a validity argument for a simulation-based model of entrustment in dispensing skills assessment framework. CURRENTS IN PHARMACY TEACHING & LEARNING 2020; 12:1081-1092. [PMID: 32624137 DOI: 10.1016/j.cptl.2020.04.028] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/20/2019] [Revised: 03/13/2020] [Accepted: 04/18/2020] [Indexed: 06/11/2023]
Abstract
INTRODUCTION Integrated assessment of multiple competencies at once, including entrustable professional activity (EPA)-based assessment, is emerging as an effective approach to competency-based evaluation of health professionals. However, there is an absence of validated assessment frameworks in entry-level pharmacy education. We aimed to develop an assessment framework, and to establish a validity argument containing multiple sources of evidence, for use in the integrated assessment of pharmacy students' competency in all aspects of the supply of prescribed medicine(s). METHODS A two-phase prospective study was conducted. Phase 1 involved development and content validation of the Model of Entrustment in Dispensing Skills (MEDS) assessment framework using a literature review, a think-aloud study, and expert consultation. In phase 2, a pilot study was conducted with faculty and expert assessors to test the framework. Subsequent analysis involved psychometric evaluation of the rating scales and usability testing. RESULTS Validity evidence was collected and organised across the two study phases. The MEDS framework had good evidence of content validity, supported by the rigorous development and consultation process as well as by case sampling, with 88% of national practice-based competencies represented across the two simulations. Reliability coefficients were high and acceptable, supporting strong agreement across domains, students, and simulations, as well as a strong correlation between the EPA and total score (Spearman's rho = 0.725, P < .001). CONCLUSIONS This study describes a valid and rigorous approach to the implementation and interpretation of an integrated simulation-based assessment tool for determining pharmacy students' progress towards entrustment for independent medication supply practice.
Affiliation(s)
- Hayley Croft: School of Biomedical Sciences and Pharmacy, Faculty of Health and Medicine, University of Newcastle, NSW, Australia
- Conor Gilligan: School of Medicine and Public Health, Faculty of Health and Medicine, University of Newcastle, NSW, Australia
- Rohan Rasiah: Western Australian Centre for Rural Health, University of Western Australia, WA, Australia
- Jennifer Schneider: School of Medicine and Public Health, Faculty of Health and Medicine, University of Newcastle, NSW, Australia
42
Zhang CX, Crawford E, Marshall J, Bernard A, Walker-Smith K. Developing interprofessional collaboration between clinicians, interpreters, and translators in healthcare settings: outcomes from face-to-face training. J Interprof Care 2020; 35:521-531. [PMID: 32693645 DOI: 10.1080/13561820.2020.1786360] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Abstract
Interprofessional collaboration between clinicians, interpreters, and translators is crucial to providing care for consumers with limited English proficiency. Interprofessional training for these professions has been overlooked outside of the medical field. This study investigated whether face-to-face training for speech pathologists, interpreters, and translators improved their knowledge, confidence, practice, and attitudes to engage in interprofessional collaboration. It also examined whether single-profession training for speech pathologists can produce similar training outcomes when delivered to multiple healthcare professions. Thirty interpreters and translators (30 training), 49 speech pathologists (27 training, 22 control), and a mixed group of 24 clinicians from eight professions (16 training, 8 control) completed surveys before, after, and two months after their respective training event. Training outcomes were similar across cohorts. Knowledge and confidence improved and were maintained after two months. Attitudes toward interprofessional collaboration were positive despite perceptions of challenge, and this was largely unchanged after training. Intent to implement optimal practices after training was greater than self-reported practices two months later. While years of professional experience did not affect training outcomes for clinicians, knowledge improvement for interpreters was associated with having less professional experience. Findings highlight the need to reevaluate service planning, policy, and workforce development strategies alongside foundation level training to deliver effective interprofessional education for clinicians, interpreters, and translators in healthcare settings.
Affiliation(s)
- Claire Xiaochi Zhang: Speech Pathology Department, Queensland Children's Hospital, Children's Health Queensland Hospital and Health Service, Brisbane, Australia; School of Health and Rehabilitation Sciences, The University of Queensland, Brisbane, Australia
- Emma Crawford: School of Health and Rehabilitation Sciences, The University of Queensland, Brisbane, Australia; Poche Centre for Indigenous Health, The University of Queensland, Brisbane, Australia
- Jeanne Marshall: Speech Pathology Department, Queensland Children's Hospital, Children's Health Queensland Hospital and Health Service, Brisbane, Australia; School of Health and Rehabilitation Sciences, The University of Queensland, Brisbane, Australia
- Anne Bernard: Queensland Facility for Advanced Bioinformatics, Institute for Molecular Bioscience, The University of Queensland, Brisbane, Australia
- Katie Walker-Smith: Speech Pathology Department, Queensland Children's Hospital, Children's Health Queensland Hospital and Health Service, Brisbane, Australia
43
Oberink R, Boom SM, Zwitser RJ, van Dijk N, Visser MRM. Assessment of motivational interviewing: Psychometric characteristics of the MITS 2.1 in general practice. PATIENT EDUCATION AND COUNSELING 2020; 103:1311-1318. [PMID: 32107095 DOI: 10.1016/j.pec.2020.02.009] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/11/2019] [Revised: 01/07/2020] [Accepted: 02/07/2020] [Indexed: 06/10/2023]
Abstract
OBJECTIVE Motivational interviewing (MI) is increasingly used in healthcare. The Motivational Interviewing Target Scheme 2.1 (MITS) can be used to assess MI in short consultations. This quantitative validation study is a sequel to a qualitative study, which showed that the MITS is suitable for low-stakes assessment. We collected validity evidence to determine its suitability for high-stakes assessment in the GP setting. METHODS Consultations of GPs and GP trainees were assessed using the MITS. The internal structure was studied using generalizability theory; the intraclass correlation (ICC) and convergent and divergent validity were determined. RESULTS Two coders and seven consultations were found to be necessary for high-stakes assessment. ICCs were higher for more experienced coders. Convergent validity was found; results for divergent validity were mixed. CONCLUSION The MITS is a suitable instrument for high-stakes MI assessment in the GP setting. The numbers of consultations and coders needed for assessment are comparable to those of other instruments for assessing communication skills. PRACTICE IMPLICATIONS The MITS can be used to assess consultations for their MI consistency in the GP setting, where most consultations are relatively short and only partially dedicated to behaviour change. As the MITS assesses complex communication skills, experienced coders are needed.
Affiliation(s)
- Riëtta Oberink: Department of General Practice/Family Medicine, Amsterdam UMC, Location AMC, University of Amsterdam, Meibergdreef 15, 1105 AZ, Amsterdam, the Netherlands
- Saskia M Boom: Department of General Practice/Family Medicine, Amsterdam UMC, Location AMC, University of Amsterdam, Meibergdreef 15, 1105 AZ, Amsterdam, the Netherlands
- Robert J Zwitser: Department of Psychology, University of Amsterdam, the Netherlands
- Nynke van Dijk: Department of General Practice/Family Medicine, Amsterdam UMC, Location AMC, University of Amsterdam, Meibergdreef 15, 1105 AZ, Amsterdam, the Netherlands
- Mechteld R M Visser: Department of General Practice/Family Medicine, Amsterdam UMC, Location AMC, University of Amsterdam, Meibergdreef 15, 1105 AZ, Amsterdam, the Netherlands
44
Abstract
Introduction: Anesthesiology requires procedure fulfillment, real-time problem and crisis resolution, and the forecasting of complications, among other skills; therefore, the evaluation of its learning should center on how students achieve competence rather than solely on knowledge acquisition. The literature shows that, despite the existence of numerous evaluation strategies, these remain underrated in most cases due to unawareness.
Objective: The present article aims to explain the process of competency-based anesthesiology assessment, to offer a brief description of the learning domains evaluated, theories of knowledge, instruments, and assessment systems in the area, and, finally, to show some of the most relevant results regarding assessment systems in Colombia.
Methodology: The results obtained in “Characteristics of the evaluation systems used by anesthesiology residency programs” among stakeholders in the educational process motivated the publishing of this discussion around the topic of competency-based assessment in anesthesiology. Following a bibliographic search with these keywords through PubMed, OVID, ERIC, DIALNET, and REDALYC, 110 articles were reviewed and 75 were established as relevant for the research’s theoretical framework.
Results and conclusion: Anesthesiology assessment should be conceived around the multidimensional nature of competencies; it must be longitudinal and focused on the learning objectives.
Collapse
|
45
|
Haring CM, Klaarwater CCR, Bouwmans GA, Cools BM, van Gurp PJM, van der Meer JWM, Postma CT. Validity, reliability and feasibility of a new observation rating tool and a post encounter rating tool for the assessment of clinical reasoning skills of medical students during their internal medicine clerkship: a pilot study. BMC MEDICAL EDUCATION 2020; 20:198. [PMID: 32560648 PMCID: PMC7304120 DOI: 10.1186/s12909-020-02110-8] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/23/2019] [Accepted: 06/11/2020] [Indexed: 06/11/2023]
Abstract
BACKGROUND Systematic assessment of the clinical reasoning skills of medical students in clinical practice is very difficult, partly because the fundamental mechanisms underlying the process of clinical reasoning are poorly understood. METHODS We previously developed an observation tool to assess the clinical reasoning skills of medical students during clinical practice, consisting of an 11-item observation rating form (ORT). In the present study we verified the validity, reliability and feasibility of this tool and of an already existing post-encounter rating tool (PERT) among medical students during the internal medicine clerkship. RESULTS Six raters each assessed the same 15 student-patient encounters. The internal consistency (Cronbach's alpha) was 0.87 (0.71-0.84) for the ORT and 0.81 (0.71-0.87) for the 5-item PERT. The intraclass correlation coefficient for single measurements was poor for both the ORT (0.32, p < 0.001) and the PERT (0.36, p < 0.001). The generalizability study (G-study) and decision study (D-study) showed that 6 raters are required to achieve a G-coefficient of > 0.7 for the ORT, and 7 raters for the PERT. The largest source of variance was the interaction between raters and students. There was a consistent correlation between the ORT and PERT of 0.53 (p = 0.04). CONCLUSIONS The ORT and PERT are both feasible, valid and reliable instruments for assessing students' clinical reasoning skills in clinical practice.
Collapse
|
46
|
Berl Q, Resseguier N, Katsogiannou M, Mauviel F, Carcopino X, Boubli L, Blanc J. Objective assessment of obstetrics residents' surgical skills in caesarean: Development and evaluation of a specific rating scale. J Gynecol Obstet Hum Reprod 2020; 50:101812. [PMID: 32439616 DOI: 10.1016/j.jogoh.2020.101812] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2019] [Revised: 05/09/2020] [Accepted: 05/11/2020] [Indexed: 11/24/2022]
Abstract
OBJECTIVE To develop a modified version of the Objective Structured Assessment of Technical Skill (OSATS) rating scale for evaluating surgical skills specific to caesarean section, and to assess its relevance in documenting residents' learning curve during their training. Secondary objectives were to verify the scale's stability across levels of caesarean difficulty and to compare self-assessment with hetero-assessment in order to propose a practical application of this rating scale during residency. STUDY DESIGN We conducted a multicentre observational prospective study from May 2018 to November 2018. All residents at that time could participate and fill in the rating scale after a caesarean; senior surgeons had to fill in the same rating scale. We analysed the correlation between self-assessments and hetero-assessments and the rating scale's sensitivity to change. The relevance of the scale's items was examined by principal component analysis, factor analysis and reliability analysis. RESULTS In total, 234 rating scales were completed, evaluating 18 residents. Our study demonstrated that our rating scale could be used to evaluate the surgical skills of residents during caesarean and to distinguish their year of residency (p < 0.001), with a high correlation between self- and hetero-assessment (intraclass correlation coefficient for global score: 0.78; 95% CI 0.68-0.86). The principal component analysis revealed two dimensions corresponding to the two parts of the rating scale, and the factor analysis confirmed the distribution of items across these two dimensions. Cronbach's alpha indicated how well the scale's items represented the full set of potential theoretical items (0.93, 95% CI 0.82-0.95). CONCLUSION Our rating scale could be used for self-assessment during residency and as a hetero-assessment tool for validating defined stages of the internship.
Collapse
Affiliation(s)
- Quentin Berl
- Department of Obstetrics and Gynecology, Nord Hospital, APHM, Chemin des Bourrely, 13015, Marseille, France
| | - Noémie Resseguier
- EA 3279, Public Health, Chronic Diseases and Quality of Life, Research Unit, Aix-Marseille University, 13284, Marseille, France
| | - Maria Katsogiannou
- Hôpital Saint Joseph, Department of Obstetrics and Gynecology, FR-13008, Marseille, France
| | - Franck Mauviel
- Department of Obstetrics and Gynecology, Ste Musse Hospital, 54, rue Henri Sainte Claire Deville, 83000, Toulon, France
| | - Xavier Carcopino
- Department of Obstetrics and Gynecology, Nord Hospital, APHM, Chemin des Bourrely, 13015, Marseille, France; Aix-Marseille University (AMU), Univ Avignon, CNRS, IRD, IMBE UMR, Marseille, France
| | - Léon Boubli
- Department of Obstetrics and Gynecology, Nord Hospital, APHM, Chemin des Bourrely, 13015, Marseille, France
| | - Julie Blanc
- Department of Obstetrics and Gynecology, Nord Hospital, APHM, Chemin des Bourrely, 13015, Marseille, France; EA 3279, Public Health, Chronic Diseases and Quality of Life, Research Unit, Aix-Marseille University, 13284, Marseille, France.
| |
Collapse
|
47
|
Sukhera J, Watling CJ, Gonzalez CM. Implicit Bias in Health Professions: From Recognition to Transformation. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2020; 95:717-723. [PMID: 31977339 DOI: 10.1097/acm.0000000000003173] [Citation(s) in RCA: 74] [Impact Index Per Article: 18.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/08/2023]
Abstract
Implicit bias recognition and management curricula are offered as an increasingly popular solution to address health disparities and advance equity. Despite growth in the field, approaches to implicit bias instruction are varied and have mixed results. The concept of implicit bias recognition and management is relatively nascent, and discussions related to implicit bias have also evoked critique and controversy. In addition, challenges related to assessment, faculty development, and resistant learners are emerging in the literature. In this context, the authors have reframed implicit bias recognition and management curricula as unique forms of transformative learning that raise critical consciousness in both individuals and clinical learning environments. The authors have proposed transformative learning theory (TLT) as a guide for implementing educational strategies related to implicit bias in health professions. When viewed through the lens of TLT, curricula to recognize and manage implicit biases are positioned as a tool to advance social justice.
Collapse
Affiliation(s)
- Javeed Sukhera
- J. Sukhera is associate professor of psychiatry and pediatrics and scientist, Centre for Education Research and Innovation, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada; ORCID: http://orcid.org/0000-0001-8146-4947. C.J. Watling is professor of clinical neurological sciences and oncology and associate dean for postgraduate medical education, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada. C.M. Gonzalez is associate professor of medicine, Albert Einstein College of Medicine and Montefiore Medical Center, Bronx, New York. At the time of writing, she was also a scholar, Macy Faculty Scholars Program, Josiah Macy Jr. Foundation, and Amos Medical Faculty Development Program, Robert Wood Johnson Foundation
| | | | | |
Collapse
|
48
|
Development and validation of the Australian Midwifery Standards Assessment Tool (AMSAT) to the Australian Midwife Standards for Practice 2018. Women Birth 2020; 33:135-144. [DOI: 10.1016/j.wombi.2019.08.004] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2019] [Revised: 08/06/2019] [Accepted: 08/06/2019] [Indexed: 11/22/2022]
|
49
|
Beshara S, Herron D, Moles RJ, Chaar B. Status of Pharmacy Ethics Education in Australia and New Zealand. AMERICAN JOURNAL OF PHARMACEUTICAL EDUCATION 2020; 84:7452. [PMID: 32313274 PMCID: PMC7159001 DOI: 10.5688/ajpe7452] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/13/2018] [Accepted: 08/01/2019] [Indexed: 06/11/2023]
Abstract
Objective. To explore models of teaching, available resources, and the delivery of a standardized course in pharmacy ethics. Methods. An email invitation was sent to the educator responsible for teaching pharmacy ethics at each of 19 institutions in Australia and New Zealand. Over a six- to eight-week period, semi-structured interviews were conducted in person, by email, or by phone, and were audio-recorded where possible, transcribed verbatim, and entered into data analysis software. Using an inductive analysis approach, themes related to the topics and issues discussed in the interview process were identified. Results. Of the educators invited to participate, 17 completed an interview and were included in this study. Participants reported a paucity of resources available for teaching pharmacy ethics at schools in Australia and New Zealand. Compounding this issue were the lack of expertise and the ad hoc processes educators used to create their courses. Assessment methods varied between institutions. Participants felt schools needed to move toward a more standardized pharmacy ethics course with clear and defined guidelines. Conclusion. This study identified many areas in pharmacy ethics education that need improvement and revealed the need to develop resources and a course structure that adhere to the highest level of Miller's pyramid, while using known frameworks to evaluate ethical competency.
Collapse
Affiliation(s)
| | - David Herron
- James Cook University, College of Medicine and Dentistry, Queensland, Australia
| | - Rebekah J. Moles
- The University of Sydney, Sydney Pharmacy School, Sydney, Australia
| | - Betty Chaar
- The University of Sydney, Sydney Pharmacy School, Sydney, Australia
| |
Collapse
|
50
|
Dohms MC, Collares CF, Tibério IC. Video-based feedback using real consultations for a formative assessment in communication skills. BMC MEDICAL EDUCATION 2020; 20:57. [PMID: 32093719 PMCID: PMC7041283 DOI: 10.1186/s12909-020-1955-6] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/07/2019] [Accepted: 01/30/2020] [Indexed: 05/04/2023]
Abstract
BACKGROUND Pre-recorded videotapes have become the standard approach when teaching clinical communication skills (CCS). Furthermore, video-based feedback (VF) has proven to be beneficial in formative assessments. However, VF on CCS using videos pre-recorded in real-life settings is less commonly studied than VF using simulated patients. We aimed to explore: 1) perceptions of the potential benefits and challenges of this kind of VF; and 2) differences in the CCS scores of first-year medical residents in primary care before and after a communication program using VF in a curricular formative assessment. METHOD We conducted a pre/post study with a control group. The intervention consisted of VF sessions on CCS, performed in a small group with peers and a facilitator, in which participants reviewed clinical consultations pre-recorded in a primary care setting with real patients. Before and after the intervention, 54 medical residents performed two clinical examinations with simulated patients (SP), answered quantitative scales (Perception of Patient-Centeredness and Jefferson Empathy Scale), and completed semi-structured qualitative questionnaires. The performances were scored by the SP (Perception of Patient-Centeredness and CARE scale) and by two blinded raters (SPIKES protocol-based and CCOG-based scales). The quantitative data analysis employed repeated-measures ANOVA; the qualitative analysis used the Braun and Clarke framework for thematic analysis. RESULTS The quantitative analyses did not reveal any significant differences in the sum scores of the questionnaires, except for the Jefferson Empathy Scale. In the qualitative questionnaires, the main potential benefits of the VF method that emerged from the thematic analysis were self-perception, peer feedback, a patient-centered approach, and the incorporation of reflective practices. Challenges reported by facilitators were the struggle to relate the VF to theoretical references and the residents' initial stress at recording and watching themselves on video. CONCLUSION VF taken from real-life settings seems to be associated with a significant increase in self-perceived empathy. The study of other quantitative outcomes related to this VF intervention requires larger sample sizes. VF with real patients in real healthcare settings appears to offer an opportunity for deeper self-assessment, peer feedback, and reflective practices.
Collapse
Affiliation(s)
- M. C. Dohms
- Center for Development in Medical Education, University of Sao Paulo, Av Dr. Arnaldo, São Paulo, 01246-903 Brazil
| | - C. F. Collares
- Department of Educational Development and Research, School of Health Professions, Education, Maastricht University, PO Box 616, 6200MD Maastricht, The Netherlands
| | - I. C. Tibério
- Center for Development in Medical Education, University of Sao Paulo, Av Dr. Arnaldo, São Paulo, 01246-903 Brazil
| |
Collapse
|