1. Zic WG. Upholding ethical pillars in nursing academia. Nurs Ethics 2025; 32:892-899. [PMID: 39312643; DOI: 10.1177/09697330241277990]
Abstract
This manuscript explores the philosophical implications of ethical principles such as honesty, integrity, fairness, reliability, and objectivity and their impact on professional nursing. By examining these values within Western society, the discussion highlights the importance of integrating these virtues into contemporary nursing education. Through a detailed analysis of each precept, the document underscores their potential to enhance the quality of education, improve interactions among faculty and staff, and achieve positive student outcomes. Ultimately, this treatise advocates for a balanced pedagogical approach in nursing that leverages these elements to foster a more compassionate world, where ethical connections in academia underpin our collective existence.
2. Roberts C, Burgess A, Mossman K, Kumar K. Professional judgement: a social practice perspective on a multiple mini-interview for specialty training selection. BMC Med Educ 2025; 25:18. [PMID: 39754259; DOI: 10.1186/s12909-024-06535-3]
Abstract
BACKGROUND Interviewers' judgements play a critical role in competency-based assessments for selection, such as the multiple mini-interview (MMI). Much of the published research focuses on the psychometrics of selection and the impact of rater subjectivity. Within the context of selection for entry into specialty postgraduate training, we used an interpretivist and socio-constructivist approach to explore how and why interviewers make judgements in high-stakes selection settings whilst taking part in an MMI. METHODS We explored MMI interviewers' work processes through an institutional observational approach, based on the notion that interviewers' judgements are socially constructed and mediated by multiple factors. We gathered data through document analysis and through observations of interviewer training, candidate interactions with interviewers, and interviewer meetings. Interviews included informal encounters in a large selection centre. Data analysis balanced description with explicit interpretation of the meanings and functions of the interviewers' actions and behaviours. RESULTS Three themes were developed from the data showing how interviewers make professional judgements: 'Balancing the interplay of rules and agency', 'Participating in moderation and shared meaning making', and 'A culture of reflexivity and professional growth'. Interviewers balanced adherence to institutional rules with judgement choices based on personal expertise and knowledge. They engaged in dialogue, moderation, and shared meaning making with fellow interviewers, which enabled them to consider multiple perspectives on each candidate's performance. Interviewers engaged in self-evaluation and reflection throughout, with professional learning and growth as primary care physicians and supervisors being an emergent outcome. CONCLUSION This study offers insights into the judgement-making processes of interviewers in high-stakes MMI contexts, highlighting the balance between structured protocols and personal expertise within a socially constructed framework. By linking MMI practices to the broader work-based assessment literature, we contribute to advancing the design and implementation of more valid and fair selection tools for postgraduate training. Additionally, the study underscores the dual benefit of MMIs: not only as a selection tool but also as a platform for interviewers' professional growth. These insights offer practical implications for refining future MMI practices and improving the fairness of high-stakes selection processes.
Affiliation(s)
- Chris Roberts: School of Medicine and Population Health, Division of Medicine, The University of Sheffield, Sheffield, UK
- Annette Burgess: Sydney Medical School - Education Office, Faculty of Medicine and Health, The University of Sydney, Sydney, NSW, Australia
- Karyn Mossman: Sydney Medical School - Northern Clinical School, The University of Sydney, Sydney, NSW, Australia
- Koshila Kumar: Division of Learning and Teaching, Charles Sturt University, Bathurst, NSW, Australia; College of Medicine and Public Health, Flinders University, Adelaide, SA, Australia
3. Gingerich A, Lingard L, Sebok-Syer SS, Watling CJ, Ginsburg S. "Praise in Public; Criticize in Private": Unwritable Assessment Comments and the Performance Information That Resists Being Written. Acad Med 2024; 99:1240-1246. [PMID: 39137257; DOI: 10.1097/acm.0000000000005839]
Abstract
PURPOSE Written assessment comments are needed to archive feedback and inform decisions. Regrettably, comments are often impoverished, leaving performance-relevant information undocumented. Research has focused on comment content and on supervisors' ability and motivation to write it, but has not sufficiently examined how well the undocumented information lends itself to being written as comments. Because missing information threatens the validity of assessment processes, this study examined the performance information that resists being written. METHOD Two sequential data collection methods and multiple elicitation techniques were used to triangulate unwritten assessment comments. Between November 2022 and January 2023, physicians in Canada were recruited by email and social media to describe experiences of wanting to convey assessment information but feeling unable to express it in writing. Fifty supervisors shared examples via survey. From January to May 2023, a subset of 13 participants were then interviewed to explain further what information resisted being written and why it seemed impossible to express in writing, and to write comments in response to a video prompt or for their own "unwritable" example. Constructivist grounded theory guided data collection and analysis. RESULTS Not all performance-relevant information was equally writable. Information resisted being written as assessment comments when it would require an essay to express in writing, belonged in a conversation rather than in writing, or was potentially irrelevant and unverifiable. In particular, disclosing sensitive information discussed in a feedback conversation required extensive recoding to protect the learner and the supervisor-learner relationship. CONCLUSIONS When documenting performance information as written comments is viewed as an act of disclosure, it becomes clear why supervisors may feel compelled to leave some comments unwritten. Although supervisors can be supported in writing better assessment comments, their failure to write invites a reexamination of expectations for documenting feedback and performance information as written comments on assessment forms.
4. Tavares W, Kinnear B, Schumacher DJ, Forte M. "Rater training" re-imagined for work-based assessment in medical education. Adv Health Sci Educ Theory Pract 2023; 28:1697-1709. [PMID: 37140661; DOI: 10.1007/s10459-023-10237-8]
Abstract
In this perspective, the authors critically examine "rater training" as it has been conceptualized and used in medical education. By "rater training," they mean the educational events intended to improve rater performance and contributions during assessment events. Historically, rater training programs have focused on modifying faculty behaviours to achieve psychometric ideals (e.g., reliability, inter-rater reliability, accuracy). The authors argue these ideals may now be poorly aligned with contemporary research informing work-based assessment, introducing a compatibility threat with no clear direction on how to proceed. To address this issue, the authors provide a brief historical review of "rater training" and an analysis of the literature examining the effectiveness of rater training programs, focusing mainly on what has served to define effectiveness or improvement. They then draw on philosophical and conceptual shifts in assessment to demonstrate why the function, effectiveness aims, and structure of rater training require reimagining. These shifts include changing competencies for assessors, viewing assessment as a complex cognitive task enacted in a social context, evolving views on biases, and reprioritizing which validity evidence should be most sought in medical education. The authors aim to advance the discussion on rater training by challenging implicit incompatibility issues and stimulating ways to overcome them. They propose that "rater training" (a moniker they suggest be reserved for strong psychometric aims) be augmented with "assessor readiness" programs that link to contemporary assessment science and enact the principle of compatibility between that science and ways of engaging with advances in real-world faculty-learner contexts.
Affiliation(s)
- Walter Tavares: Department of Health and Society, Wilson Centre, Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Benjamin Kinnear: Department of Pediatrics, Cincinnati Children's Hospital Medical Center, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Daniel J Schumacher: Department of Pediatrics, Cincinnati Children's Hospital Medical Center, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Milena Forte: Department of Family and Community Medicine, Temerty Faculty of Medicine, Mount Sinai Hospital, University of Toronto, Toronto, ON, Canada
5. Yeates P, Maluf A, Cope N, McCray G, McBain S, Beardow D, Fuller R, McKinley RB. Using video-based examiner score comparison and adjustment (VESCA) to compare the influence of examiners at different sites in a distributed objective structured clinical exam (OSCE). BMC Med Educ 2023; 23:803. [PMID: 37885005; PMCID: PMC10605484; DOI: 10.1186/s12909-023-04774-4]
Abstract
PURPOSE Ensuring equivalence of examiners' judgements within distributed objective structured clinical exams (OSCEs) is key to both fairness and validity but is hampered by the lack of cross-over in the performances which different groups of examiners observe. This study develops a novel method called Video-based Examiner Score Comparison and Adjustment (VESCA), using it to compare examiners' scoring from different OSCE sites for the first time. MATERIALS/METHODS Within a summative 16-station OSCE, volunteer students were videoed on each station, and all examiners were invited to score station-specific comparator videos in addition to their usual student scoring. The linkage provided through the video scores enabled use of Many Facet Rasch Modelling (MFRM) to compare (1) examiner-cohort and (2) site effects on students' scores. RESULTS Examiner-cohorts varied by 6.9% in the overall score allocated to students of the same ability. Whilst only a tiny difference was apparent between sites, examiner-cohort variability was greater in one site than the other. Adjusting student scores produced a median change in rank position of 6 places (0.48 deciles); however, 26.9% of students changed their rank position by at least 1 decile. By contrast, only 1 student's pass/fail classification was altered by score adjustment. CONCLUSIONS Whilst comparatively limited examiner participation rates may limit interpretation of score adjustment in this instance, this study demonstrates the feasibility of using VESCA for quality assurance purposes in large-scale distributed OSCEs.
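To make the linkage idea concrete, here is a deliberately simplified sketch. The study itself fits a Many Facet Rasch Model to the video-linked scores; the toy Python below stands in for that with plain mean offsets, purely to show how shared comparator videos allow examiner-cohort severity to be estimated and removed from live scores. All cohort names and score values are hypothetical.

```python
def cohort_severity(video_scores: dict[str, list[float]]) -> dict[str, float]:
    """Estimate each examiner-cohort's severity from the scores that all
    cohorts awarded to the *same* comparator videos (the linkage)."""
    all_scores = [s for scores in video_scores.values() for s in scores]
    grand_mean = sum(all_scores) / len(all_scores)
    # Positive offset = the cohort scores the shared videos more generously
    # than average ("dovish"); negative = more severe ("hawkish").
    return {cohort: sum(scores) / len(scores) - grand_mean
            for cohort, scores in video_scores.items()}

def adjust(live_score: float, cohort: str, severity: dict[str, float]) -> float:
    """Remove a cohort's estimated severity from a student's live score."""
    return live_score - severity[cohort]

# Hypothetical comparator-video scores (percentages) from three cohorts.
video_scores = {
    "site_A_cohort_1": [62.0, 70.0, 58.0],
    "site_A_cohort_2": [68.0, 75.0, 66.0],
    "site_B_cohort_1": [60.0, 67.0, 55.0],
}
severity = cohort_severity(video_scores)
print(round(adjust(64.0, "site_A_cohort_2", severity), 1))  # 58.9: lenient cohort corrected down
```

Unlike this toy, MFRM places students, stations and examiners on a common logit scale, so severity estimates are disentangled from the ability of the particular students each cohort happened to examine.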
Affiliation(s)
- Peter Yeates: School of Medicine, Keele University, David Weatherall Building, Keele, Staffordshire, ST5 5BG, UK; Fairfield General Hospital, Northern Care Alliance NHS Foundation Trust, Bury, Greater Manchester, UK
- Adriano Maluf: School of Medicine, Keele University, David Weatherall Building, Keele, Staffordshire, ST5 5BG, UK
- Natalie Cope: School of Medicine, Keele University, David Weatherall Building, Keele, Staffordshire, ST5 5BG, UK
- Gareth McCray: School of Medicine, Keele University, David Weatherall Building, Keele, Staffordshire, ST5 5BG, UK
- Stuart McBain: School of Medicine, Keele University, David Weatherall Building, Keele, Staffordshire, ST5 5BG, UK
- Dominic Beardow: School of Medicine, Keele University, David Weatherall Building, Keele, Staffordshire, ST5 5BG, UK
- Richard Fuller: Christie Education, Christie Hospitals NHS Foundation Trust, Manchester, UK
- Robert Bob McKinley: School of Medicine, Keele University, David Weatherall Building, Keele, Staffordshire, ST5 5BG, UK
6. Amos A, Halasz G. Science, ideology, and social progress. Australas Psychiatry 2023; 31:582-583. [PMID: 37341442; DOI: 10.1177/10398562231185206]
Affiliation(s)
- Andrew Amos: Division of Tropical Health and Medicine, College of Medicine and Dentistry, James Cook University, Townsville, QLD, Australia
- George Halasz: Department of Psychological Medicine, Monash Medical Centre, Monash University, Clayton, VIC, Australia
7. Smith JF, Piemonte NM. The Problematic Persistence of Tiered Grading in Medical School. Teach Learn Med 2023; 35:467-476. [PMID: 35619232; DOI: 10.1080/10401334.2022.2074423]
Abstract
Issue: The evaluation of medical students is a critical, complex, and controversial process. It is tightly woven into the medical school curriculum, beginning at the inception of the medical student's professional journey. In this respect, medical student evaluation is among the first in a series of ongoing, lifelong assessments that influence the interpersonal, ethical, and socioeconomic dimensions necessary for an effective physician workforce. Yet tiered grading has a questionable historical pedagogic basis in American medical education, and evidence suggests that tiered grading itself is a source of student burnout, anxiety, depression, increased competitiveness, reduced group cohesion, and racial bias. Evidence: In its most basic form, medical student evaluation is an assessment of the initial cognitive and technical competencies ultimately needed for the safe and effective practice of contemporary medicine. At many American medical schools, such evaluation relies largely on norm-based comparisons, such as tiered grading. Yet tiered grading can cause student distress, is considered unfair by most students, is associated with biases against under-represented minorities, and correlates inconsistently with residency performance. While arguments that tiered grading motivates student performance have long enjoyed precedence in academia, they are not supported by robust data or theories of motivation. Implications: Given the evolving recognition of its deleterious effects on medical student mental health, cohesiveness, and diversity, the use of tiered grading in medical schools to measure or stimulate academic performance, or by residency program directors to distinguish residency applicants, remains questionable. Examination of tiered grading in its historical, psychometric, psychosocial, and moral dimensions, and of the various arguments used to maintain it, reveals a need for investigation of, if not transition to, alternative, non-tiered assessments of our medical students.
Affiliation(s)
- James F Smith: Departments of Medical Education and Medical Humanities, Creighton University, Omaha, Nebraska, USA
- Nicole M Piemonte: Departments of Medical Humanities and Student Affairs, Creighton University, Phoenix, Arizona, USA
8. Valentine N, Durning SJ, Shanahan EM, Schuwirth L. Fairness in Assessment: Identifying a Complex Adaptive System. Perspect Med Educ 2023; 12:315-326. [PMID: 37520508; PMCID: PMC10377744; DOI: 10.5334/pme.993]
Abstract
Introduction Assessment design in health professions education is continuously evolving, and there is an increasing desire to better embrace human judgement in assessment. It is therefore essential to understand what makes this judgement fair. This study builds upon existing literature by studying how assessment leaders conceptualise the characteristics of fair judgement. Methods Sixteen assessment leaders from 15 medical schools in Australia and New Zealand participated in online focus groups. Data collection and analysis occurred concurrently and iteratively. We used the constant comparison method to identify themes and build on an existing conceptual model of fair judgement in assessment. Results Fairness is a multi-dimensional construct with components at environment, system and individual levels. Components influencing fairness include articulated and agreed learning outcomes relating to the needs of society, together with a culture which allows for learner support, stakeholder agency and learning (environment level); the collection, interpretation and combination of evidence, alongside procedural strategies (system level); and appropriate individual assessments and assessor expertise and agility (individual level). Discussion Within the data we observed a fractal, that is, an infinite pattern repeating at different scales, suggesting that fair judgement should be considered a complex adaptive system. Within complex adaptive systems, it is primarily the interaction between the entities, not simply the components themselves, which influences the outcomes produced. Viewing fairness in assessment through a lens of complexity, rather than as a linear causal model, has significant implications for how we design assessment programs and seek to utilise human judgement in assessment.
Affiliation(s)
- Nyoli Valentine: Prideaux Discipline of Clinical Education, Flinders University, Bedford Park, South Australia, Australia
- Steven J. Durning: Department of Medicine, Center for Health Professions Education, Uniformed Services University of the Health Sciences, Bethesda, MD, United States
- Lambert Schuwirth: Prideaux Discipline of Clinical Education, Flinders University, Bedford Park, South Australia, Australia
9. Hu WCY, Dillon HCB, Wilkinson TJ. Educators as Judges: Applying Judicial Decision-Making Principles to High-Stakes Education Assessment Decisions. Teach Learn Med 2023; 35:168-179. [PMID: 35253558; DOI: 10.1080/10401334.2022.2038176]
Abstract
Phenomenon: Programmatic assessment and competency-based education have highlighted the need to make robust high-stakes assessment decisions on learner performance from evidence of varying types and quality. Without guidance, lengthy deliberations by decision makers and competence committees can end inconclusively with unresolved concerns. These decisional dilemmas are heightened by their potential impacts. For learners, erroneous decisions may lead to an unjustified exit from a long-desired career, or premature promotion to clinical responsibilities. For educators, there is the risk of wrongful decision-making, leading to successful appeals and mistrust. For communities, ill-prepared graduates risk the quality and safety of care. Approaches such as psychometric analyses are limited when decision-makers are faced with seemingly contradictory qualitative and quantitative evidence about the same individual. Expertise in using such evidence to make fair and defensible decisions is well established in judicial practice but is yet to be practically applied to assessment decision-making. Approach: Through interdisciplinary exchange, we investigated medical education and judicial perspectives on decision-making to explore whether principles of decision-making in law could be applied to educational assessment decision-making. Using Dialogic Inquiry, an iterative process of scholarly and mutual critique, we contrasted assessment decision-making in medical education with judicial practice to identify key principles in judicial decision-making relevant to educational assessment decisions. We developed vignettes about common but problematic high-stakes decision-making scenarios to test how these principles could apply. Findings: Over 14 sessions, we identified, described, and applied four principles for fair, reasonable, and transparent assessment decision-making: (1) the person whose interests are affected has a right to know the case against them, and to be heard; (2) reasons for the decision should be given; (3) rules should be transparent and consistently applied; and (4) like cases should be treated alike and unlike cases treated differently. Reflecting our dialogic process, we report findings by separately presenting the medical educator and judicial perspectives, followed by a synthesis describing a preferred approach to decision-making in three vignettes. Insights: Judicial principles remind educators to consider both sides of an argument, to be consistent, and to demonstrate transparency when making assessment decisions. Dialogic Inquiry is a useful approach for generating interdisciplinary insights on challenges in medical education by critiquing difference (e.g., the meaning of objectivity) and achieving synthesis where possible (e.g., fairness is not equal treatment of all cases). Our principles and exemplars provide groundwork for promoting good practice and furthering assessment research toward fairer and more robust decisions that will assist learning.
Affiliation(s)
- Wendy C Y Hu: Medical Education Unit, School of Medicine, Western Sydney University, Penrith South, New South Wales, Australia
- Hugh C B Dillon: Faculty of Law, University of New South Wales, Sydney, Australia
- Tim J Wilkinson: Education Unit, University of Otago, Christchurch, New Zealand
10. Yeates P, Maluf A, Kinston R, Cope N, McCray G, Cullen K, O'Neill V, Cole A, Goodfellow R, Vallender R, Chung CW, McKinley RK, Fuller R, Wong G. Enhancing authenticity, diagnosticity and equivalence (AD-Equiv) in multicentre OSCE exams in health professionals education: protocol for a complex intervention study. BMJ Open 2022; 12:e064387. [PMID: 36600366; PMCID: PMC9730346; DOI: 10.1136/bmjopen-2022-064387]
Abstract
INTRODUCTION Objective structured clinical exams (OSCEs) are a cornerstone of assessing the competence of trainee healthcare professionals, but have been criticised for (1) lacking authenticity, (2) variability in examiners' judgements, which can challenge assessment equivalence, and (3) limited diagnosticity of trainees' focal strengths and weaknesses. In response, this study aims to investigate whether (1) sharing integrated-task OSCE stations across institutions can increase perceived authenticity, while (2) enhancing assessment equivalence by enabling comparison of the standard of examiners' judgements between institutions using a novel methodology (video-based examiner score comparison and adjustment (VESCA)), and (3) exploring the potential to develop more diagnostic signals from data on students' performances. METHODS AND ANALYSIS The study will use a complex intervention design, developing, implementing and sharing an integrated-task (research) OSCE across four UK medical schools. It will use VESCA to compare examiner scoring differences between groups of examiners and different sites, while studying how, why and for whom the shared OSCE and VESCA operate across participating schools. Quantitative analysis will use Many Facet Rasch Modelling to compare the influence of different examiner groups and sites on students' scores, while the operation of the two interventions (shared integrated-task OSCEs and VESCA) will be studied through the theory-driven method of realist evaluation. Further exploratory analyses will examine diagnostic performance signals within the data. ETHICS AND DISSEMINATION The study will be extra to usual course requirements and all participation will be voluntary. We will uphold the principles of informed consent, the right to withdraw, confidentiality with pseudonymity, and strict data security. The study has received ethical approval from the Keele University Research Ethics Committee. Findings will be published academically and will contribute to good practice guidance on (1) the use of VESCA and (2) the sharing and use of integrated-task OSCE stations.
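For readers unfamiliar with Many Facet Rasch Modelling, the standard rating-scale formulation models the log-odds of a candidate receiving category k rather than k-1 as an additive function of the facets. The protocol does not state its exact parameterisation, so the version below, with facets for student, station and examiner group, is an illustrative assumption:

$$\ln\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = \theta_n - \delta_i - \gamma_j - \tau_k$$

where $\theta_n$ is student $n$'s ability, $\delta_i$ the difficulty of station $i$, $\gamma_j$ the severity of examiner group $j$, and $\tau_k$ the threshold for rating category $k$. Severity estimates $\gamma_j$ on this common logit scale are what permit comparison, and adjustment, of scores across examiner groups and sites.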
Affiliation(s)
- Peter Yeates: School of Medicine, Keele University, Keele, Staffordshire, UK
- Adriano Maluf: School of Medicine, Keele University, Keele, Staffordshire, UK
- Ruth Kinston: School of Medicine, Keele University, Keele, Staffordshire, UK
- Natalie Cope: School of Medicine, Keele University, Keele, Staffordshire, UK
- Gareth McCray: School of Medicine, Keele University, Keele, Staffordshire, UK
- Kathy Cullen: School of Medicine, Dentistry and Biomedical Sciences, Queen's University Belfast, Belfast, UK
- Vikki O'Neill: School of Medicine, Dentistry and Biomedical Sciences, Queen's University Belfast, Belfast, UK
- Aidan Cole: School of Medicine, Dentistry and Biomedical Sciences, Queen's University Belfast, Belfast, UK
- Ching-Wa Chung: School of Medicine, Medical Sciences and Nutrition, University of Aberdeen, Aberdeen, Scotland, UK
- Richard Fuller: School of Medicine, University of Liverpool Faculty of Health and Life Sciences, Liverpool, UK
- Geoff Wong: Nuffield Department of Primary Care Health Sciences, University of Oxford Division of Public Health and Primary Health Care, Oxford, Oxfordshire, UK
11.
Abstract
Educational change in higher education is challenging and complex, requiring engagement with a multitude of perspectives and contextual factors. In this paper, we present a case study based on our experiences of enacting a fundamental educational change in a medical program; namely, the steps taken in the transition to programmatic assessment. Specifically, we reflect on the successes and failures in embedding a coaching culture into programmatic assessment. To do this, we refer to the principles of programmatic assessment as they apply to this case and conclude with some key lessons that we have learnt from engaging in this change process. Fostering a culture of programmatic assessment that supports learners to thrive through coaching has required compromise and adaptability, particularly in light of the changes to teaching and learning necessitated by the global pandemic. We continue to inculcate this culture and enact the principles of programmatic assessment with a focus on continuous quality improvement.
12. Desire paths for workplace assessment in postgraduate anaesthesia training: analysing informal processes to inform assessment redesign. Br J Anaesth 2022; 128:997-1005. [DOI: 10.1016/j.bja.2022.03.013]
13. Prentice S, Benson J, Dorstyn D, Elliott T. Wellbeing Conceptualizations in Family Medicine Trainees: A Hermeneutic Review. Teach Learn Med 2022; 34:60-68. [PMID: 34126815; DOI: 10.1080/10401334.2021.1919519]
Abstract
PHENOMENON High levels of burnout have been widely reported among postgraduate medical trainees; however, relatively little literature has examined what 'wellbeing' means for this group. Moreover, the literature that does exist has generally overlooked the potential role of specialty factors in influencing such conceptualizations. This is particularly true for family medicine and general practice trainees, a specialty considered unique due, in part, to its focus on community-based care. The present review sought to explore conceptualizations of wellbeing specifically within the context of family medicine and general practice training. APPROACH The Embase, Ovid Medline, and PsycINFO databases were searched from inception to November 2019 for literature examining wellbeing in family medicine and general practice trainees. The literature was iteratively thematically analyzed through the process of a hermeneutic cycle. In total, 36 articles were reviewed over seven rounds, at which point saturation was reached. FINDINGS The findings confirm the complex and multifaceted nature of wellbeing as experienced by family medicine and general practice trainees. Psychological factors, including emotional intelligence, positive mental health, self-confidence, and resilience, alongside positive interpersonal relationships, rewards, and balanced interactions between trainees' personal and professional demands, were deemed critical elements. INSIGHTS A model of wellbeing that emphasizes rich connections between trainees' personal and professional life domains is proposed. Further qualitative research will help to extend current understanding of wellbeing among medical trainees, including the individuality of each specialty's experiences, with the potential to enhance interventional efforts.
Affiliation(s)
- Shaun Prentice: School of Psychology, University of Adelaide, Adelaide, South Australia, Australia
- Jill Benson: School of Medicine, University of Adelaide, Adelaide, South Australia, Australia; GPEx Ltd, Adelaide, South Australia, Australia
- Diana Dorstyn: School of Psychology, University of Adelaide, Adelaide, South Australia, Australia
14. Roberts C, Khanna P, Lane AS, Reimann P, Schuwirth L. Exploring complexities in the reform of assessment practice: a critical realist perspective. Adv Health Sci Educ Theory Pract 2021; 26:1641-1657. [PMID: 34431028; DOI: 10.1007/s10459-021-10065-8]
Abstract
Although the principles behind assessment for and as learning are well established, reforming a traditional assessment-of-learning system into a program which encompasses assessment for and as learning can be a struggle. When introducing and reporting reforms, tensions may arise among faculty because of differing beliefs about the relationship between assessment and learning and about the rules for the validity of assessments. Traditional systems of assessment of learning privilege objective, structured quantification of learners' performances and are done to the students. Newer systems of assessment promote assessment for learning, emphasise subjectivity, collate data from multiple sources, emphasise narrative-rich feedback to promote learner agency, and are done with the students. This contrast has implications for implementation and evaluative research. Research on assessment which is done to students typically asks 'what works?', whereas research on assessment that is done with the students focuses on more complex questions such as 'what works, for whom, in which context, and why?' We applied such a critical realist perspective, drawing on the interplay between structure and agency and on a systems approach, to explore what theory says about introducing programmatic assessment in the context of pre-existing traditional approaches. Using a reflective technique, the internal conversation, we developed four factors that can assist educators considering major change to assessment practice in their own contexts: enabling positive learner agency and engagement; establishing argument-based validity frameworks; designing purposeful and eclectic evidence-based assessment tasks; and developing a shared narrative that promotes reflexivity in appreciating the complex relationships between assessment and learning.
Affiliation(s)
- Chris Roberts: Faculty of Medicine and Health, Education Office, Sydney Medical School, The University of Sydney, Sydney, NSW, Australia
- Priya Khanna: Faculty of Medicine and Health, Education Office, Sydney Medical School, The University of Sydney, Sydney, NSW, Australia
- Andrew Stuart Lane: Faculty of Medicine and Health, Education Office, Sydney Medical School, The University of Sydney, Sydney, NSW, Australia
- Peter Reimann: Centre for Research on Learning and Innovation (CRLI), The University of Sydney, Sydney, NSW, Australia
- Lambert Schuwirth: Prideaux Discipline of Clinical Education, College of Medicine and Public Health, Flinders University, Adelaide, South Australia, Australia
15. Valentine N, Shanahan EM, Durning SJ, Schuwirth L. Making it fair: Learners' and assessors' perspectives of the attributes of fair judgement. Med Educ 2021; 55:1056-1066. [PMID: 34060124; DOI: 10.1111/medu.14574]
Abstract
INTRODUCTION Optimising the use of subjective human judgement in assessment requires understanding what makes judgement fair. Whilst fairness cannot be simplistically defined, the underpinnings of fair judgement within the literature have previously been combined to create a theoretically constructed conceptual model. However, understanding assessors' and learners' perceptions of what constitutes fair human judgement is also necessary. The aim of this study is to explore assessors' and learners' perceptions of fair human judgement and to compare these to the conceptual model. METHODS A thematic analysis approach was used. A purposive sample of twelve assessors and eight postgraduate trainees took part in semi-structured interviews using vignettes. Themes were identified using the process of constant comparison. Collection, analysis and coding of the data occurred simultaneously in an iterative manner until saturation was reached. RESULTS This study supported the literature-derived conceptual model, suggesting fairness is a multi-dimensional construct with components at individual, system and environmental levels. At an individual level, contextual, longitudinally collected evidence, which is supported by narrative and falls within ill-defined boundaries, is essential for fair judgement. Assessor agility and expertise are needed to interpret and interrogate evidence, identify boundaries and provide narrative feedback to allow for improvement. At a system level, factors such as multiple opportunities to demonstrate competence and improvement, multiple assessors to allow different perspectives to be triangulated, and documentation are needed for fair judgement. These system features can be optimised through procedural fairness. Finally, appropriate learning and working environments which consider patient needs and learners' personal circumstances are needed for fair judgements. DISCUSSION This study builds on the theory-derived conceptual model, demonstrating that the components of fair judgement can be explicitly articulated whilst embracing the complexity and contextual nature of health-professions assessment. It thus provides a narrative to support dialogue between learners, assessors and institutions about ensuring fair judgements in assessment.
Affiliation(s)
- Nyoli Valentine: Prideaux Discipline of Clinical Education, Flinders University, SA, Australia
- Steven J Durning: Center for Health Professions Education, Uniformed Services University of the Health Sciences, Bethesda, MD, USA
- Lambert Schuwirth: Prideaux Discipline of Clinical Education, Flinders University, SA, Australia