1. Tavares W, Hodwitz K, Rowland P, Ng S, Kuper A, Friesen F, Shwetz K, Brydges R. Implicit and inferred: on the philosophical positions informing assessment science. Adv Health Sci Educ Theory Pract 2021;26:1597-1623. PMID: 34370126. DOI: 10.1007/s10459-021-10063-w.
Abstract
Assessment practices have been increasingly informed by a range of philosophical positions. While generally beneficial, the addition of options can lead to misalignment in the philosophical assumptions associated with different features of assessment (e.g., the nature of constructs and competence, ways of assessing, validation approaches). Such incompatibility can threaten the quality and defensibility of researchers' claims, especially when left implicit. We investigated how authors state and use their philosophical positions when designing and reporting on performance-based assessments (PBA) of intrinsic roles, as well as the (in)compatibility of assumptions across assessment features. Using a representative sample of studies examining PBA of intrinsic roles, we used qualitative content analysis to extract data on how authors enacted their philosophical positions across three key assessment features: (1) construct conceptualizations, (2) assessment activities, and (3) validation methods. We also examined patterns in philosophical positioning across features and studies. In reviewing 32 papers from established peer-reviewed journals, we found that (a) authors rarely reported their philosophical positions, meaning underlying assumptions could only be inferred; (b) authors approached features of assessment in variable ways that could be informed by or associated with different philosophical assumptions; and (c) we experienced uncertainty in determining the (in)compatibility of philosophical assumptions across features. Authors' philosophical positions were often vague or absent in the selected contemporary assessment literature. Leaving such details implicit may lead to misinterpretation by knowledge users wishing to implement, build on, or evaluate the work. As such, assessing the quality and defensibility of claims may increasingly depend on who is interpreting, rather than on what is being interpreted.
Affiliation(s)
- Walter Tavares: The Wilson Centre, Temerty Faculty of Medicine, Department of Medicine, Institute for Health Policy, Management and Evaluation, University of Toronto/University Health Network, Toronto, Ontario, Canada
- Kathryn Hodwitz: Li Ka Shing Knowledge Institute, St. Michael's Hospital, Toronto, Ontario, Canada
- Paula Rowland: The Wilson Centre, Temerty Faculty of Medicine, Department of Occupational Therapy and Occupational Science, University of Toronto/University Health Network, Toronto, Ontario, Canada
- Stella Ng: The Wilson Centre, Temerty Faculty of Medicine, Department of Speech-Language Pathology, University of Toronto; Centre for Faculty Development, Unity Health Toronto, Toronto, Ontario, Canada
- Ayelet Kuper: The Wilson Centre, University Health Network/University of Toronto; Division of General Internal Medicine, Sunnybrook Health Sciences Centre; Department of Medicine, Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Farah Friesen: Centre for Faculty Development, Temerty Faculty of Medicine, University of Toronto at Unity Health Toronto, Toronto, Ontario, Canada
- Katherine Shwetz: Department of English, University of Toronto, Toronto, Ontario, Canada
- Ryan Brydges: The Wilson Centre, Temerty Faculty of Medicine, Department of Medicine, Unity Health Toronto, University of Toronto, Toronto, Ontario, Canada
2. Chen SH, Chen SC, Lai YP, Chen PH, Yeh KY. The objective structured clinical examination as an assessment strategy for clinical competence in novice nursing practitioners in Taiwan. BMC Nurs 2021;20:91. PMID: 34098937. PMCID: PMC8186223. DOI: 10.1186/s12912-021-00608-0.
Abstract
Background: Conventional written tests and professional assessments are limited in their ability to judge clinical competence fairly, because examiners may lack complete objectivity and standardization throughout the assessment process. We sought to design a valid method of competence assessment in medical and nursing specialties. This work aimed to develop an Objective Structured Clinical Examination (OSCE) to evaluate novice nursing practitioners' clinical competency, work stress, professional confidence, and career satisfaction.
Methods: A quasi-experimental (pre-post) study. Fifty-five novice nursing practitioners took the OSCE three months after graduation. It consisted of four stations covering history taking, physical examination, problem-directed management, and interpersonal communication, together with the required techniques of related procedures. The examiners completed an assessment checklist, and the participants completed a pre-post questionnaire (modified from a Nursing Competency Questionnaire, a Stress scale, and a Satisfaction with Learning scale).
Results: Among the novice nursing practitioners, 41 (74.5%) passed the exam, with a mean score of 61.38 ± 8.34. The passing rate was significantly higher among nurses working in medical-surgical wards (85.7%) and the intensive care unit-emergency department (77.8%) than among those working in other units. All the novice nursing practitioners performed poorly at Station A in assessing patients with a fever. OSCE performance was associated with educational attainment and work unit rather than with gender. Finally, after the OSCE the participants showed statistically significant increases in clinical competency, confidence in their professional competence, and satisfaction with the clinical practice, as well as decreased work stress.
Conclusions: We found that the OSCE process had a positive educational effect, providing a meaningful and accurate assessment of the competence of novice nursing practitioners. An appropriate OSCE program is vital for novice nursing practitioners, educators, and administrators. The effective application of OSCEs can help novice nursing practitioners gain confidence in their clinical skills.
Affiliation(s)
- Sue-Hsien Chen: Chang Gung Medical Education Research Centre, Chang Gung Memorial Hospital, Linkou, Taiwan; Department of Nursing Management, Chang Gung Medical Foundation Administration, Taoyuan, Taiwan; School of Nursing, Chang Gung University, Taoyuan, Taiwan
- Shu-Ching Chen: Department of Medical Research, National Taiwan University Hospital, Taipei, Taiwan
- Yo-Ping Lai: Department of Internal Medicine, National Taiwan University Hospital, Taipei, Taiwan
- Pin-Hsuan Chen: Department of Nursing Management, Chang Gung Medical Foundation Administration, Taoyuan, Taiwan
- Kun-Yun Yeh: Division of Hemato-oncology, Department of Internal Medicine, College of Medicine, Chang Gung Memorial Hospital, Keelung & Chang Gung University, 222 Maijin Road, Keelung, Taiwan
3. Støve MP. Physiotherapy students' self-assessment of performance: are there gender differences in self-assessment accuracy? Physiother Res Int 2020;26:e1878. PMID: 32924252. DOI: 10.1002/pri.1878.
Abstract
BACKGROUND AND PURPOSE: The ability to critically appraise one's own performance is paramount in physiotherapy and, although there is a paucity of research in this area, factors such as gender have been suggested to moderate the self-assessment accuracy of healthcare students. The purpose of this study was to determine the a posteriori self-assessment accuracy of first-year physiotherapy students following a multiple-choice anatomy examination and to determine the specific influence of gender as a potential moderator of self-assessment accuracy. METHOD: One hundred and forty-two students (n = 72 female) enrolled in the second semester of a three-and-a-half-year physiotherapy programme participated in the study. A purpose-made self-assessment questionnaire was used to measure the students' self-assessment ability, with students estimating their performance in 11 anatomical categories following the examination. These estimates were then compared with a criterion measure matched to the questionnaire, and self-assessment accuracy was quantified as the correlation between self-assessed and objective performance. RESULTS: The study showed low-to-moderate self-assessment accuracy (rho ranging from 0.318 to 0.675), with students underestimating their performance in six of the 11 categories (p < 0.019). Gender did not contribute significantly to differences in accuracy between students' self-assessments and the criterion measure (p = 0.474). CONCLUSION: The students demonstrated low-to-moderate self-assessment accuracy when compared with their actual performance. Notably, gender did not function as a moderator of self-assessment accuracy among first-year physiotherapy students.
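The accuracy measure the abstract describes, a rank correlation between self-assessed and objective scores, can be sketched in a few lines of Python. This is an illustrative sketch only: the scores and helper names below are hypothetical, not the study's data or code.

```python
# Sketch: self-assessment "accuracy" as Spearman's rank correlation
# between self-estimated and objective exam scores (untied data).

def ranks(values):
    """Rank values from 1..n (no tie handling; for illustration only)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman_rho(x, y):
    """Spearman's rho via the rank-difference formula for untied data."""
    n = len(x)
    rx, ry = ranks(x), ranks(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical scores: self-estimates vs. objective exam results.
self_est = [55, 60, 72, 48, 80, 65, 70, 58]
actual   = [60, 58, 75, 52, 78, 70, 68, 62]
print(round(spearman_rho(self_est, actual), 3))  # -> 0.905
```

A rho near 1 would indicate accurate self-assessment; the 0.318-0.675 range reported above corresponds to low-to-moderate agreement on this scale.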
4. Mullikin TC, Shahi V, Grbic D, Pawlina W, Hafferty FW. First year medical student peer nominations of professionalism: a methodological detective story about making sense of non-sense. Anat Sci Educ 2019;12:20-31. PMID: 29569347. DOI: 10.1002/ase.1782.
Abstract
This article explores the assessment of professionalism within a cohort of medical students during a sequential 13-week medical school histology and anatomy course. Across seven data points, students were asked to identify a professionalism role model from among their peers and to rate Likert-structured rationales for their decision. Based on density scores, an initial social network analysis identified six peer-nomination "stars." However, analysis of these stars revealed considerable variability and random-like "noise" in both the nomination and explanation data sets. Subsequent analyses explored the possibility of underlying patterns in this noise using tests of reliability, principal components factor analysis, and fixed-effects regression analysis. These explorations revealed two dimensions (professional vs. supportive) in how students sought to explain their nomination decisions. Although data variability remained quite high, significantly less variability was present in the professional than in the supportive dimension, suggesting that academic-helpfulness rationales are both empirically distinct and more mutable than rationales grounded in professionalism-related factors. In addition, the data showed that the greater the stability in a student's choice of a professionalism role model over the T1-T7 data periods, the more stable that student's reasons for the nomination, for both the professionalism and supportive dimensions. The results indicate that while peer assessment of professionalism by first-year medical students may not be very reliable, students can differentiate between more personal and professional factors, even at this early stage in their professional development. Formal instruction within the pre-clinical curriculum should recognize and address this distinction.
Affiliation(s)
- Trey C Mullikin: Department of Radiation Oncology, Mayo Clinic College of Medicine and Science, Mayo Clinic, Rochester, Minnesota
- Varun Shahi: Department of Emergency Medicine, University of California Los Angeles Medical Center, Los Angeles, California
- Douglas Grbic: Association of American Medical Colleges, Washington, District of Columbia
- Wojciech Pawlina: Department of Anatomy, Mayo Clinic College of Medicine and Science, Mayo Clinic, Rochester, Minnesota; Program in Professionalism and Values, Mayo Clinic, Rochester, Minnesota
- Frederic W Hafferty: Program in Professionalism and Values, Mayo Clinic, Rochester, Minnesota; Department of General Internal Medicine, Mayo Clinic, Rochester, Minnesota
5. Casu G, Gremigni P, Sommaruga M. The Patient-Professional Interaction Questionnaire (PPIQ) to assess patient centered care from the patient's perspective. Patient Educ Couns 2019;102:126-133. PMID: 30098906. DOI: 10.1016/j.pec.2018.08.006.
Abstract
OBJECTIVE To investigate how patients evaluate the provision of patient-centered care (PCC) by healthcare professionals and psychometrically test a questionnaire to assess it. A tool previously developed for self-assessment of professionals' provision of PCC was adapted into a patient-rated form, named Patient-Professional Interaction Questionnaire (PPIQ). METHODS A sample of 1139 patients from six hospitals completed the 16-item PPIQ and the questionnaire structure, reliability, susceptibility to social desirability, and associations with other variables were tested. RESULTS The PPIQ confirmed the original four-factor structure (effective communication, interest in the patient's agenda, empathy, and patient involvement in care) and showed acceptable reliability and measurement invariance across both in-/out-patients and first/non-first encounter with the evaluated professional. Associations with patients' social desirability were negligible and effective communication was rated the highest among the PPIQ dimensions. PPIQ scores varied according to patients' educational level and type of professional evaluated, while associations between first/non-first encounter and PPIQ scores varied according to in-/out-patient. CONCLUSION The PPIQ is a psychometrically sound patient-rated measure of the provision of PCC by healthcare professionals. PRACTICE IMPLICATIONS The PPIQ has potential value in promoting quality patient-professional interactions in the hospital setting, as patients' reported experience is an important dimension of the clinician's performance.
Affiliation(s)
- Giulia Casu: Department of Psychology, University of Bologna, Italy
- Marinella Sommaruga: Clinical Psychology and Social Support Unit, Istituti Clinici Scientifici Maugeri - IRCCS, Milan, Italy
6. Moreira PS, Santos N, Castanho T, Amorim L, Portugal-Nunes C, Sousa N, Costa P. Longitudinal measurement invariance of memory performance and executive functioning in healthy aging. PLoS One 2018;13:e0204012. PMID: 30265688. PMCID: PMC6161843. DOI: 10.1371/journal.pone.0204012.
Abstract
In this work, we examined the longitudinal measurement invariance of a battery composed of distinct cognitive parameters. A sample of 86 individuals (53.5% female; mean age = 65.73), representative of the Portuguese older population with respect to sex, age, and level of education, was assessed twice over an average of two years. Using a confirmatory factor analysis approach, we tested whether a two-factor solution [corresponding to measures of memory performance (MEM) and executive functioning (EXEC)] was reliable over time. Nested models of longitudinal invariance demonstrated partial strong invariance over time: the factorial structure and factor loadings were equivalent for all items, as were the item intercepts for all items except one item from the EXEC dimension. Stability coefficients revealed high associations between the dimensions over time and showed that, whereas MEM declined significantly across time, EXEC did not. These findings indicate that changes in MEM and EXEC scores can be attributed to true changes in these constructs, enabling the use of this battery as a reliable method for studying cognitive aging.
Affiliation(s)
- Pedro Silva Moreira, Nadine Santos, Teresa Castanho, Liliana Amorim, Carlos Portugal-Nunes, Nuno Sousa, and Patrício Costa: Life and Health Sciences Research Institute (ICVS), School of Medicine, University of Minho, Braga, Portugal; ICVS/3B’s, PT Government Associate Laboratory, Braga/Guimarães, Portugal; Clinical Academic Center–Braga, Braga, Portugal
7. Inayah AT, Anwer LA, Shareef MA, Nurhussen A, Alkabbani HM, Alzahrani AA, Obad AS, Zafar M, Afsar NA. Objectivity in subjectivity: do students' self and peer assessments correlate with examiners' subjective and objective assessment in clinical skills? A prospective study. BMJ Open 2017;7:e012289. PMID: 28487454. PMCID: PMC5623435. DOI: 10.1136/bmjopen-2016-012289.
Abstract
OBJECTIVES: Qualitative subjective assessment is exercised either through self-reflection (self-assessment, SA) or by an observer (peer assessment, PA) and is considered to play an important role in students' development. The objectivity of PA and SA by students, as well as of assessments by faculty examiners, remains debated. This matters most in high-stakes examinations. We explored the degree of objectivity in PA and SA, as well as in the global rating by examiners (Examiners' Subjective Assessment, ESA), compared with Objective Structured Clinical Examinations (OSCE). DESIGN: Prospective cohort study. SETTING: Undergraduate medical students at Alfaisal University, Riyadh. PARTICIPANTS: All second-year medical students (n=164) of both genders, taking a course in clinical history taking and general physical examination. MAIN OUTCOME MEASURES: A Likert scale questionnaire was distributed among the participants during selected clinical skills sessions. Each student was evaluated randomly by peers (PA) as well as by himself/herself (SA). Two OSCEs were conducted in which students were assessed by an examiner objectively as well as subjectively (ESA) for a global rating of confidence and well-preparedness. OSCE-1 had fewer topics and stations, whereas OSCE-2 was terminal and full scale. RESULTS: OSCE-1 (B=0.10) and ESA (B=8.16) predicted OSCE-2 scores. 'No nervousness' in PA (r=0.185, p=0.018) and 'confidence' in SA (r=0.207, p=0.008) correlated with 'confidence' in ESA. In 'well-preparedness', SA correlated with ESA (r=0.234, p=0.003). CONCLUSIONS: OSCE-1 and ESA predicted students' performance in OSCE-2, a high-stakes evaluation, indicating practical 'objectivity' in ESA, whereas SA and PA had minimal predictive roles. Certain components of SA and PA correlated with ESA, suggesting partial objectivity given the limited objectiveness of ESA. Such differences in 'qualitative' objectivity probably reflect experience. Thus, subjective assessment can be used with some degree of objectivity for continuous assessment.
Affiliation(s)
- Lucman A Anwer: College of Medicine, Alfaisal University, Riyadh, Saudi Arabia; Mayo Clinic, Rochester, USA
- Mohammad Abrar Shareef: College of Medicine, Alfaisal University, Riyadh, Saudi Arabia; Mercy St. Vincent Medical Center, Toledo, USA
- Akram Nurhussen: College of Medicine, Alfaisal University, Riyadh, Saudi Arabia
- Muhammad Zafar: College of Medicine, Alfaisal University, Riyadh, Saudi Arabia
- Nasir Ali Afsar: College of Medicine, Alfaisal University, Riyadh, Saudi Arabia
8. Roberts C, Jorm C, Gentilcore S, Crossley J. Peer assessment of professional behaviours in problem-based learning groups. Med Educ 2017;51:390-400. PMID: 28078685. DOI: 10.1111/medu.13151.
Abstract
CONTEXT: Peer assessment of professional behaviour within problem-based learning (PBL) groups can support learning and provide opportunities to identify and remediate problem behaviours. OBJECTIVES: We investigated whether peer assessment of learning behaviours in PBL is sufficiently valid to support decision making about student professional behaviours. METHODS: Data were available for two cohorts of students, in which each student was rated by all of their PBL group peers using a modified version of a previously validated scale. Following the provision of feedback to the students, their behaviours were again peer-assessed. A generalisability study was undertaken to estimate the students' professional behaviour scores, the sources of error that affected the reliability of the assessment, changes in student rating behaviour, and changes in mean scores after the delivery of feedback. RESULTS: Peer assessment of professional learning behaviour was highly reliable for within-group comparisons (G = 0.81-0.87), but poor for across-group comparisons (G = 0.47-0.53). Feedback increased the range of ratings given by assessors and brought their mean ratings into closer alignment. More of the increased variance was attributable to assessee performance than to assessor stringency, and hence there was a slight improvement in reliability, especially for comparisons across groups. Mean professional behaviour scores were unchanged. CONCLUSIONS: Peer assessment of professional learning behaviours may be unreliable for decision making outside a PBL group. Faculty members should not draw conclusions from peer assessment about a student's behaviour compared with that of peers in the cohort, and such a tool may not be appropriate for summative assessment. Health professional educators interested in assessing student professional behaviours in PBL groups might focus on opportunities for the provision of formative peer feedback and its impact on learning.
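For readers unfamiliar with the G coefficients reported above, the sketch below shows how a one-facet (raters) generalisability coefficient relates person variance to rater-averaged error. The variance components are hypothetical and are not taken from the study.

```python
def g_coefficient(var_person: float, var_residual: float, n_raters: int) -> float:
    """One-facet (raters) generalisability coefficient: true (person)
    variance over total variance, with the residual error variance
    averaged over the number of raters."""
    return var_person / (var_person + var_residual / n_raters)

# Hypothetical variance components: averaging over more raters shrinks
# the error term, so the coefficient rises toward 1.
print(round(g_coefficient(0.50, 0.90, 1), 2))  # -> 0.36
print(round(g_coefficient(0.50, 0.90, 8), 2))  # -> 0.82
```

In G theory generally, averaging ratings over more raters reduces error variance, which is one reason within-group designs with many peer raters can reach higher coefficients than across-group comparisons.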
Affiliation(s)
- Chris Roberts: Sydney Medical School - Northern, University of Sydney, Sydney, Australia
- Christine Jorm: Office of Medical Education, Sydney Medical School, University of Sydney, Sydney, Australia
- Stacey Gentilcore: Office of Medical Education, Sydney Medical School, University of Sydney, Sydney, Australia
- Jim Crossley: The Medical School, University of Sheffield, Sheffield, UK
9. Markham SE, Markham IS, Smith JW. A review, analysis, and extension of peer-leader feedback agreement: contrasting group aggregate agreement vs. self-other agreement using entity analytics and visualization. Leadersh Q 2017. DOI: 10.1016/j.leaqua.2016.10.001.