1. Munroe H. Modes of Operation in Clinical Supervision: How Clinical Supervisors Perceive Themselves. Br J Occup Ther 1988; 51(10). [DOI: 10.1177/030802268805101003]
Abstract
Supervisors' mental models for the supervision of students in clinical practice were explored by means of data elicited through concept mapping and associated cognitive tasks. On the basis of the findings from a small sample of 20 clinical supervisors, a tentative typology of four modes of operation was postulated. In addition, the clinical supervisors involved in this study identified four key functions of the clinical supervisor.
Affiliation(s)
- Heli Munroe
- Formerly Vice-Principal, Grampian School of Occupational Therapy, Aberdeen
2.
Abstract
Rating of teachers by students is a commonly sought, important component of the evaluation of teaching. Whether student raters should be identified is controversial. Student evaluation of medical faculty is done with the knowledge that students may well need to return to these faculty for letters of recommendation for graduate program application. This subjects the rating process to a bias of recall that may result in leniency or inhibition. The special circumstance of medical student rating was studied by interviewing a sample (n = 50) of senior medical students. Nearly 40% of students felt that they would have been inhibited in responding to questions on quality of teaching and personal rating of their medical teachers. Fewer students would have been inhibited in commenting on the organization and operation of courses, but even in this area an element of inhibition would have been felt. The findings support continued protection of the identity of student raters.
3. Gerbase MW, Germond M, Nendaz MR, Vu NV. When the evaluated becomes evaluator: what can we learn from students' experiences during clerkships? Acad Med 2009; 84:877-885. [PMID: 19550181] [DOI: 10.1097/acm.0b013e3181a8171e]
Abstract
PURPOSE To identify aspects that influence students' evaluation of the overall quality of clerkships and learning in clinical settings. METHOD The authors analyzed 2,450 questionnaires dated 1997 through 2005 that evaluated clerkships of seven medical specialties (internal medicine, surgery, pediatrics, psychiatry, community medicine, emergency medicine, and obstetrics-gynecology). Students rated 22 questionnaire items addressing clerkships' global evaluation and domains related to structure, supervision, and clinical and problem-solving learning (PSL) activities using a five-point Likert scale. The authors performed statistical analysis using principal component analysis and regression analysis of items associated with students' global evaluation of clerkships. RESULTS Correlation between clerkships' global ratings and ratings derived from the evaluation questionnaire was 0.871 (P < .0001). Clerkships' quality was mainly related to their organization, students' integration into clerkship, improvement of clinical skills, supervision, and residents' availability (r = 0.405; P < .0001). Among learning activities, opportunities for clinical practice predominated as the contributing factor to the overall perceived quality of most clerkships, but less than PSL activities in psychiatry (r = 0.070 versus 0.261, respectively; P < .001) and community medicine (r = 0.126 versus 0.298, respectively; P < .001); in surgery, both clinical practice and PSL activities contributed minimally to the clerkships' perceived quality (r = 0.150 and 0.148, respectively; P > .05). CONCLUSIONS Factors influencing students' evaluation of a clerkship vary among medical specialties and depend not only on the teaching and teacher but also on the clerkship's organization, supervision, and learning activities. For clerkships where direct and multiple access to patients is more difficult, written case-based PSL activities proved complementary to direct patient encounter activities.
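The core of the method, reducing the Likert items with principal component analysis and regressing the global evaluation on the resulting component scores, can be sketched briefly. The snippet below is a minimal illustration on synthetic data; the item count, component count, and all numbers are assumptions for illustration, not the authors' actual pipeline.

```python
# Minimal sketch of the PCA-plus-regression approach (synthetic data).
# Dimensions and component count are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# 2,450 questionnaires by 21 clerkship items on a five-point Likert scale,
# with the global-evaluation item kept aside as the outcome.
items = rng.integers(1, 6, size=(2450, 21)).astype(float)
global_rating = items.mean(axis=1) + rng.normal(0, 0.3, 2450)

# Reduce the items to a handful of components (4 is an assumption; the
# study's domains were structure, supervision, and learning activities).
pca = PCA(n_components=4)
scores = pca.fit_transform(items)

# Regress the global evaluation on component scores to see which domains
# carry the overall judgement of clerkship quality.
model = LinearRegression().fit(scores, global_rating)
print("variance explained:", pca.explained_variance_ratio_.round(2))
print("R^2 against global rating:", round(model.score(scores, global_rating), 3))
```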
Affiliation(s)
- Margaret W Gerbase
- Unit of Development and Research in Medical Education, Department of Internal Medicine, Faculty of Medicine, University of Geneva, Geneva, Switzerland.
4. Beckman TJ, Cook DA, Mandrekar JN. Factor instability of clinical teaching assessment scores among general internists and cardiologists. Med Educ 2006; 40:1209-16. [PMID: 17118115] [DOI: 10.1111/j.1365-2929.2006.02632.x]
Abstract
CONTEXT We are unaware of studies examining the stability of teaching assessment scores across different medical specialties. A recent study showed that clinical teaching assessments of general internists reduced to interpersonal, clinical teaching and efficiency domains. We sought to determine the factor stability of this 3-dimensional model among cardiologists and to compare domain-specific scores between general internists and cardiologists. METHODS A total of 2000 general internal medicine and cardiology hospital teaching assessments carried out from January 2000 to March 2004 were analysed using principal factor analysis. Internal consistency and inter-rater reliability were calculated. Mean item scores were compared between general internists and cardiologists. RESULTS The interpersonal and clinical teaching domains previously demonstrated among general internists collapsed into 1 domain among cardiologists, whereas the efficiency domain remained stable. Internal consistency of domains (Cronbach's alpha range 0.89-0.93) and inter-rater reliability of items (range 0.65-0.87) were good to excellent for both specialties. General internists scored significantly higher (P<0.05) than cardiologists on most items except for 4 items that more accurately assessed the cardiology teaching environment. CONCLUSIONS We observed factor instability of clinical teaching assessment scores from the same instrument administered to general internists and cardiologists. This finding was attributed to salient differences between these specialties' educational environments and highlights the importance of validating assessments for the specific contexts in which they are to be used. Future research should determine whether interpersonal domain scores identify superior teachers and study the reasons why interpersonal and clinical teaching domains are unstable across different educational settings.
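One way to probe the factor instability reported here is to fit the same factor model separately to each specialty's assessments and compare the loading patterns. The sketch below does this on synthetic data with scikit-learn's FactorAnalysis; the item count, factor count, and data are assumptions, and the study itself used principal factor analysis rather than this maximum-likelihood variant.

```python
# Illustrative check of factor stability across two rater groups.
# Synthetic stand-in data; all dimensions are assumptions.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n_items = 14  # assumed number of assessment items

def loadings(ratings, n_factors=3):
    """Fit a factor model and return the items-by-factors loading matrix."""
    fa = FactorAnalysis(n_components=n_factors)
    fa.fit(ratings)
    return fa.components_.T  # shape: (n_items, n_factors)

# Stand-ins for the general internal medicine and cardiology assessments.
gim = rng.normal(3.5, 1.0, size=(1000, n_items))
cards = rng.normal(3.3, 1.0, size=(1000, n_items))

# A congruence-style comparison: correlate corresponding factor loadings.
# Low correlations would suggest the domain structure is not stable.
gim_load, card_load = loadings(gim), loadings(cards)
for k in range(3):
    r = np.corrcoef(gim_load[:, k], card_load[:, k])[0, 1]
    print(f"factor {k + 1}: loading correlation {r:.2f}")
```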
Affiliation(s)
- Thomas J Beckman
- Division of General Internal Medicine, Department of Internal Medicine, Mayo Clinic and Mayo Foundation, Rochester, Minnesota 55905, USA.
5. Beckman TJ, Cook DA, Mandrekar JN. What is the validity evidence for assessments of clinical teaching? J Gen Intern Med 2005; 20:1159-64.
Abstract
BACKGROUND Although a variety of validity evidence should be utilized when evaluating assessment tools, a review of teaching assessments suggested that authors pursue a limited range of validity evidence. OBJECTIVES To develop a method for rating validity evidence and to quantify the evidence supporting scores from existing clinical teaching assessment instruments. DESIGN A comprehensive search yielded 22 articles on clinical teaching assessments. Using standards outlined by the American Psychological Association and the American Educational Research Association, we developed a method for rating the 5 categories of validity evidence reported in each article. We then quantified the validity evidence by summing the ratings for each category. We also calculated weighted kappa coefficients to determine interrater reliabilities for each category of validity evidence. MAIN RESULTS Content and Internal Structure evidence received the highest ratings (27 and 32, respectively, of 44 possible). Relation to Other Variables, Consequences, and Response Process received the lowest ratings (9, 2, and 2, respectively). Interrater reliability was good for Content, Internal Structure, and Relation to Other Variables (kappa range 0.52 to 0.96, all P values < .01), but poor for Consequences and Response Process. CONCLUSIONS Content and Internal Structure evidence is well represented among published assessments of clinical teaching. Evidence for Relation to Other Variables, Consequences, and Response Process receives little attention, and future research should emphasize these categories. The low interrater reliability for Response Process and Consequences likely reflects the scarcity of reported evidence. With further development, our method for rating the validity evidence should prove useful in various settings.
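Weighted kappa, used above for the interrater reliabilities, credits partial agreement by penalising disagreements according to their distance on the ordinal scale. A minimal sketch, assuming two raters scoring the same 22 articles on a hypothetical 0-2 ordinal scale:

```python
# Weighted kappa between two raters on ordinal ratings (illustrative data).
from sklearn.metrics import cohen_kappa_score

# Hypothetical ratings of 22 articles on a 0-2 scale by two raters.
rater_a = [2, 1, 0, 2, 2, 1, 0, 0, 1, 2, 1, 0, 2, 1, 1, 0, 2, 2, 1, 0, 1, 2]
rater_b = [2, 1, 0, 1, 2, 1, 0, 1, 1, 2, 0, 0, 2, 1, 2, 0, 2, 1, 1, 0, 1, 2]

# 'linear' weights penalise a 0-vs-2 disagreement twice as much as 0-vs-1.
kappa = cohen_kappa_score(rater_a, rater_b, weights="linear")
print(f"weighted kappa: {kappa:.2f}")
```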
Affiliation(s)
- Thomas J Beckman
- Division of General Internal Medicine, Department of Internal Medicine, Mayo Clinic College of Medicine, Mayo Clinic and Mayo Foundation, Rochester, MN, USA.
6. Beckman TJ, Mandrekar JN. The interpersonal, cognitive and efficiency domains of clinical teaching: construct validity of a multi-dimensional scale. Med Educ 2005; 39:1221-9. [PMID: 16313581] [DOI: 10.1111/j.1365-2929.2005.02336.x]
Abstract
BACKGROUND We are unaware of any hypothesis-driven studies showing that teaching assessments are comprised solely of interpersonal and cognitive domains. Moreover, previous teaching assessments have been biased by heterogeneous samples of evaluators. Consequently, we investigated the construct validity of faculty assessments comprised of interpersonal and cognitive domains, utilising evaluations obtained from resident doctors on an internal medicine hospital service. METHODS A total of 1000 inpatient evaluations were completed on 60 general internal medicine faculty members. Education theory supported a 2-dimensional, 14-item scale. Principal factor analysis was used to explore the scale's dimensionality. Internal reliability and interobserver agreement were determined. Relationships between domains and instructor characteristics were also examined. RESULTS Principal factor analysis revealed interpersonal, clinical teaching and efficiency domains. Internal reliabilities of all domains were high (alpha > 0.90) and interobserver agreement was good (range 0.64-0.83). In the interpersonal domain there was a trend towards higher scores for lower ranking faculty. Significant findings were higher overall scores in the interpersonal domain (P < 0.001), higher scores for assistant professors in the interpersonal domain (P = 0.008) and higher scores for male than female faculty in the interpersonal (P = 0.041) and clinical teaching (P = 0.008) domains. CONCLUSIONS Clinical teaching evaluations are reducible to interpersonal, clinical teaching and efficiency domains. Evidence for construct validity includes predicted domains and high internal and interobserver reliabilities. Utilising a homogeneous sample of evaluators minimised variance. Interestingly, lower ranking faculty scored higher in the interpersonal domain, suggesting that lower ranking faculty may focus more attention on teaching activities than full professors do.
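The internal reliabilities quoted above (alpha > 0.90) are Cronbach's alpha values, computable directly from the item variances and the variance of the total score: alpha = k/(k-1) * (1 - sum(item variances)/var(total)). A from-scratch sketch on synthetic ratings, with all dimensions assumed:

```python
# Cronbach's alpha computed from its definition (synthetic data).
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """ratings: (n_respondents, n_items) matrix of item scores."""
    k = ratings.shape[1]
    item_vars = ratings.var(axis=0, ddof=1).sum()
    total_var = ratings.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(2)
# Correlated items (a shared 'teaching quality' signal plus item noise)
# mimic a coherent domain and should yield a high alpha (about 0.95 here).
signal = rng.normal(0, 1, size=(1000, 1))
ratings = signal + rng.normal(0, 0.5, size=(1000, 5))
print(f"alpha: {cronbach_alpha(ratings):.2f}")
```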
Affiliation(s)
- Thomas J Beckman
- Division of General Internal Medicine, Department of Internal Medicine, Mayo Clinic and Mayo Foundation, Rochester, Minnesota 55905, USA.
7. Beckman TJ, Ghosh AK, Cook DA, Erwin PJ, Mandrekar JN. How reliable are assessments of clinical teaching? A review of the published instruments. J Gen Intern Med 2004; 19:971-7. [PMID: 15333063] [PMCID: PMC1492515] [DOI: 10.1111/j.1525-1497.2004.40066.x]
Abstract
BACKGROUND Learner feedback is the primary method for evaluating clinical faculty, despite few existing standards for measuring learner assessments. OBJECTIVE To review the published literature on instruments for evaluating clinical teachers and to summarize themes that will aid in developing universally appealing tools. DESIGN Searching 5 electronic databases revealed over 330 articles. Excluded were reviews, editorials, and qualitative studies. Twenty-one articles describing instruments designed for evaluating clinical faculty by learners were found. Three investigators studied these papers and tabulated characteristics of the learning environments and validation methods. Salient themes among the evaluation studies were determined. MAIN RESULTS Many studies combined evaluations from both outpatient and inpatient settings and some authors combined evaluations from different learner levels. Wide ranges in numbers of teachers, evaluators, evaluations, and scale items were observed. The most frequently encountered statistical methods were factor analysis and determining internal consistency reliability with Cronbach's alpha. Less common methods were the use of test-retest reliability, interrater reliability, and convergent validity between validated instruments. Fourteen domains of teaching were identified and the most frequently studied domains were interpersonal and clinical-teaching skills. CONCLUSIONS Characteristics of teacher evaluations vary between educational settings and between different learner levels, indicating that future studies should utilize more narrowly defined study populations. A variety of validation methods including temporal stability, interrater reliability, and convergent validity should be considered. Finally, existing data support the validation of instruments comprised solely of interpersonal and clinical-teaching domains.
Affiliation(s)
- Thomas J Beckman
- Department of Internal Medicine, Mayo Clinic College of Medicine, Mayo Clinic and Mayo Foundation, Rochester, MN, USA.
8. Steiner IP, Franc-Law J, Kelly KD, Rowe BH. Faculty evaluation by residents in an emergency medicine program: a new evaluation instrument. Acad Emerg Med 2000; 7:1015-21. [PMID: 11043997] [DOI: 10.1111/j.1553-2712.2000.tb02093.x]
Abstract
OBJECTIVE Evaluation of preceptors in training programs is essential; however, little research has been performed in the setting of the emergency department (ED). The goal of this pilot study was to determine the validity and reliability of a faculty evaluation instrument, the Emergency Rotation (ER) scale, developed specifically for use in emergency medicine (EM). METHODS A prospective study comparing the ER scale with two alternative faculty evaluation instruments was completed in three of the five EDs affiliated with an EM teaching program, where emergency physicians are members of the clinical teaching faculty. The participants were 18 residents (postgraduate years 1, 2, and 3) who were completing four-week clinical rotations in EM. At the end of the rotation, residents recorded their evaluations of each emergency physician with whom they had clinical encounters on the following evaluation tools: the ER scale, a longer validated scale (Irby), and a global assessment scale (GAS). Domain scores were correlated with the previously validated scale and the GAS to determine validity using a multitrait-multimethod matrix. The reliability of the ER scale was measured using a Cronbach's alpha coefficient. RESULTS Forty-eight preceptor evaluations were completed on 29 individual preceptors. The rating of preceptors was high using the ER scale (median: 16 of 20; IQR: 13, 18), the Irby scale (median: 300 of 378; IQR: 267, 321), and the GAS (mean: 7.8 of 10; SD: 1.3). Domain scores for each tool were used in the multitrait-multimethod matrix, and the correlations between the previously validated tool and the ER scale were high (>0.70) across the various domains. The internal consistency of the ER scale was also high (r = 0.85). CONCLUSIONS The ER scale appears to be valid and reliable. It performs well when compared with previously psychometrically tested tools. It is a sensible, well-adapted tool for the teaching environment offered by EM.
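The multitrait-multimethod (MTMM) logic used above is that the same trait measured by two different instruments should correlate highly (convergent validity), while different traits should correlate less. A minimal sketch with synthetic scores; the domain names are hypothetical placeholders, not the ER or Irby scales' actual domains:

```python
# Sketch of a multitrait-multimethod correlation matrix (synthetic data).
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 48  # one row per completed preceptor evaluation, as in the study

# Latent 'true' trait levels shared by both instruments.
teaching = rng.normal(0, 1, n)
interpersonal = rng.normal(0, 1, n)

# Each instrument measures both traits with its own measurement noise.
scores = pd.DataFrame({
    "ER_teaching": teaching + rng.normal(0, 0.4, n),
    "ER_interpersonal": interpersonal + rng.normal(0, 0.4, n),
    "Irby_teaching": teaching + rng.normal(0, 0.4, n),
    "Irby_interpersonal": interpersonal + rng.normal(0, 0.4, n),
})

# Same trait measured by different methods should correlate highest
# (here roughly 0.8), the MTMM signature of convergent validity.
print(scores.corr().round(2))
```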
Affiliation(s)
- I P Steiner
- Division of Emergency Medicine, Department of Family Medicine, University of Alberta, Edmonton, Alberta, Canada. ivan@hippocrates.family.med.ualberta.ca
9. Collins J, Albanese MA, Thakor SK, Propeck PA, Scanlan KA. Development of a radiology faculty appraisal instrument by using critical incident interviewing. Acad Radiol 1997; 4:795-801. [PMID: 9412691] [DOI: 10.1016/s1076-6332(97)80256-5]
Abstract
RATIONALE AND OBJECTIVES To develop a valid and reliable radiology faculty appraisal instrument based on scientific methods. MATERIALS AND METHODS Fifteen radiology residents participated in critical incident interviewing. During a 1-hour interview, a resident was asked to describe five incidents each of effective and ineffective faculty behavior. Two investigators independently listened to the tape-recorded interviews, and two different investigators sorted the incidents into broad categories. A faculty appraisal instrument was developed by listing similar incidents under broad categories. A five-point rating scale was applied to each item. Content validity was assessed by resident and faculty critique of the appraisal instrument. RESULTS A total of 168 incidents of faculty behavior were generated. The frequency with which similar incidents were reported was recorded. The most common behaviors reported were related to staff expertise and teaching. Interjudge reliability was good, as determined by computing kappa indices of agreement (overall kappa = 0.59). There was good agreement regarding instrument content validity among residents but not among faculty. CONCLUSION Residents supported the use of the new appraisal instrument, but further tests of validity and reliability and faculty acceptance of the instrument will determine its usefulness as a tool for monitoring faculty teaching performance and making decisions regarding faculty promotion.
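The kappa index of agreement used for the incident sorting corrects observed agreement for the agreement expected by chance: kappa = (p_o - p_e) / (1 - p_e). A from-scratch sketch, with two hypothetical judges sorting ten incidents into broad categories:

```python
# Cohen's kappa computed from its definition (hypothetical sorting data).
from collections import Counter

def cohen_kappa(judge_a, judge_b):
    n = len(judge_a)
    # Observed agreement: fraction of incidents sorted identically.
    p_o = sum(a == b for a, b in zip(judge_a, judge_b)) / n
    # Chance agreement: probability both judges pick the same category
    # at random according to their own marginal frequencies.
    counts_a, counts_b = Counter(judge_a), Counter(judge_b)
    p_e = sum(counts_a[c] * counts_b[c]
              for c in set(judge_a) | set(judge_b)) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Two judges assigning ten incidents to broad categories (illustrative).
a = ["teaching", "expertise", "teaching", "feedback", "teaching",
     "expertise", "feedback", "teaching", "expertise", "teaching"]
b = ["teaching", "expertise", "feedback", "feedback", "teaching",
     "expertise", "feedback", "expertise", "expertise", "teaching"]
print(f"kappa: {cohen_kappa(a, b):.2f}")  # ~0.70 for this example
```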
Affiliation(s)
- J Collins
- Department of Radiology, University of Wisconsin Clinical and Medical Science Centers, Madison, USA
10. Premadasa IG, Hijazi Z, Moosa A. An instrument to evaluate clinical instructional skills. Med Educ 1995; 29:355-359. [PMID: 8699973] [DOI: 10.1111/j.1365-2923.1995.tb00025.x]
Abstract
A questionnaire consisting of 10 statements on the attributes of effective clinical instruction was designed for use by medical students. Three groups of trainees following consecutive clinical rotations in paediatrics assessed the instructional skills of their tutors using the instrument. Summary reports on students' perceptions were made available to the teachers soon after each rotation. The results showed that although individual instructors exhibited varying degrees of the desired skills, each maintained a consistent pattern across assessments. Overall, teacher behaviours such as allowing students to ask questions and giving satisfactory answers, and helping with students' learning problems through relevant feedback, received a higher percentage of positive ratings than emphasizing problem-solving, demonstrating and supervising physical examinations and procedures, and stimulating students' interest in the subject. The instrument thus appears suitable for obtaining student feedback that identifies the strengths and weaknesses of clinical teachers' instructional skills. Such feedback would be useful in modifying programme presentation and in planning and conducting faculty development activities.
11.
Affiliation(s)
- C E Blane
- Department of Radiology, University of Michigan, Ann Arbor, USA
12. McLeod PJ, James CA, Abrahamowicz M. Clinical tutor evaluation: a 5-year study by students on an in-patient service and residents in an ambulatory care clinic. Med Educ 1993; 27:48-54. [PMID: 8433660] [DOI: 10.1111/j.1365-2923.1993.tb00228.x]
Abstract
Medical students on an in-patient service and residents working in an ambulatory care clinic regularly evaluated their clinical tutors over the 5 years 1985-1989. Both groups of raters evaluated their tutors reliably and predictably, and both emphasized between-tutor comparisons more than absolute rating values for individual tutors. Tutors active in both contexts regularly received higher ratings from the medical students than from the residents. Mid-course feedback to tutors in the medical course had no impact on end-of-course ratings, and in neither context did tutor ratings improve from one evaluation to the next. Both groups reliably discriminated between the teaching skills and the personality traits of individual tutors.
Affiliation(s)
- P J McLeod
- Department of Medicine, McGill University, Montreal, Quebec, Canada