1
Spruit E, Mol MF, Bos PK, Bierma-Zeinstra SM, Krastman P, Runhaar J. Self-Assessment of Competence and Referral Behavior for Musculoskeletal Injections among Dutch General Practitioners. J Clin Med 2020; 9:1880. [PMID: 32560156 PMCID: PMC7356219 DOI: 10.3390/jcm9061880]
Abstract
General practitioners (GPs) are qualified and trained to administer therapeutic musculoskeletal injections when indicated. However, it is unknown to what extent Dutch GPs feel competent to administer these injections in clinical practice. Reluctance among GPs to inject might lead to unnecessary and costly referrals to secondary care. An online and offline questionnaire was distributed among Dutch GPs, querying demographics, GPs' self-assessment of injection competence, the number of administered/referred injections and management strategy for musculoskeletal injections. A total of 355 GPs responded. In total, 81% of the GPs considered themselves competent in administering musculoskeletal injections. Self-assessed incompetent GPs had performed fewer injections in the preceding month than self-assessed competent GPs (1.2 ± 1.4 vs 4.8 ± 4.6 injections, P < 0.001). Additionally, they referred four times more often to a colleague GP (0.4 ± 1.0 vs 0.1 ± 0.6 injections per month, P < 0.001) and twice as often to secondary care (1.0 ± 1.3 vs 0.5 ± 0.9 injections per month, P = 0.001). Self-assessed incompetence was associated with female sex (OR [95% CI] = 4.94 [2.39, 10.21]) and part-time work (OR [95% CI] = 2.58 [1.43, 4.66]). The most frequently cited barriers were a lack of confidence in injection skills, a lack of practical training, and uncertainty about the effectiveness and diagnosis of musculoskeletal injections. Although most GPs considered themselves competent to administer musculoskeletal injections, the referral rate to secondary care for several injections was strikingly high. To decrease secondary care referrals, addressing the most frequently cited barriers is highly recommended.
Affiliation(s)
- Emely Spruit
- Department of General Practice, Erasmus MC, University Medical Center, 3000 CA Rotterdam, The Netherlands
- Marianne F. Mol
- Department of General Practice, Erasmus MC, University Medical Center, 3000 CA Rotterdam, The Netherlands
- P. Koen Bos
- Department of Orthopaedic Surgery, Erasmus MC, University Medical Center, 3000 CA Rotterdam, The Netherlands
- Sita M.A. Bierma-Zeinstra
- Department of General Practice, Erasmus MC, University Medical Center, 3000 CA Rotterdam, The Netherlands
- Department of Orthopaedic Surgery, Erasmus MC, University Medical Center, 3000 CA Rotterdam, The Netherlands
- Patrick Krastman
- Department of General Practice, Erasmus MC, University Medical Center, 3000 CA Rotterdam, The Netherlands
- Jos Runhaar
- Department of General Practice, Erasmus MC, University Medical Center, 3000 CA Rotterdam, The Netherlands
2
Kim SC, Ro YS, Shin SD, Wi DH, Jeong J, Park JO, Sun KM, Bae K. Assessment of Competence in Emergency Medicine among Healthcare Professionals in Cameroon. J Korean Med Sci 2017; 32:1931-1937. [PMID: 29115073 PMCID: PMC5680490 DOI: 10.3346/jkms.2017.32.12.1931]
Abstract
Development of a competence-based curriculum is important. This study aimed to develop competence assessment tools in emergency medicine and use them to assess the competence of Cameroonian healthcare professionals. This was a cross-sectional, descriptive study. Through literature review, expert survey, and discrimination tests, we developed a self-survey questionnaire and a scenario-based competence assessment tool for assessing clinical knowledge and self-confidence to perform clinical practices or procedures. The self-survey consisted of 23 domains and 94 questions on a 5-point Likert scale. The objective scenario-based competence assessment tool was used to validate the self-survey results for five life-threatening diseases presenting frequently in emergency rooms of Cameroon. The response rate of the self-survey was 82.6%. In the self-survey, knowledge of infectious disease had the highest score (4.6 ± 0.4), followed by obstetrics and gynecology (4.2 ± 0.6) and hematology and oncology (4.2 ± 0.5); in contrast, respondents gave the lowest scores to the domains of disaster, abuse and assault, and psychiatric and behavior disorder (all with a mean of 2.8). In the scenario-based test, knowledge of multiple trauma had the highest score (4.3 ± 1.2), followed by anaphylaxis (3.4 ± 1.4), diabetic ketoacidosis (3.3 ± 1.0), ST-elevation myocardial infarction (2.5 ± 1.4), and septic shock (2.2 ± 1.1). The mean difference between the self-survey and the scenario-based test was statistically insignificant (mean, -0.02; 95% confidence interval, -0.41 to 0.36), and the agreement rate was 58.3%. Both evaluation tools showed a moderate correlation, and the study population had relatively low competence in specific aspects of emergency medicine and clinical procedures and skills.
Affiliation(s)
- Sang Chul Kim
- Department of Emergency Medicine, Chungbuk National University Hospital, Cheongju, Korea
- Laboratory of Emergency Medical Services, Seoul National University Hospital Biomedical Research Institute, Seoul, Korea
- Young Sun Ro
- Laboratory of Emergency Medical Services, Seoul National University Hospital Biomedical Research Institute, Seoul, Korea
- Sang Do Shin
- Laboratory of Emergency Medical Services, Seoul National University Hospital Biomedical Research Institute, Seoul, Korea
- Department of Emergency Medicine, Seoul National University College of Medicine, Seoul, Korea
- Dae Han Wi
- Laboratory of Emergency Medical Services, Seoul National University Hospital Biomedical Research Institute, Seoul, Korea
- Department of Emergency Medicine, Wonkwang University School of Medicine, Iksan, Korea
- Ju Ok Park
- Laboratory of Emergency Medical Services, Seoul National University Hospital Biomedical Research Institute, Seoul, Korea
- Department of Emergency Medicine, Hallym University Dongtan Sacred Heart Hospital, Hwaseong, Korea
- Kyong Min Sun
- Laboratory of Emergency Medical Services, Seoul National University Hospital Biomedical Research Institute, Seoul, Korea
- Department of Emergency Medicine, Seoul National University College of Medicine, Seoul, Korea
- Kwangsoo Bae
- Laboratory of Emergency Medical Services, Seoul National University Hospital Biomedical Research Institute, Seoul, Korea
- Department of Emergency Medicine, Andong Hospital, Andong, Korea
3
Li M, Shu Z, Huang X, Du Z, Wu J, Xia Q, Liu K, Lou J, Jing L. Capacity evaluation for general practitioners in Pudong new area of Shanghai: an empirical study. Int J Equity Health 2016; 15:192. [PMID: 27894308 PMCID: PMC5126805 DOI: 10.1186/s12939-016-0484-8]
Abstract
Background Building a highly qualified general practitioner (GP) workforce is key to the development of primary health care. It is therefore urgent to ensure the quality of GP services against the background of the new round of health care system reform in China. A new model of GP qualification examination was first implemented in Pudong New Area of Shanghai, China, aiming to empirically evaluate GPs' capability in terms of clinical performance and social recognition. In the current study, the first two years (2014–2015) of these theoretical and practical examinations were analysed to gain insight into the GP community and to identify barriers to this form of GP qualification examination. Methods The agency survey method was applied to the two-year database of GP examinees, and formative research was conducted to explore the key elements for developing the examination model. Data analysis was performed with SPSS for Windows (version 19.0) to describe the GPs' overall characteristics and to make comparisons between groups. Results In 2015, there were 1264 GPs in the area; across its districts, statistically significant differences were found in sex, age, professional title and employment duration (P < 0.05). These results were similar to those in 2014. The examinees' theoretical scores differed significantly (F = 7.76; P < 0.05), declining from the urban district to the suburban, rural and more remote rural districts, as indicated by the LSD t-test (P < 0.05). In the theoretical examinations, scores were higher in Western medicine than in traditional Chinese medicine (F = 22.11; P < 0.05). Conclusions As suggested by this study of the GP qualification examination pioneered in Pudong New Area of Shanghai, the development of the GP community was far from sufficient. This was a preliminary study; further studies are merited on continuing medical education, performance appraisal and incentive mechanisms.
Affiliation(s)
- Ming Li
- Shanghai Pudong Institute for Health Development, Shanghai, China; Pudong New Area Commission of Health and Family Planning, Shanghai, China
- Zhiqun Shu
- Shanghai Pudong Institute for Health Development, Shanghai, China
- Xuan Huang
- Shanghai Pudong Institute for Health Development, Shanghai, China; Pudong New Area Commission of Health and Family Planning, Shanghai, China
- Zhaohui Du
- Medical Institutions Administration Center of Pudong New Area, Shanghai, China
- Jun Wu
- Pudong New Area Commission of Health and Family Planning, Shanghai, China
- Qingshi Xia
- Medical Institutions Administration Center of Pudong New Area, Shanghai, China
- Kun Liu
- Shanghai Pudong Institute for Health Development, Shanghai, China
- Jiquan Lou
- Shanghai Pudong Institute for Health Development, Shanghai, China
- Limei Jing
- Shanghai Pudong Institute for Health Development, Shanghai, China
4
Wenghofer EF, Henzel TR, Miller SH, Norcross W, Boal P. Value of General Medical Knowledge Examinations in Performance Assessment of Practicing Physicians With Potential Competence and Performance Deficiencies. J Contin Educ Health Prof 2016; 36:113-118. [PMID: 27262154 DOI: 10.1097/ceh.0000000000000063]
Abstract
INTRODUCTION Problems with a physician's performance may arise at any point during their career, so there is a need for effective, valid tools and processes to accurately assess and identify deficiencies in competence or performance. Although scores on multiple-choice questions have been shown to be predictive of some aspects of performance in practicing physicians, their relationship to overall clinical competence is somewhat uncertain, particularly after the first 10 years of practice. The purpose of this study was therefore to examine how a general medical knowledge multiple-choice examination is associated with a comprehensive assessment of competence and performance in experienced practicing physicians with potential competence and performance deficiencies. METHODS The study included 233 physicians, of varying specialties, assessed by the University of California, San Diego Physician Assessment and Clinical Education Program (PACE) between 2008 and 2012, who completed the Post-Licensure Assessment System Mechanisms of Disease (MoD) examination. Logistic regression determined whether the examination score significantly predicted a passing assessment outcome after adjusting for gender, international medical graduate status, certification status, and age. RESULTS Most physicians (89.7%) received an overall passing outcome on the PACE assessment. The mean MoD score was 66.9% correct, with a median of 68.0%. Logistic regression showed that physicians with higher MoD examination scores had a significantly increased likelihood of achieving a passing assessment outcome (odds ratio = 1.057, P = .038). DISCUSSION Physician MoD scores are significant predictors of overall physician competence and performance as evaluated by the PACE assessment.
Affiliation(s)
- Elizabeth F Wenghofer
- Dr. Wenghofer: Associate Professor, School of Rural and Northern Health, Laurentian University, Sudbury, ON, Canada, and Research Director, Physician Assessment and Clinical Education (PACE) Program, University of California San Diego, San Diego, CA. Dr. Henzel: Research Analyst, Policy & Product Development, National Board of Medical Examiners, Philadelphia, PA. Dr. Miller: Voluntary Clinical Professor of Family and Preventive Medicine and Surgery, University of California San Diego, San Diego, CA. Dr. Norcross: Clinical Professor and Director, Physician Assessment and Clinical Education (PACE) Program, University of California San Diego, San Diego, CA. Mr. Boal: Associate Director, Physician Assessment and Clinical Education (PACE) Program, University of California San Diego, San Diego, CA
5
Chambers DW, LaBarre EE. The Effects of Student Self-Assessment on Learning in Removable Prosthodontics Laboratory. J Dent Educ 2014. [DOI: 10.1002/j.0022-0337.2014.78.5.tb05719.x]
Affiliation(s)
- Eugene E. LaBarre
- Integrated Reconstructive Dental Sciences, University of the Pacific Arthur A. Dugoni School of Dentistry
6
Medication Management Skills of Nursing Students: Comparing the Students and Their Instructors' Evaluation in Two Universities. Nurs Midwifery Stud 2013. [DOI: 10.5812/nms.8555]
7
Liaw SY, Scherpbier A, Rethans JJ, Klainin-Yobas P. Assessment for simulation learning outcomes: a comparison of knowledge and self-reported confidence with observed clinical performance. Nurse Educ Today 2012; 32:e35-e39. [PMID: 22064013 DOI: 10.1016/j.nedt.2011.10.006]
Abstract
BACKGROUND With the extensive use of simulation in nursing education, researchers around the world are evaluating learning outcomes from simulation. Numerous studies have reported the use of knowledge tests and self-reported measures to evaluate simulation outcomes. AIM To determine whether self-reported confidence and knowledge measures are indicators of clinical performance observed in a simulation-based assessment. METHOD Thirty-one third-year nursing students were randomized into intervention and control groups. The intervention group received a six-hour simulation-based programme on the care of a patient with physiological deterioration. Pre- and post-tests using a knowledge test, a confidence scale and a simulation-based assessment were conducted immediately before and after the simulation programme. RESULTS The intervention group had significantly higher post-test mean scores than the control group for knowledge and clinical performance. Both groups demonstrated a significant improvement from pre-test to post-test scores for self-confidence, with no significant difference detected between the two groups. Correlation tests indicated no significant correlation between self-confidence and clinical performance, or between knowledge and clinical performance. CONCLUSION The study did not provide evidence to support the validity of the knowledge test and self-confidence measures for predicting clinical performance. Most importantly, it revealed the potential for self-confidence to be overestimated relative to performance observed in a simulation-based assessment.
Affiliation(s)
- Sok Ying Liaw
- Alice Lee Centre for Nursing Studies, Yong Loo Lin School of Medicine, National University of Singapore, Level 2, Clinical Research Centre, Block MD11, 10 Medical Drive, 117597, Singapore
8
Rush S, Firth T, Burke L, Marks-Maran D. Implementation and evaluation of peer assessment of clinical skills for first year student nurses. Nurse Educ Pract 2012; 12:219-26. [PMID: 22357193 DOI: 10.1016/j.nepr.2012.01.014]
Abstract
Enabling student nurses to learn and develop evidence-based clinical skills is the cornerstone of nursing education programmes. This article describes the implementation of a peer assessment of clinical skills (PACS) scheme within a skills laboratory in a university school of nursing, and the link between peer assessment and clinical skills development. This was a qualitative evaluative study that used questionnaires for data collection and was undertaken with one cohort of students. Findings showed that nearly half of all the statements made by students were about the positive impact of PACS on their skills learning. Students identified giving and receiving peer feedback, reflection and working with peers in small groups as being particularly valuable in clinical skills learning. Increased confidence was also a dominant finding, as was the value of repeated practice in a simulation setting on skills development. This study supports some of the previous literature on the use of simulation and peer assessment, but the discussion presented in this article also highlights that some of its findings contradict others in the literature. What makes this study unique is the link that students established between the peer-assessment process and clinical skills learning.
Affiliation(s)
- Sue Rush
- Kingston University/St George's University of London, Frank Lampl Building, Kingston Hill, Kingston upon Thames, Surrey KT2 7LB, United Kingdom
9
Bhanji F, Gottesman R, de Grave W, Steinert Y, Winer L. Paediatric resuscitation training — Do medical students believe it should be a mandatory component of the curriculum? Resuscitation 2011; 82:584-7. [DOI: 10.1016/j.resuscitation.2011.01.006]
10
Mancini ME, Soar J, Bhanji F, Billi JE, Dennett J, Finn J, Ma MHM, Perkins GD, Rodgers DL, Hazinski MF, Jacobs I, Morley PT. Part 12: Education, implementation, and teams: 2010 International Consensus on Cardiopulmonary Resuscitation and Emergency Cardiovascular Care Science With Treatment Recommendations. Circulation 2010; 122:S539-81. [PMID: 20956260 DOI: 10.1161/circulationaha.110.971143]
11
Soar J, Mancini ME, Bhanji F, Billi JE, Dennett J, Finn J, Ma MHM, Perkins GD, Rodgers DL, Hazinski MF, Jacobs I, Morley PT. Part 12: Education, implementation, and teams: 2010 International Consensus on Cardiopulmonary Resuscitation and Emergency Cardiovascular Care Science with Treatment Recommendations. Resuscitation 2010; 81 Suppl 1:e288-330. [PMID: 20956038 PMCID: PMC7184565 DOI: 10.1016/j.resuscitation.2010.08.030]
Affiliation(s)
- Jasmeet Soar
- Southmead Hospital, North Bristol NHS Trust, Bristol, United Kingdom
12
Rodgers DL, Bhanji F, McKee BR. Written evaluation is not a predictor for skills performance in an Advanced Cardiovascular Life Support course. Resuscitation 2010; 81:453-6. [DOI: 10.1016/j.resuscitation.2009.12.018]
13
Wenghofer E, Klass D, Abrahamowicz M, Dauphinee D, Jacques A, Smee S, Blackmore D, Winslade N, Reidel K, Bartman I, Tamblyn R. Doctor scores on national qualifying examinations predict quality of care in future practice. Med Educ 2009; 43:1166-1173. [PMID: 19930507 DOI: 10.1111/j.1365-2923.2009.03534.x]
Abstract
OBJECTIVES This study aimed to determine whether national licensing examinations that measure medical knowledge (QE1) and clinical skills (QE2) predict the quality of care delivered by doctors in future practice. METHODS Cohorts of doctors who took the Medical Council of Canada Qualifying Examinations Part I (QE1) and Part II (QE2) between 1993 and 1996 and subsequently entered practice in Ontario, Canada (n = 2420) were followed for their first 7-10 years in practice. The 208 of these doctors who were randomly selected for peer assessment of quality of care were studied. The main outcome measure was quality of care (acceptable/unacceptable) as assessed by doctor peer-examiners using a structured chart review and interview. Multivariate logistic regression was used to determine whether qualifying examination scores predicted the outcome of the peer assessments while controlling for age, sex, training and specialty, and whether the addition of QE2 scores provided additional prediction of quality of care. RESULTS Fifteen (7.2%) of the 208 doctors assessed were considered to provide unacceptable quality of care. Doctors in the bottom quartile of QE1 scores had a greater than three-fold increase in the risk of an unacceptable quality-of-care assessment outcome (odds ratio [OR] 3.41, 95% confidence interval [CI] 1.14-10.22). Doctors in the bottom quartile of QE2 scores were also at higher risk of being assessed as providing unacceptable quality of care (OR 4.24, 95% CI 1.32-13.61). However, QE2 results provided no significant improvement in predicting peer assessment results over QE1 results (likelihood ratio test: χ² = 3.21, 1 d.f., P = 0.07). CONCLUSIONS Doctor scores on qualifying examinations are significant predictors of quality-of-care problems identified by regulatory, practice-based peer assessment.
14
Baerheim A, Hjortdahl P, Holen A, Anvik T, Fasmer OB, Grimstad H, Gude T, Risberg T, Vaglum P. Curriculum factors influencing knowledge of communication skills among medical students. BMC Med Educ 2007; 7:35. [PMID: 17925041 PMCID: PMC2089059 DOI: 10.1186/1472-6920-7-35]
Abstract
BACKGROUND Communication training builds on the assumption that understanding the concepts of professional communication facilitates the training. We know little about whether students' knowledge of clinical communication skills is affected by their attendance of communication training courses, or to what degree other elements of clinical training or curriculum design also play a role. The aim of this study was to determine which elements of the curriculum influence medical students' acquisition of knowledge of clinical communication skills. METHODS The study was a cross-sectional survey performed in the four Norwegian medical schools, which have different curricula, in spring 2003. A self-administered questionnaire on knowledge of communication skills (an abridged version of van Dalen's paper-and-pencil test) was sent to all students attending the four medical schools. A total of 1801 students (59%) responded with complete questionnaires. RESULTS At the end of the 1st year of study, scores on the knowledge test were higher among students at the two schools running communication courses and providing early patient contact (mean 81%) than at the other two medical schools (mean 69-75%, P ≤ 0.001), with students in a traditional curriculum scoring lowest. Their scores increased sharply towards the end of the 3rd year, during which they had extensive patient contact and participated in an intensive communication course (77% vs. 72% the previous year, P ≤ 0.01). All students generally scored lower in academic years without communication training. However, at the end of the final year the difference between the schools was only 5% (81% vs. 86%, P ≤ 0.001). CONCLUSION Medical students' acquisition of knowledge of communication skills may be optimised when training is given together with extensive supervised patient contact, especially if this teaching takes place in the initial years of the curriculum.
Affiliation(s)
- Anders Baerheim
- Department of Public Health and Primary Health Care, University of Bergen, Norway
- Per Hjortdahl
- Institute of General Practice and Community Medicine, University of Oslo, Norway
- Are Holen
- Department of Neuroscience, University of Science and Technology, Trondheim, Norway
- Tor Anvik
- Department of Community Medicine, University of Tromsø, Norway
- Hilde Grimstad
- Department of Public Health and General Practice, University of Science and Technology, Trondheim, Norway
- Tore Gude
- Department of Behavioural Sciences in Medicine, University of Oslo, Norway
- Terje Risberg
- Department of Oncology, University of Tromsø, Norway
- Per Vaglum
- Department of Behavioural Sciences in Medicine, University of Oslo, Norway
15
Cameron WA, Taylor GK, Broadfoot R, O'Donnell G. The role of the Clinical Governance Adviser in supporting quality improvement in general dental practice: the Glasgow Quality Practice Initiative. Br Dent J 2007; 202:193-201. [PMID: 17322843 DOI: 10.1038/bdj.2007.129]
Abstract
AIM The purpose of this paper is to share information derived from the Glasgow Quality Practice Initiative with general dental practice teams, Dental Practice Advisers and others involved in quality improvement. METHOD A sample of 16 general dental practices was selected from volunteers to receive assistance in working towards a Quality Practice Award. Two Clinical Governance Advisers were appointed to provide this support. DATA COLLECTED Quantitative, qualitative and observational data were collected, and comparisons made between practices that had and had not received support. RESULTS Selected results are presented demonstrating both the baseline position and comparisons of the 'Intervention' and 'Non-Intervention' groups. CONCLUSIONS AND RECOMMENDATIONS Baseline levels of quality assurance were generally poor. It is asserted that the practices receiving Clinical Governance Adviser support benefited from the experience and made meaningful improvements. This has implications for the development of national policy in Scotland.
Affiliation(s)
- W A Cameron
- Greater Glasgow NHS Dental Directorate, Clutha House, 120 Cornwall Street South, UK
16
Abstract
BACKGROUND General practitioners (GPs) have a role in the early management of major trauma in rural Australia. The Early Management of Severe Trauma (EMST) course addresses their educational needs by providing skills for the systematic management of the seriously injured patient. However, any skill decays naturally over time. This study surveyed GPs who had completed the EMST course to determine their confidence in trauma management. METHODS A two-page survey was mailed in December 2004 to all GPs who had completed an EMST course between 1989 and 2004 and were currently residing in Western Australia. The survey consisted of background questions, open-ended questions regarding the EMST course and skills confidence ratings using visual analogue scales. The final sample size was 223. RESULTS The response rate was 55%. GPs were least confident in carrying out diagnostic peritoneal lavage and cricothyroidotomy. They were most confident in inserting i.v. cannulas and managing fluid replacement. Their confidence in some of these skills was related to the frequency of managing trauma patients but not to the interval since completing the EMST course. GPs identified the systematic approach to trauma management and practical/procedural skills as the most relevant components of EMST. They felt that EMST could be improved with more accessible refresher courses and more practical/procedural skills. CONCLUSION Most of these GPs were involved in rural hospital work where they may be required to manage seriously injured patients. They require regular refresher courses to maintain their confidence in treating seriously injured patients.
Affiliation(s)
- Derrick G Lopez
- School of Primary, Aboriginal and Rural Health Care, Discipline of General Practice, Claremont, WA, Australia
17
Affiliation(s)
- Richard K Reznick
- Department of Surgery, University of Toronto, University Health Network, Toronto, ON M5G 1L5, Canada
18
Wenghofer EF, Williams AP, Klass DJ, Faulkner D. Physician-patient encounters: the structure of performance in family and general office practice. J Contin Educ Health Prof 2006; 26:285-93. [PMID: 17163493 DOI: 10.1002/chp.81]
Abstract
INTRODUCTION The College of Physicians and Surgeons of Ontario, the regulatory authority for physicians in Ontario, Canada, conducts peer assessments of physicians' practices as part of a broad quality assurance program. Outcomes are summarized as a single score, with no differentiation between performance in various aspects of care. In this study we test the hypothesis that physician performance is multidimensional and that the dimensions can be defined in terms of physician-patient encounters. METHODS Peer assessment data from 532 randomly selected family practitioners were analyzed using factor analysis to assess the dimensional structure of performance. Content validity was confirmed through consultation sessions with 130 physicians. Multiple-item measures were constructed for each dimension and their reliability calculated. Analysis of variance determined the extent to which multiple-item measure scores varied across peer assessment outcomes. RESULTS Six performance dimensions were confirmed: acute care, chronic conditions, continuity of care and referrals, well care and health maintenance, psychosocial care, and patient records. DISCUSSION Physician performance is multidimensional, encompassing different types of physician-patient encounters, and individual practices vary across the dimensions. A conceptual framework for multidimensional performance may inform the design of meaningful evaluations and educational recommendations tailored to the individual performance of practicing physicians.
Affiliation(s)
- Elizabeth F Wenghofer
- Quality Management Division, College of Physicians and Surgeons of Ontario, and Department of Health Policy, Management and Evaluation, University of Toronto, Toronto, Ontario.
19
Belos G, Lionis C, Fioretos M, Vlachonicolis J, Philalithis A. Clinical undergraduate training and assessment in primary health care: experiences gained from Crete, Greece. BMC Med Educ 2005; 5:13. [PMID: 15882464] [PMCID: PMC1142318] [DOI: 10.1186/1472-6920-5-13]
Abstract
BACKGROUND Primary health care (PHC) is increasingly being introduced into undergraduate medical education. In Greece, the Faculty of Medicine of the University of Crete was the first to introduce a 4-week training in primary health care. This paper presents the experiences gained from the initial implementation of the teaching of practice-based primary care in rural Crete and reports on the assessment scale that was developed. METHODS 284 students' case write-ups from the 6 primary care units (PCUs) to which they were allocated for the period 1990 to 1994 were analysed. The demographic data of the students and patients and the number of home visits were studied. Content analysis of the students' write-ups was carried out, using an assessment scale consisting of 10 dichotomous variables, in order to quantify eight primary qualitative criteria. RESULTS Internal reliability was estimated by the index KR20 = 0.67. Face and content validity were found to conform to the standards set for the course, while logistic linear regression analysis showed that the quality criteria could be used as an assessment scale. The number of home visits carried out varied between the different PCUs (p < 0.001), and more were reported in the write-ups that fulfilled criteria related to the biopsychosocial approach (p < 0.05). Nine quantitative criteria were fulfilled in more than 90% of case reports, but laboratory investigations were reported in only 69.0% of case reports. Statistically significant differences between the PCUs were observed in the fulfilment of criteria related to the community approach, patient assessment and information related to the patient's perception of the illness, but not in those related to aspects of clinical patient management. Differences in reporting laboratory investigations (p < 0.001) are explained by the lack of such facilities in some PCUs. Neither the demographic characteristics of the patients nor those of the students affected the criteria.
CONCLUSION The primary health care course achieved the objectives of introducing students to comprehensive, community-oriented care, although there was variation between the PCUs. The assessment scale developed to analyse the students' case write-ups provided data that can be used to evaluate the course.
Affiliation(s)
- George Belos
- Koropi Health Centre, Athens, Greece
- Health Planning Unit, School of Medicine, University of Crete, Heraklion, Greece
- Christos Lionis
- Clinic of Social and Family Medicine, School of Medicine, University of Crete, Heraklion, Greece
- Michael Fioretos
- Clinic of Social and Family Medicine, School of Medicine, University of Crete, Heraklion, Greece
- John Vlachonicolis
- Laboratory of Biostatistics, School of Medicine, University of Crete, Heraklion, Greece
- Anastas Philalithis
- Health Planning Unit, School of Medicine, University of Crete, Heraklion, Greece
20
Abstract
Context: Certified athletic trainers (ATCs) must be able to manage sport-related emergencies. Objective: To report emergency medical services (EMS) directors' perception of how ATCs manage emergencies and ATCs' comfort level in managing them. Design: Two descriptive questionnaires. Participants: EMS directors (n = 64) were asked about their perceptions of ATCs' ability to handle emergencies. ATCs (n = 224) identified their comfort level with handling emergencies. Results: EMS directors who had preseason meetings with ATCs had a significantly better perception of the ATCs' ability to handle emergencies than did those who did not have preseason meetings. ATCs with advanced certifications (emergency medical technician-basic, emergency medical technician-paramedic, and automated external defibrillator) were more comfortable handling emergencies than those without. Conclusions: EMS directors and ATCs revealed that ATCs could manage most emergencies that might arise in athletic activities. ATCs had a higher perception of their own ability to manage emergency situations than did the EMS directors.
21
Schuwirth L, Gorter S, Van der Heijde D, Rethans JJ, Brauer J, Houben H, Van der Linden S, Van der Vleuten C, Scherpbier A. The role of a computerised case-based testing procedure in practice performance assessment. Adv Health Sci Educ Theory Pract 2005; 10:145-55. [PMID: 16078099] [DOI: 10.1007/s10459-004-2784-9]
Abstract
INTRODUCTION For postgraduate training of doctors there is a need for valid and reliable instruments to assess their daily performance. Various instruments have been suggested, some of which use incognito simulated patients (SPs). These methods are resource intensive. Computerised case-based testing (CCT) is logistically simpler and may still predict performance well. The aim was to evaluate the predictive validity of CCT for practice performance. METHODS Seventeen rheumatologists were each visited by eight incognito SPs presenting various rheumatological complaints, who scored the performance of the rheumatologists using a predefined checklist. From this checklist a panel of experts identified essential items. In addition, the rheumatologists sat a CCT test containing 55 cases with a total of 121 items. RESULTS Unexpectedly, negative correlations were found between the SP scores and the CCT scores. Therefore, background variables on experience were used to compare both methods. The correlations between these variables and the CCT scores were high and positive, whereas those with the SP scores were high and negative. This pattern did not differ when using only the essential items of the checklist. Reliabilities of the SP scores were markedly high. DISCUSSION Although CCT was not predictive of SP scores, it was related to working experience. There are good reasons to assume that although the SP scores were more authentic, they were less valid than the CCT scores, mainly because they focussed more on thoroughness than on efficiency in data gathering. The results underpin the assumption that for valid performance assessment the most important issue is what information about the candidate is collected, not how authentic the method is.
Affiliation(s)
- L Schuwirth
- Department of Educational Development and Research, Maastricht University, The Netherlands
22
Cockburn K, Bernard P. Child and Adolescent Mental Health within Primary Care: A Study of General Practitioners' Perceptions. Child Adolesc Ment Health 2004; 9:21-24. [PMID: 32797600] [DOI: 10.1046/j.1475-357x.2003.00072.x]
Abstract
BACKGROUND Primary care teams play an important role in the provision of mental health care to children and young people. METHODS We developed and distributed a questionnaire to all general practitioners within one health authority area. RESULTS Many respondents rated their competence, knowledge and skills in important areas of child and adolescent mental health practice as less than satisfactory. A significant minority expressed a high level of interest in child and adolescent mental health, and most respondents reported that they would value further training. CONCLUSIONS General practitioners should be provided with more training and support in their role as providers of child and adolescent mental health care.
Affiliation(s)
- Katrina Cockburn
- Clairmont Family Centre, Princes Street, Bishop Auckland, County Durham, UK
- Paul Bernard
- Mulberry Centre, Darlington Memorial Hospital, Hollyhurst Road, Darlington, UK
23
Kramer A, Muijtjens A, Jansen K, Düsman H, Tan L, van der Vleuten C. Comparison of a rational and an empirical standard setting procedure for an OSCE. Objective structured clinical examinations. Med Educ 2003; 37:132-139. [PMID: 12558884] [DOI: 10.1046/j.1365-2923.2003.01429.x]
Abstract
PURPOSE Earlier studies of absolute standard setting procedures for objective structured clinical examinations (OSCEs) show inconsistent results. This study compared a rational and an empirical standard setting procedure. Reliability and credibility were examined first. The impact of a reality check was then established. METHODS The OSCE included 16 stations and was taken by trainees in their final year of postgraduate training in general practice and experienced general practitioners. A modified Angoff (independent judgements, no group discussion) with and without a reality check was used as a rational procedure. A method related to the borderline group procedure, the borderline regression (BR) method, was used as an empirical procedure. Reliability was assessed using generalisability theory. Credibility was assessed by comparing pass rates and by relating the passing scores to test difficulty. RESULTS The passing scores were 73.4% for the Angoff procedure without reality check (Angoff I), 66.0% for the Angoff procedure with reality check (Angoff II) and 57.6% for the BR method. The reliabilities (expressed as root mean square errors) were 2.1% for Angoffs I and II, and 0.6% for the BR method. The pass rates of the trainees and GPs were 19% and 9% for Angoff I, 66% and 46% for Angoff II, and 95% and 80% for the BR method, respectively. The correlation between test difficulty and passing score was 0.69 for Angoff I, 0.88 for Angoff II and 0.86 for the BR method. CONCLUSION The BR method provides a more credible and reliable standard for an OSCE than a modified Angoff procedure. A reality check improves the credibility of the Angoff procedure but does not improve its reliability.
Affiliation(s)
- Anneke Kramer
- National Centre for Evaluation of Postgraduate Training in General Practice (SVUH), Utrecht, the Netherlands.
24
Kramer AWM, Jansen JJM, Zuithoff P, Düsman H, Tan LHC, Grol RPTM, van der Vleuten CPM. Predictive validity of a written knowledge test of skills for an OSCE in postgraduate training for general practice. Med Educ 2002; 36:812-819. [PMID: 12354243] [DOI: 10.1046/j.1365-2923.2002.01297.x]
Abstract
PURPOSE To examine the validity of a written knowledge test of skills for predicting performance on an OSCE in postgraduate training for general practice. METHODS A randomly selected sample of 47 trainees in general practice took a knowledge test of skills, a general knowledge test and an OSCE. The OSCE included technical stations and stations comprising complete patient encounters. Each station was rated with both a checklist and a global rating. RESULTS The knowledge test of skills correlated more strongly with the OSCE than the general knowledge test did. Technical stations correlated more strongly with the knowledge test of skills than stations comprising complete patient encounters. For the technical stations the rating system had no influence on the correlation. For the stations comprising complete patient encounters the checklist rating correlated more strongly with the knowledge test of skills than the global rating. CONCLUSION The results of this study support the predictive validity of the knowledge test of skills. In postgraduate training for general practice a written knowledge test of skills can be used as an instrument to estimate the level of clinical skills, especially for group evaluation, such as in studies examining the efficacy of a training programme, or as a screening instrument for deciding which courses to offer. This estimation is more accurate when the content of the test matches the skills under study. However, written testing of skills cannot replace direct observation of the performance of skills.
Affiliation(s)
- A W M Kramer
- National Centre for Evaluation of Postgraduate Training in General Practice (SVUH), Mauritsstraat 92, 3583 HV Utrecht, The Netherlands.
25
Wanzel KR, Ward M, Reznick RK. Teaching the surgical craft: From selection to certification. Curr Probl Surg 2002; 39:573-659. [PMID: 12037512] [DOI: 10.1067/mog.2002.123481]
Affiliation(s)
- Kyle R Wanzel
- Department of Surgery, University of Toronto, Ontario, Canada
26
van Dalen J, Kerkhofs E, Verwijnen GM, van Knippenberg-van den Berg BW, van den Hout HA, Scherpbier AJJA, van der Vleuten CPM. Predicting communication skills with a paper-and-pencil test. Med Educ 2002; 36:148-53. [PMID: 11869442] [DOI: 10.1046/j.1365-2923.2002.01066.x]
Abstract
AIM This study was conducted to investigate the value of a written knowledge test of communication skills for predicting scores on a performance test of communication skills. METHOD A paper-and-pencil test of knowledge about communication skills and a performance test of communication skills, consisting of four stations with standardised patients, were administered to students of two classes of the medical schools of Maastricht and Leiden, the Netherlands. The results on these tests were compared. RESULTS From the results of both instruments, the classes of the participating students could be recognised equally well: 60% correct classifications of the classes by the knowledge test and 64% by the multiple-station examination. Between the two tests an overall, disattenuated correlation of 0.60 was found (N=133, P < 0.01), suggesting moderate predictive value of the knowledge test for the performance test of communication skills. The correlation is stronger for students from Maastricht medical school than for their colleagues in Leiden. The correlation between the knowledge of communication skills test and other available test results of the participating Maastricht students is close to zero, suggesting that the test measures a distinct quality of students' competence. DISCUSSION The paper-and-pencil test of knowledge of communication skills has predictive value for the performance of these skills, but this value seems to be less pronounced than similar findings for clinical procedural skills. The stronger relationship between 'knowing how' and 'showing' in the Maastricht student group might be indicative of an effect of the training format.
27
Boon JM, Meiring JH, Richards PA. Clinical anatomy as the basis for clinical examination: development and evaluation of an Introduction to Clinical Examination in a problem-oriented medical curriculum. Clin Anat 2002; 15:45-50. [PMID: 11835544] [DOI: 10.1002/ca.1091]
Abstract
Clinical anatomy is usually defined as anatomy applied to patient care. The question is asked whether students of a new horizontally and vertically integrated medical curriculum recognize the subject as the basis for clinical examination. A clinical anatomy practicum was developed in the special activity, "Introduction to Clinical Medicine," held in the second year of the Pretoria medical curriculum. The practicum was conducted on a station basis to anatomically prepare the student for the inspection, palpation, percussion, and auscultation of the cardiovascular, respiratory, abdominal, and urogenital systems. A total of 23 stations, consisting of eight cardiovascular, seven respiratory, and eight abdominal/urogenital stations, were designed. Standardized patients, cadavers, skeletons, prosected specimens, x-rays, computed tomography (CT) scans, magnetic resonance imaging (MRI), multimedia programs, and clinical case studies were used as resources. A Likert-type questionnaire was used for student evaluation of the practicum. Most students realized the importance of surface anatomy for a family physician. More than two-thirds thought the practicum improved their understanding of the anatomical basis for clinical examination. Only a minority of students were stimulated to do further reading on clinical examination. Students rated their ability to integrate the clinical examination with radiological anatomy as average. Most students were continuously aware of the appropriateness of the practicum for their future career. We conclude that medical students recognize the importance of anatomy as the basis for clinical examination when exposed to an appropriate integrated presentation format.
Affiliation(s)
- J M Boon
- Department of Anatomy, Faculty of Medicine, University of Pretoria, Pretoria, South Africa.
28
Remmen R, Scherpbier A, van der Vleuten C, Denekens J, Derese A, Hermann I, Hoogenboom R, Kramer A, Van Rossum H, Van Royen P, Bossaert L. Effectiveness of basic clinical skills training programmes: a cross-sectional comparison of four medical schools. Med Educ 2001; 35:121-8. [PMID: 11169083] [DOI: 10.1046/j.1365-2923.2001.00835.x]
Abstract
OBJECTIVE Training in physical diagnostic skills is an important part of undergraduate medical education. The objective of this study was to compare the outcomes of skills training at four medical schools. CONTEXT At the time of the study, three schools had a traditional lecture-based curriculum and one school had a problem-based learning curriculum with a longitudinal skills training programme. All schools offer extended exposure to clerkships. METHOD A cross-sectional study in four medical schools was performed, using a written test of skills that correlates well with actual student performance. The scores attained from four student groups were compared within and between the four medical schools. A total of 859 volunteer students from the last four years at each medical school participated in the study. RESULTS The mean scores in the traditional medical schools increased with the start of skills training and the hands-on experience offered during the clerkships. Students from the school with the longitudinal skills training programme and the problem-based learning approach had significantly higher mean scores at the start of the clerkships, and maintained their lead in the subsequent clinical years. CONCLUSIONS Longitudinal skills training seems to offer students superior preparation for clerkships as well as influencing their learning abilities during the clerkships. The effect of the problem-based learning approach, also related to the innovative philosophy of the curriculum, could not be accounted for separately.
Affiliation(s)
- R Remmen
- Department of General Practice, University of Antwerp, Belgium.
29
Vernooij-Dassen MJ, Ram PM, Brenninkmeijer WJ, Franssen LJ, Bottema BJ, van der Vleuten CP, Grol RP. Quality assessment in general practice trainers. Med Educ 2000; 34:1001-6. [PMID: 11123563] [DOI: 10.1046/j.1365-2923.2000.00662.x]
Abstract
INTRODUCTION General practice trainers hold a key position in general practice training, especially through their provision of a role model. Their own competence in general practice care is important in this regard. The purpose of the study was to evaluate whether a quality assessment programme could identify the strengths and weaknesses of GP trainers in four main domains of general practice care. METHODS The quality assessment programme comprised validated tests on four domains of general practice: general medical knowledge, knowledge of medical-technical skills, consultation skills and practice management. The criterion for the identification of relative strengths and weaknesses of GP trainers was a variation in the scores of trainers indicating higher and lower scores (strengths and weaknesses) within each domain. RESULTS GP trainers (n=105) were invited to participate in the study and 90% (n=94) did so. The variation in scores allowed the indication of strengths and weaknesses. Main strengths were: general medical knowledge of the digestive system; knowledge of medical skills relating to the skin; consultation skills concerning empathy; practice management with regard to accessibility. Main weaknesses were: general medical knowledge of the neurological system; knowledge of the medical/technical skills relating to the endocrine metabolic and nutritional system; consultation skills regarding shared decision making; practice management involving cooperation with staff and other care providers. DISCUSSION This first systematic evaluation of GP trainers identified their strengths and weaknesses. The weaknesses identified will be used in the improvement process as topics for collective improvement in the GP trainers' general curriculum and in individual learning plans.
Affiliation(s)
- M J Vernooij-Dassen
- Centre for Quality of Care Research/Vocational Training, Nijmegen University, The Netherlands
30
Ratcliffe J, Gask L, Creed F, Lewis B. Psychiatric training for family doctors: what do GP registrars want and can a brief course provide this? Med Educ 1999; 33:434-438. [PMID: 10354320] [DOI: 10.1046/j.1365-2923.1999.00343.x]
Abstract
CONTEXT About 40% of British general practitioners (GPs) train formally in a psychiatric post as part of their general practice training, but such training may not fully meet the needs of future GPs. A specific course in psychiatry for family doctors has run in Manchester for more than a decade. METHOD Semi-structured interviews were conducted with GP registrars before they attended the Manchester course in psychiatry, with questionnaire follow-up afterwards, to ascertain (a) the training 'wants' of GP registrars and (b) whether the course was providing them. RESULTS GP registrars most frequently wanted training in communication skills, how to access the resources available to GPs, the detection of psychiatric illness, drug treatment and the management of aggression. The course was successful in satisfying the first three but failed in the last two. There was a trend for those who attended Manchester Medical School, which scored significantly higher on the number of topics covered at undergraduate level, to perceive a greater need for training than those who attended other medical schools. However, there was no evidence to link self-perception of greater need with having already worked in general practice during postgraduate training. CONCLUSIONS More attention needs to be paid to addressing the specific mental health skills training requirements of GP registrars, both within the attachment in psychiatry and during the practice year. Teaching packages require preliminary research before they will be entirely satisfactory for GP education.
Affiliation(s)
- J Ratcliffe
- Research Fellow and Hon. Senior Registrar in Psychiatry University of Manchester and Central Manchester NHS Trust
31
Guest AR, Roubidoux MA, Blane CE, Fitzgerald JT, Bowerman RA. Limitations of student evaluations of curriculum. Acad Radiol 1999; 6:229-35. [PMID: 10894081] [DOI: 10.1016/s1076-6332(99)80210-4]
Abstract
RATIONALE AND OBJECTIVES Medical student surveys are used extensively in the development and modification of curriculum. The purpose of this study was to look at medical student surveys of a radiology lecture series, evaluating the accuracy of student perceptions of learning and factors affecting them. MATERIALS AND METHODS After a "Case of the Week" lecture series, 156 3rd-year medical students returned a survey evaluating the experience with 10 questions on a four-point scale (1 = disagree, 4 = agree very much) and took a clinical competency assessment (CCA) examination with a radiology substation. Survey responses were compared with actual examination performance, analyzed for how overall learning was characterized in specific educational objectives, and evaluated for factors affecting perceived learning. RESULTS The mean response for perceived CCA examination preparedness was 1.83. The mean radiology station test score was 90.43%. Correlations between student perception of learning and the scoring of focused learning objectives ranged from 0.33 to 0.48 (P < .01). Students responding 1 to items assessing perceived lecture organization, stimulation to read, and interest in the field of radiology had mean scores for perception of overall learning of 2.09-2.44 and mean scores for recommendation of course continuation of 1.68-2.46. Students responding 4 had means of 3.25-3.81 and 3.06-4.0, respectively. CONCLUSION Student perceptions of the value of curriculum were inaccurate compared with external measures of performance, and students poorly related their general impressions to specific learning objectives. Perceived lecture organization, stimulation to read, and interest in radiology as a specialty affected perceived overall learning and perceived value of the lecture series.
Affiliation(s)
- A R Guest
- Department of Radiology, University of Michigan Medical Center, Ann Arbor 48109-0326, USA
32
Ram P, van der Vleuten C, Rethans JJ, Schouten B, Hobma S, Grol R. Assessment in general practice: the predictive value of written-knowledge tests and a multiple-station examination for actual medical performance in daily practice. Med Educ 1999; 33:197-203. [PMID: 10211240] [DOI: 10.1046/j.1365-2923.1999.00280.x]
Abstract
This study compares the predictive values of written knowledge tests and a standardized multiple-station examination for the actual medical performance of general practitioners (GPs), in order to select effective assessment methods to be used in quality-improvement activities. A comprehensive assessment was performed in four phases. First, 100 GPs from the southern part of the Netherlands were assessed by a general medical knowledge test and by a knowledge test on technical skills. Second, in order to check for time-order effects, participants were randomly divided into two groups of 50 each, comparable on scores of both knowledge tests and on professional characteristics. Finally, both groups went through a multiple-station examination using standardized patients and a practice video assessment of real surgery, but in opposite orders. Consultations were videotaped and assessed by well-trained peer observers. The drop-out rate was 10%. In both groups the predictive value of the medical knowledge tests, ranging from 0.43 to 0.56 (disattenuated Pearson correlation), proved to be comparable with the predictive value of the multiple-station examination for actual performance (0.33-0.59). The overall explained variance of the practice video assessment scores, measured by multiple regression analysis with performance scores as dependent variables and scores on the knowledge tests and the multiple-station examination as independent variables, was moderate (19%). A time-order effect showed in only one direction: from practice video assessment to the multiple-station examination. The GPs' professional characteristics did not contribute to the explanation of variation in performance. Medical knowledge tests can predict actual clinical performance to the same extent as a multiple-station examination. Compared with a multiple-station examination, a knowledge test may be a good alternative for assessing the performance of a large number of practising GPs.
Affiliation(s)
- P Ram
- Centre for Quality Research, Universities of Maastricht, The Netherlands
33
Affiliation(s)
- J Turnbull
- Department of Medicine, University of Ottawa, Ont., Canada
34
Farnill D, Hayes SC, Todisco J. Interviewing skills: self-evaluation by medical students. Med Educ 1997; 31:122-127. [PMID: 9231116] [DOI: 10.1111/j.1365-2923.1997.tb02470.x]
Abstract
In an independent learning project, 52 third-year medical students carried out a structured self-assessment of two videotaped psycho-social interviews they had conducted with volunteer clients 1 year earlier, as part of a previous course. The interviews had been conducted in small tutorials with feedback from their clients, fellow students and tutors, facilitated by videotape playback. During the sequence of 16 tutorials each student had carried out an early and a late interview and had observed and participated in the discussion of the interviews of 14 peers. Students were asked to tally the frequencies of various interview behaviours, to evaluate the quality of their behaviours, and to establish priorities for future learning. The videotapes were also reliably rated by an independent observer. Students' overall self-assessments correlated 0.46 with those of the independent observer. This correlation was higher than is typically reported in studies of the validity of self-assessment. In absolute terms, the students' mean rating of interviewing performance was 3.2 (adequate plus) which was significantly lower than the observer's mean of 3.6 (adequate to good). Results are discussed in terms of Gordon's (1992) two recommendations for improving the validity of self-assessments and two further suggestions, for paired comparisons and low-threat learning environments, are added.
Affiliation(s)
- D Farnill
- Department of Behavioural Sciences in Medicine, University of Sydney, Australia
35
Jansen JJ, Berden HJ, van der Vleuten CP, Grol RP, Rethans J, Verhoeff CP. Evaluation of cardiopulmonary resuscitation skills of general practitioners using different scoring methods. Resuscitation 1997; 34:35-41. [PMID: 9051822 DOI: 10.1016/s0300-9572(96)01028-3] [Citation(s) in RCA: 35] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/03/2023]
Abstract
In this study we evaluated the practical performance of 70 general practitioners in cardiopulmonary resuscitation (CPR) before and after instruction and compared checklist-based scores to mechanical recording scores in order to investigate which scoring method is preferable. Both checklist and recording strip-based scores showed significant improvement after instruction, but only 37% were judged proficient according to the American Heart Association standards (checklist scoring), and 47% according to the recording print-based scoring system, while raters judged 97% as satisfactory by general impression. Interrater reliability was highest for the recording print (0.97) and lower for the checklist (0.79), especially for CPR performance (0.56). Comparison of checklist and recording print showed that the checklist was specific but not very sensitive in identifying poor performance for cardiac compression rate, since observers overestimated performance. The correlation for CPR performance between checklist score and recording strip score was low (0.45), indicating that candidates were ranked differently. The correlation between diagnosis and performance score was low for checklist as well as recording print (0.22), indicating that the score on diagnosis was a poor predictor for the score on performance of CPR. These results support the use of the recording manikin as compared with the use of a checklist for formative evaluation of basic life support skills. However, as proficiency in diagnosis and performance in CPR are poorly correlated, assessment of diagnosis using a checklist must be included. Therefore we strongly recommend the combination of assessment by observers using a checklist for diagnostic procedures and the recording strip of the manikin for performance of CPR, as employed in most evaluation schemes.
Affiliation(s)
- J J Jansen
- Department of General Practice, University of Limburg, Maastricht, The Netherlands.
36
Jansen JJ, Scherpbier AJ, Metz JC, Grol RP, van der Vleuten CP, Rethans JJ. Performance-based assessment in continuing medical education for general practitioners: construct validity. Medical Education 1996; 30:339-344. [PMID: 8949472 DOI: 10.1111/j.1365-2923.1996.tb00844.x] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/22/2023]
Abstract
The use of performance-based assessment has been extended to postgraduate education and practising doctors, despite criticism of validity. While differences in expertise at this level are easily reflected in scores on a written test, these differences are relatively small on performance-based tests. However, scores on written tests and performance-based tests of clinical competence generally show moderate correlations. A study was designed to evaluate construct validity of a performance-based test for technical clinical skills in continuing medical education for general practitioners, and to explore the correlation between performance and knowledge of specific skills. A 1-day skills training was given to 71 general practitioners, covering four different technical clinical skills. The effect of the training on performance was measured with a performance-based test using a randomized controlled trial design, while the effect on knowledge was measured with a written test administered 1 month before and directly after the training. A training effect could be shown by the performance-based test for all four clinical skills. The written test also demonstrated a training effect for all but one skill. However, correlations between scores on the written test and on the performance-based test were low for all skills. It is concluded that construct validity of a performance-based test for technical clinical skills of general practitioners was demonstrated, while the knowledge test score was shown to be a poor predictor of competence for specific technical skills.
Affiliation(s)
- J J Jansen
- Centre for Research on Quality in Health Care, Universities of Nijmegen and Limburg, Maastricht, The Netherlands