1
Shenoy RV, Newbern D, Cooke DW, Chia DJ, Panagiotakopoulos L, DiVall S, Torres-Santiago L, Vangala S, Gupta N. The Structured Oral Examination: A Method to Improve Formative Assessment of Fellows in Pediatric Endocrinology. Acad Pediatr 2022;22:1091-1096. [PMID: 34999252] [DOI: 10.1016/j.acap.2021.12.032]
Abstract
OBJECTIVE A structured oral exam (SOE) can provide high-quality formative feedback to trainees, but it has not been adequately studied in graduate medical education. We obtained fellow and faculty perspectives on: 1) educational effectiveness, 2) feasibility/acceptability, and 3) time/cost of an SOE for formative feedback. METHODS Four pediatric endocrinology cases were developed and peer-reviewed to generate an SOE. The exam was administered by faculty to pediatric endocrinology fellows individually, with feedback after each case. Fellow and faculty perspectives on the SOE were obtained through a questionnaire. Qualitative thematic analysis was used to analyze written comments from faculty and fellows. RESULTS Seven of 10 pediatric endocrinology fellowship programs and all 18 fellows within those programs agreed to participate. Thematic analysis of fellow and faculty comments yielded 5 perceived advantages of the SOE: 1) improved identification of clinically relevant knowledge deficits, 2) improved assessment of clinical reasoning, 3) immediate feedback/teaching, 4) assurance of adequate teaching/assessment of uncommon cases, and 5) more clinically relevant assessment. Mean time to administer one case was 15.8 (2.0) minutes, which was mentioned as a potential barrier to implementation. Almost all fellows (17/18, 94%) and faculty (6/7, 86%) would recommend or would most likely recommend implementing the SOE in their curriculum. CONCLUSIONS Fellows and faculty perceived the SOE used for formative feedback to have several educational advantages over current assessments and found it highly acceptable. Objective educational advantages should be assessed in future studies of the SOE.
Affiliation(s)
- Ranjit V Shenoy
- Division of Pediatric Endocrinology, Department of Pediatrics, UCLA Mattel Children's Hospital (RV Shenoy), Los Angeles, Calif.
- Dorothee Newbern
- Division of Pediatric Endocrinology & Diabetes, Phoenix Children's Hospital (D Newbern), Phoenix, Ariz
- David W Cooke
- Division of Pediatric Endocrinology, Department of Pediatrics, Johns Hopkins University School of Medicine (DW Cooke), Baltimore, Md
- Dennis J Chia
- Division of Pediatric Endocrinology, Department of Pediatrics, UCLA Mattel Children's Hospital (DJ Chia), Los Angeles, Calif
- Leonidas Panagiotakopoulos
- Division of Pediatric Endocrinology, Department of Pediatrics, Emory University (L Panagiotakopoulos), Atlanta, Ga
- Sara DiVall
- Division of Endocrinology, Department of Pediatrics, Seattle Children's Hospital/University of Washington (S DiVall), Seattle, Wash
- Lournaris Torres-Santiago
- Division of Endocrinology, Diabetes & Metabolism, Nemours Children's Health (L Torres-Santiago), Jacksonville, Fla
- Sitaram Vangala
- UCLA Department of Medicine Statistics Core (S Vangala), Los Angeles, Calif
- Nidhi Gupta
- Division of Pediatric Endocrinology and Diabetes, Department of Pediatrics, Vanderbilt University Medical Center (N Gupta), Nashville, Tenn
2
Zarowitz B. “The world hates change, yet it is the only thing that has brought progress.”—Charles Kettering, 1959. J Am Coll Clin Pharm 2020. [DOI: 10.1002/jac5.1333]
Affiliation(s)
- Barbara Zarowitz
- Peter Lamy Center on Drug Therapy and Aging, University of Maryland School of Pharmacy, Las Vegas, Nevada, USA
3
Al-Mohammed A, Al Mohanadi D, Rahil A, Elhiday AH, Al Khal A, Suliman S. Evaluation of Progress of an ACGME-International Accredited Residency Program in Qatar. Qatar Med J 2020;2020:6. [PMID: 32300550] [PMCID: PMC7147266] [DOI: 10.5339/qmj.2020.6]
Abstract
Background: The American College of Physicians' (ACP) Internal Medicine In-Training Examination (IM-ITE) is designed to evaluate the cognitive knowledge of residents to aid them and their program directors in evaluating the training experience. Objective: To determine the impact of the curriculum reform that accompanied Accreditation Council for Graduate Medical Education (ACGME)-I alignment and accreditation on the internal medicine residency program (IMRP), using residents' performance on the ACP's ITE from 2008 to 2016, and to determine where the IMRP stands in comparison to all ACGME- and ACGME-I-accredited programs. Methods: This is a descriptive study conducted at a hospital-based IMRP in Doha, Qatar from 2008 to 2016. The study population was 1052 residents at all levels of training in the IMRP. The ACP-generated ITE results of all United States and ACGME-I accredited programs were compared with IM-ITE results in Qatar. These results were expressed as the total program average and the ranking percentile. Results: There was a progressive improvement in resident performance in Qatar, as shown by the rise in total average program score from 52% in 2008 to 72% in 2016 and the sharp rise in percentile rank from the 3rd percentile in 2008 to the 93rd percentile in 2016, with a dramatic increase from 2013 to 2014 (from the 32nd to the 73rd percentile), which represents the period of ACGME-I accreditation. None of the factors examined (ethnicity, USMLE, or year of residency) was statistically significant, with p values >0.05 and standardized coefficients of −0.017 to 0.495. There was negligible correlation between USMLE test scores and residents' ITE scores (p = 0.023, Pearson r = 0.097). Conclusion: The initial ACGME-I alignment followed by accreditation, together with a whole-curriculum redesign to a structured, competency-based program starting in 2008, led to an improvement in ITE scores in the IMRP. The lack of change in residency entry selection criteria over this period further supports this conclusion.
4
Olson AS, Williamson K, Hartman N, Cheema N, Olson N. The Correlation Between Emergency Medicine Residents' Grit and Achievement. AEM Educ Train 2020;4:24-29. [PMID: 31989067] [PMCID: PMC6965685] [DOI: 10.1002/aet2.10399]
Abstract
BACKGROUND Early identification of emergency medicine (EM) residents who struggle with educational attainment is difficult. In-training examination (ITE) scores predict success on the American Board of Emergency Medicine (ABEM) Qualifying Examination; however, results are not available until late in the academic year. The noncognitive trait "grit," defined as "perseverance and passion for long-term goals," predicts achievement in high school graduation rates, undergraduate GPA, and gross anatomy coursework. The Grit-S is a validated eight-question scale scored 1 to 5; the average of the responses represents a person's grit. Our objective was to determine the correlation between EM residents' Grit-S scores and achievement, as measured by MCAT percentiles, ITE scores, and remediation rates. STUDY DESIGN AND METHODS This was a 1-year prospective, multicenter trial involving ten EM residencies from 2017 to 2018. Subjects were PGY-1 to -4 EM residents. Grit-S scores, MCAT percentiles, remediation rates, ITE scores, and the ITE score's predicted likelihood of passing the ABEM Qualifying Examination were collected. Correlation coefficients were computed to assess the relationship between residents' grit and achievement. RESULTS A total of 385 of 434 residents (88.7%) participated, completing the Grit-S as part of a larger study. The mean Grit-S score was 3.62. Grit correlated positively with the predicted likelihood of passing the ABEM Qualifying Examination (r = 0.134, n = 382, p = 0.025). There was no correlation between grit and remediation (r = -0.04, n = 378, p = 0.46) or between grit and MCAT percentiles (r = -0.08, n = 262, p = 0.22). CONCLUSIONS The positive correlation between Grit-S scores and the predicted likelihood of passing the ABEM Qualifying Examination demonstrates grit's potential to assist residency leadership in early identification of residents who may attain a lower ITE score.
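The correlations reported above are plain Pearson coefficients between Grit-S scores and each achievement measure. A minimal stdlib-Python sketch with hypothetical values (the study's per-resident data are not reproduced here):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical Grit-S scores (mean of eight items, each scored 1-5) paired
# with a hypothetical predicted likelihood of passing the examination.
grit = [3.1, 3.4, 3.6, 3.9, 4.2]
pass_prob = [0.80, 0.84, 0.83, 0.90, 0.93]
r = pearson_r(grit, pass_prob)  # positive, as in the study's main finding
```

The same function applies unchanged to the grit-versus-remediation and grit-versus-MCAT comparisons; only the input lists differ.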
Affiliation(s)
- Adriana Segura Olson
- Department of Medicine, Section of Emergency Medicine, University of Chicago, Chicago, IL
- Department of Emergency Medicine, University of Texas Health San Antonio, San Antonio, TX
- Kelly Williamson
- Department of Emergency Medicine, Advocate Christ Medical Center, Oak Lawn, IL
- Nicholas Hartman
- Department of Emergency Medicine, Wake Forest School of Medicine, Winston-Salem, NC
- Navneet Cheema
- Department of Medicine, Section of Emergency Medicine, University of Chicago, Chicago, IL
- Nathan Olson
- Department of Medicine, Section of Emergency Medicine, University of Chicago, Chicago, IL
- Department of Emergency/Military Medicine, San Antonio Military Medical Center, Fort Sam Houston, TX
5
Abraham RM, Singaram VS. Using deliberate practice framework to assess the quality of feedback in undergraduate clinical skills training. BMC Med Educ 2019;19:105. [PMID: 30975213] [PMCID: PMC6460682] [DOI: 10.1186/s12909-019-1547-5]
Abstract
BACKGROUND In this research paper we report on the quality of feedback provided in the logbooks of pre-clinical undergraduate students, based on a model of 'actionable feedback'. Feedback to clinical learners about their performance is crucial to their learning, which ultimately impacts their development into competent clinicians. In response to students' concerns about the inconsistency and quality of feedback provided by clinicians, a structured feedback improvement strategy to move feedback forward was added to the clinical skills logbook. The instrument was also extended for peer assessment. This study aims to assess the quality of feedback using the deliberate practice framework. METHODS A feedback scoring system was used to retrospectively assess the quality of tutor and peer logbook feedback provided to second- and third-year medical students, identifying the deliberate practice components: task, performance gap, and action plan. The sample consisted of 425 second-year and 600 third-year feedback responses over one year. RESULTS All three deliberate practice components were observed in the majority of the written feedback for both classes. The frequency was higher in peer assessments (83%, 89%) than in tutor logbook assessments (51%, 67%) for the two classes, respectively. Average tutor and peer task, gap, and action feedback scores ranged from 1.84 to 2.07 and from 1.93 to 2.21, respectively. The overall quality of feedback provided by tutors and peers was moderate and less specific (average score ≤ 2). The absence of all three components was noted in only 1% of the feedback responses in both the second and third year. CONCLUSION This study found that adding a feed-forward strategy to the logbooks increased the overall quality of tutor and peer feedback, as the task, gap, and action plans were described. The deliberate practice framework provides an objective assessment of tutor and peer feedback quality and can be used for faculty development and training. Our findings suggest that ratings from the tool can also be used to give feedback providers feedback on the quality of what they provided: specifically describing a task and performance gap and providing a learning plan as feed-forward enhances the feedback given.
Affiliation(s)
- Reina M Abraham
- Clinical and Professional Practice, School of Clinical Medicine, College of Health Sciences, University of KwaZulu-Natal, Durban, 4000, South Africa.
- Veena S Singaram
- Clinical and Professional Practice, School of Clinical Medicine, College of Health Sciences, University of KwaZulu-Natal, Durban, 4000, South Africa.
6
Olson N, Olson AS, Williamson K, Hartman N, Branzetti J, Lank P. Faculty Assessment of Emergency Medicine Resident Grit: A Multicenter Study. AEM Educ Train 2019;3:6-13. [PMID: 30680342] [PMCID: PMC6339547] [DOI: 10.1002/aet2.10309]
Abstract
BACKGROUND Assessment of trainees' competency is challenging; the predictive power of traditional evaluations is debatable, especially with regard to noncognitive traits. New assessments are needed to better understand affective areas like personality. Grit, defined as "perseverance and passion for long-term goals," can assess aspects of personality. Grit predicts educational attainment and burnout rates in other populations and can be measured accurately with an informant-report version. Self-assessments, while useful, have inherent limitations. Faculty's ability to accurately assess trainees' grit could prove helpful in identifying learner needs and avenues for further development. OBJECTIVE This study sought to determine the correlation between EM residents' self-assessed Grit Scale (Grit-S) scores and faculty-assessed Grit-S scores of the same residents. METHODS Subjects were PGY-1 to -4 EM residents and resident-selected faculty, as part of a larger multicenter trial involving 10 EM residencies during 2017. The Grit-S was administered to participating EM residents; an informant version was completed by their self-selected faculty. Correlation coefficients were computed to assess the relationship between residents' self-assessed and faculty-assessed Grit-S scores. RESULTS A total of 281 of 303 residents completed the Grit-S, for a 93% response rate; 200 of the 281 residents had at least one faculty-assessed Grit-S score. No correlation was found between residents' self-assessed and faculty-assessed Grit-S scores. There was a correlation between the two faculty-assessed Grit-S scores for the same resident. CONCLUSION There was no correlation between resident self-assessed and faculty-assessed Grit-S scores; additionally, faculty-assessed Grit-S scores of residents were higher. This corroborates the challenges faculty face in accurately assessing aspects of the residents they supervise. Although faculty and resident Grit-S scores did not show significant concordance, grit may still be a useful predictive personality trait that could help shape future training.
Affiliation(s)
- Nathan Olson
- Department of Emergency Medicine, San Antonio Military Medical Center, San Antonio, TX
- Present address: Department of Medicine, Section of Emergency Medicine, University of Chicago, Chicago, IL
- Adriana Segura Olson
- Department of Emergency Medicine, University of Texas Health San Antonio, San Antonio, TX
- Present address: Department of Medicine, Section of Emergency Medicine, University of Chicago, Chicago, IL
- Kelly Williamson
- Department of Emergency Medicine, Advocate Christ Medical Center, Oak Lawn, IL
- Nicholas Hartman
- Department of Emergency Medicine, Wake Forest School of Medicine, Winston-Salem, NC
- Jeremy Branzetti
- Department of Emergency Medicine, New York University Langone Health, New York, NY
- Patrick Lank
- Department of Emergency Medicine, Northwestern University, Chicago, IL
7
Hauer KE, Vandergrift J, Lipner RS, Holmboe ES, Hood S, McDonald FS. National Internal Medicine Milestone Ratings: Validity Evidence From Longitudinal Three-Year Follow-up. Acad Med 2018;93:1189-1204. [PMID: 29620673] [DOI: 10.1097/acm.0000000000002234]
Abstract
PURPOSE To evaluate validity evidence for internal medicine milestone ratings across programs for three resident cohorts by quantifying "not assessable" ratings; reporting mean longitudinal milestone ratings for individual residents; and correlating medical knowledge ratings across training years with certification examination scores to determine the predictive validity of milestone ratings for certification outcomes. METHOD This retrospective study examined milestone ratings for postgraduate year (PGY) 1-3 residents in U.S. internal medicine residency programs. Data sources included milestone ratings, program characteristics, and certification examination scores. RESULTS Among 35,217 participants, the percentage with "not assessable" ratings decreased across years: 1,566 (22.5%) PGY1s in 2013-2014 versus 1,219 (16.6%) in 2015-2016 (P = .01), and 342 (5.1%) PGY3s in 2013-2014 versus 177 (2.6%) in 2015-2016 (P = .04). For individual residents with three years of ratings, mean milestone ratings increased from around 3 (behaviors of an early learner or advancing resident) in PGY1 (ranging from a mean of 2.73 to 3.19 across subcompetencies) to around 4 (ready for unsupervised practice) in PGY3 (mean of 4.00 to 4.22 across subcompetencies; P < .001 for all subcompetencies). For each 0.5-unit increase in the two medical knowledge (MK1, MK2) subcompetency ratings, the difference in examination scores for PGY3s was 19.5 points for MK1 (P < .001) and 19.0 points for MK2 (P < .001). CONCLUSIONS These findings provide evidence of the validity of the milestones by showing how training programs have applied them over time and how milestones predict other training outcomes.
Affiliation(s)
- Karen E Hauer
- K.E. Hauer is associate dean for assessment and professor, Department of Medicine, University of California at San Francisco, San Francisco, California. J. Vandergrift is a health services researcher, American Board of Internal Medicine (ABIM), Philadelphia, Pennsylvania. R.S. Lipner is senior vice president of assessment and research, ABIM, Philadelphia, Pennsylvania. E.S. Holmboe is senior vice president of milestones development and evaluation, Accreditation Council for Graduate Medical Education, Chicago, Illinois. S. Hood is director of initial certification, ABIM, Philadelphia, Pennsylvania. F.S. McDonald is senior vice president of academic and medical affairs, ABIM, Philadelphia, Pennsylvania
8
Cheung WJ, Dudek NL, Wood TJ, Frank JR. Supervisor-trainee continuity and the quality of work-based assessments. Med Educ 2017;51:1260-1268. [PMID: 28971502] [DOI: 10.1111/medu.13415]
Abstract
CONTEXT Work-based assessments (WBAs) represent an increasingly important means of reporting expert judgements of trainee competence in clinical practice. However, the quality of WBAs completed by clinical supervisors is of concern. The episodic and fragmented interaction that often occurs between supervisors and trainees has been proposed as a barrier to the completion of high-quality WBAs. OBJECTIVES The primary purpose of this study was to determine the effect of supervisor-trainee continuity on the quality of assessments documented on daily encounter cards (DECs), a common form of WBA. The relationship between trainee performance and DEC quality was also examined. METHODS Daily encounter cards representing three differing degrees of supervisor-trainee continuity (low, intermediate, high) were scored by two raters using the Completed Clinical Evaluation Report Rating (CCERR), a previously published nine-item quantitative measure of DEC quality. An analysis of variance (ANOVA) was performed to compare mean CCERR scores among the three groups. Linear regression analysis was conducted to examine the relationship between resident performance and DEC quality. RESULTS Differences in mean CCERR scores were observed between the three continuity groups (p = 0.02); however, the magnitude of the absolute differences was small (partial eta-squared = 0.03) and not educationally meaningful. Linear regression analysis demonstrated a significant inverse relationship between resident performance and CCERR score (p < 0.001, r2 = 0.18). This inverse relationship was observed in both groups representing on-service residents (p = 0.001, r2 = 0.25; p = 0.04, r2 = 0.19), but not in the off-service group (p = 0.62, r2 = 0.05). CONCLUSIONS Supervisor-trainee continuity did not have an educationally meaningful influence on the quality of assessments documented on DECs. However, resident performance was found to affect assessor behaviours in the on-service groups, whereas DEC quality remained poor regardless of performance in the off-service group. The findings suggest that greater attention should be given to determining ways of improving the quality of assessments reported for off-service residents, as well as for residents demonstrating appropriate progression of clinical competence.
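The effect size quoted above, partial eta-squared, reduces in a one-way design like this three-group comparison to SS_between / (SS_between + SS_within). A stdlib-Python sketch with hypothetical CCERR scores (the study's actual data are not reproduced here):

```python
def eta_squared(groups):
    """One-way ANOVA effect size: SS_between / (SS_between + SS_within).

    In a one-way design this equals partial eta-squared."""
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)
    # Between-group sum of squares: weighted squared deviation of group means.
    ss_between = sum(len(g) * ((sum(g) / len(g)) - grand) ** 2 for g in groups)
    # Within-group sum of squares: deviations of values from their group mean.
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    return ss_between / (ss_between + ss_within)

# Hypothetical CCERR totals for low / intermediate / high continuity groups.
low = [18.0, 19.5, 20.1, 18.7]
intermediate = [19.0, 20.2, 19.8, 18.9]
high = [19.4, 20.5, 20.0, 19.9]
effect = eta_squared([low, intermediate, high])
```

A value near 0.03, as in the study, would indicate that continuity explains only about 3% of the variance in DEC quality.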
Affiliation(s)
- Warren J Cheung
- Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Nancy L Dudek
- Division of Physical Medicine and Rehabilitation, Department of Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Timothy J Wood
- Department of Innovation in Medical Education, University of Ottawa, Ottawa, Ontario, Canada
- Jason R Frank
- Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Royal College of Physicians and Surgeons of Canada, Ottawa, Ontario, Canada
9
O'Neill TR, Peabody MR, Song H. The Predictive Validity of the National Board of Osteopathic Medical Examiners' COMLEX-USA Examinations With Regard to Outcomes on American Board of Family Medicine Examinations. Acad Med 2016;91:1568-1575. [PMID: 27254014] [DOI: 10.1097/acm.0000000000001254]
Abstract
PURPOSE To examine the predictive validity of the National Board of Osteopathic Medical Examiners' Comprehensive Osteopathic Medical Licensing Examination of the United States of America (COMLEX-USA) series with regard to the American Board of Family Medicine's (ABFM's) In-Training Examination (ITE) and Maintenance of Certification for Family Physicians (MC-FP) Examination. METHOD A repeated-measures design was employed, using test scores across seven levels of training for 1,023 DOs who took the MC-FP for the first time between April 2012 and November 2014 and for whom the ABFM had ITE scores for each of their residency years. Pearson and disattenuated correlations were calculated; Fisher r to z transformation was performed; and sensitivity, specificity, and positive and negative predictive values for the COMLEX-USA Level 2-Cognitive Evaluation (CE) with regard to the MC-FP were computed. RESULTS The Pearson and disattenuated correlations ranged from 0.55 to 0.69 and from 0.61 to 0.80, respectively. For MC-FP scores, only the correlation increase from the COMLEX-USA Level 2-CE to Level 3 was statistically significant (for Pearson correlations: z = 2.41, P = .008; for disattenuated correlations: z = 3.16, P < .001). The sensitivity, specificity, and positive and negative predictive values of the COMLEX-USA Level 2-CE with the MC-FP were 0.90, 0.39, 0.96, and 0.19, respectively. CONCLUSIONS Evidence was found that the COMLEX-USA can assist family medicine residency program directors in predicting later resident performance on the ABFM's ITE and MC-FP, which is becoming increasingly important as graduate medical education accreditation moves toward a single aligned model.
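The sensitivity, specificity, and predictive values reported above come from a 2x2 table of COMLEX-USA Level 2-CE outcome against MC-FP outcome. A small illustration with made-up counts (not the study's table, which is not reproduced here):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from a 2x2 classification table.

    tp/fp/fn/tn follow the usual convention: "positive" here would mean
    passing the predictor exam, with the later exam as the reference outcome."""
    return {
        "sensitivity": tp / (tp + fn),  # true positives among actual positives
        "specificity": tn / (tn + fp),  # true negatives among actual negatives
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Hypothetical counts for illustration only.
m = diagnostic_metrics(tp=90, fp=5, fn=10, tn=20)
```

Note the pattern in the study's numbers: with a high base rate of passing, sensitivity and PPV can be high (0.90, 0.96) while specificity and NPV stay low (0.39, 0.19).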
Affiliation(s)
- Thomas R O'Neill
- T.R. O'Neill is vice president of psychometric services, American Board of Family Medicine, Lexington, Kentucky. M.R. Peabody is a psychometrician, American Board of Family Medicine, Lexington, Kentucky. H. Song is senior director for psychometrics and research, National Board of Osteopathic Medical Examiners, Chicago, Illinois
10
Bogener JW, Bernhardt M, Cil A. New Paradigms in Orthopedic Education. Orthopedics 2016;39:269-271. [PMID: 27636682] [DOI: 10.3928/01477447-20160823-02]
11
Post JA, Wittich CM, Thomas KG, Dupras DM, Halvorsen AJ, Mandrekar JN, Oxentenko AS, Beckman TJ. Rating the Quality of Entrustable Professional Activities: Content Validation and Associations with the Clinical Context. J Gen Intern Med 2016;31:518-523. [PMID: 26902239] [PMCID: PMC4835372] [DOI: 10.1007/s11606-016-3611-8]
Abstract
BACKGROUND Entrustable professional activities (EPAs) have been developed to assess resident physicians with respect to Accreditation Council for Graduate Medical Education (ACGME) competencies and milestones. Although the feasibility of using EPAs has been reported, we are unaware of previous validation studies on EPAs or of potential associations between EPA quality scores and characteristics of educational programs. OBJECTIVES Our aim was to validate an instrument for assessing the quality of EPAs written for the assessment of internal medicine residents, and to examine associations between EPA quality scores and features of rotations. DESIGN This was a prospective content validation study to design an instrument that measures the quality of EPAs written for assessing internal medicine residents. PARTICIPANTS Residency leadership at Mayo Clinic, Rochester, participated in this study, including the program director, associate program directors, and individual rotation directors. INTERVENTIONS The authors reviewed the salient literature, and items were developed to reflect the domains of EPAs useful for assessment. The instrument underwent further testing and refinement. Each participating rotation director created EPAs that they felt would be meaningful for assessing learner performance in their area. These 229 EPAs were then rated for quality with the QUEPA instrument. MAIN MEASURES Performance characteristics of the QUEPA are reported. Quality ratings of EPAs were compared by primary ACGME competency, inpatient versus outpatient setting, and specialty type. KEY RESULTS QUEPA tool scores demonstrated excellent reliability (ICC range 0.72 to 0.94). Inpatient-focused EPAs were rated higher than outpatient-focused EPAs (3.88 vs. 3.66; p = 0.03). Medical knowledge EPAs scored significantly lower than EPAs assessing other competencies (3.34 vs. 4.00; p < 0.0001). CONCLUSIONS The QUEPA tool is supported by good validity evidence and may help in rating the quality of EPAs developed by individual programs. Programs should take care when writing EPAs for the outpatient setting or for assessing medical knowledge, as these tended to be rated lower.
Affiliation(s)
- Jason A Post
- College of Medicine, Department of Internal Medicine, Mayo Clinic, 200 1st St SW, Rochester, MN, 55905, USA.
- Christopher M Wittich
- College of Medicine, Department of Internal Medicine, Mayo Clinic, 200 1st St SW, Rochester, MN, 55905, USA
- Kris G Thomas
- College of Medicine, Department of Internal Medicine, Mayo Clinic, 200 1st St SW, Rochester, MN, 55905, USA
- Denise M Dupras
- College of Medicine, Department of Internal Medicine, Mayo Clinic, 200 1st St SW, Rochester, MN, 55905, USA
- Andrew J Halvorsen
- College of Medicine, Department of Internal Medicine, Mayo Clinic, 200 1st St SW, Rochester, MN, 55905, USA
- Jay N Mandrekar
- College of Medicine, Department of Health Sciences Research, Mayo Clinic, 200 1st St SW, Rochester, MN, 55905, USA
- Amy S Oxentenko
- College of Medicine, Department of Internal Medicine, Mayo Clinic, 200 1st St SW, Rochester, MN, 55905, USA
- Thomas J Beckman
- College of Medicine, Department of Internal Medicine, Mayo Clinic, 200 1st St SW, Rochester, MN, 55905, USA
12
Jackson JL, Kay C, Jackson WC, Frank M. The Quality of Written Feedback by Attendings of Internal Medicine Residents. J Gen Intern Med 2015;30:973-978. [PMID: 25691242] [DOI: 10.1007/s11606-015-3237-2]
Abstract
BACKGROUND Attending evaluations are commonly used to evaluate residents. OBJECTIVES To evaluate the quality of written feedback given to internal medicine residents. DESIGN Retrospective. PARTICIPANTS Internal medicine residents and faculty at the Medical College of Wisconsin from 2004 to 2012. MAIN MEASURES From monthly evaluations of residents by attendings, a randomly selected sample of 500 written comments was qualitatively coded and rated as high-, moderate-, or low-quality feedback by two independent coders with good inter-rater reliability (kappa: 0.94). Small-group exercises with residents and attendings also coded the utterances as high, moderate, or low quality and developed criteria for this categorization. In-service examination scores were correlated with written feedback. KEY RESULTS There were 228 internal medicine residents with 6,603 evaluations by 334 attendings. The 500 randomly selected written comments contained 2,056 unique utterances: 29% were coded as nonspecific statements, 20% were comments about resident personality, 16% about patient care, 14% about interpersonal communication, 7% about medical knowledge, 6% about professionalism, and 4% each about practice-based learning and systems-based practice. Based on the criteria developed in the group exercises, the majority of written comments were rated as moderate quality (65%); 22% were rated as high quality and 13% as low quality. Attendings who provided high-quality feedback rated residents significantly lower in all six Accreditation Council for Graduate Medical Education (ACGME) competencies (p < 0.0005 for all) and used a greater range of scores. Negative comments on medical knowledge were associated with lower in-service examination scores. CONCLUSIONS Most attendings' written evaluations were of moderate or low quality. Attendings who provided high-quality feedback appeared to be more discriminating, providing significantly lower ratings of residents in all six ACGME core competencies and across a greater range. Attendings' negative written comments on medical knowledge correlated with lower in-service examination scores.
13
Aldeen AZ, Quattromani EN, Williamson K, Hartman ND, Wheaton NB, Branzetti JB. Faculty Prediction of In-Training Examination Scores of Emergency Medicine Residents: A Multicenter Study. J Emerg Med 2015; 49:64-9. [PMID: 25843930] [DOI: 10.1016/j.jemermed.2015.01.015]
Abstract
BACKGROUND The Emergency Medicine In-Training Examination (EMITE) is one of the few validated instruments for medical knowledge assessment of emergency medicine (EM) residents. The EMITE is administered only once annually, with results available just 2 months before the end of the academic year. An earlier predictor of EMITE scores would be helpful for educators to institute timely remediation plans. A previous single-site study found that only 69% of faculty predictions of EMITE scores were accurate. OBJECTIVE The goal of this article was to measure the accuracy with which EM faculty at five residency programs could predict EMITE scores for resident physicians. METHODS We asked EM faculty at five different residency programs to predict the 2014 EMITE scores for all their respective resident physicians. The primary outcome was prediction accuracy, defined as the proportion of predictions within 6% of the actual scores. The secondary outcome was prediction precision, defined as the mean deviation of predictions from the actual scores. We assessed faculty background variables for correlation with the two outcomes. RESULTS One hundred and eleven faculty participated in the study (response rate 68.9%). Mean prediction accuracy for all faculty was 60.0%. Mean prediction precision was 6.3%. Participants were slightly more accurate at predicting scores of noninterns compared to interns. No faculty background variable correlated with the primary or secondary outcomes. Eight participants predicted scores with high accuracy (>80%). CONCLUSIONS In this multicenter study, EM faculty possessed only moderate accuracy at predicting resident EMITE scores. A very small subset of faculty members is highly accurate.
Affiliation(s)
- Amer Z Aldeen
- Emergency Medicine Physicians, Ltd., Department of Emergency Medicine, Presence St. Joseph Medical Center, Joliet, Illinois
- Erin N Quattromani
- Department of Emergency Medicine, St. Louis University School of Medicine, St. Louis, Missouri
- Kelly Williamson
- Department of Emergency Medicine, Advocate Christ Medical Center, Chicago, Illinois
- Nicholas D Hartman
- Department of Emergency Medicine, Wake Forest School of Medicine, Winston-Salem, North Carolina
- Natasha B Wheaton
- Department of Emergency Medicine, University of Iowa Carver College of Medicine, Iowa City, Iowa
- Jeremy B Branzetti
- Division of Emergency Medicine, University of Washington School of Medicine, Seattle, Washington
14
Sisson SD, Bertram A, Yeh HC. Concurrent Validity Between a Shared Curriculum, the Internal Medicine In-Training Examination, and the American Board of Internal Medicine Certifying Examination. J Grad Med Educ 2015. [PMID: 26217421] [PMCID: PMC4507926] [DOI: 10.4300/jgme-d-14-00054.1]
Abstract
BACKGROUND A core objective of residency education is to facilitate learning, and programs need more curricula and assessment tools with demonstrated validity evidence. OBJECTIVE We sought to demonstrate concurrent validity between performance on a widely shared, ambulatory curriculum (the Johns Hopkins Internal Medicine Curriculum), the Internal Medicine In-Training Examination (IM-ITE), and the American Board of Internal Medicine Certifying Examination (ABIM-CE). METHODS A cohort study of 443 postgraduate year (PGY)-3 residents at 22 academic and community hospital internal medicine residency programs using the curriculum through the Johns Hopkins Internet Learning Center (ILC). Total and percentile rank scores on ILC didactic modules were compared with total and percentile rank scores on the IM-ITE and total scores on the ABIM-CE. RESULTS The average score on didactic modules was 80.1%; the percentile rank was 53.8. The average IM-ITE score was 64.1% with a percentile rank of 54.8. The average score on the ABIM-CE was 464. Scores on the didactic modules, IM-ITE, and ABIM-CE correlated with each other (P < .05). Residents completing greater numbers of didactic modules, regardless of scores, had higher IM-ITE total and percentile rank scores (P < .05). Resident performance on modules covering back pain, hypertension, preoperative evaluation, and upper respiratory tract infection was associated with IM-ITE percentile rank. CONCLUSIONS Performance on a widely shared ambulatory curriculum is associated with performance on the IM-ITE and the ABIM-CE.
15
Clanton J, Gardner A, Cheung M, Mellert L, Evancho-Chapman M, George RL. The relationship between confidence and competence in the development of surgical skills. J Surg Educ 2014; 71:405-412. [PMID: 24797858] [DOI: 10.1016/j.jsurg.2013.08.009]
Abstract
BACKGROUND Confidence is a crucial trait of any physician, but its development and relationship to proficiency are still unknown. This study aimed to evaluate the relationship between confidence and competency of medical students undergoing basic surgical skills training. METHODS Medical students completed confidence surveys before and after participating in an introductory workshop across 2 samples. Performance was assessed via video recordings and compared with pretraining and posttraining confidence levels. RESULTS Overall, 150 students completed the workshop over 2 years and were evaluated for competency. Most students (88%) reported improved confidence after training. Younger medical students exhibited lower pretraining confidence scores but were just as likely to achieve competence after training. There was no association between pretraining confidence and competence, but confidence was associated with demonstrated competence after training (p < 0.001). CONCLUSIONS Most students reported improved confidence after a surgical skills workshop. Confidence was associated with competency only after training. Future training should investigate this relationship on nonnovice samples and identify training methods that can capitalize on these findings.
Affiliation(s)
- Jesse Clanton
- Department of Surgery, Summa Akron City Hospital, Akron, Ohio
- Aimee Gardner
- Austen BioInnovation Institute of Akron, Akron, Ohio
- Maureen Cheung
- Ohio University Heritage College of Osteopathic Medicine, Athens, Ohio
- Logan Mellert
- Ohio University Heritage College of Osteopathic Medicine, Athens, Ohio
- Richard L George
- Department of Surgery, Summa Akron City Hospital, Akron, Ohio; Northeast Ohio Medical University, Rootstown, Ohio
16
Aldeen AZ, Salzman DH, Gisondi MA, Courtney DM. Faculty Prediction of In-training Examination Scores of Emergency Medicine Residents. J Emerg Med 2014; 46:390-5. [DOI: 10.1016/j.jemermed.2013.08.047]
17
Ryan JG, Barlas D, Pollack S. The relationship between faculty performance assessment and results on the in-training examination for residents in an emergency medicine training program. J Grad Med Educ 2013; 5:582-6. [PMID: 24455005] [PMCID: PMC3886455] [DOI: 10.4300/jgme-d-12-00240.1]
Abstract
BACKGROUND Medical knowledge (MK) in residents is commonly assessed by the in-training examination (ITE) and faculty evaluations of resident performance. OBJECTIVE We assessed the reliability of clinical evaluations of residents by faculty and the relationship between faculty assessments of resident performance and ITE scores. METHODS We conducted a cross-sectional, observational study at an academic emergency department with a postgraduate year (PGY)-1 to PGY-3 emergency medicine residency program, comparing summative, quarterly, faculty evaluation data for MK and overall clinical competency (OC) with annual ITE scores, accounting for PGY level. We also assessed the reliability of faculty evaluations using a random effects, intraclass correlation analysis. RESULTS We analyzed data for 59 emergency medicine residents during a 6-year period. Faculty evaluations of MK and OC were highly reliable (κ = 0.99) and remained reliable after stratification by year of training (mean κ = 0.68-0.84). Assessments of resident performance (MK and OC) and the ITE increased with PGY level. The MK and OC results had high correlations with PGY level, and ITE scores correlated moderately with PGY. The OC and MK results had a moderate correlation with ITE score. When residents were grouped by PGY level, there was no significant correlation between MK as assessed by the faculty and the ITE score. CONCLUSIONS Resident clinical performance and ITE scores both increase with resident PGY level, but ITE scores do not predict resident clinical performance compared with peers at their PGY level.
18
Christianson MS, Ducie JA, Altman K, Khafagy AM, Shen W. Menopause education: needs assessment of American obstetrics and gynecology residents. Menopause 2013; 20:1120-5. [DOI: 10.1097/gme.0b013e31828ced7f]
19
McGill DA, van der Vleuten CPM, Clarke MJ. A critical evaluation of the validity and the reliability of global competency constructs for supervisor assessment of junior medical trainees. Adv Health Sci Educ Theory Pract 2013; 18:701-725. [PMID: 23053869] [DOI: 10.1007/s10459-012-9410-z]
Abstract
Supervisor assessments are critical for both formative and summative assessment in the workplace. Supervisor ratings remain an important source of such assessment in many educational jurisdictions even though there is ambiguity about their validity and reliability. The aims of this evaluation are to explore: (1) the construct validity of ward-based supervisor competency assessments; (2) the reliability of supervisors for observing any overarching domain constructs identified (factors); (3) the stability of factors across subgroups of contexts, supervisors and trainees; and (4) the position of the observations compared to the established literature. Evaluated assessments were all those used to judge intern (trainee) suitability to become an unconditionally registered medical practitioner in the Australian Capital Territory, Australia in 2007-2008. Initial construct identification is by traditional exploratory factor analysis (EFA) using principal component analysis with Varimax rotation. Factor stability is explored by EFA of subgroups by different contexts such as hospital type, and different types of supervisors and trainees. The unit of analysis is each assessment, and includes all available assessments without aggregation of any scores to obtain the factors. Reliability of identified constructs is by variance components analysis of the summed trainee scores for each factor and the number of assessments needed to provide an acceptably reliable assessment using the construct, the reliability unit of analysis being the score for each factor for every assessment. For the 374 assessments from 74 trainees and 73 supervisors, the EFA resulted in 3 factors identified from the scree plot, accounting for only 68% of the variance, with factor 1 having features of a "general professional job performance" competency (eigenvalue 7.630; variance 54.5%); factor 2, "clinical skills" (eigenvalue 1.036; variance 7.4%); and factor 3, "professional and personal" competency (eigenvalue 0.867; variance 6.2%). The percent trainee score variance for the summed competency item scores for factors 1, 2 and 3 was 40.4%, 27.4% and 22.9%, respectively. The number of assessments needed to give a reliability coefficient of 0.80 was 6, 11 and 13, respectively. The factor structure remained stable for subgroups of female trainees, Australian graduate trainees, the central hospital, surgeons, staff specialists, visiting medical officers, and the separation into single years. Physicians as supervisors, male trainees, and male supervisors all had a different grouping of items within 3 factors, which all had competency items that collapsed into the predefined "face value" constructs of competence. These observations add new insights compared to the established literature. For the setting, most supervisors appear to be assessing a dominant construct domain which is similar to a general professional job performance competency. This global construct consists of individual competency items that supervisors spontaneously align, and it has acceptable assessment reliability. However, factor structure instability between different populations of supervisors and trainees means that subpopulations of trainees may be assessed differently and that some subpopulations of supervisors are assessing the same trainees with different constructs than other supervisors. The lack of competency criterion standardisation of supervisors' assessments brings into question the validity of this assessment method as currently used.
Affiliation(s)
- D A McGill
- Department of Cardiology, The Canberra Hospital, Garran, ACT, 2605, Australia
20
Visconti A, Gaeta T, Cabezon M, Briggs W, Pyle M. Focused Board Intervention (FBI): A Remediation Program for Written Board Preparation and the Medical Knowledge Core Competency. J Grad Med Educ 2013; 5:464-7. [PMID: 24404311] [PMCID: PMC3771177] [DOI: 10.4300/jgme-d-12-00229.1]
Abstract
BACKGROUND Residents deemed at risk for low performance on standardized examinations require focused attention and remediation. OBJECTIVE To determine whether a remediation program for residents identified as at risk for failure on the Emergency Medicine (EM) Written Board Examination is associated with improved outcomes. INTERVENTION All residents in 8 classes of an EM 1-3 program were assessed using the In-Training Examination. Residents enrolled in the Focused Board Intervention (FBI) remediation program based on an absolute score on the EM 3 examination of <70% or a score more than 1 SD below the national mean on the EM 1 or 2 examination. Individualized education plans (IEPs) were created for residents in the FBI program, combining self-study audio review lectures with short-answer examinations. The association between first-time pass rate for the American Board of Emergency Medicine (ABEM) Written Qualifying Examination (WQE) and completion of all IEPs was examined using the χ² test. RESULTS Of the 64 residents graduating and sitting for the ABEM examination between 2000 and 2008, 26 (41%) were eligible for the program. Of these, 10 (38%) residents were compliant and had a first-time pass rate of 100%. The control group (12 residents who matched criteria but graduated before the FBI program was in place and 4 who were enrolled but failed to complete the program) had a 44% pass rate (7 of 16), which was significantly lower (χ² = 8.6, P = .003). CONCLUSIONS The probability of passing the ABEM WQE on the first attempt was improved through the completion of a structured IEP.
21
Goyal N, Aldeen A, Leone K, Ilgen JS, Branzetti J, Kessler C. Assessing medical knowledge of emergency medicine residents. Acad Emerg Med 2012; 19:1360-5. [PMID: 23252401] [DOI: 10.1111/acem.12033]
Abstract
The Accreditation Council for Graduate Medical Education (ACGME) requires that emergency medicine (EM) residency graduates are competent in the medical knowledge (MK) core competency. EM educators use a number of tools to measure a resident's progress toward this goal; it is not always clear whether these tools provide a valid assessment. A workshop was convened during the 2012 Academic Emergency Medicine consensus conference "Education Research in Emergency Medicine: Opportunities, Challenges, and Strategies for Success" where assessment for each core competency was discussed in detail. This article provides a description of the validity evidence behind current MK assessment tools used in EM and other specialties. Tools in widespread use are discussed, as well as emerging methods that may form valid assessments in the future. Finally, an agenda for future research is proposed to help address gaps in the current understanding of MK assessment.
Affiliation(s)
- Nikhil Goyal
- Department of Emergency Medicine, Henry Ford Hospital, Detroit, MI
- Amer Aldeen
- Department of Emergency Medicine, Northwestern University Feinberg School of Medicine, Chicago, IL
- Katrina Leone
- Department of Emergency Medicine, Oregon Health & Science University, Portland, OR
- Jonathan S. Ilgen
- Department of Emergency Medicine, University of Washington, Seattle, WA
- Jeremy Branzetti
- Department of Emergency Medicine, University of Washington, Seattle, WA
- Chad Kessler
- Department of Emergency Medicine, University of Illinois-Chicago, Chicago, IL
22
Steiner AZ, Fritz M, Sites CK, Coutifaris C, Carr BR, Barnhart K. Resident Experience on Reproductive Endocrinology and Infertility Rotations and Perceived Knowledge. Obstet Gynecol 2011; 117:324-30. [DOI: 10.1097/aog.0b013e3182056457]
23
Guffey RC, Rusin K, Chidiac EJ, Marsh HM. The utility of pre-residency standardized tests for anesthesiology resident selection: the place of United States Medical Licensing Examination scores. Anesth Analg 2011; 112:201-6. [PMID: 21048098] [DOI: 10.1213/ane.0b013e3181fcfacd]
Abstract
BACKGROUND The resident selection process could be improved if United States Medical Licensing Examination (USMLE) scores obtained during residency application were found to predict success on the American Board of Anesthesiology (ABA) written examination (part 1). In this study, we compared USMLE performance during medical school to anesthesiology residency standardized examination performance. METHODS Sixty-nine anesthesiology residents' USMLE, ABA/American Society of Anesthesiologists (ASA) In-Training Examination, and ABA written board examination (part 1) scores were compared. Linear regression, adjusted Pearson partial correlation, multiple regression, and analysis of variance were used to cross-correlate pre-residency and intra-residency scores. Residents' school of medicine location and year of graduation were noted. RESULTS Both USMLE step 1 and step 2 Clinical Knowledge examinations correlated significantly with all intra-residency standardized tests. Averaged step 1 and step 2 USMLE score correlated to ABA written examination (part 1) score with a slope of 0.72 and r of 0.48 (P = 0.001). CONCLUSIONS The USMLE is a significant predictor of residency ABA/ASA In-Training Examination and ABA written examination performance in anesthesiology. Our program has significantly increased its average written board examination performance while increasing the relative importance of USMLE in resident selection.
Affiliation(s)
- Ryan C Guffey
- Department of Anesthesiology, Wayne State University School of Medicine, Detroit, MI 48201, USA
24
Lopez L, Vranceanu AM, Cohen AP, Betancourt J, Weissman JS. Personal characteristics associated with resident physicians' self perceptions of preparedness to deliver cross-cultural care. J Gen Intern Med 2008; 23:1953-8. [PMID: 18807099] [PMCID: PMC2596517] [DOI: 10.1007/s11606-008-0782-y]
Abstract
BACKGROUND Recent reports from the Institute of Medicine emphasize patient-centered care and cross-cultural training as a means of improving the quality of medical care and eliminating racial and ethnic disparities. OBJECTIVE To determine whether, controlling for training received in medical school or during residency, resident physician socio-cultural characteristics influence self-perceived preparedness and skill in delivering cross-cultural care. DESIGN National survey of resident physicians. PARTICIPANTS A probability sample of residents in seven specialties in their final year of training at US academic health centers. MEASUREMENT Nine resident characteristics were analyzed. Differences in preparedness and skill were assessed using the χ² statistic and multivariate logistic regression. RESULTS Fifty-eight percent (2047/3500) of residents responded. The most important factor associated with improved perceived skill level in performing selected tasks or services believed to be useful in treating culturally diverse patients was having received cross-cultural skills training during residency (OR range 1.71-4.22). Compared with white residents, African American physicians felt more prepared to deal with patients with distrust in the US healthcare system (OR 1.63) and with racial or ethnic minorities (OR 1.61), Latinos reported feeling more prepared to deal with new immigrants (OR 1.88), and Asians reported feeling more prepared to deal with patients with health beliefs at odds with Western medicine (OR 1.43). CONCLUSIONS Cross-cultural care skills training is associated with increased self-perceived preparedness to care for diverse patient populations, providing support for the importance of such training in graduate medical education. In addition, selected resident characteristics are associated with being more or less prepared for different aspects of cross-cultural care. This underscores the need to both include medical residents from diverse backgrounds in all training programs and tailor such programs to individual resident needs in order to maximize the chances that such training is likely to have an impact on the quality of care.
Affiliation(s)
- Lenny Lopez
- Department of Medicine, Institute for Health Policy, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02114, USA.
25
McDonald FS, Zeger SL, Kolars JC. Associations between United States Medical Licensing Examination (USMLE) and Internal Medicine In-Training Examination (IM-ITE) scores. J Gen Intern Med 2008; 23:1016-9. [PMID: 18612735] [DOI: 10.1007/s11606-008-0641-x]
Abstract
BACKGROUND Little is known about the associations of previous standardized examination scores with scores on subsequent standardized examinations used to assess medical knowledge in internal medicine residencies. OBJECTIVE To examine associations of previous standardized test scores on subsequent standardized test scores. DESIGN Retrospective cohort study. PARTICIPANTS One hundred ninety-five internal medicine residents. METHODS Bivariate associations of United States Medical Licensing Examination (USMLE) Steps and Internal Medicine In-Training Examination (IM-ITE) scores were determined. Random effects analysis adjusting for repeated administrations of the IM-ITE and other variables known or hypothesized to affect IM-ITE score allowed for discrimination of associations of individual USMLE Step scores on IM-ITE scores. RESULTS In bivariate associations, USMLE scores explained 17% to 27% of the variance in IM-ITE scores, and previous IM-ITE scores explained 66% of the variance in subsequent IM-ITE scores. Regression coefficients (95% CI) for adjusted associations of each USMLE Step with IM-ITE scores were USMLE-1 0.19 (0.12, 0.27), USMLE-2 0.23 (0.17, 0.30), and USMLE-3 0.19 (0.09, 0.29). CONCLUSIONS No single USMLE Step is more strongly associated with IM-ITE scores than the others. Because previous IM-ITE scores are strongly associated with subsequent IM-ITE scores, appropriate modeling, such as random effects methods, should be used to account for previous IM-ITE administrations in studies for which IM-ITE score is an outcome.
26
Babbott SF, Beasley BW, Hinchey KT, Blotzer JW, Holmboe ES. The predictive validity of the internal medicine in-training examination. Am J Med 2007; 120:735-40. [PMID: 17679136] [DOI: 10.1016/j.amjmed.2007.05.003]
27
McDonald FS, Zeger SL, Kolars JC. Factors associated with medical knowledge acquisition during internal medicine residency. J Gen Intern Med 2007; 22:962-8. [PMID: 17468889] [PMCID: PMC2219722] [DOI: 10.1007/s11606-007-0206-4]
Abstract
BACKGROUND Knowledge acquisition is a goal of residency and is measurable by in-training exams. Little is known about factors associated with medical knowledge acquisition. OBJECTIVE To examine associations of learning habits on medical knowledge acquisition. DESIGN, PARTICIPANTS Cohort study of all 195 residents who took the Internal Medicine In-Training Examination (IM-ITE) 421 times over 4 years while enrolled in the Internal Medicine Residency, Mayo Clinic, Rochester, MN. MEASUREMENTS Score (percent questions correct) on the IM-ITE adjusted for variables known or hypothesized to be associated with score using a random effects model. RESULTS When adjusting for demographic, training, and prior achievement variables, yearly advancement within residency was associated with an IM-ITE score increase of 5.1% per year (95%CI 4.1%, 6.2%; p < .001). In the year before examination, comparable increases in IM-ITE score were associated with attendance at two curricular conferences per week, score increase of 3.9% (95%CI 2.1%, 5.7%; p < .001), or self-directed reading of an electronic knowledge resource 20 minutes each day, score increase of 4.5% (95%CI 1.2%, 7.8%; p = .008). Other factors significantly associated with IM-ITE performance included: age at start of residency, score decrease per year of increasing age, -0.2% (95%CI -0.36%, -0.042%; p = .01), and graduation from a US medical school, score decrease compared to international medical school graduation, -3.4% (95%CI -6.5%, -0.36%; p = .03). CONCLUSIONS Conference attendance and self-directed reading of an electronic knowledge resource had statistically and educationally significant independent associations with knowledge acquisition that were comparable to the benefit of a year in residency training.
Affiliation(s)
- Furman S McDonald
- General Internal Medicine-Hospital Internal Medicine, Mayo Clinic College of Medicine, 200 1st Street SW, Rochester, MN 55905, USA.
28
West CP, Huntington JL, Huschka MM, Novotny PJ, Sloan JA, Kolars JC, Habermann TM, Shanafelt TD. A prospective study of the relationship between medical knowledge and professionalism among internal medicine residents. Acad Med 2007; 82:587-92. [PMID: 17525546] [DOI: 10.1097/acm.0b013e3180555fc5]
Abstract
PURPOSE To explore residents' competency in medical knowledge and in empathy, one element of professionalism, and to evaluate the relationship between competencies in these domains. METHOD In 2003-2004 and 2004-2005, first-year internal medicine residents at the Mayo Clinic College of Medicine in Rochester, Minnesota, were invited to participate in a prospective, longitudinal study of resident competency. Participating residents completed the annual Internal Medicine In-Training Examination (ITE) each October and the Interpersonal Reactivity Index (IRI), a standardized tool to measure empathy, administered at multiple time points during training. Changes in medical knowledge and empathy between the fall of postgraduate years one and two were evaluated, and associations between medical knowledge and empathy were explored. RESULTS Residents' medical knowledge as measured by the ITE increased over the first year of training (mean increase 8.7 points, P < .0001), whereas empathy as measured by the empathic concern subscale of the IRI decreased over this same time period (mean decrease 1.6 points, P = .0003). No significant correlation was found between medical knowledge and empathy or between changes in these domains of competency over time. CONCLUSIONS Resident competency in the domains of medical knowledge and empathy seems to be influenced by separate and independent aspects of training. Training environments may promote competency in one domain while simultaneously eroding competency in another. Residency programs should devise specific curricula to promote each domain of physician competency.
Affiliation: Colin P West, Division of General Internal Medicine, Mayo Clinic College of Medicine, Rochester, Minnesota 55905, USA.
29
Holmboe ES, Rodak W, Mills G, McFarlane MJ, Schultz HJ. Outcomes-based evaluation in resident education: creating systems and structured portfolios. Am J Med 2006; 119:708-14. [PMID: 16887420] [DOI: 10.1016/j.amjmed.2006.05.031]
Affiliation: Eric S Holmboe, American Board of Internal Medicine, Philadelphia, Pa 19106, USA.
30
Abstract
This study attempted to determine how internal medicine housestaff screen and intervene for problematic alcohol and illicit drug use, and to identify factors correlating with favorable practices. A cross-sectional survey was administered to 93 medical housestaff. Of 64 (69%) respondents, 94% reported routinely screening new patients for alcohol or illicit drug use, while only 52% routinely quantified alcohol consumption and 28% routinely used a screening instrument. Housestaff were unfamiliar with national guidelines and felt unprepared to diagnose substance use disorders, particularly prescription drug abuse. Most routinely counseled patients with alcohol (89%) or illicit-drug (91%) problems, although only a third of these patients were referred for formal treatment. More thorough screening practices were associated with greater treatment optimism, while favorable referral practices were associated with greater optimism about 12-step program benefit and with difficulty of management. These findings suggest areas to be addressed in residency curricula on substance abuse.
Affiliation: Erik W Gunderson, Department of Psychiatry, Division on Substance Abuse, Columbia University College of Physicians and Surgeons, New York, NY, USA.
31
Johnson CE, Hurtubise LC, Castrop J, French G, Groner J, Ladinsky M, McLaughlin D, Plachta L, Mahan JD. Learning management systems: technology to measure the medical knowledge competency of the ACGME. Med Educ 2004; 38:599-608. [PMID: 15189256] [DOI: 10.1111/j.1365-2929.2004.01792.x]
Abstract
AIMS We report how the learning management system (LMS) Web Course Tools (WebCT) was used to design, implement and evaluate the web-based course "Principles of Ambulatory Paediatrics", taken by paediatric residents during an ambulatory block rotation. This report also illustrates how WebCT can be used to measure the medical knowledge competency required by the Accreditation Council for Graduate Medical Education (ACGME). METHODS Eighty paediatric residents completed a 1-month outpatient rotation between July 1, 2001 and June 30, 2002. During this rotation residents were required to complete four modules, on asthma, otitis media, gastroenteritis and fever. Each module was evaluated using a standard questionnaire. RESULTS Completion rates for the required modules ranged from 64-72%. Residents in all 3 years of training showed improvement between the pre- and post-test scores for each module, except for postgraduate Year 2 residents in the asthma module. Most residents somewhat agreed, agreed or strongly agreed that the module components were useful and that the experience of completing the modules would improve their ability to take care of patients. CONCLUSIONS The LMS WebCT is an innovative and adaptable approach for designing a web-based course for primary care education in paediatrics. The LMS addresses the educational needs of both a clinical division and a residency programme. The LMS also provides an information technology infrastructure to measure the medical knowledge competency required by the ACGME.
32
Kolars JC, McDonald FS, Subhiyah RG, Edson RS. Knowledge base evaluation of medicine residents on the gastroenterology service: implications for competency assessments by faculty. Clin Gastroenterol Hepatol 2003; 1:64-8. [PMID: 15017519] [DOI: 10.1053/jcgh.2003.50010]
Abstract
BACKGROUND AND AIMS Clinician educators are asked to provide both formative and summative evaluations of the medical knowledge of residents. This study evaluated the accuracy of these evaluations and the perception of residents regarding the ability of faculty to assess medical knowledge. METHODS Gastroenterology knowledge ratings provided by 15 faculty gastroenterologists on 49 internal medicine residents during a required gastroenterology rotation were correlated with performance on the gastroenterology subsection of the In-Training Examination for Internal Medicine (ITE). Residents also were surveyed regarding their perception of the ability of faculty to judge their knowledge of medical gastroenterology. RESULTS The mean correlation (Kendall's tau-b) of faculty ratings with performance on the ITE was 0.30 (P < 0.01). The range of correlation values for individual faculty (-0.39 to 0.80) indicated that some faculty were able to assess the medical knowledge of residents better than others. Residents, as well as the faculty themselves, perceived that faculty were able to rate their medical knowledge relatively well. CONCLUSIONS The ability of faculty gastroenterologists to judge the knowledge of gastroenterology in their resident trainees was quite limited. Residents, as well as faculty, inaccurately perceive the ability of gastroenterologists to render professional judgments on their knowledge base as good. An end-of-rotation written examination would appear to be required to provide an accurate assessment of the medical knowledge of residents.
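The rank correlation reported above is Kendall's tau-b. As an illustrative sketch of the underlying concordant/discordant-pair idea, here is the simpler tau-a variant (which, unlike the tau-b used in the study, does not correct for ties) applied to hypothetical faculty ratings and ITE subscores:

```python
from itertools import combinations

def kendall_tau_a(x, y):
    """Kendall's tau-a: (concordant - discordant pairs) / total pairs.
    Assumes no ties; the study used tau-b, which corrects for ties."""
    assert len(x) == len(y)
    concordant = discordant = 0
    for i, j in combinations(range(len(x)), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            concordant += 1   # pair ranked the same way by both raters
        elif s < 0:
            discordant += 1   # pair ranked in opposite order
    n_pairs = len(x) * (len(x) - 1) // 2
    return (concordant - discordant) / n_pairs

# Hypothetical data: faculty knowledge ratings (1-9 scale) and ITE
# gastroenterology subscores for five residents.
faculty_ratings = [4, 6, 5, 8, 7]
ite_subscores = [52, 61, 70, 75, 68]
tau = kendall_tau_a(faculty_ratings, ite_subscores)
```

A tau near 1 would mean faculty rank residents almost exactly as the examination does; values near the study's 0.30 indicate only weak agreement.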
Affiliation: Joseph C Kolars, Department of Internal Medicine, Mayo Clinic, Rochester, Minnesota 55905, USA.
33
Cox CE, Carson SS, Ely EW, Govert JA, Garrett JM, Brower RG, Morris DG, Abraham E, Donnabella V, Spevetz A, Hall JB. Effectiveness of medical resident education in mechanical ventilation. Am J Respir Crit Care Med 2003; 167:32-8. [PMID: 12406827] [DOI: 10.1164/rccm.200206-624oc]
Abstract
Specific methods of mechanical ventilation management reduce mortality and lower health care costs. However, in the face of a predicted deficit of intensivists, it is unclear whether residency programs are training internists to provide effective care for patients who require mechanical ventilation. To evaluate these educational outcomes, we administered a validated 19-item case-based test and survey to resident physicians at 31 diverse U.S. internal medicine residency programs nationwide. Of 347 senior residents, 259 (75%) responded. The mean test score was 74% correct (SD, 14%; range, 37 to 100%). Important items representing evidence-based standards of critical care that were answered incorrectly included: use of appropriate tidal volume in the acute respiratory distress syndrome (48% incorrect), identifying a patient ready for a weaning trial (38% incorrect), and recognizing an indication for noninvasive ventilation (27% incorrect). Most residents accurately identified pneumothorax (86% correct) and increased intrathoracic positive end-expiratory pressure (93% correct). Better scores were associated with "closed" versus "open" intensive care unit organization (76 versus 71% correct, p = 0.001), resident perception of greater versus lesser ventilator knowledge (79 versus 71% correct, p = 0.001), and graduation from a U.S. versus international medical school (75 versus 69% correct, p = 0.033). Although overall training satisfaction correlated strongly with program use of learning objectives (r = 0.89, p < 0.0001), only 46% reported being satisfied with their mechanical ventilation training. We conclude that senior residents may not be gaining essential evidence-based knowledge needed to provide effective care for patients who require mechanical ventilation. Residency programs should emphasize evidence-based learning objectives to guide mechanical ventilation instruction.
Affiliation: Christopher E Cox, Department of Medicine, University of North Carolina at Chapel Hill School of Medicine, Chapel Hill, North Carolina, USA.
34
Mazor KM, Campbell EG, Field T, Purwono U, Peterson D, Lockwood JH, Weissman JS, Gurwitz JH. Managed care education: what medical students are telling us. Acad Med 2002; 77:1128-33. [PMID: 12431927] [DOI: 10.1097/00001888-200211000-00015]
Abstract
PURPOSE To examine graduating medical students' perceptions of the adequacy of instruction in managed care and in 11 curricular content areas identified by experts as a necessary part of managed care education. This study sought to determine whether medical students perceived these content areas as relevant to managed care and to evaluate the extent to which students' perceptions of the adequacy of instruction varied as a function of managed care penetration in the locations of their respective medical schools. METHOD Data from the Association of American Medical Colleges' 1999 Medical School Graduation Questionnaire (GQ) were analyzed. Students' ratings of adequacy of instruction were summarized. Correlations between ratings of instruction in managed care and 11 related content areas were calculated, as well as correlations between managed care penetration in the locations of the students' schools and the proportion of students rating instruction as inadequate. RESULTS A majority of 1999 medical school graduates (60%) rated instruction in managed care as inadequate; other content areas to which majorities of graduates gave inadequate ratings were practice management (72%), quality assurance (57%), medical care cost control (57%), and cost-effective medical practice (56%). Ratings in these four content areas were highly correlated with ratings of instruction in managed care. The correlation between managed care penetration and rating of instruction in managed care was statistically significant (r = -.37); correlations between managed care penetration and instruction in the other content areas were not. CONCLUSIONS On the 1999 GQ, a majority of medical students responded that they felt they had not received adequate instruction in managed care. Further, the responses suggest that these medical students defined managed care in terms of managing costs, rather than managing health care, or developing population-based approaches to the delivery of health care.
Affiliation: Kathleen M Mazor, Meyers Primary Care Institute, University of Massachusetts Medical School and Fallon Healthcare System, Worcester 01605, USA.
35
Emmons S, Sells CW, Eiff MP. A review of medical and allied health learners' satisfaction with their training in women's health. Am J Obstet Gynecol 2002; 186:1259-65; discussion 1265-7. [PMID: 12066107] [DOI: 10.1067/mob.2002.123728]
Abstract
OBJECTIVE Graduating learners from Oregon Health and Sciences University programs and from the National College of Naturopathic Medicine were surveyed about their attitudes toward their training in women's health. STUDY DESIGN The survey addressed learner satisfaction with training in women's health, preferred learning methods, and clinical comfort in managing 17 clinical problems; it also addressed knowledge of complementary and alternative medicine. RESULTS Satisfaction with training in women's health varied by program. Both satisfaction and clinical confidence scores increased with the proportion of women seen during training. Physical assault and breast disease were the areas of least clinical confidence. All groups preferred learning in clinical rather than didactic settings. Experience with complementary and alternative medicine was very limited except among naturopathic students. CONCLUSIONS Areas of common educational need were identified among a variety of learners. This information will assist educators in designing multidisciplinary programs to meet the needs of this diverse group.
Affiliation: Sandra Emmons, Department of Obstetrics and Gynecology, Oregon Health and Sciences University, Portland 97201, USA.
36
Abstract
Access to care by low-income persons and residents of rural and poor inner-city areas is a persistent problem, yet physicians tend to be maldistributed relative to need. The objectives were to describe the preferences of resident physicians to locate in underserved areas and to assess their preparedness to provide service to low-income populations. A national survey was conducted of residents completing their training in eight specialties at 162 US academic health center hospitals in 1998, with 2,626 residents responding. (Of 4,832 sampled, 813 had invalid addresses or were no longer in the residency program. Among the valid sample of 4,019, the response rate was 65%.) The percentage of residents ranking public hospitals, rural areas, and poor inner-city areas as desirable employment locations and the percentage feeling prepared to provide specified services associated with indigent populations were ascertained. Logistic regressions were used to calculate adjusted percentages, controlling for sex, race/ethnicity, international medical graduate (IMG) status, plans to subspecialize, ownership of hospital, specialty, and exposure to underserved patients during residency. Only one third of residents rated public hospitals as desirable settings, although there were large variations by specialty. Desirability was not associated with having trained in a public hospital or having greater exposure to underserved populations. Only about one quarter of respondents ranked rural (26%) or poor inner-city (25%) areas as desirable. Men (29%, P <.01) and noncitizen IMGs (43%, P <.01) were more likely than others to prefer rural settings. Residents who were more likely to rate poor inner-city settings as desirable included women (28%, P =.03), noncitizen IMGs (35%, P =.01), and especially underrepresented minorities (52%, P <.01).
Whereas about 90% or more of residents felt prepared to treat common clinical conditions, only 67% of residents in four primary care specialties felt prepared to counsel patients about domestic violence or to care for human immunodeficiency virus/acquired immunodeficiency syndrome (HIV/AIDS) or substance abuse patients (all 67%). Women were more likely than men to feel prepared to counsel patients about domestic violence (70% vs. 63%, P =.002) and depression (83% vs. 75%, P <.01). Underrepresented minority residents were more likely than other residents to feel prepared to counsel patients about domestic violence (P <.01) and compliance with care (P =.04). Residents with greater exposure to underserved groups were more prepared to counsel patients about domestic violence (P =.01), substance abuse (P =.01), and to treat patients with HIV/AIDS (P =.01) or with substance abuse problems (P <.01). This study demonstrates the need to expose graduate trainees to underserved populations and suggests a continuing role of minorities, women, and noncitizen physicians in caring for low-income populations.
Affiliation: J S Weissman, The Department of Medicine, Harvard Medical School, Boston, MA 02114, USA.
37
Campbell EG, Weissman JS, Ausiello J, Wyatt S, Blumenthal D. Understanding the relationship between market competition and students' ratings of the managed care content of their undergraduate medical education. Acad Med 2001; 76:51-9. [PMID: 11154197] [DOI: 10.1097/00001888-200101000-00016]
Abstract
PURPOSE The increase in managed care has led to questions about the adequacy of instruction undergraduate medical students receive in curricular areas related to managed care. This study examined (1) the percentages of graduating medical students who felt they had received inadequate instruction in six curricular content areas (CCAs): primary care, care of ambulatory patients, health promotion and disease prevention, medical care cost control, teamwork with other health professionals, and cost-effective medical practice; and (2) whether the market competitiveness of these students' medical schools affected their reports of inadequacy of instruction in these CCAs. METHOD Data from the Association of American Medical Colleges' Graduation Questionnaires (GQs) from 1994 to 1997 were analyzed. The GQ asked graduating students to rate the adequacy of instruction they had received in the six CCAs. Students' ratings were collapsed into the dichotomous variables "inadequate" and "not inadequate." The market competitiveness of medical schools was determined using the four-stage Market Evolution Model developed by the University HealthSystem Consortium. Only responses from students graduating from medical schools that could be staged for all four years of the study were analyzed. Statistical analyses were performed to determine trends for each CCA by year, across the entire study period, by market stage, and by market stage across the entire study period. RESULTS A total of 39,136 respondents from 86 medical schools were used in the study. The percentages of graduating medical students who reported inadequate instruction decreased over the study period for five of the six CCAs: primary care (27.6% in 1994 to 13.7% in 1997), ambulatory care (37.4% to 23.9%), medical care cost control (62.9% to 52.9%), cost-effectiveness of medical practice (62.7% to 53.9%), and health promotion and disease prevention (44.4% to 23.7%); all at p <0.001.
The responses for inadequacy of instruction in teamwork with other health professionals remained steady from 1994 to 1996 (10.2% to 10.6%), then increased to 21.8% in 1997. Over the course of the study, students graduating from schools in more competitive markets (Stage 3 or Stage 4) were more likely to report inadequate instruction in three CCAs (primary care, ambulatory care, and health promotion and disease prevention) than were those graduating from schools in less competitive markets (Stage 1 and Stage 2). Conversely, students graduating from schools in the more competitive health care markets were less likely to report inadequate instruction in cost-effectiveness and cost control than were students from schools in less competitive markets. CONCLUSION Graduating students' reports of inadequacy of instruction decreased over the study period for five of the six CCAs, increasing only for teamwork with other professionals. Findings were mixed with regard to the relationship of medical schools' market competitiveness and graduating students' reports of inadequacy of instruction. More research is needed to confirm graduating students' perceptions of the inadequacy of their instruction in CCAs related to managed care, particularly once they have gained experience treating patients in managed care environments.
Affiliation: E G Campbell, Institute for Health Policy, Massachusetts General Hospital, Boston, MA 02114, USA.
38