1. Moreira DC, Metzger ML, Antillón-Klussmann F, González-Ramella O, Gao Y, Bazzeh F, Middlekauff J, Fox Irwin L, Gonzalez ML, Chantada G, Barr RD, Garrington T, Hastings C, Kutluk T, Saab R, Khan MS, Saha V, Rodríguez-Galindo C, Friedrich P. Development of EPAT: An assessment tool for pediatric hematology/oncology training programs. Cancer 2023;129:3448-3456. PMID: 37417913. DOI: 10.1002/cncr.34946.
Abstract
PURPOSE: In the absence of a standardized tool to assess the quality of pediatric hematology/oncology training programs, the Education Program Assessment Tool (EPAT) was conceptualized as a user-friendly, adaptable instrument with which training programs around the world can identify areas of opportunity, pinpoint needed modifications, and monitor progress. METHODS: The development of EPAT consisted of three main phases: operationalization, consensus, and piloting. After each phase, the tool was iteratively modified based on feedback to improve its relevance, usability, and clarity. RESULTS: The operationalization process led to the development of 10 domains with associated assessment questions. The two-step consensus phase included an internal consensus round to validate the domains and a subsequent external consensus round to refine the domains and the overall function of the tool. The EPAT domains for programmatic evaluation are hospital infrastructure, patient care, education infrastructure, program basics, clinical exposure, theory, research, evaluation, educational culture, and graduate impact. EPAT was piloted in five training programs in five countries, representing diverse medical training and patient care contexts, to validate the tool. Face validity was confirmed by a correlation between the perceived and calculated scores for each domain (r = 0.78, p < .0001). CONCLUSIONS: EPAT was developed following a systematic approach, ultimately yielding a relevant tool for evaluating the core elements of pediatric hematology/oncology training programs across the world. With EPAT, programs have a tool to quantitatively evaluate their training, allowing benchmarking against centers at the local, regional, and international levels.
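The face-validity check reported above reduces to a Pearson correlation between perceived and calculated domain scores. A minimal sketch assuming SciPy, with invented domain values (EPAT's actual scoring rubric is not reproduced here):

```python
# Correlating perceived domain scores (self-rated) with scores calculated from
# the tool's assessment questions, one pair per EPAT domain. Values are invented.
from scipy.stats import pearsonr

# One score per domain: hospital infrastructure, patient care, education
# infrastructure, program basics, clinical exposure, theory, research,
# evaluation, educational culture, graduate impact.
perceived = [3.5, 4.0, 2.5, 3.0, 4.5, 3.0, 2.0, 3.5, 4.0, 3.0]
calculated = [3.2, 4.1, 2.8, 3.4, 4.3, 2.7, 2.2, 3.1, 3.8, 3.3]

r, p = pearsonr(perceived, calculated)
print(f"r = {r:.2f}, p = {p:.4f}")   # the paper reports r = 0.78, p < .0001
```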
Affiliation(s)
- Federico Antillón-Klussmann
- Unidad Nacional de Oncología Pediátrica, Guatemala City, Guatemala
- Universidad Francisco Marroquin, Guatemala City, Guatemala
- Yijin Gao
- Shanghai Children's Medical Center, Shanghai, China
- Guillermo Chantada
- Fundacion Pérez Scremini-Hospital Pereira Rossell, Montevideo, Uruguay
- Hospital Sant Joan de Déu, Barcelona, Spain
- Ronald D Barr
- McMaster Children's Hospital, Hamilton, Ontario, Canada
- Tezer Kutluk
- Hacettepe University Faculty of Medicine & Cancer Institute, Ankara, Turkey
- Raya Saab
- Department of Pediatrics, Stanford University, Palo Alto, California, USA
- Muhammad Saghir Khan
- King Faisal Specialist Hospital and Research Center, Al Madinah Al Munawarrah, Saudi Arabia
- Paola Friedrich
- St. Jude Children's Research Hospital, Memphis, Tennessee, USA
2. Deal SB, Seabott H, Chang L, Alseidi AA. The Program Evaluation Committee in Action: Lessons Learned From a General Surgery Residency's Experience. J Surg Educ 2018;75:7-13. PMID: 28734949. DOI: 10.1016/j.jsurg.2017.06.026.
Abstract
OBJECTIVE: To evaluate the success of the annual program evaluation process and describe the experience of a Program Evaluation Committee for a general surgery residency program. DESIGN: We conducted a retrospective review of the Program Evaluation Committee's meeting minutes, data inputs, and outcomes from 2014 to 2016. We identified top priorities by year, characterized supporting data, summarized the improvement plans and outcome measures, and evaluated whether the outcomes were achieved at 1 year. SETTING: Virginia Mason Medical Center General Surgery Residency Program. PARTICIPANTS: Program Evaluation Committee members, including the program director, 2 associate program directors, 2 senior faculty members, and 1 resident. RESULTS: All outcome measures were achieved or still in progress at 1 year. These included purchasing a GI Mentor endoscopic simulator to improve simulation training, establishing an outpatient surgery rotation to increase case volume, and implementing a didactic course on adult learning principles for faculty development to improve intraoperative teaching. The primary reasons for slow progress were lack of follow-through by delegates and communication breakdowns. CONCLUSIONS: The annual program evaluation process has been successful in identifying top priorities, developing action plans, and achieving outcome measures through our systematic evaluation process.
Affiliation(s)
- Shanley B Deal
- Graduate Medical Education, Virginia Mason Medical Center, Seattle, Washington
- Heather Seabott
- Graduate Medical Education, Virginia Mason Medical Center, Seattle, Washington
- Lily Chang
- Department of General, Thoracic, and Vascular Surgery, Virginia Mason Medical Center, Seattle, Washington
- Adnan A Alseidi
- Department of General, Thoracic, and Vascular Surgery, Virginia Mason Medical Center, Seattle, Washington
4. Boor K, Van Der Vleuten C, Teunissen P, Scherpbier A, Scheele F. Development and analysis of D-RECT, an instrument measuring residents' learning climate. Med Teach 2011;33:820-7. PMID: 21355691. DOI: 10.3109/0142159X.2010.541533.
Abstract
BACKGROUND: Measurement of learning climates can serve as an indicator of a department's educational functioning. AIM: This article describes the development and psychometric qualities of an instrument to measure the learning climate in postgraduate specialist training: the Dutch Residency Educational Climate Test (D-RECT). METHOD: A preliminary questionnaire was evaluated in a modified Delphi procedure. Simultaneously, all residents in the Netherlands were invited to fill out the preliminary questionnaire. We used exploratory factor analysis to analyze the outcomes and construct the definitive D-RECT, confirmatory factor analysis to test the questionnaire's goodness of fit, and generalizability studies to estimate the number of residents needed for a reliable outcome. RESULTS: The Delphi panel reached consensus in two rounds, and 1278 residents representing 26 specialties completed the questionnaire. The panel's input, combined with exploratory factor analysis of 600 completed surveys, led to the definitive D-RECT, consisting of 50 items and 11 subscales (e.g., feedback, supervision, patient handover, and professional relations between attendings). Confirmatory factor analysis of the remaining surveys confirmed the construct, and the generalizability analysis showed that a feasible number of residents is needed for a reliable outcome. CONCLUSION: D-RECT appears to be a valid, reliable, and feasible tool for measuring the quality of clinical learning climates.
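The split-sample pipeline described here (exploratory factor analysis on roughly half of the responses, confirmatory analysis on the rest) can be sketched in a few lines. A simulated-data illustration assuming scikit-learn; the responses are random, not the D-RECT data, so the recovered loadings are purely illustrative:

```python
# Illustrative EFA step on simulated 5-point Likert responses (not D-RECT data).
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_residents, n_items, n_subscales = 600, 50, 11   # sizes taken from the abstract

# Simulated responses: 600 residents x 50 items, each rated 1-5.
responses = rng.integers(1, 6, size=(n_residents, n_items)).astype(float)

# Exploratory factor analysis with varimax rotation, one factor per subscale.
efa = FactorAnalysis(n_components=n_subscales, rotation="varimax").fit(responses)
loadings = efa.components_.T      # shape (50, 11): item loadings per subscale

# Items would be assigned to the subscale on which they load most strongly.
print(loadings.shape, np.abs(loadings).argmax(axis=1)[:10])
```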
Affiliation(s)
- Klarke Boor
- St Lucas Andreas Hospital, Amsterdam, The Netherlands
5. Fluit CRMG, Bolhuis S, Grol R, Laan R, Wensing M. Assessing the quality of clinical teachers: a systematic review of content and quality of questionnaires for assessing clinical teachers. J Gen Intern Med 2010;25:1337-45. PMID: 20703952. PMCID: PMC2988147. DOI: 10.1007/s11606-010-1458-y.
Abstract
BACKGROUND: Learning in a clinical environment differs from formal educational settings and poses specific challenges for clinicians who teach. Instruments that reflect these challenges are needed to identify the strengths and weaknesses of clinical teachers. OBJECTIVE: To systematically review the content, validity, and aims of questionnaires used to assess clinical teachers. DATA SOURCES: MEDLINE, EMBASE, PsycINFO, and ERIC from 1976 up to March 2010. REVIEW METHODS: The searches revealed 54 papers on 32 instruments. Data from these papers were documented by independent researchers using a structured format that included the content of the instrument, validation methods, aims of the instrument, and its setting. RESULTS: The aspects covered by the instruments predominantly concerned the use of teaching strategies (included in 30 instruments), the supporter role (29), role modeling (27), and feedback (26). Providing opportunities for clinical learning activities was included in 13 instruments. Most studies referred to literature on good clinical teaching, although they failed to provide a clear description of what constitutes a good clinical teacher. Instrument length varied from 1 to 58 items. Except for two instruments, all had to be completed by clerks/residents. Instruments served to provide formative feedback but were also used for resource allocation, promotion, and annual performance review (14 instruments). All but two studies reported on internal consistency and/or reliability; other aspects of validity were examined less frequently. CONCLUSIONS: No instrument covered all relevant aspects of clinical teaching comprehensively, and validation was often limited to assessment of internal consistency and reliability. Available instruments for assessing clinical teachers should therefore be used carefully, especially for consequential decisions, and more valid, comprehensive instruments are needed.
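The coverage counts in the results are easier to compare as rates over the 32 reviewed instruments; a quick computation from the figures quoted above:

```python
# Coverage rates of teaching aspects across the 32 reviewed instruments,
# using the counts reported in the abstract.
counts = {
    "teaching strategies": 30,
    "supporter role": 29,
    "role modeling": 27,
    "feedback": 26,
    "opportunities for clinical learning": 13,
}
total = 32
for aspect, n in counts.items():
    print(f"{aspect}: {n}/{total} = {n / total:.0%}")
# e.g., teaching strategies: 30/32 = 94%; opportunities: 13/32 = 41%
```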
Affiliation(s)
- Cornelia R M G Fluit
- Department for Evaluation, Quality and Development of Medical Education, Radboud University Nijmegen Medical Centre, Nijmegen, The Netherlands
6. Rose SH, Long TR. Accreditation Council for Graduate Medical Education (ACGME) annual anesthesiology residency and fellowship program review: a "report card" model for continuous improvement. BMC Med Educ 2010;10:13. PMID: 20141641. PMCID: PMC2830223. DOI: 10.1186/1472-6920-10-13.
Abstract
BACKGROUND: The Accreditation Council for Graduate Medical Education (ACGME) requires an annual evaluation of all ACGME-accredited residency and fellowship programs to assess program quality, and the results of this evaluation must be used to improve the program. This manuscript describes metrics to be used in conducting the ACGME-mandated annual program review of ACGME-accredited anesthesiology residencies and fellowships. METHODS: A variety of metrics for assessing anesthesiology residency and fellowship programs were identified by the authors through literature review and considered for use in constructing a program "report card." RESULTS: Metrics used to assess program quality include success in achieving American Board of Anesthesiology (ABA) certification, performance on the annual ABA/American Society of Anesthesiologists In-Training Examination, performance on mock oral ABA certification examinations, trainee scholarly activities (publications and presentations), accreditation site visit and internal review results, ACGME and alumni survey results, National Resident Matching Program (NRMP) results, exit interview feedback, diversity data, and extensive program/rotation/faculty/curriculum evaluations by trainees and faculty. The results are used to construct a "report card" that provides a high-level review of program performance and can be used in a continuous quality improvement process. CONCLUSIONS: An annual program review is required for all ACGME-accredited residency and fellowship programs to monitor and improve program quality. We describe an annual review process based on metrics that can be used to focus attention on areas for improvement and track program performance year to year, with a "report card" format as a high-level tool to track educational outcomes.
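A report card like the one described is essentially a set of named metrics rolled up into a one-line, year-over-year summary. A hypothetical sketch; the metric names follow the abstract, but the targets and layout are invented for illustration:

```python
# Hypothetical program "report card" row: metric names follow the abstract,
# but the targets and formatting are invented for illustration.
from dataclasses import dataclass

@dataclass
class ProgramMetrics:
    aba_cert_rate: float        # fraction of graduates achieving ABA certification
    ite_percentile: float       # mean ABA/ASA In-Training Examination percentile
    mock_oral_pass_rate: float  # pass rate on mock oral board examinations
    publications: int           # trainee publications this academic year

def report_card_row(year: int, m: ProgramMetrics) -> str:
    flags = []
    if m.aba_cert_rate < 0.90:   # illustrative target, not an ACGME standard
        flags.append("ABA certification below target")
    if m.ite_percentile < 50:
        flags.append("ITE below national median")
    status = "; ".join(flags) or "all metrics on target"
    return (f"{year}: cert {m.aba_cert_rate:.0%}, ITE p{m.ite_percentile:.0f}, "
            f"mock orals {m.mock_oral_pass_rate:.0%}, pubs {m.publications} -> {status}")

print(report_card_row(2009, ProgramMetrics(0.95, 62.0, 0.88, 14)))
```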
Affiliation(s)
- Steven H Rose
- Department of Anesthesiology, College of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Timothy R Long
- Department of Anesthesiology, College of Medicine, Mayo Clinic, Rochester, MN 55905, USA
7. Thrush CR, Hicks EK, Tariq SG, Johnson AM, Clardy JA, O'Sullivan PS, Williams DK. Optimal learning environments from the perspective of resident physicians and associations with accreditation length. Acad Med 2007;82:S121-5. PMID: 17895676. DOI: 10.1097/ACM.0b013e318140658f.
Abstract
BACKGROUND: Indicators of program quality in graduate medical education have not been well developed or studied. This study explores resident physicians' perceptions of program quality and their associations with an external quality indicator. METHOD: Responses to two open-ended questions about program strengths and areas in need of improvement were analyzed for 392 residents from 14 specialty programs that were reaccredited between 1999 and 2005. Computerized text analysis facilitated reliable categorization of 1,502 comments. Mann-Whitney U tests and nonparametric analyses for correlated data were used to examine associations between resident perceptions and accreditation length. RESULTS: The most frequently mentioned program strengths related to the quality of faculty, exposure to patients, education, and the social environment. Of these core strengths, residents in programs with longer accreditation cycle lengths made significantly more comments about the quality of faculty in their program. CONCLUSIONS: Resident feedback can provide beneficial information about dimensions of program quality and the learning environment.
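The group comparison described here is a standard Mann-Whitney U test on per-resident comment counts. A minimal sketch assuming SciPy, with invented counts (the study's data are not reproduced):

```python
# Mann-Whitney U test comparing per-resident counts of faculty-quality comments
# between shorter- and longer-accreditation-cycle programs. Counts are invented.
from scipy.stats import mannwhitneyu

shorter_cycle = [0, 1, 0, 2, 1, 0, 1, 1]   # hypothetical comment counts
longer_cycle = [2, 3, 1, 2, 4, 2, 3, 1]

u_stat, p_value = mannwhitneyu(shorter_cycle, longer_cycle, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.3f}")
```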
8. Phitayakorn R, Levitan N, Shuck JM. Program report cards: evaluation across multiple residency programs at one institution. Acad Med 2007;82:608-15. PMID: 17525552. DOI: 10.1097/ACM.0b013e3180556906.
Abstract
The designated institutional official (DIO) is responsible for monitoring all residency programs within an institution. Although program-evaluation tools have been developed for residency program directors, there are currently no published evaluation tools for DIOs. This manuscript describes the development and implementation of a standardized, dimensional program report card for the more than 60 residency and fellowship programs at our institution. The report card measures the theoretical construct of residency program performance and is divided into four sections: (1) quality of candidates recruited, (2) the resident educational program, (3) graduate success, and (4) overall house officer satisfaction. Each section is measured by objective and subjective metrics that allow the DIO to record programmatic strengths and weaknesses. The results are confidentially shared with the residency program director and encourage a partnership between the DIO and the program director. Although it is difficult to establish concrete construct validity for this instrument, the process used to develop the report card appears sound, and the authors recognize that the report card is a surrogate for each program's Residency Review Committee's (RRC's) perception of quality. In the future, the authors hope to work closely with the Accreditation Council for Graduate Medical Education and/or the Group on Resident Affairs of the Association of American Medical Colleges to set national benchmark criteria for acceptable residency program performance in each medical discipline, so that DIOs and program directors can compare residency programs objectively and identify areas for improvement at the local and national levels.
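The cross-program comparison such a report card enables amounts to standardizing each metric across programs so outliers stand out. A hypothetical sketch; the programs, the board-pass metric, and all values are invented:

```python
# Standardizing one report-card metric across programs so that a DIO can spot
# outliers at a glance. Program names and pass rates are invented.
import statistics

board_pass_rate = {"surgery": 0.92, "medicine": 0.88,
                   "pediatrics": 0.95, "anesthesiology": 0.84}

mean = statistics.mean(board_pass_rate.values())
sd = statistics.stdev(board_pass_rate.values())
for program, rate in sorted(board_pass_rate.items(), key=lambda kv: kv[1]):
    z = (rate - mean) / sd
    print(f"{program}: {rate:.0%} (z = {z:+.2f})")
```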
Collapse
Affiliation(s)
- Roy Phitayakorn
- Department of Surgery, University Hospitals Case Medical Center, Cleveland, Ohio 44106, USA
| | | | | |
Collapse
|