1
Drake CB, Rhee DW, Panigrahy N, Heery L, Iturrate E, Stern DT, Sartori DJ. Toward precision medical education: Characterizing individual residents' clinical experiences throughout training. J Hosp Med 2025;20:17-25. [PMID: 39103985] [DOI: 10.1002/jhm.13471]
Abstract
BACKGROUND Despite the central role of experiential learning in residency training, the actual clinical experiences residents participate in are not well characterized. A better understanding of the type, volume, and variation in residents' clinical experiences is essential to support precision medical education strategies. OBJECTIVE We sought to characterize the full range of clinical experiences of individual internal medicine residents throughout their time in training. METHODS We evaluated the clinical experiences of medicine residents (n = 51) who completed training at NYU Grossman School of Medicine's Brooklyn campus between 2020 and 2023. Residents' inpatient and outpatient experiences were identified using notes written, orders placed, and care team sign-ins; principal ICD-10 codes for each encounter were converted into medical content categories using a previously described crosswalk tool. RESULTS Of 152,426 clinical encounters with available ICD-10 codes, 132,284 were mapped to medical content categories (94.5% capture). Residents' clinical experiences were particularly enriched in infectious and cardiovascular disease; most residents had very little exposure to allergy, dermatology, oncology, or rheumatology. Some trainees saw twice as many cases in a given content area as others did. There was little concordance between the actual frequency of clinical experience and the expected content frequency on the American Board of Internal Medicine (ABIM) certification exam. CONCLUSIONS Individual residents' clinical experiences in training vary widely, both in number and in type. Characterizing these experiences paves the way for exploring the relationships between clinical exposure and educational outcomes, and for implementing precision education strategies that could fill residents' experiential gaps and complement their strengths with targeted educational interventions.
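The crosswalk step this abstract describes, converting each encounter's principal ICD-10 code into a medical content category and tallying exposures per resident, can be sketched as a prefix-keyed lookup. The table entries, category names, and longest-prefix-match rule below are illustrative assumptions, not the authors' published tool.

```python
# Minimal sketch: map principal ICD-10 codes to content categories via a
# crosswalk keyed on code prefixes, then count exposures for one resident.
# The crosswalk entries and matching rule are assumptions for illustration.
from collections import Counter

CROSSWALK = {
    "I21": "Cardiovascular disease",   # acute myocardial infarction
    "I50": "Cardiovascular disease",   # heart failure
    "A41": "Infectious disease",       # sepsis
    "J18": "Pulmonary disease",        # pneumonia
    "L40": "Dermatology",              # psoriasis
}

def map_code(icd10: str) -> str | None:
    """Return the content category for an ICD-10 code, or None if unmapped."""
    code = icd10.strip().upper()
    # Try progressively shorter prefixes (e.g. 'I21.4' -> 'I21.4' -> 'I21').
    for length in range(len(code), 2, -1):
        category = CROSSWALK.get(code[:length].rstrip("."))
        if category:
            return category
    return None

def summarize(encounters: list[str]) -> Counter:
    """Count mapped encounters per content category for one resident."""
    return Counter(c for c in (map_code(e) for e in encounters) if c)

if __name__ == "__main__":
    print(summarize(["I21.4", "A41.9", "J18.9", "Z00.0"]))
    # Counter({'Cardiovascular disease': 1, 'Infectious disease': 1, 'Pulmonary disease': 1})
```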
Affiliation(s)
- Carolyn B Drake
- Division of Hospital Medicine, Department of Medicine, Internal Medicine Residency Program, NYU Grossman School of Medicine, New York, New York, USA
- David W Rhee
- Leon H. Charney Division of Cardiology, Department of Medicine, NYU Grossman School of Medicine, New York, New York, USA
- Neha Panigrahy
- NYU Grossman School of Medicine, New York, New York, USA
- Lauren Heery
- NYU Grossman School of Medicine, New York, New York, USA
- Eduardo Iturrate
- Division of Hospital Medicine, Department of Medicine, DataCore, Enterprise Research Informatics and Epic Analytics, NYU Grossman School of Medicine, New York, New York, USA
- David T Stern
- Department of Medicine, Education and Faculty Affairs, NYU Grossman School of Medicine, New York, New York, USA
- Margaret Cochran Corbin VA Medical Center, New York, New York, USA
- Daniel J Sartori
- Division of Hospital Medicine, Department of Medicine, Internal Medicine Residency Program, NYU Grossman School of Medicine, New York, New York, USA
2
Lees AF, Beni C, Lee A, Wedgeworth P, Dzara K, Joyner B, Tarczy-Hornoch P, Leu M. Uses of Electronic Health Record Data to Measure the Clinical Learning Environment of Graduate Medical Education Trainees: A Systematic Review. Acad Med 2023;98:1326-1336. [PMID: 37267042] [PMCID: PMC10615720] [DOI: 10.1097/acm.0000000000005288]
Abstract
PURPOSE This study systematically reviews the uses of electronic health record (EHR) data to measure graduate medical education (GME) trainee competencies. METHOD In January 2022, the authors conducted a systematic review of original research in MEDLINE from database start to December 31, 2021. The authors searched for articles that used the EHR as their data source and in which the individual GME trainee was the unit of observation and/or unit of analysis. The database query was intentionally broad because an initial survey of pertinent articles identified no unifying Medical Subject Heading terms. Articles were coded and clustered by theme and Accreditation Council for Graduate Medical Education (ACGME) core competency. RESULTS The database search yielded 3,540 articles, of which 86 met the study inclusion criteria. Articles clustered into 16 themes, the largest of which were trainee condition experience (17 articles), work patterns (16 articles), and continuity of care (12 articles). Five of the ACGME core competencies were represented (patient care and procedural skills, practice-based learning and improvement, systems-based practice, medical knowledge, and professionalism). In addition, 25 articles assessed the clinical learning environment. CONCLUSIONS This review identified 86 articles that used EHR data to measure individual GME trainee competencies, spanning 16 themes and 6 competencies and revealing marked between-trainee variation. The authors propose a digital learning cycle framework that arranges sequentially the uses of EHR data within the cycle of clinical experiential learning central to GME. Three technical components necessary to unlock the potential of EHR data to improve GME are described: measures, attribution, and visualization. Partnerships between GME programs and informatics departments will be pivotal in realizing this opportunity.
Affiliation(s)
- A Fischer Lees
- A. Fischer Lees is a clinical informatics fellow, Department of Biomedical Informatics and Medical Education, University of Washington School of Medicine, Seattle, Washington
- Catherine Beni
- C. Beni is a general surgery resident, Department of Surgery, University of Washington School of Medicine, Seattle, Washington
- Albert Lee
- A. Lee is a clinical informatics fellow, Department of Biomedical Informatics and Medical Education, University of Washington School of Medicine, Seattle, Washington
- Patrick Wedgeworth
- P. Wedgeworth is a clinical informatics fellow, Department of Biomedical Informatics and Medical Education, University of Washington School of Medicine, Seattle, Washington
- Kristina Dzara
- K. Dzara is assistant dean for educator development, director, Center for Learning and Innovation in Medical Education, and associate professor of medical education, Department of Biomedical Informatics and Medical Education, University of Washington School of Medicine, Seattle, Washington
- Byron Joyner
- B. Joyner is vice dean for graduate medical education and designated institutional official, Graduate Medical Education, University of Washington School of Medicine, Seattle, Washington
- Peter Tarczy-Hornoch
- P. Tarczy-Hornoch is professor and chair, Department of Biomedical Informatics and Medical Education, and professor, Department of Pediatrics (Neonatology), University of Washington School of Medicine, and adjunct professor, Allen School of Computer Science and Engineering, University of Washington, Seattle, Washington
- Michael Leu
- M. Leu is professor and director, Clinical Informatics Fellowship, Department of Biomedical Informatics and Medical Education, and professor, Department of Pediatrics, University of Washington School of Medicine, Seattle, Washington
3
Schwartz A, King B, Mink R, Turner T, Abramson E, Blankenburg R, Degnon L. The APPD Longitudinal Educational Assessment Research Network's First Decade. Pediatrics 2023;151:191115. [PMID: 37122062] [DOI: 10.1542/peds.2022-059113]
Abstract
ABSTRACT In 2009, the Association of Pediatric Program Directors (APPD) Longitudinal Educational Assessment Research Network (LEARN), a national educational research network, was formed. We report an evaluation of the network after 10 years of operation, reviewing program context, input, processes, and products to measure its progress in performing educational research that advances the training of future pediatricians. Historical changes in medical education shaped the initial development of the network. APPD LEARN now includes 74% (148 of 201) of US pediatric residency programs and has recently incorporated a network of pediatric subspecialty fellowship programs. At the time of this evaluation, APPD LEARN had approved 19 member-initiated studies and 14 interorganizational studies, resulting in 23 peer-reviewed publications, numerous presentations, and 7 archived sharable data sets. Most publications focused on how and when interventions work rather than whether they work, had high scores for reporting rigor, and included organizational and objective performance outcomes. Member program representatives had positive perceptions of APPD LEARN's success, with most highly valuing participation in research that impacts training, access to expertise, and the ability to make authorship contributions for presentations and publications. Areas for development and improvement identified in the evaluation include adopting a formal research prioritization process, infrastructure changes to support educational research that includes patient data, and expanding educational outreach within and outside the network. APPD LEARN and similar networks contribute to high-rigor research in pediatric education that can lead to improvements in training and thereby the health care of children.
Affiliation(s)
- Alan Schwartz
- Departments of Medical Education and Pediatrics, University of Illinois at Chicago, Chicago, Illinois
- Association of Pediatric Program Directors, McLean, Virginia
- Beth King
- Association of Pediatric Program Directors, McLean, Virginia
- Richard Mink
- Association of Pediatric Program Directors, McLean, Virginia
- Harbor-UCLA Medical Center, Torrance, California
- Teri Turner
- Department of Pediatrics, Baylor College of Medicine, Houston, Texas
- Erika Abramson
- Department of Pediatrics, Weill Cornell Medicine, New York, New York
- Laura Degnon
- Association of Pediatric Program Directors, McLean, Virginia
4
Wang MD, Rosner BI, Rosenbluth G. Where Is the Digitally Silent Provider? Development and Validation of a Team-Centered Electronic Health Record Attribution Model for Supervising Residents. Acad Med 2023;98:62-66. [PMID: 36576768] [DOI: 10.1097/acm.0000000000004978]
Abstract
PROBLEM Providing trainees with data and benchmarks on their own patient populations is an Accreditation Council for Graduate Medical Education core residency requirement. Leveraging electronic health records (EHRs) for this purpose relies on correctly attributing patients to the trainees responsible for their care. EHR activity logs are useful for attributing interns to inpatients but not for attributing supervising residents, who often have no inpatient EHR usage obligations, and therefore may generate no digital "footprints" on a given patient-day from which to ascertain attribution. APPROACH The authors developed and tested a novel team-centered binary logistic regression model leveraging EHR activity logs from July 1, 2018, to June 30, 2019, for pediatric hospital medicine (PHM) supervising residents at the University of California, San Francisco. Unlike patient-centered models that determine daily attribution according to the trainee generating the greatest relative activity in individual patients' charts, the team-centered approach predicts daily attribution based on the trainee generating EHR activity across the greatest proportion of a team's patients. To assess generalizability, the authors similarly modeled supervising resident attribution in adult hospital medicine (AHM) and orthopedic surgery (OS). OUTCOMES For PHM, AHM, and OS, 1,100, 1,399, and 803 unique patient encounters and 29, 62, and 10 unique supervising residents were included, respectively. Team-centered models outperformed patient-centered models for the 3 specialties, with respective accuracies of 85.4% versus 72.4% (PHM), 88.7% versus 75.4% (AHM), and 69.3% versus 51.6% (OS; P < .001 for all). AHM and PHM models demonstrated relative generalizability to one another while OS did not. NEXT STEPS Validation at other institutions will be essential to understanding the potential for generalizability of this approach. Accurately attributed data are likely to be trusted more by trainees, enabling programs to operationalize feedback for use cases including performance measurement, case mix assessment, and postdischarge opportunities for follow-up learning.
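As a rough illustration of the contrast drawn in this abstract, the sketch below reduces the two attribution strategies to their underlying counting rules over EHR activity logs: per patient-day (patient-centered) versus per team-day (team-centered). The published approach is a logistic regression over such features; the log schema, field names, and tie-breaking behavior here are assumptions for illustration only.

```python
# Sketch contrasting patient-centered vs team-centered daily attribution from
# EHR activity logs. Each event is a dict with keys: date, team, patient_id,
# user, n_actions. This schema is an illustrative assumption.
from collections import defaultdict

def patient_centered(events):
    """Attribute each patient-day to the resident with the most actions in that chart."""
    best = {}
    for e in events:
        key = (e["date"], e["patient_id"])
        if key not in best or e["n_actions"] > best[key][1]:
            best[key] = (e["user"], e["n_actions"])
    return {k: v[0] for k, v in best.items()}

def team_centered(events):
    """Attribute each team-day to the resident active in the largest share of the team's patients."""
    coverage = defaultdict(set)   # (date, team, user) -> patients touched by that user
    patients = defaultdict(set)   # (date, team) -> all patients seen on the team that day
    for e in events:
        coverage[(e["date"], e["team"], e["user"])].add(e["patient_id"])
        patients[(e["date"], e["team"])].add(e["patient_id"])
    attribution = {}
    for (date, team), pts in patients.items():
        users = {u for (d, t, u) in coverage if d == date and t == team}
        attribution[(date, team)] = max(
            users, key=lambda u: len(coverage[(date, team, u)]) / len(pts)
        )
    return attribution
```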
Affiliation(s)
- Michael D Wang
- M.D. Wang is assistant professor, Division of Hospital Medicine, Center for Clinical Informatics and Improvement Research, University of California, San Francisco, San Francisco, California
- Benjamin I Rosner
- B.I. Rosner is associate professor, Division of Hospital Medicine, Center for Clinical Informatics and Improvement Research, University of California, San Francisco, San Francisco, California
- Glenn Rosenbluth
- G. Rosenbluth is professor, Department of Pediatrics, and director of quality and safety programs, Office of Graduate Medical Education, University of California, San Francisco, San Francisco, California
5
Rule A, Melnick ER, Apathy NC. Using event logs to observe interactions with electronic health records: an updated scoping review shows increasing use of vendor-derived measures. J Am Med Inform Assoc 2022;30:144-154. [PMID: 36173361] [PMCID: PMC9748581] [DOI: 10.1093/jamia/ocac177]
Abstract
OBJECTIVE The aim of this article is to compare the aims, measures, methods, limitations, and scope of studies that employ vendor-derived and investigator-derived measures of electronic health record (EHR) use, and to assess measure consistency across studies. MATERIALS AND METHODS We searched PubMed for articles published between July 2019 and December 2021 that employed measures of EHR use derived from EHR event logs. We coded the aims, measures, methods, limitations, and scope of each article and compared articles employing vendor-derived and investigator-derived measures. RESULTS One hundred and two articles met inclusion criteria; 40 employed vendor-derived measures, 61 employed investigator-derived measures, and 1 employed both. Studies employing vendor-derived measures were more likely than those employing investigator-derived measures to observe EHR use only in ambulatory settings (83% vs 48%, P = .002) and only by physicians or advanced practice providers (100% vs 54% of studies, P < .001). Studies employing vendor-derived measures were also more likely to measure durations of EHR use (P < .001 for 6 different activities), but definitions of measures such as time outside scheduled hours varied widely. Eight articles reported measure validation. The reported limitations of vendor-derived measures included measure transparency and availability for certain clinical settings and roles. DISCUSSION Vendor-derived measures are increasingly used to study EHR use, but only by certain clinical roles. Although poorly validated and variously defined, both vendor- and investigator-derived measures of EHR time are widely reported. CONCLUSION The number of studies using event logs to observe EHR use continues to grow, but with inconsistent measure definitions and significant differences between studies that employ vendor-derived and investigator-derived measures.
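One example of an investigator-derived duration measure of the kind compared in this review is total active EHR time reconstructed from raw event logs, counting only gaps between consecutive events that fall under an idle threshold. The 5-minute threshold and minimal log schema below are assumptions; as the abstract notes, such definitions vary widely across studies.

```python
# Sketch of one investigator-derived measure of EHR time from raw event logs:
# total "active" time per user-day, where gaps longer than an idle threshold
# are not counted. Threshold and schema are illustrative assumptions.
from collections import defaultdict
from datetime import timedelta

IDLE_THRESHOLD = timedelta(minutes=5)

def active_ehr_time(events):
    """events: iterable of (user, datetime) pairs; returns {(user, date): timedelta}."""
    by_user_day = defaultdict(list)
    for user, ts in events:
        by_user_day[(user, ts.date())].append(ts)
    totals = {}
    for key, stamps in by_user_day.items():
        stamps.sort()
        total = timedelta(0)
        for prev, nxt in zip(stamps, stamps[1:]):
            gap = nxt - prev
            if gap <= IDLE_THRESHOLD:   # count only gaps short enough to imply continuous use
                total += gap
        totals[key] = total
    return totals
```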
Affiliation(s)
- Adam Rule
- Information School, University of Wisconsin–Madison, Madison, Wisconsin, USA
- Edward R Melnick
- Emergency Medicine, Yale School of Medicine, New Haven, Connecticut, USA
- Biostatistics (Health Informatics), Yale School of Public Health, New Haven, Connecticut, USA
- Nate C Apathy
- MedStar Health National Center for Human Factors in Healthcare, MedStar Health Research Institute, Washington, District of Columbia, USA
- Regenstrief Institute, Indianapolis, Indiana, USA
6
Lam AC, Tang B, Lalwani A, Verma AA, Wong BM, Razak F, Ginsburg S. Methodology paper for the General Medicine Inpatient Initiative Medical Education Database (GEMINI MedED): a retrospective cohort study of internal medicine resident case-mix, clinical care and patient outcomes. BMJ Open 2022;12:e062264. [PMID: 36153026] [PMCID: PMC9511606] [DOI: 10.1136/bmjopen-2022-062264]
Abstract
INTRODUCTION Unwarranted variation in patient care among physicians is associated with negative patient outcomes and increased healthcare costs. Care variation likely also exists for resident physicians. Despite the global movement towards outcomes-based and competency-based medical education, current assessment strategies in residency do not routinely incorporate clinical outcomes. The widespread use of electronic health records (EHRs) may enable the implementation of in-training assessments that incorporate clinical care and patient outcomes. METHODS AND ANALYSIS The General Medicine Inpatient Initiative Medical Education Database (GEMINI MedED) is a retrospective cohort study of senior residents (postgraduate year 2/3) enrolled in the University of Toronto Internal Medicine (IM) programme between 1 April 2010 and 31 December 2020. This study focuses on senior IM residents and patients they admit overnight to four academic hospitals. Senior IM residents are responsible for overseeing all overnight admissions; thus, care processes and outcomes for these clinical encounters can be at least partially attributed to the care they provide. Call schedules from each hospital, which list the date, location and senior resident on-call, will be used to link senior residents to EHR data of patients admitted during their on-call shifts. Patient data will be derived from the GEMINI database, which contains administrative (eg, demographic and disposition) and clinical data (eg, laboratory and radiological investigation results) for patients admitted to IM at the four academic hospitals. Overall, this study will examine three domains of resident practice: (1) case-mix variation across residents, hospitals and academic year, (2) resident-sensitive quality measures (EHR-derived metrics that are partially attributable to resident care) and (3) variations in patient outcomes across residents and factors that contribute to such variation. ETHICS AND DISSEMINATION GEMINI MedED was approved by the University of Toronto Ethics Board (RIS#39339). Results from this study will be presented in academic conferences and peer-reviewed journals.
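The attribution step described in this protocol, linking call schedules to EHR data, amounts to a join between schedule rows and encounters whose admission time falls inside the on-call window. The shift boundaries (17:00 to 08:00 the next day) and field names in the sketch below are assumptions, not GEMINI MedED's actual definitions.

```python
# Sketch of the schedule-to-encounter linkage: join a call schedule
# (date, hospital, senior resident on call) to overnight admissions so each
# admission is attributed to the resident on call. Window and schema are
# illustrative assumptions.
from datetime import datetime, time, timedelta

def on_call_window(call_date):
    """Overnight shift assumed to run from 17:00 on the call date to 08:00 the next day."""
    start = datetime.combine(call_date, time(17, 0))
    return start, start + timedelta(hours=15)

def link_admissions(schedule, admissions):
    """schedule: [{'date', 'hospital', 'resident'}]; admissions: [{'admit_time', 'hospital', 'encounter_id'}]."""
    linked = []
    for shift in schedule:
        start, end = on_call_window(shift["date"])
        for adm in admissions:
            if adm["hospital"] == shift["hospital"] and start <= adm["admit_time"] < end:
                linked.append({"encounter_id": adm["encounter_id"], "resident": shift["resident"]})
    return linked
```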
Affiliation(s)
- Andrew Cl Lam
- Department of Medicine, University of Toronto Faculty of Medicine, Toronto, Ontario, Canada
- Brandon Tang
- Department of Medicine, Division of General Internal Medicine, University of Toronto Faculty of Medicine, Toronto, Ontario, Canada
- Anushka Lalwani
- Li Ka Shing Knowledge Institute, Unity Health Toronto, Toronto, Ontario, Canada
- Amol A Verma
- Department of Medicine, University of Toronto Faculty of Medicine, Toronto, Ontario, Canada
- Li Ka Shing Knowledge Institute, Unity Health Toronto, Toronto, Ontario, Canada
- Division of General Internal Medicine, Unity Health Toronto, Toronto, Ontario, Canada
- Brian M Wong
- Department of Medicine, University of Toronto Faculty of Medicine, Toronto, Ontario, Canada
- Division of General Internal Medicine, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada
- Fahad Razak
- Department of Medicine, University of Toronto Faculty of Medicine, Toronto, Ontario, Canada
- Li Ka Shing Knowledge Institute, Unity Health Toronto, Toronto, Ontario, Canada
- Division of General Internal Medicine, Unity Health Toronto, Toronto, Ontario, Canada
- Shiphra Ginsburg
- Department of Medicine, Division of Respirology, University of Toronto Faculty of Medicine, Toronto, Ontario, Canada
- Division of Respirology, Sinai Health System, Toronto, Ontario, Canada
7
Yarahuan JKW, Lo HY, Bass L, Wright J, Hess LM. Design, Usability, and Acceptability of a Needs-Based, Automated Dashboard to Provide Individualized Patient-Care Data to Pediatric Residents. Appl Clin Inform 2022;13:380-390. [PMID: 35294985] [PMCID: PMC8926457] [DOI: 10.1055/s-0042-1744388]
Abstract
BACKGROUND AND OBJECTIVES Pediatric residency programs are required by the Accreditation Council for Graduate Medical Education to provide residents with patient-care and quality metrics to facilitate self-identification of knowledge gaps to prioritize improvement efforts. Trainees are interested in receiving this data, but this is a largely unmet need. Our objectives were to (1) design and implement an automated dashboard providing individualized data to residents, and (2) examine the usability and acceptability of the dashboard among pediatric residents. METHODS We developed a dashboard containing individualized patient-care data for pediatric residents with emphasis on needs identified by residents and residency leadership. To build the dashboard, we created a connection from a clinical data warehouse to data visualization software. We allocated patients to residents based on note authorship and created individualized reports with masked identities that preserved anonymity. After development, we conducted usability and acceptability testing with 11 resident users utilizing a mixed-methods approach. We conducted interviews and anonymous surveys which evaluated technical features of the application, ease of use, as well as users' attitudes toward using the dashboard. Categories and subcategories from usability interviews were identified using a content analysis approach. RESULTS Our dashboard provides individualized metrics including diagnosis exposure counts, procedure counts, efficiency metrics, and quality metrics. In content analysis of the usability testing interviews, the most frequently mentioned use of the dashboard was to aid a resident's self-directed learning. Residents had few concerns about the dashboard overall. Surveyed residents found the dashboard easy to use and expressed intention to use the dashboard in the future. CONCLUSION Automated dashboards may be a solution to the current challenge of providing trainees with individualized patient-care data. Our usability testing revealed that residents found our dashboard to be useful and that they intended to use this tool to facilitate development of self-directed learning plans.
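The data preparation implied by this abstract (attributing patients by note authorship, then surfacing anonymized per-resident metrics) might be prototyped roughly as below. The note schema, masking scheme, and metric choices are assumptions for illustration; the authors' dashboard was built on their clinical data warehouse and a commercial visualization tool.

```python
# Sketch: allocate patients to residents by note authorship and build an
# anonymized per-resident metrics table a dashboard could display.
# Field names and the masking scheme are illustrative assumptions.
import hashlib
from collections import defaultdict

def mask(resident_id: str, salt: str = "demo-salt") -> str:
    """Stable pseudonym so residents can find themselves without exposing peers."""
    return "Resident-" + hashlib.sha256((salt + resident_id).encode()).hexdigest()[:6]

def dashboard_rows(notes):
    """notes: [{'author_id', 'patient_id', 'diagnosis'}] -> one summary row per masked resident."""
    per_resident = defaultdict(lambda: {"patients": set(), "diagnoses": defaultdict(int)})
    for n in notes:
        rec = per_resident[n["author_id"]]
        rec["patients"].add(n["patient_id"])
        rec["diagnoses"][n["diagnosis"]] += 1
    return [
        {"resident": mask(rid), "n_patients": len(rec["patients"]), "dx_counts": dict(rec["diagnoses"])}
        for rid, rec in per_resident.items()
    ]
```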
Affiliation(s)
- Julia K W Yarahuan
- Division of Pediatric Hospital Medicine, Department of Pediatrics, Boston Children's Hospital, Boston, Massachusetts, United States
- Huay-Ying Lo
- Section of Pediatric Hospital Medicine, Department of Pediatrics, Baylor College of Medicine/Texas Children's Hospital, Houston, Texas, United States
- Lanessa Bass
- Section of Pediatric Hospital Medicine, Department of Pediatrics, Baylor College of Medicine/Texas Children's Hospital, Houston, Texas, United States
- Jeff Wright
- Information Services, Texas Children's Hospital, Houston, Texas, United States
- Lauren M Hess
- Section of Pediatric Hospital Medicine, Department of Pediatrics, Baylor College of Medicine/Texas Children's Hospital, Houston, Texas, United States
8
Mai MV, Orenstein EW, Manning JD, Luberti AA, Dziorny AC. Attributing Patients to Pediatric Residents Using Electronic Health Record Features Augmented with Audit Logs. Appl Clin Inform 2020;11:442-451. [PMID: 32583389] [DOI: 10.1055/s-0040-1713133]
Abstract
OBJECTIVE Patient attribution, or the process of attributing patient-level metrics to specific providers, attempts to capture real-life provider-patient interactions (PPI). Attribution holds wide-ranging importance, particularly for outcomes in graduate medical education, but remains a challenge. We developed and validated an algorithm using EHR data to identify pediatric resident PPIs (rPPIs). METHODS We prospectively surveyed residents in three care settings to collect self-reported rPPIs. Participants were surveyed at the end of primary care clinic, emergency department (ED), and inpatient shifts, shown a patient census list, asked to mark the patients with whom they interacted, and encouraged to provide a short rationale behind the marked interaction. We extracted routine EHR data elements, including audit logs, note contribution, order placement, care team assignment, and chart closure, and applied a logistic regression classifier to the data to predict rPPIs in each care setting. We also performed a comment analysis of the resident-reported rationales in the inpatient care setting to explore perceived patient interactions in a complicated workflow. RESULTS We surveyed 81 residents over 111 shifts and identified 579 patient interactions. Among EHR-extracted data, time-in-chart was the best predictor in all three care settings (primary care clinic: odds ratio [OR] = 19.36, 95% confidence interval [CI]: 4.19-278.56; ED: OR = 19.06, 95% CI: 9.53-41.65; inpatient: OR = 2.95, 95% CI: 2.23-3.97). Primary care clinic- and ED-specific models had c-statistic values > 0.98, while the inpatient-specific model had greater variability (c-statistic = 0.89). Of 366 inpatient rPPIs, residents provided rationales for 90.1%, which focused on direct involvement in a patient's admission or transfer, or care as the front-line ordering clinician (55.6%). CONCLUSION Classification models based on routinely collected EHR data predict resident-defined rPPIs across care settings. While specific to pediatric residents in this study, the approach may be generalizable to other provider populations and scenarios in which accurate patient attribution is desirable.
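The classifier described here can be sketched as a logistic regression over routine EHR features such as time-in-chart, note contribution, and order placement. The feature set, the toy training labels, and the use of scikit-learn below are illustrative assumptions; the study's actual features and survey-derived labels differ.

```python
# Sketch: logistic regression over routine EHR features predicting whether a
# resident-patient interaction occurred. Features and labels below are made
# up for demonstration; they are not the study's data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: minutes in chart, wrote a note (0/1), placed an order (0/1)
X_train = np.array([
    [22.0, 1, 1],
    [15.0, 1, 0],
    [0.5, 0, 0],
    [2.0, 0, 0],
    [9.0, 0, 1],
    [0.0, 0, 0],
])
y_train = np.array([1, 1, 0, 0, 1, 0])  # 1 = resident reported interacting with the patient

model = LogisticRegression().fit(X_train, y_train)

# Predicted probability of an interaction for a new resident-patient pair:
# 12 minutes in chart, no note written, one order placed.
print(model.predict_proba([[12.0, 0, 1]])[0, 1])
```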
Affiliation(s)
- Mark V Mai
- Department of Anesthesia and Critical Care Medicine, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania, United States
- Department of Biomedical and Health Informatics, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania, United States
- Evan W Orenstein
- Department of Pediatrics, Children's Healthcare of Atlanta, Atlanta, Georgia, United States
- John D Manning
- Department of Emergency Medicine, Atrium Health's Carolinas Medical Center, Charlotte, North Carolina, United States
- Anthony A Luberti
- Department of Biomedical and Health Informatics, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania, United States
- Department of Pediatrics, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania, United States
- Adam C Dziorny
- Department of Anesthesia and Critical Care Medicine, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania, United States
- Department of Biomedical and Health Informatics, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania, United States
9
Ten Cate O, Dahdal S, Lambert T, Neubauer F, Pless A, Pohlmann PF, van Rijen H, Gurtner C. Ten caveats of learning analytics in health professions education: A consumer's perspective. Med Teach 2020;42:673-678. [PMID: 32150499] [DOI: 10.1080/0142159x.2020.1733505]
Abstract
A group of 22 medical educators from different European countries, gathered in a meeting in Utrecht in July 2019, discussed the topic of learning analytics (LA) in an open conversation and addressed its definition, its purposes, and its potential risks for learners and teachers. LA was seen as a significant advance with important potential to improve education, but the group felt that the potential drawbacks of using LA may still be underexposed in the literature. After transcription and interpretation of the discussion's conclusions, a document was drafted and fed back to the group in two rounds to arrive at a series of 10 caveats educators should be aware of when developing and using LA, including excessive standardization of learning, with undue consequences of over-efficiency and pressure on learners and teachers, and a decrease in the variety of 'valid' learning resources. Learning analytics may misalign with eventual clinical performance and can run the risk of privacy breaches and the inescapability of documented failures. These consequences may not happen, but the authors, on behalf of the full group of educators, felt it worthwhile to signal these caveats from a consumers' perspective.
Affiliation(s)
- Olle Ten Cate
- Center for Research and Development of Education, University Medical Center Utrecht, Utrecht, The Netherlands
- Thomas Lambert
- Kepler University Hospital Linz, Johannes Kepler University Linz, Linz, Austria
- Florian Neubauer
- Institute for Medical Education, University of Bern, Bern, Switzerland
- Anina Pless
- Institute of Primary Health Care (BIHAM), University of Bern, Bern, Switzerland
- Harold van Rijen
- Center for Research and Development of Education, University Medical Center Utrecht, Utrecht, The Netherlands
- Corinne Gurtner
- Institute of Animal Pathology, Vetsuisse Faculty Bern, University of Bern, Bern, Switzerland
10
Rosenbluth G. Trainee and Program Director Perspectives on Meaningful Patient Attribution and Clinical Outcomes Data. J Grad Med Educ 2020;12:295-302. [PMID: 32595849] [PMCID: PMC7301928] [DOI: 10.4300/jgme-d-19-00730.1]
Abstract
BACKGROUND The Accreditation Council for Graduate Medical Education specifies that trainees must receive clinical outcomes and quality benchmark data at specific levels related to institutional patient populations. Program directors (PDs) are challenged to identify meaningful data and provide them in formats acceptable to trainees. OBJECTIVE We sought to understand what types of patients, data/metrics, and data delivery systems trainees and PDs prefer for supplying trainees with clinical outcomes data. METHODS Trainees (n = 21) and PDs (n = 12) from multiple specialties participated in focus groups during academic year 2017-2018. They described key themes for providing clinical outcomes data to trainees. RESULTS Trainees and PDs differed in how they identified patients for clinical outcomes data for trainees. Trainees were interested in encounters where they felt a sense of responsibility or had autonomy/independent decision-making opportunities, continuity, or learned something new; PDs used broader criteria including all patients cared for by their trainees. Both groups thought trainees should be given trainee-level metrics and consistently highlighted the importance of comparison to peers and/or benchmarks. Both groups found value in "push" and "pull" data systems, although trainees wanted both, while PDs wanted one or the other. Both groups agreed that trainees should review data with specific faculty. Trainees expressed concern about being judged based on their patients' clinical outcomes. CONCLUSIONS Trainee and PD perspectives on which patients they would like outcomes data for differed, but they overlapped for types of metrics, formats, and review processes for the data.
11
How are medical students using the Electronic Health Record (EHR)?: An analysis of EHR use on an inpatient medicine rotation. PLoS One 2019;14:e0221300. [PMID: 31419265] [PMCID: PMC6697335] [DOI: 10.1371/journal.pone.0221300]
Abstract
Physicians currently spend as much as half of their day in front of the computer. The Electronic Health Record (EHR) has been associated with declining bedside skills and physician burnout. Medical student EHR use has not been well studied or characterized, but student responsibilities for EHR documentation will likely grow: the most recent Centers for Medicare and Medicaid Services (CMS) provisions now allow student notes to be used for billing, which will likely expand the role of the EHR in medical students' work over time. To gain a better understanding of how medical students use the EHR at our institution, we retrospectively analyzed 6,692,994 EHR interactions from 49 third-year clerkship medical students and their supervising physicians assigned to the inpatient medicine ward rotation between June 25, 2015 and June 24, 2016 at a tertiary academic medical center. Medical students spent 4.42 hours (37%) of each day on the EHR and 35 minutes logging in from home. Improved understanding of student EHR use and its effects on well-being warrants further attention, especially as EHR use increases among early trainees.
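A rough sketch of the aggregation behind figures like these: rolling raw interaction timestamps up into per-student daily totals, split into hospital and home use. The location flag, 5-minute idle threshold, and schema below are assumptions, not the study's method.

```python
# Sketch: aggregate raw EHR interaction timestamps into per-student daily
# active time, split by access location. Threshold and schema are
# illustrative assumptions.
from collections import defaultdict
from datetime import timedelta

IDLE = timedelta(minutes=5)

def daily_use(interactions):
    """interactions: [(student_id, timestamp, location)] with location in {'hospital', 'home'}."""
    grouped = defaultdict(list)
    for student, ts, loc in interactions:
        grouped[(student, ts.date(), loc)].append(ts)
    summary = defaultdict(lambda: {"hospital": timedelta(0), "home": timedelta(0)})
    for (student, day, loc), stamps in grouped.items():
        stamps.sort()
        # Count only short gaps between consecutive events as active use.
        active = sum((b - a for a, b in zip(stamps, stamps[1:]) if b - a <= IDLE), timedelta(0))
        summary[(student, day)][loc] += active
    return summary
```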