1
McCormick C, Ahluwalia S, Segon A. Effect of a Performance Feedback Dashboard on Hospitalist Laboratory Test Utilization. Am J Med Qual 2023;38:273-278. PMID: 37908029. DOI: 10.1097/jmq.0000000000000150.
Abstract
BACKGROUND Healthcare spending remains an area for improvement across all forms of medicine. Overtreatment or low-value care, including overutilization of laboratory testing, carries an estimated cost of $75.7-$101.2 billion annually. Providing performance feedback to hospitalists has been shown to be an effective way to encourage quality-improvement-focused practice. Data remain limited on the short-term effect of implementing performance feedback on hospital laboratory testing spending. OBJECTIVE The objective of this project was to determine whether performance-based feedback on laboratory utilization, delivered to both hospitalist and resident teams, results in more conservative use of laboratory testing. DESIGN, SETTING, PARTICIPANTS This quality improvement project was conducted at a tertiary academic medical center, including both direct-care and house-staff teams. INTERVENTION OR EXPOSURE A weekly performance feedback report detailing laboratory test utilization by all hospitalists in a ranked system, normalized by patient census, was generated and distributed to providers for 3 months. MAIN OUTCOMES AND MEASURES The outcome measure was cumulative laboratory utilization during the intervention period compared with baseline utilization during the corresponding 3 months of the prior year, and the weekly trend in laboratory utilization over 52 weeks. The aggregate laboratory utilization rate during the intervention and control periods was defined as the total number of laboratory tests ordered divided by the total number of patient encounters. Additionally, the cost difference was averaged per quarter and reported. The week-by-week trend in laboratory utilization was evaluated using a statistical process control (SPC) chart.
RESULTS Following the intervention, during January-March 2020 the cumulative complete blood count (CBC) utilization rate decreased from 5.54 to 4.83 per patient encounter, and the basic/comprehensive metabolic panel (BMP/CMP) utilization rate decreased from 6.65 to 6.11 per patient encounter, compared with January-March 2019. This equated to cost savings of approximately $42,700 for the quarter. Nonrandom variation was seen on SPC charts of weekly utilization rates for common laboratory tests during the intervention period. CONCLUSIONS Our intervention resulted in a decrease in laboratory test utilization rates across direct-care and house-staff teams. This study lays promising groundwork for one tool that can be used to eliminate a source of hospital waste and improve the quality and efficiency of patient care.
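The utilization-rate definition and SPC evaluation described in this abstract can be sketched as follows. This is a minimal illustration, not the study's method or data: the weekly counts are invented, and a u-chart (events per unit, assuming approximately Poisson-distributed counts) is one common SPC choice for a tests-per-encounter metric.

```python
# Illustrative SPC u-chart for a tests-per-encounter metric (e.g., CBCs ordered
# per patient encounter per week). All numbers below are hypothetical.
import math

# Hypothetical weekly (lab_tests_ordered, patient_encounters) pairs.
weeks = [(540, 100), (510, 98), (495, 105), (470, 102), (452, 99), (440, 101)]

# Centre line: aggregate utilization rate = total tests / total encounters,
# matching the definition given in the abstract.
total_tests = sum(t for t, _ in weeks)
total_encounters = sum(n for _, n in weeks)
u_bar = total_tests / total_encounters

for i, (tests, n) in enumerate(weeks, start=1):
    u_i = tests / n                    # this week's utilization rate
    sigma = math.sqrt(u_bar / n)       # Poisson-based standard error for n encounters
    ucl = u_bar + 3 * sigma            # 3-sigma control limits
    lcl = max(u_bar - 3 * sigma, 0.0)
    flag = "special cause" if not (lcl <= u_i <= ucl) else ""
    print(f"week {i}: u={u_i:.2f} CL={u_bar:.2f} limits=[{lcl:.2f}, {ucl:.2f}] {flag}")
```

A point outside the 3-sigma limits (or a sustained run on one side of the centre line) is the kind of nonrandom variation the study reports on its SPC charts.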
Affiliation(s)
- Ankur Segon
- Medicine, University of Texas Health Science Center, San Antonio, TX

2
Hartford EA, Thomas AA, Kerwin O, Usoro E, Yoshida H, Burns B, Rutman LE, Migita R, Bradford M, Akhter S. Toward Improving Patient Equity in a Pediatric Emergency Department: A Framework for Implementation. Ann Emerg Med 2023;81:385-392. PMID: 36669917. DOI: 10.1016/j.annemergmed.2022.11.015.
Abstract
Disparities in health care delivery and health outcomes for patients in the emergency department (ED) by race, ethnicity, and language for care (REaL) are common and well documented. Addressing inequities from structural racism, implicit bias, and language barriers can be challenging, and there is a lack of data on effective interventions. We describe the implementation of a multifaceted equity improvement strategy in a pediatric ED using Kotter's model for change as a framework to identify the key drivers. The main elements included a data dashboard with quality metrics stratified by patient self-reported REaL to visualize disparities, a staff workshop on implicit bias and microaggressions, and several clinical and operational tools that highlight equity. Our next steps include refining and repeating interventions and tracking important patient outcomes, including timely pain treatment, triage assessment, diagnostic evaluations, and interpreter use, with the overall goal of improving patient equity by REaL over time. This article presents a roadmap for a disparity reduction intervention, which can be part of a multifaceted approach to address health equity in EDs.
Affiliation(s)
- Emily A Hartford
- University of Washington, Department of Pediatrics, Division of Emergency Medicine, Seattle, WA, USA
- Anita A Thomas
- University of Washington, Department of Pediatrics, Division of Emergency Medicine, Seattle, WA, USA
- Olivia Kerwin
- Seattle Children's Hospital Emergency Department, Seattle, WA, USA
- Etiowo Usoro
- Seattle Children's Hospital Emergency Department, Seattle, WA, USA
- Hiromi Yoshida
- University of Washington, Department of Pediatrics, Division of Emergency Medicine, Seattle, WA, USA
- Brian Burns
- Seattle Children's Hospital Emergency Department, Seattle, WA, USA
- Lori E Rutman
- University of Washington, Department of Pediatrics, Division of Emergency Medicine, Seattle, WA, USA
- Russell Migita
- University of Washington, Department of Pediatrics, Division of Emergency Medicine, Seattle, WA, USA
- Sabreen Akhter
- University of Washington, Department of Pediatrics, Division of Emergency Medicine, Seattle, WA, USA

3
Donnelly C, Janssen A, Vinod S, Stone E, Harnett P, Shaw T. A Systematic Review of Electronic Medical Record Driven Quality Measurement and Feedback Systems. Int J Environ Res Public Health 2022;20(1):200. PMID: 36612522. PMCID: PMC9819986. DOI: 10.3390/ijerph20010200.
Abstract
Historically, quality measurement has relied on manual chart abstraction of data collected primarily for administrative purposes. These methods are resource-intensive, time-delayed, and often lack clinical relevance. Electronic Medical Records (EMRs) have increased data availability and opportunities for quality measurement, but little is known about the effectiveness of Measurement Feedback Systems (MFSs) that utilize EMR data. This study explores the effectiveness and characteristics of EMR-enabled MFSs in tertiary care. A search strategy guided by the PICO framework was executed in four databases, and two reviewers screened abstracts and manuscripts. Data on effect and intervention characteristics were extracted using a tailored version of the Cochrane EPOC abstraction tool. Due to study heterogeneity, a narrative synthesis was conducted and reported according to PRISMA guidelines. Fourteen unique MFS studies were extracted and synthesized, of which 12 reported positive effects on outcomes. Findings indicate that quality measurement using EMR data is feasible in certain contexts, and that successful MFSs often incorporated electronic feedback methods supported by clinical leadership and action planning. EMR-enabled MFSs have the potential to reduce the burden of data collection for quality measurement, but further evaluation is needed to translate and scale these findings to broader implementation contexts.
Affiliation(s)
- Candice Donnelly
- Faculty of Medicine and Health, University of Sydney, Camperdown, NSW 2006, Australia
- Anna Janssen
- Faculty of Medicine and Health, University of Sydney, Camperdown, NSW 2006, Australia
- Shalini Vinod
- Liverpool Cancer Therapy Centre, South Western Sydney Local Health District, Liverpool, NSW 2170, Australia
- South West Sydney Clinical Campuses, University of New South Wales, Liverpool, NSW 2170, Australia
- Emily Stone
- Department of Thoracic Medicine and Lung Transplantation, St Vincent's Hospital, Darlinghurst, NSW 2010, Australia
- School of Clinical Medicine, University of New South Wales, Randwick, NSW 2031, Australia
- Paul Harnett
- Faculty of Medicine and Health, University of Sydney, Camperdown, NSW 2006, Australia
- Crown Princess Mary Cancer Centre, Western Sydney Local Health District, Westmead, NSW 2145, Australia
- Tim Shaw
- Faculty of Medicine and Health, University of Sydney, Camperdown, NSW 2006, Australia

4
Patel S, Pierce L, Jones M, Lai A, Cai M, Sharpe BA, Harrison JD. Using Participatory Design to Engage Physicians in the Development of a Provider-Level Performance Dashboard and Feedback System. Jt Comm J Qual Patient Saf 2022;48:165-172. PMID: 35058160. PMCID: PMC8885889. DOI: 10.1016/j.jcjq.2021.10.003.
Abstract
PROBLEM DEFINITION Performance feedback, in which clinicians are given data on select metrics, is widely used in the context of quality improvement. However, there is a lack of practical guidance describing the process of developing performance feedback systems. INITIAL APPROACH This study took place at the University of California, San Francisco (UCSF) with hospitalist physicians. Participatory design methodology was used to develop a performance dashboard and feedback system. Twenty hospitalist physicians participated in a series of six design sessions and two surveys. Each design session and survey systematically addressed key components of the feedback system, including design, metric selection, data delivery, and incentives. The Capability Opportunity Motivation and Behavior (COM-B) model was then used to identify behavior change interventions to facilitate engagement with the dashboard during a pilot implementation. KEY INSIGHTS, LESSONS LEARNED In regard to performance improvement, physicians preferred collaboration over competition and internal motivation over external incentives. Physicians preferred that the dashboard be used as a tool to aid in clinical practice improvement and not punitively by leadership. Metrics that were clinical or patient-centered were perceived as more meaningful and more likely to motivate behavior change. NEXT STEPS The performance dashboard has been introduced to the entire hospitalist group, and evaluation of implementation continues by monitoring engagement and physician attitudes. This will be followed by targeted feedback interventions to attempt to improve performance.
5
Bucalon B, Shaw T, Brown K, Kay J. State-of-the-art Dashboards on Clinical Indicator Data to Support Reflection on Practice: Scoping Review. JMIR Med Inform 2022;10:e32695. PMID: 35156928. PMCID: PMC8887640. DOI: 10.2196/32695.
Abstract
Background There is increasing interest in using routinely collected eHealth data to support reflective practice and long-term professional learning. Studies have evaluated the impact of dashboards on clinician decision-making, task completion time, user satisfaction, and adherence to clinical guidelines. Objective This scoping review aims to summarize the literature on dashboards based on patient administrative, medical, and surgical data that support clinicians' reflective practice. Methods A scoping review was conducted using the Arksey and O'Malley framework. A search of 5 electronic databases (MEDLINE, Embase, Scopus, ACM Digital Library, and Web of Science) identified studies that met the inclusion criteria. Study selection and characterization were performed by 2 independent reviewers (BB and CP). One reviewer extracted the data, which were analyzed descriptively to map the available evidence. Results A total of 18 dashboards from 8 countries were assessed. Dashboards were designed for performance improvement (10/18, 56%), to support quality and safety initiatives (6/18, 33%), and for management and operations (4/18, 22%). Data visualizations were primarily designed for team use (12/18, 67%) rather than for individual clinicians (4/18, 22%). Evaluation methods included asking clinicians directly (11/18, 61%), observing user behavior through clinical indicators and usage log data (14/18, 78%), and usability testing (4/18, 22%). The studies reported high scores on standard usability questionnaires and favorable survey and interview feedback. Improvements in underlying clinical indicators were observed in 78% (7/9) of the studies that measured them, whereas 22% (2/9) reported no significant changes in performance. Conclusions This scoping review maps the current literature on dashboards based on routinely collected clinical indicator data. Although common data visualization techniques and clinical indicators were used across studies, the dashboards and their evaluations were diverse. Design processes were rarely documented in enough detail for reproducibility, and interface features to support clinicians in making sense of and reflecting on their personal performance data were lacking.
Affiliation(s)
- Bernard Bucalon
- Human Centred Technology Cluster, School of Computer Science, The University of Sydney, Darlington, Australia
- Practice Analytics, Digital Health Cooperative Research Centre, Sydney, Australia
- Tim Shaw
- Practice Analytics, Digital Health Cooperative Research Centre, Sydney, Australia
- Research in Implementation Science and e-Health Group, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Kerri Brown
- Practice Analytics, Digital Health Cooperative Research Centre, Sydney, Australia
- Professional Practice Directorate, The Royal Australasian College of Physicians, Sydney, Australia
- Judy Kay
- Human Centred Technology Cluster, School of Computer Science, The University of Sydney, Darlington, Australia
- Practice Analytics, Digital Health Cooperative Research Centre, Sydney, Australia

6
Tsang JY, Peek N, Buchan I, van der Veer SN, Brown B. OUP accepted manuscript. J Am Med Inform Assoc 2022;29:1106-1119. PMID: 35271724. PMCID: PMC9093027. DOI: 10.1093/jamia/ocac031.
Abstract
Objectives (1) Systematically review the literature on computerized audit and feedback (e-A&F) systems in healthcare. (2) Compare features of current systems against e-A&F best practices. (3) Generate hypotheses on how e-A&F systems may impact patient care and outcomes. Methods We searched MEDLINE (Ovid), EMBASE (Ovid), and CINAHL (Ebsco) databases to December 31, 2020. Two reviewers independently performed selection, extraction, and quality appraisal (Mixed Methods Appraisal Tool). System features were compared with 18 best practices derived from Clinical Performance Feedback Intervention Theory. We then used realist concepts to generate hypotheses on mechanisms of e-A&F impact. Results are reported in accordance with the PRISMA statement. Results Our search yielded 4301 unique articles. We included 88 studies evaluating 65 e-A&F systems, spanning a diverse range of clinical areas, including medical, surgical, and general practice settings. Systems adopted a median of 8 best practices (interquartile range 6-10), with 32 systems providing near real-time feedback data and 20 systems incorporating action planning. High-confidence hypotheses suggested that effective e-A&F systems prompted specific actions, particularly enabled by timely and role-specific feedback (including patient lists and individual performance data) and embedded action plans, in order to improve system usage, care quality, and patient outcomes. Conclusions e-A&F systems continue to be developed for many clinical applications, yet several systems still lack basic features recommended by best practice, such as timely feedback and action planning. Systems should focus on actionability by providing real-time data for feedback that is specific to user roles, with embedded action plans. Protocol Registration PROSPERO CRD42016048695.
Affiliation(s)
- Jung Yin Tsang (corresponding author)
- Centre for Primary Care and Health Services Research, University of Manchester, 6th Floor Williamson Building, Oxford Road, Manchester M13 9PL, UK
- Niels Peek
- Centre for Health Informatics, Division of Informatics, Imaging and Data Science, Faculty of Biology, Medicine and Health, Manchester Academic Health Science Centre, The University of Manchester, Manchester, UK
- NIHR Greater Manchester Patient Safety Translational Research Centre (GMPSTRC), University of Manchester, Manchester, UK
- NIHR Applied Research Collaboration Greater Manchester, University of Manchester, Manchester, UK
- Iain Buchan
- Institute of Population Health, University of Liverpool, Liverpool, UK
- Sabine N van der Veer
- Centre for Health Informatics, Division of Informatics, Imaging and Data Science, Faculty of Biology, Medicine and Health, Manchester Academic Health Science Centre, The University of Manchester, Manchester, UK
- Benjamin Brown
- Centre for Health Informatics, Division of Informatics, Imaging and Data Science, Faculty of Biology, Medicine and Health, Manchester Academic Health Science Centre, The University of Manchester, Manchester, UK
- Centre for Primary Care and Health Services Research, University of Manchester, Manchester, UK
- NIHR Greater Manchester Patient Safety Translational Research Centre (GMPSTRC), University of Manchester, Manchester, UK

7
Scarpis E, Brunelli L, Tricarico P, Poletto M, Panzera A, Londero C, Castriotta L, Brusaferro S. How to assure the quality of clinical records? A 7-year experience in a large academic hospital. PLoS One 2021;16:e0261018. PMID: 34882705. PMCID: PMC8659650. DOI: 10.1371/journal.pone.0261018.
Abstract
INTRODUCTION The clinical record (CR) is the primary tool used by healthcare workers (HCWs) to record clinical information, and its completeness can help achieve safer practices. The CR is the most appropriate source for measuring and evaluating the quality of care, and achieving a safety climate requires involving a responsive healthcare workforce through peer review and feedback. This study aims to develop a peer-review tool for clinical record quality assurance, presenting seven years of experience with its evolution; secondary aims are to describe CR completeness and HCWs' diligence in recording information. METHODS To assess the completeness of CRs, a peer-review tool was developed in a large academic hospital in Northern Italy. This tool included measurable items that examined different themes, moments, and levels of the clinical process. Data were collected every three months between 2010 and 2016 by appointed and trained HCWs from 42 units; the hospital Quality Unit was responsible for processing and validating them. Variations in the proportion of CR completeness were assessed using the Cochran-Armitage test for trend. RESULTS A total of 9,408 CRs were evaluated. Overall CR completeness improved significantly, from 79.6% in 2010 to 86.5% in 2016 (p<0.001). Doctors' completeness showed a trend similar to the overall trend, while nurses improved more consistently (p<0.001). Most items exploring themes, moments, and levels registered a significant improvement in the early years, then flattened in later years. Results of the validation process were always above the cut-off of 75%. CONCLUSIONS This peer-review tool enabled the Quality Unit and hospital leadership to obtain a reliable picture of CR completeness while involving HCWs in quality evaluation. CR completeness showed an overall positive and significant trend over these seven years.
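The Cochran-Armitage trend test named in this abstract can be sketched as the standard score test for trend in proportions across ordered groups. The implementation below is a generic sketch with invented yearly counts for illustration; the paper's raw data are not reproduced here.

```python
# Cochran-Armitage test for trend in a 2 x k table (e.g., complete vs incomplete
# clinical records across ordered years). Counts below are hypothetical.
import math

def cochran_armitage(successes, totals, scores=None):
    """Two-sided Cochran-Armitage trend test.

    successes[j]: count of 'complete' records in ordered group j
    totals[j]:    total records audited in group j
    scores[j]:    ordinal score for group j (defaults to 0, 1, 2, ...)
    Returns (z, p) using the normal approximation.
    """
    k = len(totals)
    if scores is None:
        scores = list(range(k))
    N = sum(totals)
    p_hat = sum(successes) / N
    # Score statistic: observed minus expected successes, weighted by scores.
    T = sum(t * (r - p_hat * n) for t, r, n in zip(scores, successes, totals))
    # Variance of T under the null hypothesis of a common proportion.
    var = p_hat * (1 - p_hat) * (
        sum(t * t * n for t, n in zip(scores, totals))
        - sum(t * n for t, n in zip(scores, totals)) ** 2 / N
    )
    z = T / math.sqrt(var)
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p

# Hypothetical yearly counts of complete records out of 1000 audited, 2010-2016.
complete = [796, 805, 820, 835, 848, 855, 865]
audited = [1000] * 7
z, p = cochran_armitage(complete, audited)
print(f"z = {z:.2f}, two-sided p = {p:.2e}")
```

A steadily rising completeness proportion, as in the study, yields a large positive z and a small p-value, while a flat series yields z near zero.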
Affiliation(s)
- Enrico Scarpis
- Department of Medicine, University of Udine, Udine, Italy
- Laura Brunelli
- Department of Medicine, University of Udine, Udine, Italy
- Marco Poletto
- Department of Medicine, University of Udine, Udine, Italy
- Angela Panzera
- Health District of Udine, Friuli Centrale Healthcare and University Integrated Trust, ASUFC, Udine, Italy
- Carla Londero
- Accreditation, Clinical Risk Management and Performance Assessment Unit, Friuli Centrale Healthcare and University Integrated Trust, ASUFC, Udine, Italy
- Luigi Castriotta
- Hygiene and Clinical Epidemiology Institute, Friuli Centrale Healthcare and University Integrated Trust, ASUFC, Udine, Italy

8
Van Den Bulck S, Spitaels D, Vaes B, Goderis G, Hermens R, Vankrunkelsven P. The effect of electronic audits and feedback in primary care and factors that contribute to their effectiveness: a systematic review. Int J Qual Health Care 2021;32:708-720. PMID: 33057648. DOI: 10.1093/intqhc/mzaa128.
Abstract
PURPOSE The aim of this systematic review was (i) to assess whether electronic audit and feedback (A&F) is effective in primary care and (ii) to evaluate important features of feedback content and delivery in primary care, including the use of benchmarks, the frequency of feedback, the cognitive load of feedback and the evidence base underlying the feedback. DATA SOURCES The MEDLINE, Embase, CINAHL and CENTRAL databases were searched for articles published since 2010 by replicating the search strategy used in the last Cochrane review on A&F. STUDY SELECTION Two independent reviewers assessed the records for eligibility, performed the data extraction and evaluated the risk of bias. Our search resulted in 8744 records, including the 140 randomized controlled trials (RCTs) from the last Cochrane review. The full texts of 431 articles were assessed for eligibility; 29 articles were included. DATA EXTRACTION Two independent reviewers extracted standard data, data on the effectiveness and outcomes of the interventions, data on the kind of electronic feedback (static versus interactive) and data on the aforementioned feedback features. RESULTS OF DATA SYNTHESIS Twenty-two studies (76%) showed that electronic A&F was effective. All interventions targeting medication safety, preventive medicine, cholesterol management and depression showed an effect. Approximately 70% of the included studies used benchmarks and high-quality evidence in the content of the feedback. In almost half of the studies, the cognitive load of the feedback was not reported. Due to high heterogeneity in the results, no meta-analysis was performed. CONCLUSION This systematic review included 29 articles examining electronic A&F interventions in primary care; 76% of the interventions were effective. Our findings suggest electronic A&F is effective in primary care for conditions such as medication safety and preventive medicine. Benefits of electronic A&F include its scalability and its potential to be cost-effective. Benchmarks as comparators and feedback based on high-quality evidence are widely used and important features of electronic feedback in primary care. However, other important features, such as the cognitive load and the frequency of feedback, are poorly described in the design of many electronic A&F interventions, indicating that better description or implementation of these features is needed. Developing a framework or methodology for automated A&F interventions in primary care could be useful for future research.
Affiliation(s)
- Steve Van Den Bulck
- Academic Center for General Practice, Department of Public Health and Primary Care, KU Leuven, Kapucijnenvoer 33, blok J, 3000 Leuven, Belgium
- David Spitaels
- Academic Center for General Practice, Department of Public Health and Primary Care, KU Leuven, Kapucijnenvoer 33, blok J, 3000 Leuven, Belgium
- Bert Vaes
- Academic Center for General Practice, Department of Public Health and Primary Care, KU Leuven, Kapucijnenvoer 33, blok J, 3000 Leuven, Belgium
- Geert Goderis
- Academic Center for General Practice, Department of Public Health and Primary Care, KU Leuven, Kapucijnenvoer 33, blok J, 3000 Leuven, Belgium
- Rosella Hermens
- Academic Center for General Practice, Department of Public Health and Primary Care, KU Leuven, Kapucijnenvoer 33, blok J, 3000 Leuven, Belgium
- Scientific Institute for Quality of Healthcare (IQ Healthcare), Radboud Institute for Health Science (RIHS), Radboud University Medical Center, Radboud University Nijmegen, PO Box 9101, 6500 HB Nijmegen, The Netherlands
- Patrik Vankrunkelsven
- Academic Center for General Practice, Department of Public Health and Primary Care, KU Leuven, Kapucijnenvoer 33, blok J, 3000 Leuven, Belgium

9
Chen Y, Nagendran M, Kilic Y, Cavlan D, Feather A, Westwood M, Rowland E, Gutteridge C, Lambiase PD. The diagnostic certainty levels of junior clinicians: A retrospective cohort study. Health Inf Manag 2021;51:118-125. PMID: 34112021. PMCID: PMC9449434. DOI: 10.1177/18333583211019134.
Abstract
Background: Clinical decision-making is influenced by many factors, including clinicians’
perceptions of the certainty around what is the best course of action to pursue. Objective: To characterise the documentation of working diagnoses and the associated level of
real-time certainty expressed by clinicians and to gauge patient opinion about the
importance of research into clinician decision certainty. Method: This was a single-centre retrospective cohort study of non-consultant grade clinicians
and their assessments of patients admitted from the emergency department between 01
March 2019 and 31 March 2019. De-identified electronic health record proformas were
extracted that included the type of diagnosis documented and the certainty adjective
used. Patient opinion was canvassed from a focus group. Results: During the study period, 850 clerking proformas were analysed; 420 presented a single
diagnosis, while 430 presented multiple diagnoses. Of the 420 single diagnoses, 67 (16%)
were documented as either a symptom or physical sign and 16 (4%) were
laboratory-result-defined diagnoses. No uncertainty was expressed in 309 (74%) of the
diagnoses. Of 430 multiple diagnoses, uncertainty was expressed in 346 (80%) compared to
84 (20%) in which no uncertainty was expressed. The patient focus group were unanimous
in their support of this research. Conclusion: The documentation of working diagnoses is highly variable among non-consultant grade
clinicians. In nearly three quarters of assessments with single diagnoses, no element of
uncertainty was implied or quantified. More uncertainty was expressed in multiple
diagnoses than single diagnoses. Implications: Increased standardisation of documentation will help future studies to better analyse
and quantify diagnostic certainty in both single and multiple working diagnoses. This
could lead to subsequent examination of their association with important process or
clinical outcome measures.
Affiliation(s)
- Yang Chen
- University College London, UK
- The London School of Economics and Political Science, UK
- St Bartholomew's Hospital, Barts Health NHS Trust, UK
- Yakup Kilic
- St Bartholomew's Hospital, Barts Health NHS Trust, UK
- Adam Feather
- Royal London Hospital, Barts Health NHS Trust, UK
- Mark Westwood
- St Bartholomew's Hospital, Barts Health NHS Trust, UK
- Edward Rowland
- St Bartholomew's Hospital, Barts Health NHS Trust, UK
- Pier D Lambiase
- University College London, UK
- St Bartholomew's Hospital, Barts Health NHS Trust, UK

10
Becker B, Nagavally S, Wagner N, Walker R, Segon Y, Segon A. Creating a culture of quality: our experience with providing feedback to frontline hospitalists. BMJ Open Qual 2021;10:e001141. PMID: 33674345. PMCID: PMC7938999. DOI: 10.1136/bmjoq-2020-001141.
Abstract
BACKGROUND One way to provide performance feedback to hospitalists is through dashboards, which deliver data based on agreed-upon standards. Despite the growing trend of providing feedback on quality metrics, there remain limited data on the means, frequency and content of the feedback that should be provided to frontline hospitalists. OBJECTIVE The objective of our research is to report our experience with a comprehensive feedback system for frontline hospitalists and the change in our quality metrics after implementation. DESIGN, SETTING AND PARTICIPANTS This quality improvement project was conducted at a tertiary academic medical centre among a hospitalist group of 46 full-time faculty members. INTERVENTION OR EXPOSURE A monthly performance feedback report, including an individual dashboard and a peer comparison report, was distributed to provide ongoing feedback to our hospitalist faculty, complemented by coaching to incorporate process improvement tactics into providers' daily workflow. MAIN OUTCOMES AND MEASURES The main outcome was the change in quality metrics after implementation of the monthly performance feedback report. RESULTS The dashboard and rank-order list were sent to all faculty members every month. Improvement was seen in the following quality metrics: length of stay index, 30-day readmission rate, catheter-associated urinary tract infections, central line-associated bloodstream infections, the provider component of Healthcare Consumer Assessment of Healthcare Providers and Systems scores, attendance at care coordination rounds and the percentage of discharge orders placed by 10:00. CONCLUSIONS Implementation of a monthly performance feedback report for hospitalists, complemented by peer comparison and guidance on tactics to achieve these metrics, created a culture of quality and improved the quality of care delivered.
Affiliation(s)
- Brittany Becker: Medical student, Medical College of Wisconsin, Milwaukee, Wisconsin, USA
- Sneha Nagavally: Medicine, Medical College of Wisconsin, Milwaukee, Wisconsin, USA
- Nicholas Wagner: Data analytics, Froedtert Hospital, Milwaukee, Wisconsin, USA
- Rebekah Walker: Medicine, Medical College of Wisconsin, Milwaukee, Wisconsin, USA
- Yogita Segon: Medicine, Medical College of Wisconsin, Milwaukee, Wisconsin, USA
- Ankur Segon: Medicine, Medical College of Wisconsin, Milwaukee, Wisconsin, USA
11
Hartford EA, Klein EJ, Migita R, Richling S, Chen J, Rutman LE. Improving Patient Outcomes by Addressing Provider Variation in Emergency Department Asthma Care. Pediatr Qual Saf 2021; 6:e372. [PMID: 33403318 DOI: 10.1097/pq9.0000000000000372] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2020] [Accepted: 08/24/2020] [Indexed: 01/08/2023] Open
Abstract
Supplemental Digital Content is available in the text. Asthma exacerbations are frequent in the pediatric emergency department (ED) and result in significant morbidity and costs; standardized treatment improves outcomes. In this study, we aimed to use provider adherence data and the associated patient outcomes as an intervention to change behavior and improve care.
12
Abstract
BACKGROUND We sought to establish to what extent decision certainty has been measured in real time and whether high or low levels of certainty correlate with clinical outcomes. METHODS Our pre-specified study protocol is published on PROSPERO, CRD42019128112. We identified prospective studies from Medline, Embase and PsycINFO up to February 2019 that measured real-time self-rating of the certainty of a medical decision by a clinician. FINDINGS Nine studies were included, and all were generally at high risk of bias. Only one study assessed long-term clinical outcomes: patients rated with high diagnostic uncertainty for heart failure had longer length of stay, increased mortality and higher readmission rates at 1 year than those rated with diagnostic certainty. One other study demonstrated the danger of extreme diagnostic confidence: 7% of diagnoses (24/341) labelled as having either 0% or 100% diagnostic likelihood of heart failure were made in error. CONCLUSIONS The literature on real-time self-rated certainty of clinician decisions is sparse and relates only to diagnostic decisions. Further prospective research, with a view to generating hypotheses for testable interventions that can better calibrate clinician certainty with accuracy of decision making, could be valuable in reducing diagnostic error and improving outcomes.
Affiliation(s)
- Myura Nagendran: NIHR academic clinical fellow in intensive care medicine, Imperial College London, UK
- Yang Chen: NIHR academic clinical fellow in cardiology, Institute of Cardiovascular Science, University College London, UK
- Anthony C Gordon: Imperial College London, UK, and Centre for Perioperative and Critical Care Research, London, UK
13
Foster M, Presseau J, McCleary N, Carroll K, McIntyre L, Hutton B, Brehaut J. Audit and feedback to improve laboratory test and transfusion ordering in critical care: a systematic review. Implement Sci 2020; 15:46. [PMID: 32560666 PMCID: PMC7303577 DOI: 10.1186/s13012-020-00981-5] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2019] [Accepted: 03/12/2020] [Indexed: 01/28/2023] Open
Abstract
BACKGROUND Laboratory tests and transfusions are sometimes ordered inappropriately, particularly in the critical care setting, which sees frequent use of both. Audit and Feedback (A&F) is a potentially useful intervention for modifying healthcare provider behaviors, but its application to the complex, team-based environment of critical care is not well understood. We conducted a systematic review of the literature on A&F interventions for improving test or transfusion ordering in the critical care setting. METHODS Five databases, two registries, and the bibliographies of relevant articles were searched. We included critical care studies that assessed the use of A&F targeting healthcare provider behaviors, alone or in combination with other interventions to improve test and transfusion ordering, as compared to historical practice, no intervention, or another healthcare behaviour change intervention. Studies were included only if they reported laboratory test or transfusion orders, or the appropriateness of orders, as outcomes. There were no restrictions based on study design, date of publication, or follow-up time. Intervention characteristics and absolute differences in outcomes were summarized. The quality of individual studies was assessed using a modified version of the Effective Practice and Organisation of Care Cochrane Review Group's criteria. RESULTS We identified 16 studies, including 13 uncontrolled before-after studies, one randomized controlled trial, one controlled before-after study, and one controlled clinical trial (quasi-experimental). These studies described 17 interventions, mostly (88%) multifaceted interventions with an A&F component. Feedback was most often provided in a written format only (41%), more than once (53%), and most often only provided data aggregated to the group level (41%). Most studies saw a change in the hypothesized direction, but not all studies provided statistical analyses to formally test improvement. Overall study quality was low, with studies often lacking a concurrent control group. CONCLUSIONS Our review summarizes characteristics of A&F interventions implemented in the critical care context, points to some mechanisms by which A&F might be made more effective in this setting, and provides an overview of how the appropriateness of orders was reported. Our findings suggest that A&F can be effective in the context of critical care; however, further research is required to characterize approaches that optimize the effectiveness in this setting alongside more rigorous evaluation methods. TRIAL REGISTRATION PROSPERO CRD42016051941.
Affiliation(s)
- Madison Foster: School of Epidemiology and Public Health, University of Ottawa, 451 Smyth Road, Ottawa, ON K1H 8M5 Canada; Ottawa Hospital Research Institute, Clinical Epidemiology Program, The Ottawa Hospital, General Campus, 501 Smyth Road, Centre for Practice Changing Research, Box 201B, Ottawa, ON K1H 8L6 Canada
- Justin Presseau: School of Epidemiology and Public Health, University of Ottawa, 451 Smyth Road, Ottawa, ON K1H 8M5 Canada; Ottawa Hospital Research Institute, Clinical Epidemiology Program, The Ottawa Hospital, General Campus, 501 Smyth Road, Centre for Practice Changing Research, Box 201B, Ottawa, ON K1H 8L6 Canada; School of Psychology, University of Ottawa, 136 Jean-Jacques Lussier, Vanier Hall, Ottawa, ON K1N 6N5 Canada
- Nicola McCleary: School of Epidemiology and Public Health, University of Ottawa, 451 Smyth Road, Ottawa, ON K1H 8M5 Canada; Ottawa Hospital Research Institute, Clinical Epidemiology Program, The Ottawa Hospital, General Campus, 501 Smyth Road, Centre for Practice Changing Research, Box 201B, Ottawa, ON K1H 8L6 Canada
- Kelly Carroll: Ottawa Hospital Research Institute, Clinical Epidemiology Program, The Ottawa Hospital, General Campus, 501 Smyth Road, Centre for Practice Changing Research, Box 201B, Ottawa, ON K1H 8L6 Canada
- Lauralyn McIntyre: School of Epidemiology and Public Health, University of Ottawa, 451 Smyth Road, Ottawa, ON K1H 8M5 Canada; Ottawa Hospital Research Institute, Clinical Epidemiology Program, The Ottawa Hospital, General Campus, 501 Smyth Road, Centre for Practice Changing Research, Box 201B, Ottawa, ON K1H 8L6 Canada; Department of Critical Care Medicine, The Ottawa Hospital, General Campus, 501 Smyth Road, Ottawa, ON K1H 8L6 Canada
- Brian Hutton: School of Epidemiology and Public Health, University of Ottawa, 451 Smyth Road, Ottawa, ON K1H 8M5 Canada; Ottawa Hospital Research Institute, Clinical Epidemiology Program, The Ottawa Hospital, General Campus, 501 Smyth Road, Centre for Practice Changing Research, Box 201B, Ottawa, ON K1H 8L6 Canada; Ottawa Hospital Research Institute, Knowledge Synthesis Unit, The Ottawa Hospital, General Campus, 501 Smyth Road, Centre for Practice Changing Research, Box 201B, Ottawa, ON K1H 8L6 Canada
- Jamie Brehaut: School of Epidemiology and Public Health, University of Ottawa, 451 Smyth Road, Ottawa, ON K1H 8M5 Canada; Ottawa Hospital Research Institute, Clinical Epidemiology Program, The Ottawa Hospital, General Campus, 501 Smyth Road, Centre for Practice Changing Research, Box 201B, Ottawa, ON K1H 8L6 Canada