1. Li X, Evans JM. Incentivizing performance in health care: a rapid review, typology and qualitative study of unintended consequences. BMC Health Serv Res 2022; 22:690. [PMID: 35606747] [PMCID: PMC9128153] [DOI: 10.1186/s12913-022-08032-z]
Abstract
BACKGROUND Health systems are increasingly implementing policy-driven programs to incentivize performance using contracts, scorecards, rankings, rewards, and penalties. Studies of these "Performance Management" (PM) programs have identified unintended negative consequences. However, no single comprehensive typology of the negative and positive unintended consequences of PM in healthcare exists and most studies of unintended consequences were conducted in England or the United States. The aims of this study were: (1) To develop a comprehensive typology of unintended consequences of PM in healthcare, and (2) To describe multiple stakeholder perspectives of the unintended consequences of PM in cancer and renal care in Ontario, Canada. METHODS We conducted a rapid review of unintended consequences of PM in healthcare (n = 41 papers) to develop a typology of unintended consequences. We then conducted a secondary analysis of data from a qualitative study involving semi-structured interviews with 147 participants involved with or impacted by a PM system used to oversee 40 care delivery networks in Ontario, Canada. Participants included administrators and clinical leads from the networks and the government agency managing the PM system. We undertook a hybrid inductive and deductive coding approach using the typology we developed from the rapid review. RESULTS We present a comprehensive typology of 48 negative and positive unintended consequences of PM in healthcare, including five novel unintended consequences not previously identified or well-described in the literature. The typology is organized into two broad categories: unintended consequences on (1) organizations and providers and on (2) patients and patient care. The most common unintended consequences of PM identified in the literature were measure fixation, tunnel vision, and misrepresentation or gaming, while those most prominent in the qualitative data were administrative burden, insensitivity, reduced morale, and systemic dysfunction. We also found that unintended consequences of PM are often mutually reinforcing. CONCLUSIONS Our comprehensive typology provides a common language for discourse on unintended consequences and supports systematic, comparable analyses of unintended consequences across PM regimes and healthcare systems. Healthcare policymakers and managers can use the results of this study to inform the (re-)design and implementation of evidence-informed PM programs.
Affiliations
- Xinyu Li: Faculty of Health Sciences, McMaster University, Hamilton, Canada
- Jenna M Evans: DeGroote School of Business, McMaster University, 1280 Main Street West, Hamilton, ON L8S 4M4, Canada
2. Graham JMK, Ambroggio L, Leonard JE, Ziniel SI, Grubenhoff JA. Evaluation of feedback modalities and preferences regarding feedback on decision-making in a pediatric emergency department. Diagnosis (Berl) 2021; 9:216-224. [PMID: 34894116] [DOI: 10.1515/dx-2021-0122]
Abstract
OBJECTIVES To compare pediatric emergency clinicians' attitudes toward three feedback modalities and assess clinicians' case-based feedback preferences. METHODS Electronic survey sent to pediatric emergency medicine (PEM) physicians and fellows; general pediatricians; and advanced practice providers (APPs) with nine questions exploring effectiveness and emotional impact of three feedback modalities: case-based feedback, bounce-back notifications, and biannual performance reports. Additional questions used a four-point ordinal agreement response scale and assessed clinicians' attitudes toward case review notification, case-based feedback preferences, and emotional support. Survey responses were compared by feedback modality using Pearson's chi-squared. RESULTS Of 165 eligible providers, 93 (56%) responded. Respondents agreed that case-based feedback was timely (81%), actionable (75%), prompted reflection on decision-making (92%), prompted research on current clinical practice (53%), and encouraged practice change (58%). Pediatric Emergency Care Applied Research Network (PECARN) performance reports scored the lowest on all metrics except positive feedback. No more than 40% of providers indicated that any feedback modality provided emotional support. Regarding case-based feedback, 88% of respondents desired email notification before case review and 88% desired feedback after case review. Clinicians prefer receiving feedback from someone with similar or more experience/training. Clinicians receiving feedback desire succinctness, supporting evidence, consistency, and sensitive delivery. CONCLUSIONS Case-based feedback scored highest of the three modalities and is perceived to be the most likely to improve decision-making and promote practice change. Most providers did not perceive emotional support from any feedback modality. Emotional safety warrants purposeful attention in feedback delivery. Critical components of case-based feedback include succinctness, supporting evidence, consistency, and sensitive delivery.
Affiliations
- Jessica M K Graham: Pediatric Emergency Medicine, Children's Hospital of Colorado, Aurora, CO, USA
- Lilliam Ambroggio: Pediatric Emergency Medicine, Children's Hospital of Colorado, Aurora, CO, USA; Pediatric Hospital Medicine, Children's Hospital of Colorado, Aurora, CO, USA
- Jan E Leonard: Pediatric Emergency Medicine, Children's Hospital of Colorado, Aurora, CO, USA
- Sonja I Ziniel: Pediatric Hospital Medicine, Children's Hospital of Colorado, Aurora, CO, USA; Department of Pediatrics, University of Colorado School of Medicine, Aurora, CO, USA
- Joseph A Grubenhoff: Pediatric Emergency Medicine, Children's Hospital of Colorado, Aurora, CO, USA
3. Martin B, Jones J, Miller M, Johnson-Koenke R. Health Care Professionals' Perceptions of Pay-for-Performance in Practice: A Qualitative Metasynthesis. Inquiry 2020; 57:46958020917491. [PMID: 32448014] [PMCID: PMC7249558] [DOI: 10.1177/0046958020917491]
Abstract
Incentive-based pay-for-performance (P4P) models have been introduced during the last 2 decades as a mechanism to improve the delivery of evidence-based care that ensures clinical quality and improves health outcomes. Evidence that P4P has a positive effect on health outcomes is mixed, and researchers cite lack of engagement from health care professionals as a limiting factor. This qualitative metasynthesis of existing qualitative research was conducted to integrate health care professionals' perceptions of P4P in clinical practice. Four themes emerged during the research process: positive perceptions of the value of performance measurement and associated financial incentives; negative perceptions of performance measurement and associated financial incentives; perceptions of how P4P programs influence the quality and appropriateness of care; and perceptions of the influence of P4P programs on professional roles and workplace dynamics. Identifying factors that influence health care professionals' perceptions of this type of value-based payment model will guide future research.
Affiliations
- Matthew Miller: University of Colorado, Aurora, USA; VA Eastern Colorado Geriatric Research Education and Clinical Center, Aurora, USA
- Rachel Johnson-Koenke: University of Colorado, Aurora, USA; Rocky Mountain Regional VA Medical Center, Denver, CO, USA
4. Brown B, Gude WT, Blakeman T, van der Veer SN, Ivers N, Francis JJ, Lorencatto F, Presseau J, Peek N, Daker-White G. Clinical Performance Feedback Intervention Theory (CP-FIT): a new theory for designing, implementing, and evaluating feedback in health care based on a systematic review and meta-synthesis of qualitative research. Implement Sci 2019; 14:40. [PMID: 31027495] [PMCID: PMC6486695] [DOI: 10.1186/s13012-019-0883-5]
Abstract
BACKGROUND Providing health professionals with quantitative summaries of their clinical performance when treating specific groups of patients ("feedback") is a widely used quality improvement strategy, yet systematic reviews show it has varying success. Theory could help explain what factors influence feedback success, and guide approaches to enhance effectiveness. However, existing theories lack comprehensiveness and specificity to health care. To address this problem, we conducted the first systematic review and synthesis of qualitative evaluations of feedback interventions, using findings to develop a comprehensive new health care-specific feedback theory. METHODS We searched MEDLINE, EMBASE, CINAHL, Web of Science, and Google Scholar from inception until 2016 inclusive. Data were synthesised by coding individual papers, building on pre-existing theories to formulate hypotheses, iteratively testing and improving hypotheses, assessing confidence in hypotheses using the GRADE-CERQual method, and summarising high-confidence hypotheses into a set of propositions. RESULTS We synthesised 65 papers evaluating 73 feedback interventions from countries spanning five continents. From our synthesis we developed Clinical Performance Feedback Intervention Theory (CP-FIT), which builds on 30 pre-existing theories and has 42 high-confidence hypotheses. CP-FIT states that effective feedback works in a cycle of sequential processes; it becomes less effective if any individual process fails, thus halting progress round the cycle. Feedback's success is influenced by several factors operating via a set of common explanatory mechanisms: the feedback method used, health professional receiving feedback, and context in which feedback takes place. CP-FIT summarises these effects in three propositions: (1) health care professionals and organisations have a finite capacity to engage with feedback, (2) these parties have strong beliefs regarding how patient care should be provided that influence their interactions with feedback, and (3) feedback that directly supports clinical behaviours is most effective. CONCLUSIONS This is the first qualitative meta-synthesis of feedback interventions, and the first comprehensive theory of feedback designed specifically for health care. Our findings contribute new knowledge about how feedback works and factors that influence its effectiveness. Internationally, practitioners, researchers, and policy-makers can use CP-FIT to design, implement, and evaluate feedback. Doing so could improve care for large numbers of patients, reduce opportunity costs, and improve returns on financial investments. TRIAL REGISTRATION PROSPERO, CRD42015017541.
Affiliations
- Benjamin Brown: Centre for Health Informatics, University of Manchester, Manchester, UK; Centre for Primary Care, University of Manchester, Manchester, UK
- Wouter T Gude: Department of Medical Informatics, Amsterdam University Medical Centres, Amsterdam, The Netherlands
- Thomas Blakeman: Centre for Primary Care, University of Manchester, Manchester, UK
- Noah Ivers: Department of Family and Community Medicine, University of Toronto, Toronto, Canada
- Jill J Francis: Centre for Health Services Research, City University of London, London, UK; Centre for Implementation Research, Ottawa Hospital Research Institute, Ottawa, Canada
- Justin Presseau: Centre for Implementation Research, Ottawa Hospital Research Institute, Ottawa, Canada; School of Epidemiology & Public Health, University of Ottawa, Ottawa, Canada; School of Psychology, University of Ottawa, Ottawa, Canada
- Niels Peek: Centre for Health Informatics, University of Manchester, Manchester, UK
5. Gude WT, Brown B, van der Veer SN, Colquhoun HL, Ivers NM, Brehaut JC, Landis-Lewis Z, Armitage CJ, de Keizer NF, Peek N. Clinical performance comparators in audit and feedback: a review of theory and evidence. Implement Sci 2019; 14:39. [PMID: 31014352] [PMCID: PMC6480497] [DOI: 10.1186/s13012-019-0887-1]
Abstract
BACKGROUND Audit and feedback (A&F) is a common quality improvement strategy with highly variable effects on patient care. It is unclear how A&F effectiveness can be maximised. Since the core mechanism of action of A&F depends on drawing attention to a discrepancy between actual and desired performance, we aimed to understand current and best practices in the choice of performance comparator. METHODS We described current choices of performance comparator through a secondary review of randomised trials of A&F interventions, and identified associated mechanisms with implications for effective A&F by reviewing theories and empirical studies from a recent qualitative evidence synthesis. RESULTS Across 146 trials, feedback recipients' performance was most frequently compared against the performance of others (benchmarks; 60.3%). Other comparators included recipients' own performance over time (trends; 9.6%) and target standards (explicit targets; 11.0%), and 13% of trials used a combination of these options. In studies featuring benchmarks, 42% compared against mean performance. Eight (5.5%) trials provided a rationale for using a specific comparator. We distilled mechanisms of each comparator from 12 behavioural theories, 5 randomised trials, and 42 qualitative A&F studies. CONCLUSION Clinical performance comparators in published literature were poorly informed by theory and did not explicitly account for mechanisms reported in qualitative studies. Based on our review, we argue that there is considerable opportunity to improve the design of performance comparators by (1) providing tailored comparisons rather than benchmarking everyone against the mean, (2) limiting the number of comparators displayed while providing more comparative information upon request to balance the feedback's credibility and actionability, (3) providing performance trends but not trends alone, and (4) encouraging feedback recipients to set personal, explicit targets guided by relevant information.
Affiliations
- Wouter T Gude: Department of Medical Informatics, Amsterdam UMC, Amsterdam Public Health Research Institute, University of Amsterdam, Amsterdam, The Netherlands; NIHR Greater Manchester Primary Care Patient Safety Translational Research Centre, Manchester Academic Health Science Centre, The University of Manchester, Manchester, UK
- Benjamin Brown: Centre for Health Informatics, Division of Informatics, Imaging and Data Sciences, Faculty of Biology, Medicine and Health, Manchester Academic Health Science Centre, The University of Manchester, Manchester, UK
- Sabine N van der Veer: NIHR Greater Manchester Primary Care Patient Safety Translational Research Centre, Manchester Academic Health Science Centre, The University of Manchester, Manchester, UK; Centre for Health Informatics, Division of Informatics, Imaging and Data Sciences, Faculty of Biology, Medicine and Health, Manchester Academic Health Science Centre, The University of Manchester, Manchester, UK
- Heather L Colquhoun: Occupational Science and Occupational Therapy, University of Toronto, Toronto, Ontario, Canada
- Noah M Ivers: Family and Community Medicine, Women's College Hospital, University of Toronto, Toronto, Ontario, Canada
- Jamie C Brehaut: Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Ontario, Canada; School of Epidemiology and Public Health, University of Ottawa, Ottawa, Ontario, Canada
- Zach Landis-Lewis: Center for Health Informatics for the Underserved, Department of Biomedical Informatics, University of Pittsburgh, Pittsburgh, PA, USA
- Christopher J Armitage: NIHR Greater Manchester Primary Care Patient Safety Translational Research Centre, Manchester Academic Health Science Centre, The University of Manchester, Manchester, UK; Manchester Centre for Health Psychology, Division of Psychology and Mental Health, The University of Manchester, Manchester, UK; NIHR Manchester Biomedical Research Centre, Manchester Academic Health Science Centre, The University of Manchester, Manchester, UK
- Nicolette F de Keizer: Department of Medical Informatics, Amsterdam UMC, Amsterdam Public Health Research Institute, University of Amsterdam, Amsterdam, The Netherlands
- Niels Peek: NIHR Greater Manchester Primary Care Patient Safety Translational Research Centre, Manchester Academic Health Science Centre, The University of Manchester, Manchester, UK; Centre for Health Informatics, Division of Informatics, Imaging and Data Sciences, Faculty of Biology, Medicine and Health, Manchester Academic Health Science Centre, The University of Manchester, Manchester, UK
6. Kondo KK, Wyse J, Mendelson A, Beard G, Freeman M, Low A, Kansagara D. Pay-for-Performance and Veteran Care in the VHA and the Community: a Systematic Review. J Gen Intern Med 2018; 33:1155-1166. [PMID: 29700789] [PMCID: PMC6025676] [DOI: 10.1007/s11606-018-4444-4]
Abstract
BACKGROUND Although pay-for-performance (P4P) strategies have been used by the Veterans Health Administration (VHA) for over a decade, the long-term benefits of P4P are unclear. The use of P4P is further complicated by the increased use of non-VHA healthcare providers as part of the Veterans Choice Program. We conducted a systematic review and key informant interviews to better understand the effectiveness and potential unintended consequences of P4P, as well as the implementation factors and design features important in both VHA and non-VHA/community settings. METHODS We searched PubMed, PsycINFO, and CINAHL through March 2017 and reviewed reference lists. We included trials and observational studies of P4P targeting Veteran health. Two investigators abstracted data and assessed study quality. We interviewed VHA stakeholders to gain further insight. RESULTS The literature search yielded 1031 titles and abstracts, of which 30 studies met pre-specified inclusion criteria. Twenty-five examined P4P in VHA settings and 5 in community settings. There was no strong evidence supporting the effectiveness of P4P in VHA settings. Interviews with 17 key informants were consistent with studies that identified the potential for overtreatment associated with performance metrics in the VHA. Key informants' views on P4P in community settings included the need to develop relationships with providers and health systems with records of strong performance, to improve coordination by targeting documentation and data sharing processes, and to troubleshoot the limited impact of P4P among practices where Veterans make up a small fraction of the patient population. DISCUSSION The evidence to support the effectiveness of P4P on Veteran health is limited. Key informants recognize the potential for unintended consequences, such as overtreatment in VHA settings, and suggest that implementation of P4P in the community focus on relationship building and target areas such as documentation and coordination of care.
Affiliations
- Karli K Kondo: Portland VA Health Care System, Evidence-based Synthesis Program, Portland, OR, USA; Oregon Health and Science University, Portland, OR, USA
- Jessica Wyse: Portland VA Health Care System, Evidence-based Synthesis Program, Portland, OR, USA
- Gabriella Beard: Portland VA Health Care System, Evidence-based Synthesis Program, Portland, OR, USA
- Michele Freeman: Portland VA Health Care System, Evidence-based Synthesis Program, Portland, OR, USA
- Allison Low: Portland VA Health Care System, Evidence-based Synthesis Program, Portland, OR, USA
- Devan Kansagara: Portland VA Health Care System, Evidence-based Synthesis Program, Portland, OR, USA; Oregon Health and Science University, Portland, OR, USA
7. An Assessment of the Quality Oncology Practice Initiative: Lessons Learned From a Detailed Assessment of a Well-Established Profession-Based Performance Measurement Program. J Healthc Qual 2017; 39:e49-e58. [DOI: 10.1097/jhq.0000000000000054]
8. Phillips JL, Heneka N, Hickman L, Lam L, Shaw T. Can a Complex Online Intervention Improve Cancer Nurses' Pain Screening and Assessment Practices? Results from a Multicenter, Pre-post Test Pilot Study. Pain Manag Nurs 2017; 18:75-89. [DOI: 10.1016/j.pmn.2017.01.003]
9. Blok AC, May CN, Sadasivam RS, Houston TK. Virtual Patient Technology: Engaging Primary Care in Quality Improvement Innovations. JMIR Med Educ 2017; 3:e3. [PMID: 28202429] [PMCID: PMC5332834] [DOI: 10.2196/mededu.7042]
Abstract
BACKGROUND Engaging health care staff in new quality improvement programs is challenging. OBJECTIVE We developed 2 virtual patient (VP) avatars in the context of a clinic-level quality improvement program. We sought to determine differences in preferences for VPs and the perceived influence of interacting with the VP on clinical staff engagement with the quality improvement program. METHODS Using a participatory design approach, we developed an older male smoker VP and a younger female smoker VP. The older male smoker was described as a patient with cardiovascular disease and was ethnically ambiguous. The female patient was younger and was worried about the impact of smoking on her pregnancy. Clinical staff were allowed to choose the VP they preferred, and the more they engaged with the VP, the more likely the VP was to quit smoking and become healthier. We deployed the VP within the context of a quality improvement program designed to encourage clinical staff to refer their patients who smoke to a patient-centered Web-assisted tobacco intervention. To evaluate the VPs, we conducted quantitative analyses using multivariate models of provider and practice characteristics and VP characteristic preference, and analyzed a brief survey of positive deviants (clinical staff in practices with high rates of encouraging patients to use the quit-smoking innovation). RESULTS A total of 146 clinical staff from 76 primary care practices interacted with the VPs. Clinic staff included medical providers (35/146, 24.0%), nurse professionals (19/146, 13.0%), primary care technicians (5/146, 3.4%), managerial staff (67/146, 45.9%), and receptionists (20/146, 13.7%). Medical staff were mostly male, and other roles were mostly female. Medical providers (OR 0.031; CI 0.003-0.281; P=.002) and younger staff (OR 0.411; CI 0.177-0.952; P=.038) were less likely to choose the younger, female VP when controlling for all other characteristics. VP preference did not influence online patient referrals by staff. In high-performing practices that referred 20 or more smokers to the ePortal (13/76), the majority of clinic staff were motivated by or liked the virtual patient (20/26, 77%). CONCLUSIONS Medical providers are more likely motivated by VPs that are similar to their patient population, while nurses and other staff may prefer avatars that are more similar to them.
Affiliations
- Amanda C Blok: Quantitative Health Sciences, University of Massachusetts Medical School, Worcester, MA, United States; Graduate School of Nursing, University of Massachusetts Medical School, Worcester, MA, United States
- Christine N May: Quantitative Health Sciences, University of Massachusetts Medical School, Worcester, MA, United States; Preventative and Behavioral Medicine, University of Massachusetts Medical School, Worcester, MA, United States
- Rajani S Sadasivam: Quantitative Health Sciences, University of Massachusetts Medical School, Worcester, MA, United States
- Thomas K Houston: Quantitative Health Sciences, University of Massachusetts Medical School, Worcester, MA, United States; Center for Healthcare Organization and Implementation Research, Bedford Veterans Affairs Medical Center, Bedford, MA, United States
10. Damschroder LJ, Robinson CH, Francis J, Bentley DR, Krein SL, Rosland AM, Hofer TP, Kerr EA. Effects of performance measure implementation on clinical manager and provider motivation. J Gen Intern Med 2014; 29 Suppl 4:877-884. [PMID: 25234554] [PMCID: PMC4239289] [DOI: 10.1007/s11606-014-3020-9]
Abstract
BACKGROUND Clinical performance measurement has been a key element of efforts to transform the Veterans Health Administration (VHA). However, there are a number of signs that current performance measurement systems used within and outside the VHA may be reaching the point of maximum benefit to care and, in some settings, may be resulting in negative consequences to care, including overtreatment and diminished attention to patient needs and preferences. Our research group has been involved in a long-standing partnership with the office responsible for clinical performance measurement in the VHA to understand and develop potential strategies to mitigate the unintended consequences of measurement. OBJECTIVE Our aim was to understand how the implementation of diabetes performance measures (PMs) influences management actions and day-to-day clinical practice. DESIGN This is a mixed methods study based on quantitative administrative data to select study facilities and qualitative data from semi-structured interviews. PARTICIPANTS Sixty-two network-level and facility-level executives, managers, front-line providers and staff participated in the study. APPROACH Qualitative content analyses were guided by a team-based consensus approach using verbatim interview transcripts. A published interpretive motivation theory framework is used to describe potential contributions of local implementation strategies to unintended consequences of PMs. KEY RESULTS Implementation strategies used by management affect providers' response to PMs, which in turn potentially undermines provision of high-quality patient-centered care. These include: 1) feedback reports to providers that are dissociated from a realistic capability to address performance gaps; 2) evaluative criteria set by managers that are at odds with patient-centered care; and 3) pressure created by managers' narrow focus on gaps in PMs that is viewed as more punitive than motivating. CONCLUSIONS Next steps include working with VHA leaders to develop and test implementation approaches to help ensure that the next generation of PMs motivate truly patient-centered care and are clinically meaningful.