1. Dhyppolito IM, Nadanovsky P, Cruz LR, de Oliveira BH, Dos Santos APP. Economic evaluation of fluoride varnish application in preschoolers: A systematic review. Int J Paediatr Dent 2023; 33:431-449. [PMID: 36695007] [DOI: 10.1111/ipd.13049]
Abstract
BACKGROUND Fluoride varnish (FV) is a convenient way of professionally applying fluoride in preschoolers. However, its modest anticaries effect highlights the need for economic evaluations. AIM To assess economic evaluations reporting applications of FV to reduce caries incidence in preschoolers. DESIGN We included full economic evaluations with preschool participants, in which the intervention was FV and the outcome was related to dentin caries. We searched CENTRAL; MEDLINE via PubMed; Web of Science; EMBASE; Scopus; LILACS; BBO; BVS Economia em Saúde; OpenGrey; and EconLit. Clinical trial registers, theses and dissertations, and meeting abstracts were hand searched, as well as 11 dental journals. Risk of bias in the included studies was assessed using the Philips' and Drummond's (full and simplified) tools. RESULTS Titles and abstracts of 2871 articles were evaluated, and 200 were read in full. Eight cost-effectiveness studies were included: five modeling and three within-trial evaluations. None of the studies gave sufficient information to allow a thorough assessment using the bias tools. We did not combine the results of the studies due to the great heterogeneity among them. Four studies reported that FV in preschool children was a cost-effective measure, but in one of these studies sealants and fluoride toothpaste were more cost-effective than the varnish, and three studies used limited data that compromised the generalizability of their results. The other four studies showed a large increase in costs due to the application of varnish and/or low cost-effectiveness. CONCLUSION We did not find convincing overall evidence that applying FV in preschoolers is a cost-effective anticaries measure. The protocol of this systematic review is available at Open Science Framework (https://osf.io/xw5va/).
Affiliation(s)
- Izabel Monteiro Dhyppolito
- Department of Epidemiology, Institute of Social Medicine, Rio de Janeiro State University, Rio de Janeiro, Brazil
- Department of Community and Preventive Dentistry, Faculty of Dentistry, Rio de Janeiro State University, Rio de Janeiro, Brazil
- Paulo Nadanovsky
- Department of Epidemiology, Institute of Social Medicine, Rio de Janeiro State University, Rio de Janeiro, Brazil
- Department of Epidemiology, National School of Public Health, Oswaldo Cruz Foundation, Rio de Janeiro, Brazil
- Laís Rueda Cruz
- Department of Community and Preventive Dentistry, Faculty of Dentistry, Rio de Janeiro State University, Rio de Janeiro, Brazil
- Branca Heloisa de Oliveira
- Department of Community and Preventive Dentistry, Faculty of Dentistry, Rio de Janeiro State University, Rio de Janeiro, Brazil
- Ana Paula Pires Dos Santos
- Department of Community and Preventive Dentistry, Faculty of Dentistry, Rio de Janeiro State University, Rio de Janeiro, Brazil
2. Alarid-Escudero F, Krijkamp E, Enns EA, Yang A, Hunink MM, Pechlivanoglou P, Jalal H. An Introductory Tutorial on Cohort State-Transition Models in R Using a Cost-Effectiveness Analysis Example. Med Decis Making 2023; 43:3-20. [PMID: 35770931] [PMCID: PMC9742144] [DOI: 10.1177/0272989X221103163]
Abstract
Decision models can combine information from different sources to simulate the long-term consequences of alternative strategies in the presence of uncertainty. A cohort state-transition model (cSTM) is a decision model commonly used in medical decision making to simulate the transitions of a hypothetical cohort among various health states over time. This tutorial focuses on time-independent cSTM, in which transition probabilities among health states remain constant over time. We implement time-independent cSTM in R, an open-source mathematical and statistical programming language. We illustrate time-independent cSTMs using a previously published decision model, calculate costs and effectiveness outcomes, and conduct a cost-effectiveness analysis of multiple strategies, including a probabilistic sensitivity analysis. We provide open-source code in R to facilitate wider adoption. In a second, more advanced tutorial, we illustrate time-dependent cSTMs.
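The mechanics this tutorial describes (a cohort distributed over health states, pushed through a constant transition matrix each cycle, with per-state costs and utilities accumulated along the way) can be sketched compactly. The sketch below is in Python rather than the paper's R, and every number in it (states, probabilities, costs, utilities) is invented for illustration:

```python
import numpy as np

def run_cstm(P, state_costs, state_utils, n_cycles, c0):
    """Run a time-independent cohort state-transition model and return
    total (undiscounted) expected cost and QALYs per person."""
    trace = np.zeros((n_cycles + 1, len(c0)))
    trace[0] = c0
    for t in range(n_cycles):
        trace[t + 1] = trace[t] @ P  # push the cohort through one cycle
    return (trace[1:] @ state_costs).sum(), (trace[1:] @ state_utils).sum()

# Illustrative 3-state model (Healthy, Sick, Dead); all numbers are invented.
P_soc = np.array([[0.85, 0.10, 0.05],   # standard of care
                  [0.00, 0.80, 0.20],
                  [0.00, 0.00, 1.00]])
P_trt = np.array([[0.90, 0.06, 0.04],   # hypothetical new treatment
                  [0.00, 0.85, 0.15],
                  [0.00, 0.00, 1.00]])
costs_soc = np.array([100.0, 2000.0, 0.0])
costs_trt = np.array([1100.0, 3000.0, 0.0])  # adds a drug cost while alive
utils = np.array([1.0, 0.6, 0.0])            # per-cycle utility weights

c0 = np.array([1.0, 0.0, 0.0])               # everyone starts Healthy
c_soc, q_soc = run_cstm(P_soc, costs_soc, utils, 40, c0)
c_trt, q_trt = run_cstm(P_trt, costs_trt, utils, 40, c0)
icer = (c_trt - c_soc) / (q_trt - q_soc)     # cost per QALY gained
```

The paper's R implementation adds discounting, cycle corrections, and probabilistic sensitivity analysis; none of that is attempted here.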
Affiliation(s)
- Fernando Alarid-Escudero
- Division of Public Administration, Center for Research and Teaching in Economics (CIDE), Aguascalientes, Aguascalientes, Mexico
- Eline Krijkamp
- Department of Epidemiology and Department of Radiology, Erasmus University Medical Center, Rotterdam, The Netherlands
- Eva A. Enns
- Division of Health Policy and Management, University of Minnesota School of Public Health, Minneapolis, MN, USA
- Alan Yang
- The Hospital for Sick Children, Toronto, Ontario, Canada
- M.G. Myriam Hunink
- Department of Epidemiology and Department of Radiology, Erasmus University Medical Center, Rotterdam, The Netherlands; Center for Health Decision Sciences, Harvard T.H. Chan School of Public Health, Boston, USA
- Petros Pechlivanoglou
- The Hospital for Sick Children, Toronto and University of Toronto, Toronto, Ontario, Canada
- Hawre Jalal
- School of Epidemiology and Public Health, Faculty of Medicine, University of Ottawa, Ottawa, ON, Canada
3. Mauskopf J. Multivariable and Structural Uncertainty Analyses for Cost-Effectiveness Estimates: Back to the Future. Value Health 2019; 22:570-574. [PMID: 31104736] [DOI: 10.1016/j.jval.2018.11.013]
Abstract
BACKGROUND In this commentary, celebrating the 20th anniversary of the journal Value in Health, I present a brief overview and illustration of the evolution over the past 20 years of the methodological literature providing guidelines for multivariable and structural uncertainty analysis for cost-effectiveness estimates. METHODS To illustrate the impact of the guidelines for uncertainty analyses, I show how the inclusion of multivariable and structural uncertainty analyses in cost-effectiveness analyses published in Value in Health changed over the past 20 years using publications from 1999/2000, 2007 and 2017. RESULTS The commentary is organized in three sections: past, focusing on the development and use of methods for multivariable uncertainty analysis; present, focusing on the growing awareness of the need for structural uncertainty analysis, suggested frameworks for structural uncertainty analysis and how it is currently implemented; and future, considering different methods for combining multivariable and structural uncertainty analyses over the next decades. CONCLUSIONS I conclude by suggesting how the continued evolution of uncertainty analyses in published studies and health technology assessment submissions can best take into account an important goal of cost-effectiveness analyses: to provide useful information to decision makers.
4. Tancredi A. Approximate Bayesian inference for discretely observed continuous-time multi-state models. Biometrics 2019; 75:966-977. [PMID: 30648730] [DOI: 10.1111/biom.13019]
Abstract
Inference for continuous time multi-state models presents considerable computational difficulties when the process is only observed at discrete time points with no additional information about the state transitions. In fact, for a general multi-state Markov model, evaluation of the likelihood function is possible only via intensive numerical approximations. Moreover, in real applications, transitions between states may depend on the time since entry into the current state, and semi-Markov models, where the likelihood function is not available in closed form, should be fitted to the data. Approximate Bayesian Computation (ABC) methods, which make use only of comparisons between simulated and observed summary statistics, represent a solution to intractable likelihood problems and provide alternative algorithms when the likelihood calculation is computationally too costly. In this article we investigate the potential of ABC techniques for multi-state models, both to obtain the posterior distributions of the model parameters and to compare Markov and semi-Markov models. In addition, we also exploit ABC methods to estimate and compare hidden Markov and semi-Markov models when observed states are subject to classification errors. We illustrate the performance of the ABC methodology both with simulated data and with a real data example.
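The ABC logic is simple to illustrate. The rejection sampler below is a minimal sketch, not taken from the paper: it targets the transition probabilities of a discrete-time two-state chain observed as a panel, drawing parameters from uniform priors, simulating data, and keeping draws whose summary statistic (occupancy of state 1) lands close to the observed one:

```python
import random

def summary(p01, p10, n_steps, n_subjects, rng):
    """Simulate two-state chains observed at each step and return the
    fraction of observations spent in state 1 (the summary statistic)."""
    count = 0
    for _ in range(n_subjects):
        s = 0
        for _ in range(n_steps):
            if s == 0:
                s = 1 if rng.random() < p01 else 0
            else:
                s = 0 if rng.random() < p10 else 1
            count += s
    return count / (n_steps * n_subjects)

rng = random.Random(1)
s_obs = summary(0.3, 0.5, 20, 50, rng)  # "observed" data, known truth

# ABC rejection: uniform priors on both transition probabilities; keep
# draws whose simulated summary lies within a tolerance of the observed one.
accepted = [(p01, p10)
            for p01, p10 in ((rng.random(), rng.random()) for _ in range(2000))
            if abs(summary(p01, p10, 20, 50, rng) - s_obs) < 0.02]
post_mean_p01 = sum(a for a, _ in accepted) / len(accepted)
```

With occupancy as the only summary, mostly the ratio of the two rates is learned; richer summaries (sojourn times, transition counts) would be needed to separate them, which is exactly why summary choice matters in ABC.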
Affiliation(s)
- Andrea Tancredi
- Department of Methods and Models for Economics Territory and Finance, Sapienza University of Rome, Via del Castro Laurenziano 9, 00161, Rome, Italy
5. Crossan C, Lord J, Ryan R, Nherera L, Marshall T. Cost effectiveness of case-finding strategies for primary prevention of cardiovascular disease: a modelling study. Br J Gen Pract 2017; 67:e67-e77. [PMID: 27821671] [PMCID: PMC5198616] [DOI: 10.3399/bjgp16X687973]
Abstract
BACKGROUND Policies of active case finding for cardiovascular disease (CVD) prevention in healthy adults are common, but economic evaluation has not investigated targeting such strategies at those who are most likely to benefit. AIM To assess the cost effectiveness of targeted case finding for CVD prevention. DESIGN AND SETTING Cost-effectiveness modelling in an English primary care population. METHOD A cohort of 10 000 individuals aged 30-74 years and without existing CVD or diabetes was sampled from The Health Improvement Network database, a large primary care database. A discrete-event simulation was used to model the process of inviting people for assessment, assessing cardiovascular risk, and initiation and persistence with drug treatment. Risk factors and drug cessation rates were obtained from primary care data. Published sources provided estimates of uptake of assessment, treatment initiation, and treatment effects. The researchers determined the lifetime costs and quality-adjusted life years (QALYs) with opportunistic case finding, and strategies prioritising and targeting patients by age or prior estimate of cardiovascular risk. This study reports on the optimum strategy if a QALY is valued at £20 000. RESULTS Compared with no case finding, inviting all adults aged 30-74 years in a population of 10 000 yields 30.32 QALYs at a total cost of £705 732. The optimum strategy is to rank patients by prior risk estimate and invite 8% of those who are assessed as being at highest risk (those at ≥12.76% predicted 10-year CVD risk), yielding 17.53 QALYs at a cost of £162 280. There is an 89.4% probability that the optimum strategy is to invite <35% of patients for assessment. CONCLUSION Across all age ranges, targeted case finding using a prior estimate of CVD risk is more efficient than universal case finding in healthy adults.
Affiliation(s)
- Catriona Crossan
- Health Economics Research Group, Brunel University London, Uxbridge
- Joanne Lord
- Southampton Health Technology Assessments Centre, University of Southampton, Southampton
- Ronan Ryan
- Primary Care Clinical Sciences, School of Health and Population Sciences, University of Birmingham, Birmingham
- Tom Marshall
- School of Health and Population Sciences, University of Birmingham, Birmingham
6. Fenwick E, Palmer S, Claxton K, Sculpher M, Abrams K, Sutton A. An Iterative Bayesian Approach to Health Technology Assessment: Application to a Policy of Preoperative Optimization for Patients Undergoing Major Elective Surgery. Med Decis Making 2006; 26:480-96. [PMID: 16997926] [DOI: 10.1177/0272989X06290493]
Abstract
Purpose. This article presents an iterative framework for managing the dynamic process of health technology assessment. The framework uses Bayesian statistical decision theory and value of information (VOI) analysis to inform decision making regarding appropriate patient management and to direct future research effort over the lifetime of a technology. Within the article, the framework is applied to a policy decision regarding preoperative patient management before major elective surgery, for which trial data are available. Method. The evidence available prior to the trial is used to determine the appropriate method of patient management and to ascertain whether, at the time of commissioning, the trial was potentially worthwhile. The prior information is then updated with the trial data via a Bayesian analysis using informative priors. This post-trial information set is then used to reassess the appropriate method for patient management and to determine whether there is a requirement for any further research. Results. Prior to the trial, preoperative optimization with dopexamine is identified as the appropriate method of patient management. The results of the VOI analysis suggest that a short-term trial was potentially worthwhile (population expected value of perfect information [EVPI] = £48 million). Following the trial, the uncertainty surrounding the choice of appropriate patient management and the potential worth of further research had increased (population EVPI = £67 million). Conclusions. The article demonstrates the value and practicality of applying the iterative framework to the dynamic process of health technology assessment. It is only by formally incorporating all of the information available to decision makers, through informed priors, that the appropriate decisions can be made.
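The population EVPI figures quoted above can be reproduced in spirit with a small Monte Carlo sketch: EVPI is the gap between the expected net benefit of deciding with perfect information (picking the best strategy in each simulated world) and deciding now (picking the strategy that is best on average). All distributions, the threshold, and the population size below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 100_000, 20_000            # simulations; value of a QALY (assumed)

# Uncertain per-patient QALYs and costs of two strategies (invented priors)
qalys = np.column_stack([rng.normal(10.0, 0.5, n), rng.normal(10.3, 0.5, n)])
costs = np.column_stack([rng.normal(4_000, 500, n), rng.normal(9_000, 800, n)])
nb = lam * qalys - costs            # net monetary benefit, per draw x strategy

ev_current = nb.mean(axis=0).max()  # decide now: best strategy on average
ev_perfect = nb.max(axis=1).mean()  # decide with perfect information
evpi = ev_perfect - ev_current      # per-patient EVPI, always >= 0
pop_evpi = evpi * 100_000           # times an assumed effective population
```

Because the mean of a maximum can never fall below the maximum of the means, EVPI is nonnegative by construction; it bounds the value of any further research.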
Affiliation(s)
- Elisabeth Fenwick
- Department of Economics and Related Studies, University of York, York, United Kingdom.
7. Johnson-Masotti AP, Laud PW, Hoffmann RG, Hayat MJ, Pinkerton SD. A Bayesian Approach to Net Health Benefits: An Illustration and Application to Modeling HIV Prevention. Med Decis Making 2004; 24:634-53. [PMID: 15534344] [DOI: 10.1177/0272989X04271040]
Abstract
Purpose. To conduct a cost-effectiveness analysis of HIV prevention when costs and effects cannot be measured directly. To quantify the total estimation uncertainty due to sampling variability as well as inexact knowledge of HIV transmission parameters. Methods. The authors focus on estimating the incremental net health benefit (INHB) in a randomized trial of HIV prevention with intervention and control conditions. Using a Bernoulli model of HIV transmission, changes in the participants' risk behaviors are converted into the number of HIV infections averted. A sampling model is used to account for variation in the behavior measurements. Bayes's theorem and Monte Carlo methods are used to attain the stated objectives. Results. The authors obtained a positive mean INHB of 0.0008, indicating that advocacy training is just slightly favored over the control condition for men, assuming a $50,000 per quality-adjusted life year (QALY) threshold. To be confident of a positive INHB, the decision maker would need to spend more than $100,000 per QALY.
Affiliation(s)
- Ana P Johnson-Masotti
- Clinical Epidemiology and Biostatistics Department, McMaster University, Hamilton, Ontario, Canada.
8. Welton NJ, Ades AE. Estimation of Markov Chain Transition Probabilities and Rates from Fully and Partially Observed Data: Uncertainty Propagation, Evidence Synthesis, and Model Calibration. Med Decis Making 2005; 25:633-45. [PMID: 16282214] [DOI: 10.1177/0272989X05282637]
Abstract
Markov transition models are frequently used to model disease progression. The authors show how the solution to Kolmogorov’s forward equations can be exploited to map between transition rates and probabilities from probability data in multistate models. They provide a uniform, Bayesian treatment of estimation and propagation of uncertainty of transition rates and probabilities when 1) observations are available on all transitions and exact time at risk in each state (fully observed data) and 2) observations are on initial state and final state after a fixed interval of time but not on the sequence of transitions (partially observed data). The authors show how underlying transition rates can be recovered from partially observed data using Markov chain Monte Carlo methods in WinBUGS, and they suggest diagnostics to investigate inconsistencies between evidence from different starting states. An illustrative example for a 3-state model is given, which shows how the methods extend to more complex Markov models using the software WBDiff to compute solutions. Finally, the authors illustrate how to statistically combine data from multiple sources, including partially observed data at several follow-up times and also how to calibrate a Markov model to be consistent with data from one specific study.
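The mapping between rates and probabilities rests on P(t) = exp(Qt), the solution to Kolmogorov's forward equations for a generator matrix Q. A minimal sketch, with a hand-rolled matrix exponential and invented rates for a 3-state illness-death model (the paper does this within a fully Bayesian treatment in WinBUGS, which is not attempted here):

```python
import numpy as np

def expm(A, n_terms=30, n_squarings=10):
    """Matrix exponential by scaling-and-squaring with a truncated Taylor
    series; adequate for the small generator matrices used here."""
    A = A / 2.0 ** n_squarings
    E, term = np.eye(len(A)), np.eye(len(A))
    for k in range(1, n_terms):
        term = term @ A / k
        E = E + term
    for _ in range(n_squarings):
        E = E @ E
    return E

# Generator (rate) matrix Q for an illness-death model, rates per year
# (invented): off-diagonals are transition rates, rows sum to zero.
Q = np.array([[-0.15,  0.10, 0.05],
              [ 0.00, -0.25, 0.25],
              [ 0.00,  0.00, 0.00]])

P1 = expm(Q)          # 1-year transition probability matrix, P(t) = exp(Qt)
P5 = expm(5.0 * Q)    # 5-year matrix from the same underlying rates
```

Working on the rate scale is what lets partially observed data at different follow-up intervals be combined coherently: every interval length maps to its own probability matrix through the same Q.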
Affiliation(s)
- Nicky J Welton
- MRC Health Services Research Collaboration, Bristol, United Kingdom.
9. Dias S, Sutton AJ, Welton NJ, Ades AE. Evidence synthesis for decision making 6: embedding evidence synthesis in probabilistic cost-effectiveness analysis. Med Decis Making 2013; 33:671-8. [PMID: 23804510] [PMCID: PMC3704202] [DOI: 10.1177/0272989X13487257]
Abstract
When multiple parameters are estimated from the same synthesis model, it is likely that correlations will be induced between them. Network meta-analysis (mixed treatment comparisons) is one example where such correlations occur, along with meta-regression and syntheses involving multiple related outcomes. These correlations may affect the uncertainty in incremental net benefit when treatment options are compared in a probabilistic decision model, and it is therefore essential that methods are adopted that propagate the joint parameter uncertainty, including correlation structure, through the cost-effectiveness model. This tutorial paper sets out 4 generic approaches to evidence synthesis that are compatible with probabilistic cost-effectiveness analysis. The first is evidence synthesis by Bayesian posterior estimation and posterior sampling, where other parameters of the cost-effectiveness model can be incorporated into the same software platform. Bayesian Markov chain Monte Carlo simulation methods with WinBUGS software are the most popular choice for this option. A second possibility is to conduct evidence synthesis by Bayesian posterior estimation and then export the posterior samples to another package where other parameters are generated and the cost-effectiveness model is evaluated. Frequentist methods of parameter estimation followed by forward Monte Carlo simulation from the maximum likelihood estimates and their variance-covariance matrix represent a third approach. A fourth option is bootstrap resampling, a frequentist simulation approach to parameter uncertainty. This tutorial paper also provides guidance on how to identify situations in which no correlations exist and therefore simpler approaches can be adopted. Software suitable for transferring data between different packages, and software that provides a user-friendly interface for integrated software platforms, offering investigators a flexible way of examining alternative scenarios, are reviewed.
Affiliation(s)
- Sofia Dias
- School of Social and Community Medicine, University of Bristol, Bristol, UK (SD, NJW, AEA)
- Alex J Sutton
- Department of Health Sciences, University of Leicester, Leicester, UK (AJS)
- Nicky J Welton
- School of Social and Community Medicine, University of Bristol, Bristol, UK (SD, NJW, AEA)
- A E Ades
- School of Social and Community Medicine, University of Bristol, Bristol, UK (SD, NJW, AEA)
10. Craig BA, Black MA. Incremental cost-effectiveness ratio and incremental net-health benefit: two sides of the same coin. Expert Rev Pharmacoecon Outcomes Res 2001; 1:37-46. [PMID: 19807506] [DOI: 10.1586/14737167.1.1.37]
Abstract
In recent years, an alternative framework for cost-effectiveness analyses has been growing in popularity. Instead of the incremental cost-effectiveness ratio, for which statistical inference is often difficult, the incremental net-health benefit (INHB), a linear transformation of incremental costs and effectiveness, has been utilized. The linear structure of this statistic allows easy computation and interpretation of confidence intervals, hypothesis tests and acceptability curves. It is often difficult, however, to switch decision-making procedures without first verifying the appropriateness of the new methods. In this paper, we demonstrate the decision-making similarities between the INHB and the incremental cost-effectiveness ratio and describe how the INHB can be used to clarify inference of the incremental cost-effectiveness ratio. We also describe the two statistics in terms of the ΔE-ΔC plane, thus allowing both a mathematical and graphical comparison of these similarities. We conclude with a general discussion of cost-effectiveness analyses and advocate Bayesian, rather than frequentist, inference as the more intuitive and powerful decision-making framework.
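The equivalence the authors describe is easy to state: for a positive effect difference, accepting a technology when the ICER falls below the threshold λ gives the same decision as accepting when the INHB (ΔE - ΔC/λ, measured in health units) is positive, and the linear INHB is the quantity on which interval estimation is straightforward. A sketch with hypothetical increments:

```python
def icer(d_cost, d_eff):
    """Incremental cost-effectiveness ratio (cost per unit of effect)."""
    return d_cost / d_eff

def inhb(d_cost, d_eff, lam):
    """Incremental net-health benefit in effect units: dE - dC / lambda."""
    return d_eff - d_cost / lam

lam = 50_000.0                    # threshold value of one QALY (assumed)
d_cost, d_eff = 12_000.0, 0.4     # hypothetical increments vs comparator

# For a positive effect difference the two decision rules coincide:
# ICER below lambda  <=>  INHB above zero.
accept_by_icer = icer(d_cost, d_eff) < lam
accept_by_inhb = inhb(d_cost, d_eff, lam) > 0
```

The ratio flips sign ambiguously when ΔE crosses zero, which is exactly where the INHB, being linear, remains well behaved.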
Affiliation(s)
- B A Craig
- Department of Statistics, Purdue University, West Lafayette, IN 47907-1399, USA.
11. Ades AE, Welton NJ, Caldwell D, Price M, Goubar A, Lu G. Multiparameter evidence synthesis in epidemiology and medical decision-making. J Health Serv Res Policy 2008; 13 Suppl 3:12-22. [PMID: 18806188] [DOI: 10.1258/jhsrp.2008.008020]
Abstract
Meta-analysis has been well-established for many years, but has been largely confined to pooling evidence on pair-wise contrasts. Broader forms of synthesis have also been described, apparently re-invented in disparate fields, each time taking different computational approaches. The potential value of Bayesian estimation of a joint posterior parameter distribution and simultaneously sampling from it for decision analysis has also been appreciated. However, applications have been relatively few in number, sometimes stylized, and presented mainly to a statistical methods audience. As a result, the potential for multiparameter evidence synthesis in both epidemiology and health technology assessment has remained largely unrecognized. The advent of flexible software for Bayesian Markov chain Monte Carlo in the shape of WinBUGS has made these earlier strands of work more widely available. Researchers can now carry out synthesis at a realistic level of complexity. The Bristol programme has contributed to a growing body of literature not only on how to synthesize different evidence structures, but also on how to check the consistency of multiple information sources and how to use the resulting models to prioritize future research.
Affiliation(s)
- A E Ades
- MRC Health Services Collaboration, Bristol, UK.
12. Ades AE, Claxton K, Sculpher M. Evidence synthesis, parameter correlation and probabilistic sensitivity analysis. Health Econ 2006; 15:373-81. [PMID: 16389628] [DOI: 10.1002/hec.1068]
Abstract
Over the last decade or so, there have been many developments in methods to handle uncertainty in cost-effectiveness studies. In decision modelling, it is widely accepted that there needs to be an assessment of how sensitive the decision is to uncertainty in parameter values. The rationale for probabilistic sensitivity analysis (PSA) is primarily based on a consideration of the needs of decision makers in assessing the consequences of decision uncertainty. In this paper, we highlight some further compelling reasons for adopting probabilistic methods for decision modelling and sensitivity analysis, and specifically for adopting simulation from a Bayesian posterior distribution. Our reasoning is as follows. Firstly, cost-effectiveness analyses need to be based on all the available evidence, not a selected subset, and the uncertainties in the data need to be propagated through the model in order to provide a correct analysis of the uncertainties in the decision. In many--perhaps most--cases the evidence structure requires a statistical analysis that inevitably induces correlations between parameters. Deterministic sensitivity analysis requires that models are run with parameters fixed at 'extreme' values, but where parameter correlation exists it is not possible to identify sets of parameter values that can be considered 'extreme' in a meaningful sense. However, a correct probabilistic analysis can be readily achieved by Monte Carlo sampling from the joint posterior distribution of parameters. In this paper, we review some evidence structures commonly occurring in decision models, where analyses that correctly reflect the uncertainty in the data induce correlations between parameters. Frequently, this is because the evidence base includes information on functions of several parameters. It follows that, if health technology assessments are to be based on a correct analysis of all available data, then probabilistic methods must be used both for sensitivity analysis and for estimation of expected costs and benefits.
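The point about correlation is easily demonstrated: when a model output depends on a function of jointly estimated parameters, sampling them independently misstates the output uncertainty. The sketch below assumes an invented bivariate normal posterior with negative correlation and compares joint versus independent sampling for a difference of the two parameters:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50_000

# Invented joint posterior for two synthesis parameters: strong negative
# correlation of the kind induced when data inform a function of both.
mean = np.array([0.5, -0.2])
cov = np.array([[ 0.04, -0.03],
                [-0.03,  0.04]])
joint = rng.multivariate_normal(mean, cov, size=n)

# Toy model output that depends on the difference of the two parameters.
out_joint = joint[:, 0] - joint[:, 1]

# Same marginals sampled independently, i.e. the correlation is ignored.
out_indep = rng.normal(mean[0], 0.2, n) - rng.normal(mean[1], 0.2, n)

# With rho = -0.75, ignoring the correlation understates the variance of
# the difference: 0.04 + 0.04 + 2 * 0.03 = 0.14 versus 0.08.
```

Had the output depended on the sum rather than the difference, ignoring the same correlation would have overstated the uncertainty instead, which is why 'extreme' one-at-a-time parameter values have no meaningful definition under correlation.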
Affiliation(s)
- A E Ades
- MRC Health Services Collaboration, Canynge Hall, England, UK.
13. Basu A, Meltzer HY, Dukic V. Estimating transitions between symptom severity states over time in schizophrenia: a Bayesian meta-analytic approach. Stat Med 2006; 25:2886-910. [PMID: 16220519] [DOI: 10.1002/sim.2317]
Abstract
We obtain the posterior predictive distribution of transition probabilities between symptom severity states over time for patients with schizophrenia by (i) employing a Bayesian meta-analysis of published clinical trials and observational studies to estimate the posterior distribution of parameters that guide changes in Positive and Negative Syndrome Scale (PANSS) scores over time and under the influence of various drugs and (ii) by propagating the variability from the posterior distributions of the parameters through a micro-simulation model that is formulated based on schizophrenia progression. Results show detailed differences among haloperidol, risperidone and olanzapine in controlling various levels of severities of positive, negative and joint symptoms over time. For example, risperidone seems best in controlling severe positive symptoms while olanzapine is the worst in that regard during the first quarter of drug treatment; however, olanzapine seems to be best in controlling severe negative symptoms across all four quarters of treatment while haloperidol is the worst in this regard. These details may further serve to better estimate quality of life of patients and aid in resource utilization decisions in treating schizophrenic patients. In addition, consistent estimation of uncertainty in the time-profile parameters also has important implications for the practice of cost-effectiveness analysis and for future resource allocation policies in schizophrenia treatment.
Affiliation(s)
- Anirban Basu
- Department of Medicine, Section of General Internal Medicine, University of Chicago, 5841 S. Maryland Ave, MC 2007, AMD B201, Chicago, IL 60637, USA.
14. Dominici F, Zeger SL. Smooth quantile ratio estimation with regression: estimating medical expenditures for smoking-attributable diseases. Biostatistics 2005; 6:505-19. [PMID: 15872022] [DOI: 10.1093/biostatistics/kxi031]
Abstract
The methodological development of this paper is motivated by a common problem in econometrics where we are interested in estimating the difference in the average expenditures between two populations, say with and without a disease, as a function of the covariates. For example, let Y(1) and Y(2) be two non-negative random variables denoting the health expenditures for cases and controls. Smooth Quantile Ratio Estimation (SQUARE) is a novel approach for estimating Delta = E[Y(1)] - E[Y(2)] by smoothing across percentiles the log-transformed ratio of the two quantile functions. Dominici et al. (2005) have shown that SQUARE defines a large class of estimators of Delta, is more efficient than common parametric and nonparametric estimators of Delta, and is consistent and asymptotically normal. However, in applications it is often desirable to estimate Delta(x) = E[Y(1)|x] - E[Y(2)|x], that is, the difference in means as a function of x. In this paper we extend SQUARE to a regression model and introduce a two-part regression SQUARE for estimating Delta(x) as a function of x. We use the first part of the model to estimate the probability of incurring any costs and the second part to estimate the mean difference in health expenditures, given that a nonzero cost is observed. In the second part of the model, we apply the basic definition of SQUARE for positive costs to compare expenditures for cases and controls having 'similar' covariate profiles. We determine strata of cases and controls with 'similar' covariate profiles by means of propensity score matching. We then apply two-part regression SQUARE to the 1987 National Medicare Expenditure Survey to estimate the difference Delta(x) between persons suffering from smoking-attributable diseases and persons without these diseases as a function of the propensity of getting the disease. Using a simulation study, we compare frequentist properties of two-part regression SQUARE with maximum likelihood estimators for the log-transformed expenditures.
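The core of SQUARE, smoothing the log-ratio of the two quantile functions across percentiles, can be illustrated with a minimal sketch. This is not the authors' implementation: the percentile grid, the low-order polynomial smoother, and the simulated lognormal costs are all illustrative assumptions.

```python
import numpy as np

def square_mean_difference(y1, y2, degree=2):
    """Estimate Delta = E[Y1] - E[Y2] by smoothing the log-ratio of the
    two empirical quantile functions across percentiles (the SQUARE idea)."""
    p = np.linspace(0.01, 0.99, 99)            # percentile grid (assumed)
    q1 = np.quantile(y1, p)                    # quantile function, cases
    q2 = np.quantile(y2, p)                    # quantile function, controls
    log_ratio = np.log(q1 / q2)                # log quantile ratio
    coef = np.polyfit(p, log_ratio, degree)    # smooth with a low-order polynomial
    q1_hat = q2 * np.exp(np.polyval(coef, p))  # smoothed case quantiles
    return np.mean(q1_hat - q2)                # average smoothed quantile difference

# Simulated positive "cost" data: cases shifted upward on the log scale
rng = np.random.default_rng(0)
y1 = rng.lognormal(mean=1.2, sigma=1.0, size=2000)   # cases
y2 = rng.lognormal(mean=1.0, sigma=1.0, size=2000)   # controls
est = square_mean_difference(y1, y2)
print(est)
```

With these simulated data the true mean difference is about 1, and the smoothed estimate lands close to it.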
Affiliation(s)
- Francesca Dominici
- Department of Biostatistics, Bloomberg School of Public Health, The Johns Hopkins University Baltimore, MD 21205-3179, USA.
15
Parmigiani G. Uncertainty and the value of diagnostic information, with application to axillary lymph node dissection in breast cancer. Stat Med 2004; 23:843-55. [PMID: 14981678 DOI: 10.1002/sim.1623] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
In clinical decision making, it is common to ask whether, and how much, a diagnostic procedure is contributing to subsequent treatment decisions. Statistically, quantification of the value of the information provided by a diagnostic procedure can be carried out using decision trees with multiple decision points, representing both the diagnostic test and the subsequent treatments that may depend on the test's results. This article investigates probabilistic sensitivity analysis approaches for exploring and communicating parameter uncertainty in such decision trees. Complexities arise because uncertainty about a model's inputs determines uncertainty about optimal decisions at all decision nodes of a tree. We present the expected utility solution strategy for multistage decision problems in the presence of uncertainty on input parameters, propose a set of graphical displays and summarization tools for probabilistic sensitivity analysis in multistage decision trees, and provide an application to axillary lymph node dissection in breast cancer.
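A toy version of this setup, a two-stage tree (test, then treat if positive) evaluated by backward induction inside a probabilistic sensitivity analysis, might look like the following. The prevalence, test accuracy, and utility values are invented for illustration and are unrelated to the breast cancer application.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000   # probabilistic sensitivity analysis draws

# Uncertain inputs (Beta distributions and utilities are invented)
prev = rng.beta(20, 80, n)    # disease prevalence
sens = rng.beta(85, 15, n)    # test sensitivity
spec = rng.beta(90, 10, n)    # test specificity
u_tx_dis, u_tx_no = 0.80, 0.90    # utility of treating: with / without disease
u_no_dis, u_no_no = 0.50, 1.00    # utility of not treating

# Strategy A: treat everyone without testing
eu_treat_all = prev * u_tx_dis + (1 - prev) * u_tx_no

# Strategy B: test, then treat only if positive; the second-stage utilities
# are computed conditionally on the test result (backward induction)
p_pos = prev * sens + (1 - prev) * (1 - spec)
u_pos = (prev * sens * u_tx_dis + (1 - prev) * (1 - spec) * u_tx_no) / p_pos
u_neg = (prev * (1 - sens) * u_no_dis + (1 - prev) * spec * u_no_no) / (1 - p_pos)
eu_test = p_pos * u_pos + (1 - p_pos) * u_neg

# Fraction of parameter draws in which testing is the optimal first-stage choice
p_test_optimal = np.mean(eu_test > eu_treat_all)
print(p_test_optimal)
```

Because uncertainty about the inputs propagates to every decision node, the output is not a single "optimal" strategy but the probability that each strategy is optimal across draws, which is exactly the kind of summary the paper's graphical displays are built around.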
Affiliation(s)
- Giovanni Parmigiani
- The Sidney Kimmel Comprehensive Cancer Center at Johns Hopkins University, Baltimore, MD 21205, USA.
16
Briggs AH, Ades AE, Price MJ. Probabilistic sensitivity analysis for decision trees with multiple branches: use of the Dirichlet distribution in a Bayesian framework. Med Decis Making 2003; 23:341-50. [PMID: 12926584 DOI: 10.1177/0272989x03255922] [Citation(s) in RCA: 118] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
In structuring decision models of medical interventions, it is commonly recommended that only 2 branches be used for each chance node to avoid logical inconsistencies that can arise during sensitivity analyses if the branching probabilities do not sum to 1. However, information may be naturally available in an unconditional form, and structuring a tree in conditional form may complicate rather than simplify the sensitivity analysis of the unconditional probabilities. Current guidance emphasizes using probabilistic sensitivity analysis, and a method is required to provide probability distributions over multiple branches that appropriately represent uncertainty while satisfying the requirement that mutually exclusive event probabilities should sum to 1. The authors argue that the Dirichlet distribution, the multivariate equivalent of the beta distribution, is appropriate for this purpose and illustrate its use for generating a fully probabilistic transition matrix for a Markov model. Furthermore, they demonstrate that by adopting a Bayesian approach, the problem of observing zero counts for transitions of interest can be overcome.
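The Dirichlet construction the authors describe is easy to sketch: adding a uniform Dirichlet prior to observed transition counts yields a posterior whose draws always sum to 1, and a zero count no longer forces a zero probability. The counts below are made up.

```python
import numpy as np

rng = np.random.default_rng(2)

# Observed transitions out of one Markov state; note the zero count
counts = np.array([250, 30, 0, 5])   # stay, progress, rare event, die (illustrative)

# With a uniform Dirichlet(1,1,1,1) prior the posterior is Dirichlet(counts + 1):
# every sampled probability vector sums to 1, and the zero-count transition
# still receives positive probability in each draw.
samples = rng.dirichlet(counts + 1, size=10000)

print(samples.sum(axis=1)[:3])   # each row sums to 1
print(samples[:, 2].mean())      # small but nonzero
```

Each row of `samples` can be used directly as one probabilistic realization of that row of the Markov transition matrix.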
Affiliation(s)
- Andrew H Briggs
- Health Economics Research Centre, University of Oxford, Institute of Health Sciences, Headington, Oxford OX3 7LF, United Kingdom.
17
Abstract
Prediction models used in support of clinical and health policy decision making often need to consider the course of a disease over an extended period of time, and draw evidence from a broad knowledge base, including epidemiologic cohort and case control studies, randomized clinical trials, expert opinions, and more. This paper is a brief introduction to these complex decision models, their relation to Bayesian decision theory, and the tools typically used to describe the uncertainties involved. Concepts are illustrated throughout via a simplified tutorial.
Affiliation(s)
- G Parmigiani
- The Sidney Kimmel Comprehensive Cancer Center at Johns Hopkins University, Baltimore, USA.
18
Cooper NJ, Sutton AJ, Abrams KR. Decision analytical economic modelling within a Bayesian framework: application to prophylactic antibiotics use for caesarean section. Stat Methods Med Res 2002; 11:491-512. [PMID: 12516986 DOI: 10.1191/0962280202sm306ra] [Citation(s) in RCA: 33] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/16/2022]
Abstract
Economic evaluation of health care interventions based on decision analytic modelling can generate valuable information for health policy decision makers. However, the usefulness of the results obtained depends on the quality of the data input into the model; that is, the accuracy of the estimates for the costs, effectiveness, and transition probabilities between the different health states of the model. The aim of this paper is to review the use of Bayesian decision models in economic evaluation and to demonstrate how the individual components required for decision analytical modelling (i.e., systematic review incorporating meta-analyses, estimation of transition probabilities, evaluation of the model, and sensitivity analysis) may be addressed simultaneously in one coherent Bayesian model evaluated using Markov Chain Monte Carlo simulation implemented in the specialist Bayesian statistics software WinBUGS. To illustrate the method described, a simple probabilistic decision model is developed to evaluate the cost implications of using prophylactic antibiotics in caesarean section to reduce the incidence of wound infection. The advantages of using the Bayesian statistical approach outlined compared to the conventional classical approaches to decision analysis include the ability to: (i) perform all necessary analyses, including all intermediate analyses (e.g., meta-analyses) required to derive model parameters, in a single coherent model; (ii) incorporate expert opinion either directly or regarding the relative credibility of different data sources; (iii) use the actual posterior distributions for parameters of interest (as opposed to making distributional assumptions necessary for the classical formulation); and (iv) incorporate uncertainty for all model parameters.
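A heavily simplified, conjugate version of this kind of model can be sketched without WinBUGS. The trial counts and unit costs below are placeholders, and a single pooled Beta-binomial update stands in for the paper's full MCMC meta-analysis.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10000

# Pooled wound-infection counts (placeholders, not the paper's data)
inf_tx, n_tx = 30, 600       # with prophylactic antibiotics
inf_ctl, n_ctl = 90, 600     # without

# Conjugate Beta(1, 1) posteriors for the infection risk in each arm
p_tx = rng.beta(1 + inf_tx, 1 + n_tx - inf_tx, n)
p_ctl = rng.beta(1 + inf_ctl, 1 + n_ctl - inf_ctl, n)

cost_abx, cost_inf = 20.0, 2000.0   # assumed costs: prophylaxis / treating infection

# Expected cost per delivery under each policy, uncertainty propagated
cost_with = cost_abx + p_tx * cost_inf
cost_without = p_ctl * cost_inf

p_saves = np.mean(cost_without - cost_with > 0)
print(p_saves)
```

The point of the exercise is (iv) in the abstract: because the arm-level risks are sampled from their posteriors rather than plugged in as point estimates, the cost comparison comes out as a probability, not a single number.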
Affiliation(s)
- N J Cooper
- Department of Epidemiology and Public Health, University of Leicester, Leicester, UK.
19
Vanness DJ, Kim WR. Bayesian estimation, simulation and uncertainty analysis: the cost-effectiveness of ganciclovir prophylaxis in liver transplantation. Health Econ 2002; 11:551-566. [PMID: 12203757 DOI: 10.1002/hec.739] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
This paper demonstrates the usefulness of combining simulation with Bayesian estimation methods in analysis of cost-effectiveness data collected alongside a clinical trial. Specifically, we use Markov Chain Monte Carlo (MCMC) to estimate a system of generalized linear models relating costs and outcomes to a disease process affected by treatment under alternative therapies. The MCMC draws are used as parameters in simulations which yield inference about the relative cost-effectiveness of the novel therapy under a variety of scenarios. Total parametric uncertainty is assessed directly by examining the joint distribution of simulated average incremental cost and effectiveness. The approach allows flexibility in assessing treatment in various counterfactual premises and quantifies the global effect of parametric uncertainty on a decision-maker's confidence in adopting one therapy over the other.
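The final step, summarizing the joint distribution of simulated incremental cost and effectiveness, is often reported as a cost-effectiveness acceptability curve. A sketch under assumed (normal, independent) stand-in draws; in the paper these would be the simulation outputs driven by the MCMC parameter draws:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20000

# Stand-in draws for incremental cost and incremental effectiveness
d_cost = rng.normal(5000, 2000, n)    # incremental cost
d_eff = rng.normal(0.20, 0.10, n)     # incremental effectiveness (QALYs)

wtps = [10_000, 25_000, 50_000]       # willingness-to-pay ceilings per QALY
# Net monetary benefit turns each joint draw into a scalar; the share of
# draws with positive NMB gives one point on the acceptability curve
ceac = [float(np.mean(wtp * d_eff - d_cost > 0)) for wtp in wtps]
print(list(zip(wtps, ceac)))
```

Reading the curve across ceiling ratios quantifies exactly the "decision-maker's confidence in adopting one therapy over the other" that the abstract refers to.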
Affiliation(s)
- David J Vanness
- Division of Health Care Policy & Research, Mayo Clinic, Rochester, Minnesota 55905, USA.
20
Samsa GP, Matchar DB, Williams GR, Levy DE. Cost-effectiveness of ancrod treatment of acute ischaemic stroke: results from the Stroke Treatment with Ancrod Trial (STAT). J Eval Clin Pract 2002; 8:61-70. [PMID: 11882102 DOI: 10.1046/j.1365-2753.2002.00315.x] [Citation(s) in RCA: 18] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
Abstract
RATIONALE, AIMS AND OBJECTIVES This paper describes a recent randomized controlled trial in which 42% of patients receiving ancrod attained a favourable outcome in comparison with 34% of controls. Although the above effect size corresponds to a number needed to treat (to achieve a favourable outcome) of approximately 13, intuition does not necessarily suggest what would be the overall impact of a treatment with this level of efficacy. METHODS The objective was to evaluate the cost-effectiveness of ancrod. Cost-effectiveness analysis of data from the Stroke Treatment with Ancrod Trial (STAT) was carried out. The participants were 495 patients with data on functional status at the conclusion of follow-up. Short-term results were based upon utilization and quality of life observed during the trial; these were merged with expected long-term results obtained through simulation using the Stroke Policy Model. The main outcome measure was incremental cost-effectiveness ratio. RESULTS Ancrod treatment resulted in both better quality-adjusted life expectancy and lower medical costs than placebo as supported by sensitivity analysis. The cost differential was primarily attributable to the long-term implications of ancrod's role in reducing disability. CONCLUSIONS If ancrod is even modestly effective, it will probably be cost-effective (and, indeed, cost-saving) as well. The net population-level impact of even modestly effective stroke treatments can be substantial.
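The number-needed-to-treat figure quoted above follows directly from the two favourable-outcome rates:

```python
# Favourable-outcome rates reported for the STAT trial
p_ancrod, p_control = 0.42, 0.34
arr = p_ancrod - p_control   # absolute risk reduction, 0.08
nnt = 1 / arr                # about 12.5, i.e. roughly 13 patients treated
print(round(nnt, 1))         # per additional favourable outcome
```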
Affiliation(s)
- Gregory P Samsa
- Center for Clinical Health Policy Research, Suite 230, Duke University, 2200 West Main Street, Durham, NC 27705, USA.
21
Prevost TC, Abrams KR, Jones DR. Hierarchical models in generalized synthesis of evidence: an example based on studies of breast cancer screening. Stat Med 2000; 19:3359-76. [PMID: 11122501 DOI: 10.1002/1097-0258(20001230)19:24<3359::aid-sim710>3.0.co;2-n] [Citation(s) in RCA: 85] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Evidence regarding the potential benefits of a particular health care intervention is often available from a variety of disparate sources. However, formal synthesis of such evidence has traditionally concentrated almost exclusively on that derived from randomized studies, although for a range of conditions the randomized evidence will be less than adequate due to economic, organizational or ethical considerations. In such situations a formal synthesis of the evidence that is available from observational studies can be valuable whilst awaiting higher quality evidence from randomized trials. Consideration of randomized studies alone may be appropriate when assessing the efficacy of an intervention, but assessment of the effectiveness of such an intervention within a more general target population may be improved by consideration of evidence from non-randomized studies as well. Standard meta-analysis methods may allow for both within- and between-study heterogeneity; however, when multiple sources of evidence are considered an extra level of complexity is introduced, namely study type. One possible solution to the problem of making inferences, particularly regarding an overall population effect, in such situations is to model the heterogeneity, both quantitative and qualitative, using a Bayesian hierarchical model. The hierarchical nature of such models specifically allows for quantitative heterogeneity both within and between sources of evidence, whilst the Bayesian approach can accommodate a priori beliefs regarding qualitative differences between the various sources. The use of such methods in practice is illustrated in the context of screening for breast cancer; in this example evidence is available from both randomized clinical trials and observational studies. A particular appeal of a Bayesian approach for this type of problem lies in the prediction of future benefits likely to be observed in a target population. This approach to health service monitoring in general is discussed.
Affiliation(s)
- T C Prevost
- Department of Epidemiology and Public Health, University of Leicester, UK
22
Niessen LW, Grijseels EW, Rutten FF. The evidence-based approach in health policy and health care delivery. Soc Sci Med 2000; 51:859-69. [PMID: 10972430 DOI: 10.1016/s0277-9536(00)00066-6] [Citation(s) in RCA: 88] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
Evidence-based approaches are prominent on the national and international agendas for health policy and health research. It is unclear what the implications of this approach are for the production and distribution of health in populations, given the notion of multiple determinants in health. It is equally unclear what kind of barriers there are to the adoption of evidence-based approaches in health care practice. This paper sketches some developments in the way in which health policy is informed by the results from health research. It summarises evidence-based approaches in health at three impact levels: intersectoral assessment, national health care policy, and evidence-based medicine in everyday practice. Consensus is growing on the role of broad and specific health determinants, including health care, as well as on priority setting based on the burden of diseases. In spite of methodological constraints, there is a demand for intersectoral assessments, especially in health sector reform. Initiators of policy changes in other sectors may be held responsible for providing the evidence related to health. There are limited possibilities for priority setting at the national health care policy level. Hence, there is a decentralisation of responsibilities for resource use. Health care providers are encouraged to assume agency roles for both patients and society and asked to promote and deliver effective and efficient health care. Governments will have to design a national framework to facilitate their organisation and legal framework to enhance evidence-based health policy. Treatment guidelines supported by evidence on effectiveness and efficiency will be one essential element in this process. With the increasing number of advocates for the enhancement of population health in the policy arenas, evidence-based approaches provide the information and some of the tools to help with priority setting.
Affiliation(s)
- L W Niessen
- Institute of Medical Technology Assessment, Erasmus University, Rotterdam, The Netherlands.
23
Liu G, Zhao Z. Stochastic cost-effectiveness analysis: a simultaneous marginal-effect approach. Value Health 1999; 2:420-8. [PMID: 16674328 DOI: 10.1046/j.1524-4733.1999.26004.x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/09/2023]
Abstract
OBJECTIVE The purpose of this study is to develop a cost-effectiveness methodology in the context of a simultaneous modeling framework that provides consistent point and interval estimates. METHODS A simultaneous model of cost and effectiveness functions was developed to measure the incremental cost-effectiveness ratio for competing medical interventions. A feasible nonlinear least-squares method was suggested to estimate the simultaneous model. Using a series of hypothetical data, a simulation analysis was performed to show the superior performance of the proposed model, relative to the average-effect model, a widely used approach to cost-effectiveness estimation. RESULTS The traditional average-effect approach has two shortcomings. First, it assumes two strong conditions: truly random distributions of all the significant nontreatment variables (both observed and unobserved) across study groups, and the independence of cost and effectiveness variables. Second, it does not give the confidence interval, an important measure to assess the stochastic nature and robustness of point estimates. In contrast, the simultaneous modeling approach provides marginal-effect estimates, imposing no restrictions on the random distributions of the individual characteristics across study groups. Furthermore, it takes into account the simultaneity of cost and effectiveness functions being estimated. The simulation analysis showed that the simultaneous modeling approach is significantly more unbiased and efficient in predicting the true cost-effectiveness ratio. CONCLUSION The simultaneous modeling approach is superior to the average-effect approach in the estimation of incremental cost-effectiveness ratios using data with significant nontreatment confounding factors. The advantages of the simultaneous modeling approach are particularly appealing for evaluative studies dealing with large-scale retrospective data at the patient level.
Affiliation(s)
- G Liu
- Pharmaceutical Policy and Evaluative Sciences, University of North Carolina, Chapel Hill, NC 27516-7360, USA.
24
Samsa GP, Reutter RA, Parmigiani G, Ancukiewicz M, Abrahamse P, Lipscomb J, Matchar DB. Performing cost-effectiveness analysis by integrating randomized trial data with a comprehensive decision model: application to treatment of acute ischemic stroke. J Clin Epidemiol 1999; 52:259-71. [PMID: 10210244 DOI: 10.1016/s0895-4356(98)00151-6] [Citation(s) in RCA: 111] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
A recent national panel on cost-effectiveness in health and medicine has recommended that cost-effectiveness analysis (CEA) of randomized controlled trials (RCTs) should reflect the effect of treatments on long-term outcomes. Because the follow-up period of RCTs tends to be relatively short, long-term implications of treatments must be assessed using other sources. We used a comprehensive simulation model of the natural history of stroke to estimate long-term outcomes after a hypothetical RCT of an acute stroke treatment. The RCT generates estimates of short-term quality-adjusted survival and cost and also the pattern of disability at the conclusion of follow-up. The simulation model incorporates the effect of disability on long-term outcomes, thus supporting a comprehensive CEA. Treatments that produce relatively modest improvements in the pattern of outcomes after ischemic stroke are likely to be cost-effective. This conclusion was robust to modifying the assumptions underlying the analysis. More effective treatments in the acute phase immediately following stroke would generate significant public health benefits, even if these treatments have a high price and result in relatively small reductions in disability. Simulation-based modeling can provide the critical link between a treatment's short-term effects and its long-term implications and can thus support comprehensive CEA.
Affiliation(s)
- G P Samsa
- Center for Clinical Health Policy Research, Duke University, Department of Medicine, Duke University Medical Center, Durham, North Carolina 27705, USA
25
Lipscomb J, Ancukiewicz M, Parmigiani G, Hasselblad V, Samsa G, Matchar DB. Predicting the cost of illness: a comparison of alternative models applied to stroke. Med Decis Making 1998; 18:S39-56. [PMID: 9566466 DOI: 10.1177/0272989x98018002s07] [Citation(s) in RCA: 50] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Predictions of cost over well-defined time horizons are frequently required in the analysis of clinical trials and social experiments, for decision models investigating the cost-effectiveness of interventions, and for macro-level estimates of the resource impact of disease. With rare exceptions, cost predictions used in such applications continue to take the form of deterministic point estimates. However, the growing availability of large administrative and clinical data sets offers new opportunities for a more general approach to disease cost forecasting: the estimation of multivariable cost functions that yield predictions at the individual level, conditional on intervention(s), patient characteristics, and other factors. This raises the fundamental question of how to choose the "best" cost model for a given application. The central purpose of this paper is to demonstrate how to evaluate competing models on the basis of predictive validity. This concept is operationalized according to three alternative criteria: 1) root mean square error (RMSE), for evaluating predicted mean cost; 2) mean absolute error (MAE), for evaluating predicted median cost; and 3) a logarithmic scoring rule (log score), an information-theoretic index for evaluating the entire predictive distribution of cost. To illustrate these concepts, the authors conducted a split-sample analysis of data from a national sample of Medicare-covered patients hospitalized for ischemic stroke in 1991 and followed to the end of 1993. Using test and training samples of about 500,000 observations each, they investigated five models: single-equation linear models, with and without log transform of cost; two-part (mixture) models, with and without log transform, to directly address the problem of zero-cost observations; and a Cox proportional-hazards model stratified by time interval. For deriving the predictive distribution of cost, the log transformed two-part and proportional-hazards models are superior. For deriving the predicted mean or median cost, these two models and the commonly used log-transformed linear model all perform about the same. The untransformed models are dominated in every instance. The approaches to model selection illustrated here can be applied across a wide range of settings.
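The three predictive-validity criteria are straightforward to compute. The sketch below applies them to a single lognormal cost model on simulated split-sample data; the data and model are illustrative, not the Medicare analysis.

```python
import numpy as np

def rmse(y, mean_pred):
    """Root mean square error: evaluates a predicted mean."""
    return np.sqrt(np.mean((y - mean_pred) ** 2))

def mae(y, median_pred):
    """Mean absolute error: evaluates a predicted median."""
    return np.mean(np.abs(y - median_pred))

def log_score(y, pdf):
    """Mean log predictive density: evaluates the whole predictive
    distribution (higher is better)."""
    return np.mean(np.log(pdf(y)))

# Split-sample evaluation of one log-transformed model on simulated costs
rng = np.random.default_rng(5)
train = rng.lognormal(8.0, 1.0, 5000)
test = rng.lognormal(8.0, 1.0, 5000)

mu, sigma = np.log(train).mean(), np.log(train).std()
mean_pred = np.exp(mu + sigma ** 2 / 2)   # lognormal mean from the log-scale fit
median_pred = np.exp(mu)                  # lognormal median

def lognormal_pdf(y):
    return np.exp(-(np.log(y) - mu) ** 2 / (2 * sigma ** 2)) / (
        y * sigma * np.sqrt(2 * np.pi))

r, m, ls = rmse(test, mean_pred), mae(test, median_pred), log_score(test, lognormal_pdf)
print(r, m, ls)
```

Competing models would each be fitted on the training sample and scored on the held-out test sample with the same three functions, mirroring the paper's split-sample design.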
Affiliation(s)
- J Lipscomb
- Sanford Institute of Public Policy, Duke University, Durham, North Carolina 27708-0245, USA