1
Zhong H, Brandeau ML, Yazdi GE, Wang J, Nolen S, Hagan L, Thompson WW, Assoumou SA, Linas BP, Salomon JA. Metamodeling for Policy Simulations with Multivariate Outcomes. Med Decis Making 2022; 42:872-884. [PMID: 35735216] [PMCID: PMC9452454] [DOI: 10.1177/0272989x221105079]
Abstract
PURPOSE Metamodels are simplified approximations of more complex models that can be used as surrogates for the original models. Challenges in using metamodels for policy analysis arise when there are multiple correlated outputs of interest. We develop a framework for metamodeling with policy simulations to accommodate multivariate outcomes. METHODS We combine 2 algorithm adaptation methods (multitarget stacking and regression chain with maximum correlation) with different base learners, including linear regression (LR), elastic net (EE) with second-order terms, Gaussian process regression (GPR), random forests (RFs), and neural networks. We optimize integrated models using variable selection and hyperparameter tuning, and we compare the accuracy, efficiency, and interpretability of the different approaches. As an example application, we develop metamodels to emulate a microsimulation model of testing and treatment strategies for hepatitis C in correctional settings. RESULTS Output variables from the simulation model were correlated (average ρ = 0.58). Without multioutput algorithm adaptation, in-sample fit (measured by R2) ranged from 0.881 for LR to 0.987 for GPR. The multioutput algorithm adaptation methods increased R2 by an average of 0.002 across base learners; variable selection and hyperparameter tuning increased it by 0.009. Simpler models such as LR, EE, and RF required minimal training and prediction time. LR and EE had advantages in model interpretability, and we considered methods for improving the interpretability of the other models. CONCLUSIONS In our example application, the choice of base learner had the largest impact on R2; multioutput algorithm adaptation, variable selection, and hyperparameter tuning had modest impacts. Although the advantages and disadvantages of specific learning algorithms may vary across modeling applications, our framework for metamodeling in policy analyses with multivariate outcomes has broad applicability to decision analysis in health and medicine.
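The multitarget-stacking idea in this abstract can be sketched in a few lines: fit one base learner per output, then refit each output with every first-stage prediction appended to the inputs so that correlation among outcomes can be exploited. The toy inputs, outputs, and GP base learners below are illustrative stand-ins, not the authors' hepatitis C simulation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 3))              # simulation inputs (toy)
# Two correlated outcomes standing in for the simulation model's outputs
Y = np.column_stack([X @ np.array([1.0, 0.5, -0.3]),
                     X @ np.array([0.9, 0.4, -0.2]) + 0.1 * X[:, 0] ** 2])
Y += 0.01 * rng.standard_normal(Y.shape)

# Stage 1: one independent base learner per output
stage1 = [GaussianProcessRegressor(alpha=1e-6).fit(X, Y[:, j]) for j in range(2)]
P = np.column_stack([m.predict(X) for m in stage1])

# Stage 2 (the stacking step): refit each output on inputs augmented with
# the stage-1 predictions of *all* outputs
stage2 = [GaussianProcessRegressor(alpha=1e-6).fit(np.hstack([X, P]), Y[:, j])
          for j in range(2)]

def predict(x_new):
    p1 = np.column_stack([m.predict(x_new) for m in stage1])
    return np.column_stack([m.predict(np.hstack([x_new, p1])) for m in stage2])

print(predict(X[:5]).shape)  # (5, 2)
```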
Affiliation(s)
- Huaiyang Zhong, Department of Management Science and Engineering, Stanford University, Stanford, CA, USA
- Margaret L Brandeau, Department of Management Science and Engineering, Stanford University, Stanford, CA, USA
- Golnaz Eftekhari Yazdi, Section of Infectious Diseases, Department of Medicine, Boston Medical Center, Boston, MA, USA
- Jianing Wang, Section of Infectious Diseases, Department of Medicine, Boston Medical Center, Boston, MA, USA
- Shayla Nolen, Section of Infectious Diseases, Department of Medicine, Boston Medical Center, Boston, MA, USA
- William W Thompson, Division of Viral Hepatitis, Centers for Disease Control and Prevention, Atlanta, GA, USA
- Sabrina A Assoumou, Section of Infectious Diseases, Department of Medicine, Boston Medical Center, Boston, MA, USA
- Benjamin P Linas, Section of Infectious Diseases, Department of Medicine, Boston Medical Center, Boston, MA, USA
- Joshua A Salomon, Center for Health Policy and Center for Primary Care and Outcomes Research, Stanford University, Stanford, CA, USA
2
Weyant C, Brandeau ML. Personalization of Medical Treatment Decisions: Simplifying Complex Models while Maintaining Patient Health Outcomes. Med Decis Making 2021; 42:450-460. [PMID: 34416832] [PMCID: PMC8858337] [DOI: 10.1177/0272989x211037921]
Abstract
BACKGROUND Personalizing medical treatments based on patient-specific risks and preferences can improve patient health. However, models to support personalized treatment decisions are often complex and difficult to interpret, limiting their clinical application. METHODS We present a new method that uses machine learning to create meta-models for simplifying complex models for personalizing medical treatment decisions. We consider simple interpretable models, interpretable ensemble models, and noninterpretable ensemble models. We use variable selection with a penalty for patient-specific risks and/or preferences that are difficult, risky, or costly to obtain. We interpret the meta-models to the extent permitted by their model architectures. We illustrate our method by applying it to simplify a previously developed model for personalized selection of antipsychotic drugs for patients with schizophrenia. RESULTS The best simplified interpretable, interpretable ensemble, and noninterpretable ensemble models contained at most half the number of patient-specific risks and preferences of the original model. The simplified models achieved 60.5% (95% credible interval [crI]: 55.2-65.4), 60.8% (95% crI: 55.5-65.7), and 83.8% (95% crI: 80.8-86.6), respectively, of the net health benefit of the original model (quality-adjusted life-years gained). Important variables were similar across models and made intuitive sense. Computation time for the meta-models was orders of magnitude less than for the original model. LIMITATIONS The simplified models share the limitations of the original model (e.g., potential biases). CONCLUSIONS Our meta-modeling method is disease- and model-agnostic and can be used to simplify complex models for personalization, allowing for variable selection in addition to improved model interpretability and computational performance. Simplified models may be more likely to be adopted in clinical settings and can help improve equity in patient outcomes.
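The variable-selection-with-penalty step can be illustrated with an off-the-shelf lasso meta-model: predictors with little predictive value are driven to zero coefficients and drop out of the simplified model. Everything below (the synthetic data, the 0.05 coefficient cutoff) is a stand-in for the paper's patient-specific variables, not its actual method.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)
n = 500
X = rng.standard_normal((n, 6))     # 6 patient-specific risks/preferences (toy)
# Synthetic "net health benefit": only the first 3 variables truly matter
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.8 * X[:, 2] + 0.05 * rng.standard_normal(n)

meta = LassoCV(cv=5).fit(X, y)                    # penalized meta-model
kept = np.flatnonzero(np.abs(meta.coef_) > 0.05)  # small-coefficient cutoff
print("variables retained:", kept)                # expect roughly [0 1 2]
```

In the paper's setting, the penalty would additionally be weighted so that variables that are risky or costly to collect are penalized more heavily.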
Affiliation(s)
- Christopher Weyant, Department of Management Science and Engineering, Stanford University, Stanford, CA, USA
- Margaret L Brandeau, Department of Management Science and Engineering, Stanford University, Stanford, CA, USA
3
de Carvalho TM, van Rosmalen J, Wolff HB, Koffijberg H, Coupé VMH. Choosing a Metamodel of a Simulation Model for Uncertainty Quantification. Med Decis Making 2021; 42:28-42. [PMID: 34098793] [DOI: 10.1177/0272989x211016307]
Abstract
BACKGROUND Metamodeling may substantially reduce the computational expense of individual-level state transition simulation models (IL-STM) for calibration, uncertainty quantification, and health policy evaluation. However, because of the lack of guidance and readily available computer code, metamodels are still not widely used in health economics and public health. In this study, we provide guidance on how to choose a metamodel for uncertainty quantification. METHODS We built a simulation study to evaluate the prediction accuracy and computational expense of metamodels for uncertainty quantification, using life-years gained (LYG) by treatment as the IL-STM outcome. We analyzed how metamodel accuracy changes with the characteristics of the simulation model using a linear model (LM), Gaussian process regression (GP), generalized additive models (GAMs), and artificial neural networks (ANNs). Finally, we tested these metamodels in a case study consisting of a probabilistic analysis of a lung cancer IL-STM. RESULTS In a scenario with low uncertainty in model parameters (i.e., small confidence intervals), sufficient numbers of simulated life histories, and sufficient simulation model runs, commonly used metamodels (LM, ANNs, GAMs, and GP) have similarly good accuracy, with errors smaller than 1% for predicting LYG. With a higher level of uncertainty in model parameters, the prediction accuracy of GP and ANNs is superior to that of LM. In the case study, the best metamodel had a worst-case error of about 2.1%. CONCLUSION To obtain good prediction accuracy efficiently, we recommend starting with LM; if the resulting accuracy is insufficient, try ANNs and, if necessary, GP regression.
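The recommended workflow (try LM first, escalate only if its accuracy is insufficient) might look like the sketch below, with a cheap stand-in function in place of the IL-STM and an arbitrary accuracy threshold.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.metrics import r2_score

def simulate_lyg(theta):
    """Stand-in for an IL-STM outcome (life-years gained); mildly nonlinear."""
    return 1.2 * theta[:, 0] + 0.5 * theta[:, 1] ** 2

rng = np.random.default_rng(2)
train, test = rng.uniform(0, 1, (300, 2)), rng.uniform(0, 1, (100, 2))
y_train, y_test = simulate_lyg(train), simulate_lyg(test)

# Step 1: start with the cheapest metamodel, a linear model
lm = LinearRegression().fit(train, y_train)
r2_lm = r2_score(y_test, lm.predict(test))

# Step 2: escalate to a flexible learner only if accuracy is insufficient
if r2_lm < 0.999:  # illustrative accuracy threshold
    gp = GaussianProcessRegressor(alpha=1e-6).fit(train, y_train)
    r2_gp = r2_score(y_test, gp.predict(test))
    print(f"LM R2={r2_lm:.4f}; escalated to GP, R2={r2_gp:.4f}")
```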
Affiliation(s)
- Tiago M de Carvalho, Department of Epidemiology and Biostatistics, Amsterdam UMC, Location VUMC, Amsterdam, the Netherlands
- Harold B Wolff, Department of Epidemiology and Biostatistics, Amsterdam UMC, Location VUMC, Amsterdam, the Netherlands
- Hendrik Koffijberg, Health Technology and Services Research Department, Faculty of Behavioral Management and Social Sciences, Technical Medical Centre, University of Twente, Enschede, the Netherlands
- Veerle M H Coupé, Department of Epidemiology and Biostatistics, Amsterdam UMC, Location VUMC, Amsterdam, the Netherlands
4
Jalal H, Trikalinos TA, Alarid-Escudero F. BayCANN: Streamlining Bayesian Calibration With Artificial Neural Network Metamodeling. Front Physiol 2021; 12:662314. [PMID: 34113262] [PMCID: PMC8185956] [DOI: 10.3389/fphys.2021.662314]
Abstract
Purpose: Bayesian calibration is generally superior to standard direct-search algorithms in that it estimates the full joint posterior distribution of the calibrated parameters. However, there are many barriers to using Bayesian calibration in health decision sciences, stemming from the need to program complex models in probabilistic programming languages and the associated computational burden. In this paper, we propose artificial neural networks (ANNs) as one practical solution to these challenges. Methods: Bayesian Calibration using Artificial Neural Networks (BayCANN) involves (1) training an ANN metamodel on a sample of model inputs and outputs, and (2) calibrating the trained ANN metamodel instead of the full model in a probabilistic programming language to obtain the posterior joint distribution of the calibrated parameters. We illustrate BayCANN using a colorectal cancer natural history model. We conduct a confirmatory simulation analysis by first obtaining parameter estimates from the literature and then using them to generate adenoma prevalence and cancer incidence targets. We compare the performance of BayCANN in recovering these "true" parameter values against a Bayesian calibration performed directly on the simulation model using an incremental mixture importance sampling (IMIS) algorithm. Results: We were able to apply BayCANN using only a dataset of the model inputs and outputs and minor modifications of BayCANN's code. In this example, BayCANN was slightly more accurate than IMIS in recovering the true posterior parameter estimates. Obtaining the dataset of samples and running BayCANN took 15 min, compared with 80 min for IMIS. In applications involving computationally more expensive simulations (e.g., microsimulations), BayCANN may offer higher relative speed gains. Conclusions: BayCANN uses only a dataset of model inputs and outputs to obtain the calibrated joint parameter distributions; thus, it can be adapted to models of various levels of complexity with minor or no change to its structure. Its efficiency can be especially useful in computationally expensive models. To facilitate wider adoption, we provide BayCANN's open-source implementation in R and Stan.
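A compressed, Python-only sketch of the two BayCANN steps follows. The paper's actual implementation is in R and Stan; here a crude rejection sampler stands in for the Stan sampler, and the one-parameter "simulator" is invented for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def simulator(theta):
    """Stand-in natural-history model: 1 parameter -> 1 calibration target."""
    return 3.0 * theta + 0.2 * np.sin(5 * theta)

rng = np.random.default_rng(3)

# Step 1: train an ANN metamodel on simulator input-output pairs
theta_train = rng.uniform(0, 1, 2000)[:, None]
ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                   random_state=0).fit(theta_train, simulator(theta_train).ravel())

# Step 2: calibrate the cheap metamodel instead of the simulator
target = simulator(np.array([0.6])).item()     # pretend observed target
sigma = 0.05                                   # assumed target tolerance
prior = rng.uniform(0, 1, 20000)[:, None]      # draws from the prior
keep = np.abs(ann.predict(prior) - target) < 2 * sigma   # cheap ANN evaluations
posterior = prior[keep]
print(f"posterior mean ~ {posterior.mean():.2f} (true value 0.60)")
```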
Affiliation(s)
- Hawre Jalal, Department of Health Policy and Management, University of Pittsburgh, Graduate School of Public Health, Pittsburgh, PA, United States
- Thomas A. Trikalinos, Departments of Health Services, Policy & Practice and Biostatistics, Brown University, Providence, RI, United States
- Fernando Alarid-Escudero, Division of Public Administration, Center for Research and Teaching in Economics (CIDE), Aguascalientes, Mexico
5
Simulation Modeling and Metamodeling to Inform National and International HIV Policies for Children and Adolescents. J Acquir Immune Defic Syndr 2019; 78 Suppl 1:S49-S57. [PMID: 29994920] [PMCID: PMC6042862] [DOI: 10.1097/qai.0000000000001749]
Abstract
Objective and Approach: Computer-based simulation models serve an important purpose in informing HIV care for children and adolescents. We review current model-based approaches to informing pediatric and adolescent HIV estimates and guidelines. Findings: Clinical disease simulation models and epidemiologic models are used to inform global and regional estimates of numbers of children and adolescents living with HIV and in need of antiretroviral therapy, to develop normative guidelines addressing strategies for diagnosis and treatment of HIV in children, and to forecast future need for pediatric and adolescent antiretroviral therapy formulations and commodities. To improve current model-generated estimates and policy recommendations, better country-level and regional-level data are needed about children living with HIV, as are improved data about survival and treatment outcomes for children with perinatal HIV infection as they age into adolescence and adulthood. In addition, novel metamodeling and value of information methods are being developed to improve the transparency of model methods and results, as well as to allow users to more easily tailor model-based analyses to their own settings. Conclusions: Substantial progress has been made in using models to estimate the size of the pediatric and adolescent HIV epidemic, to inform the development of guidelines for children and adolescents affected by HIV, and to support targeted implementation of policy recommendations to maximize impact. Ongoing work will address key limitations and further improve these model-based projections.
6
Alam MF, Briggs A. Artificial neural network metamodel for sensitivity analysis in a total hip replacement health economic model. Expert Rev Pharmacoecon Outcomes Res 2019; 20:629-640. [PMID: 31491359] [DOI: 10.1080/14737167.2019.1665512]
Abstract
Objectives: Metamodels have been used to approximate complex simulations and have many applications, including sensitivity analysis and optimization. However, their use in health economics is very limited, and the application of an artificial neural network (ANN) to a health economic model had not previously been investigated. This study introduces the ANN as a metamodeling method for conducting sensitivity analysis in a total hip replacement decision-analytic model and compares its performance with two counterparts. Methods: First, a nonlinear factor screening method was adopted to screen out unimportant factors from the simulation. Second, an ANN was developed using the important variables to approximate the simulation. Performance of the ANN metamodel was then compared with Gaussian process (GP) and multiple linear regression (MLR) counterparts. Results: The factor screening method identified 12 of the simulation's 31 variables as important. ANN metamodels showed the best predictive capability on the performance measures used (mean squared error of prediction [MSEP] and mean absolute percentage deviation [MAPD]) for predicting both costs and quality-adjusted life-years (QALYs) for two prostheses. Conclusion: The study provides a methodological development in sensitivity analysis and demonstrates that an ANN metamodel is a viable approximation method for computationally expensive health economic simulations.
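The two-step workflow (screen out unimportant factors, then fit an ANN metamodel on the survivors) can be sketched as follows. The toy simulator, the random-forest importance screen, and the 0.05 cutoff are all assumptions standing in for the paper's specific screening method.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
X = rng.uniform(size=(1000, 8))                        # 8 candidate input factors
y = 4 * X[:, 0] + np.sin(6 * X[:, 1]) + 2 * X[:, 2]    # only 3 actually matter

# Step 1: screen factors using a nonlinear learner's importance scores
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
important = np.flatnonzero(forest.feature_importances_ > 0.05)

# Step 2: fit the ANN metamodel on the screened factors only
ann = MLPRegressor(hidden_layer_sizes=(32,), max_iter=4000, random_state=0)
ann.fit(X[:, important], y)
print("kept factors:", important)
```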
Affiliation(s)
- M Fasihul Alam, Department of Public Health, College of Health Sciences, Qatar University, Doha, Qatar
- Andrew Briggs, HEHTA, Institute of Health & Wellbeing, University of Glasgow, Glasgow, UK
7
Youn JH, Stevenson MD, Thokala P, Payne K, Goddard M. Modeling the Economic Impact of Interventions for Older Populations with Multimorbidity: A Method of Linking Multiple Single-Disease Models. Med Decis Making 2019; 39:842-856. [PMID: 31431188] [DOI: 10.1177/0272989x19868987]
Abstract
Introduction. Individuals from older populations tend to have more than 1 health condition (multimorbidity). Current approaches to produce economic evidence for clinical guidelines using decision-analytic models typically use a single-disease approach, which may not appropriately reflect the competing risks within a population with multimorbidity. This study aims to demonstrate a proof-of-concept method of modeling multiple conditions in a single decision-analytic model to estimate the impact of multimorbidity on the cost-effectiveness of interventions. Methods. Multiple conditions were modeled within a single decision-analytic model by linking multiple single-disease models. Individual discrete event simulation models were developed to evaluate the cost-effectiveness of preventative interventions for a case study assuming a UK National Health Service perspective. The case study used 3 diseases (heart disease, Alzheimer's disease, and osteoporosis) that were combined within a single linked model. The linked model, with and without correlations between diseases incorporated, simulated the general population aged 45 years and older to compare results in terms of lifetime costs and quality-adjusted life-years (QALYs). Results. The estimated incremental costs and QALYs for health care interventions differed when 3 diseases were modeled simultaneously (£840; 0.234 QALYs) compared with aggregated results from 3 single-disease models (£408; 0.280 QALYs). With correlations between diseases additionally incorporated, both absolute and incremental cost and QALY estimates changed in different directions, suggesting that the inclusion of correlations can alter model results. Discussion. Linking multiple single-disease models provides a methodological option for decision analysts who undertake research on populations with multimorbidity. It also has potential for wider applications in informing decisions on commissioning of health care services and long-term priority setting across diseases and health care programs by providing potentially more accurate estimates of the relative cost-effectiveness of interventions.
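One reason aggregating single-disease models can mislead is visible in a two-line calculation: annual risks from several diseases do not simply add. The probabilities below are invented for illustration, and independence between diseases is the (strong) simplifying assumption that incorporating correlations, as the paper does, would relax.

```python
# Invented annual death probabilities for the three case-study diseases
p_disease = {"heart disease": 0.020, "Alzheimer's disease": 0.015,
             "osteoporosis": 0.005}

# Summing the 3 single-disease risks would give 0.040; a linked model
# combines them as competing (here, independent) risks instead:
p_combined = 1.0
for p in p_disease.values():
    p_combined *= (1.0 - p)
p_combined = 1.0 - p_combined
print(f"combined annual risk: {p_combined:.4f}")   # slightly below 0.0400
```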
Affiliation(s)
- Ji-Hee Youn, Manchester Centre for Health Economics, Faculty of Biology, Medicine and Health, the University of Manchester, Oxford Road, Manchester, UK
- Matt D Stevenson, School of Health and Related Research, the University of Sheffield, Regent Court, Sheffield, UK
- Praveen Thokala, School of Health and Related Research, the University of Sheffield, Regent Court, Sheffield, UK
- Katherine Payne, Manchester Centre for Health Economics, Faculty of Biology, Medicine and Health, the University of Manchester, Oxford Road, Manchester, UK
- Maria Goddard, Centre for Health Economics, the University of York, Heslington, York, UK
8
Sai A, Vivas-Valencia C, Imperiale TF, Kong N. Multiobjective Calibration of Disease Simulation Models Using Gaussian Processes. Med Decis Making 2019; 39:540-552. [PMID: 31375053] [DOI: 10.1177/0272989x19862560]
Abstract
Background. Developing efficient procedures for model calibration, which entails matching model predictions to observed outcomes, has gained increasing attention. With faithful but complex simulation models established for cancer diseases, key parameters of cancer natural history can be investigated for possible fits, which can subsequently inform optimal prevention and treatment strategies. When multiple calibration targets exist, one approach to identifying optimal parameters relies on the Pareto frontier. However, the computational burden associated with higher-dimensional parameter spaces requires a metamodeling approach. The goal of this work is to explore multiobjective calibration using Gaussian process regression (GPR), with an eye toward how multiple goodness-of-fit (GOF) criteria identify Pareto-optimal parameters. Methods. We applied GPR, a metamodeling technique, to estimate colorectal cancer (CRC)-related prevalence rates simulated from a microsimulation model of CRC natural history, known as the Colon Modeling Open Source Tool (CMOST). We embedded GPR metamodels within a Pareto optimization framework to identify best-fitting parameters for age-, adenoma-, and adenoma staging-dependent transition probabilities and risk factors. The Pareto frontier approach is demonstrated using genetic algorithms with both sum-of-squared errors (SSEs) and Poisson deviance GOF criteria. Results. The GPR metamodel is able to approximate CMOST outputs accurately on 2 separate parameter sets. The two GOF criteria identify different best-fitting parameter sets on the Pareto frontier: the SSE criterion emphasizes age-specific adenoma progression parameters, while the Poisson criterion prioritizes adenoma-specific progression parameters. Conclusion. Different GOF criteria assert different components of the CRC natural history. The combination of multiobjective optimization and nonparametric regression, along with diverse GOF criteria, can advance the calibration process by identifying optimal regions of the underlying parameter landscape.
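The Pareto idea here can be demonstrated without any simulator: score candidate parameter sets under the two GOF criteria the paper uses (SSE and Poisson deviance) and keep the non-dominated candidates. The candidates and targets below are synthetic, and the brute-force dominance filter is a stand-in for the paper's genetic-algorithm search.

```python
import numpy as np

rng = np.random.default_rng(5)
target = np.array([5.0, 12.0, 20.0])                # observed rates (toy)
preds = np.clip(target + rng.normal(0, 2.0, size=(50, 3)), 0.1, None)

sse = ((preds - target) ** 2).sum(axis=1)
# Poisson deviance: 2 * sum[y * log(y / mu) - (y - mu)]
poisson_dev = 2 * (target * np.log(target / preds) - (target - preds)).sum(axis=1)

# Keep candidates not dominated on both criteria simultaneously
scores = np.column_stack([sse, poisson_dev])
pareto = [i for i in range(len(scores))
          if not any((scores[j] <= scores[i]).all() and (scores[j] < scores[i]).any()
                     for j in range(len(scores)))]
print("non-dominated candidates:", pareto)
```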
Affiliation(s)
- Aditya Sai, Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
- Thomas F Imperiale, Indiana University School of Medicine, Indiana University, Indianapolis, IN, USA; Richard A. Roudebush VA Medical Center, Indianapolis, IN, USA; Regenstrief Institute, Indianapolis, IN, USA
- Nan Kong, Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
9
de Carvalho TM, Heijnsdijk EAM, Coffeng L, de Koning HJ. Evaluating Parameter Uncertainty in a Simulation Model of Cancer Using Emulators. Med Decis Making 2019; 39:405-413. [PMID: 31179833] [DOI: 10.1177/0272989x19837631]
Abstract
Background. Microsimulation models have been extensively used in the field of cancer modeling. However, there is substantial uncertainty regarding estimates from these models, for example, overdiagnosis in prostate cancer. This is usually not thoroughly examined due to the high computational effort required. Objective. To quantify uncertainty in model outcomes due to uncertainty in model parameters, using a computationally efficient emulator (Gaussian process regression) instead of the model. Methods. We use a microsimulation model of prostate cancer (microsimulation screening analysis [MISCAN]) to simulate individual life histories. We analyze the effect of parametric uncertainty on overdiagnosis with probabilistic sensitivity analyses (ProbSAs). To minimize the number of MISCAN runs needed for ProbSAs, we emulate MISCAN, using data pairs of parameter values and outcomes to fit a Gaussian process regression model. We evaluate to what extent the emulator accurately reproduces MISCAN by computing its prediction error. Results. Using an emulator instead of MISCAN, we may reduce the computation time necessary to run a ProbSA by more than 85%. The average relative prediction error of the emulator for overdiagnosis equaled 1.7%. We predicted that 42% of screen-detected men are overdiagnosed, with an associated empirical confidence interval between 38% and 48%. Sensitivity analyses show that the accuracy of the emulator is sensitive to which model parameters are included in the training runs. Conclusions. For a computationally expensive simulation model with a large number of parameters, we show it is possible to conduct a ProbSA, within a reasonable computation time, by using a Gaussian process regression emulator instead of the original simulation model.
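The emulator-based ProbSA workflow reduces to: train a GP on a modest number of (parameter, outcome) simulator runs, then push many cheap parameter draws through the GP to get an outcome interval. The "expensive model" below is a two-parameter stand-in, not MISCAN, and the kernel choice is an assumption.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_model(theta):
    """Stand-in outcome (e.g., % overdiagnosed) as a function of 2 parameters."""
    return 40 + 8 * theta[:, 0] - 5 * theta[:, 1] ** 2

rng = np.random.default_rng(6)
design = rng.uniform(0, 1, (60, 2))            # only 60 expensive training runs
emu = GaussianProcessRegressor(kernel=ConstantKernel(1.0) * RBF(0.5),
                               normalize_y=True, alpha=1e-6)
emu.fit(design, expensive_model(design))

draws = rng.uniform(0, 1, (10000, 2))          # ProbSA parameter draws
out = emu.predict(draws)                       # cheap emulator evaluations
lo, hi = np.percentile(out, [2.5, 97.5])
print(f"outcome 95% interval: [{lo:.1f}, {hi:.1f}]")
```

As the abstract notes, accuracy hinges on which parameters (and ranges) are covered by the training design.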
Affiliation(s)
- Tiago M de Carvalho, Department of Public Health, Erasmus Medical Center, Rotterdam, Zuid-Holland, The Netherlands; Department of Applied Health Research, University College London, UK
- Eveline A M Heijnsdijk, Department of Public Health, Erasmus Medical Center, Rotterdam, Zuid-Holland, The Netherlands
- Luc Coffeng, Department of Public Health, Erasmus Medical Center, Rotterdam, Zuid-Holland, The Netherlands
- Harry J de Koning, Department of Public Health, Erasmus Medical Center, Rotterdam, Zuid-Holland, The Netherlands
10
Degeling K, IJzerman M, Koffijberg H. A scoping review of metamodeling applications and opportunities for advanced health economic analyses. Expert Rev Pharmacoecon Outcomes Res 2018; 19:181-187. [DOI: 10.1080/14737167.2019.1548279]
Affiliation(s)
- K. Degeling, Health Technology and Services Research Department, Technical Medical Centre, University of Twente, Enschede, the Netherlands
- M.J. IJzerman, Health Technology and Services Research Department, Technical Medical Centre, University of Twente, Enschede, the Netherlands; Cancer Health Services Research Unit, School of Population and Global Health, Faculty of Medicine, Dentistry and Health Sciences, University of Melbourne, Melbourne, Australia; Victorian Comprehensive Cancer Centre, Melbourne, Australia
- H. Koffijberg, Health Technology and Services Research Department, Technical Medical Centre, University of Twente, Enschede, the Netherlands
11
Jones-Hughes T, Snowsill T, Haasova M, Coelho H, Crathorne L, Cooper C, Mujica-Mota R, Peters J, Varley-Campbell J, Huxley N, Moore J, Allwood M, Lowe J, Hyde C, Hoyle M, Bond M, Anderson R. Immunosuppressive therapy for kidney transplantation in adults: a systematic review and economic model. Health Technol Assess 2018; 20:1-594. [PMID: 27578428] [DOI: 10.3310/hta20620]
Abstract
BACKGROUND End-stage renal disease is a long-term irreversible decline in kidney function requiring renal replacement therapy: kidney transplantation, haemodialysis or peritoneal dialysis. The preferred option is kidney transplantation, followed by immunosuppressive therapy (induction and maintenance therapy) to reduce the risk of kidney rejection and prolong graft survival. OBJECTIVES To review and update the evidence for the clinical effectiveness and cost-effectiveness of basiliximab (BAS) (Simulect®, Novartis Pharmaceuticals UK Ltd) and rabbit anti-human thymocyte immunoglobulin (rATG) (Thymoglobulin®, Sanofi) as induction therapy, and immediate-release tacrolimus (TAC) (Adoport®, Sandoz; Capexion®, Mylan; Modigraf®, Astellas Pharma; Perixis®, Accord Healthcare; Prograf®, Astellas Pharma; Tacni®, Teva; Vivadex®, Dexcel Pharma), prolonged-release tacrolimus (Advagraf®, Astellas Pharma), belatacept (BEL) (Nulojix®, Bristol-Myers Squibb), mycophenolate mofetil (MMF) (Arzip®, Zentiva; CellCept®, Roche Products; Myfenax®, Teva), mycophenolate sodium (MPS) (Myfortic®, Novartis Pharmaceuticals UK Ltd), sirolimus (SRL) (Rapamune®, Pfizer) and everolimus (EVL) (Certican®, Novartis) as maintenance therapy in adult renal transplantation. METHODS Clinical effectiveness searches were conducted until 18 November 2014 in MEDLINE (via Ovid), EMBASE (via Ovid), Cochrane Central Register of Controlled Trials (via Wiley Online Library), Web of Science (via ISI), the Cochrane Database of Systematic Reviews, the Database of Abstracts of Reviews of Effects and Health Technology Assessment (The Cochrane Library via Wiley Online Library) and the Health Management Information Consortium (via Ovid). Cost-effectiveness searches were conducted until 18 November 2014 using a costs or economic literature search filter in MEDLINE (via Ovid), EMBASE (via Ovid), NHS Economic Evaluation Database (via Wiley Online Library), Web of Science (via ISI), Health Economic Evaluations Database (via Wiley Online Library) and the American Economic Association's electronic bibliography (via EconLit, EBSCOhost). Included studies were selected according to predefined methods and criteria. A random-effects model was used to analyse clinical effectiveness data (odds ratios for binary data and mean differences for continuous data). Network meta-analyses were undertaken within a Bayesian framework. A new discrete-time state-transition economic model (semi-Markov) was developed, with acute rejection, graft function (GRF) and new-onset diabetes mellitus used to extrapolate graft survival. Recipients were assumed to be in one of three health states: functioning graft, graft loss or death. RESULTS Eighty-nine randomised controlled trials (RCTs), of variable quality, were included. For induction therapy, no treatment appeared more effective than another in reducing graft loss or mortality. Compared with placebo/no induction, rATG and BAS appeared more effective in reducing biopsy-proven acute rejection (BPAR), and BAS appeared more effective at improving GRF. For maintenance therapy, no treatment was better for all outcomes and no treatment appeared most effective at reducing graft loss. BEL + MMF appeared more effective than TAC + MMF and SRL + MMF at reducing mortality. MMF + CSA (ciclosporin), TAC + MMF, SRL + TAC, TAC + AZA (azathioprine) and EVL + CSA appeared more effective than CSA + AZA and EVL + MPS at reducing BPAR. SRL + AZA, TAC + AZA, TAC + MMF and BEL + MMF appeared to improve GRF compared with CSA + AZA and MMF + CSA. In the base-case deterministic and probabilistic analyses, BAS, MMF and TAC were predicted to be cost-effective at £20,000 and £30,000 per quality-adjusted life-year (QALY). When comparing all regimens, only BAS + TAC + MMF was cost-effective at £20,000 and £30,000 per QALY. LIMITATIONS For included trials, there was substantial methodological heterogeneity, few trials reported follow-up beyond 1 year, and there were insufficient data to perform subgroup analysis. Treatment discontinuation and switching were not modelled. FUTURE WORK High-quality, better-reported, longer-term RCTs are needed. Ideally, these would be sufficiently powered for subgroup analysis and include health-related quality of life as an outcome. CONCLUSION Only a regimen of BAS induction followed by maintenance with TAC and MMF is likely to be cost-effective at £20,000-30,000 per QALY. STUDY REGISTRATION This study is registered as PROSPERO CRD42014013189. FUNDING The National Institute for Health Research Health Technology Assessment programme.
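The three-state structure described in the methods (functioning graft, graft loss, death) can be sketched as a deliberately simplified time-homogeneous cohort model; the review's actual model is semi-Markov with time-dependent transitions. All transition probabilities, costs, and utilities below are invented for illustration.

```python
import numpy as np

P = np.array([[0.92, 0.05, 0.03],   # functioning -> functioning / loss / death
              [0.00, 0.90, 0.10],   # graft loss (e.g., back on dialysis)
              [0.00, 0.00, 1.00]])  # death is absorbing
cost = np.array([5000.0, 25000.0, 0.0])   # assumed annual cost per state (£)
util = np.array([0.75, 0.55, 0.0])        # assumed annual utility per state

state = np.array([1.0, 0.0, 0.0])   # cohort starts with a functioning graft
total_cost = total_qaly = 0.0
for _ in range(20):                  # 20 annual cycles, no discounting
    total_cost += state @ cost
    total_qaly += state @ util
    state = state @ P

print(f"20-year cost per patient: {total_cost:,.0f}; QALYs: {total_qaly:.2f}")
```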
Affiliation(s)
- Tracey Jones-Hughes, Peninsula Technology Assessment Group (PenTAG), University of Exeter, Exeter, UK
- Tristan Snowsill, Peninsula Technology Assessment Group (PenTAG), University of Exeter, Exeter, UK
- Marcela Haasova, Peninsula Technology Assessment Group (PenTAG), University of Exeter, Exeter, UK
- Helen Coelho, Peninsula Technology Assessment Group (PenTAG), University of Exeter, Exeter, UK
- Louise Crathorne, Peninsula Technology Assessment Group (PenTAG), University of Exeter, Exeter, UK
- Chris Cooper, Peninsula Technology Assessment Group (PenTAG), University of Exeter, Exeter, UK
- Ruben Mujica-Mota, Peninsula Technology Assessment Group (PenTAG), University of Exeter, Exeter, UK
- Jaime Peters, Peninsula Technology Assessment Group (PenTAG), University of Exeter, Exeter, UK
- Jo Varley-Campbell, Peninsula Technology Assessment Group (PenTAG), University of Exeter, Exeter, UK
- Nicola Huxley, Peninsula Technology Assessment Group (PenTAG), University of Exeter, Exeter, UK
- Jason Moore, Exeter Kidney Unit, Royal Devon and Exeter Foundation Trust Hospital, Exeter, UK
- Matt Allwood, Peninsula Technology Assessment Group (PenTAG), University of Exeter, Exeter, UK
- Jenny Lowe, Peninsula Technology Assessment Group (PenTAG), University of Exeter, Exeter, UK
- Chris Hyde, Peninsula Technology Assessment Group (PenTAG), University of Exeter, Exeter, UK
- Martin Hoyle, Peninsula Technology Assessment Group (PenTAG), University of Exeter, Exeter, UK
- Mary Bond, Peninsula Technology Assessment Group (PenTAG), University of Exeter, Exeter, UK
- Rob Anderson, Peninsula Technology Assessment Group (PenTAG), University of Exeter, Exeter, UK

12
Lee Y, Park JS. Model selection algorithm in Gaussian process regression for computer experiments. Communications for Statistical Applications and Methods 2017. [DOI: 10.5351/csam.2017.24.4.383] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Affiliation(s)
- Youngsaeng Lee, Department of Statistics, Chonnam National University, Korea
- Jeong-Soo Park, Department of Statistics, Chonnam National University, Korea

13
Sharif B, Wong H, Anis AH, Kopec JA. A Practical ANOVA Approach for Uncertainty Analysis in Population-Based Disease Microsimulation Models. Value Health 2017; 20:710-717. [PMID: 28408016 DOI: 10.1016/j.jval.2017.01.002] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/21/2016] [Revised: 12/30/2016] [Accepted: 01/05/2017] [Indexed: 06/07/2023]
Abstract
OBJECTIVES To provide a practical approach for calculating uncertainty intervals and variance components associated with initial-condition and dynamic-equation parameters in computationally expensive population-based disease microsimulation models. METHODS In the proposed uncertainty analysis approach, we calculated the required computational time and the number of runs given a user-defined error bound on the variance of the grand mean. The equations for optimal sample sizes were derived by minimizing the variance of the grand mean using initial estimates for the variance components. Finally, analysis of variance estimators were used to calculate unbiased variance estimates. RESULTS To illustrate the proposed approach, we performed an uncertainty analysis of the total direct cost of osteoarthritis in Canada from 2010 to 2031 using a previously published population health microsimulation model of osteoarthritis. We first calculated crude estimates of initial-population sampling uncertainty and dynamic-equation parameter uncertainty by performing a small number of runs. We then calculated the optimal sample sizes and finally derived 95% uncertainty intervals for the total cost and unbiased estimates for the variance components. According to our results, the contribution of dynamic-equation parameter uncertainty to the overall variance was higher than that of initial-population sampling uncertainty throughout the study period. CONCLUSIONS The proposed analysis of variance approach provides uncertainty intervals for the mean outcome in addition to unbiased estimates for each source of uncertainty. The contributions of each source can then be compared with each other for validation purposes so as to improve the model's accuracy.
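The variance decomposition described in this abstract can be illustrated with one-way random-effects ANOVA estimators. Everything below is a hypothetical stand-in (a toy `run_model` function with invented effect sizes and sample sizes), not the authors' osteoarthritis model:

```python
import numpy as np

rng = np.random.default_rng(0)

def run_model(theta, rng):
    """Hypothetical stand-in for one expensive microsimulation run:
    a dynamic-equation parameter effect plus initial-population noise."""
    return 100.0 + 10.0 * theta + rng.normal(0.0, 2.0)

m, n = 50, 20                            # parameter draws x replications per draw
thetas = rng.normal(0.0, 1.0, m)         # dynamic-equation parameter uncertainty
y = np.array([[run_model(t, rng) for _ in range(n)] for t in thetas])

# One-way random-effects ANOVA estimators of the two variance components.
group_means = y.mean(axis=1)
msb = n * group_means.var(ddof=1)        # between-group mean square
msw = y.var(axis=1, ddof=1).mean()       # within-group mean square
var_initial = msw                        # initial-population sampling variance
var_param = max((msb - msw) / n, 0.0)    # parameter-uncertainty variance
se_grand_mean = np.sqrt(msb / (m * n))   # Monte Carlo error of the grand mean
```

With estimates of the two components in hand, the optimal split of a fixed run budget between `m` and `n` can then be chosen to minimize `se_grand_mean`, which is the logic of the paper's sample-size step.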
Affiliation(s)
- Behnam Sharif, Faculty of Medicine, Department of Community Health Sciences, University of Calgary, Calgary, Alberta, Canada
- Hubert Wong, School of Population and Public Health, University of British Columbia, Vancouver, British Columbia, Canada
- Aslam H Anis, School of Population and Public Health, University of British Columbia, Vancouver, British Columbia, Canada
- Jacek A Kopec, School of Population and Public Health, University of British Columbia, Vancouver, British Columbia, Canada

14
Hawkins N, Sculpher M, Epstein D. Cost-Effectiveness Analysis of Treatments for Chronic Disease: Using R to Incorporate Time Dependency of Treatment Response. Med Decis Making 2005; 25:511-9. [PMID: 16160207 DOI: 10.1177/0272989x05280562] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
When constructing decision-analytic models to evaluate the cost-effectiveness of alternative treatments, we often need to extrapolate beyond the available experimental data, as these typically relate to a limited period starting from the initiation of a new treatment or the diagnosis of the current disease state. We may also be required to extrapolate beyond the available experimental evidence to compare potential treatment sequences. Markov models are often used for this extrapolation. These models have the defining assumption that future transition probabilities are independent of past transitions. This means that, in general, transition probabilities cannot be conditional on the time spent in a given state. Where data exist to show that the risks of transition are conditional on the time spent in the treatment state, the simplifying Markov assumption can result in a loss in the model's "face validity," and misleading results might be generated. Several methods are available to incorporate time dependency into transition probabilities based on standard methods and software. These include the use of tunnel states in Markov models and patient-level simulation, in which a series of individual patients is simulated. This article considers the features and limitations of these methods and also describes a novel approach to building time dependency into a Markov model by incorporating an additional time dimension, resulting in a "semi-Markov" model. An example of the implementation of such a model, using the R statistical programming language, is illustrated using a cost-effectiveness model for new epilepsy therapies.
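The tunnel-state device this abstract refers to can be sketched in a few lines: give the treatment state one sub-state per cycle already spent in it, so the exit probability can depend on time in state. The declining failure probability below is illustrative, not taken from the epilepsy model:

```python
import numpy as np

def p_fail(t):
    """Illustrative time-in-state dependent transition probability:
    the risk of treatment failure falls with time already on treatment."""
    return 0.30 * np.exp(-0.2 * t)

T = 40                         # cycles to simulate
on_treat = np.zeros(T + 1)     # on_treat[t]: cohort share in the t-th tunnel state
on_treat[0] = 1.0              # the whole cohort starts treatment at cycle 0
failed = 0.0

for cycle in range(T):
    nxt = np.zeros(T + 1)
    for t in range(T):
        stay = on_treat[t] * (1.0 - p_fail(t))
        failed += on_treat[t] - stay
        nxt[t + 1] = stay      # survivors advance to the next tunnel state
    on_treat = nxt

surviving = on_treat.sum()     # share still on treatment after T cycles
```

A standard Markov model is the special case where `p_fail` is constant; the extra index over `t` is exactly the "additional time dimension" the article describes.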
Affiliation(s)
- Neil Hawkins, Centre for Health Economics, University of York, York, UK YO10 5DD

15
Goldhaber-Fiebert JD, Jalal HJ. Some Health States Are Better Than Others: Using Health State Rank Order to Improve Probabilistic Analyses. Med Decis Making 2015; 36:927-40. [PMID: 26377369 DOI: 10.1177/0272989x15605091] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2014] [Accepted: 08/18/2015] [Indexed: 11/15/2022]
Abstract
BACKGROUND Probabilistic sensitivity analyses (PSA) may lead policy makers to take nonoptimal actions due to misestimates of decision uncertainty caused by ignoring correlations. We developed a method to establish joint uncertainty distributions of quality-of-life (QoL) weights exploiting ordinal preferences over health states. METHODS Our method takes as inputs independent, univariate marginal distributions for each QoL weight and a preference ordering. It establishes a correlation matrix between QoL weights intended to preserve the ordering. It samples QoL weight values from their distributions, ordering them with the correlation matrix. It calculates the proportion of samples violating the ordering, iteratively adjusting the correlation matrix until this proportion is below an arbitrarily small threshold. We compare our method with the uncorrelated method and other methods for preserving rank ordering in terms of violation proportions and fidelity to the specified marginal distributions along with PSA and expected value of partial perfect information (EVPPI) estimates, using 2 models: 1) a decision tree with 2 decision alternatives and 2) a chronic hepatitis C virus (HCV) Markov model with 3 alternatives. RESULTS All methods make tradeoffs between violating preference orderings and altering marginal distributions. For both models, our method simultaneously performed best, with largest performance advantages when distributions reflected wider uncertainty. For PSA, larger changes to the marginal distributions induced by existing methods resulted in differing conclusions about which strategy was most likely optimal. For EVPPI, both preference order violations and altered marginal distributions caused existing methods to misestimate the maximum value of seeking additional information, sometimes concluding that there was no value. 
CONCLUSIONS Analysts can characterize the joint uncertainty in QoL weights to improve PSA and value-of-information estimates using Open Source implementations of our method.
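One simple way to realize the idea in this abstract (keep each marginal distribution, but correlate the draws until order violations are rare) is a common-factor sampler whose correlation is tightened iteratively. The marginals, threshold, and step size below are hypothetical, and this is a sketch of the general idea rather than the authors' exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000

# Hypothetical marginals for three QoL weights that should satisfy
# mild >= moderate >= severe.
means = np.array([0.85, 0.70, 0.50])
sds = np.array([0.05, 0.05, 0.05])

def sample(rho):
    """Draw QoL weights with a shared factor inducing correlation rho;
    each column's marginal stays exactly N(mean, sd)."""
    z_common = rng.normal(size=(n, 1))
    z_indep = rng.normal(size=(n, 3))
    z = np.sqrt(rho) * z_common + np.sqrt(1.0 - rho) * z_indep
    return means + sds * z

def violation_rate(x):
    """Proportion of samples violating the preference ordering."""
    return np.mean((x[:, 0] < x[:, 1]) | (x[:, 1] < x[:, 2]))

rho, target = 0.0, 0.01
while violation_rate(sample(rho)) > target and rho < 0.99:
    rho += 0.05                # tighten correlation until the ordering holds
```

This captures the tradeoff the paper measures: higher correlation suppresses ordering violations while the univariate marginals are preserved by construction.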
Affiliation(s)
- Jeremy D Goldhaber-Fiebert, Stanford Health Policy, Centers for Health Policy and Primary Care and Outcomes Research, Department of Medicine, Stanford University, Stanford, CA, USA
- Hawre J Jalal, Stanford Health Policy, Centers for Health Policy and Primary Care and Outcomes Research, Department of Medicine, Stanford University, Stanford, CA, USA; Health Policy and Management, University of Pittsburgh, Pittsburgh, PA, USA

16
Wolowacz S, Pearson I, Shannon P, Chubb B, Gundgaard J, Davies M, Briggs A. Development and validation of a cost-utility model for Type 1 diabetes mellitus. Diabet Med 2015; 32:1023-35. [PMID: 25484028 DOI: 10.1111/dme.12663] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 12/02/2014] [Indexed: 11/28/2022]
Abstract
AIMS To develop a health economic model to evaluate the cost-effectiveness of new interventions for Type 1 diabetes mellitus by their effects on long-term complications (measured through mean HbA1c) while capturing the impact of treatment on hypoglycaemic events. METHODS Through a systematic review, we identified complications associated with Type 1 diabetes mellitus and data describing the long-term incidence of these complications. An individual patient simulation model was developed and included the following complications: cardiovascular disease, peripheral neuropathy, microalbuminuria, end-stage renal disease, proliferative retinopathy, ketoacidosis, cataract, hypoglycaemia and adverse birth outcomes. Risk equations were developed from published cumulative incidence data and hazard ratios for the effect of HbA1c, age and duration of diabetes. We validated the model by comparing model predictions with observed outcomes from studies used to build the model (internal validation) and from other published data (external validation). We performed illustrative analyses for typical patient cohorts and a hypothetical intervention. RESULTS Model predictions were within 2% of expected values in the internal validation and within 8% of observed values in the external validation (percentages represent absolute differences in the cumulative incidence). CONCLUSIONS The model utilized high-quality, recent data specific to people with Type 1 diabetes mellitus. In the model validation, results deviated less than 8% from expected values.
Affiliation(s)
- S Wolowacz, Health Economics, RTI Health Solutions, Manchester
- I Pearson, Health Economics, RTI Health Solutions, Manchester
- P Shannon, Patient-Reported Outcomes, RTI Health Solutions, Manchester
- B Chubb, European Health Economics & Outcomes Research, Novo Nordisk Ltd, Gatwick, UK
- J Gundgaard, Health Economics and HTA, Novo Nordisk A/S, Bagsvaerd, Denmark
- M Davies, Diabetes Research Centre, University of Leicester, Leicester
- A Briggs, Institute for Health and Wellbeing, University of Glasgow, Glasgow, UK

17
Welton NJ, Thom HHZ. Value of Information: We've Got Speed, What More Do We Need? Med Decis Making 2015; 35:564-6. [PMID: 25840903 DOI: 10.1177/0272989x15579164] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2015] [Accepted: 03/05/2015] [Indexed: 11/15/2022]
Affiliation(s)
- Nicky J Welton, School of Social and Community Medicine, University of Bristol, Bristol, UK
- Howard H Z Thom, School of Social and Community Medicine, University of Bristol, Bristol, UK

18
Stevenson MD, Selby PL. Modelling the cost effectiveness of interventions for osteoporosis: issues to consider. Pharmacoeconomics 2014; 32:735-743. [PMID: 24715605 DOI: 10.1007/s40273-014-0156-8] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
Expenditure on treating osteoporotic fractures and on preventive intervention is considerable and is likely to rise in forthcoming years owing to the association between fracture risk and age. With funders such as the National Institute for Health and Care Excellence and the Pharmaceutical Benefits Advisory Committee explicitly considering cost-effectiveness analyses within the process of producing guidance, it is imperative that economic models are as robust as possible. This article details issues that need to be considered specifically in health technology assessments of interventions for osteoporosis, and highlights limitations within the current evidence base. For each key issue, the likely direction of its impact on cost effectiveness is indicated, together with a tentative categorization of the magnitude of that impact. It is likely that cost-effectiveness ratios presented in previous models that did not address the identified issues were favourable to interventions.
Affiliation(s)
- Matt D Stevenson, University of Sheffield, School of Health and Related Research, 30 Regent Street, Sheffield, S1 4DA, UK

19
Strong M, Oakley JE, Brennan A. Estimating multiparameter partial expected value of perfect information from a probabilistic sensitivity analysis sample: a nonparametric regression approach. Med Decis Making 2014; 34:311-26. [PMID: 24246566 PMCID: PMC4819801 DOI: 10.1177/0272989x13505910] [Citation(s) in RCA: 159] [Impact Index Per Article: 15.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2013] [Accepted: 08/27/2013] [Indexed: 11/17/2022]
Abstract
The partial expected value of perfect information (EVPI) quantifies the expected benefit of learning the values of uncertain parameters in a decision model. Partial EVPI is commonly estimated via a 2-level Monte Carlo procedure in which parameters of interest are sampled in an outer loop, and then conditional on these, the remaining parameters are sampled in an inner loop. This is computationally demanding and may be difficult if correlation between input parameters results in conditional distributions that are hard to sample from. We describe a novel nonparametric regression-based method for estimating partial EVPI that requires only the probabilistic sensitivity analysis sample (i.e., the set of samples drawn from the joint distribution of the parameters and the corresponding net benefits). The method is applicable in a model of any complexity and with any specification of input parameter distribution. We describe the implementation of the method via 2 nonparametric regression modeling approaches, the Generalized Additive Model and the Gaussian process. We demonstrate in 2 case studies the superior efficiency of the regression method over the 2-level Monte Carlo method. R code is made available to implement the method.
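The core of the regression method can be sketched on a toy PSA sample: regress each option's net benefit on the parameter(s) of interest, then take the expected maximum of the fitted values minus the maximum of the expected values. Here ordinary polynomial regression stands in for the paper's GAM and Gaussian-process smoothers, and the two-option net-benefit model is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

# Toy PSA sample: two uncertain parameters, two decision options.
theta1 = rng.normal(0.0, 1.0, n)          # parameter of interest
theta2 = rng.normal(0.0, 1.0, n)          # remaining uncertainty
nb = np.column_stack([
    1000 + 600 * theta1 + 300 * theta2,   # net benefit, option A
    1200 - 200 * theta1 - 300 * theta2,   # net benefit, option B
])

# Fitted values from a regression of net benefit on theta1 alone estimate
# E[NB_d | theta1].  (polyfit is a simple stand-in for a GAM/GP smoother.)
fitted = np.column_stack([
    np.polyval(np.polyfit(theta1, nb[:, d], deg=3), theta1)
    for d in range(nb.shape[1])
])

# EVPI uses raw net benefits; partial EVPI uses the conditional expectations.
evpi = nb.max(axis=1).mean() - nb.mean(axis=0).max()
evppi = fitted.max(axis=1).mean() - fitted.mean(axis=0).max()
```

No inner Monte Carlo loop is needed: the single PSA sample is reused, which is exactly the efficiency gain the paper demonstrates over two-level simulation.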
Affiliation(s)
- Mark Strong, School of Health and Related Research (ScHARR), University of Sheffield, 30 Regent Street, Sheffield S1 4DA, UK
- Jeremy E. Oakley, School of Mathematics and Statistics, University of Sheffield, Sheffield, UK
- Alan Brennan, School of Health and Related Research (ScHARR), University of Sheffield, Sheffield, UK

20
Green LE, Dinh TA, Hinds DA, Walser BL, Allman R. Economic evaluation of using a genetic test to direct breast cancer chemoprevention in white women with a previous breast biopsy. Appl Health Econ Health Policy 2014; 12:203-217. [PMID: 24595521 DOI: 10.1007/s40258-014-0089-6] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
BACKGROUND Tamoxifen therapy reduces the risk of breast cancer but increases the risk of serious adverse events including endometrial cancer and thromboembolic events. OBJECTIVES The cost effectiveness of using a commercially available breast cancer risk assessment test (BREVAGen™) to inform the decision of which women should undergo chemoprevention by tamoxifen was modeled in a simulated population of women who had undergone biopsies but had no diagnosis of cancer. METHODS A continuous time, discrete event, mathematical model was used to simulate a population of white women aged 40-69 years, who were at elevated risk for breast cancer because of a history of benign breast biopsy. Women were assessed for clinical risk of breast cancer using the Gail model and for genetic risk using a panel of seven common single nucleotide polymorphisms. We evaluated the cost effectiveness of using genetic risk together with clinical risk, instead of clinical risk alone, to determine eligibility for 5 years of tamoxifen therapy. In addition to breast cancer, the simulation included health states of endometrial cancer, pulmonary embolism, deep-vein thrombosis, stroke, and cataract. Estimates of costs in 2012 US dollars were based on Medicare reimbursement rates reported in the literature, and utilities for modeled health states were calculated as an average of utilities reported in the literature. A 50-year time horizon was used to observe lifetime effects including survival benefits. RESULTS For women at intermediate risk of developing breast cancer (1.2-1.66% 5-year risk), the incremental cost-effectiveness ratio for the combined genetic and clinical risk assessment strategy over the clinical risk assessment-only strategy was US$47,000, US$44,000, and US$65,000 per quality-adjusted life-year gained for women aged 40-49, 50-59, and 60-69 years, respectively (assuming a price of US$945 for genetic testing). Results were sensitive to assumptions about patient adherence, utility of life while taking tamoxifen, and cost of genetic testing. CONCLUSIONS From the US payer's perspective, the combined genetic and clinical risk assessment strategy may be a moderately cost-effective alternative to using clinical risk alone to guide chemoprevention recommendations for women at intermediate risk of developing breast cancer.
Affiliation(s)
- Linda E Green, Department of Mathematics, University of North Carolina at Chapel Hill, CB#3250, Chapel Hill, NC, 27599, USA

21
Madan J, Ades AE, Price M, Maitland K, Jemutai J, Revill P, Welton NJ. Strategies for efficient computation of the expected value of partial perfect information. Med Decis Making 2014; 34:327-42. [PMID: 24449434 PMCID: PMC4948652 DOI: 10.1177/0272989x13514774] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
Abstract
Expected value of information methods evaluate the potential health benefits that can be obtained from conducting new research to reduce uncertainty in the parameters of a cost-effectiveness analysis model, hence reducing decision uncertainty. Expected value of partial perfect information (EVPPI) provides an upper limit to the health gains that can be obtained from conducting a new study on a subset of parameters in the cost-effectiveness analysis and can therefore be used as a sensitivity analysis to identify parameters that most contribute to decision uncertainty and to help guide decisions around which types of study are of most value to prioritize for funding. A common general approach is to use nested Monte Carlo simulation to obtain an estimate of EVPPI. This approach is computationally intensive, can lead to significant sampling bias if an inadequate number of inner samples are obtained, and incorrect results can be obtained if correlations between parameters are not dealt with appropriately. In this article, we set out a range of methods for estimating EVPPI that avoid the need for nested simulation: reparameterization of the net benefit function, Taylor series approximations, and restricted cubic spline estimation of conditional expectations. For each method, we set out the generalized functional form that net benefit must take for the method to be valid. By specifying this functional form, our methods are able to focus on components of the model in which approximation is required, avoiding the complexities involved in developing statistical approximations for the model as a whole. Our methods also allow for any correlations that might exist between model parameters. We illustrate the methods using an example of fluid resuscitation in African children with severe malaria.
Affiliation(s)
- Jason Madan, School of Social and Community Medicine, University of Bristol, Bristol, UK; Warwick Medical School, University of Warwick, Coventry, UK
- Anthony E Ades, School of Social and Community Medicine, University of Bristol, Bristol, UK
- Malcolm Price, School of Social and Community Medicine, University of Bristol, Bristol, UK; School of Health and Population Sciences, University of Birmingham, Birmingham, UK
- Kathryn Maitland, Department of Medicine, Imperial College, London, UK; KEMRI-Wellcome Trust Research Programme, Kilifi, Kenya
- Julie Jemutai, KEMRI-Wellcome Trust Research Programme, Kilifi, Kenya
- Paul Revill, Centre for Health Economics, University of York, York, UK
- Nicky J Welton, School of Social and Community Medicine, University of Bristol, Bristol, UK

22
Jalal H, Dowd B, Sainfort F, Kuntz KM. Linear regression metamodeling as a tool to summarize and present simulation model results. Med Decis Making 2013; 33:880-90. [PMID: 23811758 DOI: 10.1177/0272989x13492014] [Citation(s) in RCA: 31] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
BACKGROUND / OBJECTIVE Modelers lack a tool to systematically and clearly present complex model results, including those from sensitivity analyses. The objective was to propose linear regression metamodeling as a tool to increase transparency of decision analytic models and better communicate their results. METHODS We used a simplified cancer cure model to demonstrate our approach. The model computed the lifetime cost and benefit of 3 treatment options for cancer patients. We simulated 10,000 cohorts in a probabilistic sensitivity analysis (PSA) and regressed the model outcomes on the standardized input parameter values in a set of regression analyses. We used the regression coefficients to describe measures of sensitivity analyses, including threshold and parameter sensitivity analyses. We also compared the results of the PSA to deterministic full-factorial and one-factor-at-a-time designs. RESULTS The regression intercept represented the estimated base-case outcome, and the other coefficients described the relative parameter uncertainty in the model. We defined simple relationships that compute the average and incremental net benefit of each intervention. Metamodeling produced outputs similar to traditional deterministic 1-way or 2-way sensitivity analyses but was more reliable since it used all parameter values. CONCLUSIONS Linear regression metamodeling is a simple, yet powerful, tool that can assist modelers in communicating model characteristics and sensitivity analyses.
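A sketch of the approach on a made-up PSA sample: standardize the inputs, regress the outcome on them, then read the intercept as the expected base-case outcome and the absolute coefficients as comparable 1-SD sensitivities. The cost function and parameter distributions are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000

# Toy PSA: lifetime cost as a function of three uncertain inputs.
p_relapse = rng.beta(2, 8, n)            # probability of relapse
c_treat = rng.gamma(9, 1000, n)          # treatment cost
rr = rng.lognormal(0.0, 0.2, n)          # relative risk under treatment
cost = c_treat + 50_000 * p_relapse * rr

# Standardize inputs so the coefficients are directly comparable.
X = np.column_stack([p_relapse, c_treat, rr])
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
A = np.column_stack([np.ones(n), Xs])

beta, *_ = np.linalg.lstsq(A, cost, rcond=None)
base_case = beta[0]                      # expected outcome at mean inputs
importance = np.abs(beta[1:])            # change in cost per 1-SD input change
```

Because the metamodel uses every PSA draw, the coefficients summarize sensitivity over the whole input space rather than along one axis at a time, which is the advantage the abstract claims over one-way deterministic analyses.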
Affiliation(s)
- Hawre Jalal, Division of Health Policy and Management, School of Public Health, University of Minnesota, Minneapolis, MN
- Bryan Dowd, Division of Health Policy and Management, School of Public Health, University of Minnesota, Minneapolis, MN
- François Sainfort, Division of Health Policy and Management, School of Public Health, University of Minnesota, Minneapolis, MN
- Karen M Kuntz, Division of Health Policy and Management, School of Public Health, University of Minnesota, Minneapolis, MN

23
Sadatsafavi M, Bansback N, Zafari Z, Najafzadeh M, Marra C. Need for speed: an efficient algorithm for calculation of single-parameter expected value of partial perfect information. Value Health 2013; 16:438-448. [PMID: 23538197 DOI: 10.1016/j.jval.2012.10.018] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/19/2011] [Revised: 09/13/2012] [Accepted: 10/21/2012] [Indexed: 06/02/2023]
Abstract
BACKGROUND The expected value of partial perfect information (EVPPI) is a theoretically justifiable and informative measure of uncertainty in decision-analytic cost-effectiveness models, but its calculation is computationally intensive because it generally requires two-level Monte Carlo simulation. We introduce an efficient, one-level simulation method for the calculation of single-parameter EVPPI. OBJECTIVE We show that under mild regularity assumptions, the expectation-maximization-expectation sequence in EVPPI calculation can be transformed into an expectation-maximization-maximization sequence. By doing so, calculations can be performed in a single-step expectation by using data generated for probabilistic sensitivity analysis. We prove that the proposed estimator of EVPPI converges in probability to the true EVPPI. METHODS AND RESULTS The performance of the new method was empirically demonstrated by using three exemplary decision models. Our proposed method seems to achieve remarkably higher accuracy than the two-level method with a fraction of its computation costs, though the achievement in accuracy was not uniform and varied across the parameters of the models. Software is provided to calculate single-parameter EVPPI based on the probabilistic sensitivity analysis data. CONCLUSIONS The new method, though applicable only to single-parameter EVPPI, is fast, accurate, and easy to implement. Further research is needed to evaluate the performance of this method in more complex scenarios and to extend such a concept to similar measures of decision uncertainty.
Affiliation(s)
- Mohsen Sadatsafavi, Department of Medicine, University of British Columbia, Vancouver, BC, Canada

24
Karnon J, Stahl J, Brennan A, Caro JJ, Mar J, Möller J. Modeling using discrete event simulation: a report of the ISPOR-SMDM Modeling Good Research Practices Task Force-4. Med Decis Making 2013; 32:701-11. [PMID: 22990085 DOI: 10.1177/0272989x12455462] [Citation(s) in RCA: 144] [Impact Index Per Article: 13.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/28/2022]
Abstract
Discrete event simulation (DES) is a form of computer-based modeling that provides an intuitive and flexible approach to representing complex systems. It has been used in a wide range of health care applications. Most early applications involved analyses of systems with constrained resources, where the general aim was to improve the organization of delivered services. More recently, DES has increasingly been applied to evaluate specific technologies in the context of health technology assessment. The aim of this article is to provide consensus-based guidelines on the application of DES in a health care setting, covering the range of issues to which DES can be applied. The article works through the different stages of the modeling process: structural development, parameter estimation, model implementation, model analysis, and representation and reporting. For each stage, a brief description is provided, followed by consideration of issues that are of particular relevance to the application of DES in a health care setting. Each section contains a number of best practice recommendations that were iterated among the authors, as well as the wider modeling task force.
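The event-driven mechanics that distinguish DES from cohort models fit in a few lines: a time-ordered event list, a clock that jumps from event to event, and entities competing for a constrained resource. The single-server clinic below, with its arrival and service rates, is purely illustrative:

```python
import heapq
import random

random.seed(4)
ARRIVAL_RATE, SERVICE_RATE, N_PATIENTS = 1.0, 1.25, 10_000

events, counter = [], 0                  # (time, tie-break, kind) min-heap

def schedule(time, kind):
    global counter
    heapq.heappush(events, (time, counter, kind))
    counter += 1

schedule(random.expovariate(ARRIVAL_RATE), "arrive")
queue, server_free, arrivals_left = [], True, N_PATIENTS - 1
wait_total, served = 0.0, 0

while events:
    now, _, kind = heapq.heappop(events)  # clock jumps to the next event
    if kind == "arrive":
        queue.append(now)
        if arrivals_left:                 # schedule the following arrival
            schedule(now + random.expovariate(ARRIVAL_RATE), "arrive")
            arrivals_left -= 1
    else:                                 # a service just finished
        server_free = True
    if server_free and queue:             # start serving the next patient
        arrived = queue.pop(0)
        wait_total += now - arrived
        served += 1
        server_free = False
        schedule(now + random.expovariate(SERVICE_RATE), "depart")

mean_wait = wait_total / served
```

Resource constraints (the busy server) and individual waiting times arise naturally here, which is why DES suits the service-organization and technology-assessment applications the task force report covers.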
Affiliation(s)
- Jonathan Karnon, School of Population Health and Clinical Practice, University of Adelaide, Adelaide, South Australia
- James Stahl, MGH Institute for Technology Assessment and Harvard Medical School, Boston, Massachusetts
- Alan Brennan, University of Sheffield, Sheffield, England, UK
- J Jaime Caro, United BioSource Corporation and McGill University, Montreal, Canada
- Javier Mar, Clinical Management Unit, Hospital Alto Deba, Mondragon, Spain

25
Benaglia T, Sharples LD, Fitzgerald RC, Lyratzopoulos G. Health benefits and cost effectiveness of endoscopic and nonendoscopic cytosponge screening for Barrett's esophagus. Gastroenterology 2013; 144:62-73.e6. [PMID: 23041329 DOI: 10.1053/j.gastro.2012.09.060] [Citation(s) in RCA: 117] [Impact Index Per Article: 10.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/25/2012] [Revised: 09/21/2012] [Accepted: 09/24/2012] [Indexed: 02/06/2023]
Abstract
BACKGROUND & AIMS We developed a model to compare the health benefits and cost effectiveness of screening for Barrett's esophagus by either Cytosponge™ or conventional endoscopy vs no screening, and to estimate their abilities to reduce mortality from esophageal adenocarcinoma. METHODS We used microsimulation modeling of a hypothetical cohort of 50-year-old men in the United Kingdom with histories of gastroesophageal reflux disease symptoms, assuming the prevalence of Barrett's esophagus to be 8%. Participants were invited to undergo screening by endoscopy or Cytosponge (invitation acceptance rates of 23% and 45%, respectively), and outcomes were compared with those from men who underwent no screening. We estimated the number of incident esophageal adenocarcinoma cases prevented and the incremental cost-effectiveness ratio in quality-adjusted life-years (QALYs) of the different strategies. Patients found to have high-grade dysplasia or intramucosal cancer received endotherapy. Model inputs included data on disease progression, test accuracy, post-treatment status, and surveillance protocols. Costs and benefits were discounted at 3.5% per year. Supplementary and sensitivity analyses comprised esophagectomy management of high-grade dysplasia or intramucosal cancer, screening by ultrathin nasal endoscopy, and different assumptions of uptake of screening invitations for either strategy. RESULTS We estimated that compared with no screening, Cytosponge screening followed by treatment of patients with dysplasia or intramucosal cancer costs an additional $240 (95% credible interval, $196-$320) per screening participant and results in a mean gain of 0.015 (95% credible interval, -0.001 to 0.029) QALYs, for an incremental cost-effectiveness ratio of $15,700 per QALY. The respective values for endoscopy were $299 ($261-$367), 0.013 (0.003-0.023) QALYs, and $22,200 per QALY. Screening by Cytosponge followed by treatment of patients with dysplasia or intramucosal cancer would reduce the number of cases of incident symptomatic esophageal adenocarcinoma by 19%, compared with 17% for screening by endoscopy, although this greater benefit depends on more patients accepting screening by Cytosponge than by endoscopy. CONCLUSIONS In a microsimulation model, screening 50-year-old men with symptoms of gastroesophageal reflux disease by Cytosponge is cost effective and would reduce mortality from esophageal adenocarcinoma compared with no screening.
Affiliation(s)
- Tatiana Benaglia
- Medical Research Council, Biostatistics Unit, Cambridge, United Kingdom
26
Soares MO, Canto E Castro L. Continuous time simulation and discretized models for cost-effectiveness analysis. Pharmacoeconomics 2012; 30:1101-1117. [PMID: 23116289 DOI: 10.2165/11599380-000000000-00000] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
Abstract
The design of decision-analytic models for cost-effectiveness analysis has been the subject of discussion. The current work addresses this issue by noting that, when time is to be explicitly modelled, we need to represent phenomena occurring in continuous time. Models evaluated in continuous time may not have closed-form solutions, and in this case, two approximations can be used: simulation models in continuous time and discretized models at the aggregate level. Stylized examples were set up where both approximations could be implemented. These aimed to illustrate determinants of the use of the two approximations: cycle length and precision, the use of continuity corrections in discretized models and the discretization of rates into probabilities. The examples were also used to explore the impact of the approximations not only in terms of absolute survival but also cost effectiveness and incremental comparisons. Discretized models better approximate continuous time results if lower cycle lengths are used. Continuous time simulation models are inherently stochastic, and the precision of the results is determined by the simulation sample size. The use of continuity corrections in discretized models allows the use of greater cycle lengths, producing no significant bias from the discretization. How the process is discretized (the conversion of rates into probabilities) is key. Results show that appropriate discretization coupled with the use of a continuity correction produces results unbiased for higher cycle lengths. Alternative methods of discretization are less efficient, i.e. lower cycle lengths are needed to obtain unbiased results. The developed work showed the importance of acknowledging bias in estimating cost effectiveness. When the alternative approximations can be applied, we argue that it is preferable to implement a cohort discretized model rather than a simulation model in continuous time. 
In practice, however, it may not be possible to represent the decision problem by any conventionally defined discretized model, in which case other model designs need to be applied, e.g. a simulation model.
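The discretization step the abstract identifies as key (the conversion of rates into probabilities) can be sketched in a few lines. This is an illustrative toy, not code from the paper; the constant annual rate and 10-year horizon are invented values. For a constant hazard rate r and cycle length t, p = 1 - exp(-r t) reproduces the continuous-time survival curve, while the naive shortcut p = r t introduces the discretization bias discussed above.

```python
import math

def rate_to_prob(rate, cycle_length):
    # Standard conversion of a constant hazard rate to a
    # per-cycle transition probability: p = 1 - exp(-r * t)
    return 1.0 - math.exp(-rate * cycle_length)

def survival_after(rate, cycle_length, n_cycles, naive=False):
    # Cohort survival after n cycles of a discretized model;
    # naive=True uses the biased shortcut p = r * t instead
    p = rate * cycle_length if naive else rate_to_prob(rate, cycle_length)
    return (1.0 - p) ** n_cycles

r, horizon = 0.2, 10.0                        # annual rate, years (invented)
exact = math.exp(-r * horizon)                # continuous-time survival
good = survival_after(r, 1.0, 10)             # exact for a constant rate
bad = survival_after(r, 1.0, 10, naive=True)  # biased at this cycle length
```

For a constant rate the exponential conversion is exact at any cycle length; with time-varying rates, shorter cycles (or the continuity corrections discussed in the abstract) are needed.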
27
Karnon J, Stahl J, Brennan A, Caro JJ, Mar J, Möller J. Modeling using discrete event simulation: a report of the ISPOR-SMDM Modeling Good Research Practices Task Force--4. Value Health 2012; 15:821-7. [PMID: 22999131 DOI: 10.1016/j.jval.2012.04.013] [Citation(s) in RCA: 145] [Impact Index Per Article: 12.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/29/2012] [Accepted: 04/05/2012] [Indexed: 05/07/2023]
Abstract
Discrete event simulation (DES) is a form of computer-based modeling that provides an intuitive and flexible approach to representing complex systems. It has been used in a wide range of health care applications. Most early applications involved analyses of systems with constrained resources, where the general aim was to improve the organization of delivered services. More recently, DES has increasingly been applied to evaluate specific technologies in the context of health technology assessment. The aim of this article was to provide consensus-based guidelines on the application of DES in a health care setting, covering the range of issues to which DES can be applied. The article works through the different stages of the modeling process: structural development, parameter estimation, model implementation, model analysis, and representation and reporting. For each stage, a brief description is provided, followed by consideration of issues that are of particular relevance to the application of DES in a health care setting. Each section contains a number of best practice recommendations that were iterated among the authors, as well as among the wider modeling task force.
Affiliation(s)
- Jonathan Karnon
- School of Population Health and Clinical Practice, University of Adelaide, Adelaide, SA, Australia.
28
Abstract
Objective. Uncertainty analysis (UA) is an important part of simulation model validation. However, the literature is imprecise as to how UA should be performed in the context of population-based microsimulation (PMS) models. In this expository paper, we discuss a practical approach to UA for such models. Methods. By adapting common concepts from published UA guidelines, we developed a comprehensive, step-by-step approach to UA in PMS models, including sample size calculation to reduce the computational time. As an illustration, we performed UA for POHEM-OA, a microsimulation model of osteoarthritis (OA) in Canada. Results. The resulting sample size of the simulated population was 500,000 and the number of Monte Carlo (MC) runs was 785 for a 12-hour computational time. The estimated 95% uncertainty intervals for the prevalence of OA in Canada in 2021 were 0.09 to 0.18 for men and 0.15 to 0.23 for women. The uncertainty surrounding the sex-specific prevalence of OA increased over time. Conclusion. The proposed approach to UA considers the challenges specific to PMS models, such as selection of parameters and calculation of MC runs and population size to reduce computational burden. Our example of UA shows that the proposed approach is feasible. Estimation of uncertainty intervals should become a standard practice in the reporting of results from PMS models.
29
Bilcke J, Beutels P, Brisson M, Jit M. Accounting for Methodological, Structural, and Parameter Uncertainty in Decision-Analytic Models. Med Decis Making 2011; 31:675-92. [DOI: 10.1177/0272989x11409240] [Citation(s) in RCA: 101] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Accounting for uncertainty is now a standard part of decision-analytic modeling and is recommended by many health technology agencies and published guidelines. However, the scope of such analyses is often limited, even though techniques have been developed for presenting the effects of methodological, structural, and parameter uncertainty on model results. To help bring these techniques into mainstream use, the authors present a step-by-step guide that offers an integrated approach to account for different kinds of uncertainty in the same model, along with a checklist for assessing the way in which uncertainty has been incorporated. The guide also addresses special situations such as when a source of uncertainty is difficult to parameterize, resources are limited for an ideal exploration of uncertainty, or evidence to inform the model is not available or not reliable. Methods for identifying the sources of uncertainty that influence results most are also described. Besides guiding analysts, the guide and checklist may be useful to decision makers who need to assess how well uncertainty has been accounted for in a decision-analytic model before using the results to make a decision.
Affiliation(s)
- Joke Bilcke
- Center for Health Economic Research and Modeling for Infectious Diseases (CHERMID), Vaccine and Infectious Disease Institute (Vaxinfectio), Antwerp University, Antwerp, Belgium (JB, PB)
- Département de Médecine sociale et préventive, Université Laval, Québec, Canada (MB)
- URESP, Centre de recherche FRSQ du CHA universitaire de Québec, Québec, Canada (MB)
- Modelling and Economics Unit, Health Protection Agency, London, United Kingdom (MJ)
30
Peasgood T, Ward SE, Brazier J. Health-state utility values in breast cancer. Expert Rev Pharmacoecon Outcomes Res 2011; 10:553-66. [PMID: 20950071 DOI: 10.1586/erp.10.65] [Citation(s) in RCA: 101] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Health-related quality of life is an important issue in the treatment of breast cancer and health-state utility values are essential for cost-utility analysis. A literature review was conducted to identify published values for common health states for breast cancer. In total, 13 databases were searched and 49 articles were identified, providing 476 unique utility values. Where possible, mean utility estimates were pooled using ordinary least squares, with utilities clustered within study group and weighted by both the number of respondents and the inverse of the variance of each utility. Regressions included controls for disease state, utility assessment method and other features of study design. Utility values found in the review were summarized for six categories: screening-related states; preventative states; adverse events in breast cancer and its treatment; nonspecific breast cancer; metastatic breast cancer states; and early breast cancer states. The large number of values identified for metastatic breast cancer and early breast cancer states enabled data to be synthesized by meta-regression. Utilities were found to vary significantly between valuation methods and depending on who conducted the valuation. For metastatic breast cancer, values varied significantly by severity of condition, treatment and side-effects. Despite the numerous studies, it is not feasible to generate a definitive list of health-state utility values that can be used in future economic evaluations, owing to the complexity of the health states involved and the variety of methods used to obtain values. Future research into quality of life in breast cancer should make greater use of validated generic preference-based measures for which public preferences exist.
Affiliation(s)
- Tessa Peasgood
- School of Health and Related Research, University of Sheffield, Sheffield, S1 4DA, UK
31
Kremers HM, Gabriel SE, Drummond MF. Principles of health economics and application to rheumatic disorders. Rheumatology (Oxford) 2011. [DOI: 10.1016/b978-0-323-06551-1.00003-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2022] Open
32
An evaluation of the NICE guidance for the prevention of osteoporotic fragility fractures in postmenopausal women. Arch Osteoporos 2010. [DOI: 10.1007/s11657-010-0045-5] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 04/05/2023]
33
Koerkamp BG, Stijnen T, Weinstein MC, Hunink MGM. The Combined Analysis of Uncertainty and Patient Heterogeneity in Medical Decision Models. Med Decis Making 2010; 31:650-61. [DOI: 10.1177/0272989x10381282] [Citation(s) in RCA: 32] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
The analysis of both patient heterogeneity and parameter uncertainty in decision models is increasingly recommended. In addition, the complexity of current medical decision models commonly requires simulating individual subjects, which introduces stochastic uncertainty. The combined analysis of uncertainty and heterogeneity often involves complex nested Monte Carlo simulations to obtain the model outcomes of interest. In this article, the authors distinguish eight model types, each dealing with a different combination of patient heterogeneity, parameter uncertainty, and stochastic uncertainty. The analyses that are required to obtain the model outcomes are expressed in equations, explained in stepwise algorithms, and demonstrated in examples. Patient heterogeneity is represented by frequency distributions and analyzed with Monte Carlo simulation. Parameter uncertainty is represented by probability distributions and analyzed with 2nd-order Monte Carlo simulation (aka probabilistic sensitivity analysis). Stochastic uncertainty is analyzed with 1st-order Monte Carlo simulation (i.e., trials or random walks). This article can be used as a reference for analyzing complex models with more than one type of uncertainty and patient heterogeneity.
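The nested structure described here, with an outer (2nd-order) loop over parameter draws and an inner (1st-order) loop over simulated individuals, can be sketched with a toy patient-level model. The model itself (a single uncertain annual death probability, life years capped at 40) and all names are invented for illustration and are not from the article:

```python
import random

def nested_monte_carlo(n_outer=200, n_inner=500, seed=0):
    # Outer (2nd-order) loop: sample the uncertain parameter.
    # Inner (1st-order) loop: simulate individual patients,
    # capturing stochastic uncertainty (a "random walk" per patient).
    rng = random.Random(seed)
    outer_means = []
    for _ in range(n_outer):
        p_death = rng.uniform(0.05, 0.15)   # parameter uncertainty (toy prior)
        total = 0
        for _ in range(n_inner):
            years = 0
            while years < 40 and rng.random() > p_death:
                years += 1
            total += years
        outer_means.append(total / n_inner)  # inner loop averages out noise
    outer_means.sort()
    mean = sum(outer_means) / n_outer
    # Percentile interval over outer draws reflects parameter uncertainty
    lo = outer_means[int(0.025 * n_outer)]
    hi = outer_means[int(0.975 * n_outer)]
    return mean, lo, hi

mean_ly, lo_ly, hi_ly = nested_monte_carlo()
```

Averaging the inner loop before summarizing the outer loop is what separates parameter uncertainty from first-order noise; collapsing the two loops into one would mix them.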
Affiliation(s)
- Bas Groot Koerkamp
- Program for the Assessment of Radiological Technology, Departments of Radiology and Epidemiology, Erasmus MC, Rotterdam, The Netherlands (BGK, MGMH)
- Department of Surgery, Gelre Ziekenhuizen, Apeldoorn, The Netherlands (BGK)
- Department of Medical Statistics and Bioinformatics, Leiden University Medical Center, Leiden, The Netherlands (TS)
- Program in Health Decision Science, Department of Health Policy and Management, Harvard School of Public Health, Boston, Massachusetts (MCW, MGMH)
34
Abstract
Parameter uncertainty, patient heterogeneity, and stochastic uncertainty of outcomes are increasingly important concepts in medical decision models. The purpose of this study is to demonstrate the various methods to analyze uncertainty and patient heterogeneity in a decision model. The authors distinguish various purposes of medical decision modeling, serving various stakeholders. Differences and analogies between the analyses are pointed out, as well as practical issues. The analyses are demonstrated with an example comparing imaging tests for patients with chest pain. For complicated analyses, step-by-step algorithms are provided. The focus is on Monte Carlo simulation and value of information analysis. Increasing model complexity is a major challenge for probabilistic sensitivity analysis and value of information analysis. The authors discuss the nested analyses that are required in patient-level models, and in nonlinear models, for partial value of information analysis.
35
Gandjour A. Investment in quality improvement: how to maximize the return. Health Econ 2010; 19:31-42. [PMID: 19212939 DOI: 10.1002/hec.1449] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/27/2023]
Abstract
Today, one of the most pressing concerns of health-care policymakers in industrialized countries is deficits in the quality of health care. This paper presents a decision program that addresses the question of in which disease areas, and at what intensity, to invest in quality improvement (QI) in order to maximize population health. The decision program considers both a budget constraint and time constraints on educators and health professionals participating in educational activities. The calculations of the model are based on a single assumption: that more intense quality efforts lead to larger QIs, but with diminishing returns. This assumption has been validated by previous studies. All other relationships described by the model are deduced from this assumption. The model uses data from QI trials published in the literature. Thus, it is able to assess how the vast number of published QI strategies compare in terms of their value.
Affiliation(s)
- Afschin Gandjour
- James A. Baker III Institute for Public Policy, Rice University, Houston, TX 77251-1892, USA.
36
Mueller D, Gandjour A. Cost-effectiveness of using clinical risk factors with and without DXA for osteoporosis screening in postmenopausal women. Value Health 2009; 12:1106-1117. [PMID: 19706151 DOI: 10.1111/j.1524-4733.2009.00577.x] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/28/2023]
Abstract
BACKGROUND According to several guidelines, the assessment of postmenopausal fracture risk should be based on clinical risk factors (CRFs) and bone density. Because measurement of bone density by dual x-ray absorptiometry (DXA) is quite expensive, there has been increasing interest in estimating fracture risk by CRFs. OBJECTIVE The aim of this study was to determine the cost-effectiveness of osteoporosis screening by CRFs with and without DXA, compared with no screening, in postmenopausal women in Germany. METHODS A cost-utility analysis and a budget-impact analysis were performed from the perspective of the statutory health insurance. A Markov model simulated costs and benefits discounted at 3% over a lifetime. RESULTS The cost-effectiveness of CRFs compared with no screening is €4607, €21,181, and €10,171 per quality-adjusted life-year (QALY) for 60-, 70-, and 80-year-old women, respectively. The cost-effectiveness of DXA plus CRFs compared with CRFs alone is €20,235 for 60-year-old women. In women above the age of 70, DXA plus CRFs dominates CRFs alone. DXA plus CRFs results in annual costs of €175 million, or 0.4% of the statutory health insurance's annual budget. CONCLUSION Funders should be careful in adopting a strategy based on CRFs alone instead of DXA plus CRFs. Only if DXA is not available is assessing CRFs alone an acceptable option for predicting a woman's risk of fracture.
MESH Headings
- Absorptiometry, Photon/economics
- Aged
- Aged, 80 and over
- Alendronate/economics
- Alendronate/therapeutic use
- Bone Density
- Bone Density Conservation Agents/economics
- Bone Density Conservation Agents/therapeutic use
- Cohort Studies
- Cost-Benefit Analysis
- Female
- Fractures, Bone/economics
- Fractures, Bone/epidemiology
- Fractures, Bone/prevention & control
- Germany/epidemiology
- Humans
- Markov Chains
- Mass Screening/economics
- Middle Aged
- Models, Economic
- Osteoporosis, Postmenopausal/complications
- Osteoporosis, Postmenopausal/diagnosis
- Osteoporosis, Postmenopausal/economics
- Osteoporosis, Postmenopausal/epidemiology
- Postmenopause
- Quality-Adjusted Life Years
- Risk Assessment
- Risk Factors
- Sensitivity and Specificity
- Women's Health
Affiliation(s)
- Dirk Mueller
- Department of Health Economics and Health Care Management, University of Cologne, Cologne, Germany.
37
Stollenwerk B, Stock S, Siebert U, Lauterbach KW, Holle R. Uncertainty assessment of input parameters for economic evaluation: Gauss's error propagation, an alternative to established methods. Med Decis Making 2009; 30:304-13. [PMID: 19815659 DOI: 10.1177/0272989x09347015] [Citation(s) in RCA: 14] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
In decision modeling for health economic evaluation, bootstrapping and the Cholesky decomposition method are frequently used to assess parameter uncertainty and to support probabilistic sensitivity analysis. An alternative, Gauss's error propagation law, is rarely known but may be useful in some settings. Bootstrapping, the Cholesky decomposition method, and the error propagation law were compared regarding standard deviation estimates of a hypothetical parameter, which was derived from a regression model fitted to simulated data. Furthermore, to demonstrate its value, the error propagation law was applied to German administrative claims data. All 3 methods yielded almost identical estimates of the standard deviation of the target parameter. The error propagation law was much faster than the other 2 alternatives. Furthermore, it succeeded in the claims data example, a case in which the established methods failed. In conclusion, the error propagation law is a useful addition to the methods for parameter uncertainty assessment.
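Gauss's error propagation law is the first-order (delta-method) approximation Var f(θ) ≈ gᵀΣg, where g is the gradient of the target function at the parameter estimates and Σ the parameter covariance matrix. A minimal sketch follows; the example function f(a, b) = a·b and its numbers are invented for illustration, not taken from the study:

```python
import math

def error_propagation_sd(grad, cov):
    # Gauss's error propagation law: Var(f) ~= g' * Sigma * g,
    # evaluated here with plain nested loops (no matrix library).
    n = len(grad)
    var = 0.0
    for i in range(n):
        for j in range(n):
            var += grad[i] * cov[i][j] * grad[j]
    return math.sqrt(var)

# Hypothetical example: f(a, b) = a * b at (a, b) = (2, 3),
# with independent standard deviations 0.1 and 0.2.
grad = [3.0, 2.0]                      # df/da = b, df/db = a
cov = [[0.1**2, 0.0], [0.0, 0.2**2]]   # diagonal covariance matrix
sd = error_propagation_sd(grad, cov)   # sqrt(9*0.01 + 4*0.04) = 0.5
```

Unlike bootstrapping, this needs only one evaluation of the gradient and covariance, which is why the abstract reports it as much faster.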
Affiliation(s)
- Björn Stollenwerk
- Institute of Health Economics and Clinical Epidemiology of the University of Cologne, Gleueler Strasse Cologne, Germany.
38
Conti S, Claxton K. Dimensions of design space: a decision-theoretic approach to optimal research design. Med Decis Making 2009; 29:643-60. [PMID: 19605884 DOI: 10.1177/0272989x09336142] [Citation(s) in RCA: 27] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Bayesian decision theory can be used not only to establish the optimal sample size and its allocation in a single clinical study but also to identify an optimal portfolio of research combining different types of study design. Within a single study, the highest societal payoff to proposed research is achieved when its sample sizes and allocation between available treatment options are chosen to maximize the expected net benefit of sampling (ENBS). Where a number of different types of study informing different parameters in the decision problem could be conducted, the simultaneous estimation of ENBS across all dimensions of the design space is required to identify the optimal sample sizes and allocations within such a research portfolio. This is illustrated through a simple example of a decision model of zanamivir for the treatment of influenza. The possible study designs include: 1) a single trial of all the parameters, 2) a clinical trial providing evidence only on clinical endpoints, 3) an epidemiological study of natural history of disease, and 4) a survey of quality of life. The possible combinations, sample sizes, and allocations between trial arms are evaluated over a range of cost-effectiveness thresholds. The computational challenges are addressed by implementing optimization algorithms to search the ENBS surface more efficiently over such large dimensions.
Affiliation(s)
- Stefano Conti
- Centre for Health Economics, University of York, York, UK.
39
Systematic review of the use of computer simulation modeling of patient flow in surgical care. J Med Syst 2009; 35:1-16. [PMID: 20703590 DOI: 10.1007/s10916-009-9336-z] [Citation(s) in RCA: 41] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2009] [Accepted: 06/21/2009] [Indexed: 10/20/2022]
Abstract
Computer simulation has been employed to evaluate proposed changes in the delivery of health care. However, little is known about the utility of simulation approaches for analysis of changes in the delivery of surgical care. We searched eight bibliographic databases for this comprehensive review of the literature published over the past five decades, and found 34 publications that reported on simulation models for the flow of surgical patients. The majority of these publications presented a description of the simulation approach: 91% outlined the underlying assumptions for modeling, 88% presented the system requirements, and 91% described the input and output data. However, only half of the publications reported that models were constructed to address the needs of policy-makers, and only 26% reported some involvement of health system managers and policy-makers in the simulation study. In addition, we found a wide variation in the presentation of assumptions, system requirements, input and output data, and results of simulation-based policy analysis.
40
Hiligsmann M, Ethgen O, Bruyère O, Richy F, Gathon HJ, Reginster JY. Development and validation of a Markov microsimulation model for the economic evaluation of treatments in osteoporosis. Value Health 2009; 12:687-96. [PMID: 19508659 DOI: 10.1111/j.1524-4733.2008.00497.x] [Citation(s) in RCA: 48] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/06/2023]
Abstract
OBJECTIVE Markov models are increasingly used in economic evaluations of treatments for osteoporosis. Most of the existing evaluations are cohort-based Markov models missing comprehensive memory management and versatility. In this article, we describe and validate an original Markov microsimulation model to accurately assess the cost-effectiveness of prevention and treatment of osteoporosis. METHODS We developed a Markov microsimulation model with a lifetime horizon and a direct health-care cost perspective. The patient history was recorded and was used in calculations of transition probabilities, utilities, and costs. To test the internal consistency of the model, we carried out an example calculation for alendronate therapy. Then, external consistency was investigated by comparing absolute lifetime risk of fracture estimates with epidemiologic data. RESULTS For women at age 70 years, with a twofold increase in the fracture risk of the average population, the costs per quality-adjusted life-year gained for alendronate therapy versus no treatment were estimated at €9105 and €15,325, respectively, under full and realistic adherence assumptions. All the sensitivity analyses in terms of model parameters and modeling assumptions were coherent with expected conclusions and absolute lifetime risk of fracture estimates were within the range of previous estimates, which confirmed both internal and external consistency of the model. CONCLUSION Microsimulation models present some major advantages over cohort-based models, increasing the reliability of the results and being largely compatible with the existing state of the art, evidence-based literature. The developed model appears to be a valid model for use in economic evaluations in osteoporosis.
41

42
Griffin S, Claxton K, Sculpher M. Decision analysis for resource allocation in health care. J Health Serv Res Policy 2008; 13 Suppl 3:23-30. [DOI: 10.1258/jhsrp.2008.008017] [Citation(s) in RCA: 23] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
This paper addresses the use of economic evaluation to inform resource allocation decisions within health care systems about which interventions to reimburse and whether additional research should be funded. A social decision-making view of economic evaluation, that is to maximize health gains subject to an exogenous budget constraint, is adopted. A brief overview of the components of an economic evaluation is presented. Particular attention is paid to how uncertainty is inherent to decisions about resource allocation, the consequences of that uncertainty and how it can be incorporated informatively into economic evaluation. A Bayesian approach to uncertainty is used as it meets the needs of social decision-making, allowing analysts to quantify the probability that an intervention is cost-effective given the available evidence and to quantify the expected value of further research. The discussion covers methods to represent parameter and structural uncertainty and considers the role of formal elicitation of expert judgements. The association between decisions to approve interventions for reimbursement and decisions about future research funding, and how value of information analysis can be used to formalize this link, is explained. Recent developments in the UK highlight the evolving policy environment for economic evaluation, such as the Cooksey report on the funding of UK health research, the review of the Pharmaceutical Price Regulation Scheme by the Office of Fair Trading and the update of the methodological guidelines issued by the National Institute for Health and Clinical Excellence. The paper concludes by describing ongoing methodological work designed to meet the challenges of undertaking decision analysis for resource allocation in health care.
Affiliation(s)
- Susan Griffin, Centre for Health Economics, University of York, York, UK
- Karl Claxton, Centre for Health Economics, University of York, York, UK
- Mark Sculpher, Centre for Health Economics, University of York, York, UK
43
Rojnik K, Naversnik K. Gaussian process metamodeling in Bayesian value of information analysis: a case of the complex health economic model for breast cancer screening. VALUE IN HEALTH : THE JOURNAL OF THE INTERNATIONAL SOCIETY FOR PHARMACOECONOMICS AND OUTCOMES RESEARCH 2008; 11:240-250. [PMID: 18380636 DOI: 10.1111/j.1524-4733.2007.00244.x] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/26/2023]
Abstract
OBJECTIVES To determine whether allocation of resources to further research on breast cancer screening is warranted and, if so, to identify the parameters for which additional information would be most valuable, in order to prioritize that research. METHODS A Bayesian value of information analysis was conducted to calculate the overall expected value of perfect information (EVPI) and the partial EVPI for six groups of parameters. The computational expense of the partial EVPI calculation was addressed by using multiple linear regression and Gaussian process metamodels to substantially reduce computing time. RESULTS Of the two metamodeling techniques, the Gaussian process performed better and was therefore chosen for the partial EVPI calculation. The results indicate a considerable range in the population EVPI estimates, between €100 and €500 million at willingness-to-pay values between €10,000 and €40,000 per quality-adjusted life-year. The partial EVPI for the groups of parameters indicated that future research would be most valuable if directed toward obtaining more precise estimates of the cancer sojourn times. With the use of the Gaussian process metamodels, the computing time was reduced from 44 years to 47 days. CONCLUSIONS Although the large values of EVPI suggest collecting further information before choosing the screening policy, it is argued that delaying the decision would result in significantly higher opportunity loss. Therefore, the best option would be to implement the most cost-effective policy given the existing information (screening women aged 40-80 years at 3-year intervals) and simultaneously conduct observational studies alongside the implemented policy. The decision-analytic model could in this manner be periodically updated with additional information as it became available, and the most cost-effective policy chosen iteratively.
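The emulator shortcut described here can be sketched as follows: fit a Gaussian process to a modest number of expensive model runs, then substitute the cheap emulator for the simulator inside the two-level Monte Carlo loop of a partial EVPI calculation. The toy "expensive" model, parameter distributions, and sample sizes below are illustrative assumptions, not the paper's breast cancer model (scikit-learn assumed):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

# Stand-in for an expensive simulation: incremental net benefit (INB) of
# screening vs no screening as a function of two uncertain parameters.
def expensive_model(theta):
    inb = 30_000 * theta[:, 0] - 12_000 * theta[:, 1] - 9_000
    return inb + rng.normal(0, 200, len(theta))  # simulation noise

# 1) Train the emulator on a small number of expensive runs.
X = rng.normal(0.5, 0.15, size=(200, 2))
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),
                              normalize_y=True).fit(X, expensive_model(X))

# 2) Baseline: best decision under current information (adopt iff E[INB] > 0).
big = rng.normal(0.5, 0.15, size=(20_000, 2))
baseline = max(gp.predict(big).mean(), 0.0)

# 3) Partial EVPI for parameter 1 by two-level Monte Carlo, with the
#    emulator replacing the simulator in the inner expectation.
vals = []
for t1 in rng.normal(0.5, 0.15, size=100):            # outer draws of theta_1
    Z = np.column_stack([np.full(500, t1),
                         rng.normal(0.5, 0.15, 500)])  # inner draws of theta_2
    vals.append(max(gp.predict(Z).mean(), 0.0))        # best decision given theta_1
evpi_partial = np.mean(vals) - baseline
print(evpi_partial)
```

The 100 × 500 inner-times-outer evaluations would require 50,000 expensive simulator runs if done directly; with the emulator, only the 200 training runs are expensive, which is the source of the dramatic time savings the paper reports.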
Affiliation(s)
- Klemen Rojnik, Roche d.o.o. farmacevtska družba, Ljubljana, Slovenia
44
Abstract
This paper describes the key principles of why an assessment of uncertainty and its consequences is critical for the types of decisions that a body such as the UK National Institute for Health and Clinical Excellence (NICE) has to make. In doing so, it asks whether formal methods may be useful to NICE and its advisory committees in making such assessments. Broadly, the questions include the following: (i) should probabilistic sensitivity analysis continue to be recommended as a means to characterize parameter uncertainty; (ii) which methods should be used to represent other sources of uncertainty; (iii) when can computationally expensive models be justified, and is computational expense a sufficient justification for failing to express uncertainty; (iv) which summary measures of uncertainty should be used to present results to decision makers; and (v) should formal methods be recommended to inform the assessment of the need for evidence and the consequences of an uncertain decision for the UK NHS?
Affiliation(s)
- Karl Claxton, Centre for Health Economics, Department of Economics and NICE Decision Support Unit, University of York, Heslington, York, UK
45
Goldhaber-Fiebert JD, Stout NK, Ortendahl J, Kuntz KM, Goldie SJ, Salomon JA. Modeling human papillomavirus and cervical cancer in the United States for analyses of screening and vaccination. Popul Health Metr 2007; 5:11. [PMID: 17967185 PMCID: PMC2213637 DOI: 10.1186/1478-7954-5-11] [Citation(s) in RCA: 69] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2007] [Accepted: 10/29/2007] [Indexed: 01/19/2023] Open
Abstract
Background To provide quantitative insight into current U.S. policy choices for cervical cancer prevention, we developed a model of human papillomavirus (HPV) and cervical cancer, explicitly incorporating uncertainty about the natural history of disease. Methods We developed a stochastic microsimulation of cervical cancer that distinguishes different HPV types by their incidence, clearance, persistence, and progression. Input parameter sets were sampled randomly from uniform distributions, and simulations undertaken with each set. Through systematic reviews and formal data synthesis, we established multiple epidemiologic targets for model calibration, including age-specific prevalence of HPV by type, age-specific prevalence of cervical intraepithelial neoplasia (CIN), HPV type distribution within CIN and cancer, and age-specific cancer incidence. For each set of sampled input parameters, likelihood-based goodness-of-fit (GOF) scores were computed based on comparisons between model-predicted outcomes and calibration targets. Using 50 randomly resampled, good-fitting parameter sets, we assessed the external consistency and face validity of the model, comparing predicted screening outcomes to independent data. To illustrate the advantage of this approach in reflecting parameter uncertainty, we used the 50 sets to project the distribution of health outcomes in U.S. women under different cervical cancer prevention strategies. Results Approximately 200 good-fitting parameter sets were identified from 1,000,000 simulated sets. Modeled screening outcomes were externally consistent with results from multiple independent data sources. Based on 50 good-fitting parameter sets, the expected reductions in lifetime risk of cancer with annual or biennial screening were 76% (range across 50 sets: 69–82%) and 69% (60–77%), respectively. 
The reduction from vaccination alone was 75%, although it ranged from 60% to 88%, reflecting considerable parameter uncertainty about the natural history of type-specific HPV infection. The uncertainty surrounding the model-predicted reduction in cervical cancer incidence narrowed substantially when vaccination was combined with every-5-year screening, with a mean reduction of 89% and range of 83% to 95%. Conclusion We demonstrate an approach to parameterization, calibration and performance evaluation for a U.S. cervical cancer microsimulation model intended to provide qualitative and quantitative inputs into decisions that must be taken before long-term data on vaccination outcomes become available. This approach allows for a rigorous and comprehensive description of policy-relevant uncertainty about health outcomes under alternative cancer prevention strategies. The model provides a tool that can accommodate new information, and can be modified as needed, to iteratively assess the expected benefits, costs, and cost-effectiveness of different policies in the U.S.
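The calibration loop described above (sample parameter sets from uniform priors, score likelihood-based goodness-of-fit against epidemiologic targets, retain good-fitting sets, and project outcomes across them) can be sketched with a deliberately tiny stand-in model. The two-parameter "natural history" function, targets, and standard errors below are invented for illustration and bear no relation to the paper's HPV model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Calibration targets (illustrative): observed prevalence at two points.
targets = np.array([0.25, 0.10])
target_se = np.array([0.02, 0.01])

# Toy natural-history model standing in for the microsimulation: maps
# (incidence, clearance) parameters to two predicted prevalences.
def predict(params):
    inc, clr = params[..., 0], params[..., 1]
    p = inc / (inc + clr)
    return np.stack([p, p * np.exp(-clr)], axis=-1)

# 1) Sample parameter sets from uniform priors.
samples = rng.uniform([0.01, 0.1], [0.5, 2.0], size=(100_000, 2))

# 2) Likelihood-based goodness-of-fit: Gaussian log-likelihood vs targets.
resid = (predict(samples) - targets) / target_se
log_gof = -0.5 * (resid ** 2).sum(axis=1)

# 3) Keep the 50 best-fitting sets and project outcomes across them, so
#    parameter uncertainty is carried into the policy projection as a range.
best = samples[np.argsort(log_gof)[-50:]]
projection = predict(best)[:, 0]
print(projection.min(), projection.max())
```

Reporting results across the retained sets, as a mean and a range rather than a single point estimate, is what lets the paper express how much of the outcome uncertainty is driven by natural-history parameters.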
46
O'Hagan A, Stevenson M, Madan J. Monte Carlo probabilistic sensitivity analysis for patient level simulation models: efficient estimation of mean and variance using ANOVA. HEALTH ECONOMICS 2007; 16:1009-23. [PMID: 17173339 DOI: 10.1002/hec.1199] [Citation(s) in RCA: 51] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/13/2023]
Abstract
Probabilistic sensitivity analysis (PSA) is required to account for uncertainty in cost-effectiveness calculations arising from health economic models. The simplest way to perform PSA in practice is by Monte Carlo methods, which involves running the model many times using randomly sampled values of the model inputs. However, this can be impractical when the economic model takes appreciable amounts of time to run. This situation arises, in particular, for patient-level simulation models (also known as micro-simulation or individual-level simulation models), where a single run of the model simulates the health care of many thousands of individual patients. The large number of patients required in each run to achieve accurate estimation of cost-effectiveness means that only a relatively small number of runs is possible. For this reason, it is often said that PSA is not practical for patient-level models. We develop a way to reduce the computational burden of Monte Carlo PSA for patient-level models, based on the algebra of analysis of variance. Methods are presented to estimate the mean and variance of the model output, with formulae for determining optimal sample sizes. The methods are simple to apply and will typically reduce the computational demand very substantially.
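The core idea, separating parameter (between-run) uncertainty from patient-level (within-run) Monte Carlo noise using one-way ANOVA algebra, can be sketched as follows. The run counts, patient counts, and variances are illustrative assumptions, not the paper's worked example:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated two-level PSA output: N parameter sets ("runs"), n patients each.
# True between-run (parameter) SD is 0.5; within-run (patient) SD is 3.0.
N, n = 40, 500
run_means = rng.normal(10.0, 0.5, size=N)                # parameter uncertainty
y = run_means[:, None] + rng.normal(0.0, 3.0, (N, n))    # patient-level noise

# ANOVA decomposition: the naive variance of run-level sample means mixes
# parameter uncertainty with patient-level noise; subtracting the within-run
# mean square recovers the parameter variance without huge patient numbers.
grand_mean = y.mean()                         # estimate of expected output
msw = y.var(axis=1, ddof=1).mean()            # mean square within runs
msb = n * y.mean(axis=1).var(ddof=1)          # mean square between runs
var_between = max((msb - msw) / n, 0.0)       # estimated parameter variance

print(grand_mean, var_between, msw)
```

Since E[msb] = n·σ²_between + σ²_within, the estimator (msb − msw)/n is unbiased for the between-run variance; here it should land near the true value of 0.25 even though each run's mean is itself noisy.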
Affiliation(s)
- Anthony O'Hagan, Department of Probability and Statistics, University of Sheffield, Sheffield S3 7RH, UK
47
Brennan A, Chick SE, Davies R. A taxonomy of model structures for economic evaluation of health technologies. HEALTH ECONOMICS 2006; 15:1295-310. [PMID: 16941543 DOI: 10.1002/hec.1148] [Citation(s) in RCA: 186] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/11/2023]
Abstract
Models for the economic evaluation of health technologies provide valuable information to decision makers. The choice of model structure is rarely discussed in published studies and can affect the results produced. Many papers describe good modelling practice, but few describe how to choose from the many types of available models. This paper develops a new taxonomy of model structures. The horizontal axis of the taxonomy describes assumptions about the role of expected values, randomness, the heterogeneity of entities, and the degree of non-Markovian structure. Commonly used aggregate models, including decision trees and Markov models, require large population numbers, homogeneous sub-groups and linear interactions. Individual models are more flexible, but may require replications with different random numbers to estimate expected values. The vertical axis of the taxonomy describes potential interactions between the individual actors, as well as how the interactions occur through time. Models using interactions, such as system dynamics, some Markov models, and discrete event simulation, are fairly uncommon in health economics but are necessary for modelling infectious diseases and systems with constrained resources. The paper provides guidance for choosing a model, based on key requirements, including output requirements, the population size, and system complexity.
Affiliation(s)
- Alan Brennan, School of Health and Related Research (ScHARR), University of Sheffield, Sheffield, UK
48
Griffin S, Claxton K, Hawkins N, Sculpher M. Probabilistic analysis and computationally expensive models: Necessary and required? VALUE IN HEALTH : THE JOURNAL OF THE INTERNATIONAL SOCIETY FOR PHARMACOECONOMICS AND OUTCOMES RESEARCH 2006; 9:244-52. [PMID: 16903994 DOI: 10.1111/j.1524-4733.2006.00107.x] [Citation(s) in RCA: 26] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/11/2023]
Abstract
OBJECTIVE To assess the importance of considering decision uncertainty, the appropriateness of probabilistic sensitivity analysis (PSA), and the use of patient-level simulation (PLS) in appraisals for the National Institute for Health and Clinical Excellence (NICE). METHODS Decision-makers require estimates of decision uncertainty alongside expected net benefits (NB) of interventions. This requirement may be difficult to meet in computationally expensive models, for example, those employing PLS. NICE appraisals published up until January 2005 were reviewed to identify those where the assessment group utilized a PLS model structure to estimate NB. After identifying PLS models, all appraisals published in the same year were reviewed. RESULTS Among models using PLS, one out of six conducted PSA, compared with 16 out of 24 cohort models. Justification for omitting PSA was absent in most cases. Reasons for choosing PLS included treatment switching, sampling patient characteristics and dependence on patient history. Alternative modeling approaches exist to handle these, including semi-Markov models and emulators that eliminate the need for two-level simulation. Stochastic treatment switching and sampling baseline characteristics do not inform adoption decisions. Modeling patient history does not necessitate PLS, and can depend on the software used. PLS addresses nonlinear relationships between patient variability and model outputs, but other options exist. Increased computing power, emulators or closed-form approximations can facilitate PSA in computationally expensive models. CONCLUSIONS In developing models, analysts should consider the dual requirement of estimating expected NB and characterizing decision uncertainty. It is possible to develop models that meet these requirements within the constraints set by decision-makers.
Affiliation(s)
- Susan Griffin, Center for Health Economics, University of York, York, UK
49
Groot Koerkamp B, Myriam Hunink MG, Stijnen T, Weinstein MC. Identifying key parameters in cost-effectiveness analysis using value of information: a comparison of methods. HEALTH ECONOMICS 2006; 15:383-92. [PMID: 16389669 DOI: 10.1002/hec.1064] [Citation(s) in RCA: 33] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/06/2023]
Abstract
Decisions in health care must be made despite uncertainty about benefits, risks, and costs. Value of information analysis is a theoretically sound method to estimate the expected value of future quantitative research pertaining to an uncertain decision. If the expected value of future research does not exceed the cost of research, additional research is not justified, and decisions should be based on current evidence, despite the uncertainty. To assess the importance of individual parameters relevant to a decision, different value of information methods have been suggested. The generally recommended method estimates the expected value of perfect information concerning a parameter as the reduction in expected opportunity loss. This method, however, results in biased expected values and incorrect importance rankings of parameters. The objective of this paper is to set out correct methods for estimating the partial expected value of perfect information and to demonstrate why the generally recommended method is incorrect conceptually and mathematically.
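The correct partial EVPI construction is the two-level expectation EVPI_I = E_{theta_I}[max_d E_{theta_C|theta_I} NB(d)] - max_d E[NB(d)], not a reduction in expected opportunity loss. A minimal sketch with two strategies whose net benefits are linear in two independent parameters (all numbers invented for illustration); linearity lets the inner expectation be taken analytically here, though in general it is simulated:

```python
import numpy as np

rng = np.random.default_rng(4)

# Net benefit of two strategies, linear in two uncertain parameters
# (illustrative values only).
def nb(t1, t2):
    return np.stack([5_000 * t1,            # strategy A
                     1_000 + 3_000 * t2])   # strategy B

# Baseline: best strategy under current information, max_d E[NB(d)].
t1 = rng.normal(0.5, 0.2, 200_000)
t2 = rng.normal(0.5, 0.2, 200_000)
baseline = nb(t1, t2).mean(axis=1).max()

# Partial EVPI for t1: average, over outer draws of t1, of the best
# inner-expected net benefit with t1 known. The inner expectation over t2
# is analytic here because NB is linear in t2 and E[t2] = 0.5.
outer_t1 = rng.normal(0.5, 0.2, 5_000)
best_given_t1 = np.maximum(5_000 * outer_t1, 1_000 + 3_000 * 0.5)
evpi_partial = best_given_t1.mean() - baseline
print(evpi_partial)
```

Note the order of operations: the maximization over strategies happens inside the outer expectation but outside the inner one, which is exactly what the biased opportunity-loss shortcut gets wrong.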
Affiliation(s)
- Bas Groot Koerkamp, Department of Health Policy and Management, Harvard School of Public Health, Harvard Center for Risk Analysis, Boston, MA, USA
50
Fleurence RL, Iglesias CP, Torgerson DJ. Economic evaluations of interventions for the prevention and treatment of osteoporosis: a structured review of the literature. Osteoporos Int 2006; 17:29-40. [PMID: 15981019 DOI: 10.1007/s00198-005-1943-z] [Citation(s) in RCA: 35] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/31/2005] [Accepted: 05/03/2005] [Indexed: 12/31/2022]
Abstract
Economic evaluations are increasingly being used by decision-makers to estimate the cost-effectiveness of interventions. The objective of this study was to conduct a structured review of economic evaluations of interventions to prevent and treat osteoporosis. Articles were identified independently by two reviewers through searches on MEDLINE, the bibliographies of reviews and identified economic models, and expert opinion, using predefined inclusion and exclusion criteria. Data on country, type and level of interventions, type of fractures, interventions, study population and the authors' stated conclusions were extracted. Forty-two relevant studies were identified. The majority of studies (71%) were conducted in Sweden, the UK and the US. The main interventions investigated were hormone replacement therapy (27%), bisphosphonates (17%) and combinations of vitamin D and calcium (16%). In 38% of studies, hip fracture was the sole fracture outcome. Eighty-eight percent of studies investigated female populations only. A relatively large number of economic evaluations were identified in the field of osteoporosis. Major changes have recently occurred in the treatment of this disease, following the publication of the results of the Women's Health Initiative trial. Methodological developments in economic evaluations, such as the use of probabilistic sensitivity analysis and cost-effectiveness acceptability curves, have also taken place. Such changes are reflected in the studies that were reviewed. The development of economic models should be an iterative process that incorporates new information, whether clinical or methodological, as it becomes available.