1
Jiang Z, Cappelleri JC, Gamalo M, Chen Y, Thomas N, Chu H. A comprehensive review and shiny application on the matching-adjusted indirect comparison. Res Synth Methods 2024; 15:671-686. PMID: 38380799. DOI: 10.1002/jrsm.1709.
Abstract
Population-adjusted indirect comparison (PAIC) is an increasingly used technique for estimating the comparative effectiveness of different treatments in health technology assessments when head-to-head trials are unavailable. Three commonly used PAIC methods are matching-adjusted indirect comparison (MAIC), simulated treatment comparison (STC), and multilevel network meta-regression (ML-NMR). MAIC enables researchers to achieve a balanced covariate distribution across two independent trials when individual participant data are available for only one trial. In this article, we provide a comprehensive review of MAIC methods, including their theoretical derivation, implicit assumptions, and connection to calibration estimation in survey sampling. We discuss the nuances between anchored and unanchored MAIC, as well as their required assumptions. Furthermore, we implement various MAIC methods in a user-friendly R Shiny application, Shiny-MAIC. To our knowledge, it is the first Shiny application implementing various MAIC methods. The Shiny-MAIC application offers a choice between anchored and unanchored MAIC, a choice among different types of covariates and outcomes, and two variance estimators: bootstrap and robust standard errors. An example with simulated data demonstrates the utility of the Shiny-MAIC application, providing a user-friendly approach to conducting MAIC for healthcare decision-making. Shiny-MAIC is freely available at https://ziren.shinyapps.io/Shiny_MAIC/.
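The method-of-moments weighting step that underlies MAIC can be sketched in a few lines. This is an illustrative reimplementation of the standard Signorovitch-style estimation, not code from the Shiny-MAIC application; the covariate values and target means are invented.

```python
import numpy as np
from scipy.optimize import minimize

def maic_weights(X_ipd, target_means):
    """Estimate MAIC weights w_i = exp(x_i' a) so that the weighted IPD
    covariate means equal the aggregate-data means. Centring the IPD
    covariates at the target means turns the balance condition into the
    gradient of the convex objective Q(a) = sum_i exp(x_i' a)."""
    Xc = X_ipd - target_means
    obj = lambda a: np.exp(Xc @ a).sum()
    grad = lambda a: Xc.T @ np.exp(Xc @ a)
    a_hat = minimize(obj, np.zeros(Xc.shape[1]), jac=grad, method="BFGS").x
    w = np.exp(Xc @ a_hat)
    ess = w.sum() ** 2 / (w ** 2).sum()  # effective sample size after weighting
    return w, ess

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=(500, 2))        # simulated IPD covariates
w, ess = maic_weights(X, np.array([0.3, -0.2]))
weighted_means = (w[:, None] * X).sum(axis=0) / w.sum()
print(np.round(weighted_means, 3), round(ess, 1))
```

After weighting, the IPD covariate means match the reported aggregate means, at the cost of a reduced effective sample size.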
Affiliation(s)
- Ziren Jiang
- Division of Biostatistics and Health Data Science, University of Minnesota School of Public Health, Minneapolis, Minnesota, USA
- Joseph C Cappelleri
- Statistical Research and Data Science Center, Pfizer Inc., New York, New York, USA
- Margaret Gamalo
- Inflammation & Immunology Statistics, Pfizer Inc., New York, New York, USA
- Yong Chen
- Department of Biostatistics, Epidemiology and Informatics, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Neal Thomas
- Statistical Research and Data Science Center, Pfizer Inc., New York, New York, USA
- Haitao Chu
- Division of Biostatistics and Health Data Science, University of Minnesota School of Public Health, Minneapolis, Minnesota, USA
- Statistical Research and Data Science Center, Pfizer Inc., New York, New York, USA
2
Macabeo B, Quenéchdu A, Aballéa S, François C, Boyer L, Laramée P. Methods for Indirect Treatment Comparison: Results from a Systematic Literature Review. Journal of Market Access & Health Policy 2024; 12:58-80. PMID: 38660413. PMCID: PMC11036291. DOI: 10.3390/jmahp12020006.
Abstract
INTRODUCTION Health technology assessment (HTA) agencies express a clear preference for randomized controlled trials when assessing the comparative efficacy of two or more treatments. However, an indirect treatment comparison (ITC) is often necessary where a direct comparison is unavailable or, in some cases, not possible. Numerous ITC techniques are described in the literature. A systematic literature review (SLR) was conducted to identify all the relevant literature on existing ITC techniques, provide a comprehensive description of each technique and evaluate their strengths and limitations from an HTA perspective in order to develop guidance on the most appropriate method to use in different scenarios. METHODS Electronic database searches of Embase and PubMed, as well as grey literature searches, were conducted on 15 November 2021. Eligible articles were peer-reviewed papers that specifically described the methods used for different ITC techniques and were written in English. The review was performed in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. RESULTS A total of 73 articles were included in the SLR, reporting on seven different ITC techniques. All reported techniques were forms of adjusted ITC. Network meta-analysis (NMA) was the most frequently described technique (in 79.5% of the included articles), followed by matching-adjusted indirect comparison (MAIC) (30.1%), network meta-regression (24.7%), the Bucher method (23.3%), simulated treatment comparison (STC) (21.9%), propensity score matching (4.1%) and inverse probability of treatment weighting (4.1%). The appropriate choice of ITC technique is critical and should be based on the feasibility of a connected network, the evidence of heterogeneity between and within studies, the overall number of relevant studies and the availability of individual patient-level data (IPD). 
MAIC and STC were found to be common techniques in the case of single-arm studies, which are increasingly being conducted in oncology and rare diseases, whilst the Bucher method and NMA provide suitable options where no IPD is available. CONCLUSION ITCs can provide alternative evidence where direct comparative evidence may be missing. ITCs are currently considered by HTA agencies on a case-by-case basis; however, their acceptability remains low. Clearer international consensus and guidance on the methods to use for different ITC techniques is needed to improve the quality of ITCs submitted to HTA agencies. ITC techniques continue to evolve quickly, and more efficient techniques may become available in the future.
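Of the techniques reviewed, the Bucher method is the simplest to state: with a common comparator C, the adjusted indirect estimate is d_AB = d_AC - d_BC, and the variances of the two direct estimates add. A minimal sketch with made-up log hazard ratios:

```python
import math

def bucher(d_AC, se_AC, d_BC, se_BC):
    """Adjusted indirect comparison of A vs B through common comparator C:
    point estimates subtract, variances add (independent trials)."""
    d_AB = d_AC - d_BC
    se_AB = math.sqrt(se_AC ** 2 + se_BC ** 2)
    return d_AB, se_AB, (d_AB - 1.96 * se_AB, d_AB + 1.96 * se_AB)

# made-up log hazard ratios from two trials sharing comparator C
d_AB, se_AB, ci = bucher(-0.40, 0.15, -0.10, 0.20)
print(round(d_AB, 2), round(se_AB, 2))  # -0.3 0.25
```

Because the indirect estimate inherits both trials' sampling error, its standard error is always larger than either direct one, which is why sparse networks relying on Bucher-style links yield wide intervals.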
Affiliation(s)
- Bérengère Macabeo
- Department of Public Health, Aix-Marseille University, 13005 Marseille, France
- Pierre Fabre Laboratories, 92100 Paris, France
- Samuel Aballéa
- Department of Public Health, Aix-Marseille University, 13005 Marseille, France
- InovIntell, 3023GJ Rotterdam, The Netherlands
- Clément François
- Department of Public Health, Aix-Marseille University, 13005 Marseille, France
- Laurent Boyer
- Department of Public Health, Aix-Marseille University, 13005 Marseille, France
- Philippe Laramée
- Department of Public Health, Aix-Marseille University, 13005 Marseille, France
3
Zhang L, Bujkiewicz S, Jackson D. Four alternative methodologies for simulated treatment comparison: How could the use of simulation be re-invigorated? Res Synth Methods 2024; 15:227-241. PMID: 38104969. DOI: 10.1002/jrsm.1681.
Abstract
Simulated treatment comparison (STC) is an established method for performing population adjustment for the indirect comparison of two treatments, where individual patient data (IPD) are available for one trial but only aggregate-level information is available for the other. The most commonly used method is what we call 'standard STC'. Here we fit an outcome model using data from the trial with IPD, and then substitute the mean covariate values from the trial where only aggregate-level data are available, to predict what the first trial's outcomes would have been had its population been the same as the second's. However, this type of STC methodology does not involve simulation and can result in bias when the link function used in the outcome model is non-linear. An alternative approach is to use the fitted outcome model to simulate patient profiles in the trial for which IPD are available, but in the other trial's population. This stochastic alternative presents additional challenges. We examine the history of STC and propose two new simulation-based methods that resolve many of the difficulties associated with the current stochastic approach. A virtue of the simulation-based STC methods is that the marginal estimands are then clearly targeted. We illustrate all methods using a numerical example and explore their use in a simulation study.
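The bias the authors describe is easy to reproduce. The hedged sketch below (simulated data, logistic outcome model) contrasts 'standard' STC, which plugs the aggregate covariate mean into the non-linear model, with a simulation-based version that averages predictions over simulated patient profiles; the two disagree because the expectation of a non-linear function is not the function of the expectation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(7)
n = 4000
x = rng.normal(0.0, 1.0, n)                    # prognostic covariate (IPD trial)
y = rng.binomial(1, expit(-0.5 + 1.2 * x))     # binary outcome, logistic model

# fit the outcome model on the IPD by maximum likelihood
D = np.column_stack([np.ones(n), x])
nll = lambda b: -np.sum(y * (D @ b) - np.log1p(np.exp(D @ b)))
b = minimize(nll, np.zeros(2), method="BFGS").x

mu_target = 0.8                                # covariate mean reported in the AD trial
# 'standard' STC: substitute the mean covariate value into the fitted model
p_plugin = expit(b[0] + b[1] * mu_target)
# simulation-based STC: simulate profiles in the AD population, then average
x_sim = rng.normal(mu_target, 1.0, 100_000)
p_marginal = expit(b[0] + b[1] * x_sim).mean()
print(round(float(p_plugin), 3), round(float(p_marginal), 3))
```

With a non-linear link, the plug-in value differs from the marginal event probability; the simulation-based estimate targets the marginal estimand directly.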
Affiliation(s)
- Landan Zhang
- Statistical Innovation, AstraZeneca, Cambridge, UK
- Sylwia Bujkiewicz
- Biostatistics Research Group, Department of Population Health Sciences, University of Leicester, Leicester, UK
- Dan Jackson
- Statistical Innovation, AstraZeneca, Cambridge, UK
4
Park JE, Campbell H, Towle K, Yuan Y, Jansen JP, Phillippo D, Cope S. Unanchored Population-Adjusted Indirect Comparison Methods for Time-to-Event Outcomes Using Inverse Odds Weighting, Regression Adjustment, and Doubly Robust Methods With Either Individual Patient or Aggregate Data. Value in Health 2024; 27:278-286. PMID: 38135212. DOI: 10.1016/j.jval.2023.11.011.
Abstract
OBJECTIVES Several methods for unanchored population-adjusted indirect comparisons (PAICs) are available. Exploring alternative adjustment methods, depending on the available individual patient data (IPD) and the aggregate data (AD) in the external study, may help minimize bias in unanchored indirect comparisons. However, methods for time-to-event outcomes are not well understood. This study provides an overview and comparison of methods using a case study to increase familiarity. A recent method is applied to marginalize conditional hazard ratios, which allows for the comparison of methods, and a doubly robust method is proposed. METHODS The following PAIC methods were compared through a case study in third-line small cell lung cancer, comparing nivolumab with standard of care based on a single-arm phase II trial (CheckMate 032) and real-world study (Flatiron) in terms of overall survival: IPD-IPD analyses using inverse odds weighting, regression adjustment, and a doubly robust method; IPD-AD analyses using matching-adjusted indirect comparison, simulated treatment comparison, and a doubly robust method. RESULTS Nivolumab extended survival versus standard of care with hazard ratios ranging from 0.63 (95% CI 0.44-0.90) in naive comparisons (identical estimates for IPD-IPD and IPD-AD analyses) to 0.69 (95% CI 0.44-0.98) in the IPD-IPD analyses using regression adjustment. Regression-based and doubly robust estimates yielded slightly wider confidence intervals versus the propensity score-based analyses. CONCLUSIONS The proposed doubly robust approach for time-to-event outcomes may help to minimize bias due to model misspecification. However, all methods for unanchored PAIC rely on the strong assumption that all prognostic covariates have been included.
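To give a flavour of the IPD-IPD weighting approaches compared, here is a hedged sketch of inverse odds weighting: fit a logistic model for membership in the external study, then weight each single-arm-trial patient by the fitted odds so the trial covariate distribution resembles the external one. Data and model are invented, and the real analyses additionally involve survival outcomes; the weights here would subsequently enter a weighted Kaplan-Meier or Cox analysis.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
x_trial = rng.normal(0.0, 1.0, 300)       # single-arm trial covariate (IPD)
x_ext = rng.normal(0.5, 1.0, 700)         # external cohort covariate (IPD)

# logistic model for membership in the external study, fit on pooled data
x = np.r_[x_trial, x_ext]
s = np.r_[np.zeros(300), np.ones(700)]
D = np.column_stack([np.ones(1000), x])
nll = lambda b: -np.sum(s * (D @ b) - np.log1p(np.exp(D @ b)))
b = minimize(nll, np.zeros(2), method="BFGS").x

# inverse odds weights shift the trial arm towards the external population
w = np.exp(b[0] + b[1] * x_trial)
weighted_mean = np.sum(w * x_trial) / w.sum()
print(round(float(weighted_mean), 2), round(float(x_ext.mean()), 2))
```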
Affiliation(s)
- Julie E Park
- PRECISIONheor, Evidence Synthesis and Decision Modeling, Vancouver, BC, Canada
- Harlan Campbell
- PRECISIONheor, Evidence Synthesis and Decision Modeling, Vancouver, BC, Canada; University of British Columbia, Vancouver, BC, Canada
- Kevin Towle
- PRECISIONheor, Evidence Synthesis and Decision Modeling, Vancouver, BC, Canada
- Yong Yuan
- Worldwide Health Economics and Outcomes Research, Bristol Myers Squibb, Princeton, NJ, USA
- Jeroen P Jansen
- PRECISIONheor, Evidence Synthesis and Decision Modeling, Vancouver, BC, Canada
- David Phillippo
- University of Bristol, Bristol Medical School, Bristol, England, UK
- Shannon Cope
- PRECISIONheor, Evidence Synthesis and Decision Modeling, Vancouver, BC, Canada
5
Ades AE, Welton NJ, Dias S, Phillippo DM, Caldwell DM. Twenty years of network meta-analysis: Continuing controversies and recent developments. Res Synth Methods 2024. PMID: 38234221. DOI: 10.1002/jrsm.1700.
Abstract
Network meta-analysis (NMA) is an extension of pairwise meta-analysis (PMA) which combines evidence from trials on multiple treatments in connected networks. NMA delivers internally consistent estimates of relative treatment efficacy, needed for rational decision making. Over its first 20 years, NMA's use has grown exponentially, with applications in both health technology assessment (HTA), primarily reimbursement decisions and clinical guideline development, and clinical research publications. This has been a period of transition in meta-analysis: first from its roots in educational and social psychology, where large heterogeneous datasets could be explored to find effect modifiers, to smaller pairwise meta-analyses in clinical medicine with, on average, fewer than six studies. This has been followed by narrowly focused estimation of the effects of specific treatments at specific doses in specific populations in sparse networks, where direct comparisons are unavailable or informed by only one or two studies. NMA is a powerful and well-established technique but, in spite of the exponential increase in applications, doubts about its reliability and validity persist. Here we outline the continuing controversies and review some recent developments. We suggest that heterogeneity should be minimized, as it poses a threat to the reliability of NMA that has not been fully appreciated, perhaps because it has not been seen as a problem in PMA. More research is needed on the extent of heterogeneity and inconsistency in datasets used for decision making, on formal methods for making recommendations based on NMA, and on the further development of multilevel network meta-regression.
Affiliation(s)
- A E Ades
- Population Health Sciences, Bristol Medical School, Bristol, UK
- Nicky J Welton
- Population Health Sciences, Bristol Medical School, Bristol, UK
- Sofia Dias
- Centre for Reviews and Dissemination, University of York, York, UK
6
Zhang L, Jackson D. Generalizing some key results from "alternative weighting schemes when performing matching-adjusted indirect comparisons". Res Synth Methods 2024; 15:152-156. PMID: 37956977. DOI: 10.1002/jrsm.1682.
Abstract
A recent paper proposed an alternative weighting scheme when performing matching-adjusted indirect comparisons. This alternative approach follows the conventional one in matching the covariate means across two studies but differs in that it maximizes the effective sample size when doing so. The appendix of this paper showed, assuming there is one covariate and negative weights are permitted, that the resulting weights are linear in the covariates. This explains how the alternative method achieves a larger effective sample size and results in a metric that quantifies the difficulty of matching on particular covariates. We explain how these key results generalize to the case where there are multiple covariates, giving rise to a new metric that can be used to quantify the impact of matching on multiple covariates.
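The generalisation has a convenient closed form: among all weights with fixed sum that match the covariate means, those minimising the sum of squared weights (equivalently, maximising the effective sample size when negative weights are permitted) are linear in the covariates. A sketch under invented data:

```python
import numpy as np

def ess_max_weights(X, target):
    """Weights linear in the covariates that match the target means while
    minimising sum(w^2) for fixed sum(w) = n, i.e. maximising the
    effective sample size (negative weights are permitted)."""
    xbar = X.mean(axis=0)
    Z = X - xbar
    S = Z.T @ Z / len(X)                       # covariate scatter matrix
    return 1.0 + Z @ np.linalg.solve(S, target - xbar)

rng = np.random.default_rng(5)
X = rng.normal(0.0, 1.0, (200, 3))
target = np.array([0.4, -0.1, 0.2])
w = ess_max_weights(X, target)
weighted_means = (w[:, None] * X).sum(axis=0) / w.sum()
ess = w.sum() ** 2 / (w ** 2).sum()
print(np.round(weighted_means, 3), round(ess, 1))
```

With these weights, sum(w^2) = n(1 + D2), where D2 is the Mahalanobis distance (target - xbar)' S^{-1} (target - xbar), so a quadratic form in the covariate shift quantifies how hard the covariates are to match, in the spirit of the metric discussed in the paper.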
Affiliation(s)
- Landan Zhang
- Statistical Innovation Group, AstraZeneca, Cambridge, UK
- Dan Jackson
- Statistical Innovation Group, AstraZeneca, Cambridge, UK
7
Truong B, Tran LAT, Le TA, Pham TT, Vo TT. Population-adjusted indirect comparisons in health technology assessment: A methodological systematic review. Res Synth Methods 2023; 14:660-670. PMID: 37400080. DOI: 10.1002/jrsm.1653.
Abstract
In health technology assessment (HTA), population-adjusted indirect comparisons (PAICs) are increasingly considered to adjust for differences in the target population between studies. We aim to assess the conduct and reporting of PAICs in recent HTA practice by performing a methodological systematic review of studies implementing PAICs, drawing on PubMed, EMBASE Classic, Embase/Ovid Medline All, and Cochrane databases from January 1, 2010, to February 13, 2023. Four independent researchers screened the titles, abstracts, and full texts of the identified records, then extracted data on the methodological and reporting characteristics of 106 eligible articles. Most PAIC analyses (96.9%, n = 157) were conducted by (or received funding from) pharmaceutical companies. Prior to adjustment, 44.5% of analyses (n = 72) (partially) aligned the eligibility criteria of the different studies to enhance the similarity of their target populations. In 37.0% of analyses (n = 60), the clinical and methodological heterogeneity across studies was extensively assessed. In 9.3% of analyses (n = 15), the quality (or risk of bias) of individual studies was evaluated. Among 18 analyses using methods that required an outcome model specification, results of the model-fitting procedure were adequately reported in three analyses (16.7%). These findings suggest that the conduct and reporting of PAICs are remarkably heterogeneous and suboptimal in current practice. More recommendations and guidelines on PAICs are thus warranted to enhance the quality of these analyses in the future.
Affiliation(s)
- Bang Truong
- Faculty of Pharmacy, HUTECH University, Ho Chi Minh City, Vietnam
- Department of Health Outcomes Research and Policy, Auburn University Harrison College of Pharmacy, Auburn, Alabama, USA
- Lan-Anh T Tran
- Department of Applied Mathematics, Computer Science and Statistics, Ghent University, Ghent, Belgium
- Tuan Anh Le
- Department of Biology, KU Leuven, Leuven, Belgium
- Thi Thu Pham
- Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, Germany
- Tat-Thang Vo
- Department of Statistics and Data Science, The Wharton School, University of Pennsylvania, Philadelphia, Pennsylvania, USA
8
Harrer M, Cuijpers P, Schuurmans LKJ, Kaiser T, Buntrock C, van Straten A, Ebert D. Evaluation of randomized controlled trials: a primer and tutorial for mental health researchers. Trials 2023; 24:562. PMID: 37649083. PMCID: PMC10469910. DOI: 10.1186/s13063-023-07596-3.
Abstract
BACKGROUND Considered one of the highest levels of evidence, results of randomized controlled trials (RCTs) remain an essential building block in mental health research. They are frequently used to confirm that an intervention "works" and to guide treatment decisions. Given their importance in the field, it is concerning that the quality of many RCT evaluations in mental health research remains poor. Common errors range from inadequate missing data handling and inappropriate analyses (e.g., baseline randomization tests or analyses of within-group changes) to undue interpretations of trial results and insufficient reporting. These deficiencies pose a threat to the robustness of mental health research and its impact on patient care. Many of these issues may be avoided in the future if mental health researchers are provided with a better understanding of what constitutes a high-quality RCT evaluation. METHODS In this primer article, we give an introduction to core concepts and caveats of clinical trial evaluations in mental health research. We also show how to implement current best practices using open-source statistical software. RESULTS Drawing on Rubin's potential outcome framework, we describe that RCTs put us in a privileged position to study causality by ensuring that the potential outcomes of the randomized groups become exchangeable. We discuss how missing data can threaten the validity of our results if dropouts systematically differ from non-dropouts, introduce trial estimands as a way to co-align analyses with the goals of the evaluation, and explain how to set up an appropriate analysis model to test the treatment effect at one or several assessment points. A novice-friendly tutorial is provided alongside this primer. It lays out concepts in greater detail and showcases how to implement techniques using the statistical software R, based on a real-world RCT dataset.
DISCUSSION Many problems of RCTs already arise at the design stage, and we examine some avoidable and unavoidable "weak spots" of this design in mental health research. For instance, we discuss how lack of prospective registration can give way to issues like outcome switching and selective reporting, how allegiance biases can inflate effect estimates, review recommendations and challenges in blinding patients in mental health RCTs, and describe problems arising from underpowered trials. Lastly, we discuss why not all randomized trials necessarily have a limited external validity and examine how RCTs relate to ongoing efforts to personalize mental health care.
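One of the primer's core recommendations, testing the treatment effect with a covariate-adjusted model rather than within-group change scores, can be illustrated in a few lines. The data below are simulated, not drawn from the tutorial, and the tutorial itself uses R rather than the Python used here.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 200
group = np.repeat([0, 1], n // 2)              # randomized allocation
baseline = rng.normal(20.0, 5.0, n)            # symptom score before treatment
post = 0.6 * baseline - 3.0 * group + rng.normal(0.0, 4.0, n)

# ANCOVA: regress the post-treatment score on baseline and group; the
# group coefficient estimates the treatment effect adjusted for baseline,
# avoiding the within-group change-score tests the primer warns against
X = np.column_stack([np.ones(n), baseline, group])
beta, *_ = np.linalg.lstsq(X, post, rcond=None)
resid = post - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
se = np.sqrt(np.diag(np.linalg.inv(X.T @ X)) * sigma2)
print(round(beta[2], 2), "+/-", round(1.96 * se[2], 2))
```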
Affiliation(s)
- Mathias Harrer
- Psychology and Digital Mental Health Care, Technical University Munich, Georg-Brauchle-Ring 60-62, Munich, 80992, Germany
- Clinical Psychology and Psychotherapy, Institute for Psychology, Friedrich-Alexander-University Erlangen-Nuremberg, Erlangen, Germany
- Pim Cuijpers
- Department of Clinical, Neuro and Developmental Psychology, Amsterdam Public Health Research Institute, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
- WHO Collaborating Centre for Research and Dissemination of Psychological Interventions, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
- Lea K J Schuurmans
- Psychology and Digital Mental Health Care, Technical University Munich, Georg-Brauchle-Ring 60-62, Munich, 80992, Germany
- Tim Kaiser
- Methods and Evaluation/Quality Assurance, Freie Universität Berlin, Berlin, Germany
- Claudia Buntrock
- Institute of Social Medicine and Health Systems Research (ISMHSR), Medical Faculty, Otto Von Guericke University Magdeburg, Magdeburg, Germany
- Annemieke van Straten
- Department of Clinical, Neuro and Developmental Psychology, Amsterdam Public Health Research Institute, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
- David Ebert
- Psychology and Digital Mental Health Care, Technical University Munich, Georg-Brauchle-Ring 60-62, Munich, 80992, Germany
9
Dahabreh IJ, Robertson SE, Petito LC, Hernán MA, Steingrimsson JA. Efficient and robust methods for causally interpretable meta-analysis: Transporting inferences from multiple randomized trials to a target population. Biometrics 2023; 79:1057-1072. PMID: 35789478. PMCID: PMC10948002. DOI: 10.1111/biom.13716.
Abstract
We present methods for causally interpretable meta-analyses that combine information from multiple randomized trials to draw causal inferences for a target population of substantive interest. We consider identifiability conditions, derive implications of the conditions for the law of the observed data, and obtain identification results for transporting causal inferences from a collection of independent randomized trials to a new target population in which experimental data may not be available. We propose an estimator for the potential outcome mean in the target population under each treatment studied in the trials. The estimator uses covariate, treatment, and outcome data from the collection of trials, but only covariate data from the target population sample. We show that it is doubly robust in the sense that it is consistent and asymptotically normal when at least one of the models it relies on is correctly specified. We study the finite sample properties of the estimator in simulation studies and demonstrate its implementation using data from a multicenter randomized trial.
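The estimator's structure, an outcome-model mean over the target sample plus an odds-weighted residual correction from the trial data, can be sketched for the simplified case of a single trial with a continuous outcome. The paper's setting is more general (multiple trials, flexible nuisance models); everything below, including both parametric nuisance models, is invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(13)
n = 2000
x_trial = rng.normal(0.0, 1.0, n)              # trial covariate
a = rng.binomial(1, 0.5, n)                    # randomized treatment
y = 1 + 2 * x_trial + a * (1 + x_trial) + rng.normal(0.0, 1.0, n)
x_target = rng.normal(1.0, 1.0, n)             # target sample: covariates only

# outcome model fitted per arm in the trial (linear regression)
coef = {}
for arm in (0, 1):
    Xa = np.column_stack([np.ones(np.sum(a == arm)), x_trial[a == arm]])
    coef[arm], *_ = np.linalg.lstsq(Xa, y[a == arm], rcond=None)
g = lambda arm, x: coef[arm][0] + coef[arm][1] * x

# participation model: odds that a unit with covariate x is in the target
xx = np.r_[x_trial, x_target]
s = np.r_[np.zeros(n), np.ones(n)]
D = np.column_stack([np.ones(2 * n), xx])
nll = lambda b: -np.sum(s * (D @ b) - np.log1p(np.exp(D @ b)))
b = minimize(nll, np.zeros(2), method="BFGS").x
w = np.exp(b[0] + b[1] * x_trial)              # odds (density-ratio) weights

# doubly robust: outcome-model mean in the target + weighted residual term
def mu(arm):
    resid = (a == arm) * (y - g(arm, x_trial))
    return g(arm, x_target).mean() + np.sum(w * resid) / np.sum(w * (a == arm))

print(round(mu(1) - mu(0), 2))  # true effect in the target population is 2
```

If the outcome model is correct the residual term vanishes asymptotically; if instead the participation model is correct the weighting repairs an incorrect outcome model, which is the sense in which the estimator is doubly robust.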
Affiliation(s)
- Issa J. Dahabreh
- CAUSALab, Harvard T.H. Chan School of Public Health, Boston, MA
- Department of Epidemiology, Harvard T.H. Chan School of Public Health, Boston, MA
- Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA
- Sarah E. Robertson
- CAUSALab, Harvard T.H. Chan School of Public Health, Boston, MA
- Department of Epidemiology, Harvard T.H. Chan School of Public Health, Boston, MA
- Lucia C. Petito
- Department of Preventive Medicine, Feinberg School of Medicine, Northwestern University, Chicago, IL
- Miguel A. Hernán
- CAUSALab, Harvard T.H. Chan School of Public Health, Boston, MA
- Department of Epidemiology, Harvard T.H. Chan School of Public Health, Boston, MA
- Harvard-MIT Division of Health Sciences and Technology, Boston, MA
- Jon A. Steingrimsson
- Department of Biostatistics, School of Public Health, Brown University, Providence, RI
10
Remiro-Azócar A, Heath A, Baio G. Parametric G-computation for compatible indirect treatment comparisons with limited individual patient data. Res Synth Methods 2022; 13:716-744. PMID: 35485582. PMCID: PMC9790405. DOI: 10.1002/jrsm.1565.
Abstract
Population adjustment methods such as matching-adjusted indirect comparison (MAIC) are increasingly used to compare marginal treatment effects when there are cross-trial differences in effect modifiers and limited patient-level data. MAIC is based on propensity score weighting, which is sensitive to poor covariate overlap and cannot extrapolate beyond the observed covariate space. Current outcome regression-based alternatives can extrapolate but target a conditional treatment effect that is incompatible in the indirect comparison. When adjusting for covariates, one must integrate or average the conditional estimate over the relevant population to recover a compatible marginal treatment effect. We propose a marginalization method based on parametric G-computation that can be easily applied where the outcome regression is a generalized linear model or a Cox model. The approach views the covariate adjustment regression as a nuisance model and separates its estimation from the evaluation of the marginal treatment effect of interest. The method can accommodate a Bayesian statistical framework, which naturally integrates the analysis into a probabilistic framework. A simulation study provides proof-of-principle and benchmarks the method's performance against MAIC and the conventional outcome regression. Parametric G-computation achieves more precise and more accurate estimates than MAIC, particularly when covariate overlap is poor, and yields unbiased marginal treatment effect estimates under no failures of assumptions. Furthermore, the marginalized regression-adjusted estimates provide greater precision and accuracy than the conditional estimates produced by the conventional outcome regression, which are systematically biased because the measure of effect is non-collapsible.
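The marginalization step is mechanically simple; here is a hedged sketch with a binary outcome and a logistic outcome regression. The data, model, and comparator-population moments are invented, and the paper itself also covers Cox models and a Bayesian implementation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, logit

rng = np.random.default_rng(2)
n = 3000
x = rng.normal(0.0, 1.0, n)
a = rng.binomial(1, 0.5, n)
y = rng.binomial(1, expit(-1 + x + 0.8 * a + 0.4 * a * x))

# step 1: fit the conditional outcome regression on the IPD (nuisance model)
D = np.column_stack([np.ones(n), x, a, a * x])
nll = lambda b: -np.sum(y * (D @ b) - np.log1p(np.exp(D @ b)))
b = minimize(nll, np.zeros(4), method="BFGS").x

# step 2: draw covariates from the comparator population (here simulated
# from its reported moments) and average predictions under each treatment
x_pop = rng.normal(0.6, 1.0, 100_000)
p0 = expit(b[0] + b[1] * x_pop).mean()
p1 = expit(b[0] + b[1] * x_pop + b[2] + b[3] * x_pop).mean()

# step 3: contrast the averaged outcome probabilities to obtain the
# marginal (population-average) log odds ratio
marginal_log_or = logit(p1) - logit(p0)
print(round(float(marginal_log_or), 2))
```

Because the odds ratio is non-collapsible, this marginal estimate differs from the conditional treatment coefficient in the outcome model, which is the paper's motivation for marginalizing before entering the indirect comparison.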
Affiliation(s)
- Antonio Remiro-Azócar
- Department of Statistical Science, University College London, London, UK
- Quantitative Research, Statistical Outcomes Research & Analytics (SORA) Ltd, London, UK
- Anna Heath
- Department of Statistical Science, University College London, London, UK
- Child Health Evaluative Sciences, The Hospital for Sick Children, Toronto, Canada
- Dalla Lana School of Public Health, University of Toronto, Toronto, Canada
- Gianluca Baio
- Department of Statistical Science, University College London, London, UK
11
Remiro-Azócar A. Two-stage matching-adjusted indirect comparison. BMC Med Res Methodol 2022; 22:217. PMID: 35941551. PMCID: PMC9358807. DOI: 10.1186/s12874-022-01692-9.
Abstract
BACKGROUND Anchored covariate-adjusted indirect comparisons inform reimbursement decisions where there are no head-to-head trials between the treatments of interest, there is a common comparator arm shared by the studies, and there are patient-level data limitations. Matching-adjusted indirect comparison (MAIC), based on propensity score weighting, is the most widely used covariate-adjusted indirect comparison method in health technology assessment. MAIC has poor precision and is inefficient when the effective sample size after weighting is small. METHODS A modular extension to MAIC, termed two-stage matching-adjusted indirect comparison (2SMAIC), is proposed. This uses two parametric models. One estimates the treatment assignment mechanism in the study with individual patient data (IPD), the other estimates the trial assignment mechanism. The first model produces inverse probability weights that are combined with the odds weights produced by the second model. The resulting weights seek to balance covariates between treatment arms and across studies. A simulation study provides proof-of-principle in an indirect comparison performed across two randomized trials. Nevertheless, 2SMAIC can be applied in situations where the IPD trial is observational, by including potential confounders in the treatment assignment model. The simulation study also explores the use of weight truncation in combination with MAIC for the first time. RESULTS Despite enforcing randomization and knowing the true treatment assignment mechanism in the IPD trial, 2SMAIC yields improved precision and efficiency with respect to MAIC in all scenarios, while maintaining similarly low levels of bias. The two-stage approach is effective when sample sizes in the IPD trial are low, as it controls for chance imbalances in prognostic baseline covariates between study arms. It is not as effective when overlap between the trials' target populations is poor and the extremity of the weights is high. 
In these scenarios, truncation leads to substantial precision and efficiency gains but induces considerable bias. The combination of a two-stage approach with truncation produces the highest precision and efficiency improvements. CONCLUSIONS Two-stage approaches to MAIC can increase precision and efficiency with respect to the standard approach by adjusting for empirical imbalances in prognostic covariates in the IPD trial. Further modules could be incorporated for additional variance reduction or to account for missingness and non-compliance in the IPD trial.
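2SMAIC itself multiplies the two sets of weights (inverse-probability-of-treatment weights from the first model times trial-assignment odds weights from the second). The truncation module explored in the simulation study is the easiest part to sketch; below it is applied to invented, deliberately extreme weights.

```python
import numpy as np

def truncate_weights(w, q=0.95):
    """Cap weights at their q-th percentile: reduces weight variability
    (raising the effective sample size) at the cost of some bias in the
    covariate balance the weights were built to achieve."""
    return np.minimum(w, np.quantile(w, q))

def ess(w):
    return w.sum() ** 2 / (w ** 2).sum()

rng = np.random.default_rng(4)
w = np.exp(rng.normal(0.0, 1.5, 400))   # highly variable odds weights
w_trunc = truncate_weights(w)
print(round(ess(w), 1), round(ess(w_trunc), 1))
```

For heavy-tailed weights like these, capping the top 5% raises the effective sample size, mirroring the precision/bias trade-off reported in the paper.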
Affiliation(s)
- Antonio Remiro-Azócar
- Medical Affairs Statistics, Bayer plc, 400 South Oak Way, Reading, UK
- Department of Statistical Science, University College London, 1-19 Torrington Place, London, UK
12
Alsop JC, Pont LO. Matching-adjusted indirect comparison via a polynomial-based non-linear optimisation method. J Comp Eff Res 2022; 11:551-561. PMID: 35506464. DOI: 10.2217/cer-2021-0266.
Abstract
Aim: To demonstrate the potential of fourth-order polynomials within a non-linear optimisation framework for matching-adjusted indirect comparison (MAIC). Materials & methods: Simulated individual patient data were reweighted via fourth-order polynomials (polyMAIC) to match aggregate-level data across multiple baseline characteristics. The polyMAIC approach employed pre-specified matching tolerances and maximum allowable weights. Matching performance against aggregate-level targets was assessed and compared with the current industry-standard MAIC approach (Signorovitch). Results: The polyMAIC method matched aggregate-level targets within the pre-specified tolerances. Effective sample sizes were similar to or somewhat higher than those obtained from the Signorovitch method. Performance gains from polyMAIC tended to increase as matching complexity increased. Conclusion: PolyMAIC incorporates greater flexibility than the industry-standard MAIC approach and demonstrates its matching potential.
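The polynomial parameterisation is not spelled out in the abstract, but the two ingredients it names, matching tolerances and maximum allowable weights, fit naturally into a generic constrained optimisation. The sketch below maximises effective sample size subject to those constraints with scipy's SLSQP; it illustrates the idea only and is not the authors' polyMAIC algorithm.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)
n = 150
X = rng.normal(0.0, 1.0, (n, 2))               # simulated IPD covariates
target = np.array([0.3, -0.2])                 # aggregate-level means
tol, w_max = 0.01, 10.0                        # matching tolerance, weight cap

# with sum(w) fixed at n, minimising sum(w^2) maximises the effective
# sample size; the matching tolerances become linear inequality constraints
cons = [{"type": "eq", "fun": lambda w: w.sum() - n}]
for j in range(X.shape[1]):
    cons.append({"type": "ineq",
                 "fun": lambda w, j=j: tol - (w @ X[:, j] / n - target[j])})
    cons.append({"type": "ineq",
                 "fun": lambda w, j=j: tol + (w @ X[:, j] / n - target[j])})
res = minimize(lambda w: (w ** 2).sum(), np.ones(n),
               bounds=[(0.0, w_max)] * n, constraints=cons, method="SLSQP")
w = res.x
weighted_means = w @ X / n
print(np.round(weighted_means, 2), round(w.sum() ** 2 / (w ** 2).sum(), 1))
```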