Walley RJ, Grieve AP. Optimising the trade-off between type I and II error rates in the Bayesian context. Pharm Stat 2021;20:710-720. [PMID: 33619884] [DOI: 10.1002/pst.2102]
[Received: 08/20/2020] [Revised: 01/28/2021] [Accepted: 01/29/2021]
Abstract
For any decision-making study, two sorts of error can be made: declaring a positive result when the truth is negative, and declaring a negative result when the truth is positive. Traditionally, the primary analysis of a study is a two-sided hypothesis test: the type I error rate is set to 5%, and the study is designed to give a suitably low type II error rate, typically 10% or 20%, to detect a given effect size. These values are standard and arbitrary and, other than the choice between 10% and 20%, do not reflect the context of the study, such as the relative costs of making type I and type II errors or the prior belief that the drug is placebo-like. Several authors have challenged this paradigm, typically for the scenario where the planned analysis is frequentist. When resources are limited, there will always be a trade-off between the type I and type II error rates, and this article explores optimising this trade-off for a study with a planned Bayesian statistical analysis. This work provides a scientific basis for a discussion between stakeholders as to what type I and type II error rates may be appropriate, and some algebraic results for normally distributed data.
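The trade-off described above can be illustrated with a minimal sketch (not the authors' actual method): for a one-sided test on a normal mean with known standard error, choose the critical value that minimises a cost-weighted average of the two error rates, weighted by the prior probability that the treatment is placebo-like. The prior probability `p0`, the relative costs `c1` and `c2`, the effect size `delta`, and the standard error `se` are all illustrative assumptions.

```python
import math

def phi(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Illustrative assumptions (not from the paper):
p0 = 0.5          # prior probability the drug is placebo-like (true effect = 0)
c1, c2 = 1.0, 1.0 # relative costs of a type I and a type II error
delta = 0.5       # effect size the study is designed to detect
se = 0.2          # standard error of the estimated treatment effect

def expected_cost(crit):
    """Prior- and cost-weighted average of the two error rates at threshold crit."""
    alpha = 1.0 - phi(crit)               # type I error rate
    beta = phi(crit - delta / se)         # type II error rate at effect delta
    return c1 * p0 * alpha + c2 * (1.0 - p0) * beta

# Simple grid search over candidate critical values on the z scale.
crit = min((c / 1000.0 for c in range(0, 5001)), key=expected_cost)
alpha_opt = 1.0 - phi(crit)
beta_opt = phi(crit - delta / se)
print(f"optimal critical value z = {crit:.3f}")
print(f"type I error rate  = {alpha_opt:.4f}")
print(f"type II error rate = {beta_opt:.4f}")
```

With equal costs and a 50:50 prior, the optimum splits the distance between the null and alternative means evenly, so the two error rates come out equal; changing `p0`, `c1`, or `c2` shifts the threshold and hence the balance between the error rates, which is the kind of context-dependence the abstract argues the conventional 5%/10-20% convention ignores.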