1. O'Neill RT, Boulatov R. Mechanochemical Approaches to Fundamental Studies in Soft-Matter Physics. Angew Chem Int Ed Engl 2024; 63:e202402442. PMID: 38404161. DOI: 10.1002/anie.202402442.
Abstract
Stretching a segment of a polymer beyond its contour length makes its (primarily backbone) bonds more dissociatively labile, which enables polymer mechanochemistry. Integrating some backbone bonds into suitably designed molecular moieties yields mechanistically and kinetically diverse chemistry, which is becoming increasingly exploitable. Examples include, most prominently, attempts to improve mechanical properties of bulk polymers, as well as prospective applications in drug delivery and synthesis. This review aims to highlight an emerging effort to apply the concepts and experimental tools of mechanochemistry to fundamental physical questions in soft matter. A succinct summary of the state-of-the-knowledge of the field, with emphasis on foundational concepts and generalizable observations, is followed by analysis of 3 recent examples of mechanochemistry yielding molecular-level details of elastomer failure, macromolecular chain dynamics in elongational flows and kinetic allostery. We conclude with reasons to assume that the highlighted approaches are generalizable to a broader range of physical problems than considered to date.
Affiliation(s)
- Robert T O'Neill
- Department of Chemistry, University of Liverpool, Grove Street, Liverpool L69 7ZD, UK
- Roman Boulatov
- Department of Chemistry, University of Liverpool, Grove Street, Liverpool L69 7ZD, UK
2. O'Neill RT, Boulatov R. Experimental quantitation of molecular conditions responsible for flow-induced polymer mechanochemistry. Nat Chem 2023; 15:1214-1223. PMID: 37430105. DOI: 10.1038/s41557-023-01266-2.
Abstract
Fragmentation of macromolecular solutes in rapid flows is of considerable fundamental and practical importance. The sequence of molecular events preceding chain fracture is poorly understood, because such events cannot be visualized directly but must be inferred from changes in the bulk composition of the flowing solution. Here we describe how analysis of same-chain competition between fracture of a polystyrene chain and isomerization of a chromophore embedded in its backbone yields detailed characterization of the distribution of molecular geometries of mechanochemically reacting chains in sonicated solutions. In our experiments the overstretched (mechanically loaded) chain segment grew and drifted along the backbone on the same timescale as, and in competition with, the mechanochemical reactions. Consequently, only <30% of the backbone of a fragmenting chain is overstretched, with both the maximum force and the maximum reaction probabilities located away from the chain centre. We argue that quantifying intrachain competition is likely to be mechanistically informative for any flow fast enough to fracture polymer chains.
Affiliation(s)
- Roman Boulatov
- Department of Chemistry, University of Liverpool, Liverpool, UK
3. Yu Y, O'Neill RT, Boulatov R, Widenhoefer RA, Craig SL. Allosteric control of olefin isomerization kinetics via remote metal binding and its mechanochemical analysis. Nat Commun 2023; 14:5074. PMID: 37604905. PMCID: PMC10442431. DOI: 10.1038/s41467-023-40842-5. Open access.
Abstract
Allosteric control of reaction thermodynamics is well understood, but the mechanisms by which changes in local geometries of receptor sites lower activation barriers of reactions in electronically uncoupled, remote reaction moieties remain relatively unexplored. Here we report a molecular scaffold in which the rate of thermal E-to-Z isomerization of an alkene increases by a factor of as much as 10^4 in response to fast binding of a metal ion to a remote receptor site. A mechanochemical model of the olefin coupled to a compressive harmonic spring reproduces the observed acceleration quantitatively, adding the studied isomerization to the very few reactions demonstrated to be sensitive to extrinsic compressive force. The work validates experimentally the generalization of mechanochemical kinetics to compressive loads and demonstrates that the formalism of force-coupled reactivity offers a productive framework for the quantitative analysis of the molecular basis of allosteric control of reaction kinetics. Important differences in the effects of compressive vs. tensile force on the kinetic stabilities of molecules are discussed.
Affiliation(s)
- Yichen Yu
- Department of Chemistry, Duke University, Durham, NC 27708, USA
- Robert T O'Neill
- Department of Chemistry, University of Liverpool, Crown Street, Liverpool L69 7ZD, UK
- Roman Boulatov
- Department of Chemistry, University of Liverpool, Crown Street, Liverpool L69 7ZD, UK
- Stephen L Craig
- Department of Chemistry, Duke University, Durham, NC 27708, USA
4. Chan APY, Jakoobi M, Wang C, O'Neill RT, Aydin GSS, Halcovitch N, Boulatov R, Sergeev AG. Selective ortho-C-H Activation in Arenes without Functional Groups. J Am Chem Soc 2022; 144:11564-11568. PMID: 35728272. PMCID: PMC9348813. DOI: 10.1021/jacs.2c04621.
Abstract
Aromatic C-H activation in alkylarenes is a key step for the synthesis of functionalized organic molecules from simple hydrocarbon precursors. Known examples of such C-H activations often yield mixtures of products resulting from activation of the least hindered C-H bonds. Here we report highly selective ortho-C-H activation in alkylarenes by simple iridium complexes. We demonstrate that the capacity of the alkyl substituent to override the typical preference of metal-mediated C-H activation for the least hindered aromatic C-H bonds results from transient insertion of iridium into the benzylic C-H bond. This enables fast iridium insertion into the ortho-C-H bond, followed by regeneration of the benzylic C-H bond by reductive elimination. Bulkier alkyl substituents increase the ortho selectivity. The described chemistry represents a conceptually new alternative to existing approaches for aromatic C-H bond activation.
Affiliation(s)
- Antony P Y Chan
- Department of Chemistry, University of Liverpool, Crown Street, Liverpool L69 7ZD, UK
- Martin Jakoobi
- Department of Chemistry, University of Liverpool, Crown Street, Liverpool L69 7ZD, UK
- Chenxu Wang
- Department of Chemistry, University of Liverpool, Crown Street, Liverpool L69 7ZD, UK
- Robert T O'Neill
- Department of Chemistry, University of Liverpool, Crown Street, Liverpool L69 7ZD, UK
- Gülsevim S S Aydin
- Department of Chemistry, University of Liverpool, Crown Street, Liverpool L69 7ZD, UK
- Nathan Halcovitch
- Department of Chemistry, Lancaster University, Bailrigg, Lancaster LA1 4YW, UK
- Roman Boulatov
- Department of Chemistry, University of Liverpool, Crown Street, Liverpool L69 7ZD, UK; State Key Laboratory of Supramolecular Structure and Materials, College of Chemistry, Jilin University, Changchun 130012, P. R. China
- Alexey G Sergeev
- Department of Chemistry, University of Liverpool, Crown Street, Liverpool L69 7ZD, UK
5. O'Neill RT. Reacting to crises: The COVID-19 impact on biostatistics/epidemiology. Contemp Clin Trials 2020; 102:106214. PMID: 33186685. PMCID: PMC7654297. DOI: 10.1016/j.cct.2020.106214.
Abstract
Most crises, though difficult and challenging to address, offer opportunities for change and for the development of new perspectives or approaches to deal with traditional strategies. The reaction to, and management of, the COVID-19 pandemic has provided a platform for evaluating how we quantify disease prevalence, incidence, time courses and sequelae, as well as how well we plan, design, analyze and interpret health-care-associated data, including clinical trials, electronic medical records and health claims data. Whether the COVID-19 crisis provides opportunities to advance the fields of biostatistics and epidemiology in select ways remains to be seen. This article describes three areas of crisis experienced by the author during a career in the regulation of pharmaceutical products and how they were responded to. Some suggestions for potential future opportunities in reaction to the COVID-19 crisis are provided.
Affiliation(s)
- Robert T O'Neill
- 200 West 10th Street, South Bethany, Delaware 19930, United States
6. Temple RJ, O'Neill RT, Peck CC. Farewell to our Wonderful Friend and Colleague, J. Richard (Dick) Crout (1929-2020). Clin Pharmacol Ther 2020; 109:1384-1387. PMID: 33111972. DOI: 10.1002/cpt.2063.
Affiliation(s)
- Robert J Temple
- Deputy Director for Clinical Science, Center for Drug Evaluation and Research, US Food and Drug Administration, Silver Spring, Maryland, USA
- Robert T O'Neill
- Former Director of the Office of Biostatistics, Center for Drug Evaluation and Research, US Food and Drug Administration, Bethany, Delaware, USA
- Carl C Peck
- Department of Bioengineering and Therapeutic Sciences, University of California at San Francisco, San Francisco, California, USA; NDA Partners, LLC, Rochelle, Virginia, USA
7.
Abstract
Multicentre trials are very common in the field of drug development. In recent years, multicentre trials have taken on a multinational and multiregional aspect. We provide a conceptual framework for the use of multicentre trials in the context of drug development, from the perspective of drug regulation in the United States. In this paper, we review some regulatory history, milestones and standards as they relate to multicentre trials. Special attention is given to the similarities and differences in the approaches to multicentre trials in the following documents: the Guideline for the Format and Content of the Clinical and Statistical Sections of New Drug Applications, the International Conference on Harmonization Draft Guideline on Statistical Principles for Clinical Trials, and the Guidance for Industry Providing Clinical Evidence of Effectiveness for Human Drug and Biologic Products. The paper includes a consideration of some of the issues in the analysis of data from multicentre trials.
Affiliation(s)
- Charles Anello
- Office of Biostatistics, Center for Drug Evaluation and Research, Food and Drug Administration, USA
8. O'Neill RT, Temple R. The Prevention and Treatment of Missing Data in Clinical Trials: An FDA Perspective on the Importance of Dealing With It. Clin Pharmacol Ther 2012; 91:550-554. DOI: 10.1038/clpt.2011.340.
9. Wang SJ, Hung HMJ, O'Neill RT. Impacts on type I error rate with inappropriate use of learn and confirm in confirmatory adaptive design trials. Biom J 2011; 52:798-810. PMID: 21154897. DOI: 10.1002/bimj.200900207.
Abstract
A two-stage adaptive design trial is a single trial that combines the learning data from stage 1 (or phase II) and the confirming data from stage 2 (or phase III) for formal statistical testing. We call it a "Learn and Confirm" trial. The studywise type I error rate remains at issue in a "Learn and Confirm" trial. For studying multiple doses or multiple endpoints, a "Learn and Confirm" adaptive design can be more attractive than a fixed design approach. This is because, intuitively, the learning data in stage 1 should not be subjected to type I error scrutiny if there is no formal interim analysis performed and only an adaptive selection of design parameters is made at stage 1. In this work, we conclude from extensive simulation studies that this intuition is most often misleading. That is, regardless of whether or not there is a formal interim analysis for making an adaptive selection, the type I error rates are always at risk of inflation. Inappropriate use of any "Learn and Confirm" strategy should not be overlooked.
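The inflation the abstract reports is easy to reproduce by simulation. The sketch below is illustrative only, with invented design parameters (it is not the paper's simulation study): under a global null, the best-looking of k doses is selected at stage 1 and its stage-1 data are pooled into the final test with no multiplicity adjustment, which pushes the one-sided type I error above the nominal 0.05.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_trial(n1=50, n2=100, k=3):
    """One 'Learn and Confirm' trial under the global null:
    select the best-looking of k doses at stage 1, then pool its
    stage-1 data with stage-2 data for the final unadjusted test."""
    control1 = rng.standard_normal(n1)
    doses1 = rng.standard_normal((k, n1))
    best = np.argmax(doses1.mean(axis=1))      # adaptive selection at stage 1
    control2 = rng.standard_normal(n2)
    dose2 = rng.standard_normal(n2)
    treat = np.concatenate([doses1[best], dose2])   # pooled treated arm
    ctrl = np.concatenate([control1, control2])     # pooled control arm
    se = np.sqrt(1 / len(treat) + 1 / len(ctrl))    # known sigma = 1
    z = (treat.mean() - ctrl.mean()) / se
    return z > 1.645                           # nominal one-sided alpha = 0.05

reject = np.mean([one_trial() for _ in range(20000)])
print(f"empirical one-sided type I error: {reject:.3f}")  # typically well above 0.05
```

The selected arm's stage-1 mean is the maximum of k independent sample means and is therefore biased upward under the null, which is exactly the source of the inflation the authors warn about.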
Affiliation(s)
- Sue-Jane Wang
- OTS/CDER, US FDA, Silver Spring, MD 20993-0002, USA
10.
Abstract
BACKGROUND: The current practice for seeking genomically favorable patients in randomized controlled clinical trials uses genomic convenience samples. PURPOSE: To discuss the extent of imbalance, confounding, bias, design efficiency loss, type I error, and type II error that can occur in the evaluation of convenience samples, particularly when they are small; to articulate statistical considerations for a reasonable sample size to minimize the chance of imbalance; and to highlight the importance of replicating the subgroup finding in independent studies. METHODS: Four case examples reflecting recent regulatory experiences are used to underscore the problems with convenience samples. The probability of imbalance for a pre-specified subgroup is provided to elucidate the sample size needed to minimize the chance of imbalance. We use an example drug development program to highlight the level of scientific rigor needed, with evidence replicated for a pre-specified subgroup claim. RESULTS: The convenience samples evaluated ranged from 18% to 38% of the intent-to-treat samples, with sample sizes ranging from 100 to 5000 patients per arm. Baseline imbalance can occur with probability higher than 25%. Mild to moderate multiple confounders yielding the same directional bias in favor of the treated group can make the treatment groups incomparable at baseline and result in a false positive conclusion that there is a treatment difference. Conversely, if the same directional bias favors the placebo group or there is a loss in design efficiency, the type II error can increase substantially. LIMITATIONS: Pre-specification of a genomic subgroup hypothesis is useful only for some degree of type I error control.
CONCLUSION: Complete ascertainment of genomic samples in a randomized controlled trial should be the first step in exploring whether a favorable genomic patient subgroup suggests a treatment effect when there is no clear prior knowledge and understanding of how the mechanism of a drug target affects the clinical outcome of interest. When stratified randomization based on genomic biomarker status cannot be implemented in designing a pharmacogenomics confirmatory clinical trial, and there is one genomic biomarker prognostic for clinical response, then as a general rule of thumb a sample size of at least 100 patients may need to be considered for the lower-prevalence genomic subgroup to minimize the chance of an imbalance of 20% or more in the prevalence of the genomic marker. The sample size may need to be at least 150, 350, and 1350, respectively, if an imbalance of 15%, 10% or 5% is of concern.
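The rule-of-thumb sample sizes in the conclusion can be sanity-checked with a normal approximation to the difference in marker prevalence between two randomized arms. A minimal sketch, under assumptions of mine (not stated in the abstract): a true marker prevalence of p = 0.5 and two equal arms of n patients each.

```python
import math

def prob_imbalance(n, p, delta):
    """P(|phat_1 - phat_2| >= delta) for a marker with true prevalence p,
    two randomized arms of n patients each; normal approximation with
    Var(phat_1 - phat_2) = 2 p (1 - p) / n."""
    sd = math.sqrt(2 * p * (1 - p) / n)
    # two-sided tail of N(0, sd^2) beyond delta, via the error function
    return 2 * (1 - 0.5 * (1 + math.erf(delta / sd / math.sqrt(2))))

# the abstract's pairings of sample size and tolerated imbalance
for n, delta in [(100, 0.20), (150, 0.15), (350, 0.10), (1350, 0.05)]:
    print(n, delta, round(prob_imbalance(n, 0.5, delta), 4))
```

Under these assumptions, each quoted pairing keeps the chance of the stated imbalance at roughly the 1% level or below, which is consistent with the escalating sample sizes in the conclusion.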
Affiliation(s)
- Sue-Jane Wang
- Office of Biostatistics, Office of Translational Sciences, Center for Drug Evaluation and Research, US FDA, Silver Spring, MD 20993, USA
11. Hung HMJ, Wang SJ, O'Neill RT. Consideration of regional difference in design and analysis of multi-regional trials. Pharm Stat 2010; 9:173-178. DOI: 10.1002/pst.440.
12.
Abstract
The utility of clinical trial designs with adaptive patient enrichment is investigated in an adequate and well-controlled trial setting. The overall treatment effect is the weighted average of the treatment effects in the mutually exclusive subsets of the originally intended entire study population. Adaptive enrichment approaches permit assessment of a treatment effect that may be applicable to specific nested patient (sub)sets, owing to heterogeneous patient characteristics and/or differential response to treatment (e.g. a responsive patient subset versus a patient subset lacking benefit), in all patient (sub)sets studied. The adaptive enrichment approaches considered include three adaptive design scenarios: (i) total sample size fixed, with futility stopping; (ii) sample size adaptation and futility stopping; and (iii) sample size adaptation without futility stopping. We show that, regardless of whether the treatment effect eventually assessed is applicable to the originally studied patient population or only to the nested patient subsets, it is possible to devise an adaptive enrichment approach that statistically outperforms the one-size-fits-all fixed design approach and the fixed design with a pre-specified multiple test procedure. We emphasize the need for additional studies to replicate the finding of a treatment effect in an enriched patient subset. The replication studies are likely to need fewer patients because of an identified treatment effect size that is larger than the diluted overall effect size. The adaptive designs, when applicable, are in line with efficiency considerations in a drug development program.
Affiliation(s)
- Sue-Jane Wang
- Office of Biostatistics, Division of Biometrics I/OB, Office of Translational Sciences, Center for Drug Evaluation and Research, US Food and Drug Administration, USA
13.
Abstract
Traditionally, in a clinical development plan, phase II trials are relatively small and can be expected to result in a large degree of uncertainty in the estimates on which phase III trials are planned. Phase II trials are also used to explore appropriate primary efficacy endpoint(s) or patient populations. When the biology of the disease and the pathogenesis of disease progression are well understood, the phase II and phase III studies may be performed in the same patient population with the same primary endpoint, e.g. efficacy measured by HbA1c in non-insulin-dependent diabetes mellitus trials with a treatment duration of at least three months. In disease areas where molecular pathways are not well established, or where the clinical outcome endpoint may not be observed in a short-term study (e.g. mortality in cancer or AIDS trials), the treatment effect may be postulated through use of an intermediate surrogate endpoint in phase II trials. However, in many cases we generally explore the appropriate clinical endpoint in the phase II trials. An important question is how much of the effect observed in the surrogate endpoint in the phase II study can be translated into the clinical effect in the phase III trial. Another question is how much uncertainty remains in phase III trials. In this work, we study the utility of adaptation by design (not by statistical test), in the sense of adapting the phase II information for planning the phase III trials. That is, we investigate the impact of using various phase II effect size estimates on the sample size planning for phase III trials. In general, if the point estimate from the phase II trial is used for planning, it is advisable to size the phase III trial by choosing a smaller alpha level or a higher power level. Adaptation using the lower limit of the one-standard-deviation confidence interval from the phase II trial appears to be a reasonable choice, since it balances well between the empirical power of the launched trials and the proportion of trials not launched, provided a threshold lower than the true effect size of the phase III trial can be chosen for determining whether the phase III trial is to be launched.
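The planning trade-off described above (sizing phase III on the phase II point estimate versus on its lower one-standard-deviation limit) can be made concrete with the standard two-arm normal-means sample size formula. The phase II effect size and standard error below are invented for illustration, not taken from the paper:

```python
import math
from statistics import NormalDist

def n_per_arm(effect, alpha=0.025, power=0.90):
    """Per-arm phase III sample size for a two-arm comparison of normal
    means with unit variance, one-sided level alpha:
    n = 2 * (z_{1-alpha} + z_{power})^2 / effect^2."""
    z = NormalDist().inv_cdf
    return math.ceil(2 * (z(1 - alpha) + z(power)) ** 2 / effect ** 2)

d_hat, se_hat = 0.40, 0.10   # hypothetical phase II effect estimate and its SE

print(n_per_arm(d_hat))           # sized on the point estimate
print(n_per_arm(d_hat - se_hat))  # sized on the lower one-SD limit
```

Planning on the lower limit (0.30 here) nearly doubles the trial, which is the built-in protection against an overoptimistic phase II estimate that the abstract argues for.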
Affiliation(s)
- Sue-Jane Wang
- Office of Biostatistics, Office of Pharmacoepidemiology and Statistical Science, Center for Drug Evaluation and Research, US Food and Drug Administration, Silver Spring, MD 20993, USA
14.
Abstract
With the advances in human genomic/genetic studies, the clinical trial community gradually recognizes that phenotypically homogeneous patients may be heterogeneous at the genomic level. Genomic technology brings a possible avenue for developing a genomic (composite) biomarker to predict a genomically responsive patient subset that may have a (much) higher likelihood of benefiting from a treatment. The randomized controlled trial is the mainstay for providing scientifically convincing evidence of a purported effect a new treatment may demonstrate. In conventional clinical trials, the primary clinical hypothesis pertains to the therapeutic effect, defined by the primary efficacy endpoint, in all patients who are eligible for the study. The one-size-fits-all aspect of the conventional design has been challenged, particularly when the diseases may be heterogeneous due to observable clinical characteristics and/or unobservable underlying genomic characteristics. Extension from the conventional single-population design objective to an objective that encompasses two possible patient populations will allow more informative evaluation of patients having different degrees of responsiveness to medication. Building an additional genomic objective into conventional clinical trials can generate an appealing conceptual framework, from the patient's perspective, for addressing personalized medicine in well-controlled clinical trials. There are many perceived benefits of personalized medicine that are based on the notion of being genomically proactive in the identification of disease and the prevention of disease or recurrence. In this paper, we show that an adaptive design approach can be constructed to study a clinical hypothesis of overall treatment effect and a hypothesis of treatment effect in a genomic subset more efficiently than the conventional non-adaptive approach.
Affiliation(s)
- Sue-Jane Wang
- Office of Biostatistics, Office of Translational Sciences, Center for Drug Evaluation and Research, U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
15.
Abstract
Recently there has been growing interest in the use of adaptive or flexible designs for the development of pharmaceutical products. The statistical methodology has been greatly advanced in the literature. However, there are still some important issues with the methodology and its application. In addition, there are many other challenges with these designs, including the efficiency of these designs in the entire development program, trial conduct and logistics, the infrastructure of an adaptive trial, and the regulatory evaluation of trial results and trial conduct. To date, regulatory experience with these designs is very limited. We share some of the challenges.
Affiliation(s)
- H M James Hung
- Division of Biometrics I, OB/OTS/CDER/FDA, Rockville, MD, USA
16.
17.
Abstract
This article describes the motivation for, description of, and the objectives and plans for the FDA initiative that was introduced in March 2004 by way of a report titled 'Innovation or Stagnation? Challenge and Opportunity on the Critical Path to New Medical Products'. The FDA initiative is very much an outreach effort and a wake-up call to many constituencies to contribute and partner to improve the product development process, and thereby to contribute to the success rate of new products that will benefit the public. We discuss in general terms where some of the opportunities and challenges exist for the discipline of biostatistics to make contributions to this effort over the next few years. In particular, guidance development in five areas is considered, as is the need to devote new energy and effort to quantitative risk assessment and safety evaluation, an area that has lagged behind efficacy evaluation in the attention it has received.
Affiliation(s)
- R T O'Neill
- Office of Biostatistics, OTS/CDER/FDA, 10903 New Hampshire Avenue, Bldg 22, Room 6012, Silver Spring, MD 20993-0002, USA
18.
Abstract
Adaptation of clinical trial design generates many issues that have not been resolved for practical applications, though statistical methodology has advanced greatly. This paper focuses on some methodological issues. In one type of adaptation, such as sample size re-estimation, only the postulated value of a parameter used for planning the trial size may be altered. In another type, the originally intended hypothesis for testing may be modified using the internal data accumulated at an interim time of the trial, such as by changing the primary endpoint or dropping a treatment arm. For sample size re-estimation, we contrast an adaptive test that weights the two-stage test statistics with the statistical information given by the original design against the original sample mean test with a properly corrected critical value. We point out the difficulty of planning a confirmatory trial based on the crude information generated by exploratory trials. With regard to selecting a primary endpoint, we argue that a selection process that allows switching from one endpoint to the other using the internal data of the trial is not very likely to gain a power advantage over the simple process of selecting one of the two endpoints by testing them with an equal split of alpha (Bonferroni adjustment). For dropping a treatment arm, distributing the remaining sample size of the discontinued arm to the other treatment arms can substantially improve the statistical power for identifying a superior treatment arm in the design. A common difficult methodological issue is how to select an adaptation rule at the trial planning stage. Pre-specification of the adaptation rule is important for practical reasons. Changing the originally intended hypothesis for testing using the internal data of the trial generates great concern among clinical trial researchers.
Affiliation(s)
- H M James Hung
- Division of Biometrics I, OB/CDER/FDA, 10903 New Hampshire Avenue, BLDG 22 Rm 4238, HFD-710, Mail Stop 4105, Silver Spring, MD 20993-0002, USA
19.
20.
Abstract
In response to overwhelming evidence of poor-quality reporting of randomized controlled trials (RCTs) and its consequences, many medical journals and editorial groups have now endorsed the CONSORT (Consolidated Standards of Reporting Trials) statement, a 22-item checklist and flow diagram. Because CONSORT was primarily aimed at improving the quality of reporting of efficacy, only one checklist item specifically addressed the reporting of safety. Considerable evidence suggests that reporting of harms-related data from RCTs also needs improvement. Members of the CONSORT Group, including journal editors and scientists, met in Montebello, Quebec, Canada, in May 2003 to address this problem. The result is the following document: the standard CONSORT checklist with 10 new recommendations about reporting harms-related issues, with accompanying explanation and examples to highlight specific aspects of proper reporting. We hope that this document, in conjunction with other CONSORT-related materials (http://www.consort-statement.org), will help authors improve their reporting of harms-related data from RCTs. Better reporting will help readers critically appraise and interpret trial results. Journals can support this goal by revising their Instructions to Authors so that they refer authors to this document.
Affiliation(s)
- John P A Ioannidis
- University of Ioannina School of Medicine and Biomedical Research Institute, Foundation for Research and Technology-Hellas, Ioannina, Greece
21.
Abstract
This discussion considers arguments for and against separating responsibility for the unblinded interim analysis of a clinical trial from responsibility for trial management and modifications to the ongoing trial. The degree to which one or different statisticians carry out these responsibilities and thus the degree of statistician independence for the two activities can vary, but a sponsor should recognize that giving a single statistician both responsibilities might limit flexibility in managing the trial, particularly with respect to modifying an ongoing trial.
Affiliation(s)
- Jay P Siegel
- Center for Biologics Evaluation and Research, Food and Drug Administration, Rockville, MD 20852, USA
22. O'Neill RT. Sam Greenhouse: his contributions as a consultant to the Food and Drug Administration. Stat Med 2003; 22:3285-3289. PMID: 14566912. DOI: 10.1002/sim.1629.
Abstract
This paper recounts contributions made by Sam Greenhouse to the Food and Drug Administration during his tenure as an advisory committee member and as chair of the committee. The events and topics are taken from available recollections and minutes and selectively describe a range of topic areas and issues about which Sam Greenhouse played a substantial leadership role. Published in 2003 by John Wiley & Sons, Ltd.
Affiliation(s)
- Robert T O'Neill
- Office of Biostatistics, Center for Drug Evaluation and Research, Food and Drug Administration, Room 15B45, 5600 Fishers Lane, Rockville, MD 20857, USA
23.
24. Wang SJ, Winchell CJ, McCormick CG, Nevius SE, O'Neill RT. Short of complete abstinence: an analysis exploration of multiple drinking episodes in alcoholism treatment trials. Alcohol Clin Exp Res 2002; 26:1803-1809. PMID: 12500103. DOI: 10.1097/01.alc.0000042009.07691.12.
Abstract
BACKGROUND: In alcoholism treatment clinical trials, conventional analysis of efficacy outcomes often focuses on the time to a first event, where the event may be "any drinking," "safe (or low-risk) drinking," "moderate drinking," or "heavy drinking," in addition to multiple outcomes such as frequency of drinking days, percent abstinent days, etc.
METHODS: We consider multivariate failure time analytic methods. In alcoholism treatment trials, the naturalistic course of drinking behavior during treatment intervention often presents with a gradual change in drinking before a more stable drinking or abstinence pattern emerges. Thus, for each subject, evaluating all drinking events and incorporating their event times over a defined duration may give a more comprehensive description of his or her drinking pattern. As a consequence, the efficacy of a new treatment for alcoholism may be evaluated with greater statistical sensitivity.
RESULTS: The utility of the multiple failure time method is demonstrated via a real case study evaluating alcoholism treatments. The multiple event time analyses showed that the risk of having "any drinking days" or "heavy drinking days" during the entire duration of the study was significantly lower with the experimental treatment than with placebo. Further exploration showed that the treatment effect on relapse to any drinking was observed primarily in the later relapse events rather than the first event; such an effect would have been missed using the traditional time-to-first-event analysis. The observed treatment effect on relapse to multiple heavy drinking episodes appeared not only in the first event but also in the later events.
CONCLUSION: The multiple failure time approach may be applicable when "drinking failure" is variously defined as a single drink, one at-risk drinking day, one heavy drinking day, or one alcohol-related social, occupational, or medical problem. If "a drinking episode" is properly defined and the design gains statistical efficiency, the multiple event analytic strategy should provide improved statistical power to detect treatment effects.
Affiliation(s)
- Sue-Jane Wang
- Division of Biometrics II, OB/OPaSS/CDER/FDA, Rockville, Maryland 20857, USA.

25

Wang SJ, Winchell CJ, McCormick CG, Nevius SE, O'Neill RT. Short of Complete Abstinence: An Analysis Exploration of Multiple Drinking Episodes in Alcoholism Treatment Trials. Alcohol Clin Exp Res 2002. [DOI: 10.1111/j.1530-0277.2002.tb02486.x]

26
Abstract
Data monitoring is a critical component of the conduct of clinical trials that provide the evidence of efficacy and safety of investigational drugs. These trials may be conducted either by a pharmaceutical sponsor or by the government, especially large trials that assess the impact of therapies on serious morbidity and/or mortality. I will briefly review the regulatory history of the FDA's evolving concerns and positions on data monitoring. I will then review the key aspects of data monitoring and interim analysis of clinical trials contained in the recently published International Conference on Harmonization statistical guidance, as well as some other issues being considered for a draft guidance on data monitoring. Finally, some suggestions will be offered for improving and enhancing tools and statistical methods for monitoring clinical trials for safety assessment; this latter area deserves more consideration by statisticians than it has received to date.
Affiliation(s)
- Robert T O'Neill
- Food and Drug Administration CDER/HFD-700, Room 15B-45, 5600 Fishers Lane, Rockville, Maryland 20857, USA

27

Szarfman A, Machado SG, O'Neill RT. Use of screening algorithms and computer systems to efficiently signal higher-than-expected combinations of drugs and events in the US FDA's spontaneous reports database. Drug Saf 2002; 25:381-92. [PMID: 12071774 DOI: 10.2165/00002018-200225060-00001]
Abstract
Since 1998, the US Food and Drug Administration (FDA) has been exploring new automated and rapid Bayesian data mining techniques. These techniques have been used to systematically screen the FDA's huge MedWatch database of voluntary reports of adverse drug events for possible events of concern. The data mining method currently being used is the Multi-Item Gamma Poisson Shrinker (MGPS) program that replaced the Gamma Poisson Shrinker (GPS) program we originally used with the legacy database. The MGPS algorithm, the technical aspects of which are summarised in this paper, computes signal scores for pairs, and for higher-order (e.g. triplet, quadruplet) combinations of drugs and events that are significantly more frequent than their pair-wise associations would predict. MGPS generates consistent, redundant, and replicable signals while minimising random patterns. Signals are generated without using external exposure data, adverse event background information, or medical information on adverse drug reactions. The MGPS interface streamlines multiple input-output processes that previously had been manually integrated. The system, however, cannot distinguish between already-known associations and new associations, so the reviewers must filter these events. In addition to detecting possible serious single-drug adverse event problems, MGPS is currently being evaluated to detect possible synergistic interactions between drugs (drug interactions) and adverse events (syndromes), and to detect differences among subgroups defined by gender and by age, such as paediatrics and geriatrics. In the current data, only 3.4% of all 1.2 million drug-event pairs ever reported (with frequencies > or = 1) generate signals [lower 95% confidence interval limit of the adjusted ratios of the observed counts over expected (O/E) counts (denoted EB05) of > or = 2]. 
The total frequency count contributing to signals comprised 23% (2.4 million) of the 10.4 million drug-event pairs reported, greatly facilitating more focused follow-up and evaluation. The algorithm provides an objective, systematic view of the data, alerting reviewers to critically important new safety signals. The signals detected by current methods, the signals stored in the Center for Drug Evaluation and Research's Monitoring Adverse Reports Tracking System, and the signals regarding cerivastatin, a cholesterol-lowering drug voluntarily withdrawn from the market in August 2001, exemplify the potential of data mining to improve early signal detection. The operating characteristics of data mining in detecting early safety signals, exemplified by studying a drug recently well characterized by large clinical trials, confirm our experience that the signals generated by data mining have high enough specificity to deserve further investigation. The application of these tools may ultimately improve usage recommendations.
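The disproportionality idea behind this screening can be illustrated with a deliberately simplified observed-over-expected (O/E) calculation. The real MGPS additionally applies empirical Bayes gamma-Poisson shrinkage to these ratios (that is what produces the adjusted EB05 scores mentioned above), which this sketch omits; the drug and event names and all counts below are invented.

```python
from collections import Counter

# Hypothetical spontaneous reports, one (drug, event) pair per report.
reports = [
    ("drugA", "rash"), ("drugA", "rash"), ("drugA", "nausea"),
    ("drugB", "rash"), ("drugB", "headache"), ("drugC", "nausea"),
    ("drugA", "rash"), ("drugC", "headache"),
]

pair_n = Counter(reports)                    # observed pair counts
drug_n = Counter(d for d, _ in reports)      # drug margins
event_n = Counter(e for _, e in reports)     # event margins
total = len(reports)

def oe_ratio(drug, event):
    # Expected count if drug and event were reported independently.
    expected = drug_n[drug] * event_n[event] / total
    return pair_n[(drug, event)] / expected

# Pairs reported more often than the margins predict score above 1.
print(round(oe_ratio("drugA", "rash"), 2))
```

A screening pass would rank all pairs by such scores (after shrinkage, in the real system) and hand the highest-scoring, not-already-known associations to clinical reviewers.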
Affiliation(s)
- Ana Szarfman
- Office of Biostatistics, Center for Drug Evaluation and Research, Food and Drug Administration, Rockville, Maryland 20857, USA.

28

Woodard ML, O'Neill RT. Bringing baby friendly to Rhode Island. Med Health R I 2001; 84:79-80. [PMID: 11280133]
Affiliation(s)
- M L Woodard
- South County Hospital, 100 Kenyon Avenue, Wakefield, Rhode Island 02879, USA.
29

Affiliation(s)
- R T O'Neill
- Food and Drug Administration, Center for Drug Evaluation and Research, Office of Biostatistics, 5600 Fishers Lane, Parklawn Building, Room 15B-45, Rockville, MD 20857, USA

30

O'Neill RT, Szarfman A. [Bayesian Data Mining in Large Frequency Tables, with an Application to the FDA Spontaneous Reporting System]: Discussion. AM STAT 1999. [DOI: 10.2307/2686094]
31

O'Neill RT. Biostatistical considerations in pharmacovigilance and pharmacoepidemiology: linking quantitative risk assessment in pre-market licensure application safety data, post-market alert reports and formal epidemiological studies. Stat Med 1998; 17:1851-8; discussion 1859-62. [PMID: 9749452 DOI: 10.1002/(sici)1097-0258(19980815/30)17:15/16<1851::aid-sim987>3.0.co;2-z]
Abstract
This paper deals with a conceptual discussion of a variety of statistical concepts, methods and strategies that are relevant to the quantitative assessment of risk derived from safety data collected during the pre- and post-marketing phase of a new drug's life cycle. A call is made for the use of more standard approaches to the analysis of safety data that are statistically and epidemiologically rigorous and for attempts to link the strategies for pre-market safety assessment with strategies for post-market safety evaluation. This link may be facilitated by recognizing the limitations and complementary roles played by pre- and post-market safety data collection schemes and by linking the quantitative analyses utilized for either exploratory or confirmatory purposes of risk assessment in each phase of safety data collection. Examples are provided of studies specifically designed to evaluate risk in a post approval setting and several available guidelines intended to improve the quality of these studies are discussed.
Affiliation(s)
- R T O'Neill
- Food and Drug Administration, Division of Biometrics, Rockville, MD 20857, USA

32

O'Neill RT. Secondary endpoints cannot be validly analyzed if the primary endpoint does not demonstrate clear statistical significance. Control Clin Trials 1997; 18:550-6; discussion 561-7. [PMID: 9408717 DOI: 10.1016/s0197-2456(97)00075-5]
Abstract
There is lack of consensus surrounding the interpretation of observed treatment effects for secondary clinical endpoints when the primary endpoint for which the clinical trial was initially designed does not meet the objective of a demonstrated effect. We provide some arguments to support caution in making inferences for secondary endpoints in this situation. We examine the definitions of primary and secondary endpoints within the context of a hypothesis-testing framework for multiple endpoints, and we address the relationship of the correlation structure of these endpoints and the statistical adjustments needed to preserve experiment-wise type I error for a valid inference. We also address the hypothesis-testing framework and the estimation framework for valid inference, focusing on the interpretation of p-values associated with differentially powered hypothesis tests for each endpoint to detect an important clinical effect. We point out the limitations on the strength of evidence (and quantification of uncertainty) for a secondary endpoint effect that can be derived from only one study and introduce the likelihood of replication of the finding in another study of identical size and design as a useful concept to guide this interpretation.
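One common device for preserving experiment-wise type I error across ordered primary and secondary endpoints, consistent with the caution argued above, is fixed-sequence (hierarchical) testing: each endpoint is tested at the full alpha only if every endpoint before it succeeded. The paper does not prescribe this specific procedure; the sketch below is a minimal illustration with invented p-values.

```python
def fixed_sequence_test(p_values, alpha=0.05):
    """Hierarchical testing: endpoints are tested in a pre-specified
    order; testing stops at the first non-significant result, which
    keeps the experiment-wise type I error at alpha."""
    significant = []
    for p in p_values:
        if p < alpha:
            significant.append(True)
        else:
            break
    # Endpoints after the first failure are not formally tested.
    return significant + [False] * (len(p_values) - len(significant))

# Primary succeeds, first secondary succeeds, second secondary fails.
print(fixed_sequence_test([0.01, 0.03, 0.20]))
```

Note how the scheme encodes the article's title: if the primary endpoint (first in the sequence) fails, no secondary endpoint can be declared significant, however small its nominal p-value.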
Affiliation(s)
- R T O'Neill
- Office of Epidemiology and Biostatistics, Center for Drug Evaluation and Research/FDA, Rockville, Maryland 20857, USA

33

Hung HM, O'Neill RT, Bauer P, Köhne K. The behavior of the P-value when the alternative hypothesis is true. Biometrics 1997; 53:11-22. [PMID: 9147587]
Abstract
The P-value is a random variable derived from the distribution of the test statistic used to analyze a data set and to test a null hypothesis. Under the null hypothesis, the P-value based on a continuous test statistic has a uniform distribution over the interval [0, 1], regardless of the sample size of the experiment. In contrast, the distribution of the P-value under the alternative hypothesis is a function of both sample size and the true value or range of true values of the tested parameter. The characteristics, such as mean and percentiles, of the P-value distribution can give valuable insight into how the P-value behaves for a variety of parameter values and sample sizes. Potential applications of the P-value distribution under the alternative hypothesis to the design, analysis, and interpretation of results of clinical trials are considered.
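The contrast described above is easy to see in a small simulation. The sketch below uses a one-sided one-sample z-test with known unit variance (my choice of test, not taken from the paper) and invented values of the mean and sample size.

```python
import random
from statistics import NormalDist, median

def pvalue_one_sided(sample_mean, n):
    # One-sided z-test of H0: mu = 0 vs H1: mu > 0, known sigma = 1.
    z = sample_mean * n ** 0.5
    return 1.0 - NormalDist().cdf(z)

def simulate_pvalues(mu, n=25, trials=2000, seed=1):
    rng = random.Random(seed)
    ps = []
    for _ in range(trials):
        xbar = sum(rng.gauss(mu, 1.0) for _ in range(n)) / n
        ps.append(pvalue_one_sided(xbar, n))
    return ps

p_null = simulate_pvalues(mu=0.0)   # roughly Uniform[0, 1]
p_alt = simulate_pvalues(mu=0.5)    # concentrated near 0

print(round(median(p_null), 3), round(median(p_alt), 6))
```

Under the null the simulated P-values spread evenly over [0, 1] with median near 0.5; under the alternative the distribution shifts sharply toward 0, and it shifts further as either mu or n grows.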
Affiliation(s)
- H M Hung
- Division of Biometrics I, Food and Drug Administration, Rockville, Maryland 20852, USA

34

35

Hung HM, Chi GY, O'Neill RT. Efficacy evaluation for monotherapies in two-by-two factorial trials. Biometrics 1995; 51:1483-93. [PMID: 8589235]
Abstract
For factorial clinical trials in which two monotherapy treatments under study can interact only in the presence of treatment effects for each treatment, the always-pooled test statistic using data from all four groups has a correct size in detecting the simple effect of an individual treatment used alone. However, this test statistic may have an unbounded bias in estimation of the simple effect. The never-pooled test statistic that uses only data from the treatment group not receiving the other treatment has poor precision for estimating the simple effect. Two alternative test statistics under consideration are the two-stage statistic involving a preliminary test of treatment interaction and the maximum test statistic taking the larger of the always-pooled and the never-pooled statistics. The power, bias, and mean square error of all four tests are compared. When negative interactions exist, the two-stage and maximum statistics are generally superior to the always-pooled statistic and compare reasonably well with the never-pooled statistic; the maximum statistic seems slightly more favorable than the two-stage statistic. The two-stage statistic is the best choice when a treatment interaction can be large.
Affiliation(s)
- H M Hung
- Division of Biometrics, CDER/FDA, Rockville, Maryland 20857, USA

36

37

O'Neill RT. Statistical concepts in the planning and evaluation of drug safety from clinical trials in drug development: issues of international harmonization. Stat Med 1995; 14:117-27. [PMID: 7569506]
Abstract
The assessment of the safety of new drugs during pre-marketing clinical studies is an important and integral part of the drug development and regulatory evaluation process. The International Conference on Harmonization of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) is a project that brings together the regulatory decision-makers of Europe, Japan and the United States of America and experts from the pharmaceutical industry in the three regions to seek ways to eliminate redundant and duplicative technical requirements among the developed countries for registering new medicinal substances and products. The ICH is developing guidelines or position papers to achieve the goal of harmonizing technical standards in three broad areas, namely, drug efficacy, safety and quality. Within the area of drug safety, this paper will discuss three of the safety topics because of their relevant statistical framework, and because these topics have not, to date, received any attention by the statistical community. The three issues under consideration by the ICH are:
1. Dose-response information to support drug registration (especially dose/toxicity relationships).
2. Studies in support of special populations: geriatrics (a draft guideline).
3. ICH Draft Guideline 3 on 'The extent of population exposure required to assess clinical safety for drugs intended for long-term treatment of non-life-threatening conditions'.
The ICH special population guideline concerning studies in geriatric patients is closely related to a recent Food and Drug Administration 'Guideline for the study and evaluation of gender differences in the clinical evaluation of drugs', which is another example of a 'subgroup' for whom specific interest exists to evaluate drug safety and efficacy.
Affiliation(s)
- R T O'Neill
- Division of Biometrics, Center for Drug Evaluation and Research/FDA, Rockville, Maryland, USA

38

O'Neill RT. Statistical concepts in the planning and evaluation of drug safety from clinical trials in drug development: issues of international harmonization. Stat Med 1995. [DOI: 10.1002/sim.4780140932]
39

DeMets DL, Anbar D, Fairweather W, Louis TA, O'Neill RT. Training the Next Generation of Biostatisticians. AM STAT 1994. [DOI: 10.2307/2684833]
40

41
Abstract
The FDA's interest in data monitoring of clinical trials derives from its public health responsibility to assure the safety and efficacy of new drugs based on evidence from adequate and well-controlled studies. Therefore the FDA is concerned that clinical trials of new drugs are designed and carried out in a manner that will insure the integrity and validity of study inferences. The FDA regulation and guidelines recognize the role of data monitoring and the variety and diversity of situations utilizing a data monitoring process in clinical studies. This paper describes relevant aspects of the regulations and guidelines, some concerns the FDA has with regard to monitoring of both government- and industry-sponsored trials and the consequences of early termination of trials of new drugs in the investigational and marketed stages. Comments include advice on communication between the FDA and data monitoring committees.
Affiliation(s)
- R T O'Neill
- Food and Drug Administration, Center for Drug Evaluation and Research, Rockville, Maryland 20857

42

O'Neill RT, Beninger P, Wykoff R. Statistical Issues Arising in AIDS Clinical Trials: Comment. J Am Stat Assoc 1992. [DOI: 10.2307/2290296]
43
Abstract
In this pedagogic note we propose to assess the safety of treatment in a clinical trial, or the effect of risk exposure in a chronic animal study, in terms of two lifetime risks. These risks are computable from life table type data and take into account the effects of competing risks. We first describe their computational procedures in detail to demonstrate the need for their implementation in a computer program. We then illustrate their practical application through use of the data obtained from an actual clinical study.
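As a rough illustration of the kind of life-table computation described here (not the authors' exact procedure), the lifetime risk of one event in the presence of competing risks can be accumulated interval by interval: a subject contributes to the event's risk in an interval only if they have survived all causes up to that interval. All counts below are invented.

```python
def cumulative_incidence(event_deaths, competing_deaths, at_risk):
    """Cumulative incidence of the event of interest from life-table
    counts per interval, accounting for competing risks."""
    surviving = 1.0   # probability of being event-free from all causes
    risk = 0.0        # accumulated risk of the event of interest
    for d_event, d_comp, n in zip(event_deaths, competing_deaths, at_risk):
        risk += surviving * d_event / n
        surviving *= 1 - (d_event + d_comp) / n
    return risk

# Three intervals of invented life-table data.
print(round(cumulative_incidence([2, 3, 1], [1, 2, 2], [100, 97, 92]), 4))
```

Because the competing deaths deplete the event-free population, this risk is smaller than what a naive calculation ignoring competing causes would give, which is exactly the correction the note emphasizes.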
Affiliation(s)
- M H Dong
- Division of Biometrics, U.S. Food and Drug Administration, Rockville, MD 20857

44

Abstract
To estimate vaccine protective efficacy, defined as VE = 1 - ARV/ARU where ARV is the disease attack rate in the vaccinated group and ARU is the disease attack rate in the controls, investigators have used both cohort and case-control designs. For each design, we present a method for calculation of the sample size required to provide an approximate confidence interval for VE of predetermined width and probability of coverage. The required sample size is a function of the desired width of the confidence interval, the probability of coverage, the assumed VE, and, for cohort designs, the assumed disease attack rate in the controls, and for case-control designs, the assumed vaccine exposure prevalence for the controls.
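The quantities defined above can be made concrete with a small cohort-design example. The sketch computes the VE point estimate and an approximate confidence interval via the normal approximation to the log relative risk; the paper's contribution is the inverse problem of choosing the sample size so this interval has a predetermined width, which this sketch does not reproduce. The cohort counts are invented.

```python
from math import exp, log, sqrt
from statistics import NormalDist

def ve_confidence_interval(cases_vax, n_vax, cases_ctl, n_ctl, level=0.95):
    """Point estimate and approximate CI for vaccine efficacy
    VE = 1 - ARV/ARU, using the log relative risk approximation."""
    arv = cases_vax / n_vax   # attack rate, vaccinated
    aru = cases_ctl / n_ctl   # attack rate, controls
    rr = arv / aru
    se = sqrt((1 - arv) / cases_vax + (1 - aru) / cases_ctl)
    z = NormalDist().inv_cdf(0.5 + level / 2)
    lo_rr, hi_rr = exp(log(rr) - z * se), exp(log(rr) + z * se)
    return 1 - rr, 1 - hi_rr, 1 - lo_rr  # VE point, lower, upper

# Invented cohort: 10/1000 cases among vaccinated, 50/1000 among controls.
ve, lower, upper = ve_confidence_interval(10, 1000, 50, 1000)
print(round(ve, 3), round(lower, 3), round(upper, 3))
```

A design calculation in the paper's spirit would iterate on the group sizes until the width (upper minus lower) falls below the target with the desired coverage.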
Affiliation(s)
- R T O'Neill
- Food and Drug Administration, Center for Drug Evaluation and Research, Rockville, Maryland 20857

45

Abstract
The purpose of this paper is to describe some methods for analyzing and summarizing adverse event rates from clinical trials, emphasizing, in particular, serious adverse drug events and their time of occurrence, and the impact of differential subject exposure and pretreatment status on the estimation of rates.
46

O'Neill JJ, O'Neill RT, Schwartz M, Curhan RP. Epidural analgesia in a community hospital. R I Med J (1976) 1986; 69:405-7. [PMID: 3465011]
47

48
Abstract
A method is presented to obtain sample sizes for cases and controls that are required to provide approximate confidence intervals on the log odds ratio of predetermined width 2d and probability of coverage as a function of assumed exposure rates in the control group, assumed odds ratio psi, required d, and ratio C:1 of controls to cases.
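A standard large-sample version of this calculation can be sketched as follows; the exact formula and constants of the paper may differ, and the exposure rate among cases implied by the assumed odds ratio is derived rather than taken from the paper. All inputs in the usage line are invented.

```python
from math import ceil
from statistics import NormalDist

def cases_needed(p0, psi, d, C=1, coverage=0.95):
    """Approximate number of cases so that a confidence interval for
    the log odds ratio has half-width d (total width 2d), with C
    controls per case. p0 is the assumed exposure rate among controls
    and psi the assumed odds ratio."""
    p1 = psi * p0 / (1 + p0 * (psi - 1))   # implied exposure rate in cases
    z = NormalDist().inv_cdf(0.5 + coverage / 2)
    # Per-case variance of the log odds ratio estimate.
    var_unit = 1 / (p1 * (1 - p1)) + 1 / (C * p0 * (1 - p0))
    return ceil(z ** 2 * var_unit / d ** 2)

n_cases = cases_needed(p0=0.3, psi=2.0, d=0.5)
print(n_cases)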
49

Rossi AC, Knapp DE, Anello C, O'Neill RT, Graham CF, Mendelis PS, Stanley GR. Discovery of adverse drug reactions. A comparison of selected phase IV studies with spontaneous reporting methods. JAMA 1983; 249:2226-28. [PMID: 6834622 DOI: 10.1001/jama.249.16.2226]
50

Knapp DE, Zax BB, Rossi AC, O'Neill RT. A method for post-marketing screening of adverse reactions to drugs: initial results. Drug Intell Clin Pharm 1980; 14:23-7. [PMID: 10245773 DOI: 10.1177/106002808001400105]
Abstract
The FDA is pilot-testing a methodology for signaling previously unsuspected relationships between drugs and important adverse events. This method uses data it receives through the FDA spontaneous reporting program. Reviewing drugs used primarily on an outpatient basis, this screening methodology focuses on "tracer" adverse events and the organization of these reactions into body/functional systems. This review process enables a clinical evaluator to perceive more easily the clinically important drug-adverse event patterns. The method can incorporate drug use data; this enables a drug's proportional share of specific adverse events, relative to its therapeutic class, to be compared to its respective proportional share of drug use. The assumption is that the adverse event distribution of drugs in a therapeutic class should be the same as the distribution of drug use in that class, if all drugs in the class were to carry the same risk. Actual examples of drug-adverse event associations signaled by the screening method are presented. The potential uses of this methodology in other settings, and under other data situations, are discussed.