1. Lucas L, Whittaker C, Bailer AJ. Visualizing the NIOSH Pocket Guide: Open-source web application for accessing and exploring the NIOSH Pocket Guide to Chemical Hazards. J Occup Environ Hyg 2024; 21:47-57. [PMID: 37874933; PMCID: PMC10922582; DOI: 10.1080/15459624.2023.2267098]
Abstract
The NIOSH Pocket Guide to Chemical Hazards is a trusted resource that displays key information for a collection of chemicals commonly encountered in the workplace. Entries contain chemical structures; occupational exposure limit information, ranging from limits based on full-shift time-weighted averages to acute limits such as short-term exposure limits and immediately dangerous to life or health values; and a variety of other data such as chemical-physical properties and symptoms of exposure. The NIOSH Pocket Guide (NPG) is available as a printed, hardcopy book, a PDF version, an electronic database, and a downloadable application for mobile phones. All formats of the NPG allow users to access the data for each chemical separately; however, the guide does not support data analytics or visualization across chemicals. This project reformatted existing data in the NPG to make it searchable and compatible with exploration and analysis using a web application. The resulting application allows users to investigate the relationships between occupational exposure limits, examine the range and distribution of those limits, sort chemicals by health endpoint, and summarize information of particular interest. These tasks would previously have required manual extraction of the data and analysis. The usability of this application was evaluated among industrial hygienists and researchers; while the existing application seems most relevant to researchers, the open-source code and data are amenable to modification by users to increase customization.
Affiliation(s)
- LeeAnn Lucas
- Harvard T.H. Chan School of Public Health, Boston, Massachusetts
- A. John Bailer
- Department of Statistics, Miami University, Oxford, Ohio
2. Zhang J, Bailer AJ. Impact of Organism Allocations on Potency Estimates from Ceriodaphnia dubia Reproduction Tests. Environ Toxicol Chem 2023. [PMID: 37014189; DOI: 10.1002/etc.5625]
Abstract
In aquatic toxicology experiments, organisms are randomly assigned to an exposure group that receives a particular concentration of a toxicant (including a control group with no exposure), and their survival, growth, or reproduction outcomes are recorded. Standard experiments use equal numbers of organisms in each exposure group. In the present study, we explored the potential benefits of modifying the current design of aquatic toxicology experiments when it is of interest to estimate the concentration associated with a specific level of decrease from control reproduction responses. A function of the parameter estimates from a generalized linear regression model describing the relationship between individual responses and toxicant concentration provides an estimate of the potency of the toxicant. After comparing different allocations of organisms to concentration groups, we observed that reallocating organisms among these groups can provide more precise estimates of toxicity endpoints than the standard experimental design that uses equal numbers of organisms in each group; this greater precision comes without added cost of conducting the experiment. More specifically, assigning more observations to the control, zero-concentration condition may result in more precise interval estimates of potency. Environ Toxicol Chem 2023;00:1-10. © 2023 SETAC.
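To make the allocation idea concrete, the sketch below compares the asymptotic (Fisher-information) precision of a potency estimate under two allocations of organisms, using a Poisson log-linear model for young produced. This is not the authors' code; the concentrations, parameter values, and allocations are invented for illustration.

```python
import numpy as np

# Illustrative Poisson GLM: mean young = exp(b0 + b1 * conc), b1 < 0
conc = np.array([0.0, 25.0, 50.0, 100.0])   # assumed test concentrations
b0, b1 = np.log(25.0), -0.02                # assumed control mean and slope
p = 0.5                                     # 50% decrease from control

def ecp_se(n_per_group):
    """Delta-method SE of ECp = log(1-p)/b1 under a given allocation."""
    mu = np.exp(b0 + b1 * conc)             # expected young per organism
    w = n_per_group * mu                    # Poisson GLM weights
    X = np.column_stack([np.ones_like(conc), conc])
    info = X.T @ (w[:, None] * X)           # Fisher information matrix
    cov = np.linalg.inv(info)
    grad = np.array([0.0, -np.log(1 - p) / b1**2])
    return float(np.sqrt(grad @ cov @ grad))

ec50 = np.log(1 - p) / b1                   # true potency under this model
se_equal = ecp_se(np.array([10, 10, 10, 10]))   # equal allocation
se_heavy = ecp_se(np.array([16, 8, 8, 8]))      # extra organisms at control
```

Swapping in other allocations shows how precision shifts with design; which allocation wins depends on the dose-response parameters, which is the design question the paper studies.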
Affiliation(s)
- Jing Zhang
- Department of Statistics, Miami University, Oxford, Ohio, USA
- A John Bailer
- Department of Statistics, Miami University, Oxford, Ohio, USA
3. Wheeler MW, Lim S, House J, Shockley K, Bailer AJ, Fostel J, Yang L, Talley D, Raghuraman A, Gift JS, Davis JA, Auerbach SS, Motsinger-Reif AA. ToxicR: A computational platform in R for computational toxicology and dose-response analyses. Comput Toxicol 2023; 25:100259. [PMID: 36909352; PMCID: PMC9997717; DOI: 10.1016/j.comtox.2022.100259]
Abstract
The need to analyze the complex relationships observed in high-throughput toxicogenomic and other omic platforms has resulted in an explosion of methodological advances in computational toxicology. However, advancements in the literature often outpace the development of software researchers can implement in their pipelines, and existing software is frequently based on pre-specified workflows built from well-vetted assumptions that may not be optimal for novel research questions. Accordingly, there is a need for a stable platform and open-source codebase attached to a programming language that allows users to program new algorithms. To fill this gap, the Biostatistics and Computational Biology Branch of the National Institute of Environmental Health Sciences, in cooperation with the National Toxicology Program (NTP) and US Environmental Protection Agency (EPA), developed ToxicR, an open-source R programming package. The ToxicR platform implements many of the standard analyses used by the NTP and EPA, including dose-response analyses for continuous and dichotomous data that employ Bayesian, maximum likelihood, and model averaging methods, as well as many standard tests the NTP uses in rodent toxicology and carcinogenicity studies, such as the poly-K and Jonckheere trend tests. ToxicR is built on the same codebase as current versions of the EPA's Benchmark Dose software and NTP's BMDExpress software but has increased flexibility because it directly accesses this software. To demonstrate ToxicR, we developed a custom workflow to illustrate its capabilities for analyzing toxicogenomic data. The unique features of ToxicR will allow researchers in other fields to add modules, increasing its functionality in the future.
Affiliation(s)
- Matthew W. Wheeler
- Biostatistics and Computational Biology Branch, Division of Intramural Research, National Institute of Environmental Health Sciences, Durham, NC
- Sooyeong Lim
- Department of Statistics, Miami University, Oxford, OH
- John House
- Biostatistics and Computational Biology Branch, Division of Intramural Research, National Institute of Environmental Health Sciences, Durham, NC
- Keith Shockley
- Biostatistics and Computational Biology Branch, Division of Intramural Research, National Institute of Environmental Health Sciences, Durham, NC
- Jennifer Fostel
- Contractor, Division of the National Toxicology Program, Durham, NC
- Longlong Yang
- Contractor, Division of the National Toxicology Program, Durham, NC
- Dawan Talley
- Contractor, Division of the National Toxicology Program, Durham, NC
- Jeffery S. Gift
- US Environmental Protection Agency (B243-01), National Center for Environmental Assessment, Durham, NC
- J. Allen Davis
- National Center for Environmental Assessment, US Environmental Protection Agency, Cincinnati, OH
- Scott S. Auerbach
- Predictive Toxicology Branch, Division of the National Toxicology Program, National Institute of Environmental Health Sciences, Durham, NC
- Alison A. Motsinger-Reif
- Biostatistics and Computational Biology Branch, Division of Intramural Research, National Institute of Environmental Health Sciences, Durham, NC
4. Rasnick E, Ryan PH, Bailer AJ, Fisher T, Parsons PJ, Yolton K, Newman NC, Lanphear BP, Brokamp C. Identifying sensitive windows of airborne lead exposure associated with behavioral outcomes at age 12. Environ Epidemiol 2021; 5:e144. [PMID: 33870016; PMCID: PMC8043737; DOI: 10.1097/ee9.0000000000000144]
Abstract
Despite the precipitous decline of airborne lead concentrations following the removal of lead in gasoline, lead is still detectable in ambient air in most urban areas. Few studies, however, have examined the health effects of contemporary airborne lead concentrations in children.
Methods: We estimated monthly air lead exposure among 263 children (Cincinnati Childhood Allergy and Air Pollution Study; Cincinnati, OH; 2001-2005) using temporally scaled predictions from a validated land use model and assessed neurobehavioral outcomes at age 12 years using the parent-completed Behavioral Assessment System for Children, 2nd edition. We used distributed lag models to estimate the effect of airborne lead exposure on behavioral outcomes while adjusting for potential confounding by maternal education, community-level deprivation, blood lead concentrations, greenspace, and traffic-related air pollution.
Results: We identified sensitive windows during mid- and late childhood for increased anxiety and atypicality scores, whereas sensitive windows for increased aggression and attention problems were identified immediately following birth. The strongest effect was at age 12, where a 1 ng/m3 increase in airborne lead exposure was associated with a 3.1-point (95% confidence interval: 0.4, 5.7) increase in anxiety scores. No sensitive windows were identified for depression, somatization, conduct problems, hyperactivity, or withdrawal behaviors.
Conclusions: We observed associations between exposure to airborne lead concentrations and poor behavioral outcomes at concentrations 10 times lower than the National Ambient Air Quality Standards set by the US Environmental Protection Agency.
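A distributed lag model of the kind described above regresses an outcome on a history of lagged exposures, with the lag coefficients constrained to a smooth basis. The toy sketch below (entirely synthetic data; not the study's model, which also included confounders) fits an Almon-style polynomial-lag regression to monthly exposures.

```python
import numpy as np

rng = np.random.default_rng(1)
n_children, n_months = 200, 36
# Synthetic monthly exposure histories and an assumed early-life lag curve
X = rng.gamma(2.0, 0.5, size=(n_children, n_months))
true_theta = np.exp(-np.arange(n_months) / 6.0)     # strongest effect early
y = X @ true_theta + rng.normal(0.0, 1.0, n_children)

# Constrain lag coefficients to a cubic polynomial in lag (Almon DLM):
# theta_l = alpha0 + alpha1*l + alpha2*l^2 + alpha3*l^3
L = np.arange(n_months)
B = np.vander(L / n_months, 4, increasing=True)     # basis: 1, l, l^2, l^3
Z = X @ B                                           # reduced design matrix
alpha, *_ = np.linalg.lstsq(Z, y, rcond=None)
theta_hat = B @ alpha                               # smoothed lag curve
```

Peaks in `theta_hat` identify the "sensitive window" of exposure; spline bases are the more common modern choice for the lag constraint.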
Affiliation(s)
- Erika Rasnick
- Department of Statistics, Miami University, Oxford, Ohio
- Division of Biostatistics and Epidemiology, Cincinnati Children’s Hospital Medical Center
- Patrick H. Ryan
- Division of Biostatistics and Epidemiology, Cincinnati Children’s Hospital Medical Center
- Department of Pediatrics, University of Cincinnati, Cincinnati, Ohio
- Patrick J. Parsons
- Division of Environmental Health Sciences, Wadsworth Center, New York State Department of Health, Albany
- Department of Environmental Health Sciences, School of Public Health, University at Albany, Rensselaer, New York
- Kimberly Yolton
- Department of Pediatrics, University of Cincinnati, Cincinnati, Ohio
- Division of General and Community Pediatrics, Cincinnati Children’s Hospital Medical Center
- Nicholas C. Newman
- Department of Pediatrics, University of Cincinnati, Cincinnati, Ohio
- Department of Environmental and Public Health Sciences, University of Cincinnati, Cincinnati, Ohio
- Bruce P. Lanphear
- Faculty of Health Sciences, Simon Fraser University, Burnaby, BC, Canada
- Cole Brokamp
- Division of Biostatistics and Epidemiology, Cincinnati Children’s Hospital Medical Center
- Department of Pediatrics, University of Cincinnati, Cincinnati, Ohio
5. Alessio HM, Reiman T, Kemper B, von Carlowitz W, Bailer AJ, Timmerman KL. Metabolic and Cardiovascular Responses to a Simulated Commute on an E-Bike. Transl J ACSM 2021. [DOI: 10.1249/tjx.0000000000000155]
6. Wheeler MW, Piegorsch WW, Bailer AJ. Quantal Risk Assessment Database: A Database for Exploring Patterns in Quantal Dose-Response Data in Risk Assessment and its Application to Develop Priors for Bayesian Dose-Response Analysis. Risk Anal 2019; 39:616-629. [PMID: 30368842; PMCID: PMC6408269; DOI: 10.1111/risa.13218]
Abstract
Quantitative risk assessments for physical, chemical, biological, occupational, or environmental agents rely on scientific studies to support their conclusions. These studies often include relatively few observations, and, as a result, models used to characterize the risk may include large amounts of uncertainty. The motivation, development, and assessment of new methods for risk assessment are facilitated by the availability of a set of experimental studies that span a range of dose-response patterns observed in practice. We describe construction of such a historical database focusing on quantal data in chemical risk assessment, and we employ this database to develop priors in Bayesian analyses. The database is assembled from a variety of existing toxicological data sources and contains 733 separate quantal dose-response data sets. As an illustration of the database's use, prior distributions for individual model parameters in Bayesian dose-response analysis are constructed. Results indicate that including prior information based on curated historical data in quantitative risk assessments may help stabilize eventual point estimates, producing dose-response functions that are more stable and precisely estimated. These in turn produce potency estimates that share the same benefit. We are confident that quantitative risk analysts will find many other applications and issues to explore using this database.
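One way such a database can feed a prior, sketched below: fit the same parametric dose-response model to each historical dataset, then summarize the fitted parameters into a moment-matched prior. Everything here is a synthetic stand-in for the 733-dataset database (doses, group sizes, and parameter values are invented).

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
doses = np.array([0.0, 1.0, 2.0, 4.0, 8.0])   # assumed common dose grid
n = 50                                        # animals per dose group

def fit_logistic(y):
    """MLE of a two-parameter logistic model P(d) = 1/(1+exp(-(a + b*d)))."""
    def nll(theta):
        a, b = theta
        p = 1.0 / (1.0 + np.exp(-(a + b * doses)))
        p = np.clip(p, 1e-10, 1 - 1e-10)
        return -np.sum(y * np.log(p) + (n - y) * np.log(1 - p))
    return minimize(nll, x0=[-2.0, 0.5], method="Nelder-Mead").x

# Simulate a small "historical database" of quantal studies with varying slopes
slopes = []
for true_b in [0.3, 0.5, 0.8, 1.0, 1.4]:
    p = 1.0 / (1.0 + np.exp(-(-3.0 + true_b * doses)))
    y = rng.binomial(n, p)
    slopes.append(fit_logistic(y)[1])

# Moment-matched normal prior for the slope, for use in new Bayesian fits
prior_mu, prior_sd = float(np.mean(slopes)), float(np.std(slopes, ddof=1))
```

In practice one would fit every database entry, possibly transform parameters first, and check the empirical distribution before committing to a parametric prior family.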
Affiliation(s)
- Matthew W. Wheeler
- Risk Evaluation Branch, National Institute for Occupational Safety and Health, 1050 Tusculum Ave, Cincinnati, OH 45226
- Address correspondence to Matthew W. Wheeler, National Institute for Occupational Safety and Health (NIOSH), Education and Information Division, Cincinnati, OH 45226, USA
- Walter W. Piegorsch
- Graduate Interdisciplinary Program in Statistics and BIO5 Institute, University of Arizona, Tucson, AZ, USA
7. Carr GJ, Bailer AJ, Rawlings JM, Belanger SE. On the impact of sample size on median lethal concentration estimation in acute fish toxicity testing: Is n = 7/group enough? Environ Toxicol Chem 2018; 37:1565-1578. [PMID: 29350430; DOI: 10.1002/etc.4098]
Abstract
The fish acute toxicity test method is foundational to aquatic toxicity testing strategies, yet the literature lacks a concise sample size assessment. Although various sources address sample size, historical precedent seems to play a larger role than objective measures. We present a novel and comprehensive quantification of the effect of sample size on estimation of the median lethal concentration (LC50), covering a wide range of scenarios. The results put into perspective the practical differences across a range of sample sizes, from n = 5/concentration up to n = 23/concentration. We also provide a framework for setting sample size guidance illustrating ways to quantify the performance of LC50 estimation, which can be used to set sample size guidance given reasonably difficult (or worst-case) scenarios. There is a clear benefit to larger sample size studies: they reduce error in the determination of LC50s, and lead to more robust safe environmental concentration determinations, particularly in cases likely to be called worst-case (shallow slope and true LC50 near the edges of the concentration range). Given that the use of well-justified sample sizes is crucial to reducing uncertainty in toxicity estimates, these results lead us to recommend a reconsideration of the current de minimis 7/concentration sample size for critical studies (e.g., studies needed for a chemical registration, which are being tested for the first time, or involving difficult test substances). Environ Toxicol Chem 2018;37:1565-1578. © 2018 SETAC.
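The sample-size question can be explored with a small simulation in the spirit of the abstract: simulate binomial mortality at a few concentrations, estimate the LC50 by logistic MLE, and compare the spread of estimates at n = 7 versus n = 23 per concentration. This is a hedged toy version, not the paper's simulation design; concentrations and parameters are invented.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)
logc = np.log(np.array([1.0, 2.0, 4.0, 8.0, 16.0]))  # assumed test levels
a_true, b_true = -1.5, 1.0                            # true log-LC50 = 1.5

def lc50_hat(deaths, n):
    """Logistic MLE on log concentration; returns the estimated log-LC50."""
    def nll(th):
        p = 1.0 / (1.0 + np.exp(-(th[0] + th[1] * logc)))
        p = np.clip(p, 1e-10, 1 - 1e-10)
        return -np.sum(deaths * np.log(p) + (n - deaths) * np.log(1 - p))
    a, b = minimize(nll, x0=[-1.0, 1.0], method="Nelder-Mead").x
    return -a / b                                     # log-LC50 = -a/b

p_true = 1.0 / (1.0 + np.exp(-(a_true + b_true * logc)))
spread = {}
for n in (7, 23):                                     # fish per concentration
    est = [lc50_hat(rng.binomial(n, p_true), n) for _ in range(200)]
    q75, q25 = np.percentile(est, [75, 25])
    spread[n] = q75 - q25                             # robust spread of LC50s
```

The interquartile range of the n = 23 estimates should be noticeably tighter, which is the paper's core point about reducing uncertainty with larger groups.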
Affiliation(s)
- Gregory J Carr
- Data and Modeling Sciences, Procter & Gamble, Mason, Ohio, United States
- A John Bailer
- Department of Statistics, Miami University, Oxford, Ohio, United States
- Jane M Rawlings
- Environmental Stewardship and Sustainability, Procter & Gamble, Mason, Ohio, United States
- Scott E Belanger
- Environmental Stewardship and Sustainability, Procter & Gamble, Mason, Ohio, United States
8. Wheeler MW, Bailer AJ, Cole T, Park RM, Shao K. Bayesian Quantile Impairment Threshold Benchmark Dose Estimation for Continuous Endpoints. Risk Anal 2017; 37:2107-2118. [PMID: 28555874; PMCID: PMC5740488; DOI: 10.1111/risa.12762]
Abstract
Quantitative risk assessment often begins with an estimate of the exposure or dose associated with a particular risk level, from which exposure levels posing low risk to populations can be extrapolated. For continuous exposures, this value, the benchmark dose, is often defined by a specified increase (or decrease) from the median or mean response at no exposure. This method of calculating the benchmark dose does not take into account the response distribution and, consequently, cannot be interpreted in terms of probability statements about the target population. We investigate quantile regression as an alternative to the use of median or mean regression. By defining the dose-response quantile relationship and an impairment threshold, we specify a benchmark dose as the dose associated with a specified probability that the population will have a response equal to or more extreme than the specified impairment threshold. In addition, in an effort to minimize model uncertainty, we use Bayesian monotonic semiparametric regression to define the exposure-response quantile relationship, which gives the model flexibility to estimate the quantile dose-response function. We describe this methodology and apply it to both epidemiology and toxicology data.
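The quantile-threshold idea can be illustrated with a plain linear quantile regression (pinball loss) in place of the paper's Bayesian monotone semiparametric fit: estimate the lower tau-quantile of the response as a function of dose, then invert it at the impairment cutoff. All data and values below are invented assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
dose = np.repeat([0.0, 1.0, 2.0, 4.0], 50)            # assumed design
resp = 10.0 - 1.2 * dose + rng.normal(0, 1.5, dose.size)  # declining endpoint

tau = 0.05      # model the lower 5th percentile of the response
cutoff = 6.0    # assumed impairment threshold

def pinball(theta):
    """Check (pinball) loss whose minimizer is the tau-quantile line."""
    r = resp - (theta[0] + theta[1] * dose)
    return np.sum(np.maximum(tau * r, (tau - 1.0) * r))

a, b = minimize(pinball, x0=[10.0, -1.0], method="Nelder-Mead").x
# Benchmark dose: where the tau-quantile line falls to the impairment cutoff,
# i.e., the dose at which 5% of the population is at or below the cutoff
bmd = (cutoff - a) / b
```

The monotone semiparametric version replaces the straight line with a shape-constrained curve, but the inversion step is the same.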
Affiliation(s)
- Matthew W. Wheeler
- National Institute for Occupational Safety and Health, MS C15, 1050 N. Tusculum Avenue, Cincinnati, OH USA
- A. John Bailer
- Department of Statistics, Miami University, Oxford, OH USA
- Tarah Cole
- Department of Statistics, Miami University, Oxford, OH USA
- Cincinnati Children’s Hospital, Cincinnati, OH USA
- Robert M. Park
- National Institute for Occupational Safety and Health, MS C15, 1050 N. Tusculum Avenue, Cincinnati, OH USA
- Kan Shao
- Indiana University School of Public Health, Bloomington, IN USA
10. Webb JM, Smucker BJ, Bailer AJ. Selecting the best design for nonstandard toxicology experiments. Environ Toxicol Chem 2014; 33:2399-2406. [PMID: 24943385; DOI: 10.1002/etc.2671]
Abstract
Although many experiments in environmental toxicology use standard statistical experimental designs, there are situations that arise where no such standard design is natural or applicable because of logistical constraints. For example, the layout of a laboratory may suggest that each shelf serve as a block, with the number of experimental units per shelf either greater than or less than the number of treatments in a way that precludes the use of a typical block design. In such cases, an effective and powerful alternative is to employ optimal experimental design principles, a strategy that produces designs with precise statistical estimates. Here, a D-optimal design was generated for an experiment in environmental toxicology that has 2 factors, 16 treatments, and constraints similar to those described above. After initial consideration of a randomized complete block design and an intuitive cyclic design, it was decided to compare a D-optimal design and a slightly more complicated version of the cyclic design. Simulations were conducted generating random responses under a variety of scenarios that reflect conditions motivated by a similar toxicology study, and the designs were evaluated via D-efficiency as well as by a power analysis. The cyclic design performed well compared to the D-optimal design.
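D-efficiency comparisons like the one described can be computed directly from the model matrix. The sketch below uses a deliberately small setting (4 treatments, 4 blocks of size 3) rather than the paper's 16-treatment problem; the designs and model are invented for illustration.

```python
import numpy as np

def model_matrix(blocks):
    """Intercept + treatment dummies (drop level 0) + block dummies (drop block 0)."""
    rows = []
    for b_idx, trts in enumerate(blocks):
        for t in trts:
            x = [1.0]
            x += [1.0 if t == k else 0.0 for k in (1, 2, 3)]   # treatment
            x += [1.0 if b_idx == k else 0.0 for k in (1, 2, 3)]  # block
            rows.append(x)
    return np.array(rows)

def d_efficiency(blocks):
    """Scaled D-criterion det(X'X/N)^(1/p); larger is better."""
    X = model_matrix(blocks)
    M = X.T @ X / X.shape[0]
    return np.linalg.det(M) ** (1.0 / X.shape[1])

# Cyclic design: block i receives treatments {i, i+1, i+2} mod 4 (balanced)
cyclic = [(0, 1, 2), (1, 2, 3), (2, 3, 0), (3, 0, 1)]
# An unbalanced alternative in which treatments 2 and 3 never co-occur
unbalanced = [(0, 1, 2), (0, 1, 2), (0, 1, 3), (0, 1, 3)]
eff_cyclic, eff_unbal = d_efficiency(cyclic), d_efficiency(unbalanced)
```

Exchange-type algorithms search over candidate allocations for the one maximizing this criterion, which is how D-optimal designs such as the paper's are generated.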
Affiliation(s)
- Jennifer M Webb
- Department of Statistics, Miami University, Oxford, Ohio, USA
11. Wheeler MW, Bailer AJ. An empirical comparison of low-dose extrapolation from points of departure (PoD) compared to extrapolations based upon methods that account for model uncertainty. Regul Toxicol Pharmacol 2013; 67:75-82. [PMID: 23831127; DOI: 10.1016/j.yrtph.2013.06.006]
Abstract
Experiments with relatively high doses are often used to predict risks at appreciably lower doses. A point of departure (PoD) can be calculated as the dose associated with a specified moderate response level that is often in the range of experimental doses considered. A linear extrapolation to lower doses often follows. An alternative to the PoD method is to develop a model that accounts for the model uncertainty in the dose-response relationship and to use this model to estimate the risk at low doses. Two such approaches that account for model uncertainty are model averaging (MA) and semi-parametric methods. We use these methods, along with the PoD approach, in the context of a large animal bioassay (40,000+ animals) that exhibited sub-linearity. When models are fit to high dose data and risks at low doses are predicted, the methods that account for model uncertainty produce dose estimates associated with an excess risk that are closer to the observed risk than the PoD linearization. This comparison provides empirical support to accompany previous simulation studies that suggest methods that incorporate model uncertainty provide viable, and arguably preferred, alternatives to linear extrapolation from a PoD.
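The contrast between PoD linearization and model-based inversion is easy to show numerically. Assuming a sublinear (here quadratic) true extra-risk function, the sketch computes the dose at a small target risk both ways; the function and its parameter are invented, not taken from the paper's bioassay.

```python
import numpy as np

# Assumed sublinear extra-risk function: ER(d) = 1 - exp(-k * d^2)
k = 0.002
def extra_risk(d):
    return 1.0 - np.exp(-k * d**2)

# Point of departure: dose giving 10% extra risk (a BMD10)
pod = np.sqrt(-np.log(1 - 0.10) / k)

target = 1e-4
# (1) Linear extrapolation from the PoD: risk treated as proportional to dose
d_linear = (target / 0.10) * pod
# (2) Direct inversion of the (sub)linear model at the target risk
d_model = np.sqrt(-np.log(1 - target) / k)
```

For a sublinear curve the linearized dose is far below the model-based dose, which mirrors the paper's finding that linearization from a PoD can be markedly conservative.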
Affiliation(s)
- Matthew W Wheeler
- Risk Evaluation Branch, National Institute for Occupational Safety and Health, 4626 Columbia Parkway, Cincinnati, OH 45226, USA.
13. Reynolds DJ, Stiles WB, Bailer AJ, Hughes MR. Impact of exchanges and client-therapist alliance in online-text psychotherapy. Cyberpsychol Behav Soc Netw 2013; 16:370-7. [PMID: 23530546; DOI: 10.1089/cyber.2012.0195]
Abstract
The impact of exchanges and the client-therapist alliance in online text therapy were compared to previously published results for face-to-face therapy, and the moderating effects of four participant factors found significant in previously published face-to-face studies were investigated using mixed-effects modeling. Therapists (N=30) and clients (N=30) engaged in online therapy were recruited from private practitioner sites, e-clinics, online counseling centers, and mental-health-related discussion boards. In a naturalistic design, they each visited an online site weekly and completed the standard impact and alliance questionnaires for at least 6 weeks. Results indicated that the impact of exchanges and client-therapist alliance in text therapy was similar to, but in some respects more positive than, previous evaluations of face-to-face therapy. The significance of participant factors previously found to influence impact and alliance in face-to-face therapy (client symptom severity, social support, therapist theoretical orientation, and therapist experience) was not replicated, except that therapists with the more symptomatic clients rated their text exchanges as less smooth and comfortable. Although its small size and naturalistic design impose limitations on sensitivity and generalizability, this study provides some insights into treatment impact and the alliance in online therapy.
Affiliation(s)
- D'Arcy J Reynolds
- Department of Psychology, University of Southern Indiana, Evansville, Indiana 47712, USA.
14. Zhang J, Noe DA, Wu J, Bailer AJ, Wright SE. Estimating infectivity rates and attack windows for two viruses. Math Biosci 2012; 240:267-74. [PMID: 23009904; DOI: 10.1016/j.mbs.2012.09.001]
Abstract
Cells exist in an environment in which they are simultaneously exposed to a number of viral challenges. In some cases, infection by one virus may preclude infection by other viruses. Under the assumption of independent times until infection by two viruses, a procedure is presented to estimate the infectivity rates along with the time window during which a cell might be susceptible to infection by multiple viruses. A test for equal infectivity rates is proposed and interval estimates of parameters are derived. Additional hypothesis tests of potential interest are also presented. The operating characteristics of these tests and the estimation procedure are explored in simulation studies.
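Under the independence assumption in the abstract, with exponential times until infection, the data reduce to the time of first infection and which virus caused it, and the rate MLEs have a closed form (events of each type divided by total observed time). The sketch below simulates and recovers the rates; the rate values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(11)
lam1, lam2 = 0.8, 0.4           # assumed infectivity rates (per unit time)
n = 5000                        # cells under simultaneous challenge

t1 = rng.exponential(1 / lam1, n)     # latent time to infection by virus 1
t2 = rng.exponential(1 / lam2, n)     # latent time to infection by virus 2
t = np.minimum(t1, t2)                # observed: time of first infection
first_is_v1 = t1 < t2                 # observed: which virus won

# Closed-form MLEs for competing independent exponential risks
total_time = t.sum()
lam1_hat = first_is_v1.sum() / total_time
lam2_hat = (~first_is_v1).sum() / total_time
```

A test of equal infectivity rates can then be built on these MLEs (e.g., a likelihood ratio comparing the two-rate model to a common-rate model), which is the flavor of test the paper proposes.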
Affiliation(s)
- J Zhang
- Department of Statistics, Miami University, Oxford, OH 45056, USA.
15. Zhang J, Bailer AJ, Oris JT. Bayesian approach to potency estimation for aquatic toxicology experiments when a toxicant affects both fecundity and survival. Environ Toxicol Chem 2012; 31:1920-1930. [PMID: 22605507; DOI: 10.1002/etc.1886]
Abstract
Chemicals in aquatic systems may impact a variety of endpoints including mortality, growth, or reproduction. Clearly, growth or reproduction will only be observed in organisms that survive. Because it is common to observe mortality in studies focusing on the reproduction of organisms, especially in higher concentration conditions, the resulting observed numbers of young become a mixture of zeroes and positive counts. Zeroes are recorded for organisms that die before having any young and for living organisms with no offspring; positive counts are recorded for living organisms with offspring. Thus, responses reflect both the fecundity and the mortality of the organisms used in such tests. In the present study, the authors propose estimating the concentration associated with a specified level of reproductive inhibition (RIp) using a Bayesian zero-inflated Poisson (ZIP) regression model. This approach allows prior information and expert knowledge about the model parameters to be incorporated into the regression coefficients or RIp estimation. Simulation studies are conducted to compare the Bayesian ZIP regression model and classical methods. The Bayesian estimator outperforms the frequentist alternative by producing more precise point estimates, with smaller mean square differences between RIp estimates and true values, and narrower interval estimates with better coverage probabilities. The authors also applied their proposed model to a study of Ceriodaphnia dubia exposed to a test toxicant.
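A maximum-likelihood version of the ZIP structure described above (the paper's estimator is Bayesian; this frequentist sketch only illustrates the likelihood and the RIp inversion) can be written in a few lines. All doses and parameter values are invented.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(5)
conc = np.repeat([0.0, 0.5, 1.0, 2.0, 4.0], 10)   # assumed design
# True model: mortality (structural zeros) and fecundity both worsen with dose
pi = 1.0 / (1.0 + np.exp(-(-3.0 + 1.0 * conc)))   # P(zero via mortality)
mu = np.exp(3.0 - 0.4 * conc)                     # mean young if alive
alive = rng.random(conc.size) > pi
y = np.where(alive, rng.poisson(mu), 0)

def nll(th):
    """Negative log-likelihood of the ZIP model with dose-dependent parts."""
    g0, g1, b0, b1 = th
    p0 = 1.0 / (1.0 + np.exp(-(g0 + g1 * conc)))  # zero-inflation prob
    m = np.exp(b0 + b1 * conc)                    # Poisson mean
    ll = np.where(y == 0,
                  np.log(p0 + (1 - p0) * np.exp(-m)),
                  np.log1p(-p0) - m + y * np.log(m) - gammaln(y + 1))
    return -ll.sum()

g0, g1, b0, b1 = minimize(nll, x0=[-2.0, 0.5, 3.0, -0.3],
                          method="Nelder-Mead",
                          options={"maxiter": 5000, "maxfev": 5000}).x

# RI50: concentration where expected young (mortality included) drop by 50%
grid = np.linspace(0, 4, 4001)
mean_y = (1 - 1 / (1 + np.exp(-(g0 + g1 * grid)))) * np.exp(b0 + b1 * grid)
ri50 = float(grid[np.argmin(np.abs(mean_y - 0.5 * mean_y[0]))])
```

The Bayesian version places priors on (g0, g1, b0, b1) and reports the posterior of RIp instead of a point estimate.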
Affiliation(s)
- Jing Zhang
- Department of Statistics, Miami University, Oxford, Ohio, USA.
16.
Abstract
Quantitative risk assessment proceeds by first estimating a dose-response model and then inverting this model to estimate the dose that corresponds to some prespecified level of response. The parametric form of the dose-response model often plays a large role in determining this dose. Consequently, the choice of the proper model is a major source of uncertainty when estimating such endpoints. While methods exist that attempt to incorporate this uncertainty by forming an estimate based upon all models considered, such methods may fail when the true model is on the edge of the space of models considered and cannot be formed from a weighted sum of constituent models. We propose a semiparametric model for dose-response data and derive a dose estimate associated with a particular response. In this formulation, the only restriction on the model form is that it is monotonic. We use this model to estimate the dose-response curve from a long-term cancer bioassay, and compare it to methods currently used to account for model uncertainty. A small simulation study shows that the method is superior to model averaging when estimating exposure that arises from a quantal-linear dose-response mechanism, and is similar to these methods when investigating nonlinear dose-response patterns.
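The simplest monotonicity-only estimator in this spirit is isotonic regression via the pool-adjacent-violators algorithm (PAVA); the paper's Bayesian semiparametric model is considerably richer, so treat this as a minimal sketch of the shape constraint, with invented quantal data.

```python
import numpy as np

def pava_increasing(y, w):
    """Weighted least-squares fit constrained to be nondecreasing (PAVA)."""
    vals = [float(v) for v in y]
    wts = [float(v) for v in w]
    blocks = [[i] for i in range(len(vals))]
    i = 0
    while i < len(vals) - 1:
        if vals[i] > vals[i + 1]:             # violation: pool adjacent blocks
            tot = wts[i] + wts[i + 1]
            vals[i] = (wts[i] * vals[i] + wts[i + 1] * vals[i + 1]) / tot
            wts[i] = tot
            blocks[i] += blocks[i + 1]
            del vals[i + 1], wts[i + 1], blocks[i + 1]
            i = max(i - 1, 0)                 # re-check the merged block
        else:
            i += 1
    fit = np.empty(len(y))
    for v, idx in zip(vals, blocks):
        fit[idx] = v
    return fit

dose = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])   # assumed bioassay doses
tumors = np.array([2, 1, 4, 9, 14, 30])           # assumed tumor counts
n = np.full(6, 50)                                # animals per group
iso = pava_increasing(tumors / n, n)              # monotone risk estimates
```

Inverting `iso` (e.g., by interpolation) at a benchmark response then gives a dose estimate that assumes nothing about the curve beyond monotonicity.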
Affiliation(s)
- Matthew Wheeler
- Risk Evaluation Branch, National Institute for Occupational Safety and Health, Cincinnati, OH, USA.
17. Zhang J, Bailer AJ, Oris JT. Bayesian approach to estimating reproductive inhibition potency in aquatic toxicity testing. Environ Toxicol Chem 2012; 31:916-927. [PMID: 22431139; DOI: 10.1002/etc.1769]
Abstract
Effectively and accurately assessing the toxicity of chemicals and their impact on the environment continues to be an important concern in ecotoxicology. Single experiments conducted by a particular laboratory commonly serve as the basis of toxicity risk assessment. These laboratories often have a long history of conducting experiments using particular protocols. In the present study, a Bayesian analysis for estimating potency based on a single experiment was formulated, which then served as the basis for incorporating experimental information from historical controls. A Bayesian hierarchical model was developed to estimate the relative inhibition concentrations (RIp) of a toxicant, and flexible ways of using historical control information were suggested. The methods were illustrated using a data set produced by the test for reproduction in Ceriodaphnia dubia, in which the number of young produced over three broods was recorded. In addition, simulation studies were included to compare the Bayesian methods with previously proposed estimators of potency. The Bayesian methods gave more precise RIp estimates with smaller variation and nominal coverage probability, offsetting a small negative bias in the point estimate. Incorporating historical control information in the Bayesian hierarchical model effectively uses information from similar past experiments when estimating the RIp, and yields potency estimates that are more precise than those from frequentist methods.
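One simple way to see how historical control information enters a Bayesian analysis of reproduction counts is a conjugate Gamma-Poisson update, with the prior centered on the historical control mean. This is a hedged sketch only; the counts, the prior weight, and the conjugate shortcut are illustrative assumptions, not the paper's hierarchical model.

```python
# Hedged sketch: fold historical control information into a Bayesian analysis
# of mean young per female via a conjugate Gamma prior on the Poisson mean.
historical_controls = [22, 25, 24, 27, 23]   # past-experiment means (hypothetical)
current_controls = [26, 21, 24, 25]          # current experiment (hypothetical)

# Gamma(a0, b0) prior matched to the historical mean, with a chosen prior weight
prior_weight = 2.0                           # "effective" prior sample size (assumed)
hist_mean = sum(historical_controls) / len(historical_controls)
a0, b0 = hist_mean * prior_weight, prior_weight

# Poisson likelihood => Gamma(a0 + sum(y), b0 + n) posterior for the mean
a_post = a0 + sum(current_controls)
b_post = b0 + len(current_controls)
posterior_mean = a_post / b_post
print(round(posterior_mean, 3))
```

The posterior mean is pulled toward the historical controls in proportion to the assumed prior weight; a full hierarchical model would instead estimate that borrowing from the data.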
Affiliation(s)
- Jing Zhang
- Department of Statistics, Miami University, Oxford, Ohio, USA.
18
Oris JT, Belanger SE, Bailer AJ. Baseline characteristics and statistical implications for the OECD 210 fish early-life stage chronic toxicity test. Environ Toxicol Chem 2012; 31:370-376. [PMID: 22095530] [DOI: 10.1002/etc.747]
Abstract
The fish toxicity assay most commonly used to establish chronic effects is the Organisation for Economic Co-operation and Development (OECD) 210, fish early-life stage test. However, the authors are not aware of any systematic analysis of the experimental design or statistical characteristics of the test since the test guideline was adopted nearly 20 years ago. Here, the authors report the results of an analysis of data compiled from OECD 210 tests conducted by industry labs. The distribution of responses observed in control treatments was analyzed, with the goal of understanding the implication of this variability on the sensitivity of the OECD 210 test guideline and providing recommendations on revised experimental design requirements of the test. Studies were confined to fathead minnows, rainbow trout, and zebrafish. Dichotomous endpoints (hatching success and posthatch survival) were examined for indications of overdispersion to evaluate whether significant chamber-to-chamber variability was present. Dichotomous and continuous (length, wet wt, dry wt) measurement endpoints were analyzed to determine minimum sample size requirements to detect differences from control responses with specified power. Results of the analysis indicated that sensitivity of the test could be improved by maximizing the number of replicate chambers per treatment concentration, increasing the acceptable level of control hatching success and larval survival compared to current levels, using wet weight measurements rather than dry weight, and focusing test efforts on species that demonstrate less variability in outcome measures. From these analyses, the authors provide evidence of the impact of expected levels of variability on the sensitivity of traditional OECD 210 studies and the implications for defining a target for future animal alternative methods for chronic toxicity testing in fish.
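The minimum-sample-size requirement mentioned for dichotomous endpoints can be sketched with a standard two-proportion normal-approximation formula. This is a generic power calculation under assumed rates, not necessarily the exact procedure the authors used.

```python
import math

# Hedged sketch: replicates per group needed to detect a drop in hatching
# success from p1 to p2, two-sided alpha = 0.05, 80% power (normal approx.).
def n_per_group(p1, p2):
    z_a, z_b = 1.96, 0.8416          # z for alpha/2 = 0.025 and beta = 0.20
    pbar = (p1 + p2) / 2
    num = (z_a * math.sqrt(2 * pbar * (1 - pbar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# Hypothetical scenario: detect a drop from 90% to 75% hatching success
print(n_per_group(0.90, 0.75))
```

Raising the acceptable control hatching rate (p1 closer to 1) shrinks the binomial variance and hence the required sample size, which is the direction of the design recommendations above.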
Affiliation(s)
- James T Oris
- Department of Zoology, Miami University, Oxford, Ohio, USA.
19
Yamashita T, Jeon H, Bailer AJ, Nelson IM, Mehdizadeh S. Fall Risk Factors in Community-Dwelling Elderly Who Receive Medicaid-Supported Home- and Community-Based Care Services. J Aging Health 2010; 23:682-703. [DOI: 10.1177/0898264310390941]
Abstract
Objective: This study identifies fall risk factors in an understudied population of older people who receive community-based care services. Method: Data were collected from enrollees of Ohio's Medicaid home- and community-based waiver program (preadmission screening system providing options and resources today [PASSPORT]). A total of 23,182 participants receiving PASSPORT services in 2005/2006 were classified as fallers and nonfallers, and a variety of risk factors for falling were analyzed using logistic regressions. Results: The following factors were identified as risk factors for falling: previous fall history, older age, White race, incontinence, higher number of medications, fewer activity of daily living limitations, unsteady gait, tremor, grasping strength, and absence of supervision. Discussion: Identifying risk factors for participants of a Medicaid home- and community-based waiver program is useful for fall risk assessment, but it would be most helpful if community-based care service programs incorporated measurements of known fall risk factors into their regular data collection, if not already included.
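For a single binary risk factor, the quantity a logistic regression estimates reduces to an odds ratio from a 2x2 table, which can be sketched directly. All counts below are hypothetical and are not the PASSPORT data.

```python
import math

# Hedged sketch: odds ratio and Wald 95% CI for one dichotomous risk factor,
# e.g. previous fall history (all counts hypothetical).
fallers_with, fallers_without = 480, 1020        # hypothetical
nonfallers_with, nonfallers_without = 520, 2980  # hypothetical

or_hat = (fallers_with * nonfallers_without) / (fallers_without * nonfallers_with)
se_log = math.sqrt(1 / fallers_with + 1 / fallers_without
                   + 1 / nonfallers_with + 1 / nonfallers_without)
lo = math.exp(math.log(or_hat) - 1.96 * se_log)
hi = math.exp(math.log(or_hat) + 1.96 * se_log)
print(round(or_hat, 2), round(lo, 2), round(hi, 2))
```

A multivariable logistic regression, as in the study, generalizes this to adjusted odds ratios with the other risk factors held fixed.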
20
Kuempel ED, Tran L, Castranova V, Bailer AJ. Response to Dr. Morfeld's Letter. Inhal Toxicol 2010. [DOI: 10.1080/08958370601056650]
21
Anderson SG, Bailer AJ. Comparing weighted and unweighted analyses applied to data with a mix of pooled and individual observations. Environ Toxicol Chem 2010; 29:1168-1171. [PMID: 20821554] [DOI: 10.1002/etc.147]
Abstract
Smaller organisms may have too little tissue to allow assaying as individuals. To obtain a sufficient sample for assaying, a collection of smaller individual organisms is pooled together to produce a single observation for modeling and analysis. When a dataset contains a mix of pooled and individual organisms, the variances of the observations are not equal. An unweighted regression method is no longer appropriate because it assumes equal precision among the observations. A weighted regression method is more appropriate and yields more precise estimates because it assigns a weight to each pooled observation. To demonstrate the benefits of using a weighted analysis when some observations are pooled, bias and confidence interval (CI) properties were compared using ordinary least squares and weighted least squares t-based confidence intervals. The slope and intercept estimates were unbiased for both weighted and unweighted analyses. While CIs for the slope and intercept achieved nominal coverage in both cases, the CI lengths were smaller under the weighted analysis, implying that a weighted analysis yields greater precision.
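The weighting logic can be sketched directly: a pooled observation that averages m organisms has variance sigma^2/m, so weighting by m restores equal precision. The data below are hypothetical; only the weighting scheme follows the abstract.

```python
import numpy as np

# Hedged sketch: weighted least squares via the sqrt(weight) transformation,
# with weight = number of organisms pooled into each observation.
conc = np.array([0.0, 0.0, 1.0, 1.0, 2.0, 2.0])   # exposure concentration
y = np.array([10.1, 9.8, 8.0, 8.3, 6.2, 5.9])     # response (hypothetical)
m = np.array([5, 1, 5, 1, 5, 1])                  # organisms pooled per observation

w = np.sqrt(m)                                    # sqrt of the weights
X = np.column_stack([np.ones_like(conc), conc])
beta, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
print(beta)   # [intercept, slope]
```

Multiplying both the design matrix and the response by sqrt(m) makes ordinary least squares on the transformed problem equivalent to weighted least squares on the original one.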
Affiliation(s)
- Sarah G Anderson
- Department of Statistics, Miami University, Oxford, Ohio 45056, USA
22
Abstract
Endpoints in aquatic toxicity tests can be measured using a variety of measurement scales including dichotomous (survival), continuous (growth) and count (number of young). A distribution is assumed for an endpoint and analyses proceed accordingly. In certain situations, the assumed distribution may be incorrect and this may lead to incorrect statistical inference. The present study considers the analysis of count effects, here motivated by the Ceriodaphnia dubia reproduction study. While the Poisson probability model is a common starting point, this distribution assumes that the mean and variance are the same. This will not be the case if there is some extraneous source of variability in the system, and in this case, the variability may exceed the mean. A computer simulation study was used to examine the impact of overdispersion or outliers on the analysis of count data. Methods that assumed Poisson or negative binomially distributed outcomes were compared to methods that accommodated this potential overdispersion using quasi-likelihood (QL) or generalized linear mixed models (GLMM). If the data were truly Poisson, the adjusted methods still performed at nominal type I error rates. In the cases of overdispersed counts, the Poisson-based methods resulted in rejection rates that exceeded nominal levels and standard errors for regression coefficients that were too small. The negative binomial methods worked best when the data were, in fact, negative binomial but did not maintain nominal characteristics in other situations. In general, the QL and GLMM methods performed reasonably based on the present study, although all procedures suffered some impact in the presence of potential outliers. In particular, the QL is arguably preferred because it makes fewer assumptions than the GLMM and performed well over the range of conditions considered.
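The quasi-likelihood adjustment described can be sketched in its simplest form: estimate a dispersion parameter from Pearson residuals and inflate Poisson standard errors accordingly. The counts below are hypothetical stand-ins for C. dubia brood totals.

```python
# Hedged sketch of the quasi-likelihood (QL) idea: Pearson chi-square / df
# estimates the dispersion phi; Poisson SEs are inflated by sqrt(phi).
counts = [31, 12, 28, 9, 35, 15, 40, 6]   # hypothetical control counts
n = len(counts)
mu = sum(counts) / n                      # fitted mean (intercept-only model)
phi = sum((y - mu) ** 2 / mu for y in counts) / (n - 1)

se_poisson = (mu / n) ** 0.5              # SE of the mean under Poisson
se_ql = se_poisson * phi ** 0.5           # QL-adjusted standard error
print(round(phi, 2), round(se_poisson, 2), round(se_ql, 2))
```

A dispersion estimate near 1 is consistent with Poisson variation; values well above 1, as here, signal the overdispersion that makes unadjusted Poisson standard errors too small.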
Affiliation(s)
- Douglas A Noe
- Department of Statistics, Miami University, Oxford, Ohio 45056, USA.
23
Abstract
The combination of information from diverse sources is a common task encountered in computational statistics. A popular label for analyses involving the combination of results from independent studies is meta-analysis. The goal of the methodology is to bring together results of different studies, re-analyze the disparate results within the context of their common endpoints, synthesize where possible into a single summary endpoint, increase the sensitivity of the analysis to detect the presence of adverse effects, and provide a quantitative analysis of the phenomenon of interest based on the combined data. This entry discusses some basic methods in meta-analytic calculations, and includes commentary on how to combine or average results from multiple models applied to the same set of data.
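The basic meta-analytic calculation mentioned, synthesizing study-specific estimates into a single summary, is classically done by inverse-variance weighting. The estimates and standard errors below are hypothetical.

```python
# Minimal fixed-effect meta-analysis sketch: inverse-variance weighting of
# independent study estimates into a single summary (values hypothetical).
estimates = [0.42, 0.55, 0.30, 0.47]   # study-specific effect estimates
ses = [0.10, 0.15, 0.08, 0.12]         # their standard errors

w = [1 / s ** 2 for s in ses]
pooled = sum(wi * ei for wi, ei in zip(w, estimates)) / sum(w)
pooled_se = (1 / sum(w)) ** 0.5
print(round(pooled, 3), round(pooled_se, 3))
```

The pooled standard error is smaller than any single study's, which is the sensitivity gain the entry describes; random-effects variants add a between-study variance component when the studies disagree.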
24
Park RM, Bushnell PT, Bailer AJ, Collins JW, Stayner LT. Impact of publicly sponsored interventions on musculoskeletal injury claims in nursing homes. Am J Ind Med 2009; 52:683-97. [PMID: 19670260] [DOI: 10.1002/ajim.20731]
Abstract
BACKGROUND The rate of lost-time sprains and strains in private nursing homes is over three times the national average, and for back injuries, almost four times the national average. The Ohio Bureau of Workers' Compensation (BWC) has sponsored interventions that were preferentially promoted to nursing homes in 2000-2001, including training, consultation, and grants up to $40,000 for equipment purchases. METHODS This study evaluated the impact of BWC interventions on back injury claim rates using BWC data on claims, interventions, and employer payroll for all Ohio nursing homes during 1995-2004 using Poisson regression. A subset of nursing homes was analyzed with more detailed data that allowed estimation of the impact of staffing levels and resident acuity on claim rates. Costs of interventions were compared to the associated savings in claim costs. RESULTS A $500 equipment purchase per nursing home worker was associated with a 21% reduction in back injury rate. Assuming an equipment life of 10 years, this translates to an estimated $768 reduction in claim costs per worker, a present value of $495 with a 5% discount rate applied. Results for training courses were equivocal. Only those receiving below-median hours had a significant 19% reduction in claim rates. Injury rates did not generally decline with consultation independent of equipment purchases, although possible confounding, misclassification, and bias due to non-random management participation clouds interpretation. In nursing homes with available data, resident acuity was modestly associated with back injury risk, and the injury rate increased with resident-to-staff ratio (acting through three terms: RR = 1.50 for each additional resident per staff member; for the ratio alone, RR = 1.32, 95% CI = 1.18-1.48). In these NHs, an expenditure of $908 per resident care worker (equivalent to $500 per employee in the other model) was also associated with a 21% reduction in injury rate. 
However, with a resident-to-staff ratio greater than 2.0, the same expenditure was associated with a $1,643 reduction in back claim costs over 10 years per employee, a present value of $1,062 with 5% discount rate. CONCLUSIONS Expenditures for ergonomic equipment in nursing homes by the Ohio BWC were associated with fewer worker injuries and reductions in claim costs that were similar in magnitude to expenditures. Un-estimated benefits and costs also need to be considered in assessing full health and financial impacts.
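The discounting step in the cost comparison can be sketched generically: the present value of an equal annual stream of claim-cost savings over the equipment's life. The inputs below are hypothetical and this is not a reconstruction of the paper's exact figures or discounting convention.

```python
# Hedged sketch: present value of equal annual savings discounted at a
# fixed rate (annual_saving, years, and rate are illustrative assumptions).
def present_value(annual_saving, years, rate):
    return sum(annual_saving / (1 + rate) ** t for t in range(1, years + 1))

pv = present_value(annual_saving=76.8, years=10, rate=0.05)
print(round(pv, 2))
```

Whether an intervention "pays for itself" then reduces to comparing this present value against the up-front equipment expenditure per worker.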
Affiliation(s)
- Robert M Park
- Education and Information Division, National Institute for Occupational Safety and Health, Cincinnati, Ohio 45226, USA.
25
Loomis D, Schulman MD, Bailer AJ, Stainback K, Wheeler M, Richardson DB, Marshall SW. Political economy of US states and rates of fatal occupational injury. Am J Public Health 2009; 99:1400-8. [PMID: 19542025] [DOI: 10.2105/ajph.2007.131409]
Abstract
OBJECTIVES We investigated the extent to which the political economy of US states, including the relative power of organized labor, predicts rates of fatal occupational injury. METHODS We described states' political economies with 6 contextual variables measuring social and political conditions: "right-to-work" laws, union membership density, labor grievance rates, state government debt, unemployment rates, and social wage payments. We obtained data on fatal occupational injuries from the National Traumatic Occupational Fatality surveillance system and population data from the US national census. We used Poisson regression methods to analyze relationships for the years 1980 and 1995. RESULTS States differed notably with respect to political-economic characteristics and occupational fatality rates, although these characteristics were more homogeneous within rather than between regions. Industry and workforce composition contributed significantly to differences in state injury rates, but political-economic characteristics of states were also significantly associated with injury rates, after adjustment accounting for those factors. CONCLUSIONS Higher rates of fatal occupational injury were associated with a state policy climate favoring business over labor, with distinct regional clustering of such state policies in the South and Northeast.
Affiliation(s)
- Dana Loomis
- School of Public Health, MS-274, University of Nevada, Reno, NV 89557-0274, USA.
26
Noble RB, Bailer AJ, Noe DA. Comparing methods for analyzing overdispersed binary data in aquatic toxicology. Environ Toxicol Chem 2009; 28:997-1006. [PMID: 19049261] [DOI: 10.1897/08-221.1]
Abstract
Historically, death is the most commonly studied effect in aquatic toxicity tests. These tests typically employ a gradient of exposure concentrations, with more than one organism housed in each of a series of replicate chambers at each concentration. Whereas a binomial distribution commonly is employed for such effects, variability may exceed that predicted by binomial probability models. This additional variability could result from heterogeneity in response probabilities across the chambers in which the organisms are housed and exposed. Incorrectly assuming a binomial distribution for the statistical analysis may lead to incorrect statistical inference. We consider the analysis of grouped binary data, here motivated by the study of survival. We use a computer simulation study to examine the impact of overdispersion or outliers on the analysis of binary data. We compare methods that assume a binomial distribution with generalizations that accommodate this potential overdispersion. These generalizations include adjusting the standard probit model for clustering/correlation or using alternative estimation methods, such as generalized estimating equations or generalized linear mixed models (GLMM). When data were binomial or overdispersed binomial, none of the models exhibited any significant bias when estimating regression coefficients. When the data were truly binomial, the probit model controlled type I errors, as did the Donald and Donner method and the GLMM method. When data were overdispersed, the probit model no longer controlled type I error, and its standard errors were too small. In general, the Donald and Donner and GLMM methods performed reasonably based on this study, although all procedures suffered some impact in the presence of potential outliers.
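Why chamber-to-chamber heterogeneity breaks binomial standard errors can be sketched with the design effect for clustered binary data. The survival rate, chamber sizes, and intra-chamber correlation below are hypothetical.

```python
# Hedged sketch: with m organisms per chamber and intra-chamber correlation
# rho, binomial SEs understate the truth by the design effect 1 + (m - 1)*rho.
def design_effect(m, rho):
    return 1 + (m - 1) * rho

p, m, chambers, rho = 0.8, 10, 4, 0.1     # hypothetical survival setup
n = m * chambers
se_binomial = (p * (1 - p) / n) ** 0.5
se_adjusted = se_binomial * design_effect(m, rho) ** 0.5
print(round(se_binomial, 4), round(se_adjusted, 4))
```

Clustering-adjusted procedures such as the Donald and Donner correction or a GLMM effectively estimate this inflation from the data instead of assuming it away.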
Affiliation(s)
- Robert B Noble
- Department of Mathematics and Statistics, Miami University, Oxford, Ohio 45056, USA.
27
Abstract
Worker populations often provide data on adverse responses associated with exposure to potential hazards. The relationship between hazard exposure levels and adverse response can be modeled and then inverted to estimate the exposure associated with some specified response level. One concern is that this endpoint may be sensitive to the concentration metric and other variables included in the model. Further, models yielding different risk endpoints may all provide relatively similar fits. We focus on evaluating the impact of exposure on a continuous response by constructing a model-averaged benchmark concentration (BMC) from a weighted average of model-specific benchmark concentrations. A method for combining the estimates based on different models is applied to lung function in a cohort of miners exposed to coal dust. In this analysis, only a small number of the thousands of models considered survive a filtering criterion for use in averaging. Even after filtering, the models considered yield benchmark concentrations that differ by a factor of 2 to 9, depending on the concentration metric and covariates. The model-averaged BMC captures this uncertainty and provides a useful strategy for addressing model uncertainty.
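The averaging step can be sketched with information-criterion weights, one common way to form the weighted average of model-specific benchmark concentrations. The BMCs and AIC values below are hypothetical, and AIC weighting is an assumption for the example, not necessarily the weighting the authors used.

```python
import math

# Hedged sketch: combine model-specific benchmark concentrations with
# Akaike weights (all BMC and AIC values hypothetical).
bmcs = [2.1, 4.0, 9.5]            # BMCs from three surviving models
aics = [1012.3, 1010.8, 1015.1]   # their AIC values

deltas = [a - min(aics) for a in aics]
raw = [math.exp(-0.5 * d) for d in deltas]
total = sum(raw)
weights = [r / total for r in raw]
bmc_avg = sum(w * b for w, b in zip(weights, bmcs))
print(round(bmc_avg, 2))
```

Models fitting nearly as well as the best retain substantial weight, so the average reflects the factor-of-2-to-9 spread instead of committing to a single model's BMC.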
Affiliation(s)
- Robert B Noble
- Department of Mathematics & Statistics, Miami University, Oxford, OH 45056, USA.
28
Abstract
With the increased availability of toxicological hazard information arising from multiple experimental sources, risk assessors are often confronted with the challenge of synthesizing all available scientific information into an analysis. This analysis is further complicated because significant between-source heterogeneity/lab-to-lab variability is often evident. We estimate benchmark doses using hierarchical models to account for the observed heterogeneity. These models are used to construct source-specific and population-average estimates of the benchmark dose (BMD). This is illustrated with an analysis of the U.S. EPA Region IX's reference toxicity database on the effects of sodium chloride on reproduction in Ceriodaphnia dubia. Results show that such models may effectively account for the lab-source heterogeneity while producing BMD estimates that more truly reflect the variability of the system under study. Failing to account for such heterogeneity may result in estimates having confidence intervals that are overly narrow.
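The contrast between source-specific and population-average estimates can be sketched with a DerSimonian-Laird random-effects combination of lab-specific BMDs: a between-lab variance component widens the summary interval to reflect the heterogeneity. All values below are hypothetical, and this two-stage combination is a simplified stand-in for the full hierarchical model.

```python
# Hedged sketch: DerSimonian-Laird random-effects pooling of lab-specific
# benchmark dose (BMD) estimates (all values hypothetical).
bmds = [1.8, 2.6, 1.2, 3.1]        # lab-specific BMD estimates
ses = [0.30, 0.25, 0.40, 0.35]     # their standard errors

w = [1 / s ** 2 for s in ses]
mu_fe = sum(wi * b for wi, b in zip(w, bmds)) / sum(w)          # fixed-effect mean
q = sum(wi * (b - mu_fe) ** 2 for wi, b in zip(w, bmds))        # heterogeneity stat
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(bmds) - 1)) / c)                      # between-lab variance
w_re = [1 / (s ** 2 + tau2) for s in ses]
mu_re = sum(wi * b for wi, b in zip(w_re, bmds)) / sum(w_re)
se_re = (1 / sum(w_re)) ** 0.5
print(round(mu_re, 3), round(se_re, 3))
```

With tau2 > 0 the population-average standard error exceeds the naive fixed-effect one, which is exactly the "overly narrow intervals" failure the abstract warns against.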
Affiliation(s)
- Matthew W Wheeler
- National Institute for Occupational Safety and Health, Risk Evaluation Branch, Cincinnati, OH, USA
29
Kuempel ED, Tran CL, Castranova V, Bailer AJ. Lung Dosimetry and Risk Assessment of Nanoparticles: Evaluating and Extending Current Models in Rats and Humans. Inhal Toxicol 2008; 18:717-24. [PMID: 16774860] [DOI: 10.1080/08958370600747887]
Abstract
Risk assessment of occupational exposure to nanomaterials is needed. Human data are limited, but quantitative data are available from rodent studies. To use these data in risk assessment, a scientifically reasonable approach for extrapolating the rodent data to humans is required. One approach is allometric adjustment for species differences in the relationship between airborne exposure and internal dose. Another approach is lung dosimetry modeling, which provides a biologically-based, mechanistic method to extrapolate doses from animals to humans. However, current mass-based lung dosimetry models may not fully account for differences in the clearance and translocation of nanoparticles. In this article, key steps in quantitative risk assessment are illustrated, using dose-response data in rats chronically exposed to either fine or ultrafine titanium dioxide (TiO2), carbon black (CB), or diesel exhaust particulate (DEP). The rat-based estimates of the working lifetime airborne concentrations associated with 0.1% excess risk of lung cancer are approximately 0.07 to 0.3 mg/m3 for ultrafine TiO2, CB, or DEP, and 0.7 to 1.3 mg/m3 for fine TiO2. Comparison of observed versus model-predicted lung burdens in rats shows that the dosimetry models predict reasonably well the retained mass lung burdens of fine or ultrafine poorly soluble particles in rats exposed by chronic inhalation. Additional model validation is needed for nanoparticles of varying characteristics, as well as extension of these models to include particle translocation to organs beyond the lungs. Such analyses would provide improved prediction of nanoparticle dose for risk assessment.
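The allometric-adjustment approach mentioned can be sketched as normalizing the deposited dose to alveolar surface area in each species. Every parameter value below is a hypothetical placeholder, and this single-step scaling is far simpler than the dosimetry models the abstract describes.

```python
# Hedged sketch: scale a rat airborne concentration to a human-equivalent
# concentration by equating deposited dose per unit alveolar surface area
# (all parameter values hypothetical placeholders).
rat_conc = 0.2                     # mg/m3 linked to the target risk in rats
rat_sa, human_sa = 0.4, 102.0      # alveolar surface area, m2 (hypothetical)
rat_dep, human_dep = 0.10, 0.14    # deposition fractions (hypothetical)
rat_vent, human_vent = 0.29, 9.6   # m3 inhaled per exposure day (hypothetical)

rat_dose_per_sa = rat_conc * rat_vent * rat_dep / rat_sa
human_conc = rat_dose_per_sa * human_sa / (human_vent * human_dep)
print(round(human_conc, 3))
```

Mechanistic lung dosimetry models replace these fixed fractions with deposition and clearance kinetics, which is why they extrapolate more credibly for nanoparticles.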
Affiliation(s)
- E D Kuempel
- National Institute for Occupational Safety and Health, Cincinnati, Ohio 45226, USA.
30

31
Bailer AJ. Useless Arithmetic: Why Environmental Scientists Can't Predict the Future by PILKEY, O. H. and PILKEY-JARVIS, L. Biometrics 2008. [DOI: 10.1111/j.1541-0420.2008.01026_1.x]
32

33
Abstract
A common measure of relative toxicity is the ratio of median lethal doses estimated in two bioassays. Robertson and Preisler previously proposed a method for constructing a confidence interval for this ratio. The applicability of the technique in common experimental situations, especially those involving small samples, may be questionable because the sampling distribution of this ratio estimator may be highly skewed. To examine this possibility, we conducted a computer simulation experiment to evaluate the coverage properties of the Robertson and Preisler method. The simulation showed that the method provided confidence intervals that performed at the nominal confidence level for the range of responses often observed in pesticide bioassays. Results of this study provide empirical support for the continued use of this technique.
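The skewness concern motivates working on the log scale. As an illustrative alternative (this is a delta-method interval, not Robertson and Preisler's method), a CI for a ratio of LD50s can be sketched as follows; all inputs are hypothetical.

```python
import math

# Hedged sketch: delta-method 95% CI for a ratio of LD50s, built on the log
# scale where the sampling distribution is less skewed (inputs hypothetical).
ld50_a, se_a = 12.0, 1.5    # LD50 and SE from bioassay A (hypothetical)
ld50_b, se_b = 30.0, 4.0    # LD50 and SE from bioassay B (hypothetical)

ratio = ld50_a / ld50_b
se_log = math.sqrt((se_a / ld50_a) ** 2 + (se_b / ld50_b) ** 2)
lo = ratio * math.exp(-1.96 * se_log)
hi = ratio * math.exp(1.96 * se_log)
print(round(ratio, 3), round(lo, 3), round(hi, 3))
```

Exponentiating a symmetric interval for the log ratio yields an asymmetric interval for the ratio itself, which respects the skewness that a naive symmetric interval would ignore.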
Affiliation(s)
- Matthew Wheeler
- Risk Evaluation Branch, National Institute for Occupational Safety and Health, 4676 Columbia Pkwy., Cincinnati, OH 45226, USA.
34

35
See K, Noble RB, John Bailer A. Comparison of relative efficiencies of sampling plans excluding certain neighboring units: a simulation study. J STAT COMPUT SIM 2007. [DOI: 10.1080/10629360600569444]
36
Abstract
Model averaging (MA) has been proposed as a method of accounting for model uncertainty in benchmark dose (BMD) estimation. The technique has been used to average BMD estimates derived from dichotomous dose-response experiments, microbial dose-response experiments, as well as observational epidemiological studies. While MA is a promising tool for the risk assessor, a previous study suggested that the simple strategy of averaging individual models' BMD lower limits did not yield interval estimators that met nominal coverage levels in certain situations, and this performance was very sensitive to the underlying model space chosen. We present a different, more computationally intensive, approach in which the BMD is estimated using the average dose-response model and the corresponding benchmark dose lower bound (BMDL) is computed by bootstrapping. This method is illustrated with TiO2 dose-response rat lung cancer data, and then systematically studied through an extensive Monte Carlo simulation. The results of this study suggest that the MA-BMD estimated using this technique performs better, in terms of bias and coverage, than the previous MA methodology. Further, the MA-BMDL achieves nominal coverage in most cases and is superior to picking the "best fitting model" when estimating the benchmark dose. Although these results show the utility of MA for benchmark dose risk estimation, they continue to highlight the importance of choosing an adequate model space as well as proper model fit diagnostics.
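The bootstrap-BMDL step can be sketched with a deliberately crude stand-in for the fitted model: resample the data, re-estimate the "BMD" each time, and take a low percentile as the lower bound. The data and the BMD surrogate below are hypothetical; a real analysis would refit the model average inside each bootstrap iteration.

```python
import random

# Hedged sketch of the bootstrap step for a BMDL (data hypothetical; the
# BMD surrogate below is a toy, not the paper's model-averaged fit).
random.seed(1)
doses = [0, 1, 2, 4] * 5
resp = [0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1]

def bmd_est(d, r):
    # Toy surrogate for a fitted-model BMD: first dose whose observed
    # response rate exceeds 50%.
    for dose in sorted(set(d)):
        ys = [ri for di, ri in zip(d, r) if di == dose]
        if sum(ys) / len(ys) > 0.5:
            return dose
    return max(d)

pairs = list(zip(doses, resp))
boots = []
for _ in range(500):
    sample = [random.choice(pairs) for _ in pairs]
    bd, br = zip(*sample)
    boots.append(bmd_est(bd, br))
bmdl = sorted(boots)[int(0.05 * len(boots))]   # 5th percentile as the BMDL
print(bmd_est(doses, resp), bmdl)
```

Because each bootstrap replicate re-estimates the whole quantity of interest, the percentile lower bound inherits whatever model-averaging machinery sits inside the estimator.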
Affiliation(s)
- Matthew W Wheeler
- National Institute for Occupational Safety and Health, Risk Evaluation Branch, Cincinnati, OH 45226, USA.
37
Barton HA, Chiu WA, Setzer RW, Andersen ME, Bailer AJ, Bois FY, Dewoskin RS, Hays S, Johanson G, Jones N, Loizou G, Macphail RC, Portier CJ, Spendiff M, Tan YM. Characterizing Uncertainty and Variability in Physiologically Based Pharmacokinetic Models: State of the Science and Needs for Research and Implementation. Toxicol Sci 2007; 99:395-402. [PMID: 17483121] [DOI: 10.1093/toxsci/kfm100]
Abstract
Physiologically based pharmacokinetic (PBPK) models are used in mode-of-action based risk and safety assessments to estimate internal dosimetry in animals and humans. When used in risk assessment, these models can provide a basis for extrapolating between species, doses, and exposure routes or for justifying nondefault values for uncertainty factors. Characterization of uncertainty and variability is increasingly recognized as important for risk assessment; this represents a continuing challenge for both PBPK modelers and users. Current practices show significant progress in specifying deterministic biological models and nondeterministic (often statistical) models, estimating parameters using diverse data sets from multiple sources, using them to make predictions, and characterizing uncertainty and variability of model parameters and predictions. The International Workshop on Uncertainty and Variability in PBPK Models, held 31 Oct-2 Nov 2006, identified the state-of-the-science, needed changes in practice and implementation, and research priorities. For the short term, these include (1) multidisciplinary teams to integrate deterministic and nondeterministic/statistical models; (2) broader use of sensitivity analyses, including for structural and global (rather than local) parameter changes; and (3) enhanced transparency and reproducibility through improved documentation of model structure(s), parameter values, sensitivity and other analyses, and supporting, discrepant, or excluded data. 
Longer-term needs include (1) theoretical and practical methodological improvements for nondeterministic/statistical modeling; (2) better methods for evaluating alternative model structures; (3) peer-reviewed databases of parameters and covariates, and their distributions; (4) expanded coverage of PBPK models across chemicals with different properties; and (5) training and reference materials, such as cases studies, bibliographies/glossaries, model repositories, and enhanced software. The multidisciplinary dialogue initiated by this Workshop will foster the collaboration, research, data collection, and training necessary to make characterizing uncertainty and variability a standard practice in PBPK modeling and risk assessment.
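The local sensitivity analyses recommended above can be sketched on a one-compartment PK model standing in for a full PBPK model. All parameter values are hypothetical; the point is the normalized sensitivity coefficient, not the model.

```python
# Hedged sketch: local sensitivity analysis on a one-compartment PK model
# (a stand-in for a PBPK model; all parameter values hypothetical). Each
# parameter is bumped +1% and the normalized sensitivity d(lnC)/d(lnP)
# of the steady-state concentration is reported.
def css(dose_rate, clearance):
    return dose_rate / clearance          # steady-state concentration

params = {"dose_rate": 10.0, "clearance": 2.0}
base = css(**params)
for name, value in params.items():
    bumped = dict(params, **{name: value * 1.01})
    sens = ((css(**bumped) - base) / base) / 0.01
    print(name, round(sens, 2))
```

Coefficients near +1 or -1 flag parameters whose uncertainty propagates essentially one-for-one into the prediction; global methods vary all parameters jointly rather than one at a time.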
Affiliation(s)
- Hugh A Barton
- US EPA, ORD, National Center for Computational Toxicology, Research Triangle Park, North Carolina 27711, USA.
38
Calabrese EJ, Bachmann KA, Bailer AJ, Bolger PM, Borak J, Cai L, Cedergreen N, Cherian MG, Chiueh CC, Clarkson TW, Cook RR, Diamond DM, Doolittle DJ, Dorato MA, Duke SO, Feinendegen L, Gardner DE, Hart RW, Hastings KL, Hayes AW, Hoffmann GR, Ives JA, Jaworowski Z, Johnson TE, Jonas WB, Kaminski NE, Keller JG, Klaunig JE, Knudsen TB, Kozumbo WJ, Lettieri T, Liu SZ, Maisseu A, Maynard KI, Masoro EJ, McClellan RO, Mehendale HM, Mothersill C, Newlin DB, Nigg HN, Oehme FW, Phalen RF, Philbert MA, Rattan SIS, Riviere JE, Rodricks J, Sapolsky RM, Scott BR, Seymour C, Sinclair DA, Smith-Sonneborn J, Snow ET, Spear L, Stevenson DE, Thomas Y, Tubiana M, Williams GM, Mattson MP. Biological stress response terminology: Integrating the concepts of adaptive response and preconditioning stress within a hormetic dose-response framework. Toxicol Appl Pharmacol 2007; 222:122-8. [PMID: 17459441] [DOI: 10.1016/j.taap.2007.02.015]
Abstract
Many biological subdisciplines that regularly assess dose-response relationships have identified an evolutionarily conserved process in which a low dose of a stressful stimulus activates an adaptive response that increases the resistance of the cell or organism to a moderate to severe level of stress. Due to a lack of frequent interaction among scientists in these many areas, there has emerged a broad range of terms that describe such dose-response relationships. This situation has become problematic because the different terms describe a family of similar biological responses (e.g., adaptive response, preconditioning, hormesis), adversely affecting interdisciplinary communication, and possibly even obscuring generalizable features and central biological concepts. With support from scientists in a broad range of disciplines, this article offers a set of recommendations we believe can achieve greater conceptual harmony in dose-response terminology, as well as better understanding and communication across the broad spectrum of biological disciplines.
Affiliation(s)
- Edward J Calabrese
- School of Public Health, Morrill I, N344, University of Massachusetts, Amherst, MA 01003, USA.
39
Noble RB, Bailer AJ, Kunkel SR, Straker JK. Sample size requirements for studying small populations in gerontology research. Health Serv Outcomes Res Method 2006. [DOI: 10.1007/s10742-006-0001-4] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
40
Abstract
Previous studies on frugivory in temperate bird-dispersed plants have concluded that fleshy fruits are removed more rapidly in cold than in warm winters. However, these studies do not distinguish between fruit abscission and frugivory. The implicit assumption that fruit loss reflects frugivory may not be valid; fruit abscission may be important and respond differently to weather. During two winters, we measured fruit loss from an invasive shrub ( Lonicera maackii (Rupr.) Herder) using fruit traps. We examined the effects of temperature and precipitation on fruit retention on shrubs and fruit abscission. In the first year of our study, there was no effect of temperature or precipitation on fruit retention. In the second year, both warmer temperatures and lower precipitation resulted in more fruit retention. In both years, fruit abscission was greater during periods of cold temperatures and high precipitation. These findings suggest that weather-dependent “frugivory” reported for other bird-dispersed plants may actually reflect patterns of abscission.
Affiliation(s)
- Anne M. Bartuszevige
- Department of Botany, Miami University, Oxford, OH 45056, USA
- Department of Mathematics and Statistics, Miami University, Oxford, OH 45056, USA
- Michael R. Hughes
- Department of Botany, Miami University, Oxford, OH 45056, USA
- Department of Mathematics and Statistics, Miami University, Oxford, OH 45056, USA
- A. John Bailer
- Department of Botany, Miami University, Oxford, OH 45056, USA
- Department of Mathematics and Statistics, Miami University, Oxford, OH 45056, USA
- David L. Gorchov
- Department of Botany, Miami University, Oxford, OH 45056, USA
- Department of Mathematics and Statistics, Miami University, Oxford, OH 45056, USA
41
Wheeler MW, Park RM, Bailer AJ. Comparing median lethal concentration values using confidence interval overlap or ratio tests. Environ Toxicol Chem 2006; 25:1441-4. [PMID: 16704080 DOI: 10.1897/05-320r.1] [Citation(s) in RCA: 227] [Impact Index Per Article: 12.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/09/2023]
Abstract
Experimenters in toxicology often compare the concentration-response relationships of two distinct populations using the median lethal concentration (LC50). This comparison is sometimes done by calculating the 95% confidence interval for each population's LC50 and concluding that no significant difference exists if the two intervals overlap. A more appropriate test compares the ratio of the LC50s to 1, or equivalently the log(LC50 ratio) to 0: no difference is declared if the confidence interval for the ratio contains 1, or the interval for the log ratio contains 0. A Monte Carlo simulation study was conducted to compare the confidence interval overlap test to the ratio test. The overlap test operates substantially below the nominal alpha = 0.05 level, closer to 0.005, and consequently has considerably less power for detecting true differences; the ratio test exhibited type I error rates near the nominal level and superior power. Thus, a ratio-based statistical procedure is preferred to relying on the simple overlap of two independently derived confidence intervals.
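The size difference between the two procedures can be checked with a short simulation. This is a minimal sketch under assumed conditions: it works directly with normally distributed log-LC50 estimates (hypothetical standard errors) rather than refitting concentration-response models, and both groups share the same true LC50, so every rejection is a type I error.

```python
import numpy as np

rng = np.random.default_rng(42)

def overlap_test(m1, se1, m2, se2, z=1.96):
    """Declare a difference only if the two 95% CIs fail to overlap."""
    return (m1 + z * se1 < m2 - z * se2) or (m2 + z * se2 < m1 - z * se1)

def ratio_test(m1, se1, m2, se2, z=1.96):
    """Declare a difference if the CI for log(LC50 ratio) excludes 0."""
    return abs(m1 - m2) > z * np.sqrt(se1**2 + se2**2)

n_sim, se = 10_000, 0.1
rej_overlap = rej_ratio = 0
for _ in range(n_sim):
    m1 = rng.normal(0.0, se)  # estimated log-LC50, group 1
    m2 = rng.normal(0.0, se)  # estimated log-LC50, group 2 (same true value)
    rej_overlap += overlap_test(m1, se, m2, se)
    rej_ratio += ratio_test(m1, se, m2, se)

print("overlap test type I error:", rej_overlap / n_sim)  # well below 0.05
print("ratio   test type I error:", rej_ratio / n_sim)    # near 0.05
```

With equal standard errors, the overlap test only rejects when the estimates differ by 2z·se, rather than the z·√2·se of the ratio test, which is why its size lands near 0.005 instead of the nominal 0.05.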
Affiliation(s)
- Matthew W Wheeler
- National Institute for Occupational Safety and Health, 4676 Columbia Parkway MS-15, Cincinnati, Ohio 45226, USA.
42
Abstract
A start-stop experiment in environmental toxicology provides a backdrop for this design discussion. The basic problem is to decide when to sample a nonlinear response in order to minimize the generalized variance of the estimated parameters. An easily coded heuristic optimization strategy can be applied to this problem to obtain optimal or nearly optimal designs. The efficiency of the heuristic approach allows a straightforward exploration of the sensitivity of the suggested design with respect to such problem-specific concerns as variance heterogeneity, time-grid resolution, design criteria, and interval specification of planning values for parameters. A second illustration of design optimization is briefly presented in the context of concentration spacing for a reproductive toxicity study.
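As a toy version of this design search, consider picking four sampling times from a coarse grid to minimize the generalized variance, det((JᵀJ)⁻¹), of the parameter estimates for a simple decay model y = a·exp(−b·t). The model, grid, and planning values below are illustrative assumptions, not the paper's; on a grid this small, plain enumeration stands in for the heuristic, which would take over on finer grids or larger designs.

```python
import itertools
import math

def gen_var(times, a=1.0, b=0.5):
    """Generalized variance det((J'J)^-1) for y = a*exp(-b*t),
    evaluated at planning values (a, b); smaller is better (D-optimality)."""
    # Jacobian rows: (dy/da, dy/db) at each sampling time
    rows = [(math.exp(-b * t), -a * t * math.exp(-b * t)) for t in times]
    s11 = sum(u * u for u, _ in rows)
    s12 = sum(u * v for u, v in rows)
    s22 = sum(v * v for _, v in rows)
    return 1.0 / (s11 * s22 - s12 * s12)  # det(M^-1) = 1/det(M) for 2x2 M

grid = [0.5 * i for i in range(1, 21)]  # candidate times 0.5, 1.0, ..., 10.0
best = min(itertools.combinations(grid, 4), key=gen_var)
print("best 4-point design:", best, "gen. variance:", gen_var(best))
```

The sensitivity analyses the abstract describes amount to re-running this search with different planning values of b, a different time grid, or a different design criterion.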
Affiliation(s)
- Stephen E Wright
- Department of Mathematics and Statistics, Miami University, Oxford, Ohio 45056, USA.
43
Bell BR, Bailer AJ, Wright SE. A simulation study comparing different experimental designs for estimating uptake and elimination rates. Environ Toxicol Chem 2006; 25:248-52. [PMID: 16494249 DOI: 10.1897/05-235r.1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/06/2023]
Abstract
The design of ecotoxicological studies requires decisions about the number and spacing of exposure groups tested, the number of replications, the spacing of sampling times, the duration of the study, and other considerations. For example, geometric spacing of sampling times or toxicant concentrations is often used as a default design. Optimal design methods in statistics can suggest alternative spacing of sampling times that yield more precise estimates of regression coefficients. In this study, we use a computer simulation to explore the impact of the spacing of sampling times and other factors on the estimation of uptake and elimination rate constants in an experiment addressing the bioaccumulation of a contaminant. Careful selection of sampling times can result in smaller standard errors for the parameter estimates, thereby allowing the construction of smaller, more precise confidence intervals. Thus, the effort invested in constructing an optimal experimental design may result in more precise inference or in a reduction of replications in an experimental design.
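The effect of sampling-time spacing can also be previewed without simulation by comparing each design's asymptotic generalized variance for the two rate constants. The sketch below assumes a standard one-compartment bioaccumulation model, C(t) = (ku/ke)·Cw·(1 − exp(−ke·t)), with illustrative planning values; the two designs compared are hypothetical, not those of the study.

```python
import numpy as np

def conc_partials(t, ku=0.5, ke=0.1):
    """Partial derivatives of C(t) = (ku/ke)*Cw*(1 - exp(-ke*t)), Cw = 1,
    with respect to the uptake (ku) and elimination (ke) rate constants."""
    t = np.asarray(t, float)
    d_ku = (1.0 / ke) * (1.0 - np.exp(-ke * t))
    d_ke = (ku / ke) * t * np.exp(-ke * t) - (ku / ke**2) * (1.0 - np.exp(-ke * t))
    return np.column_stack([d_ku, d_ke])

def gen_variance(times):
    """Generalized variance of (ku, ke), up to the error variance sigma^2."""
    J = conc_partials(times)
    return np.linalg.det(np.linalg.inv(J.T @ J))

geometric = [1, 2, 4, 8, 16, 32]    # common default: geometric spacing
later     = [1, 5, 10, 20, 30, 40]  # hypothetical design with later sampling
print("geometric:", gen_variance(geometric))
print("later:    ", gen_variance(later))
```

Under these planning values, the later-sampling design yields the smaller generalized variance, i.e., jointly tighter confidence intervals for the two rate constants than the geometric default.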
Affiliation(s)
- Bryan R Bell
- Center for Environmental Toxicology and Statistics, Miami University, Oxford, Ohio 45056, USA.
44
Abstract
Experimental animal studies often serve as the basis for predicting the risk of adverse responses in humans exposed to occupational hazards. A statistical model is fit to exposure-response data, and the fitted model may be used to estimate the exposure associated with a specified level of adverse response. Unfortunately, a number of different statistical models are candidates for fitting the data and may yield widely ranging estimates of risk. Bayesian model averaging (BMA) offers a strategy for addressing uncertainty in the selection of statistical models when generating risk estimates. This strategy is illustrated with two examples: fitting the multistage model to cancer responses, and fitting different quantal models to kidney lesion data. BMA provides excess risk or benchmark dose estimates that reflect model uncertainty.
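When full posterior computation is impractical, BMA weights are often approximated from BIC, with exp(−BIC/2) standing in for each model's marginal likelihood under equal prior model probabilities. All numbers below (sample size, maximized log-likelihoods, per-model extra-risk estimates) are hypothetical, included only to show the mechanics.

```python
import math

n = 50  # hypothetical number of observations
# model name: (maximized log-likelihood, number of parameters, extra risk)
models = {
    "logistic":   (-21.3, 2, 0.042),
    "probit":     (-21.9, 2, 0.038),
    "multistage": (-20.8, 3, 0.055),
}

def bic(logL, k, n):
    # BIC penalizes fit by model complexity: -2*logL + k*ln(n)
    return -2.0 * logL + k * math.log(n)

bics = {m: bic(L, k, n) for m, (L, k, _) in models.items()}
best = min(bics.values())
raw = {m: math.exp(-0.5 * (b - best)) for m, b in bics.items()}  # stabilized
total = sum(raw.values())
weights = {m: w / total for m, w in raw.items()}

# Model-averaged extra risk: a weighted mean over the candidate models
avg_risk = sum(weights[m] * models[m][2] for m in models)
print({m: round(w, 3) for m, w in weights.items()})
print("model-averaged extra risk:", round(avg_risk, 4))
```

The averaged estimate lands between the extremes of the candidate models, and the weights show how much each model's complexity-penalized fit contributes, which is how BMA lets the final risk estimate reflect model uncertainty.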
Affiliation(s)
- A John Bailer
- Department of Mathematics and Statistics, Miami University, Oxford, OH 45056, USA.
45
Craft JL, Bailer AJ. Comparison of step-stress data among multiple groups. Environ Toxicol Chem 2005; 24:1004-1006. [PMID: 15839577 DOI: 10.1897/04-242r.1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/24/2023]
Abstract
Ecotoxicological studies may use organism fitness (e.g., time until exhaustion in a swim test) as an indicator of the impact of a toxic chemical. Failure time studies can use constant stress or a protocol in which the stress level is increased at specific times. In these step-stress tests, the units are tested at a given stress level for a predetermined period; if units survive to the end of that period, the stress level is increased and held constant for another period. This process is repeated until all of the test units fail or the predetermined test time has expired. The purpose of this study is to suggest a method for analyzing responses from step-stress studies. A likelihood-ratio test is developed for comparing the failure times in multiple groups based on a piecewise constant hazard assumption; the test can be extended to other piecewise distributions. A small simulation study compares this procedure to other methods currently used to compare multiple groups with regard to type I error rate and power.
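Under a piecewise constant hazard, each stress interval contributes d_j·log(λ_j) − λ_j·T_j to the log-likelihood, with interval-wise MLE λ̂_j = d_j/T_j, so the likelihood-ratio statistic reduces to a few sums. The event counts and times at risk below are hypothetical; the comparison is separate per-group hazards versus hazards pooled across groups.

```python
import math

def loglik(d, T):
    """Piecewise-exponential log-likelihood with d[j] events and T[j]
    total time at risk in stress interval j, evaluated at the MLEs
    lam_j = d[j]/T[j]; at the MLE, lam_j*T[j] = d[j]."""
    return sum(dj * math.log(dj / Tj) - dj for dj, Tj in zip(d, T))

# Hypothetical two-group, three-step swim-test data
d1, T1 = [2, 5, 13], [200.0, 150.0, 60.0]
d2, T2 = [4, 9, 7],  [190.0, 120.0, 40.0]

ll_separate = loglik(d1, T1) + loglik(d2, T2)
ll_pooled = loglik([a + b for a, b in zip(d1, d2)],
                   [a + b for a, b in zip(T1, T2)])

# 2 * (log-likelihood gain) ~ chi-square, df = 3 here (6 vs. 3 hazard params)
lrt = 2.0 * (ll_separate - ll_pooled)
print("likelihood-ratio statistic:", round(lrt, 3))
```

The statistic is always nonnegative, since the separate-hazards fit nests the pooled one; the degrees of freedom equal (number of groups − 1) × (number of intervals).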
Affiliation(s)
- Jeremy L Craft
- Center for Environmental Toxicology and Statistics, Department of Mathematics and Statistics, Miami University, Oxford, Ohio 45056, USA.
46
Bailer AJ, Wheeler M, Dankovic D, Noble R, Bena J. Incorporating uncertainty and variability in the assessment of occupational hazards. ACTA ACUST UNITED AC 2005. [DOI: 10.1504/ijram.2005.007176] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022]
47
Abstract
OBJECTIVES To compare the profile of unintentional fatal occupational injuries in the Republic of Korea and the United States, to help establish prevention strategies for Korea, and to understand country-specific differences in fatality risks across industries. METHODS Occupational fatal injury data from 1998-2001 were collected from the Korea Occupational Safety and Health Agency's Survey of Causes of Occupational Injuries (identified by the Korea Labor Welfare Corporation) and from the United States Census of Fatal Occupational Injuries. Employment estimates were obtained for both countries. Industry coding and external cause of death coding were standardized. Descriptive analyses of injury rates were conducted, and Poisson regression models were fit to examine time trends. RESULTS Korea exhibited a fatal injury rate at least two times that of the United States, after accounting for different employment patterns. The ordering of industries with respect to risk was the same in the two countries, with mining, agriculture/forestry/fishing, and construction being the most dangerous. Fatal injury rates are decreasing in both countries, although at a faster rate in Korea. CONCLUSIONS Understanding industrial practices within different countries is critical for fully interpreting country-specific occupational injury statistics. However, differences in surveillance systems and employment estimation methods are caveats to any transnational comparison and need to be harmonized to the fullest extent possible.
Affiliation(s)
- Y-S Ahn
- Korea Occupational Safety and Health Agency, Incheon, Korea
48
Abstract
OBJECTIVES We investigated fatal occupational injury rates in the United States by race and Hispanic ethnicity during the period 1990-1996. METHODS Fatalities were identified by means of the National Traumatic Occupational Fatalities surveillance system. Fatal occupational injury rates were calculated by race/ethnicity and region using US census-based workforce estimates. RESULTS Non-Hispanic Black men in the South had the highest fatal occupational injury rate (8.5 per 100000 worker-years), followed by Hispanic men in the South (7.9 per 100000 worker-years). Fatal injury rates for Hispanic men increased over the study period, exceeding rates for non-Hispanic Black men in the latter years of observation. CONCLUSIONS These data suggest a change in the demographics of fatal occupational injuries in the United States. Hispanic men in the South appear to be emerging as the group with the nation's highest unintentional fatal occupational injury rate.
Affiliation(s)
- David B Richardson
- Department of Epidemiology, School of Public Health, CB No. 8050, Bank of America Plaza, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599-8050, USA.
49
Abstract
We propose a method based on parametric survival analysis to analyze step-stress data. Step-stress studies are failure time studies in which the experimental stressor is increased at specified time intervals. While this protocol has been frequently employed in industrial reliability studies, it is less common in the life sciences. Possible biological applications include experiments on swimming performance of fish using a step function defining increasing water velocity over time, and treadmill tests on humans. A likelihood-ratio test is developed for comparing the failure times in two groups based on a piecewise constant hazard assumption. The test can be extended to other piecewise distributions and to include covariates. An example data set is used to illustrate the method and highlight experimental design issues. A small simulation study compares this analysis procedure to currently used methods with regard to type I error rate and power.
Affiliation(s)
- Sonja Greven
- Department of Biostatistics, University of North Carolina at Chapel Hill, North Carolina 27599-7420, USA
50
Abstract
BACKGROUND Occupational fatal injury rate studies are often based upon uncertain and variable data. The numerator in rate calculations is often obtained from surveillance systems that can understate the true number of deaths. Worker-years, the denominator in many occupational rate calculations, are frequently estimated from sources that exhibit different amounts of variability. METHODS Effects of these data limitations on analyses of trends in occupational fatal injuries were studied using computer simulation. Fatality counts were generated assuming an undercount. Employment estimates were produced using two different strategies, reflecting either frequent but variable measurements or infrequent, precise estimates with interpolated estimates for intervening years. Poisson regression models were fit to the generated data. A range of empirically motivated fatality rate and employment parameters were studied. RESULTS Undercounting fatalities resulted in biased estimation of the intercept in the Poisson regression model. Relative bias in the trend estimate was near zero for most situations, but increased when a change in fatality undercounting over time was present. Biases for both the intercept and trend were larger when small employment populations were present. Denominator options resulted in similar rate and trend estimates, except where the interpolated method did not capture true trends in employment. CONCLUSIONS Data quality issues such as consistency of conditions throughout the study period and the size of population being studied affect the size of the bias in parameter estimation.
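The intercept bias from undercounting can be reproduced in a few lines. The sketch below is deliberately minimal and all parameter values are illustrative: with only two observation years, the log-linear intercept and trend can be solved exactly from the two observed rates, so no iterative Poisson regression fit is needed.

```python
import numpy as np

rng = np.random.default_rng(7)

def bias_under_undercount(p_capture, n_rep=4000):
    """Simulate Poisson fatality counts at two years under a log-linear
    rate, thin them by the surveillance capture probability, and solve
    exactly for the intercept and trend from the two observed rates."""
    b0, b1 = np.log(5e-5), -0.03   # true log baseline rate and annual trend
    t1, t2 = 0.0, 10.0
    workers = 2_000_000.0          # worker-years at risk each year
    mu1 = np.exp(b0 + b1 * t1) * workers
    mu2 = np.exp(b0 + b1 * t2) * workers
    c1 = rng.binomial(rng.poisson(mu1, n_rep), p_capture)  # undercounted
    c2 = rng.binomial(rng.poisson(mu2, n_rep), p_capture)
    r1, r2 = c1 / workers, c2 / workers
    b1_hat = (np.log(r2) - np.log(r1)) / (t2 - t1)
    b0_hat = np.log(r1) - b1_hat * t1
    return b0_hat.mean() - b0, b1_hat.mean() - b1

bias_b0, bias_b1 = bias_under_undercount(p_capture=0.7)
print("intercept bias:", bias_b0)  # close to log(0.7) = -0.357
print("trend bias:    ", bias_b1)  # near zero
```

Thinning a Poisson count by p multiplies its mean by p, so the log-linear intercept absorbs a log(p) shift while the trend is untouched; a capture fraction that changes over time would instead bias the trend, matching the simulation study's finding.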
Affiliation(s)
- James F Bena
- National Institute for Occupational Safety and Health, Cincinnati, Ohio 44195, USA.