1
Yan Z, Yang M. Statistical considerations in model-based dose finding for binary responses under model uncertainty. Stat Med 2024. [PMID: 38605556] [DOI: 10.1002/sim.10082] [Received: 10/28/2023] [Revised: 02/21/2024] [Accepted: 04/01/2024]
Abstract
The statistical methodology for model-based dose finding under model uncertainty has attracted increasing attention in recent years. While the underlying principles are simple and easy to understand, developing and implementing an efficient approach for binary responses can be a formidable task in practice. Motivated by the statistical challenges encountered in a phase II dose finding study, we explore several key design and analysis issues related to the hybrid testing-modeling approaches for binary responses. The issues include candidate model selection and specifications, optimal design and efficient sample size allocations, and, notably, the methods for dose-response testing and estimation. Specifically, we consider a class of generalized linear models suited for the candidate set and establish D-optimal designs for these models. Additionally, we propose using permutation-based tests for dose-response testing to avoid asymptotic normality assumptions typically required for contrast-based tests. We perform trial simulations to enhance our understanding of these issues.
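The permutation-based dose-response test is only described at a high level in the abstract. A minimal sketch of the idea follows; the dose levels, response data, and the dose-response covariance statistic below are all hypothetical illustrations, not the paper's actual design:

```python
import random

def perm_test_trend(doses, responses, n_perm=2000, seed=1):
    """Permutation p-value for a dose-response trend in binary data.

    doses: per-subject dose levels; responses: 0/1 outcomes.
    Test statistic: covariance between dose and response, which is
    exchangeable across permuted responses under the null of no trend,
    so no asymptotic normality assumption is needed.
    """
    rng = random.Random(seed)
    n = len(doses)
    mean_d = sum(doses) / n

    def stat(resp):
        return sum((d - mean_d) * r for d, r in zip(doses, resp))

    observed = stat(responses)
    perm = list(responses)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(perm)
        if stat(perm) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)   # add-one for a valid p-value

# Hypothetical trial: response rates 1/10, 4/10, 8/10 across 3 doses
doses = [0] * 10 + [1] * 10 + [2] * 10
responses = [0] * 9 + [1] + [0] * 6 + [1] * 4 + [0] * 2 + [1] * 8
p = perm_test_trend(doses, responses)
```

With a monotone trend this strong, the one-sided permutation p-value is small, signaling a dose-response effect without relying on a contrast's asymptotic distribution.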
Affiliation(s)
- Zhiwu Yan
- Biostatistics Department, 89bio, Inc., San Francisco, California, USA
- Min Yang
- Department of Mathematics, Statistics, and Computer Science, University of Illinois at Chicago, Chicago, Illinois, USA
2
Pouzou JG, Zagmutt FJ. Observational Dose-Response Meta-Analysis Methods May Bias Risk Estimates at Low Consumption Levels: The Case of Meat and Colorectal Cancer. Adv Nutr 2024; 15:100214. [PMID: 38521239] [PMCID: PMC11061242] [DOI: 10.1016/j.advnut.2024.100214] [Received: 10/10/2023] [Revised: 03/07/2024] [Accepted: 03/11/2024]
Abstract
Observational studies of foods and health are susceptible to bias, particularly from confounding between diet and other lifestyle factors. Common methods for deriving dose-response meta-analyses (DRMA) may contribute to biased or overly certain risk estimates. We used DRMA models to evaluate the empirical evidence for the association of colorectal cancer (CRC) with unprocessed red meat (RM) and processed meats (PM), and the consistency of this association for low and high consumers under different modeling assumptions. Starting from the Global Burden of Disease project's systematic reviews, we compiled a data set of 29 cohorts of PM consumption contributing 23,522,676 person-years and 23 cohorts of RM totaling 17,259,839 person-years. We fitted DRMA models to lower consumers only [consumption below the United States median for PM (21 g/d) or RM (56 g/d)] and compared them with DRMA models using all consumers. To investigate the impact of model selection, we compared classical DRMA models against an empirical model, both for lower consumers only and for all consumers. Finally, we assessed whether the type of reference consumer (nonconsumer or mixed consumer/nonconsumer) influenced a meta-analysis of the lowest consumption arm. We found no significant association with consumption of 50 g/d of RM using an empirical fit, whether restricted to lower consumption (relative risk [RR] 0.93; 0.80-1.02) or using all consumption levels (RR 1.04; 0.99-1.10), while classical models showed RRs as high as 1.09 (1.00-1.18) at 50 g/d. PM consumption of 20 g/d was not associated with CRC (RR 1.01; 0.87-1.18) when using lower-consumer data, regardless of model choice. Using all consumption data resulted in an association with CRC at 20 g/d of PM for the empirical models (RR 1.07; 1.02-1.12) and with as little as 1 g/d for classical models. The empirical DRMA showed nonlinear, nonmonotonic relationships for PM and RM. Nonconsumer reference groups did not affect the RM (P = 0.056) or PM (P = 0.937) association with CRC in the lowest consumption arms. In conclusion, classical DRMA model assumptions and the inclusion of higher consumption levels influence the association between CRC and low RM and PM consumption. Furthermore, a no-risk limit of 0 g/d consumption of RM and PM is inconsistent with the evidence.
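The classical "linear in dose" DRMA assumption the authors critique can be illustrated with a two-stage pooling sketch. The study-specific slopes and standard errors below are invented for illustration; a real DRMA (e.g., Greenland-Longnecker) would first estimate each slope from correlated category-level relative risks:

```python
import math

# Hypothetical per-study linear slopes (log-RR per g/day of intake)
# and their standard errors
slopes = [0.0012, 0.0020, -0.0004, 0.0016]
ses    = [0.0009, 0.0011, 0.0010, 0.0008]

# Inverse-variance (fixed-effect) pooling of the slopes
weights = [1 / se ** 2 for se in ses]
pooled = sum(w * b for w, b in zip(weights, slopes)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))

# Pooled RR for 50 g/day vs. none, with a 95% interval; note how the
# linear model forces *some* nonzero RR at every dose once the pooled
# slope differs from zero -- the behavior the abstract warns about
rr_50 = math.exp(50 * pooled)
lo, hi = (math.exp(50 * (pooled + z * se_pooled)) for z in (-1.96, 1.96))
```

Because the model is log-linear in dose, a positive pooled slope implies an elevated RR at even 1 g/d, which is why flexible (empirical) dose-response shapes can disagree with classical models at low consumption.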
Affiliation(s)
- Jane G Pouzou
- EpiX Analytics, LLC, Fort Collins, CO, United States
3
Zou T, Wu W, Liu K, Wang K, Lv C. Bayesian Averaging Evaluation Method of Accelerated Degradation Testing Considering Model Uncertainty Based on Relative Entropy. Sensors (Basel) 2024; 24:1426. [PMID: 38474962] [DOI: 10.3390/s24051426] [Received: 12/22/2023] [Revised: 02/16/2024] [Accepted: 02/19/2024]
Abstract
To evaluate the lifetime and reliability of long-life, high-reliability products under limited resources, accelerated degradation testing (ADT) has been widely applied. The Bayesian evaluation method for ADT can make comprehensive use of historical information and overcome the limitations caused by small sample sizes, and it has therefore garnered significant attention from scholars. However, the traditional Bayesian ADT evaluation method has inherent shortcomings. Because of small samples and an incomplete understanding of degradation or acceleration mechanisms, the selected evaluation model may be inaccurate, and the evaluation results may be inaccurate in turn. Describing and quantifying the impact of model uncertainty on evaluation results is therefore a pressing issue in the theoretical research on Bayesian ADT methods. This article addresses model uncertainty in the Bayesian ADT evaluation process. It analyzes the Bayesian ADT modeling process and proposes a new relative-entropy-based Bayesian model averaging evaluation method for ADT, which can, to a certain extent, resolve the evaluation inaccuracy caused by uncertainty in model selection. This study holds theoretical and engineering value for conducting Bayesian ADT evaluation under model uncertainty.
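One plausible reading of relative-entropy-based model averaging is to weight each candidate model by how close (in KL divergence) its predictive distribution is to a reference estimate. The sketch below assumes Gaussian predictions with invented means and standard deviations; the paper's actual weighting scheme may differ:

```python
import math

def kl_gauss(mu0, s0, mu1, s1):
    """Closed-form KL divergence D(N(mu0, s0^2) || N(mu1, s1^2))."""
    return math.log(s1 / s0) + (s0 ** 2 + (mu0 - mu1) ** 2) / (2 * s1 ** 2) - 0.5

# Hypothetical candidate degradation models: each predicts the
# degradation level at a target time as a Gaussian (mean, sd)
candidates = [(4.8, 0.5), (5.4, 0.7), (5.1, 0.4)]
reference  = (5.0, 0.5)   # e.g., a pooled or nonparametric estimate

# Weight each model by exp(-KL to the reference), then normalize,
# so models whose predictions diverge more from the reference
# contribute less to the averaged evaluation
raw = [math.exp(-kl_gauss(mu, s, *reference)) for mu, s in candidates]
weights = [w / sum(raw) for w in raw]

# Model-averaged point prediction of the degradation level
averaged = sum(w * mu for w, (mu, _) in zip(weights, candidates))
```

The averaging hedges against picking a single wrong model: no candidate is discarded outright, but poorly matching ones are down-weighted exponentially.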
Affiliation(s)
- Tianji Zou, Wenbo Wu, Kai Liu, Ke Wang, Congmin Lv
- University of Chinese Academy of Sciences, Chinese Academy of Sciences, Beijing 100049, China
- Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences, Beijing 100094, China
4
Sun T, Zhang X, Lv S, Lin X, Ma J, Liu J, Fang Q, Tang L, Liu L, Cao W, Liu B, Zhu Y. Improving the predictions of leaf photosynthesis during and after short-term heat stress with current rice models. Plant Cell Environ 2023; 46:3353-3370. [PMID: 37575035] [DOI: 10.1111/pce.14683] [Received: 05/31/2022] [Revised: 07/26/2023] [Accepted: 07/31/2023]
Abstract
In response to increasing global warming, extreme heat stress significantly alters photosynthetic production. While numerous studies have investigated the temperature effects on photosynthesis, factors like vapour pressure deficit (VPD), leaf nitrogen, and the feedback of sink limitation during and after extreme heat stress remain underexplored. This study assessed photosynthesis calculations in seven rice growth models using the observed maximum photosynthetic rate (Pmax) during and after short-term extreme heat stress in multi-year environment-controlled experiments. Biochemical models (FvCB-type) outperformed light response curve-based models (LRC-type) when incorporating observed leaf nitrogen, photosynthetically active radiation, temperatures, and intercellular CO2 concentration (Ci) as inputs. Prediction uncertainty during heat stress treatment primarily resulted from variation in temperatures and Ci. Making FVPD (the slope for the linear effect of VPD on Ci/Ca) temperature-dependent, rather than constant as in the original models, significantly improved Ci prediction accuracy under heat stress. Leaf nitrogen response functions led to model variation in leaf photosynthesis predictions after heat stress, which was mitigated by nitrogen response functions calibrated on active photosynthetic nitrogen. Additionally, accounting for observed differences in carbohydrate accumulation between panicles and stems during grain filling improved the feedback of sink limitation, reducing Ci overestimation under heat stress treatments.
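The FvCB-type calculation the comparison centers on can be sketched minimally. The parameter values below (Vcmax, J, Rd, and the kinetic constants) are generic textbook-style values at around 25°C, not the calibrated values from these rice models:

```python
def fvcb_assimilation(ci, vcmax=100.0, j=180.0, rd=1.5,
                      gamma_star=42.75, kc=404.9, ko=278.4, o=210.0):
    """Minimal FvCB net assimilation rate (umol m-2 s-1).

    ci, kc, gamma_star in ubar; ko, o in mbar. Net A is the minimum
    of the Rubisco-limited (wc) and RuBP-regeneration-limited (wj)
    rates, minus day respiration rd.
    """
    wc = vcmax * (ci - gamma_star) / (ci + kc * (1 + o / ko))
    wj = j * (ci - gamma_star) / (4 * ci + 8 * gamma_star)
    return min(wc, wj) - rd

# Assimilation at low vs high intercellular CO2: the low-Ci value is
# Rubisco-limited, the high-Ci value RuBP-regeneration-limited
a_low, a_high = fvcb_assimilation(150.0), fvcb_assimilation(600.0)
```

Because Ci enters both limitation terms, errors in predicting Ci under heat stress (the abstract's main uncertainty source) propagate directly into predicted Pmax, which is why improving the VPD-to-Ci/Ca slope matters so much.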
Affiliation(s)
- Ting Sun
- National Engineering and Technology Center for Information Agriculture, Engineering Research Center of Smart Agriculture, Ministry of Education, Key Laboratory for Crop System Analysis and Decision Making, Ministry of Agriculture, Jiangsu Key Laboratory for Information Agriculture, Jiangsu Collaborative Innovation Center for Modern Crop Production, College of Agriculture, Nanjing Agricultural University, Nanjing, Jiangsu, China
- Key Laboratory of Specialty Agri-product Quality and Hazard Controlling Technology of Zhejiang Province, College of Life Sciences, China Jiliang University, Hangzhou, Zhejiang, China
- Xiaohu Zhang, Suyu Lv, Xuanhao Lin, Jifeng Ma, Jiaming Liu, Qizhao Fang, Liang Tang, Leilei Liu, Weixing Cao, Bing Liu, Yan Zhu
- National Engineering and Technology Center for Information Agriculture, Engineering Research Center of Smart Agriculture, Ministry of Education, Key Laboratory for Crop System Analysis and Decision Making, Ministry of Agriculture, Jiangsu Key Laboratory for Information Agriculture, Jiangsu Collaborative Innovation Center for Modern Crop Production, College of Agriculture, Nanjing Agricultural University, Nanjing, Jiangsu, China
5
Liu H, Zhang X. Frequentist model averaging for undirected Gaussian graphical models. Biometrics 2023; 79:2050-2062. [PMID: 36106680] [DOI: 10.1111/biom.13758] [Received: 05/10/2021] [Revised: 07/12/2022] [Accepted: 08/30/2022]
Abstract
Advances in information technology have made network data increasingly common in a spectrum of big data applications, and such data are often explored with probabilistic graphical models. To estimate the precision matrix accurately, we propose an optimal model averaging estimator for Gaussian graphical models. We prove that the proposed estimator is asymptotically optimal when the candidate models are misspecified. The consistency and asymptotic distribution of the model averaging estimator, as well as the convergence of the weights, are also studied when at least one correct model is included in the candidate set. Furthermore, numerical simulations and a real data analysis of yeast genetic data illustrate that the proposed method is promising.
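The basic object being averaged is a set of candidate precision-matrix estimates. The toy sketch below averages ridge-regularized inverses of a 2x2 sample covariance under fixed illustrative weights; the paper's estimator instead chooses the weights by an optimality criterion, and the covariance and weights here are invented:

```python
def inv2(m):
    """Inverse of a 2x2 matrix [[a, b], [c, d]]."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# Hypothetical sample covariance of two positively correlated variables
S = [[1.0, 0.6], [0.6, 1.0]]

# Candidate models: ridge-regularized precision estimates
# Omega(lam) = (S + lam * I)^-1 for several penalties lam
lams = [0.0, 0.1, 0.5]
candidates = [inv2([[S[0][0] + lam, S[0][1]],
                    [S[1][0], S[1][1] + lam]]) for lam in lams]

# Fixed illustrative weights; an optimal-MA criterion would select
# these to minimize an estimated risk rather than fixing them
weights = [0.5, 0.3, 0.2]
omega_avg = [[sum(w * c[i][j] for w, c in zip(weights, candidates))
              for j in range(2)] for i in range(2)]
```

The averaged precision matrix stays symmetric, and its negative off-diagonal entry reflects the positive partial correlation between the two variables.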
Affiliation(s)
- Huihang Liu
- School of Management, University of Science and Technology of China, Hefei, China
- Xinyu Zhang
- School of Management, University of Science and Technology of China, Hefei, China
- Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, China
6
Allan AL, Cuchiero C, Liu C, Prömel DJ. Model-free portfolio theory: A rough path approach. Math Financ 2023; 33:709-765. [PMID: 38505114] [PMCID: PMC10946658] [DOI: 10.1111/mafi.12376] [Received: 09/06/2021] [Accepted: 12/21/2022]
Abstract
Based on a rough path foundation, we develop a model-free approach to stochastic portfolio theory (SPT). Our approach allows us to handle significantly more general portfolios than previous model-free approaches based on Föllmer integration. Without assuming any underlying probabilistic model, we prove a pathwise formula for the relative wealth process, which in the special case of functionally generated portfolios reduces to a pathwise version of the so-called master formula of classical SPT. We show that the appropriately scaled asymptotic growth rate of a far-reaching generalization of Cover's universal portfolio based on controlled paths coincides with that of the best retrospectively chosen portfolio within this class. We provide several novel results concerning rough integration, and highlight the advantages of the rough path approach by showing that (non-functionally generated) log-optimal portfolios in an ergodic Itô diffusion setting have the same asymptotic growth rate as Cover's universal portfolio and the best retrospectively chosen one.
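The classical object being generalized here, Cover's universal portfolio, is easy to sketch for two assets: average the wealth of all constant-rebalanced portfolios (CRPs) under a uniform prior. The price relatives below are hypothetical:

```python
# Universal portfolio (Cover-style) for two assets over a grid of
# constant-rebalanced portfolios; price relatives are hypothetical.
price_relatives = [(1.05, 0.97), (0.98, 1.06), (1.10, 0.95),
                   (0.96, 1.04), (1.03, 1.01)]

grid = [i / 20 for i in range(21)]          # fraction b held in asset 1

def crp_wealth(b):
    """Final wealth of the CRP that always rebalances to (b, 1 - b)."""
    wealth = 1.0
    for x1, x2 in price_relatives:
        wealth *= b * x1 + (1 - b) * x2
    return wealth

wealths = [crp_wealth(b) for b in grid]
universal = sum(wealths) / len(wealths)     # uniform prior over CRPs
best_crp = max(wealths)                     # best retrospectively chosen CRP
```

The universal wealth can never exceed the best retrospective CRP, but Cover's theorem says the gap in *growth rate* vanishes asymptotically, which is the property the paper extends to controlled paths.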
Affiliation(s)
- Chong Liu
- ShanghaiTech University, Shanghai, China
7
Amer L, Islam TU. An Entropic Approach for Pair Trading in PSX. Entropy (Basel) 2023; 25:494. [PMID: 36981382] [PMCID: PMC10048239] [DOI: 10.3390/e25030494] [Received: 12/22/2022] [Revised: 02/09/2023] [Accepted: 02/14/2023]
Abstract
The premise of pair trading is that when two stocks move together, their prices will converge to a mean value in the future. However, finding the mean-reversion point at which the value of the pair will converge, as well as the optimal boundaries of the trade, is not easy, as uncertainty and model misspecification may lead to losses. To address these problems, this study employed a novel entropic approach that uses entropy as a penalty function for misspecification of the model. Using entropy as a measure of risk in pair trading is a nascent idea; this study used daily data for 64 companies listed on the Pakistan Stock Exchange (PSX) for the years 2017, 2018, and 2019 to compute their returns under the entropic approach. The returns to these stocks were then evaluated and compared with a buy-and-hold strategy. The results show positive and significant returns from pair trading with the entropic approach, which appears to have an edge over buy-and-hold, distance-based, and machine learning approaches in the context of the Pakistani market.
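The abstract does not specify how entropy enters the trading rule; one rough interpretation is to discount a classical mean-reversion signal by the Shannon entropy of the discretized spread, so that a more dispersed (less predictable) spread is penalized. Everything below, including the penalty weight, is a hypothetical sketch, not the authors' formulation:

```python
import math

def shannon_entropy(values, bins=5):
    """Entropy (nats) of values discretized into equal-width bins."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0
    counts = [0] * bins
    for v in values:
        counts[min(int((v - lo) / width), bins - 1)] += 1
    n = len(values)
    return -sum(c / n * math.log(c / n) for c in counts if c)

# Hypothetical daily spread between two co-moving stocks
spread = [0.2, -0.1, 0.4, -0.3, 0.1, 0.0, 0.5, -0.2, 0.3, -0.4]

mean = sum(spread) / len(spread)
sd = (sum((s - mean) ** 2 for s in spread) / len(spread)) ** 0.5
z = (spread[-1] - mean) / sd            # classical z-score entry signal

# Entropy as a misspecification penalty: a higher-entropy spread
# discounts the attractiveness of the trade (0.5 is illustrative)
penalty = shannon_entropy(spread)
adj_signal = abs(z) - 0.5 * penalty
```

Under this reading, a pair only triggers a trade when the deviation is large enough to survive the entropy discount.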
8
Abstract
We examine how policymakers react to a pandemic with uncertainty around key epidemiological and economic policy parameters by embedding a macroeconomic SIR model in a robust control framework. Uncertainty about disease virulence and severity leads to stricter and more persistent quarantines, while uncertainty about the economic costs of mitigation leads to less stringent quarantines. On net, an uncertainty-averse planner adopts stronger mitigation measures. Intuitively, the cost of underestimating the pandemic is out-of-control growth and permanent loss of life, while the cost of underestimating the economic consequences of quarantine is more transitory.
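The trade-off the abstract describes can be illustrated with a bare-bones SIR simulation in which a quarantine intensity scales down transmission. All parameter values are hypothetical, and this omits the paper's economic block and robust-control layer:

```python
# Discrete-time SIR with a quarantine intensity q in [0, 1] that
# scales down the transmission rate; parameter values hypothetical.
def run_sir(q, beta=0.3, gamma=0.1, days=300, i0=1e-4):
    s, i, r = 1.0 - i0, i0, 0.0
    peak = i
    for _ in range(days):
        new_inf = (1 - q) * beta * s * i   # mitigated transmission
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak, r                          # peak infections, final size

peak_open, final_open = run_sir(q=0.0)      # no mitigation
peak_quar, final_quar = run_sir(q=0.5)      # 50% transmission reduction
```

Even this toy model shows the asymmetry the planner faces: underestimating virulence yields a much larger and partly irreversible epidemic, while the quarantine's cost is paid only while it is in force.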
9
Govind K.R. A, Mahapatra S. Frequency Domain Specifications Based Robust Decentralized PI/PID Control Algorithm for Benchmark Variable-Area Coupled Tank Systems. Sensors (Basel) 2022; 22:9165. [PMID: 36501864] [PMCID: PMC9741177] [DOI: 10.3390/s22239165] [Received: 10/29/2022] [Revised: 11/18/2022] [Accepted: 11/22/2022]
Abstract
A decentralized PI/PID controller based on frequency domain analysis for two-input two-output (TITO) coupled tank systems is presented in this paper. The fundamentals of gain margin and phase margin are used to design the proposed PI/PID controller, with the basic objective of keeping each tank at its predetermined level. To satisfy the design specifications, the control algorithm is implemented for decoupled subsystems by employing a decoupler. First-order plus dead time (FOPDT) models are obtained for the decoupled subsystems using a model-reduction technique, and the control law is realized through frequency domain analysis. Further, the robustness of the controller is verified by considering multiplicative input and output uncertainties. The proposed method is briefly contrasted with existing techniques and is envisaged to exhibit better servo and regulatory responses.
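The building block of this design, a PI loop closed around an FOPDT process K·exp(-θs)/(τs+1), can be simulated directly. The process and controller values below are hypothetical and the gains are not derived from the paper's gain/phase-margin formulas; they are merely conservative values that give a stable loop:

```python
# Closed-loop unit-step response of a PI controller on a FOPDT process
K, tau, theta = 2.0, 10.0, 2.0          # process gain, lag, dead time
kp, ki = 0.8, 0.08                      # illustrative PI gains
dt, t_end, setpoint = 0.1, 120.0, 1.0

delay_steps = int(theta / dt)
u_hist = [0.0] * delay_steps            # dead-time buffer
y, integ = 0.0, 0.0
ys = []
for _ in range(int(t_end / dt)):
    e = setpoint - y
    integ += e * dt
    u = kp * e + ki * integ             # PI control law
    u_hist.append(u)
    u_delayed = u_hist.pop(0)           # apply input theta seconds late
    y += dt * (K * u_delayed - y) / tau  # first-order lag dynamics
    ys.append(y)
final = ys[-1]
```

The integral term drives the steady-state error to zero (the level reaches its setpoint), while the dead-time buffer reproduces the phase lag that makes margin-based tuning necessary in the first place.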
10
Oliveira CR, Shapiro ED, Weinberger DM. Bayesian Model Averaging to Account for Model Uncertainty in Estimates of a Vaccine's Effectiveness. Clin Epidemiol 2022; 14:1167-1175. [PMID: 36281232] [PMCID: PMC9587703] [DOI: 10.2147/clep.s378039] [Received: 06/10/2022] [Accepted: 09/28/2022]
Abstract
Purpose: Vaccine effectiveness (VE) studies are often conducted after the introduction of new vaccines to ensure they provide protection in real-world settings. Control of confounding is often needed during the analyses, which is most efficiently done through multivariable modeling. When many confounders are being considered, it can be challenging to know which variables need to be included in the final model. We propose an intuitive Bayesian model averaging (BMA) framework for this task.
Patients and Methods: Data were used from a matched case-control study that aimed to assess the effectiveness of the Lyme vaccine post-licensure. Cases were residents of Connecticut, 15-70 years of age, with confirmed Lyme disease. Up to two healthy controls were matched to each case subject by age. All participants were interviewed, and medical records were reviewed to ascertain immunization history and evaluate potential confounders. BMA was used to systematically search the space of potential models and calculate a weighted average VE estimate from the top subset of models. The performance of BMA was compared with three traditional single-best-model selection methods: two-stage selection, stepwise elimination, and the leaps and bounds algorithm.
Results: The analysis included 358 cases and 554 matched controls. VE ranged between 56% and 73%, and 95% confidence intervals crossed zero in <5% of all candidate models. Averaging across the top 15 models, the BMA VE was 69% (95% CI: 18-88%). The two-stage, stepwise, and leaps and bounds methods yielded VEs of 71% (95% CI: 21-90%), 73% (95% CI: 26-90%), and 74% (95% CI: 27-91%), respectively.
Conclusion: This paper highlights how the BMA framework can be used to generate transparent and robust estimates of VE. The BMA-derived VE and confidence intervals were similar to those estimated using traditional methods. However, by incorporating model uncertainty into the parameter estimation, BMA can lend additional rigor and credibility to a well-designed study.
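The averaging step can be sketched with the common BIC approximation to posterior model probabilities, where each candidate model's weight is proportional to exp(-BIC/2). The VE estimates and BIC values below are invented for illustration, not the study's fitted models:

```python
import math

# Hypothetical candidate models: (VE estimate in %, BIC)
models = [(71.0, 412.3), (68.5, 413.1), (73.0, 414.0),
          (66.0, 415.2), (70.0, 412.9)]

# BMA weight ~ exp(-BIC/2), normalized; subtracting the minimum BIC
# first keeps the exponentials numerically stable
min_bic = min(b for _, b in models)
raw = [math.exp(-(b - min_bic) / 2) for _, b in models]
weights = [w / sum(raw) for w in raw]

# Model-averaged VE: a weighted mean over the candidate set
ve_bma = sum(w * ve for w, (ve, _) in zip(weights, models))
```

Instead of betting everything on the single lowest-BIC model, the averaged estimate lets near-tied models share influence, which is what propagates model uncertainty into the final VE.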
Affiliation(s)
- Carlos R Oliveira
- Department of Pediatrics, Yale University School of Medicine, New Haven, CT, USA; Department of Biostatistics, Yale University School of Public Health, New Haven, CT, USA. Correspondence: Carlos R Oliveira, Yale University School of Medicine, 464 Congress Avenue, Suite 204, New Haven, CT, 06520, USA. Tel +1 203 785 5474
- Eugene D Shapiro
- Department of Pediatrics, Yale University School of Medicine, New Haven, CT, USA; Department of Epidemiology of Microbial Diseases, Yale University School of Public Health, New Haven, CT, USA
- Daniel M Weinberger
- Department of Epidemiology of Microbial Diseases, Yale University School of Public Health, New Haven, CT, USA
11
Robitzsch A. Exploring the Multiverse of Analytical Decisions in Scaling Educational Large-Scale Assessment Data: A Specification Curve Analysis for PISA 2018 Mathematics Data. Eur J Investig Health Psychol Educ 2022; 12:731-753. [PMID: 35877454] [DOI: 10.3390/ejihpe12070054] [Received: 05/28/2022] [Revised: 06/29/2022] [Accepted: 07/04/2022]
Abstract
In educational large-scale assessment (LSA) studies such as PISA, item response theory (IRT) scaling models summarize students’ performance on cognitive test items across countries. This article investigates the impact of different factors in model specification for the PISA 2018 mathematics study. The diverse options in model specification are also known under the labels multiverse analysis and specification curve analysis in the social sciences. We investigate the following five factors of model specification in the PISA scaling model for obtaining the two country distribution parameters (country means and country standard deviations): (1) the choice of the functional form of the IRT model, (2) the treatment of differential item functioning at the country level, (3) the treatment of missing item responses, (4) the impact of item selection in the PISA test, and (5) the impact of test position effects. In our multiverse analysis, model uncertainty had almost the same impact on the variability of country means as the sampling error due to the sampling of students, and an even larger impact than standard errors for country standard deviations. Overall, each of the five specification factors had at least a moderate effect on either country means or country standard deviations. In the discussion section, we critically evaluate the current practice of model specification decisions in LSA studies, arguing that one should either report the variability due to model uncertainty or choose the particular model specification that is most valid. It is emphasized that model fit should not play a role in selecting a scaling strategy for LSA applications.
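The multiverse machinery, crossing every option of every specification factor, amounts to enumerating a Cartesian product of analytic choices. The sketch below uses invented additive effects on a country-mean estimate purely to show the bookkeeping; a real multiverse analysis would refit the IRT scaling model for each combination:

```python
from itertools import product

# Hypothetical additive effects (PISA score points) of five
# specification factors on one country-mean estimate
base = 500.0
factors = {
    "irt_model":   {"1PL": 0.0, "2PL": 1.2},
    "country_dif": {"ignore": 0.0, "partial": -0.8},
    "missing":     {"wrong": 0.0, "ignored": 2.1},
    "item_set":    {"all": 0.0, "subset": -1.5},
    "position_fx": {"no": 0.0, "yes": 0.6},
}

# One estimate per specification: every combination of choices
names = list(factors)
estimates = [base + sum(factors[n][c] for n, c in zip(names, combo))
             for combo in product(*(factors[n] for n in names))]
spread = max(estimates) - min(estimates)   # model-uncertainty range
```

With five binary factors the multiverse already holds 2^5 = 32 specifications, and the spread of the resulting estimates is the model-uncertainty component the article compares against sampling error.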
12
Euskirchen ES, Serbin SP, Carman TB, Fraterrigo JM, Genet H, Iversen CM, Salmon V, McGuire AD. Assessing dynamic vegetation model parameter uncertainty across Alaskan arctic tundra plant communities. Ecol Appl 2022; 32:e2499. [PMID: 34787932] [PMCID: PMC9285828] [DOI: 10.1002/eap.2499] [Received: 08/05/2020] [Revised: 06/22/2021] [Accepted: 07/20/2021]
Abstract
As the Arctic region moves into uncharted territory under a warming climate, it is important to refine the terrestrial biosphere models (TBMs) that help us understand and predict change. One fundamental uncertainty in TBMs relates to model parameters, configuration variables internal to the model whose value can be estimated from data. We incorporate a version of the Terrestrial Ecosystem Model (TEM) developed for arctic ecosystems into the Predictive Ecosystem Analyzer (PEcAn) framework. PEcAn treats model parameters as probability distributions, estimates parameters based on a synthesis of available field data, and then quantifies both model sensitivity and uncertainty to a given parameter or suite of parameters. We examined how variation in 21 parameters in the equation for gross primary production influenced model sensitivity and uncertainty in terms of two carbon fluxes (net primary productivity and heterotrophic respiration) and two carbon (C) pools (vegetation C and soil C). We set up different parameterizations of TEM across a range of tundra types (tussock tundra, heath tundra, wet sedge tundra, and shrub tundra) in northern Alaska, along a latitudinal transect extending from the coastal plain near Utqiaġvik to the southern foothills of the Brooks Range, to the Seward Peninsula. TEM was most sensitive to parameters related to the temperature regulation of photosynthesis. Model uncertainty was mostly due to parameters related to leaf area, temperature regulation of photosynthesis, and the stomatal responses to ambient light conditions. Our analysis also showed that sensitivity and uncertainty to a given parameter varied spatially. At some sites, model sensitivity and uncertainty tended to be connected to a wider range of parameters, underlining the importance of assessing tundra community processes across environmental gradients or geographic locations. 
Generally, across sites, the flux of net primary productivity (NPP) and pool of vegetation C had about equal uncertainty, while heterotrophic respiration had higher uncertainty than the pool of soil C. Our study illustrates the complexity inherent in evaluating parameter uncertainty across highly heterogeneous arctic tundra plant communities. It also provides a framework for iteratively testing how newly collected field data related to key parameters may result in more effective forecasting of Arctic change.
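The PEcAn-style workflow, sample parameters from data-informed distributions and attribute output variance to each, can be sketched with a toy one-at-a-time analysis. The GPP model and all distributions below are hypothetical stand-ins, far simpler than TEM's photosynthesis equation:

```python
import math
import random

# Toy GPP model: light-use efficiency times absorbed PAR (Beer's law)
def gpp(lue, lai, par=400.0, k=0.5):
    absorbed = par * (1 - math.exp(-k * lai))
    return lue * absorbed

rng = random.Random(0)
n = 2000
defaults = {"lue": 0.02, "lai": 2.0}
samplers = {"lue": lambda: rng.gauss(0.02, 0.004),
            "lai": lambda: rng.gauss(2.0, 0.5)}

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# One-at-a-time uncertainty: vary a single parameter from its prior,
# hold the others at defaults, record the induced output variance
contrib = {}
for name, draw in samplers.items():
    args = dict(defaults)
    outs = []
    for _ in range(n):
        args[name] = draw()
        outs.append(gpp(**args))
    contrib[name] = variance(outs)
```

Ranking the parameters by their variance contribution is what tells modelers which field measurements (here, light-use efficiency vs. leaf area) would most effectively shrink forecast uncertainty.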
Affiliation(s)
- Shawn P. Serbin
- Terrestrial Ecosystem Science & Technology Group, Environmental Sciences Department, Brookhaven National Laboratory, Upton, New York 11973, USA
- Tobey B. Carman
- Institute of Arctic Biology, University of Alaska Fairbanks, Fairbanks, Alaska 99775, USA
- Jennifer M. Fraterrigo
- Department of Natural Resources and Environmental Sciences, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801, USA
- Hélène Genet
- Institute of Arctic Biology, University of Alaska Fairbanks, Fairbanks, Alaska 99775, USA
- Colleen M. Iversen
- Environmental Sciences Division and Climate Change Science Institute, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831, USA
- Verity Salmon
- Environmental Sciences Division and Climate Change Science Institute, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831, USA
- A. David McGuire
- Institute of Arctic Biology, University of Alaska Fairbanks, Fairbanks, Alaska 99775, USA
13
|
Zagar A, Kadziola Z, Lipkovich I, Madigan D, Faries D. Evaluating bias control strategies in observational studies using frequentist model averaging. J Biopharm Stat 2022; 32:247-276. [PMID: 35213288] [DOI: 10.1080/10543406.2021.1998095]
Abstract
Estimating a treatment effect from observational data requires modeling both treatment and outcome, and these models are subject to uncertainty and possible misspecification. Previous research has shown that no strategy is uniformly best. In this article we propose a novel Frequentist Model Averaging (FMA) framework that can encompass any estimation strategy and accounts for model uncertainty by computing a cross-validated estimate of Mean Squared Prediction Error (MSPE). We present a simulation study with data mimicking an observational database. Model averaging over 15+ strategies was compared with the individual strategies as well as the single best strategy selected by minimum MSPE. FMA showed robust performance in terms of bias, Mean Squared Error (MSE), and Confidence Interval (CI) coverage. Other strategies, such as linear regression, did well in simple scenarios but were inferior to FMA in a scenario with complex confounding.
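As a sketch of the idea (not the paper's implementation), the snippet below simulates confounded data, scores two hypothetical estimation strategies by cross-validated MSPE of their outcome predictions, and averages the treatment-effect estimates with inverse-MSPE weights — one simple weighting choice among several.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated observational data: x confounds both treatment and outcome
n = 600
x = rng.normal(size=n)
treat = (x + rng.normal(size=n) > 0).astype(float)
y = 2.0 * treat + 1.5 * x + rng.normal(size=n)    # true effect = 2.0

def fit_unadjusted(idx):
    """Group means ignoring x; returns (effect, predict_fn)."""
    mu0 = y[idx][treat[idx] == 0].mean()
    mu1 = y[idx][treat[idx] == 1].mean()
    return mu1 - mu0, lambda j: np.where(treat[j] == 1, mu1, mu0)

def fit_adjusted(idx):
    """Outcome regression y ~ 1 + treat + x."""
    X = np.column_stack([np.ones(idx.size), treat[idx], x[idx]])
    b, *_ = np.linalg.lstsq(X, y[idx], rcond=None)
    return b[1], lambda j: np.column_stack([np.ones(j.size), treat[j], x[j]]) @ b

strategies = [fit_unadjusted, fit_adjusted]

# 5-fold cross-validated mean squared prediction error per strategy
folds = np.array_split(rng.permutation(n), 5)
mspe = np.zeros(len(strategies))
for k, fit in enumerate(strategies):
    for test in folds:
        train = np.setdiff1d(np.arange(n), test)
        _, predict = fit(train)
        mspe[k] += np.mean((y[test] - predict(test)) ** 2) / len(folds)

effects = np.array([fit(np.arange(n))[0] for fit in strategies])
w = (1.0 / mspe) / (1.0 / mspe).sum()             # inverse-MSPE weights
fma_effect = float(w @ effects)
```

The unadjusted contrast absorbs the confounding bias, earns a worse MSPE, and therefore gets down-weighted; the averaged estimate lands closer to the adjusted one.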
Affiliation(s)
- Anthony Zagar
- Lilly Research Labs, Eli Lilly and Company, Indianapolis, Indiana, USA
- Ilya Lipkovich
- Lilly Research Labs, Eli Lilly and Company, Indianapolis, Indiana, USA
- David Madigan
- Provost, Northeastern University, Boston, Massachusetts, USA
- Doug Faries
- Lilly Research Labs, Eli Lilly and Company, Indianapolis, Indiana, USA
|
14
|
Pesenti SM, Millossovich P, Tsanakas A. Cascade Sensitivity Measures. Risk Anal 2021; 41:2392-2414. [PMID: 35088442] [DOI: 10.1111/risa.13758]
Abstract
In risk analysis, sensitivity measures quantify the extent to which the probability distribution of a model output is affected by changes (stresses) in individual random input factors. For input factors that are statistically dependent, we argue that a stress on one input should also precipitate stresses in other input factors. We introduce a novel sensitivity measure, termed cascade sensitivity, defined as a derivative of a risk measure applied on the output, in the direction of an input factor. The derivative is taken after suitably transforming the random vector of inputs, thus explicitly capturing the direct impact of the stressed input factor, as well as indirect effects via other inputs. Furthermore, alternative representations of the cascade sensitivity measure are derived, allowing us to address practical issues, such as incomplete specification of the model and high computational costs. The applicability of the methodology is illustrated through the analysis of a commercially used insurance risk model.
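A finite-difference illustration on a bivariate Gaussian (a toy special case, not the paper's general construction): stressing input 1 alone moves the output's VaR at rate 1, while the cascade version lets input 2 follow through its conditional mean, so the derivative also picks up the dependence.

```python
import numpy as np

rng = np.random.default_rng(2)
n, rho, eps = 50_000, 0.6, 0.1

# Two dependent standard-normal inputs; the model output is their sum
z1 = rng.normal(size=n)
z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.normal(size=n)

def var95(s):                      # 95% value-at-risk as the risk measure
    return np.quantile(s, 0.95)

base = var95(z1 + z2)

# Marginal sensitivity: stress z1 only, holding z2 fixed
marginal = (var95(z1 + eps + z2) - base) / eps

# Cascade sensitivity: the stress on z1 also propagates to z2 via E[z2 | z1]
cascade = (var95(z1 + eps + z2 + rho * eps) - base) / eps
```

Here the marginal derivative is 1 while the cascade derivative is 1 + rho, showing how positively dependent inputs amplify the effect of a stress.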
Affiliation(s)
- Silvana M Pesenti
- Department of Statistical Sciences, University of Toronto, 700 University Avenue, Toronto, Ontario, M5G 1X6, Canada
- Pietro Millossovich
- The Business School (formerly Cass), University of London, London, UK
- DEAMS, University of Trieste, Trieste, Italy
|
15
|
Van der Plas D, Verbraecken J, Willemen M, Meert W, Davis J. Evaluation of Automated Hypnogram Analysis on Multi-Scored Polysomnographies. Front Digit Health 2021; 3:707589. [PMID: 34713177] [PMCID: PMC8521900] [DOI: 10.3389/fdgth.2021.707589]
Abstract
A new method for automated sleep stage scoring of polysomnographies is proposed that uses a random forest approach to model feature interactions and temporal effects. The model mostly relies on features based on the rules from the American Academy of Sleep Medicine, which allows medical experts to gain insights into the model. A common way to evaluate automated approaches to constructing hypnograms is to compare the algorithm's hypnogram to an expert's. However, given the same data, two expert annotators will construct (slightly) different hypnograms due to differing interpretations of the data or individual mistakes. A thorough evaluation of our method is performed on a multi-labeled dataset in which both the inter-rater variability and the prediction uncertainties are taken into account, leading to a new standard for the evaluation of automated sleep stage scoring algorithms. On all epochs, our model achieves an accuracy of 82.7%, which is only slightly lower than the inter-rater agreement. When considering only the 63.3% of the epochs where both the experts and the algorithm are certain, the model achieves an accuracy of 97.8%. Transition periods between sleep stages are identified and studied for the first time. Scoring guidelines for medical experts are provided to complement the certain predictions by scoring only a few epochs manually. This makes the proposed method highly time-efficient while guaranteeing a highly accurate final hypnogram.
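The evaluation idea — score the model only on epochs where the raters agree and the model itself is confident — can be sketched with synthetic labels. All numbers below are invented; real hypnograms use the five AASM stages scored per 30-second epoch.

```python
import numpy as np

rng = np.random.default_rng(3)
n_epochs, n_stages = 2000, 5          # W, N1, N2, N3, REM

truth = rng.integers(0, n_stages, n_epochs)

def expert(error_rate=0.15):
    """A rater who mislabels a random subset of epochs."""
    lab = truth.copy()
    flip = rng.random(n_epochs) < error_rate
    lab[flip] = rng.integers(0, n_stages, flip.sum())
    return lab

rater1, rater2 = expert(), expert()

# A model that is sharp (confident and right) on 70% of epochs
probs = rng.dirichlet(np.ones(n_stages), n_epochs)
easy = rng.random(n_epochs) < 0.7
probs[easy] = 0.05
probs[easy, truth[easy]] = 0.80       # rows still sum to 1
pred, conf = probs.argmax(1), probs.max(1)

certain = (rater1 == rater2) & (conf > 0.6)   # raters agree AND model confident
acc_all = (pred == rater1).mean()
acc_certain = (pred[certain] == rater1[certain]).mean()
```

Restricting to the "certain" subset raises the measured accuracy, mirroring the 82.7%-overall versus 97.8%-on-certain-epochs pattern reported in the abstract.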
Affiliation(s)
- Dries Van der Plas
- Onafhankelijke Software Groep (OSG bv), Micromed Group, Kontich, Belgium; Department of Computer Science, Leuven AI, KU Leuven, Leuven, Belgium; Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium
- Johan Verbraecken
- Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium; Multidisciplinary Sleep Disorders Centre, Antwerp University Hospital, Antwerp, Belgium; Department of Pulmonary Medicine, Antwerp University Hospital, Antwerp, Belgium
- Marc Willemen
- Multidisciplinary Sleep Disorders Centre, Antwerp University Hospital, Antwerp, Belgium
- Wannes Meert
- Department of Computer Science, Leuven AI, KU Leuven, Leuven, Belgium
- Jesse Davis
- Department of Computer Science, Leuven AI, KU Leuven, Leuven, Belgium
|
16
|
MacGillivray BH. Handling Uncertainty in Models of Seismic and Postseismic Hazards: Toward Robust Methods and Resilient Societies. Risk Anal 2021; 41:1499-1512. [PMID: 33368460] [PMCID: PMC8596732] [DOI: 10.1111/risa.13663]
Abstract
Earthquakes, tsunamis, and landslides take a devastating toll on human lives, critical infrastructure, and ecosystems. Harnessing the predictive capacities of hazard models is key to transitioning from reactive approaches to disaster management toward building resilient societies, yet the knowledge that these models produce involves multiple uncertainties. The failure to properly account for these uncertainties has at times had important implications, from the flawed safety measures at the Fukushima power plant, to the reliance on short-term earthquake prediction models (reportedly at the expense of mitigation efforts) in modern China. This article provides an overview of methods for handling uncertainty in probabilistic seismic hazard assessment, tsunami hazard analysis, and debris flow modeling, considering best practices and areas for improvement. It covers sensitivity analysis, structured approaches to expert elicitation, methods for characterizing structural uncertainty (e.g., ensembles and logic trees), and the value of formal decision-analytic frameworks even in situations of deep uncertainty.
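One of the structural-uncertainty devices mentioned, the logic tree, combines hazard curves from alternative models using branch weights. A minimal sketch, with invented hazard numbers and weights:

```python
import numpy as np

# Annual exceedance probabilities at four ground-motion levels from three
# alternative (hypothetical) hazard models -- the logic-tree branches
pga = np.array([0.1, 0.2, 0.4, 0.8])                 # peak ground accel., g
branches = np.array([[0.050, 0.020, 0.006, 0.0010],
                     [0.080, 0.030, 0.008, 0.0020],
                     [0.040, 0.015, 0.004, 0.0008]])
weights = np.array([0.5, 0.3, 0.2])                  # expert-assigned, sum to 1

mean_hazard = weights @ branches                     # weighted mean curve
spread = branches.max(0) - branches.min(0)           # epistemic spread
```

The weighted mean gives a single decision-ready curve, while the spread across branches is a crude display of the epistemic uncertainty that an ensemble preserves.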
|
17
|
Chen B, Craiu RV, Strug LJ, Sun L. The X factor: A robust and powerful approach to X-chromosome-inclusive whole-genome association studies. Genet Epidemiol 2021; 45:694-709. [PMID: 34224641] [PMCID: PMC9292551] [DOI: 10.1002/gepi.22422]
Abstract
The X‐chromosome is often excluded from genome‐wide association studies because of analytical challenges. Some of the problems, such as model uncertainty arising from random, skewed, or no X‐inactivation, have been investigated. Other considerations have received little to no attention, such as the value of considering nonadditive and gene–sex interaction effects, and the inferential consequences of choosing different baseline alleles (i.e., the reference vs. the alternative allele). Here we propose a unified and flexible regression‐based association test for X‐chromosomal variants. We provide theoretical justification for its robustness in the presence of various model uncertainties, as well as for its improved power, compared with existing approaches, under certain scenarios. For completeness, we also revisit the autosomes and show that the proposed framework leads to a more robust approach than the standard method. Finally, we provide supporting evidence by revisiting several published association studies. Supporting Information for this article is available online.
Affiliation(s)
- Bo Chen
- Department of Statistical Sciences, University of Toronto, Toronto, Ontario, Canada
- Radu V Craiu
- Department of Statistical Sciences, University of Toronto, Toronto, Ontario, Canada
- Lisa J Strug
- Department of Statistical Sciences, University of Toronto, Toronto, Ontario, Canada; Biostatistics Division, Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada; Department of Computer Science, University of Toronto, Toronto, Ontario, Canada; Program in Genetics and Genome Biology, The Hospital for Sick Children, Toronto, Ontario, Canada
- Lei Sun
- Department of Statistical Sciences, University of Toronto, Toronto, Ontario, Canada; Biostatistics Division, Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada
|
18
|
Abstract
Policymaking during a pandemic can be extremely challenging. As COVID-19 is a new disease and its global impacts are unprecedented, decisions are taken in a highly uncertain, complex, and rapidly changing environment. In such a context, in which human lives and the economy are at stake, we argue that using ideas and constructs from modern decision theory, even informally, will make policymaking a more responsible and transparent process.
Affiliation(s)
- Loïc Berger
- Centre National de la Recherche Scientifique, IÉSEG School of Management, University of Lille, Unité Mixte de Recherche 9221-Lille Economics Management, 59000 Lille, France
- Resources for the Future-Euro-Mediterranean Center on Climate Change (RFF-CMCC) European Institute on Economics and the Environment, Centro Euro-Mediterraneo sui Cambiamenti Climatici, 20123 Milan, Italy
- Nicolas Berger
- Faculty of Public Health and Policy, London School of Hygiene & Tropical Medicine, London WC1H 9SH, United Kingdom
- Department of Epidemiology and Public Health, Sciensano (Belgian Scientific Institute of Public Health), 1050 Brussels, Belgium
- Valentina Bosetti
- Resources for the Future-Euro-Mediterranean Center on Climate Change (RFF-CMCC) European Institute on Economics and the Environment, Centro Euro-Mediterraneo sui Cambiamenti Climatici, 20123 Milan, Italy
- Department of Economics, Bocconi University, 20136 Milan, Italy
- Innocenzo Gasparini Institute for Economic Research, Bocconi University, 20136 Milan, Italy
- Itzhak Gilboa
- Economics and Decision Sciences Department, École des Hautes Études Commerciales de Paris, 78351 Jouy-en-Josas, France
- Eitan Berglas School of Economics, Tel Aviv University, Tel Aviv 69978, Israel
- Lars Peter Hansen
- Department of Economics, University of Chicago, Chicago, IL 60637
- Department of Statistics, University of Chicago, Chicago, IL 60637
- Booth School of Business, University of Chicago, Chicago, IL 60637
- Christopher Jarvis
- Department of Infectious Disease Epidemiology, London School of Hygiene & Tropical Medicine, London WC1E 7HT, United Kingdom
- Massimo Marinacci
- Innocenzo Gasparini Institute for Economic Research, Bocconi University, 20136 Milan, Italy
- Department of Decision Sciences, Bocconi University, 20136 Milan, Italy
- Richard D Smith
- Faculty of Public Health and Policy, London School of Hygiene & Tropical Medicine, London WC1H 9SH, United Kingdom
- College of Medicine and Health, University of Exeter, Exeter EX1 2LU, United Kingdom
|
19
|
Xia J, Wang J, Niu S. Research challenges and opportunities for using big data in global change biology. Glob Chang Biol 2020; 26:6040-6061. [PMID: 32799353] [DOI: 10.1111/gcb.15317]
Abstract
Global change biology has entered a big data era owing to the vast increase in the availability of both environmental and biological data. Big data refers to large data volumes, complex data sets, and multiple data sources. The recent use of such big data is improving our understanding of interactions between biological systems and global environmental changes. In this review, we first explore how big data has been analyzed to identify general patterns of biological responses to global changes at scales from gene to ecosystem. We then investigate how observational networks and space-based big data have facilitated the discovery of emergent mechanisms and phenomena at regional and global scales. Next, we evaluate predictions of the terrestrial biosphere under global change made with large volumes of model output. Finally, we introduce methods for extracting knowledge from big data, such as meta-analysis, machine learning, traceability analysis, and data assimilation. Big data has opened new research opportunities, especially for developing new data-driven theories to improve biological predictions in Earth system models, tracing global change impacts across organismic levels, and constructing cyberinfrastructure tools to accelerate the pace of model-data integration. These efforts will help remove the bottleneck in using big data to understand biological responses and adaptations to future global changes.
Affiliation(s)
- Jianyang Xia
- Zhejiang Tiantong Forest Ecosystem National Observation and Research Station, Research Center for Global Change and Ecological Forecasting, School of Ecological and Environmental Sciences, East China Normal University, Shanghai, China
- Jing Wang
- Zhejiang Tiantong Forest Ecosystem National Observation and Research Station, Research Center for Global Change and Ecological Forecasting, School of Ecological and Environmental Sciences, East China Normal University, Shanghai, China
- Shanghai Institute of Pollution Control and Ecological Security, Shanghai, China
- Shuli Niu
- Key Laboratory of Ecosystem Network Observation and Modeling, Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences, Beijing, China
|
20
|
Tapia Stefanoni J. Welfare Cost of Model Uncertainty in a Small Open Economy. Entropy (Basel) 2020; 22:E1221. [PMID: 33286989] [PMCID: PMC7712009] [DOI: 10.3390/e22111221]
Abstract
This paper extends the canonical small open-economy real-business-cycle model by considering model uncertainty. Domestic households have multiplier preferences, which lead them to make robust decisions in response to possible misspecification of the model for the economy's aggregate productivity. Using perturbation methods, the paper extends the literature on real-business-cycle models by deriving a closed-form solution for the combined welfare effect of the two sources of uncertainty, namely risk and model uncertainty. While classical risk has an ambiguous effect on welfare, the addition of model uncertainty is unambiguously welfare-deteriorating. Hence, the overall effect of uncertainty on welfare is ambiguous, depending on consumers' preferences and model parameters. The paper provides numerical results for the welfare effects of uncertainty measured in units of consumption equivalence. At moderate (high) levels of risk aversion, the effect of risk on household welfare is positive (negative). The addition of model uncertainty, for all levels of concern about model uncertainty and most risk-aversion values, turns the overall effect of uncertainty on household welfare negative. Notably, the analytical decomposition and combination of the effects of the two types of uncertainty considered here, and the resulting ambiguous effect on overall welfare, have not been derived in the previous literature on small open economies.
Affiliation(s)
- Jocelyn Tapia Stefanoni
- Department of Industrial Engineering, Universidad Diego Portales, 441 Ejercito Ave, Santiago 8370191, Chile
|
21
|
Steinbrener J, Posch K, Pilz J. Measuring the Uncertainty of Predictions in Deep Neural Networks with Variational Inference. Sensors (Basel) 2020; 20:6011. [PMID: 33113927] [PMCID: PMC7660222] [DOI: 10.3390/s20216011]
Abstract
We present a novel approach for training deep neural networks in a Bayesian way. Compared to other Bayesian deep learning formulations, our approach allows for quantifying the uncertainty in model parameters while adding only very few additional parameters to be optimized. The proposed approach uses variational inference to approximate the intractable a posteriori distribution on the basis of a normal prior. Because the a posteriori uncertainty of the network parameters is represented per network layer and depends on the estimated parameter expectation values, only very few additional parameters need to be optimized compared with a non-Bayesian network. We compare our approach to classical deep learning, Bernoulli dropout, and Bayes by Backprop using the MNIST dataset. Compared to classical deep learning, the test error is reduced by 15%. We also show that the uncertainty information obtained can be used to calculate credible intervals for the network prediction and to optimize the network architecture for the dataset at hand. To illustrate that our approach also scales to large networks and input vector sizes, we apply it to the GoogLeNet architecture on a custom dataset, achieving an average accuracy of 0.92. Using 95% credible intervals, all but one wrong classification result can be detected.
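A drastically simplified numpy sketch of the idea: sample weights with the reparameterization trick and read off a credible interval from Monte Carlo forward passes. The parameterization here, a single shared log-sigma per layer, is an assumption for illustration and not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(4)

class VariationalLinear:
    """Linear layer whose weights have a Gaussian posterior N(mu, sigma^2).
    A single shared log-sigma per layer adds just one extra parameter."""
    def __init__(self, n_in, n_out):
        self.mu = rng.normal(0.0, 0.3, size=(n_in, n_out))
        self.log_sigma = np.log(0.1)

    def __call__(self, x):
        # Reparameterization trick: w = mu + sigma * eps, eps ~ N(0, 1)
        eps = rng.normal(size=self.mu.shape)
        return x @ (self.mu + np.exp(self.log_sigma) * eps)

layer = VariationalLinear(3, 1)
x = np.ones((1, 3))

# Monte Carlo forward passes give a predictive distribution for the output
draws = np.array([layer(x)[0, 0] for _ in range(2000)])
lo, hi = np.quantile(draws, [0.025, 0.975])   # 95% credible interval
```

In training, mu and log_sigma would be optimized against the variational (ELBO) objective; here only the prediction-time uncertainty readout is shown.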
Affiliation(s)
- Jan Steinbrener
- Control of Networked Systems Group, Department of Smart Systems Technologies, Universität Klagenfurt, Universitätsstr 65-67, 9020 Klagenfurt, Austria
- Department of Statistics, Universität Klagenfurt, Universitätsstr 65-67, 9020 Klagenfurt, Austria
- Konstantin Posch
- CTR Carinthian Tech Research AG, Europastr 12, 9524 Villach, Austria
- Jürgen Pilz
- CTR Carinthian Tech Research AG, Europastr 12, 9524 Villach, Austria
|
22
|
Eckstein S, Kupper M, Pohl M. Robust risk aggregation with neural networks. Math Financ 2020; 30:1229-1272. [PMID: 33041536] [PMCID: PMC7540357] [DOI: 10.1111/mafi.12280]
Abstract
We consider settings in which the distribution of a multivariate random variable is partly ambiguous. We assume the ambiguity lies on the level of the dependence structure, and that the marginal distributions are known. Furthermore, a current best guess for the distribution, called reference measure, is available. We work with the set of distributions that are both close to the given reference measure in a transportation distance (e.g., the Wasserstein distance), and additionally have the correct marginal structure. The goal is to find upper and lower bounds for integrals of interest with respect to distributions in this set. The described problem appears naturally in the context of risk aggregation. When aggregating different risks, the marginal distributions of these risks are known and the task is to quantify their joint effect on a given system. This is typically done by applying a meaningful risk measure to the sum of the individual risks. For this purpose, the stochastic interdependencies between the risks need to be specified. In practice, the models of this dependence structure are however subject to relatively high model ambiguity. The contribution of this paper is twofold: First, we derive a dual representation of the considered problem and prove that strong duality holds. Second, we propose a generally applicable and computationally feasible method, which relies on neural networks, in order to numerically solve the derived dual problem. The latter method is tested on a number of toy examples, before it is finally applied to perform robust risk aggregation in a real-world instance.
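Before the dual representation and neural solver, the basic phenomenon — same marginals, different dependence, very different aggregate risk — can be seen in a crude comparison. This is illustrative only; the paper works within a Wasserstein neighborhood of a reference model rather than these extreme copulas.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000

# One standard-normal marginal for each risk; three dependence assumptions
x = rng.normal(size=n)
y_indep = rng.normal(size=n)       # independence copula
y_comon = x.copy()                 # comonotonic: perfectly dependent
y_counter = -x                     # countermonotonic: perfectly hedged

def var99(s):
    return np.quantile(s, 0.99)    # 99% value-at-risk of the aggregate

risks = {"independent": var99(x + y_indep),
         "comonotonic": var99(x + y_comon),
         "countermonotonic": var99(x + y_counter)}
```

The gap between the comonotonic and countermonotonic cases is the span that dependence ambiguity can open up; restricting to distributions near a reference measure, as the paper does, narrows these bounds to something decision-relevant.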
Affiliation(s)
- Michael Kupper
- Department of Mathematics, University of Konstanz, Konstanz, Germany
- Mathias Pohl
- Faculty of Business, Economics & Statistics, University of Vienna, Vienna, Austria
|
23
|
de Vos CJ, Taylor RA, Simons RRL, Roberts H, Hultén C, de Koeijer AA, Lyytikäinen T, Napp S, Boklund A, Petie R, Sörén K, Swanenburg M, Comin A, Seppä-Lassila L, Cabral M, Snary EL. Cross-Validation of Generic Risk Assessment Tools for Animal Disease Incursion Based on a Case Study for African Swine Fever. Front Vet Sci 2020; 7:56. [PMID: 32133376] [PMCID: PMC7039936] [DOI: 10.3389/fvets.2020.00056]
Abstract
In recent years, several generic risk assessment (RA) tools have been developed that can be applied to assess the incursion risk of multiple infectious animal diseases allowing for a rapid response to a variety of newly emerging or re-emerging diseases. Although these tools were originally developed for different purposes, they can be used to answer similar or even identical risk questions. To explore the opportunities for cross-validation, seven generic RA tools were used to assess the incursion risk of African swine fever (ASF) to the Netherlands and Finland for the 2017 situation and for two hypothetical scenarios in which ASF cases were reported in wild boar and/or domestic pigs in Germany. The generic tools ranged from qualitative risk assessment tools to stochastic spatial risk models but were all parameterized using the same global databases for disease occurrence and trade in live animals and animal products. A comparison of absolute results was not possible, because output parameters represented different endpoints, varied from qualitative probability levels to quantitative numbers, and were expressed in different units. Therefore, relative risks across countries and scenarios were calculated for each tool, for the three pathways most in common (trade in live animals, trade in animal products, and wild boar movements) and compared. For the 2017 situation, all tools evaluated the risk to the Netherlands to be higher than Finland for the live animal trade pathway, the risk to Finland the same or higher as the Netherlands for the wild boar pathway, while the tools were inconclusive on the animal products pathway. All tools agreed that the hypothetical presence of ASF in Germany increased the risk to the Netherlands, but not to Finland. The ultimate aim of generic RA tools is to provide risk-based evidence to support risk managers in making informed decisions to mitigate the incursion risk of infectious animal diseases. 
The case study illustrated that conclusions on the ASF risk were similar across the generic RA tools, despite differences observed in calculated risks. Hence, it was concluded that the cross-validation contributed to the credibility of their results.
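The comparison step can be sketched: because the tools' outputs are in incommensurable units, each tool's risks are first expressed relative to that tool's own maximum, and only then are rankings compared across tools. The numbers below are invented, not the study's.

```python
# Hypothetical incursion-risk outputs from three tools in different units
# (a probability, expected introductions per year, a numeric ordinal score)
tools = {"tool_A": {"NL": 0.012, "FI": 0.003},
         "tool_B": {"NL": 4.1, "FI": 0.9},
         "tool_C": {"NL": 3.0, "FI": 2.0}}

# Normalize within each tool so cross-tool comparison is of relative risk
relative = {name: {c: v / max(vals.values()) for c, v in vals.items()}
            for name, vals in tools.items()}

# Do the tools agree on the ranking of the two countries?
agree = all(r["NL"] >= r["FI"] for r in relative.values())
```

Agreement on rankings, rather than on absolute magnitudes, is what the cross-validation in the study could meaningfully test.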
Affiliation(s)
- Clazien J. de Vos
- Department of Bacteriology and Epidemiology, Wageningen Bioveterinary Research (WBVR), Wageningen University & Research, Lelystad, Netherlands
- Rachel A. Taylor
- Department of Epidemiological Sciences, Animal and Plant Health Agency (APHA), Weybridge, United Kingdom
- Robin R. L. Simons
- Department of Epidemiological Sciences, Animal and Plant Health Agency (APHA), Weybridge, United Kingdom
- Helen Roberts
- Department for Environment, Food & Rural Affairs (Defra), London, United Kingdom
- Aline A. de Koeijer
- Department of Bacteriology and Epidemiology, Wageningen Bioveterinary Research (WBVR), Wageningen University & Research, Lelystad, Netherlands
- Sebastian Napp
- Centre de Recerca en Sanitat Animal (CReSA IRTA-UAB), Bellaterra, Spain
- Anette Boklund
- Department of Veterinary and Animal Sciences, Section for Animal Welfare and Disease Control, University of Copenhagen, Frederiksberg, Denmark
- Ronald Petie
- Department of Bacteriology and Epidemiology, Wageningen Bioveterinary Research (WBVR), Wageningen University & Research, Lelystad, Netherlands
- Kaisa Sörén
- National Veterinary Institute (SVA), Uppsala, Sweden
- Manon Swanenburg
- Department of Bacteriology and Epidemiology, Wageningen Bioveterinary Research (WBVR), Wageningen University & Research, Lelystad, Netherlands
- Arianna Comin
- National Veterinary Institute (SVA), Uppsala, Sweden
- Maria Cabral
- Department of Bacteriology and Epidemiology, Wageningen Bioveterinary Research (WBVR), Wageningen University & Research, Lelystad, Netherlands
- Emma L. Snary
- Department of Epidemiological Sciences, Animal and Plant Health Agency (APHA), Weybridge, United Kingdom
|
24
|
Ul Abideen Z, Ghafoor M, Munir K, Saqib M, Ullah A, Zia T, Tariq SA, Ahmed G, Zahra A. Uncertainty Assisted Robust Tuberculosis Identification With Bayesian Convolutional Neural Networks. IEEE Access 2020; 8:22812-22825. [PMID: 32391238] [PMCID: PMC7176037] [DOI: 10.1109/access.2020.2970023]
Abstract
Tuberculosis (TB) is an infectious disease that can be fatal if left untreated. TB detection involves extracting complex manifestation features, such as lung cavities, air space consolidation, endobronchial spread, and pleural effusions, from chest X-rays (CXRs). Convolutional neural networks (CNNs), a deep learning approach, can learn such complex features from CXR images. The main problem is that a CNN classifying CXRs through a softmax layer does not account for uncertainty: it cannot convey the true class probability of a CXR or single out confusing cases during TB detection. This paper presents a solution for TB identification using a Bayesian convolutional neural network (B-CNN), which handles uncertain cases with low discernibility between TB-manifested and non-TB CXRs. The proposed B-CNN methodology is evaluated on two TB benchmark datasets, Montgomery and Shenzhen. For training and testing of the proposed scheme we used the Google Colab platform, which provides an NVidia Tesla K80 with 12 GB of VRAM, a single 2.3 GHz Xeon core, 12 GB of RAM, and 320 GB of disk. The B-CNN achieves 96.42% and 86.46% accuracy on the two datasets, respectively, compared with state-of-the-art machine learning and CNN approaches. Moreover, the B-CNN validates its results by flagging CXRs as confusion cases when the variance of its predicted outputs exceeds a certain threshold. The results demonstrate the advantage of the B-CNN for identifying TB and non-TB CXRs in terms of accuracy, variance of the predicted probabilities, and model uncertainty.
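The filtering rule — flag a CXR as a confusion case when the variance of its Monte Carlo predictions exceeds a threshold — can be sketched with synthetic predictive samples. The threshold and all numbers here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)
n_cases, n_mc = 500, 50

# Synthetic Monte Carlo TB probabilities per CXR, standing in for stochastic
# forward passes of a Bayesian CNN: 80% clear-cut cases, 20% ambiguous ones
clear = rng.random(n_cases) < 0.8
center = np.where(clear, rng.choice([0.05, 0.95], n_cases), 0.5)
noise = np.where(clear, 0.02, 0.20)[:, None]
mc = np.clip(center[:, None] + noise * rng.normal(size=(n_cases, n_mc)), 0, 1)

variance = mc.var(axis=1)
threshold = 0.01
confusing = variance > threshold       # refer these for expert reading
auto_read = ~confusing                 # the model decides these on its own
```

High-variance cases are exactly the ambiguous ones, so the threshold cleanly separates cases the model may decide from cases that merit a radiologist's eye.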
Affiliation(s)
- Zain Ul Abideen
- Department of Computer Science, COMSATS University Islamabad (CUI), Islamabad 44000, Pakistan
- Mubeen Ghafoor
- Department of Computer Science, COMSATS University Islamabad (CUI), Islamabad 44000, Pakistan
- FET - Computer Science and Creative Technologies, University of the West of England, Bristol BS16 1QY, UK
- Kamran Munir
- FET - Computer Science and Creative Technologies, University of the West of England, Bristol BS16 1QY, UK
- Madeeha Saqib
- Department of Computer Information Systems, College of Computer Science and Information Technology, Imam Abdulrahman Bin Faisal University, Dammam 34212, Saudi Arabia
- Ata Ullah
- Department of Computer Science, National University of Modern Languages (NUML), Islamabad 44000, Pakistan
- Tehseen Zia
- Department of Computer Science, COMSATS University Islamabad (CUI), Islamabad 44000, Pakistan
- Syed Ali Tariq
- Department of Computer Science, COMSATS University Islamabad (CUI), Islamabad 44000, Pakistan
- Ghufran Ahmed
- Department of Computer Science, National University of Computer and Emerging Sciences (NUCES), Karachi 54700, Pakistan
- Asma Zahra
- Department of Computer Science, COMSATS University Islamabad (CUI), Islamabad 44000, Pakistan
|
25
|
Mobiny A, Singh A, Van Nguyen H. Risk-Aware Machine Learning Classifier for Skin Lesion Diagnosis. J Clin Med 2019; 8:E1241. [PMID: 31426482] [PMCID: PMC6723257] [DOI: 10.3390/jcm8081241]
Abstract
Knowing when a machine learning system is not confident about its prediction is crucial in medical domains where safety is critical. Ideally, a machine learning algorithm should make a prediction only when it is highly certain about its competency, and refer the case to physicians otherwise. In this paper, we investigate how Bayesian deep learning can improve the performance of the machine-physician team in the skin lesion classification task. We used the publicly available HAM10000 dataset, which includes samples from seven common skin lesion categories: Melanoma (MEL), Melanocytic Nevi (NV), Basal Cell Carcinoma (BCC), Actinic Keratoses and Intraepithelial Carcinoma (AKIEC), Benign Keratosis (BKL), Dermatofibroma (DF), and Vascular (VASC) lesions. Our experimental results show that Bayesian deep networks can boost the diagnostic performance of the standard DenseNet-169 model from 81.35% to 83.59% without incurring additional parameters or heavy computation. More importantly, a hybrid physician-machine workflow reaches a classification accuracy of 90% while referring only 35% of the cases to physicians. The findings are expected to generalize to other medical diagnosis applications. We believe that the availability of risk-aware machine learning methods will enable a wider adoption of machine learning technology in clinical settings.
Collapse
Affiliation(s)
- Aryan Mobiny
- Department of Electrical and Computer Engineering, University of Houston, Houston, TX 77004, USA.
| | - Aditi Singh
- Department of Electrical and Computer Engineering, University of Houston, Houston, TX 77004, USA
| | - Hien Van Nguyen
- Department of Electrical and Computer Engineering, University of Houston, Houston, TX 77004, USA
| |
Collapse
|
26
|
Collalti A, Thornton PE, Cescatti A, Rita A, Borghetti M, Nolè A, Trotta C, Ciais P, Matteucci G. The sensitivity of the forest carbon budget shifts across processes along with stand development and climate change. Ecol Appl 2019; 29:e01837. [PMID: 30549378 PMCID: PMC6849766 DOI: 10.1002/eap.1837] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/22/2018] [Revised: 11/05/2018] [Accepted: 11/13/2018] [Indexed: 05/10/2023]
Abstract
The future trajectory of atmospheric CO2 concentration depends on the development of the terrestrial carbon sink, which in turn is influenced by forest dynamics under changing environmental conditions. An in-depth understanding of model sensitivities and uncertainties in non-steady-state conditions is necessary for reliable and robust projections of forest development under scenarios of global warming and CO2 enrichment. Here, we systematically assessed whether a biogeochemical process-based model (3D-CMCC-CNR), which shares similarities with many other vegetation models, maintained a consistent sensitivity to its 55 input parameters through time when simulating net primary productivity (NPP) and standing woody biomass (SWB), during forest ageing and structuring as well as under climate change scenarios. Overall, the model, applied at three contrasting European forests, showed low sensitivity to the majority of its parameters. Interestingly, model sensitivity to parameters varied through the course of >100 yr of simulations. In particular, the model was highly responsive to the allometric parameters used to initialize forest carbon and nitrogen pools early in the simulation (for NPP up to ~37%, 256 g C·m-2·yr-1, and for SWB up to ~90%, 65 Mg C/ha, compared with the standard simulation), with this sensitivity decreasing sharply during forest development. At medium to longer time scales, and under climate change scenarios, the model became increasingly sensitive to additional and/or different parameters controlling biomass accumulation and autotrophic respiration (for NPP up to ~30%, 167 g C·m-2·yr-1, and for SWB up to ~24%, 64 Mg C/ha, compared with the standard simulation). Interestingly, model outputs were more sensitive to parameters and processes controlling stand development than to climate change itself (i.e., warming and changes in atmospheric CO2 concentration), although sensitivities were generally higher under climate change scenarios. Our results suggest the need for sensitivity and uncertainty analyses that cover multiple temporal scales along forest developmental stages to better assess the potential of future forests to act as a global terrestrial carbon sink.
Collapse
Affiliation(s)
- Alessio Collalti
- National Research Council of Italy, Institute for Agriculture and Forestry Systems in the Mediterranean (CNR-ISAFOM), 87036 Rende, Cosenza, Italy
- Impacts on Agriculture, Forests and Ecosystem Services (CMCC-IAFES) Division, Foundation Euro-Mediterranean Centre on Climate Change, 01100 Viterbo, Italy
| | - Peter E. Thornton
- Environmental Sciences Division and Climate Change Science Institute, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37830, USA
| | - Alessandro Cescatti
- Joint Research Centre, Directorate for Sustainable Resources, European Commission, Ispra, Italy
| | - Angelo Rita
- Scuola di Scienze Agrarie, Forestali, Alimentari e Ambientali, Università degli Studi della Basilicata, Viale dell'Ateneo Lucano 10, 85100 Potenza, Italy
| | - Marco Borghetti
- Scuola di Scienze Agrarie, Forestali, Alimentari e Ambientali, Università degli Studi della Basilicata, Viale dell'Ateneo Lucano 10, 85100 Potenza, Italy
| | - Angelo Nolè
- Scuola di Scienze Agrarie, Forestali, Alimentari e Ambientali, Università degli Studi della Basilicata, Viale dell'Ateneo Lucano 10, 85100 Potenza, Italy
| | - Carlo Trotta
- Department for Innovation in Biological, Agro-Food and Forest Systems (DIBAF), University of Tuscia, 01100 Viterbo, Italy
| | - Philippe Ciais
- IPSL-LSCE, CEA CNRS UVSQ UPSaclay, Centre d'Etudes Orme des Merisiers, 91191 Gif sur Yvette, France
| | - Giorgio Matteucci
- National Research Council of Italy, Institute for Agriculture and Forestry Systems in the Mediterranean (CNR-ISAFOM), 87036 Rende, Cosenza, Italy
| |
Collapse
|
27
|
Moges T, Ameta G, Witherell P. A Review of Model Inaccuracy and Parameter Uncertainty in Laser Powder Bed Fusion Models and Simulations. J Manuf Sci Eng 2019; 141:10.1115/1.4042789. [PMID: 31097908 PMCID: PMC6513316 DOI: 10.1115/1.4042789] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
This paper presents a comprehensive review of the sources of model inaccuracy and parameter uncertainty in the metal laser powder bed fusion (L-PBF) process. Metal additive manufacturing (AM) involves multiple physical phenomena and parameters that potentially affect the quality of the final part. To capture the dynamics and complexity of the heat and phase transformations in the L-PBF process, computational models and simulations ranging from low to high fidelity have been developed. Since it is difficult to incorporate all the physical phenomena encountered in the L-PBF process, computational models rely on assumptions that may neglect or simplify some physics of the process. Modeling assumptions and uncertainty play a significant role in the predictive accuracy of such L-PBF models. In this study, sources of modeling inaccuracy at different stages of the process, from powder bed formation to melting and solidification, are reviewed. The sources of parameter uncertainty related to material properties and process parameters are also reviewed. The aim of this review is to support the future development of an approach to quantify these sources of uncertainty in L-PBF models. Quantifying the sources of uncertainty is necessary for understanding the tradeoffs in model fidelity and for guiding the selection of a model suitable for its intended purpose.
Collapse
Affiliation(s)
- Tesfaye Moges
- Engineering Laboratory, National Institute of Standards and Technology, Gaithersburg, MD 20899
| | - Gaurav Ameta
- Engineering Laboratory, National Institute of Standards and Technology, Gaithersburg, MD 20899
| | - Paul Witherell
- Engineering Laboratory, National Institute of Standards and Technology, Gaithersburg, MD 20899
| |
Collapse
|
28
|
Calabrese EJ, Hanekamp JC, Shamoun DY. The EPA Cancer Risk Assessment Default Model Proposal: Moving Away From the LNT. Dose Response 2018; 16:1559325818789840. [PMID: 30116166 PMCID: PMC6088500 DOI: 10.1177/1559325818789840] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2018] [Revised: 06/12/2018] [Accepted: 06/12/2018] [Indexed: 11/16/2022] Open
Abstract
This article strongly supports the Environmental Protection Agency's proposal to make significant changes to its cancer risk assessment principles and practices by moving away from the use of the linear nonthreshold (LNT) dose-response as the default model. An alternative approach is proposed, based on model uncertainty, which integrates the most scientifically supportable features of the threshold, hormesis, and LNT models to identify the doses that optimize population-based responses (ie, maximize health benefits/minimize health harm). This novel approach to cancer risk assessment represents a significant improvement over the current LNT default method from scientific and public health perspectives.
Collapse
Affiliation(s)
- Edward J. Calabrese
- Department of Environmental Health Sciences, University of Massachusetts, Amherst, MA, USA
| | - Jaap C. Hanekamp
- Science Department, University College Roosevelt, Middelburg, The Netherlands
| | | |
Collapse
|
29
|
Regan HM, Bohórquez CI, Keith DA, Regan TJ, Anderson KE. Implications of different population model structures for management of threatened plants. Conserv Biol 2017; 31:459-468. [PMID: 27596063 DOI: 10.1111/cobi.12831] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/16/2016] [Revised: 05/23/2016] [Accepted: 07/19/2016] [Indexed: 06/06/2023]
Abstract
Population viability analysis (PVA) is a reliable tool for ranking management options for a range of species despite parameter uncertainty. No one has yet investigated whether this holds true for model uncertainty for species with complex life histories and for responses to multiple threats. We tested whether a range of model structures yielded similar rankings of management and threat scenarios for 2 plant species with complex postfire responses. We examined 2 contrasting species from different plant functional types: an obligate seeding shrub and a facultative resprouting shrub. We exposed each to altered fire regimes and an additional, species-specific threat. Long-term demographic data sets were used to construct an individual-based model (IBM), a complex stage-based model, and a simple matrix model that subsumes all life stages into 2 or 3 stages. Agreement across models was good under some scenarios and poor under others. Results from the simple and complex matrix models were more similar to each other than to the IBM. Results were robust across models when dominant threats are considered but were less so for smaller effects. Robustness also broke down as the scenarios deviated from baseline conditions, likely the result of a number of factors related to the complexity of the species' life history and how it was represented in a model. Although PVA can be an invaluable tool for integrating data and understanding species' responses to threats and management strategies, this is best achieved in the context of decision support for adaptive management alongside multiple lines of evidence and expert critique of model construction and output.
Collapse
Affiliation(s)
- Helen M Regan
- Biology Department, University of California Riverside, 900 University Avenue, Riverside, CA, 92521, U.S.A
| | - Clara I Bohórquez
- Biology Department, University of California Riverside, 900 University Avenue, Riverside, CA, 92521, U.S.A
- Biology Department, Universidad Nacional de Colombia, Carrera 45 #26-85, Edif. Uriel Gutiérrez, Bogotá D.C., Colombia
| | - David A Keith
- Centre for Ecosystem Science, School of Biological, Earth and Environmental Sciences, The University of New South Wales, Sydney, NSW, 2052, Australia
- New South Wales Office of Environment & Heritage, P.O. Box A290, Sydney South, NSW, 1232, Australia
| | - Tracey J Regan
- Arthur Rylah Institute for Environmental Research, The Department of Environment, Land, Water and Planning, 123 Brown Street, Heidelberg, VIC, 3084, Australia
- School of Biosciences, University of Melbourne, Parkville Campus, VIC, 3010, Australia
| | - Kurt E Anderson
- Biology Department, University of California Riverside, 900 University Avenue, Riverside, CA, 92521, U.S.A
| |
Collapse
|
30
|
Otava M, Shkedy Z, Hothorn LA, Talloen W, Gerhard D, Kasim A. Identification of the minimum effective dose for normally distributed data using a Bayesian variable selection approach. J Biopharm Stat 2017; 27:1073-1088. [PMID: 28328286 DOI: 10.1080/10543406.2017.1295247] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
The identification of the minimum effective dose is of high importance in the drug development process. In early-stage screening experiments, establishing the minimum effective dose can be translated into a model selection problem based on information criteria. The alternative presented here, a Bayesian variable selection approach, allows selection of the minimum effective dose while taking model uncertainty into account. The performance of Bayesian variable selection is compared with the generalized order-restricted information criterion on two dose-response experiments and through a simulation study. Which method performed better depended on the complexity of the underlying model and the effect size relative to noise.
Collapse
Affiliation(s)
- Martin Otava
- Interuniversity Institute for Biostatistics and Statistical Bioinformatics, Hasselt University, Hasselt, Belgium
| | - Ziv Shkedy
- Interuniversity Institute for Biostatistics and Statistical Bioinformatics, Hasselt University, Hasselt, Belgium
| | - Ludwig A Hothorn
- Institute of Biostatistics, Leibniz University Hannover, Hannover, Germany
| | - Willem Talloen
- Janssen, Pharmaceutical companies of Johnson & Johnson, Beerse, Belgium
| | - Daniel Gerhard
- School of Mathematics and Statistics, University of Canterbury, Christchurch, New Zealand
| | - Adetayo Kasim
- Wolfson Research Institute for Health and Wellbeing, Durham University, Queen's Campus, University Boulevard, Stockton-on-Tees, United Kingdom
| |
Collapse
|
31
|
Kim BW, Park BS. Robust Control for the Segway with Unknown Control Coefficient and Model Uncertainties. Sensors (Basel) 2016; 16:s16071000. [PMID: 27367696 PMCID: PMC4970050 DOI: 10.3390/s16071000] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/28/2016] [Revised: 06/10/2016] [Accepted: 06/23/2016] [Indexed: 06/06/2023]
Abstract
The Segway, a popular personal vehicle, is an uncertain nonlinear system with an unknown time-varying control coefficient, and the controller design must account for both. Motivated by this observation, we propose a robust control scheme for the Segway under an unknown control coefficient and model uncertainties. To deal with the time-varying unknown control coefficient, we employ the Nussbaum gain technique, and we introduce an auxiliary variable to solve the underactuation problem. Owing to the prescribed performance control technique, the proposed controller requires no adaptive technique, neural network, or fuzzy logic to compensate for the uncertainties, and can therefore remain simple. Using Lyapunov stability theory, we prove that all signals in the closed-loop system are bounded. Finally, we provide simulation results to demonstrate the effectiveness of the proposed control scheme.
Collapse
Affiliation(s)
- Byung Woo Kim
- Department of Electronic Engineering, Chosun University, 375 Seosuk-Dong, Dong-Gu, Gwangju 61452, Korea.
| | - Bong Seok Park
- Division of Electrical, Electronic, and Control Engineering, Kongju National University, 1223-24 Cheonan-Daero, Seobuk-Gu, Cheonan 31080, Korea.
| |
Collapse
|
32
|
Abstract
The problem of constructing Bayesian optimal discriminating designs for a class of regression models with respect to the T-optimality criterion introduced by Atkinson and Fedorov (1975a) is considered. It is demonstrated that discretization of the integral with respect to the prior distribution leads to locally T-optimal discriminating design problems with a large number of model comparisons. Current methodology for the numerical construction of discriminating designs can deal with only a few comparisons, whereas discretization of the Bayesian prior easily yields discrimination design problems with more than 100 competing models. A new efficient method is developed to deal with problems of this type. It combines features of the classical exchange-type algorithm with gradient methods. Convergence is proved, and it is demonstrated that the new method can find Bayesian optimal discriminating designs in situations where all currently available procedures fail.
Collapse
Affiliation(s)
- Holger Dette
- Ruhr-Universität Bochum, Fakultät für Mathematik, 44780 Bochum, Germany,
| | - Viatcheslav B Melas
- St. Petersburg State University, Department of Mathematics, St. Petersburg, Russia,
| | - Roman Guchenko
- St. Petersburg State University, Department of Mathematics, St. Petersburg, Russia,
| |
Collapse
|
33
|
Juricke S, Jung T. Influence of stochastic sea ice parametrization on climate and the role of atmosphere-sea ice-ocean interaction. Philos Trans A Math Phys Eng Sci 2014; 372:20130283. [PMID: 24842027 PMCID: PMC4024236 DOI: 10.1098/rsta.2013.0283] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
The influence of a stochastic sea ice strength parametrization on the mean climate is investigated in a coupled atmosphere-sea ice-ocean model. The results are compared with an uncoupled simulation with a prescribed atmosphere. It is found that the stochastic sea ice parametrization causes an effective weakening of the sea ice. In the uncoupled model this leads to an Arctic sea ice volume increase of about 10-20% after an accumulation period of approximately 20-30 years. In the coupled model, no such increase is found. Rather, the stochastic perturbations lead to a spatial redistribution of the Arctic sea ice thickness field. A mechanism involving a slightly negative atmospheric feedback is proposed that can explain the different responses in the coupled and uncoupled system. Changes in integrated Antarctic sea ice quantities caused by the stochastic parametrization are generally small, as memory is lost during the melting season because of an almost complete loss of sea ice. However, stochastic sea ice perturbations affect regional sea ice characteristics in the Southern Hemisphere, both in the uncoupled and coupled model. Remote impacts of the stochastic sea ice parametrization on the mean climate of non-polar regions were found to be small.
Collapse
Affiliation(s)
- Stephan Juricke
- Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research, Bremerhaven, Germany
| | - Thomas Jung
- Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research, Bremerhaven, Germany
| |
Collapse
|
34
|
Abstract
BACKGROUND To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. METHODS We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. RESULTS For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. CONCLUSIONS Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets.
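The Pareto-frontier notion used above is easy to make concrete: an input set is kept only if no other set fits every calibration target at least as well. A minimal sketch, with made-up goodness-of-fit distances (lower is better):

```python
def pareto_frontier(points):
    """Indices of non-dominated points, where lower is better on every
    coordinate (here: distance to each calibration target)."""
    kept = []
    for i, p in enumerate(points):
        dominated = any(
            q != p and all(qk <= pk for qk, pk in zip(q, p))
            for j, q in enumerate(points)
            if j != i
        )
        if not dominated:
            kept.append(i)
    return kept

# Hypothetical fits of four candidate input sets to two calibration targets.
fits = [(0.2, 0.9), (0.5, 0.5), (0.9, 0.2), (0.8, 0.8)]
print(pareto_frontier(fits))  # the last set is dominated by (0.5, 0.5)
```

No weights are chosen anywhere, which is exactly the point the authors make against weighted-sum GOF scores.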
Collapse
Affiliation(s)
- Eva A Enns
- University of Minnesota School of Public Health, Division of Health Policy and Management, Minneapolis, MN (EAE)
| | - Lauren E Cipriano
- Ivey Business School, University of Western Ontario, London, ON, Canada (LEC)
| | | | - Chung Yin Kong
- Institute for Technology Assessment, Massachusetts General Hospital, Boston, MA (CYK)
- Harvard Medical School, Boston, MA (CYK)
| |
Collapse
|
35
|
Kim SB, Kodell RL, Moon H. A diversity index for model space selection in the estimation of benchmark and infectious doses via model averaging. Risk Anal 2014; 34:453-464. [PMID: 23980524 DOI: 10.1111/risa.12104] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
In chemical and microbial risk assessments, risk assessors fit dose-response models to high-dose data and extrapolate downward to risk levels in the range of 1-10%. Although multiple dose-response models may be able to fit the data adequately in the experimental range, the estimated effective dose (ED) corresponding to an extremely small risk can be substantially different from model to model. In this respect, model averaging (MA) provides more robustness than a single dose-response model in the point and interval estimation of an ED. In MA, accounting for both data uncertainty and model uncertainty is crucial, but addressing model uncertainty is not achieved simply by increasing the number of models in a model space. A plausible set of models for MA can be characterized by goodness of fit and diversity surrounding the truth. We propose a diversity index (DI) to balance between these two characteristics in model space selection. It addresses a collective property of a model space rather than individual performance of each model. Tuning parameters in the DI control the size of the model space for MA.
Collapse
Affiliation(s)
- Steven B Kim
- Department of Statistics, University of California, Irvine, CA, 92697, USA
| | | | | |
Collapse
|
36
|
Abstract
This article discusses how analyst's or expert's beliefs on the credibility and quality of models can be assessed and incorporated into the uncertainty assessment of an unknown of interest. The proposed methodology is a specialization of the Bayesian framework for the assessment of model uncertainty presented in an earlier paper. This formalism treats models as sources of information in assessing the uncertainty of an unknown, and it allows the use of predictions from multiple models as well as experimental validation data about the models' performances. In this article, the methodology is extended to incorporate additional types of information about the model, namely, subjective information in terms of credibility of the model and its applicability when it is used outside its intended domain of application. An example in the context of fire risk modeling is also provided.
Collapse
|
37
|
Abstract
The benchmark dose (BMD) approach has gained acceptance as a valuable risk assessment tool, but risk assessors still face significant challenges in selecting an appropriate BMD/BMDL estimate from the results of a set of acceptable dose-response models. Current approaches do not explicitly address model uncertainty, and there is an existing need to more fully inform health risk assessors in this regard. In this study, a Bayesian model averaging (BMA) BMD estimation method that takes model uncertainty into account is proposed as an alternative to current BMD estimation approaches for continuous data. Using the "hybrid" method proposed by Crump, two BMA strategies, one based on maximum likelihood estimation and one based on Markov chain Monte Carlo, are first applied as a demonstration to calculate model-averaged BMD estimates from real continuous dose-response data. The outcomes from the example data sets suggest that the BMA BMD estimates are more reliable than the estimates from the individual models with the highest posterior weight, in terms of higher BMDLs and narrower 90th percentile intervals. In addition, a simulation study is performed to evaluate the accuracy of the BMA BMD estimator. The results indicate that the BMA BMD estimates have smaller bias than BMDs selected using other criteria. To further validate the BMA method, some technical issues, including the selection of models and the use of bootstrap methods for BMDL derivation, need further investigation over a more extensive, representative set of dose-response data.
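As a rough illustration of the model-averaging idea (not the paper's specific Bayesian weighting scheme), BMD estimates from candidate models can be combined using information-criterion weights; every number and function name below is hypothetical:

```python
import math

def model_average_bmd(fits):
    """fits: list of (aic, bmd) pairs, one per candidate dose-response
    model. Weights each model by exp(-delta_AIC / 2), a common
    information-criterion surrogate for posterior model weights,
    then averages the BMD estimates."""
    best = min(aic for aic, _ in fits)
    weights = [math.exp(-(aic - best) / 2.0) for aic, _ in fits]
    total = sum(weights)
    return sum(w * bmd for w, (_, bmd) in zip(weights, fits)) / total

# Hypothetical (AIC, BMD) results from three fitted models.
bmd_ma = model_average_bmd([(100.0, 2.1), (101.0, 3.0), (110.0, 2.5)])
print(round(bmd_ma, 3))  # a compromise pulled toward the best-fitting model
```

The averaged estimate always lies within the range of the individual BMDs, and poorly fitting models contribute almost nothing, which is the robustness property the abstract highlights.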
Collapse
Affiliation(s)
- Kan Shao
- ORISE Postdoctoral Fellow, National Center for Environmental Assessment, U.S. Environmental Protection Agency
| | - Jeffrey S Gift
- National Center for Environmental Assessment, U.S. Environmental Protection Agency, Triangle Park, NC, USA
| |
Collapse
|
38
|
Meyer ALS, Pie MR, Passos FC. Assessing the exposure of lion tamarins (Leontopithecus spp.) to future climate change. Am J Primatol 2013; 76:551-62. [PMID: 24346860 DOI: 10.1002/ajp.22247] [Citation(s) in RCA: 31] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2013] [Revised: 11/14/2013] [Accepted: 11/15/2013] [Indexed: 11/06/2022]
Abstract
Understanding how biodiversity will respond to climate change is a major challenge in conservation science. Climatic changes are likely to impose serious threats to many organisms, especially those with narrow distribution ranges, small populations and low dispersal capacity. Lion tamarins (Leontopithecus spp.) are endangered primates endemic to Brazilian Atlantic Forest (BAF), and all four living species are typical examples of these aggravating conditions. Here, we integrate ecological niche modeling and GIS-based information about BAF remnants and protected areas to estimate the exposure (i.e., the extent of climate change predicted to be experienced by a species) of current suitable habitats to climate change for 2050 and 2080, and to evaluate the efficacy of existing reserves to protect climatically suitable areas. Niche models were built using Maxent and then projected onto seven global circulation models derived from the A1B climatic scenario. According to our projections, the occurrence area of L. caissara will be little exposed to climate change. Western populations of L. chrysomelas could be potentially exposed, while climatically suitable habitats will be maintained only in part of the eastern region. Protected areas that presently harbor large populations of L. chrysopygus and L. rosalia will not retain climatic suitability by 2080. Monitoring trends of exposed populations and protecting areas predicted to hold suitable conditions should be prioritized. Given the potential exposure of key lion tamarin populations, we stress the importance of conducting additional studies to assess other aspects of their vulnerability (i.e., sensitivity to climate and adaptive capacity) and, therefore, to provide a more solid framework for future management decisions in the context of climate change.
Collapse
Affiliation(s)
- Andreas L S Meyer
- Programa de Pós-Graduação em Zoologia, Departamento de Zoologia, Universidade Federal do Paraná, Curitiba, Paraná, Brazil; Laboratório de Dinâmica Evolutiva e Sistemas Complexos, Departamento de Zoologia, Universidade Federal do Paraná, Curitiba, Paraná, Brazil
| | | | | |
Collapse
|
39
|
Wenger SJ, Som NA, Dauwalter DC, Isaak DJ, Neville HM, Luce CH, Dunham JB, Young MK, Fausch KD, Rieman BE. Probabilistic accounting of uncertainty in forecasts of species distributions under climate change. Glob Chang Biol 2013; 19:3343-3354. [PMID: 23765608 DOI: 10.1111/gcb.12294] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/18/2013] [Accepted: 05/08/2013] [Indexed: 06/02/2023]
Abstract
Forecasts of species distributions under future climates are inherently uncertain, but there have been few attempts to describe this uncertainty comprehensively in a probabilistic manner. We developed a Monte Carlo approach that accounts for uncertainty within generalized linear regression models (parameter uncertainty and residual error), uncertainty among competing models (model uncertainty), and uncertainty in future climate conditions (climate uncertainty) to produce site-specific frequency distributions of occurrence probabilities across a species' range. We illustrated the method by forecasting suitable habitat for bull trout (Salvelinus confluentus) in the Interior Columbia River Basin, USA, under recent and projected 2040s and 2080s climate conditions. The 95% interval of total suitable habitat under recent conditions was estimated at 30.1-42.5 thousand km; this was predicted to decline to 0.5-7.9 thousand km by the 2080s. Projections for the 2080s showed that the great majority of stream segments would be unsuitable with high certainty, regardless of the climate data set or bull trout model employed. The largest contributor to uncertainty in total suitable habitat was climate uncertainty, followed by parameter uncertainty and model uncertainty. Our approach makes it possible to calculate a full distribution of possible outcomes for a species, and permits ready graphical display of uncertainty for individual locations and of total habitat.
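The layered Monte Carlo idea described above can be sketched in a few lines; the distributions, the two candidate "models", and the habitat equation below are all invented for illustration:

```python
import random

random.seed(1)

def forecast_interval(n=20000):
    """Toy Monte Carlo propagating three uncertainty sources into a
    forecast of suitable habitat (thousand km of stream); every
    distribution and coefficient here is hypothetical."""
    outcomes = []
    for _ in range(n):
        climate = random.gauss(2.0, 0.6)    # climate uncertainty (warming, deg C)
        model = random.choice([1.0, 1.3])   # model uncertainty (two candidate models)
        slope = random.gauss(-8.0, 1.5)     # parameter uncertainty (loss rate)
        outcomes.append(max(0.0, 40.0 + model * slope * climate))
    outcomes.sort()
    return outcomes[int(0.025 * n)], outcomes[int(0.975 * n)]

low, high = forecast_interval()  # 95% interval over all three sources jointly
```

Each draw samples every uncertainty source at once, so the resulting interval reflects their combined effect rather than any single source in isolation.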
|
40
|
Pisano R, Fissore D, Barresi AA, Rastelli M. Quality by design: scale-up of freeze-drying cycles in pharmaceutical industry. AAPS PharmSciTech 2013; 14:1137-1149. [PMID: 23884856] [PMCID: PMC3755168] [DOI: 10.1208/s12249-013-0003-9]
Abstract
This paper shows the application of mathematical modeling to scale up a cycle developed with lab-scale equipment on two different production units. The method is based on a simplified model of the process parameterized with experimentally determined heat and mass transfer coefficients. In this study, the overall heat transfer coefficient between product and shelf was determined by using the gravimetric procedure, while the dried product resistance to vapor flow was determined through the pressure rise test technique. Once model parameters were determined, the freeze-drying cycle of a parenteral product was developed via dynamic design space for a lab-scale unit. Then, mathematical modeling was used to scale up the above cycle in the production equipment. In this way, appropriate values were determined for processing conditions, which allow the replication, in the industrial unit, of the product dynamics observed in the small-scale freeze-dryer. This study also showed how inter-vial variability, as well as model parameter uncertainty, can be taken into account during scale-up calculations.
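The gravimetric determination of the vial heat transfer coefficient, and its use in choosing a production shelf temperature that replicates the lab heat flux, can be illustrated with a rough sketch. All quantities below (sublimed mass, vial area, temperatures, and the production coefficient) are assumed for illustration and are not taken from the paper:

```python
# Hypothetical sketch: estimate Kv from a gravimetric test, then pick the
# production shelf temperature that delivers the same heat flux to the
# product at the same product temperature.
DH_SUB = 2.8e6   # latent heat of ice sublimation, J/kg
AREA = 3.14e-4   # vial cross-section, m^2 (assumed)

def kv_gravimetric(mass_sublimed_kg, duration_s, t_shelf, t_product):
    """Kv [W m^-2 K^-1] from mass of ice sublimed in a known time."""
    q = mass_sublimed_kg * DH_SUB / duration_s      # heat flow per vial, W
    return q / (AREA * (t_shelf - t_product))

# Lab unit: 0.9 g of ice sublimed in 6 h, shelf at -5 C, product at -30 C.
kv_lab = kv_gravimetric(0.9e-3, 6 * 3600.0, -5.0, -30.0)

# Production dryer: assume a lower (separately measured) coefficient, then
# solve Kv_prod * (T_shelf - T_product) = lab heat flux for T_shelf.
kv_prod = 0.8 * kv_lab
flux_lab = kv_lab * (-5.0 - (-30.0))        # W m^-2
t_shelf_prod = -30.0 + flux_lab / kv_prod
print(f"Kv(lab)={kv_lab:.1f} W/m2K, production shelf set-point {t_shelf_prod:.1f} C")
```

The sketch shows the direction of the correction: a dryer with poorer heat transfer needs a warmer shelf to reproduce the same product dynamics.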
Affiliation(s)
- Roberto Pisano
- Dipartimento di Scienza Applicata e Tecnologia, Politecnico di Torino, corso Duca degli Abruzzi 24, 10129, Turin, Italy.
|
41
|
Gauriot R, Gunaratnam L, Moroni R, Reinikainen T, Corander J. Statistical challenges in the quantification of gunshot residue evidence. J Forensic Sci 2013; 58:1149-1155. [PMID: 23822522] [DOI: 10.1111/1556-4029.12179]
Abstract
The discharging of a gun results in the formation of extremely small particles known as gunshot residues (GSR). These may be deposited on the skin and clothing of the shooter, on other persons present, and on nearby items or surfaces. Several factors and their complex interactions affect the number of detectable GSR particles, which can deeply influence the conclusions drawn from likelihood ratios or posterior probabilities for prosecution hypotheses of interest. We present Bayesian network models for casework examples and demonstrate that probabilistic quantification of GSR evidence can be very sensitive to the assumptions concerning the model structure, prior probabilities, and the likelihood components. This finding has considerable implications for the use of statistical quantification of GSR evidence in the legal process.
Affiliation(s)
- Romain Gauriot
- Department of Mathematics and Statistics, University of Helsinki, FI-00014, Helsinki, Finland
- Lawrence Gunaratnam
- National Bureau of Investigation Forensic Laboratory, Jokiniemenkuja 4, FI-01370, Vantaa, Finland
- Rossana Moroni
- National Bureau of Investigation Forensic Laboratory, Jokiniemenkuja 4, FI-01370, Vantaa, Finland
- Department of Mathematics, Åbo Akademi University, Fänriksgatan 3B, FI-20500, Turku, Finland
- Tapani Reinikainen
- National Bureau of Investigation Forensic Laboratory, Jokiniemenkuja 4, FI-01370, Vantaa, Finland
- Jukka Corander
- Department of Mathematics and Statistics, University of Helsinki, FI-00014, Helsinki, Finland
|
42
|
Abstract
Decision-makers have been shown to rely on probabilistic models for perception and action. However, these models can be incorrect or partially wrong in which case the decision-maker has to cope with model uncertainty. Model uncertainty has recently also been shown to be an important determinant of sensorimotor behaviour in humans that can lead to risk-sensitive deviations from Bayes optimal behaviour towards worst-case or best-case outcomes. Here, we investigate the effect of model uncertainty on cooperation in sensorimotor interactions similar to the stag-hunt game, where players develop models about the other player and decide between a pay-off-dominant cooperative solution and a risk-dominant, non-cooperative solution. In simulations, we show that players who allow for optimistic deviations from their opponent model are much more likely to converge to cooperative outcomes. We also implemented this agent model in a virtual reality environment, and let human subjects play against a virtual player. In this game, subjects' pay-offs were experienced as forces opposing their movements. During the experiment, we manipulated the risk sensitivity of the computer player and observed human responses. We found not only that humans adaptively changed their level of cooperation depending on the risk sensitivity of the computer player but also that their initial play exhibited characteristic risk-sensitive biases. Our results suggest that model uncertainty is an important determinant of cooperation in two-player sensorimotor interactions.
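Risk-sensitive valuation of this kind is commonly written as a free-energy-like functional, (1/β)·log E[exp(β·payoff)], where β > 0 leans toward best-case outcomes and β < 0 toward worst-case ones. A hypothetical stag-hunt sketch (payoffs and the opponent belief are invented, and this is not the authors' model) shows how an optimistic β flips the choice toward the cooperative, pay-off-dominant action:

```python
import math

# Stag hunt: "stag" pays off only if the other player also cooperates;
# "hare" pays a safe amount regardless. All values are illustrative.
PAYOFF = {"stag": {"coop": 4.0, "defect": 0.0},
          "hare": {"coop": 3.0, "defect": 3.0}}
P_COOP = 0.5   # belief that the other player cooperates (opponent model)

def value(action, beta):
    """Risk-sensitive value (1/beta) * log E[exp(beta * payoff)];
    beta -> 0 recovers the ordinary expected payoff."""
    probs = {"coop": P_COOP, "defect": 1.0 - P_COOP}
    if abs(beta) < 1e-9:
        return sum(p * PAYOFF[action][s] for s, p in probs.items())
    return math.log(sum(p * math.exp(beta * PAYOFF[action][s])
                        for s, p in probs.items())) / beta

for beta in (0.0, 1.0):
    best = max(PAYOFF, key=lambda a: value(a, beta))
    print(f"beta={beta}: choose {best}")
```

With these numbers a risk-neutral player (β = 0) takes the risk-dominant hare, while an optimistic player (β = 1) converges on the cooperative stag, mirroring the simulation result in the abstract.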
Affiliation(s)
- J Grau-Moya
- Max Planck Institute for Biological Cybernetics, Spemannstrasse 38, Tübingen 72076, Germany.
|
43
|
Piegorsch WW, An L, Wickens AA, West RW, Peña EA, Wu W. Information-theoretic model-averaged benchmark dose analysis in environmental risk assessment. Environmetrics 2013; 24:143-157. [PMID: 24039461] [PMCID: PMC3768164] [DOI: 10.1002/env.2201]
Abstract
An important objective in environmental risk assessment is estimation of minimum exposure levels, called Benchmark Doses (BMDs), that induce a pre-specified Benchmark Response (BMR) in a dose-response experiment. In such settings, representations of the risk are traditionally based on a specified parametric model. It is a well-known concern, however, that existing parametric estimation techniques are sensitive to the form employed for modeling the dose response. If the chosen parametric model is in fact misspecified, this can lead to inaccurate low-dose inferences. Indeed, avoiding the impact of model selection was one early motivating issue behind development of the BMD technology. Here, we apply a frequentist model averaging approach for estimating benchmark doses, based on information-theoretic weights. We explore how the strategy can be used to build one-sided lower confidence limits on the BMD, and we study the confidence limits' small-sample properties via a simulation study. An example from environmental carcinogenicity testing illustrates the calculations. It is seen that application of this information-theoretic, model averaging methodology to benchmark analysis can improve environmental health planning and risk regulation when dealing with low-level exposures to hazardous agents.
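Information-theoretic weights of this kind are typically Akaike weights, w_i ∝ exp(-Δ_i/2), where Δ_i is the AIC difference from the best-fitting model. A minimal sketch with invented fits (model names, AIC values, and BMD estimates are all hypothetical):

```python
import math

# Hypothetical fits of three dose-response models to the same data:
# (model name, AIC, BMD estimate). Values are illustrative only.
fits = [("logistic", 212.4, 1.8), ("probit", 213.1, 2.1), ("weibull", 215.9, 3.0)]

aic_min = min(a for _, a, _ in fits)
raw = [math.exp(-(a - aic_min) / 2.0) for _, a, _ in fits]
weights = [r / sum(raw) for r in raw]

# Model-averaged BMD: each model's estimate weighted by its Akaike weight,
# so no single parametric form dominates the low-dose inference.
bmd_avg = sum(w * bmd for w, (_, _, bmd) in zip(weights, fits))
print(f"Akaike weights: {[round(w, 3) for w in weights]}")
print(f"Model-averaged BMD: {bmd_avg:.2f}")
```

The averaged estimate lands between the per-model BMDs, pulled toward the better-supported models; confidence limits on the averaged BMD require the additional machinery the paper develops.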
Affiliation(s)
- Walter W Piegorsch
- Interdisciplinary Program in Statistics, University of Arizona, Tucson, AZ, USA
|
44
|
Lo K, Raftery AE, Dombek KM, Zhu J, Schadt EE, Bumgarner RE, Yeung KY. Integrating external biological knowledge in the construction of regulatory networks from time-series expression data. BMC Syst Biol 2012; 6:101. [PMID: 22898396] [PMCID: PMC3465231] [DOI: 10.1186/1752-0509-6-101]
Abstract
BACKGROUND Inference about regulatory networks from high-throughput genomics data is of great interest in systems biology. We present a Bayesian approach to infer gene regulatory networks from time series expression data by integrating various types of biological knowledge. RESULTS We formulate network construction as a series of variable selection problems and use linear regression to model the data. We extend the Bayesian model averaging (BMA) variable selection method to select regulators in the regression framework, and summarize the external biological knowledge by an informative prior probability distribution over the candidate regression models. CONCLUSIONS We demonstrate our method on simulated data and a set of time-series microarray experiments measuring the effect of a drug perturbation on gene expression levels, and show that it outperforms leading regression-based methods in the literature.
Affiliation(s)
- Kenneth Lo
- Department of Microbiology, University of Washington, Box 358070, Seattle, WA, 98195, USA
- Adrian E Raftery
- Department of Statistics, University of Washington, Box 354320, Seattle, WA, 98195, USA
- Kenneth M Dombek
- Department of Biochemistry, University of Washington, Box 357350, Seattle, WA, 98195, USA
- Jun Zhu
- Department of Genetics and Genomic Sciences, Mount Sinai School of Medicine, New York, NY, 10029, USA
- Eric E Schadt
- Department of Genetics and Genomic Sciences, Mount Sinai School of Medicine, New York, NY, 10029, USA
- Roger E Bumgarner
- Department of Microbiology, University of Washington, Box 358070, Seattle, WA, 98195, USA
- Ka Yee Yeung
- Department of Microbiology, University of Washington, Box 358070, Seattle, WA, 98195, USA
|
45
|
Abstract
Estimating the risks heat waves pose to human health is a critical part of assessing the future impact of climate change. In this article, we propose a flexible class of time series models to estimate the relative risk of mortality associated with heat waves and conduct Bayesian model averaging (BMA) to account for the multiplicity of potential models. Applying these methods to data from 105 U.S. cities for the period 1987-2005, we identify those cities having a high posterior probability of increased mortality risk during heat waves, examine the heterogeneity of the posterior distributions of mortality risk across cities, assess sensitivity of the results to the selection of prior distributions, and compare our BMA results to a model selection approach. Our results show that no single model best predicts risk across the majority of cities, and that for some cities heat-wave risk estimation is sensitive to model choice. Although model averaging leads to posterior distributions with increased variance as compared to statistical inference conditional on a model obtained through model selection, we find that the posterior mean of heat wave mortality risk is robust to accounting for model uncertainty over a broad class of models.
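The mechanics behind the abstract's variance observation are visible in the standard mixture formulas: the BMA posterior mean is the probability-weighted mean, and the BMA variance adds a between-model spread term, so averaging can only widen the posterior relative to conditioning on a single model. A sketch with normal approximations and invented numbers (not the paper's data):

```python
import math

# Illustrative posterior summaries of log relative risk of mortality during
# heat waves in one city, under three candidate time-series models, with
# posterior model probabilities. All values are hypothetical.
models = [
    {"prob": 0.5, "mean": 0.08, "sd": 0.020},
    {"prob": 0.3, "mean": 0.05, "sd": 0.030},
    {"prob": 0.2, "mean": 0.10, "sd": 0.025},
]

# BMA posterior mean: probability-weighted average of per-model means.
bma_mean = sum(m["prob"] * m["mean"] for m in models)

# BMA posterior variance: within-model variance plus between-model spread.
bma_var = sum(m["prob"] * (m["sd"] ** 2 + (m["mean"] - bma_mean) ** 2)
              for m in models)

best = max(models, key=lambda m: m["prob"])
print(f"BMA mean {bma_mean:.3f} (sd {math.sqrt(bma_var):.3f}); "
      f"best single model mean {best['mean']:.3f} (sd {best['sd']:.3f})")
```

Here the averaged mean stays close to the best single model's, while the averaged standard deviation exceeds it, which is exactly the robustness-with-wider-intervals trade-off the abstract reports.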
Affiliation(s)
- Jennifer F Bobb
- Department of Biostatistics, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland 21205, USA.
|
46
|
Abstract
BACKGROUND AND AIMS Constructing functional-structural plant models (FSPMs) is a valuable method for examining how physiology and morphology interact in determining plant processes. However, such models always carry uncertainty about whether model components have been selected and represented effectively, about the number of model outputs simulated, and about the quality of the data used in assessment. We provide a procedure for defining the uncertainty of an FSPM and showing how this uncertainty can be reduced. METHODS An important characteristic of FSPMs is that they typically calculate many variables. These can be variables that the model is designed to predict and also variables that give indications of how the model functions. Together these variables are used as criteria in a method of multi-criteria assessment. Expected ranges are defined, and an evolutionary computation algorithm searches for model parameters that achieve criteria within these ranges. Typically, different combinations of model parameter values provide solutions achieving different combinations of variables within their specified ranges. We show how these solutions define a Pareto frontier that can inform about the functioning of the model. KEY RESULTS The method of multi-criteria assessment is applied to the development of BRANCHPRO, an FSPM for foliage reiteration on old-growth branches of Pseudotsuga menziesii. A geometric model utilizing probabilities for bud growth is developed into a causal explanation for the pattern of reiteration found on these branches and how this pattern may contribute to the longevity of this species. CONCLUSIONS FSPMs should be assessed by their ability to simulate multiple criteria simultaneously. When different combinations of parameter values achieve different groups of assessment criteria effectively, a Pareto frontier can be calculated and used to identify the sources of model uncertainty.
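The nondominated filtering that defines such a Pareto frontier can be sketched in a few lines. The candidate parameter sets and their criterion scores below are invented for illustration:

```python
# Each candidate parameter set is scored on three assessment criteria
# (higher is better). A solution is on the Pareto frontier if no other
# solution is at least as good on every criterion and strictly better
# on at least one.
solutions = {
    "p1": (0.9, 0.4, 0.6),
    "p2": (0.7, 0.8, 0.5),
    "p3": (0.8, 0.3, 0.5),   # dominated by p1 on all three criteria
    "p4": (0.6, 0.7, 0.9),
}

def dominates(a, b):
    """True if score vector a dominates b."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

frontier = {name for name, s in solutions.items()
            if not any(dominates(t, s)
                       for other, t in solutions.items() if other != name)}
print(sorted(frontier))
```

The surviving solutions trade one criterion against another; inspecting which criteria separate them is what localizes the sources of model uncertainty.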
Affiliation(s)
- E David Ford
- School of Forest Resources, University of Washington, Seattle, WA 98195-2100, USA.
|
47
|
Abstract
Health economic decision models are subject to various forms of uncertainty, including uncertainty about the parameters of the model and about the model structure. These uncertainties can be handled within a Bayesian framework, which also allows evidence from previous studies to be combined with the data. As an example, we consider a Markov model for assessing the cost-effectiveness of implantable cardioverter defibrillators. Using Markov chain Monte Carlo posterior simulation, uncertainty about the parameters of the model is formally incorporated in the estimates of expected cost and effectiveness. We extend these methods to include uncertainty about the choice between plausible model structures. This is accounted for by averaging the posterior distributions from the competing models using weights that are derived from the pseudo-marginal-likelihood and the deviance information criterion, which are measures of expected predictive utility. We also show how these cost-effectiveness calculations can be performed efficiently in the widely used software WinBUGS.
|
48
|
Abstract
In Prequential analysis, an inference method is viewed as a forecasting system, and the quality of the inference method is based on the quality of its predictions. This is an alternative approach to more traditional statistical methods that focus on the inference of parameters of the data generating distribution. In this paper, we introduce adaptive combined average predictors (ACAPs) for the Prequential analysis of complex data. That is, we use convex combinations of two different model averages to form a predictor at each time step in a sequence. A novel feature of our strategy is that the models in each average are re-chosen adaptively at each time step. To assess the complexity of a given data set, we introduce measures of data complexity for continuous response data. We validate our measures in several simulated contexts prior to using them in real data examples. The performance of ACAPs is compared with the performances of predictors based on stacking or likelihood weighted averaging in several model classes and in both simulated and real data sets. Our results suggest that ACAPs achieve a better trade-off between model list bias and model list variability in cases where the data is very complex. This implies that the choices of model class and averaging method should be guided by a concept of complexity matching, i.e. the analysis of a complex data set may require a more complex model class and averaging strategy than the analysis of a simpler data set. We propose that complexity matching is akin to a bias-variance trade-off in statistical modeling.
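A convex combination of two model averages with an adaptively updated weight can be sketched as follows. The two "model averages" here are simply short- and long-memory running means over invented data, standing in for the richer, adaptively re-chosen averages the paper studies:

```python
# Combined average predictor: blend two component predictors with a convex
# weight that shifts toward the component with smaller accumulated squared
# prediction error. Data are invented (a noisy upward trend).
data = [1.0, 1.2, 1.1, 1.4, 1.3, 1.6, 1.5, 1.8, 1.7, 2.0]

def running_mean(xs, k):
    window = xs[-k:]
    return sum(window) / len(window)

lam = 0.5                 # weight on predictor A (convexity: 0 <= lam <= 1)
errs = [0.0, 0.0]         # cumulative squared errors of A and B
preds = []
for t in range(2, len(data)):
    pa = running_mean(data[:t], 2)   # component A: short memory
    pb = running_mean(data[:t], 5)   # component B: long memory
    preds.append(lam * pa + (1 - lam) * pb)
    errs[0] += (pa - data[t]) ** 2
    errs[1] += (pb - data[t]) ** 2
    total = errs[0] + errs[1]
    # re-weight toward the component with the smaller accumulated error
    lam = errs[1] / total if total > 0 else 0.5
print(f"final weight on short-memory component: {lam:.2f}")
```

On trending data the short-memory component accumulates less error, so the weight drifts above 0.5; on noisier, flatter data it would drift the other way. This sequential, prediction-driven re-weighting is the prequential idea in miniature.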
Affiliation(s)
- Jennifer Clarke
- Department of Epidemiology and Public Health, University of Miami, Miami, FL 33136, USA
- Bertrand Clarke
- Department of Epidemiology and Public Health, University of Miami, Miami, FL 33136, USA
- Department of Medicine, University of Miami, Miami, FL 33136, USA
- Center for Computational Science, University of Miami, Miami, FL 33136, USA
|
49
|
Jackson CH, Thompson SG, Sharples LD. Accounting for uncertainty in health economic decision models by using model averaging. J R Stat Soc Ser A Stat Soc 2009; 172:383-404. [PMID: 19381329] [PMCID: PMC2667305] [DOI: 10.1111/j.1467-985x.2008.00573.x]
Abstract
Health economic decision models are subject to considerable uncertainty, much of which arises from choices between several plausible model structures, e.g. choices of covariates in a regression model. Such structural uncertainty is rarely accounted for formally in decision models but can be addressed by model averaging. We discuss the most common methods of averaging models and the principles underlying them. We apply them to a comparison of two surgical techniques for repairing abdominal aortic aneurysms. In model averaging, competing models are usually either weighted by using an asymptotically consistent model assessment criterion, such as the Bayesian information criterion, or a measure of predictive ability, such as Akaike's information criterion. We argue that the predictive approach is more suitable when modelling the complex underlying processes of interest in health economics, such as individual disease progression and response to treatment.
|
50
|
Stow CA, Jolliff J, McGillicuddy DJ, Doney SC, Allen JI, Friedrichs MAM, Rose KA, Wallhead P. Skill Assessment for Coupled Biological/Physical Models of Marine Systems. J Mar Syst 2009; 76:4-15. [PMID: 28366997] [PMCID: PMC5375119] [DOI: 10.1016/j.jmarsys.2008.03.011]
Abstract
Coupled biological/physical models of marine systems serve many purposes including the synthesis of information, hypothesis generation, and as a tool for numerical experimentation. However, marine system models are increasingly used for prediction to support high-stakes decision-making. In such applications it is imperative that a rigorous model skill assessment is conducted so that the model's capabilities are tested and understood. Herein, we review several metrics and approaches useful to evaluate model skill. The definition of skill and the determination of the skill level necessary for a given application is context specific and no single metric is likely to reveal all aspects of model skill. Thus, we recommend the use of several metrics, in concert, to provide a more thorough appraisal. The routine application and presentation of rigorous skill assessment metrics will also serve the broader interests of the modeling community, ultimately resulting in improved forecasting abilities as well as helping us recognize our limitations.
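A few of the standard skill metrics used in such appraisals (bias, root-mean-square error, and modeling efficiency) can be computed directly. The observed and modeled values below are invented for illustration:

```python
import math

# Hypothetical observed vs. modeled values for one state variable.
# Several complementary metrics are computed, since no single metric
# reveals all aspects of model skill.
obs = [2.0, 3.5, 4.0, 5.5, 6.0]
mod = [2.3, 3.0, 4.4, 5.2, 6.5]

n = len(obs)
bias = sum(m - o for m, o in zip(mod, obs)) / n
rmse = math.sqrt(sum((m - o) ** 2 for m, o in zip(mod, obs)) / n)

obar = sum(obs) / n
# Modeling efficiency (Nash-Sutcliffe): 1 is perfect, 0 means no better
# than predicting the observed mean, negative means worse than the mean.
mef = 1 - (sum((m - o) ** 2 for m, o in zip(mod, obs))
           / sum((o - obar) ** 2 for o in obs))
print(f"bias={bias:.3f} rmse={rmse:.3f} efficiency={mef:.3f}")
```

A near-zero bias with a nonzero RMSE, as here, is exactly the kind of pattern that motivates reporting several metrics in concert.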
Affiliation(s)
- Craig A Stow
- NOAA, Great Lakes Environmental Research Laboratory, 2205 Commonwealth Blvd., Ann Arbor, MI USA, 734-741-2055 (fax)
- Jason Jolliff
- Naval Research Laboratory, Stennis Space Center, MS USA, 228-688-4149 (fax)
- Scott C Doney
- Woods Hole Oceanographic Institution, Woods Hole MA USA, 508-457-2193 (fax)
- J Icarus Allen
- Plymouth Marine Laboratory, Prospect Place, West Hoe, Plymouth PL1 3DH UK, +44 1752 633101 (fax)
- Marjorie A M Friedrichs
- Virginia Institute of Marine Science, College of William and Mary, P.O. Box 1346, Gloucester Point, VA USA, 804-684-7889 (fax)
- Kenneth A Rose
- Dept. of Oceanography and Coastal Sciences, Louisiana State University, Baton Rouge, LA USA, 225-578-6513 (fax)
- Philip Wallhead
- National Oceanography Centre, Southampton, UK, +44 2380 596485
|